Logging#
The logging module provides structured logging capabilities for TorchFX with support for debug logging, performance profiling, and custom handlers.
Overview#
TorchFX follows Python logging best practices:
NullHandler by default - No log output unless explicitly enabled
Convenience functions - Easy configuration for common use cases
Hierarchical loggers - Fine-grained control over log output
Performance logging - Built-in tools for profiling pipelines
Quick Start#
Enable debug logging:
import torchfx
torchfx.logging.enable_debug_logging()
Profile a filter chain:
from torchfx.logging import log_performance
with log_performance("filter_chain"):
    result = wave | filter1 | filter2
# Logs: "filter_chain completed in 0.045s"
Configuration Functions#
- torchfx.logging.get_logger(name=None)[source]#
Get a logger for a TorchFX module.
- Parameters:
name (str | None, optional) – The module name to get a logger for. If None, returns the root TorchFX logger. If provided, returns a child logger under “torchfx.<name>”.
- Returns:
A logger instance for the specified module.
- Return type:
logging.Logger
Examples
Get the root TorchFX logger:
>>> logger = get_logger()
>>> logger.name
'torchfx'
Get a logger for a specific module:
>>> logger = get_logger("filter.iir")
>>> logger.name
'torchfx.filter.iir'
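The naming scheme above can be illustrated with the standard library alone. The `demo_get_logger` function below is a hypothetical stand-in for illustration, not TorchFX's actual implementation:

```python
import logging

# Hypothetical re-implementation of the naming scheme get_logger
# describes: None maps to the "torchfx" root, anything else becomes
# a dotted child name under it.
def demo_get_logger(name=None):
    return logging.getLogger("torchfx" if name is None else f"torchfx.{name}")

print(demo_get_logger().name)              # torchfx
print(demo_get_logger("filter.iir").name)  # torchfx.filter.iir
```

Because the names are dotted, every child logger created this way automatically propagates its records up to handlers attached to the `torchfx` root.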
- torchfx.logging.enable_logging(level='INFO', format_string='%(asctime)s - %(name)s - %(levelname)s - %(message)s', date_format='%Y-%m-%d %H:%M:%S', stream=None)[source]#
Enable TorchFX logging at the specified level.
This function configures the TorchFX root logger with a StreamHandler that outputs to the specified stream (stderr by default).
- Parameters:
level ({"DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"}, optional) – The logging level to set. Default is “INFO”.
format_string (str, optional) – The format string for log messages. Default includes timestamp, logger name, level, and message.
date_format (str, optional) – The date format for timestamps. Default is “%Y-%m-%d %H:%M:%S”.
stream (file-like object | None, optional) – The stream to write log messages to. Default is sys.stderr.
- Return type:
None
Examples
Enable INFO level logging:
>>> import torchfx
>>> torchfx.logging.enable_logging()
Enable DEBUG level logging:
>>> torchfx.logging.enable_logging(level="DEBUG")
Custom format:
>>> torchfx.logging.enable_logging(
...     level="DEBUG",
...     format_string="%(levelname)s: %(message)s"
... )
- torchfx.logging.enable_debug_logging(format_string='%(asctime)s - %(name)s - %(levelname)s - %(message)s', date_format='%Y-%m-%d %H:%M:%S')[source]#
Enable DEBUG level logging for TorchFX.
This is a convenience function equivalent to calling enable_logging(level="DEBUG").
- Parameters:
format_string (str, optional) – The format string for log messages. Default includes timestamp, logger name, level, and message.
date_format (str, optional) – The date format for timestamps. Default is “%Y-%m-%d %H:%M:%S”.
- Return type:
None
Examples
>>> import torchfx
>>> torchfx.logging.enable_debug_logging()
- torchfx.logging.disable_logging()[source]#
Disable TorchFX logging.
This function removes all handlers from the TorchFX logger and re-attaches a NullHandler to suppress output.
Examples
>>> import torchfx
>>> torchfx.logging.enable_debug_logging()
>>> # ... do some work ...
>>> torchfx.logging.disable_logging()  # Suppress further output
- Return type:
None
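The remove-and-reattach pattern the docstring describes can be sketched with the standard library alone; the `disable_demo` logger name is illustrative, not part of TorchFX:

```python
import logging

# Start with a logger that has a real handler attached.
log = logging.getLogger("disable_demo")
log.addHandler(logging.StreamHandler())

# The disable pattern: strip every handler, then re-attach a
# NullHandler so subsequent records are swallowed silently.
for handler in list(log.handlers):
    log.removeHandler(handler)
log.addHandler(logging.NullHandler())

print(len(log.handlers), isinstance(log.handlers[0], logging.NullHandler))
# 1 True
```

Re-attaching the NullHandler matters: without any handler at all, warning-level records would still surface through Python's last-resort handler.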
Performance Logging#
Context Manager#
- torchfx.logging.log_performance(operation_name, level=20, logger=None)[source]#
Context manager for logging the execution time of a code block.
- Parameters:
operation_name (str) – A descriptive name for the operation being timed.
level (int, optional) – The logging level for the timing message. Default is INFO.
logger (logging.Logger | None, optional) – The logger to use. If None, uses the torchfx.performance logger.
- Yields:
dict – A dictionary that will contain timing information after the block completes. Keys: “elapsed_seconds”, “operation_name”.
- Return type:
Iterator[dict]
Examples
Basic usage:
>>> from torchfx.logging import log_performance
>>> with log_performance("filter_chain"):
...     result = wave | filter1 | filter2
# Logs: "filter_chain completed in 0.045s"
Capture timing information:
>>> with log_performance("processing") as timing:
...     result = wave | effect
>>> print(f"Took {timing['elapsed_seconds']:.3f}s")
With custom logger:
>>> import logging
>>> my_logger = logging.getLogger("myapp")
>>> with log_performance("operation", logger=my_logger):
...     pass
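A minimal sketch of how such a timing context manager can be built on the standard library; the name `timed` and the `timing_demo` logger are assumptions for illustration, not TorchFX's actual code:

```python
import logging
import time
from contextlib import contextmanager

# Illustrative sketch of a log_performance-style context manager.
@contextmanager
def timed(operation_name, level=logging.INFO, logger=None):
    log = logger or logging.getLogger("timing_demo")
    info = {"operation_name": operation_name}
    start = time.perf_counter()
    try:
        yield info  # caller can read timing info after the block exits
    finally:
        info["elapsed_seconds"] = time.perf_counter() - start
        log.log(level, "%s completed in %.3fs",
                operation_name, info["elapsed_seconds"])

with timed("example") as t:
    total = sum(range(1000))

print(t["operation_name"], t["elapsed_seconds"] >= 0)  # example True
```

Filling the dictionary in the `finally` clause ensures the elapsed time is recorded even when the block raises.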
Decorator#
- class torchfx.logging.LogPerformance(operation_name=None, level=20, logger=None)[source]#
Decorator for logging function execution time.
This decorator wraps a function to automatically log its execution time each time it is called.
- Parameters:
operation_name (str | None, optional) – A descriptive name for the operation. If None, uses the function name.
level (int, optional) – The logging level for timing messages. Default is INFO.
logger (logging.Logger | None, optional) – The logger to use. If None, uses the torchfx.performance logger.
Examples
Basic usage with automatic naming:
>>> from torchfx.logging import LogPerformance
>>> @LogPerformance()
... def process_audio(wave):
...     return wave | filter1 | filter2
# Each call logs: "process_audio completed in X.XXXs"
Custom operation name:
>>> @LogPerformance("audio_processing_pipeline")
... def process(wave):
...     return wave | filter
With custom logger:
>>> import logging
>>> my_logger = logging.getLogger("myapp")
>>> @LogPerformance("processing", logger=my_logger)
... def process(wave):
...     return wave | filter
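The decorator variant can be sketched the same way; `TimedCall` below is a hypothetical stand-in that mirrors the documented parameters (operation_name, level, logger), not TorchFX's implementation:

```python
import functools
import logging
import time

# Illustrative sketch of a LogPerformance-style decorator class.
class TimedCall:
    def __init__(self, operation_name=None, level=logging.INFO, logger=None):
        self.operation_name = operation_name
        self.level = level
        self.logger = logger or logging.getLogger("timing_demo")

    def __call__(self, func):
        # Fall back to the function name when no name was given.
        name = self.operation_name or func.__name__

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                self.logger.log(self.level, "%s completed in %.3fs",
                                name, elapsed)

        return wrapper

@TimedCall("square")
def square(x):
    return x * x

print(square(4))  # 16
```

`functools.wraps` preserves the wrapped function's name and docstring, so decorated functions still introspect normally.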
Constants#
- torchfx.logging.DEFAULT_FORMAT = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"#
The default format string for TorchFX log messages: timestamp, logger name, level, and message.
- torchfx.logging.DEFAULT_DATE_FORMAT = "%Y-%m-%d %H:%M:%S"#
The default date format for log timestamps.
Usage Examples#
Basic Logging Configuration#
import torchfx
# Enable INFO level logging (default)
torchfx.logging.enable_logging()
# Enable DEBUG level logging
torchfx.logging.enable_debug_logging()
# Enable WARNING level only
torchfx.logging.enable_logging(level="WARNING")
# Disable logging
torchfx.logging.disable_logging()
Custom Log Format#
import torchfx
# Simple format with just level and message
torchfx.logging.enable_logging(
    level="DEBUG",
    format_string="%(levelname)s: %(message)s"
)
# Output to a file (keep the stream open while logging is enabled;
# a `with` block would close it before any messages are written)
log_file = open("torchfx.log", "w")
torchfx.logging.enable_logging(level="DEBUG", stream=log_file)
Performance Profiling#
Using the context manager:
from torchfx.logging import log_performance, enable_logging
enable_logging()
# Time a code block
with log_performance("audio_processing"):
    wave = Wave.from_file("input.wav")
    result = wave | filter1 | filter2 | reverb
    result.save("output.wav")
# Capture timing information
with log_performance("filter_chain") as timing:
    result = wave | complex_filter
print(f"Processing took {timing['elapsed_seconds']:.3f}s")
Using the decorator:
from torchfx.logging import LogPerformance
@LogPerformance("process_audio")
def process_audio(wave):
    return wave | filter1 | filter2
# Each call logs execution time
result = process_audio(wave)
Module-Specific Logging#
from torchfx.logging import get_logger
# Get logger for a specific module
wave_logger = get_logger("wave")
filter_logger = get_logger("filter.iir")
# These inherit the root logger's level
wave_logger.debug("Loading audio file")
filter_logger.info("Computing coefficients")
Standard Python Logging#
TorchFX integrates with Python’s standard logging:
import logging
# Configure using standard Python logging
logging.getLogger("torchfx").setLevel(logging.DEBUG)
# Add custom handler
handler = logging.FileHandler("torchfx.log")
handler.setFormatter(logging.Formatter("%(asctime)s - %(message)s"))
logging.getLogger("torchfx").addHandler(handler)
Logger Hierarchy#
TorchFX uses hierarchical loggers for fine-grained control:
torchfx # Root logger
├── torchfx.wave # Wave class operations
├── torchfx.effect # Effect processing
├── torchfx.filter # Filter operations
│ ├── torchfx.filter.iir
│ └── torchfx.filter.fir
├── torchfx.validation # Validation messages
└── torchfx.performance # Performance timing
You can enable logging for specific subsystems:
import logging
# Only log filter operations at DEBUG level
logging.getLogger("torchfx.filter").setLevel(logging.DEBUG)
# Only log performance timing
logging.getLogger("torchfx.performance").setLevel(logging.INFO)
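The subsystem-level control above can be demonstrated with the standard library alone; the `Capture` handler below is illustrative, using the `torchfx.*` names to mirror the hierarchy:

```python
import logging

perf = logging.getLogger("torchfx.performance")
filt = logging.getLogger("torchfx.filter")

# Collect every record that reaches the torchfx root logger.
records = []

class Capture(logging.Handler):
    def emit(self, record):
        records.append((record.name, record.levelname))

logging.getLogger("torchfx").addHandler(Capture())
logging.getLogger("torchfx").setLevel(logging.WARNING)

# Raise only the filter subsystem to DEBUG.
filt.setLevel(logging.DEBUG)

filt.debug("filter detail")   # emitted: torchfx.filter is at DEBUG
perf.debug("timing detail")   # dropped: inherits WARNING from torchfx
print(records)  # [('torchfx.filter', 'DEBUG')]
```

Because `torchfx.performance` sets no level of its own, it inherits WARNING from the `torchfx` root, while the explicitly configured `torchfx.filter` subtree emits at DEBUG.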