Datasets¶
There are two main types of datasets: torchsig.datasets.datasets.NewTorchSigDataset and torchsig.datasets.datasets.StaticTorchSigDataset.
NewTorchSigDataset and its subclasses torchsig.datasets.narrowband.NewNarrowband and torchsig.datasets.wideband.NewWideband generate synthetic data in memory (infinitely).
Samples are not saved after being returned, and previous samples are inaccessible.
To save a dataset to disk, use a torchsig.utils.writer.DatasetCreator, which accepts a NewTorchSigDataset object.
StaticTorchSigDataset subclasses (torchsig.datasets.narrowband.StaticNarrowband and torchsig.datasets.wideband.StaticWideband) load a saved dataset from disk.
Samples can be accessed in any order, and previously generated samples remain accessible.
Note: If a NewTorchSigDataset is written to disk with no transforms or target transforms, it is considered raw; otherwise, it is considered processed. When a raw dataset is loaded back in with a StaticTorchSigDataset object, users can define transforms and target transforms to be applied. When a processed dataset is loaded back in, users cannot define any transforms or target transforms.
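For example, a minimal end-to-end sketch of this workflow (the DatasetCreator arguments, the create() call, and the (data, targets) indexing convention are assumptions and may differ slightly from the actual interface):

```python
from torchsig.datasets.dataset_metadata import NarrowbandMetadata
from torchsig.datasets.narrowband import NewNarrowband, StaticNarrowband
from torchsig.utils.writer import DatasetCreator

# Describe the dataset to generate.
metadata = NarrowbandMetadata(
    num_iq_samples_dataset=4096,  # length of each I/Q sample
    fft_size=256,
    impairment_level=0,
    num_samples=1000,             # finite dataset; omit for infinite generation
)

# New* datasets generate samples on the fly and do not retain them.
new_narrowband = NewNarrowband(dataset_metadata=metadata)

# Write the dataset to disk. The `root`/`overwrite` arguments and the
# `create()` call are assumptions about the DatasetCreator interface.
creator = DatasetCreator(
    dataset=new_narrowband,
    root="./datasets/narrowband_example",
    overwrite=False,
)
creator.create()

# No transforms or target transforms were applied at write time, so the saved
# dataset is "raw" and transforms may be supplied when loading it back in.
static_narrowband = StaticNarrowband(
    root="./datasets/narrowband_example",
    impaired=False,
    transforms=[],         # user-defined transforms could go here
    target_transforms=[],  # user-defined target transforms could go here
)
data, targets = static_narrowband[0]  # static datasets support random access
```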
Base Classes¶
TorchSig Datasets¶
Dataset Base Classes for creation and static loading.
- class torchsig.datasets.datasets.NewTorchSigDataset(dataset_metadata: DatasetMetadata | str | dict, **kwargs)[source]¶
Bases:
Dataset
,Seedable
Creates a new TorchSig dataset that generates data infinitely unless num_samples inside dataset_metadata is defined.
This base class provides the functionality to generate signals and write them to disk if necessary. The dataset will continue to generate samples infinitely unless a num_samples value is defined in the dataset_metadata.
- property dataset_metadata¶
Returns the dataset metadata.
- Returns:
The dataset metadata.
- Return type:
DatasetMetadata
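A short sketch of on-the-fly generation with a NewTorchSigDataset subclass (the (data, targets) return convention and the NumPy array type of the data are assumptions):

```python
from torchsig.datasets.dataset_metadata import NarrowbandMetadata
from torchsig.datasets.narrowband import NewNarrowband

metadata = NarrowbandMetadata(
    num_iq_samples_dataset=4096,
    fft_size=256,
    impairment_level=0,
    # num_samples omitted -> dataset generates samples infinitely
)
dataset = NewNarrowband(dataset_metadata=metadata)

for i in range(4):
    data, targets = dataset[i]  # each access generates a new sample; earlier samples are not retained
    print(i, data.shape)        # assumes data is returned as a NumPy array
```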
- class torchsig.datasets.datasets.StaticTorchSigDataset(root: str, impaired: bool | int, dataset_type: str, transforms: list = [], target_transforms: list = [], file_handler_class: ~torchsig.utils.file_handlers.base_handler.TorchSigFileHandler = <class 'torchsig.utils.file_handlers.zarr.ZarrFileHandler'>, train: bool | None = None)[source]¶
Bases:
Dataset
Static Dataset class, which loads pre-generated data from a directory.
This class assumes that the dataset has already been generated and saved to disk using a subclass of NewTorchSigDataset. It allows loading raw or processed data from disk for inference or analysis.
- Parameters:
root (str) – The root directory where the dataset is stored.
impaired (bool) – Whether the data is impaired (default: False).
dataset_type (str) – Type of the dataset, either “narrowband” or “wideband”.
transforms (list, optional) – Transforms to apply to the data (default: []).
target_transforms (list, optional) – Target transforms to apply (default: []).
file_handler_class (TorchSigFileHandler, optional) – Class used for reading the dataset (default: ZarrFileHandler).
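In practice the StaticNarrowband and StaticWideband subclasses are used, but the base class can also be instantiated directly. A minimal sketch, assuming a previously written dataset at the (hypothetical) path below:

```python
from torchsig.datasets.datasets import StaticTorchSigDataset

dataset = StaticTorchSigDataset(
    root="./datasets/wideband_example",  # hypothetical path to a written dataset
    impaired=False,
    dataset_type="wideband",
    transforms=[],
    target_transforms=[],
)
print(len(dataset))   # assumes the standard torch Dataset __len__ is implemented
sample = dataset[10]  # samples can be accessed in any order
```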
Dataset Metadata¶
Dataset Metadata class for Narrowband and Wideband
- class torchsig.datasets.dataset_metadata.DatasetMetadata(num_iq_samples_dataset: int, fft_size: int, impairment_level: int, num_signals_max: int, sample_rate: float = 10000000.0, num_signals_min: int = 0, num_signals_distribution: ndarray | List[float] | None = None, snr_db_min: float = 0.0, snr_db_max: float = 50.0, signal_duration_percent_min: float = 0.0, signal_duration_percent_max: float = 100.0, transforms: list = [], target_transforms: list = [], class_list: List[str] | None = None, class_distribution: ndarray | List[float] | None = None, num_samples: int | None = None, dataset_type: str = 'None', **kwargs)[source]¶
Bases:
Seedable
Dataset Metadata. Contains useful information about the dataset.
Maintains the metadata for the parameters of the datasets, such as sample rate. The class holds all of the high-level information about the dataset that the signals, impairments, and other processes will require. Parameters that are common to all signals are stored in the dataset metadata. For example, all signal generation requires a common and consistent sampling rate reference.
This class is needed at almost every level of the DSP; therefore, rather than pass around multiple variables, a dict, or globals, this class is defined and passed as a parameter.
This class stores metadata related to the dataset, including parameters related to signal generation, transforms, dataset path, and sample distribution. It also handles the verification of dataset settings and ensures that the configuration is valid for the dataset creation process.
- to_dict() Dict[str, Any] [source]¶
Converts the dataset metadata into a dictionary format.
This method organizes various metadata fields related to the dataset into categories such as general dataset information, signal generation parameters, and dataset writing information.
- Returns:
A dictionary representation of the dataset metadata.
- Return type:
Dict[str, Any]
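For example, a small sketch that inspects a metadata configuration (default=str is used because some metadata values may not be JSON-serializable):

```python
import json

from torchsig.datasets.dataset_metadata import NarrowbandMetadata

metadata = NarrowbandMetadata(
    num_iq_samples_dataset=4096,
    fft_size=256,
    impairment_level=0,
)
meta_dict = metadata.to_dict()
print(json.dumps(meta_dict, indent=2, default=str))
```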
- property center_freq_min: None¶
Defines the minimum center frequency boundary for a signal
- Raises:
NotImplementedError – Inherited classes must implement this method.
- property center_freq_max: None¶
Defines the maximum center frequency boundary for a signal
- Raises:
NotImplementedError – Inherited classes must implement this method.
- property bandwidth_min: None¶
Defines the minimum bandwidth for a signal
- Raises:
NotImplementedError – Inherited classes must implement this method.
- property bandwidth_max: None¶
Defines the maximum bandwidth for a signal
- Raises:
NotImplementedError – Inherited classes must implement this method.
- property num_iq_samples_dataset: int¶
Length of I/Q array per sample in dataset.
Returns the number of IQ samples in the dataset; this is the length of the array that contains the IQ samples.
- Returns:
number of IQ samples
- Return type:
int
- property sample_rate: float¶
Sample rate for the dataset.
Returns the sampling rate associated with the IQ samples of the dataset
- Returns:
sample rate
- Return type:
float
- property num_signals_max: int¶
Max number of signals in each sample in the dataset
Returns the maximum number of distinct signals that can appear in a single sample of the dataset.
- Returns:
max number of signals
- Return type:
int
- property num_signals_min: int¶
Minimum number of signals in each sample in the dataset.
- Returns:
min number of signals
- Return type:
int
- property num_signals_range: List[int]¶
Range of the number of signals that can be generated in a sample.
- Returns:
List of num_signals possibilities.
- Return type:
List[int]
- property num_signals_distribution: List[float]¶
Probabilities for each value in num_signals_range.
- Returns:
Probability that a sample contains N signals, for each N in num_signals_range.
- Return type:
List[float]
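A hedged sketch of how the signal-count range and distribution pair up for a wideband configuration (the exact contents of num_signals_range are an assumption):

```python
from torchsig.datasets.dataset_metadata import WidebandMetadata

metadata = WidebandMetadata(
    num_iq_samples_dataset=262144,
    fft_size=512,
    impairment_level=0,
    num_signals_max=3,
    num_signals_min=1,
    num_signals_distribution=[0.2, 0.3, 0.5],  # probabilities for 1, 2, 3 signals per sample
)
print(metadata.num_signals_range)         # expected to span 1..3
print(metadata.num_signals_distribution)  # [0.2, 0.3, 0.5]
```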
- property transforms: list¶
Transforms to perform on signal data (after signal impairments).
- Returns:
Transform to apply to data.
- Return type:
list
- property impairment_level: int¶
Level of signal impairments to apply to signals (0-2)
- Returns:
Impairment level.
- Return type:
int
- property impairments: Impairments¶
Impairment signal and dataset transforms
- Returns:
Transforms or impairments
- Return type:
Impairments
- property class_list: List[str]¶
Signal modulation class list for dataset.
- Returns:
List of signal modulation class names
- Return type:
List[str]
- property class_distribution: ndarray | List[str]¶
Signal modulation class distribution for dataset generation.
- Returns:
List of class probabilities.
- Return type:
np.ndarray | List[str]
- property num_samples: int¶
Getter for the number of samples in the dataset.
This property returns the number of samples that the dataset is configured to have. If the value is set to None, it indicates that the number of samples is considered infinite.
- Returns:
The number of samples in the dataset, or a representation of infinite samples if set to None.
- Return type:
int
- property num_samples_generated: int¶
Getter for the number of samples that have been generated.
This property returns the count of how many samples have been generated so far from the dataset. This could be used to track progress during data generation or when sampling data.
- Returns:
The number of samples that have been generated.
- Return type:
int
- property noise_power_db: float¶
Reference noise power (dB) for the dataset
The noise power is a common reference used for all signal generation in order to establish accurate SNR calculations. The noise power is given in decibels (dB). The PSD estimate of the AWGN is calculated such that the average across all frequency bins equals noise_power_db.
- Returns:
noise power in dB
- Return type:
float
- property snr_db_min: float¶
Minimum SNR in dB for signals in dataset
Signals within the dataset will be assigned a signal to noise ratio (SNR), across a range defined by a minimum and maximum value. snr_db_min is the low end of the SNR range.
- Returns:
minimum SNR in dB
- Return type:
float
- property snr_db_max: float¶
Maximum SNR in dB for signals in dataset
Signals within the dataset will be assigned a signal to noise ratio (SNR), across a range defined by a minimum and maximum value. snr_db_max is the high end of the SNR range.
- Returns:
maximum SNR in dB
- Return type:
float
- property signal_duration_percent_max: float¶
Getter for the maximum signal duration percentage.
This property returns the maximum percentage of the total signal duration that is allowed or configured for the dataset. The duration percentage is typically used in signal generation or processing to specify the maximum duration of individual signals relative to the total dataset duration.
- Returns:
The maximum allowed percentage of the signal duration.
- Return type:
float
- property signal_duration_percent_min: float¶
Getter for the minimum signal duration percentage.
This property returns the minimum percentage of the total signal duration that is allowed or configured for the dataset. The duration percentage is used in signal generation or processing to specify the minimum duration of individual signals relative to the total dataset duration.
- Returns:
The minimum allowed percentage of the signal duration.
- Return type:
float
- property fft_size: int¶
The size of FFT (number of bins) to be used in spectrogram.
The FFT size used to compute the spectrogram for the wideband dataset.
- Returns:
FFT size
- Return type:
int
- property fft_stride: int¶
The stride of input samples in FFT (number of samples)
The FFT stride controls the distance in samples between successive FFTs. A smaller FFT stride means more overlap between successive FFTs; a larger stride means less overlap. fft_stride = fft_size means there is no overlap of samples between the current and next FFT; fft_stride = fft_size/2 means there is 50% overlap between the input samples of the current and next FFT.
- Returns:
FFT stride
- Return type:
int
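A small worked sketch relating fft_stride to overlap and to the number of FFT columns in the spectrogram (the frame-count formula assumes back-to-back, non-padded FFTs):

```python
fft_size = 512
fft_stride = 256  # fft_stride = fft_size / 2 -> 50% overlap
num_iq_samples_dataset = 65536

overlap = 1.0 - fft_stride / fft_size                             # 0.5 -> 50% overlap
num_ffts = 1 + (num_iq_samples_dataset - fft_size) // fft_stride  # assumed frame count
print(f"overlap = {overlap:.0%}, spectrogram has ~{num_ffts} FFT columns")
```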
- property fft_frequency_resolution: float¶
Frequency resolution of the spectrogram
The frequency resolution, or resolution bandwidth, of the FFT.
- Returns:
frequency resolution
- Return type:
float
- property fft_frequency_min: float¶
The minimum frequency associated with the FFT
Defines the smallest frequency within the FFT of the spectrogram. The FFT has discrete bins and therefore each bin has an associated frequency. This frequency is associated with the 0th bin or left-most frequency bin.
- Returns:
minimum FFT frequency
- Return type:
float
- property fft_frequency_max: float¶
The maximum frequency associated with the FFT
Defines the largest frequency within the FFT of the spectrogram. The FFT has discrete bins and therefore each bin has an associated frequency. This frequency is associated with the N-1’th bin or right-most frequency bin.
- Returns:
maximum FFT frequency
- Return type:
float
- property frequency_min: float¶
Minimum representable frequency
Boundary edge for testing the lower Nyquist sampling boundary.
- Returns:
minimum frequency
- Return type:
float
- property frequency_max: float¶
Maximum representable frequency
Boundary edge for testing the upper Nyquist sampling boundary. Due to the circular nature of the frequency domain, both -fs/2 and fs/2 represent the boundary, therefore an epsilon value is used to back off the upper edge slightly.
- Returns:
maximum frequency
- Return type:
float
- class torchsig.datasets.dataset_metadata.NarrowbandMetadata(num_iq_samples_dataset: int, fft_size: int, impairment_level: int, sample_rate: float = 10000000.0, num_signals_min: int | None = None, num_signals_distribution: ndarray | List[float] | None = None, snr_db_min: float = 0.0, snr_db_max: float = 50.0, signal_duration_percent_min: float = 80.0, signal_duration_percent_max: float = 100.0, transforms: list = [], target_transforms: list = [], class_list: List[str] = ['ook', 'bpsk', '4ask', 'qpsk', '8ask', '8psk', '16qam', '16ask', '16psk', '32qam', '32qam_cross', '32ask', '32psk', '64qam', '64ask', '64psk', '128qam_cross', '256qam', '512qam_cross', '1024qam', '2fsk', '2gfsk', '2msk', '2gmsk', '4fsk', '4gfsk', '4msk', '4gmsk', '8fsk', '8gfsk', '8msk', '8gmsk', '16fsk', '16gfsk', '16msk', '16gmsk', 'ofdm-64', 'ofdm-72', 'ofdm-128', 'ofdm-180', 'ofdm-256', 'ofdm-300', 'ofdm-512', 'ofdm-600', 'ofdm-900', 'ofdm-1024', 'ofdm-1200', 'ofdm-2048', 'fm', 'am-dsb-sc', 'am-dsb', 'am-lsb', 'am-usb', 'lfm_data', 'lfm_radar', 'chirpss', 'tone'], class_distribution=None, num_samples: int | None = None, **kwargs)[source]¶
Bases:
DatasetMetadata
Narrowband Dataset Metadata Class
This class encapsulates the metadata for a narrowband dataset, extending the base DatasetMetadata class. It provides useful information about the dataset such as the number of samples, the sample rate, the FFT size, the impairment level, and signal-related parameters. Additionally, it handles specific properties for narrowband signals, such as oversampling rates and center frequency offset (CFO) error percentage.
- property center_freq_min: float¶
The minimum center frequency for a signal
The minimum is a boundary condition such that the center frequency will not alias across the lower sampling rate boundary.
- Returns:
minimum center frequency boundary for signal
- Return type:
float
- property center_freq_max: float¶
The maximum center frequency for a signal
The maximum is a boundary condition such that the center frequency will not alias across the upper sampling rate boundary.
- Returns:
maximum center frequency boundary for signal
- Return type:
float
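A minimal sketch constructing NarrowbandMetadata and reading the derived center-frequency boundaries:

```python
from torchsig.datasets.dataset_metadata import NarrowbandMetadata

metadata = NarrowbandMetadata(
    num_iq_samples_dataset=4096,
    fft_size=256,
    impairment_level=1,
    sample_rate=10e6,
)
# Boundaries are derived so that signals do not alias across +/- sample_rate/2.
print(metadata.center_freq_min, metadata.center_freq_max)
```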
- class torchsig.datasets.dataset_metadata.WidebandMetadata(num_iq_samples_dataset: int, fft_size: int, impairment_level: int, num_signals_max: int, sample_rate: float = 10000000.0, num_signals_min: int | None = None, num_signals_distribution: ndarray | List[float] | None = None, snr_db_min: float = 0.0, snr_db_max: float = 50.0, signal_duration_percent_min: float = 0.0, signal_duration_percent_max: float = 100.0, transforms: list = [], target_transforms: list = [], class_list: List[str] = ['ook', 'bpsk', '4ask', 'qpsk', '8ask', '8psk', '16qam', '16ask', '16psk', '32qam', '32qam_cross', '32ask', '32psk', '64qam', '64ask', '64psk', '128qam_cross', '256qam', '512qam_cross', '1024qam', '2fsk', '2gfsk', '2msk', '2gmsk', '4fsk', '4gfsk', '4msk', '4gmsk', '8fsk', '8gfsk', '8msk', '8gmsk', '16fsk', '16gfsk', '16msk', '16gmsk', 'ofdm-64', 'ofdm-72', 'ofdm-128', 'ofdm-180', 'ofdm-256', 'ofdm-300', 'ofdm-512', 'ofdm-600', 'ofdm-900', 'ofdm-1024', 'ofdm-1200', 'ofdm-2048', 'fm', 'am-dsb-sc', 'am-dsb', 'am-lsb', 'am-usb', 'lfm_data', 'lfm_radar', 'chirpss', 'tone'], class_distribution=None, num_samples: int | None = None, **kwargs)[source]¶
Bases:
DatasetMetadata
Wideband Dataset Metadata Class
This class encapsulates all useful metadata for a wideband dataset, extending the DatasetMetadata class. It adds functionality to manage the FFT size used to compute the spectrogram, along with additional parameters specific to wideband signals like bandwidth, center frequency, and impairments.
- minimum_params: List[str] = ['num_iq_samples_dataset', 'fft_size', 'num_signals_max', 'impairment_level']¶
- property center_freq_min: float¶
The minimum center frequency for a signal
The minimum is a boundary condition such that the center frequency will not alias across the lower sampling rate boundary.
- Returns:
minimum center frequency boundary for signal
- Return type:
float
- property center_freq_max: float¶
The maximum center frequency for a signal
The maximum is a boundary condition such that the center frequency will not alias across the upper sampling rate boundary.
The calculation includes a small epsilon such that the center_freq_max is never equal to sample_rate/2 to avoid the aliasing condition because -sample_rate/2 is equivalent to sample_rate/2.
- Returns:
maximum center frequency boundary for signal
- Return type:
float
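A minimal sketch constructing WidebandMetadata with only the fields listed in minimum_params (all other parameters take their defaults):

```python
from torchsig.datasets.dataset_metadata import WidebandMetadata

metadata = WidebandMetadata(
    num_iq_samples_dataset=262144,
    fft_size=512,
    num_signals_max=5,
    impairment_level=2,
)
print(metadata.center_freq_min, metadata.center_freq_max)
```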
Narrowband¶
NarrowbandMetadata and NewNarrowband Class
- class torchsig.datasets.narrowband.NewNarrowband(dataset_metadata: DatasetMetadata | str | dict, **kwargs)[source]¶
Bases:
NewTorchSigDataset
Creates a Narrowband dataset.
This class is responsible for creating the Narrowband dataset, which includes the dataset metadata and signal impairments.
- Parameters:
dataset_metadata (DatasetMetadata | str | dict) – Metadata for the Narrowband dataset. This can be a DatasetMetadata object, a string (path to the metadata file), or a dictionary.
**kwargs – Additional keyword arguments passed to the parent class (NewTorchSigDataset).
- class torchsig.datasets.narrowband.StaticNarrowband(root: str, impaired: bool, transforms: list = [], target_transforms: list = [], file_handler_class: ~torchsig.utils.file_handlers.base_handler.TorchSigFileHandler = <class 'torchsig.utils.file_handlers.zarr.ZarrFileHandler'>, train: bool | None = None, **kwargs)[source]¶
Bases:
StaticTorchSigDataset
Loads and provides access to a pre-generated Narrowband dataset.
This class allows for loading a narrowband dataset stored on disk, with the ability to apply transformations to the data and target labels. The dataset can be accessed in raw or impaired form.
- Parameters:
root (str) – The root directory where the dataset is stored.
impaired (bool) – Whether the dataset contains impaired signals. Defaults to False.
transforms (list, optional) – A transformation to apply to the data. Defaults to [].
target_transforms (list, optional) – A transformation to apply to the targets. Defaults to [].
file_handler_class (TorchSigFileHandler, optional) – The file handler class for reading the dataset. Defaults to ZarrFileHandler.
**kwargs – Additional keyword arguments passed to the parent class (StaticTorchSigDataset).
Wideband¶
WidebandMetadata and NewWideband Class
- class torchsig.datasets.wideband.NewWideband(dataset_metadata: DatasetMetadata | str | dict, **kwargs)[source]¶
Bases:
NewTorchSigDataset
Creates a Wideband dataset.
This class is responsible for creating a Wideband dataset, including the metadata and any transformations needed.
- Parameters:
dataset_metadata (DatasetMetadata | str | dict) – Metadata for the Wideband dataset. This can be a DatasetMetadata object, a string (path to the metadata file), or a dictionary.
**kwargs – Additional keyword arguments passed to the parent class (NewTorchSigDataset).
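A minimal sketch generating wideband samples in memory (the (data, targets) return convention is an assumption):

```python
from torchsig.datasets.dataset_metadata import WidebandMetadata
from torchsig.datasets.wideband import NewWideband

metadata = WidebandMetadata(
    num_iq_samples_dataset=262144,
    fft_size=512,
    num_signals_max=5,
    impairment_level=2,
    num_samples=100,  # finite dataset; omit for infinite generation
)
wideband = NewWideband(dataset_metadata=metadata)
data, targets = wideband[0]
```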
- class torchsig.datasets.wideband.StaticWideband(root: str, impaired: bool, transforms: list = [], target_transforms: list = [], file_handler_class: ~torchsig.utils.file_handlers.base_handler.TorchSigFileHandler = <class 'torchsig.utils.file_handlers.zarr.ZarrFileHandler'>, train: bool | None = None, **kwargs)[source]¶
Bases:
StaticTorchSigDataset
Loads and provides access to a pre-generated Wideband dataset.
This class allows loading a pre-generated Wideband dataset from disk, and includes options for applying transformations to both the data and target labels. The dataset can be accessed in raw or impaired form, depending on the flags set.
- Parameters:
root (str) – The root directory where the dataset is stored.
impaired (bool) – Whether the dataset contains impaired signals. Defaults to False.
transforms (list, optional) – A transformation to apply to the data. Defaults to [].
target_transforms (list, optional) – A transformation to apply to the targets. Defaults to [].
file_handler_class (TorchSigFileHandler, optional) – The file handler class for reading the dataset. Defaults to ZarrFileHandler.
**kwargs – Additional keyword arguments passed to the parent class (StaticTorchSigDataset).
Datamodules¶
PyTorch Lightning DataModules for Narrowband and Wideband
Learn More: https://lightning.ai/docs/pytorch/stable/data/datamodule.html
If a dataset does not exist at root, a new dataset is created and written to disk. If the dataset does exist, it is simply loaded back in.
- class torchsig.datasets.datamodules.TorchSigDataModule(root: str, dataset: str, train_metadata: ~torchsig.datasets.dataset_metadata.DatasetMetadata | str | dict, val_metadata: ~torchsig.datasets.dataset_metadata.DatasetMetadata | str | dict, batch_size: int = 1, num_workers: int = 1, collate_fn: ~typing.Callable | None = None, create_batch_size: int = 8, create_num_workers: int = 4, file_handler: ~torchsig.utils.file_handlers.base_handler.TorchSigFileHandler = <class 'torchsig.utils.file_handlers.zarr.ZarrFileHandler'>, transforms: list = [], target_transforms: list = [])[source]¶
Bases:
LightningDataModule
PyTorch Lightning DataModule for managing TorchSig datasets.
- Parameters:
root (str) – The root directory where datasets are stored or created.
dataset (str) – The name of the dataset (either ‘narrowband’ or ‘wideband’).
train_metadata (DatasetMetadata | str | dict) – Metadata for the training dataset.
val_metadata (DatasetMetadata | str | dict) – Metadata for the validation dataset.
batch_size (int, optional) – The batch size for data loading. Defaults to 1.
num_workers (int, optional) – The number of worker processes for data loading. Defaults to 1.
collate_fn (Callable, optional) – A function to collate data into batches.
create_batch_size (int, optional) – The batch size used during dataset creation. Defaults to 8.
create_num_workers (int, optional) – The number of workers used during dataset creation. Defaults to 4.
file_handler (TorchSigFileHandler, optional) – The file handler for managing data storage. Defaults to ZarrFileHandler.
transforms (list, optional) – A list of transformations to apply to the input data. Defaults to an empty list.
target_transforms (list, optional) – A list of transformations to apply to the target labels. Defaults to an empty list.
- train: DataLoader¶
- val: DataLoader¶
- test: DataLoader¶
- prepare_data() None [source]¶
Prepares the dataset by creating new datasets if they do not exist on disk. The datasets are created using the DatasetCreator class.
If the dataset already exists on disk, it is loaded back into memory.
- setup(stage: str = 'train') None [source]¶
Sets up the train and validation datasets for the given stage.
- Parameters:
stage (str, optional) – The stage of the DataModule, typically ‘train’ or ‘test’. Defaults to ‘train’.
- train_dataloader() DataLoader [source]¶
Returns the DataLoader for the training dataset.
- Returns:
A PyTorch DataLoader for the training dataset.
- Return type:
DataLoader
- class torchsig.datasets.datamodules.NarrowbandDataModule(root: str, dataset_metadata: ~torchsig.datasets.dataset_metadata.NarrowbandMetadata | str | dict, num_samples_train: int, num_samples_val: int | None = None, batch_size: int = 1, num_workers: int = 1, collate_fn: ~typing.Callable | None = None, create_batch_size: int = 8, create_num_workers: int = 4, file_handler: ~torchsig.utils.file_handlers.base_handler.TorchSigFileHandler = <class 'torchsig.utils.file_handlers.zarr.ZarrFileHandler'>, transforms: ~torchsig.transforms.base_transforms.Transform | ~typing.List[~typing.Callable | ~torchsig.transforms.base_transforms.Transform] = [], target_transforms: ~torchsig.transforms.target_transforms.TargetTransform | ~typing.List[~typing.Callable | ~torchsig.transforms.target_transforms.TargetTransform] = [])[source]¶
Bases:
TorchSigDataModule
DataModule for creating and managing narrowband datasets.
- Parameters:
root (str) – The root directory where datasets are stored or created.
dataset_metadata (NarrowbandMetadata | str | dict) – Metadata for the narrowband dataset.
num_samples_train (int) – The number of training samples.
num_samples_val (int, optional) – The number of validation samples. Defaults to 10% of training samples if not provided.
batch_size (int, optional) – The batch size for data loading. Defaults to 1.
num_workers (int, optional) – The number of worker processes for data loading. Defaults to 1.
collate_fn (Callable, optional) – A function to collate data into batches.
create_batch_size (int, optional) – The batch size used during dataset creation. Defaults to 8.
create_num_workers (int, optional) – The number of workers used during dataset creation. Defaults to 4.
file_handler (TorchSigFileHandler, optional) – The file handler for managing data storage. Defaults to ZarrFileHandler.
transforms (Transform | List[Callable | Transform], optional) – A list of transformations to apply to the input data.
target_transforms (TargetTransform | List[Callable | TargetTransform], optional) – A list of transformations to apply to the target labels.
- train: DataLoader¶
- val: DataLoader¶
- test: DataLoader¶
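A minimal sketch of the typical DataModule flow (the directory path is hypothetical; prepare_data creates the dataset on disk only if it does not already exist):

```python
from torchsig.datasets.datamodules import NarrowbandDataModule
from torchsig.datasets.dataset_metadata import NarrowbandMetadata

metadata = NarrowbandMetadata(
    num_iq_samples_dataset=4096,
    fft_size=256,
    impairment_level=0,
)
datamodule = NarrowbandDataModule(
    root="./datasets/narrowband_dm",  # hypothetical path
    dataset_metadata=metadata,
    num_samples_train=1000,           # validation defaults to 10% of training samples
    batch_size=16,
    num_workers=4,
)
datamodule.prepare_data()             # create the dataset on disk, or load it if present
datamodule.setup("train")
train_loader = datamodule.train_dataloader()
```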
- class torchsig.datasets.datamodules.WidebandDataModule(root: str, dataset_metadata: ~torchsig.datasets.dataset_metadata.WidebandMetadata | str | dict, num_samples_train: int, num_samples_val: int | None = None, batch_size: int = 1, num_workers: int = 1, collate_fn: ~typing.Callable | None = None, create_batch_size: int = 8, create_num_workers: int = 4, file_handler: ~torchsig.utils.file_handlers.base_handler.TorchSigFileHandler = <class 'torchsig.utils.file_handlers.zarr.ZarrFileHandler'>, transforms: ~torchsig.transforms.base_transforms.Transform | ~typing.List[~typing.Callable | ~torchsig.transforms.base_transforms.Transform] = [], target_transforms: ~torchsig.transforms.target_transforms.TargetTransform | ~typing.List[~typing.Callable | ~torchsig.transforms.target_transforms.TargetTransform] = [])[source]¶
Bases:
TorchSigDataModule
DataModule for creating and managing wideband datasets.
- Parameters:
root (str) – The root directory where datasets are stored or created.
dataset_metadata (WidebandMetadata | str | dict) – Metadata for the wideband dataset.
num_samples_train (int) – The number of training samples.
num_samples_val (int, optional) – The number of validation samples. Defaults to 10% of training samples if not provided.
batch_size (int, optional) – The batch size for data loading. Defaults to 1.
num_workers (int, optional) – The number of worker processes for data loading. Defaults to 1.
collate_fn (Callable, optional) – A function to collate data into batches.
create_batch_size (int, optional) – The batch size used during dataset creation. Defaults to 8.
create_num_workers (int, optional) – The number of workers used during dataset creation. Defaults to 4.
file_handler (TorchSigFileHandler, optional) – The file handler for managing data storage. Defaults to ZarrFileHandler.
transforms (Transform | List[Callable | Transform], optional) – A list of transformations to apply to the input data.
target_transforms (TargetTransform | List[Callable | TargetTransform], optional) – A list of transformations to apply to the target labels.
- train: DataLoader¶
- val: DataLoader¶
- test: DataLoader¶
- class torchsig.datasets.datamodules.OfficialTorchSigDataModdule(root: str, dataset: str, impaired: bool | int, batch_size: int = 1, num_workers: int = 1, collate_fn: ~typing.Callable | None = None, create_batch_size: int = 8, create_num_workers: int = 4, file_handler: ~torchsig.utils.file_handlers.base_handler.TorchSigFileHandler = <class 'torchsig.utils.file_handlers.zarr.ZarrFileHandler'>, transforms: ~torchsig.transforms.base_transforms.Transform | ~typing.List[~typing.Callable | ~torchsig.transforms.base_transforms.Transform] = [], target_transforms: ~torchsig.transforms.target_transforms.TargetTransform | ~typing.List[~typing.Callable | ~torchsig.transforms.target_transforms.TargetTransform] = [])[source]¶
Bases:
TorchSigDataModule
A PyTorch Lightning DataModule for official TorchSig datasets.
This class manages the dataset metadata, configuration, and data loading process for datasets with official configurations instead of using custom metadata. It initializes the train and validation metadata based on the dataset type and impairment level.
- Parameters:
root (str) – Root directory where the dataset is stored.
dataset (str) – Name of the dataset.
impaired (bool | int) – Defines the impairment level of the dataset.
batch_size (int, optional) – Batch size for the dataloaders. Default is 1.
num_workers (int, optional) – Number of workers for data loading. Default is 1.
collate_fn (Callable, optional) – Function to merge a list of samples into a batch. Default is None.
create_batch_size (int, optional) – Batch size used during dataset creation. Default is 8.
create_num_workers (int, optional) – Number of workers used during dataset creation. Default is 4.
file_handler (TorchSigFileHandler, optional) – File handler used to read/write dataset. Default is ZarrFileHandler.
transforms (Transform | List[Callable | Transform], optional) – List of transforms applied to dataset. Default is empty list.
target_transforms (TargetTransform | List[Callable | TargetTransform], optional) – List of transforms applied to targets. Default is empty list.
- train: DataLoader¶
- val: DataLoader¶
- test: DataLoader¶
- class torchsig.datasets.datamodules.OfficialNarrowbandDataModule(root: str, impaired: bool | int, batch_size: int = 1, num_workers: int = 1, collate_fn: ~typing.Callable | None = None, create_batch_size: int = 8, create_num_workers: int = 4, file_handler: ~torchsig.utils.file_handlers.base_handler.TorchSigFileHandler = <class 'torchsig.utils.file_handlers.zarr.ZarrFileHandler'>, transforms: list = [], target_transforms: list = [])[source]¶
Bases:
OfficialTorchSigDataModdule
A DataModule for the official Narrowband dataset.
This class extends OfficialTorchSigDataModdule and sets the dataset type to ‘narrowband’. It initializes the necessary parameters for the dataset and loads the train and validation metadata accordingly.
- Parameters:
root (str) – Root directory where the dataset is stored.
impaired (bool | int) – Defines the impairment level of the dataset.
batch_size (int, optional) – Batch size for the dataloaders. Default is 1.
num_workers (int, optional) – Number of workers for data loading. Default is 1.
collate_fn (Callable, optional) – Function to merge a list of samples into a batch. Default is None.
create_batch_size (int, optional) – Batch size used during dataset creation. Default is 8.
create_num_workers (int, optional) – Number of workers used during dataset creation. Default is 4.
file_handler (TorchSigFileHandler, optional) – File handler used to read/write dataset. Default is ZarrFileHandler.
transforms (list, optional) – List of transforms applied to dataset. Default is empty list.
target_transforms (list, optional) – List of transforms applied to targets. Default is empty list.
- train: DataLoader¶
- val: DataLoader¶
- test: DataLoader¶
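A minimal sketch using the official Narrowband DataModule with a PyTorch Lightning Trainer (the pytorch_lightning import name and the directory path are assumptions; `model` is a user-supplied LightningModule and is shown commented out so the sketch stays self-contained):

```python
import pytorch_lightning as pl  # or `import lightning.pytorch as pl`, depending on install

from torchsig.datasets.datamodules import OfficialNarrowbandDataModule

datamodule = OfficialNarrowbandDataModule(
    root="./datasets/torchsig_narrowband",  # hypothetical path
    impaired=True,
    batch_size=32,
    num_workers=8,
)

# `model` must be a user-supplied LightningModule:
# model = MyLightningModule()
# trainer = pl.Trainer(max_epochs=10)
# trainer.fit(model, datamodule=datamodule)
```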
- class torchsig.datasets.datamodules.OfficialWidebandDataModule(root: str, impaired: bool | int, batch_size: int = 1, num_workers: int = 1, collate_fn: ~typing.Callable | None = None, create_batch_size: int = 8, create_num_workers: int = 4, file_handler: ~torchsig.utils.file_handlers.base_handler.TorchSigFileHandler = <class 'torchsig.utils.file_handlers.zarr.ZarrFileHandler'>, transforms: list = [], target_transforms: list = [])[source]¶
Bases:
OfficialTorchSigDataModdule
A DataModule for the official Wideband dataset.
This class extends OfficialTorchSigDataModdule and sets the dataset type to ‘wideband’. It initializes the necessary parameters for the dataset and loads the train and validation metadata accordingly.
- Parameters:
root (str) – Root directory where the dataset is stored.
impaired (bool | int) – Defines the impairment level of the dataset.
batch_size (int, optional) – Batch size for the dataloaders. Default is 1.
num_workers (int, optional) – Number of workers for data loading. Default is 1.
collate_fn (Callable, optional) – Function to merge a list of samples into a batch. Default is None.
create_batch_size (int, optional) – Batch size used during dataset creation. Default is 8.
create_num_workers (int, optional) – Number of workers used during dataset creation. Default is 4.
file_handler (TorchSigFileHandler, optional) – File handler used to read/write dataset. Default is ZarrFileHandler.
transforms (list, optional) – List of transforms applied to dataset. Default is empty list.
target_transforms (list, optional) – List of transforms applied to targets. Default is empty list.
- train: DataLoader¶
- val: DataLoader¶
- test: DataLoader¶