Image Datasets¶
Synthetic spectrogram datasets (generated directly as images, not from I/Q data) and tools for spectrogram-based signal detection and classification. Read more in the TorchSig GNU Radio Conference 2024 publication.
Datasets¶
- class torchsig.image_datasets.datasets.synthetic_signals.GeneratorFunctionDataset(generator_function, transforms=None)[source]¶
Bases:
Dataset
- torchsig.image_datasets.datasets.synthetic_signals.generate_tone(tone_width: int, max_height: int = 10, min_height: int = 3)[source]¶
- torchsig.image_datasets.datasets.synthetic_signals.tone_generator_function(tone_width: int, max_height: int = 10, min_height: int = 3)[source]¶
- torchsig.image_datasets.datasets.synthetic_signals.generate_chirp(chirp_width: int, height: int, width: int, random_height_scale: float = [1, 1], random_width_scale: float = [1, 1])[source]¶
- torchsig.image_datasets.datasets.synthetic_signals.chirp_generator_function(chirp_width: int, height: int, width: int, random_height_scale: float = [1, 1], random_width_scale: float = [1, 1])[source]¶
- torchsig.image_datasets.datasets.synthetic_signals.generate_rectangle_signal(min_width: int = 10, max_width: int = 100, max_height: int = 50, min_height: int = 5, use_blur=True)[source]¶
- torchsig.image_datasets.datasets.synthetic_signals.rectangle_signal_generator_function(min_width: int = 10, max_width: int = 100, max_height: int = 50, min_height: int = 5, use_blur=True)[source]¶
- torchsig.image_datasets.datasets.synthetic_signals.generate_repeated_signal(generator_fn, min_gap: int = 2, max_gap: int = 10, repeat_axis=-1, min_repeats=8, max_repeats=16)[source]¶
- torchsig.image_datasets.datasets.synthetic_signals.repeated_signal_generator_function(generator_fn, min_gap: int = 2, max_gap: int = 10, repeat_axis=-1, min_repeats=30, max_repeats=50)[source]¶
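Example (a minimal sketch pairing a generator function with GeneratorFunctionDataset; the parameter values are illustrative):

    from torchsig.image_datasets.datasets.synthetic_signals import (
        GeneratorFunctionDataset,
        chirp_generator_function,
    )

    # chirp_generator_function returns a zero-argument callable that produces
    # a new chirp image tensor each time it is called (assumed behavior).
    chirp_fn = chirp_generator_function(chirp_width=8, height=64, width=64)
    chirp_ds = GeneratorFunctionDataset(chirp_fn, transforms=None)
    img = chirp_ds[0]  # one synthetic chirp image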
- torchsig.image_datasets.datasets.file_loading_datasets.extract_bounding_boxes(filepath, filter_strength=None)[source]¶
- torchsig.image_datasets.datasets.file_loading_datasets.extract_bounding_boxes_from_image(img, isolate=True, filter_strength=None)[source]¶
- torchsig.image_datasets.datasets.file_loading_datasets.isolate_soi(soi_image, filter_strength=0)[source]¶
- torchsig.image_datasets.datasets.file_loading_datasets.extract_sois(filepaths, filter_strength=None)[source]¶
- class torchsig.image_datasets.datasets.file_loading_datasets.SOIExtractorDataset(filepath: str, transforms=None, read_black_hot=False, filter_strength=None)[source]¶
Bases:
Dataset
- class torchsig.image_datasets.datasets.file_loading_datasets.ImageDirectoryDataset(filepath: str, transforms=None, read_black_hot=False)[source]¶
Bases:
Dataset
- class torchsig.image_datasets.datasets.file_loading_datasets.LazyImageDirectoryDataset(filepath: str, transforms=None, read_black_hot=False)[source]¶
Bases:
Dataset
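Example (a sketch of loading a folder of spectrogram images; the path is a placeholder):

    from torchsig.image_datasets.datasets.file_loading_datasets import ImageDirectoryDataset

    # Loads every image in the folder as a tensor; read_black_hot inverts
    # black-hot imagery so that signal energy reads as high values.
    image_ds = ImageDirectoryDataset("spectrogram_images/", read_black_hot=False)
    img = image_ds[0]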
- class torchsig.image_datasets.datasets.composites.ConcatDataset(component_datasets, balance=True, transforms=[])[source]¶
Bases:
Dataset
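Example (a sketch mixing two datasets built as above; balance=True is assumed to sample the components evenly rather than in proportion to their lengths):

    from torchsig.image_datasets.datasets.composites import ConcatDataset

    mixed_ds = ConcatDataset([chirp_ds, image_ds], balance=True)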
- class torchsig.image_datasets.datasets.protocols.FrequencyHoppingDataset(signal_fn, channel_height: int, num_channels: int, signal_length: int, num_signals, hopping_function=None, transforms=None)[source]¶
Bases:
Dataset
- class torchsig.image_datasets.datasets.protocols.YOLOFrequencyHoppingDataset(signal_fn, channel_height: int, num_channels: int, signal_length: int, num_signals, hopping_function=None, transforms=None)[source]¶
Bases:
FrequencyHoppingDataset
- class torchsig.image_datasets.datasets.protocols.CFGSignalProtocolDataset(initial_token: str | None = None, transforms=None)[source]¶
Bases:
Dataset
- class torchsig.image_datasets.datasets.protocols.VerticalCFGSignalProtocolDataset(initial_token: str | None = None, transforms=None)[source]¶
Bases:
CFGSignalProtocolDataset
- class torchsig.image_datasets.datasets.protocols.YOLOCFGSignalProtocolDataset(initial_token: str | None = None, transforms=None)[source]¶
Bases:
CFGSignalProtocolDataset
- class torchsig.image_datasets.datasets.protocols.YOLOVerticalCFGSignalProtocolDataset(initial_token: str | None = None, transforms=None)[source]¶
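Example (a sketch of a frequency-hopping composite, assuming signal_fn is a zero-argument generator such as chirp_fn above; the YOLO variants also return bounding-box labels):

    from torchsig.image_datasets.datasets.protocols import FrequencyHoppingDataset

    # Places num_signals hops of the generated signal across num_channels
    # frequency channels of height channel_height.
    hop_ds = FrequencyHoppingDataset(
        signal_fn=chirp_fn,
        channel_height=32,
        num_channels=16,
        signal_length=64,
        num_signals=8,
    )
    img = hop_ds[0]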
- class torchsig.image_datasets.datasets.yolo_datasets.YOLODatum(img=None, labels=[])[source]¶
Bases:
object
A class for wrapping YOLO data; contains a single datum for a YOLO dataset, with image and label data together. This class can be treated as a tuple of (image_data, labels), and can be returned from datasets. If no labels are provided, a class_id can be supplied instead, in which case the datum will be represented as (image_data, class_id) and is assumed to have one label of [class_id, 0.5, 0.5, 1, 1].
- property labels¶
- property shape¶
- append_labels(new_labels)[source]¶
adds new labels to the datum's label list. Inputs:
new_labels: either a list of label tuples to add, a single tuple of (class_id, cx, cy, width, height), or an int class_id, in which case (class_id, 0.5, 0.5, 1.0, 1.0) will be added
- transpose_yolo_labels(yolo_datum, top_left)[source]¶
A function for transposing YOLO labels for boxes in one image to the appropriate labels for the same boxes in a larger composite image containing the smaller image. Inputs:
yolo_datum: the pair (img1, old_labels), where img1 is the smaller image on which old_labels are accurate, as a torch [n_channels, height, width] tensor
top_left: the coordinates of the top left corner of img1 within self.img, as (x, y), such that self.img[:, y, x] is the top left corner of img1
- Outputs:
new_labels: the new YOLO labels which describe the boxes from old_labels in self.img
- append_yolo_labels(yolo_datum, top_left)[source]¶
A function for adding YOLO labels for boxes in one image to the appropriate labels for the same boxes in a larger composite image containing the smaller image. Labels for boxes which do not fall entirely inside the larger image are automatically deleted. This object will be modified to contain the labels from yolo_datum, transposed appropriately. Inputs:
yolo_datum: the pair (img1, old_labels), where img1 is the smaller image on which old_labels are accurate, as a torch [n_channels, height, width] tensor
top_left: the coordinates of the top left corner of img1 within img2, as (y, x), such that img2[:, y, x] is the top left corner of img1
- compose_yolo_data(yolo_datum, top_left, image_composition_mode='add')[source]¶
A function for composing this YOLODatum with another YOLODatum, such that the resulting image combines the two images, with yolo_datum.img starting at top_left in self.img, and the resulting labels contain the labels from both YOLODatum objects.
Inputs:
yolo_datum: the datum to compose into this datum
top_left: the top left corner, as (x, y), at which to place yolo_datum.img
image_composition_mode: a string denoting the mode in which to compose the image data from the two images; either 'replace', 'max', or 'add'; defaults to 'add'
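Example (a sketch of building and composing YOLODatum objects; shapes and coordinates are illustrative, and top_left follows the (x, y) convention of compose_yolo_data above):

    import torch
    from torchsig.image_datasets.datasets.yolo_datasets import YOLODatum

    background = YOLODatum(torch.zeros(1, 512, 512))  # unlabeled canvas
    signal = YOLODatum(torch.ones(1, 32, 64))         # small signal image
    signal.append_labels(0)  # int class_id adds (0, 0.5, 0.5, 1.0, 1.0)

    # Paste the signal into the canvas at (x=100, y=200) and merge its
    # labels, transposed to the larger image.
    background.compose_yolo_data(signal, (100, 200), image_composition_mode='add')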
- class torchsig.image_datasets.datasets.yolo_datasets.YOLODatasetAdapter(dataset: Dataset, class_id: int | None = None)[source]¶
Bases:
Dataset
A class for adapting generic image datasets to YOLO image datasets. Expects a dataset which returns only image tensors, and a class label to apply to the dataset. All returned data will be of the form (image_data, [(class_id, 0.5, 0.5, 1.0, 1.0)]), or (image_data, []) if class_id is None.
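Example (a sketch giving a plain image dataset a fixed class id, reusing chirp_ds from above):

    from torchsig.image_datasets.datasets.yolo_datasets import YOLODatasetAdapter

    # Every chirp image now carries a whole-image YOLO box of class 1.
    yolo_chirp_ds = YOLODatasetAdapter(chirp_ds, class_id=1)
    img, labels = yolo_chirp_ds[0]  # labels == [(1, 0.5, 0.5, 1.0, 1.0)]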
- class torchsig.image_datasets.datasets.yolo_datasets.YOLOImageCompositeDatasetComponent(component_dataset, min_to_add=0, max_to_add=1, class_id=None, use_source_yolo_labels=False)[source]¶
Bases:
Dataset
Defines a component of a composite dataset; this will contain any information the composites should use to place instances of this component in the composites, such as how many instances should be placed. Inputs:
component_dataset: a Dataset object which contains instances of this component, represented as (image_component: ndarray(c, height, width), class_id: int)
min_to_add: the fewest instances of this component type to be placed in each composite
max_to_add: the most instances of this type to be placed in each composite; the number of instances will be selected uniformly from min_to_add to max_to_add
class_id: the int id to use for labeling data; if provided, all returned data will be of the form (component_dataset[n], (class_id, 0.5, 0.5, 1.0, 1.0)), representing a single box taking up the full image component of class class_id
use_source_yolo_labels: if true, load YOLO labels from the component_dataset; otherwise component_dataset is assumed to return only image tensors
If neither class_id nor use_source_yolo_labels is provided, all data will be assumed to have no labels, and (component_dataset[n], []) will be returned
- class torchsig.image_datasets.datasets.yolo_datasets.YOLOImageCompositeDataset(composite_scale, transforms=None, components=[], dataset_size=10, max_add=False)[source]¶
Bases:
Dataset
A Dataset class generating synthetic composite images in YOLO format from other image datasets. Inputs:
composite_scale: a tuple of the form (height, width, num_channels) specifying the scale of the image composites to be generated (if a 2d tuple is passed in, it will work in greyscale)
transforms: either a single function or list of functions from images to images to be applied to each composite; used for adding noise and impairments to data; defaults to None
NOTE: The dataset will not have any components to add to the composite at initialization; these must be added by calling my_instance.add_component(image_dataset_to_add). All components should be torch datasets which output an image in the form of an ndarray and an integer class id label, as: (image_height, image_width, ?image_depth), class_id
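Example (a sketch of assembling a composite dataset; the add_component keyword arguments are an assumption, mirroring YOLOImageCompositeDatasetComponent above):

    from torchsig.image_datasets.datasets.yolo_datasets import YOLOImageCompositeDataset

    # Greyscale 512x512 composites, 100 synthetic images per epoch.
    composite_ds = YOLOImageCompositeDataset((512, 512, 1), dataset_size=100)
    composite_ds.add_component(chirp_ds, min_to_add=1, max_to_add=4, class_id=0)
    img, labels = composite_ds[0]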
- torchsig.image_datasets.datasets.yolo_datasets.read_yolo_datum(root_dir, fname)[source]¶
loads a YOLODatum from a root directory and file name that point to a dataset in yolo format
- torchsig.image_datasets.datasets.yolo_datasets.yolo_to_pixels_on_image(img, box)[source]¶
returns the (x_start, y_start, x_end, y_end) pixels of an input box in the yolo format (cx, cy, width, height) on img
- torchsig.image_datasets.datasets.yolo_datasets.yolo_box_on_image(img, box)[source]¶
returns an image tensor containing the portion of img that falls within box, where box is a tuple (cx, cy, width, height) in yolo format
- torchsig.image_datasets.datasets.yolo_datasets.extract_yolo_boxes(yolo_datum)[source]¶
returns a list of new YOLODatum objects which each contain a single box from the input object
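Example (the box utilities convert normalized YOLO coordinates to pixels; on a 512x512 image, (cx, cy, width, height) = (0.5, 0.5, 0.25, 0.25) spans pixels 192 to 320 on each axis):

    import torch
    from torchsig.image_datasets.datasets.yolo_datasets import yolo_to_pixels_on_image

    img = torch.zeros(1, 512, 512)
    x0, y0, x1, y1 = yolo_to_pixels_on_image(img, (0.5, 0.5, 0.25, 0.25))
    # (x0, y0, x1, y1) == (192, 192, 320, 320)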
- class torchsig.image_datasets.datasets.yolo_datasets.YOLOFileDataset(filepath: str, transforms=None)[source]¶
Bases:
Dataset
A Dataset class for loading image and label files in YOLO format from a root directory. Inputs:
filepath: a string file path to a folder containing the YOLO dataset
transforms: either a single function or list of functions from images to images to be applied to each loaded image; used for adding noise and impairments to data; defaults to None
- class torchsig.image_datasets.datasets.yolo_datasets.YOLOSOIExtractorDataset(filepath: str, transforms=None, read_black_hot=False, soi_classes: list | None = None, filter_strength=1)[source]¶
Bases:
Dataset
A Dataset class for loading marked signals of interest (SOIs) from a YOLO format dataset. Inputs:
filepath: a string file path to a folder containing images in which all signals of interest have been marked with a colored bounding box
transforms: either a single function or list of functions from images to images to be applied to each SOI; used for adding noise and impairments to data; defaults to None
read_black_hot: whether or not to read loaded images as black-hot; this will invert the value of loaded SOIs
soi_classes: which classes from the YOLO dataset are to be considered signals of interest; None for all classes; defaults to None
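Example (a loading sketch; the path is a placeholder):

    from torchsig.image_datasets.datasets.yolo_datasets import (
        YOLOFileDataset,
        YOLOSOIExtractorDataset,
    )

    full_ds = YOLOFileDataset("yolo_dataset_root/")
    # Extract only class-0 boxes as individual SOI images.
    soi_ds = YOLOSOIExtractorDataset("yolo_dataset_root/", soi_classes=[0], filter_strength=1)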
Plotting¶
Transforms¶
- torchsig.image_datasets.transforms.denoising.normalize_image(image, axis=None)[source]¶
normalizes an image using its infinity norm. Inputs:
image: the image to normalize, as a 2d ndarray
- Outputs:
the normalized image
- torchsig.image_datasets.transforms.denoising.isolate_foreground_signal(image, filter_strength=0)[source]¶
filters image (a tensor of shape [1, width, height] in grayscale) to separate foreground from background noise, and returns the filtered image tensor; an integer filter_strength can be passed in to tune the strength of the filtering
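Example (a denoising sketch; the tensor shape follows the docstring above):

    import torch
    from torchsig.image_datasets.transforms.denoising import isolate_foreground_signal

    noisy = torch.rand(1, 64, 64)
    clean = isolate_foreground_signal(noisy, filter_strength=2)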
- class torchsig.image_datasets.transforms.impairments.GaussianNoiseTransform(mean: float = 0, std: float = 0.2, **kwargs)[source]¶
Bases:
Seedable
- class torchsig.image_datasets.transforms.impairments.ScaleTransform(scale: float, **kwargs)[source]¶
Bases:
Seedable
- class torchsig.image_datasets.transforms.impairments.RandomGaussianNoiseTransform(mean: float = 0, range=(0.01, 0.5), **kwargs)[source]¶
Bases:
Seedable
- class torchsig.image_datasets.transforms.impairments.RippleNoiseTransform(strength, num_emitors=30, image_shape=None, a=1e-06, b=1e-10, base_freq=100, **kwargs)[source]¶
Bases:
Seedable
- class torchsig.image_datasets.transforms.impairments.RandomRippleNoiseTransform(range, num_emitors=30, image_shape=None, a=1e-06, b=1e-10, base_freq=100, **kwargs)[source]¶
Bases:
Seedable
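Example (a sketch applying impairments as dataset transforms, reusing chirp_fn from above; the transform objects are assumed to be callable on image tensors):

    from torchsig.image_datasets.datasets.synthetic_signals import GeneratorFunctionDataset
    from torchsig.image_datasets.transforms.impairments import (
        GaussianNoiseTransform,
        RandomRippleNoiseTransform,
    )

    impairments = [
        RandomRippleNoiseTransform(range=(0.05, 0.2), image_shape=(1, 64, 64)),
        GaussianNoiseTransform(mean=0, std=0.1),
    ]
    impaired_ds = GeneratorFunctionDataset(chirp_fn, transforms=impairments)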
Annotation Tools¶
- torchsig.image_datasets.annotation_tools.yolo_annotation_tool.setup_yolo_directories(root_path)[source]¶
- torchsig.image_datasets.annotation_tools.yolo_annotation_tool.load_and_process_image(fpath)[source]¶
- torchsig.image_datasets.annotation_tools.yolo_annotation_tool.save_yolo_labels(labels, fpath)[source]¶
- torchsig.image_datasets.annotation_tools.yolo_annotation_tool.save_as_yolo_data(output_image_dir, output_label_dir, fname, img, bboxes, class_names)[source]¶
Saves data from the annotator widget as YOLO image/label files in the output directory. Inputs:
output_image_dir - the path of the image directory for the new YOLO data
output_label_dir - the path of the label directory for the new YOLO data
fname - the name of the image being saved
img - the image being saved
bboxes - the bounding boxes to be saved
- torchsig.image_datasets.annotation_tools.yolo_annotation_tool.yolo_annotator(input_image_dir, output_root_path, class_names=['Signal'])[source]¶
loads and runs an interactive notebook cell with an annotation tool that lets you label the images in input_image_dir in YOLO format and save the outputs to output_root_path. Annotations are saved as you label them, and the tool will recognize and skip images which already have labels, so terminating and rerunning the tool will pick up labeling at the next unlabeled image. By default the tool uses a single 'Signal' class, but an array of string class_names can be passed in.
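Example (run the annotator in a notebook cell; the paths are placeholders):

    from torchsig.image_datasets.annotation_tools.yolo_annotation_tool import yolo_annotator

    # Labels are written as you go; rerunning resumes at the first unlabeled image.
    yolo_annotator("raw_spectrograms/", "labeled_yolo_data/", class_names=["Signal"])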