| name | torchvision |
| description | Computer vision library for PyTorch featuring pretrained models, advanced image transforms (v2), and utilities for handling complex data types like bounding boxes and masks. (torchvision, transforms, tvtensor, resnet, cutmix, mixup, pretrained models, vision transforms) |
Overview
TorchVision provides models, datasets, and transforms for computer vision. It has recently transitioned to "v2" transforms, which support more complex data types like bounding boxes and masks alongside images, using a unified API.
When to Use
Use TorchVision for standard CV tasks like classification, detection, or segmentation. Use the v2 transforms for performance-critical pipelines or when applying augmentations like MixUp/CutMix that require batch-level processing.
Decision Tree
- Are you starting a new project?
  - YES: Use `torchvision.transforms.v2`.
- Do you need a pretrained model?
  - YES: Use the `weights` parameter (e.g., `ResNet50_Weights.DEFAULT`).
- Do you have bounding boxes that need to move with the image?
  - YES: Use `TVTensors` for automatic coordinate transformation.
Workflows
Standard Inference with Pretrained Models
- Select a model and its specific weights (e.g., `ResNet50_Weights.DEFAULT`).
- Initialize the model with those weights and set it to `.eval()`.
- Extract the required preprocessing from the weights using `weights.transforms()`.
- Apply the transform to the input image and run the forward pass.
Advanced Data Augmentation with MixUp
- Import `MixUp` and `CutMix` from `torchvision.transforms.v2`.
- Incorporate them into the training loop logic (they act on batches, not individual samples).
- Apply the transform to the `(images, labels)` pair to generate augmented training data.
Migrating to Transforms v2
- Update the import from `torchvision.transforms` to `torchvision.transforms.v2`.
- Use `v2.Compose` for combining transforms.
- Switch from PIL-based logic to Tensor-based logic for significant performance gains.
- Leverage v2's ability to handle dicts/tuples for complex inputs like `{image, boxes, mask}`.
Non-Obvious Insights
- Weight-specific Transforms: Preprocessing is now bundled with weights; accessing `weights.transforms()` ensures the inference data exactly matches the training distribution of the specific weight recipe.
- TVTensors: v2 transforms recognize special tensor subclasses (`TVTensors`) for bounding boxes and masks, allowing them to be rotated or flipped automatically whenever the image is.
- Backend Performance: The `video_reader` backend is faster than `pyav` but requires manual compilation from source in some environments to be available.
Evidence
- "As of v0.13, TorchVision offers a new Multi-weight support API... weights = ResNet50_Weights.DEFAULT." (https://pytorch.org/vision/stable/models.html)
- "v2 transforms generally accept an arbitrary number of leading dimensions (..., C, H, W) and can handle batched images or batched videos." (https://pytorch.org/vision/stable/transforms.html)
Scripts
- scripts/torchvision_tool.py: Utility to load models and apply v2 transforms to batched data.
- scripts/torchvision_tool.js: Node.js interface to process images via TorchVision Python scripts.
Dependencies
- torchvision
- torch
- pillow (for image loading)