Commit 29fda20

Merge branch 'fffffgggg54-main'
2 parents 13c7183 + 9a53c3f

37 files changed: +2737 −1026 lines

.github/workflows/tests.yml

Lines changed: 2 additions & 1 deletion
@@ -40,9 +40,10 @@ jobs:
       - name: Install torch on ubuntu
         if: startsWith(matrix.os, 'ubuntu')
         run: |
-          pip install --no-cache-dir torch==${{ matrix.torch }}+cpu torchvision==${{ matrix.torchvision }}+cpu -f https://download.pytorch.org/whl/torch_stable.html
+          sudo sed -i 's/azure\.//' /etc/apt/sources.list
           sudo apt update
           sudo apt install -y google-perftools
+          pip install --no-cache-dir torch==${{ matrix.torch }}+cpu torchvision==${{ matrix.torchvision }}+cpu -f https://download.pytorch.org/whl/torch_stable.html
       - name: Install requirements
         run: |
           pip install -r requirements.txt

README.md

Lines changed: 67 additions & 6 deletions
@@ -21,12 +21,73 @@ And a big thanks to all GitHub sponsors who helped with some of my costs before

 ## What's New

-### 🤗 Survey: Feedback Appreciated 🤗
-
-For a few months now, `timm` has been part of the Hugging Face ecosystem. Yearly, we survey users of our tools to see what we could do better, what we need to continue doing, or what we need to stop doing.
-
-If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts:
-[**hf.co/oss-survey**](https://hf.co/oss-survey) 🙏
+* ❗Updates after Oct 10, 2022 are available in 0.8.x pre-releases (`pip install --pre timm`) or cloning main❗
+* Stable releases are 0.6.x and available by normal pip install or clone from [0.6.x](https://github.com/rwightman/pytorch-image-models/tree/0.6.x) branch.
+
+### Jan 20, 2023
+* Add two convnext 12k -> 1k fine-tunes at 384x384
+  * `convnext_tiny.in12k_ft_in1k_384` - 85.1 @ 384
+  * `convnext_small.in12k_ft_in1k_384` - 86.2 @ 384
+
+* Push all MaxxViT weights to HF hub, and add new ImageNet-12k -> 1k fine-tunes for `rw` base MaxViT and CoAtNet 1/2 models
+
+|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
+|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
+|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
+|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
+|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
+|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
+|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
+|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
+|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
+|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
+|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
+|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
+|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
+|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
+|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
+|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
+|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
+|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
+|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
+|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
+|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
+|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
+|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
+|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
+|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
+|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
+|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
+|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
+|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
+|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
+|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
+|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
+|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
+|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
+|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
+|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
+|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
+|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
+|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
+|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
+|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
+|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
+|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
+|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
+|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
+
+### Jan 11, 2023
+* Update ConvNeXt ImageNet-12k pretrain series w/ two new fine-tuned weights (and pre FT `.in12k` tags)
+  * `convnext_nano.in12k_ft_in1k` - 82.3 @ 224, 82.9 @ 288 (previously released)
+  * `convnext_tiny.in12k_ft_in1k` - 84.2 @ 224, 84.5 @ 288
+  * `convnext_small.in12k_ft_in1k` - 85.2 @ 224, 85.3 @ 288
+
+### Jan 6, 2023
+* Finally got around to adding `--model-kwargs` and `--opt-kwargs` to scripts to pass through rare args directly to model classes from cmd line
+  * `train.py /imagenet --model resnet50 --amp --model-kwargs output_stride=16 act_layer=silu`
+  * `train.py /imagenet --model vit_base_patch16_clip_224 --img-size 240 --amp --model-kwargs img_size=240 patch_size=12`
+* Cleanup some popular models to better support arg passthrough / merge with model configs, more to go.

 ### Jan 5, 2023
 * ConvNeXt-V2 models and weights added to existing `convnext.py`
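The `--model-kwargs` passthrough described in the Jan 6 entry above forwards `key=value` pairs from the command line to the model entrypoint. A minimal Python sketch of the equivalent direct call, mirroring the README example and assuming (as the entry states) that the `resnet50` builder accepts these kwargs:

```python
import timm

# Equivalent of: train.py /imagenet --model resnet50 --amp --model-kwargs output_stride=16 act_layer=silu
# Extra keyword arguments are passed through create_model() to the model builder.
model = timm.create_model(
    'resnet50',
    pretrained=False,
    output_stride=16,   # dilate later stages instead of striding
    act_layer='silu',   # swap the default ReLU activation
)
```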

benchmark.py

Lines changed: 17 additions & 9 deletions
@@ -22,7 +22,7 @@
 from timm.layers import set_fast_norm
 from timm.models import create_model, is_model, list_models
 from timm.optim import create_optimizer_v2
-from timm.utils import setup_default_logging, set_jit_fuser, decay_batch_step, check_batch_size_retry
+from timm.utils import setup_default_logging, set_jit_fuser, decay_batch_step, check_batch_size_retry, ParseKwargs

 has_apex = False
 try:
@@ -108,12 +108,15 @@
                     help='Enable gradient checkpointing through model blocks/stages')
 parser.add_argument('--amp', action='store_true', default=False,
                     help='use PyTorch Native AMP for mixed precision training. Overrides --precision arg.')
+parser.add_argument('--amp-dtype', default='float16', type=str,
+                    help='lower precision AMP dtype (default: float16). Overrides --precision arg if args.amp True.')
 parser.add_argument('--precision', default='float32', type=str,
                     help='Numeric precision. One of (amp, float32, float16, bfloat16, tf32)')
 parser.add_argument('--fuser', default='', type=str,
                     help="Select jit fuser. One of ('', 'te', 'old', 'nvfuser')")
 parser.add_argument('--fast-norm', default=False, action='store_true',
                     help='enable experimental fast-norm')
+parser.add_argument('--model-kwargs', nargs='*', default={}, action=ParseKwargs)

 # codegen (model compilation) options
 scripting_group = parser.add_mutually_exclusive_group()
@@ -124,7 +127,6 @@
 scripting_group.add_argument('--aot-autograd', default=False, action='store_true',
                              help="Enable AOT Autograd optimization.")

-
 # train optimizer parameters
 parser.add_argument('--opt', default='sgd', type=str, metavar='OPTIMIZER',
                     help='Optimizer (default: "sgd"')
@@ -168,19 +170,21 @@ def count_params(model: nn.Module):


 def resolve_precision(precision: str):
-    assert precision in ('amp', 'float16', 'bfloat16', 'float32')
-    use_amp = False
+    assert precision in ('amp', 'amp_bfloat16', 'float16', 'bfloat16', 'float32')
+    amp_dtype = None  # amp disabled
     model_dtype = torch.float32
     data_dtype = torch.float32
     if precision == 'amp':
-        use_amp = True
+        amp_dtype = torch.float16
+    elif precision == 'amp_bfloat16':
+        amp_dtype = torch.bfloat16
     elif precision == 'float16':
         model_dtype = torch.float16
         data_dtype = torch.float16
     elif precision == 'bfloat16':
         model_dtype = torch.bfloat16
         data_dtype = torch.bfloat16
-    return use_amp, model_dtype, data_dtype
+    return amp_dtype, model_dtype, data_dtype


 def profile_deepspeed(model, input_size=(3, 224, 224), batch_size=1, detailed=False):
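For reference, the reworked `resolve_precision` above returns an autocast dtype (or `None`) in place of the old `use_amp` boolean. A short sketch of the resulting mapping, inferred from the diff and assuming the repo root is on the import path:

```python
from benchmark import resolve_precision  # benchmark.py at the repo root

print(resolve_precision('float32'))       # (None, torch.float32, torch.float32)
print(resolve_precision('amp'))           # (torch.float16, torch.float32, torch.float32)
print(resolve_precision('amp_bfloat16'))  # (torch.bfloat16, torch.float32, torch.float32)
print(resolve_precision('bfloat16'))      # (None, torch.bfloat16, torch.bfloat16)
```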
@@ -228,9 +232,12 @@ def __init__(
         self.model_name = model_name
         self.detail = detail
         self.device = device
-        self.use_amp, self.model_dtype, self.data_dtype = resolve_precision(precision)
+        self.amp_dtype, self.model_dtype, self.data_dtype = resolve_precision(precision)
         self.channels_last = kwargs.pop('channels_last', False)
-        self.amp_autocast = partial(torch.cuda.amp.autocast, dtype=torch.float16) if self.use_amp else suppress
+        if self.amp_dtype is not None:
+            self.amp_autocast = partial(torch.cuda.amp.autocast, dtype=self.amp_dtype)
+        else:
+            self.amp_autocast = suppress

         if fuser:
             set_jit_fuser(fuser)
@@ -243,6 +250,7 @@ def __init__(
             drop_rate=kwargs.pop('drop', 0.),
             drop_path_rate=kwargs.pop('drop_path', None),
             drop_block_rate=kwargs.pop('drop_block', None),
+            **kwargs.pop('model_kwargs', {}),
         )
         self.model.to(
             device=self.device,
@@ -560,7 +568,7 @@ def _try_run(
 def benchmark(args):
     if args.amp:
         _logger.warning("Overriding precision to 'amp' since --amp flag set.")
-        args.precision = 'amp'
+        args.precision = 'amp' if args.amp_dtype == 'float16' else '_'.join(['amp', args.amp_dtype])
     _logger.info(f'Benchmarking in {args.precision} precision. '
                  f'{"NHWC" if args.channels_last else "NCHW"} layout. '
                  f'torchscript {"enabled" if args.torchscript else "disabled"}')
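The new `--model-kwargs` option relies on a custom argparse action, `ParseKwargs`, imported from `timm.utils`; its implementation is not part of this diff. A minimal sketch of an action with the same observable behavior — collecting `key=value` tokens into a dict and literal-evaluating values where possible — might look like this (illustrative only, not necessarily timm's code):

```python
import argparse
import ast


class ParseKwargs(argparse.Action):
    """Collect key=value tokens from the command line into a dict (illustrative sketch)."""

    def __call__(self, parser, namespace, values, option_string=None):
        kwargs = {}
        for token in values:
            key, _, value = token.partition('=')
            try:
                kwargs[key] = ast.literal_eval(value)  # numbers, bools, tuples, ...
            except (ValueError, SyntaxError):
                kwargs[key] = value  # fall back to the raw string, e.g. 'silu'
        setattr(namespace, self.dest, kwargs)


parser = argparse.ArgumentParser()
parser.add_argument('--model-kwargs', nargs='*', default={}, action=ParseKwargs)
args = parser.parse_args(['--model-kwargs', 'output_stride=16', 'act_layer=silu'])
print(args.model_kwargs)  # {'output_stride': 16, 'act_layer': 'silu'}
```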

inference.py

Lines changed: 12 additions & 2 deletions
@@ -20,7 +20,7 @@
 from timm.data import create_dataset, create_loader, resolve_data_config
 from timm.layers import apply_test_time_pool
 from timm.models import create_model
-from timm.utils import AverageMeter, setup_default_logging, set_jit_fuser
+from timm.utils import AverageMeter, setup_default_logging, set_jit_fuser, ParseKwargs

 try:
     from apex import amp
@@ -72,6 +72,8 @@
                     metavar='N', help='mini-batch size (default: 256)')
 parser.add_argument('--img-size', default=None, type=int,
                     metavar='N', help='Input image dimension, uses model default if empty')
+parser.add_argument('--in-chans', type=int, default=None, metavar='N',
+                    help='Image input channels (default: None => 3)')
 parser.add_argument('--input-size', default=None, nargs=3, type=int,
                     metavar='N N N', help='Input all image dimensions (d h w, e.g. --input-size 3 224 224), uses model default if empty')
 parser.add_argument('--use-train-size', action='store_true', default=False,
@@ -110,6 +112,7 @@
                     help='lower precision AMP dtype (default: float16)')
 parser.add_argument('--fuser', default='', type=str,
                     help="Select jit fuser. One of ('', 'te', 'old', 'nvfuser')")
+parser.add_argument('--model-kwargs', nargs='*', default={}, action=ParseKwargs)

 scripting_group = parser.add_mutually_exclusive_group()
 scripting_group.add_argument('--torchscript', default=False, action='store_true',
@@ -170,12 +173,19 @@ def main():
         set_jit_fuser(args.fuser)

     # create model
+    in_chans = 3
+    if args.in_chans is not None:
+        in_chans = args.in_chans
+    elif args.input_size is not None:
+        in_chans = args.input_size[0]
+
     model = create_model(
         args.model,
         num_classes=args.num_classes,
-        in_chans=3,
+        in_chans=in_chans,
         pretrained=args.pretrained,
         checkpoint_path=args.checkpoint,
+        **args.model_kwargs,
     )
     if args.num_classes is None:
         assert hasattr(model, 'num_classes'), 'Model must have `num_classes` attr if not set on cmd line/config.'
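With the new flags, `inference.py` resolves input channels with a simple precedence: `--in-chans` if given, else the first value of `--input-size`, else 3, and any `--model-kwargs` are forwarded to `create_model`. For illustration (hypothetical data path and checkpoint), either invocation below would build the model with a single input channel:

```
python inference.py /data/grayscale_images --model resnet50 --checkpoint ./model_best.pth.tar --in-chans 1
python inference.py /data/grayscale_images --model resnet50 --checkpoint ./model_best.pth.tar --input-size 1 224 224
```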

results/README.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ An ImageNet test set of 10,000 images sampled from new images roughly 10 years a

 ### ImageNet-Adversarial - [`results-imagenet-a.csv`](results-imagenet-a.csv)

-A collection of 7500 images covering 200 of the 1000 ImageNet classes. Images are naturally occuring adversarial examples that confuse typical ImageNet classifiers. This is a challenging dataset, your typical ResNet-50 will score 0% top-1.
+A collection of 7500 images covering 200 of the 1000 ImageNet classes. Images are naturally occurring adversarial examples that confuse typical ImageNet classifiers. This is a challenging dataset, your typical ResNet-50 will score 0% top-1.

 For clean validation with same 200 classes, see [`results-imagenet-a-clean.csv`](results-imagenet-a-clean.csv)

tests/test_models.py

Lines changed: 4 additions & 4 deletions
@@ -27,7 +27,7 @@
     'vit_*', 'tnt_*', 'pit_*', 'swin_*', 'coat_*', 'cait_*', '*mixer_*', 'gmlp_*', 'resmlp_*', 'twins_*',
     'convit_*', 'levit*', 'visformer*', 'deit*', 'jx_nest_*', 'nest_*', 'xcit_*', 'crossvit_*', 'beit*',
     'poolformer_*', 'volo_*', 'sequencer2d_*', 'swinv2_*', 'pvt_v2*', 'mvitv2*', 'gcvit*', 'efficientformer*',
-    'coatnet*', 'coatnext*', 'maxvit*', 'maxxvit*', 'eva_*', 'flexivit*'
+    'eva_*', 'flexivit*'
 ]
 NUM_NON_STD = len(NON_STD_FILTERS)

@@ -38,7 +38,7 @@
         '*efficientnet_l2*', '*resnext101_32x48d', '*in21k', '*152x4_bitm', '*101x3_bitm', '*50x3_bitm',
         '*nfnet_f3*', '*nfnet_f4*', '*nfnet_f5*', '*nfnet_f6*', '*nfnet_f7*', '*efficientnetv2_xl*',
         '*resnetrs350*', '*resnetrs420*', 'xcit_large_24_p8*', 'vit_huge*', 'vit_gi*', 'swin*huge*',
-        'swin*giant*', 'convnextv2_huge*']
+        'swin*giant*', 'convnextv2_huge*', 'maxvit_xlarge*', 'davit_giant', 'davit_huge']
     NON_STD_EXCLUDE_FILTERS = ['vit_huge*', 'vit_gi*', 'swin*giant*', 'eva_giant*']
 else:
     EXCLUDE_FILTERS = []
@@ -53,7 +53,7 @@
 TARGET_FFEAT_SIZE = 96
 MAX_FFEAT_SIZE = 256
 TARGET_FWD_FX_SIZE = 128
-MAX_FWD_FX_SIZE = 224
+MAX_FWD_FX_SIZE = 256
 TARGET_BWD_FX_SIZE = 128
 MAX_BWD_FX_SIZE = 224

@@ -269,7 +269,7 @@ def test_model_features_pretrained(model_name, batch_size):

 EXCLUDE_JIT_FILTERS = [
     '*iabn*', 'tresnet*',  # models using inplace abn unlikely to ever be scriptable
-    'dla*', 'hrnet*', 'ghostnet*',  # hopefully fix at some point
+    'dla*', 'hrnet*', 'ghostnet*'  # hopefully fix at some point
     'vit_large_*', 'vit_huge_*', 'vit_gi*',
 ]

timm/data/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 from .auto_augment import RandAugment, AutoAugment, rand_augment_ops, auto_augment_policy,\
     rand_augment_transform, auto_augment_transform
-from .config import resolve_data_config
+from .config import resolve_data_config, resolve_model_data_config
 from .constants import *
 from .dataset import ImageDataset, IterableImageDataset, AugMixDataset
 from .dataset_factory import create_dataset
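The newly exported `resolve_model_data_config` resolves a model's preprocessing settings (input size, interpolation, mean/std, crop) from its pretrained config. A minimal usage sketch, pairing it with the existing `create_transform` factory and a weight tag from the Jan 11 README entry above:

```python
import timm
from timm.data import resolve_model_data_config, create_transform

model = timm.create_model('convnext_tiny.in12k_ft_in1k', pretrained=True)
data_config = resolve_model_data_config(model)  # e.g. input_size, interpolation, mean, std, crop_pct
transform = create_transform(**data_config, is_training=False)
print(data_config['input_size'])
```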
