
Commit 8f6d638: Update README.md (1 parent: 1618527)

1 file changed: +12/-1 lines


README.md

Lines changed: 12 additions & 1 deletion
```diff
@@ -23,6 +23,10 @@ I'm fortunate to be able to dedicate significant time and money of my own suppor
 
 ## What's New
 
+### March 23, 2022
+* Add `ParallelBlock` and `LayerScale` option to base vit models to support model configs in [Three things everyone should know about ViT](https://arxiv.org/abs/2203.09795)
+* `convnext_tiny_hnf` (head norm first) weights trained with (close to) A2 recipe, 82.2% top-1, could do better with more epochs.
+
 ### March 21, 2022
 * Merge `norm_norm_norm`. **IMPORTANT** this update for a coming 0.6.x release will likely de-stabilize the master branch for a while. Branch `0.5.x` or a previous 0.5.x release can be used if stability is required.
 * Significant weights update (all TPU trained) as described in this [release](https://github.com/rwightman/pytorch-image-models/releases/tag/v0.1-tpu-weights)
```
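For orientation, the two ViT options named in the March 23 entry can be sketched framework-free: LayerScale multiplies a residual branch's output by a learnable per-channel scale (initialized very small), and a parallel block feeds the same input to the attention and MLP branches and sums both into one residual update, instead of applying them sequentially. This is a minimal sketch of the idea only; the function names below are illustrative and are not timm's actual classes or API.

```python
# Framework-free sketch of LayerScale and parallel vs. sequential ViT blocks.
# Tokens are modeled as flat lists of floats; `mix` and `mlp` stand in for
# the (normalized) attention and MLP sub-layers. Illustrative names only.

def layer_scale(values, gamma):
    """Scale each channel of a residual-branch output by a learnable
    per-channel factor gamma (typically initialized near zero)."""
    return [g * v for g, v in zip(gamma, values)]

def sequential_block(x, mix, mlp, gamma1, gamma2):
    """Standard ViT block: residual add of the attention branch,
    then residual add of the MLP branch on the updated input."""
    x = [a + b for a, b in zip(x, layer_scale(mix(x), gamma1))]
    return [a + b for a, b in zip(x, layer_scale(mlp(x), gamma2))]

def parallel_block(x, mix, mlp, gamma1, gamma2):
    """Parallel variant: both branches read the same input and their
    outputs are summed into a single residual update."""
    attn_out = layer_scale(mix(x), gamma1)
    mlp_out = layer_scale(mlp(x), gamma2)
    return [a + b + c for a, b, c in zip(x, attn_out, mlp_out)]
```

Because the two branches no longer depend on each other's output, the parallel form lets them be computed concurrently (or fused), which is the efficiency argument made in the linked paper.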
```diff
@@ -45,7 +49,8 @@ I'm fortunate to be able to dedicate significant time and money of my own suppor
 * `resnetrs200` - 83.85 @ 256, 84.44 @ 320
 * HuggingFace hub support fixed w/ initial groundwork for allowing alternative 'config sources' for pretrained model definitions and weights (generic local file / remote url support soon)
 * SwinTransformer-V2 implementation added. Submitted by [Christoph Reich](https://github.com/ChristophReich1996). Training experiments and model changes by myself are ongoing so expect compat breaks.
-* MobileViT models w/ weights adapted from https://github.com/apple/ml-cvnets (
+* Swin-S3 (AutoFormerV2) models / weights added from https://github.com/microsoft/Cream/tree/main/AutoFormerV2
+* MobileViT models w/ weights adapted from https://github.com/apple/ml-cvnets
 * PoolFormer models w/ weights adapted from https://github.com/sail-sg/poolformer
 * VOLO models w/ weights adapted from https://github.com/sail-sg/volo
 * Significant work experimenting with non-BatchNorm norm layers such as EvoNorm, FilterResponseNorm, GroupNorm, etc
```
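Of the models listed above, PoolFormer's core idea is simple enough to sketch without a framework: it replaces self-attention with plain average pooling over each token's neighborhood, and subtracts the input from the pooled result so the surrounding residual connection is not double-counted. A 1-D, single-channel sketch under those assumptions (illustrative only, not timm's implementation, which uses strided 2-D average pooling):

```python
def pooling_mixer(tokens, window=3):
    """PoolFormer-style token mixer on a 1-D token sequence:
    mixer(x) = AvgPool(x) - x, with same-size output and edge
    neighborhoods averaged over only the tokens that exist."""
    n = len(tokens)
    half = window // 2
    pooled = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        neighborhood = tokens[lo:hi]
        pooled.append(sum(neighborhood) / len(neighborhood))
    # Subtracting the input makes the mixer output zero on constant
    # sequences, so only local variation propagates into the residual.
    return [p - t for p, t in zip(pooled, tokens)]
```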
```diff
@@ -344,13 +349,16 @@ A full version of the list below with source links can be found in the [document
 * FBNet-V3 - https://arxiv.org/abs/2006.02049
 * HardCoRe-NAS - https://arxiv.org/abs/2102.11646
 * LCNet - https://arxiv.org/abs/2109.15099
+* MobileViT - https://arxiv.org/abs/2110.02178
 * NASNet-A - https://arxiv.org/abs/1707.07012
 * NesT - https://arxiv.org/abs/2105.12723
 * NFNet-F - https://arxiv.org/abs/2102.06171
 * NF-RegNet / NF-ResNet - https://arxiv.org/abs/2101.08692
 * PNasNet - https://arxiv.org/abs/1712.00559
+* PoolFormer (MetaFormer) - https://arxiv.org/abs/2111.11418
 * Pooling-based Vision Transformer (PiT) - https://arxiv.org/abs/2103.16302
 * RegNet - https://arxiv.org/abs/2003.13678
+* RegNetZ - https://arxiv.org/abs/2103.06877
 * RepVGG - https://arxiv.org/abs/2101.03697
 * ResMLP - https://arxiv.org/abs/2105.03404
 * ResNet/ResNeXt
```
```diff
@@ -367,12 +375,15 @@ A full version of the list below with source links can be found in the [document
 * ReXNet - https://arxiv.org/abs/2007.00992
 * SelecSLS - https://arxiv.org/abs/1907.00837
 * Selective Kernel Networks - https://arxiv.org/abs/1903.06586
+* Swin S3 (AutoFormerV2) - https://arxiv.org/abs/2111.14725
 * Swin Transformer - https://arxiv.org/abs/2103.14030
+* Swin Transformer V2 - https://arxiv.org/abs/2111.09883
 * Transformer-iN-Transformer (TNT) - https://arxiv.org/abs/2103.00112
 * TResNet - https://arxiv.org/abs/2003.13630
 * Twins (Spatial Attention in Vision Transformers) - https://arxiv.org/pdf/2104.13840.pdf
 * Visformer - https://arxiv.org/abs/2104.12533
 * Vision Transformer - https://arxiv.org/abs/2010.11929
+* VOLO (Vision Outlooker) - https://arxiv.org/abs/2106.13112
 * VovNet V2 and V1 - https://arxiv.org/abs/1911.06667
 * Xception - https://arxiv.org/abs/1610.02357
 * Xception (Modified Aligned, Gluon) - https://arxiv.org/abs/1802.02611
```
