# wide_resnet50_2.racm_in1k
| Property | Value |
|---|---|
| Parameter Count | 68.9M |
| License | Apache 2.0 |
| Top-1 Accuracy | 82.27% |
| Image Size | 288x288 (test) / 224x224 (train) |
| GMACs | 18.9 |
## What is wide_resnet50_2.racm_in1k?
This is a Wide ResNet-50 variant trained on ImageNet-1k using an advanced RandAugment (RACM) recipe. It's based on the Wide Residual Networks architecture, which increases the width of residual networks to improve performance. The model employs a 2x width multiplier compared to standard ResNet-50, resulting in enhanced representational capacity.
## Implementation Details
The model implements several key architectural features:
- ReLU activations throughout the network
- A single 7x7 stem convolution followed by max pooling
- 1x1 convolution shortcuts for downsampling
- RandAugment RACM training recipe inspired by EfficientNet
- RMSProp optimizer with TF 1.0 behavior
- Step-based learning rate schedule with warmup
## Core Capabilities
- High-accuracy image classification (82.27% top-1 on ImageNet)
- Robust feature extraction for downstream tasks
- Efficient processing with 18.9 GMACs
- Support for both 224x224 training and 288x288 inference resolutions
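The dual-resolution support works because the network ends in global average pooling, so the classifier is resolution-agnostic; only the final feature grid changes with input size. A quick sketch of the arithmetic (32x is the standard ResNet overall stride):

```python
def final_grid(image_size: int, total_stride: int = 32) -> int:
    """Spatial size of the last feature map for a ResNet-family backbone."""
    return image_size // total_stride

print(final_grid(224))  # 7 -> 7x7 feature grid at the 224x224 train resolution
print(final_grid(288))  # 9 -> 9x9 feature grid at the 288x288 test resolution
```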
## Frequently Asked Questions
**Q: What makes this model unique?**
This model combines the increased capacity of Wide ResNets with an optimized RandAugment training recipe, achieving strong performance while maintaining reasonable computational requirements. The RACM recipe pairs RandAugment data augmentation with an EfficientNet-style optimization setup (RMSProp, step schedule with warmup), lifting top-1 accuracy to 82.27% without any change to the architecture itself.
**Q: What are the recommended use cases?**
The model is well-suited for:
1. High-accuracy image classification tasks
2. Feature extraction for transfer learning
3. Applications requiring robust visual representations with moderate computational resources