resnet50_gn.a1h_in1k

timm

ResNet-50 with Group Normalization, trained on ImageNet-1k using the A1h recipe. 25.6M parameters, optimized for image classification with 81.22% top-1 accuracy.

Parameter Count: 25.6M
License: Apache 2.0
Training Data: ImageNet-1k
Top-1 Accuracy: 81.22%
Paper: ResNet Strikes Back

What is resnet50_gn.a1h_in1k?

This is a ResNet-50 architecture enhanced with Group Normalization, trained using the A1h variant of the training recipes described in "ResNet Strikes Back". It represents a modern approach to improving the classic ResNet architecture while maintaining its efficiency and effectiveness.

Implementation Details

The model utilizes a ResNet-B structure with several key optimizations:

  • Group Normalization layers instead of Batch Normalization
  • Single 7x7 convolution layer with pooling
  • 1x1 convolution shortcut downsample
  • LAMB optimizer with cosine learning rate schedule
  • Enhanced dropout and stochastic depth

Core Capabilities

  • Image Classification on 1000 classes
  • Feature extraction capabilities
  • Support for various image sizes (224px training, 288px inference)
  • Efficient inference with 4.1 GMACs

Frequently Asked Questions

Q: What makes this model unique?

This model combines the proven ResNet architecture with Group Normalization, making it more stable across different batch sizes and better suited for various deployment scenarios. The A1H training recipe provides enhanced performance compared to standard training approaches.

Q: What are the recommended use cases?

The model is well-suited for general image classification tasks, transfer learning, and as a backbone for more complex computer vision tasks. It performs particularly well in scenarios where batch size might vary or when working with high-resolution images.
