# tf_efficientnet_b1.ns_jft_in1k

| Property | Value |
|---|---|
Parameter Count | 7.8M |
Model Type | Image Classification |
Input Size | 240x240 |
GMACs | 0.7 |
Activations | 10.9M |
Training Data | ImageNet-1k + JFT-300M |
## What is tf_efficientnet_b1.ns_jft_in1k?

This model is a PyTorch implementation of the EfficientNet-B1 architecture, trained with the Noisy Student semi-supervised learning approach. Originally developed in TensorFlow by the paper authors and later ported to PyTorch by Ross Wightman, it combines the efficiency of the EfficientNet architecture with Noisy Student training, in which a teacher model pseudo-labels the unlabeled JFT-300M images that are then used alongside the labeled ImageNet-1k data.
## Implementation Details
The model leverages compound scaling principles from the EfficientNet paper, optimizing width, depth, and resolution scaling factors simultaneously. With 7.8M parameters and 0.7 GMACs, it achieves an excellent balance between computational efficiency and accuracy.
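The compound scaling rule can be sketched in a few lines. The coefficients below (α = 1.2, β = 1.1, γ = 1.15) are the base values reported in the EfficientNet paper's grid search, not values extracted from this checkpoint, so treat this as an illustration of the principle rather than the exact B1 configuration:

```python
# Hedged sketch of EfficientNet compound scaling, using the paper's
# base coefficients (assumed here, not read from this checkpoint):
#   depth      d = alpha ** phi
#   width      w = beta  ** phi
#   resolution r = gamma ** phi
# subject to alpha * beta**2 * gamma**2 ~= 2, so FLOPs grow ~ 2 ** phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi: float) -> tuple[float, float, float]:
    """Return (depth, width, resolution) multipliers for exponent phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

depth_mult, width_mult, res_mult = compound_scale(1.0)

# Per-step FLOPs growth factor implied by the constraint above.
flops_growth = ALPHA * BETA ** 2 * GAMMA ** 2  # ~= 1.92, close to 2
```

Note that the released B1 configuration uses rounded values (depth multiplier 1.1, width multiplier 1.0, 240x240 resolution) rather than the raw products above.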
- Optimized for 240x240 input images
- Features Noisy Student training methodology
- Supports multiple operational modes including classification, feature extraction, and embedding generation
- Provides flexible feature map extraction capabilities
## Core Capabilities
- Image classification with 1000 ImageNet classes
- Feature map extraction at multiple scales
- Image embedding generation
- Pre-logits feature extraction
## Frequently Asked Questions
**Q: What makes this model unique?**
This model combines EfficientNet's efficient architecture with Noisy Student training on both ImageNet-1k and JFT-300M datasets, resulting in robust performance while maintaining computational efficiency. The dual-dataset training approach enhances its generalization capabilities.
**Q: What are the recommended use cases?**
The model is well-suited for image classification tasks, feature extraction for downstream tasks, and generating image embeddings. Its balanced efficiency makes it particularly suitable for production environments where computational resources are a consideration.