MnasNet 100 RMSProp ImageNet-1k
| Property | Value |
|---|---|
| Parameter Count | 4.42M |
| Model Type | Image Classification |
| License | Apache-2.0 |
| Paper | MnasNet: Platform-Aware Neural Architecture Search for Mobile |
| Dataset | ImageNet-1k |
What is mnasnet_100.rmsp_in1k?
MnasNet is a mobile-optimized neural network architecture developed through platform-aware neural architecture search. This variant was trained on ImageNet-1k with RMSProp optimization and, at 4.42M parameters, is designed for efficient mobile deployment.
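The model can be loaded through the timm library. The snippet below is a minimal sketch: it assumes a recent timm and torch are installed, and `example.jpg` is a hypothetical local image path.

```python
import torch
import timm
from PIL import Image

# Load the pretrained classifier (downloads weights on first use)
model = timm.create_model('mnasnet_100.rmsp_in1k', pretrained=True)
model.eval()

# Sanity-check the parameter count reported above (~4.42M)
print(sum(p.numel() for p in model.parameters()))

# Build the preprocessing pipeline matching the pretrained weights
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

img = Image.open('example.jpg').convert('RGB')  # hypothetical image path
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))  # shape: (1, 1000)
probs, classes = logits.softmax(dim=-1).topk(5)
```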
Implementation Details
The model was trained with a recipe built around the RMSProp optimizer (TensorFlow 1.0 behaviour), EMA weight averaging, and step-based learning rate scheduling with warmup; a sketch of these components follows the list below. The architecture processes 224x224 images and requires only 0.3 GMACs for inference.
- Employs RandomErasing and mixup augmentation techniques
- Features dropout for regularization
- Implements standard random-resize-crop augmentation
- Optimized for mobile deployment, with 5.5M activations at the 224x224 input size
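The recipe components above map onto utilities that ship with timm. The following is a minimal sketch, not the published training script: the lr, eps, momentum, decay interval, and EMA decay values are illustrative assumptions.

```python
import torch
import timm
from timm.optim import RMSpropTF
from timm.scheduler import StepLRScheduler
from timm.utils import ModelEmaV2

model = timm.create_model('mnasnet_100', pretrained=False)

# RMSProp with TensorFlow 1.0 behaviour (eps applied inside the sqrt);
# hyperparameter values here are assumptions, not the published recipe
optimizer = RMSpropTF(model.parameters(), lr=0.08, alpha=0.9, eps=1e-3,
                      momentum=0.9, weight_decay=1e-5)

# Step-based LR decay with linear warmup (interval/rate are assumptions)
scheduler = StepLRScheduler(optimizer, decay_t=3, decay_rate=0.97,
                            warmup_t=5, warmup_lr_init=1e-4)

# EMA copy of the weights, updated after each optimizer step
model_ema = ModelEmaV2(model, decay=0.9999)

# One illustrative training step on dummy data
x = torch.randn(2, 3, 224, 224)
loss = model(x).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
model_ema.update(model)
scheduler.step(epoch=1)  # timm schedulers step once per epoch
```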
Core Capabilities
- Image classification with ImageNet-1k classes
- Feature map extraction with multiple resolution outputs
- Image embedding generation (both demonstrated in the sketch after this list)
- Mobile-optimized inference
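The feature-map and embedding capabilities come from standard timm creation arguments; a minimal sketch, assuming a recent timm version:

```python
import torch
import timm

x = torch.randn(1, 3, 224, 224)

# Multi-resolution feature maps: features_only=True returns the backbone's
# intermediate outputs at several strides
feat_model = timm.create_model('mnasnet_100.rmsp_in1k', pretrained=True,
                               features_only=True)
feat_model.eval()
with torch.no_grad():
    for fmap in feat_model(x):
        print(fmap.shape)  # (1, C, H, W) at progressively smaller H, W

# Pooled image embeddings: num_classes=0 removes the classification head
embed_model = timm.create_model('mnasnet_100.rmsp_in1k', pretrained=True,
                                num_classes=0)
embed_model.eval()
with torch.no_grad():
    emb = embed_model(x)
print(emb.shape)  # expected (1, 1280) for this architecture
```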
Frequently Asked Questions
Q: What makes this model unique?
The model stands out for its platform-aware architecture search, which optimizes the network specifically for mobile deployment while maintaining competitive accuracy. Its RMSProp training recipe and efficient parameter utilization make it particularly suitable for resource-constrained environments.
Q: What are the recommended use cases?
This model is ideal for mobile and edge device deployment where efficient image classification is required. It is particularly suitable for real-time applications that need reasonable accuracy with minimal computational overhead.