SPNasNet-100
| Property | Value |
|---|---|
| Parameter Count | 4.46M |
| Model Type | Image Classification |
| License | Apache-2.0 |
| Paper | Single-Path NAS |
| Dataset | ImageNet-1k |
What is spnasnet_100.rmsp_in1k?
SPNasNet-100 is a compact image classification model whose architecture was found through neural architecture search (NAS) with an explicit hardware-efficiency objective, trading off accuracy against computational cost. The spnasnet_100.rmsp_in1k weights were trained on the ImageNet-1k dataset using RMSProp optimization, with regularization and augmentation techniques including RandomErasing, mixup, and dropout.
Implementation Details
The model has 4.46M parameters and requires only 0.3 GMACs per inference. It operates on 224x224 pixel images and produces 6.0M activations during a forward pass. The training recipe uses an RMSProp optimizer with TF 1.0 behavior and applies EMA weight averaging for improved stability.
- Optimized using step-based learning rate schedule with warmup
- Implements standard random-resize-crop augmentation
- Exposes intermediate feature maps for efficient feature extraction
- Supports both classification and embedding generation
Core Capabilities
- Image classification with 1000 ImageNet classes
- Feature map extraction at multiple scales
- Generation of image embeddings
- Hardware-efficient inference
Frequently Asked Questions
Q: What makes this model unique?
This model stands out because its architecture was discovered by single-path neural architecture search in less than 4 hours of search time, yet it remains competitive on ImageNet classification while staying efficient enough for hardware deployment.
Q: What are the recommended use cases?
The model is ideal for resource-constrained environments requiring image classification, feature extraction, or embedding generation. It's particularly well-suited for mobile and edge devices where computational efficiency is crucial.