EdgeNeXt Base USI Model
| Property | Value |
|---|---|
| Parameter Count | 18.5M |
| Model Type | Image Classification / Feature Backbone |
| License | MIT |
| Training Data | ImageNet-1k |
| Image Size | Train: 256x256, Test: 320x320 |
| GMACs | 3.8 |
What is edgenext_base.usi_in1k?
EdgeNeXt Base USI is a hybrid vision model that combines CNN and Transformer components for efficient mobile inference. It was trained on ImageNet-1k with the USI (Unified Scheme for ImageNet) methodology, a knowledge-distillation-based training recipe, giving it a strong balance of computational efficiency and accuracy.
Implementation Details
The architecture reaches 18.5M parameters and 15.6M activations at 3.8 GMACs, making it well suited to mobile vision workloads. Weights are stored as F32 tensors, and the model can be used for image classification, multi-scale feature map extraction, and image embedding generation.
- Optimized for mobile vision applications
- Supports multiple feature extraction levels
- Implements efficient CNN-Transformer fusion
- Trained using advanced USI distillation techniques
Core Capabilities
- Image classification with high accuracy
- Feature map extraction at multiple scales
- Generation of image embeddings
- Flexible integration with PyTorch workflows
Frequently Asked Questions
Q: What makes this model unique?
EdgeNeXt Base USI combines CNN and Transformer components while keeping compute and parameter counts mobile-friendly. Its distillation-based USI training and efficient architecture make it well suited to resource-constrained environments.
Q: What are the recommended use cases?
The model is ideal for mobile vision applications, image classification tasks, feature extraction, and as a backbone for transfer learning in computer vision projects requiring efficient processing.