PNasNet-5 Large
| Property | Value |
|---|---|
| Parameter Count | 86.1M |
| Model Type | Image Classification |
| Architecture | Progressive Neural Architecture Search |
| License | Apache 2.0 |
| Paper | Progressive Neural Architecture Search |
| Input Size | 331 x 331 |
What is pnasnet5large.tf_in1k?
PNasNet-5 Large is an image classification model whose architecture was discovered with Progressive Neural Architecture Search (PNAS). Originally developed by researchers at Google, the model has been ported from TensorFlow to PyTorch, preserving its classification accuracy while offering broader framework compatibility.
Implementation Details
The model features 86.1M parameters and requires 25.0 GMACs for inference. It processes images at 331x331 resolution and produces 92.9M activations. The architecture was discovered through progressive search techniques, making it highly optimized for image classification tasks.
- Trained on ImageNet-1k dataset
- Supports feature map extraction with multiple scales
- Capable of generating image embeddings
- Implements efficient architecture search principles
Core Capabilities
- High-accuracy image classification on 1000 ImageNet classes
- Feature extraction at multiple scales (from 96 to 4320 channels)
- Embedding generation for downstream tasks
- Flexible integration with both TensorFlow and PyTorch workflows
Frequently Asked Questions
Q: What makes this model unique?
PNasNet-5 Large stands out because its architecture was discovered automatically through progressive neural architecture search, yielding a strong balance between computational cost and accuracy. Its large parameter count (86.1M) and multi-scale feature extraction make it well suited to demanding image classification tasks.
Q: What are the recommended use cases?
The model excels in high-resolution image classification tasks, feature extraction for transfer learning, and generating image embeddings for downstream tasks. It's particularly well-suited for applications requiring high accuracy and where computational resources aren't a major constraint.