resmlp_12_224.fb_in1k


ResMLP model for image classification with 15.4M params. Trained on ImageNet-1k, processes 224x224 images using feedforward architecture.

Property          Value
Parameter Count   15.4M
License           Apache-2.0
Image Size        224x224
Framework         PyTorch (timm)
Paper             ResMLP: Feedforward networks for image classification with data-efficient training

What is resmlp_12_224.fb_in1k?

ResMLP-12/224 is a feedforward neural network designed for image classification. Developed by Facebook Research, it takes a purely Multi-Layer Perceptron (MLP) approach to computer vision, dispensing with both convolutions and attention mechanisms.

Implementation Details

The architecture comprises 15.4M parameters and operates on 224x224 pixel inputs. A forward pass costs roughly 3.0 GMACs and produces 5.5M activations, keeping the model relatively lightweight for its accuracy.

  • Data-efficient training methodology on ImageNet-1k dataset
  • Pure feedforward architecture without convolutions
  • Optimized for both classification and feature extraction tasks

Core Capabilities

  • Image classification with competitive accuracy on ImageNet-1k
  • Feature extraction for downstream tasks
  • Efficient processing of 224x224 resolution images
  • Supports both classification and embedding generation

Frequently Asked Questions

Q: What makes this model unique?

ResMLP stands out for its pure MLP-based architecture, achieving competitive performance without using convolutions or attention mechanisms. This makes it an interesting alternative to traditional CNN-based models while maintaining efficiency.

Q: What are the recommended use cases?

The model is ideal for image classification tasks and can be used as a feature extractor for transfer learning. It's particularly suitable for applications requiring a good balance between computational efficiency and accuracy.
