Brief-details: DistilBERT base uncased - Lightweight BERT variant (67M params), trained on BookCorpus & Wikipedia. Fast, efficient language model for NLP tasks.
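A minimal fill-mask sketch, assuming the entry maps to the `distilbert-base-uncased` checkpoint on the Hugging Face Hub (the same pattern applies to the BERT and RoBERTa masked-LM entries below):

```python
from transformers import pipeline

# Fill-mask with DistilBERT; checkpoint name assumed from the entry title
unmasker = pipeline("fill-mask", model="distilbert-base-uncased")
print(unmasker("The capital of France is [MASK]."))  # top-5 predictions for [MASK]
```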
Brief-details: Powerful multilingual speech recognition model with 1.54B parameters, supporting 99 languages. Trained on 680k hours of audio data for transcription and translation.
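The description (1.54B params, 99 languages, 680k hours) matches OpenAI's Whisper large checkpoints; a sketch assuming `openai/whisper-large-v2`:

```python
from transformers import pipeline

# ASR with Whisper; the exact checkpoint (large-v2) is an assumption
asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2")
print(asr("speech_sample.wav")["text"])  # path to a local audio file
```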
Brief-details: RoBERTa-large: 355M parameter transformer model by Facebook AI, trained on 160GB of text data for masked language modeling; achieved state-of-the-art GLUE results at release.
Brief-details: Vision Transformer (ViT) model with 86.4M parameters, pretrained on the ImageNet-21k dataset for image recognition tasks. Supports PyTorch and JAX frameworks.
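An image-classification sketch, assuming the `google/vit-base-patch16-224` checkpoint (ImageNet-21k pretraining with an ImageNet-1k fine-tuned head):

```python
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

ckpt = "google/vit-base-patch16-224"  # assumed checkpoint
processor = ViTImageProcessor.from_pretrained(ckpt)
model = ViTForImageClassification.from_pretrained(ckpt)

inputs = processor(images=Image.open("cat.jpg"), return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])  # ImageNet-1k class name
```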
Brief-details: A PyTorch-based speaker recognition model using a ResNet34 architecture, trained on the VoxCeleb dataset for voice embedding and speaker verification tasks. Popular with 13M+ downloads.
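A verification sketch via pyannote.audio, assuming the entry refers to a WeSpeaker-style ResNet34 checkpoint such as `pyannote/wespeaker-voxceleb-resnet34-LM` (the repo ID is an assumption inferred from the architecture and dataset):

```python
from pyannote.audio import Inference, Model
from scipy.spatial.distance import cdist

# Repo ID is an assumption based on the ResNet34/VoxCeleb description
model = Model.from_pretrained("pyannote/wespeaker-voxceleb-resnet34-LM")
inference = Inference(model, window="whole")  # one embedding per file

emb_a = inference("speaker_a.wav")
emb_b = inference("speaker_b.wav")
# Smaller cosine distance => more likely the same speaker
print(cdist([emb_a], [emb_b], metric="cosine")[0, 0])
```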
Brief-details: OPT-1.3B is Meta AI's open-source language model with 1.3B parameters, trained on 180B tokens for text generation and language understanding.
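A text-generation sketch with the `facebook/opt-1.3b` checkpoint:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="facebook/opt-1.3b")
out = generator("Once upon a time,", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```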
Brief-details: CLIP vision-language model with a large ViT image encoder (patch size 14, 336px input) for zero-shot image classification and multimodal tasks.
Brief-details: RoBERTa base - 125M parameter transformer model by FacebookAI. Pretrained on 160GB text data using masked language modeling. Popular for NLP tasks.
Brief-details: CLIP (Contrastive Language-Image Pre-training) by OpenAI - A powerful vision-language model using ViT-B/16 architecture for zero-shot image classification with 20M+ downloads.
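A zero-shot classification sketch that covers the CLIP entries in this list, assuming the ViT-B/16 variant maps to `openai/clip-vit-base-patch16`:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

ckpt = "openai/clip-vit-base-patch16"  # assumed repo ID for ViT-B/16
model = CLIPModel.from_pretrained(ckpt)
processor = CLIPProcessor.from_pretrained(ckpt)

labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=labels, images=Image.open("photo.jpg"),
                   return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
print(dict(zip(labels, probs.tolist())))  # label -> probability
```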
Brief-details: A powerful English speech recognition model with 315M parameters, fine-tuned from XLSR-53. Achieves 19.06% WER on Common Voice, optimized for 16kHz audio.
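A CTC decoding sketch; the repo ID `jonatasgrosman/wav2vec2-large-xlsr-53-english` is an assumption inferred from the WER figure and XLSR-53 lineage:

```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

ckpt = "jonatasgrosman/wav2vec2-large-xlsr-53-english"  # assumed repo ID
processor = Wav2Vec2Processor.from_pretrained(ckpt)
model = Wav2Vec2ForCTC.from_pretrained(ckpt)

speech, _ = librosa.load("speech.wav", sr=16_000)  # model expects 16 kHz input
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```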
Brief-details: Qwen2.5-1.5B-Instruct is a 1.54B parameter instruction-tuned LLM with 32K context length, supporting 29+ languages and optimized for chat applications.
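A chat sketch with `Qwen/Qwen2.5-1.5B-Instruct`, using the tokenizer's built-in chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "Qwen/Qwen2.5-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype="auto")

messages = [{"role": "user", "content": "Explain beam search in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:],
                       skip_special_tokens=True))
```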
Brief-details: OpenAI's CLIP-ViT-Large vision transformer model with 428M parameters for zero-shot image classification, featuring dual image and text encoders trained on ~400M image-text pairs.
Brief-details: Efficient sentence embedding model with 22.7M parameters, maps text to 384D vectors. Trained on 1B+ sentence pairs, ideal for similarity tasks.
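An encoding sketch, assuming the 22.7M-param/384D description maps to `sentence-transformers/all-MiniLM-L6-v2`:

```python
from sentence_transformers import SentenceTransformer

# Repo ID assumed from the parameter count and 384D output dimension
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
embeddings = model.encode(["How do I reset my password?",
                           "Steps to reset a password"])
print(embeddings.shape)  # (2, 384)
```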
Brief-details: BERT base uncased (110M params) - Foundational transformer model for English language tasks with masked language modeling, trained on BookCorpus and Wikipedia.
Brief-details: XLM-RoBERTa large: Multilingual transformer model with 561M parameters, trained on 2.5TB of CommonCrawl data covering 100 languages. Optimized for masked language modeling and cross-lingual tasks.
Brief-details: Powerful sentence embedding model with 109M params, trained on 1B+ sentence pairs. Maps text to 768D vectors for semantic search & similarity tasks.
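A semantic-similarity sketch, assuming the 109M-param/768D description maps to `sentence-transformers/all-mpnet-base-v2`:

```python
from sentence_transformers import SentenceTransformer, util

# Repo ID assumed from the parameter count and 768D output dimension
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
query = model.encode("machine translation quality", convert_to_tensor=True)
docs = model.encode(["BLEU scores for MT systems",
                     "A recipe for banana bread"], convert_to_tensor=True)
print(util.cos_sim(query, docs))  # higher score = more semantically similar
```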
Brief-details: CLIP-ViT model for zero-shot image classification, using Vision Transformer architecture. 23M+ downloads, created by OpenAI for research purposes.
Brief-details: A powerful 14B parameter code-generation model from Qwen with 128K context length, optimized for programming, code reasoning, and code fixing. Built on the Qwen2.5 architecture.
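A code-generation sketch; the instruct variant `Qwen/Qwen2.5-Coder-14B-Instruct` is an assumption (the entry may also refer to the base model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "Qwen/Qwen2.5-Coder-14B-Instruct"  # assumed instruct variant
tokenizer = AutoTokenizer.from_pretrained(ckpt)
# device_map="auto" requires `accelerate`; a 14B model needs a large GPU
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype="auto",
                                             device_map="auto")

messages = [{"role": "user",
             "content": "Write a Python function that checks for palindromes."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:],
                       skip_special_tokens=True))
```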
Brief-details: A specialized BERT-large model trained on 256GB of legal text, optimized for legal NLP tasks, with a 32k vocabulary that includes legal terms.
Brief-details: QwQ-32B-Preview: A 32.8B parameter experimental research model focused on advanced reasoning, featuring 32K context length and specialized architecture with RoPE and SwiGLU.
Brief-details: An anime-style image generation model with an integrated VAE and highly saturated colors, released in multiple versions (v1-v3) and optimized for cute character aesthetics and detailed clothing.
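A diffusers sketch; the repo ID below is a placeholder, since the entry does not name the checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

# "author/anime-model-v3" is hypothetical; substitute the real repo ID
pipe = StableDiffusionPipeline.from_pretrained("author/anime-model-v3",
                                               torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "1girl, cute, detailed clothing, vivid saturated colors"
image = pipe(prompt).images[0]
image.save("sample.png")
```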