Brief Details: EdgeNeXt base model for mobile vision, 18.5M params, trained on ImageNet-1k with USI distillation; an efficient hybrid CNN-Transformer design
Brief Details: Microsoft's multilingual document AI model extending LayoutLMv2 to cross-lingual document understanding, combining text, layout, and image inputs
Brief Details: RoBERTa-based model fine-tuned on NYT news for 8-class topic classification. Achieves roughly 91% across its evaluation metrics. MIT licensed.
Brief Details: 4-bit quantized Mistral-7B instruct model optimized by Unsloth, offering 2.2x faster fine-tuning with 62% less memory usage
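A minimal loading sketch using Unsloth's FastLanguageModel API; the repository ID below is an assumption and should be replaced with the actual 4-bit checkpoint path.

```python
# Sketch: loading a 4-bit Unsloth Mistral-7B variant (repo ID is an assumption).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.2-bnb-4bit",  # hypothetical repo ID
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization keeps memory usage low
)

FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
inputs = tokenizer("Explain gradient checkpointing in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```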
Brief Details: Christian-AI-LLAMA is a PEFT-based model derived from meta-llama/Llama-3.2-1B, distributed in the Safetensors format, with significant community adoption (17.6K+ downloads).
Brief Details: Meta's 405B-parameter LLM quantized to INT4 using GPTQ, supporting 8 languages and requiring roughly 203GB of VRAM for inference. Optimized for multilingual dialogue.
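The quoted VRAM figure is consistent with back-of-the-envelope arithmetic on the INT4 weights alone; KV-cache and runtime overhead come on top.

```python
# Rough check of the ~203GB figure: 405B parameters at 4 bits (0.5 bytes) each.
params = 405e9
bytes_per_param = 0.5                  # INT4 weight storage
print(params * bytes_per_param / 1e9)  # ~202.5 GB of weights before KV-cache/overhead
```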
Brief Details: H2O.ai's 13B-parameter LLaMA 2 chat model variant with a 4,096-token context window, optimized for text generation and conversational AI tasks
Brief Details: BilingualChildEmo is an XLM-RoBERTa-based text classification model for emotion analysis; 17.8K downloads, Apache 2.0 license.
Brief Details: PVNet_summation - A specialized PyTorch model for national UK solar power forecasting, combining Grid Supply Point (GSP) level predictions through summation.
Brief Details: A wav2vec2-based emotion recognition model trained on the IEMOCAP dataset, achieving 78.7% accuracy for speech emotion classification using the SpeechBrain framework.
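A usage sketch following SpeechBrain's custom-interface pattern; the repository ID and interface class name match SpeechBrain's public IEMOCAP recipe and are assumptions with respect to this particular entry.

```python
# Sketch: classifying emotion from a wav file with SpeechBrain's foreign_class loader.
from speechbrain.inference.interfaces import foreign_class

classifier = foreign_class(
    source="speechbrain/emotion-recognition-wav2vec2-IEMOCAP",  # assumed repo ID
    pymodule_file="custom_interface.py",
    classname="CustomEncoderWav2vec2Classifier",
)
out_prob, score, index, text_lab = classifier.classify_file("example.wav")
print(text_lab)  # e.g. ['ang'], ['hap'], ['neu'], or ['sad']
```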
Brief Details: StableLM 2 1.6B is a 1.6B-parameter language model supporting 7 languages, trained on 2T tokens, with Flash Attention 2 support.
Brief Details: BERT-based model fine-tuned for paraphrase detection on MRPC dataset, achieving 86% accuracy and 90.4% F1 score. Optimized for sentence pair classification.
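A sentence-pair classification sketch with the transformers pipeline; the model ID is a placeholder for the MRPC fine-tuned checkpoint described above.

```python
# Sketch: paraphrase detection on a sentence pair (repo ID is a placeholder).
from transformers import pipeline

clf = pipeline("text-classification", model="textattack/bert-base-uncased-MRPC")  # assumed repo ID
result = clf({"text": "The company posted record profits.",
              "text_pair": "Profits at the company hit an all-time high."})
print(result)  # label indicates paraphrase / not-paraphrase with a confidence score
```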
Brief Details: Spanish text readability classifier using a RoBERTa architecture. Classifies text into 3 complexity levels with a 78.81% F1 score. Based on the BERTIN model.
Brief Details: A compact 135M-parameter instruction-tuned LLM optimized for efficient deployment, featuring multi-dataset training and conversational abilities
Brief Details: A medium-sized BERT variant (L=8, H=512) designed for efficient pre-training, part of a family of compact BERT models for NLI tasks.
Brief Details: Falcon-11B: an 11B-parameter language model trained on 5,000B (5T) tokens, supporting 10 languages and optimized for text generation and conversation.
Brief Details: A 1.5B-parameter code-specialized LLM built on Qwen2.5, featuring a 32K-token context window and significant improvements in code generation and reasoning.
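A code-generation sketch using the standard transformers chat-template flow; the repository ID is an assumption for this entry.

```python
# Sketch: generating code with a Qwen2.5-Coder-style instruct model (repo ID is an assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-1.5B-Instruct"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```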
Brief Details: Powerful Swin Transformer model with 197M params, trained on ImageNet-22k and fine-tuned on ImageNet-1k. Excellent for image classification and feature extraction.
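A timm sketch covering both uses mentioned above (classification logits and feature extraction); the exact timm model name is an assumption.

```python
# Sketch: image classification and feature extraction with timm (model name is an assumption).
import timm
import torch

model = timm.create_model("swin_large_patch4_window12_384.ms_in22k_ft_in1k", pretrained=True)  # assumed name
model.eval()

x = torch.randn(1, 3, 384, 384)        # dummy image batch
logits = model(x)                      # (1, 1000) ImageNet-1k class scores
features = model.forward_features(x)   # spatial feature map for downstream tasks
print(logits.shape, features.shape)
```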
Brief Details: Neural machine translation model for Catalan-to-Portuguese, achieving a 44.9 BLEU score, built by Helsinki-NLP using the transformer-align architecture
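A minimal translation sketch; the repository ID follows Helsinki-NLP's opus-mt naming convention and is assumed here.

```python
# Sketch: Catalan-to-Portuguese translation via the transformers pipeline (repo ID is assumed).
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ca-pt")  # assumed repo ID
print(translator("El temps serà assolellat demà al matí.")[0]["translation_text"])
```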
Brief Details: A 6.9B-parameter language model from EleutherAI's Pythia suite, trained on the deduplicated Pile dataset for interpretability research and scientific analysis.
Brief Details: A cross-lingual Cross-Encoder model optimized for EN-DE passage re-ranking, achieving 72.43 NDCG@10 on TREC-DL19 EN-EN with 1600 docs/sec processing speed
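A re-ranking sketch with sentence-transformers' CrossEncoder; the model ID is a placeholder for the EN-DE cross-encoder described above.

```python
# Sketch: re-ranking German passages for an English query (repo ID is a placeholder).
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/msmarco-MiniLM-L6-en-de-v1")  # assumed repo ID
query = "How many people live in Berlin?"
passages = [
    "Berlin hat rund 3,7 Millionen Einwohner.",
    "Die Stadt ist bekannt für ihre Museen und Galerien.",
]
scores = model.predict([(query, p) for p in passages])
ranked = sorted(zip(scores, passages), reverse=True)  # highest relevance first
print(ranked)
```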