Brief-details: Vision foundation model from Microsoft for multi-task image understanding, including captioning, object detection, and OCR. Features 0.77B parameters and extensive fine-tuning support.
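A minimal usage sketch, assuming the entry refers to microsoft/Florence-2-large (the 0.77B checkpoint); in that family the task is selected by a prompt token such as <CAPTION>, <OD>, or <OCR>:

```python
from transformers import AutoProcessor, AutoModelForCausalLM
from PIL import Image

model_id = "microsoft/Florence-2-large"  # assumed repo id for the 0.77B checkpoint
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("photo.jpg")
# The task is chosen by the prompt token: <CAPTION>, <OD> (detection), <OCR>, ...
inputs = processor(text="<CAPTION>", images=image, return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```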
Brief-details: A fine-tuned DistilBERT model for tweet sentiment extraction, achieving 80.4% accuracy with a low carbon footprint (3.65 g CO2). Popular, with 46.8k downloads.
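A quick way to try a sentiment fine-tune like this is the transformers text-classification pipeline; the repo id below is a placeholder for the model described:

```python
from transformers import pipeline

# "user/distilbert-tweet-sentiment" is a placeholder; substitute the actual repo id
clf = pipeline("text-classification", model="user/distilbert-tweet-sentiment")
print(clf("just landed my dream job, best day ever!"))
# e.g. [{'label': 'positive', 'score': 0.98}] (label names depend on the fine-tune)
```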
Brief-details: NuExtract-1.5-smol-GGUF is a quantized 1.71B-parameter text generation model available in 2- to 8-bit GGUF formats, optimized for efficient local deployment.
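GGUF checkpoints like this run locally through llama.cpp bindings; a sketch using llama-cpp-python, with a hypothetical filename for the Q4_K_M quant:

```python
from llama_cpp import Llama

# Filename is hypothetical; pick the 2- to 8-bit quant that fits your RAM
llm = Llama(model_path="NuExtract-1.5-smol-Q4_K_M.gguf", n_ctx=4096)
out = llm('Extract {"name": ""} from: John Smith visited Paris in May.', max_tokens=64)
print(out["choices"][0]["text"])
```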
Brief-details: German BERT-based semantic search model trained on the MSMARCO dataset, achieving SOTA document retrieval performance with an NDCG@1 of 0.53.
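Encoders of this kind are typically driven through sentence-transformers; the repo id below is a placeholder for the German MSMARCO model described:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("user/german-msmarco-bert")  # placeholder repo id
query = model.encode("Wie beantrage ich einen Reisepass?", convert_to_tensor=True)
docs = model.encode(
    ["Ein Reisepass wird beim Bürgeramt beantragt.",
     "Das Wetter morgen wird sonnig."],
    convert_to_tensor=True,
)
print(util.cos_sim(query, docs))  # the relevant document should score higher
```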
Brief-details: 8B-parameter uncensored Llama 3.1 variant optimized for roleplay and creative writing, supporting 11 languages with a 128k context window.
Brief-details: Open-source 7B-parameter LLaMA reproduction trained on the RedPajama dataset (1T tokens), achieving performance comparable to the original LLaMA, with an Apache 2.0 license.
Brief-details: LLaVA-v1.6-Mistral-7B is a 7.57B-parameter multimodal chatbot combining vision and language capabilities, built on Mistral-7B-Instruct-v0.2.
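Transformers ships LlavaNext classes for this family; a sketch assuming the llava-hf/llava-v1.6-mistral-7b-hf conversion and Mistral's [INST] prompt format:

```python
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "llava-hf/llava-v1.6-mistral-7b-hf"  # assumed repo id
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "[INST] <image>\nWhat is shown in this chart? [/INST]"
inputs = processor(images=Image.open("chart.png"), text=prompt,
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(out[0], skip_special_tokens=True))
```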
Brief-details: Swin Transformer tiny model with 28.3M parameters for image classification; a hierarchical vision transformer using shifted-window attention, trained on ImageNet-1k.
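A classification sketch, assuming the standard microsoft/swin-tiny-patch4-window7-224 checkpoint:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "microsoft/swin-tiny-patch4-window7-224"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

inputs = processor(images=Image.open("cat.jpg"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # one of the 1,000 ImageNet-1k labels
```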
Brief-details: Wav2Vec2-based Persian speech emotion recognition model with 90% accuracy across 6 emotions. Built on the XLSR architecture and trained on the ShEMO dataset.
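A generic audio-classification sketch with a placeholder repo id; note that some speech-emotion checkpoints ship a custom classification head and need the loading code from their model card instead:

```python
import librosa
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "user/wav2vec2-xlsr-shemo-emotion"  # placeholder repo id
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

speech, _ = librosa.load("clip.wav", sr=16000)  # XLSR models expect 16 kHz mono audio
inputs = extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # one of the six emotion labels
```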
Brief-details: A fine-tuned Wav2Vec2-XLSR-53 model for Telugu speech recognition, achieving 44.98% WER on the OpenSLR dataset. Expects 16 kHz audio input.
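Transcription with a CTC model like this follows the standard Wav2Vec2 pattern; the repo id is a placeholder:

```python
import librosa
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "user/wav2vec2-xlsr-53-telugu"  # placeholder repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("telugu_clip.wav", sr=16000)  # the model expects 16 kHz input
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Greedy CTC decoding: take the argmax token at each frame, then collapse
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```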
Brief-details: Spanish instruction-tuned 7B-parameter LLM based on Falcon-7B, offered in multiple GPTQ quantization options and specialized for Spanish-language tasks.
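GPTQ repos load directly through transformers when the optimum and auto-gptq packages are installed; repo id and prompt format below are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "user/falcon-7b-spanish-GPTQ"  # placeholder repo id
tok = AutoTokenizer.from_pretrained(model_id)
# transformers reads the repo's GPTQ config and loads the quantized weights
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Instrucción: Explica la fotosíntesis en una frase.\nRespuesta:"  # assumed format
ids = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**ids, max_new_tokens=60)
print(tok.decode(out[0], skip_special_tokens=True))
```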
Brief-details: Llama-2-7B is Meta's openly released 7B-parameter LLM, optimized for text generation with strong performance on reasoning and knowledge tasks. Requires accepting Meta's license.
Brief-details: Compact NLI model based on MiniLMv2, trained on the SNLI and MultiNLI datasets for natural language inference and zero-shot classification tasks.
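NLI models like this plug straight into the zero-shot-classification pipeline; the repo id is a placeholder:

```python
from transformers import pipeline

# Placeholder repo id for the MiniLMv2 NLI model described above
clf = pipeline("zero-shot-classification", model="user/minilmv2-nli")
result = clf("The team shipped the new release overnight.",
             candidate_labels=["software", "sports", "cooking"])
print(result["labels"][0], round(result["scores"][0], 3))
```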
Brief-details: A multilingual translation model from Facebook supporting many-to-one translation from 50 source languages into a single target language, with 48K+ downloads.
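A translation sketch, assuming this is facebook/mbart-large-50-many-to-one-mmt; the source language is set on the tokenizer and the model always decodes into its single target language:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "facebook/mbart-large-50-many-to-one-mmt"  # assumed repo id
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="fr_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("Le chef de l'ONU affirme qu'il n'y a pas de solution militaire en Syrie.",
                   return_tensors="pt")
out = model.generate(**inputs)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])  # English output
```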
Brief-details: A 1.7B-parameter multilingual language model supporting 48 languages, trained by BigScience. Optimized for text generation and distributed in FP16 precision.
Brief-details: 13B-parameter instruction-tuned Code Llama model for code synthesis and understanding, built on Meta's Llama 2 architecture with chat and code-completion capabilities.
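A generation sketch, assuming the codellama/CodeLlama-13b-Instruct-hf conversion and the Llama-2-style [INST] wrapper used by the Instruct variants:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-13b-Instruct-hf"  # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16,
                                             device_map="auto")

prompt = "[INST] Write a Python function that checks whether a string is a palindrome. [/INST]"
ids = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**ids, max_new_tokens=200)
print(tok.decode(out[0], skip_special_tokens=True))
```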
Brief-details: Powerful 13B parameter LLM fine-tuned on 300k+ instructions, featuring long responses and low hallucination rates. Built on Llama-2 architecture.
Brief-details: A Japanese-focused NLP model fine-tuned for zero-shot classification and NLI tasks, based on mDeBERTa-v3-base and achieving a 67.42% F1 score.
Brief-details: 72B-parameter instruction-tuned LLM with a 131K context length and strong performance in reasoning, coding, and multilingual tasks, built on a modern Transformer architecture.
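The listed specs match Qwen2.5-72B-Instruct; a chat-template sketch under that assumption (a 72B model needs multiple GPUs or heavy quantization to run):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-72B-Instruct"  # assumed; the described specs match this model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto",
                                             device_map="auto")

messages = [{"role": "user", "content": "Summarize the CAP theorem in two sentences."}]
ids = tok.apply_chat_template(messages, add_generation_prompt=True,
                              return_tensors="pt").to(model.device)
out = model.generate(ids, max_new_tokens=128)
print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))  # only the reply
```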
Brief-details: A 3.09B-parameter GGUF-formatted instruction model available in 2- to 8-bit quantization levels, suitable for efficient local deployment and conversational AI tasks.
Brief-details: A text-to-image model merging HyperRealism 1.2 and DreamPhotoGASM, specialized in photorealistic imagery with enhanced eye detail and composition.
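Merges like this are usually run through diffusers; a sketch with a placeholder repo id, assuming a Stable Diffusion-compatible checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id for the HyperRealism/DreamPhotoGASM merge described above
pipe = StableDiffusionPipeline.from_pretrained("user/hyperrealism-merge",
                                               torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("studio portrait of an elderly fisherman, detailed eyes, soft rim light",
             num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("portrait.png")
```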