Brief-details: Russian BERT model fine-tuned for sentiment analysis, offering 3-class classification (neutral/positive/negative) with 178M parameters, trained on RuSentiment dataset.
Brief-details: A wav2vec2-large model specialized in speech transcription with punctuation, achieving 4.45% WER on LibriSpeech; well suited for preparing TTS training data.
Brief-details: A powerful English-to-Chinese translation model by Helsinki-NLP achieving 31.4 BLEU score. Supports multiple Chinese variants including Mandarin, Cantonese, and Classical Chinese.
Brief-details: 4-bit quantized version of Meta's Llama 3.1 8B instruction-tuned model, optimized for memory efficiency and speed while maintaining performance.
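As a rough sanity check on the memory savings, weight storage scales linearly with bits per parameter. The sketch below (a hypothetical helper, not part of any library) estimates weight-only footprint for an 8B-parameter model:

```python
def model_memory_gib(n_params: float, bits_per_param: int) -> float:
    """Approximate weight-only storage in GiB (ignores KV cache, activations, overhead)."""
    return n_params * bits_per_param / 8 / 2**30

fp16_gib = model_memory_gib(8e9, 16)  # ~14.9 GiB at 16-bit precision
int4_gib = model_memory_gib(8e9, 4)   # ~3.7 GiB after 4-bit quantization
print(f"fp16: {fp16_gib:.1f} GiB, 4-bit: {int4_gib:.1f} GiB")
```

Real deployments need extra headroom beyond this estimate for activations and the KV cache, but the 4x reduction in weight storage is what makes the 8B model fit on consumer GPUs.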
Brief-details: A powerful open-source text embedding model with 137M parameters, supporting 8192 token context length and outperforming OpenAI's text-embedding models on MTEB benchmarks.
Brief-details: English abusive speech detection model (238M params) fine-tuned from MuRIL; classifies text as normal vs. abusive, with academic backing and 638K+ downloads.
Brief-details: FLAN-T5-XXL is an 11.3B parameter language model fine-tuned on 1800+ tasks, offering superior performance in text generation, translation, and reasoning.
Brief-details: ECAPA-TDNN speaker verification model trained on VoxCeleb, achieving 0.80% EER. Supports embedding extraction and speaker verification using cosine similarity.
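The verification step reduces to thresholding the similarity between two embedding vectors. A minimal sketch of that decision rule (the threshold value here is illustrative, not the model's calibrated operating point):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_speaker(emb_a, emb_b, threshold=0.25):
    # Placeholder threshold; in practice it is tuned on a dev set to hit the target EER.
    return cosine_similarity(emb_a, emb_b) >= threshold
```

In a real pipeline the two embeddings would come from running the ECAPA-TDNN encoder on each utterance; only the comparison logic is shown here.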
Brief-details: Cross-encoder model trained on MS Marco for passage ranking, achieving 74.31 NDCG@10 score on TREC DL 19, optimized for query-passage matching tasks.
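For reference, NDCG@10 rewards placing highly relevant passages near the top of the ranking. A minimal sketch of the metric (the standard formula, not code from the model repo):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked relevance scores."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0
```

A perfect ranking scores 1.0; burying the relevant passage lowers the score, which is what the cross-encoder's reranking is evaluated on.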
Brief-details: Optimized ONNX version of Phi-3-mini-128k for accelerated inference, supporting multiple hardware platforms with up to 9x faster performance than PyTorch.
Brief-details: A powerful 7B parameter instruction-tuned LLM with impressive performance in language understanding, coding, and mathematics. Supports 131K context length.
Brief-details: EnCodec 24kHz is Meta AI's neural audio codec offering real-time compression with 23.3M parameters, achieving high-fidelity audio at various bandwidths (1.5-12 kbps).
Brief-details: Toxic-BERT - A 109M parameter model for detecting toxic content in text with multi-label classification capabilities and multilingual support.
Brief-details: Intent classification model built on DistilBERT, specializing in categorizing user queries into keyword search, semantic search, or direct QA for the Danswer project.
Brief-details: Llama 3.1 (8B params) is Meta's latest multilingual LLM optimized for dialogue, featuring a 128k context window and support for 8 languages.
Brief-details: Russian language encoder model with 139M params, based on RoPEBert architecture. Trained on CulturaX dataset with 2048 token context window. Optimized for feature extraction and embedding generation.
Brief-details: EfficientNet B0 variant trained with RandAugment on ImageNet-1k. Lightweight (5.3M params) CNN optimized for image classification with excellent efficiency-accuracy trade-off.
Brief-details: RealVisXL_V4.0 is a powerful SDXL-based text-to-image model focused on photorealism; supports both SFW and NSFW content, with 671K+ downloads and strong community adoption.
Brief-details: GLiNER fine-tuned NER model optimized for news content, supporting 8 entity types with accuracy improvements of up to 7.5% across benchmarks. Apache 2.0 licensed.
Brief-details: A specialized MuRIL-based model for detecting abusive speech in Tamil-English code-mixed text, featuring binary classification and academic validation.
Brief-details: LLaVA 1.5 7B - Advanced vision-language model with 7B parameters, built on LLaMA/Vicuna for multimodal tasks. Supports image-text conversations.