Brief-details: IndoBERT (110M params) fine-tuned for Named Entity Recognition in Indonesian, achieving an 83.8% F1 score and 95.3% accuracy.
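A minimal usage sketch with the transformers pipeline; the repo id below is an assumption, since the summary doesn't name the exact checkpoint:

```python
from transformers import pipeline

# "cahya/bert-base-indonesian-NER" is an assumed repo id; substitute the
# actual IndoBERT NER checkpoint.
ner = pipeline(
    "token-classification",
    model="cahya/bert-base-indonesian-NER",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Joko Widodo lahir di Surakarta, Jawa Tengah."))
```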
Brief-details: NVIDIA's multilingual speech model supporting ASR in 4 languages and translation, with 1B params and state-of-the-art performance on multiple benchmarks.
Brief-details: Compact 129M-parameter Mamba model optimized for text generation, built on selective state-space sequence modeling with optional CUDA-optimized kernels.
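A loading sketch assuming the `state-spaces/mamba-130m-hf` checkpoint and transformers >= 4.39 (which ships native Mamba support); without the optional `mamba-ssm`/`causal-conv1d` CUDA kernels installed, inference falls back to a slower pure-PyTorch path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id for the HF-format Mamba weights.
model_id = "state-spaces/mamba-130m-hf"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tok("Selective state-space models", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```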
Brief-details: RMBG-2.0 is a state-of-the-art background removal model with 221M parameters, trained on 15,000+ high-quality images for commercial-grade segmentation.
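A background-removal sketch loosely following the `trust_remote_code` loading pattern on the briaai/RMBG-2.0 model card; the 1024x1024 input size and last-output mask indexing are assumptions that may differ by model version:

```python
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageSegmentation

# trust_remote_code pulls in the model's custom architecture (assumption).
model = AutoModelForImageSegmentation.from_pretrained(
    "briaai/RMBG-2.0", trust_remote_code=True
).eval()

preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),  # assumed inference resolution
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
image = Image.open("input.jpg").convert("RGB")

with torch.no_grad():
    # The model returns multi-scale outputs; the last one is the final matte.
    mask = model(preprocess(image).unsqueeze(0))[-1].sigmoid().cpu()[0].squeeze()

image.putalpha(transforms.ToPILImage()(mask).resize(image.size))
image.save("output.png")  # subject with transparent background
```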
Brief-details: Quantized 7B parameter Mistral-based model optimized for instruction-following and conversations, featuring uncensored responses and Tree of Thought reasoning.
Brief-details: EfficientNetV2-S model pretrained on ImageNet-21k and fine-tuned on ImageNet-1k, offering strong classification performance at 21.6M params.
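A classification sketch via timm, assuming the `tf_efficientnetv2_s.in21k_ft_in1k` weight tag:

```python
import timm
import torch
from PIL import Image

# Assumed timm tag for the in21k-pretrained, in1k-fine-tuned S variant.
model = timm.create_model("tf_efficientnetv2_s.in21k_ft_in1k", pretrained=True).eval()

# Build the exact preprocessing the weights expect.
cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=False)

with torch.no_grad():
    logits = model(transform(Image.open("cat.jpg").convert("RGB")).unsqueeze(0))
print(logits.softmax(-1).topk(5))  # top-5 class probabilities and indices
```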
Brief-details: French NER model based on DistilCamemBERT, offering 2x faster inference than CamemBERT while maintaining high accuracy for entity recognition.
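A quick sketch with the transformers pipeline; the repo id is an assumed name for the DistilCamemBERT NER checkpoint:

```python
from transformers import pipeline

# Assumed repo id for the DistilCamemBERT NER checkpoint.
ner = pipeline(
    "token-classification",
    model="cmarkea/distilcamembert-base-ner",
    aggregation_strategy="simple",
)
print(ner("Emmanuel Macron s'est rendu à Marseille en novembre."))
```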
Brief-details: Chinese text embedding model built on a BERT architecture and optimized for similarity search and retrieval; part of the BGE (BAAI General Embedding) family.
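An embedding-similarity sketch with sentence-transformers, using `BAAI/bge-large-zh-v1.5` as an assumed member of the family:

```python
from sentence_transformers import SentenceTransformer, util

# "BAAI/bge-large-zh-v1.5" is one assumed member of the BGE family;
# smaller -base and -small variants trade accuracy for speed.
model = SentenceTransformer("BAAI/bge-large-zh-v1.5")

sentences = ["如何更换花呗绑定银行卡", "花呗更改绑定银行卡"]
emb = model.encode(sentences, normalize_embeddings=True)
print(util.cos_sim(emb[0], emb[1]))  # cosine similarity of the pair
```

For short-query retrieval, BGE additionally recommends prepending its instruction prefix to the query side; symmetric similarity as above needs no prefix.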
Brief-details: Russian conversational language model, a distilled version of RuBERT with 135.4M parameters, trained on social media and subtitle data and offering 50% faster inference.
Brief-details: SmolLM-135M is a compact 135M parameter language model trained on high-quality educational content, offering efficient text generation with minimal computational requirements.
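A text-generation sketch assuming the `HuggingFaceTB/SmolLM-135M` hub id:

```python
from transformers import pipeline

# Assumed hub id for the 135M checkpoint.
generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM-135M")
print(generator("Gravity is", max_new_tokens=40)[0]["generated_text"])
```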
Brief-details: Fast and efficient vision-language model for zero-shot image classification, offering 74.4% ImageNet accuracy while being 2.3x faster than comparable models.
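A zero-shot classification sketch; since the summary doesn't name the checkpoint, `openai/clip-vit-base-patch32` stands in here:

```python
from transformers import pipeline

# The summary doesn't name the checkpoint, so a standard CLIP model stands in.
clf = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")

preds = clf("street.jpg", candidate_labels=["bicycle", "car", "bus"])
print(preds[0])  # highest-scoring label with its confidence
```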
Brief-details: TripoSR is a fast feed-forward 3D model from Stability AI that converts single images to 3D objects; trained on the Objaverse dataset and released under an MIT license.
Brief-details: ESG-BERT, a 110M-parameter BERT model specialized for sustainable-investing text analysis; achieves a 90% F1 score on ESG classification tasks and can be further fine-tuned for downstream work.
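A classification sketch assuming `nbroad/ESG-BERT` as the hub mirror of the checkpoint:

```python
from transformers import pipeline

# Assumed hub id; verify against the actual ESG-BERT repo.
classifier = pipeline("text-classification", model="nbroad/ESG-BERT")
print(classifier(
    "The company cut scope-1 emissions by 40% and expanded board oversight of climate risk."
))
```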
Brief-details: Pythia-2.8B is a research-focused language model with 2.8B parameters, trained on The Pile dataset, designed for interpretability studies and scientific experiments.
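A loading sketch; Pythia publishes intermediate training checkpoints as git revisions (e.g. `step143000`), which is what makes it convenient for training-dynamics and interpretability studies:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# revision="step143000" selects the final checkpoint; earlier steps are
# available the same way for studying training dynamics.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-2.8b",
                                             revision="step143000")
tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b")

out = model.generate(**tok("The Pile is", return_tensors="pt"), max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```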
Brief-details: DeBERTa-based PII detection model capable of identifying 30+ types of personal information, including names, addresses, financial data & digital identifiers.
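A span-extraction sketch with the token-classification pipeline; the repo id below is a placeholder, not a real model:

```python
from transformers import pipeline

# Placeholder repo id, not a real model; substitute the actual checkpoint.
pii = pipeline(
    "token-classification",
    model="org/deberta-pii-detector",
    aggregation_strategy="simple",  # group tokens into complete PII spans
)

text = "Contact Jane Doe at jane.doe@example.com or +1-555-0100."
for ent in pii(text):
    print(ent["entity_group"], "->", text[ent["start"]:ent["end"]])
```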
Brief-details: A highly efficient text-to-image transformer that generates high-quality 512x512 images, trained in just 10.8% of SD1.5's training time while achieving comparable performance.
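A generation sketch with diffusers using a placeholder repo id; `DiffusionPipeline` resolves the correct pipeline class from the checkpoint's config, so the same code works across architectures:

```python
import torch
from diffusers import DiffusionPipeline

# Placeholder repo id; DiffusionPipeline picks the right pipeline class
# from the checkpoint's config.
pipe = DiffusionPipeline.from_pretrained("org/efficient-t2i-512",
                                         torch_dtype=torch.float16).to("cuda")

image = pipe("an astronaut riding a horse, watercolor",
             num_inference_steps=20).images[0]
image.save("astronaut.png")
```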
Brief-details: Swedish speech recognition model based on wav2vec2-large-xlsr-53, achieving 14.3% WER and 4.9% CER on Common Voice Swedish. Supports 16kHz audio input.
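A transcription sketch assuming the `KBLab/wav2vec2-large-xlsr-53-swedish` repo id; input audio should be 16 kHz mono:

```python
from transformers import pipeline

# Assumed repo id for the Swedish XLSR fine-tune.
asr = pipeline("automatic-speech-recognition",
               model="KBLab/wav2vec2-large-xlsr-53-swedish")
print(asr("sample_16khz.wav")["text"])
```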
Brief-details: Document analysis model suite for PDF processing, covering layout detection and table structure recognition, with SOTA performance on the DocLayNet dataset.
Brief-details: ConvNeXt V2 base model with 88.7M parameters, trained on ImageNet-22k and fine-tuned on ImageNet-1k, achieving 86.74% top-1 accuracy.
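A feature-extraction sketch via timm, assuming the `convnextv2_base.fcmae_ft_in22k_in1k` weight tag; `num_classes=0` drops the classifier head to yield pooled embeddings:

```python
import timm
import torch
from PIL import Image

# Assumed timm tag for the FCMAE-pretrained, in22k->in1k fine-tuned weights;
# num_classes=0 drops the classifier head and returns pooled embeddings.
model = timm.create_model("convnextv2_base.fcmae_ft_in22k_in1k",
                          pretrained=True, num_classes=0).eval()
cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=False)

with torch.no_grad():
    feats = model(transform(Image.open("dog.jpg").convert("RGB")).unsqueeze(0))
print(feats.shape)  # (1, 1024) for the base model
```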
Brief-details: Qwen2.5-14B-Instruct-AWQ is a 4-bit quantized LLM with 14.7B parameters, offering enhanced coding, math, and multilingual capabilities and a 128K context length.
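A chat sketch with transformers; AWQ checkpoints load through the standard `from_pretrained` path when the autoawq package is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-14B-Instruct-AWQ"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize AWQ quantization in two sentences."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```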
Brief-details: A 34B-parameter code generation model achieving 80.48% on HumanEval, surpassing many proprietary models and delivering state-of-the-art performance on coding tasks.