Brief Details: A Chinese speech recognition model based on WavLM-large, fine-tuned on Common Voice 7.0, optimized for 16kHz audio processing.
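A minimal sketch of running a WavLM-CTC checkpoint on 16kHz audio with transformers; the repo id below is a placeholder, not the actual model name.

```python
# Sketch: transcribing Chinese speech with a WavLM-large CTC fine-tune.
# "org/wavlm-large-zh-cv7" is a hypothetical repo id.
import torch
import librosa
from transformers import Wav2Vec2Processor, WavLMForCTC

processor = Wav2Vec2Processor.from_pretrained("org/wavlm-large-zh-cv7")
model = WavLMForCTC.from_pretrained("org/wavlm-large-zh-cv7")

# Resample to the 16 kHz rate the model expects.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
transcription = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(transcription)
```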
BRIEF-DETAILS: A specialized BERT model fine-tuned on Dutch and English legal documents, achieving strong performance in legal topic classification and multi-class legal document tasks.
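A hedged sketch of classifying a legal passage with such a fine-tuned BERT classifier; the model id is a placeholder and the label set depends on the fine-tuning data.

```python
# Sketch: legal topic classification with a fine-tuned BERT checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="org/legal-bert-nl-en")  # placeholder id
print(classifier("The tenant shall vacate the premises within thirty days of notice."))
# -> e.g. [{'label': 'housing_law', 'score': 0.97}]  (labels depend on the training data)
```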
Brief Details: A dual-encoder neural network that enables natural-language image search, trained on the MS-COCO dataset (30k images) and inspired by the CLIP approach.
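An illustrative sketch of how dual-encoder retrieval works: the text encoder embeds the query, which is compared against precomputed image embeddings by cosine similarity. The arrays below stand in for the model's actual outputs.

```python
# Sketch: dual-encoder image search via cosine similarity over embeddings.
import numpy as np

def search(query_embedding: np.ndarray, image_embeddings: np.ndarray, k: int = 5):
    # Normalize so a dot product equals cosine similarity.
    q = query_embedding / np.linalg.norm(query_embedding)
    imgs = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    scores = imgs @ q
    return np.argsort(-scores)[:k]  # indices of the top-k matching images
```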
Brief Details: A specialized 3D MRI segmentation model for brain tumor analysis, achieving 85.18% average Dice score across tumor subregions using BraTS 2018 dataset.
Brief-details: CodeT5-large-ntp-py is a 770M parameter encoder-decoder model by Salesforce, specialized in Python code generation and understanding, trained on CodeSearchNet and GCPY datasets.
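A sketch of Python code generation with this checkpoint, assuming the standard T5 seq2seq generation API; verify the repo id and generation settings against the model card.

```python
# Sketch: next-token Python generation with CodeT5-large-ntp-py.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-large-ntp-py")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-large-ntp-py")

prompt = "def fibonacci(n):"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```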
BRIEF DETAILS: E-commerce focused LLM based on Mistral-7B, specialized for retail tasks through instruction tuning on the ECInstruct dataset.
Brief-details: Res2Next50 is a powerful multi-scale backbone architecture with 24.7M parameters, optimized for ImageNet classification and feature extraction at 224x224 resolution.
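A sketch of using the Res2Next50 backbone through timm, either for ImageNet classification or as a feature extractor; "res2next50" is the timm model name.

```python
# Sketch: classification and feature extraction with timm's res2next50.
import timm
import torch

model = timm.create_model("res2next50", pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)        # 224x224 RGB input
logits = model(x)                       # ImageNet class logits
features = model.forward_features(x)    # backbone feature map for downstream tasks
```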
Brief-details: French sentence embedding model based on FlauBERT, achieving SOTA performance with 85.5% Pearson correlation on STS-B benchmark. 137M parameters.
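A hedged sketch of producing mean-pooled sentence embeddings from a FlauBERT-based encoder; the repo id is a placeholder, and the released model may instead ship as a sentence-transformers package with its own pooling config.

```python
# Sketch: mean-pooled French sentence embeddings from a FlauBERT-based model.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("org/flaubert-sts")  # placeholder id
model = AutoModel.from_pretrained("org/flaubert-sts")

sentences = ["Le chat dort sur le canapé.", "Un chat fait la sieste sur le sofa."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state             # (batch, seq, dim)
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)          # mean pooling over tokens
similarity = torch.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```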
BRIEF-DETAILS: InternLM2.5-7B-Chat-1M GGUF - Optimized 7B parameter chat model from Shanghai AI Lab, available in multiple quantization formats for efficient local deployment
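A sketch of local inference with one of the GGUF quantizations via llama-cpp-python; the exact .gguf filename depends on which quantization variant you download.

```python
# Sketch: local chat inference with a GGUF quantization via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="internlm2_5-7b-chat-1m-q4_k_m.gguf", n_ctx=8192)  # filename varies by quant
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the plot of Journey to the West."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```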
Brief-details: A 3B parameter Taiwan-focused instruction-tuned LLM based on Llama architecture, specialized in Traditional Chinese and English, with emphasis on Taiwanese cultural context and legal knowledge.
Brief Details: FluxDev-HyperSD-nf4 is a diffusers-based model from ModelsLab, designed for advanced image generation tasks. Implementation details are limited, but the model is available via Hugging Face.
Brief Details: YOLOv10l is a state-of-the-art real-time object detection model, offering improved accuracy and efficiency compared to previous YOLO versions
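A hedged sketch of running YOLOv10l through the ultralytics API, which added YOLOv10 support; the weights filename follows the usual convention and should be verified locally.

```python
# Sketch: real-time object detection with YOLOv10l via ultralytics.
from ultralytics import YOLO

model = YOLO("yolov10l.pt")            # conventional weights name; verify locally
results = model("street.jpg")          # run detection on an image
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, bounding box
```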
BRIEF-DETAILS: 4-bit quantized Mistral-7B model optimized for faster training and lower memory usage, reporting 2.2x faster fine-tuning and 62% less memory consumption
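A sketch of the underlying mechanism, loading a Mistral-7B checkpoint in 4-bit NF4 with bitsandbytes; the model id here is the base checkpoint, used as a stand-in for whichever quantized repo you pick.

```python
# Sketch: loading Mistral-7B in 4-bit NF4 for memory-efficient fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",       # stand-in base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```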
BRIEF DETAILS: Weighted/imatrix GGUF quantized version of Deepseeker-Kunou-Qwen2.5-14b with multiple compression options ranging from 3.7GB to 12.2GB, optimized for different performance/size tradeoffs.
Brief Details: SigLIP-based vision-language model with a shape-optimized architecture, trained on the WebLI dataset. Features 400M parameters and 448px resolution for zero-shot image classification and retrieval tasks.
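A sketch of zero-shot classification with a SigLIP checkpoint; unlike CLIP, SigLIP scores pass through a sigmoid rather than a softmax. The repo id below assumes the shape-optimized 448px checkpoint.

```python
# Sketch: zero-shot image classification with SigLIP (sigmoid scoring).
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModel

model = AutoModel.from_pretrained("google/siglip-so400m-patch14-448")
processor = AutoProcessor.from_pretrained("google/siglip-so400m-patch14-448")

image = Image.open("photo.jpg")
labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=labels, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits_per_image
probs = torch.sigmoid(logits)  # independent per-label probabilities
```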
BRIEF-DETAILS: BERT-based classifier for analyzing demographic bias and social perceptions in language, trained on 1.7K biased samples. Published in 2019.
Brief-details: Ilama-3.2-1B is a 1.3 billion parameter language model developed by ArthurZ, available on Hugging Face Hub. Implementation details and specific capabilities pending documentation.
Brief Details: Vision Transformer (ViT) large model trained on the LAION-400M dataset, compatible with both OpenCLIP and timm frameworks, suited to zero-shot image classification and image/text retrieval
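A sketch of loading a LAION-400M ViT-L/14 checkpoint with open_clip and scoring image/text similarity; the pretrained tag may differ by release, so check the available weights.

```python
# Sketch: image/text similarity with an OpenCLIP ViT-L/14 LAION-400M checkpoint.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-L-14", pretrained="laion400m_e32"   # tag may vary by release
)
tokenizer = open_clip.get_tokenizer("ViT-L-14")

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```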
BRIEF-DETAILS: 3B parameter instruction-tuned Llama model using QLoRA for efficient fine-tuning, with INT4 precision and EO8 optimization
Brief-details: Maya is an 8B parameter multilingual vision-language model supporting 8 languages, built on LLaVA framework with SigLIP vision encoding and cultural sensitivity focus.
Brief Details: A 7B parameter hybrid architecture model combining attention and convolution, offering performance competitive with Transformers and supporting a 32k context length.