Brief-details: Spanish legal-domain RoBERTa model trained on 8.9 GB of legal text. Achieves 98.7% F1 on POS tagging. Apache 2.0 licensed; supports masked language modeling.
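A masked-LM checkpoint like this is typically queried through the `fill-mask` pipeline; a minimal sketch, where the model id is a hypothetical placeholder rather than the actual checkpoint name:

```python
def top_mask_completions(text: str, model_id: str = "some-org/legal-es-roberta"):
    """Return the model's top candidate tokens for the <mask> slot in `text`.

    transformers is imported lazily so the function can be defined without the
    library installed; the model id above is a placeholder assumption.
    """
    from transformers import pipeline  # lazy import: heavy optional dependency
    fill = pipeline("fill-mask", model=model_id)
    return [pred["token_str"] for pred in fill(text)]
```

Called with Spanish legal text containing a `<mask>` token, this returns the model's ranked completions for that slot.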
Brief-details: A DialoGPT-medium-based conversational model fine-tuned on Harry Potter dialogue, enabling character-like interactions in the style of the wizarding world.
Brief-details: NbAiLab's nb-gpt-j-6B is a 6-billion-parameter language model based on the GPT-J architecture, adapted for Norwegian by the National Library of Norway's AI Lab.
Brief-details: SinBERT-large is a RoBERTa-based language model pre-trained on 15M Sinhala texts and optimized for Sinhala text-classification tasks.
Brief-details: MS-BERT is a BERT-based model pre-trained on 75,000 neurological examination notes from multiple sclerosis (MS) patients, specialized for MS clinical-text analysis.
Brief-details: Text-Moderation is a DeBERTa-v3-based model that detects eight categories of harmful content at a reported 75% accuracy, intended for content-filtering pipelines.
Brief-details: GLPN-NYU is a monocular depth-estimation model with a SegFormer backbone, fine-tuned on the NYUv2 dataset for accurate depth prediction from single images.
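Monocular depth models of this kind are commonly exposed through the transformers `depth-estimation` pipeline; a minimal sketch, assuming the checkpoint is published under the commonly used id `vinvino02/glpn-nyu`:

```python
def estimate_depth(image_path: str, model_id: str = "vinvino02/glpn-nyu"):
    """Predict a depth map from a single RGB image.

    transformers (with its vision extras) is imported lazily; the model id
    is assumed to be the public GLPN-NYU checkpoint name.
    """
    from transformers import pipeline  # lazy import: heavy optional dependency
    depth = pipeline("depth-estimation", model=model_id)
    result = depth(image_path)
    return result["depth"]  # PIL image of the predicted depth map
```

The pipeline result also carries the raw predicted tensor alongside the rendered depth image.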
Brief-details: A specialized LoRA for generating claymation-style images with the Flux base model, offering precise control over cartoon-like clay figures (network dimension 64, alpha 32).
Brief-details: A fine-tuned version of Qwen2-7B with improved benchmark performance, ChatML templating, and GGUF quantization support; strong scores on IFEval and MMLU-PRO.
Brief-details: A fine-tuned version of Qwen2-7B optimized for enhanced performance, featuring ChatML prompt format and GGUF quantized versions for efficient deployment.
Brief-details: A fine-tuned version of Qwen2-7B (7B parameters) with improved benchmark performance and a ChatML prompt format, achieving a 23.23% average score on key benchmarks.
Brief-details: Fine-tuned Qwen2-7B model optimized for improved performance, featuring ChatML prompt format and achieving 23.20 average score across benchmarks.
Brief-details: Granite-3.1-2B-Instruct: A 2.5B-parameter multilingual instruction-tuned LLM with a 128K context window, supporting 12 languages and optimized for long-context tasks.
Brief-details: Fine-tuned version of Qwen2-7B with ChatML support, GGUF quantization, and improved benchmark performance, available through Hugging Face.
Brief-details: A fine-tuned 7B parameter LLM based on Qwen2, featuring strong performance on benchmarks like IFEval (38.25) and BBH (30.96), using ChatML format.
Brief-details: Fine-tuned version of Qwen2-7B optimized for better benchmark performance, featuring a ChatML prompt template and GGUF quantization support, with strong evaluation metrics.
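The Qwen2-7B fine-tunes above all use the ChatML prompt format, which wraps each conversation turn in `<|im_start|>`/`<|im_end|>` markers; a minimal sketch of assembling such a prompt by hand:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt and open the assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")
```

In practice, a ChatML-aware tokenizer's `apply_chat_template` method produces this formatting automatically from a list of role/content messages.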
Brief-details: CoTracker3 is a transformer-based point-tracking model from Meta (Facebook) that can track arbitrary points across video frames, with improved efficiency and accuracy from pseudo-labelled training data.
Brief-details: A lightweight BERT-tiny model fine-tuned for SMS spam detection, reaching 98% validation accuracy; well suited to efficient text classification.
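A classifier like this plugs straight into the `text-classification` pipeline; a minimal sketch, where both the model id and the `spam` label name are assumptions about the checkpoint:

```python
def is_spam(message: str, model_id: str = "some-org/bert-tiny-sms-spam") -> bool:
    """Classify an SMS message, returning True when the top label is spam.

    transformers is imported lazily; the model id is a hypothetical
    placeholder and the 'spam' label string is assumed, not verified.
    """
    from transformers import pipeline  # lazy import: heavy optional dependency
    clf = pipeline("text-classification", model=model_id)
    top = clf(message)[0]  # highest-scoring label for this message
    return top["label"].lower() == "spam"
```

Because BERT-tiny has only a few million parameters, this check runs comfortably on CPU.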
Brief-details: OpenNiji is a Stable Diffusion model fine-tuned on Nijijourney images, featuring improved hand generation and anime-style artwork, released under the CreativeML OpenRAIL-M license.
Brief-details: PixArt-Sigma is a state-of-the-art text-to-image generation model developed by PixArt-alpha, designed for high-quality image creation with efficient processing.
Brief-details: A 10B parameter instruction-tuned LLM supporting 4 languages, 32K context, optimized for STEM/reasoning with state-of-the-art performance on technical tasks.