Brief-details: GBC10M-PromptGen-200M is a 200M parameter model for generating Graph-Based Captions (GBC) from text prompts, combining region captions and scene graphs.
Brief-details: A 7B-parameter Mamba-based instruction-tuned LLM with a 32K context length, achieving SOTA results on reasoning and STEM tasks. Built by TII.
Brief-details: NVIDIA's multi-headed classifier for analyzing prompts across 11 task types and 6 complexity dimensions, built on the DeBERTa-v3-base architecture.
Brief-details: T-lite-it-1.0 is a Qwen 2.5-based model with extensive Russian language capabilities, trained on 140B tokens and optimized for instruction-following tasks.
Brief-details: EDM music-generation model specialized in creating high-quality, key-locked, tempo-synced samples: supersaw chords, melodies, plucks & effects. Trained on 600k+ samples.
Brief-details: 125M parameter dense biencoder embedding model producing 768-dim vectors for English text, trained on open-source datasets with enterprise-friendly licensing. Strong BEIR benchmark performance.
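A dense biencoder like the one above is used by embedding query and documents with the same encoder and ranking by cosine similarity. A minimal sketch with random 768-dim vectors standing in for real model outputs (the actual encoder call would come from the model's own library):

```python
import numpy as np

# Hypothetical stand-ins: with the real biencoder, these would be the
# 768-dim embeddings of a query and a small document collection.
rng = np.random.default_rng(0)
docs = rng.standard_normal((4, 768))               # pretend document embeddings
query = docs[2] + 0.01 * rng.standard_normal(768)  # near-duplicate of doc 2

def cosine_scores(q: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of doc vectors."""
    q = q / np.linalg.norm(q)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    return d @ q

best = int(np.argmax(cosine_scores(query, docs)))  # index of the closest doc
```

Because the query was built as a small perturbation of document 2, retrieval returns index 2.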
Brief-details: A comprehensive set of GGUF quantizations for QwQ-32B-Preview: 27 variants ranging from 65 GB down to 10 GB, with varying quality-size tradeoffs.
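A quick way to read that size range is bits per weight. A back-of-envelope sketch, assuming ~32B parameters, decimal GB, and ignoring GGUF metadata overhead:

```python
# Approximate bits stored per weight implied by a GGUF file size.
# Assumes file_size_gb in decimal GB and n_params_b in billions of parameters;
# real files carry some metadata, so these are lower-bound estimates.

def bits_per_weight(file_size_gb: float, n_params_b: float = 32.0) -> float:
    return file_size_gb * 8 / n_params_b

largest = bits_per_weight(65)   # ~16.25 bits/weight: essentially full fp16
smallest = bits_per_weight(10)  # ~2.5 bits/weight: aggressive low-bit quant
```

So the 65 GB variant is roughly unquantized fp16, while the 10 GB variant stores about 2.5 bits per weight, which is where quality degradation typically becomes noticeable.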
Brief-details: Japanese text embedding model with 1.2B parameters, achieving SOTA on the JMTEB benchmark. Produces 1792-dimensional vectors with an 8192-token context, optimized for Japanese text analysis.
Brief-details: PaliGemma2 3B Mix 448 is a Google-developed vision-language model requiring license acceptance on HuggingFace; the 448 refers to its 448×448 pixel input resolution.
Brief-details: Vision-based AI model designed for ComfyUI Flux Redux workflows. 384-dimension output, specialized for image-processing and computer-vision tasks.
Brief-details: FLUX.1-Canny-dev-lora is a LoRA adapter for FLUX.1 that conditions image generation on Canny edge maps, developed by black-forest-labs for non-commercial use.
Brief-details: Advanced 38B-parameter multimodal LLM from the InternVL family, with strong vision-language capabilities driven by enhanced training strategies and improved data quality.
Brief-details: DMD2MOD is an adaptation of the DMD2 trajectory-consistency model tuned for multiple samplers and higher CFG settings, extracted as a rank-64 LoRA from the base model.
Brief-details: Fast anime-style image generation model based on Flux.1 Schnell, optimized for 4-8 steps, commercial-friendly under the Apache 2.0 license. Well suited to high-quality anime art production.
Brief-details: A specialized Flux-based LoRA trained for pencil-sketch generation, with 64 network dimensions and AdamW optimization. Well suited to black-and-white artistic renders.
Brief-details: Enterprise-grade multilingual embedding model with 568M parameters, optimized for retrieval tasks. Supports 128-byte compression and an 8192-token context window.
Brief-details: Mistral-Small-Instruct-2409 is a compact instruction-tuned language model from Mistral AI, designed for efficient inference and for commercial deployments where data privacy matters.
Brief-details: Meta's Llama 3.1 70B quantized to 4-bit for the MLX framework, enabling efficient large-scale inference with a reduced memory footprint.
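The memory saving from 4-bit quantization is easy to estimate from parameter count alone. A weights-only sketch (KV cache and activations add more in practice):

```python
# Back-of-envelope weight storage for a model at a given precision.
# n_params_b is billions of parameters; result is decimal GB, weights only.

def weight_gb(n_params_b: float, bits: int) -> float:
    return n_params_b * bits / 8

fp16_gb = weight_gb(70, 16)  # 140.0 GB: well beyond a single consumer GPU
q4_gb = weight_gb(70, 4)     # 35.0 GB: fits on high-memory Apple silicon
```

This 4x reduction is what makes a 70B model practical on a single machine under MLX.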
Brief-details: BERT large uncased model with 336M parameters, pretrained with whole-word masking and fine-tuned on the SQuAD dataset; achieves a 93.15 F1 score on question answering.
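The 93.15 figure is the standard SQuAD token-overlap F1 between the predicted and gold answer spans. A minimal sketch of that metric (the official evaluation script additionally lowercases and strips punctuation and articles before comparing):

```python
from collections import Counter

def squad_f1(prediction: str, ground_truth: str) -> float:
    """Token-overlap F1 between a predicted answer and a gold answer."""
    pred_tokens = prediction.split()
    gt_tokens = ground_truth.split()
    common = Counter(pred_tokens) & Counter(gt_tokens)  # multiset intersection
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `squad_f1("the black cat", "the cat")` scores 0.8: two of three predicted tokens are correct (precision 2/3) and both gold tokens are covered (recall 1.0).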
Brief-details: Specialized OCR model for converting mathematical equations and text from images into LaTeX format, enabling seamless digital math content creation.
Brief-details: Lambda is a utility from unslothai for environment monitoring and statistical logging, used to track and analyze runtime environment behavior.