Brief-details: A 938M parameter vision-language model specialized in converting table images to LaTeX/HTML/Markdown formats.
Brief-details: A fine-tuned Gemma 2 9B model specialized in converting natural-language queries to Cypher database queries, built by Neo4j using PEFT techniques.
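A minimal generation sketch for a text-to-Cypher model like this one, assuming a standard chat-template interface; the repo id and schema prompt below are placeholders, not taken from the entry above:

```python
# Hypothetical usage sketch: repo id and prompt format are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neo4j/text2cypher-gemma-2-9b"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{
    "role": "user",
    "content": "Schema: (Person)-[:ACTED_IN]->(Movie). "
               "Question: Which movies did Tom Hanks act in?",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```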
Brief-details: ONNX-converted StyleTTS2 model for CPU-based text-to-speech synthesis, derived from the LibriTTS base model; MIT licensed, with English language support.
Brief-details: DeBERTa-based financial sentiment classifier trained on four finance datasets, capable of 3-way classification (negative/neutral/positive); 184M params, MIT license
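A short sketch of 3-way sentiment scoring via the Hugging Face `pipeline` API; the repo id below is a placeholder for the classifier described above:

```python
# Sketch only: the repo id is a placeholder, not the actual model name.
from transformers import pipeline

clf = pipeline("text-classification",
               model="your-org/deberta-financial-sentiment")
print(clf("Quarterly revenue beat expectations, lifting shares 8%."))
# -> e.g. [{'label': 'positive', 'score': 0.97}]  (labels depend on the model)
```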
Brief-details: 8-bit MLX-optimized Qwen-based model supporting English/Russian text generation. 434M params, Apache 2.0 licensed, built for efficient inference.
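MLX-format checkpoints like this one are typically run on Apple Silicon through mlx-lm; a sketch under that assumption, with a placeholder repo id:

```python
# Assumes an MLX-format checkpoint; the repo id is a placeholder.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/qwen-8bit-placeholder")
text = generate(model, tokenizer,
                prompt="Translate to Russian: hello, world",
                max_tokens=64)
print(text)
```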
Brief-details: Specialized 1.5B parameter reward model built on Qwen2.5, focused on mathematical and code reasoning with process reward mechanisms for step-by-step evaluation.
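Process reward models are queried per solution step rather than once per final answer. Interfaces vary between releases, so the following is only a speculative sketch assuming a single-logit sequence-classification head and a placeholder repo id:

```python
# Speculative sketch: PRM interfaces differ between releases; this assumes a
# single-logit sequence-classification head and a placeholder repo id.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-org/qwen2.5-1.5b-prm"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "What is 12 * 7?"
steps = ["12 * 7 = 10 * 7 + 2 * 7", "= 70 + 14", "= 84"]
for i in range(1, len(steps) + 1):
    # Score the solution prefix that ends at step i.
    text = question + "\n" + "\n".join(steps[:i])
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        reward = model(**inputs).logits.squeeze().item()
    print(f"step {i}: reward = {reward:.3f}")
```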
Brief-details: 8B parameter GGUF-quantized transformer model with multiple quantization variants, optimized for conversational tasks; MIT license.
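GGUF quants are usually run through llama.cpp or its Python bindings; a sketch assuming llama-cpp-python and a locally downloaded file (the file name is a placeholder):

```python
# Assumes a locally downloaded GGUF quant; the file name is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./model-Q4_K_M.gguf", n_ctx=4096)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```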
Brief-details: LoRA model for CogVideoX that enables controlled camera movement in six directions for video generation, supporting smooth transitions and high-quality motion effects.
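With diffusers, a LoRA like this would typically be attached to the base CogVideoX pipeline via `load_lora_weights`; the LoRA repo id and camera-direction prompt below are assumptions:

```python
# Sketch: the LoRA repo id and the camera-direction prompt are assumptions.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("your-org/cogvideox-camera-lora")  # placeholder id

frames = pipe("A coastline at sunset, camera pans left",
              num_frames=49).frames[0]
export_to_video(frames, "pan_left.mp4", fps=8)
```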
Brief-details: A 7.62B parameter conversational AI model based on Qwen2.5, featuring UNA (Uniform Neural Alignment) and MGS optimization, with strong performance on various benchmarks. Available in multiple GGUF quantizations.
Brief-details: Compact 2.25B param multimodal AI model that processes image+text inputs. Features efficient image compression and 81 visual tokens per 384×384 patch. Apache 2.0 licensed.
Brief-details: Hunyuan-A52B-Instruct-3bit is a 60.7B parameter MLX-optimized language model, featuring 3-bit quantization for efficient deployment on Apple Silicon.
Brief-details: A 22B parameter GGUF-quantized language model merging Cydonia-22B-v1.3 and Magnum-v4, offering multiple quantization options from 8.27GB to 44.5GB.
Brief-details: A bilingual Persian-English sentence embedding model based on XLM-RoBERTa, generating 1024-dimensional vectors for semantic search and clustering tasks.
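A semantic-search sketch with sentence-transformers; the repo id is a placeholder for the bilingual embedder described above:

```python
# Sketch: the repo id is a placeholder for the bilingual embedder above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-org/persian-english-embedder")
docs = ["سلام دنیا", "Hello world", "Graph databases store relationships."]
doc_embs = model.encode(docs, convert_to_tensor=True)     # 1024-dim vectors
query_emb = model.encode("a greeting", convert_to_tensor=True)
print(util.cos_sim(query_emb, doc_embs))                  # 1 x 3 similarities
```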
Brief-details: AIMv2 large vision model with 309M parameters for image feature extraction. Outperforms CLIP/SigLIP on multimodal tasks and supports PyTorch/JAX.
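A feature-extraction sketch; the checkpoint name follows the public AIMv2 release naming, and `trust_remote_code` is assumed to be required for the custom model class:

```python
# Checkpoint name is an assumption based on the public AIMv2 release.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

ckpt = "apple/aimv2-large-patch14-224"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt, trust_remote_code=True)

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    feats = model(**inputs).last_hidden_state  # patch-level image features
print(feats.shape)
```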
Brief-details: A LoRA model trained on Illustrious-xl v0.1, using 4× RTX 3060 GPUs for 50.5 hours of training. Compatible with ILXL-based models.
Brief-details: German-focused 1.1B parameter language model trained on RedPajama V2, optimized for German text generation using TinyLlama architecture.
Brief-details: Bilingual 1.5B parameter instruction-following LLM optimized for Russian/English, based on Qwen2.5, trained on the GrandMaster-PRO-MAX dataset
Brief-details: Shuttle-3: A 72.7B parameter multilingual LLM based on Qwen2.5, fine-tuned to emulate Claude 3's writing style with extensive role-play capabilities.
Brief-details: A 12.7B parameter multimodal LLM optimized for detailed image captioning, built on Pixtral-12B-2409 with relaxed constraints for enhanced descriptions
Brief-details: VoiceRestore is a 300M+ parameter flow-matching transformer model designed to enhance degraded voice recordings, handling noise, reverb, and distortion; MIT license.
Brief-details: Uncensored version of Qwen2.5-7B-Instruct (7.62B params) modified via the abliteration technique. Enhanced performance on the IFEval and GPQA benchmarks.