Brief-details: Quantized versions of DeepSeek-Coder-V2-Instruct optimized for different hardware configurations, with file sizes ranging from 52GB to 250GB and corresponding quality-performance tradeoffs
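A minimal sketch of pulling one of these quants with huggingface_hub; the repo ID and filename below are illustrative placeholders (the summary does not name the exact repo), and the largest quants are typically split across several GGUF shards:

```python
from huggingface_hub import hf_hub_download

# Repo ID and filename are illustrative placeholders; pick the quant that fits your
# RAM/VRAM (smaller quants trade accuracy for footprint, larger ones approach FP16 quality).
gguf_path = hf_hub_download(
    repo_id="someone/DeepSeek-Coder-V2-Instruct-GGUF",
    filename="DeepSeek-Coder-V2-Instruct-Q4_K_M.gguf",
    local_dir="models",
)
print("Downloaded quant to:", gguf_path)
```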
BRIEF DETAILS: BERT-based model that evaluates the validity of question-answer pairs. Trained on major QA datasets such as SQuAD and RACE, with a focus on assessing the semantic relationship between question and answer.
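A sketch of how such a classifier would typically be called through transformers; the repo ID is a hypothetical placeholder and the question/answer-as-sentence-pair input convention is an assumption:

```python
from transformers import pipeline

# Hypothetical repo ID -- the summary does not name the exact checkpoint.
classifier = pipeline("text-classification", model="your-org/qa-pair-validity-bert")

# Assumed input convention: question and answer passed as a sentence pair.
result = classifier({"text": "Who wrote Hamlet?", "text_pair": "William Shakespeare"})
print(result)  # label names (e.g. valid/invalid) are model-specific
```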
BRIEF-DETAILS: Med42-70b is a large-scale medical AI model by M42 Health requiring approved access, built on a 70B parameter architecture for healthcare applications
Brief-details: C4AI Command-R v0.1 4-bit quantized model by CohereForAI - a reduced-precision release that lowers memory requirements for deployment while preserving most of the full-precision model's quality.
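A minimal loading sketch, assuming the 4-bit weights ship as a standard transformers checkpoint with an embedded quantization config (bitsandbytes installed); the repo ID is the commonly used one and may differ:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/c4ai-command-r-v01-4bit"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what 4-bit quantization trades off."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```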
BRIEF-DETAILS: Google's Gemma-2-9b is a 9B parameter language model; access requires Hugging Face authentication and acceptance of Google's license.
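A sketch of the gated-access flow, assuming the license has already been accepted on the model page and a read-scoped token exists; the repo ID follows Google's usual naming and may differ:

```python
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

login(token="hf_...")  # placeholder: a read-scoped token created after accepting the license

model_id = "google/gemma-2-9b"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```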
Brief-details: CAMEL is a specialized counseling AI model that implements CBT (Cognitive Behavioral Therapy) techniques through a structured dialogue system with automated planning
Brief Details: Advanced depth estimation model converted to safetensors format, optimized for ComfyUI integration. Enables accurate depth estimation from single images.
Brief Details: CoreML-optimized version of Mistral-7B-Instruct v0.3 for Apple devices, featuring FP16 & Int4 precision variants and requiring macOS Sequoia (15) Beta
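From Python, coremltools can at least load and inspect such a package before it is wired into a Swift/Core ML app; this is a minimal sketch and the .mlpackage path is a placeholder:

```python
import coremltools as ct

# Placeholder path to the downloaded .mlpackage; generation itself is normally run
# on-device through the Core ML / Swift APIs rather than from Python.
mlmodel = ct.models.MLModel("Mistral-7B-Instruct-v0.3-Int4.mlpackage")

spec = mlmodel.get_spec()
print("inputs: ", [inp.name for inp in spec.description.input])
print("outputs:", [out.name for out in spec.description.output])
```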
BRIEF DETAILS: Meta's 7B parameter LLM Compiler model, designed for compiler optimization and code-related tasks. Distributed on HuggingFace under Meta's license and privacy terms.
BRIEF DETAILS: 8B parameter uncensored LLM, quantized version of NeuralDaredevil. Strong performance in MMLU tests. Optimized for unaligned tasks and role-playing.
BRIEF-DETAILS: 8B parameter Llama 3-based model optimized for Japanese-English tasks (fine-tuned with an 8e-6 learning rate), achieving strong performance on the ELYZA100 and MT-Bench benchmarks
Brief Details: WhiteRabbitNeo 8B model specialized in cybersecurity, built on Llama-3. Offers both offensive and defensive security capabilities with comprehensive usage restrictions.
Brief-details: PaliGemma 3B Mix 224 is Google's 3B parameter vision-language model for multimodal tasks at 224×224 input resolution, requiring license acceptance on HuggingFace.
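A short usage sketch with transformers; the repo ID is assumed to follow Google's naming, license acceptance and authentication are still required, and the image URL is a placeholder:

```python
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"  # assumed repo ID, gated behind license acceptance
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)  # add dtype/device options as needed

image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)  # placeholder URL
inputs = processor(text="caption en", images=image, return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=30)
prompt_len = inputs["input_ids"].shape[-1]
print(processor.decode(generated[0][prompt_len:], skip_special_tokens=True))
```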
BRIEF-DETAILS: Fast neural vocoder for high-quality speech synthesis from mel spectrograms, compatible with HiFi-GAN features, trained on 800+ hours of Ukrainian audiobooks at 44.1kHz
BRIEF-DETAILS: Neo-7B: Open-source bilingual LLM with 7B parameters. Complete transparency in training data and process. Multiple variants available including base, SFT, and instruct versions.
BRIEF-DETAILS: 8B parameter coding-specialized LLaMA model with multiple GGUF quantization variants ranging from 2.01GB to 8.54GB, optimized for code generation and completion tasks.
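A minimal local-inference sketch with llama-cpp-python, assuming one of the GGUF quants has already been downloaded; the file path is a placeholder for whichever variant fits your memory budget:

```python
from llama_cpp import Llama

# Placeholder path: point it at the downloaded quant (roughly 2GB for the most
# aggressive variant, ~8.5GB for the highest-quality one).
llm = Llama(model_path="models/coder-8b-Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

out = llm(
    "Write a Python function that checks whether a string is a palindrome.\n",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```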
BRIEF DETAILS: Palmyra-Med-70B is a 70 billion parameter medical-focused language model by Writer, designed for healthcare applications and requiring license agreement acceptance.
Brief Details: OCR error detection model by datalab-to - identifies mistakes and inaccuracies in optical character recognition output.
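If the checkpoint is exposed as a standard text-classification head (an assumption, as is the repo ID below), usage through transformers would look roughly like this; check the model card for the actual interface:

```python
from transformers import pipeline

# Repo ID and task head are assumptions; consult the model card for the exact interface.
detector = pipeline("text-classification", model="datalab-to/ocr_error_detection")

lines = [
    "The quick brown fox jumps over the lazy dog.",
    "Th3 qvick brwon f0x jumqs ovr the lzay d0g.",  # typical OCR garbling
]
for line, pred in zip(lines, detector(lines)):
    print(f"{pred['label']:>10}  {pred['score']:.2f}  {line}")
```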
Brief Details: An optimized 2B parameter GPT-2 variant hosted on Hugging Face, focusing on improved performance and efficiency for NLP tasks
Brief Details: Motion LoRA module for AnimateDiff that adds zoom-in animation effects to generated videos. Part of guoyww's motion control suite.
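A usage sketch following the standard diffusers AnimateDiff + motion-LoRA pattern; the base model and motion-adapter repo IDs are the commonly used defaults and may need adjusting:

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Assumed defaults: SD 1.5 base checkpoint plus the v1.5 AnimateDiff motion adapter.
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

# Attach the zoom-in motion LoRA on top of the motion module.
pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", adapter_name="zoom-in")
pipe.enable_model_cpu_offload()

frames = pipe(
    prompt="a lighthouse on a cliff at sunset, cinematic",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
).frames[0]
export_to_gif(frames, "zoom_in.gif")
```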
BRIEF-DETAILS: Hungarian BERT model trained on Common Crawl and Wikipedia, achieving SOTA performance on NER (97.62%) and chunking tasks. Developed by SZTAKI-HLT.
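Since the published checkpoint is the base pretrained model (the NER and chunking scores come from fine-tuned downstream versions), a quick smoke test is a fill-mask call; the repo ID is the commonly cited one and may differ:

```python
from transformers import pipeline

# Assumed repo ID for the base huBERT checkpoint.
fill = pipeline("fill-mask", model="SZTAKI-HLT/hubert-base-cc")

# "The capital of Hungary is [MASK]." in Hungarian, with the city masked out.
for pred in fill("Magyarország fővárosa [MASK].")[:3]:
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```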