BRIEF-DETAILS: A 9B parameter LLM fine-tuned from Yi-1.5-9B, optimized for roleplay, with an 8k context window. Features bilingual support and enhanced creative writing capabilities.
Brief-details: QwQ-32B-abliterated-i1-GGUF provides imatrix GGUF quantizations of the abliterated QwQ-32B model, with file sizes ranging from 7.4GB to 27GB for efficient deployment.
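Choosing a quant is a size/quality trade-off, and you typically fetch just one file rather than the whole repo. The sketch below shows one way to do that with `huggingface_hub`; the repo id and filename are illustrative assumptions, not confirmed paths.

```python
# Minimal sketch: download a single GGUF quant file instead of the full repo.
# The repo id and filename below are assumptions for illustration only.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/QwQ-32B-abliterated-i1-GGUF",   # assumed repo id
    filename="QwQ-32B-abliterated.i1-Q4_K_M.gguf",        # assumed quant filename
)
print(f"Downloaded quant to: {gguf_path}")
```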
BRIEF-DETAILS: TxAgent-T1-Llama-3.1-8B is an AI agent for therapeutic reasoning that leverages 211 biomedical tools for drug interaction analysis and personalized treatment recommendations, achieving 92.1% accuracy in drug reasoning tasks.
Brief-details: Novel 3B-parameter vision-language model, built on the Qwen architecture, that estimates hand poses directly from 2D images, bypassing traditional pose-estimation pipelines.
Brief Details: Provence is a 430M-parameter context pruning model that optimizes retrieval-augmented generation by removing irrelevant sentences from retrieved passages, based on DeBERTa-v3.
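A minimal usage sketch, assuming the model is published with custom code that exposes a `process(question, context)` method for pruning; the repo id and that API are assumptions based on how the Provence card describes it, not verified here.

```python
# Sketch of context pruning with Provence; the repo id and the process()
# API are assumptions, not verified.
from transformers import AutoModel

provence = AutoModel.from_pretrained(
    "naver/provence-reranker-debertav3-v1",  # assumed repo id
    trust_remote_code=True,
)

question = "What is the capital of France?"
context = (
    "Paris is the capital and largest city of France. "
    "The Eiffel Tower was completed in 1889. "
    "France borders several European countries."
)

# process() is expected to return the pruned context with irrelevant
# sentences removed, ready to pass to the generator in a RAG pipeline.
pruned = provence.process(question, context)
print(pruned)
```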
BRIEF-DETAILS: DistilBERT model fine-tuned for emotion detection, achieving 93.05% accuracy and a 0.9309 F1 score. Built on the distilbert-base-uncased architecture.
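A minimal inference sketch using the transformers pipeline API; the checkpoint name is a placeholder, since the entry does not name the exact repo.

```python
# Sketch: emotion classification with a fine-tuned DistilBERT checkpoint.
# The model id below is a placeholder; substitute the actual fine-tuned repo.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/distilbert-base-uncased-emotion",  # placeholder repo id
    top_k=None,  # return scores for every emotion label
)

print(classifier("I can't believe how well this turned out!"))
```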
Brief-details: BERT-based model for classifying text according to the UN Sustainable Development Goals (SDGs). Achieves 90% accuracy, trained on the OSDG-CD dataset, and supports 16 SDGs.
Brief Details: Akan language grapheme-to-phoneme (G2P) conversion model by fiifinketia, designed for processing Akan text into phonetic representations.
Brief Details: BERT-based model fine-tuned for Named Entity Recognition (NER) tasks. Uncased version optimized for identifying and classifying named entities in text.
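A minimal sketch of token-classification inference with the transformers pipeline; the model id is a placeholder for the entry's unnamed checkpoint.

```python
# Sketch: Named Entity Recognition with a fine-tuned uncased BERT checkpoint.
# The model id is a placeholder; substitute the actual NER repo.
from transformers import pipeline

ner = pipeline(
    "ner",
    model="your-org/bert-base-uncased-ner",   # placeholder repo id
    aggregation_strategy="simple",            # merge sub-word tokens into whole entities
)

for entity in ner("hugging face was founded in new york city."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```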
Brief-details: Q8_0 GGUF quantized version of DeepSeek-R1-Distill-Qwen-32B model, optimized for efficient deployment while maintaining performance. Created by RolePlai team.
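A minimal local-inference sketch with llama-cpp-python, assuming the Q8_0 file has already been downloaded; the model path and generation settings are illustrative.

```python
# Sketch: run a Q8_0 GGUF quant locally with llama-cpp-python.
# The model path is illustrative; point it at the downloaded .gguf file.
from llama_cpp import Llama

llm = Llama(
    model_path="./DeepSeek-R1-Distill-Qwen-32B-Q8_0.gguf",  # assumed local path
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what distillation means for LLMs."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```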
Brief-details: CLIP model combining ViT-B/32 vision architecture with RoBERTa base text encoder, trained on LAION-2B dataset for zero-shot image classification and retrieval tasks.
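A zero-shot classification sketch using OpenCLIP's hf-hub loading; the repo id is assumed from the architecture and training data described above, so adjust it to the actual checkpoint name.

```python
# Sketch: zero-shot image classification with OpenCLIP.
# The hf-hub repo id is assumed; adjust to the actual checkpoint name.
import torch
import open_clip
from PIL import Image

repo = "hf-hub:laion/CLIP-ViT-B-32-roberta-base-laion2B-s12B-b32k"  # assumed repo id
model, _, preprocess = open_clip.create_model_and_transforms(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
text = tokenizer(labels)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings so the dot product is a cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```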
Brief Details: A compact 968M-parameter multimodal VLM optimized for edge devices, featuring 9x token reduction and improved hallucination control through DPO training.
BRIEF-DETAILS: Llama-2-70b is Meta's largest Llama 2 model at 70B parameters, offering state-of-the-art performance among open-source LLMs at release.
Brief Details: 32B-parameter reasoning model fine-tuned from Qwen2.5-32B-Instruct, achieving competitive performance on math and coding tasks; trained on 17K verified responses.
Brief-details: Falcon-180B-chat is TII's 180B-parameter chat model, fine-tuned from the Falcon-180B base for dialogue and instruction following.
BRIEF DETAILS: Experimental uncensored 27B-parameter Gemma model that uses a layerwise abliteration technique to reduce refusals while maintaining coherence.
Brief Details: NoobAI XL Inpainting ControlNet SFW - a specialized ControlNet for precise image inpainting and outpainting, trained on a ~90% SFW dataset, with masked-image control.
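A rough usage sketch with diffusers' SDXL ControlNet inpainting pipeline; both repo ids are placeholders, and treating the input image itself as the control image is an assumption about how this masked-image conditioning is wired, not a confirmed recipe.

```python
# Sketch: SDXL ControlNet inpainting with diffusers; repo ids are placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetInpaintPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "your-org/noobai-xl-controlnet-inpainting",  # placeholder ControlNet repo id
    torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
    "your-org/noobai-xl",                        # placeholder SDXL base repo id
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("scene.png").resize((1024, 1024))
mask = load_image("mask.png").resize((1024, 1024))   # white = area to repaint

result = pipe(
    prompt="a sunlit meadow, detailed, high quality",
    image=image,
    mask_image=mask,
    control_image=image,            # assumed masked-image conditioning input
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")
```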
Brief Details: Google's Gemma-3b model fine-tuned with horror elements and provided with optimized quantizations. Features enhanced instruction following and creative writing capabilities with a 128k context window.
BRIEF DETAILS: Gemma 3B model quantized using the "Neo Imatrix" dataset with maxed quantization settings. Features a 128k context window, enhanced instruction following, and improved creative capabilities.
Brief-Details: OLMo-2-0325-32B-Instruct-4bit is a 4-bit quantized version of the OLMo 32B instruction-tuned model, optimized for MLX framework deployment
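A minimal sketch of running the 4-bit quant on Apple Silicon with mlx-lm; the repo id is assumed, so point it at the actual MLX upload.

```python
# Sketch: run the 4-bit MLX quant with mlx-lm on Apple Silicon.
# The repo id is assumed; adjust to the actual MLX community upload.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/OLMo-2-0325-32B-Instruct-4bit")  # assumed repo id

messages = [{"role": "user", "content": "Summarize the idea behind instruction tuning in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=False)
print(text)
```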
Brief Details: State-of-the-art image super-resolution model supporting arbitrary-scale upscaling with anti-aliasing, based on the EDSR architecture and trained on the DIV2K dataset.