Brief-details: OpenThinker-7B-abliterated is an uncensored variant of OpenThinker-7B, modified with the abliteration technique to remove refusal behaviors. Deployable via Ollama.
Brief-details: LlamaThink-8B-instruct is an 8B parameter instruction-tuned LLM built on LLaMA-3, featuring dual-section outputs for structured thinking and answers. Apache 2.0 licensed.
Brief-details: A 15B parameter instruction-tuned LLM optimized for roleplay. Built on Phi-4 and trained on 1B+ tokens of literary content. Features ChatML formatting and GGUF/EXL2 quantization.
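The ChatML formatting mentioned above wraps each conversational turn in `<|im_start|>`/`<|im_end|>` delimiters. A minimal sketch of that rendering, with an illustrative (assumed) system prompt:

```python
# Minimal sketch of the ChatML turn format. The <|im_start|>/<|im_end|>
# special tokens are the standard ChatML delimiters; the example system
# prompt is an illustrative assumption, not from the model card.

def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a roleplay assistant."},
    {"role": "user", "content": "Describe the tavern."},
])
```

The trailing `<|im_start|>assistant` line is what signals the model to begin generating its reply.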
Brief-details: DeepHermes-3-Llama-3-8B is a quantized LLM focused on deep thinking and systematic reasoning, featuring extensive chain-of-thought capabilities.
Brief-details: Multi-speaker text-to-speech model based on the LLaMA architecture, supporting Chinese, English, Japanese, and Korean voices from Genshin Impact; developed by HKUSTAudio.
Brief-details: Efficient zero-shot classifier with ModernBERT-base backbone, optimized for sequence classification tasks like topic detection and sentiment analysis. 151M parameters.
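Zero-shot classifiers of this kind typically follow the NLI-based recipe: each candidate label is slotted into a hypothesis template, the model scores entailment for each hypothesis, and a softmax over those scores picks the label. A sketch of that mechanism, using made-up entailment logits in place of real model output:

```python
import math

# Sketch of the NLI-style zero-shot classification recipe. The entailment
# logits below are made-up stand-ins for what the model would produce;
# the hypothesis template is a common convention, assumed here.

def zero_shot(labels, entailment_logits, template="This text is about {}."):
    hypotheses = [template.format(lbl) for lbl in labels]
    exps = [math.exp(z) for z in entailment_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(labels, probs), key=lambda p: -p[1])
    return hypotheses, ranked

hypotheses, ranked = zero_shot(["sports", "politics", "cooking"],
                               [2.1, -0.3, 0.4])
```

Because the labels are supplied at inference time, no fine-tuning is needed to add new categories.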
Brief-details: Multimodal multilingual embedding model (11B params) combining vision-language capabilities with SOTA performance on the MMEB benchmark.
Brief-details: Quantized 8B parameter Llama model from Allen AI, focused on knowledge sharing and accessibility through GGUF-format compression.
Brief-details: A quantized GGUF version of the Quran Tafsir GPT-2 model, offering multiple compression variants from Q2 to Q8, with file sizes ranging from 0.2 GB to 0.4 GB.
Brief-details: A 32B parameter GGUF-quantized language model built on DeepSeek-R1 and Qwen2.5, optimized for reasoning capabilities and Japanese language support
Brief-details: HumanOmni-7B is a 7 billion parameter language model focused on human-like interactions, built by StarJiaxing and available on HuggingFace.
Brief-details: Advanced character card environment for SillyTavern with dynamic scenarios, customizable attributes, and automated features for enhanced roleplay experiences.
Brief-details: A multilingual MoE (Mixture of Experts) text embedding model developed by Nomic AI, focusing on unsupervised contrastive pretraining for text embeddings.
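Embedding models like this one are typically consumed by comparing output vectors with cosine similarity. A toy illustration with made-up three-dimensional vectors standing in for real model output:

```python
import math

# Toy illustration of how text-embedding outputs are compared: cosine
# similarity between vectors. The vectors here are made-up stand-ins for
# the high-dimensional embeddings a real model would return.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.9, 0.2]
doc_similar = [0.15, 0.85, 0.25]
doc_unrelated = [0.9, -0.1, 0.3]
```

In retrieval settings, documents are ranked by their cosine similarity to the query embedding.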
Brief-details: A quantized version of JSL-MedQwen-14b offering multiple GGUF variants optimized for different size/performance tradeoffs, ranging from 3.7 GB to 12.2 GB.
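The size range across GGUF variants follows roughly from bits-per-weight (bpw): lower-bit quants trade quality for smaller files. A back-of-envelope sketch, where the bpw figures are approximate community rules of thumb rather than exact llama.cpp numbers:

```python
# Rough sketch of why GGUF quant variants span such a size range: file
# size scales with bits-per-weight. The bpw values below are approximate
# rules of thumb, not exact llama.cpp figures.

APPROX_BPW = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q6_K": 6.6, "Q8_0": 8.5}

def est_size_gb(n_params_billion, quant):
    bits = n_params_billion * 1e9 * APPROX_BPW[quant]
    return bits / 8 / 1e9  # bits -> bytes -> GB

# Estimated sizes for a 14B-parameter model across quant levels.
sizes = {q: est_size_gb(14, q) for q in APPROX_BPW}
```

Actual files also carry metadata and keep some tensors at higher precision, so real sizes deviate somewhat from this estimate.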
Brief-details: Web content format classifier based on the GTE-base model with 140M parameters, categorizing text into 24 formats without relying on URLs. Fine-tuned on Llama annotations.
Brief-details: A 7B parameter math reasoning model achieving 94.0% pass@1 on MATH-500, trained with the Outcome REwArd-based reinforcement Learning (OREAL) framework for enhanced mathematical problem-solving.
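The pass@1 figure above comes from the pass@k family of metrics. The standard unbiased estimator, popularized by the Codex paper, is 1 - C(n-c, k)/C(n, k) for n sampled solutions per problem of which c pass. A small sketch:

```python
from math import comb

# Standard unbiased pass@k estimator: n = samples drawn per problem,
# c = samples that pass, k = budget. With k=1 this reduces to c/n.

def pass_at_k(n, c, k):
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 4 of 10 samples correct -> pass@1 of 0.4
p = pass_at_k(10, 4, 1)
```

Per-benchmark pass@1 is then this quantity averaged over all problems in the set (here, the 500 problems of MATH-500).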
Brief-details: Velvet-2B is a 2-billion parameter language model by Almawave, designed with privacy considerations for processing personal data.
Brief-details: Ovis2-16B is a 16B parameter multimodal LLM optimized for visual-text alignment, featuring enhanced reasoning, video processing, and multilingual OCR capabilities.
Brief-details: GGUF-quantized version of the MN-12B-FoxFrame-Miyuri model with multiple compression variants (Q2-Q8), optimized for different size/quality tradeoffs.
Brief-details: A 22B parameter Japanese LLM focused on safe responses, trained with the Self-Augmented DPO technique for improved alignment and toxicity handling.
Brief-details: TimeSformer base model fine-tuned on the Kinetics-600 dataset for video classification, utilizing space-time attention mechanisms for advanced video understanding.
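TimeSformer's key idea is divided space-time attention: instead of every video token attending to all other tokens jointly, each token attends first over time, then over space. A back-of-envelope sketch of the resulting cost reduction, with illustrative patch/frame counts:

```python
# Back-of-envelope comparison of joint vs. divided space-time attention.
# S = spatial patches per frame, T = frames. Joint attention compares all
# S*T tokens pairwise; divided attention does a temporal pass (over T
# tokens) then a spatial pass (over S tokens) for each token.

def attn_pairs(S, T):
    joint = (S * T) ** 2        # (S*T) tokens, each attending to S*T
    divided = S * T * (T + S)   # each token: T temporal + S spatial
    return joint, divided

# Illustrative configuration: 14x14 = 196 patches per frame, 8 frames.
joint, divided = attn_pairs(S=196, T=8)
```

With these example numbers, divided attention computes roughly an order of magnitude fewer attention pairs than joint attention, which is what makes longer clips tractable.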