Brief-details: Yi-34B-Chat-4bits is a 4-bit quantized version of Yi-34B-Chat that runs in only ~20GB of VRAM while preserving the base model's strong bilingual (English/Chinese) capabilities.
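The ~20GB figure follows from simple back-of-envelope arithmetic on the 34.4B parameter count; a quick sketch (the overhead beyond raw weights, e.g. KV cache and activations, is an assumption, not a measured value):

```python
# Back-of-envelope VRAM estimate for a 4-bit quantized model.
# The parameter count (34.4B) and ~20GB figure come from the entry above;
# anything beyond weights-only memory (KV cache, activations) is extra.

def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Memory for the weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

fp16 = weight_memory_gb(34.4e9, 16)  # ~68.8 GB -- needs multiple GPUs
int4 = weight_memory_gb(34.4e9, 4)   # ~17.2 GB -- fits a single 24GB card

print(f"fp16: {fp16:.1f} GB, int4: {int4:.1f} GB")
```

The gap between ~17.2GB of weights and the quoted ~20GB leaves room for the KV cache and runtime buffers.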
Brief Details: TowerInstruct-7B-v0.1: a 6.74B-parameter multilingual LLM optimized for translation-related tasks across 10 languages, fine-tuned from TowerBase on a diverse mix of translation data.
Brief Details: OrionStar-Yi-34B-Chat is a 34.4B-parameter bilingual chat model fine-tuned on a corpus of 150K+ high-quality examples, achieving strong scores on MMLU (78.32) and C-Eval (77.71).
Brief Details: Dragon-Yi-6B-v0: a RAG-instruct-trained 6B-parameter model optimized for business/legal Q&A, reporting 99.5% benchmark accuracy with minimal hallucination.
Brief Details: A 2.7B-parameter LLaMA2 variant created by pruning followed by continued pre-training, achieving strong performance with just 50B training tokens and optimized for efficient deployment.
Brief-details: Southeast Asian-focused LLM supporting 10 languages, fine-tuned from Llama-2 with enhanced cultural adaptation and superior performance in non-Latin scripts.
Brief Details: A 13B parameter Self-RAG LLaMA2 model that combines text generation with self-reflection capabilities for adaptive retrieval and output criticism.
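Self-RAG's self-reflection works by emitting special reflection tokens inline with the generated text, which a caller then parses out. A minimal stdlib sketch of that parsing step; the exact token strings ([Retrieval], [Relevant], [Utility:N]) follow the Self-RAG paper's scheme and should be verified against the model's tokenizer:

```python
import re

# Sketch: separating Self-RAG reflection tokens from the answer text.
# Token names are assumptions based on the Self-RAG scheme, not verified
# against this specific checkpoint.
REFLECTION = re.compile(
    r"\[(Retrieval|No Retrieval|Relevant|Irrelevant|Utility:\d)\]"
)

def split_reflection(text: str):
    """Return (answer_text, reflection_tokens) for one model output."""
    tokens = REFLECTION.findall(text)
    answer = REFLECTION.sub("", text).strip()
    return answer, tokens

out = "[Retrieval]Paris is the capital of France.[Relevant][Utility:5]"
answer, tokens = split_reflection(out)
print(answer)   # Paris is the capital of France.
print(tokens)   # ['Retrieval', 'Relevant', 'Utility:5']
```

In a full pipeline, a [Retrieval] token would trigger a retriever call before generation continues, and [Utility:N] scores can rank candidate outputs.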
Brief-details: SQLCoder-7B is a Mistral-based LLM specialized in SQL query generation, outperforming GPT-3.5-turbo with 71% accuracy on novel datasets.
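Text-to-SQL models like this one are typically prompted with the question plus the database schema as DDL. The section headers below are an illustrative template, not necessarily the exact format this checkpoint was trained on; check the model card before use:

```python
# Illustrative text-to-SQL prompt assembly for a model like SQLCoder-7B.
# The "### ..." headers are an assumed template -- verify against the
# model card's documented prompt format.

def build_sql_prompt(question: str, schema: str) -> str:
    return (
        "### Task\n"
        f"Generate a SQL query to answer the following question: {question}\n\n"
        "### Database Schema\n"
        f"{schema}\n\n"
        "### SQL\n"
    )

schema = "CREATE TABLE orders (id INT, amount DECIMAL, created DATE);"
prompt = build_sql_prompt("What is the total order amount?", schema)
print(prompt)
```

Ending the prompt at the "### SQL" header lets the model complete with the query itself.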
Brief-details: SDXL LoRA model for creating papercut-style artwork, trained on papercut images. Uses stabilityai/stable-diffusion-xl-base-1.0 as base model. Created by TheLastBen.
BRIEF-DETAILS: Aguila-7B: A trilingual (Catalan, Spanish, English) language model with 6.85B parameters, based on Falcon-7B. Trained on 26B tokens for enhanced multilingual capabilities.
Brief Details: A 4-bit quantized version of Llama2-13B optimized for Chinese language tasks, featuring enhanced Chinese dialogue capabilities through LoRA fine-tuning and transformers integration.
Brief-details: WizardLM-13B-Uncensored-GGML is a GGML-formatted variant of WizardLM 13B, optimized for CPU+GPU inference with multiple quantization options from 2-bit to 8-bit.
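The practical difference between those quantization levels is file size (and thus RAM footprint). A weights-only approximation for a 13B model; real GGML files mix tensor types and store scales and metadata, so actual sizes run somewhat larger:

```python
# Rough file-size estimates for the 2-bit..8-bit quantization levels
# mentioned above. Weights-only approximation; real GGML files are
# somewhat larger due to quantization scales and metadata.

def approx_size_gb(n_params: float, bits: float) -> float:
    return n_params * bits / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8):
    print(f"q{bits}: ~{approx_size_gb(13e9, bits):.1f} GB")
```

This is why q4 variants (~6.5GB of weights) are a common sweet spot for 13B models on consumer hardware.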
Brief Details: PMC_LLAMA_7B: a 7B-parameter medical LLaMA model fine-tuned on PubMed Central (PMC) papers, offering enhanced medical text generation capabilities.
Brief Details: OPT-30B-Erebus is a specialized text generation model based on OPT-30B, fine-tuned on adult content. NOT suitable for minors. 1.5K+ downloads.
BRIEF-DETAILS: Fine-tuned Stable Diffusion v1.5 model specialized in artistic landscapes inspired by Sin Jong Hun's work. Uses "sjh style" token for distinctive minimalist backgrounds.
Brief Details: Text-to-image diffusion model trained on public-domain/CC0 images, emphasizing ethically sourced training data; visual quality is currently limited but improving as the dataset grows.
Brief-details: Large-scale 70B reward model by NVIDIA for evaluating LLM responses, achieving top performance on alignment benchmarks with 94.1% overall accuracy.
BRIEF-DETAILS: Artistic style model based on Kuvshinov's distinctive anime/manga artwork, trained via Textual Inversion for Stable Diffusion. MIT licensed, community-validated with 60 likes.
Brief-details: BERT-based punctuation restoration model achieving 90% F1 score, capable of restoring 8 punctuation marks and capitalization in English text, ideal for ASR outputs.
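The real model predicts punctuation and casing jointly with BERT; a stdlib toy illustrating just the output-normalization stage (re-capitalizing sentence starts once punctuation is in place), to show the kind of transformation applied to raw ASR text:

```python
import re

# Toy post-processing in the spirit of punctuation restoration: given
# text with punctuation already inserted, capitalize sentence starts.
# The actual model does this jointly with a BERT tagger; this sketch
# only illustrates the final normalization step.

def capitalize_sentences(text: str) -> str:
    # Capitalize the first letter of the text and any letter that
    # follows a sentence-ending mark (., ?, !) plus optional spaces.
    def fix(match: re.Match) -> str:
        return match.group(0).upper()
    return re.sub(r"(?:^|(?<=[.?!])\s*)([a-z])", fix, text)

print(capitalize_sentences("hello there. how are you? fine thanks."))
# Hello there. How are you? Fine thanks.
```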
Brief-details: A 7.62B parameter uncensored language model based on Qwen2-7B, featuring 128k context window, instruction-following, coding, and function-calling capabilities.
Brief-details: A 7B-parameter multilingual LLM optimized for German, French, and Spanish, featuring enhanced token efficiency and specialization in automotive/engineering domains.