Brief-details: Text-to-image diffusion model specializing in 2D illustration styles, inspired by Hollie Mengert's artwork. Licensed under CreativeML OpenRAIL-M.
Brief-details: Biomedical NER model based on DistilBERT, trained on the Maccrobat dataset. Recognizes 107 medical entity types with 66.4M parameters. Apache 2.0 licensed.
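A usage sketch for an entry like this with the transformers pipeline API; the checkpoint id and the score threshold are assumptions, not taken from the card:

```python
def filter_entities(entities, min_score=0.5):
    """Drop entity spans whose confidence is below the threshold (pure helper).

    Expects dicts like those returned by a token-classification pipeline,
    each carrying at least a "score" key.
    """
    return [e for e in entities if e["score"] >= min_score]

def extract_medical_entities(text, model_id="d4data/biomedical-ner-all"):
    # model_id is an assumed checkpoint name -- substitute the card's repo id.
    # Deferred import keeps filter_entities usable without transformers installed.
    from transformers import pipeline
    # aggregation_strategy="simple" merges subword pieces into whole entity spans.
    ner = pipeline("token-classification", model=model_id,
                   aggregation_strategy="simple")
    return filter_entities(ner(text))
```

The threshold filter is worth keeping separate: clinical NER output is often post-processed, and a pure function is easy to unit-test without loading the model.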
Brief-details: High-quality image-to-video synthesis model from Ali-vilab that converts still images into fluid video sequences using cascaded diffusion models, supporting 1280x720 resolution.
Brief-details: Chinese-optimized 7B parameter instruction-following LLM based on Llama-2, supporting both Chinese and English with a 4K context window expandable to 18K+.
Brief-details: A specialized anime-style text-to-image model built on Stable Diffusion, optimized for high-quality anime illustrations with enhanced attention to details like eyes and hands.
Brief-details: GGML quantized version of Meta's Llama-2-70B-Chat model offering efficient CPU/GPU inference with various quantization options from 2-bit to 8-bit precision.
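The 2-bit to 8-bit options trade accuracy for memory, and a rough file-size estimate helps pick one for a given machine. A back-of-envelope sketch; the ~10% overhead factor for quantization scales and metadata is an assumption, not a GGML specification:

```python
def quantized_size_gb(n_params, bits_per_weight, overhead=1.1):
    """Rough on-disk size of a quantized model in decimal gigabytes.

    overhead is an assumed fudge factor for the per-block scales and
    metadata that quantized formats store alongside packed weights.
    """
    return n_params * bits_per_weight / 8 / 1e9 * overhead
```

For a 70B model this puts 4-bit near 38-39 GB and 8-bit near 77 GB, which matches the intuition that only the lower-bit variants fit on a single consumer GPU plus CPU offload.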
Brief-details: 8B parameter Chinese-English LLM based on Meta-Llama-3, optimized for Chinese chat with GGUF 8-bit quantization. Features enhanced roleplay, function calling, and math capabilities.
Brief-details: An experimental AI model that de-distills guidance from FLUX.1-dev, implementing true classifier-free guidance through a reversed distillation process.
Brief-details: T5-based grammar correction model trained on JFLEG dataset. Fixes grammatical errors in text using beam search. Popular with 7K+ downloads.
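T5 models are fine-tuned with a task prefix, so correction is "prefix the input, then decode with beam search". A minimal sketch; the `grammar: ` prefix and the checkpoint id are assumptions based on common JFLEG fine-tunes, so check the actual model card:

```python
def make_grammar_prompt(text, prefix="grammar: "):
    """Prepend the task prefix the T5 checkpoint was fine-tuned with.

    The "grammar: " prefix is an assumption; T5-style models only correct
    reliably when given the exact prefix seen during fine-tuning.
    """
    return prefix + text.strip()

def correct_grammar(text, model_id="vennify/t5-base-grammar-correction",
                    num_beams=5):
    # model_id is an assumed checkpoint name; deferred import keeps the
    # prompt helper above usable without transformers installed.
    from transformers import T5ForConditionalGeneration, T5Tokenizer
    tok = T5Tokenizer.from_pretrained(model_id)
    model = T5ForConditionalGeneration.from_pretrained(model_id)
    ids = tok(make_grammar_prompt(text), return_tensors="pt").input_ids
    # Beam search, as the card notes, usually beats greedy decoding here.
    out = model.generate(ids, num_beams=num_beams, max_new_tokens=128)
    return tok.decode(out[0], skip_special_tokens=True)
```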
Brief-details: A powerful 34B parameter LLM optimized for 200K context length, featuring multi-turn conversations and advanced reasoning capabilities, built on the Yi-34B base model.
Brief-details: 4-bit quantized 30B parameter LLM based on Alpaca, optimized for both GPU (GPTQ) and CPU (GGML) inference with strong performance benchmarks.
Brief-details: A 10.7B parameter LLaMA-based language model optimized for text generation, supporting Alpaca/Vicuna prompt formats with FP16 precision.
Brief-details: A 7B parameter conversational LLM fine-tuned from Meta's LLaMA, optimized for dialogue generation with persona-based interactions.
Brief-details: OpenCoder-8B-Instruct, a 7.77B parameter code LLM supporting English and Chinese, trained on 2.5T tokens with strong performance in code generation and understanding.
Brief-details: Vicuna-7b-delta-v0: a LLaMA variant fine-tuned on ShareGPT conversations, distributed as delta weights that must be applied to the base LLaMA weights.
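Delta checkpoints store only the difference from the base model, so reconstruction is elementwise addition over the state dict. A minimal sketch with plain mappings; a real script (e.g. FastChat's apply-delta tooling) streams torch state dicts instead, but the arithmetic is the same:

```python
def apply_delta(base_state, delta_state):
    """Reconstruct fine-tuned weights as finetuned = base + delta.

    Works on any mapping of parameter name -> addable value (floats here,
    tensors in practice). Raises if the delta references parameters the
    base checkpoint does not have.
    """
    missing = set(delta_state) - set(base_state)
    if missing:
        raise KeyError(f"delta has keys absent from base: {sorted(missing)}")
    return {k: base_state[k] + delta_state[k] for k in delta_state}
```

Distributing deltas rather than full weights was how early Vicuna releases complied with the original LLaMA license: you could only reconstruct the model if you already had legitimate base weights.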
Brief-details: SmolLM-1.7B: State-of-the-art 1.7B parameter LLM trained on Cosmo-Corpus, optimized for efficiency with strong reasoning capabilities.
Brief-details: Artistic text-to-image model leveraging StableDiffusionPipeline for creative image generation. Popular with 162 likes and 425 downloads.
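A usage sketch for a StableDiffusionPipeline-based card like this one; the repo id is a placeholder and the comma-joined style-tag convention is an assumption, since fine-tunes usually expect their trigger tags in the prompt:

```python
def build_prompt(subject, style_tags=()):
    """Join a subject with comma-separated style tags (pure helper).

    Artistic fine-tunes often respond to trained trigger words; the exact
    tags come from the model card, not this sketch.
    """
    return ", ".join([subject, *style_tags])

def generate_image(prompt, model_id="author/artistic-model", seed=0):
    # model_id is a placeholder -- substitute the card's actual repo id.
    # Deferred imports keep build_prompt usable without torch/diffusers.
    import torch
    from diffusers import StableDiffusionPipeline
    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16).to("cuda")
    gen = torch.Generator("cuda").manual_seed(seed)  # reproducible sampling
    return pipe(prompt, generator=gen).images[0]
```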
Brief-details: A 7B parameter GPTQ-quantized uncensored LLaMA model, optimized for unrestricted conversations with multiple quantization options for different hardware setups.
Brief-details: A powerful 34B parameter code generation model achieving 73.8% pass@1 on HumanEval, fine-tuned on 1.5B tokens of programming data with multi-language support.
Brief-details: A powerful 46.7B parameter MoE model fine-tuned from Mixtral-8x7B using DPO, supporting 5 languages and achieving top performance on the Open LLM Leaderboard.
Brief-details: A specialized LoRA model for FLUX.1-dev that creates creative photo compositions with 4 real background images and a central cartoon summary, featuring one-click generation capability.