Brief Details: An 8B parameter LLM that outperforms Meta's Llama-3.1-8B-Instruct across benchmarks, featuring enhanced instruction-following and function-calling capabilities.
Brief Details: A specialized Stable Diffusion model fine-tuned on Tron: Legacy film screenshots, offering a distinctive sci-fi aesthetic activated by the 'trnlgcy' trigger token.
Brief Details: A specialized Stable Diffusion 1.5 fine-tune for anime pencil concept art, released in 5 versions with increasing training steps and dataset sizes.
Brief Details: A 7B parameter Mistral-based model fine-tuned with DPO, achieving strong performance across benchmarks. Uses the ChatML prompt format and offers specialized instruction-following capabilities.
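Since the entry above notes the model uses the ChatML prompt format, here is a minimal sketch of how ChatML messages are conventionally serialized (the helper name is illustrative, not from the model card):

```python
def to_chatml(messages):
    """Serialize role/content messages into the ChatML prompt format."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model generates the reply.
    return prompt + "<|im_start|>assistant\n"

print(to_chatml([{"role": "user", "content": "Hello"}]))
```

In practice a system turn is usually placed first in the message list; the open `assistant` header at the end cues the model to continue from there.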
Brief Details: A specialized Stable Diffusion model trained to create artistic double-exposure effects, activated by the 'dublex' style token and optimized for portraits.
Brief Details: Qwen2's 7B instruction-tuned model packaged in GGUF format. Features multi-language support, coding capabilities, and various quantization options.
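As a rough guide to the quantization options mentioned above, GGUF file size scales approximately with bits per weight. A back-of-envelope sketch (ignores metadata and embedding overhead; figures are approximate, not from the model card):

```python
def approx_gguf_size_gb(n_params_billion, bits_per_weight):
    """Rough GGUF file-size estimate: parameters x bits / 8, in GB."""
    return n_params_billion * bits_per_weight / 8

# e.g. a 7B model at ~4.5 bits/weight (a Q4_K_M-class quant)
print(round(approx_gguf_size_gb(7, 4.5), 2))  # ~3.94 GB
```

The same arithmetic explains why an 8-bit quant of a 7B model lands near 7 GB while FP16 needs about 14 GB.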
Brief Details: State-of-the-art 34B parameter LLM for SQL generation, outperforming GPT-4 with 84% accuracy on novel datasets. Fine-tuned from CodeLlama.
Brief Details: Stable Code Instruct 3B is a 2.7B parameter code-generation model fine-tuned for programming tasks, with strong performance across multiple languages.
Brief Details: CodeLlama-34b-hf is a powerful 34B parameter code generation model from Meta, optimized for programming tasks with extensive language capabilities and commercial usage rights.
Brief Details: HDR photography-focused diffusion model trained on 600 images, optimized for 768x768 resolution with multi-resolution support and no licensing restrictions.
Brief Details: CPM-Bee-10B: A 10B-parameter bilingual (Chinese-English) open-source LLM with commercial usage rights. Features trillion-token training and advanced conversational capabilities.
Brief Details: Wavyfusion is a versatile text-to-image model with 123M parameters, trained on diverse artistic styles using Dreambooth. Requires the "wa-vy style" trigger token.
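Several entries in this list ('trnlgcy', 'dublex', "wa-vy style") rely on a trigger token to activate the fine-tuned style. A minimal sketch of composing such a prompt (the helper is illustrative; the resulting string would be passed as the `prompt` argument of a text-to-image pipeline):

```python
def styled_prompt(subject, trigger="wa-vy style"):
    """Prepend a fine-tune's trigger token so its learned style activates."""
    return f"{trigger}, {subject}"

print(styled_prompt("a lighthouse at dusk"))  # "wa-vy style, a lighthouse at dusk"
```

Placement conventions vary per model; some fine-tunes expect the trigger token at the start of the prompt, others anywhere within it, so the respective model card is the authority.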
BRIEF DETAILS: 7B parameter instruction-tuned LLM with 128k context window. Optimized for reasoning tasks, code, and math. Built by Microsoft with focus on efficiency.
Brief Details: ToonCrafter is an AI video diffusion model that creates cartoon interpolation animations from two images and text prompts at 512x320 resolution.
Brief Details: TemporalDiff - Enhanced text-to-video model with improved coherency at 512x512 resolution, featuring optimized frame stride for smoother animations.
Brief Details: Advanced 236B parameter chat model from DeepSeek AI with superior performance on coding and hard prompts; ranks #11 on the LMSYS Chatbot Arena and requires 8x80GB GPUs for inference.
Brief Details: LLaMA-Pro-8B: An 8.36B parameter model optimized for programming and math tasks, built on the LLaMA2 architecture with enhanced capabilities and benchmark improvements.
Brief Details: IP-Composition Adapter enables composition-focused image generation in SD1.5/SDXL, preserving spatial arrangements while allowing creative freedom in style and content.
Brief Details: DeepSeek Coder 6.7B Instruct - Advanced code-focused LLM with 6.7B parameters, trained on 2T tokens (87% code, 13% natural language), optimized for programming tasks.
Brief Details: Stable Diffusion-based model trained on 19.2M anime/manga images, optimized for character generation with balanced artistic style and anatomy quality.
Brief Details: A specialized Stable Diffusion model fine-tuned on Pokémon images, enabling text-to-image generation of unique Pokémon characters. Created by Lambda Labs, trained on a BLIP-captioned dataset.