Brief Details: RDT-1B is a 1B-parameter robotics model that combines language, vision, and action prediction for multi-robot control, supporting various manipulator types and configurations.
Brief Details: Squid is an 8.11B parameter AI model optimized for on-device RAG, treating long context as a new modality for efficient language processing and inference.
Brief Details: Florence-2-Flux-Large is an 823M parameter image-text-to-text model built on Microsoft's Florence-2 architecture, specialized in detailed image description and analysis.
Brief Details: A specialized LoRA model fine-tuned on Cinestill 800T images, designed for generating night photography with distinctive halation effects, using FLUX.1-dev as its base.
Brief Details: Advanced image-to-text model based on Florence-2-large, fine-tuned on 40K images from the Ejafa/ye-pop dataset with CogVLM2-generated captions. 823M params, FP16 precision.
Brief Details: A 72.7B parameter multilingual LLM fine-tuned from Qwen2, optimized for Claude-like responses, with strong performance on IFEval (75.6%) and BBH (57.85%).
Brief Details: NovelAI's original anime-focused Stable Diffusion model, featuring UNet and VAE components, optimized for anime-style image generation with the CLIP skip 2 setting.
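"CLIP skip 2" conventionally means conditioning the diffusion model on the text encoder's penultimate hidden layer rather than its final one. A minimal sketch of that selection logic, with numpy arrays standing in for real encoder outputs (the helper name and shapes are hypothetical):

```python
import numpy as np

def select_clip_skip(hidden_states, clip_skip=1):
    """Pick which text-encoder layer output conditions the UNet.

    clip_skip=1 uses the final layer; clip_skip=2 uses the penultimate
    layer, matching the common Stable Diffusion UI convention.
    """
    return hidden_states[-clip_skip]

# Toy stand-in: outputs of 4 encoder "layers", each a 2-dim embedding.
layers = [np.full(2, float(i)) for i in range(4)]
penultimate = select_clip_skip(layers, clip_skip=2)
print(penultimate)  # the layer-2 output, i.e. [2. 2.]
```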
Brief Details: An 8B parameter LLM fine-tuned for protein sequence generation, supporting both controlled and uncontrolled protein engineering with LoRA optimization.
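LoRA keeps the pretrained weights frozen and learns only a low-rank update, which is why it suits adapting an 8B model to a narrow domain like protein sequences. A hedged numpy sketch of the core mechanism (dimensions, scaling, and names are illustrative, not taken from this model):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                              # hidden size, LoRA rank (r << d)
W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

def lora_forward(x, alpha=16.0):
    # Effective weight is W + (alpha / r) * B @ A; W itself never changes.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal(d)
# With B zero-initialized, the adapter starts as an exact no-op.
assert np.allclose(lora_forward(x), x @ W.T)
```

Zero-initializing B is the standard trick that makes training start from the unmodified base model.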
Brief Details: 8B parameter creative writing model focused on uncensored storytelling (censorship rating: 9.1/10). Built on the EtherealRainbow base with specialized instruction following.
Brief Details: A vibrant text-to-image LoRA model built on FLUX.1-dev, specializing in retro-modern illustrations with bold linework and surreal elements.
Brief Details: A 6.91B parameter theorem-proving LLM achieving SOTA results (63.5% on miniF2F, 25.3% on ProofNet) using RL and Monte Carlo tree search.
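The Monte Carlo tree search used here can be sketched in miniature: a toy UCT loop searching a depth-4 binary tree for the highest-reward leaf, standing in for proof search where leaves are finished proof attempts scored by a proof checker. Everything below (reward, depth, constants) is illustrative, not the model's actual algorithm.

```python
import math
import random

DEPTH = 4

def reward(bits):
    return sum(bits) / DEPTH  # best leaf (all ones) scores 1.0

class Node:
    def __init__(self, bits=()):
        self.bits = bits
        self.children = {}
        self.visits = 0
        self.value = 0.0

def ucb_child(node, c=1.4):
    # UCB1: exploit mean value, explore rarely visited children.
    return max(
        node.children.values(),
        key=lambda ch: ch.value / (ch.visits + 1e-9)
        + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)),
    )

def rollout(bits):
    # Random playout to a leaf.
    while len(bits) < DEPTH:
        bits = bits + (random.randint(0, 1),)
    return reward(bits)

def mcts(iterations=2000, seed=0):
    random.seed(seed)
    root = Node()
    for _ in range(iterations):
        node, path = root, [root]
        while len(node.bits) < DEPTH:          # selection / expansion
            if len(node.children) < 2:
                bit = len(node.children)
                node.children[bit] = Node(node.bits + (bit,))
                node = node.children[bit]
                path.append(node)
                break
            node = ucb_child(node)
            path.append(node)
        value = rollout(node.bits)
        for n in path:                          # backpropagation
            n.visits += 1
            n.value += value
    node, best = root, ()
    while node.children:                        # greedy best-path extraction
        node = max(node.children.values(), key=lambda ch: ch.visits)
        best = node.bits
    return best

print(mcts())
```

In the real system the rollout would call the LLM to extend a partial proof and the reward would come from the verifier; a synthetic reward keeps the sketch self-contained.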
Brief Details: Tanuki-8B is an 8B-parameter Japanese-English LLM, pre-trained on 1.3T tokens and fine-tuned with DPO for conversational AI, with strong performance in both languages.
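DPO fine-tuning optimizes the policy directly on preference pairs, without a separate reward model. A scalar sketch of the per-pair loss (beta and the log-probabilities below are made up for illustration):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((logp_w - ref_w) - (logp_l - ref_l))).
    """
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the policy matches the reference, the loss is -log(0.5) = log 2.
print(dpo_loss(-10.0, -12.0, -10.0, -12.0))  # 0.6931...
```

The loss falls below log 2 exactly when the policy has shifted probability toward the chosen response relative to the reference, which is the intended training signal.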
Brief Details: Paper-cutout-style LoRA for FLUX.1-dev, enabling artistic paper-cut aesthetics in image generation. 978 downloads to date.
Brief Details: A 7.74B parameter fact-checking model that evaluates whether claims are supported by source documents, achieving SOTA performance while processing 500+ docs/min.
Brief Details: Long-context LLM based on Meta-Llama-3.1-8B, capable of generating 10,000+ words, supports English/Chinese, optimized for extended text generation.
Brief Details: FLUX-Prompt-Generator is an AI prompt engineering tool with local and cloud deployment options, featuring Groq API integration and Llama 3 8B compatibility.
Brief Details: Latest Hermes-series 70B parameter LLM built on Llama 3.1, optimized for chat, function calling, and structured outputs, with FP8 quantization.
Brief Details: A specialized ControlNet Canny model fine-tuned for FLUX.1-dev, enabling precise edge-aware image generation with customizable prompts. Non-commercial license.
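A Canny ControlNet conditions generation on a binary edge map extracted from a reference image (typically via cv2.Canny). As a dependency-light stand-in, this gradient-magnitude sketch produces the same kind of conditioning image; the threshold and sizes are arbitrary:

```python
import numpy as np

def edge_map(img, thresh=0.2):
    """Toy gradient-magnitude edge detector.

    A stand-in for cv2.Canny: a Canny ControlNet receives a binary edge
    image like this one alongside the text prompt.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > thresh * mag.max()).astype(np.uint8)

# A white square on black: edges appear only at the border.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = edge_map(img)
print(edges)
```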
Brief Details: DiarizationLM-8b-Fisher-v2 is an 8B parameter LLM specialized in speaker diarization post-processing, built on Llama 3 architecture with improved completion-focused training.
Brief Details: FLUX.1-schnell-dev-merged-fp8 is a powerful text-to-image model optimized for fast inference with 6-8 steps, featuring FP8 precision and dual-pass generation capabilities.
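FP8 weight storage trades precision for memory and bandwidth. Real FP8 (e4m3/e5m2) is a floating-point format, but the underlying scale-and-round idea can be illustrated with a symmetric integer quantization round-trip (entirely a toy, not this checkpoint's actual scheme):

```python
import numpy as np

def quantize_dequantize(w, num_bits=8):
    """Illustrative symmetric per-tensor quantization round-trip."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for 8 bits
    scale = np.abs(w).max() / qmax          # map the largest weight to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                        # dequantize back to float

w = np.linspace(-1.0, 1.0, 11)
w8 = quantize_dequantize(w)
print(np.abs(w - w8).max())  # small round-trip error
```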
Brief Details: Uncensored 70B parameter LLaMA 3.1 variant using a LoRA-abliteration technique. Maintains high output quality while removing safety filters.
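"Abliteration" is usually described as identifying a refusal direction in the residual stream and projecting it out of activations or weights. A minimal numpy sketch of that orthogonal projection (the direction here is random noise, not a real refusal direction):

```python
import numpy as np

def ablate_direction(v, d):
    """Remove the component of activation v along unit direction d."""
    d = d / np.linalg.norm(d)
    return v - np.dot(v, d) * d

rng = np.random.default_rng(1)
refusal_dir = rng.standard_normal(16)   # hypothetical "refusal" direction
act = rng.standard_normal(16)           # hypothetical residual-stream activation

clean = ablate_direction(act, refusal_dir)
# The projected-out component is gone; the orthogonal remainder is untouched.
print(np.dot(clean, refusal_dir / np.linalg.norm(refusal_dir)))  # ~0
```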