Brief-details: Stable Diffusion model fine-tuned on Arcane TV show imagery, enabling generation of Arcane-style artwork using the "arcane style" trigger token. Popular, with 752 likes and 2.8K downloads.
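A minimal diffusers sketch for this kind of trigger-token fine-tune; the nitrosocke/Arcane-Diffusion repo id and the generation settings are assumptions, and the same pattern applies to the other style-token checkpoints below ("analog style", "nvinkpunk").

```python
# Sketch: generating an "arcane style" image with diffusers.
# The repo id and settings are assumptions; any SD 1.x style fine-tune
# with a trigger token works the same way.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nitrosocke/Arcane-Diffusion",   # assumed repo id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "arcane style, portrait of a woman with blue hair, cinematic lighting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("arcane_portrait.png")
```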
Brief-details: WizardCoder-Python-34B-V1.0: A powerful code-focused LLM achieving 73.2% pass@1 on HumanEval, surpassing the March 2023 version of GPT-4. Built on the Llama 2 architecture.
Brief-details: A 1.2B parameter text-to-speech model trained on 100K hours of speech, featuring emotional speech synthesis, voice cloning, and zero-shot capabilities.
Brief-details: A powerful text-to-video and image-to-video generation model based on Flow Matching, capable of producing high-quality 10-second videos at 768p/24 FPS.
Brief-details: DCLM-7B: A 7B parameter open-source LLM trained on 2.5T tokens, achieving 63.7% on MMLU. Features strong performance in reasoning and QA tasks.
Brief-details: Efficient C++ implementation of OpenAI's Whisper ASR models, offering various quantized versions for different performance/size trade-offs.
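A rough Python wrapper around the whisper.cpp CLI, assuming a locally built binary (named main in older builds) and a downloaded quantized ggml model; all paths are placeholders.

```python
# Sketch: calling a locally built whisper.cpp binary from Python.
# Binary name, model file, and audio path are assumptions/placeholders.
import subprocess

def transcribe(audio_path: str,
               model_path: str = "models/ggml-base.en-q5_1.bin",
               binary: str = "./main") -> str:
    """Run whisper.cpp on a 16 kHz WAV file and return its stdout transcript."""
    result = subprocess.run(
        [binary, "-m", model_path, "-f", audio_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(transcribe("samples/jfk.wav"))
```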
Brief-details: MPT-7B-StoryWriter is a 6.7B parameter LLM optimized for long-form fiction, with a 65k+ token context length enabled by ALiBi attention.
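A hedged transformers sketch of loading the model with an extended context; the mosaicml/mpt-7b-storywriter repo id, the tokenizer choice, and the 65,536-token value are assumptions based on the description above.

```python
# Sketch, assuming the mosaicml/mpt-7b-storywriter repo id and that the
# custom MPT config exposes max_seq_len; ALiBi allows raising the context
# length at load time.
import torch
import transformers

name = "mosaicml/mpt-7b-storywriter"
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 65536  # extend the usable context thanks to ALiBi

model = transformers.AutoModelForCausalLM.from_pretrained(
    name, config=config, torch_dtype=torch.bfloat16, trust_remote_code=True
)
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```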
Brief-details: State-of-the-art code generation model achieving 73.8% pass@1 on HumanEval, fine-tuned on 1.5B tokens of programming data and supporting multiple languages.
Brief-details: Llama-2-7B-Chat-GGML is a quantized version of Meta's Llama 2 chat model in the GGML format, optimized for CPU/GPU inference and offering various quantization options.
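A minimal CPU-inference sketch using the ctransformers library; the TheBloke/Llama-2-7B-Chat-GGML repo id and the q4_K_M file name are assumptions.

```python
# Sketch: loading a quantized GGML file with ctransformers for CPU inference.
# Repo id, file name, and quantization level are assumptions.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GGML",
    model_file="llama-2-7b-chat.ggmlv3.q4_K_M.bin",
    model_type="llama",
    gpu_layers=0,  # pure CPU; raise this to offload layers to the GPU
)
print(llm("[INST] Explain GGML quantization in one sentence. [/INST]"))
```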
Brief-details: A specialized text-to-image diffusion model trained on analog photographs, featuring vintage film effects and requiring "analog style" as an activation token.
Brief-details: Octopus-v2 is a 2.51B parameter on-device LLM optimized for function calling, achieving GPT-4-level accuracy with 168% faster inference speed.
Brief-details: StableBeluga2 is a Llama2 70B-based language model fine-tuned on Orca-style datasets, optimized for instruction-following and safe AI assistance.
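A hedged transformers sketch; the stabilityai/StableBeluga2 repo id and the "### System / ### User / ### Assistant" prompt layout are assumptions drawn from the model card.

```python
# Sketch, assuming the stabilityai/StableBeluga2 repo id and the
# Orca-style "### System / ### User / ### Assistant" prompt layout.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "stabilityai/StableBeluga2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "### System:\nYou are a helpful, harmless assistant.\n\n"
    "### User:\nSummarize what an Orca-style dataset is.\n\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```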
Brief-details: Control-LoRA brings efficient model control to consumer GPUs by adding low-rank parameter tuning to ControlNet, reducing model size from 4.7GB to ~738MB while maintaining performance.
Brief-details: Popular text-to-image diffusion model with multiple versions (v1-1 to v1-4), trained on LAION datasets. Creates photo-realistic images from text prompts and is released under the CreativeML OpenRAIL-M license.
Brief-details: A 30B parameter LLaMA-based language model fine-tuned by OpenAssistant, distributed as XOR weights to comply with Meta's licensing while maintaining high performance.
Brief-details: Fine-tuned Stable Diffusion 1.5 model specialized in modern Disney-style image generation, with 948 likes and 2,134 downloads.
Brief-details: A specialized text-to-image model focused on realistic textures and Asian faces, optimized for danbooru-style prompts, with non-commercial licensing.
Brief-details: A 70B parameter LLM using the Mistral architecture, optimized for the GGUF format with a 32k context window and a high RoPE frequency base.
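A minimal llama-cpp-python sketch for running a GGUF quantization with the full 32k window; the local file path and offload settings are placeholders.

```python
# Sketch: loading a local 70B GGUF quantization with llama-cpp-python.
# The model path is a placeholder; n_ctx matches the advertised 32k window.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model-70b.Q4_K_M.gguf",  # placeholder path
    n_ctx=32768,       # use the full 32k context window
    n_gpu_layers=-1,   # offload as many layers as fit onto the GPU
)
out = llm("Summarize the plot of Hamlet in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```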
Brief-details: A specialized Stable Diffusion model fine-tuned to create art in a unique style inspired by Gorillaz, FLCL, and Yoji Shinkawa, requiring the "nvinkpunk" trigger word.
Brief-details: Fuyu-8B: A 9.41B parameter multimodal decoder-only transformer by Adept AI. Handles image-text tasks at arbitrary image resolutions with fast inference.
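A hedged transformers sketch of image+text inference; the adept/fuyu-8b repo id and the example image are assumptions.

```python
# Sketch, assuming the adept/fuyu-8b repo id; FuyuProcessor packs the image
# and the text prompt together for the decoder-only model.
import torch
from PIL import Image
from transformers import FuyuForCausalLM, FuyuProcessor

model_id = "adept/fuyu-8b"
processor = FuyuProcessor.from_pretrained(model_id)
model = FuyuForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="cuda:0"
)

image = Image.open("chart.png")  # placeholder image
inputs = processor(text="Describe this image.\n", images=image,
                   return_tensors="pt").to("cuda:0")
generated = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens after the prompt.
new_tokens = generated[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```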
Brief-details: IP-Adapter is a lightweight (22M parameter) adapter for text-to-image models, enabling image-prompt capabilities with state-of-the-art performance.
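A hedged diffusers sketch of image prompting with IP-Adapter; the repo ids, weight file name, and adapter scale are assumptions.

```python
# Sketch: attaching an IP-Adapter to an SD 1.5 pipeline in diffusers.
# Repo ids, the weight file name, and the adapter scale are assumptions.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)  # balance image prompt vs. text prompt

ref = load_image("reference.png")  # placeholder reference image
image = pipe(
    prompt="a portrait in the style of the reference",
    ip_adapter_image=ref,
    num_inference_steps=30,
).images[0]
image.save("ip_adapter_result.png")
```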