Brief-details: Vicuna-33B v1.3 is an advanced chat assistant fine-tuned from LLaMA on 125K ShareGPT conversations; research-focused, with strong benchmark performance.
Brief-details: OpenChat is a fine-tuned LLaMA-13B model, reportedly achieving 105.7% of ChatGPT's score using only 6K GPT-4 conversations, with 2048- and 8192-context variants.
Brief-details: AuraSR is a GAN-based super-resolution model with 618M parameters for upscaling generated images, offering 4x upscaling and implemented in PyTorch.
Brief-details: ChatGLM2-6B-32K is an enhanced bilingual LLM with 32K context length, built on ChatGLM2-6B architecture with optimized KV cache and position interpolation.
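Position interpolation, the context-extension trick named above, rescales position indices so a long sequence maps back into the position range seen during training. A toy sketch of the idea (illustrative only, not ChatGLM2's actual implementation; the 8K trained length is an assumption for the example):

```python
import numpy as np

def interpolate_positions(seq_len: int, trained_len: int) -> np.ndarray:
    """Linearly rescale positions 0..seq_len-1 into the trained position range."""
    scale = min(1.0, trained_len / seq_len)  # only compress, never stretch
    return np.arange(seq_len) * scale

# A 32K-token sequence mapped onto positions trained at (assumed) 8K:
pos = interpolate_positions(32768, 8192)
print(pos.max())  # stays just under the trained length of 8192
```

The compressed positions then feed the usual rotary embedding, so attention never sees a position index larger than those encountered in pre-training.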
Brief-details: MiniCPM-2B-sft-fp32 is a compact 2.4B-parameter LLM with performance comparable to Mistral-7B, supporting both English and Chinese and optimized for mobile deployment.
Brief-details: RPG V6 is a text-to-image model built on FLUX.1-dev, specialized in role-playing game art generation, with a claimed 95% accuracy.
Brief-details: Powerful Mixtral-based 46.7B-parameter model optimized for coding and general tasks, featuring 8 experts and a 32k context length; distributed in GGUF format for efficient local deployment.
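The 8-expert design means each token is routed to a small subset of experts by a learned gate, so only a fraction of the 46.7B parameters is active per token. A minimal numpy sketch of top-2 routing (illustrative, not Mixtral's actual code; the toy experts here are just random linear maps):

```python
import numpy as np

def top2_route(x, gate_w, experts):
    """Route one token vector x to its top-2 experts and blend their outputs."""
    logits = gate_w @ x                     # one gating score per expert
    top2 = np.argsort(logits)[-2:]          # indices of the two best experts
    weights = np.exp(logits[top2])
    weights /= weights.sum()                # softmax over the chosen pair
    return sum(w * experts[i](x) for w, i in zip(weights, top2))

rng = np.random.default_rng(0)
dim, n_experts = 16, 8
gate_w = rng.standard_normal((n_experts, dim))
# Toy experts: each is a fixed random linear map standing in for an FFN block.
expert_ws = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
experts = [lambda v, w=w: w @ v for w in expert_ws]
out = top2_route(rng.standard_normal(dim), gate_w, experts)
print(out.shape)  # (16,)
```

Only 2 of the 8 expert FFNs run per token, which is why Mixtral-class models infer far cheaper than a dense model of the same total size.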
Brief-details: A specialized text-to-image model focused on high-quality portrait generation with consistent eye details and composition, optimized for a 1:1 aspect ratio.
Brief-details: GPT4All-J is a 6.17B parameter Apache-2.0 licensed chatbot, fine-tuned from GPT-J for assistant-style interactions with strong performance across multiple benchmarks.
Brief-details: High-resolution anime-style text-to-image model based on SDXL 1.0, optimized for 1024x1024 generation using Danbooru-style prompts.
Brief-details: CausalLM 14B - state-of-the-art LLaMA2-compatible model with impressive multilingual capabilities, achieving top benchmark scores, with its DPO version ranking #1 in the ~13B category.
Brief-details: GPT-JT-6B-v1 is a 6B-parameter model fine-tuned from GPT-J using UL2 training objectives, reportedly outperforming 100B+ models on classification tasks.
Brief-details: StableLM-3B-4E1T is a 3B-parameter decoder-only LLM trained on 1T tokens for 4 epochs, with strong performance on various NLP tasks, a hidden size of 2560, and 32 layers.
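The stated dimensions (hidden size 2560, 32 layers) roughly determine the parameter count; a back-of-envelope sketch, with the vocabulary size being an assumption for the example (~50K) and biases/norms ignored:

```python
def transformer_params(hidden: int, layers: int, vocab: int) -> int:
    """Rough decoder-only count: attention (~4h^2) + MLP (~8h^2) per layer,
    plus token embeddings; biases and layer norms are ignored."""
    per_layer = 12 * hidden * hidden
    return layers * per_layer + vocab * hidden

est = transformer_params(2560, 32, 50_000)
print(f"{est / 1e9:.2f}B")  # lands in the ~2.6B range, consistent with a "3B" model
```

This kind of estimate is why "3B" in a model name and the exact parameter count rarely match digit for digit.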
Brief-details: Multimodal vision-language model built on Mistral-7B, featuring SigLIP-400M integration and function-calling capabilities for advanced visual understanding and automation.
Brief-details: Large-scale Chinese text embedding model (326M params) optimized for retrieval and similarity tasks, achieving SOTA performance on C-MTEB benchmark with 1024d embeddings.
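Retrieval with an embedding model like this boils down to cosine similarity over its 1024-d vectors. A minimal sketch, assuming the document and query embeddings have already been computed (random vectors stand in for real embeddings here):

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=3):
    """Return indices of the k most cosine-similar document vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                       # cosine similarity against every document
    return np.argsort(sims)[::-1][:k]  # highest-similarity indices first

rng = np.random.default_rng(1)
docs = rng.standard_normal((100, 1024))              # stand-in 1024-d embeddings
query = docs[42] + 0.01 * rng.standard_normal(1024)  # near-duplicate of doc 42
print(top_k(query, docs, k=1))  # [42]
```

In practice the document vectors are pre-normalized and indexed once, so each query is a single matrix-vector product plus a partial sort.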
Brief-details: A specialized ControlNet model for creating artistic QR codes, trained on 150k QR-code pairs. Compatible with SD 1.5/2.1; enables scannable artistic QR generation.
Brief-details: Hotshot-XL is an advanced text-to-GIF model that works alongside SDXL, supporting 8 FPS animations and custom LoRA integrations for personalized content generation.
Brief-details: Chinese-Llama-2-7b is a bilingual (Chinese-English) LLM based on Llama 2, trained on 10M instruction pairs, featuring commercial usage rights and full compatibility with original Llama-2-chat optimizations.
Brief-details: Dolly-v1-6b is a 6B-parameter instruction-tuned LLM based on GPT-J, fine-tuned on the Alpaca dataset in about 30 minutes, demonstrating strong instruction-following capabilities.
Brief-details: A fine-tuned Stable Diffusion model specialized in anime/manga-style image generation, trained on 40,000 high-resolution images, with multiple checkpoint versions available.
Brief-details: Code Llama 70B - Meta's largest code-generation model (~69B parameters), optimized for programming tasks, supporting multiple programming languages and a context window of up to 100k tokens.