Brief-details: A 6.7B parameter language model trained on The Pile dataset following Chinchilla scaling laws, intended for research and text-generation tasks.
Brief-details: PVC v2 is a specialized anime-style text-to-image model fine-tuned on WD v1.4 using PVC figure images, featuring high-quality character generation with Danbooru tag support.
Brief-details: Cool Japan Diffusion 2.1.0 - A specialized Stable Diffusion fine-tune focused on anime, manga, and Japanese cultural content generation.
Brief-details: A 30B parameter LLM merging GPT4-Alpaca and Open Assistant, optimized for instruction following while maintaining high-quality prose generation.
Brief-details: Galactica-1.3B is a scientific language model by Meta AI, trained on 106B tokens of scientific text for tasks like citation prediction and scientific QA.
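For reference, a minimal sketch of running this model with Hugging Face transformers, assuming the public repo id "facebook/galactica-1.3b"; Galactica checkpoints are OPT-based, and the [START_REF] token used below follows the model card's citation-prediction convention.

```python
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b")

# [START_REF] prompts the model to predict a citation for the preceding text.
prompt = "The Transformer architecture [START_REF]"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```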
Brief-details: Stable Diffusion v1-1 is a powerful text-to-image latent diffusion model trained on the LAION-2B dataset, capable of generating photorealistic images from text prompts.
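A minimal sketch of text-to-image generation with the diffusers library; the repo id "CompVis/stable-diffusion-v1-1" and a CUDA device are assumed.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the v1-1 checkpoint in half precision and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-1", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate one image from a text prompt and save it to disk.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut_rides_horse.png")
```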
Brief-details: CoreMLStableDiffusion - Apple's toolkit for converting Stable Diffusion models to Core ML and running them on-device, targeting iOS/iPadOS/macOS.
Brief-details: EVA-CLIP is a high-performance vision-language model series with state-of-the-art zero-shot classification capabilities, trained on large-scale datasets including LAION-400M and Merged-2B.
Brief-details: Advanced Chinese document AI model by Microsoft, optimized for joint text and image understanding; 65 likes and 3.1K+ downloads. Strong performance on XFUND (92.02% F1).
Brief-details: An 8B parameter LLaMA-3.1 variant fine-tuned for creative writing and conversation, featuring multi-stage training and improved multi-turn coherency.
Brief-details: A 6B parameter GPT-J model fine-tuned on 2GB of literary content for creative fiction generation, featuring annotated prompting capabilities.
Brief-details: Chinese RoBERTa-based classification model fine-tuned on Chinanews dataset, optimized for news topic classification with strong performance on mainland Chinese political content.
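A minimal sketch of topic classification with such a model, assuming it is UER's "uer/roberta-base-finetuned-chinanews-chinese" checkpoint (a common match for this description), whose labels include categories such as "mainland China politics".

```python
from transformers import pipeline

# Build a text-classification pipeline around the assumed checkpoint.
classifier = pipeline(
    "text-classification",
    model="uer/roberta-base-finetuned-chinanews-chinese",
)

# Classify a Chinese headline; the model returns the top news-topic label.
print(classifier("北京上海房价大幅上涨"))  # "Housing prices rise sharply in Beijing and Shanghai"
```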
Brief-details: A 12.5B parameter video captioning model built on Llama-3, designed to convert video data into the detailed textual descriptions used to train CogVideoX models.
Brief-details: Efficient 2.2B parameter multimodal LLM combining InternViT-300M vision model with InternLM2-Chat-1.8B language model, optimized for image/video understanding and conversation.
Brief-details: Next-gen text-to-image model using Next-DiT (2B params) & a Gemma-2B text encoder. Supports 2K resolution and multiple languages, with improved speed and efficiency.
Brief-details: Einstein-v6.1-Llama3-8B is an 8B parameter LLM fine-tuned on 38 datasets, optimized for STEM tasks with strong performance in science and math reasoning.
Brief-details: Mistral_Pro_8B_v0.1 - An enhanced 8.9B parameter model built on Mistral-7B, optimized for programming and mathematics with improved performance metrics.
Brief-details: Open-Assistant's fine-tuned version of CodeLlama 13B, optimized for code generation and chat, featuring ChatML prompt-format compatibility.
Brief-details: Japanese-optimized Llama 2 model (7B params) with instruction tuning, supporting both Japanese and English. Built by the ELYZA team for enhanced Japanese NLP capabilities.
Brief-details: IDEFICS-80B is a powerful 80B parameter multimodal model that can process both images and text, offering capabilities like visual Q&A, image captioning, and story generation.
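A minimal sketch of visual Q&A with IDEFICS, following the usage shown on its model card and assuming the instruct checkpoint "HuggingFaceM4/idefics-80b-instruct"; at 80B parameters the model needs multiple GPUs, hence device_map="auto". The image URL below is a hypothetical placeholder.

```python
import torch
from transformers import IdeficsForVisionText2Text, AutoProcessor

checkpoint = "HuggingFaceM4/idefics-80b-instruct"
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

# Prompts interleave text and images (here, an image passed as a URL).
prompts = [
    [
        "User: What is in this image?",
        "https://example.com/cat.png",  # hypothetical image URL
        "<end_of_utterance>",
        "\nAssistant:",
    ]
]
inputs = processor(prompts, return_tensors="pt").to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```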
Brief-details: CodeLlama-13B-fp16 is Meta's 13B parameter coding model in FP16 format, optimized for code synthesis and understanding, supporting context windows up to 100K tokens and featuring advanced infilling capabilities.
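A minimal sketch of CodeLlama's infilling mode, mirroring the official model card's example but assuming the upstream base checkpoint "codellama/CodeLlama-13b-hf" (the fp16 repo above is a conversion of those weights). The <FILL_ME> marker denotes the span the model should fill in between the prefix and suffix.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# <FILL_ME> splits the prompt into a prefix and suffix for infilling.
prompt = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
generated_ids = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens, then splice them into the prompt.
filling = tokenizer.batch_decode(
    generated_ids[:, input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(prompt.replace("<FILL_ME>", filling))
```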