Brief-details: Merlinite-7b is a 7.24B parameter language model from IBM Research using LAB methodology, built on Mistral-7B with Mixtral-8x7B as teacher, achieving strong benchmark performance.
Brief-details: Multi-task information extraction model supporting NER, relation extraction, summarization, sentiment analysis, and QA. Built on the GLiNER framework with state-of-the-art performance.
Brief-details: A 7B parameter Mistral-based model optimized for helpful AI assistance, featuring GGUF quantization for efficient deployment and uncensored responses via the ChatML prompt format.
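For context on the ChatML format mentioned above: ChatML wraps each conversation turn in `<|im_start|>role ... <|im_end|>` delimiters and leaves an open assistant turn for the model to complete. A minimal prompt-builder sketch (the helper name and example messages are illustrative, not from the model card):

```python
def build_chatml_prompt(system, turns):
    """Assemble a ChatML prompt from a system message and a list of
    (role, content) turns, ending with an open assistant turn."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    # The model generates its reply as the continuation of this open turn.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    [("user", "What is GGUF?")],
)
```

The resulting string is what gets tokenized and sent to the model; generation is typically stopped on the `<|im_end|>` token.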
Brief-details: A specialized text-to-image model fine-tuned on public-domain 1928 Mickey Mouse content, utilizing SDXL to generate authentic vintage Mickey artwork.
Brief-details: NexusRaven-13B is a state-of-the-art function-calling LLM based on CodeLlama-13B, achieving a 95% success rate on cybersecurity tool use while remaining commercially viable.
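A function-calling LLM like this emits a structured call that the host application must parse and dispatch without blindly executing model output. A hedged sketch, assuming the model emits a Python-style call string (the `Call:` prefix and tool names here are hypothetical):

```python
import ast

def parse_function_call(output: str):
    """Parse a Python-style call string into (name, kwargs) without
    executing anything; only literal keyword arguments are accepted."""
    # Strip an optional "Call:" prefix (assumed output convention).
    if output.startswith("Call:"):
        output = output.split(":", 1)[1]
    node = ast.parse(output.strip(), mode="eval").body
    if not isinstance(node, ast.Call) or not isinstance(node.func, ast.Name):
        raise ValueError("not a simple function call")
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
    return node.func.id, kwargs

name, kwargs = parse_function_call('Call: scan_ports(host="10.0.0.1", timeout=5)')
```

Using `ast.literal_eval` on the argument values keeps the dispatcher safe: arbitrary expressions in model output raise instead of running.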
Brief-details: A 6.74B parameter finance-focused LLM based on LLaMA-1-7B, achieving performance competitive with larger models such as BloombergGPT-50B.
Brief-details: ChatLaw-Text2Vec is a specialized Chinese legal text similarity model trained on 936,727 legal cases, optimized for legal document comparison and vector database creation.
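In a vector-database workflow, an embedding model like ChatLaw-Text2Vec maps each case to a vector, and retrieval ranks stored cases by cosine similarity to the query vector. A minimal pure-Python sketch of that ranking step (the toy vectors and case IDs are illustrative, not real embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors: the core
    ranking operation in a text-similarity vector store."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, docs, k=2):
    """Rank (doc_id, vector) pairs by similarity to the query vector."""
    scored = sorted(docs, key=lambda d: cosine_similarity(query_vec, d[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

cases = [("case_a", [1.0, 0.0, 0.0]),
         ("case_b", [0.6, 0.8, 0.0]),
         ("case_c", [0.0, 0.0, 1.0])]
best = top_k([0.9, 0.1, 0.0], cases, k=2)
```

Production systems replace the linear scan with an approximate-nearest-neighbor index, but the similarity measure is the same.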
Brief Details: A 30B parameter instruction-tuned LLaMA model optimized for 2048 token sequences, featuring strong performance on scientific QA and general instruction tasks.
Brief-details: StableSR is a diffusion-based image super-resolution model that leverages Stable Diffusion for high-quality upscaling, featuring time-aware encoding and controllable feature wrapping.
Brief-details: LLaMA-based model fine-tuned on Stack Exchange data using RL, optimized for technical Q&A across programming, math, and physics domains. PEFT-adapted.
Brief-details: A powerful multilingual translation model supporting 196 languages, using a Mixture-of-Experts architecture for efficient large-scale translation.
Brief-details: BioGPT-Large-PubMedQA is a specialized biomedical language model for medical text generation and QA, achieving 78.2% accuracy on PubMedQA.
Brief-details: Anime-focused text-to-image model built on Stable Diffusion, featuring an improved VAE for high-quality anime generation with detailed prompts and Danbooru tag support.
Brief-details: OpenOrcaXOpenChat Preview2 13B is an advanced LLM fine-tuned on GPT-4 data, achieving 103% of the original Orca's performance at roughly 1/10th the compute.
Brief-details: A fine-tuned Stable Diffusion model specialized in generating nail art designs, trained on nail set images with CLIP Interrogator captioning for 10,000 steps.
Brief-details: A 7B parameter Mistral-based model optimized for roleplay & creative text generation, featuring high MT-bench scores (7.96) and strong general performance.
Brief-details: IDEFICS-9b-instruct is a 9B parameter multimodal model for image-text tasks, fine-tuned on instruction data for enhanced performance in conversational settings.
Brief-details: A 7B parameter instruction-tuned language model built by Together Computer, featuring multi-task capabilities and efficient inference options for both GPU and CPU.
Brief-details: SuperCOT-LoRA is a specialized LoRA model trained on chain-of-thought datasets to enhance LLaMA's prompt-following capabilities, particularly for Langchain applications.
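A LoRA adapter like SuperCOT-LoRA stores only two small matrices per adapted layer; merging into the base weights computes W' = W + (alpha / r) * B @ A, where B is d_out x r and A is r x d_in, so the update has rank at most r. A toy pure-Python sketch of that merge (the 2x2 numbers are illustrative, not real weights):

```python
def matmul(A, B):
    """Plain-Python matrix multiply over lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def apply_lora(W, A, B, alpha, r):
    """Merge a LoRA update into a weight matrix: W' = W + (alpha / r) * B @ A."""
    scale = alpha / r
    delta = matmul(B, A)  # d_out x d_in, rank <= r
    return [[w + scale * d for w, d in zip(wrow, drow)]
            for wrow, drow in zip(W, delta)]

# Rank-1 update of a 2x2 weight matrix.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # 2 x 1
A = [[0.5, 0.5]]     # 1 x 2
W_merged = apply_lora(W, A, B, alpha=2.0, r=1)
```

Because only A and B are trained and shipped, the adapter is a tiny fraction of the base model's size, which is why LoRA files distribute so cheaply.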
Brief-details: SEmix is a Japanese-focused text-to-image Stable Diffusion model specializing in anime-style character generation, requiring no VAE and featuring EasyNegative compatibility.
Brief-details: Photorealistic text-to-image model built on Stable Diffusion v1-5, featuring enhanced image quality at 768-1024px resolution with improved human rendering and environmental detail.