Brief-details: Versatile multimodal 7B parameter model supporting text, images, audio & video input/output. Features real-time voice chat and strong performance across modalities.
Brief Details: Dolphin3.0-Mistral-24B: A 24B parameter open-source LLM built on Mistral, optimized for general-purpose tasks including coding, math, and function calling. Offers local deployment with customizable system prompts.
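For local deployment with a custom system prompt, a minimal transformers sketch could look like the following (the repo id, system prompt, and generation settings are illustrative assumptions, not taken from the model card):

```python
# Hedged sketch: local chat with a customizable system prompt via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/Dolphin3.0-Mistral-24B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [
    {"role": "system", "content": "You are a concise coding assistant."},  # custom system prompt
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```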
Brief-details: Brazilian Portuguese TTS model based on the F5-TTS architecture. Trained on 330+ hours of audio data. Supports emotional speech synthesis conditioned on reference audio.
Brief-details: MatterGen is Microsoft's 46.8M parameter diffusion model for generating inorganic material structures; it jointly produces atomic coordinates, element types, and lattice vectors.
Brief-details: FLUX.1-dev-onnx is an ONNX variant of the FLUX.1-dev model from black-forest-labs, available on HuggingFace and requiring a non-commercial license agreement for usage.
Brief-details: A powerful 400M parameter code embedding model by Salesforce Research optimized for multilingual code retrieval, achieving 61.9 NDCG@10 on the CoIR benchmark.
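A minimal retrieval sketch with sentence-transformers, assuming the model is published as Salesforce/SFR-Embedding-Code-400M_R (the repo id and any query-prefix or pooling conventions are assumptions; follow the model card for exact usage):

```python
# Hedged sketch: code retrieval with a code-embedding model via sentence-transformers.
from sentence_transformers import SentenceTransformer, util

# Assumed repo id; trust_remote_code may or may not be required per the model card.
model = SentenceTransformer("Salesforce/SFR-Embedding-Code-400M_R", trust_remote_code=True)

query = "function that parses JSON from a string"
snippets = [
    "def parse_json(s):\n    import json\n    return json.loads(s)",
    "def add(a, b):\n    return a + b",
]
q_emb = model.encode(query, convert_to_tensor=True)
c_embs = model.encode(snippets, convert_to_tensor=True)
print(util.cos_sim(q_emb, c_embs))  # cosine similarity; higher = more relevant snippet
```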
BRIEF-DETAILS: Foundation model for 3D blood vessel segmentation across multiple imaging domains, trained on real & synthetic data for universal vessel detection.
BRIEF-DETAILS: Qwen1.5-0.5B-Chat is a compact 0.5B parameter chat model supporting 32K context, offering efficient multilingual capabilities and various quantization options.
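The "various quantization options" likely refer to pre-quantized releases; one alternative route is on-the-fly 4-bit loading with bitsandbytes, sketched below (generation settings are illustrative):

```python
# Hedged sketch: 4-bit quantized chat inference (requires bitsandbytes installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen1.5-0.5B-Chat"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

messages = [{"role": "user", "content": "Say hello in three languages."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```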
BRIEF-DETAILS: A LoRA trained for enhanced image generation, built on the Flux-Realism-FineDetailed base model and designed for use with the diffusers library.
Brief-details: Innovative hybrid vision model combining Mamba and Transformer architectures, achieving 87.3% top-1 accuracy on ImageNet-1K with 739.6M parameters at 256x256 resolution.
BRIEF-DETAILS: A dark-tuned variant of Cohere's Command-A model (111B parameters), optimized for morally ambiguous and antagonistic interactions with enhanced creative freedom.
BRIEF DETAILS: Quantized instruction-tuned LLM based on Mistral-Nemo-Base, with 40 layers, a 5120 hidden dimension, multilingual capabilities, and strong benchmark performance.
BRIEF-DETAILS: Dragon Ball-inspired AI model that transforms images into Super Saiyan videos using Wan2.1 14B I2V 480p base. Features realistic transformations with glowing effects.
Brief-details: A 7.76B parameter multilingual LLM optimized for English, Korean, Japanese & Chinese. Features 32-layer architecture, 4K context window, and strong performance on reasoning tasks.
Brief-details: A 3B parameter Llama-based model optimized for roleplay and creative tasks with medium censorship (4.5/10). Features short responses and specialized character formats.
Brief-Details: 8B parameter LLM using the novel SuperBPE tokenizer, which combines subword & superword tokens, offering a 27% inference efficiency gain over standard BPE.
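The gain comes from superword tokens that cross whitespace boundaries, so common multi-word phrases collapse into single tokens and sequences get shorter. A rough way to see the effect (the SuperBPE repo id below is a placeholder assumption; gpt2 stands in as a standard byte-level BPE baseline):

```python
# Hedged sketch: compare token counts between a SuperBPE tokenizer and standard BPE.
from transformers import AutoTokenizer

superbpe = AutoTokenizer.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")  # placeholder repo id
baseline = AutoTokenizer.from_pretrained("gpt2")  # standard BPE for comparison

text = "The quick brown fox jumps over the lazy dog near the river bank."
print("SuperBPE tokens:", len(superbpe(text)["input_ids"]))
print("Standard BPE tokens:", len(baseline(text)["input_ids"]))
```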
Brief Details: A LoRA model trained with Replicate's Flux trainer, designed for image generation using the TOK trigger word. Built for use with the diffusers library and the Canopus base model.
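A hedged diffusers sketch for applying such a LoRA (the LoRA repo id is a placeholder; FLUX.1-dev is used here only as a common Flux base, whereas the card names Canopus, so follow the card for the intended base checkpoint):

```python
# Hedged sketch: Flux image generation with a trigger-word LoRA via diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("your-username/flux-replicate-lora")  # placeholder LoRA repo id
pipe.to("cuda")

image = pipe(
    "a portrait of TOK in golden-hour light",  # TOK is the trained trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("tok_portrait.png")
```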
BRIEF-DETAILS: GGUF-quantized versions of amoral-gemma3-12B at various compression levels (2-23 GB). Includes imatrix quantization and formats optimized for different hardware configurations.
Brief-details: OLMo-2-Instruct-Math-32B is a specialized 32B parameter LLM fine-tuned by TNG Technology on AMD MI300X GPUs, optimized for mathematical reasoning using the Open R1 dataset.
BRIEF DETAILS: A 32B parameter Japanese-focused instruction-tuned LLM, quantized for llama.cpp compatibility, with strong performance on Japanese-language and reasoning tasks.
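GGUF files like these can be run with the llama.cpp CLI or, as sketched below, with the llama-cpp-python bindings (the file name and settings are placeholders for whichever quant you download):

```python
# Hedged sketch: local chat with a GGUF quantization via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="model-Q4_K_M.gguf",  # placeholder path to a downloaded quant file
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if built with GPU support
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "日本の首都はどこですか？"}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```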
BRIEF-DETAILS: LoRA model for Flux image generation, trained on Replicate. Uses the TOK trigger word, is compatible with the diffusers library, and is optimized for the SDXL base.