Brief Details: A 28.9B-parameter multimodal LLM built on Gemma2-27B, featuring enhanced image processing and chain-of-thought reasoning.
Brief-details: A specialized LoRA adapter for FLUX.1-dev that generates photorealistic images of the K9 self-propelled artillery system.
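A minimal loading sketch with diffusers; the adapter repo id and prompt are placeholders (the FLUX.1-dev base checkpoint is real but gated):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline (requires accepting the license on HF).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # reduce VRAM pressure on consumer GPUs

# Attach the K9 LoRA adapter; this repo id is hypothetical.
pipe.load_lora_weights("your-namespace/k9-artillery-flux-lora")

image = pipe(
    "a K9 self-propelled howitzer on a muddy field, photorealistic",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("k9.png")
```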
Brief-details: CogVideoX-Fun-V1.1-Reward-LoRAs provides LoRA weights trained with reward backpropagation to align CogVideoX-Fun video generation with human preferences.
Brief-details: BERT base Chinese model for masked language modeling. Pre-trained on Chinese text with a 21,128-token vocabulary and 12 hidden layers. Developed by the HuggingFace team.
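Usage is the standard fill-mask pipeline; bert-base-chinese is the canonical repo id for this checkpoint:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-chinese")

# Predict the masked character; BERT's Chinese tokenizer is character-level.
for pred in fill_mask("巴黎是法国的[MASK]都。")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```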
Brief-details: Compact 1.1B-parameter chat model quantized to 4-bit with GPTQ, based on the TinyLlama architecture. Suited to resource-constrained deployments, with multiple quantization options available.
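A loading sketch via Transformers with GPTQ support installed (optimum plus a GPTQ kernel package such as auto-gptq); the repo id is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for a 4-bit GPTQ build of TinyLlama chat.
model_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain GPTQ in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```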
Brief-details: A GGUF-quantized 8.03B-parameter language model derived from Llama3.1 via the SLERP-TIES merging technique, offered in multiple quantization levels for efficient deployment.
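GGUF files run locally through llama.cpp; a minimal llama-cpp-python sketch, with a placeholder filename (Q4_K_M is a common quality/size trade-off):

```python
from llama_cpp import Llama

# Path is a placeholder; pick the quant that fits your hardware.
llm = Llama(model_path="model-Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize SLERP merging in one line."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```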
Brief Details: Quantized version of Mistral-7B-Instruct-v0.3 in GGUF format (7.25B parameters), optimized for efficient local deployment. Available in multiple precisions (2- to 8-bit).
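GGUF repos ship one file per quantization level, so fetching only the precision you need keeps downloads small; the repo id and filename below are assumptions:

```python
from huggingface_hub import hf_hub_download

# Hypothetical quantized repo and file; adjust to the actual listing.
path = hf_hub_download(
    repo_id="bartowski/Mistral-7B-Instruct-v0.3-GGUF",
    filename="Mistral-7B-Instruct-v0.3-Q4_K_M.gguf",
)
print(path)  # pass this path to llama.cpp / llama-cpp-python
```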
Brief Details: A lightweight BERT-based sentence-transformer model (4.39M params) that maps text to 128-dimensional vectors for semantic-similarity tasks, distributed in Safetensors format.
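A usage sketch with sentence-transformers; the repo id is a placeholder for the compact 128-dim checkpoint described above:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-namespace/bert-tiny-embed-128")  # hypothetical id

emb = model.encode(
    ["How do I reset my password?", "Steps to recover account access"],
    normalize_embeddings=True,
)
print(emb.shape)                     # (2, 128) for this model's 128-dim output
print(util.cos_sim(emb[0], emb[1]))  # semantic similarity score
```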
Brief-details: An early-access image generation model trained on Danbooru and e621 datasets, featuring a quality-based tagging system and native caption support. Currently in active development.
Brief Details: Moirai-1.0-R-small is a 13.8M-parameter transformer-based time series forecasting model, pre-trained on the LOTSA dataset for universal forecasting.
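A forecasting sketch following the uni2ts README (exact signatures may differ across library versions); the toy sine series stands in for real data:

```python
import numpy as np
import pandas as pd
from gluonts.dataset.pandas import PandasDataset
from gluonts.dataset.split import split
from uni2ts.model.moirai import MoiraiForecast, MoiraiModule

# Toy univariate series; real usage would load actual observations.
df = pd.DataFrame(
    {"target": np.sin(np.arange(300) / 10)},
    index=pd.date_range("2024-01-01", periods=300, freq="h"),
)
ds = PandasDataset(df, target="target")
train, test_template = split(ds, offset=-48)
test_data = test_template.generate_instances(prediction_length=24, windows=2)

model = MoiraiForecast(
    module=MoiraiModule.from_pretrained("Salesforce/moirai-1.0-R-small"),
    prediction_length=24,
    context_length=200,
    patch_size="auto",
    num_samples=100,
    target_dim=1,
    feat_dynamic_real_dim=0,
    past_feat_dynamic_real_dim=0,
)
predictor = model.create_predictor(batch_size=32)
forecast = next(iter(predictor.predict(test_data.input)))
print(forecast.mean)  # point forecast derived from the sample paths
```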
Brief-details: Compact 33.4M-parameter embedding model optimized for sentence similarity and feature extraction, with support for PyTorch and ONNX inference.
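A plain-PyTorch feature-extraction sketch with mean pooling over non-padding tokens; the repo id is a placeholder:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "your-namespace/compact-embedder-33m"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

batch = tokenizer(["example sentence"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, seq, dim)

# Mean-pool over non-padding tokens to get one vector per sentence.
mask = batch["attention_mask"].unsqueeze(-1)
embedding = (hidden * mask).sum(1) / mask.sum(1)
print(embedding.shape)
```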
Brief Details: EfficientNet-B3 variant trained with the RandAugment (RA2) recipe on ImageNet-1k. 12.3M params, with strong ImageNet-1k classification accuracy for its size.
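Loading via timm, whose "efficientnet_b3.ra2_in1k" tag corresponds to these weights; the image path is a placeholder:

```python
import timm
import torch
from PIL import Image

model = timm.create_model("efficientnet_b3.ra2_in1k", pretrained=True).eval()

# Recreate the exact preprocessing the checkpoint was trained with.
config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**config, is_training=False)

img = Image.open("photo.jpg").convert("RGB")   # placeholder image path
with torch.no_grad():
    probs = model(transform(img).unsqueeze(0)).softmax(dim=-1)
print(probs.topk(5).indices)                   # top-5 ImageNet-1k class ids
```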
Brief Details: Anime-style text-to-image model with 67.7K downloads. Generates detailed anime characters, relying on recommended negative prompts and DPM++ sampling for best results.
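A diffusers sketch showing negative prompts plus a DPM++ 2M Karras sampler; the repo id and prompts are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "your-namespace/anime-model", torch_dtype=torch.float16  # hypothetical id
).to("cuda")

# Swap in DPM++ 2M Karras, the sampler family the card recommends.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "1girl, silver hair, detailed eyes, masterpiece, best quality",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("anime.png")
```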
Brief Details: Lightweight 29.4M-parameter LLaMA-based transformer optimized for feature extraction, distributed in BF16 precision.
Brief-details: A merged text-to-image model combining Incredible World 3 and Real Life 2, optimized for photorealistic and artistic output with an emphasis on portrait and scene generation.
Brief Details: A photorealistic Stable Diffusion 1.5-based model optimized for 768x768px generation, with specialized aspect-ratio handling.
Brief-details: A specialized LoRA model for SDXL 1.0 focused on generating versatile logo designs, supporting both detailed and minimalist styles at 1024x1024 resolution.
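A sketch of attaching such a LoRA to the SDXL base pipeline; the adapter repo id, prompt, and scale are placeholders:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the logo LoRA; this repo id is hypothetical.
pipe.load_lora_weights("your-namespace/logo-lora-sdxl")
pipe.fuse_lora(lora_scale=0.8)

image = pipe(
    "minimalist fox logo, flat vector style, white background",
    width=1024, height=1024, num_inference_steps=30,
).images[0]
image.save("logo.png")
```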
Brief-details: ColBERT-XM: Multilingual semantic search model supporting 81 languages, using token-level embeddings and an XMOD backbone. 853M params, with efficient retrieval and strong zero-shot capabilities.
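A conceptual sketch of the token-level late-interaction (MaxSim) scoring that ColBERT models use, with a generic multilingual encoder as a stand-in; ColBERT-XM itself adds the XMOD backbone plus a projection to compact token embeddings and ships its own loading code:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Generic multilingual encoder standing in for the real checkpoint.
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()

@torch.no_grad()
def token_embeddings(text: str) -> torch.Tensor:
    out = enc(**tok(text, return_tensors="pt")).last_hidden_state[0]
    return F.normalize(out, dim=-1)                  # (tokens, dim)

q = token_embeddings("effets du changement climatique")
d = token_embeddings("Rising temperatures are altering ecosystems worldwide.")

# MaxSim: each query token matches its most similar document token,
# and the per-token maxima are summed into one relevance score.
score = (q @ d.T).max(dim=1).values.sum()
print(float(score))
```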
Brief Details: A 72B-parameter math-specialized LLM supporting both Chain-of-Thought (CoT) and Tool-Integrated Reasoning (TIR) for solving math problems in English and Chinese.
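A generic chat-template sketch for CoT-style prompting; the repo id and system prompt are assumptions, and a 72B model would normally be sharded across GPUs or quantized:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/math-72b-instruct"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# CoT-style prompt; TIR instead asks the model to emit executable code
# for the hard arithmetic steps and feeds the results back in.
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": "Find the sum of all integers n with n^2 < 50."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=512)[0]))
```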
Brief-details: VideoLLaMA2-7B is a multimodal LLM that processes videos and images using a CLIP-based visual encoder and a Mistral-7B language decoder, handling 8-frame video sequences.
Brief-details: Vietnamese sentence-similarity model based on SimCSE and PhoBERT. 136M params, offered in supervised and unsupervised variants with reported state-of-the-art performance.
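A usage sketch assuming the public sup-SimCSE PhoBERT checkpoint; PhoBERT expects word-segmented input, hence the pyvi tokenization step:

```python
from pyvi import ViTokenizer
from sentence_transformers import SentenceTransformer, util

# Repo id is an assumption based on the description above.
model = SentenceTransformer("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base")

sentences = ["Hà Nội là thủ đô của Việt Nam.", "Thủ đô Việt Nam là Hà Nội."]
segmented = [ViTokenizer.tokenize(s) for s in sentences]  # word segmentation

emb = model.encode(segmented, normalize_embeddings=True)
print(util.cos_sim(emb[0], emb[1]))  # near 1.0 for paraphrases
```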