Brief Details: Text-to-image diffusion model specializing in photorealistic and sci-fi imagery, released under the CreativeML Open RAIL-M license. Popular, with 17.9K downloads.
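A minimal generation sketch with the diffusers library; the repo id below is hypothetical, since the entry does not name the checkpoint.

```python
import torch
from diffusers import DiffusionPipeline

# Hypothetical repo id -- substitute the actual checkpoint from the entry above.
pipe = DiffusionPipeline.from_pretrained(
    "author/scifi-photoreal-diffusion", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photorealistic sci-fi cityscape at dusk").images[0]
image.save("cityscape.png")
```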
Brief Details: MobileViT XS - Lightweight vision transformer (2.3M params) for mobile-friendly image classification. Efficient architecture balancing performance and size.
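A short classification sketch via transformers, assuming the XS checkpoint lives at apple/mobilevit-x-small on the Hub.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "apple/mobilevit-x-small"  # assumed Hub id for MobileViT XS
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

inputs = processor(images=Image.open("photo.jpg"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```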
Brief Details: A T5-based keyword generation model (275M params) for English & Polish texts, trained on scientific articles with strong extractive capabilities.
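A seq2seq sketch of keyword generation; the repo id is hypothetical, and any task prefix should be checked against the model card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "author/t5-keywords-en-pl"  # hypothetical id; the entry names no repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

abstract = "We propose a transformer-based approach to keyword extraction from scientific abstracts."
inputs = tokenizer(abstract, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```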
Brief Details: Baichuan-7B is a 7B parameter bilingual LLM optimized for Chinese/English, trained on 1.2T tokens with SOTA performance on MMLU/C-EVAL benchmarks.
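Baichuan-7B is a base model loaded with remote code enabled; a minimal sketch, assuming the official baichuan-inc/Baichuan-7B repo.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "baichuan-inc/Baichuan-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, trust_remote_code=True
).to("cuda")

inputs = tokenizer("The capital of France is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```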
Brief Details: A LoRA model for FLUX.1-dev that creates dark fantasy illustrations with a retro aesthetic, optimized for a strength of 1.2 and compatible with other LoRAs.
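A loading sketch with diffusers; the LoRA repo id is hypothetical, and the strength of 1.2 from the entry is applied via set_adapters.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Hypothetical LoRA repo id -- the entry names no checkpoint.
pipe.load_lora_weights("author/dark-fantasy-retro-lora", adapter_name="dark_fantasy")
pipe.set_adapters(["dark_fantasy"], adapter_weights=[1.2])  # strength 1.2 per the entry

image = pipe("a dark fantasy castle, retro illustration", num_inference_steps=28).images[0]
image.save("castle.png")
```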
Brief Details: Multilingual TTS model trained on 700k hours across 8 languages, with emphasis on English/Chinese (300k hours each). Non-commercial use only.
Brief Details: A 9.4B parameter bilingual (Chinese/English) chat model with multiple GGUF quantized versions optimized for different hardware configurations and RAM constraints.
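A sketch of running one of the GGUF files locally with llama-cpp-python; the file name is hypothetical, and the quantization level should match available RAM.

```python
from llama_cpp import Llama

# Hypothetical file name -- pick the quantization that fits your RAM (e.g. Q4_K_M).
llm = Llama(model_path="chat-9b-q4_k_m.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```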
Brief Details: Korean RoBERTa model fine-tuned on NLI data, optimized for sentence embeddings with 768-dimensional vectors. Achieves 82.83% Cosine Pearson correlation on KorSTS.
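An embedding sketch with sentence-transformers; the repo id is hypothetical.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("author/korean-roberta-nli")  # hypothetical id
embeddings = model.encode(["날씨가 정말 좋다.", "오늘은 맑은 날이다."])  # 768-dim vectors
print(util.cos_sim(embeddings[0], embeddings[1]))
```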
Brief Details: A powerful 7B parameter chat model with a 200K context window and strong performance in reasoning, math, and code. Features advanced capabilities in tool usage and data analysis.
Brief Details: Fine-tuned 8B parameter Llama-3.1 model specialized for inductive reasoning and pattern recognition, trained on multiple datasets with puzzle-solving capabilities.
Brief Details: Llama-3 8B model with extended context length (1048k tokens), optimized for chat/instruct tasks. Built on Meta's Llama-3, featuring improved instruction-following and context handling.
Brief Details: Multilingual 8B parameter LLaMA-3 variant optimized for multiple languages including Japanese, German, French, Russian, and Chinese, with strong MT-Bench scores.
Brief Details: Emu3-VisionTokenizer: A 271M parameter vision model enabling next-token prediction for multimodal tasks, supporting image/video tokenization and generation.
Brief Details: A 2.5B parameter AI safety model from IBM designed to detect risks in prompts/responses across multiple dimensions including harm, bias, and ethics.
Brief Details: Neural machine translation model for French-to-German translation. Built by Helsinki-NLP using a transformer architecture, with strong BLEU scores on news datasets.
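Helsinki-NLP checkpoints follow the opus-mt-{src}-{tgt} naming scheme, so the sketch below assumes Helsinki-NLP/opus-mt-fr-de.

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-de")
print(translator("Le chat dort sur le canapé.")[0]["translation_text"])
```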
Brief Details: Image classification model for detecting AI-generated images with 85.8M parameters, achieving 97.36% accuracy. Built on Vision Transformer architecture under Apache 2.0 license.
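A detection sketch using the image-classification pipeline; the repo id is hypothetical.

```python
from PIL import Image
from transformers import pipeline

# Hypothetical repo id -- the entry names no checkpoint.
detector = pipeline("image-classification", model="author/ai-image-detector-vit")
print(detector(Image.open("suspect.png")))  # e.g. [{'label': 'artificial', 'score': ...}, ...]
```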
Brief Details: EfficientNetV2 variant optimized for speed and accuracy, featuring 24.1M params and RandAugment training. Ideal for ImageNet classification tasks.
Brief Details: OmDet-Turbo is a 115M parameter zero-shot object detection model with real-time capabilities, built on a transformer architecture with a Swin-Tiny backbone.
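A sketch following the transformers OmDet-Turbo usage pattern, assuming the omlab/omdet-turbo-swin-tiny-hf checkpoint; exact post-processing arguments may differ by library version.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, OmDetTurboForObjectDetection

model_id = "omlab/omdet-turbo-swin-tiny-hf"  # assumed Hub id
processor = AutoProcessor.from_pretrained(model_id)
model = OmDetTurboForObjectDetection.from_pretrained(model_id)

image = Image.open("street.jpg")
classes = ["car", "person", "bicycle"]  # free-form class prompts
inputs = processor(image, text=classes, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs, classes=classes, target_sizes=[image.size[::-1]], score_threshold=0.3
)[0]
print(results["boxes"], results["scores"])
```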
Brief Details: DistilBERT-based NER model with 65.2M params. Achieves 92% F1 score on CoNLL-2003. Efficient for named entity recognition tasks.
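A quick sketch via the token-classification pipeline; the repo id is hypothetical.

```python
from transformers import pipeline

# Hypothetical repo id -- any DistilBERT NER checkpoint fine-tuned on CoNLL-2003 fits.
ner = pipeline("ner", model="author/distilbert-ner-conll03", aggregation_strategy="simple")
print(ner("Ada Lovelace was born in London."))
```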
Brief Details: A Helsinki-NLP transformer model for Catalan-to-English translation, achieving a 51.4 BLEU score on the Tatoeba dataset, built using OPUS data and the Marian architecture.
Brief Details: A lightweight 8.42M parameter transformer model optimized for text generation, utilizing FP16 precision and the Safetensors format for efficient inference.