Brief Details: MusicGen-small: A 591M parameter text-to-music AI model by Meta/Facebook capable of generating high-quality instrumental music from text descriptions at 32kHz.
BRIEF-DETAILS: Mobius is a state-of-the-art debiased diffusion model using domain-agnostic debiasing and constructive deconstruction for superior image generation across diverse styles.
BRIEF-DETAILS: EVA02 small variant vision transformer (22.1M params) with 336px input, pre-trained on ImageNet-22k using masked image modeling, fine-tuned on ImageNet-1k.
Brief-details: A Danish to English translation model by Helsinki-NLP using transformer architecture, achieving 63.6 BLEU score on Tatoeba dataset. Open-source with Apache 2.0 license.
Brief-details: FastPDN is a Polish Named Entity Recognition model with 124M parameters, built on BERT architecture and achieving 0.68 F1 score on test data. Optimized for Polish text analysis.
Brief-details: Encoder-only half-precision protein language model based on T5 architecture, trained on UniRef50 dataset for efficient protein embedding generation with reduced GPU memory requirements.
Brief Details: OmniGen-v1 is a unified 3.88B parameter image generation model capable of multi-modal prompting, supporting text-to-image and image-to-image generation without additional plugins.
Brief-details: CLAP (Contrastive Language-Audio Pretraining) model for audio-text matching, featuring SWINTransformer and RoBERTa architectures. Trained on general audio, music, and speech.
BRIEF DETAILS: Multilingual LayoutLM-style transformer combining LiLT with XLM-RoBERTa, supporting 94 languages with 284M parameters. MIT licensed.
BRIEF DETAILS: Efficient VoVNet variant with 6.55M params, trained on ImageNet-1k using RandAugment. Optimized for energy and GPU computation with depthwise separable convolutions.
Brief Details: Optimized 7B parameter Qwen2.5 instruction model with 4-bit quantization, offering 2x faster performance and 60% less memory usage.
Brief Details: PII detection model with 278M params, supports 17 PII types across 6 languages with 98.27% recall and 99.44% accuracy
Brief-details: SPECTER is an AI model for generating document-level embeddings, pre-trained on citation graphs. Built by AllenAI with 63K+ downloads.
Brief-details: Advanced 46.7B parameter Mixtral-based conversational AI model with enhanced coding capabilities, trained on 8 datasets with 16k context window. Apache 2.0 licensed.
Brief-details: ECAPA-TDNN language identification model trained on CommonLanguage dataset, capable of identifying 45 languages with 85% accuracy. Ideal for multilingual speech processing.
Brief-details: Mixtral-8x7B-Instruct is a powerful 46.7B parameter MoE model quantized in GGUF format, supporting multiple languages and optimized for various hardware configurations.
Brief Details: A powerful 13B parameter chat assistant fine-tuned from Llama 2, developed by LMSYS for research and conversational AI tasks.
Brief Details: DeBERTa-v3 model fine-tuned for prompt injection detection with 184M params, achieving 95.25% accuracy. Optimized for English language security.
Brief-details: A quantized version of Zephyr ORPO 141B, based on Mixtral-8x22B, offering various precision levels (2-16 bit) in GGUF format for efficient text generation
Brief-details: Kandinsky 2.1 is an advanced text-to-image diffusion model combining CLIP encoding with diffusion image prior, offering high-quality image generation and manipulation capabilities.
BRIEF DETAILS: ChemBERTa is a BERT-like transformer model trained on SMILES chemical strings, achieving 0.398 loss over 5 epochs. Specializes in chemical structure prediction and analysis.