Brief Details: A multilingual sentence transformer model with 278M parameters that maps text to 768-dimensional vectors, optimized for semantic search and clustering.
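A minimal semantic-search sketch with the sentence-transformers library; the checkpoint ID below is a placeholder guess (a multilingual MPNet paraphrase model matches the 278M-param / 768-dim description) and is not confirmed by this entry.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder ID: substitute the actual multilingual checkpoint.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

corpus = ["Der Hund spielt im Garten.", "The cat sleeps on the sofa."]
query = "a dog playing outside"

corpus_emb = model.encode(corpus, convert_to_tensor=True)  # shape (2, 768)
query_emb = model.encode(query, convert_to_tensor=True)    # shape (768,)

# Cosine similarity ranks corpus sentences against the query.
print(util.cos_sim(query_emb, corpus_emb))
```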
Brief Details: MusicGen-Large is a 3.3B parameter text-to-music generation model by Meta AI, capable of producing high-quality instrumental music from text descriptions.
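A short generation sketch via the transformers API, assuming the facebook/musicgen-large checkpoint on Hugging Face:

```python
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-large")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-large")

inputs = processor(text=["upbeat acoustic folk with hand claps"], padding=True, return_tensors="pt")
# ~256 new tokens is roughly five seconds of audio.
audio = model.generate(**inputs, do_sample=True, max_new_tokens=256)

rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=rate, data=audio[0, 0].numpy())
```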
Brief Details: CodeRankEmbed: a 137M parameter bi-encoder model for code retrieval with an 8192-token context length, outperforming competing embedding models on code search tasks.
Brief Details: EleutherAI's 5.8B parameter Korean language model, trained on 863GB of Korean text and delivering state-of-the-art performance on Korean text generation and NLP tasks.
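A minimal generation sketch, assuming the checkpoint is EleutherAI/polyglot-ko-5.8b (an assumption inferred from the size and training data described):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/polyglot-ko-5.8b"  # assumed checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# "The capital of Korea is" in Korean, continued by the model.
inputs = tokenizer("한국의 수도는", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```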
Brief Details: A transformer-based feature-extraction model built on the LLaMA architecture, stored as F32 tensors and optimized for text-generation inference.
Brief Details: Llama-2-13b-hf is Meta's 13B parameter language model featuring advanced text generation capabilities, trained on 2T tokens with enhanced reasoning and knowledge comprehension.
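A standard transformers pipeline sketch; meta-llama/Llama-2-13b-hf is a gated checkpoint, so license acceptance on Hugging Face is required first.

```python
import torch
from transformers import pipeline

# Gated checkpoint: requires accepting Meta's license on Hugging Face.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-13b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
print(generator("The key idea behind attention is", max_new_tokens=50)[0]["generated_text"])
```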
Brief Details: EfficientNet B4 variant trained on ImageNet-1k using RandAugment (RA2). 19.5M params, optimized for image classification with strong accuracy-efficiency trade-off.
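A minimal timm inference sketch; the checkpoint name below is an assumption matching the RA2/ImageNet-1k description.

```python
import timm
import torch
from PIL import Image

# Checkpoint name is an assumption; verify against timm's model listing.
model = timm.create_model("efficientnet_b4.ra2_in1k", pretrained=True).eval()

# Resolve the preprocessing (input size, mean/std) baked into the checkpoint.
cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=False)

img = Image.open("cat.jpg").convert("RGB")
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))  # (1, 1000) ImageNet-1k logits
print(int(logits.argmax(dim=1)))
```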
Brief Details: CodeT5-base model fine-tuned for multi-lingual code summarization, supporting 6 programming languages with state-of-the-art performance.
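A short summarization sketch, assuming the Salesforce/codet5-base-multi-sum checkpoint:

```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration

model_id = "Salesforce/codet5-base-multi-sum"  # assumed checkpoint ID
tokenizer = RobertaTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

code = "def add(a, b):\n    return a + b"
input_ids = tokenizer(code, return_tensors="pt").input_ids
summary_ids = model.generate(input_ids, max_length=24)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```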
Brief Details: Riffusion is a text-to-audio model based on Stable Diffusion v1.5, specialized in real-time music generation by producing spectrogram images.
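Riffusion loads like any Stable Diffusion checkpoint in diffusers; the output is a spectrogram image, and turning it into audio requires separate reconstruction tooling (e.g. the riffusion package). A sketch assuming the riffusion/riffusion-model-v1 checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1", torch_dtype=torch.float16
).to("cuda")

# The model emits a spectrogram *image*; audio reconstruction happens downstream.
image = pipe("funk bassline with a jazzy saxophone solo", num_inference_steps=25).images[0]
image.save("spectrogram.png")
```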
Brief Details: RoBERTa-based detector model (125M params) fine-tuned to identify GPT-2-generated text with ~95% accuracy. Built by OpenAI for synthetic-text detection.
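A minimal classification sketch, assuming the openai-community/roberta-base-openai-detector checkpoint ID:

```python
from transformers import pipeline

# Assumed checkpoint ID for OpenAI's released GPT-2 output detector.
detector = pipeline("text-classification", model="openai-community/roberta-base-openai-detector")
print(detector("The quick brown fox jumps over the lazy dog."))
```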
Brief Details: Multilingual NER model based on the BERT architecture with 209M parameters, capable of identifying custom entity types. Supports multiple languages and achieves state-of-the-art performance.
Brief Details: Large-scale CLIP model using the ConvNeXt-XXLarge architecture, trained on the LAION-2B dataset and achieving a state-of-the-art 79.4% ImageNet zero-shot accuracy.
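A zero-shot classification sketch with open_clip; the pretrained tag below is an assumption, so check open_clip.list_pretrained() for the exact name.

```python
import torch
import open_clip
from PIL import Image

# Pretrained tag is an assumption; verify with open_clip.list_pretrained().
model, _, preprocess = open_clip.create_model_and_transforms(
    "convnext_xxlarge", pretrained="laion2b_s34b_b82k_augreg_soup"
)
tokenizer = open_clip.get_tokenizer("convnext_xxlarge")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    img_f = model.encode_image(image)
    txt_f = model.encode_text(text)
    img_f /= img_f.norm(dim=-1, keepdim=True)
    txt_f /= txt_f.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_f @ txt_f.T).softmax(dim=-1)
print(probs)  # zero-shot class probabilities
```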
Brief Details: Sakura-14B is a specialized Japanese-to-Chinese light novel translation model with 14.8B parameters, built on a GQA-based architecture and featuring improved translation accuracy and terminology consistency.
Brief Details: Google's mT5, a massively multilingual text-to-text transformer supporting 101 languages. Pre-trained on the mC4 dataset, with state-of-the-art performance on multilingual NLP tasks after fine-tuning.
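mT5 ships pre-trained only, so it must be fine-tuned before downstream use. A minimal training-style forward pass, assuming the google/mt5-base size of the family (the entry doesn't specify which):

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

# google/mt5-base is one size of the family; the entry doesn't specify which.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")

# One training-style forward pass on a toy translation pair.
batch = tokenizer(["translate English to German: Hello"], text_target=["Hallo"], return_tensors="pt")
print(float(model(**batch).loss))
```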
Brief Details: Named entity recognition model for Balkan languages (Bosnian, Croatian, Montenegrin, Serbian) with 110M params, achieving an F1 score of 91.38.
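A minimal token-classification sketch; the model ID below is a placeholder, since the entry doesn't name the checkpoint.

```python
from transformers import pipeline

# Placeholder ID: substitute the actual BCMS NER checkpoint.
ner = pipeline("token-classification", model="your-org/bcms-ner", aggregation_strategy="simple")
print(ner("Novak Đoković je rođen u Beogradu."))  # expect PER and LOC spans
```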
Brief Details: A powerful SDXL-based text-to-image model trained on high-quality datasets, optimized for artistic and realistic generation with improved aesthetics and lighting.
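A generic diffusers sketch for an SDXL checkpoint; the model ID below is a placeholder, since the entry doesn't name the repo.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder ID: substitute the fine-tuned SDXL checkpoint described above.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "your-org/sdxl-finetune", torch_dtype=torch.float16
).to("cuda")

image = pipe("portrait photo, golden hour lighting, 85mm lens", num_inference_steps=30).images[0]
image.save("portrait.png")
```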
Brief Details: 12B parameter language model from EleutherAI's Pythia suite, trained on The Pile dataset for research and interpretability studies. Supports text generation with 36 transformer layers.
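A loading sketch assuming the EleutherAI/pythia-12b repo; the suite's distinctive feature for interpretability work is that intermediate training checkpoints are exposed as git revisions.

```python
from transformers import AutoTokenizer, GPTNeoXForCausalLM

# Pythia exposes intermediate training checkpoints as git revisions;
# step143000 is the final training step for this size.
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-12b", revision="step143000")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-12b", revision="step143000")
```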
Brief Details: Yi-1.5-34B-Chat-16K is a powerful 34.4B parameter LLM with 16K context window, fine-tuned for chat and enhanced reasoning capabilities
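A chat-template sketch via transformers, assuming the 01-ai/Yi-1.5-34B-Chat-16K checkpoint ID:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-1.5-34B-Chat-16K"  # assumed checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize the Pythagorean theorem in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```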
Brief Details: Arabic BERT base model trained on 8.2B words from OSCAR and Wikipedia. 111M parameters, optimized for Arabic NLP tasks including dialectal Arabic.
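A fill-mask sketch; the asafaya/bert-base-arabic checkpoint ID is an assumption matching the description.

```python
from transformers import pipeline

# Assumed checkpoint ID matching the description.
fill = pipeline("fill-mask", model="asafaya/bert-base-arabic")
# "The capital of Egypt is [MASK]."
print(fill("عاصمة مصر هي [MASK]."))
```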
Brief Details: State-of-the-art depth estimation model trained on 595K synthetic and 62M real images. Features 97.5M params and a DPT architecture with a DINOv2 backbone; roughly 10x faster than Stable Diffusion-based depth models.
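A minimal depth-estimation sketch via the transformers pipeline; the model ID below is a placeholder, since the entry doesn't name the checkpoint.

```python
from PIL import Image
from transformers import pipeline

# Placeholder ID: substitute the actual DPT-on-DINOv2 depth checkpoint.
depth = pipeline("depth-estimation", model="your-org/depth-base-hf")
result = depth(Image.open("street.jpg"))
result["depth"].save("depth.png")  # per-pixel relative depth map as an image
```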
Brief Details: A LoRA model for Stable Diffusion focused on photorealistic image generation, specializing in amateur-style photos with natural composition and lighting effects.
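A generic diffusers LoRA-loading sketch; both the base model and the LoRA repo below are placeholders, since the entry names neither.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model and LoRA repo are placeholders; the entry names neither.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("your-org/amateur-photo-lora")

image = pipe("candid snapshot of friends at a picnic, natural light").images[0]
image.save("photo.png")
```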