Brief-details: 8B parameter GGUF-quantized language model optimized for long-context text generation, supporting both English and Chinese, with multiple compression variants for different hardware configurations.
Brief-details: A high-performing 7B parameter chat model quantized for CPU/GPU inference in the GGUF format, based on Mistral-7B with strong MT-Bench scores.
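GGUF builds like the two above are typically run locally with llama.cpp or its Python bindings. A minimal sketch with llama-cpp-python, where the .gguf file name and settings are assumptions to adapt to the variant you download:

```python
# Minimal sketch of running a GGUF chat model with llama-cpp-python.
# The .gguf file name below is hypothetical; pick the quantization variant
# that fits your hardware (e.g. Q4_K_M for modest RAM, Q8_0 for quality).
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-chat.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,       # context window; raise for long-context variants
    n_gpu_layers=-1,  # offload all layers to GPU; use 0 for CPU-only inference
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```

Lower-bit variants trade accuracy for memory, which is why these repos ship several quantizations.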
Brief-details: ALBERT-based emotion classification model achieving 93.6% accuracy, optimized for Twitter sentiment analysis with 6 emotion categories.
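For classifiers like this one, the transformers pipeline API is usually all that's needed. A sketch, assuming the checkpoint id bhadresh-savani/albert-base-v2-emotion (verify against the actual model page):

```python
# Emotion classification via the transformers pipeline.
# The checkpoint id is an assumption; substitute the actual model id.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/albert-base-v2-emotion",  # assumed checkpoint id
    top_k=None,  # return scores for all 6 emotion labels
)
print(classifier("I can't believe we finally shipped the release!"))
```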
Brief-details: Openjourney is a fine-tuned Stable Diffusion model optimized for Midjourney-style image generation, featuring 123M parameters and specialized for text-to-image tasks.
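Openjourney loads like any Stable Diffusion checkpoint in diffusers. In this sketch the checkpoint id prompthero/openjourney and the "mdjrny-v4 style" trigger phrase are assumptions drawn from common usage:

```python
# Midjourney-style text-to-image generation with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "prompthero/openjourney",  # assumed checkpoint id
    torch_dtype=torch.float16,
).to("cuda")

# "mdjrny-v4 style" is the trigger phrase commonly used with this model.
image = pipe("mdjrny-v4 style, a castle floating above the clouds").images[0]
image.save("castle.png")
```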
Brief-details: Pythia-1.4B is a research-focused language model with 1.4B parameters, trained on The Pile dataset for interpretability studies, featuring 154 checkpoints for analysis.
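The per-step checkpoints are what make Pythia useful for interpretability work; they are published as git revisions of the same repo. A sketch, assuming a revision name of the form "stepN":

```python
# Load an intermediate Pythia training checkpoint by git revision.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-1.4b",
    revision="step3000",  # assumed revision name; one of the 154 checkpoints
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b")
```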
Brief-details: A Stable Diffusion-based text-to-image model optimized for anime-style artwork, with 28K+ downloads and a specialization in character generation.
Brief-details: A powerful German/English embedding model with 487M parameters, offering binary quantization and Matryoshka Representation Learning (MRL) support. Achieves SOTA performance with an NDCG@10 of 51.7.
Brief-details: Quantized 7B parameter Qwen model offering multiple GGUF variants for different hardware configurations, optimized for conversation and text generation.
Brief-details: Bilingual embedding model optimized for RAG applications, supporting Chinese and English with strong cross-lingual capabilities. 768-dimensional embeddings with Apache 2.0 license.
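A minimal cross-lingual retrieval sketch with sentence-transformers; the model id below is a placeholder for the bilingual checkpoint described above:

```python
# Cross-lingual retrieval with a bilingual embedding model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-org/bilingual-embedding-base")  # placeholder id

query = "什么是检索增强生成？"  # "What is retrieval-augmented generation?"
docs = [
    "Retrieval-augmented generation grounds LLM answers in retrieved documents.",
    "GGUF is a file format for running quantized LLMs locally.",
]

q_emb = model.encode(query, normalize_embeddings=True)  # 768-dim vector
d_emb = model.encode(docs, normalize_embeddings=True)
print(util.cos_sim(q_emb, d_emb))  # higher score = more relevant passage
```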
Brief-details: A specialized text-to-image diffusion model optimized for photorealistic outputs, particularly suited for film-like and realistic photography generation.
Brief-details: WavLM-Base speaker verification model for 16 kHz speech audio, trained on 94k hours of data using utterance mixing and an X-Vector head with Additive Margin Softmax loss.
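Speaker verification with this model reduces to comparing X-Vector embeddings. A sketch, assuming the checkpoint id microsoft/wavlm-base-sv and using random arrays as stand-ins for real 16 kHz recordings:

```python
# Speaker verification: embed two utterances, compare with cosine similarity.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, WavLMForXVector

extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/wavlm-base-sv")  # assumed id
model = WavLMForXVector.from_pretrained("microsoft/wavlm-base-sv")

# Stand-ins for real speech: two 1-second, 16 kHz mono clips.
audio1 = np.random.randn(16000).astype("float32")
audio2 = np.random.randn(16000).astype("float32")

inputs = extractor([audio1, audio2], sampling_rate=16000,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    embeddings = model(**inputs).embeddings

score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
print(float(score))  # same speaker if above a tuned threshold
```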
Brief-details: BERT Large model configuration for Habana Gaudi HPU processors, enabling optimized training with mixed precision and fused operations.
Brief-details: EVA-Qwen2.5-72B is a powerful 72.7B parameter LLM with multiple quantized versions, trained on 9 high-quality datasets, optimized for conversational and creative tasks.
Brief-details: BioLORD-2023-M is a multilingual biomedical language model with 278M parameters, supporting 7 European languages and optimized for medical text similarity tasks.
Brief-details: SDXL InstructPix2Pix model for image editing with text instructions. Built on SDXL base, supports 768x768 resolution, trained for 15k steps on A100 GPUs.
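Instruction-based editing with this model is a one-call affair in diffusers; the checkpoint id below is an assumption:

```python
# Edit an image with a natural-language instruction (SDXL InstructPix2Pix).
import torch
from diffusers import StableDiffusionXLInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(
    "diffusers/sdxl-instructpix2pix-768",  # assumed checkpoint id
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("photo.png").resize((768, 768))  # model trained at 768x768
edited = pipe("make it look like a winter scene", image=image).images[0]
edited.save("edited.png")
```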
Brief-details: Korean SBERT model trained on NLI data, maps sentences to 768D vectors. Achieves 82.24% Cosine Pearson correlation on KorSTS. Popular with 28.5K downloads.
Brief-details: Embedding model with in-context learning capabilities and 7.11B parameters; achieves SOTA on MTEB/BEIR benchmarks, ideal for semantic search and retrieval tasks.
Brief-details: LLaMA-7B is a powerful language model from Meta AI featuring 7B parameters, trained on diverse datasets and released under a noncommercial research license, with strong reasoning capabilities.
Brief-details: BERT base model pre-trained on Indonesian Wikipedia & newspapers (1.5GB), optimized for masked language modeling with a 32k vocabulary.
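Masked language modeling maps directly onto the fill-mask pipeline. A sketch, assuming the checkpoint id cahya/bert-base-indonesian-1.5G:

```python
# Fill-mask with the Indonesian BERT base model.
from transformers import pipeline

fill = pipeline("fill-mask", model="cahya/bert-base-indonesian-1.5G")  # assumed id
# "The capital of Indonesia is [MASK]."
for pred in fill("Ibu kota Indonesia adalah [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```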
Brief-details: VILA1.5-3b is a visual language model capable of multi-image reasoning and visual chain-of-thought, optimized for edge deployment with AWQ 4-bit quantization.
Brief-details: Arabic sentiment analysis model built on AraBERT, achieving 80.03% accuracy and 0.6543 macro F1 score. 135M parameters, Apache 2.0 licensed.