Brief-details: 8B parameter Llama-3-based model optimized for function calling and JSON outputs, featuring ChatML format and enhanced conversational abilities.
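A minimal usage sketch in Python (transformers), assuming the checkpoint ships a ChatML-style chat template; the model id is a hypothetical placeholder, and the JSON-only system prompt is just one way to elicit structured output.

```python
# Minimal sketch: ChatML-style prompting via the model's chat template (assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/llama-3-8b-function-calling"  # hypothetical placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant. Answer only with a JSON object."},
    {"role": "user", "content": "What is the weather like in Paris right now?"},
]
# apply_chat_template renders the ChatML-style prompt the model was trained with
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```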
Brief Details: Romanian speech recognition model based on wav2vec2-large-xlsr-53, achieving 24.84% WER on Common Voice dataset. Optimized for 16kHz audio processing.
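A minimal transcription sketch, assuming the standard Wav2Vec2 CTC API; the model id and audio path are placeholders, and the audio is resampled to the required 16 kHz.

```python
# Minimal sketch: CTC decoding with a wav2vec2-xlsr checkpoint (id is a placeholder).
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "some-user/wav2vec2-large-xlsr-53-romanian"  # hypothetical placeholder id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)        # resample to the required 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```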
Brief Details: BEiT-v2 vision transformer (103M params) for image classification, pre-trained on ImageNet-1k with masked image modeling and fine-tuned on ImageNet-22k.
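A minimal classification sketch with timm; the checkpoint name is an assumption inferred from the description (BEiT-v2 base, MIM pre-training on ImageNet-1k, fine-tuned on ImageNet-22k), and the image path is a placeholder.

```python
# Minimal sketch: image classification with a timm checkpoint (name is assumed).
import timm
import torch
from PIL import Image

model = timm.create_model("beitv2_base_patch16_224.in1k_ft_in22k", pretrained=True)  # assumed name
model.eval()
config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**config, is_training=False)

image = Image.open("example.jpg").convert("RGB")      # placeholder path
with torch.no_grad():
    probs = model(transform(image).unsqueeze(0)).softmax(dim=-1)
print(probs.topk(5))
```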
Brief Details: Qwen1.5-14B-Chat-AWQ is an AWQ 4-bit quantized version of the 14B-parameter Qwen1.5 chat model, offering efficient multilingual chat capabilities with 32K context length support.
Brief Details: A powerful 7.67B parameter multimodal AI model from Allen AI that combines vision and language capabilities, with benchmark performance between GPT-4V and GPT-4o.
Brief-details: Lightweight MaxViT model variant with 15.5M parameters optimized for 256x256 images, achieving 82.93% top-1 accuracy on ImageNet-1k.
Brief Details: Italian BERT model specialized in sentence embeddings, offering 768-dimensional vectors for semantic similarity tasks. 110M parameters, MIT licensed.
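A minimal similarity sketch with sentence-transformers; the model id is a hypothetical placeholder for the Italian sentence-embedding model described above.

```python
# Minimal sketch: 768-dim sentence embeddings and cosine similarity (id is a placeholder).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("some-org/sentence-bert-base-italian")  # hypothetical placeholder id
sentences = [
    "Una ragazza si pettina i capelli.",
    "Una donna si sta spazzolando i capelli.",
]
embeddings = model.encode(sentences)                 # shape: (2, 768)
print(util.cos_sim(embeddings[0], embeddings[1]))    # cosine similarity of the two sentences
```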
Brief-details: Dreamlike Diffusion 1.0 is a fine-tuned version of Stable Diffusion 1.5, optimized for high-quality artistic image generation with improved aesthetics and composition.
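A minimal generation sketch with diffusers; `dreamlike-art/dreamlike-diffusion-1.0` is the commonly used hub id for this model but is treated as an assumption here, as is the availability of a CUDA GPU.

```python
# Minimal sketch: text-to-image generation with the Stable Diffusion pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "dreamlike-art/dreamlike-diffusion-1.0",         # assumed hub id
    torch_dtype=torch.float16,
).to("cuda")
image = pipe("dreamlikeart, a painted castle above the clouds, golden hour").images[0]
image.save("castle.png")
```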
Brief Details: Qwen2.5-72B-Instruct-AWQ is a 4-bit quantized large language model with 72.7B parameters, offering enhanced capabilities in coding, mathematics, and multilingual support across 29+ languages.
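A minimal serving sketch with vLLM, which can load AWQ checkpoints directly; the hub id and the 4-GPU tensor-parallel setting are assumptions, and for best results the chat template would normally be applied to the prompt.

```python
# Minimal sketch: serving an AWQ-quantized checkpoint with vLLM (id and GPU count assumed).
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-72B-Instruct-AWQ",   # assumed hub id
    quantization="awq",
    tensor_parallel_size=4,                  # assumes 4 GPUs are available
)
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain AWQ quantization in two sentences."], params)
print(outputs[0].outputs[0].text)
```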
Brief Details: ProtBert-BFD is a BERT-based protein language model trained on 2.1B sequences, specializing in protein feature extraction and masked language modeling tasks.
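A minimal feature-extraction sketch, assuming the usual ProtBert convention of space-separated amino acids with rare residues mapped to X; `Rostlab/prot_bert_bfd` is the customary hub id, assumed here.

```python
# Minimal sketch: per-residue protein embeddings from a ProtBert-style model.
import re
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert_bfd", do_lower_case=False)  # assumed id
model = BertModel.from_pretrained("Rostlab/prot_bert_bfd")

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"                 # toy protein sequence
sequence = " ".join(re.sub(r"[UZOB]", "X", sequence))         # space-separate residues, map rare ones to X
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state            # per-residue feature vectors
print(embeddings.shape)
```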
Brief-details: Multilingual sentence embedding model with 135M params, maps text to 512D vectors. Optimized for semantic search and clustering across languages.
Brief-details: A 1.08B parameter language model from EleutherAI's Pythia suite, trained on the deduplicated Pile dataset and designed for research and interpretability studies.
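A minimal sketch of the suite's interpretability hook: intermediate training checkpoints are exposed as git revisions of the repository; the step tag below is assumed to be one of the standard revision names.

```python
# Minimal sketch: loading a specific training-step revision of a Pythia checkpoint.
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model_id = "EleutherAI/pythia-1b-deduped"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTNeoXForCausalLM.from_pretrained(model_id, revision="step143000")  # assumed step tag

inputs = tokenizer("The Pile is a large, diverse", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))
```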
Brief-details: A powerful 34B parameter LLM built on Yi-34B, fine-tuned on 1M GPT-4-generated entries. Excels in reasoning and achieves strong benchmark performance.
Brief-details: Artigenz-Coder-DS-6.7B is a lightweight code generation model fine-tuned from DeepSeek-Coder, offering 6.7B parameters with only a 13GB memory footprint for efficient local development.
Brief Details: 70B parameter Llama-3.1-based model optimized for creative writing and roleplay, featuring a 128K context length and a diverse training dataset.
Brief Details: SegFormer B5 model fine-tuned on the ADE20k dataset for semantic segmentation. Features a hierarchical Transformer encoder and a lightweight all-MLP decode head; 640x640 input resolution.
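A minimal segmentation sketch with transformers; `nvidia/segformer-b5-finetuned-ade-640-640` is the hub id matching this description, treated as an assumption, and the image path is a placeholder.

```python
# Minimal sketch: semantic segmentation with a SegFormer checkpoint (id assumed).
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

checkpoint = "nvidia/segformer-b5-finetuned-ade-640-640"       # assumed hub id
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("scene.jpg").convert("RGB")                 # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # (1, 150, H/4, W/4) class logits over ADE20k labels
segmentation = logits.argmax(dim=1)[0]   # per-pixel class ids at reduced resolution
print(segmentation.shape)
```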
Brief Details: Spanish BERT model fine-tuned for question-answering tasks, achieving an 86.07% F1 score on SQuAD2.0-es. Built on the BETO base model with whole-word masking.
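A minimal QA sketch using the transformers question-answering pipeline; the model id is a hypothetical placeholder for the BETO-based SQuAD2.0-es checkpoint described above.

```python
# Minimal sketch: extractive question answering with the QA pipeline (id is a placeholder).
from transformers import pipeline

qa = pipeline("question-answering", model="some-user/beto-base-spanish-squad2-es")  # placeholder id
result = qa(
    question="¿Dónde vive Ana?",
    context="Ana vive en Madrid y trabaja como ingeniera de software.",
)
print(result["answer"], result["score"])
```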
Brief-details: Realistic Vision V4.0 is a powerful text-to-image model focused on photorealistic generation, featuring recommended sampling settings and a detailed negative prompt for enhanced output quality.
Brief-details: Optimized English speech recognition model converted from Whisper medium.en to the CTranslate2 format for faster inference; MIT licensed.
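A minimal transcription sketch with the faster-whisper library (CTranslate2 backend); the size string `medium.en` is assumed to resolve to this converted checkpoint, and the audio path is a placeholder.

```python
# Minimal sketch: CTranslate2-backed Whisper transcription via faster-whisper.
from faster_whisper import WhisperModel

model = WhisperModel("medium.en", device="cpu", compute_type="int8")  # int8 for CPU inference
segments, info = model.transcribe("meeting.wav")                      # placeholder path
for segment in segments:
    print(f"[{segment.start:.1f}s -> {segment.end:.1f}s] {segment.text}")
```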
Brief Details: BERTurk: A 185M parameter BERT model for Turkish language processing, trained on 35GB of text with a 128k vocabulary. MIT licensed.
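A minimal fill-mask sketch; `dbmdz/bert-base-turkish-128k-cased` is the hub id that matches the 128k-vocab description, treated here as an assumption.

```python
# Minimal sketch: masked-token prediction with a Turkish BERT checkpoint (id assumed).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-turkish-128k-cased")  # assumed hub id
for prediction in fill_mask("İstanbul Türkiye'nin en büyük [MASK].")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```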
Brief-details: ConvNeXt tiny model trained on ImageNet-12k with 36.9M parameters, optimized for image classification at roughly 4.5 GMACs of compute.