Brief-details: Large-scale Vision Transformer (ViT) model pretrained on ImageNet-21k (14M images). Uses a 32x32 patch size at 224x224 input resolution and serves as a backbone for downstream computer vision tasks.
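A minimal feature-extraction sketch, assuming this corresponds to the google/vit-large-patch32-224-in21k checkpoint (the ImageNet-21k release ships without a fine-tuned classification head):

    from transformers import ViTImageProcessor, ViTModel
    from PIL import Image

    repo = "google/vit-large-patch32-224-in21k"    # assumed checkpoint id
    processor = ViTImageProcessor.from_pretrained(repo)
    model = ViTModel.from_pretrained(repo)

    image = Image.open("example.jpg")              # any RGB image
    inputs = processor(images=image, return_tensors="pt")
    features = model(**inputs).last_hidden_state   # per-patch embeddings incl. the [CLS] token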
Brief-details: ConvNeXtV2-Tiny model with 28.6M params, trained via FCMAE and fine-tuned on ImageNet-22k/1k. Efficient architecture delivering 83.9% top-1 accuracy.
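A minimal inference sketch via timm, assuming the checkpoint tag convnextv2_tiny.fcmae_ft_in22k_in1k:

    import timm
    import torch

    model = timm.create_model("convnextv2_tiny.fcmae_ft_in22k_in1k", pretrained=True)  # assumed tag
    model.eval()
    with torch.no_grad():
        logits = model(torch.randn(1, 3, 224, 224))   # dummy batch -> ImageNet-1k class logits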
Brief-details: GLiNER-large-v2 is a generalist named entity recognition (NER) model that extracts arbitrary, user-specified entity types, building on prior GLiNER architectures with enhanced capabilities.
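A short sketch with the gliner library, assuming the Hub id urchade/gliner_large-v2; entity types are supplied at inference time:

    from gliner import GLiNER

    model = GLiNER.from_pretrained("urchade/gliner_large-v2")   # assumed repo id
    text = "Marie Curie won the Nobel Prize in Physics in 1903."
    entities = model.predict_entities(text, ["person", "award", "date"])
    for ent in entities:
        print(ent["text"], "->", ent["label"])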
Brief Details: A 3B parameter LLaMA-based language model by thkim0305. Limited public information available. Hosted on Hugging Face Hub.
Brief Details: Qwen2 72B instruction-tuned model with 4-bit quantization, supporting a 131K-token context length with a reduced memory footprint for more efficient deployment.
Brief-details: CheXagent-2-3b is a 2.3B parameter foundation model for chest X-ray interpretation developed by Stanford AIMI, capable of advanced medical image analysis and natural language interaction.
Brief-details: BERT model fine-tuned on MRPC (Microsoft Research Paraphrase Corpus) for paraphrase detection, optimized for sentence pair classification tasks.
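A minimal sentence-pair sketch, assuming a standard MRPC checkpoint such as bert-base-cased-finetuned-mrpc:

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    repo = "bert-base-cased-finetuned-mrpc"        # assumed checkpoint id
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForSequenceClassification.from_pretrained(repo)

    inputs = tokenizer("The cat sat on the mat.",
                       "A cat was sitting on the mat.",
                       return_tensors="pt")
    probs = torch.softmax(model(**inputs).logits, dim=-1)
    print(probs)   # class 1 is usually the "equivalent" (paraphrase) label for MRPC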
Brief Details: Nomic-embed-text-v1.5-Embedding-GGUF is a GGUF-format conversion of the nomic-embed-text-v1.5 embedding model for efficient text vectorization and semantic search applications.
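A minimal sketch with llama-cpp-python, assuming a locally downloaded GGUF file (the filename below is hypothetical); nomic-embed models expect a task prefix on the input text:

    from llama_cpp import Llama

    llm = Llama(model_path="nomic-embed-text-v1.5.Q4_K_M.gguf", embedding=True)  # hypothetical filename
    emb = llm.create_embedding("search_query: what is semantic search?")
    vec = emb["data"][0]["embedding"]
    print(len(vec))   # 768 dimensions for nomic-embed-text-v1.5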
BRIEF-DETAILS: Advanced chest X-ray interpretation model from Stanford AIMI, built on a ViT backbone trained with the SigLIP approach for medical imaging analysis.
Brief-details: A versatile tagging model by fancyfeast, designed for efficient content tagging and classification. Hosted on the Hugging Face Hub.
Brief Details: EasyNegative is a textual-inversion embedding for Stable Diffusion that improves image generation when used as a negative prompt, created by "embed".
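A minimal diffusers sketch, assuming the embedding is published as a textual-inversion repo (the repo ids below are assumptions) and a Stable Diffusion 1.5 base model:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16   # example base model
    ).to("cuda")
    pipe.load_textual_inversion("embed/EasyNegative", token="EasyNegative")  # assumed repo id

    image = pipe(
        prompt="a watercolor landscape, highly detailed",
        negative_prompt="EasyNegative",   # the embedding is activated through its token
    ).images[0]
    image.save("out.png")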
Brief-details: TangoFlux is a fast text-to-audio generation model using flow matching and CLAP-Ranked Preference Optimization (CRPO), capable of generating up to 30 seconds of 44.1 kHz audio.
Brief-details: Phind-CodeLlama-34B-v2-GPTQ: GPTQ-quantized release of Phind's state-of-the-art 34B-parameter coding model, which achieves 73.8% pass@1 on HumanEval and targets multi-language programming assistance.
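A loading sketch with transformers (GPTQ checkpoints additionally require optimum and auto-gptq); the repo id and prompt format below are assumptions:

    from transformers import AutoTokenizer, AutoModelForCausalLM

    repo = "TheBloke/Phind-CodeLlama-34B-v2-GPTQ"   # assumed repo id
    tok = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

    prompt = ("### System Prompt\nYou are a helpful coding assistant.\n\n"
              "### User Message\nWrite a Python function that reverses a string.\n\n"
              "### Assistant\n")
    out = model.generate(**tok(prompt, return_tensors="pt").to(model.device), max_new_tokens=128)
    print(tok.decode(out[0], skip_special_tokens=True))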
BRIEF-DETAILS: Russian T5-based denoising autoencoder for text normalization. Restores word order, missing punctuation, and proper word inflections in corrupted Russian text.
Brief Details: Optimized Russian T5 model (244M params) derived from mT5-base, with a vocabulary reduced to 30K tokens and a size roughly 60% smaller than the original.
BRIEF-DETAILS: Pegasus-based text summarization model fine-tuned on BookSum dataset, specialized for generating concise book summaries from longer texts.
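A minimal summarization sketch; the repo id below is a placeholder for the actual Pegasus-BookSum checkpoint:

    from transformers import pipeline

    summarizer = pipeline("summarization", model="<namespace>/pegasus-booksum")  # placeholder repo id
    long_text = open("chapter.txt").read()
    summary = summarizer(long_text, max_length=256, min_length=64, truncation=True)
    print(summary[0]["summary_text"])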
BRIEF-DETAILS: A distilled French language model based on CamemBERT, offering 83% FLUE CLS performance while being lighter and faster than its parent model.
Brief-details: DistilCamemBERT-based sentiment analysis model for French text, offering 5-class sentiment classification with 61% accuracy and faster inference than CamemBERT.
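A minimal sketch, assuming the checkpoint is cmarkea/distilcamembert-base-sentiment:

    from transformers import pipeline

    classifier = pipeline("text-classification", model="cmarkea/distilcamembert-base-sentiment")  # assumed id
    print(classifier("Ce restaurant était vraiment excellent !"))   # returns one of five star-rating labels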
Brief Details: ClimateBERT model fine-tuned for climate sentiment analysis. Classifies text into opportunity, neutral, or risk categories. Built on distilroberta-base.
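A minimal sketch, assuming the checkpoint is climatebert/distilroberta-base-climate-sentiment:

    from transformers import pipeline

    classifier = pipeline("text-classification",
                          model="climatebert/distilroberta-base-climate-sentiment")  # assumed id
    print(classifier("The transition to renewables opens new markets for our products."))
    # expected label: one of "opportunity", "neutral", "risk"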
Brief Details: ClimateBERT language model further pretrained on climate-related text. Training footprint: 15.79 kg CO2. Built on DistilRoBERTa. Optimized for climate text analysis.
Brief-details: Fine-tuned Wav2Vec2-Large-XLSR-53 model for Fon-language speech recognition, achieving 14.97% WER on the test set. Trained on 8,235 samples.
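A minimal ASR sketch using the transformers pipeline; the repo id is a placeholder for the actual Fon checkpoint:

    from transformers import pipeline

    asr = pipeline("automatic-speech-recognition", model="<username>/wav2vec2-large-xlsr-53-fon")  # placeholder
    print(asr("fon_sample.wav")["text"])   # transcribe a local audio file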