Brief Details: A vision transformer with 43.8M parameters, trained on ImageNet-1k for image classification using spatial attention mechanisms.
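Most of the ImageNet-1k classifiers in this list (including the SelecSLS, GENet, ECA-BotNeXt, HaloNet, and VOLO entries below) load the same way through timm. A minimal sketch, with a stand-in timm checkpoint name where the card's actual one would go:

```python
import torch
from PIL import Image
import timm

# "vit_small_patch16_224.augreg_in1k" is a placeholder name; substitute the
# actual checkpoint from the model card
model = timm.create_model("vit_small_patch16_224.augreg_in1k", pretrained=True).eval()
config = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**config)

img = Image.open("cat.jpg").convert("RGB")
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))
print(logits.softmax(-1).topk(5))  # top-5 ImageNet-1k class probabilities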
Brief Details: Sentence embedding model based on BGE, optimized for text similarity, with 109M parameters and multi-dimensional output capabilities.
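Embedding models like this one (and the Chinese and Japanese embedders later in this list) are typically queried through sentence-transformers. A minimal sketch, assuming a stand-in BGE-family checkpoint ID:

```python
from sentence_transformers import SentenceTransformer

# "BAAI/bge-base-en-v1.5" stands in for the actual checkpoint ID
model = SentenceTransformer("BAAI/bge-base-en-v1.5")
emb = model.encode(
    ["How do I reset my password?", "Steps to recover an account login"],
    normalize_embeddings=True,
)
print(emb @ emb.T)  # cosine similarity matrix, since embeddings are L2-normalized
```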
Brief Details: French NER model (110M params) fine-tuned from CamemBERT, achieving an 89.14% F1 score. Recognizes persons, organizations, and locations in French text.
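For a model like this, the transformers NER pipeline handles tokenization and entity aggregation. A sketch, with a placeholder checkpoint ID:

```python
from transformers import pipeline

# Checkpoint ID is a placeholder for the CamemBERT NER model described above
ner = pipeline("ner", model="Jean-Baptiste/camembert-ner", aggregation_strategy="simple")
print(ner("Emmanuel Macron a rencontré les dirigeants de Renault à Paris."))
# -> spans tagged PER / ORG / LOC with confidence scores
```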
Brief Details: A specialized FLAN-T5 variant (248M parameters, F32) trained for scene graph parsing on the VG and FACTUAL datasets.
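Scene graph parsers built on FLAN-T5 are ordinary seq2seq models: caption in, relation triples out. A sketch, where both the checkpoint ID and the input/output format are assumptions to verify against the model card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lizhuang144/flan-t5-base-VG-factual-sg"  # assumed checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

caption = "Generate Scene Graph: a man is riding a brown horse near a fence"
inputs = tokenizer(caption, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
# e.g. "( man, ride, horse ), ( horse, is, brown ), ..." (exact format may differ)
```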
Brief Details: Meta's Llama 3.2 1B model in a 4-bit quantized build (765M parameters), offering multilingual capabilities and roughly 2.4x faster inference.
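Loading a model in 4-bit via bitsandbytes follows a standard transformers pattern. A minimal sketch with a placeholder model ID (a build that already ships pre-quantized may not need the explicit quantization config):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder for the build above
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

inputs = tokenizer("4-bit quantization helps because", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```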
Brief Details: An MLP-based image classification model with 19.4M parameters, trained on ImageNet-1k, processing 224x224 images at 4.4 GMACs.
Brief Details: SelecSLS42b - A lightweight 32.5M param CNN for real-time multi-person 3D motion capture and image classification, trained on ImageNet-1k.
Brief Details: RobustSAM is a 312M parameter vision model that extends SAM with robust segmentation of degraded images while preserving zero-shot performance.
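RobustSAM keeps SAM's promptable interface, so the standard transformers SAM API illustrates the workflow. The checkpoint below is the base SAM model as a stand-in; RobustSAM's own loading path may differ, so check its repo:

```python
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

# Base SAM checkpoint as a stand-in; RobustSAM may require its own loader
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
model = SamModel.from_pretrained("facebook/sam-vit-base").eval()

image = Image.open("degraded.jpg").convert("RGB")
input_points = [[[450, 600]]]  # one foreground click (x, y)
inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
```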
Brief Details: GENet (GPU-Efficient Network) large model with 31.2M params, trained on ImageNet-1k for image classification and built on the BYOB (bring-your-own-blocks) architecture.
Brief Details: FP8-quantized 70B parameter Llama 3.1 model optimized for efficient deployment, supporting 8 languages with 99.88% accuracy retention.
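FP8 checkpoints like this are usually served with vLLM, which reads the quantization scheme from the checkpoint itself. A sketch; the checkpoint ID and GPU count are assumptions to size against your hardware:

```python
from vllm import LLM, SamplingParams

# Checkpoint ID and tensor_parallel_size are assumptions
llm = LLM(model="neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8", tensor_parallel_size=2)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain FP8 quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```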
Brief Details: BiRefNet_lite is a 44.4M parameter AI model for high-resolution dichotomous image segmentation, offering efficient background removal and mask generation capabilities.
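BiRefNet variants are typically loaded through transformers with trust_remote_code. A sketch; the checkpoint ID, preprocessing, and output indexing follow common BiRefNet usage but should be verified against the model card:

```python
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageSegmentation

# Assumed checkpoint ID; BiRefNet models ship custom code, hence trust_remote_code
model = AutoModelForImageSegmentation.from_pretrained(
    "ZhengPeng7/BiRefNet_lite", trust_remote_code=True
).eval()

tfm = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
image = Image.open("photo.jpg").convert("RGB")
with torch.no_grad():
    preds = model(tfm(image).unsqueeze(0))[-1].sigmoid()  # last output = final mask (assumed)
mask = transforms.ToPILImage()(preds[0].squeeze(0)).resize(image.size)
mask.save("mask.png")
```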
Brief Details: ECA-BotNeXt is a 10.6M parameter image classification model combining efficient channel attention with BotNet architecture, optimized for ImageNet-1k with 256x256 input resolution.
Brief Details: HaloNet-based image classification model with 10.8M params, featuring efficient channel attention and ResNeXt architecture, trained on ImageNet-1k.
Brief Details: An 8.35B parameter multimodal chatbot combining Llama-3 with a vision encoder, optimized for research and academic tasks.
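LLaVA-style Llama-3 chat models are usually driven through the transformers LLaVA classes. A sketch; the checkpoint ID and prompt template are assumptions, and the exact template matters, so check the model card:

```python
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "xtuner/llava-llama-3-8b-v1_1-transformers"  # assumed checkpoint ID
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open("photo.jpg")
prompt = "<image>\nDescribe this image."  # prompt template is an assumption
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(out[0], skip_special_tokens=True))
```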
Brief Details: Japanese text embedding model based on LUKE, optimized for sentence similarity and semantic search. 768-dim output, supports 512 tokens max.
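If the model is not packaged for sentence-transformers, mean pooling over the last hidden state is the usual fallback for LUKE-style encoders. A sketch with an assumed checkpoint ID (the pooling strategy is also an assumption; the card may specify a different one):

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "sonoisa/sentence-luke-japanese-base-lite"  # assumed checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id).eval()

sentences = ["今日は良い天気です。", "天気が素晴らしい一日です。"]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state
mask = batch["attention_mask"].unsqueeze(-1)
emb = (hidden * mask).sum(1) / mask.sum(1)   # mean pooling -> 768-dim vectors
emb = torch.nn.functional.normalize(emb, dim=1)
print((emb[0] @ emb[1]).item())              # cosine similarity
```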
Brief Details: DeBERTa-large is Microsoft's enhanced BERT-style model using disentangled attention, achieving SOTA results on NLU tasks when trained on 80GB of data.
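DeBERTa-large is a base encoder meant to be fine-tuned; attaching a task head is one line in transformers. A minimal sketch (the classification head below is randomly initialized until fine-tuned):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "microsoft/deberta-large" is the published base checkpoint; num_labels is task-specific
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-large", num_labels=2
)

batch = tokenizer("The movie was surprisingly good.", return_tensors="pt")
logits = model(**batch).logits  # head is untrained: fine-tune before trusting predictions
```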
Brief Details: VOLO (Vision Outlooker) is a 26.6M parameter image classification model optimized for ImageNet-1k, featuring token labeling and 224x224 input resolution.
Brief Details: A 184M parameter DeBERTa-V3-based domain classifier that categorizes text into 26 domains with high PR-AUC (0.9873). Built by NVIDIA.
Brief Details: A Chinese text embedding model (326M params) achieving strong performance across classification, clustering, and retrieval tasks. Features enhanced negative sampling techniques.
Brief Details: 8B parameter Japanese-English LLM based on Meta's Llama 3, optimized for Japanese usage with GGUF quantization (Q4_K_M) for efficient deployment.
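GGUF builds run on llama.cpp; from Python, llama-cpp-python exposes a chat API. A sketch with an assumed local filename for the Q4_K_M file:

```python
from llama_cpp import Llama

# Path is a placeholder for the downloaded Q4_K_M GGUF file
llm = Llama(model_path="llama-3-8b-japanese.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "日本の首都はどこですか？"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```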
Brief Details: Res2Net backbone architecture with 25.1M params, designed for image classification and feature extraction. Trained on ImageNet-1k, it offers multi-scale feature representation.
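As a backbone, Res2Net is most useful with timm's features_only mode, which returns the multi-scale feature pyramid directly. A sketch with a stand-in timm model name:

```python
import timm
import torch

# "res2net50_26w_4s" is a stand-in timm name; check the card for the exact variant
backbone = timm.create_model("res2net50_26w_4s", pretrained=True, features_only=True)
feats = backbone(torch.randn(1, 3, 224, 224))
for f in feats:
    print(f.shape)  # one feature map per stage, at decreasing spatial resolution
```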