Brief-details: BERT-based named entity recognition (NER) model developed by cahya for identifying and classifying named entities in Indonesian text.
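A minimal usage sketch with the transformers pipeline API; the repo id below is an assumption, so verify it against the actual model card:

```python
from transformers import pipeline

# Repo id is an assumption; substitute the id from the model card.
ner = pipeline(
    "ner",
    model="cahya/bert-base-indonesian-NER",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Joko Widodo lahir di Surakarta, Jawa Tengah."))
# Expected: person and location entities with confidence scores
```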
Brief-details: Smegmma-Deluxe-9B-v1 is a 9B-parameter language model in GGUF format, quantized by bartowski for efficient inference and deployment.
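A local-inference sketch with llama-cpp-python; the GGUF file name is illustrative and depends on which quantization you download:

```python
from llama_cpp import Llama

# Illustrative file name; point at the quantization you actually downloaded.
llm = Llama(model_path="Smegmma-Deluxe-9B-v1-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about autumn."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```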
Brief-details: A compact variant of the Qwen architecture, randomly initialized for experimental purposes. Hosted by katuni4ka on Hugging Face and suitable for research and development.
Brief-details: A specialized LayoutLM model fine-tuned for table row detection and understanding in scientific documents, developed by Allen AI for enhanced document layout analysis.
Brief-details: Multilingual T5-based zero-shot classifier supporting 100+ languages, with bidirectional text-label understanding and strong performance on cross-lingual tasks.
Brief-details: SDXL-based model with an X-Files theme, uploaded by John6666; derived from an original model and processed through qnt_iler.
Brief-details: Prov-GigaPath is a whole-slide foundation model for digital pathology, featuring a dual-encoder architecture for both tile-level and slide-level analysis of pathology images.
Brief-details: Quantized (INT8) ONNX variant of the bge-large-en-v1.5 embedding model, delivering 4.8x faster inference on 10-core systems with DeepSparse acceleration.
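A sketch of running the exported ONNX file with plain ONNX Runtime rather than DeepSparse; the ONNX file name is an assumption, and BGE models take the [CLS] token as the sentence embedding:

```python
import onnxruntime as ort
from transformers import AutoTokenizer

# Tokenizer from the base model; the ONNX file name is an assumption.
tok = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
sess = ort.InferenceSession("model.onnx")

enc = tok("What is sparse inference?", return_tensors="np")
last_hidden = sess.run(None, dict(enc))[0]
embedding = last_hidden[:, 0]  # [CLS] pooling, as used by BGE models
print(embedding.shape)
```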
Brief-details: Meta's Llama 3.3 70B Instruct model quantized with AWQ. Supports eight languages and a 128k-token context window, and excels at multilingual tasks with strong benchmark performance.
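A serving sketch with vLLM, which supports AWQ checkpoints; the repo id below is a placeholder for the actual AWQ upload:

```python
from vllm import LLM, SamplingParams

# Placeholder repo id; substitute the real AWQ checkpoint.
llm = LLM(model="your-org/Llama-3.3-70B-Instruct-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=128)
result = llm.generate(["Summarize the benefits of AWQ quantization."], params)
print(result[0].outputs[0].text)
```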
Brief-details: ConvNeXt V2 tiny model trained on ImageNet-22K, using the FCMAE pretraining framework and GRN layers for improved image classification at 224x224 resolution.
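A classification sketch via transformers, assuming the repo id facebook/convnextv2-tiny-22k-224 (check the model card for the exact variant):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification

model_id = "facebook/convnextv2-tiny-22k-224"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(model_id)
model = ConvNextV2ForImageClassification.from_pretrained(model_id)

inputs = processor(images=Image.open("cat.jpg"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```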
Brief-details: Large-scale Arabic BERT model with 371M parameters, trained on 77GB of Arabic text. Offers advanced NLP capabilities without requiring pre-segmentation of the input text.
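A fill-mask sketch; the repo id is illustrative since the entry does not name the checkpoint:

```python
from transformers import pipeline

# Illustrative repo id; substitute the actual model card id.
fill = pipeline("fill-mask", model="aubmindlab/bert-large-arabertv02")
# "The capital of France is [MASK]." in Arabic
for pred in fill("عاصمة فرنسا هي [MASK].")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```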
Brief-details: Gemma-1.1-2b-it is Google's 2B-parameter instruction-tuned language model; downloading it requires accepting Google's license. Optimized for interactive tasks and efficient inference.
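A minimal generation sketch; access requires accepting the license on the model page and authenticating with a Hugging Face token:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-1.1-2b-it"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

chat = [{"role": "user", "content": "Explain instruction tuning in one sentence."}]
ids = tok.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
out = model.generate(ids, max_new_tokens=60)
print(tok.decode(out[0], skip_special_tokens=True))
```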
Brief-details: A compact variant of the Jais language model, created by katuni4ka, focused on efficient processing while preserving the core functionality of the original.
Brief-details: 7B-parameter instruction-tuned model with a 1M-token context length, optimized for both short and long-form tasks. GGUF-quantized version provided by the community.
Brief-details: Pythia-14M is a lightweight language model from EleutherAI with 14M parameters, designed for efficient NLP tasks and research experimentation.
Brief-details: PixelWave FLUX.1-dev_03 is a fine-tuned general-purpose model optimized for art and photo styles, featuring DPM++ sampling methods and flexible step configurations.
Brief-details: MagicQuill is an interactive image-editing system developed by LiuZichen, focused on AI-powered image manipulation and enhancement.
Brief-details: GGUF-quantized version of the bge-m3 embedding model, optimized through llama.cpp and LM-Kit.NET for improved efficiency and deployment flexibility.
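An embedding sketch with llama-cpp-python's embedding mode; the GGUF file name is illustrative:

```python
from llama_cpp import Llama

# Illustrative file name; use the bge-m3 GGUF file you downloaded.
emb = Llama(model_path="bge-m3-Q8_0.gguf", embedding=True)
vec = emb.embed("GGUF makes embedding models easy to deploy.")
print(len(vec))  # embedding dimensionality (1024 for bge-m3)
```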
Brief-details: 11B-parameter Polish language model optimized for instruction following. Achieves state-of-the-art performance on Polish benchmarks, outperforming many larger models while maintaining strong English capabilities.
Brief-details: AstroLLaVA_v2 is an astronomical vision-language model fine-tuned with PEFT 0.5.0, designed for processing and analyzing astronomical data and images.
Brief-details: Sparse autoencoder (SAE) trained on Llama 3.1 8B activations using RedPajama v2 data, featuring a MultiTopK loss that allows flexible sparsity levels at inference.
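A generic TopK SAE forward pass in PyTorch, not the released implementation; making k a runtime argument loosely mirrors the MultiTopK idea of choosing the sparsity level at inference:

```python
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    """Minimal TopK sparse autoencoder sketch (illustrative, not the released code)."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor, k: int) -> torch.Tensor:
        h = torch.relu(self.enc(x))
        # Keep only the k largest activations per example; varying k at
        # inference mirrors MultiTopK's flexible sparsity levels.
        topk = torch.topk(h, k, dim=-1)
        sparse = torch.zeros_like(h).scatter_(-1, topk.indices, topk.values)
        return self.dec(sparse)

sae = TopKSAE(d_model=512, d_hidden=4096)  # sizes are illustrative
acts = torch.randn(2, 512)                 # stand-in for residual-stream activations
recon = sae(acts, k=32)                    # sparsity level chosen at inference time
```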