Brief Details: RoBERTa-based AI text detector achieving 85.2% accuracy across multiple LLMs. Trained on 44k samples with balanced human/AI content.
Brief Details: Quantized version of Google's Gemma-2b-it model with multiple GGUF variants offering different compression/quality tradeoffs, optimized for inference with LlamaEdge.
Brief Details: 4-bit quantized MLX version of DeepSeek Coder V2 Lite, optimized for Apple Silicon, supporting code generation and instruction following.
Brief Details: CLIP Vision Transformer (ViT-L/14) by OpenAI - A zero-shot image classification model combining vision and text transformers for robust general-purpose image understanding.
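CLIP's zero-shot classification reduces to a nearest-neighbor search in a shared embedding space: encode the image and each candidate label prompt, then pick the label whose text embedding is most cosine-similar to the image embedding. A minimal numpy sketch of that ranking step, using stand-in vectors rather than real CLIP encoder outputs:

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """Pick the label whose text embedding is most cosine-similar to the image."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return int(np.argmax(txt @ img))  # cosine similarity, one score per label

# Stand-in embeddings (illustrative vectors, not real CLIP encoder outputs).
image = np.array([1.0, 0.0, 0.0, 0.0])
labels = np.stack([
    np.array([0.9, 0.1, 0.0, 0.0]),   # e.g. "a photo of a dog" - nearly parallel
    np.array([0.0, 1.0, 0.0, 0.0]),   # e.g. "a photo of a car" - orthogonal
])
best = zero_shot_classify(image, labels)
print(best)  # 0, the label closest to the image embedding
```

In the real model both encoders are transformers and the similarities are scaled by a learned temperature before softmax, but the argmax ranking is the same.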
Brief Details: Qwen2.5-14B-Instruct model converted to MLX format with 4-bit quantization, optimized for Apple Silicon, featuring 14B parameters.
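Several entries above ship 4-bit quantized weights. The core idea is to store a small integer code per weight plus one scale per group; a toy symmetric absmax scheme in numpy (real MLX/GGUF kernels also pack two nibbles per byte, vary group size, and may add zero-points):

```python
import numpy as np

def quantize_4bit(w, group=4):
    """Toy symmetric absmax quantization: int4 codes plus one scale per group."""
    w = w.reshape(-1, group)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # int4 symmetric range -7..7
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return (q * scale).reshape(-1)

w = np.array([0.10, -0.40, 0.25, 0.70, -1.20, 0.05, 0.90, -0.30])
q, s = quantize_4bit(w)
err = float(np.max(np.abs(w - dequantize(q, s))))
print(round(err, 3))  # worst-case round-trip error for this toy tensor
```

The per-group scale is why larger group sizes compress better but quantize outliers more coarsely.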
Brief Details: ColabPro - An environment monitoring model by unslothai that tracks and logs runtime statistics to identify breaking environments and issues.
Brief Details: A lightweight speech recognition model that achieves a 45% size reduction and 2x speed improvement over wav2vec2-base while maintaining reasonable accuracy.
Brief Details: Temporal Vision Transformer pre-trained on US Landsat/Sentinel-2 data. Handles multi-temporal satellite imagery for environmental monitoring. 100M parameters, developed by an IBM-NASA collaboration.
Brief Details: Qwen2.5-Math-72B: Advanced mathematical LLM supporting both Chain-of-Thought and Tool-integrated Reasoning for English and Chinese math problems, achieving 87.8% on the MATH benchmark.
Brief Details: Specialized 1.5B-parameter math-focused LLM supporting both English and Chinese problem-solving through Chain-of-Thought and Tool-integrated Reasoning.
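Tool-integrated Reasoning (TIR), mentioned for both Qwen2.5-Math entries, interleaves the model's text with executable code whose output is fed back as an observation. A minimal sketch of the execution step; the `<code>` delimiter and the canned response are illustrative assumptions, not Qwen's actual output format:

```python
import contextlib
import io
import re

def run_tool_step(response: str) -> str:
    """Execute the code a math model emitted mid-reasoning and capture its
    stdout, which a TIR loop would feed back as the tool observation."""
    code = re.search(r"<code>(.*?)</code>", response, re.DOTALL).group(1)
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # toy sketch: a real system would sandbox this
    return buf.getvalue().strip()

# Hypothetical model response for "What is 23 * 47?".
response = "I will verify with Python. <code>print(23 * 47)</code>"
answer = run_tool_step(response)
print(answer)  # 1081
```

This is what lets TIR models outscore pure Chain-of-Thought on arithmetic-heavy benchmarks: the exact computation is delegated to the interpreter.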
Brief Details: Japanese-optimized Mistral-based instruction model, fine-tuned for Japanese language tasks. Built on Mistral-Nemo-Instruct-2407 with an Apache-2.0 license.
Brief Details: Bio-Medical-Llama-3-8B is an 8B-parameter LLM fine-tuned on 500K+ biomedical entries, specialized for healthcare applications with strong performance on medical tasks.
Brief Details: Flux1_dev is an experimental AI model by lllyasviel, hosted on HuggingFace, focused on advancing machine learning capabilities (specific details not available in source).
Brief Details: Decensored version of Gemma 27B offering unrestricted responses while maintaining performance. Available in GGUF, iMatrix, and EXL2 formats.
Brief Details: Stable Diffusion 3 Medium - A powerful text-to-image diffusion model by Stability AI, offering enhanced generation capabilities with medium computational requirements.
Brief Details: ESM3-sm-open-v1 is a small-scale protein language model by EvolutionaryScale, designed for protein sequence analysis and prediction tasks, available on HuggingFace.
Brief Details: Turkish-optimized 8B-parameter LLM based on Llama 3, fine-tuned on high-quality Turkish instruction sets. Achieves 56.16% on the Winogrande_tr benchmark.
Brief Details: Optimized Whisper base.en model converted to CTranslate2 format for faster speech recognition, featuring FP16 precision and English-specific capabilities.
Brief Details: GGUF quantized version of DeepSeek-R1-Distill-Qwen-32B offering multiple compression options from 12.4GB to 34.9GB, optimized for efficient deployment.
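The spread of GGUF file sizes maps directly to effective bits per weight, which is a quick way to compare quantization levels. The arithmetic, with the caveat that the 32.8B parameter count is a rough assumption and the listed sizes are treated as decimal GB:

```python
def bits_per_weight(file_bytes: float, n_params: float) -> float:
    """Effective storage cost per parameter for a quantized checkpoint."""
    return file_bytes * 8 / n_params

N = 32.8e9  # rough parameter count for a 32B-class model (assumption)
low = bits_per_weight(12.4e9, N)   # smallest listed GGUF
high = bits_per_weight(34.9e9, N)  # largest listed GGUF
print(round(low, 1), round(high, 1))  # roughly 3.0 to 8.5 bits per weight
```

So the 12.4GB file corresponds to an aggressive ~3-bit quantization and the 34.9GB file to near-full 8-bit precision.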
Brief Details: KoELECTRA small variant optimized for Korean question answering, distilled from a larger model and fine-tuned on the KorQuAD dataset with a 384 max sequence length.
Brief Details: WhisperKit-CoreML is an optimized speech recognition framework specifically designed for Apple Silicon, offering on-device processing capabilities.