BRIEF DETAILS: Depth estimation transformer model (ViT-S/14) that converts images to depth maps, part of the Depth Anything project with 18.9K+ downloads
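A minimal sketch of running this kind of depth-estimation checkpoint through the Hugging Face `pipeline` API; the model ID below refers to the Depth Anything ViT-S/14 weights and should be verified against the actual repository.

```python
from transformers import pipeline
from PIL import Image

# Model ID assumes the Depth Anything ViT-S/14 checkpoint; verify before use.
depth = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")

image = Image.open("example.jpg")
result = depth(image)

# The pipeline returns a PIL depth map plus the raw predicted tensor.
result["depth"].save("example_depth.png")
print(result["predicted_depth"].shape)
```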
Brief-details: WhiteRabbitNeo-13B-GGUF is a 13B parameter Llama-2-based model tuned for cybersecurity tasks, available in multiple GGUF quantizations for efficient deployment
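GGUF quantizations like these are normally run with llama.cpp or its Python bindings; below is a minimal llama-cpp-python sketch, with the quantized file name assumed.

```python
from llama_cpp import Llama

# File name is an assumption; point it at whichever GGUF quantization you downloaded.
llm = Llama(
    model_path="whiterabbitneo-13b.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; set 0 for CPU-only
)

out = llm(
    "Explain how to harden an SSH server against brute-force attacks.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```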
BRIEF-DETAILS: A fine-tuned Wav2Vec2-XLS-R model for English-Filipino speech recognition, achieving 57.5% WER, trained on the filipino_voice dataset for 20 epochs with the Adam optimizer.
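A hedged sketch of transcribing audio with a fine-tuned Wav2Vec2-XLS-R checkpoint via the ASR pipeline; the model ID is a placeholder.

```python
from transformers import pipeline

# Placeholder model ID for the fine-tuned Wav2Vec2-XLS-R checkpoint described above.
asr = pipeline(
    "automatic-speech-recognition",
    model="username/wav2vec2-xls-r-english-filipino",
)

# Wav2Vec2-style models expect 16 kHz mono audio.
result = asr("sample_filipino.wav")
print(result["text"])
```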
BRIEF DETAILS: 8B parameter LLaMA-3-based model optimized for function calling and structured outputs. Uses the ChatML format and offers enhanced conversational abilities.
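A sketch of prompting a Llama-3-based chat checkpoint through its chat template (which renders the ChatML-style conversation format); the model ID and generation settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model ID for the function-calling checkpoint described above.
model_id = "username/llama-3-8b-function-calling"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a function-calling assistant; reply with JSON when a tool is needed."},
    {"role": "user", "content": "What's the weather in Manila right now?"},
]

# apply_chat_template renders the conversation in the model's expected chat format.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```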
Brief Details: BERT-based toxic comment classifier achieving 0.95 AUC. Fine-tuned on Kaggle toxicity data. 19K+ downloads. Suitable for content moderation.
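A sketch of scoring comments with a BERT-based toxicity classifier through the text-classification pipeline; the model ID is a placeholder for the checkpoint described above.

```python
from transformers import pipeline

# Placeholder model ID; substitute the actual toxic-comment checkpoint.
clf = pipeline(
    "text-classification",
    model="username/bert-toxic-comment-classifier",
    top_k=None,  # return scores for every label rather than only the top one
)

comments = [
    "Thanks for the helpful explanation!",
    "You are an idiot and nobody wants you here.",
]
for comment, scores in zip(comments, clf(comments)):
    print(comment, "->", scores)
```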
Brief-details: Chinese sentence similarity model (102M params) optimized for RAG applications. Uses CoSENT framework with BERT architecture. Apache 2.0 licensed.
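A sketch of scoring query-passage similarity for retrieval with sentence-transformers; the checkpoint name assumes the CoSENT-trained text2vec-base-chinese model and should be confirmed.

```python
from sentence_transformers import SentenceTransformer, util

# Checkpoint name is an assumption for a CoSENT-trained Chinese embedding model.
model = SentenceTransformer("shibing624/text2vec-base-chinese")

query = "如何重置我的账户密码？"  # "How do I reset my account password?"
passages = [
    "前往设置页面，点击修改密码即可重置。",          # relevant passage
    "本产品支持信用卡和支付宝等多种支付方式。",      # unrelated passage
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Cosine similarity is the usual retrieval score in a RAG pipeline.
print(util.cos_sim(query_emb, passage_embs))
```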
Brief Details: 7B parameter code-generation model with 4-bit AWQ quantization for efficient inference, supporting a 128K context length.
Brief Details: A text-to-image model merging CrystalClearRemix and Realistic Vision 1.2, optimized for photorealistic and semi-realistic image generation. 19K+ downloads.
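A sketch of generating an image with a diffusers-format Stable Diffusion merge like this one; the repository ID is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo ID for the merged checkpoint; any diffusers-format SD 1.x merge loads the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "username/crystal-clear-realistic-merge",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "portrait photo of a woman on a rain-soaked city street, 50mm lens, natural lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("portrait.png")
```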
Brief-details: A specialized 7.24B parameter AI model built on Mistral-7B, fine-tuned for penetration testing with Kali Linux tools integration and ethical hacking guidance.
Brief-details: LLaVA 1.6 Mistral 7B GGUF is a quantized multimodal model offering various compression levels (3-8 bits) for efficient image-text interaction, based on the Mistral-7B architecture.
Brief-details: T0_3B is a 2.85B parameter zero-shot task generalization model capable of performing a wide range of NLP tasks from natural language prompts, based on the T5 architecture.
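A sketch of zero-shot prompting with T0_3B, assuming the bigscience/T0_3B checkpoint on the Hugging Face Hub.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

# The task is specified entirely through a natural-language prompt.
prompt = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```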
Brief-details: NewRealityXL is an SDXL-based text-to-image model optimized for ultra-realistic image generation, offering general and NSFW generation capabilities with Stable Diffusion API integration.
Brief-details: High-quality French text-to-speech model from MyShell.ai's MeloTTS suite. Supports real-time CPU inference; MIT licensed. 19K+ downloads.
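A sketch following the MeloTTS Python API as described in the project README; the 'FR' language code and speaker key are assumptions to verify against the installed version.

```python
from melo.api import TTS

device = "cpu"  # MeloTTS targets real-time CPU inference
model = TTS(language="FR", device=device)  # language code is an assumption
speaker_ids = model.hps.data.spk2id

text = "Bonjour, ceci est une démonstration de synthèse vocale en français."
model.tts_to_file(text, speaker_ids["FR"], "bonjour.wav", speed=1.0)
```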
Brief Details: T5-efficient-tiny is a compact 15.58M parameter deep-narrow transformer pre-trained on the C4 dataset for English, designed to deliver strong downstream performance after fine-tuning.
BRIEF DETAILS: Specialized sentence transformer model that maps biomedical text to 768D vectors, trained on multiple NLI datasets for robust sentence embeddings
BRIEF DETAILS: TCD-SDXL-LoRA: a LoRA for SDXL enabling fast few-step image generation via Trajectory Consistency Distillation, with flexible inference steps and high-quality outputs.
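A sketch of few-step SDXL sampling with the TCD scheduler and this LoRA, following the pattern documented for TCD-SDXL-LoRA; the step count, guidance, and eta values are assumptions.

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Swap in the TCD scheduler and load the distillation LoRA.
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("h1t/TCD-SDXL-LoRA")
pipe.fuse_lora()

image = pipe(
    prompt="a cinematic photo of an astronaut riding a horse on a beach",
    num_inference_steps=4,  # few-step generation; 2-8 steps are typical
    guidance_scale=0,       # TCD is typically run without classifier-free guidance
    eta=0.3,                # controls the stochasticity of the TCD sampler
).images[0]
image.save("tcd_sample.png")
```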
BRIEF-DETAILS: ResNet-18 model pre-trained on the Instagram-1B dataset with semi-weakly supervised learning and fine-tuned on ImageNet. 11.7M params, optimized for image classification.
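A sketch of loading the SWSL ResNet-18 weights through torch.hub, following the facebookresearch semi-supervised models repository; the hub entry point should be verified.

```python
import torch
from PIL import Image
from torchvision import transforms

# Hub repo and entry point follow the semi-supervised-ImageNet1K-models README; verify before use.
model = torch.hub.load(
    "facebookresearch/semi-supervised-ImageNet1K-models", "resnet18_swsl"
)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
with torch.no_grad():
    logits = model(image)
print(logits.argmax(dim=1))  # predicted ImageNet class index
```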
Brief Details: A specialized NER model fine-tuned from XLM-RoBERTa for the Indian context, achieving a 0.813 F1 score. 277M parameters, MIT licensed.
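A sketch of extracting entities with the token-classification pipeline; the model ID is a placeholder for the XLM-RoBERTa NER checkpoint described above.

```python
from transformers import pipeline

# Placeholder model ID for the Indian-context XLM-RoBERTa NER checkpoint.
ner = pipeline(
    "token-classification",
    model="username/xlm-roberta-indian-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

text = "Sachin Tendulkar was born in Mumbai and played for India."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```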
BRIEF DETAILS: Enhanced LoRA for FLUX.1-dev enabling uncensored content generation. Features improved stability, performance, and streamlined weights. Non-commercial license.
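A sketch of applying a LoRA of this kind on top of FLUX.1-dev with diffusers; the LoRA repository ID and generation settings are assumptions.

```python
import torch
from diffusers import FluxPipeline

# Base model is FLUX.1-dev; the LoRA repo ID is a placeholder for the weights described above.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("username/flux-dev-enhanced-lora")

image = pipe(
    "studio photograph of a vintage motorcycle, dramatic lighting",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_lora_sample.png")
```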
Brief Details: RoBERTa-based sentence embedding model with 125M params, optimized for semantic similarity tasks. Maps text to 768-dim vectors.
Brief Details: A Vision Transformer model with 21.7M params trained using DINO self-supervised learning, optimized for image feature extraction and classification.
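A sketch of pulling global image features from a DINO ViT-Small checkpoint; the model ID assumes the facebook/dino-vits16 weights.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

# Checkpoint assumes the DINO ViT-S/16 weights published as facebook/dino-vits16.
processor = AutoImageProcessor.from_pretrained("facebook/dino-vits16")
model = AutoModel.from_pretrained("facebook/dino-vits16")

image = Image.open("cat.jpg")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The [CLS] token embedding serves as the global image feature for downstream tasks.
cls_embedding = outputs.last_hidden_state[:, 0]
print(cls_embedding.shape)  # (1, 384) for ViT-Small
```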