Brief-details: MilkyWonderland_v1 is a specialized Stable Diffusion model for dreamy, anime-style illustrations with a soft, milky aesthetic.
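A minimal sketch of how a checkpoint like this is typically loaded with the diffusers library; the repo ID below is a placeholder, not a confirmed location for MilkyWonderland_v1:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical repo ID -- substitute wherever MilkyWonderland_v1 is actually hosted.
pipe = StableDiffusionPipeline.from_pretrained(
    "someuser/MilkyWonderland_v1", torch_dtype=torch.float16
).to("cuda")

image = pipe("dreamy anime girl in a pastel meadow, soft milky lighting").images[0]
image.save("milky_wonderland_sample.png")
```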
Brief-details: GGUF-quantized build of the DeepSeek 70B model, offered at multiple compression levels (15.4 GB-58 GB) that trade output quality for memory efficiency.
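A minimal sketch of running one of the GGUF quants locally with llama-cpp-python; the file name and context size are illustrative assumptions:

```python
from llama_cpp import Llama

# Illustrative quant file name -- pick whichever size (15.4 GB-58 GB) fits your hardware.
llm = Llama(
    model_path="./deepseek-70b-q4_k_m.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload every layer to GPU if VRAM allows; 0 for CPU-only
)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```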
Brief-details: DeepHermes-3 is an 8B-parameter LLM that uniquely combines reasoning and standard response modes, built on the Llama-3 architecture with advanced function calling and JSON output capabilities.
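Reasoning mode is reportedly toggled through the system prompt; the prompt wording and repo ID in this sketch are assumptions, not the model card's exact text:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/DeepHermes-3-Llama-3-8B-Preview"  # assumed repo ID
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    # Illustrative system prompt -- consult the model card for the official reasoning trigger.
    {"role": "system", "content": "You are a deep-thinking AI. Reason step by step inside <think> tags."},
    {"role": "user", "content": "What is 17 * 24?"},
]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```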
Brief-details: A powerful ConvNeXt vision model (200M params) pre-trained on LAION-2B and fine-tuned on ImageNet-1k, reaching 87.3% top-1 accuracy with competitive throughput.
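A sketch of inference through timm, assuming a model name along these lines (check timm's listing for the exact LAION-2B fine-tuned variant):

```python
import timm
import torch
from PIL import Image

# Assumed timm model name -- verify against timm.list_models("convnext_large*").
model = timm.create_model("convnext_large_mlp.clip_laion2b_augreg_ft_in1k", pretrained=True)
model.eval()

cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=False)

img = Image.open("cat.jpg").convert("RGB")
with torch.no_grad():
    probs = model(transform(img).unsqueeze(0)).softmax(dim=-1)
print(probs.topk(5))
```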
Brief-details: Polish text embedding model distilled from E5; produces 384-dimensional vectors, trained on 60M Polish-English pairs and optimized for semantic tasks.
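With sentence-transformers, usage would look roughly like this; the repo ID is a placeholder:

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder repo ID for the Polish E5-distilled model.
model = SentenceTransformer("someorg/polish-e5-distilled")

sentences = ["Kot śpi na kanapie.", "A cat is sleeping on the couch."]
emb = model.encode(sentences, normalize_embeddings=True)
print(emb.shape)                     # (2, 384)
print(util.cos_sim(emb[0], emb[1]))  # cross-lingual similarity
```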
Brief-details: Speaker diarization model achieving 22.25% lower WDER than pyannote3.0, evaluated on 1.25M+ tokens with strong results on earnings calls.
Brief-details: ResNet50 image classification model with 25.6M parameters, trained on ImageNet-1k. Uses ReLU activations, a 7x7 stem convolution, and 4.1 GMACs of compute.
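Since the standard `resnet50` weights ship with timm, a feature-extraction sketch is straightforward; `features_only=True` returns the per-stage feature maps instead of logits:

```python
import timm
import torch

model = timm.create_model("resnet50", pretrained=True, features_only=True)
model.eval()

x = torch.randn(1, 3, 224, 224)  # dummy ImageNet-sized input
with torch.no_grad():
    feats = model(x)
for f in feats:
    print(f.shape)  # one tensor per stage, from (1, 64, 112, 112) down to (1, 2048, 7, 7)
```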
Brief-details: Pythia-31m is a compact 31M-parameter language model from EleutherAI's Pythia suite, designed for research and lightweight NLP tasks.
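A minimal generation sketch, assuming the checkpoint follows the suite's `EleutherAI/pythia-*` naming:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/pythia-31m"  # assumed repo ID
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

ids = tok("The Pythia suite was built to study", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=30, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```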
Brief-details: Multilingual MiniLMv2 model distilled from XLM-R Large, delivering efficient cross-lingual understanding with far fewer parameters while retaining strong performance.
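MiniLM-style checkpoints typically ship without a pooling head, so a common pattern is manual mean pooling over the encoder output; the repo ID here is an assumption:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large"  # assumed repo ID
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

batch = tok(["Hello world", "Bonjour le monde"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state   # (batch, seq_len, 384)

mask = batch["attention_mask"].unsqueeze(-1)
emb = (hidden * mask).sum(1) / mask.sum(1)      # masked mean pooling
print(torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0))
```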
Brief-details: DeepSeek-Coder-V2-Base is a powerful 236B-parameter MoE coding model with 21B active params, supporting 338 programming languages and a 128K context length.
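At 236B total parameters the model needs a multi-GPU node even in bf16, so treat this as an API-shape sketch only; the repo ID and the `trust_remote_code` requirement are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Base"  # assumed repo ID
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto"
)

ids = tok("# Python function that returns the SHA-256 hex digest of a file\n", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**ids, max_new_tokens=80)[0], skip_special_tokens=True))
```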
Brief-details: Specialized GGUF projector files for KoboldCpp that enable multimodal vision capabilities; compatible with various language model architectures, including Mistral 7B.
Brief-details: A 13B-parameter, GPT-4-aligned Alpaca model, natively fine-tuned and quantized to GGML for CPU inference. Compatible with Alpaca.cpp, Llama.cpp, and Dalai.
Brief-details: Chinese-Alpaca-LoRA-7B is a 7B-parameter LLaMA model adapted for Chinese-language tasks via LoRA fine-tuning.
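LoRA adapters like this are applied on top of the matching base model with peft; note the project extends LLaMA's vocabulary, so follow its own merge instructions. The sketch below, with assumed paths and repo ID, only shows the general pattern:

```python
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Assumed local path to the 7B LLaMA base weights.
base = LlamaForCausalLM.from_pretrained("path/to/llama-7b")
tok = LlamaTokenizer.from_pretrained("path/to/llama-7b")

# Assumed adapter repo ID; the real project also resizes embeddings for its
# extended Chinese vocabulary before the adapter can be applied.
model = PeftModel.from_pretrained(base, "someorg/chinese-alpaca-lora-7b")

ids = tok("请介绍一下北京。", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=100)[0], skip_special_tokens=True))
```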
Brief-details: Fine-tuned FinBERT model for financial topic classification, achieving 91% accuracy across 20 topics. Optimized for Twitter financial news with weighted class balancing.
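Usage via the text-classification pipeline would look like this; the repo ID and example label are placeholders:

```python
from transformers import pipeline

# Placeholder repo ID for the fine-tuned FinBERT topic classifier.
clf = pipeline("text-classification", model="someorg/finbert-finance-topic-classification")

print(clf("Fed signals two more rate hikes this year as inflation stays hot"))
# -> e.g. [{'label': 'Central Banks', 'score': 0.97}]  (label names are illustrative)
```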
Brief-details: ESPnet2 TTS model using JETS architecture trained on LJSpeech dataset, featuring phone-level encoding and Tacotron-style G2P processing for English TTS.
Brief-details: JETS is an ESPnet2 TTS model trained on LJSpeech, featuring a transformer-based architecture with pitch/energy prediction and HiFiGAN vocoder integration.
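Both JETS entries above can be driven through espnet2's Text2Speech interface; the repo ID below follows ESPnet's usual hub naming but is an assumption:

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Assumed repo ID for the LJSpeech JETS checkpoint.
tts = Text2Speech.from_pretrained("espnet/kan-bayashi_ljspeech_jets")

wav = tts("Hello from an end-to-end JETS model.")["wav"]
sf.write("jets_sample.wav", wav.numpy(), tts.fs)
```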
Brief-details: Speech recognition model for Amharic based on wav2vec 2.0, achieving 24.92% WER on the test set. Developed by AIOX Labs for low-resource African languages.
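A minimal CTC decoding sketch with transformers, assuming 16 kHz mono input; the repo ID is a placeholder:

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "someorg/wav2vec2-amharic"  # placeholder repo ID
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = sf.read("amharic_clip.wav")  # expects 16 kHz mono audio
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```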
Brief-details: Pruned BERT model with 6 layers and 80% sparsity, optimized for SQuADv1 (EM = 79.55, F1 = 87.00). Pruned with the Optimal BERT Surgeon (oBERT) method.
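The sparse checkpoint should drop into the standard question-answering pipeline; the repo ID is hypothetical:

```python
from transformers import pipeline

# Hypothetical repo ID for the 6-layer, 80%-sparse oBERT SQuAD model.
qa = pipeline("question-answering", model="someorg/obert-6layer-80sparse-squadv1")

print(qa(
    question="What method was used to prune the model?",
    context="The model was pruned to 80% sparsity with the Optimal BERT Surgeon method.",
))
```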
Brief-details: BART-based LSG model handling 4096-token sequences with local-sparse-global attention, fine-tuned for multi-document summarization with a ROUGE-1 score of 47.10.
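LSG attention typically lives in the repo's custom modeling code, hence `trust_remote_code=True`; the repo ID is an assumption:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ccdv/lsg-bart-base-4096-multinews"  # assumed repo ID
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, trust_remote_code=True)

docs = "Article one text ... Article two text ..."  # concatenated source documents
ids = tok(docs, return_tensors="pt", truncation=True, max_length=4096)
print(tok.decode(model.generate(**ids, max_new_tokens=256)[0], skip_special_tokens=True))
```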
Brief-details: SDG text classifier covering UN Sustainable Development Goals 1-15; classifies short paragraphs with 89.7% accuracy and was trained with a low CO2 footprint.
Brief-details: XLM-RoBERTa-based language detection model supporting 20 languages with 99.6% accuracy. Fine-tuned for sequence classification.
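A pipeline sketch with ranked predictions; the repo ID is a placeholder:

```python
from transformers import pipeline

# Placeholder repo ID for the XLM-RoBERTa language-detection checkpoint.
detect = pipeline("text-classification", model="someorg/xlm-roberta-language-detection", top_k=3)

print(detect("Dov'è la stazione ferroviaria più vicina?"))
# -> ranked language labels with scores (Italian should come out on top)
```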