Brief-details: InsightFace is an open-source facial analysis toolkit offering face detection (via the SCRFD architecture), face recognition, and alignment, along with person detection features.
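A minimal detection sketch, assuming the `insightface` Python package and its default "buffalo_l" model pack (which bundles an SCRFD detector):

```python
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")        # downloads weights on first run
app.prepare(ctx_id=0, det_size=(640, 640))  # ctx_id selects the device

img = cv2.imread("group_photo.jpg")         # hypothetical input image
faces = app.get(img)                        # detection + alignment + embeddings
for face in faces:
    print(face.bbox, face.det_score)        # box coordinates and confidence
```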
BRIEF-DETAILS: DialoGPT-based conversational AI model trained to mimic Peter Griffin's speech patterns and personality from Family Guy, created by TropicalJuice.
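A multi-turn chat sketch in the usual DialoGPT style; the repo ID below is an assumed placeholder, not a verified name:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TropicalJuice/DialoGPT-PeterGriffin"  # assumed ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

history = None
for _ in range(3):
    user_ids = tokenizer.encode(input(">> You: ") + tokenizer.eos_token,
                                return_tensors="pt")
    # DialoGPT conditions on the whole conversation so far
    bot_input = torch.cat([history, user_ids], dim=-1) if history is not None else user_ids
    history = model.generate(bot_input, max_length=1000,
                             pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(history[:, bot_input.shape[-1]:][0],
                                   skip_special_tokens=True))
```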
Brief Details: A Filipino-English sentence transformer model that maps text to 768-dim vectors, trained on parallel data using a teacher-student approach, achieving 0.75 correlation on Filipino STS tasks.
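A cross-lingual similarity sketch with sentence-transformers; the model ID is a placeholder, since the entry does not name the exact repo:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("org/filipino-english-st")  # hypothetical ID
fil, eng = "Magandang umaga sa inyong lahat.", "Good morning to all of you."
emb = model.encode([fil, eng])       # two 768-dim vectors
print(util.cos_sim(emb[0], emb[1]))  # high score if the pair aligns cross-lingually
```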
Brief Details: ERNIE-Gram-zh is a Chinese language model with explicit N-gram masking, featuring 12 layers, 768 hidden units, and 12 attention heads, optimized for NLU tasks.
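A feature-extraction sketch via transformers; ERNIE-Gram-zh is released primarily through PaddleNLP, so the HF repo ID here is a hypothetical placeholder for a community port:

```python
from transformers import AutoTokenizer, AutoModel

model_id = "org/ernie-gram-zh"  # hypothetical ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("百度提出的中文预训练语言模型", return_tensors="pt")
print(model(**inputs).last_hidden_state.shape)  # (1, seq_len, 768)
```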
Brief Details: Multilingual sentence transformer model fine-tuned for Latin text, mapping sentences to 768-dimensional vectors for semantic search and clustering tasks.
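A clustering sketch under the same sentence-transformers API, pairing the embeddings with scikit-learn's KMeans; the model ID is a placeholder:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer("org/latin-sentence-model")  # hypothetical ID
corpus = [
    "Gallia est omnis divisa in partes tres.",
    "Arma virumque cano.",
    "Carthago delenda est.",
    "Italiam fato profugus Laviniaque venit litora.",
]
embeddings = model.encode(corpus)  # (4, 768) matrix
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
for sentence, label in zip(corpus, labels):
    print(label, sentence)
```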
BRIEF-DETAILS: T5-small model fine-tuned for Spanish-to-Quechua translation. Trained on 102K+ sentences, primarily biblical texts. Achieves a 2.97 BLEU score for Ayacucho Quechua.
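A translation sketch with transformers; the repo ID is a placeholder, and fine-tuned T5 models often expect a task prefix, so check the model card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "org/t5-small-es-quy"  # hypothetical ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Dios es amor.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```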
BRIEF-DETAILS: SepFormer speech enhancement model trained on DNS-4 dataset. Optimized for 16kHz audio denoising with DNSMOS scores: SIG:2.999, BAK:3.076, OVRL:2.437.
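A denoising sketch with SpeechBrain; the source ID follows SpeechBrain's published naming for its DNS-4 enhancement model, but verify it against your installed version:

```python
import torchaudio
from speechbrain.inference.separation import SepformerSeparation

model = SepformerSeparation.from_hparams(
    source="speechbrain/sepformer-dns4-16k-enhancement",  # assumed repo ID
    savedir="pretrained_models/sepformer-dns4",
)
enhanced = model.separate_file(path="noisy_16k.wav")  # expects 16 kHz input
torchaudio.save("enhanced.wav", enhanced[:, :, 0].detach().cpu(), 16000)
```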
Brief-details: A state-of-the-art BERT-based model specialized for Hebrew morphological tagging, offering detailed linguistic analysis including POS tagging and morphological features.
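A tagging sketch via the transformers token-classification pipeline; the model ID is a placeholder, and dedicated morphology models may expose a custom head instead:

```python
from transformers import pipeline

tagger = pipeline("token-classification", model="org/hebrew-morph-bert")  # hypothetical ID
for token in tagger("הילדים הלכו לבית הספר"):
    print(token["word"], token["entity"], round(token["score"], 3))
```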
Brief Details: DeBERTa-XLarge: 750M-parameter NLU model with disentangled attention and enhanced mask decoder, achieving SOTA on GLUE benchmarks.
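A feature-extraction sketch against the published microsoft/deberta-xlarge checkpoint, loading the encoder without a task head:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-xlarge")
model = AutoModel.from_pretrained("microsoft/deberta-xlarge")

inputs = tokenizer("DeBERTa separates content and position attention.",
                   return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
print(hidden.shape)
```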
Brief-details: Quantized version of DeepSeek-V2-Lite-Chat-Uncensored with multiple compression variants (Q2_K to Q8_0), optimized for different size/quality tradeoffs.
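A loading sketch with llama-cpp-python; the GGUF filename is an assumed placeholder, so pick whichever quant variant (Q2_K to Q8_0) fits your memory budget:

```python
from llama_cpp import Llama

llm = Llama(model_path="DeepSeek-V2-Lite-Chat-Uncensored.Q4_K_M.gguf",  # assumed filename
            n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization does."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```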
Brief Details: Qwen2.5-72B-Instruct optimized for 4-bit quantization: a 72B-parameter model with 128K context length, multilingual support, and enhanced capabilities in coding and mathematics.
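One way to get 4-bit inference: load the instruct weights with a bitsandbytes NF4 config (the entry may instead refer to a pre-quantized repo; adjust the ID accordingly):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model_id = "Qwen/Qwen2.5-72B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"  # shards across GPUs
)

inputs = tokenizer("Solve 12*17 =", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```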
Brief-details: Swedish handwriting recognition model for historical texts (1600-1900), trained by Swedish National Archives. Specializes in Swedish running text transcription with strong performance on period documents.
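If the checkpoint is TrOCR-style, line images can be transcribed through the image-to-text pipeline; both that assumption and the repo ID below are unverified:

```python
from transformers import pipeline

htr = pipeline("image-to-text", model="Riksarkivet/swedish-htr")  # hypothetical ID
print(htr("manuscript_line.jpg")[0]["generated_text"])
```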
Brief-details: SmolLM2-360M is a compact 360M-parameter language model trained on 4T tokens, optimized for on-device use with strong instruction-following and reasoning capabilities.
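A generation sketch against the published HuggingFaceTB/SmolLM2-360M checkpoint (an -Instruct variant exists for chat-style use):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-360M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Gravity is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```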
Brief Details: ONNX-optimized DistilRoBERTa model for bias detection in text, offering efficient inference with preserved accuracy through Optimum library integration.
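An inference sketch using Optimum's ONNX Runtime wrapper; the model ID is a placeholder for the ONNX export of the bias classifier:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "org/distilroberta-bias-onnx"  # hypothetical ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSequenceClassification.from_pretrained(model_id)

classify = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classify("Everyone knows that claim is nonsense."))
```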
Brief-details: Uncensored Mistral-7B variant with 32k context, optimized for conversation & coding. Features ChatML format, function calling & agentic abilities.
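A ChatML-flavored sketch via the tokenizer's chat template; the repo ID is a placeholder for the variant described above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/mistral-7b-uncensored-32k"  # hypothetical ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Reverse a string in Python."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```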
Brief Details: Custom AI model by EnvyIrys hosted on HuggingFace; limited public information is available about its purpose or architecture.
Brief-details: English to Hebrew neural machine translation model from Helsinki-NLP, achieving a 40.1 BLEU score on the Tatoeba test set with a transformer architecture.
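A one-call sketch against the published Helsinki-NLP/opus-mt-en-he checkpoint:

```python
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-he")
print(translate("How are you today?")[0]["translation_text"])
```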
Brief Details: BioMedGPT-LM-7B: Llama2-based 7B parameter biomedical language model, fine-tuned on 26B tokens from medical papers, specialized for biomedical QA tasks.
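A QA-style prompt sketch; the repo ID follows the public PharMolix release, though treat it as an assumption:

```python
from transformers import pipeline

generate = pipeline("text-generation", model="PharMolix/BioMedGPT-LM-7B",  # assumed ID
                    device_map="auto")
prompt = "Question: What cellular process does the BRCA1 gene help regulate?\nAnswer:"
print(generate(prompt, max_new_tokens=64)[0]["generated_text"])
```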
BRIEF-DETAILS: StarCoderBase-1B is a foundational code generation model by BigCode, built with 1B parameters and licensed under OpenRAIL-M for open-source development.
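A code-completion sketch against the published bigcode/starcoderbase-1b checkpoint (gated behind the OpenRAIL-M agreement on the Hub):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigcode/starcoderbase-1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "def fibonacci(n):\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```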
Brief-details: A 13B-parameter GPTQ-quantized language model merging Pygmalion and SuperHOT, featuring an 8K context window, optimized for dialogue and conversational AI applications.
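A loading sketch for a GPTQ repo via transformers (needs the optimum and auto-gptq backends); the repo ID is a placeholder, and some SuperHOT builds need extra RoPE-scaling or trust_remote_code settings to reach the full 8K window:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/pygmalion-13b-superhot-8k-GPTQ"  # hypothetical ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("You are a friendly tavern keeper. Greet the traveler.",
                   return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0]))
```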
Brief-details: Protogen Infinity is an AI model by darkstorm2150, designed for advanced image generation and processing. Available on HuggingFace for public use.
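A text-to-image sketch with diffusers; the repo ID is an assumed placeholder for darkstorm2150's release:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "darkstorm2150/Protogen_Infinity_Official_Release",  # assumed ID
    torch_dtype=torch.float16,
).to("cuda")
image = pipe("portrait photo of an astronaut, studio lighting").images[0]
image.save("protogen_sample.png")
```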