Brief Details: Japanese text recognition model built on the PARSeq architecture for OCR tasks. Part of the yomitoku series, released as an open beta.
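For context, the upstream PARSeq repository exposes a torch.hub entry point; the sketch below illustrates that general inference pattern using the baudm/parseq hub weights as a stand-in (an assumption; the yomitoku Japanese checkpoint likely ships its own loader).

```python
# Minimal sketch of PARSeq-style scene-text recognition via the upstream
# baudm/parseq torch.hub entry point; the yomitoku checkpoint itself likely
# ships its own loader, so this only shows the general inference pattern.
import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load("baudm/parseq", "parseq", pretrained=True).eval()

# PARSeq expects fixed-size, normalized crops of single text lines.
preprocess = transforms.Compose([
    transforms.Resize((32, 128), transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(0.5, 0.5),
])

image = preprocess(Image.open("word_crop.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    logits = model(image)
text, confidence = model.tokenizer.decode(logits.softmax(-1))
print(text[0])
```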
Brief-details: Multi-domain T5 model for chemistry and language tasks, handling reaction prediction, retrosynthesis, molecular captioning, and text generation. Published in 2023 under the MIT license.
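A minimal sketch of prompting such a multi-task chemistry T5 through transformers follows; the repo id and the reaction-prediction prefix are placeholders, since the exact identifiers are not given above.

```python
# Hypothetical prompting sketch for a multi-task chemistry T5; the repo id
# and the "Predict the product..." prefix are placeholders to be replaced
# with the values from the actual model card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "org/chem-t5"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = "Predict the product of the following reaction: CCO.CC(=O)O>>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```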
Brief Details: LlamaV-o1: an 11B-parameter multimodal LLM optimized for visual reasoning, with strong performance on step-by-step analysis and explanation generation.
BRIEF-DETAILS: RecurrentGemma-2B is Google's 2-billion-parameter recurrent language model built on the Griffin architecture; access requires accepting the license on Hugging Face.
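A minimal loading sketch with transformers (>= 4.40, which added RecurrentGemma support) follows; it assumes the google/recurrentgemma-2b repo id and a completed license acceptance plus authentication.

```python
# Minimal sketch of loading RecurrentGemma-2B with transformers (>=4.40).
# The repo is gated: accept the license on Hugging Face and authenticate
# (e.g. `huggingface-cli login`) before from_pretrained will succeed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/recurrentgemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("The Griffin architecture replaces", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```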
Brief-details: PixArt-alpha is a research-focused text-to-image diffusion model that pairs high-quality image synthesis with an efficiency-optimized transformer architecture.
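A minimal sketch with diffusers' PixArtAlphaPipeline follows; the checkpoint id below is the commonly published 1024px release, assumed here and worth checking against the model card.

```python
# Minimal text-to-image sketch via diffusers' PixArtAlphaPipeline; the
# checkpoint id is an assumption (the widely published 1024px repo).
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
).to("cuda")

image = pipe("an isometric watercolor city at dusk").images[0]
image.save("pixart_sample.png")
```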
Brief-details: A 33B-parameter GPTQ-quantized LLM merging WizardLM Uncensored with SuperHOT 8K. Features an 8K context window via SuperHOT's scaled RoPE and 4-bit GPTQ quantization for reduced memory use with minimal accuracy loss.
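A minimal loading sketch for such a GPTQ checkpoint with transformers follows (it needs the optimum and auto-gptq extras plus accelerate); the repo id is a placeholder, and SuperHOT's scaled RoPE may require trust_remote_code.

```python
# Minimal GPTQ loading sketch with transformers (requires the optimum and
# auto-gptq extras plus accelerate for device_map). The repo id is a
# placeholder; SuperHOT's scaled RoPE may require trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/wizardlm-33b-superhot-8k-gptq"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True
)

prompt = "### Instruction: Write a haiku about long context.\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```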
BRIEF-DETAILS: A 2D image generation model by stb, hosted on HuggingFace, designed for creating anime-style artistic renderings and illustrations.
Brief Details: ESRGAN - Enhanced Super-Resolution Generative Adversarial Network for high-quality image upscaling; this checkpoint is hosted by utnah on HuggingFace.
Brief Details: Solar-based 11B-parameter LLM distributed in GGUF quantized form, supporting both Alpaca and Vicuna prompt formats. Known for efficient performance and versatility.
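A minimal sketch of running a GGUF file with llama-cpp-python follows; the file name and the Alpaca prompt template are assumptions based on the formats named above.

```python
# Minimal llama-cpp-python sketch; the GGUF file name and the Alpaca prompt
# template are assumptions based on the formats named above.
from llama_cpp import Llama

llm = Llama(model_path="solar-11b.Q4_K_M.gguf", n_ctx=4096)  # placeholder file

prompt = (
    "### Instruction:\nExplain GGUF quantization in one sentence.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```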
Brief-details: SEW-D speech model by ASAPP for ASR tasks. Pre-trained on 16kHz audio with 1.9x faster inference than wav2vec 2.0. Achieves 4.34% WER on LibriSpeech clean test.
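A minimal inference sketch with transformers' SEWDForCTC follows; the checkpoint id is one of ASAPP's published fine-tunes (an assumption, and not necessarily the exact variant behind the quoted WER).

```python
# Minimal SEW-D inference sketch with transformers; the checkpoint id is an
# assumption (one of ASAPP's published fine-tunes) and may not be the exact
# variant behind the quoted 4.34% WER.
import numpy as np
import torch
from transformers import SEWDForCTC, Wav2Vec2Processor

model_id = "asapp/sew-d-tiny-100k-ft-ls100h"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = SEWDForCTC.from_pretrained(model_id)

# The model expects 16 kHz mono float audio; one second of silence stands in
# for a real waveform (load yours with torchaudio or soundfile).
speech = np.zeros(16000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```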
Brief Details: DNA_bert_6 is a specialized BERT model trained for DNA sequence analysis with 6-mer tokenization, developed by zhihan1996 for genomic research.
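A minimal sketch of the overlapping 6-mer preprocessing DNABERT-style models expect follows; the repo id is inferred from the summary above and should be verified on the Hub.

```python
# Minimal sketch of DNABERT-style 6-mer preprocessing and embedding; the
# repo id below is inferred from the summary and should be verified.
from transformers import AutoModel, AutoTokenizer

def to_kmers(seq: str, k: int = 6) -> str:
    """Split a DNA sequence into overlapping, space-separated k-mers."""
    return " ".join(seq[i : i + k] for i in range(len(seq) - k + 1))

model_id = "zhihan1996/DNA_bert_6"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(to_kmers("ATGGCGTACGTTAGC"), return_tensors="pt")
embeddings = model(**inputs).last_hidden_state  # one vector per k-mer token
print(embeddings.shape)
```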
Brief-details: NetBERT - A BERT-base variant specialized for computer networking, pre-trained on 23GB of networking text for enhanced domain-specific understanding.
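A minimal sketch of probing such a domain-specific BERT with the fill-mask pipeline follows; the repo id is a placeholder and the masked networking sentence is illustrative.

```python
# Minimal fill-mask probe for a networking-domain BERT; the repo id is a
# placeholder and the masked sentence is illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="org/netbert")  # placeholder repo id
for pred in fill("OSPF is a link-state [MASK] protocol."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```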
BRIEF-DETAILS: Binary classification model for detecting hateful memes, reporting 76.7% accuracy, an AUC of 0.789, and balanced precision and recall.
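For reference, here is how such accuracy/precision/recall/AUC figures are computed with scikit-learn, on made-up toy labels and scores.

```python
# Toy illustration of computing the reported metrics with scikit-learn;
# the labels and scores below are made up for demonstration.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             roc_auc_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # 1 = hateful, 0 = benign
y_score = [0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1]   # model probabilities
y_pred = [int(s >= 0.5) for s in y_score]            # decision threshold 0.5

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))  # threshold-free
```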
Brief Details: BioBERT-based NER model specialized in genetic entity recognition, fine-tuned on JNLPBA and BC2GM datasets for biomedical applications.
BRIEF-DETAILS: BioBERT-based NER model specialized in disease entity recognition, fine-tuned on BC5CDR and NCBI disease datasets for biomedical text analysis.
Brief Details: BioBERT-based model specialized in chemical named entity recognition, fine-tuned on BC5CDR-chemicals and BC4CHEMD datasets for biomedical applications.
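The three BioBERT NER variants above share one usage pattern; a minimal sketch with the transformers token-classification pipeline follows, with a placeholder repo id to be swapped for the genetic, disease, or chemical checkpoint.

```python
# Minimal NER sketch shared by the three BioBERT variants above; swap the
# placeholder repo id for the genetic, disease, or chemical checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="org/biobert-disease-ner",  # placeholder repo id
    aggregation_strategy="simple",    # merge word pieces into entity spans
)
for ent in ner("Mutations in BRCA1 are associated with breast cancer."):
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```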
Brief-details: A long-document transformer model capable of processing 16K tokens, based on the BART-large architecture and specialized for long-range summarization and QA tasks.
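A minimal long-input summarization sketch follows, assuming this is the LED family and using allenai/led-large-16384 as the checkpoint id (an assumption to verify against the model card).

```python
# Minimal long-document summarization sketch, assuming the LED family
# (BART-large backbone, 16K-token window); the checkpoint id is an assumption.
from transformers import LEDForConditionalGeneration, LEDTokenizer

model_id = "allenai/led-large-16384"
tokenizer = LEDTokenizer.from_pretrained(model_id)
model = LEDForConditionalGeneration.from_pretrained(model_id)

document = "Long report text goes here. " * 500  # stand-in for a long input
inputs = tokenizer(document, return_tensors="pt", truncation=True,
                   max_length=16384)
summary_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```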
BRIEF-DETAILS: RuBioRoBERTa is a specialized biomedical language model for Russian text mining, built on RoBERTa architecture and trained on medical corpora.
BRIEF-DETAILS: Danish BERT model trained on 2M+ sentences and 40M words, optimized for Danish NLP tasks with a small, cased architecture.
Brief Details: AutoNLP-trained entity extraction model achieving 97.4% accuracy, specialized in named entity recognition tasks using the WikiAnn dataset.
Brief-details: MBart-large-cc25-en-ar is a specialized English-to-Arabic translation model, fine-tuned on UN corpus data. Research-grade implementation, not production-ready.
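A minimal English-to-Arabic generation sketch follows; the repo id is a placeholder, while the language codes en_XX and ar_AR are mBART-25's standard ones.

```python
# Minimal English-to-Arabic sketch for an mBART-CC25 fine-tune; the repo id
# is a placeholder, but "en_XX"/"ar_AR" are mBART-25's standard lang codes.
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_id = "org/mbart-large-cc25-en-ar"  # placeholder repo id
tokenizer = MBartTokenizer.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The committee adopted the resolution.", return_tensors="pt")
generated = model.generate(
    **inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ar_AR"]
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```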