Brief Details: HRNet-W18 is a 21.4M parameter deep learning model for image classification and feature extraction, trained on ImageNet-1k; its high-resolution network design maintains high-resolution representations throughout the forward pass rather than recovering them from low-resolution encodings.
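A minimal classification sketch via timm, assuming the weights are published under the timm name `hrnet_w18`:

```python
# Hedged sketch: HRNet-W18 classification with timm ("hrnet_w18" name assumed).
import timm
import torch
from PIL import Image

model = timm.create_model("hrnet_w18", pretrained=True).eval()
cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**cfg, is_training=False)

img = Image.open("example.jpg").convert("RGB")
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))  # (1, 1000) ImageNet-1k logits
print(logits.softmax(-1).topk(5))
```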
Brief Details: MoLFormer-XL is a 46.8M parameter chemical language model trained on SMILES strings from ZINC/PubChem for molecular property prediction and feature extraction.
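A feature-extraction sketch, assuming the checkpoint id `ibm/MoLFormer-XL-both-10pct` and that its remote code exposes a `pooler_output`:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "ibm/MoLFormer-XL-both-10pct"  # assumed checkpoint id
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval()

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]  # ethanol, benzene, aspirin
inputs = tok(smiles, padding=True, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
embeddings = out.pooler_output  # one embedding vector per molecule
```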
Brief Details: Qwen2.5-32B-Instruct-AWQ: A 4-bit quantized 32.5B parameter LLM with 128K context length, optimized for instruction-following and multilingual support.
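A chat-generation sketch, assuming the checkpoint id `Qwen/Qwen2.5-32B-Instruct-AWQ`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-32B-Instruct-AWQ"  # assumed checkpoint id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize AWQ quantization in one sentence."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tok.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```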
Brief Details: A 410M parameter language model from EleutherAI's Pythia suite, trained on the deduplicated Pile dataset for interpretability research.
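A loading sketch, assuming this is the `EleutherAI/pythia-410m-deduped` checkpoint; the Pythia suite also publishes intermediate training checkpoints via the `revision` argument:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m-deduped")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-410m-deduped")
# For interpretability work across training, e.g.: revision="step3000"

inputs = tok("The Pile is", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```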
Brief-details: Japanese named entity recognition (NER) model based on BERT-base Japanese v3, fine-tuned on a Wikipedia-derived NER dataset. Popular with 60k+ downloads, specialized for Japanese text analysis.
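A token-classification sketch; the checkpoint id below is an assumption about which BERT-base-v3 NER model the blurb describes:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="llm-book/bert-base-japanese-v3-ner-wikipedia-dataset",  # assumed id
    aggregation_strategy="simple",  # merge subword tokens into entity spans
)
print(ner("株式会社はてなは京都市に本社を置く。"))  # "Hatena Co. is headquartered in Kyoto."
```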
Brief Details: Efficient sentence embedding model with 33.4M params. Maps text to 384-dim vectors. Great for semantic search & clustering. Apache 2.0 licensed.
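An encoding sketch with sentence-transformers; `sentence-transformers/all-MiniLM-L12-v2` is an assumed match for the 33.4M/384-dim description:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")  # assumed id
embeddings = model.encode(["How do I reset my password?", "Steps to recover an account"])
print(embeddings.shape)  # (2, 384)
```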
Brief-details: A specialized biomedical cross-encoder model for ranking medical articles, trained on PubMed search logs for zero-shot retrieval tasks.
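A re-ranking sketch; `ncbi/MedCPT-Cross-Encoder` is an assumption about which PubMed-trained cross-encoder is meant:

```python
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("ncbi/MedCPT-Cross-Encoder")  # assumed id
query = "statin use and muscle pain"
articles = [
    "Statin-associated muscle symptoms: clinical management.",
    "Dietary fiber intake and cardiovascular outcomes.",
]
scores = reranker.predict([(query, a) for a in articles])
ranked = sorted(zip(scores, articles), reverse=True)  # higher score = more relevant
```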
Brief-details: A natural language inference model based on DistilRoBERTa, trained on SNLI and MultiNLI datasets for zero-shot classification and textual entailment tasks.
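A zero-shot classification sketch, assuming the checkpoint id `cross-encoder/nli-distilroberta-base`:

```python
from transformers import pipeline

clf = pipeline("zero-shot-classification", model="cross-encoder/nli-distilroberta-base")
print(clf("The new GPU doubles training throughput",
          candidate_labels=["hardware", "cooking", "politics"]))
```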
Brief Details: A lightweight sentence embedding model with 33.4M parameters that maps text to 384-dimensional vectors, optimized for semantic search and similarity tasks.
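A semantic-search sketch using sentence-transformers utilities; the blurb does not name the model, so the id below is a stand-in:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")  # stand-in id
corpus = model.encode(["reset password", "update billing info", "delete account"],
                      convert_to_tensor=True)
query = model.encode("I forgot my login credentials", convert_to_tensor=True)
print(util.semantic_search(query, corpus, top_k=2))  # top matches with cosine scores
```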
Brief Details: Japanese language Supervised SimCSE model built on BERT-large for semantic similarity and sentence embeddings, trained on JSNLI dataset, supports sentence-transformers.
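A Japanese similarity sketch; `cl-nagoya/sup-simcse-ja-large` is an assumption about which supervised SimCSE checkpoint is described:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("cl-nagoya/sup-simcse-ja-large")  # assumed id
emb = model.encode(["猫がソファで寝ている。", "ソファの上で猫が眠っている。"],
                   convert_to_tensor=True)  # two paraphrases of "a cat sleeping on the sofa"
print(util.cos_sim(emb[0], emb[1]))
```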
Brief Details: EdgeNeXt small model optimized for mobile vision - 5.59M params, ImageNet trained with USI distillation, an efficient CNN-Transformer hybrid architecture.
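A multi-scale feature-extraction sketch via timm; the name `edgenext_small.usi_in1k` and `features_only` support are assumptions:

```python
import timm
import torch

# Assumed timm name for the USI-distilled weights.
backbone = timm.create_model("edgenext_small.usi_in1k", pretrained=True, features_only=True)
feats = backbone(torch.randn(1, 3, 256, 256))
print([f.shape for f in feats])  # one feature map per network stage
```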
Brief-details: BLEURT-tiny-512 is a lightweight PyTorch implementation of Google's BERT-based text evaluation metric, optimized for assessing text generation quality with 61K+ downloads.
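A scoring sketch; `Elron/bleurt-tiny-512` is an assumption about the PyTorch port's id, with the reference/candidate pair fed as a sequence pair:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Elron/bleurt-tiny-512"  # assumed id of the PyTorch port
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

references = ["The cat sat on the mat."]
candidates = ["A cat was sitting on the mat."]
with torch.no_grad():
    scores = model(**tok(references, candidates, return_tensors="pt")).logits.squeeze()
print(scores)  # higher = candidate better matches the reference
```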
Brief-details: FuguMT is a Marian-NMT based English-to-Japanese translation model with BLEU score of 32.7, supporting sentence-level translation using transformers and sentencepiece.
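A translation sketch, assuming the English-to-Japanese checkpoint id `staka/fugumt-en-ja`:

```python
from transformers import pipeline

translate = pipeline("translation", model="staka/fugumt-en-ja")  # assumed id
print(translate("The weather is beautiful today.")[0]["translation_text"])
```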
Brief-details: UperNet with ConvNeXt-small backbone for semantic segmentation, pairing UperNet's multi-scale decoder with a ConvNeXt encoder for pixel-wise labeling. MIT licensed, highly downloaded.
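A segmentation sketch via transformers; `openmmlab/upernet-convnext-small` is the assumed checkpoint id:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

model_id = "openmmlab/upernet-convnext-small"  # assumed id
processor = AutoImageProcessor.from_pretrained(model_id)
model = UperNetForSemanticSegmentation.from_pretrained(model_id).eval()

image = Image.open("street.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_classes, H, W)
pred = logits.argmax(dim=1)          # per-pixel class ids
```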
Brief Details: PNASNet-5 Large: 86.2M param image classification model, trained on ImageNet-1k. Progressive Neural Architecture Search design. 331x331 input.
Brief Details: Google's T5-v1.1-small model - Improved text-to-text transformer with GEGLU activation, trained on C4 dataset, 61K+ downloads, Apache 2.0 licensed
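A loading sketch, assuming the id `google/t5-v1_1-small`; note that T5 v1.1 was pre-trained on C4 only (no supervised task mixing), so it needs fine-tuning before downstream use:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/t5-v1_1-small")  # assumed id
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-v1_1-small")

# Standard seq2seq fine-tuning step: feed inputs plus target labels, get a loss.
batch = tok("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
labels = tok("A fox jumps.", return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss
```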
Brief Details: ColQwen2-v0.1 is a visual retriever model based on Qwen2-VL-2B-Instruct, implementing ColBERT strategy for efficient document indexing and retrieval.
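A retrieval sketch based on the colpali-engine package; treat the class names, methods, and checkpoint id below as assumptions about that library's documented API:

```python
import torch
from PIL import Image
from colpali_engine.models import ColQwen2, ColQwen2Processor  # assumed API

model = ColQwen2.from_pretrained("vidore/colqwen2-v0.1",  # assumed id
                                 torch_dtype=torch.bfloat16, device_map="auto").eval()
processor = ColQwen2Processor.from_pretrained("vidore/colqwen2-v0.1")

pages = [Image.open("page1.png")]
queries = ["Which page shows quarterly revenue?"]

batch_pages = processor.process_images(pages).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)
with torch.no_grad():
    page_emb = model(**batch_pages)
    query_emb = model(**batch_queries)
scores = processor.score_multi_vector(query_emb, page_emb)  # ColBERT-style late interaction
```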
Brief-details: A GPTQ-quantized Mixtral-8x7B model optimized for coding and general tasks, offering multiple quantization options; the packed 4-bit checkpoint registers as 6.09B parameters, though the underlying mixture-of-experts model is ~46.7B.
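A loading sketch; the repo id and branch name are assumptions (GPTQ repos of this kind typically expose one branch per quantization variant via `revision`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ"  # assumed repo id
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    revision="gptq-4bit-32g-actorder_True",  # assumed branch; pick the variant you need
)
tok = AutoTokenizer.from_pretrained(model_id)
```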
Brief-details: EfficientNet B1 variant trained on ImageNet-1k with 7.8M params, optimized for mobile/edge deployment with 81.44% top-1 accuracy
Brief Details: A multilingual relation extraction model supporting 18 languages, extending the REBEL approach with 611M parameters and a seq2seq architecture.
Brief-details: A multilingual Marian-NMT translation model by Helsinki-NLP that translates Romance languages (French, Spanish, Italian, Portuguese, etc.) into English.
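A translation sketch, assuming the checkpoint id `Helsinki-NLP/opus-mt-ROMANCE-en`; since the target side is always English, no language prefix is needed:

```python
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-ROMANCE-en")  # assumed id
print(translate("La vie est belle.")[0]["translation_text"])   # French
print(translate("La vida es bella.")[0]["translation_text"])  # Spanish
```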