nb-bert-base-ner
| Property | Value |
|---|---|
| Parameter Count | 124M |
| License | CC-BY-4.0 |
| Language | Norwegian |
| Framework | PyTorch |
| Dataset | NorNE |
What is nb-bert-base-ner?
nb-bert-base-ner is a Named Entity Recognition (NER) model developed by NbAiLab for Norwegian text. Built on the BERT architecture and fine-tuned on the NorNE dataset, it identifies and classifies named entities in Norwegian text.
Implementation Details
The model is implemented with the Transformers library on the PyTorch framework and has 124 million parameters. It uses the BERT base architecture with a token-classification head added during fine-tuning, supports Hugging Face inference endpoints, and stores its weights in the safetensors format. A minimal loading sketch follows the list below.
- Based on BERT architecture with token classification head
- Trained specifically for Norwegian language processing
- Implements efficient inference using the PyTorch backend
- Utilizes the NorNE dataset for fine-tuning
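The snippet below is a minimal loading sketch using the Transformers Auto classes. It assumes the model is published on the Hugging Face Hub under the id `NbAiLab/nb-bert-base-ner`; the exact repository id is an assumption, not stated above.

```python
# Minimal loading sketch, assuming the Hub id "NbAiLab/nb-bert-base-ner".
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "NbAiLab/nb-bert-base-ner"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# The token-classification head maps each token to an entity label;
# the label inventory is stored in the model config.
print(model.config.id2label)
```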
Core Capabilities
- Named Entity Recognition in Norwegian text
- Token-level classification for identifying entities
- Support for batch processing and inference endpoints
- Integration with Hugging Face's transformers pipeline, as shown in the usage sketch below
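As a usage sketch of pipeline-based inference, the example below runs two sentences as a batch; the Hub id and the Norwegian example sentences are assumptions for illustration.

```python
# Usage sketch via the transformers pipeline; the Hub id and sentences are assumed.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="NbAiLab/nb-bert-base-ner",
    aggregation_strategy="simple",  # merge word pieces into whole-entity spans
)

# Passing a list of sentences runs them as a batch.
sentences = [
    "Jens Stoltenberg ble født i Oslo.",
    "Equinor har hovedkontor i Stavanger.",
]
for entities in ner(sentences):
    for ent in entities:
        print(ent["word"], ent["entity_group"], round(ent["score"], 3))
```

With `aggregation_strategy="simple"`, the pipeline merges WordPiece fragments back into whole entities, so each result carries an `entity_group` label rather than per-token tags.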
Frequently Asked Questions
Q: What makes this model unique?
The model is fine-tuned specifically for Norwegian NER, which makes it more effective on Norwegian text than general-purpose multilingual models.
Q: What are the recommended use cases?
The model is well suited to applications that require named entity recognition in Norwegian, such as information extraction, document analysis, and automated content tagging; a brief tagging sketch follows.
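The sketch below illustrates the content-tagging use case by grouping recognized entities by predicted type. The helper name `tag_document`, the Hub id, and the example sentence are hypothetical, added only for illustration.

```python
# Hypothetical content-tagging helper; Hub id and example text are assumptions.
from collections import defaultdict
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="NbAiLab/nb-bert-base-ner",
    aggregation_strategy="simple",
)

def tag_document(text):
    """Group recognized entity surface forms by their predicted entity type."""
    tags = defaultdict(set)
    for ent in ner(text):
        tags[ent["entity_group"]].add(ent["word"])
    return dict(tags)

print(tag_document("Erna Solberg møtte representanter fra NTNU i Trondheim."))
```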