# whisper-tiny-ar-quran
| Property | Value |
|---|---|
| License | Apache 2.0 |
| Framework | PyTorch |
| Final WER | 7.0535% |
| Training Steps | 5000 |
## What is whisper-tiny-ar-quran?
whisper-tiny-ar-quran is a speech recognition model fine-tuned from OpenAI's Whisper-tiny architecture and optimized specifically for Arabic Quranic recitation. The model reaches a Word Error Rate (WER) of 7.05% on its evaluation set, achieved through progressive fine-tuning.
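For context, WER is the word-level edit distance between the hypothesis and the reference transcript, divided by the number of reference words. A minimal sketch of the metric (not the evaluation code used for this model):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all of ref[:i]
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all of hyp[:j]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[-1][-1] / len(ref)
```

A WER of 7.05% therefore means roughly 7 word-level errors (insertions, deletions, or substitutions) per 100 reference words.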
## Implementation Details
The model uses the Transformers framework with a PyTorch backend and a linear learning rate scheduler with warmup. Training was conducted with mixed-precision Native AMP, an Adam optimizer (betas=(0.9, 0.999), epsilon=1e-08), and a learning rate of 1e-4.
- Batch size: 16 for training, 8 for evaluation
- Warmup steps: 500
- Total training steps: 5000
- Progressive WER improvement from 20.02% to 7.05%
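The linear-with-warmup schedule implied by these hyperparameters (base LR 1e-4, 500 warmup steps, 5000 total steps) can be sketched as follows; the actual run used the Transformers trainer's built-in scheduler, so this is only illustrative:

```python
BASE_LR = 1e-4       # learning rate from the training config
WARMUP_STEPS = 500   # warmup steps from the training config
TOTAL_STEPS = 5000   # total training steps

def linear_warmup_lr(step: int) -> float:
    """LR at a given step: linear ramp up over warmup, then linear decay to 0."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    # linear decay from BASE_LR at step 500 down to 0 at step 5000
    return BASE_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)
```

For example, the rate peaks at 1e-4 exactly at step 500 and falls back to zero by the final step.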
## Core Capabilities
- Specialized Arabic Quranic speech recognition
- Efficient processing with tiny model architecture
- Optimized for accuracy with minimal computational requirements
- Supports TensorBoard integration for monitoring
## Frequently Asked Questions
**Q: What makes this model unique?**
This model stands out for its specialized focus on Quranic Arabic recognition, achieving impressive accuracy despite using the compact Whisper-tiny architecture. The progressive improvement in WER from 20.02% to 7.05% demonstrates its effectiveness for religious text transcription.
**Q: What are the recommended use cases?**
The model is designed for transcribing Quranic recitations and other Arabic religious audio. It is particularly suitable for applications that need accurate Arabic speech recognition in religious contexts where computational resources are limited.
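A typical inference flow with the Transformers `pipeline` API might look like the sketch below. The Hub repo id and the audio filename are placeholders, not confirmed by this card:

```python
from transformers import pipeline

# Hypothetical Hub repo id -- replace with the actual path to this checkpoint.
MODEL_ID = "your-username/whisper-tiny-ar-quran"

asr = pipeline("automatic-speech-recognition", model=MODEL_ID)

# Transcribe a local recitation recording (16 kHz mono audio works best
# with Whisper-family models).
result = asr("recitation.wav")
print(result["text"])
```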