Granite TimeSeries TTM-R2
| Property | Value |
|---|---|
| Parameter Count | 805k parameters |
| License | Apache 2.0 |
| Paper | Research Paper |
| Training Data Size | ~700M samples |
What is granite-timeseries-ttm-r2?
Granite TimeSeries TTM-R2 is a compact pre-trained model for multivariate time-series forecasting, developed by IBM Research. As part of the TinyTimeMixers (TTM) family, it targets efficient time-series prediction, requiring only 805k parameters while delivering state-of-the-art performance.
Implementation Details
The model employs a focused pre-training approach: each variant is optimized for a specific forecasting setting defined by its context and forecast lengths. Supported configurations include context lengths of 512, 1024, and 1536 timepoints, with forecast lengths of up to 720 timepoints (a loading sketch follows the list below). TTM-R2 models are pre-trained on approximately 700M samples, offering a 15% performance improvement over their R1 counterparts.
- Supports both zero-shot and fine-tuned forecasting
- Specialized for minutely and hourly resolution predictions
- Includes channel-independent and channel-mix forecasting capabilities
- Supports exogenous variable integration
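As a concrete illustration of the zero-shot path, the sketch below loads a TTM-R2 checkpoint and runs it on a dummy batch. It assumes the `tsfm_public` toolkit (shipped with IBM's granite-tsfm repository) exposes a `TinyTimeMixerForPrediction` class with a Hugging Face-style `from_pretrained` method; the exact class, argument, and output-field names are assumptions and may differ between toolkit versions. Because the checkpoints are small, the same call runs comfortably on a CPU-only machine.

```python
import torch
# Assumed import path; tsfm_public ships with IBM's granite-tsfm toolkit.
from tsfm_public import TinyTimeMixerForPrediction

# Load the default R2 variant (512-timepoint context, 96-timepoint forecast).
model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm-granite/granite-timeseries-ttm-r2"
)
model.eval()

# Dummy multivariate batch: (batch, context_length, num_channels).
past_values = torch.randn(1, 512, 3)

with torch.no_grad():
    output = model(past_values=past_values)

# Assumed output field: (batch, forecast_length, num_channels).
forecast = output.prediction_outputs
print(forecast.shape)  # expected: torch.Size([1, 96, 3])
```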
Core Capabilities
- Zero-shot multivariate forecasting without training
- Fine-tuning with minimal data (as little as 5% of training data)
- Rolling forecasts with extended prediction lengths (see the sketch after this list)
- Support for exogenous and categorical features
- Optimal for high-frequency time series (minutely to hourly)
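The rolling-forecast capability can be emulated with a simple loop, shown below as an illustrative sketch rather than the toolkit's own API: forecast a chunk, slide the context window forward by the same amount, and repeat until the desired horizon is covered. The `past_values` keyword and `prediction_outputs` field follow the same assumptions as the earlier sketch.

```python
import torch

def rolling_forecast(model, context: torch.Tensor, horizon: int, step: int) -> torch.Tensor:
    """Illustrative rolling forecast: repeatedly predict `step` points,
    slide the context window forward by the same amount, and stop once
    `horizon` points have been produced. `context` is (1, context_len, channels)."""
    context = context.clone()
    chunks = []
    produced = 0
    with torch.no_grad():
        while produced < horizon:
            out = model(past_values=context)             # assumed forward signature
            chunk = out.prediction_outputs[:, :step, :]  # assumed output field
            chunks.append(chunk)
            produced += chunk.shape[1]
            # Drop the oldest `step` points and append the fresh forecast.
            context = torch.cat([context[:, step:, :], chunk], dim=1)
    return torch.cat(chunks, dim=1)[:, :horizon, :]
```

Note that each appended chunk is itself a forecast rather than an observation, so error compounds as the horizon grows; keeping `step` at or below the checkpoint's native forecast length is the safer choice.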
Frequently Asked Questions
Q: What makes this model unique?
TTM is the first family of "tiny" pre-trained models for time-series forecasting. TTM-R2 achieves state-of-the-art results with just 805k parameters, so it can be deployed even on CPU-only machines while outperforming alternatives that require billions of parameters.
Q: What are the recommended use cases?
The model is ideal for high-frequency time-series forecasting (10-minute, 15-minute, and 1-hour intervals) and performs best on standardized data. It is not recommended for lower-resolution data (weekly or monthly) or for use cases that require artificially extending the context length.
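Since the model expects standardized inputs, a minimal per-channel z-scoring helper in plain NumPy (independent of any toolkit preprocessing utilities) might look like the following; remember to invert the scaling on the forecasts.

```python
import numpy as np

def standardize_per_channel(series: np.ndarray):
    """Z-score each channel of a (time, channels) array and keep the
    statistics so the model's forecasts can be mapped back later."""
    mean = series.mean(axis=0, keepdims=True)
    std = series.std(axis=0, keepdims=True) + 1e-8  # avoid division by zero
    return (series - mean) / std, mean, std

def destandardize(forecast: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Map standardized forecasts back to the original scale."""
    return forecast * std + mean
```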