Granite TimeSeries TTM-R1
| Property | Value |
|---|---|
| Parameters | 805k |
| License | Apache 2.0 |
| Paper | View Paper |
| Tensor Type | F32 |
What is granite-timeseries-ttm-r1?
Granite-TimeSeries-TTM-R1 is a compact pre-trained model for multivariate time-series forecasting, developed by IBM Research. As one of the first "tiny" pre-trained models in its domain, it delivers competitive forecasting performance with fewer than 1 million parameters. The model targets minutely to hourly resolution data and was pre-trained on 250M public training samples.
Implementation Details
The model follows a focused pre-training approach: each TTM variant is optimized for a specific forecasting setting defined by its context and forecast lengths. Two main variants are currently available, 512-96 and 1024-96, which take 512 or 1024 historical time points as context and predict the next 96 time points, respectively (a zero-shot loading sketch follows the list below).
- Efficient architecture requiring minimal computational resources
- Supports both zero-shot and fine-tuned forecasting
- Capable of running on a single GPU or even a CPU-only machine
- Pre-trained on diverse public time series datasets
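As a rough sketch of zero-shot usage under a few assumptions: the weights are assumed to be published on the Hugging Face Hub as `ibm-granite/granite-timeseries-ttm-r1`, and the `TinyTimeMixerForPrediction` class is assumed to come from IBM's `granite-tsfm` toolkit (imported as `tsfm_public`). Check the model card for the exact package, repository, and revision names before relying on this.

```python
# Minimal zero-shot loading sketch. Assumes the granite-tsfm toolkit
# (pip install granite-tsfm) exposes TinyTimeMixerForPrediction and that
# the 512-96 variant is the default revision of the Hub repository.
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm-granite/granite-timeseries-ttm-r1"  # 512-96 variant assumed on the main branch
)
model.eval()  # zero-shot: inference only, no fine-tuning
```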
Core Capabilities
- Zero-shot multivariate forecasting
- Channel-independent and channel-mix fine-tuning
- Support for exogenous/control variables
- Rolling forecasts for extended prediction lengths
- Integration with static categorical features
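To make the zero-shot multivariate capability concrete, here is an illustrative forward pass on synthetic data. The repository id, class path, input shape `(batch, context_length, num_channels)`, and the `prediction_outputs` attribute on the output are assumptions to be verified against the toolkit you install.

```python
# Illustrative zero-shot forward pass on synthetic data (all names assumed).
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

model = TinyTimeMixerForPrediction.from_pretrained("ibm-granite/granite-timeseries-ttm-r1")
model.eval()

batch, context_length, num_channels = 4, 512, 3  # e.g. three correlated sensors
past_values = torch.randn(batch, context_length, num_channels)

with torch.no_grad():
    out = model(past_values=past_values)

# Attribute name assumed; expected shape: (batch, 96, num_channels)
forecast = out.prediction_outputs
print(forecast.shape)
```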
Frequently Asked Questions
Q: What makes this model unique?
TTM-R1 combines an extremely compact size (805k parameters) with forecasting accuracy that is competitive with models holding billions of parameters. It is one of the first tiny pre-trained models designed specifically for time-series forecasting.
Q: What are the recommended use cases?
The model is best suited for minutely to hourly resolution time-series data (10 min, 15 min, 1 hour). It's particularly effective for multivariate forecasting scenarios requiring predictions up to 96 time points into the future, with either 512 or 1024 historical time points as context.
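The rolling-forecast capability listed above can be approximated by repeatedly appending the model's 96-step output to the context window and forecasting again. The sketch below keeps the model call behind a generic `forecast_96` helper, a hypothetical wrapper around the forward pass shown earlier, so only the windowing logic is asserted here.

```python
import numpy as np

def rolling_forecast(history: np.ndarray, forecast_96, steps_needed: int,
                     context_length: int = 512) -> np.ndarray:
    """Extend predictions beyond 96 points by feeding forecasts back as context.

    history:      (time, channels) array of observed values
    forecast_96:  hypothetical callable mapping a (context_length, channels)
                  window to a (96, channels) forecast, e.g. wrapping the
                  forward pass sketched earlier
    """
    series = history.copy()
    produced = []
    while sum(len(p) for p in produced) < steps_needed:
        window = series[-context_length:]        # most recent context window
        pred = forecast_96(window)               # (96, channels) forecast
        produced.append(pred)
        series = np.concatenate([series, pred])  # roll the window forward
    return np.concatenate(produced)[:steps_needed]
```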