# Decision Transformer for Atari Games
| Property | Value |
|---|---|
Author | edbeeching |
Model Type | Reward-conditioned GPT |
Architecture | 6-layer transformer with 8 attention heads |
Embedding Dimension | 128 |
Supported Games | Breakout, Pong, Qbert, Seaquest |
## What is decision_transformer_atari?
Decision Transformer for Atari is an implementation of the Decision Transformer architecture adapted to classic Atari games. It uses a GPT-style causal transformer to predict game actions conditioned on past states, past actions, and a desired future return.
## Implementation Details
The model is a 6-layer transformer with 8 attention heads and a 128-dimensional embedding. It processes token sequences of up to length 90 and uses an action vocabulary of size 4. The release includes pretrained weights for four popular Atari games, each trained with seed 123.
- Transformer-based architecture with 6 layers
- 8 attention heads for complex pattern recognition
- 128-dimensional embeddings for state representation
- Maximum sequence length of 90 timesteps
- Reward-conditioned decision making
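The hyperparameters above can be collected into a small configuration object. This is a minimal sketch assuming a minGPT-style setup; the class and field names here are illustrative, not the repository's actual API.

```python
from dataclasses import dataclass

@dataclass
class DTConfig:
    """Hyperparameters as listed in this model card (names are assumed)."""
    n_layer: int = 6      # transformer blocks
    n_head: int = 8       # attention heads per block
    n_embd: int = 128     # embedding dimension
    block_size: int = 90  # maximum sequence length in tokens
    vocab_size: int = 4   # discrete action vocabulary

    def params_per_block_estimate(self) -> int:
        # Rough parameter count of one block: attention projections
        # (4 * d^2) plus a 4x-widened MLP (8 * d^2), biases ignored.
        return 12 * self.n_embd * self.n_embd

cfg = DTConfig()
```

With the listed embedding dimension, each block holds roughly 12 × 128² ≈ 197K weight parameters, so the bulk of the model is small by transformer standards.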
## Core Capabilities
- Game state processing and action prediction
- Reward-conditioned behavior learning
- Support for multiple Atari game environments
- Easy integration with PyTorch frameworks
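If the 90-token window follows the original Decision Transformer input layout, an assumption here, each timestep contributes three tokens: a return-to-go, a state, and an action, so the window covers 30 timesteps. A minimal sketch of that interleaving:

```python
def interleave(returns_to_go, states, actions):
    """Flatten per-timestep (return-to-go, state, action) triples into
    one token sequence, following the Decision Transformer layout
    (assumed, not confirmed by this model card)."""
    seq = []
    for r, s, a in zip(returns_to_go, states, actions):
        seq.extend([("R", r), ("s", s), ("a", a)])
    return seq

# 30 timesteps * 3 tokens per step = a 90-token context window
seq = interleave(range(30), range(30), range(30))
```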
## Frequently Asked Questions
**Q: What makes this model unique?**
This model applies the Decision Transformer architecture to Atari games, casting game playing as sequence modeling rather than as conventional value-based reinforcement learning. It provides pretrained weights for multiple games and conditions its action predictions on a target return (return-to-go).
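Return conditioning works by specifying a target return up front and decrementing it by each observed reward as play proceeds. The following is a toy sketch of that control loop with stand-in `policy` and `env_step` callables; nothing here reflects the repository's actual interfaces.

```python
def rollout(policy, env_step, target_return, horizon=30):
    """Return-conditioned control loop (illustrative only):
    the model conditions each action on the remaining return-to-go."""
    rtg = target_return
    history = []
    for _ in range(horizon):
        action = policy(history, rtg)   # condition on desired remaining return
        reward, state = env_step(action)
        rtg -= reward                   # decrement return-to-go by observed reward
        history.append((rtg, state, action))
    return history

# Toy demonstration: a policy that always picks action 0 in an
# environment that always yields reward 1.0.
trace = rollout(policy=lambda hist, rtg: 0,
                env_step=lambda action: (1.0, None),
                target_return=30.0)
```

A higher target return asks the model for higher-reward behavior; in practice the achievable range is bounded by the returns seen in the training data.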
**Q: What are the recommended use cases?**
The model is specifically designed for playing Atari games (Breakout, Pong, Qbert, and Seaquest). It can be used for research in reinforcement learning, game AI development, and studying transformer applications in game-playing scenarios.