TAPAS Large Fine-tuned WTQ
| Property | Value |
|---|---|
| Parameter Count | 337M |
| License | Apache 2.0 |
| Developer | Google |
| Primary Paper | TAPAS: Weakly Supervised Table Parsing via Pre-training |
| Best Dev Accuracy | 50.97% (WTQ) |
What is tapas-large-finetuned-wtq?
TAPAS large fine-tuned WTQ is a BERT-like transformer model specialized for table question answering. Developed by Google, it extends masked language model pre-training to tabular data, combining natural language understanding with numerical reasoning over table cells.
Implementation Details
The model is trained in multiple stages: masked language model pre-training on Wikipedia text and tables, intermediate pre-training for numerical reasoning, and fine-tuning in a chain on SQA, WikiSQL, and finally WTQ. It uses relative position embeddings (resetting the position index at every table cell) and encodes inputs in the format [CLS] Question [SEP] Flattened table [SEP]; a usage sketch follows the list below.
- 337M parameters with F32 tensor type
- Fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps
- Uses Adam optimizer with 1.93581e-5 learning rate
- Maximum sequence length of 512 and batch size of 512
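As a rough illustration (not part of the model card), the sketch below assumes the checkpoint is published on the Hugging Face Hub as google/tapas-large-finetuned-wtq and that a recent transformers release, PyTorch, and pandas are installed. The TapasTokenizer takes care of flattening the table into the [CLS] Question [SEP] Flattened table [SEP] sequence and truncating it to the 512-token limit mentioned above; the example table and question are made up for demonstration.

```python
# Minimal inference sketch; model id and sample data are illustrative assumptions.
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

model_name = "google/tapas-large-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# TAPAS expects every cell as a string; the tokenizer flattens the table row by row
# behind the question, producing [CLS] Question [SEP] Flattened table [SEP].
table = pd.DataFrame(
    {"City": ["Paris", "Berlin", "Madrid"],
     "Population": ["2148000", "3645000", "3223000"]}
)
queries = ["Which city has the largest population?"]

# Truncation drops table rows so the flattened sequence stays within 512 tokens.
inputs = tokenizer(table=table, queries=queries, padding="max_length",
                   truncation=True, return_tensors="pt")
outputs = model(**inputs)

# Map cell-selection and aggregation logits back to table coordinates.
predicted_coordinates, predicted_aggregation = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print(predicted_coordinates, predicted_aggregation)
```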
Core Capabilities
- Advanced table parsing and understanding
- Numerical reasoning on tabular data
- Question answering based on table contents
- Support for complex aggregation operations (SUM, AVERAGE, COUNT), illustrated in the sketch after this list
- Available with either relative (default) or absolute position embeddings
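The aggregation behaviour is easiest to see through the Hugging Face table-question-answering pipeline. The snippet below is a hedged sketch, again assuming the google/tapas-large-finetuned-wtq checkpoint and a pandas table whose cells are strings; the repository names and star counts are invented for the example.

```python
# Aggregation example via the table-question-answering pipeline (illustrative data).
import pandas as pd
from transformers import pipeline

tqa = pipeline("table-question-answering", model="google/tapas-large-finetuned-wtq")

table = pd.DataFrame(
    {"Repository": ["transformers", "datasets", "tokenizers"],
     "Stars": ["36542", "4512", "3934"]}
)

result = tqa(table=table, query="What is the total number of stars?")
# The result dict reports the selected cells and the chosen aggregator, e.g.
# {'answer': 'SUM > 36542, 4512, 3934', 'coordinates': [...], 'cells': [...], 'aggregator': 'SUM'}
print(result["aggregator"], result["cells"])
```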
Frequently Asked Questions
Q: What makes this model unique?
This model stands out for its specialized pre-training approach that includes both masked language modeling and intermediate pre-training for numerical reasoning, making it particularly effective for table-based tasks.
Q: What are the recommended use cases?
The model is specifically designed for answering questions about tabular data, making it ideal for applications in data analysis, business intelligence, and automated reporting where understanding and extracting information from tables is crucial.