A test run is a single execution of a benchmark test using a defined model configuration.
Each run represents how a particular large language model (LLM) — such as GPT-4, Claude-3, or Gemini — performed on a given task at a specific time, with specific settings.
A test run includes:

- the provider and model under test, with generation settings such as temperature
- the dataclass (output schema) the model's response is validated against
- the prompt sent to the model and the result returned
- evaluation metrics (normalized score, fuzzy score, F1 micro/macro, micro precision/recall, true/false positives)
- token usage and cost
Together, test runs make it possible to compare models, providers, and configurations across benchmarks in a transparent and reproducible way.
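The fields above can be sketched as a record type. This is a minimal illustration, not the benchmark's actual schema: the class and field names below are assumptions drawn from the report tables in this document.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestRun:
    """One benchmark execution of a model configuration (illustrative sketch)."""
    provider: str                      # e.g. "openai", "genai"
    model: str                         # e.g. "gemini-3.1-pro-preview"
    temperature: float                 # sampling temperature used for the run
    dataclass_name: str                # output schema, e.g. "MagazinePage"
    normalized_score: Optional[float]  # percentage; None when there is no valid result
    input_tokens: int
    output_tokens: int
    cost_usd: float

    @property
    def total_tokens(self) -> int:
        # Total tokens = input tokens + output tokens, as in the pricing lines.
        return self.input_tokens + self.output_tokens

# Values taken from the first run in this report.
run = TestRun("genai", "gemini-3.1-pro-preview", 0.0, "MagazinePage",
              0.0, 52_300, 3_400, 0.145)
print(run.total_tokens)  # 55700
```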
Tags: century: 20 · document-type: newspaper-page · language: en · layout: prose, columns · script: latin · task: document-understanding · writing: printed

| Field | Value |
| --- | --- |
| Provider | genai |
| Model | gemini-3.1-pro-preview |
| Temperature | 0.0 |
| Dataclass | MagazinePage |
| Normalized Score | 0.00 % |
| Test time | unknown seconds |
Prompt: Extract all advertisements and return their bounding boxes. The original size of the page is {width} x {height} pixels.

Result: no valid result
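The prompt carries `{width}` and `{height}` placeholders that are filled with the page's pixel dimensions before the model is called. How the benchmark performs this substitution is not shown here; Python's `str.format` is one plausible mechanism, sketched below with made-up dimensions.

```python
# The placeholder names {width}/{height} come from the prompt text above;
# the substitution mechanism and the example dimensions are assumptions.
PROMPT = ("Extract all advertisements and return their bounding boxes.\n"
          "The original size of the page is {width} x {height} pixels.")

rendered = PROMPT.format(width=2480, height=3508)  # e.g. an A4 scan at 300 dpi
print(rendered)
```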
| Fuzzy Score | F1 micro | F1 macro | Micro Precision | Micro Recall | Instances | TP | FP | FN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
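Micro precision and recall are computed from the pooled true positives (TP), false positives (FP), and false negatives (FN) in the table above. This run reports only n/a, so the numbers below are made-up examples; returning `None` mirrors the table's n/a when a denominator is zero.

```python
from typing import Optional

def micro_precision(tp: int, fp: int) -> Optional[float]:
    """Precision = TP / (TP + FP); None when there are no positive predictions."""
    return tp / (tp + fp) if (tp + fp) else None

def micro_recall(tp: int, fn: int) -> Optional[float]:
    """Recall = TP / (TP + FN); None when there are no ground-truth instances."""
    return tp / (tp + fn) if (tp + fn) else None

print(micro_precision(8, 2))  # 0.8
print(micro_recall(8, 4))     # 0.666...
```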
| Pricing Date: n/a, n/a | Tokens: 52.3K input + 3.4K output = 55.7K total | Cost: $0.105 + $0.041 = $0.145 |
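The cost in the pricing line is the input-token cost plus the output-token cost. A sketch of that calculation, assuming hypothetical per-million-token rates chosen only to reproduce the shape of the line above (real provider prices are not stated in this report):

```python
def run_cost(input_tokens: int, output_tokens: int,
             in_rate_per_m: float, out_rate_per_m: float) -> float:
    """Total run cost in dollars from token counts and per-million-token rates."""
    return (input_tokens / 1e6) * in_rate_per_m + (output_tokens / 1e6) * out_rate_per_m

# Hypothetical rates: $2.00 per 1M input tokens, $12.00 per 1M output tokens.
cost = run_cost(52_300, 3_400, in_rate_per_m=2.0, out_rate_per_m=12.0)
print(f"${cost:.3f}")  # $0.145
```

Note that per-component rounding (here $0.1046 and $0.0408) explains why a displayed sum can differ from adding the two rounded components.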
Tags: century: 20 · document-type: newspaper-page · language: en · layout: prose, columns · script: latin · task: document-understanding · writing: printed

| Field | Value |
| --- | --- |
| Provider | openai |
| Model | gpt-5.4-2026-03-05 |
| Temperature | 1.0 |
| Dataclass | MagazinePage |
| Normalized Score | 70.90 % |
| Test time | unknown seconds |
Prompt: Extract all advertisements and return their bounding boxes. The original size of the page is {width} x {height} pixels.

Result: no valid result
| Fuzzy Score | F1 micro | F1 macro | Micro Precision | Micro Recall | Instances | TP | FP | FN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| Pricing Date: n/a, n/a | Tokens: 136.7K input + 2.2K output = 138.9K total | Cost: $0.342 + $0.034 = $0.375 |