
PANGAEA


The AI4EO challenge series is back – this time with a long-term, open-format challenge built around PANGAEA, a cutting-edge benchmark dataset that lets researchers, scientists, and developers test their machine learning models on applications such as land cover classification, change detection, and environmental monitoring.

PANGAEA is designed as an evaluation protocol that covers a diverse set of datasets, tasks, resolutions, sensor modalities, and temporalities. It also targets the benchmarking of Geospatial Foundation Models (GFMs) – a growing trend in AI and a key strategic capability. By supporting PANGAEA and similar benchmarks, AI4EO aims to advance the state of the art in Earth observation research.

This challenge is open-ended, giving participants the flexibility to explore, experiment, and iterate on their models over time. Alongside the open challenge, we’ll be launching regular Data Sprints – short, high-impact tasks that run for about two months each and focus on specific use cases within the PANGAEA dataset. These sprints will come with their own goals, deadlines, and prizes.

Whether you’re looking to dive deep into benchmarking or tackle targeted geospatial tasks, this challenge is built to support both.


What is PANGAEA?

PANGAEA is a highly curated, comprehensive benchmark dataset for Earth observation (EO). It’s designed to evaluate the performance of machine learning models across a broad range of real-world geospatial tasks such as:

  • Land cover classification

  • Change detection

  • Environmental monitoring

  • Multi-sensor and multi-temporal analysis

What makes PANGAEA unique is its diversity and structure: it covers a wide spectrum of resolutions, sensor types, and temporal layers. It also provides a standardized protocol for evaluating model performance – crucial for comparing results across researchers, institutions, and AI approaches.
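To make the idea of a standardized evaluation protocol concrete: segmentation-style tasks such as land cover classification are commonly scored with mean intersection-over-union (mIoU). The sketch below is purely illustrative – the function name and toy data are ours, not PANGAEA's actual evaluation code.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union for a semantic segmentation map.

    Classes absent from both prediction and ground truth are skipped
    so they do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class appears nowhere: skip it
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy 2x2 "land cover" maps with two classes (0 and 1)
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # ≈ 0.583
```

Agreeing on one such metric implementation – rather than each team computing scores slightly differently – is exactly what makes leaderboard results comparable across researchers and institutions.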

This benchmark is also tightly aligned with the rapidly emerging field of Geospatial Foundation Models (GFMs) – a new generation of models capable of generalizing across EO tasks and datasets. PANGAEA is designed to test and refine these models.


An open challenge with a competitive twist

The PANGAEA challenge runs as a permanently open competition. This means:

  • No fixed end date

  • Continuous submission and evaluation of your models

  • A live leaderboard to track performance and compare approaches

  • Full access to the dataset via EOTDL’s cloud workspace or Python/CLI tools

  • Optional model sharing for transparency and reproducibility

Use the challenge as your personal benchmarking sandbox, or aim for top leaderboard positions—it’s up to you.


Buckle up and get ready for the sprints: short tasks, big wins!

Embedded in the open challenge are Data Sprints: high-intensity, 2-month mini-challenges that call for sharp thinking and targeted solutions.

Each sprint:

  • Focuses on a specific real-world task using the PANGAEA dataset (e.g., identifying changes over time in a specific region or classifying a particular land type).

  • Has clear goals and metrics.

  • Comes with its own prize pool and recognition opportunities.

  • Encourages practical application of your skills and quick iteration.

Sprints are ideal for teams looking to make a splash, try something new, or just have fun competing under time pressure. Keep an eye on the AI4EO platform for upcoming sprint announcements!


“All you need is love tools” – we’ve got them!

This challenge is hosted on the AI4EO Challenges Platform and integrated with the Earth Observation Training Data Lab (EOTDL)—giving you access to a powerful environment for developing and evaluating your geospatial models.

You’ll get:

  • Full dataset access: PANGAEA and its subsets are already staged and ready for cloud-based or local access.

  • Standardized evaluation: Submit your metrics for automatic scoring and leaderboard ranking.

  • Leaderboard tracks: Compete across different tasks, modalities, or data subsets.

  • Community and visibility: Engage with an international network of EO and AI researchers and get visibility at major events like the ESA-NASA Workshop on AI Foundation Models for EO (May 2025).


Recognition, prizes, community!

Beyond the leaderboard, participants will gain exposure across the global Earth observation and AI communities. Winners of Data Sprints will receive monetary prizes and the opportunity to showcase their work to experts at ESA, NASA, and beyond.

Your results may also be featured at conferences and workshops focused on AI for EO.


Jump in – anytime!

The PANGAEA Benchmark Challenge is live and ongoing.

Click below to access the challenge, explore the dataset, and start benchmarking your models. Whether you’re here to push the limits of geospatial AI or take part in fast-paced sprints, there’s a place for you in the challenge.

Start benchmarking. Join the sprints. Help shape the future of EO.

→ Oh, and of course: stay tuned for the first Data Sprint – more info coming soon!

Why join?

Benchmark the future of geospatial AI

PANGAEA sets the stage for standardized evaluation—test your models against a diverse and curated EO dataset and see how they measure up on the global leaderboard.

Level up your skills in an open challenge

Join the PANGAEA open challenge anytime to explore powerful datasets, refine your models, and grow your expertise in geospatial AI at your own pace.

Sprint to win with focused challenges

Take part in regular Data Sprints—targeted, two-month mini-challenges using the PANGAEA dataset, offering real-world tasks and the chance to win exciting prizes.
