
EASi Workshop // HYPERVIEW2


Make AI in space make sense

#HYPERVIEW2 // EASi // ECAI

As we increasingly rely on AI for satellite operations, Earth observation, and mission-critical decisions, the question isn’t just what AI predicts, but why. EASi and HYPERVIEW2, by KP Labs, MI² (Warsaw University of Technology), Poznan University of Technology, and ESA Φ-Lab, tackle the frontier of Explainable AI (XAI) for space, spotlighting the urgent need for trust, clarity, and accountability in high-stakes environments.

From understanding atmospheric changes to operating spacecraft autonomously, AI decisions impact real missions and real lives. Without transparency, we risk losing trust in these systems, especially when lives, missions, or environmental decisions are on the line.

With HYPERVIEW2, we’re challenging researchers, engineers, and data scientists to explore new ways of making AI interpretable and explainable for satellite data processing, space operations, and Earth monitoring.

Challenge? Workshop? Both!

The new HYPERVIEW2 Challenge is run as part of the Explainable AI in Space (EASi) workshop, which KP Labs, ESA, and the AI4EO team are organising at the European Conference on Artificial Intelligence (ECAI), hosted in Bologna, Italy, from 25th to 30th October 2025.

Participants are welcome to engage in multiple ways: you can take part in the challenge only, submit a paper to the workshop only, or combine both for a deeper contribution. Submitting a paper is not required to join the challenge, and vice versa, though we strongly encourage challenge participants to share their methods and insights through a workshop submission. This flexible format allows researchers, engineers, and AI practitioners to participate in the way that best suits their expertise and goals. Whether you’re aiming to publish, to compete, or to do both, EASi and HYPERVIEW2 provide a shared platform to advance the frontiers of trustworthy, explainable AI in space applications.

 

Transparency beyond the stratosphere

As space exploration and Earth observation increasingly rely on artificial intelligence (AI) for mission-critical tasks such as satellite operations, data analysis, and decision-making, ensuring the reliability, interpretability, and accountability of these AI systems becomes paramount. The EASi workshop addresses a critical intersection between Explainable Artificial Intelligence (XAI) and space applications, with broad relevance to research and industrial communities well beyond the traditional AI field. We promote methods that make AI models more explainable and transparent, particularly in safety-critical, high-stakes environments like space, targeting topics with far-reaching implications across sectors:

  • Aerospace: The safety, reliability, and transparency of AI in space applications impact both governmental space agencies and private-sector efforts. Ensuring that AI-driven systems can be trusted, explained, and verified in mission-critical environments is crucial to avoiding catastrophic failures and ensuring public trust.

  • Earth Observation and Environmental Sciences: With AI systems playing an increasing role in analyzing vast amounts of satellite imagery and data to monitor climate change, disasters, and ecosystem health, explainability and interpretability are essential for accurate, reliable insights. Transparency in AI decisions allows scientists, policymakers, and stakeholders to trust and act on AI-generated insights. Decision Intelligence further enhances this process by integrating AI-driven insights with human expertise, enabling more informed and effective decision-making in addressing environmental challenges.

  • AI Research Community: The methods and techniques discussed are applicable beyond space. They offer valuable insights for improving AI transparency in other fields such as finance, healthcare, robotics, and automotive industries, where trust in AI decisions is critical.

 

The topics of interest at EASi include, but are not limited to: 

“Where XAI meets space”

  • Counterfactual explanations in Space applications 
  • Presentation and interpretation of AI explanations in Space applications 
  • Explainable on-board AI for critical and non-critical Space applications 
  • Safety of AI models for Space applications (Earth observation, mission-critical) 
  • Explaining AI models for Earth observation and satellite operations 
  • Techniques for assessing the quality of explanations in Space applications 
  • Physics-aware AI for Earth observation and satellite operations 
  • Trustworthiness of AI systems in Space applications 
  • Realistic generative AI models for optical and radar Earth observation 
  • Verification and validation of AI in Space applications 
  • AI-specific Cal/Val paradigms 
  • XAI methods for explaining adversarial attacks in Space 
  • Theoretical bounds for AI methods in Space applications 
  • XAI methods for signal and image processing for Space applications 
  • Exploring the potential impact of incorrect explanations in Space applications 
  • XAI methods for risk assessment in critical Space applications 
  • Multimodal XAI methods for Space applications 
  • XAI methods for explaining federated and continual learning in Space 
  • Simple, fast, low-power, low-bit-rate AI for Space applications 
  • XAI methods for privacy-preserving Space systems 
  • Non-technical presentation of AI explanations in Space operations 
  • XAI methods for model governance in Space applications 
  • On-board explainable AI for Earth observation and satellite operations 
  • XAI methods for situational awareness in Space 
  • Optimization of AI models using XAI for Space applications 
  • XAI methods for ensuring algorithmic transparency in Space applications 

 

“XAI techniques and tools for (not only) space”

  • Benchmarking of XAI systems 
  • Robustness and faithfulness of explanations 
  • Dataset-centric explanations 
  • XAI for time-series analysis approaches 
  • Explaining bias and fairness of XAI systems 
  • XAI methods for estimating AI models’ confidence 
  • Resource utilization and resource frugality of on-board XAI methods  
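
To make the scope above concrete, here is a minimal, model-agnostic sketch of one such technique: occlusion-based saliency for a hyperspectral patch classifier. Everything in it is a hypothetical stand-in (the random "model", its weights, and the toy data cube are not from the challenge); the point is only the explanation pattern of masking spatial regions and measuring the resulting drop in the model's score.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical stand-in for a trained classifier: a fixed linear scorer.
    weights = rng.normal(size=(32, 32, 8))

    def model_score(cube):
        # Scalar class score for one (height, width, bands) hyperspectral cube.
        return float((cube * weights).sum())

    def occlusion_saliency(cube, patch=8):
        # Mask each spatial patch in turn and record the score drop:
        # a simple, model-agnostic map of which regions drive the prediction.
        base = model_score(cube)
        h, w, _ = cube.shape
        saliency = np.zeros((h // patch, w // patch))
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                occluded = cube.copy()
                occluded[i:i + patch, j:j + patch, :] = cube.mean()
                saliency[i // patch, j // patch] = base - model_score(occluded)
        return saliency

    cube = rng.normal(size=(32, 32, 8))       # toy 32x32 scene with 8 spectral bands
    print(occlusion_saliency(cube).round(2))  # positive = patch supports the class

Occlusion is among the simplest XAI baselines; the faithfulness and robustness of such explanations, and their cost on resource-constrained on-board hardware, are exactly the kinds of questions the topics above invite.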

 

 

Program

The full EASi program will be announced soon.

 

Keynote speakers

The keynote speakers will be announced soon.

 

Important dates

Please note that there are deadlines to meet both for your participation in the EASi workshop and for the HYPERVIEW2 Challenge.

EASi paper submission deadlines
  • Paper submission deadline: 26th June 2025
  • Notification to authors: 26th July 2025 
  • Camera-ready deadline: TBA (likely mid-August 2025)
  • EASi finalized program: TBA (likely end of August 2025)
  • EASi dates: 25th–26th October 2025 
HYPERVIEW2 Challenge deadlines
  • Challenge launch: 25th April 2025
  • Online workshop (introduction to HYPERVIEW2): date to be announced soon
  • XAI checkpoint¹: 16th June 2025
  • Reproducibility check (top 10)²: 16th June – 11th July 2025
  • Release of top-performing models to other teams for XAI: 14th July 2025
  • Challenge closing: 31st August 2025
  • Award ceremony (during the EASi workshop!): 25th–26th October 2025 


¹ Checkpoint to select the top-5 best-performing models for XAI.

² Verification of the reproducibility of the top-10 best models. The top-10 best-performing teams will be asked to submit their Jupyter notebooks with their models.

Prizes for the HYPERVIEW2 Challenge

 

  • 1st Place: 2000 EUR + 1-month access to Leopard DPU by KP Labs through Smart Mission Lab to benchmark the models on flight hardware with KP Labs’ support (+ an option to widely promote the benchmarking results through KP Labs’ social media) + diploma
  • 2nd Place: 1000 EUR + 1-month access to Leopard DPU by KP Labs through Smart Mission Lab to benchmark the models on flight hardware + diploma
  • 3rd Place: 500 EUR + 1-month access to Leopard DPU by KP Labs through Smart Mission Lab to benchmark the models on flight hardware + diploma

 


Meet the team

Organizers and Chairs 

Przemysław Biecek
Warsaw University of Technology, MI².AI
Poland

 

Marek Kraft
Poznan University of Technology,
Poland

 

Nicolas Longépé
Φ-Lab, European Space Agency
Italy

 

Jakub Nalepa
Silesian University of Technology, KP Labs
Poland

 

Evridiki Ntagiou
European Space Operations Centre, European Space Agency
Germany

 

 

Lukasz Tulczyjew
KP Labs, Silesian University of Technology
Poland

 

Agata M. Wijata
Silesian University of Technology, KP Labs
Poland

 

Technical and Scientific Co-Chairs

Przemyslaw Aszkowski
Poznan University of Technology
Poland

 

Hubert Baniecki
University of Warsaw
Poland

 

Gabriele Cavallaro
University of Iceland, Forschungszentrum Jülich
Germany

 

Mihai Datcu
POLITEHNICA Bucharest
Romania

 

Nataliia Kussul
University of Maryland
United States of America

 

Tymoteusz Kwieciński
Warsaw University of Technology
Poland

 

Ribana Roscher
Forschungszentrum Jülich, University of Bonn
Germany

 

Bogdan Ruszczak
Opole University of Technology
Poland

 

Vladimir Zaigrajew
MI².AI
Poland

 

Scientific Committee 

  • Agata Wijata, Silesian University of Technology, KP Labs 
  • Andrew Macdonald, Mission Control 
  • Andrei Anghel, POLITEHNICA Bucharest 
  • Begüm Demir, TU Berlin, BIFOLD 
  • Diego Valsesia, Politecnico di Torino 
  • Eduardo Soares, IBM Research – Brazil 
  • Enrico Magli, Politecnico di Torino 
  • Evgenios Tsigkanos, OHB Hellas 
  • Evridiki Ntagiou, European Space Agency (ESA) 
  • Fabio Del Frate, University of Rome “Tor Vergata” 
  • Francesco Spinnato, University of Pisa 
  • Gabriele Cavallaro, Forschungszentrum Jülich, University of Iceland 
  • Gianluca Valentino, University of Malta 
  • Hubert Baniecki, University of Warsaw 
  • Jakub Nalepa, Silesian University of Technology, KP Labs 
  • Jocelyn Chanussot, INRIA 
  • Jon Alvarez Justo, Norwegian University of Science and Technology, ESA 
  • Krzysztof Kotowski, KP Labs 

 

  • Leonardo De Laurentiis, European Space Agency (ESA) 
  • Mahtab Sarvmaili, Dalhousie University 
  • Mattia Setzu, University of Pisa 
  • Mihai Datcu, POLITEHNICA Bucharest 
  • Nataliia Kussul, University of Maryland 
  • Nicolas Longépé, European Space Agency (ESA) 
  • Nikolaos Dionelis, European Space Agency (ESA) 
  • Peter Naylor, European Space Agency (ESA) 
  • Plamen Angelov, Lancaster University 
  • Przemysław Biecek, Warsaw University of Technology 
  • Roberto Camarero, European Space Agency (ESA) 
  • Roberto Del Prete, University of Naples Federico II 
  • Samantha Lavender, Telespazio UK 
  • Sebastian Lapuschkin, Fraunhofer HHI 
  • Thomas Brunschwiler, IBM Research – Europe 
  • Vincenzo Lomonaco, University of Pisa, ContinualAI 
  • Wojciech Samek, TU Berlin, BIFOLD, Fraunhofer HHI 
  • Xiaoxiang Zhu, Technical University of Munich 

Why should you join?

Drive clarity in the cosmos

Redefine what explainability means for real-world, mission-critical AI systems in space.

Merge ethics and engineering

Design AI that works, and makes sense to humans.

Build trust from the ground up

Shape the future of responsible AI in space, Earth observation, and beyond.

Initiated by

Implemented by