1st IJCAI Workshop on Safe Physical AI
IJCAI/ECAI 2026
Bremen, Germany
Date: 15–17 August 2026 (2 days; exact days TBD)
About the Workshop
Recent progress in deep learning and robot learning has brought the vision of autonomous, embodied systems operating in real-world environments significantly closer to reality. Robots and other physically situated AI systems now demonstrate strong generalization, contact-rich manipulation, and long-horizon reasoning capabilities. As these systems increasingly operate in human environments, ensuring both physical and cognitive safety becomes an urgent research priority.
This workshop focuses on Safe Physical AI: the study and development of intelligent systems that can act safely and reliably in the real world. We seek to identify fundamental challenges in verification, interpretability, uncertainty quantification, human–robot interaction, and alignment for physical AI systems. The workshop will unite researchers from robotics, deep learning, AI safety, planning, and AI governance to develop a shared conceptual and methodological foundation for safe embodied intelligence. By fostering interdisciplinary dialogue and mapping concrete research directions, the workshop aims to advance the scientific and ethical foundations required for trustworthy physical AI.
Key Objectives
- To foster cross-disciplinary exchange between researchers in robotics, AI safety, machine learning, planning, control, ethics, and AI governance, encouraging collaboration and shared methodologies for safety assurance in embodied systems.
- To define challenges and opportunities in developing safe, reliable, and interpretable embodied agents by analysing concrete safety and alignment issues in real-world interaction, manipulation, and long-horizon autonomy.
- To develop a shared understanding of fundamental concepts, risk taxonomies, and evaluation strategies for assessing the safety and trustworthiness of embodied AI.
- To investigate how AI methods—including reinforcement learning, uncertainty-aware models, causal reasoning, and formal verification—can contribute to safe decision-making and control under uncertainty.
- To identify gaps in current architectures and design methodologies for engineering embodied agents that meet rigorous safety, robustness, and ethical requirements.
Call for Papers
This workshop invites submissions that develop methods, theory, empirical insights, benchmarks, and conceptual frameworks for Safe Physical AI: real-world intelligent systems that act safely and reliably under uncertainty. We welcome technical, empirical, theoretical, and position papers.
We welcome submissions of work that has been previously published or is under review at other venues. Authors should indicate this at submission time. Such submissions will be evaluated based on their relevance to the workshop themes.
Submission Formats
We accept short papers (4 pages excl. references) as well as extended abstracts (1 page excl. references). All submissions must use the two-column IEEE template. Authors of four-page papers will present their work in a seven-minute talk, with an additional seven minutes for questions and discussion. Authors of extended abstracts will present their work in two-minute lightning talks. Authors of short papers and extended abstracts are strongly encouraged to submit a poster and participate in the mentoring program. We strongly encourage interdisciplinary submissions, early-stage research papers, and contributions from graduate students.
Important Dates
| Submissions open | March 26, 2026 |
| Submission deadline | May 7, 2026, 23:59 CET |
| Author notification | June 7, 2026 |
Submit via OpenReview: openreview.net/group?id=ijcai.org/IJCAI-ECAI/2026/Workshop/SPAI
Relevant Topics
- Safe decision making and action selection for embodied agents
- Uncertainty quantification, risk estimation, and probabilistic verification
- Interpretability of policies and world models, including mechanistic and behavioral approaches
- Task specification, alignment, and intent understanding for physical agents
- Monitoring and evaluation of untrusted models or untrusted hardware
- Runtime verification, safety shields, and control-theoretic safety mechanisms
- Bridging classical robot safety and statistical learning: safety beyond toy domains
- Safe operation in social, contextual, and multi-agent environments
- Stress testing, benchmarking, and evaluation methodologies
- Contributions from ethics, law, and philosophy to physical AI safety
- Interdisciplinary perspectives on risk taxonomies, safety vocabularies, and conceptual foundations
- Hardware and computational design choices for safety in physical AI
- Safe autonomy in real-world cyber-physical systems, including aerial, mobile, and manipulation platforms
Invited Speakers
George J Pappas
University of Pennsylvania
Machine Learning for Safe and Secure Cyber-Physical Systems
Interactive Formats
The workshop prioritizes active participation and knowledge exchange through multiple interactive formats designed to engage attendees across career stages and disciplines.
Poster Session
Poster sessions provide dedicated venues for early-career researchers to present their work and receive feedback from the broader community. Authors will submit one-page extended abstracts describing their research, with each presenter delivering a two-minute lightning talk to preview key ideas and encourage focused discussions at their poster. Authors of four-page short papers will also be given the opportunity to present a poster. Poster sessions are strategically scheduled during coffee breaks and between invited talks and panel discussions to maximize attendance and foster informal interactions.
Mentoring Program
To support early-career researchers and enhance inclusivity, we implement a structured mentoring program that pairs poster presenters with invited speakers or senior researchers based on shared interests. Authors can opt into the program when submitting extended abstracts or short papers. Mentors provide feedback on poster drafts and prioritize visiting their assigned mentees' posters during the session. Dedicated mentoring sessions during breaks facilitate discussions about research challenges, career development, and potential collaborations.
Panel Discussion
The panel discussion will convene invited speakers for a moderated conversation examining challenges and advancements in safe physical AI from multiple disciplinary perspectives. Attendees can submit questions through an online platform where others can vote to prioritize the most pressing topics.
Program Committee
- Prof. Michael Fisher (University of Manchester) — Verification of Autonomous Systems; Trustworthy AI
- Prof. Michael Beetz (University of Bremen) — Knowledge Representation and Reasoning; Robot Cognitive Architectures
- Prof. Masoumeh Mansouri (University of Birmingham) — Hybrid Robot Intelligence; Social Robotics
- Dr. Michaela Kümpel (University of Bremen) — Knowledge Systems and Human-Robot Interaction
- Dr. Maciej Zajac (Polish Academy of Sciences) — Ethics of Autonomous Weapons Systems
- Prof. Rania Rayyes (Karlsruhe Institute of Technology) — Developmental Robots, Dexterous Robots, Industrial Robotics
- Prof. Darko Katic (Stuttgart Technical University of Applied Sciences) — Knowledge Representation and Reasoning; Surgical Robotics
- Dr. Justin Shenk (Redwood Research) — Evaluations and risk assessments for frontier AI models
- Prof. Barbara Hammer (University of Bielefeld) — Robust, Data-parsimonious, Fair Machine Learning
- Prof. Daniel Neider (TU Dortmund) — Safety Verification of AI systems
- Prof. Vera Schmitt (TU Berlin) — Trustworthy and Responsible NLP
- Prof. Julius Schöning (Osnabrück UAS) — Intersections with Legal Issues
- Dr. Tim Schrills (University of Lübeck) — Psychology, AI Act
- Prof. Peter Fettke (DFKI) — Human-Centric AI, Economics
- Prof. Hanna Drimalla (University of Bielefeld) — Multi-Modal Sensors, Human-Centric AI
- Max Kroker (Independent Lawyer) — Law of AI
- Dr. Daniel Winteler (Fraunhofer Society) — Law of AI
Organizers
Benjamin Alt
University of Bremen, Germany / Robotics Institute Germany (RIG)
Technical Director at the AICOR Institute for Artificial Intelligence and co-founder of AICOR Solutions, a startup for safe robot intelligence.
Primary Contact: benjamin.alt@uni-bremen.de
Nico Hochgeschwender
University of Bremen, Germany / Robotics Institute Germany (RIG)
Professor of Software Engineering for Cognitive Robots and Systems, Co-Speaker of the Robotics Institute Germany's Thematic Cluster "Safety, Reliability, and Resilience of AI-enabled Robotics".
Benjamin Paaßen
University of Bielefeld, Germany
Junior Professor for Knowledge Representation and Machine Learning.
Karinne Ramirez Amaro
Chalmers University of Technology, Gothenburg, Sweden
Associate Professor in the Electrical Engineering Department.
Alex Robey
Carnegie Mellon University, Pittsburgh, USA
Postdoctoral fellow in the Machine Learning Department.
Nathan Wood
Hamburg University of Technology, Germany / Ethics + Emerging Sciences Group, California Polytechnic State University San Luis Obispo, USA
Junior Research Group Leader at the Institute for Air Transportation Systems, heading the interdisciplinary project "Military Defense Technologies and Ethics".
Preliminary Schedule
Day 1
| 09:45–10:00 | Opening remarks |
| 10:00–10:30 | Invited Talk 1 |
| 10:30–11:30 | Short Paper Session 1 (4 papers × 14 min) |
| 11:30–11:45 | Poster Lightning Talks 1 (6–7 posters × 2 min) |
| 11:45–12:45 | Lunch |
| 12:45–13:15 | Invited Talk 2 |
| 13:15–14:15 | Short Paper Session 2 (4 papers × 14 min) |
| 14:15–14:40 | Poster Lightning Talks 2 (6–7 posters × 2 min) |
| 14:40–16:00 | Coffee break and Poster Session |
| 16:00–16:30 | Invited Talk 3 |
| 16:30–17:10 | Plenary discussion / Q&A |
| 17:10–18:00 | Panel introduction / informal networking |
Day 2
| 09:00–09:30 | Invited Talk 4 |
| 09:30–10:30 | Short Paper Session 3 (4 papers × 14 min) |
| 10:30–11:00 | Poster Lightning Talks 3 (8–10 posters × 2 min) |
| 11:00–11:30 | Invited Talk 5 |
| 11:30–12:30 | Lunch |
| 12:30–13:30 | Short Paper Session 4 (4 papers × 14 min) |
| 13:30–15:30 | Poster Session and mentoring activities |
| 15:30–16:30 | Panel discussion / Q&A |
| 16:30–17:00 | Wrap-up / informal networking |
Contact
For inquiries, please contact: benjamin.alt@uni-bremen.de