We invite applications for a postdoctoral researcher to contribute to the EdgeAI-Trust project. EdgeAI-Trust aims to develop and implement decentralized Edge AI technologies to address key challenges facing Europe’s industrial and societal sectors, such as energy efficiency, system complexity, and sustainability. By developing trustworthy, domain-independent collaborative AI architectures and the underlying HW/SW edge AI technologies, the project will promote large-scale edge AI solutions that enable upgradeability, reliability, safety, security, and interoperability, with a focus on explainability and robustness. This will increase the trustworthiness and societal acceptance of AI in a dynamic zero-trust environment. EdgeAI-Trust will develop toolchains with standardized interfaces for optimizing and validating edge AI solutions in heterogeneous computing systems, measuring the Operational Design Domain (ODD) coverage of training data, and complementing edge AI training data. EdgeAI-Trust will also establish sustainable impact by building open edge AI platforms and ecosystems, with a focus on standardization, supply chain integrity, environmental impact, benchmarking frameworks, and support for open-source solutions.
Key Duties
As a postdoctoral researcher on the team, you will lead BSC’s activities in the EdgeAI-Trust project, which include the following:
1. Contribute to the development of AI-driven hardware and software Operational Design Domain (ODD) awareness solutions that enable robust, safe, and trustworthy operation at the edge (ODD awareness is a key success factor for safe and reliable operation at the edge, regardless of the domain).
2. Contribute to the definition of requirements of an AI-based in-vehicle distributed single-camera object detection system and an AI-based in-vehicle distributed predictive maintenance system.
3. Devise appropriate AI architectures leveraging AI features such as explainability, interpretability, and traceability, and leveraging AI ensembles to meet safety requirements in line with standards such as ISO 26262, IEC 61508, ISO KDT-IA-FT3, ISO 5469, and ISO/AWI PAS 8800, among others. These architectures will aim to make failure risk residual by construction whenever possible, and will provide appropriate observability and controllability means, as well as support for safety measures, so that safety cases can be built on top of them.
4. Contribute to the exploration of AI acceleration solutions at the hardware level, with emphasis on safety-related design aspects of those accelerators, such as the efficient realization of diversity and ensembles. Accelerators will be deployed on an FPGA-based RISC-V platform and integrated into TEB’s modular platform along with a COTS performance module for efficient orchestration.
5. Contribute to overall aspects relating to safety cases, influencing the overall design, control and data flow, and adherence to specific properties (e.g., whether diverse redundant components exhibit sufficient independence).
Requirements
Education
BSc in Computer Science.
MSc in Advanced Computing.
PhD (completed or in its final stages) in the safety of AI software in critical embedded domains, such as automotive.
Essential Knowledge and Professional Experience
Recognized expertise in AI software for critical domains
Previous knowledge of high-performance hardware for critical embedded systems
Additional Knowledge and Professional Experience
Previous knowledge of middleware for real-time systems (ROS2, CyberRT, …)
Proficiency in English
Competences
Ability to conduct independent research, develop innovative solutions, and take initiative
Strong analytical and problem-solving skills
Collaborative mindset and the capacity to work effectively in interdisciplinary teams
Excellent verbal and written communication skills