The Institute for AI Safety provides research and development services in the field of AI-related methods, processes, algorithms and execution environments. The focus is on ensuring the operational safety and attack resistance of AI-based solutions in demanding application classes. "Safety and security by design" is a central aspect here, which also helps to fulfil the requirements of safety-critical applications that will be based on AI in the future or use AI-based components as solution modules.
Our mission
The department for AI platforms investigates and develops novel AI architectures for challenging hardware and software platforms used to train and execute AI methods, such as embedded systems and quantum computers.
Your contribution
As part of our interdisciplinary team, you will investigate innovative AI methods and ensure their safe and secure implementation. In detail, this covers the following tasks:
- Development of efficient AI methods, e.g. neuromorphic AI
- Deployment of efficient AI components in applications, in particular in mobility and edge AI
- Evaluation of the safety and capabilities of the developed AI methods with respect to the application
- Research on cybersecurity aspects of the novel AI methods (e.g. security evaluation of the developed approaches)
Your experience
- University degree in computer science, physics, mathematics or similar fields
- Extensive knowledge of several areas of AI / machine learning
- Experience with data encoding
- Knowledge of cybersecurity
- Experience with AI-based security applications
- Knowledge of at least one relevant programming language
If applicable, it is possible to pursue a PhD in this position.
We look forward to getting to know you!
If you have any questions about this position (Vacancy-ID 1930), please contact:
Hans-Martin Rieser
Tel.: +49 731 400198 306