Can you force AI to be "good" – and get a PhD in the process?


Do you enjoy machine learning and data science, but are also
concerned about the (already visible as well as potential) dangers of
AI? Do you want to contribute to an emerging area of research that
aims to make AI “safe” so that people can trust it? Would you like to
work in a heterogeneous team that allows – and sometimes forces – you
to challenge long-cherished beliefs about how to do things, and to
venture into new areas of knowledge?


If your answers are “yes”, you may want to apply for a fully funded,
up-to-4-year PhD position in VeriLearn
(https://dtai.cs.kuleuven.be/projects/verilearn). In this project, we
(DTAI researchers from KU Leuven, https://dtai.cs.kuleuven.be/,
together with colleagues from the University of Namur and the ULB
Brussels) are investigating the questions above by bringing together
expertise from machine learning and software verification. In a
nutshell, we aim to identify how learning AI systems can be
guaranteed to behave, and to keep behaving, in safe ways.


In the subproject advertised here, led by Prof. Bettina Berendt
(https://people.cs.kuleuven.be/~bettina.berendt/), key VeriLearn
questions are themselves questioned. We aim to investigate topics
such as (a) which risks (to “safety”) we should address in the first
place, (b) to what extent these risks lend themselves to
formalization and verification, and (c) what we, as computer
scientists and AI/ML/data scientists, can do to enhance the safety of
AI when we cannot verify it. The overall goal is to develop concepts,
methods, tools, and processes that help people understand and improve
the “safety” of AI systems. Key risks to be addressed include the
challenges that AI, machine learning, and data science pose to
fairness, accountability, and transparency. This makes the subproject
part of a larger research context in which we explore the social and
ethical challenges of data science and possible answers to them.


WHO can apply? People who have, or will obtain in the coming months,
a Master's degree in Computer Science or AI (if your Master's degree
is from a closely related discipline, additional constraints may
apply). You have strong grades, a background in topics related to
this subproject, and a keen interest in the ethical questions around
AI, and you enjoy reading and thinking beyond the purely
technical/mathematical.


HOW to apply? Please send an email to bettina.berendt@cs.kuleuven.be
with a motivation letter that includes a short description of your
Master's thesis project (ca. 1 page), as well as your certificates
and transcripts. Longlisted candidates will then be invited to an
interview (on site or via Skype), and shortlisted candidates will be
asked to complete a task and take part in a second interview.


WHEN to apply? Now. The position will be filled as soon as we have
found a suitable candidate.


WHO to ask if you have questions about this position?
bettina.berendt@cs.kuleuven.be