Machine learning (ML) is currently advancing in many areas, e.g., image recognition, autonomous driving, medical applications such as tumor diagnosis, and security tasks such as network intrusion detection.
With the increasing deployment of ML systems, a number of security, privacy, and functional challenges arise in the design and implementation of the underlying algorithms and systems. Security threats include trojaned neural networks that misbehave in certain situations, and attacks that disturb the training process of neural networks. Other attacks threaten the privacy of ML, e.g., by reconstructing the training data from a trained ML model. Furthermore, fairness aspects need to be considered to prevent an ML model from discriminating against humans, e.g., in judicial applications.
For our research projects investigating open challenges in ML and developing effective mitigation approaches, we are looking for excellent student assistants who are motivated to take part in ongoing research in these areas.
Depending on your background and personal preferences, you will work on one of the projects in the aforementioned areas.
If you are intrigued by this cutting-edge subject, please get in touch with us at info@trust.tu-darmstadt.de for further information. To speed up the process, please include a summary of your academic background and a copy of your transcripts.
Good knowledge of computer security and privacy as well as deep learning
Experience with Python
Recommended: experience with ML libraries in Python, e.g., PyTorch or TensorFlow
Good analytical capabilities
Motivation and ability to work independently as well as in a team
Job details
Your employment scope
Part-time (fixed-term)
Your salary
By arrangement
Your workplace
On site
Your office
Darmstadt area