Postdoc in Security for Neural Architectures at the University of Manchester, UK

We are excited to announce a new project, EnnCore (End-to-End Conceptual Guarding of Neural Architectures), funded as part of the UKRI call “Security for all in an AI-enabled society”. EnnCore will run from 2021 to 2024 as a collaboration between The University of Manchester, The University of Liverpool, the digital Experimental Cancer Medicine Team (dECMT), and Urbanchain.

EnnCore will address a fundamental security problem in neural-based (NB) architectures by allowing system designers to specify and verify a conceptual/behavioral hardcore for the system, which can be used to safeguard NB systems against unexpected behavior and attacks. EnnCore seeks four postdoctoral research associates with expertise in one of the following areas:

– Automated Verification

– Foundations of Learning

– Reliability Assessment of Neural Networks

– Security of Machine Learning

– Explainable AI

– Neuro-Symbolic Models

– Safety Mechanisms in AI Systems

– Adversarial Methods

– Causal Inference

There are three positions available at the University of Manchester (UK) to start in February 2021. These are fixed-term positions for three years with a starting salary from £32,816. 

Further details about the project are available at: 

https://enncore.github.io/

Applications must be submitted via the following links:

For Safe and Explainable AI Architectures:

https://www.jobs.ac.uk/job/CBC665/research-associate-in-safe-and-explainable-ai-architectures

For Automated Verification for Neural Architectures:

https://www.jobs.ac.uk/job/CBD019/research-associate-in-automated-verification-for-neural-architectures

Deadline: September 15th, 2020.

For informal inquiries, please contact Lucas Cordeiro (lucas.cordeiro@manchester.ac.uk).