General information
Organisation
The French Alternative Energies and Atomic Energy Commission (CEA) is a key player in research, development and innovation in four main areas:
• defence and security,
• nuclear energy (fission and fusion),
• technological research for industry,
• fundamental research in the physical sciences and life sciences.
Drawing on its widely acknowledged expertise, and thanks to its 16,000 technicians, engineers, researchers and staff, the CEA actively participates in collaborative projects with a large number of academic and industrial partners.
The CEA is established in ten centers spread throughout France.
Reference
SL-DRT-25-0173
Direction
DRT
Thesis topic details
Category
Technological challenges
Thesis topics
Integrity, availability and confidentiality of embedded AI in post-training stages
Contract
PhD thesis
Job description
Against the backdrop of strong AI regulation at the European level, several requirements have been proposed for the 'cybersecurity of AI', and more particularly to increase the security of complex modern AI systems. Indeed, we are witnessing an impressive development of large models (so-called “Foundation” models) that are deployed at large scale and adapted to specific tasks on a wide variety of platforms and devices. Today, models are optimized to be deployed, and even fine-tuned, on constrained platforms (memory, energy, latency) such as smartphones and many connected devices (home, health, industry…).
However, securing such AI systems is a complex process, with multiple attack vectors targeting their integrity (fooling predictions), availability (degrading performance, adding latency) and confidentiality (reverse engineering, privacy leakage).
Over the past decade, the adversarial machine learning and privacy-preserving machine learning communities have reached important milestones by characterizing attacks and proposing defense schemes. These threats essentially focus on the training and inference stages. However, new threats are emerging related to the use of pre-trained models, their insecure deployment, and their adaptation (fine-tuning).
Moreover, additional security issues arise from the fact that the deployment and adaptation stages may be “on-device” processes, for instance with cross-device federated learning. In that context, models are compressed and optimized with state-of-the-art techniques (e.g., quantization, pruning, Low-Rank Adaptation) whose influence on security needs to be assessed.
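As an illustration of the kind of deployment-time transformation whose security impact would be studied, the minimal sketch below applies symmetric uniform post-training quantization to a toy weight matrix and measures the perturbation it introduces. It is written for this description only; all function and variable names are hypothetical and not part of the topic definition.

# Minimal sketch (illustrative only): symmetric uniform post-training
# quantization of a weight matrix, and the perturbation it introduces.
# Function and variable names are hypothetical.
import numpy as np

def quantize_symmetric(w: np.ndarray, n_bits: int = 8):
    """Quantize weights to signed n_bits integers (per-tensor scale)."""
    q_max = 2 ** (n_bits - 1) - 1          # e.g. 127 for int8
    scale = np.max(np.abs(w)) / q_max      # one scale for the whole tensor
    w_q = np.clip(np.round(w / scale), -q_max, q_max)
    return w_q.astype(np.int8), scale

def dequantize(w_q: np.ndarray, scale: float) -> np.ndarray:
    """Map integer weights back to the float domain."""
    return w_q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(128, 128)).astype(np.float32)

w_q, scale = quantize_symmetric(w, n_bits=8)
w_hat = dequantize(w_q, scale)

# The quantization error is exactly the kind of deployment-time deviation
# whose effect on robustness and confidentiality would need to be assessed.
print("max abs error :", np.max(np.abs(w - w_hat)))
print("mean abs error:", np.mean(np.abs(w - w_hat)))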
The objectives are:
(1) Propose threat models and risk analyses for the critical steps, typically model deployment and continuous training, in the deployment and adaptation of large foundation models on embedded systems (e.g., advanced microcontrollers with HW accelerators, SoCs).
(2) Demonstrate and characterize attacks, with a focus on model-based poisoning (a minimal illustration is sketched after this list).
(3) Propose and develop protection schemes and sound evaluation protocols.
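To make the notion of model-based poisoning in objective (2) more concrete, the following sketch shows how a single scaled malicious update can dominate a naive federated-averaging step. It is an illustrative toy example, not part of the thesis work plan; all names and values are hypothetical.

# Minimal sketch (illustrative only): model-based poisoning against naive
# federated averaging. All names and values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
d = 10                                    # toy parameter vector size
global_w = np.zeros(d)

# Honest clients send small benign updates around the true direction.
benign_updates = [rng.normal(0.1, 0.02, size=d) for _ in range(9)]

# A single malicious client crafts a boosted update pushing the model
# toward an attacker-chosen target (e.g., a backdoored behaviour).
target_direction = -np.ones(d)
boost = 10.0                              # "model replacement"-style scaling
malicious_update = boost * target_direction

# Naive FedAvg: unweighted mean of all submitted updates.
all_updates = benign_updates + [malicious_update]
aggregated = np.mean(all_updates, axis=0)
global_w += aggregated

print("benign mean update :", np.mean(benign_updates, axis=0)[:3])
print("aggregated update  :", aggregated[:3])
# The single scaled update dominates the aggregate, illustrating why robust
# aggregation and update sanity checks matter for on-device adaptation.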
University / doctoral school
Sciences et Technologies de l’Information et de la Communication (STIC)
Paris-Saclay
Thesis topic location
Site
Grenoble
Requester
Position start date
01/10/2025
Person to be contacted by the applicant
MOELLIC Pierre-Alain
pierre-alain.moellic@cea.fr
CEA
DRT/DSYS//LSES
Centre de Microélectronique de Provence
880 route de Mimet
13120 Gardanne
0442616738
Tutor / Responsible thesis director
GOUY-PAILLER Cédric
cedric.gouy-pailler@cea.fr
CEA
DRT/DIN//LIIDE
CEA Saclay
Bâtiment 565, PC 192
91191 Gif-sur-Yvette
01 69 08 41 87