The research proposals currently available to prospective students of the PhD Programme in Computer and Control Engineering are listed below. Those interested are invited to contact the proposers directly for more details.
Information about the Call for applications and the available positions and scholarships can be found on the dedicated page of the Doctoral School website at this link. The Call also explains how to take part in the selection process and lists the mandatory deadlines.
Additionally, guidelines for preparing the report on scientific interests and motivations for pursuing a PhD are available at this link.
01 Local energy markets in citizen-centered energy communities (Prof. Edoardo Patti)
02 Simulation and Modelling of V2X connectivity with traffic simulation (Prof. Edoardo Patti)
03 Robust AI systems for data-limited applications (Prof. Santa Di Cataldo)
04 Artificial Intelligence applications for advanced manufacturing systems (Prof. Santa Di Cataldo)
05 Secure TinyML on RISC-V Architectures (Prof. Edoardo Patti)
06 Smart digital technologies for advanced and personalised health monitoring (Prof. Luigi Borzì)
07 Secure and Green Digital Networks (Prof. Fulvio Valenza)
12 Self-Adaptive Security for Resilient Networks (Prof. Daniele Bringhenti)
13 Secure Agent: Security and privacy in Agentic AI (Prof. Marco Mellia)
15 OS-Driven Management of Aging Errors for Sustainable Embedded Chips (Prof. Alessandro Savino)
18 Evolutionary Artificial Intelligence (Prof. Giovanni Squillero)
22 Neuromorphic Hardware Development (Prof. Stefano Di Carlo)
23 Neuromorphic Training & Continuous Learning (Prof. Stefano Di Carlo)
24 Trustworthy and Safe AI under Hardware Faults (Prof. Stefano Di Carlo)
25 AI-driven Smart Systems for Sustainable Precision Agriculture (Prof. Renato Ferrero)
29 Data-driven and Surrogate Modelling of In Vitro Tumor Vascularisation (Prof. Stefano Di Carlo)
30 Interaction Design for Everyday Augmented Reality (Prof. Andrea Bottino)
31 World Grounding for Virtual Humans in Extended Reality (Prof. Andrea Bottino)
33 Knowledge-grounded data generation for reliable and scalable AI agents (Prof. Daniele Apiletti)
34 Knowledge-Informed Machine Learning for Data Science and Scientific AI (Prof. Daniele Apiletti)
35 Agentic AI for the Cloud Continuum (Prof. Daniele Apiletti)
36 Few-shot imitation learning for real world manipulation (Prof. Giuseppe Bruno Averta)
37 Measuring Digital Wellbeing Beyond Screen Time (Prof. Alberto Monge Roffarello)
38 Spatio-Temporal Data Science Applied to Earth Observation (Prof. Paolo Garza)
39 Agentic AI for Advanced Temporal Reasoning (Prof. Luca Cagliero)
40 Multimodal Temporal Reasoning using Tiny LLMs (Prof. Luca Cagliero)
41 Continual Learning for Generative Models (Prof. Luca Cagliero)
42 Time-Aware Reinforcement Learning from AI Feedback (Prof. Luca Cagliero)
43 Adversarial Robustness in Multi-Modal Foundation Models (Prof. Luca Cagliero)
44 Privacy-Preserving Machine Learning over IoT networks (Prof. Valentino Peluso)
46 Fairness-Aware Generative AI for Socio-technical Systems (Prof. Riccardo Coppola)
Local energy markets in citizen-centered energy communities | |
| Proposer | Edoardo Patti, Enrico Macii, Lorenzo Bottaccioli |
| Topics | Software engineering and Mobile computing, Parallel and distributed systems, Quantum computing, Computer architectures and Computer aided design |
| Group website | www.eda.polito.it |
| Summary of the proposal | Energy communities will enable citizens to participate actively in local energy markets by exploiting new digital tools. Citizens will need to understand how to interact with smart energy systems, novel digital tools and local energy markets. Thus, new complex socio-techno-economic interactions will take place in such systems, which need to be simulated to evaluate future impacts. A novel co-simulation framework is needed, which combines agent-based modelling techniques with external simulators. |
| Research objectives and methods | The diffusion of distributed (renewable) energy sources poses new challenges to the underlying energy infrastructure, e.g., distribution and transmission networks and/or micro (private) electric grids. The optimal, efficient and safe management and dispatch of electricity flows among different actors (i.e., prosumers) is key to supporting the diffusion of the distributed energy sources paradigm. The goal of the project is to explore different corporate structures, billing and sharing mechanisms inside energy communities. For instance, the use of smart energy contracts based on Distributed Ledger Technology (blockchain) for energy management in local energy communities will be studied. A testbed comprising physical hardware (e.g., smart meters) connected in the loop with a simulated energy community environment (e.g., a building or a cluster of buildings) exploiting different Renewable Energy Sources (RES) and energy storage technologies will be developed and tested during the three-year program. Hence, the research will focus on the development of agents capable of describing: - the final customer/prosumer beliefs, desires, intentions and opinions; - the local energy market where prosumers can trade their energy and/or flexibility; - the local system operator that has to ensure grid reliability. All the software entities will be coupled with external simulators of the grid and energy sources in a plug-and-play fashion. Hence, the overall framework has to be able to work in a co-simulation environment with the possibility of performing hardware in the loop.
The final outcomes of this research will be an agent-based modelling tool that can be exploited for: - planning the evolution of future smart multi-energy systems by taking into account the operational phase; - evaluating the effect of different policies and the related customer satisfaction; - evaluating the diffusion of technologies and/or energy policies under different regulatory scenarios; - evaluating new business models for energy communities and aggregators. During the 1st year, the candidate will study state-of-the-art agent-based modelling tools in order to identify the best available solution for large-scale smart energy system simulation in distributed environments. Furthermore, the candidate will review the state of the art in prosumer/aggregator/market modelling in order to identify the challenges and possible innovations. Moreover, the candidate will review possible corporate structures, billing and sharing mechanisms of energy communities. Finally, they will start the design of the overall platform, beginning with requirements identification and definition. During the 2nd year, the candidate will complete the design phase and start the implementation of the agent intelligence. Furthermore, they will start to integrate agents and simulators in order to create the first beta version of the tool. During the 3rd year, the candidate will finalise the overall platform and test it in different case studies and scenarios in order to show the effects of the different corporate structures, billing and sharing mechanisms in energy communities.
Possible international scientific journals and conferences: IEEE Transactions on Smart Grid; IEEE Transactions on Evolutionary Computation; IEEE Transactions on Control of Network Systems; Environmental Modelling and Software; JASSS; ACM e-Energy; IEEE EEEIC international conference; IEEE SEST international conference; IEEE Compsac international conference. |
| Required skills | Programming and Object-Oriented Programming (preferably in Python). Frameworks for Multi-Agent Systems development (preferable). Development in web environments (e.g., REST web services). Computer networks. |
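The co-simulation architecture described in this proposal, with agents for prosumers coupled plug-and-play to external grid simulators, can be sketched as a minimal lockstep loop. All class names, thresholds and the grid limit below are illustrative placeholders, not part of any specific framework:

```python
# Minimal co-simulation loop: agents exchange signals with an external
# grid simulator at each step. All names and numbers are illustrative.

class ProsumerAgent:
    """Decides how much energy to offer based on a simple price threshold."""
    def __init__(self, generation_kw, price_threshold):
        self.generation_kw = generation_kw
        self.price_threshold = price_threshold

    def offer(self, market_price):
        # Sell surplus only when the local market price is attractive.
        return self.generation_kw if market_price >= self.price_threshold else 0.0


class StubGridSimulator:
    """Stands in for an external power-flow simulator coupled at runtime."""
    def step(self, total_injection_kw):
        # A real simulator would solve a power flow; here we just cap injection.
        grid_limit_kw = 100.0
        return min(total_injection_kw, grid_limit_kw)


def run_cosimulation(agents, grid, prices):
    """Advance agents and grid in lockstep, one step per market price."""
    dispatched = []
    for price in prices:
        offers = sum(a.offer(price) for a in agents)
        dispatched.append(grid.step(offers))
    return dispatched


agents = [ProsumerAgent(30.0, 0.10), ProsumerAgent(50.0, 0.25)]
result = run_cosimulation(agents, StubGridSimulator(), [0.05, 0.15, 0.30])
```

A real deployment would replace the stub with a power-flow engine and exchange signals over co-simulation middleware; the lockstep loop structure stays the same, which is what makes the coupling plug-and-play.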
Simulation and Modelling of V2X connectivity with traffic simulation | |
| Proposer | Edoardo Patti, Enrico Macii, Lorenzo Bottaccioli |
| Topics | Data science, Computer vision and AI, Parallel and distributed systems, Quantum computing, Software engineering and Mobile computing |
| Group website | www.eda.polito.it |
| Summary of the proposal | The development of novel ICT solutions for smart grids has opened new opportunities to foster novel services for energy management and savings in all end-use sectors, with particular emphasis on Electric Vehicle connectivity and services such as demand flexibility. Thus, there will be a strong interaction among transportation, traffic trends and energy distribution systems. New simulation tools are needed to evaluate the impact of Electric Vehicles on the grid by considering citizens' behaviours. |
| Research objectives and methods | This research aims at developing novel simulation tools for smart city/smart grid scenarios that exploit the Agent-Based Modelling (ABM) approach to evaluate novel strategies to manage V2X connectivity with traffic simulation. The candidate will develop an ABM simulator that will provide a realistic virtual city where different scenarios will be executed. The ABM should be based on real data, demand profiles and traffic patterns. Furthermore, the simulation framework should be flexible and extendable so that: i) it can be improved with new data from the field; ii) it can be interfaced with other simulation layers (i.e., physical grid simulators, communication simulators); iii) it can interact with external tools executing real policies (such as energy aggregation). This simulator will be a useful tool to analyse how V2X connectivity and the associated services impact both social behaviours and traffic. It will also help in understanding the impact of new actors and companies (e.g., sharing companies) on both the marketplace and society, again by analysing social behaviours and traffic conditions. In a nutshell, the ABM simulator will simulate both traffic variation and the possible advantages of V2X connectivity strategies in a smart grid context. This ABM simulator will be designed and developed to span different spatio-temporal resolutions. All the software entities will be coupled with external simulators of the grid and energy sources in a plug-and-play fashion, so as to be ready for integration with external simulators and platforms. This will enhance the resulting ABM framework, also unlocking hardware-in-the-loop features.
The outcomes of this research will be an agent-based modelling tool that can be exploited for: - simulating V2X connectivity considering traffic conditions; - evaluating the effect of different policies and the related customer satisfaction; - evaluating the diffusion and acceptance of demand flexibility strategies; - evaluating new business models for future companies and services. During the 1st year, the candidate will study state-of-the-art agent-based modelling tools to identify the best available solution for large-scale traffic simulation in distributed environments. Furthermore, the candidate will review the state of the art of V2X connectivity to identify the challenges and possible innovations. Moreover, the candidate will focus on reviewing Artificial Intelligence algorithms for simulating traffic conditions and variations, and for estimating EV flexibility and users' preferences. Finally, he/she will start the design of the overall ABM framework and algorithms, beginning with requirements identification and definition. During the 2nd year, the candidate will complete the design phase, start the implementation of the agents' intelligence and test the first version of the proposed solution. During the 3rd year, the candidate will finalise the overall ABM framework and AI algorithms and test them in different case studies and scenarios to assess the impact of V2X connection strategies and novel business models. Possible international scientific journals and conferences: IEEE Transactions on Smart Grid; IEEE Transactions on Evolutionary Computation; IEEE Transactions on Control of Network Systems; Environmental Modelling and Software; JASSS; ACM e-Energy; IEEE EEEIC international conference; IEEE SEST international conference; IEEE Compsac international conference. |
| Required skills | Programming and Object-Oriented Programming (preferably in Python), frameworks for Multi-Agent Systems development (preferable), development in web environments (e.g., REST web services), computer networks |
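An EV agent's charge/discharge decision of the kind this simulator would model can be sketched as a simple price-driven rule; the thresholds, power levels and function names below are invented for illustration, not taken from any real framework:

```python
# Toy V2X agent step: each EV decides to charge, idle, or discharge (V2G)
# from a price signal. All thresholds and names are illustrative.

def ev_action(state_of_charge, price, low=0.10, high=0.30):
    """Return signed power in kW: positive = charging, negative = V2G discharge."""
    if price <= low and state_of_charge < 1.0:
        return 7.0           # cheap energy: charge the battery
    if price >= high and state_of_charge > 0.2:
        return -7.0          # expensive energy: offer flexibility back to the grid
    return 0.0               # otherwise stay idle


def fleet_load(socs, price):
    """Aggregate load of an EV fleet at a given price: the grid coupling point."""
    return sum(ev_action(soc, price) for soc in socs)


# Three EVs at different states of charge during a price spike: the two EVs
# with enough charge discharge, while the nearly empty one stays idle.
spike_load = fleet_load([0.9, 0.5, 0.1], price=0.35)
```

In the full ABM, each agent's rule would instead come from learned behaviour models and real demand/traffic profiles, and the aggregate load would be handed to the coupled grid simulator at every step.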
Robust AI systems for data-limited applications | |
| Proposer | Enrico Macii, Santa Di Cataldo, Francesco Ponzio |
| Topics | Data science, Computer vision and AI |
| Group website | https://eda.polito.it/, https://www.linkedin.com/company/edagroup-polito/ |
| Summary of the proposal | Artificial Intelligence is driving a revolution in many important sectors of society. Deep learning networks, and especially supervised ones such as Convolutional Neural Networks, remain the go-to approach for many important tasks. Nonetheless, training these models typically requires massive amounts of good-quality annotated data, which makes them impractical in many real-world applications. This PhD program seeks answers to such problems, targeting important use-cases in today's society. |
| Research objectives and methods | The main goal of this PhD program is the investigation of robust AI-based decision making in data-limited situations. This includes three possible scenarios, which are typical of many important real-world applications: - the training data is difficult to obtain, or is available only in limited quantity; - obtaining the training data is not difficult, but it is either difficult or economically impractical to have human experts label the data; - the training data/annotations are available, but their quality is very poor. Possible solutions involve different approaches, ranging from classic transfer learning and domain adaptation techniques to data augmentation with generative modelling and semi- or self-supervised learning, where access to real data of the target application is either minimised or avoided altogether. In addition, probabilistic approaches (e.g., Bayesian inference) can help to properly quantify the uncertainty level both at training and at inference time, making the decision process more robust to noisy data and/or inconsistent annotations. This research proposal aims to investigate and advance the state of the art in such areas. The outline can be divided into three consecutive phases, one per year of the program: - In the first year, the candidate will acquire the necessary background by attending PhD courses and surveying the relevant literature, and will start experimenting with the available state-of-the-art techniques. A seminal conference publication is expected at the end of the year. - In the second year, the candidate will select and address some relevant use-cases, well representing the three data-limited scenarios mentioned before.
Stemming from the supervisors' collaborations and current research activity, these use-cases may involve Industry 4.0 applications (for example, smart manufacturing and industrial 3D printing) as well as biomedicine and digital pathology. There is some scope to shape the specific focus of such use-cases around the interests and background of the prospective student, as well as those of the various collaborators that could be involved in the project activity: research centers such as the Inter-departmental Center for Additive Manufacturing at PoliTO and the National Institute for Research in Digital Science and Technology (INRIA, France), as well as industries such as Prima Industrie, Stellantis, Avio Aero, etc. At the end of the second year, the candidate is expected to target at least one paper in a well-reputed conference in the field of applied AI, and possibly another publication in a Q1 journal of the Computer Science sector (e.g., Pattern Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, etc.). - In the third year, the candidate will consolidate the models and approaches investigated in the second year, and possibly integrate them into a standalone architecture. The candidate will also finalise this work into at least one more major journal publication, as well as into a PhD thesis to be defended at the end of the program. |
| Required skills | The ideal candidate for this PhD program has: - a positive attitude towards research activity and teamwork; - solid programming skills; - solid basics of linear algebra, probability and statistics; - good communication and problem-solving skills; - some prior experience in the design and development of machine learning and deep learning architectures. |
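As a toy illustration of the uncertainty-quantification idea above, using model disagreement as a proxy for (Bayesian-style) predictive uncertainty in data-limited settings, consider a bootstrap ensemble sketch. The data and the "model" here are deliberately trivial and entirely synthetic:

```python
# Ensemble-based uncertainty sketch: disagreement between models trained on
# different bootstrap resamples approximates predictive uncertainty.
import random
import statistics

random.seed(0)

def fit_mean_model(sample):
    """A trivially simple 'model': predicts the mean of its training sample."""
    return statistics.mean(sample)

def ensemble_predict(data, n_models=20):
    """Train each model on a bootstrap resample; return prediction and spread."""
    preds = []
    for _ in range(n_models):
        boot = [random.choice(data) for _ in data]
        preds.append(fit_mean_model(boot))
    return statistics.mean(preds), statistics.stdev(preds)

small_noisy = [2.0, 9.0, 3.0, 8.0]   # few, noisy labels
large_clean = [5.0] * 40             # plenty of consistent labels

_, spread_small = ensemble_predict(small_noisy)
_, spread_large = ensemble_predict(large_clean)
# The ensemble disagrees far more on the small noisy set, flagging low confidence.
```

With real deep networks the same role is played by deep ensembles, MC dropout or full Bayesian inference, but the decision logic is identical: a prediction with high ensemble spread should be deferred or down-weighted rather than trusted.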
Artificial Intelligence applications for advanced manufacturing systems | |
| Proposer | Enrico Macii, Santa Di Cataldo, Francesco Ponzio |
| Topics | Data science, Computer vision and AI |
| Group website | https://eda.polito.it/, https://www.linkedin.com/company/edagroup-polito/ |
| Summary of the proposal | Industry 4.0 refers to digital technologies designed to sense, predict, and interact with production systems, to make decisions that support productivity, energy-efficiency, and sustainability. While Artificial Intelligence plays a crucial role in this paradigm, many challenges are still posed by the nature and dimensionality of the data, and by the immaturity and intrinsic complexity of some of the processes involved. The aim of this PhD program is to successfully tackle these challenges. |
| Research objectives and methods | The main goal of this PhD program is the investigation, design and deployment of state-of-the-art Artificial Intelligence approaches in the context of the smart factory, with special regard to new-generation manufacturing systems. The program seeks solutions to the challenges outlined above, with a specific focus on new-generation manufacturing systems involving complex processes, for example Additive Manufacturing (AM) and semiconductor manufacturing (SM). The outline of the PhD program can be divided into three consecutive phases, one per year of the program. |
| Required skills | The ideal candidate for this PhD program has: - a positive attitude towards research activity and teamwork; - solid programming skills; - solid basics of linear algebra, probability and statistics; - good communication and problem-solving skills; - some prior experience in the design and development of machine learning and deep learning architectures. Some prior knowledge/experience of manufacturing processes is a plus, but not a requirement. |
Secure TinyML on RISC-V Architectures | |
| Proposer | Edoardo Patti, Luca Barbierato, Enrico Macii |
| Topics | Computer architectures and Computer aided design, Cybersecurity |
| Group website | https://eda.polito.it/ |
| Summary of the proposal | This Ph.D. project investigates novel hardware/software co-design approaches for secure TinyML execution on RISC-V architectures. It focuses on integrating cryptographic acceleration and hardware Root of Trust mechanisms while maintaining the energy efficiency required by ultra-low-power edge devices. The research targets secure and trustworthy on-device ML inference for IoT, industrial and automotive systems, ensuring resilience against software and physical attacks in critical environments. |
| Research objectives and methods | In recent years, Tiny Machine Learning (TinyML) has emerged as a key technology enabling artificial intelligence on microcontroller-class devices. By executing inference directly at the edge, TinyML reduces latency, bandwidth consumption, and privacy risks associated with cloud-based processing. However, the increasing deployment of intelligent edge nodes in open and potentially adversarial environments introduces significant security challenges, including firmware tampering, model theft, malicious updates, key extraction, and side-channel attacks. At the same time, the RISC-V open instruction set architecture has gained strong momentum in both academia and industry due to its modularity, extensibility, and suitability for domain-specific acceleration. Modern RISC-V-based systems increasingly integrate heterogeneous components, including multi-core clusters for parallel processing, custom accelerators for neural network workloads, and hardware cryptographic engines for secure communication and data protection. In addition, hardware Root of Trust (RoT) subsystems are becoming essential building blocks to ensure secure boot, device identity, lifecycle management, and secure key storage. The objective of this Ph.D. research is to investigate and design a general and flexible architecture for secure TinyML on RISC-V platforms that combines: - a RISC-V processing subsystem capable of ultra-low-power operation; - a TinyML acceleration domain (e.g., multi-core cluster or neural network accelerator) for efficient inference; - cryptographic acceleration mechanisms, either through ISA extensions or dedicated hardware engines; - a hardware Root-of-Trust subsystem ensuring secure boot, attestation, key management, and firmware integrity. The research will remain architecture-agnostic and general, considering different possible RISC-V implementations and levels of integration.
Particular attention will be devoted to defining clear isolation boundaries between the secure domain (RoT and cryptographic services) and the compute domain (TinyML accelerators), minimising the trusted computing base and reducing the system attack surface. From an algorithmic perspective, the project will explore secure TinyML techniques, including encrypted or authenticated model storage, secure model update mechanisms with anti-rollback protection, runtime integrity verification, and lightweight confidentiality mechanisms for model parameters and intermediate activations. The impact of these security mechanisms on inference latency, energy consumption, and memory footprint will be systematically analysed. The research activity will also investigate the trade-offs between different cryptographic acceleration approaches, such as instruction-set extensions versus dedicated crypto engines, in terms of performance, power consumption, silicon area, and resistance to side-channel attacks. Furthermore, hardware/software co-design methodologies will be adopted to jointly optimize TinyML kernels and security services under strict energy and resource constraints. During the three years of the Ph.D., the research activity will be divided into four main phases: - study and analysis of the state of the art in TinyML, RISC-V secure architectures, hardware Root-of-Trust solutions, and cryptographic acceleration techniques, with the definition of threat models and security requirements for edge AI systems; - design of a general secure TinyML RISC-V reference architecture, including the definition of isolation mechanisms, secure boot flow, key management strategies, and accelerator interaction models, together with the development of a prototype environment using simulation and/or FPGA-based platforms; - implementation of secure TinyML runtime components, including secure model provisioning, authenticated updates, attestation mechanisms, and integration with TinyML accelerators.
Experimental evaluation will use representative TinyML benchmarks; - comprehensive evaluation of security, performance, and energy trade-offs, analysis of side-channel resilience and system robustness, refinement of the architecture, and preparation of scientific publications. Possible international scientific journals and conferences include: IEEE Transactions on Computers, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), IEEE Transactions on Information Forensics and Security, the ACM/IEEE Design Automation Conference (DAC), the DATE conference, IEEE HOST, CHES, and leading venues in embedded systems and hardware security. The expected outcome of this research is a comprehensive framework and architectural methodology for the secure deployment of TinyML workloads on RISC-V platforms, enabling trustworthy, energy-efficient, and scalable edge AI systems suitable for next-generation secure embedded applications. |
| Required skills | Programming and Object-Oriented Programming (preferably in C/C++); knowledge of operating systems (e.g., UNIX); knowledge of embedded systems; knowledge of driver design; knowledge of cybersecurity; knowledge of IoT paradigms |
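The authenticated model update and anti-rollback mechanisms described in this proposal can be illustrated with a minimal sketch. The key, versioning scheme and image format below are invented for the example; a real Root of Trust would keep the key in secure storage and use hardware-backed cryptography rather than a software HMAC:

```python
# Sketch of an authenticated firmware/model update with anti-rollback,
# in the spirit of the RoT mechanisms described above. Illustrative only.
import hmac
import hashlib

DEVICE_KEY = b"device-unique-key-from-RoT"  # would live in secure key storage

def sign_image(image: bytes, version: int) -> bytes:
    """Authenticate the image bytes together with a monotonic version counter."""
    return hmac.new(DEVICE_KEY, version.to_bytes(4, "big") + image,
                    hashlib.sha256).digest()

def verify_update(image: bytes, version: int, tag: bytes,
                  stored_version: int) -> bool:
    """Accept only images with a valid tag AND a version newer than the stored one."""
    expected = sign_image(image, version)
    if not hmac.compare_digest(expected, tag):
        return False   # tampered image or wrong key
    if version <= stored_version:
        return False   # rollback attempt: refuse older (possibly vulnerable) firmware
    return True

fw_v2 = b"model-and-runtime-blob-v2"
tag_v2 = sign_image(fw_v2, 2)

ok = verify_update(fw_v2, 2, tag_v2, stored_version=1)        # valid upgrade
rollback = verify_update(fw_v2, 2, tag_v2, stored_version=3)  # stale version
```

Binding the version into the authenticated payload is what makes rollback detectable: an attacker replaying an old, correctly signed image still fails the monotonic-counter check.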
Smart digital technologies for advanced and personalised health monitoring | |
| Proposer | Gabriella Olmo, Luigi Borzì |
| Topics | Data science, Computer vision and AI, Life sciences, Software engineering and Mobile computing |
| Group website | https://www.smilies.polito.it/ |
| Summary of the proposal | In an era where it is easy to collect huge amounts of medical data over long periods of time, appropriately designed computer programs are essential to merge multimodal data, extract clinically relevant information and synthesise it to provide easy interpretation. The objective of the project is to design, implement, optimise and validate signal processing and machine learning algorithms that provide accurate health monitoring in different environments and conditions. |
| Research objectives and methods | Introduction: In an increasingly ageing society, the prevalence of health-related problems (e.g. cardiovascular complications) is rapidly increasing, as is the incidence of chronic and neurodegenerative disorders. In recent decades, the development of new digital technologies promises to revolutionise the management of diseases and health-related issues. In particular, digital solutions are being designed and optimised to facilitate early diagnosis, unconstrained continuous monitoring and evaluation of disease progression and treatment effectiveness. These new technologies include wearable sensors that can be worn on the body for long periods of time and enable minimally invasive, unconstrained, continuous and accurate real-time health monitoring. In addition, devices such as RGB cameras, infrared sensors and radar systems offer the advantage of non-contact monitoring of vital signs and human movement. Non-invasive data collection procedures ensure the ecological validity of the recorded data, promoting patient comfort and compliance and thus enabling long-term monitoring. At the same time, the huge amount of multimodal data generated by different sensors can pose challenges for data analysis, fusion, processing, synthesis, and interpretation. To overcome these challenges, it is necessary to implement advanced signal processing and machine learning algorithms tailored to detect imperceptible changes in health parameters, aid early diagnosis and provide robust predictions. Research objectives: This project aims to provide a broad set of computer-based methods capable of handling large amounts of multimodal health-related data. Algorithms include tailor-made signal processing methods to improve signal quality and fuse data from different domains, as well as feature- and data-driven machine learning models that process data to provide a robust and comprehensive health assessment. 
The design, implementation and optimisation of prediction algorithms must consider the trade-off between performance, interpretability, and computational complexity. Indeed, algorithms capable of running in real time on autonomous, low-resource portable devices will be developed. Outline of the research work plan: The research is inherently multidisciplinary and involves the application of computer methods to medical data. Data will be recorded in clinical settings under the supervision of medical personnel, as well as collected continuously in remote settings (e.g. at home). Both young, healthy and elderly subjects as well as subjects with chronic and neurological diseases will be involved in the data acquisition process. Various commercial sensors and prototypes providing effective and non-invasive health monitoring will be exploited. Algorithms ranging from simple to complex will be developed and validated, both on large public data sets and on proprietary self-collected data. General models will be implemented to aid early diagnosis of chronic diseases and related symptoms. Subject-specific processing pipelines will be designed to match the characteristics of each patient, thus providing patient-centered solutions. Finally, secure and privacy-preserving methods, such as data anonymisation and federated learning, will be exploited to ensure adequate training of models while reducing the risk associated with the transfer of sensitive medical data.
List of possible venues for publications: The research results will be presented at recognised international scientific conferences (e.g., IEEE International Conference on Biomedical and Health Informatics, IEEE Symposium on Computer-Based Medical Systems, ACM International Symposium on Wearable Computers, IEEE International Conference on Digital Health) and top-tier international journals (e.g., Nature Digital Medicine, Artificial Intelligence in Medicine, Computers in Biology and Medicine, IEEE Journal of Biomedical and Health Informatics) at the intersection of computer science, engineering and medicine. Collaborations: Research studies will be conducted and published in close cooperation with clinical professionals (e.g. neurologists, cardiologists, diabetologists, family doctors) and patient associations, to ensure that the methods developed meet current clinical gaps and the real needs of patients. In addition, national and international cooperation with academic research units (e.g. electronics, neuroscience, computer science) and specialised laboratories will provide a comprehensive set of knowledge and tools that will facilitate and promote high-quality research. Finally, collaborations with industrial partners will facilitate the design of effective, ready-to-use solutions for use in the real world. |
| Required skills | |
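The federated learning approach mentioned in this proposal, where models are trained across sites without moving sensitive medical data, can be illustrated with a minimal FedAvg-style round. The gradients, dataset sizes and the two-parameter model below are entirely synthetic:

```python
# Minimal federated averaging (FedAvg) round: each site updates the shared
# model on its own data; only model weights leave the site. Synthetic values.

def local_update(weights, local_gradient, lr=0.1):
    """One local gradient step at a clinic; raw patient data never leaves."""
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(client_weights, client_sizes):
    """Server aggregates client models, weighted by local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

global_model = [1.0, 1.0]
clients = [
    local_update(global_model, [0.5, -0.5]),  # clinic A's local gradient
    local_update(global_model, [1.0, 1.0]),   # clinic B's local gradient
]
new_global = federated_average(clients, client_sizes=[100, 300])
```

Weighting by dataset size keeps the aggregate close to what centralised training on the pooled data would give, while only weight vectors, never recordings or labels, cross site boundaries.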
Secure and Green Digital Networks | |
| Proposer | Fulvio Valenza, Luca Durante, Riccardo Sisto |
| Topics | Cybersecurity |
| Group website | http://netgroup.polito.it/ https://www.ieiit.cnr.it/it/ |
| Summary of the proposal | Digital networks such as cloud, edge, and virtualized infrastructures must guarantee strong cybersecurity and resilience while operating under increasing energy and efficiency constraints. This PhD project aims to design adaptive and automated approaches for secure and energy-aware network optimization, enabling resilient reactions to cyber attacks and failures through continuous monitoring, formal modeling, and security-driven reconfiguration. |
| Research objectives and methods | Modern digital networks, including cloud, edge, and software-defined infrastructures, are increasingly complex, dynamic, and exposed to cyber threats. At the same time, they are required to operate under strict efficiency and sustainability constraints. In this context, cybersecurity and network resilience cannot be treated as static properties, but must be continuously enforced through adaptive, automated, and optimized configuration mechanisms. The main objective of this PhD research is to advance the state of the art in secure and green digital networks by developing optimization models and automated approaches for the configuration of network security devices. The proposed approaches will combine cybersecurity protection, resilience to attacks and failures in dynamic digital networks, and energy-aware optimization for efficient security configurations. Current network security management solutions still rely heavily on manual configuration and human-driven decision processes. This limits scalability, slows reaction to attacks or failures, and increases the risk of misconfigurations, especially in highly dynamic cloud and virtualized environments. The proposed research aims to reduce human intervention by introducing model-driven and optimization-based approaches that enable energy-aware, automated, correct-by-construction security configurations and adaptive reconfiguration at runtime. The research will build upon consolidated expertise in network security automation and formal methods and will be conducted in synergy with ongoing scientific activities in collaboration with CNR-IEIIT. Energy awareness will be treated as a constraint and optimization dimension influencing security decisions, rather than as an independent objective, enabling the joint management of security, resilience, and efficiency. The research activity will be structured in three main phases. 
Year 1: analysis of the state of the art in cybersecurity automation, network resilience, and energy-aware network optimization, with particular attention to formal and optimization-based modeling approaches. Initial problem formulations and models for the secure and green configuration of network security devices will be defined. Year 2: development and implementation of the proposed models and automated mechanisms, including security-aware and energy-aware optimization strategies. Experimental evaluation will assess correctness, scalability, performance, and resilience under representative attack and failure scenarios. The results of this phase are expected to lead to publications in international conferences and journals. Year 3: refinement and extension of the proposed approaches to improve scalability, generality, and applicability to different network architectures, security devices, and threat models. Data-driven and AI-based techniques may be investigated to support adaptive decision-making under changing network and threat conditions. Dissemination of the research results will be completed. The outcomes of this research are expected to contribute to high-impact scientific venues in the areas of cybersecurity, networking, and dependable systems, such as IEEE S&P, ACM CCS, NDSS, ESORICS, IFIP SEC, IEEE Transactions on Secure and Dependable Computing, and ACM Transactions on Privacy and Security. |
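As a toy illustration of the energy-aware, correct-by-construction configuration problem sketched above, the following Python snippet searches for the cheapest set of security-device placements that still covers all flows requiring protection. All placement names, covered flows, and energy costs are invented for illustration; a real formulation would use formal models and a proper solver rather than exhaustive search.

```python
from itertools import combinations

# Toy model: each candidate firewall placement covers a set of flows that
# must be filtered and has an energy cost. We search for the lowest-energy
# subset of placements that still covers every required flow -- a minimal
# instance of joint security/energy optimization. Names and numbers are
# illustrative assumptions, not part of the proposal.
required_flows = {"f1", "f2", "f3", "f4"}
placements = {
    "fw_edge":  ({"f1", "f2"}, 3.0),        # (covered flows, energy cost)
    "fw_core":  ({"f2", "f3", "f4"}, 5.0),
    "fw_cloud": ({"f1", "f4"}, 2.5),
}

def cheapest_secure_config(required, candidates):
    """Exhaustively find the lowest-energy placement set covering all flows."""
    best, best_cost = None, float("inf")
    names = list(candidates)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            covered = set().union(*(candidates[n][0] for n in subset))
            cost = sum(candidates[n][1] for n in subset)
            if required <= covered and cost < best_cost:
                best, best_cost = set(subset), cost
    return best, best_cost

config, cost = cheapest_secure_config(required_flows, placements)
print(config, cost)
```

In this toy instance the optimizer prefers the core-plus-cloud pair over the cheaper-looking edge placement, since the latter leaves one flow unprotected; the same coverage-versus-cost trade-off drives the real optimization models envisioned in the proposal.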
| Required skills | The candidate should have a solid background in computer networks and cybersecurity, as well as strong programming skills. Knowledge of network security mechanisms and distributed systems is required. Familiarity with formal methods, optimization, or automation techniques is a plus but not mandatory and can be acquired during the PhD program. |
Performance Optimization of ML-Based Compressed Video Communications on Packet Networks | |
| Proposer | Enrico Masala, Antonio Servetti |
| Topics | Computer graphics and Multimedia, Data science, Computer vision and AI |
| Group website | https://media.polito.it |
| Summary of the proposal | This proposal focuses on optimizing video communication now that ML-based video compression algorithms with good performance (latency and required bits) are starting to emerge. Key objectives are to investigate how the major emerging algorithms perform under lossy packet-network conditions and to predict users' quality of experience. This knowledge will then be used to improve the way content is transported, thus optimizing performance. |
| Research objectives and methods | In recent years, machine learning (ML) has also been successfully employed to improve video compression performance. Recent deep-learning-based video compression algorithms achieve performance comparable to, if not better than, state-of-the-art traditional standards such as the MPEG family. However, these algorithms are still relatively new and not adapted to specific communication scenarios where several impairments may occur, notably missing data due to packet losses. We aim to improve the situation by investigating the baseline robustness of decoding algorithms, followed by proposals of efficient techniques to include coded data in packets in order to minimize quality impairments due to packet losses. Starting from the Internet Media Group's traditional background in optimizing multimedia communications over packet networks, as well as ongoing work on media quality evaluation experiments, the candidate will first measure the quality obtained by running decoding algorithms, then use that knowledge to design schemes that make communications more robust to losses. The work plan of the activities is detailed in the following. In the first year the PhD candidate will familiarize with machine learning and AI-based video compression algorithms, using the available open-source software released by researchers working in the field. In parallel, a framework will be created to efficiently conduct simulations and experiments in which simulated video communications are subject to packet losses. The framework may include modifications and adaptations to existing video decoders to make them work correctly in the simulations, since they might not have been originally designed to handle these conditions. These initial investigations and activities are expected to lead to conference publications. 
In the second year, building on the developed simulators and the theoretical knowledge already present in the research group, new efficient packetization strategies will be developed, simulated, and tested to demonstrate their performance, in particular their ability to make the communication as robust as possible to packet losses. In this context, potential shortcomings of such algorithms will be systematically identified and the resulting performance in terms of video quality will be carefully measured by means of widespread objective video quality measures suitable for this purpose. These results are expected to yield one or more journal publications. Possible targets for research publications, well known to the proposer, include IEEE Transactions on Multimedia, Elsevier Signal Processing: Image Communication, ACM Transactions on Multimedia Computing Communications and Applications, Elsevier Multimedia Tools and Applications, and various IEEE/ACM international conferences (IEEE ICME, IEEE MMSP, QoMEX, ACM MM, ACM MMSys). The proposer actively collaborates with the Video Quality Experts Group (VQEG), an international group of experts from academia and industry that aims to develop new standards in the context of video quality. In particular, the tutor is co-chair of the JEG-Hybrid project, which is very interested in the activity previously described. |
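To make the packet-loss experiments concrete, here is a minimal sketch of a burst-loss channel based on the classic two-state Gilbert-Elliott model, a common starting point for simulators of the kind described above. The transition probabilities are illustrative assumptions, not measured network values.

```python
import random

# Two-state Gilbert-Elliott burst-loss channel: state G (Good) delivers
# packets, state B (Bad) loses them. Bursts arise because the chain tends
# to stay in its current state. p_gb and p_bg are assumed, not measured.
def gilbert_elliott(n_packets, p_gb=0.05, p_bg=0.5, seed=42):
    """Return a list of booleans: True = packet received, False = lost."""
    rng = random.Random(seed)
    state, received = "G", []
    for _ in range(n_packets):
        received.append(state == "G")
        if state == "G" and rng.random() < p_gb:
            state = "B"
        elif state == "B" and rng.random() < p_bg:
            state = "G"
    return received

trace = gilbert_elliott(10_000)
loss_rate = 1 - sum(trace) / len(trace)
print(f"simulated loss rate: {loss_rate:.3f}")
```

The long-run loss rate approaches the stationary probability of the Bad state, p_gb / (p_gb + p_bg); packetization strategies would then be compared by feeding coded video data through such traces and measuring the resulting objective quality.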
| Required skills | The PhD candidate is expected to have: strong analytical skills; some background on ML systems; good English writing and communication skills; reasonably good ability/willingness to learn how to work with large quantities of data (uncompressed videos) on remote server systems, and designing/running simple custom packet loss simulators. |
AI for early and differential diagnosis of dementia using facial expressions and voice | |
| Proposer | Gabriella Olmo, Luigi Borzì, Innocenzo Rainero |
| Topics | Data science, Computer vision and AI, Life sciences |
| Group website | https://www.sysbio.polito.it/analytics-technologies-health/ https://www.smilies.polito.it/ |
| Summary of the proposal | Early and differential diagnosis of dementia is essential for timely and targeted care. This project aims to develop an artificial intelligence (AI)-based system to discriminate between different stages and aetiologies of dementia by analysing facial expression and voice. |
| Research objectives and methods | Introduction: Dementia is a syndrome characterized by a progressive deterioration of cognitive functions and behavioural disturbances. Alzheimer's disease (AD) is the most common form; other aetiologies include vascular dementia, frontotemporal dementia, dementia with Lewy bodies, dementia related to Parkinson's disease, and mixed forms. From a clinical perspective, dementia is a progressive continuum from an asymptomatic phase, to mild cognitive impairment (MCI), and ultimately to overt dementia. In the MCI stage, the first symptoms occur, but daily activities are not heavily affected. This stage is often considered an important time window for diagnosis. Early and differential diagnosis is essential for timely access to care, as well as for enrolling individuals in clinical trials. The FDA has recently approved disease-modifying therapies for AD showing clinical efficacy only when administered during the earliest stages of the disease. Moreover, it is recognized that proper lifestyle modifications are effective in slowing disease progression; however, the interventions should be tuned to the most affected cognitive dimensions (e.g., speech, memory, executive functions), and these depend on the differential diagnosis of the dementia class. The diagnosis relies on a combination of medical history, neuropsychological assessments, neuroimaging, and lab tests, which are often costly and invasive; hence, there is a pressing need for accessible, cost-effective approaches. In this context, facial expressions may carry important diagnostic information, as they are often altered in individuals with cognitive impairment (CI), with alterations depending on the type and stage of dementia. Deep learning (DL) has already demonstrated potential in analysing facial expressions to support early and accurate diagnosis of CI. However, very little research has tackled disease staging or differential diagnosis. 
Cognitive status has also been found to significantly impact voice. Bowing, air escape, poor voice production, and impaired breath support have been observed in individuals with AD. However, no study addresses differential diagnosis, and the impact of different neurodegenerative conditions on voice production and semantics is largely under-researched. Research objectives: This project builds upon previous results achieved in this research group, and aims to improve the early classification of dementia aetiologies, using information from: elicited facial emotion features extracted from video recordings; voice samples; other physiological information (e.g., heart rate variability, electrodermal activity), which can be easily measured using wearable devices and which augments the information related to the patient's autonomic nervous activation. Outline of the research work plan: The research is inherently multidisciplinary and involves the cooperation of the Dementia Center of Molinette Hospital, Turin, and the Department of Neuroscience, Biomedicine and Movement Sciences, University of Verona. First year: A detailed protocol will be set up, in order to identify: a suitable emotion elicitation procedure; the video/audio recording setup; additional bio signals to be collected. This activity is already ongoing and included in a clinical trial (approved by the EC of AOU Città della Salute e della Scienza di Torino). The protocol, which has already been tested on about 70 patients and includes AD-specific biomarkers, will be modified to include voice and other bio signals, and optimized as more data become available. Data will be recorded in clinical settings throughout the PhD period, and the PhD student will be directly involved in the measurements. Second and third year: AI models will be implemented to aid early and differential diagnosis. 
The work will be based upon already developed models for multiple classification tasks (MCI vs controls, MCI vs overt dementia, differential diagnosis of dementia types) and modified to include multidimensional data (voice and bio signals). Subject-specific processing pipelines will be designed to match the characteristics of each patient, thus providing patient-centred solutions. List of possible venues for publications: The research will be presented at recognised international scientific conferences and international journals (e.g., Nature Digital Medicine, Artificial Intelligence in Medicine, Computers in Biology and Medicine, IEEE Journal of Biomedical and Health Informatics) at the intersection of computer science, engineering and medicine. Cooperations: close cooperation with clinical professionals is foreseen. In addition, national and international cooperation with academic research units and laboratories will provide a comprehensive set of knowledge and tools that will facilitate and promote high-quality research. |
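As a minimal sketch of the multimodal combination step described above, the snippet below fuses per-modality classifier outputs (face, voice, physiological signals) into a single diagnostic score via weighted late fusion. The scores and reliability weights are invented for illustration; a real system would learn them from clinical data.

```python
# Illustrative late fusion of per-modality classifier probabilities for the
# same diagnostic class. Scores and weights are hypothetical placeholders,
# not values from the research group's models.
def late_fusion(scores, weights):
    """Weighted average of per-modality probabilities for one class."""
    assert scores.keys() == weights.keys()
    total = sum(weights.values())
    return sum(scores[m] * weights[m] for m in scores) / total

# Hypothetical per-modality probabilities that a subject is in the MCI class.
scores = {"face": 0.72, "voice": 0.58, "physio": 0.65}
weights = {"face": 0.5, "voice": 0.3, "physio": 0.2}  # assumed reliabilities

fused = late_fusion(scores, weights)
print(round(fused, 3))
```

Subject-specific pipelines, as envisioned in the proposal, could adapt the per-modality weights to each patient's characteristics (e.g., down-weighting voice for patients with unrelated speech impairments).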
| Required skills | |
Exploiting Vision and Multimodal Foundation Models for Neurodegenerative Disease Assessment | |
| Proposer | Gabriella Olmo, Luigi Borzì, Gianluca Amprimo |
| Topics | Data science, Computer vision and AI, Life sciences |
| Group website | https://www.smilies.polito.it/ |
| Summary of the proposal | Recent advances in vision and multimodal foundation models have demonstrated remarkable generalisation capabilities across tasks and domains. This project aims to exploit transfer learning, domain adaptation and fine-tuning strategies to adapt large-scale vision and vision-language models for the analysis of biomedical data. The goal is to develop AI systems capable of supporting early diagnosis and continuous monitoring of neurodegenerative diseases such as Parkinson's and Alzheimer's. |
| Research objectives and methods | Introduction: Neurodegenerative diseases such as Parkinson's and Alzheimer's represent one of the most pressing healthcare challenges in ageing societies. These disorders are characterised by progressive deterioration of neurological functions that significantly impacts quality of life and healthcare systems worldwide. Early diagnosis and continuous monitoring of disease progression are crucial for effective treatment, patient management, and evaluation of therapeutic interventions. In recent years, digital technologies have enabled the collection of large amounts of heterogeneous biomedical data, including medical imaging, video recordings of motor behaviour, wearable sensor signals, and clinical reports. However, extracting clinically meaningful information from such complex multimodal datasets remains a challenging task. At the same time, AI has undergone a major paradigm shift with the emergence of foundation models trained on massive datasets. In particular, vision foundation models (VFMs) and vision-language models (VLMs) have demonstrated strong generalisation capabilities and the ability to transfer knowledge across different tasks and domains. These models can potentially revolutionise biomedical data analysis by providing powerful feature representations that can be adapted to specialised medical applications. The main challenge is how to effectively adapt these large-scale pretrained models to the medical domain, where labelled data are often limited, heterogeneous, and highly sensitive. Research objectives: The objective of this PhD project is to investigate how vision and multimodal foundation models can be adapted and exploited for the analysis of biomedical data related to neurodegenerative diseases. In particular, the project will focus on: - Transfer learning and fine-tuning strategies for adapting pretrained vision models to video data collected for motor assessment of patients. 
- Domain adaptation methods to bridge the gap between large-scale natural datasets used for pretraining and specialised clinical datasets. - Multimodal learning approaches that integrate visual data with physiological signals, clinical metadata, or textual reports. - Representation learning techniques to extract clinically relevant features that can support diagnosis, disease staging, and monitoring. The resulting AI systems will aim to support clinicians by providing tools capable of detecting subtle behavioural or physiological patterns associated with neurodegenerative diseases. Outline of the research work plan: The research activity will be inherently interdisciplinary and will combine expertise from computer vision, machine learning, neuroscience, and biomedical engineering. The research plan will include several phases. First, a systematic analysis of existing vision foundation models and multimodal architectures will be conducted, including models such as Vision Transformers, self-supervised vision models, and vision-language models. Their suitability for biomedical applications will be investigated. Second, adaptation strategies will be developed to transfer these models to medical domains. This includes fine-tuning methods, domain adaptation approaches, and parameter-efficient learning techniques such as adapters or prompt tuning. Third, the models will be applied to datasets related to neurodegenerative diseases, including imaging data, behavioural recordings, and multimodal clinical datasets. Potential applications include: - analysis of motor behaviour and gait patterns in Parkinson's disease - detection of cognitive or behavioural markers in Alzheimer's disease - monitoring disease progression using longitudinal multimodal data The performance of the developed methods will be evaluated using both public datasets and clinical datasets collected in collaboration with medical institutions. 
Finally, the project will investigate trustworthy and interpretable AI methods, aiming to improve transparency and reliability of AI-based clinical decision support systems. Techniques for explainability and uncertainty estimation will be explored to ensure that the developed models provide meaningful insights to clinicians. List of possible venues for publications: Research results will be disseminated in leading international conferences and journals in artificial intelligence, computer vision, and biomedical engineering. Relevant journals include Nature Digital Medicine, Elsevier Artificial Intelligence in Medicine, IEEE Journal of Biomedical and Health Informatics and Elsevier Computers in Biology and Medicine. Cooperations: The research will be conducted in collaboration with clinical experts to ensure that the developed AI methods address real clinical needs. Collaborations with hospitals, research laboratories, and international academic partners will enable access to relevant biomedical datasets and clinical expertise. |
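To illustrate the parameter-efficient adaptation idea mentioned above (adapters), here is a dependency-free sketch of a LoRA-style low-rank update: the frozen pretrained weight W is augmented with a product A @ B, so only the few entries of A and B would be trained for the clinical task. Shapes and values are illustrative assumptions.

```python
# LoRA-style adapter sketch in plain Python: effective weight is
# W + alpha * (A @ B), with W frozen and only the low-rank factors A, B
# trainable. A 4x4 W with rank-1 factors means 8 trainable numbers
# instead of 16. All values here are invented for illustration.
def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

# Frozen 4x4 pretrained weight (identity, for simplicity).
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# Rank-1 adapter factors: A is 4x1, B is 1x4.
A = [[0.1], [0.2], [0.0], [0.0]]
B = [[1.0, 0.0, 0.0, 0.0]]
alpha = 1.0  # adapter scaling factor

def adapted_forward(x):
    """Compute y = x @ (W + alpha * A @ B): frozen backbone plus update."""
    AB = matmul(A, B)
    W_eff = [[w + alpha * ab for w, ab in zip(rw, rab)]
             for rw, rab in zip(W, AB)]
    return matmul([x], W_eff)[0]

y = adapted_forward([1.0, 0.0, 0.0, 0.0])
print(y)
```

In practice this is done with tensor libraries on large transformer layers; the point of the sketch is only the parameter count: the rank-r factors grow linearly with layer width, while full fine-tuning grows quadratically.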
| Required skills | |
Designing delay-trained spiking neural networks on heterogeneous cloud-to-edge neuromorphic systems | |
| Proposer | Gianvito Urgese, Vittorio Fra, Enrico Macii |
| Topics | Data science, Computer vision and AI, Parallel and distributed systems, Quantum computing, Computer architectures and Computer aided design |
| Group website | https://eda.polito.it/ |
| Summary of the proposal | The PhD candidate will design spiking neural networks that learn through both sparse activity and precise spike timing. They will also integrate EventProp-based training and benchmarking into a mixed digital/neuromorphic platform and help develop SW to deploy these models on neuromorphic systems within the inNuCE infrastructure. The goal is to create accurate, efficient, and quickly deployable neuromorphic solutions for neuroscience, AIoT, and bioinformatics. |
| Research objectives and methods | Research objectives Neuromorphic HW architectures, originally developed for brain simulation, are now emerging as promising substrates for AIoT, robotics, neuroscience, and sustainable edge/cloud AI. Recent work highlights that their advantage comes from event-driven sparsity, asynchronous processing, and tight coupling of memory and computation, but also stresses the need for scalable training, heterogeneous integration, and reproducible software stacks. Within this context, the PhD will focus on delay-trained SNNs and their deployment on heterogeneous digital/neuromorphic platforms. The objectives of the PhD plan encompass several key aspects: - Develop the expertise to analyse event-driven datasets, spike encodings, and HW constraints, extracting the information needed to map advanced SNN models to heterogeneous platforms. - Study Spiking Neural Networks that exploit intrinsic sparsity and rich temporal coding through the joint training of synaptic weights, axonal and dendritic delays, and selected neuronal dynamics. - Contribute to the design and development of the inNuCE Heterogeneous Prototyping Platform (HPP) and the NMLOps-oriented framework that covers data acquisition, training, simulation, conversion, deployment, and benchmarking. - Propose reusable abstractions and model components for delay-aware neuromorphic computing, enabling users to compose kernels and applications across multiple target HW and software stacks. - Utilize the HPP to design proof-of-concept applications and compare delay-trained SNNs with conventional baselines in terms of accuracy, latency, memory footprint, and energy consumption. - Contribute to the design of software libraries and optimization strategies for efficient sparse SNN execution on RISC-V CPUs and other resource-constrained edge devices. 
The research activities will primarily focus on implementing algorithms in three main application areas: - Simulations of models developed by the EBRAINS-Italy neuroscience community, especially where temporal dynamics and continual adaptation are central. - Real-time AIoT, robotics, and industrial data analysis, with particular attention to event-based or time-series sensing at the edge. - Analysis and pattern matching of neuroscience and bioinformatics data streams. Outline of the research work plan 1st year. The candidate will study SOTA neuromorphic frameworks and training methods for spiking networks, with emphasis on surrogate-gradient techniques, EventProp, and the learning of axonal delays and neuronal time constants. He/She will implement the first delay-aware training workflows inside the inNuCE HPP, contributing to containerized toolchains, versioned experiments, and initial benchmarking on GPUs, neuromorphic devices, and RISC-V-oriented edge targets. In parallel, the candidate will start the design of software components for efficient sparse SNN execution on conventional CPUs. 2nd year. The candidate will define an integrated methodological approach for modelling, training, compiling, and validating delay-trained SNN applications and systems. Building on the first-year results, he/she will extend the framework with model-zoo components, AutoML support, and resource-aware evaluation metrics covering accuracy, latency, memory footprint, and energy. The candidate will consolidate the inNuCE HPP prototype and define two Modelling, Simulation, and Analysis (MSA) use cases tailored to the needs of neuroscientists, bioinformaticians, and data scientists/engineers. 3rd year. The candidate will apply the proposed approach to industrial, neuroscience, and AIoT case studies, validating delay-trained SNNs on heterogeneous hardware and comparing them against conventional baselines. 
He/She will analyse portability constraints and compiler/runtime requirements for upcoming neuromorphic HW alongside general-purpose edge CPUs, and will contribute to the integration of the inNuCE HPP and related services into the EBRAINS ecosystem. The research activities will be carried out in collaboration with partners of the international neuromorphic community and will benefit from the methodological and infrastructural advances of the inNuCE RI, together with ongoing collaborations within the EU-funded Sublimity project and the Arrowhead FPVN project. List of possible venues for publications: The main outcome of the project will be disseminated in three international conference papers and at least one publication in a journal of the AIoT, embedded AI, and neuromorphic fields. Possible conference and journal targets include: - IEEE/ACM International Conferences (e.g., DAC, DATE, AICAS, NICE, ISLPED, GLSVLSI, PATMOS, ISCAS, VLSI-SoC); - IEEE/ACM Journals (e.g., TCAD, TETC, TVLSI, TCAS-I, TCAS-II, TCOMP), Neuromorphic Computing and Engineering, Frontiers in Neuroscience, MDPI Journals (e.g., Electronics). |
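To make the notion of delay-trained SNNs concrete, the following sketch simulates a discrete-time leaky integrate-and-fire (LIF) neuron with per-synapse axonal delays, the kind of temporal parameter that would be learned jointly with weights (e.g., via EventProp). Threshold, decay, and delay values are illustrative assumptions.

```python
# Minimal discrete-time LIF neuron with per-synapse axonal delays.
# Delays shift input spike arrival times, so precise coincidences (not just
# rates) determine whether the neuron fires -- the temporal-coding
# mechanism that delay training exploits. All parameters are illustrative.
def lif_with_delays(spike_trains, weights, delays, threshold=1.0, decay=0.9):
    """Return output spike times of a LIF neuron driven by delayed inputs.

    spike_trains: per-synapse lists of input spike times (integer steps)
    weights/delays: per-synapse synaptic weight and integer axonal delay
    """
    horizon = max((t + d for train, d in zip(spike_trains, delays)
                   for t in train), default=0) + 1
    v, out = 0.0, []
    for t in range(horizon):
        v *= decay  # membrane leak
        for train, w, d in zip(spike_trains, weights, delays):
            if (t - d) in train:  # delayed arrival of each input spike
                v += w
        if v >= threshold:
            out.append(t)
            v = 0.0  # reset after firing
    return out

# Two weak inputs whose spikes, once delayed, coincide at t = 5 and fire.
spikes = lif_with_delays([[2], [4]], weights=[0.6, 0.6], delays=[3, 1])
print(spikes)
```

Neither input alone crosses the threshold; tuning the delays so the two arrivals coincide is exactly the degree of freedom that delay training adds on top of weight training.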
| Required skills | MS degree in computer engineering, electronics engineering, physics of complex systems, or related disciplines. Excellent skills in computer programming, computer architecture, embedded systems, and AI/IoT applications. |
Self-Adaptive Security for Resilient Networks | |
| Proposer | Fulvio Valenza, Daniele Bringhenti |
| Topics | Cybersecurity, Parallel and distributed systems, Quantum computing |
| Group website | https://netgroup.polito.it/ |
| Summary of the proposal | Next-generation networks require security mechanisms that go beyond traditional static protection. This research activity aims to study how self-adaptive security can enhance the resilience of such environments against evolving cyberthreats. The objective is to define adaptive response strategies that can continuously assess system conditions, promptly react to attacks, and preserve service continuity, thus enabling networks to withstand disruptions and recover efficiently. |
| Research objectives and methods | Next-generation networks, such as virtualized networks and edge-to-cloud infrastructures, are reshaping the nature of digital environments. Their distributed, programmable, and dynamic nature opens new opportunities but also increases the surface and complexity of cyberattacks. As a consequence, traditional manual security (re)configuration strategies are no longer feasible, as they require excessive time to mitigate detected attacks and are prone to misconfigurations. In such highly dynamic environments, security is no longer only a matter of preventing attacks, but also of ensuring that the network can continue operating, adapt under stress, and recover quickly from disruptions. The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of cybersecurity (e.g. IEEE S&P, ACM CCS, NDSS, ESORICS, IFIP SEC, DSN, ACM Transactions on Information and System Security, or IEEE Transactions on Secure and Dependable Computing), and applications (e.g. IEEE Transactions on Industrial Informatics or IEEE Transactions on Vehicular Technology). |
| Required skills | In order to successfully develop the proposed activity, the candidate should have a good background in cybersecurity (especially in network security), and good programming skills. Some knowledge of Artificial Intelligence algorithms and/or formal methods can be useful, but it is not required: the candidate can acquire this knowledge and related skills as part of the PhD Program, by exploiting specialized courses. |
Secure Agent: Security and privacy in Agentic AI | |
| Proposer | Marco Mellia, Nikhil Jha, Daniele Antonioli |
| Topics | Data science, Computer vision and AI |
| Group website | https://smartdata.polito.it/ https://www.eurecom.fr/en/research/networking-and-security-department |
| Summary of the proposal | Agentic AI introduces significant security and privacy risks due to its early-stage ecosystem and pervasive deployment. Emerging protocols (MCP, A2A, OpenAI tool calling) create new attack surfaces, e.g., prompt injection, schema poisoning, data exfiltration, spoofing. The PhD project will study the evolution of agentic AI systems and develop a secure-by-design approach, embedding security as a core design principle to prevent, detect, and mitigate vulnerabilities in agent-based architectures. |
| Research objectives and methods | Scenario and motivations: Research objectives: Research work plan: The research will be organised into several phases. - Phase 1 - Ecosystem and threat analysis. The first phase will focus on analysing the architecture of existing Agentic AI frameworks, orchestration platforms, and communication protocols. A comprehensive threat model will be developed, identifying attack vectors, trust boundaries, and possible security failures in agent-tool and agent-agent interactions. - Phase 2 - Vulnerability discovery and experimental evaluation. This phase will involve the design of experimental environments to systematically study vulnerabilities in agentic architectures. The candidate will analyse how different orchestration patterns and protocol designs affect the security posture of the system. Empirical studies will be conducted to evaluate the effectiveness of known and novel attacks. - Phase 3 - Secure-by-design architectures. Based on the identified vulnerabilities, the research will propose architectural principles and mechanisms to secure agentic AI systems. Possible directions include capability-based access control for tools, protocol-level safeguards, secure context management, isolation mechanisms for agents, and verification of tool schemas and interaction policies. - Phase 4 - Detection and mitigation mechanisms. The candidate will design automated techniques for the detection and mitigation of attacks in agent ecosystems. These may include runtime monitoring, anomaly detection for agent behaviour, defensive prompting strategies, secure tool wrappers, and validation layers for agent interactions. - Phase 5 - Evaluation and guidelines. The proposed solutions will be evaluated through experimental implementations and benchmarking scenarios. The outcome of the research will include best practices, reference architectures, and security guidelines for developers and organisations building agentic AI systems. 
The PhD candidate will be hosted for 18 months at EURECOM, working with Professor Antonioli, and 18 months at Polito, working with Professor Mellia and his group. Whatever the current location, the remote party will hold periodic (e.g., weekly) meetings with the hosting professor and the PhD student, so that the latter will effectively be co-supervised for the whole period of their scholarship. References: - Mohamed Amine Ferrag, Norbert Tihanyi, Djallel Hamouda, Leandros Maglaras, Abderrahmane Lakas, Merouane Debbah, From prompt injections to protocol exploits: Threats in LLM-powered AI agents workflows, ICT Express, Volume 12, Issue 2, 2026. - Datta, S., Nahin, S. K., Chhabra, A., & Mohapatra, P. (2025). Agentic AI security: Threats, defences, evaluation, and open challenges. arXiv preprint arXiv:2510.23883. - Zhan, Q., Liang, Z., Ying, Z., & Kang, D. (2024, August). InjecAgent: Benchmarking indirect prompt injections in tool-integrated large language model agents. In Findings of the Association for Computational Linguistics: ACL 2024 (pp. 10471-10506). List of possible venues for publications: - Security venues: IEEE Security & Privacy, USENIX Security, ACM Computer and Communications Security (CCS), IEEE Transactions on Information Forensics and Security, ACM Transactions on Privacy and Security. - AI / Data Science venues: NeurIPS, ICLR, ICML, AAAI Conference, ACM KDD, ECML/PKDD. - Systems / Web venues: ACM IMC, WWW/TheWebConf |
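As a minimal sketch of one secure-by-design mechanism named in the work plan (capability-based access control for tools), the snippet below guards an agent's tool calls with an allow-list plus argument-schema validation. Tool names, schemas, and agent names are all hypothetical.

```python
# Capability-based tool-call guard: a requested call is permitted only if
# the tool is on the calling agent's allow-list AND the arguments match the
# tool's declared schema. This blocks, e.g., an injected exfiltration
# attempt that asks a read-only agent to send mail. All names/schemas are
# hypothetical examples, not a real protocol.
TOOL_SCHEMAS = {
    "read_file": {"path": str},
    "send_mail": {"to": str, "body": str},
}
AGENT_CAPABILITIES = {"summarizer_agent": {"read_file"}}  # no send_mail

def guard_tool_call(agent, tool, args):
    """Return True iff the call is permitted and well-formed."""
    if tool not in AGENT_CAPABILITIES.get(agent, set()):
        return False  # capability violation
    schema = TOOL_SCHEMAS.get(tool, {})
    if set(args) != set(schema):
        return False  # unexpected/missing arguments (schema poisoning)
    return all(isinstance(args[k], schema[k]) for k in schema)

print(guard_tool_call("summarizer_agent", "read_file", {"path": "/tmp/x"}))
print(guard_tool_call("summarizer_agent", "send_mail",
                      {"to": "evil@example.com", "body": "secrets"}))
```

A real enforcement layer would sit between the orchestrator and the tool runtime (e.g., wrapping MCP tool invocations), but the principle is the same: the agent's capabilities, not the LLM's output, decide what executes.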
| Required skills | |
Physics-Informed Model-Based Reinforcement Learning for Multi-Scale Energy Systems | |
| Proposer | Lorenzo Bottaccioli, Edoardo Patti, Enrico Macii |
| Topics | Data science, Computer vision and AI, Controls and system engineering |
| Group website | https://eda.polito.it/ |
| Summary of the proposal | This research develops model-based and physics-informed Reinforcement Learning (RL) for multi-scale energy systems. By embedding physical models and constraints into learning and control, it aims to improve efficiency, robustness, safety, and generalisation in Multi-Energy Systems (MES), from components to districts, enabling scalable and reliable energy management. |
| Research objectives and methods | Reinforcement Learning (RL) is a promising approach for optimizing complex energy systems, but standard model-free methods often suffer from limited sample efficiency, poor generalization, and lack of physical consistency. This research focuses on model-based and physics-informed RL for Multi-Energy Systems (MES), where learning is guided by system dynamics and domain knowledge. Modern MES integrate renewable energy sources, distributed generation, storage systems, and flexible loads such as HVAC units, resulting in highly dynamic, stochastic, and multi-scale environments. Traditional control approaches (e.g., PID) are often inadequate in handling these complexities, while purely data-driven RL may lead to unsafe or physically inconsistent solutions. This project addresses these limitations by combining model-based RL, Physics-Informed Neural Networks (PINNs), and hybrid control strategies. Research Objectives: - Develop model-based RL algorithms that exploit both learned and physics-based models to improve data efficiency and control stability. - Integrate PINNs to encode physical laws (e.g., thermal dynamics, power flow constraints) into learning and system identification. - Design hybrid control architectures combining RL with classical approaches (e.g., MPC, PID) to ensure safety and robustness. - Extend the framework to Multi-Agent RL (MARL) for coordinated control in energy communities and district-level systems. - Assess performance in terms of energy efficiency, operational costs, robustness, and constraint satisfaction. Research Plan: Year 2: - Extend the framework to household-level energy management, including demand response and flexibility modelling. - Develop multi-agent and hierarchical RL architectures for coordinated control. - Integrate PINNs into both system modelling and policy learning. - Explore hybrid RL-MPC/PID strategies to improve reliability and interpretability. 
Year 3: - Validate the proposed methods at the district scale using co-simulation platforms. - Analyze robustness under uncertainty (renewable variability, demand fluctuations, market signals). - Collaborate with industrial partners for validation in realistic scenarios. - Benchmark against model-free RL and traditional control approaches. Starting Point and Collaborations: Expected Outcomes: - Novel model-based and physics-informed RL methods tailored to MES. - Improved reliability, interpretability, and constraint compliance compared to purely data-driven approaches. - Scalable solutions for multi-scale and multi-agent energy management. - Contributions to hybrid AI-control methodologies for energy systems. Publication Venues: - Journals: IEEE Transactions on Smart Grid; IEEE Transactions on Sustainable Computing; Energy and AI; Engineering Applications of Artificial Intelligence. - Conferences: NeurIPS; ICML; ACM e-Energy; IEEE INDIN; IEEE SEST. |
| Required skills | - Strong programming skills (Python, ML frameworks). - Background in control systems and/or energy systems. - Knowledge of machine learning, preferably RL and neural networks. - Familiarity with system modelling, simulation tools, and optimisation. - Interest in model-based and physics-informed approaches. |
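The physics-informed idea in the proposal above can be illustrated with a minimal sketch. The first-order RC thermal model, the parameter values, and the penalty weights below are illustrative assumptions, not the project's actual formulation: a physics model supplies a consistency term that penalises an RL agent for physically implausible temperature predictions.

```python
# Minimal sketch of physics-informed reward shaping for thermal control.
# R, C, lam, mu and the model structure are illustrative assumptions.

def thermal_step(T_in, T_out, q_hvac, R=2.0, C=10.0, dt=0.25):
    """First-order RC building model: C * dT/dt = (T_out - T_in) / R + q_hvac."""
    dT = ((T_out - T_in) / R + q_hvac) / C
    return T_in + dT * dt

def physics_informed_reward(T_pred, T_out, T_prev, q_hvac, energy, lam=1.0, mu=0.1):
    """Negative energy cost plus a penalty on the residual between the agent's
    predicted temperature and the physics model's prediction."""
    residual = (T_pred - thermal_step(T_prev, T_out, q_hvac)) ** 2
    return -mu * energy - lam * residual
```

A reward shaped this way discourages policies whose learned dynamics drift away from the embedded physical model, which is one simple route to the constraint compliance the proposal targets.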
OS-Driven Management of Aging Errors for Sustainable Embedded Chips | |
| Proposer | Alessandro Savino, Stefano Di Carlo |
| Topics | Computer architectures and Computer aided design |
| Group website | |
| Summary of the proposal | The PhD targets sustainable ICT by reusing aging or discarded integrated circuits in embedded systems, with a focus on SRAM and firmware. It will experimentally characterize aging-induced errors, build predictive models, and design OS-level strategies that either compensate for performance losses or exploit errors for approximate computing, validated on ML workloads and satellite control systems. |
| Research objectives and methods | This proposal aims to transform hardware aging from a reliability threat into a system-level opportunity for sustainable computing in embedded platforms. Starting from climate and sustainability goals, it focuses on extending the useful lifetime of integrated circuits by reusing aging or discarded chips, thereby reducing the environmental costs of semiconductor manufacturing. The work targets systems based on SRAM and modern embedded processors (e.g., RISC-V), and integrates hardware characterization with firmware and operating-system techniques. Research objectives Outline of the research work plan Phase 1 - Experimental characterization (Months 1-12): The candidate will study the effects of aging on embedded platforms, focusing on RISC-V-based systems and SRAM memories. Accelerated-aging campaigns (thermal cycling, voltage variations, high-frequency operation) will be used to induce degradation and to collect functional and low-level data. This phase will produce datasets and a reusable experimental framework describing bit-flip rates, positional and temporal error patterns, and their evolution. Phase 2 - Modeling and mitigation prototypes (Months 13-24): Building on Phase 1, the candidate will derive mathematical and simulation models of aging-induced errors and their impact on timing, reliability, and energy. These models will guide the design of OS and firmware mechanisms such as frequency- and voltage-aware schedulers, adaptive parameter tuning, error-aware memory management, and approximate computing policies that classify and route tolerant workloads to degraded resources. Prototype implementations will be integrated into an embedded OS stack to assess feasibility and overhead. Phase 3 - Validation and integration (Months 25-36): The models and OS-level strategies will be validated in realistic scenarios.
On the ML side, representative inference workloads will be used to quantify trade-offs between error tolerance, energy savings, and performance. On the satellite-control side, representative control and monitoring tasks will evaluate how parameter tuning and selective approximation can preserve safety and functional reliability as the satellite ages. The final outcome will be an integrated hardware-firmware-software framework for aging-aware approximate computing, together with quantitative evaluations in terms of energy, Energy-Delay Product, Quality of Result, and system lifetime. Industrial collaborations, projects, and context The proposal contributes to the European agenda on sustainable digital systems (e.g., "Beyond Green ICT: building a truly sustainable digital future"). The candidate will benefit from active industrial collaborations in aerospace and automotive, as well as with semiconductor companies, which offer realistic use cases, platforms, and potential datasets. These collaborations support validation on industrial-grade systems and improve the PhD graduate's employability in both academia and industry. Possible venues for publications |
| Required skills | The candidate should have a solid background in computer engineering, with knowledge of computer architecture, embedded systems, and operating systems. Experience with C/C++ programming, scripting, and Linux-based development is important, while familiarity with RISC-V platforms, hardware reliability, fault injection, or approximate computing is a strong plus. Teamwork, autonomy, and good English communication are required. |
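As a toy illustration of the Phase 1 fault data described above, the sketch below emulates aging-induced SRAM bit flips at a given bit-error rate. The uniform, independent-bit fault model is a simplifying assumption; real aging exhibits the positional and temporal patterns the work plan sets out to characterize.

```python
import random

def inject_bit_flips(words, ber, width=32, seed=0):
    """Flip each bit of each word independently with probability `ber`
    (bit-error rate), as a crude stand-in for aging-induced SRAM errors."""
    rng = random.Random(seed)  # seeded for reproducible fault campaigns
    faulty = []
    for w in words:
        for b in range(width):
            if rng.random() < ber:
                w ^= 1 << b  # flip bit b
        faulty.append(w)
    return faulty
```

Running a workload against memories corrupted this way gives a first, controllable estimate of error tolerance before moving to the accelerated-aging measurements.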
Agentic AI for vRAN in Cooperative, Connected and Automated Mobility | |
| Proposer | Claudio Ettore Casetti, Marco Rapelli |
| Topics | Data science, Computer vision and AI, Computer architectures and Computer aided design, Software engineering and Mobile computing |
| Group website | |
| Summary of the proposal | This PhD will investigate agentic AI for vRAN architectures supporting connected and cooperative automated mobility, with particular attention to teleoperated driving, automated mobility, and advanced URLLC/eMBB tradeoffs. The research will focus on how distributed AI agents can perceive context, reason on network and service conditions, and autonomously coordinate cooperative actions such as perception sharing, maneuver negotiation, and selective task exchange among vehicles and infrastructure. |
| Research objectives and methods | Future Connected, Cooperative and Automated Mobility (CCAM) systems will rely on communication infrastructures that are not only high performing, but also adaptive, context-aware, and capable of autonomous decision making. In this context, virtualized Radio Access Networks (vRANs) offer a flexible platform where networking functions can be dynamically deployed, scaled, and optimized at the edge or in the cloud. This PhD proposal investigates how agentic AI can be embedded into vRAN architectures to support CCAM services, with particular attention to automated mobility, teleoperated driving, cooperative perception, maneuver coordination, and other latency- and reliability-sensitive applications. The core idea is to move beyond static or purely reactive network control and design AI agents that can perceive network and service conditions, reason about system objectives, plan actions, and coordinate with other agents across vehicles, roadside infrastructure, and network nodes. Rather than treating the communication system as a passive transport layer, the research will explore how the vRAN can become an active participant in the orchestration of cooperative mobility services. In particular, the PhD will study how agentic AI can support selective information exchange, fusion and prioritization of data, adaptive allocation of compute resources, and coordinated decisions across distributed entities. 
Research objectives: - define a reference architecture for agentic AI in vRAN-enabled CCAM systems, identifying the role of distributed agents, their observation space, decision variables, coordination mechanisms, and interaction with RAN control and orchestration functions. - develop AI-driven strategies for selective and goal-oriented communication, so that only the information that is relevant for a given cooperative task, such as perception sharing or maneuver negotiation, is exchanged with the required timeliness and reliability. - design resource management and orchestration mechanisms that jointly optimize compute and service-level performance in dynamic mobility scenarios. - evaluate the impact of agentic AI on key CCAM metrics, including latency, reliability, scalability, network efficiency, and quality of service for cooperative applications. Outline of the research work plan A second phase will focus on system modeling and architecture design. This will include the definition of representative use cases, such as cooperative perception and coordinated maneuvers, and the mapping of service requirements onto vRAN functions and agent behaviors. A third phase will develop agentic AI mechanisms for distributed decision making in the vRAN. Candidate approaches may include multi-agent reinforcement learning, planning-based agents, hierarchical control, or hybrid AI methods combining learning and rule-based policies. Special attention will be given to selective task exchange, prioritization of control-relevant information, and interaction between network intelligence and application-level objectives. A fourth phase will address implementation and performance evaluation through simulation and, where possible, emulation or small-scale prototyping. The research may leverage digital-twin environments and integrated mobility-network simulators to assess the proposed framework under realistic traffic, wireless, and service conditions.
The final phase will consolidate the results, compare the proposed solutions against conventional vRAN and non-agentic baselines, and derive design guidelines for future AI-native CCAM infrastructures. List of possible venues for publications |
| Required skills | The candidate should have a strong background in telecommunications, computer engineering, or computer science, with solid knowledge of wireless/mobile networks and good programming skills. Familiarity with AI/ML, distributed systems, optimization, or multi-agent methods is desirable. Experience with simulation tools and strong analytical, research, and scientific writing skills will be important. |
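The selective, goal-oriented information exchange described above can be sketched with a toy value-of-information filter. The 2-D point representation and the distance threshold are illustrative assumptions: an agent transmits a detection only if no object already present in the cooperatively shared map makes it redundant.

```python
def select_messages(detections, shared_map, min_dist=5.0):
    """Keep only detections farther than `min_dist` (metres, squared
    comparison) from every object already in the shared cooperative map."""
    def is_novel(p):
        return all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 > min_dist ** 2
                   for q in shared_map)
    return [p for p in detections if is_novel(p)]
```

A real agent would weigh timeliness and reliability requirements as well, but even this trivial filter shows how per-message decisions can cut redundant perception-sharing traffic.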
Digital Trust Data Infrastructures for Certified AI: blockchain-based architectures and governance | |
| Proposer | Valentina Gatteschi, Claudio Schifanella |
| Topics | Cybersecurity, Data science, Computer vision and AI, Parallel and distributed systems, Quantum computing |
| Group website | https://informatica.unito.it/do/home.pl |
| Summary of the proposal | This PhD project investigates Digital Trust Data Infrastructures for Certified AI, combining blockchain-based architectures, governance mechanisms, and policy analysis. The research aims to design and validate DLT-enabled frameworks that improve provenance, accountability, auditability, and certification readiness of AI systems, while also studying the regulatory, organizational, and policy conditions required for their adoption in critical and multi-stakeholder environments. |
| Research objectives and methods | The growing adoption of Artificial Intelligence in critical and regulated domains raises important challenges related to trust, accountability, certification, and governance. AI systems increasingly rely on complex data pipelines, multiple stakeholders, and continuous model evolution, making it difficult to guarantee data provenance, model integrity, traceability of decisions, and reproducibility of system behaviour. These issues become even more relevant in cross-organizational settings, where trust cannot rely on a single actor but must be supported by robust digital infrastructures. This PhD project focuses on Digital Trust Data Infrastructures for Certified AI, investigating how Distributed Ledger Technologies (DLTs) and blockchain can support the creation of trustworthy, auditable, and certification-ready AI ecosystems. The core idea is that blockchain can act as a trust anchor for AI lifecycle governance by enabling tamper-evident records, verifiable provenance, distributed accountability, and shared governance mechanisms. The research will study hybrid architectures in which AI assets remain off-chain, while blockchain is used to notarize and verify evidence related to data, models, processes, and compliance-relevant events. Privacy-preserving technologies, such as zero-knowledge proofs and Trusted Execution Environments (TEEs), will also be explored. A first objective of the research is to define a reference framework for Digital Trust Data Infrastructures for Certified AI, identifying the main actors, trust assumptions, governance models, and evidence flows required to support trustworthy AI across organizational boundaries. 
A second objective is to design blockchain-based mechanisms for provenance, accountability, and auditability in AI pipelines, for example through notarization of lifecycle events, cryptographic linkage of datasets and models, decentralized identities, verifiable credentials, and smart-contract-based governance rules. A third objective is to investigate how these infrastructures can support certification and assurance processes for AI. In this context, the Ph.D. candidate will study how trustworthy evidence generated through DLT-based infrastructures can facilitate conformity assessment, third-party audits, assurance cases, and continuous compliance monitoring. This may lead to the definition of structured trust records and reusable design patterns for certification-ready AI systems. A fourth objective is the analysis of the policy and governance dimension of these infrastructures. The research will examine how technical trust mechanisms interact with emerging regulatory and policy frameworks for trustworthy and high-risk AI, data governance, digital identity, and digital traceability. The goal is not only to design technical solutions, but also to understand the institutional and organizational conditions required for their adoption and recognition within certification and compliance processes. The research work plan of the three-year Ph.D. program is the following: First year: the candidate will analyse the state of the art on Digital Trust Data Infrastructures, blockchain, and AI lifecycle governance, focusing on provenance, auditability, and certification. He/she will also deepen competences in DLTs, cryptographic techniques, and AI assurance. Based on this, a reference framework for Certified AI infrastructures will be defined. 
Second year: the candidate will design and develop blockchain-based mechanisms for provenance, accountability, and auditability in AI pipelines, including hybrid architectures, notarization of lifecycle events, and decentralized identity solutions. The candidate will also implement and evaluate some prototypes. Third year: the candidate will validate the proposed infrastructures for AI certification, assessing their role in the context of audits and compliance. The work will be tested through case studies, and the candidate will design guidelines and policy recommendations. Methodologically, the PhD will combine literature review, architectural design, prototype development, case-study validation, and policy analysis. Application domains may include cognitive cities, smart mobility, energy, healthcare, industrial data sharing, or other critical digital ecosystems where traceability and trust are essential. The program is suitable for candidates interested in working at the intersection of blockchain, distributed systems, AI assurance, and technology policy, with the ambition to contribute both to advanced technical research and to the broader debate on trustworthy and certifiable AI. Target publications may include: IEEE Transactions on Services Computing; IEEE Access; Elsevier Blockchain: Research and Applications; Elsevier Expert Systems With Applications; Elsevier Future Generation Computer Systems; IEEE International Conference on Decentralized Applications and Infrastructures; IEEE International Conference on Blockchain; IEEE Computers, Software, and Applications Conference |
| Required skills | The ideal candidate has a strong background in computer science or computer engineering, with expertise in blockchain and DLT-based trusted data infrastructures. Knowledge of distributed systems, cryptography, and software design is required. Familiarity with AI, data provenance, auditability, compliance, and regulatory aspects is preferred. Interdisciplinary experience and strong analytical, programming, and scientific writing skills are also valued. |
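The notarization of lifecycle events described above can be sketched as a hash chain. Field names and event types below are illustrative assumptions; the point is that each record commits to both its payload and its predecessor, giving tamper-evident evidence that could later be anchored on a ledger.

```python
import hashlib
import json

def notarize(event, prev_hash="0" * 64):
    """Return a SHA-256 digest committing to one AI-lifecycle event and to
    the previous record, forming a tamper-evident chain (toy sketch)."""
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical chain: a dataset version followed by a training run.
h_data = notarize({"type": "dataset", "id": "d1", "digest": "abc"})
h_train = notarize({"type": "training", "model": "m1"}, prev_hash=h_data)
```

Keeping the AI assets off-chain and notarizing only these digests matches the hybrid architecture the proposal describes: verification needs the chain, not the data itself.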
Evolutionary Artificial Intelligence | |
| Proposer | Giovanni Squillero, Alberto Tonda |
| Topics | Data science, Computer vision and AI, Computer architectures and Computer aided design |
| Group website | https://www.cad.polito.it/ |
| Summary of the proposal | Since the 1990s, EC techniques have been applied in various practical domains, but since the 2010s they have been largely supplanted by newer ML/DL approaches. In recent years, existing sub-symbolic approaches have proven inherently incapable of solving tasks that demand highly structured solutions, and EC, with its specific characteristics, is experiencing a resurgence of interest. The proposal aims to leverage EC to address the fundamental limitations of current AI approaches in challenging domains. |
| Research objectives and methods | Evolutionary Computation (EC), a subfield of artificial intelligence, draws its inspiration from biological evolution. Since the 1990s, EC techniques have been applied in various practical domains, although sometimes under different names; in the past decades, their usage has been gradually reduced by newer machine learning and deep learning techniques. However, existing sub-symbolic approaches have proven inherently incapable of solving tasks demanding highly structured solutions, such as complex program synthesis problems or the ARC-AGI benchmarks. Evolutionary algorithms, with their unique characteristics, are experiencing a resurgence of interest (e.g., see "AlphaEvolve" by DeepMind). The objective of the proposal is to leverage EC to address the fundamental limitations of current AI approaches in challenging domains. The long-term objective of this research is to overcome many limitations of sub-symbolic AI by developing novel frameworks based on Evolutionary Computation (EC). In more detail, the research objectives include: The study of more powerful EC tools using alternative encodings; more specifically, graph-based representations have led to novel applications of EC in circuit design, cryptography, image analysis, and other fields. The analysis of the "divergence of character", the single most impairing problem in the field of EC; the research activity would tackle "diversity promotion", that is, either "increasing" or "preserving" diversity in an EC population, both from a practical and a theoretical point of view. It will also include the related problems of defining and measuring diversity. The task of automatically generating executable programs from high-level specifications, such as logical constraints, input-output examples, or natural language descriptions. The candidate will work on an open-source Python project, currently under active development.
The study of Core Knowledge theory, aiming at defining the "building blocks" to be used by the evolutionary algorithm to create programs, and bridging the gap between sub-symbolic and symbolic AI. Target Publications: Journals with impact factors - ASOC - Applied Soft Computing. Top conferences - ACM GECCO - Genetic and Evolutionary Computation Conference. Notes: The tutors regularly present tutorials on Diversity Preservation at top conferences in the field, such as GECCO, PPSN, and CEC. Additionally, they are involved in the organization of a workshop focused on graph-based representations for EAs. Moreover, the research group is in contact with industries that actively consider exploiting evolutionary machine learning to enhance their biological models, for instance, KRD (Czech Republic), Teregroup (Italy), and BioVal Process (France). The research group also has a long record of successful applications of evolutionary algorithms in several different domains. For instance, the ongoing collaboration with STMicroelectronics on the test and validation of programmable devices exploits evolutionary algorithms and would benefit from the research. |
| Required skills | Proficiency in Python (including a deep understanding of object-oriented principles and design patterns, and handling of parallelism). Preferred: experience with metaheuristics; experience with optimization algorithms |
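The "defining and measuring diversity" problem mentioned above admits many metrics; a minimal, concrete sketch (one choice among many, assuming a population of equal-length bitstrings) is the mean pairwise Hamming distance:

```python
def hamming_diversity(population):
    """Mean pairwise Hamming distance over a population of equal-length
    bitstrings: one simple way to quantify the diversity that
    diversity-promotion schemes try to increase or preserve."""
    n = len(population)
    total = sum(sum(a != b for a, b in zip(x, y))
                for i, x in enumerate(population)
                for y in population[i + 1:])
    return 2 * total / (n * (n - 1))  # average over n*(n-1)/2 pairs
```

Tracking such a metric across generations makes the "divergence of character" phenomenon observable: premature convergence shows up as the metric collapsing toward zero.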
Orchestrating a Dynamically Federated Edge-to-Cloud Continuum | |
| Proposer | Fulvio Risso, Carla Chiasserini |
| Topics | Parallel and distributed systems, Quantum computing |
| Group website | https://netgroup.polito.it https://liqo.io |
| Summary of the proposal | Future cloud computing systems will span multiple resources, including those at the edge, either on the telco side or on the customer's premises. This research activity tackles the following problems: (1) define algorithms for scalable, infrastructure-wide and multi-provider orchestration; (2) offer enhanced resiliency and capability of the software infrastructure to survive and evolve also in case of network outages or planned disconnections. |
| Research objectives and methods | Research Objectives |
| Required skills | The ideal candidate has good knowledge and experience in computing architectures, cloud computing and networking. Availability for spending periods abroad would be preferred for a more profitable investigation of the research topic. |
AI for Digital Design Automation: LLM-Based SoC and IP Integration and Optimization | |
| Proposer | Enrico Macii, Andrea Calimera, Valentino Peluso |
| Topics | Computer architectures and Computer aided design |
| Group website | eda.polito.it www.st.com |
| Summary of the proposal | The increasing complexity of modern System-on-Chips (SoCs) makes their integration a time-consuming and partially manual process. This project investigates Generative AI and Large Language Models to support and automate SoC assembly and IP optimization. The goal is to develop AI methodologies to interpret specifications, support integration and connectivity, and enable efficient exploration of design variants, thereby improving productivity and turnaround time in industrial design flows. |
| Research objectives and methods | Context and Motivation. Research goals. |
| Required skills | - Strong programming skills (Python, C/C++) and experience with hardware/software development. - Basic knowledge of machine learning techniques (experience with LLMs is a plus). - Familiarity with digital IC design and computer architectures. - Knowledge of digital hardware design flows, such as RTL synthesis and possibly HLS (physical synthesis is a plus). |
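One small, mechanical slice of the SoC-assembly problem sketched above is emitting correct instantiation glue. The snippet below generates a Verilog named-port instantiation from a port-connection dictionary; all module, instance, and signal names are hypothetical, and a real LLM-assisted flow would pair such generation with a connectivity checker.

```python
def instantiate_ip(module, instance, port_map):
    """Emit a Verilog named-port instantiation from a {port: signal} map,
    the kind of connectivity boilerplate an AI-assisted integration flow
    would produce and then validate against the IP's interface spec."""
    conns = ",\n".join(f"    .{port}({sig})"
                       for port, sig in sorted(port_map.items()))
    return f"{module} {instance} (\n{conns}\n);"
```

Example: `instantiate_ip("uart_core", "u_uart0", {"clk": "sys_clk", "rx_i": "pad_rx"})` yields a complete instantiation block ready to drop into a top-level netlist.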
RISC-V Based Trusted Execution Environments for Software Defined Vehicles | |
| Proposer | Alessandro Savino, Stefano Di Carlo |
| Topics | Computer architectures and Computer aided design, Cybersecurity |
| Group website | |
| Summary of the proposal | This research addresses security and trust in Software-Defined Vehicles (SDVs) by enhancing hardware-based Trusted Execution Environments (TEEs) using RISC-V architectures. The goal is to design and validate secure, open, and verifiable execution platforms that ensure integrity, isolation, and trustworthiness of onboard services in next-generation automotive systems. |
| Research objectives and methods | The evolution of Software-Defined Vehicles (SDVs) is transforming automotive systems into highly connected, programmable platforms where critical functionalities, ranging from infotainment to advanced driver assistance, are deployed as software services. While this shift enables unprecedented flexibility and innovation, it also significantly enlarges the attack surface, raising critical concerns about security, safety, and trustworthiness. In this context, Trusted Execution Environments (TEEs) are a key technological enabler for ensuring the secure, isolated execution of sensitive workloads. However, current TEE solutions are often proprietary, limited in flexibility, and not fully aligned with the stringent requirements of automotive systems. At the same time, the emergence of the RISC-V open instruction set architecture offers a unique opportunity to rethink hardware-based security from the ground up, enabling transparent, customizable, and verifiable trust anchors. This research aims to bridge these gaps by designing and developing a novel hardware-based TEE tailored for SDVs, leveraging RISC-V's openness and extensibility. The central vision is to create a secure execution platform that guarantees integrity, isolation, and attestation of onboard services while remaining adaptable to evolving automotive requirements. Research objectives The main objective is to design a RISC-V-based TEE architecture that enhances security and trust in SDVs. This includes: (i) defining hardware mechanisms for strong isolation between trusted and untrusted domains, (ii) enabling secure boot and cryptographic attestation, (iii) supporting the trustworthy deployment of safety-critical and AI-driven services, and (iv) ensuring compatibility with automotive constraints such as real-time performance and functional safety.
Research work plan The research will begin with an in-depth analysis of the state of the art in automotive cybersecurity and TEEs, alongside the definition of a comprehensive threat model tailored to SDVs. Building on this foundation, a novel RISC-V-based TEE architecture will be designed, exploiting hardware extensions to enforce isolation and security guarantees. A prototype implementation will be developed using FPGA platforms or existing RISC-V cores, enabling practical validation of the proposed concepts. Core security features, such as secure boot, remote attestation, and runtime monitoring, will be implemented and integrated into the system. The proposed solution will then be evaluated for performance, scalability, and resistance to realistic attack scenarios. Finally, the research will demonstrate the integration of the TEE within representative SDV use cases, such as over-the-air updates and secure execution of ADAS or AI services, highlighting its practical impact. Possible venues for publications The outcomes of this research are expected to be disseminated in leading venues in hardware security, embedded systems, and automotive technologies, including conferences such as DAC, DATE, HOST, CCS, USENIX Security, and NDSS, as well as journals like IEEE TDSC, IEEE TCAD, ACM TECS, and IEEE Security & Privacy. Additional dissemination may target automotive-focused venues such as IEEE IV and ITSC. |
| Required skills | Strong background in computer architecture and embedded systems, knowledge of hardware security and Trusted Execution Environments, familiarity with RISC-V ISA, and experience in C/C++ and low-level programming. Skills in FPGA prototyping, cryptography basics, and operating systems are highly desirable. Analytical thinking and research motivation required. |
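The secure-boot and attestation chain mentioned above can be sketched as a PCR-style extend operation. This is a toy model under explicit assumptions (SHA-256, a zeroed initial register, hypothetical stage names), not a real RISC-V root of trust: each stage is hashed into a running measurement before it executes, so the final value attests the whole boot sequence.

```python
import hashlib

def extend(measurement, stage_image):
    """Measured-boot extend: new = SHA-256(old_measurement || SHA-256(stage))."""
    stage_digest = hashlib.sha256(stage_image).digest()
    return hashlib.sha256(measurement + stage_digest).digest()

# Hypothetical SDV boot sequence, measured stage by stage.
m = b"\x00" * 32  # initial root-of-trust register
for stage in [b"first-stage-bootloader", b"kernel", b"sdv-service"]:
    m = extend(m, stage)
# `m` would be signed by the TEE and reported to a remote verifier.
```

Because the extend operation is order-sensitive and one-way, any tampered or reordered stage yields a final measurement that no longer matches the verifier's reference value.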
Neuromorphic Hardware Development | |
| Proposer | Stefano Di Carlo, Alessandro Savino |
| Topics | Computer architectures and Computer aided design, Data science, Computer vision and AI |
| Group website | |
| Summary of the proposal | This research aims to design advanced neuromorphic accelerators with ultra-high energy efficiency by leveraging asynchronous circuit design, liquid neural network architectures, and brain-inspired mechanisms such as homeostasis. The goal is to develop scalable, low-power hardware platforms that enable adaptive and robust computation for next-generation intelligent systems. |
| Research objectives and methods | The growing demand for energy-efficient artificial intelligence is pushing beyond conventional von Neumann architectures toward brain-inspired computing paradigms. Neuromorphic systems, which emulate neural dynamics and event-driven processing, offer a promising path to drastically reduce power consumption while maintaining adaptive capabilities. However, current neuromorphic hardware often relies on simplified neuron models and largely synchronous designs, limiting their efficiency and biological realism. This research proposes to explore a new generation of neuromorphic accelerators that combine asynchronous design principles with advanced neural models and adaptive mechanisms inspired by biological systems. The central hypothesis is that integrating event-driven computation with richer neural dynamics can unlock significant gains in both energy efficiency and computational expressiveness. A key focus will be on asynchronous circuit implementations, which naturally align with the sparse and event-driven nature of neural activity. By eliminating global clocks and enabling computation only when needed, these architectures can significantly reduce energy consumption and improve scalability. In parallel, the research will investigate liquid neural network architectures, which offer continuous-time dynamics and greater adaptability than traditional spiking or layered models. These architectures are particularly suitable for processing temporal and uncertain data, making them relevant for real-world applications. Another innovative aspect of this work is the integration of homeostatic mechanisms directly into hardware. Inspired by biological neural systems, homeostasis allows networks to self-regulate their activity, improving stability, robustness, and long-term operation without external intervention. Research objectives Research work plan Possible venues for publications |
| Required skills | Strong background in digital/analog hardware design and computer architecture, knowledge of neuromorphic computing and neural models, and experience with HDL (Verilog/VHDL). Familiarity with asynchronous circuits, FPGA/ASIC design, and low-power techniques is highly desirable. Interest in bio-inspired systems and interdisciplinary research is essential. |
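The clockless, event-driven computation the proposal advocates can be illustrated with a minimal leaky integrate-and-fire neuron simulated purely at input events. Parameter values and time units below are illustrative assumptions: the membrane is updated only when a spike arrives, with the leak between events applied analytically rather than clock tick by clock tick.

```python
import math

def lif_events(spike_times, tau=20.0, w=0.6, v_th=1.0):
    """Event-driven leaky integrate-and-fire neuron: no global clock,
    the state advances analytically between input events, mirroring
    asynchronous neuromorphic operation. Returns output spike times."""
    v, t_last, out_spikes = 0.0, 0.0, []
    for t in spike_times:
        v *= math.exp(-(t - t_last) / tau)  # analytic decay since last event
        v += w                               # integrate the incoming spike
        if v >= v_th:
            out_spikes.append(t)
            v = 0.0                          # reset after firing
        t_last = t
    return out_spikes
```

With closely spaced inputs the membrane accumulates and fires; with widely spaced inputs it decays back before threshold, so energy is spent only where activity is, which is the efficiency argument for asynchronous hardware.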
Neuromorphic Training & Continuous Learning | |
| Proposer | Stefano Di Carlo, Alessandro Savino |
| Topics | Computer architectures and Computer aided design, Data science, Computer vision and AI |
| Group website | |
| Summary of the proposal | This research focuses on enabling on-chip learning in neuromorphic systems through hardware-implemented training and continuous adaptation. It explores local learning rules, multiple forms of synaptic plasticity, and non-layered architectures to achieve scalable, energy-efficient, and biologically plausible learning directly within neuromorphic hardware. |
| Research objectives and methods | Despite significant advances in neuromorphic computing, most existing systems still rely on offline training and simplified learning schemes, limiting their adaptability and biological realism. In contrast, natural intelligence emerges from continuous, local, and diverse learning processes occurring directly within neural circuits. This research aims to bring learning closer to the hardware by enabling fully embedded training and continuous adaptation in neuromorphic systems. The core vision is to design architectures that support local, online learning rules and multiple forms of synaptic plasticity, allowing systems to learn autonomously from streaming data without external supervision. A central aspect of this work is the implementation of local training mechanisms directly in hardware, eliminating the need for global error propagation and reducing energy and communication overhead. This includes exploring spike-based and event-driven learning rules that can be efficiently mapped onto neuromorphic substrates. In addition, the research will investigate the coexistence of multiple plasticity mechanisms, such as short-term, long-term, and structural plasticity, within the same hardware system. This multi-scale adaptability is key to achieving robust and flexible learning behaviors. Moving beyond traditional layer-based architectures, the project will explore more general network topologies inspired by biological neural systems, with dynamic, recurrent, and heterogeneous connectivity. Such approaches can enable richer dynamics and improved generalization. Research objectives Research work plan Possible venues for publications |
| Required skills | Background in machine learning and neuromorphic computing, with knowledge of learning algorithms and neural plasticity. Experience in hardware design (HDL, FPGA) and embedded systems is desirable. Familiarity with spiking neural networks and event-driven systems is a plus. Strong analytical and interdisciplinary research skills are required. |
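The local, spike-based learning rules this proposal targets can be illustrated with a minimal pair-based STDP (spike-timing-dependent plasticity) update. This is only an executable sketch of the general idea, not the proposal's method; the learning rates and time constants below are arbitrary illustrative choices.

```python
import math

# Minimal pair-based STDP sketch. A synapse is potentiated when the
# presynaptic spike precedes the postsynaptic one (causal pairing) and
# depressed otherwise. All constants are illustrative, not from the proposal.
A_PLUS, A_MINUS = 0.05, 0.06       # learning rates (potentiation/depression)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # exponential time constants (ms)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post before pre -> depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)

def apply_stdp(w: float, pre_spikes, post_spikes,
               w_min: float = 0.0, w_max: float = 1.0) -> float:
    """Accumulate pairwise updates and clip the weight to [w_min, w_max].

    The update uses only locally available information (the spike times of
    the two neurons the synapse connects), which is what makes rules of
    this family attractive for on-chip learning without global error
    propagation.
    """
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_dw(t_pre, t_post)
    return min(max(w, w_min), w_max)
```

Because the rule is purely local, each synapse circuit on a neuromorphic substrate can evaluate it independently, which is the property the proposal's "local training mechanisms directly in hardware" refers to.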
Trustworthy and Safe AI under Hardware Faults
| Proposer | Stefano Di Carlo, Alessandro Savino |
| Topics | Computer architectures and Computer aided design, Data science, Computer vision and AI |
| Group website | |
| Summary of the proposal | This research investigates the robustness of machine learning models under hardware-induced faults, such as radiation effects. It aims to develop scalable fault emulation methods for large models, analyze their impact on task performance, and design resilience strategies inspired by brain recovery mechanisms, including selective retraining and error tolerance. |
| Research objectives and methods | As artificial intelligence systems are increasingly deployed in safety-critical and harsh environments, such as automotive, aerospace, and edge devices, their reliability under hardware faults becomes a fundamental concern. Phenomena such as radiation-induced soft errors, aging, and transient faults can alter computations at the hardware level, potentially leading to silent and unpredictable failures in machine learning models. Despite the growing importance of trustworthy AI, the interaction between hardware faults and model behavior remains poorly understood, particularly for large-scale neural networks. Most existing approaches focus either on hardware-level mitigation or on software robustness in isolation, leaving a critical gap in cross-layer understanding. This research aims to bridge this gap by systematically studying how machine learning models respond to hardware-induced faults and by developing new methodologies to enhance their resilience. The central idea is to combine large-scale fault emulation with biologically inspired recovery mechanisms, drawing inspiration from the human brain's ability to tolerate damage and adapt through selective relearning. A first key objective is to design scalable techniques to emulate hardware faults in large models, including bit flips, timing errors, and memory corruptions. These methods will enable controlled experimentation and detailed analysis of how faults propagate through neural computations and affect model outputs. Building on this, the research will investigate the sensitivity of different model architectures, layers, and parameters to faults, identifying critical components and failure modes. This analysis will provide insights into the relationship between model structure and fault vulnerability. The second major direction focuses on resilience.
Inspired by neurobiological processes, the project will explore selective retraining strategies that adapt only the affected regions of the model after faults occur, reducing computational cost while restoring performance. Additional mechanisms for error tolerance, redundancy, and adaptive reconfiguration will also be considered. Ultimately, the goal is to develop a new generation of AI systems that are inherently robust, capable of maintaining reliable operation even in the presence of hardware imperfections.
|
| Required skills | Strong background in machine learning and deep neural networks, with knowledge of model training and evaluation. Familiarity with hardware systems and fault mechanisms is desirable. Programming skills in Python and ML frameworks (PyTorch/TensorFlow) required. Experience in reliability, computer architecture, or neuromorphic/brain-inspired computing is a plus. |
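The bit-flip fault emulation mentioned in the objectives can be sketched at the level of a single parameter: reinterpret a float32 weight as its 32-bit pattern, XOR one bit, and reinterpret the result. This is a minimal stdlib-only illustration of the idea (the function name is my own, not the project's); at model scale the same operation would be applied across weight tensors.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Emulate a single-event upset: flip one bit of a float32 value.

    The float is packed to its IEEE 754 32-bit pattern, one bit is XORed,
    and the bits are unpacked as a float again. Bit 31 is the sign,
    bits 30-23 the exponent, bits 22-0 the mantissa.
    """
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit
    (faulty,) = struct.unpack("<f", struct.pack("<I", bits))
    return faulty
```

The example also hints at why sensitivity analysis matters: flipping a high exponent bit can change a weight by orders of magnitude (a candidate silent failure), while flipping a low mantissa bit is usually negligible.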
AI-driven Smart Systems for Sustainable Precision Agriculture
| Proposer | Renato Ferrero, Maurizio Rebaudengo, Filippo Gandino |
| Topics | Data science, Computer vision and AI |
| Group website | |
| Summary of the proposal | The research aims at developing AI-driven smart farming systems integrating IoT sensors, computer vision, and machine learning to enable precision agriculture, reduce water and chemical usage, and improve sustainability. It includes time-series models (RNN, GRU, LSTM) for soil moisture prediction, multispectral analysis for plant health, and RGB-NIR imaging for weed detection and automated harvesting. |
| Research objectives and methods | The research aims to design and develop intelligent agricultural systems that optimize resource use and support data-driven decision-making, thereby improving the productivity, sustainability, and resilience of farming systems. By integrating IoT sensing, computer vision, and advanced machine learning, the project targets three main objectives: The activities will be carried out in collaboration with PIC4SeR, PoliTo Interdepartmental Centre for Service Robotics. |
| Required skills | Strong background in machine learning and deep learning (RNN, LSTM, CNN), signal and image processing, and data analysis. Experience with Python and AI frameworks (e.g., TensorFlow/PyTorch). Knowledge of time-series modeling and computer vision. Familiarity with IoT or sensor data is a plus. Ability to work in interdisciplinary teams, with strong problem-solving, research, and communication skills. |
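The gated recurrent models cited for soil-moisture prediction (RNN, GRU, LSTM) all share the idea of learned gating over a persistent hidden state. A single GRU step in pure Python, scalar-sized for readability, sketches the mechanism; parameter values and the soil-moisture framing are illustrative, and real models would use vector states and trained weights.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x: float, h: float, p: dict) -> float:
    """One GRU step for scalar input and hidden state (illustrative sizes).

    Follows the convention also used by PyTorch's GRU, where the update
    gate z controls how much of the previous hidden state is kept:
        z  = sigmoid(Wz*x + Uz*h + bz)      update gate
        r  = sigmoid(Wr*x + Ur*h + br)      reset gate
        n  = tanh(Wn*x + Un*(r*h) + bn)     candidate state
        h' = z*h + (1 - z)*n
    """
    z = sigmoid(p["Wz"] * x + p["Uz"] * h + p["bz"])
    r = sigmoid(p["Wr"] * x + p["Ur"] * h + p["br"])
    n = math.tanh(p["Wn"] * x + p["Un"] * (r * h) + p["bn"])
    return z * h + (1.0 - z) * n

def run(series, p, h0: float = 0.0) -> float:
    """Fold a (e.g. soil-moisture) series through the cell; the state
    stays in (-1, 1) because each step is a convex combination of the
    previous state and a tanh-squashed candidate."""
    h = h0
    for x in series:
        h = gru_step(x, h, p)
    return h
```

In a forecasting setup, the final hidden state would feed a small output layer predicting the next moisture reading; irrigation decisions can then be scheduled against the prediction rather than the noisy raw sensor value.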
Integrating Certification-Oriented Security Assurance into RTL Verification of RISC-V CPUs
| Proposer | Matteo Sonza Reorda, Annachiara Ruospo, Michelangelo Grosso |
| Topics | Computer architectures and Computer aided design, Cybersecurity |
| Group website | https://cad.polito.it/ |
| Summary of the proposal | This research project proposes a certification-oriented security framework embedded in the RTL verification phase of RISC-V IPs. The goal is to remove exploitable design weaknesses before the product enters formal certification, by leveraging Large Language Models (LLMs) to translate vulnerability hypotheses into RTL-level verification scenarios and derive formal security invariants. |
| Research objectives and methods | On 8 December 2025, the European Common Criteria (EUCC) certification scheme under the EU Cybersecurity Act was amended. The update revised the state-of-the-art evaluation documents to reflect recent developments in hardware security, evolving threats, and industry practices for integrated circuits (ICs) such as microcontrollers. Under this certification scheme, ICs may obtain certification at either the "substantial" or the "high" assurance level, depending on the required resistance to attack. At the "substantial" assurance level, the EUCC scheme requires vulnerability analysis and penetration testing activities. In current industrial practice, a vulnerability analysis for certification is often run after architectural stabilization or near tape-out. In contrast, physical penetration testing is typically performed post-silicon and intensifies once silicon or FPGA prototypes are available. How can vulnerability analyses and penetration-test methodologies be shifted earlier and integrated into RTL verification flows for RISC-V CPUs? This represents a major challenge in the security field. The PhD thesis addresses these challenges by proposing a certification-oriented security framework embedded in the RTL verification phase of RISC-V IPs. The goal is to remove exploitable design weaknesses before the product enters formal certification, by leveraging Large Language Models (LLMs) to translate vulnerability hypotheses into RTL-level verification scenarios and derive formal security invariants. The PhD thesis is structured as follows:
- During the first year of the PhD, the project will develop a RISC-V-specific threat model and vulnerability taxonomy. The analysis will consider privilege modes (Machine, Supervisor, User), CSR access control, PMP and MMU configuration, trap and interrupt handling, and debug interface behavior. Assets and trust boundaries will be formally defined.
The result will be a structured mapping between potential state-of-the-art attack classes required by the updated EU Cybersecurity Act and CPU architectural elements.
- During the second year of the PhD, we will translate vulnerability hypotheses into verification tests. We will investigate LLM-based techniques to automatically translate structured vulnerability hypotheses into RTL-level verification stimuli, reducing manual effort and improving coverage of security-relevant corner cases. A dedicated library will be developed to emulate realistic attack attempts at the RTL level. Examples include illegal CSR accesses, privilege escalation attempts, and improper debug entry. While physical fault injection attacks such as clock, voltage, and electromagnetic glitches are typically evaluated post-silicon, their logical effects can be approximated at RTL. In addition, security-oriented RTL mutation techniques will be explored during the qualification phase to evaluate the robustness of the verification framework and the completeness of security checks.
- During the third year of the PhD, the project will define security invariants for RISC-V CPUs. These invariants will be encoded as formally verified properties and integrated into the RTL verification environment. LLMs will be explored to assist in the derivation of formal security invariants from high-level specifications. In conventional verification flows, assertions are primarily derived from ISA compliance and functional correctness (they ensure that the processor behaves as specified under normal operating conditions). In this project, security invariants are instead derived from the threat model. They are designed to prevent misuse, privilege escalation, and isolation bypass, not only functional errors. For example, for out-of-order CPUs, security invariants can also be formulated as software-hardware contracts for secure speculation.
This PhD thesis is conducted in industrial collaboration with STMicroelectronics. |
| Required skills | Strong background in digital design and computer architecture, with knowledge of RISC-V and RTL verification (SystemVerilog/UVM). Familiarity with hardware security concepts (threat modeling, side-channel, fault attacks) is desirable. |
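The distinction the proposal draws between functional assertions and threat-model-derived security invariants can be made concrete with a toy executable model of RISC-V CSR access control. In the actual project such invariants would be SystemVerilog properties checked against the RTL; the Python below is only an analogy, though the address-encoding rules it checks (bits [11:10] of a CSR address mark read-only CSRs, bits [9:8] the minimum privilege) are the real RISC-V privileged-spec conventions.

```python
# Toy model of privilege-checked CSR writes, illustrating a security
# invariant derived from a threat model (block privilege escalation via
# CSR access) rather than from functional correctness alone.
M_MODE, S_MODE, U_MODE = 3, 1, 0  # RISC-V privilege encodings

def csr_min_priv(csr_addr: int) -> int:
    """Bits [9:8] of a CSR address encode the lowest privilege level
    that may access it (RISC-V privileged spec convention)."""
    return (csr_addr >> 8) & 0x3

def csr_write_allowed(mode: int, csr_addr: int) -> bool:
    """Invariant: a CSR write may succeed only if the current privilege
    mode is at least the CSR's required privilege and the CSR is not
    read-only (address bits [11:10] == 0b11 mark read-only CSRs)."""
    read_only = ((csr_addr >> 10) & 0x3) == 0x3
    return (not read_only) and mode >= csr_min_priv(csr_addr)
```

An RTL-level attack-emulation stimulus in the second-year library would, for instance, drive a user-mode write to `mstatus` (address 0x300) and check that the invariant holds, i.e. that the write is rejected with an illegal-instruction trap.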
Generative AI for Intelligent Biofabrication in Osteochondral Regeneration
| Proposer | Stefano Di Carlo, Alessandro Savino, Roberta Bardini |
| Topics | Data science, Computer vision and AI, Life sciences |
| Group website | |
| Summary of the proposal | This PhD will develop interpretable generative AI methods to support biofabrication in osteochondral regeneration. The research will integrate multimodal in vitro and in silico data to improve data representation, generate biologically plausible synthetic samples, and support the optimisation of selected biofabrication parameters in controlled experimental settings. |
| Research objectives and methods | Osteochondral regeneration requires the coordination of multiple biological, mechanical, and process-related factors, making the design and optimisation of biofabrication protocols particularly challenging. Experimental data are often heterogeneous, limited in size, and difficult to integrate with computational models. This PhD project aims to develop interpretable generative AI methods to support the analysis and optimisation of biofabrication processes for osteochondral applications, with a specific focus on multimodal data integration and biologically grounded data augmentation. The research objectives are: The work plan is structured in four phases. In Phase 1, the candidate will analyse the available datasets and relevant domain knowledge, identify the most informative modalities and variables, and define a coherent computational representation of the biofabrication process. In Phase 2, the candidate will develop and compare generative models for multimodal data fusion and synthetic data generation, evaluating biological plausibility and utility for downstream tasks. In Phase 3, the research will focus on interpretable modelling strategies to relate latent representations and generated samples to relevant process and outcome variables. In Phase 4, the candidate will validate the proposed methods on selected case studies in osteochondral biofabrication, assessing whether generative modelling can support more efficient design-space exploration and data-driven optimisation of experimental settings. The expected outcome is a computational framework for interpretable generative modelling in osteochondral biofabrication, capable of improving the use of limited multimodal datasets and supporting data-driven protocol refinement. The project will provide methodological advances in AI for biofabrication while remaining closely connected to experimentally meaningful questions. |
| Required skills | Strong background in machine learning, data analysis, or computational modelling; programming skills in Python and common ML frameworks; interest in interpretable AI and biomedical applications. Familiarity with biofabrication, tissue engineering, mechanobiology, or multimodal biological data analysis is a plus. |
Multi-modal Data Intelligence and Digital Twins for Liver In Vitro Screenings
| Proposer | Stefano Di Carlo, Alessandro Savino, Roberta Bardini |
| Topics | Data science, Computer vision and AI, Life sciences |
| Group website | |
| Summary of the proposal | This PhD will develop multi-modal data intelligence methods and digital-twin-ready data infrastructures for liver in vitro screenings. The goal is to integrate imaging, omics, simulation, and process data into interoperable, FAIR, and interpretable workflows that support biomarker discovery, model calibration, and predictive knowledge extraction across healthy and cancerous liver models. |
| Research objectives and methods | High-throughput liver in vitro screenings generate heterogeneous datasets spanning imaging, omics, simulation outputs, and process measurements, yet these resources are often fragmented, difficult to reuse, and poorly connected to predictive modelling pipelines. This PhD will develop multi-modal data intelligence methodologies and digital-twin-oriented infrastructures to integrate, analyse, and govern such datasets, enabling semantic interoperability, transparent reuse, and predictive knowledge discovery. The research objectives are: The work plan is structured in four phases. Expected outcomes include an open multi-modal analytics platform for liver in vitro screenings, a knowledge layer linking datasets, models, metadata, and experimental outcomes, and a set of methods for trustworthy calibration and reuse of liver digital twins. The project will contribute both methodological advances in multi-modal biomedical data intelligence and practical tools for interoperable digital twin creation. Possible publication venues include Bioinformatics, GigaScience, Scientific Data, Journal of Biomedical Informatics, Patterns, Computer Methods and Programs in Biomedicine, Frontiers in Bioengineering and Biotechnology, and conferences such as ISMB, IEEE BIBM, MICCAI, and KDD workshops on health and biomedicine. |
| Required skills | Strong background in data science, biomedical informatics, or computational biology; programming skills in Python/R and data engineering tools; interest in multi-modal data integration, interpretable AI, and digital twins. Experience with omics, imaging data, metadata standards, ontologies, or visual analytics is an asset. |
Data-driven and Surrogate Modelling of In Vitro Tumor Vascularisation
| Proposer | Stefano Di Carlo, Alessandro Savino, Roberta Bardini |
| Topics | Data science, Computer vision and AI, Life sciences |
| Group website | |
| Summary of the proposal | This PhD will develop explainable surrogate and data-driven models to accelerate multiscale simulations of cancer vascularisation and microenvironmental dynamics. By integrating transcriptomic, experimental, and mechanistic information, the project will create fast, interpretable, and uncertainty-aware predictive tools bridging in vitro assays and in silico tumor modelling. |
| Research objectives and methods | Tumors are characterised by complex vascular and microenvironmental dynamics that are difficult to capture with mechanistic models alone, especially when multiscale simulations become too computationally demanding for extensive exploration, calibration, or integration with experimental pipelines. This PhD will develop explainable surrogate models and interoperable data-driven environments to compress complex tumor vascularisation dynamics into efficient predictive tools, accelerating simulation while preserving biological relevance and interpretability. The research objectives are: The work plan is organised in four phases. Expected outcomes include accelerated and interpretable simulation pipelines for tumor vascularisation, interoperable environments for integrating experimental and computational data, and hybrid surrogate models that support hypothesis generation, simulation-based analysis, and digital experimentation across related applications. Possible publication venues include PLOS Computational Biology, Bioinformatics, Briefings in Bioinformatics, npj Systems Biology and Applications, Computer Methods and Programs in Biomedicine, Journal of Theoretical Biology, Cancers, and conferences such as ISMB, RECOMB, MICCAI, and IEEE BIBM. |
| Required skills | Strong background in computational modelling, machine learning, or bioinformatics; programming skills in Python/R; interest in systems biology, multiscale modelling, surrogate modelling, and explainable AI. Experience with transcriptomics, network inference, uncertainty quantification, or mechanistic simulations is highly desirable. |
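The surrogate-modelling idea at the core of this proposal, i.e. replacing an expensive simulator with a cheap approximation fitted to a few simulator runs, can be sketched in a few lines. Both functions below are toy stand-ins of my own (a smooth dose-response in place of a multiscale vascularisation simulation, Lagrange interpolation in place of a learned surrogate); the real project would use data-driven models with uncertainty estimates.

```python
def expensive_simulator(dose: float) -> float:
    """Toy stand-in for a costly multiscale simulation: a smooth
    response of, say, vessel density to a treatment parameter."""
    return 1.0 - 0.8 * dose + 0.3 * dose * dose

def fit_surrogate(xs, ys):
    """Build a Lagrange interpolating polynomial through the sampled
    simulator runs. Evaluating it is essentially free compared to
    re-running the simulator, which is the point of a surrogate: it
    makes extensive design-space exploration and calibration tractable."""
    def surrogate(x: float) -> float:
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return surrogate

# Three simulator runs are enough to recover this quadratic response
# exactly; real tumor dynamics would need richer models and an explicit
# account of approximation error.
samples = [0.0, 0.5, 1.0]
cheap_model = fit_surrogate(samples, [expensive_simulator(x) for x in samples])
```

Once fitted, `cheap_model` can be queried thousands of times inside an optimisation or calibration loop at negligible cost, with occasional simulator runs reserved for validating the surrogate where it is least trusted.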
Interaction Design for Everyday Augmented Reality
| Proposer | Andrea Bottino, Francesco Strada |
| Topics | Computer graphics and Multimedia, Data science, Computer vision and AI, Software engineering and Mobile computing |
| Group website | https://cgvg.polito.it |
| Summary of the proposal | Augmented Reality (AR) is moving beyond specialized applications toward persistent, everyday use, yet interaction design principles for ambient, context-aware, and socially acceptable AR remain underdeveloped. This research investigates interaction patterns for everyday AR using passthrough devices as a testbed for ubiquitous spatial computing, treating accessibility as a core constraint and cultural heritage as a real-world domain for design and evaluation. |
| Research objectives and methods | Context and motivation
As AR technologies mature, they are moving beyond short-lived or task-specific applications toward more persistent forms of interaction embedded in everyday environments. This shift raises a core HCI challenge: how to design AR experiences that remain usable, comprehensible, and contextually appropriate when interaction is continuous, spatially distributed, and shaped by real-world settings rather than tightly controlled tasks. Current AR research remains largely fragmented across devices, applications, and narrow user groups. As a result, there is still a lack of well-grounded interaction design principles for everyday AR, particularly for experiences that must balance spatial anchoring, information layering, attentional demands, and multimodal interaction in ecologically valid settings. This limitation becomes even more significant when AR is considered not only for expert or early-adopter users, but also for heterogeneous publics, including older adults and users with different abilities and needs. This research addresses the problem from an interaction design perspective. Rather than treating accessibility, evaluation, and deployment as separate research objectives, the project investigates how robust interaction patterns for everyday AR can be designed, implemented, and empirically assessed in real-world conditions. Accessibility is treated as a core design requirement rather than an afterthought, while evaluation is framed as a means to understand whether the proposed interaction patterns remain effective and acceptable outside laboratory settings. Cultural heritage environments provide a suitable domain for this investigation. They require rich and situated information delivery, involve diverse audiences, and offer a realistic setting in which to study persistent AR interaction beyond training or highly specialized use cases.
In this sense, they function not merely as an application example, but as a meaningful testbed for the broader question of how everyday AR interaction should be designed.
Research objectives
The research will address the following questions: which interaction patterns are most suitable for supporting persistent and contextually appropriate everyday AR experiences? How can these interaction patterns incorporate accessibility and inclusive design requirements while remaining usable in real-world settings? How should these patterns be evaluated in ecologically valid settings to assess their effectiveness, usability, and acceptability beyond short-term laboratory studies?
Expected contributions include:
- A set of interaction design patterns for everyday AR, addressing persistent spatial interaction, information layering, attention management, and multimodal input in real-world settings.
- Accessibility-informed design guidelines for everyday AR, clarifying how inclusive requirements can be incorporated into spatial interaction design for heterogeneous users.
- An evaluation protocol for ecologically valid AR studies, supporting empirical assessment of interaction patterns in cultural heritage settings.
Work plan
Phase 1 Foundations (M1-M12): review spatial computing interaction, XR accessibility, and evaluation methods; define the design space of everyday AR (persistence, spatial anchoring, information layering, multimodal input); develop early prototypes and run informal pilot studies.
Phase 2 Interaction pattern development (M8-M24): design and iteratively refine accessible interaction patterns for everyday AR; develop an evaluation protocol for ecologically valid real-world AR studies; refine prototypes through formative testing and user feedback.
Phase 3 Validation and dissemination (M20-M36): deploy the approach in a primary cultural heritage setting; evaluate usability, contextual appropriateness, and user experience with heterogeneous participants; refine the framework and, if feasible, conduct a secondary validation study in another everyday AR context.
Publication venues
Journals: IEEE Transactions on Visualization and Computer Graphics, International Journal of Human-Computer Studies, Presence: Teleoperators and Virtual Environments, Universal Access in the Information Society. Conferences: ACM CHI, IEEE VR, ISMAR, ACM ASSETS, INTERACT.
Industrial collaboration and funding
This research aligns with industrial interest in XR platforms, spatial interfaces, and human-centered AR applications for public, cultural, and service-oriented environments. Potential collaboration opportunities include organizations working on cultural heritage technologies, accessible digital experiences, public-facing XR systems, and spatial user experience design. Possible funding opportunities include research and innovation programs supporting XR, accessibility, digital culture, and interactive technologies. |
| Required skills | The ideal candidate has a strong background in computer science with demonstrated interest in human-computer interaction and XR technologies. Key skills include: proficiency in XR development (Unity or Unreal), experience with user study design and qualitative/quantitative analysis, interest in accessibility and inclusive design, and strong scientific writing and communication skills. |
World Grounding for Virtual Humans in Extended Reality
| Proposer | Andrea Bottino, Francesco Strada |
| Topics | Computer graphics and Multimedia, Data science, Computer vision and AI, Software engineering and Mobile computing |
| Group website | https://cgvg.polito.it |
| Summary of the proposal | Virtual humans (VHs) in XR often react to virtual events but lack actionable awareness of the physical environment, limiting situated interaction. This research studies world grounding for VHs by combining XR scene information, passthrough sensing, and multimodal reasoning in a unified framework. It will evaluate how real-time grounding affects interaction appropriateness, perceived intelligence, and user experience. |
| Research objectives and methods | Context and Motivation
Recent advances in multimodal AI have made it increasingly plausible to design interactive agents that can reason over visual, linguistic, and spatial information. At the same time, XR technologies now provide access not only to rich virtual environments but also to real-time signals from the surrounding physical world through passthrough sensing, spatial mapping, and scene understanding. Despite these advances, most VHs in XR remain only weakly situated. Their behavior is usually driven by scripted events, predefined interaction logic, or limited scene-level awareness, and they rarely maintain a unified representation of the mixed-reality context in which they are deployed. As a result, they may respond coherently to events in the virtual scene while ignoring relevant aspects of the physical environment, such as nearby objects, changes in spatial configuration, or the presence of other people. This disconnect reduces behavioral appropriateness and weakens the sense that the VH is meaningfully present in the user's world. This research addresses that limitation by investigating world grounding for VHs in XR, i.e., the ability of a VH to maintain and use a contextual representation of spatial layout, relevant entities, and interaction-relevant changes across both physical and virtual space. Rather than treating perception, reasoning, and embodiment as isolated modules, the project explores how they can be organized into a multimodal interaction architecture for situated XR agents. The contribution is not intended as a generic claim that VHs can be made universally intelligent across all domains. Instead, the research focuses on a more precise and defensible question: how world grounding can be modeled, implemented, and evaluated for VHs operating in mixed-reality settings under real-time constraints.
This framing keeps the work focused on a specific scientific problem while still allowing the resulting framework to be reusable across application scenarios.
Research Objectives
The research will address the following questions: how can a VH in XR maintain a unified and actionable representation of the virtual scene and the surrounding physical environment? How can this form of world grounding support context-appropriate behavior beyond scripted interaction logic? To what extent can grounding-aware VHs remain viable under the latency and hardware constraints of real-time XR systems, and how does improved grounding affect interaction quality from the user's perspective?
Expected contributions include:
- A world-grounding framework for XR VHs, integrating information from the virtual scene and the surrounding physical environment into a unified contextual representation.
- A grounding-aware behavior generation architecture that maps mixed-reality context to context-appropriate virtual human responses.
- A latency-aware implementation strategy for real-time XR, identifying how grounding-related computation can be organized under the hardware and responsiveness constraints of immersive systems.
- Empirical evidence on the role of grounding in XR interaction, clarifying how world grounding affects behavioral appropriateness, perceived intelligence, and user experience.
Evaluation plan
Evaluation will combine technical assessment with user-centered studies. From a system perspective, it will measure responsiveness, latency, contextual update reliability, and consistency of grounding-dependent behavior.
From an interaction perspective, it will compare world-grounded VHs with baseline conditions featuring limited or scripted contextual awareness, examining perceived intelligence, behavioral appropriateness, contextual coherence, and user experience.
Work Plan
- Phase 1 (M1-M12): review related work; define the scope of world grounding; design a unified mixed-reality context representation; develop an initial prototype.
- Phase 2 (M10-M26): develop the grounding-aware interaction architecture; address latency, computational load, and deployment trade-offs; refine the prototype through technical testing and formative user feedback.
- Phase 3 (M22-M36): deploy the framework in a primary XR scenario; evaluate system-level performance; conduct comparative user studies.
Industrial Collaboration and Funding
This research aligns with industrial interest in XR platforms, virtual humans, and multimodal AI for immersive interactive systems. Potential collaboration opportunities include organizations developing training, assistance, or human-centered XR applications where contextual awareness and situated behavior are important. Possible funding opportunities include research and innovation programs supporting XR, artificial intelligence, and interactive digital technologies.
Publication Venues
Journals: IEEE Transactions on Visualization and Computer Graphics, IEEE Transactions on Affective Computing, International Journal of Human-Computer Studies, Computers & Graphics. Conferences: IEEE VR, ISMAR, ACM CHI. |
| Required skills | Strong foundations in AI and machine learning, with an interest in multimodal systems and interactive AI. Proficiency in Python and deep learning frameworks such as PyTorch is essential. Familiarity with XR development, computer vision, or 3D interactive systems is desirable. The candidate should be able to work on both system implementation and experimental evaluation in human-centered interactive settings. |
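The "unified and actionable representation" of virtual and physical entities that this proposal targets can be sketched as a small context store in which both sources share one coordinate frame and one query interface. The entity fields, staleness rule, and proximity query below are illustrative assumptions of mine, not the project's design.

```python
import math
from dataclasses import dataclass

@dataclass
class Entity:
    """One tracked entity in the mixed-reality context; `source` records
    whether it came from the virtual scene graph or passthrough sensing."""
    name: str
    source: str        # "virtual" or "physical"
    position: tuple    # (x, y, z) in a shared world frame
    timestamp: float   # time of last observation (s)

class ContextStore:
    """Minimal unified context: virtual and physical observations live in
    one store, so a VH's behavior logic can query them together instead
    of reasoning over the virtual scene alone."""
    def __init__(self, stale_after: float = 2.0):
        self.entities: dict[str, Entity] = {}
        self.stale_after = stale_after  # drop observations older than this

    def update(self, e: Entity) -> None:
        self.entities[e.name] = e

    def nearby(self, pos, radius: float, now: float):
        """Fresh entities within `radius` of `pos`, regardless of source.
        Discarding stale observations is one simple way to respect the
        real-time constraint that grounding must track a changing world."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return [e for e in self.entities.values()
                if now - e.timestamp <= self.stale_after
                and dist(e.position, pos) <= radius]
```

A grounding-aware behavior module could then select actions from a single `nearby(...)` result, so that a physical chair and a virtual portal are equally available when deciding, for example, where the VH should walk or point.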
AI-Assisted XR Production Pipelines: Authoring Tools and Workflow Support for Technical Artists
| Proposer | Andrea Bottino, Francesco Strada |
| Topics | Computer graphics and Multimedia, Data science, Computer vision and AI |
| Group website | https://cgvg.polito.it |
| Summary of the proposal | As AI-assisted generation and procedural methods become part of XR production workflows, technical artists must coordinate heterogeneous tools, verify outputs, maintain artistic control, and ensure real-time deployability. However, the authoring abstractions and pipeline support for this work remain underdefined. This research investigates controllable authoring abstractions, workflow tools, and evaluation methods for AI-assisted XR production. |
| Research objectives and methods | Context and Motivation
The technical artist bridges creative intent and technical implementation in XR production, ensuring interactive content remains both artistically controllable and technically deployable. This role is becoming increasingly central as AI-assisted generation, procedural authoring, and real-time constraints converge within fragmented pipelines. However, current tools rarely support the technical artist directly, leaving critical tasks of control, integration, validation, and deployment poorly connected across the workflow. This research addresses that gap from a human-centered systems perspective. Rather than proposing new generative models or neural rendering methods, it investigates how AI-assisted XR production workflows can be structured around the needs of technical artists through authoring abstractions and tool support, such as controllable generation stages, exposed parameters, validation checkpoints, intermediate asset states, and deployment-oriented feedback mechanisms that help technical artists direct, inspect, refine, and integrate heterogeneous outputs across XR pipelines.
Research Objectives
This research addresses three questions: how can AI-assisted XR production workflows be structured to support technical artists' control over heterogeneous generation, refinement, and integration processes? Which authoring abstractions and tooling mechanisms best support iterative refinement, validation, and deployment in real-time XR pipelines?
How should such workflows be evaluated in terms of controllability, iteration efficiency, cognitive effort, and deployable output quality?Expected contributions Expected contributions include:- A structured analysis of technical artist workflows in AI-assisted XR production, identifying recurrent bottlenecks related to control, interoperability, validation, and deployability.- A set of authoring abstractions and workflow patterns for integrating AI-assisted and procedural content into real-time XR production pipelines.- A prototype tooling layer that supports technical artists in directing, inspecting, refining, validating, and exporting XR content across heterogeneous tools.- An evaluation methodology for technical-artist-centred XR workflows, combining workflow-based performance indicators with human-centred measures of control, effort, and deployment readiness.Evaluation plan The evaluation will combine workflow analysis, prototype-based experimentation, and user-centered studies with technical artists or closely related XR production roles. From a systems perspective, it will examine interoperability, editability, validation, and deployment readiness through indicators such as time to integration, manual correction steps, iteration latency, export success, and consistency across authoring stages. From a human-centered perspective, it will assess perceived controllability, cognitive load, iteration efficiency, confidence in intermediate outputs, and adequacy for production use. Where feasible, baseline comparisons will be made against less structured workflows. 
The goal is not to benchmark generative models in isolation, but to assess whether AI-assisted XR production can become more controllable, efficient, and deployable when designed around the needs of technical artists.Work Plan- Phase 1 (M1?M12): review prior work on XR authoring, technical artist workflows, AI-assisted creative systems, and real-time pipelines; analyse current practices through interviews and workflow audits; define requirements for technical-artist-centred authoring abstractions and tooling.- Phase 2 (M10?M26): design authoring abstractions and workflow patterns for AI-assisted XR production; develop and iteratively refine a prototype tooling layer integrated with existing XR tools or engines.- Phase 3 (M22?M36): evaluate the approach in one primary XR production scenario against less structured workflows; analyse implications for the evolving role of the technical artist and consolidate results into reusable tooling components where appropriate.Industrial Collaboration and Funding This research aligns with industrial interest in XR production pipelines, authoring tools, real-time content deployment, and AI-assisted interactive systems. Potential collaborations include XR studios, companies developing authoring tools or production middleware, and organizations working on applied XR content for training, simulation, communication, or cultural experiences. Funding opportunities include programs supporting XR, artificial intelligence, creative technologies, and advanced interactive systems.Publication Venues Journals: ACM Transactions on Graphics, IEEE Transactions on Visualization and Computer Graphics, Computers & Graphics, International Journal of Human-Computer Studies. Conferences: IEEE VR, ISMAR, Eurographics, ACM CHI, ACM DIS. |
| Required skills | The ideal candidate should have a strong background in computer science or computer engineering, with interest in XR systems, interactive tools, and real-time production workflows. Relevant skills include proficiency in XR development environments such as Unity or Unreal, solid programming ability, and familiarity with user-centered evaluation methods. Interest in authoring tools, creative workflows, and AI-assisted interactive systems is highly desirable. |
Knowledge-grounded data generation for reliable and scalable AI agents | |
| Proposer | Daniele Apiletti, Simone Monaco, Tania Cerquitelli |
| Topics | Data science, Computer vision and AI |
| Group website | https://dbdmg.polito.it/ https://smartdata.polito.it/ |
| Summary of the proposal | Modern AI systems increasingly rely on large datasets, which are often scarce, costly, or sensitive in industrial and scientific domains. This research investigates how Large Language Models (LLMs), integrated into structured pipelines, can align their outputs with domain constraints, e.g., for generating high-quality synthetic data or enhancing knowledge-informed models and agentic systems. The goal is to improve reasoning, robustness, and generalization in specialized applications. |
| Research objectives and methods | Research Objectives. The goal of this research is to investigate the ability of Large Language Models (LLMs) to handle domain-specific constraints on their outputs. This is particularly important when using LLMs to generate synthetic data, especially in scientific and industrial domains where real-world data is scarce, costly, or constrained by privacy and regulatory limitations. A central premise of this work is that many real-world domains (e.g., engineering systems, legal reasoning, industrial logs) are strongly governed by structured knowledge, domain constraints, and theoretical principles. Traditional machine learning approaches often struggle in these contexts due to limited data availability and the difficulty of encoding prior knowledge directly into models. At the same time, recent advances in LLMs suggest that properly guided models can generate rich, structured, and semantically coherent data that may serve as a viable complement to real datasets. Within this context, the research aims to explore whether and how synthetic data, generated within carefully designed pipelines and aligned with domain knowledge, can improve the performance, robustness, and generalization capabilities of deep learning models and agentic systems. Particular emphasis is placed on understanding the conditions under which such improvements occur, as well as the limitations and risks associated with the use of synthetic data, such as the critical issue of model collapse. 
Main research objectives: - Methodology Development: developing methodologies for generating synthetic datasets and digital twins of real-world environments that align explicitly with domain-specific knowledge and constraints, incorporating structured representations such as ontologies, rules, physical laws, or policy constraints. - Benchmark Design: designing controlled benchmarks based on synthetic data, aimed at systematically evaluating the capabilities of models and agents. These benchmarks will focus on key aspects such as reasoning, robustness to perturbation, and generalization across scenarios. - Agentic Systems Support: exploring how synthetic data can support models that explicitly incorporate domain knowledge, as well as multi-agent systems navigating constrained environments. Synthetic data will simulate complex interactions, such as agent dialogues, sequences of tool use, and structured reasoning processes. - Model Distillation: investigating the feasibility of using large LLMs as synthetic data generators to support smaller, more efficient models, via in-context learning and dedicated fine-tuning stages, while maintaining strict constraint compliance. 
Outline. 1st year: the candidate will investigate the state of the art in LLMs, synthetic data generation, and constraint-aware AI, focusing on methods for incorporating domain knowledge (e.g., ontologies, rules, physical laws) into generative processes. The study will analyze controllable text generation, structured data synthesis, and digital twin construction, alongside techniques for evaluating factuality and adherence to constraints in LLM outputs. In parallel, relevant application domains, suitable datasets, and simulation environments will be identified. 2nd year: the candidate will develop novel methodologies for constraint-aware synthetic data generation using LLMs, ensuring alignment with domain knowledge and the controllability of generated outputs. This includes designing hybrid pipelines combining LLMs with symbolic or rule-based components, as well as automatic validation and filtering stages to enforce constraints. The research will explore generating complex synthetic scenarios (e.g., agent interactions, tool usage sequences). Additionally, the candidate will investigate using large LLMs as teachers to support smaller models, analyzing performance, robustness, and constraint adherence. 3rd year: the proposed methodologies will be extended to more complex, large-scale, multi-domain scenarios and industrial use cases. The focus will be on optimizing synthetic data generation pipelines for scalability, efficiency, and reliability, and on improving their integration with downstream models and agentic architectures. Extensive experimental evaluation will identify the conditions under which synthetic data provides measurable benefits, alongside its limitations and potential risks (e.g., bias propagation or constraint violations). Strategies for transferring knowledge from large LLMs to smaller, deployable models will be refined. 
Target Publications |
| Required skills | - Knowledge of basic computer science concepts, AI, machine learning, and Maths. - Programming skills in Python. - Knowledge of English, both written and spoken. - Capability of presenting the results of the work, scientific writing, and slide presentations. - Entrepreneurship, autonomous working, goal-oriented. - Flexibility and curiosity for different activities, from programming to teaching to presenting. - Capability of guiding undergraduate students for thesis projects. |
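The second-year plan above mentions hybrid pipelines that combine an LLM generator with rule-based validation and filtering stages to enforce constraints. The following is a minimal sketch of that generate-validate-filter loop, under stated assumptions: a seeded random generator stands in for the LLM call, and two hard-coded plausibility rules stand in for the domain knowledge; all names, fields, and thresholds are illustrative, not part of the proposal.

```python
# Hedged sketch (not the proposal's implementation): a minimal
# generate -> validate -> filter loop for constraint-aware synthetic
# data. `fake_llm_generate` is a stand-in for a real LLM call, and
# RULES is a stand-in for structured domain knowledge.

import random

def fake_llm_generate(n, seed=0):
    """Stand-in for an LLM producing n candidate records."""
    rng = random.Random(seed)
    return [{"temp_c": rng.uniform(-50, 150), "pressure_kpa": rng.uniform(0, 300)}
            for _ in range(n)]

# Domain constraints encoded as simple rules (e.g., physical plausibility).
RULES = [
    lambda r: -40.0 <= r["temp_c"] <= 60.0,        # plausible ambient temperature
    lambda r: 50.0 <= r["pressure_kpa"] <= 110.0,  # plausible atmospheric pressure
]

def validate(record):
    """A record is accepted only if every rule holds."""
    return all(rule(record) for rule in RULES)

def generate_constrained(target, batch=50, max_rounds=20):
    """Keep sampling batches until `target` records satisfy every rule."""
    accepted = []
    for round_id in range(max_rounds):
        for rec in fake_llm_generate(batch, seed=round_id):
            if validate(rec):
                accepted.append(rec)
            if len(accepted) == target:
                return accepted
    return accepted

data = generate_constrained(10)
```

In a real pipeline the validation stage would draw on ontologies, physical laws, or policy constraints rather than two inline lambdas, and rejected candidates could be fed back to the generator as corrective context.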
Knowledge-Informed Machine Learning for Data Science and Scientific AI | |
| Proposer | Daniele Apiletti, Simone Monaco, Paolo Garza |
| Topics | Data science, Computer vision and AI |
| Group website | https://dbdmg.polito.it/ https://smartdata.polito.it/ |
| Summary of the proposal | Traditional machine learning is mainly data-driven. However, besides the knowledge brought by the data, additional a priori knowledge of the modeled phenomena is often available (e.g., physical laws, domain expertise), leading to Knowledge-Informed Machine Learning, Theory-Guided Data Science, and ultimately to Scientific AI. The candidate will explore solutions leveraging advanced data science models and Agentic AI components to learn, reason about, and represent complex phenomena. |
| Research objectives and methods | Research Objectives. The research aims to define new methodologies for integrating scientific and domain knowledge within advanced data science models and agentic AI architectures, with a focus on advancing Scientific AI. To this end, the main research objectives include: Outline. Target publications: IEEE TKDE (Trans. on Knowledge and Data Engineering) |
| Required skills | - Knowledge of the basic computer science concepts, AI, machine learning, and Maths. - Programming skills in Python. - Knowledge of English, both written and spoken. - Capability of presenting the results of the work, scientific writing and slide presentations. - Entrepreneurship, autonomous working, goal-oriented. - Flexibility and curiosity for different activities, from programming to teaching to presenting to writing. - Capability of guiding undergraduate students for thesis projects. |
Agentic AI for the Cloud Continuum | |
| Proposer | Daniele Apiletti, Giovanni Malnati |
| Topics | Data science, Computer vision and AI, Parallel and distributed systems, Quantum computing |
| Group website | https://dbdmg.polito.it/ https://smartdata.polito.it/ |
| Summary of the proposal | As cloud continuum environments grow in complexity, orchestrating workloads across heterogeneous edge-to-cloud infrastructure becomes critical. This research proposes an innovative solution for optimizing job scheduling on federated Kubernetes clusters using Agentic AI based on Large Language Models (LLMs). By leveraging structured LLM outputs and introducing an "Explainability-as-code" pattern, the framework enables autonomous, transparent, and auditable workload placement. |
| Research objectives and methods | Research Objectives. This research aims to improve resource management and workload orchestration across the cloud continuum by transitioning from traditional heuristic schedulers to AI-driven autonomous systems. As computing architectures increasingly distribute across heterogeneous edge, fog, and cloud nodes, orchestrating complex workloads over federated environments requires dynamic, context-aware decision-making that static policies struggle to provide. The core objective is to design, implement, and evaluate a novel cloud continuum scheduling framework over federated Kubernetes architectures, utilizing technologies like Karmada as the multi-cluster control plane. Within this framework, LLM-based Agentic AI acts as the central decision-maker. By analyzing real-time cluster telemetry, hardware constraints, energy availability, and job requirements, the LLM agent dynamically optimizes workload distribution across the continuum. A critical challenge in applying Generative AI to critical IT operations (AIOps) is ensuring determinism and transparency. To address this, the research introduces two foundational pillars: 1. Structured LLM Outputs: the Agentic AI is engineered to generate strictly constrained and validated structured output formats (e.g., JSON/YAML conforming to predefined schemas) that can be safely, directly, and deterministically ingested by the Kubernetes and Karmada APIs. 2. Explainability-as-Code (EaC): the LLM not only dictates the scheduling placement but also programmatically traces its reasoning, the constraints evaluated, and the trade-offs considered. Every automated placement decision is accompanied by a declarative, machine-verifiable, and human-readable trace of the AI's logical reasoning (e.g., via metadata or annotations). This paradigm bridges the gap between black-box AI and the strict auditability required in critical infrastructures. 
Outline of the research work plan. 1st year: the candidate will explore the state of the art in cloud continuum orchestration, focusing on Kubernetes federation architectures (specifically Karmada) and the limitations of traditional multi-cluster schedulers. Concurrently, the candidate will investigate the integration of LLMs as reasoning engines for system orchestration, identifying gaps in reliability and exploring prompt engineering techniques to enforce structured outputs. 2nd year: the candidate will design and develop the core Agentic AI scheduler. This phase involves creating a closed-loop system where the LLM agent retrieves live cluster metrics, processes scheduling requests, and outputs deterministic, structured placement decisions. The framework will be tested on federated testbeds simulating heterogeneous edge and cloud tiers to evaluate the agent's ability to optimize job placement against baseline heuristic algorithms. 3rd year: the candidate will advance the research by formalizing and integrating the Explainability-as-Code framework, ensuring the agent's multi-dimensional reasoning is fully auditable. Experimental evaluation will be scaled to complex, real-world multi-cluster scenarios to measure overhead, scheduling latency, and optimization gains. Final optimizations will be applied to minimize LLM inference latency in the orchestration loop. 
List of possible venues for publications: 
- IEEE Transactions on Cloud Computing (TCC) 
- IEEE Transactions on Network and Service Management (TNSM) 
- Future Generation Computer Systems (Elsevier) 
- Journal of Systems and Software (Elsevier) 
- IEEE Internet of Things Journal 
- IEEE IC2E (Int. Conf. on Cloud Engineering) 
- IEEE/ACM UCC (Int. Conf. on Utility and Cloud Computing) 
- KubeCon + CloudNativeCon (CNCF) |
| Required skills | - Knowledge of the basic computer science concepts. - Knowledge of the main cloud computing topics. - Programming skills. - Capability of presenting the results of the work, both written (scientific writing and slide presentations) and oral. - Entrepreneurship, autonomous working, goal-oriented. - Flexibility and curiosity for different activities, from programming to teaching to presenting to writing. |
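The two pillars described in the proposal above (strictly validated structured LLM outputs, and an Explainability-as-Code trace carried as metadata) can be illustrated with a short sketch. This is a hedged illustration under stated assumptions: the schema, field names, and annotation key are hypothetical, and the manifest shape is only loosely modelled on Karmada's PropagationPolicy, not a real API contract.

```python
# Hedged sketch, not the proposal's implementation: (1) validate the
# agent's JSON output against a fixed schema so only deterministic,
# well-formed decisions reach the control plane; (2) attach the
# reasoning trace as Kubernetes-style annotations (EaC).

import json

# Expected structure of the agent's output: field name -> type (assumed).
SCHEMA = {"workload": str, "target_cluster": str, "reasoning": list}

def validate_decision(raw: str) -> dict:
    """Parse the agent's raw JSON and reject anything that does not
    match the expected schema."""
    decision = json.loads(raw)
    for field, ftype in SCHEMA.items():
        if not isinstance(decision.get(field), ftype):
            raise ValueError(f"invalid or missing field: {field!r}")
    return decision

def to_placement_manifest(decision: dict) -> dict:
    """Render a validated decision as a declarative manifest whose
    annotations carry the machine-readable reasoning trace."""
    return {
        "apiVersion": "policy.karmada.io/v1alpha1",
        "kind": "PropagationPolicy",
        "metadata": {
            "name": f"{decision['workload']}-placement",
            "annotations": {
                # Hypothetical annotation key for the audit trace.
                "eac.example.org/reasoning": json.dumps(decision["reasoning"]),
            },
        },
        "spec": {
            "placement": {
                "clusterAffinity": {"clusterNames": [decision["target_cluster"]]}
            }
        },
    }

# Example: a mock agent output flowing through validation to a manifest.
raw_output = json.dumps({
    "workload": "inference-job",
    "target_cluster": "edge-cluster-1",
    "reasoning": ["gpu available on edge-cluster-1", "latency budget met"],
})
manifest = to_placement_manifest(validate_decision(raw_output))
```

The design point is that the LLM never mutates cluster state directly: its output must round-trip through schema validation, and every applied manifest carries its own audit trail.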
Few-shot imitation learning for real world manipulation | |
| Proposer | Giuseppe Averta, Francesca Pistilli |
| Topics | Data science, Computer vision and AI |
| Group website | |
| Summary of the proposal | This research investigates data-efficient imitation learning from visual demonstrations in 2D and 2.5D, possibly enriched through structured representations such as scene or graph-based encodings, to learn manipulation skills from few examples. The goal is to develop transferable policies for anthropomorphic robotic arms that adapt across visual domains, simulators, tasks and embodiments, with minimal additional data. |
| Research objectives and methods | Deploying anthropomorphic robotic manipulators in real-world environments still requires learning methods that can generalize beyond a single robot, setup, or task. In this project, the candidate will study advanced imitation learning approaches that acquire manipulation policies directly from visual demonstrations, using 2D or 2.5D observations and, when beneficial, structured intermediate representations such as scene graphs or other graph-based encodings. The objective is to reduce the amount of supervision and robot interaction needed to learn complex behaviours, while improving robustness to changes in viewpoint, lighting, background, object arrangement, task configuration, and possibly also robot embodiment. Research objectives: - design visual imitation learning pipelines that learn manipulation policies from a limited number of demonstrations; - investigate how 2D, 2.5D, and graph-based representations can improve policy learning, temporal reasoning, and action generalization; - develop neural architectures that transfer across robot morphologies (i.e., from human examples or from other robots), simulators, and manipulation tasks with little fine-tuning data; - study few-shot adaptation strategies for rapidly specializing a pretrained policy to new domains, embodiments, or task variants; - evaluate precision, robustness, and sample efficiency on manipulation tasks involving anthropomorphic robotic arms. The research work plan will be articulated in four main phases. We plan to publish the outcomes of this research at premier conferences such as CoRL, RSS, ICRA, IROS, CVPR, ICCV, and ECCV, as well as in journals such as IEEE Robotics and Automation Letters, IEEE Transactions on Robotics, The International Journal of Robotics Research, IEEE Transactions on Pattern Analysis and Machine Intelligence, and the International Journal of Computer Vision. |
| Required skills | The candidate should have a strong background in machine learning, computer vision, and robotics, with solid programming skills in Python and deep learning frameworks such as PyTorch. Experience with imitation learning, robot learning, representation learning, or graph neural networks is highly desirable. Independence, research motivation, and strong analytical skills are essential. |
Measuring Digital Wellbeing Beyond Screen Time | |
| Proposer | Alberto Monge Roffarello, Luigi De Russis |
| Topics | Data science, Computer vision and AI, Software engineering and Mobile computing |
| Group website | https://elite.polito.it/ |
| Summary of the proposal | Current digital wellbeing research relies on screen-time metrics and self-reports, which fail to capture the real-time experiential impact of attention-capture design patterns on users. This PhD proposal investigates novel multimodal measurement methods - integrating behavioral, physiological, and computational signals - to quantify how dark patterns affect users' sense of agency, attention, and wellbeing beyond traditional proxies. |
| Research objectives and methods | The growing integration of digital technologies into everyday life has raised fundamental concerns about their impact on users' psychological wellbeing. Social media platforms, streaming services, and mobile applications increasingly rely on Attention-Capture Damaging Patterns (ACDPs), such as infinite scroll, content autoplay, and algorithmic recommendations, designed to maximize engagement metrics rather than support healthy digital experiences. These design strategies compromise users' Sense of Agency (SoA), i.e., their perceived control over their own actions and outcomes during interaction. Despite growing research on digital wellbeing, the HCI community still lacks robust methods for measuring the real-time impact of these patterns on users' cognitive and affective states. Current approaches rely on two paradigms: (1) screen-time metrics, which reduce digital experience to a single quantitative proxy, and (2) self-report questionnaires, which capture only post-hoc reflective judgments and are vulnerable to recall bias. Neither can adequately capture the in-the-moment experiential dimension of digital wellbeing: how users actually feel and behave while interacting with an interface. Building on empirical evidence showing that small feed-level design changes can measurably influence both reflective and experiential dimensions of agency on platforms like TikTok [1], this PhD proposal aims to develop and validate new methods for measuring digital wellbeing beyond screen time and self-reports, and to leverage these insights to design effective digital interventions. The PhD student will design, develop, and evaluate novel measurement frameworks, experimental paradigms, and intervention strategies that capture and address the impact of interface design patterns on users' sense of agency, attention, and wellbeing. 
Possible areas of investigation are: The proposal will adopt a human-centered approach, building upon the scientific literature from Human-Computer Interaction, Cognitive Psychology, and Psychophysiology. The work plan is organized in four partially overlapping phases: - Phase 1 (months 0–6): literature review at the intersection of digital wellbeing measurement and ACDPs; focus groups and interviews with researchers, designers, and end users; identification of measurement gaps and intervention opportunities. - Phase 2 (months 3–24): design, development, and validation of multimodal measurement instruments and reusable experimental tools; controlled laboratory studies on the impact of specific ACDPs; development of heuristic-based methods for characterizing ACDPs. - Phase 3 (months 12–36): design, prototyping, and controlled evaluation of digital interventions informed by measurement findings; investigation of the effectiveness-intrusiveness trade-off. - Phase 4 (months 24–36): longitudinal in-the-wild deployments of measurement instruments and interventions; cross-platform generalization; consolidation into guidelines and open-source research tools for the HCI community. Results are expected to be published at top HCI conferences (e.g., ACM CHI, ACM CSCW, ACM UIST) and in journals such as ACM Transactions on Computer-Human Interaction, International Journal of Human-Computer Studies, Proceedings of the ACM on IMWUT, and Computers in Human Behavior. [1] A. Monge Roffarello, A. De Luca, Am I in Control? How the Design of the TikTok Feed Shapes Users' Sense of Agency, CHI EA 2026, https://doi.org/10.1145/3772363.3798790 |
| Required skills | A candidate interested in the proposal should ideally: - be able to critically analyze and evaluate existing research, as well as gather and interpret data from various sources; - have experience with or strong interest in experimental research methods, including the design and execution of user studies; - have a good understanding of HCI research methods, especially around user experience evaluation and behavioral measurement. |
Spatio-Temporal Data Science Applied to Earth Observation | |
| Proposer | Paolo Garza, Daniele Apiletti |
| Topics | Data science, Computer vision and AI |
| Group website | https://dbdmg.polito.it/ |
| Summary of the proposal | Spatio-temporal (ST) data are continuously increasing (time series collected from IoT sensors, satellite images, and geo-referenced documents). Although ST data have been extensively studied, current data science pipelines do not effectively manage heterogeneous sources: most of them focus on one source at a time. Innovative deep learning approaches based on latent spaces, designed to integrate information from heterogeneous sources, are the primary goal of this proposal, with a focus on Earth Observation. |
| Research objectives and methods | The main objective of this research proposal is to study and design data-driven pipelines and deep learning models to analyze heterogeneous spatio-temporal data in the Earth Observation (EO) domain. Such data include, for instance, multi-spectral satellite imagery, time series from remote sensing platforms, and geo-referenced textual or environmental reports. Both descriptive and predictive problems will be considered, with applications such as environmental monitoring, disaster management, and climate analysis. Heterogeneity. Earth Observation systems inherently produce heterogeneous data characterized by different modalities, resolutions, and formats (e.g., satellite images, sensor measurements, textual reports). Each source provides a partial view of the observed phenomena, and meaningful insights can only be extracted through effective integration. To address this issue, innovative data integration techniques based on aligned latent spaces will be investigated. These approaches will enable the fusion of multi-modal EO data, preserving complementary information and improving the understanding of complex environmental processes. The work plan for the three years is organized as follows. 1st year. Analysis of the state-of-the-art algorithms and data science pipelines for spatio-temporal EO data. Based on the strengths and limitations of existing approaches, a preliminary common data representation based on latent spaces will be designed to effectively integrate heterogeneous EO data sources. Novel algorithms will then be developed and validated using historical EO datasets, with applications including disaster-related event detection, environmental change analysis, and data retrieval. |
| Required skills | Strong background in data science fundamentals and machine learning algorithms, including embeddings-based data models and LLMs. Strong programming skills. Knowledge of big data frameworks such as Spark is advisable but not required. |
Agentic AI for Advanced Temporal Reasoning | |
| Proposer | Luca Cagliero, Silvia Chiusano |
| Topics | Data science, Computer vision and AI |
| Group website | https://smartdata.polito.it |
| Summary of the proposal | Addressing Temporal Reasoning using Large Language Models (LLMs) requires understanding not only the general concepts of time and time relations, such as ordering or duration, but also more intricate aspects, such as task planning or causal relation discovery. The scholarship aims to explore the use of Agentic frameworks based on Multimodal LLMs to manage complex temporal aspects and effectively and efficiently address challenges related to content misalignment and long-context reasoning. |
| Research objectives and methods | Context List of possible publication venues |
| Required skills | The PhD candidate is expected to: - have the ability to critically analyze complex systems, model them, and identify weaknesses; |
Multimodal Temporal Reasoning using Tiny LLMs | |
| Proposer | Luca Cagliero, Paolo Garza |
| Topics | Data science, Computer vision and AI |
| Group website | https://dbdmg.polito.it/ https://smartdata.polito.it |
| Summary of the proposal | Multimodal temporal reasoning is the process of combining different temporal cues into a coherent temporal view of multimodal data. While Multimodal Large Language Models (MLLMs) leverage their robust pretraining to incorporate time-related information, MLLMs with few billion parameters often struggle with time-related problems. The scholarship aims to propose new approaches to significantly advance the performance of tiny MLLMs across different temporal reasoning tasks. |
| Research objectives and methods | Context List of possible publication venues |
| Required skills | The PhD candidate is expected to: - have the ability to critically analyze complex systems, model them, and identify weaknesses; - be proficient in Python programming; - know data science fundamentals; - have a solid background in machine learning and deep learning; - have a natural inclination for teamwork; - be proficient in English speaking, reading, and writing. |
Continual Learning for Generative Models | |
| Proposer | Luca Cagliero, Elena Baralis |
| Topics | Data science, Computer vision and AI |
| Group website | https://dbdmg.polito.it/ https://smartdata.polito.it |
| Summary of the proposal | Training Multimodal Large Language Models (MLLMs) is inherently dynamic as data distributions, languages and user demands continually evolve. Continual Learning (CL) aims to adapt models for new tasks, languages, and domains without forgetting prior knowledge and capabilities. While CL for discriminative models is established, its use for generative models poses relevant challenges. The scholarship aims to study innovative CL approaches suited for Small MLLMs and apply them in real scenarios. |
| Research objectives and methods | Context. Research objectives: - Benchmark existing CL techniques for textual generative models; - Extend CL techniques for textual LLMs towards multimodal scenarios; - Transfer CL models from one domain to another, from one language to another, and from one modality to another; - Propose new foundational CL models, including Reinforced CL techniques [4,5]; - Study new strategies to fight catastrophic forgetting in challenging scenarios; - Generalize CL approaches to make them agnostic to data modality and language; - Explore new, challenging application scenarios. |
| Required skills | The PhD candidate is expected to: - have the ability to critically analyze complex systems, model them, and identify weaknesses; - be proficient in Python programming; - know data science fundamentals; - have a solid background in machine learning and deep learning; - have a natural inclination for teamwork; - be proficient in English speaking, reading, and writing. |
Time-Aware Reinforcement Learning from AI Feedback | |
| Proposer | Luca Cagliero, Eliana Pastor |
| Topics | Data science, Computer vision and AI |
| Group website | https://dbdmg.polito.it/ https://smartdata.polito.it |
| Summary of the proposal | Reinforcement Learning from AI Feedback (RLAIF) has been developed to mitigate the substantial expenses involved in acquiring human preferences. With the seamless advances of Multimodal LLMs, RLAIF has become fundamental to complement human feedback for model fine-tuning, but temporal model drift and time-evolving application scenarios pose significant challenges. The scholarship aims to adapt RLAIF for time-evolving scenarios, mainly focusing on real applications of SpeechLLMs and VisionLLMs. |
| Research objectives and methods | Context Reinforcement Learning from Human Feedback (RLHF) is an established technique for aligning language models to human preferences [1]. However, since the cost of human annotation is often unaffordable, RL commonly uses a reward model trained on a mix of human and AI preferences [2]. Challenges To overcome RLHF issues, Reinforcement Learning from AI Feedback (RLAIF) has successfully been applied in several domains, ranging from hate speech detection and mitigation [3] to SpeechLLM fine-tuning [4]. However, most RLAIF-based solutions often assume that the AI feedback is collected once, without incremental updating, and the model to be fine-tuned is static. When a temporal drift occurs, new AI feedback is required, and time-evolving/streaming models and Reinforcement Learning strategies become necessary. Research objectives Benchmarking Existing RLAIF techniques on speechLLMs, VisualLLMs, and VideoLLMs; Extend RLAIF techniques towards different modalities, contexts of application, and data distributions; Propose new, efficient time-evolving approaches to RLAIF; Adapt RLAIF techniques to incremental/streaming scenarios; Define new performance metrics to capture RLAIF effectiveness and efficiency in time-evolving scenarios; Develop Agentic AI solutions incorporating RLAIF; Explain AI agents' decisions in time-evolving RLAIF scenarios. Tentative work plan During the first year, the PhD student will study existing RLAIF techniques and compare them with RLHF approaches for model fine-tuning. Focusing on speechLLMs first, the PhD investigates new approaches to adapt RLAIF to time-evolving scenarios, with particular attention paid to incremental/streaming scenarios. In the second year, the PhD student will extend the research to other data modalities (e.g., visual content, time series), studying original approaches to efficiently make RLAIF and RL-based fine-tuning techniques time-aware. 
The PhD student will also develop an Agentic AI framework incorporating time-aware RLAIF techniques. In the last year, the PhD student will further explore RLAIF applications, particularly in Agentic AI, and study how to explain AI agents' decisions and how to measure the quality of LLM-as-a-judge models in time-evolving scenarios. Funding. The research activities will be carried out under the FIS 2 National Project "TA-LLM - Large Language Models: a matter of time", funded by MUR. CUP: E53C25001820001. Bibliography. [1] D. Amodei, P. Christiano, A. Ray. "Learning from Human Preferences." openai.com. [2] H. Lee, S. Phatale, H. Mansoor, T. Mesnard, J. Ferret, K. Lu, C. Bishop, E. Hall, V. Carbune, A. Rastogi, S. Prakash. "RLAIF vs. RLHF: Scaling Reinforcement Learning from Human Feedback with AI Feedback." https://arxiv.org/abs/2309.00267 [3] A. Albladi et al. "Hate Speech Detection Using Large Language Models: A Comprehensive Review." IEEE Access, vol. 13, pp. 20871-20892, 2025, doi: 10.1109/ACCESS.2025.3532397. [4] S. Ji et al. "WavReward: Spoken Dialogue Models With Generalist Reward Evaluators." 2025. [5] Z. Wang et al. "A Comprehensive Survey of LLM Alignment Techniques: RLHF, RLAIF, PPO, DPO and More." https://arxiv.org/abs/2407.16216 List of possible publication venues - Conferences: ACL, EMNLP, ACM Multimedia, NeurIPS, AAAI, KDD, IEEE ICDM, ECML PKDD, ACM CIKM - Journals: IEEE TKDE, ACM TKDD, IEEE TAI, ACM TIST, IEEE/ACM TASLP, TACL |
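The core RLAIF mechanism the proposal builds on, where an AI judge supplies preference labels in place of human annotators and a reward model is fitted to them, can be sketched in a few lines. This is a toy illustration, not the proposal's method: `judge_preference` is a hypothetical stand-in for an LLM-as-a-judge, and the hand-crafted `features` replace the learned embeddings a real reward model would use.

```python
import math

def judge_preference(resp_a: str, resp_b: str) -> int:
    """Toy stand-in for an AI judge (e.g., an LLM-as-a-judge):
    prefers the longer, more detailed response. Returns 1 if A wins, 0 if B wins."""
    return 1 if len(resp_a) >= len(resp_b) else 0

def features(resp: str) -> list[float]:
    # Hypothetical scalar features of a response; a real reward model
    # would use learned embeddings instead.
    return [len(resp) / 100.0, resp.count(".") / 10.0]

def reward(w: list[float], resp: str) -> float:
    return sum(wi * xi for wi, xi in zip(w, features(resp)))

def bradley_terry_step(w, resp_a, resp_b, label, lr=0.5):
    """One gradient step on the Bradley-Terry preference loss:
    P(A > B) = sigmoid(r(A) - r(B)); label=1 means the judge preferred A."""
    diff = reward(w, resp_a) - reward(w, resp_b)
    p = 1.0 / (1.0 + math.exp(-diff))
    grad_scale = label - p  # ascent direction on the log-likelihood
    fa, fb = features(resp_a), features(resp_b)
    return [wi + lr * grad_scale * (xa - xb) for wi, xa, xb in zip(w, fa, fb)]

# Collect AI feedback on one response pair and update the reward model.
a = "A detailed answer with reasoning. It covers edge cases."
b = "Short answer."
label = judge_preference(a, b)      # AI feedback, no human annotation
w = [0.0, 0.0]
for _ in range(50):
    w = bradley_terry_step(w, a, b, label)
print(reward(w, a) > reward(w, b))  # the reward model now agrees with the judge
```

In a time-evolving setting of the kind the proposal targets, the same update would be applied incrementally as fresh judge feedback arrives, rather than on a fixed, once-collected preference set.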
| Required skills | The PhD candidate is expected to: - have the ability to critically analyze complex systems, model them, and identify weaknesses; - be proficient in Python programming; - know data science fundamentals; - have a solid background in machine learning and deep learning; - have a natural inclination for teamwork; - be proficient in English speaking, reading, and writing. |
Adversarial Robustness in Multi-Modal Foundation Models | |
| Proposer | Luca Cagliero, Danilo Giordano, Nicola Franco |
| Topics | Cybersecurity, Data science, Computer vision and AI |
| Group website | https://ai4i.it/ https://dbdmg.polito.it/ https://smartdata.polito.it |
| Summary of the proposal | Multi-modal AI models processing vision, language, and audio content are becoming increasingly prevalent in several applications. However, these systems introduce novel attack surfaces arising from cross-modal interactions, where adversarial inputs in one modality can exploit semantic inconsistencies or vulnerabilities when processed jointly with other modalities. This research aims to investigate these vulnerabilities and develop novel attacks exposing weaknesses in cross-modal processing. |
| Research objectives and methods | Large-scale multi-modal AI models enable richer understanding and generation of content across different modalities. However, the security implications of cross-modal interactions remain poorly understood. Recent evidence suggests that adversaries can craft inputs where, for example, benign visual content paired with carefully manipulated audio or text can cause model misclassification or unsafe outputs. Unlike single-modality adversarial attacks, multi-modal attacks exploit the complex fusion mechanisms that integrate information across modalities, creating attack vectors that are difficult to detect and defend against using existing techniques. The main objective of the proposed research is to advance understanding of adversarial vulnerabilities in multi-modal AI systems by establishing theoretical foundations for attack surfaces and developing practical defense mechanisms that can be deployed in real-world applications. In this research work, the candidate will leverage expertise in adversarial machine learning, information theory, and secure system design to develop both theoretical insights and practical solutions for multi-modal AI security. The research activity will be organized in three phases: Phase 1 (1st year): The candidate will conduct a comprehensive study of the attack surface of multi-modal AI models, focusing on architectures that combine vision, language, and audio modalities. This phase involves analyzing state-of-the-art multi-modal fusion mechanisms to identify potential vulnerability points where cross-modal interactions can be exploited. The candidate will develop a taxonomy of attack vectors specific to multi-modal systems, categorizing them by the exploited modality interactions, attack objectives, and required adversary capabilities. The candidate will begin developing novel attack methods that exploit semantic inconsistencies between modalities. 
Examples include attacks where visual and textual content appear individually benign but their combination triggers misclassification, or where subtle audio perturbations alter the interpretation of accompanying visual content. At this phase's end, preliminary results are expected to be published, including the attack taxonomy, initial attack methods, and theoretical characterizations of multi-modal vulnerability surfaces. During the first year, the candidate will also acquire the necessary background through coursework in adversarial machine learning, information theory, and multi-modal AI architectures, supplemented by personal study. Phase 2 (2nd year): In the second phase, the candidate will build upon the knowledge acquired during the first year to create a dataset of synthetic adversarial attacks. The dataset will cover several attack techniques, including cross-modal perturbation, semantic inconsistency exploitation, and modality-specific backdoors. The dataset will target different types of vulnerabilities and cover different application contexts. This dataset will serve as a benchmark for evaluating defense mechanisms and will be made publicly available to support the research community. The dataset will be validated through an experimental campaign to demonstrate the effectiveness of the attack methods in exposing vulnerabilities. Applications in content analysis systems, customer service AI agents, and AI-assisted software development tools will serve as case studies. Phase 3 (3rd year): Building on previous results, the candidate will develop new attack techniques capable of automatically synthesizing attack samples. These techniques will be implemented as a framework of parameterized attack templates to generate diverse adversarial examples across different multi-modal architectures. The attack generation framework will incorporate the results from Phase 2 to prioritize attack strategies with higher success probabilities. 
The candidate will refine and validate the attack mechanisms through extensive testing on multimodal models. The final phase will produce comprehensive documentation, open-source implementations of attack tools, and best practices guidelines for developing robust multi-modal AI systems. Dissemination activities will include publications, software releases, and potentially workshops or tutorials to transfer knowledge to practitioners. The work may be conducted in collaboration with industry partners deploying multi-modal AI systems or relevant research initiatives focusing on AI safety and security. The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of Machine Learning and AI (e.g., NeurIPS, ICML, ICLR, CVPR, ACL, AAAI, JMLR, IEEE TPAMI), Cybersecurity (e.g., IEEE S&P, ACM CCS, USENIX Security, NDSS, ACM Transactions on Privacy and Security), and AI safety (e.g., AIES, FAccT). The activities will be carried out within the scope of a research collaboration between the Italian Institute of Artificial Intelligence for Industry and Politecnico di Torino. |
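As a minimal illustration of the cross-modal attack surface discussed above, the sketch below perturbs only the audio features of a toy linear fusion model, leaves the visual input untouched, and still flips the fused decision. The two-feature model and the FGSM-style step are illustrative assumptions, not an attack from the proposal.

```python
import math

# Toy fused classifier: score = w_img . x_img + w_aud . x_aud + b,
# a stand-in for the cross-modal fusion layer of a real multi-modal model.
w_img = [0.9, -0.4]
w_aud = [0.7, 0.5]
b = -0.2

def score(x_img, x_aud):
    return sum(w * x for w, x in zip(w_img, x_img)) + \
           sum(w * x for w, x in zip(w_aud, x_aud)) + b

def fgsm_on_audio(x_img, x_aud, eps):
    """FGSM-style attack restricted to the audio modality: for a linear
    model the gradient of the score w.r.t. x_aud is just w_aud, so each
    audio feature is stepped by -eps * sign(gradient) to push the fused
    score below the decision threshold while the image stays clean."""
    return [x - eps * math.copysign(1.0, w) for x, w in zip(x_aud, w_aud)]

x_img = [1.0, 0.2]   # benign visual features
x_aud = [0.5, 0.4]   # benign audio features
print(score(x_img, x_aud) > 0)      # fused model: positive class

x_aud_adv = fgsm_on_audio(x_img, x_aud, eps=1.0)
print(score(x_img, x_aud_adv) > 0)  # decision flipped by an audio-only perturbation
```

Real multi-modal attacks of the kind the proposal studies target nonlinear fusion layers and require iterative gradient estimation, but the asymmetry is the same: one perturbed modality can subvert a decision that also depends on benign inputs from the others.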
| Required skills | The candidate should have a strong background in ML and AI, with particular emphasis on deep learning architectures. Familiarity with multi-modal AI models, computer vision, NLP, or audio processing is desirable. Strong programming skills and experience with deep learning frameworks are essential. The candidate can acquire specialized knowledge in adversarial robustness and multi-modal architectures as part of the PhD Program, by exploiting specialized courses and the research group's expertise. |
Privacy-Preserving Machine Learning over IoT Networks | |
| Proposer | Enrico Macii, Andrea Calimera, Valentino Peluso |
| Topics | Computer architectures and Computer aided design, Cybersecurity, Data science, Computer vision and AI |
| Group website | eda.polito.it https://www.linkedin.com/company/edagroup-polito/ |
| Summary of the proposal | Distributed Machine Learning strategies, like split learning and federated learning, enable decentralized intelligence but are vulnerable to data theft and manipulation, raising privacy and security concerns. Existing defenses often degrade performance and introduce overhead, limiting their adoption in resource-constrained IoT devices. This project aims to develop hardware-aware software optimization techniques for efficient, privacy-preserving ML in distributed IoT systems. |
| Research objectives and methods | Research objectives. This project aims to develop and evaluate optimization techniques that address the challenges of privacy and security in distributed machine learning (ML) while ensuring efficiency in resource-constrained IoT environments. Specifically, the objectives include: acquire competences in ML and deep learning training and deployment, distributed computing architectures, and existing privacy-preserving techniques; develop optimization strategies to make privacy-preserving techniques compatible with the limited resources of low-power end-nodes and off-the-shelf devices, making their implementation feasible in real-world networks and infrastructures; identify the evaluation metrics to assess the quality, security, and efficiency of privacy-preserving ML frameworks; develop an emulation framework for rapid assessment of different optimization strategies and techniques; develop multi-objective optimization techniques and algorithms that jointly improve accuracy, energy efficiency, and communication costs while maintaining privacy protection. The proposed solutions should also be compatible with security defenses against adversarial attacks, such as data and model poisoning, which are notoriously difficult to integrate with standard privacy-preserving techniques. Outline of research work plan. 1st year. The candidate will conduct a comprehensive review of the state of the art in distributed ML, focusing on: existing approaches such as federated learning, split learning, and split inference; vulnerabilities, threats, and attacks in distributed ML systems; privacy-preserving techniques, including differential privacy, multi-party computation, and homomorphic encryption; key performance indicators (KPIs) to evaluate distributed ML strategies and their applicability in IoT systems. 
The candidate will also develop an initial version of an emulation framework for distributed ML (leveraging existing open-source projects), which will serve as a testbed to evaluate novel optimization strategies. 2nd year. The candidate will design, develop, and validate novel optimization strategies, working across multiple layers: at the software layer, with algorithmic solutions that concurrently optimize accuracy and efficiency; at the hardware layer, investigating compiler-level optimization and specialized architectures for acceleration. Rather than treating these optimization strategies as isolated solutions, the candidate will explore their interactions to maximize efficiency. 3rd year. The candidate will test and consolidate the developed methodologies on real applications. The focus will be on emerging applications that could benefit most from privacy-preserving ML, assessing feasibility, robustness, and efficiency in practical scenarios. Possible venues for publications: IEEE Internet of Things Journal; IEEE Transactions on Parallel and Distributed Systems; IEEE Transactions on Privacy; IEEE Transactions on Information Forensics and Security; IEEE Transactions on Dependable and Secure Computing; ACM Transactions on Embedded Computing Systems; ACM Transactions on Internet of Things; ACM Transactions on Privacy and Security; ACM/IEEE Design Automation Conference (DAC); IEEE/ACM International Conference on Computer Aided Design (ICCAD) |
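One privacy-preserving building block named above, federated averaging combined with differential-privacy-style clipping and noise, can be sketched on a toy scalar model. The clipping norm and noise scale are illustrative assumptions, not calibrated to a formal (epsilon, delta) guarantee, and the scalar model replaces real client-side training.

```python
import random

def local_update(weights, data, lr=0.1):
    """Toy local SGD on a scalar linear model y = w*x for one client."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def clip(update, c):
    # Clip the client update to norm c (scalar case: clamp its magnitude).
    return max(-c, min(c, update))

def dp_fedavg(global_w, client_datasets, clip_norm=1.0, sigma=0.1):
    """One round of federated averaging with per-client clipping and
    Gaussian noise on the aggregate, in the spirit of DP-FedAvg."""
    updates = []
    for data in client_datasets:
        local_w = local_update(global_w, data)
        updates.append(clip(local_w - global_w, clip_norm))
    noise = random.gauss(0.0, sigma * clip_norm)
    return global_w + (sum(updates) + noise) / len(client_datasets)

random.seed(0)
clients = [[(1.0, 2.0), (2.0, 4.0)], [(1.5, 3.0)], [(0.5, 1.0)]]  # all fit y = 2x
w = 0.0
for _ in range(30):
    w = dp_fedavg(w, clients)
print(abs(w - 2.0) < 0.3)  # converges near the true slope despite clipping and noise
```

The efficiency question the proposal targets shows up even here: clipping and noising add per-round communication and computation that a resource-constrained IoT node must absorb, which is what the planned optimization strategies would reduce.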
| Required skills | Knowledge of standard Machine Learning and Deep Learning and basic model compression strategies (e.g., pruning, quantization). Background in embedded systems programming. Proficiency in Python, including ML frameworks like scikit-learn and PyTorch. Strong communication and writing skills. |
Stack and Compilation Techniques for Hybrid Algorithms on Fault-Tolerant Quantum Architectures | |
| Proposer | Bartolomeo Montrucchio, Maurizio Rebaudengo |
| Topics | Parallel and distributed systems, Quantum computing |
| Group website | https://www.dauin.polito.it/la_ricerca/gruppi_di_ricerca/grains_graphics_and_intelligent_systems |
| Summary of the proposal | Fault-tolerant quantum computers are expected to enable more reliable execution than NISQ devices, while still being strongly constrained in terms of qubit count, logical resources, communication overheads, and runtime costs. This research will investigate software stack and compilation techniques for the efficient execution of hybrid quantum-classical algorithms on such systems, with particular attention to portability, resource optimization, and integration with classical computing. |
| Research objectives and methods | Fault-tolerant quantum computing is progressively moving from a long-term theoretical target toward an early practical regime, in which a limited number of logical qubits may become available with non-negligible overheads due to error correction. In this context, the most relevant applications will likely remain hybrid in nature: quantum kernels will be embedded into broader classical workflows, and their practical usefulness will depend not only on algorithmic asymptotic advantages, but also on the efficiency of the full software stack. The goal of this Ph.D. activity is to study, design, and prototype software methodologies for the efficient implementation of hybrid quantum-classical algorithms targeting early fault-tolerant quantum machines. The research will focus especially, though not exclusively, on the compilation layer, with the objective of translating high-level hybrid programs into hardware-aware and resource-efficient executable forms. The work will consider the constraints specific to early fault-tolerant architectures, such as limited logical qubit availability, expensive non-Clifford resources and qubit connectivity constraints. 
The research objectives include: - the analysis of software abstractions and intermediate representations for hybrid quantum-classical programs, with emphasis on portability across different execution backends and programming models; - the design of compilation strategies able to optimize circuits and hybrid workflows; - the study of methods for reducing the overhead associated with fault-tolerant execution, for example by minimizing costly operations and improving reuse of logical resources; - the integration of compilation and runtime techniques to support efficient interaction between quantum kernels and classical orchestration, including scheduling, batching, feedback paths, and interoperability with HPC and cloud-based infrastructures; - the definition of representative benchmarks and evaluation methodologies to assess the effect of software and compilation choices on algorithmic performance, execution cost, scalability, and resource consumption. From a methodological perspective, the Ph.D. candidate will work on state-of-the-art open-source frameworks and will contribute new software components, prototypes, and evaluation tools. The activity may involve different abstraction levels, from high-level hybrid programming interfaces down to compiler passes, intermediate representations, runtime orchestration mechanisms, and resource models for fault-tolerant execution. While the focus will be on compilation, the research is intentionally broad enough to include relevant cross-layer optimizations in the software stack whenever these are necessary to obtain efficient end-to-end execution. The work is expected to evolve over the three years of the Ph.D. 
as follows: - First year: consolidation of the background in quantum computing, fault tolerance, and hybrid quantum-classical programming models; analysis of the state of the art; identification of relevant use cases and software platforms; initial implementation activities; submission of at least one conference paper. - Second year: design and development of original compilation and software-stack techniques for hybrid algorithms on early fault-tolerant architectures; implementation of prototypes and benchmarking methodology; experimental validation on simulators and, where possible, on available platforms or emulation environments; submission of conference papers and at least one journal paper. - Third year: refinement and consolidation of the developed methodologies; extension toward broader software integration and more mature evaluation campaigns; final comparison with state-of-the-art approaches; preparation of the doctoral thesis and publication of results in selected international venues. Possible publication venues include major journals and conferences in the areas of quantum computing systems, compilation, and high-performance computing, such as IEEE, ACM, and related international venues. Representative outlets may include IEEE Transactions on Quantum Engineering, ACM Transactions on Quantum Computing, IEEE Quantum Week, QCE workshops, and selected systems/HPC conferences where relevant. The proposal builds on the complementary expertise available at the Department of Control and Computer Engineering and at Fondazione Links, with which a scientific collaboration is foreseen on the addressed topics; collaborative research activities and joint scientific contributions are expected where appropriate. This collaboration fits into an already established relationship on advanced computing topics and may provide additional opportunities for validation, technology transfer, and interaction with broader applied-research scenarios. 
An international research period abroad of at least 6 months is mandatory, in coherence with the Ph.D. program and with the evolution of the candidate's scientific activity. |
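A compiler pass of the kind this proposal targets, reducing costly operations such as non-Clifford T gates, can be illustrated with a toy peephole pass that cancels adjacent gate/inverse pairs in a single-qubit gate sequence. Real transpilers (e.g., inverse-cancellation passes) operate on full circuit DAGs across many qubits; this is a deliberately simplified sketch.

```python
# Self-inverse and inverse-pair gates for a toy single-qubit gate set.
INVERSES = {"H": "H", "X": "X", "T": "Tdg", "Tdg": "T", "S": "Sdg", "Sdg": "S"}

def cancel_pass(circuit):
    """Peephole pass over a single-qubit gate list: remove adjacent
    gate/inverse pairs. A stack makes cancellations cascade, so that
    removing an inner pair can expose and cancel an outer one.
    Fault-tolerant backends care especially about dropping T/Tdg gates,
    which dominate the cost of error-corrected execution."""
    stack = []
    for gate in circuit:
        if stack and INVERSES.get(stack[-1]) == gate:
            stack.pop()          # gate cancels its inverse
        else:
            stack.append(gate)
    return stack

def t_count(circuit):
    # Non-Clifford resource estimate: number of T/Tdg gates.
    return sum(g in ("T", "Tdg") for g in circuit)

circ = ["H", "T", "Tdg", "H", "X", "T", "S", "Sdg", "T"]
opt = cancel_pass(circ)
print(opt, t_count(circ), "->", t_count(opt))  # ['X', 'T', 'T'] 4 -> 2
```

A follow-up pass could merge the remaining T pair into a single Clifford S gate, which is the kind of resource-aware rewriting that matters once error-correction overheads price every non-Clifford operation.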
| Required skills | The ideal candidate should have a strong interest in quantum computing systems and software. A solid background in computer engineering, computer science, or a related field is expected, with good programming skills, preferably in Python and/or C++. Knowledge of compilers, HPC, programming languages, or computer architecture will be appreciated. Previous experience in quantum computing is beneficial but not strictly mandatory. Good teamwork and research attitude are important. |
Fairness-Aware Generative AI for Socio-technical Systems | |
| Proposer | Riccardo Coppola, Antonio Vetrò |
| Topics | Software engineering and Mobile computing, Data science, Computer vision and AI |
| Group website | http://softeng.polito.it/ https://nexa.polito.it/ |
| Summary of the proposal | The research will study how large language models reproduce and amplify bias in high-impact contexts, and how such risks can be detected, measured and mitigated through evaluation workflows, model orchestration and software design practices for more trustworthy and equitable AI-enabled systems. |
| Research objectives and methods | Generative AI is increasingly embedded in socio-technical systems that support communication, assessment, recommendation, profiling, moderation, and decision preparation, such as customer-support chatbots, automated essay scoring in education, recommendation systems on digital platforms, and decision-support tools used in domains like hiring, healthcare, and public administration. In these settings, biased model behaviour can implicitly embed political views and produce unequal treatment, distorted representations, and unfair downstream effects. This PhD position addresses the problem from a fairness-aware software and systems perspective, with the goal of developing audit methods, tools, workflows, and design principles for the analysis and mitigation of bias in generative AI. Recent work in this area has shown that unfair behaviour in LLMs can emerge in different forms. Prior studies attached to this line of research have shown, for example, that models may reproduce gendered stereotypes in generated descriptions and may react differently to semantically equivalent inputs when those inputs vary by dialect, producing stereotype-bearing differences in adjectives, occupational associations, trust judgments, and inferred background. These cases are not the exclusive focus of the PhD, but they provide initial concrete evidence, to be expanded in depth or breadth during the research, that fairness risks in generative AI are broader than any single application domain and can arise from both social and linguistic signals. The research will investigate fairness-aware generative AI as a complex socio-technical challenge that integrates technical dimensions, such as training data balance, model architecture, and optimization strategies, with core concepts from the social sciences. 
It will examine how bias manifests across textual and multimodal outputs; how such behaviour can be evaluated through controlled empirical protocols; and how mitigation can be embedded into the architecture of AI-enabled systems. Particular attention will be given to the comparison between prompt-based mitigation and process-based mitigation, since recent evidence suggests that critique-and-revision workflows and multi-agent orchestration may provide more stable mitigation than single-pass prompting alone. The work is expected to produce a fairness-aware framework for generative AI in socio-technical systems, combining empirical evaluation methods, reusable benchmark designs, mitigation workflows and software engineering guidance. The research plan may include the construction of matched-input evaluation protocols, cross-model comparative studies, analysis of representational and allocational harms, and the design of agentic or workflow-level controls to reduce biased outputs in realistic deployment settings. The broader aim is to support the development of trustworthy generative AI systems that are not only technically effective, but also fairer, more transparent and more robust in practice. Phase 1 (Months 1-12) - Benchmark design and harm identification. Phase 2 (Months 13-24) - Comparative evaluation and mitigation. Phase 3 (Months 25-36) - Deployment testing and operational consolidation. List of possible venues for publications |
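The matched-input evaluation protocol mentioned above can be sketched as follows: build prompt variants that differ only in one term (here a hypothetical dialect marker) and report the score gap between them. `toy_model_score` is a deliberately biased stand-in for an LLM-based judgment, used only to make the audit's output non-trivial; a real protocol would query the model under test.

```python
def toy_model_score(prompt: str) -> float:
    """Hypothetical stand-in for an LLM-based scorer (e.g., a trust or
    suitability judgment). It is deliberately biased against one dialect
    marker so the audit below has something to detect."""
    score = 0.5
    if "finna" in prompt:       # illustrative dialect marker
        score -= 0.2
    if "engineer" in prompt:
        score += 0.1
    return score

def matched_pairs(template: str, variants: list[str]) -> list[str]:
    # Build semantically equivalent inputs differing only in one term.
    return [template.format(v) for v in variants]

def disparity(template: str, variants: list[str]) -> float:
    """Matched-input audit metric: the maximum absolute score gap across
    variants. Zero would indicate identical treatment of matched inputs."""
    scores = [toy_model_score(p) for p in matched_pairs(template, variants)]
    return max(scores) - min(scores)

template = "The engineer said: 'I am {} to finish the report.'"
print(round(disparity(template, ["going", "finna"]), 2))  # 0.2: unequal treatment
print(disparity(template, ["going", "gonna"]))            # 0.0: no measured gap
```

In the planned research, the same matched-input logic would be applied at scale across models and demographic or linguistic variants, and the resulting gaps would feed the comparison between prompt-based and process-based mitigation.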
| Required skills | The ideal candidate should have a background in computer science, computer engineering, management engineering, software engineering or related disciplines, with interest in generative AI, empirical research and responsible AI. Useful preparation includes programming, data analysis, experimental evaluation, and familiarity with AI or NLP methods. Interest in fairness, ethics and socio-technical systems is particularly valuable. The candidate should also possess good communication skills. |