02 Robust AI systems for data-limited applications (Prof. Santa Di Cataldo)
03 Artificial Intelligence applications for advanced manufacturing systems (Prof. Santa Di Cataldo)
04 Digital Wellbeing by Design (Prof. Alberto Monge Roffarello)
05 Goal-Oriented Adaptive Learning for 6G (Prof. Claudio Ettore Casetti)
06 Security of Linux Kernel Extensions (Prof. Riccardo Sisto)
07 Local energy markets in citizen-centered energy communities (Prof. Edoardo Patti)
08 Simulation and Modelling of V2X connectivity with traffic simulation (Prof. Edoardo Patti)
13 Privacy-Preserving Machine Learning over IoT networks (Prof. Valentino Peluso)
14 Data-Driven and Sustainable Solutions for Distributed Systems (Prof. Guido Marchetto)
15 Single-cell Multi-omics for Understanding Cellular Heterogeneity (Prof. Stefano Di Carlo)
19 Safety and Security of AI in Space and Safety Critical Applications (Prof. Stefano Di Carlo)
20 Non-invasive and low-cost solutions for health monitoring (Prof. Massimo Violante)
22 Innovative technologies for infrastructures and buildings management (Prof. Valentina Gatteschi)
24 Video Retrieval-Augmented Generation (Prof. Luca Cagliero)
25 Human-Centered AI within Internet-of-Things Ecosystems (Prof. Luigi De Russis)
26 Preference models for multimodal annotations (Prof. Luca Cagliero)
28 Spatio-Temporal Data Science (Prof. Paolo Garza)
30 AI4CTI - ARTIFICIAL INTELLIGENCE FOR CYBER THREAT INTELLIGENCE (Prof. Marco Mellia)
32 Risk-aware Cyber Threats Mitigation (Prof. Cataldo Basile)
33 AI-based Cyber Threats Mitigation in Software Networks (Prof. Cataldo Basile)
35 High-Performance Networking for Efficient and Secure AI Applications (Prof. Guido Marchetto)
36 Development of Virtual Platforms for Early Software Design (Prof. Sara Vinco)
40 Enhancing Educational Storytelling with Human-Centered AI in the LLM Era (Prof. Luigi De Russis)
41 Knowledge-Informed Machine Learning for Data Science and Scientific AI (Prof. Daniele Apiletti)
42 Building Dynamic and Opportunistic Datacenters (Prof. Fulvio Giovanni Ottavio Risso)
43 Trustworthy Edge AI: efficient and explainable multi-modal models (Prof. Tatiana Tommasi)
Evaluating work-induced stress and cognitive decline using wearables and AI algorithms | |
Proposer | Gabriella Olmo, Luigi Borzì, Marco Ghislieri |
Topics | Data science, Computer vision and AI, Life sciences |
Group website | https://www.smilies.polito.it/ https://www.biomedlab.polito.it/ |
Summary of the proposal | A person's level of emotional activation is known to affect work performance, safety at work, and general health, and the work environment is itself one of the major sources of dysfunctional stress. This proposal concerns the development and implementation of a protocol for the objective quantification of work-related stress conditions, and the identification of possible correlations between such stress and the decline of cognitive abilities. |
Research objectives and methods | The primary objective of this proposal is to design and implement a prototypal BAN (Body Area Network) made of low-cost, low-impact commercial wearables, to evaluate work-related distress and its correlation with cognitive decline. Specific objectives include the implementation of Artificial Intelligence (AI) algorithms working on heterogeneous and multidimensional health data, with attention to the interpretability and generalizability of the results (a minimal illustrative sketch of such an analysis pipeline follows this proposal). It will be possible to validate the prototype against the gold-standard instrumentation available at PolitoBIOMedLab, and to discuss the clinical implications of the results, increasing the clinical and psychological knowledge on the correlation between stress and cognitive decline. We plan to publish at least one journal paper per year. |
Required skills | Expertise in the fields of Signal Processing, Data Analysis, Statistics and Machine Learning (e.g., feature selection and ranking, supervised and unsupervised learning). Basic knowledge of bio-signal data processing (EEG, ECG, EMG, EOG). Good knowledge of the C, Python, Matlab, and Simulink programming languages. Good relational abilities and knowledge of the Italian language, to effectively manage interactions with participants during the evaluation trials. |
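As a purely illustrative complement, the sketch below shows one shape such an analysis pipeline could take: classic time-domain HRV features computed from RR intervals and fed to a supervised classifier. The RR data is synthetic and the feature set minimal; in the actual project, inputs would come from the wearable BAN and labels from validated stress assessments.

```python
# Minimal sketch: heart-rate-variability (HRV) features from RR intervals
# feeding a supervised stress classifier. Data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def hrv_features(rr_ms):
    """Classic time-domain HRV descriptors from an RR-interval series (ms)."""
    diff = np.diff(rr_ms)
    return np.array([
        rr_ms.mean(),                    # mean RR interval
        rr_ms.std(ddof=1),               # SDNN
        np.sqrt(np.mean(diff ** 2)),     # RMSSD
        np.mean(np.abs(diff) > 50),      # pNN50
    ])

def simulate(stressed, n=300):
    # Synthetic cohort: "stressed" recordings have shorter, less variable RR.
    base = 700 if stressed else 850      # ms
    scale = 25 if stressed else 60
    return base + rng.normal(0, scale, n)

X = np.array([hrv_features(simulate(s)) for s in (0, 1) * 100])
y = np.array([0, 1] * 100)               # 1 = stressed recording

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```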
Robust AI systems for data-limited applications | |
Proposer | Santa Di Cataldo, Francesco Ponzio, Enrico Macii |
Topics | Data science, Computer vision and AI |
Group website | https://eda.polito.it/ https://www.linkedin.com/company/edagroup-polito/ |
Summary of the proposal | Artificial Intelligence is driving a revolution in many important sectors of society. Deep learning networks, and especially supervised ones such as Convolutional Neural Networks, remain the go-to approach for many important tasks. Nonetheless, training these models typically requires massive amounts of good-quality annotated data, which makes them impractical in many real-world applications. This PhD program seeks answers to such problems, targeting important use-cases in today's society. |
Research objectives and methods | The main goal of this PhD program is the investigation of robust AI-based decision making in data-limited situations. This includes three possible scenarios, which are typical of many important real-world applications: - the training data is difficult to obtain, or it is available in limited quantity; - obtaining the training data is not difficult, but it is either difficult or economically impractical to have human experts labelling the data; - the training data/annotations are available, but their quality is very poor. Possible solutions involve different approaches, from classic transfer learning and domain adaptation techniques to data augmentation with generative modelling and semi- or self-supervised learning, where access to real data of the target application is either minimized or avoided altogether. In addition, probabilistic approaches (e.g., Bayesian inference) can help to properly quantify the uncertainty level both at training and at inference time, making the decision process more robust to noisy data and inconsistent annotations (a minimal sketch of one such technique follows this proposal). This research proposal aims to investigate and advance the state of the art in such areas. The outline can be divided into 3 consecutive phases, one for each year of the program: - In the first year, the candidate will acquire the necessary background by attending PhD courses and surveying the relevant literature, and will start experimenting with the available state-of-the-art techniques. A seminal conference publication is expected at the end of the year. - In the second year, the candidate will select and address some relevant use-cases, well representing the three data-limited scenarios mentioned before. Stemming from the supervisors' collaborations and current research activity, these use-cases may involve Industry 4.0 applications (for example, smart manufacturing and industrial 3D printing) as well as biomedicine and digital pathology. There is some scope to align the specific focus of these use-cases with the interests and background of the prospective student, as well as with those of the various collaborators that could be involved in the project activity: research centers such as the Inter-departmental Center for Additive Manufacturing at PoliTO and the National Institute for Research in Digital Science and Technology (INRIA, France), as well as industries such as Prima Industrie, Stellantis, Avio Aero, etc. At the end of the second year, the candidate is expected to target at least one paper in a well-reputed conference in the field of applied AI, and possibly another publication in a Q1 journal of the Computer Science sector (e.g., Pattern Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, etc.). - In the third year, the candidate will consolidate the models and approaches investigated in the second year, and possibly integrate them into a standalone architecture. The candidate will also finalize this work into at least another major journal publication, as well as into a PhD thesis to defend at the end of the program. |
Required skills | The ideal candidate for this PhD program has: - a positive attitude towards research activity and teamwork - solid programming skills - solid basics of linear algebra, probability, and statistics - good communication and problem-solving skills - some prior experience in the design and development of machine learning and deep learning architectures. |
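As a concrete illustration of the probabilistic direction mentioned in the objectives, the following is a minimal sketch of Monte Carlo dropout, a common approximation to Bayesian inference in deep networks. The model, data, and sample count are placeholder assumptions, not part of the proposal.

```python
# Minimal sketch of Monte Carlo dropout for uncertainty-aware inference:
# keep dropout active at test time and read the spread of repeated
# stochastic forward passes as a confidence signal.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 3),  # 3-class toy problem
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(n_samples)
        ])
    mean = probs.mean(dim=0)   # approximate predictive distribution
    std = probs.std(dim=0)     # per-class uncertainty
    return mean, std

x = torch.randn(4, 16)         # a batch of 4 unlabeled samples
mean, std = mc_dropout_predict(model, x)
print("predictions:", mean.argmax(dim=-1))
print("max per-class std:", std.max(dim=-1).values)  # high std -> defer to a human
```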
Artificial Intelligence applications for advanced manufacturing systems | |
Proposer | Santa Di Cataldo, Francesco Ponzio, Enrico Macii |
Topics | Data science, Computer vision and AI |
Group website | https://eda.polito.it/ https://www.linkedin.com/company/edagroup-polito/ |
Summary of the proposal | Industry 4.0 refers to digital technologies designed to sense, predict, and interact with production systems, to make decisions that support productivity, energy-efficiency, and sustainability. While Artificial Intelligence plays a crucial role in this paradigm, many challenges are still posed by the nature and dimensionality of the data, and by the immaturity and intrinsic complexity of some of the processes involved. The aim of this PhD program is to successfully tackle these challenges. |
Research objectives and methods | The main goal of this PhD program is the investigation, design and deployment of state-of-the-art Artificial Intelligence approaches in the context of the smart factory, with special regard to new-generation manufacturing systems. These tasks include: - quality assurance and inspection of manufactured products via heterogeneous sensor data (e.g., images from visible-range or IR cameras, time-series, etc.) - process monitoring and forecasting - anomaly detection (see the illustrative sketch after this proposal) - failure prediction and maintenance planning support. While the Artificial Intelligence technologies able to address such tasks may already exist and be successfully consolidated in other real-world applications, the specific domain of manufacturing systems poses severe challenges to the effective deployment of these techniques. Among others: - the immaturity of the involved technologies - the complexity of the underlying physical/chemical processes - the lack of effective infrastructures for data collection, integration, and annotation - the necessity to handle heterogeneous and noisy data from different types of sensors/machines - the lack of annotated datasets for training supervised models - the lack of standardized quality measures and benchmarks. This PhD program seeks solutions to these challenges, with a specific focus on new-generation manufacturing systems involving complex processes, for example Additive Manufacturing (AM) and semiconductor manufacturing (SM). - AM includes many innovative 3D printing processes, which are rapidly revolutionizing manufacturing in the direction of higher digitalization of the process and higher flexibility of production. AM involves a fully digitalized process from design to product finishing, and hence it is a perfect candidate for the deployment of Artificial Intelligence. Nonetheless, it is a very complex and still immature technology, with tremendous room for improvement in terms of production time and product defectiveness. Specific use-cases in this regard will stem from the supervisors' collaborations with the Inter-departmental Center for Additive Manufacturing at Politecnico di Torino, as well as with several major industrial partners such as Prima Additive, Stellantis, Avio Aero, etc. - SM is another highly complex process, entailing a wide array of subprocesses and diverse equipment. Driven by the Industry 4.0 revolution and the European Chips Act, the semiconductor industry is investing heavily in the digitalization of its production chain. As a result of these investments, the chip production process has been equipped with multiple sensors that constantly monitor the evolution of each manufacturing phase, from oxidation to testing and packaging, thus collecting a tremendous amount of heterogeneous data. Artificial Intelligence is widely acknowledged to have a fundamental role in unveiling the potential and hidden knowledge of such data. Use-cases in this regard will stem from the supervisors' collaborations with important industrial players in this sector, such as STMicroelectronics. The outline of the PhD program can be divided into 3 consecutive phases, one for each year of the program. - In the first year, the candidate will acquire the necessary background by attending PhD courses and surveying the relevant literature, and will start experimenting with state-of-the-art techniques on the available datasets, either from public sources or from past projects of the supervisors. A seminal conference publication is expected at the end of the year.
- In the second year, the candidate will select and address some relevant use-cases, with real data from the industrial partners, and will seek solutions to the technological and computational challenges posed by the specific industrial application. At the end of the second year, the candidate is expected to target at least a second conference paper in a well-reputed industry-oriented conference (e.g., ETFA), and possibly another publication in a Q1 journal of the Computer Science sector (e.g., IEEE Transactions on Industrial Informatics, Expert Systems with Applications, etc.). - In the third year, the candidate will consolidate the models and approaches investigated in the second year, and possibly integrate them into a standalone framework. The candidate will also finalize this work into at least another major journal publication, as well as into a PhD thesis to defend at the end of the program. |
Required skills | The ideal candidate for this PhD program has: - a positive attitude towards research activity and teamwork - solid programming skills - solid basics of linear algebra, probability, and statistics - good communication and problem-solving skills - some prior experience in the design and development of machine learning and deep learning architectures. Some prior knowledge/experience of manufacturing processes is a plus, but not a requirement. |
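The anomaly detection task listed above is illustrated below with a small reconstruction-based autoencoder over windows of a synthetic sensor signal: windows the model cannot reconstruct well are flagged. This is an assumption-laden sketch of the general approach, not the proposal's method.

```python
# Minimal sketch of reconstruction-based anomaly detection for process
# monitoring: train a tiny autoencoder on in-control sensor windows and
# flag windows with high reconstruction error.
import torch
import torch.nn as nn

torch.manual_seed(0)
WIN = 32
normal = torch.sin(torch.linspace(0, 100, 4096)).unfold(0, WIN, 1)  # windows

ae = nn.Sequential(nn.Linear(WIN, 8), nn.ReLU(), nn.Linear(8, WIN))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(normal), normal)
    loss.backward()
    opt.step()

# Threshold taken from the training distribution of errors (99th percentile).
with torch.no_grad():
    err = ((ae(normal) - normal) ** 2).mean(dim=1)
thr = torch.quantile(err, 0.99)

faulty = normal[0] + torch.randn(WIN) * 0.5   # injected disturbance
with torch.no_grad():
    e = ((ae(faulty) - faulty) ** 2).mean()
print(f"error={e:.4f}, threshold={thr:.4f}, anomaly={bool(e > thr)}")
```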
Digital Wellbeing by Design | |
Proposer | Alberto Monge Roffarello, Luigi De Russis |
Topics | Computer graphics and Multimedia, Data science, Computer vision and AI, Software engineering and Mobile computing |
Group website | https://elite.polito.it/ |
Summary of the proposal | Tools for digital wellbeing allow users to self-regulate their habits with distracting apps and websites. Yet, they are ineffective in the long term, as tech companies still adopt attention-capture designs, e.g., infinite scroll, that compromise users' self-control. This PhD proposal investigates innovative strategies for designers and end users to consider digital wellbeing in user interface design, recognizing the need to foster healthy digital experiences without depending on external support. |
Research objectives and methods | In today's attention economy, tech companies compete to capture users' attention, e.g., by introducing visual features and functionalities - from guilty-pleasure recommendations to content autoplay - that are purposely designed to maximize metrics such as daily visits and time spent. These Attention-Capture Damaging Patterns (ACDPs) [1] compromise users' sense of agency and self-control, ultimately undermining their digital wellbeing. The HCI research community has traditionally considered digital wellbeing an end-user responsibility, enabling users to self-monitor their usage of apps and websites through tools for digital self-control. Nevertheless, studies have shown that these external interventions - especially those that are overly dependent on users' self-monitoring capabilities - are often ineffective in the long term. Taking a complementary perspective, the main research objective of this PhD proposal is to explore how to make digital wellbeing a top design goal in user interface design, establishing a fruitful collaboration between designers and end users and recognizing the critical necessity to foster healthy online experiences and address the potential negative impacts of ACDPs on users' mental health without depending on external support. The PhD student will study, design, develop, and evaluate proper models and novel technical solutions (e.g., tools and frameworks) to support designers and end users in fostering the creation of user interfaces that preserve and respect user attention by design, starting from the relevant scientific literature and performing studies involving designers and end users. In particular, possible areas of investigation are: - Innovative frameworks that define and educate designers on novel, theoretically grounded processes that prioritize digital wellbeing. These processes will build upon existing design guidelines and best practices, providing clear guidance on their application and giving tech companies and designers actionable insights to transition away from the contemporary attention economy. - Creating a validated taxonomy of positive design patterns that respect and preserve the user's attention. These patterns will promote users' agency by design and support reflection by offering the same functionality as ACDPs. - Developing design tools to support designers in prioritizing users' digital wellbeing in real time. Using artificial intelligence and machine learning models, these tools may detect when a designed interface contains ACDPs and/or fails to address digital wellbeing guidelines, suggesting positive design alternatives (a toy sketch of such a detector follows this proposal). - Developing strategies that empower end users to actively participate in designing technology that prioritizes digital wellbeing. This may include the development of platforms for co-designing user interfaces, as well as mechanisms for evaluating existing user interfaces against ACDPs and giving feedback. The proposal will adopt a human-centered approach, and it will build upon the existing scientific literature from different interdisciplinary domains, mainly from Human-Computer Interaction.
The work plan will be organized according to the following four phases, partially overlapping: - Phase 1 (months 0-6): literature review at the intersection of digital wellbeing, design, and ACDPs; focus groups and interviews with designers, practitioners, and end users; definition of a set of use cases and promising strategies to be adopted. - Phase 2 (months 3-24): research, definition, and evaluation of design frameworks and models of positive design patterns. Here, the focus will be on the design of user interfaces for the most commonly used devices, i.e., the smartphone and the PC. - Phase 3 (months 12-36): research, definition, and experimentation of design tools to support designers in prioritizing users' digital wellbeing in real time, integrating the frameworks, design guidelines, and positive design patterns explored and defined in the previous phases. - Phase 4 (months 24-36): extension and possible generalization of the previous phases to include additional devices; evaluation of the proposed solutions in real settings over long periods of time; development and preliminary evaluation of strategies for end-user collaboration. It is expected that the results of this research will be published in some of the top conferences in the Human-Computer Interaction field (e.g., ACM CHI, ACM CSCW, and ACM IUI). Journal publications are expected in a subset of the following international journals: ACM Transactions on Computer-Human Interaction, ACM Transactions on the Web, ACM Transactions on Interactive Intelligent Systems, and International Journal of Human-Computer Studies. [1] A. Monge Roffarello, K. Lukoff, L. De Russis, Defining and Identifying Attention Capture Deceptive Designs in Digital Interfaces, CHI 2023, https://dl.acm.org/doi/abs/10.1145/3544548.3580729 |
Required skills | A candidate interested in the proposal should ideally: - be able to critically analyze and evaluate existing research, as well as gather and interpret data from various sources; - be able to communicate research findings through writing and presenting; - have a solid foundation in computer science/engineering and possess relevant technical skills; - have a good understanding of HCI research methods, especially around needfinding. |
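To make the AI-assisted design-tool idea concrete, here is a deliberately toy sketch of an ACDP detector built as a classifier over hand-crafted screen descriptors. Every feature name and training example is a hypothetical placeholder; the real work would start from a validated taxonomy and annotated interfaces.

```python
# Toy sketch of an ACDP detector: a classifier over hypothetical
# per-screen interface descriptors, trained on annotated mockups.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-screen features: [has_infinite_scroll, has_autoplay,
# notification_badges, pull_to_refresh, explicit_stopping_cue]
X = np.array([
    [1, 1, 3, 1, 0],   # attention-capture heavy screens...
    [1, 0, 5, 1, 0],
    [0, 0, 0, 0, 1],   # ...vs. wellbeing-respecting ones
    [0, 1, 1, 0, 1],
    [1, 1, 4, 1, 0],
    [0, 0, 1, 0, 1],
])
y = np.array([1, 1, 0, 0, 1, 0])           # 1 = contains ACDPs

clf = LogisticRegression().fit(X, y)
new_screen = np.array([[1, 0, 2, 1, 0]])    # a screen under design
print("ACDP risk:", clf.predict_proba(new_screen)[0, 1])
```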
Goal-Oriented Adaptive Learning for 6G | |
Proposer | Claudio Ettore Casetti, Marco Rapelli |
Topics | Software engineering and Mobile computing, Data science, Computer vision and AI |
Group website | |
Summary of the proposal | 6G will connect autonomous systems, requiring a shift from traditional data delivery to goal-oriented communication. This AI-driven approach prioritizes relevant data for decision-making, optimizing efficiency. Key areas include intelligent routing, semantic exchange, and intent-based models. This PhD research aims to develop AI-orchestrated data exchange, explore causal inference and contrastive learning for relevance extraction, and design adaptive, task-driven networking frameworks. |
Research objectives and methods | Outline. Goal-oriented communication can transform networking into an intelligent, context-aware, and task-driven system by focusing on: - Intelligent Routing and Data Prioritization: instead of treating all packets equally, goal-oriented communication prioritizes and routes information based on its relevance to an ongoing process. For example, in an autonomous traffic management system, real-time hazard notifications should take precedence over general telemetry data. - Semantic and Task-Aware Information Exchange: networks will no longer transmit all available data but instead extract and share only the information necessary for AI models or human users to make a decision. For example, in industrial automation, rather than sending thousands of sensor readings per second, a machine could communicate only when an anomaly is detected, significantly reducing bandwidth and computation costs (a minimal sketch of this idea follows this proposal). - Intent-Based and Goal-Driven Communication Models: goal-oriented networks move beyond conventional request-response models to intention-based data exchange, where AI-driven entities anticipate what information is needed to complete a task and optimize communication accordingly. For instance, in autonomous vehicle coordination, a vehicle does not need to continuously broadcast its speed and position but only shares critical updates when approaching intersections or hazards. The main objectives of this PhD research are: - Define AI-Orchestrated Data Exchange Models, by developing AI-driven approaches to filter, prioritize, and exchange only task-relevant information between connected devices and infrastructures. The PhD candidate will be required to investigate techniques such as semantic communication, federated meta-learning for adaptive network intelligence, and goal-oriented routing to optimize network resource utilization. - Investigate innovative AI techniques to be used with goal-oriented communication, such as Dynamic Causal Inference for Relevance Extraction (models that leverage causal reasoning to identify the true cause-effect relationships within network data, thus enabling the system to determine which pieces of information are causally relevant to a given goal, rather than merely correlated) or Multi-Modal Contrastive Learning for Unified Semantic Representation (using contrastive self-supervised learning to fuse data from multiple modalities, e.g., sensor data, localization information, and communication signals, into a cohesive semantic representation that emphasizes task-specific features). - Develop Context-Aware and Task-Driven Networking Frameworks, by designing adaptive, scalable, goal-oriented communication models that dynamically adjust information exchange based on real-time context and application needs. - Implement AI-based decision-making frameworks to ensure that communication serves system-wide objectives rather than individual data requests. Year 1: Theoretical Foundations & Initial Models. Year 2: Model Implementation & Performance Evaluation. Year 3: System Integration & Testing. |
Required skills | The ideal candidate should have a strong understanding of networks and advanced communication technologies, including RAN, 5G, and 6G systems. They should be proficient in protocol design and optimization and have a grasp of methodologies for the integration of AI-driven solutions in next-generation networking. Proficiency in programming (Python, TensorFlow/PyTorch) and network simulation tools is required. Analytical skills and experience with predictive modelling and analysis are highly valued. |
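The "communicate only when an anomaly is detected" example from the outline can be sketched as follows: an edge agent maintains running statistics over its sensor stream (Welford's online algorithm) and emits a message only when a reading is statistically surprising. Thresholds and the message format are illustrative assumptions.

```python
# Minimal sketch of relevance-gated transmission: suppress routine
# readings, send only statistically surprising ones.
import math
import random

class GoalOrientedSender:
    def __init__(self, z_threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.z_threshold = z_threshold

    def observe(self, x):
        # Welford's online update of mean/variance.
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
        std = math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0
        if std and abs(x - self.mean) / std > self.z_threshold:
            return {"event": "anomaly", "value": x}   # worth sending
        return None                                    # suppressed

random.seed(1)
sender, sent = GoalOrientedSender(), 0
for t in range(10_000):
    x = random.gauss(20.0, 0.5) + (8.0 if t == 7_000 else 0.0)  # one hazard
    if sender.observe(x) is not None:
        sent += 1
print(f"sent {sent} of 10000 readings")   # bandwidth saved by relevance gating
```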
Security of Linux Kernel Extensions | |
Proposer | Riccardo Sisto, Daniele Bringhenti |
Topics | Cybersecurity, Software engineering and Mobile computing, Parallel and distributed systems, Quantum computing |
Group website | https://netgroup.polito.it |
Summary of the proposal | eBPF (extended Berkeley Packet Filter) and XDP (eXpress Data Path) are technologies recently introduced in Linux to enable the execution of user-defined plugins in the Linux kernel for processing network packets at very high speed. This research aims to perform a deep study of the security of these technologies, enriching the still limited literature in this field, and to propose code development techniques that avoid the most dangerous related vulnerabilities by construction. |
Research objectives and methods | Today, there is a growing interest in eBPF and XDP in the networking field because such technologies allow ultra-high-speed monitoring of network traffic in real time. However, the security of these techniques has not yet been studied adequately. Moreover, as witnessed by several related vulnerabilities that have been discovered recently, eBPF/XDP security is not yet satisfactory, even though eBPF code is statically analyzed by a bytecode verifier before being accepted for execution by the Linux kernel (a minimal example of this load-time verification is sketched after this proposal). The main objective of the proposed research is to improve the state of the art of secure coding for eBPF/XDP code. This will be done by first studying the state of the art and the attack surface of the eBPF/XDP technologies. Then, new techniques will be proposed to produce code that is provably free from the most dangerous vulnerabilities by construction. In this research work, the candidate will exploit the expertise in formal methods available in the proposer's research group. The research activity will be organized in three phases: Phase 1 (1st year): the candidate will analyze and identify the main security issues and attack surfaces of eBPF/XDP code, going beyond the limited studies available today in the literature on the topic. This will be done by also applying new formal modeling approaches, developed by the candidate with the tutor's help, to look for new classes of possible eBPF/XDP vulnerabilities in a systematic way. At the end of this phase, some preliminary results are expected to be published, such as a survey of the state of the art and the findings of the systematic search for new classes of vulnerabilities. During the first year, the candidate will also acquire the background necessary for the research, by attending courses and by personal study. Phase 2 (2nd year): the candidate will develop techniques to support the programmer in developing eBPF/XDP code that is provably free from the most important classes of vulnerabilities. This will be done by leveraging the knowledge about eBPF/XDP code security acquired in the first year, and by developing a formal secure-by-construction approach for the development of eBPF code. Particular emphasis will also be given to the experimental evaluation of the developed approach. The results of this work will also be submitted for publication, aiming at least at a journal publication. Phase 3 (3rd year): based on the results achieved in the previous phase, the proposed approach will be further refined, to improve its precision and relevance, and the related dissemination activity will be completed. The work will be done in synergy with the European project ELASTIC, which started in 2024 with the goal of developing a software architecture for extreme-scale analytics based on recent programming technologies like eBPF/XDP and Wasm, characterized by high security standards. The proposer's group participates as one of the ELASTIC partners and is involved in the study of the security of eBPF/XDP, which is strictly related to the proposed research. The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of cybersecurity (e.g., IEEE S&P, ACM CCS, NDSS, ESORICS, IFIP SEC, DSN, ACM Transactions on Information and System Security, or IEEE Transactions on Dependable and Secure Computing) and networking (e.g., INFOCOM, IEEE/ACM Transactions on Networking, or IEEE Transactions on Network and Service Management). |
Required skills | To successfully develop the proposed activity, the candidate should have a background in cybersecurity, software engineering and networking. Some knowledge of formal languages and formal methods can be useful, but it is not strictly required: the candidate can acquire this knowledge and related skills as part of the PhD Program, by exploiting specialized courses. |
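For readers unfamiliar with the technology under study, the sketch below loads a minimal XDP packet counter through the bcc Python bindings; the embedded C program must pass the in-kernel bytecode verifier at load time, which is precisely the safety net whose limits this proposal targets. It assumes bcc is installed, root privileges, and an interface named eth0.

```python
# Minimal XDP packet counter via the bcc Python bindings. Assumptions:
# bcc installed, root privileges, network interface "eth0" exists.
import ctypes as ct
import time
from bcc import BPF

prog = r"""
#include <uapi/linux/bpf.h>

BPF_ARRAY(pkt_count, u64, 1);

int xdp_counter(struct xdp_md *ctx) {
    u32 key = 0;
    u64 *val = pkt_count.lookup(&key);  // verifier rejects a missing NULL check
    if (val)
        __sync_fetch_and_add(val, 1);
    return XDP_PASS;                    // observe only, never drop
}
"""

b = BPF(text=prog)                       # bytecode is verified at load time
fn = b.load_func("xdp_counter", BPF.XDP)
b.attach_xdp("eth0", fn, 0)
try:
    time.sleep(5)
    print("packets seen:", b["pkt_count"][ct.c_int(0)].value)
finally:
    b.remove_xdp("eth0", 0)              # always detach, even on error
```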
Local energy markets in citizen-centered energy communities | |
Proposer | Edoardo Patti, Enrico Macii, Lorenzo Bottaccioli |
Topics | Software engineering and Mobile computing, Parallel and distributed systems, Quantum computing, Computer architectures and Computer aided design |
Group website | www.eda.polito.it |
Summary of the proposal | Energy communities will enable citizens to participate actively in local energy markets by exploiting new digital tools. Citizens will need to understand how to interact with smart energy systems, novel digital tools, and local energy markets. Thus, new and complex socio-techno-economic interactions will take place in such systems, and they need to be simulated to evaluate future impacts. A novel co-simulation framework is needed, combining agent-based modelling techniques with external simulators. |
Research objectives and methods | The diffusion of distributed (renewable) energy sources poses new challenges in the underlying energy infrastructure, e.g., distribution and transmission networks and/or micro (private) electric grids. The optimal, efficient and safe management and dispatch of electricity flows among different actors (i.e., prosumers) is key to supporting the diffusion of the distributed energy sources paradigm. The goal of the project is to explore different corporate structures, billing and sharing mechanisms inside energy communities. For instance, the use of smart energy contracts based on Distributed Ledger Technology (blockchain) for energy management in local energy communities will be studied. A testbed comprising physical hardware (e.g., smart meters) connected in the loop with a simulated energy community environment (e.g., a building or a cluster of buildings) exploiting different Renewable Energy Sources (RES) and energy storage technologies will be developed and tested during the three-year program. Hence, the research will focus on the development of agents capable of describing: - the final customer's/prosumer's beliefs, desires, intentions, and opinions; - the local energy market where prosumers can trade their energy and/or flexibility (a minimal market-clearing sketch follows this proposal); - the local system operator that has to ensure grid reliability. All the software entities will be coupled with external simulators of the grid and energy sources in a plug-and-play fashion. Hence, the overall framework has to be able to work in a co-simulation environment, with the possibility of performing hardware-in-the-loop experiments. The final outcome of this research will be an agent-based modelling tool that can be exploited for: - planning the evolution of future smart multi-energy systems by taking into account the operational phase; - evaluating the effect of different policies and related customer satisfaction; - evaluating the diffusion of technologies and/or energy policies under different regulatory scenarios; - evaluating new business models for energy communities and aggregators. During the 1st year, the candidate will study state-of-the-art solutions of existing agent-based modelling tools in order to identify the best available solution for large-scale smart energy system simulation in distributed environments. Furthermore, the candidate will review the state of the art in prosumer/aggregator/market modelling in order to identify the challenges and possible innovations. Moreover, the candidate will focus on the review of possible corporate structures, billing and sharing mechanisms of energy communities. Finally, he/she will start the design of the overall platform, starting with the requirements identification and definition. During the 2nd year, the candidate will complete the design phase and will start the implementation of the agent intelligence. Furthermore, he/she will start to integrate agents and simulators together in order to create the first beta version of the tool. During the 3rd year, the candidate will finalize the overall platform and test it in different case studies and scenarios in order to show the effects of the different corporate structures, billing and sharing mechanisms in energy communities.
Possible international scientific journals and conferences: - IEEE Transactions on Smart Grid, - IEEE Transactions on Evolutionary Computation, - IEEE Transactions on Control of Network Systems, - Environmental Modelling and Software, - JASSS, - ACM e-Energy, - IEEE EEEIC international conference, - IEEE SEST international conference, - IEEE Compsac international conference |
Required skills | Programming and Object-Oriented Programming (preferably in Python). Frameworks for Multi-Agent Systems development (preferable). Development in web environments (e.g., REST web services). Computer Networks. |
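One interaction the envisioned framework would have to simulate is local market clearing. Below is a minimal, self-contained sketch of a uniform-price double auction for one trading interval; agents, prices, and the midpoint pricing rule are illustrative assumptions.

```python
# Minimal sketch: clearing one trading interval of a local energy market
# with a double auction between prosumer agents.
import random

random.seed(42)
# (price EUR/kWh, quantity kWh) submitted by prosumer agents
bids = sorted(((random.uniform(0.10, 0.30), random.randint(1, 5))
               for _ in range(10)), key=lambda b: -b[0])   # buyers, high first
asks = sorted(((random.uniform(0.05, 0.25), random.randint(1, 5))
               for _ in range(10)), key=lambda a: a[0])    # sellers, low first

traded, clearing_price = 0, None
bi = ai = 0
bid_q, ask_q = bids[0][1], asks[0][1]
while bi < len(bids) and ai < len(asks) and bids[bi][0] >= asks[ai][0]:
    q = min(bid_q, ask_q)                      # match as much as possible
    traded += q
    clearing_price = (bids[bi][0] + asks[ai][0]) / 2   # midpoint rule
    bid_q -= q
    ask_q -= q
    if bid_q == 0:                             # buyer fully served, next one
        bi += 1
        bid_q = bids[bi][1] if bi < len(bids) else 0
    if ask_q == 0:                             # seller sold out, next one
        ai += 1
        ask_q = asks[ai][1] if ai < len(asks) else 0

print(f"cleared {traded} kWh at {clearing_price:.3f} EUR/kWh")
```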
Simulation and Modelling of V2X connectivity with traffic simulation | |
Proposer | Edoardo Patti, Enrico Macii, Lorenzo Bottaccioli |
Topics | Data science, Computer vision and AI, Parallel and distributed systems, Quantum computing, Software engineering and Mobile computing |
Group website | www.eda.polito.it |
Summary of the proposal | The development of novel ICT solutions in smart grids has opened new opportunities to foster novel services for energy management and savings in all end-use sectors, such as demand flexibility, with particular emphasis on Electric Vehicle connectivity. Thus, there will be a strong interaction among transportation, traffic trends and energy distribution systems. New simulation tools are needed to evaluate the impact of Electric Vehicles on the grid by considering citizens' behaviors. |
Research objectives and methods | This research aims at developing novel simulation tools for smart city/smart grid scenarios that exploit the Agent-Based Modelling (ABM) approach to evaluate novel strategies to manage V2X connectivity with traffic simulation. The candidate will develop an ABM simulator providing a realistic virtual city where different scenarios will be executed. The ABM should be based on real data, demand profiles and traffic patterns. Furthermore, the simulation framework should be flexible and extendable so that: i) it can be improved with new data from the field; ii) it can be interfaced with other simulation layers (i.e., physical grid simulators, communication simulators); iii) it can interact with external tools executing real policies (such as energy aggregation). This simulator will be a useful tool to analyse how V2X connectivity and the associated services impact both social behaviours and traffic. It will also help the understanding of the impact of new actors and companies (e.g., sharing companies) on both the marketplace and society, again by analysing social behaviours and traffic conditions. In a nutshell, the ABM simulator will simulate both traffic variations and the possible advantages of V2X connectivity strategies in a smart grid context (a minimal sketch of such agent logic follows this proposal). This ABM simulator will be designed and developed to span different spatial-temporal resolutions. All the software entities will be coupled with external simulators of the grid and energy sources in a plug-and-play fashion, ready to be integrated with external simulators and platforms. This will enhance the resulting ABM framework, also unlocking hardware-in-the-loop features. The outcomes of this research will be an agent-based modelling tool that can be exploited for: - simulating V2X connectivity considering traffic conditions; - evaluating the effect of different policies and related customer satisfaction; - evaluating the diffusion and acceptance of demand flexibility strategies; - evaluating new business models for future companies and services. During the 1st year, the candidate will study state-of-the-art solutions of existing agent-based modelling tools to identify the best available solution for large-scale traffic simulation in distributed environments. Furthermore, the candidate will review the state of the art of V2X connectivity to identify the challenges and possible innovations. Moreover, the candidate will focus on the review of Artificial Intelligence algorithms for simulating traffic conditions and variations to estimate EV flexibility and users' preferences. Finally, he/she will start the design of the overall ABM framework and algorithms, starting with the requirements identification and definition. During the 2nd year, the candidate will complete the design phase, start the implementation of the agents' intelligence, and test the first version of the proposed solution. During the 3rd year, the candidate will finalize the overall ABM framework and AI algorithms and test them in different case studies and scenarios to assess the impact of V2X connection strategies and novel business models. Possible international scientific journals and conferences: - IEEE Transactions on Smart Grid, - IEEE Transactions on Evolutionary Computation, - IEEE Transactions on Control of Network Systems, - Environmental Modelling and Software, - JASSS, - ACM e-Energy, - IEEE EEEIC international conference, - IEEE SEST international conference, - IEEE Compsac international conference |
Required skills | Programming and Object-Oriented Programming (preferably in Python). Frameworks for Multi-Agent Systems development (preferable). Development in web environments (e.g., REST web services). Computer Networks. |
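A minimal sketch of the kind of agent logic such a simulator would host follows: EV agents decide each hour whether to charge, idle, or discharge (V2G) from a price signal, while the environment aggregates the net grid impact. All thresholds and parameters are illustrative placeholders.

```python
# Toy ABM sketch: a fleet of price-responsive EV agents and the
# aggregate load they impose on the grid, hour by hour.
import math
import random

random.seed(7)

class EVAgent:
    def __init__(self):
        self.soc = random.uniform(0.2, 0.9)      # state of charge, 0..1

    def step(self, price):
        if self.soc < 0.3:                       # low battery: must charge
            self.soc = min(1.0, self.soc + 0.05)
            return 1
        if price < 0.15 and self.soc < 1.0:      # cheap energy: charge
            self.soc = min(1.0, self.soc + 0.05)
            return 1
        if price > 0.30 and self.soc > 0.6:      # expensive: discharge (V2G)
            self.soc -= 0.05
            return -1
        return 0                                 # stay idle

fleet = [EVAgent() for _ in range(1000)]
for hour in range(24):
    price = 0.10 + 0.25 * (0.5 + 0.5 * math.sin(hour / 24 * 2 * math.pi))
    net_load = sum(ev.step(price) for ev in fleet)
    print(f"h{hour:02d} price={price:.2f} EUR/kWh net_grid_load={net_load:+d}")
```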
Machine Learning techniques for real-time State-of-Health estimation of Electric Vehicle batteries | |
Proposer | Edoardo Patti, Enrico Macii, Alessandro Aliberti |
Topics | Data science, Computer vision and AI, Software engineering and Mobile computing, Computer architectures and Computer aided design |
Group website | https://eda.polito.it/ |
Summary of the proposal | This Ph.D. research proposal aims at studying novel software solutions based on Machine Learning (ML) techniques to estimate the State-of-Health (SoH) of batteries in Electric Vehicles (EVs) in near-real-time. This research area has been gaining strong interest in recent years, as the number of EVs is constantly rising. Knowing the SoH can unlock different possible strategies i) to reuse EV batteries in other contexts, e.g., as stationary energy storage systems in Smart Grids, or ii) to recycle them. |
Research objectives and methods | In recent years, the number of Electric Vehicles (EVs) has increased significantly, and it is expected to grow further in the upcoming years. Due to the use of high-value materials, there is a strong economic, environmental and political interest in implementing solutions to recycle EV batteries, for example by reusing them in stationary applications as energy storage systems in Smart Grids. To achieve this, novel tools are needed to estimate the battery State-of-Health (SoH), i.e., the measurement of the remaining battery capacity, in near-real-time. Currently, SoH is determined by bench discharging tests taking several hours, making this process time-consuming and expensive. The objective of this Ph.D. proposal consists of the design and development of models based on Machine Learning (ML) techniques that will exploit both synthetic and real-world datasets. The synthetic dataset is needed to train and test a generic ML model suitable for any EV, independently of a specific brand and/or model. The real-world dataset, obtained by monitoring real EVs, is needed to fine-tune the ML models, for example by applying transfer learning techniques, customizing them more and more to the specific brand and model of the real-world EV to monitor (a minimal sketch of this synthetic-to-real pipeline follows this proposal). During the three years of the Ph.D., the research activity will be divided into four phases: - Study and analysis of both state-of-the-art solutions and datasets of real-world EV monitoring. - Design and development of a realistic simulator of an EV fleet to generate the synthetic and realistic dataset. Starting from both datasheet information of different EVs (in terms of brand and model) and information provided by the Italian National Institute of Statistics (ISTAT), the simulator will simulate different routes in terms of length, altitude and travel speed, impacting battery wear differently, thus making the resulting dataset realistic and heterogeneous. - Design and development of ML-based models, trained and tested with the synthetic dataset, to estimate the SoH of EV batteries. - Application of transfer learning techniques to the ML-based models (from the previous bullet #3) to fine-tune them by exploiting datasets of real-world EV monitoring (result of the previous bullet #1). Possible international scientific journals and conferences: - IEEE Transactions on Smart Grid, - IEEE Transactions on Vehicular Technology, - IEEE Transactions on Industrial Informatics, - IEEE Transactions on Industry Applications, - Engineering Applications of Artificial Intelligence, - Expert Systems with Applications, - ACM e-Energy, - IEEE EEEIC international conference, - IEEE SEST international conference, - IEEE Compsac international conference |
Required skills | Programming and Object-Oriented Programming (preferably in Python). Knowledge of Machine Learning and Neural Networks. Knowledge of frameworks to develop models based on Machine Learning and Neural Networks. Knowledge of Internet of Things application development. |
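The synthetic-to-real pipeline described above is sketched below: a small network learns SoH from synthetic charge-curve features, then only its head is fine-tuned on a few "real" samples, mimicking transfer learning across fleets. Both data generators are stand-ins for the fleet simulator and field data.

```python
# Minimal sketch: pre-train a SoH regressor on synthetic data, then
# fine-tune only the head on a small, brand-specific real-world set.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n, bias=0.0):
    x = torch.rand(n, 8)                         # charge-curve features
    soh = 0.6 + 0.4 * x.mean(dim=1, keepdim=True) + bias
    return x, soh

backbone = nn.Sequential(nn.Linear(8, 32), nn.ReLU())
head = nn.Linear(32, 1)
model = nn.Sequential(backbone, head)

def fit(params, x, y, epochs=300):
    opt = torch.optim.Adam(params, lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

x_syn, y_syn = make_data(2000)                   # brand-agnostic synthetic set
print("synthetic loss:", fit(model.parameters(), x_syn, y_syn))

for p in backbone.parameters():                  # freeze shared representation
    p.requires_grad = False
x_real, y_real = make_data(50, bias=-0.05)       # small, brand-specific set
print("fine-tune loss:", fit(head.parameters(), x_real, y_real, epochs=200))
```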
Natural Language Processing and Large Language Models for source code generation | |
Proposer | Edoardo Patti, Enrico Macii, Lorenzo Bottaccioli |
Topics | Data science, Computer vision and AI, Software engineering and Mobile computing |
Group website | https://eda.polito.it/ |
Summary of the proposal | This Ph.D. research focuses on revolutionizing source code generation by harnessing the capabilities of Natural Language Processing, exploring novel methodologies to facilitate the creation of high-quality code through enhanced human-machine collaboration. By leveraging advanced language models, like Generative Pretrained Transformer models, the research seeks to optimize the process, leading to more efficient, expressive, and context-aware source code generation in software development. |
Research objectives and methods | The integration of Artificial Intelligence, especially Machine/Deep Learning, in industrial processes promises swift changes. Companies stand to benefit in the short term from improved production quality, improved efficiency, and automated routine tasks, fostering positive impacts on work environments. Building on Natural Language Processing, Large Language Models (LLMs) have already demonstrated significant progress in healthcare, education, software development, finance, journalism, scientific research, and customer support. The future entails optimizing LLMs for widespread use, enhancing the competitiveness of the industrial system and streamlining collaborative supply chain management. The objective of this Ph.D. proposal consists of the design and development of AI-assisted models based on Natural Language Processing (NLP) and Large Language Models (LLMs) to optimize AI-assisted source code generation in the context of software development, leading to a more efficient, expressive, and context-aware process (a minimal generation sketch follows this proposal). During the three years of the Ph.D., the research activity will be divided into five phases: - Survey existing literature on NLP applications in software engineering and analyze methodologies and challenges in source code generation using language models. - Design and develop Large Language Models for improved programming language understanding by investigating techniques for domain-specific customization of language models. - Develop algorithms and strategies for context-aware source code generation by implementing prototype systems for evaluation and refinement. - Design and implement a collaborative framework that seamlessly integrates developer input with language model suggestions. - Evaluate the effectiveness of the collaboration framework through user studies and real-world projects. Possible international scientific journals and conferences: - IEEE Transactions on Audio, Speech, and Language Processing, - IEEE Transactions on Software Engineering, - IEEE Transactions on Industrial Informatics, - IEEE Transactions on Industry Applications, - Engineering Applications of Artificial Intelligence, - Expert Systems with Applications, - IEEE NLP-KE internat. conf., - IEEE ICNLP internat. conf., - IEEE Compsac internat. conf. |
Required skills | Programming and Object-Oriented Programming (preferably in Python). Knowledge of Natural Language Processing and Large Language Models. Knowledge of frameworks to develop models based on Natural Language Processing and Large Language Models. |
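As a baseline illustration of the starting point for this research, the sketch below prompts a pretrained causal language model to complete source code using the Hugging Face transformers API. The checkpoint name is only an example; running it requires downloading the model.

```python
# Minimal code-completion sketch with a pretrained causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"      # example checkpoint
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tok(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,                             # deterministic completion
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(out[0], skip_special_tokens=True))
```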
Advanced ICT solutions and AI-driven methodologies for Cultural Heritage resilience | |
Proposer | Edoardo Patti, Enrico Macii, Alessandro Aliberti |
Topics | Data science, Computer vision and AI, Software engineering and Mobile computing, Parallel and distributed systems, Quantum computing |
Group website | https://eda.polito.it/ |
Summary of the proposal | This Ph.D. research leverages cutting-edge technologies to preserve Cultural Heritage (e.g., monuments, historical sites, etc.) against natural disasters, climate change, and human-related threats. The interdisciplinary approach integrates ICT tools, Machine Learning, and Data Analytics to develop proactive strategies for the risk assessment, monitoring, and preservation of cultural assets, addressing challenges through innovative solutions for sustainable conservation and resilience. |
Research objectives and methods | Recent crises and disasters have affected European citizens' lives, livelihoods, and environment in unforeseen and unprecedented ways. They have transformed our very understanding of them by reshaping hitherto unchallenged notions of the 'local' and the 'global' and putting into question well-rehearsed conceptual distinctions between 'natural' and 'man-made' disasters. Modern and high-performance ICT solutions need to be deployed in order to prevent and mitigate the effects of disasters and climate change events, enabling critical thinking and framing a holistic approach for a better understanding of catastrophic events. The objective of this Ph.D. proposal consists of the design and development of ICT-driven solutions to develop proactive strategies for the risk assessment, monitoring, and preservation of Cultural Heritage. The candidate will adopt a comprehensive interdisciplinary approach, seamlessly integrating modern techniques rooted in IoT, Machine/Deep Learning, and Big Data paradigms within the realm of cultural heritage resilience. This approach transcends purely technical facets, encompassing social and cultural dimensions to provide a holistic understanding and effective solutions. During the three years of the Ph.D., the research activity will be divided into five phases: - Survey existing literature on modern AI-driven ICT solutions and analyze methodologies and challenges in Cultural Heritage resilience. - Design and develop a data-driven digital ecosystem - i.e., a distributed IoT platform - for the collection and harmonization of heterogeneous data from the real world, to enable on-top advanced visualization and analysis services (e.g., Digital Twins). A multidisciplinary approach, ranging from IoT paradigms to the application of Machine/Deep Learning methodologies for Big Data analysis, is required in order to allow the development of proactive strategies for the risk assessment, monitoring, and preservation of Cultural Heritage (a minimal monitoring sketch follows this proposal). - Develop algorithms and strategies for context-aware Cultural Heritage resilience by implementing prototype systems for evaluation and refinement. - Design and implement continuous improvement and fine-tuning strategies for the development of increasingly effective and high-performing prevention strategies. - Evaluate the effectiveness of the data-driven digital ecosystem and the developed strategies through user studies and real-world projects. Possible international scientific journals and conferences: - IEEE Transactions on Computational Social Systems, - IEEE Transactions on Industrial Informatics, - Journal on Computing and Cultural Heritage, - Journal of Cultural Heritage, - Engineering Applications of Artificial Intelligence, - Expert Systems with Applications, - IEEE CoSt internat. conf., - IEEE SKIMA internat. conf. |
Required skills | Programming and Object-Oriented Programming (preferably in Python). Knowledge of web application programming. Knowledge of IoT paradigms. Knowledge of Machine Learning and Deep Learning. Knowledge of frameworks to develop models based on Machine Learning and Deep Learning. |
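One elementary building block of such a monitoring ecosystem is sketched below: a rolling statistical check over a synthetic crack-width sensor stream that raises a risk flag for conservators. Sensor semantics, window size, and thresholds are illustrative assumptions.

```python
# Toy sketch: rolling z-score check on a structural-health sensor stream.
import random
import statistics
from collections import deque

random.seed(3)
window = deque(maxlen=144)                       # e.g., last 24h at 10-min rate

def check(reading_mm):
    window.append(reading_mm)
    if len(window) < 30:
        return "warming up"
    mu = statistics.fmean(window)
    sigma = statistics.stdev(window) or 1e-9     # guard against zero variance
    z = (reading_mm - mu) / sigma
    return "ALERT: inspect structure" if abs(z) > 4 else "ok"

for t in range(500):
    drift = 0.002 * max(0, t - 400)              # damage begins at t=400
    status = check(0.50 + random.gauss(0, 0.005) + drift)
    if status.startswith("ALERT"):
        print(f"t={t}: {status}")
        break
```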
Embedded Cybersecurity Solutions for Enhanced Resilience in Smart City Environments | |
Proposer | Edoardo Patti, Enrico Macii, Luca Barbierato |
Topics | Computer architectures and Computer aided design, Cybersecurity |
Group website | |
Summary of the proposal | This research aims at developing advanced embedded cybersecurity solutions tailored to the unique challenges of smart city environments. The increasing interconnectivity of devices within smart cities exposes them to cybersecurity threats, necessitating the integration of robust security measures into the very fabric of these systems. By leveraging cutting-edge technologies, this research aims to enhance the resilience of smart cities against cyber threats, ensuring the secure operation of critical services. |
Research objectives and methods | The advent of smart cities heralds a new era of urban development, characterized by the pervasive integration of digital technologies and Internet-of-Things (IoT) devices into the fabric of urban infrastructure. While these advancements promise enhanced efficiency, sustainability, and quality of life for urban residents, they also introduce a myriad of cybersecurity challenges that necessitate immediate attention. The interconnected nature of smart city systems renders them vulnerable to diverse cyber threats, ranging from data breaches and ransomware attacks to potential disruptions in critical services. As such, the imperative to fortify the cybersecurity resilience of smart cities has become a pressing concern in urban planning and infrastructure development. At the heart of addressing the cybersecurity vulnerabilities inherent in smart city environments lies the innovative integration of advanced cryptographic techniques, anomaly detection algorithms, and secure communication protocols into the core of embedded systems (a minimal sketch of the cryptographic ingredient follows this proposal). This research endeavours to harness the power of machine learning and data analytics to develop intelligent cybersecurity solutions capable of real-time threat detection and adaptive response mechanisms. By delving into the intricate interplay between network security principles, IoT protocols, and system architecture, this research aims to craft resilient cybersecurity frameworks specifically tailored to the dynamic and interconnected nature of smart city ecosystems. The technical underpinnings of this research encompass a multidisciplinary approach that converges the realms of cybersecurity, embedded systems, machine learning, and data analytics. By amalgamating these diverse disciplines, this research seeks to construct a robust cybersecurity framework that safeguards critical infrastructure and services and fosters a culture of cyber resilience within smart city environments. The deployment of embedded cybersecurity solutions, fortified with advanced cryptographic algorithms and anomaly detection mechanisms, represents a paradigm shift in fortifying the digital fortresses of smart cities against the ever-evolving landscape of cyber threats. The research will commence with an exhaustive examination of prevailing cybersecurity frameworks and protocols pertinent to smart city settings. Subsequently, novel embedded cybersecurity solutions will be meticulously designed and implemented to cater to the unique requisites of smart cities. These tailored solutions will consider resource constraints, scalability, and real-time threat identification. Rigorous testing and validation in simulated smart city environments will be conducted to assess the efficacy and performance of the developed cybersecurity mechanisms. Furthermore, the research will delve into the socio-technical aspects of embedded cybersecurity in smart cities, exploring privacy, governance, and societal trust implications. Collaboration with industry partners and stakeholders will be integral to validating the practicality and viability of the proposed cybersecurity solutions in real-world deployment scenarios. The objectives of this PhD fellowship span three years, beginning with an extensive assessment of existing cybersecurity frameworks tailored to smart city environments in the first year.
This initial phase involves identifying vulnerabilities and challenges specific to interconnected urban systems, which will inform the design of innovative cybersecurity solutions that incorporate advanced cryptographic techniques and anomaly detection algorithms. In the second year, the focus shifts to the implementation of these solutions in simulated smart city environments, where rigorous testing will evaluate their effectiveness in real-time threat detection and adaptive responses, allowing for iterative refinements based on performance metrics. The final year will concentrate on the optimization and scalability of the developed cybersecurity frameworks, with an emphasis on their integration into existing smart city infrastructure. |
Required skills | Programming and Object-Oriented Programming (preferably in C/C++). Knowledge of operating systems (e.g., UNIX). Knowledge of embedded systems. Knowledge of driver design. Knowledge of cybersecurity. Knowledge of IoT paradigms. |
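One ingredient named in the objectives, secure communication for field devices, is sketched below using AES-GCM authenticated encryption from the Python cryptography package: telemetry is made confidential and tamper-evident, with the device identity bound as associated data. Key provisioning and identity management are out of scope for this sketch.

```python
# Minimal sketch: authenticated encryption of smart-city telemetry with
# AES-GCM, binding the sender's device identity as associated data.
import json
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)        # provisioned at enrollment
aead = AESGCM(key)

def seal(reading: dict, device_id: str) -> bytes:
    nonce = os.urandom(12)                       # never reuse a nonce per key
    plaintext = json.dumps(reading).encode()
    # device_id is authenticated (but not encrypted) associated data
    return nonce + aead.encrypt(nonce, plaintext, device_id.encode())

def unseal(blob: bytes, device_id: str) -> dict:
    nonce, ct = blob[:12], blob[12:]
    return json.loads(aead.decrypt(nonce, ct, device_id.encode()))

blob = seal({"sensor": "air_quality", "pm25": 12.4}, "lamp-post-0042")
print(unseal(blob, "lamp-post-0042"))            # accepted
try:
    unseal(blob, "lamp-post-0666")               # wrong identity -> rejected
except InvalidTag:
    print("tamper/identity check failed")
```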
Privacy-Preserving Machine Learning over IoT networks | |
Proposer | Valentino Peluso, Andrea Calimera, Enrico Macii |
Topics | Computer architectures and Computer aided design, Cybersecurity, Data science, Computer vision and AI |
Group website | www.eda.polito.it www.linkedin.com/company/edagroup-polito/ |
Summary of the proposal | Distributed Machine Learning strategies, like split learning and federated learning, enable decentralized intelligence but are vulnerable to data theft and manipulation, raising privacy and security concerns. Existing defenses often degrade performance and introduce overhead, limiting their adoption in resource-constrained IoT devices. This project aims to develop hardware-aware software optimization techniques for efficient, privacy-preserving ML in distributed IoT systems. |
Research objectives and methods | Research objectives. This project aims to develop and evaluate optimization techniques that address the challenges of privacy and security in distributed machine learning (ML) while ensuring efficiency in resource-constrained IoT environments. Specifically, the objectives include: - Acquire competences in ML and deep learning training and deployment, distributed computing architectures, and existing privacy-preserving techniques. - Develop optimization strategies to make privacy-preserving techniques compatible with the limited resources of low-power end-nodes and off-the-shelf devices, making their implementation feasible in real-world networks and infrastructures. - Identify the evaluation metrics to assess the quality, security and efficiency of privacy-preserving ML frameworks. - Develop an emulation framework for rapid assessment of different optimization strategies and techniques. - Develop multi-objective optimization techniques and algorithms that jointly optimize accuracy, energy efficiency, and communication costs while maintaining privacy protection (a minimal sketch of one such mechanism follows this proposal). The proposed solutions should also be compatible with security defenses against adversarial attacks, such as data and model poisoning, which are notoriously difficult to integrate with standard privacy-preserving techniques. Outline of research work plan. 1st year: the candidate will conduct a comprehensive review of the state of the art in distributed ML, focusing on: (i) existing approaches such as federated learning, split learning, and split inference; (ii) vulnerabilities, threats, and attacks in distributed ML systems; (iii) privacy-preserving techniques, including differential privacy, multi-party computation, and homomorphic encryption; (iv) key performance indicators (KPIs) to evaluate distributed ML strategies and their applicability in IoT systems. The candidate will also develop an initial version of an emulation framework for distributed ML (leveraging existing open-source projects), which will serve as a testbed to evaluate novel optimization strategies. 2nd year: the candidate will design, develop, and validate novel optimization strategies, working across multiple layers: (i) at the software layer, with algorithmic solutions that concurrently optimize accuracy and efficiency; (ii) at the hardware layer, investigating compiler-level optimizations and specialized architectures for acceleration. Rather than treating these optimization strategies as isolated solutions, the candidate will explore their interactions to maximize efficiency. 3rd year: the candidate will test and consolidate the developed methodologies on real applications. The focus will be on emerging applications that could benefit most from privacy-preserving ML, assessing feasibility, robustness, and efficiency in practical scenarios. Possible venues for publications: - IEEE Internet of Things Journal - IEEE Transactions on Parallel and Distributed Systems - IEEE Transactions on Privacy - IEEE Transactions on Information Forensics and Security - IEEE Transactions on Dependable and Secure Computing - ACM Transactions on Embedded Computing Systems - ACM Transactions on Internet of Things - ACM Transactions on Privacy and Security - ACM/IEEE Design Automation Conference (DAC) |
Required skills | Knowledge of standard Machine Learning and Deep Learning techniques and of basic model compression strategies (e.g., pruning, quantization). Background in embedded systems programming. Proficiency in Python, including ML frameworks like scikit-learn and PyTorch. Strong communication and writing skills. |
Data-Driven and Sustainable Solutions for Distributed Systems | |
Proposer | Guido Marchetto, Alessio Sacco |
Topics | Parallel and distributed systems, Quantum computing, Data science, Computer vision and AI |
Group website | http://www.netgroup.polito.it |
Summary of the proposal | Recent advances in cyber-physical systems are expected to support advanced and critical services incorporating computation, communication, and intelligent decision making. The research activity aims to leverage advanced analytics, machine learning, and optimization techniques, to enhance the efficiency, resilience, and sustainability of distributed systems. Key focus areas include reducing energy consumption while using distributed learning techniques and optimizing resource allocation. |
Research objectives and methods | Two research questions (RQ) guide the proposed work: RQ1: How can we design and implement, on local and larger-scale testbeds, effective autonomous solutions that integrate network information at different scopes using recent advances in supervised and reinforcement learning? RQ2: To scale the use of machine learning-based solutions in cyber-physical systems, what are the most efficient distributed machine learning architectures that can be implemented at the edge of such systems? The final target of the research work is to answer these questions, also by evaluating the proposed solutions on small-scale emulators or large-scale virtual testbeds, using a few applications, including virtual and augmented reality, precision agriculture, or haptic wearables. In essence, the main goals are to provide innovation in decision, planning, and responsiveness, using centralized and distributed learning integrated with edge computing infrastructures. Both vertical and horizontal integration will be considered. By vertical integration, we mean considering learning problems that integrate states across hardware and software, as well as states across the network stack at different scopes. For example, the candidate will design data-driven algorithms for planning the deployment of IoT sensors, task scheduling, and resource organization. By horizontal integration, we mean using states from local (e.g., physical layer) and wide-area (e.g., transport layer) scopes as input for the learning-based algorithms. The data needed by these algorithms are carried to the learning actor by means of novel networking protocols. Aside from supporting resiliency with vertical integration, solutions must offer resiliency across a wide (horizontal) range of network operations: from the close edge, i.e., near the device, to the far edge, with the design of secure, data-centric (federated) resource allocation algorithms. The research activity will be organized in three phases: Phase 1 (1st year): the candidate will analyze the state-of-the-art solutions for cyber-physical systems management, with particular emphasis on knowledge-based network automation techniques. The candidate will then define detailed guidelines for the development of architectures and protocols that are suitable for automatic operation and (re-)configuration of such deployments, with particular reference to edge infrastructures. Specific use cases will also be defined during this phase (e.g., in virtual reality, smart agriculture). Such use cases will help identify ad-hoc requirements and will include peculiarities of specific environments. With these use cases in mind, the candidate will also design and implement novel solutions to deal with the partial availability of data within distributed edge infrastructures. This work is expected to lead to conference publications. Phase 2 (2nd year): the candidate will consolidate the approaches proposed in the previous year, focusing on the design and implementation of mechanisms for vertical and horizontal integration of supervised and reinforcement learning. Network and computational resources will be considered for the definition of proper allocation algorithms, with the objective of energy efficiency. All solutions will be implemented and tested. Results will be published, targeting at least one journal publication. Phase 3 (3rd year): the consolidation and the experimentation of the proposed approach will be completed.
Particular emphasis will be given to the identified use cases, properly tuning the developed solutions to real scenarios. Major importance will be given to the quality of service offered, with specific emphasis on the minimization of latencies, in order to enable real-time network automation for critical environments (e.g., telehealth systems, precision agriculture, or haptic wearables; see the toy sketch below). Further conference and journal publications are expected. The research activity is in collaboration with Saint Louis University, MO, USA, and the University of Kentucky, KY, USA, also in the context of some NSF grants. The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of networking and machine learning (e.g., IEEE INFOCOM, ICML, ACM/IEEE Transactions on Networking, or IEEE Transactions on Network and Service Management) and cloud/fog computing (e.g., IEEE/ACM SEC, IEEE ICFEC, IEEE Transactions on Cloud Computing), as well as in publications related to the specific areas that could benefit from the proposed solutions (e.g., IEEE PerCom, ACM MobiCom, IEEE Transactions on Industrial Informatics, IEEE Transactions on Vehicular Technology). |
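As a flavor of the reinforcement-learning angle in RQ1, the toy sketch below uses tabular Q-learning to learn which edge node a task should be offloaded to under a synthetic latency/energy trade-off. The environment, reward shape, and state space are invented for illustration; the actual research targets far richer network state and modern RL methods.

```python
import numpy as np

rng = np.random.default_rng(1)
n_loads, n_nodes = 4, 3            # states: device load level; actions: edge node
Q = np.zeros((n_loads, n_nodes))
alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Toy environment: faster nodes cut latency but cost more energy."""
    latency = (state + 1) * rng.uniform(0.5, 1.5) / (action + 1)
    energy = 0.3 * (action + 1)
    next_state = rng.integers(n_loads)      # load evolves randomly in this sketch
    return -(latency + energy), next_state  # reward penalizes latency and energy

state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    action = rng.integers(n_nodes) if rng.random() < eps else int(Q[state].argmax())
    reward, nxt = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print(Q.round(2))   # learned value of offloading to each node, per load level
```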
Required skills | The ideal candidate has good knowledge and experience in networking and machine learning, or at least in one of the two topics. Availability to spend periods abroad (mainly but not only at Saint Louis University and/or the University of Kentucky) is also important for a profitable development of the research topic. |
Single-cell Multi-omics for Understanding Cellular Heterogeneity | |
Proposer | Stefano Di Carlo, Savino Alessandro, Bardini Roberta |
Topics | Life sciences, Data science, Computer vision and AI |
Group website | |
Summary of the proposal | Single-cell multi-omics analysis integrates data from multiple molecular layers (e.g., transcriptomics, epigenomics, proteomics) within individual cells to provide a deeper understanding of cellular heterogeneity. This project will develop computational methods for integrating and analyzing single-cell sequencing data, supporting disease modeling and therapy optimization. The proposed algorithms will be applied to open datasets to uncover novel insights into cell identity and lineage evolution. |
Research objectives and methods | Single-cell technologies have transformed biology by enabling the analysis of individual cells within heterogeneous populations. These methods allow researchers to study cellular diversity, track cell lineage, and identify molecular signatures underlying disease progression. However, most computational tools have been developed for analyzing individual omic layers, failing to leverage the full potential of multi-omics integration. This project aims to develop novel computational frameworks for integrating single-cell multi-omics data, improving our ability to interpret complex biological systems. The candidate is expected to publish in high-impact journals (e.g., BMC Bioinformatics, IEEE/ACM Transactions on Bioinformatics) and present findings at leading conferences (IEEE BIBM, BIOSTEC). |
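As one concrete (and deliberately simple) example of cross-omics integration, the sketch below projects two synthetic omic layers into a shared latent space with Canonical Correlation Analysis. The data are randomly generated stand-ins for RNA and chromatin-accessibility matrices; real pipelines would add normalization, batch correction, and more expressive models.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_cells = 200
latent = rng.normal(size=(n_cells, 5))            # hidden shared cell states
rna  = latent @ rng.normal(size=(5, 100)) + 0.5 * rng.normal(size=(n_cells, 100))
atac = latent @ rng.normal(size=(5, 80))  + 0.5 * rng.normal(size=(n_cells, 80))

# Project both omic layers into a shared latent space.
cca = CCA(n_components=5)
rna_z, atac_z = cca.fit_transform(rna, atac)

# Cells can now be clustered jointly, e.g., on the averaged embedding.
joint = (rna_z + atac_z) / 2
print(joint.shape)   # (200, 5)
```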
Required skills | Required skills: Nice-to-have skills: |
Cybersecurity of RISC-V-based Cyber-Physical Systems in Embedded Scenarios | |
Proposer | Stefano Di Carlo |
Topics | Computer architectures and Computer aided design, Cybersecurity |
Group website | |
Summary of the proposal | This Ph.D. project aims to enhance the security of RISC-V-based Cyber-Physical Systems (CPS) and embedded computing by addressing vulnerabilities in hardware, memory protection, debugging, and runtime manipulation detection. The research will develop novel CPU-centric architectures and integrate security monitoring and countermeasures to protect against attacks in embedded and high-performance computing environments, ensuring trust throughout the hardware supply chain. |
Research objectives and methods | Background and Motivation. Embedded computing systems (ECS) are the backbone of critical applications, including IoT, transportation, autonomous vehicles, and industrial automation. However, as these systems become increasingly interconnected, they are exposed to significant security risks. Traditional cybersecurity measures focus on software-level protections, but with the emergence of hardware-level attacks, it is crucial to design secure architectures from the ground up. Among the key vulnerabilities in embedded systems are:
- Embedded hardware vulnerabilities (e.g., side-channel attacks, fault injection)
- Memory access protection issues (via Memory Management Units (MMUs) and Memory Protection Units (MPUs))
- Secure debugging mechanisms (Hardware Security Modules and host systems)
- Runtime attack detection
- Secure feature activation mechanisms
Given that hardware serves as the root of trust, any compromise at this level endangers the entire system. This project will investigate and develop security mechanisms to protect RISC-V-based embedded architectures, ensuring trust across the supply chain and during system operation. The main objective of this Ph.D. project is to design and implement security-enhanced RISC-V-based architectures by addressing hardware and runtime security challenges. The research will focus on:
This project will be structured into three key phases:
The proposed research will contribute to:
- More secure RISC-V architectures, enhancing their adoption in CPS, IoT, and automotive industries.
- Improved protection against hardware and runtime attacks.
- Better integration of security features across the hardware supply chain.
- The development of open-source security frameworks for RISC-V systems.
By addressing security challenges across different computing layers, this research aims to create a new generation of trustable RISC-V-based embedded systems. |
Required skills | Mandatory Skills: Preferred Skills (Nice-to-have) |
Challenges and Advancements in Spiking Neural Networks for Neuromorphic Computing | |
Proposer | Stefano Di Carlo, Alessandro Savino |
Topics | Computer architectures and Computer aided design, Data science, Computer vision and AI |
Group website | |
Summary of the proposal | Spiking Neural Networks (SNNs) are a promising alternative to traditional deep learning, offering energy-efficient computation inspired by biological neurons. However, challenges such as training complexity, hardware efficiency, and scalability limit their adoption. This Ph.D. will develop novel training algorithms, efficient SNN architectures, and hardware acceleration techniques to improve SNN performance in edge AI, robotics, and neuromorphic computing applications. |
Research objectives and methods | Background and Motivation. Spiking Neural Networks (SNNs) represent a biologically inspired paradigm for computing, where neurons communicate using discrete spikes instead of continuous values, mimicking real brain activity. These networks hold great potential for:
- Low-power neuromorphic computing
- Event-driven processing in edge AI applications
- Energy-efficient robotics and real-time decision-making
However, despite their theoretical advantages, SNNs face major challenges limiting their widespread adoption. These challenges include:
- Training difficulties: traditional deep learning techniques do not directly apply to SNNs due to non-differentiable spike-based activation functions (see the surrogate-gradient sketch below).
- Hardware inefficiencies: current neuromorphic chips (e.g., Loihi, SpiNNaker) struggle with memory constraints and real-time processing scalability.
- Scalability issues: large-scale SNNs require optimized architectures to handle thousands to millions of spiking neurons efficiently.
The primary objective of this Ph.D. research is to address the fundamental challenges in Spiking Neural Networks by proposing innovative training algorithms, scalable architectures, and efficient neuromorphic hardware solutions.
2nd Year: Algorithm Development & Model Optimization
- Develop hybrid training methods that integrate biological learning principles with gradient-based techniques.
- Optimize SNN architectures for real-time processing.
- Apply the proposed methods to benchmark datasets (MNIST, DVS Gesture Recognition, Speech Processing).
3rd Year: Hardware Acceleration & Validation
- Implement and test optimized SNN models on FPGA/ASIC platforms.
- Evaluate SNN performance on neuromorphic processors (Loihi, SpiNNaker).
- Publish findings in leading AI and neuromorphic computing journals and conferences.
Expected Impact |
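The surrogate-gradient trick mentioned above can be shown in a few lines of PyTorch. This is a minimal toy layer, assuming a fast-sigmoid surrogate and a soft reset; production SNN work would build on dedicated libraries and neuromorphic backends.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2  # surrogate derivative

def lif_forward(inputs, w, beta=0.9, v_th=1.0):
    """Simulate a layer of leaky integrate-and-fire neurons over time."""
    T, batch, _ = inputs.shape
    v = torch.zeros(batch, w.shape[1])
    spikes = []
    for t in range(T):
        v = beta * v + inputs[t] @ w          # leaky integration of input current
        s = SpikeFn.apply(v - v_th)           # fire when membrane crosses threshold
        v = v - s * v_th                      # soft reset after a spike
        spikes.append(s)
    return torch.stack(spikes)

w = torch.randn(12, 4, requires_grad=True)
x = torch.rand(20, 8, 12)                      # (time, batch, inputs)
out = lif_forward(x, w)
out.sum().backward()                           # gradients flow via the surrogate
print(w.grad.shape)
```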
Required skills | Mandatory Skills Preferred Skills (Nice-to-have) |
Artificial Intelligence for Intelligent Biofabrication in Regenerative Medicine | |
Proposer | Stefano Di Carlo, SAVINO Alessandro, BARDINI Roberta |
Topics | Data science, Computer vision and AI, Life sciences |
Group website | |
Summary of the proposal | Advancements in biofabrication enable the precise engineering of tissues and organs for regenerative medicine. However, achieving real-time control, optimization, and scalability remains a major challenge. This Ph.D. will develop AI-driven biofabrication techniques, integrating machine learning, computer vision, and process optimization to enhance bioprinting accuracy, cell viability, and functional tissue development for next-generation biomedical applications. |
Research objectives and methods | Background and Motivation. Biofabrication is revolutionizing tissue engineering and regenerative medicine by enabling the controlled assembly of cells, biomaterials, and growth factors to create functional biological structures. Bioprinting technologies, such as extrusion-based, inkjet, and laser-assisted bioprinting, allow precise deposition of cells to mimic native tissue architectures. However, current biofabrication methods face key challenges, including:
- Variability in printing resolution and cell distribution
- Real-time monitoring and adaptation to ensure tissue viability
- Scalability and reproducibility for clinical applications
- Automated quality control in tissue fabrication
The main objective of this Ph.D. research is to develop AI-powered biofabrication frameworks that improve precision, efficiency, and scalability in 3D bioprinting and intelligent tissue engineering.
How can AI improve real-time control in biofabrication?
- Develop AI-driven feedback loops for adaptive bioprinting parameter optimization (see the sketch below).
- Use deep learning models to adjust extrusion rates, layer deposition, and environmental conditions dynamically.
This research will be structured into three main phases:
1st Year: AI Model Development & Data Collection
- Develop a bioprinting simulation environment for AI training.
- Collect high-resolution imaging data from biofabrication experiments.
- Train CNN and transformer models to analyze cell growth, scaffold integrity, and printing precision. |
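The AI-driven feedback loop referenced in the list above can be reduced, for illustration, to a proportional controller that corrects the extrusion rate from a measured line width. All numbers (target width, gain, rate bounds) are invented; in the project, a vision model would supply the measurement and a learned policy would replace the fixed gain.

```python
def adjust_extrusion(rate, measured_width, target_width, gain=0.5,
                     min_rate=0.1, max_rate=5.0):
    """Proportional feedback: wide lines -> lower rate, thin lines -> raise it."""
    error = (target_width - measured_width) / target_width
    new_rate = rate * (1.0 + gain * error)
    return max(min_rate, min(max_rate, new_rate))

# Toy closed loop: a vision model would supply measured_width each layer.
rate, target = 1.0, 200.0   # extrusion rate (a.u.), target line width in µm
for measured in [260.0, 235.0, 215.0, 204.0]:
    rate = adjust_extrusion(rate, measured, target)
    print(f"measured={measured:.0f} µm -> new extrusion rate={rate:.2f}")
```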
Required skills | Mandatory Skills Preferred Skills (Nice-to-have) |
Safety and Security of AI in Space and Safety Critical Applications | |
Proposer | Stefano Di Carlo, SAVINO Alessandro |
Topics | Computer architectures and Computer aided design, Cybersecurity, Data science, Computer vision and AI |
Group website | |
Summary of the proposal | The increasing deployment of Artificial Intelligence (AI) in space and other safety-critical applications introduces unique challenges in security, reliability, and fault tolerance. This Ph.D. will focus on developing robust AI models that can withstand extreme environmental conditions, adversarial attacks, and system failures. The research will integrate machine learning, cybersecurity, and hardware-based security to enhance the safety and resilience of AI systems in critical applications. |
Research objectives and methods | Background and Motivation. AI-driven systems are revolutionizing space exploration, aviation, autonomous vehicles, and other safety-critical domains by enabling real-time decision-making, anomaly detection, and autonomous operations. However, the adoption of AI in these fields introduces significant challenges, including:
- Reliability under extreme conditions: space environments expose AI systems to radiation, temperature fluctuations, and hardware degradation, leading to system failures and unpredictable behaviors.
- Security threats and adversarial attacks: AI models deployed in critical infrastructure and space missions are vulnerable to cyber threats, adversarial perturbations, and data manipulation.
- Fault tolerance and self-repair: AI systems must be capable of detecting failures, adapting to changing conditions, and recovering from faults autonomously.
- Secure and efficient communication: AI-based systems in space require resilient communication protocols to ensure secure and reliable data transmission.
This Ph.D. will address key challenges in AI safety and security by developing robust, fault-tolerant, and cyber-secure AI architectures for space-based and mission-critical applications.
Key Research Questions: How can AI models be designed for robustness against environmental and cyber threats?
- Develop AI models with radiation-tolerant architectures.
- Implement resilient deep learning techniques to mitigate adversarial attacks.
- Enhance error correction and self-repair mechanisms in AI inference.
The research will be structured in three main phases:
Phase 1: AI Security and Fault Tolerance Analysis (Year 1)
- Conduct a comprehensive review of AI safety and security vulnerabilities in space and safety-critical applications.
- Develop an AI security assessment framework for space-based and autonomous AI systems.
- Analyze real-world case studies of AI failures in mission-critical environments.
Phase 2: Development of Secure and Robust AI Architectures (Year 2)
- Design error-detection and mitigation techniques to improve AI fault tolerance (a toy fault-injection sketch follows this section).
- Implement adversarial-resistant AI models with enhanced cybersecurity features.
- Develop hardware-accelerated AI solutions for deployment in radiation-prone and resource-constrained environments.
Phase 3: Validation, Testing, and Deployment (Year 3)
- Validate AI security frameworks using simulation-based attacks and fault-injection techniques.
- Deploy and test AI security solutions in aerospace testbeds, real-time autonomous systems, and cybersecurity platforms.
- Publish research findings in top-tier AI, cybersecurity, and aerospace journals and conferences.
Expected Impact. This research will contribute to:
- Improving AI safety and reliability in space exploration, aviation, and critical infrastructure.
- Enhancing cybersecurity for AI-based autonomous systems in mission-critical environments.
- Developing fault-tolerant AI architectures that ensure continuous and safe operation under extreme conditions.
- Bridging AI, cybersecurity, and aerospace engineering to create a trustworthy AI framework for safety-critical applications.
Active Collaborations:
- Thales Alenia Space
- AVIO GE
- Space-IT-up project |
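The fault-injection idea referenced in the work plan can be illustrated with a toy campaign that flips single bits in float32 weights, mimicking radiation-induced single-event upsets, and measures the output deviation. This is a NumPy sketch under invented dimensions, not the project's assessment framework.

```python
import numpy as np

def flip_random_bit(weights, rng):
    """Flip one random bit in a float32 weight tensor (radiation-style SEU)."""
    flat = weights.astype(np.float32).ravel().copy()
    idx = rng.integers(flat.size)
    bits = flat[idx : idx + 1].view(np.uint32)
    bits ^= np.uint32(1 << int(rng.integers(32)))   # single-event upset
    return flat.reshape(weights.shape)

# Toy robustness campaign: measure output deviation under injected faults.
rng = np.random.default_rng(42)
w = rng.normal(size=(16, 16)).astype(np.float32)
x = rng.normal(size=16).astype(np.float32)
clean = w @ x
deviations = [np.abs((flip_random_bit(w, rng) @ x) - clean).max()
              for _ in range(100)]
print(f"max deviation over 100 injections: {max(deviations):.3g}")
```

Flips in exponent bits can blow a single weight up by orders of magnitude, which is why such campaigns often motivate range-checking and redundancy countermeasures.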
Required skills | Mandatory Skills:
- Strong background in machine learning, deep learning, and AI security.
- Experience with cybersecurity principles, adversarial AI, and anomaly detection.
- Proficiency in Python, C/C++, and AI frameworks.
- Familiarity with embedded AI, fault-tolerant systems, and real-time computing.
Preferred Skills (Nice-to-have):
- Experience with secure AI architectures and trusted execution environments.
- Knowledge of radiation-resistant hardware and space computing. |
Non-invasive and low-cost solutions for health monitoring | |
Proposer | Massimo Violante, Gabriella Olmo |
Topics | Life sciences, Data science, Computer vision and AI |
Group website | www.cad.polito.it |
Summary of the proposal | The PhD program focuses on the development of low-cost solutions for health monitoring. Different sensors will be analyzed (wearable, such as smart rings; contact-based, such as ballistocardiograph sensors; contact-less, such as radars), and algorithms will be developed to detect pathologies such as Sleep Apnea (SA) and heart arrhythmia. The main target application is digital health, with particular emphasis on the continuous monitoring of elderly persons at home or in care facilities. |
Research objectives and methods | Sleep Apnea is a potentially serious sleep disorder in which breathing repeatedly stops and starts, whose most evident side effects are loud snoring and tiredness after a full night's sleep. The research program will be performed in collaboration with Sleep Advice Technologies Srl, Ospedale Regina Margherita, and the Istituto di Ricovero e Cura a Carattere Scientifico (I.R.C.C.S.) NEUROMED - Istituto Neurologico Mediterraneo. |
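For a flavor of the signal processing involved, the toy sketch below counts oxygen-desaturation events in a 1 Hz SpO2 trace, one of the simplest markers used in apnea screening. The thresholds (3% drop, 10 s minimum duration, 2-minute baseline) and the synthetic signal are illustrative assumptions, not the project's validated protocol.

```python
import numpy as np

def count_desaturations(spo2, fs=1.0, drop=3.0, min_len=10):
    """Count events where SpO2 falls >= `drop` % below a moving baseline
    for at least `min_len` seconds (an ODI-style criterion)."""
    baseline = np.convolve(spo2, np.ones(120) / 120, mode="same")  # ~2 min mean
    below = spo2 < (baseline - drop)
    events, run = 0, 0
    for flag in below:
        run = run + 1 if flag else 0
        if run == int(min_len * fs):      # count each sustained event once
            events += 1
    return events

# Toy signal: 1 Hz SpO2 with two simulated desaturation episodes.
spo2 = np.full(600, 97.0)
spo2[100:130] = 92.0
spo2[400:425] = 91.5
print(count_desaturations(spo2), "desaturation events")
```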
Required skills | MATLAB, Python, or C/C++ programming |
Developing methods and techniques for estimating the value of open-source software in the public sector | |
Proposer | Antonio Vetro' |
Topics | Software engineering and Mobile computing |
Group website | https://nexa.polito.it/ https://nexa.polito.it/pilot-study-on-estimating-the-value-of-open-source-software/ |
Summary of the proposal | The project aims to develop an innovative methodology for estimating the economic value of Open-Source software in the publicly-owned company PagoPA, considering both its internal development and its dependencies on already existing Open-Source libraries. The novel software analysis methodology should mine information from code repositories and integrate it with company-specific data and external sources. In addition, the PhD candidate shall build a software pipeline to automate the valuation. |
Research objectives and methods | Background. Open Source Software (OSS) is a cornerstone of technological innovation and global economic development. The European Commission estimates that OSS investments generate an economic impact between €65 and €95 billion annually, while research from the Harvard Business School values the global OSS supply side at $4.15 billion, with demand-side value reaching $8.8 trillion.
List of possible venues for publications
The PhD project is in collaboration with PagoPA S.p.A. |
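Purely as an illustrative baseline, and not the novel methodology the project will develop, the sketch below mines a repository's size with git and plugs it into the basic COCOMO effort model; the file-extension filter, the organic-mode coefficients (2.4, 1.05), and the cost per person-month are textbook or invented assumptions.

```python
import math
import subprocess

def count_sloc(repo_path):
    """Rough SLOC count from `git ls-files` (non-blank lines, toy heuristic)."""
    files = subprocess.run(["git", "-C", repo_path, "ls-files"],
                           capture_output=True, text=True).stdout.splitlines()
    total = 0
    for f in files:
        if f.endswith((".py", ".js", ".java", ".ts", ".go", ".c", ".cpp")):
            try:
                with open(f"{repo_path}/{f}", errors="ignore") as fh:
                    total += sum(1 for line in fh if line.strip())
            except OSError:
                pass
    return total

def cocomo_value(sloc, cost_per_person_month=8000.0):
    """Basic COCOMO (organic mode): effort = 2.4 * KLOC^1.05 person-months."""
    kloc = sloc / 1000.0
    effort_pm = 2.4 * math.pow(kloc, 1.05)
    return effort_pm * cost_per_person_month

# Run from inside any git repository.
sloc = count_sloc(".")
print(f"{sloc} SLOC -> replacement value ≈ €{cocomo_value(sloc):,.0f}")
```

Replacement-cost models like this are exactly the kind of estimate the project would enrich with dependency graphs, company data, and external sources.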
Required skills | The candidate should have: - Strong programming skills. - Very good knowledge on software testing. - Good knowledge of statistical methods for analyzing experimental data. - Proficiency in data analysis techniques and tools. - Research aptitude and curiosity to cross disciplinary boundaries. |
Innovative technologies for infrastructures and buildings management | |
Proposer | Valentina Gatteschi, Valentina Villa, Marco Domaneschi |
Topics | Parallel and distributed systems, Quantum computing, Data science, Computer vision and AI, Cybersecurity |
Group website | http://grains.polito.it/ https://siscon.polito.it |
Summary of the proposal | Infrastructures and buildings management has become very complex in terms of regulations, documentation, and technology, and it is increasingly difficult to govern assets using traditional methods. The objective of this proposal is to investigate how cutting-edge technologies such as IoT, AI, blockchain, and smart contracts could be used to improve the efficiency of asset management, and to support activities like damage detection, predictive maintenance, and process certification/automation. |
Research objectives and methods | This research aims to revolutionize the construction and infrastructure management industry by combining cutting-edge technologies like IoT, AI, blockchain, and smart contracts to improve active monitoring, damage detection, predictive maintenance, and process certification/automation. This Ph.D. proposal will be in collaboration with the DISEG Department (Department of Structural, Geotechnical and Building Engineering) of Politecnico di Torino. The activities carried out in this Ph.D. proposal will aim at investigating existing approaches, and devising and testing novel ones, for: a) automating assessment, maintenance, and efficiency procedures of infrastructures; b) enhancing security, transparency, and privacy in the context of public infrastructures; c) improving the resilience of infrastructural assets. The research work plan for the three-year Ph.D. programme is the following: - First year: the candidate will perform an analysis of the state-of-the-art methodologies/tools available for the storage and certification of large amounts of data with distributed technologies. Part of the candidate's research activities will be devoted to analyzing how oracles could be designed and used to integrate, in the blockchain, data acquired from the real world (see the sketch below), as well as to inspecting existing distributed solutions to efficiently store sensors' data. The candidate will also analyze the type of data required and the algorithms that are available for predictive maintenance. - Second year: during the year, the candidate will design and develop methodologies and tools for active monitoring, damage detection, predictive maintenance, and process certification/automation, starting from use cases proposed by companies working in the sector of private/public infrastructures. - Third year: the third year will be devoted to refining the tools developed during the second year (possibly by exploiting other blockchain frameworks), and to testing them, with a focus on privacy, transparency, and automation, as well as on metrics such as latency/throughput and service costs. Expected target publications are:
- IEEE Transactions on Services Computing
- IEEE Transactions on Knowledge and Data Engineering
- IEEE Access
- Intelligent Systems with Applications
- Future Generation Computer Systems
- IEEE International Conference on Decentralized Applications and Infrastructures
- IEEE International Conference on Blockchain and Cryptocurrency
- IEEE International Conference on Blockchain
- Automation in Construction
- Structure and Infrastructure Engineering
- Buildings
- European Conference on Computing in Construction
- International Association for Bridge Maintenance And Safety |
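To fix ideas on the oracle mentioned in the first-year plan, here is a hedged web3.py sketch that pushes an IoT reading to a smart contract. The contract, its recordReading(uint256,int256) function, the zero address, and the local node URL are all hypothetical placeholders; only the web3.py calls themselves are real API.

```python
from web3 import Web3

# Hypothetical maintenance-log contract exposing:
#   recordReading(uint256 sensorId, int256 value)
ABI = [{
    "name": "recordReading", "type": "function",
    "inputs": [{"name": "sensorId", "type": "uint256"},
               {"name": "value", "type": "int256"}],
    "outputs": [], "stateMutability": "nonpayable",
}]
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # assumed local dev node
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ABI)

def push_reading(sensor_id: int, value: int, account: str):
    """Oracle step: certify one IoT reading on-chain for later audits."""
    tx_hash = contract.functions.recordReading(sensor_id, value).transact(
        {"from": account})
    return w3.eth.wait_for_transaction_receipt(tx_hash)

# Example (requires a running node and the deployed contract):
# receipt = push_reading(42, 1013, w3.eth.accounts[0])
```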
Required skills | The candidate should have the following characteristics: - Ability to research and understand theoretical and applied research topics - Good programming skills in commonly used programming languages (e.g., Python, Java, C, Node.js, PHP) and in blockchain-related programming languages (e.g., Solidity) - Good knowledge of existing blockchain frameworks - Ability to autonomously develop decentralized applications
Knowledge of cryptography and involvement in previous research projects are a plus. |
Secure Artificial Intelligence: Enhancing IT Infrastructure and Online Services | |
Proposer | Luca Cagliero, Francesco Tarasconi |
Topics | Data science, Computer vision and AI |
Group website | https://www.polito.it/personale?p=luca.cagliero https://smartdata.polito.it |
Summary of the proposal | This scholarship explores the role of Artificial Intelligence in optimizing IT infrastructure and online services while addressing security, privacy, and adaptability challenges. Key objectives include AI-driven predictive maintenance, anomaly detection, providing real-time support, as well as fine-tuning domain-specific AI models, evaluating open-source vs. closed-source architectures, and developing secure, scalable AI frameworks for diverse industries. |
Research objectives and methods | Context
Research objectives:
- AI-driven IT Infrastructure Optimization, including predictive maintenance, anomaly detection, and automatic resource balancing (see the sketch below).
- Development of Agentic AI or innovative approaches to integrate Generative AI in the diverse landscape of online services and tools.
- Exploration of new domain-specific Transformer models for industry-specific applications.
- Fine-tuning of pretrained large generative models on specific domains to improve accuracy and relevance for business use cases.
- Evaluation of the benefits and limitations of open-source vs. closed-source AI models and architectures across different industries.
- Development of frameworks to protect AI models from adversarial attacks and data poisoning, and their integration into the IT Infrastructure Optimization strategies.
- Development of scalable and modular AI frameworks to meet the needs of companies of different sizes.
Tentative work plan
List of possible publication venues |
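As a baseline for the anomaly-detection objective listed above, the sketch below flags outliers in synthetic infrastructure telemetry with scikit-learn's IsolationForest; the features, magnitudes, and contamination rate are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy telemetry: CPU load, memory use, request latency (ms) per minute.
normal = rng.normal([0.4, 0.6, 120], [0.1, 0.1, 20], size=(500, 3))
spikes = rng.normal([0.95, 0.9, 900], [0.02, 0.05, 100], size=(5, 3))
telemetry = np.vstack([normal, spikes])

# Fit on presumed-normal history, then score the full stream.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(telemetry)           # -1 = anomaly, 1 = normal
print(f"{(flags == -1).sum()} anomalous minutes flagged out of {len(telemetry)}")
```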
Required skills | The PhD candidate is expected to:
- have the ability to critically analyze complex systems, model them, and identify weaknesses;
- be proficient in Python programming;
- know data science fundamentals;
- have a solid background in machine learning and deep learning;
- have a natural inclination for teamwork;
- be proficient in English speaking, reading, and writing.
Proficiency with Docker and Kubernetes software is a plus. |
Video Retrieval-Augmented Generation | |
Proposer | Luca Cagliero, Elena Baralis |
Topics | Data science, Computer vision and AI |
Group website | https://dbdmg.polito.it/ https://smartdata.polito.it/ |
Summary of the proposal | Retrieval-Augmented Generation is an established cost-effective approach to extend the capabilities of LLMs to specific domains and to leverage proprietary data without the need to retrain LLMs. To improve the performance of Video LLMs, existing RAG frameworks incorporate visually aligned auxiliary texts (e.g., OCR, ASR). The PhD scholarship aims to study and advance the state-of-the-art solutions in the area of Video RAGs and their applications to real-world multimedia learning scenarios. |
Research objectives and methods | Objectives Tentative work plan List of possible publication venues |
Required skills | The PhD candidate is expected to:
- have the ability to critically analyze complex systems, model them, and identify weaknesses;
- be proficient in Python programming;
- know data science fundamentals;
- have a solid background in machine learning and deep learning;
- have a natural inclination for teamwork;
- be proficient in English speaking, reading, and writing. |
Human-Centered AI within Internet-of-Things Ecosystems | |
Proposer | Luigi De Russis, Alberto Monge Roffarello |
Topics | Computer graphics and Multimedia, Data science, Computer vision and AI, Software engineering and Mobile computing |
Group website | https://elite.polito.it |
Summary of the proposal | Human-Centered AI (HCAI) is an emerging discipline intent on creating AI systems that amplify and augment rather than displace human abilities. This Ph.D. proposal aims at designing, developing, and evaluating concrete HCAI systems to support inhabitants of IoT-enabled environments in various tasks related to their daily life. |
Research objectives and methods | Artificial Intelligence (AI) systems are widespread in many aspects of society, and Generative AI has lowered some barriers to accessing information. While this leads to many advantages in decision processes and productivity, it also presents drawbacks, such as disregarding end-user perspectives and safety. The Ph.D. proposal aims at designing, developing, and evaluating concrete HCAI systems to support users of IoT-enabled environments in various tasks related to their settings. The main research objective is to investigate solutions for designing and developing HCAI systems in IoT-enabled environments. A particular focus will be on how the adoption of the HCAI framework can bring tangible benefits to users and to the IoT research field. The research activities will mainly build on the following characteristics of the HCAI framework: |
Required skills | The ideal candidate should have a solid background in Computer Engineering or Data Science, with prior experience with AI, especially around machine learning and/or deep learning. The candidate should also have knowledge of Human-Computer Interaction methods and techniques. |
Preference models for multimodal annotations | |
Proposer | Luca Cagliero, Elena Baralis |
Topics | Data science, Computer vision and AI |
Group website | https://dbdmg.polito.it/ https://smartdata.polito.it |
Summary of the proposal | Data sources are commonly enriched with multimodal annotations, e.g., a video can be annotated with visual tags, textual summaries, audio excerpts, and OCR text. The choice of the modality and style of the data annotations is often arbitrary and independent of the downstream models and tasks. The research aims to define automatic preference models for Multimodal LLMs for annotations that automatically recommend the right modality, format, and type according to the task, context, and model. |
Research objectives and methods | Objectives Tentative work plan List of possible publication venues |
Required skills | The PhD candidate is expected to:
- have the ability to critically analyze complex systems, model them, and identify weaknesses;
- be proficient in Python programming;
- know data science fundamentals;
- have a solid background in machine learning and deep learning;
- have a natural inclination for teamwork;
- be proficient in English speaking, reading, and writing.
Proficiency with Docker and Kubernetes software is a plus. |
Spatio-Temporal Data Science | |
Proposer | Paolo Garza, Daniele Apiletti |
Topics | Data science, Computer vision and AI |
Group website | https://dbdmg.polito.it/ |
Summary of the proposal | Spatio-Temporal (ST) data are continuously growing (time series collected from IoT sensors, satellite images, and textual geo-referenced documents). Although ST data have been extensively studied, current data science pipelines do not manage heterogeneous sources effectively: most of them focus on one source at a time. Innovative deep learning approaches based on latent spaces, designed to integrate the information conveyed by heterogeneous sources, are the primary goal of this proposal. |
Research objectives and methods | The main objective of this research proposal is to study and design data-driven pipelines and deep learning models to analyze heterogeneous spatio-temporal data (e.g., time series, satellite images, and geo-referenced documents). Both descriptive and predictive problems will be considered. The main issues that will be addressed are as follows. Heterogeneity. Several sources, characterized by different data types or formats, are available. Each data source represents the phenomena under analysis from a different perspective and provides helpful insights only if adequately integrated with the other sources. Innovative data integration techniques based, for instance, on latent spaces will be studied to address this issue. Properly integrating heterogeneous data sources permits analyzing all facets of the phenomena of interest without losing information. Scalability. Spatio-Temporal data are frequently big (e.g., vast collections of remote sensing data, extensive collections of social network messages). Hence, big data pipelines are commonly used to process and analyze them, mainly when historical data are analyzed. Timeliness. Timeliness is crucial in several domains (e.g., emergency management, fraud detection, online news). Real-time and incremental machine learning algorithms must be designed and implemented.
1st year. Analysis of state-of-the-art algorithms and data science pipelines for Spatio-Temporal data. Based on the pros and cons of the current solutions, a preliminary common data representation based on latent spaces will be studied and designed to integrate heterogeneous data effectively (see the sketch below). Based on the proposed data representation, novel algorithms will be designed, developed, and validated on historical data related to specific domains (e.g., emergency management news summarization). 2nd year. Common representations of heterogeneous Spatio-Temporal data will be further analyzed and proposed, focusing on scalable and resource-aware algorithms. Specifically, solutions based on big data frameworks will be considered. 3rd year. The timeliness facet will be considered during the last year. Specifically, the focus will be on real-time Spatio-Temporal data analysis based on incremental, near real-time ML algorithms. The outcomes of the research activity are expected to be published at IEEE/ACM International Conferences and in any of the following journals: - ACM Transactions on Knowledge Discovery from Data |
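A deliberately naive version of the first-year latent-space idea: normalize each source separately, reduce each to a small embedding, and concatenate into one shared representation. The feature sizes and the PCA dimension are invented; the research itself targets learned (e.g., autoencoder- or transformer-based) latent spaces rather than PCA.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300                                   # spatio-temporal cells (area x hour)
ts_feats = rng.normal(size=(n, 24))       # hourly sensor time series
img_feats = rng.normal(size=(n, 512))     # satellite-patch embeddings
txt_feats = rng.normal(size=(n, 768))     # geo-referenced text embeddings

def embed(x, dim=16):
    """Per-source normalization + PCA keeps one source from dominating."""
    return PCA(n_components=dim).fit_transform(StandardScaler().fit_transform(x))

# Shared latent representation for downstream descriptive/predictive tasks.
latent = np.hstack([embed(ts_feats), embed(img_feats), embed(txt_feats)])
print(latent.shape)   # (300, 48)
```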
Required skills | Strong background in data science fundamentals and machine learning algorithms, including embeddings-based data models and LLMs. Strong programming skills. Knowledge of big data frameworks such as Spark is advisable but not required. |
Advanced data modeling and innovative data analytics solutions for complex application domains | |
Proposer | Silvia Anna Chiusano, Tania Cerquitelli |
Topics | Data science, Computer vision and AI |
Group website | |
Summary of the proposal | Data science projects entail the acquisition, modelling, integration, and analysis of big and heterogeneous data collections generated by a diversity of sources, to profile the different facets and issues of the considered application context. However, data analytics in many application domains is still a daunting task, because data collections are generally too big and heterogeneous to be processed through the machine learning techniques currently available. |
Research objectives and methods | The PhD student will work on the study, design, and development of proper data models and novel solutions for the integration, storage, management, and analysis of big volumes of heterogeneous data collections in complex application domains. The research activity involves multidisciplinary knowledge and skills, including databases, machine learning and artificial intelligence algorithms, and advanced programming. Different application contexts will be considered to highlight a wide range of data modeling and analysis problems, and thus lead to the study of innovative solutions. The objectives of the research activity consist in identifying the peculiar characteristics and challenges of each considered application domain and devising novel solutions for the modelling, management, and analysis of data for each domain. Example scenarios are the urban context (in particular, urban mobility) and the medical domain. More in detail, the following challenges will be addressed during the PhD: 1. Modeling heterogeneous data: design innovative approaches for modeling heterogeneous data, including structured and unstructured data from different sources, integrating them into a single coherent framework. The experience gained on data modeling in different application contexts can lead to the realization of a Computer-Aided Software Engineering (CASE) tool that guides the user through the design process, reducing design time and improving the quality of the modeling result. 2. Innovative algorithms for data analytics: study, design, and implementation of innovative machine learning algorithms, with a primary emphasis on clustering and classification tasks. The objective is to overcome the limitations of current approaches, enhancing their accuracy, scalability, and ability to deal with heterogeneous data collections. 3. Scalable learning: investigate scalable learning techniques to address the increasing complexity and volume of data, achieving optimal performance in big data environments. This research is driven by the growing demand for machine learning systems capable of dynamically adapting to the increasing complexity of data and models. For recent machine learning/AI applications, it is crucial to propose innovative models capable of handling large volumes of data with parallel and scalable solutions. The research activity will be organized as follows. 1st Year. The PhD student will start by considering a first reference application domain (for example, the urban scenario) and a first reference use case in this scenario (for example, urban mobility). The PhD student will review the recent literature on the selected use case to (i) identify the most relevant open research issues, (ii) identify the most relevant data analysis perspectives for gaining useful insights, and (iii) assess the main data analysis issues. The PhD student will perform an exploratory evaluation of state-of-the-art technologies and methods on the considered domain, and she/he will present a preliminary proposal for the optimization of these approaches. 2nd and 3rd Year. Based on the results of the 1st year activity, the PhD student will design and develop a suitable framework, including innovative data analytics solutions, to efficiently model data in the considered use case and extract useful knowledge, aimed at overcoming the weaknesses of state-of-the-art methods. Moreover, during the 2nd and 3rd year, the student will progressively consider a larger spectrum of application domains.
The student will evaluate if and how his/her proposed solutions can be applied to the newly considered domains, and he/she will propose novel analytics solutions. During the PhD, the student will have the opportunity to cooperate in the development of solutions applied to research projects on smart cities (e.g., the PRIN project on the development of an atlas for historic buildings in an urban context). The student will also complete his/her background by attending relevant courses. The student will participate in conferences, presenting the results of his/her research activity. Possible publication venues include international journals such as IEEE Transactions on Intelligent Transportation Systems, Information Systems Frontiers (Springer), and Information Sciences (Elsevier), and international conferences such as IEEE Big Data, the ACM Int. Conf. on Information & Knowledge Management (CIKM), and the IEEE International Conference on Data Mining (ICDM). |
Required skills | The candidate should have good programming skills, and competencies in data modelling and techniques for data analysis. |
AI4CTI - ARTIFICIAL INTELLIGENCE FOR CYBER THREAT INTELLIGENCE | |
Proposer | Marco Mellia, Paolo Garza |
Topics | Cybersecurity, Data science, Computer vision and AI |
Group website | |
Summary of the proposal | As digital reliance grows, cyber fraud is surging, with costs projected to hit $13.8T by 2028. Social engineering attacks exploit multimedia and fake news, bypassing outdated security tools. AI is key to countering these threats, using advanced algorithms for scalable, adaptive threat detection. The candidate will develop AI-driven cybersecurity solutions, leveraging multimodal analysis to detect malicious content, despite limited ground truth data, enabling on-device protection and integration. |
Research objectives and methods | Scenario and motivations: Nowadays we rely on digital services to stay informed, organize our work, manage our savings, etc. Numbers in hand, 63.1% of the global population accesses the web daily for work, social media, and other services. With this, cyber fraud and attacks are proliferating. With the explosion of social networks and instant messaging, attack vectors multiply, making social engineering attacks based on counterfeit multimedia and fake news an everyday threat. Research objectives: The candidate will develop AI-based solutions to counter cyberthreats, focusing on the automatic detection of phishing attacks on multiple vectors, including email, websites, and messaging applications. The project will be based on three key pillars:
- Data collection and aggregation: crawl the web and the dark web in a scalable and cost-effective way, and discover and explore online groups in messaging applications such as Telegram or WhatsApp and Online Social Media Networks like Instagram or TikTok.
- Data storage and indexing: develop an innovative graph-based data structure that simplifies the query process, to support the integration with AI-based algorithms that typically need to process data during training. Given that state-of-the-art graph-based platforms are still in their infancy, the candidate will contribute new solutions specifically tailored to the web security scenario.
- AI algorithms: the candidate will focus on the development of a foundation model specifically engineered for cybersecurity. This will be a cornerstone that will streamline and open applications to several use cases. Unlike Large Language Models or Computer Vision models that address a single specific domain, the model will be multimodal in nature, given the mix of text, images, videos, languages, etc. found on the web.
Research work plan: We foresee three phases:
- During the first year, the candidate will review the state of the art and focus on the data collection, storage, and indexing platforms.
- During the second year, the candidate will focus on the development of AI solutions, leveraging the collected data and aggregating CTI outlets to obtain labelled data to train the algorithms. These algorithms will initially work on separate domains, like text and images (see the baseline sketch below).
- During the third year, the candidate will deep-dive into AI approaches, fine-tuning the models to vertical applications like phishing detection and malicious profiles found on social media networks. Here the models will be multimodal in nature, able to analyse images and text at the same time.
References:
- Boffa, M., Valentim, R. V., Vassio, L., Giordano, D., Drago, I., Mellia, M., & Houidi, Z. B. (2024). LogPrécis: Unleashing Language Models for Automated Shell Log Analysis. Computers & Security, Volume 141.
- Boffa, M., Milan, G., Vassio, L., Drago, I., Mellia, M., & Houidi, Z. B. (2022). Towards NLP-based Processing of Honeypot Logs. EuroS&PW.
- Valentim, R., Drago, I., Mellia, M., & Cerutti, F. (2024). X-squatter: AI Multilingual Generation of Cross-Language Sound-squatting. ACM Trans. Priv. Secur.
- Valentim, R., Drago, I., Mellia, M., & Cerutti, F. (2023). Lost in Translation: AI-based Generator of Cross-Language Sound-squatting. EuroS&PW.
List of possible venues for publications:
Collaborations and projects: This scholarship is in collaboration with the Ernes Cybersecurity company, in the context of the FISA-2023 AI4CTI project funded by the Ministry of University and Research with a 6.1 million euro grant. |
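As a baseline for the second-year single-domain work, the sketch below classifies URLs with hand-crafted lexical features and a random forest. The features, the toy URLs, and their labels are invented; the project's models would be trained on large, CTI-labelled corpora.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def url_features(url):
    """Hand-crafted lexical features often used as a phishing baseline."""
    return [
        len(url),
        url.count("."),
        url.count("-"),
        url.count("@"),
        int("https" not in url[:8]),          # no TLS in the scheme
        sum(c.isdigit() for c in url),
    ]

urls = ["https://example.com/login",
        "http://paypa1-secure-update.xyz/confirm@account",
        "https://polito.it/students",
        "http://192.168.13.7/bank-verify-now"]
labels = [0, 1, 0, 1]                          # 0 = benign, 1 = phishing (toy)

X = np.array([url_features(u) for u in urls])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict([url_features("http://secure-login-update.top/@verify")]))
```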
Required skills | - Good programming skills (e.g., Python, Torch, Spark) - Solid Machine Learning knowledge - Knowledge of NLP and LLMs - Fundamentals of networking and computer security |
Emerging Topics in Evolutionary Computation: Diversity Promotion and Graph-GP | |
Proposer | Giovanni Squillero, Alberto Tonda (INRAE) |
Topics | Computer architectures and Computer aided design, Data science, Computer vision and AI |
Group website | https://www.cad.polito.it/ |
Summary of the proposal | Evolutionary computation (EC), a subfield of AI that uses mechanisms inspired by biological evolution, is experiencing a unique moment. While fewer scientific papers focus solely on EC, traditional EC techniques are frequently utilized in practical activities under different labels. The objective of this proposal is to examine both the new representations that scholars are currently exploring and the old, yet still pressing, problems that practitioners are facing. |
Research objectives and methods | Although the classical approach to representing solutions in EC involves bit strings and expression trees, far more complex encodings have recently been proposed. More specifically, graph-based representations have led to novel applications of EC in circuit design, cryptography, image analysis, and other fields. At the same time, divergence of character, or, more precisely, the lack of it, is widely recognized as the single most impairing problem in the field of EC. While divergence of character is a cornerstone of natural evolution, in EC all candidate solutions eventually crowd the very same areas of the search space. Such a "lack of speciation" was pointed out in the seminal work of Holland back in 1975. It is usually labeled with the oxymoron "premature convergence" to stress the tendency of an algorithm to converge toward a point where it was not supposed to converge in the first place. The research activity will tackle "diversity promotion", that is, either "increasing" or "preserving" diversity in an EC population, both from a practical and a theoretical point of view. It will also include the related problems of defining and measuring diversity. The research project shall include an extensive experimental study of existing diversity preservation methods across various global optimization problems. Open-source, general-purpose EA toolkits, inspyred and DEAP, will also be used to study the influence of various methodologies and modifications on the population dynamics. Solutions that do not require the analysis of the internal structure of the individual (e.g., Cellular EAs, Deterministic Crowding, Hierarchical Fair Competition, Island Models, or Segregation) shall be considered. This study should allow the development of a possibly new, effective methodology, able to generalize and coalesce most of the cited techniques (a toy example of one classical technique, fitness sharing, is sketched below). During the first year, the candidate will take a course in Artificial Intelligence and all Ph.D. courses of the educational path on Data Science. Additionally, the candidate is required to improve his or her knowledge of Python. Starting from the second year, the research activity shall include Turing-complete program generation. The candidate will work on an open-source Python project, currently under active development. The candidate will try to replicate the work of the first year on much more difficult genotype-level methodologies, such as Clearing, Diversifiers, Fitness Sharing, Restricted Tournament Selection, Sequential Niching, Standard Crowding, the Tarpeian Method, and Two-level Diversity Selection. At some point, probably toward the end of the second year, the new methodologies will be integrated into the Grammatical Evolution framework developed at the Machine Learning Lab of the University of Trieste, as GE allows a sharp distinction between phenotype, genotype, and fitness, creating an unprecedented test bench (the research group is already collaborating with a group at UniTS on these topics; see "Multi-level diversity promotion strategies for Grammar-guided Genetic Programming", Applied Soft Computing, 2019). A remarkable goal of this research would be to link phenotype-level methodologies to genotype measures.
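The fitness-sharing sketch promised above, in plain NumPy: raw fitness is divided by a niche count so that individuals crowding the same region of a one-dimensional genotype space share their payoff, while isolated individuals keep theirs. The triangular sharing function with sigma and alpha is the textbook formulation, applied here to an invented toy population.

```python
import numpy as np

def shared_fitness(population, raw_fitness, sigma=1.0, alpha=1.0):
    """Fitness sharing: penalize individuals in crowded regions so the
    population keeps exploring distinct niches (diversity promotion)."""
    pop = np.asarray(population, dtype=float)
    dists = np.abs(pop[:, None] - pop[None, :])          # pairwise distances
    sh = np.where(dists < sigma, 1.0 - (dists / sigma) ** alpha, 0.0)
    niche_counts = sh.sum(axis=1)                        # includes self (sh=1)
    return np.asarray(raw_fitness) / niche_counts

# Two individuals near x=0 share a niche; the lone one at x=5 keeps its payoff.
pop = [0.0, 0.2, 5.0]
raw = [10.0, 10.0, 8.0]
print(shared_fitness(pop, raw, sigma=1.0))   # ~[5.56, 5.56, 8.0]
```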
Target Publications. Journals with impact factor:
- ASOC - Applied Soft Computing
- ECJ - Evolutionary Computation Journal
- GPem - Genetic Programming and Evolvable Machines
- Informatics and Computer Science Intelligent Systems Applications
- IS - Information Sciences
- NC - Natural Computing
- TCIAIG - IEEE Transactions on Computational Intelligence and AI in Games
- TEC - IEEE Transactions on Evolutionary Computation
Top conferences:
- ACM GECCO - Genetic and Evolutionary Computation Conference
- IEEE CEC/WCCI - World Congress on Computational Intelligence
- PPSN - Parallel Problem Solving From Nature
Notes: The tutors regularly present tutorials on Diversity Preservation at top conferences in the field, such as GECCO, PPSN, and CEC. Additionally, they are involved in the organization of a workshop focused on graph-based representations for EAs. Moreover, the research group is in contact with industries that actively consider exploiting evolutionary machine learning for enhancing their biological models, for instance, KRD (Czech Republic), Teregroup (Italy), and BioVal Process (France). The research group also has a long record of successful applications of evolutionary algorithms in several different domains. For instance, the ongoing collaboration with STMicroelectronics on the test and validation of programmable devices exploits evolutionary algorithms and would benefit from this research. |
Required skills | Proficiency in Python (including a deep understanding of object-oriented principles and design patterns, and the handling of parallelism). Preferred: experience with metaheuristics; experience with optimization algorithms. |
Risk-aware Cyber Threats Mitigation | |
Proposer | Cataldo Basile, Antonio Lioy |
Topics | Cybersecurity |
Group website | https://security.polito.it/ |
Summary of the proposal | The project aims to develop AI-based methods for automated cyber threat mitigation through risk-aware responses. By integrating Cyber Threat Intelligence with models for security capability, risk assessment, and network reconfigurability, it enables real-time adaptation to threats. The solution emphasizes explainability and applicability across sectors like enterprise networks, ISPs, and industry, ensuring resilient and risk-conscious cyber defence. |
Research objectives and methods | Context and Motivation
Expected Results include a comprehensive framework for automated, AI-driven, risk-aware mitigation, and theoretical and practical models of security controls and system behavior. |
Required skills | The candidate needs a solid background in cybersecurity (risk management), defensive controls (e.g., firewall technologies and VPNs), monitoring controls (e.g., IDS/IPS), threat intelligence, and incident response. Candidates should also possess a background in software network technologies (SDN, NFV, Kubernetes). Skills in formal modelling and logical systems are a plus, as is the willingness to apply AI techniques in the cybersecurity field. |
AI-based Cyber Threats Mitigation in Software Networks | |
Proposer | Cataldo Basile, Antonio Lioy |
Topics | Cybersecurity |
Group website | https://security.polito.it/ |
Summary of the proposal | This PhD project explores AI-driven methods for automated cyber threat mitigation and policy enforcement. It aims to develop intelligent systems that interpret threat intelligence and generate adaptive mitigations and incident responses. By leveraging state-of-the-art AI techniques, the project will enable autonomous, explainable cybersecurity across cloud-native and software-defined environments. |
Research objectives and methods | Context and Motivation
The increasing complexity of cyber threats and the scale of modern digital infrastructures demand a new generation of cybersecurity solutions. Traditional rule-based systems are insufficient to handle the velocity and sophistication of modern attacks, and manual interventions are error-prone. While threat intelligence sources are rich and ever-growing, the gap between detection and actionable system-wide defence remains significant. Existing solutions often lack the intelligence required for proactive threat remediation and policy enforcement. This PhD project aims to bridge that gap by developing Artificial Intelligence (AI)-based techniques that can automatically analyze, contextualize, and respond to cyber threats. By enabling systems to autonomously adapt security policies and reconfigure infrastructure in response to incoming threats, we aim to move toward autonomous cybersecurity defence.
Research Objectives
The primary goal of this research is to investigate and develop AI-driven mechanisms for understanding and contextualizing cyber threats, synthesizing mitigation strategies automatically, and keeping security requirements continuously enforced during security incidents. Integrating these capabilities into software-defined infrastructures (e.g., Kubernetes, cloud-native platforms) involves the use of state-of-the-art AI models, including Machine Learning (ML), Large and Small Language Models (LLMs and SLMs), and symbolic reasoning systems, to build cybersecurity solutions that make intelligent decisions autonomously and explain them to humans. The key areas of research are:
- Automated Threat Interpretation and Enrichment. Develop AI systems that consume raw Cyber Threat Intelligence (CTI), identify the attacker's behaviour (e.g., via MITRE ATT&CK), and generate semantic representations of attacks, their steps, and their impacts. SLMs, LLMs, and retrieval-augmented generation (RAG) architectures will be explored to map technical indicators (e.g., CVEs, IoCs) to potential system impact and to identify mitigations.
- Generative Mitigation Strategy Synthesis. Design algorithms capable of translating enriched threat descriptions into actionable mitigation strategies that maintain the enforced security requirements, including reconfiguring security controls or changing the network layout and container-level isolation through real-time infrastructure reconfiguration. The candidate will investigate AI techniques and models fine-tuned on actionable threat-response playbooks.
- Policy Refinement and Enforcement. Use AI to interpret high-level security requirements and translate them into low-level actionable operations within orchestrated environments like Kubernetes. This involves learning or reasoning over models that describe the effects of offensive and defensive actions, security requirements, and compliance. It will build on an existing Security Capability Model and a refinement engine based on forward reasoning.
- Explainability and Human-in-the-Loop Integration. Design AI systems that can explain the rationale behind threat interpretations and mitigation decisions, supporting operators in building trust in the system and satisfying regulations.
The project will focus on building the following:
- Knowledge Bases. Construct AI-ready knowledge graphs combining abstract network representations, CTI, attack techniques, and defences to provide rich context for building remediations.
- AI-Enhanced Attack Understanding and Mitigation. Fine-tune language models to automate the understanding of unstructured threat reports and build AI systems that generate sequences of actions mitigating the identified risks.
Expected Outcomes
The expected results are:
- AI systems capable of autonomously interpreting CTI and proposing targeted mitigations.
- Prototype tools that integrate with cloud-native platforms to dynamically enforce adaptive security policies.
- Contributions to open-source security automation frameworks and standardization efforts.
- Empirical validation of AI-generated policies in realistic environments, including enterprise and edge computing settings.
The PhD proposal leverages collaboration with the EC-funded iTrust6G project, which offers real-world scenarios and industrial-grade data for validation. International research periods (at least 6 months) are expected at leading EU universities to cover essential background or exploit results in the research area. We expect publications in leading AI and cybersecurity venues (e.g., ACM CCS, IEEE S&P, IEEE EuroS&P). Results on the application of AI to cybersecurity, attack, remediation, and mitigation models will be submitted to top-tier journals in scope (e.g., IEEE Transactions on Networking, ACM Transactions on Privacy and Security, IEEE Transactions on Dependable and Secure Computing, IEEE Transactions on Emerging Topics in Computing). |
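To make the threat-interpretation step concrete, here is a minimal sketch of retrieval-based CTI-to-ATT&CK mapping. The ATT&CK technique IDs are real, but the three-entry knowledge base and the bag-of-words cosine scoring are invented stand-ins for the embedding- and RAG-based pipeline the project would actually build.

```python
# Toy CTI-to-ATT&CK mapping by similarity search (illustrative only; a real
# system would retrieve over a full ATT&CK-derived knowledge graph with
# learned embeddings rather than word counts).
import re
from collections import Counter

ATTACK_KB = {  # tiny stand-in for an ATT&CK-derived knowledge base
    "T1566": "phishing email attachment spearphishing link user execution",
    "T1021": "remote services lateral movement smb rdp ssh valid accounts",
    "T1486": "data encrypted for impact ransomware encryption extortion",
}

def tokens(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

def map_cti_to_technique(report):
    scores = {tid: cosine(tokens(report), tokens(desc)) for tid, desc in ATTACK_KB.items()}
    return max(scores, key=scores.get)

report = "Employees received a spearphishing email with a malicious attachment."
print(map_cti_to_technique(report))  # -> T1566 under this toy knowledge base
```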
Required skills | The candidate needs a solid background in AI techniques, cybersecurity (risk mitigation), defensive controls (e.g., firewall technologies and VPNs), monitoring controls (e.g., IDS/IPS), threat intelligence and incident response. Moreover, a background in software network technologies (SDN, NFV, Kubernetes) and in formal modelling and logical systems is a plus. |
Resilient Cybersecurity: Attack and Defense Strategies in Next-Generation Networks | |
Proposer | Fulvio Valenza, Daniele Bringhenti |
Topics | Cybersecurity, Parallel and distributed systems, Quantum computing |
Group website | https://netgroup.polito.it/ |
Summary of the proposal | Next-generation networks are affected by advanced attacks that can no longer be stopped with traditional mitigation strategies. This research activity aims to critically analyze the characteristics of those attacks, and to use the gained insights to define new incident response strategies that provide progressive resilience. The objective is to apply attack-tailored fast countermeasures while a formally verified reconfiguration is computed, thus limiting the impact of ongoing attacks. |
Research objectives and methods | Next-generation networks, such as virtualized networks and edge-cloud architectures, are reshaping the nature of digital environments. Their distributed, programmable, and dynamic nature opens new opportunities but also increases the surface and complexity of cyberattacks. Traditional incident mitigation strategies are no longer sufficient to handle sophisticated, multi-stage, and multi-vector attacks that exploit the heterogeneity and dynamism of modern systems. This PhD research proposal aims to study and characterize advanced attack scenarios and use the gained insights to design novel defense approaches and methodologies for attack resilience and incident response, leveraging network security automation. Although some methods and tools with this target are available today, they support these activities only partially and still have severe limitations. Most notably, they leave much of the work and responsibility to the human user, who is expected to configure adequate protection mechanisms and react instantly to cyberattacks. They also struggle or fail when facing complex attacks and advanced persistent threats tied to the characteristics of next-generation networks, such as lateral movement through software-defined network segments, exploitation of vulnerabilities in network orchestration layers, manipulation of edge-based services, or stealthy configuration-drift attacks. To overcome the limitations of the literature, the candidate will investigate advanced cyber attacks and threats specific to programmable, virtualized, and distributed environments, analyzing how they unfold, exploit system configurations, and impact the security levels of computer networks. The candidate will then leverage the expertise gained through this investigation to propose automatic defense strategies that ensure a progressive and resilient incident response. In the early phase of an incident, these methodologies should apply fast, attack-tailored countermeasures aimed at limiting the impact of the ongoing attack, containing or slowing down the attacker's progression. However, these early responses only represent a first, provisional layer of resilience necessary to maintain operational continuity. While they stabilize the immediate threat, a complete new network security configuration should be computed with a formal approach that provides correctness by construction. This way, assurance is provided that the new configuration can resist future repetitions of the detected attack. The research activity will be organized in three phases:
Phase 1 (1st year): The candidate will study the characteristics of the complex attacks that affect today's next-generation networks, and will critically identify the main issues and limitations of the state-of-the-art literature on network security configuration automation and incident response in handling them. Subsequently, with the tutor's guidance, the candidate will start identifying and defining new models and strategies for an automatic defense that can ensure high resiliency. Some preliminary results are expected to be published at the end of this phase. During the first year, the candidate will also acquire the background necessary for the research, by attending courses and through personal study.
Phase 2 (2nd year): The candidate will consolidate the proposed approaches, fully implement them, and conduct experiments with them, e.g., to study their correctness, generality, and performance. In this year, particular emphasis will be given to the identified use cases, properly tuning the developed solutions to real scenarios. The results of this consolidated work will also be submitted for publication, aiming at least at a journal publication.
Phase 3 (3rd year): Based on the results achieved in the previous phase, the proposed approach will be further refined to improve its scalability, performance, and applicability (e.g., different security properties and strategies will be considered), and the related dissemination activity will be completed. The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of cybersecurity (e.g., IEEE S&P, ACM CCS, NDSS, ESORICS, IFIP SEC, DSN, ACM Transactions on Information and System Security, or IEEE Transactions on Dependable and Secure Computing) and applications (e.g., IEEE Transactions on Industrial Informatics or IEEE Transactions on Vehicular Technology). |
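A minimal sketch of the two-phase progressive-resilience idea described above, under invented data structures: a fast attack-tailored block is applied first, and the formally verified recomputation is stubbed with an exhaustive policy check where a real system would use a formal engine.

```python
# Sketch of the two-phase progressive-resilience loop (hypothetical data
# structures; the formal step is stubbed with an exhaustive check).
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src: str
    dst: str
    allow: bool

def is_allowed(rules, src, dst):
    for r in rules:                      # first matching rule wins, default deny
        if r.src == src and r.dst == dst:
            return r.allow
    return False

def fast_countermeasure(rules, compromised, victim):
    # Phase 1: attack-tailored quick block, prepended for immediate effect.
    return [Rule(compromised, victim, False)] + rules

def verified_reconfiguration(rules, compromised, hosts):
    # Phase 2 (stub): recompute the configuration and verify, by enumeration,
    # that the compromised host can reach nothing; a real system would prove
    # this property by construction with a formal refinement engine.
    new_rules = [r for r in rules if r.src != compromised]
    assert all(not is_allowed(new_rules, compromised, h) for h in hosts)
    return new_rules

rules = [Rule("web", "db", True), Rule("app", "db", True)]
rules = fast_countermeasure(rules, "web", "db")          # immediate containment
rules = verified_reconfiguration(rules, "web", ["db", "app"])
print(rules)  # only app -> db survives; the compromised host is isolated
```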
Required skills | In order to successfully develop the proposed activity, the candidate should have a good background in cybersecurity (especially in network security), and good programming skills. Some knowledge of formal methods can be useful, but it is not required: the candidate can acquire this knowledge and related skills as part of the PhD Program, by exploiting specialized courses. |
High-Performance Networking for Efficient and Secure AI Applications | |
Proposer | Guido Marchetto, Alessio Sacco |
Topics | Parallel and distributed systems, Quantum computing, Data science, Computer vision and AI |
Group website | https://www.netgroup.polito.it |
Summary of the proposal | The rapid growth of AI applications adds urgency to the need for fast, reliable, and secure network infrastructures, as AI workloads typically require significant computing resources and near-instantaneous responsiveness. This activity aims to design novel software-defined network architectures and protocols to serve these needs. A wide range of connectivity options will be considered, facilitating seamless integration across client devices, cloud platforms, and edge computing environments. |
Research objectives and methods | Two research questions (RQ) guide the proposed work:
RQ1: How can we design and implement, on local and larger-scale testbeds, effective network solutions that enable or facilitate recent AI-enabled use cases?
RQ2: To scale the use of AI-based solutions, what are the most efficient distributed machine learning architectures that can be implemented at the network edge layer?
The final target of the research work is to answer these questions, also by evaluating the proposed solutions on small-scale clusters or large-scale virtual network testbeds, using a few applications, including virtual and augmented reality, precision agriculture, or haptic wearables. In essence, the main goals are to provide innovation in distributed AI algorithms and network integration, using centralized and distributed learning integrated with edge computing infrastructures. The data needed by these algorithms are carried to the learning actor by means of newly defined in-band network telemetry mechanisms. The candidate will design novel solutions to offer resiliency across a wide range of network operations: from the close edge, i.e., near the device, to the far edge, with the design of secure, data-centric, distributed resource allocation algorithms. The research activity will be organized in three phases:
Phase 1 (1st year): the candidate will analyze state-of-the-art solutions for network integration, with particular emphasis on knowledge-based automation techniques. The candidate will then define detailed guidelines for the development of architectures and protocols suitable for automatic operation and configuration of NextG networks, with particular reference to edge infrastructures. Specific use cases will also be defined during this phase (e.g., in virtual reality, automotive). Such use cases will help identify ad-hoc requirements and will include peculiarities of specific environments. With these use cases in mind, the candidate will also design and implement novel solutions to deal with the partial availability of data within distributed edge infrastructures. This work is expected to lead to conference publications.
Phase 2 (2nd year): the candidate will consolidate the approaches proposed in the previous year, focusing on the design and implementation of mechanisms for the integration of supervised and unsupervised learning with network-empowered protocols. Network and computational resources will be considered for the definition of proper allocation algorithms. All solutions will be implemented and tested. Results will be published, targeting at least one journal publication.
Phase 3 (3rd year): the consolidation and experimentation of the proposed approach will be completed. Particular emphasis will be given to the identified use cases, properly tuning the developed solutions to real scenarios. Major importance will be given to the quality offered to the service, with specific emphasis on the minimization of latencies, to enable real-time network automation for critical environments (e.g., telehealth systems, precision agriculture, or haptic wearables). Further conference and journal publications are expected.
The research activity is in collaboration with Saint Louis University, MO, USA, also in the context of the NSF grant #2201536 "Integration-Small: A Software-Defined Edge Infrastructure Testbed for Full-stack Data-Driven Wireless Network Applications". Furthermore, it is related to active collaborations with Rakuten and Tiesse SpA, both interested in the covered topics. The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of networking and machine learning (e.g., IEEE INFOCOM, ACM CoNEXT, ICML, ACM/IEEE Transactions on Networking, or IEEE Transactions on Network and Service Management) and cloud/fog computing (e.g., IEEE/ACM SEC, IEEE ICFEC, IEEE Transactions on Cloud Computing), as well as in venues related to the specific areas that could benefit from the proposed solutions (e.g., IEEE Transactions on Industrial Informatics, IEEE Transactions on Vehicular Technology). |
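As an illustration of distributed learning at the edge (RQ2), the toy below runs federated-averaging rounds over simulated edge nodes; the linear model, synthetic data, and hyper-parameters are placeholder assumptions, not the architectures the research would design.

```python
# Toy federated-averaging rounds over simulated edge nodes (numpy only; the
# linear model and synthetic data are placeholders for the real workloads).
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])               # ground truth the nodes try to learn

def local_update(w, n=64, lr=0.1, steps=5):
    X = rng.normal(size=(n, 2))              # each node sees its own local data
    y = X @ true_w + 0.05 * rng.normal(size=n)
    for _ in range(steps):                   # a few steps of local SGD
        grad = 2 * X.T @ (X @ w - y) / n
        w = w - lr * grad
    return w

w = np.zeros(2)
for _ in range(10):                          # the server averages client models
    w = np.mean([local_update(w.copy()) for _ in range(5)], axis=0)

print(w)  # should approach true_w = [2, -1]
```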
Required skills | The ideal candidate has good knowledge and experience in networking and machine learning, or at least in one of the two topics. Availability for spending periods abroad (mainly but not only at Saint Louis University) is also important for a profitable development of the research topic. |
Development of Virtual Platforms for Early Software Design | |
Proposer | Sara Vinco, Enrico Macii |
Topics | Computer architectures and Computer aided design |
Group website | https://eda.polito.it/ https://www.linkedin.com/company/edagroup-polito/ |
Summary of the proposal | The goal of this research proposal is to enable early-stage software development and validation through the development of virtual platforms. The virtual platforms to be investigated are based on virtual hardware models (e.g., developed with SystemC) and Instruction Set Simulators (ISS) for the software side. The simulators are unified into a co-simulation framework through standards, such as the Functional Mock-up Interface (FMI) or Lingua Franca, to support interoperability and tool coupling. |
Research objectives and methods | A virtual platform is a software-based simulation model of a complete embedded system that mimics the behavior of hardware components and allows software to run as if it were on real hardware. A virtual platform is typically composed of: processor models (e.g., Instruction Set Simulators like QEMU), memory and peripherals (e.g., modeled using SystemC or TLM-2.0), interconnects such as buses or network interfaces, optional multi-domain components (e.g., to describe actuators or power sources), and optional external co-simulation links. The goal of the virtual platform is to allow for execution, debugging, and analysis of embedded software in a controlled, observable, and repeatable simulation environment, without needing physical hardware (and possibly prior to its development). Key characteristics of virtual platforms are thus the ability to execute embedded software binaries, abstraction, to trade off cycle accuracy for simulation speed, and modularity, to enable experimentation with different configurations and simulators. The goal of this research proposal is to enable early-stage software development and validation through the development of virtual platforms, with a possible application to the automotive domain. The virtual platforms to be investigated are based on virtual hardware models (developed with SystemC, together with its AMS and TLM extensions) and Instruction Set Simulators (ISSs like QEMU, or custom RISC-V ISSs like GVSoC) for the software side. The simulators are unified into a co-simulation framework through standards, such as the Functional Mock-up Interface (FMI) or Lingua Franca, to support interoperability and tool coupling. The goal is to: (i) ease the design flow, with co-design and the possibility to explore a wider design space; (ii) allow reuse of third-party IPs, still enabling intellectual property protection and plug-and-play IP integration. The outline of the PhD program can be divided into 3 consecutive phases, one per each year of the program.
- In the first year, the candidate will acquire the necessary background by attending PhD courses and surveying the relevant literature, and will start studying the possible co-simulation standards, with a preliminary integration of the HW models into the virtual platform. A seminal conference publication is expected at the end of the year.
- In the second year, the candidate will focus on the integration of the ISS, and will select and address some relevant use cases, with support from the industrial partners. At the end of the second year, the candidate is expected to target at least a second conference paper in a well-reputed EDA-oriented conference (e.g., DATE, DAC), and possibly another publication in a Q1 journal of the Computer Science sector (e.g., IEEE Transactions on Computers).
- In the third year, the candidate will consolidate the models and approaches investigated in the second year, and possibly apply them to an industrial case study. The candidate will also finalize this work into at least another major journal publication, as well as into a PhD thesis to defend at the end of the program.
The activities will be supported by international academic partners and companies. |
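The co-simulation master pattern behind FMI-style coupling can be sketched in a few lines: two mock components, a hardware model and an ISS stand-in, advance in lockstep and exchange signals only at communication points. All component behavior here is invented for illustration; a real platform would wrap SystemC models and a true ISS behind the same stepping interface.

```python
# Pure-Python mock of an FMI-style co-simulation master: components expose a
# do_step() interface and exchange signals at fixed communication points.
class HwModel:                        # stands in for a SystemC thermal peripheral
    def __init__(self):
        self.temp = 25.0
    def do_step(self, dt, heater_on):
        self.temp += dt * (1.5 if heater_on else -0.5)
        return self.temp

class IssStub:                        # stands in for firmware running on an ISS
    def do_step(self, dt, temp):
        return temp < 30.0            # bang-bang control: heat below 30 C

hw, sw = HwModel(), IssStub()
heater_on, t, dt = False, 0.0, 0.1    # fixed-step communication schedule
while t < 5.0:
    temp = hw.do_step(dt, heater_on)  # advance the hardware model one step
    heater_on = sw.do_step(dt, temp)  # let the "software" react
    t += dt

print(f"t={t:.1f}s temp={hw.temp:.2f}C heater={'on' if heater_on else 'off'}")
```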
Required skills | The ideal candidate for this PhD program has:
- a positive attitude toward research activity and working in a team
- solid programming skills (C++/Python)
- good communication and problem-solving skills
- some prior experience in digital design flows
- some prior knowledge/experience of co-simulation solutions (a plus, but not a requirement). |
Development of virtual platforms for early prototyping of heterogeneous systems | |
Proposer | Sara Vinco, Enrico Macii |
Topics | Computer architectures and Computer aided design |
Group website | https://eda.polito.it/ https://www.linkedin.com/company/edagroup-polito/ |
Summary of the proposal | The research aims to develop a modular and extensible virtual prototyping framework for heterogeneous systems (e.g., unmanned aerial vehicles) that supports hardware/software co-development, multi-domain modeling (electrical, mechanical), and realistic visualization using external 3D simulation. The goal is to allow early exploration of the design space, and the early validation of the different aspects of the system prior to its physical realization. |
Research objectives and methods | Modern heterogeneous systems span multiple domains that go beyond the standard HW/SW dimensions, and include a tight integration with the environment, mechanical aspects, and energy awareness. Designing such systems (e.g., unmanned aerial vehicles) is complex, as it involves multiple technical challenges across various fields of engineering and design. Achieving homogeneous simulation of such different aspects in a single simulation run would improve design space exploration, with more room for optimization, and would take into account the mutual impact of the different aspects of the system (e.g., fault tolerance, power autonomy). This research activity focuses on the study and development of virtual prototypes that go in this direction, by combining processor models for software development, models of memory and peripherals, and modeling of mechanical, aerodynamic, and power aspects, with the possible integration of external tools for visualization and simulation of physical aspects (e.g., wind or irradiance evolution). The goal is to provide the designer with a white-box approach on all aspects of the system, to perform efficient power design space exploration and effective software development (e.g., with countermeasures to physics-related issues). The outline of the PhD program can be divided into 3 consecutive phases, one per each year of the program.
- In the first year, the candidate will acquire the necessary background by attending PhD courses and surveying the relevant literature, and will start studying virtual platform development, with a focus on open-source languages and frameworks like SystemC and its extensions, QEMU, and RISC-V ISSs (e.g., GVSoC). The candidate will develop a preliminary solution on a case study, by focusing on one scenario and on simple mechanical and environmental concerns. A seminal conference publication is expected at the end of the year.
- In the second year, the candidate will select and address some relevant use cases, with support from the industrial partners, and will seek solutions to the challenge of validating systems that include heterogeneous aspects. At the end of the second year, the candidate is expected to target at least a second conference paper in a well-reputed EDA-oriented conference (e.g., DATE, DAC), and possibly another publication in a Q1 journal of the Computer Science sector (e.g., IEEE Transactions on Computers).
- In the third year, the candidate will consolidate the models and approaches investigated in the second year, and possibly apply them to an industrial case study. The candidate will also finalize this work into at least another major journal publication, as well as into a PhD thesis to defend at the end of the program. |
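A toy single-loop, multi-domain simulation in the spirit described above, where electrical, mechanical, and environmental models share one time base; all dynamics and constants are invented placeholders, not validated UAV models.

```python
# Toy multi-domain simulation: electrical (battery), mechanical (altitude),
# and environment (wind) evolve together in one time loop.
import math

soc, alt, dt = 1.0, 0.0, 0.1                 # state of charge, altitude [m], step [s]
for k in range(600):                         # 60 s of simulated flight
    wind = 2.0 * math.sin(0.05 * k)          # environment model
    climb = 0.5 if alt < 50.0 else 0.0       # naive controller: climb to 50 m
    power = 150.0 + 10.0 * abs(wind)         # electrical load grows with wind [W]
    alt = max(0.0, alt + dt * climb)         # first-order mechanical model
    soc -= dt * power / 3.6e5                # drain a ~100 Wh pack (toy figure)

print(f"altitude={alt:.1f} m, battery={100 * soc:.1f}%")
```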
Required skills | The ideal candidate for this PhD program has:
- a positive attitude toward research activity and working in a team
- solid programming skills
- good communication and problem-solving skills
- some prior experience in digital design flows
- some prior knowledge/experience of analog and extra-functional domains (a plus, but not a requirement). |
Next-Generation IT Infrastructure optimization through Semi-Structured Data-Aware Agents | |
Proposer | Luca Cagliero, Francesco Tarasconi (Aruba Group) |
Topics | Data science, Computer vision and AI, Cybersecurity |
Group website | https://www.polito.it/personale?p=luca.cagliero https://smartdata.polito.it |
Summary of the proposal | The research investigates how Artificial Intelligence can optimize IT infrastructure and online services by leveraging advanced time series analysis and AI agents capable of interacting with structured and semi-structured data such as databases and tables. The project focuses on AI-driven predictive analytics, anomaly detection, and real-time decision support. It also explores fine-tuning domain-specific models and developing secure, scalable AI frameworks tailored to various industries. |
Research objectives and methods | Context
Artificial Intelligence (AI) is transforming how IT infrastructure is managed, particularly in areas requiring real-time analysis and decision-making based on dynamic data. Time series data (from server logs, sensor outputs, or user interactions) is fundamental for predictive maintenance, capacity planning, and anomaly detection. However, interpreting such data at scale requires AI models specifically tuned for temporal patterns and contextual signals. In parallel, the increasing use of AI agents that can query, analyze, and act on structured or semi-structured data (such as SQL databases or CSV files) opens new opportunities for automation and smart infrastructure management. The widespread adoption of cloud platforms and Large Language Models (LLMs) raises challenges related to data security, computational cost, and regulatory compliance. Small and medium-sized enterprises (SMEs) especially need adaptable and secure AI solutions capable of handling structured operational data without sacrificing control or privacy. This proposal seeks to advance secure and interpretable AI techniques that can analyze time-dependent data and interact meaningfully with enterprise data systems.
Research objectives
- Time Series-Based IT Infrastructure Optimization: apply pretrained AI models to analyze temporal patterns for predictive maintenance, dynamic scaling, and anomaly detection.
Tentative work plan
During the first year, the PhD student mainly explores the application of time series analysis techniques for infrastructure optimization, including the design and adaptation of LLM- and Transformer-based models tailored to specific industrial cases.
Industrial collaborations
List of possible publication venues |
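As a baseline for the anomaly-detection objective, the sketch below applies a rolling z-score to a synthetic CPU trace; the window size, threshold, and data are assumptions standing in for the learned temporal models the project targets.

```python
# Rolling z-score anomaly detector on a synthetic server-telemetry trace.
import numpy as np

rng = np.random.default_rng(1)
cpu = 40 + 5 * rng.normal(size=500)   # synthetic CPU-utilization trace [%]
cpu[350:355] += 40                    # injected incident

win = 60                              # sliding window of recent history
anomalies = []
for t in range(win, len(cpu)):
    mu = cpu[t - win:t].mean()
    sigma = cpu[t - win:t].std()
    if sigma > 0 and abs(cpu[t] - mu) / sigma > 4:   # 4-sigma threshold
        anomalies.append(t)

print(anomalies)  # expected to flag the points around t=350
```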
Required skills | The PhD candidate is expected to:
- have the ability to critically analyze complex systems, model them, and identify weaknesses;
- be proficient in Python programming;
- know data science fundamentals;
- have a solid background in machine learning and deep learning;
- have a natural inclination for teamwork;
- be proficient in English speaking, reading, and writing;
- proficiency with Docker and Kubernetes software is a plus. |
Reliability and Security Enhancement of LLMs from Exascale to Edge Computing | |
Proposer | Ernesto Sanchez, Annachiara Ruospo |
Topics | Computer architectures and Computer aided design, Cybersecurity |
Group website | https://www.polito.it/personale?p=annachiara.ruospo https://www.polito.it/personale?p=ernesto.sanchez |
Summary of the proposal | LLMs face reliability and security threats across computational scales, from exascale systems to edge devices. This Ph.D. will develop hardware-aware, cross-layer solutions to protect LLM training and inference against random hardware faults (e.g., silent data corruption) and intentional attacks (e.g., adversarial backdoors). By integrating fault tolerance, runtime monitoring, and security-by-design principles, the PhD aims at enabling robust LLM deployments in safety-critical applications. |
Research objectives and methods | Background and Motivation
Artificial Intelligence (AI) has revolutionized our daily life, transforming the way we think about and design next-generation hardware technologies. At the same time, this groundbreaking field introduces new challenges and risks alongside its numerous benefits. Modern autonomous systems increasingly rely on ensembles of AI models, where Large Language Models (LLMs) act as central coordinators for complex tasks like natural language understanding or adaptive planning. However, LLMs are vulnerable to hardware faults and security attacks, regardless of whether they are deployed on exascale systems or edge devices. In exascale systems, random hardware faults in accelerators can compromise, for example, distributed training, while edge devices may face voltage instability and adversarial exploits. Although exascale systems and edge devices differ greatly in their computational resources, the core principles for accelerating AI workloads remain fundamentally aligned: both exploit parallel processing, hardware-specific optimizations, and model adaptation to enhance performance. LLM reliability is threatened by exascale vulnerabilities (e.g., silent data corruption, or SDC, in distributed training due to GPU/TPU faults) and edge vulnerabilities (e.g., voltage-induced computational errors on resource-constrained devices, or adversarial attacks exploiting limited on-device security). Current software- or hardware-level hardening strategies address isolated risks but lack cross-scale adaptability. This project bridges the gap through unified hardware-software co-design.
Objectives and Research Plan
The main objective of this Ph.D. research is to develop robust, hardware-aware strategies that enhance the reliability and security of Large Language Models (LLMs) during both training and inference, across the full spectrum of computing platforms, from exascale supercomputers to edge devices. Key Research Questions:
- How to evaluate the impact of hardware faults on LLM training and inference? Implement a methodology to assess their resilience.
- How can LLM training be hardened against random hardware faults? Integrate in-field fault detection solutions using hardware monitors.
- How can edge LLMs resist adversarial exploits without compromising efficiency? Implement effective defense mechanisms.
Methodology
This research will be structured into three main phases:
1st Year: LLM Development & Profiling. Training or identifying pre-trained LLM benchmarks to extract key parameters that can assist the early detection of random hardware faults (i.e., silent errors propagating through the hardware-software stack) and security attacks (e.g., fault attacks, backdoor attacks). Publish research findings in leading conferences and journals in the field.
2nd Year: Reliability Enhancement Solutions. Leveraging the extracted key parameters, implement hardware-aware solutions to be integrated into heterogeneous architectures to raise the resilience of LLM models. RISC-V architectures and accelerators are in scope. Both training and inference should be addressed. Publish research findings in leading conferences and journals in the field.
3rd Year: Security Enhancement Solutions. The implemented hardware-aware solutions will be integrated with security-specific constraints to detect security vulnerabilities early. Publish research findings in leading conferences and journals in the field.
Expected Impact
The project's ambition is to develop hardware mechanisms for detecting random hardware faults and security threats that can be configured across various LLM models. Outcomes will significantly impact the field, as current solutions are often tailored to specific LLM models and hardware. The proposed methodology and its implementation could be integrated into commercial EDA tools and applied to specific AI-oriented designs. In fact, the PhD outcomes could be exploited by chip design and manufacturing companies like Intel, as well as by EDA tool companies like Siemens. Overall, this Ph.D. research will:
- Enhance the reliability of Large Language Models (LLMs) by developing hardware-aware fault tolerance techniques, reducing the risk of silent data corruption and random hardware faults during both training and inference on exascale and edge platforms.
- Strengthen the security of LLMs through the implementation of cross-layer monitoring and security-by-design strategies, mitigating risks from adversarial attacks, data poisoning, and backdoor implants across diverse computing environments.
- Enable robust deployment of LLMs in safety-critical applications by ensuring consistent model performance and integrity, regardless of the underlying hardware (from high-performance clusters to edge devices).
Active Collaborations: Ecole Centrale de Lyon, Lyon, France; University of Rennes, Rennes, France; NVIDIA; Intel; Meta |
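A minimal fault-injection probe of the kind the first-year resilience assessment needs: flip one bit of one weight and measure the output deviation. The tiny linear layer stands in for an LLM component, and the targeted bit is an arbitrary illustrative choice.

```python
# Single-bit fault injection in PyTorch: reinterpret a weight tensor as int32,
# flip one bit, and compare model outputs before and after.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(16, 4)               # stand-in for an LLM sub-layer
x = torch.randn(8, 16)
ref = model(x).detach()                      # fault-free reference output

with torch.no_grad():
    flat = model.weight.detach().view(-1)    # shares storage with the model
    bits = flat.view(torch.int32)            # reinterpret float32 bit patterns
    bits[0] ^= 1 << 30                       # flip an exponent bit (arbitrary)

faulty = model(x).detach()
print((faulty - ref).abs().max().item())     # a single-bit SDC can be enormous
```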
Required skills | Mandatory Skills:
- Proficiency in PyTorch, CUDA, and HPC workflows
- Strong background in machine learning, deep learning, and AI modeling
- Computer design and architectures
Preferred Skills (Nice-to-have):
- Knowledge of the RISC-V ISA. |
Enhancing Educational Storytelling with Human-Centered AI in the LLM Era | |
Proposer | Luigi De Russis, Raphael Troncy (EURECOM) |
Topics | Data science, Computer vision and AI, Software engineering and Mobile computing |
Group website | https://elite.polito.it |
Summary of the proposal | The PhD aims to develop novel methods and techniques for allowing end-users to create interactive educational narratives from structured resources such as knowledge graphs. The research envisions combining generative models with Retrieval-Augmented Generation (RAG) and end-user personalization strategies, moving beyond simple binary-choice formats and thus enabling more engaging, custom-tailored, and culturally adaptive storytelling. |
Research objectives and methods | Interactive storytelling holds significant potential for enhancing educational experiences by engaging learners through tailored, culturally adaptive narratives. Recent advancements highlight that educational storytelling, enriched by interactive personalization, increases learner motivation and deepens understanding and retention of content. Despite these benefits, educators often lack the technical skills necessary to create sophisticated interactive narratives, and current generative AI tools alone do not reliably produce culturally sensitive and accurate educational content without structured guidance. Knowledge graphs (KGs) provide a backbone of real-world entities and relations that can ground these stories in accurate information. By augmenting a system based on Large Language Models (LLMs) with retrieved facts or connections from a KG or similar sources, the system can dynamically incorporate relevant background information (e.g., historical events, definitions, examples) into the story, improving factual accuracy and depth. This approach builds on the Retrieval-Augmented Generation (RAG) paradigm, where external non-parametric memory is tapped to overcome the limited knowledge coverage of standalone LLMs. In an educational storytelling context, RAG allows the narrative to remain up-to-date and richly informative, drawing on sources like domain knowledge graphs or textbooks on demand. Central to this approach is the integration of KGs as structured semantic backbones, ensuring factual correctness, educational relevance, and cultural adaptation in storytelling. For example, if the educator wants to set a biology lesson as a fantasy story, the metaphors and characters can be adjusted to ones familiar in the learner's culture (animals, folklore, historical figures, etc.). We propose employing RAG techniques, demonstrated effective in tasks demanding factual grounding, to dynamically retrieve contextually relevant educational resources from structured knowledge bases. Educators, via intuitive end-user development interfaces inspired by recent research in end-user website generation and KG-based debugging of interactive trigger-action rules, will specify narrative constraints and themes, educational and pedagogical goals, and personalization strategies. The generative model, enriched by retrieved knowledge from structured educational graphs, will then produce interactive, culturally nuanced narratives tailored to individual learners. The research activity will build upon successful approaches to fine-tune large language models for educationally engaging dialogues and extend those toward fully interactive, branching, educational storylines. This PhD proposal directly builds on the complementary expertise of Politecnico di Torino, with its extensive background in end-user AI empowerment, and EURECOM, known for its expertise in KG-driven narrative generation and advanced AI storytelling approaches. The PhD student will split their time equally between EURECOM and Politecnico di Torino, spending 1.5 years at each institution. This research is expected to make significant contributions in both the Artificial Intelligence and Human-Computer Interaction fields by enabling the scalable creation of culturally aware, personalized educational stories while also strengthening scientific collaborations between the two institutions. 
The results of this research are expected to be published in leading conferences on Artificial Intelligence, Human-Computer Interaction and Information Retrieval (e.g., ACM CHI, ACM IUI, UMAP, ACM TheWebConf, ECIR, CIKM). Additionally, one or more journal publications are anticipated in a subset of the following international journals: ACM Transactions on Interactive Intelligent Systems, ACM Transactions on the Web, and ACM Transactions on Information Systems. |
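The KG-grounded generation loop can be illustrated with a toy prompt-assembly step: retrieve the facts about a topic from an in-memory graph and inject them into the instruction sent to whatever LLM backs the system. The triples and template are invented; a real deployment would query a domain KG or a source like Wikidata.

```python
# Sketch of KG-grounded prompt assembly for educational story generation.
KG = [  # toy (subject, predicate, object) triples
    ("photosynthesis", "occurs_in", "chloroplasts"),
    ("photosynthesis", "consumes", "carbon dioxide and water"),
    ("photosynthesis", "produces", "glucose and oxygen"),
    ("mitochondria", "perform", "cellular respiration"),
]

def retrieve(topic):
    # Keep only facts about the requested topic (RAG retrieval stand-in).
    return [f"{s} {p.replace('_', ' ')} {o}" for s, p, o in KG if s == topic]

def build_prompt(topic, culture, facts):
    # Ground the generation request in the retrieved facts.
    return (
        f"Write a branching story for children about {topic}, using characters "
        f"from {culture} folklore. Stay consistent with these facts:\n- "
        + "\n- ".join(facts)
    )

print(build_prompt("photosynthesis", "Italian", retrieve("photosynthesis")))
```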
Required skills | The ideal candidate should have a solid background in Computer Engineering or Data Science, with prior experience in AI, particularly in machine learning and/or deep learning. Proven knowledge and experience with RAG, LLMs, and knowledge graphs is a plus. Additionally, the candidate should have knowledge of Human-Computer Interaction methods and techniques, experience with user interface creation, and demonstrable proficiency in English. |
Knowledge-Informed Machine Learning for Data Science and Scientific AI | |
Proposer | Daniele Apiletti, Paolo Garza, Tania Cerquitelli |
Topics | Data science, Computer vision and AI |
Group website | https://dbdmg.polito.it/ https://smartdata.polito.it/ |
Summary of the proposal | Traditional machine learning is mainly data-driven. However, besides the knowledge brought by the data, extra a-priori knowledge of the modeled phenomena is often available (e.g., physical laws, domain expertise), leading to Knowledge-Informed Machine Learning, Theory-Guided Data Science, and ultimately to Scientific AI. The candidate will explore solutions leveraging advanced data science models and Agentic AI components to learn, reason, and represent complex phenomena. |
Research objectives and methods | Research Objectives
The research aims to define new methodologies for integrating scientific and domain knowledge within advanced data science models and agentic AI architectures, with a focus on advancing Scientific AI. The goal is to propose innovative algorithms, agent designs, and hybrid model structures, explore their applications in various scientific and engineering domains, investigate the limitations of current approaches, and advance solutions based on the integration of data-driven methods and theory-informed reasoning. The ultimate goal is to contribute to improving the capability of AI systems to understand and model complex systems by deeply integrating existing domain knowledge (e.g., physical laws, causal relationships, theoretical principles) with data-driven insights. This will drive towards more robust, interpretable, and generalizable models and intelligent agents capable of sophisticated reasoning and discovery, opening new perspectives for accelerating scientific breakthroughs and designing more efficient and reliable engineering solutions. To this end, the main research objectives include:
- Developing new strategies to embed scientific knowledge (e.g., physical laws, causal relationships) and structured domain knowledge into advanced data science models and agentic AI architectures, enabling them to reason from first principles.
- Designing intelligent agents and multi-agent systems capable of representing, reasoning about, and acting upon complex entities and relationships within scientific domains, incorporating physical constraints and theoretical knowledge into their behavior and learning processes.
- Addressing the challenges in current AI models, including agentic systems, to effectively manage and integrate complex, multi-scale scientific knowledge, thereby enhancing their reasoning capabilities and the expressive power of the derived solutions.
Outline
- 1st year. The candidate will explore state-of-the-art techniques in Knowledge-Informed Machine Learning, Scientific AI, Theory-Guided Data Science, Agentic AI, and advanced data modeling. This will include a focus on how scientific knowledge can inform the design and learning processes of intelligent agents and sophisticated analytical models. Applications to physical phenomena, engineering simulations, social data, and other scientific domains will be analyzed. Specific datasets, simulation environments, and problems for experimentation will be identified.
- 2nd year. The candidate will define innovative solutions based on agentic principles and advanced data science models to overcome the limitations described in the research objectives, experimenting with the proposed techniques on the identified real-world or simulated problems. The development and experimental phase will be conducted on public, synthetic, and possibly real-world datasets/environments, with particular attention to validation against known scientific principles and theoretical understanding. New challenges and limitations are expected to be identified in this phase.
- 3rd year. During the third year, the candidate will extend the research by broadening the experimental evaluation to more complex phenomena capable of better leveraging the domain knowledge provided by the developed agentic AI systems and advanced data science models. The candidate will perform optimizations on the designed algorithms and agent architectures, establishing the limitations of the developed solutions and possible improvements in new application fields, with a focus on generating scientifically valid, interpretable, and actionable results.
Target publications: IEEE TKDE (Trans. on Knowledge and Data Engineering); ACM TKDD (Trans. on Knowledge Discovery from Data); ACM TOIS (Trans. on Information Systems); ACM TIST (Trans. on Intelligent Systems and Technology); IEEE TPAMI (Trans. on Pattern Analysis and Machine Intelligence); Information Sciences (Elsevier); Expert Systems with Applications (Elsevier); Engineering Applications of Artificial Intelligence (Elsevier); ACM Transactions on Spatial Algorithms and Systems (TSAS) |
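One canonical instance of knowledge-informed learning is a loss that mixes a data term with a physics residual; the sketch below enforces the known law du/dt = -u (so u(t) = exp(-t)) on a small network, a toy version of the theory-guided models the research would generalize. Model size, sampling, and learning rate are arbitrary illustrative choices.

```python
# Toy knowledge-informed training loop: data loss + physics residual loss,
# the basic pattern behind physics-informed neural networks.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

t_data = torch.tensor([[0.0]])          # single measurement: u(0) = 1
u_data = torch.tensor([[1.0]])

for step in range(2000):
    t_phys = 3 * torch.rand(64, 1)      # collocation points on [0, 3]
    t_phys.requires_grad_(True)
    u = net(t_phys)
    du_dt = torch.autograd.grad(u.sum(), t_phys, create_graph=True)[0]
    loss = ((net(t_data) - u_data) ** 2).mean() + ((du_dt + u) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())  # should approach exp(-1) ~ 0.368
```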
Required skills | - Knowledge of basic computer science concepts, AI, machine learning, and maths.
- Programming skills in Python.
- Knowledge of English, both written and spoken.
- Capability of presenting the results of the work, scientific writing, and slide presentations.
- Entrepreneurship, autonomous working, goal oriented.
- Flexibility and curiosity for different activities, from programming to teaching to presenting to writing.
- Capability of guiding undergraduate students in thesis projects. |
Building Dynamic and Opportunistic Datacenters | |
Proposer | Fulvio Risso, Carla Chiasserini |
Topics | Parallel and distributed systems, Quantum computing |
Group website | https://www.fluidos.eu |
Summary of the proposal | This project aims at aggregating the huge number of traditional computing/storage devices available in modern environments (such as desktop/laptop computers, embedded devices, etc.) into an "opportunistic" datacenter, hence transforming all current devices into datacenter nodes. This proposal aims at tackling the most relevant problems towards the above scenario, such as defining a set of orchestration algorithms, as well as developing a proof of concept showing the above system in action. |
Research objectives and methods | Cloud-native technologies are increasingly deployed at the edge of the network, usually through tiny datacenters made of a few servers that maintain the main characteristics (powerful CPUs, high-speed network) of the well-known cloud datacenters. However, most current domestic and enterprise environments host a huge number of traditional computing/storage devices, such as desktop/laptop computers, embedded devices, and more, which run mostly underutilized. The objectives of the present research are the following:
The research activity is part of the Horizon Europe FLUIDOS project (https://www.fluidos.eu/) and is related to current active collaborations with Aruba S.p.A. (https://www.aruba.it/) and Tiesse (http://www.tiesse.com/). The research activity will be organized in three phases:
Expected target conferences are the following:
Journals:
Magazines: |
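To give a flavor of the orchestration problem, here is an invented placement heuristic that ranks heterogeneous devices by spare capacity and stability before greedily placing workloads; the node data and scoring weights are made up for illustration and are not FLUIDOS algorithms.

```python
# Toy scheduler for an "opportunistic" datacenter of heterogeneous devices.
nodes = [
    {"name": "desktop-1",  "free_cpu": 4.0, "free_mem": 8.0, "churn": 0.10},
    {"name": "laptop-2",   "free_cpu": 2.0, "free_mem": 4.0, "churn": 0.40},
    {"name": "embedded-3", "free_cpu": 0.5, "free_mem": 0.5, "churn": 0.05},
]

def score(n):
    # Favor spare capacity, penalize nodes likely to disappear (high churn).
    return n["free_cpu"] + 0.5 * n["free_mem"] - 5.0 * n["churn"]

def place(pod_cpu, pod_mem):
    # Greedy placement: pick the best-scoring node that fits the request.
    fit = [n for n in nodes if n["free_cpu"] >= pod_cpu and n["free_mem"] >= pod_mem]
    if not fit:
        return None
    best = max(fit, key=score)
    best["free_cpu"] -= pod_cpu
    best["free_mem"] -= pod_mem
    return best["name"]

print(place(1.0, 2.0))  # -> desktop-1 under these toy numbers
```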
Required skills | The ideal candidate has good knowledge and experience in computing architectures, cloud computing and networking. Availability for spending periods abroad would be preferred for a more profitable investigation of the research topic. |
Trustworthy Edge AI: efficient and explainable multi-modal models | |
Proposer | Tatiana Tommasi, Carlo Masone |
Topics | Data science, Computer vision and AI, Parallel and distributed systems, Quantum computing |
Group website | https://vandal.polito.it |
Summary of the proposal | This PhD project aims to design multimodal deep learning models for edge devices that balance accuracy, trustworthiness, and efficiency. Key goals include avoiding spurious features, ensuring transparency and interpretability, enabling local, user-tailored adaptation, and supporting real-time decision-making. The proposal will focus on two use-cases (visual place recognition and video understanding) within an overarching edge-intelligence framework. |
Research objectives and methods | The growing integration of artificial intelligence (AI) across diverse sectors calls for models that are not only accurate but also trustworthy and interpretable. This need is especially pronounced when deploying AI directly on edge devices, which enables real-time data processing close to the source while leveraging large-scale pre-trained multi-modal models. This approach offers clear advantages in latency and privacy but also raises challenges in understanding the internal mechanisms of these models, ensuring transparency, avoiding reliance on spurious features, and supporting efficient, user-specific adaptation. This project aims to study and design multi-modal deep learning models that optimally balance accuracy, trustworthiness, and efficiency in edge intelligence settings. The work will address three threads:
(1) Visual Place Recognition (VPR)
(2) Video Understanding
(3) Edge Intelligence
Overall, in the first two years the PhD project will focus on efficient multi-modal learning for visual place recognition and video understanding. The third year will be dedicated to adapting the designed models to a distributed and federated setting. |
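For the VPR use case, the skeleton below shows descriptor-based place recognition as nearest-neighbor search over L2-normalized global descriptors; random vectors stand in for real image embeddings from any backbone.

```python
# Skeleton of descriptor-based visual place recognition: cosine-similarity
# retrieval over a database of global descriptors.
import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 256))               # one descriptor per mapped place
db /= np.linalg.norm(db, axis=1, keepdims=True)  # L2-normalize for cosine search

query = db[42] + 0.1 * rng.normal(size=256)     # noisy revisit of place 42
query /= np.linalg.norm(query)

scores = db @ query                              # cosine similarities
print(int(np.argmax(scores)))                    # -> 42: the matching place
```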
Required skills | - Good programming skills and proficiency in programming languages (Python is required, C++ is a plus).
- Familiarity with at least one recent deep learning framework (PyTorch, JAX, or TensorFlow).
- The candidate is expected to be proactive and capable of autonomously studying and reading the most recent literature.
- Ability to tackle complex problems and algorithmic thinking.
- Be fluent in English, both written and oral. |