


Available Ph.D. positions


If available, calls for Ph.D. positions are listed on this page (please check the application deadlines). Access to a Ph.D. in Italy is subject to a public examination. Please refer to the PhD program pages for the specific requirements and deadlines: ScuDO – Politecnico di Torino (requirements and call for applications).

If you are interested in a proposal and wish to submit your application for the position, send us an email with the following information:


Available Theses


DIVINE: Utilizing Advanced AI for Precision Diagnosis of Vine Diseases in Compliance with the European Green Deal

Thesis @CGVG in collaboration with Pro-Logic, Torino, available for multiple students. Internship (tirocinio) + research grant available
Tutors: Alessandro Emmanuel Pecora, Andrea Bottino
TAGS: Artificial Intelligence, Deep Learning, Computer Vision, Precision Agriculture, Vine Disease Diagnosis, Image Analysis, Neural Networks, Sustainability, European Green Deal

The DIVINE (DIagnosi delle malattie della VIte per immagini tramite le reti NEurali e il deep learning) project is a pioneering initiative aligned with the European Green Deal, aimed at transforming the way vine diseases are detected and treated. This thesis aims at developing deep-learning-based Computer Vision methodologies to automatically and accurately diagnose, from images (in the visible and multispectral ranges), the major vine diseases in Italy, such as Downy Mildew (Peronospora) and Powdery Mildew (Oidio).
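As a purely illustrative sketch of the kind of deep-learning pipeline such a diagnosis system could start from, the snippet below fine-tunes a pretrained CNN on a folder of labelled leaf images; the dataset layout, class names and hyperparameters are hypothetical placeholders, not the actual DIVINE setup.

```python
# Illustrative sketch only: fine-tuning a pretrained CNN for vine-disease
# classification. Dataset path, class names and hyperparameters are
# hypothetical; the actual DIVINE models and data may differ substantially.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/<downy_mildew|powdery_mildew|healthy>/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```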

This project is a collaborative effort, bringing together entities from various sectors, including enterprises, academic institutions, agronomists, and sensor technology experts. Its collaborative nature is aimed at leveraging a wide spectrum of expertise to develop effective solutions and to build a comprehensive, annotated dataset, from both controlled experiments and real-world crop scenarios, that can be used to train the devised models.

Goals:

Literature Review and Model Exploration:

Data Collection and Annotation:

Model Training and Validation:

In-the-wild Application and Evaluation:

Objectives:

This thesis represents an opportunity to contribute significantly to sustainable agriculture through the integration of cutting-edge AI technology.

Using generative AI to create annotated datasets for wet damage identification

Thesis @CGVG, available for multiple students. Research grant available
Tutors: Andrea Bottino, Federico D'Asaro, Alessandro Emmanuel Pecora
TAGS: Generative AI, Annotated Datasets, Instance Segmentation, Wet Damage Identification, Synthetic Data Generation, Machine Learning

This work focuses on the automatic detection of wet damage from images.

A major challenge in this work is the lack of annotated datasets large enough to effectively train instance segmentation algorithms. The core idea of this thesis proposal is to use the capabilities of generative AI (GenAI) to create a synthetic dataset of annotated images.

Using a small set of existing annotated images, the work aims to develop GenAI approaches that can extrapolate and generate a comprehensive dataset simulating different scenarios and conditions of wet damage. This dataset will then be used to train robust instance segmentation algorithms, improving their accuracy and effectiveness in real-world applications.
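A minimal sketch of the second half of this pipeline, assuming the synthetic images and masks are already available: adapting a pretrained Mask R-CNN from torchvision to a single "wet damage" class. The dataset loader is omitted and all names are placeholders, not the actual project code.

```python
# Illustrative sketch: adapting a pretrained Mask R-CNN to a (hypothetical)
# synthetic wet-damage instance-segmentation dataset.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + wet damage

model = maskrcnn_resnet50_fpn(weights="DEFAULT")
# Replace the box classification head for our number of classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# Replace the mask prediction head as well.
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
# Training over the synthetic dataset (dataloader omitted for brevity):
# images: list of image tensors; targets: list of dicts with "boxes", "labels", "masks".
# for images, targets in synthetic_loader:
#     loss_dict = model(images, targets)
#     loss = sum(loss_dict.values())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```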

The expected outcome of this work is twofold. First, to successfully demonstrate the feasibility of using generative AI to create large, diverse and reliable annotated datasets from a minimal number of real annotated images. Second, to evaluate the performance of instance segmentation algorithms trained on these synthetic datasets.

Developing a Comprehensive Digital Twin for the CARLA Simulator: A Case Study in Turin's Urban Environment

Thesis @CGVG, available for multiple students
Tutors: Leonardo Vezzani, Francesco Strada, Andrea Bottino
TAGS: Digital Twin, CARLA Simulator, Urban Environment Replication, Unreal Engine, VR

The aim of this thesis is to create an advanced digital twin of an urban environment, specifically focusing on a neighborhood in Turin, Italy, and integrating it with the CARLA simulator. This project encompasses the development and integration of various components required for a highly effective and realistic driving simulator.

Goals and Objectives:

Urban Environment Replication:

Integration of Autonomous Agents and Realistic Elements:

Real-Time VR Environment Implementation:

Methodology for Continuous Update and Replication:

Use of Advanced Technologies:

Research-Oriented Tool Development:

This thesis represents a blend of simulation technology, urban planning, and software engineering. It offers a unique opportunity for students to contribute to the growing field of digital twins, particularly in the context of urban environments and autonomous driving simulation. The project not only aims to create a realistic digital replica of a neighborhood but also establishes a replicable framework that can be applied to various urban settings, thereby contributing significantly to research and development in this field.

Development of a Modular HUD Design Tool for the CARLA Driving Simulator

Thesis @CGVG, available for multiple students
Tutors: Leonardo Vezzani, Francesco Strada, Andrea Bottino
TAGS: HUD Design, CARLA Simulator, Unreal Engine 5, VR

This thesis deals with the development of a tool for designing sophisticated and modular Head-Up Displays (HUDs) for the CARLA driving simulator. HUDs, which project important information onto a vehicle's windshield, have become increasingly prevalent in modern vehicles and offer the advantage of reducing driver distraction and increasing road safety. This tool aims to replicate and extend the functionality of real HUDs in a virtual driving environment.

Aims and objectives:

Development of a modular HUD tool:

Integration into the CARLA simulator:

Diverse interface and system testing:

Design the user interface and driving experience:

Evaluation through case studies:

Contribution to research:

MASTER THESES AT EST@ENERGY CENTER

Thesis in collaboration with the Energy Center, Politecnico di Torino
Tutors: Andrea Bottino, Francesco Strada, Daniele Grosso, Ettore Bompard
TAGS: Climate and Energy Transition, Large Language Models (LLM), Prompt Engineering, Data Integration, Jupyter Notebooks, Data Visualization, Interactive Environment, City Sustainability, Scenario Planning, Digital Twin, 3D Modeling, Unity, Decision Theatre.
Details of the theses are below (or here).

PRODUCTION OF A GENERATIVE BOOK ON THE CLIMATE AND ENERGY TRANSITIONS IN THE MEDITERRANEAN AREA APPLYING A LARGE LANGUAGE MODEL (LLM)

The context: in recent years, EST has produced five (5) reports on the climate and energy transition in the Mediterranean area, accumulating a large body of knowledge, data and references. The reports gather information on all energy technologies (from hydrocarbons to renewables), on maritime transport, and on their greenhouse gas emissions.

The problem: to structure that volume of information in a manner that is ready and fit for all end users, enabling assisted interactions (open and with predefined prompts), and that can grow with the addition of new information. This goal stems from the limitations of traditional books, whose contents are static and overabundant and therefore, on the one hand, quickly outdated and, on the other, difficult to consult and ill-suited to providing rapid answers to readers' requests.

The thesis activity: to develop a so-called Generative Book (GB) using LLM technologies, applying the BLOOM platform. Using the five already available reports and related sources as its basic content, the new GB will enable users to interact with the contents in different ways (quick summaries, asking questions, developing questionnaires based on the text, obtaining usable output for specific requests, etc.); it will let users contribute to the contents by, for instance, indicating useful sources of information, pointing out shortcomings, commenting, etc.; and it will act as an evolving platform facilitating the growth and evolution of the contents.
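As a rough illustration of the retrieve-then-generate interaction pattern such a GB could build on (not the actual GB design), the sketch below embeds a few report passages, retrieves the most relevant one for a user question, and prompts a small BLOOM checkpoint; the passages, prompt wording and model size are placeholders.

```python
# Minimal retrieve-then-generate sketch over report text, assuming a small
# BLOOM checkpoint; passages, prompt wording and model sizes are placeholders.
from sentence_transformers import SentenceTransformer, util
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tiny stand-in corpus; in practice, paragraphs extracted from the five EST reports.
passages = [
    "Renewable capacity in the Mediterranean area grew steadily over the reporting period.",
    "Maritime transport remains a significant source of greenhouse gas emissions.",
]

retriever = SentenceTransformer("all-MiniLM-L6-v2")
passage_emb = retriever.encode(passages, convert_to_tensor=True)

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
generator = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

def answer(question: str) -> str:
    # Retrieve the most relevant passage, then condition the LLM on it.
    q_emb = retriever.encode(question, convert_to_tensor=True)
    best = int(util.cos_sim(q_emb, passage_emb).argmax())
    prompt = f"Context: {passages[best]}\nQuestion: {question}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = generator.generate(**inputs, max_new_tokens=80)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(answer("What are the main emission sources discussed in the reports?"))
```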

CUSTOM TRAINING AND PROMPT ENGINEERING OF A LLM PLATFORM ON THE ENERGY AND CLIMATE TRANSITIONS IN THE MEDITERRANEAN AREA

The context: in recent years, EST has produced five (5) reports on the climate and energy transition in the Mediterranean area, accumulating a large body of knowledge, data and references. The reports refer to a vast set of documents and data. EST intends to upload all those elements into an LLM-based Generative Book (GB).

The problem: to accelerate the adaptation of an out-of-the-box LLM (BLOOM) for its use with knowledge and contents referring to the Mediterranean energy and climate area.

The thesis activity: to compose and format prompts that maximize the model's performance on the tasks defined for the GB, and to custom train the GB model with datasets taken from the EST reports. This will involve setting up the training environment, tuning the training parameters, and fine-tuning the GB model.
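One possible way to perform this custom training, shown here only as a hedged sketch, is parameter-efficient fine-tuning (LoRA) of a small BLOOM checkpoint on text extracted from the reports; the file name, hyperparameters and adapter configuration below are placeholders.

```python
# Illustrative sketch: LoRA fine-tuning of a small BLOOM checkpoint on report
# excerpts. File names, hyperparameters and the LoRA configuration are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters instead of updating all weights.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["query_key_value"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical plain-text file containing excerpts from the EST reports.
dataset = load_dataset("text", data_files={"train": "est_reports.txt"})["train"]
dataset = dataset.map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=512),
    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gb-bloom-lora", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```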

INTEGRATION OF JUPYTER NOTEBOOKS WITH A LLM PLATFORM ON THE ENERGY AND CLIMATE TRANSITIONS IN THE MEDITERRANEAN AREA

The context: in recent years, EST has produced five (5) reports on the climate and energy transition in the Mediterranean area, accumulating a large body of knowledge, data and references. Many values are supported by equations and formulae, which are not made explicit in the reports.

The problem: to produce an interactive environment composed of computational documents that use the data, equations and explanations present in the EST reports on energy and climate in the Mediterranean, ready for customised use, visualization and analysis, and integrated into a Generative Book (GB).

The thesis activity: to produce Jupyter notebooks on energy and climate in the Mediterranean area to be integrated into an LLM-based GB, connecting software code, data analytics and text, working interactively and remaining customizable by end users.
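A minimal sketch of the kind of computational cell such a notebook could contain: an ipywidgets slider that drives a simple calculation and plot. The emission factors and figures are purely illustrative placeholders, not values taken from the EST reports.

```python
# Sketch of an interactive notebook cell of the kind the GB could embed:
# a slider-driven calculation with an immediate plot. The emission factors
# and numbers below are purely illustrative placeholders, NOT report data.
import matplotlib.pyplot as plt
from ipywidgets import interact

def co2_from_generation(gas_twh=10.0, coal_twh=5.0, renewables_twh=20.0):
    # Hypothetical emission factors in Mt CO2 per TWh.
    factors = {"gas": 0.40, "coal": 0.90, "renewables": 0.0}
    totals = {
        "gas": gas_twh * factors["gas"],
        "coal": coal_twh * factors["coal"],
        "renewables": renewables_twh * factors["renewables"],
    }
    plt.bar(totals.keys(), totals.values())
    plt.ylabel("Mt CO2")
    plt.title(f"Total: {sum(totals.values()):.1f} Mt CO2")
    plt.show()

# Sliders let the reader explore scenarios directly inside the notebook.
interact(co2_from_generation, gas_twh=(0.0, 50.0), coal_twh=(0.0, 50.0),
         renewables_twh=(0.0, 100.0))
```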

DEVELOPMENT OF A DIGITAL PLATFORM FOR SUPPORTING TABLE-TOP EXERCISES APPLIED TO THE CLIMATE AND ENERGY TRANSITIONS IN CITIES

The context: EST supports cities in the elaboration of: i) their transition towards climate neutrality and the related production of Climate City Contracts, and ii) plans for their sustainability. For these goals, EST is developing two digital platforms, CLICC and CITTA, composed of interactive tools for dealing with data and text in a multimedia environment, and a full set of scientific instruments for the calculation and analysis of data. The study of future scenarios for climate neutrality and sustainability demands the interaction with all stakeholders, and the joint study of best alternatives concerning all potential contingencies. These interactions can be structured in the form of Table Top Exercises (TTXs).

The problem: to facilitate the arrangement and implementation of TTXs for cities by means of digital applications in an interactive environment, with the management of narrative scripts, a diversity of timing scales, alternative paths in the presence of contingencies, etc., while recording the decisions and actions of all participants. TTXs are role-playing activities in which players respond to scenarios presented by the facilitators.

The thesis activity: to produce an interactive digital platform using open-source technologies for supporting the preparation and the running of TTXs applied to the climate neutrality and sustainability of cities, taking advantage of the existing platforms CLICC and CITTA. The platform should provide facilities for ex-ante preparation of the TTX, for the work of the participants (i.e. players, observers, facilitators, note takers), and for the ex-post analysis and reporting.

DESIGN OF AN INTERACTIVE INTERFACE FOR THE DIGITAL TWIN OF CITIES FOR THE STUDY OF CLIMATE NEUTRALITY AND SUSTAINABILITY

The context: EST supports the city of Torino in its plans for climate neutrality and sustainability. A crucial aspect of this support is to enable the city administrators and all city stakeholders to visualize and interact with a digital twin of the main components of the city (e.g., energy, transport, waste, green areas, etc.) directly related to the production and mitigation of emissions. EST is developing two digital platforms, CLICC and CITTA, composed of interactive tools for dealing with data and text in a multimedia environment, and a full set of scientific instruments for the calculation and analysis of data. EST operates a Decision Theatre with a 180-degree, 3-meter-tall wall on which interactive software applications are displayed.

The problem: to facilitate the interaction with the manifold aspects related to climate neutrality and sustainability of cities, including the virtual representation of the city systems as a digital twin. This representation should include both past data and future scenarios, with the possibility of displaying the evolution in time of those scenarios.

The thesis activity: to produce an interactive interface, based on tools such as Unity, to be used both in EST's Decision Theatre and on the web, able to dynamically display data in 2D/3D and to create game-like experiences based on the climate neutrality and sustainability scenarios produced by CLICC and CITTA. The activity will be applied to the city of Torino.

Theses in collaboration with Centro Ricerche RAI about AI, synthetic humans, motion capture, and multimedia

Thesis in collaboration with Centro Ricerche, Innovazione Tecnologica e Sperimentazione (RAI); available for multiple students
Tutors: Andrea Bottino, Francesco Strada
TAGS: virtual production, character modeling, animated avatars, generative AI, motion capture, virtual studios, Internship

In the following section, we present a series of thesis proposals, some of which must be managed as internships due to constraints on access to equipment or facilities at the CRR.

It is important to note that the CRR has a limited capacity for hosting students. As of the current date, there is only one available spot for a thesis project. Any additional projects may start upon the completion of the theses already underway.

All of the following theses can potentially be carried out by multiple students working on the same project.

3D Content creation for virtual studio

New 3D creation technologies offer the possibility of achieving a high level of photorealism and similarity compared to 3D scanning. This thesis aims to analyze and evaluate a robust workflow for creating photorealistic objects. The creation process will include the use of a variety of software (NeRF, Gaussian Splatting...) and tools such as 3D modelling software. The expected outcomes of this research include a better understanding of how photorealistic objects can enhance and change the storytelling process and how they can be incorporated into broadcast and metaverse ecosystems.

Exploring the potential of Generative AI for 3D modeling

This thesis investigates the transformative impact of generative artificial intelligence (AI) on the field of broadcasting. The primary objective is to comprehensively analyze how generative AI technologies can enhance the efficiency, creativity, and overall quality of content produced by broadcasters, with a focus on 3D model building (objects, sets...) starting from a prompt or sketches. The thesis will combine qualitative and quantitative analyses and will explore practical applications.

Motion capture for virtual studio [Internship]

New motion capture technologies offer the possibility of achieving a high level of photorealism and likeness. This thesis aims to analyze and evaluate a robust workflow for avatar animation. The capture process will include the use of a variety of software and tools (move.ai, motion suit, Unreal Character Animator...). The expected outcomes of this research include a better understanding of how to improve the animation process with qualitative and quantitative analysis.

Sign language (LIS) [Internship]

The thesis aims to develop an innovative, avatar-based interface for Italian Sign Language (LIS) communication, enhancing accessibility and interaction for the deaf and hard-of-hearing community. The system will leverage motion capture suits and gloves (Xsens & Rokoko) to accurately capture and translate LIS gestures. This technology will enable the detailed tracking of hand movements and body language, essential for conveying the nuances of LIS. The recorded movements will be applied to a digital avatar to replicate LIS gestures in real time. The avatar will serve as a visual representation, translating LIS into a visual format that can be easily understood.

Volumetric capture (point clouds) in real time for TV production [Tirocinio]

The objective of this thesis is to investigate the use of real-time volumetric capture for broadcasters, analyzing its impact on content creation, viewer experience, and the overall broadcasting landscape. By examining the technical challenges and creative possibilities, the study seeks to provide valuable insights into the practical implementation of this emerging technology. The complexities of real-time volumetric capture systems will be examined in technical and subjective assessments, and case studies will look at effective broadcasting implementations.

Volumetric capture & Compression

This thesis explores the rapidly evolving field of volumetric capture and compression, addressing the challenges associated with creating and delivering immersive content. As the demand for high-quality immersive experiences continues to grow, efficient compression techniques are essential for transmitting volumetric data seamlessly across various platforms and devices.

The study begins by providing an in-depth analysis of state-of-the-art volumetric capture systems, with particular attention to the work within MPEG (Moving Picture Experts Group), and evaluates their respective advantages and limitations. Furthermore, the research investigates the critical aspect of compression algorithms tailored specifically for volumetric data.

Generative AI for production centre

This thesis explores the integration and impact of Generative Artificial Intelligence (Generative AI) on the editorial side. As industries strive for increased efficiency, automation, and innovation, Generative AI emerges as a powerful tool with the potential to transform traditional production workflows. The thesis begins with an in-depth analysis of existing literature on Generative AI and its applications, such as scene, script and virtual character generation. Special attention is given to adapting and customizing these technologies to suit the specific requirements of diverse production processes, ensuring seamless integration with existing infrastructure.

Development of a sampling pattern for first-order aberrations in ray-tracing rendering

Thesis @CGVG, available for multiple students.
Tutors: Leonardo Vezzani, Francesco Strada, Andrea Bottino, Bartolomeo Montrucchio
TAGS: Ray Tracing, Rendering, Sampling pattern, Point Spread Function, optical transfer function

The quality of renderings produced with ray tracing has become increasingly high, thanks to recent advancements in computer graphics. Despite these advancements, some significant optical defects that characterize photographs taken with real lenses have yet to be implemented in various rendering engines.

This thesis aims to implement the optical defects of a physical lens in a virtual renderer. Specifically, the aim is to introduce first-order optical defects by manipulating the sampling pattern of the renderer to achieve a realistic appearance in both out-of-focus and in-focus planes of the rendered image.

This thesis seeks to re-implement the technology in the open-source renderer Mitsuba (https://www.mitsuba-renderer.org/), starting from a previous algorithm implementation in Blender. The resulting algorithm will be tested using objective and subjective measures to assess the quality of its renders.
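To illustrate the underlying mechanism in a renderer-agnostic way (independently of the Mitsuba implementation), the sketch below generates aperture samples and applies a first-order, linear-in-aperture offset to them, which is how defocus and tilt reshape the point spread function; the coefficients are purely illustrative.

```python
# Renderer-agnostic sketch of the core idea: each camera ray carries an aperture
# sample, and a first-order aberration can be modelled by offsetting the ray
# according to where it crosses the aperture. Coefficients are illustrative.
import numpy as np

def uniform_disk_samples(n, rng):
    """Uniform samples on the unit disk: where each ray crosses the aperture."""
    r = np.sqrt(rng.random(n))
    theta = 2.0 * np.pi * rng.random(n)
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

def first_order_offset(aperture_xy, defocus=0.05, tilt=(0.02, 0.0)):
    """First-order (linear in the aperture coordinates) image-plane offset:
    defocus scales with the aperture position, tilt adds a constant shift."""
    return defocus * aperture_xy + np.asarray(tilt)

rng = np.random.default_rng(0)
aperture = uniform_disk_samples(4096, rng)       # aperture sampling pattern
psf_points = first_order_offset(aperture)        # where the rays land on the sensor
print("PSF centroid:", psf_points.mean(axis=0))  # shifted by the tilt term
print("PSF spread  :", psf_points.std(axis=0))   # widened by the defocus term
```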

Enhancing locomotion in Virtual Reality: The Development and Analysis of Walking Seat V2

Thesis @CGVG
Tutors: Leonardo Vezzani, Francesco Strada
TAGS: VR, locomotion metaphors, leaning interfaces, input devices

Walking Seat V2 represents an advanced solution for locomotion in virtual reality (VR), utilizing seat pressure sensors for more intuitive and responsive movement.

Despite various approaches to VR locomotion discussed in literature (Locomotion Vault), the challenge remains largely unresolved. Our innovative device, the Walking Seat, is designed to improve VR navigation significantly. This thesis is dedicated to advancing this technology, focusing on critical aspects such as enhancing sensor density and refining data interpretation. A key challenge to address is distinguishing between leaning movements for navigation and object interaction within the VR space.

This research goes beyond mere development; it involves rigorous testing of the Walking Seat, alongside a comparative analysis with existing locomotion methods. Key objectives of this study include:

Additionally, this study encourages the exploration of various implementation alternatives and innovative approaches to further refine VR locomotion.

Animating Virtual Characters in Unity Using Generative AI: A Prompt-Based Approach

Thesis @CGVG, available for multiple students
Tutors: Stefano Calzolari, Andrea Bottino, Francesco Strada
TAGS: diffusion models, prompt-based generative AI, NPC animation, VR, Unity

This Master's thesis delves into the innovative intersection of generative artificial intelligence and virtual character animation within the Unity environment. The primary focus is on exploring and utilizing diffusion models for prompt-based animation generation, a cutting-edge approach in the realm of AI-driven content creation.

Key Tasks:

  1. Literature Review on Diffusion Models for Prompt-Based Animation Generation: The student will conduct a comprehensive review of existing literature. This involves studying the current state and advancements in diffusion models, specifically how they are applied to generate animations based on textual prompts. This review will help in understanding the theoretical foundation and practical applications of these models in the context of animation.

  2. Implementation of a Generative AI Solution for Unity Character Animation: The practical aspect of this thesis involves implementing a solution, potentially building upon existing models. The objective is to develop a system capable of animating a standard character in Unity based on prompts. This will require integration of AI models with the Unity engine, ensuring that the system is not only functional but also efficient and user-friendly.

Expected Outcomes:

This thesis is an opportunity to contribute to the emerging field of AI in game development and animation, offering practical experience in implementing advanced AI techniques in a popular game development platform.

MPAI-MMM: MPAI Metaverse Model Architecture

Thesis @CGVG in collaboration with MPAI consortium, available for multiple students
Tutors: Andrea Bottino, Francesco Strada
TAGS: metaverse, distributed VR, MPAI-MMM, virtual classroom

Metaverse is the name given to a collection of application contexts in which humans, represented by avatars, engage in educational, social, work-related, recreational activities, etc. MPAI (Moving Picture, Audio, and Data Coding by Artificial Intelligence), of which PoliTo is a founding member, has developed a standard for a portable avatar format (Portable Avatar Format) and a standard for metaverse architecture (MPAI Metaverse Model – Architecture). PoliTo is creating the reference code for the use case Avatar-Based Videoconference, where humans participate in a virtual conference with their portable avatars. The use case is implemented as a stand-alone solution.

The proposed thesis, however, concerns the study of an innovative teaching method carried out in the context of the MPAI Metaverse Model – Architecture in which students and the teacher attend the lesson through their portable avatars, exploiting the functionalities of the metaverse.

Details about the MPAI Metaverse standard can be found here.

The Use of Interactive Virtual Scenarios in Personnel Training and Product Presentation: An Analysis of Graphic Optimization for Different Devices

Industrial thesis @ SynArea
Academic tutor: Andrea Bottino, Francesco Strada
TAGS: VR, Training, Product presentation

Technological advancements have made it possible to use interactive virtual scenarios to simulate reality using advanced rendering and graphic techniques. These scenarios can be used in various contexts, such as product presentation and personnel training in the management of industrial machinery. However, graphic optimization for different devices is crucial to ensure a smooth and high-quality experience.

The objective of this thesis is to analyze the use of interactive virtual scenarios in personnel training and product presentation, with particular attention to graphic optimization for different devices. The expected results include highlighting the advantages and analyzing the challenges associated with the current proposal. The results obtained can be used by companies to improve the user experience and ensure good quality of interactive virtual scenarios.

Required skills: basic skills in 3D graphics, software development and game engine programming.

The activities will take place in Turin:

SynArea Consultants C.so Tortona 17

Politecnico di Torino (when possible)

Improving Training and Learning Methods in eXtended Reality

Thesis @CGVG, available for multiple students;
Tutors: Andrea Bottino, Edoardo Battegazzorre, Francesco Strada
TAGS: Mixed Reality, Animation, Training, Education

Training and learning in XR (VR/AR), in several scenarios (industrial, medical, educational), can be envisaged as a sequence of activities that can be organized into procedures. The organization of activities can differ according to the specific scenario (e.g., activities can be sequential, alternative, looped, and so on) and can generally be represented as a graph of activities.

The learning phase is then usually organized in different steps (or phases):

As said before, this structure is standard in many application fields. The general objective of this proposal is to facilitate the development of such learning programs and to improve their effectiveness.

Topic 1: research methods

An educational path can be structured through different learning methodologies, different assessment systems and different organizations of the activities (looped sequences, repeating only mistakes, and so on). However, the effectiveness of these approaches and the best combination of learning/assessment/organization methods also depends on the context where the learning/training activities take place.

The objective of this thesis is to review the state of the art to identify the most promising approaches, and to validate their effectiveness in different real-world contexts, in terms not only of knowledge and skill acquisition, but also of their retention over time.

Topic 2: software framework for rapid prototyping of MR-based learning environments

This thesis topic concerns the development of a software framework that allows the fast deployment of a learning program by defining the structure supporting activities, procedures and their scheduling, so that the designer of the educational intervention is only required to: i) create the assets to be used in the MR environment, ii) define the logic of each single activity, and iii) design the activity graph that defines the procedures.
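As a purely conceptual, engine-agnostic sketch of point iii), the snippet below models activities as graph nodes with designer-supplied logic and allowed transitions; the names and the toy procedure are hypothetical, and the real framework would live inside the game engine.

```python
# Conceptual sketch (engine-agnostic) of a procedure/activity graph a designer
# could author: activities are nodes, allowed transitions are edges.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Activity:
    name: str
    logic: Callable[[], None]                              # behaviour supplied by the designer
    next: List["Activity"] = field(default_factory=list)   # allowed transitions in the graph

def run_procedure(start: Activity,
                  choose_next: Callable[[List[Activity]], Optional[Activity]]) -> None:
    """Walk the activity graph; choose_next picks the branch to follow (None stops)."""
    current: Optional[Activity] = start
    while current is not None:
        current.logic()
        current = choose_next(current.next)

# Hypothetical three-step procedure with a possible loop back to the first activity.
check = Activity("check setup", lambda: print("checking the setup"))
perform = Activity("perform task", lambda: print("performing the task"))
verify = Activity("verify outcome", lambda: print("verifying the outcome"))
check.next, perform.next, verify.next = [perform], [verify], [check]

# Simple driver: always follow the first allowed transition, for at most two transitions.
steps = iter(range(2))
run_procedure(check, lambda options: options[0]
              if options and next(steps, None) is not None else None)
```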

Students are required to implement the framework and develop at least two different use case scenarios (in different fields, e.g., medical and industrial) to test it.

Topic 3: usability and UX of the learning environments

This thesis topic is to analyze the problem in terms of usability and User eXperience (UX), i.e., how to better deliver instructional/educational content in immersive AR/VR experiences, how to develop effective HCIs (in terms of input/output), and how to actively support users during their learning program (e.g., by adding AI-driven virtual instructors that can provide natural, "face-to-face" support to the user).

Students are required to analyze the problem, propose alternative solutions that can be implemented in the HCI, and validate them through quantitative/qualitative assessments involving a panel of users. For this task, at least one use case scenario (in any possible field…) must be developed from scratch.

Topic 4: development of an effective debriefing support

This topic is the development of a debriefing companion application, which relies on the analytics (and other data) collected during the rehearsal and evaluation sessions. The availability of a debriefing step is extremely relevant for knowledge retention, since it helps learners reflect on what they did, gain insights from their experience and make meaningful connections with the real world, thus enhancing the transfer of knowledge and skills. Even when results are not as successful as the learners hoped, debriefing can still promote active learning by helping them analyze the mistakes made and explore alternative solutions.

Students are required to analyze the problem, propose alternative solutions that can be implemented in the debriefing companion app, and validate them through quantitative/qualitative assessments involving a panel of users. For this task, at least one use case scenario (in any possible field…) must be developed from scratch.

XR framework for collaborative learning and collaborative work

Thesis @CGVG, available for multiple students;
Tutors: Andrea Bottino, Francesco Strada
TAGS: Mixed Reality, Animation, Training, Education

The goal of this work is to evaluate and implement solutions that allow simultaneous access to a three-dimensional environment in which two or more users interact with each other and with the environment. Possible application scenarios are proposed below:

  1. Use for educational purposes in a classroom, where the professor and students access the same application from different devices. A plausible scenario consists of:
     - the professor using a tablet or PC version of the application to highlight objects/points of interest;
     - a VR device used by one or more students to perform operations on three-dimensional objects within the scene;
     - a touch device (e.g., an interactive whiteboard) that allows interacting with objects in the scene and performing the same operations intended for virtual reality devices, but with a different input method (touch screen).
  2. Use for remote assistance, which allows a technician/student (equipped with smart glasses) who wants to perform work on a machine or a training activity to receive real-time assistance from a senior operator/expert, who can connect to the machine via an app (desktop or VR) and view its 3D model and related data. The senior can geographically locate the junior using the 3D model or the camera on the smart glasses and direct them to the work area. It would also be interesting to give the junior the feeling of the senior's presence.

The objective of the thesis will be to evaluate the effectiveness of the possible solutions through the implementation of different use cases in both medical and industrial scenarios.


Assigned, ongoing and deprecated theses


Innovative multimedia science-based tools for supporting policy decision making in the energy area

Thesis @CGVG, in collaboration with the Energy Center, available for multiple students
Tutors: Andrea Bottino, Francesco Strada
TAGS: What-if analysis, Virtual interactive environments, storytelling and computational narrative

The energy transition is undeferrable for humanity. The decisions of policy makers at various levels (supra-national, national, local) must be based on quantitative analyses and assessments of their impacts on environmental and socio-economic aspects. In this context, "what if" tools are needed to provide an "in silico" environment in which different strategies can be compared in terms of their impacts. Those tools should create a "virtual world" in which decision makers can immerse themselves and experience the world around them through its virtual representation. This requires both the development of innovative software tools and the design of communication languages and metalanguages.

The "technologies" to make this possible, currently developed at EST@energycenter/polito.it (Energy Security Transition lab), range from interactive web interfaces to storytelling and computational narrative. A decision theatre installed at the Energy Center of Politecnico (round screen spanning 225°, 4 class-1 Panasonic PT-RZ660 laser projectors with a resolution of 1920 x 1200 pixels each, Dolby 7.1 multichannel speaker system with a professional Denon processor, DELL Precision 7920 workstation for image generation) provides an immersive venue in which decision makers can confront their decision-making process. The student will be involved in and contribute to developing this vision, selecting one of the possible "communication technologies" and working in the facilities available at the Energy Center.

Digital Twins

Thesis @CGVG, available for multiple students; thesis in collaboration with Applied
Tutors: Andrea Bottino, Francesco Strada
TAGS: Mixed Reality, Animation, Training, Education

The Digital Twin (DT) differs from traditional simulation tools since it integrates IoT protocols that transmit synchronized data from a real machine, providing information that allows real-time monitoring and more accurate diagnoses and predictions.

The objective of the work is to evaluate and implement solutions for processing and visualizing these data within a 3D simulation, thus increasing the usability of, and the user experience with, the machine. In particular, the work will provide answers to the following challenges:

Possible application contexts, which will be defined by Applied company, are industrial plants, automotive and home automation.

Virtual Reality (VR) in blended learning settings, with a particular focus on education in the field of robotics

Industrial thesis @ SynArea
Academic tutor: Andrea Bottino
TAGS: VR, Education, Virtual Labs

Design and development of a Virtual Learning application characterized by a 3D environment in which students will be able to remotely follow the physical activities carried out in the laboratory and use interactive virtual procedures for their training at home.

In particular, the activities will be carried out using a collaborative robot present in a laboratory of the Politecnico di Torino and will consist of:

The main objective is to provide virtual solutions with which the teacher can better explain the laboratory activities, which are currently particularly limited.

Under normal conditions of access to the laboratory, these solutions will also be used to visualize the activities on the robotic cell better and more safely, as only a few students can stay near the work area.

Some technological choices will be defined during the development of the thesis.

The adopted solutions will be tested and used as a use case in the context of a more structured project, so the thesis will focus only on some of the previously listed topics, according to the student's interests.

The activities will take place in Turin:

SynArea Consultants C.so Tortona 17

Politecnico di Torino (when possible)

Virtual Upper Limb Embodiment

Thesis @CGVG in collaboration with VR@POLITO and IIT, available for multiple students
Tutors: Andrea Bottino, Fabrizio Lamberti, Giacinto Barresi (IIT)
TAGS: Virtual Reality, Virtual Embodiment, Simulation

The thesis will focus, in collaboration with IIT (Istituto Italiano di Tecnologia), on the design, the implementation, and the experimental use of a virtual setting to improve/train the integration – embodiment – of a simulated upper limb in the body scheme of an individual. Such an approach is currently explored to improve the embodiment of an actual prosthetic system, a prerequisite for promoting the actual usage of the device and reducing the risk of its abandonment.

After a brief overview of the literature related to this topic, the master candidate will develop a virtual environment based on the Unity game engine to obtain a setup for "virtual hand illusion" experiments, collecting subjective and objective data pointing at the degree of embodiment of the simulated limbs. Initially, the subjects involved in these studies will be people without impairments (evaluating their reactions as in the classic literature on embodiment phenomena). However, the recruitment of amputees could become viable during these activities: if such an opportunity arises, the student will be free to involve actual prosthetic users in the thesis work.

Depending on the experimental results, this thesis could lead to a scientific publication in peer-reviewed journals or conferences.

Gamification in Multiple Sclerosis Rehabilitation

Thesis @CGVG in collaboration with VR@POLITO and IIT, available for multiple students
Tutors: Andrea Bottino, Fabrizio Lamberti, Giacinto Barresi (IIT)
TAGS: Mixed Reality, Gamification, Rehabilitation

The thesis will focus on the development of virtual/mixed reality gamification settings to improve the engagement of people with Multiple Sclerosis during motor and cognitive rehabilitation exercises.

The activities will include the creation of interactive environments in Unity and their usage in experimental sessions for data collection and analysis, involving people with and without impairments to validate the capability of such solutions to engage the user. The candidate will collaborate with experts of IIT (Istituto Italiano di Tecnologia) and AISM (Associazione Italiana Sclerosi Multipla) to devise, implement, and test each game-based system that will be described in the thesis.

Depending on the experimental results, this could lead to a scientific publication in peer-reviewed journals or conferences. The field activities will be performed in Genova in clinical settings to directly involve the research participants.

Holo-ACLS: team-based XR training for emergency first responders

Thesis @CGVG in collaboration with the Dipartimento di Scienze Mediche, Università di Torino (Director of the Emergency Medicine Unit, MECAU, of the Molinette hospital in Torino); available for multiple students
Tutors: Andrea Bottino, Francesco Strada
TAGS: XR, Hololens, team training, adaptive learning

Holo-BLSD is a software tool for training laypersons and professionals in Basic Life Support and Defibrillation (BLSD), i.e., the sequence of actions to recognize a patient in cardiac arrest and perform first aid. The proposed tool is able to independently manage the phases of learning (i.e., teaching the BLSD procedure), training (where trainees can practice the learned concepts), and final assessment of the acquired skills.

The training content is delivered via an HMD-based (HoloLens) mixed reality (MR) application that provides an experiential learning approach by integrating a low-cost standard cardiopulmonary resuscitation (CPR) manikin with virtual digital elements (integrated into the physical environment in which the training activity is conducted) that replicate elements of the emergency scenario.

The proposed project aims to further develop and improve the current prototype version of the system with three main objectives:

Students will work with the Unity Engine, so basic knowledge of Unity, C#, and XR SDKs is required. Basic knowledge of a 3D modeling software (Blender, Maya, 3DS Max) is also advised.

MetaHumans for Unreal (and other platforms)

Thesis @CGVG, available for multiple students
Tutors: Andrea Bottino, Edoardo Battegazzorre, Francesco Strada
TAGS: Mixed Reality, CG, Animation

Virtual humans are gaining attention given their increased diffusion in real-time applications for various purposes, such as guides, companions, or non-playable characters (NPCs) in large-scale simulations. However, current tools are either not satisfactory or too costly. Recently, Unreal Engine has released MetaHumans, a free tool to rapidly create textured 3D models of human avatars that are fully rigged and equipped with standard face blend shapes.

The overall objective of this thesis is to evaluate the possibilities and limitations of the MetaHumans tool, analyzing how it can be integrated within the Unity environment. The following steps of the production pipeline should be evaluated: 3D model creation and import, animations (body and face), animation retargeting from popular human body animation libraries (e.g., Mixamo), and usage in real-time immersive environments (VR/AR).

If repetitive operations emerge in the pipeline, the student should also develop Unity interface tools (editor scripts) to automate them as much as possible. Once the entire pipeline has been evaluated, the student should apply the gained knowledge (and developed tools) to create a general-purpose library of virtual humans that can be easily exported (as a Unity package) to other projects.

Learning by making serious games

Thesis @CGVG, available for multiple students
Tutors: Andrea Bottino, Dimitar Gyaurov, Francesco Strada
TAGS: Serious Games, Learning, Game Engine, Collaboration

Playing serious games (i.e., games that have another purpose besides entertainment) is widely recognized as an effective approach for teaching a new subject, training on a complex procedure or raising awareness of complex topics (e.g., sustainability). However, we can also look at this from the opposite perspective: making serious games as a method for building knowledge on the specific topic addressed by the serious game. Moreover, making games (serious or not) in a collaborative fashion can be envisioned as a tool that provides support and helps create a communication space for kids with special needs (e.g., kids with autism or attention deficit disorder). In fact, these kids tend to spend a lot of their time playing games and, when asked to interact with one another, they mostly communicate through and about the games they play.

However, in both these scenarios (i.e., making games for learning or for interacting), some challenges arise. The main problem is that the target audience for these interventions is not acquainted with the tools generally used to create games, whether complex or simple game engines. The objective is therefore to develop a tool for creating 3D games that is simple and equipped with high-level building blocks that simplify the whole game-making process. This tool should not be a game engine developed from scratch, but rather an extension of pre-existing game engines (e.g., Unity, Google Game Builder). The developed tool should employ the same approach as Scratch, where complex behaviours are achieved by combining visual Lego-like blocks. Finally, in the case of collaborative game making, this activity (and the underlying tool) should be envisaged as a playful activity in itself (i.e., the game is making the game).

The general activities of the thesis student should be:

Developing an AR location-based serious game capable of integrating mobile and remote users

Thesis @CGVG, available for multiple students
Tutors: Andrea Bottino, Dimitar Gyaurov, Francesco Strada
TAGS: Serious Games, AR location-based games, mobile AR, integration of mobile and remote players

The objective of the thesis is the development of an AR game that combines two modalities of participation. The first modality is a location-based one, in which users have to complete problem-solving tasks by exploring an urban area, collecting information, solving quizzes, completing quests and interacting with each other. The second modality is a remote one, in which users have to participate in problem-solving tasks by solving puzzles, constructing maps, deciphering encrypted texts and interacting with each other using different platforms (VR/AR, web based).

Players have to collaborate to identify, recreate and interpret clues by engaging in three types of tasks:

Players’ actions and achievements will determine how the AR game unfolds in the style of an interactive novel, effectively making them co-authors of the story they interpret.

The underlying rationale of the game is to develop a serious game aimed at supporting (through in-game and after-game activities) different types of intervention (e.g., helping persons with dementia cope with and adapt to their condition, helping students deal with challenges and sustain high levels of academic proficiency despite life adversities), promoting interactions among different local communities/groups (e.g., patients, students, caregivers, family and community) and supporting activities outside the game (e.g., supporting families and caregivers; promoting dialogue within the family; informing counseling and coaching activities within and outside the school environment).

Students are required to develop an initial prototype implementing the core elements that will be used within the game (i.e., location-based content management and augmentation of real places, communication system and information storage, integration between mobile and remote participating users).

The use case scenario, which aims at involving a large user community with different roles, can be developed by the students, or chosen among a list of proposals.

Agent-based model for large (urban) scale simulation of pandemic spread

Thesis @CGVG, available for multiple students
Tutors: Andrea Bottino, Edoardo Battegazzorre, Francesco Strada
TAGS: Agent Based Models, Medical Simulations, Unity ECS/DOTS

The recent worldwide COVID-19 pandemic highlighted the importance of planning intervention strategies in urban areas, to safeguard the population’s health while also minimizing the impact on the economy. Agent-based simulations can be useful tools for administrators to evaluate the impact of the epidemic and the effects of different containment policies.

While several epidemic simulations can be found in the literature, some factors that proved to be critical in the spread of the COVID pandemic are still missing.

The objective of this thesis is the development of a large-scale agent-based simulation, leveraging Unity's DOTS (Data Oriented Technology Stack) and ECS (Entity Component System), which allow the development of optimized real-time simulations with numbers of agents in the order of hundreds of thousands.

The students will implement a number of features on top of a pre-existing framework. At the moment, this framework includes some basic classes of buildings (houses, offices, pubs, convenience stores, parks), a population module controlling the agents based on the BDI (Belief, Desire, Intention) model, and a customizable epidemic module.

Some of the most relevant features that need to be implemented are:

Following the development and testing of the simulation in a toy scenario, it will also be applied to a model of a real city environment (Torino), to validate the model by comparing the results with real data.
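For illustration only, the snippet below is a very coarse Python prototype of the agent-based epidemic loop (random mobility between places, place-based contagion, recovery); the actual thesis targets Unity DOTS/ECS, and all parameters here are placeholders.

```python
# Conceptual Python prototype of an agent-based epidemic loop; the thesis work
# targets Unity DOTS/ECS, and every parameter below is a placeholder.
import random

random.seed(0)
N_AGENTS, N_PLACES, STEPS = 1000, 50, 100
P_INFECT, RECOVERY_DAYS = 0.03, 14

# Each agent: epidemic state (S/I/R) and days since infection.
agents = [{"state": "S", "days": 0} for _ in range(N_AGENTS)]
agents[0]["state"] = "I"  # seed one infection

for step in range(STEPS):
    # Very coarse mobility: each step every agent visits one random place.
    occupancy = {}
    for a in agents:
        occupancy.setdefault(random.randrange(N_PLACES), []).append(a)

    # Contacts: susceptibles sharing a place with infectious agents may get infected.
    newly_infected = []
    for group in occupancy.values():
        infectious = sum(a["state"] == "I" for a in group)
        if infectious == 0:
            continue
        p = 1 - (1 - P_INFECT) ** infectious
        newly_infected += [a for a in group if a["state"] == "S" and random.random() < p]

    # Progression: infectious agents age and eventually recover, then apply new infections.
    for a in agents:
        if a["state"] == "I":
            a["days"] += 1
            if a["days"] >= RECOVERY_DAYS:
                a["state"] = "R"
    for a in newly_infected:
        a["state"], a["days"] = "I", 0

print({s: sum(a["state"] == s for a in agents) for s in "SIR"})
```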

Embodied Pedagogical Agents (EPA) for Adaptive VR Medical Emergency Training Framework

Thesis @CGVG in collaboration with VR@POLITO, available for multiple students
Tutors: Edoardo Battegazzorre, Andrea Bottino
TAGS: Intelligent Agents, Medical Simulations, VR, Adaptive Learning

Medical education is a field that encompasses many skills, including knowledge acquisition, operation of medical equipment, and development of communication skills. Virtual and mixed reality (VMR) can offer a valuable contribution to medical training, as they provide a safe and flexible environment for trainees to practice these skills. These systems do not require the physical presence of an instructor, and they are able to support institutions and learners with standardized computer-based training and automatic assessments. Furthermore, they can foster self-learning and be easily adjusted, in terms of difficulty, to suit the learning pace of students at different levels.

To fully leverage the potential of VMR digital training, one of the most prominent approaches is the Adaptive Learning philosophy. The general definition of Adaptive Learning refers to systems capable of modulating contents and pace of learning based on a User Model, which is different for every learner (A user model contains information about the person’s preferred learning style, previous knowledge, attention threshold, gender etc.).

The student will work on developing an Embodied Pedagogical Agent for a VR Adaptive Learning framework for training doctors in emergency procedures (specifically: Airway Management, Pericardiocentesis, Central Line, Chest Drainage). Embodied Pedagogical Agents (EPA) are a specific category of Intelligent Agents able to tutor, guide and assess trainees in a Virtual Reality training application. In this particular context (VR training for emergency procedures), the EPA should be able to:

The student will work on refining a pre-existing ECA (Embodied Conversational Agent) framework and its integration into the Adaptive Learning system driving the VR simulation. Students will work with the Unity Engine, so basic knowledge of Unity, C#, and XR SDKs is required. Moreover, the project will possibly involve the creation of domain-specific 3D assets and animations, so basic knowledge of 3D modeling and animation software (Blender, Maya, 3DS Max) is also advised.

Dynamic Virtual Patient Avatar for Adaptive VR Medical Emergency Training Framework

Thesis @CGVG, available for multiple students
Tutors: Edoardo Battegazzorre, Andrea Bottino
TAGS: Intelligent Agents, Medical Simulations, VR, Adaptive Learning

Medical education is a field that encompasses many skills, including knowledge acquisition, operation of medical equipment, and development of communication skills. Virtual and mixed reality can offer a valuable contribution to medical training, as they provide a safe and flexible environment for trainees to practice these skills. These systems do not require the physical presence of an instructor, and they are able to support institutions and learners with standardized computer-based training and automatic assessments. Furthermore, they can foster self-learning and be easily adjusted, in terms of difficulty, to suit the learning pace of students at different levels.

To this day, the standard approach usually still relies on classroom or on-the-job learning or interactions with human actors, the so-called «standardized patients». Virtual Patients are a novel and valid alternative to Standardized Patients that is becoming increasingly more popular as a training medium. Virtual Patients are interactive computer-based simulations capable of portraying patients and clinical scenarios in a realistic way. Patients are portrayed by Embodied Conversational Agents, virtual agents with human appearance with the ability to respond to users and engage in communication patterns typical of a real conversation.

The student will work on developing a Dynamic Virtual Patient Avatar for a VR Adaptive Learning framework for training doctors in emergency procedures (specifically: Airway Management, Pericardiocentesis, Central Line, Chest Drainage). Patient condition is the primary variable to consider when performing these procedures to determine the correct course of action. The doctor should consider the patient's anamnesis (existing and previous conditions, discovered through documents detailing the patient's history or by directly dialoguing with the patient) and the patient's current physical conditions. The student(s) will focus their efforts on the development of a dynamic patient avatar with the following characteristics:

Students will work with the Unity Engine, so basic knowledge of Unity, C#, and XR SDKs is required. The student will also need to create custom features of the avatar, such as specific blend shapes and materials, so basic knowledge of a 3D modeling software (Blender, Maya, 3DS Max) is also advised.

Always Open: development of web-AR applications for indoor and outdoor museum visits

Thesis @CGVG in collaboration with Parco Paleontologico Astigiano, available for multiple students
Tutors: Andrea Bottino, Francesco Strada
TAGS: webAR, mobile devices, location based AR

The project envisions the development of Web-AR applications, which provide access to AR content over the Internet without the need to download specific applications. These Web-AR applications will provide users with multimedia content to support museum visits.

Specifically, the main objectives of this project are the following:

Generating avatar motion from head and hands position

Thesis @CGVG, available for multiple students
Tutors: Andrea Bottino, Edoardo Battegazzorre, Francesco Strada
TAGS: Mixed Reality, CG, Animation, ML

In shared VR environments, seeing a realistic animation of the participating peers is of paramount relevance, helping increase the feeling of immersion and presence. In the case of shared environments where users are co-located in the same physical space, the availability of the full pose of the peer avatars becomes a mandatory requirement for enforcing the safety of the simulation environment and avoiding collisions with other users. However, obtaining a realistic animation would require the availability of the full pose of the users, which can only be captured with external (and often expensive) devices, thus preventing the implementation of low-cost and off-the-shelf solutions.

A possible alternative (which this thesis proposal aims to explore) is to leverage the tracking data available with current HMDs (i.e., the position and orientation of the HMD and the controllers, which can be mapped to the head and hands) to reconstruct a believable, fluid and natural animation of the full avatar body. The problem is ill-posed, since the available data are not enough to reconstruct the full degrees of freedom of a real body. However, the scope of the work is NOT to reconstruct the current posture precisely, but to estimate a posture that (i) mimics the real one in a reasonable way and (ii) evolves in a fluent and realistic way over time. One possible solution is to use Inverse Kinematics for the upper body and Machine Learning approaches to extract a plausible lower-body pose from a repository. Other solutions will be devised and investigated during the research project.
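As a sketch of the upper-body IK ingredient mentioned above, the snippet below implements a simple analytic two-bone solver that places the elbow given an assumed shoulder position and the tracked hand position; the segment lengths and the bend ("pole") direction are placeholders, and the ML-based lower-body estimation is not covered here.

```python
# Sketch of the analytic two-bone IK step for the upper body: given the tracked
# hand position and an assumed shoulder position, place the elbow. Segment
# lengths and the pole direction are placeholders.
import numpy as np

def two_bone_ik(shoulder, hand, upper_len=0.30, fore_len=0.28, pole=(0.0, 0.0, -1.0)):
    """Return an elbow position for a two-bone arm chain (analytic IK)."""
    shoulder, hand, pole = (np.asarray(v, dtype=float) for v in (shoulder, hand, pole))
    to_hand = hand - shoulder
    d = np.clip(np.linalg.norm(to_hand), 1e-6, upper_len + fore_len - 1e-6)
    axis = to_hand / (np.linalg.norm(to_hand) + 1e-9)        # shoulder-to-hand direction
    # Law of cosines: angle between the upper arm and the shoulder-hand axis.
    cos_a = np.clip((upper_len**2 + d**2 - fore_len**2) / (2 * upper_len * d), -1.0, 1.0)
    sin_a = np.sqrt(1.0 - cos_a**2)
    # Bend the elbow towards the pole vector, projected perpendicular to the chain axis.
    bend = pole - np.dot(pole, axis) * axis
    bend = bend / (np.linalg.norm(bend) + 1e-9)
    return shoulder + axis * (upper_len * cos_a) + bend * (upper_len * sin_a)

# Example: hand position from the tracked controller, shoulder roughly inferred from the HMD.
elbow = two_bone_ik(shoulder=[0.2, 1.4, 0.0], hand=[0.45, 1.1, 0.3])
print("estimated elbow position:", elbow)
```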

Link to an example on youtube: https://www.youtube.com/watch?v=SaGezfGzFQs

Improving the design and effectiveness of Virtual Patients

Thesis @CGVG, available for multiple students
Tutors: Andrea Bottino, Edoardo Battegazzorre, Francesco Strada
TAGS: Mixed Reality, Medical Learning

Today, standardized patients (SPs, i.e., actors who are instructed to represent a patient during a clinical encounter with a healthcare provider) are considered the gold standard in healthcare education for training activities such as the simulation of clinical processes, decision making, developing clinical reasoning skills, and medical communication. SPs provide students with the opportunity to learn and practice both technical and non-technical skills in an environment capable of reproducing the realism of the doctor-patient relationship. These simulated environments are less stressful for the students, who are not required to interact with a real patient, and not harmful for the patient. However, SPs are difficult to standardize, since their performance heavily depends on the actors' skills, and their recruitment and training can become very costly.

A practical alternative to SPs is represented by virtual patients (VPs), i.e., virtual agents that have a human appearance and the ability to respond to users and engage in communication patterns typical of a real conversation. They can be equipped with external sensors capable of capturing a wide range of non-verbal cues (the user's gestures and motions, expressions and line of sight) and use them to modulate the evolution of the conversation. They are cost-effective solutions, since they can be developed once and used many times. They can be deployed as in-class or self-learning tools that students can use at their own pace and in any place. Finally, VP simulations, compared to SPs, are easier to standardize. That said, the current state of the art on VPs reveals several limitations and potentially unexplored areas that these theses aim to explore.

Students will work on (and extend) a software library for the creation and management of Embodied Conversational Agents (ECAs, i.e., avatars capable of sustaining a realistic and empathetic conversation with a human being) that CG&VG is currently developing.

Topic 1. Authoring tools for VPs

Implementing VPs is a cumbersome and complicated process, which requires taking into account several different elements (Natural Language Processing, emotion modelling, affective computing, 3D animations, etc.), which, in turn, involve specific technological and technical skills. Usually, the development of a VP is a cyclical process of research, refinement and validation with experts that can take a considerable amount of time. Thus, there is a need for simple (and effective) authoring tools that allow developers to support clinical educators in rapidly designing, prototyping and deploying VPs in a variety of use cases.

Students are required to develop such an authoring tool and assess its usability with a panel of volunteers.
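
To give an idea of the expected output of such a tool, the fragment below shows one possible (purely hypothetical) declarative description of a VP scenario that an authoring interface could generate for clinical educators; all field names are illustrative and do not reflect the CG&VG library.

```python
# Hypothetical scenario description an authoring tool could produce; none of
# these fields correspond to an existing format.
vp_scenario = {
    "patient": {"name": "Case 01 - chest pain", "age": 58, "baseline_emotion": "anxious"},
    "dialogue": [
        {
            "intent": "greeting",
            "utterances": ["Good morning, doctor."],
            "emotion_shift": {"calm": +0.1},
        },
        {
            "intent": "describe_symptoms",
            "utterances": ["The pain started two hours ago, here on the left."],
            "requires": ["open_question_about_symptoms"],
        },
    ],
    "assessment": {"checklist": ["introduces_self", "asks_pain_scale", "shows_empathy"]},
}
```

The point of the authoring tool is that an educator could edit this kind of structure through a graphical interface, without touching NLP models, animation controllers or code.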

Topic 2. Detecting real users' non-verbal cues (body language, prosodic features) to drive rich emotional interactions with ECAs

The unfolding of the simulation's narrative should be dictated (in tandem) by both the user's verbal and non-verbal behaviours. To this end, VPs should fully leverage non-verbal cues as a factor that actively influences the state of the agent. For instance, the same utterance should lead to a different outcome depending on whether the user maintains eye contact with the patient, looks in another direction, fidgets, or exhibits an incoherent facial expression.
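
As a toy illustration of how such cues could modulate the agent's state (the cue names, thresholds and the "rapport" variable are assumptions, not part of the CG&VG library), consider the following sketch:

```python
# Hypothetical cue-to-state mapping: detected non-verbal cues update a rapport
# value that the dialogue manager could then use to branch the conversation.
from dataclasses import dataclass

@dataclass
class NonVerbalCues:
    eye_contact_ratio: float   # fraction of time the user looked at the VP
    fidgeting_score: float     # 0..1, e.g. derived from body-tracking jitter
    pitch_variance: float      # prosodic feature, arbitrary units

def update_rapport(rapport: float, cues: NonVerbalCues) -> float:
    """Return an updated rapport value clamped to [0, 1]."""
    delta = 0.05 if cues.eye_contact_ratio > 0.6 else -0.05
    delta -= 0.05 if cues.fidgeting_score > 0.5 else 0.0
    delta += 0.02 if cues.pitch_variance > 1.0 else -0.02
    return min(1.0, max(0.0, rapport + delta))
```

In this way, the same utterance could trigger a cooperative or a defensive reply from the VP depending on the rapport accumulated so far.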

Students are required to develop software modules capable of tackling various issues:

Deep learning approaches for video action recognition

Thesis @CGVG + @VANDAL, available for multiple students
Tutors: Andrea Bottino, Mirco Planamente, Chiara Plizzari, Barbara Caputo
TAGS: ML, Deep Learning, Domain Adaptation, Source Free Domain Adaptation, Self supervised tasks

This topic includes a list of thesis proposals related to video action recognition (either in first or third person), with a specific focus on addressing the domain shift that affects models trained on a source domain and applied to a target domain through Domain Adaptation approaches (i.e., methods that attempt, in various ways, to adapt the representation learned from the labeled "source" domain used for training to the unseen "target" domain using a set of unlabeled target data).

The full list of available topics (and their details) can be found here
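
For context, a classic way to address this setting is adversarial feature alignment through a gradient-reversal layer (DANN-style); the minimal PyTorch-style sketch below is only meant to illustrate the mechanism, and the flattening backbone is a placeholder for an actual video feature extractor.

```python
# Sketch of unsupervised domain adaptation with a gradient-reversal layer:
# the domain classifier pushes the (placeholder) backbone towards features
# that do not discriminate between source and target videos.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse (and scale) the gradient flowing back into the backbone
        return -ctx.lambd * grad_output, None

class DAVideoModel(nn.Module):
    def __init__(self, feat_dim=512, n_classes=8):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.action_head = nn.Linear(feat_dim, n_classes)  # trained on labeled source clips
        self.domain_head = nn.Linear(feat_dim, 2)          # source vs. target discriminator

    def forward(self, clips, lambd=1.0):
        feats = self.backbone(clips)
        return self.action_head(feats), self.domain_head(GradReverse.apply(feats, lambd))
```

Source-free variants (also listed among the topics) drop the source data at adaptation time and rely on self-supervised or entropy-based objectives instead.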

Multi-user (co-located) interaction paradigms in VR (VERA)

Thesis @CGVG, available for multiple students
Tutors: Andrea Bottino, Francesco Strada
TAGS: Distributed VR Environments, Collaboration in VR

We recently developed a custom framework for managing shared virtual environments where a large number of users (i.e., up to 50) are present and active simultaneously. This framework was adopted in the development of the musical performance VERA, a project that aimed to experiment with the possibilities offered by immersive VR with an extended audience of co-located (i.e., sharing the same physical space) spectators. In this experience, users shared the same virtual environment and were able to interact with it (e.g., gazing at objects or pushing buttons). Although users could virtually see each other (in the form of avatars), they could not interact with one another. The objective of this thesis is to extend our framework to also support this kind of user-to-user interaction. The thesis student is thus required to:

Automatic extraction of video analytics labels

Thesis @CGVG, available for multiple students
Tutors: Andrea Bottino, Francesco Strada
TAGS: Collaborative Mixed Reality, Signal processing, ML

The experimental evaluation of collaborative behaviours is usually carried out through manual audio/video labelling. This process is extremely time-consuming, because it requires a person to carefully watch and listen to hours of recorded material, manually annotating (i.e., labelling) specific events (e.g., people looking at each other, sharing material) or contents (e.g., what they are saying). However, these practices are extremely important for providing quantitative evidence (data) to experimentally assess the effectiveness of novel collaborative technologies (e.g., mixed reality environments). Moreover, automating these processes would allow the evaluation of collaborative behaviours in real time.

The objective of this thesis is to propose and develop a software solution capable of processing and combining data captured from multiple sources (e.g., cameras for body tracking as well as microphones for audio) to automatically detect and label the presence or absence of collaborative behaviours.
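
A very simple example of the kind of labelling rule such a system could implement (thresholds, feature names and the fusion logic are all assumptions for illustration) is sketched below: a time window is marked as collaborative when two tracked users face each other while at least one of them is speaking.

```python
# Illustrative fusion of body-tracking and audio data into a collaboration label.
import numpy as np

def facing(pos_a, fwd_a, pos_b, cos_thresh=0.9):
    """True if user A's forward (gaze) direction points towards user B's position."""
    to_b = np.asarray(pos_b, float) - np.asarray(pos_a, float)
    to_b = to_b / (np.linalg.norm(to_b) + 1e-9)
    return float(np.dot(fwd_a, to_b)) > cos_thresh

def label_window(pos_a, fwd_a, pos_b, fwd_b, voice_a, voice_b):
    """voice_a / voice_b: booleans from a voice-activity detector for the window."""
    mutual_gaze = facing(pos_a, fwd_a, pos_b) and facing(pos_b, fwd_b, pos_a)
    return "collaborative" if (mutual_gaze and (voice_a or voice_b)) else "non-collaborative"
```

Real behaviours (sharing material, turn-taking, joint attention on an object) would need richer features and possibly learned classifiers, but the structure, per-window feature extraction followed by a labelling rule or model, stays the same.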

The thesis student(s) will be required to:

AI-Driven Video Innovation: Revolutionizing Internal Communication through a Video Series Format

Thesis In collaboration with Reply S.p.A.; available for multiple students
Tutors: Andrea Bottino, Francesco Strada
Reply Tutor: Edoardo Raffaele (mailto:e.raffaele@reply.com)
TAGS: Video content, AI-powered tools, Video production, Communication strategy, AI-generated elements

In today's digital age, video content has become a powerful tool for effective communication. The rise of AI-powered tools for generating images and videos presents exciting opportunities to streamline the video production process and produce dynamic, personalized content at scale.

The project focuses on creating a new video series format, from shooting to final editing, that leverages AI-powered generation of images and videos. The project will also involve experimentation with new video ideas and strategizing the best approach to launch and communicate the new format.

Objectives:

Required skills: Video shooting, video editing, basic knowledge of audio editing

The activity will take place mainly in Turin at Reply SpA.

Patient-physician relationship in VR

Thesis In collaboration with University of Turin, Department of Neuroscience, prof. Elisa Carlino; available for multiple students
Tutors: Andrea Bottino, Francesco Strada, Elisa Carlino
TAGS: virtual production, character modeling, animated avatars, Internship

The presence of an external context can change the perception of symptoms. This phenomenon has been recognized in the medical field, where the role of the therapeutic context has been extensively documented by placebo research. Popularly known as a therapeutic effect derived from inert pills, the placebo effect is more aptly described as a “context effect” whereby internal and external variables, ranging from the physical aspect of a treatment to the physician-patient relationship, are meaningful and capable of producing remarkable clinical improvements when an inert treatment is administered. No studies have deeply investigated the effects of a virtual physician-patient interaction on healing processes and on symptom/pain perception.

In collaboration with the “ContExp Lab” of the University of Turin, this project aims to investigate the possible use of virtual reality, and of virtual physician-patient interactions, to modulate the pain experience when a treatment (real or inert) is delivered. The study aims to understand this phenomenon at the behavioral level (i.e., the level of pain experienced) and at the cerebral level, combining VR technology with cerebral recording techniques such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). In a typical neurophysiology study on the placebo effect, healthy volunteers are recruited to participate in a study where painful stimulations are experimentally delivered in a specific context (in this case, a virtual context). Participants rate the painful stimulations before and after the administration of a treatment. EEG and/or fNIRS recordings accompany the entire process in order to identify biological components related to the placebo effect.

For the thesis, the student(s) will work on developing a dynamic virtual environment with several variables to create and modulate in order to understand which are the virtual determinants of placebo effects on pain perception. Examples of such variables are: aspects of the virtual hospital in which the user will receive the treatment to reduce pain, aspects of the interaction between the virtual physician and the user, the level of empathic interaction, etc. The student(s) will focus their efforts on:

The final aim is to investigate the effects of these scenarios on pain perception, using behavioral and neurophysiological approaches in collaboration with the University of Turin.

Creation of virtual models for monitoring virtual museum visit experiences

Thesis In collaboration with Museo Nazionale Etrusco di Roma (ETRU), Unito (prof. Annamaria Berti, prof. Raffaella Ricci); available for multiple students
Tutors: Andrea Bottino, Michela Benente, Valeria Minucciani
TAGS: modelling of museum environments, virtual visit, visitor behaviour/reactions

This proposal is part of a strand of research that examines how visitors experience museum spaces, based on the assumption that visitor involvement is partly conscious and partly unconscious. Involvement is strongly influenced by the characteristics of the exhibition space, which have a positive or negative impact on the visit, and different design solutions are capable of triggering very different motor and emotional responses.

For this reason, the research will investigate how neurophysiological parameters change in relation to the exhibition space. For this project, it is thus necessary to create virtual environments that reflect both the current layout of some rooms in the Etruscan National Museum in Rome and alternative design proposals. These environments will then be used to test the responses of (virtual) visitors, both in terms of behavior and neurophysiological parameters measured with biosensors.

In particular, the main objectives of this work are the following:

MPAI-ARA: Avatar Representation and Animation standard

Thesis @CGVG in collaboration with MPAI consortium, available for multiple students
Tutors: Andrea Bottino, Francesco Strada
TAGS: virtual videoconferencing, avatar animation, body and facial animation, MPAI-ARA, distributed VR environments

Avatar Representation and Animation (ARA) is a Technical Specification being developed to provide data format specifications enabling a party to represent and animate an avatar transmitted by another independent party.

The goal is represented by the following use case, involving an avatar-based videoconference: avatars representing humans with a high degree of accuracy participate in a videoconference held in a virtual room. The virtual environment is distributed and shared among all participants. Users' avatars can be animated with available third-party AI-based facial and motion capture systems.

The virtual conference includes a virtual secretary (VS), represented as an avatar, which creates an online summary of the meeting by recording the utterances of the speakers (by means of available speech-to-text APIs).

The goal of this thesis project is to develop a prototypal implementation of the MPAI-ARA standard including:

  1. The development of a client-server architecture for managing communications and the consistency of the virtual environment state (a minimal sketch is shown after this list).
  2. The development and management of the virtual meeting rooms.
  3. The management of the avatar animations in the shared environment.
  4. The management of the VS.
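
As a minimal sketch of point 1 (assuming a simple relay server and an illustrative JSON message format, neither of which is part of the MPAI-ARA specification), the server below keeps the shared state consistent by re-broadcasting each avatar update to the other participants:

```python
# Toy relay server for avatar updates; message fields are placeholders.
import asyncio
import json

CLIENTS = set()   # connected writer streams
STATE = {}        # avatar_id -> last known pose/face payload

async def handle_client(reader, writer):
    CLIENTS.add(writer)
    try:
        while line := await reader.readline():
            msg = json.loads(line)            # e.g. {"avatar_id": ..., "pose": ..., "face": ...}
            STATE[msg["avatar_id"]] = msg     # keep the shared VE state consistent
            for w in CLIENTS:                 # relay the update to every other participant
                if w is not writer:
                    w.write(line)
                    await w.drain()
    finally:
        CLIENTS.discard(writer)

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

A real implementation would add authentication, interest management and the binary data formats defined by the standard, but the server-side state pattern stays the same.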

Required skills: Basic skills in the field of 3D graphics, software development and game engine programming.

Thesis available for multiple students

MPAI-SPG: Server-based Predictive Multiplayer Gaming standard

Thesis @CGVG in collaboration with Synesthesia and MPAI consortium, available for multiple students
Tutors: Andrea Bottino, Marco Mazzaglia, Francesco Strada
TAGS: online gaming, MPAI-SPG, authoritative servers, online cheat detection

MPAI-SPG aims to develop a standard for a software architecture that minimises the audio-visual and gameplay discontinuities caused by high latency or packet losses during an online real-time game. When information from a client is missing, the data collected from the other clients involved in a particular game are fed to an AI-based system that predicts the moves of the client whose data are missing. The same technology also addresses the need to detect whether a player is cheating. Details about the standard can be found here.
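
To make the mechanism concrete, the hedged sketch below (linear extrapolation standing in for the AI predictor, with an arbitrary tolerance for the cheating check) shows how a predicted state can both fill a missing tick and be compared with the state later reported by the client:

```python
# Placeholder predictor and cheat check; MPAI-SPG would use an AI model instead.
import numpy as np

def predict_missing_state(history, dt=1.0 / 60):
    """history: list of (position, velocity) pairs for the client whose data are missing.
    Here we simply extrapolate the last known state by one tick."""
    pos, vel = history[-1]
    pos, vel = np.asarray(pos, float), np.asarray(vel, float)
    return pos + vel * dt, vel

def cheat_score(predicted_pos, reported_pos, tolerance=0.5):
    """Deviation between predicted and reported positions, normalised by a tolerance;
    values well above 1 over many ticks may indicate tampering (threshold is an assumption)."""
    diff = np.asarray(reported_pos, float) - np.asarray(predicted_pos, float)
    return float(np.linalg.norm(diff)) / tolerance
```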

The goal of this thesis project is twofold:

  1. To work on a prototype Racing game to test the architecture of MPAI-SPG.

  2. To use the architecture as a tool to intercept cheating attempts by certain clients.

Virtual Production with Artificial Intelligence for Motion Capture in Broadcast: Benefits, Limitations, and Production Implications [Assigned]

Thesis In collaboration with Centro Ricerche, Innovazione Tecnologica e Sperimentazione; available for multiple students
Tutors: Andrea Bottino, Tatiana Mazali
TAGS: virtual production, Motion Capture, AI, Internship

Virtual production is a cutting-edge technology that combines real-time visualization with pre-visualization in the film and television industry. Through the use of virtual reality and artificial intelligence, virtual production enables more efficient and creative ways of authoring and capturing media content. The goal of this thesis is to investigate the potential of virtual production and artificial intelligence, with special focus on motion capture techniques, in the field of broadcast production. Through a literature review, case studies, and interviews with industry professionals, the benefits and limitations of using artificial intelligence for motion capture in virtual production will be explored, as well as impacts on the production value chain (costs, workflow, human resources, and skills to name a few). Among the expected outcomes of this research is a better understanding of how virtual production and AI can improve the production process, increase the accuracy and realism of motion capture, and reduce production costs while maintaining or even improving the quality of the final product.

During the internship, the student will work with qualified staff both in the Studio TV production center in Turin and in the Rai Research & Innovation Center offices.

Possible thesis topics

Type of Thesis: Experimental – Possibility of Internship

Required skills: Basic skills in the field of 3D graphics, software development and game engine programming.

Thesis available for multiple students
