MUSIC4D: Advanced Training Course on AI and Digital Music Returns

Robotics, audio computing, and generative artificial intelligence: the MUSIC4D project relaunches its international laboratory dedicated to advanced music technologies, aiming to engage a potential community of over 4,000 students, faculty members, and technical-administrative staff from Conservatories.

The Advanced Training course "Artificial Intelligence and Digital Music," promoted within the framework of the MUSIC4D project, is now back in session. The program focuses on emerging technologies applied to sound production. Launched on February 19, the initiative offers a series of synchronous and asynchronous sessions designed for the academic and artistic community, with the goal of developing skills in advanced sound processing, computer-assisted composition, and AI applications across the music sector. The objective is to reach an audience of 4,000 participants, making technological excellence a shared asset for the entire community.

As scheduled, activities will resume on Thursday, March 20, with Professor Francesco Pupo delivering an introductory lecture aimed at framing the course content and preparing participants for the upcoming in-depth sessions. The program will officially continue on March 24 and run through May 21, combining theoretical study with hands-on applications.

Over the coming weeks, participants will explore a wide range of research and experimental areas, including audio signal analysis, sound synthesis and processing systems, computer-assisted composition, and the application of artificial intelligence in creative and production workflows. Teaching will be led by Francesco Pupo, Riccardo Sarti, Sandro Mungianu, Luca Bimbi, Francesco La Camera, Caterina Perri, and Rashmi Chawla—scholars and researchers specializing in electronic music, sound technologies, and AI-driven systems. Their contribution ensures an interdisciplinary approach, providing both methodological and operational tools to understand the evolution of digital technologies in sound.

Through this initiative, MUSIC4D further strengthens its role as a hub for technological innovation, contributing to the development of new educational models that connect scientific research, creativity, and evolving competencies within Conservatories.

ACCESS THE COURSE AT:
https://mooc.unical.it/course/index.php?categoryid=8

TRAINING PROGRAM

Content, Structure, and Schedule of Educational Activities

1. FUNDAMENTALS OF ARTIFICIAL INTELLIGENCE, INTELLIGENT SYSTEMS AND ROBOTICS (March)

The course begins under the guidance of Francesco Pupo, focusing on the theoretical foundations of generative AI.

  • March 20 – 5:30 PM – From Neural Networks to Deep Learning Models
  • March 24 – 5:30 PM – Generative Architectures and Transformer Models
  • March 27 – 5:30 PM – Reinforcement Learning

2. AUDIO SIGNAL FUNDAMENTALS, ANALYSIS AND PROCESSING, SOUND SYNTHESIS TECHNIQUES (Late March – Mid-April)

In this phase, Sarti, La Camera, and Bimbi introduce the technical foundations of sound.

  • March 30 – 5:30 PM – Introduction to Signals and the Electroacoustic Chain
  • March 31 – 5:30 PM – Sampling and Quantization
  • April 1 – 5:30 PM – Frequency Analysis and Domain Transformations
  • April 13 – 5:30 PM – STFT (Short-Time Fourier Transform), Windowing and Audio Coding
  • April 14 – 5:30 PM – Digital Filters
  • April 15 – 5:30 PM – First- and Second-Order Filters
  • April 16 – 5:30 PM – Advanced Filters
  • April 17 – 5:30 PM – Audio Effects and Dynamics Processing
  • April 20 – 5:30 PM – Spectral and Modulation Synthesis
  • April 21 – 5:30 PM – Physical Modeling Synthesis and the Karplus–Strong Algorithm
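To give a flavor of the module's final topic, here is a minimal sketch of the Karplus–Strong plucked-string algorithm: a delay line is seeded with a noise burst, and each output sample is fed back through a two-point average that acts as a low-pass damping filter, producing a string-like tone whose pitch is set by the delay-line length. This is an illustrative example only, not material from the course itself; the function name and parameters are our own.

```python
from collections import deque
import random

def karplus_strong(freq, sr=44100, dur=1.0, decay=0.996):
    """Minimal Karplus-Strong plucked-string synthesis sketch.

    freq  -- target fundamental frequency in Hz (sets delay-line length)
    sr    -- sample rate in Hz
    dur   -- output duration in seconds
    decay -- per-sample feedback gain (< 1 shortens the ring-out)
    """
    n = max(2, int(sr / freq))                  # delay-line length ~ sr/freq
    buf = deque(random.uniform(-1.0, 1.0) for _ in range(n))  # noise excitation
    out = []
    for _ in range(int(sr * dur)):
        first = buf.popleft()
        out.append(first)
        # Two-point average low-pass filters the feedback, damping
        # high frequencies faster than the fundamental.
        buf.append(decay * 0.5 * (first + buf[0]))
    return out
```

Writing the returned samples to a WAV file (e.g. with Python's standard `wave` module) yields a recognizably plucked-string tone near the requested frequency.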

3. MUSIC SYNTHESIS AND COMPUTER-ASSISTED COMPOSITION (Late April – Mid-May)

The focus shifts to sound creation and integration, led by Perri, Pupo, Mungianu, and Chawla.

  • April 28 – 5:30 PM – Prompt Engineering: Fundamentals and Advanced Techniques
  • April 29 – 5:30 PM – Computer-Assisted Composition and AI in Music
  • April 30 – 5:30 PM – AI as Support for the Compositional Process
  • May 4 – 5:30 PM – Virtual Orchestration
  • May 5 – 5:30 PM – Frame-by-Frame Method for Music and Visuals
  • May 6 – 5:30 PM – Audio-Video Workflow in Professional DAWs
  • May 14 – 5:30 PM – TuttiBot – Evaluation Tool
  • May 15 – 5:30 PM – Human-Robot Interaction (HRI)

4. PROJECT WORK: AI-BASED MUSIC SYSTEM (Second Half of May)

The final phase includes an intensive workshop coordinated by Pupo, Sarti, and Mungianu, focusing on system design, AI core development, audio integration, and final rendering.

  • May 18 – 5:30 PM – Concept Definition and System Design
  • May 19 – 5:30 PM – AI Core Development and Audio Integration
  • May 20 – 5:30 PM – Optimization and Final Rendering
  • May 21 – 5:30 PM – Final Presentation and Review

ACADEMIC AND SCIENTIFIC TEAM

Profiles, Expertise and Research Areas

  • Francesco Pupo – Professor at the University of Calabria, expert in intelligent systems and robotics, and Scientific Director of MUSIC4D, focusing on the integration of generative AI and musical creative processes.
  • Caterina Perri – Researcher and engineer at the University of Calabria, specializing in AI and education; works on real-time interaction platforms between humans, robotics, and IoT devices.
  • Sandro Mungianu – Multimedia professor at the Conservatory of Cagliari, expert in computer-assisted composition and advanced audio-video production.
  • Riccardo Sarti – Electronic engineer and musician, professor of Music Informatics at the Conservatory of Sassari, with extensive experience as a sound engineer and performer in contemporary and electroacoustic music.
  • Luca Bimbi – Sound designer, audio engineer, and electroacoustics lecturer, author of major technical guides on Logic Pro and Csound, with a strong background in international music production.
  • Francesco La Camera – Expert in audio restoration and sound heritage preservation, with over thirty years of experience and collaborations including orchestral sound engineering for Andrea Bocelli.
  • Rashmi Chawla – Professor at JC Bose University (India), expert in Human-Robot Interaction (HRI), and coordinator of “TuttiBot,” an advanced evaluation and monitoring tool within MUSIC4D.