- An Effective, Simple Tempo Estimation Method Based on Self-Similarity and Regularity. George Tzanetakis, Graham Percival, ICASSP 2013.
- Physical Modeling meets Machine Learning: Teaching Bow Control to a Virtual Violinist. Graham Percival, Nicholas Bailey, George Tzanetakis, Sound and Music Computing Conference 2011. pdf
- MOGCLASS: evaluation of a collaborative system of mobile devices for classroom music education. Yinsheng Zhou, Graham Percival, Xinxi Wang, Ye Wang, and Shengdong Zhao, CHI 2011. pdf *Honorable mention award*
- MOGCLASS: a collaborative system of mobile devices for classroom music education. Yinsheng Zhou, Graham Percival, Xinxi Wang, Ye Wang, and Shengdong Zhao, ACM Multimedia 2010. pdf
- MOGFUN: musical mObile group for FUN. Yinsheng Zhou, Zhonghua Li, Dillion Tan, Graham Percival, and Ye Wang, ACM Multimedia 2009. pdf
- Generating Targeted Rhythmic Exercises for Music Students with Constraint Satisfaction Programming. Graham Percival, Torsten Anders, and George Tzanetakis, ICMC 2008. pdf
- Effective Use of Multimedia for Computer-Assisted Musical Instrument Tutoring. Graham Percival, Ye Wang, and George Tzanetakis, EMME 2007. pdf
- Analysis of Saxophone Performance for Computer-Assisted Tutoring. Matthias Robine, Graham Percival, and Mathieu Lagrange, ICMC 2007. pdf
- Pedagogical Transcription for Multimodal Sitar Performance. Ajay Kapur, Graham Percival, Mathieu Lagrange, and George Tzanetakis, ISMIR 2007. pdf
- Adaptive Harmonization and Pitch Correction of Polyphonic Audio using Spectral Clustering. Mathieu Lagrange, Graham Percival, and George Tzanetakis, DAFX 2007. pdf
(PhD) Physical Modelling meets Machine Learning: Performing Music with a Virtual String Ensemble
This dissertation describes a new method of computer performance of bowed string instruments (violin, viola, cello) using physical simulations and intelligent feedback control. Computer synthesis of music performed by bowed string instruments is a challenging problem. Unlike instruments whose notes originate with a single discrete excitation (e.g., piano, guitar, drum), bowed string instruments are controlled with a continuous stream of excitations (i.e., the bow scraping against the string). Most existing synthesis methods use recorded audio samples; this works well for single-excitation instruments but not for continuous-excitation instruments.
This work improves the realism of synthesis of violin, viola, and cello sound by generating audio through modelling the physical behaviour of the instruments. A string’s wave equation is decomposed into 40 modes of vibration, which can be acted upon by three forms of external force: a bow scraping against the string, a left-hand finger pressing down, and/or a right-hand finger plucking. The vibration of each string exerts force against the instrument bridge; these forces are summed and convolved with the instrument body impulse response to create the final audio output. In addition, right-hand haptic output is created from the force of the bow against the string. Physical constants from ten real instruments (five violins, two violas, and three cellos) were measured and used in these simulations. The physical modelling was implemented in a high-performance library capable of simulating audio on a desktop computer one hundred times faster than real-time. The program also generates animated video of the instruments being performed.
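The idea of modal decomposition can be illustrated with a toy example: a plucked string synthesized as a sum of 40 damped sinusoids, each excited in proportion to its mode shape at the plucking point. This is a simplified, closed-form sketch with illustrative parameter values, not the thesis's full time-domain simulation (which drives the modes with bow and finger forces and convolves the bridge force with a measured body impulse response).

```python
import numpy as np

def modal_pluck(f0=196.0, n_modes=40, fs=44100, dur=1.0,
                pluck_pos=0.2, decay=3.0):
    """Toy modal synthesis: sum n_modes damped sinusoids whose initial
    amplitudes follow the string's mode shape at the plucking point.
    All constants here are illustrative, not measured instrument data."""
    t = np.arange(int(fs * dur)) / fs
    out = np.zeros_like(t)
    for n in range(1, n_modes + 1):
        amp = np.sin(n * np.pi * pluck_pos) / n   # mode excitation strength
        envelope = np.exp(-decay * n * t)          # higher modes decay faster
        out += amp * envelope * np.sin(2 * np.pi * n * f0 * t)
    return out / np.max(np.abs(out))              # normalize to [-1, 1]

audio = modal_pluck()   # one second of a plucked G3 string
```

A real bowed note would replace the closed-form envelopes with a sample-by-sample simulation, since the bow re-excites the modes continuously.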
To perform music with the physical models, a virtual musician interprets the musical score and generates actions which are then fed into the physical model. The resulting audio and haptic signals are examined with a support vector machine, which adjusts the bow force in order to establish and maintain a good timbre. This intelligent feedback control is trained with human input, but after the initial training is completed the virtual musician performs autonomously. A PID controller is used to adjust the position of the left-hand finger to correct any flaws in the pitch. Some performance parameters (initial bow force, force correction, and lifting factors) require an initial value for each string and musical dynamic; these are calibrated automatically using the previously-trained support vector machines. The timbre judgements are retained after each performance and are used to pre-emptively adjust bowing parameters to avoid or mitigate problematic timbre for future performances of the same music.
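The left-hand pitch correction described above is a standard PID loop. The sketch below shows the textbook controller driving a deliberately simple stand-in plant (pitch error in cents responding directly to the correction); the gains and plant are illustrative, not the thesis's calibrated values.

```python
class PID:
    """Textbook discrete PID controller, of the kind the thesis uses to
    nudge the virtual left-hand finger until the sounded pitch matches
    the score. Gains below are illustrative stand-ins."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error                    # accumulated error
        derivative = error - self.prev_error      # per-step change
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Toy plant: each control tick, the pitch moves by the correction amount.
pid = PID(kp=0.3, ki=0.02, kd=0.1)
pitch = -30.0                      # start 30 cents flat
for _ in range(200):
    pitch += pid.update(0.0, pitch)
# after 200 ticks the pitch has converged to within a fraction of a cent
```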
The system is capable of playing sheet music with approximately the same ability level as a human music student after two years of training. Due to the number of instruments measured and the generality of the machine learning, music can be performed with ensembles of up to ten stringed instruments, each with a distinct timbre. This provides a baseline for future work in computer control and expressive music performance of virtual bowed string instruments.
I am currently preparing an 8-12 page booklet about the PhD.
phd-percival.pdf (approx. 11 MB)
Additional media (audio, video, sheet music)
(Master’s) Computer-Assisted Musical Instrument Tutoring with Targeted Exercises
Learning to play a musical instrument is a daunting task. Musicians must execute unusual physical movements within very tight tolerances, and must continually adjust their bodies in response to auditory feedback. However, most beginners lack the ability to accurately evaluate their own sound. We therefore turn to computers to analyze the student’s performance. By extracting certain information from the audio, computers can provide accurate and objective feedback to students.
This thesis lays out some general principles for such projects and introduces tools to help students practice rhythms and violin intonation. The research has three distinct parts: automatic exercise creation, audio analysis, and visualization of errors. Exercises were created with Constraint Satisfaction Programming, audio analysis was performed with amplitude and pitch detection, and errors were displayed with a novel graphical interface. This led to the creation of MEAWS, an open-source program for music students.
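The pitch-detection step can be sketched with a minimal autocorrelation estimator: find the lag at which a frame of audio best matches a shifted copy of itself, and report the corresponding frequency. This is a generic illustration of the kind of analysis involved, not the exact algorithm used in the thesis.

```python
import numpy as np

def detect_pitch(signal, fs, fmin=60.0, fmax=2000.0):
    """Estimate the fundamental frequency of one audio frame by picking
    the autocorrelation peak within a plausible lag range. A minimal
    sketch; real systems add windowing, thresholds, and octave checks."""
    sig = signal - np.mean(signal)
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # lag range for fmin..fmax
    lag = lo + np.argmax(corr[lo:hi])
    return fs / lag

fs = 44100
t = np.arange(2048) / fs                       # one short analysis frame
tone = np.sin(2 * np.pi * 441.0 * t)           # test tone (441 Hz, exact lag 100)
f0_est = detect_pitch(tone, fs)
```

Comparing `f0_est` against the expected score pitch gives the intonation error that the graphical interface would display to the student.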
At UVic, the thesis defense begins with a 15-20 minute presentation. Here are the slides I used: camit-talk.pdf
masters-percival.pdf (approx. 1 MB)