Usage

1. User input: example-input.ly

Vivi currently reads music written in LilyPond. For example, LilyPond engraves the following input as sheet music:

\relative c' {
  \set Staff.instrumentName = "violin-1"
  \key d \major
  \tempo 4 = 72

  a4\f d fis8-. a-. r4
  d16( cis b a) g4 \breathe e8\p( g) fis4
}
Example of LilyPond input

(Adding other input formats would not be hard... LilyPond already includes a MusicXML → .ly converter!)

2. Automatic: .notes file

Vivi translates the above music into her internal representation:

0.000   note     57       4   p-c 2 12
0.000   dynamic  f
0.250   note     62       4   p-c 7 12
0.500   note     66       8   p-c 9 12
0.625   note     69       8   p-c 14 12
0.750   rest     0        4
1.000   note     74       16  p-c 2 13
1.000   slur     -1
1.0625  note     73       16  p-c 7 13
1.1250  note     71       16  p-c 11 13
1.1875  note     69       16  p-c 13 13
1.1875  slur     1
1.2500  note     67       4   p-c 16 13
1.2500  script   staccato
...
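This is a simple whitespace-separated format: each line gives a start time, an event type, and type-specific fields. The pitch column matches MIDI note numbers (57 is A3, the first note of the example above), and the next column looks like a duration denominator; the "p-c" fields appear to be internal placement data, which I leave uninterpreted. A minimal parser might look like this (the field names are my assumptions, not Vivi's):

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: float      # start time of the event
    kind: str        # "note", "rest", "dynamic", "slur", "script", ...
    args: list[str]  # remaining type-specific fields, left unparsed

def parse_notes(text: str) -> list[Event]:
    """Parse a Vivi .notes file: one whitespace-separated event per line."""
    events = []
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0] == "...":
            continue  # skip blank lines and trailing "..." markers
        events.append(Event(float(fields[0]), fields[1], fields[2:]))
    return events

sample = """\
0.000   note     57       4   p-c 2 12
0.000   dynamic  f
0.750   rest     0        4
"""
for ev in parse_notes(sample):
    print(ev.time, ev.kind, ev.args)
```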

3. Automatic: .wav file

Vivi translates the music into physical actions, then begins simulating a violin with Artifastring. Approximately every 5 milliseconds, Vivi analyzes the output sound and adjusts the physical parameters if she recognizes any problems with the sound.

All sound analysis is done with Marsyas. The analysis makes sound generation considerably slower than with Artifastring alone, but Marsyas is quite fast – the whole process is still faster than realtime.
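The synthesize-analyze-adjust loop can be sketched as a toy example. Everything below is hypothetical stand-in code (the real system drives Artifastring and Marsyas, not these functions); it only illustrates the shape of the loop: synthesize a short chunk, judge it, and nudge a physical parameter when a problem is detected.

```python
HOP = 0.005  # seconds between analyses (roughly 5 ms, as described above)

def synthesize(params, duration):
    """Stand-in for Artifastring: return fake audio samples for one hop."""
    return [params["force"]] * int(duration * 1000)

def analyze(chunk):
    """Stand-in for Marsyas: flag the sound as bad if the force is too high."""
    return "too_scratchy" if max(chunk) > 1.0 else "ok"

def adjust(params, verdict):
    """Lower the bow force slightly whenever the sound is scratchy."""
    if verdict == "too_scratchy":
        params = dict(params, force=params["force"] * 0.9)
    return params

def perform_note(duration, params):
    """Synthesize one note, re-analyzing and adjusting every HOP seconds."""
    audio = []
    for _ in range(round(duration / HOP)):
        chunk = synthesize(params, HOP)
        audio.extend(chunk)
        params = adjust(params, analyze(chunk))
    return audio, params

audio, final = perform_note(0.25, {"force": 1.5})
```

After a few hops the toy "bow force" has been pushed back under the scratchiness threshold, which is the essence of the feedback loop: the sound is corrected mid-note rather than per-note.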

example-input.wav.mp3

4. Automatic: .actions file

While performing the music in step 3, Vivi writes a file containing her physical actions:

f  0       0  0.109
b  0       0  0.08  2.13  0.029  0.1
b  0.0058  0  0.08  2.13  0.029  0.1002
b  0.0058  0  0.08  2.12  0.058  0.1002
b  0.0116  0  0.08  2.11  0.058  0.1007
...

Lines beginning with f give the finger position on a string; lines beginning with b give the bow-bridge distance, bow force, bow velocity, and the contact point's distance along the bow.
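A minimal reader for this format follows, assuming the column meanings just described (the dictionary keys and exact units are my labels, not Vivi's):

```python
def parse_actions(text):
    """Parse a Vivi .actions file into finger events and bow events."""
    fingers, bows = [], []
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0] not in ("f", "b"):
            continue  # skip blank lines and trailing ".." markers
        if fields[0] == "f":
            time, string, position = map(float, fields[1:4])
            fingers.append({"time": time, "string": int(string),
                            "finger_position": position})
        else:
            time, string, distance, force, velocity, along_bow = \
                map(float, fields[1:7])
            bows.append({"time": time, "string": int(string),
                         "bow_bridge_distance": distance, "force": force,
                         "velocity": velocity, "along_bow": along_bow})
    return fingers, bows

sample = """\
f  0       0  0.109
b  0       0  0.08  2.13  0.029  0.1
b  0.0058  0  0.08  2.13  0.029  0.1002
"""
fingers, bows = parse_actions(sample)
```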

5. Automatic: .mpeg file

Vivi uses Blender to render a movie of her violin playing.

At the moment, this movie is in the "proof of concept" stage – it’s enough to see the bowing, but that’s it. The focus of Vivi so far has been on the bowing control. A simple improvement would be to show the hand position and actual fingering, instead of merely the position of the highest finger.

example-input.mpeg

6. User: enjoy your music!

Other than writing the sheet music and the training, the entire process is automatic. All sound and video are produced with no direct human involvement.

Note that the human involvement need not come from the same person:

  • one person could train Vivi to recognize bad sounds,
  • a second person could compose a piece of music,
  • a third person could edit the bowings, dynamics, and articulations, and
  • a fourth person could create alternate bowings, dynamics and articulations!