Artifastring ("artificial fast string") is a highly optimized physical simulation of a violin for sound synthesis.
The latest tarball release can be downloaded from: http://percival-music.ca/artifastring/artifastring-latest.tar.gz
Development takes place at: https://github.com/gperciva/artifastring
Requires a typical development environment, including g++, plus some additional packages.
If you do not want to use the autotools build system, you can still compile against the .h files directly; this is demonstrated in the example code.
The format of the ".actions" file is described in the file format section.
Video can be produced by passing --enable-blender during the ./configure stage. Individual pictures are produced with actions2images.py; these pictures can then be combined with the audio.
There are 3 preset levels of image quality. Most machines can process multiple threads at once; the speed of rendering images can be increased by manually splitting the range of frames and calling actions2images.py multiple times (e.g. actions2images.py -q 1 -s 1 -e 100 and actions2images.py -q 1 -s 101 -e 200). For more information, check the --help of both scripts.
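The manual frame-range splitting described above can be automated. The sketch below is a hypothetical helper, not part of Artifastring; the -q/-s/-e flags come from the example above, but passing an actions filename as a trailing argument is my assumption.

```python
# Hypothetical helper: split an inclusive frame range into contiguous chunks,
# one per parallel invocation of actions2images.py. Not part of Artifastring.

def split_frames(start, end, jobs):
    """Split the inclusive frame range [start, end] into `jobs` contiguous chunks."""
    total = end - start + 1
    base, extra = divmod(total, jobs)
    chunks = []
    s = start
    for i in range(jobs):
        size = base + (1 if i < extra else 0)
        if size == 0:
            break
        chunks.append((s, s + size - 1))
        s += size
    return chunks

def commands(actions_file, start, end, jobs, quality=1):
    """Build one actions2images.py command line per chunk (CLI shape assumed)."""
    return [
        f"actions2images.py -q {quality} -s {s} -e {e} {actions_file}"
        for s, e in split_frames(start, end, jobs)
    ]

if __name__ == "__main__":
    for cmd in commands("test.actions", 1, 200, 2):
        print(cmd)
```

Each printed command can then be run in a separate shell (or backgrounded with &) so the renders proceed in parallel.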
I recommend generating a 30-second test file:
This tests many aspects of the physical modelling. Be aware that the bowing parameters (notably bow velocity and bow force) are not tweaked, so the violin control will be fairly poor.
Artifastring is discussed extensively in my PhD dissertation:
The fundamental model was described by Demoucron, with a few changes to avoid undesirable behaviour (described in my PhD).
Despite the recent dates, this model is relatively old (early 1990s?) and does not represent the most accurate simulation known to researchers. In particular, the friction of bowing is estimated with a hyperbolic curve, rather than with a more accurate double-exponential curve or with the hysteresis behaviour due to the rosin melting and cooling during bowing.
This model was not chosen for accuracy; rather, for being simple yet still "good enough" for our purposes. Please don't criticize it for being "not sufficiently accurate" for whatever you want to do.
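The hyperbolic friction curve mentioned above can be sketched numerically. This is an illustrative Friedlander-style curve only, with placeholder constants (mu_s, mu_d, v0); it is not Artifastring's actual implementation or its actual parameter values.

```python
# Illustrative sketch of a hyperbolic bow-friction curve (Friedlander-style).
# NOT Artifastring's actual code; mu_s, mu_d, and v0 are placeholder values.

def hyperbolic_friction(delta_v, mu_s=0.8, mu_d=0.3, v0=0.2):
    """Friction coefficient as a function of relative bow-string speed.

    At delta_v == 0 (sticking) the coefficient equals the static value mu_s;
    as |delta_v| grows it decays hyperbolically toward the dynamic value mu_d.
    v0 controls how quickly the curve falls off (the "slope").
    """
    return mu_d + (mu_s - mu_d) * v0 / (v0 + abs(delta_v))

if __name__ == "__main__":
    for dv in (0.0, 0.1, 1.0):
        print(f"delta_v={dv:.1f}  mu={hyperbolic_friction(dv):.3f}")
```

The double-exponential and hysteresis models mentioned above would replace this single smooth curve with a velocity- and temperature-dependent one.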
This project would not have been possible without initial work by my supervisor, Dr. Nick Bailey, in understanding and explaining the model (in language, diagrams, and code). Many thanks also to Dr. John Williamson and Dr. Martin Macauley for clarifying some more abstract parts of the model.
Many thanks also to Matthias Demoucron for clarifying some physical constants (especially the slope of friction characteristics v0) in personal email.
The general syntax of each line is:
Lines must be divided with tabs, with one command per line. A line beginning with a # hash is a comment.
There are four types of commands. All commands begin with the
seconds is the absolute time since the beginning of the audio, not the relative time since the last command.
See files in the
actions/ directory for examples.
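The parsing rules above (tab-separated fields, one command per line, # for comments, absolute times in seconds) can be sketched as follows. This is a hypothetical reader, not Artifastring's parser, and it does not assume any particular command names or field order beyond what the text states.

```python
# Hypothetical sketch of reading a ".actions" file, based only on the rules
# stated above: one tab-separated command per line, lines starting with "#"
# are comments. Command names and field order here are NOT from Artifastring.

def parse_actions(text):
    """Return a list of field tuples, skipping comments and blank lines."""
    events = []
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue
        events.append(tuple(line.split("\t")))
    return events

if __name__ == "__main__":
    sample = "# a comment\nbow\t0.5\t1\t0.2\n\nfinger\t1.0\t2\t0.1\n"
    for event in parse_actions(sample):
        print(event)
```

Remember that times are absolute seconds since the beginning of the audio, so events need not be sorted relative to the previous command's time.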
To compile (after installing):
SWIG bindings are built automatically if possible.
Short answer: GNU GPL 3.0+
Copyright 2010--2013 Graham Percival

Artifastring is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

Artifastring is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with Artifastring. If not, see <http://www.gnu.org/licenses/>.
It would be nice to extend the physical modelling algorithm to use (at least) two bow positions (to accommodate bow width), and two finger forces during bowing. I hope to have the latter working in the next few weeks.
More instruments would be fantastic, especially double bass. If I feel inspired then I might try renting a double bass for a week so I can take the required measurements.
In general, though, my attention is on virtual musicians to control the physical model. This takes place in a separate software project: http://percival-music.ca/vivi.html