WebAudio: audio effects and musical instruments in the browser
From JavaScript to WebAssembly, from simple examples to complete reproductions of commercial synthesizers and production-quality effects
-
Title
–
WebAudio: audio effects and musical instruments in the browser
-
Examples of demos/videos we can show:
-
Organizers
–
Michel Buffa
is a professor/researcher at Université Côte d'Azur and a member of the WIMMICS research group, a joint INRIA/CNRS team. He is responsible for the WASABI ANR research project (Web Audio Semantics
Aggregated in the Browser for Integration) and a member of the W3C WebAudio working group. He has developed many
WebAudio applications, including a real-time tube guitar amplifier simulation presented at the last international
WebAudio Conference (best paper presentation), many real-time guitar effects, and a multitrack player/DAW. He
wrote a whole module about WebAudio in the MOOC HTML5 Apps and Games he created for W3C on the edX platform
at MIT/Harvard. He has published research papers at many international conferences about the Web and the Semantic
Web.
Yann Orlarey
is a French composer and computer music researcher. He is currently scientific director at GRAME. His research
work focuses on real-time architectures for music systems and on programming languages for music composition and
audio processing. He has authored or co-authored more than 60 scientific papers and several pieces of music software.
He is the designer and co-developer of FAUST (Functional Audio Stream), a functional programming language specifically
designed for real-time signal processing and synthesis. His current research interests include efficient compilation
and parallelization of signal processing programs, end-user programming, and the preservation of signal processing
programs using formal mathematical techniques.
Stéphane Letz
, from GRAME National Center for Musical Creation, is a research engineer who graduated from the INSA Lyon engineering school.
His research interests relate mainly to formal languages for musical composition and architectures for musical
systems. He is a co-author of various systems and pieces of musical software such as MidiShare and Elody. He developed
the multiprocessor-aware version of the JACK low-latency audio server on OSX, Linux, Windows and Solaris.
He is a co-author of the FAUST functional programming language for sound synthesis and audio processing. He received
the best demo presentation award for his work "Compiling Faust audio DSP code to WebAssembly" at the
2017 WebAudio Conference.
-
Abstract
– The W3C WebAudio API is now a Candidate Recommendation and proposes a set of high-level nodes
for building an "audio processing graph". The native implementation of these nodes in the browser
allows a whole range of applications to be developed by assembling them (gain, filters, waveshapers, delay,
stereo panner, etc.), and developers have managed to write some impressive applications (digital audio workstations,
real-time audio effects, tube guitar amp simulations, synthesizers, real-time 3D sound spatialization, etc.).
However, there was no good way to perform low-level processing (the now obsolete ScriptProcessorNode was designed
for this purpose and had many flaws) until the recent addition of the AudioWorklet node, the latest
inclusion in version 1 of the WebAudio API.
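As an illustration, a few lines of JavaScript are enough to assemble such a graph. The sketch below is only indicative; the sample URL "guitar.wav" and the parameter values are placeholders, not part of any particular demo.

const ctx = new AudioContext();

async function playWithEffects() {
  // Fetch and decode a sample (the URL is a placeholder).
  const response = await fetch('guitar.wav');
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;

  // High-level nodes implemented natively by the browser.
  const filter = ctx.createBiquadFilter();   // 'lowpass' by default
  filter.frequency.value = 2000;

  const delay = ctx.createDelay();
  delay.delayTime.value = 0.25;

  const gain = ctx.createGain();
  gain.gain.value = 0.8;

  const panner = ctx.createStereoPanner();
  panner.pan.value = -0.3;

  // Assemble the audio processing graph and start playback.
  source.connect(filter).connect(delay).connect(gain).connect(panner).connect(ctx.destination);
  source.start();
}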
The tutorial will last half a day. The first half will present the high-level nodes and the
way they can be used for synthesizing music and for real-time audio processing. We will also show proposals
for an "audio plugin" design ("VST/VSTi plugins for WebAudio", in other words) that
could be used for WebAudio version 2. There is a crucial need for such a plugin architecture.
The second half of the tutorial will present the new AudioWorklet node and show how low-level processing can be achieved,
for example by using the FAUST language (a domain-specific language for sound synthesis and processing) to inject
low-level WebAssembly code into it. Several working examples will be designed (sound synthesis, physical modeling,
and audio processing…). Benchmarks and comparisons across the different available browsers will be presented.
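To give an idea of the programming model, here is a minimal AudioWorklet sketch: a plain white-noise generator written directly in JavaScript, standing in for the WebAssembly code that the FAUST compiler would generate. File and processor names are placeholders.

// noise-processor.js — runs in the audio rendering thread.
class NoiseProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const output = outputs[0];
    for (const channel of output) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1;  // sample-by-sample processing
      }
    }
    return true;  // keep the processor alive
  }
}
registerProcessor('noise-processor', NoiseProcessor);

// main.js — runs in the main thread (as an ES module).
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('noise-processor.js');
const noise = new AudioWorkletNode(ctx, 'noise-processor');
noise.connect(ctx.destination);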
At the end of both sessions, we will present a summary of good practices and sum up the current
state of WebAudio (achievements, limitations) and the plans for its future (audio plugins, breaking the low-latency
barrier on some configurations and operating systems, etc.).
-
Topic and Relevance
– WebAudio v1 is in its last stage before becoming a frozen standard, and the recent addition of a means
to do low-level processing makes it a valuable alternative to native frameworks (VST, JUCE, Audio Units,
LV2, etc.). The authors of this tutorial have been involved in the WebAudio standard since the very beginning
(they published at the first three WebAudio Conferences, and some are members of the W3C WebAudio working group), and they developed
applications that "pushed the limits" of what WebAudio can achieve, provided feedback and measurements
to the implementers, etc. They are involved in large-scale projects (the WASABI ANR project with IRCAM, DEEZER,
Radio France and INRIA/CNRS; the FAUST authors belong to GRAME, a leading French laboratory for audio/acoustics).
The tutorial will present the evolution of WebAudio v1 from its creation to its current state (standardization),
and show examples of what could be included in a future v2.
-
Duration and Sessions
– Half day.
-
Audience
– Anybody interested in Web development.
-
Previous Editions
– No previous edition. Michel Buffa wrote a module in the MOOC HTML5 Apps and Games, which has had several
thousand registered users since 2015.
-
Tutorial Material
Slides here. Click on the pictures in the slides to run the examples and see the source code.
-
Equipment
– Good speakers! Good WiFi connection.