
Notochord (Documentation | Paper | Video)

Image: Max Ernst, Stratified Rocks, Nature's Gift of Gneiss Lava Iceland Moss 2 kinds of lungwort 2 kinds of ruptures of the perinaeum growths of the heart b) the same thing in a well-polished little box somewhat more expensive, 1920

Notochord is a neural network model for MIDI performances. This package contains the training and inference model implemented in PyTorch, as well as interactive MIDI processing apps built with iipyper.

API Reference

Getting Started

Using your Python environment manager of choice (e.g. virtualenv, conda), make a new environment with a Python version of at least 3.10. Then run pip install notochord.
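For example, using Python's built-in venv module (the environment name here is arbitrary):

python3.10 -m venv notochord-env
source notochord-env/bin/activate
pip install notochord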

For developing notochord, see our dev repo.

Install fluidsynth (optional)

fluidsynth is a General MIDI synthesizer, which you can install with your system package manager. On macOS:

brew install fluidsynth
fluidsynth needs a soundfont to run, such as this one: https://drive.google.com/file/d/1-cwBWZIYYTxFwzcWFaoGA7Kjx5SEjVAa/view

You can then run fluidsynth in a terminal. For example:

fluidsynth -v -o midi.portname="fluidsynth" -o synth.midi-bank-select=mma ~/'Downloads/soundfonts/Timbres of Heaven (XGM) 4.00(G).sf2'

Notochord Homunculus

Notochord includes several iipyper apps which can be run in a terminal. They have a clickable text-mode user interface and connect directly to MIDI ports, so you can wire them up to your controllers, DAW, etc.

The homunculus provides a text-based graphical interface to manage multiple input, harmonizing, or autonomous Notochord voices:

notochord homunculus
You can set the MIDI input and output ports with --midi-in and --midi-out. If you use a General MIDI synthesizer like fluidsynth, you can add --send-pc to also send program change messages. More information can be found in the Homunculus docs, or by running notochord homunculus --help.

If you are using fluidsynth as above, try:

notochord homunculus --send-pc --midi-out fluidsynth --thru

Note: on Windows, there are no virtual MIDI ports and no system MIDI loopback, so you may need to attach some MIDI devices or run a loopback driver like loopMIDI before starting the app.

If you pass homunculus a MIDI file using the --midi-prompt flag, it will play as if continuing from the end of that file.

Adding the --punch-in flag will automatically switch voices to input mode when MIDI is received, and back to auto mode after some time passes.
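For example, combining both flags (the path to the MIDI file is a placeholder):

notochord homunculus --midi-prompt ~/prompts/my-performance.mid --punch-in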

Python API

See the docs for Notochord.feed and Notochord.query for the low-level Notochord inference API, which can be used from Python code. notochord/app/simple_harmonizer.py provides a minimal example of how to build an interactive app.
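As a minimal sketch of the feed/query loop (the checkpoint path is a placeholder, and the exact signatures and returned fields should be checked against the API docs):

from notochord import Notochord

# Load a trained model; the checkpoint path here is a placeholder.
noto = Notochord.from_checkpoint('notochord-latest.ckpt')
noto.reset()

# Feed an event that has already happened: General MIDI instrument 1
# (grand piano), middle C, velocity 99, zero seconds since the last event.
noto.feed(inst=1, pitch=60, time=0.0, vel=99)

# Query the model for a prediction of the next event.
event = noto.query()
print(event)  # predicted instrument, pitch, time and velocity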

OSC server

You can also expose the inference API over Open Sound Control:

notochord server
This will run Notochord and listen continuously for OSC messages.
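As an illustrative sketch of talking to the server from Python with the python-osc package (the port and OSC addresses below are assumptions, not the documented interface; check notochord server --help and the server docs for the actual values):

from pythonosc.udp_client import SimpleUDPClient

# The port and addresses here are assumptions for illustration only.
client = SimpleUDPClient('127.0.0.1', 9999)

# Feed an event to the model: instrument, pitch, time delta, velocity.
client.send_message('/notochord/feed', [1, 60, 0.0, 99])

# Request a prediction for the next event. The reply comes back as an
# OSC message, so you also need an OSC receiver (not shown) to read it.
client.send_message('/notochord/query', [])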

Tidal interface

See notochord/tidalcycles in the iil-examples repo (updated examples coming soon):

Add Notochord.hs to your Tidal boot file. You will probably want to replace the tidal <- startTidal line with something like:

:script ~/iil-examples/notochord/tidalcycles/Notochord.hs

let sdOscMap = (superdirtTarget, [superdirtShape])
let oscMap = [sdOscMap,ncOscMap]

tidal <- startStream defaultConfig {cFrameTimespan = 1/240} oscMap

In a terminal, start the Python server as described above.

In SuperCollider, step through examples/notochord/tidalcycles/tidal-notochord-demo.scd, which will receive from Tidal, talk to the Python server, and send MIDI on to a synthesizer. There are two options: either send to fluidsynth to synthesize General MIDI, or specify your own mapping of instruments to channels and send on to your own DAW or synth.

Training

To train your own model, first preprocess the data:

python notochord/scripts/lakh_prep.py --data_path /path/to/midi/files --dest_path /path/to/data/storage

Then launch a training job:

python notochord/train.py --data_dir /path/to/data/storage --log_dir /path/for/tensorboard/logs --model_dir /path/for/checkpoints --results_dir /path/for/other/logs train

Progress can be monitored via TensorBoard.
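For example, pointing TensorBoard at the log directory used above:

tensorboard --logdir /path/for/tensorboard/logs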