Personal Projects:

Sometimes I tinker with code in my spare time – it may be a chance to learn a new framework or tool, explore an idea, fulfil a need, or just make something fun.

Note: Most of my personal projects aren't optimised for smaller screens. Please check back on a larger one if you can!

Music Experiments.

(code on GitHub)

2019. JavaScript (React).

(The name is misleading – this is currently one experiment.)

A synthesiser. The idea behind this project was to explore an alternative method of changing the synthesiser's parameters over time, using the keyboard and mouse together rather than the mouse alone.

The synthesiser repeats an eight-note loop. The cursor's position determines which parameter is being updated and its new value, while the keyboard number keys (1 to 8) indicate which notes in the loop should be updated. This allows multiple notes to be updated simultaneously.
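The update model described above can be sketched roughly like this (the names are illustrative, not the actual implementation): held number keys select steps, and cursor movement writes the current value into every held step at once.

```javascript
// A minimal sketch of keyboard-plus-mouse parameter editing:
// held number keys (1–8) select steps; the cursor writes to all of them.

const STEP_COUNT = 8;

// One parameter value per step of the loop, e.g. filter cutoff.
const cutoff = new Array(STEP_COUNT).fill(0.5);

// Steps whose number keys are currently held down.
const heldSteps = new Set();

function keyDown(key) {
  const step = parseInt(key, 10) - 1;
  if (step >= 0 && step < STEP_COUNT) heldSteps.add(step);
}

function keyUp(key) {
  heldSteps.delete(parseInt(key, 10) - 1);
}

// Called on mouse move: the cursor position maps to a value,
// and every held step receives it simultaneously.
function cursorMoved(value) {
  for (const step of heldSteps) cutoff[step] = value;
}
```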


(code on GitHub)

2018. Reason (reason-react).

A step sequencer controlling a synthesiser. There are rows of sliders, each controlling a parameter of the synthesiser. The length of each row can be set independently, meaning rows move in and out of sync during playback, resulting in complex and unpredictable patterns.
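The drifting effect of independently sized rows comes down to each row wrapping its own step index with its own length; this illustrative sketch (not the project's actual Reason code) shows two rows realigning only every least-common-multiple of their lengths.

```javascript
// Each row wraps its own step index independently, so rows of
// different lengths phase against one another during playback.

function stepIndex(tick, rowLength) {
  return tick % rowLength;
}

// A 4-step row and a 3-step row realign only every 12 ticks.
const pattern = [];
for (let tick = 0; tick < 12; tick++) {
  pattern.push([stepIndex(tick, 4), stepIndex(tick, 3)]);
}
```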

This project was an opportunity to try Reason, Facebook's alternative syntax for OCaml, for the first time.

Parameters are temporarily changed when the cursor moves over a slider — the value is restored when the cursor leaves. The intention is to expose happy accidents during playback and encourage experimentation.


(code on GitHub)

2017. JavaScript (React, Redux).

A sequencer that plays back sounds recorded from the user's microphone. Recording is triggered when the volume of the incoming signal exceeds a certain threshold, removing the need to manually trim the sound to avoid unwanted leading silence.
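The threshold trigger amounts to discarding samples until one exceeds the threshold, so the kept recording starts at the onset of the sound. A minimal sketch (illustrative, not the actual implementation):

```javascript
// Drop everything before the first sample whose amplitude
// crosses the threshold; return an empty array if none does.

function trimLeadingSilence(samples, threshold) {
  const start = samples.findIndex((s) => Math.abs(s) >= threshold);
  return start === -1 ? [] : samples.slice(start);
}
```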

When a sound is recorded and inserted into the sequence, a random pattern is generated for the user to tweak. The first sound receives a semi-random kick drum pattern, the second a semi-random snare pattern, and subsequent sounds use a weighted random selection algorithm that favours filling in any gaps.
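One way to favour gaps is to weight each empty step by its distance from the nearest filled step, so sparse regions are more likely to receive the next hit. This is a hedged sketch of the idea, not the project's actual algorithm:

```javascript
// Pick an empty step at random, weighted by distance to the
// nearest filled step. `random` is injectable for testing.

function pickGapStep(filled, random = Math.random) {
  const n = filled.length;
  const weights = filled.map((isFilled, i) => {
    if (isFilled) return 0; // filled steps are never chosen
    let dist = n;
    for (let j = 0; j < n; j++) {
      if (filled[j]) dist = Math.min(dist, Math.abs(i - j));
    }
    return dist;
  });
  // Standard roulette-wheel selection over the weights.
  const total = weights.reduce((a, b) => a + b, 0);
  let r = random() * total;
  for (let i = 0; i < n; i++) {
    r -= weights[i];
    if (r < 0) return i;
  }
  return n - 1;
}
```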

Web Audio Sequencer.

(code on GitHub)

2012–2013. JavaScript (Backbone).

A sequencer that sources short sounds from Freesound and SoundCloud, via their APIs. Sounds can be browsed and auditioned, then added to a musical composition. Multiple sounds can be composed together and played back at different pitches.

(code on GitHub)

2012. Python.

A Python module used to interact with the Korg padKONTROL MIDI controller via its native mode. In this mode, the controller bypasses its own functionality and can be controlled at a lower level — button lights can be lit, text can be displayed on the LCD, et cetera.

(code on GitHub)

2012. Python.

A Python module used to generate and edit patches for the Ensoniq ESQ-1 synthesiser (a keyboard from the mid-1980s). The goal was to make it easy to generate banks of unusual sounds by randomising parameters.

The data structures were packed into a binary format that required bitwise operations to read and write — not knowledge that's been overly useful since, but a unique sort of challenge. This project was also my first foray into writing tests.
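The kind of bit packing involved looks something like this — a purely illustrative example, not the real ESQ-1 patch format: two parameters, one 6-bit and one 2-bit, sharing a single byte.

```javascript
// Pack a 6-bit value and a 2-bit value into one byte,
// and read them back with masks and shifts.

function packByte(sixBit, twoBit) {
  return ((twoBit & 0b11) << 6) | (sixBit & 0b111111);
}

function unpackByte(byte) {
  return {
    sixBit: byte & 0b111111,
    twoBit: (byte >> 6) & 0b11,
  };
}
```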

(Flash project – build taken offline.)

(code on GitHub)

2012. Flash.

An interface that allows a user to record a sound using their microphone, listen to that sound, and then either re-record another sound or upload the sound to SoundCloud. Before uploading, the user can specify the title, its public/private status, and add tags. It utilises SoundCloud's API for authorisation and uploading.

This was my introduction to the SoundCloud API, which contributed later to the idea for the Web Audio Sequencer.

(Flash project – build taken offline. Never open sourced.)

2011–2012. Python (Django), Flash, JavaScript (Backbone).

A full-stack web application. A Flash interface allows composition of a musical sequence using sounds recorded or uploaded by the user, or chosen from a pool of sounds previously uploaded by other users. Any recorded or uploaded sounds are added to that pool. The user can save the sequence to the server via an API. Saved sequences can be browsed via a separate JavaScript/Backbone interface utilising the same API. That interface can then launch the Flash editor to edit a copy of another user's sequence, as a sort of remix.

This was a big undertaking (if I'd realised how big at the start, I don't think I would have started it!) but I learnt a lot – it was my first time writing significant backend code, designing and implementing an API, and writing more complex JavaScript and ActionScript. I also had to write my own code to mix and play back sounds at different pitches – Flash gave me a buffer to fill, the rest was up to me.
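The buffer-filling approach described above can be sketched like so (illustrative JavaScript rather than the original ActionScript, with hypothetical names): each voice steps through its source samples at a rate proportional to the desired pitch, and all voices are summed into the output buffer.

```javascript
// Fill an output buffer by mixing several voices, each resampled
// by stepping through its source at its own rate.

function mixInto(output, voices) {
  output.fill(0);
  for (const voice of voices) {
    let pos = voice.position;
    for (let i = 0; i < output.length; i++) {
      const idx = Math.floor(pos);
      if (idx >= voice.samples.length) break; // voice finished
      output[i] += voice.samples[idx];
      pos += voice.rate; // rate 2.0 is an octave up, 0.5 an octave down
    }
    voice.position = pos; // remember where to resume next buffer
  }
  return output;
}
```

(A real mixer would also interpolate between samples and clamp the summed signal, but the core idea is just this read-at-a-rate loop.)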