Personal Projects:

I have a collection of personal projects accessible online. These mostly involve my interest in sound and music and have been opportunities to learn a new framework, tool, or language, explore an idea, fulfil a need, or just make something fun.

Note: Most of my personal projects aren't optimised for smaller screens. Please check back on a larger one if you can!



A React-based sequencer and synthesiser controlled using the mouse and keyboard, rather than the mouse alone. The number keys indicate which points in the sequence should be updated, freeing the viewport to display one slider per synthesiser parameter and allowing multiple points to be updated at once. Animations are performed using react-spring.
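The interaction above can be sketched as a tiny model (in Python for brevity; the function name and data shapes are illustrative, not the project's actual code): held number keys select steps, and a single slider move updates every selected step at once.

```python
def apply_slider(values, held_keys, new_value):
    """Set the slider's parameter on every step whose number key is held.

    values: per-step parameter values; held_keys: set of held step indices.
    Returns a new list so the UI layer can diff against the old state.
    """
    return [new_value if i in held_keys else v for i, v in enumerate(values)]

# Holding keys 1 and 3 (indices 0 and 2) while moving a slider to 0.9:
updated = apply_slider([0.1, 0.2, 0.3], {0, 2}, 0.9)
```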

While using document-wide keyboard and mouse events with React hooks, I uncovered and submitted an issue about the way React handles DOM event subscriptions, prompting an in-depth response from Dan Abramov of Facebook.



A Reason and React-based step sequencer supporting independent timings and pattern lengths per synthesiser parameter, resulting in complex variations in sound. Parameters temporarily update as the cursor travels over each component, encouraging happy accidents and experimentation. Some functionality is tested using Jest.

This project was an opportunity to use Reason, Facebook's extension of OCaml, and learn more about type systems and functional programming.



A React and Redux-based sequencer that composes sounds recorded from the microphone. Recording is triggered automatically by analysing input volume, removing the need to manually trim recordings. Sequences can be saved and switched between in sync with the playback.

Redux middleware was used to encapsulate the sound recording, playback, and microphone browser-permission functionality. When a new sound is recorded, sequence data is pre-filled for immediate feedback: first a semi-random kick-and-snare-type pattern, then a weighted random selection algorithm that favours filling in gaps. Visuals were synchronised with the music by manually managing DOM updates using requestAnimationFrame. A custom React hook manages components transitioning in and out, animated via the rebound spring library, with tests written in Jest.
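The gap-favouring selection might look roughly like this (a sketch in Python rather than the project's JavaScript; the function name and weight values are assumptions): empty steps get a higher weight than filled ones, so new notes tend to land in gaps rather than piling up.

```python
import random

def fill_gaps(sequence, weight_gap=4, weight_filled=1, rng=random):
    """Activate one step, chosen at random but biased towards gaps.

    sequence: list of booleans (True = step already has a note).
    Filled steps can still be picked, just far less often.
    """
    weights = [weight_filled if hit else weight_gap for hit in sequence]
    idx = rng.choices(range(len(sequence)), weights=weights, k=1)[0]
    result = list(sequence)
    result[idx] = True
    return result
```

Setting `weight_filled=0` makes the choice land exclusively on gaps; a small positive value keeps occasional doubling-up for variety.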



A Backbone-based sequencer that composes short sounds retrieved from the FreeSound and SoundCloud APIs. Users browse through and audition sounds under three seconds long, and can play them at different pitches. The API responses are standardised for easy handling by Backbone views, so additional APIs could be supported. The note input grid is drawn in canvas for performance reasons.
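Standardising the two APIs' responses amounts to mapping each payload onto one common shape. A sketch of the idea (in Python; the field names below are illustrative stand-ins, not the real FreeSound or SoundCloud response schemas):

```python
# Hypothetical adapters: each maps a provider-specific payload onto
# the one shape the views consume. Supporting a new API means
# writing one more adapter, with no view changes.
def normalise_freesound(item):
    return {"title": item["name"], "seconds": item["duration"], "src": item["preview"]}

def normalise_soundcloud(item):
    return {"title": item["title"], "seconds": item["duration_ms"] / 1000, "src": item["stream"]}
```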



A Python library to interact with the Korg PadKontrol MIDI controller via its native mode, for custom functionality. The user can control the button lights and LCD display, and register a callback to be notified when buttons are pushed.
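The callback registration might be structured like this (an illustrative sketch only: the class and method names are assumptions, and the actual PadKontrol native mode uses SysEx messages whose byte layout is omitted here):

```python
class ButtonEvents:
    """Dispatches decoded button events to registered callbacks."""

    def __init__(self):
        self._callbacks = []

    def register(self, callback):
        """callback(button_id, pressed) is called on every button event."""
        self._callbacks.append(callback)

    def _on_midi_message(self, button_id, pressed):
        # In the real library this would be invoked by the MIDI input
        # handler after decoding a native-mode message from the device.
        for cb in self._callbacks:
            cb(button_id, pressed)
```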



A Python module used to generate and edit patches for the mid-1980s Ensoniq ESQ-1 synthesiser. Certain parameters, or groups of parameters, can be randomised to generate unusual sounds. The patch data is packed into a binary format that requires bitwise operations to read and write. Unit tests ensure patches are serialised and deserialised correctly.
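The bitwise packing works along these lines (a generic sketch, not the module's actual code; the ESQ-1's real field widths and ordering are not shown here): multiple small values share bytes, each occupying only as many bits as it needs.

```python
def pack_bits(fields):
    """Pack (value, width) pairs into one integer, least significant first."""
    packed, shift = 0, 0
    for value, width in fields:
        if not 0 <= value < (1 << width):
            raise ValueError("value does not fit in field")
        packed |= value << shift
        shift += width
    return packed

def unpack_bits(packed, widths):
    """Inverse of pack_bits: split an integer back into field values."""
    values = []
    for width in widths:
        values.append(packed & ((1 << width) - 1))
        packed >>= width
    return values
```

Round-tripping every patch through `pack_bits`/`unpack_bits` is exactly the property the module's unit tests check.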



Flash project - build taken offline.

A Flash application that allows a user to record and audition a sound using their microphone, then upload that sound to SoundCloud in WAVE format. The user can specify the title, set its public/private status, and add tags. It uses SoundCloud's API for OAuth authorisation and uploading.



Flash project - build taken offline. Never open-sourced.

A full-stack web application. A Flash interface allows composition of a musical sequence using sounds recorded or uploaded by the user, or chosen from a pool of sounds previously uploaded by other users. Any recorded or uploaded sounds are added to that pool via an API implemented with Django and Django REST Framework. The user can also save the sequence to the server via that API. Saved sequences can be browsed via a separate JavaScript/Backbone interface. That interface can then launch the Flash editor to edit a copy of the sequence, as a remix.

The user can view and trim a microphone recording waveform drawn by a custom component. The raw sample data is manually interpolated and mixed to play back sounds at different pitches.
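The pitch-shifting works by reading the sample buffer at a fractional step and blending neighbouring samples. A minimal sketch of that interpolation (in Python rather than the project's ActionScript; names are illustrative):

```python
def resample(samples, rate):
    """Play samples back at a different pitch via linear interpolation.

    rate 2.0 reads twice as fast (one octave up, half the length);
    rate 0.5 reads half as fast (one octave down, double the length).
    """
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Blend the two samples either side of the fractional position.
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += rate
    return out
```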