Soundscape Composition using AI-Generated MIDI and Granular Synthesis on Clarinets

Project type

Electronic Music Composition + AI Music Generation

Date

June 2025

Tools Used

Software: Ableton
Hardware: Microphones (Neumann KM184)
Programming Languages: Python

Credits

Programmed by James (Skyler) Crook, Keene Cheung, and Brandon Rogers
Composed by Keene Cheung

This piece grew out of projects for two different classes: MUS 174C, Studio Techniques with Professor Tom Erbe, and CSE 153R, Machine Learning for Music with Professor Julian McAuley.

The first part came from CSE 153R. The goal of this team-based project was to create two types of generated music; we chose symbolic conditioned and symbolic unconditioned generation. Our conditioned generation involved giving a model a prompt specifying a desired instrumentation, from which it would output a composition using those instruments. For unconditioned generation, we trained a model on the MAESTRO dataset, a collection of virtuosic MIDI piano performances, to detect patterns in features such as key signature and note frequency and then produce a new MIDI sequence that would, in theory, replicate the style of the music it was trained on. In our first iterations, the note sequences were fairly cacophonous and questionable in terms of musicality, but as we updated our model, the results became more tonally intuitive.

I wanted to hear what some of these MIDI sequences would sound like played with my own synthesized sounds. They were often jarring and borderline random, but I found the experience fun and wanted to lean into the grating sounds and their aleatoric character. This led to my work for MUS 174C. Up to that point, my second piece for the class was still in the sound-design stage: I wanted to create a piece consisting exclusively of sounds that originated from the clarinet, and through granular synthesis and experiments with recording techniques I had developed a wide sonic palette from those sounds alone. However, I hadn't settled on a form for the piece. It wasn't until after I submitted my CSE 153R project that I thought to use the MIDI sequences we generated as the form for the piece.
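Our actual trained model isn't reproduced here, but the core idea of unconditioned symbolic generation, learning note-to-note patterns from MIDI and sampling a new sequence from them, can be illustrated with a deliberately minimal sketch: a first-order Markov chain over MIDI pitch numbers. The corpus below is a toy stand-in, not data from MAESTRO.

```python
import random
from collections import defaultdict

def train_markov(pitches):
    """Count pitch-to-pitch transitions in a MIDI note sequence."""
    transitions = defaultdict(list)
    for a, b in zip(pitches, pitches[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new pitch sequence by walking the transition table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:          # dead end: fall back to the starting pitch
            choices = [start]
        out.append(rng.choice(choices))
    return out

# Toy "training data": a short C-major noodle (MIDI pitch numbers)
corpus = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]
table = train_markov(corpus)
melody = generate(table, start=60, length=16)
```

A real model learns far richer structure (rhythm, key, long-range form), but even this tiny chain shows why early outputs sound plausible locally yet wander globally, much like our first iterations did.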
Admittedly, the piece is still very sonically polarizing, but I think it is a fun exploration of sound design and serves as a commentary on the use of AI within the musical realm.
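The granular-synthesis side of the sound design can be sketched in the same spirit. This is not my actual Ableton workflow, just a minimal NumPy illustration of the technique: short, windowed grains are scattered from a source recording into an output buffer and overlap-added into a texture. The decaying sine tone stands in for a clarinet recording.

```python
import numpy as np

def granulate(source, sr=44100, grain_ms=50, n_grains=200,
              out_seconds=2.0, seed=0):
    """Scatter short enveloped grains of `source` across an output buffer."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    out = np.zeros(int(sr * out_seconds))
    env = np.hanning(grain_len)              # smooth each grain's edges
    for _ in range(n_grains):
        src = rng.integers(0, len(source) - grain_len)
        dst = rng.integers(0, len(out) - grain_len)
        out[dst:dst + grain_len] += source[src:src + grain_len] * env
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out   # normalize to avoid clipping

# Stand-in "clarinet" source: a decaying 220 Hz sine tone
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)
cloud = granulate(tone, sr=sr)
```

Because grain positions are drawn at random, the source's pitch content survives while its temporal identity dissolves, which is what let a single instrument yield such a wide sonic palette.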
