Scala Sampler for Functional Soundscapes

Project for Google Summer of Code 2024

Project Description and Goals

The main objective of this project was to develop a sampler instrument for Sounds of Scala, a web-based music and audio library built for Scala. A sampler is a type of digital music instrument for playing and manipulating audio samples. Sounds of Scala leverages Scala.js to target the Web Audio API, which brings audio processing capabilities to the web browser.

The sampler instrument is an important addition to Sounds of Scala, significantly expanding the possibilities for developing web-based music applications with the library.

Report Overview

The remainder of this report first discusses the work that was completed, then outlines a list of the PRs made, describes the implementation in detail, and concludes with future work and key learnings.

The Current State of the Project

The project has made significant progress in developing a functional sampler instrument for the Sounds of Scala library. Most goals have been fully implemented and are working as expected. The sampler now offers the following capabilities:

  • Loading Samples: Loads a sample’s audio data from a specified file path.
  • Linking samples to music notes: Adjusts pitch and playback rate to map and play the sample across a musical scale.
  • Customisable Playback: Provides a range of customisable settings for manipulating sample playback. This includes volume and fade controls, custom pitch and playback rate adjustments, adding a delay to the start of the sample and cropping the sample.
  • Advanced Playback Options: Supports looping and/or reversing a sample, as well as polyphonic playback.

A Graphical User Interface was also developed to test and showcase the sampler's features. This interface demonstrates the instrument’s practical applications and enables users to interact with the sampler’s functionalities in a visual and intuitive manner.

[Screenshot: the sampler's graphical user interface]

My Contributions to the Project:

Implementation

Sample Loading

Loading the Audio Data into an AudioBuffer

Before samples can be played, they must be loaded into an AudioBuffer. The SampleLoader provides a loadSample function that asynchronously loads an audio sample from a given file path. It sends an HTTP GET request to fetch the file as an ArrayBuffer, then decodes it into an AudioBuffer using the Web Audio API’s AudioContext. The AudioBuffer holds the audio data for that sample.
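The real fetch-and-decode pipeline runs in the browser via Scala.js, so it cannot be reproduced verbatim here, but its asynchronous shape can be sketched in plain Scala. Everything below (`AudioBufferStub`, `decodeStub`, the injected `fetch` function, the `kick.wav` path) is an illustrative stand-in, not the library's actual API:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Hypothetical stand-in for a decoded sample. The real code produces a
// Web Audio AudioBuffer via AudioContext.decodeAudioData.
final case class AudioBufferStub(samples: Vector[Float])

// Stand-in for the decode step: raw bytes to floating-point samples.
def decodeStub(bytes: Array[Byte]): AudioBufferStub =
  AudioBufferStub(bytes.toVector.map(_.toFloat / 127f))

// Asynchronously "load" a sample: fetch the bytes, then decode them.
// In the library this is an HTTP GET returning an ArrayBuffer.
def loadSample(fetch: String => Future[Array[Byte]])(path: String): Future[AudioBufferStub] =
  fetch(path).map(decodeStub)

// Usage with an in-memory fetch, standing in for the HTTP request:
val fakeFetch: String => Future[Array[Byte]] =
  _ => Future.successful(Array[Byte](0, 64, 127))

val buffer = Await.result(loadSample(fakeFetch)("kick.wav"), 1.second)
```

Injecting the fetch function keeps the decode logic testable without a network or a browser, which mirrors why the library separates fetching from decoding.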

Creating a Sampler Instance

The process of loading one or several samples is separate from the playback action and happens upon the creation of the sampler instrument. Within the Sampler companion object, the fromPaths method is responsible for loading samples from their designated file paths when the sampler instrument is first instantiated. This method accepts a list of file paths paired with SampleKey identifiers. Each SampleKey represents a musical note with the attributes of pitch, accidental, and octave. fromPaths traverses through the list, loading each sample via the SampleLoader.loadSample method. Once the samples are loaded, they are mapped to their respective keys and combined into a Sampler instance.
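The traversal described above can be sketched as follows. The types here (`SampleKeyStub`, `SamplerStub`, the loader signature) are simplified assumptions for illustration, not the library's real definitions:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Simplified key: a musical note with pitch, accidental, and octave.
final case class SampleKeyStub(pitch: String, accidental: Int, octave: Int)

// Simplified sampler holding loaded samples keyed by note.
final case class SamplerStub(samples: Map[SampleKeyStub, Vector[Float]])

// Traverse the (key, path) pairs, load each sample, and combine the
// results into a sampler instance — the shape of fromPaths.
def fromPaths(
    paths: List[(SampleKeyStub, String)],
    load: String => Future[Vector[Float]]
): Future[SamplerStub] =
  Future
    .traverse(paths) { case (key, path) => load(path).map(key -> _) }
    .map(loaded => SamplerStub(loaded.toMap))

// Usage with a fake loader that returns one dummy sample per path:
val fakeLoad: String => Future[Vector[Float]] = _ => Future.successful(Vector(0f))
val sampler = Await.result(
  fromPaths(List(SampleKeyStub("A", 0, 4) -> "a4.wav"), fakeLoad),
  1.second
)
```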

In addition, the Sampler companion object includes factory methods for creating specific samplers (e.g., piano, guitar). Each function loads a different type of instrument or sound effect from specific sets of audio samples. It maps the samples to their appropriate musical notes and sets up the sampler instrument for playback.

Playback

The Sampler class handles the playback of audio samples mapped to specific musical notes. It selects the closest matching audio sample for a given note. If a sample with the exact pitch is not provided, the nearest pitch to the desired note is found, and the sample’s playback rate is adjusted accordingly.

Getting the Sample’s Frequency

The Sampler class accepts a map (samples) that combines a SampleKey object with the sample’s corresponding AudioBuffer instance. When a sample needs to be played, the samples map is transformed into an array (orderedSamples) of tuples containing the frequency of the note, the SampleKey, and the AudioBuffer. This array is then sorted by frequency to facilitate efficient searching.
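Deriving a frequency from a note and sorting by it can be sketched with the standard equal-temperament formula (A4 = 440 Hz). The `Key` type and the semitone offsets are assumptions for illustration; the library computes frequency from its own note types:

```scala
// Semitone offset of each natural pitch relative to A within an octave.
val semitone: Map[String, Int] =
  Map("C" -> -9, "D" -> -7, "E" -> -5, "F" -> -4, "G" -> -2, "A" -> 0, "B" -> 2)

final case class Key(pitch: String, accidental: Int, octave: Int) {
  // Equal-temperament frequency relative to A4 = 440 Hz.
  def frequency: Double = {
    val steps = semitone(pitch) + accidental + 12 * (octave - 4)
    440.0 * math.pow(2.0, steps / 12.0)
  }
}

// Sorting keys by frequency, as done when building orderedSamples:
val ordered = List(Key("C", 0, 5), Key("A", 0, 4), Key("E", 0, 4))
  .sortBy(_.frequency)
```

Sorting once up front is what makes the binary search described next possible.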

Closest Frequency Search

The method closestFrequency is a helper function that finds the sample in the orderedSamples array with the frequency closest to a given target frequency. This method uses a binary search algorithm to recursively compare the target frequency to the middle element of the current search range and narrows down the search space until the closest match is found.
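The idea behind `closestFrequency` can be sketched over a plain sorted vector of frequencies; the real method works over the (frequency, key, buffer) tuples in `orderedSamples`:

```scala
// Binary search over frequencies sorted ascending, returning the index
// of the entry closest to the target.
def closestIndex(freqs: Vector[Double], target: Double): Int = {
  @annotation.tailrec
  def go(lo: Int, hi: Int): Int =
    if (lo >= hi) lo
    else {
      val mid = (lo + hi) / 2
      // Narrow the range toward the first frequency >= target.
      if (freqs(mid) < target) go(mid + 1, hi) else go(lo, mid)
    }
  val i = go(0, freqs.length - 1)
  // The closest value is either at i or immediately before it.
  if (i > 0 && math.abs(freqs(i - 1) - target) <= math.abs(freqs(i) - target)) i - 1
  else i
}
```

Binary search keeps the lookup at O(log n) per note, which matters when a song triggers many notes against a large sample set.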

Playing Samples Linked to Notes

The Sampler.playWithSettings method handles the playback of a note or a chord and passes on the sampler’s settings. It accepts a musicEvent, along with when that event should be played, the track’s tempo, and some customisable playback settings.

If the musical event is a note (AtomicMusicalEvent.Note), the method calculates the note's frequency and then calls closestFrequency to find the sample in the orderedSamples array that most closely matches this frequency. Once the closest matching sample for the respective note is found, the method calculates a playbackRatePitchFix, which adjusts the playback rate to match the desired frequency. Changing the playback rate alters the pitch, allowing the sample to be tuned to the correct note.
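The pitch fix itself is a simple frequency ratio: playing a buffer at rate r multiplies its perceived frequency by r. The function name mirrors the report's `playbackRatePitchFix`, but the exact signature is an assumption:

```scala
// Rate needed so a sample recorded at sampleFreq sounds at targetFreq.
def playbackRatePitchFix(sampleFreq: Double, targetFreq: Double): Double =
  targetFreq / sampleFreq

// An octave up doubles the rate; a semitone up scales it by 2^(1/12).
val octaveUp = playbackRatePitchFix(220.0, 440.0) // 2.0
```

Note this couples pitch and speed: an octave up also plays the sample twice as fast, which is exactly the limitation the granular-synthesis idea in the future-work section would address.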

If the musical event is a chord (AtomicMusicalEvent.Harmony), the playWithSettings method handles the simultaneous playback of multiple notes. The method traverses through each note in the chord and applies the same playback process to each individual note. For each note in the chord, the method calculates the timing, taking into account any timing offsets specific to that note, and then invokes playWithSettings for each note individually. This ensures that all notes in the chord are played together.

In both cases, the method concludes by calling SamplePlayer.playSample to play the sample’s AudioBuffer with the adjusted pitch/playback rate, along with the specified settings.

Settings

SamplePlayer provides the functionality for playing an audio sample with its customisable settings and allows control over how and when the audio sample is played. It extends Instrument and can be configured with various parameters for audio playback, defined in the Settings case class. Those settings include:

  • volume: The gain level.
  • fadeIn and fadeOut: The duration in seconds for fading in and out the audio. fadeIn + fadeOut must be less than the length of the sample.
  • playbackRate: The speed at which the audio sample is played.
  • reversed: A boolean indicating if the audio should be played in reverse.
  • loop: An optional loop setting that specifies if the audio should loop and the start/end points of the loop in seconds. loopStart + loopEnd must be less than the length of the sample.
  • startTime: The when parameter; delays the start of playback by the given number of seconds.
  • offset: The offset parameter, which defaults to 0, defines where in the sample the playback will start (in seconds).
  • length: Defines the length of the portion of the sample to be played. It defaults to the duration of the sample minus the value of offset but can be changed to crop the sample to a desired length.
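A settings record of this shape might look as follows. The field names follow the report, but any default values beyond those it states are assumptions:

```scala
// Illustrative loop window in seconds.
final case class Loop(start: Double, end: Double)

final case class Settings(
    volume: Double = 1.0,         // gain level
    fadeIn: Double = 0.0,         // seconds
    fadeOut: Double = 0.0,        // seconds
    playbackRate: Double = 1.0,   // playback speed
    reversed: Boolean = false,    // play the buffer backwards
    loop: Option[Loop] = None,    // None = play once through
    startTime: Double = 0.0,      // delay before playback starts, seconds
    offset: Double = 0.0,         // where in the sample playback starts
    length: Option[Double] = None // None = play to the end minus offset
)

object Settings {
  // Mirrors the implicit default instance described in the report.
  implicit val default: Settings = Settings()
}
```

Modelling optional behaviour with `Option` (loop, length) rather than sentinel values keeps invalid states unrepresentable, in keeping with the project's functional style.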

Default Settings

The Settings companion object provides an implicit default instance, meaning default settings are available when none are explicitly provided.

Handling the Sample’s Settings

The playSample method in SamplePlayer handles the playing of the sample’s audio buffer with its settings. playSample has several inner methods to construct and configure the necessary audio nodes:

  • createGainNode: Creates a GainNode to control the volume of the audio. The node is connected to the audio context's destination, which typically outputs sound to the speakers.
  • createSourceNode: Creates an AudioBufferSourceNode, sets the playback rate, and assigns the audio buffer (either reversed or as is). The SourceNode is responsible for actually playing the audio sample.
  • reverseBuffer: If the audio is to be played in reverse, this function creates a new AudioBuffer with the audio data reversed.
  • configureGainNode: Configures the GainNode to handle a specified volume and fade-in and fade-out controls, adjusting the volume over time.
  • configureSourceNode: Configures the AudioBufferSourceNode for playback. If looping is specified, it sets up the loop points; otherwise, it starts playback with the specified offset and duration.
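The reversal step in particular is simple to illustrate. The real `reverseBuffer` copies the reversed data into a fresh Web Audio `AudioBuffer`; here a channel is just an `Array[Float]`:

```scala
// Reverse a buffer's audio data channel by channel — the idea behind
// the reverseBuffer helper. Each channel array is copied, not mutated.
def reverseChannels(channels: Vector[Array[Float]]): Vector[Array[Float]] =
  channels.map(_.reverse)

val reversed = reverseChannels(Vector(Array(1f, 2f, 3f)))
```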

The playSample method combines these inner functions to execute the playback. It first creates the GainNode and AudioBufferSourceNode. It then connects the source node to the gain node, which is in turn connected to the audio context's destination. The method applies the gain and source node configurations, such as volume, playback rate, fade effects, and looping. Finally, it starts the audio playback at the specified time.

Example

ExampleSongSampler demonstrates how one can write a song using the sampler instrument. The song is constructed by defining a sequence of notes and applying custom playback settings. The play method initialises a piano sampler via the Sampler companion object, which loads the appropriate piano samples, and then uses this sampler to play the song with the specified settings. This example demonstrates how the Sampler can be used to create and play a song with customised audio behaviour in a Scala web application.

What’s Left to Do:

While the current features provide a solid foundation, the potential for expanding the sampler's capabilities is virtually limitless. Here are several enhancements I'd like to explore adding to the sampler in the future:

  • Granular Synthesis: Implementing granular synthesis would allow for the independent manipulation of pitch and playback speed, enabling more creative possibilities and control over the sound.
  • MIDI Integration: Incorporating MIDI integration would allow users to control the sampler using external MIDI devices.
  • Real-Time Audio Recording: Record audio directly into the sampler in addition to loading a sample from a file path.

Challenges and Key Learnings During the Project:

Developing the sampler presented some challenges, particularly with the intricacies of the Web Audio API. Each step required the consideration of numerous interdependent factors. At times, it took several attempts to understand why a specific sound didn’t play as intended, and testing audio proved somewhat challenging since it’s tied to a listener’s hearing/perception.

Overall, this has been a very rewarding experience that has provided me with valuable insights into program design decisions and the importance of a structured approach to managing complexity. Applying functional programming principles played a crucial role in navigating this.

I would like to extend my heartfelt thanks to my mentors, Noel Welsh and Paul Matthews, for their invaluable help and support throughout this project.
