By applying probability, we can still employ randomness, but weight the odds to
favour specific outcomes. By adjusting those weights, we can influence how our
program behaves.
An easy way to think about this is to visualize a pie chart. The more pieces
of the pie we assign to a given outcome, the greater the chance that outcome
will occur.
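The pie-chart idea translates directly into code. The `weightedChoice` helper below is a hypothetical sketch (not part of any library): each option carries a weight, which is its share of the pie.

```javascript
// Weighted random choice: pick a value, where each value's weight
// is its slice of the pie. (Illustrative helper, not from a library.)
function weightedChoice(options) {
  // options: array of [value, weight] pairs
  const total = options.reduce((sum, [, weight]) => sum + weight, 0)
  let r = Math.random() * total
  for (const [value, weight] of options) {
    if (r < weight) return value
    r -= weight
  }
  // Fallback for floating-point edge cases
  return options[options.length - 1][0]
}

// 70% chance of a rest, 20% a short note, 10% a long one
const event = weightedChoice([['rest', 7], ['short', 2], ['long', 1]])
```

Changing the weights changes the character of the output without removing the randomness.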
```javascript
navigator.requestMIDIAccess().then((midi) => {
  // Grab the first available MIDI output
  const outputs = midi.outputs.values()
  const output = outputs.next().value

  // Random integer between min and max (inclusive)
  function random(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min
  }
})
```
Probability is one way to rein in randomness. Another way is to emulate a
common pattern found in nature, where values tend to cluster around a certain
range, otherwise known as normal (or Gaussian) distribution (in contrast to pure
randomness, which aims for uniform distribution). This maps well to music, where
melodies tend to use a narrow range of notes and steps.
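One standard way to produce normally distributed values is the Box-Muller transform, which turns two uniform random numbers into one Gaussian sample. The `gaussian` helper below is a sketch along those lines, not code from the original example:

```javascript
// Normally distributed random value via the Box-Muller transform.
function gaussian(mean = 0, stdDev = 1) {
  const u1 = 1 - Math.random() // shift to (0, 1] so Math.log never sees 0
  const u2 = Math.random()
  const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2)
  return mean + z * stdDev
}

// Notes now cluster around middle C (MIDI note 60) instead of
// spreading evenly across the whole keyboard.
const note = Math.round(gaussian(60, 4))
```

Most samples land within a few semitones of the mean, which sounds far more melody-like than a uniform spread.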
Instead of choosing from all notes, we can instead limit our choices to a
particular scale. In fact, we've already been using a scale, the chromatic one.
This is a valid approach (see the
Twelve-tone technique),
but it lacks 'musicality' (part of what those composers were getting away from).
For our purposes we can say that a scale is a pattern of white and black keys.
This pattern can be described in terms of intervals. See the Music chapter for
details. These notes sound like they 'belong together'.
The C major scale is just all the white notes, from one C up to the next.
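As a sketch, we can build that scale from its interval pattern (whole, whole, half, whole, whole, whole, half, expressed in semitones). The `scale` helper and `MAJOR` constant are illustrative names, not from a library:

```javascript
// The major scale's interval pattern, in semitones: W W H W W W H
const MAJOR = [2, 2, 1, 2, 2, 2, 1]

// Build a scale (as MIDI note numbers) from a root note and a pattern
function scale(root, intervals) {
  const notes = [root]
  for (const step of intervals) {
    notes.push(notes[notes.length - 1] + step)
  }
  return notes
}

// C major from middle C (MIDI 60): all the white notes, C to C
scale(60, MAJOR) // [60, 62, 64, 65, 67, 69, 71, 72]
```

The same helper gives us any major scale by changing the root, and any other scale by swapping the interval pattern.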
Now all the notes are in the same scale, so things sound a little less random
and more cohesive. Given a single stream of notes, this is less jarring than
total randomness.
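Sampling from the scale rather than from all twelve notes might look like this. It's a sketch: `cMajor` holds the MIDI note numbers of the scale, `random` repeats the earlier helper so the example is self-contained, and the commented `output.send` line assumes the `output` from the earlier `requestMIDIAccess` call:

```javascript
// Random integer between min and max (inclusive), as before
function random(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min
}

// C major scale as MIDI note numbers
const cMajor = [60, 62, 64, 65, 67, 69, 71, 72]

// Pick a random note from the scale instead of the full chromatic range
function randomNote(scaleNotes) {
  return scaleNotes[random(0, scaleNotes.length - 1)]
}

// output.send([0x90, randomNote(cMajor), 0x7f]) // note on, full velocity
```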
Next, we divide the notes, sending the high notes to one channel and the
low notes to a second channel. We're sampling from the same set of notes, so we
can be sure they will harmonise. The results are more interesting: hearing how
the two voices interact adds a layer of depth to our music.
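The register split can be sketched as below. The split point and channel numbers are illustrative choices, and the commented `output.send` line again assumes the `output` from `requestMIDIAccess`:

```javascript
// Route a note by register: low notes to MIDI channel 1, high notes
// to channel 2. In MIDI status bytes, 0x90 is note-on on channel 1
// and 0x91 is note-on on channel 2.
function channelFor(note, splitPoint = 65) {
  return note < splitPoint ? 0x90 : 0x91
}

// Both voices draw from the same scale, so they harmonise:
// output.send([channelFor(note), note, 0x7f])
```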
## Learning
We covered two main topics in this chapter: 1) we can use various methods to
generate sequences of numbers with different characteristics that we can use as
the input to our programs; and 2) we can apply music theory to coerce that data
into something that makes more musical sense.
With that in mind, we can encapsulate our learning into two new utilities:
Generative: Functions for generating data we can use in our programs,
either algorithms we write ourselves, or ones we might use from other
libraries → TODO.
Music: A place to wrap up our musical knowledge and handle the details of
mapping that to midi → TODO.
With these in our toolbelt, we could rewrite our last example as follows:
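One possible shape for that rewrite is sketched below. The `Generative` and `Music` objects, and their methods, are hypothetical APIs invented for illustration, not the actual utilities from the text:

```javascript
// Hypothetical utility modules — names and APIs are illustrative.
const Generative = {
  // Random integer between min and max (inclusive)
  random: (min, max) => Math.floor(Math.random() * (max - min + 1)) + min,
}

const Music = {
  MAJOR: [2, 2, 1, 2, 2, 2, 1], // major-scale intervals, in semitones
  // Build a scale (as MIDI note numbers) from a root and a pattern
  scale: (root, intervals) =>
    intervals.reduce(
      (notes, step) => notes.concat(notes[notes.length - 1] + step),
      [root]
    ),
}

const notes = Music.scale(60, Music.MAJOR) // C major from middle C
const note = notes[Generative.random(0, notes.length - 1)]
// output.send([0x90, note, 0x7f]) // note on, assuming `output` from requestMIDIAccess
```

Encapsulating the details this way keeps the musical intent of each program visible at the top level.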
Right now our music is basically just streams of notes. To take it further, we
need a way to generate cohesive patterns of notes, and sequence them with other
patterns. As it happens, that's just the goal of the next chapter!