Probability distribution vector

A computer can make a choice between different alternatives based on assigned statistical “likelihoods”—relative probabilities assigned to each possible alternative. This is accomplished most easily by storing all of the probabilities in a single vector (array), calculating the sum of all those probabilities, dividing the range (0 to the sum) into “quantiles” (subranges) proportional to the different probabilities, choosing a random number within that range, and determining which quantile the random number falls into.

The article “Probability distribution” describes this process and shows how to accomplish it, both conceptually (in a prose description that could be implemented as a program) and with an example written in Max using the table object. That article also discusses the implications and limitations of making decisions in this way.

What follows is an example of how to implement a probabilistic decision making program in JavaScript, and a simple Max patch for testing it. I chose to write the example in JavaScript for two reasons. One reason is that JavaScript is an easy-to-understand language, comprehensible to people who already know Java or C; the other reason is just to demonstrate how easy it is to write and use JavaScript code inside Max.

First, let’s recap the process we’ll follow, as stated in that article.
1. Construct a probability vector.
2. Calculate the sum of all probabilities.
3. Choose a random (nonnegative) number less than the sum.
4. Begin cumulatively adding individual probability values, checking after each addition to see if it has resulted in a value greater than the randomly chosen number.
5. When the randomly chosen value has been exceeded, choose the event that corresponds to the most recently added probability.

To see an implementation of this in JavaScript for use in the Max js object, download the file “probabilisticchoice.js” and save it with that name somewhere in the Max file search path. The comments in that file explain what’s being done. In this implementation, though, we use a reverse procedure from the one described in step 4 above. We start by subtracting the value of the last probability in the array from the total sum, and checking to see if that value is less than the random number we chose. If not, we proceed to the next-to-last probability, subtract that, and see if it’s less than the random number, and so on. The principle is the same; we’re just checking downward from the maximum rather than upward from the minimum.
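If you’d like to see the gist of the algorithm without downloading the file, here is a minimal sketch of that same top-down procedure, written the way it might appear inside a Max js object. (This is not the actual contents of probabilisticchoice.js; the variable and function names are merely illustrative.)

// A minimal sketch of probabilistic choice for the Max js object.
// (Not the actual contents of probabilisticchoice.js; names are illustrative.)

var probabilities = [1, 1, 1, 1]; // default: four equally likely choices
var sum = 4;                      // running total of all the probabilities

// set the probability vector, e.g. with the message "setprobabilities 1 3 2"
function setprobabilities() {
    probabilities = arrayfromargs(arguments);
    sum = 0;
    for (var i = 0; i < probabilities.length; i++) {
        sum += probabilities[i];
    }
}

// a bang makes one weighted choice and sends it out the left outlet
function bang() {
    if (sum <= 0) {
        return; // no valid probabilities to choose from
    }
    var r = Math.random() * sum;   // random number in the range 0 to sum
    var total = sum;
    // check downward from the top quantile, as described above
    for (var i = probabilities.length - 1; i >= 0; i--) {
        total -= probabilities[i];
        if (r >= total) {          // r falls in this quantile
            outlet(0, i);
            return;
        }
    }
}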

You can try out the program using the example Max patch shown below.


probabilitiestester.maxpat

The JavaScript program accommodates the six input messages shown in the patch. To set the array of probabilities, one can use the setprobabilities message or simply a list. One can query the contents of the probabilities, choices, and sum variables, which are sent out the right outlet. The message bang makes a probabilistic choice based on the specified probabilities, and sends the choice (some number from 0 to choices-1) out the left outlet. Note that this is nearly identical to the probabilistic choice capabilities of the table object. It’s shown here as a JavaScript program to demonstrate the calculation explicitly.

Tempo-relative timing in Max

As noted in the essay on musical timing, computers can measure absolute time with great precision, to the nearest millisecond or microsecond, but for musical purposes it’s generally more suitable to use “tempo-relative” time, in which we refer to units of time as measures, beats, and ticks (fractions of a beat) relative to a given tempo stated in beats per minute (bpm). (For the purpose of this discussion, we’ll consider “beat” and “quarter note” to be synonymous.)

The default—and most common—unit of time in Max is the millisecond. Almost all timing objects (cpuclock, metro, clocker, timer, delay, pipe, etc.) refer to time in terms of milliseconds by default. However, in Max (as in most DAW software) there exists a syntax for referring to time in a variety of formats: absolute time in milliseconds, absolute time in clock format (hours:minutes:seconds:milliseconds), audio samples (dependent on the audio sampling rate in effect), hours:minutes:seconds:frames (dependent on an established film/video frame rate), and tempo-relative time based on the tempo attribute stored in the transport.

Tempo-relative time, as controlled by the transport in Max, can be expressed in bars.beats.units (bbu format, equivalent to measures.beats.ticks in DAW software), note values, or simply in ticks. The relationship of those units to absolute time depends on the value stored in the transport’s tempo attribute, expressed in bpm. A complete explanation of the time value syntax in Max is in the Max documentation. A complete listing of the objects that support time value syntax is also available in the documentation. (Just about all time-related objects do support it.) To translate automatically from one format to another, the translate object is useful. (The translate object works even when the transport is not running.)
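If you need to do the conversion yourself outside Max, the underlying arithmetic is simple; here is an illustrative JavaScript sketch that converts ticks to milliseconds, assuming the Max convention of 480 ticks per quarter note.

// Convert tempo-relative time (ticks) to absolute time (milliseconds).
// Assumes the Max convention of 480 ticks per quarter note; names are illustrative.
function ticksToMs(ticks, bpm) {
    var msPerQuarter = 60000 / bpm;       // one beat (quarter note) in ms
    return ticks * msPerQuarter / 480;    // scale by the fraction of a quarter note
}

// Examples at the default tempo of 120 bpm:
// ticksToMs(480, 120) --> 500   (a quarter note)
// ticksToMs(720, 120) --> 750   (a dotted quarter note)
// ticksToMs(240, 120) --> 250   (an eighth note)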

When the transport is on, its sense of time moves forward from the specified starting point, and it governs all timing objects that refer to it. If a timing object is using absolute millisecond time units, it will be oblivious to the transport. However, if you specify its timing in tempo-relative units, it will depend on (and be governed by) the transport. The transport can be turned on and off in Max, its current time can be changed, its tempo and time signature can be changed, and it can be queried for information about the current tempo, the time signature, and the current moment in (its own sense of) time.

The phasor~ object can also be synchronized to a transport. Since the phasor~ can be used to drive many other MSP objects, many audio processes (such as oscillator rates, looping, etc.) can be successfully governed by the transport for tempo-relative musical timing.

In addition to the Max documentation cited above, you can read more about tempo-relative timing in Max in the article “Tempo-relative timing” and you can try out the example Max patch it contains. To understand the Max transport object and its implications for rhythmic timing, you can study this example of “Tempo-relative timing with the transport object”, read the accompanying explanatory text, and also study the other examples to which links are provided in that text.

Timing in MIDI files

In a standard MIDI file, there’s information in the file header about “ticks per quarter note”, a.k.a. “parts per quarter” (or “PPQ”). For the purpose of this discussion, we’ll consider “beat” and “quarter note” to be synonymous, so you can think of a “tick” as a fraction of a beat. The PPQ is stated in the last word of information (the last two bytes) of the header chunk that appears at the beginning of the file. The PPQ could be a low number such as 24 or 96, which is often sufficient resolution for simple music, or it could be a larger number such as 480 for higher resolution, or even something like 500 or 1000 if one prefers to refer to time in milliseconds.

What the PPQ means in terms of absolute time depends on the designated tempo. By default, the time signature is 4/4 and the tempo is 120 beats per minute. That can be changed, however, by a “meta event” that specifies a different tempo. (You can read about the Set Tempo meta event message in the file format description document.) The tempo is expressed as a 24-bit number that designates microseconds per quarter-note. That’s kind of upside-down from the way we normally express tempo, but it has some advantages. So, for example, a tempo of 100 bpm would be 600000 microseconds per quarter note, so the MIDI meta event for expressing that would be FF 51 03 09 27 C0 (the last three bytes are the Hex for 600000). The meta event would be preceded by a delta time, just like any other MIDI message in the file, so a change of tempo can occur anywhere in the music.
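As a quick check of that arithmetic, here is a short JavaScript sketch (illustrative, not taken from any particular MIDI library) that converts a tempo in bpm into the three data bytes of a Set Tempo meta event.

// Convert a tempo in bpm to the three data bytes of a MIDI Set Tempo meta event
// (FF 51 03 followed by a 24-bit value in microseconds per quarter note).
// Illustrative sketch only.
function tempoBytes(bpm) {
    var usPerQuarter = Math.round(60000000 / bpm);  // microseconds per quarter note
    return [
        (usPerQuarter >> 16) & 0xFF,   // most significant byte
        (usPerQuarter >> 8) & 0xFF,
        usPerQuarter & 0xFF            // least significant byte
    ];
}

// tempoBytes(100) --> [0x09, 0x27, 0xC0]   (600,000 microseconds per quarter note)
// tempoBytes(120) --> [0x07, 0xA1, 0x20]   (500,000 microseconds per quarter note)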

Delta times are always expressed as a variable-length quantity, the format of which is explained in the document. For example, if the PPQ is 480 (standard in most MIDI sequencing software), a delta time of a dotted quarter note (720 ticks) would be expressed by the two bytes 85 50 (hexadecimal).
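The variable-length encoding itself is easy to compute; here is an illustrative JavaScript sketch.

// Encode a nonnegative integer as a MIDI variable-length quantity:
// seven bits per byte, most significant group first, high bit set on
// every byte except the last. (Illustrative sketch.)
function toVariableLength(value) {
    var bytes = [value & 0x7F];          // least significant 7 bits, high bit clear
    value >>= 7;
    while (value > 0) {
        bytes.unshift((value & 0x7F) | 0x80);  // earlier groups get the high bit set
        value >>= 7;
    }
    return bytes;
}

// toVariableLength(720) --> [0x85, 0x50]   (a dotted quarter note at 480 PPQ)
// toVariableLength(96)  --> [0x60]         (a quarter note at 96 PPQ)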

So, bearing all that in mind, there is a correspondence between delta times expressed in terms of ticks and note values as we think of them in human terms. The relationship depends on the PPQ specified in the header chunk. For example, if the PPQ is 96 (hex 60), then a middle C on MIDI channel 10 with a velocity of 127, lasting a dotted quarter note (1.5 beats, i.e. 144 ticks), would be expressed as
00 99 3C 7F // delta time 0 ticks, then note-on: 153 60 127
81 10 99 3C 00 // delta time 144 ticks (a two-byte variable-length quantity), then 153 60 0 (note-on with velocity 0, i.e. note-off)

It’s about time

Sound and music take place over time. Sonic phenomena change constantly over time, and therefore almost any consideration of them has to take time into account.

The word “rhythm” is used to refer to (sonic) events that serve to articulate, and thus make us aware of, how time passes. We become aware of intervals of time by measuring and comparing—either intuitively or with a time-measuring device such as a clock—the interval between events. We can detect patterns among those intervals, and we can recognize those patterns when they recur, even if with variations.

In everyday consideration of time, we discuss durations or intervals of time in terms of “absolute”, measurable clock time units such as hours, minutes, and seconds. When considering sound, we often need to consider even smaller units such as milliseconds (to accurately represent the rhythmic effect of events) or even microseconds (in discussions of audio sampling rate and the subsample accuracy needed for many digital audio signal processing operations).

Almost all programming languages provide a means of getting some numerical representation of the current time with millisecond or microsecond accuracy, such as the System.nanoTime() method in Java, the cpuclock object in Max, etc. By comparing one instant to another, you can measure time intervals or durations with great accuracy.

When considering music, we most commonly don’t use direct reference to clock time units. Instead we refer to a “beat” as the basic unit of time, and we establish a metronomic “tempo” that refers indirectly to clock time in terms of beats per minute (bpm). Thus, a tempo of 120 bpm means that 120 beats transpire in one minute, so the beat interval (that is, the time interval between beats) is 60 seconds/minute divided by 120 beats/minute, which is 0.5 seconds/beat. Humans don’t consciously do that mathematical calculation; we just use the designated tempo to establish a beat rate, and then we think of the music in terms of divisions or multiples of the beat.

In the music programming language Csound, time is expressed in terms of beats, and the default tempo is 60 bpm, so time is by default also expressed in seconds. A command to change the tempo changes the absolute timing of all subsequent events, while keeping the same rhythmic relationships, relative to the designated tempo. Referring to units of time in terms of tempo, beats, measures (groups of beats), and divisions (fractions of beats) can be called “tempo-relative” time, to distinguish it from “absolute” time. This represents two different ways of talking about the same time phenomena; each has its usefulness. In most music, it makes more sense to use tempo-relative time, since we’re quite used to conceptualizing and recognizing musical timing in tempo-relative terms, yet not so good at measuring time in absolute terms (without the aid of a timekeeping device).

Most audio/MIDI sequencing programs, such as Live, Reason, Logic, Pro Tools, Cubase, Garage Band, etc., are based on the idea of placing events on a timeline, and they allow the user to refer to time either in terms of absolute time or tempo-relative time. For music that has a beat, tempo-relative time is usually preferable and more common. The norm is to have a way of setting the metronomic tempo in bpm, a way of setting the time signature, and then referring to points in time in terms of measures, beats, and ticks (fractions of a beat) relative to a starting point such as “1.1.0”, meaning measure 1, beat 1, 0 ticks. In most sequencing programs “ticks” means fractions of a quarter note, also sometimes called “parts per quarter”, regardless of the time signature. (That is, “ticks” always refers to fractions of a quarter note, even if we think of the “beat” as a half note as in 2/2 time, or an eighth note as in 5/8 time, or a dotted quarter note as in 6/8 time.) 480 ticks per quarter note is standard in most programs, the default time signature is 4/4, and the default tempo is 120 bpm. At that tempo, 480 ticks per quarter note gives timing resolution at nearly the millisecond level (a quarter note lasts 500 ms, so each tick is about 1.04 ms). In Max, the measures.beats.ticks terminology is called bars.beats.units, but the idea is the same.

Managing MIDI pitchbend messages

When designing a synthesizer or a sampler, how should you interpret MIDI pitchbend messages so that they’ll have the desired effect on your sound? First let’s review a few truisms about MIDI pitchbend messages.

1. A pitchbend message consists of three bytes: the status byte (which says “I’m a pitchbend message,” and which also tells what MIDI channel the message is on), the least significant data byte (you can think of this as the fine resolution information, because it contains the 7 least significant bits of the bend value), and the most significant data byte (you can think of this as the coarse resolution information, because it contains the 7 most significant bits of the bend value).

2. Some devices ignore the least significant byte (LSB), simply setting it to 0, and use only the most significant byte (MSB). To do so means having only 128 gradations of bend information (values 0-127 in the MSB). In your synthesizer there’s really no reason to ignore the LSB. If it’s always 0, you’ll still have 128 equally spaced values, based on the MSB alone.

3. Remember that all MIDI data bytes have their first (most significant) bit clear (0), so it’s really only the other 7 bits that contain useful information. Thus each data byte has a useful range from 0 to 127. In the pitchbend message, we combine the two bytes (the LSB and the MSB) to make a single 14-bit value that has a range from 0 to 16,383. We do that by bit-shifting the MSB 7 bits to the left and combining that with the LSB using a bitwise OR operation (or by addition). So, for example, if we receive a MIDI message “224 120 95” that means “pitchbend on channel 1 with a coarse setting of 95 and a fine resolution of 120 (i.e., 120/128 of the way from 95 to 96)”. If we bit-shift 95 (binary 1011111) to the left by 7 bits we get 12,160 (binary 10111110000000), and if we then combine that with the LSB value 120 (binary 1111000) by a bitwise OR or by addition, we get 12,280 (binary 10111111111000).

4. The MIDI protocol specifies that a pitchbend value of 8192 (MSB of 64 and LSB of 0) means no bend. Thus, on the scale from 0 to 16,383, a value of 0 means maximum downward bend, 8,192 means no bend, and 16,383 means maximum upward bend. Almost all pitchbend wheels on MIDI controllers use a spring mechanism that has the dual function of a) providing tactile resistance feedback as one moves the wheel away from its centered position and b) snapping the wheel quickly back to its centered position when it’s not being manipulated.

5. The amount of alteration in pitch caused by the pitchbend value is determined by the receiving device (i.e., the synthesizer or sampler). A standard setting is variation by + or – 2 semitones. (For example, the note C could be bent as low as Bb or as high as D.) Most synthesizers provide some way (often buried rather deep in some submenu of its user interface) to change the range of pitchbend to be + or – some other number of semitones.

So, to manage the pitchbend data and use it to alter the pitch of a tone in a synthesizer we need to do the following steps.
1. Combine the MSB and LSB to get a 14-bit value.
2. Map that value (which will be in the range 0 to 16,383) to reside in the range -1 to 1.
3. Multiply that by the number of semitones in the ± bend range.
4. Divide that by 12 (the number of equal-tempered semitones in an octave) and use the result as the exponent of 2 to get the pitchbend factor (the value by which we will multiply the base frequency of the tone or the playback rate of the sample).

A pitchbend value of 8,192 (MSB 64 and LSB 0) will mean 0 bend, producing a pitchbend factor of 2^(0/12), which is 1; multiplying by that factor will cause no change in frequency. Using the example message from above, a pitchbend of 12,280 will be an upward bend of 4,088/8,191 ≈ 0.499. That is, 12,280 is 4,088 greater than 8,192, so it’s about 0.499 of the way from no bend (8,192) to maximum upward bend (16,383). Thus, if we assume a pitchbend range setting of ± 2 semitones, the amount of pitch bend would be about 0.998 semitones, so the frequency scaling factor will be 2^(0.998/12), which is about 1.059. You would multiply that factor by the fundamental frequency of the tone being produced by your synthesizer to get the instantaneous frequency of the note. Or, if you’re making a sampling synthesizer, you would use that factor to alter the desired playback rate of the sample.
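Here is a compact JavaScript sketch of those four steps, applied to the worked example above. (The function name and the default ±2-semitone bend range are just illustrative.)

// Compute a frequency scaling factor from the two data bytes of a MIDI
// pitchbend message. Names and the default bend range are illustrative.
function pitchbendFactor(lsb, msb, bendRange) {
    bendRange = (bendRange === undefined) ? 2 : bendRange;  // +/- semitones
    var bend = (msb << 7) | lsb;               // step 1: combine into a 14-bit value
    var normalized = (bend - 8192) / 8191;     // step 2: map roughly into the range -1 to 1
    var semitones = normalized * bendRange;    // step 3: scale by the bend range
    return Math.pow(2, semitones / 12);        // step 4: convert semitones to a frequency ratio
}

// pitchbendFactor(0, 64)   --> 1            (no bend)
// pitchbendFactor(120, 95) --> about 1.059  (the example above: ~0.998 semitones upward)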

See a demonstration of this process in the provided example “Using MIDI pitchbend data in MSP”. The process is also demonstrated in MSP Tutorial 18: “Mapping MIDI to MSP”.

Music 147 lecture notes from Thursday April 17, 2014

1. We took a look at the project assignments as done by Brian MacIntosh, Jane Pham, Jared Leung, and Vanessa Yau. Brian’s solution played four simultaneous pseudo-random algorithmically-composed sinusoids. We discussed ways to avoid clicks when changing the frequency of a tone. Jane’s solution plotted the sum of two sinusoids. We identified a bug that was causing misrepresentation of the waveform. Jared’s program plotted and played a sinusoid and provided a slider for the user to select the desired frequency. We discussed interface choices and possible improvements, and discussed ways to schedule sound events so as not to interfere with simultaneous user events. Vanessa’s solution plotted the sum of three sinusoids. We discussed monitoring the sum of added sounds in the computer to avoid clipping, and we observed the difficulty of plotting a realtime sound stream when the fundamental period of the sound wave doesn’t correspond to the dimensions of the plot and the drawing rate of the program.

2. We discussed the concept of a “control function”, a function or shape that is not heard directly but is perceptible because of the way it is used to effect change in a sound event.

For example, in the formula
f[n] = Acos(2π(ƒn/R+φ))
what if A, instead of being a constant value, were a constantly changing value obtained from some other time-varying function of n? For example, it could be a linear function increasing from 0 to 1 (which would be a fade-in). Or it could be the output of a second oscillator function at a sub-audio frequency, which would result in a tremolo effect.

Control functions are important for giving shape and interest to an otherwise static sound, and also can be used to give shape to the musical content of the sound.

We built an example in Max in which we used a low-frequency oscillator (LFO) to modulate the frequency input of an audio-frequency oscillator, creating a vibrato effect. We discussed how the rate and depth of the frequency modulation affect the sound.
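The same idea can be expressed in code; here is an illustrative JavaScript sketch that computes one second of a tone with vibrato, using a low-frequency oscillator as a control function for the frequency of an audio-rate oscillator. (The sampling rate, frequencies, and variable names are just example values.)

// Compute samples of a tone with vibrato: a low-frequency oscillator (LFO)
// serves as a control function modulating the frequency of an audio oscillator.
// Illustrative sketch; all names and values are just examples.
var R = 44100;          // sampling rate in Hz
var baseFreq = 440;     // center frequency of the audio oscillator, in Hz
var vibRate = 6;        // vibrato rate in Hz (the LFO frequency)
var vibDepth = 10;      // vibrato depth in Hz (amount of frequency deviation)
var amplitude = 0.5;

var samples = new Array(R);  // one second of sound
var phase = 0;               // running phase of the audio oscillator, in radians

for (var n = 0; n < R; n++) {
    // the control function: instantaneous frequency varies +/- vibDepth around baseFreq
    var freq = baseFreq + vibDepth * Math.sin(2 * Math.PI * vibRate * n / R);
    samples[n] = amplitude * Math.cos(phase);
    phase += 2 * Math.PI * freq / R;   // advance the phase by the current frequency
}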

3. We went over the readings, and the examples of linear interpolation. We built an example in Max that uses the line~ object to create a breakpoint line segment function controlling the amplitude of a sound. In effect, we made four line segments that describe an attack-decay-sustain-release (ADSR) amplitude envelope emulating a note played by an instrument. We also used line~ to make a linear frequency glissando.

4. We discussed the math of linear interpolation.

The purpose of linear interpolation is to find intermediate values that lie on a straight line between two known values.

The underlying concept is:
To find intermediate y values that theoretically lie in between the y values at two successive x indices: as x progresses from one index to the next, the intermediate y values can be estimated to lie on a straight line between the two known y values.

In other words:
If the y value at point x1 is y1, and the y value at point x2 is y2, then as x progresses linearly from x1 to x2, y progresses linearly from y1 to y2.

In other words:
The current value of y is to the range of possible y values as the current value of x is to the range of possible x values.

In other words:
(y-y1)/(y2-y1) = (x-x1)/(x2-x1)
and
(y-y1)/(x-x1) = (y2-y1)/(x2-x1)

The general linear mapping equation to map one range of x indices to a corresponding range of y values is:
y = ((x-x1)/(x2-x1))*(y2-y1)+y1
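That equation translates directly into a utility function; here is an illustrative JavaScript version (the function name is just an example).

// Map a value x from the range x1..x2 to the corresponding value in the range y1..y2.
// A direct translation of the equation above; the function name is just illustrative.
function linearMap(x, x1, x2, y1, y2) {
    return ((x - x1) / (x2 - x1)) * (y2 - y1) + y1;
}

// linearMap(0.5, 0, 1, 100, 200) --> 150
// linearMap(64, 0, 127, 0, 1)    --> about 0.504  (e.g., scaling a MIDI value)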

This is applicable in discrete sampling terms because when we want to interpolate linearly from one value to another (say, from y1 to y2) over a series of samples (say, from na to nb), that implies that there will be b-a steps (increments) to get from y1 in sample na to y2 in sample nb.

So at each successive sample we would increase the value of y by (1/(b-a))*(y2-y1).
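In code, that incremental approach might look like this (an illustrative sketch; conceptually this is the kind of ramp an object such as line~ generates).

// Fill an array with a linear ramp from y1 to y2 over the samples a through b,
// by adding a constant increment at each successive sample. (Illustrative sketch.)
function ramp(buffer, a, b, y1, y2) {
    var increment = (1 / (b - a)) * (y2 - y1);  // per-sample increase, as above
    var y = y1;
    for (var n = a; n <= b; n++) {
        buffer[n] = y;
        y += increment;
    }
}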

This interpolation algorithm is used for achieving a weighted balance of two signals.
Suppose we want a blend (mix) of two sounds, and we would like to be able to specify the balance between the two as a value from 0 to 1, where a balance value of 0 means we get only the first sound, a balance value of 1 means we get only the second sound, and 0.5 means we get an equal mix of the two sounds.

One way to calculate this is
y[n] = x1[n](1-balance)+x2[n](balance)
where x1 and x2 are the two signal values and balance is the weighting value described above.

Another way to calculate the same thing (slightly more efficiently) is
y[n] = x1[n]+balance(x2[n]-x1[n])
This second way involves one fewer multiplication than the first way.
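For comparison, here are both versions as JavaScript functions (names are illustrative).

// Two equivalent ways to compute a weighted balance of two signal values.
// balance = 0 gives only the first signal, 1 gives only the second. (Illustrative.)
function mix1(x1, x2, balance) {
    return x1 * (1 - balance) + x2 * balance;      // two multiplications
}
function mix2(x1, x2, balance) {
    return x1 + balance * (x2 - x1);               // one multiplication
}

// mix1(0.8, 0.2, 0.25) --> 0.65, and mix2(0.8, 0.2, 0.25) --> 0.65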

When accessing a buffer of sample values or a ring buffer delay line or a wavetable array, for values of the index n that would fall between integer indices
(i.e. where n would have a fractional part) we use the samples on either side of n—we’ll call them n0 and n1—and take a weighted average of the two.

One way to calculate this is
x[n] = x[n1](fraction)+x[n0](1-fraction)
where fraction is the fractional part of n.

Another way to calculate the same thing (slightly more efficiently) is
x[n] = x[n0]+fraction(x[n1]-x[n0])
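The same idea, applied to reading an array at a fractional index, might look like this (illustrative sketch, with no bounds checking).

// Read a value from an array at a fractional index, using linear interpolation
// between the two neighboring samples. (Illustrative sketch; no bounds checking.)
function readInterpolated(table, index) {
    var n0 = Math.floor(index);        // sample just below the requested index
    var n1 = n0 + 1;                   // sample just above
    var fraction = index - n0;         // fractional part of the index
    return table[n0] + fraction * (table[n1] - table[n0]);
}

// readInterpolated([0, 10, 20, 30], 1.25) --> 12.5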

5. We showed how the line~ object takes care of all that calculation for you in MSP, and allows you just to specify a destination value and a transition time to get there (or you can provide a series of such pairs: value and time).

Fade-ins and fade-outs, and multistep breakpoint line-segment functions (such as ADSR envelope shapes) can be easily generated by line~ (or adsr~), and can be drawn in the function object which then provides instructions to line~. You can apply these ideas to frequency as well as amplitude.

6. We discussed the logarithmic nature of our perceptions, as addressed in the Weber-Fechner law and in Stevens’ power law. I gave examples in both amplitude (loudness perception) and frequency (pitch perception). In an upcoming class I’ll explain tuning systems and equal temperament.

Lecture topics, 4/10/14

[The following are the professor’s preparatory notes for Thursday April 10. They’re not actually useful as “lecture notes”; they don’t really tell you anything. But I’m posting them here just as a reminder of what we talked about (and will talk about).]


Topic 0. Upcoming Events: The following three concerts are highly recommended to see new approaches to the use of computers in live music performance.
Lava Glass – MFA recital by Martim Galvao
Shackle – Anne La Berge, flute, and Deckard, laptop and electronics
Interactive Instrumentation – ICIT faculty concert of new works for instruments and computers

1. Review the previous four blog posts on finding and opening files.

2. High-level and low-level programming. Max is a common language (and is a kind of level playing field) that lets us confront issues quickly, easily, and directly, and hear the results immediately, but learning Max is not really the goal. It’s up to each student to transfer the things that we do in Max into the programming situation that’s most meaningful to them.

3. Categorizing the basic tasks of computer audio/music programming: computer music applications deal with both non-realtime and realtime (i.e., untimed and timed) tasks. File i/o and stream i/o are untimed and timed, respectively. MIDI and audio (“midi” and “sampled” in the Java Sound API) involve different timings and structural levels (i.e., music is organized audio, and is dealt with as a higher level of description). Most activities are, behind the scenes, untimed (as fast as possible) manipulation of arrays of integers or floats.

4. Scheduling. The Max queue and realtime scheduling paradigm. How MSP works. How the two (actually three) “threads” are related: queue, scheduler, and audio. See Joshua Clayton’s article.

5. Take a look at Audacity. What things does it do, and what exactly does it have to do in order to accomplish those things? Discuss audio file I/O, data management, screen drawing, functionality, etc. Generate a tone.

6. Synthesize a waveform.

A sinusoid as the basic vibration component of sound:
Analog way: f(t) = Asin(2πft+φ)
Digital way: f(n) = Asin(2πfn/R+φ)
Puckette way: x[n] = a cos(ωn+φ)
Angular frequency: ω = 2πƒ/R
Dobrian way: y[n] = Acos(2π(ƒn/R+φ))

Additive synthesis: y[n] = a0 + a1cos(ωn+φ1) + a2cos(2ωn+φ2) + …

6a. Do it in MSP. (Too easy, too high-level.)

6b. You can DIY in gen~.

6c. Print out a cycle of a sinusoid in super-simple Java.

6d. Plot a cycle of a sinusoid in simple Java/Swing.

7. Assignment. Do the reading that was assigned for today if you haven’t done it yet. (Re-read it anyway, to reinforce your understanding of it.) Q&A participation is required at least once a week. Write a program that plots a waveform, or even plays a waveform if you’re able. Make a periodic but non-sinusoidal waveform via additive synthesis. Go beyond this basic requirement if you want to and can, of course. If you can’t program, build it in MSP and study the plot~ object to see how you can best plot it.
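For reference, here is a bare-bones sketch in JavaScript (rather than the Java mentioned in 6c) of the kind of computation the assignment asks for: one cycle of a periodic but non-sinusoidal waveform built by additive synthesis. The number of samples and the harmonic amplitudes are just example values.

// Compute one cycle of a periodic but non-sinusoidal waveform by additive
// synthesis: a fundamental plus two harmonics. Values are just examples.
var N = 64;                          // number of samples in one cycle
var amplitudes = [1, 0.5, 0.33];     // a1, a2, a3 for harmonics 1, 2, 3
var cycle = new Array(N);

for (var n = 0; n < N; n++) {
    var sum = 0;
    for (var h = 0; h < amplitudes.length; h++) {
        // harmonic h+1 completes h+1 cycles over the N samples of one period
        sum += amplitudes[h] * Math.cos(2 * Math.PI * (h + 1) * n / N);
    }
    cycle[n] = sum;
    console.log(cycle[n].toFixed(3));   // in Max's js object, use post() instead
}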

Future stuff:

Smoothing transitions with interpolation
— Linear interpolation formula
— Exponential interpolation

The logarithmic nature of human perception
— Subjective experience vs. empirical measurement
— Additive vs. multiplicative relationships (differences vs. ratios)
Weber-Fechner law and Stevens’ power law

The decibel scale: dB = 20log10(A/A0)
and its converse: A = 10^(dB/20)

The harmonic series
— Pythagorean tuning
— Temperament

Equal temperament: ƒ(p) = 440(2^((p-69)/12))
and its converse: p = 69+12log2(ƒ/440)
and its generalized form: ƒ(n) = ƒ0(o^(n/d))

Wavetable synthesis

Basics of opening a file

In class we discussed potential complications when loading external data files into your program automatically. We were talking about audio and image files, but it could be any file.

One question is what to do if the file cannot be found or opened by your program. If you do detect a malfunction, you have to decide what to do about it: how to notify the user and how to recover from (or exit from) the error. The standard programmer’s way to deal with that is to check the value returned by the file-opening function, to ensure that it doesn’t contain an error warning (or an “exception”). If an error is detected, most often the program provides a warning to the user that the desired activity failed—providing enough information to actually be informative to the user. Then, if the program can’t proceed from there, it should reset to a place where the user can try again in a different way, or simply exit that function so that the user can try something completely different, or in extreme cases quit the program entirely. In the case of being unable to find a file, the proper response might be to open a file dialog window that will permit the user to browse and find the file manually.
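Outside of Max, the same check-and-recover pattern might look like this in JavaScript (a Node.js-style sketch using the fs module; askUserForFile is a hypothetical fallback, such as opening a file dialog).

// A sketch of the check-and-recover pattern for opening a data file.
// Node.js-style; askUserForFile() is a hypothetical fallback (e.g., a file dialog).
var fs = require('fs');

function loadDataFile(path) {
    try {
        return fs.readFileSync(path);          // throws if the file can't be found or opened
    } catch (err) {
        // notify the user, with enough information to be useful
        console.error('Could not open "' + path + '": ' + err.message);
        // recover: let the user locate the file manually, or give up gracefully
        return askUserForFile();               // hypothetical; might return null
    }
}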

First let’s address the problem of the program being unable to find a file. That might happen for one of two reasons. The file might not exist; the name was misspelled or the file is missing entirely. Or, the file might exist, but the program is not looking for it in the right place in the file system hierarchy. In Max, when the sfplay~ object can’t find the file you’ve tried to open or preload, it posts an error message in the Max window, “sfplay~: cant find file <filename>”, but does nothing more than that. If you want to handle that error in your patch somehow, you can use the error object, which sends error messages out its outlet, and you can use that message however you want (show it to the user, trigger other actions, etc.).

The Max application keeps a list of preferred search directories, which you can view and edit via the File Preferences… command in the Options menu. You can read about Max’s file search path in the documentation. However, those file search directories are stored in Max’s preferences for each user on that particular computer, so they won’t be guaranteed to be in effect on another computer. Max provides several objects to help you query the search path and construct a full pathname. Those objects are filepath to set or query the current path, absolutepath and relativepath to convert pathnames, and thispatcher to query the directory of the patch itself. There are also objects that help you construct messages, such as join, append, and prepend (which combine components of a message with spaces in between items), combine (which combines components with no space between items), and sprintf (which combines items in a C-like way).

One good approach can be to accompany your program with a folder full of the needed data files and name the folder something like “sounds” or “data” or whatever it contains. Then, as long as that folder is in the same folder as your Max patch, and nobody renames it, your program can find the files it needs regardless of what computer it’s on. You would query the thispatcher object by sending it a ‘path’ message, then combine that with the foldername and filename you want to use. You can make that into a subpatcher or abstraction that you use whenever you want to open a file, and you can then prepend the word ‘open’ or ‘preload’ or whatever selector you want, to make it into a message that will open the file successfully. I’ve posted an example of an abstraction that implements this path-construction technique and another example that demonstrates the use of the “providepath” abstraction in a main patch.

How to send a Max patch by email

You can send a Max patch (or any selected objects from it) to someone else as plain text in an email (or a post like this) by following these steps.

1. In Max, while in Edit mode, select the objects you want to include (or type Command-A to Select All).

2. Choose ‘Copy Compressed’ from the Edit menu.

3. Go to the program where you want to include your patch and choose ‘Paste’ from the Edit menu.

The text that you paste in will look something like the text shown at the bottom of this post. When you receive such text from someone else, to turn it back into a Max patch, follow these steps.

1. Copy the weird-looking text, including the lines that say

———-begin_max5_patcher———
and
———–end_max5_patcher———–

but not the lines that contain the HTML tags ‘<pre><code>’ and ‘</code></pre>’.

2. Go to Max and choose ‘New From Clipboard’ from the File menu.

Try it yourself. This will be a common way for you to send and receive Max patches.

<pre><code>
———-begin_max5_patcher———-
560.3oc0V0taaBCE82vSgk2eyRMNfIr8q8BrWfopIGvP7DXihc1RaU6y97GP
VZKPJsQSsHA1b8E6y4dO9ZtKL.tQdfoffu.9AHH3tvf.mIqgft2CfMzC40Tk
yMXtrogIzvE9wzrCZmcgbIPVBn6K3RP9VpPvpU8d0R04a4hpetikq8qFFgVh
V.hHqcM3DaC1XCbc2GwKbSrbyu9LNtelJkBsf1vbC8scbZc+Hh8MbQMS6fYz
+LJ2q6shNYRT7acSRDdIxZ89vP6iEuwvfrkI.TfRtWT.J40rIBAqhVlYuHF9
ihcgADdxv.9iRXvKCjhqjkkSDAhhczFicMoqlj7n2ajmcaAM+gonGwqwW6nW
bR+ygnWDYPRfGlDuBz1vTJZEaPE6Dbf3ElXGSVk54yHTXtYngIWzHYnNydO0
2zx7.DBAWeAUtJMcm9JkV1ddca2tVeVdLca1GqRWlxVs0zaLc5pkelhXYDK4
W6E5IwSEIReuEIzxppIIW+FX7wL73riLCNDMlVlaxKuJ4rf8GCHdtZtzlLe.
feAjLwc1Sp4PozTbR53LM4hrMe0L2lq3UB6BbZuMTQE7HDUze267m9tyQO6s
cM2XWhLiXHIxaA90GGrcyDrlKd5uD4hZV6ONCXNkeWd+R1EY.QGwSASo4Bpl
KEm3isP+INskWTvDmJra3EsRiPnCCiHGlCjPuDHg9uAI64DmESIyCRl+cHYg
qtrsgjc7s2HVIWdn9DIW38g+Eb7fmdI
———–end_max5_patcher———–
</code></pre>