Abstraction for quad panning using x,y coordinates

Image

There are several standard speaker configurations for two-dimensional surround sound panning, such as quadraphonic (four speakers in a square or rectangular placement) and the 5.1 or 7.1 THX cinema surround specifications. There are also sound distribution encoding techniques that work for a variety of speaker configurations, such as Ambisonics, and there are processing techniques such as head-related transfer function (HRTF) filtering.

This and the next few examples will show simple algorithms for intensity panning with a rectangular quadraphonic speaker configuration.

One way to implement two-dimensional panning is to specify the sound’s virtual location as an x,y coordinate point on a rectangular plane representing the floor of the room, with a speaker at each corner of the plane. The x value represents the left-right panning (0 to 1, going from left to right) and the y value represents the front-back panning (0 to 1, going from front to back). For some purposes, simple linear panning might suffice (or even be preferable). I usually prefer to use a constant-intensity panning algorithm, so I use the pan~ abstraction to calculate the amplitudes that provide the left-to-right panning illusion, and then I use two other pan~ objects to pan each of those gains (the left and right amplitudes) from front to back.

This patch is an abstraction that enacts that plan. (It requires that the pan~ abstraction be somewhere in the Max file search path.) You can use it to pan any signal to four speakers in a rectangular quadraphonic layout. It takes a signal in its left inlet, an x coordinate in its second inlet, and a y coordinate in its right inlet. Similarly to the pan~ abstraction, it allows the panning coordinates to be specified as initial arguments, as floats in the 2nd and 3rd inlets, or as signals.
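The two-stage math can be sketched outside of Max as well. Here is a minimal Python illustration of the same idea; the function names are mine, not Max's, and the cosine/sine pan law is the usual constant-intensity formula (which is what I assume pan~ computes internally).

```python
import math

def pan(pos):
    """Constant-intensity pan law: a pair of gains for pos in [0, 1].

    At pos=0 all energy goes to the first output, at pos=1 to the
    second; the two squared gains always sum to 1.
    """
    theta = pos * math.pi / 2
    return math.cos(theta), math.sin(theta)

def quad_pan(x, y):
    """Two-stage quad panning: returns gains (LF, RF, LB, RB).

    Stage 1 splits the signal left-right by x; stage 2 pans each side
    front-back by y, just as the two extra pan~ objects do in the patch.
    """
    left, right = pan(x)       # left-right amplitudes
    front, back = pan(y)       # front-back amplitudes
    return (left * front, right * front, left * back, right * back)

# The four squared gains always sum to 1, so total intensity is constant.
gains = quad_pan(0.5, 0.5)    # dead center of the room
print([round(g, 3) for g in gains])    # [0.5, 0.5, 0.5, 0.5]
```

Because the product of the two stages preserves total power, the sound's apparent loudness stays constant anywhere on the plane.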

Abstraction for constant-intensity stereo panning

Image

For the basics of intensity panning for stereo sound in MSP, you might want to review Example 12 on linear amplitude panning, and Example 13 and Example 14 on constant power panning, from the 2012 Interactive Arts Programming class.

This patch is a very useful abstraction for constant-intensity stereo panning of sound. It’s identical to Example 43 from the 2009 Interactive Arts Programming class, so you can read a description of it there.

Notice how this abstraction allows the user to specify the panning position in any one of three ways: as a typed-in value in the parent patch to set an initial value, as a float to change instantly to a new position, or as a signal to change continuously and smoothly from one value to another.
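To see why constant-intensity panning is preferable to simple linear panning, compare the total power of the two laws at a few positions. This is a hypothetical Python comparison of my own, not code from the patch:

```python
import math

def linear_pan(pos):
    """Linear pan law: the two gains sum to 1."""
    return 1.0 - pos, pos

def constant_intensity_pan(pos):
    """Equal-power pan law: the two squared gains sum to 1."""
    theta = pos * math.pi / 2
    return math.cos(theta), math.sin(theta)

for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
    ll, lr = linear_pan(pos)
    cl, cr = constant_intensity_pan(pos)
    print(f"pos={pos:4.2f}  linear power={ll*ll + lr*lr:.2f}"
          f"  constant power={cl*cl + cr*cr:.2f}")
```

With the linear law the total power dips to 0.5 (a 3 dB drop) at the center position, which is audible as a loudness dip as the sound passes through the middle; the equal-power law keeps total power at 1 everywhere.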

Save this patch with the name “pan~”. It’s needed for the next two examples.

Abstraction for crossfading delay times of a remote tapin~ object

Image

If we want to use the delay crossfading technique shown in the above example for multiple different delays of the same sound, the simplest solution is just to make multiple copies of that abstraction and send the same audio signal to each one. However, that’s a bit inefficient in terms of memory usage, because each subpatch would have its own tapin~ object, each of which would contain the same audio data.

The way that tapin~ and tapout~ communicate is that when audio is turned on, tapin~ sends out a ‘tapconnect’ message. When tapout~ receives a ‘tapconnect’ message, it refers to the memory in the tapin~ object above it. So we could modify our delay crossfade abstraction so that, instead of receiving an audio signal in its left inlet, it receives the message ‘tapconnect’. That way, multiple copies of the abstraction could all refer to the same tapin~ object in their parent patch.

So this example shows a modification of the delay crossfading abstraction, in which the tapin~ object has been removed, and in which the left inlet expects a ‘tapconnect’ message instead of an audio signal. It will refer to a tapin~ object in the parent patch. You can save this abstraction with a different name, such as tapoutxfade~.

Abstraction for crossfading between delay times

Image

This example shows my preferred method for changing between different fixed delay times. It’s an abstraction that I regularly use when I want a simple delay, and want the ability to change the delay time with no clicks or pitch changes. It’s designed as an abstraction so that it can be used as an object (a subpatch) within any other patch.

It works by using two different delays of the same buffered sound, and crossfading between the two. The signal we want to delay comes in the left inlet and is saved in a tapin~ ring buffer that’s connected to a tapout~ object with two delay taps (but we initially hear only the left output of tapout~, because the mix~ subpatch has the other signal of tapout~ faded down to 0). When a new delay time comes in the second inlet, it’s directed to the inlet of tapout~ that’s associated with the delay tap that’s currently faded to 0, and then the patch fades up that signal while fading down the other. The third inlet allows for changing the crossfade time, so we can have quite sudden (nearly instantaneous, such as 10 ms) changes of delay time that are nevertheless click-free, or we can have slower crossfades between delay times, even lasting several seconds (in which case we’ll actually hear both delayed signals while we’re crossfading between the two).

By flipping back and forth between the two outlets of the gate object, and also fading back and forth between the two outputs of tapout~, we’re always changing the delay time on the tap that is currently silenced. Try mentally stepping through the sequence of messages to understand exactly how this is accomplished.
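The scheme can be summarized in code. The following is a minimal Python sketch of the two-tap crossfade idea under my own naming and a simple linear fade; the actual patch does this with tapin~, tapout~, gate, and its mix~ subpatch.

```python
class CrossfadeDelay:
    """Sketch of the two-tap crossfade technique (names are my own).

    One ring buffer is read by two delay taps. A new delay time always
    goes to the currently silent tap, which is then faded up while the
    audible tap fades down, so the tap being changed is never heard.
    """

    def __init__(self, max_delay, delay, xfade_samples):
        self.buf = [0.0] * max_delay   # the ring buffer (cf. tapin~)
        self.write = 0
        self.delays = [delay, delay]   # delay in samples for taps 0 and 1
        self.active = 0                # which tap is (fading) up
        self.fade = 1.0                # 1.0 = active tap fully up
        self.step = 1.0 / xfade_samples

    def set_delay(self, new_delay):
        """Route the new delay time to the silent tap; start the fade."""
        self.active ^= 1
        self.delays[self.active] = new_delay
        self.fade = 0.0

    def process(self, x):
        """Write one input sample, return one crossfaded output sample."""
        n = len(self.buf)
        self.buf[self.write] = x
        taps = [self.buf[(self.write - d) % n] for d in self.delays]
        self.write = (self.write + 1) % n
        self.fade = min(1.0, self.fade + self.step)
        up, down = self.fade, 1.0 - self.fade   # linear crossfade gains
        return taps[self.active] * up + taps[self.active ^ 1] * down
```

Each call to set_delay flips which tap is "active" before writing the new delay time, which is exactly the role the gate object plays in the patch.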

Notice that this abstraction has been designed with the ability to accept typed-in arguments, so that its characteristics can be specified in its parent patch. The first argument in the parent patch will replace the #1 in this abstraction, so that the user can (indeed, must) specify the size of the tapin~ buffer. The second argument in the parent patch replaces the #2 to set the initial delay time of the object, and the third argument replaces the #3 to set the crossfade time that will be used for subsequent delay time changes.

N.B. There is actually a “screw case”, a way that this patch can fail to do its job correctly. If a new delay time comes in before the previous crossfade has finished, the tap that’s being changed will still be audible, and we might hear a click. I haven’t bothered to protect against this because I expect the user to know not to set a crossfade time that’s longer than the expected minimum interval between changes of delay time. If we wanted to make this patch more “robust” (invulnerable to the screw case), we could refuse to accept a new delay time (or hold onto any new delay time) until the crossfade of the previous one has finished.

You can save this abstraction with a name such as delayxfade~ and try it out. (I try to use the convention of putting a ~ at the end of audio processing abstractions to remind myself that the abstraction involves MSP.)

Ducking when changing delay time

Image

One possible solution to the problem of clicks occurring when the delay time is changed is to fade the amplitude of the delayed sound to 0 just before changing the delay time, then fade back up immediately after the change. This does avoid clicks, but it causes an audible momentary break or dip in the sound. This patch shows one way you could implement such momentary “ducking” of the amplitude. (The same idea with the delay~ object is shown in an example from the previous quarter’s class.)
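The gain schedule for one such duck looks like this. This is a hypothetical Python sketch of my own (the patch presumably generates the equivalent ramps with line~), just to make the shape of the envelope concrete:

```python
def duck_envelope(fade_samples, hold_samples):
    """Gain schedule for one duck: ramp down, hold at zero, ramp up.

    The delay time would be changed during the hold, while the
    delayed signal is inaudible.
    """
    down = [1.0 - i / fade_samples for i in range(1, fade_samples + 1)]
    hold = [0.0] * hold_samples      # <-- change the delay time here
    up = [i / fade_samples for i in range(1, fade_samples + 1)]
    return down + hold + up

print(duck_envelope(4, 2))
```

The momentary dip in the sound is simply this envelope made audible; the shorter the fades, the less noticeable the dip, but the fades must still be long enough to avoid a click of their own.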

Continuous change of delay time causes a pitch shift

Image

The way we commonly avoid clicks when changing the amplitude of a sound is to interpolate smoothly sample-by-sample from one gain factor to another, using an object such as line~. Does that same technique work well for making a smooth change from one delay time to another? As it turns out, that’s not the best way to get a seamless unnoticeable change from one delay time to another, because changing the delay time gradually will actually cause a pitch shift in the sound.

This patch demonstrates that fact. When you provide a new delay time, it interpolates to the new value quickly; you’ll hear that as a quick swoop in pitch. You can get different types of swoop with different interpolation times, but this sort of gradual change in delay time always causes some amount of audible pitch change. Of course there are ways to use this pitch change for desired effects such as flanging, but what we seek here is a way to get from one fixed delay time to another without any extraneous audible artifacts.
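The size of that pitch change is easy to quantify. If the delay time ramps linearly, the read point moves through the buffer at a rate other than normal speed, and that speed change is heard directly as transposition. A small Python sketch of the arithmetic (my own formulation, not code from the patch):

```python
def pitch_ratio(d0, d1, ramp_time):
    """Transposition ratio heard while delay time glides from d0 to d1.

    All times in seconds. Ramping the delay time linearly moves the
    read point through the buffer at (1 - slope) times normal speed,
    where slope = (d1 - d0) / ramp_time.
    """
    return 1.0 - (d1 - d0) / ramp_time

# e.g. gliding from 100 ms to 200 ms of delay over 1 second:
print(pitch_ratio(0.100, 0.200, 1.0))   # 0.9 -- roughly 1.8 semitones flat
```

Increasing the delay time lowers the pitch (ratio below 1), decreasing it raises the pitch, and the faster the ramp, the greater the transposition, which is why quick interpolations are heard as swoops.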

Change of delay time may cause clicks

Image

The main ways to delay a sound in Max are demonstrated in the examples from the previous quarter that show the delay~ object and the tapin~ and tapout~ objects. You might want to take a look at those examples and read the associated text to review how they work, and what their pros and cons are.

Whenever you change the delay time, you risk causing a click by creating a discontinuity in the output waveform. (The amplitude at the new location in the ring buffer is likely to be different from the amplitude at the old location, so the output waveform will leap instantly from the old amplitude to the new amplitude.) This patch allows you to try that, to confirm that clicks can occur. You might sometimes get lucky and change the delay time at a moment of silence, thus avoiding a click, but the odds are that a click will occur. So if you plan to change the delay time while listening, you probably want to solve that problem. The next few examples address that topic.
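The discontinuity is easy to demonstrate numerically. In this hypothetical Python sketch (my own, using arbitrary example values), jumping the delay time moves the read point instantly, so the output leaps between two unrelated points on the stored waveform:

```python
import math

# A sine wave stored in a delay buffer, at an arbitrary sample rate.
sr = 1000
signal = [math.sin(2 * math.pi * 5 * n / sr) for n in range(sr)]

old_delay, new_delay = 100, 137    # delay times in samples
n = 500                            # the moment of the delay change
step = abs(signal[n - new_delay] - signal[n - old_delay])
print(f"instantaneous jump in the output: {step:.2f}")
# A step this large in a single sample is heard as a click.
```

Unless the two read points happen to land on nearly equal amplitudes (or on silence), the output jumps by some arbitrary amount in one sample, which is precisely the broadband click the next examples set out to eliminate.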