Adjust the pitch of a comb filter


This patch demonstrates how to adjust the delay time of a comb filter so that the filter’s resonance corresponds to a desired fundamental pitch.

The filtering formula used by the comb~ object is

y[n] = a x[n] + b x[n-(DR/1000)] + c y[n-(DR/1000)]

where R is the sampling rate, D is the delay time in milliseconds, x[n] is the current input sample, y[n] is the current output sample, and a, b, and c are gain scaling factors for the direct input, the delayed input, and the delayed (fed-back) output, respectively.
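To make the formula concrete, here is a minimal sketch of that difference equation in Python (an illustration only, not the actual comb~ implementation; the function and parameter names are hypothetical):

# Difference equation of comb~: y[n] = a*x[n] + b*x[n-k] + c*y[n-k],
# where k = DR/1000 is the delay time expressed in samples.
def comb(x, R=44100.0, D=10.0, a=0.5, b=0.5, c=0.9):
    k = int(round(D * R / 1000.0))        # delay in samples
    y = [0.0] * len(x)
    for n in range(len(x)):
        xd = x[n - k] if n >= k else 0.0  # delayed input sample
        yd = y[n - k] if n >= k else 0.0  # delayed (fed-back) output sample
        y[n] = a * x[n] + b * xd + c * yd
    return y

For example, at R = 44100 Hz a delay of D = 10 ms corresponds to k = 441 samples.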

That formula can be shown diagrammatically like this.

In the patch we convert a MIDI pitch number into a frequency in Hertz, then use that frequency to calculate the correct delay time for the filter: a delay of 1000/f milliseconds places the filter’s resonances at f Hz and its harmonics. Using delay feedback (a past y[n] value) with a feedback gain approaching 1 creates strong resonance at those frequencies, yielding an inverted comb response pattern sort of like this,

resulting in a strong imposition of the fundamental pitch and a buzzy timbre.
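The pitch-to-delay-time conversion itself is simple arithmetic. Here is a minimal sketch (assuming 12-tone equal temperament with A440 as MIDI note 69, which is what Max’s mtof object computes; the function name is hypothetical):

# Convert a MIDI pitch number to the delay time (in ms) that makes
# a feedback comb filter resonate at that pitch.
def pitch_to_delay_ms(pitch):
    f = 440.0 * 2.0 ** ((pitch - 69) / 12.0)  # MIDI pitch -> frequency in Hz
    return 1000.0 / f                         # one period, in milliseconds

pitch_to_delay_ms(69)  # A440 -> about 2.273 ms
pitch_to_delay_ms(36)  # C2, about 65.41 Hz -> about 15.289 ms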


Adjust pitches according to a pitch class set


One potential use of the “inlist” abstraction is to compare incoming pitches to a pitch class set. This patch uses a % 12 object to find the pitch class of an incoming pitch, then compares it with the members of a prescribed pitch class set. If it belongs to the pitch class set, it gets passed on unchanged; if it doesn’t belong to the pitch class set, it gets pushed up one semitone and tested again.

Note that this patch does point to a potential bug (a so-called “screw case”). If the pitch class set is null (the bag inside the inlist abstraction is empty), any incoming pitch would set this patch into an infinite loop and cause a stack overflow. However, we’re safe in this particular example because we have pre-loaded the pitch class set and there’s no way provided in the program to delete those numbers.
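In pseudocode terms, the patch’s logic looks something like this minimal Python sketch (the names are hypothetical; the guard clause shows one way to defend against the empty-set screw case described above):

# Raise a pitch by semitones until its pitch class belongs to the set.
PITCH_CLASS_SET = {0, 2, 4, 5, 7, 9, 11}  # e.g., the C major scale

def conform(pitch):
    if not PITCH_CLASS_SET:   # empty set: every pitch would fail forever,
        return pitch          # so bail out instead of looping infinitely
    while pitch % 12 not in PITCH_CLASS_SET:
        pitch += 1            # push up one semitone and test again
    return pitch

conform(61)  # C#4 (61) is not in C major, so it becomes D4 (62)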

Continuous change of delay time causes a pitch shift



The way we commonly avoid clicks when changing the amplitude of a sound is to interpolate smoothly, sample by sample, from one gain factor to another, using an object such as line~. Does that same technique work well for delay times? As it turns out, it’s not the best way to get a seamless, unnoticeable change from one delay time to another, because changing the delay time gradually actually causes a pitch shift in the sound: while the delay time is decreasing, the delayed signal is read out faster than it was written in and is transposed upward, and while the delay time is increasing it is transposed downward, much as in the Doppler effect.

This patch demonstrates that fact. When you provide a new delay time, it interpolates to the new value quickly; you’ll hear that as a quick swoop in pitch. You can get different types of swoop with different interpolation times, but this sort of gradual change in delay time always causes some amount of audible pitch change. Of course there are ways to use this pitch change for desired effects such as flanging, but what we seek here is a way to get from one fixed delay time to another without any extraneous audible artifacts.
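The amount of transposition is easy to estimate: while the delay time is ramping, the delayed signal is effectively played back at a speed scaled by 1 minus the rate of change of the delay time. A minimal sketch of that arithmetic (an illustration of the principle, not part of the patch; the function name is hypothetical):

import math

# Playback-speed ratio and pitch shift caused by ramping a delay time.
# While the delay changes by (d_end - d_start) ms over ramp_ms milliseconds,
# the read point moves at 1 - (d_end - d_start)/ramp_ms times normal speed.
def ramp_shift(d_start_ms, d_end_ms, ramp_ms):
    ratio = 1.0 - (d_end_ms - d_start_ms) / ramp_ms
    return ratio, 12.0 * math.log2(ratio)  # speed ratio, shift in semitones

ramp_shift(0.0, 10.0, 100.0)  # -> (0.9, about -1.82 semitones during the ramp)

A longer ramp spreads the same total delay change over more time, so the swoop is smaller in pitch but lasts longer, which matches what you hear when you try different interpolation times in the patch.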