Lecture topics, 4/10/14

[The following are the professor’s preparatory notes for Thursday April 10. They’re not actually useful as “lecture notes”; they don’t really tell you anything. But I’m posting them here just as a reminder of what we talked about (and will talk about).]


Topic 0. Upcoming Events: The following three concerts are highly recommended for seeing new approaches to the use of computers in live music performance.
Lava Glass – MFA recital by Martim Galvao
Shackle – Anne La Berge, flute, and Deckard, laptop and electronics
Interactive Instrumentation – ICIT faculty concert of new works for instruments and computers

1. Review the previous four blog posts on finding and opening files.

2. High-level and low-level programming. Max is a common language (and is a kind of level playing field) that lets us confront issues quickly, easily, and directly, and hear the results immediately, but learning Max is not really the goal. It’s up to each student to transfer the things that we do in Max into the programming situation that’s most meaningful to them.

3. Categorizing the basic tasks of computer audio/music programming: computer music applications deal with both non-realtime and realtime (i.e., untimed and timed) tasks. File I/O and stream I/O are untimed and timed, respectively. MIDI and audio (“midi” and “sampled” in the Java Sound API) involve different timings and structural levels (i.e., music is organized audio, and is dealt with as a higher level of description). Most activities are, behind the scenes, untimed (as fast as possible) manipulation of arrays of integers or floats.

4. Scheduling. The Max queue and realtime scheduling paradigm. How MSP works. How the two (actually three) “threads” are related: queue, scheduler, and audio. See Joshua Clayton’s article.

5. Take a look at Audacity. What things does it do, and what exactly does it have to do in order to accomplish those things? Discuss audio file I/O, data management, screen drawing, functionality, etc. Generate a tone.

6. Synthesize a waveform.

A sinusoid as the basic vibration component of sound:
Analog way: f(t) = Asin(2πft+φ)
Digital way: f(n) = Asin(2πfn/R+φ)
Puckette way: x[n] = a cos(ωn+φ)
Angular frequency: ω = 2πƒ/R
Dobrian way: y[n] = Acos(2π(ƒn/R+φ))

Additive synthesis: y(n) = a₀ + a₁cos(ωn+φ₁) + a₂cos(2ωn+φ₂) + …
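The additive synthesis sum above translates directly into plain Java. This is just a sketch; the class name, the array representation of the amplitudes aₖ and phases φₖ, and the values in main are my own choices, not anything prescribed by the formula:

```java
// A sketch of the additive synthesis formula above:
// y(n) = a0 + a1*cos(w*n + phi1) + a2*cos(2*w*n + phi2) + ...
public class Additive {
    // a[0] is the constant (DC) term a0; a[k] and phi[k] belong to harmonic k,
    // which oscillates at k times the fundamental angular frequency omega.
    public static double sample(int n, double omega, double[] a, double[] phi) {
        double y = a[0];
        for (int k = 1; k < a.length; k++) {
            y += a[k] * Math.cos(k * omega * n + phi[k]);
        }
        return y;
    }

    public static void main(String[] args) {
        // Fundamental completing one cycle every 8 samples: omega = 2*pi/8
        double omega = 2 * Math.PI / 8;
        double[] a = {0.0, 1.0, 0.5};   // no DC, fundamental, half-strength 2nd harmonic
        double[] phi = {0.0, 0.0, 0.0};
        for (int n = 0; n < 8; n++) {
            System.out.printf("%d  % .6f%n", n, sample(n, omega, a, phi));
        }
    }
}
```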

6a. Do it in MSP. (Too easy, too high-level.)

6b. You can DIY in gen~.

6c. Print out a cycle of a sinusoid in super-simple Java.
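A sketch of what 6c might look like, using the y[n] = Acos(2π(ƒn/R+φ)) form above. Since one cycle of frequency ƒ at rate R spans R/ƒ samples, we can just compute N samples of one cycle directly; the class name, the 16-sample length, and the unit amplitude are my choices:

```java
// Sketch of 6c: print one cycle of a sinusoid, sample by sample.
public class SineCycle {
    // N samples of A*cos(2*pi*(n/N + phi)) form exactly one cycle.
    public static double[] oneCycle(int N, double A, double phi) {
        double[] y = new double[N];
        for (int n = 0; n < N; n++) {
            y[n] = A * Math.cos(2 * Math.PI * ((double) n / N + phi));
        }
        return y;
    }

    public static void main(String[] args) {
        double[] y = oneCycle(16, 1.0, 0.0);
        for (int n = 0; n < y.length; n++) {
            System.out.printf("%2d  % .6f%n", n, y[n]);
        }
    }
}
```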

6d. Plot a cycle of a sinusoid in simple Java/Swing.

7. Assignment. Do the reading that was assigned for today if you haven’t done it yet. (Re-read it anyway, to reinforce your understanding of it.) Q&A participation is required at least once a week. Write a program that plots a waveform, or even plays a waveform if you’re able. Make a periodic but non-sinusoidal waveform via additive synthesis. Go beyond this basic requirement if you want to and can, of course. If you can’t program, build it in MSP and study the plot~ object to see how you can best plot it.

Future stuff:

Smoothing transitions with interpolation
— Linear interpolation formula
— Exponential interpolation
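As a preview, both interpolation shapes are one-liners. Linear interpolation adds the same difference at each step; exponential interpolation multiplies by the same ratio at each step (so its endpoints must be nonzero and share a sign). The class, method, and parameter names here are mine:

```java
// Sketch of linear vs. exponential interpolation from x0 to x1 over N steps.
// Step i runs from 0 (which gives x0) to N (which gives x1).
public class Interp {
    // Linear: equal differences per step.
    public static double linear(double x0, double x1, int i, int N) {
        return x0 + (x1 - x0) * ((double) i / N);
    }

    // Exponential: equal ratios per step
    // (x0 and x1 must be nonzero and have the same sign).
    public static double exponential(double x0, double x1, int i, int N) {
        return x0 * Math.pow(x1 / x0, (double) i / N);
    }

    public static void main(String[] args) {
        // Fade an amplitude from 1.0 down to 0.001 (i.e., to -60 dB) in 4 steps:
        for (int i = 0; i <= 4; i++) {
            System.out.printf("step %d: linear % .4f  exponential % .4f%n",
                    i, linear(1.0, 0.001, i, 4), exponential(1.0, 0.001, i, 4));
        }
    }
}
```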

The logarithmic nature of human perception
— Subjective experience vs. empirical measurement
— Additive vs. multiplicative relationships (differences vs. ratios)
— Weber-Fechner law and Stevens’ power law

The decibel scale: dB = 20·log₁₀(A/A₀)
and its converse: A = A₀·10^(dB/20)
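Those two formulas translate directly into Java; the class and method names are mine, and A₀ is whatever reference amplitude you choose (e.g., full scale = 1.0):

```java
// Sketch of the decibel conversions above; A0 is the reference amplitude.
public class Decibels {
    public static double toDb(double A, double A0) {
        return 20.0 * Math.log10(A / A0);
    }

    public static double fromDb(double dB, double A0) {
        return A0 * Math.pow(10.0, dB / 20.0);
    }

    public static void main(String[] args) {
        // Halving an amplitude is about a 6 dB drop:
        System.out.println(toDb(0.5, 1.0)); // approximately -6.0206
    }
}
```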

The harmonic series
— Pythagorean tuning
— Temperament

Equal temperament: ƒ(p) = 440·2^((p-69)/12)
and its converse: p = 69 + 12·log₂(ƒ/440)
and its generalized form: ƒ(n) = ƒ₀·o^(n/d)
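In Java these become the following sketch, where p is MIDI pitch number (69 = A440), and in the generalized form ƒ₀ is the reference frequency, o the interval being divided (e.g., the 2:1 octave), and d the number of equal divisions. The method names are mine:

```java
// Sketch of the equal-temperament conversions above (MIDI pitch 69 = A440).
public class Pitch {
    // MIDI pitch to frequency in Hz.
    public static double mtof(double p) {
        return 440.0 * Math.pow(2.0, (p - 69.0) / 12.0);
    }

    // Frequency in Hz to (possibly fractional) MIDI pitch.
    public static double ftom(double f) {
        return 69.0 + 12.0 * (Math.log(f / 440.0) / Math.log(2.0));
    }

    // Generalized form: n steps of d equal divisions of the ratio o, from f0.
    public static double step(double f0, double o, double n, double d) {
        return f0 * Math.pow(o, n / d);
    }

    public static void main(String[] args) {
        System.out.println(mtof(60)); // middle C, approximately 261.63 Hz
    }
}
```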

Wavetable synthesis