Timing events

The WAIT and SYNC event kinds enable you to schedule Jamdac instruments to be triggered at precise moments in time, which is required for accurate playback of music.

One way to introduce delays between audio events is for the program to enqueue them at a later time. Consider this example:

Sample index | Channel 0 event
-------------|-----------------------------
0            | dequeue LOAD INSTRUMENT
30           | dequeue TRIGGER INSTRUMENT
60           | note starts playing
             | (queue is empty for a while)
150          | dequeue TRIGGER INSTRUMENT
180          | previous note gets silenced
180          | note starts playing

The note starts playing at sample index 60 because LOAD INSTRUMENT and TRIGGER INSTRUMENT each consume 30 samples. The program waits some time before enqueuing the second TRIGGER INSTRUMENT event, so it isn't processed until sample index 150, and therefore it takes effect at sample index 180. Since each channel controls a single DSP voice, triggering a note causes the previous note to be immediately silenced.
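
To make that arithmetic concrete, here is a minimal sketch in C (not part of the Jamdac API; the constant and helper names are invented) of how the effect times in the table can be derived. It assumes the rule implied by the table: a dequeue begins at the later of the event's enqueue time and the moment the channel finishes its previous dequeue, and the event takes effect 30 samples later.

#include <stdint.h>
#include <stdio.h>

#define DEQUEUE_COST 30  /* samples consumed by dequeuing one event */

/* Hypothetical helper: sample index at which an event takes effect. */
static uint32_t effect_time(uint32_t enqueue_time, uint32_t channel_free_at)
{
    uint32_t start = enqueue_time > channel_free_at ? enqueue_time : channel_free_at;
    return start + DEQUEUE_COST;
}

int main(void)
{
    uint32_t t = 0;
    t = effect_time(0, t);    /* LOAD INSTRUMENT enqueued at 0    -> effect at 30 */
    t = effect_time(0, t);    /* TRIGGER INSTRUMENT enqueued at 0 -> note at 60   */
    t = effect_time(150, t);  /* second TRIGGER enqueued around 150 -> note at 180 */
    printf("second note starts at sample %u\n", t);  /* prints 180 */
    return 0;
}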

Due to hardware interactions and other responsibilities of a program, in practice it's very difficult to control the timing of enqueuing an event. Therefore, the above timeline typically only arises from unpredictable realtime sources, such as a video game playing a sound effect when the player collects a coin. Timed enqueuing is not a reliable way to schedule the musical notes of a song. Instead, the WAIT and SYNC events provide a precise solution.

8 – WAIT

The WAIT event causes Jamdac to wait a specified number of samples before dequeuing the next event. This lets a music player schedule many notes in advance, freeing the program to perform other tasks that may take hundreds of milliseconds before the next call to SOUND::THINK(). The example below shows a typical usage of WAIT:

Sample index | Channel 1 event
-------------|-----------------------------
0            | dequeue LOAD INSTRUMENT
30           | dequeue TRIGGER INSTRUMENT
60           | note starts playing
60           | dequeue WAIT 51 samples
141          | dequeue LOAD INSTRUMENT
171          | previous note gets silenced
171          | dequeue TRIGGER INSTRUMENT
201          | note starts playing

In the above example, dequeuing the WAIT command itself takes 30 samples, and its operand specifies 51 samples, so the total delay is 81 samples. Thus, the next LOAD INSTRUMENT starts processing at sample index 141. Since each channel controls a single DSP voice, loading a new instrument causes the previous note to be silenced.
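
Because dequeuing the WAIT event itself consumes 30 samples, a music player that wants the next event to be dequeued G samples after the WAIT is dequeued should use an operand of G - 30. A small hedged sketch (the helper name is invented for illustration):

#include <stdint.h>

#define DEQUEUE_COST 30  /* samples consumed by dequeuing one event */

/* Hypothetical helper: choose a WAIT operand so that the next event is
 * dequeued `gap` samples after the WAIT itself is dequeued. In the table
 * above, gap = 81 gives operand = 51. Gaps shorter than 30 samples cannot
 * be expressed, since the dequeue cost alone already takes that long. */
static uint32_t wait_operand_for_gap(uint32_t gap)
{
    return gap > DEQUEUE_COST ? gap - DEQUEUE_COST : 0;
}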

9 – SYNC

Now suppose we want to start playing a song that will use both Channel 0 and Channel 1. We said that Channel 0's queue became empty "for a while", not receiving its next event until sample index 150. Generally, there is no way for the program to determine how long this took, and therefore no way to calculate the correct WAIT time to synchronize the two channels to the same sample index.

The SYNC event solves this problem:

Sample index | Channel 0 event                 | Sample index | Channel 1 event
-------------|---------------------------------|--------------|---------------------------------
0            | dequeue LOAD INSTRUMENT         | 0            | dequeue LOAD INSTRUMENT
30           | dequeue TRIGGER INSTRUMENT      | 30           | dequeue TRIGGER INSTRUMENT
60           | note starts playing             | 60           | note starts playing
. . .        | (queue is empty for a while)    | 60           | dequeue WAIT 51 samples
150          | dequeue TRIGGER INSTRUMENT      | 141          | dequeue LOAD INSTRUMENT
180          | previous note gets silenced     | 171          | previous note gets silenced
180          | note starts playing             | 171          | dequeue TRIGGER INSTRUMENT
180          | dequeue WAIT 160 samples        | 201          | note starts playing
370          | dequeue SYNC group=0 channels=2 | 201          | dequeue SYNC group=0 channels=2
400          | (start synchronizing)           | 231          | (start synchronizing)
400          | (synchronized)                  | 400          | (synchronized)
400          | dequeue TRIGGER INSTRUMENT      | 400          | dequeue TRIGGER INSTRUMENT
430          | note starts playing             | 430          | note starts playing

The SYNC command causes Channel 1 to wait until Channel 0 processes its corresponding SYNC command, so both channels dequeue their next event at the same sample index, 400.

Jamdac has six channels, and SYNC can be used to synchronize any subset of them. The subset is specified by a sync group (an arbitrary integer between 0 and 5) together with a sync channel count giving the number of channels in the subset. These two parameters are packed into bytes 3 and 4 of the IO_AUDIO_EVENT::OPERAND field.
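
As a sketch of how a program might build that operand, assuming the operand's bytes are numbered 1 through 4 starting at the least significant byte (the spec above does not state the numbering or endianness, so the shifts below are an assumption):

#include <stdint.h>

/* Hypothetical helper: pack a SYNC operand. Assumes "byte 3" means
 * bits 16..23 and "byte 4" means bits 24..31 of the 32-bit OPERAND;
 * adjust the shifts if the hardware counts bytes differently. */
static uint32_t make_sync_operand(uint8_t sync_group, uint8_t channel_count)
{
    return ((uint32_t)sync_group << 16) | ((uint32_t)channel_count << 24);
}

/* Example: group 0, synchronizing 2 channels, as in the timeline above:
 * uint32_t operand = make_sync_operand(0, 2); */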

By specifying sync groups, multiple subsets of channels can be synchronized concurrently. In the implementation, each channel that processes a SYNC event becomes "blocked". When a SYNC event is processed whose channel count is less than or equal to the number of blocked channels in that sync group, the entire group becomes unblocked. In other words, the synchronized sample index for the group is determined by whichever channel is the last to reach its SYNC event.
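
The exact bookkeeping is not specified, but a minimal sketch of the rule described above might look like this (all names are invented; a real implementation would live inside the audio engine):

#include <stdint.h>

#define NUM_CHANNELS 6
#define NUM_GROUPS   6

/* -1 means "not blocked"; otherwise the sync group the channel waits on. */
static int     blocked_group[NUM_CHANNELS] = { -1, -1, -1, -1, -1, -1 };
static uint8_t blocked_count[NUM_GROUPS];

/* Sketch of the rule described above: a channel that processes SYNC becomes
 * blocked in its group; if this SYNC's channel count is <= the number of
 * channels now blocked in the group, the entire group unblocks, and every
 * channel in it resumes dequeuing at the same sample index. */
static void process_sync(int channel, int group, uint8_t channel_count)
{
    blocked_group[channel] = group;
    blocked_count[group]++;

    if (channel_count <= blocked_count[group]) {
        for (int c = 0; c < NUM_CHANNELS; c++) {
            if (blocked_group[c] == group)
                blocked_group[c] = -1;   /* resume dequeuing on this channel */
        }
        blocked_count[group] = 0;
    }
}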

10 – RAMPDOWN

The above timing diagrams noted that the "previous note gets silenced" whenever a new instrument is loaded or a new note is triggered on that channel:

Sample index | Channel 1 event
-------------|-----------------------------
. . .        |
60           | note starts playing
60           | dequeue WAIT 51 samples
141          | dequeue LOAD INSTRUMENT
171          | previous note gets silenced
171          | dequeue TRIGGER INSTRUMENT
201          | note starts playing

If the note that started playing at sample index 60 is still producing sound, the samples will become all zeros from index 171 until index 201. This jump to zero often causes subtle but unpleasant popping sounds in a song. One solution is to carefully adjust the instrument envelopes (perhaps using RELEASE INSTRUMENT) to ensure the instrument naturally decays to zero before sample index 171.

The RAMPDOWN event provides a simpler, generalized solution: over the next 30 samples (~1.36 ms), the currently playing sound will "fade out", linearly scaling its amplitude down to zero.
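
A sketch of that fade, assuming signed 16-bit output samples (the 30-sample length and linear shape come from the description above; the buffer layout and exact gain steps are illustrative):

#include <stdint.h>

#define RAMPDOWN_SAMPLES 30  /* fade length stated above (~1.36 ms) */

/* Illustrative sketch: linearly scale the next 30 output samples from
 * full amplitude down to silence. `buf` holds the channel's samples. */
static void apply_rampdown(int16_t *buf)
{
    for (int i = 0; i < RAMPDOWN_SAMPLES; i++) {
        /* gain falls from 29/30 on the first sample to 0 on the last */
        int32_t gain = RAMPDOWN_SAMPLES - 1 - i;
        buf[i] = (int16_t)((int32_t)buf[i] * gain / RAMPDOWN_SAMPLES);
    }
    /* after the ramp, the channel stays silent until the next trigger */
}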

Let's modify the above timeline to insert a RAMPDOWN event, while still triggering the next note at sample index 171:

Sample index | Channel 1 event
-------------|------------------------------------------
. . .        |
60           | note starts playing
60           | dequeue WAIT 21 samples
111          | dequeue RAMPDOWN
141          | rampdown start
141          | dequeue LOAD INSTRUMENT
171          | rampdown end; previous note is silenced
171          | dequeue TRIGGER INSTRUMENT
201          | note starts playing
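
Note that inserting the RAMPDOWN costs one extra dequeue (30 samples), which is why the preceding WAIT shrank from 51 to 21 samples while the next note still starts at sample index 201. A tiny hedged helper capturing that adjustment (name invented):

#include <stdint.h>

#define DEQUEUE_COST 30  /* samples consumed by dequeuing one event */

/* Hypothetical helper: when a RAMPDOWN is inserted before the next LOAD
 * INSTRUMENT, shorten the preceding WAIT operand by one dequeue cost so
 * the following note keeps its original sample index (51 - 30 = 21 above). */
static uint32_t shorten_wait_for_rampdown(uint32_t wait_operand)
{
    return wait_operand >= DEQUEUE_COST ? wait_operand - DEQUEUE_COST : 0;
}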

I/O definitions

CLASS IO_AUDIO_EVENT  # SIZE 5
  . . .
  #  8 = WAIT      OPERAND = NUMBER OF ADDITIONAL SAMPLES
  #  9 = SYNC      OPERAND = SYNC GROUP (0..5) IN BYTE 3;
  #                          CHANNEL COUNT (1..6) IN BYTE 4
  # 10 = RAMPDOWN  30 SAMPLES
  . . .
  VAR KIND: BYTE

  VAR OPERAND: INT
END CLASS
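
For reference, one way a program might represent these events in memory, sketched in C. The 5-byte layout (a KIND byte followed by a 4-byte OPERAND) follows the class definition above; the struct name, packing pragma, and endianness are assumptions.

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  kind;     /* 8 = WAIT, 9 = SYNC, 10 = RAMPDOWN, ...       */
    uint32_t operand;  /* meaning depends on KIND, as documented above */
} io_audio_event;      /* 5 bytes, matching SIZE 5                     */
#pragma pack(pop)

/* Example: wait for 51 additional samples, as in the WAIT timeline. */
static const io_audio_event wait_51 = { 8, 51 };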