Audio Architecture
1. Overview
Oh My Ondas is a 4-source, 4-channel mixer with per-channel FX processing. The four sources — microphone, sampler, synthesizer, and internet radio — each feed into a dedicated mixer channel with independent gain, 3-band EQ, and a full FX chain (delay, glitch, grain). All channels merge through a master EQ, master FX stage, and master gain before reaching the audio output and analyser.
2. Signal Flow
Each source follows an identical path through its channel strip. The per-channel FX sit between the channel EQ and the master bus, allowing independent effects processing without affecting other sources.
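At the Web Audio level, a channel strip of this shape is a short chain of nodes. The sketch below is illustrative, not the engine's actual API: the function name `wireChannelStrip` and the `fxInput` stand-in for the ChannelFX chain are assumptions.

```javascript
// Sketch of one channel strip: source -> gain -> 3-band EQ -> FX -> master.
// Node names are illustrative; fxInput stands in for the ChannelFX chain.
function wireChannelStrip(ctx, sourceNode, masterBus) {
  const channelGain = ctx.createGain();
  // 3-band EQ as low-shelf, peaking, high-shelf filters in series
  const low = ctx.createBiquadFilter();  low.type = 'lowshelf';
  const mid = ctx.createBiquadFilter();  mid.type = 'peaking';
  const high = ctx.createBiquadFilter(); high.type = 'highshelf';
  const fxInput = ctx.createGain();      // entry point of the per-channel FX
  sourceNode.connect(channelGain);
  channelGain.connect(low);
  low.connect(mid);
  mid.connect(high);
  high.connect(fxInput);                 // per-channel FX sit here...
  fxInput.connect(masterBus);            // ...before the master bus
  return { channelGain, low, mid, high, fxInput };
}
```

Because each strip is wired independently, muting or mangling one source never touches another source's path.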
3. Per-Channel FX
The ChannelFX class provides three effect types per instance. Five instances are created: one per channel (mic, samples, synth, radio) and one master.
Delay
Variable-time delay (0–2s) with feedback loop and dry/wet crossfade. At 100% mix, the dry signal is attenuated to 50% rather than muted, preserving transient definition.
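The 50% dry floor can be expressed as a small gain curve. The endpoint values come from the text; the linear interpolation between them is an assumption.

```javascript
// Dry/wet crossfade where the dry leg is floored at 50% instead of muted.
// Linear curve between the endpoints is assumed.
function delayMixGains(mixPercent) {
  const wet = mixPercent / 100;   // 0..1
  const dry = 1 - 0.5 * wet;     // 1.0 at 0% mix, 0.5 at 100% mix
  return { dry, wet };
}
```

Keeping the dry leg audible at full mix means transients still cut through even under long, washy delay settings.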
Glitch
Probability-based slice effects running on a 100ms check interval. Three modes: stutter (rapid volume chops), reverse (feedback burst simulating reversal), and jump (random amplitude spikes).
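The per-tick decision can be sketched as a probability gate plus a mode dispatch. The RNG is injectable so the sketch is deterministic under test; the per-mode payloads (chop rate, spike gain) are illustrative values, not the shipped ones.

```javascript
// Probability gate evaluated on each 100 ms glitch tick.
// rng is injectable for deterministic testing; payload values are assumed.
function glitchTick(probability, mode, rng = Math.random) {
  if (rng() >= probability) return null;   // no glitch this tick
  switch (mode) {
    case 'stutter': return { action: 'chopVolume', rateHz: 30 };   // rapid gain chops
    case 'reverse': return { action: 'feedbackBurst' };            // simulated reversal
    case 'jump':    return { action: 'amplitudeSpike', gain: 2 };  // random spike
    default:        return null;
  }
}
```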
Grain
Simplified granular texture using the delay line. Grain density maps to delay feedback, grain size maps to delay time. Freeze mode sets feedback to 0.98 for near-infinite sustain of the delay buffer.
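The grain-to-delay mapping reduces to two parameter conversions. The 0.98 freeze feedback is from the text; the scaling ranges for density and size are assumptions.

```javascript
// Map grain controls onto the shared delay line: density -> feedback,
// size -> delay time; freeze pins feedback at 0.98. Ranges are assumed.
function grainToDelayParams({ density, size, freeze }) {
  return {
    feedback: freeze ? 0.98 : Math.min(0.9, density * 0.9), // density in 0..1
    delayTime: 0.02 + size * 0.18,                          // size in 0..1 -> 20..200 ms
  };
}
```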
Routing via MangleEngine
The MangleEngine acts as a routing manager. Its currentRoute property (default: 'master') determines which ChannelFX instance receives effect parameter changes. Direct per-channel control is available via setChannelDelayMix(channelName, percent) and similar methods.
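The routing behavior amounts to one level of indirection over the five ChannelFX instances. In this sketch the instances are stubbed as plain settings objects, and `setDelayMix` is an assumed method name for a routed parameter change.

```javascript
// Routing sketch: currentRoute selects which ChannelFX instance receives
// routed parameter changes; ChannelFX is stubbed as a plain object here.
class MangleEngineSketch {
  constructor() {
    this.fx = {};
    for (const name of ['mic', 'samples', 'synth', 'radio', 'master']) {
      this.fx[name] = { delayMix: 0 };   // stand-in for a ChannelFX instance
    }
    this.currentRoute = 'master';        // default route, per the text
  }
  setDelayMix(percent) {                 // routed call follows currentRoute
    this.fx[this.currentRoute].delayMix = percent;
  }
  setChannelDelayMix(channelName, percent) {  // direct per-channel call
    this.fx[channelName].delayMix = percent;
  }
}
```

The two entry points coexist: UI knobs follow `currentRoute`, while automation (e.g. role strategies) can address a channel directly.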
4. Soundscape Analysis
The SoundscapeAnalyzer attaches a high-resolution AnalyserNode (FFT size 2048) to the mic channel's gain node as a parallel tap. The tap is non-destructive — it does not alter the mic audio path.
Analysis Pipeline
- Spectral frames collected every 50ms over a configurable duration (default 3s)
- RMS amplitude computed per frame from time-domain data
- Transient detection via amplitude spike ratio (1.8x threshold)
- Average spectrum computed across all frames for peak and centroid analysis
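The first three steps above reduce to two small functions: an RMS over a time-domain frame, and a spike detector comparing each frame against a baseline. The 1.8x ratio is from the text; using a running mean as the baseline is an assumption.

```javascript
// RMS of one time-domain frame (array or Float32Array of samples).
function rmsOf(samples) {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}

// Count transients: a frame whose RMS exceeds ratio x the running mean
// of previous frames counts as a spike. Running-mean baseline is assumed.
function countTransients(frameRms, ratio = 1.8) {
  let transients = 0, avg = frameRms[0] || 0;
  for (let i = 1; i < frameRms.length; i++) {
    if (avg > 0 && frameRms[i] / avg > ratio) transients++;
    avg = (avg * i + frameRms[i]) / (i + 1);
  }
  return transients;
}
```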
Metrics
- dominantFreqs — top 5 frequency peaks by magnitude
- spectralCentroid — brightness center frequency (Hz)
- brightness — energy ratio above 2kHz (0–100%)
- transientDensity — amplitude spikes per second
- avgAmplitude — mean RMS level
- envelope — array of amplitude levels over time
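Centroid and brightness both fall out of the averaged magnitude spectrum. The sketch below assumes linear magnitudes and a `binHz` bin width (sampleRate / fftSize); the 2 kHz cutoff is from the text.

```javascript
// Magnitude-weighted mean frequency (Hz) over the averaged spectrum.
function spectralCentroid(magnitudes, binHz) {
  let weighted = 0, total = 0;
  magnitudes.forEach((m, i) => { weighted += m * i * binHz; total += m; });
  return total > 0 ? weighted / total : 0;
}

// Percentage of spectral energy above the cutoff (2 kHz per the text).
function brightnessPercent(magnitudes, binHz, cutoffHz = 2000) {
  let above = 0, total = 0;
  magnitudes.forEach((m, i) => { total += m; if (i * binHz > cutoffHz) above += m; });
  return total > 0 ? (above / total) * 100 : 0;
}
```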
Classification
- amplitude < 0.01 → quiet
- transients > 4 AND brightness > 50 → chaotic
- transients > 2 → rhythmic
- brightness > 40 → noisy
- centroid > 500 → tonal
- otherwise → ambient
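The decision list above is first-match-wins, so it transcribes directly into a guard chain. Field names in this sketch are assumptions taken from the metric names above.

```javascript
// First-match-wins transcription of the classification rules.
function classifySoundscape({ avgAmplitude, transientDensity, brightness, spectralCentroid }) {
  if (avgAmplitude < 0.01) return 'quiet';
  if (transientDensity > 4 && brightness > 50) return 'chaotic';
  if (transientDensity > 2) return 'rhythmic';
  if (brightness > 40) return 'noisy';
  if (spectralCentroid > 500) return 'tonal';
  return 'ambient';
}
```

Order matters: a loud, bright, busy signal is chaotic even though it would also satisfy the rhythmic and noisy rules.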
AI Integration
When generateFull() runs, it calls listenToEnvironment(2000) first. If the soundscape has meaningful amplitude (> 0.05), the classification overrides GPS-based vibe selection: rhythmic→urban, tonal→nature, ambient→calm, noisy/chaotic→chaos. Transient density and brightness also adjust the density and complexity parameters (+15 and +10 respectively). Dominant frequencies below 10Hz are used as tempo hints.
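The override step can be sketched as a pure transform over the composer's parameters. The vibe mapping, the 0.05 amplitude gate, and the +15/+10 offsets are from the text; the gating thresholds on transient density and brightness, and all field names, are assumptions.

```javascript
// Classification -> vibe mapping, applied only when amplitude is meaningful.
const CLASS_TO_VIBE = { rhythmic: 'urban', tonal: 'nature', ambient: 'calm', noisy: 'chaos', chaotic: 'chaos' };

function applySoundscape(params, scape) {
  if (scape.avgAmplitude <= 0.05) return params;        // keep GPS-based vibe
  const vibe = CLASS_TO_VIBE[scape.classification] || params.vibe;
  return {
    ...params,
    vibe,
    density: params.density + (scape.transientDensity > 2 ? 15 : 0),  // gate assumed
    complexity: params.complexity + (scape.brightness > 50 ? 10 : 0), // gate assumed
  };
}
```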
5. Creative Source Roles
The SourceRoleManager breaks the default assumption that the sampler plays drums, the radio plays straight through, and the synth plays notes. Any source can serve any of four roles:
- RHYTHM — percussive, pattern-driving elements
- TEXTURE — ambient, atmospheric layers
- MELODY — pitched, melodic phrases
- MODULATION — real-time parameter modulation from source amplitude
Role Distribution by Vibe
- calm — 2 rhythm / 3 texture / 2 melody / 1 modulation
- urban — 4 rhythm / 1 texture / 2 melody / 1 modulation
- nature — 1 rhythm / 4 texture / 2 melody / 1 modulation
- chaos — 3 rhythm / 2 texture / 1 melody / 2 modulation
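The distributions above can be held as plain data and expanded into a per-track role list. Each vibe sums to eight roles; the expansion function is a sketch.

```javascript
// Role counts per vibe, from the table above (each row sums to 8 tracks).
const ROLE_DISTRIBUTIONS = {
  calm:   { RHYTHM: 2, TEXTURE: 3, MELODY: 2, MODULATION: 1 },
  urban:  { RHYTHM: 4, TEXTURE: 1, MELODY: 2, MODULATION: 1 },
  nature: { RHYTHM: 1, TEXTURE: 4, MELODY: 2, MODULATION: 1 },
  chaos:  { RHYTHM: 3, TEXTURE: 2, MELODY: 1, MODULATION: 2 },
};

// Expand the counts into one role per track.
function rolesForVibe(vibe) {
  const roles = [];
  for (const [role, count] of Object.entries(ROLE_DISTRIBUTIONS[vibe])) {
    for (let i = 0; i < count; i++) roles.push(role);
  }
  return roles;
}
```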
Strategy Pattern
Each role+source combination has a strategy function that configures the track. Examples:
- _rhythmFromSynth — short decay + filter P-Locks for percussive character
- _textureFromRadio — delay + grain P-Locks for ambient layer
- _textureFromMic — heavy grain P-Locks for processed ambient
- _melodyFromSampler — pitch P-Locks on pitched kits
- _modulationFromMic — amplitude-to-parameter modulation route
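A strategy pattern of this shape is typically a lookup table keyed by role and source. The two strategies shown and their parameter values are illustrative, not the shipped configurations.

```javascript
// Strategy lookup sketch: role+source pairs key into config functions.
// The configurations returned here are illustrative placeholders.
const strategies = {
  'RHYTHM:synth':  (track) => ({ ...track, ampDecay: 0.08, pLocks: { filterCutoff: 'stepped' } }),
  'TEXTURE:radio': (track) => ({ ...track, pLocks: { delayMix: 'stepped', grainDensity: 'stepped' } }),
};

function configureTrack(role, source, track) {
  const strategy = strategies[`${role}:${source}`];
  return strategy ? strategy(track) : track;   // unknown pairs pass through
}
```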
Modulation Routing
Modulation-role tracks set up real-time parameter routes. Each sequencer tick, processModulation() reads the source channel's RMS amplitude and maps it (scaled) to a target channel's FX parameter. For example: mic amplitude → synth delay mix.
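One tick of that routing reduces to read, scale, write. The route shape, the scale factor, and the 0..100 clamp are assumptions; the reader and writer are injected so the sketch stays independent of the audio engine.

```javascript
// One modulation tick: read source RMS, scale, write to target FX param.
// readRms and setFxParam stand in for engine accessors.
function processModulation(route, readRms, setFxParam) {
  const amp = readRms(route.sourceChannel);             // e.g. mic amplitude
  const value = Math.min(100, amp * route.scale);       // clamp to 0..100%
  setFxParam(route.targetChannel, route.param, value);  // e.g. synth delay mix
  return value;
}
```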
6. P-Lock Integration
Per-step parameter locks now include an fxRoute field that determines which channel's FX receives the P-Lock values. The resolution order is:
1. Explicit pLocks.fxRoute (set by role strategies or manually)
2. Track's source channel (sampler→samples, synth→synth, etc.)
3. Master (fallback)
This allows a single pattern to apply delay to the radio channel on step 3, grain to the synth on step 7, and leave all other steps targeting master — per-step, per-channel FX automation.
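The resolution order transcribes into a short guard chain. The source-to-channel map mirrors the "sampler→samples, synth→synth" examples in the text; the mic and radio entries are inferred from the channel names elsewhere in this document.

```javascript
// Resolve which channel's FX a step targets, in priority order.
const SOURCE_CHANNEL = { sampler: 'samples', synth: 'synth', mic: 'mic', radio: 'radio' };

function resolveFxRoute(pLocks, trackSource) {
  if (pLocks && pLocks.fxRoute) return pLocks.fxRoute;                 // 1. explicit route
  if (SOURCE_CHANNEL[trackSource]) return SOURCE_CHANNEL[trackSource]; // 2. source channel
  return 'master';                                                     // 3. fallback
}
```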
7. File Reference
| File | Responsibility |
|---|---|
| audio-engine.js | AudioContext, channel routing, gain, EQ, metering, per-channel FX wiring |
| channel-fx.js | ChannelFX class: delay, glitch, grain per instance |
| mangle.js | MangleEngine: FX routing manager, delegates to ChannelFX instances |
| soundscape-analyzer.js | SoundscapeAnalyzer: mic spectral analysis, classification |
| source-roles.js | SourceRoleManager: creative role assignment, modulation routing |
| sequencer.js | Step sequencer, P-Locks with per-channel FX targeting, trig conditions |
| scenes.js | Scene save/recall/morph with per-channel FX and role state |
| ai-composer.js | AI composition engine: soundscape-aware, role-based track assignment |
| mic-input.js | Microphone input management |
| sampler.js | 8-pad sampler with kit loading and capture |
| synth.js | 2-oscillator subtractive synth with ADSR, filter, LFO |
| radio.js | Internet radio streaming and capture |
| recorder.js | Audio recording to file |
| arrangement.js | Multi-scene arrangement timeline |
| gps.js | GPS positioning and location tracking |
| landmark.js | GPS-tagged sonic snapshots |
| journey.js | GPS-tracked walking sessions |
| app.js | UI controller, event binding, initialization |