Official Website
https://iusmusic.github.io/distant-lights/
Distant Lights is a physics-informed software instrument and analysis toolkit for synthesizing sound from intensity-modulated light models. It combines electrical hum, photoacoustic response, sonification, sequencing, layering, and export workflows in a single system for composition, prototyping, and experimental comparison.
The project is designed for:
- composers and sound designers exploring nontraditional synthesis sources
- researchers prototyping reduced-order models of light-to-sound mechanisms
- developers building browser-based and Python-based audio tools
- experimental artists working across signal, physics, and software instrument design
Distant Lights models three primary mechanisms and lets them be used independently or in combination:
- Electrical / mechanical hum: sound associated with lighting electronics, transformers, ballasts, inductors, capacitors, PWM drivers, and enclosure resonances.
- Photoacoustic response: pressure variation generated by intensity-modulated light absorbed by a material and converted into heat.
- Sonification of sub-audible light structure: auditory mapping of slow flashing or pulsed light patterns into audible output for composition and analysis.
In practice, Distant Lights sits at the intersection of a software instrument, a signal-design environment, and a reproducible research tool.
- interactive browser-based instrument interface
- waveform and modulation control
- sequencing / timeline-based event arrangement
- multi-model layering
- preset save, load, import, and export
- WAV export
- MIDI export
- OSC export
- Python simulation layer for reproducible analysis
- FFT generation and comparison tooling
- validation notebook for measured-vs-simulated overlays
- publication-ready figures and documentation scaffolding
Distant Lights is not a toy visualizer and not just a browser sketch. It is a structured synthesis and validation environment built around physically motivated light-driven sound models.
models/ Physical and signal models
audio-engine/ Synthesis, rendering, and export engine
ui/ Controls, drawing, storage, and interaction
presets/ Built-in presets and parameter defaults
python/ Reproducible simulation and analysis code
data/ Example WAVs, FFT plots, and generated figures
notebooks/ Validation and comparison workflows
docs/ Technical documentation and report material
The browser application provides the interactive front end for exploring the model space. It is intended for fast iteration, listening, preset design, and export.
Key functions include:
- parameter control for waveform, modulation depth, resonance, thermal response, noise, and carrier mapping
- waveform previews for both light modulation and audio output
- sequencing of multiple events over time
- project-level playback and export
- preset management in-browser
The Python implementation exists for exact, reproducible, higher-precision work. It supports:
- offline synthesis
- parameter sweeps
- FFT analysis
- figure generation
- comparison against measured recordings
- notebook-driven validation workflows
The browser layer is optimized for immediacy. The Python layer is optimized for analysis and reproducibility.
The current system implements reduced-order versions of the underlying mechanisms. These are compact enough for interactive use while still preserving the main causal structure needed for sound design and comparative study.
When alternating current flows through coils and magnetic cores, ferromagnetic materials can expand and contract as the magnetic field changes (magnetostriction). In practical lighting systems, this can contribute to audible vibration through transformer laminations, ballasts, coils, capacitors, housings, and mounts.
We model the driving modulation as:

[ I(t) = I_0 \left( 1 + m\, s(t) \right) ]

where:
- (I_0) is the steady-state current or intensity baseline
- (m) is modulation depth
- (s(t)) is a normalized waveform such as sine, square, triangle, PWM, or rectified sine
The audible output is represented as a mixture of direct modulation, resonant structural response, and optional high-frequency whine:

[ y(t) = g \left[ (1 - \alpha)\, x_{\mathrm{dry}}(t) + \alpha\, H_{\mathrm{res}}\{x_{\mathrm{dry}}\}(t) \right] + A_{\mathrm{whine}} \sin(2\pi f_{\mathrm{whine}} t) ]

where:
- (x_{\mathrm{dry}}(t)) is the centered modulation signal
- (g) is gain
- (\alpha) is the dry-to-resonant blend (the Res mix parameter)
- (H_{\mathrm{res}}) is a resonant filter with center frequency (f_{\mathrm{res}}) and quality factor (Q)
- (A_{\mathrm{whine}}) and (f_{\mathrm{whine}}) control an additive switching or driver-whine term
This model captures the practical combination of mains-related ripple, PWM artifacts, structural resonance, and parasitic tonal components.
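As a rough illustration, the reduced hum model can be sketched with NumPy and SciPy. The function name and parameter defaults below are illustrative, not the project's actual API; the resonant response is approximated with `scipy.signal.iirpeak`:

```python
import numpy as np
from scipy.signal import iirpeak, lfilter

def hum(fs=48000, seconds=1.0, base_hz=100.0, depth=0.8,
        gain=0.5, res_hz=1200.0, res_q=8.0, res_mix=0.5,
        whine_hz=15000.0, whine_level=0.05):
    """Reduced hum model: modulation -> resonant filter -> additive whine."""
    t = np.arange(int(fs * seconds)) / fs
    s = np.sign(np.sin(2 * np.pi * base_hz * t))      # square-wave s(t)
    x_dry = depth * s                                  # centered modulation signal
    b, a = iirpeak(res_hz, res_q, fs=fs)               # resonant structural response
    x_res = lfilter(b, a, x_dry)
    y = gain * ((1 - res_mix) * x_dry + res_mix * x_res)
    y += whine_level * np.sin(2 * np.pi * whine_hz * t)  # driver-whine term
    return y
```

Swapping the square wave for sine, triangle, PWM, or rectified-sine shapes reproduces the other modulation options described above.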
If a material absorbs intensity-modulated light, part of that energy becomes heat, and the resulting thermal change can generate pressure variation. The full physical system can involve thermal diffusion, geometry, material properties, and acoustic boundary behavior.
The reduced-order model used here is:

[ \tau \frac{dT}{dt} = I(t) - T(t), \qquad p(t) \propto \frac{dT}{dt} ]

where:
- (T(t)) is an effective lumped temperature state
- (\tau) is the thermal time constant
- (p(t)) is modeled as proportional to the rate of thermal change
This is the interactive model used in the browser instrument.
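A minimal sketch of this lumped model, assuming a forward-Euler step and absorbing all material constants into the proportionality (function and argument names are illustrative):

```python
import numpy as np

def photoacoustic(intensity, fs=48000, tau=0.01):
    """Lumped thermal state: tau * dT/dt = I - T, with p(t) ~ dT/dt."""
    coeff = (1.0 / fs) / tau           # explicit-Euler step size dt / tau
    T = np.empty(len(intensity), dtype=float)
    prev = 0.0
    for i, I_now in enumerate(intensity):
        prev = prev + coeff * (I_now - prev)   # one Euler step of the ODE
        T[i] = prev
    p = np.diff(T, prepend=T[0]) * fs  # pressure proportional to dT/dt
    return p
```

Note the qualitative behavior: pressure is positive while the light heats the material and negative while it cools, which is what gives chopped light its percussive photoacoustic character.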
For higher-fidelity exploration, the Python layer also includes a one-dimensional finite-difference thermal/acoustic-style solver based on the heat equation:

[ \frac{\partial u}{\partial t} = \kappa\, \frac{\partial^2 u}{\partial x^2} ]

where:
- (u(x,t)) is temperature
- (\kappa) is thermal diffusivity
That solver provides a more physically explicit stepping stone toward measured comparison.
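The finite-difference idea can be sketched in a few lines; this is a generic explicit scheme under fixed (Dirichlet) boundaries, not the repository's actual solver:

```python
import numpy as np

def heat_step(u, kappa, dx, dt):
    """One explicit finite-difference step of du/dt = kappa * d2u/dx2.

    Stability requires dt <= dx**2 / (2 * kappa).
    """
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2  # interior Laplacian
    return u + kappa * dt * lap                          # endpoints held fixed

# Example: a hot spot diffusing along a 1-D rod
u = np.zeros(101)
u[50] = 1.0
for _ in range(200):
    u = heat_step(u, kappa=1e-4, dx=1e-2, dt=0.1)
```

Here `kappa * dt / dx**2 = 0.1`, comfortably inside the stability bound.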
Very slow flashing or pulsed structures may produce no directly audible content in a conventional listening context. To expose their temporal behavior, Distant Lights supports sonification by mapping the modulation envelope onto an audible carrier:

[ y(t) = \mathrm{env}(t)\, \sin(2\pi f_c t) ]

where:
- (\mathrm{env}(t)) is the slow envelope derived from the light pattern
- (f_c) is the carrier frequency
This is explicitly a mapping method for analysis and composition. It is not a claim that slow visible light directly produces that audible pitch.
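The mapping is simple enough to state directly in code; a minimal sketch (carrier and pattern values chosen for illustration):

```python
import numpy as np

def sonify(env, fs=48000, carrier_hz=440.0):
    """Map a slow light envelope onto an audible carrier: y = env * sin(2*pi*fc*t)."""
    t = np.arange(len(env)) / fs
    return env * np.sin(2 * np.pi * carrier_hz * t)

# A 2 Hz beacon-style flash pattern, made audible on a 440 Hz carrier
fs = 48000
t = np.arange(fs) / fs
env = (np.sin(2 * np.pi * 2.0 * t) > 0).astype(float)  # slow on/off pattern
audio = sonify(env, fs=fs, carrier_hz=440.0)
```

The output is silent exactly where the light is off, so the audible rhythm is the light's rhythm; the pitch is purely a display choice.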
The instrument exposes a compact but expressive parameter set. Core parameters include:
- Mode — selects electrical / hum vs photoacoustic behavior
- Waveform — modulation shape
- Base Hz — base modulation rate
- Depth — modulation amount
- PWM duty — duty cycle for PWM structures
- Thermal Hz / (\tau) — effective thermal response rate
- Res Hz — resonance center frequency
- Res Q — resonance sharpness / selectivity
- Res mix — dry-to-resonant blend
- Whine Hz / Whine level — additive switching tone controls
- Noise — stochastic broadband component
- Gain — output gain
- Carrier — carrier frequency for sonification mode
- Low-freq sonify — enables audible carrier mapping for slow patterns
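Conceptually, a preset is just a flat record of these values. The sketch below is a hypothetical schema mirroring the list above; the field names are illustrative and are not the project's actual preset format:

```python
# Hypothetical preset record; field names are illustrative, not the real schema.
preset = {
    "mode": "electrical",    # "electrical" or "photoacoustic"
    "waveform": "pwm",       # modulation shape
    "base_hz": 120.0,        # base modulation rate
    "depth": 0.8,            # modulation amount
    "pwm_duty": 0.25,        # duty cycle for PWM structures
    "thermal_tau": 0.01,     # effective thermal time constant (s)
    "res_hz": 2400.0,        # resonance center frequency
    "res_q": 12.0,           # resonance sharpness / selectivity
    "res_mix": 0.6,          # dry-to-resonant blend
    "whine_hz": 16000.0,     # additive switching tone frequency
    "whine_level": 0.05,     # additive switching tone level
    "noise": 0.02,           # stochastic broadband component
    "gain": 0.7,             # output gain
    "carrier_hz": 440.0,     # sonification carrier
    "low_freq_sonify": False # audible carrier mapping for slow patterns
}
```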
The current implementation is intentionally reduced-order. Important simplifications include:
- a compact resonant-response model instead of full multi-mode structural mechanics
- a reduced magnetostriction approximation instead of a full nonlinear hysteretic electromechanical model
- a one-dimensional thermal stepping model rather than a full multi-dimensional thermoacoustic solver
- simplified noise generation rather than calibrated measured noise distributions
- limited treatment of geometry, mounting, coupling media, and environmental effects
- no claim that the browser path is a laboratory-grade simulator
These simplifications are deliberate. They keep the system fast enough for interactive use while preserving enough structure to support meaningful design and comparison workflows.
Choose whether you want:
- mains-like hum
- resonant lighting electronics behavior
- chopped / pulsed photoacoustic texture
- slow beacon-style rhythm mapped into audible space
Select waveform, base rate, modulation depth, and duty cycle. This defines the temporal structure of the light-driven excitation.
Adjust resonance frequency, resonance Q, resonance mix, thermal response, and whine components until the output has the desired spectral and temporal identity.
Use layering to combine:
- hum
- photoacoustic texture
- sonified structure
Then automate selected parameters with envelope or LFO movement.
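Parameter automation of this kind can be sketched as a per-sample control track; the function below is an illustrative stand-in, not the instrument's automation API:

```python
import numpy as np

def lfo_automate(base, depth, rate_hz, n, fs=48000):
    """Per-sample parameter track: a base value swept by a sine LFO."""
    t = np.arange(n) / fs
    return base + depth * np.sin(2 * np.pi * rate_hz * t)

# Sweep the resonance center frequency around 1200 Hz at 0.5 Hz
res_hz_track = lfo_automate(base=1200.0, depth=400.0, rate_hz=0.5, n=48000)
```

The resulting track would then be read per block (or per sample) when rendering, so the resonance drifts audibly over the course of the event.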
Use the timeline editor to place multiple events over time. This allows one project to contain evolving sections, layered transitions, or multiple light-source behaviors.
Export:
- WAV for direct rendering
- MIDI for note/event transfer into a DAW
- OSC for driving external software or modular environments
Store presets or project states, generate variants, and move between the browser and Python layers depending on whether you are composing, testing, or validating.
Distant Lights supports project-level arrangement through sequencing and timeline control.
Multiple events can be arranged across time, allowing:
- distinct sections
- repeated motifs
- overlapping light-source behaviors
- staged transitions between models
Models can be layered so that hum, photoacoustic response, and sonification coexist in a single rendered output.
Envelope and LFO-style parameter control enable movement across:
- resonance
- depth
- thermal response
- whine level
- carrier frequency
- mix states
This makes the instrument suitable not only for static sound generation, but for evolving, performable textures.
MIDI export is intended for DAW and sequencer integration. Event timing can be translated into note-based control structures suitable for further orchestration, layering, and processing.
OSC export supports integration with external audio systems, modular setups, custom controllers, or research environments that accept structured real-time control messages.
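For a sense of what an OSC export carries, the sketch below hand-encodes a single-float message per the OSC 1.0 wire format (null-terminated, 4-byte-aligned strings, big-endian payload). The address path is hypothetical, not the project's actual namespace:

```python
import struct

def osc_message(address, value):
    """Encode a single-float OSC 1.0 message: address + ',f' type tag + big-endian float."""
    def pad(b):
        # OSC strings are null-terminated, then padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Hypothetical address path for illustration only
msg = osc_message("/distantlights/res_hz", 1200.0)
# msg is ready to send over UDP with socket.sendto(...)
```

In practice a library such as python-osc handles this encoding; the point here is only that each exported parameter change is one small addressed datagram.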
The Python layer provides exact, scriptable synthesis and analysis.
Main dependencies:
- numpy
- scipy.signal
- soundfile
- matplotlib
Typical use cases:
- reproducible offline rendering
- spectral analysis
- batch parameter sweeps
- figure generation
- comparisons against recordings
- notebook-based documentation of experimental runs
Example workflow:
```python
from python import models

preset = models.PRESETS[0]
samples = models.synthesize(preset, seconds=1.5)
models.export_wav(samples, "output.wav")
```

For generated examples and FFTs:

```
PYTHONPATH=python python data/generate_examples.py --seconds 2 --out data/examples
```

Distant Lights includes a validation workflow so theory can be compared against measurement rather than treated as a purely speculative synthesis exercise.
Recommended process:
- record a real lighting-related source or test setup
- load the recording into the notebook pipeline
- match simulation parameters to the measured conditions
- render the corresponding synthetic result
- compare waveform overlays, spectra, and spectrogram differences
- refine the parameters or model assumptions
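The comparison step of this process amounts to overlaying magnitude spectra of the measured and simulated signals. A minimal sketch using synthetic stand-ins for both (the helper name and test signals are illustrative, not the notebook's actual code):

```python
import numpy as np

def spectrum_db(x, fs):
    """Hann-windowed magnitude spectrum in dB via the real FFT."""
    w = np.hanning(len(x))
    X = np.abs(np.fft.rfft(x * w)) + 1e-12          # avoid log of zero
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs, 20 * np.log10(X)

# Synthetic stand-ins for a measured and a simulated 100 Hz hum
fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
measured = np.sin(2 * np.pi * 100 * t) + 0.01 * rng.standard_normal(fs)
simulated = np.sin(2 * np.pi * 100 * t)

f, m_db = spectrum_db(measured, fs)
_, s_db = spectrum_db(simulated, fs)
diff_db = m_db - s_db   # per-bin level difference for overlay/difference plots
```

Plotting `m_db` and `s_db` against `f` gives the overlay; `diff_db` feeds the difference visualization described below for the notebook.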
Suggested capture setups include:
- LED plus function generator for controlled modulation
- microphone near transformer or ballast
- photoacoustic cell or equivalent experimental geometry, where available
The repository notebook supports:
- audio loading
- simulation using matched parameters
- overlay plots
- spectrogram comparison
- difference visualization
The data/ directory is intended to hold:
- example WAV renders
- FFT plots
- publication-ready figures
- sensitivity plots
- comparison visuals derived from validation work
The project is structured so figures can be generated from the Python layer rather than assembled manually.
This repository is structured to support software citation and release archiving.
Recommended citation path:
- CITATION.cff for repository citation metadata
- .zenodo.json for Zenodo release metadata
- GitHub release tags for versioned software snapshots
- Zenodo archival release for DOI-backed version citation
When a DOI is minted through Zenodo, update the README badge and citation metadata to point to the released version.
- NCBI Bookshelf, Neuroscience – The Audible Spectrum
- Wikipedia, Perception of infrasound
- ELSCO Transformers, Why Do Transformers Hum?
- Würth Elektronik, ANP118 Acoustic Noise & Coil Whine Effect
- Analog Devices, Avoid the Audio Band with PWM LED Dimming at Frequencies Above 20 kHz
- Wikipedia, Photoacoustic effect
- MIT OpenCourseWare, The 1-D Heat Equation
- Gregory Kramer (ed.), Auditory Display
Distant Lights is a physics-informed software instrument and validation-oriented toolkit. The browser application prioritizes immediacy and exploration. The Python layer prioritizes reproducibility, analysis, and comparison to measured systems.