Process is a personal exploration of generative AI image and video systems, created with Stable Diffusion and AnimateDiff. Beginning in 2021, Chris May extended his earlier experiments in generative music and chance-driven MIDI into audiovisual loops in which visuals and sound emerge from the same process. Each piece is scored with original music, shaping system-driven randomness into cohesive works.
I began exploring generative AI image systems in 2021, building on an earlier fascination with generative music and random MIDI experiments. That curiosity about chance, automation, and pattern led me to work with systems I can’t fully control, shaping their outputs into forms that feel alive.
I’m less interested in making machines appear human than in the space where intention and emergence begin to blur.
Technique: Oscillator-based audiovisual capture processed through a custom generative workflow.
TG-8H-2
moving walkway
blush
entanglement