What does a machine see when it looks at a thousand mudras? Ten thousand beaches? A year of sunsets through the same window? These experiments are about feeding StyleGAN2 a visual world and watching what it chooses to learn. The datasets are deliberate. Sacred hand gestures with centuries of encoded meaning. Coastlines shaped by chaos. Light filtered through glass. Each one asks the same question: what does the model understand, and what does it invent?
This work was made in 2020 and 2021, before the ChatGPT era, when AI-generated art was still a raw, experimental frontier and making anything meant training your own models from scratch.
Machine-generated mudras
Hastas and mudras are hand gestures from Indian classical dance. Each one carries specific meaning. The difference between two gestures can be a single finger's curl, a slight rotation of the wrist. I wanted to know if a machine could learn something that precise, that culturally specific, that human.
The model captures the broad vocabulary. It produces convincing hands in convincing positions. But the most interesting outputs are the ones it invents: gestures that exist somewhere between known mudras. Positions that feel right but have never been performed. The machine learned the grammar and started writing its own sentences.
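One way to go looking for those in-between gestures deliberately is to interpolate between points in the model's latent space. Here is a minimal sketch, assuming a generator `G` loaded with the stylegan2-ada-pytorch API; the seeds and step count are illustrative, not the settings behind these outputs:

```python
# Walk the latent space between two learned gestures.
# Assumes `G` is a trained StyleGAN2 generator (stylegan2-ada-pytorch API).
import numpy as np
import torch

@torch.no_grad()
def interpolate_mudras(G, seed_a, seed_b, steps=16, device="cuda"):
    """Blend from one gesture to another through the intermediate W space."""
    z_a = torch.from_numpy(np.random.RandomState(seed_a).randn(1, G.z_dim)).float().to(device)
    z_b = torch.from_numpy(np.random.RandomState(seed_b).randn(1, G.z_dim)).float().to(device)
    # Interpolating in W rather than Z tends to stay on the manifold of
    # plausible images, which is where the "almost a mudra" outputs live.
    w_a = G.mapping(z_a, None)
    w_b = G.mapping(z_b, None)
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        w = (1.0 - t) * w_a + t * w_b        # linear blend of the two gestures
        frames.append(G.synthesis(w, noise_mode="const"))
    return frames
```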
Machine-generated beaches
Ten thousand photographs of beaches around Seattle. Irregular shorelines, turbulent water, wet sand reflecting sky. Where mudras demand precision, coastlines are pure chaos. Every image is different in ways that are hard to quantify. The model learns all of it. It generates beaches that are entirely synthetic but immediately recognizable as the Pacific Northwest: the grey light, the dark sand, the heavy sky.
Machine-generated Seattle sunsets
The most constrained dataset of the series. Every image is a sunset photographed through the same window in my room. Same frame. Same vantage point. Different sky. By removing all variation except the light itself, the model is forced to focus on exactly what changes: color, cloud shape, the quality of the glow.
A path through the latent space is driven by an audio track. The sunsets shift and pulse with the music. Artificial color grading pushes the output into territory that is clearly synthetic but visually arresting. Sunsets that could never exist, generated from sunsets that did.
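One plausible way to wire sound to the generator is sketched below: extract a loudness envelope from the track and let it push the latent vector along a fixed direction. The feature choice (RMS energy via librosa) and the mapping are assumptions, not the exact pipeline behind the piece:

```python
# Turn a track's loudness envelope into per-frame latent vectors.
# The mapping here is an illustrative assumption: louder = further
# along one fixed direction in latent space.
import librosa
import numpy as np

def audio_to_latent_path(audio_path, z_dim=512, fps=30, seed=0):
    y, sr = librosa.load(audio_path, sr=None)
    hop = sr // fps                               # one feature frame per video frame
    rms = librosa.feature.rms(y=y, hop_length=hop)[0]
    rms = rms / (rms.max() + 1e-8)                # normalize loudness to [0, 1]

    rng = np.random.RandomState(seed)
    base = rng.randn(z_dim)                       # the "resting" sunset
    direction = rng.randn(z_dim)                  # where the music pushes it
    # Louder moments displace the latent further, so the sky pulses in time.
    return np.stack([base + 2.0 * amp * direction for amp in rms])
```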
Machine-generated me
Can a machine generate me? Not a generic face. My face, my expressions, my movement. StyleGAN2 produces a version of me that blinks, shifts, and reacts with unsettling naturalism. It learned what I look like well enough to perform as me.
Behind the face, a second model trained on abstract textures generates a reactive background. It watches the first model's output and responds to its expressions. Two generative systems in conversation: one producing a person, the other producing a world that moves with them.
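A rough sketch of that conversation, under stated assumptions: two stylegan2-ada-pytorch-style generators, with the frame-to-frame change in the face standing in for "expression" and nudging the texture model's latent. The coupling signal and gain are illustrative choices, not the original implementation:

```python
# Two generators in conversation: the background's latent is nudged
# by how much the face changed since the last frame. All coupling
# choices here are assumptions for the sketch.
import torch

@torch.no_grad()
def coupled_frames(G_face, G_texture, w_face_path, gain=0.5, device="cuda"):
    """Yield (face, background) pairs where the background reacts to the face."""
    z_bg = torch.randn(1, G_texture.z_dim, device=device)
    prev = None
    for w in w_face_path:                         # precomputed W-space path for the face
        face = G_face.synthesis(w, noise_mode="const")
        # Mean absolute pixel change between frames as a crude "expression" signal.
        change = 0.0 if prev is None else (face - prev).abs().mean().item()
        prev = face
        # Bigger expression shifts push the texture model's latent harder.
        z_bg = z_bg + gain * change * torch.randn_like(z_bg)
        w_bg = G_texture.mapping(z_bg, None)
        yield face, G_texture.synthesis(w_bg, noise_mode="const")
```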