Machine-generated series


Generative Art / StyleGAN2 / Dataset Curation / Machine Learning



Training StyleGAN2 on curated datasets of mudras, beaches, and sunsets to explore what machines learn about form, light, and gesture.

Personal Project, 2020 - 2021

What does a machine see when it looks at a thousand mudras? Ten thousand beaches? A year of sunsets through the same window? These experiments are about feeding StyleGAN2 a visual world and watching what it chooses to learn. The datasets are deliberate. Sacred hand gestures with centuries of encoded meaning. Coastlines shaped by chaos. Light filtered through glass. Each one asks the same question: what does the model understand, and what does it invent?

This work was made in 2020 and 2021, before the ChatGPT era, when AI-generated art was still a raw, experimental frontier and every result meant training your own models from scratch.

Role

Dataset Curation, ML Training, Creative Direction

Stack

StyleGAN2, U2Net, OpenCV, VQVAE, Processing

Datasets

Custom-curated, 1,000 to 10,000 images each
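Curation mostly means getting thousands of photos into a uniform square resolution before training. A minimal numpy-only sketch of that normalization step (the real pipeline used U2Net and OpenCV; the sizes and nearest-neighbor resize here are illustrative assumptions, not the actual code):

```python
import numpy as np

def center_crop_square(img):
    """Crop an H x W x C image to its largest centered square."""
    h, w = img.shape[:2]
    s = min(h, w)
    y, x = (h - s) // 2, (w - s) // 2
    return img[y:y + s, x:x + s]

def downsample(img, size):
    """Nearest-neighbor resize of a square image to size x size."""
    s = img.shape[0]
    idx = np.arange(size) * s // size
    return img[idx][:, idx]

# e.g. normalize a 768 x 1024 photo to a 512 x 512 training tile
# (sizes are illustrative, not the project's real settings)
photo = np.zeros((768, 1024, 3), dtype=np.uint8)
tile = downsample(center_crop_square(photo), 512)
```

In practice a matting model like U2Net would also mask out backgrounds before cropping, so the generator spends its capacity on the subject rather than the clutter around it.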

Period

2020 - 2021

A

Machine-generated mudras

Hastas and mudras are hand gestures from Indian classical dance. Each one carries specific meaning. The difference between two gestures can be a single finger's curl, a slight rotation of the wrist. I wanted to know if a machine could learn something that precise, that culturally specific, that human.

The model captures the broad vocabulary. It produces convincing hands in convincing positions. But the most interesting outputs are the ones it invents: gestures that exist somewhere between known mudras. Positions that feel right but have never been performed. The machine learned the grammar and started writing its own sentences.

Traversal through the latent space of machine-generated mudras.
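Those in-between gestures come from walking the latent space: pick two latent codes and interpolate along the sphere between them, rendering a frame at each step. A minimal sketch of that traversal (the 512-dimensional latent size is the standard StyleGAN2 default; the generator itself is omitted):

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.

    StyleGAN2 latents are roughly Gaussian, so interpolating along
    the sphere keeps intermediate points on the same shell instead
    of drifting toward the low-probability origin, as a straight
    linear blend would.
    """
    u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0  # vectors already aligned
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# A traversal is a sequence of interpolated latents fed to the
# generator; each one decodes to a gesture between two known mudras.
rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 60)]
```

Each intermediate latent decodes to a plausible hand, which is why the invented gestures read as grammatical: they sit on the path the model has learned between two real ones.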
B

Machine-generated beaches

Ten thousand photographs of beaches around Seattle. Irregular shorelines, turbulent water, wet sand reflecting sky. Where mudras demand precision, coastlines are pure chaos. Every image is different in ways that are hard to quantify. The model learns all of it. It generates beaches that are entirely synthetic but immediately recognizable as the Pacific Northwest: the grey light, the dark sand, the heavy sky.

StyleGAN2 trained on 10,000 beach photographs from around Seattle.
C

Machine-generated Seattle sunsets

The most constrained dataset of the series. Every image is a sunset photographed through the same window in my room. Same frame. Same vantage point. Different sky. By removing all variation except the light itself, the model is forced to focus on exactly what changes: color, cloud shape, the quality of the glow.

The latent space is mapped to an audio track. The sunsets shift and pulse with the music. Artificial color grading pushes the output into territory that is clearly synthetic but visually arresting. Sunsets that could never exist, generated from sunsets that did.
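One way to map a track onto the latent space is to reduce the audio to a per-frame loudness envelope and use it to push a base latent along a fixed direction, so louder passages move further from the resting sunset. A toy sketch of that coupling (all names and the synthetic one-second waveform are illustrative assumptions, not the project's actual mapping):

```python
import numpy as np

def rms_envelope(samples, frame_len):
    """Per-frame RMS loudness of a mono waveform."""
    n = len(samples) // frame_len
    chunks = samples[:n * frame_len].reshape(n, frame_len)
    return np.sqrt((chunks ** 2).mean(axis=1))

rng = np.random.default_rng(1)
base_z = rng.standard_normal(512)          # the "resting" sunset
direction = rng.standard_normal(512)       # a direction to pulse along
direction /= np.linalg.norm(direction)

t = np.linspace(0, 1, 44100)               # one second of fake audio
audio = np.sin(2 * np.pi * 220 * t) * np.linspace(0, 1, t.size)
env = rms_envelope(audio, 1470)            # 30 fps at 44.1 kHz
latents = base_z + np.outer(env / env.max(), direction)
```

Decoding each row of `latents` yields one video frame; because every frame stays near the same base code, the sky pulses with the music instead of cutting between unrelated sunsets.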

Latent space mapped to audio. Trained on sunsets from a single window.
D

Machine-generated me

Can a machine generate me? Not a generic face. My face, my expressions, my movement. StyleGAN2 produces a version of me that blinks, shifts, and reacts with unsettling naturalism. It learned what I look like well enough to perform as me.

Behind the face, a second model trained on abstract textures generates a reactive background. It watches the first model's output and responds to its expressions. Two generative systems in conversation: one producing a person, the other producing a world that moves with them.
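The coupling between the two systems can be sketched as a loop: reduce each face frame to a low-dimensional "expression" signal, then condition the texture model's latent on it. A toy version with stand-in generators (both model functions below are hypothetical placeholders, not the trained networks):

```python
import numpy as np

rng = np.random.default_rng(2)

def face_model(z):
    """Stand-in for the face generator: latent -> 64 x 64 'frame'."""
    return np.tanh(np.outer(z[:64], z[64:128]))

def texture_model(z, expression):
    """Stand-in for the background generator, conditioned on the
    face model's expression signal."""
    return np.tanh(np.outer(z[:64] * (1.0 + expression), z[64:128]))

z_face, z_bg = rng.standard_normal(128), rng.standard_normal(128)
frames = []
for _ in range(10):
    face = face_model(z_face)
    expression = float(np.abs(face).mean())   # crude activity measure
    background = texture_model(z_bg, expression)
    frames.append((face, background))
    z_face += 0.1 * rng.standard_normal(128)  # drift the face latent
```

The design choice is that the second model never sees pixels directly, only a compressed signal derived from them, which keeps the background abstract while still making it react to the face.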

Two generative models in dialogue: one produces a face, the other reacts to it.