The Code Before the Storm
Long before generative AI became mainstream conversation, a small group of artists and technologists were already experimenting with machine-assisted creativity.
Today the world knows tools like OpenAI’s DALL·E, Midjourney, and xAI’s Grok Imagine. Millions of people can now generate images instantly using prompts alone. But years before AI image generation entered public consciousness, a handful of experimental creators were already exploring the intersection of neural networks, abstraction, immersive environments, and fine art.
Among those early experiments was a unique father-and-son collaboration between painter Matt Vegh and artist-technologist Nathaniel Vegh.
Their work was not focused on replacing painting. Instead, it explored how computational systems could extend artistic lineage itself.
Training a Neural Network on a Family's Abstract Lineage
By the late 2010s, Nathaniel Vegh had begun experimenting with neural style transfer, generative adversarial networks (GANs), and immersive projection environments. Unlike many early AI image experiments that relied on broad, scraped internet datasets, Nathaniel trained generative systems on tens of thousands of images of his father Matt Vegh’s abstract paintings and visual archive.
This created something unusual for its time:
a machine-learning system trained not on generalized internet aesthetics, but on a family-developed visual language accumulated over years of painting practice.
The result was not merely “AI art” in the contemporary sense. It was closer to computational inheritance: an attempt to teach a machine the structural logic, color relationships, gestural vocabulary, and compositional tendencies of a living painter’s body of work.
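To make the idea of teaching a machine a painter’s color relationships more concrete: neural style transfer, one of the techniques mentioned above, typically compares images not pixel by pixel but through Gram matrices, which capture correlations between the feature channels of a convolutional network. Below is a minimal NumPy sketch of that style loss. It is illustrative only; the actual Vegh training pipeline has not been published, and every name here is an assumption.

```python
# Minimal sketch of the Gram-matrix "style loss" used in classic neural
# style transfer. Illustrative only -- not the Veghs' actual pipeline.
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Normalized correlations between feature channels.

    features: shape (channels, height, width), e.g. one convolutional
    layer's activations for an image of a painting.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)      # flatten spatial dimensions
    return flat @ flat.T / (c * h * w)     # channel-by-channel correlations

def style_loss(source_feats: np.ndarray, generated_feats: np.ndarray) -> float:
    """Mean squared difference between the two Gram matrices."""
    g_src = gram_matrix(source_feats)
    g_gen = gram_matrix(generated_feats)
    return float(np.mean((g_src - g_gen) ** 2))

# Toy usage with random stand-ins for network activations:
rng = np.random.default_rng(0)
painting = rng.standard_normal((8, 16, 16))
candidate = rng.standard_normal((8, 16, 16))
print(style_loss(painting, painting))         # -> 0.0 (identical style)
print(style_loss(painting, candidate) > 0.0)  # -> True (styles differ)
```

Intuitively, minimizing this loss pushes a generated image toward the same texture and color statistics as the source, which is one plausible mechanism behind the "family visual language" effect described above.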
Years before public debates around dataset ethics and artistic authorship became mainstream, the Vegh projects were already exploring questions that now define the AI art conversation:
· Can artistic language be translated computationally?
· What happens when a family artistic lineage becomes a training corpus?
· Can machines extend human aesthetic systems rather than replace them?
Beyond Screens: Immersive Generative Sculpture
One of the defining characteristics of the Vegh experiments was the refusal to treat AI output as flat digital imagery alone.
Instead, Nathaniel designed and constructed physical sculptural environments integrating:
· projection systems,
· immersive lighting,
· generative animation,
· architectural installation design,
· and interactive audience engagement.
At exhibitions in China, including projects associated with the 2019 Chengdu Creativity & Design Week, visitors encountered a glowing, touch-capable spherical display embedded within monumental sculptural forms. These installations combined ancient symbolic aesthetics with machine-generated abstraction.
Audience members interacted physically with the works, extending their hands toward a responsive glowing orb that displayed continuously evolving AI-generated imagery derived from Matt Vegh’s paintings.
This was years before “immersive AI experiences” became a widespread cultural trend.
The installations existed in a space between:
· sculpture,
· ritual object,
· projection architecture,
· computational theater,
· and generative painting.
Early Generative Art Before the AI Boom
It is important to remember how early these experiments were.
In 2018 and 2019:
· consumer diffusion models did not yet exist,
· Midjourney had not launched,
· Stable Diffusion had not been released,
· and public awareness of generative AI remained extremely limited.
Most AI art at the time remained confined to research labs, isolated experiments, or highly technical online communities.
The Vegh projects instead attempted something more culturally ambitious:
bringing machine-generated abstraction into physical exhibition environments connected to contemporary fine art culture.
Rather than positioning AI as a novelty, the work explored continuity between:
· traditional painting,
· large-scale artistic production,
· generational knowledge transfer,
· and computational creativity.
The Emergence of Hybrid Artistic Ecosystems
The collaboration between Matt and Nathaniel Vegh reflects a broader transformation now occurring throughout the creative world.
The future of art may not belong exclusively to traditional painters, nor exclusively to engineers. Instead, a new hybrid category is emerging:
artists who combine physical craftsmanship, computational systems, immersive design, and cultural authorship.
In this sense, the Vegh projects represent an early example of a hybrid creative ecosystem:
· multi-generational,
· technologically experimental,
· commercially active,
· and deeply rooted in physical artistic practice.
The story is not about machines replacing artists.
It is about artists evolving their mediums.
A New Chapter in Artistic Lineage
For centuries, artistic knowledge passed from generation to generation through apprenticeships, studios, sketches, and direct observation.
Today, a new possibility has emerged:
the translation of artistic lineage into computational form.
The Vegh experiments stand as an early example of this transition: painting became data, data became generative structure, and generative structure became immersive experience.
Before AI art became fashionable, before image generation became instant, and before neural aesthetics entered mainstream culture, a small number of artists were already asking a deeper question:
What happens when artistic inheritance itself becomes programmable?
And in that question may lie the future of art itself.