From Innovation to Art: The History of AI Images
From ancient myths about artificial beings to the introduction of electronic computers in the mid-20th century, Ágnes Ferenczi reflects on a long-standing human ambition to breathe life into inanimate objects. This article is part of Foam Magazine #66: Missing Mirror – Photography Through the Lens of AI.
When the camera first appeared, it divided the art community. Some artists welcomed this new medium as a revolutionary tool for expression, while others viewed it with suspicion, fearing it might relegate older practices such as traditional portraiture to the background. However, as photography evolved, it enriched these traditional techniques and provided artists with new horizons, leading to the development of new artistic styles. Over time, photography transcended its status as a mere tool for documentation, becoming an independent artistic medium.
Throughout history, artists have consistently demonstrated their interest in technological advancements, often incorporating innovations into their creative processes. The camera obscura, a precursor to the modern camera, is a fine example, enabling artists such as Vermeer to create realistic paintings. Leonardo da Vinci explored the mechanics of flight, anatomy, and engineering, integrating these concepts into his artworks. Just as photography represented a significant technological breakthrough in the 19th century, the 20th century brought parallel advancements in digital technology and computing, all of which left their mark on art. Today, the widespread availability of artificial intelligence (AI) technologies has led many artists to adopt these tools for creative expression. However, just as in the early days of photography, this genre is still seeking its place and acceptance within the traditional art world.
The concept of AI traces back to ancient myths about artificial beings endowed with human-like qualities, reflecting a long-standing human ambition to breathe life into inanimate objects. Throughout the 1800s, this fascination grew, with early versions of artificial life appearing in literature, in works like Mary Shelley’s Frankenstein. However, the form of AI that we recognise today began to take shape with the advent of electronic computers in the mid-20th century. In 1950, Alan Turing, a pioneer of computer science, made a seminal contribution to the field with his paper ‘Computing Machinery and Intelligence’. In this work, he explored the question of whether a machine can think and proposed the Turing Test as a measure of machine intelligence. In 1956, the term ‘artificial intelligence’ was introduced by John McCarthy at the iconic Dartmouth Summer Research Project on Artificial Intelligence. McCarthy organised a two-month meeting of leading researchers to explore the potential for creating machines that could replicate human intelligence, formally establishing AI as a research field and setting the stage for future innovations.
As computers became faster and gained greater storage capacity in the 1960s, important developments followed. Programs such as the General Problem Solver addressed a broad spectrum of problems. In 1966, Joseph Weizenbaum created ELIZA, the first chatbot. Then, in 1972, Waseda University in Japan introduced WABOT-1, the first advanced humanoid robot capable of walking and communicating.
Alongside AI, computer-generated art took its first steps. Spurred by the rapid development of computer technology, by Max Bense’s information aesthetics, and by the systematic, interactive, and process-orientated aspects of kinetic, concrete, and op art, artists began to use autonomous systems such as computer programs and algorithms for artistic expression in the 1960s. In these early days, they frequently collaborated with scientists, as computers were not widely available and were primarily housed in universities, research institutions, or large corporations. Parallel to the rise of generative art, generative photography also emerged, with roots going back to 1920s experimental photography and 1950s concrete photography. Generative photography refers to the methodical creation of visual aesthetics through predefined programs that apply photochemical, photo-optical, or photo-technical operations, combining traditional photographic media with mathematical algorithms. The first exhibition to showcase these works was organised at Kunsthalle Bielefeld in 1968 and featured artists such as Hein Gravenhorst and Gottfried Jäger.
While these artists laid the foundation for computer-generated art, they were not using what we would now recognise as AI. Harold Cohen was the first artist to introduce AI into art with AARON, a program considered one of the first AI art systems, which he began developing in the early 1970s. AARON used a set of predefined rules created by Cohen to generate images autonomously, enabling the program to make independent decisions about composition. The software produced artworks with drawing and painting devices that Cohen built. Initially, it generated abstract monochrome line drawings that he coloured by hand; later, it evolved to generate more complex, colourful images, including recognisable real-world forms. Cohen’s development of AARON highlighted AI’s capacity for autonomous decision-making, simulating a form of creative autonomy.
The development of AI slowed significantly in the 1970s, a period that came to be known as the ‘AI winter’, as faltering progress led to decreased funding and growing criticism of the technology. The 1980s witnessed a revival with innovations such as expert systems and the ambitious ten-year Fifth Generation Computer Systems initiative funded by Japan. In the 1990s and 2000s, computers became cheaper, faster, and more widely available, with increased storage capacity, and the emergence of the internet provided access to vast amounts of data. Significant milestones included the victory of IBM’s Deep Blue, a chess-playing computer, over world champion Garry Kasparov, and IBM’s Watson, a natural language AI, winning the game show Jeopardy! against top contestants. AI found applications in fields including mathematics, engineering, and economics, demonstrating its capability to solve a wide range of problems.
Since the 2010s, neural networks and machine learning have opened new possibilities for AI, building on foundational research from the 1980s. Neural networks are computer systems designed to simulate the way the human brain works, and they can learn and adapt from data inputs. The technology has driven progress in fields such as image and speech recognition and natural language processing, and it has been particularly transformative in AI art. Generative Adversarial Networks (GANs), a form of deep learning developed by Ian Goodfellow and his colleagues in 2014, comprise two neural networks, a generator and a discriminator, trained simultaneously against each other until the generator can produce highly detailed and complex images. Starting around 2017, artists began to incorporate GANs into their art-making processes, and their approaches showcase two distinct directions.
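To make the adversarial idea concrete before turning to those two directions, here is a deliberately minimal sketch in plain Python. It is a toy, not how artists’ tools are built: the ‘generator’ is just an affine map trying to mimic samples from a one-dimensional Gaussian ‘dataset’, and the ‘discriminator’ is a simple logistic classifier. All names and numbers are invented for illustration.

```python
# Toy GAN in plain NumPy: a two-parameter generator G(z) = a*z + b tries to
# mimic samples drawn from N(4, 1), while a logistic discriminator
# D(x) = sigmoid(w*x + c) learns to tell real samples from generated ones.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0        # generator parameters
w, c = 0.1, 0.0        # discriminator parameters
lr, batch = 0.01, 64

for step in range(5000):
    z = rng.standard_normal(batch)        # random noise fed to the generator
    real = rng.normal(4.0, 1.0, batch)    # samples from the "real" data
    fake = a * z + b                      # generated samples

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean((d_real - 1) * real + d_fake * fake)
    c -= lr * np.mean((d_real - 1) + d_fake)

    # Generator step: push D(fake) towards 1 (the non-saturating GAN loss)
    d_fake = sigmoid(w * fake + c)
    ds = (d_fake - 1) * w                 # gradient of the loss w.r.t. each fake
    a -= lr * np.mean(ds * z)
    b -= lr * np.mean(ds)

print(f"generator output is roughly N({b:.2f}, {a * a:.2f})")  # b drifts towards 4
```

The two updates pull against each other, and that tension is the point: what makes a system a GAN is that both networks are trained simultaneously, each against the other’s current weaknesses.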
On one side, artists such as Robbie Barrat and Mario Klingemann utilise large datasets available on the internet to generate their art. Others, such as Helena Sarin and David Young, opt to train their models on a smaller scale, using their own watercolours, paintings, or photographs. Klingemann’s The Butcher’s Son (2017) stands as an early example of the use of GANs in art: he worked with AI to transform stick figures into paintings by analysing a vast number of internet-sourced images, showcasing how neural networks perceive the human body. Young’s Learning Nature (2018) series exemplifies an early adoption of a more personal approach to data; he trains machines on small datasets, such as his own photographs, to bring AI to a human scale. Sarin’s distinctive approach can be seen in her work AI Candy Store, where she uses her own watercolours, sketches, and culinary photography as data sources. Her technique involves a process of curating and training which allows the AI to reflect her artistic vision more distinctly.
In 2018, the use of GANs in art creation captured the traditional art world’s attention: Portrait of Edmond de Belamy, created by the French collective Obvious, sold at Christie’s for US$432,500, marking a significant milestone in AI art history. Many artists, both from traditional backgrounds and the emerging field of new media, have explored the capabilities of AI. Amongst them, new media artist Hito Steyerl showcased her video installation Power Plants at the prestigious Venice Biennale in 2019. In this work, she used neural networks to create a series of fictional plants, offering critical reflections on the complexities of the digital world and the societal implications of this technology. The exhibition of this work at such a renowned venue highlights the growing importance of AI in contemporary art.

During this period, companies began to recognise the potential of AI and invested in its research. Google made significant contributions, particularly with DeepDream, developed by Alexander Mordvintsev in 2015: an algorithm that modifies images by emphasising the patterns it detects, resulting in a dream-like, almost psychedelic appearance. The deep learning model CLIP (Contrastive Language–Image Pre-training), introduced by OpenAI in 2021, has also had a significant impact on AI art. Designed to understand the relationships between text and images, CLIP enables the creation of AI-generated art from text-based prompts. Alongside these innovations, diffusion models have also made their mark, creating images by gradually transforming random patterns of pixels into coherent works.
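To give a rough sense of how that gradual transformation works, the sketch below walks pure noise backwards through a sequence of small denoising steps. It is a toy under strong assumptions: a one-dimensional Gaussian stands in for the ‘dataset’ so that the denoising direction is known in closed form, whereas real diffusion models learn that direction with a trained neural network. Every number here is illustrative.

```python
# Toy diffusion-style sampler in plain NumPy: pure noise is walked backwards
# through 200 denoising steps towards a known 1-D Gaussian "dataset" N(2, 0.25).
# Real diffusion models replace the exact score below with a trained network.
import numpy as np

rng = np.random.default_rng(0)
mu, var = 2.0, 0.25                  # the target "data" distribution
T = 200
betas = np.linspace(1e-4, 0.05, T)   # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

x = rng.standard_normal(10_000)      # start from pure random noise
for t in range(T - 1, -1, -1):
    ab = alpha_bars[t]
    # Exact score of the noised marginal N(sqrt(ab)*mu, ab*var + 1 - ab)
    score = -(x - np.sqrt(ab) * mu) / (ab * var + 1.0 - ab)
    # One reverse step: nudge towards the data, then re-add a little noise
    mean = (x + betas[t] * score) / np.sqrt(alphas[t])
    noise = rng.standard_normal(x.shape) if t > 0 else 0.0
    x = mean + np.sqrt(betas[t]) * noise

print(f"samples: mean={x.mean():.2f} (target 2.00), var={x.var():.2f} (target 0.25)")
```

Run step by step, the samples drift from formless noise towards the target distribution, which is the same logic by which an image diffusion model turns random pixels into a coherent picture.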
GANs and diffusion models have played a significant role in transforming the post-photography genre, a movement that transcends traditional photography by incorporating digital manipulation and, often, AI. In 2020, Dutch photographer Bas Uterwijk, also known as Ganbrood, transitioned from traditional media to AI post-photography, creating portraits of historical figures who lived before the invention of the camera. One of his most celebrated works is a portrait of Jesus, for which he combined cultural, historical, and archaeological elements with the use of neural networks. Another important artist in this genre is Roope Rainisto, who employs custom-trained diffusion models and the visual language of traditional photography to create images that are both nostalgic and futuristic. By embracing AI’s capabilities, these artists can explore and depict ideas that traditional techniques alone could never realise.

In the last few years, AI research has grown rapidly, and the technology has become integrated into our daily lives through virtual assistants, advertising, and language models like ChatGPT. A similar trend is observable in AI art, with platforms such as DALL·E, Stable Diffusion, and Midjourney generating images from textual prompts, making art creation accessible to a broad audience.
The field of AI art, despite scepticism, has gained recognition and validation from prestigious institutions. Many museums, including the Los Angeles County Museum of Art, Centre Pompidou Paris, and the Museum of Modern Art New York, have incorporated such works into their collections. Beyond museum walls, AI art has also made its mark on various art fairs and biennales, such as the Venice Biennale and Art Basel, and renowned auction houses like Christie’s and Sotheby’s have featured these artworks, further legitimising the genre. Just as photography once did, AI art is on its way to being fully accepted as a form of contemporary art, reshaping creative expression.
About the author
Ágnes Ferenczi is an art historian with a specialisation in generative art, exploring its development from historical origins to contemporary practices. Her professional background includes work in museums and research on late 19th-century art, particularly Dutch art movements and art colonies. In 2021, she broadened her focus to digital media, contributing numerous articles on the intersection of art and technology. Currently, Ferenczi works as an art historian and gallery manager at the Zurich-based Kate Vass Galerie, a contemporary art gallery with a focus on new media art.
Foam Magazine #66: Missing Mirror
As part of the overarching project, Foam Magazine #66: Missing Mirror – Photography Through the Lens of AI looks at the growing overlaps between art, technology, and society, exploring how the recent advancements in AI impact our relationship with the image, ourselves, and our perception of reality. How do we form a truthful image of the world when credibility is questioned? And vice versa, how do we recognise ourselves in the images around us?
Foam Magazine has been awarded several prizes for both its high-grade graphic design and the quality of its content, most recently being named Photography Magazine of the Year at the Lucie Awards in 2017 and 2019.
Foam Magazine is an international photography magazine published twice a year by Foam Fotografiemuseum Amsterdam.