

# Tabula Rasa Code: Rethinking the Intelligence of Machine Minds

## Introduction

Although AI has been around since the 1950s, advances over the past decade, particularly in machine learning, have brought about a radical change in what we call AI. With this “new AI,” programmers no longer encode rules but instead build network frameworks. Then, using large quantities of data, the networks are trained and their components adjusted - or tweaked - until the system starts performing as desired.
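
To make that process concrete, here is a minimal sketch - my own illustration, not code from the essay - of a network “framework” whose weights are adjusted against data rather than programmed with rules. It uses NumPy, and a toy XOR problem stands in for the large quantities of data the essay describes.

```python
# A tiny "framework": two layers of randomly initialized weights, no rules encoded.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

# The "data": inputs and the behavior we want the network to exhibit (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: repeatedly nudge - "tweak" - the components until the outputs match.
for step in range(5000):
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # the network's current guess
    err = out - y                 # how far off it is

    # Backpropagate the error and adjust every weight a little.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2))  # after training, close to the desired [0, 1, 1, 0]
```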


Part of the magic, and the enigma, of AI is that we can’t open up a trained network to see what “rules” it has learned, because it doesn’t have any. Its understanding has developed organically, from the bottom up. And, as a result, it is not possible to fully know why the machine behaves as it does.

Despite this mysterious opacity, the future is being built on AI. It is poised to be the latest in the inevitable progression of technologies that brings greater efficiency and optimization of all aspects of our lives - with little regard for any unintended consequences. And so we are resigned to the inevitable disruption that will follow in AI’s wake, cynically aware of how it may be used for the benefit of those in power, and that most people will have very little say in what that future will be.

It is in this context that artists have been exploring AI - as subject matter, as collaborator, and as a medium. Like AI programmers, artists working with AI don’t encode rules, but instead train networks. The resulting artworks look and feel different from previous forms of computer-based art, for they reflect the organic messiness of their vast and inscrutable neural networks. My own work with AI, created with GANs (generative adversarial networks), exists in contrast to the vast scale of commercial AI. Using small data sets, I am exploring how AI builds an understanding of the natural world, and how the images I create relate to earlier art movements and technological innovations. (More information is in my previous essay Little AI.) In this essay, I step back from that broader project in order to examine the very basics of AI. I want to learn what is fundamentally different about art made with AI. Can I then use that difference to create unique images? Through all of my work I ask: Do we need to develop a new concept of beauty to evaluate what emerges from working with these systems? And is it possible that a new aesthetic can give us a better, more empowered, understanding of AI?
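
The GANs mentioned above pit two networks against each other: a generator that invents images and a discriminator that judges whether they look like the training data. The following is a minimal sketch of that adversarial setup - my own illustration, assuming PyTorch, with a one-dimensional Gaussian standing in for a small data set of images.

```python
# Minimal GAN training loop: generator vs. discriminator (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_labels = torch.ones(64, 1)
fake_labels = torch.zeros(64, 1)

for step in range(2000):
    # The small "data set" the generator must learn to imitate.
    real = torch.randn(64, 1) * 0.5 + 3.0

    # Train the discriminator: real samples should score 1, generated ones 0.
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), real_labels) + loss_fn(discriminator(fake), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: adjust it until the discriminator mistakes its output for real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real distribution (mean ~3).
print(generator(torch.randn(256, 8)).mean().item())
```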

Fundamental to AI and machine learning is the concept of training - teaching the machine to understand its subject. But what does it mean for the machine to “know” something? In philosophy, John Locke (1632–1704) proposed that the only knowledge humans can have is through experience. He believed the mind to be a tabula rasa - a “white paper” or “blank tablet” - “on which the experiences derived from sense impressions as a person’s life proceeds are written. There are thus two sources of our knowledge and ideas: sensation and reflection.”¹ This sounds remarkably similar to a contemporary AI system and how it learns. Just as with Locke’s conception of the mind, the machine begins as a blank tablet, knowing nothing. It is then trained with data - the “sensation” upon which it “reflects” - in order to learn. In Locke’s model, one develops rational knowledge from experiences in the natural world.

But unfortunately, the data we provide to the machine is not neutral, for embedded within all data are human biases - the preconceptions, irrationalities, and emotions of those who collected it. Even the tools we use to collect that data - from their interfaces that control what we can see, to their underlying APIs, code libraries, and sensor technologies - each adds its own particular bias.
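
A toy example - again my own, not the essay’s - can make this concrete: if the people collecting the data favor certain readings and the sensor itself clips others, then even the simplest “model” trained on that record inherits the skew.

```python
# Bias entering through both the collectors and the collecting tool.
import numpy as np

rng = np.random.default_rng(1)
world = rng.normal(loc=0.0, scale=1.0, size=100_000)   # the natural world: mean ~0

# Human bias: collectors are twice as likely to keep a positive reading.
keep = rng.random(world.size) < np.where(world > 0, 0.9, 0.45)
collected = world[keep]

# Tool bias: the sensor cannot register values below -1, so they are clipped.
recorded = np.clip(collected, -1.0, None)

# "Training" here is just estimating the mean - the simplest possible model.
print(f"true mean of the world:  {world.mean():+.2f}")
print(f"mean the machine learns: {recorded.mean():+.2f}")   # noticeably shifted upward
```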
