Last week I attended the Post-human Territories session as part of Amsterdam’s FIBER Festival. The session explored the influence and impact of AI (specifically, machine learning) in environmentalist and humanitarian endeavours.
By far the most inspiring contribution to the evening was that of engineer and artist Tega Brain, who shared her project Deep Swamp.
The installation consists of three semi-submerged micro ‘landscapes’, each managed by its own machine learning agent that uses image recognition to evaluate its performance and adjust the landscape placed under its care. Nothing mind-blowing, you might think. But there’s a catch.
Each agent has been trained on a different set of images. The first (‘Harrison’) has been fed a database of images tagged with the word ‘wetland’, mostly sourced from sites like Flickr. This yields a fairly straightforward, if often idyllic, definition of ‘wetland success’, as understood by the machine learning program.
The second agent (‘Hans’) evaluates the success of its own care-taking efforts based on a training set of Renaissance paintings. It is, essentially, doing its best to produce a work of art.
The third agent (‘Nicolas’) just wants attention. His training set is a collection of images containing audiences: people in various states of wonder, excitement, or awe. His measure of success is getting as many responsive people in his images of the landscape as possible.
What this beautifully illustrates is that machine learning programs have no objective understanding of the concepts and values that define so-called ‘success’. Those concepts and values are baked into the datasets they have been trained on, so such systems can easily perpetuate whatever biases those datasets contain.
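To make the point concrete, here is a deliberately simplified toy sketch (my own, not the artist’s code, and nothing like the real installation’s image recognition): each hypothetical agent scores the very same landscape state against the average of its own training data, so what counts as ‘success’ is entirely a product of the dataset. The feature names and numbers are invented for illustration.

```python
# Toy illustration: 'success' is just closeness to an agent's training data.
# Features are hypothetical, e.g. (greenness, water coverage, people present).

def mean_features(dataset):
    """Average feature vector of a training set (equal-length tuples)."""
    n = len(dataset)
    return [sum(img[i] for img in dataset) / n for i in range(len(dataset[0]))]

def score(landscape, reference):
    """Higher = closer to the training data's notion of success."""
    return -sum((a - b) ** 2 for a, b in zip(landscape, reference))

# Invented training sets standing in for the three agents' datasets.
wetland_photos  = [(0.9, 0.8, 0.0), (0.8, 0.9, 0.1)]  # 'Harrison'
renaissance_art = [(0.4, 0.2, 0.3), (0.5, 0.3, 0.4)]  # 'Hans'
audience_shots  = [(0.2, 0.1, 0.9), (0.3, 0.2, 0.8)]  # 'Nicolas'

landscape = (0.85, 0.85, 0.05)  # one and the same tank state

for name, data in [("Harrison", wetland_photos),
                   ("Hans", renaissance_art),
                   ("Nicolas", audience_shots)]:
    print(name, round(score(landscape, mean_features(data)), 3))
```

The same lush, watery tank that Harrison rates highly is a poor result for Hans and a failure for Nicolas: nothing about the landscape changed, only the data behind each agent’s definition of success.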
A link to the project on the artist’s website is below: