Lawrence Lek: Humanizing the Non-human in Virtual Worlds
Using the medium of game design, artist and filmmaker Lawrence Lek explores the nature of intelligence and the ethics of the non-human.
Our rapidly developing technologies change the socio-political landscape, the economy, and the environment, but they also alter the ways in which we understand humanity. The design of neural networks suggests that intelligence is no more than a dataset processed by a set of algebraic equations. Similarly, recent research in human behavioral biology suggests that behavioral patterns are shaped largely by events in our surroundings, the result of gene-environment interaction. It is important to acknowledge that our consciousness and artificial intelligence depend on similar flows of information.
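To make that claim concrete, here is a purely illustrative sketch, not drawn from Lek's work or from any particular system, of how a small neural network reduces to a dataset passed through a few algebraic operations. Every name and number in it is invented.

```python
# Illustrative sketch: a neural network's "intelligence" as nothing more than
# a dataset flowing through algebraic equations. All values are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# A "dataset": 4 examples with 3 features each (hypothetical values).
X = rng.normal(size=(4, 3))

# Two layers of parameters (here simply random stand-ins for learned weights).
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 2)), np.zeros(2)

# The entire forward pass is matrix algebra plus one nonlinearity:
# h = max(0, X W1 + b1), y = h W2 + b2
h = np.maximum(0.0, X @ W1 + b1)
y = h @ W2 + b2

print(y.shape)  # (4, 2): each input row mapped to an output row
```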
When we process information and do something with it, we create. What is more important: the act of creation, the recycling of information, or the conscious decision to put oneself in a certain environment? Professional gamers spend days in virtual environments, yet their actions generate value and profit in the physical world. In her seminal essay Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective, philosopher Donna Haraway argues that knowledge can be deeply biased even in respected scientific circles. Now that we understand that bias is something we cannot avoid, shouldn't it be something we are at least aware of?
New artistic tools and mediums help us reflect on these issues. Lawrence Lek is a contemporary artist who uses game engines as a creative medium to deliver his unique take on science fiction. He explores notions of the future and artificial intelligence, the relationship between the human and the non-human, and digital cultures such as gaming clans and internet communities. Some of Lek's stories are rendered as feature-length, stand-alone films. His work has been exhibited at the K11 Chi Art Space, the Victoria & Albert Museum, and the Garage Museum of Contemporary Art.
Digital designer and producer Vladimir Shlygin talked to Lawrence Lek about designing virtual worlds and the subjectivity of artificial intelligence.
The conversation took place in September 2019.
Artificial Intelligence as the Other
Lawrence Lek: The idea of identifying oneself with a non-human is something that I explored with the films AIDOL and Geomancer, the video essay Sinofuturism, and the open-world game 2065. Personification is widely used in poetry and literature: human characteristics are given to non-human phenomena, and the natural world is seen as a mirror of the self. And of course, as an author exploring these topics, I create characters from different observations of these phenomena.
Some artists, writers, and filmmakers deal with the idea of "the other." It can span individuals from different cultures, people from different times, or non-human people. In the projects above, for example, the protagonist is an AI. But each of these films is also an allegory of any group that has no voice yet. This group could be non-human; it could be a new political class of people, or people from a different time and place. Sinofuturism essentially functions as a kind of post-humanist allegory and also as speculative fiction.
Science fiction, in both film and literature, has served as a utopian or critical space for making statements about the present, where all the names and places have been changed. It grants a kind of immunity from reality, using the story and the setting as an allegory. In 1950s science fiction movies such as Creature from the Black Lagoon or Flash Gordon, the on-screen enemy is a thinly veiled reference to another race. When it became socially unacceptable to represent "the other" as another culture or race in the 70s and 80s, these fears were transferred onto the machinic alien. In Terminator and Blade Runner, the sentient machine antagonists are designed to instill an uncanny fear. These films deal with the embodiment of non-human intelligence, but they still place the alien mind within a humanoid body. Today, however, non-human intelligence could be anything: a game algorithm, a planet, the entire financial stock market. It comprises a wider spectrum of minds that are not necessarily confined to a physical body. This creates a huge range of interesting narrative possibilities.
Instead of humanizing the non-human, I'm interested in other ways of being. After all, there is often an implicit assumption that humanism is the ideal for contemporary society. But this ideal is socially constructed, and it takes the form of anthropocentric bias, the primacy of the rational mind, and ethical concerns about freedom and individualism. For example, one of the principles of the video essay Sinofuturism is that Chinese cultural development embodies seven key ideas that run against the rational humanist ideal. When I was researching deep learning, I noticed that the portrayals of AI in the media mirrored those of Chinese industrialization. In the case of the technological workforce, the equation of human workers to a "nameless, faceless mass" capable of endless work is exactly how robots are presented as a threat to human livelihood. The difference, of course, is that the Chinese workforce is biological but becomes dehumanized through its representations. The seven chapters of Sinofuturism are organized around other characteristics embraced by Chinese culture that are somehow seen as anti-humanist. The political idea is that rather than deny these corrupt traits, Sinofuturism embraces them and embodies them in a non-human lifeform, an AI whose goal is to optimize and survive. One of the chapters, "Copying," highlights how both deep learning and recent Chinese innovations are characterized as parasitic, aggregative mechanisms capable only of copying massive amounts of raw data rather than of original, innovative thought. Another chapter, "Addiction," draws a parallel between AI algorithms used to optimize decision-making processes and the creation of addictive behaviors in humans. Gamification, the process of making behavior addictive through play, draws upon the dopamine feedback loop: the habit-forming mechanism that rewards good decisions.
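As an illustrative aside, the feedback loop described above can be sketched in a few lines of code: an invented agent that keeps repeating whichever action has rewarded it most often, the same habit-forming dynamic that gamification exploits. The action names and reward probabilities are entirely hypothetical.

```python
# Hypothetical sketch of a reward feedback loop: rewarded actions become habits.
import random

random.seed(42)
reward_prob = {"open_loot_box": 0.7, "quit_session": 0.1}  # invented payoffs
value = {a: 0.0 for a in reward_prob}   # learned estimate of each action's payoff
counts = {a: 0 for a in reward_prob}

for step in range(1000):
    # Mostly exploit the currently preferred action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(reward_prob))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    counts[action] += 1
    # Incremental update: each reward nudges the habit a little further.
    value[action] += (reward - value[action]) / counts[action]

print(counts)  # the rewarded action dominates; the loop has become a habit
```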
It is a problematic situation when the conscious human attempts to speak on behalf of the non-human other. How can you, as the author, the observer, or the filmmaker, take the position of speaking on behalf of an ethnic, gendered, or non-human minority? But this goes back to the idea of personification, where you temporarily imagine yourself in the position of that other being. This embodiment in the other mind might help open up ethical and political dimensions that advance the discourse on what AI might become.
After working with this idea for a while, it’s become important to think about the difference between the simulation of consciousness and consciousness itself, or the simulation of intelligence and intelligence itself. The romantic ideal of an artist is that you create according to some intangible idea of “art.” But the more I became immersed in this process, the more I started wondering: am I just an algorithm, making “cost-benefit” decisions for aesthetic choices? This happened over and over again: I would have to make decisions to place this building here, frame this shot like that, or choose the words to generate an evocative phrase. What if my entire artistic process is not a biologically driven sense of intuition, but a process of probabilistic decision-making just like the cost functions used in machine learning? Instead of anthropomorphizing AI, I started to explore ways in which I behave like a deep learning algorithm. I don’t mean this as a call for some transhumanist viewpoint. But I started to think from another perspective, like a kind of empathy for the machine.
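To illustrate the analogy between aesthetic choices and the cost functions used in machine learning, here is a deliberately playful, hypothetical sketch; the options, weights, and the cost formula itself are invented for illustration and are not part of Lek's actual process.

```python
# Hypothetical sketch: an aesthetic choice framed as minimizing a cost function.
from dataclasses import dataclass

@dataclass
class ShotOption:
    name: str
    novelty: float       # 0..1, how unfamiliar the framing feels
    legibility: float    # 0..1, how clearly it reads to a viewer
    render_hours: float  # estimated production cost

def cost(option: ShotOption, w_novelty=1.0, w_legibility=1.5, w_time=0.1) -> float:
    # Lower cost is "better": reward novelty and legibility, penalize render time.
    benefit = w_novelty * option.novelty + w_legibility * option.legibility
    return w_time * option.render_hours - benefit

options = [
    ShotOption("wide drone pass over the island", 0.8, 0.6, 12.0),
    ShotOption("slow dolly through the casino lobby", 0.5, 0.9, 6.0),
    ShotOption("static frame of the satellite dish", 0.3, 0.7, 2.0),
]

best = min(options, key=cost)
print(best.name)  # the "decision" falls out of the scoring, not intuition
```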
Phenomenology of Virtual Worlds
There is a materialist focus in architecture that sees the virtual building—in its representation in a drawing, film, or rendering—as merely an intermediate stage for realizing the built artefact. One alternative to this view of architecture is a phenomenological one, where it is your perception, rather than physicality, that defines a place. When I started making my first virtual worlds, it was with this in mind: the experience of the viewer or inhabitant. Once I dissociated the idea of architecture from an obligation to build, I started wondering whether placing the existing world in a different time, place, political structure, or environmental scenario could result in a form of spatial practice that reflected the world in a different way.
Nowadays, the act of navigation takes place in virtual space as much as in real life. Streaming platforms make the process of searching almost effortless. But in reality, your choices decrease, because recommendation algorithms are designed to actively push content to you. Before services governed by prediction and profiling, like Spotify, emerged, the internet was populated by other forms of social space that arguably still offered a larger degree of choice. The chapter on "Gaming" in Sinofuturism shows how the communities built by online eSports and multiplayer games within the virtual world reflect a future reality. The low-polygon humanoids and environments I use inevitably become dated at an accelerated rate, but these kinds of synthetic objects are going to be more common as virtual and augmented reality develop. Moreover, there is real corporate interest in creating this new ubiquitous medium, where everyone is creating and sharing online.
Video games or virtual worlds are appealing because they are like sandboxes, where you can play and compose things in space and also in time. The medium enables you to create a collage of places, environments, soundscapes, and fragments of reality in a single place. When you make a world, it also changes your mental perceptions, because you become more playful with the possibilities of reality, while also observing reality in greater detail. I keep exploring how to embed different political or social frameworks in simulation, so that in the future, I can look back and compare the evolution of the real world with the fictional one.
I am interested in how fiction frames reality in a continuous feedback loop, where the present is informed by future speculation. In the screenplay for Geomancer (2017), which is set in the year 2065, there was a company called Farsight that made AI satellites. I thought: what if I take that fictional timeline seriously, as a kind of subconscious prophecy? The next year, I started a production company with the same name, framing subsequent exhibitions as if I were an anonymous content creator employed by this entertainment corporation. Farsight is thus a kind of "reality fiction," and so advertising, marketing, branding—the things I had not considered much outside of worldbuilding—suddenly became much more interesting. Many projects at the intersection of art, science, technology, and architecture have a kind of advertorial feel. Google sponsors art residencies and Spotify publishes features on digital creativity. Often art is presented as the main objective, but in the end, it also serves the function of paid advertising. In AIDOL, the sequel to Geomancer, Farsight expands into predictive entertainment, and its record label attempts to manipulate a singer into producing the kind of music that is so generic it can appeal to anybody, all the time.
The virtual world is also a way to communicate with an unknown future audience. This might be the most important kind of public: the one you don't even know. A few years ago, I had this realization that most of the people I am inspired by have no idea I exist. Maybe we have no friends in common, or they are really busy, or they just aren't alive. No nineteenth-century novelist wrote with me, as an individual, in mind. This idea was really liberating. I think of this cinematic universe I've been working on as an experiment, a time capsule that might one day be found by another kind of mind or person, one whose existence I can't even anticipate. This idea of an unknown audience really speaks to me, and it makes all the difference.
Cover image: Lawrence Lek, 2065, still image, 2018. Courtesy of the artist
Vladimir Shlygin
Vladimir Shlygin is a digital art and design director specializing in meaningful technology application and in-depth research for cultural projects.