Suzanne Treister
2017

Interview with Kenric McDowell at Google Machine Intelligence
Introduction initiated by Lucy Sollitt


ST: What kinds of things is Google researching around machine intelligence at the moment? What are the sub-areas? And where is Google at with all of this?

K: Within the research groups I interact with, there are major efforts around machine vision or machine perception. Emotion recognition is one primary area of focus. Our group also works on some more infrastructural areas of machine learning, including user privacy and debiasing machine learning systems. Research at Google generally exists two years ahead of products and is very tied to future product features.

ST: What's your own perception of this work? How do you feel about it?

K: I'm involved because machine intelligence is not only technically and mathematically interesting, but also a site for political intervention that requires strong interdisciplinary and critical thinking skills. Neural nets literally compress epistemologies into numbers; they shouldn’t be made without investigating foundational assumptions about society, meaning, even the nature of mind. Take, for example, a neural net designed to predict crime rates (this already exists). The way crime and criminality are defined in training can reinforce or transform systemic bias. Tech work must expand to include ways of thinking that are not strictly mathematical or programmatic.

ST: What do you find most interesting about what Google is finding out?

K: I find it very interesting how arbitrary (and flexible) the methods of knowledge-modeling can be. For example, modeling emotions for an emotion recognition system is incredibly difficult, and while one can make a reasonable guess at how to do it, there is no one correct model. So we are forced to see the limits of our epistemology, which in a technical culture like Google's is a very enlightening experience.

ST: What do you think the repercussions are for other forms of technology?

K: I think the repercussions overall are a shift toward probabilistic and multidimensional models of thought, in technology and design, in the narratives framing our lives, etc. This is one reason why the image of the oracle is especially pertinent now. We are building machines to predict everything, and our relationships with cartomancy or other highly networked systems for navigating probabilistic reality (like the I Ching) can certainly inform how we think about, design, and use AI. Ultimately, I see this as a cultural and historical trend that has one facet in AI.

ST: Has Google predicted anything about the singularity? What do you think about the singularity? (How do you see the singularity?) What does Google think about the singularity?

K: As for the singularity, Google has probably the most famous Singularitarian on staff in the form of Ray Kurzweil. There tend to be prestige hires like this at Google. I don't know that it implies anything about Google's official stance per se. Larry, Sergey and Eric seem pretty pragmatic, but becoming a billionaire in your twenties does strange things to people. As far as my opinion on Kurzweil's framing of the singularity, I side with Douglas Hofstadter, who said:

What I find is that it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad.


At least regarding his post-human Ship of Theseus-type arguments, in my opinion there seems to be a lack of recognition of the existence and value of interiority; that is, he seems to see himself from the outside as an object with replaceable parts and no inner integrity that would be disrupted by this. There is also an obvious lack of reconciliation with the inevitability of death and therefore the entire range of transcendent experience. This also comes from my understanding that he takes an insane number of vitamins every day.

Independent of Kurzweil, any eschaton, technological or otherwise, would in my opinion exist outside of time and not be subject to linear unfolding, and therefore would be present throughout history and future like some kind of tentacular super-being. This inclines my argument for how to engage with it much more toward techniques involving the extant mysteries, plants, etc. than the demented-god of techno-transcendence. I did like Terence McKenna's deployment of the concept, at least as a type of utopian self-fulfilling prophecy. What is your take? I'm very curious.

ST: My take would be that the idea of the singularity is like a hypothetical alternative reality that my otherwise unused braincells like to soak up and float with and see what happens, what thoughts arise.

ST: Is Google all of one mind? Do you see it as a unity? Are you all working in the same direction? Are there internal political, ideological and/or philosophical differences of opinion within Google? This raises the question of how you can know the mind of all Google people, but whatever you can say will be useful. Do some Google projects have internal detractors? Do you consider there to be any ultimate goals of Google, either singular or plural?

K: There is absolutely a plurality of opinions at Google. In fact, our group (which is known internally as Cerebra) has been an internal resistor vis-a-vis privacy and cloud technology. Specifically, we develop AI that runs locally on devices and doesn't share information with the cloud, for the purpose of maintaining privacy and providing a deeper level of assistance from technology that is predicated on personalized knowledge.

We've been thought leaders but have actually managed to steer the ship quite a bit. So, yes, there are many different forces working internally, and often the thing that surfaces as a product is shaped by these forces. My own goals with Google are highly personal and related more to the evolution of consciousness than traditional corporate results, but I definitely speak for myself there.

Ultimately, there is the stated goal of "organizing the world's information and making it universally accessible and useful". However, it has been understood for quite some time that "Google is an AI company disguised as a search company", and it's very obvious now. For more info I would point to my friend Meredith's work with the AI Now Institute and this release from my colleagues in Cerebra.


Kenric McDowell has worked at the intersection of culture and technology for twenty years. His résumé includes work for R/GA, Nike, Focus Features, HTC Innovation and Google. Kenric currently leads the Artists + Machine Intelligence program at Google Research, where he facilitates collaboration between Google AI researchers, artists and cultural institutions. He is a regular speaker at conferences and has spoken about art and interdisciplinary collaboration at UCLA IDEAS, Eyebeam, MacArthur Foundation, Nabi Art Center, and the Google Arts & Culture Lab in Paris. Kenric received his MFA from the International Center of Photography-Bard in New York City.

Artists + Machine Intelligence is a program at Google that brings artists and engineers together to create art with machine intelligence. We provide financial and technical support to artists and host interdisciplinary conferences. Our goals are to facilitate a rigorous conversation around Machine Intelligence, to open Google's research to new ways of thinking and working, and to support an emerging form of artmaking: collaboration between artists, engineers, and intelligent systems.

Treister homepage