Interview with Kenric McDowell at Google Machine Intelligence
ST: What kinds of things is Google researching around machine intelligence at the moment? What are the sub-areas? And where is Google at with all of this?
K: Within the research groups I interact with, there are major efforts around machine vision or machine perception. Emotion recognition is one primary area of focus. Our group also works on some more infrastructural areas of machine learning, including user privacy and debiasing machine learning systems. Research at Google generally exists two years ahead of products and is very tied to future product features.
ST: What's your own perception of this work? How do you feel about it?
K: I'm involved because machine intelligence is not only technically and mathematically interesting, but also a site for political intervention that requires strong interdisciplinary and critical thinking skills. Neural nets literally compress epistemologies into numbers; they shouldn’t be made without investigating foundational assumptions about society, meaning, even the nature of mind. Take, for example, a neural net designed to predict crime rates (this already exists). The way crime and criminality are defined in training can reinforce or transform systemic bias. Tech work must expand to include ways of thinking that are not strictly mathematical or programmatic.
ST: What do you find most interesting about what Google is finding out?
K: I find it very interesting how arbitrary (and flexible) the methods of knowledge-modeling can be. For example, modeling emotions for an emotion recognition system is incredibly difficult, and while one can make a reasonable guess at how to do it, there is no one correct model. So we are forced to see the limits of our epistemology, which in a technical culture like Google's is a very enlightening experience.
ST: What do you think the repercussions are for other forms of technology?
K: I think the repercussions overall are a shift toward probabilistic and multidimensional models of thought, in technology and design, in the narratives framing our lives, etc. This is one reason why the image of the oracle is especially pertinent now. We are building machines to predict everything, and our relationships with cartomancy or other highly networked systems for navigating probabilistic reality (like the I Ching) can certainly inform how we think about, design, and use AI. Ultimately, I see this as a cultural and historical trend that has one facet in AI.
ST: Has Google predicted anything about the singularity? What do you think about the singularity? (How do you see the singularity?) What does Google think about the singularity?
K: As for the singularity, Google has probably the most famous Singularitarian on staff in the form of Ray Kurzweil. There tend to be prize hires like this who get recruited at Google. I don't know that it implies anything about Google's official stance per se. Larry, Sergey and Eric seem pretty pragmatic, but becoming a billionaire in your twenties does strange things to people. As far as my opinion on Kurzweil's framing of the singularity, I side with Douglas Hofstadter, who said:
ST: My take would be that the idea of the singularity is like a hypothetical alternative reality that my otherwise unused braincells like to soak up and float with and see what happens, what thoughts arise.
ST: Is Google all of one mind? Do you see it as a unity? Are you all working in the same direction? Are there internal political, ideological and/or philosophical differences of opinion within Google? This raises the question of how you can know the mind of all Google people, but whatever you can say will be useful. Do some Google projects have internal detractors? Do you consider there to be any ultimate goals of Google, either singular or plural?
K: There is absolutely a plurality of opinions at Google. In fact, our group (which is known internally as Cerebra) has been a site of internal resistance vis-à-vis privacy and cloud technology. Specifically, we develop AI that runs locally on devices and doesn't share information with the cloud, for the purpose of maintaining privacy and providing a deeper level of assistance from technology that is predicated on personalized knowledge.
We've been thought leaders, but have also managed to steer the ship quite a bit. So, yes, there are many different forces working internally, and often the thing that surfaces as a product is shaped by these forces. My own goals with Google are highly personal and related more to the evolution of consciousness than traditional corporate results, but I definitely speak for myself there.
Ultimately, there is the stated goal of "organizing the world's information and making it universally accessible and useful". However, it's been a known thing for quite some time that "Google is an AI company disguised as a search company", and it's very obvious now. For more info I would point to my friend Meredith's work with the AI Now Institute and this release from my colleagues in Cerebra.