AI, Problem Solving and the ‘Robot Apocalypse’: Q&A With Prof. Raghu Ramanujan

May 3, 2018

The core scientific ideas in today's artificial intelligence systems are actually decades old, said Prof. Raghu Ramanujan. The engineering is what has advanced significantly, allowing data sets and computing power to catch up to theory, and with that comes a host of new issues to consider.

In Ramanujan's class "Programming and Problem Solving," conversation pivots from what computers can do to how we might apply their abilities in other realms, and to the implications of doing so. His students get a taste of that application as they try their hand at programming a computer to win a game of Connect Four. Often, the computer beats its human opponents.
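
A classic way to make a computer play a game like Connect Four is to search the tree of possible moves, for instance with a depth-limited minimax search. The sketch below is only an illustration of that idea; the board encoding, the scoring, and the search depth are assumptions made for this example, not details of the actual course assignment.

```python
# A toy, depth-limited minimax player for Connect Four.
# Board encoding ('.' empty, 'X' and 'O' for the players), scoring,
# and search depth are assumptions made for this sketch.

ROWS, COLS = 6, 7

def legal_moves(board):
    """Columns that still have room for another piece."""
    return [c for c in range(COLS) if board[0][c] == "."]

def drop(board, col, piece):
    """Return a copy of the board with `piece` dropped into `col`."""
    new = [row[:] for row in board]
    for r in range(ROWS - 1, -1, -1):
        if new[r][col] == ".":
            new[r][col] = piece
            break
    return new

def wins(board, piece):
    """True if `piece` has four in a row in any direction."""
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                if all(0 <= r + i * dr < ROWS and 0 <= c + i * dc < COLS
                       and board[r + i * dr][c + i * dc] == piece
                       for i in range(4)):
                    return True
    return False

def minimax(board, depth, maximizing):
    """Score a position from X's point of view: +1 win, -1 loss, 0 otherwise."""
    if wins(board, "X"):
        return 1
    if wins(board, "O"):
        return -1
    if depth == 0 or not legal_moves(board):
        return 0  # crude cutoff: treat unresolved positions as a draw
    scores = (minimax(drop(board, col, "X" if maximizing else "O"),
                      depth - 1, not maximizing)
              for col in legal_moves(board))
    return max(scores) if maximizing else min(scores)

def best_move(board, depth=3):
    """Pick the column whose resulting position scores best for X."""
    return max(legal_moves(board),
               key=lambda col: minimax(drop(board, col, "X"), depth - 1, False))

if __name__ == "__main__":
    empty = [["."] * COLS for _ in range(ROWS)]
    print("Computer (X) opens in column", best_move(empty))
```

A stronger player would add alpha-beta pruning and a smarter evaluation of unfinished positions, but even this shallow search already plays well enough to punish careless human moves.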

So are we headed for a robot apocalypse? Not exactly, said Ramanujan, an assistant professor of mathematics and computer science, but there are real reasons for concern.

Originally from Chennai, India, Ramanujan moved to the United States in 2001 to study computer engineering at Purdue University. He later attended Cornell University for graduate school, and began teaching in Davidson's Mathematics and Computer Science Department in 2012. Here, he discusses the implications of artificial intelligence and computer technology in the modern world.

Why are machine learning and other AI-related fields so popular today?
They've always been active areas of research. The very first people who built computers, going back to Alan Turing and [John] von Neumann, saw the potential right away. They recognized that computers weren't just another machine, but that they had a universality: once you could program them, you could simulate all sorts of complex processes.

To see what's changed in recent years, it helps to remember that early AI efforts weren't always successful. There were certain high-profile successes, but they were in very narrow areas: there would be a great deal of excitement for maybe a week, and then people went back to work, because it wasn't clear how you translated those successes into other domains. So, we could build a fantastic chess-playing computer in the mid-1990s. Great. But we didn't have a plan for how that success would carry over to other domains.

The core scientific ideas you see in AI systems that have been deployed today are decades old. What has really advanced in the interim has been the engineering. It was just a matter of the data sets and the computing power catching up to where the theory has been for a long time.

And once things started working, AI leaked out into the wider world, so these systems are more prevalent and a lot more successful now, especially because if you're going to commercialize something, it had better work really well. For example, I can talk to my phone and have it transcribe what I say. The fundamentals for how to do that have been around for the better part of 15 or 20 years. We've had systems that could faithfully transcribe your speech with maybe 60 or 70 percent accuracy, but that's simply not good enough. If I talk into Siri and have to correct every third word she transcribes, no one's going to want to use that. But if the error rate goes down to one in 10 words, one in 20 words, one in 50 words, that's something the consumer may tolerate.

And there's a feedback loop: the money follows the success, so you have a few success stories, and suddenly there's a lot of excitement about machine learning, more people are willing to fund it, people take on more ambitious things, and it reinforces itself.

Should we prepare for a robot apocalypse?
I don't think a robot apocalypse is by any means a foregone conclusion, but there's definitely a worrisome issue there that needs to be carefully handled, which is this problem of what's called value alignment. People think that we're going to get subjugated and have these evil overlords, but there are more mundane situations that are just as dangerous.

There's the classic thought experiment of the paperclip maximizer; it's meant to illustrate that a machine doesn't need to be actively malicious to produce bad outcomes. So, in this thought experiment, someone builds a machine whose sole purpose, for whatever reason, is to manufacture as many paperclips as it can. What happens? First, it does things you would expect: it builds a more efficient process for manufacturing paperclips, finds ways to produce more using fewer materials, [etc.]. But at some point it runs into resource constraints. Maybe it starts creating a fleet of mining equipment that it sends out with no regard for environmental protection laws, for whether people live in those places, or for any other values that matter to humans. Pretty soon, mountains are getting blown up and forests are getting razed, just so it can collect more resources to build more paperclips. Eventually, the whole universe is just one giant paperclip manufacturing operation.
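
The paperclip scenario is only a thought experiment, but the failure mode it describes, a system faithfully optimizing the one objective it was given while ignoring values nobody wrote down, is easy to reproduce in a toy program. Here is a minimal sketch: the tiny "world," the actions, and all the numbers are invented for this example and are not drawn from any real system.

```python
# Toy illustration of value misalignment: the same greedy agent,
# run with two different objectives. Everything here is made up.

def apply_action(state, delta):
    """Return the state after applying an action's effects."""
    return {k: state[k] + delta.get(k, 0) for k in state}

def run(steps, objective):
    """Greedy agent: at each step, take the action the objective likes best."""
    state = {"paperclips": 0, "forest": 100, "ore": 100}
    actions = {
        "mine_ore":    {"paperclips": 1, "ore": -1},
        "raze_forest": {"paperclips": 2, "forest": -5},
        "recycle":     {"paperclips": 1},
    }
    for _ in range(steps):
        _, delta = max(actions.items(),
                       key=lambda kv: objective(apply_action(state, kv[1])))
        state = apply_action(state, delta)
    return state

# Objective 1: paperclips are the only thing that counts.
naive = lambda s: s["paperclips"]

# Objective 2: paperclips count, but so does leaving the forest standing.
aligned = lambda s: s["paperclips"] + 0.5 * s["forest"]

print("naive:  ", run(20, naive))    # razes the forest: +2 clips beats +1 every time
print("aligned:", run(20, aligned))  # mines or recycles instead; the forest survives
```

The naive agent isn't malicious; it simply was never told that the forest matters, which is the King Midas point made above.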

It sounds ridiculous, but the deeper point is that it's kind of like the King Midas story when you're dealing with these systems: you will get exactly what you ask for, and not necessarily what you wanted. That is the bigger danger that I see–an unintentional AI calamity that comes from some sort of poor value alignment or machines being given jobs that are way above their pay grade.

This is already happening, and people are already worrying about how much autonomy machines have in making important decisions in people's lives: whether you get approved for a loan, whether you get fired from your job, whether you get parole, how long your sentence should be. All sorts of important, life-and-death decisions are being handed over to machines at an accelerating pace, without sufficient deliberation about how these things could fail or what safeguards we should put in place.

This is why training in a broad array of disciplines is so important when you're thinking about technology in the liberal arts: being able to think critically about the applications of technology, being aware of the context in which it will be applied and of social factors that are not easy to quantify, and being aware of the history of a problem. That is such a critical piece of the puzzle, rather than just barging in and saying, "Ask this computer. It'll give you the right answer."

So, I'm not worried about literally The Matrix or Terminator, but there are important issues to be worried about now, given the AI systems that are already out in the world and being deployed for various things. My concern is not with a robot apocalypse that might come in 200 or 300 years, but with the unintended side effects that could show up a couple of years from now. Maybe that's not as apocalyptic, and it might not make a great film, but it's worrisome nonetheless.

What role do you think computer science and other digital studies play in the humanities and at colleges like Davidson?
Computer science is the study of problem-solving itself. How do I solve this complicated problem by breaking it down into minute, step-by-step instructions that even a dumb computer could follow? It grew out of physics, engineering and math, but what we have in computer science is this toolkit of ideas and approaches to thinking about problems. Those problems can be something specific to the field, but they absolutely can be problems that come from history, languages, and the arts.

It goes the other way, too: the news recently has centered on data privacy issues and what went on at Facebook in terms of harvesting user data. There are technological issues there, but there are broader questions being asked in the social sciences about privacy, security and the building of these psychometric profiles. These are problems that someone who has gone through a liberal arts curriculum should be capable of thinking about critically, from different angles. With technology playing the role it does today, it seems like one more of those critical pieces of a true liberal arts education.

Royce Chen '20
rochen@davidson.edu