Co-author Thomas G. Dietterich discusses "Rise of Concerns about AI" (cacm.acm.org/magazines/2015/10/192386), his and Eric J. Horvitz's Viewpoint column from the October 2015 Communications of the ACM.
00:00 The lush forests of Eastern Oregon may seem a strange place for artificial intelligence. But AI is now everywhere, even in systems that manage these natural resources.
00:15 As AI weaves itself into the background of our lives, so too grow the dangers it carries.
00:23 Join us as we talk with ACM Fellow Thomas Dietterich on the Rise of Concerns about AI.
00:32 [Intro graphics/music]
00:42 After the Big Burn of 1910 devastated the Pacific Northwest, the Forest Service dedicated itself to fighting all woodland fires. This paradoxically led to worse fires as unburned material built up.
00:59 We caught up with Dr. Dietterich and his colleagues at The Lodge at Suttle Lake in Sisters, Oregon. He told us how they're applying artificial intelligence to reverse the trend.
01:10 TD: Well, one thing we can do is we can pay people to go into the forest and remove some of this accumulated vegetation. And this is known as "mechanical fuel treatment". It's quite expensive, though, so it's unlikely that we can do a lot of it. But there's a very interesting computational question: If you can put mechanical fuel treatments in the landscape, where should you do it?
01:31 So Dr. Dietterich clearly appreciates the value of AI to solve complex problems. At the same time, he and co-author Eric Horvitz noted increasing fear of AI among the public and the media -- even from within the scientific community.
01:48 TD: Those of us in the artificial intelligence field who had been accustomed to years and years of people saying, "What you're doing is ridiculous, it will never work, computers are so stupid" -- suddenly we're confronted with people who are saying, "Oh, computers are getting too smart, and they're now becoming a risk to humanity."
02:06 So Drs. Dietterich and Horvitz examined what they see as AI's real dangers, and put them in several categories.
02:14 TD: There are short-term fears about AI that I think are definitely justified. And these spring from the fact that AI is software, and software can have bugs. Software can be attacked by cybercriminals. We can have problems in the user interface. ... And we can also have problems ... where there might be handoffs between the human driver and the AI driver.
02:38 These fears are complicated, because the popular conceptions of AI often don't jibe with the reality.
02:45 TD: You can see there are two main story lines, typically, in Hollywood's portrayal of artificial intelligence. There's the sort of Commander Data story line, which is really the Pinocchio story. ... And then the other side is Frankenstein's monster, that somehow we create something we then cannot control. Those two stories make great plots in science fiction and in Hollywood. But they're really far away from our daily experience with artificial intelligence, which is that we can ask Siri or Cortana a question and get an answer, like "What time does this movie start?"
03:18 But as artificial intelligence goes beyond Siri and Cortana, the stakes go up.
03:24 TD: It's one thing if Siri gives you a wrong start time for a movie. It's something completely different if your self-driving car gets in an accident, or your surgical robot kills the patient.
03:36 At the same time, Drs. Dietterich and Horvitz dismiss one of the most widely publicized fears of AI: that a "singularity" event will cause AI-driven machines to suddenly explode in intelligence and capabilities.
03:50 TD: There's no reason to expect that there's some sort of threshold phenomenon. And I think that's really the misapplication of the nuclear chain-reaction metaphor. That somehow, if you bring enough AI "smartonium" together, you'll somehow get an intelligence chain reaction.
04:09 But Drs. Dietterich and Horvitz warn of another category of danger, one that AI researchers alone are unqualified to manage: artificial intelligence could create unforeseen socioeconomic impacts that we'll all have to deal with.
04:25 TD: Will jobs disappear? Will these AI systems be controlled by a very small fraction of the population that basically controls that capital and reaps all the financial benefit from it?
04:38 In fact, general concern for the outside world may be a new goal for artificial intelligence.
04:45 TD: So if we think about systems acting in the world, maybe making medical recommendations, they need to be prepared for the possibility that the patient has a new disease ... Being robust to the, I would say the "unknown unknowns" I think is the big challenge.
05:00 Learn more opinions on this matter from Drs. Dietterich and Horvitz in this month's Communications of the ACM, in the Viewpoint article, "Rise of Concerns about AI: Reflections and Directions".
05:14 [Outro and credits]