
I’m generally annoyed by stories about people being nominated for some “award” as a thinly veiled method of criticizing their actions or insulting their beliefs, and I usually do my best to ignore them. However, the recent spate of articles announcing that Elon Musk had been nominated as “Luddite of the Year” piqued my interest with their sheer audacity. Digging into the story quickly reveals the source as a report by the Information Technology and Innovation Foundation, which annually nominates a set of actions or ideas it believes worthy of the term Luddite and calls out some of the people behind them. The report identifies 10 ideas, the first two of which can be ascribed to Elon Musk. As titled in the report, these ideas are “Alarmists Tout an Artificial Intelligence Apocalypse” and “Advocates Seek Ban on Killer Robots”.

In their discussion of the first idea the authors of the report mention Musk by name, along with Bill Gates, Stephen Hawking, and Nick Bostrom. They correctly note that each of these individuals has publicly described strong artificial intelligence as potentially posing an existential threat to humanity. Their rationale for labeling this idea Luddite is simple. They state that artificial intelligence as a general class of technology provides great benefit to humanity, point out that describing artificial intelligence as posing extreme danger has the potential to slow its development, and then assert that the dangerous sort of AI won’t be developed for a long time. The report doesn’t argue that strong AI could not pose a danger, nor that there aren’t valid reasons to carefully shape the development of AI given the potential scope of that danger.

In their discussion of the second idea the authors mention only Stephen Hawking and Noam Chomsky by name; however, the articles they reference mention Musk as well. They immediately equate proposed modifications of international treaties on warfare to cover autonomous weapons with “AI paranoia” and seem to ridicule the term “killer robots”, despite the fact that it’s a rather apt term for machines designed to automatically end life. It’s a vexing bit of sophistry, apparently deployed because ITIF is concerned that such a ban would prevent militaries from making war “safer” and reduce the funding available for robotics and AI efforts. Again, they don’t directly engage the actual concerns of the individuals they quote, and they fail to address the rather sticky question of whether making war easier and more exacting actually reduces casualties in the long run.

Now, I’m a rabid advocate of artificial intelligence, one might even say hell-bent on seeing it created, so I also find myself annoyed by talk that frightens the “ignorant masses” into opposing the idea, as well as by outright bans on its application to certain fields. However, I support artificial intelligence because of the transformative benefit it can bring to society. I see it as counterproductive to pursue AI development in a way that endangers the very people I hope will see that benefit. We face a multitude of obstacles in developing this technology and in obtaining its promised benefits, far beyond any single aspect of public opinion and even the entire scope of international law concerning warfare. I’m not sure ITIF is really considering the complexity of the path from here to there.

I’d like to offer the opinion that the inclusion of these two ideas in the organization’s 2015 list goes beyond being simply silly and actually runs counter to the organization’s goal of promoting innovation. The tone of the report suggests that they oppose any and all efforts to restrain any portion of innovation, even if that restraint is specifically designed to promote long-term development of a technology over certain short-term applications. Many of the individuals they are lumping into the category of “AI alarmists” are also on record as strong proponents of automation in general and artificial intelligence technology in particular. The fact that these individuals believe strong AI poses a tremendous threat does not mean they oppose its development or fail to laud efforts to create it. Musk in particular has made a strong statement of his support for AI by helping found an organization tasked with promoting open, collaborative development of the technology.

What ITIF misses about these warnings is the possibility that approaching the danger zone of strong AI without a plan to deal with the implications could provoke a far stronger, far more lasting reaction from society. Even the most fearful, most conservative people will be less afraid of discussing malicious AI now, while the technology to create it is still somewhere in the future, than they will be after we’ve met the real thing. Preparing the world to face that enemy, or perhaps preventing it from ever arising, could head off the sort of worldwide, primal terror that dark ages are born of.

The same can be said of banning fully autonomous weapons in warfare. There don’t seem to be any obvious instances where an international treaty banning a particular technology in warfare has significantly hindered peaceful civilian applications. For instance, the ban on using lasers to blind enemy soldiers hasn’t significantly slowed the development of laser technology or prohibited its application in both civilian and military arenas. It has, however, restrained one way that lasers in general could come to be seen as implements of torture. Banning fully autonomous weapons would be another step toward preventing the public from seeing all robots as potential implements of destruction. While it would almost certainly reduce the amount of funding available for military applications, it could go a long way toward promoting, or perhaps maintaining, public acceptance of commercial robots.

For these reasons I think ITIF has made a serious mistake with their first two Luddite awards, but there’s a greater problem here than simply failing to understand that open discussion of dangers, and bans on military applications, can actually promote innovation. The way they discuss AI and autonomous robotics in their report suggests that they may seriously misunderstand the potential of these technologies and the manner in which they may develop.

Strong AI, particularly strong general AI, is not the next generation of weaving machine. It is not simply a progression of automation to a new, more convenient level. It is a new thing under the sun. Discussing the implications of the technology, or even raising the alarm about the danger it could pose, is not the same thing as resorting to violence in defense of traditional lifestyles. It certainly isn’t displaying a hatred of all technology or a misunderstanding of its workings. In fact, the so-called alarmists are accurately depicting the potential of strong AI and showing great respect for its capacity.

The dismissive tone of the ITIF report’s treatment of AI ignores the potential of the very innovation they hope to support. Conflating a technology capable of producing an independent, autonomous agent with a labor-economizing device displays an appalling ignorance of, and disrespect for, the promise of artificial intelligence. A strong artificial intelligence can exert its own will upon the world; that is the entire point of the technology. That will could be benign, built upon the most despicable portions of human nature, or so inhuman as to be incomprehensible. Strong AI is also unique among technologies in that it offers the possibility of rapid recursive self-improvement, which could unfathomably amplify the competitive advantage of those who control it while simultaneously eroding their capacity to understand the AI’s actions and maintain that control.

The same sort of carelessness is displayed in ITIF’s discussion of “killer robots”. They miss the fact that the applications we’ve seen so far, partial autonomy in several weapon systems but very few examples of full automation, have only scratched the surface of what can be done with military robotics. Autonomous weapons have the capacity to put a bullet in every head and a grenade in every pocket through methods yet unconceived and with economies of scale we can barely grasp. The seemingly magical plot devices of science fiction movies, from intelligent terminators to flesh-devouring nano-swarms, may be far more real than we’d like to believe. The point of a ban on autonomous weapons isn’t simply to make sure that a human is always behind the controls of today’s Predator drone; it’s to prevent a nightmare from becoming an all-too-easy-to-deploy reality.

In both cases, the report’s authors seem to be making some sweeping assumptions about future developments in artificial intelligence and autonomous robotics. They outright state that strong AI is in the far future and blithely ignore the fact that autonomous weaponry could one day leverage microscopic form factors or do something particularly worrying, such as replicate. Supposedly in support of unconstrained innovation, they egregiously ignore the unpredictable nature of innovation in general.

The other ideas discussed in the report, ranging from a disturbing ban on citizen science to outright dangerous misconceptions about genetic modification, seem to be adequately described and more fairly presented. Perhaps the unique nature of AI technology, or the extreme degree to which it promises to transform society, was simply too much to present in the report or outside the experience of the authors. Regardless, the report nominates Elon Musk as a noteworthy Luddite only circumstantially, in the process including ideas that clearly do not meet the criterion of running counter to innovation and progress. Thankfully, some of the media outlets retransmitting this idea have also noted the absurdity of branding Musk as anti-technology… even if they didn’t dig into the real problems with the ITIF report.