
In a recent article at Business Insider, Oblong Industries CEO and computer interface visionary John Underkoffler criticized the founding of OpenAI and the very idea that strong AI is a likely development. His claim that strong AI is “…just not plausible…” and “…not going to wake up…” seems to stem from a belief that all AI research is focused on the predictive aspects of user interfaces.

Not so coincidentally, that’s his focus area.

The article is not an in-depth treatment of Underkoffler’s thinking, so it’s unfair to read too much into it, but whether it accurately depicts his views or simply reflects Matt Weinberger’s summary interpretation, the piece sets an unflattering tone. It comes across as a bit of a monument to the way otherwise intelligent people can become so immersed in their own area of expertise and individual experience that they miss the reality of events around them and the implications of the larger tech industry’s current trajectory. The article suggests that Underkoffler envisioned the systems he designed for “Minority Report” and “Iron Man” as purposefully lacking an underlying intellect, draws a comparison between these systems and contemporary digital assistants like Siri, and concludes that present and future work is not focused on the development and deployment of systems with, or even capable of, general intelligence.

This conclusion falls short of reality in three ways.

First, consider that current systems are doing far more than taking voice commands from a Tony Stark or watching where Tom Cruise waves his hands in front of a holographic monitor. Combine that with the fact that we don’t really know what would lead a machine to display human-level intelligence, and Underkoffler’s certainty seems premature. We don’t fully understand what intelligence is or how it comes about in humans. We’ve never encountered it in machines. We don’t know if it will be recognizable when we do. Given that the scope of artificial intelligence research, and lately the commercial deployment of machine intelligence, covers a broad range of tasks and systems to perform them, it seems unfounded to conclude that no system in use today could lead, in some way or another, to an “awakening” of a strong AI.

Second, researchers across the world are trying to answer those questions about the nature of intelligence by developing increasingly sophisticated AI. Vast numbers of scientists, engineers, and developers are working to create a machine intelligence just to see if it can be done. They are working on planning, motivation, emotions, and outright autonomy as a way to understand how these capacities work in people and how they could be applied to constructs. Given our lack of knowledge about the true prerequisites, we can’t be certain they won’t get the opportunity, and if they do, many of these researchers would create a strong AI for curiosity’s sake.

Third, and this is perhaps the most important reason to doubt the article’s unfounded tone of certainty, Underkoffler comments on the current state of investment in strong artificial intelligence research and, by extension, the likelihood of further investment in the near future. As is all too often the case with such a serious issue, Underkoffler dismisses investment in the area as insufficient without qualifying what constitutes investment specifically in the development of strong AI, without quantifying the current level of investment, and without detailing what he feels the necessary level would be. Of course, it’s quite possible that Underkoffler has done significant research on the point and has a strong rationale for what he believes. My commentary here, after all, is longer than the article itself. Perhaps I’m making a mountain out of a molehill. However, Underkoffler’s comments in this interview are far from unusual, and I feel it’s worth allowing them to stand in for the general class of arguments we see in this area.

As stated earlier, we don’t know what it will take to create a strong artificial intelligence. Without that understanding, we can’t really qualify what sort of research and development will lead to the outcome. It’s possible that much targeted research is hidden from the public, far more than the recent spate of open-source disclosures and the formation of companies like OpenAI would lead us to believe. There are far too many unknowns to simply dismiss the development of strong AI on the notion that insufficient effort is being applied to the cause.

While we can’t quantify the investment necessary or detail exactly what is already being undertaken, we do know quite a bit about the reward for developing a strong AI. The payoff for controlling the first strong AI may be greater than that of any other project ever conceived. The entity that accomplishes it will have a competitive advantage unlike any other: the ability to obtain absolute economic or military dominance of the world. Given the potential for this sort of profit, it is almost unthinkable that no company or government will pursue the goal.

We are foolish to think it’s not already happening.

Underkoffler may be right that none of the projects undertaken to improve user interfaces or provide alternatives to search engines will spontaneously “wake up”. What he misses is that individuals and organizations have an extremely strong incentive to wake them up on purpose. Whether the motive is profit, humanitarian benefit, improved national security, simple curiosity, or even the utter annihilation of enemies, there are many reasons for humans to summon this particular demon. It’s silly to dismiss the efforts of those trying to ensure that the outcome is positive.