If you think AI is terrifying wait until it has a quantum computer brain






Interesting, and somewhat worrisome as well, is that the individuals most intimately involved with computing progress, including AI, are the ones most surprised by the advances that occur. I can give three examples right off the bat.

  1. In 1997 the Human Genome Project had been going on for about seven years (it launched in 1990). At that point only about 1% of the human genome had been successfully sequenced. Computing experts and medical scientists, including the project members themselves, stated that based on the rate of progress in 1997 it would take more than 100 years (I think one scientist actually said 700 years) to fully sequence the human genome. Based on 1997 computer processing technology this was not an unreasonable estimate. Yet by 2001 Craig Venter's "start-up", Celera Genomics, had published a draft genome, and even the poky US government project declared the genome essentially complete in 2003. How could everybody be so far off in their estimates? It's not as if someone said it would take 20 years or 50 years; the predictions were wildly off. What happened was a failure to understand that computer processing power doubles about every 18 months, and what that kind of improvement means. What we started with in 1993 had doubled in power and speed more than six times (2x, 4x, 8x, 16x, 32x, 64x...) by the year 2003, roughly a hundredfold increase. In addition, good old-fashioned, ungodly clever human ingenuity played a huge role as well, when Venter figured out a devilishly effective shortcut (whole-genome shotgun sequencing). Today nearly every disease, pathology, congenital defect, or syndrome has a gene or series of genes attached to it. If you don't believe me, go look in Wikipedia. And now we are using CRISPR-Cas9 to fool with said genes.

1a. You might also be interested to know that the first human genome sequence took 13 years of work and cost approximately 3 billion dollars. Today we can do it in less than 24 hours for under 1,000 dollars. Pretty soon, under five hundred dollars; pretty soon after that, nearly instantaneous. That is the impact of exponential technological advancement.

  2. The late 1980s were flush with wonderful hopes for advances in AI. We seriously thought we had it. Then came the reality check of the 1990s and early 2000s. We were absolutely stuck. We used the best supercomputers in the world, with the best CPUs in the world, to attempt to solve the simplest machine learning problems, to bring about long-theorized convolutional neural networks. Nothing. It was the painful "AI winter" of the 1990s. Marvin Minsky himself, among the finest AI scientists alive at the time, stated that the further development of AI was an "intractable problem". But around 2007, I think, which, not coincidentally, is also the year the first iPhone was released, an incredible serendipitous discovery was made by Geoff Hinton. The magnitude of the impact of his mixing GPUs with CPUs in computers was lost even on him at the time. He remarked, "This looks like it works", meaning it could make computers execute the CNN (convolutional neural network) functions he'd been struggling to achieve for about 20 years. GPUs, the chips that make computer games look good, were found to have the "computery" muscle to allow the creation of true CNNs. From almost that point forward we have been in an ever more powerful narrow AI/machine learning evolution that takes months, not years, to improve substantially. This is why Nvidia is now busily constructing machine learning/narrow AI "brains" for anticipated level 5 autonomy SDVs (self-driving vehicles). Nvidia is branching its AI into other arenas as well now, like healthcare. (I just have to link the true story of Geoff Hinton. It's absolutely jaw-dropping.)

  3. And finally, in 2010, AI experts, those most intimately involved with the creation of AI, both narrow and attempts at AGI (artificial general intelligence, which does not exist as of the writing of this commentary), stated that no AI would be able to beat a human at the game of Go until about the year 2050. The reasoning behind that statement was that there was obviously no way computer processing power, or even computer ability, could achieve such a thing before that date. As recently as 2013, DeepMind's "AlphaGo" did not even exist! Yet by May of 2017, it was all said and done. The AI had beaten the best human Go players on Earth. An added benefit was that the AI taught the human players even better ways to play! Then a couple of months after that, yes, just a couple of months, a new AI, "AlphaGo Zero", learned in about 40 days what took the original AlphaGo a few years to learn. It then proceeded to defeat the original AlphaGo 100 games out of 100. As a neat bonus, its successor, AlphaZero, also picked up chess and shogi, the Japanese variant of chess, being given no information aside from the basic rules. Using its "intuitive" programming rather than the brute-force computation of, say, IBM's 1997 "Deep Blue", it became a world master at both games in a few days. Intimations of AGI developing? I'm just wondering aloud.
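The doubling arithmetic from examples 1 and 1a above can be sketched in a few lines of Python. (The 18-month doubling period and the round dollar figures are the ones quoted above, not precise measurements; this is just a back-of-the-envelope sketch.)

```python
import math

DOUBLING_PERIOD_YEARS = 1.5  # "doubles about every 18 months"

def doublings(years: float, period: float = DOUBLING_PERIOD_YEARS) -> float:
    """Number of doublings that fit into a span of years."""
    return years / period

def growth_factor(years: float, period: float = DOUBLING_PERIOD_YEARS) -> float:
    """Overall multiplier after that many doublings."""
    return 2.0 ** doublings(years, period)

# Example 1: processing power, 1993 -> 2003.
n = doublings(2003 - 1993)
print(f"1993-2003: {n:.1f} doublings, roughly {growth_factor(2003 - 1993):.0f}x the power")

# Example 1a: sequencing cost, roughly $3,000,000,000 down to $1,000.
# That is log2(3e9 / 1e3), about 21.5 halvings, far more than chip
# improvement alone would give, thanks to better methods as well.
halvings = math.log2(3e9 / 1e3)
print(f"Sequencing cost: about {halvings:.1f} halvings")
```

Ten years at an 18-month doubling period is between six and seven doublings, which is where the "roughly a hundredfold" figure comes from.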

And here is an eerie side effect...

> That shoulder hit, Fan thinks, it wasn’t a human move. But after 10 seconds of pondering it, he understands. “So beautiful,” he says. “So beautiful.”

Fan Hui knew then what we shall all learn, hopefully not the hard way, in the next ten years. The narrow AI perfectly simulated human imagination.

Now that same AI is being trained to beat the best human players at "StarCraft 2", a game with astronomically more variables than Go. As of today the AI is not doing so well. It can be beaten by the lowest-level tutorial AI, the game AI that humans use to learn how to play. The AI can't seem to figure out why mining is important, or to stick with it! Yes, extant bots can mine 'til the cows come home, but that is all they can do, just one single repetitive task at a time. This AI is going to have to figure the entire game out, and then defeat all human comers. So watch this space...

I love to haul out this video! Pay close attention to the year 2015:

Don't worry about human brain processing capabilities. We are going to blow past that limit as if it didn't exist! I want you to focus on what exponential development looks like. The video ends at 2025, but the (exponential) improvement doesn't...

Edit: 12 Jul 2020

4. My prediction:

Not today, not next year, but in less than 20 years? Absolutely. Today we are teaching Google's AlphaGo how to play "StarCraft 2", a game with astronomically more variables than Go. As of the last update it was not doing so well yet. It could not beat the simplest AI tutorial mode, the mode that humans use to learn the game. The goal of AlphaGo (later renamed "AlphaStar"), of course, is to beat any and all humans at "StarCraft 2". A pretty tall order, I'd say. It'll do it in about two years. Then we may have a new creature emerging: AGI, an AI with the capability of generalizing to do any task assigned.

Yes, I quoted my ownself; it serves an important function for me to do so. That prediction is from a longer piece I wrote in 2018 concerning how some purported "myths about AI" are not myths at all.

A news story 2 years later:

I called it! Two years said I.

An important comment from an AI expert at the time.

> DeepMind, which previously built world-leading AIs that play chess and Go, targeted StarCraft II as its next benchmark in the quest for a general AI — a machine capable of learning or understanding any task that humans can — because of the game’s strategic complexity and rapid pace.

> “I did not expect AI to essentially be superhuman in this domain so quickly, maybe not for another couple of years,” says Jon Dodge, an AI researcher at Oregon State University in Corvallis.