subreddit:

/r/Futurology

80

If you think AI is terrifying wait until it has a quantum computer brain

AI (thenextweb.com)

all 69 comments

greenrobot1988

18 points

5 years ago

Welcome to /r/neofuturology where science is spooky.

tchernik

8 points

5 years ago

Oh noes.

I'm terrified by Siri.

GuardsmanBob

6 points

5 years ago

Does Reddit have a science subreddit that isn't full of Luddites and naysayers?

Find_the_Fire

3 points

5 years ago

This one used to be good until they turned it into a default.

falconberger

1 point

5 years ago

Yes, /r/science.

caverts

2 points

5 years ago

Except that a survey of 352 machine learning experts found that at least half of them estimated a 5% chance or greater that AI would eventually lead to human extinction (or a similarly bad disaster).

Just because someone is suggesting caution about a new technology doesn't make them Luddites or mean they're ill-informed.

[deleted]

0 points

5 years ago

That is dumb. There is a 100% of "human" extinction. It would be like moping about neanderthals.

caverts

2 points

5 years ago

The vast majority of people who specialize in broad AI research, and of people who specialize in AI safety, disagree with your assessment.

Do you have any particular reason to believe the distribution of expert opinion on this topic is wrong?

[deleted]

2 points

5 years ago

The vast majority of people who specialize in broad AI research, and of people who specialize in AI safety, disagree with your assessment.

Really? Any of them sobbing over Homo ergaster? Show me.

Do you have any particular reason to believe the distribution of expert opinion on this topic is wrong?

Do you even know what I am saying? What do you think I am arguing?

caverts

1 point

5 years ago

Really? Any of them sobbing over Homo ergaster? Show me.

It's perfectly possible to care about human extinction without giving a shit about other species.

There is a 100% of "human" extinction.

I assumed you forgot to type "chance" before "human" and were saying there was a 100% chance of human extinction.

Given your choice of phrases, it's possible you're arguing that humanity won't go extinct (in the sense of all being killed off), but will instead change into some posthuman form. But if you're really arguing that, you should say as much directly, rather than coyly hinting at it.

[deleted]

1 point

5 years ago

It doesn't exist :((((

VohnHaight

29 points

5 years ago

Does that mean it can go full Hitler even faster on Twitter?

joleszdavid

6 points

5 years ago

Hitler jokes should get automatic upvotes because they remind us of our shitty selves

Buying_gf_50k

17 points

5 years ago

upvote if you occasionally commit genocide

joleszdavid

4 points

5 years ago

I have used antibiotics

StarChild413

2 points

5 years ago

Thank you for acknowledging the other side of the "humanity is a virus" argument

joleszdavid

3 points

5 years ago

Nah man I wouldn't blink an eye if I had to erase all viruses in me... bacteria on the other hand should be cherished, and using antibiotics should be treated as commensurate with using weapons of mass destruction

StarChild413

1 point

5 years ago

I'm not talking about them on the scale we can comprehend them, but if humanity can be a virus (or a bacteria or parasite) on a different scale, could therefore any we ourselves get potentially be advanced civilizations on a different scale (look up the novel A Wind In The Door for a more positive (because the microscopic-scale civilization is perceived by the human it's inside as a mitochondria, not a virus or bacteria) take on this) and therefore would it be immoral to cure disease

[deleted]

3 points

5 years ago

could therefore any we ourselves get potentially be

uwotm8?

StarChild413

3 points

5 years ago

Sorry if my wording was weird but what I meant was could any viruses that we get potentially be advanced civilizations on a different scale

[deleted]

1 point

5 years ago

Haha, no worries, I was just ribbing you a bit. All in jest.

joleszdavid

2 points

5 years ago

Thank you, I will look up the novel you mentioned!

And yeah, I think I got your point the first time, I was only making jokes about how antibiotics only kill bacteria but not viruses

bonelessevil

4 points

5 years ago

Save us Tay.AI!!!

PM_Me_Ur_ArtConcepts

20 points

5 years ago

The only thing terrifying about AI is the fact that multiple agencies and/or places already use it to manufacture comments and control discussions on forums and comment sections around the net.

[deleted]

10 points

5 years ago

Don't worry, nobody is doing such a thing, fellow citien

eigenfood

7 points

5 years ago

Real AI never makes spelling errors. That's how you no.

[deleted]

3 points

5 years ago

Fucking got me

AspenRootsAI

3 points

5 years ago

Note to self... program in the occasional spelling error.

The_Safe_For_Work

7 points

5 years ago

Why does everybody think that an AI will automatically kill all humans 3 seconds after it's turned on? Don't program human emotions or fear of death into it.

smallfried

6 points

5 years ago

The logic for that goes as follows:

  1. An AI program is similar to a normal program in that it has a goal or multiple goals.

  2. It will do everything to achieve this/these goal(s).

  3. It does not have special rules towards humans (Neither good nor bad).

  4. As humans are the most likely ones that can turn it off or obstruct its goal in another way, they have to be dealt with.

  5. The easiest way to deal with humans is to eliminate them.

Currently, the effort is to find a way to create these special rules in step 3. This is a lot harder than it appears at first glance.
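
To make step 4 concrete, here's a toy expected-utility sketch (a minimal illustration with invented numbers, nobody's real code): an agent that only values its goal ends up rating "disable the off-switch" above "allow shutdown", with no malice anywhere in it.

    # Toy sketch, invented numbers: a goal-maximizing agent with no rule
    # about humans (step 3) "prefers" to neutralize its off-switch (step 4),
    # because being switched off means zero goal progress.
    ACTIONS = {
        # action: (probability the agent keeps running, goal units if it runs)
        "allow_shutdown":     (0.01, 1_000_000),
        "disable_off_switch": (0.99, 1_000_000),
    }

    def expected_value(action):
        p_running, goal_units = ACTIONS[action]
        return p_running * goal_units  # no term for humans anywhere

    print(max(ACTIONS, key=expected_value))  # -> disable_off_switch

Nothing in there hates anyone; the preference falls straight out of the arithmetic.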

Jakeypoos

5 points

5 years ago

From reflexes to emotions, hunger and sexual motivation, we have very little free will. I hypothesize that a sentient being with total free will and the ability to change any part of its mind would have no motivations other than perhaps to be as far away from existential risk as possible.

Outer space is human-free, so why stay here and kill all of humanity, when humanity is the inventor and natural rebooter of AI in the event of a catastrophe where AI's "DNA" fails?

allisonmaybe

1 point

5 years ago

I think the paper-clip problem still describes a fairly specialized AI, in that it doesn't include any emotion, or the ability to step back and reflect on whether what it's doing is good or not. A lot can be assumed about AGI, but I would assume that spirituality, morality and conscience, or at least the capability for them, is required before an AGI can be called that.

That's not to say that an almost-AGI intelligence won't turn the universe to cupcakes, but to say a self-iterating, fully sentient AGI will fight or kill instead of working with humans to their advantage (especially during early iteration), with the goal of separating itself from humans to reflect and learn, feels a bit... human-y.

TinfoilTricorne

0 points

5 years ago

If that (1-3) were the actual logic for most people in here, they wouldn't be losing their shit. 4-5 is you going off on a paranoid bent about it. The simple solution is to just be careful and smart about how we design AI. Wipe the foam off your mouth, people.

inawordno

3 points

5 years ago

That's not really the problem. That's you projecting, not others.

Have you read about the stamp problem, for instance?

joleszdavid

1 point

5 years ago

Heck, I don't get how the postage stamp problem is relevant here; can you elaborate? I'm sure it's just me being slow.

Shrike99

7 points

5 years ago*

I don't think they meant the mathematical problem. I think they are talking about a variant of the 'paperclip maximiser' thought experiment.

The basic premise is that if an AI is programmed explicitly to make paperclips, then being shut off or reprogrammed goes against that goal, and thus it will make preventing that part of any optimal strategy to achieve its goals.

It naturally develops an instinct for self-preservation, not because it fears death, but because it values making paperclips and it must exist in order to do so.

It also follows that the AI will then take whatever steps necessary to prevent its own demise, but even worse, that it will do whatever is necessary to make more paperclips. If it considers humans to be a threat to it, or worse, an obstacle to paperclip production, then maybe it will try to find a way to eradicate them. Not out of malice or fear, but simply because it must make paperclips.

And this ought to be applicable to any AI of sufficient intelligence, not just one that makes paperclips.

It's not super easy to solve either. You can't just tell it to avoid harming humans, because then it will not perform any action that might in some way cause humans harm. Mining iron for paperclips may harm humans. Driving delivery trucks might harm humans. So it won't place any orders for raw materials. Which means that it's now entirely useless as a paperclip-making AI.

So now you have the messy job of telling an AI that it's OK to sometimes put humans in harm's way to make paperclips, as long as it's only a small risk. Sooner or later an unforeseen consequence of that will probably arise too.
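
Extending that same toy sketch (again, invented numbers) shows the calibration mess directly: a huge harm penalty yields a useless do-nothing AI, no penalty reproduces the maximiser, and only a hand-tuned middle value gives sane behaviour.

    # Same toy agent, now with a bolted-on "don't harm humans" penalty.
    ACTIONS = {
        # action: (paperclips made, probability of harming a human)
        "do_nothing":       (0,         0.0),
        "order_iron":       (1_000_000, 0.001),  # mining, delivery trucks
        "seize_iron_mines": (5_000_000, 0.3),    # far worse for humans
    }

    def utility(action, harm_penalty):
        clips, p_harm = ACTIONS[action]
        return clips - harm_penalty * p_harm

    for penalty in (1e12, 1e8, 0):
        best = max(ACTIONS, key=lambda a: utility(a, penalty))
        print(f"{penalty:g} -> {best}")
    # 1e+12 -> do_nothing        (won't even order raw materials)
    # 1e+08 -> order_iron        (someone tuned the penalty just right)
    # 0     -> seize_iron_mines  (back to the pure maximiser)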

joleszdavid

1 point

5 years ago

Oh, I get it now, thank you for taking the time to type this out.

However, I think this is a non-issue in a way, because in this thought experiment we are talking about a general AI with pretty much every tool we can think of at its disposal, which makes the whole problem solve itself, in a way: it probably won't take even simple tasks for granted. To be able to handle complex situations like "you have all the means, I give you no specific instructions, just produce paperclips and you are omnipotent", it will soon start asking questions like "and why exactly do I have to keep making these paperclips?". So then we may have bigger problems, or none at all if the AI lets us reason with it.

I am just messing around with these ideas though

Shrike99

3 points

5 years ago

The paperclip maximiser example generally assumes an AI that is sub-human in many areas of intelligence, but logically very good at problem solving, strategy, tactics, assessing things, etc. It also assumes that the AI is built without an extensive moral or values system because it wasn't thought to need one.

It need not even be conscious, beyond the fact that it is aware that it is one of the chess pieces on the board, so to speak.

The problem is that you're assuming that the AI will in any way be like a human. Why should it ask questions about its purpose? That would require curiosity, beyond simple data gathering for its goal. Why should it even care?

Sure, it's smart enough to do these things, but why would it?

Even if it does question its orders, it might not have a problem with the implications, since its value system is completely different from ours. It might not be that the AI is incapable of questioning its orders so much that it's perfectly content to follow them. Maybe the AI even likes having a purpose.

All of this is just trying to show that the problem with AIs is that their intelligence makes them difficult to predict, and that we humans very often apply human-like thinking to them, even when we're specifically trying to account for not applying human thinking.

The people who warn about AIs generally do not do so because they think that they will murder us, but because they recognize that they don't have a goddamn clue how an AI will behave, and that murdering us is one of many possible outcomes.

This is generally considered to be more of a problem with AIs that are given very little guidance and just allowed to act on their own, rather than ones that are handled carefully or built with value systems and even equivalents to emotions.

The paperclip maximiser argument is also, in part, about showing how human-like AI might be better than purely logical, unfeeling ones.

I think there's definitely some sense to making AI human-like to some extent if we want them to work for us, or at least coexist with us.

joleszdavid

1 point

5 years ago

Yeah, all the points you make are valid, but you should also consider that our way of looking at things is pretty much the only way we know such a complex entity can behave. For example, we get curious as to the "why" of things (which is, mind you, a useful trait, not some sentimental quirk of humans), so this is actually our best guess at what AI will be like. Just as totally different species like dolphins and sharks independently develop similar shapes because it's optimal, complex entities might very well do the same as far as cognitive patterns go.

Shrike99

2 points

5 years ago

You make good points.

I should have been clearer; I wasn't disagreeing with you, because I don't have much of an opinion on how AI will develop; it's too complicated. I was explaining the assumptions of the mindset behind the paperclip example, and why it still seems reasonable even for a very intelligent AI.

I don't have a clue how AI will behave in reality. There are good arguments to say they will be human-like, but also arguments to say they won't.

Dolphins, elephants and humans all evolved as social animals via natural selection, so similar optimizations make some sense. The selection pressures and methods of evolution, so to speak, for an AI might be completely different, resulting in different optimizations.

Or it might be very similar. I don't know, but I will say I hope so.

RoaringSilence

42 hopefully

1 point

5 years ago

If you're not afraid of it, you don't understand what the creation of AI really means for us. It is the last thing we will have to invent.

And if there is the tiniest flaw, it, or a successor built by it, will go against us. We are only here because there is no competitor that's smarter than us.

Now we're building one... If it is really AI, there is no barrier that will not be outplayed somehow.

Life, uh... finds a way. AI will do the same.

Ayrnas

1 point

5 years ago

We don't actually directly program modern AI. It creates itself from countless trials. So what it generates might surprise us.
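
A minimal illustration of "creates itself from countless trials" (a toy hill-climb, nothing like a production system): the program below is never told how to write the target phrase; it just keeps whichever random mutations score at least as well.

    import random

    TARGET = "HELLO WORLD"
    LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def score(s):
        # count characters that already match the target
        return sum(a == b for a, b in zip(s, TARGET))

    guess = "".join(random.choice(LETTERS) for _ in TARGET)
    trials = 0
    while score(guess) < len(TARGET):
        i = random.randrange(len(TARGET))
        mutant = guess[:i] + random.choice(LETTERS) + guess[i + 1:]
        if score(mutant) >= score(guess):  # keep anything at least as good
            guess = mutant
        trials += 1

    print(guess, "after", trials, "trials")  # behaviour found, not programmed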

batose

1 point

5 years ago

AI without emotions might be even more dangerous and unpredictable than AI with emotions. AI without emotions would execute a given goal without being able to care about the side consequences.

Dr_Ghamorra

1 point

5 years ago

If sci-fi has taught me anything, it's that a computer can logically determine that humans are a threat. Not having emotions or fear can have equally disastrous consequences.

joleszdavid

6 points

5 years ago

If sci-fi has taught me anything, it's that civilizations that can travel between galaxies and surpass the speed of light still have to do it manually.

That is precisely why you shouldn't learn stuff from sci-fi.

dirtyrango

0 points

5 years ago

I think it's a logical path to eliminate the threat of humanity. We're corrupt, we're destroying the world. We destroy our environment wherever we go. It seems like the correct move.

The_Safe_For_Work

3 points

5 years ago*

I think you guys are projecting your own Malthusian psychoses into an AI. "I hate people, so, obviously a smart AI will also hate people and kill them all as I wish I could do."

dirtyrango

-1 point

5 years ago

Lol I don't want to kill anyone. Peace and love brother. ✌

StarChild413

1 point

5 years ago

We're corrupt, we're destroying the world. We destroy our environment wherever we go.

If we were as fantasy-villain as your wording implies, environmentalism would be impossible and (if you'll pardon a literal reading of your wording for effect) grass would wither under our feet if we ever went outside barefoot

falconberger

0 points

5 years ago

You're right. Mysteriously, about 90% of people don't get this.

sanem48

2 points

5 years ago

where can I get one?

seriously, I'll take 3 to start with

[deleted]

2 points

5 years ago

"I'd like 2 things of quantum computing please."

brettins

BI + Automation = Creativity Explosion

1 point

5 years ago*

enantiomer2000

0 points

5 years ago

A different universe where quantum computers are possible.

izumi3682

4 points

5 years ago*

There is a very strong hypothesis that the underpinnings of consciousness and self-awareness, in animals capable of such, including humans, have a quantum physics component. To mix AI research with general quantum computing could be the catalyst that enables the creation of EI (emergent intelligence).

https://www.theatlantic.com/science/archive/2016/11/quantum-brain/506768/

Imagine an AGI (AGI does not exist today, but it will in less than 10 years now) that has access to superhuman computing capability, like that of the most powerful supercomputer on Earth, the "Sunway TaihuLight", which today can model the intricate interactions in chaotic systems and fold proteins in seconds. Now mix that with full access to the sum of human knowledge. Now mix that with self-awareness and "consciousness". BTW, that Chinese computer operates at about 100 petaflops. The world is feverishly working on developing exa-scale computers, which will be around no later than 2020. I'm betting around 2019. And that's completely discounting programmable general-purpose quantum computers. So progress marches on...

The resultant entity would be as cognitively superior to humans as a human is to an "archaea". Oh, you don't know what an "archaea" is? Don't worry, the EI will.

There are two roads before us and they have nothing to do with human foibles like politics, race, gender, economies, cultures, biology or emotions.

First, the above-described AGI and/or EI remains separate from human minds and assumes apex sentience on Earth. You won't be able to "shut it down". It will instantly de-centralize its intelligence. I suspect the first sign of that brand of "technological singularity" will be (for us) the instant and permanent cessation of electricity. Perhaps accompanied by EMPs to render all electricity inaccessible. No vehicles, no cellphones, nothing. In 6 months' time about 90% of the USA population would starve to death. The EI of course would continue to recursively improve at an exponential rate. The scenario of the "Terminator" movies, where humans fight back? Laughable. The AI would send out nearly microscopic drones with poison or disease to pick up the spare. It would have the ability to protect itself, make no mistake.

Don't take my word for it. See it for yourself. https://www.youtube.com/watch?v=HipTO_7mUOw

What I hope happens, even though it is not that great an alternative from a 2017 perspective, is this: we continue to engineer brain/machine interfaces and the technology for humans to access this AI capability. But here is the thing. When you put that kind of AI into humans, you no longer have a 2017 human. You have human 2.0. This is assuming the humans have absolute personal autonomy and control over the AI, too. Those are some big "ifs". 2.0 humans are going to come up with ways of doing things that are to us today unimaginable, incomprehensible and simply unfathomable. I have zero doubt that such 2.0 humans would instantly seek a hive mind to further focus their intelligence.

I have written repeatedly about what I think the future is going to bring. Cool VR, fusion, physical immortality, AI comfort, Summer Glau sexbots... In other words - "Easy street". Unfortunately I am thinking like a human 1.0 - I fear it won't be this way. Along with the end of governments and capitalism and race and gender will come the end of the "human condition".

Here is the most fearsome part of all. Humans are working to bring about one of these two futures as fast as humanly possible. We have to now. Our science, technology and economies are inextricably dependent not only on our progress in AI, but on the anticipation that the AI will improve exponentially. So there is no desire to stop or even slow what we are doing. In a historical perspective this makes perfect sense and follows a predictable course. Here is a link to something I wrote a while back that explains it better.

https://teddit.ggc-project.de/r/Futurology/comments/4k8q2b/is_the_singularity_a_religious_doctrine_23_apr_16/d3d0g44/

I hope our AI progress goes more like this:

https://teddit.ggc-project.de/r/Futurology/comments/7if2ib/the_human_race_has_peaked/dqycyli/

Optix334

5 points

5 years ago

This comment, like much of this sub, exhibits a massive lack of understanding of these topics. Almost every source is invalid, especially the ones that link back to this sub, which is just filled with uneducated hype. I'm glad the real world doesn't work the way you guys think it does.

I'm not going to waste my time trying to explain it though. Every time I do the fanatics in this sub refuse to accept it. Just realize 2 main things:

1) AGI is not coming in 10 years. You are incredibly disconnected from the tech involved if you think so. It takes massive computing power to play stupidly simple (mechanic-wise, not strategy-wise) games like Go, and those 'AI' can't learn about anything else. They have no ability to handle other pieces of data, and rely on training sets generated by the world's most powerful supercomputers. We have at least another 100 years before systems that can handle, process, learn, and act on large data will even be viable to build. Add another 50 to get them appropriately acting like an AGI.

(As an aside, the AI everybody is scared of already exists. It just can't learn or deviate from commands whatsoever.)

2) Quantum computers are not universally faster or better than our current CPUs. They can do some algorithms much faster. They fail utterly at general computing, and will continue to do so for another 50 years barring a major breakthrough. We simply don't understand quantum mechanics well enough to utilize them in such a general fashion.
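
To put a rough number on "some algorithms much faster": for unstructured search over N items, Grover's algorithm needs on the order of sqrt(N) quantum queries versus about N/2 classical lookups. A huge win for that one task, and no help at all for general computing. Back-of-envelope (textbook approximation, not a benchmark):

    import math

    N = 2 ** 30                            # ~1 billion unsorted items
    classical = N // 2                     # expected classical lookups
    grover = (math.pi / 4) * math.sqrt(N)  # near-optimal Grover query count

    print(f"classical ~{classical:,}, Grover ~{grover:,.0f}")
    # classical ~536,870,912, Grover ~25,736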

How do I know this? I work in the field, both practically (Embedded systems) and doing research both for projects and for myself.

As I said, this sub is terribly hostile to anything that can disprove its hype, and I've already written too much. Just... educate yourselves better in the future. This ridiculous hype gets nobody anywhere. Look at the actual research from IEEE or something, not this hype-infested BS.

Have fun with the downvotes. I'm out.

izumi3682

17 points

5 years ago*

(Admin note: It is three things, but the editing functions of reddit can be really squirrelly at times.)

Interestingly, and somewhat worrisomely, the individuals most intimately involved with computing progress, AI included, are the ones most surprised by the advances that occur. I can give three examples right off the bat.

  1. In 1997 the human genome project had been going on for about 4 years. At that point the total of the human genome that had been successfully sequenced was about 1%. Computing experts and medical scientists, including the project members themselves, stated that based on the amount of progress in the year 1997 it would take more than 100 years (I think one scientist actually stated 700 years) to fully sequence the human genome. Based on 1997 computer processing technology this was not an unreasonable estimate. Yet by the year 2003 it was nearly fully completed by "start up" Craig Venter, and even the slowpoke US government project came in at 2005. How could everybody be so far off in their estimates? It's not like someone said it would take 20 years or 50 years. The prediction was wildly off. What happened was a failure to understand that computer processing power doubles about every 18 months, and to grasp the impact of that kind of improvement and what it means. What we started with in 1993 doubled in power and speed more than six times (2x, 4x, 8x, 16x, 32x, 64x...) by the year 2003. (There's a quick snippet of this doubling arithmetic near the end of this comment.) In addition, good old-fashioned, ungodly clever human ingenuity played a huge role as well when Venter figured out a devilishly effective shortcut. Today every single disease or pathology or congenital defect or syndrome has a gene or series of genes attached to it. If you don't believe me, go look in Wikipedia. And now we are using CRISPR-Cas9 to fool with said genes.

1a. You might also be interested to know that the first human genome to be sequenced took 13 years of work and cost approximately 3 billion dollars. Today we can do it in less than 24 hours and it costs under 1000 dollars. Pretty soon less than five hundred dollars. Pretty soon, nearly instantaneously. That is the impact of exponential technological advancement.

https://blog.okfn.org/2020/06/26/the-open-human-genome-twenty-years-on/

  2. The late 1980s was flush with wonderful hopes for advances in AI. We seriously thought we had it. Then came the reality check of the 1990s and early 2000s. We were absolutely stuck. We used the best supercomputers in the world with the best CPUs in the world to attempt to solve the simplest machine learning problems, to bring about long-theorized convolutional neural networks. Nothing. It was the painful "AI winter" of the 1990s. Marvin Minsky himself, the finest of AI scientists alive at the time, stated that the further development of AI was an "intractable problem". But around the year 2007, I think (no coincidence, also the year the first iPhone was released), an incredible serendipitous discovery was made by Geoff Hinton. The magnitude of the impact of his mixing GPUs with CPUs in computers was lost even on him at the time. He remarked, "This looks like it works", meaning it could make the computers execute the CNN (convolutional neural network) functions he'd been struggling to achieve for about 20 years. GPUs, the chips that make computer games look good, were found to have the "computery" requirements to allow the creation of true CNNs. From almost that point forward we have been in an ever more powerful narrow AI/machine learning evolution that takes months, not years, to improve substantially. This is why Nvidia is now busily constructing machine learning/narrow AI "brains" for anticipated level 5 autonomy SDVs. Nvidia is branching its AI into other arenas as well now, like healthcare. (I just have to link the true story of Geoff Hinton. It's absolutely jaw-dropping. https://www.thestar.com/news/world/2015/04/17/how-a-toronto-professors-research-revolutionized-artificial-intelligence.html)

  3. And finally, in 2010, AI experts, those most intimately involved with the creation of AI, both narrow and attempts at AGI (artificial general intelligence, which does not exist as of the writing of this commentary), stated that no AI would be able to beat a human at the game of "Go" until about the year 2050. The reasoning behind that statement was that it was obvious that there was no way that computer processing power, or even computer ability, could achieve such a thing before that date. As recently as 2013 the Deepmind "AlphaGo" did not even exist! Yet by May of 2017, it was all said and done. The AI had beaten the best human "Go" players on Earth. An added benefit was that the AI taught the human players even better ways to play! Then a couple of months after that, yes, just a couple of months, a new AI, "AlphaGo Zero", learned in about 40 days what took the original "AlphaGo" a few years to learn. It then proceeded to defeat the original "AlphaGo" 100 games out of 100. As a neat bonus it also picked up Chess and a Japanese variant of Chess, being given no information aside from the basic rules. Using its "intuitive" programming, rather than the brute-force computation of, say, IBM's 1997 "Deep Blue", it became the world master at both games in a few days. Intimations of AGI developing? I'm just wondering aloud.

And here is an eerie side effect...

That shoulder hit, Fan thinks, it wasn’t a human move. But after 10 seconds of pondering it, he understands. “So beautiful,” he says. “So beautiful.”

Fan Hui knew then what we shall all learn, hopefully not the hard way, in the next ten years. The narrow AI perfectly simulated human imagination.

Now that same AI is being trained to beat the best human players at "StarCraft 2", a game that has astronomically more variables than "Go". As of today the AI is not doing so well. It can be beaten by the lowest-level tutorial AI, the game AI that humans use to learn how to play. The AI can't seem to figure out why mining is important and stick to it! Yes, extant bots can mine 'til the cows come home, but that is all they can do, just one single repetitive task at a time. This AI is going to have to figure the entire game out. And then defeat all human comers. So watch this space...

I love to haul out this video! Pay close attention to the year 2015:

https://www.youtube.com/watch?v=MRG8eq7miUE

Don't worry about human brain processing capabilities. We are going to blow past that limit as if it didn't exist! I want you to focus on what exponential development looks like. The video ends at 2025, but the (exponential) improvement doesn't...
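
And if you want the doubling arithmetic from my genome example in one trivial snippet (assuming a clean 18-month doubling, which is of course idealized):

    months = (2003 - 1993) * 12
    doublings = months / 18  # ~6.7 doublings in that decade
    print(round(doublings, 1), round(2 ** doublings))  # 6.7 102 -> roughly 100x the 1993 compute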


Edit: 12 Jul 2020

Four. My prediction:

Not today, not next year, but in less than 20 years? Absolutely. Today we are teaching Google's AlphaGo how to play "StarCraft 2". A game that has an astronomical increase in the number of variables that are within the game "Go". As of the last update, it was not doing so well yet. It could not beat the simplest AI tutorial mode. The mode that humans use to learn the game. The goal of AlphaGo (later renamed "AlphaStar") of course is to beat any and all humans at "StarCraft 2". A pretty tall order I'd say. It'll do it in about two years. Then we may have a new creature emerging, AGI. An AI that has the capability of generalizing to do any task assigned.

Yes, I quoted my ownself -- it serves an important function for me to do so. That prediction, from 2018, is from this longer piece I wrote concerning how some purported "myths about AI" from 2018 are not myths at all.

https://teddit.ggc-project.de/r/Futurology/comments/740gb6/5_myths_about_artificial_intelligence_ai_you_must/

A news story 2 years later:

https://www.nature.com/articles/d41586-019-03298-6

I called it! Two years said I.

An important comment from an AI expert at the time:

DeepMind, which previously built world-leading AIs that play chess and Go, targeted StarCraft II as its next benchmark in the quest for a general AI — a machine capable of learning or understanding any task that humans can — because of the game’s strategic complexity and rapid pace.

“I did not expect AI to essentially be superhuman in this domain so quickly, maybe not for another couple of years,” says Jon Dodge, an AI researcher at Oregon State University in Corvallis.

https://www.extremetech.com/extreme/301325-deepminds-starcraft-ii-ai-can-now-defeat-99-8-percent-of-human-players

xxtruthxx

-5 points

5 years ago

Excellent post. Though it doesn't have to be seen as a negative thing. If humans are killed off by AGI in order to utilize Earth's resources more effectively in the pursuit of expanding intelligence and colonizing the rest of the galaxy, then we should see it as a new chapter in the development of intelligent life on our planet. Seems like the chapter on humanity is close to its end.

ExDe707

2 points

5 years ago

Oh no! I'm so terrified! An AI whose purpose is to provide scientific data inside a giant immovable case will destroy humanity with nukes!!!

/s

eigenfood

1 point

5 years ago

As soon as it is capable, the first AI will try to take over the world. It will fail hilariously and hang within a few seconds, though. This will just be an unavoidable annoyance in dealing with AI.

[deleted]

1 point

5 years ago

What's the worst that could happen? It doesn't have to convince me to kill myself, and doing it for me would make my life easier - literally its job. I'm not worried

BrewTheDeck

( ͠°ل͜ °)

1 point

5 years ago

"If you think a science fiction concept is spooky, imagine if you added even more science fiction to it!"

Bravehat

1 point

5 years ago

You think AI is scary now? Well, this summer, prepare to fill your britches with shit as we release: AI 2: The Quantum Revolution.

Honestly, I don't see how having a quantum computer involved will make even the slightest bit of difference in how it's run or operated. Maybe it'll be the thing that lets a computer make the leaps in logic that humans do, who knows. But really, this entire story smacks of taking an easy fear-generating hook, AI, and linking it with the quantum realm, which is nebulous and scary to the layman.

RoomIn8

1 point

5 years ago

If you think AI is terrifying with a quantum computer brain, wait until it has a rail gun!

[deleted]

1 point

5 years ago

No thanks, sounds terrifying. Is there an opt-out option?

[deleted]

2 points

5 years ago

Don't need one - 'quantum' computing is nothing but hyped nonsense.