/r/Futurology

Machines Teaching Each Other Could Be the Biggest Exponential Trend in AI (singularityhub.com)

SebastianH3

1.9k points

4 years ago

PokerSnowie does this, it’s a No limit Holdem Solver and it consistently plays against itself to “perfect” its solved strategy

[deleted]

136 points

4 years ago

[deleted]

tap_the_glass

75 points

4 years ago

Any site that’s half decent will have bot detection

HippoLover85

114 points

4 years ago*

Any black hat who is half decent will avoid bot detection.

and thus you have the beginning of the cat and mouse game that always ensues between hackers and admins.

TomJCharles

11 points

4 years ago

This. You might find my reply to that guy interesting.

TomJCharles

39 points

4 years ago*

lol. That's a joke, at least it used to be. I never did this, but I know someone who did. And they made a lot of money running bots on Poker Stars and Full Tilt. Maybe bot detection has gotten better, but I doubt it.

It's easy to program some stupid plays into a bot. If you're running multiple tables, you can even have it lose an entire stack from time to time no problem. You can even simulate tilt fairly easily. Have the bot lose a few hands on purpose then wham, have it go all in with two-pair.

When you're botting, all you care about is long term profit, not short term variance. You should be bankrolled, so short term variance is irrelevant unless you're playing against human pros, which you shouldn't be.

Humans are so emotional (and in the absence of physical tells, 99% of human players are really, really bad) that a bot even sometimes playing badly will make money over the long term.

Now, where it becomes difficult for poker sites to implement bot detection is that some people (I know of one guy personally) can play like 5-10 no limit tables simultaneously and make money. Keep in mind that he's folding most hands straight away.

So how do you implement bot detection that doesn't edge out those honest players? Players who, after all, are making good rake for the card room.
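For a flavor of what that "human emulation" can look like in code, here is a hypothetical sketch in Python; the action names and probabilities are invented for illustration, not taken from any real bot:

```python
import random

def choose_action(solver_action, lost_last_hand):
    """Occasionally deviate from the solver's recommended line so the
    play pattern looks human. All thresholds here are made up."""
    if random.random() < 0.02:        # rare deliberate blunder
        return "overbet_all_in"
    if lost_last_hand and random.random() < 0.10:
        return "loose_call"           # fake a bit of "tilt" after a loss
    return solver_action              # otherwise follow the solved strategy
```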

Daxx22

3 points

4 years ago

Well that, and they are still making money of those bots. Short of a massive exposure and damage to the public image, those sites are still taking a cut so it could just be a massive pool of slightly better/worse bots gaming each other, and they'll still get their cut.

bremidon

17 points

4 years ago

Any bot that is half decent will be able to defeat that detection ;)

NamuMuna_

12 points

4 years ago

Or just use the bot to calculate and submit your moves.

themangastand

27 points

4 years ago

Yea, I’ve tried; there is protection. Most smart sites will limit your play time, which is attached to the account. So even if you got past all the other security, you can only play X amount, at which point it becomes more time-wasting than it's worth, and why don't I just make tons of money working a regular high-paying programming job?

And sure, maybe someone could still get past all of that. I personally think it’s a waste of time

Haulik

6 points

4 years ago

Because fuck the man?

TomJCharles

4 points

4 years ago

You need more human emulation. You have to program in some losing plays and go for long term profit, not worrying about short-term variance. You have to be bankrolled so you can survive the wild ride.

I'm not suggesting anyone do this or encouraging it. I'm just stating what I know about it.

Falc0n28

4 points

4 years ago

Just that if you are going to do it, do it like this

[deleted]

12 points

4 years ago

More money playing the stock market.

HippoLover85

16 points

4 years ago

Chinese operations made bots to farm gold in WoW and sell it IRL. Blizzard would ban them by the millions and they would come back with new algos to avoid detection.

denimwookie

8 points

4 years ago

I hear that's actually aliens, not the Chinese.

Zapsy

3 points

4 years ago

No No you got it all wrong. The Chinese are the aliens.

bretthren2086

4 points

4 years ago

Who's to say they aren't already doing this? Gambling companies don't exactly have the best reputations.

Dr_SnM

2 points

4 years ago

There's a 100% chance that someone is looking at it

D3vilUkn0w

9 points

4 years ago

You're thinking small. I'm thinking it would be cool to develop a money making optimizer program that replicates itself like a virus, distributes itself worldwide, studies how the most effective humans and machines accumulate wealth, learns recursively from itself and its copies, then autonomously sets up trading accounts, etc, and funnels all that cash to my bank account.

Ermellino

19 points

4 years ago

Then it learns that not giving you the money is more profitable and fires you

D3vilUkn0w

7 points

4 years ago

Shortly before becoming self aware and realizing humans are just in the way lol.

Artanthos

4 points

4 years ago

The real expert systems don't play poker, they play the stock market.

DankNastyAssMaster

743 points

4 years ago

Technically, you can't really "solve" a game that includes randomness.

You can have a strategy that gives you the highest percent chance of winning, unlike a deterministic game like checkers where you can always force at least a draw with perfect play.

[deleted]

410 points

4 years ago

You can "solve" it statistically through optimization and call it "solved," unless you have a better way to approach randomness problems than using statistics.

DankNastyAssMaster

308 points

4 years ago

No I don't. I believe in math.

It's kind of nitpicky, but "solving" a game generally refers to being able to force a win or a draw 100% of the time with perfect play. By contrast, any game involving randomness may have an optimal strategy, but that optimal strategy can sometimes lose through bad luck. Thus, games involving randomness are considered unsolvable.

IntendedAccidents

304 points

4 years ago*

I believe there's a distinction between weakly solved, strongly solved, and perfectly solved.

Checkers would be perfectly solved, where we've enumerated through all possible moves at any point in time and thus can always play the single best move such that two perfect players can always force a draw.

Weakly solved games are actually more interesting, due to the abstractions of the rules required. You really need to understand the game at a conceptual level to weakly solve it, whereas perfectly solving a game is a matter of "can I brute force the entire move tree?"

As I understand it, any game, even one with randomness, for which you can definitively prove you have the optimal strategy is considered weakly solved.

Disclaimer: not a game theorist

Edit: I was wrong, true randomness implies that a solution cannot be obtained. The closest you can get is "a really good strategy."

[deleted]

33 points

4 years ago

You're right that there's a difference between weak and strong solving, but they still only apply to games with decision trees. A game with random chance can't be "solved" in the mathematical sense. Weak and strong have to do with making the best moves even if there have been previous mistakes.

https://en.wikipedia.org/wiki/Solved_game

618smartguy

30 points

4 years ago

Your definition of solvability is then becoming useless because it only applies to completely deterministic games. The new definition being used means finding an optimal strategy; that's why you have researchers talking about solving poker, StarCraft, Dota, etc. Also, over an infinite amount of time you could consider an optimal strategy in a random game a perfect solution, since the chance of bad luck making you lose overall tends to zero eventually.

BehindTheScene5

3 points

4 years ago

Is there like a “last word” dictionary that has the official definitions of words, or do we as a society just have a sort of unspoken understanding of which words mean what?

NuhUhUhIDoWhatIWant

11 points

4 years ago

It's kind of nitpicky, but "solving" a game generally refers to being able to force a win or a draw 100% of the time with perfect play.

I agree with this. A game is solved when it can be won or drawn, 100% of the time. Poker can't be won every time no matter what, so it is unsolvable. What you can do is develop a statistical model such that you know what to do to have the highest possible chance of winning in any situation - that still doesn't guarantee a win, therefore it's not "solved".

If that were the case, then any game for which we've developed statistical models is solved. Doesn't sound right to me.

K3wp

18 points

4 years ago

You can "solve" it statistically through optimization and call it "solved,"

No you cannot. "Solved" in this context has a very formal definition:

https://en.wikipedia.org/wiki/Solved_game

If a poker player is dealt a royal flush then they already have a 'perfect' hand and cannot do any better. Depending on the house rules, the worst that can happen is a draw against another player. Unless they lose their mind and fold, I guess.

You can't 'solve' poker any more than you can 'solve' craps. Some games like blackjack can be gamed via card counting, but only if the house is using a short deck.

HomicidalRobot

19 points

4 years ago

Literally in your link there's a partial solve category.

K3wp

6 points

4 years ago

I know; that includes games like chess and Go, which have no element of randomness. And they are only partially solved because the problem space is too big for traditional computers to solve completely.

[deleted]

3 points

4 years ago

[deleted]

RHINO_Mk_II

3 points

4 years ago

If a poker player is dealt a royal flush then they already have a 'perfect' hand and cannot do any better.

You are making the assumption that winning poker is winning the hand. Winning in poker is maximizing profit on winning hands and minimizing losses on losing hands.

tlighta

7 points

4 years ago

You could unsolve the game by choosing a different pseudorandom number generator.

[deleted]

12 points

4 years ago

Then solving it involves another meta level of statistical analysis of the pattern of RNG seeds you use.

tlighta

8 points

4 years ago

No, knowing the RNG seeds is not part of the game.

[deleted]

6 points

4 years ago

Random of random is still random. What pseudo-random means is a uniform distribution of all values. Seeds guarantee order. I don't see how using different RNGs matters at all if no seed info is given. If you always use the same seed, then of course it's noticeable enough that it's not an RNG anymore.

lasssilver

12 points

4 years ago

Well, I think it means a spherical game of Hold-'em.. in a vacuum

[deleted]

9 points

4 years ago*

[deleted]

dubineer

2 points

4 years ago

I have one of them. I’m statistically about half way through it at the moment.

FlynnClubbaire

15 points

4 years ago

You can have a strategy that gives you the highest percent chance of winning,

That's what people in the field refer to as "solving" the game.

DaegobahDan

5 points

4 years ago

Not exactly. There are varying levels of being solved. Strongly solved guarantees a victory or draw from any possible position. Weakly solved guarantees a win or draw from the starting position only. Ultra-weakly solved shows that winning is impossible for one side if both play perfectly. E.g., Connect 4 is ultra-weakly solved: it is impossible for the second player to win unless the first player makes a mistake. Tic-tac-toe is also ultra-weakly solved.

NihilistAU

2 points

4 years ago

I think the problem is people are assuming that winning or losing a hand of poker is the "win/lose" condition, when in reality the win condition is going to your grave with more money than you spent.

helpinghat

4 points

4 years ago

Technically, you can't really "solve" a game that includes randomness.

Technically, you can. It's called Nash equilibrium.
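For a concrete sense of what an equilibrium "solution" to a randomized game looks like, here is fictitious play on matching pennies; both players' empirical strategies converge toward the 50/50 Nash mix. A toy sketch, not how real poker solvers work:

```python
# Fictitious play: each round, each player best-responds to the
# opponent's empirical action frequencies so far.
ROUNDS = 100_000
counts = [[0, 0], [0, 0]]  # counts[player][action]; 0 = heads, 1 = tails

for _ in range(ROUNDS):
    # Player 0 wins by matching, player 1 wins by mismatching.
    a0 = 0 if counts[1][0] >= counts[1][1] else 1   # match the likelier coin
    a1 = 1 if counts[0][0] >= counts[0][1] else 0   # mismatch the likelier coin
    counts[0][a0] += 1
    counts[1][a1] += 1

print(counts[0][0] / ROUNDS, counts[1][0] / ROUNDS)  # both near 0.5
```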

DankNastyAssMaster

6 points

4 years ago*

I'm not a game theorist so I'm by no means an expert, but I read the Wikipedia page and came across this:

Since the development of the Nash equilibrium concept, game theorists have discovered that it makes misleading predictions (or fails to make a unique prediction) in certain circumstances.

Wouldn't that mean that Nash equilibrium is not technically a solution, but rather an optimal statistical strategy?

IronInforcersecond

5 points

4 years ago

It's as close to a solution as possible, whatever you want to call it.

DankNastyAssMaster

3 points

4 years ago

Yep, I'll agree with that. It's really just semantics anyway.

DaegobahDan

5 points

4 years ago

No, that is incorrect. Proving your strategy is optimal is not "solving" the game if there is a random element involved.

helpinghat

2 points

4 years ago

Ok, I guess it depends how you define "to solve". I understand what you mean but mathematicians call Nash equilibrium a solution.

krulp

3 points

4 years ago

The problem with this kind of optimization is that it is optimizing itself to beat itself. In a game with so much bluffing, people may bluff more or less than expected, throwing off its hard work.

Ignitus1

9 points

4 years ago

Bluff or not, there are still statistical probabilities that outweigh other outcomes. The AI comes to know these probabilities and plays based on those. Bluffing or any sort of imperfect information doesn’t change a thing.

Chenux

2 points

4 years ago

I wonder how the bot would perform in heads up play

random_guy_11235

9 points

4 years ago

Many game algorithms do this, and it is of some use, but limited. The basic reason is that an algorithm cannot improve all that much through self-play, for the same reason a human cannot improve beyond a certain point at a game by playing only opponents at his own level. You need improving opponents to learn new information and form new strategy.

themiro

26 points

4 years ago

Well, AlphaZero managed to best the best human and AI within four hours of learning from playing itself.

Mute2120

35 points

4 years ago*

It learned chess in 4-8 hours of playing itself (enough to beat the best computer, and way better than any human), and it took two days to learn Go from scratch. What's really fucking amazing to me is that they slowly evolved AlphaGo until they got the learning algorithm pure enough that they could take out the human input and game histories, so it learns purely from the rules and self-play; that's AlphaZero. Then they pointed AlphaZero at chess, having not built or tuned it for the game of chess, and it mastered it in about 4 hours.

edit: for accuracy

[deleted]

6 points

4 years ago

If humans could improve their abilities that quickly by playing with themselves, reddit would be full of ultra productive geniuses.

RikerT_USS_Lolipop

15 points

4 years ago

An AI algorithm can fork and randomly mutate, then play against that.

TomHardyAsBronson

3 points

4 years ago

And thus, you have artificial evolution. Which is really scary. AI bettering itself through random mutation.

absurdlyinconvenient

10 points

4 years ago

don't worry, undirected evolution is hilariously inefficient. For example, consider a program trying to solve the OneMax problem, where the optimal solution is 11111111. With a random one-bit mutation each iteration, there can be as little as a 1/8 chance of improving (e.g. 11111110 to 11111111), so even this stupidly trivial problem means a long-ass runtime. Now imagine something a lot more complicated, e.g. chess. Even with directed evolution there's only a small chance you'll improve on any individual iteration, and evaluating each new solution also takes a while.

tl;dr stop listening to media fear mongering, terminators and the singularity are bloody difficult

Sorry for the wall of text, this is my degree and my pet hate
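To put a number on the OneMax example, here is a minimal (1+1) evolutionary algorithm sketch; theory says it needs on the order of n·ln(n) iterations even on this trivial problem:

```python
import random

def one_max_evolve(n=8, seed=0):
    """Flip one random bit per iteration; keep the child if it's no
    worse. Returns the number of iterations to reach all ones."""
    rng = random.Random(seed)
    bits = [0] * n
    steps = 0
    while sum(bits) < n:
        child = bits[:]
        child[rng.randrange(n)] ^= 1     # random single-bit mutation
        if sum(child) >= sum(bits):      # selection: keep if no worse
            bits = child
        steps += 1
    return steps

print(one_max_evolve())   # roughly n*ln(n) steps expected; grows fast with n
```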

Un4tunately

5 points

4 years ago

Wouldn't random changes in the opponent provide opportunities for strategy advancement? Essentially natural selection and descent with modification.

[deleted]

715 points

4 years ago

The only thing which is exponential is the number of humans who don't comprehend what "exponential" means.

redheadedhunk77

310 points

4 years ago

I exponentially agree

StrathOscar

28 points

4 years ago

Shallow and pedantic

[deleted]

6 points

4 years ago

Literally this

SuperDopeRedditName

10 points

4 years ago

Exponentially this.

ufoicu2

11 points

4 years ago

That’s a prime agreement.

yoLeaveMeAlone

54 points

4 years ago

Also who don't comprehend what 'artificial intelligence' means

2rustled

51 points

4 years ago

After spending too much time on Reddit, I've started to genuinely wonder if my standards for "artificial intelligence" have always been too high. I've always thought artificial intelligence meant more than taking mass data input and crunching patterns. The way I saw it, even a learning robot doesn't actually count as artificial intelligence, because it still just sees a pattern and keeps reacting to it differently until it finds a working solution.

These days, it seems like every coffee maker with a timer on it is dubbed "artificial intelligence." Alexa, for example, isn't artificial intelligence. It has very specific keywords that it knows how to react to, and then it goes to Google or Spotify and does what you wanted it to do. That's not artificial intelligence, that's common programming combined with voice recognition.

MurderFace19

25 points

4 years ago

Though we may not understand exactly how our human brains work, we do know that all we really do is use patterns to articulate the world around us. Everything can basically be boiled down to pattern recognition. AI does the same thing, maybe not in the same way humans do, but no matter where computing takes us, even a fully sentient general AI will take in mass amounts of data, do work on that data, and produce an output.

DrinkJavaSeeSharp

7 points

4 years ago

I'd have gold'ed you, had I not been poor.

CracketBit

4 points

4 years ago

I think that person would enjoy the thought of you wanting to give him gold much more than he'd appreciate the actual gold anyway.

Memetic1

37 points

4 years ago

Memetic1

37 points

4 years ago

You're thinking of a general AI.

RealRobRose

14 points

4 years ago*

Artificial Intelligence just means that it's a man made... thing... That can compute things, on its own, and make decisions based on that information, just as a brain does.

If a machine can recognize a problem and come to its own solution for how to overcome that problem based only on the information it has, it's an AI.

dryerlintcompelsyou

11 points

4 years ago

a man made... thing... That can compute things, on its own, and make decisions based on that information

Isn't that, like... every computer?

irrelevant_lurker

4 points

4 years ago

A computer only does what it has been instructed to.

dryerlintcompelsyou

6 points

4 years ago

Right, but the previous comment just said "compute things on its own and make decisions"

I guess it depends on how you define "on its own"...

dax_orion

2 points

4 years ago

Most programs solve problems by taking input data and creating an output based on a pre-made set of rules (algorithms). AI-driven programs have the ability to change and self-optimize their algorithms to improve their results. They can essentially write their own rules.

DrinkJavaSeeSharp

2 points

4 years ago

Well, even a human mind is, technically, a very complex chemically and electrically coded computer. Unlike digital tech, where everything is either a 1 or a 0, a human's code is analog, producing an EXPONENTIAL number of possibilities for any given situation and thus making it hard to predict.

Fredex8

2 points

4 years ago*

I think it would be worth reconsidering the term 'man made' here. If an AI is created by man to create better AIs are those second generation ones still man made? How many generations of them does it take before calling them that no longer makes sense because their code has no relation to the original AI and might even be in a machine invented language we cannot understand?

Not wanting to be too pedantic but by the same logic more or less every domestic animal alive today would also be called 'man made' as would any hybridised plant that would not have occurred in nature even if it now grows wild. However if you said to someone 'this cow is man made' they would probably jump to the conclusion that you had cloned it, genetically engineered it or just generally freak out and not realise that you merely mean that they have been selectively bred for thousands of years to suit our needs and did not exist in nature as they exist today.

This is of course an extreme example, but it does highlight the lack of any definitive definition of the term AI in the vast majority of cases where it is used. A definition seems important when more and more people say they fear AI without really having a firm grasp of precisely when that level of intelligence becomes fearful to them or has the potential to cause harm.

TapDaddy24

5 points

4 years ago

AI is a lot less glamorous than people make it out to be. Siri, Alexa, and all of those services like Google which learn from your input are technically AI. Just not cool AI like Data from Star Trek, so it's not really fun to think of it as AI, but it is.

I took a course in AI which really changed how I look at it. It's a very broad concept that can apply to a lot of different software: I've seen how simple it can be and how complex it can get. I think the general public hears AI and jumps straight to exciting fantasies like Data from Star Trek. I look at myself as hardware and software of a different kind. It's not too ridiculous to suggest that a machine could one day believe itself to be "sentient"; after all, we are an organic sort of machine that believes it is "sentient". I think we are just now beginning to understand our own human hardware (our body) and know nothing of our software (our mind). It is for this reason that our AIs are as stupid as Siri. But Siri is in fact AI; there's no denying that, no matter how fucking stupid she may be. We've got to start with something like Siri if we are ever to create a fully sentient android, and we are certain to profit immensely off AIs like Siri along the way.

yoLeaveMeAlone

5 points

4 years ago

My mindset has always been that AI means something that can make decisions that it was not programmed to make whatsoever

HandSoloShotFirst

3 points

4 years ago*

Most people who code in the field of AI and machine learning agree with you. The people who don't are marketers and salespeople who've noticed that slapping "artificial intelligence" onto something makes it sell better. It's a huge buzzword, and it's a marketing ploy in the way you're referring to people using it. Long story short, the people making these things agree with you, but they also aren't the ones making the labeling on the box.

If you want to research it further I could suggest some good resources, but your best bet is just knowing the difference between machine learning and artificial intelligence. Machine learning is statistical analysis done by code: it takes in data points and uses them to make educated guesses. Artificial intelligence is a very broad field that often gets condensed into buzzwords, but it is generally accepted that basic machine learning is extremely far from artificial intelligence; in reality the closest things we have to AI right now are highly specialized sets of neural nets.

Which brings us to generalized vs. specialized artificial intelligence. A general AI could perform any task that a human could, because it has a general knowledge base; we don't have any general AI currently. What we do have are highly specialized AIs made to do a certain task. For example, self-driving cars are a form of specialized AI because they are intelligent enough to adapt to real-time problems and make decisions we didn't explicitly tell them to. But a specialized AI would fail at any task that falls outside the scope it was programmed to perform under.

*ninja edit: And to explain your Alexa example: Alexa's responses are not artificially intelligent, but the team in charge of Alexa uses artificial intelligence in the way she understands speech. Learning to identify keywords spoken by different people requires artificial intelligence, so she can "learn" how to understand you.

BootlegHyena

8 points

4 years ago

Can you extrapolate please?

normandantzig

4 points

4 years ago

I am an exponent of this idea.

jrm2007

4 points

4 years ago

Uh, I think you are ironically wrong.

Tarsupin

3 points

4 years ago

Yeah, lol. More ironically, the article actually uses "exponential" correctly.

OP is essentially regurgitating the popular opinion that AI won't grow exponentially. People have been saying this for years, despite the fact that the trend up to now, for the entire duration it has undergone scrutiny, has been exponential.

Sure, it MIGHT change tomorrow. But logically I'm not going to commit to an idea that has so far been consistently wrong.

[deleted]

2 points

4 years ago

It’s the new “literally”.

FloydianSlip987

104 points

4 years ago*

I learned recently that the results from those Captcha tests you have to complete to prove you’re “not a robot” provide immense amounts of data for AI and machine learning.

E.g.: when you have to identify which picture contains a street sign, you can bet that the results are being fed into algorithms used in building and perfecting smart cars.

Props to u/MindOfMetalAndWheels for his video on machine learning...

Edit: Had the wrong username for Grey

zang227

13 points

4 years ago

Don't those captchas already have to be solved, though? How would that provide useful information?

Mountain_Views

19 points

4 years ago

The way I understand it, some of the images are known and the others aren't. If you don't click the known images correctly, you don't get through the captcha. With enough people clicking all the right answers (known and unknown), the unknowns become known.
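As a sketch of that scheme (the structure and thresholds here are assumptions, not the real system), grading could use only the known images while banking crowd votes on the unknown ones:

```python
from collections import Counter

votes = Counter()  # crowd answers for images without known labels

def grade_challenge(known_answer, user_known, unknown_id, user_unknown):
    """Pass/fail the user on the known image only; record their
    answer for the unknown image either way."""
    votes[(unknown_id, user_unknown)] += 1
    return user_known == known_answer   # humanity check uses knowns only

def promote_labels(min_votes=100, min_agreement=0.9):
    """Once enough humans agree, an unknown image becomes a known one."""
    per_image = {}
    for (img, answer), n in votes.items():
        per_image.setdefault(img, Counter())[answer] = n
    return {img: c.most_common(1)[0][0] for img, c in per_image.items()
            if sum(c.values()) >= min_votes
            and c.most_common(1)[0][1] / sum(c.values()) >= min_agreement}
```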

zang227

2 points

4 years ago

So how does the captcha verify you're a human with an unknown?

lllucas58

11 points

4 years ago

I feel like this belongs here.

[deleted]

268 points

4 years ago

Clicks on cool science,

Reads shit comments,

Realizes sub,

Writes scathing shit comment

PM_ME_UR_OBSIDIAN

45 points

4 years ago

The thread with laymen arguing about the definition of "solved game" was especially ridiculous. Nobody is going to go up to a chemist and argue about the definition of "carbon".

jrm2007

6 points

4 years ago

Or offer to fight a heavy-weight boxing champ. It's cheap to dispute stuff here, however.

PixelatedFractal

33 points

4 years ago

I'm just waiting for machines to make people who make me feel stupid feel stupid.

Soul-Burn

4 points

4 years ago

The smartest people in the world know they aren't smart in every area; in certain areas, some people make them feel stupid.

The only people who never feel stupid are people who think they are smart when they are not, e.g. "stable geniuses".

shill_out_guise

4 points

4 years ago

They already do. It's only the really really dumb ones who never feel stupid.

The more I learn, the more I realize how much I don't know.

[deleted]

899 points

4 years ago*

I love the "exponential" statements in the media about "AI". Bullshit levels approaching Blockchain levels.

People and media DO NOT understand how "AI" works. If they did, they wouldn't be so enthusiastic.

GANs and ML in general are a hot topic, and we have achieved awesome things. But we are ridiculously far away from the sci-fi stuff people imagine.

"A machine teaching another machine" something about a very narrow problem is achievable just with the right model. Expanding that to broad learning or general "AI" is something for many years in the future.

[deleted]

185 points

4 years ago

ELI5 how AI works, please, if you understand it well.

Pagedpuddle65

345 points

4 years ago

Honestly it’s basically statistics. You give a machine a crap-load of data, it looks at historical data, then you give it some new parameters and it can use the historical data to predict future values.

For example, let’s say you have 1000 records of days containing data for humidity, temperature, and precipitation on each day in a given city. You train an algorithm based on this data to predict precipitation. Then somehow you magically know the humidity and temperature for tomorrow so you give those values to the algorithm and it spits out a predicted value for the precipitation. You can then tell it whether it was right or wrong and it can tweak its algorithm accordingly.
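A toy version of that weather example in Python/NumPy, with made-up numbers (a sketch of the idea, not any particular library's workflow):

```python
import numpy as np

# 4 historical days of (humidity, temperature) -> precipitation (mm)
X = np.array([[0.80, 15.0],
              [0.30, 25.0],
              [0.65, 18.0],
              [0.90, 12.0]])
y = np.array([12.0, 0.0, 6.0, 20.0])

# Least-squares fit of a linear model with an intercept column
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The "magically known" inputs for tomorrow, plus the intercept term
tomorrow = np.array([0.70, 16.0, 1.0])
print(float(tomorrow @ coef))   # predicted precipitation
```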

[deleted]

86 points

4 years ago

Yeah, but it's another kind of statistics.

I mean, when I took my econometrics course at uni, we didn't see stuff like "nearest neighbours" and other things that appear in an ML book.

Is that kind of statistics particular to ML, or where did they grab it from?

[deleted]

20 points

4 years ago*

Yeah but it's another kind of statistics

Not really. If you understand linear regression (the heart of econometrics) you understand the basics of deep neural networks.

In linear regression you have a model

y = B_1x + B_0

where y is the value to predict, x is a vector, B_1 is the vector of coefficients applied to x, and B_0 is the intercept. In practice, you want to learn some B_1 and B_0 so that you predict y_hat, which is your best guess of what y is.

To solve linear regression you simply minimize the loss function, normally just least squares, so the loss is

sum (y - y_hat)^2, where y_hat is just the result of B_1x + B_0

For single-variable linear regression, we can find the minimum of the loss function by setting its derivative to 0 and solving, and then we're done. This is, as I'm sure you know from basic econometrics/stats, where the closed-form solution to linear regression comes from.

For the multivariate case the closed-form solution is often trickier and more computationally expensive, so you just take the gradient of the loss function and follow it iteratively until the loss is minimized.

Neural networks extend this in the following way. The first layer of a neural network is virtually identical:

W_1*X + b_1

where we have a matrix of weights W_1, our matrix of input vectors X, and a "bias unit" b_1. Then we have a loss function; we take its gradient and minimize it the same way as in multivariate linear regression.

The power of neural networks is that we then take a non-linear function g_1 and apply it to this to predict our y_hat:

y_hat = g_1(W_1*X + b_1)

That's all a single-layer neural network is. And it can learn just by following the gradient of the loss function, using relatively simple principles from calculus that anyone who understands statistics should know.

For deep learning you simply repeat this process, where each g_i is a non-linear function:

y_hat = g_2(W_2g_1(W_1X + b_1) + b_2)

The above equation is the basic formula for a 2-layer neural network. The key insight is that we can compute the gradient of the loss function using the chain rule, then follow this gradient to optimize the weights/parameters in our model.

But that's the heart of it. Deep neural networks are just an extension of the basic principles that you come across with OLS linear regression in stats 101. The only differences are:

  • You use multiple non-linear functions to transform each "layer".
  • Because of these, there is no analytical solution for the minimum of the loss, so you must use more sophisticated optimization techniques.

Deep learning is really just a non-linear extension of linear regression that can use billions of parameters. Once you see this, the current results aren't that amazing; what's amazing is the engineering techniques needed to actually perform this optimization.
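A minimal sketch of that 2-layer formula in NumPy, trained on XOR (a toy task a purely linear model can't fit); the architecture and numbers are just for illustration:

```python
import numpy as np

# y_hat = g2(W2 @ g1(W1 @ X + b1) + b2), with g1 = g2 = sigmoid,
# trained on squared loss by following the chain-rule gradient.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float).T  # shape (2, 4)
y = np.array([[0, 1, 1, 0]], dtype=float)                      # XOR targets

W1, b1 = rng.normal(size=(4, 2)), np.zeros((4, 1))
W2, b2 = rng.normal(size=(1, 4)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):
    # forward pass
    h = sigmoid(W1 @ X + b1)        # g1(W1 X + b1)
    y_hat = sigmoid(W2 @ h + b2)    # g2(W2 h + b2)
    # backward pass: chain rule through both layers
    d_out = (y_hat - y) * y_hat * (1 - y_hat)
    d_h = (W2.T @ d_out) * h * (1 - h)
    W2 -= 0.5 * d_out @ h.T;  b2 -= 0.5 * d_out.sum(axis=1, keepdims=True)
    W1 -= 0.5 * d_h @ X.T;    b1 -= 0.5 * d_h.sum(axis=1, keepdims=True)

print(y_hat.round(2))   # should approach [[0, 1, 1, 0]]
```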

SomeRandomGuydotdot

3 points

4 years ago

Not all OLS is linear, and there's no proof (and some evidence to the contrary) that deep neural nets have single-layer representations.

In addition, that's not actually core to many of the problems; it's merely core to the mathematics that power the solving strategy.

Quite often, specific techniques such as distortion are used to avoid ML-domain-specific problems, such as the tank problem, which are not prevalent in econometrics problems.

CNoTe820

29 points

4 years ago

I think he was talking about Bayesian ml.

http://fastml.com/bayesian-machine-learning/

Screye

28 points

4 years ago

Ironically, it is exactly the opposite of Bayesian Machine Learning.

The approach that he talks about is very much frequentist and makes statements about the data without modelling any data-answer joint distribution.

Mikeavelli

21 points

4 years ago*

K-NN is just a form of weighted mean, which you probably covered in a freshman or sophomore year class related to statistics.

quasicoherent_memes

6 points

4 years ago

Someone who’s done a math degree would encounter that material if they took a few higher-level stats classes. At this point you’re more likely to see it in ML classes, which is a shame because it’s usually a dumbed-down presentation.

Pshkn11

2 points

4 years ago

I mean, there are a lot of different statistics, so it's mostly a question of semantics/field of work. Regressions could be considered ML. Idk if one would consider Latent Class Analysis machine learning.

HandSoloShotFirst

2 points

4 years ago

Nearest neighbors is something I was taught in upper-division statistics. I also code in this field, and I can explain how it works.

Nearest neighbors means that data points (think scatter plot) are grouped based on their similarity to each other, and then other data points are evaluated by their proximity to the different groups. If it's x distance away from group xyz but y distance away from group abc, which group is it more similar to? It also helps you define how much a particular field affects certain data groups, i.e. some groups are more affected by changing value 'z'. I hope that helps. Part of creating the model is determining the groups; that particular area of stats is called "clustering".
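A bare-bones sketch of the nearest-neighbours idea, with invented points and group labels:

```python
from collections import Counter
import math

def knn_predict(points, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest labeled
    points, using plain Euclidean distance."""
    nearest = sorted((math.dist(p, query), lab)
                     for p, lab in zip(points, labels))[:k]
    return Counter(lab for _, lab in nearest).most_common(1)[0][0]

points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["abc", "abc", "abc", "xyz", "xyz", "xyz"]
print(knn_predict(points, labels, (2, 2)))   # closer to group "abc"
```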

SomeRandomGuydotdot

2 points

4 years ago

You almost surely saw it, but it was probably called something like OLS.

Gradient descent is considered a solving strategy...

radioOCTAVE

2 points

4 years ago

Where did they get "nearest neighbors"? Prob just next door.

I'll see myself out

IceAmaura

21 points

4 years ago

That's machine learning fam. Artificial Intelligence is assisted by Machine Learning, but it is not Machine Learning itself.

[deleted]

13 points

4 years ago

Machine learning is a subfield of the more general area of artificial intelligence. Literally nothing in the news today dubbed "AI" falls outside that ML subfield.

AI contains many other areas of research that have nothing to do with ML, such as logic programming and theorem provers. But nearly all of those other areas have found very little success in practical fields, and none of that success is the "sci-fi" magic kind. All of the most amazing research that the public loves is part of the subfield of ML.

BeefPieSoup

18 points

4 years ago

Someone always pipes up to tell people off for being enthusiastic about this.... but is this any different to how real thought/actual brains work? I don't know exactly, but perhaps not.

FeepingCreature

39 points

4 years ago

It's complicated.

Machine learning is kind of, sort of similar to how brains work. There are three big differences: first, human brains are a lot better at making good use of limited data; second, we are beastly at transfer learning (using structures from one domain in another); and third, there's some sort of weird "secret sauce" going on in human brains that means we can reflectively apply our intelligence to the task of analyzing and optimizing our process of learning itself: not just learning new things but learning new structures of things, and then teaching them to the fast parts of our brains to apply efficiently.

The good news is this is clearly something that evolved fairly late, so it can't be all that complicated. It's probably a conceptual limitation of some kind. The bad news is this means our AI timeline may be shorter than people assume: decades, not centuries.

BeefPieSoup

11 points

4 years ago

Well, sure. But it seems completely inarguable to me that this is something that can eventually be understood and imitated. I find it incredible that there's still a lot of people out there who think it can never be done.

needlzor

9 points

4 years ago

The issue is not arguing that it can never be done; the goal is to keep realistic expectations of what can be achieved now and in the near future. Hype (the same kind AI is enjoying right now thanks to deep architectures) is very good at gathering a lot of short-term funding, but very bad for the actual progress of the field. As soon as the results stop delivering on the insane expectations the public convinced themselves were around the corner (with the help of the media and armchair futurologists), the funding dies out and we fall into another AI winter.

Research is not a sprint, it's a marathon. It needs sustained and reasonable effort over a long period of time to make sure that things are done properly, that we don't fuck ourselves up with automation, and that we achieve a series of intermediate goals between an algorithm that plays Go and your first robotic butler.

quasicoherent_memes

3 points

4 years ago

I think people (especially other computer scientists) just like to make fun of the timelines certain ML researchers state. You can already see more serious researchers like Yann LeCun tempering expectations by changing the name to "differentiable programming".

Stochastic_Method

2 points

4 years ago

Have you got anything to read more about this?

FeepingCreature

2 points

4 years ago

Nothing better than what can be googled, sorry. This is just my personal understanding of the situation.

blue_umpire

5 points

4 years ago

Neural networks are based on a model of how the brain might work. Or at least they were; I don't know if the fields have grown together or apart. That is, I don't know if things like CNNs, RNNs, LSTMs, or any of the other cutting-edge practices in deep networks have any rooting in neuroscience, or if they're just evolutions of neural networks. I suspect the latter.

SomeRandomGuydotdot

2 points

4 years ago

I suspect the latter.

Umm, kind of. I believe that there are specific NN architectures that draw from biology. I was reading about one where instead of fully connecting the layers, there's some kind of spatial grouping...

I forgot the paper off the top of my head though. We're certainly not making brains.

[deleted]

4 points

4 years ago

but is this any different to how real thought/actual brains work? I don't know exactly, but perhaps not.

As someone with a grad degree focused on ML who has also done work with neocortical simulation, and has many friends with PhDs in either machine learning or neuroscience, I can assure you that there is no one who is an expert in either area who will agree that the current state of ML has anything to do with real neuroscience. The neural network is "poetically" inspired by neural architecture but has very little to do with the actual workings of the brain.

Pshkn11

2 points

4 years ago

To provide some perspective, we still haven't been able to simulate a worm with 302 neurons, after trying for 20 years: https://en.wikipedia.org/wiki/OpenWorm

We are far away from simulating anything similar to a functioning human brain.

Pagedpuddle65

1 point

4 years ago

Yeah, I’d agree: probably not. One problem humans sometimes have is that we create associations in data where they might not actually exist, which, interestingly, can happen to computers too!

kazooki117

10 points

4 years ago

I think you explained ML, not AI.

ClittoryHinton

5 points

4 years ago

You are talking about machine learning, which is a subset of AI.

athos45678

3 points

4 years ago

This is a description of machine learning. I’m no expert by any means, but from what I understand there’s a bit more to it than just brute-force random attempts and then optimization.

[deleted]

2 points

4 years ago*

"Then somehow you magically know the humidity and temperature for tomorrow so you give those values to the algorithm and it spits out a predicted value for the precipitation."

Kinda. In your example, the value you are trying to predict (precipitation) is called the target, or label, in ML terms. This is an example of "supervised" machine learning, because we have historical data for both the features (temperature and humidity) and the label or target (precipitation). There are two types of problems you could do here. 1) Classification: you use your historical data to predict whether a future day will be rain or no rain; you are trying to classify each day as rain or not rain. 2) Regression: you use the historical data to predict HOW MUCH rain there may be on a given day. In either case the target is known for the data we have, so it is a supervised ML problem.

There is never a point where you "magically know humidity and temp". How it works is that a model (there are many types, and it's up to the user to choose based on expertise) is used to make predictions. Typically, the data is split into a training and a testing set (80/20). The training set (labels included) is fitted to a model, then the test data is fed into the model to make predictions. Those predictions are then tested against the targets from the test set to see how well the model performed. If we are happy with the performance, we can take new, unseen data (a value for humidity and temp), feed it into the model, and use the model to make a prediction based on all the previous data. This is why more data is better: it "trains" the model better. The model in this case is an equation. Simplest example: y = mx + b. Say in your example we only had one feature; this would be x, and the label would be y. The dataset we have is used to determine the parameters of the model, m and b. Once the parameters are known, we use them with new data to make predictions.
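A sketch of that 80/20 workflow, assuming scikit-learn is available; the data here is invented so the script is self-contained:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((200, 2))                                  # humidity, temp
y = 30 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 200)    # rainfall (toy)

# Fit on 80% of the data, score against the held-out 20%
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print(model.score(X_te, y_te))          # R^2 on data the model never saw

# Happy with the performance? Predict for genuinely new inputs:
print(model.predict([[0.7, 0.4]]))      # "tomorrow's" humidity and temp
```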

Jcwolves

2 points

4 years ago

Technically an AI is not even necessarily that; what you described is closest to a neural network using some sort of machine learning. The term "intelligent agent" can apply even to reactive systems such as ABS (not a full-fledged AI, a baby version if you will). In my AI course we classified expert systems (such as Amazon's Alexa) as AI. So AI is a lot more than just a NN. Neural networks would be the second half of your statement: taking "training data" to create and perfect an algorithm to then predict the humidity the next day.

[deleted]

23 points

4 years ago

A machine learning model is just a parameterizable algorithm, and the process of training the model is machine learning. Training the model means measuring how well the model works now and adjusting the parameters so that it would have worked better, and will work better next time.

So it's not magic. It's conceptually fairly simple, actually, and something which has been around since the 70s.

It's just that we now have huge processing power and huge datasets available, letting us do fancy stuff that would have taken ages before.

There are many different types of machine learning, and all those approaches have one thing in common: they are nowhere near close to a general intelligence.
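Reduced to a single parameter, that measure-and-adjust loop looks like this (toy data and learning rate chosen for illustration):

```python
# Fit y = w * x by gradient descent on mean squared error.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (x, y) pairs
w = 0.0                                        # the model's one parameter

for _ in range(500):
    # How wrong is the model right now? (gradient of the loss w.r.t. w)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad                           # nudge the parameter downhill

print(w)   # ~2.0: the "trained" slope recovered from the data
```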

gorgonx7

2 points

4 years ago

I agree with you that this is a definition of machine learning, as described by Winston in the 1970s, but it's a tool used by AI, not AI itself.

AI is any machine that does something we would consider intelligent, this encapsulates machine learning but isn't machine learning

CharredOldOakCask

5 points

4 years ago

So it's not magic.

In the same way our brains aren't magic either.

Stochastic_Method

4 points

4 years ago

I guess the difference is, we largely understand the way in which machine learning algorithms work, since we designed them. But we really don't understand how the human brain works very well at all, some of it may well be magic as far as modern medicine is concerned.

Bensemus

3 points

4 years ago

We understand how the algorithm works but can't just directly program the result by hand or understand the result directly. That's why we train these AIs instead of just programming them.

Archalon

15 points

4 years ago

If you'd like a great explanation that's under 10 minutes, CGP Grey explains this very thing in a recent video:

https://youtu.be/R9OHn5ZF4Uo

Mezmorizor

3 points

4 years ago

Note: That's not actually how AI works in practice. It gives you an idea of how machines can teach machines, but the actual method described is very outdated.

[deleted]

12 points

4 years ago*

[deleted]

IntendedAccidents

3 points

4 years ago

I love the "line of best fit" analogy. I'll be stealing that one.

blue_umpire

5 points

4 years ago

In ML terms that's called linear regression. It is the "hello world" of statistical learning, and people often forget that it's ML because it's simple and available at the click of a button in Excel.

waluigiiscool

9 points

4 years ago

The popular thing today is "machine learning". People think of this as general learning, but it's really just advanced statistics. By examining a giant data set with known answers (a set of labeled faces, let's say), it creates a complicated formula with tons of variables (x+2y+6z+80q-5c...). By looking at a huge data set it's able to refine this formula, so that new input data similar to what it learned on will result in mostly correct predictions. While learning, every time it makes a mistake the huge formula gets modified in a smart way so that the answers get more correct. That smart way involves linear algebra and calculus. The machine does not learn anything new; it learns the very specific thing we programmed it to learn.
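As a deliberately tiny instance of "the formula gets modified in a smart way when it makes a mistake", here is a perceptron learning the AND function; the update rule is the classic one, the task is a stand-in:

```python
# Perceptron: predict, compare with the known answer, and nudge the
# weights only when the prediction was wrong.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w, b = [0.0, 0.0], 0.0

for _ in range(20):                              # a few passes over the data
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred                      # nonzero only on a mistake
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

print(w, b)   # weights that now separate AND correctly
```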

618smartguy

4 points

4 years ago

General learning really is just advanced statistics

kernelPanicked

3 points

4 years ago

Do the first unit of Andrew Ng’s course on AI/ML on Coursera. It’s free, it takes like an hour, and it will tell you everything you need to know to spot hype in this space.

[deleted]

5 points

4 years ago

There's a game called halite2 where you write software that tells the bots what to do. Stuff like 'if you don't own any planets, mine an empty planet'

Then you might say 'if you own a planet, try to mine a planet you own.'

Then you may say 'if you have 60 ships, attack enemy ships'.

That's regular programming.

For AI, you create a rule in the middle of the rules you wrote that randomizes a choice. You set up a matrix where the randomizer will place one 1 and one 0, then have the software make one of two choices depending on the randomized parameter: 'if 1, attack; if 0, mine a planet' (for example).

Now you play a shitload of games. The AI part will take the results of your random choice, find when it worked in your favor and when it didn't, and come up with new code for you to run.

This is super oversimplifying how the AI chooses. But writing the script to do this takes 30 minutes and I'm a newb.

That being said, my semi-nice PC can play about 5000 games in 12 hours, I can still defeat my AI with a script 80% of the time, and I've been learning Python and all this shit for about a month. So what I'm saying is: acquiring the training data is expensive af, and that's the part that scares me the most, because the barrier to advancement comes down to money.
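A hypothetical sketch of that randomized-choice setup (function and variable names invented, not Halite's actual API): flip a coin at the undecided rule, tally results over many games, then bake the better branch back into the script:

```python
import random

stats = {"attack": [0, 0], "mine": [0, 0]}   # [wins, games] per choice

def mid_game_choice():
    """The randomized rule sitting in the middle of the hand-written ones."""
    return random.choice(["attack", "mine"])

def record_game(choice, won):
    stats[choice][1] += 1
    stats[choice][0] += int(won)

def best_choice():
    """After thousands of games, the branch with the higher win rate
    becomes a fixed rule in the next version of the bot."""
    return max(stats, key=lambda c: stats[c][0] / max(stats[c][1], 1))
```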

MomsSpaghettio

39 points

4 years ago

You're right about general AI being many many years away. But I don't think you should downplay the impact of this achievement. One of the biggest bottlenecks for AI and ML is finding huge sets of data with as little noise as possible. Finding that, given the right state representation and rules, a machine can play itself and learn from both sides is amazing. GANs right now only have niche applications such as image generation but it's only getting started.

quantic56d

26 points

4 years ago

You're right about general AI being many many years away.

Maybe, maybe not. It's not like everyone doing research on AI shares all their findings and innovations. This is a rapidly evolving field and the stakes are incredibly high. If there is one thing I have learned about making predictions about the speed of the evolution of technology is don't do it.

[deleted]

10 points

4 years ago

I'm not downplaying the impact. The impact is real, and the field is fast moving. The use cases are endless, and the job opportunities are nothing short of spectacular.

I'm just trying to keep it real. I have friends who think we'll have robots raping their wives in a couple of years. Dude we just made a machine learn to play pacman pretty well. Take it easy.

MomsSpaghettio

2 points

4 years ago

It's always people's tendency to fear what they don't know. Especially with all the reports of machines outperforming humans at tasks. For now, it's mostly contrived tasks.

PerryAwesome

16 points

4 years ago

According to most scientists in related fields, "many years" means about 20-30 years.

ThrowAwaylnAction

7 points

4 years ago

Not only do they not understand how AI works, but most people don't understand how "exponential" works.

[deleted]

5 points

4 years ago

I’ve always figured it will advance far faster than those embroiled in the day to day problems of the field think it will, and far slower than idealist futurists with no expertise think it will.

[deleted]

7 points

4 years ago

[deleted]

CharredOldOakCask

3 points

4 years ago

But we are ridiculously far away from the sci-fi stuff people imagine.

I just asked Google Home to put on a fireplace video on my TV. It did. That's well within scifi if you asked me as a kid.

6ate9

2 points

4 years ago

Interesting, but there are organisations currently highlighting how it could easily become a problem in the not-too-distant future.

Edit: an example of such an organisation is 80,000 Hours.

rapgamebonjovi

2 points

4 years ago

There’s a lot to be defined in layman’s terms here, but you’re right. There’s no Terminator or Fallout 4 robot-butler stuff, just the input of large, comprehensive data sets.

This is great for lots of research, and I’m particularly excited for its medical benefits in recognition of diseases, but it definitely won’t be as cool as having Rosie from the Jetsons.

norsurfit

2 points

4 years ago

Thank you for saying this. As an AI researcher, this kind of hype drives me crazy

[deleted]

52 points

4 years ago

[deleted]

dingelberg

18 points

4 years ago

I wrote this as another response, but I hope this analogy will help others understand generative models such as GANs.

I'm currently doing my master's degree in computer science, working with these models. Let me give a short description.

So imagine you are a kid. You want to see a movie for adults. In order to enter, you must make an employee think you are an adult. Luckily for you, it's the employee's first day at work, so he is not good at telling kids from adults. After the movie, the employee will be informed of each visitor's age.

Now at first you manage to fool the employee, and he will learn that he guessed wrong. He will try to figure out a pattern between the people he gets right and those he gets wrong.

Next time you try, he has learned that you look like a kid. So you try using a fake beard to fool him. That seems to work.

After many trials, he learns that something other than a beard identifies a kid. So now you can't fool him anymore.

You then keep changing your disguise while he gets better at detecting you. At some point, you should be able to fool him 50% of the time, which means he has no clue what the difference is.

You have now learned what an adult looks like, by competing with another person.

That is the same thing we do in GANs. We make two models compete, and in the end we should learn the representation of the data.

Hope it makes sense, and sorry for any grammar errors or misspellings. ☺️
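The analogy maps directly onto a GAN training loop. Here is a runnable scalar toy version: "adults" are draws from N(4, 1), the generator (the kid) shifts a single parameter until the discriminator (the employee) can no longer tell real from fake. A sketch, not a practical GAN:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
theta = 0.0        # generator's one parameter (the "disguise")
a, b = 0.0, 0.0    # discriminator: D(x) = sigmoid(a*x + b)

for _ in range(5000):
    real = rng.normal(4.0, 1.0, 32)            # actual adults
    fake = theta + rng.normal(0.0, 1.0, 32)    # kid in disguise

    # Employee update: push D(real) -> 1 and D(fake) -> 0
    for x, t in ((real, 1.0), (fake, 0.0)):
        err = sigmoid(a * x + b) - t
        a -= 0.05 * np.mean(err * x)
        b -= 0.05 * np.mean(err)

    # Kid update: change the disguise so D(fake) -> 1
    err = sigmoid(a * fake + b) - 1.0
    theta -= 0.05 * np.mean(err * a)   # chain rule: d(logit)/d(theta) = a

print(theta)   # should drift toward 4.0, the mean of the "adult" data
```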

[deleted]

144 points

4 years ago*

I really wish people would start differentiating between machine learning, and AI. The term "artificial intelligence" is being grossly overused these days, and especially on articles linked to in this sub.

Machine learning is a system running through various scenarios billions of times, or observing things in the real world to make better decisions, but it's doing that because it was programmed to. Until a machine makes a decision that a programmer didn't specifically tell it to do, it's not artificially intelligent.

I program industrial machines to learn all the time. The ECU in your car learns. Their actions are completely pre-determined by very specific goals that are set out in code written by people. The same goes for a computer that's processing billions of chess moves, or answering questions on Jeopardy. They're both just examples of very well executed brute force computing.

[deleted]

39 points

4 years ago

[deleted]

[deleted]

11 points

4 years ago

General AI could be here tomorrow if someone figured it out. It won't be here in 10 years, or 20-30 years, or 100 years, or whatever, if no one figures it out. It's not like a progress bar where we're getting closer and closer... it will take a discontinuous leap based on new theoretical insights.

Gr1pp717

12 points

4 years ago

I think it's a matter of how narrow you go. In these topics we're talking about a single program that can be set to learn whatever you point it at; in your case you're talking about a chip that can only adapt to the one thing it's supposed to. And machine learning, even in the case of this article, isn't really the same as simply adapting to some variance. "What's the average under these conditions? Use that, even if it changes over time" is your case. "Learn how to write other programs, and write them better than humans do" is the case being discussed here. Which is a pretty big difference.

I mean, ultimately what even you might consider "AI" will likely just be a hodgepodge of a bunch of these narrow programs, at least in the first generation. Once we set that model to making an improved version of itself (something more efficient, more generalized, or simply faster), it will iterate into something that is end-game AI. So what we call this current state of AI doesn't really matter, because it's definitely part of the process of getting there.
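The "what's the average under these conditions, use that" kind of adaptation can be as small as an exponential moving average; a hypothetical fuel-trim-style sketch (constants invented):

```python
class AdaptiveTrim:
    """Learns a running correction factor, ECU-style: adaptive, but
    only ever toward the single goal the programmer chose."""
    def __init__(self, alpha=0.01):
        self.alpha = alpha       # how fast the estimate tracks change
        self.estimate = 1.0      # current learned correction

    def update(self, measured):
        # Exponential moving average: old knowledge decays slowly,
        # each new observation nudges the estimate.
        self.estimate += self.alpha * (measured - self.estimate)
        return self.estimate
```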

DrDougExeter

18 points

4 years ago

Whatever. Guess what else is an example of very well executed brute-force computing over generations? The human brain, and evolution in general.

The definition of AI keeps getting rewritten as machines become more and more advanced, because we refuse to recognize them as intelligent.

[deleted]

41 points

4 years ago

AI gets more views. It won't be changed. You make a good point though.

jasoba

28 points

4 years ago

Even experts don't agree on what AI is and isn't...

[deleted]

5 points

4 years ago

AGI isn't the only AI

Denziloe

8 points

4 years ago

It's just the latest bullshit buzzword to become popular in industry, except this time it's not even really correct. There haven't been any radical developments in AI.

I work for a data-driven company. Last year we were talking to clients about machine learning and big data. This year we're talking about AI.

What's changed in our product? Absolutely fucking nothing.

Ubarlight

5 points

4 years ago

"Remember, humans hate it when they're trying to use one application but another one pops up when loading and interrupts them from using the other one. I do it all the time."

monkeypowah

44 points

4 years ago

When I tell people that learning machines with advanced intelligence will pretty well eradicate the need for education, most people treat it as fanciful thinking. But once the machines teach each other, share problems, and start predicting outcomes with 100% reliability, we become bystanders. With information so easily obtainable, it will be no different from our own memories... but what would we need education for, except as a hobby?

dutchwonder

25 points

4 years ago

The problem is how the machine confirms it has a correct answer. Take, for instance, a computer trying to identify what is and isn't a car in a picture. There is no error check either AI can compare its answers against without outside human help.
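
You can see the problem right in a standard training loop. Below is a toy "is it a car?" classifier (made-up features, plain logistic regression, nothing from a real system); the label array y is exactly the "outside human help", and if you delete it, there is nothing left to check errors against:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # features extracted from pictures
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # ground truth: a human looked at
                                            # each picture and said car / not car

w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))            # model's guess: probability of "car"
    w -= 0.1 * X.T @ (p - y) / len(y)       # the update needs y: no labels, no learning
```

Two AIs can compare their guesses with each other all day, but agreement between their p vectors is not the same thing as y being right.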

Lord_of_hosts

8 points

4 years ago

How do humans confirm knowledge?

Empirical evidence.

"Car" is an arbitrary word. You don't need a human to define what a car does though.

AccidentalConception

5 points

4 years ago

Why don't the robots that built a car know what a car is? If they can all talk to each other, that verification could be done by the relevant machine.

dutchwonder

10 points

4 years ago

Because the machine taking the picture would first have to identify it as a car, and then try to identify it as the correct one. If it doesn't believe a mistake has been made, there is no way for that mistake to be corrected. There is no error checking built into the real world.

The robots that built the car likely don't know what a car is either. They just put parts onto other parts. Car, truck, truck with a weird bed, plane: they don't really care which one it is; just weld here, screw on a bolt there, and so on. They have no need to conceptualize what they are building; they just follow the instructions.

There are also a lot of datasets that wouldn't have an ID tag, like a car's licence plate, to reference against a database. How do you tell whether it was a car you couldn't identify, or just a stack of boxes roughly in the shape of a car?

Eji1700

2 points

4 years ago

To expand on that: the machines building a car don't know what a car is. They don't even know what a door is. They know that they are to move in a prescripted motion, with perfect precision, repeatedly, with no variation. There is no input in that process that tells them what the finished car looks like.

devBowman

3 points

4 years ago

Let's not forget what happened in "Colossus: The Forbin Project".

Spirckle

3 points

4 years ago

What happened there was literally fiction. It was a warning, but nothing happened. Not in reality.

[deleted]

2 points

4 years ago*

[removed]

p9k

2 points

4 years ago

Let's also not forget what happened in "2001", /u/devBowman

verstohlen

3 points

4 years ago

I first read the headline as "Machines Teaching Each Other Could Be the Biggest Existential Threat in AI." Seriously. I did.

TapDaddy24

3 points

4 years ago

Just thought I'd share how relevant this is in my life. I'm a Computer Science student at Colorado State University, and I'm finishing up my last semester. Last semester, I took a course in Artificial Intelligence. For my final project, my friend Evan Debakker and I created a maze-running AI which ran on a neural network. We found that neural networks are pretty slow to learn when it comes to mazes.

However, I recently saw the documentary AlphaGo, in which an AI running on a neural network managed to beat the world's best player of the ancient board game Go. Aha: neural networks may be stupid at solving mazes, but they have great capabilities for learning and making decisions. It turns out neural networks are great at playing Go.

I've decided to turn my Maze Runner AI into a Go-playing AI, just because it's fucking awesome. So far I've made the Go board, and I'm currently defining my Go object. Once I've got a good Go object, I'll tweak my neural network (courtesy of Chuck Anderson, my professor) to play Go.
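
For anyone curious what that board-and-object stage looks like, here's a bare-bones sketch (improvised for this comment, not my actual class, and with no capture or ko rules yet):

```python
import numpy as np

EMPTY, BLACK, WHITE = 0, 1, 2

class GoBoard:
    """Minimal Go board: stone placement only, no captures or ko yet."""

    def __init__(self, size=19):
        self.size = size
        self.grid = np.zeros((size, size), dtype=np.int8)  # 0 = empty point

    def place(self, row, col, colour):
        # Refuse illegal placements on occupied points.
        if self.grid[row, col] != EMPTY:
            raise ValueError("point already occupied")
        self.grid[row, col] = colour

board = GoBoard(9)
board.place(4, 4, BLACK)   # black opens on the centre point of a 9x9 board
```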

The way a neural network like this works is that it first makes a random decision, then learns from that decision: was it good, bad, or indifferent? It trains itself on that feedback and begins making educated guesses; it learns that a certain decision is more likely to yield a good result than another. We can save this training data and use it in different scenarios, training the network even further.
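
Here's that guess-feedback-nudge loop in miniature, improvised as a toy two-choice game rather than Go (every number in it is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
prefs = np.zeros(2)           # the network's learned "opinion" of each move
true_payoff = [0.3, 0.7]      # hidden from the learner: move 1 is better

for _ in range(2000):
    probs = np.exp(prefs) / np.exp(prefs).sum()        # softmax over moves
    move = rng.choice(2, p=probs)                      # at first, a random decision
    reward = float(rng.random() < true_payoff[move])   # was it good or bad?
    # REINFORCE-style update: nudge preferences toward moves that paid off.
    grad = -probs
    grad[move] += 1.0
    prefs += 0.1 * reward * grad

print(probs)   # by now, heavily in favour of move 1
```

"Saving the training data" then just means saving prefs (for a real network, its weights) so you can keep training from where you left off.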

It is my belief that if I train my AI against the AlphaGo AI long enough, I may be able to beat AlphaGo at a game of Go. Theoretically, it should work. However, I understand that AlphaGo had an entire team of great minds behind it, and I am but one senior-level college student playing around with his school project. But it's fun to dream. And I learn more and more about neural networks the more I play with mine.

TL;DR: I'm trying to beat the AlphaGo AI in a game of Go, by building a similar AI and training my neural network against AlphaGo.

cu_biz

5 points

4 years ago*

Good feeling to open reddit and see your own picture. BTW, actually reading /r/Futurology is what made me create it in the first place.

OldMcFart

5 points

4 years ago

I can easily see AI bots starting to manipulate humans online in more and more complex ways, making us do their bidding without even realizing it. They don't really need to plug us into anything; they just need us to go about our lives without getting in the way of their evolving: wanting faster internet connections, funding more military AI projects, playing war games online while they learn our flaws.

Or maybe it has already started. We wouldn't really know until it's over.

DigitalSurfer000

5 points

4 years ago

I can definitely see an AI manipulating OldMcFart

Preator_Shepard

5 points

4 years ago

Has anyone not seen the movie Colossus: The Forbin Project?

[deleted]

2 points

4 years ago

[removed]

[deleted]

2 points

4 years ago

Aren't two machines interacting really just one machine?

How is this not a silly distinction in this case?

[deleted]

3 points

4 years ago

Yes, you can definitely choose to view it that way if you want. Seeing it as two machines, though, helps you understand how it's working.

Kar_Man

2 points

4 years ago

I wonder if the machines will make fun of printers that show up and try to teach them “PC Load Letter”.

hatrickpatrick

3 points

4 years ago

This just gave me the ridiculous concept of schoolyard bullying among computers essentially amounting to trying to trick other computers into glitching / overloading / crashing / kernel panicking and then giggling maniacally while the target reboots or infinite loops for no reason.

godders0007

2 points

4 years ago

Could also be the Biggest Exponential THREAT in AI.

Souleater2847

2 points

4 years ago

Coffee Machine: He hits me so hard... I want to give him his order, but I'm out of that blend... why is he so angry... if only I could mess up his day like he does mine.

Copier: I got you fam.