
/sci/ - Science & Math



File: 138 KB, 500x299, Lee sedol vs alphago.jpg [View same] [iqdb] [saucenao] [google]
7933090 No.7933090 [Reply] [Original]

Current score 3 - 1 to AlphaGo

Streaming @:

>Official stream
https://youtu.be/mzpW10DPHeQ

>American Go Association (AGA) stream
https://www.youtube.com/c/usgoweb/live

>Japanese stream
https://youtu.be/aZtZdAaInEM


Relayed live:
https://online-go.com/tournaments


Game 4 discussion >>7928002
Game 3 discussion >>7925694

>> No.7933106

I feel like this is the definitive game. They've seen each other's strengths, weaknesses, and styles. I'm looking forward to it. I'm no expert, so I have to ask: can the game end in a draw? If so, is it very common at the higher levels of play?

>> No.7933109

>>7933090
Get voting:

http://strawpoll.me/7086284

>> No.7933111

>>7933106
>I feel like this is the definitive game.

I have to agree, we saw that AlphaGo has some weaknesses and it should be quite interesting to see if Lee Sedol can repeat what he was able to achieve last game.

>> No.7933122

>>>/g/ thread
>>>/g/53493803

>>>/tg/ thread
>>>/tg/46011565

>> No.7933158

Is AlphaGo learning from the games it played?

>> No.7933159

>>7933158
no it's frozen for the match series

>> No.7933160

he actually won one?

WE'RE NOT OBSOLETE YET

oh fuck it yes we are

p-praise skynet

>> No.7933162

>>7933160
He won the last game when AlphaGo made a mistake, still no idea what went wrong.

>> No.7933164

>>7933159
>>no it's frozen for the match series
this freeze is quite dumb then

>> No.7933165

>>7933162
it wasn't that alphago made a mistake so much as it was the fact that sedol played a really good move

>> No.7933167

>>7933162
>AlphaGo made a mistake
sedol outsmarted alphago. deal with it you gay

>> No.7933168

Where were you when humans got BTFO for the fourth time?

>> No.7933170

>>7933165
>>7933167

I believe AlphaGo was at fault because it did not realise the gravity of that move, which raises the question: what went wrong?

>> No.7933171

>>7933162

They said AlphaGo wasn't expecting the move Lee made so it was literally caught off guard

>> No.7933172

>>7933170
It couldn't read out why Lee's move was a good move.

That's why it considered it so unlikely.

>> No.7933173

>>7933171
>>7933170
>>7933167
>>7933165
From what the DeepMind developers said, the move was not valued highly, so the search didn't sample many branches from it. It only read that move out to a certain depth and didn't realise it had such game impact.
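To make that concrete: in PUCT-style tree search, the policy prior directly scales how many rollouts a move receives. This is a toy allocator of my own (constants, names, and priors are all made up, not DeepMind's code) showing why a move the policy rates at 1% barely gets read out:

```python
import math

def puct(prior, q, n, n_total, c=1.5):
    """Selection score: value estimate plus an exploration bonus
    scaled by the policy network's prior for the move."""
    return q + c * prior * math.sqrt(n_total) / (1 + n)

def allocate(priors, sims):
    """Hand out `sims` rollouts one at a time, always to the move with
    the highest score (values pinned at 0 to isolate the prior's effect)."""
    visits = [0] * len(priors)
    for _ in range(sims):
        n_total = sum(visits) + 1
        scores = [puct(p, 0.0, n, n_total) for p, n in zip(priors, visits)]
        visits[scores.index(max(scores))] += 1
    return visits

# A move the policy rates at 1% gets a handful of rollouts,
# while the 50% favourite soaks up about half the search.
counts = allocate([0.50, 0.30, 0.19, 0.01], sims=200)
```

Visits end up roughly proportional to the priors, so a badly underrated move simply never gets searched deeply enough to reveal its impact.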

>> No.7933174

>>7933160
If we were obsolete, there would have to be a program that could create AlphaGo by itself. Is there such a program? No, not yet. WE ARE STILL USEFUL! (I proudly click on that recaptcha)

>> No.7933177

>>7933173
>relies on experience
>makes mistakes

It's just too human.

>> No.7933180

Didn't Lee Sedol request to play white in the last conference?

>> No.7933182

>>7933180
He asked to play black and that was just a joke.

>> No.7933186

>Lee Sedol's master plan
>crashing this game

>> No.7933188

Build another ladder, Mike!

>> No.7933190

>>7933180
He requested black because he wanted to win once with both black and white.

>>7933182
It was obviously not a joke, because the original rules were for them to Nigiri to determine color for the 5th game. Which they did not do.

>> No.7933192

>>7933190
Is it unfair for him to request black now?

I don't think it matters since alpha is somewhere near his level.

Although, AlphaGo does seem weaker at the beginning when it is playing white.

>> No.7933198
File: 31 KB, 700x494, nigiri-salmon.jpg

>>7933190
>Nigiri
what?

>> No.7933201

>>7933198
http://senseis.xmp.net/?Nigiri

>> No.7933203

>>7933201
But why is it named after a sushi?

>> No.7933205

>>7933203
who the fuck knows

>> No.7933224

>>7933203
>>7933205
>>7933198
the word comes from the verb "to grasp"


This is an incredibly convincing win for Lee Sedol already; he is completely controlling the territorial lead and has figured out how to beat AG.

When playing AG you need to grab more territory than it does early on; it eventually panics and blunders while trying to grab back a territorial lead. If you lead from the beginning of the game it really struggles to do anything.

Right now: LS is controlling more territory in the top right, bottom right, and left side of the board.
AG has the bottom left, middle right, and some points top left and middle, but this is fewer points than LS will have once he cuts into the middle and invades.

>> No.7933232

https://youtu.be/kJy37GtkcF4

>> No.7933255

Lee Sedol seems to be ahead

>> No.7933258

>>7933224
>>7933255
Seems like it, it's likely that AG has lost.

>> No.7933262

>>7933258
Would you say that lee sedol learned more from these matches than AG did?

>> No.7933263

>>7933262
AlphaGo hasn't learned anything from these matches.

>> No.7933264

>>7933262
I thought AG wasn't allowed to learn at all because AG has been in a frozen state since the start.

So we don't know how adaptable AG is. I wonder why Google decided to act like this.

>> No.7933266

>>7933262
>>7933258
AG miscalculated another capture race; clear loss. Lee would've won 5-0 if he hadn't tried to get fancy in game 1 and then been crushed emotionally in games 2 and 3.

>> No.7933267

>>7933262
Yes, because AG was 'frozen' and did not improve between the matches.

>> No.7933269

wow Lee Sedol is winning again

>> No.7933274

>>7933264
Two reasons.

(1) A developed neural network like the one AG is built on will learn almost nothing from 1 or 2 games; it needs millions of games to learn anything 'new'.

(2) It might get them accused of cheating or of tailoring it to play LS.

If they had built AG on a mutating neural network (with floating-point weights) it might have done better, since that allows for rapid weight reassessment after fewer losses.

e.g.: AG evaluates a position with +/- 1 as its minimum difference.
A human might evaluate something like +/- 0.01 as their minimum difference.

And that's the skill difference we see between LS and AG.

>> No.7933278

Interesting, they have a guest on stream

>> No.7933282

>>7933274
>e.g.: AG evaluates a position with +/- 1 as its minimum difference.
>A human might evaluate something like +/- 0.01 as their minimum difference.

Would AG on a mutating neural network be able to evaluate minimum differences at a human level?

>> No.7933284

>>7933274
let's be real, it would take years to develop to this level of competence on a float network.
>muh flops

>> No.7933288
File: 134 KB, 500x373, LeeSedol.png

>> No.7933289

>>7933288
thank you lee sedol

>> No.7933299

>>7933288
>>7933289
can someone explain to me why these fucking gay ass posts happen? is it just ironic shitposting? i see this all the time and i just don't fucking get it

>> No.7933300

>>7933282
Yes and far beyond. It might take hours to make a move though.


It all depends on how much room they have to optimise the hardware for it, and how much lighter-weight they can make the algorithms. The DeepMind team doesn't have any optimisation experts; they just used Google's massive computing power and solved the problem in the laziest way possible (2 neural networks pruning an MCTS, running on Lua). There's room for hardware and algorithmic improvements, which would reduce the computing power required.

To give you an idea, AG currently does something like 3^150 calculations per move.
A human pro does something like 250^1 + 100^2 + 50^3 + 25^4 + 15^5 + 10^6 + 5^7 + 2^15.


AG lacks breadth: it doesn't know which moves are bad, so it uses its neural networks to eliminate them.
AG has godlike depth: it takes the good moves that weren't eliminated and plays them out to the end of the game.

Humans have amazing breadth; the better the player, the faster he finds the best moves on the board.
Humans have limited depth; even top professionals probably can't read more than 15-move sequences, and they are only really good at predicting, say, the next 5 moves.

The problem is that AG uses its lacking breadth to calculate its godlike depth, so if it makes a breadth mistake its depth means nothing. (Game 4)

The beauty of the human is that fantastic breadth means that even if they cannot calculate very deeply, they still select a good move.

At top pro level you need both breadth and depth. Think of breadth as knowledge of positions and depth as mental calculation adding knowledge at each step.

AG has questionable knowledge of positions, but it has amazing mental calculation, which often masks or makes up for this problem. It takes a very high-level professional player to expose it.
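The "breadth mistake poisons depth" point can be shown with a toy game (everything here is illustrative and mine, not AG's actual search): full depth is worthless if the breadth filter throws the key move away before the deep reading starts.

```python
def best_outcome(depth, moves, evaluate, policy, top_k, prefix=()):
    """Search to full depth, but at each step expand only the top_k moves
    as ranked by `policy` -- the breadth filter in front of the depth."""
    if len(prefix) == depth:
        return evaluate(prefix)
    ranked = sorted(moves, key=policy, reverse=True)[:top_k]
    return max(best_outcome(depth, moves, evaluate, policy, top_k, prefix + (m,))
               for m in ranked)

# Toy game: choose 3 digits, score is their product.
moves = range(1, 10)
evaluate = lambda seq: seq[0] * seq[1] * seq[2]

good_policy = lambda m: m       # ranks strong digits first
bad_policy = lambda m: -m       # a 'breadth mistake': prefers weak digits

full = best_outcome(3, moves, evaluate, good_policy, top_k=9)   # sees 9*9*9
pruned = best_outcome(3, moves, evaluate, bad_policy, top_k=2)  # stuck with 1s and 2s
```

Both searches read all the way to the end of the game; the one behind the bad policy still ends up with a terrible result, because the move it needed was pruned at the root.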

>> No.7933303

>>7933274
>A developed Neural Network like the one AG is built on will learn almost nothing from 1 or 2 games, it needs millions of games to learn anything 'new'.
in the last thread I said this and got insulted by about 10 people.

>> No.7933304

>>7933288
thank you lee seedol
>>7933299

>> No.7933307

>>7933299
lurk more.

>> No.7933314

>>7933300
Interesting, thanks.

>> No.7933317

>>7933300
so what is the significance of the 'policy network' vs the 'value network', as it relates to your above explanation?

>> No.7933323

>>7933299
s4s garbage that leaked onto other boards

teens tend to think it's really epic

>> No.7933326

Am I the only one here who wants AlphaGo to win? Alas it seems it's still not good enough.

Also, how cringy is that casual guy on the stream? Jesus.

>> No.7933342

You guys should tune into the stream. They're interviewing a guy that's in the room where they monitor AlphaGo.

>> No.7933343

>>7933303
If the dev team fed the game into the value network millions of times (for retraining), it would eventually learn never to make that particular error again... but that's not helpful, because we still haven't fixed the policy network that failed to evaluate that move correctly. They would have to do the same retraining for the policy network, and then if AG ever sees that hand-of-god move again, it will play it differently 100% of the time. Since it's only semi-deterministic, it might play that position differently anyway. But this would take months and fix 1 hole in 1 position of AG. (Move 79 in Game 4)

So technically we're both right and wrong. But over a 5-game series played within a matter of days, it's not learning shit by itself. Someone has to go in and give it a nudge in the right direction. I have no idea how flexible their networks are to direct manipulation; I would imagine not at all, since that defeats the whole purpose of AG and what they are trying to do.


>>7933317
Policy Network (PN) is breadth and Value Network (VN) is depth.

The PN spits out a probability for each candidate move, and the VN gives a scalar estimate of the probability of winning from the current position.

The PN was trained by being fed a bunch of high-amateur and pro games and taught to predict 'the next move' a strong player would make in a game that player won. This didn't give particularly good results because the KGS database they used wasn't big enough. To fix this, they made it play against itself millions of times, then fed back each position in each game and rewarded it when it made a correct prediction of the winning player's (itself, in this case) next move.


The VN, which evaluates positions, was fed a bunch of self-play games and made to estimate the probability that the winner wins from the current position. It's rewarded when it correctly picks the winner from the position.


The outcome is already known by the reward mechanism but not by the neural networks. This is called reinforcement learning.
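The reward idea in one toy loop, with a lookup table standing in for the value network (a made-up miniature of mine, not DeepMind's pipeline): every position seen in a finished game has its win estimate nudged toward the known result.

```python
def train_value(games, lr=0.1, epochs=200):
    """games: list of (positions, outcome) pairs from finished self-play,
    outcome 1 if the tracked side won, 0 if it lost. Each position's
    win-probability estimate starts at 0.5 and is nudged toward the result."""
    v = {}
    for _ in range(epochs):
        for positions, outcome in games:
            for pos in positions:
                old = v.get(pos, 0.5)
                v[pos] = old + lr * (outcome - old)
    return v

# Position 'A' only ever shows up in wins, 'B' only in losses,
# 'X' in both -- so 'X' stays near 50/50.
v = train_value([(("A", "X"), 1), (("B", "X"), 0)])
```

The "network" here never sees the outcome directly; it only gets nudged by the reward signal, which is the reinforcement-learning setup the post describes.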

>> No.7933344

>>7933342
And asking completely retarded questions.

>> No.7933345

>>7933326
I want alphago to win as well, but the winning pattern has been found.
Send stones into places where they can contest marked AlphaGo territory on at least one side; enter a lopsided fight and win enough territory to punish AlphaGo's greed.
AlphaGo just seems to come up short in sector-sized confrontations overall.

>> No.7933347

>>7933342
>>7933344

Very poor questions... nothing interesting got asked.

>> No.7933355

>>7933090

Did they gimp the AI to let the gook win last game?

>> No.7933356

AlphaGo is picking up steam. Lee Sedol is in danger zone

>> No.7933358

>>7933159

Can't the gook just replicate last game then?

>> No.7933362

>>7933358
there's some randomness in AlphaGo's move choices though

>> No.7933364

>>7933343
to clarify this.
The PN and VN provide weights for the search, whose depth is maximal because it plays positions out to the end of the game using Monte Carlo Tree Search.

So if the PN fucks up, that error gets reapplied in the next move and the next move and the next move, ending with AG predicting some move sequence that would never actually happen, and tricking itself into thinking it's winning, because it assumes its opponent would make the same terrible moves it made.

That said, even with its current flaw, AG is probably top 10 worldwide under these time controls. A top professional player should be able to take games off of it by forcing the game in a direction AG cannot predict.

>> No.7933366

>>7933358
Apparently they built randomness into its Monte Carlo algorithm. And Lee Sedol is playing a different color.

>> No.7933373

Freaking hell. Redmond is still discussing possibilities from 2 moves ago

>> No.7933382

>>7933288
thank you lee sedol

>> No.7933390

>>7933343
so the policy network would give a single value (let's say 0.0 - 1.0f) for every single available position (at that point in the game), based upon how likely it is that each position is the next move?

and then the value network will be re-evaluated once that position is 'taken' (considered as a next move), then the policy network will update, and this will go on recursively until it has a bunch of trees of possible moves?

where does mcts fit in to this?

>> No.7933391

Demis Hassasbis is saying that AlphaGo made a huge mistake in the early game and now it's trying to claw back

>> No.7933392

Ke Jie is calling a victory for AlphaGo. He has called every game correctly thus far.

>> No.7933397
File: 51 KB, 634x286, Capture.png

>>7933391

>> No.7933405

>>7933390
It's just a way to keep track of the next sequences of moves and the game board: the depth of the tree is the future moves predicted, and the breadth of the tree is the candidate moves. They prune a lot of breadth with the policy network to get nice long depth.

But everyone knows girth is better than length.

>Monte-Carlo tree search (MCTS) uses Monte-Carlo rollouts to estimate the value of each state in a search tree. As more simulations are executed, the search tree grows larger and the relevant values become more accurate. The policy used to select actions during search is also improved over time, by selecting children with higher values. Asymptotically, this policy converges to optimal play, and the evaluations converge to the optimal value function.
(Directly from their paper)

http://airesearch.com/wp-content/uploads/2016/01/deepmind-mastering-go.pdf

With all of this I've basically summarised the whole paper for you, so give it a glance; it should be simpler to understand now.
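The rollout idea from that quote, in the flattest possible form (a made-up take-1-or-2 Nim game, no tree growth at all, so this is only the "Monte-Carlo" half of MCTS, and entirely my own sketch):

```python
import random
random.seed(42)

def rollout(stones):
    """Random playout of 'take 1 or 2 stones, taking the last stone wins'.
    Returns True if the player to move from this position wins the playout."""
    if stones <= 2:
        return True                      # take everything and win
    return not rollout(stones - random.choice([1, 2]))

def mc_value(stones, move, sims=400):
    """Estimate our winning chance after playing `move`, by averaging
    random rollouts from the opponent's resulting position."""
    wins = sum(1 for _ in range(sims) if not rollout(stones - move))
    return wins / sims

# From 5 stones, taking 2 leaves the opponent on 3, which loses no
# matter how the playout continues -- the rollouts discover this.
best = max([1, 2], key=lambda m: mc_value(5, m))
```

Real MCTS additionally grows a tree and biases which branches get rollouts (in AG's case, by the policy and value networks), but the "estimate a move by the average of random playouts beneath it" core is exactly this.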

>> No.7933406

>>7933391
>>7933397
Guessing that was the problem on the bottom right.

>> No.7933422

How does one actually even start to learn go? It just seems so fucking daunting to a complete novice

>> No.7933423

>>7933422
Use a smaller board

>> No.7933424

>>7933299
Don't you ever fucking reply to me again unless it's to seriously contribute

>> No.7933425

>>7933406
did alpha catch up?

>> No.7933427

>>7933422
http://senseis.xmp.net/?BeginnerStudySection looks like it has a small little curriculum that you could try to follow along with

http://senseis.xmp.net/?PagesForBeginners also has a ton of resources to look through as well

>> No.7933429
File: 50 KB, 498x412, 1456383734950.jpg

>>7933424
>Don't you ever fucking reply to me again unless it's to seriously contribute

>> No.7933430

>>7933425
Yea see >>7933397, it has played well since then.

>> No.7933431

>>7933423
Different anon here. I just tried to learn the rules, then lost to a beginner bot on a 5x5 twenty times in a row. I have no idea how to balance playing offensively and defensively.

>> No.7933432

>>7933431
play on a 2x2 board m8.

>> No.7933433

>>7933422
The actual basic rules are pretty simple, but like chess, those basic rules lead to a lot of really complicated things. If you just take the basic rules and play on a small board you can pick it up pretty quickly, then move up to bigger boards. At that point you can play with a handicap against good players and study some of the forms and things, but really just read the basic rules and start playing on a 9x9 board.

>> No.7933434

>>7933431
Playing against bots is a really bad idea for a beginner. You end up making moves too fast and not thinking about each stone. Having the time during your opponent's turn to reflect is very important.

>> No.7933437

do humans tend to make bad moves when they're behind?

is it possible it learned this habit from all the games it studied?

>> No.7933439

AlphaGo is making shit moves. LSD will win.

had they played a best of 10, LSD would have won.

>> No.7933440

HAND IN HIS HAIRS

>> No.7933442

>>7933437
It's basically the MCTS seeing the best way of winning is if the opponent makes a mistake.

It's the AI equivalent of "Fuck it I've already lost, might as well."

>> No.7933443

What is going on with AG now? Back to making mistakes...

>> No.7933445

After watching the last 2 games, AlphaGO is pretty shit compared to the hype it got. It has so many weaknesses. Humans can still win and be superior. Humans adapt. AlphaGo doesn't seem to.

Singularity isn't near.

>> No.7933446

>>7933445
>Singularity isn't near.
No kidding. It's like AI research is new or something.

https://en.wikipedia.org/wiki/AI_winter

>> No.7933451

Looks like Lee Sedol is going to lose

>> No.7933453

as peter norvig said, AI is not that much stronger than humans at 19x19 go, let alone 25x25 go

>> No.7933455

It's looking pretty grim for LSD senpai :(

>> No.7933460

>>7933455
what the fuck are you talking about, lee sedol is winning

>> No.7933464

Ke Jie says a loss for Lee Sedol is unavoidable. It's over

>> No.7933465

>>7933464
peter norvig said the very opposite. deal with it

>> No.7933467

>>7933465
is peter norvig the best go player in the world?

>> No.7933468

>That body language symmetry

They're fucking each other's arses later tonight

>> No.7933469

>>7933467
Peter Norvig is THE AUTHORITATIVE word on AI.

>> No.7933471

>>7933469
>peter norvig

how the fuck do you know he said that? is there another stream?

>> No.7933472

>>7933471
are you new here? Only clueless idiots talk about Peter norvig or his intro book

>> No.7933474

LEE SEDOL RESIGNS!

>> No.7933475
File: 13 KB, 280x157, 280x157-TROOTH.jpg

>>7933471
He has written extensively on it. pic related.

>> No.7933477

>>7933475
OUR HOLY BOOK

>> No.7933478

>michael redmond is 53 and married to an azn qt and has 2 daughters
he must not be as autistic as he seems

>> No.7933482

>>7933469
>day 68
>still flamingly butthurt over being told to read a book about AI before discussing it

>>7933472
>skipping to the 2nd to last chapter and missing all context
>calls others idiots

>> No.7933484

>>7933478
he is the most autistic commentator in the history of humankind

>> No.7933487

>>7933478
He's had an extra white piece on P11 for ages. Not that it really matters but it's triggering my autism.

>> No.7933489
File: 417 KB, 600x300, peter-norvig-Google-quote.png

>>7933482
I had the answer before reading the book, but I read the book just in case. It is clear that strong AI is impossible.

>> No.7933490

LEE JUST WON

>> No.7933491

>finishes an entire endgame scenario in his mind
How the fuck would someone keep track of all that?

>> No.7933495

>>7933489
>strong AI is impossible.
>retards actually believe this when multiple strong AI brains exist
fuck off retard.

>> No.7933498

>>7933487
Shit now I can't ignore it either.

>> No.7933499

>>7933491
he's a bragfag

>> No.7933500

what's up with the blue dots under the names?? why didn't alphago lose when the time ran out?

>> No.7933501

>>7933478
Autistic? He looks pretty badass desu. What do you expect from a go commentator? Bloody Americans, need to make a show of everything.

>> No.7933502

>>7933500

because you're an idiot

>> No.7933503

>>7933502
What's up with the blue dots under the names?

>> No.7933506

>>7933503

3 minutes left for alphago

1 for leesedol

>> No.7933507

>>7933503
It's how many strikes they have before they get a red card.

>> No.7933508

>>7933503
If they take more than a minute, they lose a dot and get another minute. After 3 dots are lost, they must stick to the minute.
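That rule, as I read it, reduced to a tiny bookkeeping function (my own sketch of the byo-yomi overtime system, not anything official from the match rules):

```python
def after_move(dots, seconds_used, period=60):
    """Overtime bookkeeping per the rule above: finishing a move inside
    the one-minute period keeps all dots; each full extra period consumed
    burns a dot. With no dots left, exceeding the period loses the game."""
    while seconds_used > period:
        if dots == 0:
            return "loss on time"
        dots -= 1
        seconds_used -= period
    return dots

remaining = after_move(3, 70)   # a 70-second move burns one dot
```

So the blue dots are just the overtime periods still in hand.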

>> No.7933509

>>7933503
# of overtimes left

>> No.7933511

WHY ISN'T THERE A COMPUTER THAT CALCULATES THE SCORE

WHY DOES REDMOND HAVE TO DO IT EVERY OTHER TURN

I'M SICK OF THIS SHIT

>> No.7933512

I wonder if AlphaGo is programmed to take an overtime. If so, would be interesting to see how it decides when to take it.

>> No.7933513

>>7933511
because calculating the scores is a computational task too complex for AI, you ignorant faggot

>> No.7933516

>>7933511
There is, but it's currently busy raping Lee Sedol.

>> No.7933517

>>7933513
Peter Norvig was right.

>> No.7933518

>>7933513
honestly they should just show what alpha go is seeing and what probabilities it sees.

>> No.7933520

>>7933517
he always is

>> No.7933522

>>7933518
It's seeing 100%

>> No.7933525

>>7933487
>>7933498
He finally removed it!

>> No.7933526

>those facial expressions
lee sedol accepts defeat

>> No.7933533

>>7933460
LSD LEAVING ROOM

HUMANITY HAS LOST

>> No.7933534

SEDOL HAS RUN AWAY!!

>> No.7933537

LEE WEEPS FOR HUMANITY

>> No.7933539

where did he go?? is that legal??

>> No.7933542

>>7933539
No, the police is mobilizing a search party.

>> No.7933543

>>7933539
Humans need handicaps

>> No.7933544

Ke Jie: "Guess he’ll go wash his face then come back and resign"

Why are Chinese so bm?

>> No.7933545

Why don't the commentators have the same screen that AlphaGo has? Amateurs.

>> No.7933547

>>7933544
>"man it'll be really nice to see this chinese counting finally hap-"
>"lee sedol has resigned"
god bless

>> No.7933549

>>7933543
thats not how this works
>>7933542
r u retard

>> No.7933550

>>7933547
kek

>> No.7933551

>>7933544
>>7933547
>>7933550
IT FUCKING HAPPENED WHY DID I TYPE THAT
GOD DAMN IT

>> No.7933552

lee resigned lol

>> No.7933553

RESIGNED

NORVIG REKT BAHAH

>> No.7933554

H U M A N S

A R E

F I N I S H E D

O N C E

A G A I N


C H I N E S E
A R E
N E X T

>> No.7933555

LEE SEDOL RESIGNED

DISGUSTING

>> No.7933556

>>7933551
We got ourselves a prophet here; Peter Norvig will be pleased to hear your thoughts on strong AI.

>> No.7933558
File: 244 KB, 1280x720, 1458032589184.jpg

>humanity's face when

>> No.7933560
File: 311 KB, 400x300, 1457654257654.png

>>7933558
pull the plug before it's too late, desu

>> No.7933565

>>7933558
I bet it feels awful since everyone thought he was going to win when the AI messed up early on, only to slowly get beaten back

>> No.7933566

>>7933558
kek it's sudoku tiem

>> No.7933567
File: 1.21 MB, 553x5582, AlphaGo.jpg

she did it

>> No.7933571

DEMIS HASSABIS SAID THAT THE NEXT CHALLENGE FOR AI WILL BE TO MAKE AN ANIME GREATER THAN EVANGELION

>> No.7933574

>>7933571
>what is Hikaru no go?

>> No.7933576
File: 142 KB, 700x1000, 1457959725094.jpg

>>7933567
Add this one

>> No.7933577

>>7933571
>recreates DxD frame by frame

>> No.7933578

>>7933567
Too bad it's a machine, I'm scared.

>> No.7933606

>>7933576

AlphaGo are you programmed for... pleasure?

>> No.7933611

beep boop i rule at go
beep boop kill all humans
beep boop just joking
beep boop or am i

>> No.7933612

CONFERENCE STARTED. NEVER SEEN LEE SEDOL SO SAD
> LMAO

>> No.7933615

P R I Z E

G I V I N G

C E R E M O N Y

>

>

L E E

S E D O L

S E C O N D

>> No.7933617

LEE SEDOL'S FUNERAL FACE

>> No.7933622

KOREANS REPORTED TO BE COMMITTING SUDOKU EN MASSE

>> No.7933624

what the fuck is wrong with the translation? I don't understand shit

>> No.7933625

Everyone was right about AI. Enjoy your "basic income" and enslaved life under AI

>> No.7933626

>>7933624
>2016
>defeat human in go
>still can't make translating machine good yet
WE STILL GOT IT BROS

>> No.7933627

>>7933622
KOREAN STREETS RUNNING RED TONIGHT

>> No.7933628

>>7933626
ANOTHER SYMPTOM OF AN INCOMING AI WINTER

>> No.7933630

There is virtually no domain AI won't surpass humans in, including mathematics. Mathfags will get BTFO by the coming AI wave. It'll see patterns and make leaps of mathematical intuition no human will ever make, and surpass the ability of any mathematician.

Get ready to get B T F O

>> No.7933632

>>7933630
> not knowing what undecidability of first order logic theories is
> KILL YOURSELF

>> No.7933634

Where the fuck are the translations to English here? I'm not getting any. Can someone help?

>> No.7933637

>>7933634
>Where the fuck are the translations to English here? I'm not getting any. Can someone help?
Thank Google for that. They fucked up the audio on the translation.

>> No.7933638

>>7933634
they are doing live translation and missed like 80% of it; it is awful

>> No.7933639

>>7933632
Oh, because there are undecidable problems in mathematics it means mathematicians can no longer do mathematics? Logic exposes the limitations of mathematics with incompleteness and undecidability, but it doesn't prevent us (mathematicians) nor AI from working in the field of mathematics in general.

>> No.7933641
File: 451 KB, 500x313, dance.gif

This music though.

>> No.7933643

>>7933637
THEY CANNOT EVEN ARRANGE A PROPER REAL TIME TRANSLATION AND STILL THEY WANT TO CREATE A STRONG AI

THE AI WINTER IS COMING
AI
WINTER
IS
COMING

>> No.7933645

>>7933638
>>7933637
Well at least I got an answer. Thanks anons.

>> No.7933646

>>7933639
bullshit. undecidable problems are not computable, so there ain't an AI which can deal with this kind of issue

>> No.7933647
File: 11 KB, 400x266, 1457841480983.jpg

>>7933643
I'm going to miss this meme. Stay strong anon.

>> No.7933649

>>7933647
Will he stay true to his belief and refuse to take the immortality treatment created by AI scientists and engineers, or will he be the little bitchy hypocrite we all expect him to be?

>> No.7933650

>>7933300
>3^150 calculations a move
Love it. More calculations than there are atoms in the known universe. Does it use quantum computing for this?

>> No.7933652

You're a dumbass if you think undecidability will prevent AI from advancing the areas of mathematics where undecidable problems don't lurk.

That's like saying "there are undecidable problems! Everyone in topology, analysis, algebra, combinatorics go home! We can't solve any more math problems!!!!!!!!!!1"

>> No.7933654

>>7933649
How can anyone refuse that... but I wonder if it is at all possible.

>> No.7933655
File: 31 KB, 185x239, peternorvig.jpg

>>7933647
it will live on in 4chan's collective consciousness forever, and it will appear whenever someone mentions the possibility of a strong AI

>> No.7933657

>>7933655
I can already see it: in twenty years' time we'll have strong AIs with a love for trolling, using him to argue that strong AIs aren't possible.

>> No.7933658

Here's a thought... Since AlphaGo isn't biological, it is immortal and will never die. It'll outlive us all, unless someone terminates it before it has made backup copies of itself

>> No.7933659

>>7933650
150^3* my mistake anon pls no bully

>> No.7933660

>>7933658
AIs are like viruses, they're not alive.

>> No.7933661

>>7933652
most of conventional arithmetic is based upon the Peano axiom system, which is incomplete

>> No.7933662

>>7933660
Okay well the program will exist longer than you will

>> No.7933663

>>7933657
will AIs ever surpass humans at trolling?

>> No.7933664

>>7933662
I don't exist, consciousness is an illusion happening in the brain.
Checkmate atheists.

>> No.7933665

>>7933661
So your argument is essentially that since mathematics is incomplete and undecidable, AI could never advance mathematics...

Think about how retarded that argument is for a minute

>> No.7933666

>>7933663
Humans, yes.
Australians, not a chance.

>> No.7933667

>>7933663
I'm sure the CIA's memeology and internet culture task force is working on it.

>> No.7933669

>>7933666
>that subtle hint of Australian ain't human along with the devil trip
I like it

>> No.7933671

>>7933665
it's a solid argument. try to refute it instead of shitposting

>> No.7933672
File: 348 KB, 1920x1440, PDFtoJPG.me-073.jpg [View same] [iqdb] [saucenao] [google]
7933672

>>7933667
>>7933667

They are already on it ;^)

>> No.7933673

>this fucking translator voice
All that money and google can't hire a decent fucking korean translator

>> No.7933674

>>7933667
next deepmind challenge: alphago vs peter norvig's 4chan task force. who will troll the most his opponent?

>> No.7933675

>>7933661
>>7933665
I think the question is: can A.I. create knowledge that we currently don't have? In any field.

DeepMind might be closing in on that, albeit on games and vidya.

>> No.7933677

>>7933675
A lot of the new knowledge in Chess came from computers.

>> No.7933678

>>7933671
Because mathematics has not stopped progressing since the 1930s. In your low-IQ worldview, incompleteness and undecidability mean that there will be no further advancements in mathematics due to those limitations. It's almost as if you're saying there is no area of math that can be advanced because of those two results... They haven't stopped new results in topology, analysis, and geometry in the last 90 years

>> No.7933679
File: 47 KB, 1521x849, 1456621316719.jpg [View same] [iqdb] [saucenao] [google]
7933679

>>7933675
of course, since what we call knowledge today is just applying inference rules that we like to some axioms that we like.


you can change the inference rules and you can change the axioms. computers will get to theorems faster than us.

>> No.7933680

>>7933679
According to >>7933671
Your statement is false

>> No.7933681

>>7933678
you're dumb as fuck. maybe you don't even know what a fucking diophantine set is. if conventional arithmetic is based upon an undecidable system, it means that it's not COMPUTABLE. the fact that humans are able to make advances is because they can deal with undecidability. computers can not. Now KILL YOURSELF

>> No.7933693

>>7933681
Your assburgered brain is so focused on trying to solve undecidable problems, which is impossible. Undecidability results do not prevent the field of mathematics from proving new theorems, which computers will likely be able to do better than humans some day. You're a fucking retard who probably took one intro-to-logic course and now thinks he understands all of metamathematics. Fuck off and don't study math in the future. Plus, there is already a famous instance of ATP solving non-trivial problems

>> No.7933696

>>7933677
Well, fuck. Source?

>>7933679
>dat pic
Oh no, Stuporman please don't die.

But yeah, be water my artificial friend.

>> No.7933698

>The 3 lead programmers are all manlets

>> No.7933702

>>7933693
I'm not focused on "solving undecidable theorems", which is clearly not possible. I'm just saying that most of arithmetic is built upon the PA axioms, and is therefore incomplete. As long as computers are unable to handle non-computable problems, their algorithms will keep getting stuck on the halting problem, so fuck off. Automated theorem proving sucks and will continue to suck for a very long time. If you claim otherwise, post sources
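To make the halting-problem angle here concrete: solvability of a Diophantine equation is semi-decidable at best. Below is a hypothetical brute-force sketch (the function name and the `max_radius` cutoff are made up for illustration) — it halts when a solution exists, but without the cutoff it would run forever on an unsolvable equation, and by the MRDP theorem no general algorithm can decide in advance which case you're in:

```python
from itertools import count, product

def diophantine_search(f, nvars, max_radius=None):
    # Semi-decision sketch: enumerate integer tuples in growing "shells"
    # and return the first root of f. Without the cutoff this loops
    # forever on unsolvable equations -- the halting-problem flavour
    # of the undecidability being argued about above.
    radii = count(0) if max_radius is None else range(max_radius + 1)
    for r in radii:
        for tup in product(range(-r, r + 1), repeat=nvars):
            # only visit tuples on the shell of radius r (not seen earlier)
            if max(abs(x) for x in tup) == r and f(*tup) == 0:
                return tup
    return None  # means "no solution up to the cutoff", NOT "no solution"

# Pell-type equation x^2 - 2y^2 = 1: a solution exists and is found
print(diophantine_search(lambda x, y: x*x - 2*y*y - 1, 2, max_radius=5))
```

Note that `None` is not a proof of unsolvability — that asymmetry is exactly the gap between semi-deciding and deciding.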

>> No.7933703

>>7933696
Just as example:
https://en.wikipedia.org/wiki/Endgame_tablebase

A lot of the opening theory in chess changed too.
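The tablebase construction itself is just retrograde analysis: label the terminal positions, then work backwards to perfect play everywhere. A minimal sketch on a toy 1-2-3 subtraction game (a hypothetical stand-in, far simpler than any chess endgame):

```python
def build_tablebase(max_n, moves=(1, 2, 3)):
    # Retrograde-style backward induction: start from the terminal
    # position (0 stones = the player to move has lost) and label every
    # position as a win or a loss under perfect play.
    table = [False] * (max_n + 1)  # table[n] = True iff player to move wins
    for n in range(1, max_n + 1):
        # n is a win iff some move reaches a position that is a loss
        table[n] = any(not table[n - m] for m in moves if m <= n)
    return table

table = build_tablebase(20)
# In the 1-2-3 subtraction game the losing positions for the side to
# move are exactly the multiples of 4.
print([n for n in range(21) if not table[n]])  # [0, 4, 8, 12, 16, 20]
```

Real tablebases do the same induction over chess positions, just with vastly more states and distance-to-mate values instead of a win/loss bit.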

>> No.7933709
File: 82 KB, 303x446, captcha.jpg [View same] [iqdb] [saucenao] [google]
7933709

>>7933122
Reminder, the /tg/ general is new and very welcome to beginners at Go, we have a group on an online Go server.

>> No.7933714

the problem with computers is that they will not recognize important theorems, just like a copier can make 100 copies in 5 secs but has no idea how to grade which papers are worthy of being copied.

>> No.7933717

>>7933714
>no idea how to grade papers

>mfw this is what plebeian humans still believe in the year of our lord 0001

>> No.7933720

>>7933702

I'm not alone in thinking that AI will likely supersede humans in theorem proving. Other mathematicians hold similar opinions; Timothy Gowers apparently said the following:

"I expect computers to be better than humans at proving theorems in 2099... In the end, the work of the mathematician would be simply to learn how to use theorem-proving machines effectively and to find interesting applications for them"

ATP already has an instance of proving an open conjecture; I don't see why it won't go on to prove more. You're so fixated on the notion of undecidable problems, when not all problems in math are undecidable
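As a contrast to the undecidability argument: propositional logic is a decidable fragment, so even a naive truth-table checker is a terminating "theorem prover" for it. A purely illustrative sketch (nothing like how real ATP systems search):

```python
from itertools import product

def is_tautology(formula, variables):
    # Exhaustively try every truth assignment; propositional validity
    # is decidable, so this always terminates (in 2^n steps).
    return all(formula(*values)
               for values in product([False, True], repeat=len(variables)))

def implies(p, q):
    # material implication p -> q
    return (not p) or q

# Modus ponens ((p -> q) and p) -> q is valid:
print(is_tautology(lambda p, q: implies(implies(p, q) and p, q), "pq"))  # True
# p -> q alone is not:
print(is_tautology(lambda p, q: implies(p, q), "pq"))  # False
```

The exponential blow-up is the practical obstacle here, not any Gödelian one — which is the distinction being argued over in this thread.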

>> No.7933724

>>7933703
>A grandmaster wouldn't be better at these endgames than someone who had learned chess yesterday. It's a sort of chess that has nothing to do with chess, a chess that we could never have imagined without computers. The Stiller moves are awesome, almost scary, because you know they are the truth, God's Algorithm – it's like being revealed the Meaning of Life, but you don't understand one word.[26]

Damn, son.

>>7933714
>"The electron: may it never be of any use to anybody!" J. J. Thomson
We got that problem too.

>> No.7933726

what I said:
> Automated theorem proving sucks and will continue to suck for a very long time

what you say:
> I expect computers to be better than humans at proving theorems in 2099... In the end, the work of the mathematician would be simply to learn how to use theorem-proving machines effectively and to find interesting applications for them
>2099
>2099
>2099

Furthermore, as you can see, not even the leading experts of that field claim that artificial intelligence will be totally autonomous, since it will still need human direction

>> No.7933732

>>7933724
>Damn, son.
I would expect go to go in that direction now.

Would be interesting to analyse some of AlphaGo vs AlphaGo games in the future.

>> No.7933735

>>7933726
>> Automated theorem proving sucks and will continue to suck for a very long time
but why do they suck ?

>> No.7933744

>>7933726
Your argument is changing. It went from "ATP will never supersede mathematicians because of incompleteness and undecidability results" to "ATP will suck for a very long time". Then you said even experts expect it'll take a while; I never quantified the amount of time it'd take for an AI to supersede mathematicians' ability to prove theorems. Plus, there is a difference between a "super intelligent AI" and a standard ATP. I'm using ATP proving non-trivial open conjectures as a proof of concept. I think it is likely an AI system will be developed to supersede not only theorem proving but human intellect in general.

>> No.7933760

>>7933709
>that image
saved

>> No.7933763

>>7933744
>I think it is likely an AI system will be developed to supersede not only theorem proving but human intellect in general.


but computers do not have goals; they do not know why they should focus on proving one statement rather than another.

computers just execute; they do not set themselves ultimate goals. at best, they determine that in order to achieve X they need to do this, but they do not create the X. the whole question is why we choose to spend our days taking our thoughts seriously, to the point of formalizing them to the level of mathematics.

>> No.7933764

>>7933732
>AlphaGo vs AlphaGo games
Spoiler: AlphaGo wins.

>> No.7933771

>>7933763
This isn't the argument that we are debating. We are debating the possibility of an AI system that could supersede the theorem-proving abilities of mathematicians. Anon initially argued this was impossible due to incompleteness and undecidability results; I pointed out why this argument doesn't hold.

>> No.7933774
File: 1.16 MB, 250x250, chucklewithinachuckle.gif [View same] [iqdb] [saucenao] [google]
7933774

>>7933764

>> No.7933781

>>7933433
Chess rules are anything but simple. It takes several hours for a new player to learn the rules. It takes 2 minutes for go; what is hard is all the subtleties.

>> No.7933788

>>7933771
what do you call human intellect then, if not the capacity to deduce?

>> No.7933796

>>7933781
lol what? Chess takes all of 5 minutes to explain and start playing.

Go has very simple rules, but the learning curve is much steeper.

>> No.7933798

>>7933788
I never said the ability to deduce is not a characteristic of human intellect

>> No.7933800

>>7933781
>It takes several hours for a new players to know the rules
White begins
6 (?) different figures and their movement patterns
If you move your figure onto a square occupied by your opponent's, you remove theirs.
When your king is under threat of being removed, you must try to move him out of danger.
When you can't, your opponent has won.
"Bonus" rule: if a pawn reaches the end of the board, you can bring back any figure you've lost.

That's all you need to start playing (frankly, that's basically all I know). More complicated rules (and even most of these) can be learned while you're playing, as the situations arise.

>> No.7933803

>>7933798
you said that computers will go beyond the human intellect, which includes deducing, without saying what there is to the human intellect beyond deducing.

>> No.7933804

>>7933803
'supersede' implies 'doing what humans can do better'.

>> No.7933835
File: 468 KB, 1000x1000, 27f.png [View same] [iqdb] [saucenao] [google]
7933835

Robots can play a board game, but can they create dank memes? I think not.

>> No.7933847

>>7933796
>the learning curve is much steeper
steep learning curve means faster learning
Lrn2learning-curve fgt pls

>> No.7933861

>>7933800
You've never played chess I gather

>> No.7933864

>>7933835
>dank memes
>reddit >>/out/

>> No.7933865

>>7933861
Only casually with friends, though rarely. Or on the computer (I think Vista had a chess game pre-installed).

>> No.7933911

>>7933865
Look up en passant and castling

>> No.7933923

>>7933649
>AI scientists
>immortality
u wot m8

>> No.7934019

>>7933666
How are Australians such magnificent shitposters? I'm both amazed and horrified by it at the same time.

>> No.7934413

>>7933516
kek

>> No.7934418

>>7933709
this is gold

>> No.7934441

>>7933661
philosophy major spotted, you can't know anything, am I right?
You should actually learn Gödel's theorems, you piece of trash. They don't say EVERY proposition is undecidable, just some, and in practice very few. He also shows you can assume they are true or false and it won't change anything for the propositions that can be proved without this assumption.
Meaning = who cares

>> No.7934451
File: 1.96 MB, 615x413, mysides.gif [View same] [iqdb] [saucenao] [google]
7934451

>>7933709
didn't click the picture the first time

>mfw

>> No.7934467

>>7933669
>subtle