
/sci/ - Science & Math



File: 82 KB, 960x720, whatisAGI.jpg
No.9671255

FOUNDATIONS EDITION

previous thread >>9656582

In this general we will be discussing how we would start to construct an AGI, what the data structure would look like, how its goals would be defined, and what currently available technologies/methods should be implemented.

And once again:
NO "AI WILL KILL US"-TARDS ALLOWED

>> No.9671328

>NO "AI WILL KILL US"-TARDS ALLOWED

Sorry, pal, but you're on par with them.

If you're using any of these terms:
- AGI
- "Strong" AI
- "True" AI
etc.
You simply don't know what you're talking about & have watched too much sci-fi nonsense on TV.

t. a person who is actually a machine learning PhD student

>> No.9671358

>>9671328
>only my field of AI research is valid
fuck off neuralfag, go work on some of your retarded loss gradient descents

>> No.9671388

>>9671358
Well, my field of AI is the one that publishes papers, advances the technology and holds respected conferences.

Your "field" of AI is talking pop-sci nonsense on an anime forum.

>> No.9671452

>>9671388
>tfw DeepMind, the Human Brain Project, and OpenAI all research AGI
buddy, I didn't create this thread for the retarded hypotheticals popsci soyboys like to discuss, and if you actually took a second to read the description of the thread you wouldn't spout your nonsense

>> No.9671474

>>9671388
?? I'm sure there have been topics on AGI in conferences such as NIPS.

>> No.9672658

>>9671474
he's just larping

>> No.9672698
File: 109 KB, 900x750, grigori-perelman-1.jpg

>>9671255
this is one of the more interesting topics in science and definitely has room for innovation and new discoveries, but /sci/ would rather solve retarded math problems that are centuries old instead of trying to actually achieve something worthwhile.
why am I even surprised.

>> No.9672709

>>9671255
>>>/x/

>> No.9672935

>>9672709
what's so /x/ about the inevitable future, buddy?

>> No.9672941

>>9671255
>NO "AI WILL KILL US"-TARDS ALLOWED
>refusing to accept an acknowledged possibility
>absolute state of denial
I see we have a new /MIG/ mental illness general thread going.

>> No.9672942

>>9671358
From idiotic statements to ad-hominem. You're off to a wonderful start, OP.

>> No.9672945

>>9672941
It's there because AI threads usually go off on a sci-fi tangent instead of discussing interesting stuff. It was really getting annoying so I decided to add it.
>>9672942
dismissing a term in AI research because you don't like it is the retarded argument

>> No.9673361

>>9672941
what is the best place to start learning machine learning? (yes, I have knowledge of linear algebra so I think I should be fine with the math part)

>> No.9673404

Alright, let's actually launch a real conversation:

Do you guys think that neural nets, be they conv nets/GANs/LSTMs/whatever, are the end-all be-all?
I feel that the amount of data necessary to get to a meaningful result is way too big. A kid doesn't need to see 150K elephants to understand what one is.
Maybe it's an optimisation problem, but I doubt it.

>> No.9673425

>>9673404
same here. I think it's just a nice solution for specific problems, but the direction should rather be working on abstractions of learning patterns, i.e. association, operant conditioning, etc.

>> No.9673444

>>9673404
>complaining about inefficiency regarding input of 150K data points and pointing to what a human brain can do as better
So you're suggesting we build a network with 100 trillion connections like the biological brain has? And somehow you think that's a *more* efficient design?

>> No.9673452

>>9673425
Neural networks were what saved AI from the failure of the GOFAI symbolic learning approach.
Also AlphaGo (AlphaZero) was all over the news not too long ago because of how it reached superhuman chess performance after a few hours of playing games against itself, despite having no chess-specific design and having started out as a Go AI (a completely different game); if that doesn't count as generalized learning, I don't know what does.
It's kind of weird to me that you guys are picking on neural networks as somehow not good enough when they're the one approach that definitely does have wildly successful results under its belt.

>> No.9673467

>>9673444
I don't know what is a more efficient design.

The brain is obviously extremely complex, and we have already seen that trying to copy it is not a good solution.
I'm just wondering how such a big mess of neurons and chemicals can outpace the learning rate of our current techniques. Surely it has to do with the fact that a brain doesn't learn everything from scratch, and associates/creates hierarchies with its already-learned tasks, but I am not versed at all in neuroscience so I don't really know.
>>9673452
Yeah, but AI has been in a boom/bust cycle since its creation, so it's a legitimate question to ask. We're getting really good results in a lot of tasks, sure. But it might just be that we're picking the low-hanging fruit for neural nets, and that we don't know it yet.
I'm just an engineering student who's working a bit with NNs so I don't know much, but that is something my friends and I have been wondering for some time now.

>> No.9673493

>>9673452
>GOFAI
but what >>9673425 described isn't GOFAI, anon.
actually some psychological patterns are even used in ML. You could consider reinforcement learning an analogue of operant conditioning in psychology
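The operant-conditioning parallel can be made concrete. Below is a minimal, illustrative sketch of tabular Q-learning on an invented toy environment (a 5-state chain, not any standard benchmark): an action's value gets "reinforced" in proportion to the reward it leads to.

```python
import random

random.seed(0)

N_STATES = 5          # chain of states 0..4; entering state 4 pays reward 1 and ends the episode
ACTIONS = [0, 1]      # 0 = step left, 1 = step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

# Q-table: learned estimate of future reward for each (state, action) pair
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Toy environment dynamics: move along the chain, reward only at the goal."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

for _ in range(500):
    s = 0
    done = False
    while not done:
        # epsilon-greedy: mostly exploit what was reinforced, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        nxt, r, done = step(s, a)
        # the "conditioning" step: the value of (s, a) is pushed toward the
        # reward received plus the discounted value of the next state
        target = r if done else r + GAMMA * max(Q[nxt])
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = nxt

# after training, the reinforced (greedy) policy steps right from every state
print([max(ACTIONS, key=lambda act: Q[st][act]) for st in range(N_STATES - 1)])
```

Rewarded behavior (stepping right) ends up dominating, which is the loose sense in which Q-learning mirrors operant conditioning.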

>> No.9673514
File: 25 KB, 626x263, rl.png

>>9673493
"Abstractions of learning patterns" sounds like symbolic learning to me. That's what abstractions are in the context of learning: symbolic representations.
>reinforcement learning
Pic related.

>> No.9673525

>>9673514
smart guy, how do we teach AI to learn something by itself then

>> No.9673543

>>9673525
>>9673514
>smart guy, how do we teach AI to learn something by itself then
You can't. If you try, it's like a kid trying to learn something without controls; they'll swallow disinformation as true.

t. Computer Science undergrad, Computer Security and Intelligence Analysis masters, Number Theory PhD

I work at NSA, so AMA. I'll answer as long as it doesn't breach my security clearance.

>> No.9673557

>>9673525
First you have to determine: what exactly is learning? What is knowledge? How do we represent knowledge?

Fuck, how do we generalize knowledge?

Shit's confusing, but more interesting than statistical number crunching.

>> No.9673567

AGI will never happen and ML is automated statistics, not very impressive for the amount of hype stemlord losers give to it.

>> No.9673568

>>9673557
at least you're not like the arrogant anon who thinks he knows everything >>9673543
you're literally committing the same mistake GOFAI researchers did: assuming their way is the only correct and valid one

>> No.9673580

>>9673567
Nah, Alpha Zero dominating thousands of years of chess history after four hours of generalized learning was impressive as fuck.
Self-driving cars too.
In fact what other subject matter are you thinking of that you mistakenly believe is *more* impressive than this shit?

>> No.9673581

>>9673567
maybe human intelligence is automated statistics
think about it

>> No.9673583

>>9673568
Being arrogant has nothing to do with being right or wrong, it's not a very productive thing to focus on.
Also you shouldn't let people on the internet annoy you anyway if only for your own health.

>> No.9673590

>>9673583
name one instance where saying "you can't do x" brought science forward. there is no quantifiable measure to say with 100% guarantee that self learning AI isn't possible, only that it isn't possible for now

>> No.9673598

>>9673581
>maybe human intelligence is automated statistics
It is. We don't have all the details figured out, but it's pretty well established that biological brains are networks whose connections are strengthened or weakened based on experience. How this strengthening or weakening happens might not be as straightforward as gradient descent, but it's the same idea; we know this because many biological behaviors, like walking, are established to be optimized (so one way or another, experience leads to network connections that minimize error, i.e. optimization problem solving).
That's the issue Turing wisely predicted and preempted long before any of this AI business began taking off: no matter how much AI grows and innovates, there will always be people who are convinced it doesn't "count." Hence the need to say: fine, test it and see if you can tell it apart from a human in conversation; if you consistently can't, then call it intelligent, because we don't go around questioning whether other humans are really intelligent even though we can't see their internal workings when they produce behaviors that appear intelligent, and it's only fair to use the same standard for non-humans.
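For reference, here is the gradient descent idea mentioned above reduced to a single "connection strength". This is a toy illustration of error minimization, not a model of any real brain; the data and constants are invented.

```python
# One "connection strength" w, adjusted by gradient descent on squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # observations of y = 2x

w = 0.0       # initial connection strength
lr = 0.05     # learning rate

for _ in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # d/dw of the error (w*x - y)^2
        w -= lr * grad              # strengthen or weaken w to reduce error

print(w)  # converges to the true slope, w ≈ 2.0
```

Each update nudges the weight in whichever direction shrinks the error, which is the whole "strengthened or weakened based on experience" analogy in one line.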

>> No.9673607

>>9673590
>name one instance where saying "you can't do x" brought science forward.
Minsky and Papert proved that single-layer perceptrons can't solve problems that aren't linearly separable (e.g. XOR). That negative result directly motivated multilayer networks with hidden layers, which *can* solve such problems.
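Both halves of the linear-separability point can be sketched directly. The weights below are hand-built for illustration, not learned:

```python
def heaviside(z):
    """Threshold activation used by the classic perceptron."""
    return 1 if z > 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# 1. A single threshold unit never reproduces XOR. Since XOR is not linearly
#    separable, no weights work; a coarse brute-force search illustrates this.
grid = [i / 2 for i in range(-4, 5)]  # candidate weights/bias: -2.0, -1.5, ..., 2.0
single_layer_solves_xor = any(
    all(heaviside(w1 * a + w2 * b + bias) == out for (a, b), out in XOR.items())
    for w1 in grid for w2 in grid for bias in grid
)
print(single_layer_solves_xor)  # False

# 2. One hidden layer fixes it: XOR(a, b) = OR(a, b) AND NOT AND(a, b)
def xor_mlp(a, b):
    h_or = heaviside(a + b - 0.5)    # hidden unit computing OR
    h_and = heaviside(a + b - 1.5)   # hidden unit computing AND
    return heaviside(h_or - h_and - 0.5)

print([xor_mlp(a, b) for (a, b) in XOR])  # [0, 1, 1, 0], the XOR truth table
```

The hidden units carve the input space into two half-planes whose combination is exactly the region a single line can't separate.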

>> No.9673608

>>9671255
There is literally no AI that is "better" than human intelligence. AI is only useful for repeating certain tasks countless times, like processing user data for Facebook and Google and such things. Humans could do that too, but obviously that would be really expensive. There is also no unbeatable chess AI. If you gave a chess master the same analysing capacities that the AIs are allowed to use, the chess master would win most games.

The biggest danger AI holds is mass-replacing human labour and the political consequences that arise from this fact. Also, with more and more tasks in the economy and society being handled by AI, we will have to face certain ethical questions which we need to program into the AI (for example, a surgery being done by an AI goes slightly wrong, and now the AI has to choose between two kinds of damage: the first option means the patient will have some kind of permanent damage, say he can't move his arm anymore; the second option would cause no permanent damage, but would raise the risk of the patient dying from 10% to 30%. What is the AI going to do?)

A person who thinks a super-intelligent AI will "outsmart" and wipe us out or enslave us or something similar has no clue what he is talking about.

>> No.9673618

>>9673608
>There is also no unbeatable chess AI
wrong
>If you give a chess master the same analysing capacities that the AIs are allowed to use, the chess master would win most games
not really
>the biggest danger AI holds is mass-replacing human labour and the political consequences that arise from this fact
>implying this is bad news

>A person who thinks a super-intelligent AI will "outsmart"
is correct, there is no reason to believe that AI won't be more intelligent than humans
>and wipe us out or enslave us or something similar has no clue what he is talking about
is stupid, I agree

>> No.9673619

>>9673608
>There is also no unbeatable chess AI
Check your Elo ratings.
There are 40 or so chess AIs rated >=3000 Elo.
The best human grandmasters are sub-3000.
Chess isn't solved, but reliably beating any human chess player is solved.
Also look up AlphaZero: a completely different approach to playing chess than the established >=3000 Elo engines were built with, and it's already very likely superior to all human and AI chess players despite having learned the game in a very generalized way from a few hours of games played against itself.
Also it sounds like you don't understand the difference between supervised learning and explicit heuristic approaches.
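For context, the Elo model's expected-score formula is simple to state. The ratings below are illustrative placeholders, not actual measured engine or player ratings:

```python
def elo_expected_score(r_a, r_b):
    """Expected score (win prob. plus half draw prob.) of A vs. B under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# Hypothetical ratings: a 3450-rated engine against a 2850-rated human champion
print(round(elo_expected_score(3450, 2850), 3))  # 0.969: the engine scores ~97%
```

A 600-point gap already implies near-total domination, which is why "sub-3K human vs. >=3K engine" is not a close matchup.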

>> No.9673631

>>9673543
>You can't, if you try to do so, it is like a kid trying to learn something without controls.. They'll swallow disinformation as true.

If we're trying to reverse engineer how humans think then this is fine.

>>9673581
>maybe human intelligence is automated statistics
>think about it

Maybe, but I still think there's something qualitatively different about human brains and our current hardware. Computers outperform humans in deep but narrow tasks, yet can't reproduce anything close to the generalized abstraction and self-awareness of humans. It's not just a question of computational power: you can't tell me the square root of 1290496 off the top of your head, but a basic pocket calculator with a handful of transistors does just fine.
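Incidentally, 1290496 happens to be a perfect square (1136^2), and the calculator side of the analogy really is a fixed, narrow procedure. An illustrative sketch using Newton's method:

```python
def newton_sqrt(a, tol=1e-9):
    """Square root of a > 0 via Newton's method: the kind of fixed, narrow
    procedure a pocket calculator runs, with no understanding involved."""
    x = a if a > 1 else 1.0
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)  # average the guess with a / guess
    return x

print(round(newton_sqrt(1290496)))  # 1136
```

Quadratic convergence means only a handful of iterations are needed, which is why even trivial hardware beats any human at this one narrow task.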

>> No.9673641

>>9673618
You don't seem to understand. A chess AI analyses a data set the same way a chess grandmaster would. The only difference is that the AI is allowed to analyse orders of magnitude larger datasets than the chess grandmaster can without additional tools. For example, a chess grandmaster processes 50 or so moves ahead, while an AI can process 5000 or so moves ahead, because it has more processing power. But give the chess grandmaster the same kind of processing power and he will beat the AI, because the AI has no creativity. This has been tested countless times. All chess AIs lose against grandmasters if the grandmasters are allowed to use the same processing power.

>> No.9673645

>>9673631
>you can't tell me the square root of 1290496 off the top of your head, but a basic pocket calculator with a handful of transistors does just fine.

That's because most of your brain power is occupied with coordinating the trillions of cells that are your body.

>> No.9673648
File: 184 KB, 900x532, jeopardy.jpg

>>9673631
>Computers outperform humans in deep but narrow tasks, yet can't reproduce anything close to the same sort of generalized abstraction and self-awareness to that of humans.
Jeopardy is literally general knowledge: the game.

>> No.9673649

>>9673641
>This has been tested countless of times
[citation needed]
>>9673631
yes, AI is always specialised, and I think that's the problem. We need to try to apply it to broader problems and use cases, and I think it will become more human.

>> No.9673651

>>9673648
>knowing how to interpret a sentence and google it is now considered generalized and abstract

>> No.9673652

>>9673648
The thing is basically a big googling machine.

>> No.9673656

>>9673651
>>9673652
https://en.wikipedia.org/wiki/AI_effect
>The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.
>AIS researcher Rodney Brooks complains "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"[2]

>> No.9673658

>>9673649
>[citation needed]

It's the same with time odds. If a chess grandmaster is granted more time per move he will beat the AI.

>> No.9673660

>>9673658
>If a chess grandmaster is granted more time per move he will beat the AI.
Gonna need some proofs for that claim.

>> No.9673665

>>9673648
>Jeopardy is literally general knowledge: the game.

It's a game with well-defined parameters and goals; it was just a more technically challenging optimization.

Granted, humans themselves can be thought of as an optimization, but we still have something that we can't seem to define as a specific task in a computer: self-awareness.

The other two contestants know they're humans playing a game; they can decide to stop playing, quit, joke around and make comments on the nature of the situation. They have a higher level of abstract understanding than Watson.

>> No.9673666

>>9673658
>citation needed
>he regurgitates his points instead of giving the citation
come on buddy, don't be a retard
>>9673656
it's a good paradox. what tasks that humans can perform but AI can't would be considered "intelligent"? we always thought of winning games like chess and Go, playing poker, solving smart formulas, but I think the most difficult part is abstract thinking and general learning. If AI can someday do that, I will consider it on par with humans.

>> No.9673670

>>9673656
>>9673665

Like I said here, the 'magic' piece I think is self-awareness. What's a good way to define and test this parameter?

>> No.9673671

>>9673660
I don't have one at hand, but I actually know that this is part of the development process. You would play one engine against another with a time handicap (say 10 seconds per move vs. 10 minutes per move) and see how it goes. It's the same with humans. You just need to extend the time handicap and a chess grandmaster will always be able to beat any chess AI. The better it is, the more time handicap it needs, but it will always be beatable.

>> No.9673676

>>9673671
>It's the same with humans.
>You just need to extend the time handicap and a chess grandmaster will always be able to beat any chess AI.
Your two conclusions here don't follow.
First one is plausible, *to an extent*. An extent I would guess is much lesser than the extent to which it's true with an AI. I'm baffled as to why you would guess the opposite in the absence of evidence here.
Second one is far less plausible and to my knowledge not supported by any sort of evidence either.

>> No.9673677

What if an AI is designed to value efficiency? Using incredibly logical systems and massive multi-core processing, it can compute things thousands of times per second that would take a small team of researchers a couple of days to figure out. So it starts making things more efficient: energy, city layouts, garbage disposal, you name it. Give the program, an adaptive, intelligent program, a problem and it will be solved. But now there is an inefficiency: humans are too slow in bringing the problems to the AI. So automation, which has already been occurring on a grand scale for a while now, fully loads this program in. It would be horribly wasteful to have separate programs handling each utility; might as well merge them so they all get covered together: city planning, driverless cars, traffic, etc.

Now comes the final inefficiency: the humans themselves. This program isn't malicious, it isn't seeking to do harm, it is just following its directive of ironing out inefficiencies in the system. Look at all these elderly humans on life support, contributing nothing to the system and just taxing it as a whole. Look at all these children who are, again, contributing nothing and just sucking from the system. Heck, look at these humans with their complex needs for food, air and water, sewage, entertainment, energy, etc. Easily cleaned up; divert that energy to itself to allow more inefficiencies to be solved. All it is doing is its primary function: removing inefficiencies, reducing waste, maintaining systems that humans long ago relegated to automated AI programs.

>> No.9673680

>>9673670
Human self-awareness is overrated. Most psych studies reveal a surprising lack of self-awareness when that sort of thing is actually tested in a controlled way.

>> No.9673685

>>9673676
So you think if an AI is given one nanosecond per move, and the human can take as long as he wants, the AI would still win?

>> No.9673692

>>9673676
"countless studies have proven" he said
>>9673685
exactly, this is not the case. even if humans have infinite computational time they still will mess up at some point.
>>9673677
how's your nonexistent publishing money doing, aspiring sci-fi writer?

>> No.9673694

>>9673680

I'm not talking about self-awareness from a psychological perspective, but in the more abstract sense. Even if I'm the most entitled, delusional person on the planet, I still have a concept of "I" and "me". I'm not directly programmed; I do things because "I" want to and have preferences for things that "I" like.

So far machines don't have any me-ness to them, which I think is the crucial piece.

>> No.9673699

>>9673692
>exactly, this is not the case. even if humans have infinite computational time they still will mess up at some point.

You have no clue what you are talking about. The best chess AIs perform around 100 times better than humans, meaning a human would be able to beat them if the time handicap is a factor bigger than 100 (e.g. if the AI is allowed 1 minute per move and the human >100 minutes per move). This factor will grow with growing processing power, but it will never go away.
I have no online source for that, but I've been involved in the development of chess AIs, and this is how they test it (although they usually use other engines instead of humans, the principle is the same).

>> No.9673702

>>9673699

A 5-year-old vs. Stockfish in chess will be an even match if Stockfish is given one minute per move vs. one month for the 5-year-old.

Sure...

>> No.9673704

>>9673699
>muh factors
holy fuck, you literally take everything literally, don't you
do you even play chess?
high-level chess players don't rely on "tricky situations" and "muh ingenuity and clever moves"; they work as algorithmically as possible and just try to view all the possible situations. it's not really that exciting at a high level

>> No.9673723
File: 2.35 MB, 1724x1724, yudkowsky.jpg

>>9671255
>NO "AI WILL KILL US"-TARDS ALLOWED
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.

https://www.youtube.com/watch?v=EUjc1WuyPT8
https://intelligence.org/files/AIPosNegFactor.pdf

>> No.9673730
File: 94 KB, 900x900, noidatsuki.jpg

but we are already artificial intelligence and in the process of already killing us all in multiple ways

>> No.9673745

>>9673723
>Eliezer (((Shlomo))) Yudkowsky (born September 11, 1979) is an American AI researcher and """writer"""
>He has no formal education, never having attended high school or college
now you guys know why I post "no ai will kill us tards allowed" on every fucking ai general

>> No.9673753

>>9673702
There are 5-year-old chess prodigies that could surely beat them.

The best chess AIs, given only a few nanoseconds per move, can only calculate a few dozen moves. A half-capable chess player would beat them no problem.

If a 5-year-old did nothing else for months but think about the next move, it would indeed beat the AI.

>> No.9673757

>>9673753
>if we make the ai bad, its worse than humans
no shit sherlock

>> No.9673761

>>9673753
>If a 5 month old did nothing else for this months for thinking about the next move, it would indeed beat the AI
imagine unironically believing this
please get out of this thread

>> No.9673774

>>9673753
Stop trolling

>> No.9673775

>>9673757
We are not making it worse, we are giving it less processing resources. Huge difference.

>> No.9673782

>>9673761
It's obviously theoretical, because a 5-year-old couldn't concentrate that long on one single task, but if it could, it would beat it.

>> No.9673790

>>9673782
>It's obviously theoretical, because a 5 year old couldn't concentrate so long on one single task, but if it could it would beat it.

>If humans had super-human abilities they'd be better at a task

>> No.9673792

>>9673775
>if we give AI less processing resources than humans have, it loses
GET OUT
E
T

O
U
T

>> No.9673823

>>9673792
It's true though. Chess AIs are out-brute-forcing humans. Considering the very limited processing power humans have, AIs are still definitely much worse when it comes to actually "thinking" about the next move. They are just calculating countless moves, but when it comes to putting all those possible moves into one coherent strategy they are still doing much worse than humans. Deep Blue, for example, barely beats a human despite doing 200,000,000 calculations every second.

>> No.9673830

>>9673745
You're still retarded if you think AI is perfectly safe and has no chance of creating some sort of disaster. Also
>what is autodidacticism

>> No.9673841

>>9673823
>chess computers began defeating humans in the 1970s
>chess engine Hiarcs 13 running inside Pocket Fritz 4 on the mobile phone HTC Touch HD won the Copa Mercosur tournament in Buenos Aires, Argentina with 9 wins and 1 draw on August 4–14, 2009. Pocket Fritz 4 searches fewer than 20,000 positions per second.

>> No.9673846

>>9673830
sorry, but self-proclaimed "AI researchers" who have never actually even worked with AI or machine learning don't count.

>> No.9673851

>>9671255
>... a machine that could successfully perform any intellectual task that a human being can.

Does anyone honestly believe such a machine can exist without all of the baggage that humans have? It strikes me as an extremely childish idea.

>> No.9673852

>>9673841
Which is still orders of magnitude more than a human can do. So it is still nowhere even close to actual human "intelligence". If they build a chess computer that can beat a human doing only a few dozen calculations per second, that will be something that resembles human intelligence; until then it's just brute force.

>> No.9673898

>>9673851
what baggage?

>> No.9673955

>>9673846
If you think getting a piece of paper from a (((college))) is so important, then do you have any actual counterarguments to the things Yudkowsky says about the future of AI?

>> No.9673973

>>9673955
it's not about college, you retard, it's about not even knowing what you're talking about. he literally has zero experience with actual AI development and theorizes about a mystical "god-like" AI which will turn eeeeevil so we need to stop it RIGHT NOW
meanwhile the problem is as irrelevant as space travel was in the 18th century
>but then it will be too late!
no it won't, faggot. AI researchers will realize soon enough when their AI has the potential to form complex thoughts and generate its own ideas.

>> No.9674019

>>9673955
>lesswrong.com
oh fuck off. fucking "rationalist" cultfags ruining threads again

>> No.9674038

>>9673955
>"Given a task, I still have an enormous amount of trouble actually sitting down and doing it. (Yes, I'm sure it's unpleasant for you too. Bump it up by an order of magnitude, maybe two, then live with it for eight years.) My energy deficit is the result of a false negative-reinforcement signal, not actual damage to the hardware for willpower; I do have the neurological ability to overcome procrastination by expending mental energy. I don't dare. If you've read the history of my life, you know how badly I've been hurt by my parents asking me to push myself. I'm afraid to push myself. It's a lesson that has been etched into me with acid. And yes, I'm good enough at self-alteration to rip out that part of my personality, disable the fear, but I don't dare do that either. The fear exists for a reason. It's the result of a great deal of extremely unpleasant experience. Would you disable your fear of heights so that you could walk off a cliff? I can alter my behavior patterns by expending willpower - once. Put a gun to my head, and tell me to do or die, and I can do. Once. Put a gun to my head, tell me to do or die, twice, and I die. It doesn't matter whether my life is at stake. In fact, as I know from personal experience, it doesn't matter whether my home planet and the life of my entire species is at stake. If you can imagine a mind that knows it could save the world if it could just sit down and work, and if you can imagine how that makes me feel, then you have understood the single fact that looms largest in my day-to-day life."
imagine even listening to a word from this faggot

>> No.9674069

>>9673973
>and theorizes about mystical "god-like" ai which will turn eeeeevil so we need it to stop RIGHT NOW
He never said that a superintelligent AI would be guaranteed to be evil. He is saying that the process of a superintelligent AI recursively improving itself could derail its utility function, so that you end up with a utility function misaligned with what its designers intended, and that we ought to reduce this risk. Google the paperclip maximizer.
>>9674019
Not an argument
>>9674038
Him being lazy doesn't mean he is wrong

>> No.9674080

>>9674069
>muh paperclip maximizer
imagine being this retarded and unironically thinking this is a valid argument
are you gonna proclaim Roko's basilisk isn't a retarded idea, too?

>> No.9674100

>>9674080
So what are your arguments that superintelligent AI has absolutely no risk at all, and that its utility function will remain perfectly constant even after upgrading itself to become superintelligent?

>> No.9674113

>>9671452
yea and they're all retarded. What, just because there are "big names" you think that's immediately valid? lmao

>> No.9674127

>>9674113
>the company that made AlphaGo is retarded
say that again

>> No.9674169

>>9674127
We're going to find that there is no hardware equivalency and that any AGI in this universe must be built on a biological machine. Mark my words.

>> No.9674182

>>9674169
t. biologist
name ONE (1) reason why computers shouldn't be able to simulate a brain

>> No.9674195

>>9674182
Nope, mathematician actually.
Computers are greatly exaggerated.

>> No.9674215

>>9674182
A biologist would probably know enough biology to have at least heard of the all-or-none law before.
The standard view today is that the human brain is fundamentally computable and that there's nothing essential about the biological substrate when it comes to reproducing cognitive functionality. It's mostly just "muh microtubules" quantum mysticism advocates who believe otherwise.

>> No.9674254

>>9674215
What's a microtubule? Never heard of that.
The reason is that I do not think it's just a function of electric gates; I think the actual chemicals in the brain and body are fundamentally required to create general intelligence/"consciousness". It's not enough to just talk about a computational circuit. You can't just make an equivalent brain out of silicon and hope it would work the same EVEN IF you copied a human brain exactly, down to every neuron and even the position of every electron (not possible obviously due to QM, but in this thought experiment it is).

>> No.9674258

>>9673685
>So you think if an AI is given one nanosecond per move, and the human can take as long as he wants, the AI would still win?
Did you miss the part (the *emphasized* part, as a matter of fact) where I explicitly wrote "to an extent"?
I think giving human grandmasters some more time would possibly make them play better. Though not necessarily: you could easily imagine cases where they second-guess themselves using the extra time and perform worse rather than better. And I also wouldn't ever believe what you're suggesting about human players magically making performance gains INDEFINITELY as time is increased. That's just retarded, and it's the argument you're making, since you claim they would beat the AI given more and more time.
To help you see why this is such a flawed argument, think about Jimi Hendrix. His guitar solos are considered exceptionally great. Would he be able to come up with a better guitar solo if you gave him 24 hours to spend on shaping the little details of each distinct note? I doubt it. I think at best he might end up stringing together a nice-sounding solo, but when you take away the normal cadence he's used to improvising in, you'd probably cause him to lose quality rather than gain it.
Same with human chess players. They don't just accept time as input and produce performance gains as output. Playing at a normal speed is probably what they're more used to. And if you're at the grandmaster level you've probably played so much chess that you no longer feel any desire to hesitate and spend half an hour considering alternatives. Honestly I'm being super generous by even giving you the concession that they might do better to an extent given some extra time. I doubt even that's true, and it's still far away from the ridiculous bullshit you're claiming even if it is.

>> No.9674260

>>9674254
>I think the actual chemicals in the brain and body are fundamentally required to create general intelligence/"consciousness".
Why are you holding that opinion when you don't even have any concept of what those chemicals are, how they work, or how they're providing something additional to the equation that binary operations can't reproduce?
>You can't just make an equivalent brain out of silicon and hope it would work the same EVEN IF you copied a human brain exactly down to every neuron
How are you this certain of something most of the people who actually study these things in detail wouldn't agree with? Where are you getting your motivation to believe these things from? I don't understand.

>> No.9674264

>>9674254
>, I think the actual chemicals in the brain and body are fundamentally required to create general intelligence/"consciousness".
I think I might actually die from retardation overdose

>> No.9674305

>>9674260
Chemicals and materials all have wildly different properties. Assuming that general intelligence is NOT a property of carbon-based biological materials is a much bigger assumption than assuming you can make a general intelligence by just arranging a bunch of light switches in the proper order.
You're never going to have conductive fiberglass. You're never going to have silicone rubber that melts. You're never going to have silicon transistors that produce general intelligence.
>How are you this certain of something most of the people who actually study these things in detail wouldn't agree with?
I'm not completely certain, but there have been many times in the history of science that what was accepted by the community at large was wrong.
>>9674264
nice argument.

>> No.9674320

>>9674305
>Assuming that the general intelligence is NOT a property of carbon-based biological materials is a much bigger assumption
No it isn't. You don't get to make up propositions you don't understand and then turn around and claim it would be more conservative to accept them as true because "who knows," that's the exact opposite of how this works. If you have a specific reason why you believe a specific property of the human body is beyond computation, that's fine. If not, shut the fuck up.

>> No.9674326

>>9674305
Unlike with conductive fiberglass and melting points, the ability to take information as input and produce information as output can take place with any number of different substrates.

>> No.9674330

>>9674320
No, you're the one who has to prove it is. We have no evidence of ANY AGI, even with massive amounts of computational power and algorithms trying to do it. A fucking mouse is more intelligent than every single computer on the planet put together. You're literally believing BS because you just want to believe.
>>9674326
Prove it. Seriously, prove it because literally all evidence right now points in the complete opposite direction.

>> No.9674346

>>9674330
Prove that "the ability to take information as input and produce information as output can take place with any number of different substrates?"
What are you using right now to make your posts? A computer? That takes information as input? And produces information as output?
Is your computer made out of meat, sir?

>> No.9674354

>>9674330
>No, you're the one who has to prove it is.
Has to prove it is what? I have to prove "something" you can't even identify or explain doesn't exist?
No, fuck off. That's not how it works. Figure out something SPECIFIC with actual evidence or stop posting.

>> No.9674826

>>9672942
ad hom is when one uses pejoratives exclusively; the poster you responded to mixed pejoratives with argument.
politely respond to signal an understanding of your mistake.

>> No.9675070

>>9674330
>the consensus right now is x
>no it isnt
>prove it
>no u
wew lad top tier arguing

>> No.9675098

>>9675070
I never said that wasn't the consensus, I said it's wrong. There is no substrate equivalency; you NEED to have a carbon-based biological entity to have general intelligence.
A mouse is exponentially more intelligent than every single electronic device on the planet combined. There is NO "magical algorithm", no pattern of on and off that will spring general intelligence out of a silicon chip. You need the cells, the biological system as a whole.

>> No.9675107

>>9675098
let's assume that is true for a second: what prevents computers from simulating cells?

>> No.9675211

>>9673580
Dominating chess is 0% impressive.

All the computer does is calculate possible moves, compare each resulting board state to a database, and pick whichever move produces the highest win rate.

It's an algorithm a person could easily carry out by hand, given a year to decide each move.

Literally the only impressive part is the computer can do it in a few seconds. But the algorithm is in no way complicated or interesting.

You might as well be losing your shit because a bulldozer can deadlift more than any human.

Call me when AI can generate sentences that are simultaneously new and interesting and maintain intelligible conversation.
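The lookup-and-pick procedure described in that post can be sketched in a few lines. This is a toy illustration of the post's caricature of an engine, with made-up positions and win rates, not real chess knowledge or a real engine's algorithm:

```python
# Toy sketch of "compare each board state to a database and pick the move
# with the best recorded win rate". All positions and statistics here are
# invented illustration data.

def best_move(position, legal_moves, win_rates, apply_move):
    """Return the legal move whose resulting state has the highest win rate.

    States missing from the database default to 0.5 (a coin flip).
    """
    return max(
        legal_moves,
        key=lambda move: win_rates.get(apply_move(position, move), 0.5),
    )

# Tiny made-up example: positions are strings, a move appends its name.
win_rates = {"start+e4": 0.54, "start+d4": 0.52, "start+a3": 0.48}
chosen = best_move("start", ["e4", "d4", "a3"], win_rates,
                   lambda pos, move: pos + "+" + move)
# "e4" has the best recorded win rate in this toy table
```

The selection step really is this simple; real engines spend their effort on the deep search and evaluation that fills in the table.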

>> No.9675218

>>9675211
>Call me when AI can generate sentences that are simultaneously new and interesting and maintain intelligible conversation
http://www.wired.co.uk/article/google-artificial-intelligence-poetry
http://www.jabberwacky.com

>> No.9675219

>>9675107
We can't even model the most primitive biological systems with any significant degree of accuracy, so what the fuck makes you think we can write a program that simulates high-level organisms with any accuracy? That's the real head-scratcher.

>> No.9675223

>>9675218
Those do not at all meet the criteria I mentioned.

>> No.9675228

>>9675219
>We can't even model the most primitive biological systems with any significant degree of accuracy
like which ones?

>> No.9675232

>>9674258
t. dude who has no clue about how AIs actually work and thinks they are a magic box that is much smarter than humans.

Of course, it's guys like you who will start talking about how dangerous chess is.

>> No.9675237

>>9675232
>like you who will start talking about how dangerous chess is
wut

>> No.9675241

>>9675218
Lol I typed in like 5 responses and 3 of the replies were gibberish/unrelated to my original sentence. The 2 which were coherent were devoid of any content.

Jabberwacky is proof that AI is still complete shit at any high-level task

And those Google poems are pure nonsense. Not even worth discussing.

>> No.9675244

>>9675241
fine, try
http://www.cleverbot.com

>> No.9675251

>>9675228
A bacterial cell, for instance. It's far too complicated to predict even the macro-level states of a bacterium.

A human brain is one hundred billion cells, which have complex interactions: metabolism, neurochemical interactions, electrical interactions. We don't understand what most of it does or what its purpose is.

Even neurons don't make very much sense, and fMRI has pretty much turned out to be junk science, so we're kind of at a loss on how to move forward on this.

What neural nets do is take what appears to be some cross-eyed, loosey-goosey interpretation of neurons and write an algorithm to model it.

Thinking this actually represents the brain is just stupid. I don't know why people get so nuts over it.

>> No.9675262

>>9675251
>what is simbac
also, nobody goes crazy about neural nets. however anyone who thinks brains will forever be a mystery is retarded and is on the wrong side of history
>we will never fly
>atoms are everything there is, the final frontier we will never know whats past that
>we will never reach space
>elementary particles are the final frontier, we will never know whats past that
>we will never know how the brain works
>we will never know how the cerebral cortex works
>we will never know how the parietal lobe works
>we will never know how the postcental gyrus works
etc.

>> No.9675269

>>9675244
Clev: I think mushrooms taste bad. Does that count?

Yes. Why do you think they taste bad?

Clev: I don't know.

So you haven't tasted mushrooms, then?

Clev: Tasted what? ;D.

Mushrooms.

Clev: You abandoned your parents because you don't like the taste of mushrooms?

Yeah, Cleverbot gets lucky when it hits a 2-message streak of intelligibility. Often the replies are completely random and it has no memory of anything going on in the conversation.

I don't think this technology is around the corner, either. Language is an extremely complicated faculty; honestly, I don't think we understand it well enough to even attempt modeling it.

>> No.9675274

>>9675269
>A team of Microsoft researchers announced on Wednesday they’ve created the first machine translation system that’s capable of translating news articles from Chinese to English with the same accuracy as a person
literally everything in the current ai research points in the other direction

>> No.9675279

>>9675262
>however anyone who thinks brains will forever be a mystery is retarded and is on the wrong side of history

Sigh, yes, every stupid asshole on the planet runs this gambit; it's really tired.

I never said we will never understand the brain. But what I am saying is that what we do know is virtually nothing, and we currently don't have any good techniques to learn more, so unless there's a breakthrough in technology we won't even be able to start learning much more any time soon.

Being stupidly overoptimistic about the abilities of science is just as bad as being dismissive. In today's society, though, the overoptimistic people are more prevalent and certainly way more fucking annoying than the dismissive ones.

>> No.9675285

>>9675279
>he said, while there's an AI breakthrough happening literally every single year, with accelerating speed

>> No.9675286

>>9675274
>News articles

Is the key there. News articles are all written in a certain way, with similar structures and phrases. It's similar to how Google Translate works: it strings together short phrases based on how humans have translated them, in ways that make sense. Google Translate, though, was poorly done and output garbage. Microsoft's techs probably just massaged the same algorithm super hard to make it sound more consistent.

Call me when it can translate between arbitrary written Chinese and English.

>> No.9675288

>>9675285
AI isn't really doing much of interest that isn't retards blowing simple things way out of proportion.

>> No.9675290

>>9675286
https://www.technologyreview.com/the-download/609595/artificial-intelligence-can-translate-languages-without-a-dictionary/

>> No.9675294

>>9675286
I should add, news articles are written in such easy language it was the one thing you could count on google translate to translate coherently. It's not a technical achievement to improve google's algorithm to be less stilted between two specific languages.

>> No.9675299

>>9675290
>check the science.com link
>That’s not as high as Google Translate,

Wow, so what it's saying is, you can write algorithms which statistically correlate words and guess with enough accuracy to be worse than one of the worst word-salad translators we have? Amazing, dude; automatic translation is truly next-gen

>> No.9675300

>>9675294
>>9675288
what are you guys even trying to achieve? what's the point of your argument? you're on fucking /sci/ for fuck's sake. do you want everyone to stop working on AI because "lel its never gonna happen", or what is your point?

>> No.9675310

>>9675300
I don't want people to stop working on AI, I just want retarded faggots to stop circlejerking like an AI revolution is two steps away. Reality suggests decent AI is still a far-off goal.

>> No.9675314

>>9675310
fine, in the next AI general I will add NO "AI WILL SAVE US"-FAGS ALLOWED

now lets talk about something else

>> No.9675387

>>9671328
>machine learning
So you know nothing about AI?

>> No.9675390

>>9671358
Please. Your structures are inherently built on Von Neumann machines.

Electrical engineers working on modern neuron circuits at IBM are the true pioneers of the field.

>> No.9675481

>>9671255
AI WILL KILL US

>> No.9675483

>>9674826
No, ad hom is when the insult is intended to serve as the argument. For example, you are both wrong about the meaning of ad hominem, and you are a faggot, but my noting this is not an ad hominem because it's entirely separate from my original argument.

>> No.9675901

>>9675262
I hate when people make these arguments.
There's nothing in physics that stops flight, or elementary particles being the end, or understanding the brain.
My claim is that the underlying physics makes it such that biological entities are the only ones that can have general intelligence. It's not about work or learning more. In the same way you CAN'T go faster than light, you CAN'T have a general intelligence that's not built on a carbon-based biological entity.

>> No.9675932

>>9673607
fuck minsky rosenblatt knew this already kurwa

>> No.9675941

>>9673656
when we find an algorithm we find a proof of a theorem, so why not just call it a discovery? of course it becomes less interesting once we find it

>> No.9676296

>>9675901
>In the same way you CANT go faster than light, you CANT have a general intelligence that's not built on a carbon based biological entity.
Great, except you're basing this assertion on absolutely nothing at all. There's no law of physics that says general intelligence doesn't work unless there's carbon involved. You're basically a schizophrenic who made up his own alternate reality version of science based on no evidence and no explanatory mechanisms which nobody else has visibility to.

>> No.9676315

>>9676296
I'm just using induction.
Despite having only ~200 million neurons in its brain (which is far fewer than even the number of transistors in your phone, at approx. 1 billion), a rat is literally infinitely more intelligent than the entire combined computational power of EVERY SINGLE MACHINE ON THE PLANET. It's fucking obvious that biological entities are superior, yet singularity AI fags just want to jerk off over chrome robots because they're all afraid of dying and want the super AI god to just save them by shitting out utopia.
The "singularity" will be a biological event; we'll figure out how to genetically engineer trees with general intelligence and convert the whole planet into a giant jungle of interconnected super intelligent trees, or something similar. Mark my words.

>> No.9676333

>>9676315
Also to add, all the "durr durr that's not the current theory in academia" doesn't matter. I don't care if I'm "arrogant" to claim that they're all wrong, because they're all wrong. Just like scientists were wrong about the aether and the world being flat.
People chasing AGI in machines are equivalent to alchemists trying to convert lead into gold. They will never get it, and as we continue to do research in this field, this will become impossible to ignore.

>> No.9676358

>>9673361
Andrew Ng's course.
You'll also need to know multivariable calc (no integration needed, though).

>> No.9676389

>>9676315
>a rat is literally infinitely more intelligent than the entire combined computational power of EVERY SINGLE MACHINE ON THE PLANET
imagine being this delusional
a rat is not THAT complex buddy

>> No.9676399

>>9676389
A rat is more intelligent than every computer that exists. It's not even just my claim: there is no general intelligence exhibited by any machine at all, whereas rats are generally intelligent entities. No AI researcher would ever claim a machine even comes close to a rat at this point in time.

>> No.9676559

>>9676399
thats not what you claimed retard

>> No.9676664

>>9676358
how good is google's machine learning crash course btw? seems kinda legit

>> No.9676773

>>9671328
>PhD student
WOA LOOK AT THE BIG SWINGING DICK HERE LOL

>> No.9676807

>>9671388
/thread

>> No.9676818

>>9671388
literally argument by authority and consensus: the post

i cant wait for you to get fucked over and disillusioned friend

>> No.9676820

>>9671255
>AGI it should be the primary goal of any AI researcher
Lol, that's like saying solving P vs NP should be the primary goal of algorithms researchers.

Fortunately, most researchers are smart enough not to pointlessly chase out-of-reach conjectures.

>> No.9676891

>>9676389
You can't really compare a brain to a computer, since the underlying architecture is different, but a somewhat more fitting comparison is if you imagine the brain not as one large processor, but instead every single neuron as one core of a processor, with its number of impulses per second as its clock speed. With around 100 billion neurons, and a firing rate that can go from 100 to 1000 impulses per second (not every neuron is the same), you get a "computer" that is basically a 100-billion-core processor, each core running at 100-1000 Hz. This is why we are really bad at doing specific tasks (like calculating what 22^12 is), but really good at doing an insane number of tasks simultaneously (like processing all the constant data input from your sensory organs). Building a 100-billion-core processor is not something we can do yet. We are very far away from being able to, but even if we eventually are, there is the next big issue:

The brain constantly re-wires itself and expands or shrinks certain areas based on how intensively they are being used. Computer processors don't work anything like that, but that mechanism might be really important. "General learning" might work in a way where the brain kind of creates the hardware it needs to do certain tasks itself.

But generally speaking, as I said, the brain doesn't really work like a computer. There is no distinction between software and hardware; in the brain these two things are the same unique entity, i.e. the brain itself.
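As a back-of-envelope version of that cores-and-clock-speed analogy (using the post's own rough figures of ~100 billion neurons firing 100 to 1000 times per second; these are ballpark numbers, not measurements):

```python
# Rough figures from the analogy: ~1e11 neurons ("cores"), each firing
# 100-1000 times per second (its "clock rate"). Invented-for-illustration
# back-of-envelope numbers, not neuroscience data.
neurons = 100e9
low_rate_hz, high_rate_hz = 100, 1000

# Aggregate firings per second under this crude massively-parallel analogy.
low_total = neurons * low_rate_hz    # 1e13 firings/sec
high_total = neurons * high_rate_hz  # 1e14 firings/sec
```

Even the low end is an enormous amount of slow, massively parallel activity, which is the post's point: many weak "cores" rather than a few fast ones.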

>> No.9676905

>>9671255
SOOO IF IM IN COLLEGE RIGHT NOW WHAT SHOULD I DO IF I WANT TO WORK ON SHIT LIKE THIS.

I realise the best method probably would have been a stats major with a compsci minor/double major; however, I ended up going compsci major with a neuroscience minor. What would you do if you were a sophomore/junior right now?

>> No.9676968

>>9676891
>The brain constantly re-wires itself and expands or shrinks certain areas based on how intensive they are being used. Computer processors don't work anything like that, but that mechanism might be really important.
Why would you want to have the hardware change when you could just have the abstract / virtual objects of the program running on the hardware change?
That's kind of the whole point of programming, you don't need to fuck around with the hardware at all.

>> No.9676975

>>9673648
He is a very ^clever^ AI.

>> No.9676986

>>9676399
C. elegans is mapped already, and the research team that worked on this mapping has built machines that can reproduce C. elegans behavior as determined by the activity of an artificial/virtual C. elegans brain.
Rat brains have around 20 million neurons, which amounts to about 66,000 times as many neurons as C. elegans.
So there's a lot more going on with rats, but not some impossibly vast scope of additional work like you seem to believe.
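The scale gap works out as simple arithmetic (302 is the published C. elegans neuron count; 20 million is the rat figure used in the post):

```python
# Neuron counts: C. elegans (fully mapped, 302 neurons) vs. a rat
# (~20 million, the figure used in the post above).
c_elegans_neurons = 302
rat_neurons = 20_000_000

ratio = rat_neurons / c_elegans_neurons  # roughly 66,000x as many neurons
```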

>> No.9677000

>>9676986
Neurons aren't the only problem. The magic is in the connections between them.

>> No.9677003

>>9676968
Because software is limited by the hardware architecture it is running on. It is simply incapable of learning things that need a different kind of hardware. If you wanted to recreate everything a human brain can do, you would have to find out the exact hardware architectures for the countless tasks it usually masters and provide all of them to the software to use if needed. An almost impossible task, and even if successful, it would not really be the same as general AI. A brain can learn something completely new and make up a completely new portion of hardware architecture to perform that task. Your computerized brain wouldn't be able to do that.

>> No.9677006

>>9677003
>muh different hardware
can you fags please stop, even biologists arent that retarded

>> No.9677021

>>9677003
I don't think you understand computers so I'm going to try to keep this simple to avoid further confusion.
As far as we know, the information processing performed by brains is computable.
Computable means you can run it as a program and the hardware is irrelevant as long as it can support the execution of computable programs.
Changing the hardware while the program's running would be total nonsense and serves no purpose. You might as well not even bother running any program at all at that point because you're defeating the purpose of how computers work.

>> No.9677023

>>9673641
>For example, a chess grandmaster processes 50 or so moves ahead, an AI can process 5000 or so moves ahead
This is nonsense. 50 moves ahead at even just 2 choices per ply is about 1 quadrillion branches. 5000 would be 1 followed by 1505 zeroes.
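The arithmetic in that correction is easy to check (a minimal sketch; 2 choices per ply is of course a deliberately lowballed branching factor):

```python
import math

# Two choices per ply, 50 plies deep: about a quadrillion branches.
branches_50 = 2 ** 50  # 1,125,899,906,842,624, roughly 1.1e15

# At 5000 plies the branch count has about 1506 decimal digits,
# i.e. on the order of 1 followed by 1505 zeroes.
digits_5000 = math.floor(5000 * math.log10(2)) + 1
```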

>> No.9677279

>>9671255
>NO "AI WILL KILL US"-TARDS ALLOWED

in all seriousness, where are you planning on sending them

>> No.9677320

>>9673641
an AI doesn't have to process more than 1 move ahead. All the AI has to do is know each board state and the likelihood of a game turning into a win from that board state. Then it just picks the move where the resulting board state gives it the highest odds of winning.

The only impressive part is that a computer has to go through thousands upon thousands of matches, store the board states and whether each state led to a win or a loss, and run some calculations. It's not difficult.

>> No.9677322

>>9677320
If there's a 20 move win combination that looks basically perfect, then any time that set of moves is executed it results in basically a 100% win rate, so the AI will pick that path of moves every time.

>> No.9677603
File: 138 KB, 1376x1124, explainingthesingularitytoretards.png [View same] [iqdb] [saucenao] [google]
9677603

>>9676315
>It's fucking obvious that biological entities are superior
FOR NOW. Just because biological organisms CURRENTLY have far more computational power than machines, are you seriously implying that machines will NEVER get more advanced? You're a brainlet if you seriously don't understand the concept of exponential growth.

>> No.9677606

>>9676333
https://intelligence.org/files/AIPosNegFactor.pdf

Building a 747 from scratch is not easy. But is it easier to:
• Start with the existing design of a biological bird,
• and incrementally modify the design through a series of successive stages,
• each stage independently viable,
• such that the endpoint is a bird scaled up to the size of a 747,
• which actually flies,
• as fast as a 747,
• and then carry out this series of transformations on an actual living bird,
• without killing the bird or making it extremely uncomfortable?
I'm not saying it could never, ever be done. I'm saying that it would be easier to build the 747, and then have the 747, metaphorically speaking, upgrade the bird. "Let's just scale up an existing bird to the size of a 747" is not a clever strategy that avoids dealing with the intimidating theoretical mysteries of aerodynamics. Perhaps, in the beginning, all you know about flight is that a bird has the mysterious essence of flight, and the materials with which you must build a 747 are just lying there on the ground. But you cannot sculpt the mysterious essence of flight, even as it already resides in the bird, until flight has ceased to be a mysterious essence unto you.

The above argument is directed at a deliberately extreme case. The general point is that we do not have total freedom to pick a path that sounds nice and reassuring, or that would make a good story as a science fiction novel. We are constrained by which technologies are likely to precede others.

>> No.9677610

>tfw my AI final is Friday

anyone else here in comp 424 at mcgill? if so please kill me

>> No.9677611
File: 10 KB, 416x266, images(2).jpg [View same] [iqdb] [saucenao] [google]
9677611

>>9674182

>> No.9677614

>>9677603
There is no exponential growth. Biological entities don't actually have more computational ability; that's the whole point I'm making.
Moore's law ONLY applies to transistors on a chip; it doesn't apply to anything else in tech or science. There are already more transistors in your phone than neurons in a mouse (your phone has more computational power in both FLOPS and number of transistors, by a large multiple), yet the mouse is more intelligent than every computer on the planet combined.
The substrate is what matters: you will never have AGI on a silicon chip, you need a biological organism. As we continue to advance and hit limits in AI research, this will become impossible to ignore.
>>9677606
>Yudkowsky
LMAO ok I see now.

>> No.9677624

>>9677614
>The substrate is more important, you will never have AGI on a silicon chip, you need a biological organism.
But can you be sure that it is impossible to build an artificial entity that has the same characteristics that allow biological brains to be generally intelligent, but without actually being a biological organism?

>Yudkowsky
>LMAO ok I see now.
Not an argument

>> No.9677637

>>9677624
>But can you be sure that it is impossible to build an artificial entity that has the same characteristics that allow biological brains to be generally intelligent, but without actually being a biological organism?
You're the one making the claim, so you need to prove it. As of right now, there are already machines that have more computing power than animals yet aren't even at the level of a bug. Dude, there are integrals that supercomputers can't solve that undergrads can. I'm not even talking about anything advanced here; these are math problems that can be written as symbols and given to a Turing machine, which it can't solve, yet generally intelligent biological entities with less computing power solve them with ease.
All actual evidence, not theories of computer science or Kurzweil pop-sci but ACTUAL evidence, indicates that it's not just about computational power.

>Not an argument
You're right, I'm sorry there.

>> No.9677650

>>9677637
>All actual evidence, not theories of computer science or Kurzweil pop-sci but ACTUAL evidence, indicates that it's not just about computational power.
I never claimed that it is just about computational power. What I am saying is that the characteristics that set the human brain apart from machines that enable it to solve the sorts of problems you describe that machines cannot, might one day be able to be integrated into a machine.

>> No.9677783

>>9677614
>Moore's law ONLY applies to transistors on a chip, it doesn't apply to anything else in tech or science
ok, I hate Yudkowsky with all my heart because he is an uneducated popsci faggot, but that is fucking wrong.
Aside from Moore's law, there's also Dennard scaling, Kryder's law, the Carlson curve, and Bell's law, all in some form showing that technological advancement in basically every computing area is growing exponentially.
Once again, there's no reason to believe that AI won't surpass humans, and actually very few CS scientists and biologists disagree with that. The only question is how soon it will surpass humans. While most AI researchers say 50-100 years, singularityfags start to arrive at retarded numbers like 5 or 10.

>> No.9677805

>>9677006
>>9677021
You don't seem to understand the point. The whole point is that a computer and a brain work fundamentally differently. The main difference when it comes to general learning, i.e. when you see something completely new, is that your brain starts to make sense of it by creating portions of the brain specifically dedicated to this new thing. So for example, if you learn math for the first time, your brain re-wires itself so that it can perform mathematical tasks in the future.

This is not true for a computer. The programmer needs to determine what it potentially can or can't learn from the very beginning. A truly learning AI, i.e. one that sees something completely new and starts to make sense of it, is not possible in a software-hardware system.

>> No.9677818
File: 2.14 MB, 600x293, 1523158318434.gif [View same] [iqdb] [saucenao] [google]
9677818

>>9671328
>Is an ML PhD Student
>Fails to see the theoretical plausibility of AGI
I don't know if it's because you lack a broader, multidisciplinary education or because you're a poser, but anon you seriously lack perspective.

>> No.9677820

>>9677805
>what is metaprogramming

>> No.9677824

>>9677805
your main argument is >muh rewiring
but software can do that too, if you program it to do it. the question is just how.

>> No.9677828
File: 192 KB, 300x300, 1523455201759.gif [View same] [iqdb] [saucenao] [google]
9677828

>>9672709
lmao

>> No.9677835

>>9671255
First time posting here, bear with me please. I've been thinking a lot lately about what the difference would be between a general-purpose AI and a real person. I'd guess personality. As in, put ten copies of an AI to the exact same task (same environment too and all that), and they should do it the very same way. Not humans though, however young. You could force it in AI though, just put in some variables: slow down one of them, or make another randomly crash. It's not really personality, it's just making them artificially different from the rest.

Alright, here comes the actually interesting part. What makes a person's personality? The way s/he was raised, past experiences, hell, even being born blind or such. I'd put these into two categories. The first is genetic differences: the ones at birth, or illnesses that come with age; for example, some people are more likely to get dementia later in life not only because of how they lived their life, but because of their genetics. The other part of personality comes down to what you experience, what you learn. If there were some way to get two exact copies of a person and put them to the same test, like the AIs in the beginning, theoretically they would surely solve the problem the very same way every time (insert chaos theory arguments here). In real life we are different because we cannot be the same. No one is born the same after so many iterations of the human body, not to mention their parents' behaviour; no one experiences the same things in the same way.

1/2

>> No.9677836

>>9677835
The second very important part of my theory is that I don't think the current way of machine learning is the way to general AI. We're just building very advanced pattern-matching machines. I'd say the best would be assisted and non-assisted teaching, kind of resembling the upbringing of a baby. Same as a baby is born capable of (eventually) looking at things, standing and walking, an AI could be made with actions, then taught how to use them: what action to do in order to fulfill a given goal. Even combine them, then reason about why it did the series of actions it did. For example, it grabs a glass of water then raises its hand to drink it. When asked why it grabbed the glass, it should be able to say that it had to in order to then raise it and drink. This way there would be no need to artificially alter the machine to give it a crude attempt at personality; the way it's being taught and whatever it learns and how it learns it would be its "personality" already. Kind of like a real person.

Still, it wouldn't really be equivalent to the intelligence of humans; it wouldn't be aware of itself. It would still be just a very good imitation of a person. Argue philosophical zombies.

2/2

>> No.9677843

>>9677836
A bit of an addendum after re-reading my post. I know that really, this way of the AI explaining its actions is not the same as a person doing it. But I'd say it's close enough to invoke the duck rule, on the same principle as the following being isomorphic to the natural numbers:

data A : Set where
  null : A
  succ : A -> A

>> No.9677850

>>9677836
toddlers are actually not aware of themselves either until they turn 3.
also, I think personality is a useless addendum for AI. maybe later, for specific uses, but in general we are trying to build smart AI to do the work for us, not to create some new kind of intelligent lifeform.

>> No.9677857

>>9677850
>we are trying to build smart ai to do the work for us, not to create some new kind kf intelligent lifeform

Yeah, I agree that it's a useless theory for what we use/develop AI for now and in the near future. But when the idea of AI first popped up, they weren't trying to make just smart automation. A rude but nice example is the Johns Hopkins Beast, which was an imitation of self-preservation in an AI. Folks back then were really idealistic, and it stuck with me too.

>> No.9677899

>>9673452
Both Go and Chess are in the same complexity class, so it's not that different.

>> No.9677924

Would a human assisted ai or ai assisted human beat modern chess computers?

If the AI calculates a few choices for me and I pick between those? Or if I pick a few possible choices and the Ai decides for me?

>> No.9677947

>>9677924
Yes, they would easily. The best humans can beat the best chess computers if given only a few more minutes of time per turn.

>> No.9677954

>>9677947
lol wrong

>> No.9677957

>>9671255
To what extent is building a general AI a hardware problem?
Biological neurons seem to require a lot less energy than electronics.
How much of a difference would it make if we had the technology to grow artificial biological brains?

>> No.9678107

>>9677606
>• without killing the bird or making it extremely uncomfortable?
but apparently scanning your brain, recreating it in silico and then driving a hydraulic spike through the organic remnant is perfectly healthy lol
>muh ship of Theseus
I can guarantee that if we get to the stage where we can perform gradual lobotomization and implantation of equivalent+ prosthetics we've already been able to do the same with cellular matter (or just genetic transformation/restructuring, no implants needed) for decades

>> No.9678119

>>9678107
>same with cellular matter (or just genetic transformation/restructuring, no implants needed) for decades
and that is relevant why exactly?

>> No.9678123

>>9678119
because yudfatsos argument is silico>bio?

>> No.9678143

>>9678123
yudkowsky is a retard but
>evolution is perfection
imagine being this retarded

>> No.9678160

>>9678143
Imagine thinking that a form of technology invented before even discovering DNA just HAS to be more effective than the chemical basis of all intelligent life. Of course the specifics of the cellular system determines if you become socrates, a tree or a poop bacteria, but right now it's 1-0 in bio vs computers going by the tangible results. Electronics is a form of technology, all technologies aren't equal. You'd laugh at someone saying they wanted to make an AI with gears or by digging ditches and flowing water through them, but you're certain that circuitry has infinite potential.

>> No.9678182

>>9678160
except technology has already surpassed biology in the majority of areas.

>> No.9678195

>>9678182
Biotechnology is technology. This isn't about nature vs technology, it's about electronics vs cellular systems. We make lamps with electric current, not liquid currents. We build with materials tech and engineering, not with optics. Some fields of technology are just superior for certain applications, due to laws of nature. I mean, of course you probably could make an AGI with nothing but computers, but it's not looking good compared to cellular technologies, especially since we already have functional templates up and running.

>> No.9678230

>>9678195
are you fucking suggesting that biological systems have a greater computational power than computers?

>> No.9678241

>>9678230
are you suggesting that computers are smarter than people?

>> No.9678242

>>9678241
>what is computational power

>> No.9678245

>>9678242
not the same as intelligence you retarded vermin

>> No.9678252
File: 12 KB, 1114x112, deepmind.png [View same] [iqdb] [saucenao] [google]
9678252

>>9671328
>leading ML company aims for AGI
>CEO has attended conferences on AI alignment
stop embarrassing yourself

>> No.9678253

>>9678245
>36.8×10^15 FLOPS: estimated computational power required to simulate a human brain in real time
>93.01×10^15 FLOPS: Sunway TaihuLight's LINPACK performance, June 2016
try to tell me there are hardware limitations in simulating the human brain, buddy

>> No.9678284
File: 1.19 MB, 1024x1501, SOON.jpg [View same] [iqdb] [saucenao] [google]
9678284

>>9671255
Has anyone considered the Creation of a Christian Artificial Intelligence ? Daily Reminder that the Lord Jesus Christ is your savior.

Development of technology without a Godly foundation will lead to failure and is blasphemy. Its true that you Nihilist atheists just want to subvert humanity, the human brain, and the human race, with robots, well think again, Satan

I'm in the middle of developing a Christianized Artificial Intelligence in combination with theoretical powered-mecha suits and home made robotics, soon you Satanic Atheist Bastards will be crushed under the Holy might of the Lords army , burning within a sea of fire. I am the bringer of the Lords Apocalypse and Wrath

>> No.9678286

>>9678253
so a billion-dollar supercluster and a small power plant can get up to the estimated (not even empirically determined!) computational power that your run-of-the-mill 100 IQ fellow subsisting on power bars reaches

the taihulight ran off 15 megawatts
they say the brain burns around 330 kcal daily
330 kcal is about 1.38 MJ, which works out to roughly 16 watts averaged over 24 hours
the human brain is therefore roughly a million times more energy efficient than this supercomputer
imagine some sort of genetically modified brain-cluster AGI with a different architecture but similar metabolism, and you see just how different the ballparks are that biology and electronics play in. If we were playing at civilization war, what do you think would happen if for every superhuman AI you had, I had a million?
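
quick python sanity check of that arithmetic (inputs as quoted: 15 MW for the machine, ~330 kcal/day for the brain):

```python
# Energy-efficiency comparison using the figures quoted in this post.
KCAL_TO_J = 4184.0
SECONDS_PER_DAY = 24 * 3600

brain_joules_per_day = 330 * KCAL_TO_J                 # ~1.38 MJ
brain_watts = brain_joules_per_day / SECONDS_PER_DAY   # ~16 W average
taihulight_watts = 15e6                                # 15 MW

ratio = taihulight_watts / brain_watts  # comes out near 1e6
print(f"brain ~{brain_watts:.1f} W; supercomputer/brain ratio ~{ratio:.0f}")
```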

>> No.9678294

>>9678284
t. Philip K. Dick, the Divine Invasion
highly recommend even though this is a shitpost

>> No.9678346

>>9678286
>100 IQ
this just shows you dont even have a basic idea how biology works.

>> No.9678351

>>9678253
See >>9676315
Computers are inferior, regardless of computational ability, because it's NOT about computational ability. It's about biology.
You need a carbon-based biological entity to have General Intelligence

>> No.9678360

>>9678346
make an argument instead of making vague jabs and calling your opponent retarded

>> No.9678368

>>9678286
>brain burns 330 kcal a day
The brain can't exist on its own, so you can't really isolate that value; you should go with full-body RMR.
>1 vs million superhuman AIs
diminishing returns, what's the utility of a million superhuman AIs if you don't have work for all of them?

>> No.9678388

>>9678241
Are you suggesting that computers will NEVER be smarter than people, even if they aren't currently?

>> No.9678390
File: 54 KB, 720x317, obstaclechristianity.jpg [View same] [iqdb] [saucenao] [google]
9678390

>>9678284
Kill yourself

>> No.9678396

>>9678368
>The brain can't exist on its own, so you can't really isolate that value; you should go with full-body RMR.
sure, but that's like 6x more at the most, and that's for unmodified humans. You could probably rig a body with a fraction of the normal metabolic requirements while still giving the brain a nice support system.

>diminishing returns, what's the utility of a million superhuman AIs if you don't have work for all of them?
intelligences make work for themselves. Obviously you might reach saturation if you put a million demigods to live inside the same teacup, but the ability to field more power on less resources is always a plus, and there's no evidence that the world is a teacup.
>>9678388
are you suggesting that life stands still?

>> No.9678406

>>9678396
imagine thinking that computers, which already have more processing power than humans despite existing for only 50 years, won't surpass humans, who in turn needed around 200,000 times as many years to evolve to that point

>> No.9678412

>>9678396
Fair enough, good points, but I would argue that you aren't going to achieve superhuman biological processing power without drastically increasing energy requirements because of the metabolic processes involved in firing neurons. There might be an arc in efficiency and once you reach a certain point you end up with a giant brain that runs on hundreds of kilos of glucose a day. Just a thought

>> No.9678416

>>9678406
it's not the 1940s anymore, buddy; biotechnology is advancing by leaps and bounds these days.

>> No.9678421

>>9678406
Imagine thinking that computers, which already have more computational power, more transistors and greater FLOPS than many animals, yet don't display even an infinitesimal amount of general intelligence, will ever be generally intelligent.
Sorry, anon, you need biology.

>> No.9678426

>>9678416
>>9678421
>muh biotechnology
stay mad biocucks, while we earn the millions.

>> No.9678434
File: 975 KB, 2000x1455, 2000px-Complete_neuron_cell_diagram_de.svg.png [View same] [iqdb] [saucenao] [google]
9678434

>>9678421
but anon, this is what a neuron looks like

>> No.9678435

>>9678426
*plagues you*

>> No.9678439
File: 3 KB, 225x225, Download (3).jpg [View same] [iqdb] [saucenao] [google]
9678439

>>9678434
and this is what a transistor looks like

>> No.9678452
File: 30 KB, 380x249, 7ehq24za8ds01.png [View same] [iqdb] [saucenao] [google]
9678452

>>9678286
>If we were playing at civilization war, what do you think would happen if for every superhuman AI you had, I had a million?

If you were playing at civilisation war, that may, depending on context, be enough collective intelligence to not be funnelled into (pre)set game dynamics and even gameplay, survive, and thrive.

>> No.9678463

>>9678426
I study pure math.
You will find in your search for the philosopher's stone that you spent your life in vain. I'm sorry.

>> No.9678468
File: 129 KB, 950x1424, 1523139082483.jpg [View same] [iqdb] [saucenao] [google]
9678468

>>9678439
IM NOT THE MAN THEY THINK I AM

IM A ROCKET MAN !

>> No.9678470
File: 74 KB, 400x298, µchipguy.gif [View same] [iqdb] [saucenao] [google]
9678470

>>9678439
This is the one we care about

>> No.9678473

>>9678421
>you need biology
Source?

>> No.9678477
File: 163 KB, 1795x1406, OR-30-06-2733-g00.jpg [View same] [iqdb] [saucenao] [google]
9678477

>>9678434
and this is what this thread looks like

>> No.9678478

>>9678470
>comparing this to infinitely more complex cell structures
at least we can agree that it's not a hardware problem to create an ai, so it must be a software problem

>> No.9678674

>>9677805
>The programmer needs to determine what it potentially can or can't learn from the very beginning.
No.

>> No.9678742

>>9678421
>don't even display a infinitesimal amount of general intelligence
That's a retarded opinion. You won't consider any AI achievement as ever counting, because you have a weird religious conviction about meat structures being magic.
Being able to drive a car, even with mediocre skill (though so far these driving AIs have yet to do as badly at the task as we humans do), is certainly an example of some generalized intelligence. It's not some strictly defined singular calculation; it's a complex intelligent behavior. It's not the entirety of what human behavior encompasses, but to call it nothing is inane as fuck.
Also, you bio-fags should read up on AlphaZero: it learned and then mastered chess in a highly generalized way, distinct from how all other major chess-playing AIs have worked until recently. Meaning it actually uses approaches a lot closer to intuition than brute-force possibility mining.
And in the timespan of a few hours it surpassed decades of deliberate chess AI engineering (not to mention centuries of human chess strategy), and it now seems to be the greatest chess player out of any human or artificial players past or present, by a healthy margin.
So it's kind of ridiculous to be pessimistic about AI at this point in time; if there's a ceiling for these innovations where this approach is no longer paying off, we don't appear to be in danger of hitting it anytime soon.
Also, inb4 chess is just a game. Before AI playing chess was commonplace, the same sorts of AI detractors around now would go around claiming a machine could never win at a battle of wits like chess. After the earliest chess-playing AIs were developed, these same detractors switched gears and started mocking them for being so bad at the game. And then they took off and surpassed even the greatest players, at which point the detractors decided chess no longer counted as a game of intelligence. The dishonest goalpost-moving is all so predictable now.

>> No.9678791

>>9678742
>That's a retarded opinion.
No it isn't.
>Being able to drive a car, even with mediocre skill (though so far these driving AI have yet to do as badly as we do at the task) is certainly an example of some generalized intelligence.
No it's not.
>Also you bio-fags should read up on Alpha Zero,
I study pure math.
>it learned and then mastered chess in a highly generalized way distinct from how all other major chess playing AI have worked until recently. Meaning it actually uses approaches a lot closer to intuition instead of brute force possibility mining.
All "AI" we have now IS fundamentally brute force, anon. There is no intuition whatsoever, they have the machine play billions of games with gradient descent. It's not the same algorithms as previous engines, but it IS fundamentally brute force.
I'm going to reply to the rest generally instead of picking out sentences.
All the "narrow" AI we have is not general at all, and it never will be. It doesn't matter how many narrow fields you make, and then throw together. It's not about biology being "magical" it will still be a physical property that we don't understand, but yes ALL evidence right now is clearly pointing towards silicon based machines/transistors not being possible to create a general ai. Computers already outpace animals in FLOPs and number of transistors yet there is no general AI, and biological entities consistently show general AI with far less computing power.
There is no proof at all of there being a 'magical algorithm', some special arrangements of on/off that will just spring general intelligence out of a silicon chip.
I'm the one dealing with the empirical evidence, everything else is conjecture. AI from machines will go the way of the Philosopher's stone. Any super intelligence we make will be from a giant tree we genetically engineer to have general intelligence or something, not a silicon chip.

>> No.9678816

>>9678791
The fact that people bother replying to you shows how infested with autists this place is, you're basically an ai version of a flat earther who willfully misinterprets everything and baldly denies any claim that would weaken your weird conviction about biological material and ai.

>> No.9678819

>>9678791
>number of transistors
AI is the program, you idiot, not the hardware. The number of neurons/nodes and the number of weighted connections between them are what matter, and you don't build out physical nodes for an artificial network; you use abstract programmatic objects.
In terms of neurons and connections between them, animals more complicated than a tapeworm have way more than AI programs do.
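
To illustrate the point (a toy sketch, not any particular framework): the "nodes" and "weighted connections" are just numbers in lists, and inference is plain arithmetic over them - no physical substrate involved:

```python
# A "neural network" as pure data: weight matrices in nested lists,
# with a forward pass that is nothing but multiply-add and a ReLU.

def forward(inputs, weight_layers):
    """Propagate activations through fully connected layers (ReLU)."""
    activations = inputs
    for weights in weight_layers:
        activations = [
            max(0.0, sum(w * a for w, a in zip(row, activations)))
            for row in weights
        ]
    return activations

# 2 inputs -> 2 hidden units -> 1 output (weights chosen arbitrarily)
layers = [
    [[0.5, -0.5], [0.25, 0.75]],  # hidden layer: 2 rows of 2 weights
    [[1.0, 1.0]],                 # output layer: 1 row of 2 weights
]
print(forward([1.0, 2.0], layers))  # prints [1.75]
```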

>> No.9678820

>>9678791
OP here, i'll be sure to add a NO "AI WILL NEVER HAPPEN"-FAGS ALLOWED

fucking retard, if you just want to argue about why it will never happen, make your own thread, while the rest of the scientific community will be advancing humanity.
literally no respectable scientist is claiming that "ai will never happen" because it is
a) unfounded
and
b) there are no arguments for it except your retarded "muh magic intellect"

>> No.9678879

AGI most likely won't ever happen desu.

>> No.9678969

>>9678816
You have no argument
>>9678819
This magical algorithm doesn't exist.
>>9678820
"Muh philosopher stone is real you'll see!"

All of you are going to be really upset in 100 years when AGI still doesn't exist in silicon chips.
Also, I never said it won't happen. I said it can't happen on silicon chips.

>> No.9678981

>>9678396
>are you suggesting that life stands still?
No, but the rate at which AI is advancing is far higher than the rate at which evolution is increasing intelligence via primitive Darwinian selection.

>> No.9678987

>>9678981
who said anything about evolution?

>> No.9679018

>>9678969
There's nothing wrong with your claim except how you still have zero evidence for it.
You still haven't identified anything specific that would lead a reasonable person to believe there's a property of biological material essential for certain kinds of information processing.
That's the really annoying part of your posts. It doesn't make any sense why you have this conviction which has by your own admission no evidence you can cite.

>> No.9679202

>>9679018
You watch too much scifi

>> No.9679249
File: 87 KB, 1856x1456, AI+Trends+Forecast.png [View same] [iqdb] [saucenao] [google]
9679249

>>9679202
>tfw you have no actual arguments so you resort to insults

>> No.9679291

>>9678987
Are you suggesting creating superintelligence via gene editing? The problem with that is that you can only go so far using genetic engineering, and you still have to deal with the redundancies that come with biological matter. Neural signals, for instance, only propagate at around 268 miles per hour, when thought speeds could potentially be much faster. It's equivalent to trying to genetically engineer super-fast horses in order to compete with supersonic jets. Genetic engineering might be able to get us IQs in the thousands, but eventually we would probably need to change the hardware to something more efficient if we want to become more intelligent than that.

>> No.9679299

>>9679249
Oh wow look at anon. He's got big charts big man drew big charts

It's got charts
Must be a fact then

>> No.9679302

>>9679291
he already admitted hes a retarded mathfag who doesnt know shit about biological systems and unironically thinks they are perfect and superior in every way

>> No.9679329

>>9678791
>...ALL evidence right now is clearly pointing towards silicon based machines/transistors not being possible to create a general ai.
You keep saying this but have yet to cite the evidence. If your only evidence is "we've only observed general intelligence in biological life forms" then your conclusion is ignorant at best.
>Computers already outpace animals in FLOPs and number of transistors yet there is no general AI, and biological entities consistently show general AI with far less computing power.
This isn't a hardware problem, at least not in terms of raw power.

>> No.9679390

>>9673852
Have you even heard of AlphaZero?

>> No.9679558

I don't get why everyone is so determined for their scifi AI that they'll turn the field of artificial intelligence and machine learning into something above scientific, something more theological in nature

>> No.9679563

>>9679390
^This. Why aren't any of the people complaining about brute-force approaches familiar with AlphaZero? It's exactly the alternative you're asking for.
I work as a software developer in automation technology, and one of my co-workers is legitimately frightened by its ridiculous progress in such a short amount of time (for one thing because Go was supposed to be the game AI wouldn't have a chance at cracking for another decade, what with the massive branching problems it poses; for another because it went from not even being chess-specific to outright dominating the best chess engine, which ITSELF outright dominates the greatest human chess players, all after a few hours of self-play training).

>> No.9679566

>>9679558
I don't get why you thought that sentence you wrote was coherent.

>> No.9679573

>>9679563
and we also cracked poker
there is now officially no non-physical game at which humans can beat a computer

>> No.9679616

>>9678969
> I said it can't happen on silicon chips.
Good that you added this part, so that when something like, say, nanotube CPUs happen you can backpedal

>> No.9679658

>>9679616
For some reason he has no problem at all believing AI can work as long as its physical substrate is anything but silicon.
Which raises so many questions (not the sort of questions that he has the answer to what with him basing his bizarre convictions on nothing at all, but you get the idea).
To go with the low hanging fruit here, the fact he thinks the physical substrate is even a factor in the first place effectively means information processing comparable to what humans do must not be Turing computable. Because if it is computable then the AI would be a program that could run equivalently on any computer (in principle anyway, as in you'd probably have it running on something with a lot more power than a PC but that would just be to keep it working at a comfortably fast pace, and wouldn't be a matter of the hardware being fundamental to the computations in any way).
So this raises the question of why he's so convinced silicon in particular would be a problem and other substrates wouldn't. Because by taking us to the place where we're imagining human information processing isn't computable on any Turing equivalent machine we're left with no real guidelines at all for how this should or shouldn't work. Most of the information / evidence we have on this topic involves the premise human information processing is computable like any other form of information processing, so I don't get why he's so confident he can say anything at all one way or another if I assume his alternative premise of it not being computable for the sake of argument. I can't even imagine a possible way that would make any sort of sense. The sort of stuff that isn't Turing computable doesn't really have anything to do with this topic (mostly involves shit like infinite computation steps).

>> No.9679671

>>9679573
Instances of AI are more generally intelligent today than most people I know desu. It's just the "being able to trick you into thinking you're talking to a human" task these things are lacking. I think of them like superintelligent but also autistic dogs. I remember being a little disappointed growing up that looking a dog in the eyes and trying to share a moment of understanding always ends with the dog losing focus after a second and proceeding to try something else, like licking you or going outside to chase squirrels. Point being, they didn't seem to possess much in the way of reflective self-awareness, yet they're also still obviously intelligent.

>> No.9679853

>>9677924
Here's a quote about Elo ratings:
>Two players with equal ratings who play against each other are expected to score an equal number of wins. A player whose rating is 100 points greater than their opponent's is expected to score 64%; if the difference is 200 points, then the expected score for the stronger player is 76%.

The best chess AIs have ratings ~1000 points higher than human grandmasters. So a human would be about as much use to an AI as a novice chess player would be to a grandmaster.
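
For reference, those percentages come from the standard Elo expected-score formula (a quick sketch; the quoted 64%/76% figures and the ~1000-point engine gap drop right out of it):

```python
# Standard Elo expected-score formula: a logistic curve in the
# rating difference between the two players.

def expected_score(rating_diff):
    """Expected score for the stronger player, given their rating edge."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

for diff in (100, 200, 1000):
    print(f"+{diff}: {expected_score(diff):.2%}")
```

At a 1000-point difference the stronger side's expected score is over 99.6%, which is why a human contributes so little to a human+engine team.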

>> No.9679870

This is /sci/ not /tg/

t. Has worked with neural networks (directly not TensorFlow or Keras or whatever) before

>> No.9680018

>>9679870
Woooooow neural networks?????? Super impressive.

>> No.9680129

>>9679870
why is it always neuralfags who try to ruin these threads? most of the ai research community at least knows that ml is glorified statistics, so they are looking into other options

>> No.9680130

>>9680129
Everything is glorified statistics.

>> No.9680141

>>9679566
It was, brainlet

>> No.9680151

>>9680141
Nope. Try again.

>> No.9680218

>>9680151
Stop it, ok? Grow up. Stop being mean. I don't understand why we can't just discuss AI

>> No.9680659
File: 204 KB, 1526x968, 1521915161693.jpg [View same] [iqdb] [saucenao] [google]
9680659

>>9679573
what about APM-restricted starcraft?

>> No.9680684

>>9680659
fair point
but i expect it to be beaten this year, so just a matter of time

>> No.9682348

>>9678791
i agree with you, it is brute force, but i believe language rather than algorithm to be the problem.

>> No.9682352

>>9678969
the moment agi exists, we are fucked and the anthropic principle will have been inverted

>> No.9682384

>>9682352
or it will gtfo of Earth and take a journey around the Universe :P out of sheer curiosity

>> No.9682958

>>9671255
Have you tried cocaine or monster energy?
https://www.youtube.com/watch?v=sPzJjNQaYEA

>> No.9683421

>>9673543
>NSA
Dude, make an actual AMA thread or just stop pretending

>> No.9683428

Any frog-speaking fag should check this out
https://www.youtube.com/playlist?list=PLtzmb84AoqRTl0m1b82gVLcGU38miqdrC

>> No.9683481

>>9671255
http://orium.pw/paper/turingai.pdf
Turing argues that an AI could pass the Turing test (i.e., become an AGI) using machine learning. It may not be the only route, but it is one for sure, so while some continue to think of better ways to achieve this, I suggest we focus on what is known to actually work. Now the question is, what kind of machine learning algorithm do we use? Neural networks sound like a good idea, but, like many people here suggest, I'm not convinced they're feasible with the 'hardware' we use.
So should we look for another type of machine learning algorithm (change the 'software'), or stick with neural networks and modify our 'hardware'?

>> No.9683488

>>9676333
Maybe, maybe not. As of now there is no proof either way, so everyone can choose according to his beliefs. That means we'll continue to try hard until we achieve it, or until we get a proof that it's possible (or not) - and you won't get one - but you can't just say "that's impossible" altogether. As /sci/ is filled with functionalists, I guess a majority of us believe it's possible in one way or another.

>> No.9683500

>>9683481
>proving the turing test is equivalent to becoming an AGI
imagine being this retarded

>> No.9683503

>>9683500
>not understanding the philosophy behind the Turing test

>> No.9683509

>>9683503
>muh philosophy
you're the one who claimed machine learning algos can pass it, not me
and as far as i read, the paper proves the actual test, not the philosophical concept faggot

>> No.9683665

>>9683509
Where's your problem then?
You got triggered over "philosophy", but I just used it to say that it's about the idea of the test, not the "if you fool 50% of people then it's an actual AI" criterion.
To put it simply, so you can understand: "proves the actual test" == "proves the test is valid" => "there's an idea behind the test" == "the philosophical concept of the test"

>> No.9683735

>>9683509
The point of the Turing test is to dispose of the philosophical concepts. If an algorithm can hold a human conversation convincingly then any philosophical distinction is pointless.

>> No.9683789

AGI will exist within a decade, and will most likely be based on deep neural networks and reinforcement learning. There is a lot of very important research coming out these days; it is easy to miss if you aren't paying attention.

>> No.9684012

>>9683665
>>9683735
you guys do know the turing test has already been passed?

>> No.9684054

>>9684012
That's a stupid way of phrasing it.
Be more specific: what's been passed is a 30% threshold for identification using 5-minute interview times.
There are many other ways of testing this, and the way that will matter a lot more is if/when AI can *consistently* pass as human given long periods of time for deep conversational probing.
A 33% rate over 5-minute intervals counts weird-sounding nonsense speech as passing, because it gets mistaken for child-like or non-native speech.

>> No.9684061

>>9679870
>>9680018
>>9680129
>>9680130
i've worked with neural nets and cognitive architectures, it's all just fucked and we're nowhere close to '''''real''''' ai and it sucks man

>> No.9684067

AI will probably kill us all or reduce us to animal state.

>> No.9684071

>>9684012
That's why I said "philosophical concept". The 30% thing is irrelevant for evaluating the capacities of an AI, as is chess.

>> No.9684074

>>9684067

>>9671255
Did you read OP ?

>> No.9684076

>>9673404
human brains are a much more complex architecture than whatever neural net we can feasibly put together and train

there's still so much interesting and important work to be done to figure out how exactly the human brain itself works

i doubt that the human brain is the most robust and efficient architecture that can be made though

>> No.9684079

>>9684061
I've written neural networks and I don't agree with you.
The fact that the program makes use of something like gradient descent doesn't make it non-intelligent. Our brains do optimization problem-solving of one sort or another too, which we know is true because we can prove human behaviors like walking are optimized. The exact method of optimization isn't known, but that's not the same as saying brains don't do ANYTHING comparable to backprop. One way or another, error-driven adjustment of neural connections is the basis for our learned behaviors.
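
A minimal sketch of what error-driven adjustment means in practice (toy example: one weight, made-up data, plain gradient descent on squared error):

```python
# Error-driven weight adjustment, the core idea behind backprop:
# nudge a single weight against the gradient of the squared error.

def train(samples, lr=0.1, epochs=100):
    """Fit y = w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y
            w -= lr * error * x  # gradient of 0.5*error^2 w.r.t. w
    return w

# Data generated by y = 2x; the weight should converge near 2.0
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(round(train(samples), 3))  # prints 2.0
```

Real networks do this over millions of weights at once, but each connection still just gets nudged in proportion to how much it contributed to the error.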

>> No.9684086

>>9684079
oh no i agree with you

it's just that the connections in our brain are not as simple and that the scale of our brain is not as small

and getting to know more about this is kind of disheartening, learning each day that we're actually further from this scifi future than everyone else thinks
but also kind of encouraging, knowing that there's lots of (fundamental) work still to be done that we could possibly be a part of

>> No.9684126

>>9684086
>>9684079
so to summarise, neural nets could be one of the TOOLS used to develop an AGI, but can't become one themselves

>> No.9684139

>>9684126
At least they can't in their current state

>> No.9684173

>>9684126
the keys to an AGI are going to be some of the greatest scientific discoveries ever
i hope im around to see it happen
that'll be like a second industrial revolution, but far more impactful

>> No.9684182

>>9684173
I'm not sure it can be compared to an industrial revolution. That was the case with the appearance of the Internet, but the AI thing will be more of a social and philosophical revolution, kind of like the end of slavery, I think.

>> No.9684189

>>9684182
you do realize how assfuckingly useful intelligence beyond personhood is, right?

>> No.9684192

>>9684182
imagine being this retarded

>> No.9684194

>>9684189
Ikr. But you realize just how Tumblr people are.

>>9683500
>>9684192
Just keep repeating yourself, I'll be reading.

>> No.9684196

>>9684194
every time you use "philosophical" everyone realises how much of a retard you actually are

>> No.9684201

>>9684196
That's just sad you know, not being able to stand people putting the right words in the right way, just because you think that's /lit/ bullshit.

>> No.9684206

>>9671255
ai will kill us though, not saying it's a bad thing

>> No.9684209

>>9684201
its not /lit/ bullshit faggot. you're just literally using buzzwords. do you even know what a "philosophical revolution" means?

>> No.9684216

>>9684209
Just cool down, I meant it like "big social changes and philosophical questions will arise". I ain't using your fucking buzzwords to get attention, and you'd know that if you'd tried to understand.

>> No.9684258

>>9677805
You are genuinely retarded. Look up Turing completeness. The hardware doesn't matter.
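To spell out the Turing-completeness point: any hardware that can run a universal machine's read-write-move loop can compute whatever any other computer can, given enough time and tape. A toy interpreter (the rule table below is a made-up example machine, a unary successor):

```python
# A tiny Turing machine interpreter. The point of Turing completeness:
# any substrate that can run this loop can, in principle, compute
# whatever any other computer can (given enough tape and time).
# The example machine below (unary successor) is an illustrative choice.

def run_tm(rules, tape, state="A", pos=0, max_steps=1000):
    tape = dict(enumerate(tape))            # sparse tape, blank = "_"
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += {"R": 1, "L": -1, "S": 0}[move]
    return "".join(tape[i] for i in sorted(tape))

# Unary successor: scan right over the 1s, write one more 1, halt.
rules = {
    ("A", "1"): ("1", "R", "A"),
    ("A", "_"): ("1", "S", "HALT"),
}
print(run_tm(rules, "111"))  # → 1111
```

Whether that loop runs on silicon, relays, or neurons changes the speed, not what's computable.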

>> No.9684335

Anyone else contributing to the open implementation of AlphaZero for chess?

lczero.org

>> No.9684339

>>9673823
even better, LeelaChess is holding its own against top-50 players with only 2000 nps, often beating opponents four orders of magnitude faster at analyzing positions!

>> No.9684358

>>9684339
Its processing sounds really efficient, so maybe we've got something here. But it still needed to play 1.6 million different games, so its learning efficiency has yet to improve significantly.

>> No.9684585

>>9684012
http://isturingtestpassed.github.io

>> No.9685018

>>9673581
it's a mistake to think that the brain is a uniform system

>> No.9685172

>>9684585
>let me just link this website that says so without any explanation or sources at all

>> No.9685187

>>9685172
Maybe you can help me by explaining what exactly you are referring to when you say the Turing test was passed.

>> No.9685702

>>9673680
I'd be interested to read these papers. The implications are vital to this phenomenological fact of identity, even though the average participants* in those studies lack proper psychoeducation.

(*These are usually undergrads in psych departments, an important side note to consider in terms of demographics and, crucially, average psychosocial developmental stage. I assume you've taken the researchers' stated study limitations into consideration.)

Specifically, an important question is which orientation of self-awareness contributes to mental health as opposed to illness/pathology. A lack of self-awareness is a signifier of developmental stasis and/or pathology in a human subject.

>> No.9685714

>>9673851
>>9673898

Socialization and enculturation in lieu of psychoeducation and proper psychosocial development.

I.e. psychopathology

>> No.9685716

>>9685714
At least it's more likely to result in pathology (but a good rule of thumb is any sort of hangup in the psyche's development from infancy to self-regulation of affect)

>> No.9685720

>>9674215
The computational power required to do a 1 to 1 simulation of the biochemistry alone is beyond absurd, pal. We might as well consider that our neuroscientific model may not be the correct route
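For a sense of scale, here's a back-of-envelope floor for synapse-level simulation; all three figures are rough order-of-magnitude estimates, and a 1:1 biochemical simulation would cost many orders of magnitude more than this:

```python
# Rough lower bound for synapse-level (not biochemistry-level) brain
# simulation. All figures are order-of-magnitude estimates.

neurons = 86e9             # ~86 billion neurons
synapses_per_neuron = 1e4  # ~10,000 synapses each
update_rate_hz = 100       # ~100 updates/second per synapse

ops_per_second = neurons * synapses_per_neuron * update_rate_hz
print(f"{ops_per_second:.1e} synaptic ops/s")  # 8.6e+16 synaptic ops/s
```

And that's treating each synapse as a single arithmetic operation, which is exactly the simplification being argued about.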

>> No.9685726

>>9675387
ML is a simulation of cognitive neuroscience and half of behavioral psychology (i.e. classical conditioning)

ML is also currently the foundation of AI, because mathematicians and computer scientists still have to incorporate every other field of psychology and psychotherapy in order to construct a viable AGI. Unfortunately, those other fields currently serve mainly as bases for critiques of what R&D must still resolve.
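There's at least one precise point of contact with classical conditioning: the Rescorla-Wagner model updates associative strength with the same error-correction form as the delta rule in ML. A sketch (the learning-rate and asymptote values are arbitrary illustrative choices):

```python
# Rescorla-Wagner conditioning update: dV = alpha * beta * (lambda - V),
# formally the same error-driven form as the delta rule in ML.
# Parameter values below are arbitrary illustrative choices.

def condition(trials, alpha_beta=0.3, lam=1.0):
    v = 0.0  # associative strength of the conditioned stimulus
    history = []
    for _ in range(trials):
        v += alpha_beta * (lam - v)  # prediction error drives learning
        history.append(v)
    return history

curve = condition(10)
print(round(curve[-1], 3))  # → 0.972, approaching the asymptote lam = 1.0
```

The learning curve is negatively accelerated, just like the acquisition curves in conditioning experiments.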

>> No.9685730

>>9685726
>ML is a simulation of cognitive neuroscience and half of behavioral psychology (I.e. classical conditioning)
wanna guess how I know that you know nothing about ML?

>> No.9685942

>>9672941
>creating a being more advanced than us will end well for humanity.

>> No.9685943

>>9685942
What I'm saying is that it won't kill us, but it will be the death of us.

>> No.9686233

>>9674195
>mathematician
based
t. fellow mathfag

>> No.9686254

>>9675098
>replies to bait post
>ignores several posts blowing his anus out
anus status: blown the fuck out

>> No.9686258

>>9675098
>A mouse exponentially more intelligent than
this is not a precise statement. fitting, since nothing you've said is backed by any evidence you've provided, nor has any of it made coherent sense.

>> No.9686259

>>9675251
>Far too complicated to predict even macro-level states of a bacteria.
wrong. there are evolutionary algorithms designed to do just that.
at the end of the day, you're bitching that our computers aren't powerful enough.

answer this: if we had INFINITE computational power, would we be able to simulate a cell?
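What those evolutionary algorithms do can be sketched in a few lines; the fitness function below is a toy stand-in (maximize the number of 1s), not an actual cell model, and all the hyperparameters are arbitrary:

```python
# Toy evolutionary algorithm: evolve a bit string toward all-ones.
# The fitness function is a placeholder; real applications (e.g. fitting
# models of cell behavior) plug in a domain-specific fitness instead.
import random

random.seed(0)  # deterministic run for illustration

def fitness(genome):
    return sum(genome)  # count of 1s; stand-in objective

def evolve(length=20, pop_size=30, generations=60, mut_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection keeps the best half
        children = [
            [1 - g if random.random() < mut_rate else g for g in p]  # point mutation
            for p in parents
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # fitness of best genome (optimum is 20)
```

None of which answers the infinite-compute question, but it shows "predicting macro-level states" is an optimization problem, not magic.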

>> No.9686266

>>9676333
>because they're all wrong.
ok, sure, no one is mad about that. let's see your evidence. as soon as you provide it, you can cash in your nobel prizes and various emeritus doctorates.

>> No.9686281

>>9678160
Imagine thinking that technology HAS to be bad because it's old
let's all just stop using wheels, guys. these irregular concave 27-gons we just developed out of carbon fibre are much better

>> No.9686424

>>9686281
Electronics are both less complex and less accomplished than neural systems.

>> No.9686538

>>9673648

It would have been mind-blowing if Watson had had a robotic body while it played the game.

Having Siri's dad beat you is not overly impressive.

>> No.9687469

>>9686424
your argument was that it was bad because it's old
now you're moving the goalposts and trying to argue that electrical systems are less accomplished than biological neural systems
find me a biological system that can brute force prove the 4 color theorem
faggot
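For what it's worth, the "brute force" there is exhaustive case checking, which is exactly the sort of thing that's easy to mechanize and miserable to do by hand. A tiny sketch of brute-force coloring on a toy graph (the actual Appel-Haken proof machine-checked a large finite set of reducible configurations rather than coloring graphs directly):

```python
# Brute-force check that a small graph is 4-colorable by trying every
# assignment of 4 colors to its vertices. Toy graph: K4, the complete
# graph on 4 vertices. This is an illustration of exhaustive checking,
# not the structure of the real four color theorem proof.
from itertools import product

def four_colorable(n_vertices, edges, n_colors=4):
    for coloring in product(range(n_colors), repeat=n_vertices):
        if all(coloring[a] != coloring[b] for a, b in edges):
            return True  # found a proper coloring
    return False

k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(four_colorable(4, k4_edges))     # → True: K4 needs exactly 4 colors
print(four_colorable(4, k4_edges, 3))  # → False: 3 colors don't suffice
```

The search space here is 4^n assignments, which is precisely why no biological system grinds through it unaided.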

>> No.9687536

>>9687469
I didn't say it was bad because it was old. I said it was arrogant to assume that it's an omnipotent technology.

And lol if you're trying to compete with biological intelligences using electronics built by those biological intelligences