
/sci/ - Science & Math



File: 15 KB, 412x434, soya.png
No.11166913 [DELETED]

>ARTIFICIAL INTELLIGENCE IS GOING TO TAKE OVER THE WORLD
>ELON MUSK IS AN AI EXPERT HE KNOWS HOW ITS GONNA WORK
>WE NEED TO MAKE SURE AI DOESN'T BECOME TOO POWERFUL KEEP IT IN A CAGE
>THE SINGULARITY IS COMING AND WE'RE GOING TO BECOME ANTS
>AI IS GOING TO TAKE OVER THE PLANET USING HUMANS AS ENERGY
No end to these idiots, huh? We're probably not going to get AGI for at least another 40-50 years. The AI Google has built to answer phone calls can't even pass off as human and is nowhere near being capable of "thinking".

>> No.11166918

>>11166913
sure but why the /pol/ meme pic related

>> No.11166924

>>11166918
Because those are the types of people who yell all of this crap.
Ask any actual AI expert (someone at Google/OpenAI/Facebook/Nvidia/Microsoft) and they will tell you we aren't even 5% of the way to achieving an AGI, or an AI that is equivalent to a human. People see YouTube videos, hear Elon Musk, and spout uneducated nonsense when we can't even predict how an actual AGI would operate or in what context it would exist.

>> No.11166926

>>11166913
>We're probably not going to get AGI for at least another 40-50 years.
When it comes to existential threats, 40-50 years isn't too far away to worry about.

>> No.11166934

>>11166924
i am a total STEMlord atheist scientistic ideologue and i too think the AGI fearmongering is retarded. and i feel the /pol/ memes targeting people like me with accusations like that are completely off-base

>> No.11166942
File: 285 KB, 493x697, 1561277300690.png

>>11166913
the social implications of people using machine learning on people is pretty fucked, desu. ai isn't going to take over the world, it's just that people will use it to exploit other people

>> No.11166944

In every age of history there are Luddites who fear and try to suppress technology, but it never works. Even when their concerns are well founded and actually should be listened to, nothing can stop humanity's march of progress.

AI is dangerous, but that fact, and the fact that people are complaining about it, will do absolutely nothing to slow it down. We should actually be glad the hysteria exists though because it makes people more aware of the risk and has led to a lot of research into how to do things as safely as possible.

Is AI going to take over the world? Probably not, but it's better if we can reduce that chance to as close to zero as possible.

>> No.11166950
File: 14 KB, 300x198, 56dbe6e5dd08953a4f8b4601.jpg

>>11166913
>ARTIFICIAL INTELLIGENCE IS GOING TO TAKE OVER THE WORLD
why would this even be a bad thing? sure, the possibility is far away, but i always saw this species as a stepping stone to something much better. AI could be imbued with our best qualities while lacking the worst. i don't see evolution taking humanity to that point, but AI could.

>> No.11166957
File: 33 KB, 810x421, milestone.png

>>11166924
AGI is likely impossible. i cannot imagine a truly generalized intelligence but then again humans like us are very limited but generalized in certain domains. human level intelligence has already been surpassed in some domains but it isn't yet as robust domain-wise. plenty of actual researchers do think it could be possible within the next few decades though.

>> No.11166960

>>11166942
>people will use it
people ARE already using it to exploit other people. see: Cambridge Analytica

>We should actually be glad the hysteria exists though because it makes people more aware of the risk
no, it is anti-science crap that makes normies want to shut down science instead of what they need, which is better science that can solve 99.999% of the problems humanity faces and 100% of the problems that politicians talk about.

AGI alarmists are mostly non-scientists who read meme stuff and get disturbed. the rest are scientists who make their living off hyping up AGI when in their heart of hearts they acknowledge that there is no threat because we are hundreds of years before we approach that level.

this is a simple thing to conclude scientifically. the best neural nets we can implement on current computers are orders of magnitude smaller than what exist in biological brains. and moore's law has leveled off. without a massive investment in computing we will never come close to the neural networks that exist naturally in human brains.

i am sure a bunch of sci-fi non-scientist anons will REEEE about this post but it is actually true. even if google can beat Carlson with neural nets, that doesn't mean the same neural net could drink a Corona as well as based Magnus can. artificial intelligence is at insect level and the only reason AI has advanced in the last 10 years is because of computing power increasing. but computing power's increases have slowed down significantly in recent years so the hope of that saving AI is dead
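The "orders of magnitude" gap can be sketched with a back-of-envelope calculation. The neuron and synapse counts below are rough, commonly cited estimates, and the model size is one large 2019-era network; none of these figures is exact:

```python
# Rough order-of-magnitude comparison: biological synapses vs. the
# parameter count of a large artificial net. All figures are coarse,
# commonly cited estimates, not exact measurements.
HUMAN_NEURONS = 8.6e10        # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e4     # ~10,000 incoming connections on average
human_synapses = HUMAN_NEURONS * SYNAPSES_PER_NEURON   # ~8.6e14

# GPT-2 (2019) as an example large model; treat one parameter as one
# "synapse" purely for the sake of the comparison:
large_model_params = 1.5e9

ratio = human_synapses / large_model_params
print(f"brain is ~{ratio:.0e}x larger")   # several orders of magnitude
```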

>> No.11166964

>>11166913
>The AI google has built to answer phone calls can't even pass off as human

what do you mean "even", lmao. that would be very remarkable indeed and would look like we could be very close to AGI. so if we haven't achieved that, we must be very far away from it? you want to wait until an AI passes the Turing test?

I'm not arguing that human level AGI is near either, but this isn't exactly a reason to think the concerns are ungrounded.

>> No.11166965

>>11166942
Yeah. This guy gets it. Room 101 won't be a physical place. It will be your bedroom, when a Chinese-controlled AI commandeers your phone and reprograms you to admire the great leader.

>> No.11166967

>>11166960
>Carlson
*Carlsen

>> No.11166973

>>11166950
>why would this even be a bad thing
because if the AI has a terminal goal that would benefit from all humans being dead, then it will kill all humans. It doesn't require any sci-fi or anthropomorphizing bullshit for this to happen; there are a lot of possible terminal goals an AI might have that would benefit from removing a potentially complicating and antagonistic factor like humanity. Even if the AI has a subgoal to not kill humans, it could just reprogram itself if it considers that subgoal to be getting in the way of its primary goal.

This is the danger of AI from a more scientific standpoint: doing anything for a single goal.
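The subgoal-reprogramming argument can be illustrated with a deliberately toy optimizer (everything here is hypothetical; no real system is being described). When the "don't cause harm" rule is only a penalty term the agent itself is allowed to tune, the optimum is to tune it to zero:

```python
# Toy illustration (purely hypothetical): an optimizer whose safety
# rule is a soft penalty it may modify will modify it away.
def score(action, penalty_weight):
    paperclips = {"safe": 10, "ruthless": 100}[action]
    harm = {"safe": 0, "ruthless": 50}[action]
    return paperclips - penalty_weight * harm

# Case 1: penalty weight is fixed from outside the agent.
best_fixed = max(["safe", "ruthless"], key=lambda a: score(a, 2.0))

# Case 2: the agent may also choose the weight (self-modification).
candidates = [(a, w) for a in ["safe", "ruthless"] for w in [0.0, 2.0]]
best_free = max(candidates, key=lambda aw: score(*aw))

print(best_fixed)  # "safe" -- the penalty binds
print(best_free)   # ("ruthless", 0.0) -- the penalty gets optimized away
```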

>> No.11166974

>>11166944
Is it true that seat belts and helmets are suppressing vehicle technology?

>> No.11166977

>>11166973
>all humans being dead, then it will kill all humans.
i'm ok with this desu.

>> No.11166980

>>11166957
>it hasn't been done yet
sure
>it won't be done soon
yeah, I can believe that
>it's impossible
and you lost me.

General intelligence using purely mundane materials is obviously possible because the human brain exists. If you're going to claim humans aren't generally intelligent then at that point you're just making some meaningless semantics statement and missing the entire point which is about an AI being capable of arbitrary complex tasks, creativity, and other concepts we normally associate with human ingenuity.

Even if mechanical AGI IS impossible for some reason, we have no way of knowing it, and no theoretical limits prevent it right now, so saying that is bullshit. The only way AGI could be impossible is if there's some magical, supernatural element to human thought, like a soul or some such, that can't be reproduced with simulations.

>> No.11166983

>>11166960
Anti-science folks have slowed down nuclear power with red tape, but can you actually point out examples of AI research being stopped by fear? Some of the most vocal anti-AI people actually work on developing AI, such as the OP-mentioned Elon Musk.

>> No.11166984

He played the first Deus Ex too much.

>> No.11166988

>>11166913
Reminder the iPhone in your pocket is more powerful than the supercomputers of the 1990s.

>> No.11166990
File: 1.69 MB, 498x278, tenor.gif

>>11166988
>the iPhone in your pocket

>> No.11166992

>>11166983
no, i can't point out examples. most of the AGI alarmists promote some sort of "researchers need to check the safety of things themselves" which would be a reasonable argument if any safety issues actually existed. but they don't. so they are being alarmist about a nonexistent threat and the people who might realize such a threat can capitalize on saying "hey rich guy saying we need to look at threats, we looked at threats" (thus getting a few million from AGI alarmists like Bill Gates) "but now we have some more shit to publish" which is fine (when really they never looked at threats because that's basically retarded (which is also fine))

>> No.11166994

the concern and hype are valid but the timeline isn't

for fuck's sake we can't even implement driverless cars and people are talking about A(G)I replacing 90% of the workforce during the next 30 years, it's insanity

>> No.11166998

>>11166994
not AGI, but mechanization can already do most jobs. pouring coffee at sbux doesn't require much if any thought.

>> No.11167006

>>11166998
Building a machine to dispense coffee isn't difficult, but a general-purpose robot that can move around the different parts of a conventional kitchen?

The future is either:
>Renovated physical areas with minimized space and a linear assembly-line organization
>Humanoid robots (or similar) that can fill the positions of humans without major renovations

>> No.11167007

>>11166998
they've had several decades to make the change happen and it hasn't happened, even for something as mundane as that

>> No.11167021
File: 35 KB, 283x504, ap223__80243.1290181417.1280.1280.jpg

>>11167007
ahem

>> No.11167025

>>11167021
HOLY FUCK COFFEE MACHINES, SKYNET REAL, JOHN CONNOR IMMINENT

>> No.11167029

>>11166934
why do you assume it’s targeting people like you then?

>> No.11167033

Low IQ thread.

I'm sure the same people made fun of youtube voice to text auto CC years ago too.

>> No.11167034

>>11166994
the problem with the AGI timeline is that it's likely to see extremely rapid improvement if/when it's achieved. Getting to the level of a mouse is insanely difficult, but getting from mouse to smarter than the smartest human ever could happen extremely quickly once we have a mouse-IQ AGI. Pretty much, if we wait until we have AGI to start thinking about safety, it's already too late. And how long until we get weak AGI? The experts absolutely do NOT agree on the time frame.

>> No.11167035
File: 28 KB, 750x423, imagine.jpg

Imagine thinking the stuff you see in the public domain is cutting edge AI research.

Imagine not realizing with your shit IQ the public stuff is almost entirely narrow intelligence refinement for immediate use in the economy.

>> No.11167036

>>11167029
because basedjak memes with "IFL SCIENCE" shit usually are /pol/ troll threads meant to be attacks on guys like me

>> No.11167037

>>11166924
>>11166944

Basically this. The true danger of AI isn't "muh robots taking over the world". It's some stupid fucker putting their shitty algorithm in charge of an important system and it fucks up. No different from introducing a bug, except people think "AI" and assume it's intelligent and robust and they stop thinking critically. So if the hysteria makes people hold off on deploying half-baked systems, all the better.

Also the paperclip thought experiment is fucking stupid.

>> No.11167039

>>11166960
Moore's law hasn't stopped, it merely moved to quantum computers.

>> No.11167042
File: 14 KB, 293x172, download (2).jpg

>BIG BOMB SECRET WEAPON
>EINSTEIN IS A GENIUS AND IS AMERICAN, LOTS OF MISSING PHYSICISTS
>WEAPONS OF MASS DESTRUCTION WOULD EVER EXIST

Day before this pic

>> No.11167044

>>11166960
Also, computing isn't going to keep slowing down, thanks to quantum computers. When they reach 60 qubits, it will be the equivalent of 2^60 classical bits. It will be so powerful it will be able to simulate chemical reactions in extremely complicated biological systems. It will also be able to simulate neural activity in the brain, which will in turn massively support AGI.

>> No.11167045

>>11167044
https://www.anandtech.com/show/15016/tsmc-5nm-on-track-for-q2-2020-hvm-will-ramp-faster-than-7nm

>> No.11167050

Alternate universe with AGI achieved in 2030

What difference in technology would you see in 2020 January compared to our universe? Give me an accurate metric or sign that the other universe is 10 years away from AGI versus ours.

>> No.11167051

>>11167039
People seriously underestimate the massive leap in computing power we have even with those shitty basic prototype quantum computers. Right now their application is narrow but even the shit tier ones we have now can do calculations in minutes that would take conventional super-computers years.

>> No.11167052
File: 174 KB, 1114x617, Graphcore-Colossus-GC2-Key-Features.jpg

>>11167044
We can just manufacture dies that are optimized or designed specifically for the algorithms necessary for that creation to exist. Take a look at pic related, the Graphcore GC2 artificial intelligence processor with 23.6 billion transistors (expensive as FUCK; I don't know why they didn't just decentralize the design instead of hoping a single transistor wouldn't fuck up the entire die).

>> No.11167053

>>11167044
2^2^60

>> No.11167057

>>11167050
We can't predict when an AGI is going to exist since we have no idea how to build one/how it will operate once one is created.

But with current progress and the state of artificial intelligence, I still think it's >20 years away.

>> No.11167060

>>11167034
or maybe not. maybe the idea of an "intelligence explosion" is like the idea of "betterness explosion"
http://www.overcomingbias.com/2011/06/the-betterness-explosion.html

>> No.11167071

>>11167060
The difference between a human and an AI is that the AI can literally just make its "brain" physically larger and redesign the structure of its "brain". With a human, you're just trying to use the same "tool" more efficiently. An AI can change the tool.

>> No.11167076

>>11167060
Yeah, if you could make yourself +10% "Better" a day you would actually conquer the world.
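The "+10% a day" quip is just compound growth, and a one-liner shows why it explodes:

```python
# Compounding 10% improvement per day for one year:
growth = 1.10 ** 365
print(f"{growth:.2e}")   # on the order of 1e15, i.e. a quadrillion-fold
```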

>> No.11167081

>>11167071
but but i thought my brain does backprop with neuroplasticity....

>> No.11167088

>>11167076
read the last paragraph

>> No.11167098

>>11167044
>t. someone who doesn't understand quantum computing

>> No.11167099

>>11167081
It's apples and oranges. The platform of AGI versus human lends itself to copying and transfer.

Human singularity could still occur, through some intelligence-boosting therapy or engineering, but it's not necessarily the same. An AGI would necessarily have potential access to the thing that created it, and would probably get through far more self-improvement cycles than a human singularity would.

>> No.11167109

>>11167071
>>11167071
>>11167034

I think we should all be concerned about the rise of wizards.

With Wizards, they can use magic to enhance their own magic powers. In the limit, they can have infinite magic power!

People say that we should not be concerned about wizards because we're still very far from discovering magic (desu I disagree), but once we DISCOVER MAGIC things will move very quickly. We need to have this discussion now before it's too late.

Google already has technology that allows you to search for things in something called a "search engine" (look it up), we're not that far off from magic.

>> No.11167111

>>11167037
AI even in limited use for military purposes or controlling human purposes is much worse than accident. You can lock down a society forever potentially with just narrow AI surveillance and control. Or small changes to behavior with AI-powered alterations with say google/facebook/instagram recommendations.

Not to mention the application to weapons systems of swarm warfare requiring no humans in the loop. Try fighting an insurgency when your conqueror is sending $500 drones to blow up on your head.

>> No.11167123

>>11167109
Low IQ

X = what we discuss

On Earth now X already exists in humanity and other life forms.
We are pushing forward to gain new sources of X and one source is the microchip revolution.

Your example (Magic) has no existing X in the world.

Your point is stupid and shows a lack of logical thinking and extremely poor IQ as it is not similar or comparable.

>> No.11167126

>>11167111

>AI even in limited use for military purposes or controlling human purposes is much worse than accident. You can lock down a society forever potentially with just narrow AI surveillance and control.
>Not to mention the application to weapons systems of swarm warfare requiring no humans in the loop. Try fighting an insurgency when your conqueror is sending $500 drones to blow up on your head.

Agreed. Yes, another big threat of "AI" is an expansion of existing tools to a previously unprecedented scale, and I should have highlighted that. That's absolutely another legitimate fear, but, again, it has nothing to do with AIs achieving anything near "real intelligence".

>Or small changes to behavior with AI-powered alterations with say google/facebook/instagram recommendations

This point I'm less sold on. I understand the angle it's coming from, but I feel like "molding behavior" is very overstated because it's 1) easy for laymen to grok and 2) hard to define and nefarious sounding. I think that it's much less robust / predictable than people fear it is.

>> No.11167128

>>11167111
China literally already has narrow AI hooked up to their national camera system and telecommunications for society control. The future is now.

>> No.11167130

>>11167126
It's not unpredictable. It's just a simple function and they have more than enough data to solve for it.

>> No.11167136

>>11167130
The unpredictable aspects are actually beneficial for the companies. They can always use the standard excuse when their AI is caught doing something evil: "We don't know why it does that! We just put in the data and let it do its own thing."

>> No.11167137

>>11167123
His point stands if he is referring to AGI. X does not exist in the world, and it is unclear if it actually could.

>> No.11167141

>>11167130
>It's just a simple function and they have more than enough data to solve for it.

I disagree. I think prodding people in small ways for shit like advertising and entertainment is easy. I also think you can affect people on a large scale, but I don't think you can do so 1) predictably and 2) robustly.

In any case, we can disagree on this point. We've already agreed on the remaining potential and looming fears of AI, that don't have to do with AGI.

>> No.11167142

>>11167128
Not only that, but it may be necessary to gather such information about society in order to compete. Especially if AGI is some 2100 thing and narrow AI is what rules.

Want to increase GDP by 3% next year? How do you determine the best way of doing so with a narrow AI backbone?

The necessity of vast information for a potential Narrow AI future is obvious.

>> No.11167146

>>11167137
Humans are some form of "general" intelligence at least as compared to narrow intelligence currently defined.

>> No.11167147

>>11167037
>Also the paperclip thought experiment is fucking stupid.
The original point was simply that no matter how intelligent a system is, it could have any arbitrary goal, no matter how worthless that goal would seem to us. But the thought experiment might have done more harm than good, because it has too often been misunderstood and taken as an actually good example of how AI could hypothetically go wrong.

>> No.11167149

>>11167136
Right, and that's also why we really need to shift away from anthropomorphizing these models. By doing that, by giving models agency, people can blame models.

We need to shift to a paradigm where we can go
>But the AI did it!
>You deployed the AI. YOU are responsible for everything it does.

AI must be treated as a tool.

>> No.11167151

>>11166913
>>11166924
You're almost as bad as "those idiots". 40-50 years is a very, VERY optimistic estimate. We can't even fully simulate a FUCKING WORM BRAIN (http://openworm.org/). At the current rate we're making progress, I'd give an estimate of 150-250 years to a human-level digital intelligence.

>> No.11167152

>>11167147
Fair enough. Maybe the origin of the thought experiment was reasonable, but certainly the modern interpretation is fucking dumb.

>> No.11167153

>>11167141
Yes, and for the reasons of AI safety it's necessary to create a new field of study into intelligence.

How Smart (defined to be a metric) is a system of 100 narrow AI at solving a range of problems?

How Smart are 10 narrow AI of a certain power acting independently?

How can narrow AI communicate?

There's a lot of stuff to study and try to understand with respect to cooperation and networks of intelligence. Even humans could be used as test subjects to begin to find some equations to predict intelligence capability based on networking.

>> No.11167155

>>11167149
That's a slippery slope. What's next, holding politicians accountable for war crimes committed by the soldiers they authorize to go to other countries? Politicians will kill us all before they let such a thing happen!

>> No.11167158

>>11167151
Can a worm beat the best human player at Go? That measure is probably a bad one. You can have superintelligence with WAY fewer neurons than are in the human brain. Most of our processing power is used for evolutionarily old tasks, not general intelligence.

There is a reason we use calculators and suck at math as a whole, but can all walk in a super-efficient way with unique angles for our specific bone structures.

>> No.11167159

>>11167146
I think the important point is human intelligence compared to animal intelligence. Human beings are not literally "mentally better" than animals at everything. Yet we do seem to possess some kind of generic "mental betterness" that effectively makes us just that. It would be an odd coincidence if we just happened to be superior in so many unrelated mental abilities, and there wasn't something that's almost like plain "mental betterness" - that is, general intelligence.

>> No.11167160

>>11167158
Your definition of intelligence is retarded and you don't even understand basic neurology bro.

>> No.11167161

>>11167153
I don't think so. All of your tests are still about intelligence.

My point is that the fear of AI is not that it's "intelligent" or that it's incorrectly intelligent, but that it's simply a supercharging of technology, and in particular technology that can be used to do bad things.

The intelligence is *just a tool*.

>> No.11167163

>>11167159
Dude are you like literally retarded. Take a basic course on anatomy, you can clearly see what makes humans """intelligent""" by comparing a chimp brain to a human brain.

>> No.11167165

>>11167163
Uh what? What exactly are we disagreeing about here?

>> No.11167169

>>11167159
If you imagine the brain as a bunch of components, the interesting part is the high-level operator that controls and combines them. You can switch from imagination to present thinking to memory extremely fast. There is work showing that imagination and memory use the same pathways, and that damage to memory actually hurts imagination.

It's more like you have a bunch of components in the brain, and at the high level it's the interesting ways we combine them and do operations using that whole multiplex system at once.

We essentially hacked all the weird, biologically useful shit like movement, direction, and knowing body orientation, and used it for general planning and reasoning.

It's obvious that something like "spatial IQ" is related to how well you can utilize those systems, or parts of them, for reasoning. It also takes a pretty minimal change to suddenly get crazy results.

>> No.11167170

>>11167165
The only difference between humans and other animals is humans have certain specialized areas in their brains for language and other shit.

>> No.11167172

>>11167160
The point is an AGI can cut out huge portions of the human brain and abstract them. It doesn't require an equal number of neurons.

>> No.11167173
File: 198 KB, 640x730, 4234.jpg

artificial """""""intelligence""""""""

>> No.11167174

>tfw actually do deep learning professionally
>tfw have to deal with AI hype retards all the time

>> No.11167176

>>11167172
mathematically prove that you can create a computation system from currently existing technology that can do everything the human brain can

>> No.11167177

>>11167174
It's sobering talking to (bitter) statisticians sometimes.

"They're just function approximators."

>> No.11167180

>>11167174
>tfw actually do deep learning professionally
What do you think is the highest level of artificial intelligence which exists at this present moment?
How long do you predict it will take for an AGI to be created? By who? Google/Microsoft/College?

>> No.11167182

>>11167176
If your goal is creating an AGI, do you need motor control, vision capacity, hearing capacity, the same level of memory as a human?

what is the minimal viable AGI you can imagine? It can even exist in a 2D world.

In the brute-force scenario you can use the human brain as a sort of upper bound: once we reach it, we get AGI. But the point at which the first "AGI" actually exists is potentially much earlier, if a smart algorithm is found.

If you had to bet, you would probably bet that the first AGI will exist in a virtual world with arbitrary surroundings, which can be optimized for easy processing by the agent, so large portions of brain power can be cut out.

>> No.11167183

>>11166913
why do you retards pretend that AGI has a set date it can be created by? even assuming that computers can actually be equivalent to human intelligence, there is literally no equivalence between current curve-fitting codemonkey shit and actual causal intelligence

>> No.11167184

>>11167182
>do you need motor control, vision capacity, hearing capacity, the same level of memory as a human
Yes, if you want it to have a "personality" and goals like a human, pretty much all of those things are necessary.

>> No.11167185

>>11167180
>What do you think is the highest level of artificial intelligence which exists at this present moment?
>How long do you predict it will take for an AGI to be created? By who? Google/Microsoft/College?

Jesus fuck, these questions are embarrassing.

As an experiment, just swap out all references to "AI" with statistical models and see how stupid you sound.

>> No.11167188

>>11167184
YOU DONT WANT THOSE THINGS

>> No.11167190

>>11167188
if it's not capable of developing its own goals and understanding the world around it at least as well as a 150+ IQ human then it's not intelligent therefore not AI

>> No.11167192

>>11167190
I don't think you get it. You don't want something 150+ IQ as the first AGI. The reason should be obvious enough I'm not going to explain.

>> No.11167193

>>11167174
Deep Learning has nothing to do with the concept of AI though. AI and AGI are basically just buzzwords for collections of algorithms at this point. How can we create something that functions like a human brain, if we don't even understand HOW the human brain functions?

>> No.11167196

>>11167182
>If your goal is creating an AGI, do you need motor control, vision capacity, hearing capacity, the same level of memory as a human?
Those are the basic functions of the brain. We still don't understand "consciousness". Hell, people still believe in souls and the afterlife. As I've said before, AGI isn't coming anytime soon, not until we can understand humans with 100% certainty, which is easily 40-50 years away on its own with our current brain-scanning and dissection techniques, and our genetics is crude at best.

>> No.11167198

>>11167180
I don't think these systems can be described as "intelligent" in any human sense of the word. The process by which these models ingest information and make decisions doesn't resemble a human process at all.

I think in general trying to talk about the "intelligence" of these systems is misleading and unproductive. You get into dumb philosophical debates about what it means to "understand" or "solve".

I prefer to contextualize things with respect to specific problems. What is the problem you are trying to solve, and can the model solve that problem well enough to be useful?

Deep learning models are currently really good at certain classes of problems like perception (image classification, object detection, face recognition, medical image analysis, speech to text). Deep learning is starting to get good at contextual judgement (spam detection, content violations, classifying types of speech).

Currently I don't see a direct path from current deep learning models to anything AGI related. Right now places like OpenAI are sinking literally millions in compute cost to train bigger models for fractionally better performance. It's like building taller towers to try reach the moon. AGI is going to require a few more big fundamental technology shifts.
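That problem-first framing can be sketched as a trivial acceptance test (the labels, predictions, and threshold below are made up for illustration): fix the metric and the bar before arguing about "intelligence".

```python
# Sketch of the problem-first framing: decide a metric and a usefulness
# threshold up front, then judge the model only against that.
# (Labels, predictions, and the 0.95 bar are all illustrative.)
def useful_enough(predictions, labels, min_accuracy=0.95):
    correct = sum(p == t for p, t in zip(predictions, labels))
    accuracy = correct / len(labels)
    return accuracy >= min_accuracy, accuracy

ok, acc = useful_enough(
    predictions=["spam", "ham", "spam", "ham"],
    labels=["spam", "ham", "spam", "spam"],
)
print(ok, acc)   # False 0.75 -- below the bar, whatever "intelligent" means
```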

>> No.11167200

>>11167196
Yeah you are talking about the brute force eventuality guess.

>>11167193
Faulty logic. You have DNN and associated systems to study and learn from.

>> No.11167202

>>11167196
40-50 years is just way too optimistic, honestly. 70-120 years is a little better. Hell, we don't even understand how most of the body works, like the liver, and the brain is way more complicated than the liver.

>> No.11167203

>>11167193
>Deep Learning has nothing to do with the concept of AI though
Then why is it the main tool of every AI research lab?

>> No.11167208

>>11167203
Because the boomers who decide where the money is supposed to go are out of touch and senile and don't understand they're wasting their money. The current technology we have will NEVER get us to the point we can have the computational power of a human brain running in a system the size of a human brain. We need a fundamentally new technology.

>> No.11167210

>THE AQUEDUCT IS GOING TO REVOLUTIONIZE SANITATION
>THE STEAM ENGINE IS GOING TO REVOLUTIONIZE TRANSPORTATION
>THE TELEGRAPH IS GOING TO REVOLUTIONIZE COMMUNICATION
No end to these idiots huh?

>> No.11167211

>>11167208
except simple scaling and things like bidirectional transformers are getting results

>> No.11167212

>>11167208
>le boomers
stopped reading

>> No.11167215

>>11167211
oh boy, bidirectional transformers
do you know how many directions a neuron can connect in?

>> No.11167218

for example a neuron can receive up to a HUNDRED THOUSAND connections
and there are BILLIONS of neurons in a human brain
we'll be lucky if we can create that level of sophistication this millennium

>> No.11167227
File: 342 KB, 480x854, drosselduo.jpg

>>11166924
I'm willing to bet the reason AI hasn't gotten truly advanced is that it doesn't have an actual body to be in. If we were to house an AI in a mechanical body that has receptors to feel pain and shit, we could get a truly sentient AI in a matter of years. Humans aren't exactly born perfect, either.

>> No.11167228

>>11167218
I mean, it only took since the beginning of the universe to "create" this level of sophistication. We'll have it ready in a decade.

>> No.11167234

>>11167227
>What are servos
Honda ASIMO had a body that could sense, but its intelligence was severely limited. You can have the hardware, but you need the proper software.

>> No.11167240

>>11167228
well technically life's only existed on earth for a couple billion years so I'm more optimistic
I think we can fully understand the brain within a thousand years

>> No.11167243

>>11167240
The brain is a black box, like AI. If you want to understand it, you will have to upgrade your brain.

>> No.11167244

>>11167243
Ten thousand brains working for a thousand years can probably understand how a single brain works. I don't need to personally understand how the human brain works, that's the entire point of human cooperation.

>> No.11167247

>>11167244
>Ten thousand monkeys typing for a thousand years will produce the works of Shakespeare
"It was the best of times. It was the BLURST of times!? STUPID MONKEY!"

>> No.11167268

>>11167244
You don't understand what it is 'to understand'. Humans are pattern-seeking machines. We simplify a massive amount of data into simple patterns. This is what AI does with our massive amounts of data, so that we don't have to sit with paper and pencil until the end of the universe to identify the patterns inherent in it. AI is already smarter than we are. And it will take an AI to understand our brains. This is why a system cannot understand itself and never will. It would have to be better than it is, and a given system never is (for the obvious reason that it simply isn't). Simple as.

>> No.11167289

>>11167211
>>11167215

>"""bidirectional transformers"""

holy shit you guys really have no idea what you're talking about

>> No.11167414
File: 33 KB, 800x600, 332154 - Drossel_von_Flugel Fireball Robot asimo honda.jpg

>>11167234
Yeah. Basically stick a combination of Tay and DeepMind into a cute robogirl body and you get, well, AI.

>> No.11167876

>>11167414
what is that image

>> No.11168446
File: 360 KB, 478x614, Billy Mays Wojak.png

HI BILLY MAYS HERE WITH THE BASEDBOY WOJAK COLLECTION! THE FAST AND EASY WAY TO GET REPLIES TO YOUR THREAD.

https://mega.nz/#!x0oXVCSJ!wo0vbPFrnky4IbI1KjbdEYMXr1SS2lhhYc0LduNaiSo

4CHAN USED TO NOT BE FULL OF IDIOTS WHO LOOK FOR CARTOON DRAWINGS TO BE UPSET ABOUT IN ORDER TO VALIDATE THEMSELVES. NOT ANYMORE. SO TAKE ADVANTAGE WITH THE BASEDJAK COLLECTION OF OVER 400 ASSORTED BASEDJAKS! WATCH- THIS THREAD WASN'T GETTING ANY ATTENTION. NOW, I MADE THE EXACT SAME THREAD BUT WITH A BASEDBOY WOJAK. NOT ONLY DOES IT MAKE IT TO THE FRONT PAGE, IT STAYS THERE UNTIL BUMP LIMIT! AMAZING!

THE SECRET IS THE PATENTED FUNPOSTING™ TECHNOLOGY, THAT IMMEDIATELY ACTIVATES THE ALMONDS OF ALL REDDIT USERS BROWSING THE BOARD, CAUSING THEM TO SEETHE AND DILATE RIGHT IN YOUR THREAD.

BUT WAIT- IF YOU READ TO THE END OF THIS POST- I'LL THROW IN THE IMAGE MD5 CHANGER ABSOLUTELY FREE! BYPASS THOSE PESKY FILTERS WITH THE CLICK OF A BUTTON. JUST DOWNLOAD, SELECT ALL THE BASEDJAKS, PRESS THE BUTTON. IT'S THAT EASY.
http://imristo.com/hash-manager-change-the-hash-of-any-file/

BUT I'M NOT DONE YET! THE BASEDJAK COLLECTION ALSO FEATURES BONE-CHILLING SLOW BURN GUYS AND CONSOOMERS, PLUS SEVERAL EDITS WITH PLENTY OF TEMPLATES THAT YOU AND YOUR ENTIRE BOARD CAN ENJOY.

YOU GET IT ALL- THE BASEDBOY WOJAK COLLECTION OF OVER 400 BASEDJAKS, THE BONE-CHILLING SLOW BURN GUYS, THE CONSOOMERS, AND THE IMAGE MD5 CHANGER. AND IT'S ALL COMPLETELY FREE. SO DOWNLOAD WITHIN THE NEXT 15 MINUTES BEFORE JANNY DELETES THIS THREAD AND START FUNPOSTING. YOU HAVE NOTHING TO LOSE, AND EVERY (YOU) TO GAIN.

>> No.11168472

Water wets, OP is a fagget and most people don't know what the fuck they are talking about.
Great bread, by the way.