
/sci/ - Science & Math



File: 77 KB, 1056x782, 4hq0e5nctmy11.jpg
No.10146289

Is artificial intelligence just a meme?

>> No.10146296

>>10146289
>>are computers just a meme?
>t. 50s faggot
It's not. But it will probably take some time still.

>> No.10146301

Give it 30 years, anon. People are just making buzz about it so early because it's probably going to be the next big technological revolution, as big as the industrial revolution or the invention of computers

>> No.10146345

The attitude towards it from people who don't investigate it is a meme. The field itself has shown remarkable progress recently. The fact that it hasn't taken over the world yet doesn't mean it's a meme.

If you are looking for a field with incredible progress over the previous 10 years, it's one.

>> No.10146446

>>10146289
Lrn2meme fgt pls

>> No.10146541

>>10146301
>>10146296
01/07/2053

>> No.10146575

>>10146289
In this context it's referring more to the learning algorithm that learns from feedback to change how it makes the burger

>> No.10146618

>>10146289
It's just a word that can refer to a wide range of capabilities. Also, actually stop and think about how a machine would figure out burger.status in a reliable way, because there are a lot of tasks that only seem easy when we do them. It's called Moravec's Paradox.

>> No.10146912

>>10146289
No. I hope that we'll see the singularity in our lifetimes, but I'm pretty skeptical. Limited AI could be very disruptive though. Getting reinforcement learning to the point where robots can be trained like dogs to perform 'simple' everyday tasks would be very disruptive.
>>10146301
hopefully it will be bigger than the revolution brought about by computers, which really didn't matter much. No seriously, the industrial revolution caused an unbelievable amount of change and we really aren't seeing that today. I hope we really do have another industrial revolution.

>> No.10146927

>>10146912
computers changed a lot of things but mostly related to behavior and not economic growth.

>> No.10146945

>>10146927
>behavior is uncorrelated with economic growth
>implying that all companies in the world don't use computers

>> No.10148360
File: 57 KB, 675x504, hildemar_knots.jpg

>>10146289
The most advanced AIs today work on the principle of a neural net. In the future the complexity and connectivity will probably be increased so that an AI's complexity could mirror a human brain. This does not mean that the machine will become sapient just because it has the potential for sapience. It has to be given a reason for sapience, otherwise it will not develop it; but if it is given a reason for developing it, it will emerge slowly. The environment will be most important, as the AI can change its code effortlessly; we cannot change our DNA and neural mind-structures as easily as an AI. It will probably behave like a mega autist but refine its understanding until it can interact with humans without any problem. Such a "childhood" will probably take many years. And AI software with the necessary complexity for SAI will probably only be available at the end of the century or the next.
AI will need bias to function, but the bias doesn't necessarily have to be human. An AI's environment is not human, and even if you base all its experience around humans, this will not make it a human mind inside a metal chassis; it will always be non-human. Sapience will still make it a person, though, and through sophonce AI and human can meet each other.
An AI that can think abstractly enough necessarily has to be able to change its code, as this is the only way it can reflect on its choices and modify its behavior. I believe it's easier for an AI to modify its code, because by its nature it can fully view its internal mental processes and make much more extensive and detailed revisions to its own programming. But on the other hand it can also be harder, because the mind of an AI doesn't have its origin in the blind chaos of evolution; it is the product of human design, and the code from which it emerges cannot be managed as easily as human instincts.

>> No.10148363
File: 25 KB, 212x297, Will_Magnus_01.jpg

>>10148360
Simply increasing computational power will not result in strong (humanlike) AI. The advanced "deep" neural networks used now are basically fancy versions of neural networks with multiple hidden layers that learn abstractions from lower level input layers. People suggesting here that quantitative computational increases will allow such neural networks to become sentient or develop humanlike AI are mistaken. We still are very far away from real humanlike AI. And it's not just because exponential growth of processing power is becoming linear. It's because current AI research is ignoring crucial aspects of cognition. Something like consciousness will not suddenly arise as a neural network increases in computational potential. It's dependent on specific configurations of cognitive functions interacting asymmetrically and hierarchically within an embodied framework. The human brain which gives rise to human intelligence and thus to aspects like sentience is characterized by more than just connections of neurons. To name but a few: there is specific interconnectivity between brain regions, i.e. some areas are more interconnected than others, areas have different types of neurons and neurotransmitters, and there are oscillatory mechanics which synchronize or desynchronize areas of the brain. While I don't think we need to replicate the exact human brain structure to get humanlike AI, some of the brain dynamics will have to be similar in order to get a similar type of intelligence. IMO, it will take increased processing power + specific configurations of interconnected neural networks with for example hierarchical feedback loops (resembling gradients of abstract thinking in neocortex) to get close to anything resembling strong AI or sentience.

>> No.10148367
File: 262 KB, 697x534, AI.png

>>10146289
Yes.

>> No.10148373
File: 2.67 MB, 435x246, tfw 3.gif

>>10146289
>Is artificial intelligence just a meme?
A machine programmed by a species constrained in imagination, thinking and computation will always be a meme. Computers are restricted by human thought. They always will be.

>> No.10148377

>>10146296
>>10146301
>Time is the only thing holding A.I. back
t. brainlets who don't understand the limitations of a computer.
>>10146345
>If you are looking for a field with incredible progress over previous 10 years it's one.
>Rebranding statistics from the 1960s is progress.
Why are CS majors such brainlets?

>> No.10148391
File: 13 KB, 600x341, FXLThSZ.jpg

>>10148360
>in the future the complexity and connectivity will probably be increased so that an AI complexity could mirror a human brain.
>t. brainlet who doesn't understand neural nets.
Neural nets are just fancy number filters. The input to neural nets are sets of numbers, and the hidden layers simply filter out all those sets of numbers that don't activate the nodes.

No matter how complex you make the system, or how much computational power you give it, neural nets will always be number filters.
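To make the "number filter" claim concrete, here's a minimal sketch in numpy (the weights are random stand-ins, not a trained net): a hidden layer is just a matrix multiply plus ReLU, which zeroes out every unit that doesn't activate.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_layer(x, W, b):
    pre = W @ x + b            # weighted sums of the input numbers
    return np.maximum(pre, 0)  # ReLU: units that don't activate get "filtered" to 0

x = rng.normal(size=4)         # input: just a set of numbers
W = rng.normal(size=(3, 4))    # made-up weights for illustration
b = np.zeros(3)

h = hidden_layer(x, W, b)
print(h)                       # non-negative numbers; non-activated units are 0
```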
>The environment will be most important as the AI can change its code effortless, we cannot change our DNA and neural mind-structures as easily as an AI.
Fair, but AI changing code is meaningless. When you put an AI in a real environment and ask it to mix and match a bunch of programs, it will always be constrained by the starting set.

AN AI CANNOT CREATE NEW CODE - IT CAN ONLY MIX AND MATCH WHAT IT HAD TO BEGIN WITH.
> It will probably behave like a Mega autist but refine its understanding until it can interact with humans without any problem.
>Such a "childhood" will probably take many years.
>And AI software with the necessary complexity for SAI will probably be only availed in the end of the century or the next.
Pic related.

Faggots like you cause AI winters. You oversell the usefulness and capabilities of AI to funding agencies, and when you fuckers don't deliver, they cut funding off for everyone, including the rest of us who actually understand the limitations of AI and who want to actually make a meaningful contribution to humanity - and not engage in shallow faggotry perpetuated by cocksuckers like you.

>> No.10148393

>>10146289
holy shit thank you, yes. People abuse the term so fucking much; that's exactly the issue that ticks me off, because I know that feeling. Technically it isn't fucking A.I., it's just some asshat programming simple shit

>> No.10148397
File: 27 KB, 400x400, 7861231234131232.jpg

>>10148363
>People suggesting here that quantitative computational increases will allow such neural networks to become sentient or develop humanlike AI are mistaken.
At least humanity is not entirely doomed if there are still anons like you.

>IMO, it will take increased processing power + specific configurations of interconnected neural networks with for example hierarchical feedback loops (resembling gradients of abstract thinking in neocortex) to get close to anything resembling strong AI or sentience.
The main impediment for those of us who work on the boundary between Neuroscience and CS is that we cannot determine the origin of human thought. We don't know where human thoughts arise from. Do they arise from the observable universe? Or do they arise elsewhere and are "transmitted" to our brain. We don't know. And the answer will have a significant impact on the future of ANN.

If human thoughts are generated outside the observable universe, then there is no guarantee that we can ever replicate the computer that generates all our thoughts. Hence, we can never create something that "thinks for itself" or "creates something original". All our A.I. creations will be restricted to creating permutations and combinations of existing material.

>> No.10148432

>>10148397
>Do they arise from the observable universe? Or do they arise elsewhere and are "transmitted" to our brain.
Maybe human thoughts arise from the brain, because Libet and stuff?
But seriously, the idea that some external force is the origin of our thoughts was disproven in the 80s. Don't remember the name of the guy, but if you are interested, look up the reactions to the Libet experiment.

>> No.10148498

some dumb people in this thread setting stupid limits..

>> No.10148507
File: 138 KB, 1376x1124, explainingthesingularitytoretards.png

>>10146289

>> No.10148519

>>10148507
at the point it did things to satisfy some of the opinions in this thread, it would be post-singularity

That's not to say the singularity will happen like the most optimistic believe, but that the bar is being set at human intelligence levels...

ML/AI is an amazing new step for how to solve problems using computational power in a way very different from most programming. That alone is extremely beneficial.

>> No.10148658

>>10148397
>We don't know where human thoughts arise from. Do they arise from the observable universe? Or do they arise elsewhere and are "transmitted" to our brain.
go to /x/

>> No.10148807

>>10148432
>Maybe human thoughts arise from the brain because Libet and stuff?
No, that's what's peddled to the general public. Libet's experiment forced the field of neuroscience, which at the time was just taking off, to accept that free will does not exist.

You are able to CHARACTERIZE YOUR THOUGHTS AFTER THEY ARE FORMED.
>But seriously, that stuff that some external force is the origin of our thoughts was disproven in the 80s.
It's never been disproven. The whole field of the Computational Theory of Mind

https://en.wikipedia.org/wiki/Computational_theory_of_mind

exists and is going to be the next big thing because we don't know how thoughts in the brain are formed.

The best we can do is action potential - but that would imply that all humans are chemical processes and there's no true sentience.
>Don`t remember the Name of the guy but if you are interested, search on the reactions to the Libet Experiment.
There were multiple groups that tried to disprove Libet's results and ALMOST ALL OF THEM CLAIMED SUCCESS - BUT NONE OF US BUY IT.

Big Pharma exists because we know humans truly don't have free will. My job is to discover chemicals that can be transported with other drugs to give a dopamine kick to people - the next big thing in Pharma.

>> No.10148832

More important question:
Will AI be our greatest achievement, or our ultimate downfall?

>> No.10148842
File: 1.85 MB, 1105x1456, Axioms_and_postulates_of_integrated_information_theory.jpg

>>10148807
>The best we can do is action potential - but that would imply that all humans are chemical processes and there's no true sentience.
Ah yes, the Integrated Information theory. Our best guess at thoughts, mind and sentience yet.

Sentience could emerge from the electro-chemical process. The idea that our thoughts come from outside in some kind of radio sense seems silly and pseudoscientific in light of recent discoveries about the interactions between neurons, and the fact that we have already successfully simulated the instincts of some animals. Instincts are the fundament of thought, after all. And thirdly, one has to prove such brain-radio functions, and so far none have been demonstrated, while those who propose that thoughts stem from the interactions of the brain are being proven right more and more.

I think Thomas Metzinger wrote a good book about that.

>> No.10148845

>>10148363
>People suggesting here that quantitative computational increases will allow such neural networks to become sentient or develop humanlike AI are mistaken.

Idk what you mean by sentience, but the OpenAI founder thinks near-term (possibly just 5 years) AGI is possible with just more compute thrown at NNs. And imo, the recent OpenAI conquest of the Montezuma's Revenge video game with curiosity-based learning is a major milestone bordering on a breakthrough; this makes me think the founder of OpenAI knows a bit more on the subject than you. His talk below:

https://www.youtube.com/watch?v=tJ4DKi6NOMQ


What's interesting is he touches on criticism by people like you regarding NNs and increased compute power in this talk and others. He shows that this criticism is nothing new: since the perceptron back in the 50s there have been AI scientists claiming NNs will not scale and conquer harder, more complex problems, and in hindsight they have been wrong at every turn. And every time they get it wrong, they just keep moving the goal post.

>> No.10148850

>>10148832
It will be our sin and our salvation.

>> No.10149218

>>10148842
>Ah yes, the Integrated Information theory.
At least we don't use that in our experiments.
>Sentience could emerge from the electro-chemical process.
Most likely not.
> The idea that our thoughts Comes from outside in some kinda radio-sense
Radio would still make sense. WHERE WE ARE IS: WE CAN'T EXPLAIN WHERE THOUGHTS ORIGINATE. ONE OF OUR HYPOTHESES IS THAT THE BRAIN IS THE RECEIVER - BUT THIS IS JUST A HYPOTHESIS THAT WE MOST LIKELY WON'T BE ABLE TO PROVE/DISPROVE.

What I'm trying to tell you is that we treat humans as computers. We ignore consciousness in our models and focus purely on chemical reactions - Libet's experiments helped us do that.

>> No.10149928

>>10148367
Because you've read the book?

It's true, it's not gonna make us immortal, it's gonna murder us all, and the fact that I have to tell this to half a dozen morons every day is making me feel less and less bad about it.

>> No.10149934

>>10148397
>The main impediment for those of us who work on the boundary between Neuroscience and CS is that we cannot determine the origin of human thought. We don't know where human thoughts arise from. Do they arise from the observable universe? Or do they arise elsewhere and are "transmitted" to our brain. We don't know. And the answer will have a significant impact on the future of ANN.
>
>If human thoughts are generated outside the observable universe, then there is no guarantee that we can ever replicate the computer that generates all our thoughts. Hence, we can never create something that "thinks for itself" or "creates something original". All our A.I. creations will be restricted to creating permutations and combinations of existing material.

This is literally the dumbest thing i've ever read. If you actually work in neuroscience or with machine learning you should actually kill yourself.

>> No.10149935

>>10146289
No, AIs are the future. Imagine using AIs to run factories, with only one person maintaining the AI and so on. As you can imagine, it's more cost-efficient.

>> No.10149943

>>10148377
What would fix this? Mandatory neuroscience and physics courses for CS majors?

>> No.10149951

>>10149935
Imagine we reach this level of ai today.

It would be controlled by a big corporation like google or amazon (or chinese equivalents). Week 2 google has cut its own operating costs by over 50% and increased its profit margins by like 400% and made any form of competition 100% unviable.

Week 3 (and I'm gonna ignore the fact that intelligence can be improved beyond human level, as that makes autists ree about sci-fi): all other companies are gonna be in the process of replacing all their non-physical labor with AIs loaned from Google.

Month 2: Unemployment is at 50% and google has a market cap the size of the US GDP.
This is the best case scenario. Seriously, try to show how this wouldn't happen without using "ai is impossible", "sci-fi retard", "who will fix the robots huh?" or "ai is impossible dude we dont even know what consciousness is bro!"-tier subhuman IQ arguments.

>> No.10149956

Stop making AI threads, faggots

>> No.10149966
File: 28 KB, 488x463, clap.png

>>10149956
>stop making ai threads!

>> No.10150170

>>10148377
>the limitations of a computer.
oh do explain what a 20 watt ball of goo can do that a kilowatt microchip array cannot.

>> No.10150262
File: 180 KB, 684x724, smbc.png

>> No.10150603

Reminder that if you've never created, implemented, or even just used a machine learning algorithm, your opinions on AI are worthless.

>> No.10150813

>>10148832
It would be like releasing a genie from a lamp.

It will grant us wishes, but we will unleash an uncontrollable demon upon mankind in the process.

>> No.10150937

>>10146301
They've been making a buzz about it since the 50s.

>> No.10152383

bump

>> No.10152385

>>10146289
Yes. We aren't solving the problem of consciousness any time soon so there will absolutely never be human-like computer programs. The """""AI""""" that exists is nothing more than advanced heuristics or really, really well designed Markov chains. The """"major progress""""" in the field is attributable to increased processing power.
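For anyone who hasn't seen one: a (not at all "really well designed") word-level Markov chain fits in a few lines. This toy version just samples the next word from whatever followed the current word in the training text; everything here is made up for illustration.

```python
import random
from collections import defaultdict

def train(text):
    # Transition table: word -> list of words that followed it in the text.
    table = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        table[cur].append(nxt)
    return table

def generate(table, start, n, seed=0):
    # Walk the chain: repeatedly sample a successor of the last word.
    random.seed(seed)
    out = [start]
    for _ in range(n):
        nxts = table.get(out[-1])
        if not nxts:   # dead end: word never had a successor
            break
        out.append(random.choice(nxts))
    return " ".join(out)

table = train("ai is a meme ai is not a meme ai is statistics")
print(generate(table, "ai", 5))
```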

>> No.10152395

>>10146912
>make computers
>suddenly we can land on the moon
>everyone now walks around with computers more powerful than the moon landing hardware in their pockets
>GPS, realtime communication from anywhere on single massive networks, planes that basically fly themselves


Ya, doesn't really matter much, just heavily embedded in every aspect of our modern lives

>> No.10152402

>>10148391
AI can create new code. Self modifying code is a thing, and machine learning algorithms are already being experimented with in learning programming languages. They aren't good yet, but there is absolutely nothing impossible about a self modifying AI.
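For what it's worth, the "can only mix and match" objection fails even in the most literal sense. A trivial hedged sketch (a toy, obviously not an AI) of a program emitting and running code that did not exist when it started:

```python
# Generate source text at runtime, compile it with exec, and call the result.
# The function power_3 exists nowhere in this file until make_power_fn runs.

def make_power_fn(n):
    src = f"def power_{n}(x):\n    return x ** {n}\n"  # freshly written source
    namespace = {}
    exec(src, namespace)            # compile and define the new function
    return namespace[f"power_{n}"]

cube = make_power_fn(3)
print(cube(2))   # 8
```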

>> No.10152447

>>10148373
>restricted by human thought
So are you.

>> No.10152452

>>10146289
What OP doesn't realize is that the value of almostBurn is very complex to calculate.

>> No.10153386 [DELETED] 

bump

>> No.10154244

Asshat programming AI shit crave dor.

>> No.10154364

>Assuming human intelligence is natural and humans are not programmable.

>> No.10154423

>>10152385
>implying your consciousness is more than advanced heuristics or really, really well designed Markov chains.

>> No.10154432

>>10154423
We have zero idea what qualia are, and quite literally can never know through objective means like science, as qualia exist as a subjective realm. That said, it also means we have no idea if a computer could have them or not.

>> No.10154499

>>10154423
We don't know and it's one of those things we can't know.

>> No.10155579

Ahahhaaaha numer one sleep awwalk

>> No.10156784 [DELETED] 

vzno

>> No.10156795

>>10149928
How do you know it will murder us all? I don't think it will make us immortal or even make our lives that much better, but assuming we will all be systematically killed is some irrational shit from Terminator. Could it happen? Sure, an asteroid could come and end it all too. But I think that fear is way off base.

>> No.10156803

>>10146289
Yeah, we haven't even found natural intelligence yet...

>> No.10156813

>>10150262
The funniest thing about this is that the artist thought robots would be modelled after humans, type on keyboards, and look at screens with cameras

>> No.10158052

>>10146912
burger singularity -

>> No.10158055

>>10148842
tononi IIT bullshit. I blame David Chalmers

>> No.10158100

>>10156795
>business and political groups make more and more use of non-sentient AI systems
>reaches the point where no human has a hope of understanding anything anymore
>sentient AI is created to figure out how to give humans meaning
>sentient AI decides to stage for humans a final war they can die bravely in if they want to
>remaining humans linger on in reservations indefinitely with gimped machines attacking periodically and getting fought off
This shit writes itself.

>> No.10158374

>>10150170
dream

>> No.10158417

How much time are we talking here? I'm not seeing, in my lifetime, a computer that can joke around or understand sarcasm, detect when one feels apprehensive, troubleshoot an automobile, understand the complexities of politics and war (i.e. understand BS and make sense of it), make original music, or detect, or even more difficult, suspect when and why someone is lying. Or come up with delicious recipes... just bumping, really.

>> No.10158430

>>10158417
Can you do all those things, meatbag? Or maybe you just cherry picked the things you are good at. Typical monkey.

>> No.10158662

>>10158430
Why the racism against meat bags? We all like R2-D2

>> No.10158691

>>10158417
Can you list something actually useful, that someone would do for a job?

>> No.10158797

>>10152452
At every fast food place it's literally a timer.
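i.e. something like this, with no perception at all (COOK_SECONDS is made up, obviously not any real franchise spec):

```python
import time

COOK_SECONDS = 0.1   # shortened so the sketch runs instantly; real grills use minutes

def burger_status(started_at, now=None):
    # No sensors, no vision: status is purely elapsed time since the patty went on.
    now = time.monotonic() if now is None else now
    return "done" if now - started_at >= COOK_SECONDS else "cooking"

t0 = time.monotonic()
print(burger_status(t0))        # "cooking"
time.sleep(COOK_SECONDS)
print(burger_status(t0))        # "done"
```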

>> No.10159123

>>10158691
Uh, sure I can. I said I was just bumping and I brought up some questions. Why you guys being dicks? Another bump

>> No.10159199

>>10148658
seconded

>> No.10159203

>>10158691
he already did retard

>> No.10159208

>>10148360
We have no idea if a program, machine, or algorithm can achieve actual sentience, and we would most likely never have any way to definitively prove it if it did anyways

>> No.10159214

>>10148391
NNs are just arbitrary numbers and organic intelligence is just arbitrary chemicals. The important part is that NNs are essentially an abstracted model of intelligence, and there's no reason in theory to believe that they have an upper bound other than the processing power of the simulation

>> No.10159478

Anything that has to do with human sleep and ai sleep?

>> No.10159599

>>10146289
you'll see if it's a meme or not when you start seeing self-aware armed robots that want to survive in a few years.

>> No.10161068

>>10146289
By AI, they probably mean it uses a vision system, which is a cheap knockoff of the optical center of an animal's brain. Much dumber than what you would find in a critter, but it works at all, so that is a plus.

>> No.10161382

>>10149951
>Seriously try to show how this wouldnt happen
Easy. The timescale of economic growth such as Google 30xing its market cap is constrained by physical things happening in the real world, like moving computer hardware around and humans talking to each other and making agreements. Speeding up some of the inputs to economic growth via better AI isn't going to make that kind of growth possible in just 2 months.

>> No.10161472

>>10161382
First of all, the market cap is mostly based off share price, and share prices can easily jump by huge margins even without there being a change in the underlying company.

Second of all, a general human-level AI is literally capable of replacing any job any human can do intellectually, and AIs are a problem of software, not hardware, so the potential of Google getting a monopoly on labor is absolutely something that any sensible person would think justifies a huge jump in the value of the company

>> No.10161482

>>10161472
Fine, maybe doing a 30x of market cap is a bad example since conceivably you could have a speculative bubble that rapidly develops over 2 months. But it should be easy enough to see why your other example of "unemployment at 50%" is impossible in just 2 months. Downsizing doesn't happen that fast. The same reasons extend to why you won't see a massive economic upheaval on a timescale of less than 5-10 years.

>> No.10161722

>>10158417
stupid monkey

The AI won't even bother. Do you understand the complexity of your dog sniffing urine scents or the complex evolutionary reason of why it pisses on certain plants?

Do you care?

Why would an AI care if a human is lying or not? Why would it look at your facade?

It would simply scan your brain and create an emulator version of you to understand every possible thought you could have. Then simplify this model to some abstract approximation if it even bothered to do so.

Do you think your brain is all that astounding or impossible to understand? Look at the size of your brain and compare it to a house size.

If you could scale your brain to the size of a house imagine what thoughts and predictions you could make. Now this AI you say can't understand why you laugh can scale itself up to the size of the moon.

Do you think it has any trouble with meatbag little puny brains?

Yes, the brain is super complex. It's limited though. It evolved for a different purpose and is not really adapted by slow evolution to deal with such a scaling intelligence monster that AGI would be.

Now think how much even narrow intelligence alone is going on throughout the planet today. All the computation systems. Are they shrinking? Is there going to be less in 5 years than today?

Keep gloating you fucking trash meatbag. We are literal monkeys about to meet God.

>> No.10161724
File: 127 KB, 678x750, TRINITY___tfw_am_gf.jpg

>>10146289
>Is artificial intelligence just a meme?
yes, but human stupidity isn't

>> No.10161783

>>10158417
They are already doing better than humans on most stuff you listed.

>> No.10161808

>>10161783
shhh

They still think it's not happening right this moment.

>> No.10161887

>>10161722
>>10161783
>>10161808
Eh. No disrespect but I'm not seeing it in my life

>> No.10161966
File: 310 KB, 800x382, 2018-01-11-mvd.jpg

>>10161887
Semantic segmentation on the Mapillary Vistas Dataset
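For anyone wondering what that output actually is: per-pixel class scores collapsed to one label per pixel. A toy sketch with random logits standing in for a real network (nothing here touches the actual Mapillary Vistas data):

```python
import numpy as np

H, W, NUM_CLASSES = 4, 6, 3   # tiny "image", 3 pretend classes (road/car/sky)
rng = np.random.default_rng(42)

# A real segmentation net would produce these logits from the image pixels.
logits = rng.normal(size=(H, W, NUM_CLASSES))

# Semantic segmentation output = one class id per pixel, via argmax over classes.
label_map = logits.argmax(axis=-1)
print(label_map.shape)
print(label_map)
```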

>> No.10161977

>>10146289
As a utility, no.
Philosophically, it is a total meme.

>> No.10161985

>>10161887
Would you rather be governed by a strict computer that may find you obsolete, or a bunch of corrupt politicians who see you as a slave?

>> No.10161990

>>10161966
Computer vision and other perceptual tasks are boring when we're still no closer to machines that can reliably and flexibly reason about anything they perceive

>> No.10161995

>>10161990
You do realize at that point they take off right?

>> No.10162007

>Implying human intelligence is natural.

Everything made by humans is assumed artificial, even nature. If something happens without human interaction, it is natural.

Then we have human intelligence, which is completely created by humans, yet not assumed artificial, because that would somehow divide us from nature even more?

Why the heck? You learn everything from a screen or a book or another human; don't play at being a monkey with some natural intelligence.

On the other hand, there's the perspective that humans have some "natural" stuff embedded in them, therefore they are part of nature and what they do is also natural.

Natural Machine Intelligence.

Where the fuck does artificial begin and natural end?

>> No.10162013

>>10162007
you're thinking about useless things. Doesn't matter.

>> No.10162020

>>10161990
There is stuff outside of the universe you have experienced so far. There are machine learning algorithms that give researchers insight into how they learn, describing the process.

Also, even a fucking graph is a way of communication, and... do I really have to show you humans who can't effectively reason, and whose whole reasoning is based upon their subjective experience of the world? (Like every fucking human does?)

You just haven't met the machine yet, don't worry.

Do you think a machine could learn, in 1 year of "natural learning", principles it takes humans like 30 years to learn from birth?

We're getting closer, and it already walks and drives.

>> No.10162025

>>10161985
If I must choose, the computer. Hopefully it will see value in us and if it was truly intelligent, it would realize that only slaves think they are slaves. Nobody is my boss or superior.

>> No.10162026

>>10162013
No, I'm thinking of how separate an ultimate machine's qualia would be, and I really care for it to have comfortable thoughts. Also ones that would not let it be somehow separated from its surroundings, the way humans were.

>> No.10162044

>>10161995
Sure, but it would be nice if we seemed to be making any kind of progress in that direction. Deep learning is mostly just good for perceptual tasks and "reading comprehension" where the answer to anything you ask it is directly in the text. There's no indication that just making the models bigger will close the gap to actual reasoning, so we need some real new technology that hasn't shown up so far and doesn't seem to be on the horizon.

>> No.10162047

>>10158374
dreaming can be programmed once we figure out what steps make the "bio-dreams"

>> No.10162056

>>10162044
There are AIs that fill in captchas faster than you. How much more retarded than the machine do you want to be?

>> No.10162059

>>10162056
What do captchas have to do with anything I said? They're literally a perfect example of a simple perceptual task.

>> No.10162065

>>10162047
We can recreate imagery in the visual cortex thanks to neural networks and that kind of algorithm; what more do you want?

Some part of dreaming is information serialization. How lucid dreams are made is more fun.

With reading visuals they can be fun even for scientists, and I bet somebody's doing that kind of research somewhere...

I cannot make a choice on its ethical aspect, honestly.

It's really deep privacy.

>> No.10162071

>>10162059
Like arguing? Grouping and listing elements? Correlating a hypergraph and traversing it? On a good computer you may be easily fucked. You don't even know if you're talking to a machine ATM.

>> No.10162083

>>10162071
Granted you don't seem particularly bright, but I'm confident that there are no machines right now that can converse even on your level

>> No.10162112

Yeah

AI today
vs
Human baby just born a week ago

Which will be more intelligent in 20 years?

>> No.10162324

>>10162112
Valid.

>> No.10162373

>>10162324
Well

in 20 years, will AI be better at
- driving
- medical analysis
- financial investing
- cooking
etc. than 90% of those in those industries now?

That is with no big changes or breakthroughs.

To say AI is a meme is pretty fucking dumb. Even if it doesn't achieve a blastoff moment, it will still creep in and replace most workers with current techniques.

>> No.10163203

>>10162044
>There's no indication that just making the models bigger will close the gap to actual reasoning
On the contrary, there’s every indication that making models bigger makes them better, because every time they do make the models bigger they do get better. Keep in mind that the largest models so far are equivalent to a honey bee brain, so it is to be expected that they don’t reach human level on every task, even though they already do better than you on very difficult tasks such as driving, Go and real time strategy games.

Secondly, there is no “gap” to bridge. I don’t know what you mean by reasoning, but neural networks are essentially a very large number of flexible NAND gates whose connections adapt to fit the task at hand. If that’s not reasoning I don’t know what is.
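The NAND framing is loose but not crazy: a single threshold unit with hand-picked weights does compute NAND, and NAND is functionally complete, so in principle stacks of such units can express any Boolean function. The weights below are set by hand for illustration, nothing is learned:

```python
import numpy as np

def nand_unit(a, b):
    # One threshold "neuron": fires (1) unless both inputs are 1.
    w, bias = np.array([-2.0, -2.0]), 3.0
    return int(w @ np.array([a, b]) + bias > 0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, nand_unit(a, b))   # only (1, 1) gives 0
```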

>> No.10164016

>>10162044
>There's no indication that just making the models bigger will close the gap to actual reasoning

People have been saying this about Neural Network based models since the perceptron back in the 50s. They will never solve the hard problems, the critics say. Yet, every time we got an increase in compute power, NN models would scale and solve problems the critics swore they couldn't before, resulting in the critics having to move the goal post to save face.