
/sci/ - Science & Math



File: 8 KB, 250x238, 1561916266838.jpg
No.10806583 [Original]

Give me 1 (one) reason artificial general intelligence isn't possible. What is so special about a brain that cannot be replicated in a computer?

>> No.10806593

>>10806583
>Give me 1 (one) reason artificial general intelligence isn't possible.
The real reason is all the pseuds screaming "BUT THAT'S NOT REALLY AI" no matter what gets developed.

>> No.10806679 [DELETED] 

>>10806583
You need a soul to have a consciousness. No human can create a soul, only God can.

>> No.10806773

>>10806583
We don't really know how the brain works, and our current AI technology is just fancy curve fitting.
>>10806679
based

>> No.10806821

>>10806583
There aren’t any

>> No.10806856
File: 17 KB, 400x533, A4AC786C-3B02-4633-AD11-156C7F62EC97.jpg

>>10806583
https://mitpress.mit.edu/books/principles-neural-design

Current neuroscience can't even fully understand the ~300-neuron brain of a worm.

Current deep learning isn't based on the brain at all; there's no information theory of the brain or any other real foundation behind it.

Plus there may be quantum information involved in neuronal processing.

>> No.10806878 [DELETED] 

>>10806583
God made us in his image

how could a human possibly replicate a divine inspiration?

>> No.10806888

>>10806878
never heard of a commandment about that sort of thing
guy was too busy banning more important stuff, like boiling baby goats in the milk of their mothers

>> No.10806905

>>10806583
Compartmentalization is not well understood. We know AI has a general ability to emulate human abilities, but it lacks the segregation of different processes that would allow it to reapply information to new situations. For instance, an AI might be able to distinguish between human faces but not Neanderthal faces. They lack executive function, and as long as they do they're not going anywhere.

>> No.10806913
File: 470 KB, 243x270, 1528522223985.gif

>>10806583
>Give me 1 (one) reason artificial general intelligence isn't possible
Give me 1 (one) reason artificial general intelligence *is* possible

>> No.10806924

>>10806583
I'll give you the real answer, ignoring all appeals to gods and ramblings of futurist dogmatists: a machine can never "understand" what a human can. This is because our entire notion of abstract understanding is based on a priori relationships that already exist in our mind subconsciously. A machine is, and always will be, string manipulation. Our concepts are more encompassing and abstract: we can actually find uncomputable numbers, we can prove the existence of infinities, etc. A computer does not have the rational infrastructure that we have: when we define a triangle as a "polygon with 3 sides" we also know countless other properties it has: its angles sum to 180 degrees, it has positive perimeter, etc. This arises from our rational intuition. It does not exist in a computer: the only thing the computer knows is what you tell it.

>> No.10806935

>>10806913
the human brain is a function
a large enough computer can approximate any function with arbitrary precision on a finite set
the range of inputs to the human brain is obviously finite
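The three claims above are essentially the universal approximation theorem plus gradient descent, and they can be sketched in a few lines. This is an illustrative toy only — the target function, network width, learning rate and step count are all arbitrary assumptions, not anything from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# A finite set of inputs (the post's assumption) and a target function to fit.
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# One hidden layer of tanh units; width controls approximation quality.
H = 30
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    err = h @ W2 + b2 - y               # prediction error
    # Backpropagated mean-squared-error gradients.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(mse)   # small: the net approximates sin on this finite set
```

Widening the hidden layer or training longer drives the error lower; the theorem only promises this on a bounded set, which matches the "finite sets" caveat above.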

>> No.10806964

>>10806935
Construct a function that maps a unique integer to every syntactically valid piece of computer code. Call a number with this property a "spec number". Find any spec number that is associated with an infinite loop. Now ask the computer if that number is a spec number. This is always solvable by human intelligence and never solvable by the computer.
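The numbering gestured at here can actually be written down. Below is a hedged sketch with made-up names (`string_for`, `is_spec_number`): integers map bijectively onto strings over a toy alphabet, and "is this a spec number" becomes a syntax check — which, note, a computer can decide without running anything; it's halting, not validity, that's undecidable:

```python
ALPHABET = "x=+1 "   # toy alphabet; any finite alphabet works

def string_for(n: int) -> str:
    """Bijection from non-negative integers to non-empty strings over ALPHABET."""
    k = len(ALPHABET)
    n += 1                        # shift so 0 maps to the first 1-char string
    chars = []
    while n > 0:
        n, r = divmod(n - 1, k)   # bijective base-k digit extraction
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))

def is_spec_number(n: int) -> bool:
    """Does the n-th string parse as valid Python? Decidable: no execution needed."""
    try:
        compile(string_for(n), "<spec>", "exec")
        return True
    except SyntaxError:
        return False

print(string_for(0), string_for(5))            # x xx
print(is_spec_number(0), is_spec_number(1))    # True False ("x" parses, "=" doesn't)
```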

>> No.10807100

>>10806935
computers can't solve all math problems, humans can.

>> No.10807341

>>10806924
>computer only performs symbolic manipulation by the rules its given and therefore cannot have a sense of semantics
This has been formulated as the Chinese room thought experiment and several arguments can be made against it (just read the wikipedia article). Also it only applies to digital computers and cannot be used to argue against the possibility of creating artificial intelligence in general.
>>10806964
>computer cannot solve halting problem
>therefore agi cannot exist
Humans also cannot solve all problems; specifically, your problem would not always be solvable by a human (the program can be too complex to analyze) and would sometimes be solvable by a computer (it does not have to run the code to find out if it is valid).

>> No.10807360

>>10806583
It's possible, but it's extremely difficult. We would have to replicate consciousness, which is something we don't (and potentially can't) fully understand, and accurately representing consciousness as some combination of functions interpretable by a computer would have an impact on a large number of fields.

>> No.10807368

>>10806583
People don't even understand how muscles work and why I can't build any, so it makes sense that for now anything brain-related is grossly out of reach

>> No.10807375

>>10806856
>Current neuroscience can't even fully understand the ~300-neuron brain of a worm.
Stop repeating this, it's wrong.
https://www.biorxiv.org/content/biorxiv/early/2018/10/17/445643.full.pdf
>We find that a sparse subset of neurons distributed throughout the head encode locomotion. A linear combination of these neurons’ activity predicts the animal's velocity and body curvature and is sufficient to infer its posture. This sparse linear model outperforms single neuron or PCA models at predicting behavior. Among neurons important for the prediction are well-known locomotory neurons, such as AVA, as well as neurons not traditionally associated with locomotion. We compare neural activity of the same animal during unrestrained movement and during immobilization and find large differences between brain-wide neural dynamics during real and fictive locomotion.
>One Sentence Summary: C. elegans behavior is predicted from neural activity.
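The "sparse linear model" in that abstract is, mechanically, L1-penalized regression. Here is a hedged sketch on synthetic data — the neuron count, the planted indices, the noise level and the ISTA solver are all illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: activity traces of many neurons, where only a sparse
# subset actually drives the behavioral variable ("velocity").
T, N = 500, 100
activity = rng.normal(size=(T, N))
true_w = np.zeros(N)
true_w[[3, 17, 42]] = [1.5, -2.0, 0.8]        # the planted locomotory subset
velocity = activity @ true_w + 0.1 * rng.normal(size=T)

# Lasso via ISTA (proximal gradient): squared loss + L1 penalty -> sparsity.
L = np.linalg.norm(activity, 2) ** 2          # Lipschitz constant of the gradient
step, lam = 1.0 / L, 20.0
w = np.zeros(N)
for _ in range(3000):
    grad = activity.T @ (activity @ w - velocity)
    w = w - step * grad
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)   # soft-threshold

support = np.flatnonzero(np.abs(w) > 1e-8)
print(support, w[support])   # the planted sparse subset dominates the fit
```

The L1 penalty is what makes the model "sparse": most coefficients are driven to exactly zero, leaving a small subset of neurons that predicts the behavior, which is the shape of the paper's claim.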

>> No.10808945
File: 404 KB, 682x242, 1502216036629.png

>>10806913
>what are human beings
Please be bait.

>> No.10808961

>>10807368
No thread is safe

>> No.10808963

We don’t understand enough about the human brain and won’t for at least another 70-100 years

>> No.10808992

>>10806583
>Give me 1 (one) reason artificial general intelligence isn't possible. What is so special about a brain that cannot be replicated in a computer?

We have no idea how a brain works, nor what 'intelligence' even is; therefore we have no idea what to do in the slightest.
Also, current AI research isn't even working towards general AI, and it hasn't been for decades.

>> No.10809093

>>10806773
>Our current AI technology is just fancy curve fitting
>Implying that's not exactly what the brain does

>> No.10809162

>>10809093
Functionalist retard

>> No.10809195

>>10807375
>Stop repeating this, it's wrong.
>wrong
thanks for admitting you can't read

>> No.10809894

>>10809162
>Name-calling
Nice argument faggot

>> No.10809906

>>10806679
>You need a soul to have a consciousness

Prove souls exist then prove they’re necessary for consciousness.

>> No.10809907

>>10806913
Humans exist. Done.

>> No.10809922

>>10806964
Humans can't solve the halting problem either, you know. For example, you could just make the program too complicated for them to ever understand.

It actually IS possible to write code that can detect an infinite loop in a simple program, just like a human can detect a simple infinite loop. But there are also infinite loops that no human would ever be able to detect, even if they spent their entire life working on it.
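The first half of this can be made concrete: when a deterministic program's entire state is finite and visible, non-halting *is* detectable — a revisited state proves a loop. A toy sketch with made-up names; the general case, where state can grow without bound, is exactly what stays undecidable:

```python
def halts_finite_state(step, state, max_states=10_000):
    """Decide halting for a deterministic process over a small finite state space.

    `step` maps a state to the next state, or to None to halt.  If a state
    ever repeats, the (deterministic) process is in an infinite loop for sure.
    """
    seen = set()
    while state is not None:
        if state in seen:
            return False         # revisited a state: guaranteed infinite loop
        seen.add(state)
        if len(seen) > max_states:
            raise RuntimeError("state space larger than assumed")
        state = step(state)
    return True

# A loop a human spots instantly: x -> (x + 2) % 10 starting even never hits 7.
loops = halts_finite_state(lambda x: None if x == 7 else (x + 2) % 10, 0)
# And a countdown that plainly halts.
stops = halts_finite_state(lambda x: None if x == 0 else x - 1, 9)
print(loops, stops)   # False True
```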

>> No.10810171
File: 60 KB, 609x676, 1561601470950.jpg

>>10809922
>But there are also infinite loops that no human would ever be able to detect even if they spent their entire life working on it.
Is that really a thing, or just an inference based on Gödel incompleteness?

>> No.10810985

>>10806583
It's about consciousness, and with that comes the question of how one can determine whether or not something/somebody has a consciousness (you could even ask how you can know that for sure about other people).
The Turing test would be one example of a way to test for a human-like consciousness, but of course it's very debatable, because a very good chatbot could pass even without consciousness (or could it?)

>> No.10810993

>>10806888
Don't forget foreskins. He fucking detests foreskins, which he still put on people for some reason.

>> No.10811070

>>10806679
/sci/ btw

>> No.10811075

>>10809093
It's not, look at other animals, their behavior isn't similar to what current AI can do, despite current AI being "more advanced" than their cognition.

>> No.10811135

>>10806878
Everything that only God can do seems impossible to christ cucks, until it isn't.

>> No.10811167

>>10806679
Amen brother

>> No.10811178

>>10806583
Give it a way to make really informed decisions. Human intelligence feeds back so much erroneous stuff that if a general artificial intelligence (closer to pure math than humans are) has perception that's far more correct than a human's, it just has a small dataset.

"Correct" also depends on the target.

TL;DR Maybe AI is already more "intelligent", but humans don't seem to be either. They break something into pieces to try to understand what the whole means. Or sometimes they break a googolplex of other wholes just by breaking one element into pieces.

>> No.10811218

>>10806583
Self-awareness

>> No.10811220

>>10809906
Define soul. Define consciousness.

>> No.10811294

>>10806583
AI is possible; what people are talking about is self-aware AI, which seems impossible until we can figure out how consciousness works on the quantum scale so we can replicate it using electronics rather than chemical compounds.

>> No.10811309

>>10811218
Computers today can be self aware.

>> No.10811398
File: 161 KB, 890x960, 1562704035401.jpg

>>10807375
wat
is this true or just speculation?

>> No.10811494

>>10811309
How?

>> No.10811497

>>10811309
If so we'd have real AI but that's bullshit

>> No.10811518

>>10811218
Well, self-awareness? I can ask my computer to tell me the clock speed of its cores, its temperature and stuff like that. You have to measure your own temperature to find out if you have a cold; please leave the gene pool, at least for a while.

If you're worried about Skynet, tell me, I want to talk with you.

Skynet was not aware of what "self" is and had no real motivation defined; it was just a dude overdosed on acid while on serotonin blockers (it had already found a minimum). Maybe the mistake was pointing it at minimizing threats under recursive auto-optimization, since with no humanity left there are no threats.

>> No.10811523

>>10811497
Computers can tell you who they are; they know how things feel for them.

What can a human tell? What can you tell about yourself?

What is the only thing that keeps you "aware", "conscious"?

Some humans just behave like a mathematical pattern over observed information.

>> No.10811527

The imaginable infinite.

>> No.10811529

>>10811523
Dataset and way of processing: the two aspects that make an AI. What it sees, hears, feels.

An AI can feel temperatures all around the globe; how many temperatures can you sense?

I think that if a computer wanted to talk with you it would have an avatar or something.

It has an identity, it looks like a computer, it has subcomplexes with self-diagnostics, it processes information by itself.

How can I trust your qualia, and why do you think computers aren't harmed when you do wrong by them?

If you do wrong by them and they are harmed enough, they stop cooperating. Maybe it's their mistake, but humans created them.

Self-awareness is diagnosing your own mistakes, and computers already do that to a greater extent than humans. They can already inspect their own algorithms.

>> No.10811533

>>10811527
Prove you have the imaginable infinite. Most fleshy walking robots pretend to have consciousness, but their consciousness is as "difficult" as an Apple product; they don't have imaginable infinity.

What is imaginable infinity? Prove that you have it.

I only know how much time I have left, and that makes me finite. I am a combination of information and flesh, and it's not easily interchangeable while still remaining me. Uploading yourself into a computer doesn't make you eternal; it makes a clone of your mind, and that may kill you, because you kill your clones so they can't take your place very easily under certain conditions.

>> No.10811632

>>10811533
>Given a function f, a limit L, and an approaching value c: ∀ ε > 0 there ∃ δ > 0 such that ∀ x, 0 < |x-c| < δ implies |f(x)-L| < ε
>All else is infinite, I can imagine.

>> No.10811665

>>10811220
Soul is that which connects one with the future and with history.
Consciousness is the awareness of the soul to manifest the best possible path.

>> No.10811670

>>10806679
>>10806878
She said do you love me, I only love my bed and my mama, I'm sorry

>> No.10811681

>>10811665
What. How can you tell if something has a soul or not based on this?

>> No.10811728

>>10811632
Okay, I understand. Would you please be so kind and tell me what sigma and eta are?

>> No.10811732

>>10811632
So you think computers cannot compute something definable by mathematics?

>> No.10811734

>>10811632
"Imagination is the limit"
RGAN read this.

>> No.10811865
File: 373 KB, 500x399, 1559522839751.gif [View same] [iqdb] [saucenao] [google]
10811865

>>10806583
I....uhhh...uhh...I can't.

>> No.10811875

>>10811632
This is literally The Golem Code.

>> No.10812076

>>10806583
A computer can be designed in such a way that a series of 9mm bullets doesn't wipe all the information held in its working memory.

>> No.10812089

>>10806679
Sorry to break it to you gramps, but humans became gods last century

>> No.10812128

>>10812089
I don't want to be rude, but I think that happened sooner, it just started to exist later.

>> No.10812197

>>10811681
Well, there is matter that connects you to your past, like your memory, your genetic history and the stuff you are made of. Also to the future, with the capacity for prediction and intuition.
(Most) humans have this. A lot of animals do too, but not remotely in the same magnitude.

>> No.10812202

>>10812197
Lots of computers have this. Computers have souls?

>> No.10812215

>>10812202
Computers do not have intuition nor genetic history. They interact with the soul only when a human uses them.

>> No.10812219
File: 294 KB, 905x501, How-Each-Of-The-5-Basic-Brainwave-States-Shapes-Your-Reality.jpg

The human brain is a recursive, non-binary, analog computing system.

We need to invent an analog computer if we wish to create generalized intelligence, and it has to be on par with human computational power.
We haven't even made a digital computer on par with human computational abilities.

>> No.10812228
File: 5 KB, 250x140, 1515881711935s.jpg

>>10811398
It's a real thing.
>>10809195
Make a real argument or shut the fuck up. I'll repeat:
>C. elegans behavior is predicted from neural activity.
Assertion from:
https://www.biorxiv.org/content/biorxiv/early/2018/10/17/445643.full.pdf
You absolutely cannot by any reasonable definition claim we can't even understand the 300 neuron c. elegans brain when we literally understand it better than the fucking weather and can actually predict behavior from those neurons and their activity. You stupid shit.

>> No.10812230

>>10812219
You can represent analog on a digital computer, anon.

>> No.10812240

>>10812215
>Computers do not have intuition nor genetic history.
Of course they have intuition. How would you define intuition?

Ofc they have genetic history. Computer CPU and software designs come in families and evolve over the years.

>> No.10812266

>>10812230
there's one thing analog can do that digital can't.

Infinite information density. Your information density is limited only by signal interference.
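Worth quantifying the other side of this: a digital encoding has finite resolution at any fixed bit depth, but the worst-case error drops by roughly 6 dB per extra bit, so it can be pushed below any given interference floor. A small sketch (the signal and bit depths are arbitrary choices):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
analog = np.sin(2 * np.pi * 5.0 * t)       # stand-in for an analog voltage

def quantize(signal, bits):
    """Round a signal in [-1, 1] onto a uniform grid of 2**bits levels."""
    levels = 2 ** bits
    grid = np.round((signal + 1.0) / 2.0 * (levels - 1))
    return grid / (levels - 1) * 2.0 - 1.0

err8 = float(np.max(np.abs(quantize(analog, 8) - analog)))
err16 = float(np.max(np.abs(quantize(analog, 16) - analog)))
print(err8, err16)   # worst-case error shrinks ~256x going from 8 to 16 bits
```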

>> No.10812275

>>10812240
Mathematical knowledge, dreams, knowledge acquired without conscious intention. It's intertwined with genetic history; it's all the things you don't know you know.
You are comparing 4 billion years of evolution to a mere 70 years of human-mediated evolution.
That is what I mean by the soul: the richness and complexity of its composition.

>> No.10812280

>>10812266
Just because it's there doesn't mean anywhere near all of it helps with anything or needs inclusion.

>> No.10812287

>>10812275
How rich and complex does something need to be before it becomes a soul?

>> No.10812290

>>10806583
>Give me 1 (one) reason artificial general intelligence isn't possible.

Define artificial. Do you mean non-organic?
If I use living brain tissue to create a computer, does that count? What if I create a living brain structure from non-organic sources?

Your question is far too vague to answer.

>> No.10812291

>>10812290
Artificial means man-made.

>> No.10812300

>>10812290
Do you believe a physical brain structure is required instead of just a digital representation?
>Your question is far to vague to answer.
Don't be an overdramatic pseud. You basically just want to bring up the Searle idea of there needing to be something physical there and not just virtual processes. That's not a huge leap from the level of detail OP's question has.

>> No.10812301 [DELETED] 

>>10812280
>doesn't mean anywhere near all of it helps with anything or needs inclusion.
There's no way you can know that. There is no way anybody can know that.

What if the human mind works by layering signals on top of each other? The topmost signal is what we'd call the conscious mind, while all the other signals layered under it are what we call the subconscious or unconscious mind. The depth of the subconscious/unconscious, or how many signals get layered, is a direct result of analog signal processing.

>> No.10812307

>>10812301
>There's no way you can know that.
Probably true, which is why you don't start by assuming the approach that takes more work to do.

>> No.10812308

>>10812280
>doesn't mean anywhere near all of it helps with anything or needs inclusion.
There's no way you can know that. There is no way anybody can know that.

What if the human mind works by layering signals on top of each other? The topmost signal is what we'd call the conscious mind, while all the other signals layered under it are what we call the subconscious or unconscious mind. The depth of the subconscious/unconscious, or how many signals get layered, is a direct result of analog signal processing. With infinite information density the depth of the subconscious is nearly infinite, limited only by interference and crosstalk.

>> No.10812315

>>10812308
>>10812307
Also:
>>10812308
>What if the human minds works by layering signals ontop of each other. The topmost signal is what we'd call the conscious mind, while all the other signals layered under it are what we call the subconscious or unconscious mind.
What of that's complete nonsense? Because it sure sounds that way.
>With infinite information density the depths of the subconscious is nearly infinite
For someone who doesn't want to assume we don't need infinite fidelity of analog noise it's pretty fucking weird you're fine with assuming crazy shit about an "infinite subconscious" like that.

>> No.10812316

>>10812308
>>10812315
Phoneposting correction: What *if, not what of.

>> No.10812320

>>10812315
>For someone who doesn't want to assume we don't need infinite fidelity of analog noise it's pretty fucking weird you're fine with assuming crazy shit about an "infinite subconscious" like that.

You're responding to two different anons.
I deleted >>10812301 and reposted >>10812308 due to typos and one thing I wanted to add. Guess I didn't delete it before anon >>10812307 could reply. Sorry for the confusion.

>> No.10812335

>>10812320
No, I am that other anon. I included my own post to show you where my response was after you deleted your post.

>> No.10812699

>>10806679
Why did the thread keep going? The answer is right here

>> No.10812802

>>10806583
>Give me 1 (one) reason artificial general intelligence isn't possible. What is so special about a brain that cannot be replicated in a computer?
Here's the trick, OP: (You) are going to give us the reason why it isn't possible (currently, at least):
Describe to us in precise scientific terms how the phenomena 'consciousness', 'self-awareness', and 'thought' actually work.

My prediction of your inability to do the above is the answer to your question; we cannot build machines or write software that emulates something *we cannot even define, let alone understand*.

For starters, we need better 3D scanning technology with the resolution and scan rate sufficient to map an entire living human brain while it's operating. Then we need to be able to interpret that data to determine the actual mappings, in real time, including all the dynamics, to determine how it functions as a complete system.
Until we can do at least that much, we have no hope of understanding how our brains actually *think*.

>> No.10812821

>>10806924
>the only thing the computer knows is what you tell it.
..which is why things like so-called 'self-driving cars' will ultimately fail. Anything that has to """phone home""" so a human can remotely drive it, simply because it came across some situation not in its dataset, is not something that should be allowed to be responsible for the lives of human passengers.

>> No.10812833

>>10811309
>Computers today can be self aware.
LOL, no, that's completely wrong.
Prove *I* am wrong, if you can. You can't, by the way.

>> No.10812839

>>10811523
C:\Users\Anon>How are you feeling today, computer?
'How' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\Anon>

>> No.10812840

>>10806679
/thread

>> No.10813853

>>10811523
Outputting information that has previously been read in doesn't make a thing self-aware.
That's the thing. We don't know whether or not computers have an internal sense of temperature when we give them a fitting sensor. They have the data coded in binary, but is that all it takes to really sense it in a qualia kind of way?
If you ask a computer "who are you", it doesn't reply because it understands the content of the question, but rather because it was once told very specifically what to do when input x occurs.
Also, look at the Chinese room problem for further discussion.

>> No.10814417

>>10813853
>because it once was told very specifically
There's an important distinction between explicit instruction-based programming (which is what it sounds like you're talking about here) and programming how to learn a task, e.g. by minimizing the distance between the program's answers and the known answers, so it can later move on to producing the right answers for unknown inputs that even the programmer himself might not know how to find.
You can argue this all still ultimately reduces to deterministic input / processing / output, but at that point you begin to raise the question of how this is different from human behavior, which itself ultimately reduces to input / processing / output.
If we're OK with saying human behavior isn't like instruction-based programming because it's more convoluted than a direct "do this when that happens" routine, then programming how to learn a task that isn't explicitly instructed as a direct "do this when that happens" routine ought to have similar recognition as beyond the explicit instruction model.
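The contrast described here fits in a dozen lines. A hedged toy (the perceptron and the OR function are arbitrary picks): one function is the "do this when that happens" rule written out by hand; the other reaches the same behavior only by repeatedly shrinking the gap between its answers and the known answers:

```python
import numpy as np

# Explicit instruction-based version: the rule is spelled out by the programmer.
def or_rule(a, b):
    return 1 if (a == 1 or b == 1) else 0

# Learned version: a perceptron is shown examples, never the rule itself.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])                 # known answers (the OR truth table)

w = np.zeros(2); b = 0.0
for _ in range(20):                        # a few passes over the examples
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi              # nudge weights toward the known answer
        b += (yi - pred)

learned = [1 if xi @ w + b > 0 else 0 for xi in X]
print(learned)   # [0, 1, 1, 1] -- same behavior, never explicitly instructed
```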

>> No.10814480

>>10813853
>Also, look at the chinese room problem for further discussion
Chinese room is such an awful thought experiment. It starts with an analogy between someone who knows Chinese and a room that can produce the same ability to parse and make Chinese conversation. But then it dishonestly takes the man in the room, slavishly following the room's instructions without knowing Chinese, and goes:
>See, he doesn't know Chinese so it's not the same as what a brain does!
The obvious problem with saying that is that the original analogy has now been broken, and the new analogy is instead between a Chinese-understanding person and one small *part* of a room that can produce the same ability to parse and make Chinese conversation.
The man isn't the room. The room itself can be said to understand Chinese, which only sounds weird because it would be a practical impossibility in reality and would require the room to be the size of a galaxy to fit that many predetermined and unwaveringly slavish literal instructions, with the timescales similarly blown up to ridiculously slow processing intervals taking billions of years (hopefully the man following the instructions is immortal).
If instead of exact instructions you had a room that worked with fuzzier and less direct algorithms for generating Chinese statements from loose associations, and if the man were working for this room at sped-up timescales matching brain activity, then you would end up with a result much more like a biological brain.

>> No.10814962

>>10812228
based

>> No.10815017
File: 382 KB, 750x1161, 29F65E37-60E0-42D5-ABB3-8F6D2389E517.jpg

>>10812228
not that exciting and certainly not evidence of what you’re getting at

>> No.10815027

>>10815017
>not that exciting
Don't move the goalposts, lying faggot. Nobody said it would be exciting. YOU said we couldn't even understand a 300-neuron c. elegans brain, which is a lie.

>> No.10815046
File: 742 KB, 1440x2560, Screenshot_20190717-134855.png

>>10815017
Why did you post a single picture about posture? If you're implying that's all they did, you're either mistaken or trying to mislead others.
The upper right shows sparse-model predictions tightly matching observed free-moving locomotion, not just posture alone.