
/sci/ - Science & Math



File: 89 KB, 397x399, 1314954194047.jpg
No.3746740 [Reply] [Original]

Why can't we upload the brain to a computer?

>> No.3746744

Because we do not think in binary maybe?
How the fuck should we know, half of us are 16?

>> No.3746745

The brain is freaky complex and computing isn't that advanced yet?

>> No.3746746

because i don't want to

>> No.3746766

Not enough memory.
In theory, you could upload your connectome (= the whole network of neurons) to a computer, but this would require an enormous amount of memory.
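To put "enormous" in numbers, here is a back-of-envelope estimate in Python. The figures are rough ballpark assumptions (~86 billion neurons, a few thousand synapses each, a few bytes per synapse), not measurements, and this only counts the wiring list, not synapse state, glia, or chemistry:

```python
# Back-of-envelope estimate of raw connectome storage.
# All constants are illustrative ballpark values, not measurements.
NEURONS = 86e9              # commonly cited count for an adult human brain
SYNAPSES_PER_NEURON = 7e3   # rough average; real estimates vary widely
BYTES_PER_SYNAPSE = 8       # assume target id (4 B) + weight (4 B)

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
petabytes = total_bytes / 1e15
print(f"~{petabytes:.1f} PB just to list the connections")
```

Even under these generous simplifications, the answer comes out in the petabyte range.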

>> No.3746772

>>3746744
>Because we do not think in binary maybe?
Yes we do. 1 and 0 represent on and off; neurons are either on or off. I know this from a book by Carl Sagan. He even calculated how much memory the brain has in computing terms, and how high its processing power is.

>> No.3746776

Because the brain doesn't store memories as 1s and 0s. It works with complex chemical reactions.

>> No.3746780

I think it will be possible someday, but I would be too much of a pussy to do it... you know, because of all this qualia shit etc.
What I would do is extend my brain with technology and maybe gradually exchange parts for artificial simulated neurons etc.

>> No.3746783

>>3746740
Because there is definitely no software that supports it. Perhaps the hardware doesn't support it either.
Both should be technically feasible, though.

>> No.3746784

A computer doesn't have plasticity like a brain does. Whether a human mind in a computer would still be able to learn and gain experiences is an important factor here.

>> No.3746785

>>3746776
I was talking about neurons. Also, a computer's memory system is already superior. A computer with human intelligence and computer memory would be better.

>> No.3746790

> A computer with human intelligence and computer memory would be better
> A computer with human intelligence and computer memory would be
> human intelligence, computer memory
I don't think you know how system architecture works.

>> No.3746791

>continuity of consciousness

>> No.3746792

>>3746740
We don't have the right dongle.

>> No.3746800

>>3746790
They both run off electricity, mr expert

>> No.3746801

>>3746766
Memory is not the only problem. The main issues are:
- We don't have a precise enough idea of how neurons work. Even state-of-the-art "spiking neuron" models do not encompass the complexity of neuronal interactions and only emulate them to some extent. We don't know the exact roles of glial cells either, or how they control the energy available to surrounding neurons, but we know they play a key role (knowing the exact network of neurons does not let you simulate the brain if you don't have the interlaced network of glial cells),
- And even more importantly, we cannot yet determine with enough precision the layout of the neural network in someone's brain. We do not have the tools to precisely say where neurons are and what neurons they are connected to. We only analyze the activity of "groups" of neurons, at best (in the general case).


Ask again in a few decades / centuries.

>> No.3746803

>>3746791
It's an illusion, because our brains are hardwired to have concepts of and remember the past.
Actually there is no real continuity.

>> No.3746819

>>3746803
this is why i hate greentext lolz

yes, that is what i was trying to point out. it wouldn't matter if we COULD upload our brains to a machine... it wouldn't be "us".

>> No.3746822

Are you from /tg/, op?

>>>/tg/16314766

>> No.3746832

What the fuck? We CAN upload a brain to a computer. It just needs to have its memory wiped.

>> No.3746835

>>3746803

You may be right. But if you are. That means YOU don't really exist.

>> No.3746839

>>3746832

Just because you run a bunch of words together... doesn't mean your utterance has any real MEANING.
And yours doesn't.

>> No.3746847

>>3746839
Except we CAN upload our brains to a computer.

http://en.memory-alpha.org/wiki/Our_Man_Bashir_(episode)

Educate yourself.

>> No.3746855

>>3746847
>An episode as proof and source
wat

>> No.3746865

>>3746819
Obviously you didn't understand what I said. I said there is no such thing as an I or continuity; it's an illusion created by our brain... so it wouldn't matter if we upload our brains or not... there is no such thing as a continuity that would be broken.

>> No.3746870

>>3746847

You faggot. That is fiction.

>> No.3746874

>>3746865
If our minds are an illusion, then they are the kind of illusion that is self-aware and able to understand past and present.

If that is the case, we might as well be "real", no?

>> No.3746881

>>3746835
Yes, "I" doesn't really exist as a single unchanging continuity or entity. We only think of it that way because we have memories of our past.
Actually, I is a construct made of many things that are constantly changing and often interrupted.

>> No.3746884

>>3746772

no. you have no reason to believe the brain is a discrete state machine.

>> No.3746886

we can op,

its called the internet

...yup

>> No.3746890

>>3746881

I am inclined to agree with you.

>> No.3746892

>>3746881
And how do you cement that? The brain doesn't exactly turn on and off so as to cut off continuity.

>> No.3746896

>>3746892
We call it sleeping.

>> No.3746901
File: 51 KB, 400x400, bitch-please.jpg

>>3746740
Well, OP, they've already simulated a section of a rat brain. Can't be too long now.

http://www.guardian.co.uk/technology/2007/dec/20/research.it

>> No.3746905

>>3746896
Sleep doesn't allow us to simply "turn off" the brain and kick-start it every morning. What is dreaming if not the idle processes of a dormant brain? The brain -never- rests.

>> No.3746907

All i know is "self". "Self" is all i know.

"I" does exist. we are all a bunch of "I's", experiencing life individually together.

>> No.3746914

Because we still use von Neumann architecture, making the translation from neurons to transistors very complicated.

>> No.3746937

>>3746881
My viewpoint on dying is that we are actually constantly dying and are more or less gradually or suddenly replaced by an I that remembers being the I of the past.
The only thing that happens when we actually die is that nobody will be there remembering being you.
>>3746905
Sleep does turn off the parts of your brain that create the "I", and that's the only thing relevant. We do not dream the whole time we sleep.

>> No.3746939

>>3746865
>it doesnt into wikipedia therefore it doesnt real

awesome

also "mindstream"

of course i just discovered That by searching for CoC.

...i dont rely on reading things to formulate my ideas.. people and their dumb books.

>> No.3746942

>>3746905
Sleeping does affect the brain a lot. If you didn't sleep, your synaptic weights would only grow. When you learn, the synapses that are used get "bigger". The others don't change. At night, things normalize themselves so that a neuron that has learned a lot today won't start spamming everything around it. If the brain didn't rest and normalize this shit, it would be epileptic (non-stop signals everywhere).
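That "grow during the day, normalize at night" idea can be sketched as a toy model. This is only an illustration of multiplicative downscaling, loosely in the spirit of the synaptic homeostasis hypothesis, not a claim about real mechanisms; all numbers are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.1, 1.0, size=100)   # synaptic weights at the start of the day

# "Wake": learning-style potentiation only grows weights.
w_wake = w * 1.2

# "Sleep": multiplicative downscaling preserves relative differences
# (what was learned) while pulling total strength back to a set point,
# so no neuron ends up spamming everything around it.
target_total = w.sum()
w_sleep = w_wake * (target_total / w_wake.sum())

print(round(w_sleep.sum() - target_total, 9))  # total strength restored
```

The ordering of the weights is untouched; only the overall gain is renormalized.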

>> No.3746943

>>3746937
>We do not dream all the time of our sleep.
Yes we do. It's just we don't remember most of them.

>> No.3746951

>>3746937
>Locke's Socks
I think I remember reading somewhere that neurons (or some other cells in the brain) aren't replaced. Since you are your brain, this isn't really true.
The most you could say is that your body is constantly dying, and you are never in the same body that you were in a few moments ago.

>> No.3746973

>>3746943
nope thats wrong

>> No.3746977

>>3746943
That's wrong.

Think of dreams as a place to express your inner desires. They are an amalgamation of your fears, hopes, lust, and so forth. If your life is perfectly stable, then your brain has no need to express these feelings and you won't dream.

>> No.3746980

>>3746977
thats also bullshit

>> No.3746983

>>3746977
That's worse than the guy who tried to use a Star Trek episode as a source for his claims.

>> No.3746986

>>3746740
Memory alone in a human brain is at least 2.5 PB of space. We barely even have hard drives that go beyond 5 TB.

>> No.3746993

>>3746901
Simulating a brain =/= uploading a brain to a computer.
Two different things.

>> No.3746995

>>3746977

No one really knows why we dream. There are many theories, though. People can only tell some of the shit that happens while you dream.

>> No.3746999

>>3746801
the only correct answer in this thread.

>> No.3747001

>>3746986
IBM made a 120 PB drive.

>> No.3747003

>>3746977
Dreaming was developed by our brain to simulate fighting situations, like with mammoths in the stone age. It was meant for training.

>> No.3747007

>>3746943
We do not dream "all the time." We dream frequently, and at various levels of sleep, with REM/dream incidents occurring at about 90 minute intervals. So, yes, there's a lot of dreaming, more than we remember, but not "all the time."

>> No.3747022

>>3746993

Really, how is this like copy-paste vs. cut-and-paste? Or do you mean plugging in like external hardware?

Why would you want to leave the safety of your skull and enter a computer?

>> No.3747082

>>3747022
I think what he meant is that if you want to simulate a brain, you don't need to simulate a particular brain after copying what is in it. You just need to simulate a network that could be a brain because it's built the same way. It's the same when you simulate anything: you might want to copy the initial conditions from reality, or just set them to something coherent because it's usually simpler.

>> No.3747084

It's possible in principle to make a full physical simulation of a human brain.

But we don't understand the brain (neurons, neurochemistry, etc) well enough yet.

>> No.3747089

>>3746884
You're assuming too much. All that is required is that the brain's chemistry be computable (you can make a simulation).
>>3747084

it has fuck-all to do with "logic".

>> No.3747092

>>3746993
Bullshit. A physical simulation is one form of mind uploading. It's a brain, and it's running entirely on a computer.

>> No.3747103

>>3747089

you're going to do it at the CHEMICAL LEVEL??????
HAHAHAHAHA!!

>> No.3747105

>>3747084
see >>3746801 for details.

>> No.3747113

>>3747103
Laugh all you want. You can't abstract away the nonessential dynamics until you know what dynamics are essential. We will probably have to make a simulation at the level of chemical concentration gradients and membrane potentials, and then work with that system to see how much we can simplify/model/abstract away the details while keeping the necessary functionality (a mind).
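For readers wondering what "simulation at the level of membrane potentials" even looks like, here is a minimal leaky integrate-and-fire neuron, one of the simplest spiking models: all the chemistry is abstracted into a leak constant, a threshold, and a reset. Every parameter below is an illustrative value, not a biophysical fit:

```python
import numpy as np

def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-70.0,
                 v_thresh=-55.0, v_reset=-75.0):
    """Leaky integrate-and-fire: membrane potential leaks toward rest,
    integrates input current, and fires/resets on threshold crossing."""
    v = v_rest
    spikes, trace = [], []
    for t, i_in in enumerate(current):
        dv = (-(v - v_rest) + i_in) / tau   # leak toward rest + input drive
        v += dv * dt
        if v >= v_thresh:                   # threshold crossing = spike
            spikes.append(t)
            v = v_reset                     # hard reset after spiking
        trace.append(v)
    return np.array(trace), spikes

# Constant suprathreshold input produces regular spiking.
trace, spikes = simulate_lif(np.full(2000, 20.0))
print(f"{len(spikes)} spikes in 2000 steps")
```

Abstracting further (or adding detail like ion-channel dynamics) is exactly the simplify-while-keeping-function loop described above.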

>> No.3747135
File: 154 KB, 550x843, Recursion leads to madness.jpg [View same] [iqdb] [saucenao] [google]
[ERROR]

>uploading a brain into a computer while your brain is in a computer
>leads to recursive catastrophe
Madness!

>> No.3747140

>>3747113
(other poster)

>You can't abstract away the nonessential dynamics until you know what dynamics are essential.
I totally agree with that.

>We will probably have to make a simulation at the level of chemical concentration gradients and membrane potentials, and then work with that system to see how much we can simplify/model/abstract away the details while keeping the necessary functionality (a mind).
However, this doesn't seem right. Newton's physics "knew" what dynamics were essential without knowing the underlying complicated model. You can understand things at a high level without understanding them at a low level. That's what sociology does to biology, what biology does to physics, etc. (cf. that XKCD comic).

>> No.3747147

>>3747113
>and then work with that system to see how much we can simplify/model/abstract away the details while keeping the necessary functionality (a mind).
But there is no way to see if a mind is present.

>> No.3747153

>>3747140
That's describing a top-down analytical approach to identifying the relevant dynamics of the brain. So far, we haven't made much progress. Though I fully agree that if we succeeded it would save us the trouble of a low-level simulation.

>> No.3747166

>>3747153
(cont)
Basically, the dynamics of brains are far more complex than the trajectory of apples and arrows.

>> No.3747182

>>3747153
A friend of mine has just completed a PhD closely related to such a top-down approach. Not really about the thought process, more about the memory process. Still, it had pretty interesting results: he started from a neural network that had no biological plausibility (a Hopfield network) and tried to improve it. And as he was improving it, he realized that every modification he made actually brought the network closer to what neurobiologists currently think. Sparse network with high local density, resistance to noise, etc. Can't remember all the details (it's not my PhD), but the discussions we had with his advisor were inspiring.
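For reference, a basic Hopfield network is easy to sketch: store binary patterns in a symmetric weight matrix via the Hebbian outer-product rule, then recover a stored pattern from a corrupted cue. The sizes and patterns below are arbitrary toy choices, not anything from that PhD:

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product rule over +/-1 patterns; symmetric weights."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w / len(patterns)

def recall(w, state, steps=10):
    """Iterate sign(w @ state); synchronous updates for simplicity."""
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
w = train(patterns)

noisy = patterns[0].astype(float)
noisy[0] *= -1                      # corrupt the cue by flipping one bit
recovered = recall(w, noisy)
print(np.array_equal(recovered, patterns[0]))  # prints True
```

The biologically implausible parts (dense symmetric all-to-all wiring, synchronous updates) are exactly the kind of thing such a top-down project would be modifying.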

>> No.3747197

>>3747182
Perhaps the two approaches should meet in the middle, without a full simulation. All the critical components should be thoroughly understood (the internal dynamics of neurons, etc., and how their behavior couples with the outside environment), and then we make a network with logical elements that have near-equivalent dynamics.

Basically, use low-level neuroscience to inform the basic interactions and dynamics you incorporate in the abstract models.

I'm mainly frustrated with how the difficulty of strong AI has been so vastly and consistently underestimated. We haven't even demonstrated a working mouse brain simulation, and there are projects to jump straight to a full human brain? No.

>> No.3747208

>>3747147
Yes there is. Talk to it.

Don't pull p-zombie bullshit on me. A mind is what a mind does.

>> No.3747221

>>3747208
I'm pretty sure there are ways to create a machine that passes the Turing test without being conscious.

>> No.3747241

>>3747221
The Turing test is misguided, in that passing for human isn't a good definition of intelligence. Too many profoundly stupid systems can parrot the superficial appearance for a few minutes, and too many intelligent systems would have little interest or even ability for conversation in English.

However, I think that anything that can really demonstrate its intelligence through conversation (not small talk bullshit, I mean something I can actually reason with) is intelligent. That's a sufficient condition, but not necessary.

Basically, there are no concerns I would have about a functional machine intelligence that I wouldn't already have about other people. And do you walk around asking yourself if someone is a p-zombie? I don't think the word even has coherent meaning. We're all just intelligent machines.

>> No.3747271

Simple answer to the OP:

We currently lack the technology.

Lunch, anyone?

>> No.3747277

>>3747271
Well, I'd rather say that we lack the understanding of the brain. We could run the simulation now, but we don't know what program to run. If we did, we could do it - it might just run very slowly on current hardware.

>> No.3747316

>>3747241
>However, I think that anything that can really demonstrate its intelligence through conversation (not small talk bullshit, I mean something I can actually reason with) is intelligent.
Intelligent ≠ conscious.

If we had a huge database of words and sentences and their relations to each other, and a highly efficient search algorithm (or even a neural net that acts as such) on a supercomputer, it would probably pass the Turing test... hell, it would even count as intelligent... but it wouldn't be conscious.
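A minimal caricature of that "database plus search" chatbot makes the point concrete. All stored questions and responses here are made-up placeholders; scaling the table up changes coverage, not the mechanism, and nothing in it carries state between turns:

```python
# Tiny retrieval chatbot: a canned question->answer table plus a crude
# word-overlap "search". Entries are invented for illustration only.
RESPONSES = {
    "what is your name": "I'm a lookup table, not a mind.",
    "do you like science": "Science is great.",
    "are you conscious": "I can only return what I was given.",
}

def reply(user_input):
    words = set(user_input.lower().split())
    # Pick the stored question sharing the most words with the input.
    best = max(RESPONSES, key=lambda q: len(words & set(q.split())))
    return RESPONSES[best]

print(reply("are you really conscious?"))
```

Whether a vastly bigger table plus a cleverer search ever amounts to more than this is exactly what the thread is arguing about.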

>> No.3747320

>>3747277

If you wanted to make an efficient brain simulator, then, yes, we lack the understanding. If you want to make an accurate (or just easy) one, what we lack is the hardware.

Theoretically, if we could simulate the trillions of atoms in each cell, and simulate the hundreds of billions of neurons, glial cells, etc., then we could simulate a brain perfectly without knowing all that much about how the brain works. We could just serial section it, or even "grow" a human inside a computer.

The amount of processing power here is insane, of course. We're talking about more processing power than has ever existed. However, it's technically true.

Hmmm... There's a thought: If you simulated a human inside a computer, they'd still age and suffer senility. That makes the need to model the brain more in terms of processes than literally very interesting...

>> No.3747347

>>3747316
Just what do you mean by "conscious"? Are you holding on to some idea that you are special, and not just a biochemical machine?

>> No.3747355

>>3747320
Yeah, having a human body in silico would be a boon for medical research. But what you said about the computation involved is pretty much spot on.

>> No.3747361

>>3747316
I don't think what you're describing could pass as intelligent. It gets fuzzy when you start including neuron-like nets.

Anything that is based on just linking predefined words together doesn't have the underlying dynamics necessary for intelligent behavior. It would be just another chatbot, dumb as a rock like all the rest.

>> No.3747377

>>3747347
Nope, but I think that consciousness only arises under certain conditions, which are not necessarily met by all animals, machines, or neural networks.

>> No.3747391

>>3747377
> not necessarily met by all animals, machines, or neural networks.
Of course not.

But you seem to be making a much stronger claim - that an intelligent machine is possible, but a conscious machine is impossible (whatever you mean by that).

>> No.3747407

>>3747391
Nope, I think a conscious machine is possible, but not every machine that is intelligent is also conscious.

>> No.3747412

>>3746772
Carl Sagan was a great man, but he was not an expert on the human mind.

>> No.3747429

>>3747377

I hope you mean most animals bro

>> No.3747431

>>3747407
> not every machine that is intelligent is also conscious.
This I would agree with, but I still don't know if we mean the same thing by "conscious".

For instance, we could have a machine intelligence that is excellent at solving general puzzles, but has no self-awareness.

However, you were saying that a machine that can be reasoned with in a conversation and that demonstrates intelligence in that manner is not necessarily conscious. Either we don't have the same behavior in mind, or we don't mean the same thing by "conscious". If it can talk with me about itself and show an abstract understanding of its own existence, (no, not just being programmed to parrot "I think therefore I am"), then what evidence do you have to say that it is NOT conscious?

>> No.3747440

>>3746784
Programs can have plasticity.
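A one-screen illustration of that claim: Hebbian weight updates, where a program's "synapses" strengthen when pre- and post-synaptic units are active together. The learning rate, network size, and input pattern are arbitrary demonstration values:

```python
import numpy as np

# Toy Hebbian plasticity: connection strengths change with experience.
w = np.zeros((4, 4))                    # "synaptic" weights, initially blank
lr = 0.1                                # arbitrary learning rate

pre = np.array([1.0, 0.0, 1.0, 0.0])    # inputs 0 and 2 active, 1 and 3 silent
for _ in range(10):
    post = np.tanh(w @ pre + pre)       # activity partly driven by the input
    w += lr * np.outer(post, pre)       # "fire together, wire together"

# Connections from active inputs grew; those from silent inputs did not.
print(w[:, 0].sum() > 0, np.allclose(w[:, 1], 0))
```

The program's behavior after the loop differs from before it, purely because its internal weights adapted, which is plasticity in the relevant sense.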

>> No.3747454

>>3747431
>However, you were saying that a machine that can be reasoned with in a conversation and that demonstrates intelligence in that manner is not necessarily conscious.

I was referring to my database example.
Just look at that ELIZA shit etc. Now imagine we had something with a database 1000 times bigger, more complex, and much more efficient.
It could even be a neural net that searches the database intelligently for the correct answers, maybe even by regarding the context or by altering the predefined sentences according to some rules.
It would be intelligent, yet the database searching wouldn't be different from the puzzle solving.

>> No.3747461

>>3747454
No such system based on selecting pre-canned responses can demonstrate intelligence. Conversation flowcharts are not intelligent.

>> No.3747480

>>3747461
Not yet, and there is no proof that it isn't possible in the future.
Also, if the puzzle-solving machine can be intelligent, that machine can be intelligent too.

>> No.3747495

>>3747480
>Also if the puzzle solving machine can be intelligent that machine can be intelligent too.
I think I see what you're getting at.

You CAN most certainly have a somewhat intelligent (by absolute standards, not human standards) machine that can make small talk. But that just shows how devoid of thought small talk is.

>> No.3747497

>>3747461

http://www.youtube.com/watch?v=WnzlbyTZsQY

I'm not a robot, I am a unicorn.

>> No.3747504

>>3747497
Ah, cleverbot.

But it's not very intelligent at all.

>> No.3747549

>>3747495
Well, we will have no resolution to this discussion until someone builds a machine that passes the Turing test, and I don't think it's impossible that someday a database-based approach could pass it.

Maybe consciousness would even emerge out of the complexity, and the neural net connected to the database would be capable of formulating new thoughts by applying the rules fundamental to our communication... however, I would never upload my brain into such a database.

>> No.3747565

>>3747549
>someday a database based approach could pass it.
Database approaches will never be able to produce a coherent and previously un-encountered train of thought.

They don't even pass the "please remember a word for me" test. There is no train of thought.

>> No.3747579

>>3747565
That's only because they select the answer based on the latest reply. With more computational power they could always take the whole conversation into account.
It may be speculative, but I think it's possible.

>> No.3747584

>>3747579
>Thats only because they select the answer based on the latest answer. With more computational power they could always compute the whole conversation.
And not a single novel thought was produced.

>> No.3747595

>>3747584
novel thoughts are nothing more than a recombination or alteration of old thoughts

>> No.3747599

>>3747595
And database systems don't do it.

>> No.3747621

>>3747599
A neural net could learn to recombine words and sentences drawn from a database so they make sense to humans, producing new thoughts in the process. But how would that be different from puzzle solving?

>> No.3747627
File: 378 KB, 600x850, Neo-cortical-column.jpg

I won't read this thread because it will be full of awful opinions like >>3746744

We have uploaded a nematode: http://www.csi.uoregon.edu/projects/celegans/

We have grown mice cortical columns using genetic profiles:

This isn't the same as scanning the neurons from a serial section, but that's hard enough to do. Each layer is like 70 nm thick and you have to scan multi-micron-tall structures. It takes time to laminate a mouse.

http://www.youtube.com/watch?v=_rPH1Abuu9M&feature=related
http://www.youtube.com/watch?v=wDY4cFJauls&feature=related
http://www.youtube.com/watch?v=h06lgyES6Oc&feature=related

http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

>> No.3747638

>>3747621
>a neuronal net
The net would be intelligent then.

And I agree as far as the non-conscious thing - it could make smalltalk while being minimally intelligent and non-conscious. But if it can reason with me and demonstrate intelligence and self-awareness? Then it's both intelligent and conscious.

>> No.3747646

>>3747627
>It takes time to laminate a mouse.
I laughed.

>> No.3747658

>>3747638
>The net would be intelligent then.
So would the puzzle-solving net.
>But if it can reason with me and demonstrate intelligence and self-awareness? Then it's both intelligent and conscious.
That's redundant. If it can demonstrate self-awareness, it is self-aware... however, how would one demonstrate it?
If intelligence is enough to demonstrate self-awareness, then you should agree that the puzzle-solving neural net is self-aware too.

>> No.3747691

>>3747658
Sure, agreed. My original example puzzle-solver was an admission that there can exist machines which are fairly intelligent but not self-aware.

Just what are you arguing now? I'm not even sure we disagree.

>> No.3747723

>>3747691
The point is that in the end we have no way to find out whether a machine is self-aware or not.
The puzzle-solving machine could be self-aware... but there could also be an intelligent reasoning machine that is not, a p-zombie machine if you want.
Isn't that the point where we disagree?
If you say that intelligence is the only criterion for self-awareness, you have to accept that the puzzle-solving machine is self-aware too. If it's only self-aware when it talks like a human, that's just an anthropomorphic bias.

>> No.3747756

Sure is stupid in here. I mean, how fucking retarded are you people?
>Hurr defuck durr komputaz save memory differently from the brain hurr
NO IDIOT. WE CAN FUCKING EMULATE NEURONS.

>> No.3747759

>>3747723
>The point is that in the end we have no way to find out if a machine is self aware or not.
No, sorry, I can't concede that point. Anything that can intelligently reason about itself is self-aware. That's more than sufficient. We tend to ascribe self-awareness even to animals that pass the mirror test (with a mark on the head or elsewhere that they investigate after seeing it in a mirror), and intelligently reasoning about an abstract notion of self is a higher standard than that.

If chatbots seem intelligent or self-aware to you even for a moment, you just aren't trying to investigate their "minds".

>> No.3747936

OK. Everyone ITT without at least a Master's in CompSci and/or neuroscience needs to shut the fuck up.

Srsly. you're like cavemen discussing the internal combustion engine. No, you don't understand the subject, no your opinion is not important nor educated, yes you are fagets.

>> No.3748864
File: 31 KB, 500x375, neocat.jpg

Here's a simple thought. Eventually, man will be able to fuse technology with the human brain and the rest of the human body. Then we will be able to upload all sorts of information in a matter of seconds, e.g. like when Tank plugged Neo into a data system and uploaded an entire encyclopedia of kung fu into his brain.

Ignore the dumb re-dub: https://www.youtube.com/watch?v=KdGjO20VVqo

https://www.youtube.com/watch?v=NGR9l57Uxw8

>> No.3748887

>>3747759
Chatbots are shit.

>> No.3749045

>>3746740
Because the brain is not "a computer".

>> No.3749066

>>3749045
Neither is a file, but we can upload that to a computer. Hence, it is not necessary for a thing to be a computer in order for it to be uploaded to a computer. Your reasoning is incorrect.

>> No.3749110

you are now aware that if you upload your mind to a computer, what you're really doing is just creating a copy of your consciousness, and if your physical mind dies, it does not live on in the computer -- YOU will still die.

>> No.3749119

>>3749110
But I am my consciousness. As long as my consciousness exists, I will never die.

>> No.3749126

>>3749119

So if you copy your brain to a computer then fuck off to McDonalds for a Big Mac, you're in two places at once?

I don't think so, Tim.

>> No.3749169

>>3749126
I'm actually in billions of places at once, bro. My toes are about three feet from my fingers. The front half of my brain is in a different location than the rear half of my brain. But all of these parts are controlled by my consciousness, which means that my consciousness has influence at multiple points in reality.

>> No.3749190

It will happen once we achieve 100% understanding of how the brain works.

>> No.3749199

>>3749169


So while you're enjoying your delicious special sauce, lettuce, cheese, pickles, onion on a sesame seed bun, and the copy of your brain is sitting on C:\User\Desktop\My Brain, you're experiencing two different situations in two different places entirely removed from one another at once? You, in McDonalds, are experiencing the same things that you in the computer are experiencing and vice versa? Your brain power has been doubled and you experience the world in stereo now?

No. That's retarded. You are no more the copy of your brain than you would be your identical twin.

>> No.3749465

>>3749169
Normally I'd say that's bullshit, but it's an interesting abstract idea. But you know that your brain, even if it's multiple parts of the brain, controls your body.

also

>hivemind
Your consciousness sits at a central point, receiving and distributing info and stimulus. The central You is still able to die, but who's to say that once hivemind is achieved you won't remain the conscious "self" you were as a biological person.

>> No.3750336

>>3749110
That's not the issue here though.

The question this thread asks is whether it could be done. The reasons may not specifically be to try to somehow live forever inside a computer.

>> No.3750387
File: 9 KB, 300x160, image3.jpg

>>3749465
>>3749169

oh aye, the sensors and relays throughout your body do plenty of 'thinking' before anything gets to the brain.

See the related pic, which is a decoding of the signal sent by the optic nerve to the brain of a cat. Plenty of processing goes on in the thalamus, but the lion's share is done by the eye itself before it ever gets to the brain.

http://berkeley.edu/news/media/releases/99legacy/10-15-1999.html

>> No.3750413

Imma vote with, "because the brain is not a digital computer." Also, "No matter how much Deutsch says so, pseudo-random and NEARLY continuous simulations of analogue processes ARE NOT THE SAME as the real analogue computation"

http://folk.uio.no/ovrum/articles/deutsch85.pdf

>> No.3750445
File: 444 KB, 1184x1095, dinokid2.jpg

Because our brains work in trinary (3 trit states, 0, 1, and 2), while computers work in binary. Duh.

>> No.3750570

Does /sci/ believe quantum computing will have an impact on this? (i.e. by facilitating the process of a direct computer-to-brain connection)

>> No.3750596

>>3746766

This was something that was said back when there were only 250 GB hard drives. The brain only has around 1.5 TB of info. Our memories are just bytes of info. If we've got a "picture" memory, the picture is only in VGA format. Same goes if we've got a "video" memory.

tl;dr

That concept is outdated compared to today's technology.

>> No.3750606

Why do I get the feeling binary AI fags are getting buttfustrated by biochem computing processes?

XD

>> No.3750622

>>3746801
Most of the time when they're cracking a new console or portable gaming system, the main thing is to dump a copy of the game without knowing how the fuck to read it. But with time and a lot of research they manage to read it and decrypt it.

>> No.3750746

>>3750570
<sigh> quantum mechanics is virtually irrelevant to the brain. Because large-scale quantum mechanics operates on EXCEPTIONALLY prepared matter, first chilled then evaporated then laser-cooled to a fraction above absolute zero. Unless your brain is at absolute zero, it operates classically. Full fucking stop, qm is irrelevant for the brain.