
/sci/ - Science & Math



File: 20 KB, 300x300, brain.jpg
No.3755024

A physics colloquium I went to last spring semester was given by a professor studying the role of chaos theory in explaining the brain. I'll give you a brief greentext intro to what Chaos Theory is.

>A sufficiently complex system may be dictated by deterministic forces, but its state may not be predictable over even a relatively short period of time due to the slightest unknowns (see: the butterfly effect). Small changes can lead to cascades of change. A nonchaotic system is effectively predictable over long periods of time. However, systems aren't always exclusively chaotic or nonchaotic.
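
A minimal sketch of that sensitivity, using the logistic map as a stand-in for a chaotic system (the map, the parameter r = 3.9, and the starting values are my own illustrative assumptions, not anything from the colloquium):

# Two trajectories of the logistic map x -> r*x*(1-x), started 1e-9 apart (Python).
r = 3.9                      # a parameter value in the chaotic regime
x, y = 0.4, 0.4 + 1e-9
for step in range(1, 41):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))
# the initially negligible gap keeps growing until the two trajectories are unrelated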

By experimenting with electrical stimulation of still-living brain slices from mice, he found that the induced electrical impulses led to a cascade. The few cells that were stimulated by the current caused many cells across the slice to be activated, but it wasn't exactly chaotic. It fit the mathematics of the border region between chaos and nonchaos.

The brain is on the edge of chaos. Your consciousness from moment to moment is dictated by which neurons are active. Those neurons cause adjacent neurons to fire, and so on. The path of this causal chain is dictated by your brain's unique structure, which has itself evolved over the years in response to what you have thought about and experienced (regions of the brain you use more tend to have more connections).

>> No.3755036

(continued)
The causal chain of neuron firings is the deterministic side of consciousness, and in simplistic terms it represents your train of thought. These neuronal causal chains are also started and influenced by stimulus from one's environment, but that isn't relevant at the moment. The chaotic side of the system is the random activation of neurons across the brain. Neurons are activated all the time without the help of already-active neurons, and these can spawn brand new cascades. One random neuron wouldn't constitute a thought, but it may cause many neurons to activate. As a whole, all those neurons may represent a simple thought, like a color or a smell.

These random neuron firings may be one aspect of creativity, or perhaps help start new trains of thought when one has reached a dead end. For example, a person trying to figure out the composition of a crown may have a random neuron firing that activates a region of the brain representing the color, taste, or feel of copper. This may lead the individual to think about their copper bathtub, then about the actual experience of getting into the tub and the observation that the water level rises when one slides in. Then comes the realization of Archimedes' principle. Archimedes may have been inspired by doing it firsthand while thinking, but he could just as easily have had the inspiration from a random remembrance of the experience of getting into his tub.

>> No.3755044
File: 74 KB, 240x191, brain_neurons.jpg

(continued)
On a side note, Attention Deficit Hyperactivity Disorder (ADD or ADHD) may have roots in the over-activation of random neurons, causing one's train of thought to be sidetracked more often than is beneficial. The amphetamines used to treat ADHD cause increased neuronal signal transmission, which may help keep current trains of thought from being overpowered by smaller random activations.

You guys are of course free to discuss any portion of this subject, but the reason I decided to post is to get some input on what this all may mean for Artificial Intelligence. If these random neuron activations are a necessary part of human problem-solving skills, then perhaps Artificial Intelligences will benefit from similarly random activity. Thoughts?

>> No.3755060
File: 1.65 MB, 126x126, data_phone.gif

tl;dr - Will we need to introduce random events within the minds of AIs to allow them human-level problem-solving skills?

>> No.3755093

I thought there was no such thing as true randomness? Doesn't this mean that human-level artificial intelligence is impossible?

>> No.3755108

interesting if true

>> No.3755143

Not sure where to start, but I don't see any 'random' behaviour at all in individual neurons or large neuronal clusters.

>> No.3755172

>>3755143

What I mean is that areas of the cortex devoted to stimulus input (the occipital cortex) have been highly predictable for decades. Also, the frontal cortex and its associated higher-level processes are being described as highly task-dependent/driven.

>> No.3755218

>>3755060
We already use noise ("random events") in optimization algorithms, or in some decoders of channel codes, in order to avoid converging to local minima (in the former) or to help convergence to a codeword (in the latter). Using noise to help computers solve problems is not a "new" concept at all.
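
A minimal simulated-annealing sketch of that idea in Python (the bumpy function f, the starting point, and the cooling schedule are all made-up choices for illustration, not anything from a real decoder):

import math, random

def f(x):
    # a bumpy 1-D function with many local minima
    return x * x + 10 * math.sin(3 * x)

def anneal(steps=20000, temp=2.0, cooling=0.9995):
    x = 4.0                                   # start in a poor local basin
    best_x, best_f = x, f(x)
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)  # a random ("noisy") move
        delta = f(candidate) - f(x)
        # always accept downhill moves; accept uphill moves with a temperature-dependent chance
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if f(x) < best_f:
            best_x, best_f = x, f(x)
        temp *= cooling                       # the noise is gradually turned down
    return best_x, best_f

print(anneal())   # tends to end near the deepest dip instead of the nearest one

Plain hill descent from the same starting point would get stuck in the first dip it finds; the noisy acceptance step is what lets the search climb back out.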

>> No.3755221

>>3755093
Quantum effects are the only theoretically random events, but that is beside the point. Events can be practically random.

>> No.3755224

>>3755024
In other words, there's a difference between closed systems and open systems.

amazing

>> No.3755241

>>3755093
Computers are better at being random than human beings are.

Computers don't do "true randomness"; they can approach it to a certain level (combining pseudo-randomness with sensor readings from various things that really shouldn't be correlated, etc.), but they cannot be truly random. However, this is not some kind of theoretical limitation for AIs, because the randomness they can produce is considerably better than what you can produce. As for the randomness inside your brain (which you cannot observe but which impacts your intelligence), it's not theoretically impossible to simulate.
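
A quick Python illustration of the difference (random.Random is a seeded pseudo-random generator, random.SystemRandom pulls from the OS entropy pool; the seed 1234 is arbitrary):

import random

prng = random.Random(1234)        # deterministic: same seed, same sequence, every run
osrng = random.SystemRandom()     # backed by os.urandom, no reproducible seed

print([prng.randint(0, 9) for _ in range(5)])    # identical on every run
print([osrng.randint(0, 9) for _ in range(5)])   # differs from run to run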

>> No.3755251

>>3755241
Either you take the assumption that everything is cause and effect, but one cannot always predict the variables EXACTLY because there are far too many to measure accurately, or even to find them all.

or

It's all chaos, and we are only able to make predictions because we are looking at it from the simplest possible perspective (i.e. predicting final outcomes and not the actual interactions).

>> No.3755252

OP: I found what you said about random neuronal firings possibly being partially responsible for creativity to be an excellent and very interesting idea. So thank you.

>> No.3755265

It sort of depends, I guess. In some ways, I don't think you'll need to intentionally put chaotic behavior into the system. On the other hand, the complexity of behavior required will on some level be inherently chaotic. It seems like it would be harder to get an AI to maintain more than a certain level of predictability in behavior and work reliably as an AI than it would be to make one that opportunistically adapts itself to the environment in a way that may not be obvious to an outside observer.

>> No.3755297

>>3755172
>>3755218
What is your area of study?

>> No.3755315

>>3755265

but it isn't inherently chaotic, as the OP suggested with the illustration of the semantic association between objects and memories and other shitty examples

>> No.3755324
File: 828 KB, 384x272, 1314442110073.gif

>>3755241
Oh gawd, I'd kill for a study/competition between humans and computers over which is better at random number generation.

The two groups would produce many random numbers and whoever has the fewest patterns present in the series wins.
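
One crude way to score such a contest, sketched in Python (lag-1 autocorrelation is only one of many possible pattern tests, and the 0-9 digit range is just an assumption):

import random

def lag1_autocorrelation(seq):
    # near 0 for a pattern-free sequence; pushed away from 0 by simple patterns
    n = len(seq)
    mean = sum(seq) / n
    num = sum((seq[i] - mean) * (seq[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in seq)
    return num / den

machine = [random.randint(0, 9) for _ in range(10000)]
print(lag1_autocorrelation(machine))
# type in a human-generated digit sequence and feed it to the same function to compare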

>> No.3755337

>>3755324
I know fuck-all about it but something from my video-game days tells me that computers can only generate pseudo-random numbers?
Is that true, or is my brain full of shit?

>> No.3755339

>>3755324
the machines win

>> No.3755340
File: 492 KB, 500x262, 1314607049203.gif

>>3755252
Half of the reason I post on /sci/ is for you, little buddy. I'm glad I could help. Ideas don't do much good in the grand scheme of things if they aren't disseminated.

>> No.3755342

>>3755297
( >>3755218 here )
I'm a CSfag working on channel coding right now (information theory). They are codes that wrap information into a long but more noise-resistant message before transmission, so that a decoder has a high chance of perfectly recovering the information even when there is a lot of noise on the channel.

The most advanced decoders in today's systems (3G, 4G, digital TV, WiMAX, etc.) use a lot of entities that work on local problems and communicate their estimates of what was transmitted, along with a measure of certainty about each estimate. You let the entities "talk" long enough, and the decoder converges, hopefully to what you had at the start. In some particular cases, where a group of closely related entities were all unlucky enough to receive bad data, it is hard to make them realize that their data is erroneous, because they support each other, and that prevents the decoder from converging. Throwing a bit of noise into the system might get you out of this erratic behavior without taking you too far from the solution, allowing the decoder to successfully recover your message.

>> No.3755363

>>3755324
You'll kill for it.
But will you pay?

Because I'll do it. I swear to Sagan I will.
Just limit the numbers to 10 digits, please.

>> No.3755379

>>3755324
It's easier to test than that. A funny experiment is the following: you take a human and ask him to choose A or B. A computer must choose A or B before the human does, reveal its choice at the same time as the human, and it wins if it chose the same.

Now if you only do it once, the computer will draw randomly, the human too, and it'll be a 50/50 chance.

However, if you do it many times (say one or two hundred times), and if the human is not allowed to "cheat" and pick random numbers from the outside world (he must "invent" them; put him in the dark in an isolated room and don't give him much time to think, if necessary), then the computer can easily learn and eventually beat the human by a noticeable margin. This learning can be done in many ways; hidden Markov models are a good one.
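
A toy Python sketch of the learning side (the "human" here is simulated with an assumed bias of switching 70% of the time, so the numbers are purely illustrative; a real experiment would feed in actual human choices, and a hidden Markov model would be a more serious learner than this simple transition counter):

import random
from collections import defaultdict

def biased_human(prev):
    # hypothetical stand-in for a human who switches more often than chance
    if prev is None:
        return random.choice("AB")
    return ("B" if prev == "A" else "A") if random.random() < 0.7 else prev

counts = defaultdict(lambda: {"A": 0, "B": 0})   # how often each choice follows the previous one
prev, score, trials = None, 0, 200
for _ in range(trials):
    # the computer commits to its guess before seeing the human's choice
    if prev is None or counts[prev]["A"] == counts[prev]["B"]:
        guess = random.choice("AB")
    else:
        guess = max(counts[prev], key=counts[prev].get)
    choice = biased_human(prev)
    score += (guess == choice)
    if prev is not None:
        counts[prev][choice] += 1
    prev = choice
print(f"computer guessed right {score}/{trials} times")   # noticeably above 100/200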

I can't give you pointers to this kind of experiment's data, but I know they've been done and their results were conclusive.

If, however, you ask a human or a machine to predict what another machine will do, 100 tries will be far from enough to start noticing a correlation. Millions of tries would still be too few for a simple hidden Markov model to notice anything.

>> No.3755384

>>3755337
The computer program has to get the random number from somewhere. Fortran has a function that produces a random number every time it is called, but if you restart the application it will generate the same sequence of numbers it generated last time. That is why, in my non-linear dynamics class, I had to draw from the computer's internal clock for an ever-changing seed for the random number generator algorithm. If you set a computer to the exact same time and ran the same program, it would generate the same numbers, though. That is why computers can't generate truly random numbers. Every action they take is based on their programming.
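
The same behaviour is easy to show in Python (using the standard random module instead of Fortran; the point is just that identical seeds replay identical sequences):

import random, time

seed = int(time.time())          # the "ever-changing" clock seed described above
rng1 = random.Random(seed)
rng2 = random.Random(seed)       # same seed, so the generator replays the same numbers
print([rng1.random() for _ in range(3)])
print([rng2.random() for _ in range(3)])   # identical to the line above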

The same kind of thing applies to humans. We don't have as clear-cut programming as computers do, but it's all deterministic in the end. The only true source of random numbers is quantum mechanical events... at least until we find the hidden variables. *fingers crossed*

>> No.3755395

>>3755384
>>3755379
System time is one of the weakest seeds entropy-wise.
Both of you are morons.

Also, if you stuck a human in a small room with just a bed and no light, they would lose the majority of their mental capacity within a couple of years and be overwhelmed by simple stimuli.

>> No.3755400

Very interesting idea, OP, about random neuronal firings in the brain being creativity, and then relating that to ADHD. My brother, who consistently tested borderline ADHD throughout his life, likes artistic (and arguably creative) pursuits. When he eventually went on ADHD meds, he explained to me several times that when he skipped them it was because they made him feel less creative, while simultaneously making him feel like he was in a fog. I always knew when he was off them, because he was much more annoying.

However, I would also like to add that while random neuronal firing could be a link to something creative, new connections between concepts can also be seen as creative, and in that respect one would have to measure the average number of connections per neuron in the brains of individuals with measurably different creative strengths. This causes me to consider the fact that female brains tend to have a thicker corpus callosum than male brains, and I believe the corresponding stereotype is that "men are like waffles" (compartmentalized information) while "women are like spaghetti" (every piece of information is connected to several other pieces of information). Although I don't know how true the waffle/spaghetti comparison is; it sounds like pop psych to me.

>> No.3755407

>>3755342
>They are codes that wrap information into a long but more noise-resistant message before transmission
>longer message... less chance of error
Is it kind of like transmitting the same bit multiple times and taking the average?

>> No.3755414

>>3755407
Go do some research on lossless formats; there's a reason they're usually much larger than their counterparts that lose their bitrate over time.

>> No.3755438

>>3755395
>both of you are morons
1) That's not nice.
2) I just needed random numbers for some chaotic system modeling and, once, to approximate a quantum wave function with the Monte Carlo method. It's not like I was encrypting top secret information.
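
For the curious, a minimal sketch of the kind of thing I mean (Metropolis sampling of a density proportional to exp(-x**2), i.e. the squared harmonic-oscillator ground state in convenient units; the step size and sample count are arbitrary choices):

import math, random

def metropolis_mean_x2(samples=200000, step=1.0):
    # random-walk Metropolis: estimate <x**2> under p(x) ~ exp(-x**2); exact answer is 0.5
    x, total, rng = 0.0, 0.0, random.Random(0)
    for _ in range(samples):
        proposal = x + rng.uniform(-step, step)
        # accept with probability min(1, p(proposal)/p(x))
        if rng.random() < math.exp(x * x - proposal * proposal):
            x = proposal
        total += x * x
    return total / samples

print(metropolis_mean_x2())   # roughly 0.5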

>> No.3755439

Stop going on about random neuronal firing; it doesn't exist. Read up on "probabilistic".

>> No.3755454

>>3755438
>random numbers for some chaotic system modeling
>system clock

you're going to be able to find a pattern quite easily

>> No.3755459

>>3755438
> Monte-Carlo method.

and instantly I lose interest

>> No.3755471

>>3755407
Yes. The simplest example is to add a parity check, or to simply clone your data several times. However, the quality of these codes is very low compared to what you can do. You can look up "Hamming code (7,4)" on Wikipedia; it is properly defined and illustrated, and you should be able to understand most of the article without a problem. It will give you a pretty good idea of how many channel codes work.
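
A bare-bones Hamming(7,4) sketch in Python, if you want to play with it before reading the article (the bit ordering and names are my own choices; the Wikipedia article uses matrices instead):

def hamming74_encode(d):
    # d is a list of 4 data bits; returns 7 bits with parity bits at positions 1, 2 and 4
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    # recompute the three parity checks; together they point at the flipped bit, if any
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3          # 0 means no single-bit error detected
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]     # the 4 recovered data bits

word = [1, 0, 1, 1]
received = hamming74_encode(word)
received[5] ^= 1                        # the channel flips one bit ("noise")
print(hamming74_correct(received) == word)   # True: the error was found and fixed

Four data bits cost three extra parity bits here, which is already much cheaper than cloning the data three times to get the same single-error correction.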


>>3755414
Not related. Channel codes do not compress data the way lossless formats like FLAC or PNG compress a raw sound or image. They take your data (which might be compressed, it can be a FLAC sound for instance), add redundancy to it, and transmit it.

>>3755395
Can't see why I (>>3755379) am a moron. I haven't talked about seeds anywhere, and I don't think I've talked about letting someone die alone in a room either. I'm talking about isolating a subject for 2 minutes so that he gives you a hundred binary choices that a computer will learn from.

>> No.3755484

>>3755439
They may not be ABSOLUTELY random, but that doesn't matter. What matters is that they are practically random when compared to the highly determined neuron cascades already going on in the brain.

If it bugs you that much just insert the word "practically" before every instance of the word "random". :\

>> No.3755483

>>3755471
people have a memory

>> No.3755515

>>3755459
FYI, none of your posts have added much to the discussion. Perhaps all of our time would be better spent if you picked another thread to focus your attention on.

>> No.3755516

>>3755454
Can't see you guys' problem with how he sets his seed from the system clock for his experiments. Actually, if he only runs his experiment once and lets his PRNG keep generating numbers without re-seeding it, he could very well leave the default seed. If you're only simulating things, a PRNG will have roughly the same internal correlation for any two seeds you feed it. The system clock's entropy is a problem if:
- You check your clock very frequently to initialize different seeds that are supposed to be independent (and it's obviously even worse if you check the clock more than once per second on an OS whose clock has precision only up to the second),
- You are generating a cryptographic key pair and you're hoping that no one knows anything about your seed, because otherwise it would be compromised. Fortunately, in these situations, you usually don't just roll your own code, but use a program whose PRNG seed comes from several sources, including timing from "random" moves of your mouse (which aren't REALLY random, but aren't inferable by an attacker).

For anything else, the initial seed really doesn't matter much.

>> No.3755528

>>3755483
I'm sorry, I don't get which exact part of my post you are answering, or what you mean by this rather obvious statement.

>> No.3755534

>>3755528
and that's why you're a moron

>> No.3755538

>>3755484

I honestly completely disagree, and I realise it's far beyond the realm of anything I could prove with a fucking MRI/MEG scanner. Which is probably why people don't say this very often. Politics bore me.

>> No.3755552

>>3755534
No, I'd think it's less of a clue about me being a moron than about you being a sucky troll.

>> No.3755565

>>3755552
>I take things at face value and never actually think deeply
>you must be a troll because you do not elaborate enough for me to understand, due to lack of a brain

>> No.3755581

>>3755565
> you must be a troll for keeping your thoughts deliberately senseless or at least incomprehensible,
> you must be a troll for criticizing people's hasty judgment while you call them morons,
Please close the door while you get the fuck out of this thread.

>> No.3755586

>>3755581
>lol i am autistic

>> No.3755603

>>3755586
Well, dear sir, I'd very much like to continue this talk; unfortunately it's almost 4 am and I haven't had dinner yet, so I'll grab some food and might not come back afterward.

However, with some luck we might meet again. You'll recognize me: I'm the guy that says things that you don't really understand with your low grade scientific education so that you feel compelled to answer by making a fool of yourself.

See you around.

>> No.3755622
File: 428 KB, 500x322, 1315515066182.gif

>>3755538
So, you disagree with the claim that individual neurons fire throughout the brain without being caused by adjacent neurons?

I only have one possible explanation for why a neuron might fire without any adjacent neurons being the cause. Neurons communicate with adjacent neurons across the gaps between them (synapses) using chemicals (neurotransmitters). Once the message has been passed along, the neurotransmitters are absorbed back into the neurons. There are always relatively small amounts of neurotransmitter left behind, and on rare occasions they may cause one of the two neurons to fire.

Yes no?

>> No.3755654

>>3755622
>I only have one possible explanation
>I only have one
>I

And that's why, kids, the human species is doomed.

>> No.3755678

>>3755622

The whole point of their behaviour is to be selective, and their global behaviour is far more conducive to a kind of 'selective flow' from lower to higher processing areas than today's blobbology would have you believe.

ie, some of http://nbr.physiol.ox.ac.uk/~plc/

but you will have to bury yourself in the lit to get it

>> No.3755680

And thus, everything is fractal

>> No.3755698
File: 500 KB, 500x364, 1315025910493.gif

>>3755654
I think you need to put more confidence in the human ability to reason. We are both smart individuals. We both know how neurons basically work. We should be able to come up with some possible explanations.

If you have any qualms with my theory then do speak up.

>> No.3755723

>>3755698
>one solution offered
>yes or no
>feel free to offer another solution, but don't criticize mine in the process
>my feels hurt

>> No.3755736
File: 677 KB, 320x184, 1314790883309.gif

>>3755680
How are fractals relevant?

>> No.3755765
File: 841 KB, 200x201, 1314477458485.gif

>>3755723
>feel free to offer another solution, but don't criticize mine in the process
What part of my post implied that?

>> No.3755830

>>3755736
Fractals are structures arising from a feedback system. One small change in the original seed dramatically alters the outcome. This is very much parallel to chaos.
The issue you are overlooking about probabilistic vs. random is in their distributions. Probabilistic functions can have weights or biases towards certain values, much like neurons reinforce their probability of firing from certain sources - plasticity (right?).
There is a whole lot of chemistry involved here that we are all oversimplifying. I am sure that if you really were concerned with it, you would find that even the "random" perturbations that cause the initial neuron cascades are much more probabilistic, as a function of temperature and the surrounding compounds.
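
A trivial Python sketch of "probabilistic with weights" versus uniform randomness (the neuron labels and weights are made up):

import random
from collections import Counter

neurons = ["n1", "n2", "n3", "n4"]
weights = [8, 3, 2, 1]        # hypothetical "synaptic strengths": n1 is strongly reinforced

firings = random.choices(neurons, weights=weights, k=10000)
print(Counter(firings))       # heavily biased toward n1 rather than uniform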

>> No.3755841

>>3755678
I apologize, but I don't have a formal education in neuroscience and only draw on the random bits of information I read online and in science periodicals. My knowledge of neuroscience terminology is therefore limited. Your post doesn't make much sense unless you are referring to something specific when you say "selective flow" and "global behavior". Google doesn't even know what "blobbology" is.

The reason I went into physics is because I didn't want to be bogged down with terminology. Would you mind keeping it simple for my sake and focusing on what is physically happening within the brain?

>> No.3755848

>>3755830
I could use the same faulty logic to say "thus everything is a hash".

>> No.3755863

>>3755848
I agree that his argument is flawed. Though, it's kind of true that there are a few scale levels in the brain that could be interpreted as the first 3 or 4 steps of a fractal: locally dense networks of entities (neurons at the lowest "fractal" level, neocortical microcolumns at the highest level) with a few long-range connections, and roughly the same kind of behavior: activity of a unit tends to provoke activity of the units linked to it, if enough contribution is gathered.

>> No.3755864

>>3755848
C'mon now, it's perfectly valid. Think about how similar fractals and cellular automata are. Then think about how similar cellular automata and neural networks are.

It's obvious that there's a common dynamic between these.

>> No.3755877

>>3755864
Yeah but he wasn't directly criticizing your conclusion, or your premises. Only the logical reasoning in between. And he was right that it was flawed. It was a syllogism.

>> No.3755916

>>3755830
>One small change in the original seed dramatically alters the outcome.
In that case, bringing up fractals wasn't really relevant. Just because A contains C and B contains C doesn't mean A is B. It's like saying all modes of transportation are trains because trains move. Perhaps a more accurate thing to say is that everything is chaotic... or were you referencing some minor meme?

Everything boils down to quantum mechanics, and of course quantum mechanics is probabilistic, but if THAT is the reason you are arguing that there are no random neural activations, then this argument is just semantics. You simply have a definition of randomness that excludes EVERYTHING.

>> No.3755952

Neurofag here

>>3755622
You're not going to have bleed-over between synapses, for multiple reasons:
>Synapses are REALLY small. I forget the exact size, but the synaptic cleft is something like 20-30 nanometers IIRC, although the morphology of the synapse does differ depending on the cell type. Meanwhile the distances between different synapses are much bigger.
>Neurotransmitters don't last very long once released. They break down pretty quickly. Some of the molecules don't even last long enough after being released to make it to the other side of the synapse.
>It takes a LOT of neurotransmitter to activate most of the receptors on a synapse. There are a lot of receptors on a synapse too, btw, and each one typically requires several molecules of neurotransmitter to bind to it before it activates. Keep in mind that just because one synapse becomes active doesn't mean that anything at all will happen. There are anywhere from hundreds to thousands of synapses on a neuron, and a good fraction of them need to be activated for the neuron to reach an action potential. Not only that, but often there are multiple types of synapses on a neuron that do contradictory things (i.e. one may help inhibit the neuron while another may help excite it).

>> No.3755991

Same guy from >>3755952 here

Just wanted to add to the topic that what OP described in the second part of >>3755036 isn't a new idea at all. Cognitive psychology has been aware for a while now that there is a huge blossoming effect in the brain/mind from virtually any stimulus. It actually has surprisingly big effects, and incredibly enough you can describe an astounding amount of cognitive function and behavior just from the idea that activation in one area can lead to activation in another, which leads to further activation in several more areas, and so on. If anyone is interested, you can look into "embodied cognition". I actually thought the whole theory was total bullshit while I was taking a class from a professor who's a big name in the field, but after getting more familiar with the research in the area I'm actually pretty convinced by it now.

>> No.3756088

>>3755736
>>3755830
The mention of fractals wasn't the same person; I'm only trying to clarify his rationale. In the same spirit, "blobbology" is clearly a mocking word for the practices done today.

>>3755952
Is there a form of quantum Zeno effect happening? For instance, in photosynthesis they were able to show that the lifetime of electron-hole pairs is greatly extended beyond what it should be, due to quantum mechanical effects. They were also able to show ~99% fidelity of the pair making it to the system that uses it for energy, despite more than enough thermal energy to invoke decoherence. Are there similar effects happening in the cells of the brain?

Also, what made you change your mind about the theory?

>> No.3756216

>>3756088
The measurements of how long a molecule lasts in the synapse are empirical, not theoretical, so if there were a quantum effect like that happening, it's already being taken into account. The reason the molecules break down is enzymes in the synapse, btw; it's not due to a half-life inherent in the neurotransmitter molecules themselves. It all boils down to statistics and probability. The way it works is that the presynaptic cell just dumps a fuckton of neurotransmitter into the synapse, with the intent being that most of the molecules will make it to the other side, where a tiny fraction will then, just by chance due to sheer numbers, bump into a receptor and activate it. While all of that is happening, though, there are a lot of enzymes floating around in the synapse that will immediately begin tearing apart any neurotransmitters they come into contact with.

As for the embodied cognition theory I was talking about, I was originally skeptical because of how crazy some of the studies were (there was one that found that if a smell of fish was pumped into a room at a concentration too low for humans to consciously detect, participants would be more skeptical because "something smells fishy"; that was where I originally threw my hands up and said "okay, this is bullshit, fuck this class"). However, I began to buy into it when we started talking about how it could explain more down-to-earth cognitive processes like object identification, language, and mental simulations (e.g. daydreams).

>> No.3756226

>>3755952
I was actually theorizing that neurotransmitters activate the same synapse they were released into, simply at a later time than was originally intended.

The argument that a neuron requires too much neurotransmitter to fire definitely holds water. It boils down to numbers at this point: how many neurotransmitter molecules stick around within the synapse, how many are required to cause one of the two neurons to fire, the reuptake rate of neurotransmitters back into the neurons, etc.

Being right is nice, but learning is better.