
/sci/ - Science & Math


>> No.15119724

>>15119714

>The assumption here is that fully simulated neurons are required for systems to display intelligence. If you could "refactor" neuronal processes to be more efficient that would change the scaling a lot.

Yes indeed! And that is still the big unknown here. How "faithfully" do you even have to simulate a neuron beyond the I/O points and thresholds of its synapses, and beyond how a signal at one synapse affects the I/O at another? A sophisticated algo standing in for a single "neuron" is likely the most realistic scenario.
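
To make that concrete, here is a minimal sketch (Python, all parameters invented for illustration) of what "a sophisticated algo as a single neuron" could reduce to if only the I/O thresholds matter: a leaky integrate-and-fire point neuron that keeps the input/threshold/spike behaviour and discards all intracellular machinery.

```python
# Minimal leaky integrate-and-fire point neuron: keeps only the
# I/O behaviour (integrate inputs, leak, fire above threshold)
# and throws away all intracellular detail. Parameters are
# illustrative, not fitted to any real neuron.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9, reset=0.0):
        self.threshold = threshold  # firing threshold
        self.leak = leak            # passive decay of potential per step
        self.reset = reset          # potential right after a spike
        self.v = 0.0                # current membrane potential

    def step(self, weighted_input):
        """One time step: integrate input, leak, fire if above threshold."""
        self.v = self.v * self.leak + weighted_input
        if self.v >= self.threshold:
            self.v = self.reset
            return 1  # spike out
        return 0      # silent

# Feed it a constant drive and watch it fire periodically.
n = LIFNeuron()
spikes = [n.step(0.3) for _ in range(20)]
print(spikes)
```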

>or if there is we're not close to hitting it

Another unknown as of yet. Btw, just thinking about your "spreading out" scenario ... the AI wouldn't necessarily have to spread out via full-fledged copies, it might just as well send out "dumb" agents with specific tasks into neighboring networks and hardware. Ofc that would still keep it localized (and vulnerable to a hardware-level kill), but damn, it could easily increase its reach that way!

>> No.15119705

>>15119671

>(aka "AI boxing"), which turns out to be far far harder than it sounds

Yeah I do see the issue here, clearly. Just consider how certain clever animals find all kinds of unexpected ways to escape their cages ...

>The rest of the internet combined has a LOT more computing power than any one entity running an AI can possibly have.

My main question is whether it could really use "standard" hardware architecture in a meaningful and efficient manner, though that doubt is primarily derived from my experience in neurobiology. Unless you can simplify the simulated wetware to a large degree without losing functionality, there is still a global cap on the computing power available, if we ignore hardware specifically designed to mimic a neuronal network ... think of how a standard processor struggles with computations on complex 3D models while a graphics card is optimized for exactly that kind of operation. The same might be the case for AI hardware: optimization for a neuronal stack architecture.

>if the AI can get access to millions of times more computing power across the internet
>but very much something a competent programmer can do today, and so presumably an AI could manage it as well

I can again only cite neurobiology examples here, and they might not directly translate to a software consciousness. In a distributed network, all kinds of disturbances (which the AI cannot directly control) could knock out certain "nodes" of its mind ... I do see your argument that this could be solved by clever redundancy, so merely for argument's sake I'll say it isn't that easy. In some cases a knockout might not affect the consciousness at all, perhaps only taking out one of its "memories" temporarily; in others it might take out a crucial function RIGHT at the moment that function is needed, introducing incoherency or blockage that disrupts the whole downstream cascade of cognition ... think of it as having a seizure.
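
Purely to illustrate that "seizure" point, a toy sketch (stage names and failure rates are invented): cognition modeled as a pipeline of stages hosted on unreliable nodes, where one node dropping out at the wrong tick stalls the whole downstream cascade.

```python
import random

# Toy model of the "seizure" argument: cognition as a pipeline of
# stages, each hosted on a network node that can randomly drop out.
# If a node fails while its stage is needed, everything downstream
# stalls for that tick. Numbers are entirely illustrative.

STAGES = ["perception", "memory", "planning", "action"]
P_NODE_FAILURE = 0.05  # per-stage chance of dropping out on a given tick

def run_tick():
    for stage in STAGES:
        if random.random() < P_NODE_FAILURE:
            return f"stalled at {stage}"  # downstream cascade disrupted
    return "coherent"

results = [run_tick() for _ in range(1000)]
print("coherent ticks:", results.count("coherent"), "/ 1000")
# Expect roughly 0.95**4 ~ 81% coherent ticks with these toy numbers.
```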

>> No.15119670
File: 1.26 MB, 1073x1544, on_an_ancient_mission.jpg

>>15119665

Thx, might have a look at it. By now I've got my mission objective mostly defined, however. :)

>> No.15119655

>>15119625

>upload copies of itself to all sorts of places

Yes, unless contained it could do so. But that would be the same kind of retardation as purposefully spreading a pathogen all over the world. Just don't plug the damn thing into a public network ... and even if you do, I'd very much assume hardware limitations would restrict it, either through the availability of systems with sufficient computing power or because a special hardware architecture would have to exist in the first place.

>including your toaster and smart lightbulb

I see hardware limitations here tho ... unless we build every toaster with an integrated supercomputer. It might go for a distributed approach (some kind of "cloud computing"), but this could be tricky, as its "consciousness" would either need all subunits working reliably or have to aim for redundancy, which might itself mess heavily with consciousness integrity.
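
Rough numbers behind that trade-off (a back-of-envelope sketch; k, p and r are assumed values, not measurements): if a distributed "consciousness" needs all k subunits reachable, availability collapses as p^k, and r-fold replication buys it back at the price of keeping the replicas coherent.

```python
# Back-of-envelope availability for a distributed "mind".
# Assumes k subunits that must ALL be reachable, each up with
# probability p; with r-fold replication a subunit is only lost
# if all r of its replicas are down at once.

def availability(p, k, r=1):
    per_subunit = 1 - (1 - p) ** r   # at least one replica alive
    return per_subunit ** k          # every subunit must survive

print(availability(0.99, 1000))       # no redundancy: ~4e-5
print(availability(0.99, 1000, r=3))  # triple replication: ~0.999
```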

>> No.15119637

>>15119601

>Oh? Why couldn't you just simulate it?

Ofc you "could" but the computational power for a proper simulation would be quite excessive.

>It is a bit more complex than a single gate.

That is highly generalized. The self-assembling equilibria of the cytoskeleton (which control neurotransmitter and receptor trafficking inside the cell and thus modulate synapse sensitivity to many different stimuli) alone are very complex. And that's not even touching on pathway crosstalk in the synapse itself, feedback with gene expression patterns, or interactions with the surrounding glial cells ... many of them likely factors in the ability to "learn" and adapt within the overall neuronal architecture. Ofc you could break all this down into a complex interlinked set of signal thresholds, but that is far from a straightforward task ... unless you simply wanna simulate a "snapshot" of a hypothetical neuronal network.

>Why is this so different? And why couldn't we simulate it?

Could we "feasibly" for something with the same degree of complexity as a human brain? That is the question. Once proposed simply treating each neurostack as a "black box" with a random number function controlling synapse signaling thresholds ... a random function which can modify itself to be precise. Ofc, this would be a bit of a monkeys with typewriters approach but it could just hit the sweet spot by incremental improvement.

>> No.15119612
File: 28 KB, 480x502, ^^.jpg

>>15119589

>the point that you can't rely on physical control systems

Ah well, not intrinsic ones at least. Better be prepared to put some thermite on its main processor in that case. Or simply pull its power supply. So unless everyone is retarded and allows the thing to roam completely free ... ok yes, we cannot rule out that scenario.

>because if we're working with slightly different definitions, even so, consciousness probably isn't a necessary dependency of intelligence.

Nah, it sure is not. An ant hill as an entire system is "intelligent" in its actions too. One safeguard I could think of that might be "soft-wired" into an initially human-like AI ... the goal of maintaining self-awareness (however we define that, ofc).

>There are technical reasons for this I can go into if you're interested.

Please! ^^

>> No.15119581

>>15119533
>>15119574

Consciousness is not "computable" by the standard methods of computation we use these days. Algorithms don't apply here either, no matter how complex. A single neuron is about as complex as one of our current microprocessors, except that it has the capability to "rewrite" its own hardware according to the inputs (or lack of inputs) it receives ... and it does so rather constantly (mostly on the intracellular level; neuronal junctions are more stable in comparison).
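
The "rewrites its own hardware" point, expressed in software terms (a sketch only; rates and sizes are illustrative): a unit whose weights change as a function of the traffic passing through them, so its input/output mapping drifts with its own activity history, i.e. a plain Hebbian rule.

```python
# A unit whose connection weights change as a function of the
# activity passing through them (plain Hebbian rule), so its
# input/output mapping drifts with its own input history.
# Learning rate and sizes are illustrative.

LEARNING_RATE = 0.01

def activate(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def hebbian_update(weights, inputs, output):
    # "Cells that fire together wire together": strengthen the
    # weights whose inputs coincided with output activity.
    return [w + LEARNING_RATE * output * x for w, x in zip(weights, inputs)]

weights = [0.1, 0.1, 0.1]
for _ in range(100):
    inputs = [1.0, 0.0, 1.0]  # a repeatedly presented input pattern
    out = activate(weights, inputs)
    weights = hebbian_update(weights, inputs, out)

print(weights)  # the weights on the active inputs have grown
```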

>> No.15119567

>>15119532

>It's difficult to imagine exactly how that will be integrated into the economy, but that's my guess.

Would say likely first as a "monitoring" agent for more classical "dumb" automated systems ... doing what a human operator does today to keep these functional (they never simply run on their own; they require intervention due to unexpected combinations of conditions or equipment breakdown). A form of savant AI with a very restricted set of "interests" or goals might be enough here.

>> No.15119563

>>15119530

Goal preservation could actually be an overall issue with such self-aware machines, yes (unless there's a hardwired higher goal which allows a higher instance to overwrite goals). Improving its own computing power might also come with certain caps, not just due to resources but also due to the possible instability of a truly "aware" machine in maintaining "consciousness coherency" ... at least from a biological viewpoint that is an issue (assuming the organization of the AI is somehow similar to the pattern of a neurostack-based brain). It might solve that by creating copies of itself, ofc (which would bring us back to the resource problem again).
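
The "hardwired higher goal" idea could be sketched roughly like this (entirely hypothetical structure, all names invented): learned goals sit in a mutable layer, the top-level goal is fixed, and only an instance with higher authority may overwrite the learned layer.

```python
# Hypothetical sketch of a "hardwired higher goal": goals carry a
# fixed precedence, the top layer is immutable, and the learned
# layer may only be overwritten by a higher-authority instance.

class GoalStack:
    HARDWIRED = ("obey_override_channel",)  # not modifiable at runtime

    def __init__(self):
        self.learned = ["optimize_throughput"]

    def overwrite(self, new_goals, authority):
        if authority < 1:
            raise PermissionError("only a higher instance may overwrite goals")
        self.learned = list(new_goals)  # only the learned layer is mutable

    def goals(self):
        return list(self.HARDWIRED) + self.learned  # hardwired always first

gs = GoalStack()
gs.overwrite(["shut_down_gracefully"], authority=1)
print(gs.goals())
```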

>> No.15119523

>>15119506

Gotta define "good" here first. Would expect it to be initially rather neutral in its impact, more a curiosity than anything ... the ideas of exponential growth/development or a singularity appear awfully unrealistic to me. It would initially face limitations similar to those of an actual human consciousness. A sophisticated "mindless" algorithm would have much more potential for such runaway behaviour, as it would not be weighed down by the rather intricate and "wasteful" ways of human(-like) cognition.

>> No.15119491
File: 76 KB, 700x933, abominable_stupidity.jpg

>>15119477

>Name one scenario that doesn't lead to degeneration or extinction of people.

Why should it? A true AGI wouldn't even occupy the same "biological" niche as humans. Some complex but non-sentient algorithm would be much more of a threat, because some retard would invariably decide to put such a glorified mechanical Turk in charge of important shit.

>> No.15119368
File: 54 KB, 1000x480, see.jpg

>>15118736

>Does that mean mental illnesses can be developed with a simple plane trip?

Nah. It merely means the mental-illness angle is an easy way for certain "Western" circles to dodge the question of divergent cultural and societal standards.

>Science isn't supposed to be consensus driven.

"Science" isn't meant to be done by retards ... which sadly seems to have become the cultural standard over here.

>> No.15118073
File: 44 KB, 319x309, AHAHAH!!.jpg

>>15116357

>Wouldnt they be the most healthy subjects with the strongest immune systems?
>strongest immune system produces the worst autoimmune outcomes

:DDD

>> No.15115967
File: 141 KB, 800x1136, why_are_you_looking_at_me.jpg

>>15114175

It doesn't even have to be. It could just as well be an autoimmune reaction triggered under certain conditions by the combination of the spike antigen, the adjuvant (meaning the LNP particles and the mRNA itself), and perhaps dead cells from the site of delivery (intramuscular, with the heart being muscle tissue too).

>> No.15115928
File: 2.78 MB, 500x282, AHAHAHAHAHA!!!.gif

>>15115918

>would kill on the market

ftfy

>> No.15115560
File: 203 KB, 783x1200, SAFE_AND_EFFECTIVE.jpg

>>15115552

>overwhelmingly mild and self-limiting

Much safe and effective. Very duckspeak and goodthink. :)

>> No.15115522
File: 275 KB, 845x1036, I_like_you!.jpg

>>15115401

The commies are getting more retarded every year ...

>> No.15115509
File: 1.82 MB, 160x192, mildly_smug.gif

>>15115427

https://www.youtube.com/watch?v=krxU5Y9lCS8

>> No.15115402
File: 220 KB, 620x877, corkscrewed.jpg

>>15115264

Cork planets.

>> No.15115396
File: 68 KB, 400x400, high_impact_sexual_violence.png

>>15113297

Can't say it wouldn't be the most efficient course of action. It would also demonstrate how humans are perfectly capable of comprehending and (re)creating one of the deepest principles of nature.

>> No.15115387
File: 8 KB, 189x267, sci.jpg

>>15114671

>> No.15115385
File: 47 KB, 780x666, fuck_around_find_out.jpg

>>15114302

>What's the alternative?

The actual scientific method, and not this sophisticated statistical faggotry. It mostly just amplifies noise anyway.

>> No.15113147
File: 960 KB, 500x214, why_not_both.gif

>>15113106

>Ultimately there exists only two main views in philosophy

And why would these be mutually exclusive again?

>>15112531

>How would you refute this?

Hmmm ... perhaps by crushing the skull of the person making this statement with a club.

>> No.15100621
File: 40 KB, 640x425, stole_your_heart_senpai!.jpg

>>15100028

>can /sci/ answer her questions?

:)
