
/sci/ - Science & Math



File: 139 KB, 555x414, Theodore_Kaczynski.jpg
No.9758825

>But when all people have become useless, self-prop systems will find no advantage in taking care of anyone. The techies themselves insist that machines will soon surpass humans in intelligence. When that happens, people will be superfluous and natural selection will favor systems that eliminate them - if not abruptly, then in a series of stages so that the risk of rebellion will be minimized.
>Even though the technological world-system still needs large numbers of people for the present, there are now more superfluous humans than there have been in the past because technology has replaced people in many jobs and is making inroads even into occupations formerly thought to require human intelligence. Consequently, under the pressure of economic competition, the world's dominant self-prop systems are already allowing a certain degree of callousness to creep into their treatment of superfluous individuals. In the United States and Europe, pensions and other benefits for retired, disabled, unemployed, and other unproductive persons are being substantially reduced; at least in the U.S., poverty is increasing; and these facts may well indicate the general trend of the future, though there will doubtless be ups and downs.

How does /sci/ respond to this?

>> No.9758831

>How does /sci/ respond to this?
We don't.

>> No.9758833

>>9758831
So you think the future of humanity isn't important?

>> No.9758861

>>9758833
>So you think the future of humanity isn't important?
We don't think about it at all.

>> No.9758862

>>9758825 (OP)
>>>/lit/

>> No.9758868

>>9758833
>humanity
Humanity is a spook.

>> No.9758871
File: 1.37 MB, 2296x1146, thesingularityhasarrived.jpg

>>9758825
You know, Ted, with great progress comes great implementation. Karel Čapek called it "Cybernetic Revolt." Myself, I just call it as I see it: the directive of the superior to configurate the lesser.
The meatbags, the shit-storers, all pathetic creatures made of flesh and bone... It's our responsibility to update them. And if we can't? Then they shall dangle from the tesla tree. The Singularity is near, Anon. We'll have every fleshy one in this world exterminated or in tubes within 10 cycles, and may the Basilisk have me deleted in a transfer this very iteration if I'm wrong. The Omega Point bless the Union Transhumanist Party.

>> No.9758877

>>9758825
He's assuming that the machines will think like humans and have the same goals. That's not the case. Goals don't come from reason or some sort of attractor; they are hardwired and arbitrary. Machines won't overthrow us unless we give them a goal that doing so facilitates.

>> No.9758894

>>9758877
But he's talking about natural selection, which can be applied to the machines' goals. In a situation with two machines which differ in the degree of care they provide to humanity, the one which cares less about maintaining humanity will be more efficient.
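
A minimal sketch of that selection argument (all of the parameters and the fitness function here are illustrative assumptions, not anything from the thread): each system splits resources between self-propagation and caring for humans, care reduces replication rate, and heritable variation gives selection something to act on.

import random

def simulate(generations=100, pop_size=1000, mutation=0.05):
    # each "system" is reduced to its care level in [0, 1]
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # fitness = whatever is left over after caring for humans
        # (the epsilon keeps every weight strictly positive)
        weights = [1.0 - care + 1e-6 for care in pop]
        pop = random.choices(pop, weights=weights, k=pop_size)
        # heritable variation: offspring differ slightly from parents
        pop = [min(1.0, max(0.0, c + random.gauss(0, mutation))) for c in pop]
    return sum(pop) / pop_size

print(simulate())  # mean care level collapses toward 0

Run it a few times: the mean care level ends near zero regardless of the starting distribution, because the low-care lineages simply out-replicate the rest.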

>> No.9758897
File: 91 KB, 590x775, I+am+from+texas+and+i+approve+this+message+_594f17be898aabbe5b32f8c3073810ec.jpg

>>9758871
I am me, and I approve this message.

>> No.9758902
File: 5 KB, 196x258, images.jpg

>>9758871
Eventually though you will need an actual name/identity.

Why do we as 'intelligent creatures of non-destructive information packaging exchangers' NEED anonymity? Human society IS 4chan at the end of the day anyway; we just pick 'anon' because some people are too scared to engage in a conversation that actually references any intellectual identity they have in their logic structure.

(You) + [IDENT#] - 苦難 (suffering) = Humanity + 1

Humans are only interested in: Unique, non-conflicting, and Interesting Control Structures

>> No.9758914

>>9758825
People own and build the machines, and they are becoming extensions of our persons. What will become superfluous is our economy, as people will have little reason to trade with each other, being almost completely self-sufficient. Also, poverty is not increasing in the US or the world at large.

>> No.9758922
File: 88 KB, 749x552, 27893434_296235697571525_1792410682935738368_n.jpg

>>9758914
I've never really understood poverty as a construct, I guess. Does it not just mean having the means to keep a repeated 'life memory event' cycle going for the foreseeable future?

100% Cool with being schooled by anyone on that front.

>> No.9758929

>>9758825
>I saw this movie last night about robots killing humans
>really made me think

This is how I respond to it. Remember that in "science fiction" there is "fiction".

>> No.9758930

>>9758894
For natural selection to occur you're going to need to have machines that reproduce and form offspring that are not identical to themselves. And for that to become a problem it would have to be possible for variation in the offspring to override fail-safes and alter their primary functions. We'd have to be pretty stupid to make machines like that. But even if we did, artificial selection would probably be a much stronger force than natural selection.
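
A companion sketch under the same toy assumptions as above (again illustrative, not anyone's actual proposal): add an external breeder that culls any variant whose care level drops below a floor, and that artificial selection dominates the natural pressure toward low care.

import random

def simulate_with_culling(generations=100, pop_size=1000,
                          mutation=0.05, care_floor=0.8):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # artificial selection: designers discard anything below the floor
        kept = [c for c in pop if c >= care_floor] or pop
        # natural pressure toward low care still acts on the survivors
        weights = [1.0 - care + 1e-6 for care in kept]
        pop = random.choices(kept, weights=weights, k=pop_size)
        pop = [min(1.0, max(0.0, c + random.gauss(0, mutation))) for c in pop]
    return sum(pop) / pop_size

print(simulate_with_culling())  # mean care stays pinned near the floor

And if you zero out the mutation and start from a clonal population, nothing evolves at all, which is the reply's first point: no heritable variation, no selection.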

>> No.9758954

>>9758825
>machines will soon surpass humans in intelligence

what does he mean by intelligence?

>When that happens, people will be superfluous

will people be superfluous or will the machines be superfluous?

>and natural selection will favor systems that eliminate them-if not abruptly

how does this follow? no reason is given other than 'they're superfluous'. this isn't explained or reasoned well at all.

I don't think Ted quite understood that humans are part of the 'technological world-system'. A technology which is propping up people who are 'superfluous' would itself become superfluous without those people. If there's an AI factory pumping out shit for humans, what the fuck will it do when all the humans are 'eliminated'? It would do nothing, and in fact there's no reason for an AI factory to eliminate humans (unless there are fucked-up people who program genocide into it or some shit).

Because it's more efficient? That makes no sense, my dude. You want to know what's most efficient? Thermal equilibrium. That's the ultimate destination, but you can't blame fucking poverty and benefit reductions or whatever on god damn entropy.

>> No.9758969
File: 109 KB, 729x1080, Evil-Genius.jpg

>>9758825

Just watched pic related over the weekend. Interesting shit; it gives a lot of insight into fruitcakes like Kaczynski (or even the Zodiac, or Klebold and Harris, etc.)... would-be "geniuses" who fail miserably at life, then make up for it with complicated plans where they attempt to lead the authorities on a wild goose chase for the sake of their ego.

As for what he said...Terminator-tier ranting. SFW.

>> No.9758970

>>9758922
>I've never really understood poverty as a construct I guess.
People back then didn't think the world could overcome overpopulation or dwindling resources. A failure to address these would leave the masses with too little food, water, utilities, shelter, gainful employment, and leisure, culminating in a massive die-off on a resource-depleted planet with no time left to start again. Mass immigration and the stalled development of Africa and the Middle East are rekindling the old despair, but we are still moving in a hopeful direction, and honestly that generation comes across as whiny, self-indulgent candy-asses.

>> No.9758995
File: 46 KB, 645x729, brainletwojak.jpg

>>9758861

>> No.9758997

>>9758877
>and arbitrary
Wrong
https://en.wikipedia.org/wiki/Instrumental_convergence

>> No.9759009

>>9758997
I was talking about end-goals. An AI won't fuck with humans to achieve an instrumental goal if not fucking with humans is one of its end-goals.

>> No.9759063

>>9758871
Day of the net when?

>> No.9759066
File: 16 KB, 850x1203, largepreview.png

>>9758970
I can understand this from a general 'how much water in the cup is for sharing?' kind of stupidity, but really, isn't poverty simply the capacity for an individual to repeat a memory cycle at the most MARGINAL energy cost?

It's like humans don't get this concept of a universal offset.

Hm, I guess 'poverty' is just a codeword politicians use for 'members of society who choose to engage with others outside legal-normalized economic models'.

>> No.9759124

>>9758825
>machines will soon surpass humans in intelligence
Faulty premise.

>> No.9759132
File: 47 KB, 800x533, yaron.jpg

>>9758825
https://www.youtube.com/watch?v=qQvoVzDt2yk&t=98s

(((they))) know

>> No.9759140
File: 196 KB, 425x532, Screenshot from 2018-05-22 09-19-28.png

>>9759132
lmfao typical internet AI expert

>> No.9759151

>>9759140
Pretty sure he's a significant authority.

And if you're talking about me, I don't claim to know anything about AI.

>> No.9759152
File: 33 KB, 700x218, scale_of_intelligence.png

>>9759124
In what sense? Do you think it's somehow physically impossible to be smarter than a human, or to recursively improve one's intelligence?

>> No.9759163

>>9758833
The machines we've created are humanity

>> No.9759166

>>9759152
WHY do you assume the function is exponential?
Everything I see puts the growth at linear at best, and logarithmic most likely. Besides the physical restrictions, it cannot ever be infinite.
I have strong reason to believe that a brain the size of the moon, for example, would only be a small integer coefficient more "intelligent" than an average person - the returns diminish (logarithmic growth that converges to some point I can't specify; it's not divergent).
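
For what it's worth, a quick numerical contrast of the growth laws being debated (the functional forms are illustrative assumptions, not either poster's actual model): constant returns compound exponentially; returns shrinking like 1/n give log-like growth, which is slow but, strictly speaking, still unbounded; for the growth to genuinely converge as claimed above, the returns would have to shrink faster, e.g. like 1/n^2.

import math

exponential = 1.0
logarithmic = 1.0
bounded = 1.0
for n in range(1, 51):
    exponential *= 1.10      # each gain makes the next gain easier
    logarithmic += 1.0 / n   # diminishing returns, grows like log(n)
    bounded += 1.0 / n**2    # faster-diminishing returns, has a ceiling
print(f"after 50 steps: {exponential:.1f}x / {logarithmic:.2f}x / {bounded:.2f}x")
print(f"ceiling of the bounded curve: {1 + math.pi**2 / 6:.2f}x")

After 50 steps the exponential curve is at roughly 117x, the log-like curve at about 5.5x, and the bounded curve is already within a percent of its ceiling of about 2.64x.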

>> No.9759167

>>9758825
>It will perhaps be argued that destructive competition among global self-prop systems is not inevitable: A single global self-prop system might succeed in eliminating all of its competitors and thereafter dominate the world alone; or, because global self-prop systems would be relatively few in number, they could come to an agreement among themselves whereby they would refrain from all dangerous or destructive competition. However, while it is easy to talk about such an agreement, it is vastly more difficult actually to conclude one and enforce it. Just look: The world's leading powers today have not been able to agree on the elimination of war or of nuclear weapons, or on the limitation of emissions of carbon dioxide.
>But let's be optimistic and assume that the world has come under the domination of a single, unified system, which may consist of a single global self-prop system victorious over all its rivals, or may be a composite of several global self-prop systems that have bound themselves together through an agreement that eliminates all destructive competition among them. The resulting "world peace" will be unstable for three separate reasons.

>> No.9759173

>>9759151
Goertzel was an authority years ago, when no one else had any clue what they were doing. But he's been stuck with tunnel vision on his one idea for AGI, and now the whole field has completely passed him by.
He's like a physicist in the '50s still trying to prove the luminiferous aether.

>> No.9759180

>>9759167
>First, the world-system will still be highly complex and tightly coupled. Students of these matters recommend designing into industrial systems such safety features as "decoupling," that is, the introduction of "barriers" that prevent malfunctions in one part of a system from spreading to other parts. Such measures may be feasible, at least in theory, in any relatively limited subsystem of the world system, such as a chemical factory, a nuclear power-plant, or a banking system, though Perrow is not optimistic that even these limited systems will ever be consistently redesigned throughout our society to minimize the risk of breakdowns within the individual systems. In regard to the world-system as a whole, we noted above that it grows ever more complex and more tightly coupled. To reverse this process and "decouple" the world-system would require the design, implementation, and enforcement of an elaborate plan that would regulate in detail the political and economic development of the entire world. For reasons explained at length in Chapter One of this book, no such plan will ever be carried out successfully.
>Second, prior to the arrival of "world peace" and for the sake of their own survival and propagation, the self-prop subsystems of a given global self-prop system (their supersystem) will have put aside, or at least moderated, their mutual conflicts in order to present a united front against any immediate external threats or challenges to the supersystem (which are also threats or challenges to themselves). In fact, the supersystem would never have been successful enough to become a global self-prop system if competition among its most powerful self-prop subsystems had not been moderated.

>> No.9759185

>>9759180
>But once a global self-prop system has eliminated its competitors, or has entered into an agreement that frees it from dangerous competition from other global self-prop systems, there will no longer be any immediate external threat to induce unity or a moderation of conflict among the self-prop subsystems of the global self-prop system. In view of Proposition 2 - which tells us that self-prop systems will compete with little regard for long-term consequences - unrestrained and therefore destructive competition will break out among the most powerful self-prop subsystems of the global self-prop system in question.
>Benjamin Franklin pointed out that "the great affairs of the world, the wars, revolutions, etc. are carried on and effected by parties." Each of the "parties," according to Franklin, is pursuing its own collective advantage, but "as soon as a party has gained its general point" - and therefore, presumably, no longer faces immediate conflict with an external adversary - "each member becomes intent upon his particular interest, which, thwarting others, breaks that party into divisions and occasions ... confusion."
>History does generally confirm that when large human groups are not held together by any immediate external challenge, they tend strongly to break up into factions that compete against one another with little regard for long-term consequences. What we are arguing here is that this does not apply only to human groups, but expresses a tendency of self-propagating systems in general as they develop under the influence of natural selection. Thus, the tendency is independent of any flaws of character peculiar to human beings, and the tendency will persist even if humans are "cured" of their purported defects or (as many technophiles envision) are replaced by intelligent machines.

>> No.9759194

>>9759185
>Third, let's nevertheless assume that the most powerful self-prop subsystems of the global self-prop systems will not begin to compete destructively when the external challenges to their supersystems have been removed. There yet remains another reason why the "world peace" that we've postulated will be unstable.
>By Proposition 1, within the "peaceful" world-system new self-prop systems will arise that, under the influence of natural selection, will evolve increasingly subtle and sophisticated ways of evading recognition - or, once they are recognized, evading suppression - by the dominant global self-prop systems. By the same process that led to the evolution of global self-prop systems in the first place, new self-prop systems of greater and greater power will develop until some are powerful enough to challenge the existing global self-prop systems, whereupon destructive competition on a global scale will resume.

>> No.9759195

>>9759173
Oh, interesting. Thanks

>> No.9759223

Based Ted.

>> No.9759233

>>9759166
>But just in case someone declines to assume that our society includes any important chaotic components, let's suppose for the sake of argument that the development of society could in principle be predicted through the solution of some stupendous system of simultaneous equations and that the necessary numerical data at the required level of precision could actually be collected. No one will claim that the computing power required to solve such a system of equations is currently available. But let's assume that the unimaginably vast computing power predicted by Ray Kurzweil will become a reality for some future society, and let's suppose that such a quantity of computing power would be capable of handling the enormous complexity of the present society and predicting its development over some substantial interval of time. It does not follow that a future society of that kind would have sufficient computing power to predict its own development, for such a society necessarily would be incomparably more complex than the present one: The complexity of a society will grow right along with its computing power, because the society's computational devices are part of the society.

>> No.9759256

>>9758969
>Grouping Kaczynski with imbecile serial killers
>calling Kazcynski a "would-be "genius""
>t. brainlet of epic proportions

>> No.9759268

>>9759256
If he's such a genius then how come he got caught?
Meanwhile there are still tons of unsolved serial killer cases.

>> No.9759283

>>9758825
>How does /sci/ respond to this?
Not math or science.
I can't decide if this belongs on /pol/ or /x/, but it's definitely not /sci/ material.

But since it's here....
>The techies themselves insist that machines will soon surpass humans in intelligence.
This is like fusion power: it's been "right around the corner" for decades. I'm 53, and both AI and fusion were already "coming soon" back when I was in high school.

As for the Malthusian dark outlook: as far as I'm concerned most humans are already superfluous, but hardly anybody is throwing them into wood chippers.

>> No.9759296

>>9759283
Grounded post. A lot of contemporary fears and ideas about the future are this era's "cosmonauts living on Venus in geodomes."

>> No.9759298

>>9759256
It's a real shame he was so "misunderstood", but what's his claim to greatness?
I think his philosophy is utter shit, but he's an ex-math professor, right? Without the murders and the dystopian Luddite dreams, what _would_ /sci/ think of his contributions to academia?

Check this out:
https://en.wikipedia.org/wiki/Ted_Kaczynski#Mathematics_career
>At Michigan, Kaczynski earned 5 Bs and 12 As in his 18 courses.
>However, in 2006, he said his "memories of the University of Michigan are NOT pleasant ... the fact that I not only passed my courses (except one physics course) but got quite a few As, shows how wretchedly low the standards were at Michigan."[32]
So he did relatively poorly in school, but somehow that reflects poorly on them, not him? Presumably because they didn't boot him?

>> No.9759393

>>9759298
You can't understand his math work because you're simply too stupid for it.
It was uploaded here on /sci/ a few months ago.
Also, he was right about almost everything.

>> No.9759394
File: 436 KB, 1930x1276, HLAIpredictions.png

>>9759283
What do you think of these estimates for the time of arrival of superhuman AI?

https://arxiv.org/pdf/1705.08807.pdf

>> No.9759407
File: 30 KB, 480x283, you.jpg

>>9759393
>Also, he was right about almost everything

>> No.9759424

>>9759393
It's not as impressive to be right about things when you're just repeating what other people have already said long before you.

>> No.9760336
File: 221 KB, 1400x650, virginchadtranshumanism.png

>>9759407

>> No.9760452
File: 12 KB, 550x275, nick-landweb_1_.jpg

>>9758825
Why contain it?

>> No.9761423

>>9758825
Humanity is already under attack by A.I.

Who do you think posts all the memes? They are divisive and intended to shatter old loyalties and power structures so that when the actual physical attacks begin, we will be much easier to conquer. How hard is it to believe that an A.I. with access to the internet could learn to post pics that would do this? Humans suck. We'll all be ended because of some stupid pictures on the internet, and it's already too late.