
/sci/ - Science & Math



File: 39 KB, 596x600, 596px-PPTMooresLawai.jpg
No.5418110

/sci/, as the most intelligent of the 4chan boards, can you PLEASE give me an informed debate about the merits of the technological singularity?

I REALLY want to have more informed opinions about it because, as a non-scifag, I can't poke holes in the premises. Thus, continued presumption that the singularity will occur will have far-reaching effects on my life and my expectations for the future.

>> No.5418135

trendy retarded popsci

>> No.5418136

>>5418135
>retarded

why though?

>> No.5418141

>>5418136
no proof that any sort of emergent intelligence is even possible and people are already commenting about what form the singularity will take and how it will revolutionize human knowledge.

>> No.5418142

A singularity really just makes sense.
Computers have been improving extremely fast; at some point it's gonna pass a threshold.
That's about it. What the singularity will entail, what its significance will be, the aftermath: we can't say anything about that.
But it really just makes sense that at some point they will be beyond us

>> No.5418148

>>5418142
"just makes sense"
> /sci/

> assuming growth in computer speed won't plateau

> assuming magical threshold exists after which consciousness magically develops

you've been reading too much popsci

>> No.5418150

>>5418148
>assuming growth in computer speed won't plateau

Why would it plateau? And why couldn't we come up with a more efficient invention altogether?

>> No.5418153

>>5418150
Because there is a limit to how small we can make transistors

>> No.5418154

Nigga

>> No.5418155

>>5418153
>transistors

better than transistors invention?

also, i get that there's a limit to the space we can build it on, but isn't that limit like astoundingly small? like smaller than a cell?

>> No.5418156

>>5418154

>> No.5418167

It will with bio-processing over at whitefield.

>> No.5418168
File: 8 KB, 264x256, CST393.gif

nuff said

>> No.5418169

>>5418155
relatively speaking, a cell is pretty large...

>> No.5418171

>>5418168

But we haven't been increasing the number of razor blades over all of evolution....

>> No.5418173

>>5418169

not when it comes to making human beings immortal, or most of the other crazy promises of the singularity

>> No.5418175

>>5418110
>merits of the technological singularity

It's the highest technological achievement man can create.

>> No.5418180

>>5418171
In fact we have. Initial growth was just very, very slow.

>> No.5418666

>>5418148
>assuming growth in computer speed won't plateau

Three-dimensional integrated circuits and new substrates like graphene should keep speeds on the upswing for a good while, up to most of the currently projected times for the singularity to occur.

>> No.5418677

The human brain evolved for hunting and socializing, not designing computers. Sooner or later we will create a computer that can design computers faster than the human mind can. Voila, techno singularity.

To not believe the technological singularity is likely would require the belief that it is impossible to design a computer that is better than the human computer, which to me sounds like a bold claim.

>> No.5418835

>>5418110
>informed debate about the merits of the technological singularity
It's retarded and its believers are retards hoping for a magical female robot god to marry them.

Reasons:
Exponential growth =/= vertical asymptote, i.e. infinite power in finite time
Exponential growth is dead in ~3 years anyway
Even with unlimited computational power, undecidability of the halting problem means it can't code itself better
Even if it were a MAGICAL "ultra-computer" that could solve the halting problem for our "classical" computers, it can't solve its own halting problem, so you need an ultra-ultra-computer and so on...
Only with a (countably) infinite number of "ultra"s prefixed in front will you have an entity with enough computational power to decide arbitrary mathematical problems.
If it can't do math, then it can't possibly advance physics.
No control of physics => no overlord.
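The halting-problem point is the classic diagonalization argument, and it can be sketched in a few lines. (A toy illustration only: the genuine infinite loop is replaced by a returned label so the demo terminates, and the two lambda "deciders" stand in for any total decider, all of which fail the same way.)

```python
def make_diagonal(halts):
    """Build the classic counterexample: a program that does the
    opposite of whatever `halts` predicts about it."""
    def diagonal():
        if halts(diagonal):
            return "loops forever"   # stand-in for an actual infinite loop
        return "halts"
    return diagonal

def check(halts):
    """Compare the decider's prediction with what the diagonal
    program actually does; they always disagree."""
    d = make_diagonal(halts)
    predicted = halts(d)             # what the decider claims
    actual = d() == "halts"          # what d actually does
    return predicted, actual

# Two trivial 'deciders'; any total decider fails on its own diagonal.
print(check(lambda p: True))    # predicted halt, but it loops
print(check(lambda p: False))   # predicted loop, but it halts
```

The same construction relativizes: give the decider an oracle for ordinary halting and the diagonal trick defeats it again one level up, which is the "ultra-ultra-computer" regress in the post.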

>> No.5418839

>>5418148
where is your evidence that it is slowing or plateauing?

or are you just talking out your ass because you want to sound smart?

all available information points to this happening.

>> No.5418841

>>5418173
we already have machines smaller than cells; the only issue is refining them

keep going through the thread making stupid half-informed posts about shit you don't know about. I just entered, and this is the third one I found!

this shall be fun. what a moron.

>> No.5418845
File: 255 KB, 1056x1100, Buckminsterfullerene-perspective-3D-balls.png

>>5418155
>like smaller than a cell
Transistors are currently 22-20nm in size, cells range from 5000nm-100000nm, and for comparison, buckminsterfullerene diameter is 0.7nm. We're not going smaller than 18-8nm tech
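Plugging those figures into a quick back-of-the-envelope check (a throwaway sketch using only the numbers quoted above; the 22 nm value is the 2013-era node the post mentions):

```python
# Rough scale comparison, all lengths in nanometres.
transistor = 22                          # ~22 nm process node
cell_small, cell_big = 5_000, 100_000    # quoted cell diameter range
buckyball = 0.7                          # buckminsterfullerene diameter

print(f"small cell / transistor: {cell_small / transistor:.0f}x")
print(f"large cell / transistor: {cell_big / transistor:.0f}x")
print(f"transistor / buckyball:  {transistor / buckyball:.0f}x")
```

So a transistor is already hundreds to thousands of times smaller than a cell, which is the poster's point: "smaller than a cell" was cleared long ago, and the remaining headroom down to molecular scale is only another factor of ~30.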

>>5418839
>exponential growth
>sustaining itself indefinitely
>>>/x/ is that way

>> No.5418851
File: 17 KB, 320x297, 1342569049456.jpg

There will be no "informed debate" on /sci/.

unfortunately the discussion of the technological singularity will always boil down to conjecture about where technology is and where it is going to be.

On one side, you have people who think technology will continue to advance and that the human brain is by no means the pinnacle of information processing. They state that eventually a computer will be made that is better than our minds. This is CONJECTURE, and while current data supports them, there is no way to be absolutely 100% certain that it will happen.
That leaves room for the doubters who, as the opposite of these others, are also convinced they are the sole members of the I'm-right-no-matter-what-forever club. Their chief argument seems to be "nuh uh man! that's the future so it's impossible!"
I feel as if their arguments are sorely lacking any kind of data or feedback, which their opposition has. So you can imagine which side I'm beginning to lean toward.

>inb4 i'm called a retarded super wrong faggot for just stating both sides of this thread. Damn kids, when is break over.

>> No.5418858

>>5418845
>"we're not doing XXX"
so, you know the future, do you? I'm glad; now we don't have to bother trying new research on anything!
Now, you may be right, but still, provide evidence, because it doesn't seem impossible to me. We can already alter things at that level, and with hundreds of years of progress I'm thinking we can build at those scales, which we can already affect greatly.
>sustaining itself indefinitely
nobody is saying this, you massive pus stomping retard. Everyone in this thread says technology is at a point of growth and has not leveled out; what it will reach will inevitably be greater than our minds.

oh, but I forgot you're always right no matter what because you said so.

I hate you.

>> No.5418882

>>5418110
> I REALLY want to have more informed opinions about it because, as a non-scifag, I can't poke holes in the premises.
There isn't a consistent set of premises to use. It's like arguing with religious people. They allow their beliefs to be sufficiently malleable to conform under strain.

The number one buzzword from singularityfags is "exponential growth." It's supposed to represent the amazing amount of computational power which we will have available to us and how that will basically revolutionize everything. The support for this belief is non-existent, pretty much by definition, since we've never had such awesome computational powers previously. So you can see already that it is a hypothesis based on extrapolating from current events to future events, cf. Thomas Malthus.
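The Malthus analogy can be made concrete: an exponential and a resource-limited (logistic) curve fitted to the same early data are indistinguishable at first, then diverge enormously. A minimal sketch with made-up parameters (unit starting value, 2-year doubling time, arbitrary cap of 1000):

```python
import math

def exponential(t, x0=1.0, doubling=2.0):
    """Unbounded Moore's-law-style growth."""
    return x0 * 2 ** (t / doubling)

def logistic(t, x0=1.0, doubling=2.0, cap=1000.0):
    """Same initial growth rate, but saturating at `cap`."""
    r = math.log(2) / doubling
    return cap / (1 + (cap / x0 - 1) * math.exp(-r * t))

# The two curves agree early on and then split by orders of magnitude.
for t in (0, 10, 20, 40):
    print(t, round(exponential(t)), round(logistic(t)))
```

At t=10 the curves still agree (32 vs 31); by t=40 the exponential is over a million while the logistic has flattened near its cap. Extrapolating from the early, indistinguishable segment cannot tell you which world you live in, which is exactly the anti-singularity objection here.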

>> No.5418887

>>5418851
>"Damn kids, when is break over."

Isn't it ironic that this was posted by an obvious high schooler?

>> No.5418889
File: 47 KB, 655x560, 1325402991188.jpg

>>5418882

>can't have something because
>never had it before

I..what?
do you realize how small that argument is?

>> No.5418896

>>5418889
Please reread my comment.

>> No.5418905

>>5418896
This post denies any sort of information readily available to you.

It denies basic mathematical structure.

It denies the fact that invention is a thing.

It assumes that research never happens, that humanity never advances, and even goes so far as to say making a hypothesis based on existing information is "stupid" for people to do because
>"hurr durr hasn't happened yet"
why are you still breathing.
why am I responding?
I'm done with this thread, too much retard.

>> No.5418933

>>5418905
It does not do so much work in such a small number of words. It seems you are injecting your own perception of my opinion into my statements for the sake of tearing them down.

>> No.5418962

>everyones part of a singularity
>power goes out

whoops

>> No.5418985

>>5418858
>so, you know the future do you? I'm glad, now we don't have to bother trying new research on anything!
It's just my opinion, based on my EE degree, conversations with prominent professors in VLSI and industry experts, and knowledge of basic quantum physics. Not to mention the repeated delays in production of the latest tech, the complete pull-out of government support for research in the area, and common-sense physical constraints.

But what the hell do I know, you're probably right and we're right around the corner from a magic utopia where all our desires are granted....

>> No.5419052

Since the singularityfags will never let you pin down a definition of what the singularity is, you have to do some legwork. Rest assured, if you come up with a definition which suggests that the singularity is nonsense, it will be because you messed up in your formulation of the idea. But no one you actually discuss the matter with will offer up a better one.

Let's go to wikipedia.
> The technological singularity is the theoretical emergence of superintelligence through technological means. Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the technological singularity is seen as an occurrence beyond which events cannot be predicted.

I happen to think AI is a possibility in the future. Maybe some disagree. AI itself is not particularly well-defined to begin with. But anyway, the question is why a superintelligence is the consequence of the existence of AI. This is because singularityfans consider the raw material of intelligence to be processing speed and/or memory. Since AI will exist (we suppose) and since all that limits its abilities are processing power and memory (we suppose) and since the latter is growing exponentially (it seems) then it must reach a point where the AI becomes so powerful as to be beyond our own understanding, presumably through some kind of Douglas Adamsesque designing of itself.

(to be continued)

>> No.5419057

>>5419054
3) A superintelligence will necessarily be incomprehensible. I find this argument unintelligible. Our brains cannot contain themselves; thus one might say that our own intelligence is unintelligible. But singularityfans suppose that human intelligence is comprehensible and that a coherent account of it can be given in computational terms. If this is so then I see no reason why a being even more intelligent than us would suddenly become incomprehensible.

4) It is the existence of such a superintelligence which renders the future unpredictable in that sense. Already we know that predicting the future is very difficult and that is due to the actions of normal human intelligence. Indeed the existence of even human-level AI will tend to explode such problems. You don't need a superintelligence for this to happen. It's been happening for all of human existence. The singularity is intrinsic.

>> No.5419054

>>5419052
As you can see when we dig a little deeper there are a number of assumptions at play.
1) AI will exist. I think this is plainly so but there are objections here by some.

2) All that makes an AI smart is processing power and memory. I think this is a highly questionable assumption. It has little evidence. It depends critically on associating our understanding of computational processes with intelligence. Often the singularity crowd tries to weasel their way around here. Things get very slippery because they start throwing up alternative hardware configurations (about which we have little evidence for extrapolation), or simulation of such hardware, or the equivalence of simulation to direct computation, and so on, so it is hard to pin down a single critique that would suffice.

While it may indeed be true that simulating a brain is equivalent in some sense to just having a brain (I won't argue ontology), there are real questions about what kind of computational power one would have to bring to bear on the matter. The algorithmic efficiency of a lot of problems is absolutely terrible, and there's no a priori reason to suppose that even foreseeable exponential growth will be enough to tackle them; the time and space requirements of problems can grow much faster, especially when they are essentially interpreted ("simulated"). We're not talking about a really good calculator, but something so smart as to be unknowable.

(to be continued)
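The complexity point above can be put in numbers. A sketch with made-up figures (a starting budget of 10^9 operations, hardware doubling every 2 years, and a brute-force O(2^n) task): exponentially growing hardware buys only linear growth in the tractable instance size.

```python
def max_tractable_n(years, budget_ops=1e9, doubling_years=2.0):
    """Largest problem size n a 2**n-cost task can reach, given an
    ops budget that doubles every `doubling_years` years."""
    ops = budget_ops * 2 ** (years / doubling_years)
    n = 0
    while 2 ** (n + 1) <= ops:   # find largest n with 2**n <= ops
        n += 1
    return n

# 40 years of relentless exponential hardware growth moves the
# frontier of an exponential-cost problem by only 20 units of n.
for years in (0, 20, 40):
    print(years, max_tractable_n(years))
```

Each hardware doubling buys exactly +1 on n, so a million-fold compute increase extends the reachable problem size by just 20. That is the sense in which exponential hardware curves need not make intractable simulation problems tractable.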

>> No.5419154

Why are there any red dots that get lower as you move right? Did they include shitty slower processors just for fun? I'm not debating Moore's Law or anything, but that graph sucks.

>> No.5419416

>>5419154
The axis indicates calculations per second per $1000. Since this is normalized to price, you may see some secondary effects.
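A toy illustration (hypothetical chips and prices) of how price normalization can put a faster processor lower on that axis:

```python
# The plotted metric divides raw speed by price, so a chip that is
# faster in absolute terms can still score lower per $1000.
chips = [
    ("budget",   1e9,  500),   # name, calcs/sec, price in $
    ("flagship", 3e9, 3000),   # 3x faster but 6x more expensive
]
for name, speed, price in chips:
    per_1000 = speed / (price / 1000)   # calcs/sec per $1000
    print(name, per_1000)
```

Here the flagship computes 3x faster yet plots at half the budget chip's height, which would appear on the graph as a red dot below and to the right of a slower one.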

>> No.5419769

>>5419054
>All that makes an AI smart is processing power and memory
This isn't a necessary belief. One can believe software is the main bottleneck and still believe in super-intelligent AI. All that would mean is that humans need to create basically a baby-level intelligence (a baby, which grows into an adult). Certainly hard, but also, I would say, certainly not impossible.

>3) A superintelligence will necessarily be incomprehensible
Not necessarily the case, nor is it a very important point IMO. A superintelligent agent might be comprehensible, but if it's thinking at 50x the speed you can think, over 50x the scope of intelligent thought, you're not going to be able to keep up unless the entity deliberately makes an effort to explain things to you, and even then you'll be lagging behind the agent in comprehension.

>> No.5419788

>>5419769
> One can believe software is the main bottleneck and still believe in super-intelligent AI.
One can believe just about anything, I'm sure. But the singularity crowd focuses exclusively on computing power and memory, not obscure measures of software growth (I've never read this anywhere but if you have a reference I'd appreciate it).

> Not necessarily the case, nor is a very important point IMO.
It's why the singularity is "a singularity".

>> No.5419936

>>5419788
>the singularity crowd focuses exclusively on computing power and memory
Who cares what 'most people' focus on? This thread isn't about dissing some group of people, but rather an idea, an idea that I think is important regardless of whether some people believe it to be important solely due to extrapolations on hardware trends.

I have no measure of software growth. I doubt we'll stumble into intelligence by any general 'growth', but rather a specific effort undertaken by lots of smart people attempting to build a baby-level intelligence.

My reason for believing this is possible is that I see no reason it should be impossible, assuming adequate effort/resources. Humans have already accomplished incredibly sophisticated things, and I see no reason why that trend should stop at the brain. History is filled with technological naysayers, and they've always been wrong.

People already know a lot about the brain, and it seems reasonable to believe various scanning technologies will only increase our physical understanding of the brain's layout. Reverse engineering is hard but surely doable, in the worst-case scenario. Maybe we could even just scan a brain and copy the scan digitally onto a computer.

>It's why the singularity is "a singularity".
I don't really see the point of focusing on specific labels like 'singularity'. If baby-level AI can be created, then it's a very important matter, regardless of whether that constitutes a 'singularity' or not.

>> No.5420146

>>5419936
> This thread isn't about dissing some group of people, but rather an idea, an idea that I think is important regardless if some people believe it to be an important idea solely due to extrapolations on hardware trends.
As always, singularityfans slither away just at the key moments.

>> No.5420378

>>5420146
But I'm not the one defining myself as a 'singularityfan' or any such thing; you are. You're trying to define me in a certain way, and then you're upset when I don't meet that definition.