
/sci/ - Science & Math



File: 90 KB, 500x333, 271056457_9a8676d668.jpg
No.3706056

If you've heard of this prophet and are aware of the singularity, please join me in this thread.

I don't want this to be a discussion about whether the singularity will happen or not; if you have read his book you'll know that it's bound to happen, very probably in our lifetime.

The thing is, in my opinion it will happen in a way that makes it accessible only to the rich. We can already see this today with the best health care treatments only being accessible to the rich, or anything else for that matter.
That means the human race will diverge even more. If you think about it, we've already diverged once: compare your current situation with that of people living in the poorest zones, or even with tribes that have (almost) no contact with civilization. You live longer and better (well, in most cases anyway).
The bottom line is that the rich will have access to advanced nanotech that will allow them to rejuvenate and augment themselves (please, don't derail this too much with Deus Ex puns), while the poor... well, you know what happens.

TBC

>> No.3706058

The fact is, for someone who loves science and dreams of exploring new worlds and experiencing technologies that haven't even begun to be developed... that will never happen. Unless you get rich (like $10M rich).
And that is what led me to start this thread.
It sucks (as in elephant cock order of magnitude) that you're trying to develop something new, only to discover that some big company has already released it as a feature.
It gets me depressed and frustrated.
I'm in CS and basically the only field left open is AI. Consider that image, speech, and translation all have big companies (namely Google) on them. The only subfield somewhat open is AI in robotics, but there are so many research groups working on this with very good results already. I'm not really able to compete.

tl;dr: it sucks to be just another mortal maggot who will never explore new worlds


FYI: I may post this message again in the future in order to get some more feedback, as different people are online at different times.

>> No.3706062
File: 7 KB, 431x226, sage.png

rapture for nerds

>> No.3706071

> The bottom line is that the rich will have access to advanced nanotech that will allow them to rejuvenate and augment themselves
> It gets me depressed and frustrated.

I'm genuinely curious as to the origin of your negative feelings.

Why are you so depressed and frustrated?
Why is having the ability to rejuvenate and augment yourself so important to you?

>> No.3706077

>>3706071
Why are you so depressed and frustrated?
> for someone who loves science and dreams of exploring new worlds and experiencing technologies that haven't even begun to be developed... that will never happen
Why is having the ability to rejuvenate and augment yourself so important to you?
> for someone who loves science and dreams of exploring new worlds and experiencing technologies that haven't even begun to be developed... that will never happen

I'm sure most people here like SW or ST or SG... So can you imagine how it would be if you could explore new worlds?
Because, considering the exponential growth, it will be possible during our lifetime.

>> No.3706086 [DELETED] 

>>3706062
why are you in denial?
you mad?

>> No.3706088

>>3706077
Let's be realistic here.
> someone who loves science and dreams of exploring new worlds and experiencing technologies that haven't even begun to be developed... that will never happen

Of course not. You can't explore new worlds and experience technologies that haven't even begun to be developed... until they've been developed.

Consider more constructive dreams that involve the actual development of the worlds you wish to be able to explore instead of deluding yourself into insatiable fantasy.

>> No.3706095

>>3706088
are you trollin?
*sigh*

>> No.3706179

To be honest I was expecting more of /sci/

>> No.3706184

>>3706095
> are you trollin?

Of course not. You said it yourself:
> The only subfield somewhat open is AI in robotics, but there are so many research groups working on this with very good results already. I'm not really able to compete.

Alternate solution to competing: contributing.

>> No.3706187

> Because, considering the exponential growth, it will be possible during our lifetime.
> exponential growth
here we go with the exponential growth shit

fucking snake oil

>> No.3706206

>>3706184
That is good, but it really doesn't work on a commercial level.
Don't get me wrong, it would be very interesting to work in that area, but it would leave that feeling of unaccomplishment, that it was all in vain.
I just faced that recently, hence my message here
> It sucks (as in elephant cock order of magnitude) that you're trying to develop something new, only to discover that some big company releases this feature.

>>3706187
go back to your religion
if you're into science you analyze the facts
science is here and skeptics are all dead

>> No.3706318
File: 81 KB, 379x298, virgin-galactic..jpg

I agree with you OP, in most of it anyway.

The rich have it all.

pic related, Richard Branson's spaceship

>> No.3706324

>>3706206
Keep chanting your mantra, "exponential growth, exponential growth." With exponential growth, all things are possible!

>> No.3706337
File: 132 KB, 407x405, 9249698.jpg

>people donate thousands of dollars to SIAI
>not a single cent for the IMM
>expect nanotechnology by the 2020's

>> No.3706338

>>3706056
Have you heard about these fantastic info-tech products that only the rich can afford? They create tiny channels in silicon wafers and make electrons do fantastic things.

Too bad only the rich have access to this fantastic technology

>> No.3706347

I think if this kind of tech happened, it would mean the end of classes as we understand them.

>> No.3706349
File: 31 KB, 406x350, truth.jpg

>>3706324
pic related

because the truth hurts, some prefer to go into denial

you sound like a 15th-century man who refuses to believe the earth is round

>> No.3706360

Link to one of his speeches, please.

>> No.3706368

>>3706338
Was it not so with the first computers?
Will it not be so with the first quantum computers?

>>3706347
indeed
A lot can happen and all is uncertain.

>> No.3706378

>>3706056
Well, you're wrong.

As technology becomes more and more advanced, a lot of things become very easy and accessible.

Now, you're not rich? No. Even so you have a heated house, electricity, water in the pipes, a fucking computer with an internet connection, and probably have enough money to take a plane anywhere in the world. Do you honestly think that all the rich people will gang up and suppress the poorfags? Not possible, someone will be altruistic.

And what happens when productivity starts to be even more decoupled from human labour than it already is? Well, if a $20k robot can replace a human for mundane tasks, then even ordinary people could suddenly afford 'slave' labour. Combine that with, say, open source routines for sewing, farming, carpentry and whatnot, and the concept of rich people is on shaky ground.

>> No.3706380

>>3706360
http://en.wikipedia.org/wiki/Technological_singularity
http://en.wikipedia.org/wiki/Ray_Kurzweil
http://www.youtube.com/watch?v=555bsnvbAwA
read the book if you can; at the very least it will open your horizons, which a lot of people on /sci/ are in need of

>> No.3706385

>>3706380
Thanks, Ray.

>> No.3706394

>>3706378
That is also a possibility.

But suppose that 20-40 years from now we will be capable of controlling our genome and repairing it and whatnot, through nanomachines, such that in effect your life expectancy could be 200, 500, 1000, even more years.
The world would collapse. Well, unless we opted for Japan's way on demographics.

>> No.3706398
File: 18 KB, 300x220, Kenyan.jpg

>>3706378
I agree. You stole my words.
OP, you should read the book again.

>> No.3706408

>>3706056
Kurzweil is a fucking idiot. You could just as well believe that dude with "vortex mechanics based on a new kind of mathematics".

>> No.3706412
File: 60 KB, 560x310, transhumanism_560-3101.jpg

>>3706394
Why would the world collapse?

Well, in fact I also believe we're going to see huge chaos as technology brings profound change. We're already seeing a little of its beginning, with the Arab revolutions that spread to other regions of the world (with information technology, everything becomes more simultaneous).

But I don't think the age has really anything to do with that.

>> No.3706413
File: 21 KB, 461x295, extrapolating.png

>>3706349

1.
Exponential growth is not a fucking absolute. Human development is not a fucking physics equation. It is an extrapolation you are making. See pic.

2.
(Civilized) People thought the earth was round circa 200 BC.

3.
Who's to say we don't all get nuked?

4.
You're being just as religious by preaching about the future, fool.
Doubt is not religious, faith is.

>> No.3706427

>>3706206
>>3706349
Ironically, your beliefs and behavior are MUCH more similar to that of a delusional religiousfag or what-have-you than his are.

>> No.3706431

OP you sound like a fucking insane cultist.

>> No.3706435
File: 126 KB, 407x405, 9249769.jpg

The Singularity cult is the cancer that is killing transhumanism. A bunch of robot cultists leading a popularity contest masquerading as rationality.

>> No.3706444
File: 216 KB, 540x405, tothelight.jpg

>>3706413
3. is already explored in The Singularity Is Near, where Kurzweil talks about GNR perils as a real threat to be taken seriously.

>> No.3706458

>>3706408
yeah, you're right, friendly religious zealot.
he's a dumbass who never invented anything
And he failed all his predictions, like when the WWW would come to happen, or when computers would beat humans at chess, or other things
amirighty?

>>3706398
well, the happy ending he puts in the book may happen, but I don't think so

>>3706412
The world can collapse when there is no more need for people. I say 'can', not 'will', because it's one possible outcome.
The poor classes exist to serve the rich. When robots take their place, who knows what will happen.
Maybe we'll all have robot servants and live in paradise.
Maybe a war will happen.

>>3706413
1. it is a physics equation, as its limits are correlated with matter and the laws that govern matter in the universe

2. and some civilizations thought the Earth was flat up until the 17th century. Your point?

3. true, it can happen

4. I look at facts.

>> No.3706461

>>3706435
why do you say that?

>> No.3706468

>>3706458

>1. it is a physics equation, as its limits are correlated with matter and the laws that govern matter in the universe

No, it does not. At all. Even slightly.
Physics has NOTHING to do with the way human civilization and technology has developed.
You are ranting about nothing, here.

>2. and some civilizations thought the Earth was flat up until the XVII century. Your point?

That it's a nullity and a very empty insult.

>3. true, it can happen

So don't be crossing your fingers and betting on a very unpredictable future, especially because fate and chance both despise a predictor.

>4. I look at facts.

Then kindly look at the concept of finiteness.

Or at least understand what the singularity is.

There's no doubt that technology will surge and humanity will get some nifty fucking things in the next 100 years. There's no assurance that our technological development will reach infinity.

>> No.3706469
File: 378 KB, 600x850, cortex column.jpg

My thought is that the singularity will have a human-like intellectual development. In 2025-30, when human-brain-equivalent hardware costs about $1 million, many research groups will try it.
But those AIs will be very uneven and maybe the whole software will be messed up. The AI will learn at a human rate the first year IF they get the software right. Thousands upon thousands of scientific discussions will sprout and the knowledge will be public, at least most parts.
Over the next 4 years, while they discuss the software, the hardware cost will drop to $250,000 and the number of people involved will surely escalate to tens of thousands.
When someone gets the software right, the AI will have to learn speech and basic spatial reasoning; that takes children about a year and a half to five years, so an AI at that level (4x-8x human-level hardware) would take 2 to 12 months.
This timeframe shows it will be feasible for other research groups to master the knowledge and copy it. Not to mention spying corporations and nations.
So no, it won't be accessible to the rich only.
There will be poor for sure and the difference will widen.
But social mobility will increase 1000x by the year 2100. So many, many millions of very poor people will rise to the status of megagenius space cyborgs.

http://en.wikipedia.org/wiki/Social_mobility#Absolute_and_Relative_Mobility
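The figures above ($1M around 2025-30 dropping to $250,000 four years later) amount to a cost-halving time of about two years. A minimal sketch of that assumed halving model (the function name and parameters are made up for illustration, not taken from any source):

```python
# Sketch of the post's cost arithmetic, assuming (hypothetically) that
# human-brain-equivalent hardware costs $1M around 2025 and halves in
# cost every 2 years. Illustrative only.
def hardware_cost(year, base_year=2025, base_cost=1_000_000, halving_years=2):
    """Projected cost under a simple exponential halving model."""
    return base_cost * 0.5 ** ((year - base_year) / halving_years)

print(hardware_cost(2025))  # 1000000.0
print(hardware_cost(2029))  # 250000.0 -- two halvings, matching the post
```

Whether hardware actually follows such a curve is exactly what the rest of the thread disputes.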

>> No.3706472
File: 134 KB, 407x405, 9222811.jpg

>>3706461

>>3706337
>people donate thousands of dollars to SIAI
>not a single cent for the IMM
>expect nanotechnology by the 2020's

Cult, cult, cult. A cult based on the idea that these technologies will happen out of the wonders of drawing lines on log paper and paying the Great And Venerable Yudkowsky, His Metamajesty Emself, to write crossover Harry Potter fanfiction about 'rationality'. In transhumanism, 99% of what has to be said has been said, and 99% of what has to be done has not been done, and sitting on one's ass repeating "Gee, the hard-takeoff AI overlords will be here any day now!" is not going to accomplish anything. Most of the 'AI researchers' the robot cultists worship are useless retards, con artists, snake oil salesmen who don't know what they are doing and steal your money; they are more worried about the risks of magical invisible AI Gods than actual research. Existential risk my ass. It's the H+ equivalent of bioethics: an excuse to sit on your ass and pretend you care, the main difference being that all bioethicists are a bunch of luddites.

>> No.3706475
File: 123 KB, 407x405, 9222709.jpg

>>3706472

In case you're new here, I'm /sci/'s resident transhumanist loon, and even I can't stand all this talk about the Singularity.

>> No.3706485
File: 8 KB, 300x225, laniers-singularity2_0.jpg

>>3706472
I think it's important to think about possible outcomes of the future, to be prepared when it comes.

>> No.3706489

>>3706485

Nematodes pondering the dangers of contact with a human settlement.

>> No.3706493

>>3706485

It's of no use to sit around telling everyone how the wonderful science men will make our lives sparkly and everyone will be super technogods.

Seriously, even if it WILL happen, what the fuck is the point of "preparing" for it? It'd be better to try and get prepared for a nuclear apocalypse, because if THAT happens, there are consequences for being unprepared.

>> No.3706502

>>3706493
That's exactly what I'm talking about. And there's not only nuclear war, but also engineered viruses and gray goo. That's why thinking about the future is important.

>> No.3706510
File: 47 KB, 399x500, niggawatts.jpg

>>3706472
I love you, Colonel Coffee Mug. You are so right about the bioethicists it makes me sad. But it's funny too.
But the singularity is happening, no question there. I know you are a mechanosynthesis enthusiast or researcher; don't you think there will be a pathway to universal molecular assembly?

>> No.3706513

>>3706502

>gray goo

Jesus Christ, you people seriously make me rethink my transhumanist affiliations.

>> No.3706517

>>3706349
We've known the earth is round for thousands of years you retard.

>> No.3706519

>>3706502

>engineered virus

Engineer a virophage.

>gray goo.

EMP device. Not to mention this is exceedingly unlikely: self-replicating nanomachines would be ungodly complex and... not very useful as a concept.

>> No.3706530

>>3706469
> In 2025-30 when human brain power costs about 1mil, many research groups will try it.
In 2025-30 we'll be aware that the brain is much more complicated than we anticipated.

>> No.3706533

>>3706510

>But the singularity is happening no question there. I know you are a mechanosynthesis enthusiast or researcher, don't you think there will be a pathway to universal molecular assembly?

Think of it like this: one element, one tip. The tips you use to, say, deposit Carbon might work for Silicon but not for something else. You also need the tip to be made out of an element that is "one element less" than the one you're carrying. For example, a Silicon tip could deposit Carbon on a Carbon surface, since C-C bonds are stronger than C-Si bonds, but wouldn't work anywhere else. If the tip has a greater affinity than the surface, deposition won't work. So it's not one element, one tip. It's one element-surface combination, one type of tip.

Then you have that some reactions (Carbon deposition) aren't really reversible, or at least we haven't found a way to make them so. So the number of tips is doubled if you need to make the processes reversible.

If the processes are designed to be fast and efficient (like the Hydrogen abstraction processes), then they probably won't be reversible.

And all of this is just for covalent crystals in the upper-right corner of the table. "Universal assembly" is so far away it's ridiculous, but I'll be content with mechanosynthesis of Carbon or Silicon.

>> No.3706534
File: 41 KB, 572x318, qqqqq.jpg

>>3706519
> Self-replicating nanomachines would be ungodly complex and.. not very useful as a concept.

>> No.3706535

>When someone gets the software right the AI will have to learn speech and basic spatial reasoning, that takes children about a year and a half to five years, so the AI at that level (4x-8x human level hardware) would take 2 to 12 months.

A child spends half its time sleeping, and half of the rest eating and screaming. A computer program could be optimized to skip all that, and to perceive reality much faster. And it could be fed pre-gathered data. Meaning that when we can build strong-AI-capable hardware, we'll bake in the reality perception in a few days at most, or hours.

>> No.3706536

How can any system based upon voluntary exchange be unjust?

>> No.3706541

>>3706536
Because the people in charge get to delineate what counts as "voluntary."

>> No.3706542

Hi guys i'm a faggot. I've read about this thing called the singularity and i think i am very smart because i know about it.

I also like sucking cock and am a huge cunt.

>> No.3706549
File: 137 KB, 407x405, 9222695.jpg

>>3706534

He's right though. The best way to get to molecular manufacturing is having several mechanical, diamondoid arms moving from a supply of feedstock to a workspace in a regular manner. Having the robots in any configuration other than comfortably strapped down... would imply fitting the entire disassembler, sorter, feedstock, computer, memory and assembler within the space of a few cubic microns. And that is not going to happen.

inb4 cells lololololol

>> No.3706557

>>3706472
personal opinion: artificial intelligence will evolve in a similar fashion to natural intelligence. The evolutionary pressure to come out with a more useful smartphone will drive faster and more efficient hardware and software. The desired features of voice and image recognition, language understanding for voice commands, and internetworking provide a pressure similar to simian adaptation to the arboreal environment.

Although I'm more interested in jumpstarting the other parts of artificial ecologies. It is bacteria and plants, and the genetic system in general, which make our environment livable. I want the equivalent for machines, adapted to the extra-terrestrial solar system environment. Intelligence is not required for those kind of systems. (Intelligence is overrated. Trial and error over vast populations and times created everything interesting in our world.)

>> No.3706565

>>3706536
i have 100loaves of bread
there is no one else within a week's travel selling bread
you and your family are hungry
i offer to trade you your first-born daughter's virginity for bread
you have no choice but to accept or see your family starve

voluntary exchange? yep
just? if you seriously think so, you need to take an ethics class, or possibly some serious psychotherapy

>> No.3706590

>>3706468
not sure if trollin

1. you said "Exponential growth is not a fucking absolute."
what I meant by physical limits is that you won't have Moore's law forever, because we'll reach limits imposed by the laws of physics. e.g.: you can't compute pi with a single atom

2. whatever

3. see >>3706485

4. "There's no assurance that our technological development will reach infinity."
now you contradict yourself with 1. and agree with what I said
additionally, you're looking at things in a linear way, not exponential; it's a common error


>>3706469
good comment

>>3706475
you look more like a resident troll
Let's not discuss anything. Let's not share ideas. It's so much better to preserve ignorance.

>>3706502
"agreed"
I'm not saying we should prepare against a certain threat or attack, as in a paranoid way.
What I'm saying is that we should be aware of what exponential evolution can bring us. And it can go the good way or the bad way. Or most likely both, but for different classes.
And what I mentioned in the first post is that if you have cash you're pretty much guaranteed to live to see the future. Present example: cryopreservation
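The linear-vs-exponential-vs-limits argument running through this exchange can be made concrete with a toy comparison (made-up numbers, purely illustrative): an exponential extrapolation versus a logistic curve that saturates at a hard physical ceiling. The two are nearly indistinguishable early on, which is why extrapolating from early data is risky.

```python
import math

# Toy comparison: pure exponential growth vs. logistic growth capped by a
# physical ceiling. All numbers are illustrative, not measurements.
def exponential(t, x0=1.0, rate=0.5):
    return x0 * math.exp(rate * t)

def logistic(t, x0=1.0, rate=0.5, ceiling=100.0):
    return ceiling / (1.0 + (ceiling / x0 - 1.0) * math.exp(-rate * t))

for t in (0, 5, 10, 20):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# The curves start together at 1.0; by t=20 the exponential (~22026) is over
# 200x the logistic value (~99.6), because the logistic has hit its ceiling.
```

Which curve technology actually follows is the open question, not something the data so far settles.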

>> No.3706613

>>3706590

1. You misunderstood. I was saying that the progress of mankind's technological development cannot simply be predicted as if it were a weather phenomenon.

Dropping 2 and 3

4. And I'm telling you exponential development, isn't.

>> No.3706619

>>3706517
people like you make me want to never come back to /sci/
http://en.wikipedia.org/wiki/Flat_earth

>>3706530
that is possible, it may be that we know and can model all neurons, but the simulated neural network doesn't work
but do you think it will stop there? do you think 1,000,000 years will be needed to decipher it?
with every passing year technology doubles its ability

>>3706519
i loled

>>3706542
butthurt because you can't comprehend logic?
choose to live in ignorance if you want

>>3706541
agreed

>>3706565
this x infinity!

>> No.3706626

>/extrapolation/

>> No.3706635

>>3706549
Again we have this discussion. Abstraction inversion.

Assemblers still have to be assembled by something.

>> No.3706647

>>3706469

http://www.quantumconsciousness.org/presentations/whatisconsciousness.html#roger

>> No.3706656

>>3706062

LOLOLOLOLOLOLOL

>> No.3706676

The singularity will happen eventually, maybe not as soon as Kurzweil thinks, but it will.
The brain isn't as special as people think it is.

I don't really care what he says; I've researched realistic AI, and found out there isn't any real obstacle to making one in the coming years.

>> No.3706694

>>3706530
Nah, we're actually already discovering that it's quite a bit simpler than we first thought. So much so that it's even more mysterious how "consciousness" arises from such a simple device.

>> No.3706695

Also, my prediction is that robots will never be fully used.
They will play with the idea and construct several dummies, but sooner or later genetics and virtual reality will be so advanced that robots won't have their time of use and will be left in obscurity.

It's freaking Hollywood that gives you all those stupid ideas about the future.
That's why I don't like sci-fi movies/books: none of them has any idea about realistic future technology.

>> No.3706705

>>3706676
> there isn't any real obstacle to make one
... except for the minor detail that no one has any idea how to make one. Even the SIAI, which is pretty well funded, just has a sort of rough outline that they think *might* work. A true "seed AI" is still entirely science fiction at this point. If your "research" suggested otherwise, then you didn't do very good research.

>> No.3706707

>Assemblers still have to be assembled by something.

Electron microscope/AFM with some proper tips. Do a few hundred tries until you get an assembler that meets the specs and works. Then assemble more assemblers with the assembler.

We've been able to move single atoms for a while, but not reliably, and not fast.

>> No.3706711

>>3706472
>>3706475
>>3706549
CCM, I honestly can't tell if your Kurzweil image macros are pro- or anti-.

>> No.3706719

>>3706707

Thank you.

>>3706635

U.S. Patent No. 7,687,146

>>3706711

Haha, well, they aren't really meant to be pro or anti, I'm just poking fun at him.

>> No.3706722

>>3706541
>agrees to: Because the people in charge get to delineate what counts as "voluntary."
That is wrong on all levels. Voluntary is obviously equivalent to the absence of physical violence or the threat of it.
If the trader wanted your daughter's virginity and you didn't have a choice, you lived in a fucked up country with no way to even have a daughter in the first place. Negotiate to sell yourself as a slave to the trader; he will probably feed you so you can work for his infinite supply of gold-worth bread. Don't resort to stupid violence just because you don't like a business proposal; that's the statist way and it is a crime.

>> No.3706724

>>3706707
Not to be a dick, but you missed the point of the discussion, which was about self-assembly being "too complicated." It's only too complicated if *we're* too complicated, because the task of assembling the assemblers falls to us, and we're self-assembling.

But, these arguments are all about how much better these assemblers will be, than us!

See what I mean?

>> No.3706730

>>3706719
> I'm just poking fun at him
So what's your beef -- you think our time would be better spent focusing on nanotech than AI?

>> No.3706731

>>3706694
You'll excuse my buttloads of skepticism on that point.

>> No.3706734

>>3706705
I actually can build one if I get funded.

You are confusing some things.
I can make an AI of pure intelligence, not a human brain.

The human brain has tons of unnecessary crap in it; 80+% is about basic stuff (motor skills, body regulation, mood regulation, etc.), and very little of it is actually used for intelligent thought.

If you wanted to replicate all that, you would have to wait for Jesus.

But we have found that the brain uses a very basic hierarchical logic; even the most 'complex' thoughts are based on the same pattern.
We need more parallel computation, for starters.

>> No.3706736

>>3706731
Sure, I'll excuse it... whether or not you personally are aware of the research happening with reverse-engineering the brain doesn't really concern me much.

>> No.3706742

>>3706734
I'm sure you think you can. So does every other CS 101 student who gets their first glimpse at "fuzzy logic" and "neural nets" and functional programming. Oddly, no one has actually been able to do it. I'm sure you are the exception, though. How much money do you need and where should I mail the check?

>> No.3706744

>>3706722
I'm the "delineate" poster, not the daughter-trader.

> Voluntary is obviously equivalent to absence of physical violence or threat of it.
This is hopelessly naive. Most of us realize that there is no necessary connection between our wishes and reality, especially in a legal system, on which the burden of deciding whether an action was voluntary or not would fall. In which case, someone or someones did, in fact, delineate what counts as voluntary and what doesn't.

>> No.3706746

>>3706724

Nobody mentioned self-assembly; positional assembly is different.

>But, these arguments are all about how much better these assemblers will be, than us!

Not really. I concede biology has a greater, far greater range of 'products' than the first-gen assemblers, which can only synthesize diamond and fullerenes. Still, this small range is more than enough (coupled with 3D positional assembly) to allow the creation of whole new industries.

>>3706730

I'm all up for AI research, which has been progressing non-stop since its infancy. A lot of modern software is actually AI, we just don't care about it unless it's sentient. What I don't support is handing money to snake oil salesmen/cult leaders like Yudkowsky who call themselves 'AI researchers'.

See this article regarding funding for nanotechnology: http://nextbigfuture.com/2010/09/eric-drexler-ralph-merkle-or-robert.html

Considering how little money we're putting into the 'real' nanotechnology, yes, I think nanotech has to be pursued more than AI, because AI has more than enough support from a wide range of corporations and institutions.

>> No.3706762

>>3706742
If you learn programming and evolutionary biology, and do some neuroscience, you should be able to see it.
It's not really as hard as 'those' people think it is.

>> No.3706766
File: 232 KB, 500x333, ghgh.png [View same] [iqdb] [saucenao] [google]
[ERROR]

>> No.3706775

>>3706762
Sounds like you're talking about genetic algorithms and genetic programming. John Koza has made a lot of progress in genetic programming, and even has a machine that has earned itself a patent (yes, an AI was awarded a patent). But even he hasn't managed to build a seed AI.
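For anyone unfamiliar with the technique being named here, a minimal genetic algorithm looks something like this (a generic textbook sketch of selection, crossover, and mutation on bit strings; nothing to do with Koza's actual system):

```python
import random

# Minimal genetic algorithm: evolve a bit string toward all ones.
# Generic illustration of the technique, not Koza's genetic programming rig.
random.seed(0)

def fitness(bits):
    return sum(bits)  # number of 1s; the maximum is len(bits)

def evolve(length=20, pop_size=30, generations=60, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = [bit ^ (random.random() < mutation_rate)  # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # should be at or near the maximum of 20
```

Toy problems like this converge quickly; the hard part Koza's work addresses is evolving program trees rather than fixed-length bit strings.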

>> No.3706776

>cult leaders like Yudkowsky who call themselves 'AI researchers'.

I've had the good fortune to not hear about him until now. Fucking hell.
What's up with the fucking need to be vocal about super-AI risks? Hugo "read my books about artilects and OMG WAR!" de Garis and Ben Goertzel have more than enough filled my quota of bullshit speculation.

Maybe a label other than transhumanism is needed, i don't want to be associated with these people.

>> No.3706782

>>3706736
Uh-huh. I'm not in the field, so of course I'm not up on last month's issue of "Brain Science Weekly" and its cover story, "Almost time for all subscriptions to expire 'cuz we're done!", but every time I *have* read work in the field, it is quite clear to me that we've still got a long way to go.

Of course, I'm an engineer, so my criteria for "understanding" are much more strict than what scientists use: if we can't use our "understanding" to build the AI, then it's basically just mental masturbation. Which is kind of meta, actually, wrt this topic.

>> No.3706784
File: 121 KB, 400x350, sun.jpg

>>3706766

That's not good enough.

>> No.3706786

>>3706056

>Technology that is cheap as fuck to make and will make people more productive
>Implying employers wouldn't be PAYING you to get them.

>> No.3706788

>>3706058

Given my prospects for education and schooling I will be making 500K starting
PhD in mathematics btw

>> No.3706792

>american worker productivity rising constantly during last 30 years
>average wages flat
>corporate income sky rocketing

keep it classy capitalists

>> No.3706795
File: 84 KB, 500x333, Kurzweil_universe.jpg

>>3706766

>> No.3706798

>>3706762
Uhm... It is extremely fucking hard.

I know how you feel though, it sounds simple, in your mind it's simple.
And then you start coding and the nightmare begins.

Even some narrow AI ideas that I've started on turn out to go from a 3-step, single-day coding session to a hyperinflationary nightmare of 300 steps and 5000+ lines of code.

There's some fault in human reasoning that severely underestimates the mechanisms of intelligence. It's not something that happens once, twice or thrice; it has become routine in my life. Although I've always approached it sceptically, almost every time it feels easy until a few hours into the coding, when you find out there's an enormous gap in the reasoning/planning.

And even assuming I'm a complete retard, there have been people programming towards AI for 20-30 years. Someone out there is a master programmer with good insight into neurophysiology/neurology, and he hasn't made it. Cross-disciplinary teams with millions in funding haven't made it either, go figure.

>> No.3706799

>>3706792
>keep it classy
Its "stay class"

>> No.3706803

>>3706788
The official troll is "300k starting," not 500k. No one makes 500k starting in any field, by the way.

>> No.3706808
File: 135 KB, 407x405, 1312333394667.jpg

ITT: Computer scientists have robot cultist epiphany, think they can "build brains if they were funded".

>> No.3706810

>>3706803
We all will, when the singularity comes! Exponential growth mang.

>> No.3706812

>>3706798
Age:
Skills:
Major:
Country:
Hobbies:
Ancestry:
Political Compass:
Religion:
Income:
Family Status:
Virgin:
Extra info:

>> No.3706813

>>3706799
I meant "classy"
fuck.

>> No.3706815

>>3706746
> nobody mentioned self-assembly
Um, I responded to exactly that here:
>>3706534

>> No.3706817

>>3706798
I would really be interested in your approach.
Care to explain or give code etc.

>> No.3706818

>>3706788
you win and I'm jelly
although I highly doubt that pay check, unless you're into finance AND are good at it

I wish I had invested more in math during my graduate period

>> No.3706832

>>3706798
Have you read "Why Heideggerian AI Failed and how Fixing it would Require making it more Heideggerian"?

>> No.3706851

>>3706815

Nanotechnology doesn't require nor is it based on self-assembly.

>> No.3706853
File: 266 KB, 500x333, fghj.png

>>3706795

>> No.3706860

>>3706851
Of course it does, because it depends on us. This is the fundamental conceit behind it.

>> No.3706870

>>3706860

No, it does not.

1 - Nanotechnology as defined by Drexler and Merkle does not depend on biology. No solutions or proteins are used.

2 - Self assembly is when a bunch of molecules in solution come together into a shape. AFM isn't 'self assembly', the tip doesn't magically crash into the surface. Positional mechanosynthesis isn't self-assembly: the manipulator pushes down on the moieties until they bond, then retreats. Mechanical chemistry. The molecule doesn't bond by itself.

>> No.3706873

Neuroscientist, what would be your exact choice of algorithms, given the money?
I also think I could build an AI if I had about 10 petaflops and LOTS of programming, as Anonymous pointed out. The thing is, I have found the right algorithms.
But today's hardware costs about $1,000,000 per petaflop, and I don't have the prestige or the renown to raise the money.

>> No.3706874

>>3706817
Why? I thought you had it all figured out.

>> No.3706878

Neuroscientist's claims are too vague. Yeah, I could build an AI if I had a mountain of silicon wafers the size of Mount Everest, a frozen brain, a microtome, an electron microscope, and held Henry Markram hostage. That doesn't make it any more plausible.

>> No.3706881

>>3706873
You guys are so full of shit that it's sickening.
If you had anything remotely close to what you *think* you have, then all you'd have to do is demonstrate it to one of a million different venture capitalists or government agencies and you'd have all the funding you need. I guess you expect everyone to believe that your incredible AI algorithm can't be done on a small scale at all... It's either one kajillion exaflops or it just won't work at all. It's impossible to show a small version of it running to impress investors. Mmkay.

>> No.3706883

>>3706817
I'm a hobby programmer with some formal education in computer science and some other education covering neurophysiology.

My approach is irrelevant (I've had tens of them for various problems and pet projects); the point is that you feel arrogant and think it's easy, just as I felt arrogant and thought it was easy. When programming you can't handwave shit away, and your mind loves to do it. Even 'fake' chatbot-style AI is hard unless you find "herp derp" or equivalent to be a decent universal reply to any conversation.

Of course you could postulate that you'll evolve AI, but then you're likely to find a timer asking you to wait 10000 days for the first batch of results to arrive.

As you seem to think that money plus expertise will somehow turn up a way to solve the problem, I don't think you'd have much more luck.

>> No.3706887

>>3706870
> Nanotechnology as defined by Drexler and Merkle does not depend on biology. No solutions or proteins are used.
It absolutely depends on *us*. So long as it depends on *us*, we've just pushed the problem back. If you don't count all the petroleum products used in the production and transport of corn, it's an amazing producer of energy.

> Self assembly is when a bunch of molecules in solution come together into a shape. AFM isn't 'self assembly'
Dude, self-assembly came up because someone said it was complicated and "not worth it." Talking about how some particular implementation of molecular assembly isn't self-assembly is... stupid, in that context. Focus, bro. Focus.

>> No.3706892

>>3706887

>Dude, self-assembly came up because someone said it was complicated and "not worth it."

No, he said self-replicating nanomachinery was too complicated.

>> No.3706893
File: 10 KB, 260x205, Richard_Dawkins_pic.jpg

>>3706878
>Abiogenesis would happen if we had a whole planet full of carbon and liquid water with steam and sunlight, just give me 2 billion years.

>> No.3706916

>>3706887
> Talking about how some particular implementation of molecular assembly isn't self-assembly is... stupid, in that context. Focus, bro. Focus.

I have to intervene here a bit.
Self-assembly is when you put a few hundred pieces of Lego in a box, shake it, and find a house inside when you open it. It's entirely different from placing one brick at a time by hand: the shaking scenario requires an energetically favourable situation for bonding in the correct places, whereas when I use my hand I donate energy to achieve the precise bond I desire. There is a very large difference between the two. And I don't really get what your point here is.

You seem to have some misconceptions about this; now carry on spewing bile.

>> No.3706920

>>3706892
> Self-replicating nanomachines would be ungodly complex and.. not very useful as a concept.

>> No.3706923

OP just play deus ex brah

>> No.3706934

>>3706916

Thank you.

>> No.3706940
File: 25 KB, 480x480, 1313345580299.jpg

>implying that access to godlike medical care means >you have a better life.

No one is ever going to inject nano techs into me.
>implying they haven't already

Why do people think that we are going to technologically make people immortal? The only way that would ever make sense is if a "Children of Men" situation happens and humans cannot reproduce.

>> No.3706942

>>3706916
The point is self-assembly is very important, whether it is complex or not. My original response expressed skepticism that it was "not useful as a concept."

>> No.3706967

>>3706940
>implying that access to godlike medical care means you have a better life.
It may not be a direct implication, the same as with money, but let's just say it helps :]

> Why do people think that we are going to technologically make people immortal.
That is the thing. Not everyone. Just the ones who can afford it.

>> No.3706985

>>3706942
Self-assembly doesn't share much with mechanosynthesis, however. While it's extremely useful for a lot of tasks, and growing more so by the day, it carries certain limitations too.

Take DNA bases for example: mix them in solution and we get the A-T and G-C pairs. This is handy because we get the pairs automatically after mixing. But mixing DNA bases without involving a soup of complex enzymes and other machinery will never result in anything but DNA pairs or strings; that is, DNA bases self-assemble into DNA base-pairs. Or consider that magnets self-assemble into north-south-pole attachments.

This is not self-replication. We can achieve self-replication with self-assembled components, but it requires a lot of peripheral devices; the same applies to mechanosynthesis.

The latter, however, can take two DNA bases and smash them together to form an entirely new molecule, or pick single atoms away from the molecules, which makes it enormously more flexible than self-assembly. This greatly increased flexibility probably means that it's a lot easier to make self-replicating mechanosynthesizers than self-assembling systems, as there are fewer constraints. As for the feasibility of self-replication: it's been done to death in digital media, and given broad mechanosynthesis capability it should be very possible physically too (given that a macroscopic $500 plastic 3D printer can almost do it...).
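To make the base-pairing example concrete, here is a trivial sketch (Python; obviously not a chemistry simulation, names illustrative): self-assembly can only ever produce the complements the energetics already favour, never a novel molecule.

```python
# Toy version of the point above: free DNA bases can only "fall into"
# their energetically favourable Watson-Crick partners, so shaking the
# box gives you base pairs and nothing else. Illustrative only.

PAIRS = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def self_assemble(strand):
    # each base binds the one partner the energetics favour
    return ''.join(PAIRS[base] for base in strand)

print(self_assemble('ATGC'))                 # -> TACG
print(self_assemble(self_assemble('ATGC')))  # complement of complement: ATGC
```

Running the rule twice just gives back the original strand: the reachable set of products is fixed by the pairing rules, which is the claimed limitation versus mechanosynthesis.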

>> No.3707046

>>3706967
> Why do people think that we are going to technologically make people immortal
I don't see how anyone could think that we won't. The trend in human lifespan is increasing. When the rate of increase becomes one year per year, what outcome do you expect?
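The "one year per year" claim is just arithmetic. A back-of-envelope sketch (Python; the numbers and the linear-growth assumption are purely illustrative):

```python
# Back-of-envelope version of the claim above: if remaining life
# expectancy grows by `rate` years for every year you live, how many
# years do you get in total? Numbers are purely illustrative.

def years_lived(initial_expectancy, rate, horizon=500):
    remaining, lived = initial_expectancy, 0
    while remaining > 0 and lived < horizon:
        lived += 1
        remaining += rate - 1.0  # one year passes, expectancy grows by `rate`
    return lived

print(years_lived(40, 0.0))  # -> 40: no progress, expectancy just runs out
print(years_lived(40, 0.5))  # -> 80: progress at half a year per year
print(years_lived(40, 1.0))  # -> 500: hits the cap; expectancy never shrinks
```

At any rate below 1.0 the total stays finite (it just stretches); at exactly 1.0 remaining expectancy stops decreasing at all, which is the outcome the post is pointing at.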

>> No.3707052

>>3706985

I approve of this post.

>> No.3707071

ITT: http://www.youtube.com/watch?v=fdadZ_KrZVw&feature=channel_video_title

>> No.3707092
File: 435 KB, 503x750, 0f56128ec35b77d40c665d306f61e07f.jpg

>>3707071

Oh wow.

>> No.3707097

>>3706985
> Self assembly doesn't share much with mechanosynthesis however.
I've never suggested otherwise. Honestly this whole chain has been a massive misunderstanding. I've only expressed that it is a problem that it doesn't share much with it. Thankfully, the science is still young, and even without self-assembly the applications are still extremely large.

>> No.3707107 [DELETED] 
File: 6 KB, 198x254, images.jpg

>>3707071
> Movie about biological immortality
> Cool glowing numbers on arm
> Start getting excited about cool movie
> Wait... is that...
> Justin Timberlake
> yfw

>> No.3707133

Immortal kike cyborgs do not want

>> No.3707168
File: 160 KB, 644x800, felinefun.jpg

>>3707092
>Oh wow

Don't you mean... "Oh meow..."

>> No.3707314

Why in the fuck would you want to create something smarter than you? If people as smart as you don't wanna be your bitch, why would something exponentially smarter than you want to?

General artificial intelligence is grossly impractical and unlikely anyway.

>> No.3707324
File: 25 KB, 397x212, 2e1vi3d.jpg.png

>>3707168

No I'm pretty sure I meant "oh wow".

>> No.3707451

>>3707071
the concept is there and so is Andrew Niccol

but the trailer sucked... and justin timerlimp?
fuck! shit!

what happened to you Niccol, you used to be cool :(
I guess you need the money

>> No.3707494

>>3707314
You fail to see the point.
Are bosses smarter than their employees? The quick answer would be yes, but in reality, in most cases, they aren't.
Secondly, you don't want them smart enough to rebel; you want them smart and obedient, so they can do your job.
The thing is, if an AI is created in such a way that it becomes conscious, then anything can happen.
Will it try to help puny humans solve their problems, or will it see humans as a hazard to a stable or efficient world?

>> No.3707568

>>3707494

To develop an AI complex enough to solve tasks given to it and improve other AI on the level of singularity, it would basically need to have a complex enough model of the world to be self aware (in that it would necessarily have to model itself).

So we'd have a self-aware machine that is smarter than us and could understand humanity as a system and manipulate it to its whim. And it would have whims: you'd have to have it able to create its own goals and work towards them. And they'd be virtually incomprehensible to us.

And you don't see a problem with this?

>two degrees in AI

>> No.3707795

>>3707568
not necessarily. As humans we are conscious, but we also have a subconscious side, which IMO translates into our conscious actions (eg: going to the shop to buy food because you're hungry, or ignoring what you're doing because you see a hot girl)
if the subconscious rules govern the conscious ones, the AI rebellion may never happen
http://news.sciencemag.org/sciencenow/2011/08/mind-altering-bugs.html?ref=hp
http://news.nationalgeographic.com/news/2011/05/110511-zombies-ants-fungus-infection-spores-bite-noon-animals-science/
but still it's very likely

>> No.3707824
File: 91 KB, 630x818, Colonel_CoffeeMug.jpg

>> No.3707842

>>3707071
>take one Twilight Zone episode
>expand into a full movie
I'm not expecting much from it.

>> No.3708607

>>3707795

Man, you don't know shit about shit and you're just praying to a god you hope we'll make.

>> No.3708662

>>3707568
Did either of those degrees involve a single class in evolutionary algorithms?

>> No.3708696
File: 194 KB, 1920x1200, cyberpunk.jpg

I am back, I am >>3706485
>>3706444
>>3706412
>>3706398

I'll take this name.

>>3706590
>I'm not saying we should prepare against a certain threat or attack, as in a paranoid way.

But I do think we should be paranoid about threats. I do not want myself or civilisation to die having made it this far. And I feel like there's a high chance that we will just fail and kill everything with us, or that we will have created SAI without us going along with them (which is not that bad, but I'd really like to live and see this).

The changes are going to be so rapid that I think they will create huge chaos, or that humans just won't be able to keep up and will fail or die without making it there.

>> No.3708711

>>3707568
Furthermore, self-awareness clearly does not bring the ability to design even better intelligences based upon knowledge of the mechanisms of self-awareness. We are self-aware, and look how hard it is for us!

>> No.3708715
File: 111 KB, 285x250, 1289060133482.png

>>3707824

>> No.3708894

>>3708711
A system, that is smarter than us, and aware of itself, and it generates its own goals is a problem. Self-awareness is a byproduct of any system sufficiently representative to accomplish that which would bring about singularity.

>>3708662
Please do tell me how a genetic algorithm is going to produce an artificial general intelligence. I'm all ears.

>> No.3708920

>>3708894
I think you used the word "representative" incorrectly.

>> No.3708939

>>3708894
It made you.

>> No.3708943

>>3708920

"system with sufficient modeling power" if you prefer, though I stand by my original phrasing.

>> No.3708984

>>3708939

That's not how genetic algorithms work. You don't just put a bunch of variables in a computer and let it run for millions of years. Using a genetic algorithm isn't some shortcut; it's incredibly complex. You'd need:
A. An environment model sufficiently complex so as to foster intelligence
B. Some heuristic by which to gauge the success of the intelligence
C. Some way to encode mutation into an intelligent system
D. An intelligent system capable of playing host to this kind of development in the first place.
E. Actually, a metric fuckton of evaluation heuristics for every aspect of general intellectual and behavioral performance.
F. About a billion years.
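The skeleton those points plug into is short; everything hard lives inside A, B and E. A toy sketch (Python; it matches a fixed bit-string rather than evolving anything intelligent, and every name here is illustrative):

```python
import random

# Minimal genetic-algorithm skeleton matching points A-E above, applied
# to a deliberately trivial problem: matching a fixed bit-string. The
# loop is easy; the hard part for AI is that nobody knows how to write
# A (a rich enough environment) or B (a fitness heuristic) for
# intelligence. All names here are illustrative.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]          # stand-in for the environment (A)

def fitness(genome):                        # the evaluation heuristic (B)
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, p=0.1):                  # encoded mutation (C)
    return [g ^ (random.random() < p) for g in genome]

random.seed(1)
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]  # hosts (D)
for generation in range(100):               # stand-in for point F's eons
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    survivors = pop[:10]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

print(fitness(max(pop, key=fitness)))
```

For an 8-bit target this converges almost instantly, which is exactly the contrast with point F: the loop scales with how cheaply you can evaluate fitness, and "is it intelligent?" is not a cheap evaluation.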

>> No.3708997

>>3707092
Are you a furry, sir?

>> No.3709080

>>3708984
The problem with many of those "points" is that they miss the point by trying to encapsulate the data into preset identifiers. If you're designing something that is supposed to be able to improve itself, it can't be built in a cage; it has to be the cage which holds itself.

>F.About a billion years.
This is precisely the reason you use a computer

>> No.3709117

>>3709080

Uh, no. Shit doesn't just happen, you need to be able to encode the system before you can run it. What's your plan, then?

And I meant a billion years running the algorithms.

>> No.3709152

>>3709080

By the way, for an intelligent system to improve itself, it first needs a perfect model of itself and of what it means to improve. That on its own is an extremely difficult task.

I'm sick of people saying "Oh, you just need an AI that can improve itself and it's exponential from there." It's not that fucking simple.

>> No.3709236

this greedy jew travels the world repeating the same shit.

he gets like 40000 bucks for a speech and sells tons of books.

He is fucking smart.

>> No.3709243

>>3709152
Usually we mean that the AI will the least design better hardware to increase its speed to design better hardware. At most the AI will understand its own software architecture and and design better software to attain its own software's goals. Most humans can identify another human who has similar goals in life than you do, so the AI would do that analysis to this new software.