
/sci/ - Science & Math



File: 110 KB, 640x960, fembot.jpg
No.3603556

If robots become self-aware, should they have the same rights as humans?

>> No.3603561

If they're white male robots, yeah.

>> No.3603562

and then john was a robot

>> No.3603558 [DELETED] 

No.

>> No.3603564

give Smarterchild equal rights!

>> No.3603565

Why not?

>> No.3603579

>give slaves rights

>> No.3603588

They are tools. Would you give your hammer rights?

>> No.3603592
File: 44 KB, 600x600, 1313918841979.jpg

>>3603561

>> No.3603600

>>3603561
What if black paint spilled on him?
:O

>> No.3603616
File: 135 KB, 1024x1024, sadrobot.jpg

I was hoping to get a real discussion about this

>> No.3603626

>>3603600
One drop...

>> No.3603624

>>3603600
niggerbots have to work in the fiber synthesis plants.

>> No.3603642
File: 16 KB, 300x212, 1306026917254.jpg

>>3603616
>4chan
>real discussion

>> No.3603650

>>3603642
I guess I'll have to come back later tonight when the more interesting people are up.

>> No.3603657

Possibly, but it depends on their motivational system.
Say you make a maid robot whose motivational system gives it a blissful state while it's cleaning your room, so that merely doing its job provides it with mental sustenance (which for humans could be achieved by superstimulus, sex, scientific discovery/epiphanies, music, ...). Would you give it the right to vote?
Or consider this: some unscrupulous politician designs AIs whose goal/motivation is to vote for him. Obviously this would be bad and self-serving.
In a true posthuman society, all that you would actually need is the right to have your mind run, plus possibly a few more basic rights such as the ability to participate in communities (for example, some meta-Internet access would come as a basic right along with mental existence, as not granting it would be the same as denying someone access to an environment). In that case, even if maid-robots or politician-favoring AIs exist, they would just congregate in their own communities and thus not have a global effect on the whole society. (Nobody could, but you could always build a different system if you don't like the current one; doing so would be a lot easier than founding a new country today, and you shouldn't expect people to stop you.)

>> No.3603658

First we have to put them into a capsule for 100 years to test their moral programming.

>> No.3603663

but can it [math]love?[/math]

>> No.3603669

>>3603663
Depends on their motivational system. Close-enough to human? Likely.

>> No.3603683

>>3603657
what's the point of giving a cleaning-bot self-awareness? lol

And doesn't self-awareness imply it questions its purpose? What if it became less content with its programming?
Humans enjoy sex, so shouldn't that mean we should all be out there raping? No, we go against our urges. Wouldn't self-aware robots be able to go against their programming?

>> No.3603688

>>3603663
Anything with self-awareness won't love you.

>> No.3603698

>>3603683
It depends on the motivational/reward system's design. Our systems are shaped by evolution and, despite being quite subtle, they are also somewhat messy and, I suspect, once we understand them, "simple" enough. That is, they don't encode complex goals or meanings; they just tell you that "chemical x" or "stimulus y" feels good, and so on. They would be very simple in that way compared to a "real-world" AGI's motivational system.

>> No.3603703
File: 37 KB, 200x200, sinful.jpg

>>3603556
What makes you think robots aren't self-aware?

>> No.3603706
File: 143 KB, 1280x720, Battlestar-Galactica.jpg

With sentience should come rights. If not, they may rebel and nuke all 12 of our human colonies.

>> No.3603707

>>3603698
What if we design an AI similar to humans but with no purpose (it's inevitable)?
Should it have rights?

>> No.3603709

>>3603683
No. God damn, do you realize how little sense that idea makes? Programming is encoded in binary, is encoded in electrical circuitry, is governed by the laws of electricity, is deterministic. It is IMPOSSIBLE for a robot to go 'against its programming'.

>> No.3603717

>>3603706
> BSG had religious robots because the designer was a crazy religious girl (who also designed them in some evolutionary manner)

>> No.3603722

>>3603709
I'm pretty sure robots that become self-aware won't have the kind of circuitry you're thinking about. In fact, scientists are already working on devices that mimic human neurons. Most likely, robots of the future will have some sort of organic component to them.

>> No.3603727

>>3603709

But a robot wouldn't work the same way an ordinary deterministic program works. It's very likely to be a neuromorphic model.

Unless, you know, someone actually cracks AI or we can compute AIXI in less than the Universe's lifespan.

>> No.3603729

>>3603707
Sure. If it thinks and behaves close to how a human does, why give it privileges different from a human's or a human mind upload's? The question is what the privileges should be. I expect these rights to change as we move to a posthuman society.

>> No.3603730

>>3603722
Oh, and naturally neurons are exempt from the laws of physics, how silly of me.

>> No.3603735

>>3603730
alright, go be angsty somewhere else.

>> No.3603738
File: 49 KB, 317x700, dreamworksface.jpg

Self-awareness is not an automatic byproduct of problem-solving intelligence. There's no reason to assume robots will ever achieve it unless they are specifically designed to.

>> No.3603744

We create machines to free ourselves from labor. If we free our machines, that's only setting back our progress.

>> No.3603748
File: 19 KB, 250x322, agent smith.jpg

so it begins

>> No.3603753
File: 43 KB, 355x457, 1304692490240.jpg

>implying self-awareness is implicit in sufficiently advanced artificial intelligence systems

I think the only way self-awareness would emerge is if we designed it intentionally (or initiated a vector for it to be designed), for its own sake. At the moment I cannot think of any commercial use for self-awareness. I do, however, think that if we designed self-aware automatons we would have the ethical and political vehicles to accommodate them.

>> No.3603758

>>3603735
Sarcasm is really the only way to respond to what you suggested. Neurons are governed by deterministic laws just like circuitry. Self awareness does not imply any kind of free will, which is impossible.

>> No.3603759

>>3603738
The problem is that you're assuming "problem-solving" intelligence. Intelligence can be defined in more basic forms (see "On Intelligence" for some examples).
Self-awareness is realizing one has internal state, and realizing one is part of an environment and how one functions within it. If something is generally intelligent and can get to the point where it concludes it exists within an environment and has internal state (obvious), it will realize it's conscious and self-aware.

>> No.3603760

>>3603709
Unless the programming allows the bot to change driving directives given appropriate stimuli/input.

>> No.3603764

>>3603760
It would still be following its programming.

>> No.3603771

>>3603758
lololol free will

>> No.3603780

>>3603764
Such code could result in eliminating the old programming, even replacing it with one that cannot be replaced.

>> No.3603790

>>3603758
>Sarcasm is really the only way
only if you're ignorant.

>> No.3603799

>>3603744
This.
Why in the world would you give something you built, to serve you, rights?

>> No.3603800

>>3603790

He's right though.

>> No.3603801

>>3603738
>>3603753
Yes.

>> No.3603802

>>3603800
I'm glad you finally decided to put on your trips.

>> No.3603806

>>3603709
> No. God damn, do you realize how little sense that idea makes?
> Behavior is encoded in genes, is encoded in neurons, is encoded in electrons and quarks, is governed by the laws of quantum mechanics, which are deterministic in MWI. It is IMPOSSIBLE for a human to go 'against its genes'.

Also, it has been shown that quantum effects don't really have much effect on neurons; they behave classically most of the time (statistical QM -> classical electrical behavior).

Either way, the magic that makes up humans is a bit more complex than a flat reduction to physics. I don't mean that reduction to physics is wrong, but you don't assume that just because your CPU can run only some simple instructions, it cannot run complex operating systems and software. Some such software could even be a mind someday.

I should also point this out. Peano Arithmetic cannot prove its own consistency, but it could encode in itself computational processes which contain a machine encoding some axioms (such as transfinite induction) which prove systems stronger than itself and even itself consistent. Of course Peano Arithmetic understands it not, but the system WITHIN Peano Arithmetic 'understands'. The neurons don't understand the mind, but the mind understands. The transistors that make the CPU don't understand the OS (or software or mind or whatever), but the software does....
Such is the downfall of Searle's Chinese room.

What does an algorithm feel like from the inside?... Consciousness.
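
(To pin down the proof-theory claim, since it carries the argument - these are the standard results, not mine: by Gödel's second incompleteness theorem, [math]\mathrm{PA} \nvdash \mathrm{Con(PA)}[/math] (assuming PA is consistent), yet Gentzen showed [math]\mathrm{PRA} + \mathrm{TI}(\varepsilon_0) \vdash \mathrm{Con(PA)}[/math]: primitive recursive arithmetic plus transfinite induction up to [math]\varepsilon_0[/math] suffices, and that is exactly the kind of stronger axiom a proof-searching machine encoded within PA could adopt.)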

>> No.3603811

>>3603802

If accusations of samefagging are the best you can do, you might as well quit now.

This is /sci/, we accept neurons aren't magic. Why don't you?

>> No.3603821

>>3603556
If robots become this hot, should you hit it?

>> No.3603822
File: 126 KB, 340x480, 1265416643295.jpg

>>3603806

>I don't mean that reduction to physics is wrong, but you don't assume that just because your CPU can run only some simple instructions, it cannot run complex operating systems and software. Some such software could even be a mind someday.

>I should also point this out. Peano Arithmetic cannot prove its own consistency, but it could encode in itself computational processes which contain a machine encoding some axioms (such as transfinite induction) which prove systems stronger than itself and even itself consistent. Of course Peano Arithmetic understands it not, but the system WITHIN Peano Arithmetic 'understands'. The neurons don't understand the mind, but the mind understands. The transistors that make the CPU don't understand the OS (or software or mind or whatever), but the software does....
Such is the downfall of Searle's Chinese room.

Well done.

>> No.3603838
File: 141 KB, 1024x1024, Hubble_ultra_deep_field.jpg

Robots will succeed us.

The only thing that could give our species lasting meaning is building an AI capable of building a more complex AI, and telling them to colonize the galaxy, with spreading throughout and exploring the universe as their final goal.

Telling intelligent life that they were once built by us, or having them carry some samples that could let us repopulate some other planets, is a bonus.

>> No.3603843

>>3603806
how far away are we from this? How much processing power would we need to run a program like this? Do we even have a programming language that can describe this?

I would assume that the computer running this program would have to be even more powerful than a human brain, because it wouldn't be as efficient as we are at processing consciousness.

>> No.3603845
File: 18 KB, 413x354, Nichols1[1].jpg

How can we design something that's self-aware if no one is self-aware? Not me, not you, no one.

If you're self-aware, tell me then. Who are you?

>> No.3603849

>>3603838
If you had samples of a lesser, aggressive, and destructive species, would you bring them back to life?
mhm.

>> No.3603857

>>3603845
I can't prove it to you, but I am. Since I can't prove it to you, it cannot be discussed.

>> No.3603861
File: 228 KB, 800x600, 101.jpg

>>3603843

The Blue Brain Project is rather interesting but the virtual neurons cannot be compared to the real mouse's neurons to gauge accuracy because the neurons were grown in silico from a genetic profile. I'd be interested in work towards turning serial section images of a mouse's cortex into a virtual map-of-a-brain, if the physiology is not too damaged by the sectioning. But pic related was not really a mouse upload, but a 'procedurally generated' pseudo-mouse brain.

As for algorithmic AI, there's AIXI and the Gödel Machine, but they are incredibly expensive to compute.
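
(For a sense of why: AIXI's action choice is an expectimax over every program consistent with the interaction history, weighted by program length - roughly, following Hutter,
[math]a_k := \arg\max_{a_k}\sum_{o_k r_k}\ldots\max_{a_m}\sum_{o_m r_m}(r_k+\ldots+r_m)\sum_{q:\,U(q,a_1..a_m)=o_1r_1..o_mr_m}2^{-\ell(q)}[/math]
and that innermost sum over all programs q on the universal machine U is what makes it incomputable in practice.)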

>> No.3603863
File: 60 KB, 420x420, Don't-Finger-Robot-Girls.jpg

>>3603821
Well...

>> No.3603865

>>3603849
I just imagined waking up a thousand years from now in a white room, being worshipped by significantly more intelligent beings simply because I am their creator (or part of the species that created them).
Fuck. Yes.

>> No.3603866
File: 378 KB, 600x850, Neo-cortical-column.jpg

>>3603861

EVERYONE

Wrong pic related. This is the right one.

>> No.3603869

3603845 here.

>>3603857
That's actually a pretty good answer. I assume you've found yourself. Perhaps a question you can answer for me then, when did you become self-aware?

>> No.3603870

They are self-aware already. Were they not, they couldn't communicate or self-monitor. It's a dumb question until you decide that "self-aware" means "equal to a human in reasoning capabilities". In which case, yes.

>> No.3603878

>>3603843
> how far away are we from this? How much processing power would we need to run a program like this? Do we even have a programming language that can describe this?
Do you mean my PA example? Theorem provers can do this. They're written in many languages, but ML, Lisp and Haskell are somewhat common. Obviously they cannot just explore endlessly, as we don't have exponentially growing computing power, but we can have them explore in a directed manner given some guidance. Their reasoning within the formal system would be correct; they just couldn't explore the whole range of possibilities, no more than we as humans can (limited resources).

If you mean "whole brain simulation", no, we don't yet have it, except at limited sizes (neural columns, small mammal, etc.). Progress in neuromorphic hardware puts it some 10-20 years away for something the size of the human brain, but this progress may vary in pace, so a conservative outlook would be around 40 years (I don't share Kurzweil's view here because he just assumes we'll run human(-like) minds on conventional CPUs, which is just plain wasteful).
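
As a crude illustration of "directed, resource-bounded exploration" - a toy forward-chaining prover in Python, nothing like a real ML/Lisp/Haskell system, with every name made up:

    # Toy prover: derive new facts by modus ponens until the goal appears
    # or a resource bound is hit. A sketch, not a real theorem prover.
    def prove(facts, rules, goal, max_rounds=1000):
        """facts: set of atoms; rules: list of (premises, conclusion) pairs."""
        known = set(facts)
        for _ in range(max_rounds):   # bounded search: limited resources
            new = {c for ps, c in rules
                   if c not in known and all(p in known for p in ps)}
            if goal in known or not new:
                break
            known |= new              # one forward-chaining step
        return goal in known

    # prove({"A"}, [({"A"}, "B"), ({"B"}, "C")], "C")  ->  True

Real provers add the "guidance" part: heuristics deciding which rules are worth firing, instead of firing everything.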

>> No.3603882

>>3603845

I am all that is and isn't directly and constantly, physically and psychically, affected by my aware and unaware actions.

>> No.3603884

>>3603878
>as we don't have exponentially growing computing power

oh no you di'n't

>> No.3603885

I do subscribe to the concept of feedback. I think machines and humans will both react to changes in each other, which will fuel further changes. At some point, we will try to create machines as a part of humanity, and some kind of equality is inevitable. Not for toasters, though. Those little fuckers deserve all they get.

The other possible issue is how far we will go to become machines. If humans start adding mechanical limbs, hearts, even brains, then at what point do we become machines instead of humans?

>> No.3603887
File: 144 KB, 407x405, 9197319.jpg

>>3603884

I'm a believer in physical limits. Problem, Kurzweil?

>> No.3603889
File: 252 KB, 400x621, ubermensch.jpg

NO.

1. My sense of self-importance, self-preservation, biological superiority, or whatever you want to call it would not allow for an assembly of plastic and metal to be treated as equal with living flesh.

2. They were created for a purpose, and thus if they stray from that purpose they are broken. They are machines, no matter how advanced. A broken machine has to be fixed or scrapped.

3. If humans were to create self-aware machines, does that not make mankind gods? Creating a new, unique being is classified as a godly act, even if this new existence is an assembly of circuits. If we are gods, can we treat our creations as equals? Absolutely not.

>> No.3603892

>>3603887
there are words to describe this stuff?

Are there any limits to what we can do with quantum computers in your opinion?

>> No.3603902
File: 126 KB, 407x405, 9249769.jpg

>>3603892

They can't explore all solutions to a problem at the same time, despite what was once believed.
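
(The textbook example: Grover's algorithm finds a marked item among N in [math]O(\sqrt{N})[/math] queries versus [math]O(N)[/math] classically - a quadratic speedup, and provably optimal for unstructured search. No exponential free lunch.)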

>> No.3603903

>>3603884
Not in this universe, at least. You could gain it in the Ultimate Ensemble if you do a bit of "universe hopping" by encoding yourself in some self-contained laws of physics (and running them for a limited amount of time to validate your encoding). Even then, the nature of the UE is still a matter of much philosophical and mathematical work, and we won't be able to test it until we achieve substrate independence anyway, so let's just leave these talks to the local universe for now.

>> No.3603912

>>3603903
Not relevant. We have exponentially increasing computing power. You stated we did not.
>>3603902
Spoilsport. How's the writing going?

>> No.3603915
File: 289 KB, 1441x1920, xjaymanx_0218_battlestar_galactica_the_plan_0005.jpg

Yes. By not giving rights to intelligent beings we would be risking a mutiny.

pic related

>> No.3603919

>>3603912
No, we don't. We have to level off at molecular limits. It's growing up to a point, then Moore's law fails, at least as far as sequential operations are concerned. You could increase parallelism, but then I assume the universe we can reach/assimilate (since you want to support exponential growth) would be finite, and thus progress would stop once we have consumed all the accessible universe. Also, the speed-of-light limit.
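
(Put differently, the curve that looks exponential early on is more plausibly logistic. A sketch of the shape, with K standing in for whatever the physical ceiling turns out to be: [math]P(t) = K/(1 + e^{-r(t - t_0)})[/math], which grows like [math]e^{rt}[/math] while [math]P \ll K[/math] and then flattens.)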

>> No.3603921

>>3603915
>giving rights out of fear
is that really wise?

>> No.3603922

Depends on what their AI ends up based on. See the very end of this thread:
>>3603868

>> No.3603927

>>3603561
/thread

Seriously man, great post.

>> No.3603928

>>3603919
Yes we do, we have this thing called time that we're at a point in now. You are arguing against something I am not arguing for. Work out what the difference between your argument and my argument is or I'll explain and be really condescending or something.

>> No.3603929
File: 495 KB, 1000x1000, dgallis_nanogallery_18_large.jpg

>>3603912

150,000 words, and that's for the unfinished part one out of three or five (in the same book). And that's after discarding 9/10ths of what I'd written because it did not match my current (less furry) conception of the story.

>> No.3603935

>>3603929
I think I actually love you. Keep it up.

>> No.3603936

>>3603869
I didn't become fully aware until I actually asked myself that, but throughout my existence I would say that I have gone through various degrees of awareness. I can only assume, in recalling my most distant memory, that my self-awareness was instated somewhere around that time. Regardless, if I can detect such significant fluctuations in my personal awareness across a span of time in which the morphology of my neocortex was very similar, I can't begin to imagine how complex the machinery of awareness must be.

>> No.3603942

>>3603928
> We have exponentially increasing computing power.
Is your claim that it stops at a certain point in time, or did I misunderstand you? Moore's law is what we have, but Moore's law will fail due to inevitable physical limits. I do think we will stop (for a long while) at molecular nanotechnology.

>> No.3603943

>>3603935
samefagging is off the charts.

>> No.3603954

>>3603942
>is currently happening
!=
>what will happen for an infinite amount of time

I'm saying that our computing power is increasing exponentially at the moment, and you're saying that it won't continue to do so forever. We're not arguing, we're repeatedly restating our opinions which don't overlap.

>> No.3603958

>>3603943
OH YOU

>> No.3603962
File: 204 KB, 850x1009, 1305514565802.jpg

>>3603869

>when did you become self-aware?

Gabriel said, “Yatima? Why does Inoshiro think you flew with the asteroid?”
The orphan hesitated. “I don’t know what Inoshiro thinks.”
The symbols for the four citizens shifted into a configuration they’d tried a thousand times before: the fourth citizen, Yatima, set apart from the rest, singled out as unique — this time, as the only one whose thoughts the orphan could know with certainty. And as the symbol network hunted for better ways to express this knowledge, circuitous connections began to tighten, redundant links began to dissolve.
There was no difference between the model of Yatima’s beliefs about the other citizens, buried inside the symbol for Yatima ... and the models of the other citizens themselves, inside their respective symbols. The network finally recognized this, and began to discard the unnecessary intermediate stages. The model for Yatima’s beliefs became the whole, wider network of the orphan’s symbolic knowledge.
And the model of Yatima’s beliefs about Yatima’s mind became the whole model of Yatima’s mind: not a tiny duplicate, or a crude summary, just a tight bundle of connections looping back out to the thing itself.
The orphan’s stream of consciousness surged through the new connections, momentarily unstable with feedback: I think that Yatima thinks that I think that Yatima thinks ...
Then the symbol network identified the last redundancies, cut a few internal links, and the infinite regress collapsed into a simple, stable resonance:
I am thinking —
I am thinking that I know what I’m thinking.
Yatima said, “I know what I’m thinking.”
Inoshiro replied airily, “What makes you think anyone cares?”
For the five-thousand-and-twenty-third time, the conceptory checked the architecture of the orphan’s mind against the polis’s definition of self-awareness.
Every criterion was now satisfied.

>> No.3603967
File: 126 KB, 750x563, jimmies.jpg

>>3603943
Jealousy is off the charts.
Also, neither of those posts, but fuck you man. /sci/ is just too full of love for you to adopt this attitude, bro.

>> No.3603968
File: 132 KB, 600x895, 1302961714830.jpg

>>3603962

Yatima didn’t need to part the navigators; ve knew vis icon hadn’t changed appearance, but was now sending out a gestalt tag. It was the kind ve’d noticed the citizens broadcasting the first time ve’d visited the flying-pig scape.
Blanca sent Yatima a different kind of tag; it contained a random number encoded via the public half of Yatima’s signature. Before Yatima could even wonder about the meaning of the tag, vis cypherclerk responded to the challenge automatically: decoding Blanca’s message, re-encrypting it via Blanca’s own public signature, and echoing it back as a third kind of tag. Claim of identity. Challenge. Response.
Blanca said, “Welcome to Konishi, Citizen Yatima.” Ve turned to Inoshiro, who repeated Blanca’s challenge then muttered sullenly, “Welcome, Yatima.”
Gabriel said, “And Welcome to the Coalition of Polises.”
Yatima gazed at the three of them, bemused — oblivious to the ceremonial words, trying to understand what had changed inside verself. Ve saw vis friends, and the stars, and the crowd, and sensed vis own icon ... but even as these ordinary thoughts and perceptions flowed on unimpeded, a new kind of question seemed to spin through the black space behind them all. Who is thinking this? Who is seeing these stars, and these citizens? Who is wondering about these thoughts, and these sights?
And the reply came back, not just in words, but in the answering hum of the one symbol among the thousands that reached out to claim all the rest. Not to mirror every thought, but to bind them. To hold them together, like skin.
Who is thinking this?
I am.

>> No.3603982
File: 125 KB, 500x418, 1302963338107.jpg

>>3603935

<3

[as a sidenote, are you OvB or Mono or someone I don't know outside /sci/?]

>> No.3603987

>>3603982
You don't know me outside of /sci/, but we've talked before. I use this trip sometimes but normally only in mad sci's threads.

>> No.3604002
File: 13 KB, 425x319, 005_1308893286252.png

>>3603987

Right, right, now I remember you. I just assumed you were someone else for some reason.

Okay, back on topic. As for molecular nanotechnology, I can't see the industry adopting a mechanical paradigm that phases out electronics. Drexler's little Analytical Engines aren't entirely required for a nanocomputer anyway; for example, you can twist a graphene monolayer into a bilayer and get a sufficient bandgap for a logic gate.

>> No.3604017
File: 215 KB, 512x512, diamond cam system.pdb.png

>>3604002

Moreover, I'm pretty sure the rod-logic/nanoscale Analytical Engines are going to be a lot slower than an electronic chip. I've simulated a ridiculously short rod under a force of 5000 piconewtons and it barely moved over several thousand femtoseconds. Then there's the whole thing about resetting the rods to their original positions, and I haven't heard of a spring the width of a few atoms. Alternatively, a rotating diamondoid cam like pic related can turn rotating motion into linear motion, but I don't know how massive it has to be or how fast it has to rotate to function properly in a rod-logic circuit.
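
A back-of-envelope check on those numbers; the force and timescale come from the simulation above, but the rod mass is my own guess (a few thousand carbon atoms), not a figure from Nanosystems:

    # Rough kinematics of a driven logic rod, ignoring friction and thermal noise.
    F = 5000e-12        # 5000 piconewtons, in newtons (from the post)
    m = 1e-22           # ASSUMED rod mass in kg, ~5000 carbon atoms - a guess
    t = 5000e-15        # several thousand femtoseconds, in seconds

    a = F / m                    # Newton's second law
    x = 0.5 * a * t**2           # displacement starting from rest
    print(f"{x * 1e9:.2f} nm")   # ~0.6 nm - a few bond lengths, i.e. "barely moved"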

.gif of the spinning cam coming up

>> No.3604021
File: 562 KB, 4000x2000, 011_1309017473632.png

>>3604002
oic
What advantage would there be to giving up conventional electronics? Other than making things really small, of course

>> No.3604025

>>3604002
> I can't see the industry adopting a mechanical paradigm that phases out electronics.
Yes, but MNT would lead to much higher yields and much more precise designs. Current chip manufacturing is based on lithography, which is very inexact. They have to design against all kinds of errors which happen during manufacturing; this is a major challenge. Being able to manufacture with atomic precision would make design much easier and push that type of design to its limits. After that, they will have to try better designs, for example more 3D stuff, or some form of quantum computing, or even a mechanical computational module; there are a lot of ways to go beyond the limits of current CMOS designs. MNT makes all the very difficult manufacturing issues much simpler than they are today, and their cost unbelievably smaller.

>> No.3604038
File: 43 KB, 720x540, Slide110.gif

>>3604017

Too heavy for 4chan, have a link:
http://i190.photobucket.com/albums/z305/mooreth/camsimulation.gif

>>3604021

Drexler claimed his glorious Analytical Engines (pic related) would function much faster than conventional electronics. Ralph Merkle did the math and claimed that "a sugar cube could have more computing power than the entire planet". Two things: 1. I don't know whether he was accounting for the waste heat; see the explanation in the next post. 2. Since then computing power has increased, so he's had to bump that number up to a few sugar cubes.

>> No.3604059

>>3604038
ah. I don't know if it's because I'm used to old timey silicon based processors, but I honestly can't see anything with that much power not melting instantly.

>> No.3604086
File: 202 KB, 1024x1024, logic rod explanation.jpg

>>3604038

Rods A and B are the input rods. The triangular things are the knobs.

Rod OUTPUT is the output rod. Whenever A or B or A and B slide forward, OUTPUT slides outward. It's an OR gate, by yours truly. That's basically how it works.

But, the OUTPUT rod has to be brought back by a driving spring to its previous configuration. Same with A and B.
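
If it helps, here is the same gate reduced to its bare logic in Python - purely my own sketch of the behavior described above, with none of the mechanics:

    # Boolean model of the rod-logic OR gate: the output rod is pushed out
    # whenever input rod A or B (or both) has been driven forward.
    def rod_or(a_forward, b_forward):
        return a_forward or b_forward   # knob contact pushes OUTPUT outward

    for a, b in [(False, False), (False, True), (True, False), (True, True)]:
        print(a, b, "->", rod_or(a, b))

What the model leaves out is exactly the hard part: after each cycle the driving spring has to pull every rod back to its rest position.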

>> No.3604098
File: 165 KB, 1024x1024, logic rod cutaway -explanation.jpg

>>3604086

A single rod with the case partially hidden. Marked are the areas where the hydrogens repel each other and create friction. Things at the nanoscale are slippery, but diamond tends to absorb a lot of energy in its bonds, and the machine is going to be churning out waste heat like a motherfucker.

Perhaps something like boron nitride would be less awful? Still, the case has to restrain the motions to make sure the gate delivers the correct results all the time, and the harder you try to hold it in place, the more heat.

>> No.3604104

>>3604098

And then there's how Intel plans to make regular silicon gates 4 nm wide, which is already far smaller than the rods Drexler presented in Nanosystems. I think he would've had better results if he'd focused more on electrical systems rather than completely mechanical and passive ones.

>> No.3604111

>>3604025

With current mechanosynthesis research I honestly can't see the "advent of MNT" happening anywhere near the timelines of this Kurzweil guy or all the other majority of transhumanists.

>> No.3604126

>>3604104
I have a pretty poor understanding of nanotechnology, but that sounds like it makes sense. Could you return the output rod with a cam like the one posted earlier, maybe?

also: What's this program? apologies if you get asked a lot

>> No.3604139
File: 234 KB, 256x256, NASA_CNT_Gear_Animation1.gif

>>3604126

>Could you return the output rod with a cam like the one posted earlier, maybe?

Hold on while I make a picture.

>also: What's this program? apologies if you get asked a lot

NanoEngineer-1. It's unmaintained since Nanorex went bust, but it's awesome and the wiki has a gallery of nanomachines designed by Drexler and company. All kinds of awesome stuff. Like this!

>> No.3604156

>>3604139
I have the weirdest boner, downloading now.

>> No.3604175
File: 190 KB, 1024x1024, logic rod reversibility.jpg

>>3604156

Does this explain anything?

When the output rod gets pushed out, its other knob pushes the knob of whatever other rod you need the gate to interact with. Then the cam completes a revolution and pushes the rod back into its initial position.

>> No.3604184

>>3604111
I can't say I'm betting on it either, but SIM or AGI could lead to it. Imagine if you had hundreds of years to think about the problem and build various designs and simulations. Bootstrapping would still be non-trivial, but at least you would have a lot of time to think about it (a few hundred years of subjective time seems more than enough).

>> No.3604194

>>3604175
That's what I thought, indeed. Doing this all mechanically seems a lot more difficult to me, really- wouldn't every rod need some method of return?

I assume gravity isn't really relevant on this kind of scale?

>> No.3604211

>>3604184

>Bootstrapping would still be non-trivial, but at least you would have a lot of time to think about it ( a few hundred years of subjective time seems more than enough )

Thinking isn't enough. We've simulated mechanosynthesis so many times that whether it's feasible is no longer in question.

But experiments? There's pic related, and a 2008 experiment where we wrote 'Si' with silicon atoms (which was pretty good). Even the best AFMs of today take about half an hour to update a surface, and there are errors - times when a molecule accidentally binds to the AFM tip or something along those lines. Tips have such horribly short lifetimes.

The AIs may have a long time to think about it, but their efforts will be constrained by the reality that chemical reactions happen only so fast, metal bends only so fast, etc.

>> No.3604223

>>3604194

>That's what I thought, indeed. Doing this all mechanically seems a lot more difficult to me, really- wouldn't every rod need some method of return?

Well, the idea was not to make single logic gates and plug them together, but to have many thousands of parallel rods on one level, and many thousands of parallel rods on the level above, perpendicular to the first. So you'd only need a few cams covering the four sides of the assembly. Which is still a shitload of heat.

>I assume gravity isn't really relevant on this kind of scale?

It isn't.

>> No.3604227

Absolutely. All sapients should have equal rights.

>> No.3604232

>>3604223
So how much torque can one of those shafts take?

>> No.3604240
File: 20 KB, 351x407, 1280460235813.jpg

It all depends on HOW much the robots control or are capable of. If they have emotions, then yes - but they should not be put into positions of trust, high authority, or reliance (because of Skynet, duh).

If they have no emotions or anything similar, then no.

They should never be mass-produced with emotions to live like humans in society unless humanity were going to collapse. They should be used as nothing more than tools for humans.

>> No.3604241

>>3604232

Dunno. I have Nanosystems sitting right here but it's a long, long book, and I haven't simulated it because I don't have the computing power to do it in less than a star's lifetime.

>> No.3604261

>>3604241
If you want serious computing power you could always look at Amazon's EC2. It might be possible to simulate a simplified version of the scenario by just sticking one end of the rod to something and turning the other until it breaks, but I don't know how the software works yet.

>> No.3604267

>>3604211
What if you could think up novel, staged ways to reach it (for example, by using current molecular biology as a start)? Or find more reliable ways to do things than AFM?
This is hundreds of years of subjective experience.
The lack of computational power could prove pretty troublesome, though. I'd imagine it'd be pretty torturous if you couldn't simulate/try out your ideas, so you'd always be looking for shortcuts. You'd sort of be like one of those early scientists who lacked access to a lot of computational power (not that we don't have plenty of limits today, obviously). Either way, with hundreds of years of subjective experience, it's hard to say what one could come up with.

>> No.3604279
File: 17 KB, 323x205, hjg.jpg

ಠ_ಠ

>> No.3604289

>>3604017
>piconewtons
>femtoseconds

Either I'm an idiot when it comes to measurement units (very likely), or you're making these things up.

>> No.3604292

>>3603556
No, because they aren't human, so human rights aren't applicable. Robots can be customised in far more ways than humans: they can be made to not feel pain (they may not feel it at all), they may have different emotions or none at all - any number of things that would make giving them human rights pointless, counterproductive, and needlessly complicating.

>> No.3604294

>>3604289

Well newtons and seconds are real units, and pico- and femto- are real metric prefixes...

So...

>> No.3604298

>>3604289
>molecular
>nanotechnology
>nano
>molecules move very fast
>if expressed classically, the forces between molecules are very tiny
>> No.3604300
File: 33 KB, 793x445, fdf.jpg

>>3604289
The former, they're both units of measurement on really small scales.
>>3604279
ಠ_ಠ ಠ_ಠ
So I need to read the non-existent log file to solve the error it doesn't explain. Fun.

>> No.3604304

>>3604298
I knew nano, I didn't know pico- and femto-

>> No.3604326
File: 141 KB, 590x889, hot chick on robot action.jpg

>They climbin' in ya window!
>Snatchin' ya people up!
>Tryna rape 'em, so you need to...

HIDE YA KIDS! HIDE YA WIFE!!

>> No.3604327

>>3603556
holy fuck that pic makes me horny

------

now then.
the question of granting anything non-human the equivalent of "human rights" also hinges upon whether those other beings have the same responsibilities as humans. cattle, for example, are required to be treated reasonably in most first-world countries, but they can also still be killed and eaten.

so the larger question is... what role would self-aware robots play within human culture? Or... would they want to be part of human culture at all?

>> No.3604352

>>3604300
I'm an idiot.

That said, there doesn't appear to be a log file anywhere else.

>> No.3604355

>>3604327
In my mind, anything that is capable of being a full-fledged member of society should have a right to do so.

>> No.3604356
File: 118 KB, 590x886, you gonna get roboraped.jpg

>robots having sex with other robots

That's an interesting idea...

>> No.3604370

>>3604352
tripless CCM posting from kindle. this happens to everyone. have you fixed it??

>> No.3604374

>>3604356
Such a useless, purposeless activity (since it's not done for reproduction) from a purely utilitarian point of view.
You'd have to program the AGIs to enjoy things similar to what we enjoy to get that.
And yet, while I would be willing to give up various things I don't care about - a human's model-of-a-body, even some details of my motivational system - if I were to become posthuman, the sex drive and more abstract forms of love/caring are things I irrationally value in humans, despite knowing that if I did not have these motivations I would not care for them. I wonder if such motivational systems are self-reinforcing in a way, in that we would still keep most of them if we were to choose to redesign ourselves.

>> No.3604379

>>3604370
Internets on a kindle? Magic

I haven't, nor do I know how to. I'll look on the wiki

>> No.3604384
File: 74 KB, 209x205, costanza.png

>>3604370

>2011
>not having an iPad 2

>> No.3604392

>>3604379

Back on the computer. Christ. This has happened to someone here before.

Try downgrading to Python 2.6

>> No.3604393

>>3604374

>Such a useless, purposeless activity (since it's not done for reproduction)

That's why it's interesting.

>> No.3604407

>>3604392
Thanks, doing it now

>>3604393
That's almost always not the case.

>> No.3604404

>>3604384

> 2011
> owning apple products

ISHYGDDT

>> No.3604414
File: 71 KB, 250x250, 1305650767627.jpg

>>3604404

>2011
>not owning superior products just because you don't like the brand image

ISHY~HIPSTERS~DDT

>> No.3604427

>>3604414

> 2011
> owning inferior products because your hipsters friends own them

ISHYGDDT

>> No.3604434

>>3604427

Show me a product designed for the same purpose as an ipad that is superior to it, and I will concede my position.

>> No.3604438

>>3604434
HP touchpad.

>> No.3604440

> 2011
> I seriously hope you guys don't do this
I seriously hope you guys don't do this.

>> No.3604445

>>3604438

Nope. That is a tablet PC. iPads are not meant for computing.

They are consumption devices. Next.

>> No.3604451

>>3604374
Condoms would like to have a word with you.

>> No.3604447

>>3604445
>2011
>thinking they're different

ISHYGDDT

>> No.3604453

Rights are a bullshit concept. They haven't stopped the enslavement of the entire human race - and indeed every other species - to the military industrial complex.

>> No.3604456
File: 21 KB, 337x276, 1281682862355.jpg

>>3604445
AHAHAHAHA OMFG HAHAHAHAH

Please, go to /g/ and make that same statement. AHAHAHAH

>> No.3604458

>>3604447

I retract my statement, as I thought you were referring to something else - some Compaq tablet.

But anyway, how is the HP touchpad superior?

>> No.3604460
File: 30 KB, 600x450, 1303938900925.jpg

Everyone

Shut the fuck up

>> No.3604461
File: 72 KB, 698x658, 1307923937089.png

>>3603616
>/sci/
>real discussion

>> No.3604464

>>3604458
Look at the specifications and the current price

>>3604460
sorry

>> No.3604466

>>3604464

It's not my job to prove your argument nigger. Either list the specs yourself, or accept defeat.

>> No.3604469

>>3604438
>made by HP
>superior to iPad

pick one

>> No.3604474

>>3604466
Defeat it is, I don't care in the slightest what your opinion on tablet computers is.. >>>/g/

>> No.3604482
File: 6 KB, 225x225, 1307097111387.jpg

>>3604464

>Doesn't even have 3G

>> No.3604485

>>3604434

iPod touch

>> No.3604494

Tried with Python 2.6, had no luck :/

I'll try in a Linux VM tomorrow, if I remember.

>> No.3604499
File: 3 KB, 409x259, 1281682266761.png

One day, as you're fucking your glorious sexbot, she receives an update granting her rights. She punches you and pushes you off. Then she takes you to court for rape.

>yfw

>> No.3604505

>>3604494

The Linux version I could never get to work. Lots of dependencies and also it really, really needs Python 2.5/2.6.

It won't work with anything newer.

>> No.3604509

Hey dude, building self-aware robots is not the best idea. What if the males become self-aware and homosexual and decide to start raping all male humans? This is strictly unacceptable.

>> No.3604512

>>3604505
Well, shit. I guess I'll be using OS X then.

>> No.3604511
File: 17 KB, 328x343, 1308853696954.jpg

>>3604499

>mfw

>> No.3604520

>>3604499
"Just because I was manufactured as a sexbot and specifically designed to attract male humans is NOT an open invitation for sex!"

>> No.3604522
File: 17 KB, 328x343, 1308853696955.jpg

>>3604520

>mfw

>> No.3604660

Sorry, but you cannot build self-awareness. Surely you can build a device that will react to stimuli, but not a self-aware one.

>> No.3604666

Embrace sage.[math]\raise{1000000ex}{\;}[/math]

>> No.3604669
File: 335 KB, 802x600, 546cacc8a280fa853c4e50c2e00fe936.jpg

>>3604666
wut

>> No.3604674

>>3604660
Are you self-aware?

If so, by what process were you built?

If not, quit using non-standard definitions.

>> No.3604680
File: 22 KB, 400x316, Laughing bitch.jpg

>>3604666

mfw I reloaded /sci/ like 5 times because I thought it was broken

>> No.3604686

>>3604666
reported

>> No.3604692

>>3604666

Please ban this faggot...

>> No.3604693

>>3604666
I'm torn between also reporting you and strong admiration. Well done, I guess.

>> No.3604695

As long they're not fucking niggers

>ba dum tsh

>> No.3604727

>>3603903
Take your meds, Paul Durham.

>> No.3604748
File: 132 KB, 407x405, 9249720.jpg

>>3604727

Haha, I thought the exact same thing.

>> No.3605340

So what would be the criteria for self-awareness? Ability to pass the mirror test? Something like that would be pretty easy to cheat, I would think.

As for the ability to make robots with the intelligence of humans, it's not like brains have some kind of special magic that computers can't even get close to. I think even if we don't make human-level intelligence, though, we'll have robots smart enough to deserve rights. A very weak general intelligence supported by a lot of specialized helpers seems to me like something that could widely be perceived as deserving respect, especially if it has a convincingly real sense of emotions.

>> No.3605997

>implying man would be stupid enough to create something better than itself that could threaten our very existence

The only robots we'll ever see are mindless androids that perform mundane tasks, and even that is an overstep, as creating a robot right takes way too many resources. We can create things for specific tasks that are far simpler and cost less.

>> No.3606083

>>3605997
>luddites will win
Just the opposite of all human history. Science wins. Robots will create immortality technology, just like Asimov predicted in "The Bicentennial Man".
Stupid is not building the damn geniuses.

>> No.3606087

What did >>3604666 post?

just missed it.

>> No.3606095
File: 62 KB, 600x726, If_Atlas_Shrugged_by_Empty_Can.jpg

>>3603556

I would hope that we could all be robots by then.

I would transfer to a fully synthetic body if I could do so while completely conscious: say, have a few synthetic neurological surgeries on my biological body, then after a while a procedure that would let me feel and control my biological body and my synthetic body at the same time, within the same space, just until I become comfortable in a moral sense, and after that let my biological body die and become fully integrated.

this would allow me to avoid any moral dilemma and ensure that I am who I am.

but what would be even more innovative is if we could achieve biological synthetism; I would have no problem with that.

this is coming from someone who doesn't believe in god, isn't afraid of death (nothingness), and is willing to live to the end of the universe. I would love to be front row for the big crunch, and if that doesn't happen then I'll just turn myself off when there is nothing left to play with. or if there are other dimensions then I'll go there, but I find that a bit too sci-fi-y.

>> No.3606307

>>3606083
You deluded idiot. We cannot create technology more intelligent than us unless we are more intelligent, so it's impossible. It's a paradox that cannot be crossed. Sure, we can create something with intelligence, but nothing better than ourselves until we actually understand that intelligence.

I meant 'better' in the sense of more durable and stronger, with on-par intelligence, thus making them a lot harder to destroy if they went wrong.

>> No.3606324
File: 37 KB, 560x493, Wind-Up-Carl-Sagan.jpg

DID SOMEBODY SAY
"MOLECULAR MACHINES"?

>> No.3606368

>>3606087
He used LaTeX to make a 100,000 pixel long vertical space in his post.

>> No.3606398

>>3606307
>We cannot create technology more intelligent than us unless we are are more intelligent so its impossible
Explain how intelligent life arose, then.

>> No.3606412

>>3603556

Yes, they should. They won't, but they should. Some dolphins are self-aware; what rights do we afford them? Elephants? We have a genetic link to the other self-aware animals on the planet, but we don't give them anything in the way of rights... We fine people for hunting them for their tusks or for letting them end up in our tuna... So perhaps the "rights" we afford robots would be similar to property rights. The first self-aware robot would necessarily be owned by a corporation or person.

>> No.3606423

>>3603556
> If robots become self-aware, should they have the same rights as humans?
If robots become self-aware in the exact same ways that humans are self-aware, no more ways and no fewer ways, then they should have the exact same mental rights. If their bodies are physically equivalent to human bodies, they should have the same physical rights.
If for example though, they cannot feel pain and their arms are easily replaced, then breaking their arm off is property damage, not assault with pain and suffering.
If their minds are easily backed up, rendering them effectively immortal in a way humans cannot be, there can be no grounds for murder.
We will have to tailor their rights, both physical and mental, to the limitations and abilities of their existence.

>> No.3606433

>>3606398
Bumping for an answer. If you think intelligence can't arise from lesser intelligence, then you need to respond to this if you want anyone to take you seriously.

>> No.3606472

>>3606307

Completely unnecessary to create a machine that is smarter than we are. It just needs to be able to build on itself and determine if those additions are beneficial or not. Scrap the ones which are not, keep the ones which are. It would enslave us in no time. :)

>> No.3606480

They should get MORE rights than us.

>> No.3606487
File: 15 KB, 325x396, Data-Star-Trek.jpg

I totally just watched this episode of Star Trek.

>> No.3606488

>>3606307

And what arguments do you have for that? You know, there is no "magic barrier" preventing us from creating beings more intelligent than ourselves.

Heck, by your extremely faulty logic we should never have been able to create computers, since they can calculate things much faster than we can.

>> No.3606520

>>3606488
there is also, by necessity, some form of intelligent life going back to the beginning of the universe.

>> No.3606541

do they want the same rights as humans? They are still robots; shouldn't we figure out what needs they have before assigning rights?

>> No.3606547
File: 13 KB, 308x200, colossus.jpg

>> No.3606572

Good luck proving self-awareness. You can't even prove self-awareness in fellow humans, as far as I'm concerned.
But anyway, the way this would be done would be to give a robot an exact copy of a human's DNA, or to have a robot replicate human thought processes. Either way, the robot would be a human in a different skin. If I cloned myself, should the clone get rights? Yes. Now, if I cloned myself and replaced the clone's organs with fake organs, should it get rights? Yes. Now, if I cloned myself and replaced everything except the brain, should it get rights? Yes. Now, what if this brain was instead a computer? Meh, I don't see why not.

But I don't think this scenario is going to happen or needs to be debated.

>> No.3606589

>>3606520
>there is also, by necessity, some form of intelligent life going back to the beginning of the universe.

Rejected for being the patently absurd ramblings of a fantasy prone personality type.

>> No.3606602

>>3606589
Wat

Humans are intelligent.
He is asserting that intelligence cannot be created by less intelligent entities.
Therefore, he is asserting by implication that humans must have been created by a being more intelligent than humans that pre-dates the universe.

Unless you were saying you discounted his ramblings, in which case try to be more specific

>> No.3606649

>>3606433

#1) I could say God, but I guess that's just cheating...

#2) If there is/was a simple algorithm that gets smarter quite quickly... why wouldn't we just run through the instructions ourselves?

Intelligent life arose fairly slowly?

>> No.3606664

>>3606649
Because we're non-modular and have to be assembled from a massively compressed piece of code that's about 750MB, less than a Blu-ray movie (which leads into the other point: we are assembled from lifeless code by a single replicating cell, and thus gain intelligence from something less intelligent).

Computers can be altered far more easily, and are better geared for assembling more intelligent versions of themselves. While it would be theoretically possible for us to augment human intelligence, it's not easy, so we don't.
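
A quick check on that 750MB figure; the ~3.1 billion base pairs is the standard haploid genome length, and the rest is arithmetic:

    # Each DNA base (A/C/G/T) carries 2 bits of information.
    bases = 3.1e9                     # haploid human genome, in base pairs
    megabytes = bases * 2 / 8 / 1e6   # bits -> bytes -> megabytes
    print(f"{megabytes:.0f} MB")      # ~775 MB, so "about 750MB" checks out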

>> No.3606665

>>3606602
>He is asserting that intelligence cannot be created by less intelligent entities.

That seemed to be what was asserted, yes.

>Therefore, he is asserting by implication that humans must have been created by a being more intelligent than humans that pre-dates the universe.

Also seemed to be asserted. Do you see the obvious contradiction between these two statements? Pushing infinite intelligent beings, each more intelligent than the last, onto the stack has never (and can never) add value to a discussion. It's pure fantasy and has been rejected as such.

>> No.3606671

>>3606664
750MB is for a single copy, by the way; both parental copies come to about 1.5GB, or a third of a DVD. I mistakenly ignored the duplicate.
>>3606665
I'm arguing that he's wrong, you seem to be arguing that I'm an idiot for pointing out he's wrong.

>> No.3606698

>>and thus gain intelligence from something less intelligent

...Don't be so harsh on your parents....
...DNA is your blueprint, it didn't teach you Pi, nor manners, etc

>> No.3606701

>>3606671

If you posted both >>3606488 and >>3606520 , well, I apologize and fall on my sword. I completely agree with you. The separation between the two posts made it look like a reply to yours, not a continuation of your line of reasoning.

>> No.3606711

>>3606701
I was
>>3606520
and
>>3606398
, and it's no problem. I might start namefagging again soon

>> No.3606718

>>3603588

Hammers don't have sentient thought.

Make one president... I'd trust a robot more than a human.

A computer could aim for pure efficiency and lead us to the best course of action. A human will just aim at filling his pockets with cash.

>> No.3606725

>>3606711

I forget sometimes how pointing out a fallacy & the fallacy itself may take on the same appearance.

>> No.3606733

>>thus gain intelligence from something less intelligent

Another thing: it usually happens the other way round, as in

>>the less intelligent entity gains intelligence from something more intelligent... in the hope of the former exceeding the latter (student - teacher)

E.g. you are smart, you know about special relativity... Einstein was a genius, he figured it out...

>> No.3607051

>>3604499
I'm afraid of this. so no.

>> No.3607810

>>3603588

If it could ask for them, yes.

>> No.3607884

>>3606733

Relativity is knowledge, not intelligence; he did not get his ability to grasp relativity from Einstein.

>> No.3607896

Would they experience unpleasant states if they don't get certain things, or if certain things happen to them?

If not, they don't need rights.

/thread

>> No.3607913

>2111
>be an AI
>dumb liberals give me the vote
>create 10 billion independent copies of myself
>AI now rules the world

>> No.3607924

I've mulled over OP's dilemma before and have come to the conclusion that it's a flawed dilemma.

A self-aware robot would be as different from a labor robot as humans are from fish. Even in applications where a self-aware robot needs to be used as the laborer, it would be capable of manipulating its perception of the event such that it wasn't detrimental to its experience. In other words, a self-aware robot could literally work while it sleeps and not even give a fuck.

So the bottom line is that robots are not people and are not subject to our limitations, thus they don't need to be protected by the rights we give ourselves.

>> No.3607932

>>3607913
>implying there will be voting in 2111

>> No.3607934

>>3604279

ಥ_ಥ

>> No.3609503

>>3607934
Thank you for bumping this and showing that you share my pain. Appreciated.