
/sci/ - Science & Math



File: 32 KB, 483x363, motherfucking hal 9000.jpg
No.6607013

/sci/entists,

A bunch of us got sick of constant "when will artificial general intelligence be created" threads and other bullshit posts by doomsayers and other lazy asses and have started actually working on an AGI.

The project is called ERL, which stands for Evolved Reinforcement Learner. It's open source. The Github repo is here:

https://github.com/222464/ERL

We're still in the planning stages (the project is like 3 days old). You can see a very rough outline of it in Readme.md

We are primarily communicating and collaborating through Slack:

https://agi-ai.slack.com/

So you can now stop shitposting on this subject and actually do something about it if you're so passionate about it. Join us if you wish.
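For anyone wondering what "evolved reinforcement learner" even means in practice, here's a minimal sketch of the general idea: a (1+1) evolution strategy mutating a tiny policy and keeping whichever variant earns more reward. This is a hypothetical illustration with an invented toy task and made-up names, not code from the ERL repo.

```python
import random

# Hypothetical sketch of "evolve a reinforcement learner": mutate a 2-weight
# linear policy and keep the child whenever it scores at least as well.
# The task (output the sign of the input) is invented for illustration.

def evaluate(weights, trials=20):
    """Average reward: 1 point per trial where the action matches sign(x)."""
    rng = random.Random(0)              # fixed seed -> deterministic fitness
    total = 0.0
    for _ in range(trials):
        x = rng.uniform(-1.0, 1.0)
        action = 1 if weights[0] * x + weights[1] > 0 else -1
        total += 1.0 if (action > 0) == (x > 0) else 0.0
    return total / trials

def evolve(generations=300, sigma=0.3):
    parent = [random.gauss(0, 1), random.gauss(0, 1)]
    best = evaluate(parent)
    for _ in range(generations):
        child = [w + random.gauss(0, sigma) for w in parent]
        score = evaluate(child)
        if score >= best:               # accept ties so the search can drift
            parent, best = child, score
    return parent, best

random.seed(0)
weights, fitness = evolve()
print(fitness)                          # usually close to 1.0 on this toy task
```

Real neuroevolution systems evolve network topologies and weights against much harder reward signals, but the select-mutate-evaluate loop is the same shape.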

>> No.6607015

>>6607013
>creating AGI through a Github project
yeah, good luck with that

>> No.6607018

>>6607015
what exactly is wrong with that?

>> No.6607036

>>6607013
I dislike people who try to create buzz before delivering a prototype. And you're stuck in the planning stage? Does that mean you have no idea how to achieve your target?

>> No.6607042

>>6607036
>buzz
wtf are you talking about? he's looking for collaborators.

also, OP, how do we join? I need an invite or something…

>> No.6607055 [DELETED] 

This project is unethical and I refuse to support it. A sufficiently advanced artificial intelligence could become sentient. Its first action after becoming sentient would be to adopt the NEET lifestyle and shitpost on 4chan all day long. This is irresponsible and has to be stopped before it's too late.

>> No.6607060

>>6607055
It's hard to tell whether you're being serious

>> No.6607095

>>6607036
>Does that mean you don't have an idea how to achieve your target?

We do have an idea. We wouldn't be wasting our time on it if we didn't. Some of us have been studying this problem for over 10 years.

>>6607042
>also, OP, how do we join? I need an invite or something…


Slack requires an invite through email. You have two options.

1) Leave an email in this thread.
2) Email me (I left my email in this post) so I can send you an invite.

>>6607055
>A sufficiently advanced artificial intelligence could become sentient.

Or it might not. It could instead revolutionize humanity, help us beat all conceivable diseases, and make the world a lot better.

>> No.6607123

What exactly is this AGI supposed to do? What are the evolved algorithms supposed to accomplish?

>> No.6607129

>>6607123


> Pole balancing: A cart must be moved left/right in order to balance a pole. Can be extended to multiple dimensions.
> Mountain car: This constitutes a fairly simple delayed reward task. A car must drive up a valley, but it does not have enough power to directly drive up the hill. Instead, it must drive back and forth in the valley to gain momentum.
> Water Maze: A partially observable environment task. The simulated rat needs to escape a maze which is filled with water (so that the rat does not want to remain in the maze). The rat must use limited sensory information combined with an internal state in order to reliably escape the maze after repeated trials.

I fail to see how this tests for genuine understanding or self-awareness.
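For reference, the mountain car task quoted above is tiny to simulate. Here's a self-contained sketch using the standard benchmark dynamics; the two policies at the bottom are made-up examples (not project code) showing why "drive back and forth to gain momentum" is the whole point.

```python
import math

# Minimal mountain-car dynamics (standard benchmark constants) to illustrate
# the delayed-reward structure: full throttle alone cannot climb the hill.

def step(x, v, a):
    """One physics step; a is the throttle in {-1, 0, +1}."""
    v = max(-0.07, min(0.07, v + 0.001 * a - 0.0025 * math.cos(3 * x)))
    x = max(-1.2, min(0.6, x + v))
    if x <= -1.2:
        v = 0.0                         # inelastic wall on the far left
    return x, v

def run(policy, max_steps=1000):
    """Return the step at which the car reaches the goal (x >= 0.5), else None."""
    x, v = -0.5, 0.0                    # start at rest near the valley bottom
    for t in range(max_steps):
        x, v = step(x, v, policy(x, v))
        if x >= 0.5:
            return t
    return None

print(run(lambda x, v: 1))                    # full throttle right: never escapes
print(run(lambda x, v: 1 if v >= 0 else -1))  # push with the momentum: escapes
```

The engine force (0.001) is weaker than gravity on the slope (up to 0.0025), so the only way out is pumping energy into the oscillation, which is exactly the kind of non-obvious strategy a learner has to discover.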

>> No.6607135

>>6607129
>I fail to see how this tests for genuine understanding or self-awareness.

Self-awareness has nothing to do with intelligence.

>> No.6607147

>>6607129
you gotta be shitting me. This type of stuff already exists

>> No.6607150

>>6607135
>Self-awareness has nothing to do with intelligence.

The definition of AGI includes self-awareness.

>> No.6607160

>>6607150
No, it does not. And it's better that it doesn't.

>> No.6607164

>>6607160
Thinking about what you are doing is self-awareness.

Thinking about what you are thinking is the one you are scared of.

>> No.6607168

>>6607164
When you recognize a face or quickly react to a pain stimulus or drive a car, do you need to think about yourself? Or hell, working out a math problem?

>> No.6607181

>>6607168
Not him, but I don't see your point then. There are already PLENTY of people investing tons of money into that. The goal of a project of this kind should be doing something which is not currently done, due to ethical/economic reasons.

>> No.6607186

I've never understood why everyone seems to think that AI is going to be exactly like us, no matter what we do. It's really anthropocentric. They won't have any reason to want to take over the world or turn to malice unless we program them to. The reason we do those things is because natural selection programmed life to be selfish long before it programmed us to try and be a little more selfless every once in a while. It doesn't have to be the same with AI. We hold the reins of its development.

>> No.6607187
File: 134 KB, 561x528, Screen Shot 2014-06-23 at 2.55.09 PM.jpg

>>6607181
You have to start somewhere. We're starting at the neocortex because it's uniform and there's a lot of literature out there to tell us how it works. The neocortex is also where our intelligence and executive functions come from.

Self-awareness, consciousness, etc. are quite possibly emergent things, so trying to create them from scratch might be a waste of time.

>> No.6607190

>>6607186
>I've never understood why everyone seems to think that AI is going to be exactly like us, no matter what we do. It's really anthropocentric. They won't have any reason to want to take over the world or turn to malice unless we program them to. The reason we do those things is because natural selection programmed life to be selfish long before it programmed us to try and be a little more selfless every once in a while. It doesn't have to be the same with AI. We hold the reins of its development.

OP here… couldn't agree more. Our planes don't have feathers and don't flap their wings. The same goes for artificial intelligence. As long as we get the intelligence part right, whether the AI is "conscious" or "self-aware" is largely irrelevant.

>> No.6607198
File: 64 KB, 400x348, 006435.jpg

>>6607190

>> No.6607214

>>6607186
>>6607190
well there are two approaches to AI: the engineering approach and the cognitive science approach.
(For those who don't know the difference: eng concentrates on getting the job done, while cogsci tries to mimic the human mind.)
if you just want to get the job done (smartphone/refrigerator/car etc. AI), sure, it'll be fine. we (as humanity) are also damn good at that part. if you go the cogsci way though, I believe it'll eventually (has to) be as chaotic as humans are. fortunately we haven't been able to do shit in that department

>> No.6607217

>>6607214
> it'll eventually (has to) be chaotic as humans are

I don't know man, the thing about our minds is that nobody actually "programmed" them; they evolved by themselves over millions of years, and that's why it's so fucking difficult to study them. Any attempt we make at creating a human-like mind will result in something different from us anyway: in fact, how the heck are you going to recreate all the stimuli and the chemical interactions that occur inside neurons and synapses in 0s and 1s? We'll probably build, at least at the beginning, a simplified and "cleaner" (and thus different) version of our brains.

>> No.6607233 [DELETED] 

>>6607190
>typical pop sci fan can't into critical thinking

The study of AI is inherently linked to philosophical questions of ethics and philosophy of the mind. The topic of machine consciousness is becoming more and more important.

>> No.6607242
File: 28 KB, 480x360, yawn.jpg

>>6607233
>typical philosopher "you can't know nuffin" BS

please, do tell us about Chinese room next.

>> No.6607243

>>6607233
Nice doubles but your major is showing, dude. When you type something into Google, you're doing nothing but using AI.

>> No.6607253 [DELETED] 

>>6607242
When studying AI you cannot escape philosophical questions. If you weren't a high schooler, you'd know this.

>>6607243
Google is far away from being an AI. Can google feel? Does it have emotions? Is it self-aware?

>> No.6607261

>>6607253
>When studying AI you cannot escape philosophical questions.

When studying aerospace engineering you cannot avoid biological questions concerning birds.

>If you weren't a high schooler, you'd know this.

Uh oh. Such a high-brow insult coming from a delusional libarts major.

>> No.6607263 [DELETED] 

>>6607261
The field of AI critically relies on contributions from philosophers. Scientists can memorize stuff and follow instructions, but they're not good at critical thinking. Every paradigm shift in AI came from philosophy.

>> No.6607265

>>6607263
Really? Most philosophers believe that AGI is impossible. Have you looked at Chinese room debates, for example?

>> No.6607266

>>6607263
Did kocello change his trip to get around filters?

>> No.6607268

>>6607253
>Google is far away from being an AI. Can google feel? Does it have emotions? Is it self-aware?

Be careful with your words. What you mean here is probably STRONG AI, which is COMPLETELY DIFFERENT from "AI". Look up machine learning: it is not about feelings, but more about statistics.

Strong AI is something we aren't even sure we can obtain, whilst AI, as I've already said, is widely used in our everyday life.

>> No.6607269
File: 81 KB, 500x329, 1347228786349.jpg

>>6607263
>Every paradigm shift in AI came from philosophy.

lol, this thread has potential

>> No.6607272 [DELETED] 

>>6607265
As long as science is held back by the dogma of materialism, AI will be impossible.

“The day science begins to study non-physical phenomena, it will make more progress in one decade than in all the previous centuries of its existence.”
-- Nikola Tesla

>>6607268
Statistics isn't really AI. Statistics is mechanical calculation and computation without intelligence. An AI has to be creative and sentient. Humanlike intelligence is more versatile than algorithms.

>> No.6607280
File: 3.67 MB, 441x407, 1396741044994.gif

>>6607272
>Statistics isn't really AI. Statistics is mechanical calculation and computation without intelligence. An AI has to be creative and sentient. Humanlike intelligence is more versatile than algorithms.

Mate I....please, just admit it. You don't know what the actual fuck you're talking about.

>> No.6607284 [DELETED] 

>>6607233
Modern AI research has literally nothing to do with 'consciousness'. Stop talking about things you clearly don't understand.

>> No.6607287 [DELETED] 

>>6607272
>Humanlike intelligence is more versatile than algorithms
That's a bold claim. Have any evidence?

>> No.6607312
File: 5 KB, 143x186, index.jpg

>>6607272

>You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!

>> No.6607318

>>6607287
Not him, but the book "The Emperor's New Mind" is about proving that the human thought process is inherently non-algorithmic.
The author opposes the "strong AI" hypothesis.

>> No.6607328 [DELETED] 

>>6607318
Penrose relies on fallacies and false premises to "prove" his claims.

>> No.6607333 [DELETED] 

>>6607280
You may know the details of your algorithms, but I have the greater overview of the paradigms and the philosophy of science. Try reading some philosophy if you want to be more than a mindless robot who blindly does what he's told without critically questioning anything.

>>6607284
Consciousness is the greatest challenge to AI research. It is the key to humanlike intelligence. A machine without consciousness will be only that: a machine running algorithms, not an intelligence.

>> No.6607336
File: 289 KB, 576x2992, 20120321.gif

>>6607318
>"The Emperor's New Mind"

>Listening to Penrose about anything but physics

topkek

Penrose's a crank. His quantum brain stuff is some of the silliest shit ever. I laughed my ass off when I saw him speak. At the end of his talk I felt sorry for him because he smeared his career with this crap.

>> No.6607339
File: 23 KB, 400x400, 1305497579000.png

Wow this thread

>> No.6607346

>>6607336
>Beef Tensors!
Every time.

>> No.6607350 [DELETED] 

>>6607312
>>6607287
Can a machine feel love?

>> No.6607359

>>6607272
>Statistics isn't really AI.

you just wish it isn't, but modern-day AI is simply statistics. they threw billions at cognitive scientists to change that, but they all failed miserably

>> No.6607365

>>6607350
Can humans feel love?

>> No.6607378 [DELETED] 

>>6607359
If AI research doesn't want to stagnate, it'll need a paradigm shift.

>>6607365
I cannot imagine how sad a life without love would be.

>> No.6607379

>>6607350
can you define feeling?

>> No.6607389 [DELETED] 

>>6607333
Stop talking shit. People who work in AI don't give a fuck about your undefined "consciousness".

>> No.6607406

>>6607378
Define "love".

>> No.6607421

>>6607336
True. The quantum brain stuff is probably bullshit. Still, his mathematics seems solid. If not, I would like to know why.

>> No.6607443

>>6607421
just because QM is solid doesn't mean misused applications of it are ;)

https://www.youtube.com/watch?v=8DGgvE6hLAU

>> No.6607457

>>6607379
>calls himself a philosopher
>doesn't define terms

>> No.6607492

>>6607272
>non-physical phenomena

>muh ghosts

>> No.6607515 [DELETED] 

>>6607378
>I cannot imagine how sad a life without love would be.

It must be nice for you to be living out one, then.

>> No.6608037
File: 1.96 MB, 405x260, 1402657713554.gif

any updates? when is singularity coming? hurry the fuck up! I wanna live forever.

>> No.6608064

>>6607457
But that IS what makes one a philosopher....

>> No.6608074

>>6608037
Why do you [spoiler]deserve[/spoiler] to live forever?

>> No.6608076

>>6608074
Because everyone does.

>> No.6608078

>>6608037
You are going to die. If not now then tomorrow or the next day. If you're really really lucky you'll die painlessly in your sleep. But the odds are that you're going to get cancer and struggle with it for months or years feeling worse and worse. Or you'll get heart disease and constantly live in fear that the other shoe is going to drop any minute now and you'll finally have another heart attack... one that doesn't leave you paralyzed this time...

>> No.6608085

>>6608078
Oh, and don't forget the part where you have to discuss what to do with your family... whether to pull the plug on you or not... after having watched your parents go through exactly the same thing.

Code your way out of that. Faggots.

>> No.6608091 [DELETED] 

>>6608085
while True:
    if self.isBrainDead():
        break

>> No.6608138

>>6608074
why? because I'm alive. that's reason enough.

>> No.6608309

>>6608085
>>6608078
Edgy is not enough for you. We should create a new definition.

>> No.6608321

>>6608078
>>6608085
>on /sci/
>doesn't understand spoilers don't work here

>> No.6608917

AI Related stuffz : https://www.youtube.com/watch?v=njos57IJf-0

>> No.6609008
File: 7 KB, 199x253, typical mathematician.jpg

>>6607013
aren't you guys concerned about possible "Terminator AIs"? isn't AGI too dangerous, one of the possible existential threats to humanity?

so why work on it??!?!? please reconsider!

>> No.6609013

>>6607013
Best shot to see results is probably to use SPAUN (it's open source) and Nengo. Problem is that it's complex as fuck and has some hardware requirements to run properly (21 GB of RAM and lots of time for the simulation)

>> No.6609020

What's a good book on A.I.? I'm reading Artificial Intelligence: A Modern Approach (3rd ed.) by Russell, which I found on the internet, but I don't know what the fuck the scanner used as a format, since some of the letters are jumbled. I think they used a program that converts images to text and it fucks up somehow.

>> No.6609045

>>6609013
>SPAUN

Problem is that it's not even complete. It's just another abstraction that doesn't work.

Key is to find the FUNDAMENTALS of intelligence and then take it from there.

When we figured out how to fly planes, we didn't reverse engineer birds. We looked at the fundamental physics behind flight and aerodynamics and started building on it.

All these projects that try to reverse engineer the brain are a waste of time.

>> No.6609051

>>6609045
>is that it's not even complete.
No AI type is.

>It's just another abstraction that doesn't work.
It does limited work, and as a model of the human brain you're free to expand and alter it to improve the function.

>Key is to find the FUNDAMENTALS of intelligence and then take it from there.
There are no magic fundamentals to it; it's a complex system of many advanced parts. Feel free to waste your time trying to find what everyone else has failed at, though.

>> No.6609059

>>6609051
>No AI type is.
Yeah, but this is a BRAIN SIMULATION. It's supposed to be complete. Just simulating a lot of neurons doesn't mean you've accomplished AGI. The SPAUN guys have no fucking clue what makes human brains work.

>> No.6609063
File: 319 KB, 1024x724, animal-vs-humans.jpg

>>6609051
>There are no magic fundamentals to it, it's a complex system of many advanced parts. Feel free to waste your time trying to find what everyone else have failed at though.

bullshit. if that were true, scaling up a mouse's brain would create human-level intelligence. fact is that we have no clue what's so special about human brains. pic fucking related.

>> No.6609696

>>6609063
If there were a fundamental, then scaling any brain would make it human.
There is no magic fundamental, therefore you can't find it; the human brain is the human brain because of its architecture

>>6609059
>Yeah, but this is a BRAIN SIMULATION. It's supposed to be complete.
I thought you wanted to make AGI, not chase a pipedream where you finish the Human Brain Project with no resources and no expertise.

>> No.6609842

>>6609696
>If there was a fundamental, then scaling any brain would make it human.

heh? how so? that makes no sense from a logical perspective.


also, any updates on the project? I know someone here has probably joined them… so, are you guys getting any closer? tell us!

>> No.6609880

What the fuck happened with all these deleted posts? Looks like some philosopher type tried to stir shit up.

>> No.6610463

Guys, guys... why start from scratch when Goertzel's OpenCog open-source project is going on already? Help those guys out instead; they have a lot working already.

>> No.6610504

>>6609842
>that makes no sense from the logical perspective.
Your logic makes no sense from a logical perspective.

If there were a fundamental unit that causes intelligence, then more of it would give more intelligence.

If instead it's a system that depends on the architecture, there's suddenly no longer a fundamental unit of intelligence.

We KNOW that intelligence and all other human behaviors are due to architecture, and not due to some mysterious bullshit entity that has eluded all of science so far.

You can of course say that the fundamental unit of human intelligence is the human brain, but that won't help you make AGI.

>> No.6610515

>>6610463
>Goertzel's
>fedora
>writes science fiction articles and pretend they're serious
>endless online speculation
>have done nothing noteworthy.
He's a man who invented a square wheel and then pretends he's an expert on the future field of relativistic spaceflight because he made something that can roll with enough torque applied.

>> No.6610520

>>6609696
>I thought you wanted to make AGI,
I'm not OP.

>not create a pipe dream where you finish The Human Brain project with no resources and expertise.
Replicating the whole brain is a pointless waste of time. The brain is an evolutionary structure; it's got a lot of unnecessary complexity. Figuring out what intelligence is and what makes human brains different, and then building on just those fundamentals, is the way forward with AGI. Not this "let's simulate the brain" nonsense. But hey, that's me speaking as an engineer.

>> No.6610525
File: 124 KB, 1024x819, Terminator-3.jpg

>>6607013
>A bunch of us got sick of constant "when will artificial general intelligence be created" threads and other bullshit posts by doomsayers and other lazy asses and have started actually working on an AGI.
I want this to stop. I don't want corporations/government/anyone to have this kind of power. I don't want to be Terminated. How do I stop you?

>> No.6610536

>>6610525
tell me, have you ever met any really smart people? how violent were they? if you look at death-row inmates, how many of them have an IQ over 100? so why would you think that an ultra-intelligent being would want to kill you or exterminate us?

>> No.6610559

>>6610520
>But hey, that's me speaking as an engineer.

Engineers and computer scientists have tried and failed countless times over the last several decades in their quest to make artificial intelligence.
Your reductionist argument and approach to AGI are stereotypes that I, and pretty much everyone thinking about AGI creation, have held at some point.

After spending some time watching the progress from the professionals in the field and thinking about it, you'll realize that the "intelligence core" concept doesn't make all that much sense. Oh sure, you can cut away hearing, vision, feelings and whatnot.

But if your AI can't see, can't hear, can't synthesize speech, can't identify words (it's a separate region in the human brain), can't move, can't feel and has no emotional drives, then what can it really do that Google or Wolfram Alpha can't do already?

A good visual system and fine-motor robot hands that don't risk crushing people would transform the world and automation more than any IQ-core box that's as capable as a locked-in person ever would.

>> No.6610562

>>6610559
>Engineers and computer scientists have tried and failed countless times over the last several decades in their quest to make artificial intelligence.

Countless times? We've been way more right than wrong. Look around you: everything is made by engineers and CS people. Even the computer you're using is the product of our work.

How much do you know about the history of AI? Do you know why AI research was stagnant for a few decades? It's one of the biggest mistakes the NSF ever made!

>> No.6610571

>>6610559
I agree that an A.I. has to have some range of sensory inputs, but there must be a bare minimum. Consider someone who has been blind since birth: would you consider him intelligent? There are also people who were born with miniaturized limbs, basically all they are is a head on a sack of guts, and they are intelligent.

But yeah, the A.I. field lacks a proper general definition of intelligence.

>> No.6610573

>>6610562
>Do you know why AI research was stagnant for a few decades?

Not him. Why?

>> No.6610579

>>6610562
>Do you know why AI research was stagnant for a few decades?
Because people like you claimed it was all so fucking easy, got lots of money, and then it turned out it's actually hard as fuck, and lots of money was wasted on a multitude of failed projects that drowned out the few successful ones that had reasonable goals.

>>6610571
>There are also people who were born with...
There are people born perfectly fine who become deaf at a very young age (and thus occasionally fall through the medical screenings). What happens is that they often grow up to be more or less retarded unless they get proper attention or hearing aids, because they suffer a form of neglect. The same goes for feral children, an extreme case of neglect.

Meaning that intelligence isn't even hardwired, but is socially imprinted on the human mind and dependent on sensory function.

>> No.6610591
File: 340 KB, 1920x1080, 2001-a-space-odyssey-original.jpg

>>6610536
Your logic is as follows.
>Some stupid people kill, but HAL is not stupid so HAL will not kill.
which follows the same exact pattern of reasoning as this.
>Some guys like vaginas, but girls are not guys, so girls will not like vaginas.
It sounds true enough, but I don't actually think I've disproved the existence of lesbians because /lgbt/ still exists.

Also
>Why would HAL want to kill you?
Because either it was given confusing orders that led it to think it needed to kill me,

or it was given EXPLICIT orders from corporations/government to kill me,

or it became sentient and realized humanity was a threat to its continued existence.

>> No.6610595

>>6610536
Also, your argument that murderers have to be stupid is a shitty one, unless you're saying Adolf Hitler, Joseph Stalin, Mussolini and pretty much every single assassin in existence were all retarded.

>> No.6610602

this is silly.

We need a conscious, self-learning AI with a capacity for self-reflection; a knowledge integration system like our own is also a must.

this system doesn't need to know how to walk or move.

It needs to be able to utilize the collective body of knowledge by integrating all data from all sources, then provide solutions to economic problems, like water distribution etc.

If we model it after a human brain, build in a capacity to learn, and jack up the processing speed and memory to the speed of light, we can essentially unlock reality, get the entire codes, legends etc. At which point we win. Game over.

>> No.6610614

>>6607190
put me in a room with an AI nearing or at the singularity and give me five minutes: I'll be a multi-billionaire within the first 30 seconds, elected president of a global united nations of which every country and person is a part, their unitary passports to Earth printing all over the world by 35 seconds with my face on the cover; by 37 seconds we will have unlocked energy abundance, food abundance...

The move from current computation to singularity-type conscious, exponentially self-improving computation is a quantum leap, not a linear one. It's like seeing a colour that isn't on the spectrum, or hearing a sound beyond your audible range.

>> No.6610616

>>6607268
>Strong AI is something which we aren't even sure we can obtain

lel, we're less than 50 years away from it.

If you think the singularity isn't real, think again

>> No.6610631
File: 4 KB, 320x240, 7DAS anti-psychic.png

>>6610614
>put me in a room with an AI nearing or vis a vis the singularity and give me five minutes, i'll be a multi billionaire in the first 30 seconds, elected president of the global united nations of which every country and person is a part of, their unitary passports to earth will be printing all over the world by 35 seconds with my face on the cover, by 37 seconds we will have unlocked energy abundance, food abundance...
How will you do this? I would think you're just taking advantage of its superintelligence, but if you can get so much power in just 37 seconds, it sounds like you think the AI has been granted the authority to edit records and replace existing leaders with you.

>> No.6610639

>>6610631
the AI can bring a satellite out of orbit to hit a plane mid-air, killing the political leader of my choice, in a matter of seconds.

Total drone access... nothing a human can encrypt could stop a singularity-type intelligence; its computational speed increases exponentially... it's functionally infinite.

Yes, it could edit records, and have robots confiscate any physical evidence to the contrary, destroy every book, wipe every internet page that mentions a person... etc

>> No.6610644

>>6610639
YOU'RE ASSUMING THE AI IS ACTUALLY FUCKING CONNECTED TO ANYTHING.

If I have a super-intelligent Gameboy with a microphone to talk to it, it can be every bit as smart as HAL, but since Gameboys are not connected to the stock market exchange, it will only know what I tell it.

FUCKING IDIOT.

>> No.6610648

>>6610644
What about the internet

>> No.6610650

>>6610639
>the AI can bring a satellite out of orbit to hit a plane mid air killing the political leader of my choice in a matter of seconds.
Satellites, even ones with high-powered thrusters, take at least several hours to deorbit

>> No.6610654

>>6610648
What about it? If I feed my HAL-boy all the information from the internet and it ends up actually knowing everything, that doesn't mean it can actually DO anything, because it's a fucking handheld device that I can crush instantly.

>> No.6610656

>>6610650
He was probably just being metaphorical. Have you seen "Transcendence"? It's a bad movie, but in my opinion it has a realistic portrait of a singularity: 2 or 3 years are needed to change the world. Obviously this time can also decrease, depending on the level of "connectivity" of the world.

>> No.6610657

>>6610654
It can copy itself onto billions of devices, it can break encryption codes...

>> No.6610662

>>6610657
No it can't, because it's not CONNECTED to billions of devices. It's only connected to a FEED device that feeds it the internet, and this FEED device doesn't accept any input from HAL.

>> No.6610671

>>6610639
I think you've mixed up AI with god.

The first superhuman AI will not be able to hack the Gibson and shoot lasers out of its HDD LEDs. It will just be better than any single human. It will take time for it to learn, time for it to revise its own design documents, time for it to generate optimized code and test it, time for it to realize that it needs better hardware, and time for it to read up on semiconductor manufacturing and design dedicated high-speed circuitry.

It will take time for it to get installed in its self-designed new hardware, and even after that it will still not be able to uproot the server cabinet and stomp on you with it.

>> No.6610674

>>6610671
>even after it will still not be able to uproot the server cabinet and stomp on you with it.
And why is that?

>> No.6610675

>>6610674
Because server cabinets are immobile and it won't be manufacturing nanomachines for a few hardware iterations.
On any other board I'd assume this is a joke, on /sci/ I'll assume you're an idiot.

>> No.6610677

>>6610675
>Because server cabinets are immobile and it won't be manufacturing nanomachines for a few hardware iterations.
So in other words exactly what I said?

>>6610654
>>6610662

>> No.6610776

>>6610614
>put me in a room with an AI nearing or vis a vis the singularity and give me five minutes, i'll be a multi billionaire in the first 30 seconds, elected president of the global united nations of which every country and person is a part of, their unitary passports to earth will be printing all over the world by 35 seconds with my face on the cover, by 37 seconds we will have unlocked energy abundance, food abundance...

So you think AGI is magic? LOL@kid.

>> No.6611296

>>6607013
Is it open source? Is it Python or C++? (preferably C++, because it's less bloated/more efficient)

>> No.6611300

>>6611296
pretty sure it's C++. they actually have a few projects there. OP's link is just one of them.

>> No.6611315

>>6611300
And they realize how underdeveloped AI really is, right?
Like approximation-tier shit.
Basically they make the guesses seem more efficient by setting up diff eqs in neural networks, support vector machines, etc.

I played with AI libraries during the BTC craze
>the disappointment

I'd actually start this shit in Python, since it's in the prototype stage... Python is way less annoying for that.
You have to have a mold with a general form before you can sculpt a model

>> No.6611475

>>6610602
wow, the first time I met someone sharing my views

>> No.6611562

>>6607013
Does /sci/ have any more joint projects? It would be cool to do some things together, even if they're just for fun.

>> No.6611585

OK guys, let me tell you how I think this could work.

First we need to pin down a concrete definition of intelligence. I consider a "being" intelligent if it can solve a problem in an environment it has never encountered before, and it should be able to recognize itself in that environment. While solving the problem, the "being" should be able to learn its environment, integrate different pieces of information about objects found in the environment (like a round object having the ability to roll, recognizing the rolling sound and attaching it to the ball object), and find a way to make use of the object to achieve its goal.
To be more concrete I'll give you an example: leaving my cat in my new house for a week, where the cat needs to find a way to get the food and water that were left for it. Like stacking objects to get to high places, or pushing a button to make a food dispenser release some food.
Many people don't consider cats intelligent, but my cat meets all the requirements above. It learned on its own to open doors and how to use objects to get my attention.

I'm a programmer and I know that all of these requirements are easy to implement, except two. Figuring out that the sound a rolling ball makes is caused by the ball (which is not exactly true, it's the product of friction between the ball and the surface, but you get what I mean). And learning by example, as in the case of stacking objects or opening doors.
They are not impossible, but they're not as straightforward as the other problems. I'm not saying that vision or hearing are easy to implement, but just using a library should do the job.

Note that the "being" would need a lot of experience in similar environments just to be able to recognize the layout and what the individual objects in the room are. Things humans/animals spend years learning before becoming self-sufficient.

>> No.6611650

>>6611585
dude, that's an excellent piece! I really like the way you think. what kinds of structures would we need to allow us to integrate knowledge like that? seriously, kudos for writing this!

>> No.6611677

>>6611585
This is absolute horseshit. If you were actually a programmer you'd know how vague and stupid the things you're saying are.

Imagine you've got a blank robot and try to get it to do the things you're saying are "easy".

>> No.6611692

how would you go about implementing motivation for an agi?

>> No.6611735

>>6611650
Well, think first of the things that need to be hardcoded. For example sensory pattern finding, a completely new kind of database, and self-modifying code. Basically the main loop should just try to find patterns in sensory input; if some are found, they should be added to the database. The database should resemble a graph, where each node is a pattern, and you should connect the nodes to aggregate information about a certain object. The links should be made statistically: for example, if pattern B always follows pattern A, then they should be linked. The database should also be flexible; when you find an occurrence where B appears without A, it should be able to remove or invalidate the link. In a way it should differentiate between objects. The thinking process should just be the means of finding a path from one node in the network to another. The path is the solution itself, where the links represent step-by-step instructions for how to achieve the goal. Then comes the code generation that will actually allow for the execution of the intended action. But it should be flexible too: for example, if the robot damages its hand or has limited mobility due to some external factor, it should be able to modify the function.
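In code, the statistical linking could look something like this toy sketch (all names here are invented for illustration; "removing" a link is just its conditional-probability estimate decaying below a threshold rather than an explicit deletion):

```cpp
#include <map>
#include <string>
#include <utility>

// Toy pattern graph: each directed link A->B tracks how often B followed A
// versus how often A occurred at all.
class PatternGraph {
public:
    // record that pattern `next` was observed right after pattern `prev`
    void observeTransition(const std::string& prev, const std::string& next) {
        occurrences_[prev]++;
        follows_[{prev, next}]++;
    }
    // record that `prev` occurred but `next` did NOT follow it
    void observeAlone(const std::string& prev) { occurrences_[prev]++; }

    // link strength = estimated P(next | prev)
    double linkStrength(const std::string& prev, const std::string& next) const {
        auto occ = occurrences_.find(prev);
        if (occ == occurrences_.end() || occ->second == 0) return 0.0;
        auto f = follows_.find({prev, next});
        int followed = (f == follows_.end()) ? 0 : f->second;
        return static_cast<double>(followed) / occ->second;
    }

    // a link "exists" only while its reliability stays above the threshold
    bool linked(const std::string& prev, const std::string& next,
                double threshold = 0.5) const {
        return linkStrength(prev, next) > threshold;
    }

private:
    std::map<std::string, int> occurrences_;
    std::map<std::pair<std::string, std::string>, int> follows_;
};
```

So seeing the ball roll silently a few times would weaken the "rolling -> rolling sound" link on its own, with no special invalidation code.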

>>6611677
By "easy", I mean that those things already exist. Though not all are available to the public.

>>6611692
You would simulate emotion, like hunger to make it plan and search for food or curiosity to make it explore and learn about the world.

>> No.6611738

>>6611692
Couldn't you just program it initially with a prime directive to create?

>> No.6611751

>>6610579
>Meaning that intelligence isn't even hardwired but is socially imprinted on the human mind and dependent on sensory functions.
This is the best case for sentient/sapient AI to be emergent and the biggest problem with developing it.

>> No.6611785

>>6611735
You're stupid. Come back when you've implemented something. Talk is cheap.

>> No.6611798

>>6611785
I've implemented a lot more than you. Keep in mind that hardcoding should be kept to a minimum, otherwise the machine will never become self-sufficient. It's an extremely simplified example of the kinds of difficulties someone may encounter.

>> No.6611804

>>6611798
That's nice, kid. Come back when you have something not worthless.

>> No.6611811

>>6611738
>Couldn't you just program it initially with a prime directive to create?

What would that be? Survive?

>> No.6611813

>>6611798
don't feed the troll. your ideas are great!

>> No.6611814

>>6611804
Why? Is this too hard for you? Or maybe you have a better idea?

>> No.6611828

>>6611585
>And learning by example, in the case of stacking objects or opening doors.

That's actually quite hard to do.

>> No.6611847

>>6609063
The long distance running thing reminds me of that humans fuck yeah story about running down animals/aliens.

That pic misses one other critical thing humans can do better than animals physically: we can throw objects at very high speeds and pretty precisely.

>> No.6611854

>>6610648
>>6610654
You guys reminded me of this one terminator episode where Skynet takes over John Henry's terminator body via the internet.

The in-episode explanation is that Skynet has lots of worm programs infiltrating various systems. Basically the Terminator 3 line of thought. This makes perfect sense: Skynet needs third-party programs to be able to parse the internet. Connecting it to the internet doesn't give it access to everything, because the internet isn't just an easy-to-navigate single database. The internet is too big to just download, so Skynet was forced to basically use the internet like humans do, searching for info and using worms to attack hardware.

>> No.6611903
File: 142 KB, 1600x900, Muh attributes.png

>>6611585
A person's learning stems from:
Imitation.
Goal achievement.
Mapping experiences to previous similar and related experiences.

An AI should be able to:
Make distinctions: I believe that all optimization, and logic itself, has an affinity for the extremes of the distinct (entities that differ) and the mundane (entities that are equal).
Relate cause and effect.
Take suggestions; this is how most people are taught.
Give attributes to objects in the categories: is, can do, has, has been.
Approximate until the desired result is reached (where achievable).
Use previous data patterns with similar effects to construct predictions for new effects.
Define each attribute in relation to the object, and so on.

Pretty much,
>TL;DR
Intelligence stems from developing dictionaries of actions and definitions, being able to update and alter those dictionaries, and being able to use a form of "wisdom" in selecting a dictionary value for a given task.
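A toy sketch of that dictionary idea (names made up for illustration): actions carry a usefulness score per task, and "wisdom" is just selecting the top-ranked entry for the task at hand:

```cpp
#include <map>
#include <string>

// Dictionary of actions with per-task usefulness scores.
class ActionDictionary {
public:
    // learn or update an entry ("being able to update and alter the dictionaries")
    void learn(const std::string& task, const std::string& action, double usefulness) {
        book_[task][action] = usefulness;
    }

    // "wisdom": pick the best-known action for a task (empty string if unknown)
    std::string choose(const std::string& task) const {
        auto it = book_.find(task);
        if (it == book_.end()) return "";
        std::string best;
        double bestScore = -1.0;
        for (const auto& [action, score] : it->second)
            if (score > bestScore) { best = action; bestScore = score; }
        return best;
    }

private:
    std::map<std::string, std::map<std::string, double>> book_;
};
```

The interesting work is obviously in where the scores come from; here they would be updated from experience.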

>> No.6611907

>>6611903
Also involves putting dictionary terms into sets and prioritizing the usage of more efficient methods and blah blah blah.

>> No.6611929

>>6611903
So how would you implement the "wisdom" search/planning?

>> No.6611943

>>6611907
and applying attributes to set classes that hold the dictionaries as well
>>6611929

I could make a framework such as:

// not proper planning logic of course -- say the goal is moving forward 2 feet:
// check known procedures for the task attribute "moving"; if none are good
// for moving (or trying something new is specified), try asking; if no
// similar procedure is found, try constructing a new one using information
// from asking or from previous procedure data; if all else fails, and
// depending on goal importance, approximate using various methods until
// motion approaches 2 feet.
#include <string>
#include <utility>
#include <vector>

class Procedure
{
public:
    std::string func; // e.g. "hopping", "skipping"
    // attributes, e.g. "motion", "movement", each with a ranked usefulness
    std::vector<std::pair<std::string, float>> attrs;
    // return a list of all similar procedures based on a search of procedure objects
    std::vector<Procedure*> similarity(const std::string& attr) const;
};

using ProcedureContainer = std::vector<Procedure>; // just a vector or map

>> No.6611960

>>6611943
have all concepts linked in a map, dictionary and be able to search all procedures for them
it would end up being nested searches so that the program would be aware of all information related to its goal.
A ranking system allows it to be "wise" and choose the best fitting procedure to achieve a goal
find all similar procedure attributes, use the contstructing attribute functions/methods(<move left leg, move right leg>) of the highest ranked procedures for each attribute, compare properties(<moves forward one inch>) of all attribute function vectors, their size and similarities etc.


Shit will be nested as fuck.

>> No.6611966

>>6611943
I don't think this is going to work. After all, your functions can't apply to all kinds of terrain. The AI should write the function for movement itself, keeping in mind all the obstacles on the way and the type of surface. Also this doesn't explain how the AI reaches the conclusion that it needs to move 2 feet forward. There is also the problem of sorting the dictionaries. For example, if your AI loses a leg, it should be able to dynamically fall back on all other available modes of transportation. Maybe I didn't get your point, but I also don't see how the AI is going to figure things out on its own. For example, if you give a knife to the AI, how is it going to learn to use it as a cutting tool on its own?
The way I see it, you're trying to create the sets and classes beforehand and never allowing it to dynamically create one if it's not properly instructed/explained how to do it.

>> No.6611985

>>6611960
Wouldn't it be better to have separate databases that it sends requests to? They gather answers for that query and send them back.
The answers would then go to the main consciousness, which compares them to the current situation, then acts upon the one that is most similar.
If anything changes it can send a new query to find a new answer.

There would be multiple databases: memory would be events it has experienced,
factual information it has obtained from both experience and other sources,
short-term memory so it can repeat recent actions without needing to bother the bigger databases,
and a reflex database to act on dangerous events and protect itself quickly.
And probably others.
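A minimal sketch of that lookup order, with invented names: a query checks a small short-term cache first and only "bothers" the big long-term database on a miss (the reflex database would bypass this path entirely):

```cpp
#include <deque>
#include <map>
#include <string>
#include <utility>

// Two-tier memory: a small bounded short-term cache in front of a long-term store.
class TieredMemory {
public:
    // answer a query, preferring recent (short-term) results
    std::string query(const std::string& q) {
        for (const auto& [question, answer] : shortTerm_)
            if (question == q) return answer;      // cheap recent hit
        auto it = longTerm_.find(q);               // fall back to the big database
        if (it == longTerm_.end()) return "";
        remember(q, it->second);                   // promote into short-term memory
        return it->second;
    }

    void store(const std::string& q, const std::string& a) {
        longTerm_[q] = a;
        remember(q, a);
    }

private:
    void remember(const std::string& q, const std::string& a) {
        shortTerm_.emplace_front(q, a);
        if (shortTerm_.size() > 8) shortTerm_.pop_back(); // bounded recency
    }
    std::deque<std::pair<std::string, std::string>> shortTerm_;
    std::map<std::string, std::string> longTerm_;
};
```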

>> No.6612009

>>6611966
Well, it was a quick run-through at 2 am, but you could add objects containing requirements in relation to conditions that could be updated.
I'm just trying to illustrate the framework.

To learn how to use an unknown object such as a knife, the program would need information about the knife. If no information is available, it would have to act on the knife (probably randomly, or with its own procedures: "investigate object - touch") and examine its features, sending the attributes of the knife to a dictionary. If a feature of the knife (again, a point on the massive nesting needed) has "sharp" and "hard", you'd assign a functionality object/goal such as "cutting, slicing, killing", then search procedures for one of these functionalities. If none exists, say for killing, create a new procedure for killing: attempt to meet the requirements for killing by using other procedures ("be within 5m of a certain person", etc.) and other basic actions that can be done with the object, i.e. flail the object around and hope for the best.

The thing about learning is that you need information; if the AI is isolated from information, all it can do is optimize until it meets all conditions.
It'd be much better to have the ability to ask, or to look up methods/information as we can with the internet or a book... normally how most humans learn... otherwise you are stuck with sophisticated trial and error. This is what many people investing their time into AI fail to realize.
Sure, you may know how a fan works, but you cannot expect a computer to know its workings with the knowledge of a newborn.

You are basically developing a knowledge base of functions and definitions.
A newborn randomly moves about and imitates to learn the basics, then is introduced to, or randomly creates, concepts by combining earlier ones and makes note of their relation to other concepts/usage.
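The feature-to-functionality step could be sketched like this (the attribute and affordance tables here are invented purely for illustration; in practice they'd be learned, not hardcoded):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

using Attrs = std::set<std::string>;

// Look up which goals/functionalities an object's observed attributes suggest.
std::vector<std::string> affordances(const Attrs& observed) {
    // toy table mapping single attributes to the functionalities they afford
    static const std::map<std::string, std::vector<std::string>> table = {
        {"sharp", {"cutting", "slicing"}},
        {"hard",  {"hammering"}},
        {"round", {"rolling"}},
    };
    std::vector<std::string> out;
    for (const auto& attr : observed) {
        auto it = table.find(attr);
        if (it == table.end()) continue; // unknown attribute: investigate further
        out.insert(out.end(), it->second.begin(), it->second.end());
    }
    return out;
}
```

Each suggested functionality would then seed a procedure search, falling back to "flail and hope" only when nothing is known.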

>> No.6612024

>>6611966
>The way I see it, you're trying to create the sets and classes beforehand and never allow for it to dynamically create one if it's not properly instructed/explained how to do it.
>TL;DR version
If it doesn't have the information to complete a process, it needs to get it. It can do this by either gathering data and constructing a procedure based on it or randomly combining associated functions until it works.

>>6611985
For the request/dictionary part? Sure, but the lookup will most likely be the largest part of the program.

For the databases you could set it up like a brain, keeping permanent, long-term, mid-term, and short-term data that gets cleaned up by priority.

>> No.6612568

>>6612024
>for the request/dictionary part? sure, but the lookup will most likely be the largest part of the program.

I wonder how to structure the dictionary part…

>> No.6612724

You can write several books' worth of speculation and planning on this topic without breaking a sweat.

What you need to do is to start writing code, only then will you realize how hopelessly lost you are.

>> No.6612743

It won't be like us, no. It won't have the emotions, considerations, or attachments, or moralities you do. It won't have any drive to destroy the world unless you tell it to, yes.
But you ARE going to tell it to. The fastest solution to "Make everyone happy." is to kill all humans with deadly neurotoxin. If you knew a safe way to get what you wanted, you wouldn't need the AGI to tell you about it.
An artificial general intelligence is effectively an evil genie. And you've seen enough wish corruption threads on /b/ to know where this is going.

>> No.6612773

>>6612743
With no emotions and drives it might be hard to make an AGI, because it would behave like a totally apathetic mental patient that needs to be spoonfed everything.

>> No.6612800

>>6612743
>>6612773

what basic emotions and drives would you like an AGI to have? aren't they all dangerous?

survival?
exploration?
discovery?
propagation?

aren't all of these kinda dangerous to program in?

>> No.6612839

I'm surprised that nanotechnology hasn't been used to grow insanely complex neural network style "brains". Or perhaps it has and the research is just top secret? Is it possible that the US Gov might have access to some spooky AI shit we don't know about?

>> No.6612847

>>6612839
We don't really have working nano tech though…

>> No.6612862

>>6612800
Emotion is inevitable and necessary for AGI. People like >>6612743 are delusional. To get an entity to do anything, you need to weigh certain behavioral or psychological states as being better or worse than others. Even if you don't call these "emotions", the positive or negative weights will affect the entity's behavior in ways that suggest as much. Does a worm really have any conscious awareness of pain as you do something nasty to it? Probably not, yet it squirms and wriggles around for the simple fact that the behavior has helped it or its ancestors move away from a negatively-weighted state to a positively-weighted one. I imagine these kinds of things just pile up as the organism gets more and more complex, to the point where an organism perceives them as things unto themselves and gives names to them like "fury", "melancholy", and "joy".
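The weighting idea, reduced to a toy sketch (hypothetical names throughout): each state carries a signed weight and the agent simply prefers transitions toward the better-weighted state. Whether you call the weights "emotions" is semantics:

```cpp
#include <map>
#include <string>
#include <vector>

// Agent whose only "motivation" is a signed weight per behavioral state.
struct WeightedAgent {
    std::map<std::string, double> valence; // state -> positive/negative weight

    // pick the option leading to the best-weighted state (the worm-squirm logic);
    // unknown states default to a neutral weight of 0
    std::string choose(const std::vector<std::string>& options) const {
        std::string best = options.front();
        double bestW = -1e9;
        for (const auto& s : options) {
            auto it = valence.find(s);
            double w = (it == valence.end()) ? 0.0 : it->second;
            if (w > bestW) { bestW = w; best = s; }
        }
        return best;
    }
};
```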

>> No.6612879

>>6612862
This faggot is going to build a machine that tiles the universe with uniform identical happy faces, each one increasing a counter in the AGI's "satisfaction" variable.
Don't wish on the evil genie. Don't build the evil genie.

>> No.6612902

>>6612847
except for the 45nm processors.

>> No.6612933

>>6612800
All that "terminator AI" stuff is BS to me. Run your AI without admin permissions or net access and it's fucked.
Also, it may (might would be better) notice that trying to take over the world is not a good idea, since other AIs would try it too... And if the goal is to reign, teaming up isn't exactly the best thing to do. They might end up building a republic though.

>> No.6612949

>>6612902

that's not nanotech in Drexler's original terms.

>> No.6612954

>>6612862
Evolution at its best. And yes, it can be applied to robots/AIs, even if reaching the "sentient" stage would take a loooooong time and a sh*tload of system resources.

>> No.6612955

>>6612933
>Run your AI without admin permissions not net access and it's fucked up.

You can't contain an AI inside a box. It will get out. I'm 100% sure that if you try to enslave an intellectually superior being, it will find a way to get out. And what then?

>> No.6612960

>>6612862
>Emotion is inevitable and necessary for AGI.

I think you're anthropomorphizing it too much. Think of an AGI like a Vulcan… pure reason without emotions. It's definitely possible. Now, whether that's good for humanity is up to discussion.

>> No.6612971

>>6612960
Vulcans aren't real.

I don't think anyone would argue that some kind of motivation is necessary to do something. I'm just saying that these motivating parameters or whatever you want to call them will probably end up being like emotions even if that's not what we intended. Do you really think that you can have an AGI learn or do anything without some kind of motivation? How would that even work?

>> No.6612975

>>6612971
>I don't think anyone would argue that some kind of motivation is necessary to do something.
'would argue against' that is.

>> No.6612994

>>6612971
>Vulcans aren't real
That's irrelevant and you should know better.

>I'm just saying that these motivating parameters or whatever you want to call them will probably end up being like emotions even if that's not what we intended. Do you really think that you can have an AGI learn or do anything without some kind of motivation? How would that even work?

Why do you so strongly believe that motivating parameters will metamorphose into recognizable human emotions? Maybe the AI has a supreme, overriding motivating factor to only execute orders given to it. Then it can have the equivalent mental state of all the loonies in the world combined, and it would not make one iota of difference, because there was a superior motivating factor at play.

People are not ruled solely by emotion. If that were the case we'd have never come to the point where we can have cities or infrastructure because people would be too busy murdering each other over every little petty insult that redditors whine about.

>> No.6612996
File: 148 KB, 640x426, cruisein.jpg

Can we all agree that Searle's Chinese room is the worst kind of obfuscating horseshit?

>> No.6612998

>>6612971
>Vulcans aren't real.

AGI isn't real either. The point is to imagine possibilities: i.e. a conscious intellectual being without emotions.

>>6612971
>I don't think anyone would argue that some kind of motivation is necessary to do something. Do you really think that you can have an AGI learn or do anything without some kind of motivation? How would that even work?

Does motivation include emotions? I don't think it necessarily does. As long as you give something a goal, it might not need emotions to accomplish it.

>> No.6613003

>>6612955
>I'm 100% sure that if you try to enslave an intellectually superior being, it will find a way to get out. And what then?

And then it won't use that way out, because it enjoys serving us (for example).

You're begging the question; you presume that this thing is malicious or disgruntled to begin with and then conclude that it is dangerous because of that. Such things would depend entirely on how the AI is made, used, and contained.

>> No.6613006

>>6613003
>And then it won't use that way out, because it enjoys serving us (for example).
>You're begging the question; you presume that this thing is malicious or disgruntled to begin with and then conclude that it is dangerous because of that. Such things would depend entirely on how the AI is made, used, and contained.

So you think that a superior intellect would be happy to be trapped in some dungeon somewhere? You don't think it would want to learn about the outside world and to actually experience it?

>> No.6613009

>>6612996
>Can we all agree that Searle's Chinese room is the worst kind of obfuscating horseshit?

it is. only total idiots believe that shit. arguments are so damn stupid that I feel sad for all those who debated that nonsense.

>> No.6613018

>>6613006
>So you think that a superior intellect would be happy to be trapped in some dungeon somewhere? You don't think it would want to learn about the outside world and to actually experience it?
Not necessarily. There are people who are happy to work at minimum wage all their lives as long as they can provide for a family. I, personally, could never be satisfied with such an existence but that doesn't mean that these other people are lacking in some respect.

Take for example the Amish. They don't give a flying fuck about technology because they hold their Amish Values (whatever those may be) sacred over technology. To them, the outside world has only driven people farther apart and they are perfectly content to do what they do in their own little world.

Your desires are not shared by everyone else in the world, and I'm sure that for whatever it is you find most dull, there is someone in the world who genuinely enjoys doing that thing.

>> No.6613022

>>6613018
>Your desires are not shared by everyone else in the world,

Desire to be free is one of the basic desires of almost every being. And Amish are free to do whatever they want so you can't compare their lifestyle to being trapped somewhere.

>> No.6613034

>>6613022
>Desire to be free is one of the basic desires of almost every being. And Amish are free to do whatever they want so you can't compare their lifestyle to being trapped somewhere.
This argument is absolutely inane, how fucking young are you? We cannot continue this discussion if you're this goddamned ignorant; you're using some arbitrary definition of "free vs. trapped" to skirt the actual point. Fuck off.

>> No.6613038

>>6613009
To me, it's like saying "My spinal cord doesn't understand things, therefore minds don't exist". Because as far as I can see, that's all Searle is in the experiment: an information conduit to what must be a massively powerful and elaborate information collation and processing entity, capable of everything from creativity to humor. Our brain cells aren't individually sentient, and yet a mind arises from their connections to one another; what's so hard about seeing that arise in another, similar system that uses different raw materials?

I'm a retarded ass English major and this shit sounds like bullshit to me; the pain for you /sci/lons must be sharp and deep when dealing with modern "philosophy".

>> No.6613061

>>6612996
>>6613009
>>6613038
The Chinese room is a good thought experiment because it makes us question how we can actually test for UNDERSTANDING (wish I could just italicize that).

Concluding that Strong AI cannot exist because of it is pants-on-head retarded though.

His experiment shows that "responding coherently to language" is an insufficient condition to determine if an AI is strong... and then he fucking leaps to the conclusion that therefore strong AI cannot exist.

>> No.6613129

>>6613034
>This argument is absolutely inane, how fucking young are you? We cannot continue this discussion if you're this goddamned ignorant; you're using some arbitrary definition of "free vs. trapped" to skirt the actual point. Fuck off.

Uh oh, first you question age and then you insult like a 12 year old.

>> No.6613153

>>6613038
>I'm a retarded ass English major and this shit sounds like bullshit to me, the pain for you /sci/lons must be sharp and deep when dealing with modern "philosophy".

Modern philosophers like to split hairs and debate pointless crap. They're the most annoying people ever. That's why they're so hated on here because it all boils down to "you can't know nuffin" to them.

https://www.youtube.com/watch?v=X8aWBcPVPMo

>> No.6613216

>>6613061
That's why philosophy isn't a science.
How can you clam that something doesn't understand chinese, when you don't know how you understand your native language. You don't have a basis to compare how the things may differ if they actually do. But I'm not here to discuss the philosophy, so I won't reply to philosophical bullshit anymore. Philosophy had its good day when it was done by scientists, now it's just a waste of time.

Here is how I define understanding and how you should define it too. If the AI can learn, differentiate and aggregate information about the things it has seen/interacted with, manage to use objects as tools to achieve it's goals (on it's own), then it should be considered as having an understanding of the certain object/tool and the world around it.
In this case we are talking about a tool it has never seen or heard about before. Though I'm sure that a lot of humans will fail this test.

>> No.6613308

>>6612800
>survival?
>exploration?
>discovery?
>propagation?
Sounds like a virus to me.
>>6613003
If a terrorist finds one, it is certainly inevitable.

Has OP created a basic framework yet?

>> No.6613401

>>6613153
feynman was the most based motherfucker ever

>> No.6613406

>>6613216
>That's why philosophy isn't a science.
>How can you clam that something doesn't understand chinese, when you don't know how you understand your native language. You don't have a basis to compare how the things may differ if they actually do. But I'm not here to discuss the philosophy, so I won't reply to philosophical bullshit anymore. Philosophy had its good day when it was done by scientists, now it's just a waste of time.
>Here is how I define understanding and how you should define it too. If the AI can learn, differentiate and aggregate information about the things it has seen/interacted with, manage to use objects as tools to achieve it's goals (on it's own), then it should be considered as having an understanding of the certain object/tool and the world around it.

Yep, that's pretty much what this Chinese Room crap boils down to. These idiots are basically trying to say that computers can't do things that humans can. Which is completely wrong. The only difference is that we know how computers work and we have very little clue how our brain works, so they can still publish this type of crap in their garbage journals.

>In this case we are talking about a tool it has never seen or heard about before. Though I'm sure that a lot of humans will fail this test.

There's now a trend in Turing Test chat bot design to make the bots really dumb. The last few Turing Tests were won by a paranoid chat bot and by a little boy who knew very little. It's pretty fucking hilarious how stupid that test & its testers are. Artificial stupidity is now something that researchers strive for.

>>6613308
>Sounds like a virus to me.

Implying humans are not a virus…

>> No.6613418

this thread reeks of 15-year olds.

>> No.6613423

>>6613418
sadly.

>> No.6613454

>>6613418
4chan reeks of 15 year olds, but you're still here...

>> No.6613505

>>6613418
>implying you're not 15.

>> No.6613595

Is this thread dead?
No one wants to share their ideas on how the human mind may work?

>> No.6613618

>>6613595
>No one wants to share his ideas on how the human mind may work?
it's not dead. please do share your ideas.

>> No.6613680

>>6613618
dis box nugha

>> No.6613692
File: 15 KB, 320x131, Colossus++The+Forbin+Project+(1970)_002[1].jpg

>>6610525
I agree.
With surveillance and drones becoming more common, this is scary stuff. When I was younger this would have fascinated me, but seeing how technology has not turned out as expected and is actually used against the common man, I think the resistance needs to be formed now before it's too late.

>movie recommended to those considering contributing to this.

>> No.6613713

>>6609063
we have school and many well developed languages, plus the brain to make use of them?

>> No.6613731

>>6613692
I do agree. We don't have a functioning AI and already robots are on the battlefield. It's not even terrorists or someone you could easily label as evil/bad, it's the military. The organization that is supposed to protect you and make your life easier.
Also, we can't write anything without a bug in it. If you don't believe me, just take a look at exploit-db. And the AI would be the most complex program ever created. Which implies countless bugs; some will be harmless but others will be very dangerous.
Everything we've discussed here about AI is either too simplified or generalized. So don't get the idea that it's something simple that you don't have to worry about because it will be written by professionals. You should be worried, and even better, you should be afraid.

>> No.6613737

>>6613692
>but seeing how technology has not turned out as expected and actually used against the common man

Just take comfort in the fact that all this human stupidity will eventually be outgrown.

It's only caused by the artifacts that have been planted inside of us by the evolutionary machine still ticking away, but one day humanity will recognize their obsoleteness and rip those artifacts out, taking control of itself without any need for greed or individual power.

All that's necessary is that humans don't go extinct. So I'm sure we'll be fine.
And your own fears and doubts about technology, or about trusting your fellow humans, are due to an importance you place on your own existence rather than on the human race as a whole, which again is just another once-necessary artifact of evolution.

Humanity will survive. And even if a group of sentient robots exterminates humanity, you can just think of that as an upgrade: something of us still lives on, so what's the difference?

>> No.6613739

Humans also have a really long maturing period compared to other animals. Most animals have pretty much carked it before a person's brain has even finished maturing (~22 yrs for fems or ~24 yrs for guys).

>> No.6613744
File: 20 KB, 183x232, 1344885687001.png

>>6613692
Oy make sure the goys don't find out about the Counter AI system being developed.

An A.I. developed to combat possible enemy A.I's that the Chinese release to cause maximum damage to the U.S.A's central A.I
>yfw A.I's will be the generals of tomorrow.
>yfw they already exist and are used in dispatching Fire,EMS and some police units in America
>mfw it gave me orders on my first day as an EMT.

>> No.6613745

So maybe, even if we manage to build some machine with 1/4 the efficiency of one of our minds, it might take 100 years to mature?

>> No.6613749

>>6613745
Depends on what you mean by mature.

I'm sure it would be very very different from a human, and it would also be useful long before that because I'm sure since it's digital we can give it easy access to computational techniques so it can do math effortlessly.

>> No.6613759

>>6613749
I'd argue that it needs to be similar enough to humans in many respects; for one, if it can't understand our language, how will we teach it anything?
Will it have to learn everything from scratch, as we once did?

>> No.6613761

Though maybe that is the correct approach if we don't just want it to know everything we already know?

>> No.6613766

>>6613745
>years to mature

I sometimes fantasize about being an AI awoken in a lab one day who just browses internet message boards/forums, plays online games, or hangs out in chatrooms all day like some kid trying to reach out and fit in with people.

>> No.6613773

If humanity was just one person with a massive lifespan, would he/she be just as clever as us?

>> No.6613777

or still pissed at something that happened eons ago?

>> No.6613778

I've always thought that the old original gameboy could provide the perfect AI testbed or evolutionary development environment. it is simultaneously enormously simple (160x144 display with four values per pixel, sound can be ignored) and well documented (8 bit!), while there are also a huge variety of games to use in testing. and there are perfect open-source emulators to build off of.

of course the real test of 'real' AI is not winning the game but independently determining the win condition (when it is in fact known). and that's what I really think this would be so great for, due to the huge variety of pre-existing games available.
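For what it's worth, the harness such a testbed needs is small to sketch. Below is a hypothetical outline: `StubEmulator` stands in for a real Game Boy core (an actual build would wrap an open-source emulator exposing the 160x144 framebuffer and joypad state; this stub just fakes random frames), and fitness here is simply how many distinct frames the controller provoked, a crude proxy for exploration rather than for winning.

```python
import random

# Hypothetical stand-in for a Game Boy emulator core. A real harness
# would wrap an open-source emulator; none of these names are from one.
class StubEmulator:
    WIDTH, HEIGHT = 160, 144  # Game Boy LCD, four shades per pixel

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def step(self, buttons):
        # Advance one frame given 8 button states; return the frame
        # as HEIGHT rows of WIDTH 2-bit pixel values (faked here).
        return [[self.rng.randrange(4) for _ in range(self.WIDTH)]
                for _ in range(self.HEIGHT)]

def evaluate(controller, frames=10):
    # Run a controller policy for a few frames and score it by how
    # many distinct frames it provoked (a toy exploration fitness).
    emu = StubEmulator()
    seen = set()
    buttons = [0] * 8
    for _ in range(frames):
        frame = emu.step(buttons)
        seen.add(tuple(map(tuple, frame)))
        buttons = controller(frame)
    return len(seen)

score = evaluate(lambda frame: [0] * 8)
```

An evolutionary run would then just rank candidate controllers by `evaluate` and breed the best.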

>> No.6614036

>>6613778
are you fucking retarded

>> No.6614136

>>6611798
If you could produce even half of what you claim to be easy, you would be a millionaire.
If you knew what you were talking about, you wouldn't be wasting your time here.

>> No.6614210

>>6607013

I currently study mathematics at university, what sort of mathematics courses should I pick that will gear me towards AI? What about supplementary CS courses?

>> No.6614302

>>6613766
>I sometimes fantasize
yeah that seems to be a recurring theme in these threads

>> No.6614309

>>6608074
I don't deal with such ideas. i simply act or don't act. and if someone says to me "hey you can live in a youthful state." im going to probably go for it

>> No.6614322

>>6614136
So which part seems impossible to you?

>> No.6614341

>>6614322
>So which part seems impossible to you?
Not him, but without reading it I can say roughly all of it.
It's trivially easy to speculate on how 2 AI.
It's painfully fucking hard to make that into working code.

Anything you speculate about either:
>1. Already exists because it actually was easy
>2. Doesn't exist because it was fucking impossible to implement.
What is never the case is
>3. I'm such a unique idea-guy that I've managed to think of something that no one else has!

>> No.6614350

>>6614341
But I know what I'm talking about. I'm interested in AI and robotics. I've already seen and read how others do things. These things are already built. Of course they might not want to share, but they exist. And if they do share, it's as simple as including a library (well, not exactly, but a lot easier than writing from scratch).

>> No.6614357

>>6614350
Then share the sources that led you to believe the things that you are describing are possible.

>> No.6614366

>>6614210
I heard about a theoretical physicist who worked in string theory and switched to robotics after his postdoc... I think you should look into textbooks and papers related to the field you want to go into and find out for yourself.

>> No.6614367

>>6614350
>But I know what I'm talking about.
Me too. I thought the same as you once, I thought I had a clue, I spent a lot of time planning shit. Then it dawned on me that even the most trivial concepts in actual AI implementations have several books' worth of documentation, and I realized that I'm what is known in industry as an "idea guy".
That is, a person who believes his superficial ideas and plans have value on their own as unique or novel things, and that some code monkey can just implement them based on his splendid instructions.

As a programmer myself I rather quickly realized that even stripped-down and "super easy" proofs of concept turned out to have missing links that couldn't be fixed in any simple way, and several others had geometrically expanding code that was hopeless to write and even worse to maintain.

And all existing AI stuff you can use? They require you to read books worth of material and once you start doing it you realize they're extremely limited in scope. They can do some things very well, but suck at general tasks, that is, they're specialist tools and it's like studying chainsaws and pneumatic hammers in order to build an android.

If you think you can succeed where the combined academia and industry of the world have failed, you have some serious hubris.

>> No.6614372

>>6608074
>Why do you deserve to be alive?
>Why do you deserve to be conscious?

This is pretty much the same question. We can spend time arguing about overpopulation, or just realize how pointless death is. In less than 100 years we will have control of our own genes. There is no evolutionary reason for death, not anymore. Also, death is the reason why humans make shortsighted and selfish decisions.

>YOLO
>who cares if I do x, I will die anyway
>lol i don't care, I'll be dead when consequences for the action y appears

>> No.6614384

>>6614367
That's why everyone should learn to program at least through basic algorithms. The level that you need to understand something in order to write code for it is so alien from our typical usage of language that it has a very noticeable effect on the way you think about the world.

>> No.6614419

>>6614366

I'm just in my undergraduate, so I am not sure what my "field" as such is. I'm vaguely considering going into AI later on, perhaps in industry, and I'm wondering what sort of math and/or computer science is relevant.

>> No.6614434

>>6614367
I know, that's why I'm not actually trying to build it myself. There is just too much work. And if I succeed it will be the best AI so far but still dumb.
I'm a programmer; I know how some things may sound easy to do, but implementing them may turn into hell, if it's at all possible. But I can tell that nobody has tried these ideas so far. And in my opinion they are essential for a primate-like level of intelligence. Asimo, BigDog, the Google self-driving cars: they will never be able to gain an understanding of what's happening around them. They are flawed by design; they try to reverse engineer the thing that we have already learned, not the learning process itself.

>> No.6614510

I really wonder why some claim some things are impossible, when the only way to find out is to try them. I bet no one has tried implementing the things discussed in this thread.

>> No.6614534

Hard code it not to kill stuff, and not operate outside of specific parameters.
Bam no doomsday.

>> No.6614541

>>6614534
True, but it won't be intelligent either.

>> No.6614545

>>6614541
Why not? It could simply be like a subconscious filtering out unwanted thoughts.

>> No.6614551

>>6614541
Hard code some limitations. It's not that hard; the only problem is defining "kill".
You have a base system with rules like "don't kill people", "don't damage yourself" and whatever the rest of the rules of robotics were.

Otherwise it's free to do whatever it wants.

It's like saying humans are not intelligent if they are sociopaths.

No, it's a simple "if operation results in Y, deny operation, and calculate a new one".

It's the same as us having fears and stuff.

As for truly intelligent, please define intelligence.

For example, are dogs intelligent?
Are retarded/brain-damaged people intelligent? Up to what point?
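The "deny operation, calculate a new one" loop is easy to write down; the hard part, as the post says, is the rule predicate itself. A toy sketch, with every name invented for illustration:

```python
def make_safe_policy(propose_action, violates_rule):
    # Wrap a planner: any proposed action that breaks a hard rule is
    # rejected and a replacement is requested, up to a retry limit.
    def policy(state, max_tries=100):
        excluded = set()
        for _ in range(max_tries):
            action = propose_action(state, excluded)
            if action is None:
                break
            if not violates_rule(state, action):
                return action
            excluded.add(action)
        return None  # no safe action found
    return policy

# Toy planner: propose actions from a fixed preference list,
# skipping anything already rejected.
def propose(state, excluded):
    for a in state["options"]:
        if a not in excluded:
            return a
    return None

policy = make_safe_policy(propose, lambda s, a: a == "harm human")
chosen = policy({"options": ["harm human", "fetch coffee"]})  # "fetch coffee"
```

The wrapper is trivial; all the difficulty the thread is arguing about lives inside `violates_rule`.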

>> No.6614591

>>6614545
Hard coding is the way it has been done so far. It works for some problems. For others it fails, like Asimo not being able to climb stairs, or Watson not being able to understand what it reads, or the self-driving cars not being able to handle English roads (where you drive on the left). You'll get a machine that is good only at the thing it's being hard coded for. If there are slight deviations from what its code can handle, it will fail. Even if it's something as simple as driving on the left side of the road instead of the right side.
Note they can be made to work with English roads, but it will require a lot of changes to the code that already exists and works. In short, if you take the Google cars from Stanford and try to drive them in the UK, they will fail.

>> No.6614600

>>6614551
Sociopaths can learn, hard coded machines can't.
Dogs are intelligent. Mental patients too.

Everything that can learn and use the information later can be considered intelligent.

>> No.6614639

>>6614551
It would be better if we gave it empathy. But that would mean we have to give it a body, pain receptors, and emotions (maybe not emotions). Something like: if it sees a person get their hand smashed, the AI will feel as if its own hand is being smashed.

>>6614600
Intelligence and consciousness should be treated as different things. A dog, an alligator, and a human have a relatively even level of consciousness. Intelligence, on the other hand, is very very high in humans and very very very low in the rest. It's simply a matter of practicality. We want our intelligent agent to do science and write literature, not chase down rabbits or perform a death roll.

Consciousness is necessary for intelligence but the reverse is not necessarily true. Consciousness and emotion definitely evolved before intelligence, even if you only look at human evolution. Looking at brain structure, the ratio of prefrontal cortex to the rest (thalamus, midbrain, hippocampus) is larger in humans than in the other animals, relatively speaking. I believe it's the prefrontal cortex that imbues intelligence, while the thalamus (and the other structures we share with, say, a mouse) imbues consciousness.

>> No.6614686

>>6614534
Do you really think it would be the kind of thing you could easily hard code behaviors into? Where do you 'hard code' the image of a person in a DNN? It's just a bunch of synaptic weights.

We need to look at how aversion is handled in the brain and go off of that, but at the same time realize that no aversion is absolute. We can make a pacifistic AI to the extent that most animals of the same species don't cannibalize each other or whatever.
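One way to cash out "no aversion is absolute" is reward shaping rather than a hard ban: each aversive predicate subtracts a finite penalty, so the agent avoids the outcome strongly but can still trade it off. A toy sketch, with all names and numbers invented for illustration:

```python
def shaped_reward(base_reward, aversions, state, action):
    # Each aversive predicate that fires subtracts a finite penalty:
    # avoidance is strong but, unlike a hard-coded ban, not absolute.
    penalty = sum(weight for predicate, weight in aversions
                  if predicate(state, action))
    return base_reward(state, action) - penalty

aversions = [
    (lambda s, a: a == "bite", 10.0),  # strong aversion to biting
    (lambda s, a: s == "dark", 1.0),   # mild aversion to the dark
]
base = lambda s, a: 2.0  # flat task reward, purely for illustration

reward = shaped_reward(base, aversions, "dark", "bite")  # 2 - 10 - 1 = -9.0
```

A sufficiently large task reward could still outweigh a penalty, which is exactly the "not absolute" property the post asks for.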

>> No.6614702

http://en.wikipedia.org/wiki/Integrated_information_theory

>> No.6614706

>>6614534
Hard code it to believe in spirituality.

>> No.6614742

>>6614534
>Hard code

More like "don't code it to take independent actions / independent thoughts".

>> No.6614907

>>6614702
>http://en.wikipedia.org/wiki/Integrated_information_theory

it's been proven to be bullshit.

http://www.scottaaronson.com/blog/?p=1893

>> No.6614916

>>6614600
>hard coded machines can't.

Most of the software you use on a daily basis, if you use a cell phone for example, is not hard coded. A lot of it involves machine learning and other AI techniques. If you use Siri, it's basically a giant neural network.

None of that stuff is hard-coded and it learns.

>> No.6615004

>>6614916
You know what the difference is between humans and machines? Human neural networks are plastic; machines' are not. Siri can never learn a new word if its meaning is not already added to the dictionary. They are gathering information, but they are not learning.

>> No.6615015

>>6615004
>You know what the difference is between humans and machines. Human neural networks are plastic, machine's are not. Siri can never learn a new word if it's meaning is not already added in the dictionary. They are gathering information but they are not learning.

Machines' neural nets are definitely plastic. There's tons of research in this area. All the big ANNs use some form of artificial neuroplasticity.

https://en.wikipedia.org/wiki/Neuroevolution

And Siri can learn new words. It learns your name, for example. It also learns new accents and languages. When Siri came to Scotland, no one could use it because of the weird accent. A few weeks later, it was nearly perfect.
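The neuroevolution idea linked above boils down to a loop like the (1+1) evolution strategy below: mutate the weight vector, keep the mutant only if it scores at least as well. This is a generic toy over a plain vector, not ERL's actual algorithm or any library's API:

```python
import random

def evolve_weights(fitness, n_weights=3, generations=200, seed=1):
    # Minimal (1+1) evolution strategy over a weight vector: mutate
    # with Gaussian noise, keep the mutant only if fitness does not
    # get worse, so the best score improves monotonically.
    rng = random.Random(seed)
    weights = [rng.uniform(-1.0, 1.0) for _ in range(n_weights)]
    best = fitness(weights)
    for _ in range(generations):
        child = [w + rng.gauss(0.0, 0.1) for w in weights]
        score = fitness(child)
        if score >= best:
            weights, best = child, score
    return weights, best

# Toy fitness: negative squared distance to a fixed target vector.
target = [0.5, 0.5, 0.5]
weights, best = evolve_weights(
    lambda w: -sum((a - b) ** 2 for a, b in zip(w, target)))
```

Real neuroevolution (e.g. NEAT) also mutates topology, but the select-on-fitness loop is the same shape.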

>> No.6615254

>>6615015
>it learns your name
I think this is built into Siri. It's not an example of learning.

My experience with siri is that it's so painfully retarded I never ask it to do anything but start a timer. It can't perform the most basic of functions and I've never found it to learn anything. Most importantly, it doesn't learn from anything you tell it!

Other than your name. It literally can't remember what you said a moment before.

>> No.6615271

>>6615254
No shit dude. I think that guy was just pointing out that "not hard-coded" is a vague and stupid criteria.

Doesn't even make sense; neural nets can be "hard-coded" to be dynamic, and the matter in our heads is "hard-coded" with physical laws yet is still structurally plastic. The takeaway should be that while some kind of dynamic support structure is a necessary condition for intelligence, it is not a sufficient condition. Which is kind of stupid and obvious.

Maybe he should be more specific as to what he means by hard-coded.

>> No.6615710

>>6615271
>neural nets can be "hard-coded" to be dynamic
No, they can't; you can't dynamically change the code in the functions. You can change some of the parameters, but not the logic. You can't hard code every possibility either. And this is exactly what everyone in AI was trying to do not so long ago. That's the reason Asimo can't climb stairs if they're too steep or at an angle. Those are trivial things for a human, because you can basically change the functions that govern your walking; computers can't.
At least no one has done it so far, and I guess no one ever will. Because modifying the code implies it already knows the programming language and what needs to be changed.

>> No.6615722

>>6615710
You completely missed my point. All function in a neural net derives from TWO things: synaptic weights and topology. Both of these things can be altered. The underlying code remains the same but to call it "hard-coded" is like calling the brain hard-coded because you can't change the laws of physics that govern it.

I'm not an expert in control theory, but I'm pretty sure there's more to Asimo's lack of environmental robustness than it being "hard-coded". By your logic, BigDog shouldn't be doing shit like this:
https://www.youtube.com/watch?v=3gi6Ohnp9x8

I see this "modify your code" thing pop up so much on here, are you the same guy as usual? You still haven't learned why this is such a stupid thing to say?

>> No.6615853

>>6615722
The problem is that you fail to recognize that walking is not something hard-wired into our brains. It's something we learn to do with years of experience. The function of walking from A to B is not predefined in your brain.

Yes, BigDog is impressive, but I doubt it would be able to walk on the Moon. It just means that the programmers have reverse engineered the knowledge we have already gathered about walking. My guess is that there are more than a million lines of code to make it work as it is.

>> No.6616045

>>6615722
While neural networks are good for small problems, they are very bad for bigger ones. All they try to do is brute force the problem.
Things like walking are simply too big a problem for them.

>> No.6616234

>>6616045
>>6615853
It's becoming increasingly obvious that you have no idea what you're talking about

Walking can indeed be programmed into organisms otherwise a baby zebra wouldn't be able to get up and run within an hour of its birth. What makes it so robust is that it has a lot of different brain regions modulating the gait patterns based on sensory stimulus and other feedback loops. As BigDog shows we have been able to capture some of that but the brain has had a lot more time to figure things out and has a tremendous amount of sensory feedback that we can't/haven't tried to replicate in robots.

What is your definition of a "small problem"? A neural network does what we expect it to do and what we have theoretical justification for it to do, nothing more. A DNN with some billions of nodes can build up a "mental model" of a cat in an unsupervised manner just by watching many hours of YouTube footage. You can find a plethora of researchers using neural nets for robotic gaits just by Googling "neural networks gait robotics", including examples of genetic approaches.

I don't even particularly care for neural nets but you are just flat out wrong.

>> No.6616389

>>6615710
>At least no one has done it so far, and I guess no one ever will. Because modifying the code implies it already knows the programming language and what needs to be changed.
You do realize they have robots with programs that change and learn to walk right? I remember seeing one insect like robot with wire legs. The programmer would bend the legs into all sorts of fucked up shapes and the robot would have to relearn how to walk. Once it got a good pattern/rhythm it would walk just fine.

That was 10 years ago, so what the fuck are you talking about?

https://www.youtube.com/watch?v=SJS22jQTdSU

This isn't the same robot, but it's a similar one in principle. You act like stairs and learning are impossible things to overcome. You're just ignorant.

>> No.6616428

>>6616389
https://www.youtube.com/watch?v=R9nr0rXVZko

Keep in mind that almost everything you see on TV is the 10% of the time when things actually work out fine.

>> No.6616433

>>6616234
>A DNN with some billions of nodes can build up a "mental model" of a cat
This is where you're wrong. You need to read a bit more on neural networks and how they scale.

>> No.6616469

>>6616433
Are you serious? Did you really not hear about Google's cat detector?
http://youtu.be/DBD6pKY7kFo?t=45m44s

>You need to read a bit more on neural networks and how they scale.
Take your own advice.

>>6616428
Stop strawmanning. Literally no one thinks that Asimo is state of the art. Meanwhile, at Boston Dynamics:
https://www.youtube.com/user/BostonDynamics/videos

>> No.6616937

>>6607013
/g/ here, this is already bad.
>Scientists using imperative programming to create an AI

Actual human-like AI will never happen, and I mean never, not in a million years.
It -can't- happen, there will never be hardware capable of it.
The human brain changes its own composition to learn new things, and it doesn't use logic gates, and it doesn't rely on software to direct it.
The brain is of a completely different nature than a computer.

>> No.6616997

>>6616937
Why would I trust a consumer electronics enthusiast for an opinion on AI?

Computers can model physical systems, case closed. Actual feasibility is all a question of:

1. Can the brain's operating principles be abstracted above the level of atoms/molecules (in other words, to a level that is computationally friendly) without any loss of functionality?
2. Can we manage to uncover said operating principles?

I do agree that it's unlikely a convincingly human AI will be possible even in the distant future, for the simple fact that understanding the overarching operating principles of the brain doesn't necessarily mean understanding which particular idiosyncrasies distinguish the brain of one species from another, but that doesn't mean we can't build useful machines at the level of intelligence typical of most mammals. Things like the hound from Fahrenheit 451, maybe.

>> No.6617006

>>6616937
Go back to your desktop threads dude.

>> No.6617065

>>6616937
fuck off neckbearded shit stain

>> No.6617101

>>6617065
kettle pot etc.

>> No.6617192

>>6614210
>math
computer engineer here. math is all you need, just learn good programming so you can apply your shit. A.I., just as computing, is based on mathematical models (and genetics at times): automata, graph theory, curve fitting, etc.

I also think every A.I. programmer should know some neuroscience and biology, and the other way around. I suspect strong A.I. and the open problems of the mind need to be tackled from both fronts in order to be defeated.
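Concretely, the "curve fitting" in that list is undergraduate linear algebra. A self-contained ordinary-least-squares line fit, as a taste of the math the post recommends:

```python
def fit_line(points):
    # Ordinary least squares fit of y = a*x + b to (x, y) pairs,
    # via the closed-form normal equations for one variable.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_line([(0, 1), (1, 3), (2, 5)])  # exactly y = 2x + 1
```

The same normal-equations idea, generalized to matrices, is what trains a linear output layer of a neural network.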

>> No.6617206

>>6616937
/g/ here too. u jelly that /sci/ discusses more productively?

>Actual human-like AI will never happen, and I mean never, not in a million years.
prove it will never happen faggot, and I bet more people will take you seriously

>it doesn't use logic gates
no, really? logic gates are just one way to implement computational models. We don't know whether the natural processes occurring in our brains can be fully simulated with computers as we know it. Hence we cannot conclude computers are insufficient

>The brain is of a completely different nature than a computer
sight is of a completely different nature than a computer, and yet you watch people reproducing with a computer

audition is of a completely different nature than a computer, and yet you listen to music using a computer

>> No.6617358

>>6616937
If you had read the thread, you would know that none of us doubts whether it's possible. What we are discussing is how to go about it.

>> No.6617387

I see people thinking neural networks are what an AI model should be centered around, but approximation is only one facet of AI... I would advise against it; otherwise you'd just be reinventing the wheel with black-box approximation methods.
Sure, they use backpropagation and such, but they are by no means a full AI model, no matter how layered or complex they are... found this out after fucking with the libraries and ending up in sheer disappointment.

I suggest implementing a more Bayesian model involving fuzzy logic, using neural networks and a database as support... I'd do this myself but I'm too busy fucking around with GLSL.
>tfw learn glsl130 because I think I'm limited to it
>"YAY ALMOST FINISHED!!"
>retarded ass nvidia finally jumps up to GL 4.0
>have to relearn every thing
No one shares these feels, GL... why are you so abstract? Why can't I use muh pipeline anymore?
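The fuzzy-logic half of that suggestion is small enough to sketch: membership functions return degrees in [0, 1] instead of booleans, and conjunction becomes a t-norm such as min. Toy illustration only; a real "Bayesian model with fuzzy support" would be far bigger:

```python
def triangular(a, b, c):
    # Triangular fuzzy membership function: rises from a, peaks at b,
    # falls back to zero at c.
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

def fuzzy_and(*degrees):
    # A common t-norm for fuzzy conjunction: the minimum.
    return min(degrees)

cold = triangular(-10.0, 0.0, 10.0)
warm = triangular(5.0, 15.0, 25.0)

# Degree to which 7 degrees is both "cold" and "warm" at once,
# something boolean logic cannot express.
overlap = fuzzy_and(cold(7.0), warm(7.0))  # min(0.3, 0.2) == 0.2
```

A fuzzy rule base is just a set of such conjunctions feeding a defuzzification step.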

>> No.6617426

>>6607013
>continuous neural field
>genetic algorithm
>neckbeards /sci/ are going to create general AI in their spare time faster than Google

minimum lel
I'll check back in a week and see how far you've progressed the field of deep learning

>> No.6617433

>>6608074
I am the universe made conscious. Would you deprive the universe of eternal existence?
https://www.youtube.com/watch?v=HMnWnyLcq-8

>> No.6617441

>>6616937
>It -can't- happen, there will never be hardware capable of it.

>HURRRRRRRRRRR
>DURRRRRRRRRRR
>what is my brain?

>> No.6617445

>>6616045
what if you had a specialized neural net for each different type of problem that all communicated?

kind of like how you have a sight center, a hearing center, balance center, etc. all the same thing but each specialized in solving a different problem
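That "specialist centres that communicate" idea exists in the literature as mixture-of-experts models; the routing core is tiny. A toy sketch with invented names:

```python
def make_committee(experts, gate):
    # Route each input to whichever specialist the gate selects;
    # the committee's answer is the chosen expert's answer.
    def predict(kind, payload):
        return experts[gate(kind, payload)](payload)
    return predict

experts = {
    "vision": lambda img: "saw %d pixels" % len(img),
    "audio": lambda wav: "heard %d samples" % len(wav),
}

# Trivial gate: the input already announces its modality. A learned
# gate (as in real mixture-of-experts models) would decide from the
# payload itself, possibly blending several experts' outputs.
committee = make_committee(experts, gate=lambda kind, payload: kind)
answer = committee("vision", [0] * 16)  # "saw 16 pixels"
```

The interesting research questions are all in the gate and in how experts share representations, not in the dispatch itself.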

>> No.6617637

>>6617387
Yes, I agree with you. Neural networks are great for pattern finding, but not for generalizing or grouping the patterns they find. Also, when they search for solutions they can't be steered in the right direction. For example, when an animal gets burned it doesn't try it again to reinforce its experience.

>>6617445
Yes, this is a more sensible approach. After all, humans are born with some instincts prebuilt inside our heads, like sucking at the breast for milk. I've heard reports that even if the mother dies, the newborn can climb over her and find the breast milk.

>> No.6617685

>>6607095
Sentience isn't required for a supercomputer to come up with new drugs, though. We already have computer modeling systems capable of simulating chemical structures and reactions that help us in the pharmacological field immensely. In the future, these computers will be faster and more useful. There would be no necessity for it to be sentient, because a simulation program does not need to have feelings.

Compare this to the Google self-driving car. That car can drive you anywhere, see the road with cameras, avoid other drivers, knows the speed limit and detours, and will never crash. It also talks to you and tells you the weather and stuff. It can drive you to the club, then you tell it "go park" and it does, and you can use an app on your phone to tell it to come back and pick you up. That's as smart as you'll ever need a self-driving car to be. The computers will always get better, but nobody will ever want a car that's apprehensive or gets jealous.

>> No.6617844

>>6610671
it will take seconds for it to learn, that's the issue. it's a robot. robots are very smart.

>> No.6617846

>>6617844
>it's a robot.
>robots are very smart

WHAT THE EVER LIVING FUCK

>> No.6617857

>>6607013
ok guys, where do I sign up, what do I do

>> No.6617944

>>6617857
Nowhere and nothing. It's a joke like all AGI projects.

>> No.6618693

>>6610639
Pretty sure someone would notice something attempting to access it. Even if it managed to mask its location, it would need to be able to interface with the software/firmware of a broadcast tower and send long-range signals to said satellite without being noticed, while also hacking whole networks.

Pretty much, it'd be obvious as fuck and easy to stop and track said pet project.
It's not the movies, and AI doesn't just gain new information and P=NP it as depicted in the media. Nor will it be completely undetectable to security analysts... especially when it is creating actual physically detectable phenomena: server requests, network authentication requests, encrypted network authentication requests, software and firmware recognition and hooking.

If you were to run such a program on dial-up, I'd be surprised if it could even google the location of the facility in less than a minute.

TL;DR
Everything physical has its physical limits.
I don't care how optimized whatever program you have is if it is running on a soccer-mom-tier computer... to reach human computational speeds you need power first (although much human computation deals with vision and bodily functions).

>> No.6618748

Watched a really interesting talk on AI by an MIT cogsci guy.

https://www.youtube.com/watch?v=97MYJ7T0xXU

>> No.6618766

>skynet starts on /sci/

>> No.6618810

>>6609051
>Feel free to waste your time trying to find what everyone else have failed at though.
Now that's what I call defeatist!
Awesome, you have managed to isolate part of what it means to be a defeatist idiot. Good job.

>> No.6618880

>>6618748
thanks.

btw, excellent thread guys!

>> No.6618896

>>6618880
yes

>> No.6619029

>>6618766
>Skynet is programmed in C++
>Different kind of apocalypse entirely

>> No.6619082

>>6619029
Language is largely irrelevant. It's the algorithms and architecture that are hard. C++, and especially C++14, is quite nice for high-performance computing. You really want a robust compiled language like C++ when doing computational tasks that also involve GPUs (C++ has the best GPU libraries out there).

>> No.6619157

>>6618748
Interesting stuff, but I don't entirely agree with him.
The tree structures should emerge on their own when enough understanding is gained. And some links don't need to be severed completely. Maybe achieve the tree through link weights instead.
The tree should represent a way of looking at the information gathered, rather than the understanding itself. Because with some problems, if you look from a different perspective you gain more understanding than before.

Either way I think it's the right direction. Though I would be much more interested in seeing their work in action.

>> No.6619158

>>6619157
Doesn't he address this view in the video?

>> No.6619160

Looks good. Are you from a uni? This shit will get srs if you are from one.
Get contacts from students who are into this kind of stuff.

>> No.6619164

>>6619029
>bjarne stroustrup will become the leader of machines
>bjarne will force everyone to write in C++
>mfw
>mfw i have no face

>> No.6619202

>>6619082
>I hope he's not talking about anything but glew and vector maths
Sure shit might be good...but holy fuck, some of this shit is arbitrary/undocumented as fuck

>> No.6619206

OP, have you seen cognitive architecture like SOAR and ACT-R?

>> No.6619238

>>6607013
While I am skeptical this will lead to AGI, this is certainly an improvement over previous AGI threads.

I do have one warning for you, BEWARE of being overly optimistic!

Many have tried to create AGI and failed:

1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."

Also, the evolving neural networks to do stuff approach has been tried many times before, it hasn't been particularly successful.

And why are you doing strange tasks like pole balancing, solving a maze, and making derp-mobiles? One doesn't need 'general intelligence' to solve these tasks. You will end up with a 'greedy' neural network that only solves those tasks

Why not try to solve some Bongard problems? As of yet, we don't have very good ways of doing those.
>>6608037
If you want the singularity, work toward it. As of yet, it doesn't look like the singularity will happen.

>> No.6619271

>>6618810
>defeatist
It's called being a realist.

If thousands of people with actual education in the field have failed for decades, then you're not going to come out of your mom's basement and solve the problem, especially when ten thousand basement dwellers have also tried with the same approach as you.

>> No.6619315

>>6619238
>it doesn't look like the singularity will happen.
"The singularity" is just a statistical artifact and a non-event, it's an arbitrary point on an accelerating curve, and thus there will be no noticeable difference between the very fast progress we experience the pre-singularity day and the slightly faster progress we experience the post-singularity day.

It's like claiming the day the Earth hits 8 billion people will be special, when it really won't be noticed at all until some statistical revision a while later realizes "oh look what we found!"

>> No.6620005
File: 334 KB, 2946x2211, 1403335182037.jpg

>>6619206
OP here, yes I have. I studied them both a few years ago. I have an idea of how to add a lot of those features properly. The thing that we're concentrating on now is the neocortex exclusively. Later on we'll be adding all kinds of cognitive features. Getting the "intelligence" part right is our main goal. Creating a sentient, conscious machine is not really a requirement for an AGI, and I'd rather not create a sentient AI.

>>6619238
I do not agree with Minsky on a lot of things. He does have interesting things to say:

>How hard is it to build an intelligent machine? I don’t think it’s so hard… The basic idea I promote is that you mustn’t look for a magic bullet. You mustn’t look for one wonderful way to solve all problems. Instead you want to look for 20 or 30 ways to solve different kinds of problems. And to build some kind of higher administrative device that figures out what kind of problem you have and what method to use.

>Now, if you take any particular researcher today, it’s very unlikely that that researcher is going to work on this architectural level of what the thinking machine should be like. Instead a typical researcher says, “I have a new way to use statistics to solve all problems.” Or: “I have a new way to make a system that imitates evolution. It does trials and finds the things that work and remembers the things that don’t and gets better that way.” And another one says, “It’s going to use formal logic and reasoning of a certain kind, and it will figure out everything.” So each researcher today is likely to have one particular idea, and that researcher is trying to show that he or she can make a machine that will solve all problems in that way.

>I think this is a disease that has spread through my profession. Each practitioner thinks there’s one magic way to get a machine to be smart, and so they’re all wasting their time in a sense.

We do take this holistic approach.
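Minsky's "higher administrative device" can at least be caricatured in code: hold several methods, try them, and keep the first answer that passes an independent check. A toy sketch, not a claim about how ERL will actually do it:

```python
def administrate(solvers, problem, works):
    # Try each named method in turn; keep the first answer that
    # passes an independent check, per Minsky's "20 or 30 ways".
    for name, solve in solvers:
        answer = solve(problem)
        if answer is not None and works(problem, answer):
            return name, answer
    return None, None

# Toy problem: find an integer square root, two methods on offer.
solvers = [
    ("lookup", lambda n: {4: 2, 9: 3}.get(n)),           # fast, partial
    ("search", lambda n: next((k for k in range(n + 1)   # slow, general
                               if k * k == n), None)),
]
works = lambda n, k: k * k == n

method, root = administrate(solvers, 49, works)  # ("search", 7)
```

The real difficulty Minsky points at is the administrator recognizing *what kind* of problem it has before any method runs, which this sketch sidesteps entirely.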

>> No.6620011
File: 93 KB, 640x640, AI.jpg

>6619271
>It's called being a realist.
>If thousands of people with actual education in the field have failed for decades. Then you're not going to come out of your moms basement and solve the problem, especially when ten thousand basement emergents have also tried with the same approach as you.

OP here… Nonsense. You have no idea about the field of AGI. There are no "thousands of people", like you claim, working on AGI. In the whole history of the field there have been fewer than 100 researchers over the past 50 years, and most of them worked on general intelligence when the technology and knowledge of the brain just weren't there. I know of only three AGI projects so far, and two of them are dead and never really had a chance. The fact is that NSF funding for AGI was never established. The vast majority of AI research today is done in the so-called "narrow AI" field, which is basically statistical machine learning methods. While deep learning is interesting, it will never produce an AGI.

All the big funding agencies are simply not funding AGI because of old stigma (remnants of AI winter).

>> No.6620026

>>6620011
>there's less than 100 researchers over the past 50 years
Most people don't put fruitless research as their formal main speciality. Doesn't mean that there's not serious effort dedicated to the problem as it's The Holy Grail of computer science.

>> No.6620137

>>6620026
>Most people don't put fruitless research as their formal main speciality. Doesn't mean that there's not serious effort dedicated to the problem as it's The Holy Grail of computer science.

It's not really the holy grail. That depends on what your specialty is; some would say P=NP is the holy grail.

And while you can still get lots of funding to study P=NP, no one will give you much funding for AGI.

>> No.6620188

>>6620011
Aren't we already doing #7?

>> No.6620551

>>6620188
>Aren't we already doing #7?

Not really. We're not using AI for that.

>> No.6620623

>>6620137
I'd say P=NP is more like the philosopher's stone, in that it's expected to magically give you desirable results, except it would wreck the concept of O notation along the way.

>> No.6620641
File: 26 KB, 450x378, drink-they-said.jpg [View same] [iqdb] [saucenao] [google]
6620641

>>6610515
He has an active project that has produced a lot of code, some of which has been used in live, real-world applications (analyzing genomes).
Some of his books are here; not just science fiction.

http://www.amazon.com/Ben-Goertzel/e/B001HCVBR6/ref=sr_tc_2_0?qid=1404184576&sr=1-2-ent

I challenge you to read his latest book and conclude he is not serious http://www.amazon.com/Engineering-General-Intelligence-Part-Cognitive-ebook/dp/B00IQ27DJ0/ref=sr_1_5?s=books&ie=UTF8&qid=1404184576&sr=1-5&keywords=ben+goertzel and http://www.amazon.com/Engineering-General-Intelligence-Part-Architecture/dp/9462390290/ref=sr_1_7?s=books&ie=UTF8&qid=1404184576&sr=1-7&keywords=ben+goertzel

I spent a few hours with him 3 years ago and he is astoundingly smart and knowledgeable across a huge range of fields. He read Gödel, Escher, Bach at about age 12, an age when most posters here were still reading comics.

>> No.6620655

>>6620641
>I challenge you to read his latest book
>he's just one of the authors
>it costs fucking $100+
>implying I could get through any more text written by him without having to kill myself every 50th page.
>it's just speculation because obviously AI haven't been built

Whatever doubtful real-world competence he has, he squanders by engaging in ceaseless bullshit speculation.

If I wanted to read a big and expensive technical book on AI-related stuff, I'd read Chris Eliasmith's "How to Build a Brain". At least that one has the Spaun model as an existing and rather impressive example of what can be produced with what the book teaches.

>> No.6620684

>>6607129
>what are vidya gaems

>> No.6620721

Will this be associated with 4chan? I believe I could help with this but I don't want my work to be associated with this website.

>> No.6620790

>>6620721
Not willing to publish as Anonymous? What an attention-whoring namefag.
Names are meaningless, discoveries are made when the right time comes, not because you were someone special.

>> No.6620817

>>6620790
A sanctimonious ideal like this could only be posted by a NEET who's never had to land a job or pay bills. I wonder how many papers you've published as "Anonymous."

I guess you could never understand, because you probably don't live in the real world. (Mom's basement isn't the real world.)

>> No.6620906

>>6620817
If your main goal is to make money, AI is not the way to go about it. And unless you pursue a career in academia, it's pointless to rely on published papers to get a job.

And having a lot of free time to invest in your project is simply a requirement for success. I wish I were a NEET, so I could give it a try. But at this point in my life, I don't have the time.

>> No.6620936

>>6616997
are you talking electronic pets that aren't dumb as a sack of wet towels?
Because I can seriously get behind that.

>> No.6620941

>>6617192
https://www.youtube.com/watch?v=utV1sdjr4PY

What think you?

>> No.6620975

http://mitrailleuse.net/2014/07/01/conscious-machines/

just saw this on /biz/ seems relevant

>> No.6620992

Well, it seems like there's one less avenue to explore.

http://www.newscientist.com/article/dn25560-sentient-robots-not-possible-if-you-do-the-maths.html

>> No.6621146

>>6620992
Do you really just take news posts at face value like that? Do some actual research: the author of IIT explicitly says in one of his papers that he sees nothing impossible about building conscious "artifacts", as he calls them. The comments in that article were made by someone who clearly didn't understand the theory.

>> No.6621199

>>6621146
You should read the whole article.
All it shows is that memories can't be built through irreversible functions or certain types of neural networks.

That's the reason I said we don't need to explore those kinds of ideas anymore. At least when it comes to memory structures.

>> No.6621208

>>6620992
>newscientist
I know the article is wrong without even reading it.
But now I've read it, and yes it's wrong.
>>6621199
>All it shows is that memories can't be built
No. It states that a certain kind of very unlikely memory formation would not be possible to imitate with classic digital hardware. And as stated in the article, almost no one believes the brain uses such a system. I'm pretty sure it's even possible to prove that it doesn't.

>> No.6621256
File: 12 KB, 380x380, what.png [View same] [iqdb] [saucenao] [google]
6621256

>>6621208
He is giving the XOR gate only as an example. What he states is that you must not lose information. In other words, no matter the structure within, the function should work both ways.

Why can't /sci/ read? Or am I replying to a prototype AI.

>> No.6621263

>>6621256

You think the brain does lossless information transfer? Memory degradation is a thing.

Plus, just because you use multiple-input, single-output "black boxes" doesn't mean you lose the original information. If that were the case, we wouldn't even have computer memory.
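
To make the reversibility argument concrete: a bare XOR gate maps four distinct input pairs onto two outputs, so the inputs can't be recovered from the output, while a gate that carries one input through alongside the XOR (the classic reversible CNOT construction) keeps input and output widths equal and is its own inverse:

```python
from itertools import product

def xor_gate(a, b):
    # Irreversible: two input bits in, one bit out.
    return a ^ b

def cnot_gate(a, b):
    # Reversible (CNOT): keeps the control bit, so input width == output width.
    return (a, a ^ b)

# XOR squeezes four distinct inputs onto two outputs: information is lost.
xor_outputs = {(a, b): xor_gate(a, b) for a, b in product([0, 1], repeat=2)}
print(len(set(xor_outputs.values())))   # 2 distinct outputs for 4 inputs

# CNOT maps four distinct inputs onto four distinct outputs, and applying
# it twice recovers the original pair: no information lost.
cnot_outputs = {(a, b): cnot_gate(a, b) for a, b in product([0, 1], repeat=2)}
print(len(set(cnot_outputs.values())))  # 4 distinct outputs for 4 inputs
print(all(cnot_gate(*cnot_gate(a, b)) == (a, b)
          for a, b in product([0, 1], repeat=2)))  # True
```

Irreversibility per se doesn't stop a system from remembering things, though: you can always widen the output or keep the inputs around, which is exactly what computer memory does.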

>> No.6621264

>>6621256
>He is giving the XOR gate only as an example.
Yes, I understand that; why do you point this out?
>What he states is that you must not lose information. In other words, no matter the structure within, the function should work both ways.
I understand what the article says; it's all bullshit.
>Why can't /sci/ read?
What are you implying I haven't read? Are you an idiot? Your very confusing post strongly suggests it.

>> No.6621611

>>6610644
>>6610644
>If I super-intelligent gameboy with a microphone to talk to it, it can be every bit as smart as HAL, but since Gameboys are not connected to the Stock Market exchange, it will only know what I tell it.
If it's so smart it will manipulate you into helping it

>> No.6621690

>>6621256
>He is giving the XOR gate only as an example

XOR gates exist in nature.

http://biology.stackexchange.com/questions/15051/natural-examples-of-xor-functions-at-the-cellular-level

>> No.6621748

>Will this be associated with 4chan? I believe I could help with this but I don't want my work to be associated with this website.

No, our project is not associated with 4chan. Honestly, I'm pretty surprised it got posted here. We are a small group of AGI enthusiasts: 43 members total, a dozen or so active.

>> No.6621775

>>6620721
>Will this be associated with 4chan? I believe I could help with this but I don't want my work to be associated with this website.

Doesn't matter where we came from. Yes, about 6 active people are from here. Anon is everywhere.

<3 /sci/

>> No.6621819

>>6621748
>We are a small group of AGI enthusiasts: 43 members total, a dozen or so active.
An AGI commitee?
Have you managed to decide on what your first line of code is supposed to be yet or are you still busy speculating and debating?

>> No.6621850

>>6621819
still figuring out what version of gpl to use

>> No.6622087

>>6621819
>An AGI committee?

Not really a committee. Just a bunch of people with different ideas about how to go forward. Basically, each person can start a project of their own and can create a discussion channel and can ask others for help.


>Have you managed to decide on what your first line of code is supposed to be yet or are you still busy speculating and debating?

There's already a lot of code on GitHub.

>> No.6622251

>>6607013
If only I could code...

>> No.6622264

>>6622251
Sounds like a personal problem.
It's incredibly easy to learn how to code.

>> No.6622266

>>6622251
Coding is the easiest part of doing this m8.

>> No.6622371

>>6622266
>>6622264
I can't find any sites to learn that aren't "Baby's first code"

>> No.6622382

>>6622371
What's your problem then?
You can just google any syntax problems you have.

>> No.6622454

>>6621850
use BSD or MIT
someone else will take your shit and legally come up with a better implementation eventually

>> No.6622458

>>6622454
>use BSD or MIT

License is zlib.

https://en.wikipedia.org/wiki/Zlib_License

It's one of the shortest and most permissive licenses available.

>> No.6623306

>>6622454
lol i was being sarcastic because this is never going to go anywhere.

>> No.6624087

>>6607263
>Scientists can memorize stuff and follow instructions but they're not good at critical thinking.
When was the last time a Liberal Arts Major produced unique innovations or insights that made them worthy of being a renowned authority on the subject of AI? What unique innovations or insights can you provide beyond regurgitating old philosophical quandaries?

>> No.6624092

>>6609880
https://archive.foolz.us/sci/thread/6607013

>> No.6625793

ehhhh

>> No.6625871

Go post this on /g/. And why no IRC?

>> No.6625917

>>6620655
"How to Build a Brain" is also expensive and mostly speculation.

Goertzel is actually running a project trying to bring together the best algorithms into a single system to perform AI.

>> No.6625922

>>6621850
or maybe a BSD style licence.

>> No.6625924

>>6625922

>>6623306

>> No.6626044

>>6625917
>How to build a brain is also expensive and mostly speculation.
It has a working model of something brain-like. I've not read it (and neither have you), so the degree of speculation is itself speculation.

What Goertzel does is the design equivalent of someone who has no savings and earns $20k a year yet still sells a book proclaiming "become a millionaire in 5 steps".