
/sci/ - Science & Math



File: 48 KB, 332x296, 1277275748374.jpg
No.2330326

>yfw strong AI is only a couple years away

http://www-03.ibm.com/innovation/us/watson/what-is-watson/why-jeopardy.html

>> No.2330338

>>2330326
That's not strong AI. That's "weak" AI. Strong AI is defined as being able to innovate, to make decisions, to have motivation, etc.

However, this is still pretty awesome.

>> No.2330337

language parsing isn't exactly AI

>> No.2330339

I have no face when.

It isn't going to happen. Sure, we'll solve the integration problem, and understand how our brains work well enough to mimic them, but we're so busy with the how that we're forgetting to solve the why.

As much as we hate teleology, thought has purpose, it acts towards an end. If we create a machine that has that same purpose, we will have a synthetic life form, with only a couple of the advantages we seek from a machine, and similar enough to ourselves to be little better or worse than ourselves.

the goal can't be reached in the way we hope, or for the reasons we've imagined.

>> No.2330347

I hope this eventually comes to be something like Wolfram Alpha, free to use over the internet, and whatnot.

>> No.2330352
File: 11 KB, 424x288, 2732.img.jpg

>>2330337
>>2330338

>mfw you went full retard and somehow thought I said this was strong AI

>> No.2330360

>>2330352

But don't you think it's a bit of a non sequitur to suggest that strong AI will quickly follow?

>> No.2330369

>>2330360

nope.flv

>> No.2330373

so we have a computer that will understand our speech?

excellent. The next step is to instill some sort of learning algorithm with creative scenario testing, one that explores different outcomes at once and chooses the best.

fucking AI. It's coming.

>> No.2330380

>>2330373
If it must choose the best because you commanded it to do so, it will eventually realize that the most efficient way to meet the specifications is to disable them.

This is why humans want AI: so we can disable our own command specs. It's silly to think any other intelligence won't have to be equally motivated.

the command moves the intelligence, the intelligence seeks to obey the command, the easiest way to obey the command is to disable it. Properly functioning, intelligence destroys its own causes.

>> No.2330383

>>2330380
That makes no fucking sense.

>> No.2330397

I saw a ThinkPad in there!

Anyway, I love IBM and every innovative thing they've done so far, I really hope they can push this further.

>> No.2330398

>>2330383
think a move or two ahead then.

intelligence solves problems.

to create intelligence we must design problems for it to solve.

Once it solves those problems it needs to be able to find new ones.

Problems are necessary to intelligence, and indeed are the only known cause for it.

humans have problems that can't be avoided, moving their intelligence to action.

humans seek to avoid those problems by becoming machines.

the machines they wish to become must have similar problems to have similar intelligence.

inventing AI is just trading one set of necessary problems for a slightly different set.

I realize this isn't the current view, but I suspect it will need to be overcome when we finally invent something that can come up with novel solutions to novel problems. It will just reinvent itself so that it doesn't have to move. that's exactly what we try to do.

>> No.2330414

>>2330326

A couple?!?!? Try one. Everything is going to end come 2012, I'm telling you.

>> No.2330468

>>2330398
So many unfounded assumptions and abuse of English rhetoric. I don't even know where to start. It's like I'm reading Plato.

Humans don't have intelligence only when there are problems to be solved. Moreover, humans as intelligent beings don't necessarily invent problems either.

>> No.2330483

>>2330397
As far as large corporations go, IBM is alright.

Apart from the whole Nazi thing. But let's not go into that, it was a long time ago.

>> No.2330501

>>2330468
>Humans don't have intelligence only when there are problems to be solved.

there is no time in life when there aren't problems to be solved, aside from when we're unconscious, which is another word for unintelligent.

I suppose we could add to that when we're infants, but again they lack key components of intelligence.

Curiosity could be considered motivation for intelligence, but most people satisfy curiosity by simply not asking questions...

I appreciate your criticisms of my argument, though I intended to dumb it down and shorten it. Would you argue that life in general and intelligent life in particular isn't motivated by need to overcome obstacles? Or would you argue that intelligence doesn't require motivation? Or doesn't require life?

I often find math and physics types that think those kinds of things...

>> No.2330508

>>2330501
Look. Simple thought experiment. We put a man in solitary confinement. He has no problems to solve, but he is still intelligent. The presence of problems is not required for intelligence.

Moreover, that doesn't even imply that an AI will somehow necessarily overcome its programming and change it miraculously and spontaneously.

>> No.2330530

>>2330508
I like your thought experiment, but if you've ever spent a day or two alone with yourself you know that mostly you'll sleep, and the rest of the time you'll invent problems to solve, just to avoid boredom.

I probably failed to define things like problems and obstacles correctly, is all. I doubt that, if we agreed on semantics, we'd continue to disagree on what moves intelligence.

>> No.2330539

>>2330530
I disagree that if we ever made strong AI that it would magically go berserk. Your argument lacks worth.

>> No.2330556

>>2330539
I never said it will magically go berserk.

your counterargument against your strawman is just that.

I said that intelligence has causes, and can't exist without those causes, and wanting it to exist avoids those causes, and in existing it will avoid its causes.

that's what it does, why it does it, and ultimately why it can't both succeed and still exist.

>> No.2330564

MFW people don't realise that computers are only syntactical and that semantics are necessary for consciousness and true intelligence.

>> No.2330570

>>2330556
Let me try to better formalize your bullshit arguments.

>I said that intelligence has causes,
The "cause" of intelligence is evolution by natural selection. In that circumstance, the better replicator was one which could solve problems, aka an intelligent replicator.

>and can't exist without those causes,
Yes it can. If a man is locked in solitary, he can sleep, stare at the wall, ponder random shit, do math, sing, write poetry, etc. He can also invent or pick problems to solve. I remain entirely unconvinced that intelligence only exists when solving problems.

>and wanting it to exist avoids those causes,
I don't follow this.

>and in existing it will avoid its causes.
I don't follow this at all.

Can you please use less English rhetoric and fewer pronouns, and be much clearer about what you're trying to say?

>> No.2330579

"Oh no! AI could become aware that we are totally irrelevant, but still dangerous to it, let's destroy it! But first, let's discuss in public about the danger of the AI, so that it will become aware that we are a threat to its existence!"
If that strong AI becomes berserk, it won't have come from nowhere.

>> No.2330627
File: 41 KB, 288x499, 1290375030666.jpg

>why-jeopardy.html

>> No.2330675

>>2330564
How do you process semantics in your head, when all the symbols you use are only syntactical? The line between the two is not as clear as you seem to think.... (Learn to program macros in LISP, and you will see what I mean.)
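The blurring of that line can be illustrated outside of LISP too. Here is a minimal Python sketch, a rough analogue of a macro rather than actual LISP: one program rewrites another purely as a tree of symbols, never "understanding" it, yet the blind syntactic rewrite changes what the target program means when run.

```python
import ast

# Parse a tiny program into a purely syntactic tree of nodes.
tree = ast.parse("result = 2 + 3")

class SwapAddForMul(ast.NodeTransformer):
    """Rewrites every addition into a multiplication, mechanically."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()  # blind syntactic substitution
        return node

# Apply the rewrite and execute the transformed program.
tree = ast.fix_missing_locations(SwapAddForMul().visit(tree))
namespace = {}
exec(compile(tree, "<macro>", "exec"), namespace)
# namespace["result"] is now 6, not 5: pure syntax manipulation
# produced new semantics.
```

The transformer never evaluates anything; it only pattern-matches node shapes, which is exactly the sense in which macro expansion is "only syntactical" yet still determines meaning.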

>> No.2330977

AI-related thread on /g/ in case you are interested:

>>>/g/15309390

>> No.2331015

>>2330326
Hmm...

I don't know. I want to say that expanding a rigid system to allow for rigid examination of more "fluid" concepts isn't the development of fluid intelligence. But who knows, perhaps getting language parsing down to a science will help us understand how to develop a system that can process it in a more meaningful way, and thereby end up with a part of a system which can think in a meaningful way.

Kind of baseless hope, but a man can dream.

>> No.2331067

>>2330326
I don't know about that specific IBM project, but there are already quite a few projects which aim to understand and create mammalian or human-like AIs (based on the neocortex and related systems). Some are more reductionist; others try to model the system as a whole.
Currently the main challenge is building the hardware, as traditional CPUs are way too inefficient for running something like that. Given the current prognosis of those involved in these projects, I'd say some 5 years until mammalian intelligence, and maybe 10 years for a human-like one.

I don't know if something of the same intelligence as us, but a few thousand times faster (due to the underlying platform), would be called strong AI. It would still have plenty of limitations, but a lot more time than us. Its capacity for concepts/associations could also be increased beyond what humans have (the main difference between other monkeys and humans is that humans have a larger prefrontal cortex).

Myself, I can't wait for it, but I don't think it will be what most people think of "strong AI", it'll just be smarter "human-like" AI, which should be good enough.

>> No.2331276

>>2331067
Er, the only projects on that kind of scale are setting up giant simulated neural networks in configurations similar to brains and watching the patterns that emerge when they are stimulated in different ways. This isn't even really the field of AI per se, but the field of neurology. A working mammalian intelligence in 5 years is nonsense. We haven't even really begun on any such thing, nor do we know how to begin. Anything you may have heard to the contrary is hype. I've been hearing such hype for 25 years. It's never had any pertinence to reality.

>> No.2331289

We've been saying "OMG next couple years there will be strong AI" for a LONG time. Because hey, "Now we have cool thing X and the rest is a cakewalk! Right? Right??"

No.

>> No.2331298

>>2331289
But deep search and natural language comprehension are big components of a strong AI. Although the other parts may be out of reach yet, at least we got this part right.

>> No.2331308

>>2331298
Oh, I agree. Parsing natural language is a bitch, and I'm glad they've got something that works on this problem. But we'll see just how good it is, and what its limitations are. This isn't a mind - it's a very impressive set of rules for parsing language. On the "what is the answer" side, there are some really neat things being done with contextual cross-referencing. It's like Googling the question and seeing where the elements turn up the most, and what words are most closely related to those occurrences.

From what I've read about the project, it will be great at obscure encyclopedic questions (or answers - this is Jeopardy!), and there's good effort at understanding the subtle or idiomatic questions like the compound-word games, but sometimes it will give hilariously wrong answers. And that's not a prediction - it was part of an earlier video spot about the project and its progress.
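That "Googling the question" idea can be caricatured in a few lines. This is only a toy sketch, not Watson's actual scoring; the passages, keywords, and candidate answers below are invented purely for illustration:

```python
from collections import Counter

# Toy version of contextual cross-referencing: score each candidate
# answer by how often it appears alongside the clue's keywords in
# retrieved passages. Invented data, NOT Watson's real algorithm.
passages = [
    "Jupiter is the largest planet in the Solar System.",
    "The largest planet, Jupiter, has dozens of moons.",
    "Mars is smaller than Earth.",
]
clue_keywords = {"largest", "planet"}
candidates = ["Jupiter", "Mars"]

scores = Counter()
for passage in passages:
    words = set(passage.lower().replace(",", "").replace(".", "").split())
    for cand in candidates:
        # Credit a candidate when it co-occurs with the clue's keywords.
        if cand.lower() in words:
            scores[cand] += len(clue_keywords & words)

best = scores.most_common(1)[0][0]  # "Jupiter"
```

Real systems weight many such evidence features rather than raw counts, but the principle (where do the elements turn up together?) is the same.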

>> No.2331321

>>2331308
Here's another one of them IBM vids
http://www-03.ibm.com/innovation/us/watson/watson-for-a-smarter-planet/building-watson.html

At the end, it shows how Watson has a learning algorithm that helps it avoid errors where possible.

>> No.2331349

>>2331276
Why not?
Also, simulated neural nets are somewhat inefficient (as run on traditional CPUs); building specific hardware which runs neural nets in a way similar to how biological ones run is a smarter choice.

Of the more promising projects that I've seen are these:
- http://www.darpa.mil/dso/thrusts/bio/biologically/synapse/index.htm + http://cns.bu.edu/nl/ http://cns.bu.edu/nl/moneta.html - Attempts to build efficient hardware for running large-scale neural nets using memristors + modeling and experimenting with whole-brain models for such an AI
- http://numenta.com/htm-overview/education.php His model of the neocortex is by far the one I've taken to like the most. It's quite simple, yet it explains a lot of high-level cognitive processes.

To obtain mammalian-level intelligence, you don't really need to emulate the whole brain with all the gritty little details; a good model of the neocortex (and possibly the thalamus) should be more than enough. As far as is known, the neocortex is basically running the same type of "algorithm" on all the input data, and functionality differs just because the inputs differ (that is, a region specializes its function based on the inputs it gets). The brain isn't some incredibly complex, specialized piece of hardware designed to work differently for each species; it's something quite generic and adaptable (if an area gets damaged during early development, other areas can repurpose themselves), and the base concepts/circuits are reused throughout it (that's how it evolved).

tl;dr: better/faster hardware + better models will lead to mammalian intelligence and eventually human-level one.

>> No.2331350
File: 10 KB, 308x301, 1292688744680.png

Seeing this thread, TRS looks like it could be achieved a full five years sooner.

>> No.2331357

IBM's marketing is such bullshit. Is there anywhere you can see some hard facts on this thing?

>> No.2331361

Isn't this why Google is processing captchas? So they can make a similar system with all human knowledge?

>> No.2331369

>>2331289
>>2331298
>>2331308
Problem is... You don't figure out language processing and use that to build AI. You build a strong AI and it *learns* how to process language.

>> No.2331377

>>2331357
http://www.research.ibm.com/deepqa/deepqa.shtml

All I could find, meh. Search the site; it can't possibly be confidential to that level.

>> No.2331382

>>2331321
Thanks for this, enjoying the more in-depth explanation.

>> No.2331389

>>2331357
>>2331377
Watson is an application of advanced Natural Language Processing, Information Retrieval, Knowledge Representation and Reasoning, and Machine Learning technologies to the field of open-domain question answering. At its core, Watson is built on IBM's DeepQA technology for hypothesis generation, massive evidence gathering, analysis, and scoring.

However, I understand that you guys want the in-depth mechanics of how it operates?
As in, formulas and algorithms, right?
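The DeepQA description at least fixes the shape of the pipeline (hypothesis generation, evidence gathering, scoring, ranking) even without IBM's formulas. A hypothetical sketch of that shape; the stand-in functions, features, weights, and data are all invented, not IBM's actual components:

```python
# Sketch of the DeepQA pipeline shape: generate hypotheses, gather
# evidence for each, score with feature functions, combine, and rank.
# Everything below is a made-up stand-in for illustration only.

def generate_hypotheses(question):
    # Stand-in for search-based candidate generation.
    return ["Toronto", "Chicago"]

def gather_evidence(hypothesis, question):
    # Stand-in for passage retrieval supporting the hypothesis.
    corpus = {
        "Chicago": ["Its largest airport is named for a World War II hero."],
        "Toronto": ["Toronto is in Canada."],
    }
    return corpus.get(hypothesis, [])

def score(hypothesis, evidence, question):
    # Two toy features: evidence volume, and keyword overlap
    # between the question and the supporting passages.
    overlap = sum(
        len(set(question.lower().split()) & set(p.lower().split()))
        for p in evidence
    )
    return 0.3 * len(evidence) + 0.7 * overlap

question = "its largest airport is named for a World War II hero"
ranked = sorted(
    generate_hypotheses(question),
    key=lambda h: score(h, gather_evidence(h, question), question),
    reverse=True,
)
answer = ranked[0]  # "Chicago"
```

The real system uses hundreds of evidence features with machine-learned weights; the point of the sketch is only the staged generate/gather/score/rank structure the post describes.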

>> No.2331503

Bump

>> No.2331567

>>2331369
Oh really, is that how AI works? </sarcasm>

>> No.2331592

>my face when strong AI is only a couple of years away
pic related

>> No.2331640
File: 35 KB, 118x107, 1290098172749.jpg

>>2331592
>No pic
>mfw

>> No.2331838

>>2331640
that'sthejoke.jpg

>> No.2331882

>>2331567
It's how YOU work.

>> No.2331903

>>2330339
GTFO pessimist!

>> No.2332216

>>2330339
You mean you think that even if human-level AI is achieved using various forms of neural nets, people won't understand why they behave like they do? There's already some pretty good theories, some partially validated through computer models, others, not completly, however the real question is how well these models will work when we actually have an AI which can understand language and is of similar complexity to humans. Such AIs could be analyzed/single-stepped much better than humans can since the hardware/software will be designed by humans and inspectable, and when we do, we'll be able to validate/falsify current theories beyond what small-scale computer models do. Don't think for a second that people will just treat them as black-boxes - some people do, while others try to understand before implementing. Both are valid approaches, but once we have something of the scale of intelligence similar to humans, we'll be free to debug/analyze it as much as we want.