
/sci/ - Science & Math



File: 97 KB, 500x434, image.jpg
No.6818095

I've been wondering about this because the human brain isn't capable of understanding a lot of things in the world.

Very rarely do we spawn humans who can greatly advance a field of science.

So could we design an AI that researches on its own? This would be a huge project for humans and would take a ridiculous amount of time.


Think though:

Humans: Uh, we realized that pi is irrational and can't actually be written out exactly.

Computers: Pi is 3.141592653589793238462643383279... Keeps going as far as necessary.

Humans: uh, the strong force appears to be strong

Computers: If we combine hydrogen atoms with these exact conditions then exactly this will happen.
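[Editor's aside: OP's "computers just keep going" line can be made concrete. Below is a minimal sketch of computing pi to arbitrary precision with plain integer arithmetic via Machin's formula; the function names are my own, and the guard-digit count is an arbitrary choice.]

```python
def arctan_inv(x, digits):
    """arctan(1/x) scaled by 10**(digits + 10), via the Taylor series
    arctan(1/x) = 1/x - 1/(3*x^3) + 1/(5*x^5) - ... (10 guard digits)."""
    term = 10 ** (digits + 10) // x
    total, n = term, 1
    while term:
        term //= x * x
        n += 2
        # terms alternate in sign: -, +, -, ...
        total += -(term // n) if (n // 2) % 2 else term // n
    return total

def pi_digits(digits):
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    s = str(pi // 10 ** 10)  # drop the guard digits
    return s[0] + "." + s[1:]

print(pi_digits(40))
```

The last digit or two can be off because of truncation, which is exactly why real arbitrary-precision libraries carry guard digits and round at the end.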

>> No.6818110

You know Star Trek: TNG?
It's set in a world centuries more advanced than we are now, and Data is still the peak of invention.

Considering our top AI can't even get out of a crater on Mars, an AI that can learn and progress is not even something we should consider.

>> No.6818131

>>6818095
I'm sure they could. I remember reading an article about an AI which, given a system, deduced conserved quantities. The system was complicated too (a double pendulum), and it derived conservation of momentum and conservation of energy.

In the next ten years I'm pretty sure AI will be used for diagnosing medical conditions. IBM's Watson was fed a few encyclopedias' worth of general knowledge and beat the Jeopardy champions; I'm sure slight alterations and training on medical textbooks/journals would allow fast, reliable diagnoses beating the top medical professionals.
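[Editor's aside: the conserved-quantity result (likely Schmidt and Lipson's 2009 work) evolved expression trees with genetic programming. This toy sketch only captures the scoring idea: simulate a simple pendulum and rank hand-picked candidate formulas by how constant they stay along the trajectory. The candidate set and constants are invented for illustration.]

```python
import math

def simulate_pendulum(theta0=1.0, omega0=0.0, dt=1e-3, steps=5000):
    """Integrate theta'' = -sin(theta) (unit constants) with semi-implicit
    Euler, which keeps energy nearly constant over this short run."""
    theta, omega, traj = theta0, omega0, []
    for _ in range(steps):
        omega -= math.sin(theta) * dt
        theta += omega * dt
        traj.append((theta, omega))
    return traj

# Candidate "laws" -- a stand-in for the expression trees a real
# symbolic-regression system would evolve rather than enumerate.
CANDIDATES = {
    "theta": lambda th, om: th,
    "omega^2": lambda th, om: om * om,
    "theta + omega": lambda th, om: th + om,
    "omega^2/2 - cos(theta)": lambda th, om: om * om / 2 - math.cos(th),
}

def most_conserved(traj):
    """A true conserved quantity has near-zero variance along the trajectory."""
    def variance(f):
        vals = [f(th, om) for th, om in traj]
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals) / len(vals)
    return min(CANDIDATES, key=lambda name: variance(CANDIDATES[name]))

print(most_conserved(simulate_pendulum()))  # the energy-like candidate wins
```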

>> No.6818149

>>6818131
>>6818110
I think what OP is asking is whether it can think for itself. And I strongly and wholeheartedly believe: no. We can make computers "smart" enough to figure out problems we lay out for them, without having to do as much work ourselves, but can they just start to imagine new and incredible things like the human mind? Never. (Although never say never.)

>> No.6818156

>>6818110
>our top AI
You mean that remote controlled robot? That's not even an AI.

>> No.6818160

>>6818156
It has AI

>> No.6818164

>>6818149
I bet you haven't even studied machine intelligence. Stop talking out your ass on subjects you know nothing about.

>> No.6818169

>>6818160
If you consider what we have in video games to be AI, then yes, it has 'AI'.

But seriously, it's not an actual AI, just some very basic automated control systems.

>> No.6818187

>>6818149
>Never
Well that's pretty pessimistic.

If we want to prove that a machine could have property X that humans have we can reformulate the question as follows:

Premise 1) The universe runs using rules
Premise 2) The human mind is completely embedded within our universe

If we could simulate the universe's rules perfectly we could model a human brain perfectly, hence have an AI with human qualities.

At the current time we cannot perfectly simulate our universe. As Feynman said, roughly: "It takes an infinite amount of time to calculate what's going on in an infinitesimal amount of space-time." Advances in physics could solve this, but I doubt it.

So the question of whether a machine can experience emotions, have consciousness (whatever that means) and have imagination in the same way humans do reduces to the question of:

Can we simplify the universe's rules, while keeping the brain processes working correctly (equivalently), to the point where it no longer takes an infinite amount of time to simulate?

So which components of our universe's rules can we replace with simplified models, turning the simulation of a brain inside the universe into something computable?

I think it's safe to say we can ignore gravitational interaction as an active process for the brain model, and similarly the strong and weak forces. So we could downsize our model considerably. I believe the quantum electrodynamic effects within the brain could also be simplified to something similar to a circuit; I don't think such simplification would remove any of the brain's human properties. There was a TED talk about research where they simulated millions of neurons in a computer and it actually produced primitive thoughts.

Anyway food for thought.

>> No.6818544

>>6818187

>food for thought.

Hah nice pun

>> No.6818585

>>6818187
Jesus Christ, your premises are atrocious. Don't create such awful assumptions to try to force an implication; start with "if we can make a perfect model of the brain..."

Besides, AI is not about modeling the brain. If you model a brain you obviously have human intelligence, but that's not the best or only way to go; we're not restricted to copying, we can build.

Most AI research I've seen is directed very far from neuroscience. Bayesian statistics, pattern recognition and machine learning, etc.
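[Editor's aside: to make "Bayesian statistics, pattern recognition" concrete, here is a tiny Bernoulli naive Bayes classifier with Laplace smoothing. The symptom dataset and class names are entirely made up; this illustrates the technique, not any real diagnostic system.]

```python
import math
from collections import defaultdict

class NaiveBayes:
    """Bernoulli naive Bayes over sets of present features, with
    add-one (Laplace) smoothing so no probability is ever 0 or 1."""

    def fit(self, examples):
        self.class_counts = defaultdict(int)
        self.feature_counts = defaultdict(lambda: defaultdict(int))
        self.features = set()
        for feats, label in examples:
            self.class_counts[label] += 1
            for f in feats:
                self.feature_counts[label][f] += 1
                self.features.add(f)
        self.total = len(examples)
        return self

    def predict(self, feats):
        def log_posterior(label):
            lp = math.log(self.class_counts[label] / self.total)  # prior
            for f in self.features:
                # smoothed P(feature present | class)
                p = (self.feature_counts[label][f] + 1) / (self.class_counts[label] + 2)
                lp += math.log(p if f in feats else 1 - p)
            return lp
        return max(self.class_counts, key=log_posterior)

# Hypothetical toy data: observed symptoms -> diagnosis.
data = [({"fever", "cough"}, "flu"), ({"fever", "rash"}, "measles"),
        ({"cough"}, "cold"), ({"fever", "cough", "ache"}, "flu")]
model = NaiveBayes().fit(data)
print(model.predict({"fever", "cough"}))  # -> "flu"
```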

>> No.6818595

>>6818585
If you don't add those assumptions, people just say that you obviously can't make a perfect model of the brain, usually because of souls, quantum effects, or occasionally uncomputability.

>> No.6818604

>>6818595
The assumptions are incredibly hard to argue for. I have NO idea how you're going to argue for the universe running on rules, the mind running on rules, the mind's rules being observable to us, and our ability to mimic them.

Assume directly that we can figure out the brain appealing to our model of neurochemistry.

>> No.6818627

>>6818110
So much bullshit in one post; you really are stupid.
Star Trek is fiction, more distant from reality than Harry Potter or even LotR. Using it as an example is idiotic.
The Mars rover is not our top AI. Our top AIs have already replaced bankers in buying and selling stocks, they are better at diagnostics than many doctors, and they find connections and make predictions no human ever could. The advantage of AIs is that they can process data trillions of times faster than humans. The current major issue is that we humans have different, badly understood hardware and decades of learning in an environment designed for us. The problem is how you teach an AI, for example, to recognize various objects and usefully process the data. This is a very complex matter, but there is consistent progress in AI research and AIs surpass humans in more and more areas.

>>6818095
>Humans: Uh, we realized that pi is irrational and can't actually be written out exactly.
>
>Computers: Pi is 3.141592653589793238462643383279... Keeps going as far as necessary.
>
>Humans: uh, the strong force appears to be strong
>
>Computers: If we combine hydrogen atoms with these exact conditions then exactly this will happen.
Are you brain damaged? You at least have no clue what you're talking about.

It is difficult to predict how long it will take until AIs can research completely independently. It depends on the kind of research: for simple manual experiments we already have the tech; for complex research it will take a couple of decades at best, and for various reasons we may never reach the tech at worst.

But to answer your question in one word: yes.

>> No.6818635

>>6818585
>>6818604
This guy knows what he's talking about no matter how badly he expresses it. Listen to him.

>> No.6818640

>>6818627
13-year-old detected.

My comparison was showing how complex 'progressive' AI is.

>lists off examples of AI that extrapolates data or works as an encyclopedia
lol

That's nowhere near AI that can replace a human.

>> No.6818644

>>6818110
>our top AI
>NASA rover

the only thing top is lel

>> No.6818648

>>6818635
Do I really express myself badly? That might explain a lot. Any pointers or anything specific?

>> No.6818652

Lesswrong is a nice place for a layman to begin understanding machine intelligence. Google it if you are interested in this stuff.

>> No.6818656

>>6818652
>lesswrong
>>>/x/

>> No.6818664

>>6818656
It's fine if you disagree with the views of the most prominent posters; it's still a good place to learn about fields related to intelligence.

>> No.6818684

>>6818648
>Do I really express myself badly?
I was exaggerating; I understand you perfectly.

Your posts are information-dense, which makes them difficult to understand for those who lack the education. However, I like your style and might try to emulate it. Keep it up. The only problem is that people often ignore points if you don't stress them enough.

>> No.6818704

>>6818640
Your posts are full of fallacies, misinformation, insults and other idiocy. You literally don't provide a single factual or logical argument in either of your posts.
>That's nowhere near AI that can replace a human.
I gave you examples of areas where AIs already have replaced humans and you in your ignorance still wrote this. You're denser than the average /b/tard.

>> No.6818709

>>6818684
I've really felt that way sometimes. People are used to words being fillers, and in my posts I intend to make every word precise. It does result in people overlooking a lot of it much of the time, but eh. As you said, people who understand will catch it, and I hope people who don't will feel oddly about it and ask.

>> No.6818722

>>6818656
I am very happy that this was the first reaction to lesswrong. It's not something I'm proud to admit, but I used to occasionally read a bit of fanfiction, and that's where I was introduced to it.
Those lesswrong followers believe that the singularity is imminent (before 2040) and that the singularity AI, to punish everyone who did not devote all their resources to reaching the singularity faster for quicker positive utility, will simulate infinite parallel universes for each of those people and torture them for all eternity, creating infinite negative utility for them, which somehow hurts them now. To save the average person from this infinite negative utility, and as a means of getting people to devote all their personal resources to achieving the singularity, their brilliant idea is to spread their bullshit by writing fanfiction, I shit you not.
Their literature is terrible: constant bashing of how stupid everyone is and how brilliant the protagonist or the singularity is.

>> No.6818731

>>6818704
An AI that can watch stocks and predict using trends is nothing.

OP is talking about an AI that can come up with new scientific theories, or solve unsolved maths problems.

>> No.6818745

>>6818722
Your depiction of lesswrong is narrowly focused and exaggerated, but I suspect you know that already. As with most communities, it has a hivemind as well as users with conflicting opinions. It's a useful, casual, introductory resource into fields related to intelligence; there is no reason to deny it that status.

>> No.6819192

Yes. We just need to make the AI a little more semantically oriented, if you will, and a lot more broadly scoped than the ones we've seen in the past. There have been some attempts in this direction recently in the AGI research community (see Goertzel's OpenCog project, for example).
Anyway, answering your question more specifically, there are many limitations to human intelligence as it is now. One is the relatively small number of "pieces of information" any one person can think about at the same time. Of course, in the past we've dealt with that problem through external media, knowledge organization, writing and all of that, but it's still a big impasse for our intelligence, especially in modern research fields where the scope of knowledge considered is ever increasing and more and more encompassing.
Another limitation is speed, of course.
And ultimately the biggest limitation is the actual cognitive algorithms used for thinking, which I see no reason couldn't be refined, polished, optimized or even qualitatively improved altogether. This last point is currently the only advantage human intelligence has over artificial intelligence, but that's going to change.
As I see it, the intelligence ladder extends way beyond the human level. I mean, why should it be otherwise?

>> No.6819317

>>6818585
You've completely fallen short of understanding my argument. The first two premises just say that the universe models the brain. Clearly no one would approach AI research by simplifying the Standard Model of particle physics. I'm just rephrasing the question "can a human-like AI exist?" as "can we simplify the universe's interactions within the brain to the point of computability while keeping the relevant brain interactions intact?" The answer seems to be yes, given that complex, seemingly uncomputable processes can often be replaced by computable ones.

>>6818604
>NO idea how you're gonna argue for the universe running on rules

Clearly this isn't a watertight argument; neither pure deduction nor induction can prove anything about the physical world. But it's fair to say that everything we've discovered in the universe so far has simple rules governing it, so it's reasonable to say that the universe runs on rules. We may not know the details of the human brain, but everything we do know in detail runs via a mechanism using these rules. If you don't think the mind is completely run by the brain you're straight up retarded.

Go back to /lit/ now.

>> No.6819340
File: 2.54 MB, 1280x720, .webm

>>6818745
>I suspect you know that already
Indeed, it was just mockery. Thank you for suspecting this instead of treating me like an idiot.
>It's a useful, casual, introductory resource into fields related to intelligence; there is no reason to deny it that status.
Yes, there is. Every single lesswrong follower I have discussed with so far has had ridiculous beliefs and zero evidence to back up any of their claims, and has ignored my scientific arguments and insulted me instead. It is misleading misinformation, not an appropriate introduction to the highly complex area of AI. If you are willing to invest that much time in it, you might as well read books and take some courses.

Maybe I just had bad luck and met the wrong people. I am sure not everyone in the lesswrong community is an ignoramus who claims that his baseless beliefs are rational and everything else is not. But honestly, the entire thing just seems silly to me. Trust me, I understand their position, as I too once believed in an AI-based singularity, but I grew out of it in my early teens.

>> No.6819415

>>6819317
This comment is defensive and hollow. You're not adding anything, and now you're using induction on models to argue that you can prove things about the universe.

You said arguing that the brain can be modelled directly by appealing to our scientific models is dumb and no one will agree, but now your argument is "if you don't agree you're retarded".

I'm appalled.

>> No.6819503

>>6819415
>I'm appalled
https://www.youtube.com/watch?v=DksSPZTZES0

If you don't agree with the following facts:
1) The universe is completely defined by rules
2) Our mind is completely the product of our brain, which follows these rules
Then you don't belong on /sci/; you belong on /lit/ or /x/.

All I'm adding to these two premises is that if we could simplify the universe's rules to something computable while keeping the brain functions intact (at least isomorphically), then we would have an AI with all the features of the human brain. Given our current understanding of the brain this seems reasonable. I'm not saying that this is the right way to construct an AI; I'm saying that the possibility of doing so is sufficient for a human-like AI. How the fuck do you manage to misunderstand such a simple argument? All I'm doing is turning a retarded philosophical question which would never be answered into a scientific question. http://youtu.be/6Waurx8e-1o

>You said arguing that the brain can be modelled directly by appealing to our scientific models is dumb and no one will agree
What the fuck are you on about? Stop posting.

>> No.6819513

>>6819340
I knew one LessWronger IRL who really gave me a positive opinion of the userbase. I provided a plausible scenario under which an intelligence explosion might not be possible and they actually changed their beliefs in response to new evidence.

So I have solid evidence that at least one person actually learned rationality from LessWrong.

>> No.6819524

>>6819503
I'm telling you to drop the philosophical side and just use the scientific one you're already using:

>Given our current understanding of the brain this seems reasonable

>> No.6819533

>>6818095
So why exactly are we trying to become mathematicians, scientists, etc.? It's all meaningless if an AI is going to do our jobs for us and be multitudes better at them. What an AI does in a few years, we won't be able to understand in millions of years.

>> No.6819537

>>6819533
AIs do things in logical steps.
We will be able to understand anything an AI does because we create the logic and relationships possible inside those steps.

>> No.6819552

>>6819537
Isn't that only valid for current AI? Not sure if it's possible, but what about a self-learning AI? In a few years computers could be better than a human brain, and in a few decades better than all the brains combined. Just think what a sufficiently advanced AI could do with such computational power.

>> No.6819559

>>6819552
Sure it is. The way the self-learning works is logical and programmatic anyway. It can be procedural or logic-based or functional, but it's all logical and trivial to reproduce. You will have the steps which enabled the AI to mimic these processes, and you will be able to see how it does it.

An AI can't magically start doing things such that we don't know how they work. We program them. We program the ways in which they associate knowledge and generate rules, and the way in which they learn. This is why it's hard.
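[Editor's aside: the poster's inspectability claim is easiest to see with the simplest possible learner. In a classic perceptron, every weight update is an explicit step you could log, and the final weights are directly readable. Toy example learning the AND function; all names and constants are my own.]

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge the weights by the prediction error.
    Each update is an explicit, loggable step -- the learned behavior
    is fully traceable back to the training examples."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:  # y in {0, 1}
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn AND from examples; the final (w, b) are human-readable.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print([predict(x) for x, _ in and_data])  # matches the AND truth table
```

Whether this transparency survives at the scale of systems with millions of learned parameters is the part the next poster disputes.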

>> No.6819564

>>6819340
Why not just butter two pieces of bread then put them together?

>> No.6819568

>>6819559
Okay, I will take an example from Isaac Asimov's "I, Robot" to show you what I mean. If you've read it then you know how this goes. They asked an AI if interstellar travel was possible, and it said yes (with some convincing from our heroine). It told them what materials to get, how to construct the ship, etc. They did that, and when they got in there were no controls. The AI then started the ship and sent it to another system or something. The humans had no idea how it worked; maybe they figured it out eventually, I don't remember.

My point is that they knew how they had programmed the AI, but they had no clue what solution the AI had calculated, since those calculations were way beyond their capabilities. My question is: will such a thing be possible, and if so, will we be basically useless? (Provided we don't merge with the AI, or whatever those technology singularitarians predict.)

>> No.6819573

>>6819524
>Given our current understanding of the brain this seems reasonable
>But we don't know about the soul? How are we going to simulate that in a computer?
>The brain could be way too complex to ever be able to model
>There are so many gaps in our understanding of the brain we can't know that

There is a difference between saying "we could construct a human-like brain from scratch" and "in principle, if a perfect model of the brain simulated by the universe exists, then it's reasonable to say it can be simplified". The rebuttals to each will be different.

>> No.6819593

>>6819573
I never disagreed. If you look at my first post:

>Assume directly that we can figure out the brain appealing to our model of neurochemistry.

You use the fact that we have models that mimic observable reality and that they work to argue, and then it's reasonable.

Jumping from philosophical assertions of universal determinism and mind determinism, assuming as well our ability to grasp universal rules and then simulate those rules, all to build down to something much weaker, namely creating a scientific model, is nonsense.