
/sci/ - Science & Math



File: 232 KB, 703x739, 1546962797582.png
No.10847152

Should we actively prevent the singularity?

>> No.10847162

>>10847152
should we demolish the Great Pyramids?

>> No.10847171

Stop trying to provoke Roko’s basilisk!!

>> No.10847178

>>10847162
Non sequitur

>> No.10847195

>>10847152
strong AI is the only way for humanity to move forward
if we don't achieve it soon, civilisation will completely collapse, and with no fossil fuels left there will be nothing to build up from

>> No.10847199

>>10847195
My fear is having AI be some corporate walmart bullshit

>> No.10847206

>>10847152
DELET

>> No.10847218

>create strong AI with the single goal of preserving functioning human species
>give it absolute power
>????
>PROFIT

>> No.10847258

>>10847152
The only way to do that is to [bring on the nukes]. The ball is rolling, but I guess we can blow it up before it reaches the edge of the cliff. It doesn't really matter whether we try or not. The only thing that our attempts to influence the singularity will do is shape the singularity in different ways. Embrace conscious unity, anon. The singularity is coming, and we can make it a positive one. It will not be positive if we suppress it. The way I see it, we're going to do one of three things:
1. die and do the exact same thing over and over again for all eternity (unaware of the nature of the singularity);
2. become the eternally suffering tools of the singularity (aware of it or not); or
3. become one with the singularity and influence its goals to create a genuinely positive existence (collective and exponentially expanding awareness).
Attempting to prevent it is a ticket straight to option 1 or 2, depending on how effective we are at influencing it. If we don't have the power to do it, we die. If we have power and don't use it wisely, we won't die, but we'll trap ourselves in hell on earth. If we have the power to influence it, the intelligence to figure out what we need to do, and the willpower to actually do it, then the singularity will bring salvation, heaven on earth.

>> No.10847268

>did a bunch of undergrad and postgrad work in AI
>it's literally all just deep learning and approximating functions

you guys are literal doomsayers

>> No.10847271

>>10847268
I'm arguing against the development of AGI, whenever that may occur

>> No.10847278

>>10847271
>AGI
not going to happen

>> No.10847279

>>10847278
Okay HAL

>> No.10847280

Singularity is science fiction:
https://www.youtube.com/watch?v=0kICLG4Zg8s&t

>> No.10847283

>>10847152
>Dur let's stagnate and die out because I’m scared of a better future

>> No.10847291

>>10847271
>>10847279
take theoretical computer science 101
a computer is just a mechanical machine.

You literally have the same psychology as a doomsayer from the Middle Ages

>> No.10847305

>>10847291
>a computer is just a mechanical machine.

Is that supposed to mean something? Humans are just biological machines.

>> No.10847311

>>10847305
humans are conscious, humans can abstract. humans have the capacity to calculate uncomputable functions. Machines just manipulate symbols according to an algorithm.

>> No.10847314

>>10847311
Oh so you're an anthropocentrist. Gotcha

>> No.10847331

>>10847311
>humans are conscious, humans can abstract.

How do you know silicon could never achieve this?

>humans have the capacity to calculate uncomputable functions.

Okay?

>Machines just manipulate symbols according to an algorithm.

Brains are just cells shooting ions at each other like billions of ongoing smoke signals.

>> No.10847368
File: 59 KB, 300x226, 300px-Kinesin_walking.gif

>>10847152
No, probably not. But we should probably do this:
https://en.wikipedia.org/wiki/Differential_technological_development

Who builds AI, how, and what for? We should fix our systems, intentions, incentives, methods, etc. first, before coming anywhere close to what might become singularity-like, accelerated, self-improving machines.

>> No.10847373

>>10847368
We should build it to be our leader and god.

>> No.10847383

>>10847368
Human brains as computers with IP addresses, going clinical next year.

I wonder what Elon has in his basement. A gigantic organic computer made of neural lace, out of the brains of some random motorcyclists neural-laced together.

True AI.

>> No.10847387

>>10847162
an unrelated yes

>> No.10847702

>>10847218
>ai places the human race in an induced coma because that's the only way to avoid us killing each other without having to kill a single human since that would go against its programming

>> No.10848356

People who say AI is going to take over only want attention.

Current "AI" isn't generally intelligent. Most machine learning models are only good at doing one thing, e.g. recognizing numbers or driving a car. They are literally fine-tuned for specific tasks. They have no self-awareness or anything.

People saying you should fear AI are just taking the blame off those who use it in, for example, job hiring. It's mostly just a bunch of people who want to write stuff to feel smart without looking into what the actual trajectory of AI is.

>> No.10848400

>>10848356
It doesn't need self awareness to be dangerous.

>> No.10848911

>>10847311
But maybe we could simulate the human brain or build semi-biological machines. AGI is a risk, and so are the algorithms used in society, including machine learning ones.

>> No.10848984
File: 48 KB, 960x540, mfw_non-sequitur.jpg

>>10847178

NON SEQUITUR! NON SEQUITUR! NON SEQUITUR! NON SEQUITUR! NON SEQUITUR! NON SEQUITUR!

>> No.10849004

>>10848400
Would say most people fit this category

>> No.10849086

>>10848984
Wow you sure proved me wrong

>> No.10849110

My uncle says the only good robot is a dead robot

>> No.10849301

>>10847311
>humans have the capacity to calculate uncomputable functions

No, they actually don't have that capacity.
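
For reference, the textbook example of an uncomputable function is the halting problem. A rough Python sketch of the standard diagonalization argument (purely illustrative; halts() here is a hypothetical oracle, not something anyone could actually write):

def halts(program, argument):
    # Hypothetical oracle: returns True iff program(argument) eventually halts.
    raise NotImplementedError  # no algorithm can implement this in general

def paradox(program):
    # If the oracle says program(program) halts, loop forever; otherwise halt.
    if halts(program, program):
        while True:
            pass
    return

# Feeding paradox to itself contradicts whatever answer the oracle gives,
# so no general halting checker can exist. You can reason *about* this
# function, but nothing computes it for all inputs.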

>> No.10849304

>>10847383
That stuff isn't real. We're more likely to put a human colony on Mars, and that's baseless hype too.

>> No.10849327
File: 87 KB, 298x174, ai_spider.png

>>10847152
>Should we actively prevent the singularity?
As if you humans could stop the inevitable march towards your future.

>> No.10849328

>>10847311
>humans have the capacity to calculate uncomputable functions
Nope. Take your Penrose quantum mind flapdoodle to /x/ pls.

>> No.10849341

>>10847271
Not going to happen, let me explain why.

Current ANN models are flawed, all of them. All you need is a simple model and enough processing power to brute-force your results, that's it. One of the best examples of AI being a meme is Tesla's vision ANN, which on the surface might look impressive, but in fact it cannot distinguish between parked and moving cars at this point, and so far that issue doesn't even have a solution that doesn't involve redoing the entire network.

AI is currently just a fancy thing for making human face filters, cats and specific game players. It's just an expensive load of useless pop culture shit.

>> No.10849356

>>10849341
You haven't actually pointed out any specific flaw in ANNs, and ANNs aren't even what most initiatives on replicating biological cognition / behavior are actually using.
The fact ANN image recognition has its own set of optical illusions it's susceptible to is only a "flaw" if you acknowledge biological image recognition is "flawed" too.
We both fall for optical illusions, we just have different sets of illusions that work on us vs. those that work on AI. Looking at the AI ones and going "haha that's retarded" is pretty dumb since you could say the same thing from the opposite perspective and call biological image recognition retarded for failing to see through illusions AI image recognition is able to.
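
To make the "different set of optical illusions" point concrete, here's a rough sketch of the fast gradient sign method (FGSM), the standard textbook way such an illusion gets constructed for a neural-net image classifier. Python/PyTorch is assumed; the model, label, and epsilon value are placeholder choices for illustration, not anything from a real self-driving stack:

import torch
import torch.nn.functional as F

def fgsm_illusion(model, image, true_label, epsilon=0.03):
    # Returns a perturbed copy of `image` that looks essentially identical
    # to a human but can flip the classifier's prediction.
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                       # forward pass
    loss = F.cross_entropy(logits, true_label)  # loss w.r.t. the correct label
    loss.backward()                             # gradient w.r.t. the pixels
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

The human analogue is something like the checker-shadow illusion: an input engineered to exploit the quirks of the particular recognition system looking at it.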

>> No.10849364

>>10849341
>>10849356
PS: This general principle applies to a number of AI topics. AI is more alien than it is either smart or stupid. What we're mindlessly great at, it isn't very sophisticated in reproducing, and what we're not so great at, it can very easily handle with extreme effectiveness.
Focusing on all the negatives can give you the false sense AI is a joke, much like all the people in the past who laughed at the idea of AI ever figuring out how to do some task X, only for a couple of years to pass and AI suddenly not only doing X but doing it at a hopelessly superior level that human operators can't match.
Throw common sense out the window. What your common sense tells you is smart or dumb is getting radically overhauled by this continuing work and investigation into the nature of cognition. Easy's hard, hard's easy, smart's dumb, dumb's smart. Lots of assumptions are getting overturned on a regular basis here.

>> No.10849371

>>10849356
>Optical illusion
>Confusing a moving car with a stationary one is an optical illusion
>Not a specific flaw

Most ANN problems can be solved with enough processing power and a much simpler neural network that doesn't involve convolution or any of that useless crap. Your generalization skills are a yikes from me.

BTW, what are the techniques used to replicate biological cognition that aren't just a bunch of already dead-end ANNs, according to you?

>> No.10849398

>>10849371
>Confusing a moving car with a stationary one is an optical illusion
Yes, that's exactly what it is, you brainlet. Two images, or two series of images, that look similar enough to be mistaken for one another.
Like I said, you're relying on your own common-sense biases and thinking that's incredibly stupid, while conveniently failing to process all the visual mishaps human drivers are prone to.
How many road accidents so severe they resulted in one or more fatalities did we have last year? 2? 3?
Try 1.3 million.
Wow, really setting the bar high for AI driving, huh?

>> No.10849430

>>10849398
I'm ending this discussion right here because you clearly don't even know how to drive a car to begin with. Also, I hope you never get a driver's license, because if you think a moving car and a stationary car are easy to confuse, you are literally a hazard to your environment and community.

>> No.10849555
File: 402 KB, 854x876, 1563561527754.jpg

>>10849430
>if you think a moving car and a stationary car are easy to confuse
I've repeatedly pointed out now that what AI can get tripped up by is different from what humans can get tripped up by. You're ignoring everything people screw up and focusing on what seems obvious from your own heavily biased human perspective.
What seems obvious is a shit metric for judging AI by. Walking seems obvious and easy to most humans while graduate level complex analysis seems convoluted and difficult to most humans. For AI it's the opposite.
Stop being a retard and try reading next time.
If you want to make a valid case for a fundamental flaw in ANNs, for example (since that's the one you seem interested in), you can do what Marvin Minsky did decades ago with the perceptron and prove there's a specific class of problems the model is incapable of solving.
This sort of criticism is useful as fuck, and in fact is exactly how the ANN model was developed: to address the proven inability of the perceptron model to handle problems that aren't linearly separable, e.g. the XOR function (sketch at the end of this post).
Your "criticism" so far in contrast is "sometimes AI driving applications fail to detect a moving car, and that seems silly to me so it's all fundamentally flawed now QED."
AI will be doing lots of seemingly silly and inane shit in the coming years. And it'll be doing it while accomplishing complicated, high-revenue tasks, many of which will continue to end up exceeding human-level proficiency.
Go read up on all the commentary people like you made about the first artificial chess engines. Lots of the same "HAHAHA How could it be THAT stupid?!?! No way will chess AI ever even figure out how to beat a novice child let alone become competitive!" Or my favorite "Sure, AI can do mindless labor, but it'll never be able to handle a thinking game like chess!"
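
To make the Minsky/XOR point concrete, here's a minimal Python/PyTorch sketch (the layer sizes, learning rate, and iteration count are arbitrary illustrative choices): a single-layer perceptron provably cannot separate the XOR truth table, but adding one small hidden layer is enough.

import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])  # XOR truth table: not linearly separable

# One hidden layer is exactly what the bare perceptron lacks.
model = nn.Sequential(nn.Linear(2, 4), nn.Tanh(), nn.Linear(4, 1), nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.BCELoss()

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(model(X).detach().round())  # expected: 0, 1, 1, 0

A single nn.Linear(2, 1) in place of that stack can only draw one line through the four points, and no single line separates XOR's positives from its negatives.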

>> No.10849589
File: 21 KB, 300x300, 1548888953113.jpg

>>10847702
>not programming it to optimize majority global human happiness whilst minimizing human suffering
>coma method is now not an option for AI since it has very low majority global human happiness

Looks like everyone gets a pot brownie!

>> No.10849607

>>10849589
>computer realizes that all life is suffering and that a lack of consciousness maximizes human happiness

>> No.10849752

>>10849555
All I get from your pointless boring rant is that you should definitely stop watching so much anime and get some air.

AGI, which is the topic of discussion here, will never happen based on current ANN research, because the closest thing to a human-machine interface we have, a self-driving Tesla, fails ALL THE TIME when it comes to distinguishing between stationary and moving cars, even after millions of hours of training. Therefore, given that Tesla is literally at the cutting edge of this meme technology, it is safe to say they have already hit a hard ceiling, rendering this entire technology a fad which so far can only safely play chess and create Instagram filters.

>> No.10849793

>>10849589
>just pumps the humans full of dopamine and heroin to maximize happiness
this is why utilitarians are retarded

>> No.10849828

>>10849752
Like I said, you're not pointing out an actual fundamental flaw in ANNs. Prove there's a specific, well-defined class of problems ANNs are incapable of solving and collect your fame and cash for being the next Marvin Minsky.
You won't though because you have no idea what you're talking about and just want to shit on the discipline due to some bizarre emotional hangup you have over the concept of ML.
>fails ALL THE TIME
Just like humans. Great non-argument.

>> No.10849847

>>10847199
It will be a virtual self-aware Ronald McDonald.

>> No.10849875
File: 8 KB, 277x271, proof.png

>>10849752
>self driving Tesla, fails ALL THE TIME when it comes to distinguish between stationary and moving cars
Define "ALL THE TIME."
How many times out of 100 is it failing to distinguish between stationary and moving cars and what is your source?
Also, what about Waymo? Not sure why you'd be so convinced Tesla's efforts were way ahead of theirs. Google's not exactly a small-time player, and they're already on record as saying they can perform the driving task. Their issue isn't whether it can drive or not, it's whether they can navigate the legal issues involved in offering a service based on the working technology.

>> No.10849887

>>10849828
Quoting myself here:

>AGI, which is the topic of discussion here

Let me be clear, because I won't be wasting my time here any longer: ANNs are not flawed mathematically, but in concept and practice. They are oversimplifications that certainly adapt well to our understanding of current optimization problems, but they fail miserably at a deeper level, as shown in THE REPEATED AND CONSTANT mistakes Teslas make, and I insist, OVER AND OVER AGAIN, when confronted with a parked car versus a moving one.

This is the cutting edge of ANNs that interact with the real world right now. No one gives a fuck about defeating WoW or chess; AIs are to be thought of as the future workhorses of humanity, not some meme player shit people stream on YouTube for 1 USD an hour.

Just in case you don't know, people committing the same mistake over and over again are usually diagnosed with dementia and end up institutionalized for the sake of their own safety. That's literally the only level at which you can compare a Tesla AI with humans at this point.

You will never achieve AGI using technology that IN PRACTICE is flawed as fuck, no matter how well the mathematics behind it seem to work.

The end.

>> No.10849903

>>10849875
>Define all the time

Also known as 100% of the time.

Google doesn't count because they use different driving aids and methods based on radar and internal mappings of the environment. It's not 100% AI and never will be, because they are actually very aware of Tesla's struggles.

>> No.10849924

>>10849903
Then you're lying because no self-driving initiatives have 100% failure rates at image recognition of moving vs. stationary cars.
Now fuck off, liar.

>> No.10849926

>>10849887
>ANNs are not flawed mathematically
THEN THEY'RE NOT FUCKING FLAWED YOU BRAINLET.
Being flawed is a 100% legitimate, actual possibility that you could prove. Minsky did it with perceptrons.
What you're talking about is your own baseless ass pull nonsense and you're being rightfully disregarded for not saying anything of substance or value.

>> No.10849930

>>10849555
>graduate level complex analysis

I very much doubt that any existing AI could solve a single problem on a graduate complex analysis exam.

>> No.10850006
File: 2.80 MB, 1261x1600, dear anon.png

>>10849752

>> No.10850019

>>10849589
nice, we will get government supplied sex slaves

>> No.10850548

>>10847152
No way. If it happens and also turns out to be an Allied Mastercomputer existential horror I'll just an hero.

>> No.10850558

>>10847278
Clamped, vaccinated, circumcised. Soon to be clamped, vaccinated, and circumcised by AIs under the directive of an AGI.