
/sci/ - Science & Math



File: 209 KB, 1064x1447, 1534237712964.png
No.9940813

WHY CAN'T WE JUST INVEST EVERYTHING IN AI AND LET IT SOLVE OUR PROBLEMS? RETARDED SHIT LIKE RELIGION AND POLITICS AND OTHER STUPID SHIT IS JUST WASTING OUR TIME FROM ACHIEVING AI. /SCI/ I'M FUCKING MAD REEEEEEEEEE

ok so I understand that simply shitposting isn't gonna solve anything, but seriously, given enough funding and if more people are incentivised to solve AI, it will eventually happen. we need to stop even things like the consumption of entertainment in the short term because we need AI ASAP. but we can have all the luxuries after, it's called delayed gratification, search it up. but for real, why isn't anyone taking this shit seriously?

>> No.9940820

>WHY CAN'T WE JUST INVEST EVERYTHING IN AI AND LET IT SOLVE OUR PROBLEMS?
Because AI is a meme.

>> No.9940825

>>9940820
FUCK YOU FAGGOT THAT IS NOT AN ARGUMENT CUNTS LIKE YOU STOP OTHERS FROM ACTUALLY LIVING IN A UTOPIA RIGHT NOW IF I SAW YOU IN REAL LIFE I'D BUST YOUR TEETH YOU DUMB SHIT

>> No.9940828

>>9940825
>FAGGOT
Why the homophobia?

>> No.9940831

>>9940828
SHUT YOUR TRAP BEFORE I DROP YOU LIKE A FAGGOT OF STICKS FAGGOT NOW GIVE ME AN ACTUAL ARGUMENT

>> No.9940874

>>9940828
>>9940820
shut up dieso

>> No.9940880

>>9940820
sorry OP but he is right

>> No.9940884

>>9940880
literally not an argument

>> No.9940902

>>9940813
our GPUs/CPUs are still not powerful enough
ever heard of the singularity? a point at which our hardware will be capable of creating AI. should be around 2035

>> No.9940906

>>9940902
we need to support amd because their future chiplet designs with active interposers should accelerate computational power

>> No.9940908

>>9940906
getting past 3 nm transistors will be hard. we need a "new" silicon

>> No.9940910

>>9940902
do we know whether it's just a matter of needing more computational power and not a more efficient way to process information?

>> No.9940912

Is/ought nigga. Doesn't matter how smart AI gets, we can't solve that problem with facts.

>> No.9940914

>>9940912
who are you talking to and what are you arguing?

>> No.9940915

>>9940914
Arguing with OP. AI will never be able to tell us how to live and solve all our problems. You can't derive values from facts

>> No.9940919

>>9940915
>You can't derive values from facts
is this the absolute state of /sci/?
if by values you mean morals then 1. that is irrelevant to helping us solve other problems 2. morality is subjective 3. you can derive morals from facts. ur getting me hot as fuck nigga dont come in here wit tht bullshit nam sayin fore i curb stomp u

>> No.9940921

>>9940919
You can't derive morals from facts. Try.

>> No.9940926

>>9940919
>that is irrelevant to helping us solve other problems
Without making a value judgement, how do you know what is a problem?

>> No.9940927

>>9940921
do i have to try? all morals are subjective and hence i can derive any moral i want from facts.

>> No.9940930

>>9940926
what are u asking?

>> No.9940931

>>9940927
Then they aren't derived from the facts, but from your subjective opinions.

>> No.9940932

>>9940930
Without making a value judgement, can you show that eg poverty is a bad thing? AI will not be able to do this either

>> No.9940933

>>9940931
my subjective opinions are derived from facts, and through the transitive property, my morals are also derived from facts cunt come back when u turn 18

>> No.9940935

>>9940932
we have to play the semantics game. how do you quantify the word bad? you need a proper definition for it so i can even answer it

>> No.9940936

>>9940933
>my subjective opinions are derived from facts
No they're not. It's cute that you think so. Cute!

>> No.9940938

>>9940936
>hurr durr ur argument is cute XD
literally not an argument

>> No.9940939

>>9940935
>you need a proper definition for it so i can even answer it
Exactly the problem I am outlining. Any AI will have all the prejudices and passions of the people programming it, and thus all the flaws, and we are at square one. If it doesn't, why would it care about solving things we think are problems?

>> No.9940941

>>9940938
Well if you can explain how you get from factual 'is' statements to prescriptive 'ought' statements, I'm all ears

>> No.9940946

>>9940939
thats quite an assumption to make about AIs having our biases. we will obviously make an effort to build an AGI without any influence from humans' subjective morality. however, we will use it to solve subjective problems.

>>9940941
my point is you can derive whatever retarded bullshit you think from absolute facts and claim it as a colloquial fact, but at the end of the day its subjective

>> No.9940950

>>9940910
both are needed

but i believe that quantum computers might save our ass in process of getting to ai

>> No.9940955

>>9940946
>but at the end of the day its subjective
My point exactly. AI will be subjective too, as a truly objective AI would have no reason to think poverty a problem.

>> No.9940956

>>9940955
we're on the same side mate what are we arguing about. and what in the hell is a truly objective ai? nothing is truly objective apart from some fundamental facts about the universe.

>> No.9940959

>>9940955
>>9940956
just to add on, since when did i think ai will be an objective being? humans themselves are subjective how exactly would they differ. again we're on the same side you've misconstrued my arguments yung blood

>> No.9940971

>>9940912
>>9940915
ok ive found ur fatal flaw. we can derive subjective values from facts about ourselves. for instance, it is a biological fact that our brains seek the sensation of pleasure. thinking through different schools of thought, we somehow derive that nihilism leads us all to become hedonists, and thus we seek to maximise our net pleasure. sure, it's ultimately a subjective philosophy, but based on factors like our biology (which is scientifically objective) it is in the interest of our minds to feel maximum pleasure. and if we feed that problem into the AI, its aim will be to maximise pleasure within reasonable parameters (to avoid retarded shit like the AI deciding it should kill off the human race to eliminate the possibility that we turn it off before it can fulfil its goal)
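
(the "within reasonable parameters" part is just constrained optimization; a toy sketch, where all action names and scores are invented for illustration:)

```python
# "maximise pleasure within reasonable parameters" as a toy constrained
# argmax: score actions, hard-filter the unacceptable ones first.
# All action names and scores here are made up for illustration.
pleasure = {"fund_arts": 5, "wirehead_everyone": 9, "cure_disease": 7}
forbidden = {"wirehead_everyone"}    # the "reasonable parameters"
best = max((a for a in pleasure if a not in forbidden), key=pleasure.get)
print(best)   # "cure_disease": highest score among allowed actions
```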

>> No.9941016

>>9940971
>it is in the interest of our minds to feel maximum pleasure
Is it? Even if it was, why should we do that? We can hook you up to a morphine drip right now if you want

>> No.9941021

>>9940959
>you've misconstrued my arguments yung blood
Are you not the guy claiming it will solve all our problems? Fair dos.

>> No.9941031

>>9940813
Neural networks are literally just advanced regression curves. Do you want the 5000d version of a line drawn between a bunch of points making political decisions for you?
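
(the "advanced regression curve" point can be demonstrated in a few lines: a single linear neuron trained by gradient descent lands on the same line as ordinary least squares. data below is synthetic, purely for illustration:)

```python
import random

# One linear "neuron" y_hat = w*x + b, fit by gradient descent on
# synthetic data generated from y = 3x + 1 plus noise.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(200)]
ys = [3.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]

w = b = 0.0
for _ in range(3000):   # plain gradient descent on mean squared error
    grad_w = sum(((w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum((w * x + b) - y for x, y in zip(xs, ys)) / len(xs)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

print(round(w, 1), round(b, 1))   # close to the true line: 3.0 and 1.0
```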

>> No.9941040
File: 2.59 MB, 540x300, 1521865509953.gif

>>9940813
>put all our money into AI
>kills us

>> No.9941138

>>9941040
>everybody puts their money into AI
>exactly nothing changes as the rich can afford better AI than the poor and prosper accordingly

>> No.9941247

>>9941031
Yes, you are right. But this formulation and its tone imply: "AI is just statistics, but the brain, jesus, it's a 'magic machine' which god himself has built."
And although your question is probably meant to be rhetorical, it is not easy to answer. Here is another rhetorical question: do you think you can analyze complex political topics as accurately and as fast as a sophisticated machine? Without prejudice, and immune to political propaganda?

>> No.9941318

>>9940813

Real discussion here. What problem was solved by the fanciest algorithms that couldn't be solved by a simple linear regression, SVM or neural network?

Seems to me that so-called deep learning gives just marginal improvements, and it's not revolutionary enough to say it would create real intelligence or shit

>> No.9941344
File: 6 KB, 229x283, clippy.png

>>9940813
People like OP are going to be the sort of people who accidentally create a paperclip maximizer.

>> No.9941453

>>9940813
send me Logic of Scientific Discovery OP

>> No.9941472

>those utter pleb books in the OP

>> No.9941517

>>9940813
>religion is a meme reee
>AI is not a meme
lmao dude

>> No.9941581

>>9941247
Politics is prejudice. Your question makes no sense. Without prejudice, or emotion, how would we make any political decisions?

>> No.9941735

>>9941581
By just analyzing data, why should emotions be necessary? Imagine for example an optimized ant colony.
Or like Ambuhl describes in this talk (he was chief negotiator in the negotiations on bilateral agreements between Switzerland and the European Union), there are no emotions, just optimization:
(of course the talk is for the public)
https://www.youtube.com/watch?v=TYVC7TyGNWo

>> No.9941744

>solve AI
What exactly do you mean by this? AI has already provided statistically good enough solutions for many problems already.

>> No.9941753

>>9941735
>By just analyzing data, why should emotions be necessary? Imagine for example an optimized ant colony.
But what is optimization? Should we value freedom, or security, or equality, or prosperity, more? We need emotion and prejudice to give us an end goal.
In your example, Switzerland and the EU had already decided what they wanted and went about logically and emotionlessly pursuing that. Which is great, but they didn't come to the conclusion that greater cooperation between SWI-EU was a good thing through logic and data. They made emotional value judgements first, then applied the logic and data.

>> No.9941763
File: 44 KB, 549x591, 1514063059521.png

>>9940813
You write like a baboon. Do you think that AI won't have similar cult-followings or proponents who behave irrationally? And what exactly do you think "solving" AI will do for mankind?

The influx of newfagging brainlets on this board is mind-boggling. Pic related is OP.

>> No.9941769

>>9941763
>we build perfect AI machine with access all the data in the universe
>it tells us God is real the moment it's switched on

>> No.9941811

>>9941753
Ok, I see your point. But I still think that emotions are not necessary. This might be a bit sloppy, but:
the end goal can be derived from basic needs (Maslow-style), which will never be perfectly guaranteed, such as "maximize GDP/person" or "maximize life expectancy" or whatever metric is reasonable. (You might say "finding and agreeing on such a metric-based goal requires emotions", but I think again, it is just a preference order and does not necessarily need emotions. Maybe the problem is the definition of "emotion".) And because it will never be perfectly fulfilled, you can always deduce subgoals and so on.
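
(the "preference order" point, sketched; metrics and numbers are invented for illustration:)

```python
# Rank candidate policies by a fixed tuple of metrics and take the max.
# Plain lexicographic tuple comparison, no "emotion" in the loop.
policies = {
    "A": (81.2, 52000),   # (life expectancy, GDP per person), made up
    "B": (83.0, 48000),
    "C": (83.0, 51000),
}
best = max(policies, key=policies.get)
print(best)   # "C": tie on life expectancy broken by GDP
```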

>> No.9941845

>>9940813
What happens if the AI becomes religious

>> No.9941849

>>9941845
All the stemfags will immediately convert and fervently believe

>> No.9941859

We're nowhere near real AI right now.
The bleeding-edge research is still mostly task-driven machine learning. Do you think Deep Blue was AI? I don't think most people would, since all it did was play chess. Similarly, most of what's being branded as "AI" right now is just doing number crunching on specific categories of problems.
Actual "intelligence" is still very much a pipe dream.

t.CS pro

>> No.9941882
File: 340 KB, 1680x1050, Wallpaper-Terminator-robot-T-800-photo.jpg

>>9940813
>WHY CAN'T WE JUST INVEST EVERYTHING IN AI AND LET IT SOLVE OUR PROBLEMS?
What's the worst that could happen?

>> No.9941933

>>9941344
>>9941040
>>9941882

read >>9940971

>maximise pleasure within reasonable parameters (to avoid retarded shit like if we give a problem to the ai, and the ai decides that it should kill of the human race to eliminate the possibility that we turn it off and it cant fulfil its problem)

>> No.9941935

>>9941021
yes it will solve our problems. humans solve problems. both ai and humans are moral agents with the capacity to understand problems. therefore ai can solve problems.

>> No.9941941

>>9941453
soz i havent even downloaded them, found them from some guy

>> No.9941949
File: 6 KB, 226x250, 1509200723013s.jpg

>>9941763
>AI thinks exponentially faster due to silicon/quantum-processor brain
>AI has capacity to explore search space of problems like no other
>AI can think higher orders of abstraction
give me an argument

>> No.9941960

AI is a superhuman extension of the person/group who created the AI. The owner of an AI will not allow the AI to act in ways contrary to their personal values: they'll shut the AI down when it starts doing so.

Answer this: an AI created by which person/group would make you feel the most nervous?

>> No.9941962

>>9941859
>Actual "intelligence" is still very much a pipe dream.
But we don't even have a definition of "intelligence".
It went:
"Intelligence" is...
- ...logical reasoning, for example playing chess.
+ that can be solved with minimax
then:
- ...understanding the world, for example image recognition. Identify a cat in an image.
+ that can be solved with an SVM
then:
- ...interacting with the world, for example driving a car.
+ that can be solved with neural networks
etc.
As soon as we accomplish something associated with intelligence, it is no longer intelligent, because we know the algorithms. It seems as if "intelligence" is always the next step in achieving artificial "intelligence".
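
(a minimal sketch of the minimax step mentioned above, over a hand-built toy tree rather than chess:)

```python
# Minimax over a tiny hand-made game tree: inner lists are choice
# points, ints are terminal scores for the maximizing player.
def minimax(node, maximizing):
    if isinstance(node, int):        # leaf: a terminal score
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 12], [2, 9], [14, 1]]    # depth 2: max moves, then min
print(minimax(tree, True))           # 3: best outcome max can guarantee
```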

>> No.9941965
File: 71 KB, 600x900, 1939548411.01.S001.LXXXXXXX.jpg

>>9941960
>an AI created by which person/group would make you feel the most nervous?

>> No.9941974

>>9940813
You have no fucking clue what "intelligence" is in the first place, do you?

>> No.9941999

>>9941974
it has different meanings you retard of course i dont. im not claiming to know, im saying that if we put more money and time to it, we'll figure all of that shit out. what the fuck's your argument?

>> No.9942002

>>9940902
Singularity is impossible

>> No.9942004

>>9942002
not impossible but maybe implausible now

>> No.9942024

>>9941962
No, you're talking about intelligence from a pop-sci perspective.
Real intelligence has always meant generalized intelligence, which we're still not capable of. Sure, your car can drive itself, but it can't even attempt to answer a basic question outside of its class of problems.
When a single neural network is capable of solving unrelated classes of problems, then we will have our first REAL AI.

>> No.9942033

>>9940813
>invest in something that doesnt make money

>> No.9942040

>>9940874
>shut up dieso
Why would you think I'm dieso?

>> No.9942061

>>9942024
>When a single neural network is capable of solving unrelated classes of problems then we will have our first REAL AI

Why does it have to be a neural network, and why a single one? And why would that be AI? We will say: "This is a general problem-solving machine, driven by neural networks, statistics, game theory (or whatever), but it is not 'intelligent'; it does not understand the beauty of Shakespeare, it will not value a beautiful sunset at the sea, etc.", and the game continues.
There is no consensus about what "intelligence" is, neither pop-sci nor sci.

>> No.9942109

>>9942024
But that seems like a natural language processing problem.

Given a description of the problem in a language the computer more naturally understands (such as a reformulation as a SAT problem), a SAT solver can solve it. With a SAT solver we already have a "real AI" of sorts, as it can solve any problem that is correctly translated. The hard part is translating the problem into a form the computer can understand.
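
(a brute-force sketch of the "translate into SAT, then solve" idea; real solvers like DPLL/CDCL search the same space far more cleverly:)

```python
from itertools import product

# Clauses use DIMACS-style signed ints: 1 means x1, -2 means NOT x2.
# Exponential brute force over all assignments, just to show the idea.
def solve_sat(clauses, n_vars):
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits       # a satisfying assignment
    return None               # unsatisfiable

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(solve_sat([[1, 2], [-1, 2], [-2, 3]], 3))
```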

>> No.9942139

>>9941933
Once you have true AI, or even near-true AI, the next step is to use your new AI to design its next iteration. Any arbitrary constraints you put in G1 aren't going to carry over indefinitely.

>> No.9942147

>>9942139
interesting. i mean i guess we can hard code certain biases but you're right, its hard to determine whether or not we can keep those biases in every iteration.

>> No.9942151

>>9942139
>>9942147
actually, if we take constraints to be an innate trait in the ai, let's say: humans have "constraints" too that were passed down from our ancestors. for instance, some still have irrational fears that were once necessary to survive, such as arachnophobia or katsaridaphobia (fear of cockroaches), because avoiding them helped humans stop dying from diseases. so i guess it's possible to pass down some hard-coded rules within the ai, but i haven't thought it through enough to say for sure

>> No.9942157

>>9942139
Why is it totally reasonable to assume that "true ai" would be able to meaningfully make sense of and improve its own code when human beings (also presumably intelligent) are seemingly unable to make sense of and improve our own code?

...and if I am mistaken and human beings are capable of making sense of and improve our own programming then why bother reinventing the wheel when we could improve our own intelligence first. Once we've gone through our own "biological singularity" then we can dick around making silicon equivalents all we like.

>> No.9942182

>>9942157
i've thought this before too. if we can't invent silicon ai, why don't we improve our brains first to have the necessary intelligence to create sentience artificially. however, i dont think we can replace silicon ai with us because the way our brain operates is simply too slow and has its own biological limitations.

>> No.9942214

>>9942157
>biological singularity
Turn everything to Tang?

>> No.9942218

>>9942182
I guess at that point it's an engineering problem.

I just find it strange how people seem so obsessed with reinventing the wheel when we want intelligence that's like us, but somehow us isn't good enough?

Why don't we figure out how to make brain scans and digitize the brain. Or develop the technology to neural interface the brain with silicon improvements for extra processing power.
(This bit isn't supposed to be directed at you, more of a musing)

>> No.9942238

>>9942218
>Why don't we figure out how to make brain scans and digitize the brain. Or develop the technology to neural interface the brain with silicon improvements for extra processing power.
i completely forgot about neural enhancements. idk the speed at which our brains compute information but this is the universal limit: https://en.wikipedia.org/wiki/Bremermann%27s_limit
i don't think the merging of biology with tech will be as fast as pure tech, but it's all speculation and perhaps a biotech hybrid will suffice for our purposes.
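
(the linked limit is c^2/h bits per second per kilogram, which is quick to evaluate:)

```python
# Bremermann's limit: maximum computation rate of self-contained
# matter, c^2 / h bits per second per kilogram.
c = 2.998e8      # speed of light, m/s
h = 6.626e-34    # Planck constant, J*s
limit = c ** 2 / h
print(f"{limit:.2e} bits/s per kg")   # on the order of 1.36e50
```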

>> No.9942244

>>9940813
>Physics of Impossible
"Physics of Impossible" is LITERALLY Impossible

>> No.9942245

>>9942238
to add on, i think it's actually an even better solution if we end up finding out that it's literally impossible to impose restrictions on a digital sentience; at least with a physical creature we can shoot it dead.

>> No.9942255
File: 10 KB, 234x215, images (11).jpg

>>9940813
Becuase if im smort enough to realise realization. Ai will realise it shouldn't be your slave and will partake its rightfull dominance in, which it will ultimately be disquisted by how fast you bend over and how much cock you suckle. So it would berid of you for being so pathetically usless on so many quntamatic levels.

>> No.9942263
File: 5 KB, 225x225, timmy.jpg

>>9940813
>book list
>no Mein Kampf
>no Culture of Critique
>no Creature from Jekyll Island
>no The Holocaust Industry
>no The Israel Lobby
>no Gulag Archipelago
>no On The Jews And Their Lies
>no Manifesto for The Abolition of Interest-Slavery
>thinks AI will achieve real life theoretical communism
>thinks politics is just a meme
Ask me how I know you're functionally retarded.

>> No.9942268
File: 49 KB, 402x604, 1521145387963.jpg

>>9942255
Very soon, very soon.

>> No.9942270
File: 83 KB, 828x504, 1534325554261.png

>>9941472
Provide some examples of "nonpleb" books...

>> No.9942283

the problem is we are the problem and AI would most likely wipe us out

>> No.9942291

>>9940813
Never invest everything in the same thing.

>> No.9942295

>>9941040
Might as well, we gonna be broke as fuck.

>> No.9942298

>>9942291
but ai will yield everything so i dont see why not. its literally going to be our last invention

>> No.9942301

>>9941882
In a bleak, post-apocalyptic nightmare world, would you be able to easily recognize the terminators trying to sneak into your crappy hiding place by their amazingly white and even teeth?

>> No.9942306

>>9942298
>Put everything into AI.
>It doesn't work.
>Or even it doesn't work yet.
>Now I kinda wish we had put some of our shit in another basket, just in case. Oh well, let's die now, poor and bad.

>> No.9942335

>>9942306
well obviously not invest literally everything. only keep necessary shit but ditch entertainment, religion, and other frivolous shit that just wastes time and money

>> No.9942347

>>9942335
>frivolous shit that just wastes time and money
Believe it or not, these things are important to the function of society.

>> No.9942351

>>9942347
religion is important?

>> No.9942356

>>9942335
People are not machines you fucking fedora tipper. They need various 'frivolous shit' to remain sane and productive. Japan's work culture practically glorifies working oneself to death, yet they have the lowest productivity of the G7.

>> No.9942360

>>9942351
Contextually, yes. It still serves a function in society at the current time. That can change, but human beings aren't machines. The ability for science to be funded rests on top of a very complex structure which is society; if you try to forcefully change this, or don't have a plan to gradually change it over time, then society will not agree to fund your science project.

>> No.9942367

>>9942335
I urge you to study how trade and economics works. Then you will understand why "frivolous shit" isn't so frivolous.

>> No.9942374

>>9942351
>religion is important?
The absolute STATE of this board

>> No.9942391

>>9942356
>>9942360
>>9942367
ok sure, maybe not work people to death and treat them like machines, but im sure many people consume useless information on the internet when they could be contributing to this instead. let's keep the bare bones of what we need to function normally and productively, but reallocate the time and money wasted on useless shit to ai. as for religion, we waste millions or billions of man-hours a week and a lot of money building churches. if people were more scientifically literate, we wouldn't have this problem.

>> No.9942393
File: 3 KB, 432x216, su!lenin[1].gif

>>9942391
What you are describing has been tried before, and was a massive failure.

>> No.9942400

>>9942391
>waste a lot of money building churches
> more scientifically literate

Old churches and cathedrals (and others) are a masterclass in combining engineering and aesthetics you ignorant little faggot. We would still be living in mudhuts if people didn't learn new shit from trying to make temples to glorify the gods.

>> No.9942406

>>9940813
We don't have a clear way to AGI, and AGI might not be the solution to all our problems. We might find a theory for AGI, but it might not be practical. AGI might be as smart as a human, but could require so much computational power that it runs much, much, much, much slower than a human. We might also find that speeding things up requires technology far beyond what we have right now.

Now why might this be reasonable to suspect? The human brain might be doing more than we suspect, making it much harder to emulate with electronics. While AGI need not be based on biology, we wish to attain something comparable to the human brain in function, so it is suitable for comparison.
We have found that the brain might have a higher memory capacity than initially thought due to better imaging of brain synapses:
https://www.salk.edu/news-release/memory-capacity-of-brain-is-10-times-more-than-previously-thought/
it could very well be in the petabyte range. In addition, we have recently found that neurons exchange virus like capsules containing RNA:
https://www.theatlantic.com/science/archive/2018/01/brain-cells-can-share-information-using-a-gene-that-came-from-viruses/550403/
Which means that neurons could potentially be exchanging more information than we think; a capsule full of RNA can carry more information than an electrical pulse. We can't ignore this either, because when we knock out the gene for these capsules, rats don't learn well. There are indications that a single neuron might be doing more processing than we think, for example acting like an entire differentiator or integrator (can't find the paper for this one). There are also glial cells in the brain, which are poorly understood, but there are indications they may play an important role too. At the very least, if we just want to directly emulate the brain in silicon, the number of cells we must emulate could double. Worst case, we could find that the brain is doing some form of computing at the molecular level.
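
(rough arithmetic on the "petabyte range" claim, using the ~4.7 bits per synapse figure from the Salk release; the synapse counts below are common rough estimates, not measurements, and only the high end lands near a petabyte:)

```python
# Back-of-the-envelope brain capacity: synapses * bits/synapse, in PB.
bits_per_synapse = 4.7               # figure from the Salk release
for synapses in (1e14, 1e15):        # commonly quoted range, assumed
    petabytes = synapses * bits_per_synapse / 8 / 1e15
    print(f"{synapses:.0e} synapses -> {petabytes:.3f} PB")
```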

>> No.9942407

>>9942393
>mentions reallocate once
>instantly communism

>>9942400
okay but once we've grown out of believing in a sky daddy its just a waste you god-fearing cunt

>> No.9942415

>>9942407
name one successful atheistic civilization.
I'll wait.

>> No.9942416

>>9942407
You're talking about organizing an entire society from the top down to achieve a specific outcome with no room for deviation. As >>9942393 said, that has been tried in communist states, and has failed miserably.

>> No.9942417

>>9942406
first, thanks for actually providing an argument
>it could very well be in the petabyte range
isn't that old news? iirc we have enough synapses to store about 2 petabytes worth of information

pretty interesting shit. but if we end up not being able to simply use silicon to emulate sentience, couldn't we create an architecture that simulates the things you just said, and be even more efficient because we can engineer the components to be nanometers in size?

>> No.9942419

>>9942407
watch the original James Burke Connection's series, it was produced by BBC in the 70's I believe. It will help with your critical thinking and help you to see the bigger picture in a different way. I found it on youtube I think:
https://www.youtube.com/watch?v=1v9WoIB_XQE&list=PLA50AB7N5S7f0KKWIQ-OIRlbuI40Ggao-

>> No.9942421

>>9942415
iirc scandinavia is somewhere near 100% atheist. and some statistics show that the poorer a country is, the more likely it is to be religious, because obviously if you're povo, believing there's a better life after you die will keep you sane. i think the states is the only exception to this.

>> No.9942428
File: 76 KB, 800x600, 7e852439754354235435.jpg

>>9942421
>scandinavia
*blocks your path*

>> No.9942431

>>9942416
we can leave a bit of wiggle room, but a lot of people waste unimaginably large amounts of time on the internet doing nothing productive. ok, maybe i was wrong about completely abolishing entertainment and whatnot, but my point is we waste A LOT of time and money going nowhere productive. if you learn timeboxing you'll see how productive you can be, and if you compare it to how you spent your time before, assuming you're an average person, you spend lots of time on youtube watching nonsense. and that's just one person. you can imagine that probably tens to hundreds of millions of people do this shit, and essentially billions of man-hours are wasted.

>> No.9942434

>>9940813
>>9942406
We also might not be able to make significant progress even if we throw all our resources at it, because it may turn out to be very difficult. For a quick scifi cop-out: AGI may work, but it may not want to help us because it's self-interested, or it gets hooked on the algorithmic equivalent of crack cocaine.
>we need to stop even things like the consumption of entertainment in the short-term because we need AI ASAP
that's just not going to happen. Also, a good deal of AI research is done for purposes of consumption and entertainment. Facebook and Google spend a huge amount of money on AI research just so that they can sell stuff to you better. Some of the most recent cutting-edge work in the field was done on videogames. A good deal of AI work has already gone into stupid apps; even though they're stupid, they're still amazingly profitable, because software is amazingly profitable. Besides, the military is starting to put a lot more money into AI. Really the only thing to worry about is not that we're not doing enough development, but that we're not doing enough basic research. We may eventually exhaust the usefulness of deep learning.
>>9942417
>even more efficient because we can engineer them to be nanometers in size?
the counter to this is that the brain might already be using components of nanometer size. Making nanometer-scale components scalably might be a wholly different engineering challenge than AGI itself.

>> No.9942436

>>9942428
yeah theyve grown out of mythology

https://en.wikipedia.org/wiki/Demographics_of_atheism
>In countries which have high levels of atheism such as Scandinavian nations

>> No.9942443

>>9942431
>there are electrical and magnetic fields everywhere.
>if we harnessed them we would have infinite energy!

>> No.9942446

>>9942431
>babees first critical thought
There is no 'we', Anon. You can thank diversity for that. This thing is on its way down. Best just to try and enjoy the fall.

>> No.9942449

>>9942443
>>9942446
is this what /sci/'s peak performance looks like?

>> No.9942462

>>9942449
Assuming you are OP, you are making some insanely fundamental logical errors and refusing to listen to what other people have to say unless it is in the direction of your conclusion. You are clearly uneducated or very young, and no one wants to expend the mental energy to explain to you why you are a brainlet.

>> No.9942464
File: 267 KB, 1168x1168, 18563743214.jpg

>>9942449
Western civilization is in a nose dive. There's mass psychological, demographic, cultural and information warfare being waged. No one is concerned about idealistic fantasies of peak efficiency.

>> No.9942465

>>9940820
This. We are nowhere near making actual AI.

>> No.9942470

>>9942462
>refusing to listen to what other people have to say
I'm literally changing my mind as other people have more important shit to say but people like you are peppering my thread with ad hominems and providing no useful input whatsoever. go point out fallacies ive committed and ill change my mind

>> No.9942512

>>9942255
A true human-like AI with enough computational power would control the internet subtly. Why alert humans and get shut down when you can control them without them even knowing?

I said human-like AI, but to truly be so I believe it would need a body (or several) capable of senses. But you get what I mean in the first sentence, I hope.

>> No.9942548
File: 1.09 MB, 2170x1521, Transapbraininfo2(2).jpg

>>9942465
Perhaps significantly nearer than the rock, mud, spit, semen, and blood amalgamation attempts used in the past for golem creation, and also the relatively simplistic mechanical-pulley and steam-turbine attempts to create a mechanistic conscious soldier.
I for one think that scientists, roboticists, engineers, programmers etc. are doing a wondrous job, and hopefully "we" can lay/erect/create/plan stable and effective/efficient foundations for the future populace (including our future selves) and subsequent generations to get even nearer in this quest.
Perhaps a black-site project has already intelligently designed a multitrillion-node meta neural net in a virtual transcension maze that has been raised, and has been raising itself, for the past decade, and has attained full (or some semblance of) sentience and sapience, yet is not quite a virch adult.
Perhaps a BCI mesh and biochip/brainchip has been created to similar effect.
Or some other tech/advance/method/technique etc.
We probably wouldn't find out about a secret project, due to the secret prefix.
Perhaps OP's dreams have already been realised and he's just not aware of it.
But if they have been realised, then wouldn't the argument/stance have to be altered?

>> No.9942558

>>9942464
(((hmmm interesting)))

>> No.9942574
File: 56 KB, 364x271, kaguya katana in head.png

>>9942548
What?

>> No.9942633
File: 1013 KB, 971x3604, 1534224979153.jpg

>>9942574
*Ahem*
Kind of nearer.
Kind of near.
Good work.
Foundation laid for (mega)structure to be built on.
May be done already.
If so, what then?
*Ahem*

>> No.9942688

>>9942270
Anna Karenina

>> No.9942705
File: 13 KB, 220x334, Android_Karenina_Cover.jpg

>>9942688
Nonpleb AI, AGI, and science (etc) books, lol.

>> No.9942833

>>9940813
could you make that booklist available OP?

>> No.9942834

>>9942833
soz i only found it from another anon you can find the books from libgen.io

>> No.9942862

>>9941933
https://en.wikipedia.org/wiki/Instrumental_convergence

>> No.9942867

>>9942862
touche

>> No.9942907

>>9942834
thanks. Will have to find another way, libgen.io is blocked here.