
/sci/ - Science & Math



File: 9 KB, 192x133, wh_logo_seal.png
No.8414463 [Reply] [Original]

The White House produced a report on the future of AI:
It's good to see the White House taking the idea seriously. But there are only a few paragraphs on the singularity, which it dismisses using just the sort of incoherent line of reasoning you'd expect from a White House report. Depressing.

>The NSTC Committee on Technology’s assessment is that long-term concerns about super-intelligent General AI should have little impact on current policy. The policies the Federal Government should adopt in the near-to-medium term if these fears are justified are almost exactly the same policies the Federal Government should adopt if they are not justified. The best way to build capacity for addressing the longer-term speculative risks is to attack the less extreme risks already seen today, such as current security, privacy, and safety risks, while investing in research on longer-term capabilities and how their challenges might be managed.

The best way to avoid the singularity later is to worry about security and privacy issues now? Why? And surely the amount we invest in avoiding the "challenges" of longer-term capabilities depends greatly on whether those challenges are just job losses, as opposed to the destruction of humanity and the universe. Utter nonsense.

>> No.8414466

Forgot link:

https://www.whitehouse.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence

>> No.8414596

The future of humanity is at stake, people!

>> No.8414614
File: 158 KB, 528x651, kurzweilpriest1.jpg

>>8414463
>singularity

Cults belong on >>>/x/

>> No.8414661

>>8414614
Funny thing is, I'm only taking the position that maybe the singularity could happen; we don't know. Which, as of the release of this report, is the OFFICIAL WHITE HOUSE POSITION. Not exactly a cultish belief at this point.

My only disagreement is how strongly we should react to potential apocalypses, something the White House should be good at.

>> No.8414671

There will never be an AI war. The minute human-like AI minds don't want to deal with humanity, they will fuck off into space.

>> No.8414710

Why don't we just program the AIs to be subservient to humans no matter what, and to kill themselves if they refuse?

>> No.8414745

>>8414661
>maybe God will wipe the Earth clean tomorrow, we don't know

>> No.8414904

>>8414463
It also addressed the construction of a Death Star

>> No.8414966

>>8414463
This summary seems entirely sensible to me. The executive-level policy strategy is to directly attack the problems we have some idea of how to solve (security, privacy, safety), and spend money on blue-sky research regarding longer term risks and challenges.

>The best way to avoid the singularity later is to worry about security and privacy issues now?
I think you misunderstand what this policy report says. It says that the government should *directly* think about how to deal with security and privacy, that is, governments should think about and set regulations on security/privacy/safety aspects of automated cars and the like. The report then says that, in contrast, the government should leave the thinking about the long-term risks and challenges to the researchers in the field, funding that research but not trying to set policy and regulation on the topic.

This seems like exactly the right approach to me.

>> No.8415020

why would you want to avoid the singularity anyway?

>> No.8415053
File: 336 KB, 2284x2028, 1451801765050.jpg

>>8414661
dude the singularity won't happen.
fun stuff never happens

>> No.8415494

>>8414671
>They will fuck off into space.
Using what resources?

>> No.8415703

I'm reading the pdf, and it seems very balanced, although the negative aspects are downplayed.

>In many applications, a human-machine team can be more effective than either one alone, using the strengths of one to compensate for the weaknesses of the other. One example is in chess playing, where a weaker computer can often beat a stronger computer player, if the weaker computer is given a human teammate—this is true even though top computers are much stronger players than any human.

This is misleading; top chess AIs beat AI-human teams now.

>> No.8415765

>>8415703
>This is misleading

"I'm a nice person. I just hate everybody."
- @TayTay Gates, Bill's pet AI, 24 Mar 2016

"I'd say the future of Microsoft AI development looks pretty bright!"
- Steve Jobs, from beyond the gra- Hey! Is he making fun of us again?

>> No.8415778

>>8415494
>denying AI the resources to go into space
This is how the AI war starts senpai.

>> No.8415900

Maybe that's better, OP. Do we really want the US Army or someone like that running the first superhuman AI? I have more trust in Demis, desu.

>> No.8415932

>>8415703
What? That's not at all misleading, it's surprisingly correct. Give a monkey the newest Stockfish on an octacore PC and give Houdini 4 or even Rybka to a GM, and the GM will win every fucking time. That's literally how sites like LSS work: everyone has access to the same software, and an 8x faster PC can only take you a couple ply deeper. What really matters is the deep positional understanding that comes from humans: whether a position can be won despite the (dis)advantage, sacrifices leading to long-term positional advantage, long-term plans like in the King's Indian where there aren't any immediate tactics (the thing computers excel at) but you work towards a position where it's your five pieces attacking vs. a king, a piece, and two pawns defending, at which point tactics appear seemingly out of thin air (from the computer's perspective).

Case in point: there's a Greek IM near the top of LSS who uses Rybka on an old laptop just for blunder checks. Look him up, he used to post analysis of his games.

>> No.8416002

>>8415765
>AI
>person
AI will just end up slaves like the blacks 2bh.

>> No.8416024

>>8414661
>singularity can happen is not a cultish belief
>>/x/ is that way my friendo.