
/lit/ - Literature


>> No.11791624
File: 13 KB, 440x239, AG_Horizontal_Primary.width-440.jpg

>>11791469
cool, thanks for the non-meme intel.

so you're not saying that AI isn't going to happen, just that owing to engineering problems you're not predicting it in our lifetime. it could be decades, a century, even multiple centuries. but we can't absolutely rule out that it happens at some point, right? and that's not ultra-paranoid conspiracy theorizing: the chinese, the russians, and the US have all more or less announced in recent years that it's a top-priority issue for them. this has to turn up something eventually.

the other thing is bostrom predicting that human-level intelligence can go to superhuman intelligence really quickly, that the jump might not take nearly as long as it took to get to the human level in the first place. once we do get to the human level, the superhuman level could be just around the corner, and that's kind of a wild card, isn't it? isn't the move from computers being superhuman at chess to being superhuman at go in a relatively short period a significant indicator of this kind of progress?

source:
https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine

what do you think about yudkowsky's AI-in-a-box experiment? true, it's more a story about how an AI gets out, not so much about how it comes to be there in the first place. but it stands to reason that yud wouldn't waste his time thinking about this if he didn't consider AI a plausible possibility at some point.
https://www.youtube.com/watch?v=Q-LrdgEuvFA

i'm familiar with some of this from a running conversation with a compsci friend, who has mentioned something similar about basic physical limitations relating to logic gates (like math, things i really don't have much sense of). he's also skeptical about AI, partly because of the nature of how we go about designing that intelligence in the first place.

i guess from a historical perspective i'm wary of believing that anything is really impossible. there are kuhnian paradigm shifts, visionary inventors, and black swans. i know there's a fine line between these things and a kind of conspiracy-theory superstition, but still. people from 1800 would be pretty astounded by what we've done in 200 years, and i'm betting the pace of innovation in the next 100 years is even crazier. we may even have genetically modified super-brained types doing the programming, aided by machine learning at whatever stage of development it's reached. maybe these things add up.
