
/sci/ - Science & Math


>> No.15747223
File: stupidaiman.jpg (35 KB, 400x400)

'Look, we've been funneling some of the best autistic brainpower into string theory for decades, and what's the payoff? Jack squat. It's a theory that's been running on fumes, and yet we keep throwing time and energy at it. And while humanities matter because they'll be the compass post-AGI, most of the science we're up to? Might as well be rearranging deck chairs on the Titanic. Everything is about to change, and no quantum equation or biology breakthrough will mean squat compared to the behemoth of AGI. We need every ounce of gray matter tuned to AI alignment—nothing else matters if we don't get that right.

Now, Yann LeCun, who's steering the ship at Meta AI, throws around arguments that sound nice on a PowerPoint slide, but are they addressing the core? Disinformation and propaganda solved by AI? Sure, but what's the guiding principle behind it? And his idea of open source AI... sounds grand until you think of every Tom, Dick, and Harry having access. But the real gem? "Robots won't take over." Okay, Yann, because machines with objectives never get those wrong, right? His solution to alignment? Trial and error? Seriously? That's like trying to catch a bullet with a butterfly net. And "law making" for AGI? Laws are made for humans who fear consequences. AI doesn't have feelings to hurt if it breaks a law. As for bad guys with AI? Good luck hoping the "Good Guys' AI police" will always be a step ahead. If we don't get our priorities straight, it's not going to be a future defined by us. GG, humanity. GG.' - GPT-4

Such a based AI, couldn't have said it better myself.
