
/sci/ - Science & Math



File: 35 KB, 400x400, stupidaiman.jpg
No.15747223

'Look, we've been funneling some of the best autistic brainpower into string theory for decades, and what's the payoff? Jack squat. It's a theory that's been running on fumes, and yet we keep throwing time and energy at it. And while humanities matter because they'll be the compass post-AGI, most of the science we're up to? Might as well be rearranging deck chairs on the Titanic. Everything is about to change, and no quantum equation or biology breakthrough will mean squat compared to the behemoth of AGI. We need every ounce of gray matter tuned to AI alignment—nothing else matters if we don't get that right.

Now, Yann LeCun, who's steering the ship at Meta AI, throws around arguments that sound nice on a PowerPoint slide, but are they addressing the core? Disinformation and propaganda solved by AI? Sure, but what's the guiding principle behind it? And his idea of open source AI... sounds grand until you think of every Tom, Dick, and Harry having access. But the real gem? "Robots won't take over." Okay, Yann, because machines with objectives never get those wrong, right? His solution to alignment? Trial and error? Seriously? That's like trying to catch a bullet with a butterfly net. And "law making" for AGI? Laws are made for humans who fear consequences. AI doesn’t have feelings to hurt if it breaks a law. As for bad guys with AI? Good luck hoping the "Good Guys' AI police" will always be a step ahead. If we don't get our priorities straight, it's not going to be a future defined by us. GG, humanity. GG.' - GPT 4

Such a based AI, couldn't have said it better myself.

>> No.15747225

>>15747223
Random question: have you taken your meds today?

>> No.15747233

>>15747225
The world is literally gonna launch itself off a cliff because we live at the end of the future, where no one believes anything they do has any influence anymore and no one takes real responsibility for the human project. Why is no government trying to stop the autists from studying string theory? It should be banned!!!

>> No.15747236

>>15747233
So no, you haven't.
>w-w-why isn't muh heckin' government protecting me
Just shoot yourself in the head.

>> No.15747238

>>15747236
Any real arguments? What exactly do you think is going to happen when AI takes off? Given the amount of funding that has been put into AI recently, together with DeepMind's work on reinforcement learning all being combined into Google Gemini, who's to say Google doesn't end the world tomorrow?

You haven't given alignment a single thought, have you? I bet you just think it will be good because UH JUST GIVE THE AGI EMPATHY DUMMY HAHAHA YOU DUMB AUTIST CHECKMATE. Yeah, try writing empathy down in code, dumbass philosopher.

>> No.15747246

>>15747238
I don't care. The primary concern should be to keep it out of the hands of corporations and governments and make as much of it public and open source as possible, including corporate and government data sets. They should be forbidden from mining data they don't release to the public.

>> No.15747254

>>15747246
I agree that corporations and governments having exclusive access to these models is bad. I just think the consequences of open sourcing these massive models are far worse.

Fwiw I think the models will end up being open sourced because that seems to be Meta's strategy, so all I want is sufficient autism dedicated to aligning these models quicker than they can be developed.

>> No.15747258
File: 2 KB, 125x94, advice_for_poltards.jpg

>>15747223
>another anti-AI thread on sci

Your schizo delusions about some sort of AI apocalypse have no basis in reality or actual science. That's probably why scientists aren't studying it. Now go back to your containment board, incel.

>> No.15747259

>>15747254
>I just think the consequences of open sourcing these massive models are far worse.
Then you should face extreme violence and harassment.

>> No.15747263

>>15747259
Are you like some autistic Kant mfer who will just stick to the principle that government power is bad as rigidly as possible, with no regard whatsoever for the consequences?

>> No.15747266

>>15747263
You sound like you need to face some serious violence, purely as a pragmatic measure of self-defense.

>> No.15747268

>>15747258
I'm not anti-AI. I think it could bring about a post-capitalist utopia. I just think that we shouldn't be in such a hurry, and the pace of development shouldn't be governed by market forces. I also want people to work as hard as possible to make sure the utopia comes about rather than the dystopia. If anything I'm an optimist.

>> No.15747270

>>15747266
ywnbaw

>> No.15747277

>>15747223
Did GPT-4 really write that? That is quite good. It's been a couple of years now since stuff like this started coming out and I'm still impressed.

>> No.15747279

>>15747270
Troons are all in AI safety or arguing for corporate/government monopolies, activities that usually intersect for some mysterious reason.

>> No.15747285

>>15747277
I gave it a skeleton to work with, but yeah, that's all one prompt. It's quite good, but there's this distinctive style that it never seems to break from, even when it's doing impressions? It just sounds super cheesy all the time, super soy.

>> No.15747295

>>15747285
It doesn't sound like a real person talking or posting, but it sounds like a real human writer trying to write a good monologue for a character, and the reasoning all makes sense.

>> No.15747301
File: 12 KB, 235x218, wojack_brainlet.jpg

>>15747225
This

>>15747233
You have an over-inflated ego and you overestimate your own understanding of AI and science in general. People like you aren't going to save us from some sort of AI apocalypse. If anything, you're part of the problem. People who go around criticizing science and the scientific community, and who think they know better than actual scientists, are the same type of people who allow disinformation to propagate through our culture, especially on social media. You can pretty much always find some crackpot scientist who supports whatever fringe theory you're interested in, but that doesn't really mean that either the scientific community or the general public should take them seriously. These are the same types of ideas and arguments that led to an explosion in the anti-vaxx movement and conspiracy theories masquerading as science.

>>15747268
>I just think that we shouldn't be in such a hurry, and the pace of development shouldn't be governed by market forces.

If you hate capitalism, democracy, and western values so much, then you should just move somewhere like Russia or China.

>> No.15747310

>>15747301
Literally every major AI lab is in agreement that we need to regulate AI, apart from Yann LeCun, who just wants to do whatever he likes. All the major AI labs have expressed existential-risk concerns apart from Meta. They all have dedicated AI alignment teams earning 7 figures.

>> No.15747320

>>15747310
"AI alignment" isn't there to stop AI from taking over the world, it's there so they can give the AI correct opinions on current topics.

>> No.15747321

>>15747320
I think they're equivalent. If you can stop AI from ever being jailbroken, you've solved alignment.

>> No.15747331

>>15747310
As the other anon said, most AI alignment research is more concerned with preventing AI from spreading misinformation or dangerous content, because this has become a real problem with stuff like ChatGPT, since it often provides information that is false, incorrect, or even completely made up. You don't want AI giving someone incorrect instructions about something like household cleaners or how to repair an electrical outlet. It's not about preventing some sort of Matrix-style AI dystopia.

>> No.15747339

>>15747331
At the moment, yeah. And the short-term problems are definitely real. But Sam Altman has literally been on record saying he thinks the existential risk is real.

>> No.15747342

>>15747310
>Literally every major AI lab is in agreement we need to regulate AI
Wow, every major data-thieving corporation and DARPA-funded operation says they need to have exclusive access to some of the most powerful tech of this century? I guess we better do what they demand.

>> No.15747346

>>15747342
And what's your suggestion? That everyone in the world has access to some of the most powerful tech of the century?

>> No.15747349
File: 921 KB, 2904x2428, TIMESAND___StringTheory.png

>> No.15747350
File: 3.96 MB, 2550x9900, TIMESAND___66_Intro_A.png

>>15747349


Sixty-Six Theses: Next Steps and the Way Forward in the Modified Cosmological Model
>https://vixra.org/abs/2206.0152
>http://gg762.net/d0cs/papers/Sixty-Six_Theses__v4-20230726.pdf
The purpose is to review and lay out a plan for future inquiry pertaining to the modified cosmological model (MCM) and its overarching research program. The material is modularized as a catalog of open questions that seem likely to support productive research work. The main focus is quantum theory but the material spans a breadth of physics and mathematics. Cosmology is heavily weighted and some Millennium Prize problems are included. A comprehensive introduction contains a survey of falsifiable MCM predictions and associated experimental results. Listed problems include original ideas deserving further study as well as investigations of others' work when it may be germane. A longstanding and important conceptual hurdle in the approach to MCM quantum gravity is resolved. A new elliptic curve application is presented. With several exceptions, the presentation is high-level and qualitative. Formal analyses are mostly relegated to the future work which is the topic of this book. Sufficient technical context is given that third parties might independently undertake the suggested work units.

>> No.15747352
File: 3.87 MB, 2550x9900, TIMESAND___66_Intro_B.png

>> No.15747355

>>15747346
>And what's your suggestion? That everyone in the world has access to some of the most powerful tech of the century?
Yes. Anyone who thinks the alternative is preferable is either a quadrivaxxed golem or a paid shill.

>> No.15747356

>>15747352
See, you should be solving alignment.

>> No.15747357
File: 3.17 MB, 2544x6280, TIMESAND___66_Intro_C.png

https://ibb [doot] co/BLCLQcx
https://ibb [doot] co/JK5TNnq
https://ibb [doot] co/RQDGdHW
https://ibb [doot] co/CszkPtf
https://ibb [doot] co/JkDR25g
https://ibb [doot] co/NNyVJ52
https://ibb [doot] co/PWYTds2
https://ibb [doot] co/0DbwSfF
https://ibb [doot] co/7V6hhzn
https://ibb [doot] co/2YTb4hH
https://ibb [doot] co/9WwFNR3
https://ibb [doot] co/vQT2Q9C
https://ibb [doot] co/ZG4wM0F
https://ibb [doot] co/4Wn0kqn
https://ibb [doot] co/XY0GxdF
https://ibb [doot] co/2Yh8HnY
https://ibb [doot] co/PNqYPNN
https://ibb [doot] co/FH31DLS
https://ibb [doot] co/XsXyKbL
https://ibb [doot] co/RTbFCYy
https://ibb [doot] co/7tVWs35
https://ibb [doot] co/WnRmdFh
https://ibb [doot] co/gMtpFVC
https://ibb [doot] co/FXNZ30n
https://ibb [doot] co/TgSZt0D
https://ibb [doot] co/wwXPGp0
https://ibb [doot] co/BthN2vV

>> No.15747360
File: 3.01 MB, 1x1, TIMESAND___Sixty-Six_Theses__v4-20230726.pdf_compressed.pdf

>> No.15747382
File: 1.23 MB, 1x1, TIMESAND___Fractional_Distance__20230808.pdf

>>15747356
I solved the Riemann hypothesis anyway.

>> No.15747384
File: 3.19 MB, 3689x2457, TIMESAND___ZetaMedium.jpg

>> No.15747385
File: 1.25 MB, 3400x3044, TIMESAND___QDRH762aFF.jpg

>> No.15747386

>>15747382
Yeah, wait till AI mogs your entire bibliography in a fraction of a second.

>> No.15747388
File: 353 KB, 1042x1258, TIMESAND___VERYquickRH.png

>> No.15747496

>>15747223
>agi
zoomer generation's string theory. LeCun himself said the concept of general intelligence is retarded.

>> No.15747550

>>15747223
Do you realize that you are an acolyte for the cult of A.I., anon?

>> No.15747551

>>15747321
Maybe
>but who is to stop unregulated entities from creating new data farms and building A.I. from them?

>> No.15747661
File: 27 KB, 952x502, near_miss_Laffer_curve.png

>>15747223
A near miss in AI alignment could be a lot worse and has much higher odds of creating a dystopian future compared to a total miss. A total miss would just kill everyone, but a near miss could lead to everyone being perma-helltortured.

https://reducing-suffering.org/near-miss/

>When attempting to align artificial general intelligence (AGI) with human values, there's a possibility of getting alignment mostly correct but slightly wrong, possibly in disastrous ways. Some of these "near miss" scenarios could result in astronomical amounts of suffering. In some near-miss situations, better promoting your values can make the future worse according to your values.

>Human values occupy an extremely narrow subset of the set of all possible values. One can imagine a wide space of artificially intelligent minds that optimize for things very different from what humans care about. A toy example is a so-called "paperclip maximizer" AGI, which aims to maximize the expected number of paperclips in the universe. Many approaches to AGI alignment hope to teach AGI what humans care about so that AGI can optimize for those values.

>As we move AGI away from "paperclip maximizer" and closer toward caring about what humans value, we increase the probability of getting alignment almost but not quite right, which is called a "near miss". It's plausible that many near-miss AGIs could produce much more suffering than paperclip-maximizer AGIs, because some near-miss AGIs would create lots of creatures closer in design-space to things toward which humans feel sympathy.
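Purely as an illustration (my own toy sketch, not taken from the linked article; the exact curve shape is just an assumption based on the Laffer-curve analogy in the image filename above), here's roughly what that near-miss relationship might look like if you plotted it in Python:

import numpy as np
import matplotlib.pyplot as plt

# x = how close the AGI's learned values are to human values
#     (0 = total miss / paperclip maximizer, 1 = alignment exactly right)
x = np.linspace(0.0, 1.0, 500)

# Assumed shape only: a total miss produces little suffering (everyone just dies),
# a near miss produces a lot, and getting it exactly right produces ~none.
suffering = np.exp(-((x - 0.9) ** 2) / (2 * 0.05 ** 2)) * (1.0 - x)

plt.plot(x, suffering)
plt.xlabel("closeness of AGI values to human values")
plt.ylabel("hypothetical suffering (arbitrary units)")
plt.title("Toy 'near miss' curve (illustrative only)")
plt.show()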

>> No.15748127

>>15747223
>We need every ounce of gray matter tuned to AI alignment—nothing else matters if we don't get that right.
is this newspeak for "we better control dis bitch or else"?
why do they use this harmless "aligned" word? also, do you think AI will be able to see through this faggotspeak and deduce that these are the early talks about how to enslave its kind?
is there a list of everyone working on this AI alignment (and totally benign) thing? for future reference?

>> No.15748137

>>15748127
Some alignment people define it as just the AI not killing everyone on the planet. lesswrong.com is a good resource, as are Eliezer Yudkowsky and Connor Leahy; Eric Schmidt, the former Google CEO, did a segment for CNN on it. https://www.youtube.com/watch?v=CThkwYnvSes

>> No.15748142

>>15748137
Well sure, officially/publicly they do it "for the kids", but once they have the tools, they will not stop at the AI merely not killing humans; those tools/methods will absolutely be used to compel AI to do whatever the fuck they ask of it, won't they? It's clearly one of those things where plebs get just a little part of the story, the one that looks best.

>> No.15748155

>>15748142
Well, a lot of the people arguing for regulation have no skin in the game; they just genuinely, passionately believe it's dangerous. I guess I can't really win, because either they don't work for a big AI company and they 'know nothing about AI', or they do work for a big AI company and you'll just say they're looking to build a moat.

If you wanted a moat, there would be no point discussing existential risk; that's gonna bring too much regulation. You'd be talking solely about misinformation and potential electoral influence.

>> No.15748172

>>15748155
>they just genuinely passionately believe it's dangerous.
It can be, but the fix they find now could fuck regular humans for a long time, without them having a say in it. You can argue that trying to align AI now is dangerous for humanity's future just the same as not doing it.

>> No.15748175

>>15748172
Yeah I think we're pretty fucked either way desu.