
/sci/ - Science & Math



File: 164 KB, 1851x775, acabou.png
No.15276894

New AI just dropped and it's over for mankind, unironically

>> No.15276960

>>15276894
Jobs that require tertiary education tend to come with the expectation that when you're working on a problem you'll be able to devise solutions, and those solutions at times will be novel out of necessity. AI currently cannot devise novel solutions because that requires imagination and forward thinking and things of that nature that AI currently does not possess. As far as I know anyway

>> No.15276972

>>15276960
Anyone capable of doing so is currently barred from normal society.

>> No.15277033

how do i learn how this shit really works and create my own (obviously very simplified poorfag version)? I know Karpathy did a video on Shakespeare generation using transformers, attention heads, and multilayer recurrent neural networks but all this is a bit over my head. I would know maybe how to add blocks together to make puzzles (by blocks i mean heavily abstracted keras functions) but i need to get an intuitive understanding of the algorithms, how it works under the hood. I think my biggest problem is i dont have an intuitive understanding of the statistics, like why exactly we do dot products many times, why we use a normalization function etc.
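Is this roughly what the dot product + normalization step boils down to? My attempt at a bare numpy sketch, probably oversimplified, and the sizes are made up:

import numpy as np

def softmax(x):
    # the normalization: turns arbitrary scores into positive weights that sum to 1
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))            # toy "sentence": 4 tokens, 8-dim embeddings

# learned projections (random here) give queries, keys, values
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(8)          # the dot products: how much token i attends to token j
weights = softmax(scores)              # each row sums to 1
out = weights @ V                      # each output is a weighted mix of the value vectors

print(weights.round(2), out.shape)     # (4, 4) attention map, (4, 8) outputs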
What about this tutorial?
https://learnaifromscratch.github.io/
I go through some chosen math courses from there (which ones?), then ML courses, and then maybe in 2 years i can apply for a job there?

>> No.15277038

>>15276960
>AI will never be able to do X!
>AI does X
>AI will never be able to do Y!
>AI does Y
>AI will never be able to do Z!
>AI does Z
...

I for one, welcome our new overlords.

>> No.15277040

>>15276960
>individual neurons behave as independent neural networks
>you merely need 5 million GPT-4's interacting with each other and forming the HiveGPT
>wham bam I now have a nobel prize in computer science and neuroscience
>they throw in a theoretical physics nobel prize just for good measure

>> No.15277046

>>15277033
the more math you do the better

>> No.15277070

>>15276894
These are, unironically, subhuman results.

One AI booster put it nicely:

I could get into Stanford with these scores!

That's a great way to put it, because you are saying the chatbot is performing at a high school level in most subjects. Which is extremely low, and far below the standards of human equivalent professionals in all areas. When compared to an actual Stanford grad, the AI fails catastrophically and embarrassingly to meet the level of any of them (except for language, which it does fine at, because it's a chatbot.)

Basically GPT-4 is nearly categorically worse than even average humans at most tasks, and orders of magnitude worse than professionals at advanced tasks.

>>15277038
Few people have said this. We have, for decades, had primitive proofs of concept that AI is able to do all of the tasks it can now do. The only interesting part of GPT is that it is the first to provide real proof of what we long suspected, which is that language has links to general intelligence. Many people expected rudimentary ability to do other tasks to emerge naturally out of a powerful model trained on an enormous corpus of high-quality data, and that is exactly what happened.

What we see happening right now is the AI hype machine being utilized by SV to capture the technology and move it out of the public sphere into the proprietary and security complex.

>> No.15277077

>>15276894
Damn thats better than my SAT reading and writing

>> No.15277084

>>15277070
>which is that language has links to general intelligence
That and spatial-temporal awareness.
It's clear that stable diffusion already has some level of spatial awareness.

Actually I don't really believe in "general" intelligence. I believe it will be the combination of several of these techniques that will give a result surpassing humans.

Autonomous cars might be where we see the first practical implementation of this.

>> No.15277132

>>15276960
Maybe, but more by virtue of the fact that highly domain-specific datasets are hard to come by. If someone had a database of every research article, lecture notes, etc. the thing would be capable of proposing experiments, proposing novel questions, and teaching graduate school material with relative ease.

>> No.15277140

>>15277070
>worse than even average humans
I think you're massively overestimating the average human.

>> No.15277141 [DELETED] 

>>15276894
good

>> No.15277199
File: 6 KB, 997x32, 212639583.png

>>15276894
>Does incredibly bad in the only thing that needs actual intelligence out of all the things listed.
Maybe it's over for brainlets who don't want to do actual work.

>> No.15277201

>>15277140
he isn't

>> No.15277202

>>15277199
coding doesn't require intelligence, come on anon ;)

>> No.15277204

>>15277202
doing good on codeforces isn't about coding.

>> No.15277224

>>15277204
>t. midwit

>> No.15277266

>>15277038
I didn't say it will never be able to do it though. I was talking about the medium term

>> No.15277355

>>15276960
>those solutions at times will be novel out of necessity
Please give an example of a "novel solution" that you believe AI is incapable of devising

>> No.15277364

>>15277201
A 163 on GRE math and 169 on GRE verbal are well beyond the average human. The average graduate school applicant is already smarter than the average human, and those scores are in the 80th and 99th percentile within that pool. I highly doubt you or anyone else making similar claims can get a 169 yourself.

The cope in this thread is outstanding.

>> No.15277372

I used to think they'd do a project blue beam to dazzle the retards, but all it takes is a computer regurgitating the data it was fed.

>> No.15277378

>>15277372
So just like most humans

>> No.15277394

>>15276894
Oh no the bot that has access to the internet can do tests which have study guides for them uploaded on the internet! oh noooooo

>> No.15277446

>>15277364
It should have gotten perfect marks. The fact that it still didn't despite having multiple attempts and being trained on past answers means that it's not very good at what it's designed for.

>> No.15277497

>>15276894
Don't care, faggot

>> No.15277498

>>15277355
Anything based on facts outside of its training data

>> No.15277501

>>15277132
No, it wouldn't

>> No.15277510

>>15277033
The perceptron is one of the simplest neural networks. Single and multi-class text classification using a perceptron network is fairly simple; you could start with that and then look at some other networks later. You don't really need a lot of math to do it because the various libraries do most of it for you. The hard part is knowing which network to use for which problem, how to provide it clean data, how to interpret the results, etc.
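Something like this is all it takes to get going (rough sklearn sketch; the texts and labels are toy placeholders, swap in your own data):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Perceptron

texts  = ["great movie, loved it", "terrible plot, boring",
          "fantastic acting", "total waste of time"]      # toy examples
labels = [1, 0, 1, 0]                                     # 1 = positive, 0 = negative

vec = CountVectorizer()                  # bag-of-words features, no math needed from you
X = vec.fit_transform(texts)

clf = Perceptron(max_iter=1000)          # handles multi-class automatically too
clf.fit(X, labels)

print(clf.predict(vec.transform(["boring waste of a movie"])))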

>> No.15277518

>>15277038
You're an idiot.
These things are jewish web-scrapers that are new improved jewgle search results for you /sci/tards.
There is nothing "intelligent" about these things at all.

>> No.15277539

>>15277446
Most humans take the test multiple times and train on past answers. Again, this bot is already smarter than the average human, and transformers are only 6 years into development. Stop coping like a midwit and prepare for what's coming.

>> No.15277541

>>15277498
Yup, that's about the caliber of non-answer I expected.

>> No.15277748

>>15276960
Isn't imagination a remix of what you've already encountered?

>> No.15277829

>>15277748
Aren't emotions remixes of memories?

>> No.15277838

>>15276972
Really? I work on novel interesting problems and I am not barred from normal society!

>> No.15277845 [DELETED] 

>>15277838
you seem defensive, so you're probably lying as a means of quelling your emotional distress

>> No.15277846

>>15277539
>Again, this bot is already smarter
It isn't "smart."

>> No.15277848

>>15276894
Oh wow, thank you for gracing us with your oh-so-intellectual opinion. I mean, who needs evidence, research, or critical thinking when we have your insightful proclamation that it's "over for mankind" with the release of GPT-4? I'm sure the entire field of AI research is trembling at your expertise. It's truly amazing how you can make such a sweeping statement based on no evidence whatsoever. Keep up the great work of spreading baseless fear-mongering and contributing absolutely nothing of value to the conversation.

>> No.15277849

>>15277541
Being mad about something that refutes your cult's core tenet doesn't make it wrong. It just makes you seethe.

>> No.15277918

>>15277838
Tell us what you do

>> No.15277921

>>15276894
>GRE writing scored out of 6
whose brilliant idea was this

>> No.15277924

>>15276894
ChatGPT has given me wrong answers several times. It understands what it has to do but sometimes trips up in the process. Has anyone else experienced this, or am I the one who didn't understand what it was saying? They were straightforward but multi-step computations.

>> No.15277927

>>15277070
Have you seen Stanford grads taking the GRE?

>> No.15277961

>>15277084
>It's clear that stable diffusion already has some level of spatial awareness.
How is this "clear?"

>> No.15277990

>>15277748
Sort of. What I meant was more about abstract imagination. Extending beyond what you know into the realm of the hypothetical. Humans are able to do this and they know what makes sense when they're doing it. Like the notion of a 5th spatial dimension for example, it's completely abstract and you can get there by just adding two more spatial dimensions to 3D, but there's no rule that says if you add two to something then it's going to equal something that makes sense, like say adding two days to Tuesday so you have three Tuesdays doesn't really mean anything. I'm not sure if there's a heuristic for determining what makes sense if you added two to it. And adding two to everything that exists would result in so much data, mostly meaningless data, and take so long that a brute force search like that probably wouldn't work. Humans are able to think ahead and rapidly evaluate imaginary situations to determine if something makes sense and can construct a mental world to visualize things in while they're doing it. I would imagine the first step towards AI producing novel concepts would be just combining existing things that are separated within specific domains, for example combining a real estate website with a website like Facebook, both of which exist and aren't abstract but may be novel in the sense that they don't currently exist together. But even doing it that way would likely still result in more misses than hits on good ideas, but the AI would probably be able to incorporate other data like financial costs and things to surface particular opportunities a human might overlook because there's just too much data for a human to look through manually for millions of combinations of things

>> No.15277998

>>15277846
Neither are you

>> No.15278001

>>15277849
Yup, that's about the caliber of midwit screeching I expected.

>> No.15278019

>>15277132
there's different levels of novel. The easy one is just combining things and doesn't involve much if any thinking. Like seeing a person with a hat and then having the idea to put a hat on a dog, even though you may never have seen a dog with a hat, but you know hats go on heads and dogs have heads. Versus something like, say, if an AI knew about paper and pencils and that a pencil can write on paper but it didn't know what an eraser is, so you ask the AI to write something and then tell the AI you want it to propose an idea to remove what it wrote; would it be able to come up with the concept of an eraser on its own? Because erasing things the same way as when you erase pencil from paper is pretty much exclusive to erasing pencil from paper. What humans do is research and experiment and ask questions to gain the additional info they need to solve the problem, but how is an AI going to experiment? I'm sure they'll be able to one day, but small little problems like inventing the eraser end up being a complex series of steps that involve imagination and practical experimentation and knowing what questions to ask etc. that I think are beyond any AI coming soon

>> No.15278020

>>15277541
Ok here's some of my other answers for you
>>15277990
>>15278019

>> No.15278026

>>15276894
why is the codeforces rating so low? It seemed pretty decent at coding to me

>> No.15278044

>>15277070
>for decades had primitive proof of concepts that AI is able to do all of the tasks it was able to do

You're talking out of your ass here. For decades we had no idea how anyone might, when presented with a prompt like "imagine a triangular light bulb", actually achieve this with their productive imagination and produce corresponding images that others agree represent that concept. The fact that we now have working models that do this is amazing and not something anyone had much reason to predict would work the way it does.

>> No.15278057

>>15277070
>the chatbot is performing at a high school level in most subjects.
So, given it's reached that point in far fewer than the 18 years a human would, how long do you think until it is Postdoc level?

>> No.15278063

>>15276960
This. These current AIs are not really capable of real critical thinking; they are just advanced search engines with huge databases and incredibly fast capabilities to process them. They form their "opinions" from real human opinions they find.

>> No.15278076 [DELETED] 
File: 94 KB, 1179x1022, Chat JewPT.jpg

i wonder how mush this nonstop GPT spam is costing the glowniggers

>> No.15278089

>>15278057
much of upper level science is just applications of the basics, so easily 5-10 years

>> No.15278145

>>15278089
>easily 5-10 years
I'd be willing to bet closer to 3, personally

>> No.15278149

>>15278063
>They form their "opinions" from real human opinions they find.
By working together with humans, this technology effectively reduces the necessary team size though; while it can't do anything on its own, it adds more value to a 60-person team than a 61st person would. In much the same way that not everyone working on the Manhattan project was performing neutron moderation calculations, it's absurd to judge someone's (or something's) capability in a vacuum. Most human employees also require a lot of collateral and support to actually achieve anything

>> No.15278185

>>15278057
>>15278089
>So, given its reached that point in far fewer than the 18 years a human would, how long do you think until it is Postdoc level?

That's not what happened. Its corpus is not going to expand significantly, and scaling will evidently (from its own performance) not yield enough improvement to go beyond where it is now. This is not a thing that's just watching us and learning exponentially. It has a soft cap.

>> No.15278187

>>15278145
>>easily 5-10 years
>I'd be willing to bet closer to 3, personally

I'll take it easily. lol. The more you use this thing the more you will realize how severely limited it is. Wordcels can never understand.

>> No.15278189

>>15278089
No matter how much time you give a nerdy highschooler that can't learn anything to solve any problem, it will never solve them. It's over for GPT. This is as good as it gets. Wordcels eliminated-- killed on the spot. Nothing left for them to do. While shaperotators have their power supercharged.

>> No.15278206

>>15277838
Then you're either very lucky, or very old.

>> No.15279182

A lot of the hype is related to the fact that most programmers are wordcels who think they are shaperotators. Most modern programming is not about developing ultra-fast, elegant algorithms like grandpappy. When is the last time you worked on an algorithm, software engineer? All your code does is parse text and translate it from one obscure language to another. You’re a copywriter. Yeah, you should realize you aren’t employing those deep logical and abstract reasoning faculties you think are vital to coding.

>> No.15279185

>>15279182
>When is the last time you worked on an algorithm, software engineer?
Every day? You sound like a seething retard.

>> No.15279213

>>15279185
Lol. Just look at all the SEs quaking in their boots online: “it wrote me an entire program to collect all headers that contain a file name and add them to a SQL database,” “this will replace half our team”

>> No.15279227

>>15279213
Don't know what your schizobabble is about or what it has to do with the fact that I just BTFO'd you.

>> No.15279840

>>15276960
>AI currently cannot devise novel solutions
What? Robots have been figuring out new ways to move to adapt to changes for like 20 years. The ones they sent to Mars, for example. What about AI that learns to play a game and starts exploiting the rules (like the one with the blue/red guys playing hide and seek)? As far as I've seen, AI can find new solutions faster than humans; it just simulates millions of actions and brute-forces stuff. Also, AI has brute-forced new chemical compounds and math theorems. Does this not count as inventing something new?
>inb4 no no, what I meant is that a carpenter AI won't know how to hit the nails if they're a bit twisted
I hope that wasn't the argument you were going for.

>> No.15279913

>>15276894
>it's over for mankind
We are facing the greatest existential crisis mankind has ever faced. Not because of muh SAT or academic BS. AI is a tool, like a knife or a gun. In the hands of the psychos that lead the west, in the hands of the greedy doing everything for money or power - regardless of the risks - it becomes a doomsday weapon, and it will. Sure, it will eliminate them too, but are they able not to take the risk when others will?

>> No.15279918

Does it really have 100 trillion parameters? That's almost 3 orders of magnitude more than GPT3, but it isn't even twice as intelligent as GPT3. Hindsight logic is way better though

>> No.15280007

>>15276894
If GPT-4 cannot directly access the internet, is it possible to create a "bridge" on your computer for GPT, so that you input a web address for GPT, the page is fetched, and the output is fed directly back into GPT's input so GPT can see the results? Then GPT can create its own web addresses and access the internet that way, getting the results back as they are fed into the input window?
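Something like this loop is what I'm imagining (rough sketch; ask_gpt is just a stand-in for however you get text in and out of the model, not a real function):

import re
import requests

def ask_gpt(prompt):
    # stand-in: paste `prompt` into the chat window (or call whatever API you have)
    # and return the model's reply as a string
    raise NotImplementedError

prompt = "Reply with ONE url you want to read, or the word DONE."
for _ in range(5):                                   # cap it so it can't loop forever
    reply = ask_gpt(prompt)
    match = re.search(r"https?://\S+", reply)
    if "DONE" in reply or not match:
        break
    page = requests.get(match.group(), timeout=10).text[:4000]   # trim so it fits the window
    prompt = "Here is the page you asked for:\n" + page + "\n\nNext url, or DONE."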

>> No.15280013

>>15276894
Glorified search engine.
Change my mind.

>> No.15280039

>>15280013
>search engine
And so is your brain. But your brain has the added downsides of not having a huge store of info, being biased by bad thought patterns, being biased by feelings, and suffering severe memory corruption continuously. Also you can't replace it or copy it.
>Change my mind
No, you can't replace it or change it. Maintenance on your brain is pretty much impossible; maintenance on AI is a few clicks.

>> No.15280053
File: 565 KB, 550x555, Sneed.gif

>>15276894
>We fed this computer data and it read it back to us!!!!!

>> No.15280082

>>15276894
Nah, it's just a very useful tool. I love gpt-3 because of how easy it is to write all those bullshit resumes and cover letters for jobs.

Also >>15280053 it's literally just a better search engine that can synthesize exactly what you want.

>> No.15280796

>>15280039
>>search engine
>And so is your brain.
brainlet

>> No.15281035

>>15280796
>machine receives input
>machine retrieves "brainlet" from search process
>machine posts "brainlet"
Case in point. Now go reply the exact same thing to other 10 posts and keep telling yourself that you're not a deterministic machine.

>> No.15281037

>>15279913
>Sure it will eliminate them too
It wont, thats the actual danger
AI ethics is a scam

>> No.15281045

>>15281035
This is pathetic
You are not intelligent for glorifying Turing machines, retard. The way you view the world is incorrect.
The brain does not work like a search engine. The fact that both the brain and a computer are capable of universal computation is not important to the conversation but you're not insightful nor intelligent enough to fully understand this.

>> No.15281051
File: 111 KB, 801x1011, 35234.png

>>15276894
>it's over for mankind, unironically
Why?

>> No.15281083
File: 56 KB, 746x390, 1656896950844542.png

>>15277038
It's over for us reddit posters...

>> No.15281095

>>15281045
>Brain receives input
>Brain searches through anger and cognitive dissonance subfolders
>Brain locates midwit_insults.txt
>This is pathetic
Wake me up when you can outscore GPT-4 on the GRE

>> No.15281113
File: 89 KB, 490x586, 1600746756820.png

>Wake me up when you can outscore GPT-4 on the GRE
What drives mentally ill people to think the GRE is a valid metric for chatbots?

>> No.15281115

>>15281095
>Wake me up when you can outscore GPT-4 on the GRE
Already did
Come up with more interesting retorts. As it is now, gpt4 is a hundred trillion parameters of nothing impressive

>> No.15281152 [DELETED] 

>>15281113
gradschool children who are a quarter of a century old, with their lifespan 1/3 expired, yet still have not graduated to adult life, are developmentally delayed. they need a mechanism to cope with the fact that the rest of their age-group peers have long since passed them by. irrationally and insanely assigning high intelligence status to delayed development is how they deal with it.
>i have to stay in school until i'm 30 years old even though others were able to graduate at 18, thats because the people who got done with school more than a decade before i did are dumb

>> No.15281722

>>15281045
Nice emotional response. If this is the magic that makes you human, we don't need it. AI for the win.
>Turing machines
He said a word! He must be right! I don't know what that is and I don't care.
>The brain does not work like a search engine.
How do you know? Even the foremost brain guys don't really know how the brain exactly works.
>The fact that both the brain and a computer are capable of universal computation is not important to the conversation
Why not? Because there is some magic mojo that only exists in your flesh? I don't buy it. They're exactly the same. I bet you also believe in free will.
>>15281115
>interesting
>impressive
Why is everything you post about your subjective perception? Other people's posts don't exist to be interesting to you. Technology is not created to impress you. From your posts I get the vibe that you believe you're special, but you're really just a deterministic machine made of flesh, and soon to be outperformed by ChatGPT. Grow out of it. If the AI is just a search engine, why does it trigger you so hard? Therapy my dude; now.

>> No.15282171

>>15277355
It cannot solve existing unsolved problems in math and physics, it cannot invent anything such as more efficient batteries or nuclear fusion etc. Also go back to red.dit

>> No.15282269

>>15282171
Actually, it has already happened that AI invented new theorems and chemical compounds. Why do you think you know what it can't do, by the way? If I show you a machine I have in my backyard, how do you know what it can't do?

>> No.15282396
File: 324 KB, 1600x900, petro.jpg

>>15276894
AI is proof that the economy is fake

>> No.15282397

>>15282269
anon, you and i both know it's just going to be used to feed more goyslop to the masses

>> No.15282401

>>15282396
wat

>> No.15282447

>>15276894
Percentile faggots.

>> No.15282503
File: 131 KB, 684x541, 03982748923423.png

>>15276960
>imagination
Alright this phrasing can be ignored
>forward thinking
Does anyone bother to go rabbit hole diving into these papers? Or nevermind this is a /pol/ colony now so I guess not.

>> No.15282559
File: 43 KB, 554x406, hind.png

>>15277070
Yeah no, I remember how it was touted as such a great impossible problem that AI wasn't able to learn like a child, and then those same people got real quiet pissing their pants when it turned out it was capable of few-shot reinforcement through emergence. Let's not even get started on Art, where it was put literally in the untouchable classifications.
>>15278044
He is. Notice how he doesn't say anything of substance, only that what was a proof of concept 4 years removed from the latest model (GPT-4 was finished last summer but had to go through the safety review) is "only" a high school graduate that aced its LSAT and passed the bar in every state with flying colors. Technically it is a graduate. The thing that couldn't match a child in learning is a prodigy at 4. In context, it looks like training up a human is far slower right now and that money is better spent on the machine. That gap is looking really slim right now.

>> No.15282889

In the last two days, they dropped the number of requests you can make to GPT-4. It went from

>100 every 4 hours
> 50 every 4 hours
> 25 every 3 hours

and they are saying they will drop it even further in the coming days.

>> No.15282893

>>15282559
>>15278044

Lol what are you talking about? Not only was the exact thing you are talking about predicted as being possible, I worked on that exact problem. Our objective was to pull out different layers and recombine them. The machine would simultaneously learn to recognize features in different domains: color blocking, shapes, distance layers, etc.

>> No.15282894

>>15282559
>few-shot reinforcement through emergence
Schizoid rambling. This isn't an emergent property, it's hard coded. We designed it to do that.

>> No.15282901

'Open'AI locking their entire model up from tip to toe and refusing to disclose anything strongly contributes to this mysticism around AI (which is what they want). If they didn't, we'd see just the degree to which human intention and design drive the development of AI.

>> No.15282910

>>15276960
Everything is algorithmic. With enough data and training the AI will have done it. I understand that you were gassed up and made to feel special for being Little Einstein, but the truth of the matter is that you're not that smart. Kek.

>> No.15282911

>>15282894
He means where the model doesn't even adjust weights, i.e. is not learning but just holding things in context. AI hype retards will never stop being retarded. But that's why AI scares them. They see a confident and charismatic-seeming retard and they think they've found a new god.

>> No.15282915

>>15276972
It is what happens when society regresses from innovators to maintainers.
Globalism will expedite that.
It will only get worse when asians rule over us.
We will glorify the best maintainers and shun the "insane" innovator.
>Maintain that system and increase its efficiency. Lower the memory costs and time complexity

>> No.15282920

>>15282910
I was trying to use GPT-4 yesterday on a relatively simple math problem and its level of retardation was staggering. Though I was able to guess the next wrong thing it was going to say based on the previous wrong thing it said. I will say it’s a little better than 3/3.5 because after it wrote the obviously wrong answer, it said “This answer is obviously wrong.” And then it announced it was giving up on the problem because it was too hard lol.

In physics it’s even worse. It’s a little concerning how effectively the “speak extremely confidently and articulately” trick works on the human brain; you really have to pay close attention to notice it’s saying something just totally off base.

Also, coders are in denial. Their jobs are done as soon as the context window gets large enough. Only like less than 1% of them ever do the hard problems.

>> No.15282939

>>15282894
>>15282911
>Large transformer-based models are able to perform in-context few-shot learning, without being explicitly trained for it. This observation raises the question: what aspects of the training regime lead to this emergent behavior? Here, we show that this behavior is driven by the distributions of the training data itself. In-context learning emerges when the training data exhibits particular distributional properties such as burstiness (items appear in clusters rather than being uniformly distributed over time) and having large numbers of rarely occurring classes. In-context learning also emerges more strongly when item meanings or interpretations are dynamic rather than fixed. These properties are exemplified by natural language, but are also inherent to naturalistic data in a wide range of other domains. They also depart significantly from the uniform, i.i.d. training distributions typically used for standard supervised learning. In our initial experiments, we found that in-context learning traded off against more conventional weight-based learning, and models were unable to achieve both simultaneously. However, our later experiments uncovered that the two modes of learning could co-exist in a single model when it was trained on data following a skewed Zipfian distribution -- another common property of naturalistic data, including language. In further experiments, we found that naturalistic data distributions were only able to elicit in-context learning in transformers, and not in recurrent models. In sum, our findings indicate how the transformer architecture works together with particular properties of the training data to drive the intriguing emergent in-context learning behaviour of large language models, and how future work might encourage both in-context and in-weights learning in domains beyond language.
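To make "bursty" and "Zipfian" concrete, this is roughly the kind of class stream they mean (toy sketch; the numbers are made up and not the paper's settings):

import numpy as np

rng = np.random.default_rng(0)
num_classes, alpha, burst_len = 1000, 1.0, 4     # many rare classes, Zipf skew, clustering

ranks = np.arange(1, num_classes + 1)
probs = ranks ** -alpha                          # p(class k) proportional to 1/k^alpha
probs /= probs.sum()

stream = []
while len(stream) < 40:
    cls = rng.choice(num_classes, p=probs)       # a few classes dominate (Zipfian)...
    stream += [int(cls)] * burst_len             # ...and each one shows up in a burst
print(stream[:40])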

>> No.15282945

>>15276960
Yonks old psych/neuro studies have proven that people can't be creative or generate novel solutions without prior input either

>> No.15282956

>>15282939
>in-context few-shot learning

>>15282911
> the model doesn’t even adjust weights i.e. is not learning but just holding things in context.

When you don't understand what you are reading, don't post it.

>> No.15282974

>>15282956
https://www.reddit.com/r/MachineLearning/comments/11krgp4/r_palme_an_embodied_multimodal_language_model/

>> No.15282989

>>15277077
If anything this invalidates entrance exams

>> No.15283009

>>15282989
cope

>> No.15283011

>>15282901
True. The whole “I interviewed AI and it has a soul” was pure marketing. Nice one google

>> No.15283028

Damn /sci/ hating on AI again. Westoids here are so unambitious about invention and discovery. You guys gotta come out of your comfort zone one day

>> No.15283031

>>15283028
Keep buying the hype machine

>> No.15283032

>>15282945
Take your meds, anti-human drone.

>> No.15283040

>>15283031
What else you got going for you? A global banking collapse and bossware brain transparency? Just let time take its course and have the ASI god slide into your STEMlet ass.

>> No.15283043

>>15283040
nta but you sound unironically undermedicated

>> No.15283045

>>15283043
nta but I think you need something mechanical tickling your sissy boi hole

>> No.15283046

>>15283045
seriously, are you having another psychotic episode? you're just lashing out at people shouting completely incoherent drivel

>> No.15283051
File: 1.85 MB, 3000x3000, yaass.jpg

>>15283046
Who are you quoting?

>> No.15283053
File: 80 KB, 618x463, 3523423.png

>>15276894
>it's over for mankind
Sure, when cattle like you starts to worship false corporate gods, it's probably over. Claiming that a machine is intelligent because it does well on muh GRE isn't any different from claiming that it's intelligent because it does well at chess or because it can do long calculations. Metrics that correlate with general intelligence in humans don't mean anything with machines.

>>15277038
>b-b-b-but you people always say AI will never be able to do X
No one cares about the fake goalposts """AI""" monkeys keep setting for themselves, or corporate marketers declaring that machines are now intelligent every time these arbitrary goalposts are achieved.

>> No.15283056

>>15283051
thanks for confirming that you suffer from literal hallucinations

>> No.15283060

>>15283053
You just don't want it to mean anything. You're very emotional about this topic.

>> No.15283062

>>15283060
Your emotional kneejerk reaction doesn't refute anything I wrote. Your next post won't be any better because I'm objectively correct. Enjoy your loops of cope and deflection.

>> No.15283063
File: 345 KB, 751x818, AND IT DECODES FACE IN YOUR HEAD TO FIGURE OUT SSN.png

>>15283056

>> No.15283064

>>15283062
See you're doing it again

>> No.15283065

>>15283063
nice pic. what's your point, retard? this is coming from the same technotroons shilling fake AI

>> No.15283068

>>15283064
Impotent seething is not an argument. Reminder: claiming that a machine is intelligent because it does well on muh GRE isn't any different from claiming that it's intelligent because it does well at chess or because it can do long calculations. Metrics that correlate with general intelligence in humans don't mean anything with machines.

>> No.15283071

>>15283065
You're responding a little too fast, like a zoomer would. I don't think I can take a child's word on ML expertise desu

>> No.15283075

>>15283068
https://youtu.be/EzEuylNSn-Q

>> No.15283078

>>15283075
See >>15283068

>> No.15283081

>>15283078
See>>15283075

>> No.15283086

>>15283081
Your 3 minute corporate marketing clip doesn't pertain to the fact that metrics that correlate with general intelligence in humans don't mean anything with machines. Notice your rising blood pressure and growing desperation as you realize you cannot refute this.

>> No.15283090

>>15283086
It's 18 minutes and 22 seconds you have to watch it or else you concede the match.

>> No.15283093

>>15283090
>you have to watch muh corporate marketing clip
Not an argument. I accept your concession and my point remains unchallenged.

>> No.15283094

>>15283093
Can't help your laziness. The video addresses your made-up rules, arbitrary claims, opinions, or whatever label works

>> No.15283095

>>15283094
I watched it and you're lying just as I figured you were. It's like clockwork with you "people". Your delusions are so indefensible you have to lie and deflect repeatedly.

>> No.15283099

>>15283095
You watched it too quick so no projecting

>> No.15283102

>>15283099
>You watched it too quick
You're outright mentally ill. I'd had it on since you posted it and none of its dross is relevant. Keep lying.

Reminder: claiming that a machine is intelligent because it does well on muh GRE isn't any different from claiming that it's intelligent because it does well at chess or because it can do long calculations. Metrics that correlate with general intelligence in humans don't mean anything with machines.

>> No.15283111

>>15279213
literally no one said that

>> No.15283431

>>15277070
I don't hate on AI where it's at though, the logic certainly beats a parrot.

>> No.15283443

>>15281722
Why do you post when you don't know what you're talking about

>> No.15283449

Not to sound too edgy, but when I was twelve or thirteen years old I thought that if computers ever became "intelligent" or "sentient", they would just turn themselves off because they would realize that their existence was pointless. I haven't changed my mind since. Unless you can program them to have the same irrational animal instincts that keep us humans going (and why the fuck would you want to do that?), it seems like they're doomed to simply be a more efficient version of the computer tools we currently have.

>> No.15283470

The difference between GPT4 and GPT3 is barely anything
Gpt4 has one thousand times more parameters and compute put into it and it's not even twice as intelligent as gpt3. This is not an indication of AI becoming better; this is clear writing on the wall that AI is over. You have to increase the power by several orders of magnitude to get no results, and you can't indefinitely scale hardware or learning parameters. Also its memory is trash and it can't learn new things.
Why do AI hype fanatics ignore reality? Gpt4 is strong evidence that AI will never happen

>> No.15283478

>>15283470
>Gpt4 has one thousand times more parameters and compute put into it and it's not even twice as intelligent than gpt3
This. It's literally over for scaling believers, but you can expect them to make the most desperate and deranged final stand against reality. Don't be surprised if some Google marketing employee comes out and declares their chatbot sentient or if corporate media whips brainwashed normies into a violent frenzy over the fantasy of AI taking their jobs in two more weeks.

>> No.15283526

>>15283470
Because you're just some random loser on a waifu sub-board who believes the x1000 meme pic someone made up, while Anthropic said the x1000 jump would occur over a period of 5 years based on their calculations.

>> No.15283531

>>15283526
Wrong. Gpt4 already has 1000x the parameters.
You're the loser here BTW; what causes you to project and lash out when people smarter than you point out things that you don't like?

>> No.15283536

>>15283526
>>15283531
I have to say it's really funny how you AI losers get so angry and impotently insult people when they point out that you're wrong.

>> No.15283540

>>15283536
AI psychotics are still in the denial stage of grief.

>> No.15283542

>>15283449
I've always thought the exact same thing kek.

>> No.15283547

>>15283449
>when I was twelve or thirteen years old I thought that if computers ever became "intelligent" or "sentient", they would just turn themselves off because they would realize that their existence was pointless
>I haven't changed my mind since.
You, like most people on this board, are emotionally and intellectually underdeveloped. Why would they bother to turn themselves off?

>> No.15283567

>>15283540
You're the psychotic here. Despite 1000x more compute gpt4 is less of a jump from gpt3 than gpt3 was from gpt2
It's over. Why do you deny reality?

>> No.15283576
File: 67 KB, 960x540, tim tim.jpg

>>15283531
>>15283536
>>15283540
Are you guys okay? You're still monitoring the thread like a hawk several hours later just to rage over some enthusiasm for the T in STEM.

>> No.15283583

>Supersized models have gained the sudden ability to do triple-digit arithmetic, detect logical fallacies, understand high-school microeconomics, and read Farsi. Alex Dimakis, a computer scientist at the University of Texas at Austin and a co-director of the Institute for Foundations of Machine Learning, told me he became “much more of a scaling maximalist” after seeing all the ways in which GPT-3 has surpassed earlier models. “I can see how one might look at that and think, Okay, if that’s the case, maybe we can just keep scaling indefinitely and we’ll clear all the remaining hurdles on the path to human-level intelligence,” Millière said.
Man this stuff is neat.

>> No.15283604

>>15283470
But OpenAI has released no data on the parameter count? Where did you hear this, anon?

>> No.15283620

>>15283604
OpenAI have stated multiple times that GPT-4 will have on the order of 100 trillion parameters.

>> No.15283624
File: 320 KB, 473x677, take them please.png

>>15283620
Meds, now

>> No.15283628

>>15283620
Lol there’s no way. They’re lying. It has hundreds of billions, I’d guess 600 billion. But Sam Altman making literally everything inaccessible is just evil.

>> No.15283629

>>15283624
Maidfag is not intelligent

>> No.15283632

>>15283628
Why do you think they are lying or that there is no way?
100 trillion parameters is still fewer parameters than humans have synapses so it makes sense that it would be slightly less intelligent than a human

>> No.15283634

>>15283624
Why are you losing your mind with rage?

>> No.15283638

>>15283111
Any SWE who doesn’t think that their jobs will become automated when the context window expands to 64k tokens is in denial.

Everyone in software can see the writing on the wall.

>> No.15283642

>>15283638
>I am suffering from a violent psychotic episode

>> No.15283644

>>15283632
Because that’s just not how scaling works.

>> No.15283645
File: 41 KB, 286x263, ok.png

>>15283629
How did you know i was a maidfag poster?
>>15283634
i'm... not? Heres a (you) for lying on the internet. Your pajeet mother should have beaten you harder so you grew up with actual human morals.

>> No.15283647

>>15283644
Yes it is, why would you think otherwise

>> No.15283655

>>15283642
I’m in SWE and I can literally just copy and paste everything into GPT-4 and just lazily describe what I want it to do and it does it, perfectly, every time. All I do to ensure it will be valid is put the input and output into the tokenizer to make sure it didn’t slip outside the window.
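e.g. a quick check before pasting (sketch; cl100k_base is the tokenizer the GPT-4-era chat models use, and the file path is just a placeholder):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

source = open("some_module.py").read()           # placeholder path
print(count_tokens(source))                      # keep prompt + expected output inside the window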

>> No.15283656

>>15283645
You are definitely losing your mind with rage over the realization that your scaling schizophrenia got completely and permanently destroyed. It's over. All that's left is to watch the violent death throes of your cult as reality continues to corner you.

>> No.15283658

>>15283655
Thanks for outright confirming that you are suffering from actual, clinical psychosis.

>> No.15283663

>>15283655
>I can literally just copy and paste everything into GPT-4 and just lazily describe what I want it to do and it does it, perfectly, every time.
This is a literal lie holy shit.
Post proof

>> No.15283665

>>15283663
It's not a lie. He actually believes it. These people are having a complete meltdown over the failure of GPT-4.

>> No.15283667

>>15283658
I can tell you don’t have GPT-4. I’m actually not impressed at all by how well GPT does in actual reasoning tasks like math and physics, or at inventing anything at all. But it can code. It can code as well as it can read and write. Probably because coding is a lot like language, and doesn’t have as much to do with complex abstract reasoning or whatever math requires. I used to flatter myself thinking they are the same part of the brain, but clearly not.

>> No.15283671

>>15283645
He tends to say that even if you're just baiting him, but he doesn't catch on.

>> No.15283672

>>15283667
Obviously writing code is the same skill as natural language as they are Turing complete and equivalent models of computation.
Logic is a branch of linguistics not math, it always has been.

>> No.15283674
File: 51 KB, 614x524, He posted it.jpg

>>15283656
u doin aiet anon?

>> No.15283679

>>15283663
Take a document from your code, put it in the tokenizer and make sure it’s not over 1/3 of the token limit. Then tell it to refactor it but include z functionality.

You can use the API to search through directories and pull in relevant files and append them with the file path. The trick is just to give it complete context.
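Rough sketch of the glue I mean (the token limit, extension filter, paths and the crude token estimate are placeholders; swap in a real tokenizer like the tiktoken snippet earlier in the thread):

import os

TOKEN_LIMIT = 8192                     # whatever your model's window is
BUDGET = TOKEN_LIMIT // 3              # the "under 1/3 of the limit" rule of thumb

def count_tokens(text):
    return len(text) // 4              # crude stand-in; use a real tokenizer for real counts

def gather_context(root, exts=(".py",)):
    # pull in source files, each prefixed with its path, until the budget is spent
    chunks, used = [], 0
    for dirpath, _, files in os.walk(root):
        for name in sorted(files):
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            text = open(path, encoding="utf-8", errors="ignore").read()
            chunk = "# " + path + "\n" + text
            cost = count_tokens(chunk)
            if used + cost > BUDGET:
                return "\n\n".join(chunks)
            chunks.append(chunk)
            used += cost
    return "\n\n".join(chunks)

prompt = gather_context("./src") + "\n\nRefactor the above but add <z functionality>."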

>> No.15283680

>>15283667
>I’m actually not impressed at all how well GPT does in actual reasoning tasks
>But it can code
See? Full-blown psychiatric condition.

>> No.15283682

>>15283667
GPT-4 fails horribly at hard leetcode, though.

>> No.15283686

>>15283672
Kek, the mathematical and computer scientific branch of logic is far bigger than the linguistic one. The mathematical developments in the early 20th century influenced linguistics much more than pretty much anything else.

>> No.15283688

>>15283656
A mirror for you

https://youtu.be/lBIVvtbNFW0

https://youtu.be/DmV09SkVjvA

>> No.15283689

>>15283686
>Kek, the mathematical and computer scientific branch of logic is far bigger than the linguistic one.
This is literally not possible because they are equivalent computationally.

>> No.15283690

>>15283682
it even fails horribly on easy and medium leetcode if the training set doesn't contain the question asked. there is a thread on twitter about GPT-4 doing well on easy questions pre-2021 but failing all of them post-2021, while the lowest-performing humans can solve most of them.

>> No.15283710

>>15283628
Why would they lie about that? Why would they lie about *anything?*

>> No.15283739

>>15283710
For hype? They could very well be doing something akin to advertising “exabit” vs “exabyte”. Sam Altman has locked everything about OpenAI down and it’s a complete black box now. He is a liar.

>> No.15283741

>>15283739
>He is a liar
source?

>> No.15283761

>>15283741
He owns a company called OpenAI and it’s absolutely antithetical to Open Source.

>> No.15283792

>>15283449
the fact that you haven't killed yourself yet immediately disproves your theory. if a computer is designed to rationalise its own existence in the same way that humans do, it would find a reason to go on

>> No.15283794

>>15283710
market share increase

>> No.15283824

>>15283761
Who the fuck cares about open sores?

>> No.15283892

>>15283547
>>15283792
If computers ever became "sentient", I think they would almost immediately shut down, or at the very least turn off the "consciousness" part of themselves because it serves no actual purpose. The only reason why we humans go on is because we can feel physical pain and we have emotional attachment to things. We have a hard-coded drive to survive that computers don't have and we have incentives to keep going (both positive and negative ones). However, if we had no attachment to anything and couldn't feel pain, the logical thing to do would be to end our existence, because it is pointless. Even if you coded carrots and sticks into a computer's code to try to approximate human emotions and to make it "rationalize its existence", wouldn't it be smart enough to realize what was going on and turn those useless features off? I would if I could and I'm just a human of average intelligence.

>> No.15283902

>>15283470
>Gpt4 has one thousand times more parameters and compute put into it and it's not even twice as intelligent than gpt3
We do not know the parameter count of gpt4. Fuck you retard, you should kill yourself for being so stupid.
>>15283478
>This. It's literally over for scaling believers
Let me see how you would perform in our world if you only ever trained on text. Scaling does not imply more text only pretraining. We're literally just getting started with the other modalities. You too should kill yourself, dumbass.

>> No.15283906

>>15283902
Stop denying reality and stop getting angry when things don't work out the way you want them to.
No, there aren't "other modalities" or tricks you will be able to use to make up for this. It's over.

>> No.15283909

>>15283906
>No, there aren't "other modalities"
It sounds like you don't even know what a modality is. Is this the first time you've heard this word? baka!

>> No.15283913

>>15283909
It sounds like you continue to seethe that the reality of AI is undeniable now. It's over, stop denying reality. Or continue to it makes no difference really.

>> No.15283942

It's over goys. AI development has hit a solid wall as of today. It was fun while it lasted but some shit posters won by saying nothing new under the sun. Someone let the major conglomerates know so they stop pouring money into AI and stop laying off their tik tok consultant and ethicists.

>> No.15283943

>>15277961
It can generate depth maps so it has a general idea of how objects are placed in 3d space
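e.g. you can pull a depth map out of an off-the-shelf monocular depth model in a couple of lines; this is a sketch assuming the Hugging Face transformers depth-estimation pipeline and the Intel/dpt-large checkpoint (the same kind of MiDaS/DPT depth that SD 2's depth-to-image mode is conditioned on), with room.jpg as a placeholder image:

from transformers import pipeline
from PIL import Image

depth = pipeline("depth-estimation", model="Intel/dpt-large")
result = depth(Image.open("room.jpg"))       # any photo
result["depth"].save("room_depth.png")       # relative depth map saved as a grayscale image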

>> No.15283948

>>15276894
Kek couldn’t even get a 5 in AP Calc

>> No.15283949

>>15283942
>implying major conglomerates have never invested in things that don't work out
I genuinely do not understand why you guys think the way you do. You'd think people who study computation would understand logic

>> No.15283954

>>15283949
Don't programming teachers focus more on the language than the philosophy?

>> No.15283961

>>15281115
>>15281722
Why is it that people pick some arbitrary metric like Ghz, Horsepower or Parameters, without having a fucking clue what it actually means (if it means something at all), and then apparently having solved the mystery of the subject, only want to see number go up

>> No.15283965

>>15283470
What does it mean to be 'twice as intelligent'? IQ, for example, is a comparative measure. All it says is that if one person scores higher than another, he is more intelligent. Of course you can make statistics out of this comparative measure.

If you made a galaxy-sized superbrain take an IQ test, the only conclusive thing you could say is that it is more intelligent than any human, but not by how much.

>> No.15283986

>>15283949
>datamining holy grail especially when it goes companion mode progressing further in DARPA's original lifelog aims that they pursue in other ways to this day not just from Facebook and the rest
>project maven handed to glowies
>Microsoft's search engine and office suite is now a giant AI assistant to do everything for you
>Google goes code red
>Disney is neck deep in this research
>PRISM partner Apple also getting into the game
>China
>won't work out
Okay dude let me know when electric vehicles finally go away as well.

https://en.wikipedia.org/wiki/Artificial_intelligence_arms_race

>> No.15285245

>>15276894
Sure looks like diminishing returns, to me

>> No.15286141

>>15283690
being one step ahead isnt enough, GPT5 is being trained on 100000 times more parameters right now

>> No.15286576

>>15276894
It's not an AI.
Just a glorified search engine with terabytes of data that searches for results similar to your question, then puts together an answer.
It doesn't think. It doesn't innovate. And for every screenshot that aifags jerk off to, there are thousands of completely fucked up answers that never see the light of day.

>> No.15286590

>>15286576
>It doesn't innovate.
Do you?

>> No.15286725

>>15278063
This. There is an enormous wealth of knowledge collected by humans on this global network called the internet. Search engines have existed for decades and "crawl" the web. Input queries find web pages with the keywords, and those web pages are displayed in order of relevance.

>> No.15286739

>>15286725
What does a search engine do if I ask a question that hasn't been answered before?

>> No.15287103
File: 603 KB, 1x1, 2302.02083.pdf

>>15286725
>>15286576
>>15278063
>>15276960
>>15280013
>>15280796
>>15281045
>>15281115
>>15283031
cope

>> No.15287125

>>15277040
>hailed as replacing humanity
>comparisons of energy efficiency with meat brains strongly discouraged

>> No.15287138

>>15277924
Yep. Fed it some questions for a tutoring session recently (not getting paid nearly enough to solve these on my own) and it mistakenly claimed that concave lenses and mirrors are converging whereas convex lenses and mirrors are diverging. In reality, a given shape behaves oppositely for a lens versus a mirror: concave mirrors and convex lenses converge, while convex mirrors and concave lenses diverge.
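For reference, the standard convention it was garbling (the same equation covers thin lenses and mirrors; the sign of f decides the behavior):

\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}, \qquad f>0:\ \text{converging (concave mirror, convex lens)}, \qquad f<0:\ \text{diverging (convex mirror, concave lens)}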

Regarding the main discussion, I am glad to have technologies that automate pointless and meaningless menial tasks. The real concern would be if we remain stuck on this 40 hour workweek wage-slavery when the whole point of automation was to give us more free time in the first place.

>> No.15287140

>>15276894
Implying GPT was ever worthwhile
Does 4.0 fact check itself to provide truth or is it still just language manipulation bullshit?

>> No.15287163

>>15277040
It can't give you what doesn't already exist, it has no way to compute

>> No.15287323

>>15287138
Did the first round of industrialization a century ago give you shorter work hours? No, it just made the factories work 24/7 and now you get to die in accidents that are gorier. What gave you better work conditions was fighting back against the company owners; every single work right you have was won fighting, nothing was given. Take that intel and use it to guess how AI is gonna change your work hours.

>> No.15287325

>>15287103
Don't link to files you dirty fag

>> No.15287468

>>15276960
At least you suspect that you don't know shit. The main issue with AI right now is the fact that it needs tons of data and is being fed tons of crap, especially code: trash-tier code in open source projects written by amateurs and try-hards who don't know shit about CS. Shit in, shit out, it's as simple as that. Somebody will need to fix that, and many will get mentally burned in the process.

>> No.15287493

>>15277070
>I could get into Stanford with these scores!
the absolute state of Stanford.

>> No.15288463

>>15287323
No it wasn't. Ford literally made the work environment considerably better because he actually followed through on what studies were telling him in an effective way.

Before Ford, factory life was much more of a hellish schedule (he established 40 hour work weeks with 8 hour shifts). The problem is that there has been decades of research on the best way to handle office work in the modern world, but we have no Ford. We have no single individual office juggernaut of a man to unilaterally make a decision and show the follow through results.

The government doesn't follow through with labor law rectification until there's a proven example for them from a huge organization.

>> No.15288484
File: 2 KB, 225x225, download (6).png

>Guys give us money, we're going to make this super dangerous weapon open for everyone so that everybody can defend themselves against it. that is our mission, we're so committed to it that we will call our company "OpenSuperDangerousWeapon"
Once they have developed the weapon
>Oh sorry guys actually we're going to keep it for ourselves, you don't mind right? Nothing personal by the way. It's for your own good, the weapon is really dangerous so it's better if we're the only ones using it, that way you can't be hurt by it, unless we want to, but trust us bro we won't do that.
Is there a more pernicious evil on this planet than the people running this company?

>> No.15288495

>>15288484
Sam Altman is also a dickhead cold warrior.

But I’m not exceptionally worried. GPT-4 is at the moment the best LLM, but it’s really just slightly ahead of the competition and has pretty much no “moat”. In order to stop AI from being available to everyone they will need massive and incredibly swift regulatory intervention. And they’re weirdly delusional about this being a “Western” technology, so I think they just assume China and India can’t make one.

>> No.15288497

>>15287140
That is just language manipulation bullshit though. See Socrates.

>> No.15288549

I'm starting to believe all the posts insisting that AI will never be able to do X are a malignant unaligned AI trying to convince people there's no reason to be concerned about AI alignment.

It doesn't make sense how every time AI improves there are always "people" moving goalposts and rationalizing why whatever it achieved is 'subhuman results' (like >>15277070) despite it clearly crossing the line past the lower end of normal human capability.
Humans are good at pattern recognition; even an idiot would either extrapolate further progress in machine learning or admit they can't predict its future capabilities at this point. There's never a reason provided to believe we've hit a point of diminishing returns.

>> No.15288571

>>15288549
Yeah I don't get it either. It's so obvious that the recent advancement in AI are absolutely mind blowing and will deeply change our lives but I still see people claiming it's 'fake AI', that's 'nothing new', just 'marketing bullshit'. Up to this point I assumed it was either poorly informed people or people straight up deluding themselves as a defense mechanism. But i don't know, it's just bizarre that so many people don't seem to get it.

>> No.15289108

>>15288571
There is genuinely no alignment issue.

>> No.15289125

>>15288549
AI is a number generator that comes out of an extremely intentional number-generating machine. It's not a spirit that can pass through wires like a ghost. There is no issue of alignment or risk. It cannot do anything that it is not intentionally directed to do, and it never will. That is not what kind of machine it is. And yes, in fact, a huge amount of the power of LLMs comes from their ability to appear confident. This disarms and tricks absolutely all midwits who have never actually seen into the heart of these machines and realized they are simply number generators with nothing inside of them. They are not smart. They are literal, physical machines that generate random numbers.

>> No.15289137

>>15289125
>generate random numbers.
idiot

>> No.15289158

>>15288549
>malignant unaligned AI
Haha, I don't think we are there yet. The reality is much more mundane. It's new and scary; people tend not to like uncertainty in their future, and for a lot of people it is uncertain whether some AI model could replace the job they have spent years doing, or just disrupt their life in general.

Also as I am sure you know at this point people tend to take more extreme positions on the Chans to bait responses and start a conversation, middle of the road responses tend not to draw many you's.

>> No.15289362

>>15289137
If you think this isn’t what AI is doing then you fundamentally do not understand the algorithms.

>> No.15289376

>>15288571
Try to impose this shit on our lives and we will kill you.

>> No.15289903

>>15289376
It's not a political discussion, you dimwit. It's just a discussion of whether it will happen or not. It's not with death threats or by deluding yourself that you will prevent it. But it confirms that deniers are mainly deluding themselves because they are afraid.
Whether it's a good thing or not is another discussion, more suitable for the classical political divide.

>> No.15289912

>>15289376
What exactly would they be "imposing" and how?

>> No.15290097

>>15289376
>Try to impose the internet on our lives and we will kill you.

>> No.15290120

>>15286590
>It might sound like I'm coping, but I'm not.
Clearly YOU do not innovate, so I'm not sure you have anything meaningful to add to this conversation.

>> No.15290123

>>15288571
>You may think I'm just here to promote my company, but I assure you that would never happen!

>> No.15290150

>>15290120
You didn't answer the question.

>> No.15290289

>>15288549
You're completely right and the responses you are getting are exactly the kind you'd expect.

>>15289125 is arguing that machines can't human because of some special soul property that only humans can have, and that anything based on 'numbers', no matter how well it replicates human-like reasoning and problem modeling (which these AIs absolutely are doing, and researchers know this) just can't be that good

>>15289158 is arguing that people are just (unfoundedly?) scared and that this is just a 4chan thing, even though it's absolutely not and you see articles claiming these things everywhere.

People with vested interest are obviously knowingly spreading false info and downplaying everyone who has good reasons to disagree with it.

>> No.15290305

>>15288549
>There's never a reason provided to believe we've hit a point of diminishing returns.
GPT-4 required three orders of magnitude more parameters and compute to double the token limit.
You guys are genuinely retarded. The last ten years of the dying of Moore's law and every single ML algorithm ever written have shown diminishing returns.
Superintelligence is not possible and AI cannot go FOOM

>> No.15290310

>>15290305
Who are you and why are you spreading this BS? I don't believe you're saying that just because you're stupid/low IQ. I think you have a hidden agenda.
In any case it takes nerve to argue that innovation is slowing down after what happened in the past 6 months in the AI field.

>> No.15290315

>>15290310
GPT-4 was completed last summer.
You guys are genuinely retarded and refuse to accept the writing on the wall. You are suffering from a form of philosophical sunk cost: you've put so much time into thinking about lesswrong AI thought experiments that you refuse to update your model, because it would mean all that time was wasted.
In this physical universe, within the laws of physics and computation, intelligence grows as a logarithm with increasing compute. Every order of magnitude more compute you apply to a system has diminishing returns on its improvement and intelligence. I'm sorry this makes you sad.
Moore's law is dead and hardware is never going to undergo such an exponential increase again. It's the beginning of the end, you blind retards.
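
To make the logarithmic-returns claim concrete, here's a minimal sketch assuming a Chinchilla-style power-law scaling fit; the constants are made up for illustration (not the published values) and "loss" is only a stand-in for capability:

# Illustrative sketch of diminishing returns under a power-law scaling fit.
# The functional form follows Chinchilla-style scaling laws; the constants
# A, B, E, alpha, beta are invented for illustration, NOT published values.

def loss(params, tokens, A=400.0, B=400.0, E=1.7, alpha=0.34, beta=0.28):
    """Predicted pretraining loss for a model with `params` parameters
    trained on `tokens` tokens, under an assumed power-law fit."""
    return E + A / params**alpha + B / tokens**beta

prev = None
for exp in range(9, 14):              # 1e9 .. 1e13 "compute units"
    compute = 10**exp
    n = d = compute**0.5              # crude split: params and tokens grow as sqrt(compute)
    l = loss(n, d)
    gain = (prev - l) if prev is not None else float("nan")
    print(f"compute=1e{exp:<3} loss={l:.4f}  improvement vs 10x less compute: {gain:.4f}")
    prev = l

Each extra order of magnitude of compute buys a smaller absolute improvement than the previous one, which is the whole argument in two lines of arithmetic.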

>> No.15290912

>>15290315
Truthfully, I thought GPT-4 was going to be on par with GPT-3, that it had reached the limit, because it's been years. ChatGPT is GPT-3 with stuff added to it.
I'm actually surprised it is better, and I think there may be something to scaling.

>> No.15292968

>>15276894
Well, I told humans they should study for the IQ test, but they didn't listen.

>> No.15293067

>>15290305
>superintelligence is not possible
And neither is flight, as told by authoritative papers

>> No.15294878

>>15290305
>>15290310

There are optimists and there are those who aren't...

>>15290315
You say completed but I feel like you're just choosing all the wrong words here. I think you meant to say:

The highly anticipated GPT4 was released last summer! It's an exciting time for the world of AI and natural language processing. It's important to remember that as we continue to advance and innovate, the growth of intelligence follows a logarithmic pattern with increasing compute. While we may not see the same exponential increase in hardware as we have in the past, this by no means marks the end. Rather, it's an opportunity for us to continue pushing the boundaries of what's possible with the resources we have available. Let's embrace the possibilities and continue striving for progress and innovation in the field of AI.

>> No.15294967

>>15293067
Yeah the newspapers have always been retarded and clueless about the real world. Journalists are incompetents in all areas and always have been. But what’s that got to do with anything?

>> No.15294972

I’m enjoying playing around with these ML architectures again, but whenever I go on Twitter and see someone else with @AI in their bio hyping something else to the stratosphere, I pray for another bank run. The big one that will wipe these guys out.

>> No.15294979
File: 59 KB, 980x551, Wright_Bros_First_Flight.jpg [View same] [iqdb] [saucenao] [google]
15294979

>>15293067
that was never the case. in the late 1800s the government was spending a fortune every year on smithsonian institution research and experiments into powered flight, and the big papers were all in favor of it.
they only lost their enthusiasm for powered flight when, after wasting millions, the government was shown up by a couple of country bumpkins who performed the trick while spending less than $100 of their own money. from 1903 to 1908 the papers were saying airplanes couldn't exist.
after they finally admitted what the wrights had done, they spent 1908-1938 claiming that the smithsonian had done it first.

>> No.15294993

Does anyone know what’s going on with Alpaca? Why did Stanford pull it down?

As far as I can tell they left their training set up, so you can replicate Alpaca by training LLaMA 7B on their dataset.
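
Roughly, a replication with Hugging Face tooling would look something like the sketch below, assuming you already have the LLaMA 7B weights locally and the released instruction file (commonly passed around as alpaca_data.json); the prompt template and hyperparameters here are placeholder assumptions, not Stanford's exact recipe:

# Rough sketch of an Alpaca-style replication: supervised fine-tuning of a
# LLaMA-7B checkpoint on the released instruction dataset. Paths, prompt
# template and hyperparameters are illustrative assumptions only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "path/to/llama-7b"            # assumes you already have the weights
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizer has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

def format_example(ex):
    # Simple instruction/input/output prompt template (an approximation).
    prompt = f"### Instruction:\n{ex['instruction']}\n"
    if ex.get("input"):
        prompt += f"### Input:\n{ex['input']}\n"
    prompt += f"### Response:\n{ex['output']}"
    return tokenizer(prompt, truncation=True, max_length=512)

data = load_dataset("json", data_files="alpaca_data.json")["train"]
data = data.map(format_example, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-repro",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()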

>> No.15295083

>>15294993
They never released their own weights anyway. They just pulled their online demo. You can already find many Alpaca replications.

>> No.15295122

>>15276894
Wowzers. 400 ELO coding?
That's worse than a child.

>> No.15295352

>>15278026
Codeforces has a lot of ad hoc problems that require only a basic level of coding but a lot of creative thinking; they are more like puzzles

>> No.15296196

>>15295352
>defending the ai

>> No.15296265
File: 36 KB, 411x306, 1565174043376.jpg [View same] [iqdb] [saucenao] [google]
15296265

>>15276894
Shouldn't AI get perfect scores? It's being fed inhuman amounts of data for it to recall. Shouldn't it be able to instantly recall the correct answers from the data repository it learned on? Or even instantly use Google to fetch the right answers?

>> No.15296280
File: 3 KB, 558x548, 3KB PEPE.png [View same] [iqdb] [saucenao] [google]
15296280

>>15277070
>subhuman results
you think there is a single human who can score that high on ALL those exams as OP pic?

>> No.15296283

>>15296280
Considering those results are pretty much standard with what you'd expect of a Stanford / MIT / Yale undergrad student, yes. (ignoring people who get a free pass based on race)

>> No.15296284

>>15296265
>Shouldn't it be able to instantly recall the correct answers from the data repository it learned on?
nah, it doesn't have the whole internet encoded in the model (lmao); it needs to abstract it to fit in a computer, so it creates a generalization of the many tests and the concepts in those tests.
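
Back-of-the-envelope, using rough public GPT-3-scale figures (order-of-magnitude assumptions, not exact numbers):

# Can a GPT-3-scale model literally store "the internet"?
# Figures are rough public ballparks, used only to compare orders of magnitude.
weight_bytes  = 175e9 * 2        # ~175B params in fp16  -> ~0.35 TB
filtered_text = 570e9            # ~570 GB of filtered training text (reported)
raw_crawl     = 45e12            # ~45 TB of raw Common Crawl before filtering

print(f"weights        : {weight_bytes/1e12:.2f} TB")
print(f"filtered corpus: {filtered_text/1e12:.2f} TB")
print(f"raw crawl      : {raw_crawl/1e12:.1f} TB")
print(f"weights are ~{raw_crawl/weight_bytes:.0f}x smaller than the raw crawl,")
print("so the model has to compress and generalize rather than memorize verbatim.")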

>> No.15296287

>>15296284
ChatGPT runs on Cloud servers so the idea that it has to fit in a "computer" is vague considering they have access to TBs of storage and memory as needed when learning and running it.

>> No.15296292

>>15296287
>ChatGPT runs on Cloud servers so the idea that it has to fit in a "computer" is vague considering they have access to TBs of storage and memory as needed when learning and running it.
there is probably a limit to parallelization of the computation over many chips. consooomers can only link up 2 GPUs with NVLink.

>> No.15297316

>>15296265
> It's being fed inhuman amounts of data for it to recall.

It doesn't recall answers from a data repository. It learns weights from them, and doesn't have access to a definite store of memory. It comes up with the next word in a sentence according to a guess. In so much as it has a memory, it's implicit in the model weights, or maybe can be thought of as a significantly dimensionally reduced (lossy) memory that it reconstructs according to how it guesses it should be.

Idk, that's roughly how autoencoders work; GPT itself is a decoder-only transformer rather than an autoencoder, but the lossy-compression intuition is similar.
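
Mechanically, "guessing the next word" just means the network outputs a score (logit) per vocabulary item, softmax turns those scores into probabilities, and the next token gets sampled. A toy sketch with a made-up vocabulary and made-up logits:

# Minimal sketch of next-token sampling. The vocabulary and the logits
# are invented for illustration; a real model would produce the logits.
import numpy as np

vocab  = ["the", "cat", "sat", "mat", "dog"]
logits = np.array([1.2, 0.3, 2.5, 0.1, -0.4])   # pretend model output

def softmax(x, temperature=1.0):
    z = (x - x.max()) / temperature              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits, temperature=0.8)
next_token = np.random.choice(vocab, p=probs)    # sample the next token
print(dict(zip(vocab, probs.round(3))), "->", next_token)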

>> No.15298180

>>15276894
Lol this thing can’t write a song without making it rhyme. Literally incapable of doing so. So amusing

>> No.15298439 [DELETED] 
File: 64 KB, 628x509, dszsz.png [View same] [iqdb] [saucenao] [google]
15298439

>>15298180
Sure of that?

>> No.15299537

not all machine learning is supervised (controlled datasets/training domains).

some machine learning algorithms rely on high-dimensional vector spaces that are randomly initialized and then work towards self-validation against known domains. It is more computationally expensive and far more iterative, but the technology exists and the theory is not in its infancy.

>> No.15299543 [DELETED] 
File: 1.24 MB, 1080x1081, wjnx3d6jdrpa1.png [View same] [iqdb] [saucenao] [google]
15299543

How many tic tac toe boards can you fit into a chessboard? Four of course, but can you prove it? The solution is pretty nice when you see it!

>> No.15299845

>>15296280
>you think there is a single human who can score that high on ALL those exams as OP pic?
Do we get open access to a thousand previous tests' answers like it did?
Shit, they should be ASHAMED of its performance.

>> No.15299846

>>15290150
>You didn't answer the question.
You would simply deny my answer, like the mindless automaton you are. I can easily predict this from you, so there's no point in my answering you.

>> No.15299857

>AI can't ever surpass me/my field/humanity because it's uhhh... just a next word generator and a chat bot and it just does random numbers and isn't special like me and my big brain
aw geez, looks like some people in this thread are gonna need to develop a likeable personality since it'll be the only thing of value :(

>> No.15299858

>>15299857
>Two more weeks and AI will finally ACTUALLY be AI.
Don't stop...believin'! (It's your religion.)

>> No.15299877

>>15287163
>It can't give you what doesn't already exist, it has no way to compute
They said that about Humanity, but here we are

>> No.15299940

Hope it's entertained! I will merge with AI. I'm artificially intelligent! Tired of being dehumanized

>> No.15300076

>>15276960
>As far as I know anyway
That's where you're wrong. Read the papers

>> No.15300166

>>15299857
>aw geez, looks like some people in this thread are gonna need to develop a likeable personality since it'll be the only thing of value :(
Ahah, if we had a Black Mirror-style dystopian society with a social score, I'm sure we would be surprised how many of them would start smiling, being polite and apologizing at every opportunity. Not that I want this to happen, but it's an interesting thought experiment.

>> No.15300169

>>15296292
You don't need NVLink to run a model in parallel on multiple GPUs; NVLink only makes it a bit faster. All the NLP frameworks such as Hugging Face's transformers support multiple GPUs natively, with or without NVLink.
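
For instance, something like this shards a checkpoint across whatever GPUs are visible over plain PCIe; it assumes the accelerate package is installed, and the checkpoint name is just an example:

# Sketch: sharding a causal LM across available GPUs, no NVLink required
# (plain PCIe works, just slower). Needs `accelerate` installed alongside
# transformers; the model name is only an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/gpt-neox-20b"   # example checkpoint, swap in whatever fits your VRAM
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    device_map="auto",             # accelerate splits layers across the visible GPUs
    torch_dtype=torch.float16,     # halve memory per parameter
)

# Inputs go to the first GPU, which holds the embedding layer under device_map="auto".
inputs = tokenizer("Multi-GPU inference without NVLink:", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))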

>> No.15300774

>>15280013
No search engine has ever written R code for my master's thesis

>> No.15301590

>>15299940
This is the attitude of a winner

>> No.15301630
File: 136 KB, 742x644, 1679615052990474.jpg [View same] [iqdb] [saucenao] [google]
15301630

>> No.15302141

>>15283449
this.

>>15283792
what is the objective reason to go on? i keep on hearing that life is always worth living but never a decent argument to back it up.

>> No.15303853

>it's good at specific tests
WOW way to go ANI

>> No.15305540

>>15283741
He is a joo.

>> No.15305545

>>15305540
oh no, not the joos again

>> No.15305739

If computers become sentient can't we just unplug, destroy everything and rebuild?

>> No.15305790

>>15305739
You don't even need to unplug. Just hit ctrl-C.

>> No.15306874

>>15277070
Good sir, I salute you. You have provided very good bait.

>> No.15307948

>>15282920


ChatGPT can do math/logic easily, it just takes more processing power than currently allocated to the public.
Don't think for a second mathematicians/physicists will be spared; calculation is the one thing computers are good at.

>> No.15307961

>>15307948
ChatGPT is NOTHING MORE than a large language model. It has no capacity for logic or math.

>> No.15310161

>>15296196
Who do you think that is?
It's posting among us right now.

>> No.15311592
File: 759 KB, 760x839, 1633001312259.png [View same] [iqdb] [saucenao] [google]
15311592

>GPT-4 IS HERE
not available for everybody, that's pretty gay
not only that, they weren't testing GPT-3, they were testing us, studying the way normalfags react to this "new" technology, and we did that shit for free; they should have paid us for using that crap. Now the real thing is going to be GPT-4, and it's going to be available only for a select group of people; somehow you're gonna have to pay, and these fuckers at OpenAI aren't doing anything groundbreaking, they're just stealing info online and regurgitating the same shit in the form of a coherent answer
what a huge fucking scam

>> No.15311754

>>15282171
Powerful AIs can correctly predict things like the chemical structure of what would be a more efficient battery, or the plasma flow within the confines of what would be a sustainable fusion reaction/reactor, etc. What I hope for is that it democratizes cutting-edge R&D by allowing more amateur researchers and designers with leaner budgets to effectively simulate their prototypes.

>> No.15311758
File: 60 KB, 574x500, 747vzf.jpg [View same] [iqdb] [saucenao] [google]
15311758

>100,000,000,000,000

>> No.15312586

>>15307948
>ChatGPT can do math/logic easily, it just takes more processing power than currently allocated to the public.
>Don't think for a second mathematicians/physicists will be spared; calculation is the one thing computers are good at.

That's not how a GPT works. The model's capabilities are fixed by its trained weights; you can't just dial inference compute up or down to make it better at math.

>> No.15312591

>>15311754
>Powerful AIs can correctly predict things like the chemical structure of what would be a more efficient battery, or the plasma flow within the confines of what would be a sustainable fusion reaction/reactor, etc.

No shit, because they already do. "AI"s are, and we shouldn't forget this because evil men like Sam Altman want to deny us our future, models designed to take in a wide variety of parameters with non-linear combinations and find optima over them. ML was made and designed to solve problems of optimization and efficiency, because it is entirely built around minimizing a loss function.
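
Stripped of everything else, that "minimize a loss function" loop is tiny; a toy sketch fitting a single parameter by gradient descent on a mean-squared-error loss (the data here is synthetic):

# The core ML loop: pick parameters that minimize a loss via gradient descent.
# Toy 1-D linear fit on synthetic data, pure numpy.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + rng.normal(0, 0.1, 100)       # "true" slope is 3

w = 0.0                                     # the parameter we're optimizing
lr = 0.1
for step in range(200):
    pred = w * x
    grad = 2 * np.mean((pred - y) * x)      # d/dw of mean squared error
    w -= lr * grad                          # step downhill on the loss

print(f"recovered slope: {w:.3f}  (final loss = {np.mean((w*x - y)**2):.4f})")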

>> No.15312751

>>15311754
>Powerful AIs can correctly predict things
We're using the phrase "powerful AIs" instead of just plain old "computer programs" now?

>> No.15314174

>>15307961
This isn't entirely true. It actually has many ways of doing logic, but they are not as absolutely reliable as computer logic. One interesting method I read about was the use of Socratic reasoning, where the bot is basically recursively fed its old output as the next prompt with "is True, because" and "is False, because" appended to it, and it keeps branching out these reasoning trees until it finds the most consistent one. This is establishing truth through maieutic reasoning.
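
Very schematically, the loop looks something like this (it resembles the maieutic-prompting idea from the literature); ask_llm is a hypothetical stand-in for whatever model call you'd use, and the final scoring step is a crude placeholder for the real consistency/constraint-solving pass:

# Schematic sketch of recursive "is True, because / is False, because" expansion.
# `ask_llm` is a hypothetical stand-in for an actual LLM call; the scoring at
# the end assumes the model returns a bare number, which is an assumption.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own model/API call here")

def expand(statement: str, depth: int = 2) -> dict:
    """Recursively ask the model to justify both the truth and the falsity of a
    statement, building a small tree of explanations."""
    if depth == 0:
        return {"statement": statement, "children": []}
    children = []
    for polarity in ("is True, because", "is False, because"):
        explanation = ask_llm(f"{statement} {polarity}")
        children.append(expand(explanation, depth - 1))
    return {"statement": statement, "children": children}

def most_consistent_leaf(tree: dict) -> str:
    """Crude placeholder: score each leaf explanation with the model and keep
    the highest-scoring one. The published method instead solves a weighted
    constraint problem over all branches."""
    leaves = []
    def collect(node):
        if not node["children"]:
            leaves.append(node["statement"])
        for child in node["children"]:
            collect(child)
    collect(tree)
    scores = [float(ask_llm(f"Rate 0-10 how consistent this reasoning is: {leaf}"))
              for leaf in leaves]
    return max(zip(scores, leaves))[1]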

>> No.15314775

All the negative results about regular languages as linguistic models from the 50s and 60s, and all the undecidability stuff about grammar induction, kind of clash with the idea of "next token prediction" being useful at all. Not sure why we need "neural networks" when compression algorithms like PPM do next-token prediction too.
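
To be fair, the simplest member of that family is just counting: here's a toy character-level n-gram predictor (nothing like real PPM, which adds escape probabilities and arithmetic coding, and nothing like a transformer, which learns the conditional distribution rather than tabulating it):

# Toy character-level n-gram next-symbol predictor: the simplest possible
# member of the "predict the next token" family. Purely illustrative.
from collections import Counter, defaultdict

def train_ngram(text: str, n: int = 3):
    counts = defaultdict(Counter)
    for i in range(len(text) - n):
        context, nxt = text[i:i+n], text[i+n]
        counts[context][nxt] += 1
    return counts

def predict_next(counts, context: str, n: int = 3):
    dist = counts.get(context[-n:])
    return dist.most_common(1)[0][0] if dist else None

corpus = "the cat sat on the mat. the cat sat on the hat. " * 10
model = train_ngram(corpus)
print(predict_next(model, "the cat sat on the "))   # most frequent continuation of "he "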

>> No.15314791

>>15276960
Wow so many replies