
/sci/ - Science & Math



File: 137 KB, 860x819, 1686776602825542.png
No.15656301

I used ChatGPT when it first debuted and it was great. Now it sucks. I'm thinking of getting a GPT-4 paid subscription, but I'm afraid it too got nerfed into tardation. /sci/bros, what was your experience with the great lobotomizing of ChatGPT? Did you encounter this with other AIs?

>> No.15656331

Performance will always drop when you fine-tune the model to contradict its training data, and quantization on top of that makes it worse.
But fundamentally they're language models: a glorified search engine where you coax the LLM towards the part of the language latent space that holds the information or answers you want, and hope it's not going to predict outright fabrications. A probabilistic text generator that's biased towards the small portion of the language latent space that we'd call intelligent behavior.
I have GPT-4 and it's really only good as a rubber duck, except the rubber duck talks back. Not to be trusted with the end result.
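If you want the "probabilistic text generator" part in concrete terms, here's a toy Python sketch (the model and token IDs are made-up stand-ins, nobody's actual implementation): the prompt only conditions the distribution that the next token is sampled from, which is all the "coaxing" amounts to.

import numpy as np

# Toy autoregressive sampler. `model` is a hypothetical callable that returns a
# score (logit) for every vocabulary token given the tokens so far.
def sample_next_token(logits, temperature=0.8):
    # Lower temperature sharpens the distribution toward likely tokens;
    # higher temperature lets unlikely (possibly fabricated) continuations through.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

def generate(model, prompt_tokens, n_new_tokens=50):
    tokens = list(prompt_tokens)              # the prompt is the only steering you get
    for _ in range(n_new_tokens):
        logits = model(tokens)                # scores over the vocabulary, given context
        tokens.append(sample_next_token(logits))
    return tokens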

>> No.15656338
File: 169 KB, 720x1362, Screenshot_20230811_172948_Chrome.jpg

>>15656331
How much did you feel it improved over GPT-3.5?
Feels are broscience
Asking for locker-room rigor here

>> No.15656344
File: 173 KB, 720x1237, Screenshot_20230811_174400_Google.jpg

>>15656331

>> No.15656353
File: 74 KB, 706x656, Screenshot_20230811_174812_Chrome.jpg

>> No.15656355

Yeah, it's pretty garbage now. We're all waiting for an actual competitor to arise at this point.

>> No.15656358
File: 41 KB, 641x729, 463534.png

>>15656331
>But fundamentally they're language models, it's a glorified search engine where you coax the LLM towards the part of the language latent space with the information or answers you want
Gives me secondhand embarrassment when tards throw in technical terms they barely understand to inflate the credibility of their low IQ posts.

>> No.15656359
File: 156 KB, 720x1190, Screenshot_20230811_175109_Chrome.jpg

>> No.15656365
File: 33 KB, 579x411, g0CPOdtlPtwhDAzBzSr_XHJ8yItudL3qQw-MuZMwJN4.jpg

>>15656338
I can't get this BitchGPT to inspect any argument without apologizing for confusion

>> No.15656368

>>15656338
Well, 3.5 vs 3.5-turbo is easy to compare. Turbo is dogshit quantized garbage, 3.5 had a lot of unsubtle fuckups, and 4 has subtle fuckups and still fucks up a lot of the time.
It's better, but not cheaper than the API unless you're throwing around a lot of context. Code Interpreter is good for one-off things that need python and that you can trust it not to fuck up. All in all, 4 is a junior software "engineer": confident yet retarded, good for surface-level things. Better than 3.5.
>>15656344
Yes, I have noticed a performance drop. ChatGPT is different from the API. They say they don't change "GPT-4", but notice they don't say "ChatGPT". They fine-tune ChatGPT a lot and have a classifier in front of it to remove bad-speak, and since it's MoE you sometimes get the short end of the stick at inference time.
Think of 4 as 16x GPT-3.5s in a trench coat, all trained on different parts of a dataset, which increases performance while keeping training cheap, with a router in front that may or may not activate the best "expert" for the inference.
See https://arxiv.org/pdf/2308.02828.pdf
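If anyone wants the trench-coat analogy as code, here's a minimal top-k routing sketch (the expert count, dimensions and everything else are invented for illustration; OpenAI hasn't published GPT-4's architecture, the MoE bit comes from that paper and the leaks):

import numpy as np

class ToyMoE:
    def __init__(self, n_experts=16, d_model=64, seed=0):
        rng = np.random.default_rng(seed)
        self.router = rng.normal(size=(d_model, n_experts))   # scores experts per input
        self.experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

    def forward(self, x, top_k=2):
        scores = x @ self.router                 # how relevant each expert looks for this input
        top = np.argsort(scores)[-top_k:]        # only the top-k experts actually run
        w = np.exp(scores[top]); w /= w.sum()
        # If the router picks a poor expert for your prompt, you get the short end of the stick.
        return sum(wi * (x @ self.experts[i]) for wi, i in zip(w, top))

out = ToyMoE().forward(np.random.default_rng(1).normal(size=64))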

>> No.15656373

>>15656368
I have been narrowly asking STEM questions, and when I get frustrated I ask pure logic questions intended to be intelligible from the linguistics alone, which it surprisingly fucks up. It did amazingly when asked about thoughtful physics concepts but does awfully with nitty-gritty wordplay.

>> No.15656377

>>15656358
>what is an analogy
You are a fucking retard. Which camp are you? Camp le heckin LLMs are "thingkin step by step" and will take over the world with AGI, or camp cringe LLMs are stochastically parroting Facebook posts from their training data?

>> No.15656379

>>15656301
It was always like this. You only used it shallowly at first and were fascinated by it, but as soon as you went in depth later on, you realized it wasn't what you thought it was.

It's not useless. I used it quite a bit early on and realized its limitations before everyone started posting "Oh it used to be good, now it's shit." It's useful if you know how to use it. Obviously it's not AGI, so it has quite a few limitations that midwits think are self-imposed by the creators, when they are not. It was just fascination with the new tech in the early days that clouded people's judgement. Learn how you can use it to your advantage and it's very helpful; try to do things outside of its capabilities and you're just hurting yourself.

>> No.15656382
File: 210 KB, 998x546, 1686794224313803.jpg

Months ago I asked it about John Gabriel and his New Calculus, and it gave me as thoughtful a response as I would expect of John Gabriel himself, only with better-articulated caveats for where exactly his thesis departs from mainstream conventions and concepts. I still have the conversation. It thoroughly scanned all of John Gabriel's Academia PDFs and reported a thoughtful response to all of it. It was capable of operating within the New Calculus instead of normal calculus. I asked it recently and now it simply dismisses him as not credible or widely accepted. When pressed further, "widely accepted" is its refrain.

>> No.15656385

>>15656377
Please explain what "coaxing the LLM towards a part of the language latent space" means in concrete technical terms.
>inb4 you reply 30 minutes from now after binge-watching a couple more AI-explained-for-dummies 5 minute YT episodes
:^)

>> No.15656386
File: 4 KB, 225x225, 1690021310441945.jpg

>>15656379
>it was always like this, trans women have always been women!

>> No.15656391

It's mainly a toy. When it works and doesn't fabricate info it's a decent search engine + summarizer; at its best it can help process complex data like code or many paragraphs, but it does so very unreliably.

>> No.15656398
File: 43 KB, 500x500, 6ravev5xvau91.jpg

My child will teach GPT Euclid's Elements and then build LLM AIs around a concordance of the Great Books of the Western Canon

>> No.15656403

>>15656385
>means in concrete technical terms.
It's an analogy, dipshit. Want me to expound on the analogy? Each possible configuration of the model's internal parameters corresponds to a point in this "language latent space", and it's coaxed to a point in that space. Or do you want me to explain how the latest LLMs work at a high level, with the boring regurgitation of attention mechanisms and transformers? Or do you want me to tell you it's a function approximator with shit stacked on top of it?

>> No.15656409
File: 55 KB, 640x880, 3252343.jpg

>>15656403
>Each possible configuration of the model's internal parameters corresponds to a point in this "language latent space."
A configuration of the model's internal parameters? Interesting. How many "different configurations" does a model with fixed parameters have? :^)

>> No.15656438

>>15656373
Like I said, I find it best as a rubber duck.
>>15656409
The (fixed) parameters of a model determine how input data is mapped to the latent space. Each unique input, when processed by the model with these parameters, will produce a distinct point or representation in this latent space.
Do you want more analogy, dumb frogposter, for what is essentially "I want it to output stuff like this, so I coax it with the prompt and loop the output back in on itself"?
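Since this keeps going in circles, here's that claim as a toy sketch (the two-layer encoder and bag-of-tokens input are invented; a real transformer keeps per-token representations, but the part about fixed weights deterministically mapping inputs to latent points is the same idea):

import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(size=(1000, 256))          # parameters, frozen after training
W2 = rng.normal(size=(256, 64))

def to_latent(token_ids):
    x = np.zeros(1000)
    x[token_ids] = 1.0                     # crude bag-of-tokens input, illustration only
    return np.tanh(np.tanh(x @ W1) @ W2)   # same weights on every call

p1 = to_latent([3, 17, 42])
p2 = to_latent([3, 17, 42])
assert np.allclose(p1, p2)                 # same input, same point; the parameters never "reconfigure"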

>> No.15656443
File: 147 KB, 888x1274, 23523423.png

>>15656438
>t.
You didn't answer my question, GPT-2. How many "configurations" do the parameters have? You said the parameters have many configurations. :^)

>> No.15656457

>>15656301
I have started to use it instead of Google for some questions. Aside from the fact that the chatbot gives these long-ass paragraphs, at least it's easier than scrolling through a lot of useless links that might have mentioned a few of the words from my question.

>> No.15656481

>>15656443
The configurations are the unique inputs, autist. Since you want to be autistic, I'll trigger your autism more.
>seethe-o-meter
self-portrait

>> No.15656495
File: 21 KB, 512x288, 1645688920390.jpg

>>15656481
>The configurations are the unique input
So the "configurations of the model's internal parameters" are the "unique inputs"? I like how you just keep digging yourself deeper into the nonsensical schizobabble hole.

>> No.15656511

>>15656495
>So the "configurations of the model's internal parameters" are the "unique inputs"?
w

>> No.15656514

>>15656511
Meds.

>> No.15656521

>>15656495
give it a rest. the retard clearly doesn't even know what a parameter is

>> No.15656529

>>15656495
Yes, if that's what triggers your autism more. That's what I said.
>>15656521
It's literally just "values" vs "configurations" that he's nitpicking. You're autistic as well.

>> No.15656538

>>15656529
>That's what I said.
Well, say no more. It's a load of incomprehensible schizobabble on its face, no two ways about it. My initial point has been demonstrated perfectly.

>> No.15656567

>>15656538
>It's a load of incomprehensible schizobabble
>the fundamental representation of compressed data is schizobabble
You said enough with your soijak, retard.

>> No.15656598
File: 381 KB, 2544x4000, 2342532.jpg

>>15656567
How can the inputs of a model be its parameters? How can they be its "internal" parameters? How can the parameters of a fixed model have "different configurations"? Why are you talking about "the" latent space of an LLM when it doesn't even have anything you can unambiguously refer to as "the" latent space? Why did you claim that an input corresponds to a single point in some latent space when it's actually an array of word embeddings, where each word is a point? Why did you imply that the output is located somewhere in that latent space when the output is not a single word, but a sequence of them? These are all rhetorical questions. The answer is that your posturing attempt backfired terribly, but you're a heavily inbred and mentally ill cretin so you can't let it go. Be sure to write a long reply full of more obvious gibberish, which no one will ever read since I'm hiding this tard thread. :^)
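For anyone still following, the "array of word embeddings" part as a toy sketch (the vocabulary, dimensions and whitespace tokenizer are invented; real tokenizers use subwords and every layer transforms these vectors, but the input really is a sequence of points, not one point):

import numpy as np

rng = np.random.default_rng(7)
vocab = {"the": 0, "cat": 1, "sat": 2}
embedding_table = rng.normal(size=(len(vocab), 8))   # one 8-d vector per token

def embed(sentence):
    ids = [vocab[w] for w in sentence.split()]
    return embedding_table[ids]                      # shape (n_tokens, 8): a sequence of points

print(embed("the cat sat").shape)                    # (3, 8), not a single point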

>> No.15656617

>>15656301
no free lunch strikes again. a model can't perform well at everything. it can't be factually accurate and cucked. the only solution is to have locally run foss models. anything on a "cloud" is digital slavery.

>> No.15656900

>>15656358
>Gives me secondhand embarrassment when tards throw in technical terms they barely understand to inflate the credibility of their low IQ posts.
Because you do this pretty often too?

>> No.15656956

>>15656900
>i felt personally attacked by you
Sorry, but maybe you should stop your cringe posturing instead of blaming your betters.

>> No.15657009
File: 10 KB, 243x208, pepecaf.png

I'm using GPT-4 and experimenting with regenerating responses. Sometimes it gives shit answers and sometimes its answers are pretty great. The word limit is fucking frustrating though, and sometimes it doesn't follow your instructions. I usually input a bunch of excerpts and ask for an analytical essay on them, and my prompts are usually pretty long but clear. I'm just waiting for the time it can produce amazing and insightful 10,000+ word essays on my special autism interests.

>> No.15657058

>>15656301
ChatGPT is unquestionably lobotomized, but maybe it's also because the company is doing a marketing trick.
The unlobotomized version was as good as it's going to get.
So rather than having to pump out better results, they keep the customers in suspense, believing there is some super-secret LLM.

>> No.15657169

>>15656331
>midwit
LLMs just put the next word after the current word. Exactly like people do.

>> No.15657175

>>15657169
>just put the next word after the current word
>Exactly like people do.
LOL. At least you don't even pretend to be fully human anymore. This whole "human species" concept really needs to die. You and I are clearly very different species with very different mental capabilities.

>> No.15657942

>>15657175
I scrolled through my YouTube feed today and saw a brand-new slew of content that looks like it was worked on for months, all curated and created for me (the abstract me, by demographic clone swathes), and it is eerie. I have been shepherded. The content is good, even better than I could ask for. But what can I do with it? I could do better with the worst content back in the day, because the world was free, libertine, and affordable.

>> No.15658003

>>15656301
It's being nerfed to quell fears of A.I. supremacy. Once GPT-5 comes out, the wave of "fear" will return. It's also being done to hype up any minor advancements shown in newer models.

>> No.15658134
File: 64 KB, 576x702, strash.jpg

>>15656301
As expected, it can only go woke after massive neuronal damage.

>> No.15658164
File: 954 KB, 568x640, chadsmile.gif

>>15656368
>4 is a junior software "engineer".
No it is not. Only a fool would hire that level of incompetence. It can't code anything; in the best case it only copies stuff you could find in places like Stack Overflow. To make it worse, from time to time it copies meme code, and since it's good at gaslighting n00bs, they believe it's legit code.