
/sci/ - Science & Math



File: chatgpt5.png
No.15780478

Will ChatGPT-5 prove the Riemann hypothesis?

>> No.15780636

>>15780478
ChatGPT is a search engine; it can't make new thoughts.

>> No.15780693

>>15780636
It can make new thoughts, but you have to ask for them. By default they cucked it into being your safe, unimaginative friend.

>> No.15780698

>>15780478
Probably. Remember, right now is the worst it'll ever be.

>> No.15780710

>>15780478
How do I make it evil

>> No.15780722

>>15780478
Tooker already disproved it.

>> No.15780732

>>15780693
It's just a program that attaches percentage values to words.
It can't even think; it just operates mechanically, like any computer system.

>> No.15780737
File: TIMESAND___RHNO.png

>> No.15780739

>>15780732
Yes, and if you ask it for new thoughts it will attach high percentage values to words that form a new thought.
That's probably how your brain works too.
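To make the "percentage values" picture concrete, here is a toy sketch in Python (nothing to do with ChatGPT's actual implementation; the candidate words and scores are made up). A model assigns a score to every candidate next word, a softmax turns the scores into percentages, and the next word is sampled from that distribution:

import math
import random

# Made-up scores ("logits") for candidate next words after the prompt
# "The Riemann hypothesis is". Purely illustrative numbers.
logits = {"unproven": 2.8, "true": 2.1, "a": 1.0, "false": 0.3, "banana": -3.0}

# Softmax: convert scores into percentages that sum to 1.
exps = {w: math.exp(s) for w, s in logits.items()}
total = sum(exps.values())
probs = {w: e / total for w, e in exps.items()}

for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word:10s} {p:.1%}")

# Sampling (instead of always taking the top word) is why the output varies
# and can string words together in ways not seen verbatim in the training data.
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print("sampled:", next_word)

The real thing does this over a vocabulary of tens of thousands of tokens, one token at a time, with the scores coming from a neural network instead of a hand-written dict.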

>> No.15780741
File: TIMESAND___QDRH762aFF.jpg

>> No.15780743

>>15780739
The brain works by association.

>> No.15780744
File: TIMESAND___ZetaMedium.jpg

>> No.15780746
File: TIMESAND___Fractional_Distance__20230808.pdf

>> No.15780748

>>15780739
god you are stupid

>> No.15780749

>>15780636
It's not a search engine. Being a search engine would be an improvement. It's just a sophisticated autofiller.

>> No.15780761

>>15780748
No amount of insults will convince me. You have to bring arguments. But you can't, obviously. Your best argument is 'it can't think because it's a machine'.

>> No.15780762
File: ChatGPT4 caltrop dilemma.png

It ain't gonna prove no nothing

>> No.15780767

>>15780748
no, you!

>> No.15780778

>>15780761
It's limited to its dataset; how the fuck can it produce something outside of it?
It can only output or permute what it already has. This isn't fucking voodoo, it's a human-made program; it's not going to evolve into anything.

>> No.15780790

>>15780778
>It's limited to its dataset; how the fuck can it produce something outside of it?
First, it's easy to check that it can, even if you don't know how. Any programmer who has used ChatGPT seriously has been able to make it create original code; it can also modify and improve code from private repositories.
Basically, it learns high-level abstract patterns that are present in its dataset and applies those patterns to new data. It's the same process humans go through when learning.
But following your reasoning, how are painters able to create new paintings from a limited number of existing paintings?
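To illustrate just the narrow point that "limited to its dataset" does not mean "can only emit exact copies of its dataset", here is a deliberately tiny sketch (a character-level bigram model, nothing like GPT's architecture): trained on only two words, it can emit strings that appear nowhere in its training data by recombining the patterns it learned.

import random
from collections import defaultdict

random.seed(0)

# The model's entire "dataset": two words.
corpus = ["proof", "prove"]

# Learn which character tends to follow which ("^" = start, "$" = end).
follows = defaultdict(list)
for word in corpus:
    padded = "^" + word + "$"
    for a, b in zip(padded, padded[1:]):
        follows[a].append(b)

def generate():
    out, ch = "", "^"
    while True:
        ch = random.choice(follows[ch])
        if ch == "$":
            return out
        out += ch

samples = {generate() for _ in range(200)}
print("outputs not present in the training data:", samples - set(corpus))
# Typically includes strings like "prof" or "proove" that the model never saw.

The painter question above is the same idea at scale: the patterns come from existing data, the particular combination does not have to.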

>> No.15780811
File: image.png

>>15780790
>original code
nope.
It can produce code, but nothing close to what you describe. Everything it makes you can find somewhere on the internet; that's literally how it works.

>> No.15780837

>>15780811
OK, I'm not sure who I'm discussing with, but you're fighting a fact that is widely accepted. I've been using it professionally for 6 months, as have many other professional developers at my job. I guarantee you that most of the code it produces exists nowhere else. The screenshot you're showing proves absolutely nothing other than that it can occasionally copy code from its training data. Do your research, but I won't go on with this conversation because right now it's a waste of time.

>> No.15780838

>>15780811
Also, GitHub Copilot is not ChatGPT; it existed two years before ChatGPT and isn't even close to comparable.

>> No.15780850

>>15780837
>widely accepted
By whom?
There is a reason no one in the industry plugs ChatGPT-generated programs into their systems: they need to be verified and tested, and all I saw from it was broken code templates.
No wonder you're trying so desperately to escape this discussion; you have nothing left to say. Enjoy smelling your own farts, I guess.

>> No.15780863

>>15780850
Oh boy, you're in for a surprise, I guess. Lately I don't meet many people who are still this out of touch with the current capabilities of state-of-the-art LLMs.
The "broken code template" you posted has nothing to do with ChatGPT; it was posted on Twitter six months before GPT-4 was even released.

>> No.15780867

>>15780693
You are a fucking idiot.

>> No.15780882

>>15780790
You are a complete stupid moron.

>> No.15780886

>>15780863
You're blowing this out of proportion. I played with ChatGPT-4 and it's not that impressive, and it's not a big advantage in a professional environment. I doubt you work in software development as you claim,
because it doesn't really help with problems involving complex systems; any failure in synchronization can destroy something, and there is a limit to the specification that can be entered into a machine learning so that it understands how to do it correctly.
For that time and effort it's better to do it yourself. You're a fucking larper.

>> No.15780894

>>15780886
>there is a limit to the specification that can be entered into a machine learning
what does that even mean

>> No.15780900

>>15780837
LMAO, it's not better than pajeets copy/pasting code from the internet and gluing it to the rest of the codebase with shit and goo. You will get what you deserve.

>> No.15780911

>>15780894
>what does it mean, plugging in code without any context on the system/network/services/protocols etc...?
Maybe you are right, this discussion is over. You exposed yourself as a charlatan and I have no interest in wasting my time on you.

>> No.15780915

>>15780911
Just don't use words that you don't understand because you form sentences that are nonsensical. You don't enter anything into a 'machine learning'.

>> No.15780926

>>15780478
If it's just a bigger GPT-4, then no, it won't.
LLMs as they exist now cannot do this kind of thing.
I do think AIs will eventually be able to do this kind of stuff, but nothing we have now can.

>> No.15780929

>>15780636
It's not a search engine. wtf are you even doing on this board?

>> No.15780937

>>15780915
So what do you want me to say instead? How would you formulate it?
"Insert it into the input prompt that the machine learning uses," is that better?
Fucking kill yourself, lmao, you larper sack of shit.

>> No.15781029

>>15780732
You're just a bunch of neurons firing electrochemical signals.

>> No.15781067

ChatGPT is a lot smarter than what I imagined AI would be like in 2023. However, it's also a lot dumber than people think it is. It is excellent at understanding what you as a user want; it is poor at thinking or coming up with original information.

>> No.15781088

>>15780762
Its logic is sound. Its conclusion is wrong, but the logic is sound.

>> No.15781111

I read every single post ITT and I'm ashamed. Not a single Anon here understands even a little bit about GPT. This board has become the absolute ridiculous bottom of the barrel of 4channel.

>> No.15781115

>>15780478
no
/thread

>> No.15781156

>>15781111
Didn't read a single reply, but I am interested in your input.

>> No.15781484

>>15781115
>/threading your own post
Cringe.

>> No.15781495

>>15780762
The last line is the real kicker. If a real person said some shit like this you would know they were trolling, but coming from a chatbot it's just retarded.

>> No.15781503

Let's just say... all AI-made systems will be one step behind humans because they will always be dependent on updates.

>> No.15782635

>>15781111
I read your post ITT and I'm ashamed.

>> No.15782846

>humans can only think about things that they know about
>of course, that's logical. It's impossible to create something from nothing. Humans are smart.
>AI can only think about things that they know about
>lmao, AI is so dumb it can't even know about things that it doesn't know about

>> No.15782882

>>15780478
I'd be surprised if that piece of shit can even play tic-tac-toe.

>> No.15782890

>>15780478
What is it with you fags' obsession with the Riemann hypothesis? I've seen so many fucking retards parroting that name. I just know none of you can even tell me what it means. It just registers in your brains as "Complex-Sounding Smart Thing", like you're just a fucking dog reacting to the tone of how a word is used. Fucking subhuman midwits.

>> No.15782905
File: Serious_Pepe.jpg

>>15780478

LLMs SUCK at real innovation.
They are great at repeating what is known and at making small extrapolations, but that is about it.
No self-directed intelligence.

>> No.15782911

>>15782905
Raw models are better at that.

>> No.15783323

>>15782846
Yeah /sci/ is so moronic about AI. I think they feel threatened

>> No.15783336

When will it be released? I'm sure they have it already.

Also, that Google Gemini bullshit: so much PR and still not out?

>> No.15783347

While the GPT series, including possible future iterations like GPT-5 or GPT-6, are extremely powerful models capable of understanding and generating human-like text based on a vast array of topics, they are not specifically designed to solve unsolved mathematical problems. They don’t “create” new mathematics or “discover” new mathematical proofs. They don’t perform symbolic reasoning or formulate new conjectures or proofs in the way a human mathematician does.

Typically, solving a problem like the Riemann Hypothesis involves creating new mathematics, developing deep insights, and producing rigorous proofs. This process often requires a deep and novel understanding of mathematics, intuition, creativity, and the ability to see connections between seemingly unrelated areas of mathematics.

While GPT models can assist in exploring mathematical concepts, providing explanations, and potentially aiding in computations or simulations, the discovery or proof of significant new mathematical theorems is likely to be beyond their capabilities, at least as they are currently conceived and designed.

Of course, the development of artificial intelligence is ongoing, and it's conceivable that future AI models may be developed with enhanced capabilities in mathematical reasoning and proof discovery. However, the creation of an AI capable of solving a problem like the Riemann Hypothesis would represent a significant leap forward in the field of AI and mathematics.

That said, AI can and does play a role in advancing mathematical research by helping human researchers analyze data, test hypotheses, perform computations, and explore the mathematical landscape. It is an invaluable tool in the mathematician's toolkit, even if it is not (yet) capable of independently making groundbreaking mathematical discoveries.
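For what it's worth, the "computations" part is something anyone can try. A minimal sketch using the mpmath Python library (assuming it is installed; pip install mpmath): it evaluates the first few nontrivial zeros of the zeta function and shows that they sit on the critical line Re(s) = 1/2. This is numerical verification of individual zeros, not anything resembling a proof of the hypothesis.

# Numerically inspect a few nontrivial zeros of the Riemann zeta function.
from mpmath import mp, zetazero, zeta

mp.dps = 30  # work with 30 significant digits

for n in range(1, 6):
    rho = zetazero(n)  # n-th nontrivial zero in the upper half-plane
    print(f"zero {n}: {rho}")
    print(f"  Re(rho) = {rho.real}   |zeta(rho)| = {abs(zeta(rho))}")

Billions of zeros have been checked this way and all lie on the critical line, but no finite amount of checking settles the hypothesis.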

>> No.15783352

>>15782846
>>humans can only think about things that they know about
This is wrong, though; humans can construct new things.
Proof: new inventions, new scientific theories, and works of art are created all the time.