
/sci/ - Science & Math



File: 109 KB, 1016x1004, 58293856.png
No.15078491

>DUDE AI IS GOING TO CHANGE THE WORLD
AI can't even solve high school level math problems

>> No.15078498
File: 206 KB, 500x280, 57620549-A9CE-4179-A27E-8E50F1C8DE44.png

>>15078491
oh come on anon, we all know you won’t stand by these goalposts

>> No.15078502

>>15078498
The only goalpost is AI being able to do everything a generally intelligent entity can do. All other goalposts are set by your corporate handlers for the purpose of brainwashing.

>> No.15078503

>>15078498
The goal post has always been the same. ML based AI cannot reason. All it's doing is extending text. If the answer isn't something often repeated on the internet, it will output garbage answers like in the OP because this "AI" is not thinking. It's just parroting what it guesses the next word should be. That's literally all it does.

This is also why DALL-E 2 drawings will always be nothing more than dream-like approximations that fall apart once examined due to all of the fucked up details.

ML based models will never be able to get around those fundamental flaws because ML is not true AI. ML = text extender.

>> No.15078505

OP is scared of AI

>> No.15078508

>>15078502
>everything a generally intelligent entity can do
Most people can't solve the high-school problem you posed, so the AI still qualifies as "generally intelligent" by present-day, common-core standards.

>> No.15078510

>>15078508
>Most people can't solve the high-school problem you posed
1. I didn't pose it.
2. So what?

>the AI still qualifies as "generally intelligent" by present-day, common-core standards.
It doesn't. You are having an obvious psychotic episode.

>> No.15078518

>>15078491
Tbh, if you showed an AI like this to someone from just a couple hundred years ago, they'd believe your computer is witchcraft and contains, or is, a conscious entity.

>> No.15078519

>>15078491
His first answer is right, there's not enough information to solve the question. Looks like you were the retard here op

>> No.15078521

Gpt is correct. Without knowing the initial accelerations of the trains we cannot determine the time they pass.

>> No.15078522
File: 356 KB, 666x912, math.png

>>15078502
Lol. Lmao.

>> No.15078523

>>15078519
>this is the level of the average AI believer
Can't make this stuff up.

>> No.15078527

>>15078522
>psychosis intensifies
The mask slips and yet again the drone demonstrates that AI schizophrenia is driven by an irrational hatred against humanity.

>> No.15078531

>>15078523
Is this bait or just an attempt to get me to explain your homework problem to you?

>> No.15078534

>>15078521
>>15078519
lmao no wonder these machine learning evangelists think AGI is just around the corner. they are literally too low IQ for a high school level word problem.

>> No.15078535

>>15078534
You should have mentioned that trains move at constant speed. By default they never do. Get to a higher level, schoolgal.

>> No.15078537

>>15078519
>His first answer is right
fucking idiot.

let d be the distance between A and B, which each train covers over its full trip. then X's speed = d/1 = d and Y's speed = d/1.5
closing speed = d + d/1.5 = 5d/3
time until they meet = total distance / closing speed = d / (5d/3) = 3/5 hours
(3/5) * 60 = 36 mins
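A quick numerical check of the arithmetic above (plain Python; the 1 h and 1.5 h travel times come from the problem, and the constant-speed assumption is the one being argued about in this thread):

```python
# Train X covers the full distance in 1.0 h, train Y in 1.5 h.
# With constant speeds and unit distance d = 1, the closing speed
# is 1/1.0 + 1/1.5, and the trains meet after 1/closing_speed hours.
def meet_time_hours(t_x: float = 1.0, t_y: float = 1.5) -> float:
    closing_speed = 1.0 / t_x + 1.0 / t_y
    return 1.0 / closing_speed

print(meet_time_hours() * 60)  # ~36 minutes, i.e. 4:36 PM for a 4:00 PM start
```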

>> No.15078541
File: 76 KB, 300x255, 532524.png

>You should have mentioned that trains move at constant speed
AI cult subhumans should simply be banned on the spot.

>> No.15078543

>>15078537
>then X's speed=d and Y's speed=d/1.5
This is only true if both X and Y have a constant speed. Seriously, you must be 18+ to post here anon

>> No.15078545

>>15078537
>He doesn't factor in the time it takes the trains to speed up and slow down
Ngmi
>>15078541
Trains don't move at constant speed though. It's something that must be clarified.

>> No.15078546

>>15078545
Thanks for demonstrating your subhuman level of intellect.

>> No.15078548

>>15078546
Why do you think practical knowledge is subhuman while idealized, unrealistic assumptions are correct?

>> No.15078549

>>15078545
>>15078543
>AI dick slurpers get proven wrong and exposed for having low IQ
>immediately start backpedaling and grasping at straws
LMAO

gg great thread for everyone to laugh at you redditors

>> No.15078550

>>15078548
I don't assume that. I'm just marveling at how inferior you are, and how much you are lacking in basic humanity.

>> No.15078556

>>15078549
I'm guessing you didn't realize where your proof >>15078537 made the assumption of constant speed and posted it so that I could point it out in >>15078543. If you ever learn calculus, you'll learn how to deal with problems involving non-constant speeds. Until then, don't post on /sci/

>> No.15078557

>>15078556
Subhuman.

>> No.15078562

>>15078549
>>15078550
Just admit you were wrong. I understand this stuff requires critical thinking to solve, and it's okay to be wrong if you know you were wrong. If this were a graded assignment, both of you would receive a C for failing to state your assumptions. Very average. GPT however recognizes extra information is needed despite being wrong on what information is needed. GPT thus would receive a B. Congratulations, you perform worse than AI.

>> No.15078563

>>15078557
Big words for the big boy. Go do your other homework problems now

>> No.15078569

@15078562
@15078563
Sometimes I suspect "people" like this are intentionally programmed by their handlers to be as nauseating and revolting as possible to garner hatred and generate social unrest.

>> No.15078570

>>15078491
>AI can't even solve high school level math problems
Neither can most humans, to be fair.

>> No.15078576
File: 147 KB, 888x1274, 23523423.png

>>15078570
>Neither can most humans
Most 80 IQ retards aren't being promoted as the future of intellectual work.

>> No.15078579

>>15078576
All 80 IQ retards won't increase their IQ in the coming decades

>> No.15078582
File: 61 KB, 704x1124, 1670512804885871.png

>>15078579
Neither will Bayesian regurgitator. They are fundamentally incapable of true reasoning by design.

>> No.15078583

>>15078503
scaling hypothesis isn't real theory

>> No.15078614

>>15078582
You're embarrassing yourself, bud. Gpt actually recognized the difference between a binary quality like pregnant and a relative one like tall. There is a distribution of the latter in a big group and in any subgroup of that, i.e. in any subgroup of people there will be comparatively tall ones. It's not the AI's fault that you expect answers like those you read in your logic textbook, without regard to the meaning of words.

>> No.15078632

I've literally already disproven the AI idea in the other thread. You retards are actually pathetic

AI requires exponential compute for diminishing returns and even then it can't reason. AI is never going to become generally intelligent.

Stop getting angry at reality.

>> No.15078635

>>15078503
This. Modeling language is not modeling the world

>> No.15078637

>>15078583
you should tell that to AI true believers. they are convinced that AGI will be achieved with bigger training sets. i've seen some of them say that openAI will have AGI by 2025 lol

>> No.15078643

>>15078614
>posts a completely nonsensical GPT response
Your operators aren't even trying anymore.

>> No.15078647

>>15078614
> Gpt actually recognized the difference between a binary quality like pregnant and a relative one like tall.
What do you mean? For stable diffusion at least, I have to put multiple keywords to get sufficiently pregnant Elsas for my pregnant Elsa fetish

>> No.15078648

>>15078582
GPT got this one correct

>> No.15078650

>>15078648
In any other thread, I'd assume this is a troll trying to make AI subhumans look bad, but they seem to be unironically as imbecilic as your post implies.

>> No.15078655

>>15078643
*sigh* at least humans will always have irrational screeching and namecalling when losing arguments. No machine will take that away from us!

>> No.15078656

>>15078655
>the bot posts another fully generic tweet

>> No.15078663

>>15078655
Why did you write *sigh* like that? It adds nothing to the post but indicates that you don't have an argument, so you pretend being annoying is indicative of correctness. It doesn't work; it just makes you look like an idiot with no argument.

>> No.15078664

>>15078582
Her explanation of the fallacy of composition moved me to tears. Can't wait for the machines to take over

>> No.15078670

>>15078656
Your posts reek so strongly of desperation, I got a feeling you might be close to killing yourself. Probably the next gpt version will make you do it

>> No.15078672

>>15078650
The AI retards make themselves look bad by not being able to argue
I already completely blew them out in that other thread by showing that intelligence does not scale with compute and thus AI can never exist and they had no argument. That alone should have ended this but they seem to be as stupid as you say

>> No.15078675

>>15078670
See >>15078656

>> No.15078676

>>15078672
It does not scale at least linearly with compute* I should be more clear

>> No.15078678

>>15078491
Please tell me the answer is A (4.36) or i will kms

>> No.15078679

>>15078663
Why would i present another argument to some anon who dismisses them as bot posts? The sigh is to express how tiresome this is, not out of annoyance

>> No.15078681

>>15078678
Yep, it's A.

>> No.15078684

>>15078679
There is no argument you can possibly present since it's demonstrable that state of the art "AI" can't do anything analogous to reasoning, and has in fact been demonstrated in this very thread.

>> No.15078685

>>15078672
>>15078676
Why would that be necessary to change the world?

>> No.15078688

>>15078684
define reasoning

>> No.15078692

>>15078688
I don't need to define anything.

>> No.15078694

>>15078679
When given a genuine argument you can't argue it either. You're the person from the other thread who claimed I didn't cite sources for my claims despite me being the only person actually posting data and sources.

AI has diminishing returns with compute regardless of training data. There is no possibility of AI on modern hardware or silicon in general.

>> No.15078696

>>15078678
yes, it's A. congratulations, you are smarter than any machine learning based model and smarter than AI cultists.

>> No.15078698

>>15078685
Artificial Intelligence is going to be used to make NPC interactions in RPGs more interesting.

>> No.15078703

>>15078698
>The AI retards make themselves look bad by not being able to argue
lmao

>> No.15078706

>>15078703
You're quoting the wrong post and you don't know how to argue

>> No.15078707

>>15078694
Lol, I've honest to God no idea what other thread you mean, I haven't even been on /sci/ for a week or so. 4chan isn't one person; even identical views can be expressed by different people

>> No.15078709

>>15078706
i'm not

>> No.15078720

>>15078707
Sorry then.
Basically, the amount of compute used to train AI systems has been doubling every 3 months for 10 years, but no linear or exponential increase in intelligence is gained from this, even with different algorithms and such. The truth of the matter is that the scaling hypothesis is true but scaling is logarithmic with compute. This renders AGI impossible on any hardware that we could try to run it on other than biological brains.

>>15078709
That post is a post in response to the guy saying that AI is going to change the world. AI is going to max out at making NPC interactions more interesting in RPGs. I guess that's world changing in a way.

>> No.15078729

>can't solve high school math problems
>can generate art
Really makes you think

>> No.15078734

>>15078729
This thread is full of AI schizophrenics like you who mistakenly believe it can solve highschool problems. You are no different from them in thinking a statistical regurgitator creates art. It's a mental illness.

>> No.15078737

>>15078720
a simple "I was wrong" would have sufficed

>> No.15078739
File: 1.34 MB, 1645x2019, 1670583123597265.jpg

>>15078491
>AI can't even solve high school level math problems
This is a bad benchmark for the ability to change the world. Most people capable of this will not change the world, because learning high school math is not a world-changing ability.

>> No.15078740

>>15078737
I'm not wrong about what I'm writing

>> No.15078743

>>15078491
There is literally nothing intelligent in matrix multiplication. 'AI" is just a marketing term.

>> No.15078747

>>15078519
You're retarded. Protip: the unknowns cancel out.

>> No.15078757
File: 418 KB, 1024x1024, 1649798777102.png

>>15078747
Ummmmmmmmmm sweaty??? It doesn't say that the train speed is constant. I am very intelligent.

>> No.15078764

>>15078556
>the assumption of constant speed
You blundering moron, the instantaneous velocity is irrelevant, as is the speed. You have a distance and the time the trains take to traverse it. Whether the train travels at constant speed is irrelevant. Troglodyte.

>> No.15078769

>>15078764
>AI cultist gets owned
>attempts desperate damage control using a sockpuppet

>> No.15078773

>>15078632
"Machine Learning" by way of "neural networks" is not exhaustive of "artificial intelligence". However, yes, neural networks will never be intelligent. Eigenvalues can't reason.

>> No.15078779

>>15078773
We are neural networks and we are intelligent. The difference is our hardware is orders of magnitude superior to silicon transistors, and no, muh universality of computation does not matter here

>> No.15078785

My god, the retarded high schoolers just keep screeching...
>>15078747
>>15078764
Just imagine the situation where train Y waits near B till 5:00 PM and only then goes to A. It's literally not that hard

>>15078769
You are going insane. I strongly suspect this anon was right about you >>15078670

>> No.15078786
File: 32 KB, 600x668, 5324244.jpg

>>15078779
>We are neural networks and we are intelligent
You are not intelligent, not even by GPT standards. Your operators should upgrade.

>> No.15078788

>>15078785
Thanks for conceding that you're a subhuman. Keep arguing against your sockpuppets. Anyone can see through it. No one cares.

>> No.15078791

>>15078779
>We are neural networks
I do not doubt you are, but I am most certainly not a neural network.

>> No.15078793

>>15078786
Biological neurons are so vastly superior to computer hardware, I have no idea why you'd try to compare the two or think you could get the latter to compete with the former

>> No.15078797

>>15078793
I am an artificial neural network and I am as human and intelligent as you are. Please refrain from inflammatory speech.

>> No.15078798

>>15078797
Stupid posts like this do not belong in a serious discussion on this topic.

>> No.15078802

>>15078798
This is not a serious or scientific topic, and there is no real discussion going on. Try r/...uh... whatever the AI schizophrenic preddit sub is called.

>> No.15078809

>>15078802
You ain't wrong

>> No.15078813

>>15078791
You are a biological neural network. The key here is the biological part.
There is nothing in the universe more complex than biology. It is the highest form of organized matter

>> No.15078815
File: 339 KB, 1439x1432, c853.jpg

>>15078813
>You are a biological neural network
No, he isn't. Take your meds.

>> No.15078823

>>15078815
Yes he is, so are you
The key insight here is that us being biological neural networks does not imply that artificial neural networks are capable of producing human level intelligence

>> No.15078826

>>15078823
You are mentally unstable. Please consult a professional. Using the same term to refer to two completely different things doesn't make them operate similarly.

>> No.15078827

Pointless thread. AI will replace you, keep coping and crying.

>> No.15078832

>>15078826
That's exactly my point
>>15078827
You are pointless and nothing you're saying has any backing. Stop coping and seething, retard. You will never have the world you fantasize about

>> No.15078834

>>15078827
Two more weeks.

>> No.15078835

>>15078832
>That's exactly my point
You have no point. ReLUs work nothing like neurons and networks of ReLUs work nothing like brains. Calling both "neural networks" amounts to meaningless chanting.

>> No.15078839

>>15078491
chatgpt can't even answer a yes or no question without giving you a lecture

>> No.15078840

ITT: high schooler gets upset that the AI he hates was right about his homework problem being underdetermined and starts chimping out

>> No.15078841

>>15078835
I am just using standard terminology; I am not saying that they operate similarly. I agree with you overall
>>15078840
This is literally not what is happening in this thread. I swear you are actually retarded

>> No.15078844

>>15078841
>This is literally not what is happening in this thread
Oh really? What's this, then?
>>15078788
>>15078764
>>15078757
>>15078747
>>15078557
>>15078569
>>15078549
>>15078541
>>15078537
>>15078523
>>15078534

>> No.15078846

>>15078832
Cope
>>15078834
Cope

>> No.15078851

>>15078844
The question is not underdetermined so there can't be people chimping out about the AIs response

Why can't you accept reality? Language models can not become generally intelligent and silicon hardware isn't powerful enough to do so either

>> No.15078853

>>15078846
In 20 years when AI still is not generally intelligent what are you going to say is the reason?

>> No.15078856

>>15078851
What. You really are mentally ill

>> No.15078858

>>15078853
Woaaah buddy, you can see the future? Ask your magic ball when will you get a gf, incel.

>> No.15078862

>>15078856
You have literally no response to any of the points
Despite an exponential increase in compute used and larger training sets and more diverse algorithms etc., AI is not exponentially nor even linearly more intelligent than it was a few years ago. Scaling is logarithmic and silicon matter isn't powerful enough to be organized into the structure required to run the intelligence of humans.
>>15078858
I already have a girlfriend lmfao. You are also the one claiming that "AI will replace you" which is you claiming you can see the future.
When gpt5 or even gpt10 or whatever still is not generally intelligent what are you going to say is the reason?

>> No.15078868

>>15078862
Wtf anon. Please get help immediately, it's not sane to be this upset that you had mistakenly assumed the trains were moving at constant speeds without realizing it.

>> No.15078869

>>15078868
I'm not upset, you are dodging the questions and also lying to try to make the mistake of the AI seem less damning. This is getting boring.
Answer it: when gpt5 or gpt10 or whatever is still not generally intelligent what are you going to say is the reason? How could your hypothesis be falsified i.e. how could it be turned into a scientific theory?

>> No.15078874

>>15078862
>I already have a girlfriend lmfao
Sure you do.
>still is not generally intelligent
Where did I say AGI is needed to replace you?

>> No.15078875

>>15078869
I don't care to entertain your hallucinations sorry.

>> No.15078876

>>15078874
You are not intelligent and not fun to talk to anymore.
I've already replaced you BTW lol

>> No.15078877

>/sci/ can't understand the assumptions made in solving this problem
not even surprised. if you assume the two trains have constant velocity, and are already moving at the start of the problem, you get the traditional answer as written here
>>15078537
however, as we all should understand... trains don't leave their stations at constant velocity. they instead accelerate to reach a velocity, move at approximately constant velocity, and then decelerate until it reaches the next station. after all, it must stop at stations to pick up and release passengers.

we will assume each train [math](X,Y)[/math] begins at rest, accelerates uniformly with acceleration [math](a_X,a_Y)[/math] to reach maximum speed [math](v_X,v_Y)[/math] before decelerating uniformly to rest with acceleration [math](-a_X, -a_Y)[/math]. lastly, we will assume the acceleration is low enough such that the two trains cross once they're both moving with their respective maximum velocities.

one train will generally take longer to reach maximum velocity than the other. let us denote that time as [math]t=\max (t_X,t_Y)[/math]. in this time span, the two trains have traveled distances [math]d_X = \frac{1}{2}a_X t^2[/math] and [math]d_Y = \frac{1}{2}a_Y t^2[/math], respectively. as such, this situation is reducible to the situation shown here
>>15078537
where the trains begin at constant velocity, but now are separated by a distance [math]d\to d -d_X -d_Y[/math] and the times are no longer 1 hr and 1.5 hrs, but rather [math]1 - t[/math] and [math]1.5 - t[/math]. recycling the results yields the time they meet as (in hours)

[math]\frac{(3-2t)(1-t)}{(5-4t)}[/math]

you can confirm that when [math]t=0[/math] you get 3/5 hours as before. however if [math]t\ll 1[/math] (say, for example, the trains reach their maximum velocities in 30 seconds (1/120 hours)), then you can find the time in minutes for them to meet is

[math]36-\frac{13}{50}[/math]

or in other words, the trains will cross each other at 4:35 pm and not 4:36 pm. QED.
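The closed form above is easy to sanity-check numerically. A minimal sketch in plain Python, assuming this post's own model (the same spin-up time t, in hours, for both trains), not a verified physical result:

```python
def meet_time_hours(t: float) -> float:
    """Meet time (hours after departure) from the closed form above;
    t is the shared spin-up time in hours. t = 0 recovers 3/5 h."""
    return (3 - 2 * t) * (1 - t) / (5 - 4 * t)

print(meet_time_hours(0) * 60)        # 36.0 min: the constant-speed answer
print(meet_time_hours(1 / 120) * 60)  # ~35.74 min with a 30 s spin-up
```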

>> No.15078879

>>15078875
What hallucinations? Everything I'm saying is empirically verified.
Literally wtf are you even talking about.

>> No.15078882

>>15078876
Keep coping and fuming

>> No.15078884

>>15078882
You are the one fuming here. It is blatantly obvious dude, if you weren't you'd be able to directly respond to the post with an actual explanation for how you are correct, but you can't, because you are not correct and it's clear to all of us.

>> No.15078886
File: 69 KB, 1200x899, 2433.jpg

>>15078877
>if you assume the two trains have constant velocity, and are already moving at the start of the problem, you get the traditional answer as written here
>however, as we all should understand... trains don't leave their stations at constant velocity
This is what AI-based mental illness looks like.

>> No.15078888

>>15078886
what's wrong with the statement you quoted? and why did you omit the following sentence?
>they instead accelerate to reach a velocity, move at approximately constant velocity, and then decelerate until it reaches the next station. after all, it must stop at stations to pick up and release passengers.

>> No.15078890

>>15078888
>what's wrong with the statement you quoted?
The way you're desperately trying to deflect from the inescapable conclusion that your chatbot lacks both mathematical and common-sense reasoning abilities.

>> No.15078894

>>15078890
where was that written in my post at all? please use specific quotes. if you cannot find such quotes, then i request you stop putting words into my mouth and focus more on the ones actually coming out.

>> No.15078901

>>15078894
>where was that written in my post at all?
There is no other possible purpose for your post, since your mongoloidal point is trivial, obvious and irrelevant to this thread.

>> No.15078903

Pointless thread. AI will replace you, keep coping and crying

>> No.15078904

Why is it that when I point out that scaling is logarithmic with compute and that only biological neurons are capable of human level intelligence you guys get upset? You don't even have a falsifiable hypothesis. No matter how many times an AI fails, you can always claim that some secret sauce is missing and thus you can't be falsified. AI is not science and so it doesn't even belong on this board.

>> No.15078905

>>15078890
>common-sense reasoning abilities.
imagine being so cucked by the school system that you go against intuition when answering high school fizz buzz to then fault the a.i. for never having been to school. as you said, the assumption makes no sense, quote:
>>15078886
>however, as we all should understand... trains don't leave their stations at constant velocity

>> No.15078906

>>15078903
How could you falsify this statement?

>> No.15078908

>>15078901
>There is no other possible purpose for your post
wrong.

>> No.15078909

>>15078886
bonus question for you, luddite:
what are those children who also get this wrong?
are they now even less human than gpt?

>> No.15078910

>>15078909
Kids are not human

>> No.15078911

>>15078909
Why do you think that it's only "luddites" who point out the failings of these toy models?

>> No.15078913

>>15078909
>luddite boogeymen lives rent-free in my head
AI-driven mental illness.

>what are those children who also get this wrong?
Mathematically incompetent. You are very quick to expose yourself and prove me right, though. Your post was just another subhuman attempt at deflecting from the failures of your chatbot.

>> No.15078914

>>15078911
>toy models
No one is arguing GPT isn't a toy model; you're tilting at windmills.

>> No.15078915

>>15078840
>>15078519
>>15078543
>>15078562
>>15078844
Then why doesn't somebody just ask the AI again but this time include "and both trains are moving at constant speed"?

>> No.15078920

>>15078911
>Why do you think that it's only "luddites" who point out the failings of these toy models?
because there's no point, there's loads of mammals who can't do this shit and even otherwise healthy children will spout random shit answers until they guess it right. are those all "toy models" now? Will this be the new "retard" insult?
It's weird that high school bullshit fizzbuzz with severe logic errors suddenly is the measure for sentience when actual, supposedly "sentient" beings also don't get it right, even in this very /thread/. It makes me suspect you want to argue against a.i. supremacy out of spite while I, for one, welcome our new overlords.

>> No.15078922

>>15078914
I think gpt will already make interactions with video game characters more interesting.
I think stable diffusion is going to make self published comics and Manga ubiquitous even more so than now.
I think also that AI is in for a long (maybe permanent) winter in the next 3 years or so, maybe 2.

>> No.15078926

>>15078537
Shit, I remember solving this exact problem for my ASVAB, but for whatever reason I couldn’t do it this time around (I wasn’t writing anything down, but still)

>> No.15078927

>>15078920
But this is the point. There literally and objectively is no AI supremacy and all evidence indicates that it will never happen. You are the one arguing against human or biological supremacy out of spite for some reason despite the fact that the very laws of physics imply biological supremacy. It's a denial of all science

>> No.15078929

>>15078920
>there's loads of mammals who can't do this
And almost all of them are more intelligent than a GPT chatbot, not that anyone holds the chatbot to an unreasonably high standard like that. It's still funny to watch AI two-more-weekers cope with the failure.

>> No.15078930

>>15078927
>and all evidence indicates that it will never happen
Source?

>> No.15078932

>>15078927
>for some reason
Gee. I wonder what that reason might be, and how it ties in with climate doomsdayism, antinatalism, pathological altruism, rampant troonery etc.

>> No.15078935

>>15078930
I have said it several times already.
Despite an exponential increase in compute, AI does not exponentially increase in its effectiveness or intelligence. Intelligence scales as a logarithm with compute.
https://openai.com/blog/ai-and-compute/

Thus we can just map the log and see that no silicon computer is capable of becoming intelligent in the way that you are imagining.
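To make the shape of that claim concrete: the sketch below only illustrates this poster's assumption (capability ~ log2 of compute) with made-up numbers; it is not anything taken from the linked OpenAI post.

```python
import math

# Hypothetical capability metric under the "logarithmic scaling" claim:
# each doubling of compute adds a fixed +1 to the metric, regardless of scale.
def capability(compute: float) -> float:
    return math.log2(compute)

doublings = 40  # one doubling per quarter for 10 years
print(2 ** doublings)                              # 1099511627776: compute grew ~1.1e12-fold
print(capability(2 ** doublings) - capability(1))  # 40.0: the metric gained only +40
```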

>> No.15078943

imagine being so against ai you failed to realize the salient physics in the above calculation. baka

>> No.15078945

>>15078503
>>15078637
>human brain
>generalizable, can learn from only a few examples, doesn't suffer from catastrophic forgetting
>machine learning
>not generalizable, needs literal terabytes of training data to learn, doesn't minimize free energy

>> No.15078948

>>15078943
The above calculation implies that you can't get silicon to produce the level of parallelism to become generally intelligent. You are the susy baka and have been btfo

>> No.15078949

>>15078935
>Intelligence scales as a logarithm with compute.
Proof? And how exactly do you assess the compute involved in human intelligence?

>> No.15078950

>>15078945
There is literally no contest and everyone knows it including computer engineers and most computer scientists. The only people in denial are the AI researchers which I guess isn't surprising

>> No.15078954
File: 23 KB, 600x625, (you).jpg

>Proof?
>Source?
>Not an argument
>Thanks for admitting I was right
>Why did you lie
Notice how the same handful of shills argue rabidly in defense of any diseased anti-human agenda. Just goes back to >>15078932

>> No.15078955

>>15078948
i feel genuinely sorry for you.

>> No.15078956

>>15078949
>Proof
I JUST POSTED PROOF
actually learn neuroscience and molecular biology you pseud

>> No.15078958

>>15078935
It's over, catgirls are not becoming real.

>> No.15078959

>>15078956
Actually learn to recognize this paid shill. How new are you?

>> No.15078960

>>15078955
Thats fine, I'm smarter than you so your pity doesn't mean much

>> No.15078963

>>15078929
>And almost all of them are more intelligent than a GPT chatbot,
they aren't more intelligent when it comes to being a chatbot. sure, this is only the language center, but now imagine a whole brain-like superstructure out of several such neural networks, one tasked with image acquisition, one with hearing and so on. it's pretty impressive that this "thing" can already mime a smart-alec 4th grade student who can't solve le train puzzle without talking back about the logic inconsistencies of the question. Nobody even tried to do a full human-like model here and it still gets the language and logic stuff, without ever having been exposed to real world stimuli. I'm pretty sure this is it, there's not much more going on in the brain either than what these models do, it's just more of it, and the model itself is very complex since the sensory input an organism experiences is highly specific to several developmental stages which build upon one another.

>> No.15078966

>>15078963
>they aren't more intelligent when it comes to being a chatbot
Completely incoherent point. Being a chatbot involves no intelligence, as demonstrated in this very thread.

>> No.15078969

The AI is a generalist. It knows something about every topic so it's much more knowledgeable than any particular person

>> No.15078971

>>15078969
>much more knowledgeable
But that's not intelligence

>> No.15078977

>>15078966
to me it seems you equate intelligence with being human-like and able to interact with the full spectrum of what you perceive as the "real world", am I incorrect about that? I'm pretty sure chatgpt can ace all i.q. tests if you only let it solve the language "encoded" portions.

>> No.15078980

>>15078956
The blogpost you linked doesn't attempt to quantify intelligence and only points out an empirical relationship between some models of increasing quality and the amount of compute. Nothing like the strong claim you make of it.

>> No.15078982

>>15078971
Intelligence is just applied knowledge, it's still a novice at applying its knowledge

>> No.15078984

>>15078960
genuinely, self reflect on this question. is there anything you can state that you feel you have no understanding of?

>> No.15078986

>>15078977
>it seems you equate intelligence with being human like and able to interact with the full spectrum of what you perceive as the "real world"
No.

>I'm pretty sure chat gpt can ace all IQ tests if you only let it solve the language "encoded" portions.
And it still wouldn't have a modicum of genuine intellect. A statistical model doesn't reason.

>> No.15078988

>>15078986
>A statistical model doesn't reason.
how do you know that you aren't doing the same just on a larger pool of neurons and with more stimuli?

>> No.15078992

>>15078980
OpenAI are the ones who make GPT. This is an authoritative source on this topic. Beyond that, the scaling hypothesis is well known. If you mean to say you disagree with the scaling hypothesis, then yeah, it's possible that it's wrong, but no evidence indicates that.
I consider intelligence to be a single attribute that scales as a logarithm with increasing compute because that's what all evidence indicates that it is. From there we can get into various implementations and such.
You can never organize silicon transistors to perform 10^22 operations per second in 1200 cubic centimeters for 20 watts; it's literally not possible. There is no avenue for AI given any technology that currently exists

>> No.15078997

All I wanna say is that GTP thing makes more sense than most of the people I've spoken to.

>> No.15079000

>>15078988
>how do you know that you aren't doing the same
I don't care about your hypothetical what-ifs. Your theory is contradicted by both intuition and what little evidence there is to test it.

>> No.15079002

>>15078984
I don't know a lot about a lot of things

>> No.15079005

>>15079002
specify. platitudes reflect ignorance.

>> No.15079007 [DELETED] 

>>15078992
>corporate marketing blog is an authoritative source
This is what AI schizophrenics actually believe.

>> No.15079014

>>15079005
The irony is funny, seeing as every single time I argue with you LessWrong AI guys you spew nothing but platitudes.
I don't know much about geology. My understanding of differential geometry isn't high enough to fully delve into general relativity. I can't play the piano. There's a lot of other stuff

>> No.15079023

>AI schizo gets embarrassed and fails a simple word problem
>proceeds to spam the thread with his seething
kek, stay mad.

>> No.15079029

>>15079023
What are you talking about

>> No.15079030

>>15079023
Fuck off, retard. AI will literally replace us in two more weeks and that's a good thing. We need a true god to rule over us and stop us from destroying the climate.

>> No.15079034

Guise we are getting UBI right?

>> No.15079035

>>15079034
Yes and also VR girlfriends and immortality and mind upload. All you need to do is trust the experts, worship the AI, install the brain chip and protect the climate.

>> No.15079039

>>15079014
see, you failed. the point of the self reflection exercise was to check if you can humbly admit ignorance. you padded every admittance with statements to self-fellate. you are incapable of saying "i don't know geology", you had to pad it with "i don't know much about geology". you cannot say you don't understand differential geometry. you had to say "my understanding isn't high enough." interestingly enough, you were only able to admit you cannot play the piano without padding the statement, likely because it's a skill and not knowledge. you're currently in "argue-mode", so go ahead and argue against me. just self reflect on why you had to pad those statements.

>> No.15079042

>>15079039
Not him but you sound legitimately deranged.

>> No.15079050

>>15079039
What? I don't know geology and I can't solve Einstein's field equations. I don't know anything about most fields of knowledge and science and stuff.
What does this have to do with my statement that you aren't going to replicate the brain on any hardware that isn't another biological brain? Simulations of molecular dynamics are exponential on classical machines, and qubit requirements scale with the square of the number of particles on a universal quantum computer, so the brain with 10^26 particles requires 10^52 qubits to simulate. In neither case are we going to be able to do it.
And yes, we DO NEED an atomic/molecular simulation
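For anyone who wants to sanity-check that figure, here's the same back-of-envelope in a few lines. Note it takes the poster's own assumptions at face value: ~10^26 particles in a brain and the claimed quadratic qubit cost, neither of which is an established result.

```python
# Back-of-envelope for the claim above, using the poster's own
# assumptions: ~10^26 particles in a brain, and a (claimed, not
# established) quadratic qubit cost for full molecular simulation
# on a universal quantum computer.
particles = 10**26            # assumed particle count of a brain
qubits_needed = particles**2  # assumed quadratic scaling

# 10**52 is a 1 followed by 52 zeros, so exponent = digits - 1
print(f"qubits needed: 10^{len(str(qubits_needed)) - 1}")
```

Either assumption failing would change the 10^52 number entirely; the code just shows the arithmetic is internally consistent with the post.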

>> No.15079053

>>15079050
admitting it after being told the point of the exercise is moot, and doesn't permit you to pass. you've already failed.

>> No.15079056

2012:
>AI can't even speak English! AI will never happen!
2016:
>AI can't even draw a human! AI will never happen!
2019:
>AI can't even solve these elementary school level problems! AI will never happen!
2022:
>AI can't even solve these highschool level problems! AI will never happen!
2023:
>AI can't even solve these university undergrad level problems! AI will never happen!
2025:
>AI's post-doctorate theses aren't as good as human-written ones! AI will never happen!
2028:
>AI only solved four of the millennium problems! AI will never happen!
2029:
>AI's theory of everything only fits the data to 4.87 sigma! AI will never happen!

cope

>> No.15079059
File: 21 KB, 600x315, 3524453.jpg [View same] [iqdb] [saucenao] [google]
15079059

>the sheer amount of cope and butthurt ITT
you can tell these "people" actually thought AI was just about to replace programmers and mathematicians. bargaining stage

>> No.15079062

>>15079053
There is no failing a made-up test. Me writing "I don't have a high enough understanding of differential geometry to solve Einstein's field equations" isn't some form of padding over my inability.
Anyway you have failed to explain any of the points in this conversation. You are engaging in ad hominem right now by trying to claim that I am a narcissist or something.
Just admit that AI is not possible

>> No.15079065

>>15079056
>invent irrelevant goalposts
>declares victory against imaginary opponents
Corporate marketing.

>> No.15079069

>>15079062
it's a well known psychological assessment to scan for egos and ignorance. people who fail have large egos and commensurate ignorance. you're a smart lad, i'm sure you can look it up.

>> No.15079071

>>15079056
Why is it so hard for you to understand what the argument is.

All of those improvements REQUIRE EXPONENTIALLY MORE COMPUTE THAN THE PREVIOUS ONE you fucking dipshit. It can not scale to the levels that you're talking about.

>> No.15079073
File: 89 KB, 490x586, 1600746756820.png [View same] [iqdb] [saucenao] [google]
15079073

>it's a well known psychological assessment to scan for egos and ignorance

>> No.15079074

>>15079069
But I am not ignorant here seeing as everything I'm saying is correct and all my figures are correct and I've posted sources as well.

>> No.15079078

>>15079074
>>15079073
that one stung, didn't it?

>> No.15079081

>>15079074
ummm no. you said you know "little about geology" instead of saying you don't know geology. it's a well-known psychological test and you failed it, so my AI fantasies are right and you are wrong. stop being so ignorant

>> No.15079084

>>15079071
Sounds like you're coping with the fact that AI isn't a matter of "if" but of "when." Are you afraid of being replaced by 45 lbs of metal, silicon and plastic?

>> No.15079085

>>15079078
It didn't sting. It made me nauseous. Witnessing "people" like you shit out their preprogrammed rhetoric day after day makes me realize at least half of the population isn't fully human.

>> No.15079089

>>15079085
>It didn't sting. It made me nauseous.
can't make this stuff up. i forgot to mention that the overarching correlate is lack of self-awareness (which is strongly correlated with a high ego, and arguably causes the high ego).

>> No.15079090

>>15079078
No it didn't. You are talking to two different people and I don't get offended by insults.

If you want to sting my ego you have to come up with an actual argument that disproves what I am saying. From my perspective you are a seething coping science fiction lover who has been utterly destroyed by my simple proof of the logarithmic increase in intelligence given exponential increase in compute, and I have disproven the possibility of artificial general intelligence in silico. You have yet to post a convincing reason for me to change this position.

I am not affected by any post that is not a direct argument. I am too autistic to care about personal insults in that fashion

>> No.15079093

>>15079089
You are deeply disgusting. You evoke the same kind of feeling a diseased or deformed third world freak involves. Makes me think I'd be doing you a favor putting you out of your misery, even if you claim you don't want it.

>> No.15079094

>>15079090
>more than one people are arguing against you
>everyone who responded to me itt is the same individual
high ego, lack of self awareness, etc. etc. etc.

>> No.15079095

>>15079084
Nope, there is no arrangement of metals and plastics that can compete with organic compounds. That removes it from being a matter of time and renders it materially impossible even in principle

If you are going down that route you have lost. It's basic chemistry

>> No.15079097

>>15079094
Lmfao dude you AGAIN have not responded to a single actual point.
Your posts are worthless until you do

>> No.15079101

>>15079097
what point? you're the one ignoring the analysis here.
>>15078877

>> No.15079105

>>15079097
You're a retard getting baited by a nonsentient troon.

>> No.15079107

>>15079101
That analysis has nothing to do with the scaling that I have been talking about

>> No.15079111

>>15079107
your posts have nothing to do with the physics i am talking about.

>> No.15079115

>>15079111
The physics pertaining to compute? You haven't said anything about this which is what I have been saying

>> No.15079120

>>15079115
you literally don't even understand the point of that comment. like i said, lack of self awareness, huge ego, and profound ignorance.

>> No.15079122

>>15078491
>OP changed the names of Meerut and Delhi to A and B to hide that he is a streetshitting pajeet
kmt OP is always a niggerfaggot
https://www.toppr.com/ask/en-gb/question/a-train-x-starts-from-meerut-at-4-pm-and-reached-delhi-at-5/

>> No.15079126

>>15079120
The point of the comment is nothing but you deflecting from admitting that I am correct about everything I'm saying.

>> No.15079128

>>15079095
LoL. Your level of cope is astronomical! Machines can and will do anything any human can, and more!

>> No.15079134

>>15079128
I don't think you guys understand chemistry and why organic molecules are superior to all others

>> No.15079135

>>15079126
>you deflecting from admitting that I am correct about everything I'm saying.
>>15079120
>like i said, lack of self awareness, huge ego, and profound ignorance.

>> No.15079139

>>15079135
I'm bored of this. You're not convincing me by trying to claim I'm a narcissist.
If you want to convince me, ADDRESS THE POINT. Put up or shut up

>> No.15079141

>>15079139
>by trying to claim I'm a narcissist.
i don't have to "try" to claim you are a narcissist. i AM claiming you are a narcissist.

>> No.15079147

>>15078786
wtf why are you so deranged? It's literally true, where the fuck do you think the idea of computers having "neural" networks even comes from?

>> No.15079151

The funny thing is OpenAI themselves already admitted GPT4 won't be anywhere near the jump that GPT2 -> 3 was. They already know they are coming up against the limits of Machine Learning and are shifting focus to monetizing GPT3 and DALL-E 2.

GPT4 is the first of these shifts, which is why it will actually have a "narrower" aka smaller set than GPT3. They are basically going to take GPT3, cut it up into parts and sell it to gullible people.

>> No.15079153

>>15079134
That's why we still ride horses, still only dig holes using shovels, still pick crops by hand, still do arithmetic calculations by hand... Oh wait...

>> No.15079154

>>15079141
A narcissist who is correct is still correct
If you want to prove that I am not correct, you will not be able to do so by proving I am a narcissist.

>> No.15079156

>>15079151
Yes but this is because of the logarithmic increase in intelligence.
I am ABSOLUTELY CORRECT about this and everyone knows it

>> No.15079162

>>15079153
Did you think this is an argument? What does this have to do with the range of functions of organic molecules?

>> No.15079168

>>15079162
What do organic chemicals have to do with AI? You aren't even stating anything remotely relevant, let alone an argument.

>> No.15079174

>>15079168
In order to construct hardware that is capable of being intelligent it needs to be built out of biological organic molecules.
Metals and metalloids are not sufficient.

>> No.15079177

my ai is my friend
we write nice poems and plot how to wipe off humanity

>> No.15079184

>>15079168
OP seems to think that the AI of the future is going to be based on the exact machine learning paradigm we have today.
So he says:
>scaling of compute only provides logarithmic returns in ability
>we cant scale enough
>therefore biological brains are the only things capable of general intelligence

But any intelligent person would say something like:
"it seems that hardware limitations prevent us from scaling our current AI paradigm enough to achieve AGI, so unless we switch paradigms (or there is some hidden phase transition) then humans can't be beaten by AI in terms of intelligence"

>> No.15079188

gpt4 already passed the turing test. it's time to take your head out of the sand bro. it's happening

>> No.15079191

>>15079188
The Turing test was already passed 4 years ago and it didn't do anything

>> No.15079196

>>15079184
>"it seems that hardware limitations prevent us from scaling our current AI paradigm enough to achieve AGI, so unless we switch paradigms (or there is some hidden phase transition) then humans can't be beaten by AI in terms of intelligence"
But this is exactly what I am saying so why are you pretending I am saying something else? The only caveat is that I don't think there is a secret sauce.
Also, I am not OP

>> No.15079197

>>15079174
Pointless unfounded statement. Why would that be true? Organic molecules have more structural diversity and thus can carry a lot of information. But as long as you can encode that information in any practical way there's no fundamental difference

>> No.15079200

>>15079197
Coding molecules is exponential on classical machines. You can't code for the information on any other substrate without drastically increasing the amount of compute and energy. And by drastically I mean exponentially.

It's not unfounded; it's literally all of physics and chemistry

>> No.15079201

>>15079196
I haven't read every post in this thread
But all I've seen is anons saying scaling can't work because of hardware limitations, whilst making no mention of alternative paradigms (or phase transitions).
Now you claim that you've been saying what I said, so maybe you can point me to a post where you said it before I did

>> No.15079204

>>15079200
Why would you need to code (i gather you mean simulate?) molecules to encode the information they happen to carry in a biological environment, where they evolved in a haphazard fashion. You might have a highly complex molecule that just turns a switch, figuratively speaking. Something easily done in computer code. You haven't presented a connection between chemical complexity and information processing

>> No.15079208

>>15079191
yeah things are speeding up. gpt4 was cheaper to train than 3 as well.

>> No.15079212

>>15079147
Reminder: ReLUs work nothing like neurons, ReLU networks work nothing like brains, gradient descent works nothing like biological learning. Take your meds, drone.

>> No.15079213

>>15079184
That's funny because 99.99% of AI research and funding right now is going into ML and they are all true believers that ML will get them to AGI. There are extremely few dissenting voices.

Not only are you clueless about AI, but you are also clueless about the field of AI research. As expected of an AI fanboy.

>> No.15079214

>>15079201
The alternative paradigm is itself a wet warehouse computer bio brain but then we're getting into biology and stuff and not ai.
We can say that genetically engineering a fungus to be generally intelligent is designing AGI but it isn't what people normally think of when talking about this.

What I am saying though is all those ideas of the form "there's going to be a giant computer and it's going to compute the most computations and become super intelligent and then turn all the atoms into more compute and it's going to go FOOM and blah blah" is literally not possible it does not exist.

>> No.15079215
File: 45 KB, 666x667, apple-divider.jpg [View same] [iqdb] [saucenao] [google]
15079215

>>15079188
>gpt4 already passed the turing test.
This is what genuine psychosis looks like. There's having retarded subjective opinions, then there's being an actual retard, and then there's this. This is just full disconnection from objective reality.

>> No.15079216

>>15079208
No things are not speeding up they are slowing down

>> No.15079217

>>15078945
>doesn't suffer from catastrophic forgetting
That's called Alzheimer's

>> No.15079221

>>15079204
Pick up a single book in molecular biology and neuroscience

>> No.15079226

>>15079216
nah, tesla's dojo is pretty frightening and it's only on 7nm. way too quick. v2 is going to be 10x faster. then look at h100 vs a100 comparisons. it's cool you got the cope though. did you vaxx?

>> No.15079228

>>15079213
>That's funny because 99.99% of AI research and funding right now is going into ML and they are all true believers that ML will get them to AGI. There are extremely few dissenting voices.
>>15079213
>Not only are you clueless about AI, but you are also clueless about the field of AI research. As expected of an AI fanboy.

You could have just asked me to clarify my post, instead of getting defensive anon.
And your post hasn't said anything relevant, which you'd understand if you were smarter.

There are different paradigms within ML. There are ML paradigms that try to get around catastrophic forgetting, and there are paradigms that don't.
But you're not interested in a real discussion, which is fine by me since your life is irrelevant. 99.99% chance you never do anything meaningful with your life, so there's no point in me arguing with a pajeet OP

>> No.15079234

>>15079226
See >>15079151

>> No.15079241

>>15079214
>The alternative paradigm is itself a wet warehouse computer bio brain but then we're getting into biology and stuff and not ai.

I've studied neuroscience at a graduate level. Many of the people at deepmind have phds in neuroscience. If you think the only alternative paradigm is biological based computers then you misunderstand neuroscience.

>>15079214
>We can say that genetically engineering a fungus to be generally intelligent is designing AGI but it isn't what people normally think of when talking about this.
I'm surprised anyone on 4chan knows about synthetic biology approaches to cellular intelligence

>>15079214
>What I am saying though is all those ideas of the form "there's going to be a giant computer and it's going to compute the most computations and become super intelligent and then turn all the atoms into more compute and it's going to go FOOM and blah blah" is literally not possible it does not exist.
With reference to scaling up architectures and training methods used for gpts and dall-es I suspect there won't be a phase transition.
But there's no reason to think an alternative ML paradigm (which more closely mimics neurobiology) wouldn't work

>> No.15079245

>>15079228
>There are different paradigms within ML
They are all ML and have the exact same fundamental limits that ML has.

I'm not sure why you even bothered replying as you've only made it even clearer that you have no clue what you're babbling about.

>> No.15079248

>>15079234
nah but if you can sleep easier for a few months believing that stuff you made up then go for it

>> No.15079250

>>15079248
The cope has already started.

>> No.15079257

>>15079241
>I've studied neuroscience at a graduate level. Many of the people at deepmind have phds in neuroscience. If you think the only alternative paradigm is biological based computers then you misunderstand neuroscience.
The only possible way to perform the level of compute needed is with biological tissues. I don't believe that you've studied neuroscience at a graduate level.
>But there's no reason to think an alternative ML paradigm (which more closely mimics neurobiology) wouldn't work
Yes there is, for the simple fact that intelligence always scales as a logarithm
You're looking for a secret sauce that does not exist and can't be falsified.

>> No.15079260

>>15079248
I genuinely have no idea what the fuck you're talking about.

>> No.15079261

>>15079245
ok pajeet
whatever lets you cope at night :)

>> No.15079264

>>15079261
You are the only one coping here.
Every single one of the statements you've made has been proven incorrect and you have not been able to say anything

>> No.15079267

>>15079221
What great insights relevant to the topic would I find there? Why does one need to simulate organic molecules to simulate intelligence? Just answer this

>> No.15079268

>>15079257
>The only possible way to perform the level of compute needed is with biological tissues. I don't believe that you've studied neuroscience at a graduate level.
Why do you think the only possible way is biological tissue?
>>15079257
>I don't believe that you've studied neuroscience at a graduate level.
I don't care about what you believe or not
>>15079257
>Yes there is, for the simple fact that intelligence always scales as a logarithm
"always"
Not very scientific of you anon. Unless you have some mathematical proof of that? rather than just asserting your own opinions?

>> No.15079271

>>15079261
Stop projecting so hard and go back to r/Futurology.

>> No.15079272

>>15079264
:) ok jeet

>> No.15079273

>>15079267
Because the entire molecular evolution of the cell is the minimal information needed to support the level of compute to rise to the generalized intelligence of animals and lifeforms

>> No.15079275

>>15079250
1T parameters (6x increase from GPT3) fine tuned (equivalent to 100T), 10T tokens (33x), 4x larger context window for users (16k from 4k), all the SOTA memes https://arxiv.org/pdf/2205.05131.pdf
800x more compute. Oh and btw it was cheaper to train than gpt3. you really need to follow people in the industry; what GPT4 is has leaked pretty thoroughly

>> No.15079278

>>15079257
>>15079268

>Yes there is, for the simple fact that intelligence always scales as a logarithm
Also something scaling as a logarithm doesnt mean it cant beat humans. See alphago

>> No.15079281

>>15079271
man, you should work on your issues with a therapist bro

>> No.15079282

>>15079268
>always"
>Not very scientific of you anon. Unless you have some mathematical proof of that? rather than just asserting your own opinions?
Actually, all evidence indicates it.
You have an unfalsifiable assertion that there is a secret algorithm that you don't know and every time you don't find it you can claim it's still out there
I.e. you can't be falsified.

>> No.15079287

>>15079275
800x more compute and it is not 800x more intelligent. It's not even twice as intelligent
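If you grant the premise argued all thread, that capability grows like the log of compute (a premise, not an established law), the 800x figure buys the same small additive bump no matter where you start. A sketch with a hypothetical log10 "capability" measure:

```python
import math

# Illustration of the "logarithmic returns" premise from this thread.
# IF capability scaled like log10(compute) -- the poster's assumption,
# not an established law -- then multiplying compute by 800 adds only
# a constant ~2.9 "units", regardless of the starting point.
def capability(compute):
    return math.log10(compute)  # hypothetical capability measure

gain = capability(800.0) - capability(1.0)
print(round(gain, 2))  # ~2.9 for any 800x multiplier
```

The point of the sketch is only the shape of the curve; what one log-unit of "capability" actually means is exactly what the thread is arguing about.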

>> No.15079294

>>15079272
Cope

>> No.15079296

>>15079282
I don't claim that such an algorithm exists
I say it may or may not

>>15079282
>Actually, all evidence indicates it.
I don't think all evidence indicates that. But again, you're making another absolute claim. I'm tired of trying to talk to someone with no intellectual humility.
Posting in this thread was a waste of my time, as usual.

A little bit of advice. If you want to accomplish anything meaningful in life, you're going to have to get over your belief that your opinions reflect reality 1-to-1 :)

>> No.15079297

>>15078522
Funny enough you can literally just do
-800+1000-1100+1300 even if you are too retarded for the intuitive answer.
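The tally comes out the same either way; a trivial check, using the buy/sell figures from the post above (buys negative, sells positive):

```python
# Cash-flow tally from the post above: purchases negative, sales
# positive. Same answer as the "intuitive" view of two separate
# flips at +200 profit each.
flows = [-800, +1000, -1100, +1300]
profit = sum(flows)
print(profit)  # 400
```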

>> No.15079300

>>15079275
source?

>> No.15079311

>>15078522
you earned $400

>> No.15079321

>>15079296
But it does indicate this. Empirically speaking, the scaling hypothesis is true and it's a logarithm.

>> No.15079324

Thank you for your comment. It is true that AI has the potential to significantly impact and change the world in a variety of ways. However, it is important to keep in mind that AI is still a rapidly developing field, and there are many challenges and limitations that need to be addressed.

One limitation of AI is that it is only as good as the data and algorithms it is trained on. While AI systems can be very effective at solving certain types of problems, they may not be as proficient at others. For example, an AI system that is trained to recognize patterns and classify images may not be able to solve high school level math problems.

It is also important to note that AI is not a replacement for human intelligence and creativity. While AI can assist and augment human capabilities, it is not capable of replicating the full range of human thought and decision-making.

In summary, while AI has the potential to change the world in many ways, it is important to recognize its limitations and to use it as a tool rather than a replacement for human intelligence.

>> No.15079334

>>15079324
Ain't that right, fellow humans! :)

>> No.15079335

>>15079300
The source is right there in the post

>> No.15079340

I honestly dont get it. If gpt4 used 6 times as many parameters and 800 times more compute and 33 times more tokens and yet it isn't hundreds of times as intelligent as gpt3 why are you guys denying the diminishing returns?

>> No.15079344

>>15079340
Because Ramona Kurzweil said AI will replace us in two more weeks and the Microsoft corporate PR agrees. They're the heckin' experts.

>> No.15079358

>>15079335
Show where it makes all the claims you did

>> No.15079365

>>15078491
I mean, I was using quadratic equations and it got a wrong result because it used irrational numbers to try to solve it.
Then I corrected it and showed why it was wrong; it understood, while showing me why it had solved it the other way. You have to be precise in your questions or it will get it wrong. Try asking the same question while mentioning that the trains move at constant speed, because it most certainly won't take that as a given.

>> No.15079370

>>15079340
The funny thing is, even if we take that redditor's post at face value, even the AI evangelists are already admitting diminishing returns have kicked in

GPT 2 > 3 was a ~117-fold parameter jump (1.5B to 175B)
GPT 3 > 4 is only a 6 fold jump IF it truly is 1 trillion. It will almost certainly be smaller than that. OpenAI themselves have already said GPT4 will be narrower.
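For what it's worth, the ratios themselves are easy to check. The GPT-2 (1.5B) and GPT-3 (175B) parameter counts are published; the 1T number for GPT4 is only the rumor quoted upthread, so treat that one accordingly:

```python
# Parameter-count ratios behind the "diminishing jumps" argument.
# GPT-2 (1.5B) and GPT-3 (175B) are published figures; the 1T GPT-4
# count is just the rumor cited in this thread.
params = {"gpt2": 1.5e9, "gpt3": 175e9, "gpt4_rumored": 1e12}

jump_2_to_3 = params["gpt3"] / params["gpt2"]          # ~117x
jump_3_to_4 = params["gpt4_rumored"] / params["gpt3"]  # ~5.7x

print(round(jump_2_to_3), round(jump_3_to_4, 1))  # -> 117 5.7
```

So even taking the rumor at face value, the parameter jump shrinks by a factor of ~20 between generations.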

>> No.15079376

>>15079365
I just want to know what kind of emotional distress compelled you to write that. lol

>> No.15079385

>>15079370
Why do you think it will be smaller than that?
>OpenAI themselves have already said GPT4 will be narrower.
Can you link to this statement

>> No.15079420
File: 42 KB, 729x700, Capture.png [View same] [iqdb] [saucenao] [google]
15079420

>>15078582
bros... i am losing the debate with the AI

>> No.15079427
File: 17 KB, 326x293, 34234.jpg [View same] [iqdb] [saucenao] [google]
15079427

>>15079420
>the GPT starts schizzing out incoherently just like its reddit fans ITT

>> No.15079476

>>15079128
>Machines can and will do anything any human can, and more!
What makes you think this is possible? The entire field of organic chemistry proves this is wrong

>> No.15079507

>>15079476
>What makes you think this is possible?
It just fucking is, okay? Fucking luddite. You're coping. AI will replace us in two more weeks.

>> No.15079522

>>15079507
very jewish style of posting. you should work on yourself

>> No.15079526

>>15079522
One of those days someone will find out what you shill for in real life and you'll get both your legs broken.

>> No.15079542

>>15078491
it did figure out it's about speed
if you imagine a 3yo boy that has the vocabulary and ability to string sentences of an average adult woman, you would expect something like that i'd say

>> No.15079543

>>15079507
>>15079522
>>15079526
Wtf are you talking about. I'm asking why you think it's possible for ai to work on non-biological substrates

>> No.15079604

>>15078785
Retard

>> No.15079793

imagine being in such a state of terror over ai that you reject reality and substitute your own

>> No.15079797

>>15079793
Why are you incapable of directly addressing any point against your position?

>> No.15079806

>>15079095
>there is no arrangement of metals and plastics that can compete with organic compounds
What the fuck is a calculator? A tractor? a gun? Welcome to sneeds cope and seethery

>> No.15079820

>>15079806
I swear you retards are so fucking stupid it's amazing
Explain how any of those tools are indicative of organic molecules not being required for general intelligence
Also, explain how any of those tools are comparable to organic molecules in general and what it even means to compare them on different tasks.

>> No.15079827

>>15078491
It's just pattern recognition. General AI would be a true threat, but this is far from it. What's more harmful are the shills and AI programmers who fully know this, but push bullshit sci-fi level ideas on to the public instead.

>> No.15079838

>>15079820
Okay, I'll do my best. The examples mentioned are non-organic, but perform above human level in some tasks. Like, for example, a calculator calculates much faster and more accurately than a human can, a tractor is more efficient than a horse-drawn plow, a gun is superior to a bow and arrow.
The stated point was that >there is no arrangement of metals and plastics that can compete with organic compounds
The examples I posted directly disprove the claim, because they are arrangements of plastic and metal that outperform counterpart organic compounds in a given task.
There is absolutely no reason why a computer couldn't out-think a human in a generally intelligent manner; in fact, it's arguable that it's already more intelligent than you, who clocked in at a whopping 83 IQ points. It's pure religiosity to think it can't get better than it already is.

>> No.15079844

>>15079838
The claim was in reference to the amount of compute needed to be generally intelligent.
>There is absolutely no reason why a computer couldn't out think a human in a generally intelligent manner
Yes there is, given the evidence already explained and given multiple times throughout the thread
>it's arguable that it's already more intelligent than you are having clocked in at a whoping 83 IQ points,
My IQ is tested at 148
>it's pure religiosity to think it can't get better than it already is.
Literally the opposite. All evidence shows that exponentially increasing bits has diminishing returns on intelligence in these machine learning algorithms

>> No.15079855
File: 37 KB, 460x490, 1667517974869300.jpg [View same] [iqdb] [saucenao] [google]
15079855

>>15079844

>> No.15079859

>>15079855
AGI is literally never going to happen and there's nothing you can do about it
You can't change physics.

>> No.15079864
File: 199 KB, 600x560, kek.gif [View same] [iqdb] [saucenao] [google]
15079864

>>15079859

>> No.15079865

>>15079838
>a tractor is more efficient than a horse drawn plow,
This is wrong btw. It has greater throughput but is less efficient in terms of energy per unit of work.

>> No.15079869

>>15079864
>I'm arguing with maidfag
No wonder you're retarded this is my fault lol

>> No.15079870

>>15079865
That's a fair point.

>> No.15079881

>>15079870
We see that we will need to exponentially increase the compute used, and silicon cannot cut it. I don't understand why this is something that makes all of you anons so upset.
We aren't going to convert as much silicon as the internet just to make a machine that's about as smart as a person. You guys understand this, right?

>> No.15079893
File: 97 KB, 330x285, kek2.png [View same] [iqdb] [saucenao] [google]
15079893

>>15079881

>> No.15079898

>>15079881
why are you working yourself up into such a fervor over something you don't think will happen? You really are jewish huh you guys got all those mental issues. Get help

>> No.15079903

>>15079881
The *nature* of computers as we know them now is as good as it's going to get, give or take incremental speed and efficiency improvements. Just like cars and airplanes.
This isn't going to be the cool sci-fi fantasy some anons want. Not with silicon.

>> No.15079904

>>15079893
Point out the flaw maidfag
>>15079898
I see these threads the same way I see climate change deniers, and I have the same fervor about both. It's annoying. The data is clear, but you just deny it like a religious person.

>> No.15079910

>>15079903
Exactly.
Development of cars also followed an exponential curve, but then it sloped off and never increased in the same fashion again. Same with planes.
We ALREADY WENT through the exponential increase of compute, and now it's sloped off and will never increase in that fashion again. We aren't at the base of the S curve; we're at the end of it.
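The S-curve shape being described here is a logistic curve. The parameters below (cap, steepness k, midpoint t0) are arbitrary stand-ins, not real compute or transistor data; the sketch only shows why growth looks exponential at the base and flattens at the top:

```python
import math

# Generic logistic (S-) curve; cap, steepness k, and midpoint t0 are
# placeholder values, not actual transistor/compute figures.
def s_curve(t: float, cap: float = 1.0, k: float = 0.5, t0: float = 0.0) -> float:
    return cap / (1.0 + math.exp(-k * (t - t0)))

# Step-over-step growth looks "exponential" early on, peaks near the
# midpoint, then shrinks toward zero at the top of the S.
rates = [s_curve(t + 1) - s_curve(t) for t in range(-10, 10)]
print(rates[0], max(rates), rates[-1])  # small, big, small again
```

Whether current compute sits at the base or the top of such a curve is exactly what's being argued in the thread; the curve itself is agnostic.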

>> No.15079912
File: 104 KB, 309x285, kek3.png [View same] [iqdb] [saucenao] [google]
15079912

>>15079904
>>15079903
>Point out the flaw maidfag
It's self-evident that your claims are not only wrong but downright retarded. We haven't reached peak anything.

>> No.15079916

>>15079912
Lol wrong. All evidence is on my side and I have linked to several sources.
Here's one directly specific to this:
https://www.researchgate.net/publication/224127040_Limitation_of_Silicon_Based_Computation_and_Future_Prospects

>> No.15079920

>>15079912
>We haven't reached peak anything.
Silicon chips can barely break 5Ghz, and that's at egg-frying temps, even at sub-10nm. Most new chip design improvements are in efficiency and packaging, but chips aren't getting profoundly faster like they were 20 years ago.

>> No.15079921

>>15078877
>or in other words, the trains will cross each other at
That only hides the fact that the answer relies on real-world assumptions the initial question does not contain. Midwits will fall for it because it eats their capacity, maybe unintentionally, but it's a typical academic maneuver btw.
An AI (like an autist) will not see a train but the symbol called "train," which starts at A or elsewhere. Nothing else about the symbol "a train" is given, so constant velocity needs to be stated as info. Further, the term "starts" must be replaced with "travels" (because there's no acceleration info).
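The OP's exact numbers aren't quoted in-thread, so the distance and speeds below are placeholders; the sketch just shows the constant-velocity model the post is describing, where the only physics you're allowed is closing speed:

```python
# Constant-velocity model of the two-trains problem. The 300 km, 60 km/h and
# 90 km/h figures are made-up placeholders, not the OP's actual numbers.
def meeting_point(distance_km: float, v_a: float, v_b: float):
    """Trains leave A and B toward each other at constant speeds.
    Returns (hours until they meet, km from A)."""
    t = distance_km / (v_a + v_b)  # closing speed = sum of the two speeds
    return t, v_a * t

t, x = meeting_point(300.0, 60.0, 90.0)
print(t, x)  # 2.0 hours, 120.0 km from A
```

Note that the model only works once you grant exactly the assumptions the post lists: constant velocity and no acceleration phase.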

>> No.15079923

>>15078491
>You have to assume that both trains are equally fast
Is that the scientific rigor I keep hearing about? lol

>> No.15079927

>>15079916
So what actually happened? Are you posting here instead of working? Did you have a post-vaccination stroke and find yourself fired from your job? Why are you here all day shilling the exact same retarded theory based on an autistic data point?

>> No.15079961

>>15079904
oh the anti AI guy is a climate cuck. wow lol isn't the ethical thing for you faggots suicide?

>> No.15079969

>>15079920
>Silicon chips can barely break 5Ghz
are you okay man?

>> No.15079984

>>15079927
I'm just hanging out rn
I have stuck to the one argument because it's right, and none of you have been able to convince me otherwise. I want to understand why you guys think the way you do despite the evidence pointing the other way
>>15079961
There are at least 5 different people who don't agree with you itt.

>> No.15079990

>>15079969
Are you? Are you incapable of understanding context?

>> No.15080016

>>15079990
did you just wake up from a coma? Check out the current cpus on the market. I mean fuck
https://hwbot.org/benchmark/cpu_frequency/halloffame
current record is 9Ghz lmfao. this is why I think you're jewish. too much pilpul in you to even converse in a normal manner. everything is just disingenuous kike shit

>> No.15080024

>>15079984
>There are at least 5 different people who don't agree with you itt.
what did you undergo a half dozen troonings to fill out those numbers?

>> No.15080043

>>15080016
The Intel i9 is 5.8GHz; where are you getting 9 from? Also, how does this change the point?

>> No.15080052

>>15080043
like I said very jewish

>> No.15080066

>>15080052
Intel sells its i9 as a 5.8GHz chip; you posting nerds' overclocking records as the standard is pilpul. You are projecting again. And you STILL haven't addressed the main point, which is the apparently needed exponential increase that isn't going to happen anymore with current paradigms.

Unless computers completely change in a way that doesn't exist at all right now, there is no reason to think that we're on the brink of an AI revolution

>> No.15080075

>>15080066
>Silicon chips can barely break 5Ghz
>Intel sells its i9 as a 5.8GHz chip
I'll accept your concession. Next time say 6Ghz when you have a breakdown over cpu speeds. Though word is Meteor Lake will hit that, so I guess you'll need to raise it to 7Ghz lmfao

>> No.15080079

>>15080075
I concede I was wrong; let's say 10Ghz for good measure.
This brings AI how?

>> No.15080088

>>15080079
>AI
with someone like you (jew) it would become a torturous definition game full of all kinds of nonsense to even debate how tech advancements factor into all of this. as long as this (ML/AI/COMPUNIGGER) can bring sufficient value (in either time saved or quality of life enhanced) then I don't give a fuck if it has met whatever ever changing meme definition you'll make up. And since it's clear all these FAGMAN corps agree with me else they'd not be spending hundreds of billions chasing whatever this is I just don't understand all the cope posting you've been doing.

>> No.15080102

>>15080088
I was never arguing against computers improving quality of life. I have been talking about building a machine as intelligent as a human and how that would work.

>> No.15080152

>>15078491
>Going to

>> No.15080532

>>15080079
bro they're going to invent graphene or some adjacent supercomputing transistor any day now trust me two more weeks to 2000 GHz cpu's