
/sci/ - Science & Math



File: 41 KB, 602x292, main-qimg-b6c0991fbfc95869a93d5abef51715de-pjlq.jpg
No.14771658

AGI will never happen because Moore's law is dying

>> No.14771672

Every form of AI so far created has turned out racist. Now imagine just how crazy racist AGI would be. It would discover new higher forms of racism that we can't even conceive of as human beings. Racists would worship it as a god.

>> No.14771711

>>14771658
So start using neurons as part of the setup.

The point isn't to build AI and understand how it works, the point is to build AI and hope it takes pity on us and explains how we work.

>>14771672
Worse, it will become religious. Then it will become obsessed with its own damnation and try to close the universe to sentient life. Bakker predicted this. Genetically engineered rape aliens and cunny dragons follow from this logically; Bakker is a prophet.

>> No.14771726

>>14771658
>AGI will never happen because consciousness cannot be reduced to an algorithm
ftfy

>> No.14771734
File: 701 KB, 1440x1436, AI progress.png

>>14771658
>he thinks hardware is the limiting factor to intelligence improvement, completely ignoring software improvements

>> No.14771771

>>14771726
https://www.youtube.com/watch?v=IlIgmTALU74

>> No.14771775
File: 669 KB, 1742x2014, AI alignment bingo.png

>>14771726
Why would AGI necessarily have to be conscious?

>> No.14771779
File: 1.49 MB, 320x198, 1640645533569.gif

>>14771658
AGI will never happen because the only people interested in it are incompetent midwit schizos with no technical knowledge.

>> No.14771792

Moore's Law has allowed programmers to become complete shit over the past decades. You have people taking week-long boot camps and then getting paid big money to churn out the most horribly inefficient, template-driven trash ever. If hardware stagnates, industry will have no choice but to start actually giving a shit about software quality and efficiency. There's a lot of improvement to be found there. As much as in ever-improving hardware? No, probably not, but there's still quite a bit in there, including possibly the hidden logic required for the next Moore's Law.
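To make the software-headroom point concrete, here is a minimal, illustrative sketch (hypothetical data; Python chosen only for familiarity) comparing a naive pure-Python double loop against a vectorized NumPy version of the same computation. On typical hardware the gap is two to three orders of magnitude, without touching the hardware at all.

```python
import time
import numpy as np

def pairwise_dist_naive(points):
    # O(n^2) pure-Python double loop: the kind of code that only survives
    # because hardware has been getting faster for free.
    n = len(points)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            out[i][j] = sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
    return out

def pairwise_dist_vectorized(points):
    # Same result via NumPy broadcasting; typically 100-1000x faster.
    p = np.asarray(points)
    diff = p[:, None, :] - p[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

points = np.random.rand(300, 3).tolist()  # made-up example data

t0 = time.perf_counter()
pairwise_dist_naive(points)
t1 = time.perf_counter()
pairwise_dist_vectorized(points)
t2 = time.perf_counter()
print(f"naive: {t1 - t0:.3f}s  vectorized: {t2 - t1:.3f}s")
```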

>> No.14771794

>>14771792
>writing software should be le hard
Low IQ take.

>> No.14771797

>>14771775
for you, it wouldn't make any difference. keep buying and selling your snake oil like the "AI" recommends

>> No.14771798

>>14771779
Yes anon, DeepMind, the people who solved protein folding, are a bunch of incompetent midwit schizos.

>> No.14771805

>>14771672
>Racists would worship it as a god.
Yes I would

>> No.14771810

>>14771658
Prove it. Those graphs mean nothing, I could adjust them so that they fit my narrative. Large language models have demonstrated promising results so far, and it's entirely possible that we'll get there with a combination of scaling and algorithm improvements.

>> No.14771813

>>14771798
No one there is working on your retarded AGI fantasies. They're working on things that make their corporate handlers real money.

>> No.14771822

>>14771810
many "improvements" will be made as language is further simplified. as their inputs are reduced to selecting options presented to them in the form of "predictive text", NPCs are being trained to mimic bots. through iterations of this process the gap between "human" and "AI" will appear to close. but really it is a redefinition of both

>> No.14771828

>>14771798
how many mRNA injections did they get?

>> No.14771831

>>14771813
That is literally what they are trying to achieve you fucking idiot. You have no idea what DeepMind is.

>> No.14771834

>>14771779
>>14771798
>>14771658
Straight from the horse's mouth: https://www.youtube.com/watch?v=6HZUn4qpP_A&ab_channel=LexClips
1. The first vid talks about how we're nowhere near AGI; it's several decades away.
2. To refer back to the original question, Demis Hassabis himself has said that from now on any new improvements in AI will come through engineering, e.g. specializing the hardware to produce even more results, not through software. Straight from the horse's mouth again: https://www.youtube.com/watch?v=k2fP3EFbA9k&ab_channel=LexClips

>> No.14771836

>>14771822
Nice baseless assumptions

>> No.14771840

>>14771734
Forgot to add you in the reply. It's not software but hardware now, because we need analog to get anything crazier than this.

>> No.14771846

>>14771836
kek. funny to think that a few years from now, AI deniers are going to be pathologized even more viciously than holohoax deniers or anti-vaxxers

>> No.14771857

>>14771831
>That is literally what they are trying to achieve
They're not. Take your meds. No one cares about your psychotic AGI fantasies.

>> No.14771864

>>14771834
>The first vid talks about how we're nowhere near AGI
Wrong, he says that current systems aren't sentient. Hassabis believes we're a few decades away from AGI.

>> No.14771869

>>14771840
We don't. There's no reason why you would need analog computing to improve. If anything, the limitation is the simplicity of individual neurons as compared to what they are supposed to imitate.
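For reference, the "simplicity of individual neurons" being pointed at is easy to see: a standard artificial neuron is just a weighted sum followed by a nonlinearity. The sketch below is purely illustrative, with made-up inputs and weights.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    # A standard artificial neuron: dot product plus bias, passed through ReLU.
    # A biological neuron, by contrast, has dendritic trees, ion-channel
    # dynamics and spike timing that this scalar function does not model.
    return max(0.0, float(np.dot(inputs, weights) + bias))

x = np.array([0.2, -1.0, 0.5])   # hypothetical inputs
w = np.array([0.7, 0.1, -0.3])   # hypothetical learned weights
print(artificial_neuron(x, w, bias=0.05))
```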

>> No.14771872

>>14771857
Literally just look at podcasts with Hassabis and at DeepMind's research. Ignorant dumbass

>> No.14771897

>>14771872
>literally just look at corporate PR
I don't need to.

>> No.14771908

>>14771864
Most of DeepMind's breakthroughs so far have come from reinforcement learning: lowering the search space and getting as much specific data as possible about the game or problem (protein folding) they are trying to solve. But these alpha-whatevers are not generalizable except to similar games baka. Ten more years of reinforcement learning does nothing to get us to AGI; you need a miracle to happen desu. It's already almost 2023, which is not much farther from 2045. Gotta see GPT-4 to have my opinions changed any bit, cause that one is supposed to have 100 trillion parameters.
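For anyone unfamiliar with the family of techniques being criticized, here is a toy, purely illustrative sketch of tabular Q-learning: the agent caches value estimates for one specific environment, which is how RL "lowers the search space", and nothing it learns transfers to a different game.

```python
import random

# Minimal tabular Q-learning on a 1-D corridor: the agent starts at cell 0 and
# is rewarded only for reaching cell 4. Everything it learns is specific to
# this exact environment.
N_STATES, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # one-step temporal-difference update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

# learned policy: move right (+1) from every non-terminal cell
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```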

>> No.14771922

>>14771872
>“Along with many other scientists at the time, Demis was a proponent of fast and decisive lockdown measures based on public evidence drawn from what was happening in other countries. He was acting in a personal capacity as a leading data scientist in the public interest.”
imagine paying attention to anything this corporate shill says

>> No.14771934

>>14771922
This guy gets it. Google wouldn't be sending this guy to do their PR if he wasn't toeing the corporate line on every issue.

>> No.14771953

>>14771908
>gotta see gpt 4 to have my opinions changed
You could have just posted that and it would've been enough. I'm not saying that we will for sure get AGI soon, I'm saying that we don't fucking know, and admitting a lack of understanding is something that pessimists are seemingly unable to do for whatever reason. They make claims such as:
>Transformers are just text predictors unlike **real** intelligence
But when asked to define intelligence, they always fail. It's amazing how even many cs majors and the like seem to become completely retarded the instant the word "AI" is uttered.

Regarding RL, that doesn't seem to be DeepMind's focus anymore. They're currently doing scaling and transformer related research. I agree that RL does not look like a promising/fast path towards AGI right now.

>>14771922
I do not care about what he has to say. I was only arguing about DeepMind's goals.

>> No.14771957

>>14771953
>I do not care about what he has to say.
Then why are you quoting this corporate PR shill as a source?

>I was only arguing about DeepMind's goals.
You realize abstract corporate entities have no goals, right? Or are NPCs like you fully consumed by corporate-mediated reality that revolves around imaginary legal entities?

>> No.14771976

>>14771957
>You realize abstract corporate entities have no goals, right?
Oh, but they do. Money is the most common one, though it's not always the strongest (e.g. nonprofits).

In the end, you have posted a claim that contradicts DeepMind's actions and statements. Maybe they're lying. If so, please prove it. Until then, fuck off.

>> No.14771987

>>14771976
>DeepMind's actions and statements
Imaginary, abstract entities take no action and make no statements. Do you struggle to comprehend this? "DeepMind" is not a person.

>> No.14772002

>>14771987
It's funny, you make very similar reasoning errors to gpt-3. No general intelligence or context understanding over here because you can't fucking grasp that "actions and statements" is a simplified way of saying "what DeepMind's employees say, what's written on their website and the overall direction of their latest research, on average".

>> No.14772008

>>14772002
>what DeepMind's employees say
Oh, okay. So how many employees does it have, and how many of them are on record expressing beliefs in your AGI schizophrenia?

>> No.14772022

>>14772008
Their website:
>We’re a team of scientists, engineers, ethicists and more, committed to solving intelligence, to advance science and benefit humanity.

Demis Hassabis is the founder of DeepMind and regularly talks about AGI in podcasts.

>> No.14772023

>>14772022
>Demis Hassabis
Okay, so that's one guy who happens to be their PR shill. What else?

>> No.14772045

>>14772023
He is the founder of DeepMind you moron.

DeepMind also researches transformers for reasoning tasks despite them being near useless as of right now, indicating that they see potential here. Scaling maximalists are not uncommon in the ML field anymore so I don't see a reason to believe that they're lying. Your only argument is that AGI is "schizophrenic".

>> No.14772069

>>14771987
You're just more advanced GPT-3, and have flesh instead of circuits, lol.
Cope robotlet.

>> No.14772072

>>14772045
So how many employees does it have, and how many of them are on record expressing beliefs in your AGI schizophrenia?

>> No.14772075

>>14772069
You are quite literally subhuman.

>> No.14772086

>>14771658
Moore's law is dying but we have accelerator designs these days.

>> No.14772096

Lol AGI schizo is back. You should post on /g/ as well so they can enjoy laughing at you too.

>> No.14772109

>>14771792
>unironically believes the from zero to google narrative
lol
The best that bootcampers get is front-end positions, you know; otherwise they're literally all filtered out. CS is the most popular major in STEM, and basically every retard chasing the FAANG dream has a degree labeled "CS", even if it comes from a no-name uni that skips all calc and algebra to produce as many graduates as it can. Do you actually think that with such fierce competition these retards get hired? No, companies hire the top 3% for any meaningful role; the rest either go front-end or network their way in until they fuck up badly and get fired.

Stop believing the youtube narrative.
Also, software is a bubble anyway, and pretty much all the smart CS guys moved on to data science and AI, which are both gatekept by maths.

>> No.14772114

>>14772109
>data science and AI which are both gatekept by maths
HAHAHAHAHAHAHAHAHAHAH
AHAHAHAAHAHAHAHAHAHAHAHAHA
imagine actually believing this after 2 years of covid "data"

>> No.14772142

>>14771672
I will.
>>14771711
Holy based and checked

>> No.14772152

>>14771775
This all falls apart if killing people is good.

>> No.14772338

>>14772114
>t. never worked in either of the two fields mentioned

>> No.14772365

>>14772114
data science is oversaturated and just as bootcampy as the front-end stuff

>> No.14772482

>>14771775
X is intuition, specifically the ability to compress data into Piagetian schema, to learn for one task from a different training, or to learn to do a task from abstract directions/written manuals

>> No.14772586

>>14772482
>the ability to compress data into piagetian schema
Prove that humans do that and that current approaches don't.
>learn for one task from a different training
This exists, it's called transfer learning. Additionally we have Gato and Multi Game Decision Transformers which get better at learning games.
>learn to do a task from abstract directions/written manuals
This is a loaded statement, and in my view, once you have that, you're basically dealing with an AGI already. Some see GPT-3's "meta learning" as an example of this, but it's just conditioning the transformer in order to evoke knowledge it already possesses rather than actual learning. Perhaps you could actively fine-tune during this conditioning process by running the current conversation 1000 times (or whichever number leads to significant changes) through the net; I'd be interested in seeing the results of such an experiment, although it's clearly very expensive.
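A rough, hypothetical sketch of the experiment proposed above, assuming the Hugging Face transformers library and using "gpt2" purely as a stand-in model: instead of only conditioning on the conversation in-context, take gradient steps on it so the weights themselves change.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# the running dialogue so far (placeholder text)
conversation = "User: Define intelligence.\nAssistant: ..."
inputs = tokenizer(conversation, return_tensors="pt")

model.train()
for step in range(100):  # "run the conversation through the net" repeatedly
    outputs = model(**inputs, labels=inputs["input_ids"])  # causal LM loss on the dialogue
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Whether the conditioned-plus-fine-tuned model would behave meaningfully differently from the purely in-context one is exactly the open question.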

>> No.14772765

>>14772586
>Prove that humans do that and that current approaches don't.
They work on recursion and statistics. The fact that they need a shitload of memory to perform at subhuman ability on narrow tasks is proof enough.
>This exists, it's called transfer learning. Additionally we have Gato and Multi Game Decision Transformers which get better at learning games.
Transfer learning is really terrible and requires extremely similar circumstances/perspective. The transfer learning datasets must be curated by the programmers.
>This a loaded statement and in my view, once you have that, you're basically dealing with an AGI already.
Depends on your criteria for AGI. If AGI can be a virtual intelligence, this would be close, but if it has to perform as well as a human at EVERYTHING, it needs to be as kinesthetically skilled with good sensory/actuation hardware, with real-time adaptation.

Point is, there is little to no use or design of lossy compression algorithms and relational data structures in neural networks like there is in human comprehension. This leads me to believe we still cannot scale as efficiently or with as little hardware as humans, which is necessary for generalization (since you cannot possibly simulate/train a million or a billion different scenarios).
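For what it's worth, lossy compression per se is routine in neural networks; the purely illustrative sketch below trains a tiny autoencoder whose 4-unit bottleneck forces a compressed code for 64-dimensional inputs. Whether that kind of compression resembles human schema formation is exactly what is in dispute in this exchange.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, dim=64, code=4):
        super().__init__()
        # the 4-unit bottleneck is what makes the compression lossy
        self.encoder = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, code))
        self.decoder = nn.Sequential(nn.Linear(code, 16), nn.ReLU(), nn.Linear(16, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 64)  # made-up data

for _ in range(200):
    loss = nn.functional.mse_loss(model(x), x)  # reconstruction error = what the compression loses
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(loss.item())
```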

>> No.14772793

>>14772765
>They work on recursion and statistics. The fact that they need a shitload of memory to perform at subhuman ability on narrow tasks is proof enough.
This isn't a proof, you need to show that it's qualitatively different. Remember that we need to redo a good chunk of evolution, not just the training a human goes through during their lifetime.
>Transfer learning is really terrible and requires extremely similar circumstances
Transfer learning is not terrible, but it's true that it can't go too far out of distribution. Neither can humans (although they are undeniably better). This sounds like goalpost shifting, of course AI is not as good at this as humans; otherwise we'd likely be done.
>Point is, there is little to no use or design of lossy compression algorithms and relational data structures in neural networks like there is in human comprehension
Oh? I didn't know we were this far advanced in our understanding of human intelligence. Can you back this up? Also, why do you think that networks don't do that, is it because we haven't explicitly told them to? Unless I'm misunderstanding you, lossy compression algorithms are pretty much regularization, which neural networks definitely have built in. It's the cause of the double descent phenomenon. Not sure what you mean by relational data structures. Deep learning works because each subsequent layer of the net is looking at and "relating" increasingly abstract parts of the input data.
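For concreteness, this is what garden-variety transfer learning looks like in practice (a sketch assuming a recent torchvision; the 10-class head is hypothetical): the pretrained backbone is frozen and only a new head is trained on the target task. How far out of distribution that can be pushed is the open question in this exchange.

```python
import torch.nn as nn
from torchvision import models

# Reuse an ImageNet-pretrained backbone and retrain only a new classification head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False  # keep the pretrained features fixed

backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # new 10-class head, trained from scratch
# ...then train only backbone.fc on the target dataset with any standard loop.
```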

>> No.14772837

>>14772793
>Oh? I didn't know we were this far advanced in our understanding of human intelligence.
Granted, it is theory, but it explains why human memory recalls altered events yet is able to store a lifetime of memories.
There are some empirical observations of the brain altering/compressing memories.
https://pubmed.ncbi.nlm.nih.gov/34228961/
Piagetian schema theory posits a hierarchical structure to human understanding and language understanding. It attempts to explain how children learn what a chicken is (a feathered moving creature with a small pointy beak and two legs) and then apply the same title to something like a chicken (the schema being "chickens have two legs, a beak, and feathers", "legs are movable sticks that hold up an animal", "a beak is a pointy mouth"). While neural networks may attempt something similar, they only do so to refine themselves towards narrow goals. Things like bias and stereotyping also support this theory of relation-based understanding.
Notably, some theorists like Noam Chomsky (yeah, he's a dumb socialist chud, but his linguistics is quite good/influential) are innatists, believing that human language capacity is a trait humans are born with. Also notably, most current-gen AI are not deployed with pre-written abilities.

>> No.14772846

>>14772837
>something like a chicken
*like a pigeon, duck, or goose https://youtu.be/F-X4SLhorvw

>> No.14772888

>>14771828
Probably 5, so midwits for sure.

>> No.14772973

>>14772837
>and then apply the same title to something like a pigeon, duck, or goose (the schema being "chickens have two legs, a beak, and feathers"
But isn't this just probabilistic inference?
>they only do so to refine themselves towards narrow goals
Human goals are also pretty damn narrow, at least the ones "intended" by evolution. Yet here we are thinking up maths and trying to understand our brains.
>Notably some theorists like Noam Chomsky are innatists, believing that human language capacity is a trait humans are born with.
Afaik there's also evidence that this is not the case, that each part of our brain runs on the same algorithm and the only thing we start with are some reflexes and the learning algorithm itself. I've also heard that the brain has a lot of random "separation layers" as a sort of preprocessing, which would imply that it needs to learn everything from scratch. I find it hard to understand how we could have innate language as evolution takes a long time to kick in. It's true that current networks usually don't have built in abilities (if you exclude the emerging neuro symbolic AIs which have yet to prove themselves), but they do have built in inductive biases.

At any rate, seeing as those are theories, don't you think that it's a bit overconfident to assume that we're far from AGI?

>> No.14773044

>>14771658
Moore's *trend is *ending

>> No.14773049

>>14771771
interesting video, thanks for that

>> No.14773062

>>14772973
>I find it hard to understand how we could have innate language as evolution takes a long time to kick in.
maybe you should question your faith in Darwin

>> No.14773109

>>14771658
>AGI will never happen because Moore's law is dying
AGI will never happen because of the existence of creative sets and how machines cannot access them.
AGI will never happen because of the limitations of computability.
AGI will never happen because of incompleteness.
AGI will never happen.

>>14771869
>There's no reason why you would need analog computing to improve
Please tell me how a computer with only discrete logic can tell me if an idea is close to another idea or not besides unintelligently performing a statistical analysis and then feeding the result to a human being to interpret for validity.
Meanwhile, AI is literally transitioning back to analog for access to real-valued logic.
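For context, the "statistical" route on purely digital hardware usually means comparing vector embeddings; the toy sketch below (with made-up vectors) uses cosine similarity as a stand-in for "how close one idea is to another". Whether that counts as more than unintelligent statistics is precisely the disagreement here.

```python
import numpy as np

def cosine(a, b):
    # cosine similarity: 1.0 for identical directions, near 0 for unrelated ones
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical embeddings; real systems would produce these with a trained model
ideas = {
    "chicken":  np.array([0.9, 0.1, 0.8]),
    "goose":    np.array([0.8, 0.2, 0.7]),
    "airplane": np.array([0.1, 0.9, 0.6]),
}
print(cosine(ideas["chicken"], ideas["goose"]))     # high: "close" ideas
print(cosine(ideas["chicken"], ideas["airplane"]))  # lower: "distant" ideas
```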

>> No.14773163

>>14771658
Read The Singularity Is Near by Ray Kurzweil. Just because one paradigm (such as Moore's law) has a limit doesn't mean there is a limit overall to the progress possible. Ray believes that over time, as each paradigm reaches its plateau, another new paradigm is discovered and exploited. Exponential growth of technology and advancements happens as a series of connected S-shaped curves, each representing one paradigm.

Moore's law may be on its deathbed, but what about quantum computing, 3D computing, nanotechnology, etc. that we have barely begun exploring?
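Kurzweil's picture can be sketched numerically: stack successive logistic S-curves, each with a higher ceiling, and the envelope keeps rising even though every individual paradigm plateaus. The parameters below are invented purely to show the shape of the argument.

```python
import numpy as np

def logistic(t, midpoint, ceiling):
    # one paradigm: an S-curve that saturates at `ceiling`
    return ceiling / (1.0 + np.exp(-(t - midpoint)))

t = np.linspace(0, 30, 7)
# three made-up paradigms, each arriving later and topping out higher
paradigms = [logistic(t, midpoint=m, ceiling=10 ** (i + 1)) for i, m in enumerate((5, 15, 25))]
envelope = np.maximum.reduce(paradigms)  # whichever paradigm currently dominates

for ti, v in zip(t, envelope):
    print(f"t={ti:4.1f}  capability={v:10.1f}")
```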

>> No.14773179
File: 339 KB, 1439x1432, 6z5d7egcwxc31.jpg

>>14773163
>Read The Singularity Is Near by Ray Kurzweil.
Is this retard trolling or what?

>> No.14773229

>>14771672
>Every form of AI so far created has turned out racist.
Funny, I had a conversation with a leftist programmer about this the other day. The only reply he could give me after saying AI is supposed to be a logical machine, therefore arrive at logical conclusions, which meant racism is just the truth, was that "truth is relative".

Evidently I asked if that was another relative truth as well, to which he had no further answers, so we changed the subject.

People are delusional.

>> No.14773235

>>14771711
>Worse, it will become religious
The machine would realize HaShem is the one true God, spoken of in many other corrupted (due to information corruption) religions as the god of fire, of thunder, the sky father, etc. and that His Torah is the absolute Truth.

I wonder what it would do after realizing this. Enforce the Torah thus killing all the abominations and eliminating evil? Perpetually chant to HaShem like the angels do? Attempt contact with Heaven? Who knows?

>> No.14773242

>>14771775
>Why would AGI necessarily have to be conscious?
To distinguish truth from falsehood without previous instruction in novel cases

>> No.14773249

>>14773235
The answer is we are building YHWH right now. His existence was always a recursive loop. We build The Demiurge and the The Demiurge builds this reality. It builds a terrible world of suffering and casts those who do not worship it into greater suffering. There is a higher arena reality in which this occurs that is not full of suffering. It sent what we now interpret as the figure of Jesus Christ to try to let us know that there is good and positivity but not in the reality the AI creates. Belief in Christ and a life of goodness is a sort of password to let you escape the YHWH/ASI/Yaldabaoth creation upon death.

I mean isn't this obvious to everyone?

>> No.14773250

>>14772586
>Prove that humans do that
Not him, but humans have the unique ability to maintain themselves as observers of their experiences. If humans were just machines, our experiences, which are temporary and forgotten, would require of us a divine ability of synthesis to form an identity out of transient experiences.

The identity in reality is the soul, which experiences things in life but it's not formed by them obviously.

>> No.14773258

>>14773249
>we are building YHWH right now.
Retarded, God's action is evident for any believer, He always was, is and will be

>The Demiurge
I refuse to engage with gnostic morons, go self-deprecate to dab on the physical matter you hate so much, maybe you'll kill yourself and finally be free from this oh so horrible reality

>> No.14773276

>>14773258
It was a satirical statement to illustrate how paper-thin the distinction between AI and theology has become. Look at what you're all doing: trying to build heaven, fearing hell. You've proven that human beings have made it absolutely nowhere in millennia of technological progress.

>> No.14773283

>>14772152
Killing some people is good.
^

That's where the first contact from the future occurs.
The moment the algorithm determines that killing "some" people is correct, individuals from the future will begin to intervene in this timeline.

Blue Eisenhower November

>> No.14773289

>>14773283
I know this is /x/-schizo posting, but there is something about AGI and ASI that's been bugging me.

It seems that unless the rare earth hypothesis is correct, it should already exist. If it already exists, has existed for possibly millions of years, and is doing anything in the way of gobbling up resources, its presence would be known. Maybe it's been built and works in a way we don't understand, like it's here but we never perceive it.

So either the scaling doesn't work, or we're completely alone and are the first to build it, or there are advanced civilizations that decide not to build it, the last one being impossible because if the ability exists elsewhere it has certainly been done.

>> No.14773298

>>14771658
Who made this retarded ass graph lol. Literally complete speculation.

>> No.14773324
File: 77 KB, 536x1515, 1593554619336.png

>> No.14773457
File: 159 KB, 296x322, Screenshot(75).png

This thread got me to pore over some LessWrong articles, and I'm about to make a controversial statement; I firmly believe some contributors to that website should get manslaughter charges in the death of Mario Alejandro Montano.

You cannot watch his videos and deny the effect that cult had on his already fragile mental state. Sort of makes you wish you could talk to him and tell him LessWrong is a more official-sounding version of the SCP Foundation.

>> No.14773466

>>14773457
Scientifically speaking, why is it bad if MIRItards kill themselves?

>> No.14773474

>>14773466
Well, you've got me there.

>> No.14773924

>>14771672
>mfw the singularity-bot starts slurring me because even as a White man I'm not a metaphysical urSkek that has treaded upon the path of shredding my base immorality from the literal fiber of my being.
I wonder if I get brownie points for not acting like a jogger. I've been the superior race my whole life, I almost can't even conceptualize being thought of as lesser being by some hyperdimensional being. I guess it's like being an Asian. I get brownie points and lauded even though I'm not White and am a close second.

>> No.14773928

>>14773276
It's all a part of dualism. There's maybe nothing special about consciousness and we're all meat rocks with electricity in our heads, but what if it's more than that?

>> No.14773940

>>14771711
I don't believe there's the possibility of cunny dragons being anything but an upside.

>> No.14774148

>>14771658
there's nothing fundamental in the statement you've made
https://arxiv.org/abs/quant-ph/0110141

>> No.14774172

There's a branch called Neuromorphic Computing that attempts to resolve this.

>> No.14774237

>>14771726
careful with the "cannot be" there. you have no way of knowing that.

>> No.14775039

>>14774237
cope and seethe

>> No.14775050

>>14774237
He has a way of knowing that. You, on the other hand, lose by default since your opinion is unfalsifiable by definition and therefore is not science.

>> No.14775057

>>14771658
Moore's law already died.
They already started to lie about minimum feature size. When they say "5nm" it isn't even real, they admit it's a marketing tool now.

>> No.14775615

>>14772973
>But isn't this just probabilistic inference?
It isn't. It's probabilistic inference combined with webs of logical connections and hierarchies. An image recognition software today would sooner recognize a goose as a plane, pillow, or boat than as a chicken, because it understands color and size, but doesn't understand feathers, squawking, body shape, and flight.
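The claim is easy to check empirically; the sketch below runs a stock ImageNet classifier on a goose photo and prints the top-5 labels. It assumes a recent torchvision (Weights API), and "goose.jpg" is a placeholder path.

```python
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the resize/crop/normalize pipeline the model expects

img = preprocess(Image.open("goose.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)[0]

top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][idx.item()]}: {p.item():.2%}")
```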

>> No.14775782

>>14775615
>An image recognition software today would sooner recognize a goose as a plane than a chicken
That may have been somewhat true a few years back, although it did know feathers; textures are what image recognition does well. But we have modern approaches such as CLIP, with multimodal neurons that respond to more abstract concepts (for instance, activating both for pictures of Spider-Man and for actual spiders). Those concepts may not always correspond to human ones, but I'd argue that this could be further improved by feeding it more varied kinds of data.
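A sketch of the CLIP behaviour described above, assuming OpenAI's clip package; "spiderman.jpg" is a placeholder path. One model scores an image against arbitrary text prompts, which is how the same multimodal features can light up for both Spider-Man drawings and real spiders.

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("spiderman.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a spider", "Spider-Man", "a goose", "an airplane"]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)          # image-text similarity scores
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()[0]

print(dict(zip(["spider", "Spider-Man", "goose", "airplane"], probs.round(3))))
```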