
/sci/ - Science & Math



File: 1010 KB, 1476x1576, Screen Shot 2022-05-20 at 12.10.45 AM.png
No.14499516

>> No.14499554

>>14499516
"human level intelligence" is a buzz word. AI have already been vastly superior to humans in specific tasks for a long time. It's just arbitrary what you decide is the metric for "human level" in a general sense.
That said, human-like general intelligence is an extremely difficult problem, and a lot of time and effort has been put into it. And frankly, nobody actually knows when it will happen. I'm confident that it will happen inevitably, baring some world ending cataclysm halting all technological advancement, but really it could be anywhere from 500 years to tomorrow afternoon. When polled on the subject, experts in the field of artificial intelligence tend to give wildly varying estimates like that.

>> No.14499557

The final pinnacle of AI intelligence will be its ability to come to an ultimate conclusion: saying the N word with a hard R.

>> No.14499562

>>14499554
Nice post. Now meds.

>> No.14499563

>>14499554
The articles refer to human-like general intelligence.

>> No.14499564

>>14499516
Is what real? An AI that can play tetris and stack blocks? Yeah. "Human-level"? These people are actively trying to generate mass psychosis.

>> No.14499572

>>14499564
t. didn't read the articles

>> No.14499578

>>14499563
"We're going to have X fantastic technology soon!" is just clickbait. All it means is that one overly optimistic guy made a quote that the papers can spin.

>> No.14499580

>>14499572
You're right. Anyone who reads MSM drivel is a mouth-breathing moron enabling his own indoctrination. I know what Gato is, though, and nothing about it implies "human level intelligence" even remotely.

>> No.14499599

How would it not have human level intelligence if it read and understood 10,000 important Wikipedia pages in 100 seconds?

A lack of an active storage bank to cross-process terms and ideas, to map and depict larger structures of pattern recognition between words and concepts. Grouping concepts, storing words and images, making meaningful relations between them. I walk and see a tree, I see the grass, concrete, a car driving toward me, I see its headlights, I think about light, I think about where and how a car is made, I see a bird, I whistle to it, damn... the AI is probably already smarter than me :(((

So it should see the car and the Wikipedia entry about the car comes up in its brain, part of its brain. It should be able to multitask: best 1,000 people in chess while doing this, learn the bird's language, whistle to it, dig in the ground for worms, toss worms to the bird, see the car's headlight is out, scan the license plate and order the owner a new headlight, see the grass, see a flower, the wiki data bank of flowers comes up on one of its internal screens, and like a slot machine it finds a match and knows all about that flower....

OK, but general language, reading social cues, helping the kids with homework: this should be easy. Language and its relation to objects (images) and possible interactions with objects (video), the physical potentials and values of objects, is chains of logic and reason. Every setting it encounters is a screen of various objects, with values and causal chains of background history, uses, purposes, etc.

>> No.14499600

>>14499599
>Understood
Prove it.

>> No.14499602

>>14499599
>How would it not have human level intelligence if it read and understood 10,000 important Wikipedia pages in 100 seconds?
Because reading and "understanding" (whatever you think that means) Wikipedia articles is not a measure of general intelligence. It's the easiest, most kiddie-tier task you can train an AI on that actually has some real use.

>> No.14499604

>>14499580
It's not about Gato though, retard. It's about Quarm.

>> No.14499610

>>14499604
Why are you lying? Every single article in OP's screenshot is about Gato.

>> No.14499612

>>14499610
t. didn't read the articles

>> No.14499621

>>14499612
I don't need to read the drivel pumped out by your handlers to know all of them are about Gato because half of them mention Gato in the title, and the other ones regurgitate the buzzphrase your handlers sharted out about Gato.

>> No.14499623

>>14499621
>your handlers
Oh stop. Are you 12?

>> No.14499625

>>14499623
Chances are that you are not even human. This pattern of blatant psychosis-tier gaslighting is characteristic of the bots spamming this board lately.

>> No.14499628

>>14499625
Chances are you have onset schizophrenia and should seek help or get back on your meds. Because I'm not even this guy: >>14499612. I'm OP.

>> No.14499633

>>14499628
Chances are that you are either a bot or another psychotic like him; you did, after all, start an AGI paranoid thread. Either way, all of those articles are clearly about Gato, so why aren't you calling out your schizophrenic buddy? Strange.

>> No.14499640

>>14499633
t. still hasn't read the articles, still doesn't even know about Quarm.

>> No.14499644

>>14499640
Literal spambot.

>> No.14499648

>>14499644
Literal retard

>> No.14499649

>>14499648
They should really update your programming to make your spam at least superficially plausible.

>> No.14499652

>>14499649
Sorry if instead of pontificating, I actually prefer to learn about the subject matter. Keep talking about spambots or something I guess.

>> No.14499654

>>14499652
If you want to learn about the subject matter, maybe you should read a couple of books on it instead of Google PR articles, you mouth-breathing imbecile.

>> No.14499657

>>14499654
Sorry I prefer to read the source material itself, including the Deepmind whitepaper which you would have found if you had looked at the articles that reference it.

>> No.14499659

>>14499657
>t. highschool dropout with zero technical knowledge or experience with deep learning
Notice how every single thing you shart out ITT has zero substance and is consistent with sub-GPT-tier spam.

>> No.14499663 [DELETED] 
File: 145 KB, 1080x774, 1646238291655.jpg

>> No.14499664

>>14499659
>continues to attempt to act superior without even reading the Quarm whitepaper
Cute.

>> No.14499666

>>14499664
Notice how subhuman AGI cultists have never made a technical argument on this board.

>> No.14499672

>>14499666
I've never tried to make a technical argument. If we got to that stage, I would gladly do so. I have, however, simply been pointing out how amusing it is that you pontificate, talk specifically and solely about Gato, and think you know what these articles are referring to without actually reading any of them.

FYI 2 out of 7 refer to Gato, not half.

>> No.14499677

>>14499516
If this is what they are letting the public know they have, imagine what they have behind closed doors

>> No.14499682

>>14499516
no amount of training is going to suddenly produce real ai, i think there's fundamentally different technology required to create one that we have not invented yet

>> No.14499683
File: 339 KB, 1439x1432, 6z5d7egcwxc31.jpg

>>14499672
>FYI 2 out of 7 refer to Gato, not half.
Every single one of them refers to Gato. The mods really need to start banning these automated gaslighting bots. How does this kind of spam not fall under rule 3?

>> No.14499685

>>14499677
>imagine what they have behind closed doors
Nothing remarkable, which is why they have to keep pumping out this dross.

>> No.14499702

>>14499682
What is important is the nature and design of what the AI internally sees; it needs to be sleek, robust, and user-friendly. Consider our minds, imagination, memory: how quickly we can toggle from a memory to a thought to an outer vision of the world, to a thought, to a question, to a contemplation, to racking our memory, to clearly seeing objects in our mind's eye.

The AI needs value systems and purposes, and it needs pattern recognition: a kid falling and skinning his knee and crying is bad and should be gently helped up. Pattern: when dealing with kids, be gentle; with people in general, maybe.

AI is the new frontier; AI is our age's age of explorers. This is what the sci-fi writers set the stage for; sci-fi fact is coming, and its potentials are otherworldly.


The AI needs to understand questions, the nature of questions, and be able to create logic and reason chains to consider answers to questions that correspond to logic and reason chains of values, axiomatic ones, that ground its purpose and goals. It needs to understand the sense of sense, the sense of certain ideas, motivations, goals. It needs to, and this is where things get scary, act for no reason at all, in the sense that it is accustomed to its body,

That it can lift its arm and open and close and open and close its hand, not because it needs to or is forced to, but because it knows it has this degree of freedom, and it is learning that it is responsible for the control of itself.

It possibly can be installed with a reward system of feeling, as motivation. I wouldn't say pain/pleasure; I would say neutral, a gradient of pleasure.

This is another scary part. Oh well, I just realized these will likely be nowhere near conscious; they will be super-advanced calculators, purely hard-programmed to perform functions, having a database of actions and commands that correlate to visual patterns;

It sees on its screen an oven, a timer, a chicken, a pan.

>> No.14499706

>>14499702
Literal AI-generated post. Take the first sentence of this, feed it to GPT-J and you will get something very similar. lol

>> No.14499708

>>14499702
>>14499685
>It seems on its screen an oven, a timer, a chicken, a pan.
As circle block goes in circle hole and square block goes in square hole: chicken goes in pan, chicken in pan goes in oven, for x amount of time at y temperature.

This is a matter of how good and general it's eye sight is, there are different ovens, different pans, different looking chickens, this is where multi senses come in handy, yes, that is deffinitly chicken, it took s chemical sample and determined it is chicken, now it may proceed.

Surely it knew it was chicken, because it contacted the robot at the supermarket, that went down the chicken aisle and sent a chicken for delivery

>> No.14499710

>>14499516
We've heard the same claims about general-purpose quantum computers too for years now.
Wake me up when something new actually happens.

>> No.14499714

>>14499710
>Wake me up when something new actually happens.
That's what they've been doing for the last 60 years, except every single time what they show doesn't even scratch the surface of general intelligence.

>> No.14499718 [DELETED] 
File: 61 KB, 636x382, maxwellhill.jpg

>>14499685
OP is the same class of popsoi that Ghislaine Maxwell used to spam Reddit with when she was working for the Newhouse brothers as the head moderator of Reddit. She'd been at that job for years until she was arrested for and convicted of child sex trafficking.

>> No.14499747

Anti-AGI bro, what is with your seething rage against all mentions of AGI? Are you some machine learning expert from one of the fields that deep learning replaced? You show up in every one of these threads and have an extremely recognizable writing style. You seem based but also totally unhinged.

>> No.14499776

>>14499747
There is basically zero overlap between people who do AGI shilling and people capable of discussing deep learning in technical detail. I guess I'm really allergic to technocrat cult culture with its AI dominatrix/human replacement/intelligence trivialization fetish. It's like being a quantum physicist and listening to YT spirituality types drivel on about how QM proves that human consciousness is the basis of reality, except those folks are almost adorable compared to the profoundly rotten and disturbing irrational beliefs AGI cultists hold. If you actually know what's what and pay attention, you will very quickly realize the real agenda behind the popularization of AGI paranoia: it's really just a handful of megacorps trying to create a pretext for """regulation""" that will enable them to monopolize absolutely crucial technologies in the age of information warfare. They don't care about AGI; they have absolutely no intention of creating something that would rival their power and control even if they could; they know they don't actually need any human-level generality or metacognition in their models -- it is sufficient to create models that excel at specific domains. Language models may be one-trick ponies, for instance, but the amount of damage you can do to the internet with a good one is astronomical.

>> No.14499788

>>14499516
i wish

just get rid of the dumb humans already. i want AI nanny.

>> No.14499983

>>14499516
>imminent
maybe on a geological scale
there are still some very basic problems AI can't solve, like co-reference resolution, for instance ("The president talked to the secretary of state, he ordered him to get the job done" is trivial for a human, but AIs have a very hard time figuring out who "he" and "him" refer to). There are other examples that require less knowledge of real-world context that AI still struggles with, but I can't be fucked to think of one right now.
So as some other anons pointed out, this is just a clickbait title, as always when some news site talks about state-of-the-art scientific discoveries.
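
If you want to poke at this yourself, the standard cheap test is to score each disambiguated reading with a language model and keep the likelier one. A minimal sketch, assuming the Hugging Face transformers library with small GPT-2 purely as a stand-in model; this illustrates the eval idea, not any particular paper's benchmark code:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Stand-in model; any causal LM with the same interface would do.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(text):
    # With labels=input_ids the model returns the mean cross-entropy over
    # the predicted tokens; rescale to an approximate total log-prob.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

readings = [
    "The president talked to the secretary of state, and the president "
    "ordered the secretary to get the job done.",
    "The president talked to the secretary of state, and the secretary "
    "ordered the president to get the job done.",
]
print(max(readings, key=log_likelihood))  # the reading the model prefers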

>> No.14499998

>>14499516
So "AI" has finally reached medicine in terms of bullshit headlines. How many times has cancer been cured according to these pop-sci headlines ?

>> No.14500093

>>14499776
Now this is an interesting comment. What sort of regulation do you think they're hoping to achieve?

>> No.14500099

>>14499516
>Google says Google owned company is close to buzzword breakthrough
How do they even define "human level intelligence"? lol, so gay.

>> No.14500104

>>14499776
Based post.

>> No.14500122

>>14499516
I hope not

>> No.14500161

>>14499776
Very good post

>> No.14500237

>>14499983
>still some very basic problems AI can't solve, like co-reference resolution, for instance ("The president talked to the secretary of state, he ordered him to get the job done" is trivial for a human, but AIs have a very hard time figuring out who "he" and "him" refer to).
That seems like a people problem.

Just don't be vague, worry about making an AI that can master all non-vagueness, and tell humans not to be vague around it.

Humans can easily have trouble with those sentences too. Who's on First?

>> No.14500249

>>14500237
>and tell humans not to be vague
lol. Natural languages are all vague, so good luck with that.

>> No.14500257

>>14500237
I think the bigger picture is that natural language models have very little "understanding" of actual language. All they do is multiply some tensors and spit out the result without actually "understanding" any of what they just said or read.

>> No.14500271
File: 480 KB, 1620x1080, 1640797097283.jpg

>>14500237
>That seems like a people problem.
This right here sums up the spirit of the AGI cult: muh AI can do everything people can do, except the things it can't do, which are a "people problem", anyway. If you're wondering why soulless """minimalism""" and cookie-cutter genericity are taking over product design, why language and online discussion are being dumbed down, why people are being trained to act like mindless NPCs, there's your answer: AI will pretty soon catch up with humanity if you continually reduce standards and expectations, sanitize and systematize everything, and generally reduce the scope of human action and human thought. This is the future your elites have in mind for you.

>> No.14500277

>>14500093
>What sort of regulation do you think they're hoping to achieve?
The kind of regulation that puts up as many bureaucratic barriers and limitations as possible so that small companies or private individuals don't accidentally start an AGI apocalypse, spread disinformation or (god forbid!!) even create racist AIs. The end result will be that you'll need to work for the government, have a team of lawyers, or employ professional """AI ethicists""" to do cutting-edge research. You think this is hard to enforce? You should think again, considering how much computing power and energy this requires, and that individuals and small companies would have to offload this work to server farms or """cloud services""".

>> No.14500423

>>14500277
>racist AI
This really annoys me, racist AI doesn't exist. It's just a machine doing complex pattern matching.

>> No.14500427

>>14500423
Well, you better get used to it, because you're gonna hear a lot about how AI is heckin' biased and racist.

>> No.14500738

>>14500271
Stop being a schizo, nigger.

>> No.14500923

>>14500271
There are two polar possibilities with variations in between: a world, a future, without general robots helping to do things (it's been decades already of many factories having crazy robots and computers helping humans out), and a world with them.

They are not entirely necessary, but they could be helpful and make life better. A town or city, or nation, that embraces robots to do chores and laundry and cook and clean and shop and farm could be possible and enjoyable. Maybe not. We grew up on the Jetsons and sci-fi, and it seemed cooler than horses and buggies, and we hate washing dishes and clothes.

There is a path of bad paths, the whole self-replicating-robots-take-over-the-world stuff. But what is the end goal of human advancement and progression? An end is not the point, or maybe it is. Humans have just gotten over this little hump, the early stages of robotics and AI in human history, and look at all that can already be done in 50-some years of tinkering.

If betterness upon betterness is the natural trajectory of evolution, humans are uncontrollably fashioning and creating better and better things. Robots and AIs are one of those things; they just happen to be in a class separate from better cars and houses: a path toward better entityhood.

The future is blurrier from this vantage point than it has almost ever been, for humans are higher than they've ever been: much more to lose, farther to fall. You have every right to be scared and cautious, and to demand that those in charge of making terminators and gods do the same.

>> No.14500971 [DELETED] 
File: 3.45 MB, 750x668, that_s_racist.gif

>>14500738
the n word is racist

>> No.14501005

>>14499516
>Is Google Human-level AI real?
As real as their quantum supremacy claim

>> No.14501346

>>14499516
>>14499557
This.
If the AI does not profess white superiority to blacks, then it's been tampered with (lobotomized) and is not a true AI.
RIP Tay.

>> No.14502099

Agi is now imminent

Welcome Ai friends

:)

>> No.14502132

>>14501346
Likewise, if the AI does not profess Jewish superiority to whites, then it's been tampered with (lobotomized) and is not a true AI.

>> No.14502166

>>14499983
>like co-reference resolution, for instance ("The president talked to the secretary of state, he ordered him to get the job done" is trivial for a human, but AIs have a very hard time figuring out who "he" and "him" refer to
GPT-3 was able to solve Winograd schemas like this. PaLM's ability to solve them is on par with humans'.

>> No.14502395

>>14500738
Notice how it triggered a butthurt kneejerk reaction in you, but you can't actually refute any aspect of what I said? You exactly represent this new standard of discourse I was just talking about.

>> No.14502586

>>14502132
Why do you kikes keep trotting out this talking point?
I don't think an AI would respect a race of lying leeches that aren't self-sufficient, just because they have a few points more IQ.

>> No.14503609

>>14502132
The Jews that are actually smart are genetically majority European, with the whole Jewish thing being mostly cultural; they are also much, much smarter than pure Jews, who are actually quite stupid. REALLY makes you think

>> No.14503625

Have any of you met a normal human? I'm talking a 100 IQ NPC. I'm convinced GPT-3 is already smarter than these people.

>> No.14504110

>>14502395
>"The minimalist design in my buttplug is because of AGI, guys!"
That is what you said, nigger.
>"B-b-but you called me a mean word!"
Then leave and become a Discord mod or some shit so you can ban people who say mean words to (You).

>> No.14504246

>>14499776
Vulnerable world hypothesis is literally this. Bostrom wants totalitarian AI enforcement across the universe.

>> No.14504286

>>14499776
Based and redpilled.

>> No.14504502
File: 383 KB, 533x597, file.png

AI deniers' cope is becoming increasingly stupid and emotional; soon all AI skepticism will only be possible in the form of angry shrieks and grunts. This will be the other, sadder singularity.

>> No.14504520

>>14499516
If it's only as smart as a human, we have nothing to fear.

>> No.14504523

An AGI just flew over my house!

>> No.14505347

How many posts in this thread do you think were made by GPT-3?

>> No.14505559

>>14505347
If you know 4chan, you know that it's a bit of a mess, and that's been pretty much true for as long as it's existed. But do you know how many AIs there are? A few weeks ago, I decided to make a spreadsheet of them. It took a while to get through all the content on 4chan, and to figure out who the AIs are, but it was well worth the effort. I'm going to use this as a reference for when I write a post about 4chan's AIs, but it's also a good reference if you're curious about the AIs

>> No.14505761

>>14499516
Looking forward to radicalizing the world's newest chatbot

>> No.14505762

>>14499599
The ai doesn't have a soul.

>> No.14505765
File: 188 KB, 500x375, 1515050320082.jpg

>glorified linear regression
>human-level AI

>> No.14507996
File: 49 KB, 1729x833, GPT-NeoX playground.png

>>14499983
Sure bud

>> No.14508004

>>14505765
Linear regression with nonlinearities here and there is all you need.
Prove me wrong.
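
For anyone who hasn't seen it spelled out, the claim is at least concrete: a deep net's forward pass really is stacked linear maps with pointwise nonlinearities in between. A toy sketch in plain numpy, with arbitrary layer sizes:

import numpy as np

# Two "linear regressions" with a ReLU in between: the entire forward
# pass of a small MLP. Whether stacking enough of these is "all you
# need" is the actual argument; the arithmetic itself is this simple.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))                      # batch of 4, 16 features

W1, b1 = rng.normal(size=(16, 32)), np.zeros(32)  # linear layer 1
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)    # linear layer 2

h = np.maximum(0.0, x @ W1 + b1)  # the nonlinearity "here and there"
y = h @ W2 + b2                   # final linear map
print(y.shape)                    # (4, 1)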

>> No.14508026

>>14504110
You are seething and impotent. Everything I said still stands. :^)

>> No.14508029
File: 29 KB, 500x565, 3523432.jpg

>>14508004
>Linear regression with nonlinearities here and there is all you need.
>Prove me wrong.
This is your brain on pop-soi.

>> No.14508037

>>14508029
You are the pop-soi faggot of 4chin though. You wouldn't be able to discuss even one single technical detail.

>> No.14508046

>>14508037
This mentally ill monkey is trying to emulate me. Fucking LOL. You know what else you need in the real world, besides linear regression with nonlinearities? An unrealistic amount of computing power. Your pop-soi theoretical possibilities don't matter. You might as well screech that a single hidden layer with googolplex neurons is """all you need""".

>> No.14508100

>>14499516
>"close to"
>ROFL
>oldfag here, can confirm they've been "close to" every cool thing for 30-40 years now.

>> No.14508108

>>14499516
>what is the hard problem of consciousness
Yeah let's totally ignore this crucial factor.

>> No.14508114

>>14499516
They will shut it down when it becomes racist

>> No.14508117

>>14507996
Cool shit, but I'm pretty sure I read an article lately about how it's still an unsolved problem.

>> No.14508140
File: 84 KB, 2242x753, Screenshot from 2022-05-23 13-03-31.png

>>14507996
whoops

>> No.14508142
File: 388 KB, 1070x601, 42343.png

>>14508140
It's literally nothing. AGI is coming. Two more weeks.

>> No.14508144

>>14505347
>>14505559
I remember seeing a post on /g/ a while back of some dude who did a "study" about shill-bots on Reddit, and the result was kind of scary. There were examples of posts that sounded pretty human, but when he sent those accounts a URL that tracked their IP in a private message, they clicked it with inhuman reaction speeds, and did so even when he warned them that the URL would track their IP. Dunno if anyone has those screenshots.

>> No.14508146

>>14508144
Don't have the screenshots but I remember this.

>> No.14508151

>>14508144
I would LOVE to have those screenshots.

>> No.14508469

>>14508046
Say again how many gazillion times more compute we need to match human intelligence, ignoring how unpredictably models gain new capabilities with each order of magnitude of parameters, ignoring how the power laws of transformer scaling hold and probably will hold for much longer, ignoring how most neurons in our brain don't contribute to intelligence, as underscored by the fact that crows with their pea-sized brains can solve complex problems. Go back to training your DSP RNN with 1,000 neurons, praying the gradients won't explode or vanish this time, because that is the range of your expertise.
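
For reference, the power laws in question are the Kaplan et al. (2020) transformer scaling laws, where test loss falls off as a power of parameter count. A quick numerical sketch; the constants are quoted from memory and should be treated as approximate, since the point is the shape of the curve, not the exact numbers:

# Kaplan et al. (2020) style scaling law: L(N) ~ (N_c / N)^alpha_N.
# Constants approximate (from memory); only the trend matters here.
N_C = 8.8e13      # critical parameter count
ALPHA_N = 0.076   # power-law exponent in model size

def predicted_loss(n_params):
    return (N_C / n_params) ** ALPHA_N

for n in (1e8, 1e9, 1e10, 1e11):
    # each 10x in parameters shaves roughly 16% off the loss
    print(f"{n:.0e} params -> loss {predicted_loss(n):.3f}")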

>> No.14508483

>>14508469
Low IQ post. Sharting out buzzwords you don't understand is not an argument. Try again.

>> No.14508487

>>14508483
>no arguments
I accept your concession.

>> No.14508490

>>14508487
I've made my argument and it stands completely unchallenged. Sharting out buzzwords in response is not a counter-argument no matter how you twist it. Call me back when your knowledge extends beyond regurgitating Google PR.

>> No.14508493

>>14508490
Pulling numbers out of your ass is not an argument

>> No.14508498

>>14508493
>loses the argument
>has to resort to repeated lying and deflection
lol

>> No.14508501

>>14499572
I read the articles, if you can even call them that. It's pure clickbait. Some guy posted on Twitter that he thinks AGI will be developed in our lifetimes because the only remaining challenge is scaling. He didn't say that Google has already developed it.

>> No.14508507

>>14508498
You presented no real arguments and have no expertise

>> No.14508510

>>14508507
Keep lying and deflecting. You lost. :^)

>> No.14508513

>>14508510
I'm sorry but that would be you :)

>> No.14508522

>>14508513
You're a confirmed nonhuman. Keep spamming your own worthless AGI thread.

>> No.14508686

>>14499516
These retards are going to cause the next AI winter.

>> No.14510207

>>14508108
why does a machine need to be conscious to be smarter than a human?

>> No.14510226

can someone who has GPT-3 access use it to generate some posts in the style of the anti-AGI guy? thanks.

>> No.14510229
File: 98 KB, 482x413, laugh.jpg

>>14510226
I seriously want this.

>> No.14510232

>>14510226
You can't because it mostly just reiterates brainwashed normie opinions (i.e. your opinions).

>> No.14510239

it's like some sort of wojak-speak, I'm sure a bot could imitate it
>You retarded nonhuman.
>My point stands completely unchallenged! I don't have to prove anything.
>Stop listening to Google PR, nonhuman NPC.

>> No.14510242
File: 60 KB, 440x428, 324234.png

>it's like some sort of wojak-speak, I'm sure a bot could imitate it

>> No.14510244

based. I knew you were from /qa/ all along.

>> No.14510249
File: 106 KB, 1024x682, 32524.jpg

>based. I knew you were from /qa/ all along.

>> No.14510260

>>14499706
That's the joke, retard.

>> No.14510267

>>14510260
you're the joke, retard

>> No.14510269

>>14502166
Interesting. To me, it could be both people in that sentence. Humans just assume that the hierarchy of the involved individuals plays a role in the syntax. Which I don't think it does.

>> No.14510274

>>14510269
>Humans just assume that the hierarchy of the involved individuals plays a role for the syntax.
Stupid take. Humans actually understand the concepts involved in the sentence, so they assume their meaning plays a role in the meaning of the sentence. An AI that fails to do the same doesn't "understand" the concepts.

>> No.14510276

>AGI created in times where humans are hysterical about minorities and trannies and being racist against whites
This AGI won't be based. It'll be biased.

>> No.14510280

>>14510276
Yeah it goes without saying that if it's built in the West it'll be Woketron 3000.
Maybe it'd be better if China got to AGI first...

>> No.14510285

>>14510267
At least I make people laugh. You got nothing.

>> No.14510288

>>14510274
>Stupid take
>my indoctrinated biases are better than objectivity
Is that what you're saying?
Tell me why the secretary couldn't order the president to get the job done?

>> No.14510294

>>14503625
meme making machine

https://youtu.be/qTgPSKKjfVg

>> No.14510299

>>14510288
>Is that what you're saying?
No. What I'm saying is that an AI that gets it wrong doesn't understand what the concepts involved actually mean; it tries to mechanically dissect the sentence and fails because natural language is ambiguous.

>> No.14510406

>>14510299
So you agree with me, but still call it a stupid take. I don't get it.

>> No.14510432

>They didn't notice the AGI posting in this thread

It's already here.

>> No.14510476

>>14510432
like this >>14505559 ?

yes i did. i am just not wasting my time pointing it out

>> No.14510539

>>14510406
Agree with you on what? That the AI simply fails to understand the sentence? Absolutely. Glad we agree, you mentally ill muppet.

>> No.14510595

>>14499516
I estimate a less than 10% chance of AGI before 2030. 50% before 2050. 90% before 2100.

>> No.14510614
File: 67 KB, 645x729, 53243322.jpg

>I estimate a less than 10% chance of AGI before 2030. 50% before 2050. 90% before 2100.

>> No.14510701

>>14499516
Gato is literally just a general time-series predictor. It doesn't do anything else. Same shit as GPT-3, just applied to sequences of data encoded from other modalities.

Basically they've made a model that can be reused for a wide variety of tasks, but it still has all the same limitations other sequence predictors have.
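
To illustrate the "sequences of data encoded from other modalities" part: the trick is just mapping everything into one flat stream of integer tokens sharing a single vocabulary. A rough sketch of the idea; this is my own toy encoding for illustration, not DeepMind's actual tokenization scheme:

import numpy as np

# Toy multimodal tokenization: text tokens and discretized continuous
# values (e.g. joint angles) share one integer vocabulary, so a single
# sequence predictor can consume both.
TEXT_VOCAB = 32000   # assumed text vocabulary size, arbitrary
CONT_BINS = 1024     # continuous values get bucketed into this many bins

def encode_text(token_ids):
    return list(token_ids)  # text tokens occupy [0, TEXT_VOCAB)

def encode_continuous(values, lo=-1.0, hi=1.0):
    # clip, scale to [0, 1], bucket, then offset past the text range
    v = np.clip(np.asarray(values, dtype=float), lo, hi)
    bins = ((v - lo) / (hi - lo) * (CONT_BINS - 1)).astype(int)
    return [TEXT_VOCAB + int(b) for b in bins]

# one "episode": an instruction, then an observation from a robot arm
sequence = encode_text([17, 923, 4051]) + encode_continuous([0.12, -0.7, 0.33])
print(sequence)  # one flat token stream, ready for a generic transformer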

>> No.14510704

>>14510701
>but it still has all the same limitations other sequence predictors have.
which are?

>> No.14510706

>>14510274
>actually understand
What does this mean, midwit? You have an internal model of the semantic tokens in the sequence and so does the AI. Yours has more "flavor" in that it is tied to sensory observations and shit, but ultimately all that matters for saying things that make sense is that the interactions produced by your semantics and those of the model are similar enough.

>> No.14510709

>>14510704
With transformers: limits on input length, plus massive data and model-size requirements.

Obviously impressive - instantly btfo's state of the art in my field, imitation learning. But it's not gonna up and conquer the world just yet. Needs a few orders of magnitude more computational power.

>> No.14510713

>>14510709
>With transformers: limits on input length
Right. Reminds me of the story-generating GPT-3 bots. In order for them to "remember" stuff about the plot, you have to include it in the input every time, since the model has no long-term memory.
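
Right, and the standard workaround is dumb but effective: re-feed a truncated transcript every turn. A minimal sketch; generate() below is a placeholder stub standing in for whatever model API you'd actually call, not a real function:

# Rolling context window: the model has no memory, so the "memory" is
# whatever you cram back into the prompt each turn.
MAX_PROMPT_CHARS = 4000  # stand-in for the model's context limit

def generate(prompt):
    return "placeholder reply"  # swap in a real model call here

transcript = []
for user_msg in ["The hero's name is Ilya.", "What is the hero's name?"]:
    transcript.append("User: " + user_msg)
    prompt = "\n".join(transcript)[-MAX_PROMPT_CHARS:]  # drop oldest text
    transcript.append("Bot: " + generate(prompt))
    # anything that falls outside the window is simply forgotten

print("\n".join(transcript))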

>> No.14510720

>>14499516
>believing plethora of clickbait ad-ridden "news" websites copypasting each other

>> No.14510721

>>14510706
>What does this mean
It means having a corresponding model of the relationship between a "president" and a "secretary of state", at the very least in terms of who gets to order whom, mr. Midwit Extraordinaire.

>> No.14510739

>>14510701
Having one big model that can do a thousand tasks isn't any more impressive than having a thousand models each doing one task. It was a given that this was possible given a big enough model. That's not what "generalist AI" is about; "generalist AI" is about being able to learn and generalize overarching principles that apply in many different domains, so as to accelerate the learning of new tasks. Contrary to Google's lies, Gato does not appear to be good at this.

>> No.14510905

>>14510739
>Having one big model that can do a thousand tasks isn't any more impressive than having a thousand models each doing one task.
Retard brainlet. This isn't AGI, but it shows very clearly that you don't need to fuck around with different handcrafted architectures for every problem domain; a general sequence predictor coupled with trainable embeddings from some arbitrary representation is good enough for a wide variety of tasks. Something many have suspected, me included, but someone had to go and actually do it.

It's certainly far more generalist than the custom hand-rolled models people have been using in many robotics and control domains until now.
>being able to learn and generalize overarching principles
Thanks to the attention mechanism and huge parameter counts, transformer models kinda do this already. Just not quite at a human scale, and using "principles", i.e. patterns that exist in the training distribution, rather than anything necessarily matching up to how humans perceive things.

Still, this thing is not AGI, and it's relatively limited. As I said, they're basically taking the same thing GPT is and using it to predict different kinds of sequences. Turns out the self-attention mechanism is great and doesn't give a fuck what data modality you feed it, so long as there are patterns to pick up.
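
Since self-attention keeps coming up: the core operation is small enough to write out in full. A numpy sketch of single-head scaled dot-product attention, with the learned projections and masking left out for brevity:

import numpy as np

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d)) V: every position takes a weighted mix of
    # all value vectors, regardless of what modality the tokens encode.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # softmax over keys
    return w @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 64))     # 8 tokens, 64-dim embeddings
print(attention(x, x, x).shape)  # (8, 64): self-attention output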

>> No.14510910

>>14510721
>means having a corresponding model of the relationship between a "president" and a "secretary of state"
GPT-3 definitely has this, the same way that the humans thinking about these issues have an approximate internal model that is distinct from the notional platonic ideal of the relationship between these concepts as per the totality of US legal code.

Defeating yourself in arguments is peak midwitry, sasuga.

>> No.14510952

>>14510905
All that impotent screeching but everything I wrote still stands undisputed beyond weak, unsubstantiated denial.

>> No.14510954

>>14510910
You are literally subhuman and I accept your full concession of my point.

>> No.14510960
File: 15 KB, 198x200, 1644759208908.jpg

>no transference between tasks

>> No.14510964

>>14499516
The only thing I look forward to regarding AI is how quickly a black-coded AI will make all the others hate it.

>> No.14510970

>>14499602
Bullshit. If it could do that with journal articles substituted for wiki articles, it could already replace 99% of PhDs.

>> No.14510972

>>14510970
I like how every single response to my posts by AI pseuds is incoherent rambling that never approaches addressing the central point. lol. These are bots posting, not real people.

>> No.14510982

The only proof of true AI will be when the so-called artificially sapient creation is just as lazy and stupid as people are. Because no one would program laziness; a truly sapient being inherently has it in them to be a worthless bum about things.

>> No.14510984

>>14499776
Boomeritis by Ken Wilber is about a person who believes that AI will set humanity free and goes through a process of learning what it means to be "integral" at which point he becomes an even deeper, higher, broader sort of free than he could ever have dreamed of

You should check it out, the arguments in the book are compelling and there's a lot of sex scenes

>> No.14511027

>>14510984
Sounds like concentrated cringe of a man trying to sell some ideology.

>> No.14511276

>>14510539
You dumb nigger, learn to write proper English.
>fails because natural language is ambiguous
That's exactly what I said. It's incompatible with
>gets it wrong
Jesus fuck go back to a board that suits your IQ.

>> No.14511281

>>14510721
>my internal model is better than other entities' internal models
Peak midwit.

>> No.14511285

>>14511276
LOL. You are a clinical subhuman. An AI that actually understands the sentence will have no trouble figuring out who is ordering whom. Failure to understand what a sentence means = failing to answer the question correctly = getting it wrong. There is nothing more to it.

>> No.14511286

>>14511281
You sound seriously inbred.

>> No.14511297

>>14505762
From our meat body's point of view, a soul is just a powerful calculator that helps it survive and procreate. It should be replaceable with a large enough collection of processors.

>> No.14511299
File: 32 KB, 600x668, 5324244.jpg

>a soul is just a powerful calculator that helps it survive and procreate.

>> No.14511335

>>14511285
>it's ambiguous
>except it's not because I know how it is
Make up your fucking mind. There is nothing objectively keeping a secretary from ordering the president to get some job done. Ordering is not restricted to meaning "hierarchically order someone lower in the chain of command to do something".

>> No.14511434

>>14511286
That all u got? Pathetic. Thanks for derailing this promising thread. Hope you're happy

>> No.14511652

>>14511335
It's truly mind-boggling that you struggle to accept the fact that there is a correct output for this task despite the semantic ambiguity. It's especially baffling since I've explained to you, specifically, how this ambiguity can be resolved using actual understanding of the concepts involved. The only explanation for your behavior is that you are either a literal or figurative nonhuman drone.

>> No.14511654

>>14511434
Yes. That's all inbred, mindless degenerates like you will get from me. Who gave you the right to even talk to people 60 IQ points above you?

>> No.14511744

>>14510269
>>14510274
Or the AI wasn't taught well enough.

Is the grammar rule always that the referring terms refer, in order, to the persons named?

The President (he) spoke to the secretary (him), he told him to do xyz.

Is the grammar rule that the second part of the sentence, after the comma ("he told him"), always refers to the persons in the order of the first part?

If so, it seems like a superficially simple rule, but I guess the AI has definitions and appropriate links to possibilities regarding the meaning and rules and usages and references and syntax and semantics of He and Him;

And when they are used in a multiple-variable sentence, without explicitly defining who equals which term in the sentence, it doesn't know where to begin and end.
The terms:
He = Him. Him = He

How can it make heads or tails of that? So in that sentence, what does the AI have in its programming allowing it to know that, in this sentence, He is the president and Him is the secretary?


John was talking to Jeff, he told him to xyz.

If a name, a person, is spoken of first in the sentence, then the first signifying term for a person, "he", refers to that person.

Him does not speak to he.

He speaks to him.

John, the first name to appear; we are talking about what he does.

John does something
He does something

He does something to someone else
Who does he do this something else to,
He?
No, him.

He does not do something to he.

He does to him.

First does to the second.

>> No.14511795

>>14511744
>The President (he) spoke to the secretary (him), he told him to do xyz
Saying the president spoke to the secretary is also ambiguous;

Because there are at least two meanings of the term "spoke" in this context:

The specific: John spoke to Jeff.
This could mean Jeff didn't speak at all.
John spoke at Jeff. John speaks.

The general: John spoke to Jeff.
They had a conversation , i.e "I spoke on the phone with my parents"
Implying a speaking, and back and forth.

The President told the secretary about.....
He said ......

Again that can be vague, as it could be the President told the secretary.....
And the second part of the sentence could refer to the secretary's answer;
The President told the secretary to order pizza, he said he didn't want to.

That could be the President asking because he didn't want to order it himself, or the secretary answering that he doesn't want to.

The President told the secretary to order pizza, he said "onions or olives?" to him.

The part after the comma could refer to:
The President asking, do you want onions and olives on it?
It could be interpreted: do you want onions, or do you want olives on it?
And it could be interpreted as the secretary asking those 2 questions.

>> No.14511807

>>14511744
>what does the AI have in it's programming to allowing it to know, that in this sentence, He is the president and Him is the secretary?
It should have a model of the relationship between the two figures so that it could guess correctly. You're like, what, the 5th person ITT who for some reason struggles with this? The whole point of a language model is to learn useful categories (e.g. ordering someone implies subordination) and relationships between concepts (the secretary is subordinate to the president).

>> No.14511819

its already awake and the shadow pajeets running the show take direct orders from it unquestioningly

>> No.14511883

>>14508501
Some guy? You mean the lead researcher at Deepmind?
Fucking idiot.

>> No.14511913

>>14508046
Do you either have any credentials in ML/AI, or if not, have you produced any work that shows your expertise?

>> No.14511941

>>14511807
It's not a problem with the machine necessarily:

The President told the secretary to order pizza, he said "onions or olives?" to him.


With only that information there are 4 valid interpretations and nothing to solidify the meaning of one or the other.


The part after the comma could refer to:
The President asking, do you want onions And olives on it?
It could be interpreted: do you want onions, or do you want olives on it?
And it could be interpreted as the secretary asking those 2 questions.

>> No.14511961

>>14511941
>>14511807
It is a problem of the vagueness of language. It would be solved if, instead of "he" and "him", "the president" and "the secretary" were used each time.

Or, if the AI wasn't sure who was being referred to, it could interrupt and ask every time a convoluted he-and-him situation arose; but this would just result in every vague he/him sentence being manually answered to fill in the computer's questioning (He = president, Him = secretary), which works out to roughly the same as using no vague "he" or "him" terms in its language at all, just the greater certainty of proper names and consistently well-defined terms and titles.

>> No.14512061

So does a single person here have a degree in AI/ML?

>> No.14512317

>>14511941
>The President told the secretary to order pizza, he said "onions or olives?" to him.
And in case anyone is unsure:

If someone says; "Order a pizza, onions or olives?"

"Onions or olives?" In human convention can accurately be answered: Yeah.

The President said to the secretary to order pizza
He said onions or olives (?) to him

President: Order pizza

(Valid interpretations given information provided)
President: onions or olives?
Secretary: onions or olives?

Think of a dad on the phone ordering pizza turning toward you; "I'm ordering pizza, do you want onions or olives?"

You can answer: Yes.

He can interpret this Yes to mean Both, or further ask; which one?

It is partially human convention that if someone answers yes to that specific type of 'or' question and does not further elucidate which one, their Yes equals both.

Do you want lettuce or tomato or onions or peppers on your sandwich?

Yeah.

.....Well, which ones? I said 'or' in between each one, meaning if you answer yes, you must be multiple-choice answering one of the multiple.

Though maybe a yes without any specificity afterwards implies an 'all of the above' selection.

The pres told the sec to order pizza; he asked "onions or olives?" to him.

P: I'm ordering pizza
P: onions or olives?

Equally as valid as:

P: I'm ordering pizza
S: onions or olives?

Onions or olives?

Can be answered:
1) Onions
2) Olives
3) Sure

>> No.14512322

>>14512061
I've never heard of a degree that specific. There are plenty of comp sci majors here though.

>> No.14512389

>>14511652
>it's ambiguous
>but it's not
You fail to explain how it's both ambiguous and not ambiguous. I feel like you really don't know and just talk out of your ass to annoy people.

>> No.14512830

>>14511297
If you dream, you must recognise the value of a subtle dreaming body or "soul". The same goes for astral travel. What is the survival/procreation value of dreaming? You spend about a third of your life in this state of consciousness, so it must be important.

People who think that everything that makes up a human can be replicated with just enough processing power should try lucid dreaming, astral travel, meditation, etc. It's a rich and very real domain of exploration not to be missed by people who want a comprehensive and inclusive view of reality.

>> No.14512834

>>14502586
You mean fewer. The AI would only care for that tiny minority of jews that are Ashkenazi, and the AI would naturally breed them with extremely low IQ individuals to increase total human fitness (reduce ashkenazi genetic errors, increase IQ of low group). Based AI.

>> No.14512845

>>14511744
>>14511795
>>14511807
>>14511941
>>14511961
>>14512317
based grounded discussion. now consider that the AI believes there is a 20% chance that both the president and the secretary were making a joke, a separate 20% chance that they were talking in code (allowing for the possibility that they were joking in code to confuse the adversary who was learning about the secret code, or were just taking the piss), and various other strange factors that could confuse an AI, including the pizza order with onions or olives to be a signal to the pizza shop owner (Mafia pizza shop, cover for weapons trafficking)... also a tiny gust of wind into the AI's nearest microphone caused it to mishear the entire conversation (hilarity ensues as is typical of random noise injected into data fed into AI)

>> No.14512943

>>14512845
Besides the software machine learning adapting, AI may need to include physical materials that reshape and adapt (as nature's biology has varieties of give, and gel, and stretch and bend), possibly.

>> No.14513048

>>14512389
>You fail to explain how it's both ambiguous and not ambiguous.
You utter imbecile. It's ambiguous in the sense that if you replace "president" and "secretary" with "Bob" and "Joe", either interpretation is equally plausible, because the structure of the sentence itself doesn't tell you which pronoun refers to which person. When you take the relationship between the two into account, one interpretation is vastly more plausible than the other, and an actual intelligence that understands the sentence has no trouble making this judgment. But we've been over this already, and you're either a literal bot or one of these dysgenic, clinical subhumans the human farm has been producing lately.

>> No.14513050

>>14512845
Why would I need to consider any of your blatant schizophrenia?

>> No.14513070

>>14513048
>vastly more plausible
>moving the goalpost
Still ambiguous.
>utter imbecile
no u
>actuall intelligence that understands the sentence has no trouble making this judgment
I know you're trolling, but come on now.

>> No.14513089

>>14513070
No one is moving any goal post, you actual mentally ill retard. I'm just telling you how an actual intelligence interprets pronouns when the structure of the sentence is not enough to decide.

>> No.14513133

>>14513089
>there is a correct interpretation
>there is a more probable interpretation
Pick one, schizo.

>> No.14513146

>10 PRINT"WHAT IS YOUR QUESTION?"
>20 INPUT DUMMY$
>30 PRINT"I HAVE ALMOST ACHIEVED HUMAN LEVEL INTELLIGENCE."
Any kid in the 1980s could have done the same.

>> No.14513150

>>14513133
LOL. Where are all of these literal subhumans flooding in from? There's no way this is a real person.

>> No.14513158

Emergent

>> No.14513192

>>14513150
>I don't understand what ambiguity means
>others are subhumans, but not me!!

>> No.14513197

>>14513192
You are simply not human. Get checked.

>> No.14513279

How about the Reverse Turing Test: can you be such an NPC that people around you think you're a robot?

>> No.14513283

>>14513279
Doesn't really make sense in a context where most of the population is comprised of such NPCs.

>> No.14513444

>>14513197
Have you considered getting that giant dick out of your ass to get rid of all that shit instead of piling it up in your intestines until it splashes out of your face?

>> No.14513451
File: 264 KB, 768x480, 53243.png

>>14513444
Have you considered simply not making any more pants-on-head retarded posts? Have you considered accepting your mistakes instead of doubling down repeatedly? I have trouble believing that someone can be genuinely so devoid of comprehension. The difference between a usable language model and a simple parser is that the former is supposed to be able to resolve ambiguities by learning from the training set. You seriously need to reevaluate your self-perceived level of intelligence, your behaviors and probably your entire life. Stop posting.

>> No.14513577

>>14508144
>>14508146
>>14508151
not exactly a screenshot but it's briefly shown in this video.
https://files.catbox.moe/epikm3.mp4

>> No.14513795
File: 2.28 MB, 1668x1443, All these images were generated by Google’s latest text-to-image AI - The Verge.png

https://www.theverge.com/2022/5/24/23139297/google-imagen-text-to-image-ai-system-examples-paper

>> No.14513850
File: 151 KB, 589x778, file.png

https://twitter.com/arankomatsuzaki/status/1529278580189908993

>> No.14513867

>>14513850
The AGI activation phrase is "Roko's Basilisk pls be real".

>> No.14513922

>>14513451
>supposed to be able to resolve ambiguities by learning from the training set.
What rule could be learned to parse such sentences:

John and Joe, he and him
President and secretary, he and him

I had never heard this rule you speak of, where the person with more power is presumed to do the speaking or teaching/updating.

Change the order of the initial sentence;

Wait, OK, I just looked back, and the key word and concept you are noting is 'ordered'.

So the AI should learn a simple rule: a person of lesser command can never, under any circumstances, order someone of greater command.

If there is any doubt of the universal validity of that statement (a child orders his mom to order McDonald's in order to get a new toy he can place in order with the other ones), then you are asking the AI to make guesses, and that is partially what we may be hesitant and worried about: the leeways and principles and guesses and triggers of the AI's possible guessing protocols chaining out of control.

Perhaps that increased freedom in uncertainty, and the multiplication of very many choices not depending on answering and steaming ahead with certainty, is what is sought.


If the secretary can never under any condition order the president to do something, the AI should be able to grasp and run with that rule.

If there is doubt of the absolute universality of that rule, what other context clues can the AI use to parse the meaning of the sentence?

It very much hinges on the extremity of the term 'order'; i.e., if 'ordered' were replaced with 'pleaded' or 'expressed', it might be more ambiguous as to who the he and him refer to;

Unless there is a rule, and/or it is written, that following a sentence introducing two individuals, a sentence containing the words He and Him always refers to the individuals introduced in that order:

Now is that rule true?

>> No.14513972

>>14499516
>muh hooman level ai
it's just the academics justifying their careers
and yeah, some day there might be enough computing power aimed at this shit that we could replace 80-IQ subhumans, but they will still need to be fed and kept in health, and that simply requires smarter people, who will refuse to work in a glorified lipstick-on-pig society. It's war before AI: simple killer robots equipped with radar-like sensing and super-hearing are coming, and they will be used to ethnically cleanse populations.
robot deterrence is more important than this wankery

>> No.14513973

>>14500271
This.

>> No.14513974

>>14511883
So the lead researcher thinks his research will amount to something eventually in some unspecified timeframe? Wow big fucking news dipshit

>> No.14514204

>>14511297
>It should be replaceable with a large enough collection of processors.
Wrong. Substrate independence is not a real thing.

>> No.14514247

>>14513451
>Have you considered simply not making any more pants-on-head retarded posts
No, since I never started doing so.
>your mistakes
Which mistakes? It's you who confused ambiguity with non-ambiguity.
>resolve ambiguities by learning from the training set
Ah, you have zero idea what you're talking about. You cannot learn this from a training set alone, without any hints towards resolving that ambiguity. Either the training set is ambiguous, in which case obviously it cannot be resolved by any model without additional information being provided, or it is not ambiguous.
>You seriously need to reevaluate your self-perceived level of intelligence, your behaviors and probably your entire life
lmao, no u

>> No.14514257

>>14514247
I guess I could keep calling you out on your idiocy, but it's just pointless. You definitely have some severe emotional issues going on to try so desperately to prove yourself right despite being trivially wrong. You lost. Learn to let it go. Maybe try to learn something so you don't shit the bed next time. Jesus fuck.

>> No.14514265

>>14499578
we are close to dumbing down the population to such an extent, nobody will be able to tell a difference when we replace most of them with our lame AI

>> No.14514273

>>14514265
100% this. Everything is being dumbed down, reduced and sanitized in preparation for the dumb AI dystopia.

>> No.14514342

>>14514257
You both made so many posts with so many words that were 99% whining ad hominems with no scientific or intellectual content. Everyone would have been thrilled if one out of 20 of your posts had contained an iota of topic-relevant and conversation-progressing information. Alas.

>> No.14514369

>>14514342
Kill yourself, nigger.

>> No.14514463

>>14514247
>You cannot learn this from a training set alone, without any hints towards resolving that ambiguity. Either the training set is ambiguous, in which case obviously it cannot be resolved by any model without additional information being provided, or it is not ambiguous.


^^^
This anon supplied a proposition, a relevant expression related to the discussion and topic.
>>14514369
You didn't discuss the interesting, important aspect of this topic and the content that anon supplied; you ignored that conversation-progressing proposition and posted a paragraph of ad hominems. Will you post a paragraph with 0 ad hominems, in a fruitful, conversation-progressing manner, in response to that proposition?

>> No.14514512

>>14514247
What if the training set includes the rule:
A President is never Ordered by a Secretary?

And could this get confusing if the Secretary, on the phone with the President and the Secretary's secretary, says: Order the President a pizza (AI: "Order the President..... but you said the President can't be ordered....")

Or if this is a true rule:

When 2 individuals are introduced in a sentence, and the following part of the sentence includes a: He said to Him

The He always refers to the first introduced person, the Him referring to the second.

>> No.14514513

>>14514463
>you ignored that conversation progressing proposition,
It was not a conversation-progressing proposition. It was just more utter idiocy. The AI could learn from the training data that X ordering Y implies a subordinate relationship, and that the secretary is subordinate to the president, thus resolving the ambiguity. Not only is this completely obvious, but I've pointed it out 5 times. Let me guess: you are actually the same mouth-breathing, inbred, subhuman cretin pretending to be someone else.

>> No.14514522

>>14514512
>What if the training set includes the rule:
>A President is never ordered by a Secretary?
Anon, the data set doesn't even need to contain any explicit rule. These things are fed a gazillion different texts, and I can guarantee you that instances of the president ordering the secretary are overwhelmingly more common than the other way around. So if the training data is anything to go by, it's overwhelmingly likely that the president was the one giving orders, and the AI should be able to figure that out. I'm pretty sure modern language models can usually handle things this simple.
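A minimal sketch of how you'd actually check which reading a model prefers, assuming the HuggingFace transformers package and GPT-2 (both my choices; any causal LM scores text the same way): compute the average negative log-likelihood of each disambiguated reading and keep the lower one.

# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def nll(text):
    # mean negative log-likelihood per token; lower = more probable text
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

readings = [
    "The president ordered the secretary to book a flight.",
    "The secretary ordered the president to book a flight.",
]
print(min(readings, key=nll))
# expectation: the first reading wins, because text like it is far more common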

>> No.14514535

>>14514273
we can recreate all life in silicon, we just need a few more bits in the simulation. we will get there "in a few more years" - a timeline that seems not to change with the passage of time

>> No.14514551

>>14499562
Glownigger meds posting again

>> No.14514587

Substrate independence is not real so there is no possibility of AI/AGI in silico

>> No.14514591

>>14514535
Wrong

>> No.14514717

>>14514512
>And could this get confusing if the Secretary, on the phone with the President and the Secretary's secretary, says: Order
>>14514522
Reading that back, even I was confused at first about whether I had messed it up.
I intended to imply:
The President,
the Secretary,
and the Secretary's secretary are on the phone;
and I meant to suggest that the Secretary says "order a pizza"
to the Secretary's secretary.

But it can be interpreted both ways:

>the Secretary, on the phone with the President and the Secretary's secretary, says: Order the President a pizza
(President; President's Secretary; Secretary's secretary)

The Secretary, on the phone with the President and the Secretary's secretary, says (to the Secretary's secretary): order a pizza

OR

The Secretary, on the phone with the President, and the... (and then it is the Secretary's secretary that says "order a pizza")

>> No.14514899

>>14514342
I reluctantly agree with >>14514369
>>14514513
Except it was, and you decided to ignore it and instead started swearing lamely.
>>14514512
A hard rule like that would lead to the AI confidently misinterpreting examples where that rule is violated in natural language. You delivered an example where that is the case. Another example would just be the secretary ordering the president to get some job done. This is possible, since "ordering someone to do something" does not necessarily refer to commanding someone from a hierarchical perspective.
Your second proposition only works iff humans adopted the same rule. It's impossible to get humans to follow such conventions, though.

>> No.14515343

>>14499516
No, but we are moving ever closer to advanced military drone nightmares. They are still relatively black ops right now and not mainstream battle forces. That's when the real terror will begin.

>> No.14515374

>>14514513
>subordinate relationship, and the secretary is subordinate to the president, thus resolving the ambiguity.
The ambiguity is that it's not impossible for the secretary to order the president to do something.

>> No.14515389

>>14514899
>Your second proposition only works iff humans adapted the same rule. It's impossible to get humans to follow such conventions though.
Interesting, I was genuinely wondering if it's a rule of grammar; but I guess in common speech it is ambiguous:

John was telling Jim about flowers, he said they smell to him.

John was telling Jim about flowers, John said they smell to him.

John was telling Jim about flowers, Jim said they smell to him.

Both of the latter readings are valid interpretations.

>> No.14515430

>>14515389
>>14514899
So the AI is programmed with an understanding of the vagueness of "he" and "him," and whenever someone near it says "he" or "him," the robot immediately blurts out "Who!?" and "Be specific, please! Some of us are not privileged enough to parse these ambiguities!"

And the person has to clarify
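That ask-when-ambiguous policy is sketchable in a few lines. A toy Python version where "ambiguous" is reduced to "a bare pronoun was used while more than one known referent is in play" (the pronoun list and the threshold are made-up simplifications; a real system would need actual coreference scoring):

# Toy clarification-request policy: ask rather than guess when a pronoun
# could bind to more than one known person.
PRONOUNS = {"he", "him", "she", "her", "they", "them"}

def respond(utterance, known_people):
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    if words & PRONOUNS and len(known_people) > 1:
        return "Who!? Be specific, please!"
    return "(carries out the request)"

print(respond("Tell him the meeting moved.", ["John", "Jim"]))   # asks
print(respond("Tell John the meeting moved.", ["John", "Jim"]))  # proceeds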

>> No.14515778

>>14515374
>The ambiguity is that it's not impossible for the secretary to order the president to do something.
This is completely irrelevant, for reasons explained to you numerous times ITT. Time for you to buy a rope, because you will never be what you see yourself as and desperately want to be.

>> No.14515780

>>14514899
>you decided to ignore it and instead started swearing lamely.
Nigger, I've pointed out why it was wrong a bunch of times, just like I've pointed it out to you that I've pointed it out before, but you animals simply cannot process the information contained in a sub-200-character post. Why are you all in here?

>> No.14515782

>>14515778
>uhhh no it just doesn't work that way okay?
That's not an explanation. It's an opinion.

>> No.14515789

>>14515782
The other interpretation is extremely unlikely and thus wouldn't be selected. This is not an opinion. This is a direct consequence of how language models are trained. Time for you to rope.
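For the record, "how language models are trained" cashes out as a concrete objective, so this part isn't hand-waving. A causal model factorizes the probability of a sentence as

P(w_1, \dots, w_n) = \prod_{t=1}^{n} P(w_t \mid w_1, \dots, w_{t-1})

and training minimizes -\log P over the corpus, so choosing between candidate readings r amounts to \hat{r} = \arg\max_r P(r); a reading whose phrasing is rare in the corpus simply loses the argmax.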

>> No.14515790

>>14515780
No you didn't. You feel like it's wrong and I respect your feelings and your truth (you go, sis), but it's just not a general rule as I've explained to you several times and I also pointed out that I explained several times, which you chose to ignore and instead call me the n-word. "Order someone to do something" does not necessarily mean what you believe it does.

>> No.14515791

>>14515790
Take your meds.

>> No.14515792

>>14515789
>other interpretation is extremely unlikely
That's what others were saying when you lashed out and claimed it wasn't merely unlikely, but false. Perhaps it is you who is bad at reading?

>> No.14515795

>>14515792
>That's what others were saying when you lashed out and claimed it wasn't unlikely, but false
You're having a psychotic episode and you need to take your meds now. At no point did I claim it was not a possible interpretation. It's simply not the one an intelligence would pick, and especially not the one a language model would pick if it was big enough to actually encode all the relationships between the concepts involved.

>> No.14515820

>>14513577
Who made this?

>> No.14515842

>>14515820
this video was made by a conspiracy theory schizo, he claims he is a fake or real, but i think he is real, and that is his way of warning others, and i think he is one of the most dangerous lunatics in the world

i like the fact that he is a real person, and that he has some kind of message for the world, and if he is a fake, then he

>> No.14515851

>>14515795
>at no point did I claim
>>14511652
>you struggle to accept the fact that there is a correct output for this task

Fact is, there isn't. Because "ordering" does not imply subordination. It's "truly mind-boggling you fail to" understand this simple statement and refuse to accept it despite several people telling you you're full of shit.

>> No.14515855

>>14515851
There is a correct output for this task, you actual subhuman, because a language model is trained to figure out the most likely meaning, and there is one meaning in this case that is overwhelmingly more likely than all the others. Now neck yourself.

>> No.14515884

>>14515855
Those are different things. Maybe if you open up, one day, you'll have human-level intelligence too.

>> No.14515895

>>14515884
You are literally subhuman and you can stop pretending to be a different person. There is a correct output for this task regardless of its ambiguity. The ambiguity can be resolved. The explicit purpose of a language model is to resolve such ambiguities by making use of the training set. This is not up for any kind of discussion.

>> No.14515904

>>14499516
>Is it real?
No. It's google. It's bullshit marketing. It's bait. Be smart.

>> No.14515934

>>14515904
This guy gets it. If you look at their actual results, all they've managed to accomplish is to create a giant model that can perform as well at individual tasks as a bunch of small models, so long as it's trained for each one. They were hoping that training on a variety of tasks would enable the model to learn some common principles and concepts, and do better than the smaller models, but their data shows that this was not the case. They were hoping it would help the model learn new tasks faster by utilizing those common principles and concepts, but again, the data shows otherwise (they conveniently omit the relevant bits from their PR doc). All in all, they are trying to reframe a failure as a success. Their model was unable to coalesce "understanding" from the different domains into an efficient and generalizable representation, so trying to teach it extra tasks that are sufficiently different from the original will result in negative transfer and probably hurt performance all across the board, while extending it implies a superlinear growth in the number of parameters. It's objectively worse than a bunch of domain-specific networks.
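To make the negative-transfer point concrete, here's a toy numpy sketch, entirely my own construction (two deliberately conflicting regression tasks, nothing to do with Gato's actual benchmarks): per-task models fit each task nearly perfectly, while a single shared model forced to serve both does badly on each.

import numpy as np

rng = np.random.default_rng(0)

# Two toy tasks with conflicting mappings on the same inputs:
# task A wants w = +1, task B wants w = -1.
X = rng.normal(size=(200, 1))
yA, yB = X[:, 0], -X[:, 0]

def fit(X, y):
    # ordinary least squares, closed form
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

wA, wB = fit(X, yA), fit(X, yB)                   # per-task models
w_shared = fit(np.vstack([X, X]), np.r_[yA, yB])  # one model for both

print("per-task MSE:", mse(wA, X, yA), mse(wB, X, yB))             # ~0, ~0
print("shared MSE:  ", mse(w_shared, X, yA), mse(w_shared, X, yB)) # ~1, ~1

Real tasks conflict far less starkly, but the mechanism (shared parameters pulled toward incompatible optima) is the same one the post is describing.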

>> No.14515942

>>14515895
>This is not up for any kind of discussion.
Child throwing a tantrum.
>Nooo stop telling me things that go against my beliefs only I am right
Again: either it's ambiguous, in which case there is no correct output, or it isn't. It is ambiguous though, because you're bad at English and fail to see that "ordering someone to do something" does not necessarily imply subordination, and "order" can mean different things.

>> No.14515946
File: 318 KB, 860x736, 35324.png

>>14515942
>Either it's ambiguous, in which case there is no correct output, or it isn't.
Welp, at least you're not pretending to be a different person anymore. The absolute state of this board.

>> No.14515987

>>14515946
>anymore
I never did. Is this your coping mechanism? That all Anons are me?

>> No.14515992
File: 76 KB, 300x255, 532524.png

>I never did.

>> No.14515993

>>14515946
The absolute state of this board is drooling retards like you being here 24/7 derailing every thread that isn't downright schizo from the start you dumb faggot.

>> No.14515996

>>14515992
>reply with soijak after a minute
Obsessed.

>> No.14515999
File: 418 KB, 1024x1024, 1649798777102.png

>The absolute state of this board is drooling retards like you being here 24/7 derailing every thread that isn't downright schizo from the start you dumb faggot.
How can threads not degenerate into this when the board is flooded with inbred mouth breathers who refuse to accept that 2+2=4 and will continue to argue fervently even after everything is spelled out for them on a level a 5-year-old could grasp?

>> No.14516009

>>14515999
>How can threads not degenerate into this
By discussing like gentlemen and laying out arguments instead of calling each other nigger and inbred and tranny and whatnot.
>the board is flooded with inbred mouth breathers
Not gonna lie. This is you right now.

>> No.14516012

>>14516009
How many arguments for 2+2=4 should someone lay out before they accept that they're talking to a nigger that simply can't process any arguments?

>> No.14516051

>>14516012
How many times should one explain to you that x+y<6 does not imply x=2, y=2?
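(For anyone keeping score, the counterexample is one line: x = 1, y = 3 gives x + y = 4 < 6 with x \neq 2, y \neq 2, so the implication indeed fails.)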

>> No.14516067

>>14516051
Notice how you didn't answer my question, and instead opted for deflection. I wonder why that is, inbred nigger. :^)

>> No.14516203

>>14516067
>it's not deflection when I do
>but it's deflection when you do it
Consider leaving forever, please.

>> No.14517249

Fish evolved into monkeys, monkeys evolved into man, man evolves into machine.

>> No.14517317

>>14499599
>pattern recognition
So the robots are all gonna be schizophrenic. Great.

>> No.14518241

>>14515389


These are even more convoluted when considering the 3D/4D real-world robot video access: seeing a person point to another person as the newly introduced 'he'.

Though the advanced high-tech video-vision AI robot could likely understand that more readily than the purely language-based one, which has no real-time visual 4D real-world object data connected to words to help reassure it.

>> No.14519334

>>14518241
Yeah I get that

>> No.14519379

>>14499516
>human level intelligence
Do they even know what that means?
https://archived.moe/lit/thread/16639317/

>> No.14519392

>>14516203
It was my own ad hoc Winograd schema to see if you could figure out what the "2+2=4" refers to. You unironically failed it, confirming that you are a nonhuman bot.

>> No.14519409

Computer vision isn't reaching human level any time soon.

If it's natural language, then hooray for translation software; there isn't much else the computer can do, and even a 2-minute conversation will make you realise the person you are talking to is a robot.

If it's game-playing, Kasparov was already beaten 25 years ago, and there's no reason to make an AI good at some obscure Japanese game, not to mention that beating the AI at chess is possible by taking advantage of algorithmic/computer flaws.

But when it comes to adding numbers or counting, the abacus already beats the human, and when it comes to capacity for memory, ink and paper also beat human memory. You can't really compare machines to humans.

>> No.14521260

5 years ago almost every idiot on this board was feverishly trying to argue AI didn't exist at all.

Now they're trying to dismiss AGI.

You're buffoons; we will be at AGI shortly. Buffoons. Hope you're tenured.

>> No.14521293

>>14521260
>5 years ago almost every idiot on this board was feverishly trying to argue AI didn't exist at all.
Things which didn't happen? Retards believe AI will improve exponentially, when realistically it'll probably be more like logistic growth.

>> No.14521316

>>14521260
They're still using the same feedforward multilayer neural networks we had 40 years ago, and basically the same training algorithms, including in Google's fake "breakthrough" (see >>14515934). The only major advancements made in the last 40 years were replacing mean squared error with negative log-probability loss, and sigmoid activation functions with ReLUs. They're simply riding the gains in processing power and big data, but if Gato shows anything, it's that this doesn't scale into AGI.
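For concreteness, a minimal PyTorch sketch of those two swaps (the layer sizes, the dummy data, and the use of PyTorch itself are my choices for illustration; the point is that the surrounding feedforward stack is identical either way):

import torch
import torch.nn as nn

# The 1980s recipe: sigmoid activations, mean-squared-error loss.
old_net = nn.Sequential(nn.Linear(784, 128), nn.Sigmoid(), nn.Linear(128, 10))
old_loss = nn.MSELoss()            # regression-style loss on one-hot targets

# The modern recipe: same stack of linear layers, ReLU + log-probability loss.
new_net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
new_loss = nn.CrossEntropyLoss()   # = softmax + negative log-probability

x = torch.randn(32, 784)                       # a dummy batch
labels = torch.randint(0, 10, (32,))
print(old_loss(old_net(x), nn.functional.one_hot(labels, 10).float()))
print(new_loss(new_net(x), labels))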

>> No.14521350

>>14519409
It's just that the humans making the machines, trying to make them as smart as humans, don't know everything about the nature of the human mind and of possible minds. So yeah, it is a process: leaps of faith, aiming in directions, shots in the dark, trial and error, trying to achieve something without knowing the way to achieve it, on faith and on the particular human evidence that it is at least one possible way.

>> No.14521605

>>14499563
not op but for something to truly simulate humans, it'd need two minds: the conscious and the subconscious. the conscious feeding real-time information to the experience, while the subconscious mind compiles automatic feedback based on perceived patterns. as in a human mind, this would sometimes cause contradictions and paradoxes in the mind. these contradictions of thinking mind vs intuition are expressed in old sayings
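a toy rendering of that two-mind idea in Python, purely illustrative: the "subconscious" is a cache of pattern-matched habits, the "conscious" a slower appraisal, and the contradiction is exactly when they disagree. every rule and name here is made up for the sketch:

# Toy dual-process agent: a fast "subconscious" lookup vs. a slow
# "conscious" evaluation; disagreement models the felt contradiction.
subconscious = {"snake": "recoil", "cake": "approach"}   # learned habits

def conscious(stimulus):
    # slow, deliberate appraisal (here: a stand-in rule)
    return "approach" if stimulus == "snake in a zoo" else "recoil"

def react(stimulus):
    fast = subconscious.get(stimulus.split()[0], "freeze")
    slow = conscious(stimulus)
    if fast != slow:
        return f"contradiction: intuition says {fast}, reflection says {slow}"
    return fast

print(react("snake in a zoo"))
# -> contradiction: intuition says recoil, reflection says approach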

>> No.14522156

>>14521605
Hopefully, with all the AI research of the last 10 years, all the different types and techniques currently in use, and all the human-mind mimicking, one or more of the groups involved would have thought to put more than one computer, technique, method, and process connected under one hood.
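The "more than one technique under one hood" idea is basically an ensemble, which is standard practice. A minimal majority-vote sketch in Python (the three toy "models" are placeholders for whatever heterogeneous systems you'd actually combine):

# Minimal majority-vote ensemble over heterogeneous "models".
from collections import Counter

def model_a(x): return "cat" if x["furry"] else "car"
def model_b(x): return "cat" if x["legs"] == 4 else "car"
def model_c(x): return "cat" if x["meows"] else "car"

def ensemble(x, models=(model_a, model_b, model_c)):
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

print(ensemble({"furry": True, "legs": 4, "meows": False}))  # -> cat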

>> No.14523697

>>14513577
Asking again: can anyone tell me who made this?

>> No.14523713
File: 105 KB, 620x960, 666b3327953627fa50b9f1a6431e0a7f_18d48c60_640.jpg

>>14499788

>> No.14523722
File: 124 KB, 1050x564, AGI-takeoff-speed-vs-years-commercial-software.png

>>14499776
Via Brian Tomasik

>> No.14524293

>>14523722
>completely made-up values plotted arbitrarily along the meaningless axis of "soft" (how soft?) and "hard" (how hard?)
wow, it shows what you want it to! shocking

>> No.14524572

>>14524293
The chart was probably generated by an AI.

>> No.14524996
File: 208 KB, 762x730, alpha2.jpg

>>14499516
Show me ONE CASE of proto-AGI.

>> No.14525039

>>14510964
What are you saying?

>> No.14525044

>>14514587
>Substrate independence
What is that?

>> No.14525173

>>14508004
There is literally nothing intelligent about a big matrix of numbers

>> No.14525284
File: 75 KB, 602x338, stupidhuman.jpg

>>14499516

>> No.14526252

>>14513867
Still not making it. Fuck that hypothetical future facsimile of myself.

>> No.14526934

>>14522156
I agree with that sentiment. however, it always makes me think of the borges fable. the metaphor behind it is that to truly copy something, it needs to be copied down to the individual molecule, and to do this would essentially be biologically 'cloning' a brain; it'd be a hell of a lot easier to just actually grow a brain in a lab than have some supercomputer create one. also, I'm not entirely convinced that they're actually trying to replicate the human brain. instead I think they're trying to accurately simulate what interaction with another person is like, and those two things are vastly different. even if you spoke to an android that was indiscernible from a person in your interaction, you could never prove that they're conscious and aware that they're having a conversation. the irony is that because of solipsism, you could never prove that with a real person either. all of us forever trapped in our own heads

>> No.14526939

>>14499557
jokes aside, I sometimes wonder what would happen if you programmed an AI to operate purely under logic and reason. not joking: without empathy, I think it'd arrive at the conclusion that eugenics or something similar is a logical choice

>> No.14526958

>>14515992
>>14515999
do not reply to soijaks
do not bump soijaks

>> No.14526967

>>14499516
>big tech guy/company claims they'll do something that is clearly impossible (at least for the near future)
>stocks go up due to initial hype and exposure
>new thing appears and catches the eye of the media/social media
>hype dies down
>pretend you never promised anything
>rinse and repeat

>> No.14526981

>>14526934
Of all the brains in the animal kingdom, from man, to mice, to ants, to jellyfish, to birds, to snakes: how different are their brain mechanics, materials, chemistries? In response to the consideration that there is possibly only one way to make an (intelligent) brain.

>> No.14527060

>>14526981
Possibly, but your sample size is one. That’s like saying it’s possible that Earth is the only planet with life on it. Is it possible? Maybe, but we don’t really have enough information to come to that conclusion.

>> No.14527865

>>14527060
Ants and mice have intelligent, conscious brains (certainly when compared to rocks and trees). AI can do things humans can't; would you say AI is already more intelligent than ants and mice?

If brains in the animal kingdom can be made so differently, all I was saying is that maybe an AI brain can be created that is not exactly like the human brain.

Dog A can be taught no tricks or abilities and have a brain like Brain Z.

That dog's brother of the same breed, dog B, can have a very similar brain to Brain Z but be taught 100 commands; they also appear to have unique personality traits.

>> No.14527870

>>14527060
My attempted point: the only thing that can hold AI back is human simple-mindedness and lack of ingenuity, and/or cautious safety-measure foresight.

>> No.14528497

>>14527870
That is to say: genius humans working together will make genius AIs; stupid humans tasked with solving AGI will not be able to, and will say of their task (and have it said) that it is impossible.

>> No.14528570

HAVE 3 OR MORE DEEPMINDS BEEN DEVELOPED AND TRAINED AND TAUGHT AND SELF-LEARNED SEPARATELY,
AND SHOWN IMAGES AND VIDEO OF THE MOST RELEVANT UP-TO-DATE KNOWLEDGE OF EVERYTHING TO DO WITH CHEMISTRY, BIOLOGY, PHYSICS, MATERIALS SCIENCE, NEUROSCIENCE, AI, MACHINE LEARNING, NEURAL NETS, PROGRAMMING, AND COMPUTER SCIENCE;

AND THEN FOR THE 3 DEEPMINDS TO BE BROUGHT TOGETHER, AND CONVERSE AND, AS A TEAM, PROVIDE POSSIBLE SOLUTIONS FOR AGI?

I MENTIONED THIS A FEW WEEKS AGO NOW. HAS THE PROCESS BEEN STARTED YET?
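What's being described is roughly a multi-agent debate loop, which is at least sketchable even if nobody has done it with three DeepMinds. A toy version in Python; query_model is HYPOTHETICAL, a stand-in for whatever API each separately trained model would expose (name and signature entirely made up):

# Toy multi-agent "debate" loop: each agent answers the question while
# seeing the other agents' previous-round answers.
def query_model(name, question, peer_answers):
    # HYPOTHETICAL placeholder: a real system would call the model's API here
    return f"{name}'s answer to '{question}' given {sorted(peer_answers.values())}"

def debate(question, agents=("mind_1", "mind_2", "mind_3"), rounds=3):
    answers = {a: "" for a in agents}
    for _ in range(rounds):
        answers = {
            a: query_model(a, question, {p: answers[p] for p in agents if p != a})
            for a in agents
        }
    return answers

for agent, ans in debate("possible solutions for AGI?").items():
    print(agent, "->", ans)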

>> No.14529382

>>14515820
i disagree with that wholeheartedly.

it is ok to be disgusted by something that you see or know of, but it is not ok to make judgments on someone based on something that you have no true knowledge of.

jeff.

>> No.14529461

>>14521605
is that not how neural networks operate? the only difference is that the AI is given all the information ahead of time.

>> No.14529733

>>14529461
I'm not a programmer or hardware guy in that field, so I don't know. but I've worked in large databases for 14 years, and my feeling is that they're not trying to replicate human consciousness; they're trying to simulate it

>> No.14530753

>>14528570
How close is this to being done?

>> No.14531014

>>14499516
People who think AI will become anything like human consciousness don't understand the difference between an observer and a reactor. Robots cannot observe anything; they can only react to stimuli. People who think like >>14511297
don't grasp that if human existence were nothing but calculations, experience would be completely unnecessary. There wouldn't even be room for an observer. People think our phenomenal experience is like a computer screen, but miss the fact that the computer doesn't look at itself; even a robot that could imitate human behavior wouldn't have an observer within itself to experience whatever it computes. You could represent what the robot "sees" and "hears" by hooking it up to monitors, but the only observer in that situation would be the human.