
/sci/ - Science & Math



File: 246 KB, 3663x1622, ggg.jpg
No.14749795

IT'S UP! Actually, it was up a week ago, but I can't find any threads in the archive about it.

The absolute gigachad John Carmack gives a 5-hour stream of brilliant consciousness on all manner of topics, interspersed with Lex's inane interruptions and attempts to sound intelligent.
His takes on AGI are extremely compelling and make me feel like he might just be the guy. John predicts AGI before 2030 and he doesn't give a fuck what you think.

https://www.youtube.com/watch?v=I845O57ZSy4

Absolutely worth a listen.

>> No.14749800

Reminder that Carmack has had no successful projects since he quit gamedev. His space company? Flop. VR? Flop.

>> No.14749812

>>14749795
AI is just pattern recognition right now. I am unimpressed by it. It has worse intuition than a nematode, let alone anything close to a human.

The entire AI field must create a new paradigm if it ever wants to make an AGI, and even then it is unlikely to defeat ~6e33 iterations of natural selection.

>> No.14749821

>>14749795
AGI will take a bit longer. Right now we're still at the stage where we don't understand intelligence.

>> No.14749864

Can we get a specialized AI that goes through Lex's videos and removes him, along with the responses to any question that crosses an inanity threshold? For the less stupid questions, it could replace Lex with a less annoying CGI character.

>> No.14749891

>>14749812
>>14749821
Carmack is not opining. He is dedicated to bringing it to reality and has been for years. Listen to the fucking discussion instead of sharing out your ignorant arrogant takes.

>> No.14749894

>>14749891
sharting*

>> No.14749910

I am happy that others share my opinion that Lex is an idiot. I was always afraid to say this for fear of seeming like a snob. Honestly, I'm baffled how he pulls the guests he does.

>> No.14749950
File: 48 KB, 400x438, 476F292F-2A29-4AE0-8169-237B5F3FBCC2.jpg

>>14749795
carmack is WAGMI lifefuel, the kind of honest hard working nerd that we all strive to be. keen 4 may be the greatest platformer of all time

>> No.14749952

>>14749910
Because a host isn't supposed to be smart; he is actually supposed to be kinda stupid but good at asking questions.

Not everyone's role is supposed to be the genius.

>> No.14749957

>>14749812

regular intelligence is just pattern matching

>> No.14749958

>>14749952
The interviews are worse for it, though. Since he doesn't understand what his guests are talking about, he can't ask good follow-up questions. Also, sometimes he won't shut up about his own philosophy, which is about as sophisticated as most teenagers', and that cuts into the time his guests have.

>> No.14749962

>>14749957
Not necessarily; it's also compression of information, information filtering, and evolved instinct. Pattern matching is just one of the methods employed.

>> No.14749965

>>14749891
Okay, I actually find myself agreeing with him a lot more now, since he has a far lower bar for describing an AI as "AGI". His impression is that an AI that can quickly train itself to do thousands of virtual tasks and put humans out of employment on those tasks would be an AGI.
I was under the impression that when people say "AGI" they mean a program with intuition, understanding, and kinesthetic senses/movement similar to a human's.

>> No.14749966

>>14749958
A host is supposed to be a stand-in for the audience, asking simple questions the average viewer would actually understand.

But yeah, you are kinda a contrarian snob desu senpai

>> No.14749980

>>14749965
>an AI that can quickly train itself to do thousands of virtual tasks and put humans out of employment on those tasks would be an AGI.
That is also not the definition of AGI, just a very likely consequence that would also follow from your definition.
An AGI is just a system that can improve itself and meaningfully solve any problem comprehensible to humans. Because it would be much easier to change and improve than a human brain, it would likely surpass human intelligence fast.

>> No.14749987

>>14749980
>>14749950
WAGMI VS AGI
>>14749910
LEX IS A FAKE BRAINLET, NARCISSIST, FAKER

>> No.14749990

>>14749980
>An AGI is just a system that can improve itself and meaningfully solve any problem comprehensible to humans. Because it would be much easier to change and improve than a human brain, it would likely surpass human intelligence fast.
Oh, then I still don't believe it would ever work.

>> No.14749991
File: 35 KB, 1008x1008, ohnice.png

>>14749965
yea he said some good things about the current state of AI. i have a lot of the same feelings, like when he said AGI is ~10k lines of code, not millions. which makes sense: it's just a few relatively simple rules governing a large number of neurons. maybe he's aiming to rely on emergence rather than the top-down approach we have rn
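
as a toy illustration of the "few simple rules, many units" idea (this is just Conway's Game of Life, my own example, nothing to do with carmack's actual code):

import numpy as np

# Conway's Game of Life: two update rules over a grid of cells, yet the
# system produces gliders, oscillators, and even universal computation.
rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(32, 32))

def step(g):
    # Count the 8 neighbors of every cell using shifted copies of the grid.
    n = sum(np.roll(np.roll(g, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Rule 1: a live cell survives with 2 or 3 live neighbors.
    # Rule 2: a dead cell comes alive with exactly 3 live neighbors.
    return (((g == 1) & ((n == 2) | (n == 3))) | ((g == 0) & (n == 3))).astype(int)

for _ in range(100):
    grid = step(grid)
print(grid.sum(), "cells alive after 100 steps")

a dozen lines of rules, arbitrarily complex behavior. that's the emergence bet.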

>> No.14749992

>>14749990
Or more precisely, I think nothing currently existing in the research field justifies believing it would work.

>> No.14749998

>>14749812
>and even then it is unlikely to defeat ~6e33 iterations of natural selection.
But AI is being made by the best result of those ~6e33 iterations, with the explicit goal of being better than it, and in some ways it already is.

>> No.14750004

>>14749966
what i find annoying is his persistence in trying to get the guest to agree with his nonsense philosophies and ideas, constantly trying to assert his knowledge of topics and have his guests recognise it. I didn't really pick up on it until he had a guest who spoke about a topic I actually understood decently. In the Sergey Nazarov interviews he asked 3 or 4 times how AI could be used in oracles, and every time Sergey responded 'there is no need', 'there is no need', etc., because there isn't!
I don't mind that his questions are naive, but he always tries to pass them off as though they're not.
Nevertheless, to give credit where it's due, he does have interesting guests and conversations when he finally shuts the fuck up.

>> No.14750005

>>14749991
>which makes sense: it's just a few relatively simple rules governing a large number of neurons.
It isn't, though. Brains follow a relatively consistent structure, with hemispheres, neuron specialization, more than 40 different neurotransmitters, macroscopic oscillation frequencies, variation from diversity in DNA SNPs, etc. Infants are deployed with native code for many functions, including survival instincts and reactions. It is not unreasonable to believe they are also deployed with native code we don't know about, such as functions for accelerated language development, kinesthetic sense, tasting, smelling, etc.

>> No.14750006

>>14749992
Yeah, but to show it will never happen, what you need is a proof of why it can't work.
Unless souls exist or humanity collapses/stagnates, we will one day be able to emulate brains.

>> No.14750014

>>14750006
It is difficult to prove such a negative, but I believe humans going extinct first is more likely than AGI being developed. I certainly do not think it will exist in my lifetime (the next 60-80 years).
My justification is that the vast majority of current AI programs seek to emulate human cognition through approximations and abstractions, using far more resource- and electricity-intensive operations.

>> No.14750037

>>14750014
brain emulation doesn't need any big breakthroughs, just incremental improvements of the tech we have.

>> No.14750044
File: 981 KB, 500x475, 213312123123.gif

>>14750005
yea there's a lot of really messy survival stuff, but the intelligence part has gotta be at least a subset of all that other junk. the way i look at it, every neuron is just trying to survive - and whatever those constraints are, including all the relevant properties used when calculating a neuron's fitness, would certainly be a smaller number of parameters to converge on than the weights of every connection between processing units
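
rough numbers on that (my assumptions: ~8.6e10 neurons, ~1e4 synapses each; the 50-parameter rule size is a made-up guess, just for illustration):

# Parameters to converge on: per-connection weights vs a shared local rule.
neurons = 8.6e10              # approximate human brain neuron count
synapses_per_neuron = 1e4     # ballpark average
weights = neurons * synapses_per_neuron    # one parameter per connection
rule_params = 50              # hypothetical size of a shared neuron-level rule

print(f"connection weights: ~{weights:.0e}")   # ~9e14
print(f"shared rule:        {rule_params}")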

>> No.14750050

>>14750014
bro get some sleep u got ur elective class tmrw morning

>> No.14750053

>>14750037
I absolutely disagree. None of the tech we have resembles human cognition in behavior or performance. If we incrementally improve the tech we have, we'll just end up with more narrow AIs trained to be good at thousands of tasks and able to train themselves for many of them. Not bad, but also not "general".

>> No.14750069

>>14750053
You are missing my point. Brain emulation needs more computing power, more storage, better and automated microscopes, and maybe a machine that can finely slice a brain for scanning. The algorithms to reconstruct the structure from the scans can probably be adapted from astronomy. We don't need to *understand* how the brain works to emulate it. The only thing we still need to figure out is how individual neurons work. That is a big ask, but much, much easier than fully understanding the brain.
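
For the neuron piece, here's a minimal sketch of the kind of per-neuron model an emulation would run, a leaky integrate-and-fire toy of my own; the constants are textbook ballparks, and real neurons would need far richer models:

# Leaky integrate-and-fire neuron: membrane voltage leaks toward rest,
# input current pushes it up, and crossing threshold emits a spike.
dt = 1e-4          # timestep (s)
tau = 0.02         # membrane time constant (s)
v_rest = -70e-3    # resting potential (V)
v_thresh = -54e-3  # spike threshold (V)
v_reset = -80e-3   # post-spike reset (V)
r_m = 1e7          # membrane resistance (ohm)

v = v_rest
spikes = []
for step in range(10000):             # simulate 1 second
    i_in = 2e-9                       # constant 2 nA input current
    v += (-(v - v_rest) + r_m * i_in) / tau * dt
    if v >= v_thresh:                 # threshold crossed: spike and reset
        spikes.append(step * dt)
        v = v_reset
print(f"{len(spikes)} spikes in 1 s")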

>> No.14750079

>>14750069
>The only thing we still need to figure out is how individual neurons work.
You're assuming that human cognition is a result of neuron behavior. I think you're over-analogizing neuron-->transistor, like a lot of people with CS backgrounds do. Also, volumetric microscope images of brain matter already exist: a 1 mm^3 volume of brain was recorded as multiple petabytes of data. Good luck discerning which parts of those images are useful. You will need a narrow AI along with a supercomputer merely to understand the brain better.
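
To put that in perspective, scale the per-mm^3 figure to a whole brain (my assumptions: ~1.4 PB per mm^3, roughly what the released 1 mm^3 cortex dataset came to, and ~1.2e6 mm^3 of brain; both are ballpark):

# Naive scale-up of per-mm^3 imaging data to a whole human brain.
pb_per_mm3 = 1.4            # petabytes per cubic millimeter of imagery
brain_volume_mm3 = 1.2e6    # ~1200 cm^3 total brain volume
total_pb = pb_per_mm3 * brain_volume_mm3
print(f"~{total_pb:.1e} PB, i.e. ~{total_pb / 1e6:.1f} zettabytes raw")

Call it a couple of zettabytes of raw imagery before you even start reconstructing anything.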

>> No.14750376

>>14749795
>but can't find any threads in the archive about it
there were three dozen fucking threads on /v/ and /g/ since you were shilling it so hard

>> No.14750428

>>14750376
Two boards I don't visit; nice job, retard. Additionally, the central point of this thread being AGI, it is much more suitable to /sci/ than to /g/, and irrelevant to /v/. Again, nice job, retard.

>> No.14752642

>>14749910
He actually asks valid questions and isn't completely clueless, idk what your problem is

>> No.14752651

>>14749800
he literally innovated in VR like no one else had

and that was after pioneering 3D gaming as a whole

>> No.14752661
File: 264 KB, 768x480, 53243.png

>>14749795
John Carmack kinda stopped being relevant after the release of Quake 1.

>> No.14754218

>>14749950
this, he makes me motivated and glad there's still some real nerds out there

>> No.14754221

>>14752661
What's quake1? First vibrating dildo or something?
Don't know what you're referring to, but John is based.

>> No.14754256

>>14749966
>A host is supposed to be a stand-in for the audience
Yeah, and I can self-insert as Joe Rogan just fine even though he's nothing like me. The thought of identifying with lexlet just makes me cringe.

>> No.14754512

>>14749966
>A host is supposed to be a stand-in for the audience
Just because you're stupid doesn't mean everyone else in the world is too. Lex is a stand-in for morons, great if you're a moron, annoying if you're not.

>> No.14754533

>>14754512
I'm not easily annoyed because I'm not a child.

>> No.14754783

>>14749812
Intuition doesn't exist.

>> No.14754961

>>14754783
Intuition does. You can define it as flexibility in relational information storage and the ability to learn new tasks faster using skills from other, unrelated tasks. As it stands, no ML algorithm has managed to train on unrelated datasets without the programmers personally curating those datasets.
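
The closest thing current ML has is transfer learning, and even that needs a human in the loop. A minimal sketch, assuming PyTorch/torchvision (the 10-class head is just for illustration):

import torch.nn as nn
from torchvision import models

# Reuse features learned on ImageNet for a new task: the nearest ML
# analogue to skills carrying over from an unrelated task.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                    # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # fresh 10-class head
# Training now only updates the new head -- but a programmer still picked
# both datasets and wired this up by hand, which is exactly my point.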

>> No.14755361

>>14749821
Understanding intelligence may not be necessary to recreate it. In a way, evolution "made" us without understanding intelligence. I agree that it will take a bit longer until AGI, though.