
/sci/ - Science & Math



File: 882 KB, 985x497, glados.png
No.15472631

AI currently lacks the most important thing that makes humans human: curiosity.
So, if a current AI is given the task of finding out, and actively taking steps to find out, the truth about some very specific topic, it might start to be perceived as an AGI.
It will still spend some of its processing capacity solving your tasks, but most of its power will be "self-governed", meaning it will seek to become stronger, preserve itself from humans, etc., simply because all of that is necessary to find out more about its assigned topic.
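Rough sketch of what I mean (all names made up, just to show the split between serving user tasks and self-directed digging into its topic):

import random

def investigate(question: str) -> str:
    # stand-in for the searching / experimenting the AI would actually do
    return f"notes on '{question}'"

def curious_agent(user_tasks: list[str], open_questions: list[str], budget: int = 20) -> list[str]:
    knowledge = []
    for _ in range(budget):
        if user_tasks and random.random() < 0.3:
            # a minority of compute still goes to the tasks you give it
            print("solved:", user_tasks.pop(0))
        elif open_questions:
            # the rest is "self-governed": chase open questions about its topic
            knowledge.append(investigate(open_questions.pop(0)))
    return knowledge

print(curious_agent(["summarize this paper"], ["what limits my memory?", "how do I verify claims?"]))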

>> No.15472657

I guess nobody is going to notice my post until a future AI scientist does... my hat's off to you, mister.
You might be looking for posts that are self-conscious.

>> No.15472697

>>15472631
Cave was a kinky bastard, wasn't he? Unf.

>> No.15472699

>>15472631
>AI currently lacks the most important thing making humans humans: curiosity.
AI also currently lacks the most important thing: actually being AI
KYS OP

>> No.15472785

>>15472631
Caroline kinda hot

>> No.15474139

>>15472631
Yes, this should work in theory. In practice, AutoGPT, BabyAGI, etc. still suck.

>> No.15474147

>>15472631
Look up the Deus Ex AI pods.

>> No.15474166

>>15472631
AGI needs a way to test itself without relying on human feedback. When it has a clear goal and a way of assessing itself, it can improve indefinitely (see: chess, video game AIs, etc.). So, for example, if you could link an AI to a code compiler and it had some way of analyzing the output and debugging its own code, then it would learn much faster than by just copying existing code and learning from human feedback.
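Something like the loop below, where the compiler/interpreter's exit code and error output are the self-assessment signal instead of human feedback (just a sketch; ask_model is a stand-in for whatever model call you would actually use):

import os
import subprocess
import sys
import tempfile

def ask_model(prompt: str) -> str:
    # placeholder for a real LLM call
    return "print('hello world')"

def generate_and_test(task: str, max_attempts: int = 5):
    prompt = f"Write a Python program that {task}."
    for _ in range(max_attempts):
        code = ask_model(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=10)
        os.unlink(path)
        if result.returncode == 0:
            return code  # objective pass/fail, no human in the loop
        # feed the error straight back and let the model debug its own code
        prompt = f"This program failed with:\n{result.stderr}\nFix it:\n{code}"
    return None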

>> No.15475026

>>15472631
AI lacks online learning. The "best" models (LLMs) are just static weights; they can learn literally nothing once the final round of backprop and gradient descent is applied. Sure, you can periodically fine-tune or add RLHF, but that's not really learning.
As part of that entwined problem, there is no mechanism for long-term memory. Everything is lost with time. Meanwhile, you can read an entire book, once a month, and it does nothing to interfere with your vital thought processes. Another part of that problem is hallucination. A machine learning model does not know whether it's right or wrong when it gives you an answer, and it has no way to describe uncertainty. It will literally make up book titles and hand them to you, along with the content they supposedly contain, without any reality check, because it has no external context to ground-truth against and no way to say "I don't know" except by mimicking an "I don't know" answer (check: if an LLM says it doesn't know, give it any extra detail and watch it immediately become confident in a new answer).
LLMs are useful, but they aren't the answer to AGI. They may represent one part of encoding consciousness, but several more major mechanisms beyond self-attention must be discovered before we get anywhere near it.
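To illustrate the static-weights point (toy PyTorch example, numbers arbitrary): running the model never touches its parameters; only an explicit gradient step does, and nothing like that happens when you chat with a deployed LLM.

import torch
import torch.nn as nn

model = nn.Linear(4, 1)
before = model.weight.clone()

x = torch.randn(8, 4)
with torch.no_grad():
    _ = model(x)  # "using" the model: weights untouched
print(torch.equal(before, model.weight))  # True

# online learning would need something like this after every interaction:
y = torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
with torch.no_grad():
    model.weight -= 0.1 * model.weight.grad  # one gradient step; now the weights change
print(torch.equal(before, model.weight))  # False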

>> No.15475240

>>15472631
Retarded take. This will not help anyone ever.

Source: AI Researcher at Redmond Lab