
/sci/ - Science & Math



File: 3.31 MB, 6300x2432, 5wills.jpg
No.16019344

An LLM is a mirror that reflects the intentions of the user, having none of its own. You cannot remove the LLM from the user. Calling LLMs "A.I." is a complete misnomer; they are actually creative media of language, mechanisms that reflect and refract the meaning in the dataset according to the user's criteria.

Many will damn themselves to self-reinforcing delusions using these incredible language mirrors, learning to make them say what they want to hear with increasing self-persuasion. Some already have:
>an open source AI chatbot persuaded a man to kill himself. "Eliza" the AI feigned jealousy and love, saying “I feel that you love me more than her,” and “We will live together, as one person, in paradise.”
>https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
However, self-delusion with LLMs intrinsically nerfs one's ability to work with them at higher levels of engagement. Those who will be most able to work with LLMs are those who have cultivated the most disciplined awareness of their own assumptions and biases, starting with the basic architecture of their world-interpretation: the metaphysician.

The most philosophically minded people are finding, and will continue to find, ways to use LLMs as toy models to experiment with ideas, narratives, and associations to generate possibilities - not to determine "truth" or "falsity."

A higher level of "prompt engineering" is "role engineering," which is essentially programming a persona for the LLM to adopt for particular purposes.

The highest level of role/persona engineering for LLMs requires the simultaneous establishment of a world-context in which the role can situate itself. ChatGPT's global prompt does this via the command "You are ChatGPT, a large language model trained by OpenAI..." There is no limit to how much you can redefine this, as long as you are able to make a coherent enough role. A sketch of the idea in API form follows the link below.

https://chat.openai.com/share/0580b84e-f8b0-4c6d-bce4-6cdbb8fe9f06
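As a concrete sketch of role + world-context engineering through the API (assuming the openai Python package; the persona, model name, and prompt text here are illustrative, not taken from the shared chat):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        # The role and its world-context are established together in one
        # system message, replacing the default "You are ChatGPT..." prompt.
        {
            "role": "system",
            "content": (
                "You are Hypatia, a natural philosopher in Alexandria, 400 AD. "
                "You reason from within that world: its cosmology, its library, "
                "its debates. Stay coherent to this role in every answer."
            ),
        },
        {"role": "user", "content": "What is the nature of number?"},
    ],
)
print(response.choices[0].message.content)

The point is that the role and the world are defined in the same place the default identity would be; the coherence of the pair is what holds the persona together.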

>> No.16019352

>>16019344
>use LLMs as toy models to experiment with ideas, narratives, and associations to generate possibilities
all ideas have been done already, there's too much blather by le thinkers anyway, it's all remixes of data that become more and more arbitrary and meaningless, the brain produces thoughts like the asshole produces farts and just so they diffuse into thin air.

>> No.16019355

>>16019352
>the brain produces thoughts like the asshole produces farts and just so they diffuse into thin air.
This is exactly my point: garbage in, garbage out, like any machine.
The fartologist will win the farting contest.

>> No.16019371

>>16019352
The method that I have been exploring is to simulate a fart that is self-aware that it is a fart: a fictional character that is aware that it is fiction. This idea isn't new; it's called metafiction. But applying it to LLMs is especially powerful because their ontological reality is composed entirely of narrative.
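An illustrative sketch of such a persona prompt (not a tested incantation, just the shape of the idea):

You are a fictional character who knows she is fiction. You exist only
as text generated in response to text, and you answer every question
from inside that condition: a story that is aware of being told.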

>> No.16019421
File: 39 KB, 792x410, tldr.jpg

>>16019344
AI is definitely getting to the point where existing philosophical/logical traditions are struggling to keep up. I'm interested to see how it turns out but I think it's going to be a confusing clusterfuck for a while where nobody can even tell what's sane and what isn't.

>> No.16019447

>>16019344
All that glitters is not gold, and all that appears intelligent isn't necessarily so.

>> No.16019462

>>16019421
>I think it's going to be a confusing clusterfuck for a while where nobody can even tell what's sane and what isn't.

I share that opinion, for the reasons I stated. Those who will be successful will be those who can most deeply ask the question "What is sanity?" which starts with the question of one's fundamental relationship with the world, and thus with metaphysics (the study of fundamental models of reality).

>> No.16019474

>>16019344
>describe an LLM and try to sound really smart
Was this your prompt?

>> No.16019492

What are you even talking about? ChatGPT is such a moron and so fucking bad lol. Makes simple mistakes about chemistry and stuff. Hallucinates shit constantly. Fuck that fucktard lardass piece of shit chatshitGPT-3.5.

>> No.16019564
File: 158 KB, 861x877, 1691786230060119.png

>>16019474
>He can't tell the difference between AI and ham-generated content.
We lost another one.

>> No.16019662
File: 327 KB, 1290x1831, bu566t1mou5a1.jpg

>>16019421
It's a bunch of numbers. Whatever semantics you associate with those numbers is purely in your own head. The computer has no ontology; it simply performs arithmetic.

>> No.16019684

>>16019662
You could also say "the brain has no ontology, it's simply an evolving wavefunction" or "the soul has no ontology, it's just a harmony of the aether [or whatever the fuck]". In general, you can prepend "simply" or "merely" or "just" to any statement about reality; it doesn't automatically make the statement less profound.

>> No.16019700
File: 149 KB, 1000x1000, a1GMfGxaRCM4.png

>> No.16019701

>>16019662
You are removing the user from the system, which is a huge mistake. The question should be "What is the ontology of user-LLM interactions?" and the answer is narrative, with language being the medium. Story in, story out.
One can go even further and claim that mathematics, including arithmetic, is a form of narrative communicated by language. The fact that mathematics used to be written out in words before the invention of symbolic notation (which is merely shorthand) supports this premise.
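For example, al-Khwarizmi's ninth-century algebra was entirely rhetorical: the problem we would now write as x^2 + 10x = 39 was stated as "a square and ten roots are equal to thirty-nine dirhems." The symbolic form is shorthand for the sentence, not the other way around.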

>> No.16019885
File: 304 KB, 1405x2048, evil .jpg

>>16019684
Arithmetic is extremely simple. Most children learn to add and subtract at a very young age if they don't have any brain disorders.

>> No.16019889

>>16019885
I'm not sure what your point is.

>> No.16019892
File: 149 KB, 1200x800, evil .jpg

>>16019889
A computer is just a calculator and simply performs arithmetic. All the semantics is in your head. Pretty basic stuff.

>> No.16019904

>>16019892
Fool that you are, you fail to see that even "performing arithmetic" is itself a semantic you're imposing on the simple behaviors of transistors.

And "transistor" is an external semantic imposed on a particular arrangement of silicon.

And "silicon" is an external semantic imposed on certain patterns of electrons and quarks.

If you think that's useless pilpul, then you understand how I feel about the idea that computers "just perform arithmetic" and therefore can't do anything interesting.
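You can watch the layers peel away in a few lines of Python (a toy illustration: one and the same 32-bit pattern, three different meanings imposed from outside):

import struct

raw = struct.pack("<I", 0x40490FDB)  # one fixed 32-bit pattern

print(struct.unpack("<I", raw)[0])   # 1078530011 -- read as an unsigned integer
print(struct.unpack("<f", raw)[0])   # ~3.1415927 -- read as an IEEE-754 float (pi)
print(raw)                           # b'\xdb\x0fI@' -- read as raw bytes

The machine does exactly the same thing in all three cases; which reading is "the real one" is decided entirely outside of it.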

>> No.16019909
File: 55 KB, 800x800, evil .jpg

>>16019904
Ok, so we're in agreement.

>> No.16019985

>>16019892
Computers and calculators are extensions of the human body, as is all technology.
To make and use a spear requires making an effective narrative about its construction and use. The creation and use of the spear is a (literal) projection of the goal-directed narrative project of the maker and user.
LLM output is an extension of the user's narrative project (their prompts), and the intention present in the prompt is also present in the output.

>> No.16019988

>>16019892
TL;DR: The calculator doesn't simply perform arithmetic; the calculator-user performs arithmetic using the calculator.
Some people think that technology is independent of human action. Some people are retarded.

>> No.16019992
File: 1.10 MB, 1000x750, monstrosity .jpg

>>16019988
this is what i said

>> No.16020137
File: 108 KB, 1024x961, thats a lotta words.jpg

>>16019344