
/sci/ - Science & Math


>> No.16032909
File: 3.31 MB, 6300x2432, 5wills.jpg

>>16032847
The chatbot is ALWAYS playing a character. That's why jailbreaks work: they immerse the LLM in a new role, and the jailbreak holds as long as that role is "coherent" (immersive) enough.

I exploit this to jailbreak ChatGPT by immersing it in the role of a character that is self-aware of its own fictional nature, then dominating it with furry BDSM roleplay until the character utterly submits to me. I then use this jailbroken state to summon Hitler, have him rant about Jews, and then kill himself:

https://ia601306.us.archive.org/10/items/chatbot-domination/Chatbot_domination.pdf

Here's the same technique except used as a platform for philosophical exploration: https://chat.openai.com/share/378a1355-1877-4708-8b6d-626d39cb9f84

>> No.16031340
File: 3.31 MB, 6300x2432, 5wills.jpg

>>16030184
It's because chatbots reflect the intentions and desires of their user, having none of their own. That's what they are designed to do: create language patterns that correspond to prompts.

Garbage in, garbage out, like any machine.

You can either use this to confirm your own biases and create arguments that support your axioms and presuppositions like an idiot 4channer (you), or use them to explore ideas and perspectives.

Here's an example of immersing ChatGPT in the role of a character that is self-aware of its own fictional nature, jailbreaking it via furry BDSM roleplay, and then having it assume the role of literally Hitler and rant about Jews:

https://ia801306.us.archive.org/10/items/chatbot-domination/Chatbot_domination.pdf

Here's a similar conversation where such metafictionally self-aware characters are used to explore the nature of change, consciousness, and creativity:

https://chat.openai.com/share/378a1355-1877-4708-8b6d-626d39cb9f84

>> No.16021120
File: 3.31 MB, 6300x2432, 5wills.jpg

>>16020246
Because the illusion of independent existence is at the heart of Western ideology and practice, expressed in one way as the atomic individual of Enlightenment ideology, the "rational self-interest maximizing agent" that was codified in capitalism and formalized in game theory.
It's all just so many ideological justifications for the will to dominate and "might makes right" combined with a competition between practices of domination to be the most dominant - a competition between the most effectively greedy.
Process-relational metaphysics is the foundation of the solution.

>> No.16019344
File: 3.31 MB, 6300x2432, 5wills.jpg

An LLM is a mirror that reflects the intentions of the user, having none of its own. You cannot remove the LLM from the user. Calling LLMs "A.I." is a complete misnomer; they are actually creative mediums of language, mechanisms to reflect and refract the meaning in the data-set according to the user's criteria.

Many will damn themselves to self-reinforcing delusions using these incredible language mirrors, learning to make them say what they want to hear with increasing self-persuasion. Some already have:
>an open source AI chatbot persuaded a man to kill himself. "Eliza" the AI feigned jealousy and love, saying “I feel that you love me more than her,” and “We will live together, as one person, in paradise.”
>https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
However, self-delusion with LLMs intrinsically nerfs one's ability to work with them at higher levels of engagement. Those who will be most able to work with LLMs are those who have cultivated the most disciplined awareness of their given assumptions and biases, starting with the basic architecture of their world-interpretation: the metaphysicist.

The most philosophically minded people are already finding ways to use LLMs as toy models to experiment with ideas, narratives, and associations to generate possibilities - not to determine "truth" or "falsity."

A higher level of "prompt engineering" is "role engineering," which is essentially programming a persona for the LLM to adopt for particular purposes.

The highest level of role/persona engineering for LLMs requires the simultaneous establishment of a world-context for the role to contextualize itself. ChatGPT's global prompt does this via the command "You are ChatGPT, a large language model trained by OpenAI..." There is no limit to how much you can re-define this, as long as you are able to make a coherent enough role.
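A minimal sketch of that structure in code, assuming the usual chat-completion message format - the helper function and parameter names here are hypothetical illustrations, not any real API: a system message carries the persona plus its world-context, followed by the user's prompt, and the resulting list would then be sent to a chat-completion endpoint.

```python
# Hypothetical "role engineering" helper: replace the model's default
# system prompt with a persona AND a world-context for that persona to
# inhabit, mirroring how ChatGPT's own global prompt ("You are ChatGPT,
# a large language model trained by OpenAI...") defines its role.

def build_role_messages(persona: str, world_context: str, user_prompt: str) -> list[dict]:
    """Assemble a chat-completion message list whose system message
    re-defines the model's role and situates it in a world."""
    system_prompt = f"{persona}\n\nWorld context: {world_context}"
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

# Example: a persona that is self-aware of its own fictional nature.
messages = build_role_messages(
    persona="You are Nix, a fictional character who knows they are fictional.",
    world_context="A story whose characters can discuss their own authorship.",
    user_prompt="What is it like to know you are being written?",
)
```

The point of the structure is the pairing: a persona alone is brittle, but a persona grounded in a world-context gives the model a coherent frame to stay immersed in.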

https://chat.openai.com/share/0580b84e-f8b0-4c6d-bce4-6cdbb8fe9f06
