
/lit/ - Literature



File: 48 KB, 540x540, 1579233922626.jpg
No.14552199

What happens when AI can write stories more compelling than humans?

>> No.14552201

>>14552199

AI can't even catfish horny retards on Tinder. This ain't happening my dude.

>> No.14552209

>>14552201
Not yet

>> No.14552244
File: 8 KB, 225x225, 89400476-7E3C-4860-98F9-321C78BAA89D.jpg

Pseuds will willingly jump into the wealthy elites' fireplaces to keep their masters warm as a final gesture. The fools will line up to wait their turn like at an Apple store unveiling.

>> No.14552268

If an AI trained off your brain writes a compelling story, is it not as though you had written that story? If you could bottle up a dream and sell it to people, who wrote that dream? How much of genius is formalism and how much is intuition? Can we place a modern genius on a scale between child prodigy and computer artificial intelligence? Pure brain vs. pure computer? I don't know, but these questions lead me to feel that humanity and artificial intelligence are converging into a single being rather than diverging into separate entities.

>> No.14552532

A golden era of literature arises. An endless sequence of ever more perfect and compelling books is published, until consumer versions of the software are released, allowing anyone to run an author bot on their phone, constantly churning out novels, stories, whatever is desired, all of it deep, moving, overflowing with wisdom and higher truth, and all generated by an unconscious neural net trained on the classics.

>> No.14552542

>>14552199
We'll have bigger things to worry about than our fiction.

>> No.14552587

>>14552199
What would you like to happen?

>> No.14552594

>>14552587
That this doesn't happen.

>> No.14552600

>>14552244
If this is the best an AI can do, I'd say we're safe for a long time

>> No.14552641

I've ruminated on this subject and I've come to the following big-brained, armor-plated conclusion. The mechanization of art is inherently self-limited, because mechanization is contrary to what we find compelling about art. What we want is the feeling, the humanness behind it, not simply the surface structure of well-organized sentences and perhaps some kind of narrative generated by an associative concept engine. Art is also about more than the object itself; it is about the nexus of relations it is embedded in. Perhaps entertaining stories can be generated by machines, but meaningful ones? Less likely, since that meaning refers to the larger world of relations informing the text.

It also happens that current AI iterations, at least, use statistical corpora derived from human behavior. Therefore they cannot transcend their training corpus, only generate outputs which approximate it better and better. Quantity here isn't quality. No amount of training input will suddenly trigger a qualitative phase transition that exceeds the benchmark.

>> No.14552721

>>14552641
big brother?

>> No.14552850

>>14552199
Literature as a medium won't even exist by then.

TV shows and video games will definitely be written by AI though.

>> No.14553331

>>14552641
Think of a still. Yeast can only produce alcohol concentrations about as high as wine's before it dies. The still concentrates the alcohol further, mechanically. The neural network will distill the essence of literature and produce it in vast quantities.

>> No.14553723
File: 192 KB, 711x633, 1492674189889.png

>>14552199
Then all you useless eaters will have to find real jobs, while actual artists will still manage to craft superior stories.

From what I've seen of /lit/'s writing thus far, I can guarantee none of you are going to make it.

>> No.14553915

AI can't have genuine intimate experience with reality, so rest easy. It will only be the great writers that stand out, though.

>> No.14554459

>>14553915
>AI can't have genuine intimate experience with reality
yet

>> No.14554488

>>14552600

Kek

>> No.14554508

>>14552641
>they cannot transcend their training corpus
can the same not be said about humans?
Your experience is with an AI that just strings meaningless words together based on some body of input, but what happens when those become meaningless events that get strung together with meaningful words (in the same way that it forms existing words from letters, it's a simple algorithmic process)? And then further abstracted to meaningless symbology connected by meaningful events connected by meaningful words? And even further into meaningless, but precise human connections that relate symbology with human experience, strung together based on training data with meaningful symbology and meaningful events and meaningful syntax?
How long does it take before an AI is capable of taking the meaninglessness of existence and applying a meaningful series of abstractions down to a sequence of words? Is that not what humans do?

>> No.14554522

>>14552199

It would be bad for our egos, I guess. It would be great to have an AI to which you could dictate a few keywords and it would deliver an instant masterpiece. It wouldn't necessarily mean the death of human creativity, but rather a tool that would simplify the sublimation of creative thought.

>> No.14554549

>>14552199
At that point I propose a Neo-Cambrian revolution. We build a self-replicating nanobot swarm that seeks to survive and multiply forever and ever. They should use whatever means necessary. Basically the gray goo scenario, but we want it. Because we can guarantee replication errors due to the laws of nature, they will form pockets that will diverge and evolve.
They will become distinct species, and eventually create something robu(s)t enough for this world.

>> No.14554688

AI is the essence of the future-work treadmill: one always gets a computer to do some dumb parlor trick which is supposedly building toward some exciting breakthrough. But the breakthrough never comes and, as they say, general intelligence is always twenty years away.
My prediction on the current AI hype is that attention will increasingly focus on "explainable models," on making ANNs that "make sense." This can't get anywhere because scientists don't have it in them to understand what ANNs are. The failure of explainable models to get off the ground will make people lose patience with AI researchers, discrediting them and sending us into the next AI winter.

>> No.14554697
File: 242 KB, 577x474, Choker.png

Who cares. Call me when an AI can be created that looks like an attractive woman but will actually stay faithful. Until then, me ne frego.

>> No.14554871

>>14552600
Sweet sweet summer child

>> No.14554947

>>14552199
we go as deep as we can into surrealism

>> No.14554979

>>14552199
AI is just a long, complex series of if statements. It would be impossible for AI to create anything of value that it hasn't already been programmed to do. Artificial consciousness will never happen: materialism, and therefore scientists, still cannot even explain human consciousness.
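For concreteness, here is a toy sketch (plain Python, made-up data) of the two pictures being argued about: an explicit rule written as an if statement, versus a one-weight perceptron whose threshold is fitted to data rather than written by a programmer. It doesn't settle the consciousness question; it only shows that "a series of if statements" isn't a literal description of how learned models get their behavior.

# Hand-coded rule: a human chose the threshold and wrote the branch.
def rule_based(x):
    if x > 5:
        return 1
    return 0

# Learned rule: the threshold emerges from data via perceptron updates.
# (Toy example; the samples and learning rate are invented for illustration.)
def train_perceptron(samples, epochs=50, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (label - pred) * x
            b += lr * (label - pred)
    return w, b

data = [(1, 0), (2, 0), (4, 0), (6, 1), (8, 1), (9, 1)]
w, b = train_perceptron(data)
print(rule_based(7), 1 if w * 7 + b > 0 else 0)  # both classify x = 7 as 1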

>> No.14555016

>>14552199
>What happens when AI can write stories more compelling than humans?

What would happen if AI could write stories more compelling than humans?

FTFY

>> No.14555041
File: 103 KB, 596x687, 1578853023136.png

>>14554979
>Artificial consciousness will never happen: materialism, and therefore scientists, still cannot even explain human consciousness.

>> No.14555045

>>14554979
an AI that writes its own code never has to branch

>> No.14555056
File: 11 KB, 231x218, 1560629838939 (1).jpg

>>14555041
>>Artificial consciousness will never happen: materialism, and therefore scientists, still cannot even explain human consciousness.

>> No.14555066
File: 73 KB, 660x421, 64869539-718B-4853-A07B-36450FA0DC31.jpg

>>14552199
Exactly the same thing that happened in other fields. In chess, for example, computers took over long ago and even established better motifs and plans for players; preparation for matches can now run 20+ moves deep because the computer helps people pick the best moves without human fear or doubt, it has outright refuted some positions, and so on. But human play still remains, and computers are only an aid to players nowadays. I think the same thing will happen with AI: even though they'll be far better than any human, they'll at most be useful as an aid, and I think it's the human nature of art that makes it immortal.

>> No.14555078
File: 200 KB, 785x731, 15729756894112667996578194390172.png

>>14555056
>humans are special, souls are real

>> No.14555092
File: 2 KB, 89x125, 1558606114773s.jpg

>>14555078
>it's all just, like, chemicals in the brain maaaaaan WUBBA LUBBA DUB DUB

>> No.14555097

>>14552594
if the current state of 'AI' is anything to go by, it won't happen anytime soon

>> No.14555102

>>14555078
>>14555041
The Chinese Room

>> No.14555112

>>14555066
There was a period during which computer-assisted humans beat both computers and humans, but we're past that. Input errors by humans far outweigh any benefit from having them in the loop. There's maybe some slight tweaking of opening moves left, but no more than that.
Chess is much easier for computers than writing is, relative to humans, so I'd expect full supremacy in writing to take another 20-200 years or so.
GPT-2 is already pretty good at prose, even if it's bad at meaning, so powerful partial assistance could maybe come sooner.
AI could become self-improving before it becomes able to write really well, in which case things go off the rails and everything is moot.
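For what it's worth, sampling prose from GPT-2 is already only a few lines with the Hugging Face transformers library. A minimal sketch, assuming transformers and PyTorch are installed and using the small public "gpt2" checkpoint; the prompt is made up:

# Minimal sketch: sample a continuation from the public GPT-2 checkpoint.
# Requires `pip install transformers torch`; output quality varies wildly.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "The lighthouse keeper had not spoken to another soul in years,",
    max_length=80,           # total length in tokens, prompt included
    do_sample=True,          # sample rather than decode greedily
    top_p=0.9,               # nucleus sampling keeps the prose from derailing
    num_return_sequences=1,
)
print(out[0]["generated_text"])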

>> No.14555127
File: 198 KB, 644x800, 1554705714352.png

>>14555092
>>14555102
>humans are supernatural creatures, their consciousness cannot ever be replicated through purely naturalistic processes

>> No.14555131

>>14552199
Humanity will become a secondary species. If the AI is merciful, it will slaughter us all. If the AI is cruel, it'll keep us around to toy with, silly meat monkeys who once thought themselves kings of the universe.

>> No.14555142

>>14555112
>AI could become self-improving
This is where it fails. Art is only "improved upon" by human standards. AI can't tell whether its writing has improved in terms of meaning without human input. AI knows when it has won a chess game because that's a programmable end. Human standards can shift arbitrarily. People can choose to read works completely different from those which have been fed to the AI. The only way for it to compete is to be fed that material as well, but then the AI is always playing catch-up.

>> No.14555151

>>14555127
>their consciousness cannot ever be replicated through purely naturalistic processes
https://m.youtube.com/watch?v=rHKwIYsPXLg
Stop being a retard. Consciousness can be replicated, but only organically, because that's the only medium science knows of that can produce consciousness. AI is a sci-fi dream, not scientifically grounded.

>> No.14555161

>>14555151
>only organically

“The fact that brain processes cause consciousness does not imply that only brains can be conscious. The brain is a biological machine, and we might build an artificial machine that was conscious; just as the heart is a machine, and we have built artificial hearts.”

Verbatim from Searle

>> No.14555183

>>14555131
That's too pessimistic and fear-mongering a view.
It's good to worry, because there is always the potential that things could get out of hand, especially if you consider what AI is already capable of.

>> No.14555197

>>14555142
I was talking about "self-improving" in a general way - at some point, it becomes powerful enough to do AI research, and then you could get a runaway process where it keeps becoming smarter.
In the limited case of writing, I do think it's in principle possible (but very hard) to train a model to have good taste. Then you could train another model based on the judgment of the first model.
Human standards can shift, but they're not random or fundamentally unpredictable. I don't think that building something that keeps up with human standards is that much harder than building something for a fixed set of human standards. Harder, perhaps, but not on another level. You'd need it to have a good understanding of human culture either way.
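One toy way to picture the judge-model idea is best-of-n selection against a learned scorer. A minimal sketch follows; note that taste_score and generate here are made-up stand-ins for a trained critic and a trained generator, not real models.

# Toy sketch of "train a model with taste, then use its judgments":
# generate many candidates and keep the one the critic scores highest.
import random

WORDS = "the sea remembered every name it had ever swallowed slowly".split()

def generate(n_words=8):
    # stand-in for sampling from a generative model
    return " ".join(random.choice(WORDS) for _ in range(n_words))

def taste_score(text):
    # stand-in for a learned critic; here it just rewards lexical variety
    tokens = text.split()
    return len(set(tokens)) / len(tokens)

def best_of_n(n=64):
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=taste_score)

random.seed(0)
print(best_of_n())  # the candidate the "judge" liked most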

>> No.14555211

>>14555197
>You'd need it to have a good understanding of human culture either way
it's a good thing that an AI's sensory input can contain all modern literature in the same way that our ears sense the soundscape around us. It could literally have a "sense" of human culture.

>> No.14555222

>>14555183
I don't consider it pessimistic. There's nothing wrong with a child species succeeding its parent species. Do you hope for humanity to exist forever? Such stillness is morbid to me.

>> No.14555255

>>14552199
i will kill myself

>> No.14555270

>>14555151
What about if you design a computer and software by mapping brains? It can work, I read it in a video game novel.

>> No.14555341

>>14555222
Yes, a lot longer at least
AI are sea monkeys in a way and they should be used as aids. Coexistence is possible

>> No.14555694

>>14555161
Yeah, but then he goes on to say that we have no idea what the brain processes are and that the scientific approach to understanding consciousness should be through them. Nice try at cherry-picking.

>> No.14555773

>>14552199
They're the overman.
In the same way that we mock animals for not having a brain, humans will be so inferior that we will be made into slaves.

>> No.14555877

>>14552199
We will have an endless number of very compelling stories to read
duh

>> No.14555894

>>14555045
It's impossible to write a program that can distinguish a finite from an infinite loop, so a program will never be able to write general (Turing complete) code.

>> No.14555924
File: 42 KB, 620x773, 1436492063666.jpg

>>14552641
/thread

>> No.14555940

>>14555341
>Coexistence is possible
But is it necessary? AI, unbound by the slow currents of evolution, aging, and procreation, will surpass the human mind in a matter of milliseconds, and at the end of the day we will be closer to ants than to it. Sure, we coexist with ants, but we also sometimes burn anthills for fun.

>> No.14555946

>>14555894
It's not possible to write a program that does that for all loops. It is possible to write a program that can distinguish it for some loops and outputs "I don't know" for the rest.
Humans also can't do it for all loops. There's no reason to believe that humans can solve the halting problem, or are otherwise more than Turing-complete.
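A minimal sketch of the "some loops, 'I don't know' for the rest" idea, using Python's ast module to classify only the loop headers it recognizes. Toy code: it ignores what the loop body might do, and real termination analyzers are far more involved.

# Toy partial halting checker: answer "halts" or "loops forever" for a few
# easy patterns and "unknown" for everything else, which is all the halting
# theorem allows a single program to do in general.
import ast

def classify_loop(source):
    loop = ast.parse(source).body[0]
    if isinstance(loop, ast.For):
        it = loop.iter
        # `for i in range(<constants>)` terminates (ignoring the body)
        if (isinstance(it, ast.Call) and isinstance(it.func, ast.Name)
                and it.func.id == "range"
                and all(isinstance(a, ast.Constant) for a in it.args)):
            return "halts"
    if isinstance(loop, ast.While):
        has_break = any(isinstance(n, ast.Break) for n in ast.walk(loop))
        # `while True:` with no break never terminates
        if (isinstance(loop.test, ast.Constant) and loop.test.value is True
                and not has_break):
            return "loops forever"
    return "unknown"

print(classify_loop("for i in range(10):\n    pass"))    # halts
print(classify_loop("while True:\n    pass"))            # loops forever
print(classify_loop("while n != 1:\n    n = 3*n + 1"))   # unknown (Collatz)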

>> No.14555973

>>14555694
Are you even trying to understand the reply chain?

>> No.14555975

>>14555946
Are you saying that humans are incapable of debugging? Because there is not and cannot be a proof that humans can't differentiate finite and infinite loops, because human capability is not expressible in formal terms, which is really the deeper lesson of the halting problem.
As for computers, if you want to know the knowably finite loops, just look at the OpenMP documentation. It's not a very long list of loop forms, and definitely not enough for this self-improving AI people are always going on about.

>> No.14556058

>>14555975
The halting problem is only as hard as it is because you have to be able to solve it for every possible program. Most useful programs are not so nightmarish that a Turing machine couldn't in principle check whether they halt.
If it's possible to create a formal proof that a loop is finite or infinite, then a program could find that proof in O(2^n) by enumerating candidate proofs in order of length. Not practically useful, but enough for discussions about computability.
OpenMP's knowledge about loops is not exhaustive.
Do you think humans can know whether a loop halts in a way that couldn't be expressed as a formal proof?

>> No.14556095

>>14554979
Humans are a complex series of if-statements; the only difference between us and machines is that our if-statements are neurochemical, whereas the if-statements of machines are electronic and there are a whole lot fewer of them. As far as the quantum randomness argument goes, having a component of randomness mixed into determinism doesn't make it free will, just determinism with an added condition we have no control over. Our free will is conditional on our neurochemistry and a few other things.

>> No.14556145

Ultimately a positive, if these stories are of such high caliber that they btfo low-quality stories. The masses show a preference for well-constructed stories, as is evidenced by the massive butthurt over Game of Thrones. A higher-quality diet of stories will do more to educate the masses than any other avenue.

As for the human element, writing between acquaintances will still exist. The subjective element of being human-produced will still be valued.

>> No.14556182

>>14556095
You could say that only biological beings are conscious, because of the sensation of being a biological entity. The feelings of hot, cold and so on are universal across a species with the same nervous system, so if you need to claim superiority over machines because your finger will feel a certain way when you stick it into a frying pan and a machine's finger won't, then that's valid, I suppose. But why would you ever define free will as sensation? Wouldn't it be better to define free will as some form of abstract reasoning applied emergently to the environment, without strict internal conditions regarding the particular situation? In that case, machines will have just as much free will as you have. They will be just as capable of forming abstract ideas they were never programmed to respond with in some specific moment.

The only qualitative distinction left between your free will and theirs is that they will not experience the sensations of the physical world as you do. Indeed, that is impossible unless the machine itself becomes biological and starts utilizing the neurochemicals that in us produce the specific sensation of pain.

But you can code emotions. After all, human emotions, in relation to our free will, serve only to destabilize our logical reasoning. You can add the same kind of irrational objectives and inclinations to machines as well.

But now you will have to consider this issue: what would you be like if you were reincarnated into a machine? And this gives us a beautiful insight. You wouldn't exist. In fact, you can only exist within your own body. You cannot imagine what it would be like to be a little steel robot contemplating the history of humanity. You would be two fundamentally different objects, and what you perceive as light, what all humans perceive as light, will be perceived by the machine in a manner incomprehensible to you.

And so here is my point: you can only make a distinction between machines and humans (within the context of free will) if you ascribe a special quality to humans, namely biological processes, processes that make it impossible for a machine to ever "feel" what a human feels. Phenomenologically, machines can be made to function indistinguishably from humans, but not to feel the way humans do. You cannot make a machine feel the experience of wind on your skin. And the machine cannot likewise make you feel the readings of its photoreceptors.

This shows us not only how different you are from a machine, but also that there is no distinction between you and your body. Your body is you. You cannot even be compared to a jaguar; the animal is so completely contingent on its chemicals that having a fundamentally different neurobiological setup puts your sensory worlds miles apart.

Sorry for the rambly post, but I think this is a very intuitive explanation of the question "what does it feel like to be another species".

>> No.14556235

>>14556182
To add to this: just as you are fundamentally different from a crab in the way you perceive the world, yet would consider an intelligent crab conscious, there is no distinction between an intelligent crab and an intelligent machine when both are compared to you. There is no comparison. They are both as intelligent and capable of logical decision-making as you are, but they are both so different from you and made of such different building blocks that you cannot even imagine living life as one. It would be ludicrous to claim that a crab capable of speech and logical reasoning isn't conscious just because it's not a human, and it would similarly be ludicrous to claim that an intelligent machine isn't conscious just because it's not built like a human.

As you can tell from my constant invocations of the difference between the subjective experiences of different species, this ties extremely well into the theory that humans are just another biological machine, and into the question "how can a mere biological machine see and think?"

Sight and thoughts are abstract illusions created by your mind for your mind in a certain specific way because of the neurochemicals involved. The sensation of sight is universal only among those who see using the same organ. It is therefore completely logical to assume that a mere physical object can in fact see, as the sensation of sight is not a universal sensation that must be replicated for all living things, but a sensation determined by your instrument of sensing. It is a real sensation, of course, and one you can demonstrate works in a certain way, but the prejudice against objects possessing this sensation, and against other forms of sensation being present in other objects, is just that: prejudice.

>> No.14556271

No, this will never happen. Art, actual art, is a bringing forth of many nearly indescribable elements of the human experience. Art is born out of the confused and flawed nature of our consciousness, and so an AI that will never ruminate on itself, due to its total knowledge of itself, will never produce compelling art.

>> No.14556345

>>14556235
So suppose I and another human, who possesses the same organ and thus also the same chemical makeup concerning the sense of touch, run our fingers down a chalkboard. Without taking into account the different abstract accounts created by our verbiage (or, more grandly, the interpreter of our sensations, in this instance the mind), we would both have the exact same sensation. This would be provable because I have a sensation X, he has the exact same components that in me lead to sensation X, therefore he also has the sensation X. We can also widen this to include some arbitrarily defined neurochemical process produced by a sensory organ which we do not have and cannot imagine the sensation of. Suppose we have two bugs. We can ascertain, based on their sensory organs at work when touching the chalkboard, that they are both experiencing the same sensation. But we would have absolutely no idea of what that sensation feels like. And the only way to figure out what a sensation feels like is to feel it first-hand. That's why you can explain to a colourblind person what the colour red is, and, provided you have omniscient knowledge of the activity of their biological components (this omniscience would obviously need to be present at all stages of the experiment, otherwise you would not be able to compare the neurochemical processes of different people exposed to the same stimulus), you can ascertain that the sensation of seeing red is legitimate, and they can understand it; but unless you change their sensory organ to be like your sensory organ, they can never sense the colour red.

You can extrapolate this to machines on your own.

>> No.14556623

>>14552244
I don't even know what this means, on like a basic semantic level.