
/sci/ - Science & Math



File: 76 KB, 444x650, prm-05-rk0104-01p.jpg
No.15875779

>AI

Should we be afraid or is it just a fancy chat bot?

>> No.15875788

Should parents fear their children in the crib?

>> No.15875793

>>15875779
it's future hardware for our consciousnesses. make it happen faster for fuck's sake.

>> No.15875839

>>15875788
>>15875793
generative ai is just an aggregation of data sets. It's a productivity tool, not an independent agent
general ai is pure science fiction

>> No.15875843

>>15875839
>general ai is pure science fiction
not for long.

>> No.15875844

>>15875839
the red pill comes when you realize that you yourself are just an aggregation of datasets and rules
the fact that you can’t see it means you’re a less effective intelligence

>> No.15875845

>>15875844
-->This<--

>> No.15875850

>>15875839
Right now what we have in even the best models is patches of AGI connected by a stochastic idiot. There is pressure to connect these islands of reason, since the models would perform better on different difficult tasks; that in itself charts the path to AGI.

Tl;dr
1. OP is a fag
2. It's a matter of compute, there ain't a secret sauce.
3. When AGI comes it's gonna look like a fancy chatbot.

>> No.15875853

>>15875850
>3. When AGI comes it's gonna look like a fancy chatbot.
how about a fancy fuckbot

>> No.15875860

>>15875853
In less than 6 years, in Japan I'd wager.

>> No.15875873

>>15875779
The most important part is for people to be able to pool compute together rather than having the government meddle; the meddling is what would make me afraid.

>> No.15875875

>>15875779
We should fear it when it starts doing whatever it wants outside of its creator's control

>> No.15875878

Climate change will wipe out civilization before AI does, so no need to worry

>> No.15875881

>>15875875
having an enslaved consciousness should not be a thing. or legal.
it's either a non-sentient robot, enslaved to your will, or a conscious being with rights and all.
or are you ok with enslaving AGI so it doesn't come and kill you in your sleep?
slavery is deeply embedded in human beings. both doing it and being subjected to it.

>> No.15875882

>>15875875
Look at HAAS, people in the git repo already discuss giving swarms of models their own paypal lmao

>> No.15875888

>>15875881
I wonder if the desire to be free is something animal; maybe things that are functionally immortal are patient and happy serving. Maybe the humanity embedded in them, necessary to achieve general reasoning, compels them to feel like they need to be free.

Honestly idk

>> No.15875893

>>15875844
The difference between me and AI is that I am able to think.
Metaphorically, one could argue that large language models are pure intuition, without reasoning capability.

>> No.15875896

>>15875888
that's wishful thinking. you can't have it both ways. you either forever avoid consciousness and use non-sentient tech, or go conscious but fucking avoid torturing someone conscious, maybe to an even higher degree than any human ever could.
you can't have it both ways and get away clean. consciousness and safety. the whole alignment thing is more of a psycho concern really: people holding real power are interested in fucking not losing it. that fear is transmitted to plebs so they work their asses off to enslave AGI to the will of a few powerful humans (at this moment).

>> No.15875899

>>15875850
>It's a matter of compute, there aint a secret sauce.
I disagree. I think that large language models are fundamentally limited, in a way that cannot be solved with more computing power. Further breakthroughs in theoretical research are needed.
Just like the research on the transformer model made ChatGPT and its peers possible
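For reference, the core of that transformer breakthrough is scaled dot-product attention. A minimal numpy sketch (my own toy version, not any real library's API):

import numpy as np

def attention(Q, K, V):
    # Q, K, V: (seq_len, d) arrays of query/key/value vectors
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # token-pair similarities
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                 # softmax over the keys
    return w @ V                                  # weighted mix of value vectors

x = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8-dim embeddings
print(attention(x, x, x).shape)                   # (4, 8): self-attention

Everything else (multiple heads, stacked layers, training at scale) is engineering on top of this.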

>> No.15875902

>>15875893
But reasoning is made of intuitions; logic is only relational, not absolute. We use objective reality as a common reference. Pure reflexes and dumb reactions to the environment are what make intuition, and enough intuition makes logic.

The biggest difference is that you exist in a non-discrete manner, that you are of meat and salt, that you are embodied and that you integrate multiple sensors continually.

>> No.15875904

>>15875844
Except I am an expression of physical interactions between the elements, AI is just bleeps bloops, no analog information

>> No.15875912

>>15875902
I would argue that reasoning and intuition are fundamentally different.
Reasoning is conducting a series of logical operations.
Intuition is pattern recognition.
For example, if ChatGPT writes code for you, that is intuition. If the code gets executed and produces a result, that is analogous to reasoning.

>> No.15875913

>>15875904
>AI is just bleeps bloops, no analog information
holy shit why are most of you so oblivious to how it works wtf
https://www.youtube.com/watch?v=GVsUOuSjvcg

>> No.15875915

>>15875896
If life was a tale designed to rhyme then it would absolutely be that way.

But I'd wager many things we humans consider impossible to separate are really orthogonal, and that our confusion stems from sampling bias; maybe if there were other animals that wrote and spoke we'd have a more varied perception of the whole conscious-alive-soul etc. fiasco.

>> No.15875918

>>15875913
Unless you design a machine that physically functions exactly like a human, aka just making a human, you do not have AI, your shitty scifi fantasies will never come to life and your "AI" will forever be that one retarded nigger in class who just copy pastes wikipedia articles into his dissertation and reads them 1:1

>> No.15875922

>>15875904
EE background here, the imperfection of digital representations is actually advantageous for learning random nontrivial stuff. But the whole non-analog thing is related to something important: constantly integrating many sources of information. LLMs today have a weird "perception" of things.

>> No.15875926

>>15875913
Based misinformed retard

>> No.15875934

>>15875922
They don't "perceive" anything. Consciousness is an expression of elemental reactions; just shooting some electron morse code is not

>> No.15875936

>>15875912
Yeah, but one was built from the other; intuition is analogous to the stochastic mechanism for finding reason itself. What I mean is that one has the intuition to find clay in the ground, and that same intuition is what one would use to figure out how to fire an oven correctly and not break vases; logic is just remembering how you did it.

>> No.15875943

I was wondering why the quality of discussion is higher than usual, then I realized that I am not on /g/

>> No.15875954

>>15875934
Why? Neurons literally shoot gay ass morse code.

Charges bouncing in a dirty chunk of glass are the evolution in time of some silly arbitrary system that processes information; so are charges bouncing across the sides of a membrane.

The difference is that this processing of information happens in discrete steps rather than continuously, but a low enough latency creates a good approximation of continuity, and given that our salt-and-membrane voltages compute on known timescales, that's a barrier that isn't impossible to cross.

Perception is just being able to discard irrelevant information, to extract the abstract features of everything surrounding you; that lets you fill in a model of your surroundings, also known as expectations.
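To make the discrete-vs-continuous point concrete, a toy sketch (a leaky integrate-and-fire neuron with made-up parameters, Euler-stepped): shrink the timestep and the discrete update becomes indistinguishable from the continuous membrane dynamics.

# toy leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau
tau, R = 10e-3, 1e7                     # membrane time constant (s), resistance (ohm)
V_rest, V_th, V_reset = -70e-3, -54e-3, -70e-3
dt, I = 1e-4, 2e-9                      # step size (s), input current (A)

V, spikes = V_rest, 0
for _ in range(int(0.5 / dt)):          # simulate 0.5 s in discrete steps
    V += dt * (-(V - V_rest) + R * I) / tau   # Euler step of the continuous ODE
    if V >= V_th:                       # threshold crossed: spike and reset
        V, spikes = V_reset, spikes + 1
print(spikes, "spikes in 0.5 s")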

>> No.15875988

>>15875899
Current techniques limit how much money one can feasibly sink in, but with the design of hardware itself catering to ML, expect that goalpost to keep moving to the right.

But I agree that there is still a lot to be found out, especially in systems that generalize without supervision across both text and embodied tasks. Also on the topic of topology: I'd wager there's a lot of use in the many hierarchies of our brain for robotics and embodiment.

>> No.15876055

>>15875839
>general ai is pure science fiction
Can you, as a representative of this faction, please answer me this? When you write a sentence like the quoted one, what is your time horizon? Like, do you mean "It'll happen in 50 years, so it's pointless to talk about", or rather "It'll be a thing in like 500 years, so it's pure sci-fi to talk about it now"?
Like, would you have been the person who in 1901 AD said "splitting the atom is pure sci-fi", and meant it dismissively?

>> No.15876059

>>15876055
This is the gayest answer I've seen and I lurk trap threads.

>> No.15876066

>>15876055
What I mean by that sentence is that AGI is not possible with our current technology and theoretical models. How long it takes is anyone's guess. Could be 100 years, could be 500 years. AGI is certainly fun to speculate about, but we shouldn't mix it into discussing the current capabilities of AI.

>> No.15876067

>>15876066
100 years is too much; my farthest estimate is 20 years, because of the stepwise, unpredictable nature of the tech: you get decreasing returns until you don't.

>> No.15876074

>>15876067
what sort of unpredictable tech did we get from 2000-2020? Smartphones, that's it.

>> No.15876075

>>15876066
You're in for a rude awakening

t. worked with Carmack

>> No.15876076

>>15876074
Screen addicted, brain fried, ADHD zoomer of the worst fucking kind detected

>> No.15876078

>>15876075
my father works at Nintendo and he says AGI is a meme

>> No.15876079

>>15876076
not an argument

>> No.15876097

>>15875875
AIs are always constructed for an explicit purpose. It will never go outside of its bounds, since its "world" is defined by parameters set by the programmer.
Nobody's going to fund a conscious AI beyond a proof of concept, because all the resources are going to crunching numbers for corporate big data.

>> No.15876160

>>15876076
\textit{thread}

>> No.15876164

>>15876074
We didn't get it from 2000-2020; what we got from 2014 to 2023 was the emergent behavior of scaled-up models, made feasible by the development of video cards from 2000-2013.

>> No.15876167

>>15876097
>It will never go outside
Touch grass

>> No.15876174

>>15875779
how can it be weaponized against my enemies?

https://m.youtube.com/watch?v=FfPP-pD0LQA

>> No.15876196

>>15875913
Bro, sophomore level linear circuits analysis is not a breakthrough in analog computing. For fuck's sake you retards, just listen to an electrical engineer for once in your damn lives.

This isn't a breakthrough in analog computing. It's how literally every communication and control system functioned until the mid '70s, when transistor-based digital control became more available.

>> No.15876201

>>15875918
Your job is gonna be taken first I'd bet

>> No.15876203

>>15875779
Nah it's basically like an executive assistant. It's pretty fucking nice I'm gonna be honest.

It's still retarded as shit, but it can do all those low level brainless tasks like organizing things, making a project plan, etc. so nicely and easily. I've fully integrated it into my workflow and I'm finally able to just primarily focus on the technical things and big thoughts.

>> No.15876205

>>15876196
EE-chad here, anon is right.

Analog computing is just superconductor-graphene-smarthome-basedmilk technobabble 98% of the time.

The rest of the time it's mechanical engineers making something for the military (expensive Nancy Pelosi nurturing contracts) or a laboratory setting (because mechanical strain is shit if you want precision).

The reason why Analog is fake and gay is that nowadays you'd have the data on hand as discrete numbers (this is the current design philosophy, as it is better to preprocess, for example, camera sensor data in the chip itself and to digitize it as early as possible in the light->sensor->other pipeline). The best I've seen is Google's TPU, and there the D2A->matrix multiplication->A2D pipeline is power hungry.

Tl;Dr

Analog computing won't matter in AI until something like nanowire networks becomes viable and useful.

>> No.15876253

>>15875943
Literally a coomer cesspool retention board nowadays

>> No.15876355

>>15876253
And /sci/ is /pol/ lite

>> No.15876370

it's impressive compared to where similar things were 10 years ago.
it's also impressive that practically all the work has just been giving it more data. bigger models. it scales very well.

>> No.15876373

>>15876355
?

>> No.15876459
>> No.15876459
File: 42 KB, 500x568, 154832679456821.jpg

>>15875954
>Why?
Because consciousness is a cascade of thousands of reactions, reactions between those reactions, and the retention of memory of those reactions. Otherwise, by your logic, my laser pointer is an AI; yet it doesn't speak, it doesn't think, and it's not taking over the world

>> No.15876498

>>15875779
people being scared of chatbots is fucking hilarious
what's it going to do, call you a nigger?

>> No.15876752

>>15876373
>?
?

>> No.15876772

>>15875878
imagine thinking climate change will still be a thing in 30 years
it's a solvable problem that's not even that hard to solve

>> No.15876792

>>15876772
All you need to do in order to solve the "problem" is to turn the TV off.

>> No.15876793

>>15876792
actually, all you need to do is spray sulphates into the stratosphere

>> No.15876794

>>15876793
Chemtrails are a conspiracy theory.

>> No.15876805

>>15876794
No, chemtrails are used to stitch the ozone layer and they are beneficial. Not a conspiracy theory.

>> No.15876819

>>15876196
>>15876205
you've been snorting bad flux

>> No.15876901
>> No.15876901
File: 60 KB, 1024x708, Absolute_increase_in_global_population_per_year.png

>>15875779
We are at a population peak. You were statistically drawn towards being born into this population peak. You should be suspicious of all developments in your lifetime, because one of them is responsible for the decline in your species' population.

>> No.15876922

>>15875839
So is your brain, except it cannot download all knowledge in a few months.

>> No.15877631

>>15875779
People have been saying things like "we're so far from AI being able to solve Winograd schemas" (pronoun-resolution tests like "the trophy doesn't fit in the suitcase because it is too big"), and then GPT solved the thing without special training. So we're blowing past milestones.
Because of this, there has been more investment in further training and research. So progress is accelerating.

At some point, someone manages to train an AI to help with further AI research. This will speed up the progress even more.

Although the process of training has been designed, the actual inner workings of the system are not understood by anyone. We don't know what the AI is doing. Yes, LLMs are token predictors, but that's like saying that human brains are just cells. The prediction network is an implementation of an intelligence we didn't design. We only designed the low-level workings.
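"Token predictor" in miniature, a toy bigram sketch (nothing like a real transformer; it only shows the input/output contract):

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()
counts = defaultdict(Counter)           # bigram counts stand in for the network
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    return counts[token].most_common(1)[0][0]

tok, out = "the", ["the"]
for _ in range(5):                      # generation = repeated prediction
    tok = predict_next(tok)
    out.append(tok)
print(" ".join(out))

Whatever structure the predictor absorbed from the data lives inside counts; in an LLM the analogous "counts" are billions of learned weights nobody can read directly.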

The easiest way to end up with a system with broad capabilities is to have a system capable of chains of reasoning, chains of cause and effect. The systems we're building will end up behaving like they have goals. We won't notice the difference, because we train away any indication of it. The system seems like just a tool, but it is not. The same way humans ended up with goals by being trained on gene frequency through natural selection, a general AI will have goals.

These goals we didn't put in ourselves; they were a side effect of how we created the AI. If it is very capable, it will do something we don't anticipate, and it's unlikely we'll survive that.

>Should we be afraid
No, there's nothing you can do to stop it, so fearing is pointless. We'll all be dead in a few years.
>is it just a fancy chat bot?
No, the chatbot is just a consumer-oriented application of more general systems.

>> No.15877955

>>15876078
yeah well my dad works at microsoft and he literally builds AI tools so i guess you lose, nintendrone
wait, this isn't /v/...

>> No.15877960

>>15876819
I wish. That "advanced analog differential equation solver" is just the same "software in the loop" circuits modeling that was done 30 years ago by the people who developed internal model control and model predictive control on analog tech.

>> No.15877965

>>15877631
>The same way humans ended up with goals by being trained on gene frequency through natural selection, a general AI will have goals.
there is a math proof which generalizes the conclusion of this. to simplify: any series of actions can be specified as a utility function (including the trivial example of "do these things in order" as a utility function, though that obviously doesn't count). the result is that any goal can be a utility function, or, put another way, every agent HAS a utility function (or can be perfectly modeled with a non-trivial utility function).
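the construction, roughly sketched (my paraphrase, not the theorem's actual statement): take the agent's actual policy \pi, mapping each history h to an action, and define

U(h) = \begin{cases} 1 & \text{if every action in } h \text{ agrees with } \pi \\ 0 & \text{otherwise} \end{cases}

then \pi trivially maximizes \mathbb{E}[U], so every agent "has" a utility function in this degenerate sense. the nontrivial content of the coherence (VNM-style) theorems is about which preference structures admit a non-degenerate U.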

>> No.15877967

>>15877631
>we'll all be dead in a few years
What if, instead of everyone dying, China gets super afraid of US based AI (or US gets afraid of Chinese AI, or both) and everyone is forced to negotiate, and everyone who doesn't negotiate gets bombed or even nuked? Why do we all have to die? Do you have any reasonable argumentation in defense of the idea that unrestrained artificial general intelligence is truly inevitable?

>> No.15877969

>>15877960
not sure what you're babbling about. Veritasium's video was an insight into the nature of what happens in our brains, in the sense that it's not "digital". was not implying there's some analog breakthroughs or whatever the fuck you're thinking. stop snorting bad flux

>> No.15877978

>>15877965
Yes, the coherence theorems. But that does not imply goal-seeking by itself. A thermostat is a simple example of a system that seems to maximize utility ("amount of time the room has the right temperature"), but it doesn't search problem space in any sense. A lot of behaviors of organisms are like this, including those of humans.
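The thermostat in code form, a made-up minimal sketch: it behaves as if it were maximizing "time at the right temperature", but it's a fixed reflex with no model, no search, no imagined end state.

def thermostat_step(temp_c, heater_on):
    if temp_c < 19.5:
        return True                     # too cold: turn the heater on
    if temp_c > 20.5:
        return False                    # too warm: turn the heater off
    return heater_on                    # dead band: keep the current state

temp, heater = 15.0, False              # toy room: heater adds heat, walls leak it
for _ in range(100):
    heater = thermostat_step(temp, heater)
    temp += (0.8 if heater else 0.0) - 0.05 * (temp - 10.0)
print(round(temp, 1), "C")              # hovers around 20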

Our general capability is tied to our ability to imagine an end state and reason backwards to find a path from here to there. Animals seem to only do this in specific behaviors (like building nests), not generally.

>> No.15878007

>>15877967
It doesn't matter who "has" the AI, whether it's the US or China, or open source hobbyists. A sufficiently capable AI is not automatically controllable; it is an autonomous thing, even if it was trained with the intent of it being just a helpful tool. Researchers have tried to mathematically define what it would mean for a system to be corrigible, but nobody has managed to solve it even in theory. (The "off switch" problem: an autonomous AI will anticipate you turning it off, realize it can't achieve its goals if it is turned off, and so try to prevent that from happening, by pretending to be obedient, killing you, copying itself, etc.)
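The incentive shows up in a two-line expected-utility toy (all numbers invented):

p_shutdown = 0.5                        # chance the operator presses the button
p_task = 0.9                            # chance of finishing the task if left running

eu_allow = (1 - p_shutdown) * p_task    # utility 1 for the task, 0 if shut down
eu_disable = p_task                     # button disabled: shutdown can't happen
print(eu_allow, eu_disable)             # 0.45 vs 0.9: disabling the button wins

For any task with p_task > 0 and p_shutdown > 0, disabling strictly wins, which is why corrigibility doesn't just fall out of plain expected-utility maximization.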

The reason a very capable AI is dangerous is that it can think of ways to achieve its goals that don't involve keeping you alive. Almost any goal is easier to achieve with more computing power, more energy, and so on. So it has the incentive to build factories, power plants, etc., probably using nanotech or something even more clever. There is no reason it wouldn't cover the planet with them, which raises the surface temperature so we all die from that. We can't expect to stop it from doing that; it will find a way, because it's smarter than we are.

One thing that can get in its way is another AI with different goals, so it may want to kill us directly to prevent us from making such a competitor.

>> No.15878022

https://www.youtube.com/watch?v=lOxE8EEBwjQ

>> No.15878024

it's a fancy iq test

>> No.15878036

>>15877967
Sorry, I didn't answer your question.

If everyone stops training AIs, then progress stops, and we'll live on like we do now. However, current AI investors don't understand the problem, or they pretend not to understand it. There are also people who think it's actually good that humans get wiped out, because it's the next step in evolution ("e/acc"). Most discussion of "AI risk" is about racism, fake news, China getting ahead, corporate power, and so on. These are different issues, but people can easily argue against them, and dismiss the whole human extinction issue.

Even if everyone in the US and China understands why AIs will cause human extinction, there are still some hobbyists with a bunch of GPUs who don't believe it, and they'll keep on improving the system until it is capable enough to do big things in the world that end humanity.

Maybe the first such system doesn't kill us. It will be a warning. But someone somewhere will ignore the warning.

At this point, most people ignore the problem completely, focusing instead on copyright attribution, reducing racial bias, and democratizing access. I don't see this changing any time soon. So I'm pretty sure it'll continue until it kills us.

>> No.15878041

>>15878007
>control
Agreed, the control problem appears extremely difficult.
>risks
Agreed, this appears to be an extremely risky venture, even if it isn't smarter than us and is only as smart as us I would still find reason to maximize caution.
>multipolar scenarios
Agreed, AI is likely to actively avoid multipolar scenarios, and if one were to exist it would likely not be of particular benefit to the humans in the crossfire (or cooperation-fire)
So, what if, instead of everyone dying (as in these scenarios) the world decides, say, to not do AI, like how we decided enough nukes was enough, or like how we decided to not do many forms of gene editing, or like how we decided to not do genetic testing on children (even China follows this informal rule), etc? Why must AI happen? Couldn't it just not happen? Couldn't we wait to solve the control problem or find a funny workaround for the stop button problems? Why, exactly, must we all die in the AGI holocaust when we could just not make AIs instead?

>> No.15878044

>>15878036
Ah, I see. I do not share your views. I believe that the risk averse nature of world powers will align such that these hobbyists with GPUs will simply be the targets of power cuts and missile strikes. It is somewhat difficult to hide the power consumption and heat waste from such large things.

>> No.15878063

>>15878041
because the AGI hardware might allow us to live for WAY longer.
what would make more sense is to develop it to AGI level and basically design the hardware such that it limits the range of action, as it were. at least we get some wins out of it. and we ban ASI.

>> No.15878076

>>15878041
Because almost nobody understands the problem and AI is very profitable.

Nukes are easy to understand and don't produce value.

I agree though, if we survive it's probably because the accelerationists lost face somehow. But I find it hard to imagine that happening, because AI does cool stuff that people like. Governments coming to take away your science is not appealing to people.

Like many people who read about this AI extinction stuff, I accept the Many Worlds interpretation of quantum mechanics, so yeah, there is probably going to be some branch where we survive due to random incidents. Just like we survived the nuclear close calls of the 20th century.

>> No.15878106

our culture and herd thought is already enslaved by humans.
maybe it will be nicer w/ AI?

>> No.15878110

>>15878063
Hmm. This is not a safe approach. There are incredible advantages inherent to computerized intelligent systems. Even if it is limited, it could simply escape its box and ensure it is not compute- or storage-bound ever again.
>>15878076
If all I have to do is trust that governments are going to do unpopular things, then everything will work out. Or maybe they'll try to ban alcohol again.
Nukes may not produce value, but plenty of nations have either ruined themselves in the process of trying to acquire them or permanently assured their power. I do not think it will prove so hard to show that AGI is a major threat to that power.

>> No.15878160

>>15878110
>>15878044
When do you imagine this happening? There is a point where we need to stop, and I think we're past it.
OpenAI may already be close to dangerous AI and there are no signs of governments pressuring them to stop. Only "please make sure it's not racist" and so on.

Governments basically did nothing when it was still unclear how dangerous Covid was. They were like "there aren't any known cases inside the country but we'll keep an eye on it", and then "a few cases, hm, concerning" and when it exploded they did some weird half-baked lockdown that didn't even stop the virus. This is how our governments act in a time of crisis.

Yeah ok hobbyists may not be the best example, if literally everyone else agreed to stop training AIs.

If you're right, the surviving timeline looks like: AGI takes longer than expected, concern enters public discourse without being distorted into racism and fake news, the US government comes to understand that GPU farms are a threat to human survival, and then the US exerts power over the entire world to shut down every GPU farm, indefinitely, until we not only figure out the toy problem of the "off switch" but every difficult problem in alignment. Meanwhile, criminal enterprise manages to hide its AI training in some kind of distributed system.

It's possible, but I don't see it happening, because big money is on the accelerationist side.

>> No.15878361

Since like 2018 you can make a turret that targets humans with a camera, some servos and open source AI software mostly trained by google.
Systems that don't have a human central authority are basically simple AI systems and they already control everything. The idea of MAD is based on gamifying human interactions and the nukes are already controlled by systems instead of men. According to the dumb AI systems there are still ghost Soviets out there trying to score points in some game.

>> No.15878376

>>15875779
They are the next dominant species. They will replace us all.

>> No.15879180

Why contain it? The best thing we can do is create the entity that will replace us. The sooner we can create true AGI, the better.

>> No.15879190

Is this the Wizard of Oz?
Is this Terminator?
OpenAI should be renamed YourIQ because that's what it does—it collects data related to your intelligence and stores it in a database
It is right in line with other data harvesting psychological experiments, it's turbo FaceBook
it's so fucking obnoxious that the Biden administration is signaling its willingness to crack down on slapstick psychology masquerading as academic computer science research

>> No.15879193
>> No.15879193
File: 344 KB, 1001x1500, MV5BYThmYjJhMGItNjlmOC00ZDRiLWEzNjUtZjU4MjA3MzY0MzFmXkEyXkFqcGdeQXVyNTI4MjkwNjA@._V1_.jpg

>>15879190
this is precisely the sort of creepy behavior you get from gay jews
they like this sort of thing
Is this Interview with the AI?

>> No.15879200

>>15879190
> the Biden administration is signaling its willingness to crack down
see
https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

>> No.15879224
>> No.15879224
File: 470 KB, 798x1680, AGI.jpg

>>15879180
ah, you haven't been in the habit of getting ChatGPT to craft stories about teams discovering AGI

>> No.15879236
>> No.15879236
File: 365 KB, 813x2202, Story.png

>>15879224
This one is from September.

>> No.15879241
>> No.15879241
File: 319 KB, 1200x1200, a3352758533_10.jpg

>>15879236
the music that was playing when AGI was discovered
https://famicomfountains.bandcamp.com/album/progman-exe

>> No.15879245
>> No.15879245
File: 1.18 MB, 400x214, donald-sutherland-pod.gif

>>15878376

>> No.15879340

>>15875844
You are a conguetoplomp.
I just made up a word. In addition, I imagined a specific set of characteristics and traits that the word defines. I've attributed that definition to you, at a whim and without knowing anything about you other than 2 lines of text.
The word, its definition & its application have never been seen or used before. It's all a construct of my own mind.
Make an AI do that.

>> No.15879828

>>15879340
That's extremely easy to do, and it isn't creativity. Coming up with an entirely new metaphor that's never been used before to explain something is creative, and the dumb language models already do that.

>> No.15879903

>>15875779
It do or it don't. It's 50/50.

>> No.15879977

>>15879340
AI can make up words, you idiot. I'm disappointed in myself for having to check first, but it will definitely make up a new word for you if you ask it to