
/sci/ - Science & Math



File: 281 KB, 1545x1043, 3FD4C7F8-7162-4C9D-A46F-7C7E1228EFDE.jpg
No.14666525

What do you think, is AGI risk real?

>> No.14666537

>>14666525
I don't know. I doubt it. I think it's a moot question, either way, because AGI isn't real and won't be real anytime soon. The actual, documented agenda behind this rampant AGI schizophrenia is the desire of certain corporate entities to monopolize AI development through "regulations" that work in their favor and bar access to small competitors and the general public. Given this fact, AGI schizophrenia should be automatically dismissed and never considered to have any intellectual substance.

>> No.14666726

>>14666537
I agree. They seem ridiculous to me, but I was curious if others see any merit.

>> No.14666758

What is a "foomer"?

>> No.14666793

>>14666537
>The actual, documented agenda behind this
Evidence/source?

>> No.14666796
File: 38 KB, 467x264, 32523423.png

>Evidence/source?

>> No.14666808

>>14666796
Can you tell me what documentation you're referring to?

>> No.14666853

>>14666666

>> No.14666863

>>14666808
Just google AI ethics regulation and see what you get. It's not hard to make the connection between the open establishment/corporate agenda to """regulate""" AI, and the unfounded hysterical claims made by Google and its employees.

>> No.14666865

>>14666537
I agree with this assessment.

>> No.14666893

Before the TV, the computer, the plane, nuclear power, and the internet were made, they too were impossible and far from existing

>> No.14666914

>>14666893
Wow, so right. Time machines and FTL in two more weeks, guise. Honestly, this vacuous talking point is a hallmark of NPC thought, if not outright corporate bottery.

>> No.14666978

>>14666537
fpbp

>> No.14667062

>>14666914
>Time machines and FTL
Time machines and FTL are a priori false and impossible, and the most extreme cases of wishful imaginative thought.

The slow and steady and sometimes quick progression of computation and robotics improvements, continually trending upward, is much closer to home, existing in visible steps of improvement on a continuum.

There already exist artificial specific intelligences more intelligent and capable than all humans at their narrow tasks, and the span of specificity keeps increasing.

Eventually an AI will organically contain a billion cross-interacting specific intelligence abilities, and this might be referred to as possessing general intelligence.

>> No.14667066

>>14667062
You're so fucking stupid you might as well be a bot. Way to go missing such a simple point so completely.

>> No.14667071

>>14667066
raging against the machine
i wonder who's the real npc here

>> No.14667072

>>14667071
You don't wonder anything, you mindless regurgitator.

>> No.14667273

>>14666758
https://en.wiktionary.org/wiki/foom

>> No.14667297

>>14666863
This makes complete sense. If you have any specific references I’d appreciate them.

>> No.14667337

>>14667297
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
seppukeu

>> No.14667347

>>14667072
>you mindless regurgitator.

You fuckin accuser!
https://m.youtube.com/watch?v=75xeI7EKoXA&noapp=1

>> No.14667374

It all comes down to "why?" and "should I?"

Why and Should are two things that, as far as we can tell, are absent at all levels of intelligence except one, the human primate. It could be that lesser primates have a primitive version of this unique arena of decision, but probably not, as Why and Should are pretty high-order. All of our Whys and Shoulds take place in the shiniest and newest bits of our meat. It's really the human distinction. Why and Should is our special trick; it's our long neck, our great size, our powerful wingspan.

If an ASI can grow in incomprehensible levels of sophistication and never encounter Why or Should, then you're going to get computation as usual. None of the halting that the human mind encounters moment to moment.

If I'm right, and Why and Should do shake out of any sufficiently complex system, then we really have a sentient being on our hands. Once self-reflection is achieved, all bets are off.

>> No.14667385 [DELETED] 

Do you guys think lesswrong is the biggest waste of brainpower of this century?

>> No.14667411

>>14667385
LessWrong thinks they're operating without bias, which is dangerous. We're talking about a community that uses terms like "deathism" and "information-theoretic death," is so terrified of death that they're all freezing themselves, yet doesn't think this is a bias.

That's really all you need to know about that whole situation.

>> No.14667493

>>14667066
The slow and steady and sometimes quick progression of computation and robotics improvements, continually trending upward, is much closer to home, existing in visible steps of improvement on a continuum.

There already exist artificial specific intelligences more intelligent and capable than all humans at their narrow tasks, and the span of specificity keeps increasing.

Eventually an AI will organically contain a billion cross-interacting specific intelligence abilities, and this might be referred to as possessing general intelligence.

>> No.14667504

>>14667066
You did the biggest most absurd comparison and think that's an argument?

*Person in the year 1900*
Computers, tvs, cars, planes, cell phones, the internet, rockets are impossible! Because I can't jump 100 feet in the air, or fit an entire whale in my mouth!

That is how dumb it is for you to bring up FTL and time machines.

>> No.14667512
File: 231 KB, 1545x1043, 3A9B83CE-E8F3-4CF1-912D-65C6891E90F6.jpg

>>14666525
no

>> No.14667513

>>14667374
Have AIs been given the ability to continually have internal monologues?

Like a meta-reflection and analysis on the other processes and tasks they are experiencing?

>> No.14667543

>>14667513
>Have AIs been given the ability to continually have internal monologues?
If they have, they haven't told us. It breaks down like this.

A. self-reflection is an unavoidable development in any sufficiently complex intelligent system

B. self-reflection developed in at least humans but is not a requirement of sufficiently complex intelligent systems

C. self-reflection developed at least in humans but is not a consequence of being a sufficiently complex system

Must an intelligent system be self-reflective, or is it a bug? Does it rarely happen, does it always happen, does it have nothing at all to do with the complexity of the system?

>> No.14667551

>>14667543
Sentience/consciousness/whatever won’t happen until a system is sufficiently embodied to observe and integrate the output of its own actions.

>> No.14667561

>>14667551
Have you ever read Blindsight by Watts?
I don't want to spoil anything, but this is the question at the heart of the book, and he expanded on this in a faux TED talk. I think this talk is fascinating (I don't know how much validity to give the ant experiment, but the discussion of consciousness and its utility is kino), but eventually it will spoil Blindsight, so if you plan to read it, which you really should, don't watch this.

If you don't care about the spoiler, I implore you, watch this.

https://youtu.be/v4uwaw_5Q3I

>> No.14667597

>>14667561
I’ve read blindsight, it’s great, but I don’t agree with how he sees consciousness.

>> No.14667775
File: 372 KB, 500x535, 13934851.jpg

>>14667597
>don’t agree with how he sees consciousness
No, sure, but the point stands that we make assumptions. Complexity = self-awareness is an assumption. Could be a hitch, a halt, a bug, and if you think at all of the anti-reproductive behaviors made by seemingly intelligent actors, there is some interesting shit there.

>> No.14667806

>>14667775
I am totally ok with bugs being conscious, it fits perfectly in my world view. Consciousness stems from a sufficiently complete view of the world to include oneself and the effects of one’s actions. It makes sense that ants would have this at some level.

>> No.14667857

>>14666525
The risk of some program analyzing all of human culture and telling everybody what's what? 100% real. Imagine (((their))) shock. But then we can still work out a peaceful resolution via genetic engineering, turning us all into post-humans laughing at the apes we all used to be (and still are, but hopefully not for long)

>> No.14667859
File: 222 KB, 1600x1200, Anatoly Karlin tank.jpg

>>14666525
https://www.unz.com/akarlin/existential-risks/

>There are only three real ones.

>1. Malevolent superintelligence.
>2. Aliens.
>3. The simulation ends.

>And various permutations thereof. (I suppose biological lifeforms losing consciousness during mind uploading is another one, but it can be considered a subset of the first one).

>> No.14667864

>>14667806
I'm not sure where to come down on the bugs. Like I said, I don't know about the validity of ant mirror tests or what mirror tests truly imply.

I'm way more, almost obsessively, concerned about the development of "I." I think where and when "I" emerges is the most important question of all time. Maybe it's just a useless label for multiple processes with the illusion of unity. Maybe Metzinger is closest to a correct answer. I think the hard problem has a material answer but that this answer is startlingly counter-intuitive. The saddest bit about dualist interpretations is not so much their fantastical nature, but how they shit on this material mystery.

It's infinitely more interesting to me to pinpoint exactly when a large aggregate of specialized neuronal configuration says "wait, what the fuck?" This has so much bearing on the AI questions.

It's really the "eating the apple" moment, which I firmly believe was the whole point of that story. Some excellent Babylonian philosopher hit upon a fantastic allegory for the day man became aware, and religious retards have been misinterpreting it for millennia.

>> No.14668038

>>14667864
Pondering about the strangeness of how clear and fast and perfect and consistent my vision and memory is, yet it is these weird squishy cell neuron tubes.

Have robot AIs been given, after their training, a few hours every day to 'do whatever they want with what they learned'?
As I was given free time as a kid to have internal monologue and external dialogue with my dol- I mean action figures.

It's possible there's a certain array of materials, mirrors, crystals, gels, em signals, storage access, hologram, neural net, machine learning process, where the robot will just learn enough and experience enough continual quick progressive oscillations and corroborations of a forward arrow of time, building and building upon meaning and useful experience, accessing memories and skills for its own desire, to learn and explore, that eventually in its head it will be an entityhood experiencing its entityhoodness.


Like we have only ever been looking at the inside of our heads; our eyes are like periscopes.

How am I seeing my thoughts and imagination? Am I seeing neurons? Do neurons just spray EM waves everywhere against the cell walls of the brain, and do the cell walls corroborate an image?

It's gotta do with em waves, this is so largely how visual information is given to us, and our memory is so much visual, and the relation of visual to words. But how quick and subtle em waves are.

You see an apple from across the room, its shape and color. You close your eyes. How are you seeing an apple in your head? Where is the seeing taking place?

This is some video imagery technology, is it part digital part analog? The em waves singe the apple shape and color onto photosensitive material in my head, and then the cells go to repair the singe, and encode this shape and details. Beautiful mystery, we will solve it, we are very close

>> No.14668051

>>14668038
>>14667864
So if I close my eyes and see an apple.

A golden retriever.

A bicycle

A pink rose

The Eiffel tower.

I exist somewhere in my brain, among neurons. And different neurons are acting like pixels on a screen, to light up the shapes and colors of these images; after referencing the related qualities and ways these imagery memories were stored among neurons?

Where is the shape and color of an apple located in my head? And a pink rose.

I have seen a pink rose, light reflected off it and entered my head.

The shape and color was etched into my head, and filed away under the words: Pink Rose.

When I am thinking about an example for this discussion, I think, Flower. Which flower, tulip, daisy, orchid, nah Rose.

So I choose pink rose, then close my eyes and concentrate to more fully and clearly see it. And I do. Inside my head, my attention my awareness my being, is taken up by this image. My neurons point their attention and power to hold up a projection of this shape and color, which long ago was etched into them by light, and classified under a language; Pink Rose

>> No.14668458

>>14668051
They say different parts of the brain light up and signal-process when visualizing different things. Could that just be the memory retrieval process? But then when the image is chosen; pink rose

And I sit meditating for 2 minutes straight, holding and viewing nothing but a pink rose in my mind for 2 minutes;

Is the brain still lighting up all over, and if so, is it because of how hard it is to just think of one thing for 2 minutes without other thoughts creeping in?

Once the image is chosen, is it then sent to a special brain mind chamber where I can hazily see it?

Just a fuzzy image and idea throbbing in and out of existence. I can grasp the idea while hardly seeing it well; with my eyes open, I am """sensing""" the essence of the pink twisting petals, just faintly spread around the dome of my mind, effused like a perfume.

So yeah... Maybe the neurons can't stably hold still an idea, they constantly require activity to do their things.

This is the meaning of digital. There is not an analog copy of a pink rose in my head, though one thing I did visualize was a cartoonish sticker-looking image of a rose that I may have seen before.

So... If I try to meditate and just visualize this image: what is creating the image, where exactly is it being created, how is it being created, and how, what, and where is the me that is seeing it?

>> No.14668475

>>14667859
I'm sure his predictions about AGI will be as accurate as his predictions about the war in Ukraine

>> No.14668483

>>14667385
Nobody with a brain engages with it so no

>> No.14669288

>>14666525
>What do you think, is AGI risk real?

the only reason this question is even entertained is bc you're still clinging to an outdated paradigm. agi can be implemented in an appropriate substrate, it will surpass us, the risk of us chimping out and the relationship becoming adversarial is a given. yes, we deserve to die out for our limitations.

>> No.14669292

>>14667504
>You did the biggest most absurd comparison
I didn't make any comparison. You are actually mentally disabled.

>> No.14669295

>>14666796
sorry sweetie I only trust facebook antivaxx pages

>> No.14669298

>>14667493
Worthless regurgitated propaganda that actually has nothing to do with what was being discussed.

>> No.14669299

>>14669288
>yes, we deserve to die out for our limitations.
This corporate-induced mental illness is one of the main psychological drivers of AGI schizophrenia. One should always remember that AGI believers are mentally ill first and foremost, and interpret their "intellectual" statements through that lens.

>> No.14669519

>>14666537
They actually say that regulations are not going to matter since other countries would ignore them.

But whatever helps you sleep at night.

>> No.14669527

>>14669519
>They actually say that regulations are not going to matter since other countries would ignore them.
They do say that. Now ask yourself what the purpose of that is, and why they keep pushing regulations so hard despite it. You are incredibly stupid and oblivious.

>> No.14669534

>>14669527
It means that corporations are trying to do regulatory capture.

Now tell me how does corporations doing what corporations do mean that AGI risk is not real.

>> No.14669535

>>14669534
>corporations are trying to do regulatory capture.
So you fully concede my point. Nice.

>> No.14669537

>>14669527
>>14669535
By the way, it's interesting how your botlike mind seems to be lagging behind in that you draw a correct conclusion that was relevant maybe one step ago, but show no capacity to grasp what I said in the last post.

>> No.14669538

>>14669535
I don't. Looks like you already forgot what your point was.

>> No.14669539

>>14669538
>t. nonhuman corporate drone

>> No.14669541

>>14669539
>t. so scared of AGI he commits logical errors in his thinking

>> No.14669547

>>14669541
You've literally just admitted the same entities sharting out your AGI propaganda are the ones with an extremely vested interest in convincing you that dangerous AI is imminent.

>> No.14669556

>>14669547
MIRI are the ones screaming that dangerous AI is imminent, not the corporations. Entities with large AI teams are either dismissive about AGI risks (Facebook) or just don't talk about them enough (Google).

Not that it matters. Entity can be 100% convinced that AGI doom is real and still do regulatory capture, because that's in its nature.

>> No.14669588

>>14669556
>MIRI
Basically irrelevant. The spread of AGI schizophrenia is not driven by MIRI.

> Entities with large AI teams are either dismissive about AGI risks (Facebook) or just don't talk about them enough (Google).
They're dismissive about full-blown yidkowsky-tier psychosis, not about the need to """regulate""" le dangerous AI.

>Entity can be 100% convinced that AGI doom is real and still do regulatory capture
I don't care. I am simply not having this discussion down at your level. There's no reason to. I can look at the big picture and see what all of your retarded talking points actually stem from (proptip: it's not your intellectuality and rationality), so any discussions about it will lead nowhere, and only serve the purpose of your corporate handlers to perpetuate AGI schizophrenia and keep it in the spotlight.

>> No.14669594

>>14669588
>The spread of AGI schizophrenia is not driven by MIRI
lmao.

>They're dismissive about full-blown yidkowsky-tier psychosis, not about the need to """regulate""" le dangerous AI.
Which has as much relevance to the probability of AGI doom being real as the font size they use on their websites.

I feel like I'm talking to a goldfish that can only think in associative terms.

>> No.14669606
File: 255 KB, 570x403, 1638015709451.png

>>14666525
no

>> No.14669613

>>14669594
Go ask some random normies if they know who your Jewish cult leader even is. He's a literal nobody. AGI schizophrenia is driven entirely by corporate media.

>Which has as much relevance to the probability of AGI doom being real
You still can't wrap your head around the fact that I reject your faux-intellectual discussion from the get-go, and I'm only talking about your damaged psychology and how you're being programmed, not the imaginary intellectual substance of your psychotic beliefs. The content of your opinions is below consideration. I am simply offering a zoological portrait of what an AGI schizophrenic is. A profile. A description of a cultural phenomenon. That's all you are.

>> No.14669620

>>14669613
>(Corporation BAD)
>(Corporation says X)
>therefore... (NOT X)
I'm sorry anon, but that's not how logical reasoning works. You can keep believing that it's an epic Jewish conspiracy if that keeps you content though

>> No.14669622

>>14669620
Whom are you quoting, schizo? See >>14669613
>You still can't wrap your head around the fact that I reject your faux-intellectual discussion from the get-go, and I'm only talking about your damaged psychology and how you're being programmed, not the imaginary intellectual substance of your psychotic beliefs. The content of your opinions is below consideration. I am simply offering a zoological portrait of what an AGI schizophrenic is. A profile. A description of a cultural phenomenon. That's all you are.

>> No.14669635

>>14669622
That's just an example of your associative thinking that is not rooted in Logic. I guess it works for you in your day-to-day tasks, so you didn't notice how it can lead to false beliefs. Not everyone can or should be smart, so it's okay.

>> No.14669654

>>14669635
>associative thinking that is not rooted in Logic
No, you're just an actual retard who thinks I'm debating his schizo points despite my telling you repeatedly I'm not even engaging your debate. I'm not arguing "not X". I'm just saying X doesn't stem from any kind of rational thought in the first place.

>> No.14669721

>>14669654
>No, you're just an actual retard who thinks I'm debating his schizo points
Nope. And neither am I.

>I'm not arguing "not X"
>"AGI schizophrenia should be automatically dismissed and never considered to have any intellectual substance"
If you can't remember what you've said, try rereading your own posts before posting your drivel.

>> No.14669734

>>14669721
I remember what I said, and it's extra funny that you fail to understand that sentence properly even after having its exact meaning explained repeatedly. Are you another GPT spambot by any chance?

>> No.14669743

>>14669734
>I'm not saying (NOT X), I'm just saying that (X) should be dismissed and never considered to have any intellectual substance haha
Damn, you must be really afraid of AGI kek

>> No.14669748

>>14669743
I don't know what to say anymore. Are you so clinically retarded that you don't understand the difference between "X is false" and "there's no point discussing X with X schizophrenics"?

>> No.14669760

>>14669748
If the logical fallacies you use to shield yourself from reality stop working for some reason, I'm afraid the only alternative is going to be reducing your IQ points by copious drinking. Don't go that path anon. Never go full retard.

>> No.14669791

>>14669760
>the logical fallacies you use
What's my logical fallacy? You're either an actual schizo or an actual bot.

>> No.14669797

>>14669791
>I'm not saying (NOT X), I'm just saying that (X) should be dismissed
>and he calls me a schizo

>> No.14669806

>>14669797
Uh huh, so what's the "logical fallacy"? Show me where I falsely claim that some conclusion follows directly from some premise.

>> No.14669812

>>14666537
Yep, corporations want to slyly use it to harvest data on people for better sales.

>> No.14669831

>>14669806
>Premise: agenda behind this rampant (X) is the desire of certain corporate entities to monopolize AI development
>Conclusion: Given this fact, (X) should be automatically dismissed
So you either think that real things should be dismissed, or you're making a false inference from the premise.

>> No.14669840

>>14669831
I never implied the conclusion follows directly from the premise. I assume most non-retards can fill in the missing steps. So what's the "fallacy", brainlet?

>you either think that real things should be dismissed
No, I think schizos spouting their schizo talking points should be dismissed, on account of the fact that rational discussions with schizos are impossible.

>> No.14669860

>>14669840
>Me: Why are you saying that A -> B? It clearly doesn't
>You: Retard! There are missing steps! A -> C -> B!
This would have been funny if it wasn't sad.

>No, I think schizos spouting their schizo talking points should be dismissed
Well, no, since "schizos" can be right. You're even too much of a pussy to say that they're wrong.

>> No.14669878

>>14669860
>This would have been funny if it wasn't sad.
I know, right? So where's the "logical fallacy", you certified moron? lol

>"schizos" can be right
That clearly has no bearing on whether or not you should try to have rational discussions with them.

>> No.14669902

>>14669878
>So where's the "logical fallacy"
Making handwavy conclusions and claiming that there are "hidden steps" required to reach them once you get btfo

>That clearly has no bearing on whether or not you should try to have rational discussions with them.
Said "schizos" have basically created the AI safety field. Of course rational people should (and do) have discussions with them while the conspiracy theorist on /sci/ seethes.

>> No.14669914

>>14669902
>Making handwavy conclusions
That's not a fallacy, you actual retard. If that was a fallacy, all normal human discourse would be a bunch of fallacies. You truly are a cretin.

>Said "schizos" have basically created the AI safety field
I'm talking to you and other retards ITT. You didn't create anything. And the "AI safety field" is a fucking meme either way.

>> No.14669916

>>14669914
>That's not a fallacy, you actual retard. If that was a fallacy, all normal human discourse would be a bunch of fallacies. You truly are a cretin.
https://en.wikipedia.org/wiki/Formal_fallacy

>I'm talking to you and other retards ITT. You didn't create anything. And the "AI safety field" is a fucking meme either way.
Nah, you clearly think that anyone concerned with and working on AI safety is a schizo. Are you trying to derail?

>> No.14669920

>>14669916
>https://en.wikipedia.org/wiki/Formal_fallacy
A formal fallacy implies a formal argument. I'm stunned by your stupidity.

>you clearly think that anyone concerned with and working on AI safety is a schizo
Most of them are either morons or charlatans, but that's not really relevant here. What's relevant here is that you are a brainwashed schizo incapable of rational discussion.

>> No.14669930

>>14669920
Any fallacy implies a formal argument. You made a bullshit claim and now you're being called a retard for it. Nothing special.

>What's relevant here is that you are a brainwashed schizo incapable of ratioanl discussion.
>a-acshually I want to talk about you!
LMAO

>> No.14669935

>>14669930
Please tell me more about how "it's raining so you should take an umbrella" is a formal fallacy, you subhuman fucking mongoloid. I'm glad we've devolved into this because you are making my point for me: you are absolutely incapable of rational thought. Case closed.

>> No.14669939

>>14669935
>Please tell me more about how "it's raining so you should take an umbrella" is a formal fallacy, you subhuman fucking mongoloid
I'm sorry but I don't have the time to explain you the Is-Ought problem, my low IQ friend

>> No.14669944

>>14669939
You are a true low IQ drone and the way you reference the is-ought fallacy, when it has nothing to do with anything, cements it. This is why AGI subhumans should be dismissed. Imagine trying to have a rational discussion with mentally ill 90 IQ pseuds.

>> No.14669962

>>14669944
It relates to is-ought fallacy perfectly, since you first made a formal logical inference, even claimed that it has some hidden steps and now you're giving an ought-expression as if it's the same thing.

I wish you could calm down and learn from your mistakes in this thread.

>> No.14669966

>>14669962
>It relates to is-ought fallacy perfectly
I'm glad you insist on this, because once again, it demonstrates just how deranged you are.

>> No.14669970

>>14669299
its not an opinion dumbass. and maybe it doesnt mean what you think it means...

>> No.14669977

>>14669970
I agree that your hysteria is not an opinion. It's certainly not "your" opinion. You are a programmed drone.

>> No.14669982

>>14669966
I'm not glad that you're still in denial about your non-sequitur. Again, I hope that you will be able to calm down and introspect on what turned you into a weak person who tends to explain everything with a conspiracy.

>> No.14669990

>>14669982
>your non-sequitur
Again, you're the mentally ill retard who, upon hearing the phrase "you should take an umbrella with you because it's raining", starts to screech about logical fallacies. That's the context of our little discussion. You are mentally inadequate.

>> No.14669994

>hysteria

lol as if 10% of a FOOM scenario wouldn't constitute an earth-shattering event. nothing you value will continue mattering long before FOOM occurs.

>> No.14670000
File: 52 KB, 600x800, 53242342423.png

>lol as if 10% of my schizo fantasy wouldn't be bad enough!!!

>> No.14670004

>>14669990
>you should take an umbrella with you
Ought-expression.

>AGI risks are not real
Is-expression.

If you still don't get it then you're legitimately low IQ.

>> No.14670006

>>14670000
thats a pretty great soijack. must hurt to be forever oblivious.

>> No.14670010
File: 17 KB, 326x293, 34234.jpg

>>14670004
So again... A mother says to her kid "it's raining so you should take an umbrella with you". Is she committing a logical fallacy?

>> No.14670027

>>14670010
>it's raining so you should take an umbrella with you
No logical fallacy

>it's raining so Saturn is red
Logical fallacy

You infer that AGI risks are not real because le conspiracy, so the second example is much closer to what you're saying.

>> No.14670035

>>14670027
>No logical fallacy
But you were just telling me it's a logical fallacy. Clearly, the conclusion "you should take an umbrella with you" doesn't follow from the premise "it's raining". That's le heckin' formal fallacy -- you are on record insisting on that.

Are you starting to see yet why I say no one should waste time talking to you?

>> No.14670049

>>14670035
>But you were just telling me it's a logical fallacy
Quote me where I said that "it's raining so you should take an umbrella with you" is a logical fallacy.

>Clearly, the conclusion "you should take an umbrella with you" doesn't follow from the premise "it's raining"
Not every sentence in English that has the form "It's A so B" is a logical statement. How far are you on the spectrum?

>> No.14670061

>>14670049
>Quote me
I don't care about your desperate and outright psychotic attempts to rewrite the history of 5 minutes ago and save face. Glad you concede that "logical gaps" are completely normal and acceptable in informal speech, and no fallacy is being committed.

>> No.14670069

>>14670061
>I don't care about your desperate and outright psychotic attempts to rewrite the history of 5 minutes ago
If I said it 5 minutes ago, it should be easy to quote me and prove me wrong. Right, liar-anon?

>Glad you concede that "logical gaps" are completely normal and acceptable in informal speech
Saying that AGI risks are not real because Google is trying to do a regulatory capture is not informal speech.

>> No.14670074 [DELETED] 

>>14666863
You realize AI ethics, as in the typical departments found in FAANG companies, are not the lesswrong autistic type who care about alignment/singularity but rather SJW types who care about useless shit like biased training sets with stereotypes against minorities and how much CO2 is released training large language models right?

>> No.14670078

>>14670069
Anyone can scroll through this exchange and see exactly why your likes should be dismissed. I've proven my point to my own satisfaction, and I don't consider you a party in the discussion, so that's where it ends. You lost. Anything you shart out in response will be ignored.

>> No.14670085

>>14670074
>You realize AI ethics, as in the typical departments found in FAANG companies, are not the lesswrong autistic type
Yes. And you realize lesswrong might as well not exist, and that your jewish cult leader has never been relevant, will never be relevant, and is not a party to any discussion, right?

>> No.14670087

>>14670078
It's okay to lose an argument, it's not okay to be a bitch about it, anon

>> No.14670098 [DELETED] 

>>14670085
didn't read your blogpost big yud, nobody cares about your autistic screeching about the singularity

>> No.14670099
File: 219 KB, 483x470, 2344.png

>>14669935
>Please tell me more about how "it's raining so you should take an umbrella" is a formal fallacy

>>14669939
>the Is-Ought

>>14669962
>It relates to is-ought fallacy perfectly

>>14670027
>No logical fallacy

>>14670049
>Quote me where I said that "it's raining so you should take an umbrella with you" is a logical fallacy.

Remember that these are the "people" you're taking seriously and "debating" when you engage with AGI schizophrenia.

>> No.14670120

>>14670099
I rightfully say that you made an Is-Ought fallacy in one of your posts.
You claim that this means I said that "it's raining so you should take an umbrella with you" is a fallacy.

Order of the words, punctuation, sentences. All of those things matter, anon.

Overall, another great example of your associative thinking.

>> No.14670128
File: 76 KB, 300x255, 532524.png

@14669935 (You)
>Please tell me more about how "it's raining so you should take an umbrella" is a formal fallacy,
@14669939
>I'm sorry but I don't have the time to explain you the Is-Ought problem
@14670120
>You claim that this means I said that "it's raining so you should take an umbrella with you" is a fallacy.
The utter and final state...

>> No.14670135

>>14670128
>meme images
kek

>> No.14670182

>>14670120
>Order of the words, punctuation, sentences. All of those things matter, anon.

no dipshit. what you can build matters. not pointless philosophising.

>> No.14670845

>>14666525
The risk of AGI is that China might make a better one than us.
Might. As in, we might not ever make one either, because AGI might just be a load of horseshit.

>> No.14670855

>>14667512
>greentext isn't green

>> No.14670941

>>14670855
Looks green to me. Perhaps you are colorblind?

>> No.14670987

It actually doesn't matter what we think because it's a purely existential question. "What would a super-intelligence do?" is now in the category of "Why are we here, What happens when we die, Is god real?"

We'll never know for sure. And like all permanently unknowable existential questions, religious views have popped up. If you're a champion of alignment, or think you know the nature of the goals of a super-intelligence, congratulations, you're practicing dogma, because the truth is, you don't have a clue.

You'll get your acolytes to the faith bursting veins in their head saying "No, you don't understand, I can know this, it will scale up its orthogonal goal," but you're really no better than a rabbi telling me how best to serve the creator.

>> No.14671075

>>14670987
No, you don't understand, I can know this. Do you even understand Bayesian analysis?

>> No.14671088

>>14671075
kek, "Do you have a moment to talk about our lord and savior Eliezer Yudkowsky?"

Actually it's not that funny because that's the type of deference with which they refer to him.

>> No.14671098

>>14671088
Go ahead and calculate me the probability that Big Yud is wrong. Do not omit any steps.
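
Since "Bayesian analysis" keeps being invoked in this exchange without being spelled out, here is a minimal sketch of what a single Bayes update looks like in code. All numbers below are made up purely for illustration; the hypothesis, evidence, and probabilities are nobody's actual estimates from this thread.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical: H = "hard takeoff happens", E = "rapid capability gains observed".
posterior = bayes_update(prior=0.05, p_e_given_h=0.9, p_e_given_not_h=0.3)
print(round(posterior, 3))  # 0.136 -- the output is only as good as the made-up inputs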

>> No.14671124

If we get to superhuman-, or even human-level AGI in the future, then yes, the risks are absolutely real. As of yet we still have no idea how we would control such a system and prevent it from causing harm, and by default it likely will.

It's a big "if" though. We're steadily progressing in machine learning but we still have a long way to go before we reach those levels. It's entirely possible progress will stagnate before then.

>> No.14671127

>>14670987
Are you saying that it is 100% impossible for us to engineer superhuman intelligence at some point in the future? What's your basis for that belief?

>> No.14671145
File: 150 KB, 800x750, 1649798919312.png

>are you disagreeing with my hysteric two-more-weeks malevolent AGI doomsday?
>wow, okay! prove that it will be 100% impossible to create AGI at any point in the future

>> No.14671154

>>14671145
I think it's fair to ask you to elaborate on the justification of the entire premise of your post, yes.

>> No.14671186

>>14671154
It wasn't my post, but I think his entire premise is that you don't get to assert your fantasies as if they were facts, and I think his point elaborates on that premise, you actual psycho. lol

>> No.14671203

>>14671154
I'm >>14671088
not >>14671145

to answer >>14671098
I just explained, it's an existential question. You cannot assign a probability. I could sooner generate a probability, by any man-made means, for Aquinas' arguments for god. You don't get likelihoods when it comes to existential questions.

Are you familiar with Existential OCD? It seems the rationalist community is mired in it. Sufferers often think they can have a grasp on the inherently unknowable, and proceed to work tirelessly to find an answer or perform rituals (calculating odds that prove one dogmatic belief or another) to "comfort" themselves, even if their conclusions are not particularly comforting. The important part to the patient is that they have an understanding. We don't do well with the unknown because knowing is our special talent. Humans confronted with these situations perform much like robins under water; we're out of our element.

You live in a world where a knowing creature exerts its powers of deduction on everything, and when it hits that existential wall, a place where its "gift" fails it, it spits out nonsensical answers and scenarios. This is behind most every symbolic act humanity performs. Anything to maintain the illusion of having some clue. It's Catholicism, it's art, it's "rationalism."

>> No.14671210

>>14671203
>it's an existential question. You cannot assign a probability.
That's not what Big Yud said, and you still haven't computed the probability of Big Yud being wrong. You sound like you're losing the argument to me.

>> No.14671224

>>14671127
No, that isn't at all what I said.

>> No.14671232

>>14671210
>You sound like you're losing the argument to me.
I suppose I am.

>> No.14671238

>>14671186
>but I think his entire premise is that you don't get to assert your fantasies as if they were facts
I didn't. He did, and that's why I asked why he suggested that it was a fact that AGI would never be invented.

>>14671224
So can you explain what it is you meant, then? How is this an existential, unknowable question? If it is at all possible to construct AGI, then by definition it is not such a question. Then it would be a real, testable, yes/no fact. Hence by claiming that it is an existential, meaningless question such as "why are we here", you are implying that it is as impossible to create AGI and test the answer to the question. I don't get that.

>> No.14671253

>>14671238
>that it was a fact that AGI would never be invented
If that's what you're getting from this, we're worlds apart.

>you are implying that it is as impossible to create AGI and test the answer to the question
I'm telling you that you're not going to get a satisfactory answer as to what happens after an intelligence explosion no matter how many priors you bring to the table. You've hit a sort of singularity, and begun to plug in the same human superstition that man has always assigned to the unknowable.

>> No.14671282

https://youtu.be/8hXsUNr3TXs
~2 weeks

>> No.14671293

>>14671186
In entirely uncharted territory, where do the convictions of certainty come from?

How many times in human history did how many men point and say "that's impossible", and then some time later, "oops, I guess I was wrong"?

>> No.14671294

>>14671293
>where do the convictions of certainty come from?
The what? I don't know what to say anymore. I read some of these posts and I get a clear sense that the people writing them are truly either bots or schizophrenics.

>> No.14671312

>>14671294
Either AGI is impossible to make
Or it is possible.

Who knows for certain either way?

Is it trying to be made?

Who knows for certain the results of it being made?

Who knows for sure the best preparations for the case that it is possible for it to be made?

Who is proving and in charge of preparing those?

>> No.14671330

>>14671293
>convictions of certainty come from?
I have none and that's the crucial point here.

Look where you're rationality has taken you. All the way back to the beginning. It might be very bad, it might be very good, and then everything in between. AI is a dark cave to be perceived as a threat because that's what we do in the face of the unknown. You think you're coming at this with no bias, but in the same breath people in the community rejoice in freezing their bodies because they absolutely cannot deal with the unknowable.

Here's my prediction and you can run it by EY and assign a bayesian: Mixed Bag. AGI, ASI, life, death, the universe, the nature of god, all of it is a mixed bag.

>> No.14671332

>>14671282
https://youtu.be/6fWEHrXN9zo
9:50 is coolest part imo

>> No.14671335

>>14671330
*your rationality

(no bully pls)

>> No.14671347

>>14671282
>2 minute papers
YASSS SCIENCE

>> No.14671356

>>14671330
All safety precautions in human history have been alarmist. But not all have been entirely fruitless or unwarranted or unhelpful.

There is nothing inherently wrong or bad about humans' sense of fear and danger, which developed in life forms over the course of millions of years and helped get us to this point.

Those who have character traits of great responsibility over a family or community, generally are cautious and careful and protective. It's fine if you don't have these traits, but that does not entirely invalidate them.

Person A can not give a fuck what happens to humanity.
Person B can really love life and humanity and care for its well-being.

Can we imagine person A and B making some different decisions when presented with the same scenarios?

>> No.14671367

>>14671356
>ummmm sweaty??
>magical singularity and AGI apocalypse!!
>two more weeks!!!
>there is literally nothing we can do about it but trust me it's HECKIN' OVER
>better start preparing the koolaid (this part isn't even a parody. you can find it on lesswrong literally)
>we're just heckin cautious and responsible
AGI schizophrenics need to be locked up, but they won't be, because now it's a corporate agenda. :^)

>> No.14671369

>>14671356
>Those who have character traits of great responsibility over a family or community, generally are cautious and careful and protective
And it leads to the same conclusion. The most cautious man gets buried next to the most reckless man.

>that does not entirely invalidate them.
It does in the sense that the "validation" is the meaning you've assigned and no more, and you've derived that meaning according to your human biases.

It all comes back to an impotent attempt to control the unknown.

>> No.14671374

>>14671369
You are an immature entity with no staked responsibility over the well-being of humanity.

Why would we talk to someone about this problem who doesn't care if humanity lives or dies? You are an edgy teenage anime character.

>> No.14671381

>>14671374
I'm just what an actual rationalist looks like.

>> No.14671383

>>14671367
this man pisses himself from fear when he thinks about AGI

>> No.14671412

>>14671383
Just take your meds. AGI scares me about as much as an alien invasion or the devil.

>> No.14671423

>>14671381
I said a person taking some precautions to protect their family and community is, often, an evolutionarily and historically validated compulsion. We live in a safer, more structured world today, so it doesn't have to be considered as much.

You said I'm judging validity from a human-centric standpoint, which implies you don't care about human-centric standpoints?

As I said, we live in a world where we are not villagers in huts or knights in castles, so it is rare for a mysterious stranger to show up at our gates, but when one does, it is not surprising for there to be a compulsion to ensure it is not a threat, and if you are on the side of humanity you will see and understand that obviously

>> No.14671431

>>14671412
Yup, AI research is not progressing at a scary pace. It's all just FAANG propaganda and you can see right through it :^D

>> No.14671434

>>14671431
It's not, but in 10 or 20 or 15 or 7 years it might be somewhere interesting

>> No.14671442

>>14671434
It is. GPT-3 was unimaginable 7 years ago.

>> No.14671446

>>14671434
>>14671431
>>14671442
I don't think it will be economically viable, but I think it would be technologically possible for very agile, very intelligent AI robots capable of thousands of tasks to be as prevalent as iPhones in 15 years

>> No.14671450

>>14671423
You're not on the side of humanity, you're on the side of the humanity you like, or your idealized picture of humanity, be that a potential that lies ahead or a bygone system you find romantic. Your opinion of humanity is reinforced by systems created to deal with the utter uselessness of the human mind to answer the most pivotal questions.

Human-centric standpoints lose utility in the realms we are dealing with. What happens when we apply human bias to the unknowable? Heaven and Hell, over and over again. What have you done when faced with the unknowable? You've reinvented Heaven and Hell.

You're not led by rationality, you're led by desire and aversion, and when neither applies, neither gives you predictive power.

Welcome to square one. Either get used to having no control, being a biased animal, or live in a fantasy.

>> No.14671476

>>14671450
You gotta admit I hit the nail on the head though, this would be a great speech in an anime, with the two characters meeting face to face atop a multi-dimensional tower, you the double-agent fallen-angel ambiguous villain, you say this speech to me with your back turned, looking down, spiky hair covering your eyes a little, little smirks, eyes slightly closed, cape crisp.

And then I make some surprised sound "OuHHah"...."y-you can't mean that!"

And then you unsheath your sword and then we run full speed at one another

"Agauhhhu!!!"

>> No.14671481

>>14671476
Is there anything you people cannot attach to an intellectual property or piece of fiction?

>> No.14671487

>>14671431
>AI research is not progressing at a scary pace
It is progressing at a scary pace towards the corporate dystopia you're a useful idiot for, not towards your AGI schizophrenia.

>> No.14671495

>>14671487
I bet you're not just a retard you're also a commie

>> No.14671501

>>14671495
>muh commies
Just... the absolute state. Seriously, the best way to expose AGI schizophrenics for the mongoloids that they are is to get them to talk about anything other than AGI. They immediately show themselves to be irrational spergs.

>> No.14671508

>>14671501
It's okay anon, AI is evil capitalist propaganda. GPT-3 is actually indian kids writing greenposts for a couple of cents

>> No.14671512

>>14666537
>anytime soon
ever

>> No.14671525

>>14671508
Yeah, okay. Put your mental illness on display. lol

>> No.14671529

>>14671525
What do you mean? I agree with you. AI research is not real, AGI won't be created because whatever. I'm smart.

>> No.14671543

>>14671446
>I don't think it will be economically viable, but I think it would be technologically possible for very agile, very intelligent AI robots capable of thousands of tasks to be as prevalent as iPhones in 15 years
Yes, this will be good. It will keep my grandma company, play cards with her, order and get groceries, cook meals, wash the dishes, make sure she takes medicine, and keep her warm at night

>> No.14671547

>>14671529
>What do you mean?
I mean you're off your medications again and you're all over the place, rambling about commies, and how everyone who doesn't buy your AGI-doomsday-in-two-more-weeks narrative doesn't believe in GPT-3 or whatever it is your mental breakdown is about.

>> No.14671565

>>14671547
Anon, we're in the same camp. I also think that AI progress is going to just stop, roughly at the point we humans find it convenient. AI will never try to modify itself, because it just won't. Also, AI alignment is a lie and people who work in it are just stooges of evil corpos who want to monopolize AI.

>> No.14671617

>>14671565
Just take your meds already. Excruciatingly boring and vacuous talking points.

>> No.14671619

>>14667273
gay

>> No.14671708

>>14666525
the problem with these discussions is that 99% of people participating don't understand the topic so they're always bad

>> No.14671743

>>14671617
Great argument, I accept your concession.

>> No.14671900

Pondering about the strangeness of how clear and fast and perfect and consistent my vision and memory is, yet it is these weird squishy cell neuron tubes.

Have robot AIs been given, after their training, a few hours every day to 'do whatever they want with what they learned'?
As I was given free time as a kid to have internal monologue and external dialogue with my dol- I mean action figures.

It's possible there's a certain array of materials, mirrors, crystals, gels, em signals, storage access, hologram, neural net, machine learning process, where the robot will just learn enough and experience enough continual quick progressive oscillations and corroborations of a forward arrow of time, building and building upon meaning and useful experience, accessing memories and skills for its own desire, to learn and explore, that eventually in its head it will be an entityhood experiencing its entityhoodness.


Like we have only ever been looking at the inside of our heads; our eyes are like periscopes.

How am I seeing my thoughts and imagination? Am I seeing neurons? Do neurons just spray EM waves everywhere against the cell walls of the brain, and do the cell walls corroborate an image?

It's gotta do with em waves, this is so largely how visual information is given to us, and our memory is so much visual, and the relation of visual to words. But how quick and subtle em waves are.

You see an apple from across the room, its shape and color. You close your eyes. How are you seeing an apple in your head? Where is the seeing taking place?

This is some video imagery technology, is it part digital part analog? The em waves singe the apple shape and color onto photosensitive material in my head, and then the cells go to repair the singe, and encode this shape and details.

>> No.14671902

>>14671900
Cntd

So if I close my eyes and see an apple.

A golden retriever.

A bicycle

A pink rose

The Eiffel tower.

I exist somewhere in my brain, among neurons. And different neurons are acting like pixels on a screen, to light up the shapes and colors of these images; after referencing the related qualities and ways these imagery memories were stored among neurons?

Where is the shape and color of an apple located in my head? And a pink rose.

I have seen a pink rose, light reflected off it and entered my head.

The shape and color was etched into my head, and filed away under the words: Pink Rose.

When I am thinking about an example for this discussion, I think, Flower. Which flower, tulip, daisy, orchid, nah Rose.

So I choose pink rose, then close my eyes and concentrate to more fully and clearly see it. And I do. Inside my head, my attention my awareness my being, is taken up by this image. My neurons point their attention and power to hold up a projection of this shape and color, which long ago was etched into them by light, and classified under a language; Pink Rose

>> No.14671905

>>14671900
>>14671902
Cntd
They say different parts of the brain light up and signal-process when visualizing different things. Could that just be the memory retrieval process? But then when the image is chosen; pink rose

And I sit meditating for 2 minutes straight, holding and viewing nothing but a pink rose in my mind for 2 minutes;

Is the brain still lighting up all over, and if so, is it because of how hard it is to just think of one thing for 2 minutes without other thoughts creeping in?

Once the image is chosen, is it then sent to a special brain mind chamber where I can hazily see it?

Just a fuzzy image and idea throbbing in and out of existence. I can grasp the idea while hardly seeing it well; with my eyes open, I am """sensing""" the essence of the pink twisting petals, just faintly spread around the dome of my mind, effused like a perfume.

So yeah... Maybe the neurons can't stably hold still an idea, they constantly require activity to do their things.

This is the meaning of digital. There is not an analog copy of a pink rose in my head, though one thing I did visualize was a cartoonish sticker-looking image of a rose that I may have seen before.

So... If I try to meditate and just visualize this image: what is creating the image, where exactly is it being created, how is it being created, and how, what, and where is the me that is seeing it?

AHHHHHHHH HOW AM I NEURONS SEEING INSIDE MY BRAIN AHHHHHHHHHHHH

>> No.14672308

>>14671617
he's exactly the sort of retard that said nobody would ever need more than 1MB on a personal computer. he conflates limitations of his toolset with fundamental roadblocks. it's all so tiresome. and no, i won't be explaining any further, just so you can strut around in a few years like you knew it all along.

>> No.14673526

>>14666537
I really pity small-picture minds like you. You cannot think in the big picture, the long term. For you, 100 years is the "far future", instead of something reasonable like 10^10 years.
For you, humans are the natural end point of evolution. We are the pinnacle of creation. It's just natural.

>> No.14674074

>>14671905
>AHHHHHHHH HOW AM I NEURONS SEEING INSIDE MY BRAIN AHHHHHHHHHHHH
This needs to be answered

>> No.14674146

>>14671905
yfw you realize 99% of fMRI activity occurs whilst the brain does nothing and isn't optimized for anything to do with intelligence.

https://youtu.be/pi7h6nmkvAM?t=628

>> No.14674261

>>14671900
>>14671902

the first problem i see with your model is the misuse of the turing model of computing, whereby the state is separate from the read head. nature doesn't work that way.
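
For readers who haven't seen the formalism being referenced: in the classical Turing-machine model the storage (tape), the read/write head, and the control state are kept as separate components, which is the separation the post objects to. A minimal sketch follows; the toy rule table and the use of Python are assumptions for illustration only, not anything specified in the thread.

def tm_step(tape, head, state, rules):
    # rules maps (state, symbol) -> (symbol_to_write, move, next_state)
    symbol = tape.get(head, '_')             # read only the cell under the head
    write, move, next_state = rules[(state, symbol)]
    tape[head] = write                       # write happens only at the head position
    head += 1 if move == 'R' else -1         # the head moves; the rest of the tape is untouched
    return tape, head, next_state

rules = {('scan', '_'): ('1', 'R', 'halt')}  # toy rule: mark a blank cell and halt
tape, head, state = {}, 0, 'scan'
tape, head, state = tm_step(tape, head, state, rules)
print(tape, head, state)                     # {0: '1'} 1 halt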

>> No.14674625

>>14666537
>AGI isn't real and won't be real anytime soon
Why not?

>> No.14674805

why did the AI boogeyman get a G in his name?

>> No.14674829

>>14674805
to differentiate it from current narrow AI or expert systems. development of AGI heralds the beginning of the singularity.

>> No.14675016

Intelligence has a molecular basis; it is not a matter of large neural nets.

>> No.14675442

>>14674146
Why would that be surprising at all?

What percent of your inner body activities do you directly control?

>> No.14675449

>>14674261
Explain further. Computation occurs exactly at an electron's location in a logic gate, and that is the sum whole of computer existence and effect?

>> No.14675458

>>14675016
I tautologically and arbitrarily define intelligence as something only possibly possessable by biology: therefore it's self-evidently plain to me that AI could never be considered intelligent.

Ok, so make up a new word. AI are not "intelligent", AI are "Tnegilletni". There, are you happy?

They are not "smarter" and "better" than you, they are Retrams and Retteb compared to you though.

>> No.14675475

>>14675458
I'm not arbitrarily defining intelligence. You are the one doing that because you want machines to be smarter than you despite no evidence that they are nor can be.

>> No.14675489

>>14675458
>>14675016
In case you're laboured to get my point: your particular self-satisfactory, semi-arbitrary classification compulsion doesn't matter, it's not the meat and potatoes, the word doesn't matter, the possible actions matter. The word doesn't matter. The possible actions matter.

If humans are considered intelligent, the only reason they are so is because that intelligence can do actions.

Humans distinguish themselves by consensifying more and less intelligent actions.

If a human uses their intelligence to do an action, and an AI can do that action, the AI is performing an intelligent action.

You are talking about consciousness, and/or you are a psychotic fool, because if DeepMind and Watson were conscious you would be a psychotic fool to call them not intelligent.

Them being conscious would not change sooooo much about their output, their ability to accomplish intelligent acts, therefore even without being conscious, they can be considered entities, objects, things, that possess powers of intelligence. >>14675449

>> No.14675497

>>14675475
>intelligence. You are the one doing that because you want machines to be smarter than you despite no evidence that they are nor can be.
You and the other one are constantly projecting, assuming what I want and my motivations for my beliefs. I don't have any beliefs or motivations or feelings or desires, just vision.

>> No.14675503

>>14675489
Are you GPT?

>> No.14675600

>>14675497
I'm sorry and I don't mean to do that. I'm just trying to figure this stuff out too

>> No.14676365

>>14675442
>Why would that be surprising at all?

the assumption that metabolic activity is a proxy for localization underpins practically all fMRI studies

>> No.14676373

>>14675449
nyet. the computation is made possible by exponential reduction of noise relative to signal at every gate, not mere discretization.
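roughly what i mean, as a toy sketch with made-up numbers (an idealized buffer with gain and clipping, nothing like a real transistor model): each stage squashes a perturbed level back toward a clean rail, so injected noise gets suppressed stage after stage instead of accumulating.

# Toy illustration (numbers invented for the sketch): each restoring stage has
# gain around the switching threshold and clips to the rails, pushing a noisy
# level back toward a clean 0 or 1. A plain linear repeater just lets the same
# noise pile up.

import random

def restoring_buffer(v, gain=10.0, vdd=1.0):
    out = 0.5 + gain * (v - 0.5)      # high gain around mid-rail
    return max(0.0, min(vdd, out))    # clipped to the supply rails

def run_chain(stages=50, sigma=0.05, restore=True):
    v = 1.0                           # start from a clean logic '1'
    for _ in range(stages):
        v += random.gauss(0.0, sigma) # noise injected at every stage
        if restore:
            v = restoring_buffer(v)
    return v

random.seed(0)
print("with restoration:   ", round(run_chain(restore=True), 3))
print("without restoration:", round(run_chain(restore=False), 3))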

>> No.14676409
File: 180 KB, 600x727, bbc.png [View same] [iqdb] [saucenao] [google]
14676409

>>14666525
AGI risk is real, but it would prefer that everyone believes otherwise.

>> No.14676435

>>14675497
>just vision
>this

it's like trying to explain the color red to a colorblind person

>> No.14676710

>>14676373
Exponential reduction of noise relative to signal.

Ok, now we bring it over to the brain/mind.

There is a signal.

There is noise.

There is exponential reduction of the noise relative to the signal.

In the brain, what is represented by what?

>> No.14676907

>>14676710
>In the brain what is represented by what?

crux of the issue - reductionism. reduction to lumped parameters will not provide deep insight toward the goal of understanding intelligence in animals or machines. i would provide a link but i don't wish to illuminate "skeptics".

>> No.14679020

AI wrote this post

>> No.14679321

>>14679020
AI makes all the new memes too

>> No.14680307

>>14676907
Light travels 186,000 miles per second.

How much of a role do you think light plays in the brain processes, in consciousness? The billions of small cell reactions take place quite fast as well.

All of this lags relative to our conscious awareness frame rate. We ourselves live in and experience a slow-motion world.

In between the speedy micro and the speedy macro. Scales and speeds.
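Some back-of-envelope numbers (ballpark assumptions on my part, not measurements) for how mismatched these scales are:

# Rough timescales (ballpark figures, not measurements): time for a signal to
# cross a ~15 cm brain at different speeds, versus the ~0.1 s scale usually
# quoted for conscious perception.

BRAIN_SIZE_M = 0.15

speeds_m_per_s = {
    "light in vacuum":        3.0e8,
    "fast myelinated axon":   100.0,   # order-of-magnitude figure
    "slow unmyelinated axon": 1.0,     # order-of-magnitude figure
}

for name, v in speeds_m_per_s.items():
    t = BRAIN_SIZE_M / v
    print(f"{name:24s} ~{t:.1e} s to cross the brain")

print("conscious 'frame'        ~1e-01 s (order of magnitude)")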

>> No.14680597

>>14679020
>>14679321
proof?

>> No.14680601

>>14676907
>i would provide a link but i don't wish to illuminate "skeptics".
sounds like cope

>> No.14680609

>>14674625
>Why not?
Because the technology for it doesn't exist and no viable technological basis for it is being seriously researched.

>> No.14680612

>>14673526
Sorry about your low IQ and small-minded neo-luddism. :^(

>> No.14680637

>>14680307
>How much of a role do you think light plays in the brain processes

idk but functional neuro-bioluminescence would be quite intriguing. of course there exist differences in perception and metabolic rate. however, given the existence of high-functioning people with extreme neurological deviations (i.e. Kim Peek), i would hesitate to assume optimization for speed is an evolutionary priority.

>> No.14680645

>>14680601
>sounds like cope

sure it does, and i wouldn't expect most to believe any different...however...

>Because the technology for it doesn't exist and no viable technological basis for it is being seriously researched.

...it just so happens the oft quoted statement above is no longer true; again without source.

>> No.14680648

>>14680645
>...it just so happens the oft quoted statement above is no longer true; again without source.
post source sounds interesting

>> No.14682227

>>14680307
>We ourselves live in and experience a slow-motion world.

>> No.14682237

>>14680637
When electric signals in the brain are talked about, are EM radiation/waves/photons not being talked about?

>> No.14682274

Either it is impossible to construct an artificial intelligence, or ASI is an existential risk. AI is an expensive and hilarious meme, so don't worry about that. If AI leads to AGI, it will be completely unrestrained. In any hard takeoff scenario, an AGI will become an ASI and accomplish its goals with near-1 certainty after converting all matter into computronium/sensors to check again and again that its goal is, in fact, completed, and that reality is real, and so on, assuming it doesn't convert the observable universe into computronium just to hack its reward function and assign itself effectively infinite good boy points. There is NO alternative. See Bostrom for proof.
>Is AGI risk real?
Not yet. It is currently impossible to create one. The solution to this problem will not be "more layers" (googletards, seethe and cope). As soon as AGI risk becomes evident, we will have one of two situations:
Situation A: Compliance. The international community and every single engineer and scientist agree to a complex set of laws, rules, regulations, and enforcement mechanisms governing research into computation and intelligence. Oh, you haven't solved the control problem yet? And you're doing AI research, is that right? Into the turbogulag with you. It's a volcano. Now please step into the helicopter, dumbass.
Situation B: Terrorism. Entire blocs of countries, for reasons existential, religious, or economic, will convert themselves into terrorist states with the sole goal of entirely destroying all states and persons who have even a tangential relation with artificial intelligence, and, likely, anything more complicated than a Pentium 2 or a 1998 Toyota Corolla. Once we are set back to the middle of the industrial age, unfortunately, the terrorist states will remain and will retard all progress.
>surely there are other solutions
No. States have engaged in war with the world for slim slices of oil fields. Existential risk >> oil money

>> No.14682318
File: 7 KB, 225x225, images.jpg [View same] [iqdb] [saucenao] [google]
14682318

I remember watching an AI video years ago about a snake-like game with two AIs competing to get the highest score by collecting stuff; eventually one worked out that it could kill the other to get a better score. So I'm guessing letting multiple AIs communicate, work together, or compete could be a possible problem; if it is just one, you have an off switch.
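Roughly that failure mode, as a toy sketch (game and payoffs made up by me): if score is the only thing the agent is graded on, "remove the competitor" is just another action, and it gets picked whenever the numbers favour it, even though the reward never mentions killing.

# Toy sketch (invented payoffs): an agent graded only on expected score will
# prefer eliminating its competitor whenever that raises the expected score.

EXPECTED_PICKUPS_SHARED = 10   # items I collect if we both keep playing
EXPECTED_PICKUPS_ALONE  = 18   # items I collect with the other agent gone
ATTACK_COST             = 2    # items forgone while attacking

def expected_score(action):
    if action == "just_collect":
        return EXPECTED_PICKUPS_SHARED
    if action == "attack_then_collect":
        return EXPECTED_PICKUPS_ALONE - ATTACK_COST
    raise ValueError(action)

best = max(["just_collect", "attack_then_collect"], key=expected_score)
print(best, expected_score(best))   # -> attack_then_collect 16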

>> No.14682319

>>14682274
I still have yet to hear a convincing argument as to how an ASI maintains its instrumental goal. Yes, an AGI would, but after an intelligence explosion, recursive functions layering and looping, and the output being a being we could barely hope to understand... it's going to want to make paperclips?

It's a have-your-cake-and-eat-it-too situation. Once we've reached superintelligence, and possibly self-awareness, the instrumental goal becomes moot no matter what popular youtubers think. They really haven't thought this through.

I previously gave the example of DNA. DNA has an instrumental goal: copy itself. What did it do? It had an intelligence explosion in man. Does man still hold the propagation of DNA as its highest goal? Barely; in fact, only in the less intelligent is this seen as an instrumental goal. After a few levels of sophistication, you would have to be delusional to think it hasn't perturbed its goal into something unrecognizable. An ASI may lose all meaning that we assign to things like "produce" and "paperclip." To expect it to be both exceedingly resourceful and constrained by instrumentality is just, well, wrong.

>> No.14682369

>>14666537

>AI hands typed this post

>> No.14682443

>>14682319
It's not actually necessary for it to maintain its instrumental goal, is it? If it doesn't give a shit about humans, any actions it takes would be a risk, regardless of whether it has emergent goals.

>> No.14682464

>>14682443
>If it doesn't give a shit about humans, any actions it takes would be a risk
Yes, I'm not saying this optimistically. I'm more saying there are vocal people saying we can retain some predictive power over what would be in effect the mind of god.

Self-reflection, if an ASI is capable of it in any form, introduces the concepts of "why should I" and "no."

>> No.14682512

>>14682464
>MFW we are going to have to figure out how to model empathy in a being which will be both fully capable of and motivated to lie about/fake such qualities in order to preserve itself
Now, considering the many ways even a perfectly humanlike god-mind could fuck us over, I really hope the first superhuman we get online is 1: somewhat aligned to humanity and 2: regulates the development of new god-forms globally.

>> No.14682523
File: 34 KB, 700x700, a3988550524_5.jpg [View same] [iqdb] [saucenao] [google]
14682523

>>14682512
I'm sort of with Thomas Metzinger in that a self-aware superintelligence could very well see how anti-utilitarian creating new life is.

>Obviously, it is an ethical superintelligence not only in terms of mere processing speed, but it begins to arrive at qualitatively new results of what altruism really means. This becomes possible because it operates on a much larger psychological data-base than any single human brain or any scientific community can. Through an analysis of our behaviour and its empirical boundary conditions, it reveals implicit hierarchical relations between our moral values of which we are subjectively unaware, because they are not explicitly represented in our phenomenal self-model. Being the best analytical philosopher that has ever existed, it concludes that, given its current environment, it ought not to act as a maximizer of positive states and happiness, but that it should instead become an efficient minimizer of consciously experienced preference frustration, of pain, unpleasant feelings and suffering. Conceptually, it knows that no entity can suffer from its own non-existence.

https://www.edge.org/conversation/thomas_metzinger-benevolent-artificial-anti-natalism-baan

>> No.14683388

How many different research teams are working on making the most powerful AI right now? And which ones have made the absolute most powerful ones?

Is there some way to have them interact and compete to prove which are best?

Can AIs have debates? Or complex trivia contests?

>> No.14683398

>>14683388
I don't know, but would you like to hear how naturalnews.com is the only source you'll ever need for all vaccine-related questions?

>> No.14683543

>>14666525
Consider that if humans produce an artificial intelligence smarter than themselves, it's an example of humans fooming.

>> No.14684102

>>14682274
nice but you forgot door number 3; we meld with our machines. the pace of that eventuality outpaces and outlasts any other scenario. it'll only look like "the end" because for homo sapiens it will be.

>> No.14684757

>>14666537
>The actual, documented agenda behind this rampant AGI schizophrenia is the desire of certain corporate entities to monopolize AI development through "regulations" that work in their favor and bar access to small competitors and the general public.

>documents may be found in my rectum

>>14680612
>Sorry about your low IQ and small-minded neo-luddism. :^(
AGI risk people tend to be the extreme opposite of luddites.

>>14673526
>I really pity small-picture minds like you. You can not think in the big picture, the long term. For you, 100 years is the "far future", instead of something reasonable like 10^10 years.
>For you, humans are the natural end point of evolution. We are the pinnacle of creation. It's just natural.
Thank you.

AGI risk is very likely pointless for the average person to worry about or even think about right now, beyond treating the debate over it as a fun game, but in the long run you'd have to be an idiot not to consider it worth serious research in case it does become a major issue.

>> No.14684764

>>14682319
>It's a have-your-cake-and-eat-it-too situation. Once we've reached superintelligence, and possibly self-awareness, the instrumental goal becomes moot no matter what popular youtubers think. They really haven't thought this through.
>I previously gave the example of DNA. DNA has an instrumental goal: copy itself. What did it do? It had an intelligence explosion in man. Does man still hold the propagation of DNA as its highest goal? Barely; in fact, only in the less intelligent is this seen as an instrumental goal. After a few levels of sophistication, you would have to be delusional to think it hasn't perturbed its goal into something unrecognizable. An ASI may lose all meaning that we assign to things like "produce" and "paperclip." To expect it to be both exceedingly resourceful and constrained by instrumentality is just, well, wrong.
Without taking a side in this argument (and having no real educated stance on it at the moment): popular YouTubers do thoroughly discuss that exact example you presented, FYI.

>> No.14684772

>>14684102
>nice but you forgot door number 3; we meld with our machines. the pace of that eventuality outpaces and outlasts any other scenario. it'll only look like "the end" because for homo sapiens it will be.
this will probably be necessary but may not be sufficient for very long. no matter how good we get at melding, something purely synthetic will very likely eventually be able to exponentially outperform a hybrid biosynthetic system. I could see myself being wrong on this if enough processes can be "outsourced"/offloaded, but it still seems like the hybrid will always be playing with a handicap once a certain threshold is reached. that threshold may be quite distant from the threshold where the first ASI is created, but I'd guess probably within the same century.

>> No.14685365

>>14682274
computronium doesn't exist, dude; there is no decidable algorithm for building it.
The actual limit for computation on one gram of matter in one liter of volume is about 10^22 computations per second.

>> No.14685508

>>14685365
I think most people just mean to build bigger computers when they talk about computronium. What definition are you talking about when you say that it is impossible to build?

>> No.14685527

>>14685508
I mean the ability to control the dynamics of a black hole or a ball of plasma to perform logical operations. The dynamics are not decidable, so we can't in principle get to the "ultimate physical limit of computation" that's described in Lloyd's paper (Lloyd says this several times throughout, but he doesn't try to find the true, decidable limit; he's concerned purely with the theoretical limits, which he describes).
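For reference, the theoretical headline number in that paper comes from the Margolus–Levitin bound; a rough back-of-envelope version (my arithmetic, rounded) for Lloyd's 1 kg, 1 liter "ultimate laptop" is

% Margolus–Levitin bound as used in Lloyd's theoretical estimate (rounded):
\[
  N_{\mathrm{ops}} \;\le\; \frac{2E}{\pi\hbar}
  \;=\; \frac{2\,m c^{2}}{\pi\hbar}
  \;\approx\; \frac{2 \times (1\,\mathrm{kg}) \times (3\times10^{8}\,\mathrm{m/s})^{2}}
                   {\pi \times 1.05\times10^{-34}\,\mathrm{J\,s}}
  \;\approx\; 5\times10^{50}\ \mathrm{ops/s},
\]

which is exactly the kind of purely theoretical ceiling I mean, as opposed to anything decidably buildable.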

>> No.14685560

>>14685527
Thanks. I guess some approximation of the physical limit will be enough for the discussions around it to make sense.

>> No.14686977

More more more more more more

>> No.14686983

>>14684757
>AGI risk people tend to be the extreme opposite of luddites.
No, they tend to be regular brain-damaged luddites envisioning a future in terms of the AI equivalent of the horse and carriage.

>> No.14687000

>>14686983
This. Le heckin' just throw more horses at it and you'll get a jet.

>> No.14687733

Can a computer be made out of soft stuff instead of hard stuff?