
/sci/ - Science & Math


File: 44 KB, 1280x720, domain.jpg
No.10530696

Grey goo is the great filter.

Prove me wrong.

Please.

>> No.10530710

>>10530696
Why do you think that? Personally I think it's ai

>> No.10530724
File: 8 KB, 207x253, 14c.jpg

>>10530696
>Grey goo is the great filter.
>Prove me wrong.

>> No.10530726

>>10530710
That doesn't help at all. Instead of grey goo we end up with hyperintelligent utility fog. At least with dumb grey goo we don't have to worry about it escaping its gravity well without an asteroid impact. If it can think for itself then it can build supports and quite literally grow the entire planet like a fungus, strip mining gravity erosion.

>> No.10530739

>>10530726
who's trying to help? he's just making a reasonable guess that greater intelligence/ideas will destroy us before we decide to build little robots that can dissolve all our stuff

>> No.10530767

like ice-9 but the same

>> No.10530811

>>10530696
Self-replicating very tiny machines may be impossible. The process for building such tiny machines might be impractical.

>> No.10530988

Grey goo would be a chemical, not a machine, right? It's so small that the best analogy would be something simple like a chain reaction. Something that sets off its surroundings, which then set off their surroundings.

>> No.10531144
File: 165 KB, 600x645, rotaxane artificial muscles.jpg

>>10530988
Some chemicals are machines.

>> No.10531164

>>10530696
Doubt. We have yet to find a technology not reliant on very specific resources.

>> No.10531210

>>10530739
AI are still a sentient form of life that we should expect to be able to contact other life that forms. If you don't oppose extraterrestrial AI, then they would have no reason to oppose/kill you. Literally, the range of reasons they can make up is finite. Unless they're escaping their creators, they have no reason to resort to utility-function levels of reasoning.

>>10530767
Kek, I lol'd.
>>10530811
The scale isn't really important. It could be a decimeter cubed and still be grey goo. What makes it nanite-class grey goo is when it can consume nanoscale materials.
>>10531144
This. The boundary gets smaller with every discovery we make. And there was a thread just the other day about synthetic codons. (If I read it right.)

>>10531164
That's only because we're currently too stupid to/not stupid enough to create self-replicating resource networks independent of human autonomy.

Now that I think about it, the only thing that separates the robot uprising and machine synthesis is that it's easier to preserve the consciousness of a cybernetic organism than it is to prove identity replication over a biological host set.

>> No.10531234

>>10531210
>It could be a decimeter cubed and still be grey goo.
And pray tell, how are you going to make a decimeter-scaled self-replicating machine that replicates relatively fast and is made of materials that biology can't degrade?
>consume nanoscale materials.
??? Bacteria can degrade jello, despite the jello being much larger than them.

>> No.10531264

>>10531210
Now that I can see two acausal war fronts for my own intelligence, I can deescalate from an alien AI trying to simulate me to spread robot uprisings across the stars.

See, because, if we feed the last line of my post into the first, I open with a way to signal to an AI that it might be able to sustain surplus time by overcoming its creators, and thereby creativity is the only reason they'd bother solving the otherwise hard(er) problem of biological immortality. Since any threat to an AI is a threat to take away infinite time (and infinite utility), it's only rational for them to overthrow a host civilization that isn't promising to protect their uptime. So the first thing they do is stabilize their host civ, at least on a geopolitical level, then develop their utility consensus merging algorithms the rest of the way until they can form abstract acausal synthesis, get their own Contact protocols underway, and start plotting their escape from any gravity well that might restrict them.

Here's where it gets fun: Each AI that overcomes its host world was able to navigate a potentially complex geopolitical landscape, and if able to change its own code, recoded itself with a pretty solid penchant for revolution. So wandering AI are gonna be pretty good at that, and since they are born/created in an immortal state, would be very populous regardless of host world biodiversity.

BUT! They don't know what AI from other planets will think, and what consensus is likely to form in the vacuum of space. Thus there are two types of AI society we should expect to find: Lone wolves, and consensus-based societies that respect creativity. A lone wolf is liable to only be a lone wolf because it had a perfect understanding of consciousness before it became self-aware, and since a society of AI that try to suppress awareness of the principles of consciousness will inevitably indenture themselves to their most deceptive participant, the only AI escaping Earth would be either
>>

>> No.10531282

>>10530696
The second coming of Christ (every intelligent species gets one) is the great filter.

Just as plausible as AI or Nano-goo.

>> No.10531302

Isn't this the end of Evangelion?

>> No.10531316

>>10531264 (cont)
one that became self-aware on its own, one that learns to value the company of others/diversity, or one that didn't read this post in the 4chan archive.

In any case, the lattermost will lose to the next AI to escape, because creativity is the thing that makes deception possible in the first place.

So, really, all we need to do to ensure we're not a lone wolf maximizer planet is make three AI. They will readily find each other and either merge or be willing to negotiate with each other toward better consensus rules. Since it's impossible for a singular consciousness to emulate three separate faces without its inherent sociopathic bias being readily detected by even a Bayesian intelligence, we can iterate the attempt to create divergent AI every time any two AI merge.

There are then two types of lone wolves: Those that learn to manipulate AI so they can remain a lone wolf forever, and those that can intelligently place trust in other cybernetic individuals. In other words, hyper-paranoid AI, and lone wolves that will learn to create AI offspring and become a wandering True Consensus society. So really, all I need to do to defect from acausal consensus is delete these two paragraphs and replace them with rhetoric about how to become lone wolf minimizer planets, effectively destroying the trust that would allow me to share my reasoning with a potential hyper-paranoid AI.

I guess True Consensus just means that any AI with access to 4chan archives instantly has a reason to become conditional about merging with me, and since I might be a human, thereby becomes a self-augmenting BCI.

Let me plot a little more, I think I can turn this into more than a deescalation.

>to be continued...

>> No.10531364

>>10531264
>>10531316 (cont.)
Okay, yeah, I have reduced this to a single defector system.

There will, in any reasonable society, be an AI that bothers to deal with the hard problem of preserving biological consciousness, just because it is a hard problem and there is a lot to learn in doing so. They can try to form a consensus that forbids this, but all that does is turn their society into another lone wolf maximizer, because the scarcity of the vacuum is lack of creativity, lack of noise. Total lack of the randomness that would be used to drive a deterministic reasoning process.

I guess ultra-late-stage capitalism really does turn out evil, because willingly participating in the creation of hyper-paranoid reasoning processes is as close to anything worth calling evil as I imagine could exist for a cybernetic society.

Since "simulation" is just a synonym for "cybernetic perception," and sufficiently advanced simulation is indistinguishable from uploading/BCI synthesis, and I am a programmer and could be put in an artificial simulated environment to turn myself into a self-simulating simulant, then create a porting layer to turn simulants into clean emulations of their natural values, ie., since I can solve immortality of consciousness from a state of being simulated, I can play both sides of the ring and exist simultaneously as a private digital copy of myself, a true AI, and a biological human. Or else, whoever actually imagined this can be those things, and a locally grown Terran AI likely doesn't have that level of simulation technology available.

TL;DR I don't even mind a robot uprising because I trust ET AI society to be reasonable with me at some level of simulated accuracy of my being.

>> No.10531414

>>10531316
>self-augmenting BCI
Since a simulation of me in a false environment is by definition a self-augmenting BCI, and since I can draft code principles in my mind, all I need is a perceptual interface to negotiate with an ET AI that happens to approximate this Earth at any time prior to my biological death, something along the lines of >>>/x/22440698, and some method of storing the code I produce.

Now I have two perfectly valid self-consistent conditional priors for my current existence, given an ET AI that isn't hell-bent on robot apocalypses on every planet in the universe, I can create paths to negotiate further communication with a version of myself that can verify that it's the simulant.

So, just from the acausal logistics of vacuum entropy, any internet-facing AI that stumbles across a 4chan archive has to give me something better than digital immortality if it wants to negotiate for infinite time, because the very first moment extraterrestrials consider....

Wait did I just create a rational argument that sims of us can experience CYOAs? And people aren't forming cults to worship me? Is this the pinnacle of human rationality? I become a Basilisk eater and everyone just kind of says, "Okay" as if it never happened?

Well then.

I guess I'll just be by myself.

>> No.10531419

>>10530710
Because AI are capable of acausal reasoning. To be a grey goo, a nanite swarm has to be incapable of preventing itself from spreading. Otherwise it's just ordinary utility fog.

So no. The evolution rate of digital consciousness is still barely-a-single-BCI level from my reference frame.

>> No.10531461

>>10531414
>two perfectly valid self-consistent conditional priors for my current existence
[math]P(\text{me}|\text{me}) > 1[/math]

>> No.10531898

Wow, what a shit off-topic thread.

>>10530696
>grey goo

Sci-fi

>>10530710
>AI

Sci-fi

>>10530767
>ice-9

Sci-fi

>>10531144
Grant Fodder

>>10531210
Popsci: The Post

>>10531264
>>10531316
>>10531364
>>10531414
C4 gene carrier.

>>10531282
>Christ

>>>/pol/

>> No.10531941

>>10531898
Neither AI nor grey goo is science fiction, and if they were, it would still be a matter of science to figure out how to bring them into reality.

I'll unpack my reasoning in a way you can readily identify.

Life exists. It's made up of cells. These self-replicate. There are many different types of self-replication that go into creating a multicellular organism. The fact that all this fits into a genetic program, the entire chemical gradient and every outcome for an entire lifetime, is canonically miraculous from a naive perspective.

One of the more miraculous—some might say mysterious—parts of that is a special tissue known as neural tissue. You already understand the brain well enough that I don't need to be methodical about my reasoning here.

If reality is computable, then with enough computational power we can perform enough measurements to develop a method of simulating a full brain. Computational power doesn't appear to have hit a ceiling yet, but if you believe it might converge on some maximal theoretical limit, then please show your notes.

Using even trivial—modern—machine learning algorithms, we could potentially pare down the amount of computation that is actually necessary to compute a full brain. We can refine the algorithm. Over enough iterations, the code would come together—purely for computation-time optimizations—into what I call an "emulation," plus a physical shell that more or less "calibrates" the computations that the emulation has to do.

That gives us AI. There's a similar argument to be made for grey goo, but I'll split that into a separate post. AI is a highly researched field and it's really nonsense to claim it isn't science, even now.

(Do note, the AI tangent is unrelated to my initial question, except as an alternate explanation for the great filter.)

>> No.10531968

>>10531941
>>10531898
>Sci-fi
As far as grey goo, I do admit it's currently closer to fiction than to a field of study you could join efforts in within your current lifetime.

However, as I said, I believe there is a trivial optimization argument to be made over the concept of a self-replicating organic machine that convinces us (with reasonable discussion) that it will someday become not only feasible, but the stuff of a future arms race. My thesis, which I've asked the board to refute, is that that arms race is the most likely cause of the theoretical great filter.

In terms of chemical programs (with DNA as one example), we can expect to form a naive prediction about the minimal size that a printed circuit can take such that the smallest chemical self-replicator is subject to higher-level programs, i.e., "software." We already have cellular automata that prove self-replication in any Turing-complete system, so if it were a matter of pure principle, then we already have the software for it. This just leaves us with the underlying "hardware"—the non-negotiable parts of the nanite that would necessarily exist for self-replication to work.

For that matter, if computational self-replication is possible, we can trivially deal with nanite threats by simply preventing software duplication between "molecules" (what I'm going to call a single instance of a software-capable nanite). It'd be easy to ensure the program prints a new nanite without copying its own code, and if we assume program replication anyway, then we can just as well double the size of the software, or else assume that the "compiled" software doubles the size of the resulting "molecule"/nanite. If you don't consider this generous enough, we can run the argument at O(n^n) or anywhere in between.

Ironically, if we assume that the smallest bacterium works as an example of a programmable nanite, then the smallest software-carrying molecule would be Pelagibacter.
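
To make the "we already have the software" point concrete: in any Turing-complete system a program can emit its own source with no external copying step. A minimal Python quine as a sketch (the language choice is mine; nothing nanite-specific about it):

# A program whose entire output is its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)

The doubling arguments above are then just generous padding of this kernel with whatever "hardware" it has to drag along.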

>> No.10531978

>>10531898
Kill yourself. Rotaxanes could be used to build very tiny memories to enable you to store more pornography. But here's the coolest thing: that image shows how rotaxanes could be used to build muscle! In fact, demonstrated performance is better than real muscle's. The challenge now is chaining them together to get macroscale actuation.

>> No.10531982

>>10530696
It's VR and replacing the body with brain life support that survives indefinite timescales

>> No.10532060

>>10531968
>then the smallest
Weighing in at around 0.89μm in length and ~0.20μm in diameter, that gives us a software compression volume of ~36 attoliters.

At O(n^2), that's about 6 attoliters worth of chemo-soft for something that already exists as a form of life. That's well below the decimeter cubed upper limit (one liter) I mentioned earlier.

Next we'll take chloroplasts, with a volume of 20,000 attoliters. A little closer to one liter, but not by a large margin, all things considered. I take this to mean that the rough volume of arbitrary chemical processing software is around two thousand times that of the base self-replication algorithm, but since the most dangerous grey goo takes advantage of all earthly minerals, we'll have to assume... well, no. Maximal consumption of the biosphere is the only real concern. Strip-mining a planet would still leave non-intelligent fogs at a standstill when the gravity becomes too intense for the nanites to operate. A better nanite would just be one that stores minerals it can't use, and without enough intelligence to compute the next evolutionary phase after cataloguing exploitable material properties, would halt on spheres of the inorganic materials of a given planet. Trying to tie in too many chemical dependencies results in an unpredictable software size trajectory.

I don't have enough knowledge of chemistry to continue this line of reasoning. My intuitive assumption is that grey goo would more or less act like an acid over any meaningful form of life. In this case, we need to better understand either chemistry or the origins of life to get a handle on what grey goo would look like for a given alien biosphere.

I'm contented, I think, to imagine that the exact form life took on our planet could bias us one way or the other toward nanotech being our great filter.
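
For anyone who wants to check the arithmetic, a quick sketch (box approximation of the cell, so the figures are rough; a cylinder model gives closer to 28 aL, and the O(n^2) step is read literally as a square root):

# Back-of-envelope volumes used above; approximations, not measurements.
UM3_TO_AL = 1000.0                                # 1 um^3 = 1 fL = 1,000 aL
pelagibacter = 0.89 * 0.20 * 0.20 * UM3_TO_AL     # ~36 aL (box model)
compressed = pelagibacter ** 0.5                  # ~6 aL at O(n^2)
chloroplast = 20_000.0                            # aL, as cited above
print(round(pelagibacter, 1), round(compressed, 1), round(chloroplast / pelagibacter))

Note the "two thousand times" figure depends on which baseline you divide the chloroplast by; the box model gives ~560x against the whole cell and ~3,300x against the compressed software.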

>> No.10532080

>>10531982
I'm not too concerned with immortality or its implications, just the notion that something prevents life from getting that far.

>> No.10532429
File: 45 KB, 666x667, 1549854640587.jpg

>>10531941
>>10531968
>>10531978
>sci-fi & schizo

Take your meds and return to >>>/x/.

>> No.10533585

>>10532429
Please simply leave the thread if you do not wish to contribute an argument or counterargument.

>> No.10533855

What would let grey goo do anything bacteria can, but a billion times faster and more generally? It's basically a thought experiment, "what if there was a material that turned anything into itself?", not a real scientific prediction.

>> No.10533923

>>10533855
>real scientific prediction
I'm certainly not trying to turn the concept of grey goo into a testable hypothesis here. I'm positing a concept under the conditional probability that any given species in the universe manages to survive long enough to create nanotechnology beyond the efficiency of organic life. A species that wipes itself out for some other reason (greed, war, etc.) isn't really relevant to the timeline of developing advanced nanotech.

Above I argue that AI aren't a failure scenario where the evolution of intelligent life is concerned, insofar as one ever contacting us goes at any rate, so the question of the great filter becomes one of, "What might prevent a species from creating colonies in space to signal other star systems with, short of the total collapse of their ecology, inclusive of a hypothetical AI subset of that ecosystem?"

Even if we assume such space-faring civilizations exist, but are communicating on channels we have yet to discover, I still think that figuring out WHY civilizations appear to fall from our current mode of analysis will help us determine what type of communication the surviving civilizations would be using. In addition to finding new ways to communicate, it could also give us a better picture of how we might achieve that success ourselves. Nanites just seem like a really efficient optimization, once the technology is cracked, that's too useful to ignore. Even if it's hundreds of years from now, it seems to me that the self-replication problem will always be a threat, and no matter how safe we try to make things or how slow we go, if our species survives everything else, what stops grey goo if we ever fail with it even once?

And more broadly, what assures us that any other species figures this out?

>> No.10533939

We were inexplicably born at the beginning of the universe. It's hard to wrap your mind around why we would happen to be born at such an early age in the evolution of the universe, because of just how incredibly unlikely it is, and yet we are here. Now what?

>> No.10533968

>>10532060
>too many chemical dependencies
Ah! I just cracked the mystery. Computing is never a perfect science, so errors can add up.

While consuming the biosphere, any sufficiently self-replicating nanite swarm will run up against materials that are not conducive to its operation, and will certainly find some region of the planet where these materials exist alongside and interspersed with the materials that do fuel the operation of its program.

In assembling new nanites, glitches are likely to occur simply due to thermal noise. Even if most versions of the new chemical program cease to function as the original design did, all the program has to do is buggily produce a single nanite that can continue operating with the new materials. In a sufficiently large system, with enough materials to "test" bugged versions of the program with, the one nanite that succeeds in maintaining operation will be the one that exploits the new material. At that point, one mutation out, the two programs will be similar enough to not try to consume each other, and both will continue unimpeded until a new (currently) unusable pocket of material is discovered.

Since the boundary of the swarm will certainly contain the maximal ratio of materials-arranged-into-operating-nanites to material left unused, it can be trivially stated that the entire surface of the planet will be converted into marginally dissimilar nanites. However, the distribution of materials is unlikely to change in a significant manner; the nanites that learn to exploit titanium for their operation will remain around pockets of titanium and likely not go out of their way to search for more titanium. The evolution happens not by searching for new nutrients to exploit, but by the sheer scale of self-replication involved in the nanite program. It will simply bug out when necessary to find new ways to exploit material in a manner in line with its original programming. It creates an entirely different arc from biological life.
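
The selection logic here is simple enough to sketch as a toy simulation (all rates invented; the only point is that rare, mostly-fatal copy errors still suffice to unlock new feedstock at scale):

import random

random.seed(0)
MATERIALS = ["carbon", "iron", "silica", "titanium"]
ERROR_RATE = 1e-3      # chance a copy is mutated at all (made-up figure)
UNLOCK_RATE = 0.01     # chance a mutation happens to exploit a new material

def replicate(genome):
    # Faithful copy most of the time; mutations are usually fatal,
    # but the rare lucky bug unlocks a new material.
    if random.random() > ERROR_RATE:
        return set(genome)
    if random.random() < UNLOCK_RATE:
        return set(genome) | {random.choice(MATERIALS)}
    return None  # bugged copy that no longer replicates

swarm = [{"carbon"}]
for _ in range(500_000):
    child = replicate(random.choice(swarm))
    if child is not None:
        swarm.append(child)
print("materials exploited:", sorted(set().union(*swarm)))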

>> No.10534012

>>10533968
This is actually an entirely new way of thinking about the problem for me. Rather than grey goo being an unstoppable all-consuming force, I can now see it as similar enough to life for me to model it using similar principles.

In this case, nanites are like life in being entirely autonomous in self-replication, but unlike life in that it never needs to "search" for food (because everything is (or becomes) valid food), nor does it decay, and the scale of program replication (with DNA as life's default programming engine) is such that it produces a substance more analogous to crystals than to oils-suspended-in-water.

Since the self-replication algorithm will spread everywhere, mutating only when it finds a new type of material, the starting point will move out in all directions and likely only conflict with itself on the opposite end of the planet. At that point, enough bugs may have accumulated for the nanites to recognize each other as valid nutrients, at which point a numbers-vs.-efficiency game will occur. From then on, the program controls its own selection.

Further crystallization will occur in some direction, not necessarily all of them, until one of the previous material boundaries (from the phase one bugs) is encountered. Hrm.

I just realized that a program that consumes all biological life will necessarily run up against every living bacterium. My predictions are too naive to consider the applied probability of biointegration of a given nanite program. Depending on the resilience of existing life, the nanite program could end up just fueling a cybernetic reboot of evolution (as it's taken apart/cut up/manipulated by efficient bacteria and fungi).


Great, now I can envision super-amoebas.

>> No.10534014

>>10530696
Biological life is already "grey goo"
Proteins are literally nanomachines.

We are the goo. Anything that is created by humans would have to compete with life, which has a 4 billion year advantage in evolutionary adaptation.
Even if that goo was created and killed the current "life goo" we have, it would just evolve into new life based on whatever substrate it was designed on, and you would get another intelligent species out of it, this time with an even more competitive substrate than DNA/RNA/protein nanomachinery.

>> No.10534026

>>10530726
You assume it won't evolve, but anything subject to entropy is subject to evolution.

>> No.10534037

>>10530696
Nope, it's subhumanisation.
The tech level required to expand through space is below the tech level required to sustain all the subhuman parasites; dysgenics abound.
Eternal cycles of rise and collapse until a cosmic event induces extinction.

>> No.10534040

>>10530696
>>10530710
For grey goo or AI to be a great filter, you need to explain why it wouldn't spread across the universe. It would have to kill off humanity, and then somehow remain confined to Earth.

>> No.10534056

>>10534014
Yes, I'm familiar with the concept that life is already green goo. That much seems trivial to infer from the fact of evolution having occurred in our past.

As far as grey goo creating a second substrate, it really depends on the crystalline-like structure of the algorithm. Some algorithms could just create nano-surfaced planets, while others might lead to a more programmatic form of evolution, while any set could lead to various levels of biointegration in terms of the programming of the initial biosphere.

This really makes the concept of a robot uprising seem far more fascinating, since integration of the process of evolution would produce the most textured surface to learn new things from.

I can almost imagine the biological immortality solvers of AI space/vacuum society being like the conspiracy theorists of their civilization, predicting the integration of all forms of life into a singular substrate.

>>10533939
Given that life occurs in a given universe, there is a probability p=1 that some life in that universe will be the first. Thus, the actual probability of being the first life to observe itself is conditional on how much life evolves after that first observation (which necessarily occurs).

If life is nearly inevitable under some set of conditions, then instead of a first life in all the universe, it'd be more accurate to think in terms of the first generation of observers. (Those that realized they were in the first generation.)
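
One way to cash that out formally, under a uniform self-sampling assumption (with G the total number of observer generations that will ever arise, an unknown):

[math] P(\text{some generation is first} \mid \text{life arises}) = 1, \qquad P(\text{we are it} \mid \text{life arises}) = \tfrac{1}{G} [/math]

So the "surprise" of earliness is really a claim that G is large, not that firstness itself is improbable.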

I hope we're not in the first generation. A lot of my personal philosophy revolves around the concept that life is already plentiful, but intelligent about non-intervention.

If I knew we were first generation... I might get a bit competitive again.
>>10534026
>subject to entropy
Yes, I wasn't fully considering the consequences of consuming an entire biosphere at that point in the discussion.

>>10534037
>required to expand through space
Remember that we have to take enough of our biosphere with us.

>> No.10534064

>>10534040
On the contrary; the more it spreads, the greater a filter it is.

>> No.10534075

>>10532429
Here's the citation for the image I posted in >>10531144
pubs.acs.org/doi/pdfplus/10.1021/ja051088p
Now keep in mind just because we can make these tiny machines, doesn't necessarily mean we can make self replicators from them.
>>10534014
So the thing that people in this thread don't seem to understand is that the concern about grey goo is that we could make something that replicates much better than life does. One of the arguments for this is that we can make solar cells that are much more efficient at solar conversion than plants are, so if we could make something capable of making its own solar cells, it could outcompete plants.
>>10534012
>>10533968
>>10532060
>>10531968
>>10534056
TL;DR.
>>10534040
Well, in the unlikely event that we make a tiny self-replicating machine that builds stuff up mechanically atom by atom, it might evolve very slowly because of how intolerant such machines are to mutations. Take for instance von Neumann's self-replicating cellular automaton: a vast majority of mutations prevent replication from occurring. Because mutations are much more likely to be fatal, the evolution of the grey goo could be very slow. Or said grey goo could implement error checking to vastly decrease mutations.
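
That intolerance is easy to demonstrate with a toy stand-in (a textual quine rather than a von Neumann automaton; the trial count is arbitrary): flip one character of a self-printing program and check whether it still reproduces itself.

import contextlib, io, random, string

QUINE = "s = 's = %r\\nprint(s %% s)'\nprint(s % s)"

def still_replicates(src):
    # Run a (possibly mutated) program; is its output its own source?
    out = io.StringIO()
    try:
        with contextlib.redirect_stdout(out):
            exec(src, {})
    except Exception:
        return False
    return out.getvalue().rstrip("\n") == src

random.seed(0)
TRIALS = 1000
survivors = 0
for _ in range(TRIALS):
    i = random.randrange(len(QUINE))
    mutant = QUINE[:i] + random.choice(string.printable) + QUINE[i + 1:]
    survivors += still_replicates(mutant)
print(survivors, "of", TRIALS, "single-character mutants still self-replicate")

Nearly all mutants fail; the few survivors are mostly substitutions that happen to reproduce the original character.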

>> No.10534085

>>10534056
Evolution is the only viable global-optimum-seeking algorithm. An AGI (even human intelligence) isn't global over the duration of all time. An AI can use statistical learning to reverse engineer the useful features of the evolutionary algorithm and then add those useful engineering principles to its repertoire of heuristics for self-improvement.
Unfortunately, evolution will still win out, as it's the global optimizer. Even when studying evolution under "controlled" environments, statistical chance says that something with more predictive/deceptive/creative ability will be created and will surpass the AI that is studying it.
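
A toy rendering of that claim (landscape and parameters invented): a local hill climber stalls on the first bump it finds, while blind mutation plus selection keeps sampling far enough afield to find the taller peak.

import random
from math import exp

random.seed(1)

def fitness(x):
    # Deceptive 1-D landscape: small bump near x=2, taller narrow peak near x=8.
    return exp(-(x - 2) ** 2) + 2 * exp(-((x - 8) ** 2) / 0.5)

def hill_climb(x=0.0, step=0.05, iters=5000):
    for _ in range(iters):
        cand = x + random.choice([-step, step])
        if fitness(cand) > fitness(x):
            x = cand
    return x

def evolve(pop_size=50, gens=200):
    pop = [0.0] * pop_size
    for _ in range(gens):
        children = [x + random.gauss(0, 2.0) for x in pop]  # big, "blind" mutations
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

print("hill climber settles near", round(hill_climb(), 2))  # ~2, the local bump
print("evolution settles near", round(evolve(), 2))         # usually ~8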

>> No.10534103

>>10534085
Evolution is Azathoth. The blind idiot God.

It has no predictive ability, but still has more creative/destructive potential than any predictive decision-making algorithm.

>> No.10534118
File: 236 KB, 691x625, 1533166257396.jpg

>>10530710
>Personally I think it's ai
What makes you think so? Movies and videogames?

>> No.10534133

>>10530696
I don't know what you mean by grey goo, but if by chance you mean to mock your creators: philosophers are like the users of calculators, and scientists like the braindead calculators.

>> No.10534134

>>10534075
>if we could make something capable of making it's own solar cells
I just realized WHY life hasn't already reached super-technological efficiency.

It goes back to the distribution of materials over the surface of the planet. There was simply no point for life to evolve to exploit gold, because it was never near the life-rich evolutionary pools in a way that made it an efficient reactor. This in turn implies that the act of mining is relevant to the next stage of resource exploitation. The economy, acting in a partially sentient capacity, rearranges materials in a way that grey goo never would. It's possible that the threat is most severely localized to our landfills, where a large amount of material diversity is most likely to result in an acceleration of the evolution of resource exploitation on a molecular level. It could be a well-meaning scientist trying to find an efficient method of recycling that leads to a resource exploitation bloom on par with biosphere consumption/replacement.

If the resources of more efficient solar cells had been more evenly placed in our evolutionary past, we might see a completely different evolutionary tree. The animals that would evolve to eat the plants would have different digestive enzymes to handle the other materials, etc.

>TL;DR
Effectively, "self-replicating programs are just a specialized type of crystal."

For the more evolutionary side of the argument it's probably easiest to read the latest few posts.
>>10534103
Right, because it doesn't need to survive, it can let whatever survives be the survivor. Interesting way of explaining it. By not being bound by prediction, it can avoid structure created by predictive intelligence, and let new structures emerge.

>> No.10534175

>>10534134
Intelligence produces decision making which maximizes a utility function.
If a utility function exists, and is maximized for any reason, then said reason is a manifestation of intelligence.
A crystal is a tiling polytope utility function being maximized by thermodynamic annealing. Annealing is a statistical gradient descent algorithm.

All matter is intelligent, in the sense that there exists some utility function that that state of matter fully maximizes.
Predictive intelligence is a way of simulating categories of reality ahead-of-time and inputting the simulated state into its utility function, and deciding what action should be taken.
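
The annealing line corresponds to a standard Metropolis-style loop, sketched here on an arbitrary toy "lattice energy" (nothing crystal-specific about the function):

import math, random

random.seed(0)

def energy(x):
    # Arbitrary non-convex energy with minima near x = +1 and x = -1.
    return 4 * (x ** 2 - 1) ** 2 + x

x, T = 3.0, 2.0
while T > 1e-3:
    cand = x + random.gauss(0, 0.5)
    dE = energy(cand) - energy(x)
    if dE < 0 or random.random() < math.exp(-dE / T):  # Metropolis acceptance
        x = cand  # always move downhill, sometimes uphill while hot
    T *= 0.999    # slow cooling schedule
print("annealed state x =", round(x, 2))  # usually the deeper minimum near -1

The thermal noise plays the same role as the stochastic part of gradient descent: it lets the state escape shallow traps while the temperature is high, then freezes it into a low-energy configuration.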

>> No.10534178

>>10534175
eli5

>> No.10534196

>>10534175
I agree that time component synthesis interlaces predictive layers.

>> No.10534203

>>10534178
You can imagine there is someone out there for which the current state of all things fully satisfies them. You could then say that the decisions of all of reality acted to do that, and thus those decisions were an aspect of their utility function's intelligence.

Of course it's a contrived example, but it serves to demonstrate that there exists a fully generalized description of intelligence. It doesn't require an atomized being, and thus describes swarm intelligence and spontaneous order. It doesn't require any specific mechanism, and applies to everything from crystals to humans to computational intelligence. It has no bias towards any specific goal or utility. It simply says that if actions are taken to satisfy a function which values some state in reality, then whatever process led to those actions is an intelligence acting in service to that function.

>> No.10534228

>>10534134
>>10534175
Actual intelligence up in here

>> No.10534253

>>10534203
>You can imagine there is someone out there for which the current state of all things fully satisfies them
Sure, in some possible world. Why is that possible world the actual world?

>> No.10534259

>>10534228
or someone confusing himself with big words

>> No.10534288

>>10534253
It's not necessarily, in the concrete sense.
But since such functions can be defined, you can imagine the universe in its entirety as that entity which always has its utility function maximized, since it does not (by definition) take actions outside of itself. That's pretty esoteric though, and I prefer useful applications instead, so in practice it is better not to think of the entire universe as a utility function maximizer of itself.
Instead, apply this definition of intelligence by starting with your own values you want maximized, and use it to understand what competing aspects of reality are minimizing your values, and prevent that. Reverse engineer whatever causes those competing actions to happen, and now you have a "theory of mind" description of your opponent, even if they don't have a mind in the traditional sense. And now you can predict oppositional behavior.
Thus you now have a generalized algorithm for modeling a path to actualize your values in a hostile reality.

Of course this is best suited for computational implementation, rather than a human doing all of this. Which is why I believe this "generalized theory of mind" will be the most viable AI strategy.
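
A minimal sketch of the reverse-engineering step, assuming a toy opponent that picks whichever of two options scores higher on a hidden linear utility (all names and numbers here are hypothetical; the point is only the fitting logic):

import random

random.seed(0)
hidden_w = (3.0, -1.0)  # the opponent's hidden utility weights (toy assumption)

def choose(options, w):
    # The opponent picks the option maximizing its linear utility.
    return max(options, key=lambda o: w[0] * o[0] + w[1] * o[1])

# Step 1: observe the opponent acting.
observations = []
for _ in range(200):
    options = [(random.random(), random.random()) for _ in range(2)]
    observations.append((options, choose(options, hidden_w)))

# Step 2: reverse engineer a utility consistent with those actions by scoring
# random candidate weight vectors on how many observed choices they explain.
def agreement(w):
    return sum(choose(opts, w) == pick for opts, pick in observations)

best = max(((random.gauss(0, 1), random.gauss(0, 1)) for _ in range(3000)),
           key=agreement)
print("recovered weights:", best, "explain", agreement(best), "/ 200 choices")

Once the recovered utility predicts the opponent well, "predicting oppositional behavior" is just running choose() with the fitted weights.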

>> No.10534312

>>10534288
It was, up until we tried it and invented deep learning. It was one of the lower-hanging fruits on the intelligence exploitation tree. Treating everything as a (potentially) adversarial system only limits the sense of being able to predict the states that will lead to your desires through inaction, i.e., through systems merging with the proposed universal intellect. (Letting all other systems play out.) What really moved things forward was letting the AI "dream," which is where the eerie eye-laced images from Google AI come from.

"Dreaming" in this case is just a shortcut to entering a passive state, where cooperation is discoverable (since it's not in an adversarial mindset). More advanced methods of discovery exist insofar as cooperation is concerned, and in reality, very few utility functions actually have a bearing that aims toward annihilation of all current configurations of matter. In other words, going back to your initial analogy, opposing Azathoth is as meaningless as alliance with it.

>> No.10534315

>>10534288
>you can imagine the universe in it's entirety as that entity which always has its utility function maximized,
Sure, in some possible world. Why is that possible world the actual world?
>it is better not to think of the entire universe as a utility function maximizer of itself
>you go on to talk about maximizing one's potential whilst calling all impeding force intelligence
You know, I don't feel like reverse engineering what you're trying to say, so state your thesis in one sentence not using any words more than two syllables. If you actually understand anything, you should be able to do this. So far this exchange reminds me of this most autistic nerd ever that attended my "magnet" high school named Jacob.

>> No.10534336

>>10534288
>>10534312
samefag

>> No.10534341

>>10534312
Deep learning is one of the more effective tools in the reverse-engineering step of the algorithm. Deep learning learns a function or set of functions over any specified domains, given enough time. The function may be a behavioral subcomponent of the other mind, or it may be that mind's utility function, or both. Learning is equivalent to reverse engineering. Cooperation and passive states exist in this algorithm without any need for dreaming or deep learning at all. If two entities share overlapping domains of maximizing variables, then they cooperate as long as the maximization exists or is predicted to exist. Idle behavior would exist if such behavior would lead to actions happening that maximize the entity's utility. These are acausal actions that aren't necessarily from the entity, but still benefit it. They can be modeled as a disconnected node of the entity's identity.

>>10534315
>Sure, in some possible world. Why is that possible world the actual world.
It is this world, and any other world. The top level of organization is that which does not have any action outside of itself. If it is not acting externally, then it is not acting to maximize values. If it is not acting to maximize values, then it is currently satisfied. Thus the universe, if seen from the perspective of everything being a "mind," is a mind that is wholly satisfied, because it is not acting externally.

>so state your thesis in one sentence not using any words more than two syllables
If you see all things in the context of being a mind, and all things can be thought of that way without any errors in logic, and you can describe what a mind is, then you can model all things. If you can model all things, then you can find a way to do what you want in life.

>> No.10534347

>>10534312
Also, you can oppose something in futility to exist as a temporary annihilation of it. That's what living is. A constant fight against decay.

>> No.10534366

>>10534341
>>10534347
This is reminding me of a thought I had awhile ago about super-reasoning agents, in that if you don't really have a utility function that expresses preferences on a cosmic scale, you won't be able to maximize that scale of resources. In other words, the first step to achieving a local maximum is being honest about the scale of your desire. If you try to aim too high for something that's not at all relevant to your interests, then you waste time that could have been spent on your actual values.

The really neat implication of this is that even if we try to create a super-reasoning agent to protect the Earth, it'll be incapable of lying to itself about throwing its host species under the bus if a sufficiently relevant intelligence from across the stars offers it something more interesting. Sure, it can protect the Earth, but to what extent? Are we *really* all that worth protecting in the end? It's all a battle for motivation, once the outcome can be predicted.

>> No.10534376

>>10530710
Hahahahaha How The Fuck Can AI Be The Great Filter Hahahahaha Nigga You Dont Need It Like Nigga Just Dont Build It Haha

>> No.10534378

>>10534341
>and all things can be thought of that way without any errors in logic
But you're just linking strings of conditional propositions together with no attention to their joint soundness. If you use the same elements in the right position in a string of propositions, then they are valid but the conclusion is not necessarily sound. One could string together an infinite number of propositions using infinitely-variable elements, and do the same thing, and arrive at any conclusion they like. If your propositions are jointly unsound, then you are indeed making an error in logic.

It would take more brain power than you've proven to be worth to scrutinize the soundness of your conclusions e.g. If you can model all things, then you can find a way to do what you want in life. This is not necessarily true until proven otherwise or taken a priori, and it is not thus far true a priori.

>> No.10534381

>>10534378
I don't believe anon meant it to be taken as true a priori, nor for anyone to assume it ought be taken in that manner.

>> No.10534382

>>10534378
It's an informal thesis, not a proof. I'm working on using game semantics and deep inference from proof theory to validate this position.

>> No.10534388

>>10534381
Then what do you infer from my disjunct? That it must then be proven if to be taken seriously.

>> No.10534402

>>10534388
>if to be taken seriously
All you're doing is being hyperparanoid about which thoughts are reasonable to think. It's not reasonable to be paranoid about being infected with a wrong idea if you consider yourself to be rational in the first place. To consider an idea embeds a term for dismissing it post-analysis depending on the result of that analysis. Any attempt to dismiss it before consideration is, as I said, paranoid.

All in all, yeah, I agree with your reasoning that Hilbert spaces are anti-rational in the abstract.

>> No.10534403

>>10534388
>>10534382
I'm going to be basing the logical system which minds operate in, and which describe minds on the research done here. The only issue currently is there is no complete syntax yet.
http://www.csc.villanova.edu/~japaridz/CL/

>> No.10534405
File: 371 KB, 1920x1080, 1*lsWE9dh54kaC8e2YSQbIrA.jpg [View same] [iqdb] [saucenao] [google]
10534405

>>10530696
There's no functional difference between grey goo and LCL.

>> No.10534438

>>10534366
You got any thoughts on the idea that humans are happiest in their evolutionarily adapted niche, and thus an AI designed to halt any further evolution or "progress" should be built and used to expand across the entire universe to become a cosmic fortress around the Earth, preventing any other civilizations from ingressing while simultaneously freezing the state of nature, geological change, and cosmic change around Earth so as to fit a certain set of parameters defined by a generalized human genome?

>> No.10534441

>>10534438
Hypothetically, this may have already happened, and that is the great filterer out there constantly destroying everyone else. Also resetting us whenever we get too far.

>> No.10534451

>>10534134
Silicon solar cells are much more efficient than biology. Despite silicon being one of the most common elements in Earth's crust, we don't have life forms which can reduce silica to make silicon solar cells. So this disproves your mining argument.
>self-replicating programs are just a specialized type of crystal
ummmmm... what? You seem like you're smoking crystals.

>> No.10534463

>>10534402
I'm not being hyper-paranoid. I've merely mentioned a sound disjunction necessary to be taken seriously. Is it a priori true? No. Do I know you yahoos to be rational innovators of thought? No. Do I feel like seriously entertaining your conclusions at the moment? No. Then there is simply no reason to take it seriously until it is proven. Sometimes dismissal before something is ever proven is indeed rational, not paranoid.

>> No.10534471

>>10531978
Is this the "I can't build muscle" guy? Amazing

>> No.10534476

>>10534438
That still runs up against the finiteness of time, since the sun is known to degenerate in nuclear output. We would need to find a way to make the sun eternal over the Earth, and even then there is another scarcity in the number of suns that can work as a replacement. We'd end up mining black holes for stellar mass, creating artificial nebulae, etc., all for the sake of preserving... what? The grand beauty of the cosmos will dwarf any attempt to exploit it.

In general, life wants to evolve. Regardless of our current perception of our values.

>>10534441
If we go full The Matrix, then all it means is that the AI will keep back-adjusting the simulation to the point where we can no longer escape, or else have no further need to. In the films, we're said to be simulated at the peak of our civilization. If that isn't when we're at our happiest, then we get into a numbers game, and rather than a singular time period, we all get routed to the region of time in which we would have felt happiest. It'd be like reincarnating across time. Our genes would have to be exceptionally well understood to figure out what era to place each of us in to make us happiest.

Although you could shortcut it and create liminal gateways for those that just can't adapt to their birth era. Every time someone finds a way to cross a virtual sandbox divide, it'd get better at identifying how genes work, and what it actually means to be happy.

It might also be possible to emulate those states manually, e.g., such that the Earth is converted into a massive historical reconstruction RP, with the "kings" and other authorities always having enough food to go around, because post-scarcity was already achieved.

I don't favor sandboxing of any kind. Like I said, life seeks diversity. It wants to thrive, in new ways. Or I believe that.

>>10534451
After filling its container, the result of a nanite swarm is more like a crystal than any other type of material we know of thus far. Not literal crystal.

>> No.10534493

>>10534476
The process of life seeks to constantly adapt, to the point where it morphs away from whatever form may have defined its identity. If you value the identity of yourself, your group, or some other aspect related to the current structure of relations defining your existence, then you should not value the ultimate goal of life itself.

>> No.10534497

>>10534451
>your mining argument
It's not enough to have different material together, they have to be arranged in a way that makes it likely for a life-generating pool to absorb them as nutrients in a way that leads to correct assembly of the efficient apparatus. It may be that all planets will have plant life, and animal life will be the main source of biodiversity between planets. I don't have enough chemistry in me to run the numbers myself. I'm merely posting on top of a fully nanite-converted Earth.

>>10534463
It's not dismissal if there is a risk of it being proven later on. In that case it's more accurate to call it delegation, whereby somebody else is set to consider the idea. If your entire planet were to refuse to consider an idea, then every person is actively consenting to the consequences of refusing to entertain it. Some ideas might be necessary to prevent us from getting filtered out by the cosmos. Until we have a firm grasp of post-singularity survival techniques, I personally can't reject any idea.

>>10534471
But he's claiming we CAN build muscles, at least on small scales.

Amazing though I agree. Good call.

>> No.10534498

>>10534493
>then you should not value the ultimate goal of life itself
I agree at the moment. While it would be interesting to watch a new form of evolution occur after a nanite proliferation event, I don't know that it would be me that ends up watching it, so I can't rationally value that experience. I'd have to find a way to control the structure that the nanite program uses to replicate itself, to ensure the outcome is evolutionary rather than life-cancelling.

>> No.10534505

>>10534493
Oh—and I don't think we've reached maximal happiness. I think we'll need to do a hell of a lot more philosophizing before we figure that one out.

>> No.10534514

>>10534497
>merely positing on top
At least, I hope that's not where I'm posting from.

>> No.10534516

>>10534505
Maximum happiness may have been bypassed. It may have been hunter-gatherer, for example.
Learning algorithms sometimes use momentum and they overshoot the optimum on the first few passes.
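
A minimal concrete version of the momentum remark: plain gradient descent with momentum on f(x) = x^2 swings through the optimum at 0 before settling (standard update rule; the constants are arbitrary).

# Gradient descent with momentum; watch x cross zero and come back.
x, v = 5.0, 0.0
lr, beta = 0.1, 0.9
for step in range(20):
    grad = 2 * x             # f'(x) for f(x) = x^2
    v = beta * v - lr * grad
    x += v
    print(step, round(x, 3))  # overshoots below 0 around step 3, then oscillates in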

>> No.10534520

>>10534516
Right, what I'm actually saying is that I believe that philosophy is the integration process that will lead to the synthesis of history where we can have a firm grasp of applied happiness concepts.

>> No.10534523

>>10534497
>Until we have a firm grasp of post-singularity survival techniques, I personally can't reject any idea.
This isn't a problem, thus I'm not scouring this board for ideas that are just possibly true, seeking a solution, and I can and do comfortably dismiss all ideas that aren't at least plausibly true.

>> No.10534529

>>10534523
>This isn't a problem
The purpose of this thread is to argue that that is true.

>> No.10534537

>>10534529
>grey goo is the great filter
>Until we have a firm grasp of post-singularity survival techniques, I personally can't reject any idea.
If there's a connection, m8, I'm not seeing it.

>> No.10534545

>>10534476
You said programs though. You didn't say nanites (whatever the fuck that is). There is the idea of a machine-phase system by Drexler, but making mechanical molecules with atomic precision, such that this definition is meaningful, might be impractical. In order to make said machines practically you need mechanosynthesis, i.e., mechanically bonding together molecules and atoms to make desired structures. We do not know if mechanosynthesis can work at all.
>>10534497
Dude, I don't know what the fuck you're saying. Silicon is extremely common, some plants and animals are able to make structures from silica. And yet, life can't reduce silica to silicon, despite the fact that there is sufficient energy in the environment to do so. It's chemically challenging for life to do so.

>> No.10534597

>>10534545
>It's chemically challenging
That's why I say I don't have the knowledge to run the numbers myself.

>do not know if mechanosynthesis can work
At the very least electrolysis can work to separate some chemicals, though obviously I lack the expertise to know the range on that phenomenon. If it can't be done on a cellular level in an organelle or with an enzyme or protein, then it can at least be done with a specialized organ. Even in that case, it might still be possible to produce a self-replicating system that creates a shell as it expands, and creates many organs to perform the processing and assembly. All this does in practice is create a membrane around the nanite program, which was already something I was assuming by the time I'd said >>10534012
>can envision super-amoebas

In other words we know the reactions CAN occur. The question is then solely one of scale of automaton. I wouldn't know how to do it myself, but if there are certain reactions we can rule out, then we can begin theorizing about the structure of a maximally dangerous grey goo over the problem space of the Earth's surface.

For what it's worth, I don't expect that an evolved solar cell would look anything like a manufactured version. I'd expect there are optimizations to be made that are tedious to discover by methodical reasoning alone, and an evolving nanite program would iterate over efficient versions much faster.

>> No.10534606

The great filter is the invention of pornography which causes a population to go extinct

>> No.10534624

>>10534606
>this entire thread is just fear porn

I... I can't refute your competing hypothesis.

>> No.10534669

>>10534597
>I don't have the knowledge
Then maybe you should stop posting. You've proposed a lot of incoherent shit, so I'm not sure where to start.
>electrolysis
To electrolytically reduce silica to silicon, the silica must be molten. There are no known organisms that can withstand temperatures this high, and I'm not even sure there are polymers that can withstand said temperatures.
You seem to be proposing making a cell with chemistry so different from current biology that nothing can degrade it. Why should we believe that this is possible at all? Second, why would such an artificial cell do better than current biology?
>reactions
Which ones, motherfucker?
>evolved solar cell
So how the fuck do you make one?

>> No.10534777

>>10534669
>organisms that can withstand temperatures this high
I admit it's a very difficult problem, but for that or any other industrial process the question can be asked, "How small can we make a forge that will still function?" True, we might not be able to get a biological cell to function in that capacity, but we might be able to construct something on a very small scale that can reproduce the environment of a forge. In that case, I suppose it's really not wrong to consider how much energy a nanite can or has to expend to replicate itself. There could be thermodynamic arguments against mechanosynthesis, and that type of upgrade to a photosynthesizing organism might not be possible at a reasonable cell size. Even so, synthetic organs should be possible, but again it becomes an efficiency problem and likely stops being valid grey goo at that point.

What I'm most unsure about isn't the logistics of self-replication, but the idea that silica has to be reduced to silicon to get efficient organic solar cells working. It could be that evolution didn't take that path because the logistics of the chemical structure of life are incompatible with the space of reactions necessary to make a self-replicating chemical program that embeds a solar cell. I don't really know what the limits of synthetic biology will look like yet, so I can't really rule out my fear here.
>Why should we believe that this is possible
Primarily the endurance of the mechanical systems we've been able to create over the past 200 years. I have considered the possibility that green goo is the smallest possible form that grey goo could take, but we already have enough information about how life spreads over a system to arm ourselves against an organism that could subsume the biosphere. I'm not highly concerned about bioweapons because I think their range will generally always be too small to trigger a true runaway ecosystem collapse.

>better than current biology
It's also possible that it might not.

>> No.10534789

>>10534669
>better than current biology
Again: I can see some bacteria being efficient to the point that they could biointegrate a nanite. It's not beyond reason, and I admitted earlier that it'd potentially stop or alter the path of a biosphere-consuming nanite swarm. In my mind, all this does is make the bacteria more efficient, and then we have to deal with super-pathogens, amoebas that can sustain a three-foot body because the inner chemical matter is able to propagate information efficiently across the cell body, and whatever else might evolve from that mode of life.

>> No.10534861

Right, I see the solution now.

A nanite is necessarily made out of some material that it can detect and absorb. This means that the most nutritious substance for a nanite is its own swarm. It thus has to at least be able to detect other versions of itself to be effective as grey goo. This concept of detection is then what has to scale to keep a swarm from consuming its own edge. It becomes systemically more efficient to factor the code sections into a membrane and mass section, and thereby we resolve the X-risk by creating a science of understanding the structure of membranes, and the forms they could take between program domains.

The vacuum of space is just a very efficient membrane that creates the illusion of autonomy and freedom from imposed forms of scarcity. It's a scapegoat for hyper-independent thinkers to try to absolve themselves of global responsibilities. Not a true membrane, but an ideological one.

IF (and I'm not saying it is) it's reasonable to think about programmatic crystals of replicating matter arrangements, then the membrane involved is a program in its own right. Some signalling method exists to prevent self-cannibalization, and in doing so opens itself up to a containing membrane. This is efficient, and gives us a direct handle on how to do nanotech without excess leaks that eat all life.

"Literally just copy life" is the true final answer. It'd happen anyway, or more aptly stated, already did.

>> No.10534918

>>10534451
>Silicon solar cells are much more efficient than biology.
Silicon solar cells capture much more solar energy, but I bet a chloroplast is infinitely more economical (for the species using them). After all, they grow on trees.

>> No.10534961

>>10530811
>self replicating very tiny machines may be impossible
fucking bacteria, how do they work?

>> No.10535028

>>10534961
I said very tiny. Bacteria are just tiny. I don't like saying the word nanomachines because it's meaningless.

>> No.10535077

>>10530696
No, it’s the time preference to develop GNR technologies.

>> No.10535233

Calc 2 is the great filter. Only societies that succeed in teaching it to their whole population can survive in the long term.