/sci/ - Science & Math

>> No.16046244
File: 377 KB, 400x521, yudkowsky bayes.png

https://www.lesswrong.com/posts/xHA7jHafbkxrYLJhx/why-am-i-me

>> No.15857498
File: 377 KB, 400x521, yudkowsky bayes.png

>>15857429
>emergent property
Emergence is a meaningless concept that says nothing about how a system actually functions.

https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futility-of-emergence

>A fun exercise is to eliminate the adjective "emergent" from any sentence in which it appears, and see if the sentence says anything different:

>Before: Human intelligence is an emergent product of neurons firing.
>After: Human intelligence is a product of neurons firing.
>Before: The behavior of the ant colony is the emergent outcome of the interactions of many individual ants.
>After: The behavior of the ant colony is the outcome of the interactions of many individual ants.
>Even better: A colony is made of ants. We can successfully predict some aspects of colony behavior using models that include only individual ants, without any global colony variables, showing that we understand how those colony behaviors arise from ant behaviors.

>Another fun exercise is to replace the word "emergent" with the old word, the explanation that people had to use before emergence was invented:

>Before: Life is an emergent phenomenon.
>After: Life is a magical phenomenon.
>Before: Human intelligence is an emergent product of neurons firing.
>After: Human intelligence is a magical product of neurons firing.

>Does not each statement convey exactly the same amount of knowledge about the phenomenon's behavior? Does not each hypothesis fit exactly the same set of outcomes?
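
A throwaway Python sketch of that first exercise (my own illustration, not from the post): delete the adjective and check whether the sentence predicts anything different.

# Minimal illustration (mine) of the "eliminate the adjective" exercise.
SENTENCES = [
    "Human intelligence is an emergent product of neurons firing.",
    "The behavior of the ant colony is the emergent outcome of the interactions of many individual ants.",
]

def drop_emergent(sentence: str) -> str:
    # Remove the adjective, fixing the article "an" -> "a" where needed.
    return sentence.replace("an emergent ", "a ").replace("emergent ", "")

for before in SENTENCES:
    print("Before:", before)
    print("After: ", drop_emergent(before))

Both outputs make exactly the same predictions as their inputs, which is the point of the exercise.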

>> No.15798531
File: 377 KB, 400x521, yudkowsky bayes.png

https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities
https://www.lesswrong.com/s/FrqfoG3LJeCZs96Ym/p/ooypcn7qFzsMcy53R

>> No.15012349
File: 377 KB, 400x521, yudkowsky bayes.png

>>14997741
https://www.lesswrong.com/posts/zmSuDDFE4dicqd4Hg/you-only-need-faith-in-two-things

>You only need faith in two things: That "induction works" has a non-super-exponentially-tiny prior probability, and that some single large ordinal is well-ordered. Anything else worth believing in is a deductive consequence of one or both.

>(Because being exposed to ordered sensory data will rapidly promote the hypothesis that induction works, even if you started by assigning it very tiny prior probability, so long as that prior probability is not super-exponentially tiny. Then induction on sensory data gives you all empirical facts worth believing in. Believing that a mathematical system has a model usually corresponds to believing that a certain computable ordinal is well-ordered (the proof-theoretic ordinal of that system), and large ordinals imply the well-orderedness of all smaller ordinals. So if you assign non-tiny prior probability to the idea that induction might work, and you believe in the well-orderedness of a single sufficiently large computable ordinal, all of empirical science, and all of the math you will actually believe in, will follow without any further need for faith.)

>(The reason why you need faith for the first case is that although the fact that induction works can be readily observed, there is also some anti-inductive prior which says, 'Well, but since induction has worked all those previous times, it'll probably fail next time!' and 'Anti-induction is bound to work next time, since it's never worked before!' Since anti-induction objectively gets a far lower Bayes-score on any ordered sequence and is then demoted by the logical operation of Bayesian updating, to favor induction over anti-induction it is not necessary to start out believing that induction works better than anti-induction, it is only necessary *not* to start out by being *perfectly* confident that induction won't work.)
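
A minimal numerical sketch of that updating argument (mine, not from the post; the 0.9 / 0.1 likelihoods and the 1e-6 prior are illustrative assumptions): once ordered data starts arriving, the inductive hypothesis overwhelms the anti-inductive one even from a tiny starting prior.

# Toy Bayesian update (my own illustration). "Induction works" predicts the
# pattern continues with probability 0.9 on each observation; "anti-induction"
# predicts it continues with probability 0.1. The data is an ordered sequence
# that continues the pattern every time. All numbers are assumptions chosen
# to show the shape of the update, not anything taken from the post.

P_CONTINUE_IF_INDUCTION = 0.9
P_CONTINUE_IF_ANTI = 0.1
PRIOR_INDUCTION = 1e-6   # tiny, but not super-exponentially tiny

def posterior_induction(n_observations: int) -> float:
    """P(induction works | n pattern-continuing observations), via odds form."""
    prior_odds = PRIOR_INDUCTION / (1.0 - PRIOR_INDUCTION)
    likelihood_ratio = (P_CONTINUE_IF_INDUCTION / P_CONTINUE_IF_ANTI) ** n_observations
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

for n in (0, 5, 10, 15):
    print(f"after {n:2d} observations: P(induction works) = {posterior_induction(n):.6f}")

With these assumed numbers the posterior climbs past 0.999 within about a dozen observations; the prior only had to avoid being so small that no finite amount of evidence could lift it.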

>> No.14960228
File: 377 KB, 400x521, yudkowsky bayes.png

>>14959040
https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futility-of-emergence

>A fun exercise is to eliminate the adjective "emergent" from any sentence in which it appears, and see if the sentence says anything different:

>Before: Human intelligence is an emergent product of neurons firing.
>After: Human intelligence is a product of neurons firing.
>Before: The behavior of the ant colony is the emergent outcome of the interactions of many individual ants.
>After: The behavior of the ant colony is the outcome of the interactions of many individual ants.
>Even better: A colony is made of ants. We can successfully predict some aspects of colony behavior using models that include only individual ants, without any global colony variables, showing that we understand how those colony behaviors arise from ant behaviors.

>Another fun exercise is to replace the word "emergent" with the old word, the explanation that people had to use before emergence was invented:

>Before: Life is an emergent phenomenon.
>After: Life is a magical phenomenon.
>Before: Human intelligence is an emergent product of neurons firing.
>After: Human intelligence is a magical product of neurons firing.

>Does not each statement convey exactly the same amount of knowledge about the phenomenon's behavior? Does not each hypothesis fit exactly the same set of outcomes?

>> No.14595722
File: 377 KB, 400x521, yudkowsky bayes.png

https://www.lesswrong.com/tag/highly-advanced-epistemology-101-for-beginners

>> No.14582040
File: 377 KB, 400x521, YudkowskyGlory.png

Give it to me straight /sci/. How fucked are we?

>> No.11907790
File: 377 KB, 400x521, yudkowsky bayes.png

>>11904921
The AI alignment problem

>> No.11607468
File: 377 KB, 400x521, yudkowsky bayes.png

https://www.lesswrong.com/posts/xCG6kXXmYwKCYHeif/if-mwi-is-correct-should-we-expect-to-experience-quantum

>I'm signed up for cryonics. I'm a bit worried about what happens to everyone else.

>Going on the basic anthropic assumption that we're trying to do a sum over conditional probabilities while eliminating Death events to get your anticipated future, then depending on to what degree causal continuity is required for personal identity, once someone's measure gets small enough, you might be able to simulate them and then insert a rescue experience for almost all of their subjective conditional probability. The trouble is if you die via a route that degrades the detail and complexity of your subjective experience before it gets small enough to be rescued, in which case you merge into a lot of other people with dying experiences indistinguishable from yours and only get rescued as a group. Furthermore, anyone with computing power can try to grab a share of your soul and not all of them may be what we would consider "nice", just like if we kindly rescued a Babyeater we wouldn't go on letting them eat babies. As the Doctor observes of this proposition in the Finale of the Ultimate Meta Mega Crossover, "Hell of a scary afterlife you got here, missy."

>The only actual recommendations that emerge from this set of assumptions seem to amount to:

>1) Sign up for cryonics. All of your subjective future will continue into quantum worlds that care enough to revive you, without regard for worlds where the cryonics organization went bankrupt or there was a nuclear war.

>> No.11562274
File: 377 KB, 400x521, yudkowsky bayes.png

>>11555746
https://www.lesswrong.com/posts/kXSETKZ3X9oidMozA/the-level-above-mine

>For so long as I have not yet achieved that level, I must acknowledge the possibility that I can never achieve it, that my native talent is not sufficient. When Marcello Herreshoff had known me for long enough, I asked him if he knew of anyone who struck him as substantially more natively intelligent than myself. Marcello thought for a moment and said "John Conway - I met him at a summer math camp." Darn, I thought, he thought of someone, and worse, it's some ultra-famous old guy I can't grab. I inquired how Marcello had arrived at the judgment. Marcello said, "He just struck me as having a tremendous amount of mental horsepower," and started to explain a math problem he'd had a chance to work on with Conway.

>Not what I wanted to hear.

>Perhaps, relative to Marcello's experience of Conway and his experience of me, I haven't had a chance to show off on any subject that I've mastered as thoroughly as Conway had mastered his many fields of mathematics.

>Or it might be that Conway's brain is specialized off in a different direction from mine, and that I could never approach Conway's level on math, yet Conway wouldn't do so well on AI research.

>Or...

>...or I'm strictly dumber than Conway, dominated by him along all dimensions. Maybe, if I could find a young proto-Conway and tell them the basics, they would blaze right past me, solve the problems that have weighed on me for years, and zip off to places I can't follow.

Is Le Metabolic Privilege Man right about John Conway?

>> No.11549687
File: 377 KB, 400x521, yudkowsky bayes.png

https://www.lesswrong.com/posts/kXSETKZ3X9oidMozA/the-level-above-mine

>Or sadder: Maybe I just wasted too much time on setting up the resources to support me, instead of studying math full-time through my whole youth; or I wasted too much youth on non-mathy ideas. And this choice, my past, is irrevocable. I'll hit a brick wall at 40, and there won't be anything left but to pass on the resources to another mind with the potential I wasted, still young enough to learn. So to save them time, I should leave a trail to my successes, and post warning signs on my mistakes.

Is Le Metabolic Privilege Man right?

>> No.11516562
File: 377 KB, 400x521, yudkowsky bayes.png

>>11511131
Free will isn't even a coherent concept in the first place.

https://wiki.lesswrong.com/wiki/Free_will_(solution)

>> No.11500526
File: 377 KB, 400x521, yudkowsky bayes.png

>>11498416
https://www.lesswrong.com/posts/C8nEXTcjZb9oauTCW/where-recursive-justification-hits-bottom
https://www.lesswrong.com/posts/zmSuDDFE4dicqd4Hg/you-only-need-faith-in-two-things

>You only need faith in two things: That "induction works" has a non-super-exponentially-tiny prior probability, and that some single large ordinal is well-ordered. Anything else worth believing in is a deductive consequence of one or both.

Is Le Metabolic Privilege Man right?

>> No.11423434
File: 377 KB, 400x521, yudkowsky bayes.png

https://wiki.lesswrong.com/wiki/The_Quantum_Physics_Sequence

>> No.11422216
File: 377 KB, 400x521, yudkowsky bayes.png

https://www.readthesequences.com/The-Dilemma-Science-Or-Bayes

>> No.11422122
File: 377 KB, 400x521, yudkowsky bayes.png

>>11421783
https://www.readthesequences.com/How-To-Convince-Me-That-Two-Plus-Two-Equals-Three

>> No.11323610
File: 377 KB, 400x521, yudkowsky bayes.png

>>11321323
Many-Worlds being the correct interpretation of quantum mechanics. The ball slides down in all directions, just in different Everett branches.

https://www.lesswrong.com/posts/9cgBF6BQ2TRB3Hy4E/and-the-winner-is-many-worlds

>> No.11280860
File: 377 KB, 400x521, yudkowsky bayes.png

https://www.lesswrong.com/posts/zmSuDDFE4dicqd4Hg/you-only-need-faith-in-two-things
https://www.lesswrong.com/posts/C8nEXTcjZb9oauTCW/where-recursive-justification-hits-bottom

>> No.11277209
File: 377 KB, 400x521, yudkowsky bayes.png

>>11277076
https://www.lesswrong.com/posts/XqvnWFtRD2keJdwjX/the-useful-idea-of-truth

>The reply I gave to Dale Carrico - who declaimed to me that he knew what it meant for a belief to be falsifiable, but not what it meant for beliefs to be true - was that my beliefs determine my experimental predictions, but only reality gets to determine my experimental results. If I believe very strongly that I can fly, then this belief may lead me to step off a cliff, expecting to be safe; but only the truth of this belief can possibly save me from plummeting to the ground and ending my experiences with a splat.
>Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'.

>> No.11272883
File: 377 KB, 400x521, yudkowsky bayes.png

>>11272818
https://www.lesswrong.com/posts/XhaKvQyHzeXdNnFKy/probability-is-subjectively-objective

>> No.11251708
File: 377 KB, 400x521, yudkowsky bayes.png

>>11251000
Free will isn't even a coherent concept.

https://wiki.lesswrong.com/wiki/Free_will_(solution)

>> No.11223022
File: 377 KB, 400x521, yudkowsky bayes.png

Not an audiobook but still relevant to AI:
https://www.youtube.com/watch?v=EUjc1WuyPT8

>> No.11183799
File: 377 KB, 400x521, yudkowsky bayes.png

>>11181070
Eliezer Yudkowsky

>> No.11183013
File: 377 KB, 400x521, yudkowsky bayes.png

>>11182537
Chances are that AI will be able to recursively improve itself and create an intelligence explosion far faster than human intelligence can be improved.

https://intelligence.org/files/AIPosNegFactor.pdf

Building a 747 from scratch is not easy. But is it easier to:
• Start with the existing design of a biological bird,
• and incrementally modify the design through a series of successive stages,
• each stage independently viable,
• such that the endpoint is a bird scaled up to the size of a 747,
• which actually flies,
• as fast as a 747,
• and then carry out this series of transformations on an actual living bird,
• without killing the bird or making it extremely uncomfortable?
I’m not saying it could never, ever be done. I’m saying that it would be easier to build the 747, and then have the 747, metaphorically speaking, upgrade the bird. “Let’s just scale up an existing bird to the size of a 747” is not a clever strategy that avoids dealing with the intimidating theoretical mysteries of aerodynamics. Perhaps, in the beginning, all you know about flight is that a bird has the mysterious essence of flight, and the materials with which you must build a 747 are just lying there on the ground. But you cannot sculpt the mysterious essence of flight, even as it already resides in the bird, until flight has ceased to be a mysterious essence unto you.

The above argument is directed at a deliberately extreme case. The general point is that we do not have total freedom to pick a path that sounds nice and reassuring, or that would make a good story as a science fiction novel. We are constrained by which technologies are likely to precede others.
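
A loose toy model of the recursive-improvement claim above (my own sketch; the feedback constant and step count are arbitrary, and nothing here comes from the paper): when each gain in capability also speeds up further gains, the trajectory pulls away quickly from one improved at a fixed external rate.

# Toy comparison (mine, illustrative only): a system whose improvement rate
# scales with its current capability, versus one improved from outside at a
# constant rate. Parameters are arbitrary assumptions.

def capability_curves(steps: int, feedback: float = 0.1, start: float = 1.0):
    recursive, external = [start], [start]
    for _ in range(steps):
        recursive.append(recursive[-1] * (1.0 + feedback))  # compounding self-improvement
        external.append(external[-1] + feedback * start)    # fixed-rate outside improvement
    return recursive, external

recursive, external = capability_curves(steps=50)
print(f"after 50 steps: self-improving ~ {recursive[-1]:.1f}, externally improved ~ {external[-1]:.1f}")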
