
/sci/ - Science & Math



File: 166 KB, 568x433, patience.jpg
No.11303958 [DELETED]

What job should I get if I can mathematically prove that I'm smarter than everyone else? Is this a marketable skill?

>> No.11303964

there's always someone smarter than you
if there isn't someone smarter than you where you work, you should work somewhere else

>> No.11303976

>>11303964
Yes, those are the proper mathematics for not being the smartest person, period. Were I to apply that reasoning myself, I would have to cyclically create new businesses until someone smarter than me emerged. Since I can already predict the outcome of such a system, it implies that this would not occur unless I were severely overworked and stressed from my obligations. Given that I'm not dumb enough to fall into that type of habit while creating a business, it's fair to say that such logic can only work for everyone who isn't mathematically smarter than everyone else.

>> No.11303985

>>11303976
Addendum:

If you wish to work under me, you will have to design a preservation of post-scarce trajectory contract yourself.

>> No.11304008

Security for any organization (or organization group) I create will be handled by the sheer mathematical simplicity of the evolutionary principle that violent agents are more likely to self-annihilate than persist.
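A minimal sketch of my own (not anything stated in this thread) for anyone who wants to test that evolutionary claim: standard hawk-dove replicator dynamics in Python, where "hawk" stands in for the violent agent and the payoff values V (contested resource) and C (cost of fighting) are assumed numbers. Under these assumptions the violent strategy does shrink whenever fighting costs more than the prize, though it settles near V/C rather than annihilating outright.

# Hawk-dove replicator dynamics; V and C are illustrative assumptions,
# not parameters anyone in the thread specified.

def step(p_hawk, V=1.0, C=4.0, dt=0.01):
    """Advance the hawk (violent) fraction p_hawk by one small replicator step."""
    f_hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V   # expected payoff of a hawk
    f_dove = (1 - p_hawk) * V / 2                      # doves get nothing from hawks
    f_avg = p_hawk * f_hawk + (1 - p_hawk) * f_dove
    return p_hawk + dt * p_hawk * (f_hawk - f_avg)

p = 0.9                      # start with 90% violent agents
for _ in range(20000):
    p = step(p)
print(f"long-run violent fraction: {p:.3f}")   # drifts toward V/C = 0.25, not 0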

>> No.11304468

>>11303958
If you haven't figured that out by yourself by now, then maybe you aren't as smart as you think

>> No.11304473

>>11304468
The difficulty in presenting my intelligence to the world is in deciding which proofs will create the ideal future highest convergence. Competition is a valuable metric for measuring intellect, so I posed this as a challenge to /sci/ to see if my preexisting conclusion isn't exactly as solid as I already believe it to be. In general, I've found belief to be the least useful way to manage my intellect.

>> No.11305309

>>11303958
>Is this a marketable skill?
I would like to give /sci/ more time to determine if there is an objective applicable response to this.

>> No.11305549

>>11303985
Upon further reflection, /sci/ does often comment on possible methods of improving the future. I don't suppose this actually benefits anything, but it at least shows that your thought process is somewhere near par.

>> No.11306508

Self-directed careers will necessarily be part of the culture of post-scarcity, but without them it's doubtful that social acceptance of the concept would occur at all. There must necessarily be some way to ensure the growth of diversity with regard to such careers, but this needs to occur in an equitable manner such that entire demographics do not feel isolated for lack of inclusion in the process.

Overall, I'd say we're 30% on trajectory for full post-scarce culture some time in the next 400 years.

>> No.11306525

>>11306508
I suppose it would not be altogether too difficult to loose a series of memetic alterations to the global psyche to generate pressure towards enablement of self-directed careers within the larger corporations that can afford to do it. It will of course require a second cultural wave before that concept can begin generating true career diversity on the levels necessary to traject toward post-scarcity.

>> No.11306533

>>11306525
In attempting to form an economic proof of the relevance of this concept, introducing equitable reasoning into the system results in what seems to be a proof that most modern wealth is false. It is an amusing notion that such a concept can be false in and of itself, but the dire implications this has on the quality of life of seemingly every corner of the population do not equip me with much freedom to laugh at the notion.

(To begin your reasoning to replicate my work, most wealth cannot be used to create more wealth without severe subjective loss of status.)

>> No.11306538

>>11306533
Trickle-down economics appears to produce trickle-up economics at far greater incidence than it produces trickle-down. Something is not equitable about this system and I don't pretend that it would be my responsibility to dig out exactly what.
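For anyone who wants to poke at that observation numerically, here is a toy sketch under assumptions of my own choosing (a standard "yard-sale" random-exchange model, not anything derived from the post above): every trade is a fair coin flip over a fraction of the poorer party's wealth, and wealth still drifts upward to a small minority.

# Toy "yard-sale" exchange economy. All parameters (agent count, trade count,
# stake fraction) are assumed purely for illustration.
import random

def simulate(n_agents=1000, n_trades=2_000_000, stake=0.1, seed=0):
    rng = random.Random(seed)
    wealth = [1.0] * n_agents                     # everyone starts out equal
    for _ in range(n_trades):
        a, b = rng.randrange(n_agents), rng.randrange(n_agents)
        if a == b:
            continue
        pot = stake * min(wealth[a], wealth[b])   # stake limited by the poorer trader
        if rng.random() < 0.5:                    # statistically fair coin flip
            wealth[a], wealth[b] = wealth[a] + pot, wealth[b] - pot
        else:
            wealth[a], wealth[b] = wealth[a] - pot, wealth[b] + pot
    wealth.sort(reverse=True)
    top_1_percent = sum(wealth[: n_agents // 100]) / sum(wealth)
    print(f"share of all wealth held by the top 1%: {top_1_percent:.1%}")

simulate()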

>> No.11307663

>>11305549
The market appears to be in all ways inferior to hobbyists in terms of generating the sort of idealistic solutions that will increase convergence upon post-scarce futures. Some boards will be more relevant than others in maintaining a post-scarce trajectory.

>> No.11307673

>>11303958
seems like a very trivial pursuit. also if you're in america nobody respects intelligence, it's all about charisma

>> No.11309285

>>11307673
It's not as simple as it sounds. Making someone feel like they're less intelligent from a relative perspective is fairly trivial, and happens organically. For anyone to understand a proof about the nature of intelligence would require them to be able to operate at a level where the initial axioms can be understood. Since the evidence is otherwise subtle, making someone consciously aware of the reasons that they are less intelligent becomes an intellectual exercise.

>> No.11309314

>>11303958
Please demonstrate your proof that you are in fact smarter than everyone else

>> No.11309389

>>11303958
....
Academic Professor?
....
Proof reader?
....
Poof leader?
....
WEEB!
;D

>> No.11309405

>>11303958
Garbage questions. Masked advice thread. Fucking neck yourself you low brow faggot.
> HURR DURR WHAT JOB SHOULD I GET IF ME INTELLIGENT?!?
Hmmm, something you can search in a search engine.
> IS HAVING THE SKILL TO MATHEMATICALLY PROVE MY INTELLIGENCE A MARKETABLE SKILL?!?
I really do hate you. Look up what an IQ test is you fucking weeb.

Why isn't this stupid fucking poster banned for this retarded post? Because he used the fucking word "mathematically"?

>> No.11309620

>>11309405
The emergent social dynamics of /sci/ do not benefit from posts in your format. Most posters are at least peripherally aware of this, but the more intelligent half react intelligently to each such post. Rather than bothering to indulge the perspectives of such posters, they see fit to allow you to burn yourself out trying to force the board to be better—something they already know is not within your power. Simply put, a small amount of trolling, the most scarce of the occasional shitpost, and of course reasonable corrections for tolerable posters are all useful for the most intelligent anons to cultivate an atmosphere for abstract scientific discussion.

Since the intent of bait is not always clear, /sci/ does not often report posts for deletion, and the board itself is slow enough that they do not often have need to report anything at all. Spam posts get deleted fast enough by the janitors that excess attention from mods isn't necessary to sustain board culture. Most notably, /sci/ is now aware that you bit the bait, rather than simply minimizing the thread like the more intelligent posters likely would.

This is probably not a very satisfactory answer, but it is the correct one. Were your notions about how the board ought to be run correct, this thread would have been deleted some time ago. Since it has not been deleted, we can safely infer that your beliefs are empirically wrong.

It is not expected that you will appreciate hearing this nor accept that /sci/ is simply not going to be run by your idle opinions. The urge to respond anyway will likely win in the end, and you will find yourself fully capable of writing a response that you consider satisfactory. After having posted it, you will feel justified in your perspective and (if the thread still lives) I will fully validate that perspective with explanation of why your feelings are correct.

If you disagree with any part of this then perhaps we can work together toward a mutually agreeable ideal.

>> No.11309790

>>11309620
My urge to respond to this meta bait reply is based on my respect for the effort you put into it. You implied that I tried to benefit this board which isn't correct. I simply have a sadistic tendency to verbally attack troll posters in the hope that I can elicit some negative response from them. I don't minimize posts such as these for those same sadistic and masochistic reasons. I understood the bait of the original post immediately and it seems you didn't understand mine almost immediately. It is obvious this is an advice thread which I recalled as not being allowed on this board. So I consequently used this fact as utility in my bait.

This is my second and last response to you and this isn't bait. I find it satisfying to explain the situation from my perspective. Although what I'm responding to now could be bait, that isn't how I interpreted it. Now you will probably read this being satisfied that you accurately assumed how I would have responded. You will also probably feel less lonely and even challenged. Or you could even be laughing by the time you reach this sentence knowing that I fell for some sort of meta meta bait. In any case it was interesting interacting with you and farewell.

>> No.11309884

>>11309790
I was trying something new. Perhaps closure is the simplest way to dissolve a dispute. The easiest way to test this was with a sufficient prediction wall. Across the wide range of personalities that would motivate an initial reply like yours, I had to navigate the precise set of mental satiators that would most diligently reward any effort employed.

In that sense, it was intended to function as calculated anti-bait. It appears to have worked more efficiently than I had any expectation for.


The bait in the OP serves multiple purposes. For one, it is intended to jab intelligence itself in the ego to demonstrate that the world does not value intelligence of the abstract variety in any inherent sense. This matches with current research in neural semiotics in confirmation of an evolutionary model of consciousness whereby perception is shaped according to what works for survival, and irrelevant modes of perception atrophy accordingly. The same logic implies that the inner subjective value we feel for our own cognitive superiority may well be ill-fit for the ever-changing environment that it has created. In this case, the utility of intelligence with regard to earning a career.

The difficulty in correcting this perspective is that it is less readily communicated via a well thought out essay than it is re-inferred by each mind in turn. That is, /sci/ (or the higher thinkers on it) is sufficiently intelligent to contextualize a personal emotional response to the inquiry. However, this does not lead to wide perspective repair. Extra steps must be taken to recover a correct state of healthy ego after a sufficiently provocative jab.

It would be easier to explain were this not a work in progress.

>> No.11310621

>>11303958
>Implying there is a mathematical equation to prove you're "smarter than everyone else", and asking /sci/ if they know.

So basically the collective body of /sci/ is greater than yourself, making your question negligible.

Occupation: Unemployable.

>> No.11311435

If you can mathematically prove you're smarter than everyone else, you shouldn't have to ask this.
Go fuck yourself byebye.

>> No.11311451

>>11309314
It has taken a significant amount of work to create proofs at the level that I do. There are epistemic principles within me that I have found no evidence of elsewhere in the world. My inferences about my own intelligence are based not on my discovery of such principles, but on the rate at which my mind was able to infer them, independently of any other source. The difficulty in communicating these methods is that originality is a low-entropy solution to verifying the integrity of a claim to intelligence, and properly measuring general intellectual capacity beyond creative-type expansive intelligence requires me to remap the current form of my proofs into a formal system that will be highly useful to other individuals.

>>11311435
It has already been established that my rationale for asking was not straightforward. Consider: >>11309884

>>11310621
Yes, because you can generate hypotheses at a much faster rate than I would be able to issue forth under my own creativity. Methodical hypotheses tend to be the least useful kind, so I would favor /sci/ having jobs over myself. This does not imply anything about my own career, however, as you have so carelessly stepped on the trajectory of post-scarcity to state.

>> No.11311640

>>11304008
An additional internal policy of minimally functioning competence will be employed, so that little need for extraneous competitive urges can emerge. Since employees are most productive when excess administration is trimmed, this will provide forward compliance of my organizations with the inexorable shift toward hobby-based methods of maintaining the tools that ensure the current level of ease we enjoy with regard to our survival needs.

It is quite interesting to design a world worth having a place in.

>> No.11311658

>>11303958
Show us the proof

>> No.11311695

>>11311658
Interesting. This is actually much easier done than the challenge posed by >>11309314 because their request is impersonal and demands a solution in pure principle, while showing a proof to "us" (over the scope of /sci/ posters) requires a far smaller body of proofs.

Since humanity has yet to produce actionable evidence that it has plans to perpetually value intelligence, any meta-proofs I make along the way to generating that final proof state could be used by malicious parties to more efficiently track down intelligent individuals for inconsiderate motives.

It seems that if I do not personally find a gratifying solution to the career crisis that an ever increasing intelligent fraction of the population faces then it will trend invariably toward economic and other pressures fostering mad scientist style research avenues. I'll have to initiate, explore and complete research into artificially generated X-risks before I can decipher the moral complex of producing such proficient proofs.

It seems to me that the two most dangerous types of weaponry possible per (current) theoretical physics would be false vacuum saturation chambers and anything involving strangelets. The former is a device that creates slow methodical bends in spacetime to replace all the intervening matter of a manifold with its false vacuum counterpart. Since this seems to also produce the technology necessary to escape such a manifold, it is a far future threat that need not be subject to moral consideration at this time.

Let me spend a moment thinking about femtotechnology and see what becomes safe to discuss.

>> No.11311729

I have determined that mad science is not a formal X-risk, but must caution that militarization stands in violent contravention of >>11304008.

To protect the security of organizations that do value intelligence for its own sake, I advise "not touching you" protests against public military offices. After everyone has gone home for the night, camp in the thousands just outside (where you are legally allowed to go) places that the government wishes to feel are secure, and whenever any office of the law attempts to interact with you, insist that you are camping alone. This will create a horrible backlog that will slow their efforts as an organizational body considerably.

>> No.11311731

>>11311695
This is bullshit, it's time traveling logic.

> I cannot give you any information about the future because it would be risky
> I cannot show you the mechanism of my time travel machine before measuring the risks

Go back to >>>/x/ kid

>> No.11311737

>>11311731
A time traveler has enough future to overcome those problems and return to this moment instantly. Such excuses are not viable and thus do not relate to my fleeting moral considerations.

>> No.11311752

>>11311695
It appears that a maximally unencrypted internet is the ideal method of detecting malicious AI. This does not adversely affect security, since having been encrypted does not ensure any information packet is free of exploit.

It seems acting as a white-hat security watchdog is the only viable/existentially safe career path for me at this moment.

>> No.11311760

>>11311737
But you are not any different from them, you just created your own set of rules where nobody can disprove your claims, just like time traveling fags. And if you are smarter than everyone, what benefit would you obtain by asking advice from less intelligent creatures?

>> No.11311779

>>11311760
I have readily disproven any time traveler who claims to not have enough time to process moral imperatives before an anonymous audience. Your logic does not follow.

The purpose is to see how well /sci/ can reply to the concept of proving the depth of one's intelligence. This is not a serious request for career advice, since /sci/ is already well familiar with the false promise afforded by the modern job market.

I predict humanity's survival.

>> No.11311836

>>11311779
Well, since I don't give a fuck if you are smarter than me, whether there's a proof that shows it or not, I'm playing your game.

>What job should I get if I can mathematically prove that I'm smarter than everyone else?
The same ones that people with average to top intelligence perform; we currently have no vacancies for the smartest person in the world

>Is this a marketable skill?
I am not sure how much money the smartest person in the world should earn, but I am sure that it won't be very marketable if you do not show them your proof. Even if you are still afraid of showing it, remember that the existentialist mantra "Actions define what a person is" is really taken into account in a job application, so you gotta show them stuff that people usually attribute to high intelligence besides your almighty IQ score and your postdoc degree

>> No.11311845

>>11311836
>no vacancies
Thank you anon. That is quite an amusing response.

I believe you have exercised skillful professionalism in your reply and can consider the matter closed.