
/sci/ - Science & Math



>> No.15904822
File: 32 KB, 400x400, 1702667128281.jpg

I can say with 100% certainty that aliens don't exist; they are a human fantasy fueled by cope and seethe

>> No.15800706
File: 32 KB, 400x400, 1923222681260.jpg

Education and job requirements play the biggest role in the average IQ of a country

in 1900, America and Britain had an average IQ of 70, since all you had to be was a factory worker
so judging it by country doesn't work

if you want to test it on a genetic level, you have to use modern America as the standard, because you have differing races that all fall under the same economy and culture. there will still be factors that prevent you from getting an accurate reading

but it should be the most accurate that we have for now

>> No.15671011
File: 32 KB, 400x400, 1467153211804.jpg


We now know that rats do get "bored" and do assign a sense of "value" to something based on its scarcity

>> No.15554198
File: 32 KB, 400x400, 0544417387885.jpg

Why do aliens/UFOs only want to visit America?

Is it because Americans are the pinnacle of human intelligence?

>> No.15278128
File: 32 KB, 400x400, 1639413591286.jpg

I'm about to graduate with an MSc in Machine Learning. The pace of progress in the field is frankly absurd. I'm convinced that we might develop AGI within the next few years, and that, as things are going now, it would most likely be really, really bad for us. Like, human extinction-level bad. As to why, I would urge you to read at least a bit of the r/controlproblem FAQ page, which explains very succinctly, at least much better than I could, why a benevolent AGI is the exception, not the rule. It only takes a few minutes to read.

To me, it is quite apparent that if we are to create something smarter than us, we should approach it with the utmost care. Why so many people here are convinced that an AGI would automatically be beneficial is puzzling to me; I do not understand that leap of logic. I do want to create an aligned intelligence; that would be amazing and would probably indeed usher in a utopian society. The whole crux of the problem is getting it right, because we literally only have one chance. It will be the last problem humanity faces, for better or worse.

I would urge anyone willing to listen to educate themselves on why AI alignment/safety is so important, and why it's so hard. Another good resource I would recommend is Rob Miles' YouTube channel. Some of you may recognize him from his appearances on Computerphile.

I understand that some of you are convinced this would be the best thing to happen in your lifetime. But for me, personally, it fills me with a sense of dread and impending doom. Like climate change, but 100x worse and more imminent. I get that it's nice to be optimistic about it, but being so blindly accelerationist as to call anyone who says "maybe we should be careful with this" a luddite is absurd.

Given that these might be the last few years of life as we know it, my plan for now is to enjoy the present and the company of my loved ones while I still can.

>> No.14819880
File: 32 KB, 400x400, verycool.jpg

>flowers for algernon was real
