I'm begging you to just get a normal job and give to effective charities.
Doctor in Australia giving 10% forever
The way you describe WELLBYs - as being heavily influenced by the hedonic treadmill and so potentially unable to distinguish between the wellbeing of the Sentinelese and the modern Londoner - seems to highlight their problems. There's a good chance a WELLBY analysis would have argued against the agricultural revolution, which doesn't seem like a useful opinion.
No, it's not obvious, but the implications are absurd enough (the agricultural revolution was a mistake, cities were a mistake) that I think it's reasonable to discard the idea.
I encourage you to publish that post. I also feel that the AI safety argument leans too heavily on the DNA sequences -> diamondoid nanobots scenario
Consider entering your post in this competition: https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize
I agree that revealed preferences and survey responses can differ. Unless WELLBYs take account of revealed preferences, they'll fail to predict what people actually want.
"ingestion of said natural sources does not seem to include the side effects from their synthesized forms"
Can you provide a source for this?
I think this is a great question. The lack of clear, demonstrable progress in reducing existential risks, and the difficulty of making and demonstrating any progress, makes me very skeptical of longtermism in practice.
I think shifting focus from tractable, measurable issues like global health and development to issues that - while critical - are impossible to reliably affect, might be really bad.
Thanks for this. It's important to give to rescue and relief efforts when disasters happen, in addition to giving to development efforts in the good times, so that communities are less vulnerable to disasters.
The information you've provided here is really valuable. Thank you. It will inform how I donate.
I don't like this post and I don't think it should be pinned to the forum front page.
A few reasons:
The general message of "go and spread this message, this is the way to do it" is too self-assured and unquestioning. It comes across as cultish, and it's off-putting as the first thing forum visitors will see.
The thesis of the post is that a useful thing for everyone to do is spread a message about AI safety, but it's not clear which messages you think should be spread. The only two I could find are "relate it to Skynet" and "even if AI looks safe, it might not be".
Too many prerequisites: this post refers to five or ten other posts in a "this concept is properly explained here" way, and many of those posts reference further posts. To me this is a red flag for poor writing and/or poor ideas. Either a) your ideas are so complex that they do indeed require many thousands of words to explain (in which case, fine); b) they're not that complex but aren't being communicated well; or c) bad ideas are being obscured in a tower of readings that gatekeeps critics away. I'd like to see the actual ideas you're referring to expressed clearly, instead of references to other posts.
Having this pinned to the front page further reinforces the disproportionate focus that AI Safety gets on the forum.