+1 to all the other resources in these answers, but never underestimate how useful it is to just get started! I keep this link bookmarked, which shows the currently-open Metaculus questions which will close soonest. Making quick predictions on these questions keeps the feedback loop as tight as possible (although it's still not that tight to be honest).
Also, Superforecasting is great but longer than it needs to be. I've heard that there are good summaries out there, but I don't personally know where they are.
This looks great! I'm concerned that it won't get the traffic it needs to be useful to people. Have you considered/attempted reaching out to 80K to put a link on the job board or something? That's my go-to careers resource, and I think it's the main way I could learn about the existence of something like this once this post is off the front page.
Anecdotally, I've found that describing EA as "a community of people trying to do as much good as possible with our time and money" gets a good response.
Agree that this is worth a shot; it would be huge if it worked. But it seems like Mr Beast and Mark Rober might be selecting causes to avoid controversy, which would make it hard to get EA through. Both of their platforms are built mainly on mass appeal. Planting trees and cleaning up the oceans are extremely uncontroversial causes - nobody is out there arguing that they do net harm. This is not the case with EA.
That said, if any of you folks went to high school with Mark Rober or something, I would still be extremely excited to try this. I have a 3rd- or 4th-degree connection to him, but that seems a bit too far to do much of anything.
thank machine doggo
Not entirely sure if I interpreted your intentions right when I tried to write an answer. In particular, I'm confused by the line "I could create just a little more hedonium". My understanding is that hedonium refers to the arrangement of matter that produces utility most efficiently. Is the narrator deciding whether to convert themselves into hedonium?
I ended up interpreting things as if "hedonium" was meant to mean "utility", and the narrator is deciding what their last thought should be - how to produce just a little more utility with their last few computations before the universe winds down. Hopefully I interpreted correctly - or if I was incorrect, I hope this feedback is helpful :)
...it was beautiful. And that is good.
Bro this is really scary. Well done.
Observation: prion-catalysis or not, any vaccine-evasion measures at all seem extraordinarily dangerous. For a highly infectious threat, the fastest response we have right now is mass vaccine manufacture, and that seems just barely fast enough. But our vaccine tech is public knowledge, and an apocalyptic actor can take all the time they want to design a countermeasure.
Once a threat with any sort of countermeasure is released, we first have to go through a vaccine development cycle just to discover that the countermeasure exists, then a research cycle to figure out how to beat it, then a development/deployment cycle to use those research results and actually beat it. Those latter two phases seem quite slow and notably hard to speed up, since we'd have to find ways to prepare to do fast research, manufacturing, and deployment in a very general sense, to be able to respond to any plausible anti-vaccine measure.
I agree that relatively small improvements in public health could potentially be highly beneficial. Research on this might be totally tractable.
What I am concerned might be intractable is deploying results. Public health (and all health-relevant products) is a massive industry, with a lot of strong interests pushing in different directions. It seems entirely possible that all the answers are already out there, just drowned out by food, exercise, sexual health, self-help, and other industries.
There's so much noise out there that it seems unlikely a few EAs will be able to get a word in edgewise.
Thank you for posting! Many kudos for contributing to the frontpage discussion rather than lurking for years like many people (including me).
I agree with most of your assessment here. But I think rather than "simple altruism", it would be better to focus on "altruistic intent". Making this substitution doesn't change much; the major differences are just that it includes EA itself and excludes cynically motivated giving. The thing I think we care about is people trying to do good, not specifically people doing non-EA things.
That said, increasing altruistic intent is, I think, included under the heading of broad longtermism. I don't have a source for this, but my impression is that not much work goes towards broad longtermism because it seems really hard and not that urgent, and because EAs tend to be bad at the key skills involved, like persuasion and politics.