Welcome! Glad you found us.
My colleague Michelle wrote some related thoughts here: https://forum.effectivealtruism.org/posts/3k4H3cyiHooTyLY6p/why-i-find-longtermism-hard-and-what-keeps-me-motivated
Yep - agree with all that, especially that it would be cool for somebody to look into the general question.
My impression is that a lot of her quick success was because her antitrust work tapped into progressive anti-Big Tech sentiment. It's possible EAs could somehow fit into the biorisk zeitgeist, but otherwise I think it would take a lot of thought to figure out how an EA could replicate this.
Agreed that in her outlying case, most of what she's done is tap into a political movement in ways we'd prefer not to. But is that true for high-performers generally? I'd hypothesise that elite academic credentials + policy-relevant research + willingness to be political is enough to get people into elite political positions, maybe a tier lower than hers and a decade later, but it'd be worth knowing how all the variables in these different cases contribute.
Fair enough. I guess just depends on exactly how broad/narrow of a category Linch was gesturing at.
I don't think Alan's really an example of this.
I think I’ve always been interested in computers and artificial intelligence. I followed Kasparov and Deep Blue, and it was actually Ray Kurzweil’s Age of Spiritual Machines, which is an old book, 2001 … It had this really compelling graph. It’s sort of cheesy, and it involves a lot of simplifications, but in short, it shows basically Moore’s Law at work and extrapolated ruthlessly into the future. Then, on the second y-axis, it shows the biological equivalent of computing capacity of the machine. It sho
Fwiw, for mental health I'm not sure whether therapy is more likely to treat the 'root causes' than medications. You could have a model where some 'chemical thingie' that can be treated by meds is the root cause of mental illness and the actual cognitive thoughts treated by therapy are the symptoms.
In reality, I'm not sure the distinction is even meaningful given all the feedback loops involved.
I don't think most people would consider prevention a type of preparation. EA-funded biorisk efforts presumably did not consider it that way. And more to the point, I do not want to lump prevention together with preparation because I am making an argument about preparation that is separate from prevention. So it's not just about semantics, but about being precise on which efforts did well or poorly.
I think it actually is common to include prevention under the umbrella of pandemic preparedness. For example, here's the Council on Foreign Relations' independent committ...
I think research into novel vaccine platforms like mRNA is a top priority. It's neglected in the sense that way more resources should be going into it, but my impression is also that the USG does make up a decent proportion of funding for early-stage research into that kind of thing. So that's a sense in which the U.S.'s preparedness was probably good relative to other countries, though not in an absolute sense.
Here's an article I skimmed about the importance of govt (mostly NIH) funding for the development of mRNA vaccines. https://www.scientificamerican.com...
"effective pandemic response is not about preparation"
FYI - my impression is that pandemic preparedness is often defined broadly enough to include things like research into defensive technology (e.g. mRNA vaccines). It does seem like those investments were important for the response.
Several other people who work with them are connected to EA.
Note that Open Phil funded this project. https://www.nti.org/newsroom/news/nti-launch-global-health-security-index-new-grant-open-philanthropy-project/
In case anybody's curious: https://coronavirus.jhu.edu/map.html
I do think CHS should get some credit for arguing for taking pandemic response very seriously early on. For example, I think Tom had some tweets arguing for pulling out all the stops on manufacturing more PPE in January 2020.
Note - I'm a bit biased since I was working on biorisk at Open Phil the first time Open Phil funded CHS.
Fwiw, my vague memory is that some other people at CHS, including Tom Inglesby (the director) did better than Adalja. I think Inglesby's Twitter was generally pretty sensible though I don't have time to go back and check. I'd guess that, like most experts, he was too pessimistic about travel restrictions, though. Maybe masks, too?
If you're referring to what I think you are, it was a different group at Hopkins
If I had to pick two parts of it, it would be 3 and 4 but fwiw I got a bunch out of 1 and 2 over the last year for reasons similar to Max.
Also seems relevant that both 80k and CEA went through YC (though I didn't work for 80k back then and don't know all the details).
Indeed, IIRC, EAs tend to be more progressive/left-of-center than the general population. I can't find the source for this claim right now.
The 2019 EA Survey says:
"The majority of respondents (72%) reported identifying with the Left or Center Left politically and just over 3% were on the Right or Center Right, very similar to 2018."
I figured some people might be interested in whether the orientation toward longtermism that Michelle describes above is common at EA orgs, so I wanted to mention that almost everything in this post could also be describing my personal experience. (I'm the director of strategy at 80,000 Hours.)
I think this request undermines how karma systems should work on a website. 'Only people who have engaged with a long set of prerequisites can decide to make this post less visible' seems like it would systematically prevent posts people want to see less of from being downvoted.
I really like Holly Elmore's blogpost "Kicking an Addiction to Self-Loathing."
Most native English speakers from outside of particular nerd cultures also would have no clue what it means.
Fwiw, the forum explicitly discourages unnecessary rudeness (and encourages kindness). I think tone is part of that and the voting system is a reasonable mechanism for setting that norm. But there's room for disagreement.
If the original poster came back and edited in response to feedback or said that the tone wasn't intentional, I'd happily remove my downvote.
I downvoted this. "Please, if you disagree with me, carry your precious opinion elsewhere" reads to me as more than slightly rude and effectively an intentional insult to people who disagree with the OP and would otherwise have shared their views. I think it's totally reasonable to worry in advance about a thread veering away from the topic you want to discuss and to preempt that with a request to directly answer your question [Edited slightly] and I wouldn't have downvoted without the reference to other people's "precious views."
Lobbying v. grassroots advocacy
This is just a semantic point, but I think you probably don't want to call what you're proposing a "lobbying group." Lobbying usually refers to one particular form of advocacy (face-to-face meetings with legislators) and in many countries it is regulated more heavily than other forms of advocacy.
(It's possible that in the UK, "lobbying group" means something more general than it does in the U.S.)
This is true in the U.S., which I know best. Wikipedia suggests it's true in the EU but appears less tr...
I didn't actually become a member until after the wording of the pledge changed but I do vividly remember the first wave of press because all my friends sent me articles showing that there were some kids in Oxford who were just like me.
Learning about Giving What We Can (and, separately, Jeff and Julia) made me feel less alone in the world and I feel really grateful for that.
Thanks for pointing this out (and for the support).
We only update the 'Last updated' field for major updates, not small ones. I think we'll rename it 'Last major update' to make that clearer.
The edit you noticed wasn't intended to indicate that we've changed our view on the effectiveness of existential risk reduction work. That paragraph was only meant to demonstrate how it’s possible that x-risk reduction could be competitive with top charities from a present-lives-saved perspective. The author decided w...
Something else I hope you'll update is the claim in that section that GiveWell estimates that it costs the Against Malaria Foundation $7,500 to save a life.
The archived version of the GiveWell page you cite does not support that claim; it states the cost per life saved of AMF is $5,500. (It looks like earlier archives of that same page do state $7,500 (e.g. here), so that number may have been current while the piece was being drafted.)
Additionally, the $5,500 number, which is based on GiveWell's Aug. 2017 estimates (click here and ...
Not an expert but, fwiw, my impression is that this is more common in CS than philosophy and the social science areas I know best.
I'm very worried that staff at EA orgs (myself included) seem to know very little about Gen Z social media and am really glad you're learning about this.
I think it's especially dangerous to use this word when talking about high schoolers, especially given the number of cult and near-cult groups that have arisen in communities adjacent to EA.
"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive"
I haven't read enough of your original research to know whether it applies in your case, but just flagging that most original research has a much narrower target audience than summaries/collections, so I'd expect fewer people to find it useful (and a relatively broad summary of feedback to be biased against it).
That said, as you know, I think your summaries/collections are useful and underprovided.
This all seems reasonable to me though I haven't thought much about my overall take.
I think the details matter a lot for "Even among individual researchers who work independently, or whose org isn't running surveys, probably relatively few should run their own, relatively publicly advertised individual surveys"
A lot of people might get a lot of the value from a fairly small number of responses, which would minimise costs and negative externalities. I even think it's often possible to close a survey after a certain number of respon...
[Not meant to express an overall view.] I don't think you mention the time of the respondents as a cost of these surveys, but I think it can be one of the main costs. There's also risk of survey fatigue if EA researchers all double down on surveys.
I find it off-putting though I don't endorse my reaction and overall think the time savings mean I'm personally net better off when other people use it.
I think for me, it's about taking something that used to be a normal human interaction and automating it instead. Feels unfriendly somehow. Maybe that's a status thing?
Though there's a bit of a tradeoff: putting the money into a DAF/trust might alleviate some of the negative effects Ben mentioned, but it would also lose out on a lot of the benefits Raemon is going for.
[My own views here, not necessarily Ben’s or “80k’s”. I reviewed the OP before it went out but don’t share all the views expressed in it (and don’t think I’ve fully thought through all the relevant considerations).]
Thanks for the comment!
“You say you take (1) to be obvious, but I think that you’re treating the optimal percentage as kind of exogenous rather than dependent on the giving opportunities in the system.”
I mostly agree with this. The argument’s force/applicability is muc...
If you want some more examples of specific research/researchers, a bunch of the grantees from FLI's 2015 AI Safety RFP are non-EA academics who have done some research in fields potentially relevant to mid-term safety.
Fwiw, I think you're both right here. If you were to hire a reasonably good lawyer to help with this, I suspect the default is they'd say what Habryka suggests. That said, I also do think that lawyers are trained to do things like remove vagueness from policies.
Basically, I don't think it'd be useful to hire a lawyer in their capacity as a lawyer. But, to the extent there happen to be lawyers among the people you'd consider asking for advice anyway, I'd expect them to be disproportionately good at this kind of thing.
[Source: I went to two years of law school but haven't worked much with lawyers on this type of thing.]
You say no to "Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?" and say "2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years."
I feel like I'd much better understand what you mean if you were up for giving some probabilities here even if there's a range or they're imprecise or unstable. There's a really big range within "likely" and I'd like some sense of where you are on that range.
[Note - I endorse the idea of splitting it into two much more strongly than any of the specifics in this comment]
Agree that you shouldn't be quite as vague as the GW policy (although I do think you should put a bunch of weight on GW's precedent as well as Open Phil's).
Quick thoughts on a few benefits of staying at a higher level (none of which are necessarily conclusive):
1) It's not obviously less informative.
If somebody clicks on a conflict of interest policy wanting to figure out if they generally trust the LTF and they see a bunch ...
I guess I think a private board might be helpful even with pretty minimal time input. I think you mostly want some people who seem unbiased to avoid making huge errors, as opposed to trying to get the optimal decision in every case. That said, I'm sympathetic to wanting to avoid the extra bureaucracy.
The comparison to the for-profit sector seems useful but I wouldn't emphasize it *too* much. When you can't rely on markets to hold an org accountable, it makes sense that you'll sometimes need an extra layer.
When for-profits start to need to...
Ah - whoops. Sorry I missed that.
Having a private board for close calls also doesn't seem crazy to me.
So, the problem here is that we are already dealing with a lot of time constraints, and I feel pretty doomy about having a group with even less time than the fund already has being involved in this kind of decision-making.
I also have a more general concern: when I look at dysfunctional organizations, one of the things I often see is a profusion of boards upon boards, each of which primarily serves to spread accountability around, overall resulting in a system in which no one really has any skin in the game and in which even very simple tasks o...
Hmm. Do you have to make it public every time someone recuses themself? If someone could nonpublicly recuse themself that at least gives them the option to avoid biasing the result but also not have to stick their past romantic lives on the internet.
Thanks - this is helpful.
(Note that I'm not saying that recusal would necessarily be bad)
Wanted to +1 this in general although I haven't thought through exactly where I think the tradeoff should be.
My best guess is that the official policy should be a bit closer to the level of detail GiveWell uses to describe their policy than to the level of detail you're currently using. If you wanted to elaborate, one possibility might be to give some examples of how you might respond to different situations in an EA Forum post separate from the official policy.