Thanks for writing this up. On X, Joey Politano points out that this destruction of USAID (or even PEPFAR alone) dwarfs EA’s contribution to global development by an order of magnitude: https://x.com/josephpolitano/status/1896186144070729847
Is it a good idea for me to adjust the letter, or should I stick to the template?
The AIM charity UK Voters for Animals appears to think (based on what was said at a work party of theirs that I attended) that letters/emails count for more when they are not obviously copied and pasted, to the extent that it’s worth customising letters. I don’t know their epistemic basis for this, but I trust them to have one (I suspect they know people who have worked for MPs). But it might still make sense to give less-motivated friends a template to copy if that’s all you think they’ll be willing to do, since a templated letter is better than none at all. Though NB writetothem.com does block copy-and-pasted messages.
Thank you for this post. I’d like to add an argument for treating this as a very high priority, or at least potentially one, since readers of your post might not appreciate the scale involved (given the comparison with ‘last time’ and the mention of ‘low effort’).
Briefly, the cut to UK foreign aid dwarfs all EA spending on global health and development to date, and we seem to be at a crucial moment that could influence whether the government believes such cuts are at all accepted by the electorate.
Some quick figures from the Center for Global Develop...
Is it really the case that the UK and US were competing for the reputational gains that foreign aid brings? I suppose I’d try to answer that question by looking at the history of where the 0.7% target, which I thought was fairly broadly shared among rich countries, originally came from. One history I found said:
> It results from the 1970 United Nations General Assembly Resolution 2626. The 0.7% figure was calculated as a means to boost growth for developing countries. Since 1970, however, only several Nordic countries have met or surpassed this target....
Thanks for making the connection to Francois Chollet for me - I'd forgotten I'd read this interview with him by Dwarkesh Patel half a year ago, which had made me a little more skeptical of the nearness of AGI.
Seems a lot of it is saying “you can’t put a price on x” — and then going ahead and putting a price on x anyway by saying we should prefer to fund x over y.
In her book, Ms. Schiller ties her criticism of effective altruism to broader questions about optimization, writing: “At a time when we are under enormous pressure to optimize our time, be maximally productive, hustle and stay healthy (so we can keep hustling), we need philanthropy to make pleasure, splendor and abundance available for everyone.”
Her conception of the good can include magnificence an...
CAF charges a fee for its services. This seems crucial to deciding between GAYE/Payroll Giving vs Gift Aid — from the intro email when I registered to do GAYE:
> For direct CAF Give As You Earn donors, we take a 4% fee of your total donation to cover our costs (the fee will never be more than £10 per pay period).
> Many employers pay this fee for their employees, and you should contact your payroll team to confirm if this is the case.
My employer doesn’t cover it so I’m looking for an alternative method.
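For anyone weighing this up, here is a minimal sketch of the fee arithmetic, assuming only the 4%-capped-at-£10 structure quoted above (the break-even point and effective rates are my own back-of-the-envelope numbers, not CAF's):

```python
def caf_gaye_fee(donation_gbp: float) -> float:
    """CAF Give As You Earn fee per pay period: 4% of the donation,
    capped at £10, per the intro email quoted above."""
    return min(0.04 * donation_gbp, 10.0)

# The cap binds once 4% of the donation exceeds £10, i.e. above
# £250 per pay period, so the effective rate falls for larger gifts.
for donation in [50, 100, 250, 500, 1000]:
    fee = caf_gaye_fee(donation)
    print(f"£{donation:>4} -> £{fee:.2f} fee ({100 * fee / donation:.1f}% effective)")
```

So above £250 per pay period the fee shrinks as a fraction of the donation, which may tilt the GAYE vs Gift Aid comparison differently for larger donors.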
The paper says:
> Permissivism can take multiple forms. For instance, it might permit both fanatical and antifanatical preferences. Or it might permit (or even, its name notwithstanding, require) incomplete preferences that are neither fanatical nor anti-fanatical. But apart from noting its existence, we will say no more about the permissivist alternative for now, returning to it only in the concluding section.
> ...The takeaway, I think, is that those who find fanaticism counterintuitive should favor not anti-fanaticism but permissivism. More specifically, t
Thanks for the helpful summary. I feel it's worth pointing out that these arguments (which seem strong!) defend only fanaticism per se, not the stronger claim that is used or assumed when people argue for long-termism: that we ought to follow expected value maximization. It's a stronger ask in the sense that we're asked to take bets not on arbitrarily high payoffs, which can be 'gamed' to be high enough to be worth taking, but 'only' on some specific astronomically high payoffs, which are derived from (as it were) empirically determ...
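For readers new to the term, here is one standard way to make the distinction precise (my gloss, not necessarily the paper's exact formulation). A preference ordering is fanatical if, for any sure payoff $v$ and any probability $p > 0$, however small, there is some payoff $V$ large enough that a $p$-chance of $V$ is preferred to $v$ for certain. Expected value maximization entails this, since it suffices to pick

$$V > \frac{v}{p} \quad \text{so that} \quad p \cdot V > v,$$

but the converse fails: one can satisfy the fanatical pattern without ranking every pair of options by expected value, which is exactly the gap between the two claims distinguished above.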
I think high X-risk makes working on X-risk more valuable only if you believe that you can have a durable effect on the level of X-risk - here's MacAskill talking about the hinge-of-history hypothesis (which is closely related to the 'time of perils' hypothesis):
> Or perhaps extinction risk is high, but will stay high indefinitely, in which case in expectation we do not have a very long future ahead of us, and the grounds for thinking that extinction risk reduction is of enormous value fall away.
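To spell out the arithmetic behind that quote (a back-of-the-envelope model, not MacAskill's own): if the per-year extinction risk is a constant $r$, the time $T$ until extinction is geometrically distributed, so the expected number of future years is

$$\mathbb{E}[T] = \sum_{t=1}^{\infty} t \, r (1-r)^{t-1} = \frac{1}{r}.$$

A persistent 1% annual risk, for instance, implies an expected future of only about 100 years, which is why 'high and staying high' undercuts the astronomical-value case for extinction risk reduction.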
I believe the connection (which might or might not directly pick up on something you are defending?) is that if you go beyond merely using top universities as a starting heuristic for your student community building, and further concentrate spending on them to extreme degrees, you are in fact assuming a very strong distinction between those universities and the rest. David T has described the distinction, approximately, as the claim that there are ‘only’ influential/high-earning-potential people at top universities.
The assumption of a strong distinct...
I’ve never affiliated with a university group. I’m sad to hear that at least some university groups seem to be trying to appeal to ambitious prestige-chasers, and I hope it’s not something that the CEA Groups team has applied generally. I wonder if it comes from a short-sighted strategy of trying to catch those who are most likely to end up in powerful positions in the future, which would be in line with the reasons there has been a focus on the most prestigious universities. I call it short-sighted because filling the next generation of your movement with people who are light on values and strong on politics seems like a certain way to kill what’s valuable about EA (such as commitments to altruism and truth-seeking).
I can confirm that copying and pasting doesn't move the needle, at least in consultations I've been involved with - they will put weight on people actually engaging with the ideas. (Similarly, feel free to skip or give very short answers to questions you don't care much about, and focus on the ones you care most about.)
I think you should speak to Naming What We Can https://forum.effectivealtruism.org/posts/54R2Masg3C9g2GxHq/announcing-naming-what-we-can-1
Though I think these days they go by ‘CETACEANS’ (the Centre for Effectively, Transparently, Accurately, Clearly, Effectively, and Accurately Naming Stuff).
To contextualize the final point I made, it seems that in fact there is a lot of criminality among the ultra rich. https://forum.effectivealtruism.org/posts/d8nW46LrTkCWdjiYd/rates-of-criminality-amongst-giving-pledge-signatories (No comment on how malicious it is)
David - I mention the gender bias in moral typecasting in this context because (1) moral typecasting seems especially relevant in these kinds of organizational disputes, (2) I've noticed some moral typecasting in this specific discussion on EA Forum, and (3) many EAs are already familiar with the classical cognitive biases, many of which have been studied since the early 1970s, but may not be familiar with this newly researched bias.
Edit: I misread what you were saying. I thought you were saying 'Kat has dodged questions about whether it was true' and 'It's not clear the anecdotes are being presented as real'. Actually, Kat said it was true.
I read the author's intention, when she makes the case for 'forgiveness as a virtue', as a bid to (1) seem more virtuous herself, and (2) make others more likely to forgive her (since she was so generous to her accusers - at least in that section - and we want to reciprocate generosity). I think this is an effective persuasive writing technique, but is not relevant to the questions at issue (who did what).
Another related 'persuasive writing' technique I spotted was that, in general, Kat is keen to phrase the hypothesis where Nonlinear did bad things in an ...
I'm confused. You say "what's at issue is the overall character of Nonlinear staff", but also that Kat displaying virtues like forgiveness "is not relevant to the questions at issue (who did what)". (I think both people's character and "who did what" are relevant, and a lot of the post addresses "who did what").
Incidentally, your interpretation of Kat as being manipulative happens to be an example of the lack of goodwill that my original comment was referring to. Whether or not goodwill is in general desirable, I think viewing things through such an overly negative lens puts you at risk of confirmation bias.
Retaliation is bad.
People seem to be using “retaliation” in two different senses: (1) punishing someone merely in response to their having previously acted against the retaliator’s interests, and (2) defecting against someone who has previously defected in a social interaction analogous to a prisoner’s dilemma, or in a social context in which there is a reasonable expectation of reciprocity. I agree that retaliation is bad in the first sense, but Will appears to be using ‘retaliation’ in the second sense, and I do not agree that retaliation is bad in this ...
So you endorse "always cooperate" over "tit-for-tat" in the Prisoner's Dilemma?
Seems to me there are 2 consistent positions here:
1. The thing is bad, in which case the person who did it first is worse. (They were the first to defect.)
2. The thing is OK, in which case the person who did it second did nothing wrong.
I don't think it's particularly blameworthy to both (a) participate in a defect/defect equilibrium, and (b) try to coordinate a move away from it.
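To make the 'always cooperate' vs 'tit-for-tat' contrast upthread concrete, here is a minimal iterated Prisoner's Dilemma sketch (the payoff values and round count are illustrative assumptions, not anything from the thread):

```python
# Minimal iterated Prisoner's Dilemma with standard assumed payoffs:
# temptation=5, mutual cooperation=3, mutual defection=1, sucker=0.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0,
           ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first; afterwards, mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []  # each strategy sees the opponent's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(always_cooperate, always_defect))  # (0, 50): exploited every round
print(play(tit_for_tat, always_defect))       # (9, 14): loses only round 1
print(play(tit_for_tat, always_cooperate))    # (30, 30): full cooperation
```

Tit-for-tat cooperates with cooperators but stops being exploitable after one round, which is roughly sense (2) of 'retaliation' as described above.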
EDIT: A couple other points
I know the payoff structure here might not be an actual Prisoner's
On the wiki:
It seems like 'topics' are trying to serve at least two purposes: linking to wiki articles with info to orient people, and classifying/tagging forum posts. These purposes don't need to be as tightly tied together as they currently are.
One could want to have e.g. 3 classification labels to help subdivide a topic (I think we currently have 'AI safety', 'AI risks', and 'AI alignment'), but that seems like a bad reason to write 3 separate similar articles, which duplicates effort in cases where the topics have a lot of overlap.
A lot of writing time could be saved if tags and wiki articles were split out such that closely related tags could point to the same wiki article.
My hard-workingness really depends on my work context (e.g., whether I have a job or not). A graph of my hard-workingness over the past year peaks really strongly from Jan-March, when I was working on EAGxCambridge, because of the imminent and immovable deadlines and because I was the main person responsible for it. I tracked 70 hrs/wk of work in the last month (unsustainable). Since then I've been far less hard-working (which I prefer). I think if I had a baby, I'd also become really hard-working, because I'd be one of the people most responsible for the 'project'.
One can submit feature requests here: https://www.swapcard.com/product-roadmap
I just submitted what you said.
I've written to my MP, James Asser, with something very similar to Sanjay's linked template.