David T

I think Godwinning the debate actually strengthens the case for "I don't do labels" as a position. True, most people won't hesitate to say that the label "Nazi" doesn't apply to them, whether they say they don't do labels or have social media profiles which read like a menu of ideologies.[1] On the other hand, many who wouldn't hesitate to say that they think Nazis and fascists are horrible and agree they should be voted against and maybe even fought against would still hesitate to label themselves as "antifascist", with its connotations of ongoing participation in activism and/or membership of self-styled antifascist groups whose other positions they may not agree with.

  1. ^

    and from this, we can perhaps infer that figures at Anthropic don't think EA is as bad as Nazism, if that was ever in doubt ;-)

Feels like claims like "Trump's tariffs have slowed down AGI development" need some evidence to back them up. The larger companies working on AGI have already raised funds, assembled teams and bought hardware (which can be globally distributed if necessary), and believe they're going to get extraordinary returns on that effort. Unlike retail and low-margin businesses, it doesn't seem like a 10% levy on manufactured goods, or even being unable to import Chinese chips, is going to stop them from making progress.

I think the most likely explanation, particularly for people working at Anthropic, is that EA has a lot of "takes" on AI, many of which they (for good or bad reasons) very strongly disagree with. This might fall into "brand confusion", but I think some of it's simply a point of disagreement. It's probably accurate to characterise the AI safety wing of EA as generally regarding it as very important to debate whether AGI is safe to attempt to develop. Anthropic and their backers have obviously picked a side on that already.

I think that's probably more important for them to disassociate from than FTX or individuals being problematic in other ways.

If we say "because targeting you is the most effective thing we can do", we incentivise them not to budge, because they will know that willingness to compromise invites more aggression.

That presumably depends on whether "targeting you is the most effective thing we can do" translates into "because you're most vulnerable to enforcement action", "because you're a major supplier of this company that's listening very carefully to your arguments", "because you claim to be market-leading in ethics", or even just "because you're the current market leader". Under those framings, it still absolutely makes sense for companies to consider compromising.

I agree with the broader argument, though, that if you resolve never to bother with small entities, or with entities that tell you to get lost, that will deter even more receptive ears from listening to you.

I guess this also applies to junior positions within the system, whose freedom would be determined to a significant extent by people in senior positions

The obvious difference is that an alternative candidate for a junior position in a shrimp welfare organization is likely to be equally concerned about shrimp welfare. An alternative candidate for a junior position in an MEP's office or DG Mare is not, hence the difference at the margin (if non-zero) is likely much greater. And a junior person progressing in their career may end up with direct policy responsibility for their areas of interest, whereas a person who remains a lobbyist never will. It even seems non-obvious that a senior lobbyist will have more impact on policymakers than their more junior adviser or research assistant, though as you say it does depend on whether the junior adviser has the freedom to highlight issues of concern.

"Small" is relative. AMF manages significantly more donations than most local NGOs, but it does one thing and has <20 staff. That's very different from Save the Children or the Red Cross or indeed the Global Fund type organizations I was comparing it with, which have more campaigns and programmes to address local needs but also more difficulty in evaluating how effective they are overall. I understand that below the big headline "recommended" charities, GiveWell does actually make smaller grants to some smaller NGOs too, but these will still be difficult to access for many.

Are EA cause priorities too detached from local realities? Shouldn’t people closest to a problem have more say in solving it?

I think this is the most interesting question, and I would be interested in your thoughts about how to make that easier.[1]

I think part of the reason EA doesn't do this is simply because it doesn't have those answers, being predominantly young Western people centred around certain universities and tech communities.[2] And also because EA (and especially the part of EA that is interested in global health) is very numbers-oriented.

This is also somewhat related to a second point you raise regarding political and social realities, including corruption: it is quite easy for GiveWell or Open Philanthropy to identify that infectious diseases are likely to be a real problem, that a small international NGO is providing evidence that it's actually buying and shipping the nets or pills that deal with them, and that on average, given infectious disease prevalence, they will save a certain number of lives. Some other programmes that may deliver results highly attuned to local needs are more difficult to evaluate (and local NGOs are not always good at dealing with the complex requests for evidence from foreign evaluators even if they are very effective at their work). The same is true of large multinational organizations that have both local capacity-building programs and the ability to deal with complex requests from foreign evaluators, but are also so big that Global Fund type issues can happen...

  1. ^

    I would note that there is a regular contributor to this forum, @NickLaing, who is based in Uganda and focused on trying to solve local problems, although I don't believe he receives very much funding compared with other EA causes, and also @Anthony Kalulu, a rural farmer in eastern Uganda who has an ambitious plan for a grain facility to solve problems in Busoga, but seems to be getting advice from the wrong people on how to fund it...

  2. ^

    This is also, I suspect, part of the reason many but not all EAs think AI is so important...

Just realise that betting on crypto is like betting on a casino. Probably worse, if it's a memecoin which has apparently lost nearly all of its value in the last two months. Then decide whether something like a casino but probably worse is how you would want to invest the last $10k which you could still help your fellow farmers with.

FWIW I remember liking your original post and your ambition. I might have some ability to assist with grant application writing. But only if you spend any funds you can get on helping fellow Ugandans, not crypto!

What section do you put Marco Rubio in?

The side that, in defiance of a court order, eliminated 90% of USAID programs this week, including all the lifesaving programs described above, with Marco Rubio's name referenced as the decision-making authority in the termination letters.

I'm not sure the number of statements he's made in favour of some of these programs being lifesaving before termination letters were sent out in his name is a mitigating factor. And if he's not actually making the decisions it's a moot point: appealing to Rubio's better nature doesn't seem to be a way forward.

Where was USAID mentioned in the PDF you linked?

My bad, I should have linked to this one.

FWIW I agree with your point that people who are broadly neutral/sympathetic are more likely to be sympathetic to a broad explainer than a "denunciation".

But I worded my post quite carefully: it's "people who like Musk's cuts to USAID and AI safety" who I don't think overlap with EA. I don't imagine either of the EA-affiliated people you linked to would object to EAs pointing out that Musk shutting down AI safety institutes might be the opposite of what he says he cares about. And I don't think people who believe foreign aid is a big scam and AI should be unregulated are putative EAs (whether they trust Musk or not!)

I don't think a "denunciation" is needed, but I don't think avoiding criticising political figures because they're sensitive, powerful and have some public support is a way forward either.
