I lead the DeepMind mechanistic interpretability team
To be honest, I think this is a reasonable reflexive reaction for the community to have. The question I'm trying to answer when I see someone making a post with an argument that seems worth engaging with is: what's the probability that I'll learn something new or change my mind as a result of engaging with this?
When there's a disagreement about a foundational assumption, it's quite difficult to have productive conversations. The conversation kind of needs to be about that assumption itself, which is a fairly specific kind of discussion. E.g. if someone hasn't really thought about AI alignment much, thinks it's not an issue, and isn't familiar with the reasons I believe it matters, then I put a much lower (though still non-zero) probability on making useful updates from talking to them, because I have a bunch of standard arguments for the most obvious objections people raise and don't learn much from stating them. And I think there's a lot of value in having high-context discussion spaces where people broadly agree on these foundational claims.
These foundational claims are pretty difficult to establish consensus on when people have different priors, and discussing them doesn't really tend to move people either way. I get a lot of value from discussing with people the technical details of what working on AI safety is like, much more so than I get from the average "does AI safety matter at all?" conversation.
Obviously, if someone could convince me that AI safety doesn't matter, that would be a big deal. But I'd guess it's only really worth the effort if I'm reasonably sure the person understands why I believe it does matter and disagrees anyway, in a way that doesn't stem from some intractable foundational disagreement in worldviews.
Have people recognize you right away; you don't need to tell everyone your name.
This is a VERY big use case for me. It's so useful!
If someone is in this situation, they can just take off their name tag. Security sometimes asks to see it, but you can just take it out of a pocket to show them and put it back.
When I read that description, I infer "make the best decision we can under uncertainty", not "only make decisions with a decent standard of evidence, or else gather more evidence". It's a reasonable position to think that the TSU grants are a bad idea, or that it would be unreasonable to expect them to be a good idea without further evidence, but I feel like GiveWell are pretty clear that they're fine with making high-risk grants, and in this case they seem to think these TSUs will be high expected value.
What’s unique about these grants?: These grants are a good illustration of how GiveWell is applying increased flexibility, speed, and risk tolerance to respond to urgent needs caused by recent cuts to US foreign assistance. Funded by our All Grants Fund, the grants also demonstrate how GiveWell has broadened its research scope beyond its Top Charities while maintaining its disciplined approach—comparing each new opportunity to established interventions, like malaria prevention or vitamin A supplementation, as part of its grantmaking decisions.
The grants were explicitly made from the All Grants Fund, which is where people donate when they're happy for GiveWell to make riskier decisions and hold themselves to lower standards than for Top Charities. I personally donate to the All Grants Fund over the Top Charities Fund, am a fan of a more risk-tolerant approach, and am happy to defer to GiveWell's judgement. I think your post is holding this grant to the standard of a Top Charity, which I think is unreasonable: meeting that bar would not be worth the effort and expense of GiveWell staff time.
I don't have too much context on the object-level details of the grant, so I don't have strong takes on most of your criticisms (you definitely know more about this domain than me!). But I find it pretty plausible that lots of high-importance decisions get made after a disaster like the USAID cuts, and that this one was urgent. I also expect that, in general, there are a bunch of grants that are time-sensitive in response to the USAID cuts, and I endorse GiveWell moving fast here and maximising expected value.
I think A > B. E.g. I often find people in London who don't know each other but whom it's valuable to introduce. People are not as on the ball as you might think; the market is very far from efficient.
That said, many of the useful intros I make are quite international, and I would guess it's most useful to have a broad network across the world. So maybe C is best, though I expect that regular conference and business trips are enough.