Doing alignment research with Vivek Hebbar's team at MIRI.
This article just made HN. It's a report saying that 39 of 50 top offsetting programs are likely junk, 8 "look problematic", and 3 lack sufficient information; none were found to be good.
I think most climate people are very suspicious of charities like this, instead of or in addition to rejecting ethical offsetting itself. See this Wendover Productions video on problematic, non-counterfactual, and outright fraudulent climate offsets. I myself am not confident that CATF offsets are good and would need to do a bunch of investigation, and most people are not willing to do that investigation starting from, say, an 80% prior that CATF offsets are bad.
But with no evidence, just your guesses. IMO we should wait until things shake out, and even then the evidence will require lots of careful interpretation. Also, EA is about 2/3 male, which means that even a seemingly small contribution of women to scandals could still amount to harm proportionate to their share of the community.
I'm looking for AI safety projects to work on with people who have some amount of experience. I have 3/4 of a CS degree from Caltech, one year at MIRI, and have finished the WMLB and ARENA bootcamps. I'm most excited about activation engineering, but I'm willing to do anything that builds research and engineering skill.
If you've published 2 papers at top ML conferences or have a PhD in something CS-related, and are interested in working with me, send me a DM.
Is there any evidence for this claim? One can speculate about how average gender differences in personality would affect p(scandal), but you've just cited two cases where women caused huge harms, which seems to argue neutrally or against you.
Who tends to be clean?
With all the scandals in the last year or two, has anyone looked at which recruitment sources are least likely to produce someone extremely net negative, whether in direct impact or in their effect on the community (i.e. a justified scandal)? Maybe this should inform outreach efforts.
In addition to everything mentioned so far, there are the informational and retributive-justice effects of the public exposé, which can be positive. As long as it doesn't devolve into a witch hunt, we want to discourage people from using EA resources and trust in the ways Nonlinear did, and this only works if it's public. If this isn't big enough, think about the possibility of preventing FTX. (I don't know whether the actual fraud was preventable, but negative aspects of SBF's character and the lack of separation between FTX and Alameda could have been well substantiated and made public. Just the reputation of EAs doing due diligence here could have prevented a lot of harm.)
You're assuming that the EV of switching from global health to biosecurity is lower than the EV of switching from something else to biosecurity. Even though global health is better than most cause areas, this could be false in practice for at least two reasons: