My ethics are closest to asymmetric, person-affecting prioritarianism, and I don’t think the expected value of the future is as high as longtermists do because of potential bad futures. Politically I'm closest to internationalist libertarian socialism.
“As a secular NGO, GiveDirectly may struggle to gain traction with Muslim donors.” I strongly agree with this.
Isn’t an obvious solution to market the Zakat-compliant fund under a different name than GiveDirectly?
(The obvious choice would be whatever “GiveDirectly” is in Arabic.)
On “learning from people outside EA and those who slightly disagree with EA views” I highly recommend reading everything by Dr Filippa Lentzos: https://www.filippalentzos.com/.
Also, subscribe to the Pandora Report newsletter:
https://pandorareport.org/
Global Biodefense was great but sadly seems to have become inactive: https://globalbiodefense.com/
So rather than a specific claim about specific activities being done by Anthropic, would you say that:
- from your experience, it’s very common for people to join the arms race under the guise of safety;
- by default, we should assume that new AI safety companies are actually joining the arms race until proven otherwise; and
- the burden of proof should essentially rest on Anthropic to show that they are really doing AI safety work?
Given the huge potential profits from advancing AI capabilities faster than other companies and my priors on how irrational money makes people, I’d support that view.
My crux here is whether or not I think Anthropic has joined the arms race.
Why do you believe that it has?
I’m grateful to the women who have publicly spoken about sexual misconduct in EA, which I hope will result in us making EA spaces safer and more welcoming to women.
I’m grateful to the EAs who engaged with criticisms around transparency in EA and responded by making a more easily navigable database of grants by all EA orgs, which meaningfully improves transparency, scrutiny and accountability.
I’ve never spoken to him, but I think he’s doing a great job at a difficult time in a difficult role.
I think we should slightly narrow the Overton Window of what ideas and behaviours are acceptable to express in EA spaces, to help exclude more harassment, assault and discrimination.
I also think EA at its best would primarily be more of a professional and intellectual community and less of a social circle, which would help limit harmful power dynamics, help limit groupthink and help promote intellectual diversity.
I know that a few Open Phil staff live outside the Bay Area and work remotely.
Well done! I’ve been waiting for years to see EA start looking into Zakat!