Earning to give, based in Minnesota. Board member at Wild Animal Initiative. Interested in catastrophic risks and wild animal suffering.
EAs are probably more likely than the general public to keep money they intend to donate invested in stocks, since that's a pretty common bit of financial advice floating around the community. So the large drop in stock prices in the past few weeks (and possible future drops) may affect EA giving more than giving as a whole.
How far do you think we are from completely filling the need for malaria nets, and what are the barriers left to achieving that goal?
What are your high-level goals for improving AI law and policy? And how do you think your work at OpenAI contributes to those goals?
Seems like its mission sits somewhere between GiveWell's and Charity Navigator's. GiveWell studies a few charities to find the very highest impact ones according to its criteria. Charity Navigator attempts to rate every charity, but does so purely on procedural considerations like overhead. ImpactMatters is much broader and shallower than GiveWell but unlike Charity Navigator does try to tell you what actually happens as the result of your donation.
I think I would be more likely to share my donations this way compared to sharing them myself, because it would feel easier and less braggadocious (I currently do not really advertise my donations).
Among other things, I feel a sense of pride and accomplishment when I do good, the way I imagine that someone who cares about, say, the size of their house feels when they think about how big their house is.
Absolutely, EAs shouldn't be toxic, inaccurate, or uncharitable on Twitter or anywhere else. But I've seen a few examples of people communicating effectively about EA issues on Twitter, such as Julia Galef and Kelsey Piper, at a level of fidelity and niceness far above the average for that website. On the other hand, they are briefer, more flippant, and spend more time responding to critics outside the community than they would on other platforms.
Yep, though I think it takes a while to learn how to tweet, whom to follow, and whom to tweet at before you can get a consistently good experience on Twitter and avoid the nastiness and misunderstandings it's infamous for.
There's a bit of an extended universe of Vox writers, economists, and "neoliberals" who are interested in EA and sometimes tweet about it, and I think it could be valuable to add some people who are more knowledgeable about EA into the mix.
On point 4, I wonder if more EAs should use Twitter. There are certainly many opportunities for more "ruthless" communication there, and it might be a good way to spread and popularize ideas. In any case, it's a pretty concrete example of where fidelity vs. popularity and niceness vs. aggressive promotion trade off.
This all seems to assume that there is only one "observer" in the human mind, so that if you don't feel or perceive a process, then that process is not felt or perceived by anyone. Have you ruled out the possibility of sentient subroutines within human minds?