I am a lawyer and policy researcher interested in improving the governance of artificial intelligence using the principles of Effective Altruism. In May 2019, I received a J.D. cum laude from Harvard Law School. I currently work as a Research Scientist in Governance at OpenAI.
I am also a Research Affiliate with the Centre for the Governance of AI at the Future of Humanity Institute; Founding Advisor and Research Affiliate at the Legal Priorities Project; and a VP at the O’Keefe Family Foundation.
My research focuses on the law, policy, and governance of advanced artificial intelligence.
You can share anonymous feedback with me here.
I don’t fully understand why the netted enclosure helps. Is the idea just that it prevents humans from coming close to the barns?
I feel like I was only speaking out against the framing that critics of EA are entitled to a lengthy reply because EA is ambitious in its scope of caring. (This framing was explicit at least in the quoted paragraph, not necessarily in her post as a whole or her previous work.)
Ah, okay. That seems more reasonable. Sorry for misunderstanding.
I would also point out that I think the proposition that "social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk" is both:
This is just to say that I value the general maxim you're trying to advance here, but "never" is way too strong. Once you drop the "never," it's just a boring balancing question.
Yeah. I have strong feelings that social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk. For example, I'm annoyed when people punish others for honesty in cases where it would have been easy to tell a lie and look better. Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely going to apply extra scrutiny towards you." Meanwhile, AI capabilities companies continue to scale up compute, and most of the world is busy discussing soccer or whatnot. Yet somehow, "Are EAs following democratic processes, and why does their funding come from very few sources?" is made into a bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.
I think this is an undervalued idea. But there's a distinct, closely related idea that is also valuable: for any Group X with Goal Y, it is nearly always instrumentally valuable for Group X to hear suggestions about how it can better advance Goal Y, especially from those who believe that Goal Y is valuable. Sometimes this will read as (or have the effect of) disincentivizing adopting Goal Y (because adopting it invites criticism), but in fact it's often much easier to marginally improve the odds of Goal Y being achieved by attempting to persuade Group X to do better at Y than by persuading Group ~X, who believes ~Y. I take Carla Zoe to be doing this good sort of criticism, or at least that's the most valuable way to read her work.
The main problem with lavishness, IMHO, is not optics per se, but rather that it's extremely easy for people to trick themselves into believing that spending money on their own comfort/lifestyle/accommodations is net-good-despite-looking-bad (for productivity reasons or whatever). This generalizes to the community level.
(To be clear, this is not to say that we should never follow such reasoning. It's just a serious pitfall. This point is also not original—others have certainly raised it before.)