
Abby Babby

2033 karma

Posts: 3

Comments: 154

Such an interesting read, thank you!

Thanks for clarifying! Really appreciate you engaging with this. 

Re: It takes a lot longer. It seems like it takes a lot of time for you to monitor the comments on this post and update your top-level post in response. The cost of doing that after you post publicly, instead of before, is that people who read your initial post are much less likely to read the updated one. So I don't think you save a massive amount of time here, and you increase the chance that other people become misinformed about orgs.

Re: Orgs can still respond to the post after it's published. Some orgs aren't posting certain information publicly on purpose, but they will tell you things in confidence if you ask privately. If you publicly blast them on one of these topics, they will not publicly respond. I know EAs can be allergic to these kinds of dynamics, but politics is qualitatively different from ML research; managing relationships with multiple stakeholders with opposing views is delicate, and there are a bunch of bad actors working against AI safety in DC. You might be surprised by what kind of information is very dangerous for orgs to discuss publicly.

I'm just curious: have you discussed any of your concerns with somebody who has worked in policy for the US Government?

Thanks for being thoughtful about this! Could you clarify what your cost-benefit analysis was here? I'm quite curious!

I appreciate the effort you've put into this, and your analysis makes sense based on publicly available data and your worldview. However, many policy organizations are working on initiatives that haven't been, or can't be, publicly discussed, which might lead you to draw some incorrect conclusions. For example, I'm glad Malo clarified in this comment thread that MIRI does indeed work with policymakers.

Tone is difficult to convey online, so I want to clarify that I'm saying the next statement gently: I think if you do this kind of report, one that a ton of people are reading and taking seriously, you have some responsibility to send your notes to the mentioned organizations for fact-checking before you post.

I also want to note: the EA community does not have good intuitions about how politics works or what kind of information is net productive for policy organizations to share. The solution is not to blindly defer to people who say they understand politics, but I am worried that our community norms actively work against us in this space. Consider checking some of your criticisms of policy orgs with a person who has worked for the US Government; getting an insider's perspective on what makes sense and what seems suspicious could be useful.

This is a really complex space with lots of moving parts; very cool to see how you've compiled/analyzed everything! Haven't finished going through your report yet, but it looks awesome :)

This looks so cool! Good luck!!!

This course sounds cool! Unfortunately, there doesn't seem to be much relevant material out there.

This is a stretch, but I think there's probably some cool computational modeling to be done with human value datasets (e.g., 70,000 responses to variations on the trolley problem). What kinds of universal human values can we uncover? https://www.pnas.org/doi/10.1073/pnas.1911517117 
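
In case it's useful, here's a minimal sketch of what that kind of modeling could look like in Python. Everything here is a hypothetical stand-in: the file name (trolley_responses.csv) and column names (n_saved, n_sacrificed, personal_force, choice) would need to be swapped for whatever the real dataset provides.

```python
# A minimal sketch of the kind of modeling suggested above.
# Assumes a hypothetical CSV, "trolley_responses.csv", with one row per
# response and illustrative columns: n_saved, n_sacrificed,
# personal_force (0/1), and choice (1 = intervene, 0 = don't).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("trolley_responses.csv")
X = df[["n_saved", "n_sacrificed", "personal_force"]]
y = df["choice"]

# Hold out a test set so the fit can be sanity-checked.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# Coefficient signs hint at which scenario features push respondents
# toward or away from intervening.
print(dict(zip(X.columns, model.coef_[0])))
```

A simple logistic regression like this is just a starting point, but even the coefficient signs could say something about which features of a dilemma (number saved, use of personal force, etc.) people weight consistently.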

For digestible content on technical AI safety, Robert Miles makes good videos. https://www.youtube.com/c/robertmilesai

Thanks for the clarification; too many Carnegies!

From what I understand, the MacArthur Foundation was one of the main funders of nuclear security research, including at the Carnegie Endowment for International Peace, but they massively reduced their funding of nuclear projects and no large funder has replaced them. https://www.macfound.org/grantee/carnegie-endowment-for-international-peace-2457/

(I've edited this comment; I got confused between the MacArthur Foundation and the various Carnegie philanthropic efforts.)
