This is great! Just wanted to mention that this kind of weighting approach works very well with the recent post A Model-based Approach to AI Existential Risk, by Sammy Martin, Lonnie Chrisman, and myself, particularly the section on Combining inside and outside view arguments. Excited to see more work in this area!
Looking over that comment, I realize I don't think I've seen anybody else use the term "secret sauce theory", but I like it. We should totally use that term going forward. :)
Fair. I suppose there are actually two paths to being a doomer (usually): secret sauce theory or extremely short timelines.
I've been meaning to ask: Are there plans to turn your Cold Takes posts on AI safety and The Most Important Century into a published book? I think the posts would make for a very compelling book, and a book could reach a much broader audience and would likely get much more attention. (This has pros and cons of course, as you've discussed in your posts.)
As I mentioned on one of those Facebook threads: At least don't bill the event as a global conference for EA people and then tell people no, you can't come. Call it maybe the EA Professionals Networking Event or something, which (a) makes it clear this is for networking and not the kind of academic conference people might be used to, and (b) implies this might be exclusive. But if you bill it as a global conference, then run it like a global conference. And at the very least, make it very clear that it's exclusive! Personally, I didn't notice any mention of exclusivity at all in any EA Global posts or advertising until I heard about people actually getting rejected and feeling bad about that.
My interpretation of the “Global” part in EAG is ‘from around the world’, not ‘everyone is invited’. E.g. for EAGxAustralia it seems like you’re much more likely to get accepted if you’re based in Australia or the Asia Pacific, because it’s about building the community there. But EA Global is about connecting people across these different communities, and doesn’t prioritise admissions based on geographical closeness.
Honestly I’m super confused why people perceive ‘EA Global’ as an inclusive-sounding name. Especially in contrast to ‘EAGx’, which evokes the TEDx vs TED contrast, where TEDxes have a much lower bar, are scrappier and more community based.
Here's a perspective I mentioned recently to someone:
Many people in EA seem to think that very few people outside the "self identifies as an EA" crowd really care about EA concerns. Similarly, many people seem to think that very few researchers outside of a handful of EA-affiliated AI safety researchers really care about existential risks from AI.
Whereas my perspective tends to be that the basic claims of EA are actually pretty uncontroversial. I've mentioned some of the basic ideas many times to people and I remember getting pushback I think only once - a...
Good point! I started reading those a while ago but got distracted and never got back to them. I'll try looking at them again.
In some cases it might be easier to do this as a structured interview rather than asking for written analyses. For example, I could imagine that it might be possible to create a podcast where guests are given an article or two to read before the interview, and then the interviewer asks them for their responses on a point-by-point basis. This would also allow the interviewer (if they're particularly good) to do follow-up questions as necessary. On the other hand, my personal impression is that written analyses tend to be more carefully argued and thought through than real-time interviews.
Thought: In what ways do EA orgs / funds go about things differently than in the rest of the non-profit (or even for-profit) world? If they do things differently: Why? How much has that been analyzed? How much have they looked into the literature / existing alternative approaches / talked to domain experts?
Naively, if the thing they do differently is not related to the core differences between EA / that org and the rest of the world, then I'd expect that this is kind of like trying to re-invent the wheel, and it won't be a good use of resources unless you have a good reason to think you can do better.
Thank you for posting this! I was going to post something about this myself soon, but you beat me to it!
Decision Analysis (the practical discipline of analyzing decisions, usually in a business, operations, or policy context; not the same as decision theory): This discipline overlaps in obvious ways with a lot of EA and LessWrong discussions, but I have seen few direct references to Decision Analysis literature, and there seems to be little direct interaction between the EA/LW and DA communities. I'd love to see if we could bring in a few DA experts to giv...
What I do (assuming I get to that point in the conversation) is that I deliberately mention points like this, even before trying to argue otherwise. In my experience (which again is just my experience) a good portion of the time the people I'm talking to debunk those counterarguments themselves. And if they don't, well then I can start discussing it at that point - but at that point it feels to me like I've already established credibility and non-craziness by (a) starting off with noncontroversial topics, (b) starting off the more controversial topics with...
I haven't read most of the post yet, but already I want to give a strong upvote for (1) funding critiques of EA, and (2) the fact that you are putting up a list of projects you'd like to see. I would like to see more lists of this type! I've been planning to do one of them myself, but I haven't gotten to it yet.
I think I mostly lean towards general agreement with this take, but with several caveats as noted by others.
On the one hand, there are clearly important distinctions to be made between actual AI risk scenarios and Terminator scenarios. On the other hand, in my experience, when people pattern-match to the Terminator it usually doesn't make anything seem less plausible to them, at least as far as I could tell. Most people don't seem to have any trouble separating the time travel and humanoid robot parts from the core concern of misaligned AI, especially if you imm...
Yes, I have seen people become more actively interested in joining or promoting projects related to AI safety. More importantly, I think it creates an AI safety culture and mentality. I'll have a lot more to say about all of this in my (hopefully) forthcoming post on why I think promoting near-term research is valuable.
[Disclaimer: I haven't read the whole post in detail yet, or all the other comments, so apologies if this is mentioned elsewhere. I did see that the Partnerships section talks about something similar, but I'm not sure it's exactly what I'm referring to here.]
For some of these products there already exists similar software; it's just that they're meant for corporations and are really expensive. Just as an example from something I'm familiar with, for building on Guesstimate there's already Analytica (https://lumina.com/). Now, does it do everything that Guesstima...
Meta-comment: I noticed while reading this post and some of the comments that I had a strong urge to upvote any comment that was critical of EA and had some substantive content. Introspecting, I think this was partly due to trying to signal-boost critical comments because I don't think we get enough of those, partly because I agreed with some of those critiques, ... but I think mostly because it feels like part of the EA/rationalist tribal identity that self-critiquing should be virtuous. I also found myself being proud of the community that a critical pos...
This seems correct and a valid point to keep in mind - but it cuts both ways. It makes sense to reduce your credence when you recognize that expert judgment here is less informed than you originally thought. But by the same token, you should probably reduce your credence in your own forecasts being correct, at least to the extent that they involve inside view arguments like, "deep learning will not scale up all the way because it's missing xyz." The correct response in this case will depend on how much your views depend on inside view arguments about deep ...
Part-time work is an option at my workplace. Less than half-time loses benefits though, which is why I didn't want to drop down to lower than 50%.
I did not have an advisor when I sent the original email, but I did have what amounted to a standing offer from my undergrad ML professor that if I ever wanted to do a PhD he would take me as a grad student. I spent a good amount of time over the past three months deciding whether I should take him up on that or if I should apply elsewhere. I ended up taking him up on the offer.
I did not discuss it with my employer before sending the original email. It did take some work to get it through bureaucratic red tape though (conflict of interest check, etc.).
Does this look close to what you're looking for? https://www.lesswrong.com/posts/qnA6paRwMky3Q6ktk/modelling-transformative-ai-risks-mtair-project-introduction
If yes, feel free to message me - I'm one of the people running that project.
Also, what software did you use for the map you displayed above?
In your 80,000 Hours interview you talked about worldview diversification. You emphasized the distinction between total utilitarianism vs. person-affecting views within the EA community. What about diversification beyond utilitarianism entirely? How would you incorporate other normative ethical views into cause prioritization considerations? (I'm aware that in general this is basically just the question of moral uncertainty, but I'm curious how you and Open Phil view this issue in practice.)
Most people at Open Phil aren't 100% bought into utilitarianism, but utilitarian thinking has an outsized impact on cause selection and prioritization because under a lot of other ethical perspectives, philanthropy is supererogatory, so those other ethical perspectives are not as "opinionated" about how best to do philanthropy. It seems that the non-utilitarian perspectives we take most seriously usually don't provide explicit cause prioritization input such as "Fund biosecurity rather than farm animal welfare", but rather provide input about what rules...
True. My main concern here is the lamppost issue (looking under the lamppost because that's where the light is). If the unknown unknowns affect the probability distribution, then personally I'd prefer to incorporate that or at least explicitly acknowledge it. Not a critique - I think you do acknowledge it - but just a comment.
Shouldn't a combination of those two heuristics lead to spreading out the probability but with somewhat more probability mass on the longer-term rather than the shorter term?
- What skills/types of people do you think AI forecasting needs?
I know you asked Ajeya, but I'm going to add my own unsolicited opinion that we need more people with professional risk analysis backgrounds, and if we're going to do expert judgment elicitations as part of forecasting then we need people with professional elicitation backgrounds. Properly done elicitations are hard. (Relevant background: I led an AI forecasting project for about a year.)
I know that in the past LessWrong, HPMOR, and similar community-oriented publications have been a significant source of recruitment for areas that MIRI is interested in, such as rationality, EA, awareness of the AI problem, and actual research associates (including yourself, I think). What, if anything, are you planning to do to further support community engagement of this sort? Specifically, as a LW member I'm interested to know if you have any plans to help LW in some way.
[Disclaimers: My wife Deena works with Kat as a business coach - see my wife's comment elsewhere on this post. I briefly met Kat and Emerson while visiting in Puerto Rico and had positive interactions with them. My personality is such that I have a very strong inclination to try to see the good in others, which I am aware can bias my views.]
A few random thoughts related to this post:
1. I appreciate the concerns over potential for personal retaliation, and the other factors mentioned by @Habryka and others for why it might be good to not delay this kind of ...