
Aryeh Englander

643 karma · Joined June 2015

Bio

Aryeh Englander is a mathematician and AI researcher at the Johns Hopkins University Applied Physics Laboratory. His work is focused on AI safety and AI risk analysis.

Comments (28)

I've been meaning to ask: Are there plans to turn your Cold Takes posts on AI safety and The Most Important Century into a published book? I think the posts would make for a very compelling book, and a book could reach a much broader audience and would likely get much more attention. (This has pros and cons of course, as you've discussed in your posts.)

As I mentioned on one of those Facebook threads: At least don't bill the event as a global conference for EA people and then tell people no, they can't come. Call it the EA Professionals Networking Event or something, which (a) makes it clear this is for networking rather than the kind of academic conference people might be used to, and (b) implies it might be exclusive. But if you bill it as a global conference, then make it an actual global conference. And at the very least, make it very clear that it's exclusive! Personally, I didn't notice any mention of exclusivity in any EA Global posts or advertising until I heard about people actually getting rejected and feeling bad about it.

Here's a perspective I mentioned recently to someone:

Many people in EA seem to think that very few people outside the "self-identifies as an EA" crowd really care about EA concerns. Similarly, many people seem to think that very few researchers outside of a handful of EA-affiliated AI safety researchers really care about existential risks from AI.

Whereas my perspective tends to be that the basic claims of EA are actually pretty uncontroversial. I've mentioned some of the basic ideas many times to people, and I remember getting pushback only once, I think - and that was from a self-professed Kantian who already knew about EA and rejected it because they associated it with utilitarianism. Similarly, I've presented some of the basic ideas behind AI risk many times to engineers, and I've only very rarely gotten any pushback. Mostly people agree that it's an important set of issues to work on, but say there are also other issues we need to focus on (maybe even to a greater degree), that they can't work on it themselves because they have a regular job, etc. Moreover, I'm pretty sure that for a lot of these people, if you compensated them sufficiently and removed the barriers preventing them from, e.g., working on AGI safety, they'd be highly motivated to do it. I mean, sure, if I can get paid my regular salary or even more and also maybe help save the world, that's fantastic!

I'm not saying that it's always worth removing all those barriers. In many cases it may be better to hire someone who is so motivated to do the job that they'd be willing to sacrifice for it. But in other cases it might be worth considering whether someone who isn't "part of EA" might totally agree that EA is great, and all you have to do is remove the barriers for that person (financial / career / reputational / etc.) and then they could make some really great contributions to the causes that EA cares about.

Questions:

  1. Am I correct that the perspective I described in my first paragraph is common in EA?
  2. Do you agree with the perspective I'm suggesting?
  3. What caveats and nuances am I missing or glossing over?

[Note: This is a bit long for a shortform. I'm still thinking about this - I may move to a regular post once I've thought about it a bit more and maybe gotten some feedback from others.]

Good point! I started reading those a while ago but got distracted and never got back to them. I'll try looking at them again.

In some cases it might be easier to do this as a structured interview rather than asking for written analyses. For example, I could imagine creating a podcast where guests are given an article or two to read before the interview, and then the interviewer asks for their responses on a point-by-point basis. This would also allow the interviewer (if they're particularly good) to ask follow-up questions as necessary. On the other hand, my personal impression is that written analyses tend to be more carefully argued and thought through than real-time interviews.

Thought: In what ways do EA orgs / funds go about things differently from the rest of the non-profit (or even for-profit) world? If they do things differently: why? How much has that been analyzed? How much have they looked into the literature and existing alternative approaches, or talked to domain experts?

Naively, if the thing they do differently is not related to the core differences between EA (or that org) and the rest of the world, then I'd expect this to be kind of like trying to reinvent the wheel, and it won't be a good use of resources unless you have a good reason to think you can do better.

Answer by Aryeh Englander · Mar 20, 2022

Thank you for posting this! I was going to post something about this myself soon, but you beat me to it!

Decision Analysis (the practical discipline of analyzing decisions, usually in a business, operations, or policy context; not the same as decision theory): This discipline overlaps in obvious ways with a lot of EA and LessWrong discussions, but I have seen few direct references to Decision Analysis literature, and there seems to be little direct interaction between the EA/LW and DA communities. I'd love to see if we could bring in a few DA experts to give some workshops on the tools and techniques they've developed. Several companies have also developed DA software that I think may be very useful for EA, and I'd love to see collaborations with some of these companies to see how those software systems can be best adapted for the needs of EA orgs and researchers.

Risk analysis is another closely related field that I would like to see more interaction with.

Some - see the links at the end of the post.
