Aryeh Englander

Aryeh Englander is a mathematician and AI researcher at the Johns Hopkins University Applied Physics Laboratory. His work is focused on AI safety and AI risk analysis.

Comments

Aryeh Englander's Shortform

Thought: In what ways do EA orgs / funds go about things differently from the rest of the non-profit (or even for-profit) world? If they do things differently: Why? How much has that been analyzed? How much have they looked into the literature / existing alternative approaches / talked to domain experts?

Naively, if the thing they do differently is not related to the core differences between EA / that org and the rest of the world, then I'd expect this to be kind of like trying to reinvent the wheel: not a good use of resources unless you have a good reason to think you can do better.

What are academic disciplines, movements or organisations that you think EA should try to learn more from?

Thank you for posting this! I was going to post something about this myself soon, but you beat me to it!

Decision Analysis (the practical discipline of analyzing decisions, usually in a business, operations, or policy context; not the same as decision theory): This discipline overlaps in obvious ways with a lot of EA and LessWrong discussions, but I have seen few direct references to Decision Analysis literature, and there seems to be little direct interaction between the EA/LW and DA communities. I'd love to see if we could bring in a few DA experts to give some workshops on the tools and techniques they've developed. Several companies have also developed DA software that I think may be very useful for EA, and I'd love to see collaborations with some of these companies to see how those software systems can be best adapted for the needs of EA orgs and researchers.

Risk analysis is another closely related field that I would like to see more interaction with.
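To give a concrete flavor of the kind of technique the DA field has developed, here is a toy Python sketch of one standard DA calculation, the expected value of perfect information (EVPI). The scenario and all the numbers are invented purely for illustration; real DA tools wrap this sort of calculation in graphical decision-tree or influence-diagram interfaces.

```python
# Toy Decision Analysis example: expected value of perfect information.
# Decision: fund project A (payoff depends on an uncertain state) or
# project B (a safe payoff). All numbers are made up for illustration.

p_good = 0.4                           # probability the "good" state obtains
payoff_A = {"good": 100, "bad": -20}   # uncertain option
payoff_B = 25                          # safe option

# Expected values without any further information:
ev_A = p_good * payoff_A["good"] + (1 - p_good) * payoff_A["bad"]
ev_B = payoff_B
ev_no_info = max(ev_A, ev_B)

# With perfect information, we pick the best option in each state:
ev_perfect = (p_good * max(payoff_A["good"], payoff_B)
              + (1 - p_good) * max(payoff_A["bad"], payoff_B))

evpi = ev_perfect - ev_no_info
print(f"EV(A) = {ev_A}, EV(B) = {ev_B}, EVPI = {evpi}")
```

Here EV(A) = 28 and EV(B) = 25, so you'd fund A outright, but EVPI = 27 means you should be willing to pay up to 27 units for a perfect forecast of the uncertain state: exactly the kind of value-of-information reasoning EA grantmakers do informally.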

On presenting the case for AI risk

Some - see the links at the end of the post.

On presenting the case for AI risk

What I do (assuming I get to that point in the conversation) is deliberately mention points like this, even before trying to argue otherwise. In my experience (which, again, is just my experience), a good portion of the time the people I'm talking to debunk those counterarguments themselves. And if they don't, I can start discussing it at that point - but by then it feels like I've already established credibility and non-craziness by (a) starting off with noncontroversial topics, (b) opening the more controversial topics with arguments against taking them seriously, and (c) drawing mostly obvious lines of reasoning from (a) to (b) to whatever conclusions they do end up reaching. So long as I don't signal science-fiction-geekiness too much during the conversation, any particular arguments I end up having to make at the end become a pretty easy sell.

EA Projects I'd Like to See

I haven't read most of the post yet, but already I want to give a strong upvote for (1) funding critiques of EA, and (2) the fact that you are putting up a list of projects you'd like to see. I would like to see more lists of this type! I've been planning to write one myself, but I haven't gotten to it yet.

AI Risk is like Terminator; Stop Saying it's Not

I think I mostly agree with this take, though with several caveats as noted by others.

On the one hand, there are clearly important distinctions to be made between actual AI risk scenarios and Terminator scenarios. On the other hand, in my experience people pattern-matching to Terminator usually doesn't make the concerns seem less plausible to them, at least as far as I could tell. Most people don't seem to have any trouble separating the time travel and humanoid-robot parts from the core concern of misaligned AI, especially if you immediately point out the differences. In fact, in my experience the whole Terminator framing seems to make AI risk feel more viscerally real and scary, rather than like a curious abstract thought experiment - which is how I think it often comes off to people otherwise.

Amusingly, I only watched Terminator 2 for the first time a few months ago, and I was surprised to realize that Skynet isn't so far off from actual concerns about misaligned AI. Before that, basically my whole knowledge of Skynet came from reading AI safety people complaining about how it's nothing like the "real" concerns. In retrospect I'm kind of embarrassed that I had repeated many of those complaints myself, even though I didn't actually know what Skynet was about!

On presenting the case for AI risk

Yes, I have seen people become more actively interested in joining or promoting projects related to AI safety. More importantly, I think it creates an AI safety culture and mentality. I'll have a lot more to say about all of this in my (hopefully) forthcoming post on why I think promoting near-term research is valuable.

Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits

[Disclaimer: I haven't read the whole post in detail yet, or all the other comments, so apologies if this is mentioned elsewhere. I did see that the Partnerships section talks about something similar, but I'm not sure it's exactly what I'm referring to here.]

For some of these products similar software already exists - it's just aimed at corporations and really expensive. As an example from something I'm familiar with: instead of building on Guesstimate, there's already Analytica (https://lumina.com/). Now, does it do everything Guesstimate does, with all of Guesstimate's features? Probably not. But a lot of these corporate software systems are meant to be customized for individual corporations' needs. The companies that build these platforms employ people whose job is customizing the software for particular needs, and there are often independent consultants who will do that for you as well. (My wife does independent customization for some software platforms as part of her general consulting business.)

So, what if we had some EA org buy corporate licenses to some of these platforms and hand them out to other EA orgs as needed? Where an existing system can do the job at all, it's usually (but not always) cheaper to buy and/or modify it than to build your own from scratch.

Additionally, many of these companies offer discounts for nonprofits, and some may even be interested in helping directly if approached. For example, I have talked with the Analytica team, and they are very interested in some of the AI forecasting work we've been doing (https://www.alignmentforum.org/posts/qnA6paRwMky3Q6ktk/modelling-transformative-ai-risks-mtair-project-introduction), and in the whole EA/LW approach in general.

Will it turn out cheaper to buy licenses for and/or modify Analytica for general EA purposes than to build on Guesstimate? I don't know; it will probably depend on the specifics. But I think it's worth looking into.
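As a sense of what "looking into it" might involve, here is a minimal sketch of the kind of Monte Carlo buy-vs-build comparison that tools like Guesstimate and Analytica are built to automate. All of the distributions and dollar figures below are invented placeholders, not real quotes for either product.

```python
# A minimal buy-vs-build Monte Carlo sketch, in the style of a
# Guesstimate model. Every distribution and dollar figure here is a
# made-up placeholder for illustration only.
import numpy as np

rng = np.random.default_rng(seed=0)
N = 100_000  # number of Monte Carlo samples

# Buy-and-customize: license fees plus consultant time, both uncertain.
license_cost = rng.lognormal(mean=np.log(20_000), sigma=0.3, size=N)
customization = rng.lognormal(mean=np.log(30_000), sigma=0.6, size=N)
buy_total = license_cost + customization

# Build-from-scratch: engineer-months at an uncertain loaded cost.
months = rng.lognormal(mean=np.log(4), sigma=0.5, size=N)
cost_per_month = rng.normal(loc=15_000, scale=2_000, size=N)
build_total = months * cost_per_month

print(f"P(buying is cheaper) ≈ {np.mean(buy_total < build_total):.0%}")
print(f"median buy:   ${np.median(buy_total):,.0f}")
print(f"median build: ${np.median(build_total):,.0f}")
```

The point isn't the specific answer (garbage in, garbage out) but that a model like this takes minutes to set up in any of these platforms, which is exactly why I'd like to see EA orgs get easier access to them.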
