Aryeh Englander

705 karma · Joined Jun 2015

Bio

Aryeh Englander is a mathematician and AI researcher at the Johns Hopkins University Applied Physics Laboratory. His work is focused on AI safety and AI risk analysis.

Comments (33)

[Disclaimers: My wife Deena works with Kat as a business coach - see my wife's comment elsewhere on this post. I briefly met Kat and Emerson while visiting Puerto Rico and had positive interactions with them. My personality is such that I have a very strong inclination to try to see the good in others, which I am aware can bias my views.]

A few random thoughts related to this post:

1. I appreciate the concerns over the potential for personal retaliation, and the other factors mentioned by @Habryka and others for why it might be good not to delay this kind of post. Those concerns and factors are serious and should definitely not be ignored. That said, I want to point out a different kind of harm in the other direction that posting this sort of thing without waiting for a response can cause: reputational damage. As others have pointed out, many people seem to update more strongly on negative reports that come first and less on subsequent follow-up rebuttals. If the accusations turned out to be demonstrably false in critically important ways, then even if that came to light later, the reputational damage to Kat, Emerson, and Drew might by then be irreparable.

Reputation is important almost everywhere, but in my anecdotal experience it seems to matter even more in EA than in many other spheres. Many people in EA seem to have a very strong in-group bias towards favoring other "EAs", and it has long seemed to me that (for example) getting a grant from an EA organization often depends even more on strong EA personal connections than grants elsewhere do. (This is not to say that personal connections aren't important for securing other types of grants or deals, and it's definitely not to say that getting an EA grant is only or even mostly about having strong EA connections. But from my own experience and from talking to quite a few others both in and out of EA, this is definitely how it feels to me. Note that I have received multiple EA grants in the past and have helped other people apply for and receive substantial EA grants.) I really don't like this dynamic and have low-key complained about it for a long time - it feels unprofessional and raises all sorts of in-group bias flags. I think a lot of EA orgs do seem to have gotten somewhat better about this over time, but I think it is still a factor.

Additionally, it sometimes feels to me that EA Forum dynamics lead to posts and comments critical of people or organizations being very strongly upvoted, especially if those people are more "centrally connected" in EA, while posts and comments in the other direction are ignored or even downvoted. I'm not sure why the dynamic feels this way, and maybe I'm wrong about it being a thing at all. Regardless, I strongly suspect that any subsequent rebuttal by Nonlinear would receive significantly fewer views and upvotes, even if the rebuttal were actually very strong.

Because of all this, I think the potential for reputational harm to Kat, Emerson, and Drew may be even greater than it would be in the business world or some other community. Even if they somehow provide unambiguous evidence that refutes almost everything in this post, I would not be terribly surprised if their ability to get EA funding or to collaborate with EA orgs going forward were permanently ended. In other words, I wouldn't be terribly surprised if this post spelled the end of their "EA careers" even if the central claims all turned out to be false. My best guess is that this is not the most likely scenario, and that if they provide sufficiently good evidence they will most likely be "restored" in the EA community for the most part, but I think there's a significant chance (say 1%-10%) that this is basically the end of their EA careers regardless of the actual truth of the matter.

Does any of this outweigh the factors mentioned by @Habryka? I don't know. But I just wanted to point out a possible factor in the other direction that we may want to consider, particularly if we want to set norms for how to deal with other such situations going forward.

2. I don't have any experience with libel law or anything of the sort, but my impression is that suing for defamation (libel, in the case of a written piece like this) is very much within the range of normal responses in the business world, even if in the EA world it is basically unheard of. So if your frame of reference is the world outside of EA, then suing seems at least like a reasonable response, while if your frame of reference is the EA community then maybe it doesn't. I'll let others weigh in on whether my impressions here are correct, but I didn't notice others bring this up so I figured I'd mention it.

3. My general perspective on these kinds of things is that... well, people are complicated. We humans often seem to have this tendency to want our heroes to be perfect and our villains to be horrible. If we like someone, we want to think they could never do anything really bad, and unless presented with extremely strong evidence to the contrary we'll look for excuses for their behavior so that it matches our picture of them as "good people". And if we decide that they did do something bad, then we label them as "bad people" and retroactively reject everything about them. And if that's hard to do, we suffer from cognitive dissonance. (Cf. the halo effect.)

But the reality, at least in my opinion, is that things are more complicated. It's not just that there are shades of grey, it's that people can simultaneously be really good people in some ways and really bad people in other ways. Unfortunately, it's not at all a contradiction for someone to be a genuinely kind, caring, supportive, and absolutely wonderful person towards most of the people in their life, while simultaneously being a sexual predator or committing terrible crimes.

I'm not saying that any of the people mentioned in this post necessarily did anything wrong at all. My point here is mostly just to note something that may be obvious to almost all of us, but which feels relevant and probably bears repeating in any case. Personally, I suspect that everybody involved was acting in what they perceived to be good faith and is or was genuinely trying to do the right thing - they're just looking at the situation through lenses based on very different perspectives and experiences, and so coming to very different conclusions. (But see my disclaimer at the beginning of this comment about my personality bias coloring my own perspective.)

This is great! Just wanted to mention that this kind of weighting approach works very well with the recent post A Model-based Approach to AI Existential Risk, by Sammy Martin, Lonnie Chrisman, and myself, particularly the section on Combining inside and outside view arguments. Excited to see more work in this area!

Looking over that comment, I realize I don't think I've seen anybody else use the term "secret sauce theory", but I like it. We should totally use that term going forward. :)

Fair. I suppose there are actually two paths to being a doomer (usually): secret sauce theory or extremely short timelines.

Many people who worry about AI x-risk believe some variation of this.

I've been meaning to ask: Are there plans to turn your Cold Takes posts on AI safety and The Most Important Century into a published book? I think the posts would make for a very compelling book, and a book could reach a much broader audience and would likely get much more attention. (This has pros and cons of course, as you've discussed in your posts.)

As I mentioned on one of those Facebook threads: At least don't bill the event as a global conference for EA people and then tell people no, you can't come. Call it maybe the EA Professionals Networking Event or something, which (a) makes it clear this is for networking and not the kind of academic conference people might be used to, and (b) implies it might be exclusive. But if you bill it as a global conference, then run it like a global conference. And at the very least make it very clear that it's exclusive! Personally, I didn't notice any mention of exclusivity in any EA Global posts or advertising until I heard about people actually getting rejected and feeling bad about it.

Here's a perspective I mentioned recently to someone:

Many people in EA seem to think that very few people outside the "self identifies as an EA" crowd really care about EA concerns. Similarly, many people seem to think that very few researchers outside of a handful of EA-affiliated AI safety researchers really care about existential risks from AI.

Whereas my perspective tends to be that the basic claims of EA are actually pretty uncontroversial. I've mentioned the basic ideas many times to people, and I remember getting pushback only once, I think - and that was from a self-professed Kantian who already knew about EA and rejected it because they associated it with utilitarianism. Similarly, I've presented some of the basic ideas behind AI risk many times to engineers and have only very rarely gotten any pushback. Mostly people totally agree that it's an important set of issues to work on, but they also think there are other issues we need to focus on (maybe even to a greater degree), they can't work on it themselves because they have a regular job, etc. Moreover, I'm pretty sure that for a lot of such people, if you compensated them sufficiently and removed the barriers preventing them from, e.g., working on AGI safety, they'd be highly motivated to work on it. I mean, sure - if I can get paid my regular salary or even more and maybe also help save the world, then that's fantastic!

I'm not saying that it's always worth removing all those barriers. In many cases it may be better to hire someone who is so motivated to do the job that they'd be willing to sacrifice for it. But in other cases it might be worth considering whether someone who isn't "part of EA" might totally agree that EA is great, and all you have to do is remove the barriers for that person (financial / career / reputational / etc.) and then they could make some really great contributions to the causes that EA cares about.

Questions:

  1. Am I correct that the perspective I described in my first paragraph is common in EA?
  2. Do you agree with the perspective I'm suggesting?
  3. What caveats and nuances am I missing or glossing over?

[Note: This is a bit long for a shortform. I'm still thinking about this - I may move to a regular post once I've thought about it a bit more and maybe gotten some feedback from others.]

Good point! I started reading those a while ago but got distracted and never got back to them. I'll try looking at them again.
