
oh54321

229 karma · Joined Apr 2022

Comments (37)

My source is that I remember Ajeya mentioning at one point that it led to positive changes and that she doesn't think it was a bad decision in retrospect, but she can't get into said changes for NDA reasons.

AFAIK this is not something that can be shared publicly. 

This makes more sense. I still feel a bit irked by the downvotes, though - I would like people to be aware of the email, and I feel much more strongly about this than about not wanting people to see some of pseudonym's takes on the apology.

While I agree that these kinds of "bad EA optics" posts are generally unproductive and it makes sense for them to get downvoted, I'm surprised that this specific one isn't getting upvoted more. Unlike most links to hit pieces and criticisms of EA, this post actually contains new information that has changed my perception of EA and EA leadership.

with less intensity, we should discourage the framing of 'auditing' very established journalists for red flags

Why? If I were deciding whether or not to be interviewed by Rachel, probably the top thing I'd be worried about is whether she's previously written not-very-journalistic hit pieces on tech-y people (which is not all critical pieces in general! some are pretty good and well researched). I agree that there's such a thing as going too far, but I don't think my comment was doing that.

I think "there are situations this is valid (but not for the WSJ!)" is wrong? There have been tons of examples of kind of crap articles in usually highly credible newspapers.  For example, this article in the NYT seemed to be pretty wrong and not that good

I think it makes more sense to look at articles that Rachel has written about SBF/EA. Here's one:

https://www.wsj.com/articles/sam-bankman-frieds-plans-to-save-the-world-went-down-in-flames-11669257574

I (very briefly) skimmed it and didn't see any major red flags. 

Not an answer, but why are you trying to do this? If you're excited about biology, there seem to be plenty of ways to do impactful biological work.

Even if you're purely trying to maximize your impact, for areas like AI alignment, climate change, or bioweapons, the relevant question is something like: what is the probability that me working on this area prevents a catastrophic event? According to utilitarianism, your number of lives saved is basically this probability times the total number of people who will ever live, or something like that.

So if there's a 10% chance of AI killing everyone, and you working on this brings it down to 9.999999%, that's less impactful than if there's a 0.5% chance of climate change killing everyone, and you working on this brings it down to 0.499%. Since you're much more likely to do impactful work in an area that excites you, bio seems solid, since it's relevant to both bioweapons and climate change.
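To make the comparison concrete, here's a rough back-of-the-envelope sketch of that calculation in Python (the probabilities are the ones above; the total-future-population figure is an arbitrary placeholder, not an estimate):

```python
# Expected lives saved = (reduction in extinction probability) * (people affected).
# FUTURE_PEOPLE is a made-up illustrative number, not a claim about the future.
FUTURE_PEOPLE = 8e10

def expected_lives_saved(p_before, p_after, total=FUTURE_PEOPLE):
    return (p_before - p_after) * total

ai = expected_lives_saved(0.10, 0.09999999)     # 10% -> 9.999999%
climate = expected_lives_saved(0.005, 0.00499)  # 0.5% -> 0.499%

print(f"AI:      ~{ai:,.0f} expected lives")      # ~800
print(f"Climate: ~{climate:,.0f} expected lives") # ~800,000
```

Under these (made-up) numbers, the smaller absolute risk with the larger achievable reduction wins by three orders of magnitude.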

Ok, cool, that's helpful to know. Is your intuition that these examples will definitely occur and we just haven't seen them yet (due to model size or something like this)? If so, why?
