



Really cool report!

The UK needs to matter. I can see two main ways that the UK ends up mattering:

I haven't thought about this much, but besides the two things you mentioned, maybe something related to data centers could also turn out to be relevant. (Certainly in combination with regulatory diffusion, but maybe the UK's infrastructure is significant by itself.) I think the UK ranks third in number of data centers, though the stats for China might be unreliable (so it's possible the UK would rank fourth with complete stats). (Thanks to Konstantin Pilz for making me aware of this!)

Great points! I think "top-1,000" would've worked better for the point I wanted to convey.

I had the intuition that there are more (aspiring) novelists than competitive game players, but on reflection, I'm not sure that's correct.

I think the AI history for chess is somewhat unusual compared to the games where AI made headlines more recently, because AI spent a lot longer within the range of human chess professionals. We can try to tell various stories about why that is. On the "hard takeoff" side of the argument, maybe chess is particularly suited for AI, and maybe humans, including Kasparov, simply weren't that good before chess AI helped them understand better strategies. On the "slow(er) takeoff" side, maybe the progress in Go or poker or Diplomacy looks more rapid mostly because there was a hardware overhang and researchers didn't bother to put much effort into these games before it became clear that they could beat human experts.

The "human range" at various tasks is much larger than one would naively think because most people don't obsess over becoming good at any metric, let alone the types of metrics on which GPT-4 seems impressive. Most people don't obsessively practice anything. There's a huge difference between "the entire human range at some skill" and "the range of top human experts at some skill." (By "top experts," I mean people who've practiced the skill for at least a decade and consider it their life's primary calling.) 

GPT-4 hasn't entered the range of "top human experts" for domains that are significantly more complex than tasks for which it's easy to write an evaluation function. If we made a list of the world's best 10,000 CEOs, GPT-4 wouldn't make the list. Once it ranks at 9,999, I'm pretty sure we're <1 year away from superintelligence. (The same argument goes for top-10,000 Lesswrong-type generalists or top-10,000 machine learning researchers who occasionally make new discoveries. Also top-10,000 novelists, though maybe not on a gamifiable metric like "popular appeal.")

I share this sentiment.

What you're referring to in the last sentence sounds like evil that doesn't even bother to hide.

But this other part maybe warrants a bit of engagement:

She says others in the community told her allegations of misconduct harmed the advancement of AI safety,

If the allegations are true and serious, then I think it makes sense, even just on deterrence grounds, for people to have their pursuits harmed, no matter their entanglement with EA/AI safety or their ability to contribute to important causes. In addition, even if we went with the act-utilitarian logic of "how much good can this person do?", I don't buy that interpersonally callous, predatory individuals are good for a research community (no matter how smart or accomplished they seem). Finding out that someone does things that warrant their exclusion from the community (and damage its reputation) is really strong evidence that they weren't serious enough about having positive impact. One would have to be scarily good at mental gymnastics to think that this isn't a bad sign about someone's commitment and orientation toward impact. (It's already suspicious that most researchers in EA have worldviews that play to their strengths or make their own work seem particularly important. To some degree, biases in that area are probably unavoidable. Still, at the very least, we can try to select for people who are capable of putting in a half-decent effort to avoid these biases and get it right.)

Of course, sometimes particular behaviors seem unforgivable to some people but somewhat less bad to others. Therefore, I think it's really important to be clear and precise about what an accusation alleges. (I acknowledge that it can be tricky to give specifics while protecting the anonymity of accusers.) I can imagine circumstances where specific accusations would have significantly bad consequences on net, but not really if they are precise (also in the sense of not omitting important context) and truthful!

I feel like discussions about what we'd like social norms to be and (relatedly) how to react to "scandals" have an inherent dynamic that increases polarization. It often goes like this:

There's a scandal, or the possibility of one, and there's a tradeoff to make with respect to several things of importance. (E.g., creating a welcoming and safe environment vs. the fear that this devolves into a culture where 99% of people will eventually end up cancelled, with no chance of redemption, for increasingly less severe transgressions.) Many people have some opinion on where they would set this tradeoff, but different people would set the weights in different places. (And some people may just say things that they expect will be received well, adding momentum to whichever direction the pendulum is currently swinging.) Moreover, people operate in different parts of the EA community and have widely different day-to-day experiences, filtered by their personality, standing in the movement, preferred ways of socializing, things like gender or ethnicity, and so on. So, even if two people in some sense agreed that the ideal norms for the movement would set the tradeoff in one specific way, they may disagree, based on the different glimpses of the movement they catch, about where the pendulum currently is.

Now, since people often care really strongly about what the norms should be, it can be quite distressing if someone wants the pendulum at a 60-degree angle and thinks it's currently at a 120-degree angle, and then a person who wants it at 120 degrees comes along and talks as though it's already at 60 degrees. While these two people only differ by 60 degrees (one wants it at 60, the other at 120), it seems to them as though they differ by 120 degrees (each thinks the pendulum is currently far away from their preferred position). This impression of vast differences causes them to argue their case even more vehemently, which further amplifies the perceived difference until it feels like 180 degrees: total opposition.

I'm not sure what to do about this. One option could be to debate less where exactly the pendulum is in a movement-wide sense (since that's impossible to pin down to begin with: the answer will differ across parts of EA, both geographically and in terms of more subtle differences in what people see and experience, and no one should be confident about it given the limited glimpses they catch). Instead, we could say things like "I think the pendulum is too far to the left in such-and-such situations (e.g., Bay Area community house x)." Or, alternatively, we could focus more on what the movement should ideally look like. (E.g., maybe write down the reaction you'd like to see instead of focusing on why you don't like other people's reactions.) People will still disagree on these things, but maybe the disagreements will feel more like 60 degrees rather than the doubled 120 degrees?

To make clear that one is making statements about where one wants the pendulum to be rather than where it currently is, I think it's also useful to simply acknowledge all the values at stake in the tradeoff. This shows others that you at least see where they're coming from. It also makes clear that you're not engaged in frantic pendulum-pushing, where you think the pendulum has to move in a specific direction at all costs, without worrying about how far it has already gone in some places.

Lastly, maybe it would be good if people thought about what sort of viewpoints they disagree with but still find "defensible." I think it makes total sense to regard some viewpoints as indefensible and try to combat them and so on, but if we resort to that sort of reaction too quickly, then it becomes really difficult to coordinate on anything. Therefore, I often find it refreshing when people disagree in a way that makes clear that other perspectives are still heard and respected. 

Second: As a poly EA, I'm more likely to bother to show up for things if I think I might get laid. It increases engagement and community cohesion.

I upvoted the comment for sharing a relevant point of view, but I personally care most about an EA community where people obsess over ideas and taking action to make the world better. So, anything that attracts people for other reasons is something I see as a risk (dilution of quality). I'm not sure it's that important to draw in lots of casually interested people (definitely not saying you fit that description – I'm just talking about the part of "increases engagement").

To be clear, I know some poly people or people who "sleep around" who seem as serious about EA as it gets, so I'm not saying one can't have both.

That said, the most committed EAs I know who "sleep around" mostly do so outside of EA because they've decided doing it within EA has more risks than benefits and they attend EA events for impact rather than socially. (And my guess is that the most committed EAs who are poly pursue more serious poly relationships rather than lots of casual ones – but I don't actually know.) 

So, there's a sense in which I totally agree with the OP. I just don't think it's a good idea to try to do anything about this from top down. One thing we can do from the bottom up is socially encourage people for being highly dedicated and impact-oriented. (People tend to notice when someone has a high opinion of them and finds their attitude impressive.)

Edit: I guess one point that made me much more sympathetic toward the view that casual dating is not in tension with high dedication to (the moral inspiration of) EA is that several commenters mentioned that some of their best long-term relationships started casually. If that's the case for someone (i.e., pursuing casual relationships is one of the best ways of finding long-term relationship happiness), then that's of course different!

I mean, empirically women do choose to go on dates,

Not all do. Some no longer date after particularly bad experiences. (This can also apply to men.)

Great point about tail risks. I'm unsure if "sleeping around" is a good antidote. Maybe?

Against: According to the point you're making, people's first (or earliest) relationships are the most risky because they haven't yet developed reasonable expectations of what relationships should be like. Casual norms (and greater tolerance of dating across asymmetric power dynamics) encourage less careful selection, which makes it more likely that the partner is a bad match. (As many commenters point out, casual can turn into serious/long-term unexpectedly, so it's not like casual means you don't run the risk of ending up in a bad long-term bond.)

In favor: I find it hard to articulate why, but I think you still have a point. Maybe there's something about how casual norms lead to more discussions about sex and relationships? If so, maybe it's less about what you end up doing (actually "sleeping around") and more about seeking out advice from others around you, discussing difficult topics openly, etc.? (Sure, firsthand relationship experience is invaluable, but if the first one is bad for you and you have the sort of personality to "get stuck" – seems like you're in danger in environments with both types of norms?)

Either way, I don't think this is the sort of thing that one can (or should) easily engineer from the top down. Feels kind of dystopian if the rules are too restricting. (I do think it's good to have rules for things that can often go really badly and are somewhat easy to work around if some people really want to date each other – e.g., about power dynamics.)

Yeah, I mean I don't disagree with a lot of what you wrote.

That makes sense to me now after re-reading your initial comment! I think I was thrown off by various aspects of the comparison to FTX and then didn't read the last two thirds of your comment closely enough to notice that you made a different point than the one I was expecting. I ended up making a different point that doesn't have much to do with yours. Sorry for the confusion!

It seems like many EAs still (despite SBF) didn't put significant probability on the person from that particular Time incident being a very well-known and trusted man in EA, such as Owen.

These cases seem very different to me. One big update from the FTX situation was "in case you didn't already notice, dark triad traits can be really bad." By contrast, while I'm still processing the update from Owen's case, I think it's gonna be something more like, "probably there really is something unusually bad/unwelcoming about aspects of EA culture even outside the Bay area; sorry I didn't see this earlier." I don't see how I could've made that update just from the FTX scandal.

For what it's worth, I did have significant probability mass on the influential EA figure mentioned in the TIME article being someone who is indeed still influential within EA, despite the fact that the TIME article misrepresented the degree of involvement and centrality of one of the accused in one of the other incidents it described. So, it's not like I thought "no way this could happen in EA." The main thing I was taken aback by is that it ended up being someone who was not only very influential within EA, but also someone to whom the adjective "trusted" applied to a very high degree. In my view, SBF was never "trusted" in the way Owen was, even though he was even more influential and better known. (I still agree that "by far most EAs trusted SBF" is an accurate statement overall. I just want to highlight that there's a difference between "the minimum degree of trust required for someone to hold influential positions" and "trusting this person so much that they'd be among the very last people I'd expect to cause some kind of scandal.")

But honestly.... this community needs to come to terms that sexual assault or professional misconduct can be done by anyone.

I want to distinguish here between types of sexual assault or professional misconduct that are very rare for anyone who isn't high on dark tetrad traits and types that also frequently happen with people without dark tetrad traits. Both are bad, but if someone is a serial predator high on dark tetrad traits, you'll potentially end up with several dozen victims, and there can be violence or very explicit and agentic threats to physical safety and to ruining someone's reputation, as opposed to just contextually having to worry that one's reputation might suffer as a consequence of speaking up. Owen's case was nothing remotely like the former, so it seems super important to still have a category that is qualitatively different and a lot worse (and that's the category SBF was in, with respect to financial/regulatory misconduct rather than sexual misconduct).

The difference is easy to pin down.* Ask the question: "Does someone genuinely care about not messing up, not harming others or making them uncomfortable (or breaking laws/regulations/moral conventions), etc.? Yes or no?" If the answer is "yes," then you're in a different regime than if it's "no." 

*Edit: Actually, it's probably a bit harder to pin down. I think some bad actors may consciously care about not harming others, but their minds might have antisocial patterns of underlying emotions, self-deception, and so on, which can trick highly empathetic people into wanting to give them second and third chances because it convincingly seems as though they "mean well." So, maybe instead of asking "do they care (conscious intent)?", we also have to ask whether they have a mind that's sufficiently conducive to genuinely caring.
