All of Ben Jamin's Comments + Replies

I think I am mostly comparing my impression of the landscape from a few years ago to today's landscape.

I am mostly talking about uni groups (I know less about how status-y city groups are), but there were certainly a few people putting in a lot of hours for no money and not much recognition from the community for just how valuable their work was. I don't want to name specific people I have in mind, but some of them now work at top EA orgs or are doing other interesting things and have status now, I just think it was hard for them to know... (read more)

My previous comment may well 'deserve' to be downvoted, but given that it has been heavily downvoted, I would appreciate it a lot if some of the downvoters could explain why they downvoted it.

5
Charles He
2y
You directly said that all the smart, dedicated people are working on LT causes or interventions, and also distinctly said that NT people aren't very good or thought out and that's why they are NT.

Moving to explicit slagging of different cause areas isn't an orthodox or acceptable view IMO in EA, even among "pure" LT or NT people talking about the other's respective areas. Things get metal in a world without these norms, which is why EA leaders pay a lot of attention to maintaining them.

Very strong people, who don't perfectly agree with every possible aspect of LT implementations, are very happy to see strong AI safety work and see it grow greatly. Some NT people are sitting on pretty powerful and unvoiced novel criticisms of LT (and vice versa). These ideas can't really be socialized and won't be said.

Note that this does not involve horse trading or anything like that. I would expect to provide aid or resources to other EAs working on cause areas I am not interested in, and vice versa. Not doing so is defection.
3
Thomas Kwa
2y
I didn't downvote. The negative component of the impression I got is that it seems vaguely rude to my model of neartermists, and the first paragraph doesn't quite seem true even if most of the impact is produced by longtermists, because people could have bad epistemics or different morals and still be smart and dedicated. It also seems out of scope for this discussion. Also, even if most of the impact is produced by longtermists, people working in global health or animal welfare are often something like 60% of the way there epistemically compared to the average person, and from a longtermist perspective the reason they aren't producing much impact is just that reality is unfair and effectiveness is a conjunction of multipliers.
9
Wind
2y
"Don't think about things well" is probably what caused it. It makes it hard not to read your post as NT EA's being stupid. If you removed that, your comment would have basically the same meaning, except it would be because of lack of exposure (or not taking weird ideas seriously) and not something that feels like a proxy for stupidity. Disclaimer: I didn't downvote you
3
freedomandutility
2y
I think some common reasons not to primarily focus on improving the long-run future, such as person-affecting views, non-totalist utilitarian beliefs, and the cluelessness objection, don't fit into the four categories you described.
7
howdoyousay?
2y
Full disclosure: I'm thinking about writing up the ways in which EA's focus on impact, and the amount of deference to high-status people, creates cultural dynamics which are very negative for some of its members.

It's a divisive claim, and not backed up with anything. By saying 'bite the bullet', it's like you're taunting the reader to say "if you don't recognise this, you're willfully avoiding the truth / cowardly in the face of truth", whereas for such a claim I think the onus is on you to back it up. It's also quite a harsh value judgement of others, and bad for that reason - see below.

This implies "some people matter, others do not". It's unpleasant and a value judgement, and worth downvoting on that alone. It also assumes such judgements can easily be made of others, whether they "don't think about things well". I think I've pretty good judgement of people and how they think (it's part of my job to have it), but I wouldn't make these claims about someone as if they were definitive and then decide whether to engage / disengage with them on the back of that.

But it's even more worth downvoting given how many EAs - in my experience, I'll caveat - end up disconnecting from the community or beating themselves up because they feel the community makes value judgements about them, their worth, and whether they're worth talking to. I think it's bad for all the 'mental health --> productivity --> impact' reasons, but most importantly because I think not hurting others, or creating conditions in which they would be hurt, matters.

This statement you made seems to me to be very value-judgementy, and would make many people feel threatened and less like expressing their thoughts in case they were accused of 'not thinking well', so I certainly don't want it going unchallenged, hence downvoting it. I think making a list of people doing things, ranking them against your four criteria above, and sharing that with other people would bring further negative tones to the E

One thing I feel this post underemphasises is just how high-impact the top PAs are going to be.

If you believe that impact is extremely heavy-tailed, some PAs (like Holden's) are probably going to have a far greater impact than the vast majority of high-status EAs, even if you are on the more pessimistic end about a PA's value-add.

You might also be able to leverage not caring about status: it's plausible to me that some people who are going to start mediocre organisations should actually try to force-multiply the best people, and one reason they don't i... (read more)

-2
Ben Jamin
2y
My previous comment may well 'deserve' to be downvoted, but given that it has been heavily downvoted, I would appreciate it a lot if some of the downvoters could explain why they downvoted it.

I want to be clear that I am endorsing not only the sentiment but also the drastic framing. At the end of the day, a few $100k here and there is literally a rounding error on what matters, and I would much rather top researchers spent this money on weird things that might help them slightly than we had a few more mediocre researchers working on things that don't really matter.

I certainly wouldn't say this about just any researcher; if they could work in Constellation/Lightcone, they have a 30% chance of hitting my bar. I am much more excited about this... (read more)

4
Lukas_Gloor
2y
Superforecasters can predict more accurately if they make predictions at 1% increments rather than 2% increments. Whether they can make predictions at lower increments either hasn't been studied, or the evidence is negative. 0.01% increments are way below anything that people regularly predict on; there's no way to develop the calibration for that. In my comment, I meant to point out that anyone who thinks they're calibrated enough to talk about 0.01% differences, or even just things close to that, is clearly not a fantastic researcher, and we probably shouldn't give them lots of money.

A separate point that makes me uneasy about your specific example (but not about generally spending more money on some people on the rationale that impact is likely extremely heavy-tailed) is the following. I think even people with comparatively low dark personality traits are susceptible to corruption by power. Therefore, I'd want people to have mental inhibitions against developing taste that's too extravagant. It's a fuzzy argument, because one could say the same thing about spending $50 on an Uber Eats order, and on that sort of example my intuition is "Obviously it's easy to develop this sort of taste, and if it saves people time, they should do it rather than spend willpower on changing their food habits."

But on a scale from $50 Uber Eats orders to spending $150,000 on a sports car, there's probably a point somewhere where someone's conduct becomes too dissimilar to the archetype of "person on a world-saving mission." I think someone you can trust with a lot of money and power would be wise enough that, if they ever form the thought "I should get a sports car because I'd be more productive if I had one," they'd sound a mental alarm and start worrying they got corrupted. (And maybe they'll end up buying the sports car anyway, but they certainly won't be thinking "this is good for impact.")

Sure, but it's pretty reasonable to think that Kat believes the majority of the value will come from helping longtermists, given that that is literally the reason she set up Nonlinear.

Also, EAIF will fund these things.

I like this because it is a low-overhead way for high-impact people to organise retreats, holidays, etc. with aligned people, and this is plausibly very valuable for some people. It will also nudge people to look after themselves and spend time in nice places, which on the current margin is maybe a good thing, idk.

Fwiw, I think that LTFF would fund all of the 'example use cases for guests' anyway for someone reasonably high-impact/value-aligned, so I think this is more about nudges than actually creating opportunities that don't already exist.

Not all EAs work on the long-term future

Sometimes the high-impact game feels weird; get over it.

I have been in lots of conversations recently where people expressed their discomfort with the longtermist community's spending (particularly at events).

I think my general take here is "yeah, I can see why you think this, but get over it". Playing on the high-impact game board when you have $40B in your bank account and only a few years to use it involves acting like you are not limited financially. If top AI safety researchers want sports cars because it will help them relax and therefore be more 0... (read more)

3
Lukas_Gloor
2y
I agree with the sentiment, but I wouldn't put it quite as drastically. (If someone actually talked about things that make them 0.01% more productive, that would suggest they have lost the plot.) Also, "(and I trust their judgment and value alignment)" does a lot of work. I assume you wouldn't say this about any researcher who self-describes as working on longtermism. If some grantmakers have poor judgment, they may give away large sums of money for regranting to other grantmakers who have even worse judgment or could be corrupt; then you get a pretty bad ecosystem where it's easy for the wrong people to attain more influence within EA.

Part of me is a bit sad that community building is now a comfortable and status-y option. The previous generation of community builders had a really high proportion of people who cared deeply about these ideas, were willing to take weird ideas seriously, and often took a substantial financial/career-security hit.

I don't think this applies to most of the current generation of community builders to the same degree; it just seems like much more of a mixed bag people-wise. To be clear, I still think this is good on the margin, I just trust the median new community builder a lot less (by default).

8
Manuel Allgaier
2y
Interesting! I work in CB full-time (Director of EA Germany), and my impression is still that it's challenging work, pays less than what my peers and I would earn elsewhere, and most CB roles still have a lot less status than, e.g., being a researcher who gets invited to give talks. Do you think some CBs are motivated by money or status? What makes you think so? I'm genuinely curious (though no worries if you don't feel like elaborating).

In your list of new hard-to-fake signals of seriousness, I like:

Doing high upside things even if there's a good chance they might not work out and seem unconventional.

I think this is underrated: as a community, we overemphasise actually achieving things in the real world, meaning that if you want to get ahead within EA it often pays to do the medium-impact but reasonable thing over the super-high-EV thing, as the weird super-high-EV thing probably won't work.

I'm much more excited when I meet young people who keep trying a bunch of things that seem pl... (read more)