Peter4444

I have mixed feelings, because I understand what the post is getting at but think it is a good example of someone writing down their thoughts without considering how others will perceive them. E.g. there is no need to say 'quality of person' to get the point across, but doing so makes more sense if the author's mental process is simply 'write down, as accurately as possible, what I believe', with no flagging of how the message might be received.

This problem seems common to me in the rationality community. I don't mean to dig at Thomas in particular, only to point out the pattern, since I think it could reduce the diversity of the EA community along important lines.

If you state an opinion, the norm is that the opinion should be scrupulously challenged.

If you state a feeling you had, especially a difficult or conflicted one, the norm is that it should be welcomed and certainly not challenged.

Individually, these attitudes make sense, but together I would expect them to push Forum posts towards emotional reactions and away from careful, unemotional analysis.

To clarify, I want both, and I think emotional reactions can be very important. But at least once I've seen a detailed but unemotional post buried under a less well-thought-through post describing someone's emotional reaction to a similar issue. Perhaps we should be more welcoming of posts that try hard to do careful and rational analysis, even if they seem (or are) misguided or unsuccessful.

(Intersubjective evaluation - the combination of multiple people's subjective evaluations - could plausibly be better than one person's subjective evaluation, especially a person's evaluation of themselves, assuming the 'errors' are somewhat uncorrelated.)
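For intuition on why the uncorrelated-errors assumption matters, here is the standard sketch (assuming each evaluator's error has mean zero and the same variance; this is my illustration, not a claim from the post):

$$\mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^{n}\epsilon_i\right)=\frac{\sigma^2}{n}$$

so an average of n subjective evaluations has a smaller expected error than any single evaluation on its own, including a self-evaluation.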

Linking to Spencer Greenberg's excellent short talk on intrinsic values: 

Spencer claims, among other things, that

  • it's a cognitive fact that you value multiple different things
  • if you pretend otherwise, e.g. because you feel it is stigmatised to act on any consideration other than impartial impact, you will fool yourself with the 'irrational doublethink' described in this post.

Thanks for sharing, and congrats! I especially enjoyed reading through the timeline. (I generally like & find it helpful to read concrete, relevant info, especially in posts more abstract than this one.)

Thanks so much for sharing your thoughts in such detail here :)

Not sure about best places, though I have a friend who's working on setting up an EA community in Tulsa, Oklahoma.

It might be worth pointing out that, in my experience, EAs seem quite unusual in tending to talk about EA almost all the time, e.g. at parties and other events as well as at work. I've often found this inspiring and energising, but I can also understand how someone could feel overwhelmed by it.

Great post!

The fact that some orgs already say things like 'knowledge of effective altruism is preferred but not essential' probably doesn't solve this issue. I can imagine that many jobs are competitive enough that you only have a realistic shot if you tick certain boxes related to EA knowledge/experience, even if a candidate without that obvious evidence might in fact be better and more aligned.

I think there's information value in doing lots of 10-minute speed interviews, at least sometimes, so that we can get a sense of how many competent, EA-aligned people might be off EA orgs' radar.

p.s. I can confirm that Evan has been an excellent volunteer for the EA & Consulting Network.

Thank you very much for this post. I thought it was well-written and that the topic may be important, especially when it comes to epistemics.

I want to echo the comments that cost-effectiveness should still be considered. I have noticed people (especially Bay Area longtermists) acting as though almost anything that saves time or is at all connected to longtermism is a good use of money. As a result, money gets wasted because cheaper ways of creating the same impact are missed. For example, an EA once offered to pay (I think) $140 of EA money for two long Uber rides for me so that we could meet up, since there wasn't a fast public transport link. The conversation turned out to be a 30-minute data-gathering task with set questions, which worked fine when we did it on Zoom instead.

Something can have a very high value but a low price. I would pay a lot for drinkable water if I had to, but thanks to tap water that isn't required, so it would be foolish to do so. In the example above, even if the value of the data were $140, the price of getting it was lower than that. After accounting for the value of the time spent finding cheaper alternatives, EAs should capture that surplus whenever possible.

As a default, I would like to see people doing a quick internal back-of-the-envelope calculation and a scan for cheaper alternatives, which could take anywhere from one to five minutes. Not only do I think this is cost-effective; I think it also helps with issues of optics and alienation, because you only do crazy-expensive-looking things when there's no obvious cheaper alternative.
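To make that concrete, here is a minimal sketch of the kind of one-minute check I have in mind, using the Uber example above. Every number except the $140 is a hypothetical assumption, not real data.

```python
# Back-of-the-envelope check: is the expensive option worth it once a cheap
# alternative exists? All figures except the $140 Uber quote are assumptions.

cost_uber = 140              # quoted cost of the two Uber rides ($)
cost_zoom = 0                # assumed marginal cost of doing the task over Zoom ($)
search_time_hours = 5 / 60   # assumed time spent looking for alternatives
value_of_time = 60           # assumed value of that time ($/hour)

search_cost = search_time_hours * value_of_time
net_saving = (cost_uber - cost_zoom) - search_cost

print(f"Net saving from the five-minute check: ${net_saving:.0f}")
# Roughly $135 saved, even after charging for the time spent checking.
```

Obviously the exact numbers don't matter much; the point is that the check itself is cheap relative to the expenditures it can catch.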

It would also be nice to have a megathread of cheaper alternatives to common expenditures.
