Henry_Sleight

28 karma · Joined Aug 2020 · 8 comments

Small feedback on your essay itself: 

Even as someone interested in hearing what you had to say, I found your writing hard to skim efficiently. I'd have loved it if you'd posted more visible TL;DRs at the start and named the sections after their conclusions rather than their guiding questions.

The teaser video worked on me as you predicted, though. Props on that!

This also makes for a distinctive cover letter for the OP job, to be sure! Smart.

One obvious heuristic is not to act on the thing without first asking people why they've left that gap, which is, I think, part of OP's model here. That feels like the case in the example OP gives with Devansh: if some bad unilateral action were in play, OP, with a better map, could have explained why.

In my experience, most times I've had the feeling "why tf isn't anybody doing X?!", my first response has been to run around asking people exactly that question.

This seems like one of a few heuristics we'd need for this to go safely by default.


I like this post! Sharing a draft with the events team first, with a view to then posting it on the forum for transparency and to articulate the direction you want other organisers to move in, seems like a great internal comms norm. Thanks :)

How short are your AI timelines btw?

edit: clarifying that I'm not being serious, and that I broadly agree with the post.

Really enjoyed this summary - you succeeded in capturing the tone of Singer's writing, and the style is really approachable.

Also, your illustrations, as Thomas pointed out, are brill!

I especially like the phrasing:
>"In assessing whether or not we should donate our resources, Singer argues that we probably overestimate the number of people that are helping in any given situation."

Overall, it seems like people still overestimate this after hearing about EA - maybe even more than before!

Ah, this irony point is interesting! Do you think this irony is in some way antithetical to the statusy self-importance of West Coast culture?

Thanks for writing this up! I'm also in the midst of Working Things Out, and a lot of what you've said hits home. My bottom line is something like: I completely agree that there comes a point in most people's decisions about their lives and what to prioritise where, even though they've done all the homework and counted all the utils on each side, they mostly make the final call on intuition - because you ultimately can't prove most of this stuff for certain.

One thing that could help you structure your cause prio is to focus on a key decision it has to help you make, and to use your sureness about that decision as a barometer for when you've done enough cause prio.

> On "I come up with 10 new things to consider": you're right that it sometimes feels like battling an intellectual hydra of crucial considerations. Have you got the sense so far that, of the 10 new things to consider, there are at least one or two that could substantially reshape your opinion? For me, even when that's not the case, having a more detailed picture can still be really valuable. This seems especially important for situations/roles where you'll probably end up communicating about EA to people with less context than you.

> On when to stop: Cause prio thinking and building models of different fields of research/work is definitely something you could spend literally forever on. I roughly think that this wave of EAs is stopping a bit too early and jumping into trying to do useful work too quickly. I elaborate on this in the next bit.

> Against lots of deferring: An argument that motivates me here is that in most EA/longtermist roles you'll want to go into, time spent investing in your cause prio seems to save time later. Specifically, it's likely to save the time your colleagues would otherwise spend giving you context, explaining how they orient towards the problem, etc. The better you've nailed down your own view, the better you can make (increasingly) autonomous decisions about how the projects you work on should look. I think this applies in basically any field of EA work: knowing in great detail why you care about a given cause area helps you identify which empirical facts about the world matter to your aims, which in turn helps a lot with strategy and design decisions. It also means your team benefits more from having you on it, because your perspective is likely to be distinct from other people's in useful ways!

(I'm quite uncertain about the above, and I think this sort of thing differs a lot between individuals.)

Awesome post! I think it's great to make this stuff explicit - it would have made my transition into working in EA spaces much smoother (and will probably make the tail end of that transition faster and safer!). What would you recommend to people who want to read/talk/think more about this?