I left out nuances to keep the blindspot summary short and readable. But I should have explicitly prefaced what fell outside the scope of my writing. Not doing so made some claims come across as more extreme than I intended, at least to the more literal-minded readers amongst us :)
So for those of you still reading this, here's where I was coming from:
Hope that clarifies the post's argumentation style somewhat. I had those three starting points at the back of my mind while writing in March; sorry I didn't include them.
But wait, how do we know that was really written by an algorithm? ^^
To clarify the independent vs. interdependent distinction: Julia suggested that EA thinking about negative flow-through effects is an example of interdependent thinking. IMO, EAs still tend to take an independent view there. Even I did a poor job above of describing causal interdependencies in climate change (since I still placed the causal sources in a linear 'this leads to this leads to that' sequence). So let me try to clarify again, at the risk of going metaphysical:
Would it be possible to release the audio narrations on Apple Podcasts too? I would personally be much more likely to dig into them if I could easily access them while going out for a walk or something.
This interview with Jacqueline Novogratz from Acumen Fund covers some practical approaches to attain skin in the game.
Two people asked me to clarify this claim:
Going by projects I've coordinated, EAs often push for removing paper conflicts of interest over attaining actual skin in the game.
Copying over my responses:

re: Conflicts of interest:
My impression has been that a few people appraising my project work looked for ways to e.g. reduce Goodharting, or the risk that I might pay myself too much from the project budget. Also EA initiators sometimes post a fundraiser write-up for an official project with an official plan, that somewhat hides that they're actually seeking funding for their own salaries to do that work (the former looks less like a personal conflict of interest *on paper*).
re: Skin in the game:
Bigger picture, the effects of our interventions aren't going to affect us in a visceral and directly noticeable way (silly example: we're not going to slip and fall because of some defect in the malaria nets we fund). That loose feedback from far-away interventions seems hard to overcome, but I think it's problematic that EAs also seem to underemphasise skin in the game for in-between steps where direct feedback *is* available. For example, EAs (me included) sometimes seem too ready to pontificate about how particular projects should be run or what a particular position involves, rather than rely on the opinions/directions of an experienced practitioner who would actually suffer the consequences of failing (or even be filtered out of their role) if they took actions that had negative practical effects for them. Or they might dissuade someone from initiating an EA project/service that seems risky to them in theory, rather than guide the initiator to test it out locally to constrain or cap the damage.
re: Using Asana Business at EA Hub Teams.
You can sign up here (I see EA PH already did): https://is.gd/asanaforea
It’s also possible to ask for a fully functional team for free there, but you need at least one paid member account (€220/year) to set up new teams, custom fields, and app integrations like Slack.
Migration is arrangeable with Asana staff (note that some formatting and conversations get lost). Basically, you need to arrange with me to add my email to your old space, and then include that email in this form: https://form.jotform.com/asanawebdev/asana-migration-request
I'm actually interested to hear your thoughts!
Do throw them here, or grab a moment to call :)
Ah, good to know that my fumbled attempts at narrating were helpful! :)
I’m personally up for the audio tag. Let me see if I can create one for this post.
See also LessWrong Forum:

Comment 1 (on my portrayal of Eliezer's portrayal of AGI):
... saying 'later overturned' makes it sound like there is consensus, not that people still have the same disagreement they've had 13 years ago ...
On 3, I'd like to see EA take sensitivity analysis more seriously.
... I found it immensely refreshing to see valid criticisms of EA ... I think I disagree on the degree to which EA folks expect results to be universal and generalizable ...
The way I've tended to think about these sorts of questions is to see a difference between the global portfolio of approaches, and our personal portfolio of approaches ...
Comment 5: (a bunch of counterarguments and counterexamples)