remmelt


Some blindspots in rationality and effective altruism

This interview with Jacqueline Novogratz from Acumen Fund covers some practical approaches to attaining skin in the game.

Some blindspots in rationality and effective altruism

Two people asked me to clarify this claim:

Going by projects I've coordinated, EAs often push for removing paper conflicts of interest over attaining actual skin in the game.

Copying over my responses:

re: Conflicts of interest:

My impression has been that a few people appraising my project work looked for ways to e.g. reduce Goodharting, or the risk that I might pay myself too much from the project budget. Also, EA initiators sometimes post a fundraiser write-up for an official project with an official plan that somewhat hides that they're actually seeking funding for their own salaries to do that work (the official project framing looks less like a personal conflict of interest *on paper*).

re: Skin in the game:

Bigger picture, the effects of our interventions aren't going to affect us in a visceral and directly noticeable way (silly example: we're not going to slip and fall from some defect in the malaria nets we fund). That loose feedback from far-away interventions seems hard to overcome, but I think it's problematic that EAs also seem to underemphasise skin in the game for in-between steps where direct feedback is available. For example, EAs (me included) sometimes seem too ready to pontificate about how particular projects should be run or what a particular position involves, rather than rely on the opinions/directions of an experienced practitioner who would actually suffer the consequences of failing (or even be filtered out of their role) if they took actions that had negative practical effects for them. Or they might dissuade someone from initiating an EA project/service that seems risky to them in theory, rather than guide the initiator to test it out locally to constrain or cap the damage.

Get funding for your student group to buy productivity software

re: Using Asana Business at EA Hub Teams.

You can sign up here (I see EA PH already did): https://is.gd/asanaforea

It’s also possible to ask for a fully functional team for free there, but you need at least one paid member account (€220/year) to set up new teams, custom fields, and app integrations like Slack.

Migration is arrangeable with Asana staff (note that some formatting and conversations get lost). You basically need to arrange with me to add my email to your old space, and include it in this form: https://form.jotform.com/asanawebdev/asana-migration-request

Some blindspots in rationality and effective altruism

I'm actually interested to hear your thoughts!

Do throw them here, or grab a moment to call :)

A parable of brightspots and blindspots

Ah, good to know that my fumbled attempts at narrating were helpful! :)

I’m personally up for the audio tag. Let me see if I can create one for this post.

Some blindspots in rationality and effective altruism

See also these comments on the LessWrong forum:

Comment 1 (on my portrayal of Eliezer's portrayal of AGI):

... saying 'later overturned' makes it sound like there is consensus, not that people still have the same disagreement they've had 13 years ago ...

Comment 2:

On 3, I'd like to see EA take sensitivity analysis more seriously.

Comment 3:

I found it immensely refreshing to see valid criticisms of EA.
...
I think I disagree on the degree to which EA folks expect results to be universal and generalizable ...

Comment 4:

The way I've tended to think about these sorts of questions is to see a difference between the global portfolio of approaches, and our personal portfolio of approaches ...

Are we actually improving decision-making?

I'm interested in your two cents on any societal problems where a lot of work has been done by specialists who are not directly involved in the effective altruism community.

Are we actually improving decision-making?

Thank you too for the input, Vicky. This gives me a more grounded sense of what EA initiators with experience in policy are up to and thinking. Previously, I corresponded with volunteers of Dutch EA policy initiatives as well as staff from various established EA orgs that coordinate and build up particular professional fields. Your comment and the post by your working group made me feel less pessimistic about a lack of open consultation and consensus-building in IIDM initiatives.

I like your framing of a two-way learning process. I think it's useful to sometimes let go of one's own theory of impact in conversations, and to ask the other person why they're doing what they do and what they find relevant.

I had missed your excellent write-up, so I just read through it! It seems carefully written, makes nuanced distinctions, and considers the complexity in the many implicit interactions involved. I found it useful.

How much time should EAs spend engaging with other EAs vs with people outside of EA?

Thank you for starting a thread on this open question! Just reading through.

I wrote some quick thoughts on the value of getting a diversity of views here.

Are we actually improving decision-making?

Thank you too for your interesting counterarguments. Some scattered ideas on each: 

1. Your first point seems most applicable at the early stages of forming a community.
What do you think of the further argument that there are diminishing marginal returns to finding additional people who share your goals, and corresponding marginal increases in the risk of not being connected with people who will bring up important alternative approaches and views for doing good?

This is a rough intuition I have, but I don't know how to trade off the former against the latter right now. For example, someone I called with mentioned that giving a lecture for a computer science department is going to lead to more of the audience members visiting your EA meetups than if you hold it for the anthropology department. There are trade-offs here and in other areas of outreach, but it's not clear to me how to weigh up the considerations.

My sense is that as our community continues to grow bigger (an assumption), with fewer remaining STEM hubs still to reach out to, (re-)connecting with people who are more likely to take up similar goals will yield lower returns. In the early days of EA, Will MacAskill and Toby Ord prioritised gathering with a core group of collaborators to motivate each other and divide up work, as well as reaching out further to amenable others in their Oxford circles. Currently my impression is that in many English-speaking countries, and particularly within professional disciplines that are (or used to be) prerequisites for pursuing 80K priority career paths, it is now quite doable for someone to find such collaborators.

Given that we're surrounded by more like-minded others that we can easily gather with, it seems more likely that we drift into forming a collective echo chamber that misses or filters out important outside perspectives. My guess is that EA initiators now get encouraged more to pursue actions that the EAs they meet or respect will re-affirm as 'high impact'. On the other hand, perhaps they are also surrounded by more comrades who are able to observe their concrete actions, comprehend their intentions more fully, and give faster and more nitty-gritty feedback.


2. On your second point, this made me change my mind somewhat! Although it may be harder to identify specific perspectives that we are missing if we're surrounded by fewer non-EAs, we can still identify the people who are missing from the community. You mentioned that we're missing conservatives, and this post on diversity also mentioned social conservatives. Spotting a gap in cognitively diverse people ('social conservatives') seems relatively easy to do in, say, the EA Survey, while spotting a gap in important perspectives may be much harder if you're not already in contact with the people who have them (my skimpy attempt for social conservatives: 'more respect for the hidden value of traditions, work more incrementally, build up more stable and lasting collaborations, more wary of centralised decision-making without skin in the game').

Anthropologists were also given as an example by 80K, since anthropologists understood the burial practices that were causing Ebola to spread. I think the framing here of anthropologists having specialised skills that could turn out to be useful, or the framing of whether you can have enough impact pursuing a career in anthropology (the latter mentioned by Buck Shlegeris), misses another important takeaway for EA though: if you seek advice from specialists who have spent a lot of time observing and thinking differently about an area similar to the one you're trying to influence through your work, they might be able to uncover what's missing in your current approach to doing good.

I'd also be curious to read other plausible examples of professionals whose views we're missing!


3. Your third point on EAs being pretty open-minded does resonate with me, and I agree that it should make us less worried about EAs insulating themselves from different outside opinions. My personal impression is that EAs tend to be most open-minded in conversations they have inside the community, but are still interested in and open to having conversations with strangers they're not used to talking with.

My guess is that EAs still come across as kinda rigid to outsiders in terms of the relevant dimensions they're willing to explore whole-heartedly in public conversations about making a positive difference. I like this post on discussing EA with people outside the community, for example, but its starting point seemed to be to look for opportunities to bring up and discuss with unwitting outsiders the altruistic causes that EAs have already thought about for a long time (in other words, it starts from our own turf, where we can assume to have an informational advantage). As another example, a few responses by EA leaders that I've seen to outside criticisms of tenets of EA appeared somewhat defensive and stuck in views already held inside EA (though often the referred-to criticism seemed to mischaracterise EA views, making it hard to steelman that criticism and wring out any insights).

The EA community reminds me a lot of the international Humanist community I was involved in for three years: I hung out with people who were open-minded, kind, pondered a lot, and were willing to embrace wacky science or philosophy-based beliefs. But they were also kinda stuck on expounding on certain issues they advocated for in public (e.g. atheism, the right to free speech, euthanasia, living a well-reflected life, scepticism and Science, leaving money in your will for Humanist organisations). There was even a question of whether you were Humanist enough – one moment I remember feeling a little uncomfortable about was when the leader of the youth org I was part of decided to remove the transhumanists from the member list because they were 'obviously' not Humanist. From the inside, Humanism felt like a big influential thing, but really we were a big fish in a little pond.

–> Would be curious to hear where your impressions of EAs you've met differ here!

Over the last years, messaging from EA does seem to have become less preachy, i.e. describing and allowing space for more nuanced and diverse opinions, and relying less on big simplified claims that lack grounding in how the world actually works (e.g. claims about an intervention's effectiveness based on a metric from one study, a 100x donation effectiveness multiplier for low-income countries, leafletting costing cents per chicken saved, or that once an AI is generally capable enough it will recursively improve its own design and go FOOM).

But I do worry about EAs now no longer needing to interact as much with outsiders who think about problems in fundamentally different ways. Aspiring EAs do seem to make more detailed, better grounded, and less dogmatic arguments. But for the most part, we still appear to map and assess the landscape using similar styles of thinking as before. For example, posts recommended in the community that I've read often base their conclusions on explicit arguments that are elegant and ordered. These arguments tend to build on mutually exclusive categorisations, generalise across large physical spaces and timespans, and assume underlying structures of causation that are static. Authors figure out general scenarios and assess the relative likelihood of each, yet often don't disentangle the concrete meanings and implications of their statements, nor scope out the external validity of the models they use in their writing (granted, the latter are much harder to convey). Posts usually don't cover much of the variation across concrete contexts, the relations and overlap between various plausible perspectives, or the changes in underlying dynamics (my posts aren't exempt here!). Furthermore, the past environments that people involved in EA generalise their arguments from (e.g. Western academia, coding, engineering) are usually very different from the contexts in which the beneficiaries whose lives they're trying to improve reside (e.g. villages in low-income countries, animals in factory farms, other cultural and ethnic groups that will be affected by technological developments).


4. That brings me to your fourth point. What you proposed resonates with my personal experience of trying to talk with people from other groups ('EAs in the past put in an effort to reach out to other groups of people and were generally disappointed because the combination of epistemic care and deliberative altruistic ambition seems really rare'). I haven't asked others about their attempts at kindling constructive dialogues, but I wouldn't be surprised if many of those who did also came away somewhat disappointed by a seeming lack of altruistic or epistemic care.

So I think this is definitely a valid point, but I still want to suggest some nuances:

  • We could be more explicit, deliberate, and targeted about seeking out and listening intently to specialists who genuinely work towards making a positive difference in their field, yet hold possibly insightful views and approaches to doing good that draw from different life experience. I think we can do more than open-mindedly explore unrelated groups in our own spare time. I also think it's not necessary for a specialist to take a cosmopolitan and/or consequentialist altruistic angle to their work for us to learn from them, as long as they are somehow incentivised to convey or track true aspects of the world in their work.
  • If we stick tightly to comparing outsiders' thinking against markers used in EA to gauge, say, good judgement, scientific literacy, or good cause prioritisation, then we're kinda missing the point IMO. Naturally, most outside professionals are not going to measure up against standards that EAs have promoted amongst themselves and worked hard to get better at for years. A more pertinent reason to reach out IMO is to listen to people who think differently, notice other relevant aspects of the fields they're working in, and can help us uncover our blindspots.