David_Althaus

Comments

What psychological traits predict interest in effective altruism?

> I wouldn’t be totally surprised if it was less predictive than, say, “openness to new ideas” or something.

That seems possible, yeah. (Generally, it would be interesting to see if other personality traits are also predictive.)

> I wonder if you could learn more by interviewing people who are just starting to get interested in EA and seeing how their responses change over, say, a year? Interviewing people who have just started an intro to EA fellowship/virtual program could work well for this.

Good idea, that would definitely be informative!

What psychological traits predict interest in effective altruism?

Thanks!

> Is it possible that being E and A correlates with EAs who have been involved and absorbed EA ideas but wouldn’t correlate with EAs if you were able to survey them before they got involved in EA?

That there is no correlation at all seems unlikely to me. (I could expand on that.)

However, I do agree that there is plausibly an effect where being involved in EA, interacting with fellow EAs and hearing EA arguments makes you score even more highly on expansive altruism and effectiveness-focus scales than when you first encountered EA.

> I could also imagine someone who is very open to reasonable arguments but isn’t particularly E or A, yet comes to agree with the statements over time.

That seems plausible to me as well, particularly for effectiveness-focus.

I agree that this is not an especially cost-effective intervention. I was hoping to convey something else with my comment.

> If that new FTX Future Fund invested all $1B into Ukraine, it would be a minority percentage of all total funds.

Sure, but the fact that an area has already received dozens of billions in funding is in itself not a knock-down argument. For example, hundreds of billions of dollars are spent on climate change every year and hundreds of billions were spent on COVID vaccine development alone. But posts about interventions in these areas would receive much less pushback (or usually no pushback). 

Overall, I think that interventions in this space are plausibly more cost-effective than the average climate change intervention discussed by EAs. (That being said, there are additional strategic and PR reasons to embrace climate change as a cause area, since it is one of the ideological cornerstones of EA’s main political ally.)

The main reason I wrote my comment was not to suggest that this is the most cost-effective intervention (which I agree it is not). I wanted to respond to the large number of downvotes and, if I am to be frank, my impression of the somewhat hostile tone of Dony’s comment, which made me think that many EAs think that OP’s post is clearly net negative.

In addition, I felt that arguments in favor of concessions/giving in to Putin’s threats (e.g., this post) were overrepresented on this Forum (and among EAs I know in private). I was responding more to these sentiments (and also to Dony’s claim that there is no debate). Lastly, there are also game-theoretic reasons not to advertise one’s willingness to give in to coercion.

[I’m sleep-deprived, so this is not well written and fairly repetitive and unstructured; apologies. I also know nothing about politics and usually follow a policy of almost never reading the news. So me writing a comment on a complex geopolitical issue is arguably ludicrous.]

I see your reasoning with these points, and agree that the sign of donating is unclear, but I also think there are counterarguments to the points you have made. 

I think that effectively giving in to Putin’s threats here plausibly emboldens him and other malevolent autocrats to take over more countries in the future with impunity. Instead, the more effective approach, and the one that might have better results in the long run, might be what the West is currently doing: forming a coalition that enforces punishments on malevolent autocrats who invade other countries. (I do think that the US invading, e.g., Iraq is sufficiently dissimilar from the current case, though I know many people disagree on this.)

(Perhaps one might object that Putin is not a malevolent autocrat. I think it’s likely that he is, but I don’t provide evidence for this here.)

If the West does not do this, it might become clear to Putin and others that they should invade neighboring countries (e.g., China taking over Taiwan), given that there are large material incentives for doing so and that they will not face much resistance.

Therefore, if the West does not stand up strongly to Putin now, the result might be more violence and lives lost in the long run. Also, the historical track record of appeasing malevolent dictators has not been good. (Though it’s difficult to know the relevant counterfactuals, of course.)

A few other miscellaneous points:

I guess the big question is what the overall policy should be for dealing with situations like this. If the West gives in to Putin now, what should the response be when he invades other countries? Or when other nuclear-armed nations invade other countries? If we are going to give in to any nuclear-armed autocrat, that’s a quick recipe for handing over more and more territory and power to malevolent leaders, which seems very negative from both a short-termist and a longtermist perspective.

Overall, it’s plausible to me that every day we prolong this war might be net positive from a long-term perspective, even if more lives are lost in the short term. First, it makes it more likely that sanctions bite hard enough that Putin has to give up. Second, a longer war will be a greater deterrent to Putin and other autocrats in the future. Last, prolonging the war plausibly weakens the power of Putin and his allies and strengthens the political opposition in Russia. If Putin is successful now, the Russian people will update on that and will be more likely to support future nationalistic leaders. However, if Putin fails, they might be more likely to support more conciliatory, peaceful leaders.

Generally, reducing Putin’s influence seems very valuable from a longtermist perspective, since he seems to have caused a lot of harm in the past decades. For example, he plausibly helped increase political polarization in the US, perhaps helped Trump win the election, weakened international cooperation, etc.

Another point to keep in mind: imagine we live in the universe where Putin really is willing to consider launching nuclear missiles over this conflict if Ukraine is not handed to him without much resistance. (If we don’t live in this universe, we don’t have to worry about his nuclear threats.) It seems to me that the Putin of this universe would also be fairly likely to invade more countries and make further nuclear threats (to which the world would have to give in again and again), giving him ever more power. The Putin of this universe would be a terrible person to give much power to.

Lastly, I agree that getting involved in hot-button issues is usually not wise (e.g., because they are too crowded), but this is not always true. For example, many EAs also got involved in COVID work.

All that said, I agree that this issue is complex and fraught with uncertainty about how one should act. 

What psychological traits predict interest in effective altruism?

Good point! 

> I'd expect people in their early twenties to answer it quite differently than people in their early forties.

I'd have expected this as well, but according to the data, age doesn't make a difference when it comes to answering the career item (r = -.04, p = .56).

What psychological traits predict interest in effective altruism?

Yeah, the negative correlation between education and expansive altruism was also the most surprising to me. 

However, these correlations might not hold up in the general population as it could be something specific to MTurkers. 

It seems that the negative correlation is mostly due to the items "I would make a career change if it meant that I could improve the lives of people in need" (r = -.21, p < .001) and "From a moral perspective, the suffering of all beings matters roughly the same, no matter to what species they belong to" (r = -.18, p < .01). Perhaps more educated people are happier with their careers and thus more reluctant to change them? I don't understand the negative correlation with the anti-speciesism item.
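(For anyone who wants to reproduce this kind of item-level check, here is a minimal sketch of the computation. The file and column names, `survey.csv`, `education`, and the two item columns, are hypothetical stand-ins, not the actual dataset.)

```python
# Minimal sketch of the item-level correlation check described above.
# The file and column names are hypothetical stand-ins for the survey data.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")  # hypothetical MTurk dataset

items = ["career_change_item", "antispeciesism_item"]  # hypothetical columns

for col in items:
    # Drop rows with missing values pairwise before computing Pearson's r.
    sub = df[["education", col]].dropna()
    r, p = stats.pearsonr(sub["education"], sub[col])
    print(f"education vs {col}: r = {r:.2f}, p = {p:.3f}")
```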

What psychological traits predict interest in effective altruism?

Thank you! 

If we operationalize proto-EAs as scoring five or higher on both scales, then I’d say the 14% estimate is closer to the actual number of proto-EAs in the general (US) population (though it’s not clear whether this is the relevant population or operationalization; more on that below).

First, the MTurk sample is much more representative of the general population than the NYU sample. The MTurk sample is also larger (n = 534) than the NYU sample (n = 96) so the MTurk number is a more robust estimate. Lastly, the NYU sample mostly consisted of business school students (undergraduates) who are probably less altruistic than the general population (e.g., Cadsby & Maynes, 1998).[1]

However, if we operationalize proto-EA as “someone who finds EA ideas intuitively appealing and is likely to become a highly engaged EA later on” (which is perhaps closer to what we ultimately care about), then I’d think the NYU number of 6% is a better estimate (and probably an overestimate).[2]

First, our scales were all self-report. It’s a lot easier to respond with “agree” to a question like “I would make a career change if it meant that I could improve the lives of people in need” than to actually do so when push comes to shove. Relatedly, acquiescence bias and social-desirability bias probably inflated mean scores (see footnote 3).

Lastly, as mentioned briefly in the post, becoming a highly engaged EA often requires more than being altruistic and effectiveness-focused. For example, most high-impact career paths discussed by 80k are difficult to pursue (some more so than others) without fairly high cognitive ability and conscientiousness, low neuroticism, and so forth. (Of course, depending on how you define it, you can be a “highly engaged EA” without having a highly impactful career, but it’s certainly a lot harder to stay highly engaged if it feels as though you are not making a real difference.)
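(As a concrete illustration of the first operationalization, scoring five or higher on both scales, here is a minimal sketch of how such a proportion estimate could be computed. The file and column names are hypothetical stand-ins, not the actual dataset.)

```python
# Minimal sketch of the "five or higher on both scales" operationalization.
# File and column names are hypothetical stand-ins for the survey data.
import pandas as pd

df = pd.read_csv("mturk_sample.csv")  # hypothetical dataset, n = 534

# A respondent counts as a proto-EA if both scale scores are >= 5.
proto_ea = (df["expansive_altruism"] >= 5) & (df["effectiveness_focus"] >= 5)

print(f"Estimated proto-EA share: {proto_ea.mean():.1%}")  # ~14% reported above
```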

[1] Admittedly, I mostly believe this based on personal experience and priors, and looked for evidence afterwards; the introduction of Cadsby & Maynes (1998) does cite more relevant papers (which I haven’t read).

[2] Needless to say, this estimate depends a lot on what exactly we mean by "highly engaged EA". It also depends on how much outreach is happening: e.g., the more the EA community grows, the more people will be inclined to join for social reasons.

SamClarke's Shortform

Thanks, Howie! Sent you an email.

SamClarke's Shortform

I've been thinking about starting such an EA mental health podcast for a while now (each episode would feature a guest describing their history with EA and mental health struggles, similar to the 80k episode with Howie).

However, every EA whom I've asked to interview (only ~5 people so far, to be fair) was concerned that such an episode would be net negative for their career (e.g., by making them less attractive to future employers or collaborators). I think such concerns are not unreasonable, though it seems easy to overestimate them.

Generally, there seems to be a tradeoff between how personal the episode is and how likely the episode is to backfire on the interviewee.

One could mitigate such concerns by making episodes anonymous (and perhaps anonymizing the voice as well). Unfortunately, my sense is that this would make such episodes considerably less valuable.

I'm not sure how to navigate this; perhaps there are solutions I don't see. I also wonder how Howie feels about having done the 80k episode. My guess is that he's glad he did it, but if he regrets it, that would make me even more hesitant to start such a podcast.
