PhD student at Aberdeen University studying Bayesian reasoning
Interested in practical exercises and theoretical considerations related to causal inference, forecasting and prioritization.
What is EAA? Effective Animal Advocacy?
My (naive) understanding is that the risk of a recession today is not much lower than in 2007-08. So the question of whether EAs would have been working on this back then roughly reduces to whether EAs are looking into macroeconomic risk today. And the answer to that is mixed: there is actually a tag on the Forum for this very problem, which includes a reference to OpenPhil's program on macroeconomic policy stabilization. But there are no articles under that tag, and I haven't heard much discussion of the topic outside of OpenPhil.
Thank you for writing this! While this is not the main focus of the paper, I read your notes on heavy-tailed distributions with considerable interest.
I think the concept of heavy-tailed distributions underpins a lot of considerations in EA, yet, as you remark, many people (including me) are still quite confused about how to formalize the concept effectively and how often it applies in real life.
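One informal way to see what "heavy-tailed" buys you in practice: compare how much of a sample's total comes from its top 1% under a light-tailed versus a heavy-tailed distribution. This is a toy sketch of mine (the distributions and parameters are illustrative choices, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Light-tailed sample: a (shifted, positive) normal distribution.
light = np.abs(rng.normal(loc=10, scale=2, size=n))

# Heavy-tailed sample: a Pareto distribution with tail index alpha = 1.1,
# which has a finite mean but infinite variance.
heavy = rng.pareto(1.1, size=n) + 1

def top_share(x, frac=0.01):
    """Fraction of the total contributed by the top `frac` of observations."""
    k = max(1, int(len(x) * frac))
    return np.sort(x)[-k:].sum() / x.sum()

print(f"top 1% share, light-tailed: {top_share(light):.1%}")
print(f"top 1% share, heavy-tailed: {top_share(heavy):.1%}")
```

In the light-tailed case the top 1% contributes barely more than 1% of the total; in the heavy-tailed case it contributes a large fraction of it, which is the intuition behind why outliers matter so much for cause prioritization.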
Glad to see more thinking going into this!
I find your steelman convincing (would love more intersectionalists to confirm though!).
Re: downsides of intercause prioritization. Beyond making people feel bad about their work, systematic prioritization can systematically misallocate resources, while a more informal, holistic and intersectional approach is less likely to make this kind of mistake.
Arguably, while EAs are very well aware of the importance of hit-based giving, they are overly focused on a few cause areas. Meanwhile my (naive) impression is that intersectionalists are successfully tackling a much wider array of problem areas and interventions, from community help to international aid and political lobbying.
I do not think it is a stretch to think that prioritization frameworks are partly to blame for cause convergence in the EA community.
I feel like invoking worldview diversification here is discussing things at the wrong level.
It's like saying "oh, it's ok that you believe in intersectionality, because from a worldview diversification perspective we want to work on many causes anyway", and failing to address the fundamental disagreement: within their worldview, an intersectionalist does not find cause prioritization useful.
Like, I feel the crux of intersectionality is about different problems being interwoven in complex, hard-to-understand ways. So as OP pointed out, if you believe this you'll need to address all problems at once by radically restructuring society.
Meanwhile, the crux for worldview diversificationists is that we are not certain of our own values or how they will change, so it is better to hedge your bets by compromising between many views.
:surprised:How can I put footnotes on my posts?!?!
Why would research on 'minor' GCRs like the ones mentioned by Arepo be harder than eg AI alignment? My impression is that there is plenty of good research on eg the effects of CO2 on health, the Flynn effect and Kessler syndrome, and I would say it's of much higher quality than extant x-risk research.
Is the argument that they are less neglected?
Brainstorming some concrete examples of what everyday longtermism might look like:

> Alice is reviewing a CV for an applicant. The applicant does not meet the formal requirements for the job, but Alice wants to hire them anyway. Alice visualizes the hundreds of people making a similar decision to hers. She would be ok with hiring this specific applicant, because she trusts her instincts a lot. But she would not trust 100 people in a similar position to make the right choice; ignoring the recruiting guidelines might disadvantage minorities in an illegible way. She decides that it would be best if everyone chose to just follow the procedure, so she forgoes her intuition in favor of better decision making overall.

> Beatrice is offered a job as a Machine Learning engineer, helping the police with automated camera monitoring. Before deciding whether to accept, she seeks out open criticism of that line of work, and tries to imagine some likely consequences of developing that kind of technology, both positive and negative. After weighing them up she realizes that while the work would most likely be positive, there is a plausible chance it would enable really bad outcomes, and she rejects the job offer.

> Carol reads an interesting article. She wants to share it on social media. She could spend some effort paraphrasing the key ideas in the article, or just share the link. She has internalized that spending one minute summarizing key ideas might well be worth a lot of time saved for her friends, who could use her summary to decide whether to read the whole article. Out of habit she summarizes the article as best she can, making it clear who she genuinely thinks would benefit from reading it.
Without entering into too many sensitive details, when I have looked at the output of similar programs I have noticed that I was excited about the career path of 1 out of every 3 participants.
But a) I don't know how much of it was counterfactual, b) when I made the estimate I had an incentive to produce an optimistic answer, and c) it relies on my subjective judgement, which you may not trust.
Also worth noting that I think the raw conversion rate is not the right metric to focus on - the outliers usually account for most of the impact of these programs.
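A quick toy simulation of why the raw conversion rate can mislead here (my own made-up numbers, not data from any real program): if per-participant impact is lognormal, the top few participants routinely account for most of a cohort's total impact.

```python
import numpy as np

rng = np.random.default_rng(42)

def top_share(cohort, k=3):
    """Fraction of the cohort's total impact coming from its top k members."""
    return np.sort(cohort)[-k:].sum() / cohort.sum()

# Simulate 1,000 hypothetical cohorts of 30 participants each, with
# per-participant impact drawn from a lognormal distribution
# (sigma chosen for illustration only, not fit to any real data).
shares = [top_share(rng.lognormal(mean=0.0, sigma=2.0, size=30))
          for _ in range(1000)]

print(f"median share of total impact from the top 3 of 30: "
      f"{np.median(shares):.0%}")
```

Under these assumptions, 10% of participants typically produce well over half the total impact, so a metric like "1 in 3 participants had an exciting career path" says little about the outliers that actually drive the value.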
The citation is a link: (Grace, 2020)
Just in case: https://aiimpacts.org/discontinuous-progress-in-history-an-update/