Interesting post and curriculum. I look forward to hearing about the outcomes of the first run once you evaluate them and get results. My own estimate of the likelihood that this achieves 85% better outcomes than the current method is significantly lower, but I think there's a chance this will be an improvement.
That being said, some points of disagreement.
Thanks Jan! Could you elaborate on the first point specifically? Just from a cursory look at the linked doc, the first three suggestions seem to have few drawbacks to me, and seem to constitute good practice for a charitable movement.
- Set up whistleblower protection schemes for members of EA organisations
- Transparent listing of funding sources on each website of each institution
- Detailed and comprehensive conflict of interest reporting in grant giving
This post is quite informative, but at points seems written in a needlessly harsh tone.
E.g. "If Givewell was serious about welcoming outsiders’ input into what they could do better, they’d work with experts to improve their hiring process. But they’re not a serious organisation, so I suspect they’ll ignore this."
I read this part as venting some anger after being rejected (in a frustrating way!), which is understandable. But it makes it harder for me to place the post more broadly, as I worry that parts may be similarly exaggerated or that the focus on negative parts may omit other parts that would be needed for a representative picture of the application process.
Still, I found this informative and upvoted. I wanted to mention it as it may explain the voting pattern.
"I agree with Lynette Bye that most of the working hours literature is poor—I'm even more skeptical than she is about agenda-driven research on Gilded Age factory workers—and that gaining an impression from anecdotes of top performers is better. "
I am worried about relying on anecdotes of top performers, as this has an obvious selection effect: it neglects the (probably sizeable) group of people who tried stimulant-driven work binges and simply burned out.
This is addressed later, though only hand-wavingly:
"A third reason is that burnout risk might be overrated if most of your impact comes from the small chance of you being a very high performer, perhaps because being 99th percentile is 100+ times better than being 90th percentile. This makes studying the habits of top performers even more useful because the survivorship bias is less important."
First, it seems unattractive to me for EA to become a large group of amphetamine-fueled workaholics with high burnout rates, not even because of optics, but because of the immense suffering of those who will burn out.
Second, this neglects the question of how many of the high-impact performers would have been high-impact absent amphetamines or excessive working hours.
Third, it strikes me as implausible that "99th percentile is 100+ times better than being 90th percentile" for the target groups of "operations, entrepreneurship, or community-building". I did a bit of community-building myself, and would be very surprised if, for a community-builder, adding 20 hours of work a week even approximated the value of the first 40 hours spent on community-building, and honestly shocked if it exceeded that value by a factor of 100.
Lastly and most importantly, it is entirely unclear to me how the "small chance of being a very high performer" relates to the "chance of burnout". It seems entirely plausible to me that the chance of my becoming Erdős-like because I take stimulants and work a ton is thousands of times smaller than the chance that I'll burn out because I take stimulants and work a ton.
I also generally think that health-related advice that goes against widely held priors should at least attempt to quantify risks and benefits with actual numbers, rather than hand-waving.
While agency is often invoked as a crucial step in an AI or AGI becoming dangerous, I often find that pitches for AI safety oscillate between a very deflationary sense of agency that does not ground worries well (e.g. "able to represent some model of the world, plan, and execute plans") and more substantive accounts of agency (e.g. "able to act upon a wide variety of objects, including other agents, in a way that can be flexibly adjusted as it unfolds based on goal-representations").
I'm generally unsure whether agency is a useful term for the debate, at least when engaging with philosophers, as it comes with a lot of baggage that is not relevant to AI safety.
Most liberals and libertarians identify with non-consequentialist ethics. Consequentialism is sometimes (often?) seen as an antagonist or threat to liberalism or libertarianism. Sometimes I worry that the strong connection of Effective Altruism to consequentialist ethical positions serves as a hindrance in popularizing it among modern liberals and libertarians.
Do you agree with this assessment? Do you think this can change? In what ways would you like to see consequentialists engage with liberal or libertarian ideas? In what ways can we make liberals or libertarians engage more with consequentialist ideas?
What advice do you have for teaching EA courses in an academic context (esp. philosophy)? Besides the Ethics projects, which parts of your classes on the topic do you think are most successful or most popular?
Does anyone have the link to the economists' guesses MacAskill refers to? I don't have a copy of Doing Good Better around, so I can't check myself.
Also, does anyone know whether demand-independent subsidies are factored in? I would expect the expected value to be lower when subsidies allow producers to produce below the "production/world market price", as they could easily export whatever is not consumed locally (as some EU countries do).
Thanks for the post. This issue regularly arises in our local EA group (mainly due to me desperately grasping at straws to justify my carnivorous ways), and it is surprisingly hard to get good information on the topic. So far I only knew the "Does Vegetarianism Make a Difference" post, which is well written but seems a bit light on the economics side, with no peer-reviewed articles or analyses cited, as far as I remember.
I'm confused as to whether the character of the project is (1) An epistemic project to make economics research more accessible and transparent or (2) A political project to promote specific areas of economic research that we believe are not accurately represented in current consensus, possibly in the hope of accelerating economic system change.
This announcement is giving me (1) vibes, whereas the newsletter is giving me (2) vibes.
Personally, I share Harrison's concerns. I think if the project is (2), these concerns are much more pressing than if the project is (1), as I expect a washout effect as more topics get added to correct for what may be biases of the founders. But based on the website, I am relatively confident that the project is (2) - the website specifies wanting to accelerate a "paradigm shift", and prominently displays a quote about the problematic nature of western capitalism.
To give just two examples illustrating my concern with the newsletter.
I don't want to be overly critical - I am glad this project exists, and am happy to see more accessible and transparent economic data. But I want to highlight that there may be significantly higher value in the project taking a neutral approach to economic schools and systems, instead of following a line of thought or narrative that the founders (maybe correctly) take to be the right one.
Edited to reflect a closer look at the website.