
One area that has often been discussed as an EA or meta-EA cause area is rationality development, whether that be in the form of "raising the sanity waterline", providing relevant training to certain key people in order to empower them, or something else entirely.

What aspect of this strikes you as most interesting or relevant? What would you be most excited about seeing out of a new group or project in this area?

5 Answers

I've become pretty pessimistic about rationality improvement as an intervention, especially to the extent that it involves domain-general techniques with a large subjective element and substantial placebo effects and participant costs. Basically, most interventions of this sort haven't worked, though they induce tonnes of biases that allow them to display positive testimonials: placebo effects, liking the instructors, having a break from work, getting to think about interesting stuff, the branding of techniques, choice-supportive bias, biased sampling of testimonials, and so on.

The nearest things that I'd be interested in would be:

1. domain-specific training that delivers skills and information from trained experts in a particular area, such as research;
2. freely available online reviews of the literature on rationality interventions, similar to what gwern does for nootropics;
3. new controlled experiments on existing rationality programs such as Leverage and CFAR; and
4. training in risk assessment for high-risk groups like policymakers.

More thorough evaluation of productivity techniques, particularly those based on some form of group commitment/incentive that couldn't easily be replicated by a lone practitioner.

One aspect of this: Which forms of "standard" business training actually seem to work well? I've heard good things about Toastmasters; what about Getting Things Done training? Cialdini's courses on influence? People have paid a lot of money for these things for a long time, which is no guarantee of efficacy but still hints that they should be investigated.
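If someone did investigate these, the core analysis is straightforward. Below is a minimal sketch of how a randomized evaluation of a training program might be scored; the data, sample sizes, and effect size are hypothetical placeholders, not results from any actual study:

```python
# Minimal sketch of scoring a randomized evaluation of a training
# program. All numbers here are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Change in some productivity metric (post minus pre) per participant.
trained = rng.normal(loc=0.3, scale=1.0, size=60)   # took the course
control = rng.normal(loc=0.0, scale=1.0, size=60)   # waitlist control

# Welch's t-test: does the course beat the waitlist?
t, p = stats.ttest_ind(trained, control, equal_var=False)

# Cohen's d, so the effect is comparable across studies.
pooled_sd = np.sqrt((trained.var(ddof=1) + control.var(ddof=1)) / 2)
d = (trained.mean() - control.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```

The waitlist control matters here precisely because of the testimonial biases mentioned above: comparing course-takers only against their own past selves would fold the placebo and novelty effects into the measured benefit.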

I would like to see efforts at calibration training for people running EA projects. This would help push those projects in a more strategic direction by having people lay out predictions about outcomes at the outset, much as Open Phil does with its grants.
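To illustrate what tracking this might look like: once predictions are recorded, calibration can be scored with Brier scores and bucketed hit rates. A minimal sketch, with hypothetical prediction data:

```python
# Minimal sketch of scoring calibration on project predictions.
# Each record: (stated probability, whether the outcome happened).
from collections import defaultdict

predictions = [  # hypothetical data
    (0.9, True), (0.8, True), (0.8, False), (0.6, True),
    (0.5, False), (0.3, False), (0.2, True), (0.1, False),
]

# Brier score: mean squared error of the stated probabilities
# (0 is perfect; always saying 50% scores 0.25).
brier = sum((p - happened) ** 2 for p, happened in predictions) / len(predictions)

# Bucket by stated confidence to see where over/underconfidence lives.
buckets = defaultdict(list)
for p, happened in predictions:
    buckets[p].append(happened)

print(f"Brier score: {brier:.3f}")
for p in sorted(buckets):
    hits = buckets[p]
    print(f"  said {p:.0%}: happened {sum(hits)}/{len(hits)} times")
```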

I am in the process of reading The Righteous Mind by Jonathan Haidt, and I think the theories and research on moral psychology he discusses could be applied to this topic to produce some interesting research and studies!

I'd love to see a tool that people enjoy using which testably teaches rationality.

Perhaps an app or novel which leaves people making better decisions on common tests of bias.

I would be particularly interested in seeing this in regard to elections. How do you teach people to vote more in line with their own interests?
