Meta-suggestion: an in-person, professionally facilitated small workshop, sponsored and hosted by CEA, to build consensus around a solution to the EA project bottleneck - with a view to CEA owning the project.
There are a range of carefully considered, well-informed, and somewhat divergent perspectives on how to solve the EA project bottleneck. At the same time, getting the best possible version of a solution to the EA project bottleneck is likely to be very high value; a sub-optimal version may represent a large counterfactual loss of value.
As an important and complex piece of EA infrastructure, this seems to be a good fit for CEA to own. CEA is well-placed to lend legitimacy and seed funding to such a project, so that it has the ongoing human and financial resources and credibility to be done right.
It also seems quite likely that an appropriate - optimally impactful - solution to this problem would entail work beyond a project evaluation platform (e.g. a system to source project ideas from domain experts; effective measures to dampen overly risky projects). This kind of 'scope creep' would be hard for an independent project to fulfil, but much easier for CEA to execute, given their network and authority.
1. I don't have a definition of x-risk expertise. I think the quality of x-risk expertise is currently ascribed to people based on i) a track record of important contributions to x-risk reduction, and ii) subjective peer approval from other experts.
I think a more objective way to evaluate x-risk expertise would be extremely valuable.
2. Possible signs of a value-misaligned actor:
- if they don't value impact maximisation, they may focus on ineffective solutions, perhaps based on their own interests
- if they don't value high epistemic standards, they may hold beliefs that they cannot rationally justify, and may make more avoidable bad/risky decisions
- if they don't value the far future, they may make decisions that are high risk for the far future
3. See http://effective-altruism.com/ea/1tu/bottlenecks_and_solutions_for_the_xrisk_ecosystem/foo
I also think good judgement and decision-making result from a combination of qualities of the individual and qualities of their social network. Plausibly, a person could make much better decisions if they have frequent truth-seeking dialogue with relevant domain experts who hold divergent views.
The required skills and experience of senior hires vary between fields and roles; senior x-risk staff are probably best placed to specify these requirements in their respective domains of work. You can look at x-risk job ads and the recruitment webpages of leading x-risk orgs for some reasonable guidance. (We are developing a set of profiles for prospective high-impact talent, to give a more nuanced picture of who is required.)
"Exceptionally good judgement and decision-making", for senior x-risk talent, I believe requires:
a thorough and nuanced understanding of EA concepts and how they apply to the context
good pragmatic foresight - an intuitive grasp of the likely and possible implications of one's actions
a conscientious risk-aware attitude, with the ability to think clearly and creatively to identify failure modes
Assessing good judgement and decision-making is hard; it's particularly hard to assess the consistency of a person's judgement without knowing or working with them over at least several months. Some methods:
- Speaking to a person can quickly clarify their level of knowledge of EA concepts and how they apply to the context of their role.
- Speaking to references could be very helpful, to get a picture of how a person updates their beliefs and actions.
- Actually working with them (perhaps via a work trial, partnership or consultancy project) is probably the best way to test whether a person is suitable for the role.
- A critical thinking psychometric test may plausibly be a good preliminary filter, but is perhaps more relevant for junior talent. A low score would be a big red flag, but a high score is far from sufficient to imply overall good judgement and decision-making.
> what would a 'do the most good-er' and an 'Earth optimiser' disagree about?
Great question!
I'm not sure there is any direct logical incompatibility between a 'do the most good-er' and an 'Earth optimiser'. Rather, I think the Earth optimiser frames the challenge of doing the most good in a particular way that tends to give greater consideration to collective impact and long-run indirect effects than is typical in the EA community.
As an Earth optimiser, I am confident that we can substantially improve on our current cause prioritisation methodology, to better account for long-run indirect effects and better maximise collective impact. I expect that modelling the Earth as a complex system, defining top-level systemic goals/preferred outcomes, and working backwards to identify the critical next steps to get there would lead many of us to revise what we currently consider to be top-priority causes.
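To make the 'working backwards' step concrete, here is a minimal sketch in Python of decomposing a top-level goal into candidate next steps. The goal tree, its labels and its structure are entirely hypothetical - invented to show the shape of the exercise, not the conclusions of any actual analysis.

```python
# Hypothetical sketch: working backwards from a top-level systemic goal
# to candidate next steps. The goal tree below is invented for illustration.

goal_tree = {
    "stable, flourishing long-term future": [
        "political systems that prioritise policies well",
        "resilient global risk management",
    ],
    "political systems that prioritise policies well": [
        "evidence-based policy prioritisation tools",
        "incentives for long-term thinking in government",
    ],
}

def next_steps(goal, tree):
    """Recurse from a goal down to its leaf preconditions (candidate next steps)."""
    children = tree.get(goal)
    if not children:  # a leaf: something we can act on directly
        return [goal]
    steps = []
    for child in children:
        steps.extend(next_steps(child, tree))
    return steps

print(next_steps("stable, flourishing long-term future", goal_tree))
# -> ['evidence-based policy prioritisation tools',
#     'incentives for long-term thinking in government',
#     'resilient global risk management']
```

In practice the decomposition itself is the research-intensive part; the code is trivial once the tree is known.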
> I would strongly encourage you to write up one of these areas as a cause profile and compare it to existing ones
When it comes to complex systems change causes, I think a substantial amount of up-front research is typically required to write a remotely accurate cause profile that can be compared meaningfully with direct-impact causes. Complex systems typically seem highly intractable at first glance, but a systems analysis may highlight a set of neglected interventions which, when pursued together, make systems change fairly tractable.
As a good example, I am currently part of the leadership team working on a political systems change research project (set up under EA Geneva). This is a year-long project with a team of (part-time volunteer) researchers. We will do a detailed literature review, a series of events with policy makers, and a series of expert interviews. We hope that this will be enough to evaluate the tractability of this as a cause area, and to locate its priority in relation to other cause areas.
Parts 3 and 5 of the article linked below explain this approach in more detail, although my thinking has moved on a bit since writing it.
There's a good chance that these ideas will be refined and written up collaboratively, in an applied context, as part of GeM Labs' Understanding and Optimising Policy project over the next year. If they are out of scope of this project, I intend to develop them independently and share my progress.
https://docs.google.com/document/d/1DFZ9OAb0g5dtQuZHbAfngwACQkgSpjqrpWWOeMrsq7o/edit?usp=sharing
System change causes are inherently complex and thus often appear highly intractable initially. However, with detailed systems analysis, a set of viable (and perhaps novel) approaches may sometimes be identified that are much more tractable than expected.
For example, the system of animal agriculture and animal product consumption is pretty complex, but ACE has done a great job of identifying charities that are working very effectively on different aspects of that system (cultured meat, corporate advocacy, promoting veganism, etc.).
Analysing a complex system in detail sheds new light on what's broken and why, and can highlight novel and neglected solutions (e.g. cultured meat) that make changing the system far more tractable.
> changing the political system is highly intractable
The political system is very complex, but we don't yet know how tractable changing it is. We are currently researching this at EA Geneva/Geneva Macro Labs. If we find a political systems change strategy that is even moderately tractable, I suspect it would be worth pursuing, due to the magnitude of the likely flow-through effects. If we change the political system to better prioritise policies, this would make changing many other important systems (economic, defence, education, etc.) far more tractable.
> while taking into account externalities (as EAs do)
I think that the current EA methodology for taking impact externalities into account is incomplete. I am not aware of any way to reliably quantify flow-through effects, or to quantify how a particular cause area indirectly affects the impact of other cause areas.
The concept of total impact, if somehow integrated into our cause prioritisation methodology, may help us to account for impact externalities more accurately. I concede that total impact may be too simplistic a concept...
For what it's worth, I currently think the solution requires modelling the Earth as a complex system, clarifying top-level metrics to optimise the system for, and a probability-weighted theory of change for the system as a whole.
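As a rough illustration of what a probability-weighted theory of change might look like once quantified, here is a minimal Python sketch. The pathways, probabilities and impact figures are invented placeholders, not outputs of any real analysis.

```python
# Hypothetical sketch of a probability-weighted theory of change.
# All pathways, probabilities and impact figures are invented placeholders.

from dataclasses import dataclass

@dataclass
class Pathway:
    name: str
    p_success: float          # subjective probability the pathway succeeds
    impact_if_success: float  # contribution to the top-level system metric

pathways = [
    Pathway("improve policy prioritisation", 0.10, 1000.0),
    Pathway("direct cause-level work", 0.60, 50.0),
]

def expected_impact(ps):
    """Expected value of the top-level metric, summed across pathways."""
    return sum(p.p_success * p.impact_if_success for p in ps)

for p in pathways:
    print(f"{p.name}: EV = {p.p_success * p.impact_if_success:.1f}")
print(f"total expected impact: {expected_impact(pathways):.1f}")
```

On these made-up numbers the low-probability systemic pathway dominates; the hard part, of course, is estimating the inputs, which is what the research phase is for.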
> It seems you're trying to set up a distinction between EA focusing on small issues, and systems change focusing on big issues.
I do not mean to say that EA focuses on small issues and systems change focuses on big issues. Rather, I see EA as having a robust (but incomplete) cause prioritisation methodology, and systems change as having a methodology that accounts well for complexity (but neglects cause prioritisation in the context of the system of Earth as a whole).
> This is pretty mystical.
On reflection, I think that conducting systems change projects in appropriate phases, with clear expectations for each phase, is a viable way to synthesise the EA and systems change approaches and cultures. Specifically, a substantial research phase would typically be required to understand the system before one can know which interventions to prioritise.
> the theoretical framework linked to "do the most good" already gives us a way to think about how to choose causes while taking into account inter-cause spillovers
I think impact 'spill-overs' between causes are a good representation of how most EAs think about the relationship between causes and impact. However, I see this as an inaccurate representation of what's actually going on, and I suspect it leads to a substantial misallocation of resources.
I suspect that long-term flow-through effects typically outweigh the immediate observable impact of working on any given cause (because flow-through effects accumulate indefinitely over time). 'Spill-over' suggests that impact can be neatly attributed to one cause or another, but in the context of complex systems (i.e. the world we live in), impact is often more accurately understood as resulting from many factors, including the interplay of a messy web of causes pursued over many decades.
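To show the compounding claim as arithmetic rather than argument, here is a toy Python model. Every number is invented, and the functional form (constant growth rate, constant discount rate) is a gross simplification.

```python
# Toy model (all numbers invented): immediate impact vs compounding
# flow-through effects, both expressed in the same impact units.

def total_impact(immediate, flow_rate, years, discount):
    """Immediate impact plus flow-through effects that compound annually."""
    total = immediate
    flow = immediate * flow_rate             # first year's flow-through effect
    for t in range(1, years + 1):
        total += flow / (1 + discount) ** t  # discounted flow-through in year t
        flow *= 1 + flow_rate                # flow-through compounds
    return total

# A cause with larger direct impact but no flow-through...
print(total_impact(immediate=100, flow_rate=0.00, years=50, discount=0.03))  # 100.0
# ...vs a cause with smaller direct impact but compounding flow-through.
print(total_impact(immediate=30, flow_rate=0.10, years=50, discount=0.03))   # far larger
```

If the growth rate of flow-through effects exceeds the discount rate, the long-run term dominates over a long enough horizon, which is the intuition behind the claim above.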
I see 'Earth optimisation' as a useful concept to help us develop our cause prioritisation methodology to better account for the inherent complexity of the world we aim to improve and for long-run flow-through effects, and thus to help us allocate our resources more effectively as individuals and as a movement.
Would you be able to provide any further information regarding the reasons for not recommending the proposal I submitted for an 'X-Risk Project Database'? Ask: $12,375 for user research, setup, and feature development over 6 months.
Project summary:
Create a database of x-risk professionals and their work, starting with existing AI safety/x-risk projects at leading orgs, to improve coordination within the field.
The x-risk field and its subfields are globally distributed and growing rapidly, yet x-risk professionals still have no simple way to find out about each other's current work and capabilities. This results in missed opportunities for prioritisation, feedback and collaboration, thus slowing progress. To improve visibility and coordination within the x-risk field, and to expedite exceptional work, we will create a searchable database of leading x-risk professionals, organisations and their current work.
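For concreteness, here is a minimal sketch of the kind of record and search the proposed database might support. The field names and example values are my own illustration, not taken from the actual application.

```python
# Illustrative sketch only: one possible record shape and search for the
# proposed database. Field names and example values are placeholders.

from dataclasses import dataclass, field

@dataclass
class Project:
    title: str
    organisation: str              # e.g. a leading AI safety / x-risk org
    leads: list[str]               # professionals responsible for the work
    subfield: str                  # e.g. "AI safety", "biosecurity"
    status: str                    # e.g. "planned", "active", "completed"
    summary: str = ""
    keywords: list[str] = field(default_factory=list)

def search(projects, term):
    """Naive full-text search over title, summary and keywords."""
    term = term.lower()
    return [p for p in projects
            if term in p.title.lower()
            or term in p.summary.lower()
            or any(term in k.lower() for k in p.keywords)]

projects = [Project("Mapping x-risk talent bottlenecks", "Example Org",
                    ["A. Researcher"], "meta", "active",
                    summary="Survey of hiring needs at leading orgs",
                    keywords=["talent", "hiring"])]
print([p.title for p in search(projects, "hiring")])
# -> ['Mapping x-risk talent bottlenecks']
```

A real implementation would of course live behind a web interface with proper indexing; the sketch only shows the shape of the data the proposal describes.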
Application details
p.s. applause for the extensive explanations of grant recommendations!!