Thank you so much for this extremely important and brilliant post, Andrew! I really appreciate it.
I completely agree that the degree to which autonomous general-capabilities research is outpacing alignment research needs to be reduced (most likely via recruitment and social opinion dynamics), and that this seems neglected relative to its importance.
I wrote a post on a related topic recently, and it would be really great to hear what you think! (https://forum.effectivealtruism.org/posts/juhMehg89FrLX9pTj/a-grand-strategy-to-recruit-ai-capabilities-researchers-into)
Thank you so much for this extremely important and helpful guide on EA messaging, Julia! I really appreciate it, and hope all EAs read it asap.
Social opinion dynamics seem to have the property where some action (or some inaction) can cause EA to move into a different equilibrium, with a potentially permanent increase or decrease in EA’s outreach and influence capacity. We should therefore tread carefully.
Unfortunately, social opinion dynamics are also extremely mysterious. Nobody knows precisely what action or what inaction possesses the risk of permanentl... (read more)
Thanks so much for this extremely important and well-written post, Theo! I really appreciate it.
My main takeaway from this post (among many takeaways!) is that EA outreach and movement-building could be significantly better. I’m not yet sure of the clear next steps, but perhaps outreach could be even more individualized and epistemically humble.
One devil’s-advocate point on your point that “while it may be true that there are certain characteristics which predict that people are more likely to become HEAs, it does not follow that a larger EA community made... (read more)
Thanks so much for your kind words on our post, Nick! I really appreciate it.
One of the non-governmental barriers to relocation for international folks is the general inaccessibility of relevant information. Even something as basic as finding an apartment to rent in a foreign city can present quite a high barrier (and certainly a perceived barrier) to relocation.
This is such an incredibly useful resource, Vael! Thank you so much for your hard work on this project.
I really hope this project continues to go strong!
Thank you so much for this extremely helpful suggestion, Linch! I really appreciate it.
A thought: Especially when enabled by technology, people are very capable. In theory, a person can easily offset the negative impact of their greenhouse gas emissions and have a lot of time and resources left over to pursue positive impact. For example, by donating a fraction of their money to carbon offsetting projects and not having a polluting lifestyle, the median American can easily have a net reducing effect on global greenhouse gas emissions throughout their lifetime. Also, I think the median person in the world can in theory achieve a net reducing e... (read more)
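A toy back-of-envelope version of the offsetting claim (all three figures below are rough assumptions for illustration, not precise data):

```python
# Back-of-envelope sketch: roughly what would it cost a median American
# to offset their annual emissions? All figures are assumed, not sourced.
annual_emissions_tons = 16    # assumed per-capita US CO2 emissions, tons/year
offset_price_per_ton = 15     # assumed offset price, USD per ton
median_income = 40_000        # assumed median US personal income, USD/year

annual_offset_cost = annual_emissions_tons * offset_price_per_ton
share_of_income = annual_offset_cost / median_income

print(annual_offset_cost)         # 240 (USD/year under these assumptions)
print(round(share_of_income, 3))  # 0.006, i.e. well under 1% of income
```

Under these assumptions, full offsetting costs well under one percent of income, which is the sense in which the net-reducing lifestyle seems "easy" in theory.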
That makes sense! Shoes are probably more expensive than malaria nets. But it might still be a better intervention point than antivenom + improving diagnosis + increasing people's willingness to go to the hospital.
What about something they can wear on their leg to prevent the snakebite?
Thank you so much for your kind words, Max! I'm extremely grateful. I completely agree that if (a big if!) we could identify and recruit AI capabilities researchers who could quickly "plug in" to the current AI safety field, and ideally could even contribute novel and promising directions for "finding structure/good questions/useful framing", that would be extremely effective. Perhaps a maximally effective use of time and resources for many people. I also completely agree that experiential learning on how to talent-scout and recruit AI capabiliti... (read more)
Thank you so much for your feedback on my post, Peter! I really appreciate it. It seems like READI is doing some incredible and widely applicable work! I would be extremely excited to collaborate with you, READI, and people working in AI safety on movement-building. Please keep an eye out for a future forum post with some potential ideas on this front! We would love to get your feedback on them as well. (And thank you very much for letting me know about Vael's extremely important write-up! It is brilliant, and I think everyone in AI safety should read it.)
I think Elon Musk said it in a documentary about AI risks. (Is this correct?)
Quoted from an EA Forum post draft I'm working on: “Humans are currently the smartest species on the planet. This means that non-human animals are completely at our mercy. Cows, pigs, and chickens live atrocious lives in factory farms, because humans’ goal of eating meat is misaligned with these animals’ well-being. Saber-toothed tigers and mammoths were hunted to extinction, because nearby humans’ goals were misaligned with these animals’ survival.
But what if, in the future, we were not the smartest species on the planet? AI experts predict that it’s basica... (read more)
Thank you very much for the constructive criticisms, Max! I appreciate your honest response, and agree with many of your points. I am in the process of preparing a (hopefully) well-thought-out response to your comment.
Thank you so much Jay for your kind words!
If you happen to think of any suggestions, any blind spots of the post, or any constructive criticisms, I'd be extremely excited to hear them! (Either here or in private conversation, whichever you prefer.)
Thanks so much for your comment, Owen! I really appreciate it. I was under the impression (perhaps incomplete!) that your definition of "phase 2" was "an action whose upside is in its impact," and "phase 1" was "an action whose upside is in reducing uncertainty about what is the highest-impact option for future actions." I was suggesting that we already know that recruiting people away from AI capabilities research (especially into AI safety) has a substantially high impact, and that this impact per unit of time is likely to improve with experience. So pondering without experientially trying it is worse both for optimizing its impact and for reducing uncertainty.
The best use of time and resources (in the Phase 2 sense) is probably to recruit AI capabilities researchers into AI safety. Uncertainty is not impossible to deal with, and is extremely likely to improve from experience.
That seems archetypically Phase 1 to me? (There's a slight complication about the thing being recruited to not quite being EA)
But I also think most people doing Phase 1 work should stay doing Phase 1 work! I'm making claims about the margin in the portfolio.
I completely agree with the urgency and the evaluation of the problem.
In case begging and pleading doesn't work, a complementary method is to create a prestige differential between AI safety research and AI capabilities research (i.e., like that between green-energy research and fossil fuels), with the goal of convincing people to move from the latter to the former. See my post for a grand strategy.
How do we recruit AI capabilities researchers to transition into AI safety research? It seems that "it is relatively easy to persuade people to join AI safety i... (read more)
My prior is that one's degree of EA-alignment is pretty transparent. If there are any grifters, they would probably be found out pretty quickly and we can retract funding/cooperation from that point on. Also, people who are at a crossroads of either being EA-aligned or non-EA aligned (e.g., people who want to be a productive member of a lively and prestigious community) could be organizationally "captured" and become EA-aligned, if we maintain a high-trust, collaborative group environment.
A general class of problems for effective altruists is the following: In some domains, there are a finite number of positions through which high-impact good can be done. These positions tend to be prestigious (perhaps rationally, perhaps not). So, there is strong zero-sum competition for these positions. The limiting factor is that effective altruists face steep competition for these positions against other well-intentioned people who are just not perfectly aligned on one or more crucial issues. One common approach is to really help the effective altru... (read more)
So one alternative is to have a preprint server like arXiv (where papers can be posted) that directly serves as a journal, potentially with peer reviews that are also posted. Independent of paper availability to the public, this would also save researchers' time. (Instead of formatting papers to fit the Elsevier guidelines, they could be doing more research or training new researchers.)
What is a lower bound for the maximal counterfactual impact from allocating a couple dozen billion dollars?
Reposting my post: “At what price estimate do you think Elsevier can be acquired?
Could acquiring Elsevier and reforming it to be less rent-seeking be feasible?”
At what price estimate do you think Elsevier can be acquired?
Could acquiring Elsevier and reforming it to be less rent-seeking be feasible?
I think so too! A strong anecdote can directly illustrate a cause-and-effect relationship that is consistent with a certain plausible theory of the underlying system. And correct causal understanding is essential for making externally valid predictions.
My intuition is that the priority for funding criticism of EA/longtermism is low, because there will be a lot of smart and motivated people who (in my opinion, because of previously held ideological commitments; but the true reason doesn’t matter for the purpose of my argument) will formulate and publicize criticisms of EA/longtermism, regardless of what we do.
They can be (deterministic Bayesian updating is just causal inference), but they can also not be (probabilistic Bayesian updating requires a large sample size; also, sampling bias is universally detrimental to accurate learning).
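A toy simulation of the sampling-bias point (the coin model and all numbers here are made up for illustration): with a simple Beta-Binomial update, more data does not rescue you if the observations themselves are biased.

```python
import random

random.seed(0)

true_p = 0.5  # actual heads-probability of the coin

def posterior_mean(n, keep_prob_heads=1.0):
    # Beta(1, 1) prior on the heads-probability; observe n flips, but each
    # heads outcome is only recorded with probability keep_prob_heads
    # (a crude stand-in for sampling bias).
    heads = tails = 0
    for _ in range(n):
        if random.random() < true_p:
            if random.random() < keep_prob_heads:
                heads += 1
        else:
            tails += 1
    return (1 + heads) / (2 + heads + tails)

unbiased = posterior_mean(10_000)
biased = posterior_mean(10_000, keep_prob_heads=0.5)
print(unbiased)  # concentrates near the true value, 0.5
print(biased)    # concentrates near 1/3 instead; no sample size fixes this
```

The biased posterior converges confidently to the wrong answer, which is the sense in which sampling bias is universally detrimental to accurate learning.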
Just to play devil’s advocate:
For many different types of talented people, the harm to the Russian government from their emigration might be overstated (at least the short-term harm), because its economy is disproportionately based on oil and gas. Taxes from citizens’ economic activity are not as important.
But the strong case for open immigration does not require this harm to be true.
It's plausible that, compared to a stable authoritarian nuclear state, an unstable authoritarian nuclear state (or one that has just undergone a coup) could be even worse, in the worst-case scenario and potentially even in expected value.
For a worst-case scenario, consider that if a popular uprising were on the verge of ousting Kim Jong Un, he might desperately nuke who-knows-where or order an artillery strike on Seoul.
Also, if you believe these high-access defectors' interviews, most North Korean soldiers genuinely believe that they can win a war against the U.S. and South Korea.... (read more)
Research on how to minimize the risk of false alarm nuclear launches
Preventing false alarm nuclear launches (as Petrov did) via research on the relevant game theory, technological improvements, and organization theory, and disseminating and implementing this research, could potentially be very impactful.
Facilitate interdisciplinarity in governmental applications of social science
Values and Reflective Processes, Economic Growth
At the moment, governmental applications of social science (where, for example, economists who use the paradigm of methodological individualism are disproportionately represented) could benefit from drawing on other fields of social science that can fill potential blind spots. The theory of social norms is a particularly relevant example. Also, behavioral scientists and psychologists could potentially be very helpful in improving the... (read more)
Increase the number of STEM-trained people, in EA and in general
Economic growth, Research that can help us improve
Research and efforts to increase the number of quantitatively skilled people in general, and targeted EA movement-building efforts aimed at them (e.g., for AI alignment research, biorisk research, and scientific research in general), could potentially be very impactful. Promising levers include incentivizing STEM education at the school and university levels, facilitating immigration of STEM degree holders, and offering STEM-specific guidance via 80,000 Hours and other organizations.
Incentivize researchers to prioritize paradigm shifts rather than incremental advances
Economic growth, Research That Can Help Us Improve
There's a plausible case that societal under-innovation is one of the largest causes (if not the largest cause) of people's suboptimal well-being. For example, scientific research could be less risk-averse/incremental and more pro-moonshots. Interdisciplinary research on how to achieve society's full innovation potential, and movement-building targeted at universities, scientific journals, and grant agencies to incentivize scientific moonshots, could potentially be very impactful.
A fast and widely used global database of pandemic prevention data
Speed is of the essence for pandemic prevention when emergence occurs. A fast and widely used global database could potentially be very impactful. It would be great if events like the early discovery of potential pandemic pathogens, doctors' diagnoses of potential pandemic symptoms, etc. were regularly and automatically uploaded to the database, so that high-frequency algorithms could use it to predict potential pandemic outbreaks faster than people can.
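A minimal sketch of the kind of automated flagging such a database could enable (the counts and threshold below are made up; real surveillance systems use far more sophisticated models):

```python
# Crude outbreak flagging on daily case-report counts: flag any day whose
# count exceeds a multiple of the trailing-window average.
def flag_spikes(daily_counts, window=7, multiplier=3.0):
    flags = []
    for i in range(window, len(daily_counts)):
        baseline = sum(daily_counts[i - window:i]) / window
        if daily_counts[i] > multiplier * max(baseline, 1):
            flags.append(i)
    return flags

# Illustrative data: a stable baseline, then a sudden jump.
counts = [2, 3, 1, 2, 4, 3, 2, 3, 2, 18, 25, 40]
print(flag_spikes(counts))  # indices of the days where counts spike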
Yes, I think these proposals together could be especially high-impact, since people who pass screening may develop mental health issues down the line.
"find an existing youtube studio with some folks who are interested in EA"-> This sounds very doable and potentially quite impactful. I personally enjoy watching Kurzgesagt and they have done EA-relevant videos in the past (e.g., meat consumption).
"But a broader, 80K-style effort to build the EA pipeline so we can attract and absorb more media people into the movement also seems worthwhile." -> I agree!
Thanks so much for these suggestions! I would also really like to see these projects get implemented. There are already bootcamps for, say, pivoting into data science jobs, but having other specializations of statistics bootcamps (e.g., an accessible life-coach level bootcamp for improving individual decision-making, or a bootcamp specifically for high-impact CEOs or nonprofit heads) could be really cool as well.
Thanks for the great big-picture suggestions! Some of these are quite ambitious (in a good way!) and I think this is the level of out-of-the-box thinking needed on this issue.
This idea goes hand-in-hand with a previous post "Facilitate U.S. voters' relocation to swing states." For a project aiming to facilitate relocation to well-chosen parts of the US, it could be additionally impactful to consider geographic voting power as well, depending on the scale of the project.
Thanks so much, Jackson!
I have never published a book, but some EAs have written quite famous and well-written books. In addition to what you suggested, I was thinking "80,000 pages" could organize mentoring relationships for other EAs who are interested in writing a book, writer's circles, a crowdsourced step-by-step guide, etc. Networking in general is very important for publishing and publicizing books, from what I can gather, so any help on getting one's foot in the door could be quite helpful.
Pipeline for podcasts
Crowdsourced resources, networks, and grants may help facilitate EAs' and longtermists' creation of high-impact, informative podcasts.
Reduce meat consumption
Biorisk, Moral circle expansion
Research and efforts to reduce broad meat consumption would help moral circle expansion, pandemic prevention, and climate change mitigation. Perhaps messaging from the pandemic-prevention angle (in addition to the climate change angle and the moral circle expansion angle) may help.
Research into reducing general info-hazards
Researching and disseminating knowledge on how to generally reduce info-hazards could potentially be very impactful. An ambitious goal would be to have an info-hazard section in the training of journal editors, department chairs, and biotech CEOs in relevant scientific fields (although perhaps such training would itself be an info-hazard!)
Simultaneously reliable and widely trusted media
Reliable (in the truth-seeking sense) media seems to not be widely trusted, and widely trusted media seems to not be reliable. Research and efforts to simultaneously achieve both could potentially be very impactful for the political resolution of a broad range of issues. (Ambitious idea: Can EAs/longtermists establish a media competitor?)
Normalize broad ownership of hazmat suits (and of an N-day supply of non-perishable food and water)
If everyone had either worn a hazmat suit at all times or stayed at home for 14 days (especially in the early stages of the COVID-19 pandemic), the pandemic could have been ended. Normalizing, funding, and advocating for broad ownership of hazmat suits and of non-perishable food and water could help prevent future pandemics. This may be more feasible in developed countries than developing countries, but in principle foreign aid/EA funding can make it feasible for developing countries as well.
Can editing efforts be directed to Wikipedia? Or would this not suffice because everyone can edit it?
Influencing culture to align with longtermism/EA
"Everything is downstream of culture." So, basic research and practical efforts to make culture more aligned with longtermism/EA could potentially be very impactful.
Increasing social norms of moral circle expansion/cooperation
Moral circle expansion
International cooperation on existential risks and other impactful issues is largely downstream of social norms of, for example, whether foreigners are part of one's moral circle. Research and efforts to encourage social norms of moral circle expansion and cooperation to include out-group members could potentially be very impactful, especially in relevant countries (e.g., US and China) and among relevant decision-makers.
Global cooperation/coordination on existential risks
Negative relationships between, for example, US and China are detrimental to pandemic prevention efforts, to the detriment of all people. Research on and efforts to facilitate fast, effective, and transparent global cooperation/coordination on pandemic prevention can be very impactful. Movement building on the sheer importance of this (especially among the relevant scientists and governmental decision-makers) would be especially impactful. Perhaps pandemic prevention can be "carved out" in U.S.-China relations? This also applies to other existential risks.
Reducing antibiotic resistance
If say a plague bacterium (maybe there are better examples) became resistant to all available antibiotics and started spreading, it could cause a pandemic like the Black Death. Research on how to behaviorally reduce antibiotic use (e.g., reduce meat consumption, convince meat companies to not use antibiotics, reduce overprescription) and how to develop new antibiotics (AI could help), and advocacy of reducing antibiotic use could potentially be high impact.
Reducing vaccine hesitancy
Even if we have extremely quick development of vaccines for pandemic pathogens, vaccine hesitancy can limit the impact of vaccines. Research and efforts to reduce vaccine hesitancy in general could potentially be high-impact.