All of Peter S. Park's Comments + Replies

"Tech company singularities", and steering them to reduce x-risk

Thank you so much for this extremely important and brilliant post, Andrew! I really appreciate it.

I completely agree that the degree to which autonomous general-capabilities research is outpacing alignment research needs to be reduced (most likely via recruitment and social opinion dynamics), and that this seems neglected relative to its importance.

I wrote a post on a related topic recently, and it would be really great to hear what you think! (https://forum.effectivealtruism.org/posts/juhMehg89FrLX9pTj/a-grand-strategy-to-recruit-ai-capabilities-researchers-into)

EA will likely get more attention soon

Thank you so much for this extremely important and helpful guide on EA messaging, Julia! I really appreciate it, and hope all EAs read it asap.

Social opinion dynamics seem to have the property that some action (or some inaction) can move EA into a different equilibrium, with a potentially permanent increase or decrease in EA’s outreach and influence capacity. We should therefore tread carefully.

Unfortunately, social opinion dynamics are also extremely mysterious. Nobody knows precisely what action or what inaction possesses the risk of permanentl... (read more)

Bad Omens in Current Community Building

Thanks so much for this extremely important and well-written post, Theo! I really appreciate it.

My main takeaway from this post (among many takeaways!) is that EA outreach and movement-building could be significantly better. I’m not yet sure of the clear next steps, but perhaps outreach could be even more individualized and epistemically humble.

One devil’s-advocate point on your point that “while it may be true that there are certain characteristics which predict that people are more likely to become HEAs, it does not follow that a larger EA community made... (read more)

Effective [Re]location

Thanks so much for your kind words on our post, Nick! I really appreciate it.

One of the non-governmental barriers to relocation for international folks is the general inaccessibility of relevant information. Even something as basic as finding an apartment to rent in a foreign city could present quite a high barrier (and certainly a perceived barrier) to relocation.

Transcripts of interviews with AI researchers

This is such an incredibly useful resource, Vael! Thank you so much for your hard work on this project.

I really hope this project continues to go strong!

Should You Have Children Despite Climate Change?

Thank you so much for this extremely helpful suggestion, Linch! I really appreciate it.

Should You Have Children Despite Climate Change?

A thought: Especially when enabled by technology, people are very capable. In theory, a person can easily offset the negative impact of their greenhouse gas emissions and have a lot of time and resources left over to pursue positive impact. For example, by donating a fraction of their money to carbon-offsetting projects and avoiding a highly polluting lifestyle, the median American can easily have a net reducing effect on global greenhouse gas emissions throughout their lifetime. Also, I think the median person in the world can in theory achieve a net reducing e... (read more)
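As a rough illustration of the offsetting arithmetic (the figures here are my own ballpark assumptions, not from the original comment): per-person US greenhouse gas emissions are on the order of 15 tCO2e per year, and carbon offsets are often priced at roughly $10 to $50 per tonne, so fully offsetting would cost on the order of a few hundred dollars per year:

\[
15\ \mathrm{tCO_2e/yr} \times (\$10 \text{ to } \$50\ \mathrm{per\ tCO_2e}) \approx \$150 \text{ to } \$750\ \mathrm{per\ year}
\]

That is affordable for the median American, though offset quality and additionality vary widely.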

Linch (2mo):
You might find posts with the parenting [https://forum.effectivealtruism.org/topics/parenting] tag helpful.
Snakebites kill 100,000 people every year, here's what you should know

That makes sense! Shoes are probably more expensive than malaria nets.

But it might still be a better intervention point than antivenom + improving diagnosis + increasing people's willingness to go to the hospital.

AndrewDoris (2mo):
I suspect it would be easier to convince people who HAVE been bitten by a snake to go to the hospital than it will be to convince people who have not yet been bitten by a snake to constantly wear some kind of protective wraparound shinguards every time they're on the farm. The daily inconvenience level seems high for such a rare event. Even malaria nets are often not used for their intended purpose once distributed, and they seem to me like less of an inconvenience.
Snakebites kill 100,000 people every year, here's what you should know

What about something they can wear on their leg to prevent the snakebite? 

MathiasKB (2mo):
I wondered about this as well. There's no doubt that it would reduce snakebites, but whether it's cost-effective is more difficult to tell. An analyst I spoke to pointed out to me that after all it's still pretty rare to be bitten by a snake. The amount of footwear you'd need to distribute per snakebite prevented is pretty high, and likely pretty expensive.
A grand strategy to recruit AI capabilities researchers into AI safety research

Thank you so much for your kind words, Max! I'm extremely grateful.

I completely agree that if (a big if!) we could identify and recruit AI capabilities researchers who could quickly "plug in" to the current AI safety field, and ideally could even contribute novel and promising directions for "finding structure/good questions/useful framing", that would be extremely effective: perhaps a maximally effective use of time and resources for many people.

I also completely agree that experiential learning on how to talent-scout and recruit AI capabiliti... (read more)

A grand strategy to recruit AI capabilities researchers into AI safety research

Thank you so much for your feedback on my post, Peter! I really appreciate it.

It seems like READI is doing some incredible and widely applicable work! I would be extremely excited to collaborate with you, READI, and people working in AI safety on movement-building. Please keep an eye out for a future forum post with some potential ideas on this front! We would love to get your feedback on them as well.

(And thank you very much for letting me know about Vael's extremely important write-up! It is brilliant, and I think everyone in AI safety should read it.)

[$20K In Prizes] AI Safety Arguments Competition

I think Elon Musk said it in a documentary about AI risks. (Is this correct?)

Arran McCutcheon (2mo):
That's right, he said 'It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road, and so, goodbye anthill.'
[$20K In Prizes] AI Safety Arguments Competition

Quoted from an EA forum post draft I'm working on:

“Humans are currently the smartest species on the planet. This means that non-human animals are completely at our mercy. Cows, pigs, and chickens live atrocious lives in factory farms, because humans’ goal of eating meat is misaligned with these animals’ well-being. Saber-toothed tigers and mammoths were hunted to extinction, because nearby humans’ goal was misaligned with these animals’ survival. 

But what if in the future, we were not the smartest species on the planet? AI experts predict that it’s basica... (read more)

A grand strategy to recruit AI capabilities researchers into AI safety research

Thank you very much for the constructive criticisms, Max! I appreciate your honest response, and agree with many of your points.

I am in the process of preparing a (hopefully) well-thought-out response to your comment.

A grand strategy to recruit AI capabilities researchers into AI safety research

Thank you so much, Jay, for your kind words!

If you happen to think of any suggestions, any blind spots of the post, or any constructive criticisms, I'd be extremely excited to hear them! (Either here or in private conversation, whichever you prefer.)

Longtermist EA needs more Phase 2 work

Thanks so much for your comment, Owen! I really appreciate it.

I was under the impression (perhaps incomplete!) that your definition of "Phase 2" was "an action whose upside is in its impact," and "Phase 1" was "an action whose upside is in reducing uncertainty about what is the highest-impact option for future actions."

I was suggesting that I think we already know that recruiting people away from AI capabilities research (especially into AI safety) has a substantially high impact, and that this impact per unit of time is likely to improve with experience. So pondering without experientially trying it is worse both for optimizing its impact and for reducing uncertainty.

Longtermist EA needs more Phase 2 work

"The best use of time and resources (in the Phase 2 sense) is probably to recruit AI capabilities researchers into AI safety. Uncertainty is not impossible to deal with, and is extremely likely to improve from experience."

That seems archetypically Phase 1 to me? (There's a slight complication about the thing being recruited to not quite being EA)

But I also think most people doing Phase 1 work should stay doing Phase 1 work! I'm making claims about the margin in the portfolio.

Begging, Pleading AI Orgs to Comment on NIST AI Risk Management Framework

I completely agree with the urgency and the evaluation of the problem.

In case begging and pleading doesn't work, a complementary method is to create a prestige differential between AI safety research and AI capabilities research (like that between green-energy research and fossil-fuel research), with the goal of convincing people to move from the latter to the former. See my post for a grand strategy.

How do we recruit AI capabilities researchers to transition into AI safety research? It seems that "it is relatively easy to persuade people to join AI safety i... (read more)

The Vultures Are Circling

My prior is that one's degree of EA-alignment is pretty transparent. If there are any grifters, they would probably be found out pretty quickly, and we could withdraw funding/cooperation from that point on. 

Also, people who are at a crossroads of either being EA-aligned or non-EA aligned (e.g., people who want to be a productive member of a lively and prestigious community) could be organizationally "captured" and become EA-aligned, if we maintain a high-trust, collaborative group environment.

Peter S. Park's Shortform

A general class of problems for effective altruists is the following:

In some domains, there are a finite number of positions through which high-impact good can be done. These positions tend to be prestigious (perhaps rationally, perhaps not). So, there is strong zero-sum competition for these positions. The limiting factor is that effective altruists face steep competition for these positions against other well-intentioned people who are just not perfectly aligned on one or more crucial issues. 

One common approach is to really help the effective altru... (read more)

Peter S. Park's Shortform

So one alternative is to have a preprint server like arXiv (where papers can be posted) that directly serves as a journal, potentially with peer reviews that are also posted. Independent of paper availability to the public, this would also save researchers' time. (Instead of formatting papers to fit the Elsevier guidelines, they could be doing more research or training new researchers.)

Peter S. Park's Shortform

What is a lower bound for the maximal counterfactual impact from allocating a couple dozen billion dollars?

$100 bounty for the best ideas to red team

Reposting my post: “At what price do you estimate Elsevier could be acquired?

Could acquiring Elsevier and reforming it to be less rent-seeking be feasible?”

Peter S. Park's Shortform

At what price do you estimate Elsevier could be acquired?

Could acquiring Elsevier and reforming it to be less rent-seeking be feasible?

david_reinstein (3mo):
My take is:
* It’s a bad system, and maybe there isn't much good infrastructure (innovative employees, legal environment, etc.) in place to make it good within that system.
* I'm also worried that by buying them we would feel less compelled to build it into something better; this system has a decent chance of being disrupted soon (which is what Unjournal [bit.ly/eaunjournal] is trying to do, obviously).
* I wouldn’t want to ‘reward the bad behavior’ and encourage future bad behavior by buying them out.

Caveats:
* This may be a 1x thing, so incentives may not matter.
* I may be biased by my distaste for Elsevier and a misguided fairness concern.
Dave Cortright (3mo):
What problem would this solve? And how does the existence of Sci-Hub change the calculus? https://www.sci-hub.st
Vincent van der Holst (3mo):
I did my bachelor thesis on a company that was acquired by Elsevier for 100 million USD. Elsevier (now called RELX) has a market cap of 60 billion USD. Getting a majority voting position would probably require dozens of billions. The counterfactual impact from allocating a couple dozen billion is much larger, so I think it's neither feasible nor advisable.
Anecdotes Can Be Strong Evidence and Bayes Theorem Proves It

I think so too! A strong anecdote can directly illustrate a cause-and-effect relationship that is consistent with a certain plausible theory of the underlying system. And correct causal understanding is essential for making externally valid predictions.

EA Projects I'd Like to See

My intuition is that the priority for funding criticism of EA/longtermism is low, because there will be a lot of smart and motivated people who (in my opinion, because of previously held ideological commitments; but the true reason doesn’t matter for the purpose of my argument) will formulate and publicize criticisms of EA/longtermism, regardless of what we do.

MichaelPlant (4mo):
I'm not sure about this. People outside EA who have good criticisms might just decide it's not worth writing them up at length - they should just ignore EA and get on with their preferred projects. People inside EA might worry about making themselves unpopular ('getting cancelled') [https://forum.effectivealtruism.org/posts/gx7BEkoRbctjkyTme/democratising-risk-or-how-ea-deals-with-critics-1?commentId=vJRv7JmWjxbyroQNn] and conclude it's not worth the risk.
Caleb Biddulph (4mo):
I disagree somewhat; if we directly fund critiques, it might be easier to make sure a large portion of the community actually sees them. If we post a critique to the EA Forum under the heading "winners of the EA criticism contest," it'll gain more traction with EAs than if the author just posted it on their personal blog. EA-funded critiques would also be targeted more towards persuading people who already believe in the idea, which may make them better. While critiques will probably be published anyway, increasing the number of critiques seems good; there may be many people who have insights into problems in EA but wouldn't have published them due to lack of motivation or an unargumentative nature. Holding such a contest may also convey useful signaling to people in and outside the EA community and hopefully promote a genuine culture of open-mindedness.
Anecdotes Can Be Strong Evidence and Bayes Theorem Proves It

They can be (deterministic Bayesian updating is just causal inference), but they can also not be (probabilistic Bayesian updating requires a large sample size; also, sampling bias is universally detrimental to accurate learning).
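As a sketch of the first clause (the numbers below are my own, purely illustrative): in odds form, Bayes' theorem factors the posterior into prior odds times a likelihood ratio,

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \times \frac{P(H)}{P(\neg H)},
\]

so a single anecdote E is strong evidence for a hypothesis H exactly when E would be much more probable if H were true than if it were false. For example, a likelihood ratio of 50 turns skeptical prior odds of 1:10 into posterior odds of 5:1. Sampling bias undercuts this: if one would hear the story whether or not H holds, the likelihood ratio is close to 1 and the same anecdote barely moves the odds.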

FCCC (4mo):
Yep, I agree. Maybe I should have gone into why everyone puts anecdotes at the bottom of the evidence hierarchy. I don't disagree that they belong there, especially if all else between the study types is equal. And even if the studies are quite different, the hierarchy is a decent rule of thumb. But it becomes a problem when people use it to disregard strong anecdotes and take weak RCTs as truth.
Let Russians go abroad

Just to play devil’s advocate:

For many different types of talented people, the harm to the Russian government from their emigration might be overstated (at least the short-term harm), because its economy is disproportionately based on oil and gas. Taxes from citizens’ economic activity are not as important.

But the strong case for open immigration does not require this harm to be true.

The Future Fund’s Project Ideas Competition

It's plausible that compared to a stable authoritarian nuclear state, an unstable authoritarian nuclear state (or one that has undergone a coup) could be even worse (in the worst-case scenario, and potentially even in expected value). 

For a worst-case scenario, consider that if a popular uprising is on the verge of ousting Kim Jong Un, he may desperately nuke who-knows-where or order an artillery strike on Seoul. 

Also, if you believe these high-access defectors' interviews, most North Korean soldiers genuinely believe that they can win a war against the U.S. and South Korea.... (read more)

The Future Fund’s Project Ideas Competition

Research on how to minimize the risk of false-alarm nuclear launches

Effective Altruism

Preventing false-alarm nuclear launches (as Stanislav Petrov did) via research on the relevant game theory, technological improvements, and organization theory, and then disseminating and implementing this research, could potentially be very impactful.

The Future Fund’s Project Ideas Competition

Facilitate interdisciplinarity in governmental applications of social science

Values and Reflective Processes, Economic Growth

At the moment, governmental applications of social science (where, for example, economists who use the paradigm of methodological individualism are disproportionately represented) could benefit from drawing on other fields of social science that can fill potential blind spots. The theory of social norms is a particularly relevant example. Also, behavioral scientists and psychologists could potentially be very helpful in improving the... (read more)

The Future Fund’s Project Ideas Competition

Increase the number of STEM-trained people, in EA and in general

Economic growth, Research that can help us improve

Research and efforts to increase the number of quantitatively skilled people in general, along with EA movement-building efforts targeted at them (e.g., for AI alignment research, biorisk research, and scientific research in general), could potentially be very impactful. Incentivizing STEM education at the school and university levels, facilitating immigration of STEM degree holders, and offering STEM-specific guidance via 80,000 Hours and other organizations could also help. 

The Future Fund’s Project Ideas Competition

Incentivize researchers to prioritize paradigm shifts rather than incremental advances

Economic growth, Research That Can Help Us Improve

There's a plausible case that societal under-innovation is one of the largest causes (if not the largest cause) of people's suboptimal well-being. For example, scientific research could be less risk-averse/incremental and more pro-moonshot. Interdisciplinary research on how to achieve society's full innovation potential, and movement-building targeted at universities, scientific journals, and grant agencies to incentivize scientific moonshots, could potentially be very impactful.

The Future Fund’s Project Ideas Competition

A fast and widely used global database of pandemic prevention data

Biorisk

Speed is of the essence for pandemic prevention when emergence occurs. A fast and widely used global database could potentially be very impactful. It would be great if events like the early discovery of potential pandemic pathogens, doctors' diagnoses of potential pandemic symptoms, etc., regularly and automatically got uploaded to the database, so that high-frequency algorithms could use it to flag potential pandemic outbreaks faster than people can.

The Future Fund’s Project Ideas Competition

Yes, I think these proposals together could be especially high-impact, since people who pass screening may develop mental-health issues down the line.

The Future Fund’s Project Ideas Competition

"find an existing youtube studio with some folks who are interested in EA"-> This sounds very doable and potentially quite impactful. I personally enjoy watching Kurzgesagt and they have done EA-relevant videos in the past (e.g., meat consumption).

"But a broader, 80K-style effort to build the EA pipeline so we can attract and absorb more media people into the movement also seems worthwhile." -> I agree!

The Future Fund’s Project Ideas Competition

Thanks so much for these suggestions! I would also really like to see these projects get implemented. There are already bootcamps for, say, pivoting into data science jobs, but having other specializations of statistics bootcamps (e.g., an accessible life-coach level bootcamp for improving individual decision-making, or a bootcamp specifically for high-impact CEOs or nonprofit heads) could be really cool as well.

The Future Fund’s Project Ideas Competition

Thanks for the great big-picture suggestions! Some of these are quite ambitious (in a good way!) and I think this is the level of out-of-the-box thinking needed on this issue. 

This idea goes hand-in-hand with a previous post "Facilitate U.S. voters' relocation to swing states." For a project aiming to facilitate relocation to well-chosen parts of the US, it could be additionally impactful to consider geographic voting power as well, depending on the scale of the project.

The Future Fund’s Project Ideas Competition

Thanks so much, Jackson!

I have never published a book, but some EAs have written quite famous and well-written books. In addition to what you suggested, I was thinking "80,000 pages" could organize mentoring relationships for other EAs who are interested in writing a book, writer's circles, a crowdsourced step-by-step guide, etc. Networking in general is very important for publishing and publicizing books, from what I can gather, so any help on getting one's foot in the door could be quite helpful.

Logan Riggs (4mo):
My brother has written several books and currently coaches people on how to publish and market them on Amazon. He would be open to being paid for advice in this area (just DM me). I think the dissemination and prestige are the best arguments so far.
The Future Fund’s Project Ideas Competition

Pipeline for podcasts

Effective altruism

Crowdsourced resources, networks, and grants may help facilitate EAs' and longtermists' creation of high-impact, informative podcasts.

The Future Fund’s Project Ideas Competition

Reduce meat consumption

Biorisk, Moral circle expansion

Research and efforts to broadly reduce meat consumption would help moral circle expansion, pandemic prevention, and climate change mitigation. Messaging from the pandemic-prevention angle (in addition to the climate-change and moral-circle-expansion angles) may help. 

The Future Fund’s Project Ideas Competition

Research into reducing general info-hazards

Biorisk

Researching and disseminating knowledge on how to generally reduce info-hazards could potentially be very impactful. An ambitious goal would be to have an info-hazard section in the training of journal editors, department chairs, and biotech CEOs in relevant scientific fields (although perhaps such training would itself be an info-hazard!).

Tessa (4mo):
yeah, to expand upon this:

Best practices for assessment and management of dual-use infohazards

Biorisk and Recovery from Catastrophe, Values and Reflective Processes

Lots of important and well-intended research, including research into AI alignment and pandemic prevention, generates information which may be hazardous if misused. We would like to better understand how to assess and manage these hazards, and would be interested in funding expert elicitation studies and other empirical work on estimating information risks. We would also be interested in funding work to make organizations, including research labs, publishers, and grantmakers, better equipped to handle dual-use research through offering training and incentives to follow certain best practices.
The Future Fund’s Project Ideas Competition

Simultaneously reliable and widely trusted media

Epistemic institutions

Reliable (in the truth-seeking sense) media seems to not be widely trusted, and widely trusted media seems to not be reliable. Research and efforts to simultaneously achieve both could potentially be very impactful for the political resolution of a broad range of issues. (Ambitious idea: could EAs/longtermists establish a media competitor?)

The Future Fund’s Project Ideas Competition

Normalize broad ownership of hazmat suits (and of an N-day supply of non-perishable food and water)

Biorisk

If everyone had either worn a hazmat suit all the time or stayed at home for 14 days (especially in the early stages of the COVID-19 pandemic), the pandemic would have been over. Normalizing, funding, and advocating for broad ownership of hazmat suits and of non-perishable food and water could help prevent future pandemics. This may be more feasible in developed countries than in developing countries, but in principle foreign aid/EA can make it feasible for developing countries as well.

Greg_Colbourn (4mo):
This would only work for pandemics if literally everyone in the world did it at the same time. I think we'd probably need effective global governance for that (that itself isn't an x-risk in terms of authoritarianism or permanently curtailing humanity's flourishing).
The Future Fund’s Project Ideas Competition

Can editing efforts be directed to Wikipedia? Or would this not suffice because everyone can edit it?

brb243 (4mo):
Yeah, make it accessible and accepted as normal.
agnode (4mo):
I've read that experts often get frustrated with Wikipedia because their work ends up getting undone by non-experts. Also, there probably needs to be financial support and incentives for this kind of work.
The Future Fund’s Project Ideas Competition

Influencing culture to align with longtermism/EA

Effective altruism

"Everything is downstream of culture." So, basic research and practical efforts to make culture more aligned with longtermism/EA could potentially be very impactful.

The Future Fund’s Project Ideas Competition

Increasing social norms of moral circle expansion/cooperation

Moral circle expansion

International cooperation on existential risks and other impactful issues is largely downstream of social norms concerning, for example, whether foreigners are part of one's moral circle. Research and efforts to encourage social norms of moral circle expansion and cooperation that include out-group members could potentially be very impactful, especially in relevant countries (e.g., the US and China) and among relevant decision-makers.

The Future Fund’s Project Ideas Competition

Global cooperation/coordination on existential risks

AI, Biorisk

Negative relationships between, for example, the US and China are detrimental to pandemic prevention efforts, to the detriment of all people. Research on and efforts to facilitate fast, effective, and transparent global cooperation/coordination on pandemic prevention could be very impactful. Movement-building on the sheer importance of this (especially among the relevant scientists and governmental decision-makers) would be especially impactful. Perhaps pandemic prevention can be "carved out" in U.S.-China relations? This also applies to other existential risks.

The Future Fund’s Project Ideas Competition

Reducing antibiotic resistance

Biorisk

If, say, a plague bacterium (maybe there are better examples) became resistant to all available antibiotics and started spreading, it could cause a pandemic like the Black Death. Research on how to behaviorally reduce antibiotic use (e.g., reduce meat consumption, convince meat companies not to use antibiotics, reduce overprescription) and how to develop new antibiotics (AI could help), plus advocacy for reducing antibiotic use, could potentially be high-impact.

The Future Fund’s Project Ideas Competition

Reducing vaccine hesitancy

Biorisk

Even if we achieve extremely quick development of vaccines for pandemic pathogens, vaccine hesitancy can limit their impact. Research and efforts to reduce vaccine hesitancy in general could potentially be high-impact.
