[ Question ]

What posts do you want someone to write?

by Aaron Gertler · 1 min read · 24th Mar 2020 · 48 comments

I really enjoyed "What posts are you planning on writing?"

This is the lazy version, for people who want a post to exist but want someone else to write it. Given that we're all stuck inside anyway, I'm hoping we can use this opportunity to get a lot of writing done (like Isaac Newton!).

So: What are some unwritten posts that you think ought to exist?

If you want something to exist badly enough that you'd pay for it, consider sharing that information! There's nothing wrong with offering money; some really good work has come from small-scale contracting of this kind.


20 Answers

I'd be really interested in reading an updated post that makes the case for there being an especially high (e.g. >10%) probability that AI alignment problems will lead to existentially bad outcomes.

There still isn't a lot of writing explaining the case for existential misalignment risk. And a significant fraction of what's been produced since Superintelligence is either: (a) a rough summary of arguments in Superintelligence, (b) pretty cursory, or (c) written by people who are relative optimists and are in large part trying to explain their relative optimism.

Since I have the (possibly mistaken) impression that a decent number of people in the EA community are quite pessimistic regarding existential misalignment risk, on the basis of reasoning that goes significantly beyond what's in Superintelligence, I'd really like to understand this position a lot better and be in a position to evaluate the arguments for it.

(My ideal version of this post would probably assume some degree of familiarity with contemporary machine learning, and contemporary safety/robustness issues, but no previous familiarity with arguments that AI poses an existential risk.)

More journalistic articles about EA projects.

I don't necessarily mean "written by journalists", though there's been a lot of good journalistic coverage of EA.

I mean "in the style of long-form journalism": Telling an interesting story about the work of a person/organization, while mixing in the origin story, interesting details about the people involved, photos, etc.

Examples of projects I think could get the journalistic treatment:

Governance innovation as a cause area

Many people are working on new governance mechanisms from an altruistic perspective. There are many sub-categories, such as charter cities, space governance, decentralized governance, and the RadicalXChange agenda.

I'm uncertain as to the marginal value in such projects, and I'd like to see a broad analysis that can serve as a good prior and analysis framework for specific projects.

I want a post on how to be a good donor.

Context: I work with a small foundation that asks a lot of questions when we investigate charities. We sometimes worry that we're annoying the charities we work with without providing much value for them or for ourselves, especially since we don't make grants on the same scale as larger foundations. Even when they tell us our questions are helpful/reasonable, they obviously have a strong incentive to make us feel happy and valued. 

Ideal version of this post: Someone goes to a lot of EA orgs, asks them questions related to the above dilemma, and reports the results. 

Other general questions about "what donors should know" would also be neat: How should someone with no special preferences time their donations? How much more valuable is unrestricted than restricted funding? And so on.

A case study of the Scientific Revolution in Britain as an intervention by a small group. This bears on one of the most surprising facts: the huge gap, roughly 1.5 centuries, between the Scientific Revolution and the Industrial Revolution. Could also shed light on the old marginal vs. systemic argument: a synthesis is "do politics, in order to promote nonpolitical processes!"

https://forum.effectivealtruism.org/posts/RfKPzmtAwzSw49X9S/open-thread-46?commentId=rWn7HTvZaNHCedXNi

Defining "management constraints" better.

Anecdotally, many EA organizations seem to think that they are somehow constrained by management capacity. My experience is that this term is used in different ways (for example, some places use it to mean that they need senior researchers who can mentor junior researchers; others use it to mean that they need people who can do HR really well).

It would be cool for someone to interview different organizations and get a better sense of what is actually needed here.

An analysis of how knowledge is constructed in the EA community, and how much weight we should assign to ideas "supported by EA". 

The recent question on reviews by non-EA researchers is an example of that. There might be great opportunities to improve EA intellectual progress.

An AMA from someone who works at a really big foundation that leans EA but isn't quite "EA-aligned" in the same way as Open Philanthropy (e.g. Gates, Rockefeller, Chan/Zuckerberg, Skoll).

I'm interested to hear how those organizations compare different causes, distribute resources between areas/divisions, evaluate the impact of their grantmaking, etc.

"Type errors in the middle of arguments explain many philosophical gotchas: 10 examples"

"CNS imaging: a review and research agenda" (high decision relevance for moral uncertainty about suffering in humans and non humans)

"Matching problems: a literature review"

"Entropy for intentional content: a formal model" (AI related)

"Graph traversal using negative and positive information, proof of divergent outcomes" (neuroscience relevant potentially)

"One weird trick that made my note taking 10x more useful"

More accessible summaries of technical work. Some things I would like summarized:

1. Existential risk and economic growth
2. Utilitarianism with and without expected utility

(You can see my own attempt to summarize something similar to #2 here, as one example.)

"American UBI: for and against"

"A brief history of Rosicrucianism & the Invisible College"

"Were almost all the signers of the Declaration of Independence high-degree Freemasons?"

"Have malaria case rates gone down in areas where AMF did big bednet distributions?"

"What is the relationship between economic development and mental health? Is there a margin at which further development decreases mental health?"

"Literature review: Dunbar's number"

"Why is Rwanda outperforming other African nations?"

"The longtermist case for animal welfare"

"Philosopher-Kings: why wise governance is important for the longterm future"

"Case studies: when has democracy outperform technocracy? (and vice versa)"

"Examining the tradeoff between coordination and coercion"

"Spiritual practice as an EA cause area"

"Tools for thought as an EA cause area"

"Is strong, ubiquitous encryption a net positive?"

"How important are coral reefs to ocean health? How can they be protected?"

"What role does the Amazon rainforest play in regulating the North American biosphere?"

"What can the US do to protect the Amazon from Bolsonaro?"

"Can the Singaporean governance model scale?"

"Is EA complacent?"

"Flow-through effects of widespread addiction"

Negative income taxes > UBI?

A short mathematical demonstration of how negative income taxes compare to UBI in Economics 101 terms.

Here's a thread in an EA group about the topic.
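
To gesture at the kind of demonstration I have in mind, here's a minimal sketch with made-up numbers (the UBI amount, tax rate, and income levels are placeholders, not a policy proposal): under these stylized assumptions, a UBI funded by a flat tax and a negative income tax with the same guarantee and phase-out rate produce the same net-income schedule, so the interesting differences presumably lie elsewhere (financing, labor-supply incentives, administration).

```python
# Illustrative sketch only: a UBI of amount b funded by a flat tax at rate t
# yields the same net income as a negative income tax (NIT) with guarantee b
# and phase-out/tax rate t. All numbers are hypothetical.

def net_income_ubi(gross, b=12_000, t=0.30):
    """Everyone receives b; all gross income is taxed at rate t."""
    return b + (1 - t) * gross

def net_income_nit(gross, b=12_000, t=0.30):
    """Transfer of b phases out at rate t; income above the break-even
    point (b / t) is taxed at rate t."""
    transfer = max(b - t * gross, 0)
    tax = t * max(gross - b / t, 0)
    return gross + transfer - tax

for gross in [0, 10_000, 40_000, 100_000]:
    assert abs(net_income_ubi(gross) - net_income_nit(gross)) < 1e-6
    print(gross, round(net_income_ubi(gross)))
```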

Collating predictions made by particularly big pundits and getting calibration curves for them. Bill Gates is getting a lot of attention now for warning of a pandemic in 2015; but what is his average? (He's a bad example, though, since I expect his advisors to be world-class and to totally suppress his variance.)

If this could be hosted somewhere with a lot of traffic, it could reinforce good epistemics.
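
To illustrate, here's a minimal sketch of the calibration computation, assuming the pundit's predictions have already been collected as (stated probability, outcome) pairs; the function name, binning scheme, and example data are just placeholders.

```python
# Hypothetical sketch: bin a pundit's probabilistic predictions and compare
# each bin's average stated probability to the observed frequency of the
# predicted events coming true.

def calibration_curve(predictions, n_bins=10):
    """predictions: list of (stated_probability, outcome) pairs,
    where outcome is 1 if the predicted event happened, else 0."""
    bins = [[] for _ in range(n_bins)]
    for p, outcome in predictions:
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, outcome))
    curve = []
    for bucket in bins:
        if bucket:
            mean_p = sum(p for p, _ in bucket) / len(bucket)
            freq = sum(o for _, o in bucket) / len(bucket)
            curve.append((mean_p, freq, len(bucket)))
    # A well-calibrated pundit has mean_p close to freq in every bin.
    return curve

# Placeholder data, not real predictions:
print(calibration_curve([(0.9, 1), (0.8, 1), (0.7, 0), (0.2, 0), (0.1, 1)]))
```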

A post about when we should and should not use "lives saved" language in describing EA work.

I find that telling people they can save a life for $5000 often leads to a lot of confusion: Whose life is being saved? What if they die of something else a few months later? Explaining QALYs isn't too hard if you have a couple of minutes, but you often have a lot less time than that.

Is there some shorthand we can use for "giving 50 healthy years, in expectation, across a population" that makes it sound anywhere near as good as simply "saving a life"? How important is it to be accurate on this dimension, vs. simply allowing people to conflate QALY/VSL with "saving a specific person"?
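
(For concreteness, taking the two figures above at face value, the implied arithmetic is roughly:

$$\frac{\$5{,}000 \text{ per life saved}}{\approx 50 \text{ healthy years, in expectation}} \approx \$100 \text{ per healthy year.}$$

Whether "$100 per healthy year" lands better or worse than "$5000 per life" in a short conversation is exactly the kind of thing I'd want the post to address.)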

Should Covid-19 be a priority for EAs?

A scale-neglectedness-tractability assessment, or even a full cost-effectiveness analysis, of Covid as a cause area (compared to other EA causes) could be useful. I'm starting to look into this now – please let me know if it's already been done.

Posts on how people came to their values, how much individuals find themselves optimizing for certain values, and how EA analysis is/isn't relevant. Bonus points for resources for talking about this with other people.

I'd like to have more "Intro to EA" convos that start with, "When I'm prioritizing values like [X, Y, Z], I've found EA really helpful. It'd be less relevant if I valued [ABC] instead, and it seems less relevant in those times when I prioritize other things. What do you value? How/when do you want to prioritize that? How would you explore that?"

I think personal stories here would be illustrative.

A detailed study of hyper-competent ops people. 

What makes these people so competent? What tools and processes do they use to manage information and set priorities? What does the flow of their workday look like: mostly flitting around between tasks, or mostly focused blocks of time? (And so on.)

I care about a lot of different U.S. policy issues and would like to get a sense of their neglectedness and tractability. So I'd love it if someone could do a survey to find out how many people in the U.S. work full time on various issues and how hard it is to get bills passed on them.

When to use quantitative vs qualitative research

MacAskill mentions some considerations here, but the dividing line still feels fuzzy. Sample size is one consideration, but I suspect there are many others, such as the goal of the research (e.g. arguing for the possibility vs the plausibility of some phenomena).

This is relevant to many EA questions, especially those relating to longtermism or disruptive technologies. For instance, this post uses qualitative methods (in-depth case studies) to argue that "an AI which is generally more intelligent than us could take over the world, even if it isn't superintelligent." I'm unsure whether three case studies actually constitute much evidence; in a comment, the author suggests that a higher-n ("quantitative") study would be helpful.

Without a framework for thinking about this, I'm often unsure what I should be learning from qualitative studies, and I don't know when it makes sense to conduct them. (This seems related to the debate between cliometricians and counterfactual narrative historians; some discussion here, page 18.)

Posts investigating/discussing any of the questions listed here. These are questions which would be "valuable for someone to research, or at least theorise about, that the current pandemic in some way 'opens up' or will provide new evidence about, and that could inform EAs’ future efforts and priorities".

If anyone has thought of such questions, please add them as answers to that post.

An example of such a question which I added: "What lessons can be drawn from [events related to COVID-19] for how much to trust governments, mainstream experts, news sources, EAs, rationalists, mathematical modelling by people without domain-specific expertise, etc.? What lessons can be drawn for debates about inside vs outside views, epistemic modesty, etc.?"