Wiki Contributions


Reasons for and against posting on the EA Forum

Also, I think there's a third way that this drawback might not apply

Yeah, I thought about that and meant it to be included (somewhat sloppily) in the "closely aligned" proviso.

Or like shifting your beliefs and arguments in worse ways to match the incentives on the Forum?

Or shifting your attention.

I think things like upvotes and comments here provide multiple incentive gradients that seem potentially harmful. For example, I think based on a vague gestalt impression that the Forum tends to:

  • Encourage confidence and simplicity over nuance, beyond the margin I'd consider optimal
  • Disproportionately reward critiques and "drama" of a certain sort
  • Discourage highly technical content
  • Encourage familiar content and content areas

Many of these claimed problems are very understandable and seem hard to avoid in this kind of setting. People like things they're familiar with (loosely speaking); understanding and evaluating highly technical content either demands more time from readers or outright limits the audience size; if readers don't have the expertise to evaluate and contextualize claims, confident claims seem more informative than cautious ones; etc.

Obviously, my claims here are pretty subjective and fuzzy and others could disagree.

Reasons for and against posting on the EA Forum

This maybe could be assimilated under "opportunity cost", but I think a major potential downside is skewed incentives. To avoid that drawback you'd either have to believe that posters mostly aren't influenced by the mechanics of the Forum or that the mechanics of the Forum are closely aligned with the good.

Epistemic Trade: A quick proof sketch with one example

Nondogmatic Social Discounting seems very loosely related. Could be an entry point for further investigations, references, etc.

The long-run social discount rate has an enormous effect on the value of climate mitigation, infrastructure projects, and other long-term public policies. Its value is however highly contested, in part because of normative disagreements about social time preferences. I develop a theory of "nondogmatic" social planners, who are insecure in their current normative judgments and entertain the possibility that they may change. Although each nondogmatic planner advocates an idiosyncratic theory of intertemporal social welfare, all such planners agree on the long-run social discount rate. Nondogmatism thus goes some way toward resolving normative disagreements, especially for long-term public projects.
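The convergence claim in the abstract resembles a well-known related mechanism (Weitzman-style certainty-equivalent discounting): if you average discount *factors* over uncertain rates, the effective long-run rate is driven toward the lowest rate entertained. The sketch below illustrates that related mechanism, not the paper's own model, and the rates and weights are made up for illustration.

```python
import math

def ce_rate(rates, weights, t):
    """Certainty-equivalent discount rate at horizon t.

    Averages the discount factors exp(-r*t) over possible rates r
    (with the given probability weights), then converts the averaged
    factor back into an annualized rate.
    """
    factor = sum(w * math.exp(-r * t) for r, w in zip(rates, weights))
    return -math.log(factor) / t

# Hypothetical: a planner unsure whether the right rate is 1% or 5%.
# At short horizons the effective rate is near the mean (3%);
# at very long horizons it approaches the lowest rate (1%).
rates, weights = [0.01, 0.05], [0.5, 0.5]
```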

If Bill Gates believes all lives are equal, why is he impeding vaccine distribution?

I think this post unhelpfully mixes general, systemic criticisms around innovation, public goods and IP (which I'm very interested in and sympathetic to) with the "news hook"—COVID vaccines. It strikes me as incredibly unlikely that we'll determine and shift to a better solution in the current crisis. I think the most likely outcome of action here would be to shift us out of the local maximum but not into the global maximum. I think a proposal of an alternative system, an analysis of its costs and benefits relative to the status quo, and a plan for how to get there from here would receive a very different reception.

Allocating Global Aid to Maximize Utility

Somewhat related:

The Limitations of Decentralized World Redistribution: An Optimal Taxation Approach

A centralized scheme of world redistribution that maximizes a border-neutral social welfare function, subject to the disincentive effects it would create, generates a drastic reduction in world consumption inequality, dropping the Gini coefficient from 0.69 to 0.25. In contrast, an optimal decentralized (i.e., with no cross-country transfers) redistribution has a minuscule effect on world income inequality. Thus, the traditional public finance concern about the excess burden of redistribution cannot explain why there is so little world redistribution.

Actual foreign aid is vastly lower than the transfers under the simulated world income tax, suggesting that voluntary world transfers—subject to a free-rider problem—produce an outcome that is consistent with rich countries such as the United States either placing a much lower value on the welfare of foreigners, or else expecting that a very significant fraction of cross-border transfers is wasted. The product of the welfare weight and one minus the share of transfers that are wasted constitutes the implicit weight that the United States assigns to foreigners. We calculate that value to be as low as 1/2000 of the value put on the welfare of an American, suggesting that U.S. policy is consistent with social preferences that place essentially no value on the welfare of the citizens of the poorest countries, or that implicitly assume that essentially all transfers are wasted.
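The Gini figures quoted above (0.69 falling to 0.25) are straightforward to compute for any consumption distribution. A minimal sketch, using made-up stand-in distributions rather than the paper's data:

```python
def gini(values):
    """Gini coefficient of a list of nonnegative values.

    Uses the sorted-rank formula:
        G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    where x_i are the values in ascending order and i runs from 1 to n.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    ranked = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * ranked / (n * total) - (n + 1) / n

# Stylized, hypothetical consumption distributions (not the paper's data):
unequal = [1] * 80 + [10] * 15 + [100] * 5   # most people consume little
flatter = [5] * 80 + [8] * 15 + [20] * 5     # after heavy redistribution
```

Perfect equality gives G = 0, and concentrating everything in one person pushes G toward 1, which is why a drop from 0.69 to 0.25 represents such a large change.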

How do ideas travel from academia to the world: any advice on what to read?

I know one of the examples I've heard of is neoliberalism and the Mont Pelerin society. You may be able to use that as a case study.

Aligning Recommender Systems as Cause Area

From Optimizing Engagement to Measuring Value is interesting and somewhat related:

Most recommendation engines today are based on predicting user engagement, e.g. predicting whether a user will click on an item or not. However, there is potentially a large gap between engagement signals and a desired notion of "value" that is worth optimizing for. We use the framework of measurement theory to (a) confront the designer with a normative question about what the designer values, (b) provide a general latent variable model approach that can be used to operationalize the target construct and directly optimize for it, and (c) guide the designer in evaluating and revising their operationalization. We implement our approach on the Twitter platform on millions of users. In line with established approaches to assessing the validity of measurements, we perform a qualitative evaluation of how well our model captures a desired notion of "value".

Take care with notation for uncertain quantities

Note that significant-figures conventions are a common way of communicating the precision of a number: e.g., writing "1.20" indicates more precision than writing "1.2".

What are examples of EA work being reviewed by non-EA researchers?

In addition to Will MacAskill's critique of functional decision theory (MIRI-originated and intended to be relevant for AI alignment), there's this write-up by someone who refereed FDT's submission to a philosophy journal:

My recommendation was to accept resubmission with major revisions, but since the article had already undergone a previous round of revisions and still had serious problems, the editors (understandably) decided to reject it. I normally don't publish my referee reports, but this time I'll make an exception because the authors are well-known figures from outside academia, and I want to explain why their account has a hard time gaining traction in academic philosophy.
