Wiki Contributions


Utilitarianism Symbol Design Competition

Interesting, thanks Aaron. This result seems roughly in line with the fraction of EAG attendees who wear EA t-shirts.

Utilitarianism Symbol Design Competition

For what it's worth, this thread reminded me of Joshua Greene arguing that the brand of "utilitarianism" is so bad as to be a lost cause.

Greene suggests "deep pragmatism" for the rebrand.

Utilitarianism Symbol Design Competition

I didn't downvote. For what it's worth, the main negative reaction I had was:

  1. The use of the EA lightbulb as an example of a great symbol. Personally, I've always found it kind of amateurish and cringe. I think mainly because it combines two very tired clichés (a lightbulb to represent "ideas" and a heart to represent "altruism"? Really?!).

I suppose I could also complain that:

  1. The claim that "symbolism is important" is not substantiated. Generically that seems true, but the claim that utilitarianism, the philosophical idea, needs a good/better symbol and/or a flag isn't obvious.

  2. Granting that symbolism is important, running a prize competition on the EA Forum is probably not the best way to get a brilliant symbol. My main concern is that the format disproportionately encourages submissions from amateurs. In logo design, professional designers often encounter clients who believe that a great logo can be whipped up by more or less anyone in a couple of hours on a Sunday afternoon. But no: world-class logos usually take weeks or months of work, drawing on years of specialist training. If I had just $1K to spend, I might look for a talented young designer from a low-ish wage EU country (e.g. Portugal), and ask them to spend a couple of days on it.

The Future of Humanity & The Methods of Ethics: A discussion of Bostrom, Sidgwick and Scheffler (Thursday 22 July, 6:30pm UK)

The salon recording is now available here:

A written summary is below:

We began by considering utilitarianism—particularly Sidgwick's "pleasure as desirable consciousness" hedonism—as a starting point for thinking about what matters. The value and failure modes of attempts at legibility and abstraction were discussed, as were different ideas about what makes a "meaningful" life. While accepting that utilitarian principles have, historically, supported important reforms (such as the decriminalisation of homosexuality), attendees voiced concern about what may be missing from a hedonistic theory of value. There was broad agreement that we'll face major moral and meta-ethical uncertainty for the foreseeable future, and that we need to find ways to act despite that. One participant described giving Prozac to their cat, despite their misgivings about hedonism.

Discussion then turned to Nozick's experience machine, and the idea that it reveals more about our attachment to the status quo than our commitment to "base reality". We discussed how, during a process of gradual change, each step, viewed from the previous step, may seem comprehensible and tractable to moral evaluation. Yet if we try to look directly from the present to the thousandth step down the line, we end up in trouble—facing visions of an alien future that leave us cold. Parents can just about understand their children, but grandparents often struggle to understand their great-grandchildren.

In the last hour, we focussed on the question: how to proceed? There was general agreement that we should try our best to keep options open for future generations, which, as a first cut, suggests an interest in reducing catastrophic and existential risks. Some attendees proposed relating to our best theories of value (including hedonism) as tentative yardsticks, and there was general enthusiasm for focussing on directional improvements on the margin, rather than a highly specified long-term vision. Several attendees expressed interest in the Effective Altruism and Progress Studies communities, and we discussed some challenges of building effective communities when good feedback loops are hard to construct. The forecasting community—including Metaculus, the Good Judgement Project, and Danny Hernandez's work on calibration training—was briefly mentioned. So too was the difficulty of achieving rational social responses to risk—the debacle of COVID-19 suggesting that we have roughly two modes: ignore or obsess.

In closing, we reflected on potential harms associated with exposure to big picture perspectives in general, and utilitarian ideas in particular. Several attendees described acquaintances who have developed deep anxiety over things they cannot control, and who are making big life decisions—such as deciding not to have children—for questionable, anxiety-driven reasons. It was suggested that some contemporary neuroses may be a sign of impartial perspectives taking undue prominence in our culture. If people think that agent-neutral reasons are the only reasons they can justifiably care about, they're going to have a hard time living their lives.

This brought us back to Sidgwick’s "profound problem". If we can believe something is valuable, yet not actually value it, where does this leave us? Perhaps Agnes Callard can help us: for her, aspiration is about the rational, purposive process of learning to value something you don’t already value. Perhaps we should think of "learning to aspire" as a central challenge for the present, and the future.

Betting on the best case: higher end warming is underrepresented in research

Somewhat related: Robert S. Pindyck on The Use and Misuse of Models for Climate Policy.

In short, his take (a) seems consistent with the claim that research and policy attention is being misallocated and (b) suggests a mechanism that might partly explain the misallocation.

Abstract (my emphasis):

In recent articles I have argued that integrated assessment models (IAMs) have flaws that make them close to useless as tools for policy analysis. IAM-based analyses of climate policy create a perception of knowledge and precision that is illusory and can fool policymakers into thinking that the forecasts the models generate have some kind of scientific legitimacy. However, some economists and climate scientists have claimed that we need to use some kind of model for policy analysis and that IAMs can be structured and used in ways that correct for their shortcomings. For example, it has been argued that although we know very little about key relationships in the model, we can get around this problem by attaching probability distributions to various parameters and then simulating the model using Monte Carlo methods. I argue that this would buy us nothing and that a simpler and more transparent approach to the design of climate change policy is preferable. I briefly outline what such an approach would look like.

A few highlights:

I believe that we need to be much more honest and up-front about the inherent limitations of IAMs. I doubt that the developers of IAMs have any intention of using them in a misleading way. Nevertheless, overselling their validity and claiming that IAMs can be used to evaluate policies and determine the SCC can end up misleading researchers, policymakers, and the public, even if it is unintentional. If economics is indeed a science, scientific honesty is paramount.


Yes, the calculations I have just described constitute a “model,” but it is a model that is exceedingly simple and straightforward and involves no pretense that we know the damage function, the feedback parameters that affect climate sensitivity, or other details of the climate–economy system. And yes, some experts might base their opinions on one or more IAMs, on a more limited climate science model, or simply on their research experience and/or general knowledge of climate change and its impact.


Some might argue that the approach I have outlined here is insufficiently precise. But I believe that we have no choice. Building and using elaborate models might allow us to think that we are approaching the climate policy problem more scientifically, but in the end, like the Wizard of Oz, we would only be drawing a curtain around our lack of knowledge.


I have argued that the best we can do at this point is to come up with plausible answers to these questions, most likely by relying at least in part on numbers supplied by climate scientists and environmental economists, that is, utilize expert opinion. This kind of analysis would be simple, transparent, and easy to understand. It might not inspire the kind of awe and sense of scientific legitimacy conveyed by a large-scale IAM, but that is exactly the point.

What would you do if you had half a million dollars?

A post on this topic, discussing the Thiel Fellowship, Entrepreneur First, and other attempts:

What would you do if you had half a million dollars?

  1. In some cases yes, but only when they were working on specific projects that I expected to be legible and palatable to EA funders. Are there places I should be sending people who I think are very promising to be considered for very low-strings personal development / freedom-to-explore type funding?
What would you do if you had half a million dollars?

A thought that motivates my other comments on this thread: reviewing my GWWC donations a while ago, I realised that if I suddenly had lots of money, one of the first questions I would ask myself is "what friends and acquaintances should I fund?". To an outsider this kind of thing can look like rather non-altruistic nepotism, but from the inside it seems like betting on the opportunities that you are unusually able to see. I think it actually is the latter, at least sometimes. My impression is that for-profit investors do a lot of "nepotistic investing", but I suspect that values like altruism and impartiality and transparency (as well as constraints of charitable legal status) make EA funders reluctant to go hard on this method.

What would you do if you had half a million dollars?

I would consider starting some kind of "major achievement" prize scheme.

Roughly, the idea I have in mind is to give large no-strings-attached lump sums to individuals who have:

(a) done exceptionally valuable work at non-trivial personal cost (e.g. massive salary sacrifice)

(b) a high likelihood of continuing to do extremely valuable work.

The aims would be:

(i) to help such figures become personally "set for life" in the way that successful startup founders sometimes are.

(ii) to improve the personal incentive structure faced by people considering EA careers.

This idea is very half-baked. A couple of quick comments:

  1. On (i): I'm surprised how often I meet people doing very valuable work who seem to have significant personal finance issues that (a) distract them and (b) mean that they don't buy time aggressively enough. Perhaps more importantly, I suspect that (c) personal financial security enables people to take riskier bets on their inside views, in a way that is valuably generative and/or error-correcting; also that (d) people who are doing very valuable work often have lists of good ideas for turning $$$ into good outcomes, so giving these people greater financial security would be one merit-based means of increasing the number of EA-sympathetic angel investors.

  2. On (ii): I have no idea if this would actually work out well. In theory, it'd make the personal incentives look a bit more like they do in for-profit entrepreneurship, i.e. small chance of large financial upside if you do well. In practice I could imagine a well known prize scheme causing various sorts of trouble.

  3. E.g. I see major PR risks to this kind of thing ("effective altruists conclude that the most effective use of money is to make themselves rich") and internal risk of resentment or even corruption scandals. I've not looked into how science prizes fare on this kind of thing.

  4. On (i): one possible counter is that IIRC there's some evidence for a "personal wealth sweet spot" in entrepreneurship. I think the story is supposed to be that too little financial security means you can't afford the risks, but too much security (both financial and status) makes you too complacent and lazy. My guess is that the complacency thing happens for many but not all people. Maybe one can filter for this.
