Comments

Things that limit my ambition in EA

Did you see the recent EA Dedicates post? Maybe you'd like it.

EA Dedicates

Thank you for naming this. I think you've sketched the distinction very well.

Pretty sure I'll find this useful in future big decisions.

Mike Huemer on The Case for Tyranny

In a 2021 interview, Dwarkesh Patel asks Huemer about The Vulnerable World Hypothesis, and Huemer calls it "the strongest argument for a strong state... for keeping the state".

In the five-minute discussion, Huemer mentions a couple of points from his "The Case For Tyranny" blog post, but he clearly still doesn't have a response he's satisfied with.

Huemer says he recently read Neal Stephenson discussing "distributed monitoring" as a possible solution. He seems interested in the possibility, but he doesn't appear to have thought about it much, and he isn't ready to advocate for it.

https://www.youtube.com/watch?v=--xKsIgv7tE&t=3727s

The totalitarian implications of Effective Altruism

I agree with the following statement, which is well put:

EA needs to find a better way to articulate its relationship with the individual and with personal agency.

I think there are some good examples of this, but they're not sufficiently prominent in the introductory materials.

One I saw recently, from Luke Muehlhauser:

  1. I was born into incredible privilege. I can satisfy all of my needs, and many of my wants, and still have plenty of money, time, and energy left over. So what will I do with those extra resources?
  2. I might as well use them to help others, because I wish everyone was as well-off as I am. Plus, figuring out how to help others effectively sounds intellectually interesting.
  3. With whatever portion of my resources I’m devoting to helping others, I want my help to be truly other-focused. In other words, I want to benefit others by their own lights, as much as possible (with whatever portion of resources I’ve devoted to helping others).

In a not-very-prominent article in the Key Ideas series, Ben Todd writes:

One technique that can be helpful is setting a target for how much energy you want to invest in personal vs. altruistic goals. For instance, our co-founder Ben sees making a difference as the top goal for his career and forgoes 10% of his income. However, with the remaining 90% of his income, and most of his remaining non-work time, he does whatever makes him most personally happy. It’s not obvious this is the best tradeoff, but having an explicit decision means he doesn’t have to waste attention and emotional energy reassessing this choice every day, and can focus on the big picture.

There's also You have more than one goal, and that's fine by Julia Wise.

The totalitarian implications of Effective Altruism

I wasn't convinced by your argument that basic EA principles have totalitarian implications.

The argument given seems too quick, and relies on premises that seem pretty implausible to me, namely:

(a) that EA "seeks to displace all prior traditions and institutions"

(b) that it is motivated by "the goal of bringing all aspects of society under the control of [its] ideology"

Given that this is the weakest part of the piece, I think the title is unfortunate.

The totalitarian implications of Effective Altruism

Thanks for the post. I'll post some quick responses, split into separate comments...


I agree that "do the most good" can be understood in a totalising way. It can naturally be read as either:

(a) do the most good (with your entire life).

(b) do the most good (with whatever fraction of resources you've decided to allocate to altruistic ends).

I read it as (b).

In my experience, people who think there are strong moral arguments for (a) tend to nonetheless think that (b) is a better idea to promote (on pragmatic grounds).

I've long thought it'd be good if introductions to effective altruism made it clearer that:

(i) EA is compatible with both (a) and (b)

(ii) EA is generally recommending (b)

Concave and convex altruism

Thanks for the post.

In the follow-up reading, this link points to the wrong place:

80,000 Hours — A framework for comparing global problems in terms of expected impact

It should point here: https://80000hours.org/articles/problem-framework/#how-to-assess-how-neglected-a-problem-is

On that page there's a brief discussion of the possibility of convex production functions:

There are some mechanisms by which problem areas can see increasing returns rather than diminishing returns. However, we think there are good theoretical and empirical arguments that diminishing returns are the norm, and that returns most likely diminish logarithmically. Increasing returns might hold at very small scales within problem areas, though we’re not even sure about that due to the value of information benefits mentioned above. (Increasing returns seem more likely to be common within organisations rather than problem areas.)
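As a toy illustration of the distinction the quoted passage is drawing (my own example, not taken from the 80,000 Hours page): a concave returns function such as

$U(x) = \log(1 + x), \qquad U'(x) = \frac{1}{1 + x}$

has marginal returns that shrink as the resources $x$ devoted to a problem grow, whereas a convex one such as

$U(x) = x^2, \qquad U'(x) = 2x$

has marginal returns that grow, which is what "increasing returns" means here.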

An easy win for hard decisions.

To make things even faster: create a bookmark for "doc.new" and give it the name "nd". Then you can just type "nd" in the address bar and press Enter.

An easy win for hard decisions.

Quick way to create a Google Doc—browse to this web address:

doc.new

I've found that having a quick way to create new docs makes me more likely to do so.

(To move your typing focus to the browser address bar, press Cmd+L or Ctrl+L.)

What Are Your Software Needs?

Personally, I'm looking for someone to help me build a simple plugin for the Obsidian note-taking app.

The plugin should generate a list of links to notes that match criteria I specify.

Spec here. If you'd enjoy getting paid to make this for me, please send me a DM.
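To give a sense of the shape of thing I have in mind, here's a rough sketch of such a plugin (the filter criterion and output note name below are placeholders of mine, not the actual criteria from the spec):

```typescript
import { Plugin, TFile } from 'obsidian';

// Rough sketch only: the real matching criteria are described in the spec.
export default class MatchingNotesPlugin extends Plugin {
  async onload() {
    this.addCommand({
      id: 'list-matching-notes',
      name: 'List notes matching my criteria',
      callback: async () => {
        // Placeholder criterion: notes whose path starts with "Projects/".
        const matches: TFile[] = this.app.vault
          .getMarkdownFiles()
          .filter((file) => file.path.startsWith('Projects/'));

        // Write the matches out as a bulleted list of wiki-links in a new note.
        const body = matches.map((file) => `- [[${file.basename}]]`).join('\n');
        await this.app.vault.create('Matching notes.md', body);
      },
    });
  }
}
```

The real plugin would take the criteria from a user-facing setting or query rather than hard-coding them.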
