MichaelA

I’m Michael Aird, a Senior Research Manager at Rethink Priorities and guest fund manager at the Effective Altruism Infrastructure Fund. Opinions expressed are my own. You can give me anonymous feedback at this link.

With Rethink, I'm mostly focused on helping lead our AI Governance & Strategy team. I also do some nuclear risk research, give input on our Generalist Longtermism team's work, and do random other stuff.

Previously, I did a range of longtermism-y and research-y things as a Research Scholar at the Future of Humanity Institute, a Summer Research Fellow at the Center on Long-Term Risk, and a Researcher/Writer for Convergence Analysis. More on my background here.

I also post to LessWrong sometimes.

If you think you or I could benefit from us talking, feel free to message me! For people interested in doing EA-related research/writing, testing their fit for that, "getting up to speed" on EA/longtermist topics, or writing for the Forum, I also recommend this post.

Sequences

Moral uncertainty
Risks from Nuclear Weapons
Improving the EA-aligned research pipeline

Comments

Will the coronavirus pandemic advance or hinder the spread of longtermist-style values/thinking?

Quick take: Seems to have clearly boosted the prominence of biorisk stuff, and in a way that longtermism-aligned folks were able to harness well to promote interventions, ideas, etc. that are especially relevant to existential biorisk. I think it probably also on net boosted longtermist-/x-risk-style priorities/thinking more broadly, but I haven't really thought about it much.

How many people have heard of effective altruism?

Thanks for this post!

I think there's a typo here:

We also found sizable differences between the percentage of Republicans (4.3% permissive, 1.5% stringent) estimated to have heard of EA, compared to Democrats (7.2% permissive, 2.9% stringent) and Independents (4.3% permissive, 1.5% stringent). [emphasis added]

It looks like the numbers for Republicans were copy-pasted for Independents? The text implies the numbers should be very different, yet they're identical; and if those are indeed the correct numbers, it seems weird that the US adult population estimates would be much closer to the Democrat estimates than to the Republican and Independent estimates.[1]

[I work at Rethink Priorities, but on a different team, and I read this post and left this comment just for my own interest.]

  1. ^

    The total population estimate is "We estimate that 6.7%[2] of the US adult population have heard of effective altruism using our permissive standard and 2.6% according to our more stringent standard."

    So the current text suggests the percentages in the total population were slightly lower than in Democrats but much higher than in Republicans and Independents. This could make sense if there are notably more US Democrats than US Republicans and US Independents put together, but I doubt that that's the case in the US population?

    It seems very plausible that the sample included far more Democrats than Republicans+Independents. But I assume your weighting procedure to get US adult population estimates should adjust things so that overrepresentation of Democrats in the sample doesn't distort estimates for the population?
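
    For concreteness, here's a rough back-of-the-envelope sketch. The party shares below are assumed purely for illustration (they're not the survey's actual weights, which also adjust for other demographics), but with anything like even party shares, a share-weighted average of the quoted subgroup estimates lands well below the reported 6.7%:

```python
# Back-of-the-envelope check with assumed, illustrative party shares
# (NOT the survey's actual weighting, which adjusts for several demographics).
party_share = {"Democrat": 0.28, "Republican": 0.28, "Independent": 0.44}   # assumed shares
permissive_pct = {"Democrat": 7.2, "Republican": 4.3, "Independent": 4.3}   # % as quoted above

weighted_avg = sum(party_share[p] * permissive_pct[p] for p in party_share)
print(f"{weighted_avg:.1f}%")  # ~5.1%, noticeably below the reported 6.7% population estimate
```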

    (I wrote this chunk of text before I realised the issue was probably just a typo, and then rearranged it.)

     

What We Owe the Past

For what it's worth, I get a sense of vagueness from this post: I don't feel I have a strong understanding of what specific claims are being made, and I predict that different readers will spot or interpret different claims in it.

I think attempting to provide a summary of the key points in the form of specific claims and arguments for/against them would be a useful exercise, to force clarity of thought/expression here. So what follows is one possible summary. Note that I think many of the arguments in this attempted summary are flawed, as I'll explain below.

"I think we should base our ethical decision-making in part on the views that people from the past (including past versions of currently living people) would've held or did hold. I see three reasons for this:

  1. Those past people may have been right and we may be wrong
  2. Those past people's utility matters, and our decisions can affect their utility
  3. A norm of respecting their preferences could contribute to future people respecting our preferences, which is good from our perspective"

I think (1) is obviously true, and it does seem worth people bearing it in mind. But I don't see any reason to think that people on average currently under-weight that point - i.e., that people pay less attention to past views than they should given how often past views will be better than present views. I also don't think that this post provided such arguments. So I don't think that merely stating this basic point seems very useful. (Though I do think a post providing some arguments or evidence on whether people should change how much or when they pay attention to past views would be useful.) 

I think (2) is just false, if by utility we have in mind experiences (including experiences of preference-satisfaction), for the obvious reason that the past has already happened and we can't change it. This seems like a major error in the post. Your footnote 1 touches on this but seems to me to conflate arguments (2) and (3) in my above attempted summary. 

Or perhaps you're thinking of utils in terms of whether preferences are actually satisfied, regardless of whether people know or experience that and whether they're alive at that time? If so, then I think that's a pretty unusual form of utilitarianism, it's a form I'd give very little weight to, and that's a point that it seems like you should've clarified in the main text.

I think (3) is true, but to me it raises the key questions "How good (if at all) is it for future people to respect our preferences?", "What are the best ways to get that to happen?", and "Are there ways to get our preferences fulfilled that are better than getting future people to respect them?" And I think that:

  • It's far from obvious that it's good for future people to respect present-people-in-general's preferences.
  • It's not obvious but more likely that it's good for them to respect EAs' preferences.
  • It's unlikely that the best way to get them to respect our preferences is to respect past people's preferences to build a norm (alternatives include e.g. simply writing compelling materials arguing to respect our preferences, or shifting culture in various ways).
  • It's likely that there are better options for getting our preferences fulfilled (relative to actively working to get future people to choose to respect our preferences), such as reducing x-risk or maybe even things like pursuing cryonics or whole-brain emulation to extend our own lifespans.

So here again, I get a feeling that this post: 

  • Merely flags a hypothesis in a somewhat fuzzy way
  • Implies confidence in that hypothesis and in the view that this means we should spend more resources fulfilling or thinking about past people's preferences
  • But doesn't really make this explicit enough or highlight what seem to me relatively obvious counterpoints, alternative options, or further questions

...I guess this comment is written more like a review than like constructive criticism. But what I'd say on the latter front (if you're interested!) is that it seems worth trying to make your specific claims and argument structures more explicit, attempting to summarize all the key things (both because summaries are useful for readers and as an exercise in forcing clear thought), and spending more thought on alternative options and counterpoints to whatever you're initially inclined to propose.

[Note that I haven't read other comments.]

My thoughts on nanotechnology strategy research as an EA cause area

I just want to flag that, for reasons expressed in the post, I think it's probably a bad idea to be trying to accelerate the implementation of APM at the moment, as opposed to doing more research and thinking on whether to do that, and then maybe doing so afterwards if it then appears useful.

And I also think it seems bad to "stand firmly behind" any "aggressive strategy" for accelerating powerful emerging technologies; I think there are many cases where accelerating such technologies is beneficial for the world, but one should probably always explicitly maintain some uncertainty about that and some openness to changing one's mind.

I'd be open to debating this further, but I think basically I just agree with what's stated in the post and I'm not sure which specific points you disagree with or would add. (It seems clear that you see the risks as lower and/or see the benefits as higher, but I'm not sure why.) Perhaps if I hear what you disagree with or would add, I could see if that changes my views or if I then have useful counterpoints tailored to your views. 

(Though it's also plausible I won't have time for such a debate, and in any case some/many other people know more about this topic than me.) 

My thoughts on nanotechnology strategy research as an EA cause area

I strong downvoted this comment. Given that, and that others have done so too (which I endorse), I want to mention that I'm happy to write up some thoughts on why I did so if you want, since I imagine people new-ish to the EA Forum may sometimes not understand why they're getting downvoted.

But in brief:

  • I thought this was a misleading/inaccurate and uncharitable reading of the post
  • I think that the "kill list" part of your comment feels wildly over-the-top/hyperbolic
    • Perhaps you meant it as light-hearted or a joke or something, but I think it's not obvious that that's the case without hearing your tone
  • I think it's just generally not conducive to good discussion for someone to imply, in any way, that their conversational partners may put them on a kill list - that's not a good way to start a productive debate where both sides are open to learning from each other and to seeing whether they want to change their views
  • Less importantly, I also disagree with your view that it's a good move at the moment to try to speed up advanced nanotechnology development.
    • But if you just stated you have that view, I'd probably not downvote and instead just leave a comment disagreeing. 
    • And that'd certainly be the case if you stated that view but also indicated an openness to having your view changed (as I believe the post did), explained why you have your view in a way that sounds intended to inform rather than persuade, and ideally also attempted to summarise your understanding of the post's argument or where you disagree with it. I think that's a much better way to have a productive discussion.
    • For that reason, I didn't downvote the parent comment, even though my current guess is that the strategy you're endorsing there is a bad one from the perspective of safeguarding and improving the world & future.

Propose and vote on potential EA Wiki articles / tags [2022]

Oh, we do have https://forum.effectivealtruism.org/topics/marketing, so it's probably not worth adding a new tag just for Digital marketing.

The author or readers might also find the following interesting:

  1. ^

    That said, fwiw, since I'm recommending Holden's doc, I should also flag that I think the breakdown of possible outcomes that Holden sketches there isn't a good one, because:

    • He defines utopia, dystopia, and "middling worlds" solely by how good they are, whereas "paperclipping" is awkwardly squeezed in with a definition based on how it comes about (namely, that it's a world run by misaligned AI). This leads to two issues in my view:
      • I think the classic paperclipping scenario would itself be a "middling" world, yet Holden frames "paperclipping" as a distinct concept from "middling" worlds.
      • Misaligned AI actually need not lead to something approx. as good/bad as paperclipping; it could instead lead to dystopia, or could maybe lead to utopia, depending on how we define "alignment" and depending on metaethics. 
    • There's no explicit mention of extinction.
      • I think Holden is seeing "paperclipping" as synonymous with extinction?
      • But misaligned AI need not lead to extinction.
      • And extinction is "middling" relative to utopia and dystopia.
      • And extinction is also very different from some other "middling" worlds according to many ethical theories (though probably not total utilitarianism).

Thanks for this post. I upvoted this and think the point you make is important and under-discussed. 

That said, I also disagree with this post in some ways. In particular, I think the ideal version of this post would pay more attention to:

I see these as important oversights, rather than areas where the post makes explicit false claims. E.g., the post does acknowledge at the top that it assumes total utilitarianism, but it still seems to assume perfect confidence in total utilitarianism and to frame that as reasonable rather than just "for the sake of argument", and I think that makes the post both less valuable and perhaps more misleading than it could be.

But also, tbc, it's fair enough to write a post that moves conversations forward in some ways but isn't perfect!

(I haven't read the comments, so maybe much of this is covered already.)
