MichaelA

I’m Michael Aird, a Summer Research Fellow with the Center on Long-Term Risk (though I don’t personally subscribe to suffering-focused views on ethics). During my fellowship, I’ll likely do research related to reducing long-term risks from malevolent actors. Opinions expressed in my posts or comments should be assumed to be my own, unless I indicate otherwise.

Before that, I did existential risk research & writing for Convergence Analysis and grant writing for a sustainability accounting company. Before that, I was a high-school teacher for two years in the Teach For Australia program, ran an EA-based club and charity election at the school I taught at, published a peer-reviewed psychology paper, and won a stand-up comedy award which ~30 people in the entire world would've heard of (a Golden Doustie, if you must know).

If you've read anything I've written, it would really help me if you took this survey (see here for context). You can also give me more general feedback here. (Either way, your response will be anonymous by default.)

I also post to LessWrong.

If you think you or I could benefit from us talking, feel free to reach out or schedule a call.

Comments

Modelling the odds of recovery from civilizational collapse

Also, if you're aware of Rethink Priorities/Luisa Rodriguez's work on modelling the odds and impacts of nuclear war (e.g., here), I'd be interested to hear whether you think making parameter estimates was worthwhile in that case. (And perhaps, if so, whether you think you'd have predicted that beforehand, vs being surprised that there ended up being a useful product.)

I ask because that seems like the most similar existing piece of work I'm aware of (in methodology rather than topic). To me, that project seems to have been worthwhile, including the parameter estimates, and to have provided outputs that are perhaps more useful and less massively uncertain than I would've predicted. That seems like weak evidence that parameter estimates could be worthwhile in this case as well.

Modelling the odds of recovery from civilizational collapse

Thanks for the comment. That seems reasonable. I myself had been wondering if estimating the parameters of the model(s) (the third step) might be: 

  • the most time-consuming step (if a relatively thorough/rigorous approach is attempted)
  • the least insight-providing step (since uncertainty would likely remain very large)

If that's the case, this would also reduce the extent to which this model could "plausibly inform our point estimates" and "narrow our uncertainty". Though the model might still capture the other two benefits (indicating what further research would be most valuable and suggesting points for intervention).

That said, if one goes to the effort of building a model of this, it seems to me like it's likely at least worth doing something like: 

  1. surveying 5 GCR researchers or other relevant experts on what parameter estimates (or confidence intervals or probability distributions for parameters[1]) seem reasonable to them
  2. inputting those estimates
  3. seeing what outputs that suggests and, more importantly, performing sensitivity analyses (a rough sketch of what this could look like is below)
  4. thereby learning what the cruxes of disagreement appear to be and which parameters most warrant further research, further decomposition, and/or elicitation of more experts' views

And then perhaps this project could stop there, or perhaps it could then involve somewhat deeper/more rigorous investigation of the parameters where that seems most valuable.
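To illustrate what steps 2-3 might look like in practice, here's a minimal sketch in Python (using numpy). The parameter names, the Beta distributions, and the toy "recovery requires all three" model structure are all just placeholder assumptions for illustration, not anything from an actual elicitation or from the project itself:

    # A minimal sketch of steps 2-3 above, assuming hypothetical parameters and
    # Beta distributions purely for illustration; none of these names or numbers
    # come from an actual elicitation.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Stand-ins for expert-elicited distributions over three model parameters
    p_retain_knowledge = rng.beta(4, 2, n)   # key knowledge survives the collapse
    p_rebuild_industry = rng.beta(2, 3, n)   # industry is re-established
    p_avoid_recollapse = rng.beta(5, 2, n)   # no further collapse occurs

    # Toy model structure: recovery requires all three (independence assumed)
    p_recovery = p_retain_knowledge * p_rebuild_industry * p_avoid_recollapse

    print(f"Mean P(recovery): {p_recovery.mean():.3f}")
    print(f"90% interval: {np.percentile(p_recovery, [5, 95]).round(3)}")

    # Crude sensitivity analysis: correlation of each input with the output
    for name, samples in [("retain knowledge", p_retain_knowledge),
                          ("rebuild industry", p_rebuild_industry),
                          ("avoid re-collapse", p_avoid_recollapse)]:
        r = np.corrcoef(samples, p_recovery)[0, 1]
        print(f"Sensitivity to {name}: r = {r:.2f}")

Even something this crude tends to show which parameters the output is most sensitive to, which is the main thing I'd want out of this step.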

Any thoughts on whether that seems worthwhile?

[1] Perhaps this step could benefit from use of Elicit; I should think about that if I pursue this idea further.

Modelling the odds of recovery from civilizational collapse

Thanks, I've sent you a PM :)

ETA: Turns out I was aware of the work Peter had in mind; I think it's relevant, but not so similar as to strongly reduce the marginal value this project could provide.

Risks from Atomically Precise Manufacturing

It looks like FHI now want to start looking into nanotechnology/APM more and to build more capacity in that area. They're hiring researchers in a bunch of areas, one of which is:

Nanotechnology: analysing roadmaps to atomically precise manufacturing and related technologies, including possible intersections with advances in artificial intelligence, and potential impacts and strategic implications of progress in these areas.

How have you become more (or less) engaged with EA in the last year?

That makes sense to me. 

It also reminds me of an idea I've either heard or said before: talking about taking the Giving What We Can pledge by telling the story of what led one to take it, rather than by presenting an argument for why one should take it. A good thing about that is that you can still present the arguments for taking it, since they probably played a role in the story, and if other arguments played a role in other people's stories, you can talk about those too. But it probably feels less pushy or preachy than framing it more explicitly as a set of arguments.

(These two pages may also be relevant: 1, 2.)

How have you become more (or less) engaged with EA in the last year?

Thanks for sharing :)

Do you think you wouldn't have found it as negative/abrasive if the people had still basically argued against a focus on those causes or engagement with other advocacy orgs or the like, but had done so in a way that felt less like a quick, pre-loaded answer and more like they: 

  • were really explaining their reasoning
  • were open to seeing if you had new arguments for your position
  • were just questioning neglectedness/tractability, rather than importance?

I ask because I think there'll be a near-inevitable tension at times between being welcoming to people's current cause prioritisation views and staying focused on what does seem most worth prioritising.[1] So perhaps the ideal would be a bit more genuine open-mindedness to alternative views, but mainly a more welcoming and less dismissive-seeming way of explaining "our" views. I'd hope that that would be sufficient to avoid seeming arrogant or abrasive or driving people away, but I don't know.

(Something else may instead be the ideal. This could include spending more time helping people think about the most effective approaches to causes that don't actually seem to be worth prioritising. But I suspect that that's not ideal in many cases.)

[1] I'm not sure this tension is strong for climate change, as I do think there are decent arguments for prioritising (neglected aspects of) climate change (e.g., nuclear power, research into low-probability extreme risks). But I think this tension probably exists for human rights advocacy and various other issues many people care about.

How have you become more (or less) engaged with EA in the last year?

Very glad your second bout of experiences with EA has been more positive! And sorry to hear that your earlier experiences were negative/abrasive. I'd be interested to hear more about that, though that also feels like the sort of thing that might be personal or hard to capture in writing. But if you do feel comfortable sharing, I'd be interested :)

Additionally/alternatively, I'd be interested in whether you have any thoughts on more general trends that could be tweaked, or general approaches that could be adopted, to avoid EA pushing people away like it did the first time you engaged. (Even if those thoughts are very tentative, they could perhaps be pooled with other tentative thoughts to form a clearer picture of what the community could do better.)

How have you become more (or less) engaged with EA in the last year?

I've also changed the style/pace of my engagement somewhat, in a way that feels a little hard to describe. 

It's sort-of like, when I first encountered EA, I was approaching it as a sprint: there were all these amazing things to learn, all these important career paths to pursue, and all these massive problems to solve, and I had to go fast. I actually found this exciting rather than stressful, but it meant I wasn't spending enough time with my (non-EA) partner, was talking to her about EA things too much of the time, etc. (I think this is more about my personality than about EA specifically, given that a similar thing occurred when I first started teaching in 2018.)

Whereas now it's more like I'm approaching EA as a marathon. By that I mean I'm: 

  • Spending a little less time on "work and/or EA stuff" and a little more time with my partner
    • My work is now itself EA stuff, so I actually increased my time spent on EA stuff compared to when I was a teacher. But I didn't increase it as much as I would've if still in "sprint mode".
  • Making an effort to more often talk about non-EA things with my partner
  • Reducing how much I "sweat the small stuff"; being more willing to make some frivolous expenditures (which are actually small compared to what I'm donating and will donate in future) for things like nice days out, and to not think carefully each time about whether to do that

I think the factors that led me to switch to marathon mode are roughly that:

  • It seemed best for my partner and my relationship
  • I've come to see my relationship itself in a more marathon-y and mature way (or something like that; it's hard to describe), I think because I got married this year
    • This seems to have made ideas about compromise and long time horizons more salient to me
    • (I mean this all in a good way, despite how "seeing my relationship as a marathon" might sound!)
  • My career transition worked! So now I feel a bit less like there's a mad dash to get onto a high impact path, and a bit more like I just need to work well and sustainably
    • But this change was only moderate, for reasons including that I remain uncertain about which path I should really be on
  • Getting an EA research job means I can now scratch my itch for learning, discussing, and writing about interesting and important ideas during my work hours, and therefore don't feel an unmet intellectual "need" if I spend my free hours on other things
    • In contrast, when I was a teacher, I mostly had to get my fill of interesting and important ideas outside of work time, biting into the time I spent with my partner

How have you become more (or less) engaged with EA in the last year?

I've become much more engaged in the last year. I think this was just a continuation of a fairly steady upward trend in my engagement since I learned about EA in late 2018. And I think this trend hasn't been about increased inclination to engage (because I was already very sold on EA shortly after encountering it), but rather about increased ability to engage, resulting from me: 

  • catching up on EA's excellent back-catalogue of ideas
  • gradually having more success with job applications 

Ways my engagement increased over the past ~12 months include that I:

  • Continued applying to a bunch of EA-aligned jobs, internships, etc.
    • Over 2019 as a whole, I applied to ~30 roles
    • Perhaps ~10 were with non-EA orgs
  • Attended my first EAGx (Australia) and EAG (London)
  • Made my first 10% donation
    • This was to the EA Long-Term Future Fund
    • This was also my first donation after I took the GWWC Pledge in early 2019
  • Started posting to the EA Forum, as well as commenting much more
  • Was offered two roles at EA orgs and accepted one
  • Stayed at the EA Hotel
  • Mostly moved from vegetarianism to veganism
    • This was influenced by my stay at the EA Hotel, as basically all the food there was vegan, and I realised I was pretty happy with it
  • Was later offered a fellowship at a different EA org and accepted it
  • Made a bunch of EA friends

Overall, I've really enjoyed this process, and I'm very glad I found EA. 

I've found some EAs or EA-adjacent people rude or arrogant, especially on Facebook groups and LessWrong (both of which I value a lot overall!). But for some reason this hasn't really left me with a bad taste in my mouth, or a reduced inclination to engage with EA as a whole. And I've much more often had positive experiences (including on Facebook groups and LessWrong).
