MichaelA

I'm a researcher & writer for Convergence Analysis, an existential risk strategy research group. Feel free to check out Convergence's list of publications and strategic plan.

Posts of mine that were written for/with Convergence will mention that fact. In other posts, and in most of my comments, opinions expressed are my own.

I want to continually improve, so I welcome feedback of all kinds. You can give me feedback anonymously here.

About half of my posts are on LessWrong: https://www.lesswrong.com/users/michaela

Some more info on my background and interests is here.

MichaelA's Comments

MichaelA's Shortform

That looks very helpful - thanks for sharing it here!

You have more than one goal, and that's fine

Thanks for this post. I think it provides a useful perspective, and I've sent it to a non-EA friend of mine who's interested in EA, but concerned by the way that it (or utilitarianism, really) can seem like it'd be all-consuming.

I also found this post quite reminiscent of Purchase Fuzzies and Utilons Separately (which I also liked). And something that I think might be worth reading alongside this is Act utilitarianism: criterion of rightness vs. decision procedure.

Collection of good 2012-2017 EA forum posts

Thanks for this collection!

Another 2017 post I quite liked and have often drawn on in my thinking or in conversation is Act utilitarianism: criterion of rightness vs. decision procedure.

Some history topics it might be very valuable to investigate

Thanks for those answers and thoughts!

And good idea to add the Foundational Questions link to the directory - I've now done so.

Some history topics it might be very valuable to investigate

Thanks for sharing those topic ideas, links to resources, and general thoughts on the intersection of history research and EA! I think this post is made substantially more useful by now having your comment attached. And your comment has also further increased how excited I'd be to see more EA-aligned history research (with the caveats that this doesn't necessarily require having a history background, and that I'm not carefully thinking through how to prioritise this against other useful things EAs could be doing).

If you do end up making a top-level post related to your comment, please do comment about it here and on the central directory of open research questions.

It's long been on my to-do list to go through GPI and CLR's research agendas more thoroughly to work out if there are other suggestions for historical research on there. I haven't done that to make this post, so I may have missed things.

Yeah, that sounds valuable. I generated my list of 10 topics basically just "off the top of my head", without looking at various research agendas for questions/topics for which history is highly relevant. So doing that would likely be a relatively simple step to make a better, fuller version of a list like this.

Hopefully SI's work offers a second example of an exception to the "recurring theme" you note in that 1) SI's case studies are effectively a "deeper or more rigorous follow-up analysis" after ACE's social movement case study project -- if anything, I worry that they're too deep and rigorous and that this has drastically cut down the number of people who put the time into reading them, and 2) I at least had an undergraduate degree in history :D

Yeah, that makes sense to me. I've now edited in a mention of SI after AI Impacts. I hadn't actively decided against mentioning SI, just didn't think to do so. And the reason for that is probably just that I haven't read much of that work. (Which in turn is probably because (a) I lean longtermist but don't prioritise s-risks over x-risks, so the work by SI that seems most directly intended to improve farm animal advocacy seems to me valuable but not a top priority for my own learning, and (b) I think not much of that work has been posted to the Forum?) But I read and enjoyed "How tractable is changing the course of history?", and the rest of what you describe sounds cool and relevant.

Focusing in on "I worry that they're too deep and rigorous and that this has drastically cut down the number of people who put the time into reading them" - do you think that that can't be resolved by e.g. cross-posting "executive summaries" to the EA Forum, so that people at least read those? (Genuine question; I'm working on developing my thoughts on how best to do and disseminate research.)

Also, that last point reminds me of another half-baked thought I've had but forgot to mention in this post: Perhaps the value of people who've done such history research won't entirely or primarily be in the write-ups which people can then read, but rather in EA then having "resident experts" on various historical topics and methodologies, who can be the "go-to person" for tailored recommendations and insights regarding specific decisions, other research projects, etc. Do you have thoughts on that (rather vague) hypothesis? For example, maybe even if few people read SI's work on those topics, if they at least know that SI did that research, they can come to SI when they have specific, relevant questions and thereby get a bunch of useful input in a quick, personalised way.

(This general idea could also perhaps apply to research more broadly, not just to history research for EA, but that's the context in which I've thought about it recently.)

The career coordination problem

I'd agree with the idea that people should take personal fit very seriously, with passion/motivation for a career path being a key part of that. And I'd agree with your rationale for that.

But I also think that many people could become really, genuinely fired up about a wider range of career paths than they might currently think (if they haven't yet tried or thought about those career paths). And I also think that many people could be similarly good fits for, or similarly passionate about, multiple career paths. For these people, knowing which career path will have the greatest need for more people like them in a few years can be very useful as a way of shortlisting the things to test one's ability to become passionate about, and/or as a "tie-breaker" between paths one has already shortlisted based on passions/motivations/fit.

For example, I'm currently quite passionate about research, but have reason to believe I could become quite passionate about operations-type roles, about roles at the intersection of those two paths (like research management), and maybe about other paths like communications or non-profit entrepreneurship. So which of those roles - rather than which roles in general - will be the most marginally useful in a few years' time seems quite relevant for my career planning.

(I think this is probably more like a different emphasis to your comment, rather than a starkly conflicting view.)

The career coordination problem
we’ve found that releasing substandard data can get people on the wrong track

I've seen indications and arguments that suggest this is true when 80,000 Hours releases data or statements they don't want people to take too seriously. Do you (or does anyone else) have thoughts on whether anyone releasing "substandard" (but somewhat relevant and accurate) data on a topic will tend to be worse than there being no explicit data on that topic?

Basically, I'm tentatively inclined to think that some explicit data is often better than no explicit data, as long as it's properly caveated, because people can then update their beliefs by only the appropriate amount. (Though that's definitely not fully or always true; see e.g. here.) But then 80k is very prestigious and trusted by much of the EA community, so I can see why people might take statements or data from 80k too seriously, even if 80k tells them not to.

So maybe it'd be net positive for something like what the OP requests to be done by the EA Survey or some random EA, but net negative if 80k did it?

3 suggestions about jargon in EA

Yes, I think these are all valid points. So my suggestion would indeed be to often provide a brief explanation and/or a link, rather than to always do that. I do think I've sometimes seen people explain jargon unnecessarily in a way that's a bit awkward and presumptuous, and perhaps sometimes been that person myself.

In my articles for the EA Forum, I often include just links rather than explanations, as that gives readers the choice to get an explanation if they wish. And in person, I guess I'd say that it's worth:

  • entertaining both the hypothesis that using jargon without explanation would make someone feel confused/excluded, and the hypothesis that explaining jargon would make the person feel they're perceived as more of a "newcomer" than they really are
  • then trying to do whatever seems best based on the various clues and cues
    • with the options available including more than just "assume they know the jargon" and "assume they don't and therefore do a full minute spiel on it"; there are also options like giving a very brief explanation that feels natural, or asking if they've come across that term

One last thing I'd say is that I think the fact jargon is used as a marker of belonging is also another reason to sometimes use jargon-free statements or explain the jargon, to avoid making people who don't know the jargon feel excluded. (I guess I intended that point to be implicit in saying that explanations and/or hyperlinks of jargon "may make [people] feel more welcomed and less disorientated or excluded".)

Some history topics it might be very valuable to investigate

That definitely sounds good to me. My personal impression is that there are many EAs who could be doing some good research on-the-side (in a volunteer-type capacity), and many research questions worth digging into, and that we should therefore be able to match these people with these questions and get great stuff done. And it seems good to have some sort of way of coordinating that.

Though I also get the impression that this is harder than it sounds, for reasons I don't fully understand, and that mentorship (rather than just collaboration) is also quite valuable.

So I'd suggest someone interested in setting up that sort of crowdsourcing or coordination system might want to reach out to EdoArad, Peter Slattery, and/or David Janku. The first two of those people commented on my central directory for open research questions, and David is involved with (runs?) Effective Thesis. All seem to know more than me about this sort of thing. And it might even make sense to somehow combine any new attempts at voluntary research crowdsourcing or collaborations with initiatives they've already set up.

Effective Thesis: updates from 2019 and call for collaborators
2) Focusing on non-EAs and people on the borders of the community rather than on EAs - it seems to me so far that many people who are highly involved in EA can find similarly good advice as we would be able to give them in their own circles, so the counterfactual impact in this group is smaller.

That sounds right to me, and indeed like an argument that pushes in favour of focusing on non-EAs or people on the borders. (Though I don't know how to balance that against other arguments.)

In fact, a related point that came to mind is that it seems possible Effective Thesis could be a good intervention simply from the perspective of expanding the EA community, separate from expanding the EA-aligned researcher community or the amount of high-impact research done.

For example, maybe Effective Thesis looks to non-EA uni students like a concrete service they just want to engage with for their own career plans, without them having to be sold yet on anything more than a vague sense of "having an impact". And then via Effective Thesis and the coaching, they learn about EA and priority cause areas, learn how they can help, and get useful EA connections. And then even if they move out of research later, they might do something like working on important problems in the civil service or founding a high-impact charity, and maintain an EA mindset and connections to the community.

Whereas an EA group at their university might not have appealed to that person, as it didn't obviously advance their existing plans in a concrete way.

I think part of why that seems plausible to me is that I think a similar process might help explain why 80,000 Hours and GiveWell have both served well for expanding the EA community. They both offer a service that can seem directly useful to anyone who at least just wants to "have an impact", in some vague sense, even if that person isn't yet bought into things like utilitarianism or caring about various neglected populations (people in other countries, future generations, nonhumans, etc.).

Have you thought about how much impact ET might be able to have on just expanding the EA community?
