All of MichaelA's Comments + Replies

Will the coronavirus pandemic advance or hinder the spread of longtermist-style values/thinking?

Quick take: Seems to have clearly boosted the prominence of biorisk stuff, and in a way that longtermism-aligned folks were able to harness well to promote interventions, ideas, etc. that are especially relevant to existential biorisk. I think it probably also on net boosted longtermist-/x-risk-style priorities/thinking more broadly, but I haven't really thought about it much.

How many people have heard of effective altruism?

Thanks for this post!

I think there's a typo here:

We also found sizable differences between the percentage of Republicans (4.3% permissive, 1.5% stringent) estimated to have heard of EA, compared to Democrats (7.2% permissive, 2.9% stringent) and Independents (4.3% permissive, 1.5% stringent). [emphasis added]

It looks like the numbers for Republicans were copy-pasted for Independents? Since the text implies that the numbers should be very different but they're identical, and since if those are the correct numbers it seems weird that the US adult population ... (read more)

What We Owe the Past

For what it's worth, I get a sense of vagueness from this post: I don't have a strong understanding of what specific claims are being made, and I predict that different readers will take different claims away from it.

I think attempting to provide a summary of the key points in the form of specific claims and arguments for/against them would be a useful exercise, to force clarity of thought/expression here. So what follows is one possible summary. Note that I think many of the arguments in this attempted summary are flawed, as I'll ... (read more)

I put a bunch of weight on decision theories which support (2).

A mundane example: I get value now from knowing that, even if I died, my partner would pursue certain Claire-specific projects I value, because it makes me happy to know they'd get pursued. I couldn't have that happiness now if I didn't believe he would actually do it, and it'd be hard for him (a person who lives with me and who I've dated for many years) to make me believe he actually would pursue them if it weren't true (as well as seeming ske... (read more)

Or perhaps you're thinking of utils in terms of whether preferences are actually satisfied, regardless of whether people know or experience that, and regardless of whether they're alive at the time? If so, then I think that's a pretty unusual form of utilitarianism, one I'd give very little weight to, and a point it seems you should've clarified in the main text.

Although I find this version of utilitarianism extremely implausible, it is actually a very common form of it. Discussions of preference-satisfaction theories of wellbeing presupposed by... (read more)

Thank you so, so much for writing up your review & criticism! I think your sense of vagueness is very justified, mostly because my own post is more "me trying to lay out my intuitions" and less "I know exactly how we should change EA on account of these intuitions". I had just not seen many statements from EAs, and even fewer from my non-EA acquaintances, defending the importance of (1), (2), or (3) - great breakdown, btw. I put this post up in the hopes of fostering discussion, so thank you (and all the other commenters) for contributing your th... (read more)

My thoughts on nanotechnology strategy research as an EA cause area

I just want to flag that, for reasons expressed in the post, I think it's probably a bad idea to try to accelerate the implementation of APM at the moment, as opposed to first doing more research and thinking on whether to do that, and then maybe indeed doing it afterwards if it then appears useful.

And I also think it seems bad to "stand firmly behind" any "aggressive strategy" for accelerating powerful emerging technologies; I think there are many cases where accelerating such technologies is beneficial for the world, but one should prob... (read more)

My thoughts on nanotechnology strategy research as an EA cause area

I strong downvoted this comment. Given that, and that others have done so too (which I endorse), I want to mention that I'm happy to write up some thoughts on why I did so if you want, since I imagine people who are new-ish to the EA Forum sometimes don't understand why they're getting downvoted.

But in brief:

  • I thought this was a misleading/inaccurate and uncharitable reading of the post
  • I think that the "kill list" part of your comment feels wildly over-the-top/hyperbolic
    • Perhaps you meant it as light-hearted or a joke or something, but I think it's not obvious that that's t
... (read more)

As a moderator, I agree with Michael. The comment Michael's replying to goes against Forum norms.


The author or readers might also find the following interesting:

  [1] That said, fwiw, since I'm recommending Holden's doc, I should also flag that I think the breakdown of possible outcomes that Holden sketches there isn't a good one, because:

    • He defines utopia, dystopia, and "middling worlds" solely by how good they are, whereas "papercli
... (read more)

Thanks for this post. I upvoted this and think the point you make is important and under-discussed. 

That said, I also disagree with this post in some ways. In particular, I think the ideal version of this post would pay more attention to:

... (read more)
MichaelA's Shortform

Someone shared a project idea with me and, after I indicated I didn't feel very enthusiastic about it at first glance, asked me what reservations I have. Their project idea is focused on reducing political polarization and framed as motivated by longtermism. I wrote the following and thought maybe it'd be useful for other people too, since I have similar thoughts in reaction to a large fraction of project ideas.

  • "My main 'reservations' at first glance aren't so much specific concerns or downside risks as just 'I tentatively think that this doesn'
... (read more)
It could be useful if someone ran a copyediting service

Thanks for this post!

Here's a related thing I wrote recently, with a slightly different framing and some additional details, in case this is useful to someone. Though Habryka's comment makes me think perhaps LessWrong already have this mostly covered, so I guess a first step would be to check that out.

"Mesa-project* idea: Centralised & scalable proofreading, copyediting, and formatting assistance for EA-aligned people

Maybe someone should find decent/good copyeditors/proofreaders/formatters and advertise their services to EA community members who are wi... (read more)

evelynciara (2 karma · 1mo)
I'm willing to do this work for $15-35 per page (depending on the author's ability to pay); I'm very detail-oriented and like this kind of stuff. I can only really copyedit/proofread posts written in American English, but I can do formatting for any text. I could probably do one or two copyediting or formatting jobs per weekend.
Corporate governance

An FYI to potential text-writers for this entry I made: I've mostly thought about this in relation to AI governance, but I think it's also important for space governance and presumably various other EA issues. 

Consider Changing Your Forum Username to Your Real Name

I think you mean "your first name" or something like that, rather than necessarily "your full name"? 

My suggested default would be to write your full real name in your bio, fill in other info about you in your bio, and make your Forum name sufficiently related to your real name that people who at one point learned the connection will easily remember it. (As I've done.) 

If one does that, then also making one's Forum name their full real name seems to add little value and presumably adds some risk to their 'real life' reputation if they want to lat... (read more)

My policy on this, to the extent I have one, is a sort of soft lockdown: I don't mind sharing enough personal info on here that an EA who knows me in real life could figure out my identity, but I need to always have at least plausible deniability in the face of any malicious actor. 

As far as the risks in policy careers go, I think the risk is very high for appointed jobs and real but lower for elected ones. Politicians are more risk-averse than voters, and when they can pick from a pool of 100, they'll look for any reason to turn you down. When the voter... (read more)

Mid-career people: strongly consider switching to EA work

Nice post! I think that I agree with all of the specific points you made, that they seem in aggregate pretty useful+important to say, and that in future I'll probably send this post to at least 5 people when giving career advice.

But here are two criticisms:

  • I think you don't state explicitly what you mean by "EA work"?
    • And I'm guessing at least 25% of readers will consciously or unconsciously interpret it as "work at explicitly EA orgs", but I'm also guessing you in fact mean it as something like "work that's motivated by impartial altruism and would be seen
... (read more)
Ben Snodin (1 karma · 1mo)
Thanks. On the first point in particular, the post might add a bit of confusion here unfortunately. Edit: I added something near the top that hopefully makes things a bit clearer re the first point.
Mid-career people: strongly consider switching to EA work

For what it's worth, I'd say (partly based on my experience as a grantmaker and on talking to lots of other grantmakers about similar things):

  • It's not the case that everyone who applies will get funding, and it is the case that track record and other signs of talent/skill would be taken into account
  • But people also have a decent chance of getting at least a few months of funding even if they have neither a very good track record nor clear signals of strong talent/skill
  • And people who think they don't have many signals of strong talent/skill should probably i
... (read more)
Mid-career people: strongly consider switching to EA work

Agreed - and to point to lots of sources, I'd highlight List of EA funding opportunities and my statement there that:

I strongly encourage people to consider applying for one or more of these things. Given how quick applying often is and how impactful funded projects often are, applying is often worthwhile in expectation even if your odds of getting funding aren’t very high. (I think the same basic logic applies to job applications.)

Also, less importantly, Things I often tell people about applying to EA Funds
 

Propose and vote on potential EA Wiki articles / tags [2022]

I think this would be a subset of Personal development, so in some sense is "covered" by that, but really that tag is probably too broad and not very intuitively named. So I think I'm in favour of adding subsidiary tags or dividing that tag up or refactoring it or something. Not sure precisely what the best move is, though.

Productivity would also overlap with Coaching and Time-money tradeoffs, but that seems ok.
 

nicolenohemi (2 karma · 1mo)
"Productivity" is indeed included in the description of the personal development [https://forum.effectivealtruism.org/topics/personal-development] tag. However, I do believe/agree that this is a really big and important category that could be broken down. Spending a couple of seconds thinking about this, I'd come up with the following suggestions: productivity, mental health, physical health, systems, meditation, learning, self-help, PD services, PD experiments, spirituality.
List of EA funding opportunities

I've just now learned of www.futurefundinglist.com, which also seems relevant (though I haven't looked at it closely or tried to assess how useful it'd be to people).

Things I often tell people about applying to EA Funds

Something else I now often tell people:

I'd suggest:

  1. maybe getting feedback from various people in EA who know about the sort of things you're working on but aren't as busy as the grantmakers
  2. then just applying and seeing whether you (a) already get accepted, (b) get rejected but with useful feedback, or (c) get rejected with no feedback but can then use that as a signal to rethink, get feedback elsewhere, and apply again with a new version of the project and explanation. 

Relatedly, a few points that I now feel this post should've had more in mind are:

  • It
... (read more)
Propose and vote on potential EA Wiki articles / tags [2022]

My initial feeling is that "research assistants" is a pretty different kind of thing, closer to "research" than to "PA & similar", but that PAs, virtual assistants, and executive assistants do form a natural cluster.

But I'm not sure if that's right. And even if it is, it seems fine to call the tag "assistants" anyway, and just still have RA-related things often get other tags too, with this tag mostly being about things other than RAs.

Propose and vote on potential EA Wiki articles / tags [2022]

Personal assistance or Personal assistant or PA or Personal/executive assistant or something like that

E.g. https://forum.effectivealtruism.org/posts/bzXBZyMrnMiWu2DeF/to-pa-or-not-to-pa

Overlaps with Operations and https://forum.effectivealtruism.org/tag/pineapple-operations but seems sufficiently distinct and important to warrant its own tag

Pablo (4 karma · 1mo)
Cool, yes, this was on my list. Done [https://forum.effectivealtruism.org/tag/personal-assistants]. Probably worth making the scope broad enough to also cover virtual assistants, research assistants, and other kinds of assistants. On reflection, perhaps it should just be called assistants?
Propose and vote on potential EA Wiki articles / tags [2022]

Something about quantum computing or quantum mechanics?

I'm more interested in the former, but maybe we should have the latter tag and then in practice it'll also work as the former tag, since it won't be hugely populated anyway?

Relevant posts include:

... (read more)
Don’t think, just apply! (usually)

I think that this isn't a useful way of looking at the situation and doesn't match reality well. I don't have time to fully elaborate on why I think that, but here are some brief points:

  • The difference between the first- and second-choice applicant in terms of their fit for the role can often be quite large (in expectation)
  • The person who would've been picked if the top-ranked person hadn't applied or had turned the job down can still probably go do something else.
    • This in fact very much happens; quite often the people who nearly get an offer also get another cool of
... (read more)
Denis Drescher (2 karma · 1mo)
An alternative decision algorithm:

  1. If you’re otherwise unusually likely to turn away from EAish stuff – i.e. reach the end of your runway or burn out – just apply. Probably even if you’re just at an average level of risk.
  2. If you can see yourself turning down an awesome offer because you disagree with the result of the interview process, apply a bit more liberally than otherwise.
  3. When prioritizing between positions, assign:
    1. 1000x weight to completely idiosyncratic high-impact projects (regardless of whether they’re your ideas or someone else’s) that no one else would otherwise pursue for a long time,
    2. 100x weight to relatively neglected roles in the community (say, because they require a rare combination of skills or because the org is new and fairly unknown),
    3. 10x weight to any capacity-creating kind of role, to reduce the risk that they may not find anyone, and
    4. 1x weight to any other role.
Denis Drescher (2 karma · 1mo)
Thank you for taking the time to write up the summary!

  1. Possibly. I’ve only hired for two roles so far (using a structured process). In one case there were clear candidates 1, 2, and 3+, and while 2 might’ve just had a bad day, we made offers to 1 and 2 anyway. In another case, though, we had two, or possibly three, candidates tied for the top spot. Two, we thought, would be more pleasant to work with while the third one seemed to have the stronger technical skill. We didn’t know how to trade that off and ended up making the offer to the one with the stronger technical skill. I have no idea whether that was the right call.
  2. Yes, that’s helpful for mitigating the worst-case risks. We also did that in the second case. It still seems weak though. I imagine that in most cases they’re not able to help the other candidates very much. We weren’t either afaik.
  3. Yes, that’s also a system I’ve encountered, and I love it! That’s a strong reason in my mind to apply somewhere after all. But I don’t fully trust it.
    1. Even if an organization has enough funding for this system, they may not have enough management capacity.
    2. They may still have a hiring goal, and upon reaching it will wind down the effort they put into hiring. That frees up resources at the org at the expense of missing out on an even better candidate. The hiring process is hopefully short in comparison to the time that the person will stay at the org, so the second probably has more leverage.
    3. I’d be replacing .5 or .2 people, which is much better, but nowhere near an ops job that creates capacity.
  4. Okay, that’s reassuring, but see my point 1. Then again most EA interview processes (e.g., the CLR one that Stefan described in detail a few years back) are more sophisticated than ours was. A good interview process is another minor but valuable mitigat
Don’t think, just apply! (usually)

I basically just think it's a bad idea to say "we don't want to waste [evaluators'] time and flood their applications process" (even with your caveats). I think there's only a small kernel of truth to this in practice, and that the statement is far more likely to mislead than enlighten people. 

To elaborate:

  • If an application is clearly bad, then it costs very little time from the hirer or grantmaker or whatever, if they have a good process. 
  • If the application is good but the person might pull out of the role or decline an offer later, I think that
... (read more)
david_reinstein (1 karma · 1mo)
I'm not saying we telegraph "don't waste our time", and this should not be conveyed in broad communications, obviously. But here in the EA Forum we can afford to be nuanced and subtle, and think about the whole ecosystem ... I said "we don’t want to waste their time and flood their applications process." ... (emphasis added). And maybe "waste" is 40% too strong a word; just consider 'it is a potential cost.'

I also think that ‘self filtering’ (for the right reasons) is sometimes useful to the ecosystem, as we know vetting is hard. Often it goes too far. But I don’t want us to throw the baby out with the bathwater and move to a heuristic of ‘just apply to everything and let the other side sort it out’. Because there are real costs on the other side:

  • perhaps not mainly the actual time spent on the ‘don’t think’ (DT) applications,
  • but because a large volume of applications makes it harder to spend time on the high-value applications,
  • ... and ‘filtering out the DT applications’ will usually lead to some good applications being mistakenly filtered out. This type-1 error can be minimized by good processes, but there is always some tradeoff (see 'precision versus recall' [https://en.wikipedia.org/wiki/Precision_and_recall] in classification problems/ML).

I think the self filtering is particularly useful where:

  • You have strong information about yourself that is not easy to see on a CV or even in work tasks
  • Particularly where this is of the nature “I could almost surely not be able to accept a job in X field/Y org because of a strong overriding reason”

In such situations it may be very hard for the employer/funder to detect these things through your application and work tasks. Furthermore, if they are fully compensating you for the work tasks, and encouraging you, this may not cause you to want to self-filter along the way. This relates closely to my thoughts on not overcorrecting on ‘imposter syndrome’ (IS).
Don’t think, just apply! (usually)

Yeah, that seems a fair point. 

One thing I'd say in response is that, as a person who's been on multiple hiring committees and evaluated many grant applications, I'm pretty confident hirers and grantmakers would be excited for people to apply even if there's a decent chance they'll ultimately pull out or decline an offer! 

E.g., even if someone has a 75% chance of pulling out or declining, that just reduces the EV for the hirer/grantmaker of the person applying by a factor of 4. And that probably isn't a very big deal, given that hirers and grantm... (read more)

Bella (1 karma · 1mo)
Thanks for sharing your perspective from the hiring & evaluation side! FWIW I already had some belief of this shape, which is why I added the caveat 'things that I imagine will disappoint people' - some part of me knows that the hirers are very unlikely to actually care, but another part worries & feels aversion to this.
Don’t think, just apply! (usually)

Some quickly written scattered remarks on how some of these points have played out for me personally:

  • In 2019 I applied for ~20 roles of a very wide range of types and ambition levels. 
  • I ended up getting 2 offers, both for things quite unlike what I expected I'd be a good fit for, and both of which I wouldn't have applied to if I had been screening myself out of things that didn't seem clearly “me-shaped” or that I wasn't confident I'd want to accept offers for.
  • In one case, I got the offer because the org decided they should change
... (read more)
Propose and vote on potential EA Wiki articles / tags [2022]

Coworking spaces

Do we already have a similar tag? If not, I feel fairly confident we should have this; there are at least three people / groups I know of who might find it useful to have all relevant posts collected in one place.

There are a bunch of recent relevant posts I won't bother collecting, but one is https://forum.effectivealtruism.org/posts/MBDHjwDvhDnqisyW2/awards-for-the-future-fund-s-project-ideas-competition 

Pablo (4 karma · 2mo)
I was just thinking about this earlier today. Tag is here [https://forum.effectivealtruism.org/tag/coworking-spaces]; will add some content later.
Pablo (4 karma · 2mo)
Yeah, seems reasonable. Although there are few posts on compute governance, the scope of that field is well defined. Stub here [https://forum.effectivealtruism.org/tag/compute-governance].
Emergency response

I think I favour dropping the word "teams" to make this broader.

We could also consider replacing this name with "crisis response", but I don't have a view on which is better.

Pablo (4 karma · 2mo)
I lean towards dropping the word "teams", too.
Jan_Kulveit (2 karma · 2mo)
Crisis response is a broader topic. I would probably suggest creating an additional tag for Crisis response (most of our recent sequence would fit there).
MichaelA (2 karma · 23d)
Also Institute for Progress [https://progress.institute/about/]
MichaelA (2 karma · 24d)
Also Encultured AI [https://encultured.ai/]
MichaelA (2 karma · 1mo)
Also Pour Demain [https://en.pourdemain.ch/]
Pablo (2 karma · 2mo)
To the best of my knowledge, Samotsvety is a group of forecasters, not an organization (although some of its members have recently launched or will soon launch forecasting-related orgs).
Translation

I'm pretty sure in the last few months there was a post that was a retrospective on a fellowship/program in continental Europe (maybe Finland or Sweden or Poland?) that was framed as paying people to translate EA content into their local language but then also intended to have the benefit of getting those people themselves interested in EA. That should get this tag. But I can't remember what the post was called.

Propose and vote on potential EA Wiki articles / tags [2022]

Market testing or message testing or polling or something like that

I'm pretty unsure if we should make this entry. Also maybe these topics are too different to all be lumped together? Maybe market testing should just be covered by a tag on Digital marketing or Marketing (proposed elsewhere) and then message testing and polling should be covered by a different tag? 

By message testing I mean what this page talks about: https://publicinterest.org.uk/TestingGuide.pdf

Some relevant posts:

... (read more)
Propose and vote on potential EA Wiki articles / tags [2022]

Digital marketing or maybe just Marketing

Do we already have a tag quite like this? If not, I think we should almost certainly have it.

I know at least a few posts would warrant this tag, and that several funders (and, I think, entrepreneur-types and incubators) are interested in the topic, so having a tag to collect posts on the topic seems good. (E.g., then we can send that tag page to people who are at an early stage of considering doing work on this.)

MichaelA (6 karma · 22d)
Oh, we do have https://forum.effectivealtruism.org/topics/marketing, so it's probably not worth adding a new tag just for Digital marketing.
Nuclear risk research ideas: Summary & introduction

I'd be interested in hearing whether people think it'd be worth posting each individual research doc - the ones linked to from the table - as its own top-level post, vs just relying on this one post linking to them and leaving the docs as docs. 

(I guess you could upvote this comment to express that you're in favour of each research idea doc being made into a top-level post, or you could DM me.)

Propose and vote on potential EA Wiki articles / tags [2022]

Tabletop exercises or wargaming or maybe some other related term (scenario planning? I think that's too distant a concept, but I guess maybe it independently deserves an entry?)

I think the former name is better because it seems best to not have highly militaristic/adversarial framings in some contexts, including with respect to some/many existential risks.

Some posts this could apply to:

... (read more)
Tips for asking people for things

Thanks for this post! I follow similar principles myself and think they're helpful, and when people ask me for things, both they and I would often benefit from them following these principles too.

Some readers might also be interested in my rough collection of Readings and notes on how to get useful input from busy people. (I've also now added a link to this post from there.)

Propose and vote on potential EA Wiki entries

Retreat or Retreats

I think there are a fair few EA Forum posts about why and how to run retreats (e.g., for community building, for remote orgs, or for increasing coordination among various orgs working in a given area). And I think there are a fair few people who'd find it useful to have these posts collected in one place.

Pablo (8 karma · 2mo)
Makes sense; I'll create it. By the way, we should probably start a new thread for new Wiki entries. This one has so many comments that it takes a long time to load.
Modelling the odds of recovery from civilizational collapse

No, I didn't - I ended up getting hired by Rethink Priorities and doing work on nuclear risk instead, among other things.

Nuclear Risk Overview: CERI Summer Research Fellowship

Thanks for this post and for helping run this project! As we've discussed, I think this is a valuable effort.

I wanted to mention a few things:

  • I agree that nuclear risk work can have useful benefits for testing fit and building career capital for work in other areas, including AI governance. I also agree that there will be some people for whom nuclear risk related projects/jobs are the best next step even if their primary goal is to ultimately work on AI governance. But I also think there are many paths into AI governance that are more direct, and some are
... (read more)
Will Aldred (3 karma · 2mo)
Many thanks for this comment, especially the part below, which I embarrassingly overlooked (I did know about this database and the nuclear view - I literally showed it to someone the other day #facepalm) and which I've now incorporated into the main text of my post.
8 possible high-level goals for work on nuclear risk

If you found this post interesting, there's a good chance you should do one or more of the following things:

  1. Apply to the Cambridge Existential Risks Initiative (CERI) summer research fellowship nuclear risk cause area stream. You can apply here (should take ~2 hours) and can read more here.
  2. Apply to Longview's Nuclear Security Programme Co-Lead position. "Deadline to apply: Interested candidates should apply immediately. We will review and process applications as they come in and will respond to your application within 10 working days of receiving the fully
... (read more)
8 possible high-level goals for work on nuclear risk

Some additional additional rough notes:

  • I think actually this list of 8 goals in 3 categories could be adapted into something like a template/framework applicable to a wide range of areas longtermism-inclined people might want to work on, especially areas other than AI and biorisk (where it seems likely that the key goal will usually simply be 1a, maybe along with 1b).
    • E.g., nanotechnology, cybersecurity, space governance.
    • Then one could think about how much sense each of these goals make for that specific area.
  • I personally tentatively feel like something alo
... (read more)
8 possible high-level goals for work on nuclear risk

Some additional rough notes that didn’t make it into the post

  • Maybe another goal in the category of "gaining indirect benefits for other EA/longtermist goals" could be having good feedback loops (e.g. on our methods for influencing policy and how effective we are at that) that let us learn things relevant to other areas too?
    • Similar to what Open Phil have said about some of their non-longtermist work
  • One reviewer said “Maybe place more emphasis on 1b? For example, after even a limited nuclear exchange between say China and the US, getting cooperation on AI de
... (read more)