All of MichaelA's Comments + Replies

Independent impressions

I agree with your second and third arguments and your two rules of thumb. (And I thought about those second and third arguments when posting this and felt tempted to note them, but ultimately decided not to, in order to keep this more concise and keep chugging along with my other work. So I'm glad you raised them in your comment.)

I partially disagree with your first argument, for three main reasons:

  • People have very different comparative advantages (in other words, people's labour is way less fungible than their donations).
    • Imagine Alice's independent impression is
... (read more)
Independent impressions

Agreed that this topic warrants a wiki entry, so I proposed that yesterday just after making this post, and Pablo - our fast-moving wiki maestro - has already made such an entry!

I almost like "inside beliefs" and "outside beliefs", but:

  • I feel like "outside beliefs" implies that it's only using info about other people's beliefs, or is in any case setting aside one's independent impression.
    • Whereas I see independent impressions as a subset of what forms our all-things-considered beliefs.
  • I'd also worry that inside and outside beliefs sounds too close to inside and
... (read more)
Emrik (1d): On the social-epistemological point: Yes, it varies by context. One thing I'd add is that I think it's hard to keep inside/outside (or independent and all-things-considered) beliefs separate for a long time. And your independent beliefs are almost certainly going to be influenced by peer evidence, and vice versa. I think this means that if you are the kind of person whose main value to the community is sharing your opinions (rather than, say, being a fund manager), you should try to cultivate a habit of mostly attending to gears-level evidence and to some extent ignore testimonial evidence. This will make your own beliefs less personally useful for making decisions, but will make the opinions you share more valuable to the community.
Emrik (1d): Agree on all points, but inside/outside is catchier! Might ride the inside-jargon group-belonging-signalling train into norm fixation.
Propose and vote on potential EA Wiki entries

Yeah, that title/framing seems fine to me.

Pablo (1d): After reviewing the literature, I came to the view that "Independent impressions", which you proposed, is probably a more appropriate name, so that's what I ended up using.
Propose and vote on potential EA Wiki entries

Independent impressions or something like that

We already have Discussion norms and Epistemic deference, so I think there's probably no real need for this as a tag. But I think a wiki entry outlining the concept could be good. The content could be closely based on my post of the same name and/or the things linked to at the bottom of that post.

Stefan_Schubert (1d): I agree that it would be good to describe this distinction in the Wiki. Possibly it could be part of the Epistemic deference entry, though I don't have a strong view on that.
Pablo (2d): How about something like "beliefs vs. impressions"?
MichaelA's Shortform

Thanks for the suggestion - I've now gone ahead and made that top-level post :) 

List of AI safety courses and resources

Haven't checked out your spreadsheet, but I do think these sorts of collections are good things to create! And on that note, I'll mention my Collection of AI governance reading lists, syllabi, etc. (so that's for AI governance, not technical AI safety stuff). I suggest people who want to read it read the doc version, but I'll also copy the full contents into this comment for convenience.


What is this doc, and why did I make it?

AI governance is a large, complex, important area that intersects with a vast array of other fields. Unfortunately, it’s only fairly... (read more)

Some global catastrophic risk estimates

Late to the thread, but one further thing I'd note is that it's entirely possible for multiple different global catastrophe scenarios to occur by 2100. E.g., a global catastrophe in 2030 due to nuclear conflict and another in 2060 due to bioengineering. From a skim, I think the relevant Metaculus questions are about "by 2100" rather than "the first global catastrophe by 2100", so they're not mutually exclusive. 

So if it was the case that the individual questions added to 14% and the total question added to 14% (which Christian's answer suggests it isn... (read more)
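To illustrate the non-exclusivity point with made-up numbers (nothing below comes from the actual Metaculus questions): if two "global catastrophe by 2100" scenarios were independent, the probability that at least one occurs is lower than the naive sum of their individual probabilities, because the sum double-counts worlds where both happen.

```python
# Illustrative only: hypothetical probabilities for two independent
# "global catastrophe by 2100" scenarios.
p_nuclear = 0.07
p_bio = 0.07

# Naive sum double-counts worlds where both catastrophes occur.
naive_sum = p_nuclear + p_bio  # 0.14

# Under independence, P(at least one) = 1 - P(neither), which is smaller:
p_at_least_one = 1 - (1 - p_nuclear) * (1 - p_bio)  # 0.1351

print(naive_sum, p_at_least_one)
```

So if the individual "by 2100" questions summed to the same number as the total question, that would suggest the forecasts treat the scenarios as mutually exclusive, which they aren't.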

A central directory for open research questions

Thanks for the heads up - I've now added a link to your doc and changed the date for the CNAS agenda :)

An analysis of Metaculus predictions of future EA resources, 2025 and 2030

This suggests that binary questions more easily attract forecasts, which was my intuition already, and seems relevant to future efforts to write questions - if they can be turned into binary questions without too much loss of value, this might be preferable for getting more attention from forecasters. 

  1. Do you have a sense of why this is the case? Is it typically easier/faster to make binary than continuous forecasts? Are there any other mechanisms?
  2. Do you have a sense of how strong that effect might tend to be? Like whether it can typically be expected
... (read more)

It is definitely easier - the answer is one-dimensional, and for continuous questions there's a lot more going back and forth between the cumulative distribution function and the probability density function, and more thinking about corner cases.

E.g., for "When will the next Supreme Court vacancy arise?" vs "Will there be a vacancy by [year]?", in the former case you have to think about how a decision to retire might be timed, while in the latter you just need to think about whether the judge will do it.
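A rough sketch of that extra dimensionality (illustrative only; the normal distribution and its parameters are my own assumption, not anything from Metaculus): a binary forecast is a single number, while a continuous forecast is a whole distribution, and the binary answer falls out of it as one evaluation of the CDF.

```python
import math

# Binary question ("Will there be a vacancy by 2025?"): one probability suffices.
p_vacancy_by_2025 = 0.6

# Continuous question ("When will the vacancy arise?"): the forecaster must
# specify a full distribution. Here, a normal distribution over calendar
# years with hypothetical parameters.
mean_year, sd_years = 2025.0, 2.0

def cdf(year: float) -> float:
    """P(vacancy occurs by `year`) implied by the continuous forecast."""
    return 0.5 * (1 + math.erf((year - mean_year) / (sd_years * math.sqrt(2))))

# The binary forecast is just the CDF evaluated at one point:
print(round(cdf(2025.0), 2))  # 0.5, since 2025 is the distribution's mean
```

This is one way to see why continuous questions demand more work: every point of the CDF is, in effect, its own binary forecast.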

Other mechanisms - it's possible the average binary question... (read more)

MichaelA's Shortform

Collection of collections of resources relevant to (research) management, mentorship, training, etc.

(See the linked doc for the most up-to-date version of this.)

The scope of this doc is fairly broad and nebulous. This is not The Definitive Collection of collections of resources on these topics - it’s just the relevant things that I (Michael Aird) happen to have made or know of.

... (read more)
MichaelA's Shortform

Collection of AI governance reading lists, syllabi, etc. 

This is a doc I made, and I suggest reading the doc rather than the shortform version (assuming you want to read this at all). But here it is, copied out anyway:


What is this doc, and why did I make it?

AI governance is a large, complex, important area that intersects with a vast array of other fields. Unfortunately, it’s only fairly recently that this area started receiving substantial attention, especially from specialists with a focus on existential risks and/or the long-term future. And as far as I... (read more)

A central directory for open research questions

Thanks! I've now added the first of those two :)

Vael Gates (4d): Got my post up :) https://forum.effectivealtruism.org/posts/dKgWZ8GMNkXfRwjqH/seeking-social-science-students-collaborators-interested-in Also, "Artificial Intelligence and Global Security Initiative Research Agenda [https://www.cnas.org/artificial-intelligence-and-global-security-initiative-research-agenda] - Centre for a New American Security, no date" was published in July 2017, according to the embedded pdf in that link!
Ethics of existential risk

I think this is probably worth citing here, but I've only read the abstract myself: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2807377

matthew.vandermerwe (13d): Weak disagree. FWIW there are lots of good cites in the endnotes to chapter 2 of The Precipice, pp. 305–12, and in Moynihan's X-Risk [https://mitpress.mit.edu/books/x-risk].
Improving EAs’ use of non-EA options for research training, credentials, testing fit, etc.

Nice, thanks for that info! I'll check out that post soon, and might reach out to you with questions at some point.

Think tanks

I think it'd be good for someone to read/skim the relevant 80k article and write some entry text based on that.

I think it'd also be good to list/discuss EA, EA-adjacent, or especially-EA-relevant think tanks, such as Rethink Priorities, CSET, and NTI.

Management & mentoring

I have in mind that this entry (or this pair of entries) would cover roughly people management and mentoring, rather than also covering specifically project management, operations management, or other management-y things. Maybe that should be clarified in the entry text. Or maybe it'd be best to just accept a broader sense of "management" as the scope here. 

Management & mentoring

I included WANBAM in the Related entries, but I think using org tags like that is somewhat uncommon and I think it'd be fair enough if someone wanted to remove that. 

Propose and vote on potential EA Wiki entries

Ok, I've now made this, for now going with just one entry called Management & mentoring, but flagging on the Discussion page that that could be changed later. 

Management & mentoring

My original proposal:

Management/mentoring, or just one of those terms, or People management, or something like that

This tag could be applied to many posts currently tagged Org strategy, Scalably using labour, Operations, research training programs, Constraints in effective altruism, WANBAM, and effective altruism hiring. But this topic seems sufficiently distinct from those topics and sufficiently important to warrant its own entry.

Pablo's response:

Sounds good. I haven't reviewed the relevant posts, so I don't have a clear sense of whether "management" or

... (read more)
MichaelA (17d): I have in mind that this entry (or this pair of entries) would cover roughly people management and mentoring, rather than also covering specifically project management, operations management, or other management-y things. Maybe that should be clarified in the entry text. Or maybe it'd be best to just accept a broader sense of "management" as the scope here.
Propose and vote on potential EA Wiki entries

Management/mentoring, or just one of those terms, or People management, or something like that

This tag could be applied to many posts currently tagged Org strategy, Scalably using labour, Operations, research training programs, Constraints in effective altruism, WANBAM, and effective altruism hiring. But this topic seems sufficiently distinct from those topics and sufficiently important to warrant its own entry.

Pablo (18d): Sounds good. I haven't reviewed the relevant posts, so I don't have a clear sense of whether "management" or "mentoring" is a better choice; the latter seems preferable other things equal, since "management" is quite a vague term, but this is only one consideration. In principle, I could see a case for having two separate entries, depending on how many relevant posts there are and how much they differ. I would suggest that you go ahead and do what makes most sense to you, since you seem to have already looked at this material and probably have better intuitions. Otherwise I can take a closer look myself in the coming days.
elifland's Shortform

Speaking from the perspective of a forecaster, I personally wouldn't have trusted the forecasts produced as an input into important decisions. 

Fwiw, I expect to very often see forecasts as an input into important decisions, but also usually see them as a somewhat/very crappy input. I just also think that, for many questions that are key to my decisions or to the decisions of stakeholders I seek to influence, most or all of the available inputs are (by themselves) somewhat/very crappy, and so often the best I can do is:

  1. try to gather up a bunch o
... (read more)
What would better science look like?

Thanks for this post, I found it interesting.

One thing I want to push back on is the way you framed applied vs fundamental research and how to decide how much to prioritise fundamental research. I think the following claims from the post seem somewhat correct but also somewhat misleading:

The second framing is more focused on the intrinsic value of knowledge:

“Better science is science that more effectively improves our understanding of the universe.”

This might appear closer to the approach of fundamental research, where the practical usefulness of eventual

... (read more)
C Tilli (1mo): My thought is that the exploration vs exploitation issue remains, even if we also attempt to favour the areas where progress would be most beneficial. I am not really convinced that it's possible to make very good predictions about the consequences of new discoveries in fundamental research. I don't have a strong position/belief regarding this but I'm somewhat skeptical that it's possible.

Thanks for the reading suggestions, I will be sure to check them out – if you think of any other reading recommendations supporting the feasibility of forecasting consequences of research, I would be very grateful!

This is more or less my conclusion in the post, even if I don't use the same wording. The reason why I think it's worth mentioning potential issues with a (naïve) welfarist focus is that if I'd work with science reform and only mention the utilitarian/welfarist framing, I think this could come across as naïve or perhaps as opposed to fundamental research, and that would make discussions unnecessarily difficult. I think this is less of a problem on the EA Forum than elsewhere 😊
China

Fwiw, I was envisioning something more like the former - i.e., this tag could be used for any post that has substantial discussion of China or China's relevance to some cause area, and the entry could cover the intersection of China and various different cause areas. I see this as making sense because I think there's some extent to which knowledge, connections, etc. relevant to China could be transferable across different cause areas. E.g., someone who develops some degree of expertise on Chinese policymaking, history, or culture for the purposes of thinking about animal advocacy efforts there could also be a useful person for those interested in technology or great power risks to talk to.

Risks from the UK's planned increase in nuclear warheads

Thanks for this post! Though I disagree with some key claims in it (as noted in my other comments), I also thought it was a handy, concise summary of some important events and possible implications. And your suggested possible actions sound to me like they'd probably be useful. (Though I'm more agnostic about how high-priority they'd be, relative to other ways of reducing nuclear risk.)

Also, more generally, it seems to me that reducing odds of increases of numbers of warheads in countries that already have some is a relatively neglected possible goal for n... (read more)

Risks from the UK's planned increase in nuclear warheads

Btw, here's a relevant section of a post I'm drafting on "10 mistakes to avoid when thinking about nuclear risk", which overviews what I see as some key points on nuclear winter etc. (I could probably share the draft with you if you want.)

Mistakes 5 & 6: Ignoring the possibility of major climate and famine effects following nuclear conflict—or overstating the likelihood/severity of those effects

When thinking about nuclear risk, people often focus on the immediate harms (e.g., from the blast) and the harms from radioactive fallout. And those harms could... (read more)

Risks from the UK's planned increase in nuclear warheads

To add to what Larks said, I would also say that:

  • it's not the case that "Some research on nuclear winter suggests that 100 Hiroshima-sized nuclear detonations would be enough to destroy the majority of human life on earth"
  • even the smaller claim that that research does make is contested, and some parts of it are based on pretty shoddy methods (especially the reasoning to go from reduced crop yields to famine deaths)
  • "the majority of human life on earth" would in any case be less than "almost all human life on earth", and the distinction might matter a lot fro
... (read more)
MichaelA (1mo): Btw, here's a relevant section of a post I'm drafting on "10 mistakes to avoid when thinking about nuclear risk", which overviews what I see as some key points on nuclear winter etc. (I could probably share the draft with you if you want.)

Mistakes 5 & 6: Ignoring the possibility of major climate and famine effects following nuclear conflict—or overstating the likelihood/severity of those effects

When thinking about nuclear risk, people often focus on the immediate harms (e.g., from the blast) and the harms from radioactive fallout. And those harms could indeed be huge! But those harms could be dwarfed by the harms from major cooling of the climate - perhaps a nuclear winter [https://en.wikipedia.org/wiki/Nuclear_winter], or perhaps a smaller version of the same effects. That cooling could perhaps cause huge numbers of famine deaths (plausibly in the billions, for some nuclear conflicts). And this seems the most likely way for nuclear war to cause an existential catastrophe [https://theprecipice.com/faq#existential-risk].[1]

...or maybe not! The effects depend on factors such as:

  • how many detonations occur
  • how much flammable material is in the targeted areas
  • how much black carbon fires in these areas would produce, and how much would reach high enough in the atmosphere to persist there for years
  • how severely agricultural production would be reduced by various potential climate effects
  • how people would respond to expected or occurring agricultural production issues (e.g., how well could they adjust what crops they grow, where, and how; how much would food usage patterns change; would international trade continue)
  • how likely civilization is to recover from a collapse

And, unfortunately, each of those questions is contested, complex, and under-researched. Ultimately, I suggest:

  1. Recognising that major climate and famine effects are plausible, but that whether they'll happen and how bad they'd be is quite uncertain.
  2. Seeing that
Risks from the UK's planned increase in nuclear warheads

I don't think this is true for the UK's nuclear deterrence strategy. The UK's nuclear warheads are launched only from four Vanguard-class submarines. Each one carries 8 Trident nuclear missiles (but can carry up to 16), and at least one submarine is on active service at any one time. This last part is crucial: the deterrence strategy relies on the location of the active submarine and its warheads being very hard to detect, and I would argue that the number of warheads beyond a certain point is irrelevant to deterrence.


If that's roughly the case (... (read more)

Propose and vote on potential EA Wiki entries

United Kingdom policy & politics (or something like that)

This would be akin to the entry/tag on United States politics. An example of a post it'd cover is https://forum.effectivealtruism.org/posts/yKoYqxYxo8ZnaFcwh/risks-from-the-uk-s-planned-increase-in-nuclear-warheads 

But I wrote on the United States politics entry's discussion page a few months ago:

I suggest changing the name and scope to "United States government and politics". E.g., I think there should be a place to put posts about what actions the US government plans to take or can take, h

... (read more)
Pablo (1mo): Yeah, makes sense. I just created the new article [https://forum.effectivealtruism.org/tag/united-kingdom-policy-and-politics] and renamed the existing one. There is no content for now, but I'll try to add something later.
Needed: Input on testing fit for your career

I think this sounds like it could be a useful resource :)

I previously made a collection of Notes on EA-related research, writing, testing fit, learning, and the Forum, which might be helpful for this project or for some of this project's intended beneficiaries. 

(I know this isn't exactly what you're after, and I also shared it with you earlier, but someone suggested I share it in a comment on this post.)

Books and lecture series relevant to AI governance?

I've also now listened to Victor's Understanding the US Government (2020) due to my interest in AI governance, and made some quick notes here.

MichaelA's Shortform

Notes on Victor's Understanding the US Government (2020)

Why I read this

... (read more)
MichaelA's Shortform

I've recently collected readings and notes on the following topics:

Just sharing here in case people would find them useful. Further info on purposes, epistemic status, etc. can be found at those links.

Books and lecture series relevant to AI governance?

I'm also going to listen to Tegmark's Life 3.0, but haven't done so yet.

Books and lecture series relevant to AI governance?

In case anyone was wondering, Army of None seems to be available on US Audible and on Audiobooks.co.uk.

MichaelA's Shortform

Quick thoughts on the question: "Is it better to try to stop the development of a technology, or to try to get there first and shape how it is used?"

(This is related to the general topic of differential progress.) 

(Someone asked that question in a Slack workspace I'm part of, and I spent 10 mins writing a response. I've copied and pasted that below with slight modifications. This is only scratching the surface and probably makes silly errors, but maybe this'll be a little useful to some people.)

  • I think the ultimate answer to that question is really so
... (read more)
Is effective altruism growing? An update on the stock of funding vs. people

Medium-sized donors can often find opportunities that aren’t practical for the largest donors to exploit – the ecosystem needs a mixture of ‘angel’ donors to complement the ‘VCs’ like Open Philanthropy. Open Philanthropy isn’t covering many of the problem areas listed here and often can’t pursue small individual grants.

This reminded me of the following post, which may be of interest to some readers: Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation

Is effective altruism growing? An update on the stock of funding vs. people

The Metaculus community also estimates there’s a 50% chance of another Good Ventures-scale donor within five years.

I think that that question would count Sam Bankman-Fried starting to give at the scale Good Ventures is giving as a positive resolution, and that some forecasters have that as a key consideration for their forecast (e.g., Peter Wildeford's comment suggests that). Whereas I think you're using this as evidence that there'll be another donor at that scale, in addition to both Good Ventures and the FTX team? So this might be double-counting... (read more)

Benjamin_Todd (2mo): Ah, good point. I only found the Metaculus questions recently and haven't thought about them as much.
Is effective altruism growing? An update on the stock of funding vs. people

Thanks for this really interesting post! 

Overall I think all the core claims and implications sound right to me, but I'll raise a few nit-picks in comments.

We could break down some of the key leadership positions needed to deploy these funds as follows:

  1. Researchers able to come up with ideas for big projects, new cause areas, or other new ways to spend funds on a big scale
  2. EA entrepreneurs/managers/research leads able to run these projects and hire lots of people
  3. Grantmakers able to evaluate these projects

I agree with all that, but think that that's a so... (read more)

Benjamin_Todd (2mo): I agree there are lots of forms of useful research that could feed into this, and in general better ideas feel like a key bottleneck for EA. I'm excited to see more 'foundational' work and disentanglement as well. Though I do feel like, at least right now, there's an especially big bottleneck for ideas for specific shovel-ready projects that could absorb a lot of funding.
Is effective altruism growing? An update on the stock of funding vs. people

(Just want to say that I did find it a bit odd that Ben's post didn't mention timelines to transformative AI - or other sources of "hingeyness" - as a consideration, and I appreciate you raising it here. Overall, my timelines are longer than yours, and I'd guess we should be spending less than 10% per year, but it does seem a crucial consideration for many points discussed in the post.)

Long-range forecasting

Yeah, I think that that'd work for this. Or maybe, to avoid proliferation of tags, we should have forecasting and forecasts, and then just long-range forecasting; if people want to say something contains long-range forecasts, they can use long-range forecasting along with forecasts.

Propose and vote on potential EA Wiki entries

I do see this concept as relevant to various EA issues for the reasons you've described, and I think high-quality content covering "the value of open societies, the meaning of openness, and how to protect and expand open societies" would be valuable. But I can't immediately recall any Forum posts that do cover those topics explicitly. Do you know of posts that would warrant this tag?

If there aren't yet posts that'd warrant this tag, then we have at least the following (not mutually exclusive) options:

  1. This tag could be made later, once there are such posts
  2. Y
... (read more)
Long-range forecasting

Should this tag be applied to posts that contain (links to) multiple thoughtful long-range forecasts but don't explicitly discuss long-range forecasting as distinct from forecasting in general? E.g., did it make sense for me to apply it to this post?

(I say "thoughtful" as a rough way of ruling out cases in which someone just includes a few quick numbers merely to try to give a clearer sense of their views, or something.)

I think LessWrong have separate tags for posts about forecasting and posts that contain forecasts. Perhaps we should do the same?

Pablo (2mo): Further to my previous message: what do you think about creating a "long-range forecasts" tag for posts that contain such forecasts, and reserving "long-range forecasting" for posts that discuss the phenomenon? I don't have a clear enough sense of how this problem manifests itself in other articles, so I'm not proposing any general solution for the time being. But this seems like an adequate way to address this particular manifestation.
Pablo (2mo): This is a general problem: for many entries, posts can be potentially relevant by virtue of either discussing the topic of the entry or exemplifying the phenomenon the entry describes. So we probably want to think about possible general ways to deal with this problem rather than solutions for this specific instance. Still, it seems fine to discuss that here. I don't think I have any insights to offer off the top of my head, but will try to think about this a bit more later.
Propose and vote on potential EA Wiki entries

My personal, quick reaction is that that's a decently separate thing, that could have a separate tag if we feel that that's worthwhile. Some posts might get both tags, and some posts might get just one.

But I haven't thought carefully about this.

I also think I'd lean against having an entry for that purpose. It seems insufficiently distinct from the existing tags for career choice or community experiences, or from the intersection of the two.

Propose and vote on potential EA Wiki entries

Actually, having read your post, I now think it does sound more about jobs (or really "roles", but that sounds less clear) than about careers. So I now might suggest using the term "job profiles".

Aaron Gertler (2mo): Thanks, have created this [https://forum.effectivealtruism.org/tag/job-profile]. (The "Donation writeup" tag is singular, so I felt like this one should also be, but LMK if you think it should be plural.)
Pablo (2mo): Either looks good to me. I agree that this is worth having.
You should write about your job

I think the MVP version you describe sounds good. I'd add that it seems like it'd sometimes/often be useful for people to also write some thoughts on whether and why they'd recommend people pursue such jobs? I think these posts would often be useful even without that, but that could sometimes/often make them more useful. 

You should write about your job

Yeah, I definitely expect it'd be worth many people doing this! 

I also tentatively suggested something somewhat similar recently in a shortform. I'll quote that in full:

Are there "a day in the life" / "typical workday" writeups regarding working at EA orgs? Should someone make some (or make more)?

I've had multiple calls with people who are interested in working at EA orgs, but who feel very unsure what that actually involves day to day, and so wanted to know what a typical workday is like for me. This does seem like useful info for people choosing how

... (read more)
Propose and vote on potential EA Wiki entries

Yeah, this seems worth having! And I appreciate you advocating for people to write these and for us to have a way to collect them, for similar reasons to those given in this earlier shortform of mine.

I think career profiles is a better term for this than job posts, partly because:

  • The latter sounds like it might be job ads or job postings
  • Some of these posts might not really be on "jobs" but rather things like being a semi-professional blogger, doing volunteering, having some formalised unpaid advisory role to some institution, etc.

OTOH, career profiles also... (read more)

MichaelA (2mo): Actually, having read your post, I now think it does sound more about jobs (or really "roles", but that sounds less clear) than about careers. So I now might suggest using the term "job profiles".
Books and lecture series relevant to AI governance?

Thanks Mauricio!

(Btw, if anyone else is interested in "These histories of institutional disasters and near-disasters", you can find them in footnote 1 of the linked post.)

Mauricio (2mo): Thanks! Good catch - looks like that didn't save into the URL.
Books and lecture series relevant to AI governance?

Here are some relevant books from my ranked list of all EA-relevant (audio)books I've read, along with a little bit of commentary on them.

  • The Precipice, by Ord, 2020
    • See here for a list of things I've written that summarise, comment on, or take inspiration from parts of The Precipice.
    • I recommend reading the ebook or physical book rather than audiobook, because the footnotes contain a lot of good content and aren't included in the audiobook
    • Superintelligence may have influenced me more, but that’s just due to the fact that I read it very soon after getting in
... (read more)
MichaelA (1mo): I've also now listened to Victor's Understanding the US Government [https://www.audible.co.uk/pd/Understanding-the-US-Government-Audiobook/1629979724] (2020) due to my interest in AI governance, and made some quick notes here [https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=C6y8oxc2Fd9LHiarx].
MichaelA (1mo): I'm also going to listen to Tegmark's Life 3.0, but haven't done so yet.