I agree with your second and third arguments and your two rules of thumb. (And I thought about those second and third arguments when posting this and felt tempted to note them, but ultimately decided not to, in order to keep this more concise and keep chugging along with my other work. So I'm glad you raised them in your comment.)
I partially disagree with your first argument, for three main reasons:
Agreed that this topic warrants a wiki entry, so I proposed that yesterday just after making this post, and Pablo - our fast-moving wiki maestro - has already made such an entry!
I almost like inside beliefs and outside beliefs, but:
Yeah, that title/framing seems fine to me
Independent impressions or something like that
We already have Discussion norms and Epistemic deference, so I think there's probably no real need for this as a tag. But I think a wiki entry outlining the concept could be good. The content could be closely based on my post of the same name and/or the things linked to at the bottom of that post.
Thanks for the suggestion - I've now gone ahead and made that top-level post :)
Haven't checked out your spreadsheet, but I do think these sorts of collections are good things to create! And on that note, I'll mention my Collection of AI governance reading lists, syllabi, etc. (so that's for AI governance, not technical AI safety stuff). I suggest people who want to read it read the doc version, but I'll also copy the full contents into this comment for convenience.
AI governance is a large, complex, important area that intersects with a vast array of other fields. Unfortunately, it’s only fairly...
Late to the thread, but one further thing I'd note is that it's entirely possible for multiple different global catastrophe scenarios to occur by 2100. E.g., a global catastrophe in 2030 due to nuclear conflict and another in 2060 due to bioengineering. From a skim, I think the relevant Metaculus questions are about "by 2100" rather than "the first global catastrophe by 2100", so they're not mutually exclusive.
So if it were the case that the individual questions summed to 14% and the total question also came to 14% (which Christian's answer suggests it isn...
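To illustrate the overlap point with made-up numbers (illustrative only, not actual Metaculus forecasts): because the scenarios aren't mutually exclusive, the chance of at least one catastrophe is less than the sum of the individual probabilities. A minimal sketch, assuming the scenarios are independent:

```python
# Hypothetical per-scenario probabilities of a global catastrophe by 2100
# (made-up numbers, e.g. nuclear, bio, and one other scenario).
probs = [0.05, 0.05, 0.04]

# Naively summing the individual questions double-counts worlds in
# which more than one catastrophe occurs.
naive_sum = sum(probs)

# Assuming independence: P(at least one) = 1 - P(none of them occur).
p_none = 1.0
for p in probs:
    p_none *= 1 - p
p_at_least_one = 1 - p_none

print(f"sum of individual probabilities: {naive_sum:.4f}")
print(f"P(at least one catastrophe):     {p_at_least_one:.4f}")
```

With these numbers the sum is 0.14 while P(at least one) comes out lower, so matching totals between the individual questions and the "any catastrophe" question would indeed suggest some inconsistency. (Positive correlation between scenarios would shrink the gap; the independence assumption here is just for illustration.)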
Thanks for the heads up - I've now added a link to your doc and changed the date for the CNAS agenda :)
This suggests that binary questions attract forecasts more easily, which matched my prior intuition, and it seems relevant to future question-writing efforts: if questions can be turned into binary ones without too much loss of value, this might be preferable for getting more attention from forecasters.
It is definitely easier - the answer is more one-dimensional, and for continuous questions there's a lot more going back and forth between the cumulative distribution function and the probability density function, and thinking about corner cases.
E.g., for "When will the next Supreme Court vacancy arise?" vs. "Will there be a vacancy by [year]?", in the former case you have to think about how a decision to retire might be timed, while in the latter you just need to think about whether the judge will do it.
Other mechanisms - it's possible the average binary question...
(See the linked doc for the most up-to-date version of this.)
The scope of this doc is fairly broad and nebulous. This is not The Definitive Collection of collections of resources on these topics - it’s just the relevant things that I (Michael Aird) happen to have made or know of.
This is a doc I made, and I suggest reading the doc rather than shortform version (assuming you want to read this at all). But here it is copied out anyway:
AI governance is a large, complex, important area that intersects with a vast array of other fields. Unfortunately, it’s only fairly recently that this area started receiving substantial attention, especially from specialists with a focus on existential risks and/or the long-term future. And as far as I...
Thanks! I've now added the first of those two :)
I think this is probably worth citing here, but I've only read the abstract myself: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2807377
Nice, thanks for that info! I'll check out that post soon, and might reach out to you with questions at some point.
I think it'd be good for someone to read/skim the relevant 80k article and write some entry text based on that.
I think it'd also be good to list/discuss EA, EA-adjacent, or especially-EA-relevant think tanks, such as Rethink Priorities, CSET, and NTI.
I have in mind that this entry (or this pair of entries) would cover roughly people management and mentoring, rather than also covering specifically project management, operations management, or other management-y things. Maybe that should be clarified in the entry text. Or maybe it'd be best to just accept a broader sense of "management" as the scope here.
I included WANBAM in the Related entries, but using org tags like that is somewhat uncommon, so it'd be fair enough if someone wanted to remove it.
It'd be good to discuss what this survey says about management: https://forum.effectivealtruism.org/posts/TpoeJ9A2G5Sipxfit/ea-leaders-forum-survey-on-ea-priorities-data-and-analysis
Ok, I've now made this, for now going with just one entry called Management & mentoring, but flagging on the Discussion page that that could be changed later.
My original proposal:
Management/mentoring, or just one of those terms, or People management, or something like that
This tag could be applied to many posts currently tagged Org strategy, Scalably using labour, Operations, research training programs, Constraints in effective altruism, WANBAM, and effective altruism hiring. But this topic seems sufficiently distinct from those topics and sufficiently important to warrant its own entry.
Sounds good. I haven't reviewed the relevant posts, so I don't have a clear sense of whether "management" or
Speaking from the perspective of a forecaster, I personally wouldn't have trusted the forecasts produced as an input into important decisions.
Fwiw, I expect to very often see forecasts as an input into important decisions, but I also usually see them as a somewhat/very crappy input. I just also think that, for many questions that are key to my decisions or to the decisions of stakeholders I seek to influence, most or all of the available inputs are (by themselves) somewhat/very crappy, and so often the best I can do is:
Thanks for this post, I found it interesting.
One thing I want to push back on is the way you framed applied vs fundamental research and how to decide how much to prioritise fundamental research. I think the following claims from the post seem somewhat correct but also somewhat misleading:
The second framing is more focused on the intrinsic value of knowledge:
“Better science is science that more effectively improves our understanding of the universe.”
This might appear closer to the approach of fundamental research, where the practical usefulness of eventual
Fwiw, I was envisioning something more like the former - i.e., this tag could be used for any post that has substantial discussion of China or China's relevance to some cause area, and the entry could cover the intersection of China and various different cause areas. I see this as making sense because knowledge, connections, etc. relevant to China could be somewhat transferable across different cause areas. E.g., someone who develops some degree of expertise on Chinese policymaking, history, or culture for the purposes of thinking about animal advocacy efforts there could also be a useful person for those interested in technology or great power risks to talk to.
Thanks for this post! Though I disagree with some key claims in it (as noted in my other comments), I also thought it was a handy, concise summary of some important events and possible implications. And your suggested possible actions sound to me like they'd probably be useful. (Though I'm more agnostic about how high-priority they'd be, relative to other ways of reducing nuclear risk.)
Also, more generally, it seems to me that reducing the odds of increases in the number of warheads in countries that already have some is a relatively neglected possible goal for n...
Btw, here's a relevant section of a post I'm drafting on "10 mistakes to avoid when thinking about nuclear risk", which gives an overview of what I see as some key points on nuclear winter etc. (I could probably share the draft with you if you want.)
When thinking about nuclear risk, people often focus on the immediate harms (e.g., from the blast) and the harms from radioactive fallout. And those harms could...
To add to what Larks said, I would also say that:
I don't think this is true for the UK's nuclear deterrence strategy. The UK's nuclear warheads are launched only from four Vanguard-class submarines. Each one carries 8 (but can carry up to 16) Trident nuclear missiles, and at least one is on active service at any one time. This last part is crucial - the deterrence strategy relies on the location of the active submarine and its warheads being very hard to detect, and I would argue the number of warheads beyond a certain point is irrelevant to deterrence.
If that's roughly the case (...
United Kingdom policy & politics (or something like that)
This would be akin to the entry/tag on United States politics. An example of a post it'd cover is https://forum.effectivealtruism.org/posts/yKoYqxYxo8ZnaFcwh/risks-from-the-uk-s-planned-increase-in-nuclear-warheads
But I wrote on the United States politics entry's discussion page a few months ago:
I suggest changing the name and scope to "United States government and politics". E.g., I think there should be a place to put posts about what actions the US government plans to take or can take, h
I think this sounds like it could be a useful resource :)
I previously made a collection of Notes on EA-related research, writing, testing fit, learning, and the Forum, which might be helpful for this project or for some of this project's intended beneficiaries.
(I know this isn't exactly what you're after, and I also shared it with you earlier, but someone suggested I share it in a comment on this post.)
I've also now listened to Victor's Understanding the US Government (2020) due to my interest in AI governance, and made some quick notes here.
Why I read this
I've recently collected readings and notes on the following topics:
Just sharing here in case people would find them useful. Further info on purposes, epistemic status, etc. can be found at those links.
I'm also going to listen to Tegmark's Life 3.0, but haven't done so yet.
In case anyone was wondering, Army of None seems to be available on US Audible and on Audiobooks.co.uk.
(This is related to the general topic of differential progress.)
(Someone asked that question in a Slack workspace I'm part of, and I spent 10 mins writing a response. I've copied and pasted that below with slight modifications. This is only scratching the surface and probably makes silly errors, but maybe this'll be a little useful to some people.)
Medium-sized donors can often find opportunities that aren’t practical for the largest donors to exploit – the ecosystem needs a mixture of ‘angel’ donors to complement the ‘VCs’ like Open Philanthropy. Open Philanthropy isn’t covering many of the problem areas listed here and often can’t pursue small individual grants.
This reminded me of the following post, which may be of interest to some readers: Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation
The Metaculus community also estimates there’s a 50% chance of another Good Ventures-scale donor within five years.
I think that that question would count Sam Bankman-Fried starting to give at the scale Good Ventures is giving as a positive resolution, and that some forecasters have that as a key consideration for their forecast (e.g., Peter Wildeford's comment suggests that). Whereas I think you're using this as evidence that there'll be another donor at that scale, in addition to both Good Ventures and the FTX team people? So this might be double-counting...
Thanks for this really interesting post!
Overall I think all the core claims and implications sound right to me, but I'll raise a few nit-picks in comments.
We could break down some of the key leadership positions needed to deploy these funds as follows:
- Researchers able to come up with ideas for big projects, new cause areas, or other new ways to spend funds on a big scale
- EA entrepreneurs/managers/research leads able to run these projects and hire lots of people
- Grantmakers able to evaluate these projects
I agree with all that, but think that that's a so...
(Just want to say that I did find it a bit odd that Ben's post didn't mention timelines to transformative AI - or other sources of "hingeyness" - as a consideration, and I appreciate you raising it here. Overall, my timelines are longer than yours, and I'd guess we should be spending less than 10% per year, but it does seem a crucial consideration for many points discussed in the post.)
Yeah, I think that that'd work for this. Or maybe to avoid proliferation of tags, we should have forecasting and forecasts, and then just long-range forecasting, and if people want to say something contains long-range forecasts they can use long-range forecasting along with forecasts.
I do see this concept as relevant to various EA issues for the reasons you've described, and I think high-quality content covering "the value of open societies, the meaning of openness, and how to protect and expand open societies" would be valuable. But I can't immediately recall any Forum posts that do cover those topics explicitly. Do you know of posts that would warrant this tag?
If there aren't yet posts that'd warrant this tag, then we have at least the following (not mutually exclusive) options:
Should this tag be applied to posts that contain (links to) multiple thoughtful long-range forecasts but don't explicitly discuss long-range forecasting as distinct from forecasting in general? E.g., did it make sense for me to apply it to this post?
(I say "thoughtful" as a rough way of ruling out cases in which someone just includes a few quick numbers merely to try to give a clearer sense of their views, or something.)
I think LessWrong have separate tags for posts about forecasting and posts that contain forecasts. Perhaps we should do the same?
My personal, quick reaction is that that's a decently separate thing, that could have a separate tag if we feel that that's worthwhile. Some posts might get both tags, and some posts might get just one.
But I haven't thought carefully about this.
I also think I'd lean against having an entry for that purpose. It seems insufficiently distinct from the existing tags for career choice or community experiences, or from the intersection of the two.
Actually, having read your post, I now think it does sound more about jobs (or really "roles", but that sounds less clear) than about careers. So I now might suggest using the term job profiles.
I think the MVP version you describe sounds good. I'd add that it seems like it'd sometimes/often be useful for people to also write some thoughts on whether and why they'd recommend people pursue such jobs? I think these posts would often be useful even without that, but that could sometimes/often make them more useful.
Yeah, I definitely expect it'd be worth many people doing this!
I also tentatively suggested something somewhat similar recently in a shortform. I'll quote that in full:
Are there "a day in the life" / "typical workday" writeups regarding working at EA orgs? Should someone make some (or make more)?
I've had multiple calls with people who are interested in working at EA orgs, but who feel very unsure what that actually involves day to day, and so wanted to know what a typical workday is like for me. This does seem like useful info for people choosing how
Yeah, this seems worth having! And I appreciate you advocating for people to write these and for us to have a way to collect them, for similar reasons to those given in this earlier shortform of mine.
I think career profiles is a better term for this than job posts, partly because:
OTOH, career profiles also...
(Btw, if anyone else is interested in "These histories of institutional disasters and near-disasters", you can find them in footnote 1 of the linked post.)
Here are some relevant books from my ranked list of all EA-relevant (audio)books I've read, along with a little bit of commentary on them.