2022 update: This is now superseded by a new version of the same open thread.

(I have no association with the EA Forum team or CEA, and this idea comes with no official mandate. I'm open to suggestions of totally different ways of doing this.)

Update: Aaron here. This has our official mandate now, and I'm subscribed to the post so that I'll be notified of every comment. Please suggest tags!

2021 update: Michael here again. The EA Forum's tag system is now paired with the EA Wiki, and so proposals on this post are now for "entries", which can mean tags, EA Wiki articles, or (most often) pages that serve both roles.

The EA Forum now has tags, and users can now make tags themselves. I think this is really cool, and I've now made a bunch of tags. 

But I find it hard to decide whether some tag ideas are worth including, vs being too fine-grained or too similar to existing tags. I also feel some hesitation about taking too much unilateral action. I imagine some other forum users might feel the same way about tag ideas they have, some of which might be really good! (See also this thread.)

So I propose that this post become a thread where people can comment with a tag idea they're somewhat unsure about, and then other people can upvote or downvote it based on whether they think it should indeed be its own tag. Details:

  • I am not saying you should always comment here before making a tag. I have neither the power nor the inclination to stop you from just making tags you're fairly confident should exist!
  • I suggest having a low bar for commenting here, such as "this is just a thought that occurred to me" or "5% chance this tag should exist". It's often good to be open to raising all sorts of ideas when brainstorming, and apply most of the screening pressure after the ideas are raised.
    • The tag ideas I've commented about myself are all "just spitballing".
  • Feel free to also propose alternative tag labels, propose a rough tag description, note what other tags are related to this one, note what you see as the arguments for and against that tag, and/or list some posts that would be included in this tag. (But also feel free to simply suggest a tag label.)
  • Feel free to comment on other people's ideas to do any of the above things (propose alternative labels, etc.).
  • Make a separate comment for each tag idea.
  • Probably upvote or downvote just based on the tag idea itself; to address the extra ideas in the comment (e.g., the proposed description), leave a reply.
  • Maybe try not to hold back with the downvotes. People commenting here would do so specifically because they want other people's honest input, and they never claimed their tag idea was definitely good so the downvote isn't really disagreeing with them.

Also feel free to use this as a thread to discuss (and upvote or downvote suggestions regarding) existing tags that might not be worth having, or might be worth renaming or tweaking the scope of, or what-have-you. For example, I created the tag Political Polarisation, but I've also left a comment here about whether it should be changed or removed.


374 comments

Retreat or Retreats

I think there are a fair few EA Forum posts about why and how to run retreats (e.g., for community building, for remote orgs, or for increasing coordination among various orgs working in a given area). And I think there are a fair few people who'd find it useful to have these posts collected in one place.

Makes sense; I'll create it. By the way, we should probably start a new thread for new Wiki entries. This one has so many comments that it takes a long time to load.
Thanks! And good idea - done [https://forum.effectivealtruism.org/posts/cB6LYs8s7afLvpTzm/propose-and-vote-on-potential-ea-wiki-articles-tags-2022]

Quadratic voting or Uncommon voting methods or Approval voting or something like that or multiple of these

E.g., this post could get the first and/or second tag, and posts about CES could get the second and/or third tag

Created [https://forum.effectivealtruism.org/tag/quadratic-voting]. I may try to expand the description to also cover quadratic funding. (Both quadratic voting and quadratic funding are instances of quadratic payments, at least in Buterin's framing [https://vitalik.ca/general/2019/12/07/quadratic.html], so we could use the latter for the name of the entry. I used 'quadratic voting' because this is the name that people usually associate with the general idea.)  
The content of the old EA Concepts page is now part of the cost-effectiveness entry [https://forum.effectivealtruism.org/tag/cost-effectiveness]. However, it may be worth creating a separate entry on distribution of cost-effectiveness and moving that content there. I'll do that tomorrow if no one objects by then.
Sorry, I hadn't seen that. I've now added the "cost-effectiveness" tag to the first of these three articles, since that one even has "cost-effectiveness" in the title. The other two articles are actually about differences in performance between people. Potentially that should have its own tag, but it's also possible that the topic is too small to warrant one. I'd also be happy for an article on distribution of cost-effectiveness.

Thanks. I'll take a look at the articles later today. My sense is that discussion of variation in performance across people is mostly of interest insofar as it bears on the question of distribution of cost-effectiveness, so I'd be tempted to use the distribution of cost-effectiveness tag for those articles, rather than create a dedicated entry.

Alignment tax

Here I'm more interested in the Wiki entry than the tag, though the tag is probably also useful. Basically I primarily want a good go-to link that is solely focused on this and gives a clear definition and maybe some discussion.

This is probably an even better fit for LW or the Alignment Forum, but they don't seem to have it. We could make a version here anyway, and then we could copy it there or someone from those sites could.

Here are some posts that have relevant content, from a very quick search:

...
Here's the entry [https://forum.effectivealtruism.org/tag/alignment-tax/]. I was only able to read the transcript of Paul's talk and Rohin's summary of it, so feel free to add anything you think is missing.
Thanks, Michael. This is a good idea; I will create the entry. (I just noticed you left other comments to which I didn't respond; I'll do so shortly.)

READI Research


My guess is that this org/collective/group doesn't (yet) meet the EA Wiki's implicit notability or number-of-posts-that-would-be-tagged standards, but I'm not confident about that. 

Here are some posts that would be given this tag if the tag was worth making:

...

Tags for some local groups / university groups

I'd guess it would in theory be worth having tags for EA Cambridge and maybe some other uni/local groups like EA Oxford or Stanford EA. I have in mind groups that are especially "notable" in terms of level and impact of their activities and whether their activities are distinct/novel and potentially worth replicating. E.g., EA Cambridge's seminar programs seem to me like an innovation other groups should perhaps consider adopting a version of, and with more confidence they seem like a good example of a certain ...


A central pillar for biodefense against GCBRs and an increasingly feasible intervention with several EAs working on it and potentially cool projects emerging in the near future. Possibly too granular as a tag since there's not a high volume of biosecurity posts which would warrant the granular distinction. But perhaps valuable from a Wiki standpoint with a definition and a few references. I can create an entry, if the mods are okay with it.

Example posts:

...
Hi Jasper, I agree that this would be a valuable Wiki article, and if you are willing to write it, that would be fantastic.


Would want to have a decent definition. I feel like the term is currently being used in a slippery / under-defined / unnecessary-jargon way, but also that there's some value in it. 

Example posts: 

Related entries:

Constraints on effective altruism

Scalably using labour

ETA: Now created

Corporate governance

Example of a relevant post: https://forum.effectivealtruism.org/posts/5MZpxbJJ5pkEBpAAR/the-case-for-long-term-corporate-governance-of-ai

I've mostly thought about this in relation to AI governance, but I think it's also important for space governance and presumably various other EA issues. 

I haven't thought hard about whether this really warrants an entry, nor scanned for related entries - just throwing an idea out there.

Brain-computer interfaces

See also the LW wiki entry / tag, which should be linked to from the Forum entry if we make one: https://www.lesswrong.com/tag/brain-computer-interfaces

Relevant posts:

Looks good. I've now created the entry [https://forum.effectivealtruism.org/tag/brain-computer-interfaces] and will add content/links later.

Time-money tradeoffs or Buying time or something like that

For posts like https://forum.effectivealtruism.org/posts/g86DhzTNQmzo3nhLE/what-are-your-favourite-ways-to-buy-time and maybe a bunch of other posts tagged Personal development

Cool, I created the entry here [https://forum.effectivealtruism.org/tag/time-money-tradeoffs]. I may add some text soon.

Criticism of the EA community

For posts about what the EA community is like, as opposed to the core ideas of EA themselves. Currently, these posts get filed under Criticism of effective altruism even though it doesn't quite fit.

Update: I have created Criticism of the effective altruism community [https://forum.effectivealtruism.org/tag/criticism-of-the-effective-altruism-community].
Aaron Gertler · 1y
Seems like a good idea! If we have three criticism tags covering "causes", "organizations", and "community", then having a general "criticism of EA" tag doesn't seem to make sense. The best alternative seems like "criticism of EA philosophy". If I don't hear objections from Pablo/Michael, I'll make that change in a week or so and re-tag relevant posts.
So the plan is to have 4 tags, covering community, causes, organizations, and philosophy? If so, that sounds good to me, I think. If the idea was to have just three (without philosophy), I'd have said it feels like there's something missing, e.g. for criticism of the ITN framework or ~impartial welfarism or the way EA uses expected value reasoning or whatever.

Arms race or Technology race or Arms/technology race, or something like that

Related entries

AI governance | AI forecasting | armed conflict | existential risk | nuclear warfare | Russell-Einstein Manifesto


I think such an entry/tag would be at least somewhat attention hazardous, so I'm genuinely unsure whether it's worth creating it. Though I think it'd also have some benefits, the cat is somewhat out of the bag attention-hazard-wise (at least among EAs, who are presumably the main readers of this site), and LessWrong have apparently opted for such a tag (focu...

Yes, I actually have a draft prepared, though it's focused on AI, just like the LW article. I'll try to finish it within the next couple of days and you can let me know when I publish it if you think we should expand it to cover other technological races (or have another article on that broader topic).

Survey or Surveys

For posts that: 

  1. discuss results from surveys,
  2. promote surveys, and/or
  3. discuss pros, cons, and best practices for using surveys, both in general and for specific EA-relevant areas (e.g., how much can we learn about technology timelines from surveys on that topic? how best can we collect and interpret that info?). 

I care more about the first and third of those things, but it seems like in practice the tag would be used for the second. I guess we could discourage that, but it doesn't seem important.

"Survey" seems more appropriate...

Yeah, makes sense. There's some overlap with Data [https://forum.effectivealtruism.org/tag/data-ea-community-1], but my sense is that having this other entry is still justified. I don't have a preference for plural vs. singular.
Ok, now created [https://forum.effectivealtruism.org/tag/surveys].


Might overlap too much with things like international relations and international organizations?

Would partly be about diplomacy as a career path.

Probably worth it, if there are enough relevant posts and/or if there's discussion here or elsewhere about diplomacy as a career path. 

Coaching or Coaching & therapy or something like that

Basically I think it'd be useful to have a way to collect all posts relevant to coaching and/or therapy as ways to increase people's lifetime impact - so as meta interventions/cause areas, rather than as candidates for the best way to directly improve global wellbeing (or whatever). So this would include things like Lynette Bye's work but exclude things like Canopie.

In my experience, it tends to make sense to think of coaching and therapy together in this context, as many people offer both services, ...

Yes, makes a lot of sense. Not sure why we don't have such a tag already. Weak preference for coaching over coaching & therapy.
Ok, now created [https://forum.effectivealtruism.org/tag/coaching/], with coaching as the name for now

Independent impressions or something like that

We already have Discussion norms and Epistemic deference, so I think there's probably no real need for this as a tag. But I think a wiki entry outlining the concept could be good. The content could be closely based on my post of the same name and/or the things linked to at the bottom of that post.

I agree that it would be good to describe this distinction in the Wiki. Possibly it could be part of the Epistemic deference entry, though I don't have a strong view on that.
How about something like beliefs vs. impressions?
Yeah, that title/framing seems fine to me
After reviewing the literature, I came to the view that Independent impressions, which you proposed, is probably a more appropriate name, so that's what I ended up using.

Management/mentoring, or just one of those terms, or People management, or something like that

This tag could be applied to many posts currently tagged Org strategy, Scalably using labour, Operations, research training programs, Constraints on effective altruism, WANBAM, and effective altruism hiring. But this topic seems sufficiently distinct from those topics and sufficiently important to warrant its own entry.

Sounds good. I haven't reviewed the relevant posts, so I don't have a clear sense of whether "management" or "mentoring" is a better choice; the latter seems preferable other things equal, since "management" is quite a vague term, but this is only one consideration. In principle, I could see a case for having two separate entries, depending on how many relevant posts there are and how much they differ. I would suggest that you go ahead and do what makes most sense to you, since you seem to have already looked at this material and probably have better intuitions. Otherwise I can take a closer look myself in the coming days.
Ok, I've now made this, for now going with just one entry called Management & mentoring [https://forum.effectivealtruism.org/tag/management-and-mentoring], but flagging on the Discussion page that that could be changed later. 

United Kingdom policy & politics (or something like that)

This would be akin to the entry/tag on United States politics. An example of a post it'd cover is https://forum.effectivealtruism.org/posts/yKoYqxYxo8ZnaFcwh/risks-from-the-uk-s-planned-increase-in-nuclear-warheads 

But I wrote on the United States politics entry's discussion page a few months ago:

I suggest changing the name and scope to "United States government and politics". E.g., I think there should be a place to put posts about what actions the US government plans to take or can take, h

...
Yeah, makes sense. I just created the new article [https://forum.effectivealtruism.org/tag/united-kingdom-policy-and-politics] and renamed the existing one. There is no content for now, but I'll try to add something later.

We've now redirected almost all of EA Concepts to Wiki entries. A few of the remaining concepts (e.g. "beliefs") don't seem like good wiki entries here, so we won't touch them.

However, there are a couple of entries I think could be good tags, or good additions to existing tags:

  1. Charity recommendations
  2. Focus area recommendations

It seems good to have wiki entries that contain links to a bunch of lists of charity and/or focus area recommendations. Maybe these are worked into tags like "Donation Choice"/"Donation Writeup", or maybe they're separate.

(Wherever the...

Charity evaluators, e.g. GiveWell [https://forum.effectivealtruism.org/tag/givewell] and Animal Charity Evaluators [https://forum.effectivealtruism.org/tag/animal-charity-evaluators], have Wiki entries with sections listing their current recommendations. One option is to make the charity recommendations entry a pointer to existing Wiki entries that include such sections. Alternatively, we could list the recommendations themselves in this new Wiki entry, perhaps organizing it as a table that shows, for each charity, which charity evaluators recommend it.
Yeah, how about communities adjacent to effective altruism?
Sounds good! Thanks.
I created a stub. As usual, feel free to revise or expand it.

Open society

The ideal of an open society - a society with high levels of democracy and openness - is related to many EA causes and policy goals. For example, open societies are associated with long-run economic growth, and an open society is conducive to the "long reflection." This tag could host discussion about the value of open societies, the meaning of openness, and how to protect and expand open societies.

I agree that the concept of an open society as you characterize it has a clear connection to EA. My sense is that the term is commonly used to describe something more specific, closely linked to the ideas of Karl Popper and the foundations of George Soros (Popper's "disciple"), in which case the argument for adding a Wiki entry would weaken. Is my sense correct? I quickly checked the Wikipedia article, which broadly confirmed my impression, but I haven't done any other research.
Yeah, maybe something broader like "democracy" or "liberal democracy." Perhaps we could rename the "direct democracy" tag to "democracy"?
Aaron Gertler · 2y
The direct democracy [https://forum.effectivealtruism.org/tag/direct-democracy] tag is meant for investments in creating specific kinds of change through the democratic process. But people are using it for other things now anyway -- probably it's good to have a "ballot initiatives" tag and rename this tag to "democracy" or something else. Good catch!
Here's what I did:

  • I renamed direct democracy to ballot initiative.
  • I added two new entries: democracy and safeguarding liberal democracy. The first covers any posts related to democracy, while the second covers specifically posts about safeguarding liberal democracy as a potentially high-impact intervention.

I still need to do some tagging and add content to the new entries.
I agree. I'll deal with this tomorrow (Thursday), unless anyone wants to take care of it.
Yes, I think your sense is correct.
I do see this concept as relevant to various EA issues for the reasons you've described, and I think high-quality content covering "the value of open societies, the meaning of openness, and how to protect and expand open societies" would be valuable. But I can't immediately recall any Forum posts that do cover those topics explicitly. Do you know of posts that would warrant this tag?

If there aren't yet posts that'd warrant this tag, then we have at least the following (not mutually exclusive) options:

  1. This tag could be made later, once there are such posts
  2. You could write a post on those topics yourself
  3. An entry on those topics could be made
    • It's ok to have entries that don't have tagged posts
    • But it might be a bit odd for someone other than Pablo to jump to making an entry on a topic as one of the first pieces of EA writing on that topic?
      • Since wikis are meant to do things more like distilling existing work.
      • But I'm not sure.
    • This is related to the question of to what extent we should avoid "original research" on the EA Wiki, in the way Wikipedia avoids it
      • See also [https://forum.effectivealtruism.org/tag/totalitarianism/discussion?commentId=fosqBRhrQXysZu7gG]
  4. Some other entry/tag could be made to cover similar ground

Career profiles (or maybe something like "job posts"?)

Basically, writeups of specific jobs people have, and how to get those jobs. Seems like a useful subset of the "Career Choice" tag to cover posts like "How I got an entry-level role in Congress", and all the posts that people will (hopefully) write in response to this.

What about posts that discuss personal career choice processes (like this [https://forum.effectivealtruism.org/posts/LHZBcqyCkYqmZLzij/my-career-decision-making-process])?
My personal, quick reaction is that that's a decently separate thing, that could have a separate tag if we feel that that's worthwhile. Some posts might get both tags, and some posts might get just one. But I haven't thought carefully about this. I also think I'd lean against having an entry for that purpose. It seems insufficiently distinct from the existing tags for career choice or community experiences, or from the intersection of the two.
Yeah, this seems worth having! And I appreciate you advocating for people to write these and for us to have a way to collect them, for similar reasons to those given in this earlier shortform of mine [https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=oZEH68sCn5Dtuns9r].

I think career profiles is a better term for this than job posts, partly because:

  • The latter sounds like it might be job ads or job postings
  • Some of these posts might not really be on "jobs" but rather things like being a semi-professional blogger, doing volunteering, having some formalised unpaid advisory role to some institution, etc.

OTOH, career profiles also sounds somewhat similar to 80k's career reviews. This could be good or bad, depending on whether it's important to distinguish what you have in mind from the career review format. (I don't have a stance on that, as I haven't read your post yet.)
Actually, having read your post, I now think it does sound more about jobs (or really "roles", but that sounds less clear) than about careers. So I now might suggest using the term job profiles. 
Aaron Gertler · 2y
Thanks, have created this [https://forum.effectivealtruism.org/tag/job-profile]. (The "Donation writeup" tag is singular, so I felt like this one should also be, but LMK if you think it should be plural.)
Either looks good to me. I agree that this is worth having.

Update: I've now made this entry.

Requests for proposals or something like that

To cover posts like https://forum.effectivealtruism.org/posts/EEtTQkFKRwLniXkQm/open-philanthropy-is-seeking-proposals-for-outreach-projects 

This would be analogous to the Job listings tags, and sort of the inverse of the Funding requests tag.

This overlaps in some ways with Get involved and Requests (open), but seems like a sufficiently distinct thing that might be sufficiently useful to collect in one place that it's worth having a tag for this.

This could also be an entry t...

Update: I've now made this entry.

Semiconductors or Microchips or Integrated circuit or something like that

The main way this is relevant to EA is as a subset of AI governance / AI risk issues, which could push against having an entry just for this.

That said, my understanding is that a bunch of well-informed people see this as a fairly key variable for forecasting AI risks and intervening to reduce those risks, to the point where I'd say an entry seems warranted.

Update: I've now made this entry.

Consultancy (or maybe Consulting or Consultants or Consultancies)

Things this would cover:

...
Yeah, I made a note to create an entry on this topic soon after Luke published his post. Feel free to create it, and I'll try to expand it next week (I'm a bit busy right now).

Update: I've now made this entry.

Alternative foods or resilient foods or something like that

A paragraph explaining what I mean (from Baum et al., 2016):

nuclear war, volcanic eruptions, and asteroid impact events can block sunlight, causing abrupt global cooling. In extreme but entirely possible cases, these events could make agriculture infeasible worldwide for several years, creating a food supply catastrophe of historic proportions. This paper describes alternative foods that use non-solar energy inputs as a solution for these catastrophes. For example,

...
I'm in favor. Very weak preference for alternative foods until resilient foods becomes at least somewhat standard.

I now feel that a number of unresolved issues related to the Wiki ultimately derive from the fact that tags and encyclopedia articles should not both be created in accordance with the same criterion. Specifically, it seems to me that a topic that is suitable for a tag is sometimes too specific to be a suitable topic for an article.

I wonder if this problem could be solved, or at least reduced, by allowing article section headings to also serve as tags. I think this would probably be most helpful for articles that cover particular disciplines, such as psycho...

Aaron Gertler · 2y
These are reasonable concerns, but adding hundreds of additional tags and applying them across relevant posts seems like it will take a lot of time. As a way to save time and reduce the need for new tags, how many of your use cases do you think would be covered if multi-tag filtering was supported? That is, someone could search for posts with both the "psychology" and "career choice" tags and see posts about careers in psychology. This lets people create their own "fine-grained taxonomy" without so many tags needing to have a bunch of sub-tags.
I think something along these lines feels promising, but I feel a bit unsure precisely what you have in mind. In particular, how will users find all posts tagged with an article section heading tag? Would there still be a page for (say) social psychology like there is for psychology, and then it's just clear somehow that this page is a subsidiary tag of a larger tag?

Inspired by that question, I think maybe a more promising variant (or maybe it's what you already had in mind) is for some article section headings to be hyperlinked to a page whose title is the other page's section heading and whose contents are that section from the other page, below which are shown all the posts with that section heading tag. Then if a user edits the section or the "section's own page", the edit automatically occurs in the other place as well.

And from "the section's own page" there's something at the top that makes it clear that this entry is a subsidiary entry of a larger entry, and people can click through to get back to the larger one. Maybe the "something at the top" would look vaguely like the headers of posts that are in sequences? Maybe then you could even, like with sequences, click an arrow to the right or left to go to the page corresponding to the previous or following section of the overarching entry?

Stepping back, this seems like just one example of a way we could move towards more explicitly having a nested hierarchy of entries where the different layers are in some ways linked together. I imagine there are other ways to do that too, though I haven't brainstormed any yet.

Meta: perhaps this entry should be renamed 'Propose and vote on potential entries' or 'Propose and vote on potential tags/Wiki articles'? We generally use the catch-all term 'entries' for what may be described as either a tag or a Wiki article.

Yeah, I considered that a few weeks ago but then (somewhat inexplicably) didn't bother doing it. Thanks for the prod - I have now done it :) 

I am considering turning a bunch of relevant lists into Wiki entries. Wikipedia allows for lists of this sort (see e.g. the list of utilitarians) and some (e.g. Julia Wise) have remarked that they find lists quite useful. The idea occurred to me after a friend suggested a few courses I may want to add to my list of effective altruism syllabi. It now seems to me that the Wiki might be a better place to collect this sort of information than some random blog. Thoughts?

Quick thoughts:

  • I think more lists/collections would be good [https://forum.effectivealtruism.org/posts/6trt8mTsKfqJJbfJa/post-more-summaries-and-collections]
  • I think it's better if they're accessible via the Forum search function than if they're elsewhere
  • I think it's probably better if they're EA Wiki entries than EA Forum posts or shortforms, because that makes it easier for them to be collaboratively built up
    • And this seems more important for and appropriate to a list than an average post
    • Posts are often much more like a particular author's perspective, so editing beyond copyediting would typically be a bit odd (that said, a function for making suggestions could be cool - but that's tangential to the main topic here)
  • I don't think I see any other advantage of these lists being wiki entries rather than posts or shortforms
  • I think the only disadvantages of them being wiki entries are that we might then have too many random or messy lists that have an air of official-ness, or that the original list creator gets less credit for their contributions (their name isn't attached to the list)
    • But the former disadvantage can apply to entries in general, and so we already need sufficient policies, other editors, etc. to solve it; it doesn't seem a big deal for lists specifically
    • And the latter disadvantage can also apply to entries in general, and so will hopefully be partially solved by things like edit counters, edit karma, "badges", or the like
  • So overall this seems worth doing

Less important:

  • Various "collections" on my own shortform might be worth making into such entries
    • Though I think actually most of them are better fits for the bibliography pages of existing entries
    • (And ~a month ago I added a link to those collections, or to all relevant items from the collections, to the associated entries that existed at the time)

Update: I've now made this entry.

career advising or career advice or career coaching or something like that

We already have career choice. But that's very broad. It seems like it could be useful to have an entry with the more focused scope of things like:

  • How useful do various forms of career advising tend to be?
  • What are best practices for career advising?
  • What orgs work in that space?
    • E.g., 80k, Animal Advocacy Careers, Probably Good, presumably some others
  • How can one test fit for or build career capital in career advising?

This would be analogous to how we hav...

Charter cities or special economic zones or whatever the best catchall term for those things + seasteading is

From a quick search for "charter cities" on the Forum, I think there aren't many relevant posts, but there are:

...
Yes, definitely. I already had some scattered notes on this. There's also the 80k podcast episode: Wiblin, Robert & Keiran Harris (2019) The team trying to end poverty by founding well-governed ‘charter’ cities [https://80000hours.org/podcast/episodes/lutter-and-winter-chater-cities-innovative-governance/], 80,000 Hours, March 31. An interview with Mark Lutter and Tamara Winter from the Charter Cities Institute.

Effective Altruism on Facebook and Effective Altruism on Twitter (and more - maybe Goodreads, Instagram, LinkedIn, etc). Alternatively Effective Altruism on Social Media, though I probably prefer tags/entries on particular platforms.

A few relevant articles:




https://forum.effectivealtruism.org/posts/BtptBcXWmjZBfdo9n/ea-fa...

At first glance, I'd prefer to have Effective altruism on social media, or maybe actually just Social media, rather than the more fine-grained ones. (Also, I do think something in this vicinity is indeed worth having.) Reasoning:

  • I'm not sure if any of the specific platforms warrant an entry
  • If we have entries for the specific platforms, then what about posts relevant to effective altruism on some other platform?
    • We shouldn't just create an entry for every other platform there's at least one post relevant to, nor should we put them all under one of the other single-platform-focused tags.
    • But having an entry for Facebook, another for Twitter, and another for social media as a whole seems like too much?
  • Regarding dropping "Effective altruism on" and just saying "Social media":
    • Presumably there are also posts on things like the effects of social media, the future trajectory of it, or ways to use it for good or intervene in it that aren't just about writing about EA on it?
      • E.g., https://forum.effectivealtruism.org/posts/842uRXWoS76wxYG9C/incentivizing-forecasting-via-social-media
    • And it seems like it'd be good to capture those posts under the same entry?
  • Though maybe an entry for social media and an entry for effective altruism on social media are both warranted?

Though also note that there's already a tag for effective altruism in the media [https://forum.effectivealtruism.org/tag/effective-altruism-in-the-media], which has substantial overlap with this. But I think that's probably ok - social media seems a sufficiently notable subset of "the media" to warrant its own entry.

(Btw, for the sake of interpreting the upvotes as evidence: I upvoted your comment, though as I noted I disagree a bit on the best name/scope.)
(Just wanted to send someone a link to a tag for Social media or something like that, then realised it doesn't exist yet, so I guess I'll bump this thread for a second opinion, and maybe create this in a few days if no one else does)
I don't have accounts on social media and don't follow discussions happening there, so I defer to you and others with more familiarity.

Something like regulation

Intended to capture discussion of the Brussels effect, the California effect, and other ways regulation could be used for or affect things EAs care about.

Would overlap substantially with the entries on policy change and the European Union, as well as some other entries, but could perhaps be worth having anyway.

Update: I've now made this entry.

software engineering

Some relevant posts:

Related entries

artificial intelligence...

Looks good to me.

Maybe we should have an entry for each discipline/field that's fairly relevant to EA and fairly well-represented on the Forum? Like how we already have history, economics, law, and psychology research. Some other disciplines/fields (or clusters of disciplines/fields) that could be added:

  • political science
  • humanities
    • I think humanities disciplines/fields tend to be somewhat less EA-relevant than e.g. economics, but it could be worth having one entry for this whole cluster of disciplines/fields
  • social science
    • But (unlike with humanities) it's probably better to h...
I'm overall in favor. I wonder if we should take a more systematic approach to entries about individual disciplines. It seems that, from an EA perspective, a discipline may be relevant in a number of distinct ways, e.g. because it is a discipline in which young EAs may want to pursue a career,  because conducting research in that discipline is of high value, because that discipline poses serious risks, or because findings in that discipline should inform EA thinking. I'm not sure how to translate this observation into something actionable for the Wiki, though, so I'm just registering it here in case others have thoughts along these lines.
Yeah, I do think it seems worth thinking a bit more about what the "inclusion criteria" for a discipline should be (from the perspective of making an EA Wiki entry about it), and that the different things you mention seem like starting points for that. Without clearer inclusion criteria, we could end up with a ridiculously large number of entries, or with entries that are unwarranted or too fine-grained, or with entries that are too coarse-grained, or with hesitation and failing to create worthwhile entries. I don't immediately have thoughts, but endorse the idea of someone generating thoughts :D
I agree that humanities disciplines tend to be less EA-relevant than the social sciences. But I think that the humanities are quite heterogeneous, so it feels more natural to me to have entries for particular humanities disciplines, than humanities as a whole. But I'm not sure any such entries are warranted; it depends on how much has been written.

Vetting constraints

Maybe this wouldn't add sufficient value to be worth having, given that we already have scalably using labour and talent vs. funding constraints.

I think there should definitely be a place for discussing vetting constraints. My only uncertainty is whether this should be done in a separate article and, if so, whether talent vs. funding constraints should be split. Conditional on having an article on vetting constraints, it looks to me that we should also have articles on talent constraints and funding constraints. Alternatively, we could have a single article discussing all of these constraints.
I think I agree that we should either have three separate entries or one entry covering all three. I'm not sure which of those I lean towards, but maybe very weakly towards the latter?
Just discovered Vaidehi made a collection of discussions of constraints in EA [https://forum.effectivealtruism.org/posts/4SRj3KnRCh7iFoGK2/vaidehi_agarwalla-s-shortform?commentId=5XymYrET43cHT9bwg], which could be helpful for populating whatever entries get created and maybe for deciding on scopes etc.

Mmh, upon looking at Vaidehi's list more closely, it now seems to me that we should have a single article: people have proposed various other constraints besides the three mentioned, and I don't think it would make sense to have separate articles for each of these, or to have an additional article for "other constraints". So I propose renaming talent vs. funding constraints to constraints in effective altruism. Thoughts?

I think that that probably makes sense.
Done. (Though I used the name constraints on effective altruism [https://forum.effectivealtruism.org/tag/constraints-on-effective-altruism], which seemed more accurate. I don't have strong views on whether the preposition should be 'in' or 'on', however, so feel free to change it.) The article should be substantially revised (it was imported from EA Concepts), I think, but at least its scope is now better defined.
Great. Let's have three articles then. Feel free to split the existing one, otherwise I'll do that tomorrow. [I know you like this kind of framing. ;) ]
Vetting constraints dovetails nicely with talent vs. funding constraints. I'm not totally convinced by the scalably using labour entry, though. One possibility would be to just replace it by a vetting constraints entry. Alternatively, it could be retained but renamed/reconceptualised.
Yeah, scalably using labor just doesn't strike me as a natural topic for a Wiki entry, though I not sure exactly why. Maybe it's because it looks like the topic was generated by considering an interesting question—"how should the EA community allocate its talent?"—and creating an entry around it, rather than by focusing on an existing field or concept. I'd be weakly in favor of merging it with vetting constraints.
I'm currently in favour of keeping scalably using labour, though I also made the entry so this shouldn't be much of an update (it's not like a "second vote", just a repeat of the first vote after hearing the new arguments).  One consideration I'd add is that maybe it's a more natural topic for a tag than a wiki entry? It seems to me like having a tag for posts relevant to a (sufficiently) interesting and recurring question makes sense?
Fwiw, I think that "scalably using labour" doesn't sound quite like a wiki entry. I find virtually no article titles including the term "using" on Wikipedia. If one wants to retain the concept, I think that "Large-scale use of labour" or something similar would be better. There are many Wikipedia article titles including the term "use of [noun]". (Potentially nouns are generally better than verbs in Wikipedia article titles? Not sure.)

Update: I've now made this entry

Charity evaluation or (probably less good) Charity evaluator

We already have entries on donation choice, intervention evaluation, and cause prioritisation. But charity evaluation is a major component of donation choice for which we lack an entry. This entry could also cover things about charity evaluation orgs like GiveWell, e.g. how useful a role they serve, what the best practices for them are, and whether there should be one for evaluating longtermist charities or AI charities or whatever.

Downside of this name: Really it m...

I think this should clearly exist.

Update: I've now made this entry.

Effective altruism outreach in schools or High school outreach or something like that

Overlaps with https://forum.effectivealtruism.org/tag/effective-altruism-education , but that entry is broader, and it seems like now there's a decent amount of activity or discussion about high school outreach specifically. E.g.:

...
I'm in favor.

Barriers to effective giving or Psychology of (in)effective giving or something like that


Why aren’t people donating more effectively? | Stefan Schubert | EA Global: San Francisco 2018

EA Efficacy and Community Norms with Stefan Schubert [see description for why this is relevant]

[Maybe some other Stefan Schubert stuff]

[Probably some stuff by Lucius Caviola, David Reinstein, and others]

Related entries

cognitive bias | cost-effectiveness | donation choice | diminishing returns | effective giving | market efficiency of philanthropy | rationality | sc...

Yeah, I think Psychology of effective giving is probably the best name. Stefan, Lucius and others have published a bunch of stuff on this, which would be good to cover in the article.

This is one of many emerging areas of research at the intersection of psychology and effective altruism:

  • psychology of effective giving (Caviola et al. 2014; Caviola, Schubert & Nemirow 2020; Burum, Nowak & Hoffman 2020)
  • psychology of existential risk (Schubert, Caviola & Faber 2019)
  • psychology of speciesism (Caviola 2019; Caviola, Everett & Faber 2019; Caviola & Capraro 2020)
  • psychology of utilitarianism (Kahane et al. 2018; Everett & Kahane 2020)

I was thinking of covering all of this research in a general entry on the psychology of effective altruism, but we can also have separate articles for each.
I forgot that there was already an EA Psychology tag [https://forum.effectivealtruism.org/tag/psychology-of-effective-altruism/], so I've now just renamed that, added some content, and copied this comment of Pablo's on that Discussion page. (It could still make sense for someone to also create entries on those other topics and/or on moral psychology - I just haven't done so yet.)
Great, thanks.
Apparently there's a new review article by Caviola, Schubert, and Greene called "The Psychology of (In)Effective Altruism" [https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(21)00090-5], which pushes in favour of roughly that as the name. I also think that, as you suggest, that can indeed neatly cover "psychology of effective giving" (i.e., that seems a subset of "psychology of effective altruism"), and maybe "psychology of utilitarianism". But I'm less sure that it neatly covers the other things you list. I.e., the psychology of speciesism and existential risk are relevant to things other than how effective people will be in their altruism. But we can just decide later whether to also have separate entries for those, and if so I do think they should definitely be listed in the Related entries section of the "main entry" on this bundle of topics (and vice versa). So I think I currently favour:

  • Having an entry called psychology of (in)effective altruism
    • With psychology of effective altruism as a second-to-top pick
  • Probably not currently having a separate entry for psychology of (in)effective giving
    • But if people think there's enough distinctive stuff to warrant an entry/tag for that, I'm definitely open to it
  • Maybe having separate entries for the other things you mention

Psychology of (in)effective altruism is adequate for a paper, where authors can use humor, puns, and other informal devices, but inappropriate for an encyclopedia, which should keep a formal tone.

(To elaborate:  by calling the field of study e.g. the 'psychology of effective giving' one is not confining attention only to the psychology of those who give particularly effectively: 'effective giving' is used to designate a dimension of variation, and the field studies the underlying psychology responsible for causing people to give with varying degrees of effectiveness, ranging from very effectively to very ineffectively. By analogy, the psychology of eating is meant to also study the psychology of people who do not eat, or who eat little. A paper about anorexia may be called "The psychology of (non-)eating", but that's just an informal way of drawing attention to its focus; it's not meant to describe a field of study called "The psychology of (non-)eating", and that's not an appropriate title for an encyclopedia article on such a topic.)

Yeah, the ultra-pedantic+playful parenthetical is a very academic thing. "Psychology of effective altruism" seems to cover giving/x-risk/speciesism/career choice - i.e. it covers everything we want.
Given the fact you both say this and the upvotes on those comments, I think we should probably indeed go with "psychology of effective giving" rather than "psychology of (in)effective giving".[1] I still don't think that actually totally covers psychology of speciesism, since speciesism is not just relevant in relation to altruism. Likewise, I wouldn't say the psychology of racism or of sexism are covered by the area "psychology of effective altruism". But I do think the entry on psychology of effective altruism should discuss speciesism and so on, and that if we later have an entry for psychology of speciesism they should link to each other.

[1] But FWIW:

  • I don't naturally interpret the "(in)" device as something like humour, a pun, or an informal device
  • I think "psychology of effective altruism" and "psychology of ineffective altruism" do call to mind distinct focuses, even if I'd expect each thing to either cover (with less emphasis) or "talk to" work on the other thing
  • Somewhat analogously, areas of psychology that focus on what makes for an especially good life (e.g., humanist psychology) are meaningfully distinct from those that focus on "dysfunction" (e.g., psychopathology), and I believe new terms were coined primarily to highlight that distinction

But I don't think this matters much, and I'm totally happy for "psychology of effective giving" to be used instead.
(Oh, just popping a thought here before I go to sleep: "moral psychology" is a relevant nearby thing. Possibly it'd be better to have that entry than "psychology of effective altruism"? Or to have both?)
Thanks. Coincidentally this was published [https://forum.effectivealtruism.org/revisions/tag/our-world-in-data] yesterday. But I haven't done any tagging yet.
Ah, nice. Maybe I searched for the entry shortly before it was published. I've now tagged those 3 posts I mentioned, but haven't checked and tagged other things that come up when you search "Our World in Data".
There are lots of hits for 'EA updates'. The three results that I thought deserved to be tagged were precisely the ones you had already identified. I haven't looked at this exhaustively, though, so if you find other relevant articles, feel free to add the tag to those, too.

Intelligence assessment or Intelligence (military and strategy) or Intelligence agencies or Intelligence community or Intelligence or something

I don't really like any of those specific names. The first is what Wikipedia uses, but sounds 100% like it means IQ tests and similar. The second is my attempt to put a disambiguation in the name itself. The third and fourth are both too narrow, really - I'd want the entry to not just be about the agencies or community but also about the type of activity they undertake. The fifth is clearly even more ambiguous t...

(Edit: I've now made this entry.)

Independent research

Proposed text:

Independent research is research conducted by an individual who is not employed by any organisation or institution, or who is employed but is conducting the research separately from that employment. This person may or may not have funding for this research (e.g., via grants). Research that is done by two or more people collaborating, but still separately from any organisation or institution, could arguably also be considered independent research.

There are various advantages and disadvantages of independent r...
Some text from the latest LTFF report [https://forum.effectivealtruism.org/posts/diZWNmLRgcbuwmYn4/long-term-future-fund-may-2021-grant-recommendations#Tegan_McCaslin____80_401] that could be drawn on when discussing advantages and disadvantages within this entry:
Looks good, thanks!

Edit: I've now made this entry.

Longtermist Entrepreneurship Fellowship

I think this is only mentioned in three Forum posts so far[1], and I'm not sure how many (if any) would be added in future. 

It's also mentioned in this short Open Phil page: https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-jade-leung

I'm also not sure if the name is fully settled - different links seem to use different names, or to not even use a capitalised name.

[1] https://forum.effectivealtruism.org/posts/diZWNmLRgcbuwmYn4/long-term-future-fund-may-2021-gra...

I'm in favor, though there's so little public information at this stage that inevitably the entry won't have any substantive content for the time being.
Looks good.
Cool - given that, I've now made this [https://forum.effectivealtruism.org/tag/weapons-of-mass-destruction] (though without adding body text or tagging things, for time reasons). 

Some orgs it might be worth making entries about:

Thanks, I'm in the process of compiling a master list of EA orgs and creating entries for the missing ones. Would you be interested in looking at the spreadsheet?
Yeah, I'll send you a DM

David Pearce (the tag will be removed if others think it’s not warranted)

Arguments against:

  • One may see David Pearce as much more related to transhumanism (even if to the most altruistic “school” of transhumanism) than to EA (see e.g. Pablo’s comment).
  • Some of Pearce’s ideas go against certain established notions in EA: e.g. he thinks sentience of classical digital computers is impossible under the known laws of physics, that minimising suffering should take priority over increasing the happiness of the already well-off, that environmental interventions alone,
...
Michael Huang (2y):
To add to arguments for inclusion, here’s an excerpt from an EA Forum post about key figures in the animal suffering focus area. David Pearce’s work on suffering and biotechnology would be more relevant now than in 2013 due to developments in genome editing and gene drives [https://forum.effectivealtruism.org/tag/gene-drives].
For those who may want to see the deleted entry, I'm posting it below:
Aaron Gertler (2y):
As the head of the Forum, I'll second Pablo in thanking you for creating the entry. While I defer to Pablo on deciding what articles belong in the wiki, I thought Pearce was a reasonable candidate. I appreciate the time you took to write out your reasoning (and to acknowledge arguments against including him).
Thank you for appreciating the contribution. Since Pablo is trusted w/ deciding on the issue, I will address my questions about the decision directly to him in this thread.
Thanks again, nil, for taking the time to create this entry and outline your reasoning. After reviewing the discussion, and seeing that no new comments have been posted in the past five days, I've decided to delete the article, for the reasons I outlined previously. Please do not let this dissuade you from posting further content to the Wiki, and if you have any feedback, feel free to leave it below or to message me privately [https://forum.effectivealtruism.org/users/pablo_stafforini].
I'm sorry to hear this, Pablo, as I haven't been convinced that Pearce isn't relevant enough for effective altruism. Also, I really don’t see how the persons below have contributed more or are more relevant to effective altruism than Pearce (that is not necessarily to say that their entries aren’t warranted!). May it be correct to infer that at least some of these entries received less scrutiny than Pearce’s nomination?

  • Dylan Matthews [https://forum.effectivealtruism.org/tag/dylan-matthews]
  • David Chalmers [https://forum.effectivealtruism.org/tag/david-chalmers]

And perhaps:

  • Demis Hassabis [https://forum.effectivealtruism.org/tag/demis-hassabis]
  • K. Eric Drexler [https://forum.effectivealtruism.org/tag/eric-drexler]

May I ask why five days since the last comment were deemed enough for proceeding to the deletion? Is this part of the wiki’s rules? (If so, it must be my fault that I didn't reply in time.)

I also wanted to say that, despite the disagreement, I appreciate that the wiki has a team committed to it.

>Also, I really don’t see how the persons below have contributed more or are more relevant to effective altruism than Pearce

I tried to outline some criteria in an earlier comment. Chalmers and Hassabis fall under the category of "people who have attained eminence in their fields and who are connected to EA to a significant degree". Drexler, and perhaps also Chalmers, fall under the category of "academics who have conducted research of clear EA relevance".  Matthews doesn't fall under any of the categories listed, though he strikes me as someone worth including given his leading role at Future Perfect—the only explicitly EA project in mainstream journalism—and his long-standing involvement with the EA movement.

As the example of Matthews shows, the categories I identified aren't exhaustive. That was just my attempt to retroactively make sense of the tacit criterion I had followed in selecting these particular people. Despite still not having a super clear sense of the underlying categories, I felt reasonably confident that Pearce didn't qualify because (1) it seemed that there was no other potential category he could fall under besides that of "EA core figure" and (2) ...

First, I want to make it clear that I’m not suggesting that any of the persons I listed in my previous comment [https://forum.effectivealtruism.org/posts/rxbLqMDhd4832WYit/propose-and-vote-on-potential-tags?commentId=MDex5XmBWZrES4vqG] should be removed from the wiki. I just disagree that not including Pearce is justified. Again, I honestly don’t think that it is true that Chalmers and Drexler are “connected to EA to a significant degree” while Pearce isn’t. Especially Chalmers: from what I know, he isn’t engaged w/ effective altruism, besides once agreeing to be interviewed [https://80000hours.org/podcast/episodes/david-chalmers-nature-ethics-consciousness/] on the 80,000 Hours podcast. As for the “attained eminence in their fields” condition, I do see that it may be harder to resolve in Pearce’s case, since he isn’t an academic but rather an independent philosopher, writer, and advocate. But if Pearce’s field is suffering abolitionism [https://www.lesswrong.com/tag/abolitionism], then the “attained eminence in their fields” condition does hold, in my view: he both founded the “abolitionist project” [https://www.abolitionist.com/] and has written extensively on the why’s and how’s of the project. Also, as I mentioned in the original comment [https://forum.effectivealtruism.org/posts/rxbLqMDhd4832WYit/propose-and-vote-on-potential-tags?commentId=BjB53cwF39tiHvo24] proposing the entry, Pearce’s work has inspired many EAs, including Brian Tomasik, the Qualia Research Institute’s Andrés Gómez Emilsson, and the Center for Reducing Suffering’s Magnus Vinding, as well as the nascent field of welfare/compassionate biology [https://sentience-politics.org/files/RWAS-8.pdf]. The Invincible Wellbeing [https://www.invinciblewellbeing.com/] research group has been inspired by Pearce's work as well. I don’t have any new arguments to make, and I don’t expect anyone involved to change their minds anyway. I only hope it may be worth the time of others to contribute their perspect...
[Just responding to one specific thing, which isn't central to what you're saying anyway. No need to respond to this.] For what it's worth, I think I agree with you re Chalmers (I think Pearce may be more connected to EA than Chalmers is), but not Drexler. E.g., Drexler has worked at FHI for a while, and the FHI office is also shared by GovAI (part of FHI, but worth listing separately), GPI, CEA, and I think Forethought. So that's pretty EA-y. Plus he originated some ideas that are quite important for a lot of EAs, e.g. related to nanotech, CAIS, and Paretotopia. (I'm writing quickly and thus leaning on acronyms and jargon, sorry.)
I should have been more clear about Drexler: I don't dispute that he is “connected to EA to a significant degree”. But so is Pearce, in my view, for the reasons outlined in this thread.

(I think it's weird and probably bad that this comment of nil's has negative karma. nil is just clarifying what they were saying, and what they're saying is within the realm of reason, and this was said politely.)

Hey nil, Chalmers was involved with EA in various ways over the years, e.g. by publishing a paper on the intelligence explosion and then discussing it at one of the Singularity Summits, briefly participating in LessWrong discussions, writing about mind uploading, interacting (I believe) with Luke Muehlhauser and Buck Shlegeris about their illusionist account of consciousness, etc. In any case, I agree with you (and Michael) that it may be more productive to consider the underlying reasons for restricting the number of entries on individual people. I generally favor an inclusionist stance, and the main reason for taking an exclusionist line with entries for individuals is that I fear things will get out of control if we adopt a more relaxed approach. I'm happy, for instance, with having entries for basically any proposed organization, as long as there is some reasonable link to EA, but it would look kind of weird if we allowed any EA to have their own entry. An alternative is to take an intermediate position where we require a certain degree of notability, but the bar is set lower, so as to include people like Pearce, de Grey, and others. We could, for instance, automatically accept anyone who already has their own Wikipedia entry, as long as they have a meaningful connection to EA (of roughly the same strength as we currently demand for EA orgs). Pearce would definitely meet this bar. How do others feel about this proposal?
Perhaps voting on cases where there is a disagreement could achieve a wider inclusiveness or at least less controversy? Voters would be e.g. the moderators (w/ an option to abstain) and several persons who are familiar w/ the work of a proposed person. It may also help if inclusion criteria are more specific and are not hidden until a dispute arises.
Hi nil, I've edited the FAQ [https://forum.effectivealtruism.org/tag/ea-wiki-faq#What_are_the_criteria_for_inclusion_] to make our inclusion criteria more explicit.
Thanks, Pablo. The criteria will help to avoid some future long disputes (and thus save time for more important things), although it wouldn't have prevented my creating the entry for David Pearce, for he does fit the second condition, I think. (We disagree, I know.)

I think discussion will probably usually be sufficient. Using upvotes and downvotes as info seems useful, but probably not letting them be decisive. 

It may also help if inclusion criteria are more specific and are not hidden until a dispute arises.

This might just be a case where written communication on the internet makes the tone seem off, but "hidden" sounds to me unfair and harsh. That seems to imply Pablo already knew what the inclusion criteria should be, and was set on them, but deliberately withheld them. This seems extremely unlikely. 

I think it's more like the wiki is only a few months old, and there's (I think) only one person paid to put substantial time into it, so we're still figuring out a lot of policies as we go - I think Pablo just had fuzzier ideas, and then was prompted by this conversation to make them more explicit, and then was still clearly open to feedback on those criteria themselves (rather than them already being set).

I do agree that it will help now that we have possible inclusion criteria written up, and it would be even better to have them shown more prominently somewhere (though with it still being clear that they're tentative and open to revision). Maybe this is all you meant?

I didn't mean to sound harsh. Thanks for pointing this out: it now seems obvious to me that that part sounds uncharitable. I do apologise, belatedly :( What I meant is that currently these new, evolving inclusion criteria are difficult to find. And if they are used in dispute resolutions (from this case onwards), perhaps they should be referenced for contributors as part of the introduction [https://forum.effectivealtruism.org/tags/all] text, for example.
Thanks for the feedback. I have made a note to update the Wiki FAQ, or if necessary create a new document. Feel free to ping me if you don't see any updates within the next week or so. 

I personally feel that the proposal would allow for the inclusion of a number of people (not Pearce) who intuitively should not have their own Wiki entry, so I'm somewhat reluctant to adopt it. More generally, an advantage of having a more exclusionist approach for individuals is that the class of borderline cases is narrower, and so too is the expected number of discussions concerning whether a particular person should or should not be included. Other things equal, I would prefer to have few of these discussions, since it can be tricky to explicitly address whether someone deserves an entry (and the unpleasantness associated with having to justify an exclusionist position specifically—which may be perceived as expressing a negative opinion of the person whose entry is being considered—may unduly bias the discussion in an inclusionist direction).

FWIW, I agree that Hassabis and Drexler meet your proposed criteria and warrant entries, and that Chalmers and Caplan probably do (along with Hanson and Beckstead). But Matthews does seem roughly on par with Pearce to me. (Though I don't know that much about either of their work.) 

I also agree that Pearce seems to be a similar case to de Grey, so we might apply a similar principle to both.

Maybe it'd be useful to try switching briefly from the discussion of specific entries and criteria to instead consider: What are the pros and cons of having more or many more entries (and especially entries on people)? And roughly how many entries on people do we ultimately want? This would be similar to the inclusionism debate on Wikipedia, I believe. If we have reason to want to avoid going beyond like 50 or 100 or 200 or whatever entries on people, or we have reason to be quite careful about adding less prominent or central people to the wiki, or if we don't, then that could inform how high a "bar" we set.

Michael is correct that the inclusion criteria for entries on individual people haven't been made explicit. In deciding whether a person was a fit subject for an article, I haven't followed any conscious procedure, but have merely relied on my subjective sense of whether the person deserved a dedicated article. Looking at the list of people I ended up including, a few clusters emerge:

  1. people who have had an extraordinary positive impact and who are often discussed in EA circles (Arkhipov, Zhdanov, etc.)
  2. people who have attained eminence in their fields and who are connected to EA to a significant degree (Pinker, Hassabis, Boeree, etc.)
  3. academics who have conducted research of clear EA relevance (Ng, Duflo, Parfit, Tetlock, etc.)
  4. historical figures who may be regarded as proto-EAs or who are seen as having inspired the EA movement (Bentham, Mill, Russell, etc.)
  5. "core figures" in the EA community (Shulman, Christiano, Tomasik, etc.)

Some people, such as Bostrom, MacAskill, and Ord, fit into more than one of these clusters. My sense is that David Pearce doesn't fit into any of the clusters. It seems relatively uncontroversial that he doesn't fit into clusters 1-4, so the relevant question—at least i... (read more)

FWIW, I think your comment is already a good step! I think I broadly agree that those people who fit into at least one of those clusters should typically have entries, and those who don't shouldn't. And this already makes me feel more of a sense of clarity about this.

I still think substantial fuzziness remains. This is mostly just because words like "eminence" could be applied more or less strictly. I think that that's hard to avoid and maybe not necessary to avoid - people will probably generally agree, and then we can politely squabble about the borderline cases and thereby get a clearer sense of what we collectively think the "line" is.

But I think "people who have had an extraordinary positive impact, and that are often discussed in EA circles (Arkhipov, Zhdanov, etc.)" may require further operationalisation, since what counts as extraordinary positive impact can differ a lot based on one's empirical, moral, epistemological, etc. views. E.g., I suspect that nil might think Pearce has been more impactful than most people who do have an entry, since Pearce's impacts are more targeted at suffering reduction. (nil can of course correct me if I'm wrong about their views.)

So maybe we should say something like "people who are widely discussed in EA and who a significant fraction of EAs see as having had an extraordinary positive impact (Arkhipov, Zhdanov, etc.)"? (That leaves the fuzziness of "significant fraction", but it seems a step in the right direction by not just relying on a given individual's view of who has been extraordinarily impactful.)

Then, turning back to the original example, there's the question: Would a significant fraction of EAs see Pearce as having had an extraordinary positive impact? I think I'd lean towards "no", though I'm unsure, both because I don't have a survey and because of the vagueness of the term "significant fraction". 

I think there's a relatively clear sense in which Arkhipov, Borlaug, and similar figures (e.g. winners of the Future of Life Award, names included in Scientists Greater than Einstein, and related characters profiled in Doing Good Better or the 80,000 Hours blog) count as having had an extraordinary positive impact and Pearce does not, namely, the sense in which Ord, MacAskill, Tomasik, etc. also don't count. I think it's probably unnecessary to try to specify in great detail what the criterion is, but the core element seems to be that the former are all examples of do-gooding that is extraordinary from both an EA and a common-sense perspective, whereas if you wanted to claim that e.g. Shulman or Christiano are among humanity's greatest benefactors, you'd probably need to make some arguments that a typical person would not find very persuasive. (The arguments for that conclusion would also likely be very brittle and fail to persuade most EAs, but that doesn't seem to be so central.)

So I think it really boils down to the question of how core a figure Pearce is in the EA movement, and as noted, my impression is that he just isn't a core enough figure. I say this, incidentally, as someone who admires him greatly and who has been profoundly influenced by his writings (some of which I translated into Spanish a long time ago), although I have also developed serious reservations about various aspects of his work over the years.

  1. If you mean that the vast majority of EAs would agree that Arkhipov, Borlaug, Zhdanov, and similar figures count as having had an extraordinary positive impact, or that that's the only reasonable position one could hold, I disagree, for reasons I'll discuss below.
  2. But if you just mean that a significant fraction of EAs would agree that those figures count as having had an extraordinary impact, I agree. And, as noted in my previous comment, I think that using a phrasing like "people who are widely discussed in EA and who a significant fraction of EAs see as having had an extraordinary positive impact (Arkhipov, Zhdanov, etc.)" would probably work.
    1. And that phrasing also seems fine if I'm wrong about (1), so maybe there's no real need to debate (1)?
    2. (Relatedly, I also do ultimately agree that Arkhipov etc. should have entries.)

Expanding on (1):

  • This is mostly due to crucial considerations that could change the sign or (relative) magnitude of the moral value of the near-term effects that these people are often seen as having had. For example:
    • It's not obvious that a US-Russia nuclear war during the Cold War would've caused a negative long-term future trajectory change.
      • I expect it would, and, for related reasons, am currently focused on nuclear risk research myself.
      • But I think one could reasonably argue that the case for this view is brittle and the case for e.g. the extraordinary positive impact of some people focused on AI is stronger (conditioning on strong longtermism).
    • Some EAs think extinction risk reduction is or plausibly is net negative.
    • Some EAs think population growth is or plausibly is net negative, e.g. for reasons related to the meat-eater problem [https://forum.effectivealtruism.org/tag/meat-eater-problem] or to differential progress [https://forum.effectivealtruism.org/tag/differential-progress].
I'm roughly neutral on this, since I don't have a very clear sense of what the criteria and "bars" are for deciding whether to make an entry about a given person. I think it would be good to have a discussion/policy regarding that. I think some people like Nick Bostrom and Will MacAskill clearly warrant an entry, and some people like me clearly don't, and there's a big space in between - with Pearce included in it - where I could be convinced either way. (This has to do with relevance and notability in the context of the EA Forum Wiki, not like an overall judgement of these people or a popularity contest.)

Some other people who are perhaps in that ambiguous space:

  • Nick Beckstead (no entry atm)
  • Elie Hassenfeld (no entry atm, but an entry for GiveWell)
  • Max Tegmark (no entry atm, but an entry for FLI)
  • Brian Tomasik (has an entry)
  • Stuart Russell (has an entry)
  • Hilary Greaves (has an entry)

(I think I'd lean towards each of them having an entry except Hassenfeld and maybe Tegmark. I think the reason for The Hassenfeld Exception is that, as far as I'm aware, the vast majority of his work has been very connected with GiveWell. So it's very important and notable, but doesn't need a distinct entry. Somewhat similar with Tegmark inasmuch as he relates to EA, though he's of course notable in the physics community for non-FLI-related reasons. But I'm very tentative with all those views.)
This makes sense to me, although someone more familiar with their work may find their exclusion unwarranted. Thanks for clarifying! In this light I still think an entry for Pearce is justified, to the degree that scientifically grounded proposals for abolishing suffering are an EA topic (and this is the main theme of Pearce's work). But I'm just one input, of course.

Regarding Tomasik, we have different intuitions here: if an entry for Tomasik may not be justified, then I would say this sets a high bar which only original EA founders could reach. (For Tomasik is himself a founder of an EA charity - the Foundational Research Institute / Center on Long-Term Risk - has written [https://reducing-suffering.org/] extensively on many topics highly relevant to EA, and is an advisor [https://centerforreducingsuffering.org/team/] at the Center for Reducing Suffering, another EA org.) Anyway, this difference probably doesn't matter in practice, since you added that you lean towards Tomasik's having an entry.
I agree with you that a Tomasik entry is clearly warranted. I would say that his entry is as justified as one on Ord or MacAskill; he is one of half a dozen or so people who have made the most important contributions to EA, in my opinion. I will respond to your main comment later, or tomorrow.
As noted, I do lean towards Tomasik having an entry, but "co-founder of an EA org" + "written extensively on many topics highly relevant to EA" + "is an advisor for another EA org", or 1 or 2 of those things plus 1 or 2 similar things, includes a fair few people, including probably like 5 people I know personally who probably shouldn't have their own entries. I do think Tomasik has been especially prolific and his writings especially well-regarded and influential, which is a big part of why I lean towards an entry for him, but the criteria and cut-offs do seem fuzzy at this stage. 

Maybe we should have a tag for each individual EA Fund, in addition to the existing Effective Altruism Funds tag? The latter could then be for posts relevant to EA Funds as a whole.

There are now 60 posts with the Effective Altruism Funds tag, and many readers may only be interested in posts relevant to one or two of the funds.

Yes, good idea. Feel free to create them, otherwise I'll do it myself later today or tomorrow.

It might be worth going through the Effective Altruism Hub's resource collections and the old attempts to build EA Wikis (e.g., the Cause Prioritization wiki), to:

  • See if that inspires useful new entries/tags
    • E.g., they might cover some topic that we then realise is worth having an entry for
  • Find resources that can be given a relevant tag, or listed in Bibliography / Further reading / External links sections

I assume some of this has been done already, but someone doing it thoroughly seems worthwhile.

Catherine Low:
Thanks Michael! I manage the EA Hub Resources [resources.eahub.org], but much of the content has been slowly getting outdated. I think the best action will be to incorporate the content in the Learn and Take Action sections of the EA Hub Resources into the EA Forum wiki, and redirect Hub visitors to the wiki. I'm unlikely to have the time to do this soon, so I would be delighted if someone else was keen to do this. Get in touch if you are keen and I can assist + set up redirects when ready! Message me through the forum private messaging.

The rest of the resources are designed for EA group organisers, and my current plan is to keep this outside of the wiki (but I'm happy for folks to try to change my mind!). I plan to move this content onto a new website in the next few months, as the EA Hub team have decided to narrow their focus to the community directories. 
I did this systematically for all the relevant wikis I was aware of, back when I started working on this project in mid 2020. Of course, it's likely that I have missed some relevant entries or references.
Ah, nice. What about for the EA Hub stuff? E.g., they've got a bunch of stuff on how to talk about EA, running EA-related events, and movement-building. And also curated collections for cause areas. And I don't think I've seen those things linked to from tag pages?
I actually wasn't aware of their resources [https://resources.eahub.org/] section (EA Hub has changed a lot over the years and I haven't stayed abreast of the latest changes). They used to have a wiki [https://web.archive.org/web/20161221172926/http://wiki.effectivealtruismhub.com/index.php?title=Special:AllPages], which I did review, though some pages were not indexed by the Internet Archive. I wonder if they have migrated their old wiki content to the new resources page. In any case, I've made a note to investigate this further.
Catherine Low:
Hey Pablo!  You are right that the wiki is long dead. The current resources section was written independently from the wiki. As I just commented up the thread [https://forum.effectivealtruism.org/posts/rxbLqMDhd4832WYit/propose-and-vote-on-potential-tags?commentId=DCHyAthMRiF5zwkzB], with the new EA Forum wiki (which is wonderful!), I think the content on the EA Hub intended for all EAs should be merged into the wiki, and then I can retire those pages and set up redirects. More than happy to chat more about this!  
Thanks for your message! Can you email me at stafforini.com [pablo@stafforini.com] preceded by MyName@, or share an email address where I can reach you? (EDIT: We have now contacted each other.)
Great that you two have connected! In the other thread, Catherine says: Yeah, I don't think the EA Forum Wiki needs to eat everything else - other options include:

  • Just include a link in Further reading or Bibliography to the external collection of resources
    • See e.g. the link to my own collection of resources from here [https://forum.effectivealtruism.org/tag/movement-collapse]
  • Look through the collection, give the appropriate tag to the Forum posts that are in that collection, and maybe include links to some other specific things in the Further reading or Bibliography section
Sounds good!

Academia or something like that

This could cover things like how (in)efficient academia is, what influences it has had and could have, the best ways to leverage or direct academia, whether people should go into academic or academia-related careers, etc.

E.g., Open Phil's post(s) on field-building and this post on How to PhD.

Related entries

field-building | meta-science | research methods | research training programs | scientific progress


It's possible that this is made redundant by other tags we already have? 

And my current suggested name and scope are... (read more)

I think this would be a valuable article. Perhaps the title could be refined, but at the moment I can't think of any alternatives I like. So feel free to create it, and we can consider possible name variants in the future.
Ok, done [https://forum.effectivealtruism.org/tag/academia-1/]!

Mind uploads, or Whole brain emulation, or maybe Digital minds

I think that:

  • These concepts overlap somewhat with artificial sentience
  • But these concepts (or at least mind uploads and WBE) are also meaningfully distinct from artificial sentience

But I could be wrong about either of those things.

Further reading

Age of Em


Related entries

artificial sentience | consciousness research | intelligence and neuroscience | long-term future | moral patienthood | non-humans and the long-term future | number of futur... (read more)

Definitely. I already was planning to have an entry on whole brain emulation and have some notes on it... wait, I now see the tag already exists. Mmh, it seems we missed it because it was "wiki only". Anyway, I've removed the restriction now. Feel free to paste the 'further reading' and 'related entries' sections (otherwise I'll do it myself; I just didn't want to take credit for your work).
Cool, I've now added those related entries and the "roadmap" report (Age of Em was already cited). 

Non-longtermist arguments for GCR reduction, or Non-longtermist arguments for prioritising x-risks, or similar but with "reasons" instead of arguments, or some other name like that

The main arguments I have in mind are the non-longtermist 4 of the 5 arguments Toby Ord mentions in The Precipice, focusing on the past, the present, civilizational virtues, and cosmic significance.

Ideally, the entry would cover both (a) such arguments and (b) reasons why those arguments might be much weaker than the longtermist arguments and thus might not by themselves justify ... (read more)

I think this would be a very useful article to have. It seems challenging to find a name for it, though. How about short-termist existential risk prioritization? I am not entirely satisfied with it, but I cannot think of other alternatives I like more. Another option, inspired by the second of your proposals, is short-termist arguments for prioritising existential risk. I think I prefer 'risk prioritization' over 'arguments for prioritizing' because the former allows for discussion of all relevant arguments, not just arguments in favor of prioritizing.
Hmm, I don't really like "short-termist" (or "near-termist"), since that only seems to cover what Ord calls the "present"-focused "moral foundation" for focusing on x-risks, rather than also the past, civilizational virtue, or cosmic significance perspectives. Relatedly, "short-termist" seems to imply we're still assuming a broadly utilitarian-ish perspective but just not being longtermist, whereas I think it'd be good if these tags could cover more deontological and virtue-focused perspectives. (You could have deontological and virtue-focused perspectives that prioritise x-risk in a way that ultimately comes down to effects on the near term, but not all such perspectives would be like that.)

Some more ideas:

  • Existential risk prioritization for non-longtermists
  • Alternative perspectives on existential risk prioritization
    • I don't really like tag names that say "alternative" in a way that just assumes everyone will know what they're alternative to, but I'm throwing the idea out there anyway, and we do have some other tags with names like that
The reasons for caring about x-risk that Toby mentions are relevant from many moral perspectives, but I think we shouldn't cover them on the EA Wiki, which should be focused on reasons that are relevant from an EA perspective. Effective altruism is focused on finding the best ways to benefit others (understood as moral patients [https://forum.effectivealtruism.org/tag/moral-patienthood]), and by "short-termist" I mean views that restrict the class of "others" to moral patients currently alive, or whose lives won't be in the distant future. So I think short-termist + long-termist arguments exhaust the arguments relevant from an EA perspective, and therefore think that all the arguments we should cover in an article about non-longtermist arguments  are short-termist arguments.
It's not immediately obvious that the EA Wiki should focus solely on considerations relevant from an EA perspective. But after thinking about this for quite some time, I think that's the approach we should take, in part because providing a distillation of those considerations is one of the ways in which the EA Wiki could provide value relative to other reference works, especially on topics that already receive at least some attention in non-EA circles.
Hmm. I think I agree with the principle that "the EA Wiki should focus solely on considerations relevant from an EA perspective", but have a broader notion of what considerations are relevant from an EA perspective. (It also seems to me that the Wiki is already operating with a broader notion of that than you seem to be suggesting, given that e.g. we have an entry for deontology [https://forum.effectivealtruism.org/tag/deontology].)

I think the three core reasons I have this view are:

  1. effective altruism is actually a big fuzzy bundle of a bunch of overlapping things
  2. we should be morally uncertain
  3. in order to do good from "an EA perspective", it's in practice often very useful to understand different perspectives other people hold and communicate with those people in terms of those perspectives

On 1 and 2:

  • I think "Effective altruism is focused on finding the best ways to benefit others (understood as moral patients [/tag/moral-patienthood])" is an overly strong statement.
    • Effective altruism could be understood as a community of people or as a set of ideas, and either way there are many different ways one could reasonably draw the boundaries.
    • One definition that seems good to me is this one from MacAskill (2019) [https://forum.effectivealtruism.org/posts/9wYa8BqSTMcx9j2tK/defining-effective-altruism]:
      • "Effective altruism is: (i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding 'the good' in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world. [...]
      • The definition is: [...] Tentatively impartial and welfarist. As a tentative hypothesis or a first approximation, doing good is about promoting wellbeing, with everyone's wellbeing counting equally." (emphasis added, and formatting tweaked)
  • I think we should be quite morall
I'll respond quickly because I'm pressed for time.

  1. I don't think EA is fuzzy to the degree you seem to imply. I think the core of EA is something like what I described, which corresponds to the Wikipedia definition (a definition which is itself an effort to capture the common features of the many definitions that have been proposed).
  2. I don't understand your point about moral uncertainty. You mention the fact that Will wrote a book about moral uncertainty, or the fact that Beckstead is open to non-consequentialism, as relevant in this context, but I don't see their relevance. EA, in the sense captured by the above Wikipedia definition, is not committed to welfarism, consequentialism, or any other moral view. (Will uses the term 'welfarism', but I don't think he is using it in a moral sense, since he states explicitly that his definition is non-normative.) (ADDED: there is one type of moral uncertainty that is relevant for EA, namely uncertainty about population axiology, because it concerns the class of beings whom EA is committed to helping, at least if we interpret 'others' in "helping others effectively" as "whichever beings count morally". Relatedly, uncertainty about what counts as a person's wellbeing is also relevant, at least if we interpret 'helping' in "helping others effectively" as "improving their wellbeing". So it would be incorrect to say that EA has no moral commitments; still, it is not committed to any particular moral theory.)
  3. I agree it often makes sense to frame our concerns in terms of reasons that make sense to our target audience, but I don't see that as the role of the EA Wiki. Instead, as noted above, one key way in which the EA Wiki can add value is by articulating the distinctively EA perspective on the topic of interest. If I consult a Christian encyclopedia, or a libertarian encyclopedia, I want the entries to describe the reasons C
I think you make some good points, and that my earlier comment was a bit off. But I still basically think it should be fine for the EA Wiki to include articles on how moral perspectives different from the main ones in EA intersect with EA issues.

---

Yeah, I think the core of EA is something like what you described, but also that EA is fuzzy and includes a bunch of things outside that core. I think the "core" of EA, as I see it, also doesn't include anti-ageing work, and maybe doesn't include a concern for suffering subroutines, but the Wiki covers those things and I think that it's good that it does so. (I do think a notable difference between that and the other moral perspectives is that one could arrive at those focus areas while having a focus on "helping others". But my basic point here is that the core of EA isn't the whole of EA and isn't all that the EA Wiki should cover.) Going back to "the EA Wiki should focus solely on considerations relevant from an EA perspective", I think that that's a good principle but that those considerations aren't limited to "the core of EA".

---

Was the word "not" meant to be in there? Or did you mean to say the opposite? If the "not" is intended, then this seems to clash with you saying that discussion from an EA perspective would omit moral perspectives focused on the past, civilizational virtue, or cosmic significance? If discussion from an EA perspective would omit those things, then that implies that the EA perspective is committed to some set of moral views that excludes those things. Maybe you're just saying that EA could be open to certain non-consequentialist views, but not so open that it includes those 3 things from Ord's book? (Btw, I do now recognise that I made a mistake in my previous comment - I wrote as if "helping others" meant the focus must be welfarist and impartial, which is incorrect.)

---

I think moral uncertainty is relevant inasmuch as a big part of the spirit of EA is trying to do good, w
(Typing from my phone; apologies for any typos.) Thanks for the reply. There are a bunch of interesting questions I'd like to discuss more in the future, but for the purposes of making a decision on the issue that triggered this thread, on reflection I think it would be valuable to have a discussion of the arguments you describe. The reason I believe this is that existential risk is such a core topic within EA that an article on the different arguments that have been proposed for mitigating these risks is of interest even from a purely sociological or historical perspective. So even if we don't agree on the definition of EA, the relevance of moral uncertainty, or other issues, that luckily doesn't turn out to be an obstacle to agreeing on this particular issue. Perhaps the article should be simply called arguments for existential risk prioritization and cover all the relevant arguments, including longtermist arguments, and we could in addition have a longer discussion of the latter in a separate article, though I don't have strong views on this. (As it happens, I have a document briefly describing about 10 such arguments that I wrote many years ago, which I could send if you are interested. I probably won't be able to work on the article within the next few weeks, though I think I will have time to contribute later.)
Ok, I've gone ahead and made the tag, currently with the name Moral perspectives on existential risk reduction [https://forum.effectivealtruism.org/tag/moral-perspectives-on-existential-risk-reduction]. I'm still unsure what the ideal scope and name would be, and have left a long comment on the Discussion page, so we can continue adjusting that later.
Great, I like the name.
Makes sense. I created [https://forum.effectivealtruism.org/tag/terrorism] it (no content yet).

Make entries for many of the concepts featured on Conceptually

I read the content on that site in 2019 and found it useful. I haven't looked through what concepts are on there to see which ones we already have and which ones might be worth adding, but I expect it'd be useful for someone to do so. So I'm noting it here in case someone else can do that (that'd be my preferred outcome!), or to remind myself to do it in a while if I have time. 

I like Conceptually, and during my early research I went through their list of concepts one by one, to decide which should be covered by the EA Wiki, though I may have missed some relevant entries. Thoughts on which ones we should include that aren't already articles or listed in our list of projected entries?

Epistemic challenge, or The epistemic challenge, or Epistemic challenges, or any of those but with "to longtermism" added

Relevant posts include the following, and presumably many more:

Related entries

  • cluelessness
  • longtermism
  • expected value
  • forecasting
Another idea: Long-range forecasting (or some other name covering a similar topic). See e.g. https://forum.effectivealtruism.org/posts/s8CwDrFqyeZexRPBP/link-how-feasible-is-long-range-forecasting-open-phil

Related entries: cluelessness | estimation of existential risk | forecasting | longtermism

Given how much the scope of this entry/tag would overlap with the scope of an epistemic challenge to longtermism tag, and how much both would overlap with other entries/tags we already have, I think we should probably only have one or the other. (I could be wrong, though. Maybe we should have both but with one being wiki-only. Or maybe we should have both later on, once the Wiki has a larger set of entries and is perhaps getting more fine-grained.)
I agree with having this tag and subsuming epistemic challenge to longtermism under it. We do already have forecasting [https://forum.effectivealtruism.org/tag/forecasting] and AI forecasting [https://forum.effectivealtruism.org/tag/ai-forecasting], so some further thinking may be needed to avoid overlap.
Ok, I've now made a long-range forecasting [https://forum.effectivealtruism.org/tag/long-range-forecasting] tag, and added a note there that it should probably subsume/cover the epistemic challenge to longtermism as well. And yeah, I'm open to people adjusting things later to reduce how many entries/tags we have on similar topics.
Is the "epistemic challenge to longtermism" something like "the problem of cluelessness, as applied to longtermism", or is it something different?
People in EA sometimes use the term "cluelessness" in a way that's pretty much referring to the epistemic challenge, or the idea that it's really really hard to predict long-term-future effects. But I'm pretty sure the philosophers writing on this topic mean something more specific and absolute/qualitative, and a natural interpretation of the word is also more absolute ("clueless" implies "has absolutely no clue"). I think cluelessness could be seen as one special case / subset of the broader topic of "it seems really really hard to predict long-term future effects". I write about this more here [https://forum.effectivealtruism.org/posts/uGt5HfRTYi9xwF6i8/3-suggestions-about-jargon-in-ea?commentId=mXm3KbwsBCszdZ9hT] and here [https://forum.effectivealtruism.org/posts/ocmEFL2uDSMzvwL8P/possible-misconceptions-about-strong-longtermism?commentId=X4dXQ5eYKmzMfMCtL].

Here's an excerpt from the first of those links: Meanwhile, the epistemic challenge is the more quantitative, less absolute, and in my view more useful idea that:

  • effects probably get harder to predict the further in the future they are
  • this might mean we should focus on the near term, if that gradual decrease in our predictive power outweighs the increased scale of the long-term future compared to the nearer term.

On that, here's part of the abstract of Tarsney's paper [https://forum.effectivealtruism.org/posts/FhjDSijdWrhFMgZrb/the-epistemic-challenge-to-longtermism-tarsney-2020]:

I think there should either be an entry for each of Accident risk, Misuse risk, and Structural risk, or a single entry that covers all three, or something like that.

Maybe these entries should just focus on AI, since that's where the terms were originally used (as far as I'm aware). On the other hand, I think the same concepts also make sense for other large-scale risks from technologies.

If the entries do focus on AI, maybe they should have AI in the name (e.g. AI accident risk or Accident risk from AI), or maybe not.

In this case, the reason I'm posting thi... (read more)

There's an accidental harm [https://forum.effectivealtruism.org/tag/accidental-harm] article, which is meant to cover the risk of causing harm as an unintended effect of trying to do good, as discussed e.g. here [https://80000hours.org/articles/accidental-harm/]. What you describe is somewhat different, since the risk results not so much from "attempts to do good" but from the development of a technology in response to consumer demand (or other factors driving innovation not directly related to altruism). Furthermore, misuse risk can involve deliberate attempts to cause harm, in addition to unintended harm. I guess all of these risks are instances of the broader category of "downside risk", so maybe we can have an article on that?
I think there are indeed overlaps between all these things. But I do think that the application of these terms to technological risk specifically or AI risk specifically is important enough to warrant its own entry or set of entries.

Maybe if you feel their distinctive scope is at risk of being unclear, that pushes in favour of sticking with the original AI-focused framing of the concepts, and maybe just mentioning in one place in the entry/entries that the same terms could also be applied to technological risk more broadly? Or maybe it pushes in favour of having a single entry focused on this set of concepts as a whole and the distinctions between them (maybe called Accident, misuse, and structural risks)?

I also wouldn't really want to say misuse risk is an instance of downside risk. One reason is that it may not be downside risk from the misuser's perspective, and another is that downside risk is often/usually used to mean a risk of a downside from something that is or is expected to be good overall. More on this from an older post of mine [https://www.lesswrong.com/s/r3dKPwpkkMnJPbjZE/p/RY9XYoqPeMc8W8zbH]:

Also, I think I see "accidental harm" as sufficiently covering standard uses of the term "downside risk" that there's not a need for a separate entry. (Though maybe a redirect would be good?)

Update: I've now made this entry.

Fermi estimation or Fermi estimates

Overlaps with some other things in the Decision Theory and Rationality cluster of the Tags Portal.

I agree that this should be added. I weakly prefer 'Fermi estimation'.

Demandingness objection

I'd guess there are at least a few Forum posts quite relevant to this, and having a place to collect them seems nice, but I could be wrong about either of those points.

[This comment is no longer endorsed by its author]
I agree it's relevant. But we already have an article: demandingness of morality [https://forum.effectivealtruism.org/tag/demandingness-of-morality]. (It's likely you haven't seen it because many of these articles were Wiki-only until very recently.)
Yeah, I just spotted that and the fact I had a new notification at the same time, and hoped it was anything other than a reply here so I could delete my shamefully redundant suggestion before anyone spotted it :D (I think what happened is that I used command+f on the tags portal before the page had properly loaded, or something.)

Update: I've now made this tag.

Charitable pledges or Altruistic pledges or Giving pledges (but that could be confused with the Giving Pledge specifically) or Donation pledges or similar

Maybe the first two names are good in that they could capture pledges about resources other than money (e.g., time)? But I can't off the top of my head think of any non-monetary altruistic pledges. 

This could serve as an entry on this important-seeming topic in general, and as a directory to a bunch of other entries or orgs on specific pledges (e.g., Giving Pledge, GWWC... (read more)

Antimicrobial resistance or Antibiotic resistance

Not sure enough EAs care about this and/or have written about this on the Forum for it to warrant an entry/tag?

(I don't personally have much interest in this topic, but I'm just one person.)

A couple of relevant posts I stumbled upon:

  • https://forum.effectivealtruism.org/posts/8ERp3GbQ54Fw8ehuQ/antibiotic-resistance-and-meat-why-we-should-be-careful-in [https://forum.effectivealtruism.org/posts/8ERp3GbQ54Fw8ehuQ/antibiotic-resistance-and-meat-why-we-should-be-careful-in]
  • https://forum.effectivealtruism.org/posts/2qXfME3Rrcd7mdnMr/ [https://forum.effectivealtruism.org/posts/2qXfME3Rrcd7mdnMr/]

Update: I've now made this tag.

Something like Bayesianism

Arguments against having this entry/tag:

  • Maybe the topic is sufficiently covered by the entries on Epistemology and on Decision theory?
Yeah, perhaps name it Bayesian reasoning or Bayesian epistemology?

Cognitive biases/Cognitive bias, and/or entries for various specific cognitive biases (e.g. Scope neglect)

I feel unsure whether we should aim to have just a handful of entries for large categories of biases, vs one entry for each of the most relevant biases (even if this means having 5+ or 10+ entries of this type).

My sense is that it would be desirable to have both an overview article about cognitive bias, discussing the phenomenon in general (e.g. the degree to which humans can overcome cognitive biases, the debate over how desirable it is to overcome them, etc.), and articles about specific instances of it.
I think you mean it'd be desirable to have both a general article on cognitive bias and one article each for various specific instances of it? Rather than having just one general article that covers both the topic as a whole and specific instances of it?

Given my assumed interpretation of what you meant, I've now made an entry for Cognitive biases and another for Scope neglect. People could later add more, or delete some, or whatever.

(I've now copied the content of this thread to the Discussion page on the Cognitive biases entry [https://forum.effectivealtruism.org/tag/cognitive-biases/discussion]. If you or others would like to reply, please do so there.)

Nonlinear Fund

Maybe it's too early to make a tag for that org?

Update: I've now made this entry.

Instrumental vs. epistemic rationality

Some brief discussion here.

These terms may basically only be used in the LessWrong community, and may not be prominent or useful enough to warrant an entry here. Not sure.

I think this would be useful to have.

Metaethical uncertainty and/or Metanormative uncertainty

These concepts are explained here.

I think it's probably best to instead have an entry on "Normative uncertainty" in general that has sections for each of those concepts, as well as sections that briefly describe (regular) Moral uncertainty and Decision-theoretic uncertainty and link to the existing tags on those concepts. (Also, the entry on Moral uncertainty could discuss the question of how to behave when uncertain what approach to moral uncertainty is best, which is metanormative uncertainty.) This... (read more)

Subjective vs. objective normativity

See here and here.

Update: I've now made this entry.

Disentanglement research

Defined here: https://forum.effectivealtruism.org/posts/RCvetzfDnBNFX7pLH/personal-thoughts-on-careers-in-ai-policy-and-strategy

Off the top of my head, I'm not sure how many posts would get this tag. But I know at least that one would, and I'd guess we'd find several more if we looked.

And in any case, this seems to be a useful concept that's frequently invoked in the EA community, so having a short wiki entry on it might be good (even ignoring tagging).

Related entries:

https://forum.effectivealtruism.... (read more)

Another suggestion: Research distillation or Research debt or similar

We could have:

  1. an entry for this and another for disentanglement research (with links between them)
  2. one entry covering both
  3. one entry that's mainly on one topic but briefly mentions/links to the other
  4. neither

What I have in mind is what's discussed here: https://distill.pub/2017/research-debt/ [https://distill.pub/2017/research-debt/]

Off the top of my head, I'm not sure how many posts would get this tag. But maybe some would? And in any case, this seems to me to be a useful concept that's sometimes invoked in the EA community, so having a short wiki entry on it might be good (even ignoring tagging). But I'm less confident I've heard this mentioned a lot in EA than I am with disentanglement research.

This is obviously very similar to the idea of a research summary [https://forum.effectivealtruism.org/tag/research-summary]. But I think that these terms and the Distill article add some value. And the research summary tag is currently only for research summaries, not for discussion of the value of or best practices for distilling research or making summaries.

Related entries:

https://forum.effectivealtruism.org/tag/scalably-involving-people [https://forum.effectivealtruism.org/tag/scalably-involving-people]
https://forum.effectivealtruism.org/tag/research-methods [https://forum.effectivealtruism.org/tag/research-methods]

Tag portal question/suggestion:

Many tags are probably relevant for more than one of the categories/clusters used on the tag portal. For example, Economic growth is currently listed under global health & development, but it's also relevant to Long-Term Risk and Flourishing and to Economics & Finance and probably some other things.

Currently, I think each tag is only shown in one place on the portal. That might be the best move.

But maybe they should instead be mentioned in every place where they're (highly) relevant, and where people might expect to ... (read more)

Crypto or something like that

Some EAs are working on or interested in things like crypto and blockchain, either as investment opportunities or as tools that might be useful for accomplishing things EAs care about (e.g., mechanism design, solving coordination problems). Maybe there should be a tag for posts relevant to such things. I'd guess that there are at least 3 relevant Forum posts, though I haven't checked. 

There are also at least two 80,000 Hours episodes that I think are relevant:

... (read more)
I would prefer Blockchain, as it is more general than cryptocurrency and doesn't get confused with the field of cryptology.
Good points. I've now created the tag and used the name Blockchain [https://forum.effectivealtruism.org/tag/blockchain].
JP Addison (2y):
Reasonable, because of the generality, though I think the cryptography ship has long, long since sailed.
JP Addison (2y):
Seems good. Maybe we should crosspost one of the recent articles on Sam Bankman-Fried.
I've now created the tag [https://forum.effectivealtruism.org/tag/blockchain]. Feel free to make those crossposts and give them the tag, of course :)  (I won't do it myself, as I have little knowledge about or personal interest in blockchain stuff myself.)

EA vs Non-EA Orgs

Proposed tag description:

The EA vs Non-EA Orgs tag is for posts that include arguments for or against pursuing jobs at organisations that explicitly identify with the "effective altruism" label, relative to jobs at other organisations. The tag is also for posts that include discussion of whether and why members of the EA community may be biased in one direction or the other on this question, and how to address that (e.g., how to raise the status - within the EA community - of high-impact work at non-EA orgs). 

This tag is not intended

... (read more)
JP Addison (2y):
I like it. Maybe "Working at EA vs Non-EA Orgs?"
Cool, done [https://forum.effectivealtruism.org/tag/working-at-ea-vs-non-ea-orgs]. I think that name is clearer, but I'd thought brevity was substantially preferred for tag names. I'm personally more inclined towards clarity than brevity here, though, so I'll use your suggested name. Someone can change it later anyway.

Scalably Using People or Scalably Using Labour or Task Y or something like that

Proposed description:

There are often discussions of how good or bad EA is at efficiently allocating many people to valuable work, the consequences of EA's strengths or weaknesses on that front, how to improve, and how this all might change as EA grows. 

See also Career Choice, Get Involved, Movement Strategy, Community Tools, Criticism (EA Movement), EA Hiring, and Markets for Altruism. 

Notes on that description:

  • I'll obviously add the links to the "See also" tags if I a
... (read more)
JP Addison (2y):
I'm pro. I'd call it Task Y, though I wouldn't be surprised if there was a reason not to.
Cool, given that, I've now made the tag [https://forum.effectivealtruism.org/tag/scalably-using-people]. I've called it Scalably Using People rather than Task Y, with the key reason being that Alex originally described Task Y as being a single task.

More generally, I think that the description of Task Y wouldn't neatly cover things like the vetting-constrained discussion or Jan's discussion of hierarchical network structures [https://forum.effectivealtruism.org/posts/oNY76m8DDWFiLo7nH/what-to-do-with-people], and I'm hoping for this tag to cover things like that as well. So I see Task Y as a subset of what I'm hoping this tag will cover.

I'm definitely open to people suggesting alternative names, though.

Industrial Revolution

We already have a variety of related tags, like History, Economic Growth, and Persistence of Political/Cultural Variables. But the Industrial Revolution does seem like perhaps the single most notable episode of history for many/all EA cause areas, so maybe we should have a tag just for it?

Some posts that would warrant the tag:

... (read more)
JP Addison (2y):
My guess is that a better tag would be "History of Economic Growth". Because I can't picture a case where someone wants to find things about the industrial revolution but not all of economic growth. (Unless they're doing a specific research project, but that sounds pretty niche.) But even still, I'd tentatively lean towards economic growth being enough. But I think that depends on how fine-grained our tagging system should be, which I don't have a strong opinion on.
This seems reasonable. I was also unsure about my suggestion, hence popping it here rather than making it. I'll hold off for now, at least.

Cultural Evolution

One relevant post: https://forum.effectivealtruism.org/posts/7QiXR2dv8KL4fkf9D/notes-on-henrich-s-the-weirdest-people-in-the-world-2020

I haven't searched my memory or the Forum for other relevant posts yet.

This would overlap somewhat with the tags for Memetics and Persistence of Political/Cultural Variables.

JP Addison (2y):
I'm in favor.
Cool - done [https://forum.effectivealtruism.org/tag/cultural-evolution].

Update: I've now made this tag.

Persistence of Political/Cultural Variables (or Cultural Persistence, or Cultural, Political, and Moral Persistence, or something like that)

First pass at a description: 

It's often important to have a sense of the persistence of political/cultural variables - such as democracy, authoritarianism, concern for human rights, a concern for animal welfare, or norms conducive to scientific progress or free markets. This can inform our predictions of what the future will be like and our views on the importance of changing those v

... (read more)
Aaron Gertler (2y):
Seems reasonable to me. Want to go ahead and create it?
Done! [https://forum.effectivealtruism.org/tag/persistence-of-political-cultural-variables]

Update: I've now made this entry

Non-Humans and the Long-Term Future

Why I propose this:

  • The following sorts of topics come up decently often:
    • Is longtermism focused only on humans?
    • Should longtermists focus on improving wellbeing for animals?
    • Should longtermists focus on improving wellbeing for artificial sentiences?
    • Are existential risks just about humans?
  • Topics like "Will most moral patients in the long-term future be humans? Other animals? Something else? By how large a margin?" also come up sometimes (though less often)
  • I think it'd be good to collect posts r
... (read more)

One consideration I just thought of, which I do not recall seeing mentioned elsewhere, is that the optimal number of tags depends somewhat on the typical tag use case.

  • Clicking on an article's tags to find other related articles
    • As only a small % of tags apply to any given article, and this % will fall as the number of tags increases, article tag spaces will not become too 'busy'. 
    • Hence there should be many tags, so that each article can be tagged as usefully as possible.
  • Clicking on the tag list to find a specific topic
    • There are already so many tags it
... (read more)
Good points. Maybe the ideal in future will be to have hierarchies/categories of Forum tags? LessWrong now does this [https://www.lesswrong.com/tags/all] (though I haven't looked at their system in detail).

Update: I've now made this entry

Positive futures (or Utopias, or Ideal futures, or something like that)

Proposed description:

The positive futures tag is for posts that discuss things like what a particularly good long-term future might look like and what sorts of ideal long-term futures we might want to aim towards. Reasons to care about this topic include that: 

  • How positive the future might be influences how important reducing existential risk is
  • Positive visions for the future could motivate work to reduce existential risks
  • Thinking about what we want
... (read more)

"Economic Policy" or "Macroeconomic Stabilization"


  • Macroeconomic stabilization is one of the areas that Open Phil works on, but it's not frequently discussed in the EA community. This tag could be specific to macro stabilization or it could encompass all areas of economic policy (aside from economic growth, which already has a tag).
  • Land use reform already has a tag.


Fermi Paradox

Arguments for having this tag:

  • Seems a potentially very important macrostrategy question
  • There are at least some posts relevant to it

Arguments against:

  • Not sure if there are more than a few posts highly relevant to this
  • Maybe this is not a prominent enough topic to get its own tag, rather than just being subsumed under the Space and Global Priorities Research tags
Aaron Gertler (2y):
This is currently a wiki-only tag. I doubt many posts are relevant to this, and I suspect that "Space" should work for all of them, but we're still in the process of figuring out how useful a tag has to be to be worth adding to the tagging menu.

Simulation Argument

Arguments for having this tag:

  • Seems a potentially very important macrostrategy question
  • There are at least some posts relevant to it

Arguments against:

  • Not sure if there are more than a couple posts highly relevant to this
  • Maybe this is not a prominent enough topic to get its own tag, rather than just being subsumed under the Global Priorities Research tag
Aaron Gertler (2y):
This is currently a wiki-only tag. I doubt many posts are relevant to this, but we might make it usable again — we're still in the process of figuring out how useful a tag has to be to be worth adding to the tagging menu.

EA fellowships

I think it might be useful to have a tag for EA fellowships, meaning things like the EA Virtual Programs, which "are opportunities to engage intensively with the ideas of effective altruism through weekly readings and small group discussions over the course of eight weeks. These programs are open to anyone regardless of timezone, career stage, or anything else." (And not meaning things like summer research fellowships, for which there's the Research Training Programs tag.)

I think this'd be a subset of the Event strategy tag.

But I'm not sure i... (read more)

What do you think about a tag for posts that include Elicit predictions? I'd like to see all posts that include them and it might be a tiny further reminder to use them more.

This seems plausibly useful to me. Obviously it'd overlap a lot with the Forecasting tag. But if it's the case that several posts include Elicit forecasts but most posts tagged Forecasting don't include Elicit forecasts, then I imagine a separate tag for Elicit forecasts could be useful. (Basically, what I'm thinking about is whether there would be cases in which it'd be useful for someone to find / be sent a collection of links to just posts with Elicit forecasts, with the Forecasting tag not covering their needs well.)

But maybe a better option would be to mirror LessWrong in having a tag for posts about forecasting and another tag for posts that include actual forecasts (see here [https://www.lesswrong.com/tag/forecasts-specific-predictions])? (Or maybe the latter tag should only include posts that quite prominently include forecasts, rather than just including them in passing here and there.) Because maybe people would also want to see posts with Metaculus forecasts in them, or forecasts from Good Judgement Inc, or just forecasts from individual EAs not using those platforms. And I'd guess it'd make more sense to have one tag where all of these things can be found than to try to have a separate tag for each. (That's just my quick thoughts in a tired state, though.)

It could also be handy to have a tag for posts relevant to "Ought / Elicit" - I think it'd probably be good to bundle them together but note Elicit explicitly - similarly to how there are now tags for posts relevant to each of a few other orgs (e.g. Rethink Priorities, FHI, GPI, QURI). So maybe the combination of a tag for posts that contain actual forecasts and a tag for Ought / Elicit would serve the role a tag for posts containing Elicit forecasts would?

Can I create a tag called "EA Philippines", for posts by people related to EA Philippines, such as about our progress or research?  I'd like to easily see a page compiling posts related to EA Philippines. I could create a sequence for this, but a sequence usually implies things are in a sequential order and more related to each other. But our posts will likely be not that related to each other, so a tag would likely be better.

A counterargument is that I currently don't see any tags for any EA chapter, except for EA London Updates. But these aren't about EA... (read more)

Quick thoughts:

  • There's been some discussion of "country-specific tags" (and region-specific tags) here [https://forum.effectivealtruism.org/posts/rxbLqMDhd4832WYit/propose-and-vote-on-potential-tags?commentId=7urx2gnK874PdgAGg]
  • I think perhaps decisions about general principles for country-specific tags and general principles for EA-chapter-specific tags should be made in tandem
    • E.g., because it'd be a bit weird to have both a tag for the Philippines as a country (e.g., about the relevance of that country for EA cause areas) and a tag for EA Philippines
  • Maybe the best option would be to just have country- or region-specific tags that also serve sort-of like EA-chapter-specific tags, unless there are e.g. more than 10 posts relevant to that EA chapter specifically, or more than 20 posts that'd be in the whole tag?
    • (This is just one possible, quickly thought up principle)
  • But I'm not actually sure what the principles should be
    • E.g., if something like the above principle is adopted, I'm not sure what numbers should be used (I chose 10 and 20 pretty randomly)
    • And I'm not sure how that sort of principle should interact with the option of region-specific tags
      • E.g., maybe it'd be best to just have a tag like Southeast Asia, and let that play roles similar to those that would be played by country-specific and EA-chapter-specific tags for each country in that region?
      • Or maybe if there's a tag for Southeast Asia, that's so broad that it then becomes useful to have an EA Philippines tag (but without there being a need for a Philippines tag)?
I think it's a good idea to go with a Philippines tag rather than an EA Philippines tag. Both are quite interchangeable, because 100% of past posts (there are 5 of them) related to the Philippines are also written by people in EA Philippines, and 100% of past posts by EA Philippines are related to the Philippines. I think this will continue for quite a few years for ~80-100% of posts, since we expect only a few people to not be affiliated with EA Philippines but still be writing about the Philippines. I think that 90-100% of posts by EA Philippines will relate to the Philippines.

I also agree that for national EA groups, rather than have an EA-chapter-specific tag as well as a country-specific tag, we should just have the country-specific tag. I don't understand how a post related specifically to an EA chapter wouldn't also be related to the country, so I think one country tag (rather than a country and a chapter tag) is enough.

I would prefer to just have a Philippines tag already rather than a Southeast Asia tag. This is because:

  1. I think we'll hit 10 posts soon, i.e. by the midpoint of 2021:
    1. We already have 5 past posts that could be tagged under Philippines
    2. I have ~3 more posts coming up (likely this month) that would also be tagged under Philippines
  2. Therefore, rather than tagging these posts under Southeast Asia and then having to move them to Philippines after we hit 10 posts, I'd rather we just have them tagged under the Philippines already.

I think the principle should be something like: "If there are 5 or more posts already for a specific country or EA national chapter, and you would want to create a tag for easier visibility of posts related to that country/chapter, then you should create a tag for that specific country already." Let me know what you think of this principle!
That sounds good to me :) (Though of course this is just one person's thoughts - I have no official role in the EA Forum; I'm just a nerd for tags.)
Alright. I've gone ahead and made the Philippines tag here [https://forum.effectivealtruism.org/tag/philippines], along with a description for it. I've also tagged all 5 pasts posts on this topic already. The description I wrote could be a template for how other country-specific tags should be like. I felt that the description you wrote for China didn't apply as much to the Philippines tag.  If you or anyone else wants to let me know if the description is alright, or if I should change anything, let me know!
The description looks good to me! And I agree that it seems like it could be a useful example/template for other country-specific tags to draw on.

Country-specific tags

I just saw "creation of country specific content" as an example among the higher-rated meta EA areas in the recent article What areas are the most promising to start new EA meta charities - A survey of 40 EAs. What do you think about introducing tags for specific countries? E.g. I already have a couple of articles in mind that would be specifically interesting for members of German/Austrian/Swiss communities.

Personally, I think:

  • it probably makes sense to have at least some tags to mark that posts are relevant to particular countries/regions
  • but that this should probably be something like 2-20 tags, just in the cases where there are several posts for which the tag would be useful
    • rather than e.g. a tag for every country (which I'm not saying you proposed!)

Relevant prior tags and discussion

There are already tags for China [https://forum.effectivealtruism.org/tag/china] and the European Union [https://forum.effectivealtruism.org/tag/european-union]. The tag description for the China one (which I wrote) could perhaps be used as a model/starting point for other such tags:

And when I proposed the China tag [https://forum.effectivealtruism.org/posts/rxbLqMDhd4832WYit/propose-and-vote-on-potential-tags?commentId=KGPuyPQcCrwGpWkC7], I wrote:
Yes, I also had something like 5-15 tags in mind. Your proposal for China makes sense to me, though I had a more "internal" perspective in mind, where EAs from the US/UK/Australia/Germany/Canada/etc. could get an overview of articles that are relevant for their specific country and are maybe indirectly encouraged to add something. So I'd write it as

Looking at the EA Survey results [https://forum.effectivealtruism.org/posts/cvkqyxepf4W2whYSK/ea-survey-2019-series-geographic-distribution-of-eas] on geographic distribution, I'd maybe do:

  • US
  • UK
  • Australia-NZ
  • Germany-Austria-Switzerland
  • Canada
  • Netherlands
  • France
  • Scandinavia
  • Southeast Asia
  • Latin America

Should we have a tag for "Feedback Request"?

We in EA Philippines have made 2 posts (and have another upcoming one) already that were specifically for requesting feedback from the global EA community on an external document we wrote, before we post this document for the general public. See here and here as examples from EA PH, and this other example from a different author. 

I think it happens quite often that EAs or EA orgs ask for feedback on an external document or on a writeup they have rough thoughts on, so I think it's worth having this tag.

A pote... (read more)

Another potential argument in favor of having a tag for Feedback Request is that it might encourage EAs to share work with each other and get feedback more often, which is likely a good thing.

In my workplace at First Circle, we have a process called "Request for Comment" or "RFC", where we write documents in a specific format and share them on an #rfc Slack channel, so that people know we want feedback on a proposal or writeup in order to move forward with our work. This was very effective in getting people to share work, getting feedback on work asynchronously rather than via a synchronous meeting, and streamlining and housing feedback requests in one place. Maybe a tag for "Feedback Request" could also streamline things? For example, if an EA wants to see what they could give feedback on, they could click this tag to check out things they could give feedback on.

It could also be good practice for authors of feedback requests to put a deadline on when they need feedback by. This is so people backreading know if they should still give feedback after a deadline has passed.
I made a tag for requests [https://forum.effectivealtruism.org/tag/requests-open], which I think applies here if there is a specific request for feedback with timeframe. I'll write a short post about it now.
Oh cool, yeah I guess this works!
Yeah, I think I'd personally lean towards letting the thing Brian is describing be covered by the Requests (Open) tag. This is partly because, as Brian notes, "lots of authors (or most authors) would want feedback on their posts anyway, and it's hard separating which ones are feedback requests and which ones aren't."

I'm also not really sure I understand the distinction, or the significance of the distinction, between wanting feedback on an external doc before sharing it beyond the EA community and wanting feedback on a post before that post, or an adapted form of it, is shared beyond the EA community. (One part of my thoughts here is that I think a decent portion of posts may ultimately make their way into things shared beyond the EA community, and sometimes the authors won't be sure in advance which posts those are. E.g., MacAskill's hinge of history post [https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1] is now an academic working paper.)

That said, I've also appreciated the existence of Slack channels where people can solicit feedback from colleagues. (I've appreciated that both as an author and as a person who enjoys being helpful by giving feedback.) And the EA Editing & Review facebook group [https://www.facebook.com/groups/458111434360997/] seems to demonstrate some degree of demand for this sort of thing in EA. So maybe there's a stronger case for the tag than I'm currently seeing. (OTOH, maybe the need could be well met just by using the Requests (Open) tag and posting in EA Editing & Review?)

If a Feedback Request tag is made, perhaps it'd be worth linking in the tag description to Giving and receiving feedback [https://forum.effectivealtruism.org/posts/ZjiokgANEfu36LD6G/giving-and-receiving-feedback], Asking for advice [https://forum.effectivealtruism.org/posts/N3zd4FtGmRnMF7pfM/asking-for-advice], and/or Discussion Norms [https://forum.effectivealtruism.org/tag/discussion-norm

Update: I've now made this tag.


ITN framework

Proposed description:

The ITN tag is for posts about the Importance, Tractability, Neglectedness framework that is frequently used in effective altruism, or about highly related matters. This could include posts critiquing the ITN framework, discussing in abstract terms how it should and shouldn't be applied, and discussing other factors that could be considered alongside or instead of ITN. 

This tag is not necessarily meant to capture the much larger set of posts which in some way use the ITN framework. 


... (read more)

Longtermism (Cause Area)

We have various tags relevant to longtermism or specific things that longtermists are often interested in (e.g., Existential Risk). But we don't have a tag for longtermism as a whole. Longtermism (Philosophy) and Long-Term Future don't fit that bill; the former is just for "posts about philosophical matters relevant to longtermism", and the latter is "meant for discussion of what the long-term future might actually look like".

One example of a post that's relevant to longtermism as a cause area but that doesn't seem to neatly fit in ... (read more)

Agreed. Perhaps Longtermism (Philosophy) is redundant because it could be covered by Longtermism (Cause Area) + Moral Philosophy. If so, I'd suggest changing the name instead of opening a new tag.
Hmm, I think I'd agree that most things which fit in both Longtermism (Cause Area) and Moral Philosophy would fit Longtermism (Philosophy). (Though there might be exceptions. E.g., I'm not sure stuff to do with moral patienthood/status/circles would be an ideal fit for Longtermism (Philosophy) - it's relevant to longtermism, but not uniquely or especially relevant to longtermism. But those things do tie in to potential longtermist interventions.)

But now that you mention that, I realise that there might not be a good way to find and share posts at the intersection of two tags (which would mean that tags which are theoretically redundant are currently still practically useful). I've just sent the EA Forum team the following message about this:

So I'll hold off on making a Longtermism (Cause Area) tag, or converting the Longtermism (Philosophy) tag into that, until I hear back from the Forum team and/or think more or get more input on what the best approach here would be.

Update: I've now made this tag.

Fellowships or EA-Aligned Fellowships or Research Fellowships or something like that

Stefan Schubert writes:

It could be good if someone wrote an overview of the growing number of fellowships and scholarships in EA (and maybe also other forms of professional EA work). It could include the kind of info given above, and maybe draw inspiration from Larks' overviews of the AI Alignment landscape. I don't think I have seen anything quite like that, but please correct me if I'm wrong.

Maybe this would be partially addressed via a tag ... (read more)

UPDATE: I've proposed the change to the tag.

Proposal: Change the EA Global tag to EA Conferences.

Many of the tagged posts are relevant to the EA Student Summit, EAGx events, etc., and the tag's description itself refers to conference posts.

(Update: I've now made this tag.)

Institutions for Future Generations

This is arguably a subset of Institutional Decision-Making and/or Policy Change. It also overlaps with Longtermism (Philosophy) and Moral Advocacy / Values Spreading. But it seems like this is an important category that various people might want to learn about in particular (i.e., not just as part of learning about institutional decision-making more broadly), and like there are many EA Forum posts about this in particular.

Advanced Military Technology (or some other related name)

Proposed description:

The Advanced Military Technology tag is for posts about military technologies that are on the cutting edge, that are in the process of development, that appear to be on the horizon, or that could plausibly be developed in future. This could include both "entirely new" technologies and substantial advances in existing technologies.

See also Armed Conflict, Autonomous Weapons, and Differential Progress.

Other tags that this overlaps with include: AI Governance, Atomically Precise Man... (read more)

JP Addison · 2y
I agree with whoever upvoted the other of the two tags you made this day but not this one. I would want to see more posts that formed a natural cluster around this concept. The one example is good, but I can't recall any others.
Yeah, that makes sense. I'll hold off unless I encounter additional relevant posts.

(Update: I've now made this tag.)

China (or maybe something broader like BRICS or Rising Powers)

Rough proposed description:

The China tag is for posts that are about China, that address how China is relevant to various issues EAs care about, or that are relevant to how one could have an impact by engaging with China.

See also Global Outreach and International Relations.

It seems perhaps odd to single China out for a tag while not having tags for e.g. USA, Southeast Asia, ASEAN, United Nations, Middle Powers. But we do have a tag for posts relevant to the Europ... (read more)

Markets for Altruism or Market Mechanisms for Altruism or Impact Certificates or Impact Purchases (or some other name)

Tentatively proposed description: 

The Markets for Altruism tag is for posts relevant to actual or potential market-like mechanisms for altruistic or charitable activities. An example would be certificates of impact.

See also EA Funding.

The posts listed here would fit this tag. Some other posts tagged EA Funding might fit as well.

I'm unsure precisely what the ideal scope and name of this tag would be. 

JP Addison · 2y
I like it. Impact Certificates is more recognizable, but Markets for Altruism is more general. I think I agree with your favoring it.
Cool, thanks for the input - given that, I've now made the tag, with the name Markets for Altruism :)

Update: I've created the tag "Discussion Norms"

Community Norms/Discussion Norms

Very Bad Description: Posts that discuss norms for how EAs interact with each other.

Posts this tag could apply to: 

... (read more)
My quick, personal take is that:

1. A tag for Discussion Norms seems useful and distinct from the other tags you mention. It also wouldn't have to only be about discussion norms for intra-EA interactions - it could also be about discussion norms in other contexts.
2. "Community Norms" and "Posts that discuss norms for how EAs interact with each other" feel very broad to me, and it's harder for me to see precisely what that's trying to point at that isn't captured by one of the first three other tags you mention.
3. But I have a feeling that something like Community Norms/Discussion Norms could have a clear scope that's useful and distinct from the other tags. Maybe if you just try to flesh out what you mean a little more in the description, it'd be clear to me?
4. Maybe what you have in mind will often relate to things like being welcoming, supportive, and considerate? If so, maybe adjusting the tag label or description in light of that could help?
Vaidehi Agarwalla · 2y
I think Discussion Norms makes sense!

Discussion Norms: Posts about suggested or encouraged norms within the EA community on how to interact with other EAs, which may often relate to being supportive, welcoming, and considerate.

It's still not great; if you have any feedback, I'd be keen to hear it!

Anyone have thoughts on this tag? I'm skeptical, but might be more inclined if I saw more applications that were good, and if it had a description that explained its naturalness as a category in the EA sphere. (If this were a business forum it would obviously be good, and maybe it is in this Forum — I'm not sure.)

My quick take is that it does seem like it at least needs a description that explains why it warrants an EA Forum tag. I'd wonder, for example, whether it's meant to just be about scaling organisations (e.g., EA orgs), or also about scaling things like bednet distribution programs. (Or maybe those two things are super similar anyway?) 

I think it would be useful to be able to see all the posts from a particular organisation at once on the forum. For the most part, individuals from those organisations post, rather than a single organisation account, so it can be difficult to see e.g. all of Rethink Priorities' research on a given topic.

Curious to hear if people think it's better to have tags or sequences for grouping these posts?

Vaidehi Agarwalla · 2y
New issue: How do we deal with name changes? (E.g., EAF became CLR, and .impact became Rethink Charity.) I think it's nice to have a single tag (the new name) for continuity, but sometimes an org had a different focus or projects associated with the old name. Maybe it's enough to mention in the tag description "previously called X"?
Update: I've now made tags for Rethink Priorities, Future of Humanity Institute, and Global Priorities Institute. I believe I've tagged all RP posts. I wasn't very thorough in tagging FHI or GPI posts. Other people can tag additional FHI and GPI posts, and/or add tags for other orgs. 
JP Addison · 2y
A possibility would be to add the organization as a coauthor for all official posts.
I think something like this would be a good idea :) Some thoughts:

  • One downside could be that we might end up with quite a few of these tags, which then clutter up the tags page.
    • Maybe it'd be best if the Forum team can set it up so there's a separate, collapsable part of the tags page just for all the organisation tags?
    • That might also make it easier for someone who's looking for org tags in general (without knowing what specific orgs might have tags) to find them.
  • Most EA organisations probably already have pages on their site where you can find all/most of their research outputs. E.g., Rethink Priorities' publications page [https://www.rethinkpriorities.org/publications].
    • But one thing tags allow you to do is (from the home page of the forum) filter by multiple tags at once. So you could e.g. filter by both the Rethink Priorities tag and the Wild Animal Welfare tag, to find all of Rethink's posts related to that topic.
    • That said, I've never actually used the approach of filtering by multiple tags myself.
    • And the lists of publications on an org's site may often be organised by broad topic area anyway. Though this could still be useful if you want to see if an org wrote something related to a concept/topic they probably wouldn't organise their pages by (perhaps because it's cross-cutting, slightly obscure, or wasn't the main focus of the post) - e.g., if you want to see whether Rethink has written anything related to patient altruism [https://forum.effectivealtruism.org/tag/patient-altruism].
  • I think tags might be better than sequences for this purpose. One reason is the above-mentioned benefit of allowing for filtering by both org and some tag. Another reason is that these posts usually won't really be sequences in the usual sense - it won't be the case that the order of publication is the most natural order of reading, and that one gains a lot from reading them in order.
Vaidehi Agarwalla · 2y
I agree that tags seem better than sequences.

I think rather than special org tags, it may be better to just have them be regular tags. This would solve the issue of which organisations get org tags. I think it's okay for people to tag their own early-stage projects or orgs even if they aren't very big (I'm biased here, as I have some projects which I would like to be able to link people to). I don't think there's a lot of risk - having a tag doesn't mean your project is endorsed by EA or anything; it's just an organisational tool. I think this is probably the best strategy!

Also, congrats on starting at Rethink :)

Change My View!

I found r/ChangeMyView recently and I think it's the bee's knees. "A place to post an opinion you accept may be flawed, in an effort to understand other perspectives on the issue."

There are already a good deal of questions and posts inviting criticism on this forum, and this tag could organize them all for the people who enjoy a good, clean disagreement/discussion. It could be used especially (or only) for ideas with <50% certainty.

The subreddit itself is a cool place to go, but many issues are more fruitfully discusse... (read more)

I've added a Meta-Science tag. I'd love some help with clarifying the distinction between it and Scientific Progress.

Generally, I imagine meta-science as being more focused on specific aspects of the academic ecosystem, and scientific progress as relating more to the general properties of scientific advances. There is clearly an overlap there, but I'm not sure where exactly to set the boundaries.

Vaidehi Agarwalla · 2y
I think an example of the overlap would be if, say, in the field of survey methodology, someone discovers a new way to measure bias in surveys - this would be a meta-science improvement, but also scientific progress in the field of survey methodology.