

Inspired by this piece, I suggest that people interested in demonstrating familiarity with some EA-related ideas read and discuss EA Forum posts. This can bring innovative insights, should not unnecessarily burden authors with explaining introductory concepts, and can prevent the perception of right and wrong answers.

Further, participants will already have a ‘record’ of their thinking and its development for grant and hiring managers to review, which is rarely the case for videoconference-based fellowships. Currently, given the funding abundance and a general atmosphere that motivates impact-maximization thinking, this can be more valuable than a ‘certificate’ of having internalized specific ideas, such as a willingness to donate abroad or to pursue careers recommended by a specific advisory organization.

Following the Decade Review, I made a list of questions that can assist with perspective development on various EA-related questions. I categorized them in the following areas: 1) Community building (with sub-categories), 2) AI safety, 3) Earning to give, 4) Animal welfare, 5) Law, 6) Policy, 7) Global health, wellbeing, and development, 8) Existential risk, 9) Criticism of EA, and 10) Intersectional. Due to this timeline and recent developments on the Forum, some valuable posts are not included and some questions are already outdated.

I would most like to discuss whether participation on the EA Forum should constitute a fellowship alternative and, if so, what the format should be. I would also appreciate feedback on any of the questions.

I thank Edo Arad for feedback on the framing of this post.

Categorized posts with discussion questions

Community building

Career advising

Most research/advocacy charities are not scalable

  • What are the arguments that support that research and advocacy organizations are not scalable?
  • What would you suggest that a community member interested in large-scale research or advocacy consider?

Dealing with Network Constraints (My Model of EA Careers)

  • What would you recommend to a community member interested in working in an early-stage EA-related organization?

A beginner data scientist tries her hand at biosecurity

  • What would you recommend to the author if they were seeking to re-evaluate their career path in the future?

Volunteering isn't free

  • What information would you need to have in order to recommend a volunteer role at an EA-related organization to a community member?

How valuable is ladder-climbing outside of EA for people who aren't unusually good at ladder-climbing or unusually entrepreneurial?

  • What ladders (if any) would you suggest that different EA community members climb?
  • What roles in and outside of EA would you recommend to community members who are not unusually good at ladder-climbing or unusually entrepreneurial?

More EAs should consider “non-EA” jobs

  • What fundamental values did the author bring to her EA-unlabeled position?
  • What values (skills, principles, knowledge) would you recommend that different community members develop before taking EA-unlabeled roles?

SHOW: A framework for shaping your talent for direct work

  • What are the four talent shaping paths recommended by the SHOW framework?

Big List of Cause Candidates

  • What EA-related cause candidates does the author list?
  • What cause candidates would you remove from or add onto this list?

A central directory for open research questions

  • What open EA-related research question lists exist?
  • How would you go about directing a community member with specific research interests and expertise to questions that they may be interested in exploring?

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation

  • According to this post, why can it be difficult to get hired by some EA-related organizations?
  • What steps to gain a counterfactually contributing position would you recommend to similar candidates?

Early career EA's should consider joining fast-growing startups in emerging technologies

  • What skills, attitudes, and networks may early-career professionals develop and miss by joining fast-growing startups in emerging technologies? Based on your answer, what are some startup characteristics that you would recommend that EAs look for?

Community organization

Sentientism: Improving the human epistemology and ethics baseline

  • Considering reputational loss risk, what would you recommend to the Sentientism organization?

Ben Todd: How can we best work together as a community?

  • What knowledge and skill gaps does this 2017 post highlight?
  • Currently, what knowledge and skill gaps are the most important in EA?
  • How, if at all, would you suggest that EA fund talent development to address these gaps?

Judgement as a key need in EA

  • How does the author define ‘good judgement’?
  • How can ‘good judgement’ be developed?

Better models for EA development: a network of communities, not a global community

  • What EA community models does the author describe?
  • What are the aspects necessary for these models?

Community strategy

Hard-to-reverse decisions destroy option value

  • How would you quantify option value?
  • What measures are implemented in the EA community to prevent hard-to-reverse decisions?

Possible gaps in the EA community

  • What are the four gaps in the EA community, according to this article?
  • What other gaps can be important for a successful EA community strategy?

Movement Collapse Scenarios

  • According to this article, what are the four movement collapse scenarios?
  • What steps is the EA community taking to prevent these scenarios?

Six Ways To Get Along With People Who Are Totally Wrong*

  • According to this post, what are the six ways to address disagreement about EA-related ideas?

Some thoughts on deference and inside-view models

  • What is a one-sentence summary of this post?

High School Seniors React to 80k Advice

  • What are the five high school seniors’ objections to 80,000 Hours advice?
  • Based on these objections and the general EA community strategy, how would you suggest that 80,000 Hours changes its content?

Reasons to eat meat

  • What are strategically sound and unsound reasons to eat meat?
  • For what types of community members who could otherwise be veg*n would you suggest strategically eating meat?

Community scale-up

How valuable is movement growth?

  • How does the author quantify the value of movement growth?
  • What assumptions are made?

EA Workplace/Professional Groups

  • What are the possible benefits and limitations of running EA workplace/professional groups?
  • How can EA workplace/professional groups develop their Theories of Change (feel free to use a case study)?

Higher-stakes community building

How we promoted EA at a large tech company and How we promoted EA at a large tech company (v2.0)

  • How did the author approach EA community building at Microsoft?
  • In general, what option value loss risk mitigation strategies would you recommend for EA groups at large tech companies?

Could GiveWell create a cryptocurrency to raise a lot of money?

  • What does the author recommend?
  • What are the limitations of this recommendation?

Use "care" with care.

  • How does the author define “care?”

Capacity development

EA Infrastructure Fund: May–August 2021 grant recommendations

  • Based on this post, what grants does the EA Infrastructure Fund recommend?
  • Using information in this post and other relevant information, what types of ventures would you refer to EAIF?

Is effective altruism growing? An update on the stock of funding vs. people

  • Based on this article and general reasoning, what EA-related approach changes would you recommend that EA community members with various capital and interests consider?

A review of what affective neuroscience knows about suffering & valence. (TLDR: Affective Neuroscience is very confused about what suffering is.)

  • How does the mind work?
  • How is neuroimaging employed to study valence? Is there low-risk room for improvement?


The community's conception of value drifting is sometimes too narrow

  • How does the author define value drift?

Effective Altruism is a Question (not an ideology)

  • How does this 2014 post define effective altruism?
  • What has changed about this definition since the time of writing?
  • What should have changed about this definition since the time of writing?

List of ways in which cost-effectiveness estimates can be misleading

  • What misleading aspects of cost-effectiveness estimates are still used in EA? What would you recommend to a specific actor or actors to address this issue?
  • What are the arguments for misleading cost-effectiveness estimates in EA?

Minimalist axiologies and positive lives

  • What are some strengths and limitations of minimalist axiologies?
  • When do minimalist axiologies have counterintuitive implications in practice, and when do they not?
  • How do you decide when persons’ axiologies differ?


Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

  • How can groupthink in introductory EA fellowships be avoided while covering all important points?

AI safety

Aligning Recommender Systems as Cause Area

  • How do the authors suggest that recommender systems be aligned?
  • What are some possible drawbacks of aligning recommender systems?

A Simple Model of AGI Deployment Risk

  • How would you summarize this paper for a busy decisionmaker looking for an AI timelines answer?

Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021)

  • What does the US seem to prioritize and deprioritize regarding AI technology development?

Characterising utopia

  • How does the author characterize utopia?
  • What changes and additions would you suggest to this piece?

Collective superintelligence

  • Can you think of an example of collective superintelligence? What company (considering corporation subsidiaries) owns it? What are the implications for EA?

[Link] How understanding valence could help make future AIs safer

  • What are some risks associated with valence research? Who should have access to valence research information?

The academic contribution to AI safety seems large

  • How do you estimate (quantitatively and qualitatively) the developments of AI safety in academia if EA contributes to various extents and by various means? What does this imply about AI safety research prioritization in EA? What information regarding AI and academia would you need to make these estimates and what additional factors would you consider in making your recommendations to EA?
  • What would you update in the Guesstimate models at the high level? What would help you better orient yourself in Guesstimate models in general? What could help reduce biases? When is top-down versus bottom-up modeling more appropriate, in general?
  • What does the author understand as AI alignment work (rather than simply AI work)? How would it and would it not make sense that AI scientists do not focus on alignment work?
  • All information introduced in this document and additional knowledge considered, should academics focus on AI alignment work to a greater extent? How should they gain the capacity?

Earning to give

Seed funding nonprofits: a high-risk, high-reward approach (Founders Pledge)

  • What arguments does the author present for seed funding nonprofits?
  • What organizations exist in this space?
  • How would you go about selecting a non-profit to invest in?

EA Should Spend Its “Funding Overhang” on Curing Infectious Diseases

  • What arguments for spending EA “funding overhang” on curing infectious diseases does the author present?
  • With this approach, what problems would likely be resolved and which could remain?

Common responses to earning to give

  • According to this post and using general reasoning, what are the arguments for earning to give?
  • According to this post and using general reasoning, what are the arguments against earning to give?

Announcing EffectiveCrypto.org, powered by EA Funds

  • What donation bundles does EffectiveCrypto.org offer?
  • What charities would you include in a bundle to inform systemic change?

Animal welfare

Research Summary: The Intensity of Valenced Experience across Species

  • According to this post, how does the intensity of valenced experiences vary across species?

EAA is relatively overinvesting in corporate welfare reforms

  • What critiques of corporate animal welfare reforms does the author present?
  • According to this post and using general knowledge, what alternatives can be cost-effective?

Longtermism and animal advocacy

  • How does the author relate longtermism and animal advocacy?
  • According to this post, what are the longtermist implications for animal advocacy?

Corporate campaigns affect 9 to 120 years of chicken life per dollar spent

  • How did the author reach his conclusion on the cost-effectiveness of corporate chicken welfare campaigns?
  • What assumptions were made in these calculations?

Six lessons learned from our first year - Animal Ask

  • How did Animal Ask prioritize animal welfare ‘asks’ for the UK government?
  • What six lessons did Animal Ask learn?

[Link] US Egg Production Data Set

  • What percentage of egg-laying hens in the US lived in cage-free systems at the time of writing of this post?
  • What is the percentage presently?

Wild Animal Welfare Literature Library: Introductory Materials, Philosophical & Empirical Foundations

  • Is wild animal welfare important? What definition of importance does the author use?
  • Should interventions in nature be supported? What types of interventions?
  • What are some of wild animals’ worst concerns, according to the author? What additional or alternative concerns can be perceived?
  • What does Jordan Ross conclude about wild animal suffering? What assumptions are made?
  • What does Oscar Horta suggest about the situation of animals in the wild?
  • How does Oscar Horta describe natural processes in the wild? What counterarguments does he omit?
  • What does Tobias Baumann of the Wild Animal Initiative advocate for?
  • Abraham Rowe (broken link)
  • What did Brian Tomasik assert regarding the focus on wild animal suffering in 2009–2013? How has this changed since?
  • According to the author and using additional knowledge and reasoning, when do intentions to cause suffering morally matter?
  • What does Jeff McMahan assert regarding the subjective perception of predation? What evidence does he use for his argument?
  • How does Jeff McMahan discuss his earlier piece “The Meat Eaters?” What issues does he omit?
  • What does Mikel Torres argue in his paper? How would different decisionmakers likely react to this piece and why?
  • What views regarding intervention in nature does Eze Paez review? How does the author discuss the environmentalist view?
  • What possibly cost-effective interventions regarding wild animal welfare does Tyler Cowen suggest?
  • What empirical claims does Ole Martin Moen make? What moral and pragmatic reasons to avoid alleviating wild animal suffering does he offer? Considering current and potential capacity, what attitudes toward wild animal suffering are possible, according to the author?
  • What reasons why most people do not care about wild-animal suffering did Ben Davidow suggest in 2013? What parallels can be drawn with other EA-related causes?
  • What problems related to r-strategists, and what possible interventions, does Kyle Johannsen describe?
  • What question does Catia Faria answer with her thesis? What arguments are uniquely presented in this piece?
  • What solutions to the predation problem does Jozef Keulartz suggest?
  • What does Dr. Ng recommend regarding welfare biology? What AI sentience insights does this 1995 piece suggest? What parts are still relevant and which are outdated?
  • What analogies to wild animal welfare do the authors suggest? What can be the influence of this piece on different decisionmakers?
  • What proposition does Brian Tomasik centralize?
  • How does Michael Plant determine net positivity of an animal’s life? What wild animal suffering solutions does the author suggest?
  • What wild animal welfare determinants does Ozy Brennan suggest?
  • Georgia Ray (broken link)
  • How does the author approximate the number of wild animals in the world?
  • Georgia Ray (broken link)
  • How does the life-fate concept estimate suffering in the wild? What welfare measures may be complementary to this approach?

Interview with Jon Mallatt about invertebrate consciousness

  • What type of consciousness does Jon Mallatt discuss? According to the discussant, what is necessary to this type of consciousness? What determines the degree of consciousness? What types of pain do animals feel?


Ways EU law might matter for farmed animals

  • What provisions regarding farm animal welfare in Europe do you think would be particularly worth building upon, and which should instead be reworked?
  • What are some avenues through which the public communicates with the European Commission?


The Case for Promoting / Creating Public Goods Markets as a Cause Area

  • How do public good markets differ from government taxation and spending?
  • How does the author suggest deciding on financial allocation? Would the hypothesized consequences likely take place?

Thoughts on electoral reform

  • What arguments are used in favor of EAs’ electoral reform advocacy?
  • What voting systems can accurately represent persons’ preferences?

Climate Recommendations in EA: Giving Green and Founders Pledge

  • How much does it cost to avoid or remove one metric ton of CO2 equivalent? What is this number in context?

Global health, wellbeing, and development

Alice Redfern: Moral weights in the developing world — IDinsight’s Beneficiary Preferences Project

  • At what amounts did respondents switch between a child’s life and cash? What does this tell us about their preferences?
  • If a similar experiment was conducted in an industrialized nation with similar results, what would you conclude about the respondents’ preferences?

Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty

  • What is the difference between quality- and wellbeing-adjusted life years?
  • Where can the ‘neutral point’ be on a 10-point scale?

Drowning children are rare

  • How does the author reason about the possibility of virtually eliminating deaths due to some preventable diseases with the currently pledged resources? What can be the implications for EA donors?

We care about WALYs not QALYs

  • What does this 2015 post assert? Has this thinking been implemented since?

RCTs in Development Economics, Their Critics and Their Evolution (Ogden, 2020) [linkpost]

  • What are the seven categories of RCT critics? Which of them seem valid in the present context?

Existential risk

Possible misconceptions about (strong) longtermism

  • What are the nine possible misconceptions about longtermism? Which of them can be perceived as valid by some persons?
  • What two issues does the author specify?

Why I am probably not a longtermist

  • What does the author misunderstand about longtermism, if anything?
  • What near-term development could motivate the author to endorse longtermism?

Existential risk factor

  • What are some existential risk factors that the EA community addresses? Are there any EAs who may contribute to existential risk?

Reducing long-term risks from malevolent actors

  • What is the role of institutions in preventing the rise to power of actors who could cause catastrophic outcomes?
  • Should catastrophic malevolence be prevented by confronting risky actors or otherwise?

[Creative Writing Contest] The Reset Button

  • How does this essay alter your attitude toward career decisionmaking? Is this counterfactually positive or negative?

Nines of safety: Terence Tao’s proposed unit of measurement of risk

  • Could the introduction of a logarithmic existential risk measure alter (different) decisionmakers’ relative focus on risks of various probabilities (e.g. due to bias)? How can bias be avoided among decisionmakers with less applied mathematics practice? How would this computation transform probability distributions?

Criticism of EA

The case against “EA cause areas”

  • Why are some cause areas more prominent in EA than others?
  • If someone has a comparative advantage in working in a cause area that is not popular in EA, how can they know to what extent they have counterfactually positive impact?
  • What causes should ‘EA include’ to have wide influence? How can it prevent distracting itself?
  • Can you share an example where helping the same company become more effective has large directly negative or positive consequences?
  • How can independent thinking be supported while checking for the pursuit of selfish interests (including unintended institutional interests)?

Why I left EA

  • Does this person have a deep understanding of EA? What could have been done to enable them to make more informed and rational decisions regarding their involvement in EA?

Some blindspots in rationality and effective altruism

  • What are some thinking options that the author specifies? Would these be useful to EA? What questions could these viewpoints address? What grantmaking adjustments would need to take place?

Effective altruism is self-recommending

  • Does EA implement a ‘flawed reinforcement learning’ mechanism on its reasoning and activities?
  • Is anyone checking the results of EA-related activities? Which ones are the most and the least scrutinized?
  • Should EA avoid the (monetary and otherwise) influence of large organizations with high (positive and negative) potential?
  • What strategies is EA using to prevent biased grantmaking? How would you improve these strategies?
  • Is EA focused on a few programs in which it is also ‘existentially’ invested? How could EA prevent this self-reliance, if at all?

Democratising Risk - or how EA deals with critics

  • Could you list several examples of what can go wrong with existential risk studies?
  • How would you go about estimating if funders have been lost due to this critique?
  • What progressive steps does the piece suggest? Could these be offered without appealing to emotions?
  • Can you think of an EA critique which would actually upset funders? Would this be funded by various EA sources?
  • What measures can be implemented to prevent decisionmakers’ distractions with emotional work while maximizing the value they gain from critical discourse?

The motivated reasoning critique of effective altruism

  • Should EA seek to motivate the ‘soldier mindset’ about anything, such as doing the most good? What are the long-term risks?
  • To what extent is the openness and dynamicity of EA discourse sufficient in different groups, organizations, and idea-exchange fora? What is the limiting factor of thought progress in a situation where addressing it can be resource-effective?
  • Would an inclusion of a relatively less impactful but publicly appealing organization in EA be a problem of motivated reasoning, ‘selection bias,’ or something else? Does it matter whether the inclusion is ‘strategic’ or based on limited cost-effectiveness scrutiny by some EA community members?
  • Considering that community members learn, and that it may be a suboptimal use of senior mentors’ time to supervise people in developing shareable skills, what should the standards of rigor for cost-effectiveness analyses be?

Beware surprising and suspicious convergence

  • What is a cool way of finding what is good for human welfare?
  • What are some ways in which EA is preventing suspicious convergence?

[WIP] Summary Review of ITN Critiques

  • Is there anything that the author misunderstands about the ITN framework (consider the equation)?
  • What critique do you agree and disagree with? Are there any contradictions? Of the valid arguments, which are the most valuable to address in EA? How are you using the ITN framework to answer this question, if anyhow?

Julia Galef and Angus Deaton: podcast discussion of RCT issues (excerpts)

  • Is Dr. Deaton biased, or is it the data that is (e.g. due to experimenter bias)?
  • What could an objective emerging economy government official agree with in this piece?

Don't we need political action rather than charity?

  • Do we need political action when we have charity and vice versa?

A critique of effective altruism

  • Which of these critiques are current, negative, and valuable to entertain? What is an example of a constructive follow-up on these?


Community organization and long-term future

Long-Term Future Fund: July 2021 grant recommendations

  • What is the ‘bar’ for the Long-Term Future Fund?
  • Has the Fund made any grants that benefit only the medium- or short-term future?

Community scale-up and AI safety

Feedback Request on EA Philippines' Career Advice Research for Technical AI Safety

  • What resources would you recommend to EA Philippines to develop their technical AI safety advising capacity (while leveraging interest useful to EA, avoiding interest that can distract or dilute the community, and enabling comparative advantage development)?

Career advising and AI safety

Crazy ideas sometimes do work

  • How ‘crazy’ was the author’s decision to apply for EA funding to pursue an AI safety PhD?
  • Based on this post, what backgrounds might people whom you would recommend apply for academic-related Long-Term Future Fund funding have?

Criticism of EA and existential risk

The Precipice: a risky review by a non-EA

  • How is this publication ‘risky’? Should it have been translated into English?
  • What does the author of this post misinterpret about the book?
  • What value to EA does the post’s author bring?
  • According to the author of this post, what is the book’s objective?

Global health, wellbeing, and development and animal welfare

Poor meat eater problem

  • Is there a way to avoid the poor meat eater problem? Can you share historical examples where something similar happened?

Global health, wellbeing, and development and policy

In search of systems change that lasts: 3 lessons we've learned at Fortify Health

  • How long can systems developed in the overviewed way last? Could you qualify that in addition to quantifying it?
  • Can you think of ways in which systems can be changed more cost-effectively?
  • Can you think of contexts to which this framework does not translate?

Policy and existential risk

The catastrophic primacy of reactivity over proactivity in governmental risk assessment: brief UK case study

  • What kind of disasters should the UK prepare for and how? What budget should it use for that?

Existential risk and policy

Space governance is important, tractable and neglected

  • What rules and norms should be enforced in space? How can bias in decisionmaking be prevented?
  • What technologies could be used for space governance? How can universal accessibility be ensured?

Animal welfare and capacity development

How we verify charities' claims

  • How does Animal Charity Evaluators assess charities’ claims? Of all organizations’ claims, how many has it assessed so far? How can this process be improved?

AI and epistemics

On Progress and Prosperity

  • To what extent and how should the relative rate of issue creation and problem solving (capacity development) be considered when deciding on technological investments as means to safeguard long-term prosperity?
  • How much of the progress in human development indicators can be attributed to technology? Could technology have biased the development of these indicators?
  • Imagine that people in 1900 had gained the technology of today; would that have increased their development? What would the meaning of ‘development’ be in this case? Are there any other objectives that they could seek to pursue? Imagine the same comparison with the technology of 2050 and today. What does this imply for current investment prioritization in EA?

Existential risk and epistemics

The expected value of extinction risk reduction is positive

  • Based on this article, is humanity's extinction positive or negative? How would you compare that to the (dis)value of the extinction of three other species? Would this conflict with Peter Singer’s principle of equal consideration but differing treatment of species, introduced in his “All Animals are Equal”? What epistemic measures in EA could be taken to prevent dystopias as perceived by any actor?
  • Is it likely that humans’ moral circle will continue expanding commensurably with their space expansion? Will this be sufficient to estimate humanity’s survival as positive?
  • How does Open Philanthropy conclude that cows are conscious with 80% probability?
  • How should option value creation be taken into account?
  • What of our moral values would our ancestors find especially repugnant? What does this imply about the quality of our values?

Existential risk and criticism of EA

Which World Gets Saved

  • How does the author rationalize his claims regarding future violence development?
  • Based on this post and your general knowledge, should marginal focus on existential risk mitigation in EA be increased, decreased, or maintained?






Finally. An EA fellowship that cannot reject me.  Victory is mine!

Joking aside, I really like a lot of the questions here. It's also worth bearing in mind that a lot of the categories can overlap with each other, which is another big bonus I see. Ideally, would these discussions take place on a single megathread, or across multiple smaller ones? Perhaps stickied topics in each tag?

Yes! Thank you. I think maybe there can be some organized page of summaries that people going through a 'fellowship' can update, as an aspect of the Wiki. Otherwise, just writing a comment, or a comment on a comment, can be a good way to demonstrate that one has thought about the topics. Or, forming several narratives of the articles can be nice (the activity where anyone writes the next sentence).

Thank you for pointing out the overlap. I can only come up with an organization according to a vector space, where the elements are the extent to which an article relates to specific topics, but it would be nice to have something with better flow and with paths (with intersections) that would lead one to go through a bit at a time.

A megathread would not solve the organization issue and could feel like the thoughts developed are not being utilized. Multiple smaller threads can be cool, but mostly for questions that are actually advanced by discussion or that can be interesting to get opinions on (not, e.g., asking someone to rephrase main points). Stickied questions under tags may be a solution; also, once a question is somewhat resolved or the opinions of the time are gathered, it can be replaced.

I think that there is some value to this idea. I'm not confident about how well it would work for people to read a lot, think, comment, and then have others consider that the "equivalent" of an Intro Program. But for what it is worth, I read a lot of EA content before I took the Intro Program and the In-Depth Program, and when going through those programs I felt I already knew about 80% of the content.

The above list could certainly be a first draft of a syllabus for either self-study or for group discussion. However, links to the posts would make it much more usable.

OK, but did the discussion programs add something that you would not get from reading the Forum, such as a nicer experience (or motivation to meet more people in person) or ideas on topics that could also be interesting to your colleagues? If so, then should there be an option without these aspects (e.g. for people less interested in chats focused more on 'dynamics navigation')?

Also, it could be argued that the fraction of material that people who do not participate in a discussion-based program would not read can be crucial to people's understanding of EA. But early-on specialization can be optimal. For example, consider a person interested in global development who never reads any insect welfare texts. They think about massively scaling up insect farming to enable people to escape poverty. They address skepticism from insect welfare researchers with assurances of positive welfare. Thus, the researchers are motivated to find a solution optimal for both humans and insects. If everyone read introductory texts from both (all) areas, it is possible that the thinking that led to this mutually beneficial solution would never have developed.

Thanks! Yeah that makes sense.

did the discussion Programs add something that you would not get from reading the Forum, such as nicer experience (or motivation to meet more people in person) or ideas on topics that could be also interesting to your colleagues

Kind of. In my intro fellowship, all the other people failed to show up for most of the meetings, so for 6 or 7 of the 8 meetings it was just me and the facilitator on a video chat. I didn't really get any motivation to meet people in person from the program, nor motivation to attend an EA event. The biggest benefit for me was the ability to bounce ideas around and have an instant response/reply, shortening the feedback loop compared to simply reading and Googling around on my own.

I likely also would have had a very different EA experience if I lived in a country with a lot of EAs and open/welcoming EA groups, or if international travel had never been impacted by COVID. I imagine in that kind of scenario I would have joined in-person events.

the fraction of material that people who do not participate in a discussion-based program would not read

I think it is possible for someone to work through a syllabus/list of resources on their own or with a group of friends. And I think that working through a syllabus has a big benefit of making sure you don't miss important topics. If I only read the things that are interesting to me, then there are some relevant/important topics that I would never learn. I really enjoyed the ideas of moral cluelessness and moral uncertainty, but I likely wouldn't have encountered them if I had simply been reading on my own.

for 6 or 7 of the 8 meetings it was just me and the facilitator on a video chat

Wow, maybe the number of people in a group could be somewhat increased.

The biggest benefit for me was the ability to bounce ideas around and have an instant response/reply, shortening the feedback loop compared to simply reading and Googling around on my own.

OK, maybe 'commenting sprees' could be implemented; otherwise, this aspect can be difficult to imitate in a written format.

commenting sprees

I think something like that could work decently. Perhaps something like a two-hour block of time when people are all encouraged to be active in the chat room or actively responding to comments.
