Recent Discussion

I think about those working at MIRI or on x-risk more broadly, who may see little to no benefit in their lifetime; potentially only their grandchildren will reap the benefits.

I'm pursuing a somewhat longtermist project, but I'm having trouble staying motivated through obstacles and navigating through the "grind" right now. I was wondering what others have done to get through this or how they've stayed motivated when success can seem so far away? Especially if what they're doing is really only going to help in the far future.

I'm afraid I don't have any original recommendations, but have you read the EA Handbook Motivation Series? Nate Soares' 'On Caring' might be particularly relevant.

I was also talking with some other EAs about this recently and one of them mentioned Metta meditation, which is essentially a meditation that focuses on creating an expanding circle of goodwill, which could hypothetically include the long-term future. If meditation is your thing, it might be worth a shot.

How concerned should we be about replaceability? One reason some people don't seem that concerned is that the leaders of EA organizations reported very high estimates for the value of their new hires. About twenty-five organizations answered the following question:

For a typical recent Senior/Junior hire, how much financial compensation would you need to receive today, to make you indifferent about that person having to stop working for you or anyone for the next 3 years?

The same survey showed that organizations reported feeling more talent constrained than funding constrained.

On a scale o
... (Read more)

Great post! I'd been meaning to comment for a while - better late than never, I suppose.

One thing I wanted to add - I've talked with ~50 people who are interested in working at EA orgs over the last six months or so, and it seems like a lot of them come to the decision through a process of elimination. Common trends I see:

  • They don't feel well-suited for policy, often because it's too bureaucratic or requires a high level of social skills.
  • They don't feel well-suited for academia, usually because they have less-than-stellar marks or dislike t
... (read more)
EA Meta Fund Grants – July 2020

This is the July 2020 payout report for the Effective Altruism Meta Fund, one of the Effective Altruism Funds.

Fund: Effective Altruism Meta Fund

Payout date: August 7, 2020

Payout amount: $838,000.00

Grant author(s): Luke Ding, Alex Foster, Denise Melchin, Matt Wage, Peter McIntyre

Grant recipients:

Grant rationale:

The EA Meta Fund made the following grant recommendations in the July 2020 round:

  1. 80,000 Hours - $300k
  2. Founders Pledge - $200k
  3. The Future of
... (Read more)

Thanks for this write-up! Sounds like a bunch of cool projects.

Since 2015, over $19m has been given to high-impact charities recommended by FP. FP estimates that their research and advice played a significant role in $8m out of this total.

Do you mean that over $19m has been given to high-impact charities FP recommends by people FP talked to, but $11m might have been given to the same places anyway? That would seem to suggest a surprisingly high proportion of these people would've given anyway, and to the same places. 

Or do you mean that the total amou

... (read more)
HaydnBelfield: I really appreciate your recognition of this - really positive! "it's hard to publish critiques of organizations or the work of particular people without harming someone's reputation or otherwise posing a risk to the careers of the people involved. I also agree with you that it's useful to find ways to talk about risks and reservations. One potential solution is to talk about the issues in an anonymized, aggregate manner."

With this post I want to encourage an examination of value-alignment between members of the EA community. I lay out reasons to believe strong value-alignment between EAs can be harmful in the long-run.

The EA mission is to bring more value into the world. This is a rather uncertain endeavour and many questions about the nature of value remain unanswered. Errors are thus unavoidable, which means the success of EA depends on having good feedback mechanisms in place to ensure mistakes can be noticed and learned from. Strong value-alignment can weaken feedback mechanisms.

EAs prefer to work with p... (Read more)

I think I agree with all of what you say. A potentially relevant post is The Values-to-Actions Decision Chain: a lens for improving coordination.

despite some explicit statements by people like Carrick Flynn in a post on the forum saying how little we know and a research agenda which is mainly questions about what we should do

Just in case future readers are interested in having the links, here's the post and agenda I'm guessing you're referring to (feel free to correct me, of course!):

... (read more)
CEA Mid-year update (2020)

We'd like to share an update on our recent progress. 

In a previous post, we set out five goals for 2020:

  1. Developing our strategy: We're on track, with more work needed on metrics and data.
  2. Narrowing our scope by considering spinning off EA Funds and Giving What We Can: We're slightly ahead of expectations; we hired leaders and set an initial strategic direction for each project.
  3. Expanding group and community health support: We're somewhat behind expectations.
  4. Improving online discussion: We're somewhat ahead of expectations. We learned a lot about how to run virtual events
... (Read more)

I'm impressed with the success of Virtual EAGx. Do you have a measure of how successful that was for a comparable population to EAG London 2019? Or, say, comparing the success for people who have been at 2 previous EAGs?

Also, I'm curious, what CRM are you using and for what purpose? 

Maxdalton: Me too!
Maxdalton: Thanks for sharing this! I found it somewhat surprising that the scale of the effect looks bigger for comments than for posts. (I imagine that the difference in significance is also partly because the sample size for posters is much smaller, so it's harder to reach a significance threshold.)
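(As an aside for readers unfamiliar with the statistics behind the sample-size point above: with the same underlying effect size, a smaller sample gives a larger standard error and hence a larger p-value, so a smaller group is at a disadvantage for reaching any significance threshold. A minimal sketch with hypothetical numbers, using a simple two-sided z-test:)

```python
import math

def z_test_p_value(effect, sd, n):
    """Two-sided p-value for a one-sample z-test of a given effect size."""
    se = sd / math.sqrt(n)                   # standard error shrinks as n grows
    z = effect / se
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

# Same effect and spread, different sample sizes (hypothetical numbers):
p_small = z_test_p_value(effect=0.2, sd=1.0, n=30)    # few posters
p_large = z_test_p_value(effect=0.2, sd=1.0, n=300)   # many commenters
print(p_small > p_large)  # prints True: smaller sample, larger p-value
```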

Crossposted to LessWrong and personal blog

Summary –

Over the past 15 years, major international donor agencies shifted their approach to local politics in fragile states. In order to build stable state-society relations in conflict-affected societies, they committed to studying popular expectations of states, which may differ greatly from international norms like competitive elections and service provision. This blog post examines donor interventions to improve service provision in refugee-crisis-affected communities in Lebanon and Jordan from 2011 to 2019. Donor states hoped that imp... (Read more)

I am now switching to academic jargon, so this language can integrate directly in my paper

Are you kidding? The jargon was barely comprehensible before this point.

Propose and vote on potential tags

(I have no association with the EA Forum team or CEA, and this idea comes with no official mandate. I'm open to suggestions of totally different ways of doing this.)

Update: Aaron here. This has our official mandate now, and I'm subscribed to the post so that I'll be notified of every comment. Please suggest tags!

The EA Forum now has tags, and users can now make tags themselves. I think this is really cool, and I've now made a bunch of tags. 

But I find it hard to decide whether some tag ideas are worth including, vs being too fine-grained or too similar to existing tags. I also feel some h

... (Read more)

I'm tentatively in favour of Macrostrategy. A big issue is that I don't have a crisp sense of what macrostrategy is meant to be about, and conversations I've had suggest that a lot of people who work on it feel the same. So I'd have a hard time deciding which posts to give that tag. But I do think it's a useful concept, and the example post you mention does seem to me a good example of something that is macrostrategy and isn't cause prioritisation.

I feel like a tag for Global Priorities Research is probably unnecessary once we have tags for both Cause Priorit

... (read more)

I've had interesting conversations with people based on this question, so I thought I'd ask it here. I'll follow up with some of my thoughts later to avoid priming.

By novel insights, I mean insights that were found for the first time. This excludes the diffusion of earlier insights throughout the community.

To gesture at the threshold I have in mind for major insights, here are some examples from the pre-2015 period:

  • Longtermism
  • Anthropogenic extinction risk is greater than natural extinction risk
  • AI could be a technology with impacts comparable to the Industrial Revolution, and tho
... (Read more)
Answer by Owen_Cotton-Barratt: Maybe: "We should give outsized attention to risks that manifest unexpectedly early, since we're the only people who can." (I think this is borderline major? The earliest occurrence I know of was 2015, but it's sufficiently simple that I wouldn't be surprised if it was discovered multiple times and some of them were earlier.)
Answer by Tobias_Baumann: I think there haven't been any novel major insights since 2015, for your threshold of "novel" and "major". Notwithstanding that, I believe that we've made significant progress and that work on macrostrategy was and continues to be valuable. Most of that value is in many smaller insights, or in the refinement and diffusion of ideas that aren't strictly speaking novel. For instance:

  • The recent work on patient longtermism [https://80000hours.org/2020/08/the-emerging-school-of-patient-longtermism/] seems highly relevant and plausibly meets the bar for being "major". This isn't novel - Robin Hanson wrote about it in 2011, and Benjamin Franklin arguably implemented the idea in 1790 - but I still think that it's a significant contribution. (There is a big difference between an idea being mentioned somewhere, possibly in very "hidden" places, and that idea being sufficiently widespread in the community to have a real impact.)
  • Effective altruists are now considering a much wider variety of causes than in 2015 (see e.g. here [https://forum.effectivealtruism.org/posts/xoxbDsKGvHpkGfw9R/problem-areas-beyond-80-000-hours-current-priorities]). Perhaps none of those meet your bar for being "major", but I think that the "discovery" (scare quotes because probably none of those is the first mention) of causes such as reducing long-term risks from malevolent actors [https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors], invertebrate welfare [https://forum.effectivealtruism.org/posts/EDCwbDEhwRGZjqY6S/invertebrate-welfare-cause-profile], or space governance [https://forum.effectivealtruism.org/posts/QkRq6aRA84vv4xsu9/space-governance-is-important-tractable-and-neglected] constitutes significant progress. S-risks [https://centerforreducingsuffering.org/intro/] have also gained more traction, although again the basic idea is from before 2015.
  • Views on the fu

I liked this answer.

One thing I'd add: My guess is that part of why Max asked about novel insights is that he's wondering what the marginal value of longtermist macrostrategy or global priorities research has been since 2015, as one input into predictions about the marginal value of more such research. Or at least, that's a big part of why I find this question interesting.

So another interesting question is what is required for us to have "many smaller insights" and "the refinement and diffusion of ideas that aren’t strictly speaking novel"? E.g., does that

... (read more)

I’m interested in people’s thoughts on:

  1. How valuable would it be for more academics to do research into forecasting?
  2. How valuable would it be for more non-academics to do academic-ish research into forecasting? (E.g., for more people in think tanks or EA orgs to do research on forecasting that's closer in levels of "rigour" to the average paper than to the average EA Forum post.)
  3. What questions about forecasting should be researched by academics, or by non-academics using academic approaches?
    • I imagine this could involve psychological experiments, historical research, conceptual/theoretical/mathem
... (Read more)

I've just discovered the very recently published Forecasting AI Progress: A Research Agenda. The abstract reads: 

Forecasting AI progress is essential to reducing uncertainty in order to appropriately plan for research efforts on AI safety and AI governance. While this is generally considered to be an important topic, little work has been conducted on it and there is no published document that gives an objective overview of the field. Moreover, the field is very diverse and there is no published consensus regarding its direction.

This paper descr

... (read more)

Metaculus is an online platform where users make and comment on forecasts, which has recently been particularly notable for its forecasting of various aspects of the pandemic, on a dedicated subdomain. As well as displaying summary statistics of the community prediction, Metaculus also uses a custom algorithm to produce an aggregated "Metaculus prediction". More information on forecasting can be found in this interview with Philip Tetlock on the 80,000 hours podcast.

Questions on Metaculus are submitted by users, and a thread exists on the platform where people can suggest questions t... (Read more)
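(For readers curious what aggregating community forecasts can look like in practice, here is a minimal sketch of two common, simple aggregators - the median and the geometric mean of odds. This is only an illustration; it is not the Metaculus prediction's actual custom algorithm, whose details aren't described in the post.)

```python
import math

def median_forecast(probs):
    """Median of individual probability forecasts."""
    s = sorted(probs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def geo_mean_of_odds(probs):
    """Aggregate by averaging log-odds, then converting back to a probability.

    Inputs must be strictly between 0 and 1 (odds are undefined otherwise)."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean = sum(log_odds) / len(log_odds)
    odds = math.exp(mean)
    return odds / (1 + odds)

community = [0.2, 0.5, 0.8]            # hypothetical user forecasts
print(median_forecast(community))       # prints 0.5
print(round(geo_mean_of_odds(community), 3))  # prints 0.5 (symmetric inputs)
```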

To lay out my tentative position a bit more:

I think forecasts about what some actor (a person, organisation, community, etc.) will overall believe in future about X can add value compared to just having a large set of forecasts about specific events that are relevant to X. This is because the former type of forecast can also account for: 

  • how the actor will interpret the evidence that those specific events provide regarding X
  • lots of events we might not think to specifically forecast that could be relevant to X

On the other hand, forecasts about what som

... (read more)

I suspect that cell-based meat research and development could be the most important strategy to protect animal rights and improve animal welfare (with a possible exception of research in welfare biology to improve wild animal welfare), and could strongly reduce climate change.

This post describes my very rough back-of-the-envelope Fermi-estimate calculation of the cost-effectiveness of cell-based meat R&D, and compares it with traditional animal rights and vegan advocacy campaigns. I only estimate the orders of magnitude, in powers of ten. The results are presented in the table here.

The t... (Read more)

Gotcha. I was thinking about a much simpler situation where we're comparing two interventions to accomplish equally valuable goals, rather than two interventions to accomplish the same goal, where finishing one makes the other obsolete. I was also assuming that we are able to coordinate on what to fund. But in the situation you described, it makes sense to fund the cheaper intervention only if we can put together enough money for it to overtake the one that's already being funded, like 555,555,555 euros in your example. But that number is assumin... (read more)

Stijn: Sorry, I'm not following. The gain is independent of C, and hence (at given U and F) independent of the expected time period. Assume x is such that cell-based meat enters the market one year sooner (i.e. x = F). Accelerating cell-based meat by one year is equally good (it spares U = 0.1×10^11 animals), whether it is a reduction from 10 to 9 years or from 100 to 99 years. Only if C/F were smaller than a year would accelerating by one year not work.
Thomas Sepulchre: I totally agree with you - the gain is independent of C. In your original post, you give a scenario where cell-based meat enters the market in 100 years, while you seem to believe that an actual estimate would rather be ten years or less. I wondered whether this was because you overestimated C or underestimated F (both affect the timeline, but only F affects the gain). I now understand that you overestimated C, so this doesn't affect your prediction about the gain. Thanks for clarifying!
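(For readers following the algebra in this exchange, here is a minimal sketch of the model as I read it, using the thread's symbols - C for remaining cost, F for annual funding, U for animals spared per year of earlier market entry - with hypothetical magnitudes. The point being made is that the gain from an extra contribution x is U·x/F, independent of C:)

```python
# Sketch of the acceleration model from the thread above (hypothetical numbers).
# Cell-based meat enters the market in C / F years; an extra contribution x
# advances that date by x / F years, so the gain U * x / F is independent of C.

def years_until_entry(C, F):
    """Years until market entry given remaining cost C and funding rate F."""
    return C / F

def animals_spared(x, F, U):
    """Gain from an extra contribution x, which advances entry by x / F years."""
    return U * (x / F)

U = 0.1e11  # animals spared per year of acceleration (figure from the thread)
F = 1e8     # hypothetical annual funding rate, in euros
x = F       # donate one year's worth of funding -> entry one year sooner

# Whether entry was 10 years away (C = 10 * F) or 100 years away (C = 100 * F),
# the same contribution spares the same number of animals:
print(years_until_entry(10 * F, F), years_until_entry(100 * F, F))
print(animals_spared(x, F, U) == U)  # prints True
```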
richard_ngo: I don't think variable populations are a defining feature of population ethics - do you have a source for that? Sure, they're a feature of the repugnant conclusion, but there are plenty more relevant topics in the field. For example, one question discussed in population ethics is when a more equal population with lower total welfare is better than a less equal population with higher total welfare. And this example motivates differences between utilitarian and egalitarian views. So more generally, I'd say that population ethics is the study of how to compare the moral value of different populations.

Here is a passage from Hilary Greaves's Population axiology.

In many decision situations, at least in expectation, an agent’s decision has no effect on the numbers and identities of persons born. For those situations, fixed-population ethics is adequate. But in many other decision situations, this condition does not hold. Should one have an additional child? How should life-saving resources be prioritised between the young (who might go on to have children) and the old (who are past reproductive age)? How much should one do to prevent climate ch
... (read more)

This post is mainly targeted at EAs in the early stages of their career, such as those in university.

In my experience, many aspiring EAs don’t start career planning until fairly late in their undergraduate degree, and many don’t start until they’ve completely finished their studies. Despite the obvious claim that procrastination is to be avoided, I think there are some other, more subtle reasons why people should start career planning as early as possible. A lot of first and second year undergraduates feel like their graduation is far away and that they can worry about the... (Read more)

TimothyTelleenLawton: Since college I've updated away from the importance of planning out my career, and toward the importance of finding a thing that deeply excites me. In particular, I've noticed that when I have jobs that seem good on paper (e.g. from an EA perspective) but I'm not excited about the day-to-day work of them, I tend to underperform. On the other hand, when I find something that really nerdsnipes me, not only do I tend to use it to do my job better, but I also tend to find an even better job next (with bigger EA impact, even if that application was not initially obvious).

Now obviously, trying to look ahead and plan out a path is a great thing to do. I would expect it to be especially valuable for folks who already know what they want to do ("I love machine learning and I want to make sure the technology is used for good!") and for folks whose chosen career paths are well worn ("I want to be a policy maker.").

Unfortunately, I think that overemphasizing career planning can actually undermine the search for excitement. If I believe success comes to a large extent from careful long term planning, then I'm going to be less open to noticing what I like and what I don't like; less willing to admit that I should abandon the path I've been following for 5 years. That's why I worry that reading this post in college may actually have done me in particular more harm than good—by helping me put even more moral pressure on myself to get it right, and quick!

I suspect that there are a lot of factors here that vary from person to person—perhaps it suited me better to jump industries multiple times than it would have for those more naturally suited to a technical specialty. Perhaps the version of the above advice I would have benefited from hearing in college is closer to, "Keep noticing what excites you and find ways to do more of that. Don't hesitate to update or abandon your plans—your failures will not matter very much but your successes can take you places you hadn't even im

Thanks Timothy!

I think this is broadly fair, and perhaps a reframing of “think more actively about your interests” would be better than just “think more actively about your career” for many readers.

That said, I think for a lot of people, what they’re immediately excited about doesn’t line up well with what might be good for their career, especially if they’re trying to do good. I worry that “keep noticing what excites you and find ways to do more of that” would lead some people down career pat... (read more)

Hello all, I am just curious and would like feedback from the EA community on how group organizers from LMICs and Africa can tailor the effective altruism movement to their regions, have the most impact in doing so, and move the movement forward where they are. What should we be focusing on? Are we going to focus on:

1. Promoting effective giving

  2. Higher-impact careers

3. Research

  4. Community building and advancement of education through online and in-person events

  5. Raising the profile of EA cause areas, or promotion and improvement of the efficiency and effectiveness of ch... (Read more)

EdoArad: LMIC - low- and middle-income countries.

Are the diarrhea and substance abuse numbers annualized? (does diarrhea cost 85 m YLL/yr)

Epistemic status: magpie showing off shiny goods she doesn’t understand, young child enthusiastically sharing trivia he learned from a bathroom reader

Being a fledgling Effective Altruist without a philosophy background, I decided to read Jeremy Bentham to understand more about how modern Utilitarianism started. Bentham’s definitive treatise on the subject is An Introduction to the Principles of Morals and Legislation. I didn’t read that. I’m a good EA who knows “if you want success, seek Neglectedness”. So I read Bentham’s other book on ethics: Deontology (volume 1, volume 2).

Finding Deo

Warn

... (Read more)

Thanks for writing this! I really like the way you write, which I found fun and light while still highlighting the important parts vividly. I too was surprised to learn that this is the version of utilitarianism Bentham had in mind, and I find the views expressed in your summary (Ergo) lovely too.

As the pandemic grinds on and the initial panic is replaced with grim endurance, increasingly many people are turning their minds to the future: what will we do differently after COVID?

From the start I (and others) have been worried about how this could go wrong: how an ill-calibrated response to this most recent catastrophe could end up doing more harm than good. I'm interested in hearing other Forum users' thoughts on what we should be particularly worried about, and try particularly hard to prevent. I'll write my own (very speculative) answer in a couple of days.

I'm particularly interested in

... (Read more)

Articles like this make me think there is some basis to this concern:

Coronavirus: Russia calls international concern over vaccine 'groundless'

On Wednesday, Germany's health minister expressed concern that it had not been properly tested.

"It can be dangerous to start vaccinating millions... of people too early because it could pretty much kill the acceptance of vaccination if it goes wrong," Jens Spahn told local media.

"Based on everything we know... this has not been sufficiently tested," he added. "It's not about being first somehow - it's about having a safe vaccine."

The basic argument

A somewhat common goal in EA (more common elsewhere) is to accelerate the human trajectory, by promoting things such as economic growth, tech R&D, or general population growth. One could presume that doing this could accelerate exponential economic growth throughout the very long-run future, which could easily accumulate to huge improvements in welfare.

But the returns to economic growth have historically been super-exponential, so our global economic trend points clearly towards a theoretical economic singularity within the 21st century.[3] This is not contradicted by the... (Read more)

Nice post! Meta: footnote links are broken, and references to [1] and [2] aren't in the main body.

Also could [8] be referring to this post? It only touches on your point though:

Defensive considerations also suggest that they'd need to maintain substantial activity to watch for and be ready to respond to attacks.

Seth Baum of GCRI has published an excellent new paper. Here's the abstract:

A recent article by Beard, Rowe, and Fox (BRF) evaluates ten methodologies for quantifying the probability of existential catastrophe. This article builds on BRF’s valuable contribution. First, this article describes the conceptual and mathematical relationship between the probability of existential catastrophe and the severity of events that could result in existential catastrophe. It discusses complications in this relationship arising from catastrophes occurring at different speeds and from multiple concurrent catas

... (Read more)

Going further down the rabbit-hole, Simon Beard, Thomas Rowe, and James Fox replied to Seth's reply!

https://www.cser.ac.uk/resources/existential-risk-assessment-reply-baum/

Highlights

  • Seth Baum’s reply to our paper “An analysis and evaluation of methods currently used to quantify the likelihood of existential hazards” makes a very valuable contribution to this literature.
  • We raise some concerns about the definitions of terms like ‘existential catastrophe’ and how they can be both normative and non-normative.
  • While acc
... (read more)