All Comments

Why think the aliens don't already know exactly what's going on?

I don't think there's much data yet on how a person motivated to act altruistically can have their material sacrifices and lifestyle changes (frugality, dedication, commitment) compensated with non-material emotional benefits. But there have been quite a few comments on the book "Strangers Drowning" in this forum.

Here is an illustration of how one can easily be much more confident than that. If welfare per animal-year was proportional to f(x) = 2^x, where x is the number of neurons, its elasticity would be x*f'(x)/f(x) = x*ln(2)*2^x/2^x = ln(2)*x. Even for the 302 neurons of adult nematodes, which are the animals with the fewest neurons, the elasticity would be 209 (= ln(2)*302). For my assumption that welfare per animal-year is proportional to g(x) = x^a, its elasticity is x*a*x^(a - 1)/x^a = a. So, for a number of neurons close to that of adult nematodes, I think... (read more)
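Those elasticity figures are easy to sanity-check numerically. Here is a quick sketch (the finite-difference helper and the exponent a = 0.5 are illustrative choices of mine, not from the comment):

```python
import math

def elasticity(f, x, h=1e-6):
    """Estimate the elasticity x * f'(x) / f(x) with a central difference."""
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    return x * deriv / f(x)

# Exponential welfare model f(x) = 2^x: elasticity is ln(2) * x,
# so at x = 302 neurons (adult nematode) it is about 209.
print(elasticity(lambda x: 2.0 ** x, 302))

# Power-law model g(x) = x^a: elasticity is the constant a,
# independent of the number of neurons.
print(elasticity(lambda x: x ** 0.5, 302))
```

The contrast is the comment's point: under the exponential model the elasticity scales with neuron count even for the smallest-brained animals, while under the power law it stays fixed at a.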

Thanks for the thoughtful reply — and yes, I do think this is a pretty serious concern for trust and scale.

The core issue, as I see it, is that for the “we’re neutralizing opposing political donations” story to really hold, donors should be doing something like:

“This is money I was otherwise going to use to support the specific zero-sum political cause indicated (or a very close substitute), and I’m now redirecting it instead.”

One concrete way to reinforce that would be a short pledge at checkout, e.g.:

“I understand that DuelGood only works if donors genui... (read more)

Thank you for posting this @PabloAMC 🔸. In addition to the video linked in the post, we just published @Jakub Stencel's blog article with more context about the consultation. We do encourage everyone to take some time to respond.

Thanks for the question! We've sourced the courses based on a mix of:

  • Soliciting recommendations from domain experts, mainly when asking for resource recommendations during research for other articles (like our career profiles).
  • Doing our own research to find courses that seemed to be particularly high-quality, reputable, coming from orgs that we think are likely to provide strong overviews, and/or especially relevant to impact-focused careers.

That said, we want to emphasize that these are the best courses we know of; we're certain there are many great cours... (read more)

Excellent idea, we'll set our creative talent on it. If the result isn't too amateurish we'll include it in our About page. 

We hadn't considered this framing! DuelGood wouldn't contribute anything if everyone who uses it would have donated to GiveWell anyway. And if Person A was already planning to donate to GiveWell and uses DuelGood to also block a political donation, Person A hasn't made a real sacrifice while their matched partner has. This asymmetry could undermine trust and discourage participation.

If I've misunderstood the argument, please correct me. 

I wonder if the net outcome might still be positive. 

  • Consider Person A, who was going to donate $100 to GiveWel
... (read more)

No, I really don't. Sometimes you see things in the same territory on Dwarkesh (which is very AI-focused) or Econtalk (which is shorter and less and less interesting to me lately). Rationally Speaking was wonderful but appears to be done. Hear This Idea is intermittent and often more narrowly focused. You get similar guests on podcasts like Jolly Swagman but the discussion is often at too low of a level, with worse questions asked. I have little hope of finding episodes like those with Hannah Ritchie, Christopher Brown, Andy Weber, or Glen Weyl anywhere el... (read more)

Hi and thanks for writing this, obviously I'm quite late to the comments. For context, I'm a white American male with Jewish heritage. I'm surprised there's so little content on this forum about recent escalations in the Israeli-Palestinian Conflict. The reason this matters to me is because I'm concerned about trying to build a better world, if that world will tolerate genocide. The conflict has crossed a red line for me, and so there would be a gap in integrity of not including this cause as a priority. There are indeed plenty of dark things happening in ... (read more)

Enjoyed it, a good start.

I like the stylized illustrations, but I think a bit more realism (or at least detail) could be helpful. Some of the activities and the pain suffered by the chickens were hard to see.

The transition to the factory farm/caged chickens environment was dramatic and had the impact I think you were seeking.

One fact-based question which I don't have the answer to -- does this really depict the conditions for chickens whose eggs are labeled "pasture raised"? I hope so, but I vaguely recall hearing that it's not a rigorously enforced label.

Here are some suggestions from 6 minutes of ChatGPT thinking. (Not all are relevant, e.g., I don't think "Probable Causation" is a good fit here.)

Cheers for engaging, James. I appreciate you spending the time on this.

  1. On your second point about timelines: I agree to the extent that talking about theories of victory in fine-grained detail would only be more relevant on shorter timelines. But even on longer timelines (e.g. whether it's 50 or 200 years), I'd argue we need theories of victory in broad strokes - at least outlining what major outcomes we are reasonably confident would need to happen. Otherwise how can we make bets on what capacities we need to build now? For example, can we be confide
... (read more)

Do you see other podcasts filling the long-form, serious/in-depth, EA-adjacent/aligned niche in areas other than AI? E.g., GiveWell has a podcast, but I'm not sure it's the same sort of thing. There's also Hear This Idea, often Clearer Thinking or  Dwarkesh Patel cover relevant stuff. 

(Aside, was thinking of potentially trying to do a podcast involving researchers and research evaluators linked to The Unjournal; if I thought it could fill a gap and we could do it well, which I'm not sure of.) 

That's interesting! I would expect EAs to be pretty supportive of frugality in general.

Hey Kuhan, I really liked this. Thanks for writing it. It led me to think a bit about how this applies to animal welfare.

What I really like about this is how your thought experiment encourages altruists to think from the perspective of those they’re trying to help. That principle doesn’t just help humanize EV; it can also help create willingness to help individuals regardless of the cause of their suffering. An animal living in a fire zone probably doesn’t care if you’re helping them because humans are to blame or because nature is.

One of the difficulti... (read more)

My apologies for not having followed the links in your post in the first place.

We think WAI’s grantmaking criteria—such as Neglectedness, Scope, and Impact—are explicitly designed to prioritize cost-effectiveness and maximize counterfactual impact for large numbers of animals. Beyond that, their distribution may be limited by the types of projects they receive suitable applications from.

Thanks for pointing this out! I could have been more clear with what I was saying. For con #1, I meant that people might think the following:

  1. EAs are rich people.
  2. EAs have their own unique set of interests that are different from those of the poor people within their own country.
  3. EAs encourage poor people in their own country to save money so that they can donate money to support the interests of EAs.
  4. EAs are rich people who think their own interests matter more than the interests of poor people in their own country.
  5. EAs are elitists.

In regards to promoting fru... (read more)

Thanks for the positive feedback! 

Shrimp Welfare Project’s ranges are narrower for a few reasons. Because SWP works directly with farmers, they can track and estimate the number of shrimp on partner farms, reducing uncertainty about the animals affected. We also either used point estimates or narrow ranges for other parameters, such as the duration of impact (based on the lifespan of electrical stunners) and the duration of improved water quality. This means the main source of uncertainty in SWP’s CEA lies in the SADs estimates, whereas other charitie... (read more)

Yeah, I totally agree. It seems that one should be able to be quite satisfied with minimal possessions and luxuries as long as their needs for connection, purpose, safety, and stability are met. It would be interesting to look at the data on this.

Thanks for your answer! :)

 If so, why not generalise, and conclude you would avert 2^N h of pain of intensity 0.999^N instead of 1 h of pain of intensity 1?

I think the procedure might not be generalizable, for the following reason. I currently think that a moment of conscious experience corresponds to a specific configuration of the electromagnetic field. As such, it can undergo phase transitions, analogous to how water goes abruptly from liquid to gas at 100°C. Using the 1-dimensional quantity "temperature" can be useful in some contexts but is insuf... (read more)

Hi Brad, thanks for reading and commenting. The School for Moral Ambition and its incubatees  are the EA-associated organizations I mentioned above. They appear to be strongly opposed to promoting vaping to people who smoke ("Vaping: a healthy alternative to smoking" appears as an item in a "Big Tobacco Bullshit Bingo" card used in one of their trainings), have lobbied for restrictions on e-cigarette use in places where smoking is banned, and have spread misinformation on both the risks of vaping and the evidence for its usefulness in smoking cessatio... (read more)

I think frugality isn't discussed so much here because it isn't broadly popular on the forum, so people can be dissuaded from bringing it up. I've definitely been dissuaded after bringing it up a couple of times. Any hint of a frugality suggestion gets more disagree than agree votes and is unlikely to get high karma.

Before I strong-upvoted this very reasonable and balanced post, it had 10 votes for 16 karma. I don't really see much here which warrants downvoting, even if you disagree with the argument.

I would say it’s much worse than just a simplification or a stereotype, it’s just plain wrong. 

If someone writes a post arguing that doctors are fools to prescribe antibiotics to prevent novel coronavirus infections because of the respective properties of antibiotics and viruses, that error in understanding what doctors are really recommending — mRNA vaccines — is more than a simplification or a stereotype. That person is just confused about what doctors are really saying. They are getting the details on which everything depends wrong, in such a way... (read more)

Hey Thom, thanks for engaging. I'm evolving my thoughts as I go here, so what ensues might slightly contradict some parts of the main post:

  1. On short-term pragmatism being a straw man: I think you're right that my description of short-term pragmatism is a straw man at the individual level, but I think it holds true about our movement-level expression. I don't think any individual non-profit or person would necessarily embody short-term pragmatism — I imagine/hope that everyone involved in a campaign/project has (a) some end goal they truly want; and (b) some
... (read more)

Hi Abe, thanks for this post, it was really interesting! I largely agree with you, and I want to add some complexity from my decade of fundraising experience.


"So it might be significantly easier in principle to convince a philanthropist to move from giving at the 1 unit to the 10 unit level, even if not arguing on the basis of cost-effectiveness: there are just more opportunities to move a donor from 1 to 10 units of value than from 101 to 110."

In principle, sure! But I think you miss the practical reality that this is incredibly difficult. Most philanthrop... (read more)

Matt - thanks for the quick and helpful reply.

I think the main benefit of explicitly modeling ASI as being a 'new player' in the geopolitical game is that it highlights precisely the idea that the ASI will NOT just automatically be a tool used by China or the US -- but rather that it will have its own distinctive payoffs, interests, strategies, and agendas. That's the key issue that many current political leaders (e.g. AI Czar David Sacks) do not seem to understand -- if America builds an ASI, it won't be 'America's ASI', it will be the ASI's ASI, so to sp... (read more)

There are some historical examples of altruistic behavior (famous cases like Tolstoy and Gandhi) that show that, in the right context, many people find in frugality and in accounting for charitable works a certain psychological satisfaction comparable to that which others find in the so-called "virtue of thrift" and in the enjoyment of their possessions. It might be worthwhile to explore these kinds of social contexts and emotional rewards. There are many paths to happiness.

Thanks for the comment! I agree that more game-theory analysis of arms race scenarios could be useful. I haven't been able to find much other analysis, but if you know of any sources where I can learn more, that would be great. 

As for the ASI being "another player", my naive initial reaction is that it feels like an ASI that isn't 100% controlled/aligned probably just results in everyone dying really quickly, so it feels somewhat pointless to model our conflicting interests with it using game theory. However, maybe there are worlds such as this one wh... (read more)

Hi all,

Thank you, Allegra, for the well-presented initial post, and for the constructive replies. I thought I would share how I’ve been grappling with this standard EA dilemma regarding donations to acute humanitarian crises (as we see now in Sudan). The default position is that while morally urgent, these donations may be 1-2 orders of magnitude less cost-effective (in $ per life saved) than top GiveWell picks. This often leads to a “head vs. heart” framework, where one might allocate a 10-20% “moral” portion to crisis relief.

However, in thinking this through,... (read more)

I think this criticism could apply if we were suggesting moving funds from the "donations" bucket of one's financial decisions to one's "savings" bucket.  Less so if we are suggesting moving funds from the "personal consumption" bucket to "savings" bucket.

Thanks for writing this! I think you touch upon a few facts that I've been thinking about a lot recently: 

1. If the future consists of an ASI-enabled singleton (whether US-led, China-led, jointly-led, etc.), the moral values and posture of the singleton towards the rest of the world matter a lot. Given enough concentration of power, no outside force could realistically impose better morality on the singleton. 
2. The present-day moral failures of current powerful people will not necessarily carry over in an ASI-enabled-abundance world. Dictator's ... (read more)

I think maybe a brief video explaining the zero-sum nature of a lot of political giving, and a brief explanation of a better path forward might be helpful.

I think I broadly agree with this.

I am very confused about your number 1 con though! Why would promoting frugality be perceived as the rich promoting their own interests over those of the poor? Isn't it exactly the other way around?

To the extent that EA is comfortable with people spending large sums of their money on unnecessary things, I think it is open to the 'elitism' criticism (think of the discussion around SBF's place in the Bahamas). People can justifiably argue: "it is easy to say we should all be donating a lot to charity when you are so rich tha... (read more)

One concern I have is how this might be pretty gameable to not reflect someone's counterfactual donation decisions.

For instance, say that I am going to donate anyway to GiveWell's Top Charities Fund, but I also have other political preferences (say promoting second amendment rights in the United States). Now, instead of just donating directly to GiveWell's Top Charities Fund, I can use DuelGood to donate to GiveWell, while neutralizing the donations to a gun safety organization. 

This possibility, that your DuelGood contribution may not actually be neu... (read more)

That's a good point! I'll have to think about how to do that!

Matt -- thanks for an insightful post. Mostly agree.

However, on your point 2 about 'technological determinism': I worry that way too many EAs have adopted this view that building ASI is 'inevitable', and that the only leverage we have over the future of AI X-risk is to join AI companies explicitly trying to build ASI, and try to steer them in benign directions that increase control and alignment.

That seems to be the strategy that 80k Hours has actively pushed for years. It certainly helps EAs find lucrative, high-prestige jobs in the Bay Area, and gives th... (read more)

Growth coming from YouTube recommendations makes sense. In that case I agree that more episodes is not a bad thing.

I haven't looked into this topic much, but I know reducing the harm from cigarette smoking is one of the priorities of the School for Moral Ambition. Perhaps they would be interested in promoting the substitution of vaping for cigarette smoking as part of the approach?

https://www.moralambition.org/

I definitely agree that looking for tweaks that could save money without reducing luxury or convenience is a great idea, and I think resources to help EAs make such decisions quickly and easily would be great. I don't think that's all that people typically mean when they think about living frugally, though, so maybe a different framing would make sense.

Thanks for this analysis. I think your post deserves more attention, so I upvoted it.

We need more game-theory analyses like this, of geopolitical arms race scenarios. 

Way too often, people just assume that the US-China rivalry can be modelled simply as a one-shot Prisoner's Dilemma, in which the only equilibrium is mutual defection (from humanity's general interests) through both sides trying to build ASI as soon as possible.

As your post indicates, the relevant game theory must include incomplete and asymmetric information, possible mixed-strategy equ... (read more)

Are there particular formats you have in mind? 

If you haven't come across it yet, you might be interested in the work 80k's video team does.

Thanks for the nudge. I agree it seems crucial to try to find things that are actually different to cover - both for the sake of being interesting and more importantly to actually have an impact. I'd love to hear any particular suggestions you have about things that seem underexplored and important to you! 

We don’t intentionally aim to represent a broad range of approaches among our Recommended Charities. While we take steps to invite a pluralistic pool of applicants—especially from underfunded areas—those considerations don’t factor into our selection for evaluation, our assessments, or our decision making. If we thought that funding a marginal charity would have less impact than supporting the others, we wouldn’t recommend them, even if their inclusion could add more diversity of approaches to our list of Recommended Charities.

The animal advocacy movement ... (read more)

With respect to numbers of episodes: 
- We're considering potential new hosts publishing on new feeds. They might try a different format, or they might try the same format but a different target audience 
- Over the last ~year, most of our growth has been on YouTube, rather than audio feeds. As far as we can tell, on YouTube it's very uncommon to find new episodes directly via a subscription, and relatedly there seems to be much less of an effect of reduced engagement if you publish episodes more than once per week. 

This is all pretty tentative... (read more)

Yes, this table does not include data on this. I don’t have columns for values related to that. It’s a lot of work to track these numbers down and for many foods they are just not available.

When people online talk about the “bioavailability” of protein sources, they seem to mean one (or both) of two things:

One is digestibility: how much of the amino acids end up absorbed by your body. That number is ~90%+ for most soy products (that I have found numbers for) as well as vital wheat gluten.

The other thing that people talk about is the amino acid profile.... (read more)

Yeah, I see what you mean. I think that most people could benefit from minor tweaks in their spending that would save them a lot of money, but, when people try to cut down their spending, they tend to focus on the wrong things or use the wrong strategies. If people think that being frugal means removing luxury and convenience from their life, they are approaching it in a way that's guaranteed to discourage them from doing so. (I think eating out is a prime example of this. If you go to cheaper places, eating out once a week can be very easily off-set by very simple changes such as purchasing the cheapest fruit, purchasing fruit in larger quantities, and keeping a minimal pantry to reduce food waste.)

Oh, thanks for sharing that tag! I didn't know that existed. Your point about alienating people definitely makes sense. According to this chart from the 2019 EA survey, it looks like the largest donations account for the vast majority of money donated. This would make me think that donations from the top 10% of income are probably more important to focus on than from those with the top 50% of income.

Quick comment: in most questions, they totally ignore fish and invertebrates. On "other" I am adding these examples. For instance: 7.  To what extent should imports of animal products comply with equivalent animal welfare standards to those applied in the EU?  Other > free text 

I think another issue with frugality is the risk of burnout (if the savings is coming out of the EA's personal consumption bucket). Making substantial inroads into consumption that makes their lives easier or more enjoyable may make staying on the path more difficult in the long run.

Does this take bioavailability into account? IIUC, humans can't absorb 100% of plant-based proteins, while we can absorb nearly 100% of animal-based proteins. The upshot is that your spreadsheet isn't tracking the actual amount of protein human bodies will absorb.

Executive summary: The author argues that the true impact of donation-influencing efforts depends less on the absolute cost-effectiveness of an intervention and more on the counterfactual use of the donor’s funds—what they would have supported otherwise.

Key points:

  1. When influencing others’ donations, credit should be based on how much better their new allocation is than what they would have done without that influence.
  2. Moving money between two already highly cost-effective opportunities can have less counterfactual impact than improving the giving of less ef
... (read more)

Executive summary: Joe Carlsmith argues that while AI systems trained with current methods may have alien motivations, they need not be human-like to be safe if designed as corrigible instruction-followers rather than long-term consequentialist “sovereigns.” The essay critiques Yudkowsky and Soares’s claim in If Anyone Builds It, Everyone Dies that alien drives make alignment impossible, suggesting instead that safe, non-consequentialist behavior may generalize adequately from training to deployment.

Key points:

  1. The essay distinguishes between “sovereign” AI
... (read more)

I think you're right that frugality is good, but I'm not sure where you're getting the idea that it isn't discussed any, although it maybe could use a bit more discussion on the margin. I also think the main con is that it would alienate people who aren't willing to be particularly frugal, but will donate some anyways. The personal finance tag has some posts you might be interested in.

This might not fit the idea of a prioritization question, but it seems like there are a lot of "sure bets" in global development, where you can feel highly confident an intervention will be useful, and not that many in AI-related causes (high chance it either ends up doing nothing or being harmful), with animal welfare somewhere in between. It would be interesting to find projects in global development that look good for risk-tolerant donors, and ones in AI (and maybe animal welfare or other "longtermist" causes) that look good for less risk-tolerant donors. 

Thank you so much for having all the projects in an easy to read list here. I am looking forward to applying!

I've had a drop-off of non-EA friends who are willing to listen to AI episodes. I think the AI stuff is repetitive. I liked the podcast more and recommended it more before the shift to focus on AI, and I am an AI professional interested in safety. I think the priority should be interesting conversations. 

Separately, most podcast listeners I know subscribe to podcasts. There is an effect where publishing too much may cause people to unsubscribe, when it starts to feel like spam. I doubt the main thing preventing further expansion is quantity. I don't think that a third host would meaningfully help things. 

This also applies to the 80k brand as a whole. I used to recommend it to people interested in having an impact with their career but ever since 80k pivoted to an AI career funnel I recommend it to fewer people and always with the caveat of "They focus only on AI now, but there is some useful content hidden beneath"

Thanks for posting this Allegra! I was actually looking into this the other day and one thing that stopped me from giving as an individual donor was understanding exactly how cost-effective groups working on this are. My general understanding is that traditional humanitarian efforts aren't particularly cost-effective if your goal is to help the most people (I think largely because these efforts raise lots of money through salience and they are not as rigorously designed as GiveWell charities might be - but these might not be true in this case). 

Do you have any information or research into Emergency Response Rooms or other groups working in Sudan on how many people they are helping or lives they are saving? 

I think this is for the best.

I appreciated what you tried initially, because it is important to have external reviews. And I upvoted you. Some of your points were valid and taken into account.

But I have been very unimpressed and disappointed with the way you handled things. Huw summed it up well. Your approach was extremely adversarial, and you made huge public accusations based on debatable evidence that could have been clarified with a call in many cases. 

As such, it became very hard to trust you.

Blaming EA practices, as you are doing, is worrying. It s... (read more)

Yes it would imply that a bit of extra energy can vastly increase consciousness.  But so what?  Why be 99.9999% confident that it can't? 

Nice! That's super exciting. And I feel very excited about the work ACE is doing to bring conventional animal donors / conservation donors into this work, because that seems incredibly valuable! I think where I disagree with many people's views about ACE is that I think ACE doing perfectly rigorous charity evaluation is much less important than ACE expanding the pool of donors, because I think most of ACE's impact comes via expanding the pool of donors, like you describe.

Thanks for writing this Abraham! 

Charity evaluators should think about their impact as partially just “moving money around,” not counterfactual donations.

In simple terms, I see Animal Charity Evaluators doing two things related to this topic:

  1. Get donors to say no to good grants so they can say yes to great ones (the moving money around on the margin stuff)
  2. Creating impact from being the only reason good work happens (the counterfactual stuff)

As an evaluator that aims to help people help more animals, I currently think this approach will accelerate the j... (read more)

I had a similar experience. I recommended the podcast to dozens of people over the years, because it was one of the best at fascinating interviews with great guests on a very wide range of topics. However, since it switched to AI as the main topic, I have recommended it to zero people, and I don't expect this to change if the focus stays this way. 

See the last sentence :) But I should have highlighted this more. It's a great piece.

Thanks for the work, this is great! 

I especially appreciate the rationale summaries, and generally I'd encourage you to lean more into identifying underlying cruxes as opposed to quantitative estimates. (E.g. I'm skeptical on experts being sufficiently well calibrated to give particularly informative timeline forecasts).

I'm looking forward to the risk-related surveys. Would be interesting to hear their thoughts on the likelihood of concrete risks. One idea that comes to mind would be conditional forecasts on specific interventions to reduce risks.

Also... (read more)

Thanks for doing this! 

Could you define TFOs? Based on your backgrounds, I'm guessing you mean community building organisations like EA Sweden, EA Netherlands, etc., and coaching/training/advising organisations like Successif, 80k, Talos, AIM, Tarbell, etc.?

While both of these sets of organisations are ultimately about helping talent make a difference, I think they have quite different theories of change, and therefore require different M&E systems. 

See my proposal below for how I think community building organisations should do things. ... (read more)

Thanks for posting this! I will be coming back to this post next year when I'm planning debate weeks. 
 

Thanks for sharing your thoughts, Alfredo!

I would not support EA efforts to reduce the number of pin pricks in humans, no matter how vast, given that we also have humans who are actually being tortured right now.

Would you avert 2 h of pain of intensity 0.999 instead of 1 h of pain of intensity 1? If so, would you avert 4 h (= 2*2) of pain of intensity 0.998 (= 0.999^2) instead of 2 h of pain of intensity 0.999? If so, why not generalise, and conclude you would avert 2^N h of pain of intensity 0.999^N instead of 1 h of pain of intensity 1? You could endorse... (read more)
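The arithmetic behind that chain of trades can be made explicit. Under a simple intensity-times-duration aggregation (an illustrative assumption of mine; the comment itself leaves the aggregation rule open), each step strictly increases the total pain averted:

```python
# Step N swaps 2^N hours at intensity 0.999^N for 2^(N+1) hours at
# intensity 0.999^(N+1). Under linear aggregation the total scales as
# (2 * 0.999)^N = 1.998^N, so it grows without bound as N increases.
for N in (0, 1, 10, 100):
    total = (2 ** N) * (0.999 ** N)
    print(f"N={N}: {2 ** N} h at intensity {0.999 ** N:.3f} -> total {total:.3f}")
```

Since 2 * 0.999 > 1, anyone who accepts every pairwise trade under this aggregation is committed to preferring arbitrarily mild pain of astronomically long duration over 1 h at intensity 1 — which is exactly the generalisation the comment is pressing on.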

Good suggestion! We're planning to have large parts of the programme for EAGxAmsterdam published by next week. 

Really enjoyed reading this post! 

You can influence Big Normie Foundation to move $1,000,000 from something generating 0 unit of value per dollar (because it is useless) to something generating 10 units of value per dollar.

This example reminded me of something similar I have been meaning to write about, but @AppliedDivinityStudies got there before me (and did so much better than I could have!) - it is not just that influencing Big Normie Foundations could produce the same marginal impact due to a lower counterfactual, but also that there is way more m... (read more)

As a panpsychist and suffering abolitionist, I'm one of the most sympathetic people in the world to the cause of reducing suffering even in the smallest beings. And yet, I do not want to see more research on how to increase the welfare of microorganisms on the margin (or at least not with EA resources).

I probably won't change your mind about meta ethics, but I strongly disagree with the aggregationist QALY approach to comparing the welfare of humans vs e.g. soil animals (e.g. here). I hope to write more about this at some point, but as an intuition pump, I... (read more)

Thanks for the perspective, Yarrow. Note the post is supposed to be about "The simplified stereotype leftist view", and "The simplified stereotype rightist view", not about the median, or strongest leftist and rightist views.

Fantastic post. I appreciate how it lays out the tradeoffs between short-term pragmatism and long-term vision. 

For my part, I think cultivated meat is by far the most promising route to ending factory farming ASAP. Every animal advocate should, in my view, be doing everything possible to get it to market faster - through policy, funding, comms, or talent pipelines. I think other approaches pale in comparison. 

Hey Pablo,

Thanks for your comment. We’re always happy to answer questions about our impact!

It’s difficult to make a direct comparison between the impact of GFI’s work in alternative proteins and that of other animal advocacy organisations because the theory of change is so different. Advocacy campaigns can generate short-term wins for animals, but they don’t address the root causes of factory farming. GFI’s focus is on long-term, systemic change, transforming the entire food system by finding an alternative to conventional animal products that can feed the... (read more)

Fantastic answer, very thoughtful and clear. Thanks!

Thank you for sharing Allegra! Welcome to the Forum, and congrats on writing and sharing this.

I think this is well written and engaging! I agree it seems a real shame for these people and for the world that the existing services have been cut. And I do think that your bullet point list suggests it's worth considering/evaluating.

I think a stronger case would delve into more detail on these claims, which aren't currently substantiated: "Sudan ranks extraordinarily high on scale, neglect, and tractability", and "Emergency networks, women's coalitions, and ind... (read more)

Sorry David, should have updated this post! We are in the final stages now. We will be announcing by EOW. 

Completely agree, just give @Peter Wildeford the money right now. Every minute we delay is lost expected value...

EA is something like a brand, and you should be somewhat careful about how you communicate about EA in order to not harm the brand. (Thinking like this helped me understand why it was not a universally accepted truth that everybody should tell everybody about EA by all means and immediately. I think a lot of new people in EA go through this “tell everyone” phase, because EA can be super exciting.)

 

Can you elaborate on this a little bit? What should I not be telling people? 

Updated to be more delightful on the eyeballs! 🐒

One of the shortcomings in this post is that it is not careful to distinguish leftism, which is a very broad umbrella term, from Marxism, which is a specific political philosophy or theory of economics which is not particularly popular in Western, developed liberal democracies today. This post seems to attempt to summarize Karl Marx's theory of surplus value, but then says most or all leftists think of the economy in these terms. There may be a grain of truth to this, but the way it's stated in this post is not really accurate.

The post's reply to the summa... (read more)

[epistemic status: tentative, as I did not pay attention very well in my one-credit accounting class ~20 years ago]

Regarding your summary of the evidence, I'm not sure how much weight to give the lived experiences of PfG / PfG-adjacent companies. I pulled data for two of them, and I didn't see evidence of large improvements in margins relative to what I would expect from comparable profit-for-yacht businesses. Although presumably businesses could improve, these are also among the current best-in-class PfG companies -- and survivorship bias means that we're... (read more)

Answer by Arepo

Most of these aren't so much well-formed questions, as research/methodological issues I would like to see more focus on:

  • Operationalisations of AI safety that don't exacerbate geopolitical tensions with China - or ideally that actively seek ways to collaborate with China on reducing the major risks.
  • Ways to materially incentivise good work and disincentivise bad work within nonprofit organisations, especially effectiveness-minded organisations
  • Looking for ways to do data-driven analyses on political work especially advocacy; correct me if wrong, but the recom
... (read more)

Thanks! The online courses page describes itself as a collection of 'some of the best courses'. Could you say more about what made you pick these? There are dozens or hundreds of online courses these days (esp on general subjects like data analysis), so the challenge for pursuing them is often a matter of filtering convincingly.

Imbuing the values of 80,000 Hours into formats with orders of magnitude more reach would likely be extremely beneficial. Looking forward to seeing the organization evolve and adapt alongside an increasingly relevant technological landscape. 

Thanks for sharing this. I'm skeptical about near-term AGI, so I don't think this is a practical concern in the near term, but it's still really fun to think about. 

This is the example O'Keefe et al. give in the report:

This is way too mild! I'd design it way more aggressively, something like this:[1]

Particularly having it top out at 50% seems way too low.
 

  1. ^

    The Windfall Clause is a bit peculiar in that it's a private contract to donate a portion of marginal profits to charity, rather than a tax. There is a discussion in the report about whether th

... (read more)
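To make the marginal-rate point concrete, here is a minimal sketch of how a bracketed windfall clause would compute the donated amount, working like income-tax brackets on profit. The thresholds and rates below are hypothetical placeholders, not the schedule from the report or the one I'd propose:

```python
# Hypothetical illustration of a marginal-rate windfall clause.
# Profit falling inside each bracket is donated at that bracket's rate.
# All thresholds and rates are made up for this sketch.

BRACKETS = [
    (1e9,  0.00),   # first $1B of profit: nothing donated
    (1e10, 0.20),   # $1B-$10B: 20% of the profit in this band
    (1e11, 0.50),   # $10B-$100B: 50%
]
TOP_RATE = 0.50     # everything above the last threshold

def windfall_donation(profit):
    donated = 0.0
    lower = 0.0
    for upper, rate in BRACKETS:
        if profit <= lower:
            break
        # Only the slice of profit inside [lower, upper) is donated at this rate.
        donated += rate * (min(profit, upper) - lower)
        lower = upper
    if profit > lower:
        donated += TOP_RATE * (profit - lower)
    return donated
```

The point of the marginal structure is that crossing a threshold never reduces post-donation profit, so the firm has no cliff-edge incentive to stay just under a bracket. With this shape, "topping out at 50%" just means setting `TOP_RATE = 0.50`.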

My suggestion along these lines would be to try to get guests on who come with a different perspective on transformative AI or AGI than most of the 80,000 Hours Podcast's past guests or most people in EA. Toby Ord's episode was excellent in this respect; he's as central to EA as it gets, yet he was dumping cold water on the scaling trends many people in EA take for granted. 

Some obvious big names that might be hard to get: François Chollet, Richard Sutton, and Yann LeCun (the links go to representative podcast clips for each one of them).

A semi-big na... (read more)

Approx how much absorbency/room for more funding is there in each cause area? How many good additional opportunities are there over what is currently being funded? How steep are the diminishing returns for an additional $10m, $50m, $100m, $500m?

This seems a bit related to the Unjournal's “Pivotal questions” trial initiative -- we've engaged with a small group of organizations and elicited some of these -- see here.

To highlight some that seem potentially relevant to your ask:

What are the effects of increasing the availability of animal-free foods on animal product consumption? Are alternatives to animal products actually used to replace animal products, and especially those that involve the most suffering? Which plant-based offerings are being used as substitutes versus complements

... (read more)

Some questions that might be cruxy and important for money allocation: 

Because there is some evidence that superforecaster aggregation might underperform in AI capabilities, how should epistemic weight be distributed between generalist forecasters, domain experts, and algorithmic prediction models? What evidence exists/can be gotten about their relative track records?

Are there better ways to do AIS CEA? What are they? 

Is there productive work to be done in inter-cause comparison among new potential cause areas (i.e. digital minds, space governanc... (read more)
