All of Tom_Davidson's Comments + Replies

Is the threat model for stages one and two the same as the one in my post on whether one country can outgrow the rest of the world?

 

That post just looks at the basic economic forces driving super-exponential growth and argues that they would lead an actor that starts with more than half of resources, and doesn't trade, to gain a larger and larger fraction of world output. 

 

I can't tell from this post whether there are dynamics that are specific to settling the solar system in Stage II that feed into the first-mover advantage or whether it's ... (read more)

1
JordanStone
Thanks Tom, yeah the threat model for stage two is quite similar to your post, where I'm expecting one actor to potentially outgrow the rest of the world by grabbing space resources. However, I do think there might be dynamics in space that feed into a first mover advantage, like Fin's recent post about shutting off space access to other actors, or some way to get to resources first and defend them (not sure about this yet), or just initiating an industrial explosion in space before anyone else (which maybe pays off in the long-term because Earth eventually reaches a limit or slows down in growth compared to Dyson swarm construction).

As for the threat model of stage 1, I don't have strong opinions on whether a decisive strategic advantage on Earth is more likely to be achieved with superexponential growth or conflict, though your post is very compelling in favour of the former. I'm thinking about this sort of thing at the moment in terms of ~what percentage of worlds a decisive strategic advantage is achieved on Earth vs in space, which informs how important space governance work is. I find the 3 stages of competition model to be useful to figure that out. It's not clear to me that Earth obviously dominates and I am open to stage 2 actually not mattering very much, but I want to map out strategies here.

I do already think that stage 3 doesn't matter very much, but I include it as a stage because I may be in a minority view in believing this, e.g. Will and Fin imply that races to other star systems are important in "Preparing for an Intelligence Explosion", which I think is an opinion based on works by Anders Sandberg and Toby Ord.

Perhaps many minds end up at a shared notion of what they’re aiming for, via acausal trade (getting to some grand bargain), or evidential cooperation in large worlds

This isn't convergence. It's where people don't converge but make a win-win compromise deal. If people all converge, there's no role for acausal trade/ECL.

2
Owen Cotton-Barratt
I'm not sure. I think there are versions of things here which are definitely not convergence (straightforward acausal trade between people who understand their own values is of this type), but I have some feeling like there might be extra reasons for convergence from people observing the host, and having that fact feed into their own reflective process.  (Indeed, I'm not totally sure there's a clean line between convergence and trade.)

It seems like we can predictably make moral progress by reflecting; i.e. coming to answers + arguments that would be persuasive to our former selves
I think I’m more likely to update towards the positions of smart people who’ve thought long and hard about a topic than the converse (and this is more true the smarter they are, the more they’re already aware of all the considerations I know, and the more they’ve thought about it)
If I imagine handing people the keys to the universe, I mostly want to know “will they put serious effort into working out what the r

... (read more)

I think my biggest uncertainty about this is:

 

If there were a catastrophic setback of this kind, and civilisation tried hard to save and maintain the weights of superintelligent AI (which they presumably would), how likely are they to succeed? 

 

My hunch is that they very likely could succeed. E.g. in the first couple of decades they'd have continued access to superintelligent AI advice (and maybe robotics) from pre-existing hardware. They could use that to bootstrap to longer periods of time. E.g. saving the weights on hard drives rather than S... (read more)

Really like this post!

 

I'm wondering whether human-level AI and robotics will significantly decrease civilisation's susceptibility to catastrophic setbacks?

AI systems and robots can't be destroyed by pandemics. They don't depend on agriculture -- just mining and some form of energy production. And a very small number of systems could hold tacit expertise for ~all domains. 

Seems like this might reduce the risk by a lot, such that the 10% numbers you're quoting are too high. E.g. you're assigning 10% to a bio-driven set-back. But I'd have thought that would have to happen before we get human-level robotics?

I also work at Forethought!

I agree with a lot of this, but wanted to flag that I would be very excited for people doing blue-skies research to apply, and want Forethought to be a place that's good for that. We want to work on high-impact research and understand that sometimes means doing things where it's unclear up front if it will bear fruit.

Thanks, good Q.

 

I'm saying that if there is such a new paradigm then we could have >10 years' worth of AI progress at 2020-25 rates, and >10 OOMs of effective compute growth according to the old paradigm. But that, perhaps, within the new paradigm these gains are achieved while the efficiency of AI algorithms only increases slightly. E.g. a new paradigm where each doubling of compute increases capabilities as much as a 1000X increase does today. Then, measured within the new paradigm, the algorithmic progress might seem like just a couple of OOMs so 'effective ... (read more)

Can you give more examples of where people are getting this wrong?

I support the 80k pivot, and the blue dot page seems ok (but yes, I'd maybe prefer something more opinionated).

While these concerns make sense in theory, I'm not sure whether it's a problem in practice.

2
Tristan W
You might be looking for something larger, but as a bit of anecdata, I found myself at LISA post-EAG and, much to my surprise, found that not even the majority of the food they were offering was vegan. IIRC, last time I was there it was fully vegan, so that was a bit of a shock, and a potential sign of the times. 

Nice!

I think that condition is equivalent to saying that A_cog explodes iff either

  • phi_cog + lambda > 1 and phi_exp + lambda > 1, or
  • phi_cog > 1

Where the second possibility is the unrealistic one in which it could explode with just human input.
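One way to sanity-check the equivalence against the piecewise condition in Parker_Whitfill's reply further down (assuming lambda >= 0):

  • If phi_cog <= phi_exp: phi_cog + lambda > 1 already implies phi_exp + lambda > 1, and phi_cog > 1 implies phi_cog + lambda > 1, so the whole condition reduces to phi_cog + lambda > 1.
  • If phi_cog > phi_exp: phi_exp + lambda > 1 already implies phi_cog + lambda > 1, so the condition reduces to phi_cog > 1 or phi_exp + lambda > 1, i.e. max{phi_cog, phi_exp + lambda} > 1.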

Agree that I wouldn't particularly expect the efficiency curves to be the same.

But if phi > 0 for both types of efficiency, then I think this argument will still go through.

To put it in math, there would be two types of AI software technology, one for experimental efficiency and one for cognitive labour efficiency: A_exp and A_cog. The equations are then:

dA_exp = A_exp^phi_exp F(A_exp K_res, A_cog K_inf)

dA_cog = A_cog^phi_cog F(A_exp K_res, A_cog K_inf)

 

And then I think you'll find that, even with sigma < 1, it explodes when phi_exp>0 and phi_cog>0.

6
Parker_Whitfill
I spent a bit of time thinking about this today. Let's adopt the notation in your comment and suppose that F(⋅) is the same across research sectors, with common λ. Let's also suppose common σ < 1. Then we get blow-up in A_cog iff

  • ϕ_cog + λ > 1, if ϕ_cog ≤ ϕ_exp
  • max{ϕ_cog, ϕ_exp + λ} > 1, if ϕ_cog > ϕ_exp

The intuition for this result is that when σ < 1, you are bottlenecked by your slower-growing sector.

If the slower-growing sector is cognitive labor, then asymptotically F ∝ A_cog, and we get Ȧ_cog ∝ A_cog^ϕ_cog · A_cog^λ, so we have blow-up iff ϕ_cog + λ > 1.

If the slower-growing sector is experimental compute, then there are two cases. If experimental compute is blowing up on its own, then so is cognitive labor, because by assumption cognitive labor is growing faster. If experimental compute is not blowing up on its own, then asymptotically F ∝ A_exp and we get Ȧ_cog ∝ A_cog^ϕ_cog · A_exp^λ. Here we get a blow-up iff ϕ_cog > 1.[1]

In contrast, if σ > 1 then F is approximately the fastest-growing sector. You get blow-up in both sectors if either sector blows up. Therefore, you get blow-up iff max{ϕ_cog + λ, ϕ_exp + λ} > 1.

So if you accept this framing, complements vs substitutes only matters if some sectors are blowing up but not others. If all sectors have returns to research that are high enough, then we get an intelligence explosion no matter what. This is an update for me, thanks!

1. ^ I'm only analyzing blow-up conditions here. You could get e.g. double exponential growth by having ϕ_cog = 1 and ϕ_exp + λ = 1.
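A minimal Euler-integration sketch of the two-sector system above, which can be used to spot-check these blow-up conditions numerically. Parameter values are purely illustrative, and λ is applied here as an exponent on the CES aggregate F, which is one way to read the parent comment's equations:

```python
import numpy as np

def ces(x1, x2, sigma=0.5, gamma=0.5):
    """CES aggregate of experiment compute and cognitive labour (sigma < 1 = complements)."""
    rho = 1.0 - 1.0 / sigma
    return (gamma * x1**rho + (1.0 - gamma) * x2**rho) ** (1.0 / rho)

def blows_up(phi_exp, phi_cog, lam, sigma, K_res=1.0, K_inf=1.0,
             dt=1e-3, steps=200_000, cap=1e12):
    """Euler-integrate dA_i/dt = A_i^phi_i * F(A_exp*K_res, A_cog*K_inf)^lam."""
    A_exp, A_cog = 1.0, 1.0
    for _ in range(steps):
        F = ces(A_exp * K_res, A_cog * K_inf, sigma=sigma)
        A_exp, A_cog = (A_exp + dt * A_exp**phi_exp * F**lam,
                        A_cog + dt * A_cog**phi_cog * F**lam)
        if A_cog > cap:          # crude numerical proxy for finite-time blow-up
            return True
    return False

# phi_cog + lam > 1 (with phi_cog <= phi_exp): expect blow-up
print(blows_up(phi_exp=0.4, phi_cog=0.4, lam=0.8, sigma=0.5))   # True
# phi's positive but phi_cog + lam < 1 and phi_cog < 1: expect no blow-up
print(blows_up(phi_exp=0.1, phi_cog=0.1, lam=0.3, sigma=0.5))   # False
```

With σ = 0.5, the first parameterisation blows up quickly while the second only grows polynomially over the simulated horizon, matching the conditions above.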

Although note that this argument works only with the CES in compute formulation. For the CES in frontier experiments, you would have the ratio A K_res / (A K_train), so the A cancels out.

Yep, as you say in your footnote, you can choose to freeze the frontier, so you train models of a fixed capability using less and less compute (at least for a while). 

However, if σ < 1, then a software-only intelligence explosion occurs only if ϕ > 1. But if this condition held, we could get an intelligence explosion with constant, human-only research input. While not impossible, we find this condition fairly implausible.

 

Hmm, I think a software-only intelligence explosion is plausible even if σ < 1, but without the implication that you can do it with human-only research input.

The basic idea is that when you double the efficiency of software, you can now:

  • Run twice as many experiments
  • Have
... (read more)
3
Parker_Whitfill
Note that if you accept this, our estimation of σ in the raw compute specification is wrong. The cost-minimization problem becomes

min_{H,K} wH + rK  s.t.  F(AK, H) = F̄.

Taking FOCs and re-arranging,

ln(K/H) = σ ln(γ/(1−γ)) + σ ln(wA/r)

So our previous estimation equation was missing an A on the relative prices. Intuitively, we understated the degree to which compute was getting cheaper. Now A is hard to observe, but let's just assume it's growing exponentially with an 8-month doubling time per this Epoch paper. Imputing this guess of A, and estimating via OLS with firm fixed effects, gives us σ = 0.89 with a standard error of 0.10. Note that this doesn't change the estimation results for the frontier experiments since the A in A K_res / (A K_train) just cancels out.
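A synthetic-data sketch of this estimation step, just to make the mechanics of "impute A, then run OLS with firm fixed effects" concrete. The firm panel, trends and noise below are all invented for illustration; the real exercise uses observed compute/labour ratios and prices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented panel: n_firms firms observed over T periods.
n_firms, T = 20, 12
sigma_true, gamma = 0.9, 0.5

firm = np.repeat(np.arange(n_firms), T)
t = np.tile(np.arange(T), n_firms)

ln_w_over_r = 0.3 * t + rng.normal(0, 0.1, n_firms * T)  # compute price falling relative to wages
ln_A = 0.26 * t                                          # ~8-month doubling if each period is a quarter
firm_effect = rng.normal(0, 0.5, n_firms)[firm]

# Relative demand following the estimating equation above, plus noise:
ln_K_over_H = (sigma_true * np.log(gamma / (1 - gamma))
               + sigma_true * (ln_w_over_r + ln_A)
               + firm_effect
               + rng.normal(0, 0.2, n_firms * T))

# OLS with firm fixed effects via within-firm demeaning.
def demean_by_firm(x):
    out = x.astype(float).copy()
    for f in range(n_firms):
        mask = firm == f
        out[mask] -= out[mask].mean()
    return out

y = demean_by_firm(ln_K_over_H)
x = demean_by_firm(ln_w_over_r + ln_A)
sigma_hat = float(x @ y / (x @ x))
print(f"estimated sigma: {sigma_hat:.2f}")   # recovers ~0.9 on this synthetic data
```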
4
Owen Cotton-Barratt
hmm, I think I would expect different experience curves for the efficiency of running experiments vs producing cognitive labour (with generally less efficiency-boosts with time for running experiments). Is there any reason to expect them to behave similarly? (Though I think I agree with the qualitative point that you could get a software-only intelligence explosion even if you can't do this with human-only research input, which was maybe your main point.)
3
Parker_Whitfill
This is a good point, we agree, thanks! Note that you need to assume that the algorithmic progress that gives you more effective inference compute is the same that gives you more effective research compute. This seems pretty reasonable but worth a discussion.

Although note that this argument works only with the CES in compute formulation. For the CES in frontier experiments, you would have the ratio A K_res / (A K_train), so the A cancels out.[1]

1. ^ You might be able to avoid this by adding the A's in a less naive fashion. You don't have to train larger models if you don't want to. So perhaps you can freeze the frontier, and then you get A K_res / (A_frozen K_train)? I need to think more about this point.

Thanks for this!

 

Let me try and summarise what I think is the high-level dynamic driving the result, and you can correct me if I'm confused.

 

CES in compute.

Compute has become cheaper while wages have stayed ~constant. The economic model then implies that:

  • If compute and labour were complements, then labs would spend a greater fraction of their research budgets on labour. (This prevents labour from becoming a bottleneck as compute becomes cheaper.)

Labs aren't doing this, suggesting that compute and labour are substitutes. 

 

CES in frontier... (read more)

5
Parker_Whitfill
Yep, I think this gets the high-level dynamics driving the results right. 

The  condition is exactly what Epoch and Forethought consider when they analyze whether the returns to research are high enough for a singularity.[5]

Though we initially consider this, we then adjust for compute as an input to R&D and so end up considering the sigma=1 condition. It's under that condition that I think it's more likely than not that the condition for a software-only intelligence explosion holds

3
Parker_Whitfill
Thanks for the clarification. We updated the post accordingly. 

So you think ppl doing direct work should quit and earn to give if they could thereby double their salary? Can't be the right recommendation for everyone!

3
Vasco Grilo🔸
Hi Tom, It depends on the organisations which would receive the additional donations. If the person quitting their job is 10 % more cost-effective than the person who would replace them, donates 10 % of their gross annual salary to an organisation 10 times as cost-effective as their initial organisation, their donations doubled as a result of quitting, and there was no impact from direct work in the new organisation, their annual impact after quitting would become 1.82 (= (0 + 0.1*2*10)/(0.1*1 + 0.1*10)) times as large as their initial annual impact.

I like the vividness of the comparisons!

A few points against this being nearly as crazy as the comparisons suggest:

  • GPT-2030 may learn much less sample-efficiently, and much less compute-efficiently, than humans. In fact, this is pretty likely. Ball-parking, humans do 1e24 FLOP before they're 30, which is ~20X less than GPT-4. And we learn languages/maths from way fewer data points. So the actual rate at which GPT-2030 itself gets smarter will be lower than the rates implied. (A rough arithmetic check of this ball-park is sketched below.)
    • This is a sense of "learn" as in "improves its own understanding". There's an
... (read more)
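A rough arithmetic check of that ball-park. The ~1e15 FLOP/s brain figure is an assumed order-of-magnitude estimate chosen to reproduce the 1e24 number, and the GPT-4 figure is just the comment's "~20X" applied to the result:

```python
# Rough ball-park: lifetime "training" compute of a 30-year-old human vs GPT-4.
BRAIN_FLOP_PER_SECOND = 1e15                             # assumed order-of-magnitude brain estimate
SECONDS_IN_30_YEARS = 30 * 365.25 * 24 * 3600            # ~9.5e8 seconds

human_flop_by_30 = BRAIN_FLOP_PER_SECOND * SECONDS_IN_30_YEARS   # ~1e24 FLOP
implied_gpt4_flop = 20 * human_flop_by_30                        # the "~20X" gives ~2e25 FLOP

print(f"human (30 years): {human_flop_by_30:.1e} FLOP")
print(f"implied GPT-4:    {implied_gpt4_flop:.1e} FLOP")
```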
2
rosehadshar
Thanks, I think these points are good. Do you have any examples in mind of domains where we might expect this? I've heard people say things like 'some maths problems require serial thinking time', but I still feel pretty vague about this and don't have much intuition about how strongly to expect it to bite.  

Thanks, this is a great comment.

The first and second examples seem pretty good, and useful reference points.

The third example doesn't seem nearly as useful though. What's particularly unusual about this case is that there are two useful inputs to AI R&D -- cognitive labour and compute for experiments -- and the former will rise very rapidly but the latter will not. In particular, I imagine CS departments also saw compute inputs growing in that time. And I imagine some of the developments discussed (eg proofs about algorithms) only have cogn... (read more)

  •  I think utilitarianism is often a natural generalization of "I care about the experience of XYZ, it seems arbitrary/dumb/bad to draw the boundary narrowly, so I should extend this further" (This is how I get to utilitarianism.) I think the AI optimization looks considerably worse than this by default.

Why is this different between AIs and humans? Do you expect AIs to care less about experience than humans, maybe because humans get reward during lifetime learning but AIs don't get reward during in-context learning?

  • I can directly observe AIs and make predictions of future training methods and their values seem to result from a much more heavily optimized and precise thing with less "slack" in some sense. (Perhaps this is related to genetic bottleneck, I'm unsure.)

Can you say more about how slack (or genetic bottleneck) would affect whether AIs have values that are good by human lights?

  • AIs will be primarily trained in things which look extremely different from "cooperatively achieving high genetic fitness".

They might well be trained to cooperate with other copies on tasks, if this is the way they'll be deployed in practice?

  • Current AIs seem to use the vast, vast majority of their reasoning power for purposes which aren't directly related to their final applications. I predict this will also apply for internal high level reasoning of AIs. This doesn't seem true for humans.

In what sense do AIs use their reasoning power in this way? How does that affect whether they will have values that humans like?

I agree that bottlenecks like the ones you mention will slow things down. I think that's compatible with this being a "jump in forward a century" thing though.

Let's consider the case of a cure for cancer. First of all, even if it takes "years to get it out due to the need for human trials and to actually build and distribute the thing" AGI could still bring the cure forward from 2200 to 2040 (assuming we get AGI in 2035).

Second, the excess top-quality labour from AGI could help us route-around the bottlenecks you mentioned:

  • Human trials: AGI might develop u
... (read more)

It seems to me like you disagree with Carl because you write:

  • The reason for an investor to make a bet, is that they believe they will profit later
  • However, if they believe in near-term TAI, savvy investors won't value future profits (since they'll be dead or super rich anyways)
  • Therefore, there is no way for them to win by betting on near-term TAI

So you're saying that investors can't win from betting on near-term TAI. But Carl thinks they can win.

5
CarlShulman
As Tom says, sorry if I wasn't clear.

Local cheap production makes for small supply chains that can regrow from disruption as industry becomes more like information goods.

Could you say more about what you mean by this?

Thanks for these great questions Ben!

To take them point by point:

  1. The CES task-based model incorporates Baumol effects, in that after AI automates a task the output on that task increases significantly and so its importance to production decreases. The tasks with low output become the bottlenecks to progress. (A small numerical illustration of this is sketched below.)
    1. I'm not sure what exactly you mean by technological deflation. But if AI automates therapy and increases the number of therapists by 100X, then my model won't imply that the real $ value of the therapy industry increases 100X. The price of therapy fall
... (read more)
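A tiny numerical illustration of that Baumol point, using a CES aggregate over tasks with an elasticity of substitution below one. The task weights and numbers are purely illustrative:

```python
import numpy as np

def output(x, rho=-1.0):
    """CES aggregate over task outputs x with equal weights (rho < 0 => tasks are complements)."""
    return np.mean(x ** rho) ** (1.0 / rho)

tasks = np.ones(10)                 # 10 tasks, each initially producing 1 unit
print(output(tasks))                # baseline aggregate output: 1.0

automated = tasks.copy()
automated[0] *= 100                 # AI automates one task; its output jumps 100x
print(output(automated))            # aggregate rises only to ~1.11

# With rho = -1 the aggregate is a harmonic mean: the automated task's marginal
# importance collapses and the nine non-automated tasks become the bottleneck.
```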

if they had explained why their views were not moved by the expert reviews OpenPhil has already solicited.

I included responses to each review, explaining  my reactions to it. What kind of additional explanation were you hoping for?

 

Davidson 2021 on semi-informative priors received three reviews.

By my judgment, all three made strong negative assessments, in the sense (among others) that if one agreed with the review, one would not use the report's reasoning to inform decision-making in the manner advocated by Karnofsky (and by Beckstead).

For Hajek... (read more)

Thanks for this!

I won't address all of your points right now, but I will say that I hadn't considered that "R&D is compensating for natural resources becoming harder to extract over time", which would increase the returns somewhat. However, my sense is that raw resource extraction is a small % of GDP, so I don't think this effect would be large.

Sorry for the slow reply!

I agree you can probably beat this average by aiming specifically at R&D for boosting economic growth.

I'd be surprised if you could spend $100s of millions per year and consistently beat the average by a large amount (>5X) though:

  • The $2 trillion number also excludes plenty of TFP-increasing research work done by firms that don't report R&D like Walmart and many services firms.
  • The broad areas where this feels most plausible to me (R&D in computing or fundamental bio-tech) are also the areas that have the biggest poten
... (read more)

Great question!

I would read Appendix G as conditional on "~no civilizational collapse (from any cause)", but not conditional on "~no AI-triggered fundamental reshaping of society that unexpectedly prevents growth". I think the latter would be incorporated in "an unanticipated bottleneck prevents explosive growth".

I think the question of GDP measurement is a big deal here. GDP deflators determine what counts as "economic growth" compared to nominal price changes, but deflators don't really know what to do with new products that didn't exist. What was the "price" of an iPhone in 2000? Infinity? Could this help recover Roodman's model? If ideas being produced end up as new products that never existed before, could that mean that GDP deflators should be "pricing" these replacements as massively cheaper, thus increasing the resulting "real" growth rate?

This is an int... (read more)

Thank you for this comment! I'll reply to different points in different comments.

But then the next point seems very clear: there's been tons of population growth since 1880 and yet growth rates are not 4x 1880 growth rates despite having 4x the population. The more people -> more ideas thing may or may not be true, but it hasn't translated to more growth.

So if AI is exciting because AIs could start expanding the number of "people" or agents coming up with ideas, why aren't we seeing huge growth spurts now?

The most plausible models have dimin... (read more)

Hey - interesting question! 

This isn't something I looked into in depth, but I think that if AI drives explosive economic growth then you'd probably see large rises in both absolute energy use and in energy efficiency.

Energy use might grow via (e.g.) massively expanding solar power to the world's deserts (see this blog from Carl Shulman). Energy efficiency might grow via replacing human  workers with AIs (allowing services to be delivered with less energy input), rapid tech progress further increasing the energy efficiency of existing goods and s... (read more)

Thanks for these thoughts! You raise many interesting points.

On footnote 16, you say "For example, the application of Laplace’s law described below implies that there was a 50% chance of AGI being developed in the first year of effort". But historically, participants in the Dartmouth conference were gloriously optimistic

I'm not sure whether the participants at Dartmouth would have assigned 50% to creating AGI within a year and >90% within a decade, as implied by the Laplace prior. But either way I do think these probabilities would have been too ... (read more)

Agreed - the framework can be applied to things other than AGI.

Thanks for this Halstead - thoughtful article.

I have one push-back, and one question about your preferred process for applying the ITN framework.

1. After explaining the 80K formalisation of ITN you say

Thus, once we have information on importance, tractability and neglectedness (thus defined), then we can produce an estimate of marginal cost-effectiveness.
The problem with this is: if we can do this, then why would we calculate these three terms separately in the first place?

I think the answer is that in some contexts it's easier to calculate each t... (read more)

3
MichaelPlant
Hmmm. I don't really see how this is any harder than, or different from, your proposed method, which is to figure out how much of the problem would be solved by increasing spend by 10%. In both cases you've got to do something like working out how much money it would take to 'solve' AI safety. Then you play with that number.
1
Robert_Wiblin
Glad you like them! Tell your friends. ;)

I found Nakul's article very interesting too but am surprised at what it led you to conclude.

I didn't think the article was challenging the claim that doing paradigmatic EA activities was moral. I thought Nakul was suggesting that doing them wasn't obligatory, and that the consequentialist reasons for doing them could be overridden by an individual's projects, duties and passions. He was pushing against the idea that EA can demand that everyone support them.

It seems like your personal projects would lead you to do EA activities. So I'm surprised you judge EA acti... (read more)

1
Diego_Caleiro
Agreed with the first 2 paragraphs.

Activities that are more moral than EA for me: at the moment I think working directly on assembling and conveying knowledge in philosophy and psychology to the AI safety community has higher expected value. I'm taking the AI human compatible course at Berkeley, with Stuart Russell, and I hang out at MIRI a lot, so in theory I'm in a good position to do that research, and some of the time I work on it. But I don't work on it all the time; I would if I got funding for our proposal.

But actually I was referring to a counterfactual world where EA activities are less aligned with what I see as morally right than this world. There's a dimension, call it "skepticism about utilitarianism", that reading Bernard Williams made me move along. If I moved more and more along that dimension, I'd still do EA activities, that's all.

Your expectation is partially correct. I assign 3% to EA activities being morally required of everyone, and I feel personally more required to do them than 25% (because this is the dream time, I was lucky, I'm at a high-leverage position, etc.), but although I think it is right for me to do them, I don't do them because it's right, and that's my overall point.

Yeah good point.

If people choose a job which they enjoy less, then that's a huge sacrifice, and should be applauded.

But EA is about doing the most good that you can.

So anyone who is doing the most good that they could possibly do is being an amazing EA. Someone on £1 million who donates £50K is not doing anywhere near as much good as they could do.

The rich especially should be encouraged to make big sacrifices, as they do have the power to do the most good.

1
Owen Cotton-Barratt
But this will tend to neglect the fact that people can make choices which make them richer, possibly at personal cost. If we systematically ignore this, we will probably encourage people too much into careers which they enjoy with low consumption levels. I think it's important to take both degree of sacrifice (because the amount we can do isn't entirely endogenous) and absolute amount achieved (because nor is it entirely exogenous) into account.

I agree completely that talking with people about values is the right way to go. Also, I don't think we need to try and convince them to be utilitarians or nearly-utilitarian. Stressing that all people are equal and pointing to the terrible injustice of the current situation is already powerful, and those ideas aren't distinctively utilitarian.

There is no a priori reason to think that the efficacy of charitable giving should have any relation whatsoever to utilitarianism. Yet it occupies a huge part of the movement.

I think the argument is that, a priori, utilitarians think we should give effectively. Further, given the facts as they are (namely that effective donations can do an astronomical amount of good), there are incredibly strong moral reasons for utilitarians to promote effective giving and thus to participate in the EA movement.

I think that [the obsession with utilitarianism] is reg

... (read more)
0
[anonymous]
I agree that given the amount of good which the most effective charities can do, there are potentially strong reasons for utilitarians to donate. Yet utilitarians are but a small sub-set of at least one plausible index of the potential scope of effective altruism: any person, organisation or government which currently donates to charity or supports foreign aid programmes. In order to get anywhere near that kind of critical mass the movement has to break away from being a specifically utilitarian one.

Those seem really high flow through effects to me! £2000 saves one life, but you could easily see it doing as much good as saving 600!

How are you arriving at the figure? The argument that "if you value all times equally, the flow through effects are 99.99...% of the impact" would actually seem to show that they dominated the immediate effects much more than this. (I'm hoping there's a reason why this observation is very misleading.) So what informal argument are you using?

0
MichaelDickens
I more or less made up the numbers on the spot. I expect flow-through effects to dominate direct effects, but I don't know if I should assume that they will be astronomically bigger. The argument I'm making here is really more qualitative. In practice, I assume that AMF takes $3000 to save a life, but I don't put much credence in the certainty of this number.

This is a nice idea but I worry it won't work.

Even with healthy moral uncertainty, I think we should attach very little weight to moral theories that give future people's utility negligible moral weight. For the kinds of reasons that suggest we can attach them less weight don't go any way to suggesting that we can ignore them. To do this they'd have to show that future people's moral weight was (more than!) inversely proportional to their temporal distance from us. But the reasons they give tend to show that we have special obligations to people in our gen... (read more)

Great post!

Out of interest, can you give an example of an "instrumentally rational technique that requires irrationality"?

Why? What are the very long term effects of a murder?

0
saulius
Murdering also decreases world population and consumption, which decreases problems like global warming, overfishing, etc. and probably reduces some existential risks.
0
Robert_Wiblin
Increasing violence and expectation of violence seems to lead to worse values and a more cruel/selfish world. Of course it's also among the worst thing you can do under all non-consequentialist ethics.

Would you similarly doubt that, on expectation, someone murdering someone else had bad consequences overall? Someone slapping you very hard in the face?

This kind of reasoning seems to bring about a universal scepticism about whether we're doing Good. Even if you think you can pin down the long term effects, you have no idea about the very long term effects (and everything else is negligible compared to very long term effects).

3
MichaelDickens
For what it's worth, I definitely don't think we should throw our hands up and say that everything is too uncertain, so we should do nothing. Instead we have to accept that we're going to have high levels of uncertainty, and make decisions based on that. I'm not sure it's reasonable to say that GiveWell top charities are a "safe bet", which means they don't have a clear advantage over far future interventions. You could argue that we should favor GW top charities because they have better feedback loops--I discuss this here.
1
Robert_Wiblin
I think the effects of murdering someone are more robustly bad than the effects of reducing poverty are robustly good (the latter are also probably positive, but less obviously so).

In defence of WALYs, and in reply to your specific points:

  1. I don't share your intuition here. Well-being is what we're talking about when we say "I'm not sure he's doing so well at the moment", or when we say "I want to help people as much as possible". It's a general term for how well someone is doing, overall. It's an advantage, in my eyes, that it's not committed to any specific account of well-being, for any such account might have its drawbacks.

  2. I worry that, in adopting HALYs, EA would tie its aims to a narrow view of what huma

... (read more)
0
MichaelPlant
Thanks for the comments Tom.

On 1. I agree that the broadness of leaving 'well-being' unspecified looks like an advantage, but I think that's somewhat illusory. If I ask you "okay, so if you want to help people do better, what do you mean by 'better'?" then you've got to specify an account of well-being unless you want to give a circular answer. If you just say "well, I want to do what's good for them" that wouldn't tell me what you meant. This might seem picky, but depending on your view of well-being you get quite sharply different policy/EA decisions. I'm doing some research on this now and hope to write it up soon.

On 2. I should probably reveal my cards and say I'm a hedonist about well-being. I'm not interested in any intervention which doesn't make people experience more joy and less suffering. To make the point by contrast, lots of things which make people richer do nothing to increase happiness. I'm very happy for other EAs to choose their own accounts of well-being of course. As it happens, lots of EAs seem to be implicit or explicit hedonists too.

A small quibble

One conclusion EAs might make is that their personal diets are no big deal, easily swamped as it is by the consequences of donations.

I think it's flat out wrong to conclude our diets "are no big deal". Being vegetarian for a lifetime prevents over 1000 years of animal suffering. That's a huge, huge impact.

My more serious worry is that people will draw this conclusion and eat less ethically as a result, without donating more (they already knew donating was great). But this is just psychological speculation backed up by some anecdotal evidence.

Most people who go vegetarian find it's very, very little effort to be 90% vegetarian after a year or so. To me this warns against the view that people will give extra because "they haven't made the sacrifice of becoming veggie". Very soon the sacrifice becomes a habit and the claim that charitable donations are affected becomes even less plausible.

I'd be interested to know if anyone has given more money because of this thread. I know that I'm more willing to eat dairy products and have read others saying it made them happier eating meat.
