All of Tom_Davidson's Comments + Replies

I agree that bottlenecks like the ones you mention will slow things down. I think that's compatible with this being a "jump forward a century" thing, though.

Let's consider the case of a cure for cancer. First of all, even if it takes "years to get it out due to the need for human trials and to actually build and distribute the thing", AGI could still bring the cure forward from 2200 to 2040 (assuming we get AGI in 2035).

Second, the excess top-quality labour from AGI could help us route around the bottlenecks you mentioned:

  • Human trials: AGI might develop u
... (read more)

It seems to me like you disagree with Carl because you write:

  • The reason for an investor to make a bet, is that they believe they will profit later
  • However, savvy investors who believe in near-term TAI won't value future profits (since they'll be dead or super rich anyway)
  • Therefore, there is no way for them to win by betting on near-term TAI

So you're saying that investors can't win from betting on near-term TAI. But Carl thinks they can win.

5
CarlShulman
1y
As Tom says, sorry if I wasn't clear.

Local cheap production makes for small supply chains that can regrow from disruption as industry becomes more like information goods.

Could you say more about what you mean by this?

Thanks for these great questions, Ben!

To take them point by point:

  1. The CES task-based model incorporates Baumol effects, in that after AI automates a task the output on that task increases significantly and so its importance to production decreases. The tasks with low output become the bottlenecks to progress (a toy sketch of this mechanism follows below).
    1. I'm not sure what exactly you mean by technological deflation. But if AI automates therapy and increases the number of therapists by 100X then my model won't imply that the real $ value of the therapy industry increases 100X. The price of therapy fall
... (read more)
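To make the Baumol mechanism concrete, here is a toy two-task CES calculation (an illustrative sketch with made-up weights and elasticity, not the report's actual calibration):

```python
import numpy as np

def ces_output(x, weights, rho):
    """CES aggregate Y = (sum_i w_i * x_i**rho)**(1/rho).
    With rho < 0 the tasks are gross complements, so output is
    dragged down by whichever task has the least input."""
    x = np.asarray(x, dtype=float)
    return np.sum(weights * x**rho) ** (1.0 / rho)

weights = np.array([0.5, 0.5])
rho = -1.0  # elasticity of substitution 1/(1-rho) = 0.5 < 1

# Task 1 is automated and its effective input is scaled up;
# task 2 (not yet automated) stays fixed at 1.
for boost in [1, 10, 100, 1000]:
    y = ces_output([boost, 1.0], weights, rho)
    print(f"automated input x{boost:>4}: Y = {y:.3f}")
# Prints Y ≈ 1.000, 1.818, 1.980, 1.998: output saturates near 2,
# so the un-automated task becomes the bottleneck (Baumol's effect)
# and grows in relative importance as its relative price rises.
```

Even a 1000X boost to the automated task only roughly doubles output here; the remaining un-automated task caps progress, which is the sense in which the model "incorporates Baumol effects".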

if they had explained why their views were not moved by the expert reviews OpenPhil has already solicited.

I included responses to each review, explaining my reactions to it. What kind of additional explanation were you hoping for?


Davidson 2021 on semi-informative priors received three reviews.

By my judgment, all three made strong negative assessments, in the sense (among others) that if one agreed with the review, one would not use the report's reasoning to inform decision-making in the manner advocated by Karnofsky (and by Beckstead).

For Hajek... (read more)

Thanks for this!

I won't address all of your points right now, but I will say that I hadn't considered that "R&D is compensating for natural resources becoming harder to extract over time", which would increase the returns somewhat. However, my sense is that raw resource extraction is a small % of GDP, so I don't think this effect would be large.

Sorry for the slow reply!

I agree you can probably beat this average by aiming specifically at R&D for boosting economic growth.

I'd be surprised if you could spend hundreds of millions of dollars per year and consistently beat the average by a large amount (>5X) though:

  • The $2 trillion number also excludes plenty of TFP-increasing research work done by firms that don't report R&D like Walmart and many services firms.
  • The broad areas where this feels most plausible to me (R&D in computing or fundamental bio-tech) are also the areas that have the biggest poten
... (read more)

Great question!

I would read Appendix G as conditional on "~no civilizational collapse (from any cause)", but not conditional on "~no AI-triggered fundamental reshaping of society that unexpectedly prevents growth". I think the latter would be incorporated in "an unanticipated bottleneck prevents explosive growth".

I think the question of GDP measurement is a big deal here. GDP deflators determine what counts as "economic growth" compared to nominal price changes, but deflators don't really know what to do with new products that didn't exist. What was the "price" of an iPhone in 2000? Infinity? Could this help recover Roodman's model? If ideas being produced end up as new products that never existed before, could that mean that GDP deflators should be "pricing" these replacements as massively cheaper, thus increasing the resulting "real" growth rate?
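To make the measurement worry concrete, here is a toy two-good example (all numbers invented) contrasting the standard linking-in of a new product with an imputed "reservation price" treatment:

```python
# Year 0: only good A exists.  Year 1: good B is introduced.
p0, q0 = {"A": 1.0}, {"A": 100.0}
p1, q1 = {"A": 1.0, "B": 2.0}, {"A": 100.0, "B": 50.0}

nominal0 = sum(p0[g] * q0[g] for g in q0)   # 100
nominal1 = sum(p1[g] * q1[g] for g in q1)   # 200

# Standard treatment: B is linked in at its launch price, so its
# arrival registers no price change; the deflator ratio is 1 and
# measured real growth equals nominal growth.
real_growth_standard = nominal1 / nominal0 - 1          # 1.0 -> 100%

# Reservation-price treatment: impute a hypothetical year-0 "choke
# price" for B (the price at which demand would have been zero).
p0_res = {"A": 1.0, "B": 8.0}                           # made-up number
paasche = nominal1 / sum(p0_res[g] * q1[g] for g in q1) # 200/500 = 0.4
real_growth_reservation = (nominal1 / paasche) / nominal0 - 1  # 4.0 -> 400%

print(f"standard deflator:   {real_growth_standard:.0%} real growth")
print(f"reservation pricing: {real_growth_reservation:.0%} real growth")
```

On the standard treatment the new product contributes nothing to measured deflation; treating its introduction as a price collapse from a high reservation price makes measured prices fall sharply and real growth jump, which is the sense in which deflator conventions could hide very fast "real" growth.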

This is an int... (read more)

Thank you for this comment! I'll reply to different points in different comments.

But then the next point seems very clear: there's been tons of population growth since 1880 and yet growth rates are not 4x 1880 growth rates despite having 4x the population. The more people -> more ideas thing may or may not be true, but it hasn't translated to more growth.

So if AI is exciting because AIs could start expanding the number of "people" or agents coming up with ideas, why aren't we seeing huge growth spurts now?

The most plausible models have dimin... (read more)
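For reference, the standard semi-endogenous-growth answer to this puzzle goes roughly as follows (a textbook sketch, not necessarily the exact model Tom has in mind):

```latex
% Ideas are produced by researchers N, but get harder to find (\phi < 1):
\dot{A} = \delta N^{\lambda} A^{\phi}, \qquad \phi < 1
% Requiring \dot{A}/A to be constant on a balanced growth path gives
g_A = \frac{\lambda n}{1 - \phi}
% where n is the *growth rate* of N. Steady-state growth thus depends on
% how fast the research population grows, not on its level: 4x the people
% need not mean 4x the growth, because the larger idea stock A (with
% \phi < 1) makes each new idea harder to find.
```

On this view, "more people -> more ideas" is true in levels but washes out of growth rates; presumably what would make AI different is that the number of AI researchers could grow far faster than the human population ever has.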

Hey - interesting question! 

This isn't something I looked into in depth, but I think that if AI drives explosive economic growth then you'd probably see large rises in both absolute energy use and in energy efficiency.

Energy use might grow via (e.g.) massively expanding solar power to the world's deserts (see this blog from Carl Shulman). Energy efficiency might grow via replacing human workers with AIs (allowing services to be delivered with less energy input), rapid tech progress further increasing the energy efficiency of existing goods and s... (read more)

Thanks for these thoughts! You raise many interesting points.

 On footnote 16, you say "For example, the application of Laplace’s law described below implies that there was a 50% chance of AGI being developed in the first year of effort". But historically, participants in the Dartmouth conference were gloriously optimistic

I'm not sure whether the participants at Dartmouth would have assigned 50% to creating AGI within a year and >90% within a decade, as implied by the Laplace prior. But either way I do think these probabilities would have been too ... (read more)
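For reference, here is the arithmetic behind those two figures (a quick check of the naive rule-of-succession prior, assuming yearly trials starting from zero prior observations):

```python
from math import prod

# Laplace's rule of succession: after n straight "failure" years,
# P(success in the next year) = 1 / (n + 2).
p_first_year = 1 / (0 + 2)          # 0.5: 50% in the first year of effort

# P(at least one success within the first 10 years):
p_none_in_10 = prod(1 - 1 / (n + 2) for n in range(10))
# = product of (n+1)/(n+2) for n = 0..9, which telescopes to 1/11.
p_within_decade = 1 - p_none_in_10  # ~0.909: just over 90% in a decade

print(p_first_year, round(p_within_decade, 3))  # 0.5 0.909
```

So the unmodified Laplace prior really does imply 50% in the first year and just over 90% within the first decade.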

Agreed - the framework can be applied to things other than AGI.

Thanks for this, Halstead - thoughtful article.

I have one push-back, and one question about your preferred process for applying the ITN framework.

1. After explaining the 80K formalisation of ITN you say

Thus, once we have information on importance, tractability and neglectedness (thus defined), then we can produce an estimate of marginal cost-effectiveness.
The problem with this is: if we can do this, then why would we calculate these three terms separately in the first place?

I think the answer is that in some contexts it's easier to calculate each t... (read more)

3
MichaelPlant
5y
Hmmm. I don't really see how this is any harder, or different from, your proposed method, which is to figure out how much of the problem would be solved by increasing spend by 10%. In both cases you've got to do something like working out how much money it would take to 'solve' AI safety. Then you play with that number.
1
Robert_Wiblin
7y
Glad you like them! Tell your friends. ;)

I found Nakul's article v interesting too but am surprised at what it led you to conclude.

I didn't think the article was challenging the claim that doing paradigmatic EA activities was moral. I thought Nakul was suggesting that doing them wasn't obligatory, and that the consequentialist reasons for doing them could be overridden by an individual's projects, duties and passions. He was pushing against the idea that EA can demand that everyone support it.

It seems like your personal projects would lead you to do EA activities. So I'm surprised you judge EA acti... (read more)

1
Diego_Caleiro
8y
Agreed with the first 2 paragraphs. Activities that are more moral than EA for me: at the moment I think working directly on assembling and conveying knowledge in philosophy and psychology to the AI safety community has higher expected value. I'm taking the AI human-compatible course at Berkeley with Stuart Russell, and I hang out at MIRI a lot, so in theory I'm in a good position to do that research, and some of the time I work on it. But I don't work on it all the time; I would if I got funding for our proposal. But actually I was referring to a counterfactual world where EA activities are less aligned with what I see as morally right than this world. There's a dimension, call it "skepticism about utilitarianism", that reading Bernard Williams made me move along. If I moved more and more along that dimension, I'd still do EA activities, that's all. Your expectation is partially correct: I assign 3% to EA activities being morally required of everyone, and I feel personally more required to do them than 25% (because this is the dream time, I was lucky, I'm in a high-leverage position, etc.), but although I think it is right for me to do them, I don't do them because it's right, and that's my overall point.

Yeah good point.

If people choose a job which they enjoy less then that's a huge sacrifice, and should be applauded.

But EA is about doing the most good that you can.

So anyone who is doing the most good that they could possibly do is being an amazing EA. Someone on £1 million who donates £50K is not doing anywhere near as much good as they could do.

The rich especially should be encouraged to make big sacrifices, as they do have the power to do the most good.

1
Owen Cotton-Barratt
8y
But this will tend to neglect the fact that people can make choices which make them richer, possibly at personal cost. If we systematically ignore this, we will probably encourage people too much into careers which they enjoy with low consumption levels. I think it's important to take both degree of sacrifice (because the amount we can do isn't entirely endogenous) and absolute amount achieved (because nor is it entirely exogenous) into account.

I agree completely that talking with people about values is the right way to go. Also, I don't think we need to try and convince them to be utilitarians or nearly-utilitarian. Stressing that all people are equal and pointing to the terrible injustice of the current situation is already powerful, and those ideas aren't distinctively utilitarian.

There is no a priori reason to think that the efficacy of charitable giving should have any relation whatsoever to utilitarianism. Yet it occupies a huge part of the movement.

I think the argument is that, a priori, utilitarians think we should give effectively. Further, given the facts as they stand (namely that effective donations can do an astronomical amount of good), there are incredibly strong moral reasons for utilitarians to promote effective giving and thus to participate in the EA movement.

I think that [the obsession with utilitarianism] is reg

... (read more)
0
[anonymous]
8y
I agree that given the amount of good which the most effective charities can do, there are potentially strong reasons for utilitarians to donate. Yet utilitarians are but a small sub-set of at least one plausible index of the potential scope of effective altruism: any person, organisation or government which currently donates to charity or supports foreign aid programmes. In order to get anywhere near that kind of critical mass the movement has to break away from being a specifically utilitarian one.

Those seem really high flow-through effects to me! £2000 saves one life, but you could easily see it doing as much good as saving 600!

How are you arriving at the figure? The argument that "if you value all times equally, the flow through effects are 99.99...% of the impact" would actually seem to show that they dominated the immediate effects much more than this. (I'm hoping there's a reason why this observation is very misleading.) So what informal argument are you using?

0
MichaelDickens
8y
I more or less made up the numbers on the spot. I expect flow-through effects to dominate direct effects, but I don't know if I should assume that they will be astronomically bigger. The argument I'm making here is really more qualitative. In practice, I assume that AMF takes $3000 to save a life, but I don't put much credence in the certainty of this number.

This is a nice idea but I worry it won't work.

Even with healthy moral uncertainty, I think we should attach very little weight to moral theories that give future people's utility negligible moral weight. The kinds of reasons that suggest we can attach them less weight don't go any way toward suggesting that we can ignore them. To do this they'd have to show that future people's moral weight was (more than!) inversely proportional to their temporal distance from us. But the reasons they give tend to show that we have special obligations to people in our gen... (read more)

Great post!

Out of interest, can you give an example of an "instrumentally rational technique that requires irrationality"?

Why? What are the very long term effects of a murder?

0
saulius
8y
Murdering also decreases world population and consumption, which decreases problems like global warming, overfishing, etc. and probably reduces some existential risks.
0
Robert_Wiblin
8y
Increasing violence and expectation of violence seems to lead to worse values and a more cruel/selfish world. Of course it's also among the worst things you can do under all non-consequentialist ethics.

Would you similarly doubt that, on expectation, someone murdering someone else had bad consequences overall? Someone slapping you very hard in the face?

This kind of reasoning seems to bring about a universal scepticism about whether we're doing Good. Even if you think you can pin down the long term effects, you have no idea about the very long term effects (and everything else is negligible compared to very long term effects).

3
MichaelDickens
8y
For what it's worth, I definitely don't think we should throw our hands up and say that everything is too uncertain, so we should do nothing. Instead we have to accept that we're going to have high levels of uncertainty, and make decisions based on that. I'm not sure it's reasonable to say that GiveWell top charities are a "safe bet", which means they don't have a clear advantage over far future interventions. You could argue that we should favor GW top charities because they have better feedback loops--I discuss this here.
1
Robert_Wiblin
8y
I think the effects of murdering someone are more robustly bad than the effects of reducing poverty are good (the latter are probably positive, but less obviously so).

In defence of WALYs, and in reply to your specific points:

  1. I don't share your intuition here. Well-being is what we're talking about when we say "I'm not sure he's doing so well at the moment", or when we say "I want to help people as much as possible". It's a general term for how well someone is doing, overall. It's an advantage, in my eyes, that it's not committed to any specific account of well-being, for any such account might have its drawbacks.

  2. I worry that, in adopting HALYs, EA would tie its aims to a narrow view of what huma

... (read more)
0
MichaelPlant
8y
Thanks for the comments Tom. On 1. I agree that the broadness of leaving 'well-being' unspecified looks like an advantage, but I think that's somewhat illusory. If I ask you "okay, so if you want to help people do better, what do you mean by 'better'?" then you've got to specify an account of well-being unless you want to give a circular answer. If you just say "well, I want to do what's good for them" that wouldn't tell me what you meant. This might seem picky, but depending on your view of well-being you get quite sharply different policy/EA decisions. I'm doing some research on this now and hope to write it up soon. On 2. I should probably reveal my cards and say I'm a hedonist about well-being. I'm not interested in any intervention which doesn't make people experience more joy and less suffering. To make the point by contrast, lots of things which make people richer do nothing to increase happiness. I'm very happy for other EAs to choose their own accounts of well-being, of course. As it happens, lots of EAs seem to be implicit or explicit hedonists too.

A small quibble

One conclusion EAs might make is that their personal diets are no big deal, easily swamped as it is by the consequences of donations.

I think it's flat out wrong to conclude our diets "are no big deal". Being vegetarian for a lifetime prevents over 1000 years of animal suffering. That's a huge, huge impact.

My more serious worry is that people will draw this conclusion and eat less ethically as a result, without donating more (they already knew donating was great). But this is just psychological speculation backed up by some anecdotal evidence.

Most people who go vegetarian find it takes very little effort to be 90% vegetarian after a year or so. To me this warns against the view that people will give extra because "they haven't made the sacrifice of becoming veggie". Very soon the sacrifice becomes a habit and the claim that charitable donations are affected becomes even less plausible.

I'd be interested to know if anyone has given more money because of this thread. I know that I'm more willing to eat dairy products, and have read others saying it made them happier eating meat.

That only seems to show that emissions do harm, not that the harm is so finely individuated. FWIW, there are reasons to doubt that the butterfly effect works in the same way given quantum mechanics.

1
Tom_Ash
8y
I'd be interested to hear Carl's response, since this is an interesting test case for the harm-avoidance moral principles at issue.

When you emit carbon dioxide those emissions will go on to harm particular people. When you buy offsets that will avert emissions that would have harmed different people.

What's this claim based on?

2
CarlShulman
8y
Harm from weather events: heat stroke, storms, crop damage. Plus the butterfly effect.

This is a really good article, and I do find the perspective advocated compelling. However, I would like to voice some worries.

  1. Anyone not committed to a consequentialist mindset is likely to take serious issue with someone who eats meat but donates to charities that encourage other people to give up meat. In general, advocating that someone else make a sacrifice that you aren't willing to make is seen as hypocritical and lacking in integrity. People will criticise you and perhaps, by association, effective altruism.

  2. I'm sceptical, psychologically, th

... (read more)

Agree - it's worth pointing out that 'meat offsetting' isn't obviously morally OK unless you're a consequentialist. It's analogous to a case where you kill one person then pay someone else not to kill a different person - and you'd only have to donate $3500 to AMF per person killed, bargain!

(unlike CO2 offsetting, where the overall level of CO2 is reduced and fewer people are harmed).

I agree with this. Let me explain why I stand by the point that you quote me on. Tl;dr: by "negative effects" I wasn't talking about the hurt feelings of potential EAs.

My point wasn't the following: "It's unfair on relatively poor potential EAs, therefore it's bad, therefore let's change the movement." As you stress, this consideration is outweighed by the considerations of those the movement is trying to help. I accept explicitly in the article that such considerations might justify us making EA elitist.

My point was rather that pe... (read more)

Thanks for that.

My basic worries are:

  • Academics must gain something from spending ages thinking and studying ethics, be it understanding of the arguments, knowledge of more arguments or something else. I think this puts them in a better position than others and should make others tentative in saying that they're wrong.

  • Your explanation for disagreeing with certain academics is that they have different starting intuitions. But does this account for the fact that academics can revise/abandon intuitions because of broader considerations? Even if you're right... (read more)

1
Tor_Barstad
9y
Btw, I agree with this in the sense that I'd rather have a random ethicist make decisions about an ethical question than a random person. Great! I'm writing a text about this, and I'll add a comment with a reference to it when the first draft is finished :) A reasonable question, and I'll try to give a better account of my reasons for this in my next comment, since the text may help in giving a picture of where I'm coming from. I will say in my defence though, that I do have at least some epistemic modesty in regards to this - although not as much as I think you would think is the reasonable level. While what I think of as probably being the best outcomes from an "objective" perspective corresponds to some sort of hedonistic utilitarianism, I do not and do not intend to ever work towards outcomes that don't also take other ethical concerns into account, and hope to achieve a future that is very good from the perspective of many ethical viewpoints (rights of persons, fairness, etc) - partly because of epistemic modesty.

Why, do you believe we should redistribute moral virtue?

No, but it's unfair that it's harder for the poor to attain the status. That has negative effects which I talked about in the article.

1
SophiesWorld
9y
I mentioned this on Facebook before (I hope I don't sound like a broken record!), but the feelings of fellow aspiring EAs, while no doubt important, completely pale in comparison to those of the population we're trying to serve. Here's an analogy from GiveDirectly: https://www.givedirectly.org/blog-post.html?id=1960644650098330671

"through my interactions with the organization, it's become clear that their commitment is not just to evidence – it's to the poor. Most international charities' websites prominently feature photos of relatable smiling children, but not GiveDirectly, because of respect for beneficiaries' privacy and security. Many charities seem to resign themselves to a certain degree of corruption among their staff, but GiveDirectly is willing to install intrusive internal controls to actively prevent corruption."

Are intrusive internal controls "unfair" to GiveDirectly's staff members? In some sense, of course... other NGOs don't do this. In another, more important sense, however, GiveDirectly workers are still way better off than the people they're transferring money to. In a similar sense, while "the poor" (by that, I assume you mean people making in the 80th percentile of income) will find it more difficult to meet the GWWC pledge, and maybe it's less "fair" for them to feel altruistic, it's even less fair to die from malaria.

Ultimately my greatest priority isn't fellow EAs. Paul Farmer said that his duty is [paraphrasing] "first to the sick, second to prisoners, and third to students." I think this is the right model to have. Conventional models of morality radiate outwards from our class and social standing, whereas a more universalist ethic will triage. If this is not obvious to you, imagine, behind the veil of ignorance, the following two scenarios: 1) You're making minimum wage in the US. You heard about the Giving What We Can pledge. You would like to contribute but know that you have a greater obligation to your family. You feel bad about

Thanks so much for this! Really good and persuasive points.

One important thing to say is that the Pledge should absolutely not be used to distinguish ‘good people’.

My worry is this isn't realistic, even if ideally we wouldn't distinguish people like this. For example, having taken the pledge myself and told people about it, I was congratulated (especially by other EAs). This simple and unavoidable kind of interaction rewards pledgers and shows that their moral status in the eyes of others has gone up. To me, it seems a real problem that this kind of... (read more)

1
Dale
9y
Why, do you believe we should redistribute moral virtue? The Pledge is trying to encourage people to donate more, so it assigns status on that basis. We don't want to reduce that incentive, it is already weak enough.

Thanks for a thoughtful response.

But what do you mean by "Refrain from posting things that assume that consequentialism is true"? That it's best to refrain from posting things that assume that values like e.g. justice aren't ends-in-themselves, or to refrain from posting things that assume that consequences and their quantity are important?

Definitely the former. I find it hard to get my head round people who deny the latter. I suspect only people committed to weird philosophical theories would do it. I thought modern Kantians were more moderate... (read more)

1
Tor_Barstad
9y
Likewise :) That's a reasonable worry, but as far as the field of ethics as a whole is concerned I would be much more worried about trusting the judgment of the average ethicist over ours. I would also agree that the "we are not special" assumption seems like a reasonable best guess for how things are in the absence of evidence for or against (although, in fear of violating your not-coming-across-as-smug-and-arrogant recommendation, I’m genuinely unsure about whether it's correct or not). I've also thought a lot about ethics; I’ve been doing so since childhood. But admittedly, most of the philosophical texts that have been written about these topics have not been read by me (or by most professional ethicists, I suppose, but I've read far less than them also, for sure). I have read a significant amount though, enough for me to have heard most or all memorable arguments I've heard be repeated several times. Also, perhaps more surprisingly: I'm somewhat confident that I've never heard an argument against my opinions about ethics (that is, not the specific issues, but the abstract issues) that was both (1) not based on axiomatic assumptions/intuitions I disagree with and (2) something I hadn't already thought of (of course, I may have forgotten, but it also seems like something that would have been memorable). Examples where criterion #2 was met but #1 wasn't include things like e.g. "the repugnant conclusion" (it doesn't seem repugnant to me at all, so it never occurred to me that this should be seen as a possible counter-argument). Philosophy class was a lot of "oh.. so that argument has a name" (and also a lot of “what? do people find that a convincing argument against utilitarianism?”). For all I know this could be the experience of many with opinions different from mine also, but if so, it suggests that intuitions and/or base assumptions may be the determining factor for many, as opposed to knowledge and understanding of arguments presented by differing sides

I agree with you on technical language - we have to judge cases on an individual basis and be reasonable.

Less sure about the consequentialism, unless you know you're talking to a consequentialist! If you want to evaluate an action from a narrow consequentialist perspective, can't you just say so at the start?

I don't think the existence of another pledge does much to negate the harm done by the GWWC pledge being classist.

I agree there's value in simplicity. But we already have an exception to the rule: students only pay 1%. There are two points here. Firstly, it doesn't seem to harm our placard-credentials. We still advertise as "give 10%", but on further investigation there's a sensible exception. I think something similar could accommodate low-earners. Secondly, even if you want to keep it at one exception, students are in a much better position to g... (read more)

Thanks a lot, this cleared up a lot of things.

I think we're talking past each other a little bit. I'm all for EtG and didn't mean to suggest otherwise. I think we should absolutely keep evaluating career impacts; Matt Wage made the right choice. When I said we should stop glorifying high earners I was referring to the way that they're hero-worshipped, not our recommending EtG as a career path.

Most of my suggested changes are about the way we relate to other EAs and to outsiders, though I had a couple of more concrete suggestions about the pledge and the ca... (read more)

3
xccf
9y
Good to hear we're mostly on the same page. Hm, maybe I just haven't seen much of this? Regarding the pledge, I'm inclined to agree with this quote: So, I'm inclined to think that preserving the simplicity of the current GWWC pledge is valuable. If someone doesn't feel like they're in a financial position to make that pledge, there's always the Life You Can Save pledge, or they can skip pledging altogether. Also, note that religions have been asking their members for 10% of their income for thousands of years, many hundreds of which folks were much poorer than people typically are today.

Thanks for the reply! I would like to pick you up on a few points though...

"On the one hand, you say you "want EA to change the attitudes of society as a whole". But you seem willing to backpedal on the goal of changing societal attitudes as soon as you encounter any resistance... If EA is watered down to the point where everyone can agree with it, it won't mean anything anymore."

I think all the changes I suggested can be made without the movement losing the things that currently make it distinctive and challenging in a good way. W... (read more)

2
xccf
9y
I guess I'm not totally sure what concrete suggestions you're trying to make. You do imply that we should stop saying things like "It’s better to become a banker and give away 10% of your income than to become a social worker" and stop holding EAs who earn and donate lots of money in high regard. So I guess I'll run with that. High-earning jobs are often unpleasant and/or difficult to obtain. Not everyone is willing to get one or capable of getting one. Insofar as we de-emphasize earning to give, we are more appealing to people who can't get one or don't want one. But we'll also be encouraging fewer people to jump through the hoops necessary to achieve a high-earning job, meaning more self-proclaimed "EAs" will be in "do what you're passionate about" type jobs like going to grad school for pure math or trying to become a professional musician. Should Matt Wage have gone on to philosophy academia like his peers or not? You can't have it both ways. I don't think high-earning jobs are the be-all and end-all of EA. I have more respect for people who work for EA organizations, because I expect they're mostly capable of getting high-paying jobs but they chose to forgo that extra income while working almost as hard. I guess I'm kind of confused about what exactly you are proposing... are we still supposed to evaluate careers based on impact, or not? As long as we evaluate careers based on impact, we're going to have the problem that highly capable people are able to produce a greater impact. I agree this is a problem, but I doubt there is an easy solution. Insofar as your post presents a solution, it seems like it trades off almost directly against encouraging people to pursue high-impact careers. We might be able to soften the blow a little bit but the fundamental problem still remains. Just in terms of the "wealthy & privileged" image problem, I guess maybe making workers at highly effective nonprofits more the stars of the movement could help some? (And also help

Hi, I've recently written an article about what I think are some image problems that effective altruism has and how we can combat them. I'd love to post it on this website so that I can get feedback and stimulate discussion, but I don't have enough Karma points to do so. Please like this post so that I can post it!

If you're worried about the material you can see an earlier draft of the article on the Effective Altruism fb group (https://www.facebook.com/groups/effective.altruists/) or the EA Hangout fb group (https://www.facebook.com/groups/eahangout/?fref=ts).

0
Peter Wildeford
9y
You're now good to go.