All of trammell's Comments + Replies

2-week summer course in "economic theory and global prioritization": LMK if interested!

Right—the primary audience is people who already have a fair bit of background in economics.

2-week summer course in "economic theory and global prioritization": LMK if interested!

Cool! I was thinking that this course would be a sort of early-stage / first-pass attempt at a curriculum that could eventually generate a textbook (and/or other materials) if it goes well and is repeated a few times, just as so many other textbooks have begun as lecture notes. But if you'd be willing to make something online / easier-to-update sooner, that could be useful. The slides and so on won't be done for quite a while, but I'll send them to you when they are.

david_reinstein (11d): Yes, it makes sense to first play with this in a flexible way, to figure out what works best and holds together best. But I would love to see your notes and think about ways to incorporate and organize them. (For me, raw Markdown text files are best, but whatever you can share is great.) By the way, I assume you are familiar with DRB's reading syllabus, An introduction to global priorities research for economists [https://forum.effectivealtruism.org/posts/dia3NcGCqLXhWmsaX/an-introduction-to-global-priorities-research-for-economists].
2-week summer course in "economic theory and global prioritization": LMK if interested!

Yup, I'll post the syllabus and slides and so on!

I'll also probably record the lectures, but probably not make them available except to the attendees, so they feel more comfortable asking questions. But if a lecture goes well, I might later use it as a template for a more polished and accessible video that is publicly available. (Some of the topics already have good lectures available online, though; in those cases I'd probably just link to those.)

2-week summer course in "economic theory and global prioritization": LMK if interested!

Glad to hear you might be interested!

Thanks for pointing this out. It's tough, because (a) as GrueEmerald notes below, at least some European schools end later, and (b) it will be easier to provide accommodation in Oxford once the Oxford spring term is over (e.g. I was thinking of just renting space in one of the colleges). Once the application form is up*, I might include a When2Meet-type thing so people can put exactly what weeks they expect to be free through the summer.

*If this goes ahead; but there have been a lot of expressions of interest so far, so it probably will!

2-week summer course in "economic theory and global prioritization": LMK if interested!

Sure. Those particular papers rely on a mathematical trick that only lets you work out how much a society should be willing to pay to avoid proportional losses in consumption. The x-risk case turns out to differ in several important ways, and the trick doesn't generalize to cover them. But because the papers seem so close to being x-risk-relevant, I know of about half a dozen EA econ students (including me) who have tried extending them at some point before giving up…

I’m aware of at least a few other “common EA econ theorist dead ends” of this sort, and I’ll try making a list, along with a short write-up of each. When this and the rest of the course material is done, I’ll post it.

2-week summer course in "economic theory and global prioritization": LMK if interested!

Good to know, thanks!

Video recordings are among the "more polished and scalable educational materials" I was thinking might come out of this; i.e. to some extent the course lectures would serve as a trial run for any such videos. That wouldn't be for a year or so, I'm afraid. But if it happens, I'll make sure to get a good attached mike, and if I can't get my hands on one elsewhere I'll keep you in mind. : )

A Model of Patient Spending and Movement Building

Thanks! A lot of good points here.

Re 1: if I'm understanding you right, this would just lower the interest rate from r to r − δ, where δ is the capital depreciation rate. So it wouldn't change any of the qualitative conclusions, except that it would make it more plausible that the EA movement (or any particular movement) is, for modeling purposes, "impatient". But cool, that's an important point. And it's particularly relevant these days; my understanding is that a lot of Will's (etc.) excitement around finding megaprojects ASAP is driven by the sense that if we don't, some of ... (read more)

A Model of Patient Spending and Movement Building

Thanks! I agree that this might be another pretty important consideration, though I'd want to think a bit about how to model it in a way that feels relatively realistic and non-arbitrary.

E.g. maybe we should say people start out with a prior on the effectiveness of a movement at getting good things done, and instead of just being deterministically "recruited", they decide whether to contribute their labor and/or capital to a movement partly on the basis of their evaluation of its effectiveness, after updating on the basis of its track record.

Benjamin_Todd (21d): A hacky solution is just to bear in mind that 'movement building' often doesn't look like explicit recruitment, but could include a lot of things that look a lot like object-level work. We can then consider two questions:

  • What's the ideal fraction to invest in movement building?
  • What are the highest-return movement-building efforts? (These might look like object-level work.)

This would ignore the object-level value produced by the movement-building efforts, but that would be fine unless the two are of comparable value. For most interventions, either the movement-building effects or the object-level value is going to dominate, so we can just treat each intervention as one or the other.
Could EA be ideas constrained?

Good question! Yes, an ideas constraint absolutely could make sense.

My current favorite way to capture that possibility would be to model funding opportunities like consumer products as I do here. Pouring more capital and labor into existing funding opportunities might just bring you to an upper bound of impact, whereas thinking of new funding opportunities would raise the upper bound.

This is also one of the extensions I'm hoping to add to this model before too long. If you or anyone else reading this would be interested in working on that, especially if y... (read more)

New Articles on Utilitarianism.net: Population Ethics and Theories of Well-Being

Nice to see this coming along! How many visitors has utilitarianism.net been getting?

Darius_M (3mo): Website traffic was initially low (21k pageviews by 9k unique visitors from March to December 2020) but has since been gaining steam (40k pageviews by 20k unique visitors in 2021 to date) as the website's search performance has improved. We expect traffic to continue growing significantly as we add more content, gather more backlinks, and rise up the search rankings. For comparison, the Wikipedia article on utilitarianism [https://en.wikipedia.org/wiki/Utilitarianism] has received [https://pageviews.toolforge.org/?project=en.wikipedia.org&platform=all-access&agent=user&redirects=0&range=this-year&pages=Utilitarianism] ~480k pageviews in 2021 to date, which suggests substantial room for growth for utilitarianism.net [https://www.utilitarianism.net/].
Ben_West (10mo): It stands for Representation, Equity, and Inclusion. It’s an alternative to the more common Diversity, Equity, and Inclusion, which some people prefer because it’s often more accurate to describe an organization’s goals as trying to be representative of some population than it is to say they want “diversity” per se. I’ve edited the post to clarify this as well.
A Model of Value Drift

I think this is a valuable contribution—thanks for writing it! Among other things, it demonstrates that conclusions about when to give are highly sensitive to how we model value drift.

In my own work on the timing of giving, I’ve been thinking about value drift as a simple increase to the discount rate: each year philanthropists (or their heirs) face some x% chance of running off with the money and spending it on worthless things. So if the discount rate would have been d% without any value drift risk, it just rises to (d+x)% given the value drift risk. If ... (read more)
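The "(d+x)%" shorthand above can be made precise with a one-line calculation; a minimal sketch, where the 3% and 2% figures are illustrative assumptions rather than numbers from the post:

```python
def effective_discount(d, x):
    """Combine a base discount rate d with an annual value-drift hazard x:
    the surviving-and-discounted weight after one year is (1 - x) / (1 + d),
    so the combined rate r solves 1 / (1 + r) = (1 - x) / (1 + d)."""
    return (1.0 + d) / (1.0 - x) - 1.0

# With d = 3% and x = 2%, the exact combined rate is ~5.1%, close to the
# simple sum d + x described above.
print(round(effective_discount(0.03, 0.02), 4))  # 0.051
```

For small d and x the exact combined rate and the simple sum d + x are nearly identical, which is why the additive shorthand is harmless here.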

"Patient vs urgent longtermism" has little direct bearing on giving now vs later

Sorry, no, that's clear! I should have noted that you say that too.

The point I wanted to make is that your reason for saving as an urgent longtermist isn't necessarily something like "we're already making use of all these urgent opportunities now, so might as well build up a buffer in case the money is gone later". You could just think that now isn't a particularly promising time to spend, period, but that there will be promising opportunities later this century, and still be classified as an urgent longtermist.

That is, an urgent longtermist could have stereotypically "patient longtermist" beliefs about the quality of direct-impact spending opportunities available in December 2020.

Owen_Cotton-Barratt (1y): In the abstract I agree that you could think that. But I'd make some of the same claims for the urgent longtermist as the patient longtermist: that some of the best investment opportunities are probably non-financial, and we should be trying to make use of those before going on to financial investments. (There's a question about whether at current margins we're already using them up.)

I think there are some principled reasons to be unsurprised if the best available non-financial investment opportunities are better than the best available financial investment opportunities. Financial investment is a competitive market; there are lots of people who have money and want more money, and so for a given risk tolerance (and without lots of work) you can't expect to massively outperform what others are making.

There are also markets (broadly understood) competing for buy-in to worldviews. At first glance these might look less attractive to enter into, since they seem to be (roughly) zero-sum. But unlike the financial case, capital is not fungible across worldviews, so we shouldn't assume that market forces mean that the returns from the best opportunities can't get too good (or they'd be taken by others). And I'm not concerned about the zero-sum point, because I don't think that the longtermist worldview is just an arbitrary set of beliefs; I think that it has ~truth on its side, and providing people with arguments plus encouraging them to reflect will on average be quite good for its market share (and to the extent that it isn't, maybe that's a sign that it's getting something wrong). This is a pretty major advantage and makes it plausible that there are some really excellent opportunities available.

Then I think growth over the last few years is evidence that at least some of the activities people engage in have really good returns; the crucial question is how many really good ones are being left on the table.
"Patient vs urgent longtermism" has little direct bearing on giving now vs later

Thanks! I was going to write an EA Forum post at some point also trying to clarify the relationship between the debate over "patient vs urgent longtermism" and the debate over giving now vs later, and I agree that it's not as straightforward as people sometimes think.

On the one hand, as you point out, one could be a "patient longtermist" but still think that there are capacity-building sorts of spending opportunities worth funding now.

But I'd also argue that, if urgent longtermism is defined roughly as the view that there will be critical junctures in the ... (read more)

I was going to write an EA Forum post at some point also trying to clarify the relationship between the debate over "patient vs urgent longtermism" and the debate over giving now vs later, and I agree that it's not as straightforward as people sometimes think.

It seems to me that there are roughly three relevant confusions/sources of confusion in discussions around patient philanthropy, patient longtermism, and investing to give. I'll try to briefly describe them, and I'd be interested to hear if you or others think this is accurate.

1. "Patient philanthropy... (read more)

Owen_Cotton-Barratt (1y): Yes, I totally agree with this. Indeed a large part of what I was trying to say was that I'm more sympathetic to this strategy right now for "urgent longtermists" than "patient longtermists" (although it happens that I mostly still think it's beaten by non-financial investment opportunities which will pay off soon enough). [LMK if you found something I wrote confusing; I could consider editing to improve clarity.]
'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list).

I was putting arms race dynamics lower than the other two on my list of likely reasons fo... (read more)

But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"),

The main dynamic I have in mind there is 'country X being overwhelmingly technologically advantaged/disadvantaged' treated as an outcome on par with global destruction, driving racing, and the necessity for international coordination to set global policy.

I was putting arms race dynamics lower than
... (read more)
'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

I agree that the world underinvests in x-risk reduction (/overspends on activities that increase x-risk as a side effect) for all kinds of reasons. My impression would be that the two most important reasons for the underinvestment are that existential safety is a public good on two fronts:

  • long-term (but people just care about the short term, and coordination with future generations is impossible), and
  • global (but governments just care about their own countries, and we don't do global coordination well).

So I definitely agree that it's important th... (read more)

I'd say it's the other way around, because longtermism increases both rewards and costs in prisoner's dilemmas. Consider an AGI race or nuclear war. Longtermism can increase the attraction of control over the future (e.g. wanting to have a long term future following religion X instead of Y, or communist vs capitalist). During the US nuclear monopoly some scientists advocated for preemptive war based on ideas about long-run totalitarianism. So the payoff stakes of C-C are magnified, but likewise for D-C and C-D.

On the other hand, effective ba... (read more)
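The point about longtermism magnifying the stakes of C-C, D-C, and C-D alike can be seen in a toy prisoner's dilemma: scaling every payoff by a common factor leaves the strategic structure unchanged. A minimal sketch (the payoff numbers are illustrative assumptions, not from the comment):

```python
def best_response(payoffs, opponent_action):
    """payoffs maps (my_action, their_action) -> my payoff;
    return my payoff-maximizing action."""
    return max(("C", "D"), key=lambda a: payoffs[(a, opponent_action)])

# A standard prisoner's dilemma, then the same game with stakes magnified:
base = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
scaled = {k: 100 * v for k, v in base.items()}

# Defection stays dominant at either scale:
for p in (base, scaled):
    print(best_response(p, "C"), best_response(p, "D"))  # D D
```

So raising the stakes alone doesn't resolve the dilemma; what matters is whether something (like the bargaining or coordination mechanisms mentioned here) changes the game's structure.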

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

Thanks! No need to inflict another recording of my voice on the world for now, I think, but glad to hear you like how the project is coming.

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save. Utilitarian policymakers might implement more redistribution too. Given policymakers as they are, we’re still left with the question of how utilitarian philanthropists with their fixed budgets should prioritize between filling the redistribution gap and f... (read more)

"The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save."

That was explicitly discussed at the time. I cited the blog post as a historical reference illustrating that such considerations were in mind, not as a comprehensive publication of everything people discussed at the time, when in fact there wasn'... (read more)

The case of the missing cause prioritisation research

Thanks! I agree that people in EA—including Christian, Leopold, and myself—have done a fair bit of theory/modeling work at this point which would benefit from relevant empirical work. I don’t think this is what either of the current new economists will engage in anytime soon, unfortunately. But I don’t think it would be outside a GPI economist’s remit, especially once we’ve grown.

jackmalde (1y): OK, that’s good to hear. It probably makes sense to spend some time laying a solid theoretical base to build on. I’m aware of how new GPI still is, so I’m looking forward to seeing how things progress!
What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

Sorry--maybe I’m being blind, but I’m not seeing what citation you’d be referring to in that blog post. Where should I be looking?

CarlShulman (1y): The Stern discussion.
The case of the missing cause prioritisation research

Thanks, I definitely agree that there should be more prioritization research. (I work at GPI, so maybe that’s predictable.) And I agree that for all the EA talk about how important it is, there's surprisingly little really being done.

One point I'd like to raise, though: I don’t know what you’re looking for exactly, but my impression is that good prioritization research will in general not resemble what EA people usually have in mind when they talk about “cause prioritization”. So when putting together an overvie... (read more)

FCCC (1y): I think this is one of the most important things we can be doing. Maybe even the most important, since it covers such a wide area and so much government policy is so far from optimal. I don't think that's right. I've written about what it means for a system to do "the optimal thing" [https://forum.effectivealtruism.org/posts/FhkXvdP6Dy9BLcJFc/], and the answer cannot be that a single policy maximizes your objective function. Unless by policy you mean "the entirety of what government does", then yes. But given that you're going to consider one area at a time, and you're "only including all the levers between which you’re considering", you could reach a local optimum rather than a truly ideal end state.

The way I like to think about it is "How would a system for prisons (for example) be in the best possible future?" This is not necessarily going to be the system that does the greatest good at the margin when constrained to the domain you're considering (though it often is). Rather than think about a system maximizing your objective function, it's better to think of systems as satisfying goals that are aligned with your objective function.
Milan_Griffes (1y): At a glance, Salesforce's AI Economist [https://marginalrevolution.com/marginalrevolution/2020/08/a-real-world-ai-economist.html] seems like an attempted implementation of an IAM.

Hi, thank you for this really helpful comment. It was really interesting to read about how you work on cause prioritisation research and use IAMs. I'm glad that GPI will be expanding.

Hey Phil. I'm someone who is very interested in the work of GPI and am impressed by what I have seen so far. I'm looking forward to seeing what the new economists get up to!

I had a look at Leopold's paper a while back, have listened to you on the 80K podcast and have watched a few of GPI's videos including Christian Tarsney's one on the epistemic challenge to longtermism. I notice that in a lot of this research, key results are highly sensitive to the value of certain parameters. My memory is slightly hazy on specifics but I think ... (read more)

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

Hanson has advocated for investing for future giving, and I don't doubt he had this intuition in mind. But I'm actually not aware of any source in which he says that the condition under which zero-time-preference philanthropists should invest for future giving is that the interest rate incorporates beneficiaries' pure time preference. I only know that he's said that the relevant condition is when the interest rate is (a) positive or (b) higher than the growth rate. Do you have a particular source in mind?

Also, who made the "pure ti... (read more)

CarlShulman (1y): My recollection is that back in 2008-12, discussions would often cite the Stern Review [https://en.wikipedia.org/wiki/Stern_Review], which reduced pure time preference to 0.1% per year and thus concluded that massive climate investments would pay off; the critiques of it, noting that it would by the same token call for immense savings rates (97.5% according to Dasgupta 2006); and the defenses by Stern and various philosophers that pure time preference of 0 was philosophically appropriate. In private discussions and correspondence it was used to make the point that, absent cosmically exceptional short-term impact, the patient longtermist consequentialist would save. I cited it for this in this 2012 blog post [http://reflectivedisequilibrium.blogspot.com/2012/05/philosophers-vs-economists-on.html]. People also discussed how this would go away if sufficient investment was applied patiently (whether for altruistic or other reasons), ending the era of dreamtime finance [https://www.overcomingbias.com/2011/06/dreamtime-finance.html] by driving pure time preference towards zero.
Owen_Cotton-Barratt (1y): I don't know the provenance of the idea, but I recall Paul Christiano making the point about pure time preference during the debate on giving now vs later at the ?2014 GWWC weekend away.
What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

That post just makes the claim that "all we really need are positive interest rates". My own point which you were referring to in the original comment is that, at least in the context of poverty alleviation (/increasing human consumption more generally), what we need is pure time preference incorporated into interest rates. This condition is neither necessary nor sufficient for positive interest rates.

Hanson's post then says something which sounds kind of like my point, namely that we can infer that it's better for us as philanthropists... (read more)

lukefreeman (1y): The GWWC Try Giving pledge (any percent above 1%, any period of time): https://www.givingwhatwecan.org/get-involved/try-giving/
Prabhat Soni (1y): Thanks! Added!
How Much Does New Research Inform Us About Existential Climate Risk?

In case the notation isn’t clear to some forum readers out of context: sensitivity S is the extent to which the Earth will warm given a doubling of CO2 in the atmosphere. K denotes kelvins; a temperature change of 1 K is the same as a change of 1 °C.
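For readers who want to see the notation in action: under the standard approximation that equilibrium warming scales with the base-2 logarithm of the CO2 concentration ratio, S is the warming per doubling. A minimal sketch (the S = 3 K value is an illustrative assumption, not a claim from the post):

```python
import math

def equilibrium_warming(S, ratio):
    """Equilibrium warming in K for CO2 at `ratio` times its baseline
    concentration, under the standard approximation that forcing scales
    with log2 of the ratio and S is the warming per doubling."""
    return S * math.log2(ratio)

# With S = 3 K (illustrative), a 50% rise in CO2 gives ~1.75 K of warming:
print(round(equilibrium_warming(3.0, 1.5), 2))  # 1.75
```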

Should I claim COVID-benefits I don't need to give to charity?

I don't know what counts as a core principle of EA exactly, but most people involved with EA are quite consequentialist.

Whatever you should in fact do here, you probably wouldn't find a public recommendation to be dishonest. On purely consequentialist grounds, after accounting for the value of the reputation of the EA community and so on, what community guidelines (and what EA Forum advice) do you think would be better to write: those that go out of their way to emphasize honesty or those that sound more consequentialist?

Existential Risk and Economic Growth

I'm just putting numbers to the previous sentence: "Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing."

If "most" means "80%" there, then halting growth would lower the hazard rate from 1% to 0.8%.
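The arithmetic behind that sentence, with "most" read as 80% (an illustrative reading, as the comment itself notes):

```python
total_hazard = 0.01        # 1% per century (instantaneous hazard rate)
share_from_existing = 0.8  # reading "most" as 80%: stockpiles, climate, etc.

# Halting growth removes only the hazard imposed by ongoing research,
# leaving the share imposed by what already exists.
hazard_if_growth_halted = total_hazard * share_from_existing
print(round(hazard_if_growth_halted, 4))  # 0.008, i.e. 0.8% per century
```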

Existential Risk and Economic Growth

Hey, thanks for engaging with this, and sorry for not noticing your original comment for so many months. I agree that in reality the hazard rate at t depends not just on the level of output and safety measures maintained at t but also on "experiments that might go wrong" at t. The model is indeed a simplification in this way.

Just to make sure something's clear, though (and sorry if this was already clear): Toby's 20% hazard rate isn't the current hazard rate; it's the hazard rate this century, but most of that is due to develo... (read more)

riceissa (2y): Can you say how you came up with the "moving from 1% to 0.8%" part? Everything else in your comment makes sense to me.
MichaelA (2y): There’s also a talk version here: https://www.youtube.com/watch?v=DAavPa8j0lM
Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good

Glad you liked it, and thanks for the good questions!

#1: I should definitely have spent more time on this / been more careful explaining it. Yes, x-risks should “feed straight into interest rates”, in the sense that a +1% chance of an x-risk per year should mean a 1% higher interest rate. So if you’re going to be

  • spending on something other than x-risk reduction; or
  • spending on x-risk reduction but only able to marginally lower the risk in the period you’re spending (i.e. not permanently lower the rate), and think that there will
... (read more)
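The claim that a +1% annual x-risk acts like a 1-point higher interest rate can be checked with a small present-value calculation; a sketch with illustrative numbers (the 5%/6% rates and 50-year horizon are assumptions, not from the talk):

```python
def pv(flow, years, r, xrisk):
    """Present value of a constant annual flow of value, discounted both by
    the interest rate r and by an annual probability xrisk that the future
    (and hence the flow) is lost."""
    a = (1.0 - xrisk) / (1.0 + r)
    return flow * sum(a ** t for t in range(1, years + 1))

# A +1% annual x-risk is nearly equivalent to a 1-point higher interest rate:
with_risk = pv(1.0, 50, 0.05, 0.01)    # r = 5%, x-risk = 1%/yr
higher_rate = pv(1.0, 50, 0.06, 0.00)  # r = 6%, no x-risk
print(abs(with_risk - higher_rate) / higher_rate < 0.02)  # True
```

The small residual gap comes from (1 − x)/(1 + r) differing slightly from 1/(1 + r + x); for small rates the two are almost the same.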
AslanP (2y): Thanks Phil - appreciate the response! On #1, I think I get it, though it's a bit counterintuitive. I take it that the proposition is that a permanent (or at least long-term) reduction in x-risk has a sort of 'compounding' impact on expected value, since it reduces risk each year, and therefore competes with patient investing; but short-term reductions in risk don't have that same 'compounding' benefit and therefore don't compete in the same way with the interest rate (which is assumed to increase with, and therefore be higher than, the x-risk rate). And #2 and #3, I think I follow too. Some interesting ideas to think about... Looking forward to seeing your further work in this area. Cheers
On Waiting to Invest

Glad you liked it!

In the model I'm working on, to try to weigh the main considerations, the goal is to maximize expected philanthropic impact, not to maximize expected returns. I do recommend spending more quickly than I would in a world where the goal were just to maximize expected returns. My tentative conclusion that long-term investing is a good idea already incorporates the conclusion that it will most likely just involve losing a lot of money.

That is, I argue that we're in a world where the highest-expected-impact strategy (not just the highest-expect-return strategy) is one with a low probability of having a lot of impact and a high probability of having very little impact.
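That mean-vs-median gap can be illustrated by compounding random returns; a minimal sketch, where the drift and volatility figures are illustrative assumptions rather than estimates from the episode:

```python
import math
import random

def final_wealths(n_paths, years, mu, sigma, seed=0):
    """Compound i.i.d. lognormal annual returns: each year, wealth is
    multiplied by exp(mu + sigma * z) with z standard normal."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_paths):
        w = 1.0
        for _ in range(years):
            w *= math.exp(mu + sigma * rng.gauss(0.0, 1.0))
        out.append(w)
    return sorted(out)

paths = final_wealths(2000, 100, 0.02, 0.15)
mean = sum(paths) / len(paths)
median = paths[len(paths) // 2]

# A few huge outcomes pull the mean far above the median: the typical
# (median) path underperforms the expected-value (mean) path.
print(mean > 1.5 * median)  # True
```

Over long horizons, most simulated paths end up well below the average outcome, which is exactly the "low probability of a lot of impact, high probability of very little" shape described above.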

matthewp (2y): Ah, that's interesting and the nub of a difference. The way I see it, a 'good' impact function would upweight the impact of low-probability downside events and, perhaps, downweight low-probability upside events. Maximising the expectation of such a function would push one toward policies which more reliably produce good outcomes.
If you value future people, why do you consider near term effects?

At the risk of repetition, I’d say that by the same reasoning, we could likewise add in our best estimates of saving a life on (just, say) total human welfare up to 2100.

Your response here was that “[p]opulation growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust”. But as with the first beneficiary, we can separate the direct welfare impact of population growth from all its other effects and observe that the former is a part of “sum u_i”... (read more)

MichaelStJules (2y): I'm not trying to solve all complex cluelessness cases with my argument. I think population growth is plausibly a case with complex cluelessness, but this depends on your views.

If I were a total utilitarian with symmetric population ethics, and didn't care much about nonhuman animals (neither of which is actually true for me), then I'd guess the negative externalities of a larger population would be strongly dominated by the benefits of a larger population, mostly just the direct benefits of the welfare of the extra people. I don't think the effects of climate change are that important here, and I'm not aware of other important negative externalities. So for people with such views, it's actually just not a case of complex cluelessness at all. The expectation that more people than just the one you saved will live probably increases the cost-effectiveness to someone with such views.

Similarly, I think Brian Tomasik has supported the Humane Slaughter Association basically because he doesn't think the effects on animal population sizes and wild animals generally are significant compared to the benefits [https://reducing-suffering.org/why-i-support-the-humane-slaughter-association/#Whats_wrong_with_other_animal_organizations]. It does good with little risk of harm.

So, compared to doing nothing (or some specific default action), some actions do look robustly good in expectation. Compared to some other options, there will be complex cluelessness, but I'm happy to choose something that looks best in expectation compared to doing nothing. I suppose this might privilege a specific default action to compare to in a nonconsequentialist way, although maybe there's a way that gives similar recommendations without such privileging (I'm only thinking about this now): you could model this as a partial order, with A strictly dominating B if the expected value of A is robustly greater than the expected value of B. At least, you should never choose dominated actions. You could
On Waiting to Invest

Yup, no disagreement here. You're looking at what happens when we introduce uncertainty holding the absolute expected return constant, and I was discussing what happens when we introduce uncertainty holding the expected annual rate of return constant.

matthewp (2y): So, what do you think of the idea that aiming for high expected returns in long-term investments might not be the best thing to do, given the skewed distribution? That is, we want to ensure that most futures are 'good', not just a few that are 'excellent' lost in a mass of 'meh' or worse. BTW, I did like the podcast - it does take something to make me tap out forum posts :)
If you value future people, why do you consider near term effects?
If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B.

What I'm saying is, "Michael: you've given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just ju... (read more)

MichaelStJules (2y): The value to the universe is the sum of values to possible beneficiaries, including the direct ones C, so there is a direct and known causal effect of C on B. u_1 has a causal effect on ∑_i u_i, under any reasonable definition of causal effect, and it's the obvious one: any change in u_1 directly causes an equal change in the sum, without affecting the other terms. The value in my life (or some moment of it), u_1, doesn't affect yours, u_2, although my life itself or your judgment about my u_1 might affect your life and your u_2. Similarly, any subset of the u_i (including C) has a causal effect on the sum.

If you think A has no effect on B (in expectation), this is a claim that the effects through C are exactly negated by other effects from A (in expectation), but this is the kind of causal claim that I've been saying I'm skeptical of, since it doesn't come with a (justified) effect size estimate (or even a plausible argument for how this happens, in this case).

This is pretty different from the skepticism I have about long-term effects: in that case, people are claiming that A affects a particular set of beneficiaries C where C is in the future, but they haven't justified an effect size of A on C in the first place; many things could happen before C, completely drowning out the effect. Since I'm not convinced C is affected in any particular way, I'm not convinced B is either, through this proposed causal chain. With short-term effects, when there's good feedback, I actually have proxy observations that tell me that in fact A affects C in certain ways (although there are still generalization error and the reference class problem to worry about).
On Waiting to Invest

Hey, I know that episode : )

Thanks for these numbers. Yes: holding expected returns equal, our propensity to invest should be decreasing in volatility.

But symmetric uncertainty about the long-run average rate of return—or to a lesser extent, as in your example, time-independent symmetric uncertainty about short-run returns at every period—increases expected returns. (I think this is the point I made that you’re referring to.) This is just the converse of your observation that, to keep expected returns equal upon introducing volatility... (read more)

4matthewp2yThanks for the response. To clarify: in the second model both the drift and the diffusion term impact on the expected returns. If you substitute in a model return e^{q + sz}, with z a standard normal: E[V(1)] = E[e^{q + s z}] = E[e^{sz}]e^q = e^{s^2/2} e^q > e^q So, if we have fixed from some source that E[V(1)]=1.07=e^r then we cannot set q=r in the model with randomness while maintaining the equality. Where the equality cashes out as 'the expected rate of return a year from now is 7%'. Empirically estimated long run rates already take into account the effects of randomness since they are typically some sort of mean of observed returns. If this were not the case one would always have to, at least, quote the parameters in pairs (drift=such and such, vol=such and such) and perform a calculation in order to get out the expected returns.
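The lognormal identity in the reply above (E[e^{q + sz}] = e^{s²/2}·e^q) is easy to check numerically. A minimal sketch, using illustrative parameter values for the drift and volatility (not calibrated to any real asset):

```python
import numpy as np

rng = np.random.default_rng(0)
q, s = 0.07, 0.18  # drift and volatility; illustrative values only

# Monte Carlo estimate of E[e^{q + s z}] with z a standard normal
z = rng.standard_normal(1_000_000)
mc = np.exp(q + s * z).mean()

# Closed-form lognormal mean: e^{q + s^2/2}
closed_form = np.exp(q + s**2 / 2)

print(mc, closed_form, np.exp(q))
```

Both the simulated and closed-form means come out strictly above e^q, illustrating matthewp's point: holding the drift q fixed, adding volatility raises the expected return, so a 7% expected return cannot be matched by setting q = log(1.07) once randomness is introduced.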
If you value future people, why do you consider near term effects?
Population growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust.

Suppose for simplicity that we can split the effects of saving a life into

1) benefits accruing to the beneficiary;

2) benefits accruing to future generations up to 2100, through increased size (following from (1)); and

3) further effects (following from (2)).

It seems like you're saying that there's some proposition X such that (3) is overall good if X and bad if not-X, where we can only guess at the pr... (read more)

2MichaelStJules2yI wasn't saying we should cancel them this way; I'm just trying to understand exactly what the CC problem is here. What I have been proposing is that I'm independently skeptical of each causal effect that doesn't come with effect size estimates (and can't, especially), as in my other comments, and Saulius' here [https://forum.effectivealtruism.org/posts/ajZ8AxhEtny7Hhbv7/if-you-value-future-people-why-do-you-consider-near-term?commentId=RsEqKrTXjrNxhM9wR] . If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B. However, I'm thinking that I could be pretty confident about effect sizes conditional on X and notX, but have little idea about the probability of X. In this case, I shouldn't just apply the same skepticism, and I'm stuck trying to figure out the probability of X, which would allow me to weigh the different effects against each other, but I don't know how to do it. Is this an example of CC?
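The situation described in the reply above — confident conditional effect estimates but a guessed probability of X — can be made concrete. With conditional expectations of opposite sign, the overall expected effect is p·E[effect|X] + (1−p)·E[effect|not-X], and its sign hinges entirely on the credence p. A sketch with purely hypothetical numbers:

```python
# Hypothetical conditional effect estimates (illustrative numbers only):
# the effect is +10 units if proposition X holds and -4 units otherwise.
good_if_X, bad_if_notX = 10.0, -4.0

def expected_effect(p_X: float) -> float:
    """Overall expected effect given a credence p_X in proposition X."""
    return p_X * good_if_X + (1 - p_X) * bad_if_notX

# The sign flips at p_X = 4/14 ~= 0.286: a small shift in a
# hard-to-justify credence reverses the verdict.
print(expected_effect(0.2))  # -1.2 (net bad)
print(expected_effect(0.4))  # 1.6 (net good)
```

This is exactly the predicament described: the conditional estimates are robust, but the action ranking is not, because it is driven by an essentially arbitrary probability assignment.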
If you value future people, why do you consider near term effects?
Is the point that I'm confident they're larger in magnitude, but still not confident enough to estimate their expected magnitudes more precisely?

Yes, exactly—that’s the point of the African population growth example.

Maybe I have a good idea of the impacts over each possible future, but I'm very uncertain about the distribution of possible futures. I could be confident about the sign of the effect of population growth when comparing pairs of counterfactuals, one with the child saved, and the other not, but I'm not confident ... (read more)
2MichaelStJules2yPopulation growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust. E.g. I might think it's bad in cases like X and good in cases like notX and have conditional expectations for both, but I'm basically just guessing the probability of X, and which is better depends on the probability of X (under each action). So the assumption here is that I think the effect is nonnegative with probability 1. I don't think mere plausibility arguments or considerations give me that kind of credence. As a specific example, is population growth actually bad for climate change? The argument is "More people, more consumption, more emissions", but with no numbers attached. In this case, I think there's some probability that population growth is good for climate change, and without estimates for the argument, I'd assume the amount of climate change would be identically distributed with and without population growth. Of course, in this case, I think we have enough data and models to actually estimate some of the effects. Even with estimates, I still think there's a chance population growth is good for climate change, although my expected value would be that it's bad. It could depend on what kind of people the extra people are like, and what kinds of effects they have on society.
If you value future people, why do you consider near term effects?

No worries, sorry if I didn't write it as clearly as I could have!

BTW, I've had this conversation enough times now that last summer I wrote down my thoughts on cluelessness in a document that I've been told is pretty accessible—this is the doc I link to from the words "don't have an expected value". I know it can be annoying just to be pointed off the page, but just letting you know in case you find it helpful or interesting.

If you value future people, why do you consider near term effects?

Hold on—now it seems like you might be talking past the OP on the issue of complex cluelessness. I 1000% agree that changing population size has many effects beyond those I listed, and that we can't weigh them; but that's the whole problem!

The claim is that CC arises when (a) there are both predictably positive and predictably negative indirect effects of (say) saving lives which are larger in magnitude than the direct effects, and (b) you can't weigh them all against each other so as to arrive at an all-things-considered judgment of t... (read more)

2MichaelStJules2yI'm not sure if what I'm defending is quite the same as what's in your example. It's not really about direct or indirect effects or how to group effects to try to cancel them; it's just skepticism about effects. I'll exclude whichever I don't have a good effect size estimate on my social welfare function for (possibly multiple), since I'll assume the expected effect size is small. If I have effect sizes for both, then I can just estimate the net effect. As a first approximation, I'd just add the two effects. If I have reason to believe they should interact in certain ways and I can model this, I might. If you're saying I know the two opposite sign indirect effects are larger in magnitude than the direct ones, it sounds like I have estimates I can just sum (as a first approximation). Is the point that I'm confident they're larger in magnitude, but still not confident enough to estimate their expected magnitudes more precisely? Maybe I have a good idea of the impacts over each possible future, but I'm very uncertain about the distribution of possible futures. I could be confident about the sign of the effect of population growth when comparing pairs of counterfactuals, one with the child saved, and the other not, but I'm not confident enough to form distributions over the two sets of counterfactuals to be able to determine the sign of the expected value. I think I'm basically treating each effect without an estimate attached independently like simple cluelessness. I'm not looking at a group of positive and negative effects and assuming they cancel; I'm doubting the signs of the effects that don't come with estimates. If I have a plausible argument that doing X affects Y and Y affects Z, which I value directly and the effect should be good, but I don't have an estimate for the effect through this causal path, I'm not actually convinced that the effect through this path isn't bad. 
Now, I'm not relying on a nice symmetry argument to justify this treatment like sim
2MichaelStJules2ySorry, I misunderstood your comment on my first reading, so I retracted my first reply.
If you value future people, why do you consider near term effects?

Agreed that, at least from a utilitarian perspective, identity effects aren't what matter and feel pretty symmetrical, and that they're therefore not the right way to illustrate complex cluelessness. But when you say

you need an example where you can justify that the outcome distributions are significantly different. I actually haven't been convinced that this is the case for any longtermist intervention

—maybe I'm misunderstanding you, but I believe the proposition being defended here is that the distribution of long-term welfare ... (read more)

3MichaelStJules2yNo, that seems plausible, although I'd have to look into how long the population effects go. The point isn't about direct vs indirect effects (all effects are indirect, in my view), but net effects we have estimates of magnitude for. I don't consider those effects to be "long-term" in the way longtermists use the word. The expected value on the long term isn't obvious at all, since there are too many different considerations to weigh against one another, many we're unaware of, and no good way to weigh them. Note: I retracted my previous reply.
2MichaelStJules2yTo be specific (and revising my claim somewhat), I'm not convinced of any net expected longterm effect in any particular direction on my social welfare function/utility function. I think there are many considerations that can go in either direction, the weight we give them is basically arbitrary, and I usually don't have good reason to believe their effects persist very long or are that important, anyway. I am arguing from ignorance here, but I don't yet have enough reason to believe the expected effect is good or bad. Unless I expect to be able to weigh opposing considerations against one another in a way that feels robust and satisfactory to me and be confident that I'm not missing crucial considerations, I'm inclined to not account for them until I can (but also try to learn more about them in hope of having more robust predictions). A sensitivity analysis might help, too, but only so much. The two studies you cite are worth looking into, but there are also effects of different population sizes that you need to weigh. How do you weigh them against each other? What's the expected value (on net) of the indirect effects to you? Is its absolute value much greater than the direct effects' expected value? How robust do you think the sign of the expected value of the indirect effects is to your subjective weighting of different considerations and missed considerations? Also, what do you think the expected change in population size is from saving one life through AMF?

Yeah, agreed that using the white supremacist label needlessly poisons the discussion in both cases.

For whatever it’s worth, my own tentative guess would actually be that saving a life in the developing world contributes more to growth in the long run than saving a life in the developed world. Fertility in the former is much higher, and in the long run I expect growth and technological development to be increasing in global population size (at least over the ranges we can expect to see).

Maybe this is a bit off-topic, but I think it's worth illustrating tha... (read more)
2MichaelStJules2yIs this taking more immediate existential risks into account and to what degree and how people in the developing and developed worlds affect them?

Thanks for pointing that out!

For those who might worry that you're being hyperbolic, I'd say that the linked paper doesn't say that they are white supremacists. But it does claim that a major claim from Nick Beckstead's thesis is white supremacist. Here is the relevant quote, from pages 27-28:

"As he [Beckstead] makes the point,

>> saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their wor... (read more)

Thanks, I agree with this clarification.

I actually find the argument that those arguing against prioritising climate change are aiding white supremacy[1] more alarming than the attack on Beckstead, even though the accusations there are more oblique.

While I think Beckstead's argumentation here seems basically true, it is clearly somewhat incendiary in its implications and likely to make many people uncomfortable – it is a large bullet to bite, even if I think that calling it "overtly white-supremacist" is bad argumentation that risks substantially degrading ... (read more)
Why not give 90%?

I downvoted the comment because it's off-topic.

0lucy.ea82yThanks trammell. I notice that only you told me why; I assume I got 5 downvotes at a minimum. While not directly on topic, giving more is about bigger impact, and if D&I is poor, EA's impact is worse. That's why I responded. My thinking is that money is not the constraint; an understanding, or lack of it, is the constraint in improving the world. For which EA needs open hearts and minds, not https://en.wikipedia.org/wiki/In-group_favoritism