I don't think longtermism necessarily needs new priorities to be valuable if it offers a better perspective on existing ones (although I don't think it does this well either). 

Understanding what the far future might need is very difficult. If you'd asked someone 1000 years ago what they should focus on to benefit us, you'd get answers largely irrelevant to our needs today.[1] If you'd asked someone a little over 100 years ago, their ideas might seem more intelligible, and one guy was even perceptive enough to imagine nuclear weapons, although his optimism that what became known as mutually assured destruction would set the world free looks very wrong now. And the people 100 years ago who did boring things focused on the current world did more for us than the people dreaming of post-work utopias.

To that extent, the focus on x-risk seems quite reasonable: still existing is something we can actually reasonably believe will be valued by humans in a million years' time.[2] Of course, there are also over 8 billion reasons alive today to try to avoid human extinction (and most non-longtermists consider at least as far as their children), but longtermism makes arguments for it being more important than we think. This logically leads to a willingness to allocate more money to x-risk causes, and to consider more unconventional and highly unlikely approaches to x-risk. This is a consideration, but in practice I'm not sure it leads to better outcomes: some of the approaches to x-risk seeking funding make directionally different assumptions about whether more or less AGI is crucial to survival, so they can't both be right, and the 'very long shot' proposals that only start to make sense if we introduce fantastically large numbers of humans to the benefit side of the equation look suspiciously like Pascal's muggings.[3]

Plus, people making longtermist arguments typically seem to attach fairly high probabilities, by their own estimation, to stuff like the AGI they're working on, which if true would make their work entirely justifiable even focusing only on humans living today.

 

(A moot point, but I'd also have thought that although the word 'longtermist' wasn't coined until much later, Bostrom and to a lesser extent Parfit fit the description of longtermist philosophy. Of course, they also weren't the first people to write about x-risk.)

  1. ^

    I suspect the main answers would be to do with religious prophecies or strengthening their no-longer-extant empire/state

  2. ^

    Notwithstanding fringe possibilities like humans in a million years being better off not existing, or (for impartial total utilitarians) humanity displacing something capable of experiencing much higher aggregate welfare.

  3. ^

    Not just superficially, in that someone is asking us to suspend scepticism by invoking a huge reward, but also in that the huge rewards themselves only make sense if you believe very specific claims about x-risk over the long-term future being highly concentrated in the present (a very large number of future humans in expectation and x-risk remaining nontrivial for any extended period of time might each seem superficially uncontroversial, but they're actually strongly in conflict with each other).

I think it's also more fundamental, in the sense that a number of EA orgs are inherently "comms-focused" because they're lobbying for some sort of cause to some sort of decision maker (convincing politicians to endorse challenge trials or ban datacentres and lead paint, or persuading fish farmers or maternal care workers in LEDCs to adopt a different approach). Or, if they're not directly lobbying, they might be trying to communicate research to a relatively small group of people like computer scientists or people who want to do inter-species utility loss comparisons.

Also, with some notable exceptions, I think a lot of EA is quite insular: orgs want to convey that they're doing important work to OpenPhil funders, to a pipeline of talent coming from EA groups, to "aligned" organizations they might collaborate with, or to the sort of small donor who's already thinking about long-shot solutions to x-risks or making donations to improve the welfare of unfashionable creatures. That's a short list to A/B test, a hard group to target with paid media, and also an audience with exacting expectations about how things are communicated, so the digital-marketing-to-a-wider-audience approach may not work so well. The downside is that competing for the same attention is usually going to be net less impactful than finding interest from the wider public...

Sure, your example showed that if one irrationally disregards earlier generations and focuses purely on the needs of cohort P, Option B is a clear winner. If one doesn't, we agree that it's actually pretty darn complicated to estimate the total welfare impact of donating now versus donating a larger nominal sum to equivalent problems (assuming they still exist) in future. That requires a lot of contestable counterfactual assumptions,[1] as well as choices about discount rates, PPP and the nonlinearity of money, and decisions about whether any value attaches to economic stimulus for non-recipients in developing countries or to keeping marginal NGOs alive. (Donations to things other than poverty relief have their own idiosyncrasies: hopefully the number of ITNs needed to prevent malaria deaths by ~2050 will be zero.)

The intergenerational elasticity point is an interesting one, but intergenerational income elasticities are higher in less developed countries (and the higher incomes are partially inherited by more people in later generations, assuming they continue to reproduce above replacement rate). And under normal assumptions we care about the earlier generations helped at least as much as the later ones, so you've already helped many more people than the direct recipients by the time the patient philanthropy fund is investigating how many more people its accrued compound interest will let it help. Plus, in the specific example of the roof we're talking about wealth, and you'd have to invest very well in stocks and shares to beat the imputed 20% annual returns on a tin roof, even over time spans that extend beyond its serviceable life.
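As a very rough sketch of that comparison (every figure here is an assumption for illustration: the ~20% imputed roof return mentioned above, an assumed ~7% real market return, an assumed 20-year roof lifespan and a 50-year patient-philanthropy horizon, with the recipient-side return treated, simplistically, as compounding while the roof lasts):

```python
# Illustrative only; all numbers are assumptions, not empirical estimates.
roof_return = 0.20       # imputed annual return on a tin roof (figure cited above)
market_return = 0.07     # assumed real annual return on an index fund
roof_life_years = 20     # assumed serviceable life of the roof
wait_years = 50          # assumed horizon for a patient philanthropy fund

# Recipient-side return compounds only while the roof lasts; the patient fund
# compounds at market rates for the full horizon before being donated.
recipient_multiple = (1 + roof_return) ** roof_life_years   # ~38x
patient_multiple = (1 + market_return) ** wait_years        # ~29x

print(f"Donate now (20% for {roof_life_years} years): x{recipient_multiple:.1f}")
print(f"Wait and invest (7% for {wait_years} years): x{patient_multiple:.1f}")
```

On those (contestable) assumptions the recipient-side compounding stays ahead even after the roof itself has worn out, which is the sense in which the index fund has to invest very well to catch up.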

Catch-up growth definitely exists; the only question is whether more marginal economies will be excluded from it.[2] There are many reasons for economic stagnation in poorer regions (most obviously terrible governance), but it's certainly not independent of whether philanthropic funds for economic growth and poverty alleviation decide that in the near term they should shift towards promoting the economic development of their own country's stock market instead.[3] Too much patience is probably worse for developing countries than the opposite extreme of too much philanthropic cash chasing too few viable opportunities.

  1. ^

    You also have to make assumptions about the philanthropists of the future: I'm not as optimistic about near-future technology-enabled post-scarcity societies as some people on here, but if we trend in that direction, maybe your nominally larger funds are a lot less relevant in future than the same money is now

  2. ^

    Never mind the Asian Tiger economies, even some conflict-ridden impoverished backwaters like Burkina Faso have seen average growth rates comparable to US stocks over extended periods of time, and even without wild technological optimism it'll probably be fairly hard to find people living under the new $3 per day (2025 PPP) poverty threshold in 2075

  3. ^

    It makes wayyy more sense for a fund to keep most of its money invested in domestic stocks when it's an endowment ring-fenced for something specific like selective scholarships or maintenance of a facility than when it's a fund for promoting economic growth and poverty alleviation

I follow the logic, but I think the logic of the contrived example actually exposes general weaknesses in patient philanthropy (i.e. for Option B to be clearly better we have to assume that, for some arbitrary reason, we do not care about the first three generations of poor people at all, only about the poor people in 100 years' time, who are sufficiently far removed for us not to even know how poor they will be).

Once we relax that assumption and assume that poor people today are at least as deserving as hypothetical poor people in the future, Option A starts looking rather good.[1] The first generation gets helped to the extent they need, their descendants may benefit directly to some extent but may also be better able to help themselves, and for big enough donations there is some sort of compounding return in the wider poor country. In some plausible circumstances even the return to the grandchildren is greater than the sum of money you donated (ceteris paribus, being born poor but obtaining a scholarship paid for by a foreign foundation isn't a better situation than being born into the middle class and having education paid for by the grandparent who received the foreign-funded scholarship and was able to earn much more for fifty years as a result).

It then becomes a debate about whether poor people can achieve better returns than index investments in Western stocks. Whilst poor people aren't sophisticated investors and are often subject to all sorts of negative economic shocks, they are often in a position to dramatically improve their livelihoods (RCTs on cash transfers suggested the annual return on a long-lived tin roof that didn't need replacing every two years was at least 19%, for example). And then there's whether to take into account the nonlinearity of money: the returns go to the poorest people living on $2 per day (2025 USD PPP) now, versus the poorest people [most likely, based on current trends] earning >$2 per day (2025 USD PPP) 50 years in the future.
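To make that nonlinearity point concrete, here's a toy log-utility sketch (the incomes and transfer sizes are purely illustrative assumptions, not estimates): under log utility the welfare value of a modest transfer scales roughly with one over the recipient's income, so a future recipient who is three times richer needs roughly three times the transfer to get the same welfare gain, which eats into a patient fund's compounding advantage.

```python
import math

# Toy illustration only: the incomes and transfers below are made-up numbers.
def welfare_gain(income_per_day: float, transfer_per_day: float) -> float:
    """Welfare gain of a transfer under simple log utility."""
    return math.log(income_per_day + transfer_per_day) - math.log(income_per_day)

print(welfare_gain(2.0, 0.50))   # $0.50/day boost on $2/day  -> ~0.22
print(welfare_gain(6.0, 0.50))   # the same boost on $6/day   -> ~0.08
print(welfare_gain(6.0, 1.50))   # ~3x the boost on $6/day    -> ~0.22
```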

  1. ^

    The exception might be in cases where your target population has very little capacity to improve their own situation right now, relative to those in future and so most of your money just gets wasted or stolen. I'm unconvinced this applies to people in poverty in general, but it might if you wanted to maximise the positive impact on a population of Palestinians in Gaza specifically, for example.

On the other hand, there is also no guarantee that the global poor in 50 years will (i) be as poor as some people from Ghana you might be able to help today and (ii) have as many low-hanging-fruit interventions available to help them.

Giving What We Can currently assumes that the cost of saving a life today is ~$4k. It's not obvious that the cost of saving a life will be as low as ($4k × compound interest on an index fund) in 50 years' time.
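To put rough numbers on that (the real return and horizon below are assumptions for illustration, not forecasts):

```python
# Illustrative arithmetic only; the 5% real return and 50-year horizon are assumptions.
cost_per_life_today = 4_000           # the ~$4k figure cited above
real_return = 0.05                    # assumed real annual return on an index fund
years = 50

future_fund = cost_per_life_today * (1 + real_return) ** years
print(f"${cost_per_life_today:,} compounded for {years} years: ${future_fund:,.0f}")
# ~$45,900: waiting only wins if a life can still be saved for less than that in
# 50 years, i.e. if the real cost per life saved grows more slowly than ~5% a year.
```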

(Also, if the philanthropist's timelines are as long as 50 years it may not be them doing the cause selection, which may or may not be a consideration)

I think it's also disanalogous in the sense that the EA community's belief in imminent AGI isn't predicated on the commercial success of various VC-funded companies in the same way that the EA community's belief in its own inherent goodness and amazing epistemics did kind of assume that half its money wasn't coming from an EA-leadership-endorsed criminal who rationalized his gambling of other people's money in EA terms...

The AI bubble popping (which many EAs actually want to happen) is somewhat orthogonal to the imminent AGI hypothesis;[1] the internet carried on growing after a bunch of overpromisers who misspent their capital fell by the wayside.[2] I expect that (whilst not converging on superintelligence) the same will happen with chatbots and diffusion models, and there will be plenty of scope for models to be better fit to benchmarks or for researchers to talk bots into creepier responses over the coming years.

The Singularity not happening by 2027 might be a bit of a blow for people who attached great weight to that timeline, but a lot are cautious about doing that or have already given themselves probabilistic get-outs. I don't think it's going to happen in 2027 or ever, but if I thought differently I'm not sure that 2027 actually being the year some companies failed to convince sovereign wealth funds they were close enough to AGI to deserve a trillion would, or even should, have that much impact.

I do agree with the wider point that it would be nice if EAs realized that many of their own donation preferences might be shaped at least as much by personal interests, and be as vulnerable to rhetorical tricks, as normies'; but I'm not sure that was the main takeaway from FTX.

  1. ^

    FWIW I hold similar views about it not being about to happen and about undue weight being placed on certain quasi-religious prophecies...

  2. ^

    There's perhaps also a lesson that the internet isn't that different from circa 2000, but certain aspects of it did keep getting better...

I would add that it's not just extreme proposals to make "AI go well", like Yudkowsky's airstrike, that potentially have negative consequences beyond the counterfactual costs of not spending the money on other causes. Even 'pausing AI' through democratically enacted legislation resulting from smart and well-reasoned lobbying might be significantly negative in its direct impact, if the sort of 'AI' restricted would have failed to become a malign superintelligence but would have been very helpful to economic growth generally and perhaps medical researchers specifically.

This applies if the imminent AGI hypothesis is false, and probably to an even greater extent if it is true.

(The simplest argument for why it's hard to justify all EA efforts to make AI go well purely on its neglectedness as a cause is that some EA theories about what is needed for AI to go well directly conflict with others; to justify a course of action one needs some confidence not only that AGI is possibly a threat but that the proposed approach at least doesn't increase the threat. It is possible that donations to a "charity" that became a commercial AI accelerationist and donations to lobbyists attempting to pause AI altogether were both mistakes, but it seems implausible that they were both good causes.)

I'm more confused by how this apparently near-future, current-resource-base timeline interacts with the idea that the Dyson swarm is achieved clandestinely (I agree with your sentiment that the "disassemble Mercury within 31 years" scenario is even more unlikely, though close to Mercury is a much better location for a Dyson swarm). Most of the stuff in the tech tree doesn't exist yet and the entities working on it are separate and funding-starved: the relationship between entities writing papers about ISRU or designing rectennas for power transmission and an autonomous, self-replicating deep-space construction facility capable of acquiring unassailable dominance of the asteroid belt within a year is akin to the relationship between a medieval blacksmith and a gigafactory. You could close that gap more quickly with a larger-than-Apollo-scale joined-up research endeavour, but that's the opposite of discreet.

Stuff like the challenges of transmitting power/data over planetary distances and the constant battle against natural factors like ionizing radiation don't exactly point towards permanent dominance by a single actor either.

Also, look at the current US administration and its priorities and... they're certainly not Singaporean or particularly interested in x-risk mitigation.

Feels like the most straightforwardly rational argument for portfolio diversification is the assumption that your EV and probability estimates almost certainly aren't the accurate, or at least unbiased, estimators they would need to be for the optimal strategy to be sticking everything on the highest-EV outcome. Even more so when the probability that a given EV estimate is accurate is unlikely to be uncorrelated with whether it scores particularly highly (the good old optimiser's curse, with a dose of wishful thinking thrown in). Financiers don't trust themselves to be perfectly impartial about stuff like commodity prices in central Asia or binary bets on the value of the yen on Thursday, and it seems unlikely that people who are extremely passionate about the causes they and their friends participate in, ahead of a vast range of other causes that nominally claim to do good, achieve a greater level of impartiality. Pascalian odds seem particularly unlikely to be representative of the true best option (in plain English, a 0.0001% subjective probability assessment of a one-shot event is roughly "I don't really know what the outcome of this will be, and it seems like there could be many, many things more likely to achieve the same end"). You can make the assumption that if they appear to be robustly positive and neglected they might deserve funding anyway, but that is a portfolio argument...
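A small Monte Carlo sketch of that optimiser's-curse point (all parameters below are illustrative assumptions): rank options by noisy EV estimates and always back the top one, and the winner's estimate systematically overstates its true EV, which is one reason not to stake everything on the single highest-EV option.

```python
import random

random.seed(0)
n_options, noise_sd, trials = 20, 1.0, 10_000
total_overestimate = 0.0

for _ in range(trials):
    true_evs = [random.gauss(0, 1) for _ in range(n_options)]        # true values
    estimates = [ev + random.gauss(0, noise_sd) for ev in true_evs]  # noisy estimates
    best = max(range(n_options), key=lambda i: estimates[i])         # pick the top-ranked option
    total_overestimate += estimates[best] - true_evs[best]

print(f"Mean (estimate - true EV) of the top-ranked option: {total_overestimate / trials:.2f}")
# Reliably positive: the option that looks best tends to look better than it really is.
```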
