David T (1278 karma, 234 comments)

Sure, your example showed that if one irrationally disregards earlier generations and focuses purely on the needs of cohort P, Option B is a clear winner. If one doesn't, we agree that it's actually pretty darn complicated to estimate the total welfare impact of donating now versus donating a larger nominal sum to equivalent problems (assuming they still exist) in future. That requires a lot of contestable counterfactual assumptions,[1] as well as choices of discount rates, PPP and money-nonlinearity assumptions, and decisions about whether any value is attached to the economic stimulus to non-recipients in developing countries and to keeping marginal NGOs alive. (Donations to things other than poverty relief have their own idiosyncrasies: hopefully the number of ITNs needed to prevent malaria deaths by ~2050 will be zero.)

The intergenerational elasticity point is an interesting one, but intergenerational income elasticities are higher in less developed countries (and the higher incomes are partially inherited by more people in later generations, assuming they continue to reproduce above replacement rate). And under normal assumptions we care about the earlier generations helped at least as much as the later ones, so you've already helped many more people than the direct recipients by the time the patient philanthropy fund is investigating how many more people accrued compound interest will let it help. Plus, in the specific example of the roof we're talking about wealth, and you'd have to invest very well in stocks and shares to beat the imputed 20% annual returns on a tin roof, even over time spans that extend beyond its serviceable life.
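
To make the compounding comparison concrete, here's a minimal sketch. The donation size, the ~20% imputed roof return, the 7% real index-fund return, and the 20-year horizon are all illustrative assumptions, not claims:

```python
# Hedged illustration only: the donation size, rates, and horizon below are
# assumptions for the sake of the comparison, not measured quantities.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Compound `principal` at `annual_rate` for `years` years."""
    return principal * (1 + annual_rate) ** years

DONATION = 1_000       # hypothetical donation (USD)
ROOF_RETURN = 0.20     # imputed annual return on a tin roof (per the RCTs discussed here)
INDEX_RETURN = 0.07    # assumed real return on a Western index fund
YEARS = 20

give_now = future_value(DONATION, ROOF_RETURN, YEARS)     # ~$38,338
give_later = future_value(DONATION, INDEX_RETURN, YEARS)  # ~$3,870

print(f"Give now (recipient compounds at 20%): ${give_now:,.0f}")
print(f"Invest at 7%, give in {YEARS} years:    ${give_later:,.0f}")
# The roof's return doesn't literally compound for 20 years, but this shows
# how large a head start the patient fund has to overcome.
```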

Catch-up growth definitely exists; the only question is whether more marginal economies will be excluded from it.[2] There are many reasons for economic stagnation in poorer regions (most obviously terrible governance), but it's certainly not independent of whether philanthropic funds for economic growth and poverty alleviation decide that in the near term they should shift their money into the stock market of their own country instead.[3] Too much patience is probably worse for developing countries than the opposite extreme of too much philanthropic cash chasing too few viable opportunities.

  1. ^

    You also have to make assumptions about the philanthropists of the future: I'm not as rosy on near-future technology-enabled post-scarcity societies as some people on here, but if we trend in that direction, maybe your nominally larger funds are a lot less relevant in the future than they are now.

  2. ^

    Never mind the Asian Tiger economies: even some conflict-ridden, impoverished backwaters like Burkina Faso have seen average growth rates comparable to US stocks over extended periods of time, and even without wild technological optimism it'll probably be fairly hard to find people living under the new $3 per day (2025 PPP) poverty threshold in 2075.

  3. ^

    It makes way more sense for funds to keep most of their money invested in domestic stocks when they're endowments ring-fenced for specific things like selective scholarships or maintenance of a facility than when they're funds for promoting economic growth and poverty alleviation.

I follow the logic, but I think the logic of the contrived example actually exposes general weaknesses in patient philanthropy (i.e. for Option B to be clearly better we have to assume that for some arbitrary reason we do not care about the first three generations of poor people at all, only about the poor people in 100 years' time, who are sufficiently far removed for us not to even know how poor they will be).

Once we relax that assumption and assume that poor people today are at least as deserving as hypothetical poor people in the future, Option A starts looking rather good.[1] The first generation gets helped to the extent they need, their descendants may benefit directly to some extent but may also be better able to help themselves, and for big enough donations there is some sort of compounding return in the wider poor country. In some plausible circumstances even the return to the grandchildren is greater than the sum of money you donated (ceteris paribus, being born poor but obtaining a scholarship paid for by a foreign foundation isn't a better situation than being born into the middle class and having your education paid for by the grandparent who received the foreign-funded scholarship and was able to earn much more for fifty years as a result).

It then becomes a debate about whether poor people can achieve better returns than index investments in Western stocks. Whilst poor people aren't sophisticated investors and are often subject to all sorts of negative economic shocks, they are often in a position to dramatically improve their livelihood (RCTs on cash transfers suggested the annual return on a long-lived tin roof that didn't need replacing every two years was at least 19%, for example). And then there's the question of whether to take into account the nonlinearity of money: returns to the poorest people living on $2 per day (2025 USD PPP) now versus the poorest people [most likely, based on current trends] earning >$2 per day (2025 USD PPP) 50 years in the future.
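
A minimal sketch of that nonlinearity point, assuming logarithmic utility of income (a standard but contestable modelling choice; the income levels are the ones mentioned above):

```python
import math

# Hedged sketch only: assumes logarithmic utility of income, a standard but
# contestable way to model the diminishing marginal value of money.

def utility_gain(daily_income: float, transfer: float) -> float:
    """Utility gain from adding `transfer` to `daily_income` under log utility."""
    return math.log(daily_income + transfer) - math.log(daily_income)

gain_today = utility_gain(2.0, 1.0)   # recipient on $2/day now
gain_future = utility_gain(4.0, 1.0)  # hypothetical richer recipient in 50 years

print(f"Gain at $2/day: {gain_today:.3f}")   # ~0.405
print(f"Gain at $4/day: {gain_future:.3f}")  # ~0.223
# Under this assumption the same dollar is worth nearly twice as much to
# today's poorer recipient, a handicap any wait-and-invest strategy must beat.
```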

  1. ^

    The exception might be in cases where your target population has very little capacity to improve their own situation right now, relative to those in future and so most of your money just gets wasted or stolen. I'm unconvinced this applies to people in poverty in general, but it might if you wanted to maximise the positive impact on a population of Palestinians in Gaza specifically, for example.

On the other hand, there is also no guarantee that the global poor in 50 years will (i) be as poor as some people from Ghana you might be able to help today and (ii) have as many low-hanging-fruit interventions available to help them.

Giving What We Can currently assumes that the cost of saving a life today is ~$4k. It's not obvious that the cost of saving a life in 50 years' time will still be as low as $4k multiplied by 50 years of compound interest on an index fund.
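
As a hedged back-of-envelope (the 5% real return is purely an illustrative assumption):

```python
# Back-of-envelope only: the 5% real return is an illustrative assumption.
cost_today = 4_000   # ~cost of saving a life today (USD), per the figure above
real_return = 0.05   # assumed real annual index-fund return
years = 50

threshold = cost_today * (1 + real_return) ** years
print(f"${threshold:,.0f}")  # ~$45,868
# Waiting only beats giving now if saving a life in 50 years still costs
# less than this compounded sum.
```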

(Also, if the philanthropist's timelines are as long as 50 years, it may not be them doing the cause selection, which may or may not be a consideration.)

I think it's also disanalogous in the sense that the EA community's belief in imminent AGI isn't predicated on the commercial success of various VC-funded companies, whereas the EA community's belief in the inherent goodness and amazing epistemics of its community did kind of assume that half its money wasn't coming from an EA-leadership-endorsed criminal who rationalized his gambling of other people's money in EA terms...

The AI bubble popping (which many EAs actually want to happen) is somewhat orthogonal to the imminent AGI hypothesis;[1] the internet carried on growing after a bunch of overpromisers who misspent their capital fell by the wayside.[2] I expect that (whilst not converging on superintelligence) the same will happen with chatbots and diffusion models, and there will be plenty of scope for models to be better fit to benchmarks or for researchers to talk bots into creepier responses over the coming years.

The Singularity not happening by 2027 might be a bit of a blow for people who attached great weight to that timeline, but a lot are cautious about doing that or have already given themselves probabilistic get-outs. I don't think it's going to happen in 2027 or ever, but if I thought differently, I'm not sure 2027 actually being the year some companies failed to convince sovereign wealth funds they were close enough to AGI to deserve a trillion would, or even should, have that much impact.

I do agree with the wider point that it would be nice if EAs realized that their own donation preferences might be shaped at least as much by personal interests, and be just as vulnerable to rhetorical tricks, as normies'; but I'm not sure that was the main takeaway from FTX.

  1. ^

    FWIW I hold similar views about it not being about to happen and about undue weight being placed on certain quasi-religious prophecies...

  2. ^

    There's perhaps also a lesson that the internet isn't that different from circa 2000, but certain aspects of it did keep getting better...

I would add that it's not just extreme proposals to make "AI go well", like Yudkowsky's airstrike, that potentially have negative consequences beyond the counterfactual costs of not spending the money on other causes. Even 'pausing AI' through democratically enacted legislation passed as a result of smart and well-reasoned lobbying might be significantly negative in its direct impact, if the sort of 'AI' restricted would have failed to become a malign superintelligence but would have been very helpful to economic growth generally, and perhaps to medical researchers specifically.

This applies if the imminent AGI hypothesis is false, and probably to an even greater extent if it is true.

(The simplest argument for why it's hard to justify all EA efforts to make AI go well purely on the neglectedness of the cause is that some EA theories about what is needed for AI to go well directly conflict with others; to justify a course of action one needs some confidence not only that AGI is possibly a threat but that the proposed approach at least doesn't increase the threat. It is possible that donations to a "charity" that became a commercial AI accelerationist and donations to lobbyists attempting to pause AI altogether were both mistakes, but it seems implausible that they were both good causes.)

I'm more confused by how this apparent near-future, current-world-resource-base timeline interacts with the idea that this Dyson swarm is achieved clandestinely (I agree with your sentiment that the "disassemble Mercury within 31 years" scenario is even more unlikely, though close to Mercury is a much better location for a Dyson swarm). Most of the stuff in the tech tree doesn't exist yet, and the entities working on it are separate and funding-starved: the relationship between entities writing papers about ISRU or designing rectennas for power transmission and an autonomous self-replicating deep-space construction facility capable of acquiring unassailable dominance of the asteroid belt within a year is akin to the relationship between a medieval blacksmith and a gigafactory. You could close that gap more quickly with a larger-than-Apollo-scale joined-up research endeavour, but that's the opposite of discreet.

Stuff like the challenges of transmitting power/data over planetary distances and the constant battle against natural factors like ionizing radiation don't exactly point towards permanent dominance by a single actor either.

Also, you look at the current US administration and its priorities and... they're certainly not Singaporean or particularly interested in x-risk mitigation.

Feels like the most straightforwardly rational argument for portfolio diversification is the assumption that your EV and probability estimates almost certainly aren't the accurate, or at least unbiased, estimators they'd need to be for the optimal strategy to be sticking everything on the highest-EV outcome. Even more so when the probability that a given EV estimate is accurate is unlikely to be uncorrelated with whether it scores particularly highly (the good old optimiser's curse, with a dose of wishful thinking thrown in; see the sketch below). Financiers don't trust themselves to be perfectly impartial about stuff like commodity prices in central Asia or binary bets on the value of the yen on Thursday, and it seems unlikely that people who are extremely passionate about the causes they and their friends participate in, ahead of a vast range of other causes that nominally claim to do good, achieve a greater level of impartiality. Pascalian odds seem particularly unlikely to be representative of the true best option (in plain English, a 0.0001% subjective probability assessment of a one-shot event is roughly "I don't really know what the outcome of this will be, and it seems like there could be many, many things more likely to achieve the same end"). You can make the assumption that if such causes appear to be robustly positive and neglected they might deserve funding anyway, but that is a portfolio argument...
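
A minimal simulation of the optimiser's curse, under made-up numbers (20 options with identical true EV and unbiased but noisy estimates):

```python
import random

# Hedged simulation of the optimiser's curse: even when every individual EV
# estimate is unbiased, the estimate belonging to the option you *select*
# (the highest one) is biased upward. All numbers are made up.

random.seed(0)
N_OPTIONS = 20     # candidate causes
TRUE_EV = 1.0      # every option has the same true EV
NOISE_SD = 0.5     # standard deviation of estimation error
TRIALS = 10_000

chosen_estimates = []
for _ in range(TRIALS):
    estimates = [random.gauss(TRUE_EV, NOISE_SD) for _ in range(N_OPTIONS)]
    chosen_estimates.append(max(estimates))  # pick the highest-estimated option

avg_estimate = sum(chosen_estimates) / TRIALS
print(f"Estimated EV of the chosen option: {avg_estimate:.2f}")  # ~1.93
print(f"True EV of the chosen option:      {TRUE_EV:.2f}")       # 1.00
# The winner looks ~90% better than it really is, which is the systematic
# optimism that diversifying across a portfolio partially hedges against.
```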

Doesn't this depend on what you consider the "top tier areas for making AI go well" (which the post doesn't seem to define)? If that happens to be AI safety research institutes focused specifically on preventing "AI doom" via approaches you consider non-harmful, then naively I'd expect nearly all of them to be aligned with the movement focused on that priority: those are relatively small niches, the OP and their organisation and the wider EA movement are actively nudging people into them on the EA assumption that they're the top-tier ones, and anyone looking more broadly at AI as a professional interest will find a whole host of lucrative alternatives where they won't be scrutinised on their alignment at interview and can go and make cool tools and/or lots of money on stock options.

If you define it as "areas which have the most influence on how AI is built" then those are more the people @titotal was talking about, and yeah, they don't seem particularly aligned with EA, not even the ones that say safety-ish things as a marketing strategy and took money from EA funds.

And if you define "safety" more broadly, there are plenty of other AI research areas focusing on stuff like cultural bias or job market impact. But you and your organisation and 80,000 Hours probably don't consider them top tier for effectiveness, and (not coincidentally) I suspect these have very low proportions of EAs. Same goes for defence companies who've decided the "safest" approach to AI is to win the arms race. Similarly, it's no surprise that people who happen to be very concerned about morality and utilitarianism and doing the best they can with their 80k hours of working life, who get their advice from Brutger, don't become AI researchers at all, despite the similarities of their moral views.

Got to agree with the AI "analysis" being pretty limited, even though it flatters me by describing my analysis as "rigorous".[1] It's not a positive sign that this news update and jobs listing is flagged as having particularly high "epistemic quality".

That said, I enjoyed the bits of the 'egregore' section about the "ritualistic displays of humility", "elevating developers to a priesthood" and the "compulsive need to model, quantify, and systematize everything, even with acknowledged high uncertainty and speculative inputs => illusion of rigor".[2] Gemini seems to have absorbed the standard critiques of EA and rationalism better than many humans, including humans writing criticisms of, and defences of, those belief systems. It's also not wrong.

Its poetry is still Vogon-level though.

  1. ^

    For a start, I think most people reading our posts would conclude that Vasco and I disagree on far too much to be considered "intellectually aligned", even if we do it mostly politely by drilling down into the details of each other's arguments.

  2. ^

    OK, if my rigour is illusory, maybe that compliment is more backhanded than I thought :)
