Jackson Wagner

Is anyone worrying about the Institute for Health Metrics and Evaluation (IHME)?

If funding from the Gates Foundation is having the perverse effect of allowing them to ignore scientific criticism, it also sounds like an interesting case study in charitable spending/incentives gone wrong.

On the bright side, IHME isn't our only or even our main source of pandemic predictions. The CDC tracks numerous covid prediction projects (although the CDC has generally done badly too, and is definitely not going to be winning any forecasting awards). In recognition of these failures, the CDC is creating a new forecasting center staffed by what seems like a promising crew -- they will be led by Marc Lipsitch, a Harvard professor who has given talks at EA events and (IMO) offers very intelligent covid-19 commentary on Twitter. https://fortune.com/2021/08/18/cdc-center-forecasting-and-outbreak-analytics-public-health/

Overall, forecasting and data gathering is one of the few aspects of pandemic response where I'm optimistic that we've learned our lesson and will do better next time.

Optimal Allocation of Spending on Existential Risk Reduction over an Infinite Time Horizon (in a too simplistic model)

Interesting, if, as you say, a bit unrealistic. If I'm interpreting your graph correctly (although I feel like I am probably not; I'm definitely not an economist), you end up describing an endowment-like structure, where if you're going to live forever, you'll want to end up giving away a constant amount of money each year (your b = 0 line in the chart), or maybe an amount of money that represents something like a constant fraction of the growing world economy (your b = (r-p)/r line?). It might be helpful for you to provide a layman-accessible summary, in case I'm getting this all wrong.
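If my endowment reading is right, the two regimes can be illustrated with a toy simulation. To be clear, this is my own sketch with made-up numbers, not the paper's actual model; I'm just borrowing the symbols r and b from the chart:

```python
# Toy illustration (not the paper's model): two payout rules for an
# endowment whose capital compounds at an assumed rate r per year.
r = 0.05          # assumed investment return
years = 50

# Rule 1: spend a constant absolute amount each year (my reading of b = 0)
capital = 100.0
constant_payout = 4.0
payouts_flat = []
for _ in range(years):
    capital = capital * (1 + r) - constant_payout
    payouts_flat.append(constant_payout)

# Rule 2: spend a constant fraction of capital each year
# (loosely standing in for the b = (r-p)/r regime)
capital = 100.0
fraction = 0.03   # assumed spending fraction
payouts_frac = []
for _ in range(years):
    payout = capital * fraction
    capital = capital * (1 + r) - payout
    payouts_frac.append(payout)

# Under rule 1 the annual payout stays flat forever; under rule 2 it
# starts smaller but grows along with the compounding fund.
print(payouts_flat[-1], round(payouts_frac[-1], 2))
```

With these made-up parameters, the constant-fraction rule eventually overtakes the flat rule, which is the behavior I assume the two lines in the chart are contrasting.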

In your conclusion, you talk about a couple ways that your model could be extended:

  • adding financial details like income and multiple investment assets
  • adding multiple charity types (like x-risk vs global health and development)
  • adding factor models of x-risk-reduction influence (via building up institutions, etc)
  • changes in x-risk over time, as dangers wax and wane
  • changes in whether spending should be thought of as reducing the current level of X-risk, versus permanently reducing the future rate of risk.

Of these, personally I'd be most interested in hearing about the last two, as they seem perhaps the most answerable with a pure mathematical approach. It would be interesting to know what ideal investment strategies look like under different combinations of assumptions:

  • X-risk increasing up until a "time of perils", then decreasing afterwards
  • X-risk greatest right now, steadily declining
  • X-risk just steadily getting worse and worse as dangerous new tech is developed
  • spending to temporarily suppress immediate risk levels, versus spending on permanently lowering the rate of risk
  • being able to flip back and forth between both kinds of spending

And other things like that.
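As a rough illustration of how much those trajectory assumptions matter, here is a hypothetical sketch (every hazard number below is invented for illustration, not drawn from the paper) comparing cumulative survival probability under the first three trajectories:

```python
# Hypothetical hazard-rate trajectories, one number per century.
# All figures are made up purely to show how the shapes differ.
def survival(hazards):
    """Probability of surviving every period, given per-period hazards."""
    p = 1.0
    for h in hazards:
        p *= (1 - h)
    return p

centuries = range(10)
time_of_perils = [0.10 if 2 <= t <= 4 else 0.01 for t in centuries]
declining      = [0.10 / (t + 1) for t in centuries]
worsening      = [0.01 * (t + 1) for t in centuries]

for name, traj in [("time of perils", time_of_perils),
                   ("declining", declining),
                   ("worsening", worsening)]:
    print(name, round(survival(traj), 3))
```

Even this crude version suggests why the ideal spending schedule should differ across the scenarios: under the "time of perils" shape, risk-reduction spending is most valuable just before the spike, while under steadily worsening risk the case for patient accumulation weakens over time.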

Analyzing view metrics on the EA Forum

"The top 5% of posts accounted for about half of the views and view time."

Looks like the EA forum, just like the overall project of effective altruism itself, is a hits-based business!

What would you do if you had half a million dollars?

Yes, I was definitely thinking of stuff along the lines of "help fund the creation of a toy fund and work out the legal kinks, portfolio design, governance mechanisms, etc", in addition to pure blog-post-style research into the idea of investing-to-give.

Admittedly, it's an odd position for me to be pessimistic about patient philanthropy itself but still pretty psyched about setting up the experiment. I guess the argument that funding the creation of the PPF is a great idea relies on one or more of the following being true:

  • Actually doing patient philanthropy turns out to be, in fact, extremely effective. However, we won't definitively know this for decades! A leading indicator might be if the perceived problems/drawbacks of PPF turn out to be more easily solved than we thought. (Perhaps everyone looks at the legal mechanisms of the newly-launched toy fund and thinks, "Wow, this is actually a really innovative and promising structure!")

  • If the PPF draws in lots more EA donations that wouldn't have otherwise happened, it could be a great idea even if it's not as competitive on effectiveness.

  • Designing the PPF might somehow have positive spillover effects. (Are there other areas in EA calling for weird long-term institution design or complex financial products? Surely a few...)

EA cause areas are just areas where great interventions should be easier to find

Hi! I was one of the downvoters on your earlier post about Israel/Palestine, but looking at the link again now, I see that nobody ever gave a good explanation for why the post got such a negative reception. I'm sorry that we gave such a hostile reaction without explaining. I can't speak for all EAs, but I suspect that some of the main reasons for hesitation might be:

  • Israel-related issues are extremely politically charged, so taking any stance whatsoever might risk damaging the carefully non-politicized reputation that other parts of the EA movement have built up. I imagine that EAs would have similar hesitation about taking a strong stance on abortion rights (even though EAs often have strong views on population ethics), or officially endorsing a candidate in a US presidential election (even though the majority of EAs are probably Democrats).
  • The Israel/Palestine conflict is the opposite of neglected -- tons of media coverage, hundreds of activist groups, and lots of funding on both sides. A typical EA might argue that it would be better for a newly-formed activist group to focus on something like the current situation in Chad, which attracts hundreds of times less media coverage even though a much larger number of people have died. (Of course, raw death toll isn't the final arbiter of cause importance -- Israel is a nuclear power, after all, so its decisions have wide ramifications.)
  • For whatever reason, the Israel/Palestine conflict has gained a specific reputation as a devilishly intractable diplomatic puzzle -- there's little agreement on any obvious solutions that seem like they could resolve the biggest problems.

I'm more positive about your second idea -- trying to identify the areas at greatest risk of conflict throughout the whole world and take actions to calm tensions before violence erupts. To some extent, this is the traditional work of diplomacy, international NGOs, etc, but these efforts could perhaps be better-targeted, and there are probably some unique angles here that EAs could look into. While international attention from diplomats and NGOs seems to parachute into regions right at the moment of crisis, I could imagine EAs trying to intervene earlier in the lead-up to conflicts, perhaps running low-cost radio programs trying to spread American-style values of tolerance and anti-racism. I could also imagine taking an even longer-term view, and trying to investigate ways to head off the root causes of political tension and violence on a timespan of decades or centuries. (Here is a somewhat similar project examining what gave rise to positive social movements like slavery abolitionism.)

What would you do if you had half a million dollars?

Two approaches not mentioned in the article that I would advocate:

  1. Giving to global priorities research. You mentioned patient philanthropy (whether a few years or centuries), and one of the main motivations of waiting to give is to benefit from a more-developed landscape of EA thought. If the sophistication of EA thought is a key bottleneck, why not contribute today to global priorities research efforts, thus accelerating the pace of intellectual development that other patient philanthropists are waiting on? I'm not confident that giving to global priorities research today beats waiting and giving later, since it's unclear how much the intellectual development of the movement would be accelerated by additional cash, but it should be on the table of options you look at. (To some extent, new ideas are generated naturally & for free as people think about problems, write comments on blog posts, etc. Meanwhile, there might be some ways where gaining experience simply takes calendar time. So perhaps only a small portion of the EA movement's development could actually be accelerated with more global-priorities-research funding. On the other hand, a marginally more well-developed field would almost certainly pull in marginally more donations, so helping to kick-start the growth and (hopefully) eventual mainstreaming of EA while we are still in its early days could be very valuable. Anyways, if you are considering waiting for the EA community to learn more, I think it's worth also considering being the change you want to see in the movement, and trying to accelerate the global-priorities-research timeline.)

  2. Giving to various up-and-coming cause areas within EA. Despite being a very nimble and open-minded movement actively searching for new cause areas, it seems to me that there is still some inertia and path-dependency when it comes to bringing new causes online alongside traditional, established EA focus areas. In my mind, this creates a kind of inefficiency, where new causes are recognized as "likely to become a bigger EA focus in the future", but haven't yet fully scaled up due in part to intellectual inertia within the movement. You could help accelerate this onboarding process by making grants to a portfolio of newer and less-familiar causes. For example:

  • The "global health and wellbeing" side of EA has for years been focused on GiveWell top charities. Recently, OpenPhil has expanded into new programs devoted to South Asian air quality and global aid advocacy. These interventions seem like great ideas, which plausibly do even better than GiveWell's recommendations, so it might be helpful to jump in early and help get projects in these areas off the ground.
  • Charter cities have been studied for their EA potential in several ways -- reducing poverty directly via economic growth, providing a model for improved governance that might spread to nearby regions, and (most exciting from my longtermist EA perspective) acting as laboratories to experiment with new institutions, new policies, and new forms of government. As far as I know, charter city initiatives haven't yet received large support from EA donors, but personally I think that ought to change.
  • As I mentioned in my previous comment, I'm slightly pessimistic about the idea of actually doing patient philanthropy over centuries on a large scale, but the idea is nevertheless promising enough that we should help get some experiments up and running.
  • There are a whole host of promising, niche ideas within EA that might benefit from dedicated funding -- although some of these areas are so small that there's no organization ready and waiting to accept the cash. Research into things like wild-animal welfare or risks of stable totalitarianism seem like good things to investigate, as would be experiments with improved institution-design mechanisms (like prediction markets, quadratic funding, improved voting systems, etc) or civilizational resilience plans along the lines of ALLFED.

What would you do if you had half a million dollars?

I appreciate that you're going meta and considering such a full mix of re-granting options, rather than just giving to charities themselves as past lottery winners have. Your point about not having as much local knowledge as the big granting organizations makes a lot of sense. Longview, the LTFF, and the EA Infrastructure fund all seem like worthy targets, although I don't know much about them in particular. Here are a few thoughts on the other approaches:

Paying someone to help decide: This idea doesn't make much sense to me. After all, figuring out the most effective ways to donate to charity is already the core research project of effective altruism! It seems to me that paying someone to research what to do with the money would just be a strange, roundabout way to support cause prioritization research. Better to just explicitly donate to a cause prioritization research initiative. That way, a team of researchers could work on whatever cause prioritization problems seem most important for the overall EA movement, rather than employing one person to deliberate on this specific pot of $500K.

Patient philanthropy fund: This is an intriguing idea, but I wonder if patient philanthropy is well-developed enough that money would be best used to actually fill up the fund, versus studying the idea and working out various details in the plan. As Founders Pledge says, there are significant risks of expropriation and value drift, and there is probably more research and planning that can be done to investigate how to mitigate these risks. To their list of dangers, I would add:

  • The risk of some kind of financial collapse or transition, such that the contents of the fund are no longer valuable and/or no longer honored. (For instance, as a result of nations defaulting on their debt, or a sudden switch away from today's currencies.) This seems similar to, but distinct from, expropriation.

  • Somewhat related to value drift, the risk that a fund designed to last for millennia and to be highly resistant against expropriation and value drift, would fail to also be nimble enough to recognize changing opportunities and actually deploy its assets at a crucial time when they could do the most good. Figuring out how best to mitigate this seems like a very tricky institution-design problem. But making even a small amount of progress on it could be really valuable, especially since the problem of staying on-mission while also being nimble and maintaining organizational skill/capacity is a fundamental paradox that bedevils all kinds of institutions.

…Anyways, I'm sure that people more involved in patient philanthropy have thought about this stuff in more depth than I. But my point is that right now, it's possible that funding should mostly go towards designing and testing and implementing patient-philanthropy funds, rather than just putting large amounts of cash in the fund itself.

Invest & wait a few years: Although similar in some ways to the patient-philanthropy plan, I think the motivations for choosing this option are actually quite different:

  • Giving to a patient-philanthropy fund is somewhat incompatible with "urgent longtermism" focused on AI and other X-risks, while a plan to wait 5 years and then give is perfectly compatible with urgent longtermism.

  • Two benefits of waiting are the growth in capital, and the ability to learn more as the EA movement makes intellectual progress. Presumably, over a timespan of centuries, the EA movement will start running into diminishing intellectual returns, so the economic-growth benefit (if we assume steady returns of a few percent per year) would be proportionately larger. By waiting just five years, I'd guess that the larger benefit would come from the development of the EA movement.

Personally, I'm more sympathetic to the idea of waiting just a few years to take advantage of the rapidly increasing sophistication of EA thought, rather than waiting centuries. But you'd have to balance this consideration against how much funding you expect EA to receive in the future. If you think EA is currently in a boom and will decline later, you should save your money and give later (when ideas are well-developed but money is scarce). If you think EA will be pulling in much bigger numbers in the future, it's best to give now (so future funding can benefit from a more well-developed EA movement).
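For concreteness on the growth side of that tradeoff, here's the back-of-envelope compounding (the 5% annual return is a number I'm supplying for illustration, not one from the post):

```python
# Back-of-envelope compounding for the "invest and wait a few years" option.
# The 5% return and 5-year horizon are assumed figures for illustration.
principal = 500_000
rate = 0.05
years = 5
future_value = principal * (1 + rate) ** years
print(round(future_value))  # 638141, roughly a 28% gain
```

A ~28% larger pot after five years is nice but not transformative, which is why I'd guess the bigger benefit of a short wait comes from the EA movement's intellectual development rather than from investment returns.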

Study results: The most convincing argument for effective donations

Yes, I was just going to ask if anyone had looked at longtermist arguments in a similar way, or even just compiled a similar list of any short, punchy longtermist pitches that are out there. I've been thinking of printing out some pamphlets or something to distribute around town when I go for walks, and it might be nice to be able to represent multiple EA pillars on one pamphlet.

I also think it would be interesting to see results on longtermism because it's a much stranger, less familiar idea (more different than other charity messaging people have heard before), so it might be harder to explain in a short format, but there might be correspondingly big wins from introducing people to such a totally new concept.

Thought experiment -- Does it still make sense to be an altruist if the world is coming to an end.

Two points in response:

  1. The whole point of Agnes' essay is to showcase an infinite regress problem: the basis for meaning in our lives can't rest solely on the existence of future generations, because in that case the fact that the universe is finite would force us all to become committed nihilists right now, today. Consider: if life for the last generation is nothing but a horrifying, meaningless void leading to "complete ethical and political collapse", then surely life for the second-to-last generation would also be meaningless? (After all, who'd want to bring a child into such a brutal, chaotic, pointless world?) But if things are meaningless for the second-to-last generation, then by the same logic they would also be meaningless for the third-to-last generation, and so on, all the way back to us in the present day.

  2. What you're looking for in response to this essay isn't a defense of altruism, it's a defense of any meaning or goals or values whatsoever (instead of just wallowing in nihilism). If I believed that all meaning in my life came from future generations, and then I read Agnes' essay, I might become depressed and nihilistic. But it's not just altruism that would lose its appeal -- everything would lose its appeal! If nothing matters, why bother making money, staying healthy, having fun, or doing anything? For some quality thoughts about nihilism, you might be interested in this short and entertaining post by Eliezer Yudkowsky.

Of course, if I knew the world was ending next week, that would be terrible news, and in light of that news it would obviously be foolish to continue with efforts intended to help the far future. So, to answer your question -- yes, if we knew for sure that the world was going to be obliterated very soon, there would be little point in trying to build anything or help anyone for the long-term. (There would still be plenty of point in helping others short-term, such as talking with friends and family and commiserating / consoling each other about the impending doom, and in enjoying the time that was left.)

Personally, although effective altruism is a big part of my life, I don't view "helping others" as a terminal value, something that's especially good and meaningful in itself. Helping others is good because it leads to good and meaningful things, like those other people living happy lives. Ultimately, I'd say that the foundation of "meaning" in our lives comes from experiencing our own subjective feelings, sensations, thoughts, and conscious awareness. My main goal in contributing to EA is to make sure that there are lots and lots of people in a thriving long-term future, so that they too can enjoy the wonder of consciousness.

Is there any evidence that any method of debiasing, achieving rationality can work or is even possible?

With your aggressive tone, it's perhaps understandable why you've run into mod trouble on LessWrong. But as a simple existence proof, the forecasting techniques and training materials described by Philip Tetlock in books like "Superforecasting" have been repeatedly shown to somewhat improve people's skill at making all kinds of predictions across varied subject areas. Forecasting isn't the same thing as LessWrong-style "rationality", but it's close -- both are general reasoning skills that focus on avoiding bias and understanding probability, rather than domain-specific expertise.

