One question that came to my mind at multiple points while reading the post was what your angle in writing it was. While the post seems to be written with the goal of demarcating and pushing "your brand" of radical social justice as distinct from EA, you clearly seem to agree with the core "EA assumption" (i.e., that it's good to use careful reasoning and evidence to try to make the world better), even though you disagree on certain aspects of how best to implement this in practice.
Thus, I would really encourage you to engage with the EA community in a collaborative and open spirit. As you can tell from the reactions here, criticism is well appreciated by the EA community if it is well reasoned and articulated. Of course, there are some rules to this game (i.e., as mentioned elsewhere, you should provide justification for your beliefs), but if you have good arguments for your position you might even effect systemic change in EA ;)
Thanks for the quick reply!
Yeah, an article or podcast on the framework and possible pitfalls would be great. I generally like ITN for broad cause assessments (i.e., is this interesting to look at?), but the quantitative version that 80k uses does seem to have some serious limitations if one digs more deeply into a topic. I would be mostly concerned about people new to EA either having false confidence in the numbers or being turned off by an overly simplistic approach. But you obviously have much more insight into people's reactions, and I am looking forward to how you develop and improve the content in the future!
Thanks for the post, very interesting initiative! However, this investigation seems to be at least somewhat in tension with other Founders Pledge investigations into "giving later" options such as DAFs. Could you elaborate on how these projects relate and where Founders Pledge's priorities are pointing?
I know this is a late reply to an old comment, but it would be awesome to know to what extent you think you have addressed the issues raised. Or, if you did not address them, what was your reason for discarding them?
I am working through the cause prio literature at the moment, and I don't really feel that 80k addresses all (or even most) of the substantial concerns raised. For instance, the assessments of climate change and AI safety are great examples where 80k's considerations can quite easily be attacked, given conceptual difficulties in the underlying cause prio framework/argument.
Thanks for the counterpoint, I think that's an interesting perspective and valid in the abstract.
Nevertheless, as far as I can tell, in practice these discussions don't seem to focus on assessing whether "other people spend too much now and not enough later" beyond the general assertion that people tend to discount the future and the conclusion that there are therefore opportunities to gain comparatively by investing.
However, what I haven't really seen are good arguments that people are actually spending too much now and not enough later, or models that capture this aspect in some way. In another comment I have outlined in more detail why I think it is important to explicitly consider the "nature" of problem solving when making such analyses and decisions.
Long story short, I think current models of giving now vs. giving later are much too simple, and additional considerations about problem solving in general lead me to believe that giving later should not become "the default" for longtermist giving - at least until we have set up appropriate infrastructure to effectively identify and address problems as they arise. However, I don't want to misrepresent the position of giving-later advocates, who have often acknowledged that giving now in the form of "investments" (as I am suggesting) is somewhat exempt from the discussion. I agree that there might be substantial room for investments as part of wise philanthropic activity; I just don't think it's a winning strategy by itself. Thus, what I mostly disagree with is the framing and emphasis of the debate.
Circling back to my comment on free riding: simply postponing giving into the future under the assumption that other people will figure out what to do by then seems dangerous unless appropriate measures are taken to ensure that actual progress happens at a reasonable rate, as the world could also become much worse (e.g., through climate change). However, postponing giving makes the individual who postpones comparatively better off in the future, which would be a plus. Thus, there is an interesting dilemma here, where altruists who are not 100% aligned could come into conflict about who should invest when, and how much, to maximize overall expected value.
To avoid such conflicts as much as possible, care should be taken to communicate why specific decisions to give now or later were made and how this is expected to affect the community as a whole. For instance, I would expect an organization considering giving later at a large scale, like Founders Pledge, to clearly articulate its strategy and what the EA community can expect from it now and in the future, in a way that can be checked for value alignment over time. Otherwise, it seems entirely plausible that opaque behavior could be perceived as free riding on the investments of the community as a whole.
To me, that notion actually seems a little paradoxical: giving later seems to imply that there will be better opportunities in the future, but at the same time we seem to expect less giving then. Economics 101 would suggest that better opportunities should attract more buyers. Thus, wouldn't we need some other type of argument, one that considers the nature of the problem in question, to justify giving later?
Thank you for raising some additional considerations against giving later. I think this is really valuable for the ongoing discussion, which seems strongly tilted in favor of investing and giving later.
Even beyond your argument about movement growth, there seem to be many other intuitive considerations where similar arguments could be made. For instance, consider that "converting" longtermists is an activity constrained not only by money but also by time and room for growth.
You need time to convert dollars into results, given that room for more funding is generally strongly limited by the current allocation of resources in the world. I would guess one could model this as a game where at each time point t you can effectively invest an amount x into cause y, where x is a function of the cumulative money already spent on cause y. It might be plausible to model this as a Gaussian function (i.e., a bell curve), where money invested in the beginning leads to strong growth in room for more funding in the next round, which then declines again as full saturation (i.e., all money that could reasonably be spent is spent) is approached. Interestingly, this is an argument both for giving now and for giving later, as at any point there is limited room where money can be spent effectively.
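To make the idea concrete, here is a minimal sketch of such a game in Python. All names and parameter values (`peak`, `width`, `scale`, the per-round budget) are purely illustrative assumptions, not estimates of any real cause:

```python
import math

def absorption_capacity(cumulative, peak=50.0, width=20.0, scale=10.0):
    """Room for more funding this round, modeled as a bell curve over the
    cumulative money already spent on the cause: small at first (little
    infrastructure exists), largest mid-way, near zero at saturation.
    All parameters are made-up for illustration."""
    return scale * math.exp(-((cumulative - peak) ** 2) / (2 * width ** 2))

def simulate(budget_per_round, rounds=20):
    """Each round, spend up to the cause's current absorption capacity;
    any unspendable budget is simply left out of this toy model."""
    cumulative = 0.0
    spent_per_round = []
    for _ in range(rounds):
        capacity = absorption_capacity(cumulative)
        spent = min(budget_per_round, capacity)
        cumulative += spent
        spent_per_round.append(spent)
    return cumulative, spent_per_round

total, trajectory = simulate(budget_per_round=8.0)
```

Even this toy version shows the dynamic I mean: early spending is capacity-limited but grows the capacity of later rounds, so neither "give everything now" nor "give everything later" is obviously optimal.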
Going beyond this "simple" view, it would also be interesting to model how problems grow over time as they go unaddressed. The most obvious example is climate change: if a US president in the 80s could somehow have been convinced to shift policy towards renewables, the problem would likely have required far fewer resources overall. This indicates that the money required to solve a problem is a function of when it is discovered and how many resources are directed to it over time.
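The simplest version of this "problems compound while unaddressed" point is exponential growth in the required resources. A sketch, again with entirely made-up numbers (`growth_rate`, `base_cost` are assumptions for illustration):

```python
def required_resources(start_year, growth_rate=0.05, base_cost=100.0):
    """Total resources needed to solve a problem if serious work starts
    start_year years after the problem emerges, assuming the problem
    compounds at growth_rate per year while it goes unaddressed.
    Parameters are illustrative, not calibrated to any real cause."""
    return base_cost * (1 + growth_rate) ** start_year

# At 5% annual growth, delaying the start by 40 years multiplies the
# bill roughly sevenfold.
ratio = required_resources(40) / required_resources(0)
```

Under such a model, waiting has a compounding cost that would have to be weighed against the compounding returns of invested donations.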
I am not a mathematician, but if any of this is remotely plausible, I am not sure the thinking so far has considered such complications (at least I haven't seen models that capture these things, though I also haven't searched in depth). My intuition tells me that integrating such considerations could radically tip the balance toward a strong preference for giving as early as reasonable, and provide a good argument for investing in infrastructure that would help us identify and address problems effectively as they emerge.
This could be an interesting topic for a PhD student with simulation chops. Or even a benchmarking platform where different agent strategies can compete against each other.
See Ketter, W., Peters, M., Collins, J., and Gupta, A. 2016. "Competitive Benchmarking: An IS Research Approach to Address Wicked Problems with Big Data and Analytics," MIS Quarterly (40:4), p. 34.
Thanks for the post; it is interesting to see how other people are thinking about this question, and I see it as valuable, although I am also somewhat critical of the whole endeavor.
Maybe I am too naive or not thinking deeply enough, but with all of these giving now vs. giving later discussions I am somewhat worried about the mindset underlying such considerations. While I appreciate people investing time and resources into trying to understand how to have the biggest impact, taking only the perspective of a single investor comes across as somewhat narrow-minded and selfish. What you basically seem to be calculating is the optimal degree of free riding you can get away with to maximize the impact of your own dollars. Maybe it's good to know where that optimal point lies, but I am somewhat worried about this becoming the underlying philosophy of longtermist giving.
For instance, longtermism is itself a rather new idea, and people thinking about how to invest as little as possible seems... rational to some degree, yes, but also pretty risky in terms of ensuring success, given the many ways things can fail in our world. I note that "capacity building" interventions are often explicitly excluded from these giving-later considerations, but giving off a general vibe of "let's free ride as much as possible" doesn't bode well for such initiatives either. There is something like image, perception, and momentum, and it really feels like this is strongly neglected in these kinds of discussions.
Having said that, I am in favor of longtermist thinking, but I would encourage taking a broader "community-level" perspective. Wouldn't it be more effective to think about optimal rates of investment into community growth, then look for ways to reach those numbers and distribute the burden fairly, rather than focusing on the best outcome for an individual investor and only then circling back to what this means for the community? After all, your whole calculation depends on the possible return on investment from giving now vs. giving later. If we don't have a clear sense of what that RoI is right now, how can you make good individual decisions?
Open to being shown the errors in my thinking!
Some simple but possibly relevant considerations against patient philanthropy that come to my mind are:
That's not to say it isn't worthwhile to explore ways one can profit from patience, but I would personally prefer a term like "wise philanthropy" as a more appropriate goal that reflects a more holistic perspective.
Thanks for writing this post, very interesting! I haven't read all of the comments but wanted to share one point that came to me over and over again while reading the post. Apologies if it has already been mentioned in another comment.
It seems like you assume a strong (and relatively simple) causal chain from genetics to malevolent traits to bad behavior. I think this view might make the problem seem more tractable than it actually is. Humans are complex systems nested in other complex systems, and everything is driven by complex, interacting feedback loops. Thus, it seems very difficult to untangle causality here. To me it would be much more intuitive to think about malevolence as a dynamic phenomenon that emerges from a history of interactions, rather than as a static personality trait. If you accept this characterization as plausible, the task of screening for malevolence in a valid and reliable way seems much harder than just designing a better personality test. The main difference between the two perspectives is that in the simple case you have a lot of corner cases to keep in mind (e.g., what if people have malevolent traits but actually want to be good people?), whereas the complex case is more holistic but also much more, well, complex, and likely less tractable.
Nevertheless, I agree with the general premise of the post that mental health is an important aspect of X/S-risk-related activities. I would go even further and argue that mental health in the context of X/S-risk-related activities in general is a very pressing cause area that would score quite well in an ITN analysis. Thus, I would really love to see an organization or network set up and dedicated to seriously exploring this area, because existing efforts in the mental health space seem to focus only on happiness in the context of global development. If someone interested in this topic reads this, don't hesitate to reach out; I would love to support such efforts.
Thank you for the pointer. I updated the post to correct the typo.