I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.
If advocating now is a pre-requisite to advocating later, advocating now is part of the cost. By opting not to pay it, you aren’t increasing the overall cost-effectiveness of the LGBT rights movement, you’re just juicing your own numbers.
I think that relies on a certain model of the effects of social advocacy. Modeling is error-prone, but I don't think our activist in 1900 would be well-served spending significant money without giving some thought to their model. More often, I think the model for getting stuff done looks like a more complicated version of this: Inputs A and B are expected to produce C in the presence of a sufficient catalyst and the relative absence of inhibiting agents.
Putting more A into the system isn't going to help produce C if the rate limit is being caused by the amount of B available, the lack of the catalyst, or the presence of inhibiting agents. Money is a useful input that is often convertible at various rates into other necessary inputs, and it can sometimes influence catalyst and inhibitor levels, but sometimes it cannot (or can do so only very inefficiently, or at scales beyond the funder's ability to meaningfully influence).
Sometimes for social change, having the older generation die off or otherwise lose power is useful. There's not much our hypothetical activist could do to accelerate that. One might think, for instance, that a significant decline in religiosity and/or the influence of religious entities is a necessary reagent in this model. While one could in theory put money into attempting to reduce the influence of religion in 1900s public life, I think there would be good reasons not to pursue this approach. Rather, I think it could make more sense for the activist to let the broader cultural and demographic changes do some of the hard work for them.
There's also the reality that efforts often decay if there isn't sufficient forward momentum -- that was the intended point of the Pikachu welfare example. Ash doesn't have the money right now to found a perpetual foundation for the cause that will be able to accomplish anything meaningful. If he front-loads the money -- say on some field-building, some research grants, some grants to graduate students -- and the money runs out, then the organizations will fold, the research will grow increasingly out of date, and the graduate students will find new areas to work in.
You can say you only care about providing free hologram entertainment to disadvantaged children, but since holograms are very expensive today, you’ll wait until they’re much cheaper. But shouldn’t you be responsible for making them cheaper? Why are you free-riding and counting on others to do that for you, for free, to juice your philanthropic impact?
The more neutral-to-positive way to cast free-riding is employing leverage. I'm really not concerned about free-riding on for-profit companies, or even on much governmental work (especially things like military R&D, which has led to various socially useful technologies).
That's not an accounting trick in my book -- there are clear redistributive effects here. If I spend my money on basic science to promote hologram technology, the significant majority of the future benefits of my work are likely going to flow to future for-profit hologram companies, future middle-class+ people in developed countries, and so on. Those aren't the benefits I care about, and Big Hologram isn't likely to pay it forward by mailing a bunch of holograms to disadvantaged children (in your terminology, they are going to free-ride off my past efforts).
As a society, we give corporations and similar entities certain privileges to incentivize behavior because a lot of value ends up leaking out to third parties. For example, the point of patents is "To promote the Progress of Science and useful Arts" with the understanding that said progress becomes part of the commons after a specified time has passed. Utilizing that progress after the patent period has expired isn't some sort of shady exploitation of the researcher; it is the deal society made in exchange for taking affirmative actions to protect the researcher's IP during the patent period.
Do you think the general superiority of filtration over Far-UVC is likely inherent to the technologies involved, or would the balance be reasonably likely to change given further development of Far-UVC technologies? In other words, is it something like solar, which used to be rather expensive for the amount of output but improved dramatically with investment, economies of scale, and technological progress?
(Of course, we could improve filter technology as well, although to my uninformed eye it seems to have less room for improvement.)
The scope of what could be considered "patient philanthropy" is pretty broad. My comment doesn't apply to all potential implementations of the topic.
To start with, I'll note the distinction between whether society should allow for patient philanthropy and whether it is a sensible strategy for an individual philanthropist attempting to maximize their own altruistic goals. For what it is worth, I think there should be some significant limits roughly in line with US law on private foundations, and I would close what I see as some loopholes on public charity status (e.g., that donors can evade the intent of the public-charity rules by donating through a DAF, which is technically a public charity and so counts as public support).
But it's not logically inconsistent to favor tightening the rules for everyone and to also think that if society chooses not to do so, then I shouldn't unilaterally disadvantage my preferred cause areas while (e.g.) the LDS church increases its cash hoard.
A Cause Area in Which Yarrow's Arguments Don't Work Well for Me
I think some of these arguments depend significantly on what the donor is trying to do. I'm going to pick non-EA cause areas for the examples to keep the focus somewhat abstract (while also concrete enough for examples to work).
Let's suppose Luna's preferred cause area is freeing dogs from shelters and giving them loving homes. The rational preference argument doesn't work here, and I know of no reason to think that the cost of freeing dogs will increase nearly as fast as the rate of return on investments. I also don't have any clear reason to think that there are shovel-ready interventions today that will have a large enough effect on shelter populations 50 years from now to justify spending a much smaller sum of money now. (Admittedly, I didn't research either claim; please do your own research if you are interested in funding canine rescue.)
Luna does face risk from "operational, legal, political, or force majeure" considerations, as well as the risk of technological or social changes making her original goal ineffective or inefficient. But many of these considerations happen over time, meaning that Luna (or her successors) should be able to sense them and start freeing dogs if their risk of disruption over the next 10-20 years gets too high. More broadly, I think this is an answer to some criticisms -- the philanthropist doesn't have to cabin the discretion of the fund to act as circumstances change (although there are also costs to allowing more discretion).
Donors can invest their own money and deploy it when it is most appropriate.
This sounds like patient philanthropy lite -- with an implied time limit of the rest of the donor's life and a restriction on buying/selling assets, both coming from tax considerations. That addresses some valid concerns with patient philanthropy, but we lose the advantage of having the money irrevocably committed to charitable purposes. I'm not sure how to weigh those considerations.
The Anti-PP Argument Calls for Faith in Future Foundations, Donors, and Governments
For the reserve-fund variants of PP: the patient philanthropist may not want to trust other foundations and governments to react strongly enough to future developments. There's at least some reason to hold such a view, although it may not be enough to justify the practice. I suspect most people think the government generally doesn't do a great job with its funding priorities (although they would have diverging and even contradictory opinions on why that is the case). I am not particularly impressed by the big foundations that have a wide scope of practice (and thus are potentially flexible). While the tendency of foundations to ossify and become bloated is an argument against patient philanthropy, it also counts as an argument against trusting big foundations to move swiftly and decisively in the face of an emerging threat or opportunity. Still, I think this premise would need to be developed and supported further to update my views on reserve-fund PP.
For other forms of PP: The assertion that the future world should rely on current-income donors, traditional foundations, and/or governments may rest on an assumption that the amount of need / opportunity in a given cause area tracks fairly well with the amount of funding available. If something happens in cause area X and it needs 1-2 orders of magnitude more money this year, will the money be forthcoming in short order? I don't have a clear opinion on that (and it may depend on the cause area).
Patient Philanthropy May Work Particularly Well in Some Use Cases
But the exercise of pasting and reading the results is carrying ~the entire argument here. The first two paragraphs basically say that you think we're missing something obvious; the post-prompt material links some reference materials without commentary. The prompt itself conveys instructions to an AI, not your argument to a human reader.
To the extent that the reader is discerning and evaluating your argument, they can only do so through running the prompt and reading the raw AI output. So the content that actually carries the argument is not, in my view, "your content" which you have merely "use[d] an AI to help you compose . . . ." Without the use of the raw AI content, what argument do the four corners of the post convey?
I would frown on someone running a prompt and then pasting the unedited output as the main part of their post. Posting a prompt and then asking the reader to run the prompt and read the output strikes me as essentially the same thing. At least in the first scenario, the nominal user-author has probably at least read the specific output in question.
Although the norms allow users to employ AI assistance in producing content, [1] this exercise goes too far for me. (In my view, heavy reliance on AI can sometimes be okay in the context of comments if the use is disclosed.)
"If you, as a human, use an AI to help you compose your content, which you then post under your own name, that’s fine."
You're right to be concerned about the incentives of cooperators who had their own legal exposure. But those witnesses stood up to days of cross-examination about specifics by SBF's lawyers. Those attorneys had access to documentary evidence with which to try to impeach the witness testimony -- so it's not like the witnesses could just make up a story here.
Those who lost money are being repaid in cash based on the value of their crypto when the bankruptcy filing was made. The market was down at that time and later recovered, so the victims are not being put in the same place they would have been in absent the fraud.
"Intentional fraud" is redundant since fraud requires intent to defraud. It does not, however, require the intent to permanently deprive people of their property. So a subjective belief that the fraudster would be able to return monies to those whose funds he misappropriated due to the fraudulent scheme is not a defense here.
"[F]raud is a broad term, which includes false representations, dishonesty and deceit." United States v. Grainger, 701 F.2d 308, 311 (4th Cir. 1983). SBF obtained client monies through false, dishonest, and deceitful representations that (for instance) the funds would not be used as Alameda's slush fund. He knew the representations made to secure client funds were false, dishonest, and deceitful. That's enough for the convictions.
To clarify, does our "crazy" vote consider all possible causes of crazy, or just crazy that is caused by / significantly associated with AI?