I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.
Thanks for restarting this conversation!
Relatedly, it's also time to start focusing on the increased conflicts of interest and epistemic challenges that an influx of AI industry insider cash could bring. As Nathan implies in his comment, proximity to massive amounts of money can have significant adverse effects in addition to positive ones. And I worry that if and when a relevant IPO or cashout is announced, the aroma of expected funds will not improve our ability to navigate these challenges well.
Most people are very hesitant to bite the hand that feeds them. Orgs may be hesitant to do things that could adversely affect their ability to access future donations from current or expected donors. We might expect that AI-insider donors will disproportionately choose to fund charities that align fairly well with -- or at least are consonant with -- their personal interests and viewpoints.
(I am aware that significant conflicts of interest with the AI industry have existed in the past and continue to exist. But there's not much I can do about that, and the conflict for the hypothesized new funding sources seems potentially even more acute. I imagine that some of these donors will retain significant financial interests in frontier AI labs even if they cash out part of their equity, as opposed to old-school donors who have a lesser portion of their wealth in AI. Also, Dustin and Cari donated their Anthropic stake, which addresses their personal conflict of interest on that front (although it may create a conflict for wherever that donation went)).
For purposes of the rest of this comment, a significantly AI-involved source is someone who has a continuing role at a frontier AI lab, or who still has a significant portion of their wealth tied up in AI-related equity. The term does not include those who have exited their AI-related positions.
What Sorts of Adverse Effects Could Happen?
There are various ways in which the new donors' personal financial interests could bias the community's actions and beliefs. I use the word bias here because those personal interests should not have an effect on what the community believes and says.
Take stop/pause advocacy as an obvious example. Without expressing a view about the merits of such advocacy, significantly AI-involved sources have an obvious conflict of interest that creates a bias against that sort of work. To be fair, it is their choice how to spend their money.
But -- one could imagine the community changing its behavior and/or beliefs in ways that are problematic. Maybe people don't write posts and comments in support of stop/pause advocacy because they don't want to irritate the new funders. Maybe grantmakers don't recommend stop/pause advocacy grants for their other clients because their AI-involved clients could view their money as indirectly supporting such advocacy via funging.
There's also a risk of losing public credibility -- it would not be hard to cast orgs that took AI-involved source funds as something like a lobbying arm of Anthropic equity holders.
What Types of Things Could Be Done to Mitigate This?
This is tougher, but some low-hanging fruit might include:
Anyway, it is this sort of thing that concerns me more than (e.g.) some university student scamming a free trip to some location by simulating interest in EA.
Also, if I were on the low-probability end of a bet, I'd be more worried about the risk of measurement or adjudicator error where measuring the outcome isn't entirely clear-cut. Maybe a ruleset could be devised that is so objective and that captures so well whether AGI exists that this concern isn't applicable. But if there's an adjudication/measurement error risk of (say) 2 percent and the error is equally likely on either side, it's much more salient to someone betting on (say) under 1 percent odds.
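To make that concrete, here is a minimal back-of-the-envelope sketch; the 1 percent credence and the 1-percent-per-side error rate are illustrative assumptions on my part, not figures tied to any actual bet:

```python
# Illustrative only: how a small, symmetric adjudication error rate distorts
# the effective payout probability for someone betting at long odds.

p_true = 0.01    # bettor's credence that the event genuinely occurs
p_error = 0.01   # assumed chance the adjudicator errs in each direction

# Probability the bet is *adjudicated* as "event occurred":
p_adjudicated = p_true * (1 - p_error) + (1 - p_true) * p_error
print(round(p_adjudicated, 4))  # ~0.0198 -- roughly double the 1% credence

# By contrast, the same error barely moves things for an even-odds bettor:
p_even = 0.5
p_adjudicated_even = p_even * (1 - p_error) + (1 - p_even) * p_error
print(round(p_adjudicated_even, 4))  # 0.5 -- essentially unchanged
```

On those assumptions, the symmetric error roughly doubles the low-probability bettor's chance of having to pay out, while leaving the even-odds bettor essentially untouched.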
76% of experts saying it's "unlikely" the current paradigm will lead to AGI leaves ample room for a majority thinking there's a 10%+ chance it will . . . .
. . . . and the field are still mostly against you (at the 10% threshold).
I agree that the "unlikely" statistic leaves ample room for the majority of the field thinking there is a 10%+ chance, but it does not establish that the majority actually thinks that.
I would like to bring back more of the pre-ChatGPT disposition, where people were more comfortable emphasizing their uncertainty while still standing by the expected value of AI safety work.
I think there are at least two (potentially overlapping) ways one could take the general concern that @Yarrow Bouchard 🔸 is identifying here. One, if accepted, leads to the substantive conclusion that EA individuals, orgs, and funders shouldn't be nearly as focused on AI because the perceived dangers are just too remote. An alternative framing doesn't necessarily lead there. It goes something like this: there has been a significant and worrisome decline in the quality of epistemic practices surrounding AI in EA since the advent of ChatGPT. If that framing -- but not the other -- is accepted, it leads, in my view, to a different set of recommended actions.
I flag this because I think the relevant considerations for assessing the alternative framing could be significantly different.
Given EA's small share of the total global health/poverty funding landscape, the most likely effect of its investment in an expensive-but-permanent project is to speed the timetable up. For instance, perhaps a hypothetical vaccine would arrive a year or two earlier with EA investment than without it. So, in comparing the effects of a yearly intervention vs. an expensive-but-permanent one, we are still looking at near-term effects that are relatively similar in nature and thus can be compared.
I don't suggest that is true for all "permanent" interventions, though, so it isn't a complete answer. It also might not scale well to a field in which EA funding is a large piece of the total funding pie.
Some supporters of AI Safety may overestimate the imminence of AGI. It's not clear to me how much of a problem that is?
It seems plausible that there could be significant adverse effects on AI Safety itself. There's been an increasing awareness of the importance of policy solutions, whose theory of impact requires support from outside the AI Safety community. I think there's a risk that AI Safety is becoming linked in the minds of third parties with a belief in AGI imminence, in a way that would seriously, if not irrevocably, damage the former's credibility in the event of a bubble/crash.
One might think that publicly embracing imminence is worth the risk, of course. For example, policymakers are less likely to endorse strong action for anything that is expected to have consequences many decades in the future. But being perceived as crying wolf if a bubble pops is likely to have some consequences.
Hot take behind a semi-veil of ignorance on this year's results: I submit that next year, there should be a modest allocation (~10%) for the best finisher in certain categories if no org in that category makes the top three:
If the main value of the election is eliciting / signaling community preferences, then I think it's helpful to have good information available for each major cause area and also to signal-boost a small (usually upstart) org or two. Guaranteeing a modest pot of money for each sub-winner should improve the quality of the signal.
If the main value of the election is driving engagement, then I think it's helpful to give (almost) everyone one race in which they are more invested in the outcome / feel like there's an option to meaningfully support one org in their preferred cause area.
Re: Nestle in particular, I get the spirit of what you're saying, although see my recent long comment where I try to think through the chocolate issue in more detail. As far as I can tell, the labor-exploitation problems are common to the entire industry, so switching from Nestle to another brand wouldn't do anything to help??
That could be correct. But I think the flip side of "my individual chocolate purchasing decisions aren't very impactful" is that maybe we should defer, under some circumstances, to the people who have thought a lot about these kinds of issues, even if we think their modeling isn't particularly good. Weak modeling is probably better, in expectation, than no modeling at all -- and developing our own models may not be an impactful use of our time. Or stated differently, I would expect the boycott targets identified by weak modeling to be more problematic actors, in expectation, than brands chosen by picking one out of a hat.[1] (This doesn't necessarily apply to boycotts that are not premised on each additional unit of production causing marginal harms.)
Of course, we may not be picking a brand at random -- we may be responding to price and quality differences.
Those who lost money are being repaid in cash based on the value of their crypto when the bankruptcy filing was made. The market was down at that time and later recovered. The victims are not being put in the same place they would have been in absent the fraud.
"Intentional fraud" is redundant since fraud requires intent to defraud. It does not, however, require the intent to permanently deprive people of their property. So a subjective belief that the fraudster would be able to return monies to those whose funds he misappropriated due to the fraudulent scheme is not a defense here.
"[F]raud is a broad term, which includes false representations, dishonesty and deceit." United States v. Grainger, 701 F.2d 308, 311 (4th Cir. 1983). SBF obtained client monies through false, dishonest, and deceitful representations that (for instance) the funds would not be used as Alameda's slush fund. He knew the representations made to secure client funds were false, dishonest, and deceitful. That's enough for the convictions.