I'm in the early stages of corporate campaign work similar to what's discussed in this post. I'm trying to mobilise investor pressure to advocate for safety practices at AI labs and chipmakers. I'd love to meet with others working on similar projects (or anyone interested in funding this work!). I'd be eager for feedback.
You can see a write-up of the project here.
Thanks for putting this together! Super helpful.
I really appreciated this post and its sequel (and await the third in the sequence)! The "second mistake" was totally new to me, and I hadn't grasped the significance of the "first mistake". The post did persuade me that the case for existential risk reduction is less robust than I had previously thought.
One tiny thing. I think this should read "from 20% to 10% risk":
More rarely, we talk about absolute reductions, which subtract an absolute amount from the current level of risk. It is in this sense that a 10% reduction in risk takes us from 80% to 70% risk, from 20% to 18% risk, or from 10% to 0% risk. (Formally, relative risk reduction by f takes us from risk r to risk r – f).
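The arithmetic behind the correction can be made concrete. This is a minimal sketch (my own illustration, not from the original post): an absolute reduction subtracts percentage points, while a relative reduction scales the current risk level, which is how 20% ends up at 18% rather than 10%.

```python
def absolute_reduction(risk_pct, points):
    """Absolute reduction: subtract a fixed number of percentage points."""
    return risk_pct - points

def relative_reduction(risk_pct, fraction):
    """Relative reduction: scale risk down by a fraction of its current level."""
    return risk_pct * (1 - fraction)

# An absolute 10-point reduction: 80 -> 70, 20 -> 10, 10 -> 0
print(absolute_reduction(20, 10))    # 10
# A relative 10% reduction: 20 -> 18 (the figure the quoted passage gives)
print(relative_reduction(20, 0.10))  # 18.0
```

On these definitions, the quoted passage's "from 20% to 18%" is a relative reduction, so in a list of absolute reductions it should indeed read "from 20% to 10%".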
Thanks for writing this! Hoping to respond more fully later. In the meantime: I really like the example of what a "near-term AI-Governance factor collection could look like".
So the question is 'what governance hurdles decrease risk but don't constitute a total barrier to entry?'
I agree. There are probably some kinds of democratic checks that honest UHNW individuals wouldn't mind, but that would yield relatively large improvements in epistemics and reductions in community risk. Perhaps there are ways to add incentives for agreeing to audits or democratic checks? It seems like SBF's reputation as a businessman benefited somewhat from his association with EA (I am not too confident in this claim). Perhaps offering some kind of "Super Effective Philanthropist" title/prize/trophy to UHNW donors who agree to subject their donations to democratic checks or financial audits could serve as an incentive? (I'm pretty skeptical, but unsure.) I'd like to do some more creative thinking here.
I wonder if submitting capital to your proposal seems a bit too much like the latter.
I think this is a great post, efficiently summarizing some of the most important takeaways from recent events.
I think this claim is especially important:
"It’s also vital to avoid a very small number of decision-makers having too much influence (even if they don’t want that level of influence in the first place). If we have more sources of funding and more decision-makers, it is likely to improve the overall quality of funding decisions and, critically, reduce the consequences for grantees if they are rejected by just one or two major funders."
Here's a sketchy idea in that vein for further consideration. One additional way to avoid extremely wealthy donors having too much influence is to insist that UHNW donors subject their giving to democratic checks by other EAs. For instance, what if taking a Giving What We Can pledge entitled you to a vote of some kind on certain fund disbursements or other decisions? What if pledgers could put forward "shareholder proposals" on strategic decisions at EA orgs (not necessarily just at GWWC), subject to getting, say, fifty signatures, which other pledgers could then vote on? Obviously there are issues:
But there are advantages too, and I expect that often they outweigh the disadvantages:
This comment seems to be generating substantial disagreement. I'd be curious to hear from those who disagree: which parts of this comment do you disagree with, and why?
Hi Cesar! You might be interested to check out the transparency page for the Against Malaria Foundation: https://www.againstmalaria.com/transparency.aspx
I'd be interested in surveying whether people believe that AI [could presently/might one day] do a better job governing the [United States/major businesses/US military/other important institutions] than [elected leaders/CEOs/generals/other leaders].
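The bracketed slots above define a small factorial design. As a sketch (the wording slots are taken from the comment; the variable names are mine), the full set of item wordings can be enumerated as a Cartesian product:

```python
from itertools import product

# Each list corresponds to one bracketed slot in the proposed survey item.
capabilities = ["could presently", "might one day"]
domains = ["the United States", "major businesses",
           "the US military", "other important institutions"]
comparators = ["elected leaders", "CEOs", "generals", "other leaders"]

items = [
    f"AI {cap} do a better job governing {dom} than {comp}."
    for cap, dom, comp in product(capabilities, domains, comparators)
]

print(len(items))  # 2 * 4 * 4 = 32 distinct item wordings
print(items[0])
```

With 32 cells, respondents would presumably each see only a random subset rather than the full grid.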