I really appreciated this post and its sequel (and await the third in the sequence)! The "second mistake" was totally new to me, and I hadn't grasped the significance of the "first mistake". The post did persuade me that the case for existential risk reduction is less robust than I had previously thought.
One tiny thing. I think this should read "from 20% to 10% risk":
...More rarely, we talk about absolute reductions, which subtract an absolute amount from the current level of risk. It is in this sense that a 10% reduction in risk takes us from 80% to 7
Thanks for writing this! Hoping to respond more fully later.
In the meantime: I really like the example of what a "near-term AI-Governance factor collection" could look like.
So the question is 'what governance hurdles decrease risk but don't constitute a total barrier to entry?'
I agree. There are probably some kinds of democratic checks that honest UHNW individuals wouldn't mind, but which would yield relatively big improvements for epistemics and community risk. Perhaps there are ways to add incentives for agreeing to audits or democratic checks? It seems like SBF's reputation as a businessman benefited somewhat from his association with EA (though I am not too confident in this claim). Perhaps offering some kind of "Super Effective Philanthr...
I think this is a great post, efficiently summarizing some of the most important takeaways from recent events.
I think this claim is especially important:
"It’s also vital to avoid a very small number of decision-makers having too much influence (even if they don’t want that level of influence in the first place). If we have more sources of funding and more decision-makers, it is likely to improve the overall quality of funding decisions and, critically, reduce the consequences for grantees if they are rejected by just one or two major funders."
He...
Here's a sketchy idea in that vein for further consideration. One additional way to avoid extremely wealthy donors having too much influence is to try to insist that UHNW donors subject their giving to democratic checks on their decision-making from other EAs.
Fwiw, if I were a UHNW individual (which I am not, to be clear), this would make me much less receptive to EA giving and would plausibly put me off entirely. I would guess this is more costly than it's worth?
This comment seems to be generating substantial disagreement. I'd be curious to hear from those who disagree: which parts of this comment do you disagree with, and why?
Hi Cesar! You might be interested to check out the transparency page for the Against Malaria Foundation: https://www.againstmalaria.com/transparency.aspx
I'd be interested in surveying on whether people believe that AI [could presently/might one day] do a better job governing the [United States/major businesses/US military/other important institutions] than [elected leaders/CEOs/generals/other leaders].
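The bracketed factors above pair off naturally (institutions with their corresponding leaders), crossed with the two timeframes. A minimal sketch of how the question variants could be enumerated, with wording and pairings that are my own illustration:

```python
# Hypothetical operationalisation of the bracketed survey factors.
timeframes = ["could presently", "might one day"]
domain_pairs = [
    ("the United States", "elected leaders"),
    ("major businesses", "CEOs"),
    ("the US military", "generals"),
]

# 2 timeframes x 3 institution/leader pairs = 6 question variants.
questions = [
    f"Do you believe AI {when} do a better job governing {inst} than {lead}?"
    for when in timeframes
    for inst, lead in domain_pairs
]
```

The "other important institutions"/"other leaders" options would extend `domain_pairs` with whatever additional pairings the survey designers settle on.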
I don't think this is true. Dunbar's number is a limit on the number of social relationships an individual can cognitively sustain. But the sorts of networks needed to facilitate productive work are different from those needed to sustain fulfilling social relations. If there is a norm that people are willing to collaborate productively with the unknown contact of a known contact, then surely you can sustain a productive community of approximately Dunbar's number squared people (if each member of my Dunbar-sized community has their own equivalently sized community with no shared members).
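The arithmetic behind the squaring claim, under the no-overlap assumption stated above:

```python
DUNBAR = 150  # rough cognitive limit on stable social relationships

# Assumption from the comment: each of my ~150 contacts maintains their
# own ~150-person network sharing no members with mine. Under a
# friend-of-a-friend collaboration norm, the reachable productive
# community is then roughly:
reachable = DUNBAR ** 2  # 22,500 people
```

In practice networks overlap heavily, so this is an upper bound on the two-hop reach rather than a realistic estimate.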
Thanks for contributing this critique, your invitation for argument, and your open-mindedness!
I think one important inequality in the distribution of power is that between presently living people and future generations. The latter have not only no political power, but no direct causal power at all. While we might decry a world where we have to persuade or compel billionaires -- or seek to become billionaires ourselves -- to have much hope at large-scale influence, these tools are much better than anything future generations have got. Our ...
Yes, and how many people we project will have this association in the future. I think it's reasonably likely that this view will pick up steam among vaguely activisty people on college campuses in the next five years. That's an important demographic for growing EA.
Great piece, I thought. I think Carrick Flynn's loss may have been due in no small part to accidentally cultivating a white crypto-bro aesthetic. If that's right, it is a case of aesthetics mattering a fair amount. Personally, I'd like to see EA do more to avoid donning this aesthetic, which anecdotally seems to turn a lot of people off.
Hello Zachary,
I don't think the meaning of aesthetics that Etienne explores in this post really applies to Carrick Flynn's campaign. Aesthetics are a more replicable, cohesive, and norm-driven way of thinking about appearances. Carrick's campaign may have garnered a poor public perception based on its proximity to, and appearance of being, white crypto-bro. However, I don't think this has to do with an aesthetic he cultivated, but rather with a public image. The aesthetic of the campaign would have been things like graphic design choices, our media selection, and the r...
I'd be a little bit concerned by this. I think there's a growing sentiment among young people (especially on university campuses) that classicism is aesthetically regressive: retrograde, old-white-man stuff. Here's a quote from a recent New York Times piece:
"Long revered as the foundation of “Western civilization,” [classics] was trying to shed its self-imposed reputation as an elitist subject overwhelmingly taught and studied by white men. Recently the effort had gained a new sense of urgency: Classics had been embraced by the far right, whose memb...
I'm curious whether community size, engagement level, and competence might matter less than the general perception of EA among non-EAs.
Not just because low general positive perception of EA makes it harder to attract highly engaged, competent EAs. But also because general positive perception matters even if it never results in conversion. General positive perception increases our ability to cooperate with and influence non-EA individuals and institutions.
Suppose an aggressive community building tactic attracts one HEA, of average competence. In addit...
I'm currently evaluating the feasibility and expected value of building a proxy voting advisory firm that would make EA-aligned voting recommendations. Would love to meet with you or anyone with expertise.
I think the virtues of moral expansiveness and altruistic sympathy for moral patients are really important for EAs to develop, and I think being vegan increased my stock of these virtues by reversing the "moral dulling" effect you postulate. (This paper makes the case for utilitarians to develop a set of similar virtues: https://psyarxiv.com/w52zm.) I've also developed a visceral disgust response to meat as a result of being vegan, which is for me probably inseparable from the motivating feeling of sympathy for animals as moral patients.
When I was a ...
If a community claims to be altruistic, it's reasonable for an outsider to seek evidence: acts of community altruism that can't be equally well explained by selfish impulses, like financial reward or desire for praise. In practice, that seems to require that community members make visible acts of personal sacrifice for altruistic ends. To some degree, EA's credibility as a moral movement (that moral people want to be a part of) depends on such sacrifices. GWWC pledges help; as this post points out, big spending probably doesn't.
One shift that might help is...
This is a very interesting point that, for me, reinforces the importance of keeping effective giving prominent in EA. It is both a good thing and a defence against accusations of self-serving wastefulness if a lot of people in the community are voluntarily sacrificing some portion of their income (with the usual caveat 'if you have actual disposable income').
GWWC, OFTW, etc. may be doing EA an increasing favour by enlisting a decent proportion of the community to be altruistic.
It's also noticeable that giving seems to be least popular with longtermists, who also seem to be doing the most lavish spending.
Proportional Chances Voting is basically equivalent to a mechanism where one vote is selected at random to be the deciding vote, as Newberry and Ord note in a footnote (they refer to it as "Random Dictator"; I've also seen it described as "lottery voting"). Newberry and Ord do say that Proportional Chances is supposed to be different because of the negotiation period, but I don't see how Random Dictator is incompatible with negotiation.
Anyway, some of the literature on this mechanism may be of interest here, given footnotes 8-9. This paper propos...
To the extent average utilitarianism is motivated by avoiding the Repugnant Conclusion, I suspect that most average utilitarians would be as disturbed by aggregating over time as they are by aggregating within a generation, since we can establish a Repugnant Conclusion over times pretty straightforwardly. That said, to the extent intuitions differ when we aggregate over times, I can see that this could pose a challenge to average utilitarians.
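A minimal worked case of the over-times version, with my own illustrative numbers, assuming the averagist averages within each generation but sums the per-generation averages across time:

```latex
% World A: one generation with average welfare 100.
% World Z: 1000 generations, each with average welfare 1.
W(A) = \bar{u}_1 = 100, \qquad
W(Z) = \sum_{t=1}^{1000} \bar{u}_t = 1000 \cdot 1 = 1000 > W(A)
```

So this hybrid view prefers Z, a history of many generations with lives barely worth living, over one flourishing generation: the Repugnant Conclusion reappears across times rather than within one.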
I can't recall any work on this argument off the top of my head, but I did recently come across a hint of a related...
I'm in the early stages of corporate campaign work similar to what's discussed in this post. I'm trying to mobilise investor pressure to advocate for safety practices at AI labs and chipmakers. I'd love to meet with others working on similar projects (or anyone interested in funding this work!). I'd be eager for feedback.
You can see a write-up of the project here.