All of Zachary Brown's Comments + Replies

I'm in the early stages of corporate campaign work similar to what's discussed in this post. I'm trying to mobilise investor pressure to advocate for safety practices at AI labs and chipmakers. I'd love to meet with others working on similar projects (or anyone interested in funding this work!). I'd be eager for feedback.

You can see a write-up of the project here.

  • Frankenstein (Mary Shelley): moral circle expansion to a human created AI, kinda.
  • Elizabeth Costello (J. M. Coetzee): novel about a professor who gives animal rights lectures. The chapter that's most profoundly about animal ethics was published separately as "The Lives of Animals", printed with commentary from Peter Singer (in narrative form!).
  • Darkness at Noon (Arthur Koestler): Novel about an imprisoned old Bolshevik reflecting on his past revolutionary activity. Interesting reflections on ends-vs-means reasoning, and on weighing moral scale / the numbers affected against personal emotional connection in moral tradeoff scenarios.

I really appreciated this post and its sequel (and I await the third in the sequence)! The "second mistake" was totally new to me, and I hadn't grasped the significance of the "first mistake". The post did persuade me that the case for existential risk reduction is less robust than I had previously thought.

One tiny thing. I think this should read "from 20% to 10% risk":

More rarely, we talk about absolute reductions, which subtract an absolute amount from the current level of risk. It is in this sense that a 10% reduction in risk takes us from 80% to 7

... (read more)
5
David Thorstad
9mo
Whoops, thanks!

Thanks for writing this! Hoping to respond more fully later. 
In the meantime: I really like the example of what a "near-term AI-Governance factor collection" could look like. 

So the question is 'what governance hurdles decrease risk but don't constitute a total barrier to entry?'

I agree. There are probably some kinds of democratic checks that honest UHNW individuals wouldn't mind, but that would bring relatively big improvements for epistemics and reductions in community risk. Perhaps there are ways to add incentives for agreeing to audits or democratic checks? It seems like SBF's reputation as a businessman benefited somewhat from his association with EA (I am not too confident in this claim). Perhaps offering some kind of "Super Effective Philanthr... (read more)

I think this is a great post, efficiently summarizing some of the most important takeaways from recent events.

I think this claim is especially important: 

"It’s also vital to avoid a very small number of decision-makers having too much influence (even if they don’t want that level of influence in the first place). If we have more sources of funding and more decision-makers, it is likely to improve the overall quality of funding decisions and, critically, reduce the consequences for grantees if they are rejected by just one or two major funders."

He... (read more)

Here's a sketchy idea in that vein for further consideration. One additional way to avoid extremely wealthy donors having too much influence is to try to insist that UHNW donors subject their giving to democratic checks on their decision-making from other EAs.

Fwiw, if I were a UHNW individual (which I am not, to be clear), this would make me much less receptive to EA giving and would plausibly put me off entirely. I would guess this is more costly than it's worth?

7
Marcus Rademacher
1y
While I won't necessarily endorse your specific governance proposals here, since I think a good strategy warrants serious thought, I like your goals, and I wholeheartedly agree that EA needs to consider the impact of letting a small group of UHNW individuals control the direction of the movement. I also agree that the OP is excellent; it's the kind of harder look at the issue I've been scrolling this forum and the EA subreddit hoping to find.

If a person were really on board with EA principles, they should be willing to admit that their own judgement is fallible, so much so that it would be good to relinquish control of the money they're giving away to a larger, more diverse group of people. Certainly the funder could decide on the larger goals (climate change vs. AI safety, etc.), but I find myself questioning the motives of people who can't give up a large amount of control. Was SBF legitimately on board with EA, or was he doing it to launder his image? We may never know for sure, but there's a long history of billionaires doing exactly that through charitable giving. From Carnegie to the Sacklers, and I suspect even the recent announcement from Bezos, this is common practice among UHNW folks.

We as a community need to realize the danger this poses to the movement. Already, there is a negative perception of EA due to the embrace, and sometimes outright worship, of charismatic billionaires who do not live the values that EA is supposed to be pushing: epistemic humility, collaboration, and the belief that every person's interests deserve equal weight. The community's acceptance of billionaires like Elon Musk and Peter Thiel jumps out at me as a giant red flag. I will remain quite skeptical of any UHNW pledges that don't include the following: 1. A transfer of a substantial amount of the pledge to a charitable organization in the short term, and a structured plan for how and when the balance of the pledge will b
4
Jack Lewars
1y
Thanks for this, Zachary. This is an interesting idea and I think it should be discussed in some detail. I am interested, though, in the trade-offs between better governance and the sort of governance that might stop people giving at all.

For example, I saw a good post suggesting a reform: anyone asked to join a new funding vehicle could demand an audit and, if the funder refuses the audit, they should refuse to join, criticise it publicly, and discourage other people from joining. That seems very likely to stop FTX recurring; but also very likely to stop any UHNW investment in EA directly. So the question is 'what governance hurdles decrease risk but don't constitute a total barrier to entry?' I wonder if submitting capital to your proposal seems a bit too much like the latter.

(Incidentally, I realise that asking 'what might a bad actor agree to?' is a slippery slope when deciding on what checks and balances to employ, but I think things like 'mega donors have to have an independent Board with financial and governance expertise, and register a charitable vehicle' is possibly a better balance than 'UHNWs need to let the crowd vet their giving decisions.')

This comment seems to be generating substantial disagreement. I'd be curious to hear from those who disagree: which parts of this comment do you disagree with, and why?

9
Peter
1y
Not sure but I think the Flynn campaign result was more likely an outcome of the fundamentals of the race: a popular, progressive, woman of color with local party support who already represented part of the district as a state rep and helped draw the new congressional district was way more likely to win over someone who hadn't lived there in years and had never run a political campaign before. 

Hi Cesar! You might be interested to check out the transparency page for the Against Malaria Foundation: https://www.againstmalaria.com/transparency.aspx  

1
Cesar Scapella
1y
Hi Zachary, Yes! I am aware that AMF is considered one of the best examples of transparency in the EA community. I glanced at it before, but I will now take a closer look at their page. They definitely show a higher standard of transparency compared to other organizations, but I am still not sure they provide what can be called true transparency (one that could be independently verified and does not demand faith-based trust). But I may be wrong. I don't have enough information to build strong conclusions yet; that is why I will spend some time reading their website and, hopefully, if I have time, I will share my findings here. Thanks.

I'd be interested in surveying on whether people believe that AI [could presently/might one day] do a better job governing the [United States/major businesses/US military/other important institutions] than [elected leaders/CEOs/generals/other leaders].

I don't think this is true. Dunbar's number is a limit on the number of social relationships an individual can cognitively sustain. But the sorts of networks needed to facilitate productive work are different from those needed to sustain fulfilling social relationships. If there is a norm that people are willing to collaborate productively with the unknown contact of a known contact, then surely you can sustain a productive community of approximately Dunbar's number squared (if each member of my Dunbar-sized community has their own equivalently-sized community with no shared members). 
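As a toy calculation (assuming the commonly cited Dunbar's number of roughly 150, and the unrealistic simplification that contact circles don't overlap), the friend-of-a-friend norm looks like this:

```python
# Toy model: reachable network size under a friend-of-a-friend
# collaboration norm. Assumes Dunbar's number is ~150 and that
# each contact's circle shares no members with mine (an upper bound).
DUNBAR = 150

direct_contacts = DUNBAR                 # people I know directly
friends_of_friends = DUNBAR * DUNBAR     # each contact brings their own circle

print(direct_contacts)      # 150
print(friends_of_friends)   # 22500
```

Even with heavy overlap between circles in practice, the reachable pool is far larger than Dunbar's number itself.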

9
Stefan_Schubert
2y
Dunbar's number has received scholarly criticism.

Thanks for contributing this critique, your invitation for argument, and your open-mindedness! 

I think one important inequality in the distribution of power is that between presently living people and future generations. The latter have not only no political power, but no direct causal power at all. While we might decry a world where we have to persuade or compel billionaires -- or seek to become billionaires ourselves -- to have much hope at large-scale influence, these tools are much better than anything future generations have got. Our ... (read more)

Yes, and how many people we project will have this association in the future. I think it's reasonably likely that this view will pick up steam among vaguely activisty people on college campuses in the next five years. That's an important demographic for growing EA.

Great piece, I thought. I think Carrick Flynn's loss may in no small part be due to accidentally cultivating a white crypto-bro aesthetic. If that's right, it is a case of aesthetics mattering a fair amount. Personally, I'd like to see EA do more to avoid donning this aesthetic, which anecdotally seems to turn a lot of people off.

Hello Zachary,

I don't think the meaning of aesthetics that Etienne explores in this post really applies to Carrick Flynn's campaign. Aesthetics are a more replicable, cohesive, and norm-driven way of thinking about appearances. Carrick's campaign may have garnered a poor public perception based on the proximity to/appearance of being a white crypto-bro. However, I don't think this has to do with an aesthetic he cultivated, but rather with a public image. The aesthetic of the campaign would have been things like graphic design choices, our media selection, and the r... (read more)

I'd be a little bit concerned by this. I think there's a growing sentiment among young people (especially on university campuses) that classicism is aesthetically regressive, retrograde, old-white-man stuff. Here's a quote from a recent New York Times piece: 

"Long revered as the foundation of “Western civilization,” [classics] was trying to shed its self-imposed reputation as an elitist subject overwhelmingly taught and studied by white men. Recently the effort had gained a new sense of urgency: Classics had been embraced by the far right, whose memb... (read more)

5
Arjun Panickssery
2y
This is very much an online progressives thing, no? In America, the classics are our cultural heritage and carry a lot of respect.

I'm curious whether community size, engagement level, and competence might matter less than the general perception of EA among non-EAs. 

Not just because low general positive perception of EA makes it harder to attract highly engaged, competent EAs. But also because general positive perception matters even if it never results in conversion. General positive perception increases our ability to cooperate with and influence non-EA individuals and institutions.

Suppose an aggressive community building tactic attracts one HEA, of average competence. In addit... (read more)

I'm currently evaluating the feasibility and expected value of building a proxy voting advisory firm that would make EA-aligned voting recommendations. Would love to meet with you or anyone with expertise.

1
samstowers
2y
I don't have expertise here, I'm mostly just a concerned bystander lol. Other comments mention relevant people, you should reach out to them perhaps?

I think the virtues of moral expansiveness and altruistic sympathy for moral patients are really important for EAs to develop, and I think being vegan increased my stock of these virtues by reversing the "moral dulling" effect you postulate. (This paper makes the case for utilitarians to develop a set of similar virtues: https://psyarxiv.com/w52zm.) I've also developed a visceral disgust response to meat as a result of being vegan, which is for me probably inseparable from the motivating feeling of sympathy for animals as moral patients. 

When I was a ... (read more)

If a community claims to be altruistic, it's reasonable for an outsider to seek evidence: acts of community altruism that can't be equally well explained by selfish impulses, like financial reward or desire for praise. In practice, that seems to require that community members make visible acts of personal sacrifice for altruistic ends. To some degree, EA's credibility as a moral movement (that moral people want to be a part of) depends on such sacrifices. GWWC pledges help; as this post points out, big spending probably doesn't.

One shift that might help is... (read more)

This is a very interesting point that, for me, reinforces the importance of keeping effective giving prominent in EA. It is both a good thing in itself and a defence against accusations of self-serving wastefulness if a lot of people in the community are voluntarily sacrificing some portion of their income (with the usual caveats about having actual disposable income).

GWWC, OFTW etc. may be doing EA an increasing favour by enlisting a decent proportion of the community to be altruistic.

It's also noticeable that giving seems to be least popular with longtermists, who also seem to be doing the most lavish spending.

Proportional Chances Voting is basically equivalent to a mechanism where one vote is selected at random to be the deciding vote, as Newberry and Ord note in a footnote (they refer to it as "Random Dictator"; I've also seen it described as "lottery voting"). Newberry and Ord do say that Proportional Chances is supposed to be different because of the negotiation period, but I don't see how Random Dictator is incompatible with negotiation. 
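To illustrate the equivalence, here is a minimal sketch of the Random Dictator / lottery voting mechanism (function and variable names are my own, for illustration): drawing one ballot uniformly at random makes each option win with probability proportional to its vote share, which is exactly the proportional-chances property.

```python
import random

def lottery_vote(ballots, rng=None):
    """Random Dictator / lottery voting: one ballot is drawn uniformly
    at random and decides the outcome. Each option's win probability
    equals its share of ballots, matching Proportional Chances Voting."""
    rng = rng or random.Random()
    return rng.choice(ballots)

# With 60 ballots for A and 40 for B, A wins with probability 0.6.
ballots = ["A"] * 60 + ["B"] * 40
winner = lottery_vote(ballots, random.Random(0))
```

Nothing in this mechanism precludes a negotiation period before the ballots are cast, which is why the two schemes seem equivalent to me.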

Anyway, some of the literature on this mechanism may be of interest here, given footnotes 8-9. This paper propos... (read more)

To the extent average utilitarianism is motivated by avoiding the Repugnant Conclusion, I suspect that most average utilitarians would be as disturbed by aggregating over time as they are by aggregating within a generation, since we can establish a Repugnant Conclusion over times pretty straightforwardly. That said, to the extent intuitions differ when we aggregate over times, I can see that this could pose a challenge to average utilitarians.

I can't recall any work on this argument off the top of my head, but I did recently come across a hint of a related... (read more)