Samo Burja is a sociologist and longtermist who runs the consulting firm Bismarck Analysis in San Francisco. He has a lot of interesting opinions about civilisation, institutions, power, and how to make an impact. His opinions, if true, could have significant implications for how EA's priorities should shift. Unfortunately, many of them are not backed by the kind of rigorous evidence that would make it easy to convince EAs, but they're not unbacked either, and all seem like they could plausibly be true to me. (You could also make a case that the topics he discusses are not of the kind where you get hard, deterministic evidence.)

I've tried summarising the implications of his opinions for EA in this post to the best of my abilities, to try and get more EA engagement with them. I've tried to avoid adding too much of my own interpretation, but some of that is inevitable, so please correct me if I've gotten anything wrong.

Sources

Most of the content here is drawn from the following sources, which also contain more evidence (mostly historical anecdotes). If you have the time, I'd recommend going through the sources directly.

Great Founder Theory manuscript (GFT)

Samo Burja's Twitter

Bismarck briefs (paywalled)

Opinions

On wealth as a source of power

Samo Burja believes wealth is significantly overrated as a source of power, and that most sources of power have been captured by bureaucracies - be they governments themselves, companies with regulatory moats, academia, and so on - such that wealth cannot be used to purchase these sources of power.

He has also criticised Bill Gates' market-based approach to philanthropy and his lack of attempts at "transformative change" - an approach that seems very similar to a lot of EA's current one.

This seems increasingly relevant as EA becomes less funding-constrained and closer to Bill Gates-scale.

On reform from the inside

Samo Burja believes that reform from inside a bureaucracy - for instance, by pursuing a career in public policy - is mostly futile, as bureaucracies are designed to be resistant to such change. He is more pessimistic than 80,000 Hours on this matter. He believes that most institutions are not functional, and hence cannot be made significantly more adaptable even from the top, let alone from the bottom. I therefore assume he is not very optimistic about the whole EA cause area of "improving institutional decision-making".

On innovation in social technology

He is, however, optimistic about innovation in new social technologies and the building of new institutions. He believes that there are very few functional institutions, and that most institutions are attempts at mimicking these functional ones. He believes innovation in social technology is highly undersupplied today, and that individual founders have a significant shot at building new institutions. He also believes that when such innovation happens, civilisation makes jumps in logistical complexity and scale in very short periods of time - that this has happened in the past and is possible today. In short, this is very high impact, and deserves many more people working on it than currently are.

(source: "functional institutions are the exception" in GFT)

What does this innovation entail?

He hasn't exactly published blueprints for new institutions, but he has published many disjointed points that could be combined to design new functional institutions.

Some points that he has mentioned include:

 - status engineering - redirecting social status towards productive ends (for instance, Elon Musk making engineers high status)

 - opportunities for people in positions of power to appoint their successors and form deep mentoring relationships (also this tweet; see also "the succession problem" in GFT)

 - less credential-based gating, especially when such credentials are offered in a systematised fashion

 - institutions that can support and genuinely reward truth-seeking, both socially and economically. The EA and rationality communities attempt this, but he probably believes one could design for it more consciously and ensure the tradition lasts long-term

 - ensuring convergence between status, power, and responsibility? (a Palladium article, not by him)

 - better tools for knowledge transfer and preservation (see "intellectual dark matter", for instance)

On technology in the power landscape

He believes technology creates new elites, who integrate with old elites instead of replacing them. Failed integration can mean the technology never ends up in society, which is a huge loss; successful integration, however, could mean the potential for significant new reforms.

So while he doesn't fully buy into Silicon Valley-style technological determinism, where the technology itself directly grants power to the technologists, he believes that technology creates new elites who gain and wield more conventional forms of power - which they can then use to bring change. This seems like an interesting vector that EA could use to push for reforms.

On fragility of institutions

He believes that the lack of functional institutions, combined with their significant dependence on each other, creates a systemic risk that society will lose significant technologies and capabilities. I suspect he sees this from a more longtermist frame, in which functional institutions should attempt to safeguard these capabilities for the long term - as opposed to, say, an AI researcher's frame that assumes we'll deploy aligned AI this century with high probability, after which all this won't matter.

Thoughts?

Do let me know if I could provide any additional information, or make this discussion more meaningful or better scoped in any way.

8 comments

Thanks very much for putting this together. This section stood out to me —

He is, however, optimistic about innovation in new social technologies and the building of new institutions. He believes that there are very few functional institutions, and that most institutions are attempts at mimicking these functional ones. He believes innovation in social technology is highly undersupplied today, and that individual founders have a significant shot at building new institutions. He also believes that when such innovation happens, civilisation makes jumps in logistical complexity and scale in very short periods of time - that this has happened in the past and is possible today. In short, this is very high impact, and deserves many more people working on it than currently are.

Makes me think of some of the work of RadicalxChange, and also 80k's recent interview with Audrey Tang. Curious what Samo's take might be on either of those things.

Interesting thoughts! Apart from the sections finm mentioned, this one stood out to me as well:

status engineering - redirecting social status towards productive ends (for instance on Elon Musk making engineers high status)

I think this is something the EA community is doing already, and maybe could/should do even more. Many of my smartest (non-EA) friends from college work in rent-seeking sectors or sectors with neutral production value (and aren't earning to give). This seems like an incredible waste of resources, since they could instead work on the most pressing problems.

One interesting question: are there tractable ways to do status engineering with the large group of talented non-EAs? I think this could be worth doing, because obviously not all incredibly smart people are part of, or want to be part of, the EA community.

I believe Sam Harris is working on an NFT project for people who have taken the GWWC pledge, so that would be one example.

Academia seems like the highest-leverage place one could focus on. Universities are to a large extent social status factories, so aligning the status conferred by academic learning and research with EA objectives (for example, by creating an 'EA University') could be very high impact. This also relates to the point about institutions.

An EA university sounds like a great idea! It also makes sense, since a lot of EA work is research.

There's so much that could be experimented with, including all the things academics say they dislike about academia but haven't found alternatives to:

 - alternatives to publishing pressure and peer-review methods, and to the culture that is downstream of these incentives

 - new forms of relationships (including deeper ones) between those learning and those being learned from - potentially less rigid or systematised, or systematised in a different fashion than academia

 - newer ways to allocate the material base - be it co-living spaces or different forms of tenure than the conventional one

I'll try making a post about it sometime, but I do feel influenced by Samo's work; I wish there were places to discuss and experiment with this stuff.

Thanks for making this list. 

I'll be recording a podcast with Samo on the 9th of March. We'll discuss these themes, as well as the consequences of, and possible solutions to, underpopulation.

Nice, I will check it out if I can!

Thank you for having Samo on the podcast, Gus. I find him tremendously insightful, and I eagerly look forward to hearing what he has to say.

Thanks, me too. 

If you have any questions for Samo, you could write them here.