konrad

Comments

Dear Nuño, thank you very much for the very reasonable critiques! I had intended to respond in depth, but doing so has repeatedly turned out not to be the best use of my time. I hope you understand. Your effort is thoroughly appreciated and continues to inform our communications with the EA community.

We have now secured around 2 years of funding and are ramping up our capacity. Until we can bridge the inferential gap more broadly, our blog offers insight into what we're up to. However, it is written for a UN audience and is non-exhaustive, so you may understandably remain on the fence.

Maybe a helpful reframe that avoids some of the complications of "interesting vs important" by being a bit more concrete is "pushing the knowledge frontier vs applied work"?

Many of us get into EA because we're excited about crucial-considerations-type things, and too many get stuck there, because you can currently think about them ~forever while practically contributing 0 to securing posterity. Most problems I see beyond AGI safety aren't bottlenecked by new intellectual insights (though those can sometimes still help). And even AGI safety might turn out, in practice, to come down to a leadership and governance problem.

This sounds great. It feels like a more EA-accessible reframe of the core value proposition of Nora's and my post on tribes.

tl;dr please write that post

I'm very strongly in favor of this level of transparency. My co-founder Max has been doing some work along those lines in coordination with CEA's community health team. But if I understand correctly, they're not that up front about why they're reaching out. Being more "on the nose" about it, paired with a clear signal of support, would be great, because these people are usually well-meaning and can struggle to parse ambiguous signals. Of course, that's a question of qualified manpower - arguably our most limited resource - but we shouldn't let our limited capacity for immediate implementation stand in the way of inching ever closer to our ideal norms.

Thanks very much for highlighting this so clearly, yes indeed. We are currently in touch with one such potential grantmaker. If you know of others we could talk to, that would be great.

The amount isn't trivial at ~600k. Max's salary also guarantees my financial stability beyond the ~6 months of runway I have. It's what has allowed us to make mid-term plans, and me to quit my CBG.

The Simon Institute for Longterm Governance (SI) is developing the capacity to do a) more practical research on many of the issues you're interested in and b) the kind of direct engagement necessary to play a role in international affairs. For now, this is with a focus on the UN and related institutions, but if growth is sustainable for SI, we think it would be sensible to expand to EU policy engagement.

You can read more in our 2021 review and 2022 plans. We also have significant room for more funding, as we only started fundraising again last month.

In my model, strong ties are the ones that need the most work because they have the highest payoff. I'd suggest they also generate weak ties more efficiently than focusing on creating weak ties directly does.

This hinges on the assumption that the strong-tie groups are sufficiently diverse to avoid insularity. That seems to be the case on sufficiently long timescales (e.g. 1+ years), as most very homogeneous strong-tie groups eventually fall apart if they're actually trying to do something and not just congratulate one another. That hopefully applies to any EA group.

That's why I'm excited that, especially in the past year, the CBG program seems to be funding more teams in various locations, instead of just individuals. And I think those CB teams would do best to build more teams that start projects. The CB teams then provide services and infrastructure to keep exchange between all teams going.

This suggests I would do fewer EAGx events (because EAGs likely cover most of that need if CEA scales further) and more local "charity entrepreneurship"-type things.

EAs talk a lot about value alignment and try to identify people who are aligned with them. I do, too. But this is also funny at a global level, given that we don't understand our values, nor are we very sure how to understand them much better, reliably. Zoe's post highlights that it's too early to double down on our current best guesses and that more diversification is needed to cover more of the vast search space.

Disclaimer: I have disagreeable tendencies - I'm working on it, but I'm biased. I think you're getting at something useful, even if most people are somewhere in the middle. I think we should care most about the outliers on both sides, because they could be extremely powerful when working together.

I want to add some **speculations** on these roles in the context of the level at which we're trying to achieve something: individual or collective.

When no single agent can understand reality well enough to be a good principal, it seems most beneficial for the collective to consist of modestly polarized agents (this seems true from most of the literature on group decision-making and policy processes, e.g. "Adaptive Rationality, Garbage Cans, and the Policy Process", Emerald Insight).

This means that the EA network should want people who are confident enough in their own worldviews to explore them properly, who are happy to generate new ideas through epistemic trespassing, and who are willing to explore outside of the Overton window, etc. Unless your social environment productively reframes what is currently perceived as "failure", overconfidence seems basically required to keep going as a disagreeable.

By nature, overconfidence gets punished in communities that value calibration and clear metrics of success. Disagreeables become poisonous as they feel misunderstood, and good assessors become increasingly conservative. The successful ones of the two characters build up different communities in which they are high status, and these communities extremize one another.

To succeed altogether, we need to walk the very fine line between productive epistemic trespassing and conserving what we have.

Disagreeables can quickly lose status with assessors because they seem insufficiently epistemically humble or outright nuts. Making your case against a local consensus costs you points. Not being well calibrated on what reality looks like costs you points.

If we are in a sub-optimal reality, however, effort needs to be put into defying the odds and changing reality. To have the chutzpah to change a system, it helps to ignore parts of reality at times. It helps to believe that you can have sufficient power to change it. If you're convinced enough of those beliefs, they often confer power on you in and of themselves.

Incrementally assessing the baseline and then betting on the most plausible outcomes also deepens the tracks we find ourselves on. It is the safe thing to do and stabilizes society. Stability is needed if you want to make sure coordination happens. Thus, assessors rightly gain status for predicting correctly. Yet they also reinforce existing narratives and create consensus about what the future could be like.

Consensus about the median outcome can make it harder to break out of existing dynamics because the barrier to coordinating such a break-out is even higher when everyone knows the expected outcome (e.g. odds of success of major change are low).

In a world where ground truth doesn't matter much, the power of disagreeables is to create a mob that isn't anchored in reality but that achieves the coordination to break out of local realities.

Unfortunately, for those of us with insufficient capabilities to achieve our aims - to change not just our local social reality but the human condition - creating a cult just isn't helpful. None of us has sufficient data or compute to do it alone.

To achieve our mission, we will need constant error correction. Plus, the universe is so large that information won't always travel fast enough, even if there were a sufficiently swift processor. So we need to compute decentrally and somehow still coordinate.

It seems hard for single brains to be both explorers and stabilizers simultaneously, however. So as a collective, we need to appropriately value both and insure one another. Maybe we can help each other switch roles to make it easier to understand both. Instead of drawing conclusions for action at our individual levels, we need to aggregate our insights and decide on action as a collective.

As of right now, only very high status or privileged people really say what they think and most others defer to the authorities to ensure their social survival. At an individual level, that's the right thing to do. But as a collective, we would all benefit if we enabled more value-aligned people to explore, fail and yet survive comfortably enough to be able to feed their learnings back into the collective.

This is of course not just a norms question, but also a question of infrastructure and psychology.
