
konrad

426 karma · Joined Apr 2015

Comments (63)

I didn't say Duncan can't judge OP. I'm questioning the judgment.

FWIW, this sounds pretty wrongheaded to me: anonymization protects OP from more distant (mis)judgment, while their entourage is aware that they posted this. That seems like fair game to me, and not at all what you're implying.

We didn't evolve to operate at these scales, so this seems like a good solution.

Dear Nuño, thank you very much for the very reasonable critiques! I had intended to respond in depth, but it has consistently not been the best use of our time. I hope you understand. Your effort is thoroughly appreciated and continues to be integrated into our communications with the EA community.

We have now secured around 2 years of funding and are ramping up our capacity. Until we can bridge the inferential gap more broadly, our blog offers insight into what we're up to. However, it is written for a UN audience and is non-exhaustive, so you may understandably remain on the fence.

Maybe a helpful, more concrete reframe that avoids some of the complications of "interesting vs important" is "pushing the knowledge frontier vs applied work"?

Many of us get into EA because we're excited about crucial-considerations-type things, and too many of us get stuck there: you can currently think about them ~forever, yet doing so contributes practically nothing to securing posterity. Most problems I see beyond AGI safety aren't bottlenecked by new intellectual insights (though those can sometimes still help). And even AGI safety might, in practice, turn out to come down to a leadership and governance problem.

This sounds great. It feels like a more EA-accessible reframe of the core value proposition of Nora's and my post on tribes.

tl;dr please write that post

I'm very strongly in favor of this level of transparency. My co-founder Max has been doing some work along those lines in coordination with CEA's community health team. But if I understand correctly, they're not that up front about why they're reaching out. Being more "on the nose" about it, paired with a clear signal of support, would be great, because these people are usually well-meaning and can struggle to parse ambiguous signals. Of course, that's a question of qualified manpower - arguably our most limited resource - but we shouldn't let our limited capacity for immediate implementation stand in the way of inching ever closer to our ideal norms.

Thanks very much for highlighting this so clearly; yes indeed. We are currently in touch with one such potential grantmaker. If you know of others we could talk to, that would be great.

The amount isn't trivial at ~600k. Max's salary also guarantees my financial stability beyond the ~6 months of runway I have. It's what has allowed us to make mid-term plans and allowed me to quit my CBG.

The Simon Institute for Longterm Governance (SI) is developing the capacity to do a) more practical research on many of the issues you're interested in and b) the kind of direct engagement necessary to play a role in international affairs. For now, the focus is on the UN and related institutions, but if SI's growth proves sustainable, we think it would be sensible to expand into EU policy engagement.

You can read more in our 2021 review and 2022 plans. We also have significant room for more funding, as we only started fundraising again last month.

In my model, strong ties are the ones that need the most work because they have the highest payoff. I would suggest they also generate weak ties more efficiently than focusing on creating weak ties directly.

This hinges on the assumption that the strong-tie groups are sufficiently diverse to avoid insularity. That seems to hold over sufficiently long timescales (e.g. 1+ years): most strong-tie groups that are very homogeneous eventually fall apart if they're actually trying to do something and not just congratulate one another. That hopefully applies to any EA group.

That's why I'm excited that, especially in the past year, the CBG program seems to be funding more teams in various locations instead of just individuals. And I think those CB teams would do best to build more teams that start projects. The CB teams can then provide the services and infrastructure that keep exchange between all the teams going.

This suggests I would do fewer EAGx events (because EAGs likely cover most of that need if CEA scales further) and more local, "charity entrepreneurship"-type things.
