konrad


Issues with centralised grantmaking

tl;dr please write that post

I'm very strongly in favor of this level of transparency. My co-founder Max has been doing some work along those lines in coordination with CEA's community health team. But if I understand correctly, they're not that up front about why they're reaching out. Being more "on the nose" about it, paired with a clear signal of support, would be great, because these people are usually well-meaning and can struggle to parse ambiguous signals. Of course, that's a question of qualified manpower - arguably our most limited resource - but we shouldn't let our limited capacity for immediate implementation stand in the way of inching ever closer to our ideal norms.

Update on the Simon Institute: Year One

Thanks very much for highlighting this so clearly, yes indeed. We are currently in touch with one such potential grantmaker. If you know of others we could talk to, that would be great.

The amount isn't trivial at ~600k. Max's salary also guarantees my financial stability beyond the ~6 months of runway I have. It's what has allowed us to make mid-term plans, and what allowed me to quit my CBG.

Are there highly leveraged donation opportunities to prevent wars and dictatorships?

The Simon Institute for Longterm Governance (SI) is developing the capacity to do a) more practical research on many of the issues you're interested in and b) the kind of direct engagement necessary to play a role in international affairs. For now, the focus is on the UN and related institutions, but if SI's growth is sustainable, we think it would be sensible to expand to EU policy engagement.

You can read more in our 2021 review and 2022 plans. We also have significant room for more funding, as we only started fundraising again last month.

Ideas from network science about EA community building

In my model, strong ties are the ones that need the most work because they have the highest payoff. I would suggest that building strong ties generates weak ties even more efficiently than focusing on creating weak ties directly.

This hinges on the assumption that the strong-tie groups are sufficiently diverse to avoid insularity. That seems to be the case at sufficiently long timescales (e.g. 1+ years), as most very homogeneous strong-tie groups eventually fall apart if they're actually trying to do something and not just congratulate one another. That hopefully applies to any EA group.

That's why I'm excited that, especially in the past year, the CBG program seems to be funding more teams in various locations instead of just individuals. And I think those CB teams would do best to build further teams that start projects. The CB teams can then provide services and infrastructure to keep exchange between all the teams going.

This suggests I would run fewer EAGx events (because EAGs likely cover most of that need if CEA scales further) and more local "charity entrepreneurship"-type projects.

Objections to Value-Alignment between Effective Altruists

EAs talk a lot about value alignment and try to identify people who are aligned with them. I do, too. But this is also funny at a global level, given that we don't understand our values and aren't very sure how to reliably understand them much better. Zoe's post highlights that it's too early to double down on our current best guesses and that more diversification is needed to cover more of the vast search space.

Disagreeables and Assessors: Two Intellectual Archetypes

Disclaimer: I have disagreeable tendencies; I'm working on it, but I'm biased. I think you're getting at something useful, even if most people are somewhere in the middle. I think we should care most about the outliers on both sides, because they could be extremely powerful when working together.

I want to add some **speculations** on these roles in the context of the level at which we're trying to achieve something: individual or collective.

When no single agent can understand reality well enough to be a good principal, it seems most beneficial for the collective to consist of modestly polarized agents (this seems true across most of the literature on group decision-making and policy processes, e.g. "Adaptive Rationality, Garbage Cans, and the Policy Process", Emerald Insight).

This means that the EA network should want people who are confident enough in their own worldviews to explore them properly, who are happy to generate new ideas through epistemic trespassing, and who explore outside the Overton window, etc. Unless your social environment productively reframes what is currently perceived as "failure", overconfidence seems basically required to keep going as a disagreeable.

By nature, overconfidence gets punished in communities that value calibration and clear metrics of success. Disagreeables become poisonous as they feel misunderstood, and good assessors become increasingly conservative. The successful members of each archetype build up different communities in which they are high status, and those communities extremize one another.

To succeed altogether, we need to walk the very fine line between productive epistemic trespassing and conserving what we have.

Disagreeables can quickly lose status with assessors because they seem insufficiently epistemically humble or outright nuts. Making your case against a local consensus costs you points. Not being well calibrated on what reality looks like costs you points.

If we are in a sub-optimal reality, however, effort needs to be put into defying the odds and changing reality. To have the chutzpah to change a system, it helps to ignore parts of reality at times. It helps to believe that you can have sufficient power to change it. If you're convinced enough of those beliefs, they often confer power on you in and of themselves.

Incrementally assessing the baseline and then betting on the most plausible outcomes also deepens the tracks we find ourselves on. It is the safe thing to do and stabilizes society. Stability is needed if you want to make sure coordination happens. Thus, assessors rightly gain status for predicting correctly. Yet they also reinforce existing narratives and create consensus about what the future could be like.

Consensus about the median outcome can make it harder to break out of existing dynamics because the barrier to coordinating such a break-out is even higher when everyone knows the expected outcome (e.g. odds of success of major change are low).

In a world where ground truth doesn't matter much, the power of disagreeables is to create a mob that isn't anchored in reality but that achieves the coordination to break out of local realities.

Unfortunately, for those of us whose capabilities are insufficient to achieve our aims - to change not just our local social reality but the human condition - creating a cult just isn't helpful. None of us has sufficient data or compute to do it alone.

To achieve our mission, we will need constant error correction. Plus, the universe is so large that information won't always travel fast enough, even if there were a sufficiently swift processor. So we need to compute decentrally and somehow still coordinate.

It seems hard for single brains to be both explorers and stabilizers simultaneously, however. So as a collective, we need to appropriately value both and insure one another. Maybe we can help each other switch roles to make it easier to understand both. Instead of drawing conclusions for action at our individual levels, we need to aggregate our insights and decide on action as a collective.

As of right now, only very high-status or privileged people really say what they think; most others defer to the authorities to ensure their social survival. At an individual level, that's the right thing to do. But as a collective, we would all benefit if we enabled more value-aligned people to explore, fail, and yet survive comfortably enough to feed their learnings back into the collective.

This is, of course, not just a question of norms, but also one of infrastructure and psychology.

Suffering-Focused Ethics (SFE) FAQ

Thank you (and an anonymous contributor) very much for this!

> you made some pretty important claims (critical of SFE-related work) with little explanation or substantiation

If that's what's causing downvotes in and of itself, I would want to caution people against it - that's how we end up in a bubble.

> What interpretations are you referring to? When are personal best guesses and metaphysical truth confused?

E.g. in his book on SFE, Vinding regularly cites people's subjective accounts of reality in support of SFE at the normative level. He acknowledges that each individual has a limited dataset and biased cognition, but instead of simply sharing his perspective and those of others, he immediately jumps to normative conclusions. I take issue with that; see below.

> Do you mean between "practically SFE" people and people who are neither "practically SFE" nor SFE?

Between "SFE(-ish) people" and "non-SFE people", indeed.

> What do you mean [by "as a result of this deconfusion ..."]?

I mean that, if you assume a broadly longtermist stance, then no matter your ethical theory, you should be most worried about humanity not continuing to exist: life might exist elsewhere, and we're still the most capable species known, so we might be able to help currently unknown moral patients (far away from us in either space or time).

So in the end, you'll want to push humanity's development as robustly as possible to maximize the chances of future good and minimize the chances of future harm. It then seems a question of empirics, or rather epistemics, not ethics, which projects to give which amount of resources to.

In practice, we almost never face decisions where we would be sufficiently certain about the possible results for our choices to be dominated by our ethics. We need collective authoring of decisions and, given moral uncertainty, this decentralized computation seems to hinge on a robust synthesis of points of view. I don't see a need to appeal to normative theories.

Does that make sense?

Suffering-Focused Ethics (SFE) FAQ

I'm intrigued by which part of my comment seems to be dividing reactions. Feel free to PM me with a low-effort explanation. If you want to make it anonymous, drop it here.

Suffering-Focused Ethics (SFE) FAQ

Strong upvote. Most people I have encountered who identify with SFE seem to subscribe to the practical interpretation. The core writings I have read (e.g. much of Gloor & Mannino's or Vinding's work) tend to make normative claims but mostly support them with interpretations of reality that do not at all match mine. I would be very happy if we found a way to avoid confusing personal best guesses with metaphysical truth.

Also, as a result of this deconfusion, I would expect there to be very few to no decision-relevant cases of divergence between "practically SFE" people and others, if all of them subscribe to some form of longtermism or suspect that there's other life in the universe.
