Arepo


Comments

Impact markets may incentivize predictably net-negative projects

Strong disagree. A bioweapons lab working in secret on gain-of-function research for a somewhat belligerent despotic government, which then denies everything after an accidental release, is nowhere near any model I have of 'scrupulous altruism'.

Ironically, the person I mentioned in my previous comment is one of the main players at Anthropic, so your second paragraph doesn't give me much comfort.

Impact markets may incentivize predictably net-negative projects

I'm talking about the unilateralist's curse with respect to actions intended to be altruistic, not the uncontroversial claim that people sometimes do bad things. I find it hard to believe that any version of the lab leak theory involved all the main actors scrupulously doing what they thought was best for the world.

I think we should be careful with arguments that such and such existential risk factor is entirely hypothetical.

I think we should be careful with arguments that existential risk discussions require lower epistemic standards. That could backfire in all sorts of ways, and it leads to claims like one I heard recently from a prominent player, who told me that a claim about artificial intelligence prioritisation for which I had asked for evidence was 'too important to lose to measurability bias'.

Impact markets may incentivize predictably net-negative projects

Is there any real-world evidence of the unilateralist's curse being realised? My sense is that this sort of reasoning has to date been almost entirely hypothetical, and has done a lot to stifle innovation and exploration in the EA space.

Does the Forum Prize lead people to write more posts?

Another vote here against this being a wise metric. Anecdotally, while writing my last post, when (I thought) the prize was still running, I felt both a) incentivised to make the quality as high as I could, and b) less likely to actually post as a consequence (writing to a higher standard takes longer).

And that matches what I'd like to see on the forum - a better signal-to-noise ratio, which can be achieved both by increasing the average quality of posts and by decreasing the number of marginal posts.

How to dissolve moral cluelessness about donating mosquito nets

Unsurprisingly I disagree with many of the estimates, but I very much like this approach. For any analysis of any action, one can divide the premises arbitrarily many times. You stop when you're comfortable that the granularity of the priors you're forming is high enough to outweigh the opportunity cost of further research - which is how any of us manages to take any action at all.

In the case of 'cluelessness', it honestly seems better framed as 'laziness' to me. There's no principled reason why we can't throw a bunch of resources at refining and parameterising cost-effectiveness analyses like these, but afaict GiveWell don't do it because they like to deal in relatively granular priors, and longtermist organisations don't do it because, post-'Beware Surprising and Suspicious Convergences', no-one takes seriously the idea that global poverty research could be a good use of longtermist resources. I think that's a shame, both because it doesn't seem either surprising or suspicious to me that high-granularity interventions could be more effective long-term than low-granularity ones (eg 'more AI safety research') - IMO the planning fallacy gets much worse over longer periods - and because this...

Plausibly what we really need is more emphasis on geopolitical stability, well-being enhancing values, and resilient, well-being enhancing governance institutions. If that were the case, I’d expect the case for altruistically donating bednets to help the less well-off is fairly straightforward.

... seems to me like it should be a much larger part of the conversation. The only case I've seen for disregarding it amounts to hard cluelessness - we 'know' extinction reduces value by a vast amount (assuming we think the future is +EV) - whereas trajectory change is difficult to map out. But as above, that seems like lazy reasoning that we could radically improve if we put some resources into it.

Should large EA nonprofits consider splitting?

I'm really not sure this is true. A market is one way of aggregating knowledge and preferences, but there are others (e.g. democracy). And as in a democracy, we expect many or most decisions to be better handled by a small group of people whose job it is.

This doesn't sound like most people's view on democracy to me. Normally it's more like 'we have to relinquish control over our lives to someone, so it gives slightly better incentives if we have a fractional say in who that someone is'.

I'm reminded of Scott Siskind on prediction markets - while there might be some grantmakers whom I happen to trust, EA prioritisation is exceptionally hard, and I think 'give the community as representative a say in it as it wants to have' is a far better Schelling point than 'appoint a handful of gatekeepers and encourage everyone to defer to them'.

First of all, relevant xkcd.

This seems like a cheap shot. What's the equivalent of system-wide security risk in this analogy? Looking at the specific CEA form example: if you fill out a feedback form at the event, do CEA currently need to share it among their forum, community health, and movement-building departments? If not, then your privacy would actually increase post-split, since the minimum number of people you could usefully consent to sharing it with would have decreased.

Also, what's the analogy where you end up with an increasing number of sandboxes? The worst-case scenario in that respect seems to be 'organisations realise splitting didn't help and recombine to their original state'.

Secondly, this may be true in some aspects but not in others, and I'd still expect overhead to increase, or some things to become much more challenging.

I agree in the sense that overhead would increase in expectation, but a) the gains might outweigh it - IMO higher-fidelity comparison is worth a lot - and b) it also seems like there's a <50% but plausible chance that movement-wide overhead would actually decrease, since splitting would create demand for shared services to help establish small organisations. And that's before considering things like efficiency of services, which I'm confident would increase for the reasons I gave here.

Revisiting the karma system

Fwiw I didn't downvote this comment, though I would guess the downvotes were based on the somewhat personal remarks/rhetoric. I'm also finding it hard to parse some of what you say. 

A system or pattern or general belief that leads to a defect or plausible potential defect (even if there is some benefit to it), and even if this defect is abstract or somewhat disagreed upon.

This still leaves a lot of room for subjective interpretation, but in the interests of good faith, I'll give what I believe is a fairly clear example from my own recent investigations: it seems that somewhere between 20% and 80% of the EA community believe that the orthogonality thesis shows that AI is extremely likely to wipe us all out. This is based on a drastic misreading of an often-cited ten-year-old paper, which is publicly available for any EA to check.

Another odd belief, albeit one which seems more muddled than mistaken, is the role of neglectedness in 'ITN' reasoning. What we ultimately care about is the amount of good done per resource unit, ie, roughly, <importance>*<tractability>. Neglectedness is just a heuristic for estimating tractability in the absence of more precise methods. Perhaps it's a heuristic with interesting mathematical properties, but it's not a separate factor, as it's often presented. For example, in 80k's new climate change profile, they cite 'not neglected' as one of the two main arguments against working on it. I find this quite disappointing - all neglectedness gives us is a weak a priori probabilistic inference, totally insensitive to the type of things the money has been spent on and to the scale of the problem, and that tells us much less about tractability than we could learn by looking directly at the best opportunities to contribute to the field, as Founders Pledge did.
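
To make that concrete, here is a rough sketch of the usual decomposition (my restatement of the standard framing as I understand it; the exact units vary between presentations):

\[
\underbrace{\frac{\text{good done}}{\text{extra dollar}}}_{\text{what we care about}}
=
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
\]

On that framing the neglectedness term is just a unit conversion, roughly 1/(resources already committed); it carries no information about what that money was spent on or how promising the remaining opportunities are, which is exactly the gap that looking directly at the field is meant to fill.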

Also, it seems like you are close to implicating literally any belief?

I don't know why you conclude this. I specified 'belief shared widely among EAs and not among intelligent people in general'. That is a very small subset of beliefs, albeit a fairly large subset of EA ones. And I do think we should be very cautious about a karma system that biases towards promoting those views.

Revisiting the karma system

For those who enjoy irony: the upvotes on this post pushed me over the threshold not only for 6-karma strong upvotes, but for my 'single' upvotes now being double-weighted.

Revisiting the karma system

Often authors mention the issue, but don't offer any specific instances of groupthink, or how their solution solves it, even though it seems easy to do—they wrote up a whole idea motivated by it. 

 

You've seriously loaded the terms of engagement here. Any belief shared widely among EAs but not among intelligent people in general is a candidate for potential groupthink, but precisely because they are shared EA beliefs, if I just listed a few of them I would expect you and most other forum users to consider them not to be groupthink - because things we believe are true don't qualify.

So can you tell me what conditions you think would be sufficient to judge something as groupthink before I try to satisfy you? 

Also, do we agree that if groupthink turns out to be a phenomenon among EAs, then the karma system would tend to accentuate it? Because if that's true, then unless you think the probability of EA groupthink is zero, this is going to be an expected downside of the karma system - so the argument should be about whether the upsides outweigh the downsides, not about whether the downsides exist.

Revisiting the karma system

As a datum, I rarely look beyond the front-page posts, and tbh the majority of my engagement probably comes from the EA Forum Digest recommendations, which I imagine are basically a curated version of the same.
