sapphire

Comments

Critiques of EA that I want to read

There are multiple examples of EA orgs behaving badly that I can't really discuss in public. The community really does not ask for much 'openness'.

Transcript of Twitter Discussion on EA from June 2022

The story is more complicated, but I can't really get into it in public. Since you work at Rethink, you can maybe get the story from Peter. I've maybe suggested too simplistic a narrative before. But you should chat with Peter or Marcus about what happened with Rethink and EA funding.

Transcript of Twitter Discussion on EA from June 2022

https://forum.effectivealtruism.org/posts/3c8dLtNyMzS9WkfgA/what-are-some-high-ev-but-failed-ea-projects?commentId=7htva3Xc9snLSvAkB 

"Few people know that we tried to start something pretty similar to Rethink Priorities in 2016 (our actual founding was in 2018). We (Marcus and me, the RP co-founders, plus some others) did some initial work but failed to get sustained funding and traction so we gave up for >1 year before trying again. Given that RP -2018 seems to have turned out to be quite successful, I think RP-2016 could be an example of a failed project?"

Seems somewhat misleading to leave this out.

The Strange Shortage of Moral Optimizers

DXE Bay is not very decentralized: it's run by the five people in 'Core Leadership'. The leadership is elected democratically, though there is a bit of complexity, since Wayne is influential but not formally part of the leadership.

Leadership being replaced over time is not something to lament. I would strongly prefer more, uhhhh, 'churn' in EA's leadership. I endorse the current leadership quite a bit and am glad that several previous 'Core' members lost their elections.

Note: I haven't been very involved in DXE since I left California. It's really quite concentrated in the Bay.

Transcript of Twitter Discussion on EA from June 2022

If I had to guess, I would predict Luke is more careful than various other EA leaders (mostly because of Luke's ties to Eliezer). But you can look at the observed behavior of OpenPhil/80K/etc., and I don't think they are behaving as carefully as I would endorse with respect to the most dangerous possible topic (besides maybe gain-of-function research, which EA would not fund). It doesn't make sense to write leadership a blank check. But it also doesn't make sense to worry about the 'unilateralist's curse' when deciding whether you should buy your friend a laptop!

Transcript of Twitter Discussion on EA from June 2022

This level of support for centralization and deferral is really unusual; I don't know of any community besides EA that endorses it. I'm aware it's a common position in effective altruism, but the arguments for it haven't been worked out in detail anywhere I know of.

"Keep in mind that many things you might want to fund are in scope of an existing fund, including even small grants for things like laptops. You can just recommend they apply to these funds. If they don't get any money, I'd guess there were better options you would have missed but should have funded first. You may also be unaware of ways it would backfire, and the reason something doesn't get funded is because others judge it to be net negative." 

I genuinely don't think there is any evidence (besides some theory-crafting around the unilateralist's curse) that this level of second-guessing yourself and deferring is effective. Please keep in mind the history of the EA Funds: several funds basically never disbursed their money, and the fund managers explicitly said they didn't have time. Of course things can improve, but this level of deferral is really extreme given the community's history.

Suffice to say, I don't think further centralizing resources is good, nor is making things more bureaucratic. I'm also not sure there is actually very much risk of the 'unilateralist's curse' unless you are being extremely careless. I trust most EAs to be at least as careful as the leadership. Probably the most dangerous thing you could possibly fund is AI capabilities. OpenPhil gave $30M to OpenAI, and the community has been pretty accepting of AI capabilities work. This is way more dangerous than anything I would consider funding!

Transcript of Twitter Discussion on EA from June 2022

That doesn't really engage with the argument. If some other agent is values-aligned and approximately equally capable, why would you keep all the resources? It doesn't really make sense to value 'you being you' so much.

I don't find donor lotteries compelling. I think resources in EA are way too concentrated, and 'deeper investigations' is not enough compensation for making power imbalances even worse.

Transcript of Twitter Discussion on EA from June 2022

I think the Aumann/outside-view argument for 'giving friends money' is very strong. Imagine your friend is about as capable and altruistic as you, but you have way more money. It just seems rational and efficient to make the distribution of resources more even. This argument does not at all endorse giving semi-random people money.
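One way to sketch the intuition (my own formalization, assuming each agent's impact is a concave function $u$ of their resources, i.e. money has diminishing returns):

$$u \text{ concave} \implies u(a) + u(b) \le 2\,u\!\left(\tfrac{a+b}{2}\right)$$

So if two agents are equally capable and values-aligned, total impact $u(a) + u(b)$ is weakly increased by evening out their resources $a$ and $b$. The concavity assumption is doing the work here; if your friend's marginal dollar were somehow worth less than yours, the argument wouldn't go through.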

Solving the replication crisis (FTX proposal)

What was the approximate budget? When I read this, my first thought was: 'Did they ask for a super ton of money and get rejected on that basis?'
