
pseudonym

1773 karma · Joined Oct 2022

Bio

Feel free to DM me anything you want to share but can't or don't want to under your own account(s), and I can share it on your behalf if I think it adds value to the discourse/community.

The corollary is that views shared on this account don't necessarily reflect my own personal views, though they will usually be worded to sound like they are.

Comments (151)

pseudonym
3mo4322

https://ea.greaterwrong.com/posts/NJwqKSbnAgFHogaL2/key-questions-for-digital-minds#comment-S9jjzKf3AaTt62Lja

Leaving this comment up for myself and as a PSA, as the original post by Jacy was deleted shortly after this comment was posted.

Edit: received a message saying the link is broken. I'm not sure why, but the issue seems to occur if you click the link, not if you copy+paste it. Screenshot below in any case, should the issue persist for others.


pseudonym
12d118

> On a separate note: I currently don't think that epistemic deference as a concept makes sense, because defying a consensus has two effects that are often roughly the same size: it means you're more likely to be wrong, and it means you're creating more value if right.

I don't fully follow this explanation, but if defying a consensus really has two effects of the same size, doesn't that suggest you could choose any consensus-defying action, since the EV is the same regardless? The increased likelihood of being wrong would be ~cancelled out by the increased value of being right.

Also, the "value if right" doesn't seem to be modulated only by the extent to which you defy the consensus.

Example:
If you are flying a plane and considering a new way of landing that goes against what 99% of pilots think is reasonable, the "value if right" might be much smaller than the harm from the "value if wrong". It's also not clear to me that if you instead took a landing approach that goes against what 99.9% of pilots think is reasonable, you would 10x your "value if right" compared to the 99% action.
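The arithmetic behind this objection can be sketched in a few lines. All of the numbers below are illustrative assumptions (the comment only supplies the 99% and 99.9% consensus figures): the point is that the EV is only invariant if "value if right" scales exactly in proportion to the unlikeliness of being right, which the pilot example suggests it often doesn't.

```python
# Expected value of a consensus-defying action: being in a smaller
# minority lowers the chance of being right; the quoted claim is that
# the "value if right" grows by roughly the same factor.

def expected_value(p_right, value_if_right, harm_if_wrong):
    """EV = P(right) * value_if_right + P(wrong) * harm_if_wrong."""
    return p_right * value_if_right + (1 - p_right) * harm_if_wrong

# Defying a 99% consensus (illustrative payoffs):
ev_99 = expected_value(p_right=0.01, value_if_right=100, harm_if_wrong=-10)

# Defying a 99.9% consensus. If "value if right" scaled exactly 10x,
# the upside term would match; if it doesn't scale (flat payoff), the
# EV is strictly worse, contradicting "the two effects cancel".
ev_999_scaled = expected_value(p_right=0.001, value_if_right=1000, harm_if_wrong=-10)
ev_999_flat = expected_value(p_right=0.001, value_if_right=100, harm_if_wrong=-10)

print(ev_99)          # -8.9
print(ev_999_scaled)  # ~-8.99 (close to ev_99 only because the payoff scaled 10x)
print(ev_999_flat)    # ~-9.89 (strictly worse when the payoff doesn't scale)
```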

This is more of a misread than a strawman, but on page 8 the paper says:

Sometimes the institutional critique is stated in ways that illegitimately presuppose that “complicity” with suboptimal institutions entails net harm. For example, Adams, Crary, and Gruen (2023, xxv) write:

> EA’s principles are actualized in ways that support some of the very social structures that cause suffering, thereby undermining its efforts to “do the most good.” (emphasis added)

This reasoning is straightforwardly invalid. It’s entirely possible—indeed, plausible—that you may do the most good by supporting some structures that cause suffering. For one thing, even the best possible structures—like democracy—will likely cause some suffering; it suffices that the alternatives are even worse. For another, even a suboptimal structure might be too costly, or too risky, to replace. But again, if there’s evidence that current EA priorities are actually doing more harm than good, then that’s precisely the sort of thing that EA principles are concerned with. So it makes literally no sense to express this as an external critique (i.e. of the ideas, rather than their implementation).

I don't think saying that Adams, Crary, and Gruen "illegitimately presuppose that “complicity” with suboptimal institutions entails net harm" is correct. The paper misunderstands what they were saying. Here's the full sentence (emphasis added):

> Taken together, the book's chapters show that in numerous interrelated areas of social justice work - including animal protection, antiracism, public health advocacy, poverty alleviation, community organizing, the running of animal sanctuaries, education, feminist and LGBTQ politics, and international advocacy - EA's principles are actualized in ways that support some of the very social structures that cause suffering, thereby undermining its efforts to “do the most good.”

I interpret it as saying:

The way the EA movement/community/professional network employs EA principles in practice supports and enables fundamental causes of suffering, which undermines EA's ability to do the most good.

In other words, it is an empirical claim that the way EA is carried out in practice has some counterproductive results. It is not a normative claim about whether complicity with suboptimal institutions is ever okay. 

pseudonym
2mo1410

fwiw the two videos linked look identical to me (EAG Bay Area 2023, "The current alignment plan, and how we might improve it")

pseudonym
2mo4735

If Zach was previously interim CEO for EVF US and has now moved to the board, is there a new interim CEO, or is this returning to the previous model of organizations reporting directly to the board? Is Howie still interim CEO of EVF UK? Apologies if this is on the website somewhere, I couldn't find the details.

I didn't make a claim that this was just about making sexual jokes or just about 'discomfort', and I'm not really sure where you got that from.

Also, you're clearly entitled to your opinion about what you personally consider uncomfortable, but what happens if someone else thinks you putting your hand around them in a somewhat intimate way is inappropriate? It sounds like you'd consider this a false accusation? That this shouldn't be classified as sexual harassment?

Again,

> It sounds like you're basically saying that 2/3 to 3/4 of accusations are false? Are you grounding these in anything empirical, or are these uninformed priors?

pseudonym
2mo119

Makes sense RE: it encompassing milder problems, but that also makes it more likely, so it's not clear this cashes out in favor of the false-accusation explanation.

What do you think the base rate of sexual harassment is? E.g. if you think 80% is the baseline risk for someone, I don't know how you justify a 25% to 37.5% likelihood of actual harassment conditional on an accusation. It sounds like you're basically saying that 2/3 to 3/4 of accusations are false? Are you grounding these in anything empirical, or are these uninformed priors?

What are the base rates you are anchoring to here? This is basically comparing the probability of someone being sexually assaulted vs. the probability of someone making a false accusation, right?
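The objection here is a Bayes' theorem calculation. The 80% base rate and the 25-37.5% posterior are the figures from the comment; the accusation-rate parameters below are hypothetical, introduced only to show what would be needed to make those two numbers consistent.

```python
# P(harassment | accusation) via Bayes' theorem. Parameters:
#   base_rate         - prior P(harassment)
#   p_accuse_if_true  - P(accusation | harassment occurred)
#   p_accuse_if_false - P(accusation | no harassment)
# All parameter values below are illustrative, not empirical claims.

def p_harassment_given_accusation(base_rate, p_accuse_if_true, p_accuse_if_false):
    true_pos = base_rate * p_accuse_if_true
    false_pos = (1 - base_rate) * p_accuse_if_false
    return true_pos / (true_pos + false_pos)

# If accusations are equally likely either way, the posterior just
# equals the 80% prior:
print(p_harassment_given_accusation(0.8, 0.5, 0.5))  # 0.8

# To drag an 80% prior down into the 25-37.5% band, a non-victim would
# have to be ~9x as likely to accuse as a victim:
print(round(p_harassment_given_accusation(0.8, 0.1, 0.9), 3))  # ~0.308
```

This is the force of the question in the comment: the stated posterior only follows from the stated prior under a very strong assumption about false-accusation rates.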

pseudonym
2mo812

Out of curiosity, did you read the comment in response to your original comment earlier? Specifically this part:

these principles may not genuinely be in tension. There's a nice discussion of this in this recent paper (page 16):

"Interpreting the presumption of innocence does not genuinely conflict with believing women, if we respect the evidence we receive. Here’s why: an accusation is made. We start with no evidence, then the accuser offers her testimony that p. If we’re good Bayesians, we update by conditionalizing our prior in p on how probable p is given this new evidence. Where this leaves us depends on where we started, how attached we were to that starting point (sometimes called resilience), and how much we trust the evidence we got. If we put more stock in the trustworthiness of the testimony than in our starting presumptions—which we should—then no matter our starting point, it will be pretty easy for testimony that p to move us to significant confidence in p. The more attached we are to the starting point (or the less we trust the testimony), the more the difference between the alternative understandings of the presumption of innocence makes a difference to the post-update degree of confidence. If the starting point is resilient, it will take an overwhelming amount of evidence to convince someone who starts out presuming ~p that in fact p is probably true. But insofar as the presumption of innocence is a stance defined by the absence of any evidence, the starting credence it yields should not be resilient, and so should easily shift in response to the weight of evidence with any real probative force."
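The quoted passage's point about resilience can be made concrete with an odds-form Bayesian update. The numbers are illustrative assumptions (the paper gives none): a prior reflecting mere absence of evidence shifts readily under testimony, while a resilient, strongly committed prior barely moves.

```python
# Odds-form Bayes update: posterior odds = prior odds * likelihood ratio,
# where the likelihood ratio is P(testimony | p) / P(testimony | ~p).
# Illustrative numbers only.

def posterior(prior, likelihood_ratio):
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Testimony judged 4x likelier if the claim is true than if false (LR = 4).
print(round(posterior(0.5, 4), 3))   # non-resilient starting point: 0.5 -> 0.8
print(round(posterior(0.01, 4), 3))  # resilient near-certainty of ~p: 0.01 -> ~0.039
```

The same evidence moves the first prior to significant confidence but leaves the second far from it, which is the quoted contrast between a presumption defined by absence of evidence and one that is resilient.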
