
pseudonym

2034 karma · Joined Oct 2022

Bio

Feel free to DM me anything you want to share but don't want to, or can't, share under your own account(s), and I can share it on your behalf if I think it adds value to the discourse/community.

The corollary is that views shared on this account don't necessarily reflect my own personal views, though they will usually be worded to sound like they are.

Comments (182)

https://ea.greaterwrong.com/posts/NJwqKSbnAgFHogaL2/key-questions-for-digital-minds#comment-S9jjzKf3AaTt62Lja

Leaving this comment up for myself and as a PSA, as the original post by Jacy was deleted shortly after this comment was posted.

Edit: I received a message saying the link is broken. I'm not sure why, but the issue seems to occur if you click the link, not if you copy and paste it. Screenshot below in case the issue persists for others.

[screenshot]


Fair point about reputational harms being worse and possibly too punishing in some cases. For a proposed standard, it might be worth differentiating (if possible) between, e.g., careless errors or momentary lapses in judgement that were quickly rectified and likely caused no harm in expectation, versus a pattern of dishonest voting intended to mislead the EAF audience, especially where the voter or an org they work for stands to gain from it, or where the comments in question are directly harmful to another org. In the latter cases the reputational harm may be more justifiable.

pinkfrog: 1 month (12 cases of double voting)
LukeDing: 6 months (>200 times)
JamesS: indefinite (8 accounts, number not specified)
[Redacted]: 2 months (13 double votes, most are "likely accidental", two "self upvotes")
RichardTK: 6 months (number not specified)

Charles He: 10 years (not quite analogous, as this involved using alts to circumvent an initial ban and included other violations)
Torres: 20 years (not quite analogous, as this involved using alts to circumvent an initial ban and included other violations)

Have the moderators come to a view on identifying information? Is pinkfrog the account with higher karma or more forum activity?

In other cases the identity has been revealed to various degrees:

LukeDing
JamesS
Richard TK (noting that an alt account in this case, Anin, was also named)
[Redacted]
Charles He
philosophytorres (but identified as "Torres" in the moderator post)

It seems inconsistent to have this info public for some and redacted for others. I do think it is a good public service to have this information public, but I'm primarily pushing here for consistency and more visibility around existing decisions.

Theory A says that SBF genuinely believed in effective altruism when running FTX, and he does not have DAE. Additionally, this theory says that his unethical behaviors were just the result of some mixture of incompetence and bad luck (possibly, though not necessarily, exacerbated by a belief in naïve utilitarianism). In this theory, he did not intentionally defraud anyone, though he was, at the very minimum, extremely reckless. This theory says that he was either so incompetent that he didn't know he was behaving unethically and breaking the law, or else he was able to justify his behaviors to himself (even if he felt very bad/guilty about his actions) via a naïve utilitarian calculus based on his guess that doing so would yield more utility in the world in the long-run for all conscious beings (compared to if he followed the law and behaved ethically). 

Possibly nitpicky, but I don't think "In this theory, he did not intentionally defraud anyone" is necessarily the case even if we assume theory A (SBF genuinely believed in EA principles and did not have DAE).

I'm not sure if there's an implicit claim here along the lines of "if you believe in the ideas of EA then you would never intentionally defraud anyone / behave unethically under any reasonable interpretation / break the law", which I'm less sure about; but at a minimum it's plausible (as you point out) that SBF could have been a naïve utilitarian, which could have made him more likely to intentionally defraud someone than if he wasn't, all else equal.

Alternatively, if you do mean this to be the "unintentional" theory, then there should probably also be room to explore the hypothesis that he was acting more intentionally, as a naïve utilitarian. I think there is definitely some evidence out there to support this, and I would find it more compelling and probably more likely than theory C.

Yes, see here:

https://forum.effectivealtruism.org/posts/32LMQsjEMm6NK2GTH/sharing-information-about-nonlinear?commentId=oSJh4RJvG4Gy4hQ3t

This failure mode seemed similar in nature to this listed mistake on the CEA website. Specifically:

We think we should have taken on fewer new projects, set clearer expectations for them, and ended unsuccessful projects earlier.

Running this wide array of projects has sometimes resulted in a lack of organizational focus, poor execution, and a lack of follow-through. It also meant that we were staking a claim on projects that might otherwise have been taken on by other individuals or groups that could have done a better job than we were doing (for example, by funding good projects that we were slow to fund).

OTOH, it may not have caused harm in this case if 1) or other considerations were sufficient reasons to close the project without 2), or if this wasn't a project that others could have done better than CEA.

There's some evidence in that thread that the accusations weren't dismissed by the EA community, given the claim that multiple people were banned from EA events as a result. Perhaps not all accusations lead to action, and you/OP mean dismissed as in "not entirely accepting all claims", but that does seem like a pretty high, and likely unreasonable, bar.

I would also be interested in more clarification about how EA-relevant the case studies provided might be, to whatever extent this is possible without breaking confidentiality. For example:

We were pressured to sign non-disclosure agreements or “consent statements” in a manipulative “community process”.

This does not sound like the work of the CEA Community Health team, but it would be an important update if it was, and it would be useful to clarify if it wasn't, so people don't jump to the wrong conclusions.

That being said, I think the AI community in the Bay Area is probably small enough that these cases may be personally relevant to individual EAs even if they aren't institutionally relevant: it seems plausible that a potential victim who gets into AI work via EA might meet alleged abusers in cases A to K, even if no EA organizations or self-identified EAs are involved.
