

Agree this seems bad. Without commenting on whether this would still be bad, here's one possible series of events/framing that strikes me as less bad:
- Org: We're hiring a temporary contractor and opening this up to international applicants
- Applicant: Gets the contract
- Applicant: Can I use your office as a working space during periods I'm in the States?
- Org: Sure

This then maybe just seems like the sort of thing the org and applicant would want good legal advice on (I presume the applicant would in fact look for a B1/B2 visa that allows business during their trip, rather than just tourism).

For completeness, here's what OpenAI says in its "Governance of superintelligence" post:

Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.


If there were someone well-trusted by the community (whether inside or outside it) whom you trusted not to doxx you, you might ask if they'd be willing to endorse a non-specific version of events as accurate. I do accept there's an irony in suggesting this, given your bad experience with something similar previously!

This may or may not be relevant to your situation, but I'd be more willing to accept non-specific claims at face value if a trusted third party was vouching for that interpretation.

Tl;dr - my (potentially flawed or misguided) attempt at a comment that provides my impression of Catherine as a particularly trustworthy and helpful person, with appropriate caveats and sensitivity to Throwaway's allegation.

Note: I haven't written this sort of comment before, and I appreciate that it would be easy for this sort of comment to have a chilling effect on important allegations of wrongdoing coming to light, so I would welcome feedback on this comment or any norms that would have been useful for me to adhere to in making it or deciding to make it.

First things first: I'm sorry that Throwaway has had a bad experience with Catherine! Notwithstanding the lack of further detail, I recognize that given this comment there's a reasonable chance there was miscommunication, or that Catherine made mistakes around confidentiality, including some chance this was in poor faith, part of a meaningful pattern, or involved misjudgement that could call her position into question. Throwaway has my empathy and I wish them the best in their forthcoming post, which seems courageous and selfless to write given their experience. I appreciate that what I say below would be very frustrating and disheartening to read for someone in the position they mention in the comment. I'll certainly do my best to read anything further from them with my best attempt at good faith and impartiality, and I would need to apologize to them and downgrade my confidence in my ability to judge people's trustworthiness if the picture I paint below turns out to have been unhelpful in hindsight.

With this said, it makes me feel uncomfortable to see a fully anonymous/uncorroborated/non-specific allegation of wrongdoing featured prominently in the comments to this sort of post - I'm not sure I like the incentive structures where anyone can costlessly cast a significant shadow on someone's reputation, given the costs involved in dissuading people from talking to someone whose role is to provide community health support. I definitely agree with Lorenzo's impression that it would be great to have an appointed independent/external person or body that someone could take these sorts of allegations to with confidence.

I feel compelled to provide my own impression of Catherine. For context: we first met 5+ years ago through the Effective Altruism community in New Zealand; my early experiences involved sharing some thoughts with Catherine based on my having been involved with EA a couple of years prior to her, and helping her put on a Giving Game; we have subsequently continued to have conversations when we get the chance to see each other, out of mutual good feeling and an interest in EA community health.

My impression of Catherine is that she is a particularly kind, trustworthy and virtuous person, with an interest in doing right by people and not breaking commonly accepted ethical norms. Concretely, I'd give odds of at least 5-to-1 that a year from now I'll continue to both recommend her as someone particularly helpful and trustworthy to talk to, and endorse her remaining in her current role (of course, in the hypothetical where I'm telling someone this, I would, for transparency, note that someone has called her upholding of confidentiality into question in one instance).

Any updates around the likelihood/timing of a discussion course? :)

[Update 26 Jul '22: the website should be operational again. Sorry again to those inconvenienced!]

I've recently taken over monitoring the donation swaps. There have historically been a handful of offers listed each month, but it looks like the system broke at some point over the past few weeks - thanks to Oscar below for emailing to bring this to our attention. I'm sorry for the inconvenience to anyone who has been trying to use the service, and I will hopefully provide a further update in the not-too-distant future!

Thanks for organising :)

By when do you expect decisions on applications will be made?

Thanks for writing this - it seems worthwhile to be strategic about potential "value drift", and this list is definitely useful in that regard.

I have the tentative hypothesis that a framing with slightly more self-loyalty would be preferable.

In the vein of Denise_Melchin's comment on Joey's post, I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think. In the vein of The Replacing Guilt series, I don't think that attempting to override these other values is generally sustainable for long-term motivation.

This hypothesis would point away from pledges or 'locking in' (at least for the sake of avoiding value drift) and, I think, towards a slightly different framing of some suggestions: for example, rather than spending time with value-aligned people to "reduce the risk of value drift", we might instead recognize that spending time with value-aligned people is an opportunity to both meet our social needs and cultivate our impact.

In the same vein as this comment and its replies: I'm disposed to framing the three as expansions of the "moral circle". See, for example: https://www.effectivealtruism.org/articles/three-heuristics-for-finding-cause-x/
