Head of Online (EA Forum, effectivealtruism.org, EA Newsletter) at CEA. Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://www.centreforeffectivealtruism.org/careers
It was both.
And yeah, the article reports Sam telling someone that he would "destroy them", but I don't fully understand the threat model. I guess the idea is that Sam would tell a bunch of people that I was bad, and then I wouldn't be able to get a job or opportunities in EA?
I guess I don't know for sure that Sam never attempted this, but I can't recall evidence of it.
Thanks! I appreciate the link round-up; knowing the boundaries of what exactly was audited does seem helpful, and the claim that VCs have a preference for fraudulent founders is interesting. This is exactly the kind of comment I was hoping to get from my post.
I still don't understand why they can't give a clear promise of when they will talk, and the lack of this makes me trust them less
fwiw I will probably post something in the next ~week (though I'm not sure if I'm one of the people you are waiting to hear from).
I think the claim is:
This is a cool point, thanks for making it.
However, a huge amount of profit is effectively transferred from third party investors to the CEO or management team. They go from probably a few percent of the profits to spend as they wish to controlling the distribution of perhaps half.
Can't this be fixed by just stating that the windfall is distributed by shareholders on a pro rata basis?[1]
Also, if I'm understanding Appendix II of the original report correctly, the proposals all involve distributing the windfall to a trust that the AI developer does not control. I guess your point is that e.g. OpenAI might distribute its windfall to "the trust for buying Sam Altman yachts" or something, but I think it is per se incorrect to describe the CEO as having control over spending the windfall.
"Fixed" in scare quotes - I'm not sure it's actually better for the world to have random VCs spending the money than CEOs of AGI companies.
I think we might be talking past each other – I understood you to be asking about Chana/CEA's thoughts on commissioning an investigation whose scope is broader than the one the board commissioned. Is that wrong?
Some background, which is probably pedantic, but I want to err on the side of oversharing:
Before the news of this specific incident broke, we already had a proposal from an external entity to audit some of the CH team's general processes, and I expect (~70%) we will end up working with them, although I'm not sure exactly what the details will look like. We have done this kind of external review before, with mixed results; as with any kind of peer review/best-practice sharing, the median outcome is that there aren't major changes. Still, I think the potential upside is probably enough to justify doing something like this.
Note that this audit is “external” in the sense that it will be performed by people who don’t work at CEA, but is “internal” in the sense that it’s triggered by CEA rather than by the board. And there's yet a third sense of the word “external”: I am involved, so the audit is “external” in the sense that it involves me, a person who is not on the community health team but who has the power to fire/reassign/etc. anyone on the community health team.
It seems best to start the investigation into this particular incident and announce it as quickly as possible, so I don't yet have a full plan for other audits we might do, but this comment resonates with some of my thoughts on potentially sharing information that comes out of this kind of review.
I feel like there's some implicit claim that only a subset of people (socially awkward men?) aren't romantically perceptive, but my understanding is that basically everyone is bad at this, and if you are going to flirt with someone, you should expect that you probably can't tell whether they want it.[1]
An example paper largely chosen at random says:
Based on a community sample of real-life speed daters we were able to show that actual mate choices are not reciprocal, although people strongly expect their choices to be reciprocated and dating behaviour (flirting) is indeed strongly reciprocal.
I.e., people reciprocate flirting essentially independently of whether they are actually attracted to the other person, and the other person is essentially unable to distinguish "real" from "fake" flirting.
Furthermore, that paper had two "independent, trained raters" who watched recordings and marked whether the person involved was flirting. Their interrater reliability wasn't terrible, but it wasn't amazing either.[2]
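As an aside on what that statistic measures (and I'm assuming the paper reports something like Cohen's kappa, which is the standard reliability measure for a binary "flirting / not flirting" judgment):

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the fraction of clips the two raters agreed on and $p_e$ is the agreement you'd expect by chance given each rater's base rates; $\kappa = 1$ is perfect agreement and $\kappa = 0$ is no better than chance.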
tl;dr: my guess is that most people should 1) not assume that they can reliably identify flirting and 2) even if they can, should not assume that they can reliably predict whether this flirting is indicative of romantic interest.
Of course, this also cuts the other way: people who you don't think are attracted to you are sometimes attracted to you. But whatever risk/reward calculation you are running should include the fact that you are probably going to make mistakes here.
Obviously it's possible to get reliable signals, e.g. if someone explicitly says "I don't like you" then probably you can accurately guess that they don't like you. This comment is referring to "normal" flirting signals like eye contact, touch, etc.
I assume these "trained raters" were grad students who had thought about the problem for a couple days or something, and I bet that if you actually genuinely studied this you could get good at it, but probably very few people are in that reference class.
Thanks! This is helpful (though a bit sad).
Thanks for doing this! I appreciate the willingness to think about mistakes, but for what it's worth I would also be interested to hear what went well. At the end you allude to "key metrics" that haven't resolved yet, but it might be worth sharing what the initial results are?