Even from an unsympathetic outside view, there needs to be some cause or at least some narrative to do something like this:
Dustin and Holden have been working in philanthropy for a decade. Open Philanthropy has worked closely with thousands of people, making hard decisions. There's a vast amount of trust built up, and it's more than just money. This is accumulated evidence in their favor.
Without a reason, or even a narrative, for doing so, I think even a high-quality investigation or audit will be a waste of time or a distraction. That would be especially bad if there are important issues or reforms in EA that actually need attention.
Pointing at arbitrary people or generating fear is bad.
Holden built GiveWell and Open Phil. Those are really good. And the truth is that building them made a lot of enemies and critics.
I mean, what do you really think happened with Criminal Justice Reform?
Also, if there were some dumb investigation, it would give my name a bad meaning and I'd have to change it.
I always saw Dustin as an abstract bag of money, until two weeks ago when I found out there was an EA Twitter.
I think we should investigate if he is paying someone to be this funny.
Hi, I've now found it, and I agree that the advice is bad, directionally.
However, I expect longtermist people who receive large amounts of funds to be personally competent and responsible enough to say, write, or prepare to set aside those funds. They would be looked down upon if they needed to be scolded on an online forum to navigate the moral and legal issues in the most basic way. Polite disagreement would have been adequate.
However, I really think you should reconsider the way you've worded your sentiments. It's fine to register an anonymous account to say something that you wouldn't readily put your name to.
(I'm not anonymous, and this setup is intentional, but this is wildly hard to explain.)
More to the heart of the issue, unfortunately, the situation is exactly the opposite of what I believe you perceive.
Outside of voting/writing on the EA forums, many parts of EA are treated with absolute contempt and seen as noxious. This was true before November, and the view is held by multiple senior EA people across all cause areas, people you would respect.
As one example, see Matt Yglesias:
Personally, it’s also hard not to just generally feel worse about the “EA community” as a set of social institutions distinct from the specific ideas. I always had sort of mixed feelings about this, and I gave money to GiveWell’s Top Charities Fund for years before I ever attended my first EA conference. And while I thought the conference was fine, afterward I felt more confident that I would keep donating to GiveWell than that I would ever go to another EA conference.
If two weeks ago you found the whole scene to be obnoxious and weird and suffused with an odd mix of arrogance and credulity, recent events have tended to vindicate that.
What is especially bad and broken is that many people do actually act with great conscientiousness and care online, on the EA Forum and LessWrong, but this is effectively harvested by active people who want access to the resources and power structures that have been built up by that conscientious, unrelated work.
I suspected this was deliberate, or at least tolerated, in part because it kept the related worldviews relatively weak. However, in the wake of the FTX collapse and the weakening of MacAskill and the non-AI establishment, these latent issues might result in extremely bad outcomes for EA.
I believe large parts of online EA discourse are intellectually bankrupt and dysfunctional. I believe I can decisively articulate why. This would really depress a lot of people without a solution, so I haven't written it up.
What suggested action or claim warrants this emergency-like statement? I can't find it in this post.
Overall, this contributes to the squalid character of these events: this whole thing is essentially 16-year-olds who read too many blog posts online.
As an aside, I don't know whether FTX grantees should give back all the money, but Yudkowsky's post about it is badly argued, intellectually and morally, and it's disappointing, and amazing really, that it got the upvotes and credibility it did without the obvious counterarguments appearing.
This sort of behavior is obvious to outsiders.
Your work and background seem valuable and impressive, and you are far more experienced than I am in VC. I would like to learn more from you.
As comments on your statements taken in isolation:
"what requires board consent, what is the past history of investor updates, what conflicts of interest exist and mechanisms to resolve them, etc."
It would be surprising if an early-stage VC looked at things like the board or board consent. I expect the main thing they would look at is the team.
Certainly in tech, boards are usually not respected, even in later, larger companies, much less a small early-stage project. Are you conflating the board and board control with founder/lead-team dynamics?
"All things that should have, if press reports are accurate, provided ample red flags prior to an investment, even before getting into forensic accounting, etc."
In my opinion, most projects would bomb this test, including Apple, Facebook, etc. (modulo the claimed romantic/sexual relationships, and even then that's not clear).
Fear of missing out on a competitive round can drive normally savvy investors to skip or discount the results of the normal due-diligence process.
I understand the most negative narratives of the VC investment in FTX ("I LOVE THIS FOUNDER").
At the same time, it's not clear it was ex ante terrible. It was clear what they were investing in, and that meant almost no control over, or visibility into, the organization.
Most elite/top/successful founding teams want exactly the arrangement SBF achieved, because VC control or influence is seen as (strongly) net negative. If that's true, the arrangement cannot be a signal or red flag.
Not sure how useful it is in real life to know you rotate shapes better than 99% of people.
The study you clicked on claims to be an IQ study, or even a meta-study of IQ. So whatever it is doing, it would be weird if it omitted visual-spatial ability (which I think is commonly studied). The absence of a strong claim is consistent with the authors being agnostic/open-minded/uncertain about its value.
To contextualize, visual-spatial ability is pretty normal in IQ tests; it would be like asking math or verbal questions on the SAT.
(I don't know IQ tests the way I know other disciplines; I thought about all of the above for about 60 seconds and deduced some things before typing this, but I'm pretty sure I'm right.)
I think this aesthetic comes from deliberate choices: using much shorter statements and trusting the audience. For example, it allows readers to reflect, instead of the writing being didactic or overbearing. Your reaction is valid.
In this other thread, see this claim.
This phrasing is a yellow flag to me: it's a remarkably large effect, presented without being contextualized in a specific medical claim (e.g. one that can be retreated from).
The description of the therapy itself is not very promising. https://strongminds.org/our-model/
It is a coarse description. It does not suggest how such a powerful technique could be reliably replicated and distributed.
StrongMinds appears to orchestrate its own evaluations, controlling data flow by hiring local contractors.
As described elsewhere, the approach of measuring happiness/sentiment in a cardinal way, and comparing this to welfare from pivotal/tragic life events measured in years or disability, seems challenging, and the parent comment's concern that the magnitudes seem dubious is justified.
Global health and development is basically built on a 70-year graveyard of very smart people essentially doing meta-level things that don't work well.
Some concerns that an educated, reasonable person should raise (and that have been raised) are:
The above aren't dispositive, but the WELLBY construct does not at all seem easy to compare to QALYs and DALYs, and the pat response is unpromising.