relatedly how paths towards realizing the Long Reflection are most likely totalitarian
Peter Thiel touches on this point in a recent interview where he argues against Bostrom's vulnerable world hypothesis.
There's a new paper on jhana (in Cerebral Cortex) out of Matthew Sacchet's Harvard Center: Fu Zun Yang et al. 2023
Got it, thanks. I'm interested in the cattle analysis because cows yield ~4x more meat than pigs per slaughter, and could perform even better than that when factoring in cognition.
Apart from pivoting to “x-risk”, what else could we do?
Cultivate approaches to heal psychological wounds and get people above baseline on ability to coordinate and see clearly.
CFAR was in the right direction goalwise (though its approach was obviously lacking). EA needs more efforts in that direction.
I wrote a thread with some reactions to this.
(Overall I agree with Tyler's outlook and many aspects of his story resonate with my own.)
(b) intriguing IMO and I want to hear more -- #10, #11, #16, #19
10. nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang
See discussion in this thread
11. EA correctly identifies improving institutional decision-making as important but hasn't yet grappled with the radical political implications of doing that
This one feels like it requires substantial unpacking; I'll probably expand on it further at some point.
Essentially the existing power structure is composed of organizations (mostly l...
But I have a feeling that the community is taking revenge on him for all the tension the recent events left behind. This is cruel. I’m honestly worried about whether the guy is OK. I hope he is.
The scapegoat mechanism comes to mind:
The key to Girard's anthropological theory is what he calls the scapegoat mechanism. Just as desires tend to converge on the same object, violence tends to converge on the same victim. The violence of all against all gives way to the violence of all against one. When the crowd vents its violence on a common scapegoat, unity is restored. Sacrificial rites the world over are rooted in this mechanism.
I wrote in this direction a few years ago, and I'm very glad to see you clearly stating these points here.
From What's the best structure for optimal allocation of EA capital? –
...So EA is currently in a regime wherein the large majority of capital flows from a single source, and capital allocation is set by a small number of decision-makers.
Rough estimate: if ~60% of Open Phil's grantmaking decisions are attributable to Holden, then 47.2% of all EA capital allocation, or $157.4M, was decided by one individual in 2017. 2018 & 2019 will probably ...
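As a rough sanity check on that arithmetic, here's a minimal sketch that back-derives the figures implied by the quoted numbers; the Open Phil share of EA capital and the total EA capital figure below are inferred from the excerpt, not stated in it:

```python
# Back-of-the-envelope check of the 2017 estimate quoted above.
# The 60%, 47.2%, and $157.4M figures come from the excerpt; the
# Open Phil share and total EA capital are back-derived, not taken
# from the original post.
holden_share_of_open_phil = 0.60      # ~60% of Open Phil grantmaking decisions
holden_share_of_ea_capital = 0.472    # 47.2% of all EA capital allocation
holden_dollars = 157.4e6              # $157.4M

implied_open_phil_share = holden_share_of_ea_capital / holden_share_of_open_phil
implied_total_ea_capital = holden_dollars / holden_share_of_ea_capital

print(f"Implied Open Phil share of EA capital: {implied_open_phil_share:.1%}")               # ~78.7%
print(f"Implied total EA capital allocated (2017): ${implied_total_ea_capital / 1e6:.0f}M")  # ~$333M
```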
... there is a lot we can actually do. We are currently working on it quite directly at Conjecture.
I was hoping this post would explain how Conjecture sees its work as contributing to the overall AI alignment project, and was surprised to see that that topic isn't addressed at all. Could you speak to it?
Isn't the point of being placed on leave in a case like this to (temporarily) remove the trustee from their duties and responsibilities while the situation is investigated, as their ability to successfully execute on their duties and responsibilities has been called into question?
(I'm not trying to antagonize here – I'm genuinely trying to understand the decision-making of EA leadership better as I think it's very important for us to be as transparent as possible in this moment given how it seems the opacity around past decision-making contributed to...
Thank you for a good description of what this feels like. But I have to ask… do you still “want to join that inner circle” after all this? Because this reads as though your defense of using a burner account is that it preserves your chance to enter/remain in an inner ring which you believe to be deeply unethical.
Anonymity is not useful solely for preserving the option to join the critiqued group. It can also help buffer against reprisal from the critiqued group.
See Ben Hoffman on this (a):
"Ayn Rand is the only writer I've seen get both these ...
It seems like we're talking past each other here, in part because as you note we're referring to different EA subpopulations:
I don't really know who knew what when; most of my critical feeling is directed at folks in category (1). Out of everyone we've mentioned here (EA or not), they had the most exposure to and knowledge about (or at least opportunity to learn about) SBF & FTX's operations.
I think we sho...
Thanks. I think Cowen's point is a mix of your (a) & (b).
I think this mixture is concerning and should prompt reflection about some foundational issues.
...question in this space is whether EAs have allocated their attention wisely. The answer seems to be "mostly yes." In the case of FTX, heavyweights like Temasek, Sequoia Capital, and SoftBank, with billions on the line, did their due diligence but still missed what was happening. Expecting EAs to have been better evaluators of FTX's health than established institutional investors is somewhat odd.
Two things:
I read Cremer as gesturing in these passages to the point Tyler Cowen made here (a):
...Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.
I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated). When i
In particular, the shot at Cold Takes being "incomprehensible" didn't sit right with me – Holden's blog is a really clear presentation of the case of those concerned about the risk misaligned AI poses to the long-run future, regardless of whether you agree with it or not.
Agree that her description of Holden's thing is uncharitable, though she might be describing the fact that he self-describes his vision of the future as 'radically unfamiliar... a future galaxy-wide civilization... seem[ing] too "wild" to take seriously... we live in a wild time, and should be ready for ...
So, out of your list of 5 organizations, 4 of them were really very much quite bad for the world, by my lights, and if you were to find yourself to be on track to having a similar balance of good and evil done in your life, I really would encourage you to stop and do something less impactful on the world.
This view is myopic (doesn't consider the nth-order effects of the projects) and ahistorical (compares them to present-day moral standards rather than the counterfactuals of the time).
Probably Good is a reasonable counterexample to my model here (though it's not really a direct competitor – they're aiming at a different audience and consulted with 80k on how to structure the project).
It'll be interesting to see how its relationships with 80k and Open Phil develop as we enter a funding contraction.
If you or me or anyone else wanted to start our own organisation under a new brand with similar goals to CEA or GWWC I don't think anyone would try to stop us!
My model is that no one would try to formally stop this effort (i.e. via a lawsuit), though it would receive substantial pushback in the form of:
I don't follow what you're pointing to with "beholden to the will of every single participant in this community."
My point is that CEA was established as a centralizing organization to coordinate the actions and branding of the then-nascent EA community.
Whereas Luke's phrasing suggests that CEA drove the creation of the EA community, i.e. CEA was created and then the community sprung up around it.
CEA was set up before there was an EA movement (the term "effective altruism" was invented while setting up CEA to support GWWC/80,000 Hours).
The coinage of a name for a movement is different from the establishment of that movement.
That's true, but before the brand "Effective Altruism" existed, there was no reason why starting an organisation using that name should have made the founders beholden to the will of every single participant in this community - you'd need to conjecture a pretty unreasonable amount of foresight and scheming to think that even back then the founders were trying to structure these orgs in a manner designed to maintain central control over the movement.
If you or me or anyone else wanted to start our own organisation under a new brand with similar goals to CEA or GWWC I don't think anyone would try to stop us!
Another conflict-of-interest vector is that EVF board members could influence funding to EVF sub-orgs via other positions they hold, e.g. Open Phil (where Claire Zabel works as a senior program officer) funds CEA (a sub-org of EVF, where Claire is a board member).
Ah ha:
Effective Ventures Foundation is governed by a board of five trustees (Will MacAskill, Nick Beckstead, Tasha McCauley, Owen Cotton-Barratt, and Claire Zabel) (the “Board”). The Board is responsible for overall management and oversight of the charity, and where appropriate it delegates some of its functions to sub-committees and directors within the charity.
"What actions would you like to see from EA organizations or EA leadership in the next few months?"
As Shakeel wrote here, the leaders of EA organizations can’t say a lot right now, and we know that’s really frustrating.
** the leaders of EA organizations are deciding not to say a lot right now...
Here are some jumping-off points for reflecting on how one might update their moral philosophy given what we know so far.
From this July 2022 FactCheck article (a):
...Bankman-Fried has provided Protect Our Future PAC with the majority of its donations. The group has raised $28 million for the 2022 election cycle as of June 30, with $23 million from Bankman-Fried. Nishad Singh, who serves as head of engineering at FTX, has donated another $1 million.
As of July 21, the PAC has spent $21.3 million on independent expenditures — exclusively in Democratic primaries for House seats.
This level of spending makes Protect Our Future PAC the third highest among outside spenders ...
by the founder of Jhourney: https://stephenzerfas.substack.com/p/jhanas-are-human-not-buddhist