I agree, most of my uncertainty / hedging was on parts of the post that were removed within a few hours of posting. Thanks for checking.
[this comment references the first version of this post, which has since been edited substantially such that this qualification no longer feels necessary]
Just want to note that my main contribution to this post was listing out questions I wanted answered to inform what EAs or the EA community should do. I have a lot of uncertainty about the structure of what assets belong to whom (compared to previous expectations) and what this implies about the EA funding landscape.
I don't have high confidence in empirical claims that might be made in this post, and I th...
For 2), you might be interested in the EA Coworking Discord: https://discord.gg/zpCVDBGE (link valid for 7 days)
I've heard and used "aligned EA" to refer to someone in category 2 (that is, someone deeply committed to overarching EA principles).
I don't think arrangement 1 (an investor buys a house and rents it out only to EAs) is better than arrangement 2 (the investor invests in whatever has the highest returns, and the EAs rent the most convenient house), since the coordination required and the resulting inflexibility might be too much of a downside.
If the goal is to reduce costs of living together for EAs, the investor could subsidize the rent for the group of EAs while investing in something completely different with higher returns.
Some possible benefits of arrangement 1 arise if the cohabiting EAs could actively make the house a ...
Some thoughts I have:
Quantum randomness seems aleatory, so anything that depends on it to a large extent (everything depends on it to some extent) would probably also fit the term.
Winter solstice / summer solstice? These are popular secular holidays in EA circles (though not strictly EA per se).
In case someone has the capacity to do this right now: I'm under the impression that Open Phil does want their own page (based on a conversation I had with someone doing research there).
I think "fast takeoff" and "intelligence explosion" mean approximately the same thing as FOOM (notably, "catastrophic AI" refers to a broader category of scenarios), and these terms are often used, especially in more formal contexts.
I'm not concerned about this being a big problem, but I do think this post is a good nudge for people who don't typically think about the effect their language has on getting buy-in for their ideas.