Yesterday morning I woke up and saw this tweet by Émile Torres: https://twitter.com/xriskology/status/1599511179738505216
I was shocked, angry and upset at first. Especially since it appears that the estate was for sale last year for 15 million pounds: https://twitter.com/RhiannonDauster/status/1599539148565934086
I'm not a big fan of Émile's writing and how they often misrepresent the EA movement. But that's not what this question is about, because they do raise a good point here: Why did CEA buy this property? My trust in CEA has been a bit shaky lately, and this doesn't help.
Apparently it was already mentioned in the New Yorker piece: https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism#:~:text=Last%20year%2C%20the%20Centre%20for%20Effective%20Altruism%20bought%20Wytham%20Abbey%2C%20a%20palatial%20estate%20near%20Oxford%2C%20built%20in%201480.%20Money%2C%20which%20no%20longer%20seemed%20an%20object%2C%20was%20increasingly%20being%20reinvested%20in...
Lazarus Chakwera won Malawi’s 2020 Presidential election on an anti-corruption, pro-growth platform. It’s no surprise that Malawians voted for growth, as Malawi has been called the world’s “poorest peaceful country”. According to Our World in Data, the median income per day is $1.53, or about $560 per year. Real GDP per capita has grown at an average rate of just 1.4% per year since 1961 and stands today at $1650 per person (PPP, current international $). Furthermore, the country has yet to recover from an economic downturn caused by the Covid-19 pandemic, leaving GDP per capita only slightly higher than it was in 2014.
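The income figures above are easy to sanity-check. As a back-of-envelope calculation (taking the $1.53/day median and the 1.4% average growth rate as given; the exact year range is an assumption):

```python
# Back-of-envelope check of the Malawi income figures cited above.
median_income_per_day = 1.53  # Our World in Data, USD per day
annual_income = median_income_per_day * 365
print(round(annual_income))  # ~558, consistent with the ~$560/year figure

# Cumulative effect of ~1.4% average annual real GDP per capita growth
# over the roughly six decades since 1961 (assumed endpoint: 2022):
years = 2022 - 1961
growth_factor = 1.014 ** years
print(round(growth_factor, 2))  # real GDP per capita up roughly 2.3x
```

That is, six decades of 1.4% growth only slightly more than doubles incomes, which is why Malawi remains among the poorest countries despite decades of positive growth.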
Life on $560 a year is possible, but not very comfortable. A sudden illness, accident, or natural disaster...
The Survival and Flourishing Fund (SFF) funds many longtermist, x-risk, and meta projects, and has distributed $18 million year to date. While SFF’s focus areas are similar to those of the FTX Future Fund, SFF has received few applications since the latest round closed in August.
This is a reminder that projects can apply to be considered for expedited speculation grants at any time. Speculation grants can be approved in days and paid out as quickly as within a month. Past speculation grants have ranged from $10,000 to $400,000, and applicants for speculation grants will automatically be considered for the next main SFF round. In response to the recent extraordinary need, Jaan Tallinn, the main funder of SFF, is doubling speculation budgets. Grantees impacted by recent events should apply.
SFF funds charities and projects hosted by organizations with charity status. You can get a better idea of SFF’s scope from its website and its recent grants. I encourage relevant grantees to consider applying to SFF, in addition to the current array of efforts led by Open Phil, Mercatus, and Nonlinear.
For general information about the Survival and Flourishing Fund, see:
Written quickly. It's better to draft my objections poorly than not to draft them at all.
I am sceptical of "foom": I suspect it is some combination of not physically possible, not feasible, and not economically viable.
[Not sure yet what level of scepticism I endorse.]
I have a few object-level beliefs that bear on it. I'll try to express them succinctly below (there's a summary at the end of the post for those pressed for time).
Note that my objections to foom are more disjunctive than they are conjunctive. Each is independently a reason why foom looks less likely to me.
I currently believe/expect the following to a sufficient degree that they inform my position on foom.
1.0. Marginal returns to cognitive investment (e.g. compute) decay at a superlinear rate (e.g. exponential) across some relevant cognitive domains (e.g. some...
[Update: Work posted after September 23 2022 (and before whatever deadline we establish) will be eligible for the prizes. If you are sitting on great research, there's no need to delay posting until the formal contest announcement in 2023.]
At Open Philanthropy we believe that future developments in AI could be extremely important, but the timing, pathways, and implications of those developments are uncertain. We want to continually test our arguments about AI and work to surface new considerations that could inform our thinking.
We were pleased when the Future Fund announced a competition earlier this year to challenge their fundamental assumptions about AI. We believe this sort of openness to criticism is good for the AI, longtermist, and EA communities. Given recent developments, it seems likely that competition is no...
Epistemic status: This post is meant to be a conversation starter rather than a conclusive argument. I don’t assert that any of the concerns in it are overwhelming, only that we have too quickly adopted a set of media communication practices without discussing their trade-offs.
Also, while this was in draft form, Shakeel Hashim, CEA’s new head of communications, made some positive comments on the main thesis, suggesting that he agreed with a lot of my criticisms and planned to have a much more active involvement with the media. If so, this post may be largely redundant; nonetheless, it seems worth having the conversation in public.
CEA adheres to what they call the fidelity model of spreading ideas, which they formally introduced in 2017, though my sense is it...
Each year, wealthy countries collectively spend around 178 billion dollars (!!) on development aid.
Development aid has funded some of the most cost-effective lifesaving programs that have ever been run. One such example is PEPFAR, the US emergency AIDS relief programme rolled out at the height of the African AIDS epidemic, which estimates suggest saved 25 million lives at a cost of some $85 billion ($3,400 per life saved, competitive with GiveWell’s very best). EAs working in global poverty will know just how difficult it is to achieve high cost-effectiveness at these scales.
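The PEPFAR cost-effectiveness figure above follows directly from the two estimates cited, taken at face value:

```python
# Rough cost-per-life-saved implied by the PEPFAR estimates cited above.
total_cost_usd = 85e9  # ~$85 billion spent on the programme
lives_saved = 25e6     # ~25 million lives, per the estimates cited
cost_per_life = total_cost_usd / lives_saved
print(cost_per_life)   # 3400.0, matching the ~$3,400/life figure in the text
```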
Development aid has also funded some of the very worst development projects conceived, in some instances causing outright harm to the recipients.
Development aid is spent with a large variety of goals in mind. Climate mitigation projects, gender equality campaigns, and free-trade...
I want to run the listening exercise I'd like to see.
Give concrete suggestions for community changes. 1 - 2 sentences only.
Upvote if you think they are worth putting in the Polis poll, and agreevote if you think the comment is true.
Agreevote if you think they are well-framed.
Aim for them to be upvoted. Please add suggestions you'd like to see.
I'll take the top 20–30.
I will delete/move to comments top-level answers that are longer than 2 sentences.
Polis poll here: https://pol.is/5kfknjc9mj
(Note: This essay was largely written by Rob, based on notes from Nate. It’s formatted as Rob-paraphrasing-Nate because (a) Nate didn’t have time to rephrase everything into his own words, and (b) most of the impetus for this post came from Eliezer wanting MIRI to praise a recent OpenAI post and Rob wanting to share more MIRI-thoughts about the space of AGI organizations, so it felt a bit less like a Nate-post than usual.)
Nate and I have been happy about the AGI conversation seeming more honest and “real” recently. To contribute to that, I’ve collected some general Nate-thoughts in this post, even though they’re relatively informal and disorganized.
AGI development is a critically important topic, and the world should obviously be able to hash out such topics in...