Recent Discussion

Yesterday morning I woke up and saw this tweet by Émile Torres: https://twitter.com/xriskology/status/1599511179738505216

I was shocked, angry and upset at first. Especially since it appears that the estate was for sale last year for 15 million pounds: https://twitter.com/RhiannonDauster/status/1599539148565934086

I'm not a big fan of Émile's writing and how they often misrepresent the EA movement. But that's not what this question is about, because they do raise a good point here: Why did CEA buy this property? My trust in CEA has been a bit shaky lately, and this doesn't help.

Apparently it was already mentioned in the New Yorker piece: https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism ("Last year, the Centre for Effective Altruism bought Wytham Abbey, a palatial estate near Oxford, built in 1480. Money, which no longer seemed an object, was increasingly being reinvested in...")

Or at least a cheaper one? With better access to public transport?
This seems overbudget and public transport is not only better for the environment, it's also more egalitarian. It would allow people from more impoverished backgrounds to more easily join our community, which - given our demographics - might be something we want to encourage.

1Answer by Bob Jacobs42m
Just wanted to answer this edited question: they did mention it on August 2nd [https://forum.effectivealtruism.org/posts/hp2FWKhWiCto6oBrL/the-operations-team-at-cea-transforms?commentId=Dky2zoRTHDqaMcB4h], although I would like to point out that it was going to be revealed in the New Yorker a few days later [https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism]. It is possible that they were allowed to preview the article and were trying to get ahead of it.
1Bob Jacobs1h
This section of Owen's reply [https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/why-did-cea-buy-wytham-abbey?commentId=u3yJfbm2pes8TFpYX] seems to imply the latter. Although I suppose it is possible he sought out a funder but the funder was only willing to pay for this specific one.

Lazarus Chakwera won Malawi’s 2020 presidential election on an anti-corruption, pro-growth platform. It’s no surprise that Malawians voted for growth, as Malawi has been called the world’s “poorest peaceful country”. According to Our World in Data, the median income per day is $1.53, or about $560 per year. Real GDP per capita has grown at an average rate of just 1.4% per year since 1961 and stands today at $1650 per person (PPP, current international $). Furthermore, the country has yet to recover from an economic downturn caused by the Covid-19 pandemic, leaving GDP per capita only slightly higher than it was in 2014.

GDP per capita, PPP (current international $). Data from World Bank

Life on $560 a year is possible, but not very comfortable. A sudden illness, accident, or natural disaster...
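The figures above can be sanity-checked with a few lines of arithmetic (a rough sketch; the 365-day year and constant growth rate are my simplifying assumptions, not the author's):

```python
# Back-of-the-envelope check of the income figures cited above.
# Assumptions (mine): a 365-day year and a constant 1.4% annual growth rate.

daily_income = 1.53                # median income per day, USD (Our World in Data)
annual_income = daily_income * 365
print(round(annual_income))        # 558, close to the cited ~$560/year

# Cumulative effect of 1.4%/year average real growth over 1961-2022:
years = 2022 - 1961
growth_factor = 1.014 ** years
print(f"{growth_factor:.1f}x")     # roughly 2.3x the 1961 income level
```

Six decades of compounding at 1.4% only a little more than doubles incomes, which is why the post treats the growth rate itself as the headline problem.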

Donation opportunities, yes. I'm not sure if donation opportunities in particular are something development economists look for; I'm not familiar with the literature. 

I broadly agree with the substance of your comment, I just admittedly find the tone off-puttingly abrasive ("delusional and arrogant" doesn't seem charitable), so I'll respectfully bow out of this exchange. 

1ryancbriggs2h
This is a nice post that should help people better grasp the magnitudes of income gaps across countries and the importance of growth. I'd only add two things:

1. Regarding growth, the issue in LICs usually isn't actually speeding up growth, it's sustaining growth [https://direct.mit.edu/rest/article-abstract/90/3/582/57751/The-Anatomy-of-Start-Stop-Growth]. Basically every country has experienced periods of Chinese-level growth rates (Malawi's GDP growth data here [https://data.worldbank.org/indicator/NY.GDP.PCAP.KD.ZG?locations=MW]). The main issue is that in poorer countries these spells are shorter and they are often followed by periods of negative growth.

2. The development community is very much aware of these arguments, as are EA people who overlap both communities (hi). The reason people gravitate towards things like bed nets is purely a matter of tractability. Somewhat crudely speaking, the last time outsiders tried seriously to reform institutions in LICs in order to promote growth was the period of IMF structural adjustment in the 80s and 90s, and that wasn't exactly a huge success. This is basically why there was a shift towards "randomista" development in the 2000s.

As a matter of history, the development community goes back and forth on this macro vs. micro question every 20ish years, so I suppose we're due for a swing towards growth again.
2DavidNash4h
I think the tricky part is finding where smaller donors can donate, similar to GiveWell. Those organisations have suggestions for large sums of money, but there is a gap in advice for individuals who want to give to global development and are okay with it not being backed purely by RCT evidence.

The Survival and Flourishing Fund (SFF) funds many longtermist, x-risk, and meta projects, and has distributed $18mm YTD. While SFF’s focus areas are similar to those of the FTX Future Fund, SFF has received few applications since the latest round closed in August.

This is a reminder that projects can apply to be considered for expedited speculation grants at any time. Speculation grants can be approved in days and paid out as quickly as within a month. Past speculation grants have ranged from $10,000 to $400,000, and applicants for speculation grants will automatically be considered for the next main SFF round. In response to the recent extraordinary need, Jaan Tallinn, the main funder of SFF, is doubling speculation budgets. Grantees impacted by recent events should apply.

SFF funds charities and projects hosted by organizations with charity status. You can get a better idea of SFF’s scope from its website and its recent grants. I encourage relevant grantees to consider applying to SFF, in addition to the current array of efforts led by Open Phil, Mercatus, and Nonlinear.

For general information about the Survival and Flourishing Fund, see:
https://survivalandflourishing.fund/


Disclaimer

Written quickly[1]. It's better to draft my objections poorly than to not draft them at all.

 

Introduction

I am sceptical of "foom"[2]; I suspect it may not be physically possible, feasible, or economically viable.
[Not sure yet what level of scepticism I endorse.]

I have a few object level beliefs that bear on it. I'll try and express them succinctly below (there's a summary at the end of the post for those pressed for time).

 

Note that my objections to foom are more disjunctive than they are conjunctive. Each is independently a reason why foom looks less likely to me.


Beliefs

I currently believe/expect the following to a sufficient degree that they inform my position on foom.

 

Diminishing Marginal Returns

1.0. Marginal returns to cognitive investment (e.g. compute) decay at a superlinear rate (e.g. exponential) across some relevant cognitive domains (e.g. some...

1titotal29m
Indeed, better programs will outcompete worse ones, so I expect AI to improve gradually over time. Which is the opposite of foom. I don't quite understand what you mean by a "meta-system that faces this problem". Do you mean the problem of having imperfect beliefs, or not being perfectly rational? That's literally every system!

In this case, "improve gradually over time" could take place over the course of a few days or even a few hours, so it's not actually antithetical to FOOM.

1𝕮𝖎𝖓𝖊𝖗𝖆2h
Thanks for the detailed reply. I'll try and address these objections later.

[Update: Work posted after September 23 2022 (and before whatever deadline we establish) will be eligible for the prizes. If you are sitting on great research, there's no need to delay posting until the formal contest announcement in 2023.]

At Open Philanthropy we believe that future developments in AI could be extremely important, but the timing, pathways, and implications of those developments are uncertain. We want to continually test our arguments about AI and work to surface new considerations that could inform our thinking.

We were pleased when the Future Fund announced a competition earlier this year to challenge their fundamental assumptions about AI. We believe this sort of openness to criticism is good for the AI, longtermist, and EA communities. Given recent developments, it seems likely that competition is no...

Thanks Jason. I can now confirm that that is indeed the case!

2Jason Schukraft38m
Hi Zach, thanks for the question and apologies for the long delay in my response. I'm happy to confirm that work posted after September 23 2022 (and before whatever deadline we establish) will be eligible for the prize. No need to save your work until the formal announcement.

Epistemic status: This post is meant to be a conversation starter rather than a conclusive argument. I don’t assert that any of the concerns in it are overwhelming, only that we have too quickly adopted a set of media communication practices without discussing their trade-offs.

Also, while this was in draft form, Shakeel Hashim, CEA’s new head of communications, made some positive comments on the main thesis suggesting that he agreed with a lot of my criticisms and planned to have a much more active involvement with the media. If so, this post may be largely redundant - nonetheless, it seems worth having the conversation in public.

CEA adheres to what they call the fidelity model of spreading ideas, which they formally introduced in 2017, though my sense is it...

-1Holly Morgan4h
I think it is justified. On thinking it's a warped summary:

* My reading of your original synopsis and conclusion: "The de facto EA policy is not to engage with journalists unless you're CEA-sanctioned and extremely confident they'll report your ideas exactly as you describe them. So CEA almost forced us to mess this nice man around, causing the situation to go much worse than it would have otherwise."
* My synopsis and conclusion: "As the decision-maker here I felt very unsure, sought input from others, and ultimately because of several reasons (one of which was CEA's wariness of journalists in certain situations) I decided not to engage, but I did give the owner the opportunity to get interviewed by this Economist journalist or a nicer Economist journalist. Given that this journalist doesn't seem very nice, that I didn't have the spoons (or beds?) to host a journalist, and worries about a popular piece attracting too many freeloaders, it's not clear whether welcoming him for a few days would have gone better."

On thinking it's deliberately written to paint CEA in a bad light: the whole post generally sounds quite exaggerated to me and only talks about downsides. Perhaps we should agree to disagree.

You're throwing together -

a) the thesis of the whole post, which is that CEA's approach hasn't been a good idea in retrospect; 

b) the claim that CEA have previously used their influence on funding to enforce their policy, which I didn't argue for and can't publicly discuss, but stand by;

c) the approach of the post, which is to assume that linking to the case for the policy's upsides was sufficient - and focus on the undiscussed downsides; and 

d) the synopsis of that particular event, which wasn't meant to imply that we did anything other than fol... (read more)

2John_Maxwell8h
You make good points, but there's no boolean that flips when "sufficient quantities of data [are] practically collected". The right mental model is closer to a multi-armed bandit IMO.
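For readers unfamiliar with the model John mentions, here is a minimal epsilon-greedy multi-armed bandit sketch (my illustrative example; the comment names only the mental model, not any particular algorithm, and the arm values and parameters below are made up):

```python
import random

def epsilon_greedy(true_means, steps=10_000, eps=0.1, seed=0):
    """Pull arms with noisy payoffs; explore with probability eps."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(true_means))   # explore a random arm
        else:
            arm = estimates.index(max(estimates))  # exploit the current best
        reward = rng.gauss(true_means[arm], 1.0)   # noisy evidence, never proof
        counts[arm] += 1
        # incremental mean update of the arm's estimated value
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts

# No boolean ever flips: the best arm comes to dominate, but some
# exploration continues for as long as the process runs.
counts = epsilon_greedy([0.1, 0.5, 0.9])
```

The point of the analogy: you never reach a moment where the data is "sufficient" and collection stops; you just shift more of your pulls towards the option the evidence currently favours.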

Each year, wealthy countries collectively spend around 178 billion dollars[1] (!!) on development aid.

Development aid has funded some of the most cost-effective lifesaving programs that have ever been run. Examples include PEPFAR, the US emergency AIDS relief programme rolled out at the height of the African AIDS pandemic, which estimates suggest saved 25 million lives[2] at a cost of some 85 billion dollars[3] ($3,400 per life saved, competitive with GiveWell’s very best). EAs working in global poverty will know just how difficult it is to achieve high cost-effectiveness at these scales.
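The per-life figure follows directly from the two footnoted estimates (a quick check, taking those estimates at face value):

```python
# Rough cost-effectiveness check for the PEPFAR figures cited above,
# using the footnoted estimates of ~$85B total cost and ~25M lives saved.

total_cost = 85e9        # USD
lives_saved = 25e6

cost_per_life = total_cost / lives_saved
print(cost_per_life)     # 3400.0 USD per life saved
```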

Development aid has also funded some of the very worst development projects conceived, in some instances causing outright harm to the recipients.

Development aid is spent with a large variety of goals in mind. Climate mitigation projects, gender equality campaigns, and free-trade...

We currently have GIZ (the German development agency with an unfortunate name) spending millions here in Northern Uganda on lots of close-to-zero-impact projects, while causing potential harm along the way. Their projects are so painfully bad it sometimes makes me feel physically sick, and I've failed completely in my few efforts to talk to GIZ members about what they do.

I'd be very curious to hear more about this, in particular specific projects and how they have failed or caused harm. Or is it basically all of the projects in this list?

3ElliotJDavies5h
I've been curious about foreign aid for a while, but never delved into researching it. I would be curious to follow more posts or a newsletter on it. Some questions off the top of my head:

* How effective (i.e. $ per DALY/QALY) is x country's aid budget in y year, and how does this compare to GiveWell's best interventions?
* What does the talent pipeline look like for foreign aid? Who works in these departments/orgs?
* What underlying assumptions or biases influence foreign aid?
* Case studies of projects done well, and case studies of projects done poorly.

Also, curious if you have any book recommendations?
10Zoe Williams14h
Post summary (feel free to suggest edits!): Wealthy countries spend a collective $178B on development aid per year - 25% of all giving worldwide. Some aid projects have been cost-effective on a level with GiveWell’s top recommendations (e.g. PEPFAR [https://en.wikipedia.org/wiki/President%27s_Emergency_Plan_for_AIDS_Relief]), while others have caused outright harm. Aid is usually distributed via a several-step process:

1. Decide to spend money on aid. Many countries signed a 1970 UN resolution to spend 0.7% of GNI on official development assistance.
2. The government decides a general strategy / principles.
3. The government passes a budget, assigning $s to different aid subcategories.
4. The country’s aid agency decides on projects. Sometimes this is donating to intermediaries like the UN or WHO, sometimes it’s direct.
5. Projects are implemented.

This area is large in scale; tractability is uncertain, but there are many pathways and some past successes (e.g. a grassroots EA campaign in Switzerland redirected funding, and the US aid agency ran a cash-benchmarking experiment with GiveDirectly), and few organisations focus on this area relative to its scale. The author and their co-founder have been funded to start an organization in this area. Get in touch if you’re interested in Global Development and Policy.

(If you'd like to see more summaries of top EA and LW forum posts, check out the Weekly Summaries [https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN] series.)

I want to run the listening exercise I'd like to see.

  1. Get popular suggestions
  2. Run a polis poll
  3. Make a google doc where we research consensus suggestions/ near consensus/consensus for specific groups
  4. Poll again

Stage 1

Give concrete suggestions for community changes. 1 - 2 sentences only.

Upvote if you think they are worth putting in the polis poll and agreevote if you think the comment is true.

Agreevote if you think they are well-framed.

Aim for them to be upvoted. Please add suggestions you'd like to see.

I'll take the top 20 - 30

I will delete/move to comments top-level answers that are longer than 2 sentences.

Stage 2

Post here: https://forum.effectivealtruism.org/posts/9R5eJhimR3QtjNFmP/what-specific-changes-should-we-as-a-community-make-to-the-1

Polis poll here: https://pol.is/5kfknjc9mj 

Broadly agree. One recent EAGx team did invite a vocal critic, but they declined. The EAG team also does lots of outreach to non-EAs whose work is relevant.

3James Herbert5h
I don't think it's been publicly written down anywhere; I've only been discouraged via private comms. E.g., I've been approached by journalists, I've then mentioned it to CEA, and CEA has explicitly discouraged me from engaging. To their credit, when I've explained my reasoning they've said ok and have even provided media training. But there's definitely been explicit discouragement nonetheless.
1Brendon_Wong6h
I agree! As one example, there are large opportunity costs that arise from how savings are being managed: https://forum.effectivealtruism.org/posts/vuG9x6PNemhCzaMZb/how-you-can-counterfactually-send-millions-of-dollars-to-ea

(Note: This essay was largely written by Rob, based on notes from Nate. It’s formatted as Rob-paraphrasing-Nate because (a) Nate didn’t have time to rephrase everything into his own words, and (b) most of the impetus for this post came from Eliezer wanting MIRI to praise a recent OpenAI post and Rob wanting to share more MIRI-thoughts about the space of AGI organizations, so it felt a bit less like a Nate-post than usual.)


Nate and I have been happy about the AGI conversation seeming more honest and “real” recently. To contribute to that, I’ve collected some general Nate-thoughts in this post, even though they’re relatively informal and disorganized.

AGI development is a critically important topic, and the world should obviously be able to hash out such topics in...

I agree that publishing results of the form "it turns out that X can be done, though we won't say how we did it" is clearly better than publishing your full results, but I think it's much more harmful than publishing nothing in a world where other people are still doing capabilities research. 

This is because it seems to me that knowing something is possible is often a first step to understanding how. This is especially true if you have any understanding of where this researcher or organisation was looking before publishing the result.

 

I a... (read more)

3Emrik6h
[weirdness-filter: ur weird if you read m commnt n agree w me lol] Doing private capabilities research seems not obviously net-bad, for some subcategories of capabilities research. It constrains your expectations about how AGI will unfold, meaning you have a narrower target for your alignment ideas (incl. strategies, politics, etc.) to hit. The basic case: If an alignment researcher doesn't understand how gradient descent works, I think they're going to be less effective at alignment. I expect this to generalise for most advances they could make in their theoretical understanding of how to build intelligences. And there's no fundamental difference between learning the basics and doing novel research, as it all amounts to increased understanding in the end. That said, it would in most cases be very silly to publish about that increased understanding, and people should be disincentivised from doing so. (I'll delete this comment if you've read it and you want it gone. I think the above can be very bad advice to give some poorly aligned selfish researchers, but I want reasonable people to hear it.)