HaydnBelfield

CEA grew a lot in the past year

Very cool that you've mentioned it previously - nice that we've both been thinking about it!

One proposal is a slight modification. To use your example, you could (a) randomise all 250 places across the 500 applicants, or (b) rank the 500, give the 'treatment' to the top 150 outright, then randomise the remaining 100 'treatments' among the 200 applicants around the cutoff (100 above and 100 below it). I think either proposal, or a regression discontinuity design (RDD), would be good - but I'd defer to advice from actual EA experts on RCTs.
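If it helps, here's a minimal Python sketch of design (b), under the assumptions above (500 ranked applicants, 250 places); the function and variable names are mine, purely for illustration:

```python
import random

def assign_places(ranked_applicants, n_guaranteed=150, n_lottery=100, window=100):
    """Design (b): treat the top applicants outright, then randomise the
    remaining places among applicants just above and just below where the
    deterministic cutoff would fall."""
    guaranteed = ranked_applicants[:n_guaranteed]
    cutoff = n_guaranteed + n_lottery  # deterministic cutoff, e.g. rank 250
    # Lottery pool: `window` applicants on each side of the cutoff (200 total)
    pool = ranked_applicants[cutoff - window : cutoff + window]
    winners = random.sample(pool, n_lottery)
    treated = guaranteed + winners
    control = [a for a in pool if a not in set(winners)]  # randomised out
    return treated, control

# Example: 500 ranked applicants, 250 places
applicants = [f"applicant_{i}" for i in range(1, 501)]
treated, control = assign_places(applicants)
assert len(treated) == 250 and len(control) == 100
```

The 100 applicants randomised out of the pool then form a ready-made control group for the 100 randomised in.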

CEA grew a lot in the past year

Congratulations on this growth, really exciting!

Have you thought about including randomisation to facilitate evaluation?

E.g. you could include some randomisation in who is invited to events (of those who applied), which universities/cities get organisers (of those on the shortlist), etc. This could also be done with 80k coaching calls - I don't know if it has been tried.

You then track who did and didn't get the treatment, to see what effect it had. This doesn't have to involve denying 'treatment' to people/places - presumably there are more applicants than there are places - you introduce randomisation at the cutoff.

This would allow some causal inference (RCT/randomista-style: does x cause y?) as to what effect these treatments are having (vs. the control, and the null hypothesis of no effect). This could help justify impact to the community and funders. I'm sure people at e.g. J-PAL, Rethink, etc. could help with research design.
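To gesture at what the analysis could look like, here's a toy Python sketch comparing outcomes for those randomised in vs. out (all numbers and variable names are made up for illustration, not real data):

```python
from scipy import stats

# Hypothetical tracked outcome for the lottery pool, e.g. 1 = still highly
# engaged a year later, 0 = not (toy numbers only).
treated_outcomes = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # randomised in
control_outcomes = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # randomised out

# Estimated effect at the cutoff: difference in mean outcomes
effect = (sum(treated_outcomes) / len(treated_outcomes)
          - sum(control_outcomes) / len(control_outcomes))

# Welch's t-test against the null hypothesis of no effect
t_stat, p_value = stats.ttest_ind(treated_outcomes, control_outcomes,
                                  equal_var=False)
print(f"estimated effect: {effect:.2f}, p-value: {p_value:.3f}")
```

With real data you'd of course want a properly pre-registered design with covariates - which is where the J-PAL/Rethink folks would come in.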

Bounty to disclose new x-risks

Interesting idea. Wanted to throw in a few reflections from working at the Centre for the Study of Existential Risk for four years.

Just want to give a big plus one to the infohazards section. Several states and terrorist groups have been inspired by bioweapons information in the public domain - it's a real problem. At CSER we've occasionally thought up what might be a new contributor to existential risk - and have decided not to publish on it. I'm sure Anders Sandberg has come up with tonnes too (thankfully he's on the good side!) - and has also published good stuff on them. Very important bit.

I imagine you'd get lots of kooks writing in (e.g. we get lots of Biblical prediction books in the post), so you'd need some way to sift through that. You'd also need some way to handle disagreement (e.g. I think climate change is a major contributor to existential risk; some other researchers in the field do not). Also worth thinking about incentives - in a way, this is a prize for people to come up with new dangerous ideas.

What is the EU AI Act and why should you care about it?

Excellent overview, and I completely agree that the AI Act is an important policy for AI governance.

One quibble: as far as I know, the Center for Data Innovation is just a lobbying group for Big Tech - I was a little surprised to see it listed in "public responses from various EA and EA Adjacent organisations".

Lessons for AI governance from the Biological Weapons Convention

Hi Aryan,

Cool post, very interesting! I'm fascinated by this topic - the PhD thesis I'm writing is on nuclear, bio and cyber weapons arms control regimes and what lessons can be drawn for AI. So obviously I'm very into this and want to see more work done in this area. Really excellent to see you exploring the parallels. A few thoughts:

  • Your point on 'lock-in' seems crucial. It currently seems to me that there are 'critical junctures' (Capoccia) in which regimes get set, and then it's very hard to change them - see e.g. the failure to control nukes or cyber in the early years. ABM is a complex example: very, very hard to get back on the table, but Rumsfeld and others managed it after 30 years of battling.
  • My impression is that the BWC (and CWC) - the meetings/conferences etc - are often seen as arms control regimes that are pretty good at keeping up with technical developments - maybe a point in favour of centralisation.
  • Just on the details of the BWC, a few things seem worth mentioning. (Nitpicky: when the UK proposed a BWC, it said verification wasn't technically possible at the time [1].) First, the Nixon Administration thought BW were militarily useless and had already unilaterally disarmed, so verification was less of a priority [2]. Second, one of the reasons to want a Verification Protocol in the 90s was the revelation that the Soviets had cheated over the 70s-80s, building the biggest BW program ever. Third, the Bush Administration rejected the Verification Protocol in 2001 (pre-9/11!), in its first year - at the same time as it was ripping up START III, Kyoto, and the ABM Treaty. This all suggests that state interest, and elites' changing conceptions of state interest, can create space for change.

[1] http://www.cbw-events.org.uk/EX1968.PDF 

[2] https://www.belfercenter.org/publication/farewell-germs-us-renunciation-biological-and-toxin-warfare-1969-70

https://wmdcenter.ndu.edu/Publications/Publication-View/Article/627136/president-nixons-decision-to-renounce-the-us-offensive-biological-weapons-progr/ 

What EA projects could grow to become megaprojects, eventually spending $100m per year?

Interesting first point, but I disagree. To me, the increased salience of climate change in recent years can be traced back to the 2018 Special Report on Global Warming of 1.5 °C (SR15), and in particular the meme '12 years to save the world'. That seems to have contributed to the start of School Strike for Climate, Extinction Rebellion and the Green New Deal. Another big new scary IPCC report on catastrophic climate change would further raise the salience of this issue-area.

I was thinking that $100m would be for all four of these topics, and that we'd get cause-prioritisation value of information (VOI) across all four of these areas. $100m for impact and VOI across all four seems pretty good to me (though I'm a researcher, not a funder!).

On solar geoengineering, I'm not an expert on it and am not arguing for it myself - merely reporting that it's top of the 'asks' list for orgs like Silver Lining.

I actually rather like the framing in Xu & Ram - I don't think we know enough about >5 °C scenarios, so describing them as "unknown, implying beyond catastrophic, including existential threats" seems pretty reasonable to me. In any case, I cited that more to demonstrate the lack of research that's been done on these scenarios.

Most research/advocacy charities are not scalable

I think it's a really good point that there's something very different between research/policy orgs and orgs that deliver products and services at scale. I basically agree, but I'd slightly tweak this to:
"It is very hard for a charity to scale to more than $100 million per year without delivering a physical product or service."

Because digital orgs/companies that deliver a digital service (GiveDirectly, Facebook/Google, etc.) obviously can scale to $100 million per year.

What EA projects could grow to become megaprojects, eventually spending $100m per year?

Hell yeah! Get JGL to star - https://www.eaglobal.org/speakers/joseph-gordon-levitt/
