RyanCarey

Researcher of causal models and human-aligned AI at FHI | https://twitter.com/ryancareyai

Comments

Opinion: Digital marketing is under-utilized in EA

Ultimately it's the funder who'll judge that. But if I had all of the donors' funds, maybe I'd pay ~$1B to double the size of the EA movement (~3k->~6k, i.e. ~$333k per additional member) while preserving its average quality?

Opinion: Digital marketing is under-utilized in EA

I think it'd be worthwhile to try advertising longtermist websites and books to people (targeting by interests and location to the extent possible). It's been tried a bit (e.g. at the tens-of-thousands-of-dollars scale) some years ago, and it was already nearly at the threshold for cost-effectiveness - and funding availability has more than doubled since then. What I don't know is what further experiments have been run in the last two years...

RyanCarey's Shortform

Agreed that in her outlying case, most of what she's done is tap into a political movement in ways we'd prefer not to. But is that true of high-performers generally? I'd hypothesise that elite academic credentials + policy-relevant research + willingness to be political are enough to get people into elite political positions (maybe a tier lower than hers, a decade later), but it'd be worth knowing how all the variables in these different cases contribute.

RyanCarey's Shortform

A case of precocious policy influence, and my pitch for more research on how to get a top policy job.

Last week Lina Khan was appointed as Chair of the FTC, at age 32! How did she get such an elite role? At age 11, she moved to the US from London. In 2014, she studied antitrust topics at the New America Foundation (a centre-left think tank). She got a JD from Yale in 2017, publishing work relevant to the emerging Hipster Antitrust movement at the same time. In 2018, she worked as a legal fellow at the FTC. In 2020, she became an associate professor of law at Columbia. This year - 2021 - she was appointed by Biden.

The FTC chair role is an extraordinary level of success to reach at such a young age. But it kind-of makes sense that she was able to get such a role: she has elite academic credentials that are highly relevant to the role, has ridden the Hipster Antitrust wave, and has experience of, and willingness to work in, government.

I think biosecurity and AI policy EAs could try to emulate this. Specifically, they could try to gather some elite academic credentials while also engaging with regulatory issues and working for regulators or, more broadly, in the executive branch of government. Jason Matheny's success is arguably a related example.

This also suggests a possible research agenda on how people get influential jobs in general, which would be very useful for many talented young EAs to know about. Similar to how Wiblin ran some numbers in 2015 on the chances of a seat in Congress given a background at Yale Law, we could ask about the White House, external political appointments (such as FTC commissioner), and the judiciary. This ought to be quite tractable: all the names are public, e.g. here [Trump years] and here [Obama years], and most of the CVs are in the public domain - it just needs doing.

Forum update: New features (June 2021)

I might've asked this before, but would we be in a better place if posts just counted for 2-3x karma (rather than the previous 10x or the current 1x)?

What should CEEALAR be called?

For the building: Athena House? Athena Centre? For the charity: the name should convey that you give people funding and autonomy to focus together on their high-priority work. Independent Research Centre? Impact Hub?

How well did EA-funded biorisk organisations do on Covid?

Ah. If global IFR is worse than rich-countries' IFR, that seems to imply that developing countries had lower survival rates, despite their more favourable demographics, which would be sad.

How well did EA-funded biorisk organisations do on Covid?

Was the prediction for infection fatality rate (IFR) or case fatality rate (CFR)? And for high-income countries or all countries? Globally, the CFR is ~2% (3.7M deaths / 173M confirmed cases), but the IFR is <0.66%, because <1/3 of infections were detected as cases (and IFR = CFR × the detected fraction of infections).
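(A minimal sketch of that arithmetic, using the figures quoted above; the <1/3 detection fraction is the comment's assumption, not a measured value:)

```python
# CFR -> IFR bound, using the global totals quoted above (~June 2021).
deaths = 3.7e6   # global confirmed COVID-19 deaths
cases = 173e6    # global confirmed cases

cfr = deaths / cases  # case fatality rate = deaths / confirmed cases, ~2.1%

# IFR = deaths / infections = CFR * (cases / infections).
# If under 1/3 of infections were detected as cases, then:
ifr_bound = cfr * (1 / 3)  # ~0.71%; rounding CFR down to 2% gives the <0.66% figure

print(f"CFR ~ {cfr:.2%}, IFR < {ifr_bound:.2%}")
```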

Max_Daniel's Shortform

I think PAI (the Partnership on AI) exists primarily for companies to contribute to beneficial AI and harvest PR benefits from doing so, whereas GPAI (the Global Partnership on AI) is a diplomatic apparatus for Trudeau and Macron to influence the conversation surrounding AI.

Draft report on existential risk from power-seeking AI

The upshot seems to be that Joe, 80k, the AI researcher survey (2008), and Holden-2016 are all at about a 3% estimate of AI risk, whereas AI safety researchers now are at about 30%. The latter is a bit lower (or at least differently distributed) than Rob expected, and seems higher than the estimates of Joe's advisors.

The divergence is big, but pretty explainable, because it accords with the direction that apparent biases point in: for the 3% camp, the credibility of one's name, brand, or field benefits from making lowball estimates, whereas the 30% camp is self-selected for severe concern. And risk perception all round has increased a bit in the last 5-15 years due to deep learning.
