RyanCarey

Researcher of causal models and human-aligned AI at FHI | https://twitter.com/ryancareyai

Comments

Avoiding Munich's Mistakes: Advice for CEA and Local Groups

Interesting that one of the two main hypotheses advanced in that paper is that media is influencing public opinion, but the medium in question is not the internet; it's TV!

The rise of 24-hour partisan cable news provides another potential explanation. Partisan cable networks emerged during the period we study and arguably played a much larger role in the US than elsewhere, though this may be in part a consequence rather than a cause of growing affective polarization. Older demographic groups also consume more partisan cable news and have polarized more quickly than younger demographic groups in the US (Boxell et al. 2017; Martin and Yurukoglu 2017). Interestingly, the five countries with a negative linear slope for affective polarization all devote more public funds per capita to public service broadcast media than three of the countries with a positive slope (Benson and Powers 2011, Table 1; see also Benson et al. 2017). A role for partisan cable news is also consistent with visual evidence (see Figure 1) of an acceleration of the growth in affective polarization in the US following the mid-1990s, which saw the launch of Fox News and MSNBC.

(The other hypothesis is "party sorting", wherein people move to parties that align more closely with their ideology and social identity.)

Perhaps campaigning for more funding for PBS, or somehow countering Fox and MSNBC, could be really important for US democracy.

Also, if TV has been so influential, it suggests that even if online media isn't yet influential at the population scale, it may be influential for smaller groups of people, and that it will be extremely influential in the future.

When does it make sense to support/oppose political candidates on EA grounds?
[Politicisation] will reduce EA's long-term impact: I have to confess I've never really understood this argument. I can think of numerous examples of social movements that have been both highly politicized and tremendously impactful.

Right, but none that have done so without risking a big fight. The status quo is that EA consists of a few thousand people, often trying to enter important technocratic roles and achieve change without provoking big political fights (and being many-fold more efficient by doing so). The problem is that political EA efforts can inflict effectiveness penalties on other EA efforts. If EA becomes associated with a side (e.g. if "caring about the long-term" comes to be seen as a partisan issue), then other EA efforts may become associated with that side too: long-term security legislation gets drawn into large political battles, diminishing the effectiveness of technocratic efforts many-fold.

By bringing EA into politics, you're basically taking a few people who normally use scalpels and arming them for a large-scale machine-gun fight. The risk is not just losing a particular fight, but inflaming a multi-front war.

There are a bunch of ways of mitigating the effectiveness penalties that one's political efforts impose on others. The costs are lower if political efforts are taken individually, so that they're not seen as a systematic EA effort. They're also lower if they come from less prominent people, e.g. if Will and Toby stay out of the fray. And it's less costly if the activity is symmetric between parties. For example, the cost of affiliating with a Rubio at this point might be less than the cost of affiliating with a Buttigieg, or could even be net positive.

RyanCarey's Shortform

Affector & Effector Roles as Task Y?

Longtermist EA seems relatively strong at thinking about how to do good, and at raising funds for doing so, but relatively weak in affector organs that tell us what's going on in the world, and effector organs that influence the world. Three examples of ways that EAs can actually influence behaviour are:

- working in & advising US nat sec

- working in UK & EU governments, in regulation

- working in & advising AI companies

But I expect this is not enough, and that our (a/e)ffector organs are bottlenecking our impact. To be clear, it's not that these roles aren't mentally stimulating - they are. It's just that their impact lies primarily in implementing ideas and uncovering practical considerations, rather than in an ivory tower's pure, deep thinking.

The world is quickly becoming polarised between the US and China, and this means that certain (a/e)ffector organs may be even more neglected than others. We may want to promote: i) working as a diplomat, ii) working at diplomat-adjacent think tanks, such as the Asia Society, iii) working at relevant UN bodies relating to disarmament and bioweapon control, and iv) working at UN-adjacent bodies that push for disarmament and similar goals. These roles often reside in large entities that can absorb hundreds or thousands of new staff at a wide range of skill levels, so perhaps many people who are currently "earning to give" should move into these "affector" or "effector" roles (as well as those mentioned above, in other relevant parts of national governments). I'm also curious whether 80,000 Hours has considered diplomatic roles - I couldn't find much on a cursory search.

Getting money out of politics and into charity

Consequentialists and EAs have certainly been interested in these questions. We were discussing the idea back in 2009. Toby Ord has written a relevant paper.

I'm not donating to politics myself, so I wouldn't use it. But I would say that if an election costs ~$10B, and you might move 0.1% of that into charities for a cost of $0.25M, that seems like a good deal: $10M redirected per $0.25M spent. The obvious criticism, I think, is: "couldn't they benefit more from keeping the money?" I think this is surmountable, because donating it may be psychologically preferable. Another reservation would be "you should figure out what happened with Repledge before trying to repeat it", which I think you basically should do.
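To spell out that arithmetic, here's a minimal back-of-envelope sketch (the numbers are just the rough guesses above, not real data):

```python
# Back-of-envelope leverage estimate, using the rough figures above.
election_spend = 10e9    # ~$10B total election spending
redirect_rate = 0.001    # 0.1% of that moved into charity
platform_cost = 0.25e6   # ~$250k to build and run the platform

moved_to_charity = election_spend * redirect_rate  # $10M
leverage = moved_to_charity / platform_cost        # 40x

print(f"${moved_to_charity:,.0f} to charity per ${platform_cost:,.0f} spent"
      f" (~{leverage:.0f}x leverage)")
```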

I guess the funding that you initially need is probably significantly less than $250k, so it might make sense to apply for the February deadline of the EA Infrastructure Fund. If you're trying to do things before November (which seems difficult), then you might apply "off-cycle". There's also a range of other funders of varying degrees of plausibility, such as OpenPhil (mostly for funding amounts >$100k), the funders behind progress studies (maybe the Collisons), the Survival and Flourishing Fund, the Long-term Future Fund, etc.

Re choice of charities: we do think that charities vary in effectiveness by many orders of magnitude, so it probably does make sense to be selective. In particular, most people who've studied the question think that charities focused on long-term impact can be orders of magnitude more effective than those that aren't. So a lot of EAs (including me) work on catastrophic threats. Focusing there would also be a good idea if you believe Haidt's ideas about common threats making common ground, which I find appealing. See also his Asteroids Club. This could support choices like the Nuclear Threat Initiative and Hopkins' Center for Health Security, discussed here. To the extent that you were funding such charities, I think the case for effectiveness (and the case for EA funding) would be stronger.

The ideal choice of charities could also depend to some extent on other design choices: 1) do you want to allow trades other than $1:$1? 2) do you allow people to offer a trade specific to one particular charity? On (1), one argument in favour would be that if one party has a larger funding base than the other, then a $1:$1 trade might favour them. Another would be that flexible ratios naturally balance out the problem of the charities being preferred by one side more than the other. One argument against would be that people might view 1:1 as fairer, and donate more. On (2), arguments in favour would be that diversity can better satisfy people's preferences, and that you might fund certain charities too much if you just choose one. The argument against would be that people really hate choosing between charities. Overall, for (1) I'd guess "no". For (2), I'd guess "no" again, although I think it could be great to have a system where the charity rotates each week - it could help with promoting the app as well! But these are of course no more than guesses.
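For what it's worth, here's a hypothetical sketch of the matching step that question (1) is about - a simplified model I'm assuming for illustration, not a description of how Repledge actually worked (the function and its names are mine):

```python
# Hypothetical sketch of a Repledge-style matching step: opposing pledges
# are matched at a fixed ratio, and matched amounts go to charity instead
# of the campaigns. Names and the model itself are illustrative.

def match_pledges(side_a_pledged: float, side_b_pledged: float,
                  ratio: float = 1.0):
    """Match $ratio of side A's money against each $1 of side B's.

    Returns (to_charity, a_remainder, b_remainder). With ratio=1.0 this
    is the plain $1:$1 scheme; ratio > 1 is one way to compensate a side
    with a smaller funding base.
    """
    # The matchable amount is limited by whichever side runs out first.
    b_matched = min(side_b_pledged, side_a_pledged / ratio)
    a_matched = b_matched * ratio
    to_charity = a_matched + b_matched
    return to_charity, side_a_pledged - a_matched, side_b_pledged - b_matched

# Example: $120k pledged by A, $100k by B, matched 1:1. $200k goes to
# charity; A's campaign keeps the unmatched $20k.
print(match_pledges(120_000, 100_000))  # (200000.0, 20000.0, 0.0)
```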

Anyway, those are all details - it seems like an exciting project!

Feedback Request on EA Philippines' Career Advice Research for Technical AI Safety

I noticed though that your answers for #5, 7, and 8 were for the questions for the expert interviews I planned on doing, and not for questions 5-7 in the "Questions we'd like feedback on". You basically were able to answer #5 already there, so I'd just like your thoughts on #6 and #7 (on AI policy work and what questions we should ask people at local firms).

Ah, oops! 6. I'm not sure AI policy is that important in the Philippines, given that relatively little AI research is happening there compared to the US/UK. 7. Relevance to AI safety is a bit tricky to gauge, and doesn't always matter that much for career capital. It might be better to just ask: do I get to do research activities, and does the team publish research papers?

On A, yeah, it could make sense to push for nuclear power, or to become a local biosecurity expert. To be clear, the US-China peace issue is not my area of expertise, just something that might be interesting to look into. I'm not thinking of something as simple as fighting for certain waters to be owned by China or the Philippines, but more of finding ways to increase understanding and reinforce peace. Roughly: (improved trade/aid/treaties) -> (decreased tensions between China and ASEAN) -> (reduced chance of US-China war) -> (reduced risk of technology arms races between US and China) -> (reduced existential risk). So maybe people in the Philippines can build links of trade, aid, treaties, etc. between China/US and neutral countries. These things are probably done by foreign policy experts, diplomats and politicians, in places including embassies, the department of foreign affairs, national security organisations, and think tanks and universities.

Feedback Request on EA Philippines' Career Advice Research for Technical AI Safety

I had a quick look over it. I basically agree with the article. Here are responses to some of your feedback questions:

2. Might be good to clarify that if you start a degree in the US/UK, it makes it easier to get a work visa and a job afterwards.

3. You could argue that there are little bits in Switzerland, the Czech Republic, and Israel, and not so much in Australia anymore, but the US, UK, and Canada are the main ones.

4. Yes, it's possible. But generally you want to have some collaborators and/or be a professor. For the latter, you'd want to get a degree from a top-30 university worldwide and then pursue a professorship back home, so it wouldn't necessarily be easy.

And likewise for some of the expert interview questions:

5. You could check out Ajeya's report for some work on plausible timelines.

7. Maybe, but it's hard. Either you'd need to find a startup that offers remote software work, or get a long-term job at a university.

8. Same as for non-Filipino undergrads: aim for papers and strong references.


Also, here are two other big picture elements of feedback:

A. A bigger-picture question is: how can Filipinos best help to reduce existential risk? Often, the answer will be the same as if they were non-Filipinos - AI safety, biosecurity, or whatever. But one idea is that Filipino EAs could help with building US-China peace. The Philippines is close to China, and involved in major territorial disputes over the South China Sea. It's in ASEAN, which is big, close to China, and somewhat neutral. So maybe it's useful to work for the department of foreign affairs or the military, and try to reduce the chances of global conflict emerging from the South China Sea, or help to ensure that countries in ASEAN trade with both China and the US.

B. A lot of considerations for Filipino EAs interested in AI safety will be similar for EAs anywhere outside Anglosphere or EU countries. But only a small fraction (~1%) of those people are in the Philippines. So for articles like this, it might be better to write for that larger audience.

RyanCarey's Shortform

EAs have reason to favour Top-5 postdocs over Top-100 tenure?

Related to Hacking Academia.

A bunch of people face a choice between being a postdoc at one of the top 5 universities, and being a professor at one of the top 100 universities. For the purpose of this post, let's set aside the possibilities of working in industry, grantmaking and nonprofits. Some of the relative strengths (+) of the top-5 postdoc route are accentuated for EAs, while some of the weaknesses (-) are attenuated:

+greater access to elite talent (extra-important for EAs)

+larger university-based EA communities, many of which are at top-5 universities

-less secure research funding (less of an issue in longtermist research)

-less career security (less important for high levels of altruism)

-can't be the sole supervisor of a PhD student (less important if one works with a full professor who can supervise, e.g. at Berkeley or Oxford).

-harder to set up a centre (this one does seem bad for EAs, and hard to escape)

There are also considerations relating to EAs' ability to secure tenure. Sometimes this is decreased a bit by their research running against prevailing trends.

Overall, I think that some EAs should still pursue professorships, especially to set up research centres or to establish a presence in an influential location, but that we will want more postdocs than is usual.
