Researcher of causal models and human-aligned AI at FHI | https://twitter.com/ryancareyai


RyanCarey's Shortform

There's a new center in the Department of State, dedicated to the diplomacy surrounding new and emerging tech. This seems like a great place for Americans to go and work if they're interested in arms control in relation to AI and emerging technology.

Confusingly, it's called the "Bureau of Cyberspace Security and Emerging Technologies (CSET)". So we now have to distinguish the State CSET from the Georgetown one, the "Center for Security and Emerging Technology".

EA Forum feature suggestion thread

Across the internet as a whole. I agree that a lot of discourse happens on Facebook, some of it within groups. But in terms of serious, public conversation, I think a lot of it was initially on newsgroups/mailing lists, then blogs, and now blogs (linked from Twitter) and podcasts.

EA Forum feature suggestion thread

I worry a bit that all the suggestions are about details, whereas the macro trend is that public discourse is moving toward Twitter, and toward blog content linked from Twitter. One thing that could help attract a new audience would be to revive the EA Forum Twitter account, whether automatically or manually.

RyanCarey's Shortform

This framing is not quite right, because it implies that there's a clean division of labour between thinkers and doers. A better claim would be: "we have a bunch of thinkers, now we need a bunch of thinker-doers".

Careers Questions Open Thread

I'm currently studying for a statistics PhD while researching AI safety, after a bioinformatics MSc and a medical undergrad. I agree with some parts of this, but would contest others.

I agree that:

  • What you do within a major can matter more than which major you choose.
  • It's easier to move from math and physics to CS.

But it's still easier to move from CS to CS than from physics or pure math. And CS is where a decent majority of AI safety work is done. The second-most prevalent subject is statistics, because it contains statistical learning (a.k.a. machine learning) and causal inference, although these areas of research are equally pursued in CS departments. So if impact were the only concern, starting with CS would still be my advice, followed by statistics.

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

Which annual filings? Presumably the investment went to the for-profit entity.

Books / book reviews on nuclear risk, WMDs, great power war?

I liked Command and Control, The Doomsday Machine, and The Dead Hand, but didn't get many interesting ideas from The Making of the Atomic Bomb.

Only some parts are relevant to nuclear risk, but Spy Schools by Daniel Golden taught me some interesting stuff about science and espionage. 

RyanCarey's Shortform

Translating EA into Republican. There are dozens of EAs in US party politics, Vox, the Obama administration, Google, and Facebook. There are hardly any in the Republican Party, at the WSJ, appointed by Trump, or working for Palantir. There are a dozen community groups in places like NYC, SF, Seattle, Berkeley, Stanford, Harvard, and Yale, but none in Dallas, Phoenix, Miami, the US Naval Laboratory, the West Point Military Academy, etc - the libertarian-leaning GMU economics department being a sole possible exception.

This is despite the fact that people passing through military academies are disproportionately likely to go on to work on technological dangers in the military and public service, while those academies are also less competitive to enter than more liberal colleges.

I'm coming to the view that, just as there has been a serious effort to rework EA ideas to align with Chinese politics and culture, we need to translate EA into Republican, and that this should be a multi-year, multi-person project.
