RyanCarey

Researcher of causal models and human-aligned AI at FHI | https://twitter.com/ryancareyai

RyanCarey's Comments

EA Forum Prize: Winners for December 2019

Larks' post was one of the best of the year, so it's nice of him to effectively make a hundreds-of-dollars donation to the EA Forum Prize!

The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR)

Have you heard of Neumeier's naming criteria? They're designed for businesses, but I think they're an OK heuristic. I'd agree that there are better names available, e.g.:

  • CEEALAR. Distinctiveness: 1, Brevity: 1, Appropriateness: 4, Easy spelling and pronunciation: 1, Likability: 2, Extendability: 1, Protectability: 4.
  • Athena Centre. 4, 4, 4, 4, 4, 4, 4.
  • EA Study Centre. 3, 3, 4, 3, 3, 3, 3.

RyanCarey's Shortform

Tom Inglesby on the nCoV response is one recent example from just the last few days. I've generally known Stefan Schubert, Eliezer Yudkowsky, Julia Galef, and others to make very insightful comments there. I'm sure there are many other examples.

Generally speaking, though, the philosophy would be to go to the platforms that top contributors are actually using and offer our services there, rather than trying to push them onto ours, or at least to use the former approach to complement the latter.

RyanCarey's Shortform

Possible EA intervention: just like the EA Forum Prizes, but for the best Tweets (from an EA point-of-view) in a given time window.

Reasons this might be better than the EA Forum Prize:

1) Popular tweets have greater reach than popular forum posts, so this could promote EA more effectively.

2) The prizes could go to EAs who are not regular forum users, which would also help to promote EA beyond the forum's existing audience.

One would have to check the rules and regulations.

The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*)

Interesting point of comparison: the Conservative Party has ~35% as many members, and has held government ~60% more often over the last 100 years, so the leverage per member is ~4.5x higher. Although for many people, their ideology means they could only credibly be involved in one of the two parties.
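A quick back-of-the-envelope check of that ratio, using the rough figures above (the membership and time-in-government numbers are the approximations stated in this comparison, not exact party statistics):

```python
# Rough leverage-per-member check, using the approximate figures above.
relative_membership = 0.35          # Conservatives have ~35% as many members as Labour
relative_time_in_government = 1.6   # ~60% more time in government over the last 100 years

leverage_per_member = relative_time_in_government / relative_membership
print(f"~{leverage_per_member:.1f}x")  # ~4.6x, i.e. roughly the ~4.5x figure quoted above
```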

Long-term investment fund at Founders Pledge

The obvious approach would be to invest in the stock market by default (or maybe a leveraged ETF?), and only move money from that into other investments when they have higher EV.
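A minimal sketch of that default-allocation rule, just to make the logic concrete; the class, function, and numbers below are illustrative assumptions, not an actual Founders Pledge policy:

```python
from dataclasses import dataclass

@dataclass
class Investment:
    name: str
    expected_return: float  # subjective expected-value estimate, as a fraction per year

def choose_holding(default: Investment, alternatives: list[Investment]) -> Investment:
    """Stay in the default holding (e.g. a broad index fund or leveraged ETF)
    unless some alternative has a strictly higher expected value."""
    best = max(alternatives, key=lambda a: a.expected_return, default=None)
    if best is not None and best.expected_return > default.expected_return:
        return best
    return default

# Illustrative usage with made-up numbers
index_fund = Investment("broad index fund", 0.07)
candidates = [Investment("niche opportunity", 0.12), Investment("cash-like instrument", 0.02)]
print(choose_holding(index_fund, candidates).name)  # -> niche opportunity
```

In practice "higher EV" would involve more careful judgement than a single number, but the basic default-then-switch structure is the same.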

Pablo_Stafforini's Shortform

I think Pablo is right about points (1) and (3). Community Favorites is quite net-negative for my experience of the forum (because it repeatedly shows the same old content), and probably likewise for users on average. "Community" seems to needlessly complicate the posting experience, whose simplicity should be valued highly.

2019 AI Alignment Literature Review and Charity Comparison

Of these categories, I am most excited by the Individual Research, Event and Platform projects. I am generally somewhat sceptical of paying people to ‘level up’ their skills.

If I'm understanding the categories correctly, I agree here.

While generally good, one side effect of this (perhaps combined with the fact that many low-hanging fruits of the insight tree have been plucked) is that a considerable amount of low-quality work has been produced. Furthermore, the conventional peer review system seems to be extremely bad at dealing with this issue... Perhaps you, enlightened reader, can judge that “How to solve AI Ethics: Just use RNNs” is not great. But is it really efficient to require everyone to independently work this out?

I agree. I think part of the equation is that peer review does not just filter papers "in" or "out": it accepts them into a journal of a certain quality. Many bad papers will get into weak journals, but will usually get read much less. Researchers who read these papers cite them, also taking their quality into account, thereby boosting the readership of good papers. Finally, a core of elite researchers bats down arguments that, being weirdly attractive yet misguided, manage to make it through the earlier filters. I think this process works okay in general, and can also work okay in AI safety.

I do have some ideas for improving our process, though, basically to establish a steeper incentive gradient for research quality (in the dimensions of quality that we care about): (i) more private and public criticism of misguided work, (ii) stronger filters on papers being published in safety workshops, probably by agreeing to have fewer workshops with fewer papers, and by largely ignoring any extra workshops from "rogue" creators, and (iii) funding under-supervised talent-pipeline projects a bit more carefully.

One thing I would like to see more of in the future is grants for PhD students who want to work in the area. Unfortunately at present I am not aware of many ways for individual donors to practically support this.

Filtering ~100 applicants down to a few accepted scholarship recipients is not that different to what CHAI and FHI already do in selecting interns. The expected outputs seem at least comparably high. So I think choosing scholarship recipients would be similarly good value in terms of evaluators' time, and also a pretty good use of funds.

--

As in previous years, it's an impressive effort! One meta-thought: if you stop providing this service at some point, it might be worth reaching out to the authors of the Alignment Newsletter to ask whether they, or anyone they know, would step in to fill the breach.
