quinn

https://quinnd.net

Host of the Technical AI Safety Podcast https://technical-ai-safety.libsyn.com

Co-organizer of EA Philly

Streams Linear Algebra Done Right in Coq on Twitch https://twitch.tv/quinndougherty92

Comments

quinn's Shortform

What's the latest on moral circle expansion and political circle expansion? 

  • Were slaves excluded from the moral circle in ancient Greece or the US antebellum South, and how does this relate to their exclusion from the political circle? 
  • If AIs could suffer, is recognizing that capacity a slippery slope toward giving AIs the right to vote? 
  • Can moral patients be political subjects, or must political subjects be moral agents? If there were some tipping point or avalanche of moral concern for chickens, that wouldn't imply arguments for political representation of chickens, right? 
  • Consider pre-suffrage women, or contemporary children: they seem fully admitted into the moral circle, but only barely admitted to the political circle. 
  • A critique of MCE is that history is not one steady march from worse to better (smaller to larger); there are in fact false starts, moments of retrograde, etc. Is PCE the same, but even more so? 

If I must make a really bad first approximation, I would say a rubber band is attached to the moral circle, and on the other end of the rubber band is the political circle, so when the moral circle expands it drags the political circle along with it on a delay, modulo some metaphorical tension and inertia. This rubber band model seems informative in the slave case, but uselessly wrong in the chickens case, and it points to what I think are very real possibilities in the AI case. 

AMA: The new Open Philanthropy Technology Policy Fellowship

(cc'd to the provided email address) 

In the Think Tank Junior Fellow description, Open Phil writes:

Recently obtained a bachelor’s or master’s degree (including Spring 2022 graduates)

How are you thinking about this requirement? Is it somewhat flexible (like when a startup says they want a college graduate), or are there bureaucratic forces at partner organizations locking it in stone (like when a hospital IT department says they want a college graduate)? Could you describe the properties of a hypothetical candidate that would inspire you to flex this requirement? 

Apply to the new Open Philanthropy Technology Policy Fellowship!


We're writing to let you know that the group you tried to contact (techpolicyfellowship) may not exist, or you may not have permission to post messages to the group. A few more details on why you weren't able to post:

* You might have spelled or formatted the group name incorrectly.
* The owner of the group may have removed this group.
* You may need to join the group before receiving permission to post.
* This group may not be open to posting.

If you have questions related to this or any other Google Group, visit the Help Center at https://support.google.com/a/openphilanthropy.org/bin/topic.py?topic=25838.

Thanks,

openphilanthropy.org admins
 

Apply to the new Open Philanthropy Technology Policy Fellowship!

Ah, just saw techpolicyfellowship@openphilanthropy.org at the bottom of the page. Sorry, I'll direct my question there! 

Apply to the new Open Philanthropy Technology Policy Fellowship!

Hi Luke, could you describe a candidate that would inspire you to flex the bachelor's requirement for Think Tank Jr. Fellow? I took time off from credentialed institutions to do Lambda School and work (I didn't realize I wanted to be a researcher until I was already in industry), but I think my overall CS/ML experience is stronger than that of many of the applicants you're going to get (I worked on cooperative AI at AI Safety Camp 5 and I'm currently working on multi-multi delegation, hence my interest in AI governance). If possible, I'd like to hear how you're thinking about the college requirement before I invest the time in writing a cumulative 1,400 words. 

Hiring Director of Applied Data & Research - CES

Awesome! I probably won't apply, as I lack a political background and couldn't tell you the first thing about running a poll, but my eyes will be keenly open in case you post a broader data/analytics job as you grow. Good luck with the search! 

The Importance of Artificial Sentience

I'm thrilled about this post. During my first two or three years of studying math/CS and thinking about AGI, my primary concern was the rights and liberties of baby agents (but I wasn't giving suffering nearly adequate thought). Over the years I became more of an orthodox x-risk reducer, and while the process has been full of nutritious exercises, I fully admit that becoming orthodox is a good way to win colleagues, to not get shrugged off as a crank at parties, etc., and this may have played a small role: if not motivated reasoning, then at least humble deference to people who seem to be thinking more clearly than me. 

I think this area is sufficiently undertheorized and neglected that the following is only hypothetical, but it could become important: how is one to trade off between existential safety (for humans) and suffering risks (for all minds)? 

  1. Value is complex and fragile. There are numerous reasons to be more careful than kneejerk cosmopolitanism, and if one's intuitions are "for all minds, of course!" it's important to think through what steps one would have to take to become someone who thinks safeguarding humanity is more important than ensuring good outcomes for creatures in other substrates. This was best written about, to my knowledge, in the old Value Theory sequence by Eliezer Yudkowsky and to some extent Fun Theory; while that's not 100% satisfying, I don't think one go-to sequence is the answer, as a lot of this stuff should be left as an exercise for the reader.
  2. Is anyone worried that x-risk and s-risk signal a future of two opposing factions within EA? That is to say, what are the odds that there's no way for humanity-preservers and suffering-reducers to get along? You can easily imagine disagreement about how to trade off research resources between human existential safety and artificial welfare, but what if we had to reason about deployment? Do we deploy an AI that's 90% safe against some alien paperclipping outcome but achieves only a 30% reduction in artificial suffering, or one that's 75% safe against paperclipping with a 70% reduction in artificial suffering? (A toy comparison is sketched after this list.) 
  3. If we're lucky, there will be a galaxy-brained research agenda or program, some holes or gaps in the theory or implementation that allow and even encourage coalitioning between humanity-preservers and suffering-reducers. I don't think we'll be this lucky in the limiting case, where one humanity-preserver and one suffering-reducer are each at the penultimate stages of their goals. However, we shouldn't be surprised if there is some overlap; the cooperative AI agenda comes to mind. 
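
A minimal sketch of the tradeoff in point 2, in Python. All the numbers and the linear weighting rule are assumptions invented for illustration, not anything from an actual proposal; the point is only that two factions with different weights can rank the same two deployments in opposite orders.

```python
# Toy comparison of the two hypothetical deployments from point 2.
# Numbers and the linear scoring rule are illustrative assumptions.

deployments = {
    "A": {"p_safe": 0.90, "suffering_reduction": 0.30},
    "B": {"p_safe": 0.75, "suffering_reduction": 0.70},
}

def score(d, safety_weight):
    """Linear tradeoff: safety_weight in [0, 1] is how much a faction
    values existential safety relative to reducing artificial suffering."""
    return safety_weight * d["p_safe"] + (1 - safety_weight) * d["suffering_reduction"]

for faction, weight in [("humanity-preserver", 0.9), ("suffering-reducer", 0.3)]:
    preferred = max(deployments, key=lambda k: score(deployments[k], weight))
    print(f"{faction} (safety weight {weight}) prefers deployment {preferred}")

# With these made-up weights, the humanity-preserver picks A and the
# suffering-reducer picks B: the kind of conflict the comment points at.
```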

I find myself shocked at point #2, at the inadequacy of the current state of theorizing about these tradeoffs. Is it premature to worry about that before the AS movement has even published a detailed agenda/proposal for how to allocate research effort grounded in today's AI field? Much theorization is needed to even get to that point, but it might be wise to think ahead. 

I look forward to reading the preprint this week. Thanks!

AMA: Ajeya Cotra, researcher at Open Phil

I've been increasingly hearing advice to the effect that "stories" are an effective way for an AI x-safety researcher to figure out what to work on: that drawing out scenarios for how you think things could go well or poorly, then doing backward induction to derive a research question, is better than traditional methods of finding one. Do you agree with this? It seems like the uncertainty when you draw such scenarios is so massive that one couldn't make a dent in it, but do you think it's valuable for AI x-safety researchers to make significant (i.e. more than 30% of their time) investments in both 1. doing this directly, by telling stories and attempting backward induction, and 2. training so that their stories will be better and more reflective of reality (by studying forecasting, for instance)? 

What would an EA do in the american revolution?

So I read Gwern, and I also read this Dylan Matthews piece; I'm fairly convinced the revolution did not lead to the best outcomes for slaves and for indigenous people. I think there are two cruxes for believing that it would have been possible to make this determination in real time: 

  1. As Matthews points out, follow the preferences of slaves.
  2. Notice that a complaint in the Declaration of Independence was that the British wanted to extend citizenship to indigenous people. 

One of my core assumptions, which is up for debate, is that EAs ought to focus on outcomes for slaves and indigenous people more than on outcomes in general. 

Promoting EA to billionaires?

I'm puzzled by the lack of push to convert Patrick Collison. Paul Graham once tweeted that Stripe would be the next Google, so if Patrick Collison doesn't qualify as a billionaire yet, it might be a good bet that he will someday (I'm not strictly basing that on PG's authority; I'm also basing it on my personal opinion that Stripe seems like world-domination material). He cowrote the piece "We need a science of progress", and from what I heard in this interview, signs point to a very EA-sympathetic person. 
