All of quinn's Comments + Replies

quinn's Shortform

What's the latest on moral circle expansion and political circle expansion? 

  • Were slaves excluded from the moral circle in ancient Greece or the US antebellum South, and how does this relate to their exclusion from the political circle?
  • If AIs could suffer, is recognizing that capacity a slippery slope toward giving AIs the right to vote? 
  • Can moral patients be political subjects, or must political subjects be moral agents? If there were some tipping point or avalanche of moral concern for chickens, that wouldn't imply arguments for political rights…
AMA: The new Open Philanthropy Technology Policy Fellowship

(cc'd to the provided email address) 

In the Think Tank Junior Fellow description, OP writes:

Recently obtained a bachelor’s or master’s degree (including Spring 2022 graduates)

How are you thinking about this requirement? Is there flexibility in it (like when a startup says they want a college graduate), or are there bureaucratic forces at partner organizations locking it in stone (like when a hospital IT department says they want a college graduate)? Could you describe properties of a hypothetical candidate that would inspire you to flex this requirement?

Technology Policy Fellowship (7 points · 3mo): This requirement mainly exists because our host organizations tend to value traditional credentials. However, as we note on the application page, “The eligibility guidelines below are loose and somewhat flexible. If you’re not sure whether you are eligible, we still encourage you to apply.” To the extent possible, we will work to accommodate applicants that we are excited about even if they don’t have traditional credentials.

We expect most think tanks to fall somewhere between a startup and a hospital IT department in terms of flexibility. Different think tanks will also have different cultures and policies with respect to credentials.

If we receive promising applications from people without a college degree, we may reach out to some potential host organizations on that candidate’s behalf to assess whether host organizations would consider the lack of a traditional credential to be a dealbreaker. Our (and potentially the candidate’s) decision about advancement would depend in large part on the responses we receive to those inquiries.
Apply to the new Open Philanthropy Technology Policy Fellowship!


We're writing to let you know that the group you tried to contact (techpolicyfellowship) may not exist, or you may not have permission to post messages to the group. A few more details on why you weren't able to post:

* You might have spelled or formatted the group name incorrectly.
* The owner of the group may have removed this group.
* You may need to join the group before receiving permission to post.
* This group may not be open to posting.

If you have questions related to this or any other Google Group, visit the Help Center at https://support.google.com/a

…
lukeprog (2 points · 3mo): Oops! Should be fixed now.
Apply to the new Open Philanthropy Technology Policy Fellowship!

Ah, just saw techpolicyfellowship@openphilanthropy.org at the bottom of the page. Sorry, will direct my question there!

Apply to the new Open Philanthropy Technology Policy Fellowship!

Hi Luke, could you describe a candidate who would inspire you to flex the bachelor's requirement for Think Tank Jr. Fellow? I took time off from credentialed institutions to do Lambda School and work (I didn't realize I wanted to be a researcher until I was already in industry), but I think my overall CS/ML experience is stronger than that of many of the applicants you're going to get (I worked on cooperative AI at AI Safety Camp 5 and I'm currently working on multi-multi delegation, hence my interest in AI governance). If possible, I'd like to hear how you're thinking about the college requirement before I invest the time into writing a cumulative 1,400 words.

quinn (1 point · 3mo): Ah, just saw techpolicyfellowship@openphilanthropy.org at the bottom of the page. Sorry, will direct my question there!
Hiring Director of Applied Data & Research - CES

Awesome! I probably won't apply, as I lack a political background and couldn't tell you the first thing about running a poll, but I'll keep my eyes keenly open in case you post a broader data/analytics job as you grow. Good luck with the search!

aaronhamlin (1 point · 7mo): If an applicant has a strong stats and data analysis background, I would still encourage them to apply. It can sometimes be hard to check off every single box. Either way, please share with your network as well. Thanks!
The Importance of Artificial Sentience

I'm thrilled about this post. During my first two or three years of studying math/CS and thinking about AGI, my primary concern was the rights and liberties of baby agents (but I wasn't giving suffering nearly adequate thought). Over the years I became more of an orthodox x-risk reducer, and while the process has been full of nutritious exercises, I fully admit that becoming orthodox is a good way to win colleagues, not get shrugged off as a crank at parties, etc., and this may have played a small role, if not motivated reasoning then at least humbly deferring…

MichaelA (5 points · 6mo): It seems to me that your comment kind-of implies that people who focus on reducing extinction risk and people who focus on reducing s-risk are mainly divided by moral views. (Maybe that’s just me mis-reading you, though.) But I think empirical views can also be very relevant.

For example, if someone who leans towards suffering-focused ethics (https://longtermrisk.org/the-case-for-suffering-focused-ethics/) became convinced that s-risks are less likely, smaller scale in expectation, or harder to reduce the likelihood or scale of than they’d thought, that should probably update them somewhat away from prioritising s-risk reduction, leaving more room for prioritising extinction risk reduction. Likewise, if someone who was prioritising extinction risk reduction came to believe extinction was less likely or harder to change the likelihood of than they’d thought, that should update them somewhat away from prioritising extinction risk reduction.

So one way to address the questions, tradeoffs, and potential divisions you mention is simply to engage in further research and debate on empirical questions relevant to the importance, tractability, and neglectedness of extinction risk reduction, s-risk reduction, and other potential longtermist priorities. The following post also contains some relevant questions and links to relevant sources: Crucial questions for longtermists (https://forum.effectivealtruism.org/posts/wicAtfihz2JmPRgez/crucial-questions-for-longtermists).
MichaelA (3 points · 6mo): It seems that what you have in mind is tradeoffs between extinction risk reduction vs suffering risk reduction. I say this because existential risk itself includes a substantial portion of possible suffering risks, and isn't just about preserving humanity. (See Venn diagrams of existential, global, and suffering catastrophes: https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering.)

I also think it would be best to separate out the question of which types of beings to focus on (e.g., humans, nonhuman animals, artificial sentient beings…) from the question of how much to focus on reducing suffering in those beings vs achieving other possible moral goals (e.g., increasing happiness, increasing freedom, creating art). (There are also many other distinctions one could make, such as between affecting the lives of beings that already exist vs changing whether beings come to exist in future.)

Hey, glad you liked the post! I don't really see a tradeoff between extinction risk reduction and moral circle expansion, except insofar as we have limited time and resources to make progress on each. Maybe I'm missing something?

When it comes to limited time and resources, I'm not too worried about that at this stage. My guess is that by reaching out to new (academic) audiences, we can actually increase the total resources and community capital dedicated to longtermist topics in general. Some individuals might have tough decisions to face about where they…

AMA: Ajeya Cotra, researcher at Open Phil

I've been increasingly hearing advice to the effect that "stories" are an effective way for an AI x-safety researcher to figure out what to work on: that drawing scenarios about how you think things could go well or poorly, then doing backward induction to derive a research question, is better than traditional methods of finding one. Do you agree with this? It seems like the uncertainty when you draw such scenarios is so massive that one couldn't make a dent in it, but do you think it's valuable for AI x-safety researchers to make significant (…

Ajeya (4 points · 9mo): I would love to see more stories of this form, and think that writing stories like this is a good area of research to be pursuing for its own sake that could help inform strategy at Open Phil and elsewhere.

With that said, I don't think I'd advise everyone who is trying to do technical AI alignment to determine what questions they're going to pursue based on an exercise like this -- doing this can be very laborious, and the technical research route it makes the most sense for you to pursue will probably be affected by a lot of considerations not captured in the exercise, such as your existing background, your native research intuitions and aesthetic (which can often determine what approaches you'll be able to find any purchase on), what mentorship opportunities you have available to you and what your potential mentors are interested in, etc.
What would an EA do in the american revolution?

So I read Gwern and I also read this Dylan Matthews piece, and I'm fairly convinced the revolution did not lead to the best outcomes for slaves and for indigenous people. I think there are two cruxes for believing that it would have been possible to make this determination in real time:

  1. as Matthews points out, follow the preferences of slaves.
  2. notice that a complaint in the Declaration of Independence was that the British wanted to citizenize indigenous people.

One of my core assumptions, which is up for debate, is that EAs ought to focus on outcomes for slaves…

Promoting EA to billionaires?

I'm puzzled by the lack of push to convert Patrick Collison. Paul Graham once tweeted that Stripe would be the next Google, so if Patrick Collison doesn't qualify as a billionaire yet, it might be a good bet that he will someday (I'm not strictly basing that on PG's authority; I'm also basing it on my personal opinion that Stripe seems like world-domination material). He co-wrote the piece "We need a science of progress", and from what I heard in this interview, signs point to a very EA-sympathetic person.

Aaron Gertler (9 points · 9mo): Donor engagement isn't part of my job, so I can't be sure about this, but I think it's quite likely that EA-affiliated people have many conversations with wealthy and successful people, and those conversations just happen quietly. I wouldn't be so sure that Patrick Collison hasn't had these conversations; most people keep their giving relatively private, and I don't see why he would be an exception.

I also don't advise using the word "convert" to describe situations where someone is thinking of changing where they donate; the word has religious connotations, and I think it often isn't helpful to think of someone as being "in EA" or "not in EA".

There also may be organizations that Patrick supports in the area of "progress studies" or "the science of progress" that have goals very similar to some EA orgs but don't happen to be formally linked to our movement. Many such organizations likely exist. (For one, I think there's a good chance that Patrick is among the supporters of Tyler Cowen's Emergent Ventures.)
What would an EA do in the american revolution?

My first guess, based on the knowledge I have, is that the abolitionist faction was good, and that supporting them would be necessary for an EA in that time (but maybe not sufficient). Additionally, my guess is that I'd be able to determine this in real time. 

My upcoming CEEALAR stay

Maybe! I'm only aiming for a steady stream of 2-3 chapters per week. Be in touch if you're interested: I'm re-reading the first quarter of PLF, since they published a new version in the time since I knocked out that part.

My upcoming CEEALAR stay

Thanks for the comment. I wasn't aware of your and Rohin's discussion on Arden's post. Did you flesh out the inductive alignment idea on LW or the Alignment Forum? It seems really promising to me.

Today I want to jot down some notes more substantive than "wait until I post 'Going Long on FV' in a few months".

FV in AI Safety in particular

As Rohin's comment suggests, both aiming proofs about properties of models toward today's type theories and aiming tomorrow's type theories toward ML face two classes of obstacles: 1. is it possible? 2. can it be made co…
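To make obstacle 1 concrete, here's a minimal sketch of what "a proof about a property of a model" can look like in today's type theories: a toy stand-in (a ReLU over the integers, nothing like a real network) with a machine-checked property. This assumes Lean 4 with Mathlib; `relu` and `relu_nonneg` are illustrative names I'm making up, not anything from the posts referenced above.

```lean
import Mathlib

-- Toy stand-in for "a model": ReLU over the integers.
def relu (x : ℤ) : ℤ := max x 0

-- A machine-checked "property of the model": the output is never negative.
theorem relu_nonneg (x : ℤ) : 0 ≤ relu x := by
  unfold relu
  exact le_max_right x 0
```

The gap between a toy like this and a statement about a trained network's actual behavior is exactly where both obstacle classes show up.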

abergal (1 point · 10mo): I was speaking about AI safety! To clarify, I meant that investments in formal verification work could in part be used to develop those less primitive proof assistants.