# All of quinn's Comments + Replies

quinn's Shortform

# Stem cell slowdown and AI timelines

My knowledge of Christians and stem cell research in the US is very limited, but my understanding is that they accomplished real slowdown.

Has anyone looked to that movement for lessons about AI?

Did anybody from that movement take a "change it from the inside" or "build clout by boosting stem cell capabilities so you can later spend that clout on stem cell alignment" approach?

Valuing research works by eliciting comparisons from EA researchers

One particularly worrying difference in opinions is the difference in the range of values. Moorhouse’s range is 5.1 orders of magnitude, whereas Leech’s is 12.6 (the participants’ average is 7.6).

What about taking exp(normalize(log(x))) for some normalization function that behaves roughly like vector normalization?
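A minimal sketch of that idea (the choice of L2 normalization in log space is one assumption of what "behaves roughly like vector normalization" could mean, not something specified in the thread): take logs of the estimates, scale the log-vector to unit norm, and exponentiate back, which compresses disagreements spanning many orders of magnitude.

```python
import math

def log_normalize(values):
    """Compress positive values in log space: take logs, scale the
    log-vector to unit L2 norm (one possible choice of normalization),
    then exponentiate back. Ratios of many orders of magnitude shrink
    to small ratios while preserving ordering."""
    logs = [math.log(v) for v in values]
    norm = math.sqrt(sum(x * x for x in logs))
    if norm == 0:
        return [1.0 for _ in values]
    return [math.exp(x / norm) for x in logs]

# Estimates spanning six orders of magnitude
estimates = [1.0, 100.0, 1e6]
compressed = log_normalize(estimates)
```

Here the million-fold spread in `estimates` collapses to a ratio of under 10 in `compressed`, while the ordering of the three values is preserved.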

The Vultures Are Circling

“You should apply for [insert EA grant], all I had to do was pretend to care about x, and I got !”

I can speak of one EA institution, which I will not name, that suffers from this. Math and cognitive science majors can get a little too far in EA circles just by mumbling something about AI Safety, without delivering any actual interfacing with the literature or the community.

So, thanks for posting.

Have you told the institution about this? Seems like a pretty important thing for them to know!

Sex work as part of mental health and wellbeing services

I am commenting to create public knowledge, in a form stronger than a mere upvote, that I think this post is on the right track and that wellbeing increases from just tackling loneliness, lack of affection, lack of validation, etc. directly ought to be a serious cause candidate.

quinn's Shortform

idea: taboo "community building", say "capacity building" instead.

6Nathan Young2mo
Why?
2Gavin2mo
Gotta be one word or bust
3Catherine Low2mo
Here: https://join.slack.com/t/eagroups/shared_invite/zt-3ws1vk1v-spLPUkYxNTkpT1RpnC1YLQ Welcome Quinn!
The Future Fund’s Project Ideas Competition

Sub-extinction event drills, games, exercises

Civilizational resilience to catastrophes

Someone should build up expertise and produce educational materials / run workshops on questions like

1. Nuclear attacks on several cities in a 1000 mile radius of you, including one within 100 miles. What is your first move?
2. Reports of a bioweapon in the water supply of your city. What do you do?
3. You're a survivor of an industrial-revolution-erasing event. What chunks of knowledge from science can be useful to you? After survival, what are the steps to rebuild?
Introducing 80k After Hours

Stitcher please? Many shows named "after hours" of some kind, can't find 80k on there.

3Robert_Wiblin3mo
Here you go: https://www.stitcher.com/show/80k-after-hours (Seems like Stitcher is having technical problems, I've contacted their technical support about it.)
The case for building more and better epistemic institutions in the effective altruism community

Related: the term I've been using lately in thinking about this sort of thing is epistemic public goods, which I think was prompted when I saw Julia Galef tweet about the "epistemic commons"

"Should have been hired" Prizes

I think you missed a disadvantage: I think there's a free rider problem where everyone reaps the benefits of the research and it's too easy for a given org to decline funding it.

Overall I like the idea a lot and

Some mechanism may be required to ensure that multiple organisations do not fund the same work.

I hope to find time for this exercise later today.

quinn's Shortform

We need a name for the following heuristic. I think of it as one of those "tribal knowledge" things that gets passed on like an oral tradition without being citable in the sense of being part of a literature. If you come up with a name I'll certainly credit you in a top level post!

I heard it from Abram Demski at AISU'21.

Suppose you're either going to end up in world A or world B, and you're uncertain about which one it's going to be. Suppose you can pull lever  which will be 100 valuable if you end up in world A, or you ... (read more)

Is Bitcoin Dangerous?

One downside of decentralization you missed is the idea that protocols are slower to update than any other software, which in some scenarios leads to a lock-in risk.

To be more specific, suppose mechanism designer A encodes beliefs/values/aesthetics X into a mechanism M, which gets deployed in a robustly decentralized fashion. Then, upon philosophical breakthroughs totally updating X into X', A encodes X' into a new mechanism M'. The troubling idea I'm pointing to is that coordinating the pivot from M to M' seems exceedingly difficult, likely much mor... (read more)

quinn's Shortform

Is there an econ major or geek out there who would like to

1. accelerate my lit review as I evaluate potential startup ideas in prediction markets and IIDM by writing paper summaries
2. occasionally tutor me in microeconomics and game theory and similar fun things

something like 5 hours/week, something like $20-40/hr

(EA Forum DMs / quinnd@tutanota.com / disc @quinn#9100)

I'm aware that there are contractor-coordinating services for each of these asks, I just think it'd be really awesome to have one person to do both and to keep the money in the community, maybe meet a future collaborator!

The Bioethicists are (Mostly) Alright

This is odd. I audited/freeloaded at a perfectly mediocre university math department, and they seemed careful to assign the prof whose dissertation was in functional analysis to teach real analysis, and the prof whose dissertation was in algebraic geometry to teach group theory. I guess I only observed the 3rd/4th year courses. For 1st/2nd year courses, intuitively you'd want the analysts teaching calculus and the logicians teaching discrete, perhaps something like this, but I don't expect a disaster if they crossed the streams, in the way that I so... (read more)

quinn's Shortform

post idea: based on interviews, profile scenarios from software of exploit discovery, responsible disclosure, coordination of patching, etc. and try to analyze with an aim toward understanding what good infohazard protocols would look like.

(I have a contact who was involved with a big patch, if someone else wants to tackle this reach out for a warm intro!)

Pedant, a type checker for Cost Effectiveness Analysis

What if Pedant were a sort of "backend" to a sheet UX? A compiler that takes sheet formulae and generates Pedant code?

The central claim is that sheet UX is error prone, so why not keep the UX and add verification behind it?
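A toy sketch of that direction (the cell-to-name binding scheme and the plain-expression output are invented for illustration; a real formula parser and actual Pedant syntax would differ):

```python
import re

def sheet_to_checked(cell_formula, bindings):
    """Toy translation of a spreadsheet formula like '=A1*B1' into a
    named expression that a checker could verify. `bindings` maps cell
    references to meaningful variable names; regex word boundaries keep
    'A1' from matching inside 'A10'. This is illustrative only, not
    actual Pedant syntax."""
    body = cell_formula.lstrip("=")
    for cell, name in bindings.items():
        body = re.sub(rf"\b{cell}\b", name, body)
    return body

out = sheet_to_checked("=A1*B1", {"A1": "cost_per_unit", "B1": "units"})
# out == "cost_per_unit*units"
```

The point of the design is that users keep the familiar sheet UX while every formula round-trips through a representation where dimension/type errors can be caught.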

Linkpost for "Organizations vs. Getting Stuff Done" and discussion of Zvi's post about SFF and the S-process (or; Doing Actual Thing)

to partially rehash what was on discord and partially add more:

• I don't think saying that institutions have benefits and are effective is at all an argument against specific drawbacks and failure modes. Things that have pros can also have cons; pros and cons can coexist.
• I agree that a portion of the criticism is moot if you don't on priors think hierarchy and power are intrinsically risky or disvaluable, but I think having those priors directs one's attention to problems or failure modes that people without those priors would be wise to
The case against degrowth

Can people please list who they think should be asked if an EA org wanted to host a debate? I don't know who an excellent moderator would be, my first guess after not being very informed about the space is that Tyler Cowen would make a good participant on the growth side, and I don't know enough about degrowth to even make a guess at who would participate for the degrowth side.

Have you considered switching countries to save money?

It's in my plans for the next few years. Only thing that would stop me would be a truly fantastic group house community in the states. Cash increasing the margin that goes to donation is one reason, basic cosmopolitan values implying an urge to actually form inside views about other cultures is another.

My shortlist is Singapore, Taipei, or several South American cities. Open to having my mind changed about the neglect of African cities and South Asian cities.

1acylhalide5mo
Thank you for replying, this helps.
Democratising Risk - or how EA deals with critics

I think it's plausible that it's hard to notice this issue if your personal aesthetic preferences happen to be aligned with TUA. I tried to write here a little questioning how important aesthetic preferences may be. I think it's plausible that people can unite around negative goals even if positive goals would divide them, for instance, but I'm not convinced.

Democratising Risk - or how EA deals with critics

the word "techbros" signals you have a kind of information diet and worldview that I think people have bad priors about

IMO we should seek out and listen to the most persuasive advocates for a lot of different worldviews. It doesn't seem epistemically justified to penalize a worldview because it gets a lot of obtuse advocacy.

If people downvote comments on the basis of perceived ingroup affiliation rather than content then I think that might make OP's point for them...

Democratising Risk - or how EA deals with critics

Techno-utopian approach (via paper abstract)

6berglund5mo
Thanks!
Movie Review: The Story of Louis Pasteur

This review is great and has gotten a lot of my friends excited about science and being a human. Just watched the movie last night, absolutely loved it.

quinn's Shortform

Why have I heard about Tyson investing in lab-grown meat, but I haven't heard about big oil investing in renewables?

Tyson's basic insight here is not to identify as "an animal agriculture company". Instead, they identify as "a feeding people company". (Which happens to align with doing the right thing, conveniently!)

It seems like big oil is making a tremendous mistake here. Do you think oil execs go around saying "we're an oil company"? When they could instead be going around saying "we're a powering stuff" company. Being a powering stuff company means you hav... (read more)

3James Ozden5mo
1quinn5mo
https://www.lesswrong.com/posts/kq8CZzcPKQtCzbGxg/quinn-s-shortform?commentId=yLG8yWWHhuTKLbdZA seems like an "I didn't hear about it" kind of thing
Movie review: Don't Look Up

Strange. Everyone I watched it with (the second time when I watched it with non-EAs) was impressed and touched. My sister, who has mostly climate change epistemics, was emotionally moved into thinking more about her own extinction concerns (and was very amenable when I explained that pandemics and some AI scenarios are greater threats than climate change).

Reasons not to identify as EA to me are just nuances about identities altogether and trying to keep them small (http://www.paulgraham.com/identity.html).

1. Social group membership is not upstream. "trying to believe true things" or "trying to win" are the things I aspire to upstream, and maybe some social groups, like EA, are instrumentally useful downstream of those. Computer science chatrooms are instrumentally useful too!
2. Identities can be like organizations (https://theanarchistlibrary.org/library/william-gillis-organizations-versus-getting-shit-done
quinn's Shortform

Don't Look Up might be one of the best mainstream movies for the xrisk movement. Eliezer said it's too on the nose to bear/warrant actually watching. I fully expect to write a review for the EA Forum and LessWrong about xrisk movement building.

quinn's Shortform

CC'd to lesswrong.com/shortform

# Positive and negative longtermism

I'm not aware of a literature or a dialogue on what I think is a very crucial divide in longtermism.

In this shortform, I'm going to take a polarity approach. I'm going to bring each pole to its extreme, probably beyond positions that are actually held, because I think median longtermism, or the longtermism described in The Precipice, is a kind of average of the two.

Negative longtermism is saying "let's not let some bad stuff happen", namely extinction. It wants to preserve. If nothing gets... (read more)

https://github.com/daattali/beautiful-jekyll

1acylhalide6mo
Thank you for this, I found jekyll + github pages easiest to use too :)
quinn's Shortform

CW death

I'm imagining myself having a 6+ figure net worth at some point in a few years, and I don't know anything about how wills work.

Do EAs have hit-by-a-bus contingency plans for their net worths?

Is there something easy we can do to reduce the friction of the following process: Ask five EAs with trustworthy beliefs and values to form a grantmaking panel in the event of my death. This grantmaking panel could meet for thirty minutes and make a weight allocation decision on the Giving What We Can app, or they can accept applications and run it ... (read more)

quinn's Shortform

# What's the latest on moral circle expansion and political circle expansion?

• Were slaves excluded from the moral circle in ancient Greece or the US antebellum South, and how does this relate to their exclusion from the political circle?
• If AIs could suffer, is recognizing that capacity a slippery slope toward giving AIs the right to vote?
• Can moral patients be political subjects, or must political subjects be moral agents? If there was some tipping point or avalanche of moral concern for chickens, that wouldn't imply arguments for political r
AMA: The new Open Philanthropy Technology Policy Fellowship

(cc'd to the provided email address)

In Think Tank Junior Fellow, OP writes

Recently obtained a bachelor’s or master’s degree (including Spring 2022 graduates)

How are you thinking about this requirement? Is there something flex about it (like when a startup says they want a college graduate) or are there bureaucratic forces at partner organizations locking it in stone (like when a hospital IT department says they want a college graduate)? Perhaps describe properties of a hypothetical candidate that would inspire you to flex this requirement?

7Technology Policy Fellowship10mo
This requirement mainly exists because our host organizations tend to value traditional credentials. However, as we note on the application page, “The eligibility guidelines below are loose and somewhat flexible. If you’re not sure whether you are eligible, we still encourage you to apply.” To the extent possible, we will work to accommodate applicants that we are excited about even if they don’t have traditional credentials. We expect most think tanks to fall somewhere between a startup and a hospital IT department, in terms of flexibility. Different think tanks will also have different cultures and policies with respect to credentials. If we receive promising applications from people without a college degree, we may reach out to some potential host organizations on that candidate’s behalf to assess whether host organizations would consider the lack of a traditional credential to be a dealbreaker. Our (and potentially the candidate’s) decision about advancement would depend in large part on the responses we receive to those inquiries.
Apply to the new Open Philanthropy Technology Policy Fellowship!

We're writing to let you know that the group you tried to contact (techpolicyfellowship) may not exist, or you may not have permission to post messages to the group. A few more details on why you weren't able to post:

* You might have spelled or formatted the group name incorrectly.
* The owner of the group may have removed this group.
* You may need to join the group before receiving permission to post.
* This group may not be open to posting.

If you have questions related to this or any other Google Group, visit the Help Center at https://support.google.com/a

2lukeprog10mo
Oops! Should be fixed now.
Apply to the new Open Philanthropy Technology Policy Fellowship!

Ah, just saw techpolicyfellowship@openphilanthropy.org at the bottom of the page. Sorry, will direct my question to there!

1quinn10mo
Apply to the new Open Philanthropy Technology Policy Fellowship!

Hi Luke, could you describe a candidate that would inspire you to flex the bachelor's requirement for Think Tank Jr. Fellow? I took time off credentialed institutions to do Lambda School and work (didn't realize I wanted to be a researcher until I was already in industry), but I think my overall CS/ML experience is stronger than that of a ton of the applicants you're going to get (I worked on cooperative AI at AI Safety Camp 5 and I'm currently working on multi-multi delegation, hence my interest in AI governance). If possible, I'd like to hear from you how you're thinking about the college requirement before I invest the time into writing a cumulative 1400 words.

1quinn10mo
Ah, just saw techpolicyfellowship@openphilanthropy.org at the bottom of the page. Sorry, will direct my question to there!
Hiring Director of Applied Data & Research - CES

Awesome! I probably won't apply as I lack political background and couldn't tell you the first thing about running a poll, but my eyes will be keenly open in case you post a broader data/analytics job as you grow. Good luck with the search!

1aaronhamlin1y
If an applicant has a strong stats and data analysis background, I would still encourage them to apply. It can sometimes be hard to check off every single box. Either way, please share with your network as well. Thanks!
The Importance of Artificial Sentience

I'm thrilled about this post - during my first two-three years of studying math/cs and thinking about AGI my primary concern was the rights and liberties of baby agents (but I wasn't giving suffering nearly adequate thought). Over the years I became more of an orthodox x-risk reducer, and while the process has been full of nutritious exercises, I fully admit that becoming orthodox is a good way to win colleagues, not get shrugged off as a crank at parties, etc. and this may have played a small role, if not motivated reasoning then at least humbly deferring... (read more)

5MichaelA1y
It seems to me that your comment kind of implies that people who focus on reducing extinction risk and people who focus on reducing s-risk are mainly divided by moral views. (Maybe that's just me misreading you, though.) But I think empirical views can also be very relevant. For example, if someone who leans towards [suffering-focused ethics](https://longtermrisk.org/the-case-for-suffering-focused-ethics/) became convinced that s-risks are less likely, smaller scale in expectation, or harder to reduce the likelihood or scale of than they'd thought, that should probably update them somewhat away from prioritising s-risk reduction, leaving more room for prioritising extinction risk reduction. Likewise, if someone who was prioritising extinction risk reduction came to believe extinction was less likely or harder to change the likelihood of than they'd thought, that should update them somewhat away from prioritising extinction risk reduction. So one way to address the questions, tradeoffs, and potential divisions you mention is simply to engage in further research and debate on empirical questions relevant to the importance, tractability, and neglectedness of extinction risk reduction, s-risk reduction, and other potential longtermist priorities. The following post also contains some relevant questions and links to relevant sources: [Crucial questions for longtermists](https://forum.effectivealtruism.org/posts/wicAtfihz2JmPRgez/crucial-questions-for-longtermists).
3MichaelA1y
It seems that what you have in mind is tradeoffs between extinction risk reduction vs suffering risk reduction. I say this because existential risk itself includes a substantial portion of possible suffering risks, and isn't just about preserving humanity. (See [Venn diagrams of existential, global, and suffering catastrophes](https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering).) I also think it would be best to separate out the question of which types of beings to focus on (e.g., humans, nonhuman animals, artificial sentient beings…) from the question of how much to focus on reducing suffering in those beings vs achieving other possible moral goals (e.g., increasing happiness, increasing freedom, creating art). (There are also many other distinctions one could make, such as between affecting the lives of beings that already exist vs changing whether beings come to exist in future.)

Hey, glad you liked the post! I don't really see a tradeoff between extinction risk reduction and moral circle expansion, except insofar as we have limited time and resources to make progress on each. Maybe I'm missing something?

When it comes to limited time and resources, I'm not too worried about that at this stage. My guess is that by reaching out to new (academic) audiences, we can actually increase the total resources and community capital dedicated to longtermist topics in general. Some individuals might have tough decisions to face about where they ... (read more)

AMA: Ajeya Cotra, researcher at Open Phil

I've been increasingly hearing advice to the effect that "stories" are an effective way for an AI x-safety researcher to figure out what to work on, that drawing scenarios about how you think it could go well or go poorly and doing backward induction to derive a research question is better than traditional methods of finding a research question. Do you agree with this? It seems like the uncertainty when you draw such scenarios is so massive that one couldn't make a dent in it, but do you think it's valuable for AI x-safety researchers to make significant (... (read more)

4Ajeya1y
I would love to see more stories of this form, and think that writing stories like this is a good area of research to be pursuing for its own sake that could help inform strategy at Open Phil and elsewhere. With that said, I don't think I'd advise everyone who is trying to do technical AI alignment to determine what questions they're going to pursue based on an exercise like this -- doing this can be very laborious, and the technical research route it makes the most sense for you to pursue will probably be affected by a lot of considerations not captured in the exercise, such as your existing background, your native research intuitions and aesthetic (which can often determine what approaches you'll be able to find any purchase on), what mentorship opportunities you have available to you and what your potential mentors are interested in, etc.
What would an EA do in the american revolution?

So I read Gwern and I also read this Dylan Matthews piece, and I'm fairly convinced the revolution did not lead to the best outcomes for slaves and for indigenous people. I think there are two cruxes for believing that it would be possible to make this determination in real time:

1. as Matthews points out, follow the preferences of slaves.
2. notice that a complaint in the Declaration of Independence was that the British wanted to citizenize indigenous people.

One of my core assumptions, which is up for debate, is that EAs ought to focus on outcomes for sla... (read more)

Promoting EA to billionaires?

I'm puzzled by the lack of push to convert Patrick Collison. Paul Graham once tweeted that Stripe would be the next Google, so if Patrick Collison doesn't qualify as a billionaire yet, it might be a good bet that he will someday (I'm not strictly basing that on PG's authority, I'm also basing it on my personal opinion that Stripe seems like world domination material). He cowrote this piece "We need a science of progress" and from what I heard in this interview, signs point to a very EA-sympathetic person.

9Aaron Gertler1y
Donor engagement isn't part of my job, so I can't be sure about this, but I think it's quite likely that EA-affiliated people have many conversations with wealthy and successful people, and those conversations just happen quietly. I wouldn't be so sure that Patrick Collison hasn't had these conversations; most people keep their giving relatively private, and I don't see why he would be an exception. I also don't advise using the word "convert" to describe situations where someone is thinking of changing where they donate; the word has religious connotations, and I think it often isn't helpful to think of someone as being "in EA" or "not in EA". There also may be organizations that Patrick supports in the area of "progress studies" or "the science of progress" that have goals very similar to some EA orgs but don't happen to be formally linked to our movement. Many such organizations likely exist. (For one, I think there's a good chance that Patrick is among the supporters of Tyler Cowen's Emergent Ventures.)
What would an EA do in the american revolution?

My first guess, based on the knowledge I have, is that the abolitionist faction was good, and that supporting them would be necessary for an EA in that time (but maybe not sufficient). Additionally, my guess is that I'd be able to determine this in real time.

My upcoming CEEALAR stay

Maybe! I'm only going after a steady stream of 2-3 chapters per week. Be in touch if you're interested: I'm re-reading the first quarter of PLF since they published a new version in the time since I knocked out the first quarter of it.

My upcoming CEEALAR stay

Thanks for the comment. I wasn't aware of your and Rohin's discussion on Arden's post. Did you flesh out the inductive alignment idea on LW or the Alignment Forum? It seems really promising to me.

I want to jot down notes more substantive than "wait until I post 'Going Long on FV' in a few months" today.

## FV in AI Safety in particular

As Rohin's comment suggests, both aiming proofs about properties of models toward today's type theories and aiming tomorrow's type theories toward ML have two classes of obstacles: 1. is it possible? 2. can it be made co... (read more)

1abergal1y
I was speaking about AI safety! To clarify, I meant that investments in formal verification work could in part be used to develop those less primitive proof assistants.