All of jayquigley's Comments + Replies

The vague term "great" gets used a lot in this post. If possible, wielding more precise concepts regarding what you're looking for—what counts as "great" in the sense you're using the term—could be helpful moving forward. By homing in on the particular kind of skill you're seeking, you'll help identify those who have those skills. And you may help yourselves confirm which specific skills are truly essential to the position you're seeking to fill.

(Also, I think there are more ways to be a "great" software engineer than being able to write a substa... (read more)

3
Andy Jones
2y
I appreciate the feedback, but the spec is intentionally over-broad rather than over-narrow. I and several other engineers in AI safety have made serious efforts to try and pin down exactly what 'great software engineering' is, and - for want of a better phrase - have found ourselves missing the forest for the trees. What we're after is a certain level of tacit, hard-to-specify skills and knowledge that we felt was best characterised by the litmus test given above.

Any updates here? I share Devon's concern: this news also makes me less likely to want to donate via EA Funds.  At worst, the fear would be this: so much transparency is lost that donations go into mysterious black holes rather than funding effective organizations. What steps can be taken to convince donors that that's not what's happening?

8
calebp
2y
(I am the new interim project lead for EA Funds and will be running EA Funds going forward.) I completely understand that you want to know that your donations are used in a way that you think is good for the world. We refer private grants to private funders so that you know your money is not being used for projects you have little or no visibility into. I think that EA Funds is mostly for donors who are happy to lean on the judgment of our fund managers. Sometimes our fund managers may well fund things like mental health support if they think it is one of the best ways to improve the world. LTFF and EAIF in particular fund a variety of projects that are often unusual. If you don't trust the judgment of our fund managers or don't agree with the scope of our funds, there are probably donation opportunities that are a better fit for you than EA Funds. We try hard to optimise the service for grantees, and this means we may fall short of building the best service for our donors. We are exploring more donor-focused products with GWWC, which we will hopefully be able to offer soon.
  • What is your stance regarding aiming your output at an EA audience vs. a wider audience? (Academic & governmental audiences, etc.?)
  • It seems that a large portion of output begins on your blog and in EA Forum posts. What other venues do you aim at, if any?
  • To what extent do you regard tailoring your work to academic journals with "peer-review" as counterfactually worthwhile?

For the cross-referencing, did they ask your permission first? Hopefully so. Otherwise, there can be the awkward situation where one does not actually want to work at the organization to which one has been referred.

7
Aaron Gertler
5y
Yes, they asked my permission first.

Amazing idea! I'll be thinking and talking more about this, including with the animal-issue lobbying organizations I've worked with here in the US and California.

0
saulius
6y
great, please tell how it goes!

For the animal advocacy space, my anecdata suggest that the talent gap is in large part a product of funding constraints. Most animal charities pay rather poorly, even compared to other nonprofits.

Thanks for your engaging insights!

this sounds like you're talking about a substantive concept of rationality

Yes indeed!

Substantive concepts of rationality always go under moral non-naturalism, I think.

I'm unclear on why you say this. It certainly depends on how exactly 'non-naturalism' is defined.

One contrast of the Gert-inspired view I've described and that of some objectivists about reasons or substantive rationality (e.g. Parfit) is that the latter tend to talk about reasons as brute normative facts. Sometimes it seems they have no story to tell ... (read more)

So you know who's asking, I happen to consider myself a realist, but closest to the intersubjectivism you've delineated above. The idea is that morality is the set of rules that impartial, rational people would advocate as a public system. Rationality is understood, roughly speaking, as the set of things that virtually all rational agents would be averse to. This ends up being a list of basic harms--things like pain, death, disability, injury, loss of freedom, loss of pleasure. There's not much more objective or "facty" about rationality than the... (read more)

1
Lukas_Gloor
6y
Yes, this sounds like constructivism. I think this is definitely a useful framework for thinking about some moral/morality-related questions. I don't think all of moral discourse is best construed as being about this type of hypothetical rule-making, but like I say in the post, I don't think interpreting moral discourse should be the primary focus.

Hm, this sounds like you're talking about a substantive concept of rationality, as opposed to a merely "procedural" or "instrumental" concept of rationality (such as is common on LessWrong and with anti-realist philosophers like Bernard Williams). Substantive concepts of rationality always go under moral non-naturalism, I think. My post is a little confusing with respect to the distinction here, because you can be a constructivist in two different ways: primarily as an intersubjectivist metaethical position, and "secondarily" as a form of non-naturalism. (See my comments on Thomas Sittler's chart.)

Yeah, it should be noted that "strong" versions of moral realism are not committed to silly views such as morality existing in some kind of supernatural realm. I often find it difficult to explain moral non-naturalism in a way that makes it sound as non-weird as when actual moral non-naturalists write about it, so I have to be careful not to strawman these positions. But what you describe may still qualify as "strong" because you're talking about rationality as a substantive concept. (Classifying something as a "harm" is one thing if done in a descriptive sense, but probably you're talking about classifying things as a harm in a sense that has moral connotations – and that gets into more controversial territory.) The book title "Normative Bedrock" also sounds relevant because my next post will talk about "bedrock concepts" (Chalmers) at length, and specifically about "irreducible normativity" as a bedrock concept, which I think makes up the core of moral non-naturalism.

One thought is that if morality is not real, then we would not have reasons to do altruistic things. However, I often encounter anti-realists making arguments about which causes we should prioritize, and why. A worry is that if morality boils down to mere preference, then it is unclear why a different person should agree with the anti-realist's preference.

5
jayquigley
6y
So you know who's asking, I happen to consider myself a realist, but closest to the intersubjectivism you've delineated above. The idea is that morality is the set of rules that impartial, rational people would advocate as a public system. Rationality is understood, roughly speaking, as the set of things that virtually all rational agents would be averse to. This ends up being a list of basic harms--things like pain, death, disability, injury, loss of freedom, loss of pleasure. There's not much more objective or "facty" about rationality than the fact that basically all vertebrates are disposed to be averse to those things, and it's rather puzzling for someone not to be. People can be incorrect about whether a thing is harmful, just as they can be incorrect about whether a flower is red. But there's nothing much more objective or "facty" about whether the plant is red than that ordinary human language users on earth are disposed to see and label it as red. I don't know whether or not you'd label that as objectivism about color or about rationality/harm. But I'd classify it as a weak form of realism and objectivism because people can be incorrect, and those who are not reliably disposed to identify cases correctly would be considered blind to color or to harm. These things I'm saying are influenced by Joshua Gert, who holds very similar views. You may enjoy his work, including his Normative Bedrock (2012) or Brute Rationality (2004). He is in turn influenced by his late father Bernard Gert, whose normative ethical theory Josh's metaethics work complements.

What do you think are the implications of moral anti-realism for choosing altruistic activities?

Why should we care whether or not moral realism is true?

(I would understand if you were to say this line of questions is more relevant to a later post in your series.)

8
Lukas_Gloor
6y
I plan to address this more in a future post, but the short answer is that for some ways in which moral realism has been defined, it really doesn't matter (much). But there are some versions of moral realism that would "change the game" for those people who currently reject them. And vice versa, if one currently endorses a view that corresponds to the two versions of "strong moral realism" described in the last section of my post, one's priorities could change noticeably if one changes one's mind towards anti-realism. It's hard to summarize this succinctly because for most of the things that are straightforwardly important under moral realism (such as moral uncertainty or deferring judgment to future people who are more knowledgeable about morality), you can also make good arguments in favor of them going from anti-realist premises. Some quick thoughts:

  • The main difference is that things become more "messy" with anti-realism.

  • I think anti-realists should, all else equal, be more reluctant to engage in "bullet biting" where you abandon some of your moral intuitions in favor of making your moral view "simpler" or "more elegant." The simplicity/elegance appeal is that if you have a view with many parameters that are fine-tuned for your personal intuitions, it seems extremely unlikely that other people would come up with the same parameters if they only thought about morality more. Moral realists may think that the correct answer to morality is one that everyone who is knowledgeable enough would endorse, whereas anti-realists may consider this a potentially impossible demand and therefore place more weight on finding something that feels very intuitively compelling on the individual level. Having said that, I think there are a number of arguments why even an anti-realist might want to adopt moral views that are "simple and elegant." For instance, people may care about doing something meaningful that is "greater than their own petty little intuitions"

  • I thi
1
jayquigley
6y
One thought is that if morality is not real, then we would not have reasons to do altruistic things. However, I often encounter anti-realists making arguments about which causes we should prioritize, and why. A worry is that if morality boils down to mere preference, then it is unclear why a different person should agree with the anti-realist's preference.

Just want to second the recommendation that interested readers visit Khorton's link. It's a great article with a very helpful decision tree produced by 80,000 Hours & the Global Priorities Project.

The idea behind trying to end factory farming for animals' sake is that animals who spend their whole lives on factory farms are enduring lives that are not worth living. It is better not to bring creatures into existence who would live net negative lives.

You're right that extinction is a (very) extreme case. It's more likely that even with a drastic reduction in factory farming, a small fraction of descendants of farmed species would be preserved--either for farming, or in zoos or similar institutions. After all, they're easy to domesticate, having been bred over the centuries for precisely those purposes.

0
turchin
6y
How could we know that they are unhappy? Photos of overcrowded farms look terrible, but animals may have a different value structure, like:

  • warm

  • safe

  • many friends

  • longer life expectancy than in the forest

  • guaranteed access to unlimited amounts of food

Technically, we could have two ways to measure their preferences: do they feel constant pain according to their EEG, and do they want to escape at any price, or are they even happy to be slaughtered?

Another useful, well-written statement of this argument is in Brian Tomasik's "Does Vegetarianism Make a Difference?":

Suppose that a supermarket currently purchases three big cases per week of factory-farmed chickens, with each case containing 25 birds. The store does not purchase fractions of cases, so even if several surplus chickens remain each week, the supermarket will continue to buy three cases. This is what the anti-vegetarian means by "subsisting off of surplus animal products that would otherwise go to waste": the three cas

... (read more)
1
Avi Norowitz
6y
I wonder if the cutoff point is more like 25,000 though, the number of broiler chickens raised in a shed. It's unclear to me whether producers respond to small changes in demand by adjusting the numbers of broilers in a shed or only by adjusting the number of sheds in use. If the cutoff point is more like 25,000, then this would imply that most veg*ns go their entire lives without preventing the existence of a single broiler through their consumption changes, while a minority prevent the existence of a huge number. For what it's worth, it seems likely that donations to AMF are similar since their distributions typically cover hundreds of thousands or millions of people.

Joey, do you think you would adjust this for different circumstances---say, if living in a more expensive region, facing medical hardship, or having to support an elderly family member? For example, assuming you're renting a room for $440 USD, rents in the Bay Area would be anywhere from 200% to 500% more. If for some reason you wound up here, would you take the price difference into account, or still try to go with the global average?

3
Joey
7y
The global average number came after I had a sense of what our spending was. This was the number we felt we could both be comfortable and optimally effective at. If circumstances were different, we would have picked a different number.

Your point is well taken. Indeed, the goal is a world where everyone's interest is given the same weight as equivalent interests, regardless of species.

It is probable that lofty philosophical visions motivate and inspire people, just as you indicate.

I suppose the reason we don't always lead with that kind of messaging is that it can scare away opponents who aren't ready to challenge the "meat" industry and who worry about slippery slopes, including lawmakers whose constituents include scores of entrepreneurs who sell animal bodies as food.

If BCA were a major animal protection organization such as HSUS or PETA, I would mostly agree with you. But we are an all-volunteer force of around 4 dedicated members in one of the most progressive cities in the U.S. What we should prioritize is not the building of awareness but rather the accumulation of inspiring legislative victories, which will help mobilize the rest of those who are already aware of animal issues.

Rather than "run[ning] around and try[ing] to do something about every incidence of suffering [we] see", we are prioritizing ... (read more)

0
Remmelt
7y
Fair point. You seem to be opening up the way to show larger organisations what's possible. Having said that, can't you connect these two? Can't you on one end take practical steps to show that real legal progress is possible, while on the other end showing the big picture that you're working towards and why? Thinking big around a shared goal could then increase the cohesion and ambition of the idealistic people you're connected with and work with on each new project from now on (this reminds me of Elon Musk's leadership approach, though he unfortunately doesn't seem to care much about animal issues).

Speaking specifically for Fur Free Berkeley, and speculating on behalf of Fur Free West Hollywood, the reasons for focusing on banning fur were that it was:

  • attainable yet challenging

  • a meaningful step in an incremental progression toward further, more all-encompassing reforms

  • a farmed animal issue with which the general public has substantial sympathy

  • an industry wherein welfare misdeeds are egregious and relatively well-understood

  • an issue on which both welfare reformers and staunch abolitionists can agree (because it is a form of outright prohibiti

... (read more)
1
Remmelt
7y
Thanks for the explanation for your decision to focus on fur at this point. I'm curious – if you see this particular ban as a stepping stone to larger behavioural change in the state of California – how are you using your success here as leverage to make citizens become aware of the suffering happening on a much larger scale in intensive factory farms? I saw this article on extending your progress to other animals. But, to be fair, it isn't clear to me yet how you're prioritising these areas. In the Netherlands, I have seen a tendency amongst animal welfare charities to run around and try to do something about every incidence of suffering they see. While I understand this and admire these efforts, I try to bring across to them that becoming really good at one or two areas would make them capable of helping more animals overall, even by virtue of specialisation.

I worry that SI will delineate lots of research questions usefully, but that it will be harder to make needed progress on those questions. Are you worried about this as well, and if so, are there steps to be taken here? One idea is promoting the research projects to graduate students in the social sciences, such as via grants or scholarships.

1
Jacy
7y
The Foundational Summaries page is our only completed or planned project that was primarily intended to delineate research questions. Because of its fairly exhaustive nature, I (Jacy) think it does only have to be done once, and now our future research can just go into that page instead of needing to be repeatedly delineated, if that makes sense. None of the projects in our research agenda are armchair projects, i.e. they all include empirical, real-world study and aggregation of data. You can also find me personally critiquing other EA research projects for being too much about delineation and armchair speculation, instead of doing empirical research. We have also noted that our niche as Sentience Institute within EAA is foundational research that expands the EAA evidence base. That is definitely our primary goal as an organization. For all those reasons, I'm not very worried about us spending too much time on delineation.

There's also just the question of whether these research questions are so difficult, at least to make concrete progress on, that our work will not be cost-effective even if such progress, if achieved, would be very impactful. That's my second biggest worry about SI's impact (biggest is that big decision-makers won't properly account for the research results). I don't think there's much to do to fix that concern besides working hard for the next few months or couple years and seeing what sort of results we can get. We've also had some foundational research from ACE, Open Phil, and other parties that seems to have been taken somewhat seriously by big EAA decision-makers, so that's promising.

We'd be open to giving grants or scholarships to relevant research projects done by graduate students in the social sciences. I don't think the demand for such funding and the amount of funding we could supply is such that it'd be cost-effective to set up a formal grants program at this time (we only have two staff and would like to get a lot of research don

Lila, thanks for sharing. You've made it clear that you've left the EA movement, but I'm wondering whether, and if so why, your arguments have also pushed you away from being committed to "lowercase effective altruism"---that is, altruism that is effective but isn't necessarily associated with this movement.

Are you still an altruist? If so, do you think altruism is better engaged in with careful attention put to the effectiveness of the endeavors?

Thanks in advance.

The larger point---that film can be a compelling vehicle for important ideas---stands regardless of whether Cowspiracy was fully accurate or unbiased in its selection of figures.

That said, I agree that we should be cautious about endorsing Cowspiracy in particular, since certain key numbers on which it rests its arguments and emphases are disputed (good discussion and links on Wikipedia). Still, it's a bit unfortunate if discussion surrounding the film centers only around fact-checking---e.g., 15% vs. 51%---when in most any case there is an important, oft-overlooked environmental rationale for a shift toward cutting livestock out of the world's food system.

1
MichaelDickens
8y
McMahan: http://opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/ MacAskill: http://qz.com/497675/to-truly-end-animal-suffering-the-most-ethical-choice-is-to-kill-all-predators-especially-cecil-the-lion/

Percentages:

  • Direct charity / nonprofit work: 190 / 2352 = 8%
  • Earning to give: 512 / 2352 = 22%
  • Research: 362 / 2352 = 15%
  • None of these: 375 / 2352 = 16%
  • Didn't answer: 913 / 2352 = 39%
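As a quick sanity check, the rounded figures do follow from the raw counts (a minimal Python sketch; the counts and the total of 2352 respondents are taken directly from the list above):

```python
# Recompute the career-path percentages from the raw survey counts above.
counts = {
    "Direct charity / nonprofit work": 190,
    "Earning to give": 512,
    "Research": 362,
    "None of these": 375,
    "Didn't answer": 913,
}
total = 2352

for path, n in counts.items():
    # ".0%" formats the fraction as a whole-number percentage
    print(f"{path}: {n} / {total} = {n / total:.0%}")
```

Each line reproduces the corresponding percentage in the list (8%, 22%, 15%, 16%, 39%).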

2) People should definitely watch and try to screen the film Unlocking the Cage (website, trailer), which documents the ongoing fight in the US for legal personhood for primates.

3.1) AI safety and existential risk are obvious topics on which stimulating documentaries could be impactful.

4) My impression is that Cowspiracy was independently screened scores of times across the world, especially privately by the vegan and animal rights communities. I'd love to know more details. The trailer currently has 1.1 million views.

5.1) Cost: If a documentary mostly inv... (read more)

@cdc482 I share your concerns, suspect many others do as well, and appreciate the honesty of this post.

I think whether it's worth taking higher-risk-higher-reward paths toward doing good depends on a lot of specifics, such as those covered in 80K's framework (https://80000hours.org/articles/framework/).

In particular, the question about earning vs. working on the front lines has to do with what sort of needs your cause has, and your would-be 'role impact'. Is the cause more funding-constrained, research-constrained, talent-constrained i... (read more)

0
jayquigley
8y
Here is a provocative piece that challenges people to think outside of the box of merely earning-to-give long term: https://80000hours.org/2015/07/80000-hours-thinks-that-only-a-small-proportion-of-people-should-earn-to-give-long-term/

We should avoid the temptation to think it's an all-or-nothing between direct work now until retirement and earning-to-give from now until retirement. (Not saying that was exactly your view.)

Here's one example of something in between these extremes. One can work at for-profit jobs as a means of skilling up so that one's talents can be used for direct work projects during non-work hours and/or later in one's career. And meanwhile one can earn-to-give in the short term, remaining agnostic about the long-term path.

Peter Hurford has an interesting profile ... (read more)

2
Peter Wildeford
8y
You can also do some direct work while also doing ETG. :)

FWIW, the links at the top of this take me to the Google Doc rather than to anchors within this page.

0
Julia_Wise
8y
Thanks, fixed!

Finally I've realized:

My future giving could potentially be greatly aided by an accountant.

1-3 and 9-11 seem to be criticisms of EAs, not EA.

To 4-8, I want to say, of course, the biggest problems in the universe are extremely hard ones. Are we really surprised?

Number 12 is easily the most important criticism. The more we professionalize and institutionalize this movement, the more fractured, intransigent, and immobile it will become.

On the side of optimism, the Open Philanthropy Project shows signs of one important institution rigorously looking into broad cause effectiveness.

Admittedly, some people may be more motivated by 'be a superdonor' than 'be a mega-lifesaver'. Different strokes can be expected to motivate different folks.

I agree, and had actually thought about that.

(Just to reiterate, the point of my suggestion for an improved slogan was to motivate more by appealing more to recipients' needs than to donors' sense of being a hero.)

It would be nice to have a slogan that could capture all types of causes. Let's keep thinking...

If we cannot find such a slogan, something about lifesaving or difference-making may be a good proxy.

After all, preventing there from being more farmed animals in the future is in a sense 'saving' lives, at least in the roundabout sense that prevent... (read more)

-1
Gleb_T
8y
Yup, let's keep brainstorming, some good ideas here! Also, we can do some experimenting. We can try out "Superdonor," "Mega-Lifesaver," etc. and see how people respond. Could be a good experiment.
0
jayquigley
8y
Admittedly, some people may be more motivated by 'be a superdonor' than 'be a mega-lifesaver'. Different strokes can be expected to motivate different folks.

"Be a Superdonor" sounds like one is being encouraged to rack up a high score or become a superhero. That's okay. And maybe it sublty would help people think about effectiveness. But it doesn't put focus on the individuals people are helping. Any old local charity could ask us to be the same.

By contrast, what about "Be a Mega-Lifesaver"? This makes effective altruism out to be about the thrilling task of literally saving lives and life-years. That's why I'm an EA. One problem is that this phrase is slightly more cumbersome.

1
Gleb_T
8y
Hm, that's an interesting variant. Let's ponder this a bit more. "Superdonor" is specifically about helping people be effective in their donations, whatever the cause. "Mega-Lifesaver" is specifically about helping save lives. That seems a bit more limiting in terms of focusing on only one outcome. It might not capture people who care about animal welfare, environmental issues, etc. What do you think?

You might consider italicizing the word "most" in the phrase "whichever organisations can most effectively use it". This might guard somewhat against ineffective or complacent giving.

0
Michelle_Hutchinson
10y
Good thinking, thanks Jay. I think I'd be a little hesitant to do that, because it seems rather aggressive. That would be a good thing to test though (we're working with a group of student consultants, who will hopefully be able to get us some data on questions like this).