All of Marcus_A_Davis's Comments + Replies

Hey Saulius,

I’m very sorry that you felt that way – that wasn’t our intention. We aren’t going to get into the details of your resignation in public, but as you mention in your follow-up comment, neither this incident nor our disagreement over WAW views was the reason for your resignation.

As you recall, you did publish your views on wild animal welfare publicly. Because RP leadership was not convinced by the reasoning in your piece, we rejected your request to publish it under the RP byline as an RP article representative of an RP position. This decision... (read more)

4
Elizabeth
3mo
  Was Saulius representing RP at the time? It sounds like they were asked out of band, and surely OP already had access to the RP consensus view. 

Probably not the right place to discuss it, but at some point I'd be interested in both the object level question of whether marginal wild animal welfare research should be funded and the more meta question of what RP WAW employees and ex-employees believe on this issue.

[Edit: as per Saulius' reply below, I was perhaps too critical here, especially regarding the WAW post, and it sounds like Saulius thinks that was managed relatively well by RP senior staff]

This reply made me less confident in Rethink's ability to address publication bias. Some things that triggered my 'hmmm, not so sure about this' sense were:

  • The reply did not directly address the claims in Saulius's comment. E.g. "I'm sorry you feel that way" not "I'm sorry". No acknowledgement that if, as Saulius claimed, a senior staff member told him that it was wrong to
... (read more)

I get RP's concerns that an individual researcher's opinions not come across as RP's organizational position. However, equal care needs to be given to the flipside -- that the donor does not get the impression that a response fully reflects the researcher's opinion when it has been materially affected by the donor-communication policy. 

I'm not suggesting that management interference is inappropriate . . . but the donor has the right to know when it is occurring. Otherwise, if I were a donor/funder, I would have to assume that all communications from R... (read more)

saulius
4mo · 119
23
1
2

Thank you for your answer Marcus.

What bothers me is that if I said that I was excited about funding WAW research, no one would have said anything. I was free to say that. But to say that I’m not excited, I have to go through all these hurdles. This introduces a bias because a lot of the time researchers won’t want to go through the hurdles, and opinions that would indirectly threaten RP’s funding won’t be shared. Hence, funders would end up with a distorted view of researchers' opinions.

Put yourself into my shoes. OpenPhil sends an email to multiple people askin... (read more)

One relevant dimension is that if one of our researchers, especially while representing RP, is sending a funder something that plausibly implies one of the main funders of a department should seriously reduce or stop funding that department, we think we should know they are planning to do so before they do, and roughly what is being said, so that we can be prepared. While we don’t want to be seen as censoring our researchers, we do think it’s important to approach these sorts of things with clarity and tact.

There are also times whe

... (read more)

We mean that the ideas for these projects, and the vast majority of the funding for them, were ours, including for the moral weight work. To be clear, these projects were the result of our own initiative; they wouldn't have gone ahead when they did without us insisting on their value.

For example, after our initial work on invertebrate sentience and moral weight in 2018-2020, in 2021 OP funded $315K to support this work. In 2023 they also funded $15K for the open access book rights to a forthcoming book based on the topic. In that period of 2021-2023, for public-fa... (read more)

5
weeatquince
5mo
Hi Marcus, thanks, it's very helpful to get some numbers and clarification on this. And well done to you and Rethink for driving forward such important research. (I meant to post a similar question asking for clarification on the Rethink post too, but my perfectionism ran away with me, I never quite found the wording, and then I ran out of drafting time, so it's great to see your reply here.)

Hey Vasco, thanks for the thoughtful reply.

I do find fanaticism problematic at a theoretical level since it suggests spending all your time and resources on quixotic quests. I would go one further and say that if a set of axioms implies something like fanaticism, this should at least potentially count against that combination of axioms. That said, I definitely think, as Hayden Wilkinson pointed out in his In Defence of Fanaticism paper, there are many weaknesses with alternatives to EV.

Also, the idea that fanaticism doesn’t come up i... (read more)

2
Vasco Grilo
5mo
Thanks for the reply, Marcus! To clarify, fanaticism would only suggest pursuing quixotic quests if they had the highest EV, and I think this is very unlikely. Money-pumping is not so intuitively repelling, but rejecting EV maximisation in principle (I am fine with rejecting it in practice for instrumental reasons) really leads to bad actions. If you reject EV maximisation, you could be forced to counterfactually create arbitrarily large amounts of torture. Consider these actions:

* Action A. Prevent N days of torture with probability 100 %, i.e. prevent N days of torture in expectation.
* Action B. Prevent 2*N/p days of torture with probability p, i.e. prevent 2*N days of torture in expectation.

Fanatic EV maximisation would always support B, thus preventing N (= 2*N - N) days of torture relative to A. I think rejecting fanaticism would imply picking A over B for a sufficiently small p, in which case one could be forced to counterfactually create arbitrarily many days of torture (for an arbitrarily large N).

I believe this is a very sensible approach. I recently commented that: So I agree fanaticism can be troubling. However, just in practice (e.g. due to overly high probabilities of large upside), not in principle. I think these cases are much less problematic than the alternative. In the situations above, one would still be counterfactually producing arbitrarily large amounts of welfare by pursuing EV maximisation. By rejecting it, one could be forced to counterfactually produce arbitrarily large amounts of torture. In any case, I do not think situations like the above are found in practice? Thanks for clarifying! Nice context! I would be curious to see a quantitative investigation of how much RP should be investing in each area accounting for the factors above, and the fact that the marginal cost-effectiveness of the best animal welfare interventions is arguably much higher than that of the best GHD interventions. Investing in animal welfare work coul
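To make the arithmetic in the A/B example concrete, here is a minimal sketch; the specific values of N and p are arbitrary illustrative assumptions, only the symbolic setup comes from the comment:

    # Illustrative sketch of the expected-value comparison above.
    # N (days of torture prevented) and p (probability that action B works)
    # are arbitrary example values, not figures from the discussion.

    N = 100     # days of torture prevented by action A, with certainty
    p = 1e-6    # probability that action B succeeds

    ev_A = 1.0 * N            # action A: prevent N days with probability 100%
    ev_B = p * (2 * N / p)    # action B: prevent 2*N/p days with probability p

    print(ev_A)          # 100.0 days prevented in expectation
    print(ev_B)          # 200.0 days prevented in expectation
    print(ev_B - ev_A)   # 100.0 days forgone in expectation by picking A over B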

Thanks for the engagement, Michael.

I largely agree with your notes and caveats.

However, on this:

Expected utility maximization can be guaranteed to avoid fanaticism while satisfying the standard EUT axioms (and countable extensions), with a bounded utility function and the bounds small enough or marginal returns decreasing fast enough, in relative terms… In my view, expected utility with a bounded utility function (not difference-making) is the most instrumentally rational of the options, and it and boundedness with respect to differences seem the most pr

... (read more)
2
MichaelStJules
5mo
Difference-making risk aversion (the accounts RP has considered, other than rounding/discounting) doesn't necessarily avoid generalizations of (2), the 50-50 problem. It can

  1. just shift the 50-50 problem to a different place, e.g. 70% good vs 30% bad being neutral in expectation but 70.0001% being extremely good in expectation, or
  2. still have the 50-50 problem, but with unequal payoffs for good and bad, so be neutral at 50-50, but 50.0001% being extremely good in expectation.

To avoid these more general problems within standard difference-making accounts, I think you'd need to bound the differences you make from above (for example, apply a function that's bounded above to the difference, or assume differences in value are bounded above). On the other hand, maybe having the problem at 50-50 with equal-magnitude but opposite-sign payoffs is much worse, because our uninformed prior for the value of a random action is generally going to be symmetric around 0 net value.

Proofs below.

Assume you have an action with positive payoff x (compared to doing nothing) with probability p=50.0001%, and negative payoff y=-x otherwise, with x very large. Then:

  1. Holding the conditional payoffs x and -x constant, but changing the probabilities to 100% x and 0% y=-x, the act would be good overall. OTOH, it's bad at 0% x and 100% y=-x. By Continuity (or the Intermediate Value Theorem), there has to be some p so that the act that's x with probability p and y=-x with probability 1-p is neutral in expectation. Then we get the same problem at p, and a small probability like 0.0001% over p instead of p can make the action extremely good in expectation, if x was chosen to be large enough.
  2. Holding the probability p=50% constant, if the negative payoff y were actually 0, and the positive payoff still x and large, the act would be good overall. It's bad for y<0 low enough.[1] Then, by the Intermediate Value Theorem, there's some y so tha
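A minimal sketch of point (1), assuming a simple risk-weighted evaluation of the difference made with a convex weighting function w(p) = p^2; the weighting function, the two-outcome setup, and the payoff size are illustrative assumptions, not any account RP actually uses. Under this assumed weighting the neutral probability lands near 70.7%, loosely mirroring the 70% figure in item 1:

    # Illustrative sketch: under a risk-weighted evaluation of the difference
    # made, the "neutral" probability moves away from 50-50, and a tiny bump
    # above it can still look extremely good if the payoff is large.
    # The weighting function w(p) = p**2 is an assumption for illustration.

    def risk_weighted_value(p, x, w=lambda q: q ** 2):
        # Two-outcome gamble on the difference made: +x with probability p,
        # -x otherwise. Value = worst outcome plus weighted gain over it.
        return -x + w(p) * (2 * x)

    def neutral_probability(x, lo=0.0, hi=1.0, iters=60):
        # Bisection: by continuity there is some p* where the act is neutral.
        for _ in range(iters):
            mid = (lo + hi) / 2
            if risk_weighted_value(mid, x) < 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    x = 1e9                          # very large payoff
    p_star = neutral_probability(x)  # ~0.7071 with w(p) = p**2, not 0.5
    print(p_star)
    print(risk_weighted_value(p_star + 1e-6, x))  # positive, and grows linearly in x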
4
MichaelStJules
5mo
I agree it would be hard to avoid something like (2) with views that respect stochastic dominance with respect to the total welfare of outcomes, including background value (not difference-making). That includes maximizing the EV of a bounded increasing function of total welfare, as well as REU and WLU for total welfare, all with respect to outcomes including background value and not difference-making. Tarsney, 2020 makes it hard, and following it, x-risk reduction might be best across those views (Tarsney, 2023, footnote 43, although he says it could depend on the probabilities). See the following footnote for another possible exception with outcome risk aversion, relevant for extinction risk reduction.[1]

If you change the underlying order on outcomes from total welfare, you can also prevent nearly 50-50 actions from dominating things that are more likely to make a positive difference. A steep enough geometric discounting of future welfare[2] or a low enough future cutoff for consideration (a kind of view RP considered here) + excluding invertebrates might work. I also think difference-making views, as you suggest, would avoid (2).

Fair. This seems right to me.

1. ^ Tarsney, 2020 requires a lot of very uncertain background value that's statistically independent from the effects of the intervention. Too little background value could be statistically independent, because a lot of things are jointly determined or correlated across the universe, e.g. sentience, moral weights, and, perhaps most importantly, (the sign of) the average welfare across the universe. Conditional on generally horrible welfare across aliens (non-Earth-originating moral patients, generally), we should worry more that our descendants (or Earth-originating moral patients) will have horrible welfare if we don't go extinct. Then you just need to be sufficiently risk-averse, and something slightly better than 50-50 that could make things far worse could look bad overall.

In trying to convince people to support global health charities I don't think I've ever gotten the objection "but people in other countries don't matter" or "they matter far less than Americans", while I expect vegan advocates often hear that about animals.

I have gotten the latter one explicitly and the former implicitly, so I'm afraid you should get out more often :).

More generally, that foreigners and/or immigrants don't matter, or matter little compared to native-born locals, is fundamental to political parties around the world. It's a banal take in ... (read more)

6
Jeff Kaufman
7mo
Yikes; ugh. Probably a lot of this is me talking to so many college students in the Northeast.

I think maybe I'm not being clear enough about what I'm trying to do with my post? As I wrote to Wayne below, what I'm hoping happens is:

  1. Some people who don't think animals matter very much respond to RP's weights with "that seems really far from where I'd put them, but if those are really right then a lot of us are making very poor prioritization decisions".
  2. Those people put in a bunch of effort to generate their own weights.
  3. Probably those weights end up in a very different place, and then there's a lot of discussion, figuring out why, and identifying the core disagreements.

David's post is here: Perceived Moral Value of Animals and Cortical Neuron Count

What do you think of this rephrasing of your original argument:

I suspect people rarely get deeply interested in the value of foreign aid unless they come in with an unusually high initial intuitive view that being human is what matters, not being in my country... If you somehow could convince a research group, not selected for caring about non-Americans, to pursue this question in isolation, I'd predict they'd end up with far less foreign-aid-friendly results.

I think this arg... (read more)

6
Jeff Kaufman
7mo
Awesome, thanks! Good post!

First, I think GiveWell's research, say, is mostly consumed by people who agree people matter equally regardless of which country they live in. Which makes this scenario more similar to my "When using the moral weights of animals to decide between various animal-focused interventions this is not a major concern: the donors, charity evaluators, and moral weights researchers are coming from a similar perspective."

But say I argued that the US Department of Transportation funding ($12.5M/life) should be redirected to foreign aid until they had equal marginal costs per life saved. I don't think the objection I'd get would be "Americans have greater moral value" but instead things like "saving lives in other countries is the role of private charity, not the government". In trying to convince people to support global health charities I don't think I've ever gotten the objection "but people in other countries don't matter" or "they matter far less than Americans", while I expect vegan advocates often hear that about animals.

Maybe. We're a little unsure about this right now. The code base for this is part of the bigger Cross-Cause Cost-Effectiveness Model, and we haven't made a final determination on whether we will release it.

6
EdoArad
7mo
Do you mind sharing your main consideration against releasing it? Not trying to push back, but rather to understand this as I'm considering working on related topics 

Jeff, are you saying you think "an intuition that a human year was worth about 100-1000 times more than a chicken year" is a starting point of "unusually pro-animal views"?

In some sense, this seems true relative to most humans' implied views by their actions. But, as Wayne pointed out above, this same critique could apply to, say, the typical American's views about global health and development. Generally, it doesn't seem to buy much to frame things relative to people who've never thought about a given topic substantively and I don't think you'd think this... (read more)

8
Jeff Kaufman
7mo
I did say that, and at the time I wrote that I would have predicted that in realistic situations requiring people to trade off harms/benefits going to humans vs chickens the median respondent would just always choose the human (but maybe that's just our morality having a terrible sense of scale), and Peter's 300x mean would have put him somewhere around the 95th percentile.

Since writing that I read Michael Dickens' comment, linking to this SSC post summarizing the disagreements [1], and I'm now less sure. It's hard for me to tell exactly what the surveys included: for example, I think they excluded people who didn't think animals have moral worth at all, and it's not clear to me whether they were getting people to compare lives vs life years. I don't know if there's anything better on this?

I agree! I'm not trying to say that uninformed people's off-the-cuff guesses about moral weights are very informative on what moral weights we should have. Instead, I'm saying that people start with a wide range of background assumptions, and if two people started off with 5th and 95th percentile views trading off benefits/harms to chickens vs humans I expect them to end up farther apart in their post-investigation views than two people who both started at the 95th.

[1] That post cites David Moss from RP as having run a better survey, and summarizes it, but doesn't link to it -- I'm guessing this is because it was Moss doing something informally with SSC and the SSC post is the canonical source, but if there's a full writeup of the survey I'd like to see it!

Thanks for the question, but unfortunately we cannot share more about those involved or the total.

I can say we're confident this unlocked millions for something that otherwise wouldn't have happened. We think maybe half of the money moved would not have been spent, and some lesser amount would have been spent on less promising opportunities from an EA perspective.

Thanks for the question and the kind words. However, I don’t think I can answer this without falling back somewhat on some rather generic advice. We do a lot of things that I think has contributed to where we are now, but I don’t think any of them are particularly novel:

  • We try to identify really high quality hires, bring them on, train them up and trust them to execute their jobs.
  • We seek feedback from our staff, and proactively seek to improve any processes that aren’t working.
  • We try to follow research and management best practices, and gather ideas on
... (read more)

Thanks for the question! I think describing the current state will hint at a lot on what might make us change the distribution, so I’m primarily going to focus on that.

I think the current distribution of what we work on is dependent on a number of factors, including but not limited to:

  1. What we think about research opportunities in each space
  2. What we think about the opportunity to exert meaningful influence in the space
  3. Funding opportunities
  4. Our ability to hire people

In a sense, I think we’re cause neutral in that we’d be happy to work on any cause provi... (read more)

4
Zach Stein-Perlman
2y
Thanks for your reply. I think (1) and (2) are doing a ton of work — they largely determine whether expected marginal research is astronomically important or not. So I'll ask a more pointed follow-up: Why does RP think it has reason to spend significant resources on both shorttermist and longtermist issues (or is this misleading; e.g., do all of your unrestricted funds go to just one)? What are your "opinions on high level cause prioritization" and the "disagreement inside RP about this topic"? What would make RP focus more exclusively on either short-term or long-term issues?

Given we know so little about their potential capacities and what alters their welfare, I’d suggest the potential factory farming of insects is quite bad. However, I don’t know what methods are effective at discouraging people from consuming them, though some of the things you suggest seem plausible paths here. I think it is pretty hard to say much on the tractability of these things without further research.

Also, we are generally keen to hear from folks who are interested in doing further work on invertebrates. And, personally, if you know of... (read more)

I would like to see more applications in the areas outlined in our RFP and I’d encourage anyone with interest in working on those topics to contact us.

More generally, I would like to see far more people and funding engaged in this area. Of course, that’s really difficult to accomplish. Outside of that, I’m not sure I’d point to anything in particular.

We don’t have a cost-effectiveness estimate of our grants. The reason is that it would likely be very difficult to produce, and while it could be useful, we're not sure it's worth the investment for now.

On who to be in touch with, I would suggest such a prospective student is in touch with groups like GFI and New Harvest if they would like advice on attempting to find advisors for this type of work.

On advice, I would generally stay away from giving career advice. If forced to answer, I would not give the general advice that everyone or most people are better off attempting to do high-impact research as soon as is feasible.

I think we’re looking for promising projects and one clear sign of that is often a track-record of success. The more challenging the proposal, the more something like this might be important. However, we’re definitely open to funding people without a long track record if there are other reasons to believe the project would be successful.

Personally, I’d say good university grades alone are probably not a strong enough signal, but running or participating in successful small projects on a campus might be, particularly if the projects were similar in scope or s... (read more)

We grade all applications with the same scoring system. For the prior round, after the review by the primary and secondary investigators, and once we'd all read their conclusions, each grant manager gave a score (excluding cases of conflicts of interest) of +5 to -5, with +5 being the strongest possible endorsement of positive impact, and -5 being an anti-endorsement of a grant that's actively harmful to a significant degree. We then averaged across scores, approving those at the very top, dismissing those at the bottom, and largely discussing only those grant... (read more)
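A minimal sketch of the averaging step described above, with made-up grant names and scores; None marks a manager recused for a conflict of interest:

    # Illustrative sketch only: hypothetical grants and scores.
    grants = {
        "Grant A": [4, 5, 3, None],   # recused manager excluded from the average
        "Grant B": [1, -2, 0, 2],
        "Grant C": [-4, -3, -5, -4],
    }

    def average_score(scores):
        valid = [s for s in scores if s is not None]
        return sum(valid) / len(valid)

    ranked = sorted(grants, key=lambda g: average_score(grants[g]), reverse=True)
    for name in ranked:
        print(name, round(average_score(grants[name]), 2))
    # Grants at the very top are approved, those at the bottom dismissed,
    # and discussion focuses on the ones in between.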

I don’t think it is true that the EA AW Fund is essentially neartermist, though this may depend somewhat on what you mean. We definitely consider grants that have potential long-term payoffs beyond the next few decades. In my opinion, much of the promise of PBM and cultivated meat relies on impacts that would be 15-100 years away, and neither I nor, I believe, the other funders hold any intrinsic reason to discount or not consider other areas of animal welfare that would have long-term payoffs.

That said, as you suggest in (2), I do think it is true that it make... (read more)

In the just-completed round we got several applications from academics looking to support research on plant-based and cultivated meat projects, though we ultimately decided not to support any of them. We definitely welcome grant applications in this area, and our new request for proposals explicitly calls for applications on work in this space. Additionally, I would direct them to consider applying to GFI’s alternative protein research grants and the Food Systems Research Fund, among other places, if they believe they have promising projects in this sp... (read more)

7
Avi Norowitz
3y
  Did you mean the ACE Research Fund / Animal Advocacy Research Fund?

What new charities do you want to be created by EAs?

I don't have any strong opinions about this and it would likely take months of work to develop them. In general, I don't know enough to say whether it is more desirable for new charities to work in areas I think could use more work than for existing organizations to ramp up their work in those domains.

What are the biggest mistakes Rethink Priorities did?

Not doing enough early enough to figure out how to achieve impact from our work and communicate with other organizations and funders about how we can work together.

Thanks for the questions!

If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?

I think this depends on many factual beliefs you hold, including what groups of creatures count and what time period you are concerned about. Restricting ourselves to the present and assuming all plausibly sentient minds count (and ignoring extremes, say, less than 0.1% chance), I think farm and wild animals are plausibly candidates for enduring... (read more)

Thanks for the question, Edo!

We keep a large list of project ideas, and regularly add to it by asking others for project ideas, including staff, funders, advisors, and organizations in the spaces we work in.

4
EdoArad
3y
Thank you! I have some followup questions if that's ok :) Is it reasonable to publicly publish the list or some of it? How do you prioritize and select them?  Do the suggestions to pursue a project come from the managers or the researchers? If they sometimes come from the researchers, do you have any mechanisms in place to motivate the researchers to explore the list or does it happen naturally?

Hey Edo, thanks for the question!

We've had some experience working with volunteers. In the past, when we had less operational support than we do now, we found it challenging to manage and monitor volunteers, but we think it's something we're better placed to handle now, so we may explore it again in the coming years, though we are generally hesitant about depending on free labor.

We've not really had experience publicly outsourcing questions to the EA community, but we regularly consult wider EA communities for input on questions we are working on. Finally, and I'm not sure this is what you meant, but we've also partnered with Metaculus on some forecasting questions.

Hey Josh, thanks for the question!

From first principles, our allocation depends on talent fit, the counterfactual value of our work, fundraising, and, of course, some assessment of how important we think the work is, all things considered.

At the operational level, we set targets as percentages of time we want to spend on each cause area based on these factors, and we re-evaluate as our existing commitments, the data, and changes in our opinions about these matters warrant.

I think it's going great! I think our combined skillset is a big pro when reviewing work and considering project ideas. In general, bouncing ideas off each other improves and sharpens our thinking. We are definitely able to cover more depth and breadth with the two of us than if only one person were leading the organization.

Additionally, Peter and I get along great and I enjoy working alongside him every day (well, digitally anyway, given we are remote).

Thanks for the question!

We hire for fairly specific roles, and the difference between those we do and don't hire isn't necessarily as simple as those brought on being better researchers overall (to say nothing of differences in fit or skill across causes).

That said, we generally prioritize ability in writing, general reasoning, and quantitative skills. That is, we value the ability to uncover and address considerations, counter-points, and meta-considerations on a topic, produce quantitative models, and do data analysis when appropriate (obviously this is mor... (read more)

Thanks for the questions!

On (1), we see our work in WAW as currently doing three things: (1) foundational research (e.g., understanding moral value and sentience, understanding well-being at various stages of life), (2) investigating plausible tractable interventions (i.e., feasible interventions currently happening or doable within 5 years), and (3) field building and understanding (e.g., currently we are running polls to see how "weird" the public finds WAW interventions).

We generally defer to WAI on matters of direct outreach (both academic and general ... (read more)

Thanks for the question!

I think the short answer is this: what we think of doing projects in the "improving the collective understanding" space depends on a number of factors, including the nature of the project, the probability of that general change in perspective leading to changed actions in the future, and how important it would be if that change occurred.

One very simplistic model you can use to think about possible research projects in this area is:

  1. Big considerations (classically "crucial considerations", i.e. moral weight, invertebrate sentience
... (read more)
2
EdoArad
3y
Thanks! This makes a lot of sense. 

Hey, I'm happy to see this on the forum! I think farmed shrimp interventions are a promising area and this report highlights some important considerations. I should note that Rethink Priorities has also been researching this topic for a while. I won't go into detail, as I'm not leading this work and the person who is leading it is currently on leave, but I think we've tentatively come to some different conclusions about the most promising next steps in this domain.

In the future, if anyone reading this is inclined to work on farmed shrimp, in addition to reviewing this report I'd hope you'd read over our forthcoming work and/or reach out to us about this area.

7
KarolinaSarek
3y
Thanks for adding this, Marcus! Indeed, Vicky, the primary author, worked with Daniela Waldhorn from Rethink Priorities while researching this topic. We both cannot wait to read the final report and see your tentative conclusions. Once your report is published, I will link it in this post to ensure that people can read more from a different angle and see where our research differs. One thing to note is that CE plans to follow up our shrimp welfare report with an implementation report that looks more at the practicalities, which may lead to some changes in next steps.

I think 1 and 2 should result in the exact same experiences (and hence same intensity) since the difference is just some neurons that didn't do anything or interact with the rest of the brain, even though 2 has a greater proportion of neurons firing. The claim that their presence/absence makes a difference to me seems unphysical, because they didn't do anything in 1 where they were present.

I'm unclear why you think proportion couldn't matter in this scenario.

I've written a pseudo program in Python below in which proportion does matter, removing neuron... (read more)
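The full program is elided above, but a minimal sketch in the same spirit, consistent with the get_number_of_pain_neurons function Michael quotes below and with the 5-neuron vs 2-neuron example he mentions, might look like the following. This is an illustrative reconstruction under those assumptions, not the original code:

    # Illustrative sketch only: pain intensity modeled as the proportion of
    # nociceptive neurons currently firing. Removing neurons that never fire
    # still changes the denominator, so the modeled intensity changes.

    def get_number_of_pain_neurons(nociceptive_neurons_list):
        return len(nociceptive_neurons_list)  # get length of list

    def pain_intensity(nociceptive_neurons_list):
        firing = sum(1 for neuron in nociceptive_neurons_list if neuron["firing"])
        return firing / get_number_of_pain_neurons(nociceptive_neurons_list)

    large_brain = [{"firing": True}, {"firing": True},
                   {"firing": False}, {"firing": False}, {"firing": False}]
    small_brain = [{"firing": True}, {"firing": True}]  # silent neurons removed

    print(pain_intensity(large_brain))  # 0.4
    print(pain_intensity(small_brain))  # 1.0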

3
MichaelStJules
3y
Ya, this is where I'd push back. My understanding is that neurons don't "check" if other neurons are firing, they just receive signals from other neurons. So, a neuron (or a brain, generally) really shouldn't be able to tell whether a neuron was not firing or just didn't exist at that moment.

This text box I'm typing into can't tell whether the keyboard doesn't exist or just isn't sending input signals, when I'm not typing, because (I assume) all it does is check for input. (I think the computer does "know" if there's a keyboard, though, but I'd guess that's because it's running a current through it or otherwise receiving signals from the keyboard, regardless of whether I type or not. It's also possible to tell that something exists because a signal is received in its absence but not when it's present, like an object blocking light or a current.)

Specifically, I don't think this makes sense within the constraints of my thought experiment, since it requires the brain to be able to tell that a neuron exists at a given moment even if that neuron doesn't fire:

    def get_number_of_pain_neurons(nociceptive_neurons_list):
        return len(nociceptive_neurons_list)  # get length of list

It could be that even non-firing neurons affect other neurons in some other important ways I'm not aware of, though.

EDIT: What could make sense is that instead of this function, you have two separate constants to normalize by, one for each brain, and these constants happen to match the number of neurons in their respective brain regions (5 and 2 in your example), but this would involve further neurons that have different sensitivities to these neurons as inputs between the two brains. And now this doesn't reflect removing neurons from the larger brain while holding all else equal, since you also replaced neurons or increased their sensitivities. So this wouldn't reflect my thought experiment anymore, which is intended to hold all else equal.

I don't think it's a priori implausible that

We in fact do (1) then (2). However, to continue your example, donations to animal work still end up going to animals. If it were the case, say, that we hit the animal total needed for 2020 before the overall total, additional animal donations would go to animal work for 2021.*

It is true in this scenario that in 2020 we'd end up spending less unrestricted funding on animals, but the total spent on animals that year wouldn't change and the animal donations for 2020 would not then be spent on non-animal work.

*We would very much state publicly when we have no more room for further donations in general, and by cause area.

Internally, as part of Rethink Charity, we have fairly standard formal anti-harassment, discrimination, and reasonable accommodation policies. That is, we comply with all relevant anti-discrimination laws, including Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA). We explicitly prohibit offensive behavior (e.g. derogatory comments towards colleagues of a specific gender or ethnicity).

We also provide a way for any of our staff to offer anonymous feedback and information to se

... (read more)
1
mativazquez
4y
Thanks a lot, Marcus.
3
Kirsten
4y
I'm glad to hear you've taken steps to be an inclusive organisation. Follow-up question: do you plan to do any more work on diversity and inclusion going forward?

Thanks for the question. We have forthcoming work on ballot initiatives which will hopefully be published in January and other work that we plan to keep unpublished (though accessible to allies) for the foreseeable future.

In addition, we have some plans to investigate potentially high value policies for animal welfare.

On CE's work, we communicate with them fairly regularly about their work and their plans, in addition to reading and considering the outputs of their work.

I honestly don’t know. I’d probably be doing research at another EA charity, or potentially leading (or trying to lead) a slightly different EA charity that doesn’t currently exist. Generally, I have previously seriously considered working at other EA organizations but it's been some time since I've seriously considered this topic.

Thanks for the question and thanks for the compliment about our work! As to the impact of the work, from our Impact survey:

Invertebrate sentience was the second most common (13) piece of work that changed beliefs. It also prompted the second largest number of changed actions of all our work (alongside the EA Survey), including 1 donation influenced, 1 research inspiration, and 4 unspecified actions.

Informally, I could add many people (probably >10) in the animal welfare space have personally told me they think our work on invertebrates changed their opinion about invert

... (read more)
2
abrahamrowe
4y
That's great to hear! I guess I think it would be great for norms of caring about invertebrates to be spread in the animal advocacy space, so that seems good.

We have raised half his salary for 2020 and 2021 on a grant explicitly for this purpose. If you’d like to talk more about this, I’d be happy for you to shoot me an email: marcus [at] rtcharity.org

Thanks for the question! We do research, informed by input from funders, organizations, and researchers, that we think will help funders make better grants and help direct the work organizations do toward higher-impact work.

So our plans for distribution vary by the audience in question. For funders and particular researchers we make direct efforts to share our work with them. Additionally, we try to regularly have discussions about our work and priorities with the relevant existing EA research communities (researchers themselves and org leaders). However, as we've s

... (read more)
1
abrahamrowe
4y
I don't actually know if engagement is important (maybe it is an indicator of either your thoroughness, as there are few follow-ups, or just that you all are the experts, so most people on the forum aren't going to weigh in). Sharing with funders makes a lot of sense. Thanks!

Thanks for the question! We do not view our work as necessarily focused on the West. To the extent our work so far has focused on such countries, it's because that's where we think our comparative advantage currently has centered but as our team learns, and possibly grows, this won't necessarily hold over time.

Thanks for the question! To echo Ozzie, I don't think it's fair to directly compare the quality of our work to the quality of GPI's work given we work in overlapping but quite distinct domains with different aims and target audiences.

Additionally, we haven't prioritized publishing in academic journals, though we have considered it for many projects. We don't believe publishing in academic journals is necessarily the best path towards impact in the areas we've published in given our goals and don't view it as our comparative advantage.

All this said, we don'

... (read more)

My ranges represent what I think is a reasonable position on the probability of each creature's sentience given all current input and expected future input. Still, as I said:

...the range is still more of a guideline for my subjective impression than a declaration of what all agents would estimate given their engagement with the literature

I could have made a 90% subjective confidence interval, but I wasn't confident enough that such an explicit goal in creating or distributing my understanding would be helpful.

I meant to highlight a case where I downgraded my belief, in a scenario in which there are multiple ways to update on a piece of evidence.

To take an extreme example for purposes of clarification, suppose you begin with a theory of sentience (or a set of theories) which suggests behavior X is possibly indicative of sentience. Then, you discover behavior X is possessed by entities you believe are not sentient, say, rocks. There are multiple options here as to how to reconcile these beliefs. You could update towards thinking rocks are sentient, or you could downgr

... (read more)
3
MichaelA
4y
I might be wrong about this or might be misunderstanding you, but I believe that, in any case where the absence of X is evidence against Y, the presence of X has to be evidence for Y. (Equivalently, whenever the presence of X is evidence for Y, the absence of X has to be evidence against Y.)

This does go against the common statement that "Absence of evidence is not evidence of absence." But we can understand that statement as having a very large kernel of truth, in that it is often the case that absence of evidence is only extremely weak evidence of absence. It depends on how likely it would be that we'd see the evidence if the hypothesis was true.

For an extreme example, let's say that an entity not being made of molecules would count as very strong evidence against that entity being sentient. But we also expect a huge number of other entities to be made of molecules without being sentient, and thus the fact that a given entity is made of molecules is extraordinarily weak evidence - arguably negligible for many purposes - that the entity is sentient. But it's still some evidence. If we were trying to bet on whether entity A (made of molecules) or entity B (may or may not be made of molecules; might be just a single atom or quark or whatever) is more likely to be sentient, we have reason to go with entity A.

This seems to sort-of mirror the possibility you describe (though here we're not talking behaviours), because being made of molecules is a necessary precondition for a huge number of what we'd take to be "indicators of sentience", but by itself is far from enough. Which does mean evidence of X is extremely weak evidence of sentience, but it's still some evidence, relative to a state in which we don't know whether X is true or not.

(I'm aware this is a bit of a tangent, and one that's coming fairly late. The post as a whole was very interesting, by the way - thanks to everyone who contributed to it.)
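A minimal numeric check of the claim, using arbitrary illustrative probabilities: by the law of total probability, if P(Y|X) is even slightly above P(Y), then P(Y|not-X) must fall below P(Y).

    # Numeric check of "if presence of X is evidence for Y, absence of X is
    # evidence against Y". All probabilities are arbitrary illustrative values.

    p_y = 0.01              # prior probability of sentience (Y)
    p_x = 0.999             # probability of being made of molecules (X)
    p_y_given_x = 0.010001  # X is extremely weak evidence for Y

    # Law of total probability: P(Y) = P(Y|X)P(X) + P(Y|~X)P(~X)
    p_y_given_not_x = (p_y - p_y_given_x * p_x) / (1 - p_x)

    print(p_y_given_not_x)        # about 0.009, below the 0.01 prior
    print(p_y_given_not_x < p_y)  # True: absence of X is (weak) evidence against Y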

I think the proposed karma system, particularly when combined with the highly rated posts being listed higher, is a quite bad idea. In general, if you are trying to ensure quality of posts and comments while spreading the forum out more broadly there are hard tradeoffs with different strengths and weaknesses. Indeed, I might prefer some type of karma weighting system to overly strict moderation but even then the weights proposed here don't seem justifiable.

What problem is being solved by giving up to 16 times maximum weight that would not be solved with gi... (read more)

5
SamDeere
6y
Thanks for the comments on this Marcus (+ Kyle and others elsewhere). I certainly appreciate the concern, but I think it's worth noting that any feedback effects are likely to be minor.

As Larks notes elsewhere, the scoring is quasi-logarithmic — to gain one extra point of voting power (i.e. to have your vote be able to count against that of a single extra brand-new user) is exponentially harder each time. Assuming that it's twice as hard to get from one 'level' to the next (meaning that each 'level' has half the number of users of the preceding one), the average 'voting power' across the whole of the forum is only 2 votes. Even if you make the assumption that people at the top of the distribution are proportionally more active on the forum (i.e. a person with 500,000 karma is 16 times as active as a new user), the average voting power is still only ≈3 votes.

Given a random distribution of viewpoints, this means that it would take the forum's current highest-karma users (≈5,000 karma) 30-50 times as much engagement in the forum to get from their current position to the maximum level. Given that those current karma levels have been accrued over a period of several years, this would entail an extreme step-change in the way people use the forum.

(Obviously this toy model makes some simplifying assumptions, but these shouldn't change the underlying point, which is that logarithmic growth is slooooooow, and that the difference between a logarithmically-weighted system and the counterfactual 1-point system is minor.)

This means that the extra voting power is a fairly light thumb on the scale. It means that community members who have earned a reputation for consistently providing thoughtful, interesting content can have a slightly greater chance of influencing the ordering of top posts. But the effect is going to be swamped if only a few newer users disagree with that perspective. The emphasis on can in the preceding sentence is because people shouldn't be using s
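A minimal sketch of the arithmetic behind the 2-vote and ≈3-vote figures, under the comment's own simplifying assumptions (16 voting-power levels, each with half as many users as the previous, and, in the second case, activity proportional to voting power):

    # Toy model from the comment above: voting power levels 1..16, each level
    # containing half as many users as the previous one.

    levels = range(1, 17)
    users = [0.5 ** (power - 1) for power in levels]  # relative users per level

    # Average voting power across all users (~2 votes)
    avg_power = sum(p * u for p, u in zip(levels, users)) / sum(users)

    # Activity-weighted average, assuming users are as active as their voting
    # power (a 16x-power user is 16x as active): ~3 votes
    activity = [p * u for p, u in zip(levels, users)]
    avg_power_weighted = sum(p * a for p, a in zip(levels, activity)) / sum(activity)

    print(round(avg_power, 2), round(avg_power_weighted, 2))  # ≈ 2.0, ≈ 3.0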

Sorry for the extremely slow reply, but yes. That topic is on our radar.

It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself?

I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or another. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have biases, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for.

An ev... (read more)

Such personal incentives are important but, again, I didn't advocate getting someone hostile to AI risk. I proposed aiming for someone neutral. I know, no one is "truly" neutral but you have to weigh potential positive personal incentives of someone invested against potential motivated thinking (or more accurately in this case, "motivated selection").

1
Robert_Wiblin
8y
Someone who was just neutral on the cause area would probably be fine, but I think there are few of those as it's a divisive issue, and they probably wouldn't be that motivated to do the work.

I don't disagree on the problems of getting someone who thinks there is "negligible probability" of AI causing extinction being not suited for the task. That's why I said to aim for neutrality.

But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?

0
Gram_Stone
8y
Are there alternatives to a person like this? It doesn't seem to me like there are. "Is broadly positive towards AI risk as a cause area" could mean "believes that there should exist effective organizations working on mitigating AI risk", or could mean "automatically gives more credence to the effectiveness of organizations that are attempting to mitigate AI risk." It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself? What does hiring the ideal survey supervisor look like in your mind if you can't use the words "neutral" or "neutrality" or any clever rephrasings thereof?

This survey makes sense. However, I have a few caveats:

Think that AI risk is an important cause, but have no particular convictions about the best approach or organisation for dealing with it. They shouldn't have worked for MIRI in the past, but will presumably have some association with the general rationality or AI community.

Why should the person overseeing the survey think AI risk is an important cause? Doesn't that self-select for people who are more likely to be positive toward MIRI than whatever the baseline is for all people familiar with... (read more)

2
Robert_Wiblin
8y
"Why should the person overseeing the survey think AI risk is an important cause?" Because someone who believes it's a real risk has strong personal incentives to try to make the survey informative and report the results correctly (i.e. they don't want to die). Someone who believed it's a dumb cause would be tempted to discredit the cause by making MIRI look bad (or at least wouldn't be as trusted by prospective MIRI donors).
0
Gram_Stone
8y
Because the purpose of the survey is to determine MIRI's effectiveness as a charitable organization. If one believes that there is a negligible probability that an artificial intelligence will cause the extinction of the human species within the next several centuries, then it immediately follows that MIRI is an extremely ineffective organization, as it would be designed to mitigate a risk that ostensibly does not need mitigating. The survey is moot if one believes this.

It's complicated, but I don't think it makes sense to have a probability distribution over probability distributions, because it collapses. We should just have a probability distribution over outcomes.

I did mean over outcomes. I was referring to this:

If we're uncertain about Matthews' propositions, we ought to place our guesses somewhere closer to 50%. To do otherwise would be to mistake our deep uncertainty for deep scepticism.

That seems mistaken to me but it could be because I'm misinterpreting it. I was reading it as saying we should split the differ... (read more)

I think you are selling Matthews short on Pascal's Mugging. I don't think his point was that you must throw up your hands because of the uncertainty, but that he believes friendly AI researchers have approximately the same amount of evidence that AI research done today will have a 10^-15 chance of saving the existence of future humanity as for any other infinitesimal but positive chance.

Anyone feel free to correct me, but I believe in such a scenario spreading your prior evenly over all possible outcomes wouldn't arbitrarily just include splitting the difference betw... (read more)

1
RyanCarey
9y
It's complicated, but I don't think it makes sense to have a probability distribution over probability distributions, because it collapses. We should just have a probability distribution over outcomes. We choose our prior estimate for chance of success based on other cases of people attempting to make safer tech. In fairness, for people who adhere to expected value thinking to the fullest extent (some of whom would have turned out to the conference), arguments purely on the basis of scope of potential impact would be persuasive. But if it's even annoying folks at EA Global, then probably people ought to stop using them.
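A minimal numeric illustration of the "it collapses" point, with arbitrary example numbers: a credence spread over several candidate probabilities for an outcome reduces, for expected-value purposes, to a single weighted-average probability over outcomes.

    # Illustration: a "distribution over probability distributions" collapses.
    # The candidate probabilities and the credences in them are arbitrary
    # example numbers, not figures from the discussion.

    candidate_probs = [1e-15, 1e-6, 0.01]  # possible chances the intervention works
    credences       = [0.70,  0.25, 0.05]  # our credence in each candidate

    # For expected value, only the collapsed probability matters:
    collapsed = sum(p * c for p, c in zip(candidate_probs, credences))
    print(collapsed)  # a single probability over the outcome (~0.0005)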

This is super practical advice that I can definitely see myself applying in the future. The introductions on the sheets seem particularly well-suited to getting people engaged.

Also, "What is the first thing you would do if appointed dictator of the United States?" likely just entered my favorite questions to ask anyone in ice-breaker scenarios, many of which have nothing to do with EA.

3
Kelsey Piper
9y
The question we've had the most success with for a regular/weekly meetup is "what is something interesting you've learned/read/thought about recently". The advantage to keeping it consistent is that people know what to expect; this question also avoids most of the disadvantages of keeping the question consistent (namely that people repeat themselves and get bored). It also tends to provoke fascinating answers.