All of Pete Rowlett's Comments + Replies

I may have made an incorrect assumption! I thought that when you said “the average person can intuit that there’s no reasonable alternative to just politely ignoring the suffering of the quintillions of insects, worms and mites on the planet,” you were arguing that solving the problem wasn’t tractable.

Generally people on the EA Forum prioritize work on problems that do well under the ITN framework. If you suggested that we ignore the suffering, then perhaps you partly accept that there is suffering, and it’s important, though now I’m curious whether you ... (read more)

I think this sentence has a mistake:

“Hey, I quite like this post that summarizes my organization’s work is cool, check it out”

Could be either:

“Hey, I quite like this post that summarizes my organization’s work, check it out”

“Hey, this post that summarizes my organization’s work is cool, check it out”

Your first point seems like a legitimate question to me.  I've not read much about those animals, but I would assume there are many of them, perhaps far more than there are insects.  I would be curious to read about indicators of their sentience.  The author, however, described evidence of several indicators of insect sentience ("responding to anesthetic, nursing their wounds, making tradeoffs between pain and reward, cognitively modeling both risks and reward in decision-making, responding in novel ways to novel experiences, self-medicating... (read more)

0
Henry Howard🔸
I didn't say anything about the tractability of insect welfare interventions, but I'm sure there are many things you could do to help insects. Almost all of those things will be at the direct or indirect cost of people. There are very few worlds in which you can consider insects sentient and not go completely off the rails sacrificing human welfare to insect welfare. In a world with limited resources, meaningfulness is necessarily measured on a relative scale to triage resources. A toddler dropping their ice cream is "absolutely important" but I don't spend much time daily preventing that when there are families struggling to put food on the table, or 600,000 people dying of malaria annually, or chickens in cages. When one moral issue is magnitudes greater than any existing moral issue, it requires a similarly large reorientation of attention and resources. I think you're too flippant in dismissing how disruptive this would be.

I am optimistic about this sort of idea, but I agree that it's important to pay close attention to perverse incentives. For what it's worth, the paper referenced in the post says the following regarding increased quantity concerns in the imagined animal well-being units (AWBUs) market:

"As can be seen from Table 2, a producer has three options to increase the number of AWBUs produced—it can add more animals, increase well-being, or avoid discount factors. The incentive for all farms to improve animal well being is straightforward. The higher the price of AW... (read more)

Thank you so much for this write-up and all the work the SWP team does! Very useful as a potential donor to see both the strategy and the absorbency plans. I'm also looking forward to the results of the University of Stirling study.

I'm curious about the margins on the products in your store. If they're low, I'll purchase them more rarely (for myself and people who I know will wear and enjoy them, mostly in the personal fun/fuzzies bucket) and donate more directly. If they're very high, I'll be more inclined to buy them for other people as a gamble that... (read more)

5
Aaron Boddy🔸
Thanks Pete :)  Good question! The margin on the merch is pretty slim (around 20% per item, depending on what you get); we mainly use it as an awareness tool rather than a major fundraising channel. So if you wanted to distribute t-shirts/stickers to friends, then I agree it probably makes more sense to get a bunch made up yourself rather than buy them through our store.

There are a few possible sources of funding that I'm aware of.  These first two are managed funds that accept applications:

Effective Altruism Funds Long-Term Future Fund (Application)
Founders Pledge Global Catastrophic Risks Fund (Application)

Manifund may be a good fit since your request is small and urgent.  You can list your project there, and anyone can fund it.

It doesn't sound like you're doing anything related to antimicrobial resistance, but if you are, there's the AMR Funding Circle.

Do you already know what sort of power system you need an... (read more)

1
Nnaemeka Emmanuel Nnadi
Thanks for your comment. Here is the breakdown:

Solar Power System Cost Breakdown
• Lithium-ion batteries (20 kWh): 4,600,000 NGN
• Hybrid inverter (16 kVA): 1,800,000 NGN
• Solar cells: 1,000,000 NGN
• Cables: 72,000 NGN
• Installation: 500 USD (800,000 NGN at 1,600 NGN/USD)
• Total in NGN: 8,272,000 NGN
• Total in USD: 5,170 USD (at 1,600 NGN/USD)
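
To sanity-check the arithmetic in the breakdown above, here is a minimal Python sketch; the line items and the 1,600 NGN/USD rate are taken directly from the comment, and the script only re-adds and re-converts them:

```python
# Re-adding the quoted line items and converting at the stated rate.
# All figures come from the comment above; nothing new is introduced.
items_ngn = {
    "Lithium-ion batteries (20 kWh)": 4_600_000,
    "Hybrid inverter (16 kVA)": 1_800_000,
    "Solar cells": 1_000_000,
    "Cables": 72_000,
    "Installation (500 USD)": 500 * 1_600,  # 800,000 NGN at 1,600 NGN/USD
}

RATE_NGN_PER_USD = 1_600
total_ngn = sum(items_ngn.values())
total_usd = total_ngn / RATE_NGN_PER_USD

print(f"Total: {total_ngn:,} NGN")     # 8,272,000 NGN
print(f"Total: {total_usd:,.0f} USD")  # 5,170 USD
```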

I think the website is already quite good. It includes almost everything that somebody new to the community might find useful without overcrowding. If I had to come up with a couple comments:

  1. “For the first couple of weeks, I’ll be testing how the current site performs against these goals, then move on to the redesign, which I’ll user-test against the same goals.” For the testing methodology, it sounds like you’re planning to gather metrics on this version, switch to V2, and gather metrics again. I think A/B testing might be a better option if it’s no
... (read more)
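
On the A/B testing suggestion in point 1, here is a minimal sketch of how the comparison could be run once both versions have traffic; the visitor and conversion counts below are made up for illustration and are not part of the original comment:

```python
from math import sqrt

# Toy two-proportion comparison for an A/B test of the current site (A)
# versus the redesign (B). Counts are invented purely for illustration.
def ab_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se                                     # z statistic
    return p_a, p_b, z

p_a, p_b, z = ab_test(conv_a=40, n_a=1000, conv_b=65, n_b=1000)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}")  # |z| > ~1.96 suggests a real difference
```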

Hello Altar!  As far as I know, there is no Seattle area EA-focused charity evaluator.  Generally speaking, EA organizations do not engage in such work for a couple reasons.

1. EAs focus on impartial altruism, meaning that they try to give equal priority to everyone’s interests, regardless of their location.
2. The difference in impact between the least and most cost-effective organizations in Seattle is small relative to the difference in impact between the least and most cost-effective organizations globally.  This means that getting local-o... (read more)

These are interesting ideas.  It seems like there's still a lack of clarity about the magnitude of the effects of each issue on the nonhuman animal side, and therefore their relative cost-effectiveness.  But as more research is done, say on ITNs in later stages of their lifecycle and the effects of tapeworms on pigs, maybe trades could be made based on these issues!

Wow, this is amazing!  Thank you for putting in the time and effort to write it.  I just ordered a copy for the Effective Altruism at Georgia Tech library.  Can’t wait to read it!

6
Jeff Thomas
Thank you, Pete, that is so kind! 

I think it would be really useful for someone with a mathematical background to develop this further. The flexibility/dedication tradeoff seems about the same as the explore/exploit tradeoff, which I understand to have been studied a fair amount.  I'd imagine there's a lot of theory that could be applied and would allow us to make better decisions as a community, especially now that lots of people are thinking about specializing or funding specialization.  I bet we could avoid significant mistakes at a low cost by quantifying investments in each area and comparing them to theoretical ideals.
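
Since the comment leans on the explore/exploit analogy, here is a minimal, purely illustrative Python sketch of the textbook version of that tradeoff (an epsilon-greedy multi-armed bandit); the payoff probabilities are made up, and this is not meant as a model of community strategy:

```python
import random

# Epsilon-greedy bandit: with probability epsilon we "explore" (try a random
# option); otherwise we "exploit" (pick the option with the best average so far).
def epsilon_greedy(payoff_probs, rounds=10_000, epsilon=0.1):
    counts = [0] * len(payoff_probs)
    totals = [0.0] * len(payoff_probs)
    reward = 0.0
    for _ in range(rounds):
        if random.random() < epsilon or 0 in counts:
            arm = random.randrange(len(payoff_probs))      # explore
        else:
            averages = [t / c for t, c in zip(totals, counts)]
            arm = averages.index(max(averages))            # exploit
        payout = 1.0 if random.random() < payoff_probs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += payout
        reward += payout
    return reward / rounds

# With made-up arms paying off 20%, 50%, and 80% of the time, the average
# reward approaches the best arm's rate minus a small cost of exploration.
print(epsilon_greedy([0.2, 0.5, 0.8]))
```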

Congratulations on your first post!  I think this is a really cool and interesting idea.  The team at Basefund has started doing something similar, so you may want to reach out to them if you're interested in working on it!

2
Denis
Wow, that is cool. Thanks for this great connection!! I didn't know about this. But it is indeed close to what I had in mind, albeit a more modest version. Great minds think alike and all that :)  I will contact them and share my post and see if there's anything in there that might be useful to them - or alternatively, if they have some feedback on the idea based on their first year of operation. I would be especially interested to see if they have any data to confirm or refute my ideas about expected value, testing, optimisation, etc.  When I first started thinking about this, a couple of years ago, I didn't find anyone doing anything similar, but it wasn't easy to search. And anyhow I wouldn't have found Basefund, since they only started after that.  Thanks for sharing this info!

I quite like how you distinguish approaches at the individual level!  I think focusing on which area they support makes sense.  One lingering question I have is the relative value of a donor's donations vs. the value of their contribution toward building a culture of effective giving.  I also think it's at least somewhat common for people to get into other areas of EA after starting out in effective giving.

Agreed on the intro fellowship point as well!  Long-term it supports field-building since plenty of participants filter through, but it... (read more)

1
James Herbert
Ah that's a very good point about the uptake of practices. I think when I wrote that I had area two in mind much more than area one, but I definitely didn't make that clear. I'll edit it :)

It's great that you're doing what you can on this front, despite all the challenges!  I don't have specific nutritional advice, though maybe the writer of the first post you linked would.

You may have already considered this (some of your ideas hinted in this direction), but I think it's important to focus on suffering intensity, which you could measure in terms of suffering per calorie or suffering per pound of food.  Doing so will minimize your overall suffering footprint.  My understanding is that the differences in capacity for suffering ... (read more)

1
Joe Rogero
Thanks, those are some great resources! I can read the post on insect sentience but the link to the paper throws an error. I'd love to read the definitions they use for their criteria. 

Great post, thanks for writing it!  Healthy and active vegans sharing their stories helps change the narrative, bit by bit.

Destroying viruses in at-risk labs

Thanks to Garrett Ehinger for feedback and for writing the last paragraph.

Military conflict in or around biological research laboratories could substantially increase the risk of releasing a dangerous pathogen into the environment. Fighting and the mass movement of refugees combine with other risk factors to magnify this risk.  Garrett Ehinger elaborates on this issue in his excellent Chicago Tribune piece, and proposes the creation of nonaggression treaties for biol... (read more)

Rawls’ veil of ignorance supports maximizing expected value

One common topic in effective altruism introductory seminars is expected value, specifically the idea that we should usually maximize it. It’s intuitive for some participants, but others are less sure. Here I will offer a simple justification for expected value maximization using a variation of the veil of ignorance thought experiment. This line of thinking has helped make my introductory seminar participants (and me) more confident in the legitimacy of expected value maximization.

The thought experime... (read more)
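
For readers new to the term, here is a minimal formal statement of expected value together with a made-up two-outcome example; this is my own illustration and not part of the original post:

```latex
% Expected value of an action whose outcomes v_1, ..., v_n occur
% with probabilities p_1, ..., p_n:
\[
  \mathbb{E}[V] = \sum_{i=1}^{n} p_i \, v_i
\]
% Made-up example: a 10% chance of helping 1000 people has expected value
% 0.1 \times 1000 = 100 people helped, which exceeds a guaranteed benefit
% to 50 people (expected value 1 \times 50 = 50).
```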

I appreciate how this post adds dimension to community building, and I think the four examples you used are solid examples of each approach.  I'm not sure what numbers I'd put on each area as current or ideal numbers, but I do have some other thoughts.

I think it's a little hard to distinguish between movement support and field building in many community building cases.  When someone in a university group decides to earn to give instead of researching global priorities, does that put them in movement support instead of the field?  To what ext... (read more)

3
James Herbert
Thanks for the thoughtful reply!

Distinguishing between approaches at the level of the individual

I think it gets a little tricky at the level of the individual. But with your specific example, I'd classify an E2G individual on the basis of what they give to. If they give to HLI or GPI I'd say they're field building. If they give to CEA I'd say they're doing movement support and network development.

If they just give to AMF or whatever, i.e., an org doing 'direct work', I'd say they aren't strictly speaking contributing to the specific social change EA is aiming for, viz., increasing the extent to which people use reason and evidence when trying to do good. And so I wouldn't use the classification system I've laid out in this post to describe them.[1]

But that isn't to say they aren't an EA. I wouldn't say you need to be pushing for the increased use of evidence and reason when doing good to be an EA, you just need to be adopting the approach yourself.

What does running an intro fellowship count as?

Based on little more than vibes, I'd describe running an intro fellowship as movement support rather than field building. This is because it isn't directly pushing EA forward as a research field, nor is it providing a professional level of training for future researchers. It also fits quite well into this system for measuring the progress of social (protest) movements (see page 60).

Why I think movement support and promoting the uptake of practices is currently more valuable than networking

Yes, for sure, networking is important, and I get a lot of value from it too, but when I'm talking with other EAs I often find myself saying/thinking, "Have you thought about asking someone who isn't an EA for their opinion on this?", and that to me is an indicator that we spend too much time talking to each other. I also think there are lots of people who would benefit the EA movement who are not currently part of it, particularly people beyond the anglosphere and Europe.

Given thes

I don't think that the development of sentience (the ability to experience positive and negative qualia) is necessary for an AI to pursue goals.  I'm also not sure what it would look like for an AI to select its own interests.  This may be due to my own lack of knowledge rather than a real lack of necessity or possibility though.

To answer your main question, some have theorized that self-preservation is a useful instrumental goal for all sufficiently intelligent agents.  I recommend reading about instrumental convergence.  Hope this helps!

Different group organizers have widely varying beliefs that affect what work they think is valuable.  From certain perspectives, work that’s generally espoused by EA orgs looks quite negative.  For example, someone may believe that the harms of global health work through the meat eater problem dominate the benefits of helping reduce human suffering and saving lives.  Someone may believe that the expected value of the future with humans is negative, and as such, biosecurity work that reduces human extinction risk is net-negative.  I... (read more)

Fantastic post, thank you for writing it!  One challenge I have with encouraging effective giving, especially with a broader non-EA crowd, is that global health and development will probably be the main thing people end up giving to.  I currently don't support that work because of the meat eater problem.  If you have any thoughts on dealing with this, I'd love to hear them.

Some arguments I see for supporting global health work despite the meat eater problem are:

"People in low-income countries that are being helped with Givewell-style interv... (read more)

2
Jason
I think there's room for subject-specific giving advocacy campaigns. A single broad-based effective giving organization isn't likely to be effective at reaching all populations.

I was talking with a new university group organizer recently, and the topic of heavy-tailed impact came up.  Here I’ll briefly explain what heavy tails are and what I think they imply about university group community building.

What’s a heavy tail?

In certain areas, the (vast) majority of the total effect comes from a (small) minority of the causes.  In venture capital, for example, a fund will invest in a portfolio of companies.  Most are expected to fail completely.  A small portion will survive but not change significantly in value... (read more)
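
To make the heavy-tail idea concrete, here is a minimal Python sketch; the Pareto distribution and its parameter are my own assumptions for illustration, not something taken from the post:

```python
import random

# Sample "impacts" from a heavy-tailed (Pareto) distribution and measure
# what share of the total comes from the top 10% of draws.
random.seed(0)
samples = [random.paretovariate(1.2) for _ in range(10_000)]  # arbitrary shape parameter

samples.sort(reverse=True)
top_decile = samples[: len(samples) // 10]

share = sum(top_decile) / sum(samples)
print(f"Top 10% of draws account for {share:.0%} of the total")  # typically well over half
```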

My current belief in the sentience of most nonhuman animals comes partly from the fact that they were subjected to many of the same evolutionary forces that gave consciousness to humans.  Other animals also share many brain structures with us.  ChatGPT never went through that process and doesn't have the same structures, so I wouldn't really expect it to be conscious.  I guess your post looks at the outputs of conscious beings, which are very similar to what ChatGPT produces, whereas I'm partly looking at the inputs that we know have created... (read more)

3
JBentham
Many nonhuman animals also show long-term abnormal behaviours, and will try to access analgesia (even paying a cost to do so), if they are in pain. I don’t think we have evidence that’s quite analogous to that with large language models, and if we did, it would cause me to update in favour of current models having sentience. It’s also worth noting that the same lines of evidence that cause me to believe nonhuman animals are sentient also lead me to believe that humans are sentient, even if some of the evidence (like physiological and neuro-anatomical similarities, and evolutionary distance) may be somewhat stronger in humans.
1
splinter
Other animals do share many brain structures with us, but by the same token, most animals lack the brain structures that are most fundamental to what makes us human. As far as I am aware (and I will quickly get out of my depth here), only mammals have a neocortex, and small mammals don't have much of one.  Hopefully this is clear from my post, but ChatGPT hasn't made me rethink my beliefs about primates or even dogs. It definitely has made me more uncertain about invertebrates, reptiles, and fish. (I have no idea what to think about birds.)

I’ve addressed the point on costs in other commentary, so we may just disagree there!

  1. I think the core idea is that the EA ethos is about constantly asking how we can do the most good and updating based on new information.  So the book would hopefully codify that spirit rather than just talk about how great we’re doing.
  2. I find it easier to trust people whose motivations I understand and who have demonstrated strong character in the past.  History can give a better sense of those two things.  Reading about Julia Wise in Strangers Drowning, for
... (read more)

Great point!  A historian or archivist could take on this role.  Maybe CEA could hire one?  I’d say it fits within their mission “to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.”

Definitely agree with Chris here!  Worst case scenario, you create useful material for someone else who tackles it down the line; best case scenario, you write the whole thing yourself.

I think opportunity cost is well worth mentioning, but I don't know that I think it's as high as you believe it to be.

Choosing someone who has been around a while is optional.  The value of having an experienced community member do it is built-in trust, access, and understanding.  The costs are the writer's time (though that cost is decreasing as more people start writing about EA professionally) and the time of those being interviewed.  I would also note that while there's lots of work for technical people in EA, writers in the community ma... (read more)

I agree with this last point on underlying motives.  EA is one direction for purpose-seeking people to go in, but not everyone will choose it.  This program could also look vaguely religious, which is generally best avoided.

I would also question whether a focused program is the best way to develop people with EA motivation.  I think sometimes people go through the intro program and find purpose in it because...

  1. They see their peers struggling with the same questions about meaning and purpose
  2. Their facilitator has found meaning through E
... (read more)
6
JohanEA
Also thank you Pete for your point here! I agree that the intro program can be a very good way for people to find purpose. However, I argue that a significant proportion of people are less interested in learning about "doing good better" simply because more basic needs are not being met (you can read more about this in my response to Harrison's comment I just posted). If people read through the curriculum before signing up to the intro fellowship and see concepts like "effectiveness mindset" or "scope insensitivity", then I think many will ask themselves "Great, that's all very nice. But how is that going to help me find a job with which I support myself and my family?" People will prioritise their time according to what is currently most important to them. And if you are in a phase of your life where you are not as privileged to be able to make doing good a core part of your life, you will often have more urgent things to manage than joining an Introductory EA Program. So while I agree that the intro program has many potential benefits, I believe the actual challenge is getting people to sign up for it in the first place. That's why the PLP Track might be more effective at attracting those who wouldn't normally consider the Intro Program. It provides value in a different way and addresses different priorities.

I think that offering stipends to intro fellows is an idea worth considering, but I have real concerns at the moment, especially since Penn’s write-up about it hasn’t come out yet.

1.1 “Makes Fellowships more accessible to people who are not wealthy, potentially leading to a more diverse community”
I think there’s probably some truth to this, but honestly, I don’t think an amount that we could give every fellow would allow anyone to meaningfully decrease the outside work they do.  I’d be in support of packages for those that wouldn’t be able to participate with... (read more)