All of Pete Rowlett's Comments + Replies

These are interesting ideas.  It seems like there's still a lack of clarity about the magnitude of the effects of each issue on the nonhuman animal side, and therefore their relative cost-effectiveness.  But as more research is done, say on ITNs in later stages of their lifecycle and the effects of tapeworms on pigs, maybe trades could be made based on these issues!

Wow, this is amazing!  Thank you for putting in the time and effort to write it.  I just ordered a copy for the Effective Altruism at Georgia Tech library.  Can’t wait to read it!

6
Jeff Thomas
3mo
Thank you, Pete, that is so kind! 

I think it would be really useful for someone with a mathematical background to develop this further. The flexibility/dedication tradeoff seems about the same as the explore/exploit tradeoff, which I understand to have been studied a fair amount.  I'd imagine there's a lot of theory that could be applied and would allow us to make better decisions as a community, especially now that lots of people are thinking about specializing or funding specialization.  I bet we could avoid significant mistakes at a low cost by quantifying investments in each area and comparing them to theoretical ideals.
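
To make the analogy concrete, here is a minimal, purely illustrative sketch of the explore/exploit framing as a multi-armed bandit.  The "flexible" and "dedicated" arms and their payoffs are made up for illustration and are not estimates of anything; the point is just to show the kind of model that could be quantified and compared to a theoretical ideal.

```python
import random

# Purely hypothetical payoffs for two community strategies, framed as bandit arms.
TRUE_MEAN_PAYOFF = {"flexible": 1.0, "dedicated": 1.3}

def pull(arm):
    """One noisy observation of an arm's payoff."""
    return random.gauss(TRUE_MEAN_PAYOFF[arm], 0.5)

def epsilon_greedy(epsilon=0.1, rounds=1000):
    """Split effort between exploring (random arm) and exploiting (best arm so far)."""
    totals = {arm: 0.0 for arm in TRUE_MEAN_PAYOFF}
    counts = {arm: 0 for arm in TRUE_MEAN_PAYOFF}
    for _ in range(rounds):
        if random.random() < epsilon or min(counts.values()) == 0:
            arm = random.choice(list(TRUE_MEAN_PAYOFF))             # explore
        else:
            arm = max(counts, key=lambda a: totals[a] / counts[a])  # exploit
        totals[arm] += pull(arm)
        counts[arm] += 1
    return {arm: count / rounds for arm, count in counts.items()}

print(epsilon_greedy())  # e.g. roughly {'flexible': 0.07, 'dedicated': 0.93}
```

In this toy setup, the epsilon parameter is the explicit "budget" spent on flexibility; the theory around bandits is largely about choosing that budget well, which is the kind of comparison to a theoretical ideal suggested above.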

Congratulations on your first post!  I think this is a really cool and interesting idea.  The team at Basefund has started doing something similar, so you may want to reach out to them if you're interested in working on it!

2
Denis
6mo
Wow, that is cool.  Thanks for this great connection!  I didn't know about this, but it is indeed close to what I had in mind, albeit a more modest version.  Great minds think alike and all that :)

I will contact them, share my post, and see if there's anything in there that might be useful to them - or alternatively, see if they have some feedback on the idea based on their first year of operation.  I would be especially interested to see whether they have any data to confirm or refute my ideas about expected value, testing, optimisation, etc.

When I first started thinking about this, a couple of years ago, I didn't find anyone doing anything similar, but it wasn't easy to search.  And anyhow I wouldn't have found Basefund, since they only started after that.

Thanks for sharing this info!

I quite like how you distinguish approaches at the individual level!  I think focusing on which area they support makes sense.  One lingering question I have is about the relative value of a donor's donations versus the value of their contribution toward building a culture of effective giving.  I also think it's at least somewhat common for people to get into other areas of EA after starting out in effective giving.

Agreed on the intro fellowship point as well!  Long-term it supports field-building since plenty of participants filter through, but it... (read more)

1
James Herbert
10mo
Ah that's a very good point about the uptake of practices. I think when I wrote that I had area two in mind much more than area one, but I definitely didn't make that clear. I'll edit it :)

It's great that you're doing what you can on this front, despite all the challenges!  I don't have specific nutritional advice, though maybe the writer of the first post you linked would.

You may have already considered this (some of your ideas hinted in this direction), but I think it's important to focus on suffering intensity, which you could measure in terms of suffering per calorie or suffering per pound of food.  Doing so will minimize your overall suffering footprint.  My understanding is that the differences in capacity for suffering ... (read more)
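
As a purely hypothetical illustration of the suffering-per-calorie idea (every number below is a placeholder, not an estimate from any source), the comparison could look like this:

```python
# Hypothetical placeholder values only: (hours of suffering per serving, calories per serving)
foods = {
    "food_x": (5.0, 250),
    "food_y": (0.5, 200),
}

def suffering_per_calorie(hours, calories):
    """Suffering intensity normalized by nutritional value."""
    return hours / calories

for food, (hours, calories) in foods.items():
    print(food, round(suffering_per_calorie(hours, calories), 4))
# Choosing the food with the lowest ratio minimizes the suffering footprint
# for a fixed number of calories consumed.
```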

1
Joe Rogero
10mo
Thanks, those are some great resources! I can read the post on insect sentience but the link to the paper throws an error. I'd love to read the definitions they use for their criteria. 

Great post, thanks for writing it!  Healthy and active vegans sharing their stories helps change the narrative, bit by bit.

Destroying viruses in at-risk labs

Thanks to Garrett Ehinger for feedback and for writing the last paragraph.

Military conflict in or around biological research laboratories could substantially increase the risk of a dangerous pathogen being released into the environment.  Fighting and the mass movement of refugees combine with other risk factors to magnify the potential consequences.  Garrett Ehinger elaborates on this issue in his excellent Chicago Tribune piece and proposes the creation of nonaggression treaties for biol... (read more)

Rawls’ veil of ignorance supports maximizing expected value

One common topic in effective altruism introductory seminars is expected value, specifically the idea that we should usually maximize it. It’s intuitive for some participants, but others are less sure. Here I will offer a simple justification for expected value maximization using a variation of the veil of ignorance thought experiment. This line of thinking has helped make my introductory seminar participants (and me) more confident in the legitimacy of expected value maximization.

The thought experime... (read more)
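
As a toy illustration of the kind of reasoning involved (the numbers are invented and not taken from the post): if, behind the veil, you are equally likely to end up as any of the affected people, then the option that maximizes total welfare also maximizes your own expected welfare.

```python
# Hypothetical welfare outcomes for each person under two policies.
policies = {
    "A": [10, 10, 10, 10],  # modest benefit for everyone
    "B": [40, 2, 2, 2],     # large benefit for one person, little for the rest
}

def expected_welfare(outcomes):
    """Expected welfare for someone with an equal chance of being each person."""
    return sum(outcomes) / len(outcomes)

for name, outcomes in policies.items():
    print(name, expected_welfare(outcomes))
# A -> 10.0, B -> 11.5: behind the veil, maximizing expected value favours B
# even though its benefits are concentrated on one person.
```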

I appreciate how this post adds dimension to community building, and I think the four examples you used are solid examples of each approach.  I'm not sure what numbers I'd put on each area as current or ideal numbers, but I do have some other thoughts.

I think it's a little hard to distinguish between movement support and field building in many community building cases.  When someone in a university group decides to earn to give instead of researching global priorities, does that put them in movement support instead of the field?  To what ext... (read more)

3
James Herbert
10mo
Thanks for the thoughtful reply!

Distinguishing between approaches at the level of the individual

I think it gets a little tricky at the level of the individual. But with your specific example, I'd classify an E2G individual on the basis of what they give to. If they give to HLI or GPI, I'd say they're field building. If they give to CEA, I'd say they're doing movement support and network development. If they just give to AMF or whatever, i.e., an org doing 'direct work', I'd say they aren't strictly speaking contributing to the specific social change EA is aiming for, viz., increasing the extent to which people use reason and evidence when trying to do good. And so I wouldn't use the classification system I've laid out in this post to describe them.[1] But that isn't to say they aren't an EA. I wouldn't say you need to be pushing for the increased use of evidence and reason when doing good to be an EA; you just need to be adopting the approach yourself.

What does running an intro fellowship count as?

Based on little more than vibes, I'd describe running an intro fellowship as movement support rather than field building. This is because it isn't directly pushing EA forward as a research field, nor is it providing a professional level of training for future researchers. It also fits quite well into this system for measuring the progress of social (protest) movements (see page 60).

Why I think movement support and promoting the uptake of practices is currently more valuable than networking

Yes, for sure, networking is important, and I get a lot of value from it too, but when I'm talking with other EAs I often find myself saying/thinking, "Have you thought about asking someone who isn't an EA for their opinion on this?", and that to me is an indicator we spend too much time talking to each other. I also think there are lots of people who would benefit the EA movement who are not currently part of it, particularly people beyond the anglosphere and Europe.  Given thes

I don't think that the development of sentience (the ability to experience positive and negative qualia) is necessary for an AI to pursue goals.  I'm also not sure what it would look like for an AI to select its own interests.  This may be due to my own lack of knowledge rather than a real lack of necessity or possibility though.

To answer your main question, some have theorized that self-preservation is a useful instrumental goal for all sufficiently intelligent agents.  I recommend reading about instrumental convergence.  Hope this helps!

Different group organizers have widely varying beliefs that affect what work they think is valuable.  From certain perspectives, work that’s generally espoused by EA orgs looks quite negative.  For example, someone may believe that the harms of global health work through the meat eater problem dominate the benefits of helping reduce human suffering and saving lives.  Someone may believe that the expected value of the future with humans is negative, and as such, biosecurity work that reduces human extinction risk is net-negative.  I... (read more)

Fantastic post, thank you for writing it!  One challenge I have with encouraging effective giving, especially with a broader non-EA crowd, is that global health and development will probably be the main thing people end up giving to.  I currently don't support that work because of the meat eater problem.  If you have any thoughts on dealing with this, I'd love to hear them.

Some arguments to support global health work despite the meat eater problem that I see are:

"People in low-income countries that are being helped with Givewell-style interv... (read more)

2
Jason
10mo
I think there's room for subject-specific giving advocacy campaigns. A single broad-based effective giving organization isn't likely to be effective at reaching all populations.

I was talking with a new university group organizer recently, and the topic of heavy-tailed impact came up.  Here I’ll briefly explain what heavy tails are and what I think they imply about university group community building.

What’s a heavy tail?

In certain areas, the (vast) majority of the total effect comes from a (small) minority of the causes.  In venture capital, for example, a fund will invest in a portfolio of companies.  Most are expected to fail completely.  A small portion will survive but not change significantly in value. ... (read more)
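
A quick simulation can illustrate what a heavy tail looks like; this sketch assumes a Pareto distribution purely for the sake of example and is not drawn from the post itself.

```python
import random

random.seed(0)
# Draw hypothetical "impact" outcomes from a Pareto distribution.
# A shape parameter near 1.16 roughly reproduces the familiar 80/20 pattern.
outcomes = [random.paretovariate(1.16) for _ in range(10_000)]

outcomes.sort(reverse=True)
top_1_percent = sum(outcomes[: len(outcomes) // 100])
share = top_1_percent / sum(outcomes)
print(f"Top 1% of outcomes account for {share:.0%} of the total")
```

In a run like this, the top 1% of draws typically account for a very large share of the total, which is the pattern the venture capital example points at.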

My current belief in the sentience of most nonhuman animals comes partly from the fact that they were subjected to many of the same evolutionary forces that gave consciousness to humans.  Other animals also share many brain structures with us.  ChatGPT never went through that process and doesn't have the same structures, so I wouldn't really expect it to be conscious.  I guess your post looks at the outputs of conscious beings, which are very similar to what ChatGPT produces, whereas I'm partly looking at the inputs that we know have created... (read more)

3
JBentham
1y
Many nonhuman animals also show long-term abnormal behaviours, and will try to access analgesia (even paying a cost to do so), if they are in pain. I don’t think we have evidence that’s quite analogous to that with large language models, and if we did, it would cause me to update in favour of current models having sentience. It’s also worth noting that the same lines of evidence that cause me to believe nonhuman animals are sentient also lead me to believe that humans are sentient, even if some of the evidence (like physiological and neuro-anatomical similarities, and evolutionary distance) may be somewhat stronger in humans.
1
splinter
1y
Other animals do share many brain structures with us, but by the same token, most animals lack the brain structures that are most fundamental to what makes us human. As far as I am aware (and I will quickly get out of my depth here), only mammals have a neocortex, and small mammals don't have much of one.  Hopefully this is clear from my post, but ChatGPT hasn't made me rethink my beliefs about primates or even dogs. It definitely has made me more uncertain about invertebrates, reptiles, and fish. (I have no idea what to think about birds.)

I’ve addressed the point on costs in other commentary, so we may just disagree there!

  1. I think the core idea is that the EA ethos is about constantly asking how we can do the most good and updating based on new information.  So the book would hopefully codify that spirit rather than just talk about how great we’re doing.
  2. I find it easier to trust people whose motivations I understand and who have demonstrated strong character in the past.  History can give a better sense of those two things.  Reading about Julia Wise in Strangers Drowning, for
... (read more)

Great point!  A historian or archivist could take on this role.  Maybe CEA could hire one?  I’d say it fits within their mission “to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.”

Definitely agree with Chris here!  Worst case scenario, you create useful material for someone else who tackles it down the line; best case scenario, you write the whole thing yourself.

I think opportunity cost is well worth mentioning, but I don't know that I think it's as high as you believe it to be.

Choosing someone who has been around a while is optional.  The value of having an experienced community member do it is built-in trust, access, and understanding.  The costs are the writer's time (though that cost is decreasing as more people start writing about EA professionally) and the time of those being interviewed.  I would also note that while there's lots of work for technical people in EA, writers in the community ma... (read more)

I agree with this last point on underlying motives.  EA is one direction purpose-seeking people can go in, but not everyone will choose it.  This program could also come across as vaguely religious, which is generally best avoided.

I would also question whether a focused program is the best way to develop people with EA motivation.  I think sometimes people go through the intro program and find purpose in it because...

  1. They see their peers struggling with the same questions about meaning and purpose
  2. Their facilitator has found meaning through E
... (read more)
6
Johan de Kock
1y
Also thank you Pete for your point here! I agree that the intro program can be a very good way for people to find purpose. However, I argue that a significant proportion of people are less interested in learning about "doing good better" simply because more basic needs are not being met (you can read more about this in my response to Harrison's comment I just posted). If people read through the curriculum before signing up to the intro fellowship and see concepts like "effectiveness mindset" or "scope insensitivity", then I think many will ask themselves "Great, that's all very nice. But how is that going to help me find a job with which I support myself and my family?" People will prioritise their time according to what is currently most important to them. And if you are in a phase of your life where you are not as privileged to be able to make doing good a core part of your life, you will often have more urgent things to manage than joining an Introductory EA Program. So while I agree that the intro program has many potential benefits, I believe the actual challenge is getting people to sign up for it in the first place. That's why the PLP Track might be more effective at attracting those who wouldn't normally consider the Intro Program. It provides value in a different way and addresses different priorities.

I think that offering stipends to intro fellows is an idea worth considering, but I have real concerns at the moment, especially since Penn’s write-up about it hasn’t come out yet.

1.1 “Makes Fellowships more accessible to people who are not wealthy, potentially leading to a more diverse community”
I think there’s probably some truth to this, but honestly, I don’t think an amount that we could give every fellow would allow anyone to meaningfully decrease the outside work they do.  I’d be in support of packages for those that wouldn’t be able to participate with... (read more)