All of AppliedDivinityStudies's Comments + Replies

An update in favor of trying to make tens of billions of dollars

That's a good clarification; I do agree that EAs should consider becoming VCs in order to make a lot of money. I just don't think they should become VCs in order to enable earn-to-give EA founders.

Stefan_Schubert: Alright, but if there were such EA VCs they might want to keep an extra eye on EA start-ups, because of special insider knowledge, mutual trust, etc. Plus EAs may be underestimated, as per above. I do agree, however, that unpromising EA start-ups shouldn't be funded just because they're EAs.
An update in favor of trying to make tens of billions of dollars

This is my personal view; I understand that it might not be rigorously argued enough to be compelling to others, but I'm fairly confident in it anyway:

I literally believe that there are ~0 companies which would have been valued at $10b or more, but which do not exist because they were unable to raise seed funding.

You will often hear stories from founders who had a great idea, but the VCs were just too closed-minded. I don't believe these. I think a founder who's unable to raise seed money is simply not formidable (as described here), and will not be able to…

You don't have to believe that VCs are generally irrational in order to believe that an EA VC could be a good idea. I think arguing against the claim that VCs are generally irrational is akin to a weak man argument.

People presumably start successful venture capitalist firms, e.g. based on niche competencies or niche insights, now and then. It's not the case that new venture capital firms never succeed. And to determine whether an EA venture capital firm could succeed, you'd have to look into the nitty-gritty details, rather than raising general considerati…

An update in favor of trying to make tens of billions of dollars

Depends immensely on whether you think there are EAs who could start billion-dollar companies, but would not be able to without EA funding. I.e. they're great founders, but can't raise money from VCs. Despite a lot of hand-wringing over the years about the ineffectiveness of VCs, I generally think being able to raise seed money is a decent and reasonable test, and not arbitrary gatekeeping. The upshot being, I don't think EAs should try to start a seed fund.

You could argue that it would be worth it, solely for the sake of getting equity in very valuable companies. But at that point you're just trying to compete with VCs directly, and it's not clear that EAs have a comparative advantage.

tylermaule: I think the core argument here is that not enough EAs try to start a company, as opposed to trying and being rejected by VCs. IMO the point of seeding would be to take more swings. Also, presumably the bar should be lower for an EA VC, because much of the founders' stake will also go to effective charity.
Stefan_Schubert: One possibility is that EAs are better than it might seem at first glance. The fact that there is some track-record of EA start-up success (as per the OP) may be some evidence of that. If that is the case, then VCs may underestimate EA start-ups even if VCs are generally decent - and EA companies may also be a good investment (cf. your second paragraph). I guess a relevant factor here is to what extent successful EA start-ups have been funded by EA vs non-EA sources.
Why aren't you freaking out about OpenAI? At what point would you start?

Google does claim to be working on "general purpose intelligence" https://www.alignmentforum.org/posts/bEKW5gBawZirJXREb/pathways-google-s-agi

I do think we should be worried about DeepMind, though OpenAI has undergone more dramatic changes recently, including restructuring into a for-profit, losing a large chunk of the safety/policy people, taking on new leadership, etc.

Why aren't you freaking out about OpenAI? At what point would you start?

In the absence of rapid public progress, my default assumption is that "trying to build AGI" is mostly a marketing gimmick. There seem to be several other companies like this, e.g.: https://generallyintelligent.ai/

But it is possible they're just making progress in private, or might achieve some kind of unexpected breakthrough. I guess I'm just less clear about how to handle these scenarios. Maybe by tracking talent flows, which is something the AI Safety community has been trying to do for a while.

On the assessment of volcanic eruptions as global catastrophic or existential risks

Thanks so much for taking the time to write this up! I've been (casually) curious about this topic for a while, and it's great to have your expert analysis.

My main question is: How tractable are the current solutions to all of this? Are there specific next steps one could take? Organizations that could accept funding or incoming talent? Particular laws or regulations we ought to be advocating for? Those are all tough questions, but it would be helpful to have even a very vague sense of how far a unit of money/time could go towards this cause.

What is clear tho…

Thanks for your input! 

My main question is: How tractable are the current solutions to all of this? Are there specific next steps one could take? Organizations that could accept funding or incoming talent? Particular laws or regulations we ought to be advocating for? Those are all tough questions, but it would be helpful to have even a very vague sense of how far a unit of money/time could go towards this cause.

Yes, we think there are tractable solutions to reduce the impact from these large eruptions, and we're currently planning these behind the scen…

Why aren't you freaking out about OpenAI? At what point would you start?

Happy to see they think this should be discussed in public! Wish there was more on questions #2 and #3.

Also very helpful to see how my question could have been presented in a less contentious way.

Progress studies vs. longtermist EA: some differences

Hey sorry for the late reply, I missed this.

Yes, the upshot from that piece is "eh". I think there are some plausible XR-minded arguments in favor of economic growth, but I don't find them overly compelling.

In practice, I think the particulars matter a lot. If you were to, say, make progress on a cost-effective malaria vaccine, it's hard to argue that it'll end up bringing about superintelligence in the next couple of decades. But it depends on your time scale. If you think AI is more on a 100-year time horizon, there might be more reason to be worried about growth.

Re: DTD, I think it depends far more on global coordination than EA/XR people tend to think.

Why aren't you freaking out about OpenAI? At what point would you start?

dynamics of Musk at the creation of OpenAI, not recent events or increasing salience

Thanks, this is a good clarification.

It is hard to tell if the OP has a model of AI safety or insight into what the recent org dynamics mean, all of which are critical to his post having meaning.

You're right that I lack insight into what the recent org dynamics mean, this is precisely why I'm asking if anyone has more information. As I write at the end:

To be clear, I'm not advocating any of this. I'm asking why you aren't. I'm seriously curious and want to understa…
Why aren't you freaking out about OpenAI? At what point would you start?

Thanks for the recommendation. I spent about an hour looking for contact info, but was only able to find 5 public addresses of ex-OpenAI employees involved in the recent exodus. I emailed them all, and provided an anonymous Google Form as well. I'll provide an update if I do hear back from anyone.

Why aren't you freaking out about OpenAI? At what point would you start?

Is it that I'm out of touch, missing recent news, and OpenAI has recently convincingly demonstrated their ongoing commitment to safety?

This turns out to be at least partially the answer. As I'm told, Jan Leike joined OpenAI earlier this year and does run an alignment team.

John_Maxwell: I also noticed this post (https://www.alignmentforum.org/posts/6ccG9i5cTncebmhsH/frequent-arguments-about-alignment). It could be that OpenAI is more safety-conscious than the ML mainstream. That might not be safety-conscious enough. But it seems like something to be mindful of if we're tempted to criticize them more than we criticize the less-safety-conscious ML mainstream (e.g. does Google Brain have any sort of safety team at all? Last I checked they publish way more papers (https://i.redd.it/wa6kjzmhzix21.png) than OpenAI. Then again, I suppose Google Brain doesn't brand themselves as trying to discover AGI--but I'm also not sure how correlated a "trying to discover AGI" brand is likely to be with actually discovering AGI?)
Why I am probably not a longtermist

Ah, yes, extinction risk, thanks for clarifying.

Why I am probably not a longtermist

Hey, great post, I pretty much agree with all of this.

My caveat is: One aspect of longtermism is that the future should be big and long, because that's how we'll create the most moral value. But a slightly different perspective is that the future might be big and long, and so that's where the most moral value will be, even in expectation.

The more strongly you believe that humanity is not inherently super awesome, the more important that latter view seems to be. It's not "moral value" in the sense of positive utility, it's "moral value" in the sense of live…

FWIW my completely personal and highly speculative view is that EA orgs and EA leaders tend to talk too much about x-risk and not enough about s-risk, mostly because the former is more palatable, and is currently sufficient for advocating for s-risk relevant causes anyway. Or more concretely: It's pretty easy to imagine an asteroid hitting the planet, killing everyone, and eliminating the possibility of future humans. It's a lot wackier, more alienating and more bizarre to imagine an AI that not only destroys humanity, but permanently enslaves it in some k…
MichaelStJules: To be clear, by "x-risk" here, you mean extinction risks specifically, and not existential risks generally (which is what "x-risk" was coined to refer to, from my understanding)? There are existential risks that don't involve extinction, and some s-risks (or all, depending on how we define s-risk) are existential risks because of the expected scale of their suffering.
The expected value of funding anti-aging research has probably dropped significantly

I think the billionaire space race may be a good example of the public disliking weird stuff that billionaires are doing, but public opinion not significantly impacting their ability to do the weird stuff.

But what if they could be doing way more? If being a civilian space tourist were seen as the coolest thing a person could do, there would probably be even more of a market incentive for Branson.

I am also not too worried about bad PR keeping good scientists away, since I think high salaries should help to overcome their fears / misunderstandings surrounding anti-ageing research.
The expected value of funding anti-aging research has probably dropped significantly

I think it's still under-appreciated how much people hate billionaire-funded research into areas perceived to be weird, creepy or potentially inequality-exacerbating.

Consider some of the comments on that same article from the SlateStarCodex subreddit:

I'll give a longevity startup the time of day when they show me a year-old drosophila.

And: "*slaps roof of longevity startup* this bad boy can fit so much fraud in it"

Or a semi-popular reply to the tweet you shared:

Getting funding for longevity research from ageing billionaires is the bio equivalent of taking can…
freedomandutility: In my opinion, the public seems to dislike the idea of rejuvenation biotechnology, but doesn't dislike it enough that public opinion would significantly hamper the progress of this field. I think the billionaire space race may be a good example of the public disliking weird stuff that billionaires are doing, but public opinion not significantly impacting their ability to do the weird stuff. I am also not too worried about bad PR keeping good scientists away since I think high salaries should help to overcome their fears / misunderstandings surrounding anti-ageing research.
Emanuele_Ascani: Relatedly, here's another example of the kind of headlines you mention: https://futurism.com/neoscope/aging-unstoppable-youth. The fact that it's on an online newspaper called "Futurism" is even more eye-popping. One positive thing this might lead to is if people on the fence start to actually be more positive about weird future-related stuff, given the hysteria of such headlines. But I have no idea. Might be wishful thinking.
Who do intellectual prizewinners follow on Twitter?

Would be interested to see a list of accounts by:

  • Follower count among prize-winners
  • Divided by overall follower count

It's not that interesting to see that Barack Obama is #1, since he's just the #1 Twitter account overall. But it would be super interesting to see who prize-winners follow that other people do not.

Thanks for this analysis and dataset, super interested in this kind of work and would love to see more!

Lifetime Impact of a GiveWell Researcher?

I sort of interpret that post as typical EA scrupulosity. They write:

Overall, the more suspect the estimates, the less you should update on the results and the more weight you should put on your prior.

But I didn't really have a strong prior to begin with. Maybe the hire's salary, but that's really just the lower bound.

Lifetime Impact of a GiveWell Researcher?

Thanks! Glad you did this analysis. You might also be interested in the numbers here, where surveyed EA leaders said they would be willing to sacrifice $250k in donations to keep their most recent junior hire ($1m for senior).

That's not the question you're asking exactly, but it's another interesting angle.

agent18: Thanks a lot for your response. I think 80000hours has actually "sort of withdrawn their conclusions" (https://80000hours.org/2019/05/why-do-organisations-say-recent-hires-are-worth-so-much/#1-the-estimates-might-be-wrong) from that post about extra donations and recent hires. Thus I am not sure we should pursue these numbers anymore, from the survey. Your thoughts?
Share your journey to EA?

Hey Simone, this is more high level than you're asking for, but you might like the How People Get Involved in EA report: https://rethinkpriorities.org/publications/eas2020-how-people-get-involved-in-ea

How to Train Better EAs?

Yeah, again: for highly creative intellectual labor on a multi-decade timescale, I'm not really convinced that working super hard or having no personal life or whatever is actually helpful. But I might be fooling myself, since this view is very self-serving.

What is the role of public discussion for hits-based Open Philanthropy causes?

a post, a few pages long, with a perspective about New Science that points out things that are useful and interesting would certainly be well received

Okay that's helpful to hear.

A lot of this question is inspired by the recent Charter Cities debate. For context:

  • Charter Cities Institute released a short paper a while back arguing that it could be as good as top GiveWell charities
  • Rethink Priorities more recently shared a longer report, concluding that it was likely not as good as GiveWell charities
  • Mark Lutter (who runs CCI) replied, arguing that…
Charles He: But isn't GiveWell-style philanthropy exactly not applicable for your example of charter cities? My sense is that the case for charter cities has some macro/systems process that is hard to measure (and that is why it is only now a new cause area and why the debate exists). I specifically didn't want to pull out examples, but if it's helpful, here's another example of a debate for an intervention (https://forum.effectivealtruism.org/posts/wx6Xw63yJt67YKdzh/?commentId=nP3qbGuERS4L2cqhn) that relies on difficult-to-measure outcomes and involves hard-to-untangle, divergent worldviews between the respective proponents. (This is somewhat of a tangent, but honestly, your important question is inherently complex and there seems to be a lot going on, so clarity from smoothing out some of the points seems valuable.) I don't understand why my answer in the previous post above, or these debates, aren't object-level responses to how you could discuss the value of these interventions. I'm worried I'm talking past you and not being helpful. Now, trying more vigorously / speculatively here:

  1. Maybe one answer is that you are right: it is hard to influence direct granting—furthermore, this means that directly influencing granting is not what we should be focused on in the forum. At the risk of being prescriptive (which I dislike), I think this is a reasonable attitude on the forum, in the sense that "policing grants" or something should be a very low priority for organic reasons for most people, and instead learning/communicating and a "scout mindset" is ultimately more productive. But such discussions cannot be proscribed, and even a tacit norm against them would be bad.

  2. Maybe you mean that this level of difficulty is "wrong" in some sense. For example, we should respond by paying special, unique attention to the HOP grants or expect them to be communicated and discussed actively. This seems not implausi…
EdoArad: Interesting! Do you know anything about the state of regulations around this? (Sorta related: there are several pet cloning services, e.g. https://www.viagenpets.com/, https://www.sinogene.org/; see also https://en.wikipedia.org/wiki/Commercial_animal_cloning.) I'm not sure what the potential downsides of such a widespread tech are, but it seems like something which can have high scalability if done as a for-profit company.
What EA projects could grow to become megaprojects, eventually spending $100m per year?

Is there a good writeup anywhere on cost estimates for this kind of refuge? Or what it would require?

Linch: Not that I know of. Nick Beckstead wrote a moderately negative review of civilizational refuges (https://forum.effectivealtruism.org/posts/fTDhRL3pLY4PNee67/improving-disaster-shelters-to-increase-the-chances-of#What_are_the_possible_interventions_) 7 years ago (note that this was back when longtermist EA had a lot less $s than we currently do). One reason I'd like to write out a moderately detailed MVP is that then we can have a clear picture for others to critique concrete details of, suggest clear empirical or conceptual lines for further work, etc., rather than have most of this conversation a) be overly high-level or b) too tied in with/anchored to existing (non-longtermist) versions of what's currently going on in adjacent spaces.
RobertDaoust: Yes I know, thank you ADS, but I'd rather have in mind something like "Toward an Institute for the Science of Suffering": https://docs.google.com/document/d/1cyDnDBxQKarKjeug2YJTv7XNTlVY-v9sQL45-Q2BFac/edit#
How to Train Better EAs?

There's the CFAR workshop, but it's just a 4 day program. (Though it would take longer to read all of Yudkowsky's writing.)

I'm no expert, but on some plausible reading, US military training is primarily about cultivating obedience and conformity. Of course some degree of physical conditioning is genuinely beneficial, but when's the last time a Navy SEAL got into a fist fight?

For most of the EA work that needs to get done (at the moment), having an army of replaceable, high-discipline drones is not actually that useful. A lot of the movement hinges on a re…

MaxRa: I used to listen to the podcast of a former Navy SEAL, and he argues that the idea of obedient drones is totally off for SEALs, and I got the impression they learn a lot of specialized skills for strategic warfare stuff. Here's an article he wrote about this (haven't read it myself): https://www.businessinsider.com/navy-seal-jocko-willink-debunks-military-blind-obedience-2018-6

My impression is that the people who end up working in EA organizations are not on the same tier of discipline, work ethic, commitment, etc. as elite military forces, and are not really even very close?

I don't say that to disparage EA direct workers, I'm involved in direct work myself -- but my sense is that much more is possible. That said, as you mention, the amount of discipline needed may simply not be as high.

What is the role of public discussion for hits-based Open Philanthropy causes?

the same criticism applies to the large Open Phil spending on specific scientific bets.

Sorry, just to clarify again (and on the topic of swearing fealty), I don't mean any of this as a criticism of Open Phil. I agree enthusiastically with the hits-based giving point, and generally think it's good for at least some percentage of philanthropy to be carried out without the expectation of full transparency and GiveWell-level rigor.

It's unclear how we would expect a public forum discussion to substantially influence any of the scientific granting above.

I…

Charles He: Thanks for the thoughtful response! I don't think this is true or even can be true, as long as we value general discussion. I think I have a better sense of your question, and maybe I will write up a more direct answer from my perspective. I am honestly worried my writeup will be long-winded or wrong, and I'll wait in case someone else writes something better first. Also, using low effort/time on your end, do you have any links to good writeup(s) on the "constructivist view of science"? I'm worried I don't have a real education and will get owned in a discussion related to it, worst case while deep in some public conversation relying on it.
What is the role of public discussion for hits-based Open Philanthropy causes?

I also don't know for sure, but these examples might be illustrative:

Ought General Support:

Paul Christiano is excited by Ought’s plan and work, and we trust his judgement.

And:

We have seen some minor indications that Ought is well-run and has a reasonable chance at success, such as: an affiliation with Stanford's Noah Goodman, which we believe will help with attracting talent and funding; acceptance into the Stanford-Startx4 accelerator; and that Andreas has already done some research, application prototyping, testing, basic organizational set-up, and…
Writing about my job: Internet Blogger

It's really hard to tell if my writing has had any impact. I think it has, but it's often in the form of vague influence that's difficult to verify. And honestly, I haven't tried very hard, because I think it's potentially harmful in the short run to index too heavily on any proxy metric. For example, I don't even track page views.

Though I have talked to some EA people who mostly told me to keep blogging, rather than pursuing any of the other common paths. Some people did recommend that I pursue the Future Perfect Fellowship, which I think is likely to be super h…

Lovkush: Thanks! I asked because I am currently going through the 80k 8-week planning course, and I get the impression there is just large uncertainty around what could or could not be impactful.
Writing about my job: Data Scientist

I would guess US market (at least those reporting on Glassdoor) skews heavily SF/NYC, maybe Seattle.

Peter Wildeford: FWIW I made $187K/yr in total comp (£136K/yr) in Chicago as a data scientist after four years of experience. My starting salary was $83K/yr in total comp (£60K/yr) with no experience. In both jobs, I worked about 30hrs/wk. My day-to-day experience was rather identical to this post.
Writing about my job: Internet Blogger

Thanks! That's one perk I neglected to mention. You can try blogging in your spare time without much commitment. Though I do think it's a bit risky to do it half-heartedly, get disappointed in the response, and never find out what you would be capable of if you went full time.

There are lots of bloggers who definitely don't do independent research, but within the broader EA space it's a really blurry line. One wacky example is Nadia Eghbal, whose writing products include tweets, notes, a newsletter, blog posts, a 100-page report, and a book.

The journalism pi…

Achim: However, if journalists just do opinion-writing on their Substack, and that kind of journalism becomes dominant, these boundaries may dissolve. That is not necessarily a good thing, though.
The Duplicator: Instant Cloning Would Make the World Economy Explode

This is a very good longtermist piece. Is the short-termist interpretation that we should try very hard to clone John von Neumann?

Holden Karnofsky: (Response to both AppliedDivinityStudies and branperr) My aim was to argue that a particular extreme sort of duplication technology would have extreme consequences, which is important because I think technologies that are "extreme" in the relevant way could be developed this century. I don't think the arguments in this piece point to any particular conclusions about biological cloning (which is not "instant"), natalism, etc., which have less extreme consequences.
Writing about my job: Internet Blogger
  1. It depends on your skillset. My impression is that EA is not really talent-constrained with regard to the talents I currently have. So I would have a bit to offer on the margins, but that's all. I also just don't think I'm nearly as productive when working on a specific set of goals, so there's some tradeoff there. I'm interested in doing RSP one day, and might apply in the future. In theory, I think the Vox Future Perfect role could be super high impact.

  2. I probably should.

  3. The short answer is that it's an irreversible decision, so I'm being overly…
Writing about my job: Internet Blogger

I've wanted to do this for a while, but haven't yet amassed enough material on a topic to consider it a very coherent work. But someday...

Writing about my job: Internet Blogger

Thanks!

Prior to blogging, I had a day job for a while and lived pretty frugally. I told myself I was investing the money to donate eventually, and did eventually donate some, but kept the bulk of it. So when I first started blogging I already had enough to live on for a while. Then I got the EV grant, and a bit of additional private funding. So long story short, it's not stressful, but it is something I think about. I'm not 100% sure what the long term strategy will be, but based on the feedback I've gotten so far, I think it's likely I'll be able to continue getting grants/donations.

newptcai: If you keep writing on a topic, maybe one day you can publish a collection of your blog posts as a book?
Writing about my job: Data Scientist

Thanks for the writeup. Minor point about salary: is £41k typical for entry-level in London? According to Glassdoor, average base pay in the US is $116k USD, equivalent to £85k. Their page for Data Scientists in London puts the average at £52k.

I get that this is an average over all levels of seniority, but it's also just your base pay. My impression from Levels.fyi is that at large US companies, base pay is only around 67-75% of total compensation.
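To make that concrete, here's a rough back-of-envelope sketch (mine, not from either post) using only the figures above: Glassdoor's $116k US average base, the exchange rate implied by "$116k ≈ £85k", and the 67-75% base-pay share from Levels.fyi; everything else is illustrative:

```python
# Back-of-envelope: implied US total comp vs. the £41k London entry-level base.
# Inputs are the figures cited in the comment above; nothing here is new data.

us_base_usd = 116_000            # Glassdoor US average base pay
uk_entry_base_gbp = 41_000       # London entry-level base from the post
usd_per_gbp = 116_000 / 85_000   # ~1.36, implied by "$116k = £85k"

for base_share in (0.75, 0.67):  # base pay as a share of total comp
    total_usd = us_base_usd / base_share
    total_gbp = total_usd / usd_per_gbp
    print(f"base share {base_share:.0%}: total comp ~${total_usd:,.0f} "
          f"(~£{total_gbp:,.0f}), ~{total_gbp / uk_entry_base_gbp:.1f}x £41k")
```

On these numbers the implied US total comp is roughly £113k-£127k, i.e. a ~2.8-3.1x gap, with the caveat already noted that the US figure averages over all seniority levels.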

So I guess what I'm asking is, given your experience, which of the following statements would you agree with:

  • The…
dan.pandori: I also had sticker shock at the number. Thanks for including the Glassdoor links; I was very surprised that base pay in the US overall is higher than London (which is presumably the most expensive UK market).
technicalities: Big old US >> UK pay gap imo. Partial explanation for that: 32 days holiday in the UK vs 10 days US. (My base pay was 85% of total; 100% seems pretty normal in UK tech.) Other big factor: this was in a sorta sleepy industry that tacitly trades off money for working the contracted 37.5 h week, unlike say startups. Per hour it was decent, particularly given 10% study time. If we say hustling places have a 50 h week (which is what one fancy startup actually told me they expected), then 41 looks fine (https://www.google.com/search?q=%2850%2F37.5%29+*+41+%2F+0.9).
New blog: Cold Takes

Exciting to hear!

Minor UI nit: I found the grey Sign up button slightly confusing and initially thought it was disabled.

Holden Karnofsky: Thanks, I agree it's not ideal, but haven't found a way to change the color of that button between light and dark mode.
Can money buy happiness? A review of new data

Rohin, I thought this was super weird too. Did a bit more digging and found this blog post: https://kieranhealy.org/blog/archives/2021/01/26/income-and-happiness/

if the figure is showing a subset of the two (i.e. only observations from people who answered both questions) then the z-score means across income levels will be slightly different, depending on who is excluded.

The author (who is an academic) agrees this is a bit weird, and notes "small-n noisiness at high incomes".

So overall, I see the result as plausible but not super robust. Though note tha…

rohinmshah: Nice find, thanks! (For others: note that the linked blog post also considers things like "maybe they just uploaded the wrong data" to be a plausible explanation.)
People working on x-risks: what emotionally motivates you?

Personally:

  • 5% internalized consequences
  • 45% intellectual curiosity
  • 50% status

I'm sort of joking. Really, I think it's that "motivation" is at least a couple things. In the grand scheme of things, I tell myself "this research is important". Then day to day, I think "I've decided to do this research, so now I should get to work". Then every once in a while, I become very unmotivated, and I think about what's actually at stake here, and also about the fact that some Very Important People I Respect tell me this is important.

Is an increase in attention to the idea that 'suffering is bad' likely to increase existential risk?

This is a good question, but I worry you can make this argument about many ideas, and the cost of self-censorship is really not worth it. For example:

  • If we talk too much about how much animals are suffering, someone might conclude humans are evil
  • If we talk too much about superintelligence, someone might conclude AI is superior and deserves to outlive us
  • If we talk too much about the importance of the far future, a maximally evil supervillain could actually become more motivated to increase x-risk

As a semi-outsider working on the fringes of this commun…

dotsam: Thank you for your reply. I would not wish to advocate for self-censorship, but I would be interested in creating and spreading arguments against the efficacy of doomsday projects, which may help to avert them.
What would an entity with GiveWell's decision-making process have recommended in the past?

A couple relevant pieces: In this talk, Tyler Cowen talks about how impartial utilitarianism makes sense today since we can impact humans far from ourselves (in both time and space), but how deontology may have been more sensible in the distant past.

In this talk, Devin Kalish argues that utilitarianism is the correct moral theory on the basis of its historical track record. He argues that utilitarianism correctly "predicted" now widely recognized ethical positions (women's rights, anti-slavery, etc).

So I think it's interesting to ask, if GiveWell was aroun…

Aaron Gertler: Nitpicky point: Depending on how much better the farming practices were, and how wide they might have spread, the hypothetical comparison to abolition may not be as clear as it looks. If this list (https://en.wikipedia.org/wiki/List_of_famines) is even close to accurate, famines seem to have killed millions of people in the average decade of the 19th century. I'm not sure what better practices might have been possible to introduce "early" in that era, but I think EA circa 1800 might have had "famine" as a major cause area!

I can't easily find the link, but GiveWell's early discussions of U.S. interventions focused on how difficult it can be to make a permanent change in someone's life in the developed world. One example (this is mine, not theirs): some of the worst-off people in the U.S. are prisoners, and you can't pay to get someone out of jail. On the other hand, Open Philanthropy made multiple grants to the Brooklyn Community Bail Fund (https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/brooklyn-community-bail-fund-general-support), with the goal of reducing the amount of time Americans spend in a state of imprisonment. The first of these came two years after the GiveWell Labs -> Open Philanthropy transition, which means the organization was being seriously considered and researched even earlier.

If GiveWell of 1800 could pay to buy permanent freedom for slaves, it's not crazy to think they'd have done quite a lot of that. (And perhaps advocated for abolition or funded the Underground Railroad; they seriously considered multiple U.S.-based "systemic" causes early on, all of which ran into the problem that it's really hard to do as much good for people here as for people in the developing world with similar amounts of money. This picture looks different when there's a huge slave population in your country, and clear measures can be taken to free them.)

Now I'm fascinated by this question, so here's an ugly BOTEC (based on…
Linch: I think the systemic change point and the treatise on human nature point are meaningfully different. The former presumes that "we" (leftist society) know how to do good and EA is empirically mistaken, while the latter is saying that we're lost on how to do good, but having smart people explore their inclinations is plausibly a better path to getting there.

Just addressing the latter point for now: I find Hume a bad example from Collison, since empirically EA has a lot of philosophers and interest in psychology/philosophy, and "understanding human nature so we can better make decisions" feels right up the alley of EAs and people adjacent to us (e.g. the rationality community).

If I wanted to make the point that EAs in history would have been insufficiently exploratory, I would've pointed to Newton instead of Hume. Newton ~spent his life doing 4 things: astronomy/physics/calculus, Bible studies, alchemy, and managing the British banking system. Arguably an EA I/N/T framework at the time would have said (given empirical beliefs at the time) that any of the latter 3 would be a better use of time than staring at stars and understanding how they move across the sky. And of course these days Newton is famous mainly as the inventor of calculus. So I'd be more interested in whether EA would have stunted Isaac Newton's intellectual development, more than Hume's.
Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare

Yes that's a good point, as Scott argues in the linked post:

The moral of the story is: if there's some kind of weird market failure that causes galaxies to be priced at $1, normal reasoning stops working; things that do incalculable damage can be fairly described as "only doing $1 worth of damage", and you will do them even if less damaging options are available.

GiveWell notes that their analysis should only really be taken as a relative measure of cost-effectiveness. But even putting that aside, you're right that it doesn't imply human lives are cheap o…

Max_Daniel: (FWIW, this might be worth emphasizing more prominently. When I first read this post and the landing page, it took me a while to understand what question you were addressing.)
Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare

This is very specifically attempting to compile some existing analysis on whether it's better to eat chicken or beef, incorporating ethical and environmental costs, and assuming you choose to offset both harms through donations.

In the future, I would like to aggregate more analysis into a single model, including the one you link.

As I understand it (this might be wrong), what we have currently is a bunch of floating analyses, each mostly focused on the cost-effectiveness of a specific intervention. Donors can then compare those analyses and make a judgement…

Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare

Yeah, I'm hopeful that this is correct, and plan to incorporate other intervention impact estimates soon.

For that particular post, Saulius is talking about "lives affected", e.g. chickens having more room, as described here: https://www.compass-usa.com/compass-group-usa-becomes-first-food-service-company-commit-100-healthier-slower-growing-chicken-2024-landmark-global-animal-partnership-agreement/

I don't yet have a good sense of how valuable this is vs. the chicken not being produced in the first place, and I think this will end up being a major point of c…

Max_Daniel: One other thing that's important, and that I should have emphasized more in my original comment: you are specifically interested in offsetting chicken consumption (not eggs), but I believe that most successful corporate campaigns to date were about hen welfare (i.e., chickens farmed for eggs). At a glance, the post I linked to covers both 'hen welfare' and 'broiler welfare' (i.e., chickens farmed for meat). But it's worth paying attention to whether cost-effectiveness estimates for hen welfare or broiler welfare differ, or if we even have ones for broiler welfare (if we do, I think they would probably be more uncertain, since I would guess there is less data on cost, tractability, corporate follow-through, etc.). This of course also applies to the improvement in living conditions. I think (but am not totally sure) that everything about caged vs. cage-free is relevant for hen welfare only. For this, I would recommend looking at this report (https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/how-will-hen-welfare-be-impacted-transition-cage-free-housing). I know that animal advocates have also tried to estimate the effect of potential welfare improvements for broilers (e.g., using different breeds) - including concerns whether some welfare improvements might cause an increase in the farmed broiler population due to lowered 'efficiency', and whether this could make some measures net negative w.r.t. total, aggregated welfare - but I don't know of a good source off the top of my head.
Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare

Yes, good question! Cow lives are longer, and cows are probably more "conscious" (I'm using that term loosely), but their treatment is generally better than that of chickens.

For this particular calculation, the "offset" isn't just an abstract moral good; it's attempting to decrease cow/chicken production respectively. E.g. you eat one chicken, donate to a fund that reduces the number of chickens produced by one, and the net ethical impact is 0 regardless of farming conditions.
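As a toy illustration of that offset logic (all numbers below are hypothetical placeholders, not the spreadsheet's actual estimates):

```python
# Toy model of consumption offsetting: size the donation so it averts as many
# chicken-lives of production as you consume; net production change is then 0.
# Both input figures are placeholders, not real estimates.

cost_to_avert_one_chicken = 0.50   # $ per chicken-life averted (placeholder)
chickens_eaten_per_year = 25       # annual consumption (placeholder)

offset_donation = chickens_eaten_per_year * cost_to_avert_one_chicken
chickens_averted = offset_donation / cost_to_avert_one_chicken
net_production_change = chickens_eaten_per_year - chickens_averted  # -> 0.0

print(f"Offset donation: ${offset_donation:.2f}; "
      f"net change in chickens produced: {net_production_change:.0f}")
```

Under this logic, the offset price per animal averted, rather than the animal's welfare conditions, is what drives the chicken vs. beef cost comparison.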

That convenience is part of the reason I chose to start with this analysis, but it's certainly something I'll have to consider for future work.

Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare

Sorry yes, "saving a life" means some kind of intervention that leads to fewer animals going through factory farming. The estimate I'm using is from: https://forum.effectivealtruism.org/posts/9ShnvD6Zprhj77zD8/animal-equality-showed-that-advocating-for-diet-change-works

And yes, it is definitely better to just be vegan and not eat meat at all. This analysis is purely aimed at answering the chicken vs cow question.

Max_Daniel: Another thing that wasn't immediately clear to me: are you comparing chicken lives to cow lives (by numerically distinct individuals), or chicken-years to cow-years? I think this is a significant difference, since iirc the standard length of the life of a factory-farmed chicken is on the order of 0.1 years, while I would guess that it's higher for cows (but I don't recall a number off the top of my head).