All of Sanjay's Comments + Replies

If the authors of this post haven't indicated what their star signs are, how do I know if I believe what they say?

3
Ivan Burduk
20d
While our star signs may be relevant for optimizing 1-1 matches (sorry, we're not compatible), we don't have evidence to suggest there's any influence of star signs on the ability to evaluate the use of star signs to optimize 1-1 matches. We therefore recommend against using the lack of knowledge of our star signs as a reason to not believe our claims.

Can you say any more about what you plan to do?

3
alene
1mo
Yes! Let's talk, Sanjay!! To summarize: As partial owners of corporations, shareholders have some power to protect the corporation’s interests. For example, when an investigation revealed mistreatment of Costco’s birds, two shareholders stepped into Costco’s shoes and sued Costco’s executives for making the company violate state animal neglect laws.

At the time, my comment was "it's not obvious, more rationale needed" -- i.e. I expressed sympathy for the proposal of transparency, but leaned towards not doing it.

I think the main thing which has changed is that it's a slightly more academic question now -- we no longer have the resources to run something like this.

If, hypothetically, we did have the resources to run this again, would we default to asking funders to be transparent (rather than our previous default choice of not making this request)? I'm not sure -- as I say, it's a rather more academic question now.

Thanks very much for this, much appreciated. Your best guess of vaccines being less cost-effective than bednets and SMC, but not by an order of magnitude, sounds sensible.

Thanks very much for the comment, this is really interesting. The idea of explicitly adding in suicide risk is an interesting direction for the analysis, it sounds like good work. When you publish your paper, I'll be interested to consider whether the underlying estimates of the badness of depression (perhaps implicitly) already reflect the suicide angle.

At some point it might be useful to do a more careful compare and contrast between your method (using Pyne et al's paper) and our method (using the Sanderson paper). Given that the methods are quite differ... (read more)

1
Stan Pinsent
3mo
The report is now public: https://forum.effectivealtruism.org/s/ykdScawzq59ntw9N3

I certainly would like to equip my toddler with more maths (and preferably computer science) skills than we see in schools. I was planning to remedy this by taking more time to teach her the content myself (assuming she's willing!). I appreciate this won't work for everyone -- it's time-consuming, and not every parent is great at maths.

I'm hoping that I will be able to get into a routine of regular maths fun with Daddy. At first this will be the basics (my daughter can't talk yet, so she still has a lot to learn!), and then over time moving on to more advan... (read more)

I said this in another comment, but in case it gets missed, I just want to highlight that 1Day Sooner has shown an excellent attitude. When we reached out to them, they were consistently welcoming of the criticism and offered constructive, useful comments. I've found these virtues to be more common in the EA community than elsewhere, but I still like to call them out when I see them.

Thank you Josh. I've found 1Day Sooner's collaborative spirit to be exemplary here -- both being welcoming of the challenge and adding useful thoughts.

It seems intuitive to me that the following package of considerations may lead to vaccines and nets/SMC having roughly the same cost-effectiveness (a rough numerical sketch follows the list):

  • vaccines are 10x (ish) more expensive (bad for vaccines)
  • vaccines are more targeted at the most vulnerable ages (good for vaccines)
  • misc other considerations, like insecticide resistance (this is a bit hand-wavey at the moment, but I guess probably nets out to being
... (read more)
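To make the intuition concrete, here is a minimal back-of-envelope sketch. Only the ~10x cost ratio comes from the comment above; the targeting multiplier and the miscellaneous adjustment are invented placeholders, not estimates from GiveWell or anyone else:

```python
# Back-of-envelope comparison (illustrative numbers only).
COST_RATIO = 10           # vaccines ~10x the cost of nets (from the comment)
TARGETING_MULTIPLIER = 8  # assumed: deaths averted per person covered are ~8x
                          # higher when coverage targets the most vulnerable ages
MISC_ADJUSTMENT = 1.2     # assumed net effect of insecticide resistance etc.

# Ratio of vaccine cost-effectiveness to net cost-effectiveness (>1 favours vaccines)
relative_ce = TARGETING_MULTIPLIER * MISC_ADJUSTMENT / COST_RATIO
print(f"Vaccines vs nets: {relative_ce:.2f}x")  # ~0.96, i.e. roughly comparable
```

Under these made-up multipliers the two interventions land within a few percent of each other, which is the "roughly the same" intuition above; the conclusion is obviously very sensitive to the assumed multipliers.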

Sorry for asking about a minor detail, but Figure 3 in section 3.2.1 shows an internal validity adjustment of 90% for ITNs (top row of figure). I thought this was 95%? Am I misunderstanding how you're thinking about the adjustment in this document?

4
GiveWell
3mo
Hi Sanjay - thanks for the close read! You're right that Figure 3 should read 95%, not 90% - we're working on correcting the figure and will update the post ASAP. Thanks again!

I've often thought that more quantification of the uncertainty could be useful in communicating to donors as well. E.g. "our 50% confidence interval for AMF is blah, and the corresponding interval for deworming is blah, so you can see we have much less confidence in it". So I think this is a step in the right direction; thanks for sharing, and for setting it out in your usual thoughtful manner.
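For illustration, here is a minimal sketch of how such intervals could be produced from a Monte Carlo cost-effectiveness model. All distributions and parameters below are invented for the example; they are not GiveWell's (or anyone's) actual numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical uncertainty distributions (parameters invented for illustration)
amf_cost_per_life = rng.lognormal(mean=np.log(5_500), sigma=0.3, size=N)
deworming_rel_value = rng.lognormal(mean=np.log(1.0), sigma=1.0, size=N)

def interval50(samples):
    """25th-75th percentile range, i.e. a 50% interval."""
    return np.percentile(samples, [25, 75])

lo, hi = interval50(amf_cost_per_life)
print(f"AMF cost per life saved, 50% interval: ${lo:,.0f} to ${hi:,.0f}")

lo, hi = interval50(deworming_rel_value)
print(f"Deworming relative value, 50% interval: {lo:.2f}x to {hi:.2f}x")
# The much wider deworming interval communicates the lower confidence directly.
```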

Good question. 

It's also helpful because the wording of my post was meant to convey that experts tend to believe that the therapeutic alliance matters (and not necessarily that I'm confident that that's the case).

One of the papers that I referenced did flag that most of the studies are observational rather than experimental, which does validate your concern. (I think it was Arnow & Steidtmann 2014 which said this; I don't know if a more recent paper sheds more light on this.)

I'm not planning to look into this topic in any depth, but perhaps someone more knowledgeable can give a more definitive answer.

I think it's useful for people to express opinions on the forum, but this post didn't quite hit the mark, in my view.

The post makes a number of fairly strong claims, but some of them (including important ones) have little to no justification. Examples:

  • that the default goal of an AI system is "literally killing everyone"
  • ' "But maybe the alignment plan of OpenAI/whatever will work out!" is wrong. It won't. '

If you didn't want to lengthen the post by going over lengthy justifications which have already been made elsewhere, I think it would have been reasonable to link to other places where those claims have been justified.

3
Greg_Colbourn
4mo
Here's my attempt at a concise explanation for why the default outcome of AGI is doom.
11
dsj
4mo

I’ll go further and say that I think those two claims are widely believed by many in the AI safety world (in which I count myself) with a degree of confidence that goes way beyond what can be justified by any argument that has been provided by anyone, anywhere, and I think this is a huge epistemic failure of that part of the AI safety community.

I strongly downvoted the OP for making these broad, sweeping, controversial claims as if they are established fact and obviously correct, as opposed to one possible way the world could be which requires good argumen... (read more)

Here's a few quotes from your post (emphasis added):

I ran into a quite unhealthy looking dog who was riddled with ticks. We spent half an hour taking the ticks out and by the time we were done with him, we knew we wouldn’t let him lie there....

We brought him there [to a shelter] right away, leaving him in a pen <...> . When we went away, it was with a bad feeling. 

When we went back the next day, we were told that the dog had escaped. <...> We felt devastated

By now we had bonded with this dog <...> and mourned for the rest of the day.

... (read more)

My intuition says that people are probably already following the heuristic "if you don't like your therapist, try to get another one". I also haven't given much thought to the patient's/client's perspective on the therapeutic alliance.

25
Sanjay
4mo

I'm used to seeing many expert opinions on psychotherapy converge on the view that the type of therapy doesn't make much difference (at least as far as the evidence can tell us). I.e. it doesn't seem to matter much whether you choose CBT or IPT or whatever. The therapeutic alliance, on the other hand, does matter. Therapeutic alliance means something like "How well you get on with your therapist" (plus some related things).

I had a fleeting thought that perhaps the therapeutic alliance might be neglected. E.g. maybe there's a novel intervention which involv... (read more)

4
Linch
4mo
How likely is this to be a real effect vs a confound? I imagine if I feel like therapy is working, I'm much more likely to like my therapist (similarly I'm more likely to like a physical trainer if I'm getting healthier, I'm more likely to like my teacher if I feel like I'm learning more, etc)
5
geoffrey
4mo
I always read therapeutic alliance as advice for the patient, where one should try many therapists before finding one that fits. I imagine therapists are already putting a lot of effort in on the alliance front. Perhaps an intervention could be an information campaign to tell patients more about this? I feel it's not well known or obvious that you can (1) tell your therapist their approach isn't working and (2) switch around a ton before potentially finding a fit. I haven't looked much into it though.
10
Sanjay
4mo

It's with a heavy heart that I find myself (a) spotting this post (b) starting to read it. Rightly or wrongly, I'm not enjoying the community drama.

I feel like I just want to forget that I'd ever seen any of these posts, and just continue being kind and friendly to anyone I know who's involved in this.

This solution sounds like a crude kludge (shouldn't I be more truth-seeking than that? Can't I be more thoughtful?) But I just don't think I have the energy to do better than that.

8
Richenda
4mo
Would that everybody did this.

Great that you did this, really appreciate it. 

I'm no expert on the biology, but my intuition would in any case have been that the effect size would be tiny/negligible for 6 weeks of supplementation, and that for non-trivial effects, you would need sustained supplementation over a longer time period. 

Is there any reason to doubt my intuition on this?

3
Ryan Greenblatt
5mo
I believe prior work showed large effects from short periods of supplementation. (Edit: Note that this work seems to debunk prior work, but this should explain the study design)

Oh yes, that is weird. The impression I had was that Ilya might even have been behind Sam's ousting (based on rumours from the internet). I also understood that sacking Sam needed 4 out of 6 board members, and since two of the board members were Sam A and Greg B, that meant everyone else had to have voted for him to leave, including Ilya. Most confusing.

5
HenryStanley
5mo
It's bizarre, isn't it? Very much hoping the board makes public some of the reasons behind the decision.
3
titotal
5mo
His recent Twitter post suggests either he wasn't behind the push, or he was but subsequently decided it was a huge mistake.

Bravo for writing this stuff up, glad to see that.

I actually didn't realise that this elephant was an elephant? Indeed, I had the impression that paid ads had already been used by other EA orgs (if memory serves correctly, by EAG, 80k, and SoGive), so as far as I was aware they were considered to have legitimacy.

1
gergo
8mo
You might be right! My impression was based on talking to a handful of people within community building, about fellowship programs specifically - that might be what explains our different impressions (although I'm sure there are plenty of people who are excited about paid ads within this niche too!)

I believe the financial system is well-positioned for "consistent pressure on companies". I have more to say on this based on my own work experience, so if anyone is interested feel free to reach out.

If we're only considering plant-based meat, and only looking out over the near term (say, 1-3 years), then the claims here seem reasonable. So much so that I'm surprised the PTC model is so popular.

It may look like your concerns also apply to other alternative proteins (e.g. lab-grown meat). I don't believe that's the case.

  • We have a long way to go before we have lab-grown meat which is price competitive with traditional meat.
  • I'm willing to accept the argument that price-competitive lab-grown meat may not be sufficient, because of social and psychologi
... (read more)
3
Jacob_Peacock
8mo
Hi Sanjay, thank you for reading and for your thoughtful comment! The evidence I reviewed here already spans a couple of years, so I do think it might be reasonable to extrapolate closer to 3-5 years. That said, there isn't any analysis of trends over time, so maybe not. I agree that, conditional on the existence of similar alternatives, regulating against animal-based meat is easier than if those alternatives don't exist. Can you elaborate on why you think the arguments apply differently to lab-grown rather than plant-based meat in your third point? If one believes leaders in the field (e.g., Ethan Brown, I think, but I could be misremembering), we might eventually literally synthesize meat from plant sources; thus, plant-based meat would be meat, as would lab-grown meat. By transitivity, they'd all be "the same." I myself don't find the premise here too compelling, but it helps motivate the question: what exactly will be the differences between plant-based and lab-grown meat that would differentially impact consumer acceptance?

A summary based on the quotes which I included in a separate comment:

  • Larry Madoff, who served as editor of the program from 2002 to 2021, said he was “forced out” by the organization’s CEO, Linda MacKinnon, according to STATnews
    • It seems likely that something unfortunate is happening here, but I'm unclear what.
  • Several ProMED moderators wrote a letter; it appears that they objected to:
    • A letter going out to all ProMED subscribers proposing a subscription model; 
      • this was signed by "The ProMED team" without the moderators being inform
... (read more)

I'm also concerned about the internal strife within ISID/ProMED. I've copied and pasted some quotes below.

Here's an excerpt from the STATnews article that this post links to:

...Larry Madoff, who served as editor of the program from 2002 to 2021. In spring 2021, Madoff said he was “forced out” by the organization’s CEO, Linda MacKinnon, and Alison Holmes, then president of the ISID executive committee. A professor of infectious diseases at the University of Massachusetts, Madoff refers to himself as editor emeritus of ProMED, a title bestowed upon him by th

... (read more)

It seems that a central bottleneck for the fund is that a few key people are decision-makers, and they are very busy, which makes it hard to operate quickly at scale and be transparent.

When SoGive ran its grants programme last year, we tackled these problems by getting more junior people to help.

I.e. the structure was:

  • most senior/expert people helped at key moments, but were mostly consultative
  • mid-level people (i.e. fairly experienced but non-expert) actively reviewed applications, including having a call with applicants (we actually did two rounds of calls,
... (read more)
9
calebp
9mo
> It seems that a central bottleneck for the fund is that a few key people are decision-makers, and they are very busy, which makes it hard to operate quickly at scale and be transparent.

I think this is at least somewhat true. We have tried out having more junior managers on the fund with mixed success. The EAIF currently has "assistant fund managers", which I think was a good experiment for us to run, and I think it's generally gone well.

My impression is that SoGive gave out something like $300k and had 26 applicants, so it doesn't seem super comparable to me to the LTFF (I think last year we had ~1000 applications), and I'd guess that your methods don't scale particularly well to the kind of grantmaking the LTFF does (but I could be wrong).

I also somewhat disagree with Asya re our transparency. I think that we are falling short of where I'd like us to be, but if you compare us to other grantmaking programs that have existed for more than 1 year, I think we look pretty good transparency-wise (e.g. Longview, Effective Giving, Open Phil), though plausibly they don't need to be as transparent as they raise less from the public.

I was worried that this whole post might omit mission hedging and impact investing:

(a) an investor may wish to invest in equities for mission hedging reasons (e.g. scenarios where markets go up may be correlated with scenarios where more AI safety work is needed, or you might invest heavily in certain types of biotech firms, since their success might be correlated with pandemic risk work being needed)

(b) an investor can have impact on the entities they have a stake in through stewardship/engagement (sometimes referred to as investor activism). Roughly spea... (read more)

I've wondered about the interaction between far-UVC and immunity:

  • as well as protecting us against a scary novel pandemic-level pathogen, far-UVC would also kill off germs for various "common or garden" infections
  • at first glance, this sounds like a pretty great cherry on the cake
  • but could it exacerbate pandemic risk by reducing immunity, thereby making it easier for a bioweapon engineer to create a scary pathogen?
10
Jmd
9mo

I was thinking along these same lines but for the skin microbiota... we are lagging behind in understanding this compared to the gut microbiota, but it seems like the diversity is pretty important to our overall health? It's probably only a risk worth considering for the "install it in all the offices" case, rather than against using far-UVC in pandemic situations, but I guess research would be needed to assess the risks for skin disorders, or whatever else these microbiota might be important for?

8
Max Görlitz
9mo
Hi Sanjay, thanks for the comment! Indeed, I think part of the path to impact for far-UVC will be that adoption will hopefully be driven by, e.g., employers like Google equipping their offices with far-UVC lamps because they expect this to reduce the total number of sick days of their workers and therefore increase productivity + profits. Getting this type of evidence for efficacy would be great since it would be an excellent sales pitch to companies whose employees earn a lot, meaning sick days are costly. Ideally, you would be able to tell them something like, "Installing these far-UVC fixtures in the whole office will cost you $30,000, but based on existing evidence and our best models, you'll likely recoup those costs after approx. 18 months due to a reduction in sick days of your employees." Presumably, that would be a big boost for demand and competition, thereby reducing costs and increasing R&D. It could help to make far-UVC widespread enough to make a difference in stopping future outbreaks or slow down the spread of disease during the next pandemic.

There has been very little research on the interaction of far-UVC and the immune system. It is a topic that often comes up in discussions around far-UVC safety and is related to the well-known "hygiene hypothesis," which says something like, "If you're not exposed to enough germs as a child, you might get more allergies." I want to see more research on this, but so far, it hasn't been as much of a priority. First, people wanted to figure out things like whether far-UVC could give you skin cancer or make you blind. By now, we know those things won't happen, so we can turn to more "second-order" type risks like immune system effects.

However, I have a few intuitions about why this seems unlikely. First of all, it is an "end-game" worry in the sense that it seems like it would only become relevant once far-UVC is almost ubiquitous. Even if it becomes widespread, it would be installed in places like hospit
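The payback pitch above is simple to sketch. In the minimal illustration below, only the $30,000 installation cost and the ~18-month payback come from the comment; the office size, sick-day baseline, reduction fraction, and cost per sick day are assumptions chosen so the arithmetic matches:

```python
# Illustrative far-UVC payback calculation (all inputs except the
# installation cost are assumptions chosen for illustration).
install_cost = 30_000       # $ upfront, from the comment above
employees = 20              # assumed office size
sick_days_per_year = 5      # assumed baseline sick days per employee
reduction = 0.20            # assumed fraction of sick days averted by far-UVC
cost_per_sick_day = 1_000   # assumed fully-loaded cost of one lost day ($)

annual_saving = employees * sick_days_per_year * reduction * cost_per_sick_day
payback_months = install_cost / annual_saving * 12

print(f"Annual saving: ${annual_saving:,.0f}")          # $20,000
print(f"Payback period: {payback_months:.0f} months")   # 18 months
```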

I agree with Jason that the specific moral hazard of "people might move to flood-prone areas in order to get cash" seems unlikely to be a concern.

The moral hazard that I was thinking of when I read Robi Rahman's comment was "people who already live in flood-prone areas might be less prone to invest in flood defences/move away/do other things in light of the information that floods may be coming"

2
Jason
9mo
I think that's contingent on the percentage of people who receive payments, and the ability to predict one's likelihood of receipt. If GD gives money to the same people every flood season, then I would be much more concerned about this than if everyone in the flood zone knows they have a 5% chance of receiving money in any year their area was flooded.

If the question is whether the beneficiaries may be more likely to stay / underinvest / not take action once identified as conditional beneficiaries shortly before the flood -- it didn't sound like getting the payment was conditional on being in the flood zone when the flood actually hit. If you were pre-registered to location X earlier in the season, and location X was selected as a beneficiary site, it sounds like you got paid.

If that's true, one could argue for the opposite effect -- evacuation can be pricey, last-minute flood defenses require resources, etc. So getting them money a few days ahead of the storm might enable better risk-mitigation measures. I'm thinking of the people in the US who didn't leave before Hurricane Katrina due to lack of funds.

Re your question: "I would be especially interested if you have ideas for other historical case studies that could inform the longtermist project." Here's a few ideas:

  • In Scott Alexander's post Beware Systemic Change, he argued that by funding Marx, Engels brought about "global mass murder without any lasting positive change". I'd be quite interested in an assessment of whether this is true. 
    • Did Marx's work really cause the mass murder, or did the countries led by Marxist dictators happen to find themselves in circumstances where despots were prone to
... (read more)

At the start of your post, you said, rather tantalisingly: "I believe that many of the learnings from the creation of climate risk financial regulation in the UK  can be applied to AI regulation." Could you expand on this?

Also, I'm pleased you wrote this post :-)

3
Answer by Sanjay
Jul 22, 2023

This comment will focus on the specific approaches you set out, rather than the high level question, although I'm also interested in seeing comments from others on how difficult it is to solve alignment, and why.

The approach you've set out resembles Coherent Extrapolated Volition (CEV), which was described earlier by Yudkowsky (and discussed by Bostrom). I'm not sure what the consensus is on CEV, but here's a few thoughts which I have in my head from when I thought about CEV (several years ago now).

  • How do we choose the correct philosophers and intellectuals -- e.g. would we want
... (read more)
1
Jadon Schmitt
9mo
"How do we choose the correct philosophers?" Choose nearly all of them; don't be selective. Because the AI must get approval fom every philosopher, this will be a severe constraint, but it ensures that the AI's actions will be unambiguously good. Even if the AI has to make contentious extrapolations about some of the philosophers, I don't think it would be free to do anything awful.

Can you expand on why the ideal unit is "the settlement, village, community, or neighborhood"?

2
Sjlver
9mo
Here are some reasons why I think that units of ~100 households are ideal. The post itself has more examples.

  • It's best for detailed planning. There is a type of humanitarian/development work that tries to reach every household in a region. Think vitamin A supplementation, vaccination programs, bednet distributions, cash transfers, ... For these, one typically needs logistics per settlement, such as a contact person/agent/community health worker, some means of transportation, a specific amount of bednets/simcards/..., etc. Of course, the higher levels of the location hierarchy (health areas, counties, districts, ...) are also needed. But these are often not sufficient for planning. Also note that some programs use other units of planning altogether (e.g., schools or health centers), but the settlement is common.
  • It's great for monitoring. The interventions mentioned above typically want to reach 100% settlement coverage. It makes sense to monitor things at that level, i.e., ensure that each settlement is reached.
  • It's great for research. Many organizations use household sampling surveys. These are typically clustered, which means that researchers select a given number of "enumeration units", and then sample a fixed number of households in each unit. Ideally, these enumeration units have roughly even size, clear and well-understood boundaries, and known population counts. The type of locations that I'm aiming for would make good enumeration units (see the sampling sketch below).
  • This type of place name is used and known. For example, people in the region will know where "Kalamu" is. There will likely be a natural contact person, such as a village chief. There will be a road that leads there and a way to obtain transportation. One can ask questions like "is there cellphone coverage in Kalamu" and get a good answer. In the majority of cases, a place name is a well-understood, unambiguous and meaningful concept.

The final reason is about data availability: settlement names are
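To illustrate the "enumeration unit" idea in the research bullet, here is a minimal two-stage cluster-sampling sketch. The settlement names and household counts are invented (reusing "Kalamu" from the comment), and the cluster counts are placeholders:

```python
import random

random.seed(0)

# Stage 1: settlements of ~100 households act as enumeration units.
# Names and household counts below are invented for illustration.
settlements = {"Kalamu": 104, "Boma": 97, "Lukula": 112, "Nsele": 89}

N_CLUSTERS = 2       # enumeration units to select
HH_PER_CLUSTER = 15  # fixed number of households sampled per unit

chosen = random.sample(sorted(settlements), N_CLUSTERS)

# Stage 2: sample a fixed number of households within each chosen unit.
for name in chosen:
    households = sorted(random.sample(range(settlements[name]), HH_PER_CLUSTER))
    print(f"{name}: survey households {households}")
```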

I can also confirm that an early employee of W3W told me that supporting development work was one of the main original aims of W3W.

If I'm reading claim 3 correctly, are you saying that being a 10% GWWC pledger should be sufficient to get a spot at EAG, and this is true regardless of absolute donation amount?

That's much stronger than what I read it as. I think Sjir was saying something more like "if you turn up to a local EA event you should feel welcomed and like you are 'one of the gang' even if you only donate".

The purpose of EAG these days seems a bit murky to me, but it seems to me to be mostly for people who are highly engaged, and I think it's fair to say that if you just donate you are probably not highly engaged (although you might be).

At the outset I had the same concern; however, thus far it doesn't appear to have been a problem. It's possible that this may change in time, in which case we'll cross that bridge when we get there.

I think it would be easy for someone to confuse the two, but (as Matt_Sharp rightly indicated) the SoGive 18 months and the GiveWell 3 years are referring to different things.

The SoGive 18 month threshold refers to funds where there are no plans to use the money.

GiveWell is referring to money which is planned to be spent.

I fear you might be confusing "reserves" and "designated funds" (to use the parlance common in UK charity accounting).

Attracting senior staff members might be easier with high reserves, but I imagine it would be easier still if the charity "designated" some money to be used on the staff member's salary for (say) the next 3 years. SoGive's methodology is very liberal about this: the charity is at liberty to set reserves aside, or "designate" them for some purpose (this is non-binding), and if the charity does this, SoGive ignores those funds entirely when considering reserves.
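In other words, the calculation looks something like the sketch below (my reading of the methodology, not SoGive's actual code; the figures are hypothetical):

```python
# Sketch of the reserves test described above. Designated funds are excluded
# before comparing the remainder against the ~18-month threshold.
def months_of_free_reserves(total_funds: float,
                            designated_funds: float,
                            annual_expenditure: float) -> float:
    """Months of undesignated reserves at the current rate of spend."""
    free_reserves = total_funds - designated_funds
    return free_reserves / (annual_expenditure / 12)

# Hypothetical charity: $2m held, of which $1.2m is designated (e.g. 3 years
# of a senior hire's salary), with annual expenditure of $400k.
months = months_of_free_reserves(2_000_000, 1_200_000, 400_000)
print(f"{months:.0f} months of free reserves")  # 24 months, above the 18-month bar
```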

3
blonergan
10mo
That's a helpful clarification, thank you. I would be concerned, then, that if an organization were motivated to get SoGive's seal of approval, they could improve their ratio by designating more of their money for specific purposes. Wouldn't it be pretty easy to write down a four-year (non-binding) plan that would convert much of the current "reserves" to "designated funds"?

Although we didn't run this post past Open Phil before publishing, we are in touch with Open Phil, and we do ask them for suggestions of places to direct the money we support.

If they were against what is being outlined here, I think they would have said so when we've been in touch with them. Instead they were helpful.

I can confirm that the username looks like it's associated with someone I know at NTI, and that the wording looks consistent with wording that I've seen from NTI, and overall I judge it very very likely that this is a legitimate comment from NTI.

Good question Yonatan. The "too rich" category has been around for a long time, but I think this is the first time it's been given much attention. As a result, we haven't thought hard about how it's worded. "Overfunded" may well convey what we want without having unwanted connotations. Thank you for the comment.

This is a potentially relevant point, thanks for raising it. NTI did allude to this when we spoke to them (as we discuss in section 3.1).

In determining our rating, a key thing we needed to work out is: does NTI have all this money for arbitrary reasons (e.g. they have a chunk of money leftover from previous work)? or do they have high reserves for good risk management reasons (e.g. the "reserves" aren't really reserves because they plan to spend them down)?

We believe that it's for arbitrary reasons because they told us that this was the case (see the refer... (read more)

I think it's important that Eliezer used the words "and not mention the obvious notion that" (emphasis added).

The use of the word "obvious" suggests that Eliezer thinks that Ted is either lying by not mentioning an obvious point, or he's so stupid that he shouldn't be contributing to the forum.

  • If Eliezer had simply dropped the word "obvious", then I would agree with Aaron's assessment. 
  • However as is, I agree with JP's assessment. 

(Not that I'm a moderator, nor am I suggesting that my opinion should receive some special weight, just adding another... (read more)

I see some disagree votes on Ted's comment. My guess at what they mean:

"Ted, please don't be put off, Eliezer is being unnecessarily unkind. Your post was a useful contribution".

How did you decide to be a not-for-profit? I imagine that the evals/audit work will likely be very lucrative at some point?

4
mariushobbhahn
10mo
As stated in the post itself (section "Status"), we are not yet decided about this and are considering both non-profit and public-benefit-type for-profit style organizations. 

Great to see people writing about this topic, thank you. Thank you also for reaching out to discuss and for sharing a draft with me in advance. I'm sorry I wasn't able to review it; I've been a bit under the weather of late.

As I'm still under the weather, I've only skimmed your post, so sorry if I've missed something. As this is a topic I'm interested in I would normally prefer to read more carefully. Some quick comments:

  • Your post mentions that "the longer term growth trajectory [for ESG] is promising"; I don't think this does justice to the anti-ESG
... (read more)
1
Christopher Chan
11mo
You are absolutely correct; the perspective I have is quite narrow due to my lack of experience in the field. However, I hope this nonetheless helps shed some light on the current ongoing projects and thoughts around the field. Moreover, I do realise the perspective of the post is very Euro-environment-centric. I do not have the expertise or experience in other markets to comment on them.

I think it's interesting that an impact investing fund is making the comparison to GiveWell. This is far from widespread in the philanthropic world, and is even rarer in investing.

I predict that I probably wouldn't agree with the 3x claim if it were scrutinised properly.

I sympathise with the point made by Michael St Jules about quality of evidence, but I'm more worried about counterfactuals. I.e. if GIF had not made those investments, how likely is it that someone else would have?

7
jh
11mo
Actually, they are more of a grant fund than an impact investment fund. I've updated the post to clarify this. Thanks for bringing it up. One might call them an 'investing for impact' fund - making whatever investments they think will generate the biggest long-term impact. The reported projections aren't adjusted for counterfactuals (or additionality, contribution, funging, etc.). I wonder if the fact we're mostly talking about GIF grants vs GiveWell grants changes your worry at all? For my part, I'd be excited to see more grant analyses (in addition to impact investment analyses) explicitly account for counterfactuals. I believe GiveWell does make some adjustments for funging, though I'm uncertain if they are comprehensive enough.
15
Answer by Sanjay
May 27, 2023

I expect that answering this question overall (for all animals) is hard, but there exist specific animals for which it's (probably) easy. A chicken farmed in the most egregious factory-farmed conditions likely has a materially negative quality of life (as you noted), but also has minimal impact on climate change. I'm not sure how to size the effects of chicken farming on cropland for feed, or the oversized-ness of the food system, so it's possible this example could be rendered more complex by those considerations. Avian flu can be nasty (it has been associated with mortality of c.50% in the past), so chickens seem likely to be a risk factor for pandemics.

3
Vasco Grilo
11mo
Hi Sanjay, I agree the case for reducing consumption is stronger for factory-farmed chickens (or other animals living super bad lives which have a small impact on global warming).

Not sure if I missed it, but another factor might be AMR. (Antimicrobial resistance is one mechanism by which factory farming leads to pandemics, which you mention. But AMR causes other harms too.)

5
Vasco Grilo
11mo
Thanks! I have now added:

Someone told me that they had heard that OpenAI was training GPT-5.

The someone was the sort of person who would likely be in the know (but was not at OpenAI).

I'd prefer not to say more, because I don't know whether they are willing to have their identity stated in public.

Interesting that Sam Altman said "We are not currently training what will be GPT-5". I've certainly heard rumours to the contrary.

2
NIC_1615
1y
Could you share more details about the rumors?
9
Matt Brooks
1y
Maybe they're training "GPT-4.5", or maybe they've come up with a new name and they're training "Assistant-1". But he's said elsewhere publicly that they're not training GPT-5. Maybe they're going to focus on plugins, fine-tuning, visual processing, etc.