At the time, the comment was "it's not obvious, more rationale needed" -- i.e. I expressed sympathy for the proposal of transparency, but erred towards not doing it.
I think the main thing which has changed is that it's a slightly more academic question now -- we no longer have the resources to run something like this.
If, hypothetically, we did have the resources to run this again, would we default to asking funders to be transparent (rather than our previous default choice of not making this request)? I'm not sure -- as I say, it's a rather more academic question now.
Thanks very much for this, much appreciated. Your best guess that vaccines are less cost-effective than bednets and SMC, but not by an order of magnitude, sounds sensible.
Thanks very much for the comment, this is really interesting. The idea of explicitly adding in suicide risk is an interesting direction for the analysis, it sounds like good work. When you publish your paper, I'll be interested to consider whether the underlying estimates of the badness of depression (perhaps implicitly) already reflect the suicide angle.
At some point it might be useful to do a more careful compare and contrast between your method (using Pyne et al.'s paper) and our method (using the Sanderson paper). Given that the methods are quite differ...
I certainly would like to equip my toddler with more maths (and preferably computer science) skills than we see in schools. I was planning to remedy this by taking more time on teaching her the content myself (assuming she's willing!). I appreciate this won't work for everyone -- it's time-consuming and not every parent has great maths.
I'm hoping that I will be able to get into a routine of regular maths fun with Daddy. At first this will be the basics (my daughter can't talk yet, so she still has a lot to learn!), and then over time moving on to more advan...
I said this in another comment, but in case it gets missed, I just want to highlight that 1Day Sooner has shown an excellent attitude. When we reached out to them, they were consistently welcoming of the criticism and offered constructive, useful comments. I've found these virtues to be more common in the EA community than elsewhere, but I still like to call them out when I see them.
Thank you Josh. I've found 1Day Sooner's collaborative spirit to be exemplary here -- both being welcoming of the challenge and adding useful thoughts.
It seems intuitive to me that the following package of considerations may lead to vaccines and nets/SMC having roughly the same cost-effectiveness:
Sorry for asking about a minor detail, but Figure 3 in section 3.2.1 shows an internal validity adjustment of 90% for ITNs (top row of figure). I thought this was 95%? Am I misunderstanding how you're thinking about the adjustment in this document?
I've often thought that more quantification of the uncertainty could be useful in communicating to donors as well. E.g. "our 50% confidence interval for AMF is blah, and our confidence interval for deworming is blah, so you can see we have much less confidence in it". So I think this is a step in the right direction -- thanks for sharing, and for setting it out in your usual thoughtful manner.
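To illustrate the sort of quantification I have in mind, here's a minimal Monte Carlo sketch. All the distributions and numbers are made up for illustration -- this is not GiveWell's actual model -- but it shows how a 50% interval can convey that one charity's estimate is much less certain than another's:

```python
import random

def simulate(cost_mu, cost_sigma, effect_mu, effect_sigma, n=100_000):
    """Monte Carlo draws of cost per outcome; returns the 25th/75th percentiles."""
    draws = []
    for _ in range(n):
        cost = random.lognormvariate(cost_mu, cost_sigma)        # $ per unit delivered
        effect = random.lognormvariate(effect_mu, effect_sigma)  # outcomes per unit
        draws.append(cost / effect)  # $ per outcome (lower is better)
    draws.sort()
    return draws[n // 4], draws[3 * n // 4]  # 50% confidence interval

# Same central estimates, but charity B's inputs are far more uncertain
lo_a, hi_a = simulate(cost_mu=1.6, cost_sigma=0.1, effect_mu=-4.0, effect_sigma=0.2)
lo_b, hi_b = simulate(cost_mu=1.6, cost_sigma=0.5, effect_mu=-4.0, effect_sigma=0.8)
print(f"Charity A 50% CI: ${lo_a:,.0f} - ${hi_a:,.0f} per outcome")
print(f"Charity B 50% CI: ${lo_b:,.0f} - ${hi_b:,.0f} per outcome")
```

The point of the output is the comparison: charity B's interval comes out several times wider than charity A's, which conveys the "we have much less confidence in it" message at a glance.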
Good question.
It's also helpful because the wording of my post was meant to convey that "experts tend to believe that the therapeutic alliance matters" (and not necessarily that I'm confident that that's the case).
One of the papers that I referenced did flag that most of the studies are observational rather than experimental, which does validate your concern. (I think it was Arnow & Steidtmann 2014 which said this; I don't know if a more recent paper sheds more light on this.)
I'm not planning to look into this topic in any depth, but perhaps someone more knowledgeable can give a more definitive answer.
I think it's useful for people to express opinions on the forum, but this post didn't quite hit the mark, in my view.
The post makes a number of fairly strong claims, but some of them (including important ones) have little to no justification. Examples:
If you didn't want to lengthen the post by going over justifications which have already been made elsewhere, I think it would have been reasonable to link to other places where those claims have been justified.
I’ll go further and say that I think those two claims are widely believed by many in the AI safety world (in which I count myself) with a degree of confidence that goes way beyond what can be justified by any argument that has been provided by anyone, anywhere, and I think this is a huge epistemic failure of that part of the AI safety community.
I strongly downvoted the OP for making these broad, sweeping, controversial claims as if they are established fact and obviously correct, as opposed to one possible way the world could be which requires good argumen...
Here are a few quotes from your post (emphasis added):
...I ran into a quite unhealthy looking dog who was riddled with ticks. We spent half an hour taking the ticks out and by the time we were done with him, we knew we wouldn’t let him lie there....
We brought him there [to a shelter] right away, leaving him in a pen <...>. When we went away, it was with a bad feeling.
When we went back the next day, we were told that the dog had escaped. <...> We felt devastated
By now we had bonded with this dog <...> and mourned for the rest of the day.
My intuition says that people are probably already following the heuristic "if you don't like your therapist, try to get another one". I also haven't given much thought to the patient's/client's perspective on the therapeutic alliance.
I'm used to seeing many expert opinions on psychotherapy converge on the view that the type of therapy doesn't make much difference (at least as far as the evidence can tell us). I.e. it doesn't seem to matter much whether you choose CBT or IPT or whatever. The therapeutic alliance, on the other hand, does matter. Therapeutic alliance means something like "How well you get on with your therapist" (plus some related things).
I had a fleeting thought that perhaps the therapeutic alliance might be neglected. E.g. maybe there's a novel intervention which involv...
It's with a heavy heart that I find myself (a) spotting this post and (b) starting to read it. Rightly or wrongly, I'm not enjoying the community drama.
I feel like I just want to forget that I'd ever seen any of these posts, and just continue being kind and friendly to anyone I know who's involved in this.
This solution sounds like a crude kludge (shouldn't I be more truth-seeking than that? can't I be more thoughtful?). But I just don't think I have the energy to do better than that.
Great that you did this, really appreciate it.
I'm no expert on the biology, but my intuition would in any case have been that the effect size would be tiny/negligible for 6 weeks of supplementation, and that for non-trivial effects, you would need sustained supplementation over a longer time period.
Is there any reason to doubt my intuition on this?
Oh yes, that is weird. The impression I had was that Ilya might even have been behind Sam's ousting (based on rumours from the internet). I also understood that sacking Sam needed 4 of the 6 board members, and since two of the board members were Sam A and Greg B, that meant everyone else had to have voted for him to leave, including Ilya. Most confusing.
Bravo for writing this stuff up, glad to see that.
I actually didn't realise that this elephant was an elephant? Indeed, I had the impression that paid ads had already been used by other EA orgs (if memory serves correctly, by EAG, 80k, and SoGive), so as far as I was aware they were considered to have legitimacy.
I believe the financial system is well-positioned for "consistent pressure on companies". I have more to say on this based on my own work experience, so if anyone is interested feel free to reach out.
If we're only considering plant-based meat, and only looking out over the near term (say, 1-3 years), then the claims here seem reasonable. So much so that I'm surprised the PTC model is so popular.
It may look like your concerns also apply to other alternative proteins (e.g. lab-grown meat). I don't believe that's the case.
A summary based on the quotes which I included in a separate comment:
I'm also concerned about the internal strife within ISID/ProMED. I've copied and pasted some quotes below.
Here's an excerpt from the STATnews article that this post links to:
...Larry Madoff, who served as editor of the program from 2002 to 2021. In spring 2021, Madoff said he was “forced out” by the organization’s CEO, Linda MacKinnon, and Alison Holmes, then president of the ISID executive committee. A professor of infectious diseases at the University of Massachusetts, Madoff refers to himself as editor emeritus of ProMED, a title bestowed upon him by th
It seems that a central bottleneck for the fund is that a few key people are decision-makers, and they are very busy, which makes it hard to operate quickly at scale and be transparent.
When SoGive ran its grants programme last year, we tackled these problems by getting more junior people to help.
I.e. the structure was:
I was worried that this whole post might omit mission hedging and impact investing:
(a) an investor may wish to invest in equities for mission hedging reasons (e.g. scenarios where markets go up may be correlated with scenarios where more AI safety work is needed, or you might invest heavily in certain types of biotech firms, since their success might be correlated with pandemic risk work being needed)
(b) an investor can have impact on the entities they have a stake in through stewardship/engagement (sometimes referred to as investor activism). Roughly spea...
I've wondered about the interaction between far-UVC and immunity:
I was thinking along these same lines but for the skin microbiota... we are lagging behind in understanding this compared to the gut microbiota, but it seems like the diversity is pretty important to our overall health? It's probably only a risk worth considering for the "install it in all the offices" case rather than against using far-UVC in pandemic situations, but I guess research would be needed to assess the risks for skin disorders, or whatever else these microbiota might be important for?
I agree with Jason that the specific moral hazard of "people might move to flood-prone areas in order to get cash" seems unlikely to be a concern.
The moral hazard that I was thinking of when I read Robi Rahman's comment was "people who already live in flood-prone areas might be less inclined to invest in flood defences/move away/do other things in light of the information that floods may be coming".
Re your question: "I would be especially interested if you have ideas for other historical case studies that could inform the longtermist project." Here are a few ideas:
At the start of your post, you said, rather tantalisingly: "I believe that many of the learnings from the creation of climate risk financial regulation in the UK can be applied to AI regulation." Could you expand on this?
Also, I'm pleased you wrote this post :-)
This comment will focus on the specific approaches you set out, rather than the high level question, although I'm also interested in seeing comments from others on how difficult it is to solve alignment, and why.
The approach you've set out resembles Coherent Extrapolated Volition (CEV), which was described earlier by Yudkowsky (and discussed in Bostrom's Superintelligence). I'm not sure what the consensus is on CEV, but here are a few thoughts which I have in my head from when I thought about CEV (several years ago now).
I can also confirm that an early employee of W3W told me that supporting development work was one of the main original aims of W3W.
If I'm reading claim 3 correctly, are you saying that being a 10% GWWC pledger should be sufficient to get a spot at EAG, and this is true regardless of absolute donation amount?
That's much stronger than what I read it as. I think Sjir was saying something more like "if you turn up to a local EA event you should feel welcomed and like you are 'one of the gang' even if you only donate".
The purpose of EAG these days seems a bit murky to me, but it seems to me to be mostly for people who are highly engaged, and I think it's fair to say that if you just donate you are probably not highly engaged (although you might be).
At the outset, I had the same concern, however thus far it doesn't appear to have been a problem. It's possible that this may change in time, in which case we'll cross that bridge when we get there.
I think it would be easy for someone to confuse the two, but (as Matt_Sharp rightly indicated) the SoGive 18 months and the GiveWell 3 years are referring to different things.
The SoGive 18 month threshold refers to funds where there are no plans to use the money.
GiveWell is referring to money which is planned to be spent.
I fear you might be confusing "reserves" and "designated funds" (to use the parlance common in UK charity accounting).
Attracting senior staff members might be easier with high reserves, but I imagine it would be easier still if the charity "designated" some money to be used on the staff member's salary for (say) the next 3 years. SoGive's methodology is very liberal about this: the charity is at liberty to set reserves aside, or "designate" them for some purpose, even on a non-binding basis, and if the charity does this, SoGive totally ignores those funds when considering reserves.
Although we didn't run this post past Open Phil before publishing, we are in touch with Open Phil, and we do ask them for suggestions of places to direct the money we support.
If they were against what is being outlined here, I think they would have said so when we were in touch with them. Instead, they were helpful.
I can confirm that the username looks like it's associated with someone I know at NTI, and that the wording looks consistent with wording that I've seen from NTI, and overall I judge it very very likely that this is a legitimate comment from NTI.
Good question Yonatan. The "too rich" category has been around for a long time, but I think this is the first time it's been given much attention. As a result, we haven't thought hard about how it's worded. "Overfunded" may well convey what we want without having unwanted connotations. Thank you for the comment.
This is a potentially relevant point, thanks for raising it. NTI did allude to this when we spoke to them (as we discuss in section 3.1).
In determining our rating, a key thing we needed to work out is: does NTI have all this money for arbitrary reasons (e.g. they have a chunk of money leftover from previous work)? or do they have high reserves for good risk management reasons (e.g. the "reserves" aren't really reserves because they plan to spend them down)?
We believe that it's for arbitrary reasons because they told us that this was the case (see the refer...
I think it's important that Eliezer used the words "and not mention the obvious notion that" (emphasis added).
The use of the word "obvious" suggests that Eliezer thinks that Ted is either lying by not mentioning an obvious point, or so stupid that he shouldn't be contributing to the forum.
(Not that I'm a moderator, nor am I suggesting that my opinion should receive some special weight, just adding another...
I see some disagree votes on Ted's comment. My guess at what they mean:
"Ted, please don't be put off, Eliezer is being unnecessarily unkind. Your post was a useful contribution".
How did you decide to be a not-for-profit? I imagine that the evals/audit work will likely be very lucrative at some point?
Great to see people writing about this topic, thank you. Thank you also for reaching out to discuss and for sharing a draft with me in advance. I'm sorry I wasn't able to review it, I've been a bit under the weather of late.
As I'm still under the weather, I've only skimmed your post, so sorry if I've missed something. As this is a topic I'm interested in I would normally prefer to read more carefully. Some quick comments:
I think it's interesting that an impact investing fund is making the comparison to Givewell. This is far from widespread in the philanthropic world, and is even rarer in investing.
I predict that I probably wouldn't agree with the 3x claim if I scrutinised it properly.
I sympathise with the point made by Michael St Jules about quality of evidence, but I'm more worried about counterfactuals. I.e. if GIF had not made those investments, how likely is it that someone else would have?
I expect that answering this question overall (for all animals) is hard, but there exist specific animals for which it's (probably) easy. A chicken farmed in the most egregious factory farmed conditions likely has a materially negative quality of life (as you noted), but also has minimal impact on climate change. I'm not sure how to size the effects of chicken farming on cropland for feed, or the oversized-ness of the food system, so it's possible this example could be rendered more complex by that consideration. Avian flu can be nasty (it has been associated with mortality rates of c.50% in the past), so chickens seem likely to be a risk factor for pandemics.
Not sure if I missed it, but another factor might be AMR. (Antimicrobial resistance is a mechanism by which factory farming leads to pandemics, which you mention, but AMR causes other harms too.)
Someone told me that they had heard that OpenAI was training GPT-5.
The someone was the sort of person who would likely be in the know (but was not at OpenAI).
I'd prefer not to say more, because I don't know whether they are willing to have their identity stated in public.
Interesting that Sam Altman said "We are not currently training what will be GPT-5". I've certainly heard rumours to the contrary.
If the authors of this post haven't indicated what their star signs are, how do I know whether to believe what they say?