All of katriel's Comments + Replies

Do you know if other CGIAR centers contributed significantly to the Green Revolution or if it was only CIMMYT? I do not. 

3
Karthik Tadepalli
2mo
There were a bunch, most prominently IRRI in the Philippines - Table 1 in this paper lists all of them.

Presumably the people running these charities seek funding from EA sources, despite knowing that counterfactually the bulk of that money would otherwise go to AGF/GHDF/et al.

This presumption isn't always true. In 2019, at CSH we made a deliberate decision not to continue seeking funding from sources that would counterfactually donate to GiveWell top charities. 

4
Vasco Grilo
3mo
Thanks for clarifying, Katriel. For readers' reference, CSH stands for Charity Science Health.

On an earlier discussion of Nonlinear's practices, I wrote:

I worked closely with Kat for a year or so (2018-2019) when I was working at (and later leading) Charity Science Health. She's now a good friend.  

I considered Kat a good and ethical leader. I personally learned a lot from working with her. In her spending and life choices, she has shown considerable moral courage: paying herself only $12K/year, dropping out of college because she didn't think it passed an impact cost-benefit test. Obviously that doesn't preclude the possibility that she has

... (read more)
2
Morpheus_Trinity
7mo
As someone who has been a close friend of Kat for quite some time, do you believe your perspective sheds valuable light on this discussion, or might others see it as an effort to improve Kat's or Nonlinear's image? In the interest of transparency, can you confirm whether you knew about this situation before making this post? If so, did you look into the allegations independently? Lastly, have you received any requests or nudges from Kat or other members of the Nonlinear team to leave a favorable comment in this thread?

I'm unclear on how this comment speaks to the content of the post, which is compatible with Kat being a courageous, frugal, and dedicated friend and leader.

My positive experience seems very different from what is reported here.

Are you implying that you don't believe what's reported here, because it's very different, or something else?

At a very quick skim, I am confused about whether this post is arguing that:

  1. if fake meat were better than real meat in terms of each of price, taste, and convenience, many consumers would still buy real meat, or
  2. if fake meat were as good as real meat in terms of PTC overall, many consumers would still buy real meat.

(2) seems obvious intuitively. (1) would be surprising to me but it makes sense to point out any gaps in our evidence against it. 

1
Jacob_Peacock
6mo
Sorry I missed this—mostly (2), sometimes discussing (1).

FYI, EA Iran had quite an active group of mostly medical students in Tehran a year ago. They might have other things on their mind these days, but it could be worth connecting with them.

Giant congratulations! Are you open to working on neartermist projects? 

2
Jona
1y
Thanks! Yes, feel free to DM me, if relevant.

I worked closely with Kat for a year or so (2018-2019) when I was working at (and later leading) Charity Science Health. She's now a good friend.  

I considered Kat a good and ethical leader. I personally learned a lot from working with her. In her spending and life choices, she has shown considerable moral courage: paying herself only $12K/year, dropping out of college because she didn't think it passed an impact cost-benefit test. Obviously that doesn't preclude the possibility that she has willfully done harmful things, but I think willfully bad behavior by Kat Woods is quite unlikely, a priori.

With limited staff capacity, GiveWell also has to prioritize which CEA improvements to make, and added complexity can increase the risk of errors.

I have heard the claim that there were no professional ethicists among the authors of the Belmont Report. 

3
Davidmanheim
1y
Per HHS, "The Belmont Report... is the outgrowth of an intensive four-day period of discussions that were held in February 1976 at the Smithsonian Institution's Belmont Conference Center supplemented by the monthly deliberations of the Commission that were held over a period of nearly four years." Not sure who was part of the four-day discussion, but per that site, the commission included, among others: * Albert R. Jonsen, Ph.D., Associate Professor of Bioethics, University of California at San Francisco. * Karen Lebacqz, Ph.D., Associate Professor of Christian Ethics, Pacific School of Religion.

What do real existing bioethicists think of compensation for kidney donors? 

2
Devin Kalish
1y
I'm going to break my usual policy of not replying to comments anymore because I think this counts as a direct question.

My guess is that bioethicists, on average, believe similar things to what the general public believes, on average, but that either extreme is overrepresented (there will be more bioethicists in favor of a fully privatized kidney market, and also more bioethicists against all kidney donation), just based on my experience on other issues. I also suspect much of the controversy will be in the fine details rarely discussed by the public. As an example, if the purpose of payment is supposed to be reimbursement, should it be weighted by someone's income in order to directly reimburse their lost wages, or should it be a flat rate to avoid the inherently regressive nature of the weighting policy? If the latter, how should one decide which flat rate counts as the correct one for "reimbursement"?

That said, I really don't know; the topic hasn't come up much in conversation or readings, and I haven't informally polled anyone in the way I did with challenge trials. I know someone currently working on trying to set up a PhilPapers-like survey of bioethicists, so I hope that will shed some more light on issues like this if/when it comes out. Still, I hope this helps.

Heads up that it's still in the headline version - though I think as an average it's fine and useful to include. 

2
rosehadshar
2y
Thanks; I forgot about the headline version. I've now removed it.

Amazing. Quick comments on "how much is spent" (GDP). 

  • At first glance this looks like nominal GDP, not adjusted for changes in prices. That's more literally "how much is spent" but less informative about how people's welfare and capabilities are changing over time.

Can someone tell me when I should expect the next doubling, i.e. in what year should I expect daily global spending to exceed $526 billion? It feels complicated and important; I'm ignorant about what sensible projections are and how much uncertainty there is.

... (read more)
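For what it's worth, a minimal back-of-the-envelope sketch of the doubling arithmetic, assuming spending grows at a constant annual rate. The ~$263B/day starting figure is just half of the $526B target implied by "next doubling", and the 3% and 5% growth rates are illustrative assumptions, not projections:

```python
import math

current_daily = 263e9  # ~half of the $526B doubling target (assumption)
target_daily = 526e9

# Years to double at constant annual growth g: ln(target/current) / ln(1 + g)
for g in (0.03, 0.05):  # illustrative growth rates, not forecasts
    years = math.log(target_daily / current_daily) / math.log(1 + g)
    print(f"At {g:.0%}/yr growth, daily spending doubles in ~{years:.0f} years")

# ~23 years at 3%/yr, ~14 years at 5%/yr; whether you use nominal or
# price-adjusted growth (see the comment above) changes the answer a lot.
```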

Will any of the lectures/talks not be recorded? It would be nice if Swapcard could indicate this.

1
Kaleem
2y
Only talks in 205BC will be recorded

I think that complex cluelessness implies we should be very skeptical of interventions whose claim to cost-effectiveness is through their direct, proximate effects. As has been well argued elsewhere, the long-term effects of these actions probably dominate. 

In my reading, the 80,000 Hours article in the link does not fully support this claim. In the section "Can we actually influence the future," it identifies four ways actions today can influence the long-term future. But it doesn't provide a solid case about why most interventions would  influe... (read more)

I didn't see any molluscs here. Would you consider adding a mollusc?

Fabulous! This is extremely good to know and it's also quite a relief!

Do the expected values of the output probability distributions equal the point estimates that GiveWell gets from their non-probabilistic estimates? If not, how different are they?

More generally, are there any good write-ups about when and how the expected value of a model with multiple random variables differs from the same model filled out with the expected value of each of its random variables?

(I didn't find the answer skimming through, but it might be there already--sorry!)

3
cole_haus
5y
Short version: No, but they're close. I don't know of any write-ups unfortunately, but the linearity of expectation means that the two are equal if and (generally?) only if the model is linear.

Long version: When I run the Python versions of the models with point estimates, I get:

  Charity             Value/$
  GiveDirectly        0.0038
  END                 0.0211
  DTW                 0.0733
  SCI                 0.0370
  Sightsavers         0.0394
  Malaria Consortium  0.0316
  HKI                 0.0219
  AMF                 0.0240

The (mostly minor) deviations from the official GiveWell numbers are due to:

  1. Different handling of floating point numbers between Google Sheets and Python
  2. Rounded/truncated inputs
  3. A couple of models calculated the net present value of an annuity based on payments at the end of each period instead of the beginning; I never got around to implementing this
  4. Unknown errors

When I calculate the expected values of the probability distributions given the uniform input uncertainty, I get:

  Charity             Value/$
  GiveDirectly        0.0038
  END                 0.0204
  DTW                 0.0715
  SCI                 0.0354
  Sightsavers         0.0383
  Malaria Consortium  0.0300
  HKI                 0.0230
  AMF                 0.0231

I would generally call these values pretty close. It's worth noting, though, that the procedure I used to add uncertainty to inputs doesn't produce input distributions that have the original point estimate as their expected value. By creating a 90% CI at ±20% of the original value, the CI is centered around the point estimate, but since log-normal distributions aren't symmetric, the expected value is not precisely at the point estimate. That explains some of the discrepancy. The rest of the discrepancy is presumably from the non-linearity of the models (e.g. there are some logarithms in the models).

In general, the linearity of expectation means that the expected value of a linear model of multiple random variables is exactly equal to the linear model of the expected values. For non-linear models, no such rule holds. (The relatively modest discrepancy between the point estimates and the expected values suggests that th
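To make the two effects above concrete, here is a minimal sketch, separate from the models in the post: fitting a log-normal to a 90% CI at ±20% of a point estimate yields a mean close to, but not exactly at, the point estimate, and expectation passes through linear models but not through non-linear ones (Jensen's inequality). The point value of 10, the second input's CI, and the sample size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lognormal_from_90ci(lo, hi, size):
    """Samples from the log-normal whose 5th/95th percentiles are lo and hi."""
    z95 = 1.6448536269514722  # 95th percentile of the standard normal
    mu = (np.log(lo) + np.log(hi)) / 2
    sigma = (np.log(hi) - np.log(lo)) / (2 * z95)
    return rng.lognormal(mu, sigma, size)

point = 10.0  # arbitrary point estimate
x = lognormal_from_90ci(0.8 * point, 1.2 * point, 1_000_000)  # 90% CI at +/-20%
print(x.mean())  # ~9.87, not exactly 10: the CI is centered, the mean is not

# Linearity of expectation: a linear model evaluated at the expected inputs
# equals the expected value of the model.
y = lognormal_from_90ci(4.0, 6.0, 1_000_000)  # arbitrary second input
print(np.mean(2 * x + 3 * y), 2 * x.mean() + 3 * y.mean())  # essentially equal

# A non-linear model (a logarithm, as in some of the models) breaks this:
print(np.mean(np.log(x)), np.log(x.mean()))  # E[log X] < log E[X] (Jensen)
```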
1
Michael_Wiebe
5y
Yes, how does the posterior mode differ from GiveWell's point estimates, and how does this vary as a function of the input uncertainty (confidence interval length)?
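As a rough sketch of how this might vary with CI width (for a single log-normal input only, not GiveWell's or the post's models; the full output distribution would need simulation): a log-normal fitted to a 90% CI at ±w of a point estimate has a closed-form mode, median, and mean, and all three drift further below the point estimate as the CI widens, with the mode falling furthest. The point value of 10 and the CI half-widths other than ±20% are illustrative assumptions.

```python
import math

Z95 = 1.6448536269514722  # 95th percentile of the standard normal

def lognormal_summary(point, w):
    """Mode, median, and mean of the log-normal whose 90% CI is
    [point*(1-w), point*(1+w)]."""
    lo, hi = point * (1 - w), point * (1 + w)
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * Z95)
    return (math.exp(mu - sigma ** 2),       # mode
            math.exp(mu),                    # median
            math.exp(mu + sigma ** 2 / 2))   # mean

for w in (0.1, 0.2, 0.5):  # illustrative half-widths; only 0.2 matches the post
    mode, median, mean = lognormal_summary(10.0, w)
    print(f"±{w:.0%} CI: mode={mode:.2f}  median={median:.2f}  mean={mean:.2f}")

# ±10%: mode≈9.91, median≈9.95, mean≈9.97
# ±20%: mode≈9.65, median≈9.80, mean≈9.87
# ±50%: mode≈7.75, median≈8.66, mean≈9.16
```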