All of SamNolan's Comments + Replies

Sorry for the late comment, but I was wondering:

We think the 2018 FP estimate of 10 hen-years/$ is likely a slight underestimate. Across the different tabs on the spreadsheet, we model four scenarios: 1, 10, 30 and 100 hen-years affected per dollar.

Why do you think it's an underestimate?

Ahh, that makes sense. I think "250 hens a year" sounds like "250 hens/year" rather than "250 hens * years". That's probably where my mistake came from.

I would go so far as to say your interpretation is correct and the original text is wrong: it should read "hen-years", not "hens a year".

Because this comes up when googling street outreach, as President of EA Melbourne (the EA group that ran the above-mentioned event), I'd love to tell you how it went.

Interestingly, members of the public seem open to the ideas of effective altruism. However, the conversion rate is truly tiny: no one we saw that day came to any future event. In the end, we decided that this was not a worthwhile activity.

Some interesting notes, however:

  • People, especially in the current political climate (referring to Russia invading Ukraine here), are actually quite supportive o
... (read more)

This, sadly, will likely never happen, or at least not for a few years. It was never within Squiggle's scope, and Squiggle currently has much more critical issues to deal with before adding such a feature!

Thanks for checking out this post! This is an old one, and I'm no longer as interested in dimensional checking in models. However, I may come back to this project because I have a feeling it could be used to optimize Squiggle code as well as offer dimensional checking.

What do you mean by "Combines the strengths of"? To me, Squiggle is the successor of Guesstimate, and the strengths of Squiggle + the strengths of Guesstimate = the strengths of Squiggle? What features are you looking for?

When I started this project, Squiggle was not in a state where Pedant c... (read more)

1
Falk Lieder
1y
I made a typo. I meant to ask you about integrating the type-checking functionality of Pedant with the probabilistic modeling functionality of Squiggle. I think a version of Squiggle where each value has units that are propagated through the calculations would be very useful. This would allow the user to see whether the  final result has the right units.

Hey! Love the post. Just putting my comments here as they go.

Tl;dr: This seems to be a special case of the more general theory of Value of Information. There's a lot to be said about value of information, and there are a couple of parameter choices I would question.

The EA Forum supports both Math and Footnotes now! Would be lovely to see them included for readability.

I'm sure you're familiar with Value of Information. It has a tag on the EA Forum. It seems as if you have presumed the calculations around value of information (For instance, you have given a pr... (read more)
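A minimal Monte Carlo sketch of the kind of EVPI calculation mentioned here (in Python rather than Guesstimate; the lognormal prior over the new intervention's effectiveness is a made-up placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Made-up prior over the new intervention's cost-effectiveness,
# measured in multiples of the best existing intervention (= 1.0).
new = rng.lognormal(mean=0.0, sigma=1.0, size=n)
best_existing = 1.0

# Without more information, fund whichever option has the higher expected value;
# with perfect information, fund the better option in each possible world.
value_without_info = max(best_existing, new.mean())
value_with_perfect_info = np.maximum(best_existing, new).mean()

evpi = value_with_perfect_info - value_without_info
print(f"EVPI (in units of the best existing intervention's value): {evpi:.3f}")
```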

1
Falk Lieder
1y
Thank you, Sam! Yes, I am familiar with the Value of Information, and I am building on it in this project. I have added the "Value of Information" tag.

> I might be wrong, but I think this is assuming that this is the only research project that is happening. I could easily assume that EA spends more than 0.1% of its resources on identifying/evaluating new interventions. Although, I'm yet to know how to do the math with multiple research projects. It's currently a bit beyond me.

Yes, this argument assumes that the alternative to investing some funds into R&D is that all the funds are invested into existing interventions/charities. I intended to answer the question "When is it worthwhile for a grant maker, such as GiveWell, that currently does not fund any R&D projects, to invest in the development of new interventions at all?".

> Your "lower bound" is entirely of your own construction. It's derived from your declaration at the start that "investing c dollars into the research generates an intervention that is at least n times as effective as the best existing intervention with probability p". If I were to call your construction the "minimum value of information", it's possible to calculate the "expected value of [perfect|imperfect] information", which I feel might be a more useful number. Guesstimate can do this as well; I could provide an example if you'd like.

I have done that. Those analyses are reported farther down in this post and in follow-up posts.

> We have to remember that we are still uncertain about the cost-effectiveness of the new intervention, which means it would need to be expected to be more cost-effective even after considering all priors. This may increase or decrease. However, this is probably irrelevant to the argument.

Good point! One way to accommodate this is to add the cost of determining whether the new intervention is more cost-effective than the previous one to the research cost c.

And to add to this: very recently there was a post Quantifying the Uncertainty in AMF, which still seems a bit in the works, but I'm super excited for it!

My Hecking Goodness! This is the coolest thing I have seen in a long time! You've done a great job! I am literally popping with excitement and joy. There's a lot you can do once you've got this!

I'll have to go through the model with a fine-tooth comb (and look through Nuno's recommendations) and probably contribute a few changes, but I'm glad you got so much utility out of using Squiggle! I've got a couple of ideas on how to manage the multiple-demographics problem, but honestly I'd love to have a chat with you about next steps for these models.

2
NunoSempere
2y
+1

Hello! My goodness I love this! You've really written this in a super accessible way!

Some citations: I have previously Quantified the Uncertainty in the GiveDirectly CEA (using Squiggle). I believe the Happier Lives Institute has done the same thing, as did cole_haus, who didn't do an analysis but built a framework for uncertainty analysis (much like I think you did). I just posted a simple example of calculating the Value of Information on GiveWell models. There's a question about why GiveWell doesn't quantify uncertainty.

My partner Hannah currently has a g... (read more)

Hello! Thanks for showing interest in my post.

First of all, I don't represent GiveWell or anyone else but myself, so all of this is more or less speculation.

My best guess as to why GiveWell does not quantify uncertainty in their estimates is that the technology to do this is still somewhat primitive. The most mature candidate I see is Causal, but even then it's difficult to identify how one might do something like have multiple parallel analyses of the same program but in different countries. GiveWell has a lot of requirements that their host platform need... (read more)

6
Lorenzo Buonanno
2y
Wildly guessing, but I don't think it's a technological issue. GiveWell does publish upper and lower estimates for some of their analyses, at least they did for malnutrition interventions: https://docs.google.com/spreadsheets/d/1IdZLSBgEK46vc7cX9C7KnFgUcOk_M0UIYJ8go_DrvS0/edit#gid=1468241237 (see at the bottom, 4x to 19x cash). Many of their CEAs (e.g. New Incentives) are just one column. Even for the ones that have one column per country, they could have multiple sheets for upper and lower bounds.

I agree with your second point: I think GiveWell's mission is not informing many small donors anymore, but informing OpenPhil (and maybe other big players), and OpenPhil cares mostly about GiveWell's best guess about "what does the most good".

I disagree with the uncertainty being "not actually that high", or that moral uncertainties should be considered separately. Considering moral uncertainty, the impact can vary by orders of magnitude. See https://blog.givewell.org/2008/08/22/dalys-and-disagreement/ (very old blog post, but I think the main point still stands), and https://forum.effectivealtruism.org/posts/3h3mscSSTwGs6qbei/givewell-s-charity-recommendations-require-taking-a. I think many donors would be interested in seeing those kinds of uncertainties somewhere.
4
Karthik Tadepalli
2y
One useful takeaway would be to know whether some interventions are much more uncertain about their range, and if that says something about the strength of evidence. If AMF is 6-10x and deworming is 1-20x (where 1x is point estimate on cash transfer cost effectiveness), then deworming might have a higher point estimate of cost effectiveness than AMF. But the large uncertainty suggests that maybe this is because we have much less evidence and not because the true cost effectiveness is much larger. So a risk averse donor could prioritize AMF on certainty. In other words, we can favor more certain interventions, even within GiveWell top charities, because they are more robust to the risk that we have got it all wrong. They are less likely to be overturned by a new study. That seems pretty valuable. Also an order of magnitude is really large.
2
JoelMcGuire
2y
Adding uncertainty to a single intervention may not be too informative. Still, I think it's more informative than you imply for comparing interventions -- especially if you're considering other decision frameworks for allocating funds beyond giving your money to the one with the highest average cost-effectiveness.

E.g., if you have a framework where you allocate your money in proportion to the probability it has the highest cost-effectiveness, then uncertainty quantification would be essential. I'm not sure anyone supports a rule like this.

Another, potentially more real-world, example: imagine you're a grantmaker choosing between 10 interventions that are all 10x more cost-effective than GiveDirectly, but vary in uncertainty. If you're a Bayesian with more sceptical priors than the analysts, you will favour the relatively less uncertain analyses.

Really? What do you mean by practically? If we crunched the numbers, I guess there'd be a single-digit likelihood that GiveDirectly would be more impactful than AMF.
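A toy sketch of that "allocate in proportion to the probability of being best" rule (Python; the ten interventions and their uncertainty levels are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_interventions = 100_000, 10

# Ten hypothetical interventions, all ~10x GiveDirectly in expectation,
# but with different levels of uncertainty (sigma of the lognormal).
sigmas = np.linspace(0.1, 1.0, n_interventions)
samples = np.array([
    rng.lognormal(mean=np.log(10) - s**2 / 2, sigma=s, size=n_samples)
    for s in sigmas
])

# Probability that each intervention is the most cost-effective across draws.
p_best = np.bincount(samples.argmax(axis=0), minlength=n_interventions) / n_samples

# Under the rule above, funds would be allocated in proportion to these probabilities.
print(np.round(p_best, 3))
```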
5
Roddy MacSween
2y
That seems pretty high to me! When I've seen GiveDirectly used as a point of comparison for other global health/poverty charities, they're usually described as 1-10x more effective (i.e. people care about distinctions within one order of magnitude).
2
Vasco Grilo
2y
Great points, thanks! I would just like to emphasise your point that for other GiveWell top charities, we should expect a higher uncertainty than for GiveDirectly, especially the ones working on deworming. So modelling them could be more valuable.

Haha, I came up with that example as well. You're thinking about this in the same way I did!

I think saying that one is the "actual objective" is not very rigorous, although I'm saying this having made that same argument myself. It does answer a valid question of "how much money should one donate to get an expected 1 unit of good?" (which is also really easy to communicate; dollars per life saved is much easier to talk about than lives saved per dollar). I've been thinking about it for a while and have put a comment under Edo Arad's.

As for the second poin... (read more)

1
Lorenzo Buonanno
2y
I still don't think it's an error; I've added a comment with my perspective and am curious to hear your thoughts! Indeed, it was common feedback, but I don't fully understand it. Maybe we'll add a section on it to the post if we reach an agreement.

Thank you so much for the post! I might communicate it as:

People are asking the question "How much money do you have to donate to get an expected value of 1 unit of good?", which could be formulated as:

E(U(x)) = 1

where x is the amount you donate and U(x) is the amount of utility you get out of it.

In most cases, this is linear, so U(x) = k*x for some cost-effectiveness k, and E(U(x)) = E(k)*x.

Solving for x in this case gets x = 1/E(k), but the mistake is to solve it and get x = E(1/k).

Please correct me if this is a bad way to form... (read more)
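A small numerical sketch of how those two quantities come apart (Python; the lognormal distribution for the cost-effectiveness k is a made-up placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up uncertain cost-effectiveness k, in units of good per dollar.
k = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)

print(f"1/E(k) = {1 / k.mean():.3f}")    # dollars per expected unit of good
print(f"E(1/k) = {(1 / k).mean():.3f}")  # expected dollars-per-unit, a different number
```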

1
Vasco Grilo
2y
I think the question is:
* How can I do as much good as possible with C units of cost?

This corresponds to the problem of maximising E(U(C)), where U(c) is the utility achieved (via a certain intervention) for the cost c (which must not exceed C). If the budget C is small enough (thinking at the margin):
* U(C) = U'(0)*C, where U'(c) is the derivative of U with respect to cost.

Assuming U'(0) and C are independent, mean("effect"/"cost") equals mean("effect")/mean("cost"):
* mean("effect"/"cost") = E(U(C)/C) = E(U'(0)*C/C) = E(U'(0)).
* mean("effect")/mean("cost") = E(U(C))/E(C) = E(U'(0))*E(C)/E(C) (assuming independence between U'(0) and C) = E(U'(0)).

So it seems that, regardless of the metric we choose, we should maximise E(U'(0)), i.e. the expected marginal cost-effectiveness. However, U'(0) and C will not be independent for large C, so I think it is better to maximise mean("effect"/"cost").
1
Lorenzo Buonanno
2y
Thanks for commenting here, and thanks again for your initial feedback! I don't really have anything planned in this area; what would you be excited to see?
2
EdoArad
2y
nice explanation :) 

That's true! Eta could easily be something other than 1.5. In London, it was found to be 1.5; in 20 OECD countries, it was found to be about 1.4. James Snowden assumes 1.59.

I could, but currently don't, represent eta with actual uncertainty! This could be an improvement.

Now that I've realised this, I will remove the entire baseline consumption consideration, since, projecting forward, I assume GiveDirectly will just get better at selecting poor households to counteract the fact that they should be richer. Thanks for pointing this out!

2
EdoArad
2y
👍 A good time to mention that I think this is really great work, and nice spotting of gaps in the original analysis (even if this one specifically seems to have been addressed, it wasn't at all clear from the CEA spreadsheet) :)

Oh no, I've missed this consideration! I'll definitely fix this as soon as possible.

Would love to! I'm in communication about setting up an EA Funds grant to continue building these for other GiveWell charities. I'd also like to do this with ACE, but I'll need to communicate with them about it.

Hey Neil,

How is this different from EA CoLabs? This team is working to connect people with projects and needs as much help as they can get. Would it be worth joining them over starting a new project?

1
Neil Natarajan
2y
Hi Hazelfire, Thanks for pointing them out, we’ll definitely have a chat with them! It looks to me like they’re mostly focused on volunteering opportunities at pre-existing projects, whereas our main focus is going to be in helping people start / join something new - not necessarily volunteering. Our aim is to break down the barriers that would keep people from going into value-aligned jobs / full-time roles, where CoLabs appears to be mostly matching helping hands to projects that need help.

Maybe; your work there is definitely interesting.

However, I don't fully understand your project. Is it possible to refine a Cost Effectiveness Analysis from this? I'd probably need to see a worked example of your methodology before being convinced it could work.

1
Harrison Durland
2y
The purpose of the TUILS framework is to break down advantages and disadvantages into smaller but still collectively exhaustive pieces/questions (e.g., are the supposed benefits counterfactual, how likely is the desired state to materialize).

I'm not sure if you have a very particular definition for "Cost Effectiveness Analysis," but if you just mean calculating costs and benefits then yes: the whole point of the framework is to guide your reasoning through the major components of an advantage or disadvantage. There is a spectrum of formality in applying the TUILS framework, but the informal version basically treats the relationship between the factors as roughly multiplicative (e.g., anything times zero is zero; if you only solve half the problem you may only get half the originally claimed benefit, assuming there is a linear relationship).

I haven't fully sketched out a truly formal/rigorous version since I'm not a mathematician and I don't see that as the main value of the framework (which tends to be more about assumption checking, intuition pumping, concept grouping/harmonizing, and a few other things). However, if you're actually interested in the more-formal application, I could write out some of my rough thoughts thus far. (It's basically "imagine an n-dimensional graph, with one of the dimensions representing utility and everything else representing input variables/measure…")

In terms of example applications, I gave some simplified example applications in the post (see the subsection "Examples of the TUILS framework being applied"), but I haven't done any written, deep, formal analyses yet, since I haven't seen that as a valuable use of time, given that organic interest/attention didn't seem very high even in the simpler version. That being said, I've also used a less refined/generalized version of the framework many times orally in competitive policy debate (where it's referred to as the stock issues); in fact, probably more than 90% of policy debate rounds I w

Hello Michael!

Yes, I've heard of Idris (I don't know it, but I'm a fan; I'm looking into Coq for this project). I'm also already a massive fan of your work on CEAs; I believe I emailed you about it a while back.

I'm not sure I agree with you about the DSL implementation issue. You seem to be mainly citing development difficulties, whereas I would think that doing this may put a stop to some interesting features. It would definitely restrict the number of applications. For instance, I'm fully considering Pedant to be simply a serialization format for Causal.... (read more)

Hopefully Pedant ends up pretty much being a continuation and completion of Squiggle; that's the dream, anyway. Basically Squiggle plus more abstraction features, and more development time poured into it.

1
Falk Lieder
1y
When do you think a tool that combines the strengths of Squiggle and Guesstimate will become available? Given where you are at now, what do you think would be the fastest way to integrate the dimensional analysis capabilities of Pedant with the probabilistic modelling capabilities of Squiggle? How long would it take?

Causal is amazing, and if I could introduce Causal into this mix, this would save a lot of my time in developing, and I would be massively appreciative. It would likely help enable many of the things I'm trying to do.

I definitely was considering adding some form of exporting feature to Pedant at some point. I'm not sure that it's within the current scope/roadmap of Pedant, but maybe at some point in the future!

Thanks for your considerations!

Yes, I agree. I can very much add tuple style function application, and it will probably be more intuitive if I do so. It's just that the theory works out a lot easier if I do Haskell style functions.

It seems to be a priority, however. I've added an issue for it.

The web interface should let you write Pedant code without actually installing Pedant. Needing to install custom software is definitely a barrier.

Thanks for pointing that out! I just fixed it up.

For Improving Infrastructure around epistemics and forecasting, Ozzie or Nuno would likely be the best to answer this, so here I'm just trying to put myself in their mind. These ideas are a mixture of mine + a discussion with Ozzie.

I would say a clear opportunity would be to look into writing prediction functions, rather than just predictions. Say, for instance, "If SpaceX has a press release about an innovation to be released before 2025, then I estimate SpaceX will become a trillion-dollar company 5 years earlier". Having such fidelity makes... (read more)
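A rough sketch of what a prediction function could look like, in plain Python rather than any particular forecasting platform's notation (the baseline year is invented; the 5-year speedup is taken from the example above):

```python
def spacex_trillion_dollar_year(press_release_before_2025: bool) -> int:
    """Point forecast for the year SpaceX reaches a $1T valuation,
    conditional on whether the hypothetical press release happens before 2025."""
    baseline_year = 2040   # invented unconditional forecast
    speedup_years = 5      # "5 years earlier", from the example above
    return baseline_year - speedup_years if press_release_before_2025 else baseline_year

# The forecast updates automatically as the condition resolves:
print(spacex_trillion_dollar_year(True))   # 2035
print(spacex_trillion_dollar_year(False))  # 2040
```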

For Improving Infrastructure around Cost Effectiveness Analysis, my current project is pedant.

Pedant is a math DSL that's designed to make it easier to write cost-effectiveness analyses. It checks the calculations for things like dimensional violations, and will hopefully in the future let you calculate with uncertainties and explore cost-effectiveness calculations more graphically.
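This isn't Pedant syntax, but a minimal Python sketch of the kind of dimensional violation such a check is meant to catch (the quantities and units are made up):

```python
# Minimal dimension tracker: a value is a number plus a dict of unit exponents.
class Quantity:
    def __init__(self, value, units):
        self.value, self.units = value, dict(units)

    def __mul__(self, other):
        units = dict(self.units)
        for u, p in other.units.items():
            units[u] = units.get(u, 0) + p
        return Quantity(self.value * other.value, {u: p for u, p in units.items() if p})

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError(f"dimensional violation: {self.units} + {other.units}")
        return Quantity(self.value + other.value, self.units)

    def __repr__(self):
        return f"{self.value} {self.units}"

cost = Quantity(5000, {"usd": 1})
hens_per_dollar = Quantity(10, {"hen_year": 1, "usd": -1})

print(cost * hens_per_dollar)  # fine: 50000 hen-years
print(cost + hens_per_dollar)  # raises TypeError: dimensional violation
```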

I wouldn't say that there are people asking for cost-effectiveness analyses; it's more that they simply aren't done, or are of low quality, for a large number of EA causes. ... (read more)

"having people sell products where all proceeds go to charity" is different from simply earning to give as it uses this fact to market to a buyer. The idea is that I may be more willing to purchase a second hand book from someone else if I know that the proceeds go to an effective charity (although I find that this is a surprisingly weak motivator, in my experience people don't purchase things even if they know the money goes to an effective charity...).

I run a bookstore to this end that is currently not that successful, but that I really want to see become a ... (read more)

There are quite a few opportunities I see from looking around in EA. I am doing direct technical work for EA right now.

EA CoLabs

EA CoLabs itself can be framed as a technical problem. It's the problem of optimally matching different skillsets to different projects to maximise utility. You could definitely tackle it from a fun technical perspective (say, using the Hungarian Algorithm for matching, and using the Australian Skills Classification to describe skills). These, however, are just my ideas. I may be currently too busy with other things to properly invest... (read more)
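A toy sketch of the matching idea using the Hungarian algorithm via SciPy (the people, projects, and fit scores are all invented for illustration):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

people = ["Alice", "Bob", "Carol"]
projects = ["forecasting tool", "CEA model", "ops support"]

# Invented "fit" scores: how well each person's skills match each project.
fit = np.array([
    [0.9, 0.4, 0.1],
    [0.2, 0.8, 0.5],
    [0.3, 0.6, 0.7],
])

# The Hungarian algorithm minimises cost, so negate the fit scores to maximise them.
rows, cols = linear_sum_assignment(-fit)
for r, c in zip(rows, cols):
    print(f"{people[r]} -> {projects[c]} (fit {fit[r, c]})")
```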

8
Yonatan Cale
2y
(Strong upvote!) (Feel free to split up your reply into separate comments if you want)

> EA CoLabs

I'm part of the team there and I have a lot of thoughts around it; perhaps commenting here wouldn't be the best place.

> "having people sell products where all proceeds go to charity" / "the profits end up to Effective Charities"

How is this different from earning to give? (Or Founders Pledge?)

> Improving Infrastructure around Cost Effectiveness Analysis

Hearing things like this is why I posted this in the first place!! :D :D Could you tell me much more? Who has these needs? What do they look like? Would you like collaborators? (And if so, do you have some bar for their skill?)

> Improving Infrastructure around epistemics and forecasting

Same thing! Do you know of needs here?

This is currently just a prototype, with many, many bugs. I've actually joined the team at EA CoLabs, which is a proper application of the concepts here.

Thanks a lot!

If I were to flesh this out further, it would likely involve a way of proposing EA projects that we could then curate. The form would likely be accessible via the browser, but yes, it's currently just a very modest proof of concept.

I've been seeing you around and have loved some of your posts! The project is meant to try and find both highly skilled people and beginners in EA. I'm not sure what direction it needs to go in, as I kind of want to talk to the people who have proposed this idea in the past to try and get their thoughts on what it should look like. I should probably get in contact with them soon.

16
Answer by SamNolan
Aug 15, 2021

To me, The Uniting Church of Australia.

It's probably controversial to list a church, but I walked into the church and got a Bible study on how to effectively help people in global poverty, and absolutely loved it.

I definitely think that churches are a good place to start when looking for places similar to EA, simply because I find that communities around churches have a lot of what I call "intent to do good". Particularly, in my experience, they seem to be unusually disposed to help reduce global poverty.

Further, when coming into Christianity, I found that there... (read more)

4
Nathan Young
3y
For what it's worth, I don't think it's remotely controversial to list a church. Thanks for writing :)
8
Vilfredo's Ghost
3y
Strong endorse. Long before I came across EA as a movement, I had adopted the philosophical foundations of it for religious reasons, although the specific verses that struck me were not the ones about perfection, which sounds optional, but the greatest commandment, which sounds obligatory:

> Jesus replied: "'Love the Lord your God with all your heart and with all your soul and with all your mind.' This is the first and greatest commandment. And the second is like it: 'Love your neighbor as yourself.'" (Matthew 22:37-39)

The first didn't really sound actionable beyond state of mind, because God doesn't need anything, so the implication is that in practice all one's effort needs to go into the second. And if you actually love your neighbor as yourself, you naturally think about effectiveness, not just intentions.