MaximeCdS

39 karma · Joined Feb 2021

Comments (6)

Thanks for sharing! 

Can I ask why you recommend both a Kindle and a Remarkable 2? Do you think there's still a need for a Kindle if one has a Remarkable?

Thanks for your recommendations! Very much appreciated.

 

Your link for ordering vitamin B12 seems to point to a study instead. Do you have a specific brand recommendation?

  • I think this post makes a very good point in a very important conversation, namely that we can do better than our currently identified best interventions for development.
  • The argument is convincing, and I would like to see both more people working on growth-oriented interventions, and counter-arguments to this. 
  • As an economics PhD student, I expect this post may influence which topic I choose to work on during the dissertation phase. I think most EA economists at the start of their PhD would benefit from reading it.

> So let me know in the comments if you’re interested in a followup post on *how* to build models.

 

Yes please!

> Her choice to use multiple, independent probability functions itself seems arbitrary to me,...

I'm not sure what makes you think that. Prof. Greaves does state that rational agents may be required "to include all such equally-recommended credence functions in their representor". This feels a lot less arbitrary than deciding to pick a single prior among all those available and computing the expected value of your actions based on it.

> Instead of multiple independent probability functions, you could start with a set of probability distributions for each of the items you are uncertain about, and then calculate the joint probability distribution by combining all of those distributions. That'll give you a single probability density function on which you can base your decision.

I agree that you could do that, but it seems even more arbitrary! If you think that choosing a set of probability functions was arbitrary, then having a meta-probability distribution over your probability distributions seems even more arbitrary, unless I'm missing something. This doesn't seem to me like the kind of situation where going meta helps: intuitively, if someone is very unsure about what prior to use in the first place, they should probably also be unsure about coming up with a second-order probability distribution over their set of priors.
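To make the worry concrete, here is a minimal toy sketch (the numbers and payoffs are my own invented illustration, not anything from Greaves or from your comment): putting second-order weights over a set of candidate priors just mixes them into one precise prior again, so the question of where those weights come from is exactly the original question, moved up a level.

```python
import numpy as np

# Three candidate priors over a binary proposition H (e.g. "the intervention
# helps in the long run"), each consistent with the available evidence.
candidate_priors = np.array([0.2, 0.5, 0.8])   # P(H) under each candidate prior

# A second-order ("meta") distribution over which prior is right.
# Choosing these weights is just as unconstrained as choosing a single prior was.
meta_weights = np.array([0.3, 0.4, 0.3])

# The mixture collapses back into one precise credence.
mixed_credence = np.dot(meta_weights, candidate_priors)   # 0.50

# Expected value of an action paying +10 if H and -5 otherwise (toy stakes).
payoff_if_H, payoff_if_not_H = 10.0, -5.0
ev = mixed_credence * payoff_if_H + (1 - mixed_credence) * payoff_if_not_H

print(f"mixed prior P(H) = {mixed_credence:.2f}, EV = {ev:+.2f}")  # 0.50, +2.50
```

You do end up with a single number, but it inherits whatever arbitrariness went into the meta-weights.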

> You might need to use an improper prior, and in that case, they can be difficult to update on in some circumstances. I think these are a Bayesian, mathematical representation of what Greaves calls an "imprecise credence".

I do not think that's what Prof. Greaves means when she says "imprecise credence". This article from the Stanford Encyclopedia of Philosophy explains what that phrase means to philosophers. It also explains what a representor is better than I did.

> But I think the good news is that many times, your priors are not so imprecise that you can't assign some probability distribution, even if it is incredibly vague. So there may end up not being too many problems where we can't calculate expected long-term consequences for actions.

I think Prof. Greaves and Philip Trammell would disagree with that, which is why they're talking about cluelessness. For instance, Phil writes:

> Perhaps there is some sense in which my credences should be sharp (see e.g. Elga (2010)), but the inescapable fact is that they are not. There are obviously some objects that do not have expected values for the act of giving to Malaria Consortium. The mug on my desk right now is one of them. Upon immediately encountering the above problem, my brain is like the mug: just another object that does not have an expected value for the act of giving to Malaria Consortium. Nor is there any reason to think that an expected value must “really be there”, deep down, lurking in my subconscious. Lots of theorists, going back at least to Knight’s (1921) famous distinction between “risk” and “uncertainty”, have recognized this.

Hope this helps.

Hey! 

I think Hilary Greaves does a great job of explaining cluelessness in non-jargon terms in her most recent appearance on the 80K podcast.

As far as I understand it, cluelessness arises because we don't have sufficient evidence, so we're very unsure about what our credences should be, to the point where they feel (or maybe just are) arbitrary. In this case, you could still carry out the expected value calculation and opt for the most choice-worthy action, as you suggest. However, this seems unsatisfying because the credence function you use is arbitrary: given your level of evidence, you could very well have opted for another set of beliefs that would have led you to act differently.

Thus, one might argue that in order to be rational in this type of predicament, you have to consider several probability functions that are consistent with the evidence you have. In other words, you are required to have "imprecise credences" because you cannot determine in a principled manner which probability function you should use.  

As Hilary Greaves herself points out in the podcast mentioned above, if you're not troubled by this and you're acting by yourself, you can just compute the expected value; but issues arise when you try to coordinate with other agents who hold different, equally arbitrary beliefs. This is why it might be important to take cluelessness seriously.
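If a worked toy example is easier than the jargon, here is a minimal sketch (the credences and payoffs are my own invented numbers, not Greaves's): a representor is just a set of credence functions, and when their expected value verdicts disagree, no single recommendation drops out, and two agents who each arbitrarily picked one member would act differently.

```python
# Toy representor: each entry is P("the long-run outcome is good") under one
# credence function that is consistent with the (thin) evidence we have.
representor = [0.1, 0.3, 0.6, 0.9]

# Arbitrary stakes for the action under consideration.
payoff_good, payoff_bad = 10.0, -5.0

for p in representor:
    ev = p * payoff_good + (1 - p) * payoff_bad
    print(f"P(good) = {p:.1f}  ->  EV = {ev:+.2f}")

# Output:
#   P(good) = 0.1  ->  EV = -3.50
#   P(good) = 0.3  ->  EV = -0.50
#   P(good) = 0.6  ->  EV = +4.00
#   P(good) = 0.9  ->  EV = +8.50
# Some members of the representor recommend the action and others recommend
# against it, which is the sense in which the imprecise credence leaves the
# choice indeterminate.
```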

I hope this helps!