Tyner

Comments

Hi Edward,

You might be interested in the work of the Nonhuman Rights Project.  They are attempting to establish the legal and political frameworks to ensure that animals (e.g. tigers) will be treated well by people.

https://www.nonhumanrights.org

Thanks for writing this.

1. I don't think this is significant.  The word "consumption" is interchangeable with "purchasing" in economic contexts, and the word "marginal" is possibly superfluous.  However, I think there's an interpretation that makes sense here: an individual increases total suffering "at the margin" by virtue of their consumption.  That is, they are not responsible for the whole of the suffering, only for the marginal increase in suffering caused by their personal consumption.  The language is unclear, but I would not agree that it is a significant error (unless you consider lack of clarity or vagueness to be a significant mistake).

2-4. I agree with you.  I particularly appreciate the point about 'naive vs. non-naive'.

cheers

Maybe one way to address this would be separate posts?  The first raises the problems and shares the emotions; the second suggests particular actions that could help.

Could you explain further why funding diversity would exacerbate the unilateralist's curse?

Is it this one?

https://forum.effectivealtruism.org/posts/bXP7mtkK6WRS4QMFv/are-bad-people-really-unwelcome-in-ea

This was another discussion of EA/FIRE

https://forum.effectivealtruism.org/posts/j2ccaxmHcjiwGDs9T/ea-vs-fire-reconciling-these-two-movements

Below is a link to last year's Philanthropy 50.  It is US-only and ranks donors by amount given.

https://archive.ph/XFfEI

This sounds like a great project and I would really like to participate, but cannot make the commitment for that date span.  Is there a good way to stay in the loop for future cohorts?  Thanks!

Answer by Tyner · Oct 05, 2022

Does "calibrated probability assessment" training work?

In "How to Measure Anything" chapter 5, Douglas Hubbard describes the training he provides to individuals and organizations that want to improve their estimation skills.  He provides a sample test based on general-knowledge trivia, with questions like

 "What is the air distance from LA to NY?" 

for which the student is supposed to provide a 90% confidence interval.  There are also some true/false questions where you provide your level of confidence in the answer, e.g.

"Napoleon was born on Corsica".  

In the following few pages he describes some of the data he's collected about his trainees, implying that this sort of practice helps people become better estimators of various things, including forecasting the likelihood of future events.  For example, he describes CTOs making more accurate predictions about new technologies after completing the training.
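For concreteness, here is a minimal sketch of how calibration on a test like the one described above might be scored.  The questions, numbers, and scoring rules are illustrative assumptions on my part, not Hubbard's actual materials: interval questions are scored by hit rate against the stated 90% target, and true/false questions by the Brier score.

```python
def interval_hit_rate(intervals, truths):
    """Fraction of 90% confidence intervals containing the true value.
    A well-calibrated estimator should land near 0.90."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

def brier_score(confidences, outcomes):
    """Mean squared error of stated probabilities against 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(confidences, outcomes)) / len(outcomes)

# Illustrative made-up data: three 90% CIs for trivia quantities
# (the third interval misses its true value) ...
intervals = [(2000, 3000), (100, 200), (10, 50)]
truths = [2451, 180, 75]
print(interval_hit_rate(intervals, truths))  # 2/3, well below the 0.90 target

# ... and three true/false answers with stated confidence P(true).
confs = [0.9, 0.6, 0.8]
outcomes = [1, 0, 1]
print(brier_score(confs, outcomes))  # (0.01 + 0.36 + 0.04) / 3 ≈ 0.137
```

On real data you would compare the hit rate to 0.90 and the Brier score to the 0.25 chance baseline before and after training; whether improvement on trivia transfers to real forecasts is exactly the open question below.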

My question: Is there evidence that practice making probabilistic estimates about trivia improves people's ability to forecast non-trivial matters?  Have there been published studies?

I asked Dr. Hubbard these questions and he graciously replied, suggesting I check out his book (which cites only Kahneman and Tversky, 1980), the Wikipedia page (which in turn cites only his book and that same study), or read Superforecasting.

Thanks!

[note that this is a re-post of a question I asked before but didn't get an answer]
