Clara Torres Latorre 🔸

Postdoc @ CSIC
214 karma · Joined · Working (6-15 years) · Barcelona, Spain

Participation (2)

  • Completed the Introductory EA Virtual Program
  • Attended more than three meetings with a local EA group

Comments (55)

I think allowing this debate to happen would be a fantastic opportunity to put our money where our mouth is regarding not ignoring systemic issues:
https://80000hours.org/2020/08/misconceptions-effective-altruism/#misconception-3-effective-altruism-ignores-systemic-change

On the other hand, deciding that democratic backsliding is off limits, and not even trying to have a conversation about it, could (rightfully, in my view) be treated as evidence of EA being in an ivory tower and disconnected from the real world.

I was thinking the same: a bet resistant to something like COVID + rebound would be more in the spirit of the argument.

Maybe GDP growth over the previous all-time high?
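
Roughly what I have in mind, as a quick sketch with made-up index values; the point is that a rebound year scores zero even though its year-on-year growth looks large:

```python
# Sketch of the proposed bet metric, with made-up GDP index values:
# growth only counts when GDP exceeds its previous all-time high, so a
# COVID-style dip followed by a rebound to the old level scores zero.

gdp_by_year = {2019: 100.0, 2020: 92.0, 2021: 99.0, 2022: 103.0}

peak = None
for year in sorted(gdp_by_year):
    gdp = gdp_by_year[year]
    if peak is None:          # first year just sets the baseline peak
        peak = gdp
        continue
    growth_over_peak = max(0.0, (gdp - peak) / peak)
    print(f"{year}: growth over previous all-time high = {growth_over_peak:.1%}")
    peak = max(peak, gdp)
# 2020: 0.0%, 2021: 0.0% (rebound doesn't count), 2022: 3.0%
```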

I think you have a point. However, I strongly disagree with the framing of your post, for several reasons.

One is that you advertise your hedge fund here, which made me doubt the entire post.

Second, the link does not go to a mathematical paper but to the whitepapers section of your startup's site. That said, the first PDF there appears to contain the math behind your post.

Third, calling that PDF a mathematical proof is a stretch (at least from my point of view as a math researcher). Expressions like "it is plausible that" never belong in a mathematical proof.

And most importantly, the substance of the argument:

In your model, you assume that effort by allies depends on the actor's confidence signal (sigma), and that allies' contribution is monotonic (larger if the actor is more confident). I find this assumption questionable, since, from an ally/investor perspective, unwarranted high confidence can undermine trust.

Then you take the fact that the optimal signal (when optimizing for outcomes) is higher than the optimal forecast (when optimizing for accuracy) as an indication against calibration. I would take it as an indication for calibration, with possible actions (such as signaling) included as additional variables to optimize for success.

In my view, your model is a nice toy model to explain why, in certain situations, signaling more confidence than what would be accurate can be instrumental.
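
To make that concrete, here is a minimal sketch of the kind of toy model I have in mind; the functional forms and numbers are my own illustrative assumptions, not the ones from the whitepaper:

```python
# Minimal toy model (illustrative assumptions only):
# - p is the actor's honest forecast of success
# - s is the publicly announced confidence signal
# - allies' effort tracks s (the monotonicity assumption I question above)
# - effort raises the success probability, while overstating confidence
#   carries a trust/credibility cost proportional to (s - p)^2

p = 0.4   # honest forecast
k = 0.5   # how strongly allies' effort boosts the success probability
c = 0.3   # credibility penalty for deviating from the honest forecast

def expected_payoff(s):
    effort = s                                  # effort is monotonic in the signal
    success = min(1.0, p * (1 + k * effort))    # effort improves the odds of success
    return success - c * (s - p) ** 2           # net of the trust cost

signals = [i / 1000 for i in range(1001)]
best = max(signals, key=expected_payoff)
print(f"honest forecast p = {p}, payoff-maximising signal s* = {best:.2f}")
# With these numbers s* ≈ 0.73 > p = 0.4: overstating confidence is
# instrumentally useful, which says nothing about whether p itself is calibrated.
```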

Ironically, your post and your whitepaper do exactly what they recommend, using expressions like "demonstrate" and "proof" without properly acknowledging that most of the weight of the argument rests on the modelling assumptions.

How much time is this expected/recommended to take?

  1. It depends on what you count as meaningful earning potential.

    One of the big ideas I take from the old days of effective altruism is that strategically donating 10% of the median US salary can save more lives than becoming a doctor in the US does over one's career (see the back-of-the-envelope sketch after this list).

    The same logic applies to animal welfare, catastrophic risk reduction, and other priorities.

  2. A different question is whether you would be satisfied with having a normal job and donating 10% (or whatever percentage makes sense in your situation).
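
A back-of-the-envelope version of that first point, using round numbers that are rough assumptions rather than careful estimates:

```python
# Back-of-the-envelope version of the claim. Every figure below is a rough,
# round-number assumption for illustration, not a careful estimate.

median_us_salary = 60_000     # USD per year, roughly the right order of magnitude
donation_rate = 0.10
cost_per_life_saved = 5_000   # USD, ballpark often cited for top global-health charities
career_years = 40

lives_via_donations = median_us_salary * donation_rate * career_years / cost_per_life_saved
print(f"Lives saved via donations over a career: ~{lives_via_donations:.0f}")  # ~48

# The comparison point is a US doctor's *marginal* career impact (i.e. relative
# to whoever would have taken the job otherwise), which the analyses behind this
# claim put well below that number.
```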
     

Over the last decade, we should have invested more in community growth at the expense of research.

My answer is largely based on my view that short-timeline AI risk people are more dominant in the discourse than the credence I give their views would warrant; YMMV.

I would like to see more low-quality / unserious content, mainly to lower the barrier to entry for newcomers and make the forum more welcoming.

Very unsure if this is actually a good idea.

I appreciate the irony and see the value in this, but I'm afraid that you're going to be downvoted into oblivion because of your last paragraph.

"At high levels of uncertainty, common sense produces better outcomes than explicit modelling"
