All of Ales_Flidr's Comments + Replies

Thanks for an excellent summary of the literature, Hauke! This interview we did with Lant in 2017 touches on some of these ideas. We don't go as deep, but I think Lant makes some insightful points about the intellectual history of the debate, and I found it interesting to hear him think out loud. I recommend skipping to around minute 20. https://harvardeapodcast.com/2019/09/24/the-turing-test-9-lant-pritchett/amp/

From Critique of Pure Reason (h/t Daniel Filan, h/t Bryan Caplan): http://www.econlib.org/archives/2014/07/kant_on_betting.html

This is an excellent review, thanks! I really like (a) the visual metaphors in the bullet points and (b) the critical commentary with updates on replication. As someone who's only skimmed The Righteous Mind (it didn't feel worth reading after Moral Tribes), I got a really good picture of the core concepts from this review (which I typically don't, since it's really hard to compress a whole book efficiently). Thanks a lot :)

Yes, I forgot to add the GiveWell website, which was quite important in my early days. Their post on flow-through effects especially relieved some of my concerns that EA mental models may be too narrow.

Yes, I would also particularly recommend the early sections on metaethics. Later parts are also good if you actually want to pass the Ideological Turing Test against long-termism. He spends a lot of time with the person-affecting view :)

Primed for an EA approach to development econ by Singer's "Famine, Affluence, and Morality", Jeffrey Sachs, The White Man's Burden, and Poor Economics. The Sequences, Harry Potter and the Methods of Rationality, and Nick Beckstead's thesis were the most important in making me deeply interested in EA.

Ben Pace · 6y
I never read Nick's thesis. I'm curious if there are particular sections you can point to that might give me a sense of why it was influential on you? I have a vague sense that it's primarily mathematical population ethics calculations or something, and I'm guessing I might be wrong.

Thanks :) The idea behind the Ideological Turing Test is (a) to put epistemic rationality into the spotlight, (b) to see how good a model the guests have of the debate and how well they have considered the other side, which should help you think about how seriously to take their claims, and (c) we think it's kind of fun :)

Thanks for the suggestion! Sounds like a fun topic, will definitely think of potential guests when we get back to recording.

Thanks for the suggestions, Ben! We will look into them at our next org meeting tomorrow.

As for the T-shirts, we found someone who is willing to donate, but obviously the lower the costs, the better. And we are still looking for suggestions from EA about the design, so if anyone has ideas, please let us know!

Thank you, Tom! I will let you know in a few days what things look like and whether it is likely that we will need your backup.

Right, the main problem is that we finalized the date after most departments and funds finalized their budget, so we only managed to raise ~1k.

Thanks so much Evan, Harvard EA will greatly appreciate that. I've been planning on doing something like that for our semi-involved members but never got around to actually doing it.

Hey Seth,

Are you coordinating with FLI and FHI to have some division of labor? What would you identify as GCRI's main comparative advantage?

Best, Ales

SethBaum · 9y
Hi Ales, We are in regular contact with both FLI & FHI. FHI is more philosophical than GCRI. The most basic division of labor there is for FHI to develop fundamental theory and GCRI to make the ideas more applied. But this is a bit of a simplification, and the coordination there is informal.

With FLI, I can't yet point to any conceptual division of labor, but we're certainly in touch. Actually, I was just spending time with Max Tegmark over the weekend in NYC, and we had some nice conversations about that.

GCRI comes from the world of risk analysis. Tony Barrett and I (GCRI's co-founders) met at a Society for Risk Analysis conference. So at the core of GCRI's identity and skill set is rigorous risk analysis and risk management methodology. We're also good at synthesizing insights across disciplines and across risks, as in our integrated assessment, and at developing practical risk reduction interventions. Other people and other groups may also be good at some of this, but these are some of our strengths.

Thanks Rob, this is very useful. Even though there's a lot of overlap and a lot of people might have read it already, I'd also mention this great summary on LessWrong. Someone might find it helpful in combination with this article.

I've had a great experience combining Beeminder with Fitocracy, which is a very easy way to quantify and gamify exercise. Prior to that, I had trouble comparing, e.g., a run to a gym workout. It usually made me resort to only running, which is easy to quantify, even though I knew it was sub-optimal.

Not necessarily, but it's a risk management issue, so it seems like a good fit. Could be equally useful for other EA causes, though. I'll look at it after I'm done with my finals in a week or so.

Just stumbled upon this in Baron's Thinking and Deciding:

"For example, Breyer (1993) argues that the removal of asbestos from school buildings might, if it were done throughout the United States, cost about $100,000,000,000 and save about 400 lives over a period of 40 years. This comes to $250,000,000 per life saved. (And it might not save any lives at all in total, because it endangers the workers who do the removal.)" - Baron, J. (2008). Thinking and Deciding (p. 502). New York: Cambridge University Press.
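Breyer's figures check out as a simple back-of-the-envelope calculation; a minimal sketch (variable names are my own, just for illustration):

```python
# Back-of-the-envelope check of Breyer's asbestos-removal figures,
# as quoted in Baron's Thinking and Deciding.
total_cost = 100_000_000_000   # ~$100 billion for nationwide removal
lives_saved = 400              # estimated lives saved over 40 years

cost_per_life = total_cost / lives_saved
print(f"${cost_per_life:,.0f} per life saved")  # $250,000,000 per life saved
```

For comparison, this is several orders of magnitude above the cost-effectiveness thresholds typically discussed for top global health charities.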

Does anyone know more about the actu...

Peter Wildeford · 9y
Why "for x-risk" in particular?

I think this is a great idea! I myself am a huge fan of podcasts, as I have relatively large amounts of ear time. My impression is that this might be true for a lot of EAs and, more importantly, the non-EA target audience.

I was considering a podcast as a potential project for Harvard EA, but haven't found anyone suitable and don't think I would be a good fit.

As for GiveWell's conversations page, I wouldn't think of it as a substitute. The interviews are great, but I rarely find time to read them.

I second EconTalk as a good model. I would also recommend In Ou...

Robert_Wiblin · 9y
If I made half a dozen and set up the infrastructure, do you think someone from EA Harvard could be interested to take it over?

Technical question: can Harvard EA cross-post articles from our website? We mostly do interviews with people in academia, examples here: http://harvardea.org/blog/

Peter Wildeford · 9y
Looks fun to me!
RyanCarey · 9y
Post it. It's great to have more relevant content. Take note of whether people vote up your content so you know whether to post more like it.

Hi Geuss, thanks for sharing this, we (Harvard EA) will try to find out more about it.

[anonymous] · 10y
Great to hear!

"Do-bester" might have the inverse problem.

Thanks! We've been quite successful/lucky getting speakers. We're going to have another talk by Elie tonight. Later in the semester, we'll have George Church and Steven Pinker.

As for FLI, the main thing now is x-risk publicity (i.e., articles, editing Wikipedia to replace sci-fi with science, etc.), project prioritization, and conferences and panel discussions for academics and people working in AI. All of those are going really well, much faster than expected.

Sorry for the double-post; I figured it would be better to sign up with my real name.

Hi, I'm Ales, a second-year student at Harvard College and a prospective economics major interested in too many things.

I'm currently one of the co-presidents of Harvard College Effective Altruism [1], in which position I succeeded Ben Kuhn. We are currently working on making HCEA an established organization and it seems like we're getting near the critical mass of dedicated people to work on some really great projects. We're also helping to found a group at MIT and Tufts, and...