Richard Ren

BSE Computer Science + BS Economics student at the University of Pennsylvania's Jerome Fisher Management & Technology Dual-Degree program. Interested in climate longtermism, AI alignment, and cascading global risks.

I love EA frameworks and ideas. They've changed my worldview. But I love critiquing them even more.

Previously published research in Applied Physics Express (1st author), Physical Review Applied (2nd author), and the Air & Waste Management Association's 113th Annual Conference (1st author).

Please reach out if you're interested in climate change adaptation, technology, entrepreneurship, effective policy, environmental economics, or political theory. I'd also love to learn more about AI safety and biorisk mitigation.

Email: hi.richard.ren@gmail.com

LinkedIn: https://www.linkedin.com/in/richard-ren-sustainability-tech/


Comments

New cause area: training health workers to prevent newborn deaths

Thank you so much for this well-written article. I especially love the cost-effectiveness calculations and the comparison of newborn deaths with other EA cause areas – your proposal clearly makes sense as an alternate GiveWell cause area from a DALYs perspective.

As a student during the pandemic, I'm quite skeptical of online education – but on the other hand, the unit economics are too good for me to ignore. It only takes one decent, high-quality course that scales to generate an outsized return on investment.

Therefore, I'd love to know: how do you train people online effectively? And to the extent that previous health training courses exist, how effective have they been, and what are their shortcomings?

If it works out, I feel like this "med-ed-tech" model could work really well for many different health professions in developing countries, making an outsized impact. I'd also be curious to hear what would make certain professions easy to train online and which would be the most difficult relative to their impact.

Should longtermists focus more on climate resilience?

This is very fair criticism and I agree. 

For some reason, when writing about orders of magnitude, I was thinking of existential risks that may have a 0.1% or 1% chance of happening being multiplied into the 1-10% range (e.g. nuclear war). However, I wasn't considering many of the existential risks I was actually talking about (like biological and AI risks) - it'd be ridiculous for AI risk to be multiplied from 10% to 100%.

I think the estimate of a great power war increasing the total existential risk by 10% is much fairer than mine. In response to your feedback, I've modified my EA forum post to state that a total existential risk increase of 10% is a fair estimate given expected climate politics scenarios, citing Toby Ord's estimates of existential risk increase under global power conflict.
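To make the distinction concrete, here's a rough illustration (assuming Ord's roughly 1-in-6 estimate of total existential risk this century as the baseline; that figure is an assumption used purely for illustration, not something from my post):

$0.167 \times 1.10 \approx 0.183$ (a 10% relative increase takes total risk from ~16.7% to ~18.3%)

$0.10 \times 10 = 1.00$ (an order-of-magnitude multiplier would take a 10% risk all the way to 100%)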

Thanks a ton for the thoughtful feedback! It is greatly appreciated.

The AI Messiah

100% agree with this point, and it has helped me understand the original post more.

I feel that too often, EAs take current EA frameworks and ways of thinking for granted instead of questioning those frameworks and actively trying to identify flaws and built-in assumptions. Thinking through and questioning those perspectives is a good exercise in general, and it is also extremely helpful in strengthening the worldview that motivates the community.

Would love to see more people critique EA frameworks and conventional EA ideas in this forum - I believe there are plenty of flaws to be found.

The AI Messiah

Hey! I liked certain parts of this post and not others. I like the thoughtfulness with which you critique EA. Keep at it.

On your first point about the AI messiah: 

I think the key distinction is that there are many reasons to believe this argument about the dangers of AGI is correct. Even if many claims of a similar form are wrong, that doesn't exclude this specific claim from being right.

"Climate scientists keep telling us about how climate change is going to be so disastrous and we need to be prepared. But humanity has seen so many claims of this form and they've all been so wrong!"

Again, there is a lot of reason to believe that AGI will be dangerous, and a lot of reason to support the claim that we are not currently prepared for it. Without addressing that chain of logic directly, I'm not convinced by this argument.

On your second point about the EA religious tendencies:

Because religious communities are among the most common communities we see, there are obviously going to be parallels between religious communities and EA.

Some of these analogies hold; others not so much. We, too, want to build community, network, and learn from each other. I'd love for you to point at specific examples of things EAs do, from holding conferences to running EA university groups, that are ineffective or unnecessary.

On the greater point of EA perhaps becoming too groupthink-y, which I think may be warranted:

I think a key distinction is that EA has a healthy level of debate, disagreement, and skepticism - while religions tend to demand blind faith in something unprovable. This ongoing debate about how to do the most good is what I personally find most valuable in the community - and I hope this spirit never dies.

Keep on critiquing EA. I think such critiques are extremely valuable. Thanks for writing this.

Should longtermists focus more on climate resilience?

Thanks a ton for your comment! I'm planning to write a follow-up EA forum post on cascading and interlinking effects - and I agree with you that, much of the time, EA frameworks only take into account first-order impacts while assuming linearity between cause areas.

Should longtermists focus more on climate resilience?

Thanks a ton Darren! I'd love to connect with you — I found the ideas you linked to interesting, and I appreciate you introducing me to them.

I completely agree with you — I think I ended up focusing on climate change specifically because it is the clearest, most well-studied manifestation of "Earth Systems Health" gone wrong and potentially causing existential risk. However, emphasizing a broader need to preserve the stability of Earth's systems is extremely valuable — and it encompasses climate change.

Reducing greenhouse gas emissions may be the most important issue currently, but given our present societal inability to interface with our environment without damaging it, many other environmental crises could manifest in the future and damage our ability to survive. A broader framework encompassing environmental preservation may be necessary to address all of these issues at once.

Should longtermists focus more on climate resilience?

Hey Johannes! I really appreciate the feedback, and I love the work you're doing through Founder's Pledge. I'm glad that you also believe sociopolitical existential risk factors are an important element worth considering.

I wish there were a lot more quantitative evidence on sociopolitical climate risk — I had to lean on a lot of qualitative expert sociopolitical analyses for this forum post. I acknowledge that many of the scenarios I discuss here lean on the pessimistic side. In scenarios with higher governmental competence and societal resilience than I predicted, it could be that very few of these x-risk-multiplying impacts manifest. It could also be that they manifest in ways I didn't initially predict in this forum post.

I therefore agree with the critique about the overly confident statements. I ended up changing quite a bit of the phrasing in my forum post as a result of your feedback — I absolutely agree that some of the phrasing was a little too certain and bold. The focus should have been on laying out possibilities rather than making statements about what would happen. Thank you for that feedback.

To address your criticism/feedback on IPCC climate reports:

I think it is well known that the IPCC's climate reports, being consensus-driven, will not err in favor of extreme effects, but will rather include climate effects agreed upon by the broader research community. There was also a recent Washington Post article I considered including, in which many notable climate scientists comment on the IPCC's conservative, consensus-driven nature and how this may shape its climate reports.

I cited the Scientific American article initially because it showed evidence of how a conservative, consensus-driven organization has historically underestimated climate impacts. The article highlights specific examples of IPCC predictions that proved conservative between 1990 and 2012 — for instance, a 2007 report in which the IPCC dramatically underestimated Arctic summer ice loss, or a 2001 report in which the IPCC's sea level predictions were 40% lower than the actual sea level rise.

However, I absolutely acknowledge that the accuracy of IPCC reports may have changed since 2012. I agree this evidence is not sufficient to warrant a statement that current IPCC climate reports lean conservative — so I've modified my statement to emphasize that certain past IPCC reports have leaned conservative. Thank you for the catch.

Overall, I appreciate your feedback — and I hope to speak to you sometime! I'd love to contribute to the research in the future quantifying the sociopolitical impacts of climate change, and I'm particularly interested in the work you do at Founder's Pledge.

(Note for transparency: This comment has been edited.)

Should longtermists focus more on climate resilience?

Acknowledgements to Esban Kran, Stian Grønlund, Liam Alexander, Pablo Rosado, Sebastian Engen, and many others for providing feedback and connecting me with helpful resources while I was writing this forum post. :-)

EAG is over, but don't delete Swapcard

Interested in the forthcoming successor to EA Hub - to what extent do EA organizations need software engineers to build these networking platforms? I (and probably many other college-student EAs over the summer) would be really interested in working on a software engineering project to create a Swapcard-and-EA-Hub-but-better.

It'd be cool to gather a team of part-time or interning CS/SWE college students and invest in them, given how much effort and money goes into EA conference events and how difficult and time-consuming post-conference follow-ups are.

Reflect on Your Career Aptitudes (Exercise)

I really, really like this approach! I like how this exercise doesn't box in your thinking - rather, it is a very simple and plain "What do you want to do, and how do you get there?" reflection. It leaves a lot of room for imagination, creativity, and interpretation, which will differ based on how you imagine solving problems in your specific cause area.
