All of Richard Ren's Comments + Replies

This comment seems to violate EA Forum norms, particularly by assuming bad faith on the part of the original poster (e.g. "these claims smell especially untrustworthy" and "I don't think these arguments are transparent"). The comments made here certainly offer some very creative interpretations of the original post.

I believe you're aware that signatories such as Anders Sandberg and SJ Beard are not advocating for "folding EA into Extinction Rebellion" -- an extremely outlandish claim and accusation.

Many of the comments made give untrue interpretations of th... (read more)

6
quinn
8mo
Trump supporters and homophobes are easy to rule out if you assume that the only way to be valid or useful in expectation is to go to college. Which, fine, whatever, but it does violate the spirit of the thing in a way that I'd hope is obvious. 

Thanks a ton for your critique!

your argument can be extended to any argument — any progress one makes, for instance, on disease prevention/malaria nets impacts the same outcome of economic wellbeing & thus transition + resilience against climate change.

I think a lot of these arguments remind me of the narrow vs. broad intervention framework, where narrow interventions are targeted at mitigating a specific type of risk, while broad interventions include generally positive interventions like economic wellbeing, malaria nets, etc. that have ... (read more)

Thanks a ton for your kind response (and for being the guy that points something out). :)

"Counterfactual" & "replaceability" work too and essentially mean the same thing, so I'm really choosing which beautiful fruit I prefer in this instance (it doesn't really matter).

I slightly prefer the word contingent because it feels less hypothetical and more like you're pulling a lever for impact in the future, which reflects the spirit I want to create in community building. It also seems to reflect uncertainty better: e.g. the ability to shift the path dependence... (read more)

2
Jonas Hallgren
2y
That makes sense, and I would tend to agree that the framing of contingency invokes more of a "what if I were to do this" feeling, which might be more conducive to people choosing to do more entrepreneurial thinking, which in turn seems to have higher impact.

Terribly sorry for the late reply! I didn't realize I missed replying to this comment. 

I appreciate your kind words, and I think your thoughts are very eloquent and ultimately tackle a core epistemic challenge:

to my knowledge we (EA, but also humanity) don't have many formal tools to deal with complexity/feedback loops/emergence, especially not at a global scale with so many different types of flows ... some time could be spent to scope what we can reasonably incorporate into a methodology that could deal with that complexity.

I recently wrote a new fo... (read more)

At various points in history, some dominant class - say capitalists, men, or white Europeans - have developed a set of concepts for describing and governing social reality which serve their own interests at the expense of overall welfare. As such these concepts come to embody a particular set of values. There are multiple ways this can happen - it could be a deliberate, pernicious act by members of the dominant class; it could be the result of unconscious biases of a group of researchers; or it could be the result of a systematic selection pressure, in whi

... (read more)

Thank you so much for writing this. It was very comprehensive and highlighted how the intersection of social values and technology may be overlooked in EA. 

I especially liked how the "societal friction, governance capacity, and democracy" section of the forum post ties together strengthening democracy, inter-group dynamics, disenfranchised groups, and long-term technological development risk through the path dependence framework; it seems like a very relevant & eloquent explanation for government competence that we see play out even in current eve... (read more)

I love your thoughts on this.

Need to do more thinking on whether this point is correct, but a lot of what you're saying about forging our own institutions reminds me of Abraham Rowe's forum post on EA critiques:

EA is neglecting trying to influence non-EA organizations, and this is becoming more detrimental to impact over time.

I’m assuming that EA is generally not missing huge opportunities for impact. As time goes on, theoretically many grants / decisions in the EA space ought to be becoming more effective, and closer to what the peak level of impact possi

... (read more)

Constantly expanding list of mistakes I made / things I would change in this post (am not editing at the moment because this is an EA criticism contest submission):

1)

Toby Ord wrote similarly that he preferred narrow over broad interventions because they can be targeted and thus most immediately effective without relying on too many causal steps.

I misinterpreted what Toby Ord was saying in The Precipice (page 268). He specifically claimed he preferred narrow/targeted over broad interventions because they can be targeted toward technological risks direc... (read more)

I disagree with the following:

But I doubt you can make a case that’s robustly compelling and is widely agreed upon, enough to prevent the dynamics I worry about above.

"I doubt you can make a case that's robustly compelling..."

Systemic cascading effects and path dependency might be coherent consequentialist frameworks (and catchphrases) for resolving a lot of your epistemic concerns, and this is something I want to explore further.

Naive consequentialism might incentivize you to lie to "do whatever it takes to do good", but the impacts of lying can cascad... (read more)

1
Sharmake
2y
There's another problem with the norm of lying for the greater good: it is very easy for biased human minds to convince themselves of the lie and become systematically distorted from their path. To put it in Sarah Constantin's words:

Another problem is you are much more vulnerable to Goodharting yourself, and eventually you will use it for motivated reasoning, where your pet causes can be lied about, and outsiders can't tell if the organization is actually doing what it claims.

While I think the deontological notion of honesty is too exploitable and naive for the 21st century, I definitely agree with Holden that lying should not be a norm, and that misleading people should also not be a norm but a regrettable exception.

I agree with the following statement:

We need the type of system you're talking about, but we also need resiliency built into the system now.

My low-confidence rationale for including a section on modeling, scenario analysis, and their helpfulness in building resiliency is twofold:

1. Targeting & informing on-the-ground efforts: Overlaying accurate climate agriculture projections on top of food trading systems can help us determine which trade flows will be most relied on in the future and target interventions where they would be most effective and neglec... (read more)

1
Noah Scales
2y
Yes, so gather information about what's happening and tell those who could be affected by changes later on. I proposed a reform to enhance food system resiliency for smaller regions and populations. What do you think of it?

Quick thoughts: People might be a lot more sympathetic to migrants (or refugees) who are of similar cultural backgrounds to them, prompting less social tension and political extremism.

As a notable example, the political effects of Arab vs. Ukrainian refugees on Europe have been markedly different.

1
Noah Scales
2y
Thanks! What about in the case that the number of refugees or internal migrants rises a lot? So rather than ten thousand, a million?

I didn't realize the phrase "climate refugees" implied involuntary cross-border migration and mistook it for a blanket term for climate migration. Thanks for the catch!

For the sake of fairness for the EA criticism contest, I won't edit the mistake now but maybe after the competition winners have been announced. If I were to edit & rephrase it, it'd look something like:

~216 million internally displaced climate migrants by 2050 (World Bank Groundswell Report), which can give a rough order-of-magnitude estimate for total cross-border climate migrants and

... (read more)

Hey Thomas! Love the feedback & follow-up from the conversation. Thanks for taking so much time to think this over -- this is really well-researched. :)

In response to your arguments: 

1 -> 2 is generally well established by the climate literature. I think the quote you provided gives good reasons why a climate war may not be perfectly rational; however, humans don't act in a perfectly rational way.

There are clear historical correlations between rainfall patterns and civil tensions, expert opinions on climate causing violent con... (read more)

Thank you so much for this well-written article. I especially love the cost-effectiveness calculations and the comparison of newborn deaths with other EA cause areas – your proposal clearly makes sense as an alternative GiveWell cause area from a DALYs perspective.

As a student during the pandemic, I'm quite skeptical of online education – but on the other hand, the unit economics are too good for me to ignore. It only takes one decent, high-quality course that scales to yield an outsized return on investment.

Therefore, I’d love to know: how do you... (read more)

3
Marshall
2y
Thanks for the comment! I agree with you - ensuring that the training works truly is the key. There are multiple lines of evidence showing that it's entirely possible to create effective online training for health workers - all of the technology exists. There's more to be said on this than can be covered well in one comment, but here are some thoughts.

* It's critical to have an understanding of why most online learning doesn't work well and deliberately design better solutions based upon that understanding.
* There's plenty of research demonstrating that online learning can increase knowledge and clinical skills of HWs living in LMICs (and other research demonstrating the same in high-income countries).
* Keep in mind that in-service HWs are working adults - they have greater motivations and capacities for self-regulated learning than they did when they were students, particularly when that learning is directly applicable to their work.

All that said, there's much work to do in terms of developing better trainings, evaluating them, and measuring their impacts on clinical practices and public health outcomes!

This is very fair criticism and I agree. 

For some reason, when writing "order of magnitude," I was thinking of existential risks that may have a 0.1% or 1% chance of happening being multiplied into the 1-10% range (e.g. nuclear war). However, I wasn't considering many of the existential risks I was actually talking about (like biosafety, AI safety, etc.) - it'd be ridiculous for AI safety risk to be multiplied from 10% to 100%.

I think the estimate of a great power war increasing total existential risk by 10% is much fairer than my estimate; bec... (read more)
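To make the two readings concrete, here is a rough worked comparison (the 10% baseline risk below is an assumed figure for illustration, not a number from the thread):

$$0.10 \times 10 = 1.00 \quad \text{(baseline risk multiplied tenfold: an absurd 100\%)}$$

$$0.10 \times 1.10 = 0.11 \quad \text{(total risk increased by 10\%, as in Ord's framing: 11\%)}$$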

This point has helped me understand the original post more.

I feel that too often, EAs take current EA frameworks and ways of thinking for granted instead of questioning those frameworks and actively trying to identify flaws and built-in assumptions. Thinking through and questioning those perspectives is a good exercise in general, and it also contributes meaningfully to the motivating worldview of the community.

I still don't believe that this necessarily means EAs "tend toward the religious" - there are probably several layers of nuance that are... (read more)

Hey! I liked certain parts of this post and not others. I appreciate the thoughtfulness with which you critique EA through this post.

On your first point about the AI messiah: 

I think the key distinction is that there are many reasons to believe this argument about the dangers of AGI is correct, though. Even if many claims with a similar form are wrong, that doesn't exclude this specific claim from being right.

"Climate scientists keep telling us about how climate change is going to be so disastrous and we need to be prepared. ... (read more)

Maintaining that healthy level of debate, disagreement, and skepticism is critical, but harder to do when an idea becomes more popular. I believe most of the early "converts" to AI Safety have carefully weighed the arguments and made a decision based on analysis of the evidence. But as AI Safety becomes a larger portion of EA, the idea will begin to spread for other, more "religious" reasons (e.g., social conformity, $'s, institutionalized recruiting/evangelization, leadership authority). 

As an example, I'd put the belief in prediction markets as an E... (read more)

9
ryancbriggs
2y
Thanks for the kind words Richard.

Re: your first point: I agree people have inside view reasons for believing in risk from AGI. My point was just that it's quite remarkable to believe that, sure, all those other times the god-like figure didn't show up, but that this time we're right. I realize this argument will probably sound unsatisfactory to many people. My main goal was not to try to persuade people away from focusing on AI risks, it was to point out that the claims being made are very messianic and that that is kind of interesting sociologically.

Re: your second point: I should perhaps have been clearer: I am not making a parallel to religion as a way of criticizing EA. I think religions are kind of amazing. They're one of the few human institutions that have been able to reproduce themselves and shape human behaviour in fairly consistent ways over thousands of years. That's an incredible accomplishment. We could learn from them.

Thanks a ton for your comment! I'm planning to write a follow-up EA Forum post on cascading and interlinking effects - and I agree with you that EA frameworks often only take into account first-order impacts while assuming linearity between cause areas.

Thanks a ton Darren! I'd love to connect with you — and I found the ideas you linked to interesting. Thanks for introducing me to them.

I completely agree with you — I think I ended up focusing on climate change specifically because it is the clearest, most well-studied manifestation of "Earth Systems Health" gone wrong and potentially causing existential risk. However, emphasizing a broader need to preserve the stability of Earth's systems is extremely valuable — and it encompasses climate change.

Reducing greenhouse gas emissions may be the most imp... (read more)

6
[anonymous]
2y
This paper on Assessing climate change’s contribution to global catastrophic risk uses the planetary boundaries framework! And this paper on Classifying global catastrophic risks might also be of interest :)

Hey Johannes! I really appreciate the feedback, and I love the work you all are doing through Founders Pledge. I appreciate that you also believe sociopolitical existential risk factors are an important element worth considering.

I wish there were a lot more quantitative evidence on sociopolitical climate risk — I had to lean on a lot of qualitative expert sociopolitical analyses for this forum post. I acknowledge that a lot of the scenarios I discuss here lean toward the pessimistic side. In scenarios where there is high(er) governmental competence and societ... (read more)

3
Denkenberger
2y
I agree that there should be more focus on resilience (thanks for mentioning ALLFED), and I also agree that we need to consider scenarios where leaders do not respond rationally. You may be aware of Toby Ord's discussion of existential risk factors in The Precipice, where he roughly estimates that a great power war might increase the total existential risk by 10% (page 176). You say:

So you're saying the impact of climate change is ~90 times as much as his estimate of the impact of great power war (a 900% increase versus a 10% increase in X risk). I think part of the issue is that you believe the world with climate change is significantly worse than the world is now. We agree that the world with climate change is worse than business as usual, but to claim it is worse than now means that climate change would overwhelm all the economic growth that would have occurred in the next century or so. I think this is hard to defend for expected climate change.

But this could be the case for the versions of climate change that ALLFED focuses on, such as abrupt regional climate change, extreme weather (including floods and droughts on multiple continents at the same time) causing around a 10% abrupt food production shortfall, or extreme global climate change of around 6°C or more. Still, I don't think it is plausible to multiply existential risks such as unaligned AGI or engineered pandemic by 10 because of these climate catastrophes.

Acknowledgements to Esben Kran, Stian Grønlund, Liam Alexander, Pablo Rosado, Sebastian Engen, and many others for providing feedback and connecting me with helpful resources while I was writing this forum post. :-)

I'm interested in the forthcoming successor to EA Hub - to what extent do EA organizations need software engineers to build these networking platforms? I (and probably many other college-student EAs over the summer) would be really interested in working on a software engineering project to create a Swapcard-and-EA-Hub-but-better.

It'd be cool to gather a team of part-time or interning CS/SWE college students and invest in them, given how much effort and money goes into EA conference events and yet how difficult and time-consuming post-conference follow-ups are.

7
Sarah Cheng
2y
We are hiring software engineers to help build some of this on the EA Forum. :)

[Note: The rest are my personal thoughts - we're a small team that may not have the capacity to consider interns.]

I've mentored many interns, and in my experience, it takes a lot of my time to provide a valuable learning experience for them. Unfortunately I think it's easy for this to end up as a net negative in terms of productivity. I did enjoy the experience, so I think it would be fun to do again, but it's a bit hard to justify with our team of ~3 engineers. I would be curious if you think there is something particularly valuable about interning at CEA, vs a tech company with experienced mentors. My guess is that the average student would get more out of the latter.

A couple of ways I could imagine this working are:
1. If someone were just interested in shadowing one of us for a day or two, to learn what it's like to work here
2. If someone had a specific feature / project in mind, and required relatively little oversight or feedback from us - our codebase is public, so you could try contributing in small ways first before attempting to build something larger

I really, really like this approach! This exercise doesn't box in your thinking - rather, it is a very simple and plain "What do you want to do? Now, how do you get there?" reflection. It leaves a lot of room for imagination, creativity, and interpretation that will differ based on how you imagine solving your specific cause area.