
jehan

179 karma · Joined Oct 2021 · wikiciv.org

Comments (8)

This is an excellent point that again highlights the problem of labeling something "Longtermist" when many expect it to transpire within their lifetimes.

Perhaps rather than a spectrum of "Respectable <-> Speculative", the label could be a more neutral (though more of a mouthful) "High Uncertainty Discounting <-> Low Uncertainty Discounting".
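To make the proposed label concrete, here is a minimal sketch of what an uncertainty discount could look like; the functional form and symbols are purely my own illustration, not anything from the original discussion:

$$\mathrm{EV}_{\text{adjusted}} = p \cdot V \cdot e^{-\delta t}$$

Here $V$ is the value of the outcome if it transpires, $p$ the credence that it does, $t$ the time horizon, and $\delta$ an uncertainty discount rate. "High Uncertainty Discounting" then corresponds to a large $\delta$ (far-off, speculative outcomes count for little today), while "Low Uncertainty Discounting" corresponds to a small $\delta$.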

What, if anything, does this imply about the hundreds of millions of insecticide-treated bednets we have helped distribute?

Contribute to Wikiciv.org - A wiki for rebuilding civilization's technology

Ways you can help:

- Write and edit articles
- Research and collect content
- Work on a port of Entitree to build a tech tree visualization

No coding experience needed! Wikiciv has a "What You See Is What You Get" editor; if you can edit a Google Doc, you can edit Wikiciv.

To me, it seems to be evidence that you can be a believer in a cause, but still become corrupt because you use that very belief to justify self-serving logic about how what you're doing really advances the cause.

Thus it would be even more relevant to EA because I think the risk of EAs becoming nakedly self-interested is low; the more likely failure mode is using EA to fool yourself and rationalize self-serving behavior.

Great question! When I write "fraternization" I mean any personal relationship, romantic or platonic. I realize the Wiki article is misleading in this regard; here's an example where the term is used to mean romantic relationships. Romantic interactions generally pose the most risk, so policies around them are common. This is the lowest-hanging fruit for EA as far as I'm concerned.

You probably already know this though and are referring to platonic interactions?

I agree that having lunch here and there with someone is fine. The issue is when close friends can influence EA activities (like access to events, grants, jobs, etc.). Here Apple defines close friendships as a "significant personal relationship" and instructs employees not to "conduct Apple business" with them.

With appropriate conflict-of-interest norms, I don't think EA would need to do anything close to regulating normal friendships. I'm mostly referring to the often intense social scene that emerges around EA communities and events. One example, cited in the post "Open EA Global":

"Waking up hungover, again, trying to make sense of the fact the most serious partying in my life now happens at conferences of people who talk obsessively about doing the most good."

This seems unprecedented for a mission-driven organization. The American Psychiatric Association probably doesn't have a policy around raging parties because there is a 0% chance that raging parties would be a fixture of APA conferences.

What I would like to see is for EA to professionalize a bit and implement some movement-protecting norms around 1) romantic relationships and 2) certain social gatherings; that would cover most of the relevant risk mitigation.

Sorry to be the fun-police, I just think the riskiest sorts of fun should happen a bit further from EA.

I think slow decline, cultural change, mission creep, etc. are harder to control, but I make the claim that the leading causes of a movement's sudden death are sex scandals and corruption scandals, which EA has not taken adequate steps to prevent: Chesterton Fences and EA’s X-risks

Yes, I have a similar position: early-AGI risk mostly runs through nuclear weapons. I wrote my thoughts on this here: When Bits Split Atoms

I spoke to Yonatan today regarding a new software project and found the conversation very helpful. I would highly recommend that anyone at any stage of a project or startup book a meeting with him; he took mine on quite short notice.

He had very specific insights about an early-stage project and coached me through the process of interviewing users. Expect direct feedback, which is exactly what makes explicit coaching different from talking with friends or users about a project.

I had read various advice about starting projects and startups before and thought I understood it, but even so, it's very likely you're not effectively putting it all into practice, and Yonatan will identify that for you.

Near the end of the meeting, after we had both agreed on the best next steps, he asked if I would be willing to do them right then. He waited patiently while I went and implemented what we had discussed.

That was the most concrete, and probably most positive, result of the meeting: completing something right then and there that I knew I needed to do (and had possibly been procrastinating on).

TL;DR: Yonatan helped me identify what I need to do, and then had me do it right then and there. A+