lincolnq
Useful perspective. (I'm excited about this debate because I think you're wrong, but feel free to stop responding anytime, obviously! You've already helped me a ton in clarifying my thoughts on this.)

First, what I agree with: I am excited by your last paragraph. My ideal EA community also helps people reason better, and the topics you listed definitely seem like part of the 'curriculum'. I only think the curriculum needs to be introduced gently, and with low expectations (e.g. in my envisioned EA world, roughly the bottom 75% of engaged EAs will probably not change their careers).

I even agree with this:

EA is a place that can collectively orient towards those crucial considerations and create incentives and systems that align with those crucial considerations

I have two main disagreements:

  1. most stuff that seems good is good
  2. siphoning people into AI without supporting those who remain leaves behind a hollow, inauthentic movement

Most stuff that seems good is good

You wrote:

lots of stuff turns out to be net-negative

I don't really agree with this, but I don't expect to make much progress in a debate. I interpret this as you being generally against 'progress studies' as well? I put a pretty low prior on someone thinking they are working on something useful/innovative/altruistic, putting a lot of thought and effort behind it, and it nonetheless ending up net negative.

Siphoning people into AI

A thing that I perceive is: EA is an important onramp into AI safety work. (How does this work? EA is broadly acceptable and uncontroversial, so it gets a lot of people talking about it positively. Then the EA onboarding process is shaped to attract and identify people who are good at working on weird ideas, and to push those people into AI.)

(To be clear, I may be misinterpreting you - you didn't say this explicitly but I kind of get it from the "orient towards those crucial considerations" thing and so I'm addressing it directly.)

This is an OK thing to do on its own, and I think it is a valid reason for the community to exist. But not the only one! I don't think it will work in the long run unless the community can exist on its own, independently of being a feeder into important projects. It has worked for a while "under the radar" of the recruitment process; I expect this to stop working for various reasons.

One major point of the changes I'm proposing is to make that role more explicit, and to make it one optional way that people can engage with EA.

Useful input. Can you give a bit more color about your feelings? In particular, is this a disagreement with the core direction being proposed, or just with something I wrote down that seems off? (If the latter, I'm not surprised; I wrote this quickly, trying to give a gist. If the former, I'm more surprised and interested in what I'm missing.)

Thanks for writing!

To be clear, I don't think we as a community should be scope insensitive. But here's the FAQ I would write about this...

  • Q: Does EA mean I should only work on the most important cause areas?
    • No! Being in EA means you choose to do good with your life, and to think about those choices. We hope that you'll choose to improve your life / career / donations in more-altruistic ways, and we might talk with you and discover ideas for making your altruistic life even better.
  • Q: does EA mean I should do or support [crazy thing X] to improve the world?
    • Probably not: if it sounds crazy to you, trust your reasoning! However, EA is a big umbrella and we're nurturing lots of weird ideas; some ideas that seem crazy to one person might make sense to another. We're committed to reasoning about ideas that might actually help the world even if they sound absurd at first. Contribute to this reasoning process and you might well make a big impact.
  • Q: Does EA's "big umbrella" mean that I should avoid criticizing people for not reaching their potential or doing as much good as I think they could?
    • This is very nuanced! You'll see lots of internal feedback and criticism in EA spaces. We do have a norm against loudly critiquing people's plans, unsolicited, for not doing enough good, but this is overridden in cases where a) the person has asked for the feedback first, or b) the person making the critique has a deep and nuanced understanding of the existing plan, as well as a strong relationship with the recipient of the feedback. Our advice, if you see something you want to critique, is to ask whether they want feedback before offering it.
  • Q: What about widely recommended, canonical public posts listing EA priorities, implicitly condemning anything that's not on the priority list?
    • ...yeah this feels like a big part of the problem to me. I think it makes sense to write up a standard disclaimer for such posts, saying "there's lots of good things not on this list" (GiveWell had something like this for a while I think?) but I don't know if it is enough.
  • Q: So is EA scope sensitive or not?
    • We are definitely scope sensitive. One of the best ways that reasoning can help figure out how to make the world better is by comparing different things, putting numbers on stuff, and/or figuring out other reasons why path A is better than path B.

We should retain awareness around optics, in good times and bad

I'd like to push back on this frame a bit. I almost never want to be thinking about "optics" as a category, but instead to focus on mitigating specific risks, some of which might be reputational.

See https://lesswrong.substack.com/p/pr-is-corrosive-reputation-is-not for a more in-depth explanation that I tend to agree with.

I don't mean to suggest never worrying about "optics" but I think a few of the things you cited in that category are miscategorized:

err on the side of registering charitable foundations to disburse grants; to avoid putting single donors on a huge pedestal inside and outside the community; and to try to avoid major sources of funding being publicly tied to any one individual or industry.

Registering a charitable entity to disburse grants mostly makes sense for legal reasons; avoiding funding sources being too concentrated is a good risk-mitigation strategy. We should do both of these but not primarily for optics reasons.

I agree with you that we should avoid putting single donors on a pedestal, and this is the one that makes the most sense to do for "optics" reasons; but it's also the most nuanced one, because we also want to incentivize such people to exist within the ecosystem, and so shouldn't pull back too hard from giving status to our heroes. One thing I would like us to be better at along this axis is identifying heroes who don't self-promote. SBF was doing a lot of self-promotion; a well-functioning movement wouldn't require it.

Huh, useful analogy. I do think cryptocurrency has potential, I just think the expected altruism-value of the whole thing is quite negative currently, and has been for 5+ years, and this was super not true in the early days of the internet, even during the crash years.

(I was a very well-connected teenager in 1999 and I remember some things about the early internet... I remember the browser wars, Netscape, AltaVista, then Google, eBay and PayPal, as well as the adware, email viruses, chain letters, worms, hoaxes, etc.)

The early internet was clearly awful in so many ways, but I think the benefits outweighed the drawbacks, at least for me as a kid: I have so many instances of getting value from the internet, mostly through education (searching to solve problems, discovering forums, sharing info, etc.).

Similarly, I was a fairly early adopter for crypto. Again, lots of technical promise. Seemed quite a cool community early on; I sipped 0.05 btc from the Bitcoin Faucet in early 2011, then gave in and bought $100 worth (at $8). Then I waited for useful stuff to come of it. And waited. I remember making arguments like "this is the first time computers can talk to each other and exchange value" to my friends and family. A few other coins seemed interesting -- Namecoin seemed useful but didn't pan out; I spent some time studying Stellar when it came out too, for similar "computers can send value" type reasons. I had started to get bored in 2015 and missed the launch of Ethereum, which was quite a promising thing in retrospect, but I didn't miss the DAO collapse. I think it was around this time (2016-ish) that I started thinking that maybe the whole ecosystem was net negative.

A sweeping condemnation of crypto based on FTX's failure seems about as prudent or rational as a sweeping condemnation of democracy

Ah, but (to be a bit of a devil's advocate here) a lot of people have been sweepingly condemning crypto since before this whole fiasco, we are just making more noise and have a higher chance to be heard now :)

At risk of derailing the thread, I would argue none of your #1-5 are panning out in any substantive way. (I know a lot about 1 and 2, and claim that the entire crypto industry is at least an order of magnitude less effective than Wave and Sendwave towards these goals.) And on the flip side, most crypto things turn out to be scams and are risky to get involved with.

To me, it seems entirely reasonable to collectively take this opportunity to say "oops" on crypto: agree that the technology has potential, but that finance is built on trust, and that crypto attracts too many get-rich-quick types and is thus toxic for the community, and bail.

I agree! As a founder, I promise to never engage in fraud, either personally or with my business, even if it seems like doing so would result in large amounts of money (or other benefits) to good things in the world. I also intend to discourage other people who ask my advice from making similar trade-offs.

This should obviously go without saying, and I was already operating this way, but it is worth writing down publicly that I think fraud is of course wrong, and not in line with how I operate or with the philosophy of EA.

Answer by lincolnq · Nov 10, 2022

I think you might now be overreacting to recent negative news; but before then you were probably overreacting to positive news. I do recommend building your own culture and brand for a company.

At Wave, we touch on EA in our mission, and we certainly did a little bit of hiring through EA-aligned venues; but our mission is both simple to explain and largely independent of the whims of the EA movement. I think that's the way it should be; it was a deliberate choice for us, and I think it has served us well.

I think 80k has tried to emphasize personal fit in its content, but something about the presentation seems to dominate that content, and I think it's somehow related to social dynamics. Something gets in the way of the "personal fit" message coming through; I think it's related to having "top recommended career paths". I don't know how to ameliorate this, or I would suggest it directly.

I'm sure this is frustrating to you too, since something like 90% of the guide is dedicated to making the point that personal fit is important, yet people seem to gloss over it.

One thing that could help would be eliminating the "top recommended career paths" part of the website entirely. That will be very unsatisfying to some readers, and possibly reduce the 'virality' of the entire project, so may be a net bad idea; but it would help with this particular problem. I am afraid I don't have any better ideas.
