(My personal opinion, not EV's:)
EV is winding down, and being on this board is quite a lot of work. This makes it very hard to recruit for! The flip side of the wind-down, though, is that the cultural leadership we are doing is a bit less impactful than it was, say, a year or two ago.
When we faced the decision of whether to keep searching or accept the candidates in front of us, I considered many factors but eventually agreed that it was ok to prioritize allowing the existing board members to leave (which they couldn't do until we found ...
I would like to point out that this is one of those things where n=1 is enough to improve people's lives (e.g., the placebo effect works in your favor), in the same way that I can improve my life by taking a weird supplement that isn't scientifically known to work but helps me when I take it.
For what it's worth, my life did seem to start going better (I started to feel more in touch with my emotional side) after becoming vegan.
While I broadly agree with Rocky's list, I want to push back a little on your points:
Re your (2): I've found that small entities are in a constant struggle for survival, and must move fast and focus on the most important problems unique to their ability to make a difference in the world. Small-seeming requirements like "new hires have to find their own housing" can easily make the difference between being able to move quickly vs. slowly on some project that makes or breaks the company. I think for new entities the risks of incurring large costs before you ...
You might want to check out some of Phil Trammell's reports, where he analyzes what he calls time preference (time discount rate) with respect to philanthropy: https://docs.google.com/document/d/1NcfTgZsqT9k30ngeQbappYyn-UO4vltjkm64n4or5r4/edit
Congrats on having invented something exciting!
Usually, the best way to get innovative new technology into the hands of beneficiaries quickly is to get a for-profit company to invest with a promise of making money. This can happen via licensing a patent to an existing manufacturer, or creating a whole startup company and raising venture capital, etc.
One of the things such investors want to see is a 'moat': something that this company can do that no other company can easily copy. A patent/exclusive license is a good way to create a moat.
There are some domai...
I'm a bit confused about this because "getting ambitious slowly" seems like one of those things where you might not be able to successfully fool yourself: once you can conceive that your true goal is to cure cancer, you are already "ambitious"; unless you're really good at fooling yourself, you will immediately view smaller goals as instrumental to the big one. It doesn't work to say "I'm going to get ambitious slowly."
What does work, though, is focusing on achievable goals! Like, I can say I want to cure cancer but then decide to focus on understanding metabolic pathways of the cell, or whatever. If you are saying that you need to focus on smaller stuff, then I am 100% in agreement.
I avoid reading, and don't usually respond to, comments on my posts, or replies to my own comments.
The reason is that it's emotionally intense to do so: after posting something on the EA Forum, I avoid checking the forum at all for ~24h or so (for fear of noticing replies in the 'recents' area, or changes in my karma), and after that I mainly skim for people flagging major errors or omissions that need my input to be resolved.
Lizka's You Don't Have to Respond to Every Comment talks about this a bit (and was enormously helpful for me) - I am not strongly av...
I think this is a useful question and I'm glad to be discussing this.
I agree with many of your concerns - and would love to see a more culturally-unified EA on the axis of how conscious we are of our own impact - but I also think you're failing to acknowledge something crucial: As much as EA is about altruism, it is also about focus on what's important, and your post doesn't acknowledge this as a potential trade-off for the folks you're discussing.
You'll find a lot of EA folks perceive climate change as a real problem but also perceive marginal carbon cost...
I'm interested in the discussion of whether in fact we are at a hinge of history, maybe this is a good comments section for that. I agree that Will's analysis barely scratches the surface and has some flaws.
Factors under consideration for me:
The GiveDirectly founders (Michael Faye and Paul Niehaus) also founded TapTapSend (https://techcrunch.com/2021/12/20/taptap-send-raises-65m-to-build-cross-border-remittances-focused-on-the-most-underserved-markets/) which competes with Sendwave to keep remittance prices down.
Yeah. I just joined the board so I don’t exactly know why, but we are definitely aware of missing this deadline and the Charity Commission is as well, and I think it is caused by the ongoing investigation.
This is correct. We ended up needing to resolve a couple issues related to the inquiry before we can file. We’ve stayed in touch with the Charity Commission about the delay.
It's a fair critique. I use "legible" in this way, and I don't really want to give it up, and I think it's not too bad jargon-wise because even non-EA people seem to understand it without needing much of a definition up front.
Your alternatives don't quite capture the idea right:
Why does it make sense for Rethink Priorities to host research related to all five of the listed focus areas within one research org? It seems like they have little in common (other than, I guess, all being popular EA topics)?
You said in your "Five years" post that you are planning to do more self-eval and impact assessments, and I strongly encourage this. What are the most realistic bits of evidence you could get from an impact report of Rethink Priorities which would cause you to dramatically update your strategy? (or, another generator: what are you most worried about learning from such assessments?)
How has your experience as co-CEO been? How do you share responsibilities? Would you recommend it to other orgs?
I’ve personally liked it. There have been several times when I’ve talked with my co-CEO Marcus about whether one of us should just become CEO and it’s never really made sense. We work well together, and the co-CEO dynamic creates a great balance between our strengths and weaknesses as leaders – Marcus leads the organization to be more deliberate and careful at the cost of potentially going too slowly, and I lead the organization to be more visionary at the cost of potentially being too chaotic.
Right now we split the organization very well where Marcus handles the portf...
Excellent piece! I agree with this mindset but regularly struggle to explain why it's motivating / good to think this way, and I think you've done a nice job.
I don’t believe this is an unbelievably terrible idea; it makes sense to do this in some circumstances. That said, take resentment buildup seriously! If you feel that you are the sort of person who has even a small chance of feeling resentful about this choice later on, it is probably not worth it. You need to feel unambiguously good about this decision in the short and long term.
Yeah, sorry, I wrote the comment quickly and "resources" was overloaded. My first reference to resources was intended to be money; the second was information like career guides and such.
I think the critical-info-in-private thing actually contributes a lot to centralization, because when the info leaks, the "decentralized people" have a high-salience moment where they realize that what's happening privately isn't what they thought was happening publicly; they feel slightly lied to or betrayed, and lose perceived empowerment and engagement.
The tractability of further centralisation seems low
I'm not sure yet about my overall take on the piece but I do quibble a bit with this; I think that there are lots of simple steps that CEA/Will/various central actors (possibly including me) could do, if we wished, to push towards centralization. Things like:
Thanks! I agree that we are already (kind of) doing most of these things. So the question is whether further centralisation is tractable (and desirable). Like I say, it seems to me the big thing is whether there’s someone, or some group of people, who really wants to make that further centralisation happen. (E.g. I don’t think I’d be the right person even if I wanted to do it.)
Some things I didn't understand from your bullet-point list:
Having most of the resources come from one place
By “resources” do you primarily mean funding? (I'll assume ...
I mostly agree, but would add that it seems totally okay if two orgs sometimes work on the same thing! It's easy to over-index on the simple existence of an item within scope and say "oh that's covered" and move on, without actually asking "is this need really being met in the world?" Competition is good in general, and I wouldn't want to overly discourage it.
Vague agree with the framing of questions vs. answers, but I feel worried that "answer-based communities" are quite divergent from the epistemic culture of EA. Like, religions are answer-based communities but a lot of EAs would dispute that EA is a religion or that it is prescriptive in that way.
Not sure how exactly this fits into what you wrote, but figured I should register it.
I wrote up my nutrition notes here, from my first year of being vegan: http://www.lincolnquirk.com/2023/06/02/vegan_nutrition.html
I want to push back a little against this. I care more about the epistemic climate than I do about the emotional climate. Ideally in most cases they don't trade off. Where they do, though, I would rather people prioritize the epistemic climate, since I think knowing what is true is incredibly core to EA, more than the motivational aspect of it!
Everything here is based on friends' recommendations and very lightweight research, I didn't do much original research and didn't measure my levels. I'll probably get around to measuring soon and I expect this plan to change a bit. Philosophically I have chosen a low effort/low risk plan which I think is sustainable for me.
I take creatine and b12 when I remember to take them, which tends to be on days I go to the gym and make a smoothie afterwards. I take D3 sporadically when I think of it during the winter months (although this winter I didn't bother for ...
Ok, my best idea is to highlight a Marxist theory of labor vs. capital at the small scale. I know this sounds very highbrow, but I think a distillation of it could work?
Give someone a loaf, they can eat it.
Teach them to bake, they can join the labor market and work hard to feed themselves.
Give them money for an oven, they can own the means of production.
Ok, I mostly agree with you, but let's reframe as a devil's advocate: what if "EA" is a shaky concept in the first place (doesn't carve reality at joints)? Would you then agree that borders should be redrawn to have a more coherent mission, even if that ends up cutting out some bits of the "old EA"?
Very clear - makes a point that I've been struggling to think about and explain to people. Thanks for writing this.
Great post! It inspired me to write this, because I worry that such posts might accidentally discourage others from working on this cause area. https://forum.effectivealtruism.org/posts/e8ZJvaiuxwQraG3yL/don-t-over-update-on-others-failures
(to be clear: I really appreciate postmortems and want more content like it!)
Hm, at a minimum: moving lots of money, making a big impact on the discussion around AI risk, and probably also making a pretty big impact on animal welfare advocacy.
My loose understanding of farmed animal advocacy is that something like half the money, and most of the leaders, are EA-aligned or EA-adjacent. And the moral value of their $s is very high. Like, you just see win after win every year, on a total budget across the entire field on the order of tens of millions.
A lot of organisations with totally awful ideas and norms have nonetheless ended up moving lots of money and persuading a lot of people. You can insert your favourite punching-bag pseudoscience movement or bad political party here. The OP is not saying that the norms of EA are worse than those organisations, just that they're not as good as they could be.
Nice. Thanks. Really well written, very clear language, and I think this is pointed in a pretty good direction. Overall I learned a lot.
I do have the sense it maybe proves too much -- i.e. if these critiques are all correct then I think it's surprising that EA is as successful as it is, and that raises alarm bells for me about the overall writeup.
I don't see you doing much acknowledging what might be good about the stuff that you critique -- for example, you critique the focus on individual rationality over e.g. deferring to external consensus. But it seem...
“I don't see you doing much acknowledging what might be good about the stuff that you critique”
I don’t think it’s important for criticisms to do this.
I think it’s fair to expect readers to view things on a spectrum, and interpret critiques as an argument in favour of moving in a certain direction along a spectrum, rather than going to the other extreme.
It’s interesting to me, because many entrepreneurs like myself get into entrepreneurship with (we sincerely believe) a goal of making the world a better place. Some are seemingly frauds. It is good to read this, to gain perspective on what not to do.
The problem with considering optics is that it’s chaotic. I think Wytham is a reasonable example. You might want a fancy space so you can have good optics - imagining that you need to convince fancy people of things, otherwise they won’t take you seriously. Or you might imagine that it looks too fancy, and then people won’t take you seriously because it looks like you’re spending too much money.
Pretty much everything in “PR” has weird nonlinear dynamics like this. I’m not going to say that it is completely unpredictable but I do think that it’s quite hard ...
The problem with considering optics is that it’s chaotic.
The world is chaotic, and everything EAs try to do has a largely unpredictable long-term effect because of complex dynamic interactions. We should try to think through the contingencies and make the best guess we can, but completely ignoring chaotic considerations just seems impossible.
It’s a better heuristic to focus on things which are actually good for the world, consistent with your values.
This sounds good in principle, but there are a ton of things that might conceivably be good-but-for-...
Thanks Patrick - glad to see you on EA forum.
Did you reach out to EA funders for VaccinateCA? From the linked article:
I called in favors and pled our case up and down the tech industry, and scraped together about $1.2 million in funding.
I have the sense that (at least today) a project with this level of prioritization, organizational competence and star power would be able to pull down 5x that amount with 1/10th the fundraising effort through the EA network. I think that was approximately still the case in early 2021.
(FWIW I've been a fan of yours sinc...
I didn't reach out to any EA funders, for somewhat quirky and contingent reasons, and I'm unfortunately going to be slightly elliptical here rather than saying everything I know:
At various points when I was raising money, I had miscalibrated understanding of how much money was committed or on the cusp of being committed. Since I was optimizing for speed-to-commitment, at most points I favored either my own network or networks I had perceived-high-quality intros to rather than attempting to light up funding sources which I perceived would not have a high pr...
[note: I don't work for CEA, but I did recently invest in a house to live in and do events in.] I wrote a piece on my blog about why. Here's what I wrote:
Real estate purchases can make sense for financial planning reasons in some cases. This money should not be considered to trade off against, e.g., donations to effective charities. Instead it should trade off against short-term rental budgets for retreats, conferences, etc. And because banks are willing to loan against real estate at very good rates, it is surprisingly cheap to invest in real estate, requ...
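The trade-off described above can be sketched numerically. All figures below are illustrative assumptions of mine, not numbers from the post; the point is only to show the shape of the comparison between owning a mortgaged events space and renting venues ad hoc.

```python
# Hypothetical back-of-envelope comparison: annual cash cost of owning a
# mortgaged events property vs. renting venues short-term. All inputs are
# made-up illustrative assumptions, not figures from the original post.

def annual_ownership_cost(price, down_pct, rate, upkeep_pct):
    """Rough annual cash cost of a mortgaged property: interest on the
    loan plus upkeep, ignoring principal paydown, appreciation, and
    taxes for simplicity."""
    loan = price * (1 - down_pct)
    return loan * rate + price * upkeep_pct

# Assumed numbers: $2M property, 20% down, 5% mortgage rate, 1% upkeep.
own = annual_ownership_cost(2_000_000, 0.20, 0.05, 0.01)

# Versus renting: assume 30 event-days per year at $5k/day.
rent = 30 * 5_000

print(f"own: ${own:,.0f}/yr vs rent: ${rent:,.0f}/yr")
```

Under these (entirely assumed) numbers, owning comes out cheaper per year than renting, which is the sense in which leverage makes real estate "surprisingly cheap"; with different utilization or rates, the comparison can easily flip the other way.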
Useful perspective. (I'm excited about this debate because I think you're wrong, but feel free to stop responding anytime obviously! You've already helped me a ton, to clarify my thoughts on this.)
First, what I agree with: I am excited by your last paragraph - my ideal EA community also helps people reason better, and the topics you listed definitely seem like part of the 'curriculum'. I only think it needs to be introduced gently, and with low expectations (e.g. in my envisioned EA world, the ~bottom 75% of engaged EAs will probably not change their caree...
Useful input. Can you give a bit more color about your feelings? In particular, whether this is a disagreement with the core direction being proposed vs. just something I wrote down that seems off? (If the latter, I wrote this quickly trying to give a gist, so I'm not surprised. If the former, I'm more surprised and interested in what I am missing.)
I am not fully sure, and it's a bit late. Here are some thoughts that came to mind on thinking more about this:
I think I do personally believe if you actually think hard about the impact, few things matter, and also that the world is confusing and lots of stuff turns out to be net-negative (like, I think if you take AI X-risk seriously a lot of stuff that seemed previously good in terms of accelerating technological progress now suddenly looks quite bad).
And so, I don't even know whether a community that just broadly encourages people to do thi...
Thanks for writing!
To be clear, I don't think we as a community should be scope insensitive. But here's the FAQ I would write about this...
We should retain awareness around optics, in good times and bad
I'd like to push back on this frame a bit. I almost never want to be thinking about "optics" as a category, but instead to focus on mitigating specific risks, some of which might be reputational.
See https://lesswrong.substack.com/p/pr-is-corrosive-reputation-is-not for a more in-depth explanation that I tend to agree with.
I don't mean to suggest never worrying about "optics" but I think a few of the things you cited in that category are miscategorized:
...err on the side of registering charita
Huh, useful analogy. I do think cryptocurrency has potential, I just think the expected altruism-value of the whole thing is quite negative currently, and has been for 5+ years, and this was super not true in the early days of the internet, even during the crash years.
(I was a very well-connected teenager in 1999 and I remember some things about the early internet... I remember the browser wars, Netscape, AltaVista, then Google, eBay and PayPal, as well as the adware, email viruses, chain letters, worms, hoaxes, etc.)
Early internet was clearly awful in so ...
A sweeping condemnation of crypto based on FTX's failure seems about as prudent or rational as a sweeping condemnation of democracy
Ah, but (to be a bit of a devil's advocate here) a lot of people have been sweepingly condemning crypto since before this whole fiasco, we are just making more noise and have a higher chance to be heard now :)
At risk of derailing the thread, I would argue none of your #1-5 are panning out in any substantive way. (I know a lot about 1 and 2, and claim that the entire crypto industry is at least an order of magnitude less effe...
I understand this skepticism. You're right that crypto still has a lot to prove in terms of large-scale utility, security, reliability, and regulatory compliance.
I would just caution that in the early 2000s, after the dot-com bubble, many people expressed the same kinds of skepticism about the Internet itself, and about all online businesses. From 1995 to 2002, there were too many scammers, sociopaths, and opportunists who had ridden the initial wave of hype; security protocols were not well-developed; regulation was patchy and unclear; the use cases were ...
I agree! As a founder, I promise to never engage in fraud, either personally or with my business, even if it seems like doing so would result in large amounts of money (or other benefits) to good things in the world. I also intend to discourage other people who ask my advice from making similar trade-offs.
This should obviously go without saying, and I already was operating this way, but it is worth writing down publicly that I think fraud is of course wrong, and is not in line with how I operate the philosophy of EA.
I endorse the sentiment but I think anyone who was planning to commit fraud would say the same thing, so I don't think that promise is particularly useful.
I think you might now be overreacting to recent negative news, but before then you were probably overreacting to positive news. I do recommend building your own culture and brand for a company.
At Wave, we touch on EA in our mission and certainly did a little bit of hiring through EA aligned venues; but our mission is both simple to explain and largely independent of the whims of EA movement stuff. I think that's the way it should be; it was a deliberate choice for us and I think has served us well.
I think 80k has tried to emphasize personal fit in the content, but something about the presentation seems to dominate the content, and I think that is somehow related to social dynamics. Something seems to get in the way of the "personal fit" message coming through; I think it is related to having "top recommended career paths". I don't know how to ameliorate this, or I would suggest it directly.
I'm sure this is frustrating to you too, since like 90% of the guide is dedicated to making the point that personal fit is important; and people seem to gloss ove...
(I submitted this to your form, figured I could also write it here for further discussion):
I think 80k has provided a clear and easy-to-consume career guide, which has influenced the conversation. There are a set of careers/cause areas which are legibly high priority and thus approved by 80k. This has the effect of nudging people into the approved careers and discouraging them from everything else.
I suspect this has both positive and negative effects. The positive ones are the first-order effects: hopefully people actually make better career choices and th...
Awesome - excited about this! Please let me know if you want me or someone from Wave to give a talk or say hello. (I am not likely to make it in person, but it's not crazy that someone from my team might be.)
These feel like classic arguments. Such arguments hold some weight. But (in agreement with Derek Shiller) I think they are more an argument against a very strong form of longtermism where you are actively sacrificing everything else we care about in order to make the future better. Such lines of reasoning seem bad-in-general because errors in reasoning magnify tremendously. Instead, we should mix lots of different worldviews into our moral strategy, preferring actions which are robustly good across many worldviews.
I also have noticed (in the process of eat...
There are so many different bags and brands available that you should specify more constraints if you want more personalized recommendations. Vegan is not too hard to satisfy: most luggage that's vegan won't necessarily mention it, so you just have to check the description for animal products to avoid (mostly leather/suede).
For me personally, my main carry is a Tom Bihn Techonaut 30 - it's big enough to carry 5+ days of clothing and my laptop and other gear without needing another bag, but lightweight enough that when I need more space, I am happy to carry it as just a small backpack alongside my Travelpro Maxlite 5.
I also like the /r/onebag subreddit.