Hmm, I don't entirely disagree, but I don't fully agree either:
Where I agree: I have indeed hired people on the opposite side of the world (e.g. Australia) for whom it was not a problem.
Where I disagree: working at weird hours is a skill, and one that is hard to test for in interviews. There is a reasonably high base rate (off the cuff: maybe 30 percent?) of candidates overconfidently claiming in interviews that they can meet a work schedule that is actually incredibly impractical for them, and who end up causing problems or needing to be fired later on. I would rather not take that collective risk -- to hire you and discover, 3 months in, that the schedule you signed up for is not practical for you.
One thing which may be newly possible in the last few years is getting satellite imagery of the country and using AI to count houses. With appropriate methodology, this is far more likely to be accurate than relying on bureaucratic reporting and/or projections, although there are the obvious pitfalls and probably some non-obvious ones too in backing this out to population. I believe MSF used to do something like this for their deployment areas but I haven't heard of it attempted at a countrywide scale.
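(To make the "backing this out to population" step concrete, here is a very rough sketch of what the arithmetic might look like; the building detector and the occupancy / household-size figures are hypothetical placeholders, not a description of any real pipeline.)

```python
# Rough sketch (assumptions, not a real pipeline): turn building counts from
# satellite image tiles into a crude population estimate.
from typing import Callable, Iterable


def estimate_population(
    tiles: Iterable[bytes],
    detect_buildings: Callable[[bytes], int],  # hypothetical model: image tile -> building count
    dwelling_fraction: float = 0.85,           # assumed share of buildings that are occupied dwellings
    persons_per_household: float = 4.5,        # assumed average household size (survey-derived)
) -> float:
    total_buildings = sum(detect_buildings(tile) for tile in tiles)
    return total_buildings * dwelling_fraction * persons_per_household


# The interval matters more than the point estimate: rerun with pessimistic
# and optimistic assumptions to get crude bounds.
# low  = estimate_population(tiles, model, dwelling_fraction=0.70, persons_per_household=3.5)
# high = estimate_population(tiles, model, dwelling_fraction=0.95, persons_per_household=5.5)
```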
I think the 2nd place result for Carrick is quite good for a 1st-time candidate with a 1st-time political action team behind him. There were many mistakes obviously, but deciding to run was not one of them IMO. No political action will result in certainty; the goal is ~always to move the needle or take a bunch of swings.
Yep... this stuff is simultaneously fairly obvious and surprisingly neglected. I have been looking at TB programs in various developing countries and the best policy proposals all include these components - although the details differ in some places. Implementation is a challenge.
I've been paying special attention to household contact screening for TB and preventive treatment (TPT; your policy #2). While TPT for household contacts is extraordinarily cost-effective, it is quite frustrating to make progress on, because of the reasons the report cites...
Hmm, I'll take another stab at this point, which has some mathematical basis but in the end is a fairly intuitive point:
Consider the per-person utility of society X, which has a generous margin of slack for each person. Compare it to the per-person utility of society Z, which has no margin of slack for each person.
My claim is something like: the per-person utility curves (of actual societies that could exist, given reasonable assumptions about resources) most likely show a steep drop from "quite positive" to "quite negative" between X and Z, because of...
One of the things that seems intuitively repugnant is the idea of "lives barely worth living". The word "barely" is doing a lot of work in driving my intuition of repugnancy, since a life "barely" worth living seems to imply some level of fragility -- "if something goes even slightly wrong, then that life is no longer worth living".
I think this may simply be a marketing problem though. Could we use some variation of "middle class"? This is essentially the standard of living that developed-world politics accepts as both sustainable and achievable, and sounds...
There are so many different bags and brands available that you should specify more constraints if you want more personalized recommendations. Vegan is not too hard to satisfy - most luggage that's vegan won't necessarily say so; you just have to check the description for animal products to avoid (mostly leather / suede).
For me personally, my main carry is a Tom Bihn Techonaut 30 - it's big enough to carry 5+ days of clothing and my laptop and other gear without needing another bag, but lightweight enough that when I need more space, I am happy to carry it as just a small backpack alongside my Travelpro Maxlite 5.
I also like the /r/onebag subreddit.
(My personal opinion, not EV's:)
EV is winding down, and being on this board is quite a lot of work. This makes it very hard to recruit for! The positive flip side of the wind-down, though, is that the cultural leadership we are doing is a bit less impactful than it was, say, a year or two ago.
When we faced the decision of whether to keep searching or accept the candidates in front of us, I considered many factors but eventually agreed that it was ok to prioritize allowing the existing board members to leave (which they couldn't do until we found ...
I would like to point out that this is one of those things where n=1 is enough to improve people's lives (e.g., the placebo effect works in your favor), in the same way that I can improve my life by taking a weird supplement that isn't scientifically known to work but helps me when I take it.
For what it's worth, my life did seem to start going better (I started to feel more in touch with my emotional side) after becoming vegan.
While I broadly agree with Rocky's list, I want to push back a little against your points:
Re your (2): I've found that small entities are in a constant struggle for survival, and must move fast and focus on the most important problems that they are uniquely able to address. Small-seeming requirements like "new hires have to find their own housing" can easily make the difference between moving quickly vs. slowly on some project that makes or breaks the company. I think for new entities the risks of incurring large costs before you ...
You might want to check out some of Phil Trammell's reports, where he analyzes what he calls time preference (time discount rate) with respect to philanthropy: https://docs.google.com/document/d/1NcfTgZsqT9k30ngeQbappYyn-UO4vltjkm64n4or5r4/edit
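(In case the jargon is unfamiliar: a discount rate $r$ simply down-weights value realized $t$ years from now, so the textbook present value is

$$\text{PV} = \frac{V_t}{(1+r)^t},$$

and the question Trammell analyzes is roughly what $r$ a philanthropist should use, which in turn determines how much giving to defer. The formula is just the standard definition, not anything specific to his report.)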
Congrats on having invented something exciting!
Usually, the best way to get an innovative new technology into the hands of beneficiaries quickly is to get a for-profit company to invest, with the promise of making money. This can happen via licensing a patent to an existing manufacturer, creating a whole startup company and raising venture capital, etc.
One of the things such investors want to see is a 'moat': something that this company can do that no other company can easily copy. A patent/exclusive license is a good way to create a moat.
There are some domai...
I'm a bit confused about this because "getting ambitious slowly" seems like one of those things where you might not be able to successfully fool yourself: once you can conceive that your true goal is to cure cancer, you are already "ambitious"; unless you're really good at fooling yourself, you will immediately view smaller goals as instrumental to the big one. It doesn't work to say "I'm going to get ambitious slowly."
What does work is focusing on achievable goals though! Like, I can say I want to cure cancer but then decide to focus on understanding metabolic pathways of the cell, or whatever. I think if you are saying that you need to focus on smaller stuff, then I am 100% in agreement.
I avoid reading, and don't usually respond to, comments on my posts, or replies to my own comments.
The reason is that it's emotionally intense to do so: after posting something on the EA Forum, I avoid checking the forum at all for ~24h or so (for fear of noticing replies in the 'recents' area, or changes in my karma), and after that I mainly skim for people flagging major errors or omissions that need my input to be resolved.
Lizka's "You Don't Have to Respond to Every Comment" talks about this a bit (and was enormously helpful for me) - I am not strongly av...
I think this is a useful question and I'm glad to be discussing this.
I agree with many of your concerns - and would love to see a more culturally-unified EA on the axis of how conscious we are of our own impact - but I also think you're failing to acknowledge something crucial: As much as EA is about altruism, it is also about focusing on what's important, and your post doesn't acknowledge this as a potential trade-off for the folks you're discussing.
You'll find a lot of EA folks perceive climate change as a real problem but also perceive marginal carbon cost...
I'm interested in the discussion of whether in fact we are at a hinge of history; maybe this is a good comments section for that. I agree that Will's analysis barely scratches the surface and has some flaws.
Factors under consideration for me:
The GiveDirectly founders (Michael Faye and Paul Niehaus) also founded TapTapSend (https://techcrunch.com/2021/12/20/taptap-send-raises-65m-to-build-cross-border-remittances-focused-on-the-most-underserved-markets/) which competes with Sendwave to keep remittance prices down.
It's a fair critique. I use "legible" in this way, and I don't really want to give it up; I think it's not too bad jargon-wise, because even non-EA people seem to understand it without needing much of a definition up front.
Your alternatives don't quite capture the idea right:
You said in your "Five years" post that you are planning to do more self-eval and impact assessments, and I strongly encourage this. What are the most realistic bits of evidence you could get from an impact report of Rethink Priorities which would cause you to dramatically update your strategy? (or, another generator: what are you most worried about learning from such assessments?)
I’ve personally liked it. There have been several times when I’ve talked with my co-CEO Marcus about whether one of us should just become CEO and it’s never really made sense. We work well together and the co-CEO dynamic creates a great balance between our pros and cons as leaders – Marcus leads the organization to be more deliberate and careful at the cost of potentially going too slowly and I lead the organization to be more visionary at the cost of potentially being too chaotic.
Right now we split the organization well: Marcus handles the portf...
I don’t believe this is an unbelievably terrible idea; it makes sense to do this in some circumstances. That said, take resentment buildup seriously! If you feel that you are the sort of person who has even a small chance of feeling resentful about this choice later on, it is probably not worth it. You need to feel unambiguously good about this decision in the short and long term.
Yeah, sorry, I wrote the comment quickly and "resources" was overloaded. My first reference to resources was intended to be money; the second was information like career guides and such.
I think the critical-info-in-private thing is actually super impactful towards centralization, because when the info leaks, the "decentralized people" have a high-salience moment where they realize that what's happening privately isn't what they thought was happening publicly; they feel slightly lied to or betrayed, and lose perceived empowerment and engagement.
"The tractability of further centralisation seems low"
I'm not sure yet about my overall take on the piece, but I do quibble a bit with this; I think that there are lots of simple steps that CEA/Will/various central actors (possibly including me) could take, if we wished, to push towards centralization. Things like:
Thanks! I agree that we are already (kind of) doing most of these things. So the question is whether further centralisation is tractable (and desirable). Like I say, it seems to me the big thing is if there’s someone, or some group of people, who really wants to make that further centralisation happen. (E.g. I don’t think I’d be the right person even if I wanted to do it.)
Some things I didn't understand from your bullet-point list:
"Having most of the resources come from one place"
By “resources” do you primarily mean funding? (I'll assume ...
I mostly agree, but would add that it seems totally okay if two orgs sometimes work on the same thing! It's easy to over-index on the simple existence of an item within scope and say "oh that's covered" and move on, without actually asking "is this need really being met in the world?" Competition is good in general, and I wouldn't want to overly discourage it.
I vaguely agree with the framing of questions vs. answers, but I feel worried that "answer-based communities" are quite divergent from the epistemic culture of EA. Like, religions are answer-based communities, but a lot of EAs would dispute that EA is a religion or that it is prescriptive in that way.
Not sure how exactly this fits into what you wrote, but figured I should register it.
I wrote up my nutrition notes here, from my first year of being vegan: http://www.lincolnquirk.com/2023/06/02/vegan_nutrition.html
I want to push back a little against this. I care more about the epistemic climate than I do about the emotional climate. Ideally in most cases they don't trade off. Where they do, though, I would rather people prioritize the epistemic climate, since I think knowing what is true is more central to EA than the motivational aspect!
Everything here is based on friends' recommendations and very lightweight research; I didn't do much original research and didn't measure my levels. I'll probably get around to measuring soon and I expect this plan to change a bit. Philosophically I have chosen a low effort/low risk plan which I think is sustainable for me.
I take creatine and B12 when I remember to take them, which tends to be on days I go to the gym and make a smoothie afterwards. I take D3 sporadically when I think of it during the winter months (although this winter I didn't bother for ...
Ok, my best idea is to highlight a Marxist theory of labor vs. capital at the small scale. I know this sounds very highbrow but I think a distillation of it could work?
Give someone a loaf, they can eat it.
Teach them to bake, they can join the labor market and work hard to feed themselves.
Give them money for an oven, they can own the means of production.
Ok, I mostly agree with you, but let me play devil's advocate and reframe: what if "EA" is a shaky concept in the first place (it doesn't carve reality at its joints)? Would you then agree that the borders should be redrawn to form a more coherent mission, even if that ends up cutting out some bits of the "old EA"?
Great post! It inspired me to write this, because I worry that such posts might accidentally discourage others from working on this cause area. https://forum.effectivealtruism.org/posts/e8ZJvaiuxwQraG3yL/don-t-over-update-on-others-failures
(to be clear: I really appreciate postmortems and want more content like it!)
My loose understanding of farmed animal advocacy is that something like half the money, and most of the leaders, are EA-aligned or EA-adjacent. And the moral value of their dollars is very high. Like, you just see win after win every year, on a total budget across the entire field on the order of tens of millions.
A lot of organisations with totally awful ideas and norms have nonetheless ended up moving lots of money and persuading a lot of people. You can insert your favourite punching bag pseudoscience movement or bad political party here. The OP is not saying that the norms of EA are worse than those organisations, just that they're not as good as they could be.
Nice. Thanks. Really well written, very clear language, and I think this is pointed in a pretty good direction. Overall I learned a lot.
I do have the sense it maybe proves too much -- i.e. if these critiques are all correct then I think it's surprising that EA is as successful as it is, and that raises alarm bells for me about the overall writeup.
I don't see you doing much acknowledging what might be good about the stuff that you critique -- for example, you critique the focus on individual rationality over e.g. deferring to external consensus. But it seem...
"I do have the sense it maybe proves too much -- i.e. if these critiques are all correct then I think it's surprising that EA is as successful as it is, and that raises alarm bells for me about the overall writeup"
Agreed. Chesterton's fence applies here.
“I don't see you doing much acknowledging what might be good about the stuff that you critique”
I don’t think it’s important for criticisms to do this.
I think it’s fair to expect readers to view things on a spectrum, and interpret critiques as an argument in favour of moving in a certain direction along a spectrum, rather than going to the other extreme.
The problem with considering optics is that it’s chaotic. I think Wytham is a reasonable example. You might want a fancy space so you can have good optics - imagining that you need to convince fancy people of things, otherwise they won’t take you seriously. Or you might imagine that it looks too fancy, and then people won’t take you seriously because it looks like you’re spending too much money.
Pretty much everything in “PR” has weird nonlinear dynamics like this. I’m not going to say that it is completely unpredictable but I do think that it’s quite hard ...
"The problem with considering optics is that it’s chaotic."
The world is chaotic, and everything EAs try to do has a largely unpredictable long-term effect because of complex dynamic interactions. We should try to think through the contingencies and make the best guess we can, but completely ignoring chaotic considerations just seems impossible.
"It’s a better heuristic to focus on things which are actually good for the world, consistent with your values."
This sounds good in principle, but there are a ton of things that might conceivably be good-but-for-...
Thanks Patrick - glad to see you on EA forum.
Did you reach out to EA funders for VaccinateCA? From the linked article:
I called in favors and pled our case up and down the tech industry, and scraped together about $1.2 million in funding.
I have the sense that (at least today) a project with this level of prioritization, organizational competence and star power would be able to pull down 5x that amount with 1/10th the fundraising effort through the EA network. I think that was approximately still the case in early 2021.
(FWIW I've been a fan of yours sinc...
I didn't reach out to any EA funders, for somewhat quirky and contingent reasons, and I'm unfortunately going to be slightly elliptical here rather than saying everything I know:
At various points when I was raising money, I had a miscalibrated understanding of how much money was committed or on the cusp of being committed. Since I was optimizing for speed-to-commitment, at most points I favored either my own network or networks I had perceived-high-quality intros to, rather than attempting to light up funding sources which I perceived would not have a high pr...
[note: I don't work for CEA, but I did recently invest in a house to live in and do events in.] I wrote a piece on my blog about why. Here's what I wrote:
Real estate purchases can make sense for financial planning reasons in some cases. This money should not be considered to trade off against, e.g., donations to effective charities. Instead it should trade off against short-term rental budgets for retreats, conferences, etc. And because banks are willing to loan against real estate at very good rates, it is surprisingly cheap to invest in real estate, requ...
Useful perspective. (I'm excited about this debate because I think you're wrong, but feel free to stop responding anytime obviously! You've already helped me a ton, to clarify my thoughts on this.)
First, what I agree with: I am excited by your last paragraph - my ideal EA community also helps people reason better, and the topics you listed definitely seem like part of the 'curriculum'. I only think it needs to be introduced gently, and with low expectations (e.g. in my envisioned EA world, the ~bottom 75% of engaged EAs will probably not change their caree...
Useful input. Can you give a bit more color about your feelings? In particular, is this a disagreement with the core direction being proposed, or just something I wrote down that seems off? (If the latter - I wrote this quickly, trying to give a gist, so I'm not surprised. If the former, I'm more surprised and interested in what I am missing.)
I am not fully sure, and it's a bit late. Here are some thoughts that came to mind on thinking more about this:
I think I do personally believe if you actually think hard about the impact, few things matter, and also that the world is confusing and lots of stuff turns out to be net-negative (like, I think if you take AI X-risk seriously a lot of stuff that seemed previously good in terms of accelerating technological progress now suddenly looks quite bad).
And so, I don't even know whether a community that just broadly encourages people to do thi...
Thanks for writing!
To be clear, I don't think we as a community should be scope insensitive. But here's the FAQ I would write about this...
"We should retain awareness around optics, in good times and bad"
I'd like to push back on this frame a bit. I almost never want to be thinking about "optics" as a category, but instead to focus on mitigating specific risks, some of which might be reputational.
See https://lesswrong.substack.com/p/pr-is-corrosive-reputation-is-not for a more in-depth explanation that I tend to agree with.
I don't mean to suggest never worrying about "optics" but I think a few of the things you cited in that category are miscategorized:
...err on the side of registering charita
I feel a lot more optimistic about this direction than you. It's a theory of change that you seem to think is unrealistic, when I think it is highly realistic, and thus you're focused on the downside risks when I think the upside is potentially huge and worthwhile.
My theory goes something like: we block all factory farm expansions -> people realize that we don't want factory farmed products in the UK at all -> public opinion shifts quickly -> multiple policy changes are now simultaneously possible: we ban new factory farms, start working on closin...
"we block all factory farm expansions -> people realize that we don't want factory farmed products in the UK at all -> public opinion shifts quickly -> multiple policy changes are now simultaneously possible"
I think this ToC is much less clean than it sounds.
- We block all factory farm expansions - somewhat unlikely. I don't think you can block them on welfare grounds; you have to find some other reason to block an expansion, like environmental grounds, so each fight is unique and the chance of winning every time is consequently lower.
...
Note: there wa