I guess I don't understand why w > x > y > z implies w − y = x − z iff w − x = y − z. Sorry if this is a standard result I've forgotten, but at first glance it's not totally obvious to me.
I didn't quite follow. What's the reasoning for claiming this?
From the definition of the four variables, the following equivalence can be deduced:

w − y = x − z ⟺ w − x = y − z
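For anyone wanting the missing step, the equivalence is a one-line rearrangement (the ordering w > x > y > z isn't actually needed for it):

```latex
w - y = x - z
\iff w - y - x + z = 0 \quad\text{(subtract $x - z$ from both sides)}
\iff w - x = y - z \quad\text{(regroup terms)}
```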
Well, I'd say we're all pragmatists whether we acknowledge it or not due to the problem of the criterion.
Not exactly based on EA org experience, but I think one of the biggest challenges orgs face is going from small enough that everyone can sit at the same table (people sometimes call these two-pizza teams, because you can feed everyone with two pizzas; in practice the number is somewhere between 8 and 12) to medium (fewer than ~150 people, i.e. up to the point at which you can still personally know everyone) to large.
EA orgs are most likely to face the first transition, small to medium. The big thing to know is that you'll have to find ways to take what happened and work... (read more)
Dislike the idea. Feels like this will change the character of the site in a way that's negative. It's a bit hard to say why, but part of the vibe of this place is that it's about ideas, not about people, and this will take it away from that direction; I think having more of an idea vibe than a personal-brand vibe is good for what this forum is for. There's plenty of other places where people can have a more personally identifiable or warmer experience of connecting with others.
If we did this I feel like it would be trying to optimize for something that's not, in my view, the primary purpose of the forum, and thus would make this site worse at being the EA Forum than without this feature.
I've been asking for this feature on LW. If we're not going to get it there, at least we can get it here!
Given the inclusion of space opera like Dune, I recommend including Vinge's work like A Fire Upon the Deep and A Deepness In the Sky. These deal with the long term consequences of intelligence explosion, albeit one in a world with slightly different physics than ours (or so it seems given our limited information; Vinge is careful to construct it in a way such that I think we can't be certain today our universe is not like the one he depicts in the books).
I'd also include Niven's Ringworld. Not obvious this is longtermist at first, but deep into the book that changes (not much more I can say without spoilers if you're hoping to read it).
So I remain unconvinced that there's a specific longtermist case for democracy, but I think there is a longtermist case for some kind of context in which longtermist work can happen.
What I have in mind is that I'm not sure democracy or liberal democracy is necessary to work on longtermist cause areas, but liberal democracy is creating an environment in which this work can get done. So there's an interesting question, then: what are the features of liberal democracy that enable longtermist work?
I ask this because I'm not sure that, for example, democracy is neces... (read more)
As best I can tell you don't seem to address the main reasons most organizations don't choose to outsource:
You could of course hand-wave here and try to say that since you propose an EA-oriented agency to serve EA orgs this would be less of an issue, but I'm skeptical, since if such a model worked I'd expect, for example, never to have had a job at a startup and instead to have worked for a large firm that specialized in providing tech services to startups. Given that there's a lot of mo... (read more)
I think the obvious challenge here is how to be more inclusive in the ways you suggest without destroying the thing that makes EA valuable. The trouble as I see it is that you only have 4-5 words to explain an idea to most people, and I'm not sure you can cram the level of nuance you're advocating for into that for EA.
I agree that when you first present EA to someone, there is a clear limitation on how much nuance you can squeeze in. For the sake of being concrete and down to earth, I don't see harm in giving examples from classic EA cause areas (giving the example of distributing bed nets to prevent malaria as a very cost-effective intervention is a great way to get people to start appreciating EA's attitude).
The problem I see is more in later stages of engagement with EA, when people already have a sense of what EA is but still get the impression (often unconsciously) that "if you really want to be part of EA then you need to work on one of the very specific EA cause areas".
This question on the EA Facebook group got some especially un-EA answers. This seems not great, given that many people may first interact with EA via Facebook. I tend to ignore this group and maybe others do the same, but if this post is representative then we probably need to put more effort in there to make sure comments are moderated or replied to, so it's at least clear who is speaking from an EA perspective and who isn't.
You want more good and less bad in the world? Would it be better if we had a little more good and a little less bad? If so, then we should care about the efficiency of our efforts to make the world better.
*note that I of course here mean something like efficiency that includes Pareto efficiency, not the narrow notion of efficiency we use every day; you could also say "effective", but you asked why giving should be effective, and we can ground effectiveness in Pareto efficiency across all dimensions we care about
I've been pretty skeptical that mental health is something EAs should focus on. One thing I see lacking in this report (apologies if it's there and I didn't find it) is a way of comparing it to alternatives: I don't think it's in question that mental health is a source of suffering for people; the question is whether it compares favorably to other issues.
For example I'd love something like QALY analysis on mental health that would allow us to compare it to other cause areas more directly.
Thanks for raising this - comparing things is a cause very close to my heart!
First, the report wasn't trying to compare the importance of mental health as a cause area to other things, so I understand that you didn't find that, because it wasn't central.
Second, the report (p8) does compare the impact of depression and anxiety to various other health conditions, as well as to debt, unemployment, and divorce in terms of 0-10 life satisfaction, a measure of subjective well-being (SWB) - the other main measure of SWB is happiness. We, as in HLI, are pret... (read more)
Having lived with someone who suffered chronic kidney stones, at least within the US, a huge problem in recent years has been the over-reaction to the so-called opioid crisis. The result has been a decreased willingness to actually treat what we might call chronic acute pain, like the kind that comes from kidney stones.
This is a somewhat technical distinction I'm making here. Kidney stone pain is acute in that it has a clear cause that can be remediated. However if someone produces kidney stones chronically (let's say at least one a month), they are chroni... (read more)
Regarding the difference in prevalence between chronic pain in men and women, there's a tendency, at least within the US medical system, to dismiss women's pain more often than men's. A good example of this is pain resulting from endometriosis, which is often dismissed or downplayed by doctors as "just bad period cramps" rather than a serious source of chronic pain. So too for many other sources of pain unique to women.
I don't have a source, but my experience is that most of this seems to be due to a variant of the typical mind fallacy: male doctors and some ... (read more)
My model is that the global angle is kind of boring: the drug war was pushed by the US, and I expect if the US ends it then other nations will either follow their example or at least drift in random directions with the US no longer imposing the drug war on them by threat of trade penalties.
I think this starts to get at questions of tractability, i.e. how neglected the area is contingent on tractability (and vice versa). In my mind this is one of the big challenges of any kind of policy work where there's already a decent number of folks in the space: you have to have reasonably high confidence that you can do better than everyone else is doing now (and not just that you have an idea for how to do better, but that you can actually succeed in executing better) in order for it to cross the bar of a sufficiently effective intervention (in expectation) to be worth working on.
I would expect this not to be very neglected, hence I would expect EAs to be able to have much impact here only if, for example, it's effectively neglected because the existing people pushing for an end to the drug war are unusually ineffective.
For example, there's already NORML, which has been working on the cannabis angle of this since the 1970s with decent success; Portugal has already ended the drug war locally; and Oregon recently decriminalized possession of drugs for personal use.
Getting involved feels a bit like getting involved in, say, marriage equality in... (read more)
I'm partially sympathetic to this. However, I think EAs have got a bit hung up on 'neglectedness' to the extent it's got in the way of clear thinking: if lots of people are doing something, and you can make them do it slightly better, then working on non-neglected things is promising. Really, I think you need to judge the 'facts on the ground', what you can do, and go from there. If there aren't ruthlessly impact-focused types working on a problem, that would be a good heuristic for some such people to get stuck in.
What was salient to me, compared to when I knew very little of the topic, is how much larger the expected value of drug legalisation now seems.
On the one hand I'm in favor of more housing. I live in the SF Bay Area where this is also a problem, and really insufficient housing is a problem for all of California, so I'm naturally supportive of efforts to address this problem. However, I'm not sure this project is a high priority for EAs.
This seems like something that's not especially neglected (lots of people are thinking about ways to improve the housing situation in American cities) and also unlikely to have high impact in relative terms (viz. globally rich Americans are not suffering as much due... (read more)
One, I'd argue that hits-based giving is a natural consequence of working through what using "high-quality evidence and careful reasoning to work out how to help others as much as possible" really means, since that statement doesn't say anything about excluding high-variance strategies. For example, many would say there's high-quality evidence about AI risk, lots of careful reasoning has been done to assess its impact on the long term future, and many have concluded that working on such things is likely to help others as much as possible, though we may ... (read more)
Googling, I primarily find the term "high-quality evidence" in association with randomised controlled trials. I think many would say there isn't any high-quality evidence regarding, e.g. AI risk.
I do like the idea of being able to construct an experiment to test naturalism. I think it's mistaken in that I doubt there are any facts about what is right and wrong to be discovered, by observing the world or otherwise, but currently I and anyone else who wants to talk about metaethics is forced to rely primarily on argumentation. Being able to run an experiment using minds different from our own seems quite compelling for testing a variety of metaethical hypotheses.
I'm also somewhat concerned because this seems like a clear case of a dual use intervention that makes life better for the animals but also confers benefits to the farmers that may ultimately result in more suffering rather than less by, for example, making chickens more palatable to consumers as "humanely farmed" (I'm guessing that's what is meant by "humane-washing") or making chicken production more profitable (either by humane-washing or by making the chickens produce a better quality meat product that is in higher demand).
I can't seem to find the previous posts at the moment, but I have this sense that this is not an isolated issue and that ACE has some serious problems given that it draws continued criticism, not for its core mission, but for the way it carries that mission out. Although I can't remember at the moment what that other criticism was, I recall thinking "wow, ACE needs to get it together" or something similar. Maybe it has learned from those things and gotten better, but I notice I'm developing a belief that ACE is failing at the "effective" part of effective altruism.
Does this match what others are thinking or am I off?
Previous criticism of ACE in venues like the Forum has primarily been about its research methodology (e.g. here and response here).
It's been a while since I followed EAA research closely, but it's my impression ACE has improved its research methodology substantially and removed/replaced a lot of the old content people were concerned about – at least as far as non-DEI issues are concerned.
I'll note that I used to have some reservations but no longer do, so I'll answer about why I previously had reservations.
When EA got interested in what we now call longtermism, it didn't seem obvious to me that EA was for me. My read was that EA was about near concerns like global poverty and animal welfare and not far concerns like x-risk and aging. So it seemed natural to me that I was on the outside of EA looking in because my primary cause area (though note that I wouldn't have thought of it that way at the time) wasn't clearly under the EA umbrella.
Ob... (read more)
"Effective Altruism" sounds self-congratulatory and arrogant to some people:
Your comments in this section suggest to me there might be something going on where EA is only appealing within some particular social context. Maybe it's appealing within WEIRD culture, and the further you get from peak WEIRD the more objections there are. Alternatively maybe there's something specific to northern European or even just Anglo culture that makes it work there and not work as well elsewhere, translation issues aside.
Running with the valley metaphor, perhaps the 1990s were when we reached the most verdant floor of the valley. It remains unclear if we're still there or have started to climb out and away from it, assuming the model to be correct.
The people I know of who are best at mentorship are quite busy. As far as I can tell, they are already putting effort into mentoring and managing people. Mentorship and management also both directly trade off against other high value work they could be doing.

There are people with more free time, but those people are also less obviously qualified to mentor people. You can (and probably should) have people across the EA landscape mentoring each other. But, you need to be realistic about how valuable this is, and how much it enables EA to scale.
Slight push ba... (read more)
On a related but different note, I wish there was a way to combine conversations on cross-posts between EA Forum and LW. I really like the way AI Alignment Forum works with LW and wish EA Forum worked the same way.
I often make an adjacent point to folks, which is something like:
EA is not all one thing, just like the economy is not all one thing. Just as civilization as we know it doesn't work unless we have people willing to do different things for different reasons, EA depends on different folks doing different things for different reasons to give us a rounded out basket of altruistic "goods".
Like, if everyone thought saltine crackers were the best food and everyone competed to make the best saltines, we'd ultimately all be pretty disappointed that we had a mountai... (read more)
I think it's worth saying that the context of "maximize paperclips" is not one where a person literally says the words "maximize paperclips" or something similar. It's instead an intuitive stand-in for building an AI capable of superhuman levels of optimization, such that if you set it the task, say via specifying a reward function, of creating an unbounded number of paperclips, you'll get it doing things you as a human wouldn't do to maximize paperclips, because humans have competing concerns and will stop when, say, they'd have to kill themselves or t... (read more)
I wrote about something similar about a year ago: https://forum.effectivealtruism.org/posts/Z94vr6ighvDBXmrRC/illegible-impact-is-still-impact
There's a lot to unpack in that tweet. I think something is going on like:
None of it looks like a real criticism of EA, but rather of lots of other things EA just happens to be adjacent to.
Doesn't mean it doesn... (read more)
I find others' answers about the actual low-resolution version of EA they see in the wild fascinating.
I go with the classic and if people ask I give them a three word answer: "doing good better".
If they ask for more, it's something like: "People want to do good in the world, and some good doing efforts produce better outcomes than others. EA is about figuring out how to get the best outcomes (or the largest positive impact) for time/money/effort relative to what a person thinks is important."
I realize this is a total tangent to the point of your post, but I feel you're giving short shrift here to continental philosophy.
If it were only about writing style I'd say fair: continental philosophy has chosen a style of writing that resembles that used in other traditions to try to avoid over-simplifying and compressing understanding down into just a few words that are easily misunderstood. Whereas you see unclear writing, I see a desperate attempt to say anything detailed about reality without accidentally pointing in the wrong direction.
This is ... (read more)
I think this is often non-explicit in most discussions of morality/ethics/what-people-should-do. It seems common for people to conflate "actions that are bad because it ruins ability to coordinate" and "actions that are bad because empathy and/or principles tell me they are."
I think it's worth challenging the idea that this conflation is actually an issue with ethics.
Although it's true that things like coordination mechanisms and compassion are not literally the same thing and can have expressions that try to isolate themselves from each other (cf. market ... (read more)
Weird, that sounds strange to me because I don't really regret things since I couldn't have done anything better than what I did under the circumstances or else I would have done that, so the idea of regret awakening compassion feels very alien. Guilt seems more clear cut to me, because I can do my best but my best may not be good enough and I may be culpable for the suffering of others as a result, perhaps through insufficient compassion.
These cases seem not at all analogous to me because of the differing amount of uncertainty in each.
In the case of the drowning child, you presumably have high certainty that the child is going to die. The case is clear cut in that way.
In the case of the distant commotion on an autumn walk, it's just that, a distant commotion. As the walker, you have no knowledge about what it is and whether or not you could do anything. That you later learn you could have done something might lead you to experience regret, but in the moment you lacked information to make i... (read more)
Could the seeming contradiction be resolved by greater specificity of statements?
For example, rather than abandoning "Everyone should sell everything that begins with a 'C', but nothing that begins with an 'A'." as a norm, we might realize we underspecified it to begin with and really meant "Everyone should sell everything that is called by a word in English that begins with a 'C', but nothing that begins with an 'A' in English.". We could get even more specific if objections remained, until we were not at risk of underspecifying what we mean and suffering... (read more)
Where do you work, and what do you do?
I'm a software engineer at Plaid working on the Infrastructure team. My main project is leading our internal observability efforts.
What are some things you've worked on that you consider impactful?
In terms of EA impact at my current job, not much. I view this as an earning to give situation where I'm taking my expertise as a software engineer and turning it into donations. I think there's some argument that Plaid has positive impact on the world by enabling lots of new financial applications built on our APIs, thereby ... (read more)
I think this holds true in more traditionally "quantitative" fields, too, because often things can be useful or not depending on how they are framed, such that without the proper framing good numbers don't matter because they aren't measuring the right thing.
This seems to suggest that a lot of what makes quantitative research successful also makes qualitative research successful, and so we should expect any extent to which expertise matters in quantitative fields to matter in qualitative fields (although I think this mostly points at the quant/qual distinction being a very fuzzy one that is only relevant along certain dimensions).
Jonas also mentioned to me that EA Funds is considering offering Donor-Advised Funds that could grant to individuals as long as there’s a clear charitable benefit. If implemented, this would also allow donors to provide tax-deductible support to individuals.
This is pretty exciting to me. Without going into too much detail, I expect to have a large amount of money to donate in the near future, and LTF is basically the best option I know of (in terms of giving based on what I most want to give to) for the bulk of that money short of having the ability to do ... (read more)
LTF covers a lot of ground. How do you prioritize between different cause areas within the general theme of bettering the long term future?
The LTFF chooses grants to make from our open application rounds. Because of this, our grant composition depends a lot on the composition of applications we receive. Although we may of course apply a different bar to applications in different areas, the proportion of grants we make certainly doesn't represent what we think is the ideal split of total EA funding between cause-areas.
In particular, I tend to see more variance in our scores between applications in the same cause-area than I do between cause-areas. This is likely because most of our application... (read more)
How much room for additional funding does LTF have? Do you have an estimate of how much money you could take on and still achieve your same ROI on the marginal dollar donated?
Really good question!
We currently have ~$315K in the fund balance.* My personal median guess is that we could use $2M over the next year while maintaining this year's bar for funding. This would be:
Do you have any plans to become more risk tolerant?
Without getting too much into details, I disagree with some things you've chosen not to fund, and as an outsider view it as being too unwilling to take risks on projects, especially projects where you don't know the requesters well, and truly pursue a hits-based model. I really like some of the big bets you've taken in the past on, for example, funding people doing independent research who then produce what I consider useful or interesting results, but I'm somewhat hesitant around donating to LTF because I... (read more)
From an internal perspective I'd view the fund as being fairly close to risk-neutral. We hear around twice as many complaints that we're too risk-tolerant as that we're too risk-averse, although of course the people who reach out to us may not be representative of our donors as a whole.
We do explicitly try to be conservative around things with a chance of significant negative impact to avoid the unilateralist's curse. I'd estimate this affects less than 10% of our grant decisions, although the proportion is higher in some areas, such as community building, biosecur... (read more)
I'm being strategic in 2020 and shifting much of my giving for it into 2021 because I expect a windfall, but here's where I chose to give this year:
I've had RSI in the past, though not from typing: it came from the repetitive motion of loading paper into a machine for scanning. I didn't need to see a doctor about it, addressing it was ultimately pretty straightforward, and I was able to keep doing the job that caused it while I recovered. Things I did:
I've got a few:
My somewhat uncharitable reaction while reading this was something like "people running ineffective charities are upset that EAs don't want to fund them, and their philosopher friend then tries to argue that efficiency is not that important".
I'm a big fan of ideas like this. One of the things I think EAs can bring to charitable giving that is otherwise missing from the landscape is being risk-neutral, and thus willing to bet on high-variance strategies that, taken as a whole in a portfolio, may have the same or hopefully higher expected returns compared to typical risk-averse charitable spending, which tends to focus on things like making sure no money is wasted to the exclusion of taking necessary risks to realize benefits.
Taking a predictive processing perspective, we should expect to see an initial decrease in happiness upon finding oneself living a less expensive lifestyle, because it would be a regular "surprise" violating the expected outcome, but then over time for this surprise to go away as daily evidence slowly retrains the brain to expect less, and so to have less negative emotional valence upon perceiving the actual conditions.
However I'd still expect someone who "fell from grace" like this to be somewhat sadder than a person who ros... (read more)