# All of G Gordon Worley III's Comments + Replies

Ambiguity aversion and reduction of X-risks: A modelling situation

I guess I don't understand why w > x > y > z implies w − y = x − z iff w − x = y − z. Sorry if this is a standard result I've forgotten, but at first glance it's not totally obvious to me.

Benedikt Schmidt (2d): Maybe it gets clearer if you compare the relative values of the 4 variables. w − y corresponds to the benefits of RXR; x − z also corresponds to the benefits of RXR. But maybe I was not precise enough: the equivalence does not follow only from w > x > y > z; we also need to take into account the definitions of the 4 variables. Do you see what I mean?
Ambiguity aversion and reduction of X-risks: A modelling situation

I didn't quite follow. What's the reasoning for claiming this?

From the definition of the four variables, the following equivalence can be deduced:

Benedikt Schmidt (3d): The reasoning is the following: the agent-neutral values are now denoted by variables instead of numbers. The worst case is represented by z, where the agent neither enjoys the benefits of PAP nor those of RXR. y represents the value yielded by the choice of PAP, whereas x corresponds to the value yielded by the choice of RXR. The best case arises if the agent chooses PAP while RXR is not necessary, since then the agent-neutral value incorporates the benefits of PAP and RXR, amounting to w. Therefore, clearly the following relation holds: w > x > y > z. From there, the equivalence under question follows. Do you agree?
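For readers puzzling over the algebra: assuming the equivalence in question is between w − y = x − z and w − x = y − z (each difference comparing the benefit of RXR across scenarios — this reading is my assumption, not stated explicitly above), it follows by rearrangement alone:

```latex
\begin{aligned}
w - y = x - z
&\iff w + z = x + y && \text{(add } y + z \text{ to both sides)} \\
&\iff w - x = y - z && \text{(subtract } x + z \text{ from both sides)}
\end{aligned}
```

Note that this biconditional holds for any real values of the variables; the ordering w > x > y > z and the definitions matter for establishing the equalities themselves, not for the rearrangement.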
Are many EAs philosophical pragmatists?

Well, I'd say we're all pragmatists whether we acknowledge it or not due to the problem of the criterion.

Management for growing teams

Not exactly based on EA org experience, but I think one of the biggest challenges orgs face is going from small enough that everyone can sit at the same table (people sometimes call these two-pizza teams, because you can feed everyone with two pizzas; in practice the number is somewhere between 8 and 12) to medium (fewer than 150 people, roughly the largest size at which you can personally know everyone) to large.

EA orgs are most likely to face the first transition, small to medium. The big thing to know is that you'll have to find ways to take what happened and work... (read more)

[PR FAQ] Adding profile pictures to the Forum

Dislike the idea. Feels like this will change the character of the site in a way that's negative. It's a bit hard to say why, but part of the vibe of this place is that it's about ideas, not about people, and this will take it away from that direction; I think having more of an idea vibe than a personal-brand vibe is good for what this forum is for. There's plenty of other places where people can have a more personally identifiable or warmer experience of connecting with others.

If we did this I feel like it would be trying to optimize for something that's not, in my view, the primary purpose of the forum, and thus would make this site worse at being the EA Forum than without this feature.

[PR FAQ] Sharing readership data with Forum authors

I've been asking for this feature on LW. If we're not going to get it there, at least we can get it here!

Some longtermist fiction

Given the inclusion of space opera like Dune, I recommend including Vinge's work like A Fire Upon the Deep and A Deepness In the Sky. These deal with the long term consequences of intelligence explosion, albeit one in a world with slightly different physics than ours (or so it seems given our limited information; Vinge is careful to construct it in a way such that I think we can't be certain today our universe is not like the one he depicts in the books).

I'd also include Niven's Ringworld. Not obvious this is longtermist at first, but deep into the book that changes (not much more I can say without spoilers if you're hoping to read it).

Towards a longtermist framework for evaluating democracy-related interventions

So I remain unconvinced that there's a specific longtermist case for democracy, but I think there is a longtermist case for some kind of context in which longtermist work can happen.

What I have in mind is that I'm not sure democracy or liberal democracy is necessary for working on longtermist cause areas, but liberal democracy does create an environment in which this work can get done. So there's an interesting question, then: what are the features of liberal democracy that enable longtermist work?

I ask this because I'm not sure that, for example, democracy is neces... (read more)

Buhl (1mo): Thanks for raising these points! A few of my (personal) reactions:
1. We definitely didn't intend for the post to presuppose that democracy is good for the long term. It's true that most of the potential effects we identify are positive-leaning – but none of these effects, nor the all-things-considered effect, is a settled case.
2. I think the question of what conditions allowed EA to come into existence is interesting, although I'm not sure that's the main positive impact of liberal democracy (especially given we don't have super strong evidence that liberal democracy was necessary for EA to arise). As is sort-of mentioned in the post, (inclusive) liberalism might be the feature most directly important to the flourishing of EA. But of course it's hard to tell, and I think it's plausible that a combination of features reinforcing each other is key.
Part 1: EA tech work is inefficiently allocated & bad for technical career capital

As best I can tell you don't seem to address the main reasons most organizations don't choose to outsource:

• additional communication and planning friction
• principal-agent problems

You could of course hand-wave here and try to say that since you propose an EA-oriented agency to serve EA orgs this would be less of an issue, but I'm skeptical, since if such a model worked I'd expect, for example, never to have had a job at a startup and instead to have worked for a large firm that specialized in providing tech services to startups. Given that there's a lot of mo... (read more)

Arepo (2mo): I think Charles's responses are good. I'd also like to see evidence of the claim that these are the main reasons. Occam's razor says to me that if outsourcing costs 2–3x (or more) what in-house hiring does, then that's the main reason lean startups don't go for it. Otherwise, companies could just hire agencies on permanent contracts, effectively treating them as super-expensive (but partially pre-vetted) in-house staff.
Charles He (2mo): I agree that in their posts the OP only advocates for their idea. Also, I agree with your points. I think having full-time tech staff – someone who knows the ins and outs of a system/org – is valuable, and this can be hard to replace in an agency model. However, I think the rest of your comment is ungenerous.

• There are literally firms that specialize in providing tech to startups, and if you expand this to include general contractor firms in IT, which indeed work for startups, this is a large fraction of the tech industry.
• Setting aside OpenAI, few EA orgs focus on creating intellectual property (EA orgs aren't "disrupting" social media/logistics/healthcare, etc.). Indeed, based on the OP's comments, the need is more toward prosaic work (which is sort of the problem). The skill is more fungible.
• You make a point by saying there's "a lot of money at stake at startups," but this itself supports the OP's point: (early) employees in for-profits grind brutally to win equity and exit (often in zero-sum games with competitors). There's less need for that level of control and aggressiveness in EA orgs.

These do suggest that an agency model could work. More directly, I think it would not have been difficult for the OP to add some pro forma "there are drawbacks" section, but, basically, your perfectly correct points are sort of expected and normal. I don't think the OP is planning to take over all tech in all EA orgs but instead to offer an alternative. Even if only 30% of EA orgs use it, the idea seems viable. I think the level of discussion should be higher and address "the devil is in the details," seeing what the demand could be and what can be worked out. That seems to be what the OP is doing. I do think the EA advantages the OP suggests are indeed large and may even be unique in the non-profit field.
The case against “EA cause areas”

I think the obvious challenge here is how to be more inclusive in the ways you suggest without destroying the thing that makes EA valuable. The trouble as I see it is that you only have 4-5 words to explain an idea to most people, and I'm not sure you can cram the level of nuance you're advocating for into that for EA.

I agree that when you first present EA to someone, there is a clear limitation on how much nuance you can squeeze in. For the sake of being concrete and down to earth, I don't see harm in giving examples from classic EA cause areas (giving the example of distributing bed nets to prevent malaria as a very cost-effective intervention is a great way to get people to start appreciating EA's attitude).

The problem I see is more in later stages of engagement with EA, when people already have a sense of what EA is but still get the impression (often unconsciously) that "if you really want to be part of EA then you need to work on one of the very specific EA cause areas".

G Gordon Worley III's Shortform

This question on the EA Facebook group got some especially un-EA answers. This seems not great, given that many people possibly first interact with EA via Facebook. I tend to ignore this group and maybe others do the same, but if this post is representative then we probably need to put more effort in there to make sure comments are moderated or replied to, so it's at least clear who is speaking from an EA perspective and who isn't.

Why should we be effective in our altruism?

You want more good and less bad in the world? Would it be better if we had a little more good and a little less bad? If so, then we should care about the efficiency of our efforts to make the world better.

*note that I of course here mean something like efficiency that includes Pareto efficiency, not the narrow notion of efficiency we use every day; you could also say "effective", but you asked why giving should be effective, and we can ground effectiveness in Pareto efficiency across all dimensions we care about

Problem area report: mental health

I've been pretty skeptical that mental health is something EAs should focus on. One thing I see lacking in this report (apologies if it's there and I didn't find it) is a way of comparing it to alternatives, since I don't think that mental health being a source of suffering for people is in question, but rather whether it compares favorably to other issues.

For example I'd love something like QALY analysis on mental health that would allow us to compare it to other cause areas more directly.

Jackson Wagner (4mo): Speaking of comparing to alternatives, I can't resist making a pedantic note for posterity that the fact that "per DALY lost, spending on HIV is 150x higher than spending on mental health" is not necessarily a sign of irrational priorities. After all, HIV is contagious in a way that mental health problems mostly aren't! I'm sure Taiwan is spending much more on covid-19 prevention right now than on cancer treatment per DALY being lost (they have almost no covid cases but are willing to constantly apply severe anti-covid restrictions), but it's a rational decision because covid has the potential to rapidly spiral out of control in a way that cancer can't. I don't think the report was trying to use the fact as some kind of instant knockdown argument in favor of mental health vs HIV spending – they were just using it as an illustrative comparison (which it is) to a well-known existing category of international health spending, to show that mental health spending is much smaller. So, the report is totally cool by me (and indeed we probably should be spending more on mental health globally, regardless of whether it's a #1 EA issue or a lower priority). I just wanted to make a note here for anyone interested in the 150x fact in the future.

Thanks for raising this - comparing things is a cause very close to my heart!

First, the report wasn't trying to compare the importance of mental health as a cause area to other things, so I understand that you didn't find that, because it wasn't central.

Second, the report (p8) does compare the impact of depression and anxiety to various other health conditions, as well as to debt, unemployment, and divorce, in terms of 0–10 life satisfaction, a measure of subjective well-being (SWB) – the other main measure of SWB is happiness. We, as in HLI, are pret... (read more)

Should Chronic Pain be a cause area?

Having lived with someone who suffered chronic kidney stones, at least within the US, a huge problem in recent years has been the over-reaction to the so-called opioid crisis. The result has been a decreased willingness to actually treat what we might call chronic acute pain, like the kind that comes from kidney stones.

This is a somewhat technical distinction I'm making here. Kidney stone pain is acute in that it has a clear cause that can be remediated. However if someone produces kidney stones chronically (let's say at least one a month), they are chroni... (read more)

Should Chronic Pain be a cause area?

Regarding the difference in prevalence between chronic pain in men and women, there's a tendency, at least within the US medical system, to dismiss women's pain more often than men's. A good example of this is pain resulting from endometriosis, which is often dismissed or downplayed by doctors as "just bad period cramps" rather than treated as a serious source of chronic pain. So too for many other sources of pain unique to women.

I don't have a source, but my experience is that most of this seems to be due to a variant of the typical mind fallacy: male doctors and some ... (read more)

Ending The War on Drugs - A New Cause For Effective Altruists?

My model is that the global angle is kind of boring: the drug war was pushed by the US, and I expect that if the US ends it then other nations will either follow its example or at least drift in random directions, with the US no longer imposing the drug war on them by threat of trade penalties.

Ending The War on Drugs - A New Cause For Effective Altruists?

I think this starts to get at questions of tractability, i.e. how neglected this is contingent on tractability (and vice versa). In my mind this is one of the big challenges of any kind of policy work where there's already a decent number of folks in the space: you have to have reasonably high confidence that you can do better than everyone else is doing now (and not just that you have an idea for how to do better, but that you can actually succeed in executing better) in order for it to cross the bar of a sufficiently effective intervention (in expectation) to be worth working on.

Ending The War on Drugs - A New Cause For Effective Altruists?

I would expect this not to be very neglected, hence I would expect EAs to be able to have much impact here only if, for example, it's effectively neglected because the existing people pushing for an end to the drug war are unusually ineffective.

For example, there's already NORML, which has been working on the cannabis angle of this since the 1970s with decent success; Portugal has already ended the drug war locally; and Oregon recently decriminalized possession of drugs for personal use.

Getting involved feels a bit like getting involved in, say, marriage equality in... (read more)

freedomandutility (4mo): I think this is still a good cause area for EAs:
1. I think the potential positive effects of global drug legalisation on opioid access in LMICs add massively to the expected value.
2. I agree that this area is probably not neglected in absolute terms, but I suspect that it might be neglected relative to the expected value of global drug legalisation.
3. I think a global angle (which might have more of a focus on working with the WHO and the UN) might not even be neglected in absolute terms.

I'm partially sympathetic to this. However, I think EAs have got a bit hung up on 'neglectedness', to the extent it's got in the way of clear thinking: if lots of people are doing something, and you can make them do it slightly better, then working on non-neglected things is promising. Really, I think you need to judge the 'facts on the ground', what you can do, and go from there. If there aren't ruthlessly impact-focused types working on a problem, that would be a good heuristic for some such people to get stuck in.

What was salient to me, compared to when I knew very little of the topic, is how much larger the expected value of drug legalisation now seems.

Why We Need Abundant Housing

On the one hand I'm in favor of more housing. I live in the SF Bay Area where this is also a problem, and really insufficient housing is a problem for all of California, so I'm naturally supportive of efforts to address this problem. However, I'm not sure this project is a high priority for EAs.

This seems like something that's not especially neglected (lots of people are thinking about ways to improve the housing situation in American cities) and also unlikely to have high impact in relative terms (viz. globally rich Americans are not suffering as much due... (read more)

leonoraahla (4mo): Hi Gordon, thank you so much for your comments. While housing gets a lot of attention in California, land use and zoning reform is politically unpopular. Land use and zoning reform is an extremely cost-effective way to impact economics, racial justice, and the environment. In California, the approach to housing is to invest in raising money for affordable housing, which doesn't address the systemic root causes of the housing shortage. Meeting our housing needs just through subsidized affordable housing, in LA County alone, would cost more than 500 billion dollars. Additionally, a McKinsey Global Institute report estimates that the housing crisis is costing the California economy between 143 and 233 billion dollars per year. On the other hand, land use and zoning reform is basically free (other than the staff cost of implementing it) and has a major effect on affordability: https://www.hamiltonproject.org/papers/removing_barriers_to_accessing_high_productivity_places

Abundant Housing focuses on opportunities to make a maximum impact with minimal resources. In 2019, the Coastal Plan we advocated for created housing targets for the region that would get us to national rates of rent-burden and overcrowding, as well as get our GHG emissions on track for state climate goals. That's through advocacy in a single administrative process, which few people pay attention to.
Is the current definition of EA not representative of hits-based giving?

One, I'd argue that hits-based giving is a natural consequence of working through what using "high-quality evidence and careful reasoning to work out how to help others as much as possible" really means, since that statement doesn't say anything about excluding high-variance strategies. For example, many would say there's high-quality evidence about AI risk, lots of careful reasoning has been done to assess its impact on the long-term future, and many have concluded that working on such things is likely to help others as much as possible, though we may ... (read more)

Venkatesh (5mo):
1. The point about "working through what it really means" is very interesting (more on this below). But when I read "high-quality evidence and careful reasoning", it doesn't really engage the curious part of my brain to work out what that really means. All of those are words I have already heard, and it feels like standard phrasing. When one isn't encouraged to actually work through that definition, it does feel like it is excluding high-variance strategies. I am not sure if you feel this way, but "high-quality evidence" to my brain just says empirical evidence. Maybe that is why I am sensing this exclusion of high-variance strategies.
2. You are probably right. But I am worried whether that is really a good strategy. By not openly saying that we do things we are uncertain about, we could end up coming off as a know-it-all who has it all figured out with evidence! There were some discussions along these lines in another recent post: https://forum.effectivealtruism.org/posts/h566GT4ECfJAB38af/some-quick-notes-on-effective-altruism. Maybe having a definition that kind of gives a subtle nod to hits-based giving could help with that? Your point about 'working through the definition' actually gave me an idea: what if we rephrased to "high-quality evidence and/or careful reasoning"? That non-standard phrasing of 'and/or' sows some curiosity to actually work things out, doesn't it? I am making the assumption that the phrase "high-quality evidence" means empirical evidence (as I already said) and the phrase "careful reasoning" includes expected-value thinking, making Fermi estimates, and all the other reasoning tools that EAs use. Also, this small phrasing change is not that radically different from what we already have, so the cost of changing shouldn't be that high. Of course the question is whether it is actually that much more effective than what we have. Would love to hear thoughts on that and of course other

Googling, I primarily find the term "high-quality evidence" in association with randomised controlled trials. I think many would say there isn't any high-quality evidence regarding, e.g. AI risk.

Naturalism and AI alignment

I do like the idea of being able to construct an experiment to test naturalism. I think it's mistaken in that I doubt there are any facts about what is right and wrong to be discovered, by observing the world or otherwise, but currently I and anyone else who wants to talk about metaethics is forced to rely primarily on argumentation. Being able to run an experiment using minds different from our own seems quite a compelling way to test a variety of metaethical hypotheses.

Why we want to start a charity fortifying feed for hens (founders needed)

I'm also somewhat concerned because this seems like a clear case of a dual use intervention that makes life better for the animals but also confers benefits to the farmers that may ultimately result in more suffering rather than less by, for example, making chickens more palatable to consumers as "humanely farmed" (I'm guessing that's what is meant by "humane-washing") or making chicken production more profitable (either by humane-washing or by making the chickens produce a better quality meat product that is in higher demand).

KarolinaSarek (5mo): Hi Gordon! Happy to respond more in-depth, but first I have two clarifying points. This intervention is for egg-laying hens, not broiler chickens. Egg-laying hens are not used for meat, but I could address your question from the perspective of egg quality. Is that fine? Also, are you making an argument that feed fortification will specifically be more prone to "humane-washing" compared to, e.g., cage-free/broiler campaigns, or that all welfare-focused interventions that aim to improve conditions on farms are prone to "humane-washing" and therefore may be net-negative in the long term?
Concerns with ACE's Recent Behavior

I can't seem to find the previous posts at the moment, but I have this sense that this is not an isolated issue and that ACE has some serious problems given that it draws continued criticism, not for its core mission, but for the way it carries that mission out. Although I can't remember at the moment what that other criticism was, I recall thinking "wow, ACE needs to get it together" or something similar. Maybe it has learned from those things and gotten better, but I notice I'm developing a belief that ACE is failing at the "effective" part of effective altruism.

Does this match what others are thinking or am I off?

Previous criticism of ACE in venues like the Forum has primarily been about its research methodology (e.g. here and response here).

It's been a while since I followed EAA research closely, but it's my impression ACE has improved its research methodology substantially and removed/replaced a lot of the old content people were concerned about – at least as far as non-DEI issues are concerned.

I'll note that I used to have some reservations but no longer do, so I'll answer about why I previously had reservations.

When EA got interested in what we now call longtermism, it didn't seem obvious to me that EA was for me. My read was that EA was about near concerns like global poverty and animal welfare and not far concerns like x-risk and aging. So it seemed natural to me that I was on the outside of EA looking in because my primary cause area (though note that I wouldn't have thought of it that way at the time) wasn't clearly under the EA umbrella.

Some quick notes on "effective altruism"

"Effective Altruism" sounds self-congratulatory and arrogant to some people:

Your comments in this section suggest to me there might be something going on where EA is only appealing within some particular social context. Maybe it's appealing within WEIRD culture, and the further you get from peak WEIRD the more objections there are. Alternatively maybe there's something specific to northern European or even just Anglo culture that makes it work there and not work as well elsewhere, translation issues aside.

Julia_Wise (6mo): I think I'd expect US culture to be most OK with self-congratulation, and basically everywhere else (including the UK) to be more allergic to it? But most of the people who voted on the name in the first place were British.

Running with the valley metaphor, perhaps the 1990s were when we reached the most verdant floor of the valley. It remains unclear if we're still there or have started to climb out and away from it, assuming the model to be correct.

Ben Garfinkel (6mo): I would actually bet on average democracy continuing to increase over the next few decades.* Over this timespan, I'm still pretty inclined to extrapolate the rising trend forward, rather than updating very much on the past decade or so of possible backsliding. It also seems relevant that many relatively poorer and less democratic countries are continuing to develop, supposing that development actually is an important factor in democratization. I also don't think there are any signs that automation is already playing a major role in democratic backsliding. (I think much more automation is probably necessary.) So, unless there's really rapid AI progress, I don't expect the specific causal mechanism I'm nervous about to kick in for a while.

*Off the top of my head, conditional on the Polity project continuing to exist, I might say there's something like a 70% chance that the average country's Polity score is higher in 2050 than it is today.
Mentorship, Management, and Mysterious Old Wizards

The people I know of who are best at mentorship are quite busy. As far as I can tell, they are already putting effort into mentoring and managing people. Mentorship and management also both directly trade off against other high value work they could be doing.

There are people with more free time, but those people are also less obviously qualified to mentor people. You can (and probably should) have people across the EA landscape mentoring each other. But, you need to be realistic about how valuable this is, and how much it enables EA to scale.

Raemon (7mo): That is helpful, thanks. I've been sitting on this post for years and published it yesterday while thinking generally about "okay, but what do we do about the mentorship bottleneck? how much free energy is there?", and "make sure that starting mentorship is frictionless" seems like an obvious mechanism to improve things.
Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?

On a related but different note, I wish there was a way to combine conversations on cross-posts between EA Forum and LW. I really like the way AI Alignment Forum works with LW and wish EA Forum worked the same way.

MichaelA (8mo): Yeah, I agree with that. A related point is that at least once I've seen something that was posted to both sites without the author noting that. This means users won't even know to check the other site for more comments, let alone having them automatically visible from the first site they were on. Not sure how often this happens. Maybe it's not a big deal. Also not sure if there's a good technical fix for this. My first thought is an opt-in checkbox that always appears on either site to ask "Do you want this to also appear as a cross-post on [other site]?" (which would fix it, because then presumably no one would bother manually making a cross-post, and the automated cross-post would always say at the top that it's a cross-post). But that might lead to too many cross-posts (I don't know). The best fix – if this is even a semi-regular problem in the first place – might just be to occasionally prominently mention the option of cross-posting and that one should label posts as cross-posts when one does so. Or just to comment on the relevant posts when one happens to notice that this has happened.
The Folly of "EAs Should"

I often make an adjacent point to folks, which is something like:

EA is not all one thing, just like the economy is not all one thing. Just as civilization as we know it doesn't work unless we have people willing to do different things for different reasons, EA depends on different folks doing different things for different reasons to give us a rounded out basket of altruistic "goods".

Like, if everyone thought saltine crackers were the best food and everyone competed to make the best saltines, we'd ultimately all be pretty disappointed that we had a mountai... (read more)

Davidmanheim (8mo): Strongly agree substantively about the adjacency of your point, and about the desire for a well-rounded world. I think it's a different thread of thought than mine, but it is worth being clear about as well. And see my reply to Jacob_J elsewhere in the comments, here: https://forum.effectivealtruism.org/posts/bGqPbQeTr2fv47ihb/the-folly-of-eas-should?commentId=pRnw7qh8vuumz2RQu, for how I think that can work even for individuals.
evelynciara's Shortform

I think it's worth saying that the context of "maximize paperclips" is not one where the person literally says the words "maximize paperclips" or something similar. It's instead an intuitive stand-in for building an AI capable of superhuman levels of optimization, such that if you set it the task of creating an unbounded number of paperclips, say via specifying a reward function, you'll get it doing things a human wouldn't do to maximize paperclips, because humans have competing concerns and will stop when, say, they'd have to kill themselves or t... (read more)

What’s the low resolution version of effective altruism?

There's a lot to unpack in that tweet. I think something is going on like:

• fighting about who is really the most virtuous
• being upset people aren't more focused on the things you think are important
• being upset that people claim status by doing things you can't or won't do
• being jealous people are doing good doing things you aren't/can't/won't do
• virtue signaling
• righteous indignation
• spillover of culture war stuff going on in SF

None of it looks like a real criticism of EA, but rather of lots of other things EA just happens to be adjacent to.

Doesn't mean it doesn... (read more)

What’s the low resolution version of effective altruism?

I find others answers about what the actual low resolution version of EA they see in the wild fascinating.

I go with the classic and if people ask I give them a three word answer: "doing good better".

If they ask for more, it's something like: "People want to do good in the world, and some good doing efforts produce better outcomes than others. EA is about figuring out how to get the best outcomes (or the largest positive impact) for time/money/effort relative to what a person thinks is important."

How modest should you be?

I realize this is a total tangent to the point of your post, but I feel you're giving short shrift here to continental philosophy.

If it were only about writing style, I'd say fair: continental philosophy has chosen a style of writing, resembling that used in other traditions, that tries to avoid over-simplifying and compressing understanding down into just a few words that are easily misunderstood. Whereas you see unclear writing, I see a desperate attempt to say anything detailed about reality without accidentally pointing in the wrong direction.

Morality as "Coordination" vs "Altruism"

I think this is often non-explicit in most discussions of morality/ethics/what-people-should-do. It seems common for people to conflate "actions that are bad because it ruins ability to coordinate" and "actions that are bad because empathy and/or principles tell me they are."

I think it's worth challenging the idea that this conflation is actually an issue with ethics.

Although it's true that things like coordination mechanisms and compassion are not literally the same thing and can have expressions that try to isolate themselves from each other (cf. market ... (read more)

3Raemon9moThe issue isn't just the conflation, but missing a gear about how the two relate. The mistake I was making, that I think many EAs are making, is to conflate different pieces of the moral model that have specifically different purposes. Singer-ian ethics pushes you to take the entire world into your circle of concern. And this is quite important. But, it's also quite important that the way that the entire world is in your circle of concern is different from the way your friends and government and company and tribal groups are in your circle of concern. In particular, I was concretely assuming "torturing people to death is generally worse than lying." But, that's specifically comparing within alike circles. It is now quite plausible to me that lying (or even mild dishonesty) among the groups of people I actually have to coordinate with might actually be worse than allowing the torture-killing of others who I don't have the ability to coordinate with. (Or, might not – it depends a lot on the weightings. But it is not the straightforward question I assumed at first)
Wholehearted choices and "morality as taxes"

Weird; that sounds strange to me because I don't really regret things: under the circumstances I couldn't have done anything better than what I did, or else I would have done that, so the idea of regret awakening compassion feels very alien. Guilt seems more clear-cut to me: I can do my best, but my best may not be good enough, and I may be culpable for the suffering of others as a result, perhaps through insufficient compassion.

Wholehearted choices and "morality as taxes"

These cases seem not at all analogous to me because of the differing amount of uncertainty in each.

In the case of the drowning child, you presumably have high certainty that the child is going to die. The case is clear cut in that way.

In the case of the distant commotion on an autumn walk, it's just that, a distant commotion. As the walker, you have no knowledge about what it is and whether or not you could do anything. That you later learn you could have done something might lead you to experience regret, but in the moment you lacked information to make i... (read more)

1AidanGoth9moMy reading of the post is quite different: This isn't an argument that, morally, you ought to save the drowning man. The distant commotion thought experiment is designed to help you notice that it would be great if you had saved him and to make you genuinely want to have saved him. Applying this to real life, we can make sacrifices to help others because we genuinely/wholeheartedly want to, not just because morality demands it of us. Maybe morality does demand it of us but that doesn't matter because we want to do it anyway.
Incompatibility of moral realism and time discounting

Could the seeming contradiction be resolved by greater specificity of statements?

For example, rather than abandoning "Everyone should sell everything that begins with a 'C', but nothing that begins with an 'A'." as a norm, we might realize we underspecified it to begin with and really meant "Everyone should sell everything that is called by a word that begins with a 'C' in English, but nothing called by a word that begins with an 'A' in English.". We could get even more specific if objections remained, until we were not at risk of underspecifying what we mean and suffering... (read more)

3wuschel9moYes, good point. I agree that sufficient specification can make time discounting compatible with moral realism. One would have to specify an inertial system, from which to measure time. (That would be equivalent to specifying the language as English, for example.) Then we would not have a logical contradiction anymore, which weakens my claim, but we would still have something I would find implausible: an inertial system that is preferred by the correct moral theory, even though it is not preferred by the laws of physics.
EAs working at non-EA organizations: What do you do?
• Where do you work, and what do you do?

I'm a software engineer at Plaid working on the Infrastructure team. My main project is leading our internal observability efforts.

• What are some things you've worked on that you consider impactful?

In terms of EA impact at my current job, not much. I view this as an earning to give situation where I'm taking my expertise as a software engineer and turning it into donations. I think there's some argument that Plaid has positive impact on the world by enabling lots of new financial applications built on our APIs, thereby ... (read more)

Does Qualitative Research improve drastically with increasing expertise?

I think this holds true in more traditionally "quantitative" fields, too, because often things can be useful or not depending on how they are framed, such that without the proper framing good numbers don't matter because they aren't measuring the right thing.

This seems to suggest that a lot of what makes quantitative research successful also makes qualitative research successful, and so we should expect any extent to which expertise matters in quantitative fields to matter in qualitative fields (although I think this mostly points at the quant/qual distinction being a very fuzzy one that is only relevant along certain dimensions).

Long-Term Future Fund: Ask Us Anything!

Jonas also mentioned to me that EA Funds is considering offering Donor-Advised Funds that could grant to individuals as long as there’s a clear charitable benefit. If implemented, this would also allow donors to provide tax-deductible support to individuals.

This is pretty exciting to me. Without going into too much detail, I expect to have a large amount of money to donate in the near future, and LTF is basically the best option I know of (in terms of giving based on what I most want to give to) for the bulk of that money short of having the ability to do ... (read more)

2Jonas Vollmer9moInterested in talking more about this – sent you a PM! EDIT: I should mention that this is generally pretty hard to implement, so there might be a large fee on such grants, and it might take a long time until we can offer it.
Long-Term Future Fund: Ask Us Anything!

LTF covers a lot of ground. How do you prioritize between different cause areas within the general theme of bettering the long term future?

The LTFF chooses grants to make from our open application rounds. Because of this, our grant composition depends a lot on the composition of applications we receive. Although we may of course apply a different bar to applications in different areas, the proportion of grants we make certainly doesn't represent what we think is the ideal split of total EA funding between cause-areas.

In particular, I tend to see more variance in our scores between applications in the same cause-area than I do between cause-areas. This is likely because most of our application... (read more)

Long-Term Future Fund: Ask Us Anything!

How much room for additional funding does LTF have? Do you have an estimate of how much money you could take on and still achieve your same ROI on the marginal dollar donated?

Really good question!

We currently have ~$315K in the fund balance.* My personal median guess is that we could use $2M over the next year while maintaining this year's bar for funding. This would be:

• $1.7M more than our current balance
• $500K more per year than we've spent in previous years
• $800K more than the total amount of donations received in 2020 so far
• $400K more than a naive guess for what the total amount of donations received will be in all of 2020. (That is, if we wanted a year of donations to pay for a year of funding, we would need $400K ... (read more)

Long-Term Future Fund: Ask Us Anything!

Do you have any plans to become more risk tolerant?

Without getting too much into details, I disagree with some things you've chosen not to fund, and as an outsider view it as being too unwilling to take risks on projects, especially projects where you don't know the requesters well, and to truly pursue a hits-based model. I really like some of the big bets you've taken in the past on, for example, funding people doing independent research who then produce what I consider useful or interesting results, but I'm somewhat hesitant around donating to LTF because I... (read more)

From an internal perspective I'd view the fund as being fairly close to risk-neutral. We hear around twice as many complaints that we're too risk-tolerant as that we're too risk-averse, although of course the people who reach out to us may not be representative of our donors as a whole. We do explicitly try to be conservative around things with a chance of significant negative impact to avoid the unilateralist's curse. I'd estimate this affects less than 10% of our grant decisions, although the proportion is higher in some areas, such as community building, biosecur... (read more)

Where are you donating in 2020 and why?

I'm being strategic in 2020 and shifting much of my giving for it into 2021 because I expect a windfall, but here's where I chose to give this year:

• AI Safety Support
• I think the work Linda (and now JJ) are doing is great and is woefully underfunded. I would give them more sooner but I have to shift that into 2021. They've had some trouble getting funding from more established sources, for reasons I don't endorse but don't want to go into here, and I think giving to them now is especially high leverage to help AISS bootstrap.
• I'll be giving $5k soon and plan t
4GMcGowan10moCan you say a little bit more about this? I tend not to think of cryonics as charitable.
1aaronhamlin10moCongrats on your giving! I would maybe add a note of caution if you were anticipating deducting fees to Alcor on your taxes. Even though they're a c3, they're providing a service to you. An analogy would be deducting fees for a YMCA gym membership, which is also not tax deductible. I also say this being an Alcor member myself. Also, here's a resource on charitable giving and taxes I put together that may be useful: https://medium.com/@aaronhamlin/your-guide-to-charitable-giving-and-taxes-a7c0f44c922 [https://medium.com/@aaronhamlin/your-guide-to-charitable-giving-and-taxes-a7c0f44c922] Note that I don't count my payments for membership/cryopreservation towards my giving.
How to best address Repetitive Strain Injury (RSI)?

I've had RSI in the past, though not from typing but from repetitive motions loading paper into a machine for scanning. I didn't need to see a doctor about it, and addressing it was ultimately pretty straightforward; I was able to keep doing the job that caused it while I recovered. Things I did:

• wore a stabilizing wrist brace to alleviate the strain on my wrist that was causing pain, even when I was not engaged in an activity that would necessarily cause pain
• paid attention to and changed my motions to reduce wrist strain
• rearranged my work so I h
What is a book that genuinely changed your life for the better?

I've got a few:

• GEB
• Put me on the path to something like thinking of rationality as something intuitive/S1 rather than something I have to think about with a lot of deliberation/S2.
• Seven Habits of Highly Effective People
• I often forget how much this book is "in the water" for me. There's all kinds of great stuff in here about prioritization, relationships, and self-improvement. It can feel a little like platitudes at times, but it's really great.
• The Design of Everyday Things
• This is kind of out there, but this gave me a strong sense of the importance of groundi
EA's abstract moral epistemology

My somewhat uncharitable reaction while reading this was something like "people running ineffective charities are upset that EAs don't want to fund them, and their philosopher friend then tries to argue that efficiency is not that important".

Michael_Wiebe's Shortform

I'm a big fan of ideas like this. One of the things I think EAs can bring to charitable giving that is otherwise missing from the landscape is risk-neutrality, and thus a willingness to bet on high-variance strategies that, taken as a whole in a portfolio, may have the same or hopefully higher expected returns compared to typical risk-averse charitable spending, which tends to focus on making sure no money is wasted to the exclusion of taking the risks necessary to realize benefits.

Evidence on correlation between making less than parents and welfare/happiness?

Taking a predictive processing perspective, we should expect to see an initial decrease in happiness upon finding oneself living a less expensive lifestyle, because it would be a regular "surprise" violating the expected outcome, but then over time we should expect this surprise to go away as daily evidence slowly retrains the brain to expect less, and so to have less negative emotional valence upon perceiving the actual conditions.

However I'd still expect someone who "fell from grace" like this to be somewhat sadder than a person who ros... (read more)