All of G Gordon Worley III's Comments + Replies

EA is Insufficiently Value Neutral in Practice

Hmm, I think these arguments comparing to other causes are missing two key things:

  • they aren't sensitive to scope
  • they aren't considering opportunity cost

Here's an example of how that plays out. From my perspective, the value of the very large number of potential future lives dwarfs basically everything else. Like the value of worrying about most other things is close to 0 when I run the numbers. So in the face of those numbers, working on anything other than mitigating x-risk is basically equally bad from my perspective because that's all missed oppor... (read more)

EA is Insufficiently Value Neutral in Practice

To your footnote, I'm not sure how many people are directly uncomfortable, but I do find arguments that roughly boil down to "but what about Nazis?" lazy, as they try to short-circuit the discussion by pointing to a thing that will make most readers go "Nazis bad, I agree with whoever says 'Nazis bad' most strongly!". This doesn't mean thinking Nazis are bad is an unreasonable position or something, only that it looms so large it swamps many people's ability to think clearly.

Rationalists tend to taboo comparing things to Nazis or using Nazis as an example for ... (read more)

Maybe the formatting of your comment cut off the later portions? It seems like your response to my comment only included a discussion of my end note. To be clear, my end note was meant as merely a side-conversation, only tangentially related to the main body of the comment. I'll be generous in assuming that it was merely a formatting error — I wouldn't hope to assume that you ignored the main points of my comment in favor of writing only about my relatively unimportant end note. I await your response to the content of my comment! :)
EA is Insufficiently Value Neutral in Practice

I'd bite the bullet and say "yes". I disagree with Nazism, but to be intellectually consistent I have to accept that even beliefs about what is good that I find personally unpalatable deserve consideration. This is very similar to my stance on free speech: people should be allowed to say things that I disagree with, and I'm generally in favor of making it easier for people to say things, including things I disagree with.

To your point about not caring about the difference between good and evil, this sort of misses the point I'd like to make. How do you know... (read more)

A) No — to be intellectually consistent, you wouldn't merely have to claim that Nazism deserves consideration. You'd have to actively support an anti-Semitic person donating to the Nazi Party and ensuring that it functions as efficiently as possible to eradicate Jewish people.[1] Correct me if I'm wrong, but your post didn't seem to stop at wanting just a discussion of values — it pushed for action to increase the effectiveness of whatever values someone else held, even if those values are counter to your own.

B) Why do you think beliefs you find personally unpalatable deserve consideration — or, at least, how much consideration is necessary? Was the Holocaust insufficient consideration of the ideals of Nazism? Do you believe we should leave the Final Solution on the table as a way of pursuing ethical good in the world? These aren't "gotcha" questions — given that you responded "yes" to Richard's incisive question, I'd legitimately like to see how far your intellectual consistency will take you.

Agreed. This is a key question, and I think Richard avoids this thorny problem in his comment. However, the fact that the field of ethics hasn't come to a conclusion about which system of values we should hold doesn't imply a free-for-all. We may not (yet?) know what is objectively good and evil, or even whether ethics is objective or exists in the first place, but we can still aim for the good and away from the bad.

I'm excited to hear your answer — you have a lot of interesting takes, and you have an easy-to-follow writing style.

1. ^ I'm Jewish. I'm a descendant of Holocaust survivors. My father is a Holocaust scholar. I'm attending a conference on the Holocaust tomorrow. I'm not offended by the use of the Nazi Party as an example, but if someone else is, I'd be happy to edit this post and change the example to something else — either shoot me a direct message or simply reply to this chain.

There is (or, at least, ought to be) a big gap between "considering" a view and "allying" with it.  If you're going to ally with any view no matter its content, there's no point in going to the trouble of actually thinking about it.  Thinking is only worthwhile if it's possible to reach conclusions that differ depending on the details of what's considered.

Of course we're fallible, but that doesn't entail radical skepticism (see: any decent intro philosophy text).  Whatever premises you think lead to the conclusion "maybe Nazism is okay after... (read more)

What actions are most effective if you care about reproductive rights in America?

I've edited my post to make it clear I think this is an off topic discussion within the context of this question. I think it's fine for this comment to stay because it was there before I made this clarification, but I have asked the moderators to convert this from an answer to a proper comment.

Buddhism and Utilitarianism; EA vs EB

I don't think it actually has (1).

Engaged Buddhism is, as I see it, best understood as a movement among Western liberals who are also Buddhists, and as such is primarily infused with Western liberal values. These are sometimes incidentally the best way to do good, but unlike EA they don't explicitly target doing the most good; they instead uphold an ideology that values things like racial equality, human dignity, and freedom of religion (including freedom to reject religion).

As for (2), I'm not sure how much there is to learn. There's likely some things, b... (read more)

Buddhism and Utilitarianism; EA vs EB

I think there's some case for specialization. That is, some people should dedicate their lives to meditation because it is necessary to carry forward the dharma. Most people probably have other comparative advantages. This is not a typical way of thinking about practice, but I think there's a case to be made that we could look at becoming a monk, for example, as a case of exercising comparative advantage as part of an ecosystem of practitioners who engage in various ways based on their comparative abilities (mostly focused on what they could be doing in the... (read more)

4 · Michael B. · 1mo
A few years ago I asked a zen nun what exactly is the use of being a nun, living quite secluded and without much impact on the world. Her response was (roughly speaking) that it is good if some people practice and study intensely because that keeps the quality and depth of the tradition alive and develops it. But not everyone should take that path. It seems like she was expressing the same idea as you are! I think she now leads one of the monastic centers in Germany.
3 · Noah Starbuck · 2mo
Really appreciate that notion. It is something I've thought a lot about myself. I also tend to find that my personal spiritual practice benefits from a mix of many short meditation retreats, daily formal meditation sessions & ongoing altruistic efforts in daily life. I don't feel that I would make a good teacher of meditation if I did that full time or that my practice would reach greater depth faster if I quit my job & practiced full time.
Doing good easier: how to have passive impact

A couple comments.

First, I think there's something akin to creating a pyramid scheme for EA by leaning too heavily on this idea, e.g. "earn to give, or better yet get 3 friends to earn to give and you don't need to donate yourself because you had so much indirect impact!". I think david_reinstein's comment is in the same vein and good.

Second, this is a general complaint about the active/passive distinction that is not specific to your proposal but since your proposal relies on it I have to complain about it. :-)

I don't think the active/passive distinction is... (read more)

Free-spending EA might be a big problem for optics and epistemics

Maybe I can help Chris explain his point here, because I came to the comments to say something similar.

The way I see it, neartermists and longtermists are doing different calculations and so value money and optics differently.

Neartermists are right to be worried about spending money on things that aren't clearly impacting measures of global health, animal welfare, etc. because they could in theory take that money and funnel it directly into work on that stuff, even if it had low marginal returns. They should probably feel bad if they wasted money on a big ... (read more)

Go Republican, Young EA!

Two thoughts:

  1. We should be careful about claiming the GOP is the "worse party". Worse for whom? Maybe they are doing things you don't like, but half the country thinks the Democrats are the worse party. We should be wise to the state of normative uncertainty we are in. Neither party is really worse except by some measure, and because of how they are structured against each other one party being worse means the other is better by that measure. If you wanted to make a case that one party or the other is better for EA and then frame the claim that way I think
... (read more)
Go Republican, Young EA!

to the fall of US democracy and a party that has much worse views on almost every subject under most moral frameworks.

This seems like a pretty partisan take and fails to adequately consider metaethical uncertainty. There's nothing about this statement that I couldn't imagine a sincere Republican with good intentions saying about Democrats and being basically right (and wrong!) for the same reasons (right assuming their normative framework, wrong when we suppose normative uncertainty).

Go Republican, Young EA!

While I don't want to suggest that you or any other person who feels targeted by the GOP has an obligation to work for them, part of the reason they are able to be hostile to various groups is that those groups are not part of how they get elected. If tomorrow the GOP were dependent on LGBTQ votes to win elections, it would transform into a different party.

So while I'm not expert enough here to see how to change the current situation, I think there is something interesting about changing the incentive gradients for both parties to make them both more inclusive (both construct an outgroup: for the GOP, minorities and foreigners; for Democrats, rural and working-class white people), and I expect that to have positive outcomes.

Isn't saying to support a worse party in hopes that it becomes better like saying you should support a worse business in hopes that it becomes better? If they already have your vote/money/support why would they change? Repeatedly losing elections seems like it would be more likely to cause the Republican party to change.
How to Choose the Optimal Meditation Practice

The more I practice, the more I've come to believe that the only thing that really matters is that you do it. Not that you do it well by whatever standard one might judge, but just that you do it. 30 minutes of quiet time is a foundation on which more can be explored and discovered. You don't have to sit a special way, do a special thing with your mind, or do anything else in particular for it to be worth the effort, although all those things can help and are worth doing if you're called to them!

You should totally learn a bunch of techniques or practice a... (read more)

.01% Fund - Ideation and Proposal

What does this funding source do that existing LT sources don’t?

Natural followup: why a new fund rather than convincing an existing fund to use and emphasize the >0.01% x-risk reduction criterion?

I think there's a pretty smooth continuum between an entirely new fund and an RFP within an existing fund, particularly if you plan to borrow funders and operational support. 

I think I a) want the branding of an apparent "new fund" to help make more of a splash and to motivate people to try really hard to come up with ambitious longtermist projects, and b) to help skill up people within an org to do something pretty specific.

You also shave off downside risks a little if you aren't institutionally affiliated with existing orgs (but get advice in a way that decreases unilateralist-y bad stuff).

Nuclear attack risk? Implications for personal decision-making

Even if he wants to do that, his power is not absolute. I'd expect/hope for his generals to step in if he tries something like that, perhaps using it as reason for a coup.

Nuclear attack risk? Implications for personal decision-making

I'm not super worried. Maybe this is because I am old enough that I grew up with a perception that nuclear war could happen at any time and unexpectedly kill us all. The current threat level feels like a return to the Cold War: something could happen, but MAD still works and Putin, like everyone else, doesn't really have anything to gain from all out nuclear war, but does have something to gain from playing chicken. So we should expect a lot of posturing but probably no real action, except by accident.

I think the largest risk of nuclear weapons comes from... (read more)

There is a theory that Putin is terminally ill and could therefore be open to taking the rest of the world with him. I don't know how much weight to put on it.
What psychological traits predict interest in effective altruism?

Yes, I suppose I left out non-English. I should have more properly made my claim that growth has slowed in English-speaking countries where the ideas have already had time to saturate and reach more of the affected people.

I forget where I got this from. I'm sure I can dig something up, but I seem to recall other posts on this forum showing that the growth of EA in places where it was already established had slowed.

What psychological traits predict interest in effective altruism?

It's unclear to me we've really investigated deeply enough to say that. We just know these factors matter, but it still seems quite possible that lots of other factors matter or that those other factors cause these two.

Fair. In that case this seems like a necessary prerequisite result for doing that deeper investigation, though, so valuable in that respect.
What psychological traits predict interest in effective altruism?

I don't mean to be rude, but this feels a bit like a non-result, since as your conclusion puts it, effective altruists are basically people who like to act altruistically and like to be effective. It also seems unsurprising that there's a small confluence of the two, given that EA growth has slowed after quickly reaching most of the people who were going to be interested in it. It's nice to have some studies to back up the anecdotes powering the Bayesian evidence we already had about these claims, but am I correct that this is basically what you found?

At least for myself, it wouldn't have been obvious in advance that there would be exactly two factors, as opposed to (say) one, three or four.
I don't think that the supposed lack of EA growth is evidence that there's a small correlation between the two factors. Seems like hindsight bias to me.
Has EA growth slowed? Has EA reached most of the people who were going to be interested in it? Where are you getting this from? The Spanish-speaking community is growing fast. I assume there are other countries/languages that are yet to be significantly reached, all of which are bound to have some amount of people with significant E and A factors.
We need more nuance regarding funding gaps

More info always seems better, but maybe it's not useful here?

My thinking is that perhaps all the gaps worth filling are already well known and being addressed roughly as soon as they become overdetermined. Other gaps maybe aren't worth addressing because the expected value of doing so is low. More info might help identify the marginal gap, but if there's something like a power law distribution of gaps in terms of expected value of filling them then we've likely already identified all the best ones to fill and the rest are the long tail where differences don't matter much and people should fill based on other criteria.

The Culture of Fear in Effective Altruism Is Much Worse than Commonly Recognized

I often think of it as EA being too conservative rather than having a culture of fear, and maybe those are different things, but here's some of what I see happening.

People reason that EA orgs and people representing EA need to be respectable because this will later enable doing more good. And I'd be totally fine with that if every instance of it was clearly instrumental to doing the most good.

However, I think this goal of being respectable doesn't take long to become fixed in place, and now people are optimizing for doing the most good AND being respectabl... (read more)

The Life-Goals Framework: How I Reason About Morality as an Anti-Realist

Life goals and life plans seem to me to sit somewhere between Heidegger's Sorge (both feel like aspects of Sorge) and general notions of axiology (life goals and life plans seem like a model of how axiology gets implemented). Curious whether that resonates with what you mean by life goals and life plans.

I'm not familiar enough with Heidegger to comment on his concepts, but I can imagine similarities between life goals and existentialist thinking! Regarding axiology, I usually encounter this in moral realist contexts where an axiology tells us what's good/valuable in a universal sense.
Running for U.S. president as a high-impact career path

I don't know if someone has posted this before, but it would be good to compare this to the idea of running for other political offices. For example, maybe a lot could be achieved as a senator or representative rather than as president, and those seem like easier jobs to get.

Illegible impact is still impact

Since I originally wrote this post I've only become more certain of the central message, which is that EAs and rationalist-like people in general are at extreme risk of Goodharting ourselves. See for example a more recent LW post on that theme.

In this post I use the idea of "legibility" to talk about impact that can be easily measured. I'm now less sure that was the right move, since legibility is a bit of jargon that, while it's taken off in some circles, hasn't caught on more broadly. Although the post deals with this, a better version of this post might... (read more)

The phrase “hard-core EAs” does more harm than good

I don't think I ever heard anyone use the phrase "hard-core EAs" or if I did it just passed by without note, but now that I bother to think about it I actually think it's really apt!

The etymology of hardcore has been a bit lost over the years. Here's what etymonline says:

also hard-core; 1936 (n.); 1951 (adj.); from hard (adj.) + core (n.). Original use seems to be among economists and sociologists, in reference to unemployables. Extension to pornography is attested by 1966. Also the name of a surfacing material.

Merriam-Webster seem to think it's a bit olde... (read more)


I can only speak for myself, but assuming my experience generalizes, this means lots of people will miss out on what you have to say. Since you don't have a prior belief that posts by you are worth reading and this post has a vague title that could be about any number of things, it makes it hard to consider it worth the time to invest in reading. So just purely from the pragmatic point of view, I estimate a summary would help get more people to read.

The irony is that EdoArad and myself have probably now spent enough time engaging with comments on this post... (read more)


Friendly suggestion: a summary might help. I briefly skimmed this but was really hoping for a summary. These are often helpful to help readers like me to decide to invest time in a post or not.

Thanks for the suggestion, but I don't think I will add one — not because the article can't be summarized but because adding a summary is kind of antithetical to the whole thrust of the essay. In part, I am arguing that excessive emphasis on legibility and efficiency in science is killing creativity. If the lack of a summary means that fewer people will read it, then so be it :)
Free Guy, a rom-com on the moral patienthood of digital sentience

I think what's great about Free Guy is that the AI part is not the center of the plot most of the time. Rather it's a story about some characters who find themselves in some unusual circumstances. That might not seem much different, but compare typical AI films that spend a lot of time being about AI rather than the characters. By being character-focused, I think it delivers on ideas better than most idea movies that get so caught up in the ideas they forget to tell a good story.

As you've noticed, the root of good and bad lies with individual preferences and values. What is good is "merely" that which satisfies our desires at the lowest levels (perhaps what is good is what is least surprising to us, if you buy the predictive processing model of the brain). I put "merely" in scare quotes, though, because it's not so mere as it seems. This is in fact the root of all that matters to us in the world.

It's normal, when first noticing that good and bad rest on something so subjective as what individuals like, to feel a sense of unease be... (read more)

I want EA-charity gift cards!

I like this idea a lot. I spent O($1k) on giftcards this year from tisbest instead of giving more traditional gifts. This is nice in multiple ways: this is way more than I would have spent on regular gifts, and each person gets the chance to give to something they care about. And selfishly I get a tax deduction (although I would have gotten it anyway since most of this money would have been donated anyway) and get to push my agenda on family that giving money is good (this doesn't seem like the worst thing in the world, but I'll take it for what it is: I'm... (read more)

[Event] Bodhi Day All Night Sitting 2021-12-07 to 2021-12-8

Note: Sorry for not creating this as an event post, but I can't do that yet, and this is time sensitive so I created it as a regular post.

A Red-Team Against the Impact of Small Donations

Fund weird things: A decent litmus test is "would it be really embarrassing for my parents, friends or employer to find out about this?" and if the answer is yes, more strongly consider making the grant.

Things don't even have to be that weird to be things that let you have outsized impact with small funding.

A couple examples come to mind of things I've either helped fund or encouraged others to fund that for one reason or another got passed over for grants. Typically the reason wasn't that the idea was in principle bad, but that there were trust issues wit... (read more)

Opportunity Costs of Technical Talent: Intuition and (Simple) Implications

This is basically my own experience. I worked a bunch on AI independent research, but now I don't really because it just doesn't make sense: I have way more opportunity to make money to do more good than any direct work I could do, in my estimation, so I just double down on that.

(For context I'm on the higher end of technical talent now: 12 years of work experience, L7-equivalent, in a group tech lead role, and if I can crank up to L8 the potential gains are quite large in terms of comp that I can then donate.)

I also really like the platform this uses, Tisbest. This year I decided to do all my Xmas giving by giving Tisbest cards to folks so they can make donations to places of their choosing. I think it's a nice way to spread the spirit of giving with folks, and it's a great chance to talk about EA if anyone asks "what should I donate it to?".

Cool! Do you have experience using [] by chance? Their Charity Gift Cards accomplish the same thing I believe and I'd be interested in hearing which platform you like better. E.g. Does one platform make the Charity Gift Card feel more like a Christmas gift than the other?
Help California implement Approval Voting - Time Sensitive - Nov. 18, 2021

I don't want this to seem like it's directed at this post in particular, but more at a general class of things one sees on the EA Forum, and this just happened to finally trigger the thought for me.

Calls to action like this for things that aren't broadly accepted as core EA areas would benefit substantially from including links reminding us why we should care about this.

Like, if someone posts about x-risk or global poverty or animal welfare or something like that, I'm like, sure, seems on topic and relevant to EAs because there's broad agreement that this thing ... (read more)

6 · Mahendra Prasad · 8mo
Thanks Gordon for this helpful criticism. I will try to include such links in future posts. (Please don't hesitate to call me out if I fail to do so.) Here is a link about Open Philanthropy's support of approval voting, and a talk on approval voting at EA Global London. Thanks again.
Should Earners-to-Give Work at Startups Instead of Big Companies?

My own experience is that there's a sweet spot. Big tech companies only really offer high compensation to the most experienced and capable employees. If there are 10 levels and you're not at least at level 8, a big company is probably not, in my own informal analysis, likely to offer you the best compensation in expectation. Some of this is simply because these folks have high opportunity costs, and the only way to get them as employees is to pay them enough that it balances off against what they would likely do instead: start a company.

If you're in the middl... (read more)

G Gordon Worley III's Shortform

Many people want the world to be better.

I feel like there's a lot of people who take this desire for a better world and then hope that they will be the one to make it all better. Maybe they'll discover some grand idea that will improve many things and lead us to salvation!

I don't think that's what we need though. We mostly need all us little people to just be a bit nicer, a bit more trusting, a bit more compassionate, and then not quite so many grand schemes will be required because we'll find we're already living in a better world.

Good points, but for that to happen, I think, we now need a good bit of cultural change. I think a long-term plan, or a set of alternative plans from which to choose, to get there would help give us direction. Not a grand idea, but a set of ideas that we can all agree meet our values, and a set of steps to make that happen over a finite number of years. That is more than doable, but since I didn't see anyone offering such plans, I decided to start writing one up. More information, or the 74k-word 1st draft, available upon request. Best regards, Shira Destinie A. Jones, aka Shira
The effective altruist case for parliamentarism

Thanks for your reply. Helps make a case that parliaments do something above and beyond the culture/tradition in which they are situated.

That said, I do want to respond to one thing you said:

Some would say that the aspects that matter are issues like trust, low corruption, respect of property rights, etc. But are there any cultures which do not value those things, which claim they are outright undesirable? I don't think there are.

Up until 2 days ago I likely would have shared this sentiment, but I was talking with someone who grew up in Romania and as he p... (read more)

3 · Tiago Santos · 9mo
Thanks. Yes, you are right that there are some differences like you said, and they can have some importance; my point should have been more nuanced. To paraphrase/quote from memory author Huey Li (who wrote a great book related to this theme, "Dividing the Rulers"): constitutions can affect cultures in years; cultures will affect constitutions in centuries. Also, I'm not sure I would attach that much weight to that story for a general sense of how unsatisfied Romanians are with the level of corruption in their country. And with respect to property rights and trust, I think we can imagine how people might in the abstract argue that they "prefer" societies with less of them, but in reality I do find it hard to imagine people preferring to live in societies where they have no security that their stuff will be with them tomorrow, or where they can't trust others to do what they said they would.
The effective altruist case for parliamentarism

I'm sure this is addressed in the book I haven't read, but I wonder how much of this is confounded by former British rule. That is, if you factor out parliamentary systems that were established after a legacy of British rule, would it still be the case that parliaments are better?

I'm guessing the argument is "yes", but I'm not sure and am somewhat suspicious that some of these effects could be cultural ones that just happen to come along with parliaments, making parliamentarism an effect rather than a cause.

5 · Magnus Vinding · 9mo
Tiago writes the following in response to a similar comment made on Overcoming Bias: As Tiago notes, the evidence goes beyond just national governments; the first chapter of his book has sections on national governments, corporations, and local government, and the latter two are not subject to this confounder. And as Hanson writes in his review of the book, one may argue that the evidence from cities (i.e., local governance) is most convincing: see e.g. the studies on local governance cited above: Carr, 2015; Nelson & Afonso, 2019.
EA for Jews: Launch and Call for Volunteers

I think of it as coming from two angles. One is that it's a form of community building to expose folks to EA ideas who might otherwise not engage with them by doing so in a language they are familiar with. Two, it's a way for EAs who are religious to explore how EA impacts other spheres of their life.

I think it's also nice to have community by creating a sense of belonging. With EA being such a secular space normally, having a way to learn you're not the only one trying to combine EA and practice of a religion is nice. Good to have folks to talk to, etc.

EA for Jews: Launch and Call for Volunteers

Woo, as the person running Buddhists in EA, really excited to see more groups like this! At this point there's enough of us (3 groups) that maybe it's time to start thinking about an EA Interfaith group. :-)

Thanks Gordon! I think a pan-interfaith group would be great! I'd love to compare notes and hear more about what your group is up to.
What would you say are the biggest benefits of being part of an EA faith group?
On the assessment of volcanic eruptions as global catastrophic or existential risks

This is pretty long. Is there something like an abstract or executive summary of the post? Skimming a few of the expected places didn't feel like I was quite getting that without reading the whole thing.

Hi Gordon, I think by reading the 'challenging assumptions and why we think the current risk may be underappreciated' and 'Conclusions and the future' sections, you'll get a summary of most of the main points. 

The Cost of Rejection

True, but what you can do is have explicit values that you publicize and then ask candidates questions that assess how much they support/embody those values. Then you can reasonably say "rejected candidate because they didn't demonstrate value X" and have notes to back it up, or say "rejected because demonstrated ~X". This is harder feedback for candidates to hear, especially if X is something positive that everyone thinks they are, like "hard working", but at the same time it should be made clear this isn't about what's true about the candidate, but what could be determined from their interview performance.

The Cost of Rejection

My vague understanding is that there's likely no legal issues with giving feedback as long as it's impartial. It's instead one of those things where lawyers reasonably advise against doing anything not required since literally anything you do exposes you to risk. Of course you could give feedback that would obviously land you in trouble, e.g. "we didn't hire you because you're [ethnicity]/[gender]/[physical attribute]", but I think most people are smart enough to give feedback of the form "we didn't hire you because legible reason X".

And it's quickly becom... (read more)

There are a bunch of illegible factors involved in hiring the right person, though. If the reason for rejection is something like "we think you'd be a bad culture fit," then it seems legally risky to be honest.
EA Survey 2020: Geography

For many of the breakdowns it would be helpful to understand the base rate in those countries to understand what the data means. For example, gender is easy enough since the base rate is usually close to 50/50, but for things like race I have no idea how many people identify as white, black, asian, etc. in each region to compare against. I realize not everything has a base rate to compare against, but for those that do having that data would really help contextualize what's going on here.

That makes sense. Reference numbers even for things like race are surprisingly tricky. We've previously considered comparing the percentages for race within the EA Survey to baseline percentages. But although this works passably well for the US (EAS respondents are more white) and the UK (EAS respondents are less white), without taking into account the fact that EAS respondents are disproportionately rich, highly educated and young and therefore should not be expected to represent the composition of the general population, for many other major countries there simply isn't national data on race/ethnicity that matches the same categories as the US/UK. I think people should generally be a lot more uncertain when estimating how far the EA community is representative in this sense. The figures still allow comparison within the EA community though.
Ambiguity aversion and reduction of X-risks: A modelling situation

I guess I don't understand why w > x > y > z implies w − y = x − z iff w − x = y − z. Sorry if this is a standard result I've forgotten, but at first glance it's not totally obvious to me.

Benedikt Schmidt · 1y
Maybe it gets clearer if you compare the relative values of the 4 variables. w − y corresponds to the benefits of RXR; x − z also corresponds to the benefits of RXR. But maybe I was not precise enough: the equivalence does not follow only from w > x > y > z, we also need to take into account the definitions of the 4 variables. Do you see what I mean?
Ambiguity aversion and reduction of X-risks: A modelling situation

I didn't quite follow. What's the reasoning for claiming this?

From the definition of the four variables, the following equivalence can be deduced:

Benedikt Schmidt · 1y
The reasoning is the following: the agent-neutral values are now denoted by variables instead of numbers. The worst case is represented by z, where the agent neither enjoys the benefits of PAP nor those of RXR. y represents the value yielded by the choice of PAP, whereas x corresponds to the value yielded by the choice of RXR. The best case arises if the agent chooses PAP while RXR is not necessary, since then the agent-neutral value incorporates the benefits of PAP and RXR, amounting to w. Therefore, clearly the following relation holds: w > x > y > z. From there, the equivalence under question follows. Do you agree?
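For what it's worth, if w, x, y, z are any four values (the ordering w > x > y > z isn't even needed for this step), the equivalence of the two difference statements is pure algebra, since both rearrange to the same sum condition:

```latex
% Both statements are equivalent to w + z = x + y:
\begin{align*}
w - x = y - z
  &\iff w + z = x + y \\
  &\iff w - y = x - z
\end{align*}
```

So the substantive content is in the definitions of the four variables (which differences count as "the benefits of RXR"), not in the ordering itself.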
Are many EAs philosophical pragmatists?

Well, I'd say we're all pragmatists whether we acknowledge it or not due to the problem of the criterion.

Management for growing teams

Not exactly based on EA org experience, but I think one of the biggest challenges orgs face is going from small enough that everyone can sit at the same table (people sometimes call these 2-pizza teams, because you can feed everyone with two pizzas; in practice the number is somewhere between 8 and 12) to medium (fewer than 150 people, aka the point at which you can still personally know everyone) to large.

EA orgs are most likely to face the first transition, small to medium. The big thing to know is that you'll have to find ways to take what happened and work... (read more)

[PR FAQ] Adding profile pictures to the Forum

Dislike the idea. Feels like this will change the character of the site in a way that's negative. It's a bit hard to say why, but part of the vibe of this place is that it's about ideas, not about people, and this will take it away from that direction, and I think having more of an idea vibe than a personal brand vibe is good for what this forum is for. There's plenty of other places where people can have a more personally identifiable or warmer experience of connecting with others.

If we did this I feel like it would be trying to optimize for something that's not, in my view, the primary purpose of the forum, and thus would make this site worse at being the EA Forum than without this feature.

[PR FAQ] Sharing readership data with Forum authors

I've been asking for this feature on LW. If we're not going to get it there, at least we can get it here!
