Hmm, I think these arguments comparing to other causes are missing two key things:
Here's an example of how that plays out. From my perspective, the value of the very large number of potential future lives dwarfs basically everything else. Like the value of worrying about most other things is close to 0 when I run the numbers. So in the face of those numbers, working on anything other than mitigating x-risk is basically equally bad from my perspective because that's all missed oppor... (read more)
To your footnote, I'm not sure how many people are directly uncomfortable, but I do find arguments that roughly boil down to "but what about Nazis?" lazy as they try to run around the discussion by pointing to a thing that will make most readers go "Nazis bad, I agree with whatever says 'Nazis bad' most strongly!". This doesn't mean thinking Nazis are bad is an unreasonable position or something, only that it looms so large it swamps many people's ability to think clearly.
Rationalists tend to taboo comparing things to Nazis or using Nazis as an example for ... (read more)
I'd bite the bullet and say "yes". I disagree with Nazism, but to be intellectually consistent I have to accept that even beliefs about what is good that I find personally unpalatable deserve consideration. This is very similar to my stance on free speech: people should be allowed to say things that I disagree with, and I'm generally in favor of making it easier for people to say things, including things I disagree with.
To your point about not caring about the difference between good and evil, this sort of misses the point I'd like to make. How do you know... (read more)
There is (or, at least, ought to be) a big gap between "considering" a view and "allying" with it. If you're going to ally with any view no matter its content, there's no point in going to the trouble of actually thinking about it. Thinking is only worthwhile if it's possible to reach conclusions that differ depending on the details of what's considered.
Of course we're fallible, but that doesn't entail radical skepticism (see: any decent intro philosophy text). Whatever premises you think lead to the conclusion "maybe Nazism is okay after... (read more)
I've edited my post to make it clear I think this is an off topic discussion within the context of this question. I think it's fine for this comment to stay because it was there before I made this clarification, but I have asked the moderators to convert this from an answer to a proper comment.
I don't think it actually has (1).
Engaged Buddhism is, as I see it, best understood as a movement among Western liberals who are also Buddhists, and as such is primarily infused with Western liberal values. These are sometimes incidentally the best way to do good, but unlike EA they don't explicitly target doing the most good; they instead uphold an ideology that values things like racial equality, human dignity, and freedom of religion (including freedom to reject religion).
As for (2), I'm not sure how much there is to learn. There's likely some things, b... (read more)
I think there's some case for specialization. That is, some people should dedicate their lives to meditation because it is necessary to carry forward the dharma. Most people probably have other comparative advantages. This is not a typical way of thinking about practice, but I think there's a case to be made that we could look at becoming a monk, for example, as a case of exercising comparative advantage as part of an ecosystem of practitioners who engage in various ways based on their comparative abilities (mostly focused on what they could be doing in the... (read more)
A couple comments.
First, I think there's something akin to creating a pyramid scheme for EA by leaning too heavily on this idea, e.g. "earn to give, or better yet get 3 friends to earn to give and you don't need to donate yourself because you had so much indirect impact!". I think david_reinstein's comment is in the same vein and good.
Second, this is a general complaint about the active/passive distinction that is not specific to your proposal but since your proposal relies on it I have to complain about it. :-)
I don't think the active/passive distinction is... (read more)
Maybe I can help Chris explain his point here, because I came to the comments to say something similar.
The way I see it, neartermists and longtermists are doing different calculations and so value money and optics differently.
Neartermists are right to be worried about spending money on things that aren't clearly impacting measures of global health, animal welfare, etc. because they could in theory take that money and funnel it directly into work on that stuff, even if it had low marginal returns. They should probably feel bad if they wasted money on a big ... (read more)
to the fall of US democracy and a party that has much worse views on almost every subject under most moral frameworks.
This seems like a pretty partisan take and fails to adequately consider metaethical uncertainty. There's nothing about this statement that I couldn't imagine a sincere Republican with good intentions saying about Democrats and being basically right (and wrong!) for the same reasons (right assuming their normative framework, wrong when we suppose normative uncertainty).
While I don't want to suggest that you or anyone else who feels this way has an obligation to work for the GOP, part of the reason they are able to be hostile to various groups is because those groups are not part of how they get elected. If tomorrow the GOP was dependent on LGBTQ votes to win elections, they'd transform into a different party.
So while I'm not expert enough here to see how to change the current situation, I think there is something interesting about changing the incentive gradients for both parties to make them both more inclusive (both construct an outgroup: minorities and foreigners for the GOP, rural and working-class white people for the Democrats), and I expect that to have positive outcomes.
The more I practice, the more I've come to believe that the only thing that really matters is that you do it. Not that you do it well by whatever standard one might judge, but just that you do it. 30 minutes of quiet time is a foundation on which more can be explored and discovered. You don't have to sit a special way, do a special thing with your mind, or do anything else in particular for it to be worth the effort, although all those things can help and are worth doing if you're called to them!
You should totally learn a bunch of techniques or practice a... (read more)
What does this funding source do that existing LT sources don’t?
Natural followup: why a new fund rather than convince an existing fund to use and emphasize the >0.0.1% xrisk reduction criterion?
I think there's a pretty smooth continuum between an entirely new fund and an RFP within an existing fund, particularly if you plan to borrow funders and operational support.
I think I a) want the branding of an apparent "new fund" to help make more of a splash and to motivate people to try really hard to come up with ambitious longtermist projects, and b) to help skill up people within an org to do something pretty specific. You also shave off downside risks a little if you aren't institutionally affiliated with existing orgs (but get advice in a way that decreases unilateralist-y bad stuff).
Even if he wants to do that, his power is not absolute. I'd expect/hope for his generals to step in if he tries something like that, perhaps using it as reason for a coup.
I'm not super worried. Maybe this is because I am old enough that I grew up with a perception that nuclear war could happen at any time and unexpectedly kill us all. The current threat level feels like a return to the Cold War: something could happen, but MAD still works, and Putin, like everyone else, doesn't really have anything to gain from all-out nuclear war, but does have something to gain from playing chicken. So we should expect a lot of posturing but probably no real action, except by accident.
I think the largest risk of nuclear weapons comes from... (read more)
Yes, I suppose I left out non-English. I should have more properly made my claim that growth has slowed in English-speaking countries where the ideas have already had time to saturate and reach more of the affected people.
I forget where I got this from. I'm sure I can dig something up, but I seem to recall other posts on this forum showing that the growth of EA in places where it was already established had slowed.
It's unclear to me we've really investigated deeply enough to say that. We just know these factors matter, but it still seems quite possible that lots of other factors matter or that those other factors cause these two.
I don't mean to be rude, but this feels a bit like a non-result, since, as your conclusion puts it, effective altruists are basically people who like to act altruistically and like to be effective. Also seems not surprising that there's a small confluence of the two based on the fact that EA growth has slowed after quickly reaching most of the people who were going to be interested in it. It's nice to have some studies to back up the anecdotes powering the Bayesian evidence we already had about these claims, but am I correct that this is basically what you found?
More info always seems better, but maybe it's not useful here?
My thinking is that perhaps all the gaps worth filling are already well known and being addressed roughly as soon as they become overdetermined. Other gaps maybe aren't worth addressing because the expected value of doing so is low. More info might help identify the marginal gap, but if there's something like a power law distribution of gaps in terms of expected value of filling them then we've likely already identified all the best ones to fill and the rest are the long tail where differences don't matter much and people should fill based on other criteria.
I often think of it as EA being too conservative rather than having a culture of fear, and maybe those are different things, but here's some of what I see happening.
People reason that EA orgs and people representing EA need to be respectable because this will later enable doing more good. And I'd be totally fine with that if every instance of it was clearly instrumental to doing the most good.
However, I think this goal of being respectable doesn't take long to become fixed in place, and now people are optimizing for doing the most good AND being respectabl... (read more)
Life goals and life plans seem to me to sit somewhere between Heidegger's Sorge (both feel to be like aspects of Sorge) and general notions of axiology (life goals and life plans seem like a model of how axiology gets implemented). Curious if that resonates with what you mean by life goals and life plans.
I don't know if someone has posted this before, but it would be good to compare this to the idea of running for other political offices. For example, maybe a lot could be achieved as a senator or representative rather than as president, and those seem like easier jobs to get.
Since I originally wrote this post I've only become more certain of the central message, which is that EAs and rationalist-like people in general are at extreme risk of Goodharting ourselves. See for example a more recent LW post on that theme.
In this post I use the idea of "legibility" to talk about impact that can be easily measured. I'm now less sure that was the right move, since legibility is a bit of jargon that, while it's taken off in some circles, hasn't caught on more broadly. Although the post deals with this, a better version of this post might... (read more)
I don't think I ever heard anyone use the phrase "hard-core EAs" or if I did it just passed by without note, but now that I bother to think about it I actually think it's really apt!
The etymology of hardcore has been a bit lost over the years. Here's what etymonline says:
also hard-core; 1936 (n.); 1951 (adj.); from hard (adj.) + core (n.). Original use seems to be among economists and sociologists, in reference to unemployables. Extension to pornography is attested by 1966. Also the name of a surfacing material.
Merriam-Webster seem to think it's a bit olde... (read more)
I can only speak for myself, but assuming my experience generalizes, this means lots of people will miss out on what you have to say. Since you don't have a prior belief that posts by you are worth reading and this post has a vague title that could be about any number of things, it makes it hard to consider it worth the time to invest in reading. So just purely from the pragmatic point of view, I estimate a summary would help get more people to read.
The irony is that EdoArad and myself have probably now spent enough time engaging with comments on this post... (read more)
Friendly suggestion: a summary might help. I briefly skimmed this but was really hoping for a summary. These are often helpful to help readers like me to decide to invest time in a post or not.
I think what's great about Free Guy is that the AI part is not the center of the plot most of the time. Rather it's a story about some characters who find themselves in some unusual circumstances. That might not seem much different, but compare typical AI films that spend a lot of time being about AI rather than the characters. By being character-focused, I think it delivers on ideas better than most idea movies that get so caught up in the ideas they forget to tell a good story.
As you've noticed, the root of good and bad lies with individual preferences and values. What is good is "merely" that which satisfies our desires at the lowest levels (perhaps what is good is what is least surprising to us, if you buy the predictive processing model of the brain). I put "merely" in scare quotes, though, because it's not so mere as it seems. This is in fact the root of all that matters to us in the world.
It's normal, when first noticing that good and bad rest on something so subjective as what individuals like, to feel a sense of disease be... (read more)
I like this idea a lot. I spent O($1k) on giftcards this year from tisbest instead of giving more traditional gifts. This is nice in multiple ways: this is way more than I would have spent on regular gifts, and each person gets the chance to give to something they care about. And selfishly I get a tax deduction (although I would have gotten it anyway since most of this money would have been donated anyway) and get to push my agenda on family that giving money is good (this doesn't seem like the worst thing in the world, but I'll take it for what it is: I'm... (read more)
Note: Sorry for not creating this as an event post, but I can't do that yet, and this is time sensitive so I created it as a regular post.
Fund weird things: A decent litmus test is "would it be really embarrassing for my parents, friends or employer to find out about this?" and if the answer is yes, more strongly consider making the grant.
Things don't even have to be that weird to be things that let you have outsized impact with small funding.
A couple examples come to mind of things I've either helped fund or encouraged others to fund that for one reason or another got passed over for grants. Typically the reason wasn't that the idea was in principle bad, but that there were trust issues wit... (read more)
This is basically my own experience. I worked a bunch on AI independent research, but now I don't really because it just doesn't make sense: I have way more opportunity to make money to do more good than any direct work I could do, in my estimation, so I just double down on that.
(For context I'm on the higher end of technical talent now: 12 years of work experience, L7-equivalent, in a group tech lead role, and if I can crank up to L8 the potential gains are quite large in terms of comp that I can then donate.)
I also really like the platform this uses, Tisbest. This year I decided to do all my Xmas giving by giving Tisbest cards to folks so they can make donations to places of their choosing. I think it's a nice way to spread the spirit of giving with folks, and it's a great chance to talk about EA if anyone asks "what should I donate it to?".
I don't want this to seem like it's directed at this post in particular, but more at a general class of things one sees on the EA Forum, and this post just happened to finally trigger the thought for me.
Calls to action like this for things that aren't broadly accepted as core EA areas would benefit substantially from including links reminding us why we should care about this.
Like, if someone posts about x-risk or global poverty or animal welfare or something like that, I'm like, sure, seems on topic and relevant to EAs because there's broad agreement that this thing ... (read more)
My own experience is that there's a sweet spot. Big tech companies only really offer high compensation for the most experienced and capable employees. If there are 10 levels and you're not at least at level 8, a big company is probably not, in my own informal analysis, likely to offer you the best compensation in expectation. Some of this is simply because these folks have high opportunity costs, and the only way to get them as employees is to pay them enough that it balances off against what they would likely do instead: start a company.
If you're in the middl... (read more)
Many people want the world to be better.
I feel like there's a lot of people who take this desire for a better world and then hope that they will be the one to make it all better. Maybe they'll discover some grand idea that will improve many things and lead us to salvation!
I don't think that's what we need though. We mostly need all us little people to just be a bit nicer, a bit more trusting, a bit more compassionate, and then not quite so many grand schemes will be required because we'll find we're already living in a better world.
Thanks for your reply. Helps make a case that parliaments do something above and beyond the culture/tradition in which they are situated.
That said, I do want to respond to one thing you said:
Some would say that the aspects that matter are issues like trust, low corruption, respect of property rights, etc. But are there any cultures which do not value those things, which claim they are outright undesirable? I don't think there are.
Up until 2 days ago I likely would have shared this sentiment, but I was talking with someone who grew up in Romania and as he p... (read more)
I'm sure this is addressed in the book I haven't read, but I wonder how much of this is confounded by former British rule. That is, if you factor out parliamentary systems that were established after a legacy of British rule, would it still be the case that parliaments are better?
I'm guessing the argument is "yes", but I'm not sure and am somewhat suspicious that some of these effects could be cultural ones that just happen to come along with parliaments, making parliamentarism an effect rather than a cause.
I think of it as coming from two angles. One is that it's a form of community building to expose folks to EA ideas who might otherwise not engage with them by doing so in a language they are familiar with. Two, it's a way for EAs who are religious to explore how EA impacts other spheres of their life.
I think it's also nice to have community by creating a sense of belonging. With EA being such a secular space normally, having a way to learn you're not the only one trying to combine EA and practice of a religion is nice. Good to have folks to talk to, etc.
Woo, as the person running Buddhists in EA, really excited to see more groups like this! At this point there's enough of us (3 groups) that maybe it's time to start thinking about an EA Interfaith group. :-)
This is pretty long. Is there something like an abstract or executive summary of the post? Skimming a few of the expected places didn't feel like I was quite getting that without reading the whole thing.
Hi Gordon, I think by reading the 'challenging assumptions and why we think the current risk may be underappreciated' and 'Conclusions and the future' sections, you'll get a summary of most of the main points.
True, but what you can do is have explicit values that you publicize and then ask candidates questions that assess how much they support/embody those values. Then you can reasonably say "rejected candidate because they didn't demonstrate value X" and have notes to back it up, or say "rejected because demonstrated ~X". This is harder feedback for candidates to hear, especially if X is something positive that everyone believes describes them, like "hard working", but at the same time it should be made clear this isn't about what's true about the candidate, only what could be determined from their interview performance.
My vague understanding is that there's likely no legal issues with giving feedback as long as it's impartial. It's instead one of those things where lawyers reasonably advise against doing anything not required since literally anything you do exposes you to risk. Of course you could give feedback that would obviously land you in trouble, e.g. "we didn't hire you because you're [ethnicity]/[gender]/[physical attribute]", but I think most people are smart enough to give feedback of the form "we didn't hire you because legible reason X".
And it's quickly becom... (read more)
For many of the breakdowns it would be helpful to understand the base rate in those countries to understand what the data means. For example, gender is easy enough since the base rate is usually close to 50/50, but for things like race I have no idea how many people identify as white, black, asian, etc. in each region to compare against. I realize not everything has a base rate to compare against, but for those that do having that data would really help contextualize what's going on here.
I guess I don't understand why w > x > y > z implies w - y = x - z iff w - x = y - z. Sorry if this is a standard result I've forgotten, but at first glance it's not totally obvious to me.
I didn't quite follow. What's the reasoning for claiming this?
From the definition of the four variables, the following equivalence can be deduced:

w − y = x − z ⟺ w − x = y − z
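For what it's worth, the equivalence is just a rearrangement, and it doesn't actually use the ordering w > x > y > z; a sketch of the algebra:

```latex
\begin{align*}
w - y = x - z
  &\iff (w - y) - (x - z) = 0 && \text{move everything to one side} \\
  &\iff (w - x) - (y - z) = 0 && \text{regroup: swap the roles of $x$ and $y$} \\
  &\iff w - x = y - z
\end{align*}
```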
Well, I'd say we're all pragmatists whether we acknowledge it or not due to the problem of the criterion.
Not exactly based on EA org experience, but I think one of the biggest challenges orgs face is going from small enough that everyone can sit at the same table (people sometimes call these 2 pizza teams, because you can feed everyone with two pizzas; in practice the number is somewhere between 8 and 12) to medium (less than 150 people, aka the point at which you can personally know of everyone) to large.
EA orgs are most likely to face the first transition, small to medium. The big thing to know is that you'll have to find ways to take what happened and work... (read more)
Dislike the idea. Feels like this will change the character of the site in a way that's negative. It's a bit hard to say why, but part of the vibe of this place is that it's about ideas, not about people, and this would pull it away from that direction; I think having more of an idea vibe than a personal-brand vibe is good for what this forum is for. There are plenty of other places people can have a more personally identifiable or warmer experience of connecting with others.
If we did this I feel like it would be trying to optimize for something that's not, in my view, the primary purpose of the forum, and thus would make this site worse at being the EA Forum than without this feature.
I've been asking for this feature on LW. If we're not going to get it there, at least we can get it here!