I think EA uses the word in a basically standard way. I imagine there being helpful things to say about "what do we mean by funding infrastructure" or "what kind of infrastructure is the EA Infrastructure Fund meaning to support", but I don't know that there's anything to say in a more general context than that.
Why do you think it's valuable? I don't think we have this norm already, and it's not immediately obvious to me how it would change my behaviour.
I don't think we have a single "landing page" for all the needs of the community, but I'd recommend applying for relevant jobs or getting career advice or going to an EA Global conference, or figuring out what local community groups are nearby you and asking them for advice.
I agree with paragraph 1 and 2 and disagree with paragraph 3 :)
That is: I agree longtermism and x-risk are much more difficult to introduce to the general population. They're substantially farther from the status quo and have weirder and more counterintuitive implications.
However, we don't choose what to talk about by how palatable it is. We must be guided by what's true, and what's most important. Unfortunately, we live in a world where what's palatable and what's true need not align.
To be clear, if you think global development is more important than x-ri... (read more)
I don't buy the asymmetry of your scope argument. It feels very possible that totalitarian lock-in could have billions of lives at stake too, and cause a similar quantity of premature deaths.
apologies if this was obvious from the responses in some other way, but did you consider that the person who gave a 9 might have had the scale backwards, i.e. been thinking of 1 as the maximally uncomfortable score?
I don't understand what you think Holden / OpenPhil's bias is. I can see why they might have happened to be wrong, but I don't see what in their process makes them systematically wrong in a particular way.
I also think it's generally reasonable to form expectations about who in an expert disagreement is correct using heuristics that don't directly engage with the content of the arguments. Such heuristics, again, can go wrong, but I think they still carry information, and I think we often have to ultimately rely on them when there's just too many issues to investigate them all.
(in case anyone else was confused, this was a reply to a now-deleted comment)
I don't know. Partly I think that some of those people are working on something that's also important and neglected, and they should keep working on it, and need not switch.
I think to the extent you are trying to draw the focus away from longtermist philosophical arguments when advocating for people to work on extinction risk reduction, that seems like a perfectly reasonable thing to suggest (though I'm unsure which side of the fence I'm on).
But I don't want people casually equivocating between x-risk reduction and EA, relegating the rest of the community to a footnote.
It's not enough to have an important problem: you need to be reasonably persuaded that there's a good plan for actually making the problem better, the 1% lower. It's not a universal point of view among people in the field that all or even most research that purports to be AI alignment or safety research is actually decreasing the probability of bad outcomes. Indeed, in both AI and bio it's even worse than that: many people believe that incautious action will make things substantially worse, and there's no easy road to identifying which routes are both safe... (read more)
My main criticism of this post is that it seems to implicitly suggest that "the core action relevant points of EA" are "work on AI or bio", and doesn't seem to acknowledge that a lot of people don't have that as their bottom line. I think it's reasonable to believe that they're wrong and you're right, but:
It's been about 7 months since this writeup. Did the Survival and Flourishing Fund make a decision on funding NOVID?
Pointing out more weirdnesses may by now be unnecessary to make the point, but I can't resist: the estimate also seems to equivocate between "number of people alive at any moment" and "number of people in each generation", as if the 900 million population consisted of a single generation that fully replaced itself every 31.125 years. Numerically this only impacts the result by a factor of 3 or so, but it's perhaps another reason not to take it as a serious attempt :)
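To make the equivocation concrete, here's a rough sketch of the two counting conventions. The lifespan figure below is an illustrative assumption of mine, not a number from the original estimate; the point is just that the two conventions differ by a factor of (lifespan / generation gap):

```python
# Two ways to count "total people" over a window of time.
# The lifespan value is an assumption for illustration only.
population = 900_000_000   # people alive at any moment
generation_gap = 31.125    # years between generations (from the estimate)
lifespan = 62.25           # assumed average lifespan (= 2 generation gaps)

years = 1000               # arbitrary time window

# Convention 1: one generation fully replaces itself every 31.125 years.
people_by_generations = population * (years / generation_gap)

# Convention 2: count distinct people, each alive for `lifespan` years.
people_by_lifespans = population * (years / lifespan)

# The discrepancy is exactly lifespan / generation_gap, whatever the
# true lifespan is -- a few-fold factor, as the comment says.
ratio = people_by_generations / people_by_lifespans
print(ratio)  # 2.0 under the assumed lifespan above
```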
Can you give examples of technopessimists "in the wild"? I'm sure there are plenty of examples of "folk technopessimism" but if you mean something more fleshed-out than that I don't think I've seen it expressed or argued for a lot. (That said, I'm not very widely-read, so I'm sure there's lots of stuff out there I don't hear about.)
I see the image now (weirdly, it's a stylized form of https://reductress.com/post/quiz-are-you-even-good-enough-to-have-imposter-syndrome/ )
Don't think it's hosted on the forum, when I right-click and copy image link I get https://scontent-lhr8-1.xx.fbcdn.net/v/t39.30808-6/259629194_10220039871613660_9218217279654834365_n.jpg?_nc_cat=110&ccb=1-5&_nc_sid=825194&_nc_ohc=tnrYKfG2lQ4AX8LlETd&_nc_ht=scontent-lhr8-1.xx&oh=5af3c7d105c83cc6c472526d4573647c&oe=61A320C8 which looks like a Facebook URL.
"if AI has moral status, then AI helping its replicas grow or share pleasant experiences is morally valuable stuff". Sure, but I think the claim is that "most" AI won't be interested in doing that, and will pursue some other goal instead that doesn't really involve helping anyone.
It's a little aside from your point, but good feedback is not only useful for emotionally managing the rejection -- it's also incredibly valuable information! Consider especially that someone who is applying for a job at your organization may well apply for jobs at other organizations. Telling them what is good or bad with their application will help them improve that process, and make them more likely to find something that is the right fit for them. It could be vital in helping them understand what they need to do to position themselves to be more useful... (read more)
In our current hiring round for EA Germany, I'm offering all 26 applicants "personal feedback on request if time allows", and I think it's probably worth my time at least trying to answer as many feedback requests as I can.
I'd encourage other EA recruiters to do the same, especially for those candidates who already did work tests. If you ask someone to spend 2h on an unpaid work test, it seems fair to make at least 5min of time for feedback.
Like Sanjay's answer, I think this is a correct diagnosis of a problem, but I think the advertising solution is worse than the problem.
I'd like to push the opt-in / opt-out suggestion further, and say that the button should only affect people who have opted in (that is, the button bans all the opted-in players for a day, rather than taking the website down for a day). Or you could imagine running it on another venue than the Forum entirely, that was more focused on these kinds of collaborative social experiments.
I can see an argument that this takes away too much from the game, but in that case I'd lean towards just not running it at all. I think it's a cute idea but I don't think it feel... (read more)
I think this correctly identifies a problem (not only is it a bad model for reality, it's also confusing for users IMO). I don't think extra karma points is the right fix, though, since I imagine a lot of people only care about karma insofar as it's a proxy for other people's opinions of their posts, which you can't just give 30 more of :)
(also it's weird inasmuch as karma is a proxy for social trust, whereas nuking people probably lowers your social trust)
Sure, precommitments are not certain, but they're a way of raising the stakes for yourself (putting more of your reputation on the line) to make it more likely that you'll follow through, and more convincing to other people that this is likely.
In other words: of course you don't have any way to reach probability 0, but you can form intentions and make promises that reduce the probability (I guess technically this is "restructuring your brain"?)
Yeah, that did occur to me. I think it's more likely that he's telling the truth, and even if he's lying, I think it's worth engaging as if he's sincere, since other people might sincerely believe the same things.
I downvoted this. I'm not sure if that was an appropriate way to express my views about your comment, but I think you should lift your pledge to second strike, and I think it's bad that you pledged to do so in the first place.
I think one important disanalogy between real nuclear strategy and this game is that there's kind of no reason to press the button. That means that when someone does press it, we don't really understand their motives, which makes it less clear that this kind of comment addresses those motives.
Consider that last time LessWrong wa... (read more)
While I think it's useful to have concrete records like this, I would caution against drawing conclusions about the cultured meat community specifically unless we draw a comparison with other fields and find that forecast accuracy is better anywhere else. I'd expect that overoptimistic forecasts are just very common when people evaluate their own work in any field.
Another two examples off the top of my head:
GiveIndia says donations from India or the US are tax-deductible.
Milaap says they have tax benefits for donations, but I couldn't find a more specific statement, so I guess it's just in India?
Anyone know a way to donate with tax deduction from other jurisdictions? If 0.75x - 2x is accurate, it seems like for some donors that could make the difference.
(Siobhan's comment elsewhere here suggests that Canadian donors might want to talk to RCForward about this).
You've previously spoken about the need to reach "existential security" -- in order to believe the future is long and large, we need to believe that existential risk per year will eventually drop very close to zero. What are the best reasons for believing this can happen, and how convincing do they seem to you? Do you think that working on existential risk reduction or longtermist ideas would still be worthwhile for someone who believed existential security was very unlikely?
It seems plausible that reasonable people might disagree on whether student groups on the whole would benefit from being more or less conforming to the EA consensus on things. One person's "value drift" might be another person's "conceptual innovation / development".
On balance I think I find it more likely that an EA group would be co-opted in the way you describe than an EA group would feel limited from doing something effective because they were worried it was too "off-brand", but it seems worth mentioning the latter as a possibility.
I think this post doesn't explicitly recognize a (to me) important upside of doing this, which applies to doing all things that other people aren't doing: potential information value.
This post exists because people tried something different and were thoughtful about the results, and now potentially many other people in similar situations can benefit from the knowledge of how it went. On the other hand, if you try it and it's bad, you can write a post about what difficulties you encountered so that other people can anticipate and avoid them better.
By contrast, naming your group Effective Altruism Erasmus wouldn't have led to any new insights about group naming.
Bluntly I think a prior of 98% is extremely unreasonable. I think that someone who had thoroughly studied the theory, all credible counterarguments against it, had long discussions about it with experts who disagreed, etc. could reasonably come to a belief that strong. An amateur who has undertaken a simplistic study of the basic elements of the situation can't IMO reasonably conclude that all the rest of that thought and debate would have a <2% chance of changing their mind.
Even in an extremely empirically grounded and verifiable theory like physics, f... (read more)
I agree with Halstead that this post seems to ignore the upsides of creating more humans. If you, like me, subscribe to a totalist population ethics, then each additional person who enjoys life, lives richly, loves, expresses themselves creatively, etc. -- all of these things make for a better world. (That said, I think that improving the lives of existing people is currently a better way to achieve that than creating more -- but I wouldn't say that creating more is wrong).
Moreover, I think this post misses the instrumental value of people, too. To underst... (read more)
The only place where births per woman are not close to 2 is sub-Saharan Africa. Thus, the only place where family planning could reduce emissions is sub-Saharan Africa, which is currently a tiny fraction of emissions.
This is not literally true: family planning can reduce emissions in the developed world if the desired births per woman is even lower than the actual births per woman. But I don't dispute the substance of the argument: it seems relatively difficult to claim that there's a big unmet need for contraceptives elsewhere, and that should determine what estimates we use for emissions.
At least in the US women have been having fewer children than they want for many decades:
As a result, the gap between the number of children that women say they want to have (2.7) and the number of children they will probably actually have (1.8) has risen to the highest level in 40 years.
I buy two of your examples: in the case of masks, it seems clear now that the experts were wrong before, and in "First doses first", you present some new evidence that the priors were right.
On nutrition and lockdowns, you haven't convinced me that the point of view you're defending isn't the one that deference would arrive at anyway: it seems to me like the expert consensus is that lockdowns work and most nutritional fads are ignorable.
On minimum wage and alcohol during pregnancy, you've presented a conflict between evidence and priors, but I don't feel li... (read more)
I don't know if this meets all the details, but it seems like it might get there: Singapore restaurant will be the first ever to serve lab-grown chicken (for $23)
Hmm, I was going to mention mission hedging as the flipside of this, but then noticed the first reference I found was written by you :P
For other interested readers, mission hedging is where you do the opposite of this and invest in the thing you're trying to prevent -- invest in tobacco companies as an anti-smoking campaigner, invest in coal industry as a climate change campaigner, etc. The idea being that if those industries start doing really well for whatever reason, your investment will rise, giving you extra money to fund your countermeasures.
I'm sure... (read more)
I don't buy your counterargument exactly. The market is broadly efficient with respect to public information. If you have private information (e.g. that you plan to mount a lobbying campaign in the near future; or private information about your own effectiveness at lobbying) then you have a material advantage, so I think it's possible to make money this way. (Trading based on private information is sometimes illegal, but sometimes not, depending on what the information is and why you have it, and which jurisdiction you're in. Trading based on a belief that... (read more)
Here are a couple of interpretations of value alignment:
I think your claim is not that "all value-alignment is bad" but rather "when EAs talk about value-alignment, they're talking about something much more specific and constraining than this tame interpretation".
To attempt an answer on behalf of the author. The author says "an increasingly narrow definition of value-alignment" and I think the idea is that seeking "value-alignment" has got narrower and narrower over time and further from the goal of wanting to do good.
In my time in EA value alignment has, among some... (read more)
Though betting money is a useful way to make epistemics concrete, sometimes it introduces considerations that tease apart the bet from the outcome and probabilities you actually wanted to discuss. Here's some circumstances when it can be a lot more difficult to get the outcomes you want from a bet:
As an example, I saw someone claim that the US was facing civil war. Someone else ... (read more)
I don't think this is a big concern. When people say "timing the market" they mean acting before the market does. But donating countercyclically means acting after the market does, which is obviously much easier :)
While I think it's important to understand what Scott means when Scott says eugenics, I think:
a. I'm not certain clarifying that you mean "liberal eugenics" will actually pacify the critics, depending on why they think eugenics is wrong,
b. if there's really two kinds of thing called "eugenics", and one of them has a long history of being practiced by horrible, racist people coercively to further their horrible, racist views, and the other one is just fine, I think Scott is reckless in using the word here. I've never ... (read more)
My response to (b): the word is probably beyond rehabilitation now, but I also think that people ought to be able to have discussions about bioethics without having to clarify their terms every ten seconds. I actually think it is unreasonable of someone to skim someone’s post on something, see a word that looks objectionable, and cast aspersions over their whole worldview as a result.
Reminds me of when I saw a recipe which called for palm sugar. The comments were full of people who were outraged at the inclusion of such an exploitative, unsustainable ingredient.
I'm very motivated to make accurate decisions about when it will be safe for me to see the people I love again. I'm in Hong Kong and they're in the UK, though I'm sure readers will prefer generalizable stuff. Do you have any recommendations about how I can accurately make this judgement, and who or what I should follow to keep it up to date?
Do you think people who are bad at forecasting or related skills (e.g. calibration) should try to become mediocre at it? (Do you think people who are mediocre should try to become decent but not great? etc.)
As someone with some fuzzy reasons to believe in their own judgement, but little explicit evidence of whether I would be good at forecasting or not, what advice do you have for figuring out if I would be good at it, and how much do you think it's worth focusing on?
No one is going to run a prison for free--there has to be some exchange of money (even in public prisons, you must pay the employees). Whether that exchange is moral or not depends on whether it is facilitated by a system that has good consequences.
In the predominant popular consciousness, this is not sufficient for the exchange to be moral. Buying a slave and treating them well is not moral, even if they end up with a happier life than they otherwise would have had. Personally, I'm consequentialist, so in some sense I agree with you, but even the... (read more)
As my other comment promised, here's a couple of criticisms of your model on its own terms:
My instinctive emotional reaction to this post is that it worries me, because it feels a bit like "purchasing a person", or purchasing their membership in civil society. I think that a common reaction to this kind of idea would be that it contributes to, or at least continues, the commodification and dehumanization of prison inmates, the reduction of people to their financial worth / bottom line (indeed, parts of your analysis explicitly ignore non-monetary aspects of people's interactions with society and the state; as far as I can tell, al... (read more)
As an offtopic aside, I'm never sure how to vote on comments like this. I'm glad the comment was made and want to encourage people to make comments like this in future. But, having served its purpose, it's not useful for future readers, so I don't want to sort it to the top of the conversation.
The number of possible pairs of people in a room of n people is about n^2/2, not n factorial. 10^2 is many orders of magnitude smaller than 10! :)
(I think you are making the mistake of multiplying together the contacts from each individual, rather than adding them together)
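To make the comparison concrete, a quick check (this snippet is mine, not from the original thread):

```python
import math

# Pairwise contacts among n people: each unordered pair counted once.
def num_pairs(n):
    return math.comb(n, 2)  # n*(n-1)/2, roughly n^2/2 for large n

print(num_pairs(10))        # 45
print(math.factorial(10))   # 3628800 -- about 5 orders of magnitude larger
```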