All of Khorton's Comments + Replies

How would you run the Petrov Day game?

I imagined you would get people to volunteer in advance of Petrov Day and then choose who you trust from the list of volunteers (or trust all of them to collaborate, dealer's choice)

But I really love the idea of people saying "I care about preserving humanity, I'm committed to the values of prudence and rationality, and I want to take part in observing this holiday". I would love to see that group of people in action.

Habryka (2 points, 1h): I do know that people are busy and easily distracted and probably wouldn't sign up in advance, even if they would like to participate, based on my past experience of generally getting people to do things. I do think we could build this list over multiple years, though. While I previously thought that maybe the right choice is to just not sign up to volunteer, if you are in favor of the ritual there is an argument that you should sign up to be a volunteer: if you don't, we might have to pick someone who is more likely to press the button instead of you. That creates a decent incentive, which I hadn't considered before. But I'm overall still concerned about people just not noticing the email and opt-in process until the day comes and being sad they weren't considered (or the ritual not happening at all because not enough people who are actually unlikely to press the button opted in).
Clarifying the Petrov Day Exercise

That's genuinely fine with me, I hope all the people who decide to embrace this shared ritual find it meaningful and fulfilling :)

Edit: Also, then I would have three options: opting out, opting in and cooperating, or opting in and defecting, which was exactly my point. Here the only way to signal that I'm not interested is by blowing up LessWrong.

Clarifying the Petrov Day Exercise

Ah yes. I don't consider ignoring the email to be opting out. As soon as I've read the email, inaction is one of my two options. If I delete the email, there will be a post that says something like, "We won! Everyone we emailed decided to cooperate!" even though I didn't choose to cooperate; I wasn't even sure if I wanted to be involved in the first place.

Larks (6 points, 1h): If you had to opt in, not opting in would be one of your two options. If you don't opt in, there would be a post that says something like, "We won! Everyone who could opt in decided to cooperate!"
Honoring Petrov Day on the EA Forum: 2021

It could be seen as uncooperative or pushing things too far, like "for what value of donations should an Effective Altruist let me punch them?"

Hmm, I guess I find it strange. To me, asking this question is part of taking this ritual seriously, i.e. how valuable is this ritual to maintain?

Honoring Petrov Day on the EA Forum: 2021

If it's not permissible for me to shut down the site, why is it permissible for Aaron to send unsolicited emails to 100 people inviting them to shut it down?

WilliamKiely (7 points, 9h): He didn't invite anyone to shut it down. He simply gave more people the power to shut it down than already had it and invited us to practice not using that power. (I think this was permissible.) But for the sake of argument, even if Aaron did invite us to shut it down, that would not mean that Aaron's action was necessarily permissible. Maybe it would be, since service providers have the right to stop providing services, but when the stakes are sufficiently high, suddenly deciding to stop providing a service in a way that harms all your customers seems unethical (e.g. if Bezos, or whoever else has the authority at Amazon, decided to just shut down Amazon without warning).
Honoring Petrov Day on the EA Forum: 2021

I know we're trying to remember when the US and USSR had their weapons pointed at each other but it feels more like the North and South islands of New Zealand are trying to decide whether to nuke each other!

Edit: Not even something so violent - just temporarily inconvenience each other

WilliamKiely (5 points, 11h): I like Ben Pace's response from the linked post: Just because the site admins gave us the ability to shut down the site does not mean that it is harmless or permissible to do so. Even if they were to tell us it's a game and it's permissible to do so (which they did not) that still would not make it harmless nor necessarily permissible. The stakes still affect the permissibility regardless of what they were to say.
Why I am probably not a longtermist

To me that doesn't sound very different from "I want a future with less suffering, so I'm going to evaluate my impact based on how far humanity gets towards eradicating malaria and other painful diseases". Which I guess is consistent with my views but doesn't sound like most long-termists I've met.

UriKatz (1 point, 2d): Well, it wouldn't work if you said "I want a future with less suffering, so I am going to evaluate my impact based on how many paper clips exist in the world at a given time". Bostrom selects collaboration, technology and wisdom because he thinks they are the most important indicators of a better future and reduced x-risk. You are welcome to suggest other parameters for the evaluation function, of course, but not every parameter works. If you read the analogy to chess in the link I posted, it will become much clearer how Bostrom is thinking about this. (If anyone reading this comment knows of evolutions in Bostrom's thought since this lecture, I would very much appreciate a reference.)
Why I am probably not a longtermist

I'm not Denise, but I agree that we can and will all affect the long-term future. The children we have or don't have, the work we do, and the lives we save will all affect future generations.

What I'm more skeptical about is the claim that we can decide /how/ we want to affect future generations. The Bible has certainly had a massive influence on world history, but it hasn't been exclusively good, and the apostle Paul would have never guessed how his writing would influence people even a couple hundred years after his death.

UriKatz (7 points, 2d): Hi Khorton, if by "decide" you mean control the outcome in any meaningful way, I agree, we cannot. However, I think it is possible to make a best-effort attempt to steer things towards a better future (in small and big ways). Mistakes will be made, progress is never linear and we may even fail altogether, but the attempt is really all we have, and there is reason to believe in a non-trivial probability that our efforts will bear fruit, especially compared to not trying or to aiming towards something else (like maximum power in the hands of a few). For a great exploration of this topic I refer to this talk by Nick Bostrom: []. The tl;dr is that we can come up with evaluation functions for states of the world that, while not yet being our desired outcome, are indications that we are probably moving in the right direction. We can then figure out how we get to the very next state, in the near future. Once there, we will chart a course for the next state, and so on. Bostrom singles out technology, collaboration and wisdom as traits humanity will need a lot of in the better future we are envisioning, so he suggests we measure them with our evaluation function.
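The "evaluation function" idea described above can be sketched concretely, in the spirit of the chess analogy. This is only an illustration: the trait names, weights, and numbers below are hypothetical choices for the sketch, not values from Bostrom's talk.

```python
# Score intermediate world states by traits thought to indicate movement
# toward a better long-term future, then steer toward the higher-scoring
# candidate "next state" (like picking the chess move with the best evaluation).

def evaluate_state(state, weights=None):
    """Score a world state (dict of trait -> level in [0, 1]) as a weighted sum."""
    weights = weights or {"technology": 1.0, "collaboration": 1.0, "wisdom": 1.0}
    return sum(w * state.get(trait, 0.0) for trait, w in weights.items())

# Two hypothetical near-future states we could try to move toward:
status_quo = {"technology": 0.6, "collaboration": 0.4, "wisdom": 0.3}
candidate = {"technology": 0.6, "collaboration": 0.5, "wisdom": 0.35}

# Steer toward whichever state scores higher, then repeat from there.
better = candidate if evaluate_state(candidate) > evaluate_state(status_quo) else status_quo
```

The point of the sketch is that the function never needs to describe the final desired outcome, only to rank nearby states so we can keep moving in roughly the right direction.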
evelynciara's Shortform

EA Global normally has an EA career fair, or something similar

Effective Volunteering

A starting point for this view is that volunteering isn't free for the charity you want to help:

This Guardian article gives an overview of the most common criticisms of volunteering overseas, including links to articles that go into more depth:

The Importance-Avoidance Effect

Great article on an important and engaging topic :)

If I find myself really paralyzed on a topic I have a thought experiment I go to: What would happen if I HAD to delegate this to someone else? I pick a specific person (say, an intern at my company) and think through how I would adapt the project so that she could complete it, or at least contribute to it. Often when I've finished my thought experiment I realize that the "easy" version of the task I would delegate to someone else is exactly what I should be doing.

The motivated reasoning critique of effective altruism

Fwiw my suggestions for how to act under conditions where you know your reasoning is biased are:

  1. Follow common-sense morality
  2. Have deep trusting relationships with people who disagree with each other (e.g. being a member of the EA community while working for a traditional philanthropist, seeking out a mentor who's made a significant impact in their local community and another who's made a global impact, having some friends who work in big institutions and others who are maverick entrepreneurs)
Linch (2 points, 10d): I'm suspicious of 1), especially if taken too far, because I think it would justify way too much complacency in worlds where foreseeable moral catastrophes are not only possible but probable [].
The motivated reasoning critique of effective altruism

Thanks for posting Linch. I think I've always assumed a level of motivated reasoning or at least a heavy dose of optimism from the EA community about the EA community, but it's nice to see it written up so clearly, especially in a way that's still warm towards the community.

EA Forum Creative Writing Contest: $10,000 in prizes for good stories

I think you're right - I can't find anything under "fair use" that involves pasting someone else's story onto the Forum without their permission, even if you link back to it.

I don't understand how "the exception is writing covered by copyright". All writing is covered by copyright!

Linch (7 points, 13d): Not writing published 70 years after the author's death, if I understand correctly. (Which is not a hypothetical example if people are planning to excerpt Kipling.)
First vs. last name policies?

I normally go for first names, especially with casual, down-to-earth people.

I feel Mr/Ms is for showing respect in a formal setting, like if I'm applying for a job at a traditional hierarchical organisation.

Mathieu Putz (1 point, 15d): Thanks! That's super helpful.
JP's Shortform

Sorry I missed that! My bad

Giving Green: An early investigation into the impact of insider and outsider policy advocacy on climate change

I was curious about your methodology for your research project - thanks for linking.

Could you share a bit about how much research or existing expertise went into setting your priorities for the year? I only skimmed the document so I wasn't totally clear. It sounded like the importance of the different areas was weighted most heavily, and that was mostly identified by staff's intuitions from having worked in the area for a while?

Dan Stein (1 point, 16d): As outlined in the document, the ranking/prioritization was done internally by Giving Green staff, based on our experience working in the space, conversations with a wide array of experts working on various parts of the climate issue, and reviews of public documents. I agree it's probably not the most robust procedure, but it was meant mostly to limit the scope of our search task to make it manageable given the size of our team. In 2021 we're taking some different tactics in an attempt to improve our methods. For our US work we're diving much more deeply into some sector analysis (particularly activism) to make a clearer yes/no case for inclusion. We'll post more about that soon. In Australia, we're taking a different tactic of doing a systematic quantitative and qualitative survey of experts, using the ITN framework. Going forward, we're going to try to integrate the best of these different tactics into a set of best practices for future years.
JP's Shortform

One thing that hasn't been mentioned here is vacation time and sabbaticals, which would presumably be very useful for a fresh perspective!

nonn (1 point, 20d): Yeah I agree that's pretty plausible. That's what I was trying to make an allowance for with "I'd also distinguish vacations from...", but worth mentioning more explicitly.
Concern about the EA London COVID protocol

A cynical person might see your post as asking CEA to do extra work for very little potential gain, because most people involved in EA are already pretty careful about Covid. So I guess that's where the negative reaction could be coming from - it sounds like you don't trust individual EAs or the event organizers to e.g. use hand sanitizer unless it's been written down somewhere that people will use hand sanitizer.

Frank Feedback Given To Very Junior Researchers

All my psych classes and management training have agreed so far that shit sandwich style feedback is ineffective because either people only absorb the negative or only absorb the positive. (This is more true if you have an ongoing relationship with someone - if you're giving one-off feedback I guess you have no choice!)

I recommend instead framing conversations around someone's goals. Framing feedback as advice to help someone meet their goals helps me to give more useful information and them to absorb it better, for example "Hiring managers will be looking... (read more)

NunoSempere (4 points, 24d): Re: the edit, I've added an additional paragraph to make that particular point slightly less biting. Also, thanks for the point about framing things in terms of people's goals.
Promoting Simple Altruism

That's a beautiful story, thanks for sharing

Gifted $1 million. What to do? (Not hypothetical)

That's wonderful news. I imagine your financial advisor will talk to you about when and how to donate the stock, so I'll just share a couple of considerations about which charities to give to:

-One charity or multiple charities? On the one hand, committing to give to one charity at a time forces you to be quite rigorous with your evaluation, and making larger donations would probably reduce the difficulty of donating your stocks. On the other hand, some people like to "diversify" their donations by giving to multiple organisations, some riskier and some better establis... (read more)

Khorton (2 points, 1mo): For some non-EA further reading, here's a report I really liked that looks at how charities prefer to work with major donors like you: []
Forecasting transformative AI: what's the burden of proof?

Strongly agree - the final paragraph rubbed me the wrong way because it equated "the most important century" with "people needing to take action to save the world"

Charity teaching people to learn & form knowledge effectively

I think most people who aren't able to successfully run a business would also struggle to successfully run a charity - there's a lot of overlapping skills required. If I were considering donating I would want to feel confident the founding team had the relevant skills and experience!

Linch (2 points, 1mo): I want to second that there aren't many people who I'd be excited to have start a charity who couldn't also start a business. ETA: But they do exist, and EA should arguably encourage more people like them!
Charity teaching people to learn & form knowledge effectively

I think helping people be better at learning and working can be very impactful, but instead of a charity, why not make it a business? Corporations would definitely pay for this if it's high quality. You could then do pro bono work for charities.

alexanderklarge (1 point, 1mo): I have considered this and set up a basic website [] with the idea of starting with free Zoom 1:1s (to iterate and learn), then moving to paid (low-cost) 1:1s, then cohorts, then one day corporations, etc. My main dissuaders right now are startup costs, early investment, and how to actually run a business. Definitely something I'm considering though! I want to do some research into the potential impact of a corporation vs a non-profit, as mentioned in Linch's comment. I got briefly excited that "Charity Entrepreneurship" could provide a grant, but they have specific problem areas; I will look into other funding means for it as a non-profit venture...
Linch (3 points, 1mo): There are strong theoretical reasons [] against corporations investing optimal amounts in job-general training, fwiw.
Open Thread: August 2021

Nice to meet you Conrad! EA London is starting to host more in person events - you should come hang out when you're in London!

Denise_Melchin's Shortform

Hm, I agree that the most impactful careers are competitive, but the different careers themselves seem to require very different aptitudes and abilities so I'm not sure the same small group would be at the top of each of these career trajectories.

For example when Holden* talks about options like becoming a politician, doing conceptual research, being an entrepreneur, or skillfully managing the day-to-day workings of an office I just don't see the same people succeeding in all of those paths.

In my view the majority of people currently involved in EA could d... (read more)

Denise_Melchin (6 points, 1mo): I agree with this. But I think adding all of these groups together won't result in much more than the top 3% of the population. You don't just need to be in the top 3% of the population in terms of ability/aptitude for ML research to be an AI safety researcher; it will be much more selective. Say it's 0.3%. The same goes for directing global aid budgets efficiently. While these paths require somewhat different abilities/aptitudes, proficiency in them will be highly correlated. I don't disagree with this, but this is not the bar I have in mind. I think it's worth trying your aptitude for direct work even if you are likely not in the top ~3% (often you won't even know where you are!), but with the expectation that the majority of your impact may still come from your donations in the long term.
Khorton's Shortform

Doom: The Politics of Catastrophe by Niall Ferguson examines the way governments have handled catastrophes in the past, with widely varying results.

Nathan Young (5 points, 2mo): I enjoyed his podcast with Tyler Cowen on it, which touches on AI risk []: "FERGUSON: I think the problem is that we are haunted by doomsday scenarios because they're seared in our subconscious by religion, even though we think we're very secular. We have this hunch that the end is nigh. The world is going to end in 12 years, or no, it must be 10. So I think part of the problem of modernity is that we're still haunted by the end time. We also have the nasty suspicion — this is there in Nick Bostrom's work [] — that we've created a whole bunch of technologies that have actually increased the probability rather than reduced the probability of an extinction-level event. On the other hand, we're told that there's a singularity in prospect when all the technologies will come together to produce superhuman beings with massively extended lifespans and the added advantage of artificial general intelligence. The epistemic problem, as I see it — Ian Morris [] wrote this in one of his recent books — is: which is the scenario? Extinction-level events or the singularity? That seems a tremendously widely divergent set of scenarios to choose from. I sense that — perhaps this is just the historian's instinct — each of these scenarios is, in fact, a very low probability indeed, and that we should spend more time thinking about the more likely scenarios that lie between them. Your essay [], which I was prompted to read before our conversation, about the epistemic problem and consequentialism set me thinking about work I'd done on counterfactual history [], for which I would have benefited from reading that essay sooner. I think that if you ask what are
How are resources in EA allocated across issues?

My impression is that there's a lot of funding available for stuff other than global health, but not a lot of great places to spend it at the moment. So finding a charity with a robust theory of change for improving the long-term future and donating there can be very valuable - and starting something like that would be even more valuable! - but I'm less sure about the value of taking money you would have spent on bednets and donating it to the Long-Term Future Fund (or at least I'd recommend reviewing their past grants first).

Disclaimer: I have a much higher bar for funding long-termist charities than many other EAs.

Miranda_Zhang (4 points, 2mo): This is a good point! I actually redirected the funding more towards EA Infrastructure instead of the Long-Term Future Fund, partly since my giving acts as a way of diversifying my investments (as I'm investing time in building a career oriented towards more longtermist goals), and partly because my existing donations are much smaller relative to what I'm investing to give later on (and hopefully we'll have more longtermist charities then). I really appreciate you highlighting the different implications one could draw from funding disparities.
Most research/advocacy charities are not scalable

I strong upvoted this because I think it's really important to consider in what situations you should NOT try to develop these kinds of skills!

Most research/advocacy charities are not scalable

I wanted to avoid double-counting, so I didn't want to say "both GiveWell and GiveDirectly can absorb $100M" when actually it's the same $100M - that's why I excluded regranting

Is anyone in EA currently looking at short-term famine alleviation as a possible high-impact opportunity this year?

I don't have an answer to this, but if you find something or do a bit of digging into this yourself, please share it here!

How to Train Better EAs?

Some trainable things I think would help with grantmaking:

-knowledge of the field you're making grants in

-making a simple model to predict the expected value of a grant (looking for a theory of change, forecasting the probability of different steps, identifying the range of possible outcomes)

-best practices for identifying early signs a grant won't be worth funding, to save time, without being super biased against people you don't know or from a different background to you who eventually could do good work

-giving quality feedback to successful and unsucces... (read more)
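The second item above, a simple model of a grant's expected value, can be made concrete with a small sketch. Everything here is a hypothetical illustration: the theory-of-change steps, probabilities, and outcome value are invented for the example, and it assumes the simplest case where every step must succeed for the grant to pay off.

```python
# A minimal expected-value model for a grant: multiply the probability of
# each step in the theory of change, then multiply by the value of success.

def grant_expected_value(steps, outcome_value):
    """Expected value when every step in a theory of change must succeed.

    steps: list of (step_name, probability_of_success) pairs
    outcome_value: value realized only if the whole chain succeeds
    """
    p_total = 1.0
    for _, p in steps:
        p_total *= p
    return p_total * outcome_value

# Hypothetical grant: fund research intended to change policy.
theory_of_change = [
    ("research is completed", 0.8),
    ("findings reach policymakers", 0.5),
    ("policy actually changes", 0.2),
]

ev = grant_expected_value(theory_of_change, outcome_value=1_000_000)
# 0.8 * 0.5 * 0.2 * 1,000,000 = an expected value of 80,000
```

Even a toy model like this forces the grantmaker to name each step and put a probability on it, which is most of the value of the exercise.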

Is effective altruism growing? An update on the stock of funding vs. people

I'm surprised to see this so heavily downvoted - I've also had concerns about EA culture with regards to sex and race, and I wouldn't be surprised if it puts off people with some of the soft skills EA is missing. This comment definitely exaggerates, and I'm not happy about that, but the underlying idea, that people who are good at navigating social dynamics are wary of EA and that this contributes to the talent gap, is pretty interesting.

How to Train Better EAs?

I think this will vary a lot depending on what kind of work you're aiming to do, but I could imagine a training programme for e.g. promising young grantmakers being very helpful

Charity Entrepreneurship tries to do this for entrepreneurs

Davis_Kingsley (1 point, 2mo): Good point re: Charity Entrepreneurship. I'm somewhat more skeptical of the grantmaking thing though because there are few enough positions that it is not very legible who is good at it, whether others currently outside the field could do better, etc. I could be wrong -- I can point to specific things from some grantmakers that I thought were particularly good, for instance -- but it doesn't feel to me that it's the most amenable field for such a program. (Note that this is low-confidence and I could be wrong -- if there are more objective grantmaking skill metrics somewhere I'd be very interested to see more!)

Adding to that, Lucia Coulter of the Lead Exposure Elimination Project had high praise for Charity Entrepreneurship when I interviewed her:

Charity Entrepreneurship has [...] made a big difference – their support from the incubation program to now has helped with pretty much every aspect of our work. [...] Firstly they provided a two-month full-time incubation program, which I went through (remotely) in the summer 2020. This was where I decided to work on lead exposure (which was an idea researched and recommended by Charity Entrepreneurship), where I pai

... (read more)
EA Forum feature suggestion thread

Yes, that's right - it has [Draft] [Unlisted] before the title

Aaron Gertler (6 points, 2mo): Oy vey, thanks for the notice. Definitely a bug, and one LessWrong is now looking into.
EA Forum feature suggestion thread

I can't hover, I only use the Forum on mobile. Thanks for the suggestion though - good to know it's possible!

EA Forum feature suggestion thread

Hi, sorry to be a complainer - I've just seen a new "continue reading" feature and I don't like it. If I stopped reading a sequence or article it means I'm aware of its existence and have chosen not to read it. This feature keeps reminding me of my least favourite articles (right now it's convinced I should read Aaron's placeholder post for a new sequence). I couldn't spot any way to remove it. Okay, that's all, thanks very much for your attention.

Aaron Gertler (2 points, 2mo): It sounds like the "placeholder post" you're seeing is a draft that should be invisible to you, which indicates a different bug. Is the title you're seeing "Sequence Placeholder Draft", or something else?
Habryka (2 points, 2mo): There should be a button that appears when you hover over the post on the frontpage that allows you to remove it from your continue reading queue.
(Video) How to be a less crappy person

I agree with Harrison that free attention isn't always a blessing!

(Video) How to be a less crappy person

Yes, this seems to be a pretty high quality example of a certain genre of YouTube video, but a lot of people really don't like this genre, so it's a bit tricky

President Red (1 point, 2mo): (I'm not sure if you'd get a notification for my reply to Koen, so I'll send a direct reply just in case.) Thanks for the compliment, though I think the editing style is a bit simplistic right now, mainly because I'm focusing on getting videos out as quickly as possible. Later down the line I'm hoping to have the budget for more polished videos.
Khorton's Shortform

Reducing procrastination on altruistic projects:

I have often struggled to get started on projects that are particularly important to me so I thought I'd jot down a couple ways I handle procrastination.

  1. Check if I actually want to do the project. Sometimes I like the idea of the project but don't actually want to do it (maybe I can post the idea here instead), or I'm conflicted because working on this task would conflict with my other values (can I change the plan so it meets my needs more fully?).
  2. Check if I have an actually realistic plan. My subconsciou
... (read more)
agent18 (1 point, 2mo): I recently wrote a post on procrastination related to my EA work here []. Feel free to just check out the references at the end.
Why & How to Make Progress on Diversity & Inclusion in EA

Thanks for taking the time to post this result four years later!

Testing Newport's "Digital Minimalism" at CEEALAR

Hey, I'm actually most interested in what kind of data you're planning to collect from the control group - what are your plans there?

Kurt Brown (1 point, 2mo): I'm still writing the questionnaires, but I think I want to ask weekly for {hours & quality of sleep, subjective sense of productivity, subjective well-being, subjective stress & anxiety, resting heart rate, hours spent socializing}. As for softer data, I'll also try to get a sense of whether/how much each participant rekindled an old hobby. That's a major promise of the digital declutter, so it will be informative to compare how much it happens in the control group. I'm only going to ask for a weekly questionnaire and an entry/exit interview since I don't want to scare people off from joining the control group.