All of Ikaxas's Comments + Replies

Moral uncertainty and public justification (Barrett and Schmidt, 2021)

Oops, one correction: "public justification" doesn't mean "justification to the people a policy will affect", it means "justification to all reasonable people"; "reasonable people" is roughly everyone except Nazis and others with similarly extreme views.

Moral uncertainty and public justification (Barrett and Schmidt, 2021)

I know this doesn't solve the actual problem you're getting at, but here's a translation of that sentence from philosophese to English. "Pro tanto" essentially means "all else equal": a "pro tanto" consideration is a consideration, but not necessarily an overriding one. "Public justification" just means justifying policy choices with reasons that would/could be persuasive to the public/to the people they will affect. So the sentence as a whole means something like "While moral uncertainty doesn't mean that governments (and other institutions) should always justify their decisions to the people, it does mean they should do so when they can."

The Importance-Avoidance Effect

This is something I'm dealing with right now, so reading this was helpful. Thanks

2 · davidhartsough · 4mo: So glad to hear it was helpful! Thanks for reading it :) Lemme know which strategies end up being the most effective for you! I'm keen to know what works best for people. (If you couldn't tell, I'm also a person who struggles with this a great deal, so this is mostly me trying to find answers and solutions for myself as well haha.)
You should write about your job

I'd be interested in this. Even though "generalist researcher" is a well-known role, I think it's easy from the outside to get a distorted picture of the "content" of the job. Aside from this recent post, I don't know of any write-ups about it off the top of my head, and of course multiple write-ups are useful, since different people's situations and experiences will be different.

How to make people appreciate asynchronous written communication more?

I had this reaction as well. I can't speak for the OP, but one issue with this is that audio is harder to look back at than writing: harder to skim when you're looking for that one thing you think was said but want to confirm. One solution would be transcription, which could probably be automated, since it wouldn't have to be perfect, just good enough to let you skim to the part of the audio you're looking for.

What are the effects of considering that (human) morality emerges from evolutionary and other functional pressures?

You might check out this SEP article. I haven't read it myself, but looking at the table of contents it seems like it might be helpful for you (the SEP is generally pretty high-quality). People have made a lot of different arguments that start from the observation that human morality has likely been shaped by evolutionary pressures, and it's pretty complicated to figure out what conclusions to draw from that observation. It's not at all obvious that it implies we should try to "escape the shackles of e…

Should I transition from economics to AI research?

I am a philosopher and thus fundamentally unqualified to answer this question, so take these thoughts with a grain of salt. However:

  1. From my outsider's perspective, it seems as though AI safety uses a lot of concepts from economics (especially expected utility theory). And if you're at the grad level in economics, then you probably have a decent math background. So at least many of your skills seem like they would transfer over.
  2. I don't know how much impact you can expect to have as an AI researcher compared to an economist. But that seems like the kin…
1 · EAguy · 1y: Thanks for all this! I'm not familiar with AI safety, and even if some concepts are used in both AI and economics, I suspect there would still be a lot of retraining involved, but I could be wrong. I'll take a look at the blog posts you mentioned!

Let me say this: I am extremely confused, either about what your goals are with this post, or about how you think your chosen strategy for communication is likely to achieve those goals.

1 · Milan_Griffes · 1y: Good! I'm curious about what being extremely confused feels like for you, in your body and in your mind.

Not Jeff, but I agree with what he said, and here are my reasons:

  1. The feedback Jpmos is giving you is time-sensitive ("Since it is still relatively early in the life of this post...")
  2. The feedback Jpmos is giving you is not actually about what you said. Rather, it's simply about the way you're communicating it, letting you know that, at least in Jpmos's case, your chosen method of communication came close to not being effective (unless your goals in writing the post are significantly different than the usual goals of someone writing a post, i.e. to commun…
3 · Milan_Griffes · 1y: This is my goal. This is what I want. This is my intention. (I tried to state it clearly in the post.) Wouldn't it be interesting if this goal were significantly different from how goals are usually used?
What posts do you want someone to write?

Ooh, I would also very much like to see this post.

Normative Uncertainty and the Dependence Problem

Hm. Do you think it would be useful for me to write a short summary of the arguments against taking normative uncertainty into account and post it to the EA Forum? (I wrote a term paper last semester arguing against Weatherson, which of course involved reading a chunk of that literature.)

2 · Aaron Gertler · 2y: I'd be very excited about this! I really appreciate it when people take research effort they already put in and make it accessible to others (a la Effective Thesis or my own thesis).
1 · G Gordon Worley III · 2y: Yeah, sounds interesting!
Ask Me Anything!

I'd be very interested in hearing more about the views you list under the "more philosophical end" (esp. moral uncertainty) -- either here or on the 80k podcast.

The Possibility of an Ongoing Moral Catastrophe (Summary)

Definitely, I'll send it along when I design it. Since intro ethics at my institution is usually taught as applied ethics, the basic concept would be to start by introducing the students to the moral catastrophes paper/concept, then go through at least some of the moral issues Williams brings up in the disjunctive portion of the argument to examine how likely they are to be moral catastrophes. I haven't picked particular readings yet, though, as I don't know the literatures yet. Other possible topics: a unit on historical moral catastrophes (e.g. slavery in…
1 · Linch · 2y: For #2, Ideological Turing Tests could be cool too.
The Possibility of an Ongoing Moral Catastrophe (Summary)

I'm entering philosophy grad school now, but in a few years I'm going to have to start thinking about designing courses, and I'm thinking of designing an intro course around this paper. Would it be alright if I used your summary as course material?

2 · Linch · 2y: You may also like our discussion sheets for this topic.
1 · Linch · 2y: Sure! In general you can assume that anything I write publicly is freely available for academic purposes. I'd also be interested in seeing the syllabus if/when you end up designing it.
Please May I Have Reading Suggestions on Consistency in Ethical Frameworks

David Moss mentioned a "long tradition of viewing ethical theorising (and in particular attempts to reason about morality) sceptically." Aside from Nietzsche, another very well-known proponent of this tradition is Bernard Williams. Take a look at his page in the Stanford Encyclopedia of Philosophy, and if it looks promising check out his book Ethics and the Limits of Philosophy. You might also check out his essays "Ethical Consistency" (which I haven't read; in his essay collection Problems of the Self) and "Conflicts of Value…

The harm of preventing extinction

Counterpoint (for purposes of getting it into the discussion; I'm undecided about antinatalism myself): that argument only applies to people who are already alive, and thus not to most of the people who would be affected by the decision whether or not to extend the human species (i.e. those who don't yet exist). David Benatar argues (podcast, book) that while, as you point out, many human lives may well be worth continuing, those very same lives (he thinks all lives, but that's more than I need to make this argument) may nevertheless not have been worth st…
3 · Habryka · 3y: Do you have a short summary of why he thinks that someone answering the question "Would you have preferred to die right after childbirth?" with "No" is not strong evidence that they should have been born? Seems like the same thing to me. I surely prefer to exist, and would be pretty sad about a world in which I wasn't born (in that I would be willing to endure significant additional suffering in order to cause a world in which I was born).
Would killing one be in line with EA if it can save 10?

What I was describing wasn't exactly Pascal's mugging. Pascal's mugging is an attempted argument *against* this sort of reasoning, by arguing that it leads to pathological conclusions (like that you ought to pay the mugger, when all he's told you is some ridiculous story about how, if you don't, there's a tiny chance that something catastrophic will happen). Of course, some people bite the bullet and say that you just should pay the mugger; others claim that this sort of uncertainty reasoning doesn't actually lead you to…
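For intuition, the expected-value arithmetic the mugger exploits can be sketched as follows (a minimal illustration; all numbers are hypothetical, not from the original discussion):

```python
# Sketch of the expected-value reasoning behind Pascal's mugging.
# All figures below are made-up illustrations.

def expected_value(prob, payoff):
    """Expected value of an outcome: probability times payoff."""
    return prob * payoff

# Paying the mugger costs $10 for certain.
ev_paying = -10

# The mugger claims a tiny chance of an astronomically bad outcome if
# you refuse: say a one-in-a-trillion chance of a $1-quadrillion loss.
p_catastrophe = 1e-12
catastrophe_loss = -1e15

ev_refusing = expected_value(p_catastrophe, catastrophe_loss)  # -1000.0

# Naive expected-value maximization says to pay, since -10 > -1000 --
# the conclusion many regard as pathological.
print(ev_paying > ev_refusing)  # True
```

The point is that a sufficiently huge claimed payoff can swamp any small probability, which is why some see this as a reductio of straightforward expected-value reasoning under uncertainty.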

Would killing one be in line with EA if it can save 10?

Here are two other considerations that haven't yet been mentioned:

1. EA is supposed to be largely neutral between ethical theories. In practice, most EAs tend to be consequentialists, specifically utilitarians, and a utilitarian might plausibly think that killing one to save ten was the right thing to do (though others in this thread have given reasons why that might not be the case even under utilitarianism), but in theory one could unite EA principles with most ethical systems. So if the ethical system you think is most likely to be correct includes…

1 · eaphilosophy · 3y: Thank you very much for explaining this! I appreciate the analogy of the flood damage and tiny risks with great reward, that's such an interesting point that I never considered. After researching that, it seems like what you're describing is Pascal's mugging, so I'll read up on that also. Thanks again.
Rationality as an EA Cause Area

Scott Alexander has actually gotten academic citations, e.g. in Paul Bloom's book Against Empathy (sadly I don't remember which article of his Bloom cites), and I get the impression a fair few academics read him.

2 · Ben Pace · 3y: Bostrom has also cited him in his papers.
Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy


What would be the attitude towards someone who wanted to work with you after undergrad for a year or two, but then go on to graduate school (likely for philosophy in my case), with an eye towards then continuing to work with you or other EA orgs after grad school?

0 · lukeprog · 4y: (I work for Open Phil.) We'd encourage someone like that to apply, and to flag in their application that they currently plan to leave after a couple of years for graduate school. Depending on their fit for the work, we might be excited to consider their application anyway, or we might not.