Ikaxas

Comments

How to make people appreciate asynchronous written communication more?

I had this reaction as well. Can't speak for OP, but one issue with this is that audio is harder to look back at than writing: harder to skim when you're looking for that one thing you think was said but want to verify. One solution here would be transcription, which could probably be automated, since it wouldn't have to be perfect, just good enough to let you skim to the part of the audio you're looking for.
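
For what it's worth, the automation seems pretty tractable; here's a minimal sketch using the open-source Whisper speech-to-text library (the model size and the file name meeting.mp3 are placeholder assumptions, and any similar tool with timestamped output would do):

```python
# Rough-transcription sketch: assumes `pip install openai-whisper` and ffmpeg
# on PATH. The goal is skimmable, timestamped text, not a perfect transcript.
import whisper

model = whisper.load_model("base")  # small model: fast and imperfect, fine for skimming
result = model.transcribe("meeting.mp3")  # placeholder file name

# Each segment carries start/end timestamps, so you can text-search the
# transcript and jump to the matching spot in the audio.
for seg in result["segments"]:
    print(f"[{seg['start']:6.1f}s - {seg['end']:6.1f}s] {seg['text'].strip()}")
```

Searching the printed transcript for a phrase and then seeking to its timestamp gets you the "skim to the part you're looking for" behavior without the transcription needing to be accurate.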

What are the effects of considering that (human) morality emerges from evolutionary and other functional pressures?

You might check out this SEP article: https://plato.stanford.edu/entries/morality-biology/. Haven't read it myself, but looking at the table of contents it seems like it might be helpful for you (SEP is generally pretty high-quality). People have made a lot of different arguments that start from the observation that human morality has likely been shaped by evolutionary pressures, and it's pretty complicated to try to figure out what conclusions to draw from this observation. It's not at all obvious that it implies we should try to "escape the shackles of evolution" as you put it. It may imply that, but it also may not. (In particular, "selective evolutionary debunking arguments" seem to have implications along these lines, but "general evolutionary debunking arguments" seem to lead to almost the opposite conclusion.)

You might also check out this post by Eliezer.

Should I transition from economics to AI research?

I am a philosopher and thus fundamentally unqualified to answer this question, so take these thoughts with a grain of salt. However:

  1. From my outsider's perspective, it seems as though AI safety uses a lot of concepts from economics (especially expected utility theory; see the formula sketched just after this list). And if you're at the grad level in economics, then you probably have a decent math background. So at least many of your skills seem like they would transfer over.
  2. I don't know how much impact you can expect to have as an AI researcher compared to an economist. But that seems like the kind of question an economist would be well-equipped to work on answering! If you're not already familiar with cause prioritization research, you might consider staying in economics and focusing on it, rather than switching to AI, as cause prioritization is pretty important in its own right.
  3. Similarly, you might focus on global priorities research: https://forum.effectivealtruism.org/posts/dia3NcGCqLXhWmsaX/an-introduction-to-global-priorities-research-for-economists. Last I knew the Global Priorities Institute was looking to hire more economists; don't know if that will still be true when you finish your grad program, but at the very least I expect they'll still be looking to collaborate with economists at that time.
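
As a concrete point of overlap, here is the standard expected-utility formula that both fields lean on (just the textbook statement, nothing specific to AI safety or to your situation):

```latex
% Expected utility of an action a over outcomes o_i, where p(o_i | a) is the
% probability of outcome o_i given a and u is the utility function; an
% idealized rational agent picks the action maximizing this quantity.
\[
  \mathrm{EU}(a) \;=\; \sum_{i} p(o_i \mid a)\, u(o_i),
  \qquad
  a^{*} \;=\; \arg\max_{a} \mathrm{EU}(a)
\]
```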

In other words, it seems like you might have a shot at transitioning (though I am very, very unqualified to assess this), but also there seem to be good, longtermist-relevant research opportunities even within economics proper.

Some preliminaries and a claim

Let me say this: I am extremely confused, either about what your goals are with this post, or about how you think your chosen strategy for communication is likely to achieve those goals.

Some preliminaries and a claim

Not Jeff, but I agree with what he said, and here are my reasons:

  1. The feedback Jpmos is giving you is time-sensitive ("Since it is still relatively early in the life of this post...")
  2. The feedback Jpmos is giving you is not actually about what you said; rather, it's about the way you're communicating it, letting you know that, at least in Jpmos's case, your chosen method of communication came close to not being effective. (This assumes your goals in writing the post are the usual goals of someone writing a post, i.e. to communicate and defend a claim. Admittedly, you say that "I want you to think carefully and spaciously for yourself about what is best, and then do the things that seem best as they come to you from that spacious place", and maybe that is a significantly different goal from the usual one. But even so, readers are more likely to do that if they get the claim, or at least the topic, of the post up front.)

What posts do you want someone to write?

Ooh, I would also very much like to see this post.

Normative Uncertainty and the Dependence Problem

Hm. Do you think it would be useful for me to write a short summary of the arguments against taking normative uncertainty into account and post it to the EA forum? (Wrote a term paper last semester arguing against Weatherson, which of course involved reading a chunk of that literature.)

Ask Me Anything!

I'd be very interested in hearing more about the views you list under the "more philosophical end" (esp. moral uncertainty) -- either here or on the 80k podcast.

The Possibility of an Ongoing Moral Catastrophe (Summary)

Definitely, I'll send it along when I design it. Since intro ethics at my institution is usually taught as applied ethics, the basic concept would be to start by introducing the students to the moral catastrophes paper/concept, then go through at least some of the moral issues Williams brings up in the disjunctive portion of the argument to examine how likely they are to be moral catastrophes. I haven't picked particular readings yet though, as I don't know the literatures yet. Other possible topics:

  1. a unit on historical moral catastrophes (e.g. slavery in the South, the Holocaust)
  2. a unit on biases related to moral catastrophes
  3. a unit on the psychology of evil (e.g. Baumeister's work on the subject, which I haven't read yet)
  4. a unit on moral uncertainty
  5. a unit on whether antirealism can escape or accommodate the possibility of moral catastrophes

Assignment ideas:

  1. Pick the one of the potential moral catastrophes Williams mentions that you think is least likely to actually be a moral catastrophe. Now imagine that you are yourself five years from now and have been completely convinced that it is in fact a moral catastrophe. What convinced you? Write a paper trying to convince your current self that it is a moral catastrophe after all.
  2. Come up with a potential moral catastrophe that Williams didn't mention, and write a brief (maybe 1-2 pages?) argument for why it is or isn't one (whatever you actually believe). Further possibility: once these are collected, I observe how many people argued that the one they picked was not a moral catastrophe, and if it's far over 50%, discuss with the class where that bias might come from (e.g. status quo bias).

This is all still in the brainstorming stage at the moment, but feel free to use any of this if you're ever designing a course/discussion group for this paper.
