Ikaxas


You should write about your job

I'd be interested in this. Even though "generalist researcher" is a well-known role, I think it's easy to get a distorted picture from the outside of what the job actually involves. Aside from this recent post, I don't know of any write-ups about it off the top of my head (though there may well be some I've missed), and of course multiple write-ups are useful, since different people's situations and experiences will differ.

How to make people appreciate asynchronous written communication more?

I had this reaction as well. I can't speak for the OP, but one issue with this is that audio is harder to look back at than writing: it's harder to skim when you're looking for that one thing you think was said but want to confirm. One solution would be transcription, which could probably be automated, since the transcript wouldn't have to be perfect, just good enough to let you skim to the part of the audio you're looking for.
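As a rough sketch of what I have in mind (untested; the model size and file name are just placeholders), the open-source Whisper package produces timestamped segments, which is all the skim-to-the-right-spot use case needs:

```python
# Rough sketch: auto-transcribe a recording so it can be searched.
# Assumes the open-source "openai-whisper" package (pip install openai-whisper);
# "meeting.mp3" is a placeholder file name.
import whisper

model = whisper.load_model("base")  # a small model; imperfect output is fine here
result = model.transcribe("meeting.mp3")

# Print each segment with its start time, so you can search the text
# and jump to the matching spot in the audio.
for seg in result["segments"]:
    print(f"[{seg['start']:7.1f}s] {seg['text'].strip()}")
```

Searching the printed transcript for a phrase then gives you the timestamp to jump to in the recording, even if the transcription itself is rough.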

What are the effects of considering that (human) morality emerges from evolutionary and other functional pressures?

You might check out this SEP article: https://plato.stanford.edu/entries/morality-biology/. Haven't read it myself, but looking at the table of contents it seems like it might be helpful for you (SEP is generally pretty high-quality). People have made a lot of different arguments that start from the observation that human morality has likely been shaped by evolutionary pressures, and it's pretty complicated to try to figure out what conclusions to draw from this observation. It's not at all obvious that it implies we should try to "escape the shackles of evolution" as you put it. It may imply that, but it also may not. (In particular, "selective evolutionary debunking arguments" seem to have implications along these lines, but "general evolutionary debunking arguments" seem to lead to almost the opposite conclusion.)

You might also check out this post by Eliezer.

Should I transition from economics to AI research?

I am a philosopher, and thus fundamentally unqualified to answer this question, so take these thoughts with a grain of salt. However:

  1. From my outsider's perspective, it seems as though AI safety uses a lot of concepts from economics (especially expected utility theory). And if you're at the grad level in economics, then you probably have a decent math background. So at least many of your skills seem like they would transfer over.
  2. I don't know how much impact you can expect to have as an AI researcher compared to an economist. But that seems like the kind of question an economist would be well-equipped to work on answering! If you happen to not already be familiar with cause prioritization research, you might consider staying in economics and focusing on it, rather than switching to AI, as cause prioritization is pretty important in its own right.
  3. Similarly, you might focus on global priorities research: https://forum.effectivealtruism.org/posts/dia3NcGCqLXhWmsaX/an-introduction-to-global-priorities-research-for-economists. Last I knew the Global Priorities Institute was looking to hire more economists; don't know if that will still be true when you finish your grad program, but at the very least I expect they'll still be looking to collaborate with economists at that time.

In other words, it seems like you might have a shot at transitioning (though I am very, very unqualified to assess this), but also there seem to be good, longtermist-relevant research opportunities even within economics proper.

Some preliminaries and a claim

Let me say this: I am extremely confused, either about what your goals are with this post, or about how you think your chosen strategy for communication is likely to achieve those goals.

Some preliminaries and a claim

Not Jeff, but I agree with what he said, and here are my reasons:

  1. The feedback Jpmos is giving you is time-sensitive ("Since it is still relatively early in the life of this post...").
  2. The feedback Jpmos is giving you is not actually about what you said; rather, it's about the way you're communicating it, letting you know that, at least in Jpmos's case, your chosen method of communication came close to not being effective. (That is, unless your goals in writing the post are significantly different from the usual goal of communicating and defending a claim. Admittedly, you say that "I want you to think carefully and spaciously for yourself about what is best, and then do the things that seem best as they come to you from that spacious place", and maybe that is a significantly different goal from the usual one. But even so, readers are more likely to do that if they get the claim, or at least the topic, of the post up front.)

What posts do you want someone to write?

Ooh, I would also very much like to see this post.

Normative Uncertainty and the Dependence Problem

Hm. Do you think it would be useful for me to write a short summary of the arguments against taking normative uncertainty into account and post it to the EA Forum? (I wrote a term paper last semester arguing against Weatherson, which of course involved reading a chunk of that literature.)

Ask Me Anything!

I'd be very interested in hearing more about the views you list under the "more philosophical end" (esp. moral uncertainty) -- either here or on the 80k podcast.