Vanessa

Comments

Mental support for EA partners?

My spouse and I are both heavily involved with EA, but we nevertheless have significant differences in our philosophies. My spouse's worldview is pretty much a central example of EA: impartiality, utilitarianism, et cetera. On the other hand, I assign far greater weight to helping people who are close to me than to helping random strangers[1]. Importantly, we know that we have value differences, we accept it, and we are consciously working towards solutions that are aimed to benefit both of our value systems, with some fair balance between the two. This is also reflected in our marriage vows.

I think that the critical thing is that your SO accepts that:

  • It is fine to have value differences.
  • They should be considerate of your values (and you should be considerate of their values, ofc). Both systems have to be taken into account when making decisions.
  • There is no "objective" standard s.t. they can "prove" their own values to be "better" according to that standard and you would have to accept it.
  • You don't need to justify your values. They are valid as is, without any justification.

If your SO cannot concede that much, it's a problem IMO. A healthy relationship is built on a commitment to each other, not on a commitment to some abstract philosophy. Philosophies enter into it only inasmuch as they are important to each of you.

 

  1. ^ That said, I also accept considerations of the form "help X (at considerable cost) if they would have used similar reasoning to decide whether to help you if the roles were reversed".

What are the coolest topics in AI safety, to a hopelessly pure mathematician?

By Scott Garrabrant et al:

By John Wentworth:

By myself:

[Closed] Hiring a mathematician to work on the learning-theoretic AI alignment agenda

Thank you for this comment!

Knowledge about AI alignment is beneficial but not strictly necessary. Casting a wider net is something I planned to do in the future, but not right now, among other reasons because I don't understand the academic job ecosystem and don't want to spend a huge effort studying it in the near term.

However, if it's as easy as posting the job on mathjobs.org, maybe I should do it. How popular is that website among applicants, as far as you know? Is there something similar for computer scientists? Is there any way to post a job without specifying a geographic location s.t. applicants from different places would be likely to find it?

"Long-Termism" vs. "Existential Risk"

This is separate to the normative question of whether or not people should have zero pure time preference when it comes to evaluating the ethics of policies that will affect future generations.

I am a moral anti-realist. I don't believe in ethics the way utilitarians (for example) use the word. I believe there are certain things I want, and certain things other people want, and we can coordinate on that. And coordinating on that requires establishing social norms, including what we colloquially refer to as "ethics". Hypothetically, if I had time preference and other people didn't, then I would agree to coordinate on a compromise. In practice, I suspect that everyone has time preference.

So if hypothetically we were alive around King Tut's time and we were given the mandatory choice to either torture him or, with certainty, cause the torture of all 7 billion humans today, we would easily choose the latter with a 1% rate of pure time preference (which seems obviously wrong to me).
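(A rough back-of-the-envelope check of the quoted example, assuming King Tut lived about 3,300 years ago and interpreting "1% rate of pure time preference" as a 1% annual exponential discount:

$$0.99^{3300}\approx e^{-33}\approx 4\times 10^{-15},\qquad 7\times 10^{9}\times 4\times 10^{-15}\approx 3\times 10^{-5}\ll 1,$$

so from King Tut's vantage point, the discounted disvalue of torturing everyone alive today is indeed minuscule compared to the undiscounted disvalue of torturing him.)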

You can avoid this kind of conclusion if you accept my decision rule of minimax regret over all discount timescales from some finite value to infinity.

"Long-Termism" vs. "Existential Risk"

Because, ceteris paribus, I care more about things that happen sooner than about things that happen later. And, like I said, not having pure time preference seems incoherent.

As a meta-sidenote, I find that arguments about ethics are rarely constructive, since there is too little in the way of agreed-upon objective criteria and too much in the way of social incentives to voice / not voice certain positions. In particular, when someone asks why I have a particular preference, I have no idea what kind of justification they expect (from some ethical principle they presuppose? evolutionary psychology? social contract / game theory?).

"Long-Termism" vs. "Existential Risk"

I dunno if I count as "EA", but I think that a social planner should have nonzero pure time preference, yes.

"Long-Termism" vs. "Existential Risk"

The question is, what is your prior about extinction risk? If your prior is sufficiently uninformative, you get divergence. If you dogmatically believe in extinction risk, you can get convergence, but then it's pretty close to having an intrinsic time discount. To the extent it is not the same, the difference comes from privileging hypotheses that are harmonious with your dogma about extinction risk, which seems questionable.
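To illustrate what I mean, here is a toy sketch (my own simplification, assuming a constant per-period extinction hazard $h$ and a constant per-period utility $u>0$ conditional on survival):

$$\mathbb{E}\Big[\sum_{t\ge 0}(1-h)^{t}u\ \Big|\ h\Big]=\frac{u}{h},\qquad \mathbb{E}_{h\sim\mathrm{Uniform}(0,1)}\!\left[\frac{u}{h}\right]=\int_{0}^{1}\frac{u}{h}\,dh=\infty.$$

A dogmatic (point-mass) belief in some $h>0$ gives convergence, but the survival factor $(1-h)^{t}$ then plays exactly the role of a geometric time discount, whereas an uninformative prior that puts enough mass near $h=0$ diverges.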

"Long-Termism" vs. "Existential Risk"

IMO everyone has pure time preference (descriptively, as a revealed preference). To me it just seems commonsensical, but it is also very hard to mathematically make sense of rationality without pure time preference, because of issues with divergent/unbounded/discontinuous utility functions. My speculative first-approximation theory of pure time preference for humans is: choose a policy according to minimax regret over all exponential time discount constants, starting from around the scale of a natural human lifetime and going to infinity. For a better approximation, you need to also account for hyperbolic time discount.
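A minimal formalization of this sketch (my own notation, not a worked-out theory; $r_{t}$ is the reward at time $t$, assumed bounded, $U_{\lambda}$ is exponentially discounted utility with timescale $\lambda$, and $T_{0}$ is roughly a human lifetime):

$$\pi^{*}=\operatorname*{arg\,min}_{\pi}\ \sup_{\lambda\ge T_{0}}\Big(\sup_{\pi'}U_{\lambda}(\pi')-U_{\lambda}(\pi)\Big),\qquad U_{\lambda}(\pi)=\mathbb{E}_{\pi}\Big[\sum_{t\ge 0}e^{-t/\lambda}\,r_{t}\Big].$$

Because the supremum ranges over arbitrarily long timescales $\lambda$, a policy that sacrifices enormous far-future value incurs enormous regret for large $\lambda$, which is how this rule avoids conclusions like the King Tut example above.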

The best EA Global yet? (And other updates.)

We plan to run 3 EA Global conferences in 2021

I'm guessing this is a typo and you meant 2022?
