anonymous6

171 · Joined Mar 2022

Comments (29)

Each chapter of Russell & Norvig's textbook "Artificial Intelligence: A Modern Approach" ends with historical notes. These are probably sparser than you want, but they are good and cover a very broad array of topics. The 4th edition of the book is decently up to date (for the time being!).

Trying to "do as the virtuous agent would do" (or maybe "do things for the sake of being a good person") seems to be a  really common problem for people.

Ruthless consequentialist reasoning totally short-circuits this, which I think is a large part of its appeal. You can be sitting around in this paralyzed fog, agonizing over whether you're "really" good or merely trying to fake being good for subconscious selfish reasons, feeling guilty for not being eudaimonic enough -- and then somebody comes along and says "stop worrying and get up and buy some bednets", and you're free.

I'm not philosophically sophisticated enough to have views on metaethics, but it sometimes seems that the main value of ethical theories is therapeutic, so different, mutually contradictory ethical theories could be best for different people and at different times of life.

I would be inclined to replace “not thinking carefully” with “not thinking formally”. In real life everything tends to have exceptions, and most people presume as much, so they don’t feel a need to reserve language for truly universal claims, which in that setting are never meaningful.

Some people have practice in thinking about formal systems, where truly universal statements are meaningful, and where using different language to draw fine distinctions is important (“always” vs “with probability 1” vs “with high probability” vs “likely”).
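One standard toy example of the distinction: if X is drawn uniformly from [0, 1], then for any fixed point c,

$$\Pr[X \neq c] = 1,$$

so "X ≠ c" holds with probability 1 but not always, since the outcome X = c is still possible (just with probability 0).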

Trying to push the second group’s norms onto the first group might be tough, even if it would perhaps be good.

I think when most people say “unequivocally” and “all”, they almost always mean “still maybe some exceptions” and “almost all”. If you don’t need to make mathematical/logical statements, which most people don’t, then reserving these words to act as universal quantifiers is not very useful. I used to be annoyed by this but I’ve learned to accept it.

Here's one set of lecture notes (I don't endorse them as necessarily the best, just the first I found quickly): https://lucatrevisan.github.io/40391/lecture12.pdf

Keywords to search for other sources would be "multiplicative weight updates", "follow the leader", "follow the regularized leader".

Note that this is for what's sometimes called the "experts" setting, where you get full feedback on the counterfactual actions you didn't take. But the same approach basically works with some slight modification for the "bandit" setting, where you only get to see the result of what you actually did.
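As a rough sketch of the experts-setting algorithm those keywords point to (my own illustration, not taken from the linked notes; the learning rate and the loss convention are arbitrary choices):

    import numpy as np

    def hedge(loss_matrix, eta=0.1):
        # Multiplicative weight updates ("Hedge") in the experts setting:
        # loss_matrix[t, k] is the loss of expert k at round t, and we get to
        # see the whole row each round (full feedback on counterfactual actions).
        T, K = loss_matrix.shape
        weights = np.ones(K)
        distributions = []
        for t in range(T):
            probs = weights / weights.sum()
            distributions.append(probs)
            # Downweight each expert exponentially in the loss it just incurred.
            weights = weights * np.exp(-eta * loss_matrix[t])
        return np.array(distributions)

In the bandit setting you would only observe the loss of the action you actually played, and would feed in an importance-weighted estimate of the full loss vector instead (roughly the modification EXP3 makes).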

I have the feeling that people sometimes just disappear, even after we've already agreed to have a call or to meet up (but, for example, haven't yet agreed on a time).

This is stereotypically seen as something people in California do, and East Coasters complain about it: two people will agree to get coffee or lunch at some point and then never follow up, and maintaining the ambiguity is considered polite. Overrepresentation of Bay Area residents might be the explanation here.

One justification might be that in an online setting, where you have to learn which options are best from past observations, the naive "follow the leader" approach -- always playing whichever action has looked best so far -- is easily exploited by an adversary.

This problem goes away if you make actions more likely when they've performed well, but regularize a little to smooth things out. The most common regularizer is entropy, and then, as described on the "Softmax demystified" page, you basically end up recovering softmax (this is the well-known "multiplicative weight updates" algorithm).
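A minimal numerical sketch of that connection (my own illustration; the numbers and learning rate are made up):

    import numpy as np

    def softmax(x):
        z = x - x.max()  # subtract the max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    cumulative_rewards = np.array([3.0, 2.5, 1.0])  # per-action payoffs so far
    eta = 0.5  # learning rate (inverse strength of the entropy regularizer)

    # Naive follow-the-leader: put all probability on the current best action.
    ftl_choice = np.argmax(cumulative_rewards)

    # Follow-the-regularized-leader with an entropy regularizer has the
    # closed-form solution p = softmax(eta * cumulative_rewards), which is
    # exactly the multiplicative-weights distribution.
    ftrl_distribution = softmax(eta * cumulative_rewards)

    print(ftl_choice)         # 0
    print(ftrl_distribution)  # approximately [0.47, 0.36, 0.17]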

Ignoring the exponential blowup, one could run a prediction market over all the candidate causal models to elicit the best one (including an option for "all of these are wrong / important variables are missing").

[on reflection, this seems hard unless you commit to doing a bunch of experiments or otherwise have a way to get the right outcome]

Then, with a presumptively trustworthy causal model, the "make adjustments to observational data" approach would give more reliable estimates from other markets.
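To make "adjustments to observational data" concrete (my gloss; the standard example is the backdoor adjustment, assuming the model supplies a valid adjustment set Z):

$$P(Y = y \mid \mathrm{do}(X = x)) = \sum_{z} P(Y = y \mid X = x, Z = z)\, P(Z = z),$$

so interventional quantities could in principle be priced from markets over the purely observational quantities on the right-hand side, once the causal model telling you which Z to adjust for is trusted.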

However, it feels like trying to do both of these things at once might screw up the incentives -- there are sometimes impossibility results like this in related settings.

"How can we design mechanisms to elicit causal information, not just distributional properties" seems like a really interesting question that seemingly hasn't received much attention.

Unfortunately no, just idle musings. I would be interested in reading it, though.

How would EAs talk about climate change, if it were a weird niche issue that few people were working on and didn't have any political connotations? One can imagine that "catastrophic climate impacts due to carbon dioxide" would be another EA cause area that made normal people scratch their heads.

Giving a short description from that hypothetical world might be a good way to communicate why EAs worry less about climate in our actual world.
