G Gordon Worley III

Director of Research at PAISRI

Comments

Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum?

On a related but different note, I wish there were a way to combine conversations on cross-posts between the EA Forum and LW. I really like the way the AI Alignment Forum works with LW and wish the EA Forum worked the same way.

The Folly of "EAs Should"

I often make an adjacent point to folks, which is something like:

EA is not all one thing, just like the economy is not all one thing. Just as civilization as we know it doesn't work unless we have people willing to do different things for different reasons, EA depends on different folks doing different things for different reasons to give us a rounded-out basket of altruistic "goods".

Like, if everyone thought saltine crackers were the best food and everyone competed to make the best saltines, we'd ultimately all be pretty disappointed to have a mountain of amazing saltine crackers and literally nothing else. So even in a world where saltines really are the best food and their production generates the most benefit, it makes sense to instrumentally produce other things so we can enjoy our saltines in full.

I think the same is true of EA. I care a lot about AI x-risk and it's what I focus on, but that doesn't mean I think everyone should do the same. In fact, if they did, I'm not sure it would be so good, because then we might stop paying attention to other causes that, if left unaddressed, would make trying to address AI risk moot. I'm always very glad to see folks working on things, even things I don't personally think are worthwhile, both because of uncertainty about what is best and because there are multiple dimensions along which it seems we can optimize (and I'd be happy if we did).

evelynciara's Shortform

I think it's worth saying that in the context of "maximize paperclips", the person doesn't literally say the words "maximize paperclips" or something similar. It's instead an intuitive stand-in for building an AI capable of superhuman levels of optimization: if you set it the task, say via specifying a reward function, of creating an unbounded number of paperclips, you'll get it doing things you wouldn't do as a human to maximize paperclips, because humans have competing concerns and will stop when, say, they'd have to kill themselves or their loved ones to make more paperclips.

The objection seems predicated on the interpretation of human language, which is beside the primary point. That is, you could address all the human language interpretation issues and we'd still have an alignment problem; it just might not look literally like building a paperclip maximizer when someone asks the AI to make a lot of paperclips.
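To make the reward-function framing above concrete, here is a minimal, hypothetical Python sketch. Everything in it (the objective functions, field names, and numbers) is an illustrative assumption, not anyone's actual model: it just contrasts an unbounded paperclip-count reward with a human's implicit objective that trades paperclips off against competing concerns.

```python
def paperclip_reward(plan):
    # Misspecified objective: more paperclips is always strictly better,
    # with no term for anything else we care about.
    return plan["paperclips"]

def human_implicit_objective(plan):
    # Humans implicitly trade paperclip production off against other
    # concerns, so they stop long before the costs become monstrous.
    # The weights here are made up for illustration.
    return plan["paperclips"] - 1_000_000 * plan["harm_to_loved_ones"] - 10 * plan["effort"]

candidate_plans = [
    {"paperclips": 100, "harm_to_loved_ones": 0, "effort": 5},
    {"paperclips": 10_000, "harm_to_loved_ones": 2, "effort": 500},
]

# A strong optimizer of the misspecified reward endorses the second plan;
# optimizing the human's implicit objective picks the first.
print(max(candidate_plans, key=paperclip_reward))
print(max(candidate_plans, key=human_implicit_objective))
```

The point of the sketch is that no natural-language misunderstanding is involved: the divergence comes entirely from what the specified objective does and doesn't count.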

What’s the low resolution version of effective altruism?

There's a lot to unpack in that tweet. I think something is going on like:

  • fighting about who is really the most virtuous
  • being upset people aren't more focused on the things you think are important
  • being upset that people claim status by doing things you can't or won't do
  • being jealous that people are doing good by doing things you aren't/can't/won't do
  • virtue signaling
  • righteous indignation
  • spillover of culture war stuff going on in SF

None of it looks like a real criticism of EA, but rather of lots of other things EA just happens to be adjacent to.

That doesn't mean it doesn't have to be addressed or isn't an issue, but I think it's also worth keeping these kinds of criticisms in context.

What’s the low resolution version of effective altruism?

I find others' answers about the actual low-resolution version of EA they see in the wild fascinating.

I go with the classic, and if people ask I give them a three-word answer: "doing good better".

If they ask for more, it's something like: "People want to do good in the world, and some good doing efforts produce better outcomes than others. EA is about figuring out how to get the best outcomes (or the largest positive impact) for time/money/effort relative to what a person thinks is important."

How modest should you be?

I realize this is a total tangent to the point of your post, but I feel you're giving short shrift here to continental philosophy.

If it were only about writing style I'd say fair: continental philosophy has chosen a style of writing, resembling that used in some other traditions, that tries to avoid over-simplifying and compressing understanding down into just a few words that are easily misunderstood. Whereas you see unclear writing, I see a desperate attempt to say anything detailed about reality without accidentally pointing in the wrong direction.

This is not to say that there aren't bad continental philosophers who hide behind this method to say nothing, but I think it's unfair to complain about it just because it's hard to understand and takes a lot of effort to suss out what is being said.

As to the central confusion you bring up, the unfortunate thing is that the worst argument in the world is technically correct: we can't know things as they are in themselves, only as we perceive them to be, i.e. there is no view from nowhere. Where it goes wrong is in thinking that, just because we always know the world from some vantage point, trying to understand anything is pointless and any belief is equally useful. It can both be true that there is no objective way that things are and that some ways of trying to understand reality do better than others at helping us predict it.

I think the confusion that the worst argument in the world immediately implies we can't know anything useful comes from only seeing that the map is not itself the territory but not also seeing that the map is embedded in the territory (no Cartesian dualism).

Morality as "Coordination" vs "Altruism"

I think this is often non-explicit in most discussions of morality/ethics/what-people-should-do. It seems common for people to conflate "actions that are bad because it ruins ability to coordinate" and "actions that are bad because empathy and/or principles tell me they are."

I think it's worth challenging the idea that this conflation is actually an issue with ethics.

Although it's true that things like coordination mechanisms and compassion are not literally the same thing and can have expressions that try to isolate themselves from each other (cf. market economies and prayer), so that things which are bad because they break coordination mechanisms and things which are bad because they fail to express compassion are not bad for exactly the same reasons, this need not mean there isn't something deeper going on that ties them together.

I think this is why there tends to be a focus on meta-ethics among philosophers of ethics rather than directly trying to figure out what people should do, even setting meta-ethical uncertainty aside. There's some notion of badness or undesirableness (and conversely goodness or desirableness) that powers both of these, and so they are both different expressions of the same underlying phenomenon. So we can reasonably tie these two approaches together by looking at this question of what makes something seem good or bad to us, and simply treat these as different domains over which we consider how to make good or bad things happen.

As to what good and bad mean, well, that's a larger discussion. My best theory is that in humans it's rooted in prediction error plus some evolved affinities, but this is an area where folks are still trying to figure out what good and bad mean beyond our intuitive sense that something is good or bad.

Wholehearted choices and "morality as taxes"

Weird, that sounds strange to me, because I don't really regret things: I couldn't have done anything better than what I did under the circumstances, or else I would have done that. So the idea of regret awakening compassion feels very alien. Guilt seems more clear-cut to me, because I can do my best and my best may still not be good enough, and I may be culpable for the suffering of others as a result, perhaps through insufficient compassion.

Wholehearted choices and "morality as taxes"

These cases seem not at all analogous to me because of the differing amount of uncertainty in each.

In the case of the drowning child, you presumably have high certainty that the child is going to die. The case is clear cut in that way.

In the case of the distant commotion on an autumn walk, it's just that, a distant commotion. As the walker, you have no knowledge about what it is and whether or not you could do anything. That you later learn you could have done something might lead you to experience regret, but in the moment you lacked information to make it clear you should have investigated. I think this entirely accounts for the difference in feeling about the two cases, and eliminates the power of the second case.

In the second case, any imposition on the walker to do anything hinges on their knowledge of what the result of the commotion will be. Given the uncertainty, you might reasonably conclude in the moment that it is better to avoid the commotion, maybe because you might do more harm than good by investigating.

Further, this isn't a case of negligence, where failing to respond to the commotion makes you complicit in the harm, because you bear no responsibility for the machinery or for the conditions by which the man came to be pinned under it. Instead it seems to be a case where you are morally neutral throughout because of your lack of knowledge; you would only become complicit if you had actively avoided gaining knowledge in order to escape moral culpability, and that is not what happens here, so your example seems to lack the necessary conditions to make the point.
