Michael_Wulfsohn

Comments

Open Thread #39

OK, so you're talking about a scenario where humans cease to exist, and other intelligent entities don't exist or don't find Earth, but where there is still value in certain things being done in our absence. I think the answer depends on what you think is valuable in that scenario, which you don't define. Are the "best things" safeguarding other species, or keeping Earth at a certain temperature?

But this is all quite pessimistic. Achieving this sort of aim seems like a second-best outcome compared to humanity's survival.

For example, if Earth becomes uninhabitable, colonisation of other planets is extremely good. Perhaps you could do more good by helping humans to move beyond Earth, or to become highly resilient to environmental conditions? Surely the best way to ensure that human goals are met is to ensure that at least a few humans survive.

Anyway, going with your actual question, how you should pursue it really depends on your situation, skill set, finances, etc., as well as your values. The philosophical task of determining what should be done if we don't survive is one possibility. (By the way, who should get to decide that?) Robotics and AI seem like another, based on the examples you gave. Whatever you decide, I'd suggest keeping the flexibility to change course later, e.g. by learning transferable skills, in case you change your mind about what you think is important.

When to focus and when to re-evaluate

I have another possible reason why focusing on one project might be better than dividing one's time between many projects. There may be returns to density of time spent. That is, an hour you spend on a project is more productive if you've just spent many hours on that project. For example, when I come back to a task after a few days, the details aren't as fresh in my mind. I have to spend time getting back up to speed, and I miss insights that I would have caught had I stayed immersed.

I haven't seen much evidence about this, just my own experience. There might also be countervailing effects, like the time required for concepts to "sink in", or synergies between projects, such as insights for one project gleaned from involvement in another. It probably varies by task. My impression is that research projects feature very high returns to density of time spent.

EA should beware concessions

Thanks, it does a bit.

What I was saying is that if I were Andrew, I'd make it crystal clear that I'm happy to make the cup of tea, but don't want to be shouted at; there are better ways to handle disagreements, and demands should be framed as requests. Chances are that Bob doesn't enjoy shouting, so working out a way of making requests and settling disagreements without the shouting would benefit both.

More generally, I'd try to develop the relationship to be less "transactional": acting as partners willing to advance each other's interests, with more trust, rather than only doing things in expectation of reward.

EA should beware concessions

Sounds like a really interesting and worthwhile topic to discuss. But it's quite hard to be sure I'm on the same page as you without a few examples. Even hypothetical ones would do. "For reasons that should not need to be said": unfortunately, I don't understand the reasons; am I missing something?

Anyway, speaking in generalities, I believe it's extremely tempting to assume an adversarial dynamic exists. Nine times out of ten, it's probably a misunderstanding. For example, if a condition is given that isn't palatable, it's worth finding out the underlying reasons for the condition and trying to satisfy them in other ways. Since humans have a tendency towards "us vs them" tribal thinking, there's considerable value in making an effort to find common ground, establish mutual understanding, and reframe the interaction as collegial rather than adversarial.

This isn't meant as an argument against what you've said.

The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik

Ah, you're right about the hedonistic framework. On re-reading your intro I think I meant the idea of using pleasure as a synonym for happiness and taking pain and suffering as synonyms for unhappiness. This, combined with the idea of counting minutes of pleasure vs. pain, seems to focus on just the experiencing self.

The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik

Thanks for the post. I doubt the length is a problem. As long as you're willing to produce quality analysis, my guess is that most of the people on this forum would be happy to read it.

My thoughts are that destruction of ecosystems is not justifiable, especially because many of its effects are probably irreversible (e.g. extinction of some species), and because there is huge uncertainty about its impact. The uncertainty arises from the points you make, and from the shakiness of some of the assumptions you use, such as the hedonistic framework. (For example, in humans the distinction between the "experiencing" and "remembering" selves diminishes the value of this framework, and we don't know the extent to which it applies to animals.) Further uncertainty exists because we don't know what technological capabilities we might have in the future to reduce wild animal suffering. So almost regardless of the specifics, I believe it would be better to wait at least until we know more about animal suffering and humanity's future capabilities before seriously considering the irreversible and drastic measure of destroying habitats. This might be just a different point of emphasis rather than something you didn't cover.

Should Good Ventures focus on current giving opportunities, or save for future giving opportunities?

Sure. When I say "arbitrary", I mean not based on evidence, or on any kind of robust reasoning. I think that's the same as your conception of it.

The "conclusion" of your model is a recommendation between giving now vs. giving later, though I acknowledge that you don't go as far as to actually make a recommendation.

To explain the problem with arbitrary inputs: when working with a model, I often think about how I would defend its conclusions against someone who wants to argue with me. If my model contains a number I chose simply because it "felt" right, that person could quite reasonably suggest a different number be used. If they can choose some other reasonable number that produces different conclusions, then they've shown that my conclusions are unreliable. The key test for arbitrary assumptions is: will the conclusions change if I assume other values?
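
As a minimal sketch of that test (the model and all the numbers here are made up purely for illustration), a one-way sensitivity check can be as simple as re-running the conclusion across a range of plausible values for the arbitrary input:

```python
# A toy one-way sensitivity test: vary a single arbitrary input over
# a plausible range and check whether the recommendation flips.
# The model and numbers are hypothetical, purely for illustration.

def recommend(discount_rate: float) -> str:
    """Recommend 'give now' if the value of giving today exceeds the
    discounted value of a (hypothetically larger) gift in ten years."""
    value_now = 1.0
    value_later = 1.5 / (1 + discount_rate) ** 10
    return "give now" if value_now >= value_later else "give later"

# If the answer changes across reasonable values of the arbitrary
# input, the conclusion isn't robust to that assumption.
for rate in [0.01, 0.03, 0.05, 0.07]:
    print(f"discount rate {rate:.0%}: {recommend(rate)}")
```

In this toy example the recommendation flips somewhere between a 3% and a 5% discount rate, which is exactly the kind of fragility the test is meant to expose.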

Otherwise, arbitrary assumptions might be helpful if you want to conduct a hypothetical "if this, then that" analysis, to help understand a particular dynamic at play, like Bayesian probability. But this is really hard if you've made lots of arbitrary assumptions (say 10-20); it's difficult to get any helpful insights from "if this and this and this and..., then that".

So yes, we are in a bind when we want to make predictions about the future where there is no data. Who was it that said "prediction is difficult, especially about the future"? ;-) But models that aren't sufficiently grounded in reality have limited benefit, and might even be counterproductive. The challenge with modelling is always to find ways to draw robust and useful conclusions given what we have.

What does Trump mean for EA?

EAs like to focus on the long term and embrace interventions whose payoffs are probabilistic. What about pursuing policy reforms that are currently inconsequential, but might have profound effects in some future state of the world? That sort of reform will probably face little resistance from established political players.

I can give an example of something I briefly tried when I was working in Lesotho, a small, poor African country. One of the problems in poor countries is called the "resource curse". This is the counter-intuitive observation that the discovery of valuable natural resources (think oil) often leads to worse economic outcomes. There are a variety of reasons, but one is that abundant natural resources often cause countries with already-weak institutions to become even more corrupt, as powerful people scramble to get control of the resource wealth, methodically destroying checks and balances as they go.

In Lesotho, non-renewable natural resources (diamonds) currently account for only a small portion of GDP (around 10%). I introduced the idea of earmarking the natural resource revenues received by the government as "special", to be used only for infrastructure, education, and similar projects, instead of effectively just being consumed (for more info on this idea, see this article or google "adjusted net savings"). Although this change would not have huge consequences right now, I thought it might if there were a massive natural resource discovery in Lesotho in the future. Specifically, Lesotho might be able to avoid some of the additional corruption by already having a structure set up to protect the resource revenues from being squandered.
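
For reference, my recollection of the World Bank's adjusted net savings indicator (stated from memory, so treat the exact terms as approximate) is:

$$\text{ANS} = \text{net national saving} + \text{education expenditure} - \text{energy, mineral and net forest depletion} - \text{pollution damages}$$

The intuition is that a country running down its resource base while consuming the proceeds can look like it's saving when it isn't; earmarking resource revenues for investment is one way to keep ANS positive.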

The idea I'm putting forward for a potential EA policy initiative is to pursue a variety of policy changes that seem painless, even inconsequential, to policymakers now, but have a small chance of a big impact in some hypothetical future. The aim is to get the right reforms passed before they become politically contentious. While it can be hard to get policymakers to pay attention to issues seen as small, there are plenty of examples of political capture that could have been mitigated by early action. And this kind of initiative is probably relatively neglected, given humanity's generally short-term focus. I think EAs are uniquely well placed to prioritize it.

What does Trump mean for EA?

On political reform, I'm interested in EAs' opinions on this one.

In Australia, we have compulsory voting. If you are an eligible voter and you don't register and show up on election day, you get a fine. Some people do submit a blank ballot paper, but very few. I know this policy is relatively uncommon among Western democracies, but I strongly support it. Basically, it leaves the government with fewer places to hide.

Compulsory voting of course reduces individual freedom. But that reduction is small, and the advantages from (probably) more inclusive government policy seem well worth it. I've heard it said that if this policy were implemented in the US, the Democrats would win easily. I can't vouch for the accuracy of that, but if it's true, then in my opinion it means the Democrats should be the ones in power.

Should Good Ventures focus on current giving opportunities, or save for future giving opportunities?

Sorry, this is going to be a "you're doing it wrong" comment. I will try to criticize constructively!

There are too many arbitrary assumptions: your chosen numbers, your categorization scheme, your assumption about whether giving now or giving later is better in each scenario, your assumption that there can't be some split between giving now and later, your failure to incorporate any interest rate into the calculations, and your assumption that the now/later decision can't influence the scenarios' probabilities. Any of these could have decisive influence over your conclusion.

But there's also a problem with your calculation. Your conclusion is based on the fact that you expect higher utility to result from scenarios in which you believe giving now will be better. That's not actually an argument for deciding to give now, as it doesn't assess whether the world will be happier as a result of the giving decision. You would need to estimate the relative impact of giving now vs. giving later under each of those scenarios, and then weight the relative impacts by the probabilities of the scenarios.
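
In symbols (my notation, not yours): writing $p_i$ for the probability of scenario $i$, and $u_i^{\text{now}}$ and $u_i^{\text{later}}$ for the impact of each choice under that scenario, the quantity to estimate is

$$\mathbb{E}[\Delta] = \sum_i p_i \left( u_i^{\text{now}} - u_i^{\text{later}} \right),$$

with giving now favoured exactly when this sum is positive. Simply noting that utility is higher in the scenarios where you believe giving now wins doesn't compute this difference.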

Don't stop trying to quantify things. But remember the pitfalls. In particular, simplicity is paramount. You want to have as few "weak links" in your model as possible; i.e. moving parts that are not supported by evidence and that have significant influence on your conclusion. If it's just one or two numbers or assumptions that are arbitrary, then the model can help you understand the implications of your uncertainty about them, and you might also be able to draw some kind of conclusion after appropriate sensitivity testing. However, if it's 10 or 20, then you're probably going to be led astray by spurious results.
