Michael_Wulfsohn

Comments

This Can't Go On

My interpretation of the argument is not that it equates atoms to dollars. Rather, it invokes whatever computations are necessary to produce (e.g. through simulations) an amount of value equal to today's entire global economy. Can those computations be carried out with a single atom? If not, then we can't grow at the current rate for 8200 years.
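
For a rough check of that 8200-year figure, here is the arithmetic as a sketch (the ~2% growth rate and ~10^70 atoms in our galaxy are assumed round numbers, not figures taken from the original post):

    # How long can ~2% annual growth continue before the economy exceeds
    # one "today's economy" of value per atom in the galaxy?
    import math

    growth_rate = 0.02        # assumed ~2% real growth per year
    atoms_in_galaxy = 1e70    # assumed order-of-magnitude atom count

    # Solve (1 + growth_rate)**t = atoms_in_galaxy for t
    years = math.log(atoms_in_galaxy) / math.log(1 + growth_rate)
    print(f"{years:.0f} years")  # ~8140, close to the 8200 quoted above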

Making impact researchful

Thanks for your detailed reply. Absolutely, there is some academic reward available from solving problems. Naively, the goal is to impress other academics (and thus get published and cited), and academics are more impressed when the work solves a problem.

You seem to encourage problem-solving work, and point out that governments are starting to push academia in that direction. This is great, and to me it raises the interesting question of optimal policy for rewarding research. Designing that policy is supremely difficult, at least outside of commercialisable research. My understanding is that optimal policy would pay each researcher something like the marginal societal benefit of their work, summed globally and intertemporally forever. How on earth do you estimate that for the seminal New Keynesian model paper? Governments won't come close, and (I imagine) will tend to focus on projects whose benefits can be more easily measured or otherwise justified. So we are back to the problem of misaligned researcher incentives. But surely a government push towards impact is a step in the right direction.
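
To make the shape of that ideal explicit (my own notation, purely illustrative), the payment to researcher $i$ would be something like

    \text{pay}_i = \sum_{t=0}^{\infty} \delta^t \sum_{j} MB_{ijt}

where $MB_{ijt}$ is the marginal benefit of researcher $i$'s work to person $j$ in year $t$, and $\delta$ is a social discount factor. The double sum runs over everyone, forever, which is why no government can actually compute it.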

Until our civilisation solves that optimal policy problem, I think academia will continue to incentivise the pursuit of knowledge at least partly for knowledge's sake. I wrote the post because understanding the implications of that has been useful to me.

Making impact researchful

I should clarify: I don't mean a small amount of work, but a small conceptual adjustment. The example I give in the post is adjusting from fully addressing a specific application to partially addressing a more general question, and doing so in a way that is hopefully intellectually stimulating to other researchers.

In my own work, using a consumer intertemporal optimisation model, I've tried to calculate the optimal amount for humanity to spend now on mitigating existential risk. That is the sort of problem-solving question I'm talking about. A couple of possible ways forward for me: include multiple countries and explore the interactions between x-risk mitigation and global public good provision; or use the setting of existential risk to learn more about a particular type of utility function which someone pointed me to for that purpose.
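
For concreteness, here is a stylised sketch of that kind of planner's problem (generic functional forms for illustration, not my actual model):

    \max_{\{c_t, m_t\}} \sum_{t=0}^{\infty} \beta^t S_t \, u(c_t)
    \quad \text{s.t.} \quad c_t + m_t \le y_t, \qquad S_t = \prod_{s<t} \bigl(1 - h(m_s)\bigr)

where $c_t$ is consumption, $m_t$ is spending on x-risk mitigation, $S_t$ is the probability of surviving to period $t$, and $h(m)$ is a per-period extinction hazard that decreases in mitigation spending. The optimal $m_t$ trades off consumption today against the survival of all future utility.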

Open Thread #39

Ok, so you're talking about a scenario where humans cease to exist, and other intelligent entities don't exist or don't find Earth, but where there is still value in certain things being done in our absence. I think the answer depends on what you think is valuable in that scenario, which you don't define. Are the "best things" safeguarding other species, or keeping the earth at a certain temperature?

But this is all quite pessimistic. Achieving this sort of aim seems like a second-best outcome compared to humanity's survival.

For example, if Earth becomes uninhabitable, colonisation of other planets is extremely valuable. Perhaps you could do more good by helping humans to move beyond Earth, or to become highly resilient to environmental conditions? Surely the best way to ensure that human goals are met is to ensure that at least a few humans survive.

Anyway, going with your actual question: how you should pursue it really depends on your situation, skill set, finances, and values. The philosophical task of determining what should be done if we don't survive is one possibility. (By the way, who should get to decide that?) Robotics and AI seem like another, based on the examples you gave. Whatever you decide, I'd suggest keeping the flexibility to change course later, e.g. by learning transferable skills, in case you change your mind about what's important.

When to focus and when to re-evaluate

I have another possible reason why focusing on one project might be better than dividing one's time between many: there may be returns to density of time spent. That is, an hour spent on a project is more productive if you've just spent many hours on that project. For example, when I come back to a task after a few days, its details aren't as fresh in my mind; I have to spend time getting back up to speed, and I miss insights that I wouldn't otherwise have missed.

I haven't seen much evidence on this beyond my own experience. There might also be countervailing effects, such as the time concepts need to "sink in", or synergies like insights for one project gleaned from work on another. It probably varies by task. My impression is that research projects feature very high returns to density of time spent.

EA should beware concessions

Thanks, it does a bit.

What I was saying is that if I were Andrew, I'd make it crystal clear that I'm happy to make the cup of tea, but don't want to be shouted at; there are better ways to handle disagreements, and demands should be framed as requests. Chances are that Bob doesn't enjoy shouting, so working out a way of making requests and settling disagreements without the shouting would benefit both.

More generally, I'd try to develop the relationship into something less "transactional": one where you act as partners willing to advance each other's interests, and where there is more trust, rather than only doing things in expectation of reward.

EA should beware concessions

Sounds like a really interesting and worthwhile topic to discuss, but it's quite hard to be sure I'm on the same page as you without a few examples; even hypothetical ones would do. As for "for reasons that should not need to be said": unfortunately I don't understand the reasons; am I missing something?

Anyway, speaking in generalities, I believe it's extremely tempting to assume an adversarial dynamic exists. Nine times out of ten, it's probably a misunderstanding. For example, if an unpalatable condition is imposed, it's worth finding out the underlying reasons for it and trying to satisfy them in other ways. Since humans have a tendency towards "us vs. them" tribal thinking, there's considerable value in making the effort to find common ground, establish mutual understanding, and reframe the interaction as collegial rather than adversarial.

This isn't meant as an argument against what you've said.

The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik

Ah, you're right about the hedonistic framework. On re-reading your intro I think I meant the idea of using pleasure as a synonym for happiness and taking pain and suffering as synonyms for unhappiness. This, combined with the idea of counting minutes of pleasure vs. pain, seems to focus on just the experiencing self.

The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik

Thanks for the post. I doubt the length is a problem. As long as you're willing to produce quality analysis, my guess is that most of the people on this forum would be happy to read it.

My thoughts are that destruction of ecosystems is not justifiable, especially because many of its effects are probably irreversible (e.g. the extinction of some species), and because there is huge uncertainty about its impact. The uncertainty arises from the points you make, and from the shakiness of some of the assumptions you use, such as the hedonistic framework. (For example, in humans the distinction between the "experiencing" and "remembering" selves diminishes the value of this framework, and we don't know the extent to which it applies to animals.) Further uncertainty exists because we don't know what technological capabilities we might have in the future to reduce wild animal suffering. So, almost regardless of the specifics, I believe it would be better to wait at least until we know more about animal suffering and humanity's future capabilities before seriously considering a measure as drastic and irreversible as destroying habitats. This might be just a different point of emphasis rather than something you didn't cover.

Should Good Ventures focus on current giving opportunities, or save for future giving opportunities?

Sure. When I say "arbitrary", I mean not based on evidence, or on any kind of robust reasoning. I think that's the same as your conception of it.

The "conclusion" of your model is a recommendation between giving now vs. giving later, though I acknowledge that you don't go as far as to actually make a recommendation.

To explain the problem with arbitrary inputs: when working with a model, I often think about how I would defend its conclusions against someone who wanted to argue with me. If my model contains a number I chose simply because it "felt" right, that person could quite reasonably suggest using a different number. If they can choose some other reasonable number that produces different conclusions, then they have shown that my conclusions are not reliable. The key test for an arbitrary assumption is: will the conclusions change if I assume other values?
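
As a sketch of that test in practice, here is a toy give-now-vs-give-later model with a deliberately arbitrary input (the model and all numbers are hypothetical, just to show the mechanics):

    # Toy sensitivity check: does the conclusion survive varying an
    # arbitrary assumption over a range of reasonable values?

    def value_of_giving_later(annual_return, years=10, discount=0.03):
        """Present value of $1 invested now and donated after `years`."""
        return (1 + annual_return) ** years / (1 + discount) ** years

    # Suppose the arbitrary input is the annual return on investment.
    for annual_return in [0.01, 0.03, 0.05, 0.07]:
        later = value_of_giving_later(annual_return)
        verdict = "give later" if later > 1.0 else "give now"
        print(f"return={annual_return:.0%}: value={later:.2f} -> {verdict}")

    # The verdict flips between 3% and 5%, so a conclusion resting on a
    # single "felt right" return figure would not be defensible.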

Otherwise, arbitrary assumptions can be helpful if you want to conduct a hypothetical "if this, then that" analysis to understand a particular dynamic at play, as in Bayesian probability. But this is really hard if you've made lots of arbitrary assumptions (say 10-20); it's difficult to get any helpful insight from "if this and this and this and this and..., then that".

So yes, we are in a bind when we want to make predictions about the future where there is no data. Who was it that said "prediction is difficult, especially about the future"? ;-) But models that aren't sufficiently grounded in reality have limited benefit, and might even be counterproductive. The challenge with modelling is always to find ways to draw robust and useful conclusions given what we have.
