RobertHarling

Comments

Good v. Optimal Futures

Thanks for sharing this paper; I hadn't heard of it before and it sounds really interesting.

Good v. Optimal Futures

Thanks for your comment, Jack; that's a really great point. I suppose we would seek to influence AI slightly differently for each reason:

  1. Reduce chance of unaligned/uncontrolled AI
  2. Increase chance of useful AI
  3. Increase chance of exactly aligned AI

For example, you could reduce the chance of AI risk by stopping all AI development, but you would then lose the other two benefits; or you could create a practically useful AI but not one that would guide humanity towards an optimal future. That being said, I reckon that in practice a lot of work to improve the development of AI would hit all three. Though maybe if you view one reason as much more important than the others, you would focus on a specific type of AI work.

The Fermi Paradox has not been dissolved

Thank you very much for this post, I found it very interesting. I remember reading the original paper and feeling a bit confused by it. It's not too fresh in my mind, so I don't feel well placed to defend it. I appreciate you highlighting how the method they use to estimate f_l is unique and drives their main result.

A range of 0.01 to 1 for f_l in your preferred model seems surprisingly high to me, though I don't understand the Lineweaver-Davis paper, which I think your range is based on, well enough to really comment on its result. I think they mention how their approach leaves uncertainty in n_e as to what counts as a terrestrial planet. I wonder if most estimates of any one parameter have a tendency to shift uncertainty onto other parameters, so that when you combine individual estimates of each parameter you end up with an unrealistically certain result.
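To illustrate that worry about hidden uncertainty, here's a toy sketch (the ranges are made up purely for illustration, not taken from either paper): multiplying median point estimates of several uncertain factors gives one tidy number, but propagating the full distributions shows how wide the resulting uncertainty really is.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative, made-up log-uniform ranges for three uncertain
# Drake-equation-style factors (e.g. something like f_l).
def log_uniform(low, high, size):
    return np.exp(rng.uniform(np.log(low), np.log(high), size))

f1 = log_uniform(1e-3, 1.0, n)
f2 = log_uniform(1e-2, 1.0, n)
f3 = log_uniform(1e-4, 1.0, n)

product = f1 * f2 * f3

# Point-estimate approach: multiply the medians of each factor.
point_estimate = np.median(f1) * np.median(f2) * np.median(f3)

print(f"product of point estimates: {point_estimate:.2e}")
print(f"median of full product distribution: {np.median(product):.2e}")
print(f"5th-95th percentile of product: "
      f"{np.percentile(product, 5):.2e} to {np.percentile(product, 95):.2e}")
```

The central value barely changes, but the 5th-95th percentile range spans several orders of magnitude, which is the kind of uncertainty a single combined point estimate hides.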

Good v. Optimal Futures

Thanks for your comment athowes. I appreciate your point that I could have done more in the post to justify this "binary" of good and optimal. 

Though the simulated minds scenario I described seems at first to be pretty much optimal, its value could be much larger if you thought it would last for many more years. Given large enough uncertainty about future technology, maybe seeking to identify the optimal future is impossible.

I think your resources, value and efficiency model is really interesting. My intuition is that values are the limiting factor. I can believe there are pretty strong forces that mean humanity will eventually end up optimising resources and efficiency, but I'm less confident that values will converge to the best ones over time. This probably depends on whether you think a singleton will form at some point; if so, it feels like the limit is how good the values of the singleton are.

Make a Public Commitment to Writing EA Forum Posts

Thanks again for creating this post, Neel. I can confirm I managed to write and publish my post in time!

I think without committing to writing it here, my post would either have been written a few months later, or perhaps not been published at all.

A toy model for technological existential risk

Thanks for your comment!

I hadn't thought about selection effects, thanks for pointing that out. I suppose Bostrom actually describes black balls as technologies that cause catastrophe, but doesn't set the bar as high as extinction. Then drawing a black ball doesn't affect future populations drastically, so perhaps selection effects don't apply?

Also, I think in The Precipice Toby Ord makes some inferences about natural extinction risk from the length of time humanity has existed, though I may not be remembering correctly. I think the logic was something like: "Assume we're randomly distributed amongst possible humans. If existential risk were very high, then there'd be a very small set of worlds in which humans have been around for this long, and it would be very unlikely that we'd be in such a world. Therefore it's more likely that our estimate of existential risk is too high." This then seems quite similar to my model of making inferences based on not having previously drawn a black ball. I don't think I understand selection effects too well though, so I appreciate any comments on this!
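As a toy version of that inference (with numbers chosen purely for illustration, and ignoring the selection effects in question), here's a simple Bayesian update: start with a prior over a per-century catastrophe probability, then condition on having survived roughly 2,000 centuries without drawing a black ball.

```python
import numpy as np

# Toy Bayesian version of the "we've survived this long" inference.
# Assumptions (mine, for illustration only): a uniform grid prior over the
# per-century probability p of drawing a black ball, and ~2,000 centuries of
# observed survival. Note this deliberately ignores observer selection effects,
# which is exactly the open question in the discussion above.
p = np.linspace(0.0001, 0.5, 1000)     # candidate per-century catastrophe probabilities
prior = np.ones_like(p) / len(p)       # uniform prior over the grid
centuries_survived = 2000

likelihood = (1 - p) ** centuries_survived   # probability of never drawing a black ball
posterior = prior * likelihood
posterior /= posterior.sum()

print(f"prior mean of p:     {(p * prior).sum():.4f}")
print(f"posterior mean of p: {(p * posterior).sum():.6f}")
# Surviving many periods without catastrophe pushes the posterior strongly
# towards low per-period risk, matching the inference sketched above.
```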

Make a Public Commitment to Writing EA Forum Posts

Commitment: I commit to writing a post on a vague idea about where most of the value of the long-term future is and how sensitive it is to different values, by 7pm on 11th December.

Thanks for suggesting this, Neel!

Things I Learned at the EA Student Summit

Thanks for this post, Akash, I found it really interesting to read. I definitely agree with your point about how friendly EAs can be when you reach out to them. I think this is something I've been aware of for a while, but it still takes me time to internalise and make myself more willing to reach out to people. But it's definitely something I want to push myself to do more, and encourage other people to do. No one is going to be unhappy about someone showing an interest in their work and ideas!

Idea: statements on behalf of the general EA community

This is a really interesting idea, though I instinctively have a couple of concerns:

1) What is the benefit of such statements? Can we expect the opinion of the EA community to really carry much weight beyond relatively niche areas?

2) Can the EA community be sufficiently well defined to collect its opinion? It is quite hard to work out who identifies as an EA, not least because some people are unsure themselves. I would worry that any attempt to define the EA community too strictly (such as when surveying the community's opinion) could come across as exclusionary and discourage some people from getting involved.

X-risks to all life v. to humans

Thanks for your response!

I definitely see your point on the value of the information to the future civilisation. The technology required to reach the moon and find the cache is likely quite different from the level required to resurrect humanity from the cache, so the information could still be very valuable.

An interesting consideration may be how we value a planet being under human control vs under the control of this new civilisation. We may think we cannot assume the new civilisation would be doing valuable things, but that a human-controlled planet would be quite valuable. This consideration would depend a lot on your moral beliefs. If we don't extrapolate the value of humanity to the value of this new civilisation, we could then ask whether we can extrapolate from how humanity would respond to finding the cache on the moon to how the new civilisation would respond.
