Azure

Student in mathematics. Interested in philosophy.

Comments

Hi Peter.

Thank you very much for this! It's much appreciated and I'm glad my comments were somewhat helpful.

Perhaps you'd like to submit the new version as a new, separate post?

If you choose to post the above separately, I would also contact Aaron Gertler, the forum moderator, for some feedback. All the best.

Hi Peter!

Thank you for the write-up!

You're currently getting downvoted (unfortunately, I think!), but I thought I would try to flesh out some reasons why this is the case, potentially to spur discussion:

1. Whether intentional or not, the 'flat earth' images do not seem to be a favourable presentation of your ideas, and they do not seem necessary to make the claims you are making.

2. There is not much structure to the post. I think readers would appreciate an introduction and a conclusion explaining what you are trying to address and how you've done so.

3. Some of the explanations are quite confusing (at least to me); e.g. it's not clear exactly what you mean by

'It can brighten - improving the enterpretation[sic] of a given sentience from a darker to a brighter sentience'

Does this mean 'higher utility/welfare'?

4. I don't think the post is sufficiently self-contained and free-standing to make a credible case.

Also keen to hear whether people agree/disagree with the above!

Systemic change, global poverty eradication, and a career plan rethink: am I right?

Adding one more (hopefully relevant) link:

Dylan Matthews on “Global poverty has fallen, but what should we conclude from that?”

which is more or less a podcast version of the Vox article by Dylan Matthews; the link (and Hickel's response) can be found in Max_Daniel's very helpful list of links.

X-risks to all life v. to humans

Hey! Your link sends us to this very post. Is this intentional?

X-risks to all life v. to humans

Thank you for this post! Very interesting.

(1) Is this a fair/unfair summary of the argument?

P1: On anti-speciesist grounds, we should be indifferent as to whether humans or some other intelligent life form enjoys a grand future.

P2: The risk of extinction of humans alone is strictly lower than the risk of extinction of humans plus all possible future (non-human) intelligent life forms.

C: Therefore, we should revise downwards the value of avoiding the former, or raise the value of avoiding the latter.

(2) Is knowledge about the current evolutionary trajectories of non-human animals likely to fully inform us about 're-evolution'? What are the relevant considerations?

What would a pre-mortem for the long-termist project look like?

Additionally, is it not likely that those scenarios are correlated?