Arepo

Comments

What Makes Outreach to Progressives Hard

Helpful post!

What makes you say rejecting person-affecting views has uncomfortable implications for progressive and environmental ethics, out of curiosity? I would have thought the opposite: person-affecting views struggle not to treat environmental collapse as morally neutral if it leads to a different set of people existing than would have otherwise.

Deference for Bayesians

I've strong-upvoted Ben's points, and would add a couple of concerns:
* I don't know how in any particular situation one would usefully separate the object-level from the general principle. What heuristic would I follow to judge how far to defer to experts on banana growers in Honduras on the subject of banana-related politics?
* The less pure a science gets (using https://xkcd.com/435/ as a guide), the less we should be inclined to trust its authorities - but also the less we should be inclined to trust our own judgement, since the number of relevant factors grows at a huge rate

So sticking to the object level and the example of the minimum wage, I would not update that much on a single study, but I strongly agree with Ben that 98% is far too confident, since when you say 'the only theoretical reason', you presumably mean 'as determined by other social science theory'.

(In this particular case, it seems like you're conflating the (simple and intuitive to me as well, fwiw) individual effect of having to pay a higher wage reducing the desirability of hiring someone with the much more complex and much less intuitive claim that higher wages in general would reduce the number of jobs in general - which is the sort of distinction that an expert in the field seems more likely to be able to draw.)

So my instinct is that Bayesians should only strongly disagree with experts in particular cases where they can link their disagreement to particular claims the experts have made that seem demonstrably wrong by Bayesian lights.
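To make the 98% point concrete, here's a minimal sketch in Python of the update I have in mind - all of the numbers are made up purely for illustration, not estimates of anything:

```python
# Illustrative only: Bayes' rule in odds form, with made-up numbers.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds * likelihood ratio, converted back to a probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.75      # assumed credence from the simple 'higher wages deter hiring' intuition
study_lr = 2.0    # assumed evidential strength of one noisy observational study

print(posterior(prior, study_lr))   # ~0.86 - one such study falls well short of 0.98

# Likelihood ratio a piece of evidence would need to take 0.75 all the way to 0.98:
target = 0.98
print((target / (1 - target)) / (prior / (1 - prior)))   # ~16.3
```

On those (made-up) numbers, reaching 98% requires evidence about sixteen times likelier under the claim than under its negation, which a single noisy study rarely provides - hence the worry about overconfidence.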

Making decisions under moral uncertainty

There are some fundamental problems facing moral uncertainty that I haven't seen its proponents even refer to, let alone refute:
 

  • The xkcd.com/927 problem - whatever theory of moral uncertainty one propounds to deal with theories T1...Tn seems likely to itself constitute a Tn+1. I've just been reading through Will's new book, and though it addresses this one, it does so very vaguely, basically by claiming that 'one ought under moral uncertainty theory X to do X1' is a qualitatively different claim from 'one ought under moral theory Y to do Y1'. This might be true, depending on some very murky questions about what norms look like, but it also seems that the latter is qualitatively different from the claim that 'one ought under moral theory Z to do Z1'. We use the same word 'ought' in all three cases, but it may well be a homonym.
  • If one of the many subtypes of moral anti-realism is true, moral uncertainty is devoid of content - words like 'should', 'ought' etc. are either necessarily wrong or not even meaningful.
AMA: Ajeya Cotra, researcher at Open Phil

One issue I feel the EA community has badly neglected is the probability, given various (including modest) civilisational backslide scenarios, of us still being able to develop (and *actually* developing) the economies of scale needed to become an interstellar species.

To give a single example, runaway Kessler syndrome could make putting anything in orbit basically impossible unless governments overcome the global tragedy of the commons and mount an extremely expensive mission to remove enough debris to regain effective orbital access - in a world where we've lost satellite technology and everything that depends on it.

EAs so far seem to have treated 'humanity doesn't go extinct' in scenarios like this as equivalent to 'humanity reaches its interstellar potential', which seems very dangerous to me - intuitively, it feels like there's at least a 1% chance that we would never solve such a problem in practice, even if civilisation lasted for millennia afterwards. If so, then we should be treating it as (at least) 1/100th of an existential catastrophe - and a couple of orders of magnitude isn't that big a deal, especially if there are many more such scenarios than there are extinction-causing ones.
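To make the arithmetic explicit, here's a minimal sketch in Python of the accounting I have in mind - converting a survivable-but-possibly-unrecoverable scenario into a fraction of an existential catastrophe so it can sit on the same scale as extinction risks. Every number is a made-up assumption for illustration, not an estimate:

```python
# Illustrative accounting only - all probabilities below are assumptions, not estimates.

p_backslide = 0.05       # assumed probability of e.g. a runaway Kessler scenario
p_never_recover = 0.01   # assumed chance we never regain the capacity to expand beyond Earth

# Expected loss, measured in 'existential catastrophe equivalents':
backslide_equiv = p_backslide * p_never_recover   # 0.0005

p_direct_extinction = 0.001   # assumed probability of some directly extinction-causing risk

print(backslide_equiv, p_direct_extinction)
# 0.0005 vs 0.001 - within an order of magnitude of each other, despite the backslide
# being 'survivable', and there may be many more backslide scenarios than extinction ones.
```

The point is just that once the chance of never recovering is non-trivial, these scenarios stop being rounding errors relative to outright extinction risks.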

Do you have any thoughts on how to model this question in a generalisable way, such that it could give a heuristic for non-literal-extinction GCRs? Or do you think one would need to research specific GCRs to answer it for each of them?

AMA: Ajeya Cotra, researcher at Open Phil

What do you make of Ben Garfinkel's work on scepticism about AI's capabilities being separable from its goals, and his broader scepticism of brain-in-a-box scenarios?

Big List of Cause Candidates

Can you spell both of these points out for me? Maybe I'm looking in the wrong place, but I don't see anything in that tag description that recommends criteria for cause candidates.

As for Scott's post, I don't see anything more than a superficial analogy. His argument is something like 'the weight by which we improve our estimation of someone for their having a great idea should be much greater than the weight by which we downgrade our estimation of them for having a stupid idea'. Whether or not one agrees with this, what does it have to do with including on this list an expensive luxury that seemingly no-one has argued for on (effective) altruistic grounds?

Big List of Cause Candidates

Write a post on which aspect? You mean basically fleshing out the whole comment?

Big List of Cause Candidates

One other cause-enabler I'd love to see more research on is donating to (presumably early-stage) for-profits. For all that they have better incentives, it's still a very noisy space with plenty of remaining perverse incentives, so supporting those doing worse than they merit seems like it could be high value.

It might be possible to team up with some VCs on this, to see if any of them have a category of companies they like but won't invest in - perhaps because of a surprising lack of traction, perhaps because of predatory pricing by companies with worse products or ethics, or perhaps because of some other unmerited headwind.

Big List of Cause Candidates

Then I would suggest being clearer about what it's comprehensive of, i.e. by having clear criteria for inclusion.

Big List of Cause Candidates

I would like to see more about 'minor' GCRs and our chance of actually becoming an interstellar civilisation given various forms of backslide. In practice, the EA movement seems to treat that probability as 1 - we can see this attitude in this very post.

I don't think this is remotely justified. The arguments I've seen are generally of the form 'we'll still be able to salvage enough resources to theoretically recreate any given technology', which doesn't mean we could get anywhere near the economies of scale needed to recreate global industry at today's scale, let alone that we actually would, given realistic political development. And industry would need to go well beyond today's technology, to the point where we're a reliably spacefaring civilisation, to avoid meeting the usual definition of an existential catastrophe (a drastic curtailment of life's potential).

If the chance of recovery from any given backslide is 99%, then that's only two orders of magnitude between its expected badness and the badness of outright extinction, even ignoring other negative effects. And given the uncertainty around various GCRs, a couple of orders of magnitude isn't that big a deal (Toby Ord's The Precipice puts only an order of magnitude or two between the probabilities of many of the existential risks we're typically concerned with).

Things I would like to see more discussion of in this area:

  • General principles for assessing the probability of reaching interstellar travel given specific backslide parameters, and then, with reference to these principles:
  • Kessler syndrome
  • Solar storm disruption
  • CO2 emissions from fossil fuels and other climate change rendering the atmosphere unbreathable (this would be a good old-fashioned X-risk, but seems like one that no-one has discussed - in Toby's book he details some extreme scenarios in which a lot of CO2 could be released without necessarily causing human extinction through warming, but some of my back-of-the-envelope maths based on his figures seemed consistent with this unbreathable-atmosphere scenario)
  • CO2 emissions from fossil fuels and other climate change substantially reducing IQs
  • Various 'normal' concerns: antibiotic-resistant bacteria; peak oil; peak phosphorus; substantial agricultural collapse; moderate climate change; major wars; a reverse Flynn effect; supporting interplanetary colonisation; zombie apocalypse
  • Other concerns that I don't know of, or that no-one has yet thought of, that might otherwise be dismissed by zealous X-riskers as 'not a big deal'