SammyDMartin


The Phil Torres essay in Aeon attacking Longtermism might be good

One substantive point that I do think is worth making is that Torres isn't coming from the perspective of common-sense morality vs. longtermism, but rather from a different, opposing, non-mainstream morality that (like longtermism) is much more common among elites and academics.

Yet this Baconian, capitalist view is one of the most fundamental root causes of the unprecedented environmental crisis that now threatens to destroy large regions of the biosphere, Indigenous communities around the world, and perhaps even Western technological civilisation itself.

When he says that this Baconian idea is going to damage civilisation, presumably he thinks we should do something about it, so he's implicitly arguing for very radical measures that most people today, especially in the Global South, wouldn't endorse at all. Taken at face value, his claim would probably require degrowth and therefore massive economic and political change.

I'm not saying that longtermism agrees with the moral priorities of most people, or that Torres's (progressive? degrowth?) worldview is overall as counterintuitive as longtermism. His perspective is more counterintuitive to me, but on the other hand a lot more people share his worldview, and it's currently much more influential in politics.

But I think it's still important to point out that Torres's worldview goes against common-sense morality as well, and that, like longtermists, he thinks it's okay to second-guess the deeply held moral views of most people under the right circumstances.

Practically, what that means is that, for the reasons you've given, many of the criticisms that rely not on common-sense morality but on his own morality won't land with everyone reading the article. So I agree that this probably doesn't make longtermism look as bad as he thinks.

FWIW, my guess is that if you asked a man in the street whether weak longtermist policies or degrowth environmentalist policies were crazier, he'd probably choose the latter.

Robin Hanson on the Long Reflection

I don't think Hanson would disagree with this claim (that the future is more likely to be better by current values, given the long reflection, compared to e.g. Age of Em). I think it's a fundamental values difference.

Robin Hanson is an interesting and original thinker, but not only is he not an effective altruist, he explicitly doesn't want to make the future go well according to anything like present human values.

The Age of Em, which Hanson clearly doesn't think is an undesirable future, would contain very little of what we value. Hanson acknowledges this, but to him it's a feature, not a bug. Scott Alexander:

Hanson deserves credit for positing a future whose values are likely to upset even the sort of people who say they don’t get upset over future value drift. I’m not sure whether or not he deserves credit for not being upset by it. Yes, it’s got low crime, ample food for everybody, and full employment. But so does Brave New World. The whole point of dystopian fiction is pointing out that we have complicated values beyond material security. Hanson is absolutely right that our traditionalist ancestors would view our own era with as much horror as some of us would view an em era. He’s even right that on utilitarian grounds, it’s hard to argue with an em era where everyone is really happy working eighteen hours a day for their entire lives because we selected for people who feel that way. But at some point, can we make the Lovecraftian argument of “I know my values are provincial and arbitrary, but they’re my provincial arbitrary values and I will make any sacrifice of blood or tears necessary to defend them, even unto the gates of Hell?”

Since Hanson doesn't have a strong interest in steering the long-term future to be good by current values, it's obvious why he wouldn't be a fan of an idea like the long reflection, which has exactly that as its main goal and accepts bad side effects in exchange for a chance of achieving it. It's just a values difference.

Forecasting Compute - Transformative AI and Compute [2/4]

Great post! You might be interested in this related investigation by the MTAIR project I've been working on, which also attempts to build on Ajeya's TAI timeline model, although in a slightly different way from yours (we focus on incorporating non-DL-based paths to TAI, as well as trying to improve on the 'biological anchors' method already described): https://forum.effectivealtruism.org/posts/z8YLoa6HennmRWBr3/link-post-paths-to-high-level-machine-intelligence

Summary of history (empowerment and well-being lens)

One thing that your account might miss is the impact of ideas on empowerment and well-being down the line. E.g. it's a very common argument that Christian ideas about the golden rule motivated anti-slavery sentiment, so if the Roman empire hadn't spread Christianity across Europe then we'd have ended up with very different values.

Similarly, even if the content of ancient Greek moral philosophy wasn't directly useful for improving wellbeing, it inspired the Western philosophical tradition that led to Enlightenment ideals, which in turn led to the abolition of slavery.

I've told two stories about why the Greeks and Romans might have been necessary for future moral progress. Are you skeptical of these appeals to historical contingency, or are the long-run causes of these events just outside the scope of this way of looking at history?

Why AI alignment could be hard with modern deep learning

Very good summary! I've been working on a (much drier) series of posts explaining different AI risk scenarios - https://forum.effectivealtruism.org/posts/KxDgeyyhppRD5qdfZ/link-post-how-plausible-are-ai-takeover-scenarios

But I think I might adopt 'Sycophant'/'Schemer' as better, more descriptive names for WFLL1/WFLL2 (outer/inner alignment failure) going forward.

I also liked that you emphasised how much the optimist vs. pessimist case depends on hard-to-articulate intuitions about things like how easily findable deceptive models are and how easy incremental course correction is. I called this the 'hackability' of alignment: https://www.lesswrong.com/posts/zkF9PNSyDKusoyLkP/investigating-ai-takeover-scenarios#Alignment__Hackability_

AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA

Thanks for this reply. Would you say, then, that Covid has strengthened the case for some sorts of democracy reduction but not others? So we should be more confident in enlightened preference voting, but less confident in Garett Jones's argument (from 10% Less Democracy) in favour of more independent agencies?

AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA

Do you think that the West's disastrous experience with the coronavirus (things like underinvesting in vaccines, not adopting challenge trials, not suppressing the virus, mixed messaging on masks early on, the FDA's errors on testing, and others as enumerated in this thread, or in books like The Premonition) has strengthened, weakened or not changed much the credibility of your thesis in 'Against Democracy' that we should expect better outcomes if we give the knowledgeable more freedom to choose policy?

For reasons it might weaken 'Against Democracy': it seems like a lot of expert bureaucracies did an unusually bad job because they couldn't take correction; see this summary post for examples:

https://forum.effectivealtruism.org/posts/dYiJLvcRJ4nk4xm3X#Vax 

For reasons it might strengthen the argument: it seems like the institutions that did better than average were the ones that were more able to act autonomously; see e.g. this from Alex Tabarrok:

https://marginalrevolution.com/marginalrevolution/2021/06/the-premonition.html

Or this summary

Does Moral Philosophy Drive Moral Progress?

I don't think the view that moral philosophers had a positive influence on moral developments in history is a simple model of 'everyone makes a mistake, moral philosopher points out the mistake and convinces people, everyone changes their minds'. I think that what Bykvist, Ord and MacAskill were getting at is that these people gave history a shove at the right moment.

At the very least, it doesn't seem that discovering the correct moral view is sufficient for achieving moral progress in actuality.

I have no doubt that they'd agree with you about this. But if we all accept this claim, there are two further models we could look at.

One is a model where changing economic circumstances influence what moral views it is feasible to act on, but progress in moral knowledge still affects what we choose to do, given the constraints of our economic circumstances.

The other is a model where economics determines everything and the moral views we hold are an epiphenomenon blown about by material conditions (note this is very similar to some Marxist views of history). Your view is that 'the two are totally decoupled', but at most your examples show that the two are somewhat decoupled, not that moral reasoning has no effect. And there are plenty of examples of explicit moral reasoning having at least some effect on events - see Bykvist, Ord and MacAskill's original list.

The strawman view that moral advances determine everything is not what Bykvist, Ord and MacAskill are proposing; rather, it's the mixed view that ideas influence events within the realm of what's possible.

COVID: How did we do? How can we know?

Is there any public organisation which can be proud of last year?

This is an important question, because we want to find out what was done right organizationally in a situation where most failed, so we can do more of it. Especially if this is a test-run for X-risks.

There are two examples that come to mind of government agencies that did a moderately good job at a task which was new and difficult. One is the UK's vaccine taskforce, which was set up by Dominic Cummings and the UK's chief scientific advisor, Patrick Vallance, and was responsible for the relatively fast procurement and rollout. You might say similar of the Operation Warp Speed team, but the UK vaccine taskforce over-ordered to a larger extent than Warp Speed and was also responsible for other sane decisions, like the simple oldest-first vaccine prioritization and first doses first, which prevented a genuine catastrophe from the B.1.1.7 variant. (Also credit to the MHRA, the UK's regulator, for mostly staying out of the way.)

See this from Cummings' blog, which also outlines many of the worst early expert failures on covid, and my discussion of it here:

This is why there was no serious vaccine plan — i.e spending billions on concurrent (rather than the normal sequential) creation/manufacturing/distribution etc — until after the switch to Plan B. I spoke to Vallance on 15 March about a ‘Manhattan Project’ for vaccines out of Hancock’s grip but it was delayed by the chaotic shift from Plan A to lockdown then the PM’s near-death. In April Vallance, the Cabinet Secretary and I told the PM to create the Vaccine Taskforce, sideline Hancock, and shift commercial support from DHSC to BEIS. He agreed, this happened, the Chancellor supplied the cash. On 10 May I told officials that the VTF needed a) a much bigger budget, b) a completely different approach to DHSC’s, which had been mired in the usual processes, so it could develop concurrent plans, and c) that Bingham needed the authority to make financial decisions herself without clearance from Hancock.

This plan went on to succeed and significantly outperform expectations for rollout speed, with early approval of the AZ and Pfizer vaccines and an early decision to delay second doses by 12 weeks. I see the success of the UK vaccine taskforce, and its ability to maintain a roughly appropriate sense of the costs and benefits involved and of the enormous value of vaccination, as a good example of how institution design is the key issue that most needs fixing. Have an efficient, streamlined taskforce, and you can still get things done in government.

The other example of success often discussed is the central banks, especially in the US, which responded quickly to the COVID-19 dip and prevented a much worse economic catastrophe. Alex Tabarrok:

So what lessons should we take from this? Lewis doesn’t say but my colleague Garett Jones argues for more independent agencies in his excellent book 10% Less Democracy. The problem with the CDC was that after 1976 it was too responsive to political pressures, i.e. too democratic. What are the alternatives?

The Federal Reserve is governed by a seven-member board each of whom is appointed to a single 14-year term, making it rare for a President to be able to appoint a majority of the board. Moreover, since members cannot be reappointed there is less incentive to curry political favor. The Chairperson is appointed by the President to a four-year term and must also be approved by the Senate. These checks and balances make the Federal Reserve a relatively independent agency with the power to reject democratic pressures for inflationary stimulus. Although independent central banks can be a thorn in the side of politicians who want their aid in juicing the economy as elections approach, the evidence is that independent central banks reduce inflation without reducing economic growth. A multi-member governing board with long and overlapping appointments could also make the CDC more independent from democratic politics which is what you want when a once in 100 year pandemic hits and the organization needs to make unpopular decisions before most people see the danger.

I really would like to be able to agree with Tabarrok here and say that, yes, choosing the right experts and protecting them from democratic feedback is the right answer and all we need, and that the expert failures we saw were due to democratic pressure in one form or another. But the problem is that we can just look at SAGE in the UK early in the pandemic, or Anders Tegnell in Sweden, who were close to unfireable and much more independent, but underperformed badly. Or China, which is entirely protected from democratic interference and still didn't do challenge trials.

Just saying the words 'have the right experts and prevent them from being biased by outside interference' doesn't make it so. But, at the same time, it is possible to have fast-responding teams of experts that make the right decisions, if they're the right experts - the Vaccine Taskforce proves that. I think the advice from the book 10% Less Democracy still stands, but we have to approach implementing it with far more caution than I would have thought pre-Covid.

It seems like following the 10% less democracy policy can give you either a really great outcome like the one you've described, and like we saw a small sliver of in the UK's vaccine procurement, or a colossal disaster like your impossible-to-fire expert epidemiologists torpedoing your economy and public health and then changing their minds a year late.

Suppose the UK had created a 'pandemic taskforce' with similar composition to the vaccine taskforce, in February instead of April, and with a wider remit over things like testing and running the trials. I think many of your happy timeline steps could have taken place.

One of the more positive signs that I've seen in recent times is that well-informed elite opinion (going by, for example, the Economist editorials) has started to shift towards scepticism of these institutions and a recognition of how badly they've failed. We even saw an NYT article about the CDC and whether reform is possible.

Among the people who matter for policymaking, the scale of the failure has not been swept under the rug. See here:

We believe that Mr Biden is wrong. A waiver may signal that his administration cares about the world, but it is at best an empty gesture and at worst a cynical one.

...

Economists’ central estimate for the direct value of a course is $2,900—if you include factors like long covid and the effect of impaired education, the total is much bigger. 

This strikes me as the sort of remark I'd expect to see in one of these threads, which has to be a good sign.

Why should we *not* put effort into AI safety research?

Alignment by default: the case where we have very strong reasons to expect that the methods best suited to ensuring that AI is aligned are the same as the methods best suited to producing AI that is capable enough to understand what we want and act on it in the first place.

To the extent that alignment by default is likely, we don't need a special effort put into AI safety: we can assume that economic incentives will ensure we put in as much effort as is needed, and that if we don't put in sufficient effort, we won't get capable or transformative AI anyway.

Stuart Russell talks about this as a real possibility, but see also https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default
