Wei Dai

4,353 karma · 7 posts · 253 comments
Another podcast, linked below, has some details about Will and Toby's early interactions with the Rationality community. Also, Holden Karnofsky has an account on LW and interacted with the Rationality community via, e.g., this extensively discussed 2011 post.

https://80000hours.org/podcast/episodes/will-macaskill-what-we-owe-the-future/

Will MacAskill: But then the biggest thing was just looking at what are the options I have available to me in terms of what do I focus my time on? Where one is building up this idea of Giving What We Can, kind of a moral movement focused on helping people and using evidence and data to do that. It just seemed like we were getting a lot of traction there.

Will MacAskill: Alternatively, I did go spend these five-hour seminars at Future of Humanity Institute, that were talking about the impact of superintelligence. Actually, one way in which I was wrong is just the impact of the book that that turned into — namely Superintelligence — was maybe 100 times more impactful than I expected.

Rob Wiblin: Oh, wow.

Will MacAskill: Superintelligence has sold 200,000 copies. If you’d asked me how many copies I expected it to sell, maybe I would have said 1,000 or 2,000. So the impact of it actually was much greater than I was thinking at the time. But honestly, I just think I was right that the tractability of what we were working on at the time was pretty low. And doing this thing of just building a movement of people who really care about some of the problems in the world and who are trying to think carefully about how to make progress there was just much better than being this additional person in the seminar room. I honestly think that intuition was correct. And that was true for Toby as well. Early days of Giving What We Can, he’d be having these arguments with people on LessWrong about whether it was right to focus on global health and development. And his view was, “Well, we’re actually doing something.”

Rob Wiblin: “You guys just comment on this forum.”

Will MacAskill: Yeah. Looking back, actually, again, I will say I’ve been surprised by just how influential some of these ideas have been. And that’s a tremendous testament to early thinkers, like Nick Bostrom and Eliezer Yudkowsky and Carl Shulman. At the same time, I think the insight that we had, which was we’ve actually just got to build stuff — even if perhaps there’s some theoretical arguments that you should be prioritising in a different way — there are many, many positive indirect effects from just doing something impressive and concrete and tangible, as well as the enormous benefits that we have succeeded in producing, which is tens to hundreds of millions of bed nets distributed and thousands of lives saved.

https://80000hours.org/podcast/episodes/will-macaskill-moral-philosophy/

Robert Wiblin: We’re going to dive into your philosophical views, how you’d like to see effective altruism change, life as an academic, and what you’re researching now. First, how did effective altruism get started in the first place?

Will MacAskill: Effective altruism as a community is really the confluence of 3 different movements. One was GiveWell, co-founded by Elie Hassenfeld and Holden Karnofsky. Second was LessWrong, primarily based in the Bay Area. The third is the co-founding of Giving What We Can by myself and Toby Ord. Where Giving What We Can was encouraging people to give at least 10% of their income to whatever charities were most effective. Back then we also had a set of recommended charities which were Toby and I’s best guesses about what are the organizations that can have the biggest possible impact with a given amount of money. My path into it was really by being inspired by Peter Singer and finding compelling his arguments that we in rich countries have a duty to give most of our income if we can to those organizations that will do the most good.

As I’ve said elsewhere, I have more complicated feelings about genetic enhancement. I think it is potentially beneficial, but it also tends to be correlated with bad politics, and it could be that the negative social effects of allowing it outweigh the benefits.

I appreciate you keeping an open mind on genetic enhancement (i.e., not grouping it with racism and fascism, or immediately calling for it to be banned). Nevertheless, it fills me with a sense of hopelessness to consider that one of the most thoughtful groups of people on Earth (i.e., EAs) might still realistically decide to ban the discussion of human genetic enhancement (I'm assuming that's the implied alternative to "allowing it"), on the grounds that it "tends to be correlated with bad politics".

When I first heard about the idea of greater than human intelligence (i.e., superintelligence), I imagined that humanity would approach it as one of the most important strategic decisions we'll ever face, and that there would be extensive worldwide debates about the relative merits of each possible route to achieving it, such as AI and human genetic enhancement. Your comment represents such a divergence from that vision, and it's coming from a group like this...

If even we shy away from discussing a potentially world-altering technology simply because of its political baggage, what hope is there for broader society to engage in nuanced, good-faith conversations about these issues?

I think paying AIs to reveal their misalignment and potentially to work for us and prevent AI takeover seems like a potentially very promising intervention.

I'm pretty skeptical of this. (Found a longer explanation of the proposal here.)

An AI facing such a deal would be very concerned that we're merely trying to trick it into revealing its own misalignment (which we'd then try to patch out). It seems to me that it would probably be a lot easier for us to trick an AI into believing that we're honestly presenting it such a deal (including by directly manipulating its weights and activations) than to actually honestly present such a deal and in doing so cause the AI to believe it.

Further, I think there is a substantial chance that AI moral patienthood becomes a huge issue in coming years and thus it is good to ensure that field has better views and interventions.

I agree with this part.

A couple of further considerations, or "stops on the crazy train", that you may be interested in:

(These were written in an x-risk framing, but implications for s-risk are fairly straightforward.)

As for actionable points, I've been advocating working on metaphilosophy or AI philosophical competence, as a way of speeding up philosophical progress in general (so that it doesn't fall behind other kinds of intellectual progress, such as scientific and technological progress, that seem likely to be greatly sped up by AI development by default) and of improving the likelihood that human-descended civilization(s) eventually reach correct conclusions on important moral and philosophical questions and are motivated/guided by those conclusions.

In posts like this and this, I have lamented the extreme neglect of this field, even among people otherwise interested in philosophy and AI, such as yourself. It seems particularly puzzling that no professional philosopher has even publicly expressed concern about AI philosophical competence and related risks (at least AFAIK), even as developments such as ChatGPT have greatly increased societal attention to AI and AI safety over the last couple of years. I wonder if you have any insights into why that is the case.

Lower than 1%? A lot more uncertainty due to important unsolved questions in philosophy of mind.

I agree that there is a lot of uncertainty, but don't understand how that is compatible with a <1% likelihood of AI sentience. Doesn't that represent near certainty that AIs will not be sentient?

The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.

Thanks for the clarification. Why doesn't this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)? I feel like maybe you're adopting the framing of "comparative advantage" too much in a situation where the idea doesn't work well (because the situation is too adversarial / not cooperative enough). It seems a bit like a country, after suffering a military defeat, saying "We're better scholars than we are soldiers. Let's pursue our comparative advantage and reallocate our defense budget into our universities."

Rather, I think its impact will come from advocating for not-super-controversial ideas, but it will be able to generate them in part because it avoided the effects I listed in my comment above.

This part seems reasonable.

I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Therefore our comparative advantage will need to be truth-seeking.

I'm actually not sure about this logic. Can you expand on why EA having insufficient skill to "navigate power dynamics around AI" implies "our comparative advantage will need to be truth-seeking"?

One problem I see is that "comparative advantage" is not straightforwardly applicable here, because the relevant trade or cooperation (needed for the concept to make sense) may not exist. For example, imagine that EA's truth-seeking orientation causes it to discover and announce one or more politically inconvenient truths (e.g. there are highly upvoted posts about these topics on EAF), which in turn causes other less truth-seeking communities to shun EA and refuse to pay attention to its ideas and arguments. In this scenario, if EA also doesn't have much power to directly influence the development of AI (as you seem to suggest), then how does EA's truth-seeking benefit the world?

(There are worlds in which it takes even less for EA to be shunned, e.g., if EA merely doesn't shun others hard enough. For example, there are currently people pushing for EA to "decouple" from LW/rationality, even though there is very little politically incorrect discussion happening on LW.)

My own logic suggests that too much truth-seeking isn't good either. Would love to see how to avoid this conclusion, but currently can't. (I think the optimal amount is probably a bit higher than the current amount, so this is not meant to be an argument against more truth-seeking at the current margin.)

You probably didn't have someone like me in mind when you wrote this, but it seems like a good opportunity to write down some of my thoughts about EA.

On 1, I think despite paying lip service to moral uncertainty, EA encourages too much certainty in the normative correctness of altruism (and more specific ideas like utilitarianism), perhaps attracting people like SBF with too much philosophical certainty in general (such as about how much risk aversion is normative), or even causing such general overconfidence (by implying that philosophical questions in general aren't that hard to answer, or by suggesting how much confidence is appropriate given a certain amount of argumentation/reflection).

I think EA also encourages too much certainty in descriptive assessment of people's altruism, e.g., viewing a philanthropic action or commitment as directly virtuous, instead of an instance of virtue signaling (that only gives probabilistic information about someone's true values/motivations, and that has to be interpreted through the lenses of game theory and human psychology).

On 25, I think the "safe option" is to give people information/arguments in a non-manipulative way and let them make up their own minds. If some critics are using things like social pressure or rhetoric to manipulate people into being anti-EA (as you seem to implying - I haven't looked into it myself), then that seems bad on their part.

On 37, where has EA messaging emphasized downside risk more? Text searches for "downside" and "risk" on https://www.effectivealtruism.org/articles/introduction-to-effective-altruism both came up empty, for example. In general, there seems to have been insufficient reflection on SBF and also on AI safety (where EA made some clear mistakes, e.g. with OpenAI, and generally contributed to the current AGI race in a potentially net negative way, but seems to have produced no public reflections on these topics).

On 39, seeing statements like this (which seems overconfident to me) makes me more worried about EA, similar to how my concern about each AI company is inversely related to how optimistic it is about AI safety.

The problem of motivated reasoning is in some ways much deeper than the trolley problem.

The motivation behind motivated reasoning is often to make ourselves look good (in order to gain status/power/prestige). Much of the problem seems to come from not consciously acknowledging this motivation, and therefore not being able to apply system 2 to check for errors in the subconscious optimization.

My approach has been to acknowledge that wanting to make myself look good may be a part of my real or normative values (something like what I would conclude my values are after solving all of philosophy). Since I can't rule that out for now (and also because it's instrumentally useful), I think I should treat it as part of my "interim values", and consciously optimize for it along with my other "interim values". Then if I'm tempted to do something to look good, at a cost to my other values or perhaps counterproductive on its own terms, I'm more likely to ask myself "Do I really want to do this?"

BTW I'm curious what courses you teach, and whether / how much you tell your students about motivated reasoning or subconscious status motivations when discussing ethics.

The CCP's current appetite for AGI seems remarkably small, and I expect them to be more worried that an AGI race would leave them in the dust (and/or put their regime at risk, and/or put their lives at risk), than excited about the opportunity such a race provides.

Yeah, I also tried to point this out to Leopold on LW and via Twitter DM, but no response so far. It confuses me that he seems to completely ignore the possibility of international coordination, as that's the obvious alternative to what he proposes, which others must also have brought up to him in private discussions.
