New & upvoted


Quick takes

(Clarification about my views in the context of the AI pause debate) I'm finding it hard to communicate my views on AI risk. I feel like some people are responding to the general vibe they think I'm giving off rather than the actual content. Other times, it seems like people will focus on a narrow snippet of my comments/posts and respond to it without recognizing the context. For example, one person interpreted me as saying that I'm against literally any AI safety regulation. I'm not. For full disclosure, my views on AI risk can be loosely summarized as follows:

* I think AI will probably be very beneficial for humanity.
* Nonetheless, I think there are credible, foreseeable risks from AI that could do vast harm, and we should invest heavily to ensure these outcomes don't happen.
* I also don't think technology is uniformly harmless. Plenty of technologies have caused net harm. Factory farming is a giant net harm that might even have made our entire industrial civilization a mistake!
* I'm not blindly against regulation. I think all laws can and should be viewed as forms of regulation, and I don't think it's feasible for society to exist without laws.
* That said, I'm also not blindly in favor of regulation, even for AI risk. You have to show me that the benefits outweigh the harms.
* I am generally in favor of thoughtful, targeted AI regulations that align incentives well and reduce downside risks without completely stifling innovation.
* I'm open to extreme regulations and policies if or when an AI catastrophe seems imminent, but I don't think we're in such a world right now. I'm not persuaded by the arguments people have given for this thesis, such as Eliezer Yudkowsky's AGI ruin post.
(COI note: I work at OpenAI. These are my personal views, though.)

My quick take on the "AI pause debate", framed in terms of two scenarios for how the AI safety community might evolve over the coming years:

1. AI safety becomes the single community that's the most knowledgeable about cutting-edge ML systems. The smartest up-and-coming ML researchers find themselves constantly coming to AI safety spaces, because that's the place to go if you want to nerd out about the models. It feels like the early days of hacker culture. There's a constant flow of ideas and brainstorming in those spaces; the core alignment ideas are standard background knowledge for everyone there. There are hackathons where people build fun demos, and people figuring out ways of using AI to augment their research. Constant interaction with the models allows people to gain really good hands-on intuitions about how they work, which they leverage into doing great research that helps us actually understand them better. When the public ends up demanding regulation, there's a large pool of competent people who are broadly reasonable about the risks and can slot into the relevant institutions and make them work well.

2. AI safety becomes much more similar to the environmentalist movement. It has broader reach, but alienates a lot of the most competent people in the relevant fields. ML researchers who find themselves in AI safety spaces are told they're "worse than Hitler" (which happened to a friend of mine, actually). People get deontological about AI progress; some hesitate to pay for ChatGPT because it feels like they're contributing to the problem (another true story); others overemphasize the risks of existing models in order to whip up popular support. People are sucked into psychological doom spirals similar to how many environmentalists think about climate change: if you're not depressed, then you obviously don't take it seriously enough. Just as environmentalists often block some of the most valuable work on fixing climate change (e.g. nuclear energy, geoengineering, land use reform), safety advocates block some of the most valuable work on alignment (e.g. scalable oversight, interpretability, adversarial training) due to acceleration or misuse concerns. Of course, nobody will say they want to dramatically slow down alignment research, but there will be such high barriers to researchers getting and studying the relevant models that it has similar effects. The regulations that end up being implemented are messy and full of holes, because the movement is more focused on making a big statement than on figuring out the details.

Obviously I've exaggerated and caricatured these scenarios, but I think there's an important point here. One really good thing about the AI safety movement, until recently, is that the focus on the problem of technical alignment has nudged it away from the second scenario (although it wasn't particularly close to the first scenario either, because the "nerding out" was typically more about decision theory or agent foundations than ML itself). That's changed a bit lately, in part because a bunch of people seem to think that making technical progress on alignment is hopeless. I think this is just not an epistemically reasonable position to take: history is full of cases where people dramatically underestimated the growth of scientific knowledge and its ability to solve big problems.
Either way, I do think public advocacy for strong governance measures can be valuable, but I also think that "pause AI" advocacy runs the risk of pushing us towards scenario 2. Even if you think that's a cost worth paying, I'd urge you to think about ways to get the benefits of the advocacy while reducing that cost and keeping the door open for scenario 1.
It seems prima facie plausible to me that interventions that save human lives do not increase utility on net, due to the animal suffering caused by saving human lives. Has anyone in the broader EA community looked into this? I'm not strongly committed to this view, but I'd be interested in seeing how people have reasoned about it.
Why is it that I must return from 100% of EAGs with either covid or a cold? Perhaps my immune system just sucks or it's impossible to avoid due to asymptomatic cases, but in case it's not: If you get a cold before an EAG(x), stay home! For those who do this already, thank you! 
Wanted to give a shoutout to Ajeya Cotra (from OpenPhil) for her great work explaining AI stuff on a recent Freakonomics podcast series. Her explanations of both her work on the development of AI and her easy-to-understand predictions of how AI might progress from here were great; she was my favourite expert on the series. People have been looking for more high-quality public communicators to get EA/AI safety stuff out there; perhaps Ajeya could be a candidate if she's keen?

Recent discussion

China has made several efforts to preserve their chip access, including smuggling, buying chips that are just under the legal limit of performance, and investing in their domestic chip industry.[1]

[Image: "Sounds about right?" (generated with hotpot.ai/art-generator)]

This post centres around an email I sent to the Center for AI Safety (CAIS) expressing concern about their 2023-08-15 newsletter's coverage of US-China competition in the AI space[2], but the overall point is broader. There are some ways of discussing the topic of international relations regarding AI which strike me as un-nuanced in a counterproductive and dangerous way, by hiding certain truths or emphasising others, and supporting a conflict-oriented mindset.

In writing about this, I'm also gesturing at something about the more general topic of 'how to think and write about politically-charged topics'.


5
aogara
7h
We appreciate the feedback! I fully agree that this was an ambiguous use of "China." We should have been more specific about which actors are taking which actions. I've updated the text to the following:

We've also cut the second sentence in this paragraph, as the paragraph remains comprehensible without it:

More generally, we try to avoid zero-sum competitive mindsets on AI development. They can encourage racing towards more powerful AI systems, justify cutting corners on safety, and hinder efforts for international cooperation on AI governance. It's important to discuss national AI policies which are often explicitly motivated by goals of competition without legitimizing or justifying zero-sum competitive mindsets which can undermine efforts to cooperate. While we will comment on how the US and China are competing in AI, we avoid recommending "race with China."

This is an exemplary and welcome response: concise, full-throated, actioned. Respect, thank you Aidan.

Sincerely, I hope my feedback was all-things-considered good from your perspective. As I noted in this post, I felt my initial email was slightly unkind at one point, but I am overall glad I shared it - and I hope you appreciate my getting exercised about this, even over a few paragraphs!

It’s important to discuss national AI policies which are often explicitly motivated by goals of competition without legitimizing or justifying zero-sum competitive mindsets which can unde

... (read more)
3
Robi Rahman
10h
I'm out of the loop; what's the bullshit from high school civics class that needs to be thrown out of my head, and why is Mearsheimer unbalanced but also a good starting point?

tl;dr: Advocacy to the public is a large and neglected opportunity to advance AI Safety. AI Safety as a field is unfamiliar with advocacy, and it has reservations, some founded and others not. A deeper understanding of the dynamics of social change reveals the promise of pursuing outside game strategies to complement the already strong inside game strategies. I support an indefinite global Pause on frontier AI and I explain why Pause AI is a good message for advocacy. Because I’m American and focused on US advocacy, I will mostly be drawing on examples from the US. Please bear in mind, though, that for Pause to be a true solution it will have to be global. 

The case for advocacy in general

Advocacy can work

I’ve encountered many EAs who are...

It is good that 80k is making simple videos to explain the risks associated with EA

Do you mean "risks associated with AI"?

2
Holly_Elmore
5h
I consider the consumer regulation route complementary to what I’m doing and I think a diversity of approaches is more robust, as well.
2
Holly_Elmore
5h
I didn’t know about your book! Happy to hear it :)

Introduction

This post seeks to estimate how much we should expect a highly cost-effective charity to spend on reducing existential risk by a certain amount. By setting a threshold for cost-effectiveness, we can be selective about which longtermist charities to recommend to donors.

We appreciate feedback. We would like for this post to be the first in a sequence about cost-effectiveness thresholds for giving, and your feedback will help us write better posts.

How many beings does extinction destroy?

This chart gives six estimates for the size of the moral universe that would be lost in an extinction event on Earth this century. There is a truly incredible range in the possible size of the moral universe, and the value you see in the future depends on the moral weights you...
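As a very rough illustration of the kind of threshold arithmetic this framing implies (a minimal sketch only; every number below is a placeholder assumption, not an estimate from the post):

```python
# Hypothetical back-of-the-envelope threshold calculation.
# All numbers are placeholder assumptions, not the post's estimates.

moral_universe_size = 1e16     # assumed: expected future beings lost in an extinction event
value_per_being = 1.0          # assumed: units of value per future being
xrisk_reduction = 1e-9         # assumed: absolute reduction in extinction probability purchased

expected_value_saved = moral_universe_size * value_per_being * xrisk_reduction

# If recommended charities must be at least as cost-effective as an assumed
# benchmark of $100 per unit of value, the maximum acceptable spend is:
benchmark_cost_per_unit = 100.0
max_spend = expected_value_saved * benchmark_cost_per_unit

print(f"Expected value saved: {expected_value_saved:.2e} units")
print(f"Maximum cost-effective spend: ${max_spend:,.0f}")
```

The only point of the sketch is that the implied spending ceiling scales linearly with both the assumed size of the moral universe and the assumed reduction in extinction probability, which is why the wide range of estimates in the chart matters so much for setting a threshold.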

2
Vasco Grilo
14h
Agreed, but if the lifespan of the only world is much shorter due to risk of simulation shut-down, the loss of value due to extinction is smaller. In any case, this argument should be weighted together with many others. I personally still direct 100 % of my donations to the Long-Term Future Fund, which is essentially funding AI safety work. Thanks for your work in this space!

Thanks for your donations to the LTFF. I think they need to start funding stuff aimed at slowing AI down (/pushing for a global moratorium on AGI development). There's not enough time for AI Safety work to bear fruit otherwise.

2
Ben Stewart
2h
Things can be 'not the best', but still good. For example, let's say a systematic, well-run whistleblower organisation was the 'best' way. And compare it to 'telling your friends about a bad org'. 'Telling your friends' is not the best strategy, but it still might be good to do, or worth doing. Saying "telling your friends is not the best way" is consistent with this. Saying "telling your friends is a bad idea" is not consistent with this. I.e. 'bad idea' connotes much more than just 'sub-optimal, all things considered'.
2
Linch
1h
Sorry by "best" I was locally thinking of what's locally best given present limitations, not globally best (which is separately an interesting but less directly relevant discussion). I agree that if there are good actions to do right now, it will be wrong for me to say that all of them are bad because one should wait for (eg) a "systematic, well-run, whistleblower organisation."  For example, if I was saying "GiveDirectly is a bad charity for animal-welfare focused EAs to donate to," I meant that there are better charities on the margin for animal-welfare focused EAs to donate to. I do not mean that in the abstract we should not donate to charities because a well-run international government should be handling public goods provisions and animal welfare restrictions instead. I agree that I should not in most cases be comparing real possibilities against an impossible (or at least heavily impractical) ideal. Similarly, if I said "X is a bad idea for Bob to do," I meant there are better things for Bob to do with Bob's existing limitations etc, not that if Bob should magically overcome all of his present limitations and do Herculeanly impossible tasks. And in fact I was making a claim that there are practical and real possibilities that in my lights are probably better. Well clearly my choice of words on a quickly fired quick take at 1AM was sub-optimal, all things considered. Especially ex post. But I think it'd be helpful if people actually argued about the merits of different strategies instead of making inferences about my racism or lack thereof, or my rudeness or lack thereof. I feel like I'm putting a lot of work in defending fairly anodyne (denotatively) opinions, even if I had a few bad word choices.  After this conversation, I am considering retreating to more legalese and pre-filtering all my public statements for potential controversy by GPT-4, as a friend of mine suggested privately. I suspect this will be a loss for the EA forum being a place where peop

Happy to end this thread here. On a meta-point, I think paying attention to nuance/tone/implicatures is a better communication strategy than retreating to legalese, but it does need practice. I think reflecting on one's own communicative ability is more productive than calling others irrational or being passive-aggressive. But it sucks that this has been a bad experience for you. Hope your day goes better!

TL;DR:

AIs will probably be much easier to control than humans due to (1) there being far more levers through which to exert control over AIs, (2) AIs having far fewer rights to resist control, and (3) research to better control AIs being far easier than research to control humans. Additionally, economies of scale in AI development strongly favor centralized actors.

Current social equilibria rely on the current limits on the scalability of centralized control, and the similar levels of intelligence between actors with different levels of resources. The default outcome of AI development is to disproportionately increase the control and intelligence available to centralized, well-resourced actors. AI regulation (including pauses) can either reduce or increase the centralizing effects of AI, depending on the specifics of the regulations. One of...

I'm not opposed to training AIs on human data, so long as those AIs don't make non-consensual emulations of a particular person which are good enough that strategies optimized to manipulate the AI are also very effective against that person. In practice, I think the AI does have to be pretty deliberately set up to mirror a specific person for such approaches to be extremely effective.

I'd be in favor of a somewhat more limited version of the restriction OpenAI is apparently doing, where the thing that's restricted is deliberately aiming to make really good ... (read more)

2
Chris Leong
6h
The scope of your argument against centralisation is unclear to me. Let's consider maximum decentralisation: pure open-source. I think that would be a disaster, as terrorist groups and dictatorships would have access. What are your thoughts here? Would you go that far? Or are you only arguing on the margin?
1
Quintin Pope
1h
Most of the point of my essay is about the risk of AI leading to worlds where there's FAR more centralization than is currently possible. We're currently near the left end of the spectrum, and I'm saying that moving far to the right would be bad by the lights of our current values. I'm not advocating for driving all the way to the left. It's less an argument on the margin, and more concern that the "natural equilibrium of civilizations" could shift quite quickly towards the far right of the scale above. I'm saying that big shifts in the degree of centralization could happen, and we should be wary of that.


Update, 3/8/2021: I (Hauke) gave a talk at Effective Altruism Global on this post:

Summary

Randomista development (RD) is a form of development economics which evaluates and promotes interventions that can be tested by randomised controlled trials (RCTs). It is exemplified by GiveWell (which primarily works in health) and the randomista movement in economics (which primarily works in economic development).

Here we argue for the following claims, which we believe to be quite weak:

  1. Prominent economists make plausible arguments which suggest that research on and advocacy for economic growth in low- and middle-income countries is more cost-effective than the things funded by proponents of randomista development.
  2. Effective altruists have devoted too little attention to these arguments.
  3. Assessing the soundness of these arguments should be
...

You probably know this by now, but what the heck. I don't think EA as a whole is RCT-only. GiveWell is, AFAIK, very randomista. But there are other EA-affiliated organizations that are not as randomista as GiveWell, notably Open Philanthropy and anything with a more x-risk or long-termist focus.

1
markov_user
12h
Would you say that, almost 4 years later, we've made progress on that front?

As humanity continues its era of rapid population growth and rising economic prosperity, the demand for animal protein is anticipated to reach unparalleled heights. This surge in consumption is set to drastically impact the lives of farmed animals worldwide. Nowhere is this growth more pronounced than in Africa.
 

The evidence

Previously, our anticipation of Africa’s sharp increase in livestock numbers was primarily grounded in the historical global expansion of farmed animal populations over the past decades, coupled with human population growth trends across the African continent. This post, however, delves into the specific projections of farmed animal numbers and animal farming intensification from 2012 to 2050, as outlined by the Food and Agriculture Organization of the United Nations (FAO)*, which is based on many more factors than just historical changes in animal...

Crop yields are extremely low in much of Africa so my guess is there's potential for farmed animals to be fed while keeping constant or even decreasing land use.

2
Thomas Kwa
3h
Some questions I would be interested in:

* Where will Sub-Saharan Africa stand in terms of meat consumption, and especially chicken consumption, as the standard of living increases? When/if Nigeria hits China's current GDP per capita of $12.5k, do we expect more or less meat consumption than China has?
* Then there's the welfare side. As countries get richer we have seen animal welfare get worse due to factory farming, then perhaps slightly better due to the ability to afford animal welfare measures. Will we see the same trajectory in Africa, or should we expect something different, like the ability to "leapfrog" to some of the worst animal agriculture practices, or even past them?

How important is visualization w.r.t. the interpretability (or broader alignment) problem? Specifically, is there need+opportunity for impact of frontend engineers in that space?


Additional context:


I’ve got 10 years of experience in software engineering, most of which has been on frontend data visualization stuff, currently at Google (previously at Microsoft). I looked around at some different teams within Google and saw TensorBoard and the Learning Interpretability Tool, but it’s unclear to me how much those teams are bottlenecked by visualization implementation problems vs research problems of knowing where/how to even look, and I’d like to have more background before I cold-call them directly.


I've started to get burned out by the earning-to-give path and am currently considering semi-retirement to focus on other pursuits, but if there’s somewhere I can contribute to alignment without needing to go back for a PhD, that would be perfect (I have been eagerly studying ML on the side, though).

Visualization is pretty important in exploratory mechanistic interp work, but this is more about fast research code: see any of Neel's exploratory notebooks.

When Redwood had a big interpretability team, they were also developing their own data viz tooling. This never got open-sourced, and this could have been due to lack of experience by the people who wrote such tooling. Anthropic has their own libraries too, TransformerLens could use more visualization, and I hear David Bau's lab is developing a better open-source interpretability library. My guess is th... (read more)
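For a concrete sense of what the "fast research code" style of visualization looks like in practice, here is a minimal sketch using the TransformerLens library and GPT-2 (the prompt, layer, and head are arbitrary choices for illustration, not anyone's actual workflow):

```python
# Minimal exploratory attention-pattern plot, in the style of a throwaway
# research notebook. Assumes transformer_lens and matplotlib are installed.
import matplotlib.pyplot as plt
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

prompt = "When Mary and John went to the store, John gave a drink to"
tokens = model.to_tokens(prompt)
str_tokens = model.to_str_tokens(prompt)

# Run the model once and cache all intermediate activations.
_, cache = model.run_with_cache(tokens)

# Attention patterns for layer 0: shape [batch, head, query_pos, key_pos].
patterns = cache["blocks.0.attn.hook_pattern"][0]

# Quick heatmap of a single head -- the kind of plot that gets made and
# discarded dozens of times in an exploratory interpretability session.
plt.imshow(patterns[0].detach().cpu().numpy(), cmap="viridis")
plt.xticks(range(len(str_tokens)), str_tokens, rotation=90)
plt.yticks(range(len(str_tokens)), str_tokens)
plt.xlabel("Key position")
plt.ylabel("Query position")
plt.title("GPT-2, layer 0, head 0 attention")
plt.tight_layout()
plt.show()
```

Most of the engineering value in this space is arguably less about any single plot and more about making loops like this fast and ergonomic, which is where dedicated tooling (and frontend experience) could plausibly help.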

This post is part of AI Pause Debate Week. Please see this sequence for other posts in the debate.

An AI Moratorium of some sort has been discussed, but details matter - it’s not particularly meaningful to agree or disagree with a policy that has no details. A discussion requires concrete claims. 

To start, I see three key questions, namely:

  1. What does a moratorium include? 
  2. When and how would a pause work? 
  3. What are the concrete steps forward?

Before answering those, I want to provide a very short introduction and propose what is in or out of bounds for a discussion.

There seems to be a strong consensus that future artificial intelligence could be very bad. There is quite significant uncertainty and dispute about many of the details - how bad it could...

Thank you for your carefully thought-through essay on AI governance. Given your success as a forecaster of geopolitical events, could you sketch out for us how we might enforce AI governance on, for example, Iran, North Korea, and Russia? You mention sensors on chips to report problematic behavior, etc. However, badly behaving nations might develop their own fabs. We could follow the examples of attacks on Iran's nuclear weapons technologies. But would overt/covert military actions risk missing the creation of a "black ball" on the one hand, or escalating into global nuclear/chemical/biological conflict on the other?