
Quick takes

In his recent interview on the 80,000 Hours Podcast, Toby Ord discussed how nonstandard analysis and its notion of hyperreals may help resolve some apparent issues arising from infinite ethics (link to transcript). For those interested in learning more about nonstandard analysis, there are various books and online resources. Many involve fairly high-level math, as they are aimed at putting what was originally an intuitive but imprecise idea onto rigorous footing. Instead of those, you might want to check out a book like H. Jerome Keisler's Elementary Calculus: An Infinitesimal Approach, which is freely available online. This book aims to be an introductory calculus textbook for college students, using hyperreals instead of limits and delta-epsilon proofs to teach the essential ideas of calculus such as derivatives and integrals. I haven't actually read this book but believe it is the best-known book of this sort. Here's another similar-seeming book by Dan Sloughter.
(COI note: I work at OpenAI. These are my personal views, though.)

My quick take on the "AI pause debate", framed in terms of two scenarios for how the AI safety community might evolve over the coming years:

1. AI safety becomes the single community that's the most knowledgeable about cutting-edge ML systems. The smartest up-and-coming ML researchers find themselves constantly coming to AI safety spaces, because that's the place to go if you want to nerd out about the models. It feels like the early days of hacker culture. There's a constant flow of ideas and brainstorming in those spaces; the core alignment ideas are standard background knowledge for everyone there. There are hackathons where people build fun demos, and people figure out ways of using AI to augment their research. Constant interaction with the models allows people to gain really good hands-on intuitions about how they work, which they leverage into doing great research that helps us actually understand the models better. When the public ends up demanding regulation, there's a large pool of competent people who are broadly reasonable about the risks and can slot into the relevant institutions and make them work well.

2. AI safety becomes much more similar to the environmentalist movement. It has broader reach, but alienates a lot of the most competent people in the relevant fields. ML researchers who find themselves in AI safety spaces are told they're "worse than Hitler" (which happened to a friend of mine, actually). People get deontological about AI progress; some hesitate to pay for ChatGPT because it feels like they're contributing to the problem (another true story); others overemphasize the risks of existing models in order to whip up popular support. People are sucked into psychological doom spirals similar to how many environmentalists think about climate change: if you're not depressed, then you obviously don't take it seriously enough. Just as environmentalists often block some of the most valuable work on fixing climate change (e.g. nuclear energy, geoengineering, land use reform), safety advocates block some of the most valuable work on alignment (e.g. scalable oversight, interpretability, adversarial training) due to acceleration or misuse concerns. Of course, nobody will say they want to dramatically slow down alignment research, but there will be such high barriers to researchers getting and studying the relevant models that it has similar effects. The regulations that end up being implemented are messy and full of holes, because the movement is more focused on making a big statement than on figuring out the details.

Obviously I've exaggerated and caricatured these scenarios, but I think there's an important point here. One really good thing about the AI safety movement, until recently, is that the focus on the problem of technical alignment has nudged it away from the second scenario (although it wasn't particularly close to the first scenario either, because the "nerding out" was typically more about decision theory or agent foundations than ML itself). That's changed a bit lately, in part because a bunch of people seem to think that making technical progress on alignment is hopeless. I think this is just not an epistemically reasonable position to take: history is full of cases where people dramatically underestimated the growth of scientific knowledge and its ability to solve big problems.
Either way, I do think public advocacy for strong governance measures can be valuable, but I also think that "pause AI" advocacy runs the risk of pushing us towards scenario 2. Even if you think that's a cost worth paying, I'd urge you to think about ways to get the benefits of the advocacy while reducing that cost and keeping the door open for scenario 1.
PEPFAR, a US program which funds HIV/AIDS prevention and treatment in the developing world, is in danger of not being reauthorized.[1] (The deadline is September 30th, although my current understanding is that even if the House of Representatives misses the deadline, it could still be reauthorized; there would just be a delay in funding.) Over the course of its existence, it's estimated to have saved ~25 million lives[2] for a little over $100 billion, and my current understanding is that (even if the lives-saved number is an overestimate) it's one of the most cost-effective things the US government does. I think it might be worth calling your representative to encourage them to reauthorize PEPFAR, particularly if they've indicated that they're uncertain of how to vote or might vote against it. My main uncertainty here is that I'm not sure how likely calling your representative is to actually change their mind, but I suspect this is fairly tractable compared to most forms of lobbying, since it's literally just asking them to reauthorize a program that already exists (as opposed to asking them to pass a new law, majorly change how a program works, etc.).

1. ^ https://www.politico.com/news/2023/09/05/president-emergency-global-aids-program-00113796
2. ^ https://www.state.gov/pepfar/ (note that some sources think this is an overestimate; e.g., the comments section here suggests it could be more like 6 million as a low estimate, which would make it not competitive with GiveWell top charities, though still far more cost-effective than a lot of things the US government does, and I currently don't expect that if the program were eliminated the money would be redirected to something more cost-effective)
(Clarification about my views in the context of the AI pause debate) I'm finding it hard to communicate my views on AI risk. I feel like some people are responding to the general vibe they think I'm giving off rather than the actual content. Other times, it seems like people will focus on a narrow snippet of my comments/post and respond to it without recognizing the context. For example, one person interpreted me as saying that I'm against literally any AI safety regulation. I'm not. For full disclosure, my views on AI risk can be loosely summarized as follows:
  • I think AI will probably be very beneficial for humanity.
  • Nonetheless, I think that there are credible, foreseeable risks from AI that could do vast harm, and we should invest heavily to ensure these outcomes don't happen.
  • I also don't think technology is uniformly harmless. Plenty of technologies have caused net harm. Factory farming is a giant net harm that might have even made our entire industrial civilization a mistake!
  • I'm not blindly against regulation. I think all laws can and should be viewed as forms of regulation, and I don't think it's feasible for society to exist without laws.
  • That said, I'm also not blindly in favor of regulation, even for AI risk. You have to show me that the benefits outweigh the harm.
  • I am generally in favor of thoughtful, targeted AI regulations that align incentives well and reduce downside risks without completely stifling innovation.
  • I'm open to extreme regulations and policies if or when an AI catastrophe seems imminent, but I don't think we're in such a world right now. I'm not persuaded by the arguments that people have given for this thesis, such as Eliezer Yudkowsky's AGI ruin post.
I'm looking for AI safety projects with people who have some amount of experience. I have 3/4 of a CS degree from Caltech, spent one year at MIRI, and have finished the WMLB and ARENA bootcamps. I'm most excited about activation engineering, but I'm willing to do anything that builds research and engineering skill. If you've published 2 papers in top ML conferences or have a PhD in something CS-related, and are interested in working with me, send me a DM.

Recent discussion

Citation: Romero Waldhorn, D., & Autric, E. (2022, December 21). Shrimp: The animals most commonly used and killed for food production. https://doi.org/10.31219/osf.io/b8n3t 

Summary

  • Decapod crustaceans or, for short, decapods[1] (e.g., crabs, shrimp, or crayfish) represent a major food source for humans across the globe. If these animals are sentient, the growing decapod production industry likely poses serious welfare concerns for these animals.
  • Information about the number of decapods used for food is needed to better assess the scale of this problem and the expected value of helping these animals.
  • In this work we estimated the number of shrimp and prawns farmed and killed in a year, given that they seem to be the vast majority of decapods used in the food system.
  • We estimated that around:
    • 440 billion (90% subjective confidence interval [SCI]: 300
...

I'm also confused about this. I found a paper[1] that estimates the annual amount of shrimp paste produced in China at 40,000 tons, and says that China is the largest shrimp paste producer in the world. The spreadsheet states that ~251,093 tons of A. japonicus were caught in the wild in 2020, so depending on what proportion of shrimp paste is produced in China[2] and how many tons of shrimp are needed to make one ton of shrimp paste, this could be accurate?

  1. ^
... (read more)
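
As a rough plausibility check on the figures in the comment above, here's a back-of-envelope sketch. Only the 40,000-ton and ~251,093-ton figures come from the comment; the China market share and the raw-shrimp-to-paste conversion ratio are assumptions I made up purely for illustration.

```python
# Back-of-envelope consistency check; the share and conversion ratio are assumed.
china_paste_tons = 40_000        # annual shrimp paste produced in China (cited paper)
china_share_of_world = 0.6       # assumed fraction of world shrimp paste made in China
raw_shrimp_per_ton_paste = 3.0   # assumed tons of raw shrimp per ton of paste

world_paste_tons = china_paste_tons / china_share_of_world       # ~67,000 t
raw_shrimp_needed = world_paste_tons * raw_shrimp_per_ton_paste  # ~200,000 t

reported_catch = 251_093         # wild-caught A. japonicus in 2020 (spreadsheet)
print(raw_shrimp_needed, reported_catch)  # same order of magnitude
```

Under these made-up assumptions, the implied raw-shrimp requirement lands in the same ballpark as the reported catch, which is all the comment is suggesting.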

TL;DR: I argue for two main theses:

  1. [Moderate-high confidence] It would be better to aim for a conditional pause, where a pause is triggered based on evaluations of model ability, rather than an unconditional pause (e.g. a blanket ban on systems more powerful than GPT-4). (A rough sketch of what such an evaluation trigger could look like follows this list.)
  2. [Moderate confidence] It would be bad to create significant public pressure for a pause through advocacy, because this would cause relevant actors (particularly AGI labs) to spend their effort on looking good to the public, rather than doing what is actually good.
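
Below is a minimal, hypothetical sketch of the kind of evaluation-triggered pause the first thesis describes. The eval categories, threshold values, and function names are placeholders I invented for illustration; they are not proposed in the post.

```python
# Hypothetical illustration of a "conditional pause" trigger.
# All eval categories and thresholds below are made-up placeholders.
DANGER_THRESHOLDS = {
    "autonomous_replication": 0.2,  # assumed maximum acceptable eval score
    "offensive_cyber": 0.3,
    "bioweapon_uplift": 0.1,
}

def should_pause(eval_scores):
    """Pause further scaling if any dangerous-capability eval crosses its threshold."""
    return any(eval_scores.get(name, 0.0) >= threshold
               for name, threshold in DANGER_THRESHOLDS.items())

# Example: a model scoring high on offensive-cyber evals would trigger a pause.
print(should_pause({"autonomous_replication": 0.05, "offensive_cyber": 0.45}))  # True
```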

Since mine is one of the last posts of the AI Pause Debate Week, I've also added a section at the end with quick responses to the previous posts.

Which goals are good?

That is, ignoring tractability and just assuming that we succeed at the...

I think these don't bite nearly as hard for conditional pauses, since they occur in the future, when progress will be slower.

Your footnote is about compute scaling, so presumably you think that's a major factor in AI progress, and why future progress will be slower. The main consideration pointing in the other direction (imo) is automated researchers speeding things up a lot. I guess you think we don't get huge speedups here until after the conditional pause triggers are hit (in terms of when various capabilities emerge)? If we do have the capabilities for automated researchers, and a pause locks these up, that's still pretty massive (capability) overhang territory.

Aaron_Scher · 1h
I appreciate flagging the uncertainty; this argument doesn't seem right to me.

One factor affecting the length of a pause would be the (opportunity cost from pause) / (risk of catastrophe from unpause) ratio for marginal pause days, i.e., the ratio of the costs to the benefits. I expect both the costs and the benefits of AI pause days to go up in the future, because risks of misalignment/misuse are greater, and because AIs will be deployed in a way that adds a bunch of value to society (whether the marginal improvements are huge remains unclear; e.g., GPT-6 might add tons of value, but it's unclear how much more GPT-6.5 adds on top of that; seems hard to tell). I don't know how the ratio will change, which is probably what actually matters. But I wouldn't be surprised if that numerator (opportunity cost) shot up a ton.

I think it's reasonable to expect that marginal improvements to AI systems in the future (e.g., scaling up 5x) could map onto automating an additional 1-7% of a nation's economy. Delaying this by a month would be a huge loss (or a benefit, depending on how the transition is going).

What relevant decision makers think the costs and benefits are is what actually matters, not the true values. So even if right now I can look ahead and see that an immediate pause pushes back future tremendous economic growth, this feature may not become apparent to others until later.

To try to say what I'm getting at in a different way: you're suggesting that we get a longer pause if we pause later than if we pause now. I think that "races" around AI are going to ~monotonically get worse and that the perceived cost of pausing will shoot up a bunch. If we're early on an exponential of AI creating value in the world, it just seems way easier to pause for longer now than it will be later on. If this doesn't make sense I can try to explain more.

I’m writing this in my own capacity. The views expressed are my own, and should not be taken to represent the views of Apollo Research or any other program I’m involved with. 

TL;DR: I argue that there should be more AI safety orgs, and I'll also provide some suggestions on how that could be achieved. The core argument is that there is a lot of unused talent and I don't think existing orgs scale fast enough to absorb it. Thus, more orgs are needed. This post can also serve as a call to action for funders, founders, and researchers to coordinate to start new orgs.

This piece is certainly biased! I recently started an AI safety org and therefore obviously believe that there is/was a gap to be...

(crossposted from lesswrong)

I created a simple Google Doc for anyone interested in joining or creating a new org to put down their name, contact info, what research they're interested in pursuing, and what skills they currently have. Over time, I think a network can be fostered, where relevant people start pursuing their own research and then begin building their own orgs / getting funding. https://docs.google.com/document/d/1MdECuhLLq5_lffC45uO17bhI3gqe3OzCqO_59BMMbKE/edit?usp=sharing



Less than a year ago, a community-wide conversation started about slowing down AI.

Some commented that outside communities won't act effectively to restrict AI, since they're not "aligned" with our goal of preventing extinction.  That's where I stepped in:


Communities are already taking action – to restrict harmful scaling of AI. 
I'm in touch with creatives, data workers, journalists, veterans, product safety experts, AI ethics researchers, and climate change researchers organising against harms.


Today, I drafted a plan to assist creatives.  It's for a funder, so I omitted details.  
Would love your thoughts, before the AI Pause Debate Week closes:

Plan

Rather than hope new laws will pass in 1-2 years, we can enforce established laws now. It is in AI Safety's interest to support creatives in enforcing laws against data laundering.

To train...

I'm against these tactics. We can and should be putting pressure on the labs to be more safety-conscious, but we don't want to completely burn our relationships with them. Maintaining those relationships allows us to combine an inside game and an outside game, which is important, as we need both pressure to take action and the ability to direct it in a productive way.

It's okay to protest them and push for the government to impose a moratorium, but nuisance lawsuits are a great way to put sympathetic insiders off-side.

If these lawsuits would save us, then they could be worth the downsides, but my modal view is that they end up being only a minor nuisance.

Backstory

I was recently at a music festival, where we stood in a long queue in the scorching sun. The festival ran from around 10 am to 10 pm, and all stages were outdoors with practically no shade to be found. My group had one other person besides me with sunscreen, and I decided to pass my sunscreen back to the group behind us, who hadn't brought any, with the comment that they should keep passing it on afterwards. I did it just because it felt nice to do, a small act of good; but when thinking about it, I became pretty sure that this is much more cost-effective than typical interventions in rich countries.

I spent a few hours reading up and calculating the likely cost-effectiveness of...

Cristina Schmidt Ibáñez · 6h
Interesting! This actually reminded me of a flower farmer I interviewed 4 years ago as part of my master's thesis. The reason I was interviewing him was that he had no (third-party) "social certification" for his flower production, yet he brought up giving sunscreen to his employees, which no other farmer I interviewed (including the ones with social certification) mentioned. Unfortunately I wasn't able to prioritize the issue among the other things I was assessing, but it did leave me thinking (a lot!).
NunoSempere · 6h
Nice! Two comments:
  • Sunburn risk without shared sunscreen seems a bit too high; do 30% of people at such concerts get sunburnt?
  • I recently got a sunburn, and I was thinking about the DALY weight. A DALY improvement of 0.1 would mean preferring the experience of 9 days without a sunburn over 10 days with a sunburn, which seems... ¿reasonable? But also something confuses me here.

A DALY improvement of 0.1 would mean preferring the experience of 9 days without a sunburn over 10 days with a sunburn, which seems... ¿reasonable? But also something confuses me here.

 

Initially I thought this was unreasonably high, since e.g. lower back pain has a disability weight of ~0.035. But if we try an estimate based on GiveWell valuing 37 DALY as much as 116 consumption doublings, preventing the loss of 0.1 DALYs would be equivalent to a ~24% increase in consumption for 1 year. Daily, it would mean ~$20 for a person making $30k/year. This seems surpr... (read more)
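
Here's a quick sketch reproducing the arithmetic in the excerpt above. The 37 DALYs ≈ 116 consumption doublings exchange rate and the $30k/year income are taken from the excerpt itself; everything else follows from them.

```python
# Reproducing the back-of-envelope estimate quoted above.
dalys_averted = 0.1
doublings = dalys_averted * 116 / 37       # ~0.31 consumption doublings
consumption_increase = 2 ** doublings - 1  # ~0.24, i.e. a ~24% increase for one year
annual_income = 30_000
print(round(consumption_increase, 3))                        # ~0.243
print(round(annual_income * consumption_increase / 365, 2))  # ~$20 per day
```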

EAs like to talk about voting methods. Ones that come up a lot are Ranked Choice and Approval.

Like most EAs, I went on the internet to try to find academic opinions on the relative merits of different systems. There are many systems, and to the extent that there was anything like a consensus about which method was best, it didn't seem like the top ones were RCV or Approval.

There's this analysis by Paul Cuff, which purports to be a general comparison and ends with "Use Condorcet".
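
For anyone who wants the "Use Condorcet" recommendation made concrete: below is a minimal sketch of a Condorcet-winner check, assuming complete ranked ballots. The candidates and ballots are invented for illustration.

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other candidate head-to-head, or None."""
    def beats(a, b):
        a_wins = sum(1 for ranking in ballots if ranking.index(a) < ranking.index(b))
        return a_wins > len(ballots) - a_wins  # strict majority prefers a to b
    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None  # no Condorcet winner (a preference cycle)

# Five complete ranked ballots over three made-up candidates.
ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"],
           ["C", "A", "B"], ["B", "C", "A"]]
print(condorcet_winner(ballots, ["A", "B", "C"]))  # -> A
```

A Condorcet method elects such a winner whenever one exists; methods differ mainly in how they resolve the cycle case where this function returns None.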

Is there a page where EAs consider more than just those two options and have opinions about them?

I understand the argument that gathering around any non-first-past-the-post method is good, and I don't mean to reopen what seems to be more a speculative pastime than a substantive discussion, but it would be nice to have something to reference that injects more normative/utilitarian and practical arguments from an EA perspective.

Apologies if this is a dupe: if so, I'll later edit this to include a link to the original.

Labor unions are associations of workers that negotiate with employers. A union for AI workers, such as data scientists and hardware and software engineers, could organise labor to counterbalance the influence of shareholders or political masters.

Importantly, unions could play a unique, direct role in redirecting or slowing down the rapid development of AI technology across multiple companies when there is a concern about safety and race dynamics. With difficult-to-replace expertise, they could do so independently of employers' wishes.

Unions often negotiate with multiple companies simultaneously, including in industries where competition is fierce. By uniting workers across AI labs, unions could exert significant collective bargaining power to demand a pause or slower, more cautious development of AI systems with a strong emphasis on safety.

If union demands are...

Larks · 14h
Even if they were slightly more cautious than management, if they were less cautious than policymakers it could still be net negative due to unions' lobbying abilities.

Granted, in principle you could also have a situation where they're less cautious than management but more cautious than policymakers and it winds up being net positive, though I think that situation is pretty unlikely. Agree the consideration you raised is worth paying attention to.

I think most climate people are very suspicious of charities like this, rather than, or in addition to, not believing in ethical offsetting. See this Wendover Productions video on problematic, non-counterfactual, and outright fraudulent climate offsets. I myself am not confident that CATF offsets are good and would need to do a bunch of investigation, and most people are not willing to do this starting from, say, an 80% prior that CATF offsets are bad.