Citation: Romero Waldhorn, D., & Autric, E. (2022, December 21). Shrimp: The animals most commonly used and killed for food production. https://doi.org/10.31219/osf.io/b8n3t
TL;DR: I argue for two main theses:
Since mine is one of the last posts of the AI Pause Debate Week, I've also added a section at the end with quick responses to the previous posts.
That is, ignoring tractability and just assuming that we succeed at the...
I think these don’t bite nearly as hard for conditional pauses, since they occur in the future, when progress will be slower.
Your footnote is about compute scaling, so presumably you think that's a major factor for AI progress, and why future progress will be slower. The main consideration pointing the other direction (imo) is automated researchers speeding things up a lot. I guess you think we don't get huge speedups here until after the conditional pause triggers are hit (in terms of when various capabilities emerge)? If we do have the capabilities for automated researchers, and a pause locks these up, that's still pretty massive (capability) overhang territory.
I’m writing this in my own capacity. The views expressed are my own, and should not be taken to represent the views of Apollo Research or any other program I’m involved with.
TL;DR: I argue that there should be more AI safety orgs, and I suggest how that could be achieved. The core argument is that there is a lot of unused talent, and I don’t think existing orgs scale fast enough to absorb it. Thus, more orgs are needed. This post can also serve as a call to action for funders, founders, and researchers to coordinate to start new orgs.
This piece is certainly biased! I recently started an AI safety org and therefore obviously believe that there is/was a gap to be...
(crossposted from lesswrong)
I created a simple Google Doc for anyone interested in joining/creating a new org to put down their names, contact info, what research they're interested in pursuing, and what skills they currently have. Over time, I think a network can be fostered, where relevant people start forming their own research agendas and then begin building their own orgs/getting funding. https://docs.google.com/document/d/1MdECuhLLq5_lffC45uO17bhI3gqe3OzCqO_59BMMbKE/edit?usp=sharing
PEPFAR, a US program which funds HIV/AIDS prevention and treatment in the developing world, is in danger of not being reauthorized.[1] (The deadline is September 30th, although my current understanding is that even if the House of Representatives misses the deadline, it could still be reauthorized; there would just be a delay in funding.) Over the course of its existence, it's estimated to have saved ~25 million lives[2] for a little over $100 billion, and my current understanding is that (even if the lives-saved number is an overestimate) it's one o...
Less than a year ago, a community-wide conversation started about slowing down AI.
Some commented that outside communities won't act effectively to restrict AI, since they're not "aligned" with our goal of preventing extinction. That's where I stepped in:
Communities are already taking action – to restrict harmful scaling of AI.
I'm in touch with creatives, data workers, journalists, veterans, product safety experts, AI ethics researchers, and climate change researchers organising against harms.
Today, I drafted a plan to assist creatives. It's for a funder, so I omitted details.
Would love your thoughts, before the AI Pause Debate Week closes:
Rather than hope new laws will pass in 1-2 years, we can enforce established laws now. It is in AI Safety's interest to support creatives to enforce laws against data laundering.
To train...
I’m against these tactics. We can and should be putting pressure on the labs to be more safety conscious, but we don’t want to completely burn our relationships with them. Maintaining those relationships allows us to combine both inside game and outside game which is important as we need both pressure to take action and the ability to direct it in a productive way.
It’s okay to protest them and push for the government to impose a moratorium, but nuisance lawsuits are a great way to get sympathetic insiders off-side.
If these lawsuits would save us, then it could be worth the downsides, but my modal view is that they end up only being a minor nuisance.
Backstory
I was recently at a music festival where we stood in a long queue in the scorching sun. The festival ran from around 10 am to 10 pm, and all stages were outdoors with practically no shade to be found. Besides myself, only one other person in my group had sunscreen, so I decided to pass mine back to the group behind us, who hadn’t brought any, with the comment that they should keep passing it on afterwards. I did that just because it felt nice to do; a small act of good. But when thinking about it, I became pretty sure that this is much more cost-effective than typical interventions in rich countries.
I spent a few hours reading up on and calculating the likely cost-effectiveness of...
A DALY improvement of 0.1 would mean preferring the experience of 9 days without a sunburn over 10 days with a sunburn, which seems... ¿reasonable? But something also confuses me here.
Initially I thought this was unreasonably high, since e.g. lower back pain has a disability weight of ~0.035. But if we try an estimate based on GiveWell valuing 37 DALY as much as 116 consumption doublings, preventing the loss of 0.1 DALYs would be equivalent to a ~24% increase in consumption for 1 year. Daily, it would mean ~$20 for a person making $30k/year. This seems surpr...
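The arithmetic behind these figures can be sketched as follows. The GiveWell moral weights are as quoted above, and the $30k annual income is the example figure from the comment; everything else follows mechanically:

```python
# Back-of-the-envelope: convert a 0.1 DALY loss into an equivalent
# consumption change, using the GiveWell-style weights quoted above.

DALYS = 37            # GiveWell values averting 37 DALYs...
DOUBLINGS = 116       # ...as much as 116 consumption doublings
INCOME = 30_000       # annual income from the example above, USD

doublings_per_daly = DOUBLINGS / DALYS        # ~3.14 doublings per DALY
doublings = 0.1 * doublings_per_daly          # ~0.31 doublings for 0.1 DALYs
increase = 2 ** doublings - 1                 # ~0.24, i.e. a ~24% increase
per_day = increase * INCOME / 365             # ~$20 per day

print(f"{increase:.1%} of consumption for 1 year, ~${per_day:.0f}/day")
```

This reproduces both numbers in the comment: a ~24% consumption increase for a year, which at $30k/year works out to roughly $20 per day.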
EAs like to talk about voting methods. Ones that come up a lot are Ranked Choice and Approval.
Like most EAs, I went on the internet to try to find academic opinions on the relative merits of different systems. There are many systems, and to the extent that there was anything like consensus about which method is best, it didn't seem like the top ones were RCV or Approval.
One example is this analysis by Paul Cuff, which purports to be a general comparison and ends with "Use Condorcet".
Is there a page where EAs consider more than just those two options and have opinions about them?
I understand the argument that "gathering around any non-first-past-the-post method" is good, and I don't mean to reopen what seems to be more a speculative pastime than a substantive discussion, but it would be nice to have something to reference that injects more normative/utilitarian and practical arguments from an EA perspective.
Apologies if this is a dupe: if so, I'll later edit this to include a link to the original.
Labor unions are associations of workers that negotiate with employers. A union for AI workers, such as data scientists and hardware and software engineers, could organise labor to counterbalance the influence of shareholders or political masters.
Importantly, unions could play a unique, direct role in redirecting or slowing down the rapid development of AI technology across multiple companies when there is a concern about safety and race dynamics. With difficult-to-replace expertise, they could do so independent of employers' wishes.
Unions often negotiate with multiple companies simultaneously, including in industries where competition is fierce. By uniting workers across AI labs, unions could exert significant collective bargaining power to demand a pause or slower, more cautious development of AI systems with a strong emphasis on safety.
If union demands are...
Granted, in principle you could also have a situation where they're less cautious than management but more cautious than policymakers and it winds up being net positive, though I think that situation is pretty unlikely. Agree the consideration you raised is worth paying attention to.
I think most climate people are very suspicious of charities like this, rather than or in addition to not believing in ethical offsetting. See this Wendover Productions video on problematic, non-counterfactual, and outright fraudulent climate offsets. I myself am not confident that CATF offsets are good and would need to do a bunch of investigation, and most people are not willing to do this starting from, say, an 80% prior that CATF offsets are bad.
I'm also confused about this. I found a paper[1] that estimates the annual amount of shrimp paste produced in China at 40,000 tons, and says that China is the largest shrimp paste producer in the world. The spreadsheet states that ~251,093 tons of A. japonicus were caught in the wild in 2020, so depending on what proportion of shrimp paste is produced in China[2] and how many tons of shrimp are needed to make one ton of shrimp paste, this could be accurate?
- ^ https://www.sciencedirect.com/science/article/pii/S0023643822010313
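As a rough sanity check on the reconciliation suggested above, one can ask what shrimp-to-paste conversion ratio the two figures would imply. Note that the China share of world paste production below is an assumption made up for illustration, not a sourced value:

```python
# Hypothetical sanity check: given the paper's China paste figure and the
# spreadsheet's catch figure, what shrimp-to-paste ratio would reconcile them?
# (china_share is an assumed value, not from either source.)

paste_china = 40_000   # tons/year of shrimp paste produced in China (paper)
catch = 251_093        # tons of A. japonicus caught wild in 2020 (spreadsheet)
china_share = 0.8      # assumed share of world paste production made in China

world_paste = paste_china / china_share        # implied world production
implied_ratio = catch / world_paste            # tons of shrimp per ton of paste
print(f"~{implied_ratio:.1f} tons of shrimp per ton of paste")
```

If a ratio of roughly 5:1 (and a large Chinese share of production) is plausible, the two figures are at least mutually consistent; with different assumed shares the implied ratio shifts proportionally.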
I have low conf