Quick takes

Besides Ilya Sutskever, is there any person not related to the EA community who quit or was fired from OpenAI for safety concerns?

Ian Turner
@Zvi has a blog post about all the safety folks leaving OpenAI. It's not a great picture.

They all seem to be related to the EA community, and for many it's not clear whether they left or were fired.

Linch

Do we know if @Paul_Christiano or other ex-lab people working on AI policy have non-disparagement agreements with OpenAI or other AI companies? I know Cullen doesn't, but I don't know about anybody else.

I know NIST isn't a regulatory body, but it still seems like standards-setting should be done by people who have no unusual legal obligations. And of course, some other people are or will be working at regulatory bodies, which may have more teeth in the future.

To be clear, I want to differentiate between Non-Disclosure Agreements, which are perfectly s... (read more)


These things are not generally enforced in court. It’s the threat that has the effect, which means the non-disparagement agreement works even if it’s of questionable enforceability and even if indeed it is never enforced.

Linch
We also have some reason to suspect that senior leadership at Anthropic, and probably many of the employees, have signed the non-disparagement agreements. This is all fairly bad.
James Payor
Additionally, there was that OpenAI language stating "we have canceled the non-disparagement agreements except where they are mutual".

"UNICEF delivered over 43,000 doses of the R21/Matrix-M malaria vaccine by air to Bangui, Central African Republic, today, with more than 120,000 doses to follow in the next days. "
- Link

Pretty rookie numbers; they need to scale. It remains to be seen how this translates into actual distribution and acceptance. But it sure did feel good to read the news, so I thought I'd share! No takes yet, feel free to add.

Also, "Around 4.33 million doses of RTS,S have been delivered to 8 countries so far – Benin, Burkina Faso, Cameroon, Ghana, Kenya, Liberia, Malawi, and Sierra Leone"... (read more)

1DayAfrica has some discussion about supply shortfalls: https://1dayafrica.org/r21-campaign

I'm wondering what people's opinions are on how urgent alignment work is. I'm a former ML scientist who previously worked at Maluuba and Huawei Canada, but switched industries into game development, at least in part to avoid contributing to AI capabilities research. I tried earlier to interview with FAR and Generally Intelligent, but didn't get in. I've also done some cursory independent AI safety research on interpretability and game-theoretic ideas in my spare time, though nothing interesting enough to publish yet.

My wife also recently had a baby, and carin... (read more)

A life saved in a rich country is generally considered more valuable than one saved in a poor country because the value of a statistical life (VSL) rises with wealth. However, transferring a dollar to a rich country is less beneficial than transferring a dollar to a poor country because marginal utility decreases as wealth increases.

So, using [$ / lives saved] is the wrong approach. We should use [$ / (lives saved * VSL)] instead. This means GiveDirectly might be undervalued compared to other programs that save lives. Can someone confirm if this makes sense?
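To make the proposed adjustment concrete, here is a toy comparison with entirely made-up numbers (neither the costs nor the VSL figures correspond to any real program):

```latex
% Hypothetical programs A and B; all figures invented for illustration.
% Unweighted metric [$ / lives saved]: B looks better.
\text{A: } \$5{,}000 \text{ per life saved} \qquad \text{B: } \$4{,}000 \text{ per life saved}

% Proposed metric [$ / (lives saved x VSL)], lower is better,
% assuming hypothetical VSLs of \$10\text{M} for A's beneficiaries and \$0.5\text{M} for B's:
\text{A: } \frac{\$5{,}000}{1 \times \$10\text{M}} = 5 \times 10^{-4} \qquad
\text{B: } \frac{\$4{,}000}{1 \times \$0.5\text{M}} = 8 \times 10^{-3}
```

On the unweighted metric B wins; once each life is weighted by the (hypothetical) VSL of its beneficiaries, A wins. That re-ranking is the effect the take is pointing at; the replies below discuss whether VSL is the right weight to use.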


The point that it's better to save people with better lives than people with worse lives, all else equal, does make sense (at least from a utilitarian perspective). So you're right that [$ / lives saved] is not a perfect approach. I do think it's worth acknowledging this...!

But the right correction isn't to use VSLs. The way I'd put it is: a person's VSL--assuming it's been ideally calculated for each individual, putting aside issues about how governments estimate it in practice--is how many dollars they value as much as slightly lowering their chance of d... (read more)

BrownHairedEevee
VSL isn't directly comparable across countries. It's a measure of how much money people in a given country would be willing to spend to save their own lives. For example, if someone would be willing to pay up to $125,000 to reduce their chance of dying by 1%, then their VSL is $12.5 million. These amounts are lower in poor countries simply because the people there have less money, and it has nothing to do with whether their lives are more or less valuable.
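Restating the arithmetic in that example as the standard willingness-to-pay formula (same numbers as above):

```latex
\text{VSL} = \frac{\text{willingness to pay}}{\Delta p(\text{death})} = \frac{\$125{,}000}{0.01} = \$12{,}500{,}000
```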
MichaelDickens
The value of a statistical life is determined by governments, right? Governments of rich countries value their own citizens more than they value the citizens of poor countries, which makes sense from their perspective, but it's not morally correct so you shouldn't accept their VSLs.

I'll post some extracts from the commitments made at the Seoul Summit. I can't promise that this will be a particularly good summary (I was originally just writing this for myself), but maybe it's helpful until someone publishes something more polished:

Frontier AI Safety Commitments, AI Seoul Summit 2024

The major AI companies have agreed to Frontier AI Safety Commitments. In particular, they will publish a safety framework focused on severe risks: "internal and external red-teaming of frontier AI models and systems for severe and novel threats; to wo... (read more)

I published a short piece on Yann LeCun posting about Jan Leike's exit from OpenAI over perceived safety issues, and wrote a bit about the difference between Low-Probability, High-Impact events and Zero-Probability, High-Impact events.

https://www.insideaiwarfare.com/yann-versus/

We should expect the incentives and culture of AI-focused companies to make them uniquely terrible for producing safe AGI.

From a "safety from catastrophic risk" perspective, I suspect an "AI-focused company" (e.g. Anthropic, OpenAI, Mistral) is abstractly pretty close to the worst possible organizational structure for safely getting us towards AGI. I have two distinct but related reasons:

  1. Incentives
  2. Culture

From an incentives perspective, consider realistic alternative organizational structures to "AI-focused company" that nonetheless have enou... (read more)


Perhaps that governments are no longer able to raise enough funds for such projects(?)

On the competency topic: Mariana Mazzucato's book Mission Economy convinced me that the public sector is suited for such large-scale projects, if a strong enough motivation is found. She also discusses the financial vs. "public good" motivations of the private and public sectors in detail.

Ulrik Horn
This is interesting. In my experience, both from starting new businesses within larger organizations and from working in startups, one of the main advantages of startups is exactly that they can take a much more relaxed approach to safety and take on much more risk. This is the very reason for the adage "move fast and break things". In software it is less pronounced but still important - a new fintech product developed within e.g. Oracle will face tons of scrutiny for many reasons, such as reputation, but also because if it were rolled out embedded in Oracle's other systems it might cause large-scale damage for the clients. Or, imagine if Bird (the electric scooter company) had been an initiative from within Volvo - they absolutely would not have been allowed to be as reckless with their riders' safety. I think you might find examples of this in the approaches to AI safety at e.g. OpenAI versus autonomous driving at Volvo.
Linch
Thanks! I think this is the crux here. I suspect what you say isn't enough, but it sounds like you have a lot more experience than I do, so I'm happy to (tentatively) defer.

I was reading the Charity Commission report on EV and came across this paragraph: 

During the inquiry the charity took the decision to reach a settlement agreement in relation to the repayment of funds it received from FTX in 2022. The charity made this decision following independent legal advice they had received. The charity then notified the Commission once this course of action had been taken. The charity returned $4,246,503.16 USD (stated as £3,340,021 in its Annual Report for financial year ending 30 June 2023). The Commission had no involvement

... (read more)
Rob Gledhill
Your guess that Zach's post refers to both EV US and EV UK, whereas the Charity Commission only looked at EV UK, is correct - and this explains the difference in amounts.

Thank you!

Gathering some notes on private COVID vaccine availability in the UK.

News coverage:

It sounds like there's been a licensing change allowing provision of the vaccine outside the NHS as of March 2024 (ish). Pharmadoctor is a company that supplies pharmacies and has been putting the word about that they'll soon be able to supply them with vaccine doses for private sale -- most media coverage I found... (read more)

Today I got a dose of Novavax for free, largely by luck that's probably not reproducible.

It turns out that vials of Novavax contain 5 doses and only last a short time, I think 24 hours. Pharmacies therefore need to batch bookings together, and I guess someone got tired of waiting and opted to just buy the entire vial for themselves, letting whoever pick up the other doses. I then found out about this via Rochelle Harris, who in turn found out about it via a Facebook group (UK Novavax Vaccine info) for coordinating these things.

Ben Millwood
I've been linked to The benefits of Novavax explained which is optimistic about the strengths of Novavax, suggesting it has the potential to offer longer-term protection, and protection against variants as well. I think the things the article says or implies about pushback from mRNA vaccine supporters seem unlikely to me -- my guess is that in aggregate Wall Street benefits much more from eliminating COVID than it does from selling COVID treatments, though individual pharma companies might feel differently -- but they seem like the sort of unlikely thing that someone who had reasonable beliefs about the science but spent too much time arguing on Twitter might end up believing. Regardless, I'm left unsure how to feel about its overall reliability, and would welcome thoughts one way or the other.

I wonder how the recent turn for the worse at OpenAI should make us feel about e.g. Anthropic and Conjecture and other organizations with a similar structure, or whether we should change our behaviour towards those orgs.

  • How much do we think that OpenAI's problems are idiosyncratic vs. structural? If e.g. Sam Altman is the problem, we can still feel good about peer organisations. If instead weighing investor concerns and safety concerns is the root of the problem, we should be worried about whether peer organizations are going to be pushed down the same p
... (read more)
ChanaMessinger
Say more about Conjecture's structure?

By that I meant it's an org doing AI safety which also takes VC capital / has profit-making goals / produces AI products.

huw
On (1), these issues seem to be structural in nature, but exploited by idiosyncrasies. In theory, both OpenAI's non-profit board & Anthropic's LTBT should perform roughly the same oversight function. In reality, a combination of Sam's rebellion, Microsoft's financial domination, and the collective power of the workers shifted the decision to being about whether OpenAI would continue independently with a new board or re-form under Microsoft. Anthropic is just as susceptible to this kind of coup (led by Amazon), but only if their leadership and their workers collectively want it, which, in all fairness, I think they're a lot less likely to. But in some sense, no corporate structure can protect against all of the key employees organising to direct their productivity somewhere else. Only a state-backed legal structure really has that power. If you're worried about some bad outcome, I think you either have to trust that the Anthropic people have good intentions and won't sell themselves to Amazon, or advocate for legal restrictions on AI work.

In food ingredient labeling, some food items are not required to bear a list of ingredients. E.g., Article 19 from the relevant EU regulation:

  1. The following foods shall not be required to bear a list of ingredients:
    1. fresh fruit and vegetables, including potatoes, which have not been peeled, cut or similarly treated;
    2. carbonated water, the description of which indicates that it has been carbonated;
    3. fermentation vinegars derived exclusively from a single basic product, provided that no other ingredient has been added;
    4. cheese, butter, fermented milk and cream, to which no ingredient has
... (read more)
Jason
Exempting alt proteins seems unlikely to me. The presumed rationale for this exemption is that these are close to single-ingredient foodstuffs whose single ingredient is (or whose few ingredients are) obvious, so requiring them to bear an ingredient list is pointless.

Yeah, I mostly agree. My thinking is that maybe there are some cases where there are non-obvious ingredients (like the enzymes and microbes added to make cheese). But mostly I'm interested in the other direction: making animal product replacements simpler to make use of.

Say, I'm sure that the cultivated meat industry has an interest in being able to label their meat as something close to a single ingredient, rather than writing out all of the ingredients in the cellular medium.

But, yeah, I am not hopeful that there'd be a really good intervention along these lines.

Having a baby and becoming a parent has had an incredible impact on me. Now more than ever, I feel more connected to, and concerned about, the wellbeing of others. I feel as though my heart has literally grown. I wanted to share this as I expect there are many others who are questioning whether to have children -- perhaps due to concerns about it limiting their positive impact, among many other reasons. But I'm just here to say it's been beautiful, and amazing, and I look forward to the day I get to talk with my son about giving back in a meaningful way.

I just looked at [ANONYMOUS PERSON]'s donations. The amount that this person has donated in their life is more than double the amount that I have ever earned in my life. This person appears to be roughly the same age as I am (we graduated from college ± one year of each other). Oof. It makes me wish that I had taken steps to become a software developer back when I was 15 or 18 or 22.

Oh, well. As they say, comparison is the thief of joy. I'll try to focus on doing the best I can with the hand I'm dealt.

yanni kyriacos
Hi Joseph :) Based on what you've written, I'm going to guess you have probably donated more to effective charities than 99% of the world's population. So you're probably crushing it!
Joseph Lemien
Haha, thanks for bringing a smile to my face.

Two jobs in AI Safety Advocacy that AFAICT don't exist, but should and probably will very soon. Will EAs be the first to create them, though? There is a strong first-mover advantage waiting for someone -

1. Volunteer Coordinator - there will soon be a groundswell from the general population wanting to have a positive impact in AI. Most won't know how to. A volunteer manager will help capture and direct their efforts positively, for example, by having them write emails to politicians

2. Partnerships Manager - the President of the Voice Actors guild reached out... (read more)

What would stop you from paying for an LLM? Take an extreme case: Sam Altman turns around tomorrow and says "We're racing to AGI, I'm not going to worry about Safety at all."

Would that stop you from throwing him $20 a month?

(I currently pay for Gemini)

I don't think CEA has a public theory of change; it just has a strategy. If I were to recreate its theory of change based on what I know of the org, it'd have three target groups:

  1. Non-EAs
  2. Organisers
  3. Existing members of the community

Per target group, I'd say it has the following main activities:

  • Targeting non-EAs, it does comms and education (the VP programme).
  • Targeting organisers, you have the work of the groups team.
  • Targeting existing members, you have the events team, the forum team, and community health. 

Per target group, these activities are aiming fo... (read more)

Chris Leong
There have been writings from CEA on movement-building strategy. I think you might find them in the organiser handbook. These likely aren't up to date though, especially since there's a new CEO.

Yeah, I'm aware of those, but I don't think they've published a ToC for CEA as an organisation anywhere. I think it would be good for CEA to have a public ToC because, as noted here, this is a basic good practice in the non-profit sector. 

Time to cancel my Asterisk subscription?

So Asterisk dedicates a whole self-aggrandizing issue to California, leaves EV for Obelus (what is Obelus?), starts charging readers, and, worst of all, celebrates low prices for eggs and milk?


FWIW EV has been off-boarding its projects, so it isn't surprising that Asterisk is now nested under something else. I don't know anything about Obelus Inc. 

Karthik Tadepalli
I see, fair enough.
Linch
You should cancel if you think it's not worth the money. The other reasons seem worse.