
A while ago OpenPhil gave a decent sum of money to OpenAI to buy a board seat. Since then various criticisms of OpenAI have been made. Do we know anything about how OpenPhil used its influence via that board seat?

[anonymous]

I'm not sure what can be shared publicly for legal reasons, but I would note that in board dynamics generally it's pretty tough to clearly establish counterfactual influence. At a high level, Holden was holding space for safety and governance concerns and encouraging the rest of the leadership to spend time and energy thinking about them.

I believe the implicit premise of the question is something like "do those benefits outweigh the potential harms of the grant?" Personally, I see this as a misunderstanding, i.e. that OP helped OpenAI to come into existence and it might not have happened otherwise. I've gone back and looked at some of the comms from around the time (2016), as well as debriefed with Holden, and I think the most likely counterfactual is that the time to the next fundraising (2019) and the creation of the for-profit entity would have been shortened (due to less financial runway). Another possibility is that the other funders from the first round would have made larger commitments. I give effectively 0% of the probability mass to OpenAI not starting up.

[anonymous]

> Personally, I see this as a misunderstanding, i.e. that OP helped OpenAI to come into existence and it might not have happened otherwise.

I think some people have this misunderstanding, and I think it's useful to address it.

That said, much of the time I don't think people who are saying "do those benefits outweigh the potential harms" are assuming that the counterfactual was "no OpenAI." I think they're assuming the counterfactual is something like "OpenAI has less money, or has to take somewhat less favorable deals with investors, or has to do something that it thought would be less desirable than 'selling' a board seat to Open Phil."

(I don't consider myself to have strong takes on this debate, and I think there are lots of details I'm missing. I have spoken to some people who seem invested in this debate, though.)

My current ITT of a reasonable person who thinks the harms outweighed the benefits says something like this: "OP's investment seems likely to have accelerated OpenAI's progress and affected the overall rate of AI progress. If OP had not invested, OpenAI likely would have had to do something else that was worse for them (from a fundraising perspective) which could have slowed down OpenAI and thus slowed down overall AI progress."

Perhaps this view is mistaken (e.g., maybe OpenAI would have just fundraised sooner and started the for-profit entity sooner). But (at first glance), giving up a board seat seems pretty costly, which makes me wonder why OpenAI would choose to give up the board seat if they had some less costly alternatives.

(I also find it plausible that the benefits outweighed the costs, though my ITT of a reasonable person on the other side says something like "what were the benefits? Are there any clear wins that are sharable?")

[anonymous]

Unless it's a hostile situation (as might happen with public companies/activist investors), I don't think it's actually that costly. At seed stage, it's just kind of normal to give board seats to major "investors", and you want to have a good relationship with both your major investors and your board.

The attitude Sam had at the time was less "please make this grant so that we don't have to take a bad deal somewhere else, and we're willing to 'sell' you a board seat to close the deal" and more "hey would you like to join in on this? we'd love to have you. no worries if not."

[anonymous]

Thanks for this context. Is it reasonable to infer that you think that OpenAI would've got a roughly-equally-desirable investment if OP had not invested? (Such that the OP investment had basically no effect on acceleration?)

[anonymous]

Yes, that's my position. My hope is that we actually slowed acceleration by participating, but I'm quite skeptical of the view that we added to it.

[anonymous]

Thanks! I found this context useful. 

> I give effectively 0% of the probability mass to OpenAI not starting up.

I think an important question here is whether OpenAI would have reached a critical level of success that is necessary for, say, convincing Microsoft to throw $1B at them—before exhausting their runway—if OpenPhil had not recommended the $30M grant in 2017.

> [...] I think the most likely counterfactual is that the time to the next fundraising (2019) and creation of the for-profit entity would have been shortened (due to less financial runway).

This seems reasonable. On the other hand, if OpenAI had not reached a critical level of success as a non-profit org, I think it is not obvious that their for-profit spinoff would have succeeded in raising investments that are sufficient to get them to a critical level of success. They would have probably needed to compete with many other for-profit AI startups for investments.

[anonymous]

Why do you believe that’s binary? (Vs just less funding/smaller valuation at the first round)

I think this type of ML research (i.e. trying to train groundbreaking neural networks) is pretty messy and unpredictable; and money and luck are fungible to some extent. It's not like back in 2017 OpenAI's researchers could perfectly predict which ML experiments would succeed, and how to turn $X of GPU hours into an impressive model that would allow them to raise >$X in the next round, with probability 1.

For example, suppose OpenAI's researchers ran some expensive experiment in 2017, and did not get impressive results. They then need to decide whether to give up on that particular approach, or just tweak some hyperparameters and run another such experiment. The amount of remaining funding they have at that point may determine their decision.

[anonymous]

Again, why does it have to be X=$1B and probability 1?

It seems like if the $30M mattered, then the counterfactual is that they needed to be able to raise $30M at the end of their runway, at any valuation, rather than $1B, in order to bridge to the more impressive model. There should be a sizeable gap in what constitutes a sufficiently impressive model between those scenarios. In theory they also had "up to $1B" in grants from their original funders, including Elon, that should have been possible to draw on if needed.

How did you come to the conclusion that funding ML research is "pretty messy and unpredictable"? I've seen many ML companies funded over the years as straightforwardly as other tech startups, esp. if they had great professional backgrounds as was clearly the case with OAI. Seems like an unnecessary assumption on top of other unnecessary assumptions.

> How did you come to the conclusion that funding ML research is "pretty messy and unpredictable"? I've seen many ML companies funded over the years as straightforwardly as other tech startups, […]

I think it's important to distinguish here between companies that intend to use existing state-of-the-art ML approaches (where the innovation is in the product side of things) and companies that intend to advance the state-of-the-art in ML. I'm only claiming that research that aims to advance the state-of-the-art in ML is messy and unpredictable.

To illustrate my point: If we use an extreme version of the messy-and-unpredictable view, we can imagine that OpenAI's research was like repeatedly drawing balls from an urn, where drawing each ball costs $1M and there is a 1% chance (or whatever) to draw a Winning Ball (that is analogous to getting a super impressive ML model). The more funding OpenAI has the more balls they can draw, and thus the more likely they are to draw a Winning Ball. Giving OpenAI $30M increases their chance to draw a Winning Ball; though that increase must be small if they have access to much more funding than $30M (without a super impressive ML model).
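The urn analogy above can be sketched numerically. A minimal sketch, using only the commenter's illustrative assumptions ($1M per draw, 1% chance per draw; not real figures):

```python
# Toy model of the urn analogy: each $1M of funding buys one independent draw,
# and each draw has a 1% chance of yielding a "Winning Ball"
# (analogous to getting a super impressive ML model).
def p_win(budget_millions, p_per_draw=0.01):
    """Probability of drawing at least one Winning Ball within the budget."""
    return 1 - (1 - p_per_draw) ** budget_millions

# The marginal value of an extra $30M depends heavily on the baseline budget:
boost_when_constrained = p_win(60) - p_win(30)    # lab with a short runway
boost_when_flush = p_win(1030) - p_win(1000)      # lab with much deeper pockets
```

On these toy numbers, the same $30M adds roughly 19 percentage points of success probability for a runway-constrained lab but a negligible amount for a lab that already has access to far more funding, which is the point being made in the last sentence above.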

[anonymous]

I understood what you meant before, but still see it as a bad analogy.

For context I saw many rounds of funding as a board member at Vicarious which was a pure lab for most of its life (and then later attempted robotics but that small revenue actually devalued it in the eyes of investors). There, what it took was someone getting excited about the story and smaller performance milestones along the way.

[comment deleted]

AFAIK this is not something that can be shared publicly. 

My source is that I remember Ajeya mentioning at one point that it led to positive changes, and that she doesn't think it was a bad decision in retrospect, but she cannot get into those changes for NDA reasons.
