For what it's worth, this is not a prediction; Sundar Pichai said it in an NYT interview: https://www.nytimes.com/2023/03/31/technology/google-pichai-ai.html
My best guess is it will be announced once the switch happens in order to get some good press for Google Bard.
Apparently Bard currently uses an older and smaller language model called LaMDA as its base (you may remember it as the model a Google employee thought was sentient). They're planning on switching over to a more capable model, PaLM, sometime soon, so Bard should get much closer to GPT at that point.
He talks about it here: https://www.dwarkeshpatel.com/p/holden-karnofsky#details (Ctrl+F OpenAI)
I think the implicit claim here is that because SBF (or Dustin/Cari for that matter) was a major EA donor, everything he donates counts as an EA donation. But I don't think that's the right way to look at it. It's not logic we'd apply to other people - I donate a chunk of my money to various EA-affiliated causes, but if I one day decided to donate to the Met most people would consider that separate from my EA giving.
I would classify donations as EA donations if they fall into one of the two buckets below:
Agree that this sounds promising. I think this could be an org that collected well-scoped, well-defined research questions that would be useful for important decisions, and then provided enough mentorship and supervision to get the work done competently; I may try doing this myself this year, starting at a small scale. E.g., there are tons of tricky questions in AI governance that I suspect could be broken down into lots of difficult but slightly simpler research questions. DM me for a partial list.
You may be able to draw lessons from management consulting firms. One big idea behind these firms is that bright 20-somethings can make big contributions to projects in subject areas they don't have much experience in as long as they are put on teams with the right structure.
Projects at these firms are typically led by a partner and an engagement manager who are fairly familiar with the subject area at hand. Actual execution and research are mostly done by lower-level consultants, who typically have little background in the relevant subject area.
Some high-level points on how these teams work:
Relevant: The von Neumann-Morgenstern utility theorem shows that, under certain reasonable-seeming axioms, a rational agent should act so as to maximize the expected value of its utility function: https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem
People have of course raised arguments against some of the axioms; I think the most common targets are axioms 3 and 4 from the link (continuity and independence).
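For reference, here's a compact statement of the theorem in its standard form (my paraphrase; the axiom numbering follows the usual completeness/transitivity/continuity/independence ordering):

```latex
% VNM expected-utility theorem, stated for lotteries L, M over a finite outcome set X.
% If a preference relation \succeq over lotteries satisfies
%   (A1) completeness, (A2) transitivity, (A3) continuity, (A4) independence,
% then there exists a utility function u : X \to \mathbb{R} such that
\[
  L \succeq M \iff \sum_i p_i \, u(x_i) \;\ge\; \sum_i q_i \, u(x_i),
\]
% where L assigns probability p_i and M assigns probability q_i to outcome x_i.
% Moreover u is unique up to positive affine transformation: u'(x) = a\,u(x) + b, a > 0.
```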
An AI that could perfectly predict human text would have a lot of capabilities that humans don't have. Some examples include:
In addition to this, modern LLM training typically consists of two steps: a standard next-word-prediction pretraining step, and a second step based on reinforcement learning. Models fine-tuned with reinforcement learning can, in principle, become arbitrarily good, even though the original base model was trained only to predict the next word.
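As a rough illustration of that two-step recipe (a toy sketch, not any particular lab's pipeline), here's what it looks like in PyTorch: a tiny made-up model is first pretrained with next-token cross-entropy on fake data, then fine-tuned with a simple REINFORCE-style update against an arbitrary stand-in reward function.

```python
# Toy illustration of the two-step LLM training recipe:
#   1) next-token prediction (cross-entropy) on raw text
#   2) reinforcement-learning fine-tuning (REINFORCE) against a reward signal
# The model size, data, and reward below are all invented for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, CTX = 32, 64, 16

class TinyLM(nn.Module):
    """A minimal autoregressive language model (embedding -> GRU -> logits)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):                       # tokens: (batch, seq)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)                          # logits: (batch, seq, vocab)

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# ---- Step 1: "pretraining" on next-word prediction ---------------------------
data = torch.randint(0, VOCAB, (256, CTX + 1))       # fake corpus of token ids
for step in range(200):
    batch = data[torch.randint(0, len(data), (32,))]
    logits = model(batch[:, :-1])                    # predict token t+1 from the prefix
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# ---- Step 2: RL fine-tuning with REINFORCE -----------------------------------
def reward_fn(seq):
    """Stand-in reward model: here, prefer sequences with many even token ids."""
    return (seq % 2 == 0).float().mean(dim=1)        # (batch,)

for step in range(200):
    tokens = torch.randint(0, VOCAB, (32, 1))        # random one-token "prompt"
    logps = []
    for _ in range(CTX):                             # sample a continuation from the LM
        logits = model(tokens)[:, -1]                # logits for the next token
        dist = torch.distributions.Categorical(logits=logits)
        nxt = dist.sample()
        logps.append(dist.log_prob(nxt))
        tokens = torch.cat([tokens, nxt.unsqueeze(1)], dim=1)
    logp = torch.stack(logps, dim=1).sum(dim=1)      # log-prob of each sampled sequence
    r = reward_fn(tokens)
    # REINFORCE: push up the log-prob of samples with above-average reward.
    loss = -((r - r.mean()) * logp).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("mean reward after RL fine-tuning:", reward_fn(tokens).mean().item())
```

In real systems the reward comes from a learned reward model trained on human preference data (RLHF), and the update is usually PPO rather than plain REINFORCE, but the overall shape (pretrain on prediction, then optimize against a reward) is the same.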