Do we have any gauge on how accurate the FTX numbers ended up being? More specifically, how much of the donated FTX money ended up either not being distributed, or was ultimately clawed back?
How do you decide what data/research to prioritize?
An AI that could perfectly predict human text would have a lot of capabilities that humans don't have. (Note that it is impossible for any AI to perfectly predict human text, but an imperfect text-predictor may have weaker versions of many of the capabilities a perfect predictor would have.) Some examples include:
In addition to this, modern LLM training typically consists of two steps: a standard next-token-prediction pretraining step, followed by a reinforcement-learning-based fine-tuning step. Models trained with reinforcement learning can in principle become even better than models trained only on next-token prediction.
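To make the first step concrete, here is a toy illustration of next-token prediction: a character-level bigram model that predicts the most frequent character to follow each context character. This is a hypothetical sketch for intuition only, nothing like an actual transformer or RLHF pipeline.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # Count, for each character, which characters follow it.
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, context_char):
    # Predict the most frequently observed follower of context_char.
    return counts[context_char].most_common(1)[0][0]

model = train_bigram("banana")
print(predict_next(model, "a"))  # -> 'n' ('a' is followed by 'n' twice)
```

Real LLMs do the same thing in spirit (predict the next token given context) but over huge vocabularies and with learned neural representations rather than raw counts; the RL step then adjusts those learned predictions toward outputs humans rate highly.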
For what it's worth, this is not a prediction, Sundar Pichai said it in an NYT interview: https://www.nytimes.com/2023/03/31/technology/google-pichai-ai.html

My best guess is it will be announced once the switch happens in order to get some good press for Google Bard.
Apparently Bard currently uses an older and smaller language model called LaMDA as its base (you may remember it as the model a Google employee thought was sentient). They're planning on switching over to a more capable model, PaLM, sometime soon, so Bard should get much closer to GPT at that point.
He talks about it here: https://www.dwarkeshpatel.com/p/holden-karnofsky#details (Ctrl+F OpenAI)
Thanks for making this! It was a lot of fun to play and I imagine it will be good practice.
I think the implicit claim here is that because SBF (or Dustin/Cari, for that matter) was a major EA donor, everything he donates counts as an EA donation. But I don't think that's the right way to look at it. It's not the logic we'd apply to other people: I donate a chunk of my money to various EA-affiliated causes, but if I one day decided to donate to the Met, most people would consider that separate from my EA giving.
I would classify donations as EA donations if they fall into one of the two buckets below:
Can someone who is not a student participate?
Agree that this sounds promising. I think this could be an org that collects well-scoped, well-defined research questions that would be useful for important decisions, and then provides enough mentorship and supervision to get the work done competently. I might try to do this myself this year, starting at a small scale. E.g., there are tons of tricky questions in AI governance that I suspect could be broken down into lots of difficult but somewhat simpler research questions. DM me for a partial list.
You may be able to draw lessons from management consulting firms. One big idea behind these firms is that bright 20-somethings can make big contributions to projects in subject areas they don't have much experience in as long as they are put on teams with the right structure.
Projects at these firms are typically led by a partner and an engagement manager who are fairly familiar with the subject area at hand. Actual execution and research is mostly done by lower-level consultants, who typically have little background in the relevant subject area.
Some high-level points on how these teams work: