Kei

Comments
Do we have any gauge on how accurate the FTX numbers ended up being? More specifically, how much of the donated FTX money ended up either not being distributed, or was ultimately clawed back?

How do you decide what data/research to prioritize?

An AI that could perfectly predict human text would have a lot of capabilities that humans don't have. (Note that it is impossible for any AI to perfectly predict human text, but an imperfect text-predictor may have weaker versions of many of the capabilities a perfect predictor would have.) Some examples include:

  • Ability to predict future events: Lots of text on the internet describes something that happened in the real world. Examples might include the outcome of some sports game, whether a company's stock goes up or down and by how much, or the result of some study or scientific research. Being able to predict such text would require the AI to make strong predictions about complicated real-world events.
  • Reversibility: Many tasks are easy to do in one direction but much harder in reverse. Examples include factoring a number (it's easier to multiply two primes p and q to get a number N=pq than to figure out p and q when given N) and hash functions (it's easy to calculate the hash of a value, but almost impossible to recover the original value from the hash). An AI trained on the reverse, more difficult direction of such a task would be incentivized to develop capabilities beyond what humans can do (see the sketch after this list).
  • Speed: Lots of text on the internet comes from very long and painstaking effort. If an AI can output the same thing a human can, but 100x faster, that is still a significant capability increase over humans.
  • Volume of knowledge: Available human text spans a wider breadth of subject areas than any single person has expertise in. An AI trained on this text could have a broader set of knowledge than any human - and in fact by some definition this may already be the case with GPT-4. To the extent that making good decisions is helped by having internalized the right information, advanced models may be able to make good decisions that humans are not able to make themselves.
  • Extrapolation: Modern LLMs can extrapolate to some degree from information provided in their training set. In some domains, this can result in LLMs performing tasks more complicated than any they had previously seen in the training data. It's possible that, with the appropriate prompt, these models could extrapolate to generate the text that slightly smarter humans would produce.
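
To make the reversibility asymmetry concrete, here is a minimal Python sketch (the primes and input string are arbitrary examples, not taken from any real system): the forward direction is a single multiplication or hash call, while the naive reverse direction does roughly a million times more work.

```python
# Minimal illustration of the forward/reverse asymmetry: multiplying
# two primes is one operation, while recovering them from the product
# (here by naive trial division) takes ~a million steps.
import hashlib

p, q = 1000003, 1000033
N = p * q  # forward direction: instant

def factor(n):
    """Reverse direction: naive trial division, vastly more work."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

print(factor(N))  # (1000003, 1000033)

# Same shape for hashing: computing a digest is one call, but the only
# generic way to invert it is brute-force search over possible inputs.
digest = hashlib.sha256(b"some input").hexdigest()
print(digest)
```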

In addition, modern LLM training typically consists of two steps: a standard next-word-prediction pretraining step, followed by a reinforcement-learning-based fine-tuning step. Models fine-tuned with reinforcement learning can in principle become even better than models trained with next-token prediction alone (toy sketch below).
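
As a toy illustration of those two stages (emphatically not a real LLM pipeline; the bigram model, corpus, and reward are all invented for this sketch): stage 1 fits a next-token predictor by maximum likelihood, and stage 2 applies a REINFORCE-style update that shifts the model toward higher-reward outputs, which next-token prediction alone would never do.

```python
# Toy two-stage "LLM" training: (1) next-token prediction,
# (2) reinforcement learning. Everything here is a made-up miniature.
import numpy as np

rng = np.random.default_rng(0)
corpus = "the cat sat on the mat . the dog sat on the log ."
vocab = sorted(set(corpus.split()))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Stage 1: next-token prediction. For a bigram model, counting
# transitions (with add-one smoothing) is the maximum-likelihood fit.
counts = np.ones((V, V))
tokens = [idx[w] for w in corpus.split()]
for a, b in zip(tokens, tokens[1:]):
    counts[a, b] += 1
logits = np.log(counts)  # each row parametrizes P(next | current)

def sample_sequence(start, length=5):
    seq = [start]
    for _ in range(length):
        p = np.exp(logits[seq[-1]])
        seq.append(rng.choice(V, p=p / p.sum()))
    return seq

# Stage 2: reinforcement learning. The (made-up) reward likes
# sequences mentioning "dog"; a naive REINFORCE update nudges the
# sampling distribution toward higher-reward sequences.
def reward(seq):
    return float(idx["dog"] in seq)

lr = 0.1
for _ in range(200):
    seq = sample_sequence(idx["the"])
    r = reward(seq)
    for a, b in zip(seq, seq[1:]):
        p = np.exp(logits[a])
        p /= p.sum()
        grad = -p           # gradient of log-softmax w.r.t. the row...
        grad[b] += 1.0      # ...is onehot(next) - softmax(row)
        logits[a] += lr * r * grad
```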

For what it's worth, this is not a prediction; Sundar Pichai said it in an NYT interview: https://www.nytimes.com/2023/03/31/technology/google-pichai-ai.html

My best guess is that it will be announced once the switch happens, in order to get some good press for Google Bard.

Apparently Bard currently uses an older and smaller language model called LaMDA as its base (you may remember it as the model a Google employee thought was sentient). They're planning on switching over to a more capable model, PaLM, sometime soon, so Bard should get much closer to GPT at that point.

Thanks for making this! It was a lot of fun to play and I imagine it will be good practice.


I think the implicit claim here is that because SBF (or Dustin/Cari for that matter) was a major EA donor, everything he donates counts as an EA donation. But I don't think that's the right way to look at it. It's not logic we'd apply to other people: I donate a chunk of my money to various EA-affiliated causes, but if I one day decided to donate to the Met, most people would consider that separate from my EA giving.

I would classify donations as EA donations if they fall into one of the below two buckets:

  1. Donations given out by a major EA org: Examples include the Open Philanthropy Project, GiveWell, and the FTX Future Fund.
  2. Donations given out by EAs or EA-affiliated people to causes that have been discussed and argued for a lot in the EA community. Bonus points if it's explicitly listed as a cause area on major EA org websites. Examples include anti-malaria nets, animal welfare charities, pandemic preparedness, and AI safety research. I also think donations to Carrick Flynn's campaign would fall into this bucket given the amount of discussion there was about it here.

Can someone who is not a student participate?

Agree that this sounds promising. I think this could be an org that collected well-scoped, well-defined research questions that would be useful for important decisions and then provided enough mentorship and supervision to get the work done in a competent way; I might be trying to do this this year, starting at a small scale. E.g., there are tons of tricky questions in AI governance that I suspect could be broken down into lots of difficult but slightly simpler research questions. DM me for a partial list.


You may be able to draw lessons from management consulting firms. One big idea behind these firms is that bright 20-somethings can make big contributions to projects in subject areas they don't have much experience in, as long as they are put on teams with the right structure.

Projects at these firms are typically led by a partner and an engagement manager who are fairly familiar with the subject area at hand. Actual execution and research is mostly done by lower-level consultants, who typically have little background in the relevant subject area.

Some high-level points on how these teams work:

  • The team leads formulate a structure for what specific tasks need to be done to make progress on the project
  • There is a lot of hand-holding and specific direction of lower-level consultants, at least until they prove they can do more substantial tasks on their own
  • There are regular check-ins and regular deliverables to ensure people are on the right track and to switch course if necessary