All of Tamay's Comments + Replies

>I'm mostly just noting that "altruistic people don't commit crimes" doesn't seem like a likely hypothesis.

I think your data is evidence in favor of a far more interesting conclusion.

I previously estimated that 1-2% of YCombinator-backed companies with valuations over $100M had serious allegations of fraud.

While not all Giving Pledge signatories are entrepreneurs, a large fraction are, which makes this a reasonable reference class. (An even better reference class would be “non-signatory billionaires”, of course.)

My guess is that YCombinator-backed founders tend to be young, with shorter careers than pledgees, and partly because of this will likely have had fewer run-ins with the law. I think a better reference class would be so... (read more)

5
David T
3mo
The YCombinator search also appears to have focused on individual young and apparently successful companies rather than all their founders' activities. Branson's conviction pre-dated the Virgin businesses that actually made him rich, and his alleged sexual misconduct on his private island wouldn't show up in a company search either. I'm not sure the YC search was sufficiently in-depth to uncover founders guilty of yacht license violations, a day in jail for non-business-related offences or alleged sexual misconduct outside business hours.  A 10% conviction rate for financial misconduct still looks very high though. 
2
[comment deleted]
3mo

You asked for examples of advocacy done well with respect to truth-seekingness/providing well-considered takes, and I provided examples.

0
yanni
4mo
You seem annoyed, so I will leave the conversation here.

I think much of the advocacy within EA is reasonably thoughtful and truth-seeking. Reasoning and uncertainties are often transparently communicated. Here are two examples based on my personal impressions:

  • advocacy around donating a fraction of one's income to effective charities is generally focused on providing accounts of key facts and statistics, and often acknowledges its demandingness and its potential for personal and social downsides
  •  wild animal suffering advocacy usually does things like acknowledging the second-order effects of interventions o
... (read more)
3
yanni
4mo
Thanks for your thoughtful reply, I appreciate it :) I am a bit confused still. I'm struggling to see how the work of GWWC is similar to the Pause Movement? Unless you're saying there is a vocal contingent of EAs (who don't work for GWWC) who publicly advocate (to non-EAs) for donating ≥ 10% of your income? I haven't seen these people. In short, I'm struggling to see how they're analogous situations.

I think EAs should be more critical of EA's advocacy contingents, and that those involved in such efforts should set a higher bar for offering more thoughtful and considered takes.

Short slogans and emojis-in-profiles, such as those often used in ‘Pause AI’ advocacy, are IMO inadequate for the level of nuance required by complex topics like these. Falling short can burn the credibility and status of those involved in EA in the eyes of onlookers.

1
Dylan Richardson
4mo
I'm a bit skeptical that all identitarian tactics should be avoided, insofar as that is what this is. It's just too potent a tool - just about every social movement has promulgated itself by these means, by plan or otherwise. Part of this is a "growth of the movement" debate; I'm inclined to think that more money + idea proliferation is needed. I do think there are some reasonable constraints:

  1. Identitarian tactics should be used self-consciously and cynically. It's when we forget that we are acting that the worst of in/out-groupiness presents itself. I do think we could do with some more reminding of this.
  2. I would agree that certain people should refrain from this. Fine if early-career people do it, but I'll start being concerned if MacAskill loses his cool and starts posting "I AM AN EA💡" and roasting outgroups.
8
yanni
4mo
Can you provide a historical example of advocacy that you think reaches a high level of thoughtfulness and consideration?

As someone who runs one of EA's advocacy contingents, I think more criticism is probably a good idea (though I suspect I'll find it personally unpleasant when applied to things I work on), but I'd suggest a few nuances I think exist here:

  1. EA is not unitary, and different EAs and EA factions will have different and at times opposing policy goals. For example, many of the people who work at OpenAI/Anthropic are EAs (or EA adjacent), but many EAs think working at OpenAI/Anthropic leads to AI acceleration in a harmful way (EAs also have diffe
... (read more)

One issue I have with this is that when someone calls this the 'default', I interpret them as implicitly making some prediction about the likelihood of such countermeasures not being taken. The issue then is that this is a very vague way to communicate one's beliefs. How likely does some outcome need to be for it to become the default? 90%? 70%? 50%? Something else?
 

The second concern is that it's improbable that minimal or no safety measures will be implemented, which makes it odd to set this as a key baseline scenario. This belief is supported by substantial e... (read more)

The term 'default' in discussions about AI risk (like 'doom is the default') strikes me as an unhelpful rhetorical move. It suggests an unlikely scenario where little-to-no measures are taken to address AI safety. Given the active research and the fact that alignment is likely to be crucial to unlocking the economic value from AI, this seems like a very unnatural baseline to frame discussions around.

1
Nick K.
7mo
While I agree that slogans like "doom is the default" should not take over the discussion at the expense of actual engagement, it doesn't appear that your problem is with the specific phrasing but rather with the content behind the statement.
3
Brad West
7mo
It seems like you're not arguing about the rhetoric of the people you disagree with, but rather about the substantive question of the likelihood of disastrous AGI. The reasons you have given tend to disconfirm the claim that "doom is the default." But as rhetoric, the phrase succinctly conveys their belief that AGI will be very bad for us unless a very difficult and costly solution is developed.
3
quinn
7mo
I take it as a kind of "what do known incentives do and neglect to do" - when I say "default" I mean "without philanthropic pressure" or "well-aligned with making someone rich". Of course, a lot of this depends on my background understanding of public-private partnerships through the history of innovation (something I'm liable to be wrong about). The standard Venn diagram of focused research organizations (https://fas.org/publication/focused-research-organizations-a-new-model-for-scientific-research/) gives a more detailed, less clumsy view along the same lines, but the point is still "there are blindspots that we don't know how to incentivize". It's certainly true that many parts of almost every characterization/definition of "alignment" can simply be offloaded to capitalism, but I think there are a bajillion reasonable and defensible views about which parts those are, whether they're hard, whether they may be discovered in an inconvenient order, etc.
2
rhollerith
7mo
Well, sure, but if there is a way to avoid the doom, then why, after 20 years, has no one published a plan for how to do it that doesn't resemble either a speculative research project of the type you try when you clearly don't understand the problem, or the vague output of a politician writing about a sensitive issue?

I've updated the numbers based on today's predictions. Key updates:

  • AI-related risks have seen a significant increase, almost doubling both in catastrophic risk (from 3.06% in June 2022 to 6.16% in September 2023) and extinction risk (from 1.56% to 3.39%).
  • Biotechnology risks have actually decreased in catastrophic risk (from 2.21% to 1.52%), while staying constant in extinction risk (0.07% in both periods).
  • Nuclear war has shown an uptick in catastrophic risk (from 1.87% to 2.86%) but remains consistent in extinction risk (0.06% in both periods).
Answer by Tamay · Sep 14, 2023 · 31
7
0
  • Owain Evans on AI alignment (situational awareness in LLMs, benchmarking truthfulness)
  • Ben Garfinkel on AI policy (best practices in AI governance, open source, the UK's AI efforts)
  • Anthony Aguirre on AI governance, forecasting, cosmology
  • Beth Barnes on dangerous capability evals (evals of GPT-4 and Claude)
8
Neel Nanda
7mo
+1 to Beth Barnes on dangerous capability evals
Tamay
1y · 66
27
1

I agree the victim-perpetrator framing is an important lens through which to view this saga. But I also think an investor-investee framing is another important one, with different prescriptions for what lessons to take away and what to do next. The EA community staked easily a billion dollars' worth of its assets (in focus, time, reputation, etc.), and ended up losing it all. I think it's crucial to reflect on whether the extent of our due diligence and risk management was commensurate with the size of EA's bet.

Tamay
1y · 79
44
6

One specific question I would want to raise is whether EA leaders involved with FTX were aware of, or raised concerns about, undisclosed conflicts of interest between Alameda Research and FTX.

For example, I strongly suspect that EAs tied to FTX knew that SBF and Caroline (CEO of Alameda Research) were romantically involved (I strongly suspect this because I have personally heard Caroline talk about her romantic involvement with SBF in private conversations with several FTX fellows). Given the pre-existing concerns about the conflicts of interest between Al... (read more)

I believe that, even in the face of this particular disaster, who EAs are fucking is none of EA's business. There are very limited exceptions to this rule, like "maybe don't fuck your direct report" or "if you're recommending somebody for a grant, whom you have fucked, you ought to disclose this fact to the grantor" or "Notice when somebody in a position of power seems to be leaving behind a long trail of unhappy people they've fucked", plus of course everything that shades over into harassment, assault, and exploitation - none of which are being sug... (read more)

Is the romantic relationship that big a deal? They were known to be friends and colleagues, and both were known to be involved with EA and the FTX Future Fund, and I thought it was basically common knowledge that Alameda was deeply connected with FTX, as you show with those links - it just seems kind of obvious, with FTX being composed of former Alameda employees and them sharing an office space or something like that.

This is insightful.  Some quick responses:

  • My guess would be that the ability to commercialize these models strongly hinges on firms' ability to wrap them up with complementary products, contributing to an ecosystem with network effects, dependencies, evangelism, etc.
  • I wouldn't draw too strong conclusions from the fact that the few early attempts to commercialize models like these, notably by OpenAI, haven't succeeded in creating the preconditions for generating a permanent stream of profits. I'd guess that their business models l
... (read more)

By request, I have updated the numbers based on the latest predictions. Previous numbers can be found here.

I won the Stevenson Prize (a prize given out at my faculty) for my performance in the MPhil in Economics. I gather Amartya Sen won the same prize some 64 years ago, which I think is pretty cool.

>Amartya Sen won the same prize

No pressure.

 

Just kidding, congratulations!

2
alex lawsen (previously alexrjl)
3y
Awesome!
3
Linch
3y
Damn congrats!!!