All of Michaël Trazzi's Comments + Replies

I don't know what to do for the url not to break on EA Forum by default.

Last time I tried the https URL without www and ran into the same problem; adding the www solved it.

I believe it's a bug in how URLs are validated by the EAF (the URLs are valid, and they don't break on LW).

Not sure how to tag EAF devs but this is quite annoying.

On a related note, has anyone looked into the cost-effectiveness of funding new podcasts vs. convincing mainstream ones to produce more impactful content, similar to how OpenPhil funded Kurzgesagt?

For instance, has anyone tried to convince people like Lex Fridman, who has already interviewed MacAskill and Bostrom, to interview more EA-aligned speakers?

My current analysis gives Lex an audience of roughly 1-10M per episode, and I'd expect that something around $20-100k per episode would be enough of an incentive.
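As a rough sanity check (both ranges above are my own guesses, not measured figures), the implied cost per listener at the two extremes would be:

\[
\frac{\$20\text{k}}{10\text{M listeners}} \approx \$0.002 \;\text{per listener}, \qquad \frac{\$100\text{k}}{1\text{M listeners}} \approx \$0.10 \;\text{per listener}
\]

So even the expensive corner of those guesses is on the order of ten cents per listener per episode.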

In comparison, when giving $10k to start... (read more)

I'm flattered that The Inside View is included here among so many great podcasts. This is an amazing opportunity, and I am excited to see more podcasts emerge, especially video ones.

If anyone is on the edge of starting and would like to hear some of the hard lessons I've learned and other hot takes I have on podcasting or video, feel free to message me at michael.trazzi at gmail or (better) comment here.

1
Ben Yeoh
2y
I'd love to hear any lessons learned, as well as the things you now think are good about pods and the things we should avoid.

Note: if you want to discuss some of the content of this episode, or one of the above quotes, I'll be at EAG DC this weekend chatting about AI Governance, so feel free to book a meeting!

Agreed!

As Zach pointed out below, there might be some mistakes left in the precise numbers; for any quantitative analysis, I would suggest reading AI Impacts' write-up: https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/

3
Zach Stein-Perlman
2y
AI Impacts also published our 2022 survey's data!

Thanks for the corrections!

Can you tell me exactly which numbers I should change and where?

2
Zach Stein-Perlman
2y
[…] could be changed to either […] or something like […], depending on whether you want to preserve Katja's words or (almost) preserve her numbers.

Sorry about that! The AI generating the transcript was not conscious of the pain created by its terrible typos.

Thanks for the quotes and the positive feedback on the interview/series!

Re Gato: we also mention it as a reason why training across multiple domains does not increase performance in narrow domains, so there is also evidence against generality (in the sense of generality being useful). From the transcript:

"And there’s been some funny work that shows that it can even transfer to some out-of-domain stuff a bit, but there hasn’t been any convincing demonstration that it transfers to anything you want. And in fact, I think that the recent paper… The Gato paper

... (read more)

I think he would agree with "we wouldn't have GPT-3 from an economic perspective". I am not sure whether he would agree with the claim of theoretical impossibility. From the transcript:

"Because a lot of the current models are based on diffusion stuff, not just bigger transformers. If you didn’t have diffusion models [and] you didn’t have transformers, both of which were invented in the last five years, you wouldn’t have GPT-3 or DALL-E. And so I think it’s silly to say that scale was the only thing that was necessary because that’s just clearly not true."

To b... (read more)

Thanks for the reminder about the movement's ideal of open-minded epistemics. To clarify, I do spend a lot of time reading posts from people who are concerned about AI Alignment, and talking to multiple "skeptics" made me realize things I had not properly considered before and helped me see where AI Alignment arguments might be wrong or simply overconfident.

(FWIW, I did not feel any pushback on the EAF when suggesting that skeptics might be right, and, to be clear, that was not my intention. The goal was simply to showcase a methodology for facilitating a constructive dialogue between the Machine Learning and AI Alignment communities.)

LessWrong has been A/B testing an "agree/disagree" voting system separate from karma. I would suggest contacting the LW team to learn 1) the results of their experiments and 2) how easy it would be to copy the feature over to the EAF (since the codebases used to be the same).

2
Gavin
2y
I saw one of the experiments; it was really confusing.

Thanks for the thoughtful post. (Cross-posting a comment I made on Nick's recent post.)

My understanding is that people on the EAF were mostly speculating about the rejection rate for the FTX Future Fund's grants and the distribution of $ per grantee. What might have caused the propagation of "free-spending" EA stories:

  • the selection bias at EAG(X) conferences, where there was a high % of grantees
  • the fact that the FTX Future Fund did not (afaik) release their rejection rate publicly
  • other grants made by other orgs happening concurrently (e.g. CEA)

This post ... (read more)

My understanding is that people on the EAF were mostly speculating about the rejection rate and the distribution of $ per grantee. What might have caused the propagation of "free-spending" EA stories:

  • the selection bias at EAG(X) conferences, where there was a high % of grantees
  • the fact that FTX did not (afaik) release their rejection rate publicly
  • other grants made by other orgs happening concurrently (e.g. CEA)

I found this sentence in Will's recent post: "For example, Future Fund is trying to scale up its giving rapidly, but in the recent open call it reje... (read more)

9
Linch
2y
I'm surprised that this is helpful fwiw. My impression is that the denominator of who applies for funding varies a lot across funding agencies, and it's pretty easy to (sometimes artificially) inflate or deflate the rejection rate through, e.g., improper advertising/marketing to less suitable audiences or insufficient advertising to marginal audiences. Concretely, Walmart DC allegedly had a rejection rate of 97.4% in 2014, but overall we should not expect Walmart to be substantially more selective than Future Fund.

Note: the probabilities in the above quotes and in the podcast are the result of armchair forecasting. Please do not quote Peter on this. (I want to give my guests some space to share intuitions about their estimates without having to worry about being extra careful.)

To make that question more precise, we're trying to estimate xrisk_{counterfactual world without those people} - xrisk_{our world}, with xrisk_{our world}~1/6 if we stick to The Precipice's estimate.
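As a purely illustrative back-of-the-envelope (the 1/6 total and the roughly 1/10 AI component are The Precipice's estimates; the doubling is just a stand-in for the "twice as worried" intuition below, not anyone's considered forecast):

\[
\Delta x \;=\; x_{\text{counterfactual}} - x_{\text{ours}}, \qquad x_{\text{ours}} \approx \tfrac{1}{6}
\]
\[
\text{AI component: } \tfrac{1}{10} \to \tfrac{2}{10} \;\Rightarrow\; \Delta x \approx \tfrac{1}{10}
\]

The actual counterfactual is, of course, much harder to pin down than this.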

Let's assume that the x-risk research community completely vanishes right now (including its past outputs and all the research it would have created). It's hard to quantify, but I would personally be at least twice as worried about AI risk as I am right now (I am unsure about how much it would affect nuclear/climate change/natural disasters/engineered ... (read more)