aogara

2394 karma · Joined Jan 2019

Bio

CS student at the University of Southern California. Previously worked for three years as a data scientist at a fintech startup. Before that, four months on a work trial at AI Impacts. Currently working with Professor Lionel Levine on language model safety research.

Comments (323)

I trust your judgement on this, but I think the Community section might be more fitting. This post is mainly about whether FTX money that was supposedly being spent to support pandemic preparedness was instead going to candidates that would further enrich FTX. Plenty of people (myself included) have lowered the visibility of Community posts on their frontpage, but those who are interested in SBF's corruption would probably want this on their frontpage. The real discussion here is about SBF's potential dishonesty, not about any of the four topics outlined in the policy:

...the following types of post will remain in the “Personal Blog” category (meaning that they will not appear on the Forum’s homepage, but will appear in “All Posts,” in the author’s profile, and on any relevant tag pages):

  • Posts advocating for or against a specific political candidate or group of candidates (e.g. “Why effective altruists should vote for candidate Y”)
    • This policy also applies to posts which neutrally solicit opinions on a particular candidate, since those opinions are generally going to be advocacy for or against the candidate, which risks leading to the same issues.
  • Posts discussing policy issues with only tenuous connection to the main EA cause areas (e.g. “What John Smith’s position on gun rights means for EA voters”)

Some political content will continue to receive “Frontpage” categorization:

  • Posts discussing general systems for evaluating any political candidate (e.g. “Candidate Scoring System, Third Release”)
  • Posts discussing policy issues that are directly connected to core EA cause areas (e.g. this post on a campaign to boost Canadian development assistance)

I think it's noteworthy that surveys from 2016, 2019, and 2022 have all found roughly similar timelines to AGI (50% by ~2060) for the population of published ML researchers. On the other hand, the EA and AI safety communities seem much more focused on short timelines than they were seven years ago (though I don't have a source on that). 

The adversarial Turing test seems like an odd definition to forecast on. Nuno's linked blogpost makes one side of the argument well: there could be ways to identify an AI as different from a human long after AI becomes economically transformative or capable of taking over the world. On the other side, AI that passes an adversarial Turing test could still fail to have much economic impact (perhaps because of regulation, or because it's too expensive to replace human labor) or fail to pose a meaningful existential risk (because it's not goal-directed, not misaligned, or not capable of overpowering humanity). 

I'd be more interested in your forecasts on a few other operationalizations of AI timelines:

  • Economic impact, as measured by GDP growth rate or AI as % of inputs to GDP, seems like an important aggregate to track and forecast. It has the important quality of being easily verifiable and continuous over time, making forecasts easy to validate with each passing year. On the other hand, economic impact will likely lag cutting edge capabilities, which might pose the most x-risk. 
  • X-Risk is what I actually care about. With all the debate over whether AI x-risk is disjunctive or conjunctive, I wouldn't want to use a model split into "Will we get AGI, and if so, will x-risks be realized?" when there are clear cases where x-risk could occur without first meeting the AGI definition. A tougher question is whether to forecast the exact date of human disempowerment, a preceding "point of no return", or something else entirely. But all of these seem more directly aimed at the most important question of x-risk. 
  • A particularly clean decomposition is "In what year will world energy consumption first exceed 130% of every prior year?" from Matthew Barnett's Metaculus question (see the sketch after this list). This is designed to forecast transformative AI while accounting for the possibility that AI will overpower humanity, causing GDP to collapse as AI seizes all available resources for its own goals. Forecasting both this question and the economic impact question might reveal your x-risk estimate in the difference, unless you think that AI could overpower humanity without transformative industrial capacity. 
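To make that last operationalization concrete, here's a minimal sketch in Python of how the threshold could be checked against a series of annual world energy figures. The function name and the numbers are mine and purely illustrative; this is not Metaculus's exact resolution procedure.

```python
def first_takeoff_year(consumption_by_year):
    """Return the first year whose world energy consumption exceeds
    130% of every prior year's consumption, or None if none does."""
    running_max = None
    for year in sorted(consumption_by_year):
        value = consumption_by_year[year]
        if running_max is not None and value > 1.3 * running_max:
            return year
        running_max = value if running_max is None else max(running_max, value)
    return None

# Hypothetical series in exajoules (made-up numbers, not real data):
series = {2020: 580.0, 2021: 595.0, 2022: 604.0, 2040: 900.0}
print(first_takeoff_year(series))  # 2040, since 900 > 1.3 * 604
```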

Your thinking on these questions has been pretty persuasive to me, especially Nuno's recent blog and Eli Lifland's writeup thinking through the full case. It's nice to get a perspective that's a bit outside the constant AI hype bubble. But these forecasts felt a bit less informative than they could otherwise be, because they hinge on edge cases around the definition. Curious whether you disagree about the importance of those edge cases, or think the other forecasting targets have important flaws of their own. 

I wonder if a degree of randomization would help. Instead of showing the top 10 posts on the front page, show a new sample of the top 50 to each user. Then the bonus given to new posts could shrink, and there would be more nudges to continue engaging with something over the course of a week or month.
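For concreteness, here's a rough sketch of the kind of sampling I have in mind (my own framing, not the Forum's actual ranking code; the function and the weighting are illustrative assumptions):

```python
import random

def frontpage_sample(posts, k=10, pool_size=50):
    """posts: list of (post_id, score) tuples, sorted descending by score.
    Returns k posts drawn from the top pool_size, weighted by score,
    so each user sees a slightly different slice of the strong posts."""
    pool = list(posts[:pool_size])
    # Shift weights so zero- or low-karma posts still have a small chance.
    weights = [max(score, 0) + 1 for _, score in pool]
    chosen = []
    for _ in range(min(k, len(pool))):
        idx = random.choices(range(len(pool)), weights=weights, k=1)[0]
        chosen.append(pool.pop(idx))
        weights.pop(idx)
    return chosen
```

The shrinking bonus for new posts could then just be folded into the weight term rather than hard-coded into the ranking.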

I spent about an hour today trying to convince a friend who works in private equity that OpenAI is undervalued at $30B. I pitched him on short AI timelines and transformative growth, and he didn’t disagree with those arguments directly. He mostly questioned whether OpenAI would reap the benefits of short timelines. A few of his points:

  • It’s a competitive industry with other players on par or not far behind. Google, Meta, and Anthropic are there already, and startups like Stability and Cohere could quickly close the gap. This is especially true if “scale is all you need”, rather than human capital or privately generated data.
  • The main opportunity is B2B, not B2C. Businesses are more cost-sensitive and more interested in cheaper alternatives than consumers, who gladly accept name brands.
  • Profits often lag research breakthroughs by years, even decades. There’s no billion-dollar app for GPT yet. Investors “don’t care about anything that’s more than 15 years away.”

IMO these are boring economic arguments that don’t refute the core thesis of short timelines or AI risk. OpenAI is getting a similar valuation to Grammarly, which also sells an LLM product, but with worse tech and better marketing. It’s being valued on short-term revenue prospects more than on considerations about TAI timelines.

Answer by aogara · Jan 08, 2023

Also ML Safety Scholars: https://course.mlsafety.org/

And probably a course in deep learning where you write code in PyTorch.
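(For anyone unsure what that involves, here's a minimal, purely illustrative example of the kind of PyTorch code such a course has you write: a tiny training loop on toy data, not anything from a particular syllabus.)

```python
import torch
import torch.nn as nn

# Toy regression: fit a small network to random data.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x, y = torch.randn(100, 10), torch.randn(100, 1)
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(loss.item())
```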

That’s a good argument, I think I agree.

What kinds of amendments to lobbying disclosure laws could be made? Is it practical to require disclosure of LLM use in lobbying when detection is not yet reliable? Is disclosure even enough, or would it be necessary to ban LLM lobbying entirely? I assume this would need to be a new law passed by Congress rather than an FEC rule. Do you know whether similar legislation is or has been under consideration?

Thanks for sharing. I have a friend who's in the Marines and loves his animal meat, but he found this funny, and it persuaded him that lobsters can feel pain. 

Very interesting stuff. I'd be wary of a Streisand effect: calling attention to the danger of AI-powered corporate lobbying might prompt someone to build AI for corporate lobbying. Your third section clearly explains the risks of such a plan, but it might not be heeded by those excited about AI lobbying. 
