
Ferenc Huszár

162 karma · Joined May 2022 · Cambridge, UK
inference.vc/about

Bio

Assistant Professor in AI at Cambridge University

Comments (10)

Rereading the 'Judicious Ambition' post from not so long ago is interesting:

"In 2013, it made sense for us to work in a poorly-lit basement, eating baguettes and hummus. Now it doesn’t. Frugality is now comparatively less valuable."

So, I guess, bring the hummus back?

Jokes aside, an explosion in funding changed EA from a 'hedge fund for charity' into a 'VC for charity'. This analogy goes a long way toward explaining shifts in attitude, decisions, and exuberance. So perhaps going back to hedge-fundiness, and shifting the focus from 'company builders' building the next big thing back to less scalable but cost-effective operations, is a good direction?

On your second point, about overestimating stagnation, I also had a few issues:

Understating the effect of AI (not AGI):

This section does not acknowledge that AI (narrow AI, not AGI or superintelligence) is likely to be a significant productivity booster for innovation.

  • AlphaFold is not AGI and won’t cause any catastrophes, but it will likely contribute to the productivity of researchers in a variety of fields.
  • AI enables better imaging and control, possibly allowing breakthroughs in plasma control (and thus harnessing fusion), as well as breakthroughs in areas where traditional control is just not good enough.
  • A language model focussed on formal mathematics might speed up all sorts of mathematical research.

It’s kind of pointless to wait for an “aligned AGI” to speed up scientific progress. We already have very powerful specialised (and thus aligned) tools for that.

Economic growth vs. development of technological capabilities:

To a large extent, economic growth is driven by consumption, not necessarily by technological or scientific progress that is useful for humanity. The type of innovation that drives the economy is about new flavours of candies, how to sell more candies, how to advertise more candies, and how to build apps where candies can be advertised more.

Thus, when people say we might have to slow down economic growth as part of solving some problems, they typically mean halting the pointless growth in some consumer product sales, not halting scientific progress and causing intellectual stagnation.

Another problem with the differential development argument is that even if you buy that “alignment can be solved”, it’s not like a vaccine you can apply to all AI so that it all suddenly turns beneficial. Other people, companies, and nations will surely continue to train and deploy AI models, and why would they all apply your alignment principles or tools?

I have heard two arguments in response to this concern: (1) that the first aligned AGI will kill off all other forms of AGI and make all AI-related problems go away, and (2) that there are more good people than bad people in the world, so once techniques for alignment become available, everyone will naturally adopt them. Both of these seem like fairy tales to me.

In other words, the premise that any amount of AI capabilities research is OK so long as we “solve alignment” has serious issues, and you don’t even have to believe in AGI for this to bother you.

On the "Worries about bias towards AI and lack of AI expertise" section, can't you also make the argument that everyone finds AI cool, experts and novices alike?

AI novices find AI cool too, and now there is finally a way for them to get into an AI career and associate with a cool community full of funding opportunities, even for novices.

I'm surprised by your reason for being skeptical of AI novices on the grounds that they don't know enough to be worried. Take a "novice" who has read all the x-risk books, forum posts and podcasts versus an AI expert who has worked on ML for 15 years. It's possible that they know the same amount about AI x-risk mitigation, and would perhaps have a similar success rate working on some alignment research (which to a large extent involves GPT-3 prompt hacking with near-zero maths).

What's more, an AI novice might be better off than an AI expert. They might find it easier to navigate the funding landscape, have more time (and a smaller opportunity cost) to go to all the EA events, and are less likely to argue critically all the time, and thus may have better opportunities to get involved in grantmaking, or perhaps get smaller grants themselves. Imagine that two groups wanted to organise an AI camp or event: a group of AI-novice undergrads who have been engaged in EA versus a group of AI profs with no EA connections. Who is more likely to get funding?

EA-funded AI safety is actually a pretty sweet deal for an AI novice who gets to do something that's cool at very little cost.

Consequently, it's possible to be skeptical of the motivations of anyone in AI safety, expert or novice, on the grounds of "isn't it convenient that the best way to save the world is to do cool AI stuff?"



Add this to the list?

‘Building massive mega language models is a good way of increasing AI capabilities, and it’s also the best thing for AI alignment and safety’

A few thoughts:

I think that while there may be no competing movements with the community aspect of EA, there are lots of individuals (and orgs) out there who do charitable giving in an impact-driven, rational way, or who take well-paid positions with a view to using the income for good, without branding it earning-to-give. Some might do this quietly. Some of these individuals might well agree with core EA ideas, and may have learnt from books like Doing Good Better. You can do all of this without being a movement. If a critic thinks EA is a cult, why would they respond by forming a competing cult?

EA has also changed over time; it looks very different today than it did 5 years ago. It may be a good exercise to look at whether the criticisms people formulate of EA today would also have applied to EA 5 years ago. A good Alt-EA movement might look like whatever EA was before longtermism and AI x-risk seemingly overpowered other areas of concern. How would the 2017 EA movement compete with the 2022 EA movement?

Thirdly, it’s pretty difficult to compete since EA hit the jackpot. In areas like hiring talent or funding students, there are limited resources that communities or cause areas compete over. If the EA community has this much more money, it sucks the air from adjacent areas like near-term AI safety or AI ethics. Why would you work on the alignment of not-superintelligent but widely deployed ML if you can make three times as much training cool large language models next door? And for studentship funding, being EA-aligned will make an enormous difference to your funding prospects compared to other students who might work on the same thing but don’t go to EA Global each year. I think this is where a lot of frustration originates.

Finally, it’s very common to point out that EA is open to good-faith criticism. There is indeed often very polite and thoughtful engagement on this forum, but I am not sure how easy it is to actually make people update their pre-existing beliefs on specific points.

Thanks for the response and for being open to improving your process; I agree with many of your points about the importance of scaling teams cautiously.

A friendly hello from your local persuasion-resistant, moderately EA-skeptical hole-picker :)

I'd like to challenge this. There are simultaneous claims that:

  1. It's impossible to give constructive feedback on thousands of applications.
  2. It is possible to effectively (in an expected-value sense) allocate $100m–$1b a year using this process, which evaluates thousands of applications from a broad range of applicants, covering a broad spectrum of ideas, over just a two-week period.

I don't think both can be true in the long run. As others in the comments suggested, both may be a question of further investment in, and improvement of, the process. There is a lot of room for improvement: any feedback is better than no feedback, and it doesn't have to be super constructive. Just knowing whether anyone even spent more than a minute looking at your application is useful information that applicants currently don't have.

Wanting to be constructive: would there be arguments against hiring an extra person whose job is to observe the decision-making process (I assume there is some kind of internal log of decisions/opinions) and formulate non-zero feedback on applications?

Since your take-away is about undercommunication, please consider the tremendous value you could create by revising the "no feedback on rejected proposals" approach.

Rational case: You clearly generate a lot of useful insight on projects in the review process you described here, and are in a superb position to guide applicants towards value creation. You may identify weaknesses, red flags, strengths, or alternative opportunities which the applicant might not realise. With a relatively small investment on your side, you could share constructive feedback with rejected applicants, in turn creating a lot of downstream value at low actual cost. A case can be made that it would be rational to hire an additional full-time person (it doesn't have to be an EA superstar) whose only job is to extract constructive feedback from the insights generated throughout the process.

Human, community-building case: You did say no feedback would be given, so one doesn't expect any. Even so, when one receives a response and finds that it contains nothing one can use to improve, or even simply disagree with, it very strongly, and unnecessarily, contributes to the feeling of resentment mentioned in Will MacAskill's recent post: https://forum.effectivealtruism.org/posts/cfdnJ3sDbCSkShiSZ/ea-and-the-current-funding-situation
