All of Andrea_Miotti's Comments + Replies

Thanks, great to hear you found it useful!

As you mention, the export controls are aimed at, and have the primary effect of, differentially slowing down a specific country's AI development, rather than AGI development overall.

This has a few relevant side effects, such as reduced proliferation and competition, but doesn't slow down the frontier of overall AGI development (nor does it aim to do so).

 

Violet Hour · 1y
Hm, I still feel as though Sanjay’s example cuts against your point somewhat. For instance, you mentioned encountering the following response:

> To the extent that regulations slow down potential AGI competitors in China, I’d expect stronger incentives towards safety, and a correspondingly lower chance of encountering potentially dangerous capabilities races.

So, even if export bans don’t directly slow down the frontier of AI development, it seems plausible that such bans could indirectly do so (by weakening the incentives to sacrifice safety for capabilities development). Your post + comment suggests that you nevertheless expect such regulation to have ~0 effect on AGI development races, although I’m unsure which parts of your model are driving that conclusion. I can imagine a couple of alternative pictures, with potentially different policy implications.

* Your model could involve potential participants in AGI development races viewing themselves primarily in competition with other (e.g.) US firms. This, combined with short timelines, could lead you to expect the export ban to have ~0 effect on capabilities development.
  * On this view, you would be skeptical about the usefulness of the export ban on the basis of skepticism about China developing AGI (given your timelines), while potentially being optimistic about the counterfactual value of domestic regulation relating to chip production.
  * If this is your model, I might start to wonder “Could the chip export ban affect the regulatory Overton Window, and increase the chance of domestic chip controls?”, in a way that makes the Chinese export ban potentially indirectly helpful for slowing down AGI.
  * To be clear, I'm not saying the answer to my question above is "yes", only that this is one example of a question that I'd have on one reading of your model, which I wouldn't have on other readings.
* Alternatively, your model might instead be skeptical about the importance of compute, and consequently sk

Thanks a lot, and good point - edited to include full names and links!

For more information on the current funding situation, here are OpenPhil's latest update, indicating that their assets are down ~40%, limiting growth in their commitments compared to previous projections, and GiveWell's latest funding projection update, indicating that they "don’t expect to have enough funding to support all the cost-effective opportunities [they] find".

Thank you very much for sharing this honest account of your experience!

Failure is an inherent byproduct of taking more risks, and it's really hard to write openly about career near-misses like this one, as failing (really) hurts. 

However, accounts of failures, near-misses, and missed opportunities are extremely valuable for all of us to learn and move forward, especially as we embark on more ambitious high-risk, high-reward projects.

This post is especially valuable as it also sheds more light on paths towards high-impact political roles, so thanks again... (read more)

Completely agree! Although I imagine that the situation will change soon due to 1) the last funding decisions being finalized, 2) funded projects coming out of stealth mode, 3) more rejected applicants posting their applications publicly (when there are few downsides to doing so), and 4) the Future Fund publishing a progress report in the next few months.

So I expect the non-disclosure issue to be significantly reduced in the next few months.

Thanks for sharing the presentation, great work!

Regarding the third question from the audience, "What kind of resource could we share to a random person on the street if we want to introduce them to AI x-risk?", in addition to the resources you mention I think Stuart Russell's 2021 BBC Reith Lecture series, "Living with Artificial Intelligence", is an excellent introduction for a general audience.

In addition to being accessible, the talks carry the institutional gravitas of a prestigious BBC lecture series delivered by an established academic,... (read more)

I think the marginal value of a pre-order of What We Owe the Future is much higher than that of a pre-order of Gates's book, since the latter has a much higher baseline probability of ending up a bestseller and receiving significant press coverage thanks to Gates's fame.

As usual, it would be great to see downvotes accompanied by reasons for downvoting, especially in the case of NegativeNuno's comments, since it's an account literally created to provide frank criticism with a clear disclaimer in its bio.

Thanks for this writeup! I definitely never thought about snakebites as a major issue before, despite their similarity to "obvious" global health issues like malaria.

This issue is gaining the attention of EU policymakers, including MEPs.

On April 20, an MEP from the Greens/EFA political group tabled a parliamentary question on the issue, citing recent research reviews to note that high-welfare octopus farming is impossible.

He asks whether the European Commission can "confirm the incompatibility of commercial octopus farming investments with the ‘do no significant harm’ principle, which underpins the EU’s sustainable finance policies and is the basis for EU taxonomy".

The Fabian Society even went ahead with one of the megaprojects currently being discussed in EA: founding a new university.

In 1894, Fabian Society members Beatrice and Sidney Webb, Graham Wallas, and George Bernard Shaw established the London School of Economics and Political Science to improve social science education and address what they saw as the world's most pressing problems of the time.

Thank you very much for writing this! It's very timely advice for many people, as new organizations are being (or will soon be) set up thanks to the funding decisions of the Future Fund (see, e.g., another comment on this post).

I also couldn't find much information on campus recruitment expenses for top firms. However, according to the US National Association of Colleges and Employers (NACE), in 2018 the average cost-per-hire from US universities was $6,110.

FAANG and other top-tier employers are likely to spend much more than the average.

The fact that current (highly engaged) EAs mostly come from well-off backgrounds can also be a good argument in favor of more funding for career building for students and recent graduates, though.

EAs from less-affluent backgrounds are the ones who benefit most from career building and exploration funding, as they are the people most likely to face financial (and other) bottlenecks that prevent them from doing impactful stuff.

Reducing career building funding will just reinforce the trend of only well-off EAs, who can afford to take risks, staying engaged, while ... (read more)

freedomandutility · 2y
Yes, I agree with you with regards to the amount of funding - one EA initiative I’d actually like to see is funding EA students from LMICs to go to the world’s best universities.

And yes, my idea is more about fine-tuning the funding to go to people where the counterfactual impact is higher (another plus would be that less EA money is used up by wealthier people, freeing it up for less wealthy people).

I think means-testing is fairly widely used (at least in the UK). I use it myself to selectively distribute products from my social enterprise towards kids from lower-income backgrounds. I’m fairly confident that the downsides of means-testing - weird incentives, people trying to “game” the system, and the indignity it makes some people feel - generally don’t outweigh the benefits of the better targeting of funding.

And in the EA context, I think the benefits of better targeting funding will be larger than usual because of the cost-effectiveness with which the saved EA money will be spent.

This may be counterproductive, as many projects we would like to see funded face economic barriers to entry.

E.g., if starting any effective new advocacy org requires at minimum a 0.5 FTE salary of X and initial legal costs of Y, for a total of X+Y=Z, then funding some people 20% below Z won't lead to a 20%-less-developed advocacy org, but to no advocacy org at all.
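A toy sketch of this threshold effect (the `org_output` function and the figure for Z are invented purely for illustration, not real cost estimates):

```python
# Toy model (hypothetical numbers): an advocacy org with fixed costs Z
# produces nothing at all unless its funding covers at least Z.
def org_output(funding: float, fixed_costs: float) -> float:
    """Return resources left for actual work; zero below the fixed-cost threshold."""
    if funding < fixed_costs:
        return 0.0  # can't cover the salary (X) + legal costs (Y): no org at all
    return funding - fixed_costs

Z = 100_000  # hypothetical total: 0.5 FTE salary (X) + initial legal costs (Y)
print(org_output(80_000, Z))   # 20% below Z -> 0.0, not "80% of an org"
print(org_output(120_000, Z))  # above Z -> 20000 left over for programs
```

The point is just that output is discontinuous at Z: below the threshold, extra dollars buy nothing.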

Fixed costs also vary across projects, and only providing initial funding below a certain threshold could lead to certain high-value but high-fixed-cost projects being de-prioritized compared to low-fixed-cost, lower-value ones.

Thanks for this, it's great to have all of these resources in one place!