
Habryka

CEO @ Lightcone Infrastructure
21710 karma · Working (6-15 years)

Bio

Head of Lightcone Infrastructure. Wrote the forum software that the EA Forum is based on. I often help the EA Forum team with various site issues. If something is broken on the site, there's a good chance it's my fault (sorry!).

Comments: 1397 · Topic contributions: 1

I do actually have trouble finding a good place to link to. I'll try to dig one up in the next few days.

You cannot spend the money you obtain from a loan without losing the means to pay it back. You can borrow a little against your future labor income, but since the normal recourse there is to declare personal bankruptcy, lenders have little assurance of repayment.

(This has been discussed many dozens of times on both the EA Forum and LessWrong. There exist no loan structures as far as I know that allow you to substantially benefit from predicting doom.)

Most concrete progress on worst-case AI risks — e.g. arguably the AISIs network, the draft GPAI code of practice for the EU AI Act, company RSPs, the chip and SME export controls, or some lines of technical safety work

My best guess (though very much not a confident guess) is that the aggregate of these efforts is net-negative, and I think that is correlated with the work having happened in backrooms, often in contexts where people were unable to talk about their honest motivations. It sure is really hard to tell, but I really want people to consider the hypothesis that a bunch of these behind-the-scenes policy efforts have been backfiring, especially ex post with a more Republican administration.

The chip and SME export controls seem to currently be one of the drivers of the escalating U.S.-China arms race, the RSPs are, I think, largely ineffectual and have slowed the speed at which we could get regulation that is not reliant on lab supervision, and the overall EU AI Act seems very bad, though the effect of the marginal help with drafting is of course much harder to estimate.

Missing from this list: the executive order, which I think has retrospectively revealed itself as a major driver of the polarization of AI-risk concerns, by strongly conflating near-term risks with extinction risks. It did also do a lot of great stuff, though my best guess is we'll overall regret it (but here I feel the least confident).

I agree that a ton of concrete political implementation work needs to be done, but I think the people working in the space who have chosen to do that work in a way that doesn't actually engage in public discourse have made mistakes, and this has had large negative externalities. 

See also: https://www.commerce.senate.gov/services/files/55267EFF-11A8-4BD6-BE1E-61452A3C48E3

Again, I am really not confident here, and I agree that there is a lot of implementation work to be done that is not glorious and flashy, but I think the way a bunch of it has been done, in a kind of conspiratorial and secretive fashion, has been counterproductive.[1]

Ultimately, as you say, the bottleneck for things happening is political will and buy-in that AI systems pose a serious existential risk, and I think that means a lot of implementation and backroom work is blocked and bottlenecked on that public argumentation happening. And when people try to push forward anyway, they often end up forced to conflate existential risk with highly politicized short-term issues that aren't very correlated with the actual risks, and this backfires when the political winds change and people update.

  1. ^

That's their... headline result?  "We do not find, however, any evidence for a systematic link between the scale of refugee immigration (and neither the type of refugee accommodation or refugee sex ratios) and the risk of Germans to become victims of a crime in which refugees are suspects" (pg. 3), "refugee inflows do not exert a statistically significant effect on the crime rate" (pg. 21), "we found no impact on the overall likelihood of Germans to be victimized in a crime" (pg. 31), "our results hence do not support the view that Germans were victimized in greater numbers by refugees" (pg. 34).

I haven't read their paper, but the chart sure seems to establish a clear correlation. Also, the quotes you cite seem to be saying something else: they claim that "greater inflow was not correlated with greater crime", which is different from "refugees were not particularly likely to commit crimes against Germans". Indeed, at least on a quick skim of the data that Larks linked, that latter statement seems clearly false (though it might still be true that greater immigration inflow is not necessarily correlated with greater crime, since it might lower crime in other ways; my best guess, though, is that this framing was chosen as the result of a garden-of-forking-paths methodology).

One reasonable compromise model between these two perspectives is to tie the discount rate to the predicted amount of change that will happen at a given point of time. This could lead to a continuously increasing discounting rate for years that lead up to and include AGI, but then eventually a falling discounting rate for later years as technological progress becomes relatively saturated

Yeah, this is roughly the kind of thing I would suggest if one wants to stay within the discount rate framework.

I think fixed discount rates (i.e. a discount rate where every year, no matter how far away, reduces the weighting by the same fraction) of any amount seem pretty obviously crazy to me as a model of the future. We use discount rates as a proxy for things like "predictability of the future" and "constraining our plans towards worlds we can influence", which often makes sense, but I think even very simple thought experiments produce obviously insane conclusions if you use practically any non-zero fixed discount rate in situations where it comes apart from those proxies (as is virtually guaranteed to happen in the long-run future).
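As a rough numerical illustration of both points (the numbers and the change-tied schedule below are purely hypothetical assumptions of mine, sketched in Python; they are not from the report or the comment quoted above):

```python
# Sketch: how a fixed annual discount rate weights far-future years,
# compared to a schedule tied to a (hypothetical) predicted rate of change.
# All numbers are illustrative assumptions.

def fixed_weight(rate, years):
    """Weight placed on a year `years` in the future under a fixed annual discount rate."""
    return (1 - rate) ** years

def change_tied_weight(change_rates):
    """Weight after discounting each year by that year's predicted 'rate of change'."""
    w = 1.0
    for r in change_rates:
        w *= (1 - r)
    return w

# A fixed 1%/year discount rate over 10,000 years:
print(fixed_weight(0.01, 10_000))      # ~2e-44: the far future is weighted at essentially nothing

# A change-tied schedule: heavy discounting during a century of rapid (AGI-era) change,
# then a near-zero rate once change saturates. Weights stop collapsing after the transition.
schedule = [0.02] * 100 + [0.0001] * 9_900
print(change_tied_weight(schedule))    # ~0.05: far-future years retain non-trivial weight
```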

See also my comment here: https://forum.effectivealtruism.org/posts/PArvxhBaZJrGAuhZp/report-on-the-desirability-of-science-given-new-biotech?commentId=rsqwSR6h5XPY8EPiT 

This report seems to assume exponential discount rates for the future when modeling extinction risk. This seems to lead to extreme and seemingly immoral conclusions when applied to decisions that previous generations of humans faced. 

I think exponential discount rates can make sense in short-term economic modeling, and can be a proxy for various forms of hard-to-model uncertainty and the death of individual participants in an economic system, but applying even mild economic discount rates very quickly implies pursuing policies that act with extreme disregard for any future civilizations and future humans (and as such overdetermine the results of any analysis about the long-run future). 

The report says: 

However, for this equation to equal to 432W, we would require merely that ρ = 0.99526. In other words, we would need to discount utility flows like our own at 0.47% per year, to value such a future at 432 population years. This is higher than Davidson (2022), though still lower than the lowest rate recommended in Circular A-4. It suggests conservative, but not unheard of, valuations of the distant future would be necessary to prefer pausing science, if extinction imperiled our existence at rates implied by domain expert estimates.

At this discount rate, you would value a civilization living 10,000 years in the future, which is something that past human decisions did influence, at less than one billion-billionth of the value of that same civilization at the time. By this logic, ancestral humans should have taken a trade in which they got a slightly better meal, or a single person lived a single additional second (or anything else that improved the lives of a single person by more than a billionth of a percent), at the cost of present civilization completely failing to come into existence.
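(As a quick check of that arithmetic, here is the discount factor the report's ρ implies at a 10,000-year horizon; this is just my own verification in Python, not a calculation from the report.)

```python
# Discount factor implied by the report's rho = 0.99526 at a 10,000-year horizon.
rho = 0.99526
factor = rho ** 10_000
print(factor)           # ~2.3e-21
print(factor < 1e-18)   # True: less than one billion-billionth
```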

This seems like a pretty strong reductio ad absurdum, so I have trouble taking the recommendations of the report seriously. From an extinction-risk perspective, it seems that if you buy exponential discount rates as aggressive as 1%, you are basically committed to not caring about future humans in any substantial way. It also seems to me that various thought experiments (like the above one, of an ancestral human deciding between dealing with the annoyance of stepping over a stone and causing the destruction of our entire civilization) demonstrate that such discount rates almost inevitably recommend actions that are strongly in conflict with common-sense notions of treating future generations with respect.

Maybe I missed this somewhere in the article, but what is the total annual spending of EA Germany?

I don't think something as strong as this, but I did think at the time that the work on export controls was bad and likely to exacerbate arms race dynamics, and continue to believe this (and the celebration of export controls as a great success of the EA policy efforts was one of the things that caused me to update on future EA-driven AI policy efforts probably being net harmful, though FTX played a bigger role).

Open Phil’s funding interests and priorities and constraints have drastically changed in the last year or two. I agree they funded many things like this in the past.
