All of MarkusAnderljung's Comments + Replies

Semafor reporting confirms your view. They say Musk promised $1bn and gave $100mn before pulling out. 

Was there a $1bn commitment attributed to Musk? The OpenAI Wikipedia article says: "The organization was founded in San Francisco in 2015 by Sam Altman, Reid Hoffman, Jessica Livingston, Elon Musk, Ilya Sutskever, Peter Thiel and others,[8][1][9] who collectively pledged US$1 billion."

4
CarlShulman
1y
Well, Musk was the richest of them and notably pulled out, and the money seems mostly not to have materialized. I haven't seen a public breakdown of the commitments those sorts of statements were based on.

I suspect that it wouldn't be that hard to train models at datacenters outside of CA (my guess is this is already done to a decent extent today: 1/12 of Google's US datacenters are in CA, according to Wikipedia), and that models are therefore a pretty elastic regulatory target.

Data as a regulatory target is interesting, in particular if it transfers ownership or power over the data to data subjects in the relevant jurisdiction. That might e.g. make it possible for CA citizens to lodge complaints about potentially risky models being trained on data they've prod... (read more)

Yeah, I'm really bullish on data privacy being an effective hook for realistic AI regulation, especially in CA. I think that, if done right, it could be the best option for producing a CA effect for AI. That'll be a section of my report :)

Funnily enough, I'm talking to state legislators from NY and IL next week (each for a different reason, both for reasons completely unrelated to my project). I'll bring this up.

Thanks!

That sounds like really interesting work. Would love to learn more about it. 

"but also because a disproportionate amount of cutting-edge AI work (Google, Meta, OpenAI, etc) is happening in California." Do you have a take on the mechanism by which this leads to CA regulation being more important? I ask because I expect most regulation in the next few years to focus on what AI systems can be used in what jurisdictions, rather than what kinds of systems can be produced. Is the idea that you could start putting in place regulation that applies to s... (read more)

Just as a caveat, this is me speculating and isn't really what I've been looking into (my past few months have been more "would it produce regulatory diffusion if CA did this?"). With that said, the location in which the product is being produced doesn't really affect whether regulating that product produces regulatory diffusion -- Anu Bradford's criteria are market size, regulatory capacity, stringent standards, inelastic targets, and non-divisibility of production. I haven't seriously looked into it, but I think that, even if all US AI research magically... (read more)

We've already started to do more of this. Since May, we've responded to three RFIs and similar requests (you can find them here: https://www.governance.ai/research): the NIST AI Risk Management Framework; the US National AI Research Resource interim report; and the UK Compute Review. We're likely to respond to the AI regulation policy paper, though we've already provided input to this process via Jonas Schuett and me being on loan to the Brexit Opportunities Unit to think about these topics for a few months this spring.

I think we'll struggle to build expertise in all of these areas, but we're likely to add more of it over time and to build networks that allow us to provide input in these other areas should we find doing so promising.

"I'd suggest being discerning with this list"

Definitely agree with this! 

One thing you can do is collect some demographic variables on non-respondents and see whether there is self-selection bias on those. You could then try to see if the variables that see self-selection correlate with certain answers. Baobao Zhang and Noemi Dreksler did some of this work for the 2019 survey (found in D1/page 32 here: https://arxiv.org/pdf/2206.04132.pdf ). 
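
For concreteness, here is a minimal sketch of the kind of check described above, assuming hypothetical file and column names (pandas and SciPy): compare the demographic make-up of respondents and non-respondents, then see whether a demographic that shows selection also correlates with a key answer.

```python
# Minimal sketch of a non-response bias check (hypothetical data and column names).
import pandas as pd
from scipy import stats

frame = pd.read_csv("sample_frame.csv")    # everyone invited, with demographics + a 'responded' flag
responses = pd.read_csv("responses.csv")   # respondents only, with demographics + survey answers

# 1. Does the respondent pool differ demographically from the full sample frame?
for col in ["region", "seniority", "subfield"]:
    table = pd.crosstab(frame[col], frame["responded"])
    chi2, p, _, _ = stats.chi2_contingency(table)
    print(f"{col}: chi2={chi2:.1f}, p={p:.3f}")

# 2. For demographics that do show selection, check whether they predict a key answer.
answered = responses.dropna(subset=["hlmi_year", "seniority_years"])
rho, p = stats.spearmanr(answered["seniority_years"], answered["hlmi_year"])
print(f"seniority vs HLMI-year answer: rho={rho:.2f}, p={p:.3f}")
```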

3
Zach Stein-Perlman
2y
Ah, yes, sorry I was unclear; I claim there's no good way to determine bias from the MIRI logo in particular (or the Oxford logo, or various word choices in the survey email, etc.).

Really excited to see this! 

I noticed the survey featured the MIRI logo fairly prominently. Is there a way to tell whether that caused some self-selection bias? 

In the post, you say "Zhang et al ran a followup survey in 2019 (published in 2022); however they reworded or altered many questions, including the definitions of HLMI, so much of their data is not directly comparable to that of the 2016 or 2022 surveys, especially in light of large potential for framing effects observed." Just to make sure you haven't missed this: we had the 2016 respond... (read more)

3
Zach Stein-Perlman
2y
1. I don't think we have data on selection bias (and I can't think of a good way to measure this). 2. Yes, the 2019 survey's matched-panel data is certainly comparable, but some other responses may not be comparable (in contrast to our 2022 survey, where we asked the old questions to a mostly-new set of humans).

Hi Lexley, Good question. Kirsten's suggestions are all great. To that, I'd add: 

  • Try to work as a research assistant to someone who you think is doing interesting work. More so than other roles, RA roles are quite often not advertised and are set up on a more ad hoc basis. Perhaps the best route in is to read someone's work and
  • Another thing you could do is to independently take a stab at some important-seeming question. You could e.g. pick a research question hinted at in a paper/piece (some have a section specifically with sugge
... (read more)
3
MichaelA
2y
Yeah, I update that whenever I learn of a new relevant collection of research questions. That said, fwiw, I'd generally recommend that people interested in getting into research in some area:
  • Focus mostly on things like applying to jobs, expressing interest in working with some mentor, or applying to research training programs like the ERIs.
  • See independent research as only (a) a "cheap test of fit" that you spend a few days on on weekends and such, rather than a few months on, or (b) a backup option if applying to lots of roles isn't yet working out, or a thing you do while waiting to hear back.
Some people/situations would be exceptions to that general advice, but generally I think having more structure, mentorship, feedback, etc. is better.

One other option: My AI Governance and Strategy team at Rethink Priorities offers 3-5 month fellowships and permanent research assistant roles, either of which can be done at anywhere from 20h/w to 40h/w depending on the candidate's preference. We hire almost entirely based on performance in our work tests & interviews rather than on credentials/experience (though of course experience often helps people succeed in our work tests & interviews), and have sometimes hired people during or right after undergrad degrees.

We aren't currently actively h... (read more)

2
Lexley Villasis
2y
Thank you so much for taking the time to reply! There are so many available resources, and most advice doesn't seem to be aimed at people at my current career level, so these are really helpful in nudging me in the right direction :D

Thanks Jeffrey! I hope we're a community where it doesn't matter so much whether you think we suck. If you think the EA community should engage more with nuclear security issues and should do so in different ways, I'm sure people would love to hear it. I would! Especially if you'd help answer questions like: How much can work on nuclear security reduce existential risk? What kind of nuclear security work is most important from an x-risk perspective?

I'd love to hear more about what your concerns and criticisms are. For example, I'd love to know: Is the Scob... (read more)

All things being equal, I'd recommend you publish in journals that are prestigious in your particular field (though it might not be worth the effort). In international relations / political science (which I know best) that might be e.g.: International Organization, International Security, American Journal of Political Science, PNAS.

Other journals that are less prestigious but more likely to be keen on AI governance work include: Nature Machine Intelligence, Global Policy, Journal of AI Research, AI & Society. There are also a number of conferences to c... (read more)

strong +1 to everything Markus suggests here.

Other journals (depending on the field) could include Journal of Strategic Studies, Contemporary Security Policy, Yale Journal of Law & Technology, Minds & Machines, AI & Ethics, 'Law, Innovation and Technology', Science and Engineering Ethics, Foresight, ...

As Markus mentions, there are also sometimes good disciplinary journals that have special issue collections on technology -- those can be opportunities to get it into high-profile journals even if they are usually more averse to tech-focused pieces (e.g. I got a piece into Melbourne Journal of International Law); though it really depends on what audiences you're trying to reach / position your work for.

3
Caro
2y
Thanks so much, this is very helpful!

Overall, I think it's not that surprising that this change is being proposed, and I think it's fairly reasonable. However, I do think it should be complemented with duties to avoid e.g. AI systems being put to high-risk uses without going through a conformity assessment, and that it should be made clear that certain parts of the conformity assessment will require changes on the part of the producer of a general system if that system is used to produce a system for a high-risk use.

In more detail, my view is that the following changes should be made: Goal 1: Avoid g... (read more)

We've now relaunched. We wrote up our current principles with regards to conflicts of interest and governance here: https://www.governance.ai/legal/conflict-of-interest. I'd be curious if folks have thoughts, in particular @ofer.

2
Ofer
2y
This is great! I hope GovAI will maintain this transparency about its funding sources, and publish a policy to that effect. I think it would be beneficial to have a policy that prevents such funding in the future as well. (There could be conflict of interest issues due to the mere possibility of receiving future funding from certain companies.) (Also, I take it that "private" here means private sector; i.e. this statement applies to public companies as well?) Great, this seems super important! Maybe there should be a policy that allows funding from a non-EA source only if all the board members approve it. In many potential future situations it won't be obvious whether certain funding might compromise the independence or accuracy of GovAI's work; and one's judgment about it will be subjective and could easily be influenced by biases (and it could be very tempting to accept the funding).

Thanks for the post! I was interested in what the difference between "Semiconductor industry amortize their R&D cost due to slower improvements" and "Sale price amortization when improvements are slower" is. Would the decrease in price stem from the decrease in cost as companies no longer need to spend as much on R&D?

2
lennart
3y
For "Semiconductor industry amortize their R&D cost due to slower improvements" the decreased price comes from the longer innovation cycles, so the R&D investments spread out over a longer time period. Competition should then drive the price down. While in contrast "Sale price amortization when improvements are slower" describes the idea that the sale price within the company will be amortized over a longer time period given that obsolescence will be achieved later. Those ideas stem from Cotra's appendices: "Room for improvements to silicon chips in the medium term".

Thanks! What happens to your doubling times if you exclude the outliers from efficient ML models?

2
lennart
3y
The described doubling time of 6.2 months is the result when the outliers are excluded. If one includes all our models, the doubling time was around 7 months. However, the number of efficient ML models was only one or two.
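
For reference, a minimal sketch (with made-up data points, not the dataset discussed here) of how a doubling time like this is typically estimated: regress log2(training compute) on publication date; the doubling time is the reciprocal of the slope.

```python
# Minimal sketch (made-up data): estimate a compute doubling time via a log-linear fit.
import numpy as np

months = np.array([0, 6, 14, 23, 30, 41], dtype=float)       # months since first model (hypothetical)
flops = np.array([1e20, 2.5e20, 1.1e21, 3e21, 9e21, 4e22])   # training compute in FLOP (hypothetical)

slope, intercept = np.polyfit(months, np.log2(flops), 1)  # slope = doublings per month
doubling_time_months = 1.0 / slope
print(f"Estimated doubling time: {doubling_time_months:.1f} months")
```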

I really appreciated the extension on "AI and Compute". Do you have a sense of the extent to which the difference between your doubling-time estimate and theirs stems from differences in selection criteria vs new data since its publication in 2018? Have you done analysis on what the trend looks like if you only include data points that fulfil their inclusion criteria?

For reference, it seems like their criteria is "... results that are relatively well known, used a lot of compute for their time, and gave enough information to estimate the compute used." ... (read more)

2
lennart
3y
I have been wondering the same. However, given that OpenAI's "AI and Compute" inclusion criteria are also a bit vague, I'm having a hard time saying which of our data points would fulfill their criteria. In general, I would describe our dataset as matching the same criteria because:
1. "relatively well known" equals our "lots of citations".
2. "used a lot of compute for their time" equals our dataset if we exclude outliers from efficient ML models.
  • There's a recent trend in efficient ML models that achieve similar performance by using less compute for inference and training (those models are then used for e.g., deployment on embedded systems or smartphones).
3. "gave enough information to estimate the compute": We also rely on estimates from us or the community based on the information available in the paper. For a source of the estimate, see the note on the cell in our dataset.
  • We're working on gathering more compute data by directly asking researchers (next target: n=100).
I'd be interested in discussing more precise inclusion criteria. As I say in the post:

Thanks for this! I really look forward to seeing the rest of the sequence, especially on the governance bits.

2
Eli Rose
3y
Ah, glad this seems valuable! : )

Thanks for the question. I agree that managing these kinds of issues is important and we aim to do so appropriately.

GovAI will continue to do research on regulation. To date, most of our work has been fairly foundational, though the past 1-2 years have seen an increase in research that may provide some fairly concrete advice to policymakers. This is primarily because the field is maturing, policymakers are increasingly seeking to put AI regulation in place, and some folks at GovAI have had an interest in pursuing more policy-relevant work.

My view is that most... (read more)

3
MarkusAnderljung
2y
We've now relaunched. We wrote up our current principles with regards to conflicts of interest and governance here: https://www.governance.ai/legal/conflict-of-interest. I'd be curious if folks have thoughts, in particular @ofer.

FWIW, I agree that for some lines of work you might want to do, managing conflicts of interest is very important, and I'm glad you're thinking about how to do this.

Thanks! I agree that using a term like "socially beneficial" might be better. On the other hand, it might be helpful to couch self-governance proposals in terms of corporate social responsibility, as it is a term already in wide use. 

2
MaxRa
3y
Yeah. What I thought is that one might want to somehow use a term that also emphasizes the potentially transformative impact AI companies will have, as in "We think your AI research might fit into the reference class of the Manhattan Project". And "socially beneficial" doesn't really capture this either for me. Maybe something in the direction of "risk-aware", "risk-sensitive", "farsighted", "robustly beneficial", "socially cautious"… Edit: Just stumbled upon the word "stewardship" in the most recent EconTalk episode, from a lecturer wanting to kindle a sense of stewardship over nuclear weapons in military personnel.

Some brief thoughts (just my quick takes. My guess is that others might disagree, including at GovAI):

  • Overall, I think the situation is quite different compared to 2018, when I think the talk was recorded.  AI governance / policy issues are much more prominent in the media, in politics, etc. The EU Commission has proposed some pretty comprehensive AI legislation. As such, there's more pressure on companies as well as governments to take action. I think there's also better understanding of what AI policy is sensible. All these things update me against
... (read more)

Happy to give my view. Could you say something about what particular views or messages you're curious about? (I don't have time to reread the script atm)

1
Misha_Yagudin
3y
Thank you for a speedy reply, Markus! Jade makes three major points (see the attached slide). I would appreciate your high-level impressions of these (if you lack time, even one-liners like "mostly agree" or "too much nuance to settle on a one-liner" would still be valuable). If you'd take the time to elaborate on any of these, I would prefer the last one. Specifically on:

Thanks Michael! Yeah, I hope it ends up being helpful. 

I'm really excited to see LTFF being in a position to review and make such a large number of grants. IIRC, you're planning on writing up some reflections on how the scaling up has gone. I'm looking forward to reading them! 

4
Jonas V
3y
I personally am not planning to do so anytime soon – though maybe Asya is.

Thanks for pointing that out, Michael! Super helpful. 

You can find the talk here

2
MichaelA
3y
Thanks for that link!

Hello, I work at the Centre for the Governance of AI at FHI. I agree that more work in this area is important. At GovAI, for instance, we have a lot more talented folks interested in working with us than we have absorptive capacity. If you're interested in setting something up at MILA, I'd be happy to advise if you'd find that helpful. You could reach out to me at markus.anderljung@governance.ai

That's exciting to hear! Is your plan still to head into EU politics for this reason? (not sure I'm remembering correctly!)

To make it maximally helpful, you'd work with someone at FHI in putting it together. You could consider applying for the GovAI Fellowship once we open up applications. If that's not possible (we do get a lot more good applications than we're able to take on), getting plenty of steer / feedback seems helpful (you can feel free to send it past me). I would recommend spending a significant amount of time making sure the piece is clearly written, such that someone can quickly grasp what you're saying and whether it will be relevant to their interests.

It definitely seems true that if I want to specifically figure out what to do with scenario a), studying how AI might affect structural inequality shouldn't be my first port of call. But it's not clear to me that this means we shouldn't have the two problems under the same umbrella term. In my mind, it mainly means we ought to start defining sub-fields with time.

A first guess at what might be meant by AI governance is "all the non-technical stuff that we need to sort out regarding AI risk". Wonder if that's close to the mark?

A great first guess! It's basically my favourite definition, though negative definitions probably aren't all that satisfactory either.

We can make it more precise by saying (I'm not sure what the origin of this one is, it might be Jade Leung or Allan Dafoe):

AI governance has a descriptive part, focusing on the context and institutions that shape the incentives an... (read more)

3
Pongo
4y
OK, thanks! The negative definition makes sense to me. I remain unconvinced that there is a positive definition that hits the same bundle of work, but I can see why we would want a handle for the non-technical work of AI risk mitigation (even before we know what the correct categories are within that).

It's a little hard to say, because it will largely depend on who we end up hiring. Taking into account the person's skills and interests, we will split up my current work portfolio (and maybe add some new things into the mix as well). That portfolio currently includes:

  • Operations: Taking care of our finances (including some grant reporting, budgeting, fundraising) and making sure we can spend our funds on what we want (e.g. setting up contracts, sorting out visas). It also includes things like setting up our new office and maintaining our website
... (read more)

Unfortunately, I'm not on that selection committee, and so don't have that detailed insight. I do know that there were quite a lot of applications this year, so it wouldn't surprise me if the tight deadlines originally set end up slipping a little.

I'd suggest you email: fhijobs@philosophy.ox.ac.uk

Probably there are a bunch more useful traits I haven't pointed to

Thanks, Jia!

Could you say more about the different skills and traits relevant to research project management?

Understanding the research: Probably the most important factor is that you're able to understand the research. This entails knowing how it connects to adjacent questions / fields, having well thought-out models about the importance of the research. Ideally, the research manager is someone who could contribute, at least to some extent, to the research they're helping manage. This often requires a decent amount of context on the research, of... (read more)

1
MarkusAnderljung
4y
Probably there are a bunch more useful traits I haven't pointed to

It is indeed! Editing the comment. Thanks!

I'll drop in my 2c.

AI governance is a fairly nascent field. As the field grows and we build up our understanding of it, people will likely specialise in sub-parts of the problem. But for now, I think there's benefit to having this broad category, for a few reasons:

  • There's a decent overlap in expertise needed to address these questions. By thinking about the first, I'll probably build up knowledge and intuitions that will be applicable to the second. For example, I might want to think about how previous powerful technologies such as nu
... (read more)
4
Pongo
4y
This doesn't yet seem obvious to me. Take the nuclear weapons example. Obviously in the Manhattan project case, that's the analogy that's being gestured at. But a structural risk of inequality doesn't seem to be that well-informed by a study of nuclear weapons. If we have a CAIS world with structural risks, it seems to me that the broad development of AI and its interactions across many companies is pretty different from the discrete technology of nuclear bombs. I want to note that I imagine this is a somewhat annoying criticism to respond to. If you claim that there are generally connections between the elements of the field, and I point at pairs and demand you explain their connection, it seems like I'm set up to demand large amounts of explanatory labor from you. I don't plan to do that, just wanted to acknowledge it.
3
Pongo
4y
Thanks for the response! It makes sense not to specialize early, but I'm still confused about what the category is. For example, the closest thing to a definition in this post (btw, not a criticism if a definition is missing in this post. Perhaps it's aimed at people with more context than me) seems to be: To me, that seems to be synonymous with the AI risk problem in its entirety. A first guess at what might be meant by AI governance is "all the non-technical stuff that we need to sort out regarding AI risk". Wonder if that's close to the mark?

Thanks for the question, Lukas.

I think you're right. My view is probably stronger than this. I'll focus on some reasons in favour of specialisation.

I think your ability to carry out a role keeps increasing for several years, but the rate of improvement presumably tapers off with time. However, the relationship between skill in a role and your impact is less clear. It seems plausible that there could be threshold effects and the like, such that even though your skill doesn't keep increasing at the same rate, the impact you have in the ... (read more)

5
Lukas Finnveden
4y
Thanks, that's helpful. Is this a typo? I expect uncertainty about cause prio and requirements of wide skillsets to favor less narrow career capital (and increased benefits of changing roles), not narrower career capital.

Thanks Misha!

Not sure I've developed any deep insights yet, but here are some things I find myself telling researchers (and myself) fairly often:

  • Consider a wide range of research ideas. It's easy to get stuck in a local optimum. We often have people write out at least 5 research ideas and rate them on criteria like "fit, importance, excitement, tractability", e.g. when they join as a GovAI Fellow. You should also have a list of research ideas that you periodically look through.
  • Think about what output you're aiming at from the sta
... (read more)
5
MichaelA
4y
I found this answer very interesting - thanks! On feedback, I also liked and would recommend these two recent posts:
  • Asking for advice
  • Giving and receiving feedback
Could you say more about which fields / career paths you have in mind?

No fields or career paths in particular. But there are some strong reasons for reaching out to people who already have, or are on a good track to having, impact in a field/career path we care about. These people will need a lot less training to be able to contribute, and they will already have been selected for being able to contribute to the field.

The issue that people point to is that it seems hard to change people's career plans or research agendas once they are already ... (read more)

You already give some examples later but, again, which fields do you have in mind?

Some categories that spring to mind:

  • Collecting important data series. One example is Ben Todd's recent work on retention in the EA movement. Another example is data regarding folks' views on AI timelines. I'm part of a group working on a follow-up survey to Grace et al 2018, where we've resurveyed folks who responded to these questions in 2016. This is the first time (to my knowledge) results where the same person is asked the same HLMI timeline questi
... (read more)

Thanks Alexander. Would be interested to hear how that project proceeds.

I read the US public opinion on AI report with interest, and thought to replicate this in Australia. Do you think having local primary data is relevant for influence?

I think having more data on public opinion on AI will be useful primarily for understanding the "strategic landscape". In scenarios where AI doesn't look radically different from other tech, it seems likely that the public will be a powerful actor in AI governance. The public was a powerful actor in the his... (read more)

Thanks, Pablo. Excellent questions!

how is taking a postdoctoral position at FHI seen comparatively with other "standard academia" paths? How could it affect future research career options?

My guess is that for folks who are planning on working on FHI-esque topics in the long term, FHI is a great option. Even if you treat the role as a postdoc, staying for say 2 years, I think you could be well set up to go on and continue doing important research at other institutions. Examples of this model include Owain Evans, Jan Leike, and Miles Brundage. Though a... (read more)

Your paragraph on the Brussels effect was remarkably similar to the main research proposal in my FHI research scholar application that I hastily wrote, but didn't finish before the deadline.

The Brussels effect strikes me as one of the best levers available to Europeans looking to influence global AI governance. It seems to me that better understanding how international law such as the Geneva Conventions came to be will shed light on the importance of diplomatic third parties in negotiations between superpowers.

I have been pursuing this project on my own time, figuring that if I didn't, nobody would. How can I make my output the most useful to someone at FHI wanting to know about this?

Update: The post for the Foundation’s CEO is now open. Applications close on 28th September. You can find more information here.

I've been involved (in some capacity) with most of the publications at the Centre for the Governance of AI at FHI coming out over the past 1.5 years. I'd say that for most of our research there is someone outside the EA community involved. Reasonably often, one or more of the authors of the piece wouldn't identify as part of the EA community. As for input to the work: If it is academically published, we'd get input from reviewers. We also seek additional input for all our work from folks we think will be able to provide useful input. This often includes academics we know in relevant fields. (This of course leads to a bit of a selection effect)

8
Sean_o_h
4y
Likewise for publications at CSER. I'd add that for policy work, written policy submissions often provide summaries and key takeaways and action-relevant points based on 'primary' work done by the centre and its collaborators, where the primary work is peer-reviewed. We've received informal/private feedback from people in policy/government roles at various points that our submissions and presentations have been particularly useful or influential. And we'll have some confidential written testimony to support this for a few examples for University REF (research excellence framework) assessment purposes; however, unfortunately I don't have permission to share these publicly at this time. However, this comment I wrote last year provides some info that could be used as indirect indications of the work being seen as high-quality (being chosen as one of a select number to be invited to present orally; follow-up engagement, etc). https://forum.effectivealtruism.org/posts/whDMv4NjsMcPrLq2b/cser-and-fhi-advice-to-un-high-level-panel-on-digital?commentId=y7DjYFE3gjZZ9caij

Actually, the paper has already been published in Global Policy (and in a very similar form to the one linked above): https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12718.

I have similar worries about making the high-tech panopticon too sticky a meme. I've updated slightly against this being a problem since there's been very little reporting on the paper. The only thing I've seen so far is this article from Financial Times: https://www.ft.com/content/dda3537e-01de-11e9-99df-6183d3002ee1. It reports on the paper in a very nuanced way.

4
Aaron Gertler
4y
Your link is broken, but it looks like the paper came out in September 2019, well after my comment (though my reservations still apply if those sections of the paper were unchanged). Thanks for the update on media reporting! Vox also did a long piece on the working-paper version in Future Perfect, but with the nuance and understanding of EA that one would expect from Kelsey Piper.

Good suggestion. Do you know if other EA orgs have tried it out and if so, how it panned out? It seems a little odd to do, if you assume that the referee and the relevant organisation have broadly aligned interests.

1
jacobjacob
4y
Ought (~$5000) and Rethink Priorities (~$500) have both done it, with bounties roughly what I indicated (though I'm a bit uncertain). Don't think either has completed the relevant hiring rounds yet.
3
JP Addison
4y
My experience with it is confined to the SF tech world, where it is extremely common. Though usually it is offered to employees of the hiring company only. Maybe you could limit it to a group of people you then email about it.

I have some familiarity with the org. Happy to chat to people who are considering applying.

I got a sense of some of the considerations to keep in mind when thinking about this question. But I didn't get a sense of whether collapse of democracy is more or less likely than before. Did you update in some direction? Does Runciman have a strong view either way?
