I am a research engineer working on AI safety at DeepMind. I formerly worked at Improbable on simulations for decision making.
I'm interested in AGI safety, complexity science, software engineering, models and simulations.
I just wrote a relevant forum post on how simulation models / Agent-based models could be highly impactful for pandemic preparedness: https://forum.effectivealtruism.org/posts/2hTDF62hfHAPpJDvk/simulation-models-could-help-prepare-for-the-next-pandemic
A crucial aspect of this is better software tools for building large scale simulations, so I would say this is a large opportunity for someone who wants to work in software engineering.
Even just working as a research engineer in an existing academic group building epidemiological models would be impactful in my opinion. The role of research engineer within academia is quite neglected because it tends to pay less than equivalent industry jobs.
Thanks for writing this, in my opinion the field of complex systems provides a useful and under-explored perspective and set of tools for AI safety. I particularly like the insights you provide in the "Complex Systems for AI Safety" section, for example that ideas in complex systems foreshadowed inner alignment / mesa-optimisation.
I'd be interested in your thoughts on modelling AGI governance as a complex system, for example race dynamics.
I previously wrote a forum post on how complex systems and simulation could be a useful tool in EA for improving institutional decision making, among other things: https://forum.effectivealtruism.org/posts/kWsRthSf6DCaqTaLS/what-complexity-science-and-simulation-have-to-offer
I can think of a few other areas of direct impact which could particularly benefit from talented software engineers:
Improving climate models is a potential route to high impact on climate change: there are computational modelling initiatives such as the Climate Modeling Alliance and startups such as Cervest. It would also be valuable to contribute to open-source computational tools such as the Julia programming language and certain Python libraries.
There is also the area of computer simulations for organisational / government decision making, such as Improbable Defence (disclosure: I am a former employee and current shareholder), Simudyne and Hash.ai. I've heard anecdotally that a few employees of Hash.ai are sympathetic to EA, but I don't have first-hand evidence of this.
More broadly, there are many areas of academic research, not just AI safety, that could benefit from more research software engineers. The Society of Research Software Engineering aims to provide a community for research engineers and to make this a more established career path. This type of work in academia tends to pay significantly less than private-sector software roles, so it is worse for ETG, but on the flip side that is an argument for it being a relatively neglected opportunity.
Is there a list of the ideas that the fellows were working on? I'd be curious.
It's not surprising to me that there aren't many "product focused" traditional startup style ideas in the longtermist space, but what does that leave? Are most of the potential organisations research focused? Or are there some other classes of organisation that could be founded? (Maybe this is a lack of imagination on my part!)
Very useful to know, thanks for the context!
Congratulations, this is really great to hear, and seems like a fantastic opportunity!
Out of interest, what was the sequence of events? Did you already have a PhD program lined up when you applied for funding, or are you going to apply for one now that you have it? Also, had you already discussed this with your current employer before applying for funding?
I only ask because I have been considering attempting to do something similar!
This is a good point, although I suppose you could still frame it as "just-in-time learning": you can attempt a deep RL project, realise you are hopelessly out of your depth, and then you know you'd better go through Spinning Up in Deep RL before you continue. The risk, though, is that it may be demoralising to start something too far outside your comfort zone.
I massively agree with the idea of "just do a project", particularly since it's a better way of practising the kind of research skills (like prioritisation and project management) that you will need to be a successful researcher.
I suppose the challenge may be choosing a topic for your project, but reaching out to others in the community may be one good avenue for harvesting project ideas.
What are your thoughts on re-implementing existing papers? It can be a good way to develop technical skills, and maybe a middle ground between learning pre-requisites and doing your own research project? Or would you say it's better to just go for your own project?
These links are excellent! I hadn't come across these before, but I am really excited about the idea of using roleplay and table top games as a way of generating insight and getting people to think through problems. It's great to see this being applied to AI scenarios.
@djbinder Thanks for taking the time to write these comments. No need to worry about being negative, this is exactly the sort of healthy debate that I want to see around this subject.
I think you make a lot of fair points, and it’s great to have these insights from someone with a background in theoretical physics. However, I would still disagree slightly on some of them; I will try to explain myself below.
I don’t think the only meaningful definition of complex systems is that they aren’t amenable to mathematical analysis; that is perhaps a common feature of them, but it isn’t always true. I would say the main hallmark is a surprising level of sophisticated behaviour arising from only apparently simple rules at the level of the system’s individual components, together with the fact that such systems can be hard to manage and predict.
It is true that the terms “complexity” and “emergence” are not formally defined, and this perhaps means they end up being used in an overly broad way; the field of complexity science has also been somewhat prone to hype. I have myself felt uncomfortable with the term “emergence” at times, as it is still a bit vague for my tastes, but I have landed on the opinion that it is a good way to recognise certain properties of a system and to categorise different systems. I agree with Eliezer Yudkowsky’s point that it isn’t a sufficient explanation of behaviour, but it is still a relevant aspect of a system to look for, and it can shape expectations. The aspiration of complexity science is to provide more formal definitions of these terms, so I do agree that there is more work to do to refine them. However, just because these terms can’t yet be formally or mathematically defined doesn’t mean they have no place in science. The same is true of words like “meaning” and “consciousness”, which are nonetheless important concepts.
I think the main point of disagreement is whether “complexity science” is a useful umbrella term. I agree that plenty of valuable interdisciplinary work applying ideas from physics to the social sciences is done without reference to “complexity” or “complex systems”; however, by highlighting common themes between these different areas, I think complexity science has promoted far more interdisciplinary work than would have happened otherwise.

Take the review paper you linked: I would be surprised if many of the authors of those papers didn’t have some connection to complexity science or SFI at some point. In fact, one of the authors directs a lab called the “Center for Complex Networks and Systems Research”. Even Steven Strogatz, whose textbook you mentioned, was an external SFI professor for a while (although his affiliation doesn’t mean complexity science can take credit for all his prior work). Most complexity scientists do not mention complexity or emergence much in their published papers, which just look like rigorous papers in a specific domain. The flip side, as you argued, is that this perhaps casts doubt on the utility of these terms. But I would say that this framing of the problem (as “complex systems” in different domains having underlying features in common) has helped to motivate and initiate a lot of this work.

The area of complexity economics is a great example. Economics has always borrowed ideas from physics (all the way back to Walrasian equilibrium), but this process had stalled somewhat in the latter half of the 20th century. Complexity science has injected a lot of new and valuable ideas into economics, and I would say this comes from framing the economy as a complex system, not just from SFI getting the right people in the same room together (although that is a necessary part).
Perhaps I am just less optimistic than you about how easy it is to do good interdisciplinary work, and about how much of it would happen organically in this area without a dedicated movement. I maintain that complexity science is a good way to encourage researchers to push into problem areas that are less amenable to reductionism or mathematical analysis, since this is often very difficult and risky.
Anyway, the main reason I wanted to write this blog post is not so that EA people go around waxing lyrical about “complexity” and “emergence” all the time, but to point to complexity science as an example of a successful interdisciplinary movement that EA can perhaps learn from (even just from a public relations point of view), and to look at some of the tools from complexity science (e.g. ABMs) and suggest that these might be useful. @Venkatesh makes a good point that my main recommendation here is that ABMs may be worth applying to EA cause areas, so perhaps I should have separated that out into its own forum post.
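To make the ABM suggestion concrete, here is a minimal sketch of a classic agent-based model, Schelling's segregation model, where a simple local rule (agents move if too few neighbours are like them) produces surprising global structure. This is my own illustrative Python example, not something from the post; the grid size, threshold and number of sweeps are arbitrary choices.

```python
import random

def make_grid(size, vacancy=0.1, seed=0):
    """Random torus grid: 0 = empty cell, 1/2 = the two agent types."""
    rng = random.Random(seed)
    cells = [0 if rng.random() < vacancy else rng.choice([1, 2])
             for _ in range(size * size)]
    return [cells[i * size:(i + 1) * size] for i in range(size)]

def neighbours(grid, size, r, c):
    """The 8 surrounding cells, with wrap-around (toroidal) boundaries."""
    return [grid[(r + dr) % size][(c + dc) % size]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def happiness(grid, size):
    """Mean fraction of like-typed neighbours, averaged over all agents."""
    fracs = []
    for r in range(size):
        for c in range(size):
            agent = grid[r][c]
            if agent == 0:
                continue
            occupied = [n for n in neighbours(grid, size, r, c) if n != 0]
            if occupied:
                fracs.append(sum(1 for n in occupied if n == agent) / len(occupied))
    return sum(fracs) / len(fracs)

def step(grid, size, threshold, rng):
    """One sweep: each agent below its tolerance threshold moves to a
    random empty cell. (A moved agent may be revisited at its new site
    later in the same sweep; that's fine for a sketch.)"""
    empties = [(r, c) for r in range(size) for c in range(size) if grid[r][c] == 0]
    for r in range(size):
        for c in range(size):
            agent = grid[r][c]
            if agent == 0:
                continue
            occupied = [n for n in neighbours(grid, size, r, c) if n != 0]
            unhappy = occupied and (
                sum(1 for n in occupied if n == agent) / len(occupied) < threshold)
            if unhappy and empties:
                i = rng.randrange(len(empties))
                nr, nc = empties[i]
                grid[nr][nc], grid[r][c] = agent, 0
                empties[i] = (r, c)  # the vacated cell becomes empty

if __name__ == "__main__":
    SIZE, THRESHOLD = 20, 0.4
    rng = random.Random(0)
    grid = make_grid(SIZE)
    before = happiness(grid, SIZE)
    for _ in range(30):
        step(grid, SIZE, THRESHOLD, rng)
    after = happiness(grid, SIZE)
    print(f"mean like-neighbour fraction: {before:.2f} -> {after:.2f}")
```

Even though each agent is only mildly intolerant (content with 40% like neighbours), the population self-organises into strongly segregated clusters, which is exactly the kind of emergent, hard-to-predict macro behaviour from simple micro rules that the post is about.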