I am a research software engineer working at Improbable.
My research focuses on models and simulations of complex systems, which overlaps heavily with machine learning.
I am also very interested in AI safety, and AI research in general.
Hey Venkatesh, I am also really interested in Complexity Science; in fact, I am going to publish a blog post on here soon about Complexity Science and how it relates to EA. I've also read Bookstaber's book, and Doyne Farmer has a similar book coming out soon which looks great; you can read the intro here.
I hadn't heard of the Complexity Weekend event but it looks great, will check that out!
This is an interesting thought experiment and I like the specific framing of the question.
My initial thoughts are that this clearly would have been a good thing to work on, mainly because the 2008 financial crisis cost trillions of dollars and arguably also led to many bad political developments in the Western world (e.g. setting the stage for Trumpism). If you buy Tyler Cowen's arguments that economic growth is very important for the long term, that bolsters this case. However, a caveat is that due to moral uncertainty it's hard to actually know the long-term consequences of such a large event.
Here are some other ways to think about this:
As Ramiro mentions below, very few people were alert to the potential risk before the crisis, so one additional person thinking about and advocating for this would have increased the proportion of people working on it substantially.
Even if you had predicted how the crisis would unfold and what would cause it, could you actually have done anything about it? Would you just ring up the Federal Reserve and tell them to change their policy? Would anyone have listened to a random EA person? This is potentially the biggest problem in my view.
Other benefits - better economic modelling
I am a strong believer in alternative approaches to economic modelling that can take into account things like financial contagion (such as agent-based models). A potential benefit of working on this kind of thing before the crisis is that you might have developed and promoted these techniques further, and the resulting tools could help with other economic problems. In my view this is still a valuable and neglected area to work on.
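To give a flavour of what "agent-based" means here, below is a minimal toy sketch of default contagion on an interbank lending network. Everything about it (the parameter values, the random network, the simple threshold rule "a bank fails once losses from defaulted borrowers exceed its capital buffer") is an illustrative assumption, not a calibrated model of 2008:

```python
import random

def simulate_contagion(n_banks=20, capital=0.05, exposure=0.02,
                       n_links=4, shocked=0, seed=1):
    """Toy default cascade: each bank lends `exposure` (as a fraction of
    its assets) to `n_links` random counterparties.  A bank defaults once
    its accumulated losses exceed its `capital` buffer, passing losses on
    to the banks that lent to it.  All parameters are illustrative."""
    rng = random.Random(seed)
    # borrowers[i] = banks that bank i has lent to (i takes a loss if they fail)
    borrowers = [rng.sample([j for j in range(n_banks) if j != i], n_links)
                 for i in range(n_banks)]
    losses = [0.0] * n_banks
    defaulted = {shocked}        # one bank fails from an exogenous shock
    frontier = [shocked]         # banks that defaulted in the latest round
    while frontier:
        new_defaults = []
        for bank in range(n_banks):
            if bank in defaulted:
                continue
            # losses from borrowers that defaulted this round
            losses[bank] += exposure * sum(1 for b in borrowers[bank]
                                           if b in frontier)
            if losses[bank] > capital:
                defaulted.add(bank)
                new_defaults.append(bank)
        frontier = new_defaults
    return len(defaulted)       # total number of failed banks
```

Even a toy like this shows the qualitative point that standard equilibrium models miss: the size of the cascade depends on network structure and capital buffers, not just on the initial shock, and small parameter changes can tip the system from one failure to a system-wide collapse.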
Other benefits - reputation
An additional benefit of sounding the alarm on the financial crisis before it happened is reputational. Even if no one listened to you at the time, you would be recognised as one of the few people who "foresaw the crisis", and your opinions might therefore be given more weight; think of the people from The Big Short who are always wheeled out on TV to provide opinions. You could say "hey, I predicted the financial crisis, so maybe you should also listen to me about this other <<insert EA cause area here>> stuff".
Hey JP, thanks for your question. Here are some questions that may be useful in your search, and may help other people provide you with more advice:
1. Do you have any criteria for what you consider a "job within EA"? There are many types of job which could be considered EA-related, from jobs within EA organisations to jobs which have a large potential impact but are not directly at EA orgs (for example, working in certain government departments or private companies). It might be worth reframing the question as "how can I find a job that has the biggest impact?" rather than "how can I get an EA job?".
2. Do you have a particular cause area that you care a lot about? That may help to focus your search, and help your CV stand out to any potential employers.
3. What would you consider to be your comparative advantage? Since you have worked as an economist and data scientist, do you consider these technical skills to be your main strengths? Are you looking for a hands-on technical job? Technical skills such as software engineering and data science are always in demand, so this is worth bearing in mind.
I would add to this that it's obviously worth checking out 80,000 hours careers advice if you haven't already, they spend a lot more time thinking about this than me!
I wish you the best of luck in your job search!
Regarding AI alignment and existential risk in general, Cummings already has a blog post where he mentions these: https://dominiccummings.com/2019/03/01/on-the-referendum-31-project-maven-procurement-lollapalooza-results-nuclear-agi-safety/
So he is clearly aware of and responsive to these ideas; it would be great to have an EA-minded person on his new team to emphasise them.