Jonny Spicer

Software Engineer
Working (0-5 years experience)
London, UK. Joined Feb 2022
jonnyspicer.com

Bio

I'm a London-based software engineer interested in AI safety, biorisk, great power conflict, climate change, community building, distillation, rationality, mental health, games, running and probably other stuff.

Before I was a programmer I was a professional poker player; there I picked up the habit of calculating the EV of almost every action in my life, which subsequently led me to discover EA.
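(For the unfamiliar: EV, or expected value, is just the probability-weighted average of outcomes. A minimal illustrative sketch, with made-up poker numbers, not a real hand analysis:)

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities should sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical spot: calling a $50 bet to win a $200 pot with a 30% chance of winning.
ev_call = expected_value([(0.30, 200), (0.70, -50)])
print(ev_call)  # 25.0 -> positive EV, so the call is profitable on average
```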

If you want to learn more about me, you can check out my website here: https://jonnyspicer.com

If you're interested in chatting then I'm always open to meeting new people! https://calendly.com/jonnyspicer/ea-1-2-1

Comments

My application

I dropped out of my undergrad about halfway through and think I made some learnable mistakes in the process.

Originally I was studying law, on the grounds that:

  1. My teachers and parents said I'd make a good lawyer;
  2. They similarly pressured me to do a subject perceived as "highly academic";
  3. I was quite a good public speaker at school;
  4. I thought it represented the best chance I had of getting into Oxbridge.

None of these reasons are particularly good ones for choosing a degree, but nobody ever sat me down and gave me a better framing of the problem, so I just went along with it. Point #4 was potentially true given that I did get an offer from Cambridge, and at one point my academic career seemed to be on a solid trajectory: I would go to Cambridge, become a barrister afterwards, be financially secure and successful, and ride off into the sunset. Unfortunately I suffered from various executive dysfunction and mental health issues both at school and at uni, but didn't legitimise them and therefore didn't devote the necessary resources to resolving them. This led to me declining my Cambridge offer, as I knew there was no way I was going to get the required grades, and subsequently to me dropping out of Lancaster University about halfway through my second year. At that point I went from having a solid life plan to no plan whatsoever, as well as some potent mental health struggles.

Fast forward a few years and I'm a software engineer at a big tech company, which seems to be a significantly better fit for me than being a barrister would have been. It's a great time to be a programmer: it's well paid and relatively low stress, building things is rewarding, and it suits me as someone with a quantitative worldview combined with a decent chunk of social anxiety. Being in big tech also appears to give me a reasonably clear pathway towards career success, by building career capital I can later parlay into impact, founding a startup, or similar.

In general it's hard to evaluate the counterfactual of dropping out versus remaining at uni; however, one of the posts Gavin linked mentioned shame around dropping out that strongly resonates with me. At various points in the last 6 years I have considered whether I ought to try to find a way to get a degree, and often find myself thinking that had I actually acquired a degree from Cambridge, my career might be significantly more advanced than it is now. It's also worth noting that not having an undergrad can make it harder to immigrate; for example, it is very difficult for me to get an H-1B visa for the US.

I also think Eli is completely correct that much of the value of university is in the people rather than the education. The sense of belonging I felt as part of a sports team at university was one I've been wholly unable to replicate since, despite various attempts to do so. I suspect the majority of the value of an Oxbridge or similar degree is in the networking opportunities, particularly in the long term. 

Advice to my 17-year-old self

  1. Think about what you want out of your life, even if it seems hard to imagine a life outside of school. Talk to people you find interesting, ask them difficult questions, try to imagine if being in their shoes would make you happy. Your parents want you to choose the most stable option, which might not be the best one.
  2. Take a year out to work on your mental health. Your career will be long, and a year spent wisely will more than pay for itself. Do anything you can to make those around you see that this is valuable. This is potentially also good advice for my 27-year-old self.
  3. The opportunity to go to university is likely once-in-a-lifetime, so you should make sure you go to the right one and study the right subject. I think I would've enjoyed maths, economics or computer science significantly more than law.
  4. Equally, it is not the be-all and end-all. If you're smart, curious and hard-working then you'll land on your feet. You have a big safety net so you shouldn't be scared of falling.

Takeaways

  1. I think at least trying to do an undergrad is worth it, for the people more than the education. 
  2. Degrees are pretty clearly more valuable to some people than others, and this depends on how credentialist your field is and whether you have traits that would be sufficient to overcome your lack of degree when looking for opportunities.
  3. Most 17-year-olds I know and have known do not seem well equipped to deal with big decisions like this.

Thanks for writing this up Frances! I just wanted to point out that the post on biosecurity talent you linked to includes the following:

Disclaimer: All uses of "engineering" in this post refer to the engineering of physical systems (e.g. materials engineering or civil engineering). Sorry, software engineers.

Thanks for pointing that out, I didn't realise how effective blood donation was. I think my original point still stands, though, if "donating blood" is substituted with a different proxy for something that is sub-maximally effective but feels good.

Thanks for writing this Luke! Much like others have said, there are some sections in this that really resonate with me and others I'm not so sure about. In particular I would offer a different framing of this point:

Celebrate all the good actions[6] that people are taking (not diminish people when they don't go from 0 to 100 in under 10 seconds flat).

Rather than celebrating actions that have altruistic intent but questionable efficacy, I think we could instead be more accepting of the idea that some of these things (eg donating blood) make us feel warm fuzzy feelings, and that there's nothing wrong with wanting to feel those feelings and taking actions to achieve them, even if they aren't obviously maximally impactful. Impact is a marathon, not a sprint, and it's important that people looking to have a large impact make sustainable choices, including keeping their morale high. For example, for people working on causes like AI safety where it's difficult to see tangible impact: if donating blood gives you the boost you need to keep feeling good about yourself and what you are doing with your life, and therefore prevents you from becoming disillusioned with your choices and contributing less to AI safety, then I think that makes it very worth doing. However, I think that is more an act of self-care than something that ought to be celebrated in the community (although perhaps acts of self-care ought to be celebrated more in the community).

I also think that a lot of average day-to-day charity (and perhaps other kinds of altruism) is primarily motivated by guilt, which I don't think is particularly helpful for donors, and I'd be surprised if it proved to be sustainable for charities either. I think effective altruism does a great job of reframing this: when I donate to GiveWell's Maximum Impact Fund, instead of doing it to assuage a sense of guilt, I do it because it lets me feel good about myself, knowing that I am actually making a tangible difference in the world with my actions. These are the same warm fuzzy feelings as before, and perhaps that's the framing I would prefer here: humans are warm-fuzzy-feeling-optimisers, and EA could do a better job of empowering people to feel those feelings when they make maximally impactful choices, rather than just ones where their impact is immediately obvious or provides some social kudos.

From a software engineering point of view there are a couple of things that would potentially put me off applying to an EA org:

  • Lack of mentorship (this is somewhat covered by your small-teams point, but it's the specific part I think of). I'm sure this isn't true for all EA orgs, but the appeal of e.g. FAANG is that I am very confident I'll be able to get mentored by engineers at the top of the field, who likely already have a lot of experience mentoring, have good structures for mentorship, and are generally empowered to be great at it.
  • Small scope/scale for projects, particularly for frontend work. In SWE a big part of your career capital comes from being able to say you've worked on projects that are really big and/or really fast. There are plenty of fullstack jobs at EA orgs around at the moment, but a lot of them are basically "look after a website" or "build an app that will serve a niche community".

I think there has been discussion before about SWEs feeling like EA orgs don't offer them enough career capital; I can't remember where, but it doesn't appear to have updated me much in favour of the EA orgs.

Earlier I had a conversation with Yonatan Cale about, among other things, ideas for EA projects that could be looking for founders. My prior is that "ideas are easy, execution is hard" and that therefore there are plenty of good ideas; he pushed back on this and cited this thread.

Then I went for a run and tried to think of some ideas. I haven't checked whether anyone has proposed any of them before, and given that I thought of them off the top of my head, they have likely already been discussed. We had talked about software projects specifically, but not all of these are software-centric. I haven't spent longer than five minutes thinking about any of them, and I think it's unlikely any of them are particularly good; this is an exercise in generating ideas, on the grounds that if there are enough of them, one of them might be good.

  • Prediction market aggregators that include markets from non-anglophone countries, eg Russia/China/India. Basically Metaforecast but with more markets (maybe I should reach out to Nuño to see if this would be possible?). I don't know if those markets already exist - if not, maybe they could be created.
  • A system to automate or crowd-source FOI requests, open source the data and provide tools to access and use it.
  • Basically any of the digital democracy tools Audrey Tang talked about on her episode of the 80000 Hours podcast. Build an MVP, pitch it to small local government, show some success, scale it up. Given the open-source nature of the tools currently used in Taiwan, "build an MVP" could be as easy as just running git clone polis. Low chance of success, potential for huge impact if successful regardless.
  • A system for polling representative samples of a population and/or having demographic information available about those sampled. Something like mechanical turk but in app-form and not associated with Amazon.
  • Web scraping/sentiment & quantitative analysis for information on Western/Russian/Chinese/Indian sites, similar to prediction market thing above. If lots of Chinese netizens suddenly start talking about shortening their AGI timelines, what information do they have that folks in the West don't? Similarly, the analysis should be published in the same languages to try to foster a more global community.
  • Org specifically for AI info security. Could either be for pen testing or building new defensive tools.
  • I originally thought "something like Guesstimate but more specific to Bayes calculations" but having looked at Guesstimate again, that already looks great.
  • A scientific journal which better incentivizes high quality research, eg by mandating preregistration, or by rewarding attempts to replicate studies. I presume this has been debated at length already though.

Obviously I know the last two definitely don't have legs, and the first one seems like it might just be submitting a pull request after a weekend or two of work, but still. I'm confident that there is a non-zero amount of value across all of these ideas, that if I thought about them more they could yield an appreciable amount of value, and that given more time I could think of a large number of similar ideas, with at least some of them being better than the best of these.

Firstly, I would say that while Facebook is definitely deserving of the criticisms you've mentioned above, it's not clear to me that it's a net negative force on the world. Facebook does allow people to create, maintain or rediscover genuine human connection across time and space in a way that I think is really valuable for the world (although there might be some better alternative in the counterfactual). The general opinion of Facebook seems to suffer from some amount of negativity bias, but I've not tried to crunch any numbers to work out whether or not Facebook actually is providing more positive value than negative.

Secondly, even if Facebook is doing more harm than good overall, it seems possible that it would be easier to produce change from the inside rather than out. So if you're an altruistic-minded person and want to do good as well as advance your career, then maybe you could consider joining them and trying to influence others around you to try to make the products you work on have more positive (or less negative) value.

Thirdly, I think this decision depends a lot on your options, skills, and future plans. If you are early in your career and want to work at MAANG so you can build career capital by having them on your CV and rapidly improve your product management skills by working on some of the most-used products in the world, then it could make sense, particularly if you are planning on doing something particularly high-impact afterwards, like becoming a founder. If, however, you could already get a decently high-impact job, e.g. something on the 80k job board, and you could be really excellent at it, then that seems like it could maximise your overall career impact better than the MAANG route.

Finally, a recruiter reaching out to you is still a long way from considering whether or not to accept a job at a company, particularly at somewhere as competitive as Facebook (although maybe the recruiter was especially keen on you!). While it will have a time-cost associated with it, I think it's likely worth going through with the interview process just for the experience, and then if you get an offer you can make your decision then. You might have a better understanding of whether or not this job aligns with your aims after having had a chance to talk to the interviewers in any case.

One last thought off the back of my personal musings on this question is that for me, there is a lot of ego in my desire to work at a MAANG company. Getting a job there would make me feel like I was at the top of my field, a really talented engineer, someone my parents could finally be proud of, etc. I then tried to rationalise this by saying it would allow me to earn to give, but there has been a lot of talk recently about EA having too much money, and the biggest bottleneck in EA seems to be people rather than funding. Having these kinds of thoughts and feelings is totally normal, but for me it was important to be self-aware about them and make sure I was being rational about my career choices. Also, when looking at options that have long-tail distributions for impact, with some outcomes being dramatically better than others, there seems to be a lot more scope for ambition in having an outsized personal impact on an EA-relevant product than in earning to give as a product manager rather than e.g. a quant trader (or a founder and CPO!).

This is largely advice regurgitated from 80k but hopefully I've managed to distill most of the relevant bits. Trying to get a job at FAANGULA is for the time being still one of the options I am considering, so I'd definitely be interested in chatting further about it or hearing more about how you decide to proceed.

I've taken a few concrete steps:

  • Applied for 80k career advising, which fortunately I got accepted for. My call is at the end of the month
  • Learned the absolute basics of the problem and some of the attempts in progress to try and solve it, by doing things like listening to the 80k podcasts with Chris Olah/Brian Christian, watching Rob Miles' videos etc
  • Clarified in my own mind that AI alignment is the most pressing problem, largely thanks to posts like Neel Nanda's excellent  Simplify EA Pitches to "Holy Shit, X-Risk" and Scott Alexander's "Long-Termism" vs "Existential Risk" (I'd not spent much time considering philosophy before engaging with EA and haven't had enough time to work out whether or not I have the beliefs required in order to subscribe to longtermism. Fortunately those two posts showed me I probably don't need to make a decision about that yet and can focus on alignment knowing that it's likely the highest impact cause I can work on).
  • Began cold-emailing AI safety folks to see if I can get them to give me any advice
  • Signed up to some newsletters, joined the AI alignment Slack group

I plan on taking a few more concrete steps:

  • Continuing to reach out to people working on AI safety who might be able to offer me practical advice on what skills to prioritise in order to get into the field and what options I might have available. 
  • In a similar vein to the above, trying to find a mentor who can help me both focus my technical skills and maximise my impact
  • Getting in contact with the folks at AI Safety Support
  • Completing the fast.ai Deep Learning for Coders course

My first goal is to ascertain whether or not I'd be a good fit for this kind of work, but given that my prior is that software engineers are likely to be a good fit for working on AI alignment and I'm a good fit for a software engineer, I am confident this will turn out to be the case. If that turns out to be true, there are a few career next steps that I think seem promising:

  • Applying for relevant internships. A lot of these seem aimed at current students, but I'm hoping I can find some that would be suitable for me.
  • Getting an interim job that primarily uses Python and ideally ML so I can upskill in those (at the moment my skills are focused on more generic backend API development), even if the job isn't focused on safety.
  • Applying for a grant to self-study for 3-6 months, ideally under the guidance of a mentor, with a view to building a portfolio that would enable me to get a job somewhere like DeepMind.
  • Applying for research associate positions focused on AI alignment.

I appreciate there's little context to my current situation which might be relevant here, but nonetheless any feedback on these would be greatly appreciated!

Hi, I'm Jonny, a software engineer based in London. I've recently come across EA and am looking to re-align my career along a higher-impact path, most likely focusing on AI risk; however, I've not fully bought into longtermism just yet, so I'm hedging by also considering working on climate change or global health. I look forward to using this forum to try to answer some of my questions and clarify my own thinking.