Jonny Spicer

Software Engineer @ AWS
Working (0-5 years experience)
218 · London, UK · Joined Feb 2022



I'm a London-based software engineer interested in AI safety, biorisk, great power conflict, climate change, community building, distillation, rationality, mental health, games, running and probably other stuff.

Before I was a programmer I was a professional poker player; that's where I picked up the habit of calculating the EV of almost every action in my life, which subsequently led me to discover EA.

If you want to learn more about me, you can check out my website here:

If you're interested in chatting, I'm always open to meeting new people!


Could you expand a bit on "software implementation" being a missing service? At first glance I would've thought the Altruistic Agency would provide that, am I mistaken?

I think this is a great idea, thanks for the feedback - I completely agree we want people to be able to hit the ground running on the day. I would imagine groups are most effective when they're formed around strong coders, perhaps there's a way we can work that into the doc.

One thing we're considering is an ongoing Discord server, where people could see ideas/projects/who's working on what, etc. The idea would be that the server would persist between events, and move more towards having ongoing projects as above. I think this could potentially solve some of the cold start issues, but I am also hesitant to ask people to join yet another Discord server, and it'd probably need to reach a critical mass of people in order to be valuable. Having written out this comment, I think we will likely start it and push to get it to a good size, and if not we can re-evaluate.

Thanks for pointing out the bad link, I've corrected it now!

Thanks :)

Yes, that's correct - I'm a software engineer and Sam's a product manager. I think we both felt pretty comfortable with our roles despite our lack of experience, and would recommend that others who are interested in organizing not be put off by not having done it before. I'm very happy to chat with anyone who feels this way and can hopefully offer more advice than I laid out in the post.

Scott Alexander offers one answer to this concern in his Prediction Market FAQ:

Another method (mostly associated with Manifold) is to just leave it up to human judgment - specifically, the judgment of the person who made the market. For example, I could make a market in “By 2050, will there be an AI which Scott Alexander thinks qualifies as ‘human-level’?” This will force market participants to price in the risk that I have bad judgment or act dishonestly. But perhaps these risks are small. For example, I might say elsewhere what I think qualifies as “human-level” AI, or you might think human-level AI will be so obvious when it comes that I will definitely agree with you about it. As for honesty, this could be enforced either legally or by reputation. Someone who has resolved their past 100 prediction markets honestly will probably resolve this one honestly too, especially if they get paid to do so and will never get customers again if they lie. When we invest on the normal stock market, we trust that our brokers / the NYSE / etc won’t run off with our money, and this trust is usually well-deserved. Even when we make an online purchase, we trust that the store we’re sending our money to won’t steal it and refuse to send us the product. It would be an exaggeration to say that trust is a solved problem, but evidence from Manifold suggests that most people price in a <1% chance that well-known market makers with good reputation resolve dishonestly.

I think you can see this adjustment-for-dishonesty effect in action on Manifold a fair bit - for example when a market is left up after the event it relates to has finished, and the probability is still at 2% rather than 0%.
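The arithmetic behind that observation can be sketched with toy numbers (the 2% figure is illustrative, not taken from any specific market):

```python
# Toy sketch: why a market can sit at ~2% even after the event has clearly
# not happened. Traders price in a small chance that the creator resolves
# YES dishonestly (or mistakenly), so the price never falls all the way to 0.

p_event = 0.0            # the event has already failed to happen
p_dishonest_yes = 0.02   # assumed chance the creator resolves YES anyway

# Market price ~ probability the market ultimately resolves YES:
# either the event happens and resolution is honest, or resolution is dishonest.
market_price = p_event * (1 - p_dishonest_yes) + p_dishonest_yes
print(market_price)  # 0.02 - the ~2% floor described above
```

A creator with a long track record of honest resolutions shrinks `p_dishonest_yes`, which is exactly the reputation effect Scott describes.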

If SBF makes the list then Ben Delo might count too.

I had several calls and exchanged messages with Yonatan for a couple of months last year while I was searching for a new job. I would strongly recommend his services. I've been programming for ~5 years now, although I wouldn't consider some of those years to be particularly high quality experience.

The calls felt a little like "career therapy". Yonatan tended to answer a lot of my questions with questions of his own, in order to help me draw my own conclusions. He was perceptive and particularly good at pointing out irrational thoughts I had around my career - it turned out there were a lot more of these than I was expecting!

Estimating counterfactual impact is obviously hard, but I'm going to try anyway. 

  • I ended up getting a FAANG job, which I estimate will reduce my time to getting a highly impactful job by 18-24 months compared to offers from good-but-not-FAANG companies.
  • I have slightly greater counterfactual earning-to-give potential, but I think it's negligible.
  • I don't have a reasonable estimate of how likely I was to reach the interview stage at FAANG companies, so I can't comment on how impactful Yonatan was in helping me land interviews in the first place (I suspect his CV review added a few percentage points, but fewer than 10).
  • Without speaking to Yonatan, I would've applied for fewer jobs, done fewer interviews, been less prepared for them, and become less proficient at the process of doing them. Contingent on getting a FAANG interview, my counterfactual estimate is that I had a 10% chance of success; with Yonatan's help, I think I was at ~40% ex ante.
  • If we use the (poor) assumption that I was always going to at least get the interview, I estimate Yonatan's coaching added an extra 6-7 months of direct work to my career. If his help increased the chances of getting an interview in the first place, then the impact is higher.
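The arithmetic behind that last bullet can be made explicit (using the estimates from the bullets above; all figures are rough):

```python
# Rough expected-value sketch of the coaching's counterfactual impact,
# using the numbers from the bullets above. All inputs are estimates.

# Months of "time to a highly impactful job" saved by a FAANG offer
# versus a good-but-not-FAANG offer (midpoint of the 18-24 month range).
months_saved_by_faang = (18 + 24) / 2  # 21 months

# Estimated probability of converting a FAANG interview into an offer.
p_success_without_coaching = 0.10
p_success_with_coaching = 0.40

# Assuming the interview happens either way, the coaching's expected
# contribution is the increase in success probability times the months saved.
extra_months = (p_success_with_coaching - p_success_without_coaching) * months_saved_by_faang
print(round(extra_months, 1))  # ~6.3 months, matching the "6-7 months" estimate
```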

My biggest takeaways from his coaching:

  • Have a low bar for applying to jobs, apply for lots of them, filter once you have more information about them. This helps you get better at interviews and allows you to compare potential offers against one another.
  • Only write a cover letter if the job is significantly higher value than the others; otherwise, cover letters are likely a poor use of your time (apply for more jobs in the time you would've spent writing them).
  • Apply early, see what parts of the interview process you get rejected at, and work on that part. Don't assume you need to grind LeetCode for 6 months before you can apply anywhere - maybe you're already great at LeetCode but suck at other things.
  • If you think "I would really like this job. I want to wait 3 months before applying to maximise my chances of success", email them and ask how long before you can reapply in the event that you fail. The same goes for various other questions you might have - ask the company rather than not applying.
  • If you are applying for jobs with the purpose of gaining career capital, ask whether there is structured mentorship available. I found this question particularly valuable in interviews, and was surprised by the number of times I got a long, convoluted answer that amounted to "no". If you are trying to improve, you want short feedback loops, and many companies don't have these in place.

I endorse this suggestion, and had it in mind in my original comment but should've made it explicit - thanks for pointing out the ambiguity!

Answer by Jonny Spicer · Dec 05, 2022

A project should be funded that aims to understand why 71% of EAs who responded to the 2020 survey self-identify as male, what problems could arise from the gender balance skewing this way, and what interventions could redress said balance.

Answer by Jonny Spicer · Dec 05, 2022

"Recruitment" initiatives targeted at those under 18 should not receive further funding and should be discontinued.

Answer by Jonny Spicer · Dec 04, 2022

I would argue that the majority of software products aim to make a positive impact, and my guess is that the products with the biggest impact are wholly unrelated to EA. Having said that, here are a few that seem in line with the spirit of the question:

  • Spark Wave
  • QURI
  • Computational Democracy Project
  • Various other prediction markets, like Manifold
  • There are people who think that building any kind of ML-intensive product is not a good thing, even if the model isn't SOTA, and so might disagree that what Ought are doing is good. I neither endorse nor disendorse this opinion, but if you disagree with it then you might also like co:here