Jonny Spicer

Software Engineer @ AWS
446 karma · Joined Feb 2022 · Working (6-15 years) · London, UK
jonnyspicer.com

Bio

I'm a London-based software engineer interested in AI safety, biorisk, great power conflict, climate change, community building, distillation, rationality, mental health, games, running and probably other stuff.

Before I was a programmer I was a professional poker player; playing poker got me into the habit of calculating the EV of almost every action in my life, which in turn led me to discover EA.

If you want to learn more about me, you can check out my website here: https://jonnyspicer.com

If you're interested in chatting, I'm always open to meeting new people! https://calendly.com/jonnyspicer/ea-1-2-1

Comments (35)

Have you considered talking/working with Sage on this? It sounds like something that would fit well with the other tools on https://www.quantifiedintuitions.org/

I'd be interested to see you weigh the pros and cons of making it easier to contribute - you don't explicitly say it in the post, but you imply that this would be a good thing by default. The forum is the way it is for a reason, and both the forum team and the community have put mechanisms in place to try to keep the quality of the discussion high.

For example, I would argue that having a high bar for posting isn't a bad thing, and that the sliding-scale karma system that helps regulate it is, by extension, valuable. If writing a full post of sufficient quality is too time-consuming, there is the quick takes section.

The Alignment Forum has a significantly higher barrier to entry than this one does, but I think that is fairly universally regarded as an important factor in facilitating a certain kind of discussion there. I can see a lot of value in the EA forum trying to maintain its current norms so that it retains the potential for productive discussion between people who are sufficiently well-researched. I think meaningfully lowering the bar for participation would mean the forum losing some of its ability to generate anything especially novel or useful to the community, and I think the quote you included:

For an internet forum it's pretty good. But it's still an internet forum. Not many good discussions happen on the internet.

somewhat points to that too. I think there should be other forums where people less familiar with EA can participate in discussions, and whether or not those currently exist is an interesting question in itself.

Having said all that, I do wonder if that leaves the current forum community particularly vulnerable to groupthink. I'm not really sure what the solution to that is though.

My biggest takeaway from EA so far has been that the difference in expected moral value between the consensus choice and its alternative(s) can be vastly larger than I had previously thought.

I used to think that "common sense" would get me far when it came to moral choices. I even thought that the difference in expected moral value between the "common sense" choice and any alternatives was negligible, so much so that I made a deliberate decision not to invest time into thinking about my own values or ethics. 

EA radically changed my opinion. I now hold the view that the consensus view is frequently wrong, even when the stakes are high, and that it is possible to make dramatically better moral decisions by approaching them with rationality and a better-informed ethical framework.

Sometimes I come across people who are familiar with EA ideas but don't particularly engage with them or the community. I often feel surprised, and I think the above is a big part of why. Perhaps more emphasis could be placed on this expected moral value gap in EA outreach?

Thanks for the feedback - it has indeed been a long time since I did high school statistics!

I specified that the numbers I gave were "approximations to prove my point" because I know that I don't have a technical statistical model in my head, and I didn't want to pretend that was the case. Given that this is a non-technical, shortform post, I thought it was clear what I meant - apologies if it wasn't.

This is a good idea, thanks for the suggestion! I've never really tried any of the CFAR stuff but this seems like a good place to start. 

I'll give it a go over the weekend and if I'm struggling then I'll let you know and we can do it together :)

It means something like "my 90% confidence interval is 80% - 95%, with 90% as the mean".
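
To make that concrete, here's a minimal sketch in Python. The choice of a Beta distribution is my own illustrative assumption, not anything from the original thread; Beta(36, 4) is just one distribution whose mean is 90% and whose central 90% interval roughly matches 80%-95%:

```python
# Illustrative only: Beta(36, 4) is one distribution consistent with
# "mean ~90%, 90% confidence interval roughly 80%-95%".
from scipy.stats import beta

dist = beta(36, 4)

print(dist.mean())             # 0.9
print(dist.ppf([0.05, 0.95]))  # roughly [0.81, 0.97]
```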

Thanks for the suggestion! I have actually spent quite a lot of time thinking about this - I had my 80k call last April and this was their advice. I've held off on doing it for a number of reasons:

  • I'm worried that even if I do upskill in ML, I won't be a good enough software engineer to land a research engineering position, so part of me wants to improve as a SWE first
  • At the moment I'm very busy and a marginal hour of my time is very valuable; upskilling in ML would likely take 200-500 hours, and right now I would struggle to commit to even 5 hours per week
  • I don't know whether I would enjoy ML, whereas I know I somewhat enjoy at least some parts of the SWE work I currently do
  • Learning ML potentially narrows my career options vs learning broader skills, so it's hard to hedge
  • My impression is that there are a lot of people trying to do this right now, and it's not clear to me that doing so would be my comparative advantage. Perhaps carving out a different niche would be more valuable in the future.

There are probably good rebuttals to at least some of these points, and I think that is adding to my confusion. My intuition is to keep doing what I'm currently doing, rather than go try and learn ML, but maybe my intuition here is bad.

Edit: writing this comment made me realise that I ought to write a proper doc with the pros/cons of learning ML and get feedback on it if necessary. Thanks for helping pull this useful thought out of my brain :)

I suffer strongly from the following, and I suspect many EAs do too (all numbers are approximations to illustrate my point):

  1. I think that AGI is coming within the next 50 years, 90% probability, with medium confidence
  2. I think that there is a ~10% chance that development of AGI leads to catastrophic outcomes for humanity, with very low confidence
  3. I think there is a ~50% chance that development of AGI leads to massive amounts of flourishing for humanity, with very low confidence
  4. Increasing my confidence in points 2 & 3 seems very difficult and time consuming, as the questions at hand are exceptionally complex, and even identifying personal cruxes will be a challenge
  5. I feel a moral obligation to prevent catastrophes and enable flourishing, where I have the influence to do so
  6. I want to take actions that accurately reflect my values
  7. Given the probabilities above, not taking strong, if not radical, action to try to influence the outcomes feels like a failure to embody my values, and a moral failure.

I'm still figuring out what to do about this. When you're highly uncertain it's obviously fine to hedge against being wrong, but again, given the numbers it's hard to justify hedging all the way down to inaction.
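
To make "given the numbers" concrete, here's the trivial arithmetic implied by the approximate figures in the list above (my own back-of-the-envelope multiplication, which glosses over the very low confidence attached to points 2 and 3):

```python
# Rough, illustrative arithmetic using the approximate numbers above.
p_agi_within_50y = 0.90          # point 1
p_catastrophe_given_agi = 0.10   # point 2
p_flourishing_given_agi = 0.50   # point 3

print(p_agi_within_50y * p_catastrophe_given_agi)   # ~0.09 unconditional catastrophe
print(p_agi_within_50y * p_flourishing_given_agi)   # ~0.45 unconditional flourishing
```

Even held at low confidence, a ~9% unconditional chance of catastrophe is the kind of number that makes hedging all the way down to inaction hard to justify.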

I am trying to learn more about AI safety, but I'm not spending very much time on it currently. I'm trying to talk to others about it, but I'm not evangelising it, nor necessarily speaking with a great sense of urgency. At the moment, it's low down my de facto priority list, even though I think there's a significant chance it changes everything I know and care about. Is part of this a lack of visceral connection to the risks and rewards? What can I do to feel like my values are in line with my actions?

The CEO has been inconsistent over time regarding his position on releasing LLMs

I find this to be a pretty poor criticism, and its inclusion makes me less inclined to accept the other criticisms in this piece at face value.

Updating your beliefs and changing your mind in light of new evidence is undoubtedly a good thing. To say that doing so leaves you with concerns about Connor's "trustworthiness and character" not only seems unfair, but also creates a disincentive for people to publicly update their views on key issues, for fear of this kind of criticism.

Some quick thoughts following EAG London 2023

TL;DR

  • I'd recommend having a strategy for planning your conference
  • Different types of 1:1s are valuable in different ways, and some are more worth preparing for than others
  • It's natural to feel imposter syndrome and a sense of inadequacy when surrounded by so many highly competent, accomplished people, but arguably the primary purpose of the conference is for those people to help us mere mortals become highly competent and accomplished too (assuming accomplishment = impact)
  • I feel very conflicted about the insider/outsider nature of the EA community, and I'm going to keep thinking and talking about it more

Last year I attended EAG SF (but not EAG London 2022), and was newer to the EA community, as well as working for a less prestigious organisation. This context is probably important for many of these reflections.

Conference strategy

I made some improvements to my conference planning strategy this year that I think made the experience significantly better:

  • I looked through the attendee list and sent out meeting invitations as soon as Swapcard was available. This way people had more slots free, increasing both the likelihood they'd accept my meeting and the chances they'd accept the specific time I'd requested. This gave me more control over my schedule.
  • I left half an hour's break between my 1:1s. This allowed meetings to run longer if both participants wanted that, and gave me some time to process information, write down notes, and recharge.
  • I initially didn't schedule many meetings on Sunday. This meant that if anybody I talked to on Friday or Saturday suggested I talk to someone else at the conference, I'd have the best chance at being able to arrange a meeting with them on Sunday. This strategy worked really well, as I had a couple of particularly valuable 1:1s on Sunday with people who hadn't originally been on my radar to talk to.

I had a clearer sense of what I wanted to get out of the conference this year compared to last, which made it easier to decide who would be valuable to speak to. Everyone I requested a meeting with accepted, including someone I regarded as particularly impressive who had said they weren't going to take many 1:1s - so have a low bar for requesting meetings! With this person in particular I felt a bit starstruck, and regretted not having spent more time preparing specific questions to ask.

Some other meetings were totally fine without preparation though, so I think it's worth considering which ones would be more valuable to prepare for - in my case these were ones with more accomplished folks, or with people who I'm interested in working/collaborating with in the near future. 

I broadly had two kinds of 1:1s, both of which were valuable: meetings with people who had a track record in areas that I am considering pursuing, and meetings with people who are in a similar position to me currently. With the former, I had specific questions that I was trying to answer, and was potentially trying to impress them/gauge if they might hire me. With the latter, meetings were more exploratory, more casual, and more about trying to find out if I was missing anything important from my model. I think it's easy to feel like the former is far more valuable than the latter, but I think this is false, and I think scheduling some of these more relaxed meetings can help ease the stress of the conference (as well as provide valuable insight into your current situation and plans).

I've infiltrated the ingroup

Last year, I felt like I had something to prove in all of my meetings. I knew maybe 4-5 people who were at the conference, which is not a lot out of almost 1500, and I worked for a company that nobody in SF had heard of. I was still a prototypical EA (straight white male) in many ways, but I don't have an undergraduate degree, and I felt like I lacked any track record of achievements to show that I was competent (and by extension, worthy of other attendees' time).

This year, I had several close friends attending and had interacted with a much larger number of people in some capacity, either on social media or through EA tech events in London. I now work for a FAANG company. I have read the seminal Slate Star Codex posts, I broadly understand what a deceptively aligned mesa optimiser is, and I've speculated as to the true identity of Qualy the Lightbulb on Twitter. I have a legible track record of my competence, and I am fluent in EA jargon. I have infiltrated the ingroup.

I feel very conflicted about this - being part of the ingroup feels great. I feel like I have the respect of people who I view as extremely talented and successful, and naturally this does wonders for my ego. Having so many common touchpoints means I find it extremely easy to make meaningful connections within the EA community compared with outside it. Using jargon to signal that you're familiar with the relevant scriptures and ideas lets you skip straight to discussing cruxes, on the understanding that both parties already agree on some number of issues. Other group norms, such as a preference for openness, directness, and epistemic humility, also seem better than their alternatives to me - I think these tendencies facilitate more constructive discussion, ultimately (hopefully) leading to greater impact.

But there are many obvious drawbacks to this. The ingroup is primarily made up of exceptionally privileged people (and in some ways I'm glad of this - I want privileged people like myself to be doing more to think about how they can do good with their privilege), and often the people who feel less comfortable around the EA community are less privileged. Other people have articulated these problems with the community better than I can, and I don't want to speak for them so won't go into too much detail, but if you're reading this then you probably already know exactly what I mean. The fact that the EA community makes some people feel this way saddens me, and that is hard to square with the happiness I get from the sense of belonging I personally feel in the community.

I'm not entirely sure what to do about this. Trying to use less jargon seems like a good start, but that would be a costly signal for me, particularly when I feel like I am starting to gain status within the community. I would love to write that I am happy to sacrifice this, but I am human and flawed, and I don't know if I am. Making a conscious effort to treat people the same, regardless of whether I view them as highly accomplished or not, also seems like something I ought to write here, but seems hard (or even impossible) to do in practice.

80k has the idea of spending the initial part of your career acquiring career capital, which can later be traded in for impact. Perhaps EA ought to have an analogous idea of community capital, where greater focus is placed on nurturing and growing a healthy community focused on doing good. Even if this focus detracts from doing the absolute most good possible in the short term, in the long term it could allow for greater impact (and, after all, we do love talking about the long term round here).

I feel like my thinking on these topics is generally pretty immature and lacks nuance, so I'd welcome any thoughts.

Outcomes

To end on a positive note, I found the conference incredibly valuable. I came away feeling like I had three promising paths to explore, and am now planning on trying to move to direct work immediately, rather than gaining more career capital as originally planned. I feel excited about trying to use my career to do good, and I feel excited about EA as a whole. There are jobs I will apply for that I counterfactually wouldn't have, seemingly promising opportunities for collaboration, and I made several new contacts that I think will be mutually beneficial in the future.

The conference itself seemed excellently run, the venue was amazing, as was the food, and I feel very appreciative of both the organisers and the event staff for all their time and effort!
