I'm a London-based software engineer interested in AI safety, biorisk, great power conflict, climate change, community building, distillation, rationality, mental health, games, running and probably other stuff.
Before I became a programmer I was a professional poker player; that's where I picked up the habit of calculating the EV of almost every action in my life, and how I subsequently discovered EA.
If you want to learn more about me, you can check out my website here: https://jonnyspicer.com
If you're interested in chatting then I'm always open to meeting new people! https://calendly.com/jonnyspicer/ea-1-2-1
I think this is a great idea, thanks for the feedback - I completely agree we want people to be able to hit the ground running on the day. I would imagine groups are most effective when they're formed around strong coders; perhaps there's a way we can work that into the doc.
One thing we're considering is an ongoing Discord server, where people could see ideas, projects, who's working on what, etc. The idea is that the server would persist between events and move more towards hosting ongoing projects as above. I think this could potentially solve some of the cold start issues, but I'm also hesitant to ask people to join yet another Discord server, and it'd probably need to reach a critical mass of people in order to be valuable. Having written out this comment, I think we will likely start it and push to get it to a good size, and if it doesn't get there we can re-evaluate.
Thanks for pointing out the bad link, I've corrected it now!
Thanks :)
Yes that's correct, I'm a software engineer and Sam's a product manager. I think we both felt pretty comfortable with our roles despite our lack of experience, and would encourage others who are interested in organizing not to be put off by not having done it before. I'm very happy to chat with anyone who feels this way and can hopefully offer more advice than I laid out in the post.
Scott Alexander offers one answer to this concern in his Prediction Market FAQ:
Another method (mostly associated with Manifold) is to just leave it up to human judgment - specifically, the judgment of the person who made the market. For example, I could make a market in “By 2050, will there be an AI which Scott Alexander thinks qualifies as ‘human-level’?” This will force market participants to price in the risk that I have bad judgment or act dishonestly. But perhaps these risks are small. For example, I might say elsewhere what I think qualifies as “human-level” AI, or you might think human-level AI will be so obvious when it comes that I will definitely agree with you about it. As for honesty, this could be enforced either legally or by reputation. Someone who has resolved their past 100 prediction markets honestly will probably resolve this one honestly too, especially if they get paid to do so and will never get customers again if they lie. When we invest on the normal stock market, we trust that our brokers / the NYSE / etc won’t run off with our money, and this trust is usually well-deserved. Even when we make an online purchase, we trust that the store we’re sending our money to won’t steal it and refuse to send us the product. It would be an exaggeration to say that trust is a solved problem, but evidence from Manifold suggests that most people price in a <1% chance that well-known market makers with good reputation resolve dishonestly.
I think you can see this adjustment-for-dishonesty effect in action on Manifold a fair bit - for example, when a market is left open after the event it relates to has finished, the probability often sits at 2% rather than 0%.
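For intuition, here's a rough back-of-the-envelope sketch (my own illustration, not from Scott's FAQ; the function name and numbers are made up) of why nobody pushes such a market the rest of the way to 0%: at a displayed probability of 2%, buying NO only pays off if the creator actually resolves the market correctly, so even a small perceived chance of a bad resolution wipes out the edge.

```python
# Sketch: expected profit per NO share on a market whose real-world outcome is
# clearly NO, but which carries a small chance of being resolved YES anyway
# (dishonesty, a misread resolution criterion, or simple error).

def ev_of_buying_no(price_no: float, p_bad_resolution: float) -> float:
    """A NO share costs `price_no` and pays 1 if the market resolves NO, 0 otherwise."""
    expected_payout = (1 - p_bad_resolution) * 1 + p_bad_resolution * 0
    return expected_payout - price_no

# A displayed probability of 2% means NO costs 0.98 per share.
print(ev_of_buying_no(price_no=0.98, p_bad_resolution=0.01))  # ~ +0.01: barely worth it
print(ev_of_buying_no(price_no=0.98, p_bad_resolution=0.03))  # ~ -0.01: not worth it
```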
If SBF makes the list then Ben Delo might count too.
I had several calls and exchanged messages with Yonatan for a couple of months last year while I was searching for a new job. I would strongly recommend his services. I've been programming for ~5 years now, although I wouldn't consider some of those years to be particularly high-quality experience.
The calls felt a little like "career therapy". Yonatan tended to answer a lot of my questions with questions of his own, in order to help me draw my own conclusions. He was perceptive and particularly good at pointing out irrational thoughts I had around my career - it turned out there were a lot more of these than I was expecting!
Estimating counterfactual impact is obviously hard, but I'm going to try anyway.
My biggest takeaways from his coaching:
I endorse this suggestion, and had it in mind in my original comment but should've made it explicit - thanks for pointing out the ambiguity!
A project should be funded that aims to understand why 71% of EAs who responded to the 2020 survey self-identify as male, what problems could arise from the gender balance skewing this way, and what interventions could redress said balance.
"Recruitment" initiatives targeted at those under 18 should not receive further funding and should be discontinued.
I would argue that the majority of software products aim to make a positive impact, and my guess would be that the products with the biggest impact are wholly unrelated to EA. Having said that, here are a few that seem in line with the spirit of the question:
Could you expand a bit on "software implementation" being a missing service? At first glance I would've thought the Altruistic Agency would provide that; am I mistaken?