I help EA orgs use AI better. Also curious about what strong-but-not-yet-superhuman AI might enable for us.
I'm really keen to meet people building things with AI, and people interested in making more of that happen within EA. If you're either of those, please have a really low bar for booking me at calendly.com/alejoacelas/30-minutes
Oh, amazing! I didn't know Alex had a Substack. And indeed, it's full of advice for using LLMs for work.
Here's a relevant link for those interested: https://lawsen.substack.com/p/lean-into-laziness
In the same genre, there are also these posts from Shakeel Hashim and Peter Hartree on how they use LLMs:
For anyone considering working on the ORCID-TAXID mapping tool, I suspect it might be an unusually approachable project for those with some familiarity with biological publications and programming. Even without knowing what ORCID or TAXID stood for before reading this post, I managed to construct a barebones demo in 30 minutes using ChatGPT and the Europe PMC API (which has an option to search by ORCID iD, though some quick manual searches suggest its coverage isn't comprehensive). I think in under 30 hours you could build a decently useful product by adding features like:
I will message the post authors and offer to do this myself, but if you already have a background in biological sciences and are looking for a cool upskilling project, you would probably be a better fit than me for this.
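For anyone curious, here's roughly what the core of that barebones demo looks like: a minimal Python sketch querying the Europe PMC REST API for publications linked to an ORCID iD. Treat the details as assumptions on my part: the AUTHORID search field follows my reading of the Europe PMC docs, and the ORCID shown is a placeholder.

```python
# Minimal sketch: fetch publications linked to an ORCID iD via the
# Europe PMC REST search API. The AUTHORID field is my reading of the
# Europe PMC docs; the ORCID below is a placeholder.
import requests

EUROPE_PMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def publications_for_orcid(orcid: str, page_size: int = 25) -> list[dict]:
    """Return basic metadata for publications attributed to `orcid`."""
    params = {
        "query": f'AUTHORID:"{orcid}"',
        "format": "json",
        "pageSize": page_size,
    }
    resp = requests.get(EUROPE_PMC_SEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["resultList"]["result"]

# Example usage with a placeholder ORCID iD:
for paper in publications_for_orcid("0000-0002-1825-0097"):
    print(paper.get("pmid"), paper.get("title"))
```

From there, the mapping part is mostly a matter of extracting organism mentions from the returned records and linking them to TAXIDs, which is where the feature ideas above would come in.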
For those wanting a quick encapsulation of Nietzsche's views on morality, I recommend Arjun's other post on the topic. It's both unusually succinct and well-written.
A reason the political orientation gap might be less worrying than it first appears is that it probably stems partly from EA's overwhelmingly young demographic. Young people in many countries (and perhaps especially in the countries where EA has a greater presence) tend to be more left-leaning than the general population.
This might be another reason to onboard proportionally more older people into EA, but if you thought that would involve significant costs (e.g., attracting fewer talented young EAs because fewer community-building resources were directed towards that demographic), then perhaps in equilibrium we should accept a somewhat skewed distribution of political orientations.
I'm not sure if I understand where you're coming from, but I'd be curious to know: do you think similarly of EAs who are Superforecasters or have a robust forecasting record?
In my mind, updating may as well be a ritual, but if it's a ritual that helps us track reality better, then there's little to dislike about it. As an example of how precise numerical reasoning can help, the book Superforecasting describes how rounding superforecasters' predictions (e.g., interpreting a .67 probability that X happens as a .7 probability) increases their prediction error. The book also includes many other examples where I think numerical reasoning confers a sizable advantage on its user.
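To make that rounding effect concrete, here's a minimal sketch with synthetic forecasts (not data from the book): it scores a well-calibrated forecaster with the Brier score, then rounds their probabilities to the nearest .1 and scores them again.

```python
# Minimal sketch: rounding calibrated forecasts to the nearest .1 worsens
# their Brier score. Forecasts and outcomes are synthetic, not from the book.
import random

random.seed(0)

def brier(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Simulate a calibrated forecaster: each event happens with exactly the
# probability they assign to it.
forecasts = [random.uniform(0, 1) for _ in range(100_000)]
outcomes = [1 if random.random() < p else 0 for p in forecasts]

rounded = [round(p, 1) for p in forecasts]

print(f"Brier score, precise forecasts: {brier(forecasts, outcomes):.4f}")
print(f"Brier score, rounded to .1:     {brier(rounded, outcomes):.4f}")
```

For a calibrated forecaster, the rounded version is worse by roughly the mean squared rounding error (here about 0.0008): small on any one question, but systematic across many.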
Yep, agree that linear averages might not be the best tool for that. The product of odds sounds interesting. I might give it a read and see if it's easy to keep track of in my head.
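For concreteness, here's a minimal sketch of what pooling by odds can look like next to a linear average. I've used the geometric mean of odds, which is one common reading of combining odds multiplicatively; the function names are mine.

```python
# Minimal sketch: pooling several probability estimates by a linear average
# versus by the geometric mean of their odds. Names are illustrative.
import math

def linear_pool(probs):
    """Simple arithmetic mean of probabilities."""
    return sum(probs) / len(probs)

def odds_pool(probs):
    """Geometric mean of odds, converted back to a probability."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean_log_odds))

estimates = [0.9, 0.6, 0.02]  # three forecasters' probabilities for one event
print(f"linear average:      {linear_pool(estimates):.3f}")  # ~0.507
print(f"geometric-odds pool: {odds_pool(estimates):.3f}")    # ~0.394
```

Note how the confident low estimate pulls the odds pool well below the linear average: extreme forecasts carry more weight in odds space.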