Alejandro Acelas

For anyone considering working on the ORCID-TAXID mapping tool, I suspect it might be an unusually approachable project for those with some familiarity with biological publications and programming. Even without knowing what ORCID or TAXID stood for before reading this post, I managed to construct a barebones demo in 30 minutes using ChatGPT and the Europe PMC API (which has an option to search by ORCID iD, though some quick manual searches suggest it isn't comprehensive). I think in under 30 hours you could build a decently useful product by adding features like:

  • Querying PubMed too alongside Europe PMC to get better publication coverage
  • Searching publications by author name (since PubMed doesn't offer filtering by ORCID ID) and filtering out clear false positives using simple heuristics like author affiliation and publication area
  • Creating more fine-grained criteria for identifying organisms in publication abstracts and texts
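To make the barebones demo concrete, here is a minimal sketch of the Europe PMC part, assuming its public REST search endpoint and the `AUTHORID` query field for ORCID iDs (both are from Europe PMC's documented API, but the exact field name and response shape are worth double-checking; the ORCID used below is ORCID's own example identifier):

```python
import json
import urllib.parse
import urllib.request

# Base endpoint of the Europe PMC REST search API.
EUROPE_PMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def build_orcid_search_url(orcid: str, page_size: int = 25) -> str:
    """Build a search URL filtering publications by an author's ORCID iD.

    Europe PMC's query syntax has an AUTHORID field that matches
    ORCID iDs, e.g. AUTHORID:"0000-0002-1825-0097".
    """
    params = urllib.parse.urlencode({
        "query": f'AUTHORID:"{orcid}"',
        "format": "json",
        "pageSize": page_size,
    })
    return f"{EUROPE_PMC_SEARCH}?{params}"

def extract_titles(response_json: dict) -> list:
    # Titles sit under resultList -> result in the JSON response.
    results = response_json.get("resultList", {}).get("result", [])
    return [r.get("title", "") for r in results]

def fetch_titles(orcid: str) -> list:
    # Network call; requires internet access.
    with urllib.request.urlopen(build_orcid_search_url(orcid)) as resp:
        return extract_titles(json.load(resp))
```

From there, mapping to TAXIDs would mean scanning the returned abstracts for organism names, which is where the fine-grained criteria above come in.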

I will message the post authors and offer to do this myself, but if you already have background in biological sciences and are looking for a cool upskilling project, you would probably be a better fit than me for this.

For those wanting a quick encapsulation of Nietzsche's morality, I recommend Arjun's other post on the topic. It's both unusually succinct and well-written.

Oh, sorry. I'll expand the abbreviation in the original comment. It's 'Community Building resources'.

Thanks for putting numbers to my argument! I was expecting a greater proportion of left-leaning individuals among the college educated, so this was a useful update.

A reason why the political orientation gap might be less worrying than it appears at first sight is that it probably stems partly from the overwhelmingly young skew of EA. Young people in many countries (and perhaps especially in the countries where EA has a greater presence) tend to be more left-leaning than the general population.

This might be another reason to try onboarding relatively more older people into EA, but if you thought that would involve significant costs (e.g. attracting fewer talented young EAs because fewer community building resources were directed towards that demographic), then perhaps in equilibrium we should have a somewhat skewed distribution of political orientations.

Thanks for taking the time to respond! I find your point of view more plausible now that I understand it a little bit better (though I'm still not sure of how convincing I find it overall).

I'm not sure if I understand where you're coming from, but I'd be curious to know: do you think similarly of EAs who are Superforecasters or have a robust forecasting record?

In my mind, updating may as well be a ritual, but if it's a ritual that allows us to better track reality then there's little to dislike about it. As an example of how precise numerical reasoning can help, the book Superforecasting describes how rounding Superforecasters' predictions (treating a .67 probability of X happening as a .7 probability) increases the error of the prediction. The book also includes many other examples where I think numerical reasoning confers a sizable advantage on its user.
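The rounding claim has a simple arithmetic core: for a calibrated forecaster, the expected Brier score of a forecast q on events with true probability p decomposes as p(1-p) + (q-p)^2, so nudging .67 to .7 strictly worsens it. A quick sketch (the decomposition is standard; the .67 example is mine):

```python
def expected_brier(q: float, p: float) -> float:
    """Expected Brier score of forecast q when the event occurs
    with true probability p:
        E[(q - X)^2] = p(1 - p) + (q - p)^2
    The first term is irreducible noise; the second penalizes
    any gap between forecast and truth."""
    return p * (1 - p) + (q - p) ** 2

# A calibrated forecaster says .67; rounding to .7 adds (0.03)^2
# of avoidable error, small per forecast but nonzero and systematic.
extra_error = expected_brier(0.70, 0.67) - expected_brier(0.67, 0.67)
print(round(extra_error, 6))  # 0.0009
```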

What stops AI Safety orgs from just hiring ML talent outside EA for their junior/more generic roles?

Oh, I totally agree that giving people the epistemics is mostly preferable to handing them the bottom line. My doubts come more from my impression that forming good epistemics in a relatively unexplored environment (e.g. cause prioritization within Colombia) is probably harder than in other contexts.
I know that our explicit aim with the group was at least to exhibit the kind of patience and rigour you describe, and that I ended up somewhat underwhelmed with the results. I initially wanted to try to parse out where our differing positions came from, but this comment eventually got a little long and rambling.
For now I'll limit myself to thanking you for making what I think is a good point.

Hi, thanks for the detailed reply. I mostly agree with the main thrust of your comment, but I think I feel less optimistic about what happens when you actually try to implement it.   

Like, we've had discussions in my group about how to prioritize cause areas, and in principle everyone agrees that we should work on causes that are larger, more neglected, and more tractable. But when it comes to specific causes, it turns out that the unmeasured effects are the most important thing, and the flow-through effects of the intervention I've always liked turn out to compensate for its lack of demonstrated impact.

I'm not saying it's impossible to have those discussions, just that for a group short on people who've engaged with EA cause prioritization arguments, being able to rely on the arguments that others have put forward (as we can often do for international causes) makes the job much easier. However, I'm open to the possibility that the best compromise might simply be to let people focus on local causes and double down on cultivating better epistemics.

(P.S.: I now realize that replying to every comment on your post might be quite demanding, so feel free not to answer. I'll make sure to still update on the fact that you weren't convinced by my initial comment. If anything, at least your comment is making me consider alternatives that don't restrict growth and still avoid the epistemic consequences I delineated. I'm not sure I'll find something that pleases me, but I'll muse on it a little further.)
