Comments

What are the coolest topics in AI safety, to a hopelessly pure mathematician?

Question: would an impactful but not cool/popular/elegant topic interest you? How do you weigh coolness against impact?

Ukraine: can we talk to the Russian soldiers?

The soldiers could simply be listed as "MIA" officially while in reality staying with the Ukrainians, so I don't feel this is such a problem. There is always a solution :)

Consider Not Changing Your Forum Username to Your Real Name
  • I am one of the people who supported Timnit in that (or similar) Twitter thread. See more of my position here: https://twitter.com/sergeivolodinch/status/1520150030518210561
  • I am also actively involved in EA (local AI safety reading groups, a CHAI Berkeley internship, a Google research internship on interpretability), saw Yudkowsky doing his weight-loss show at a Christmas party in 2019, and I feel we indeed have a cultural problem: dismissing Africa as "current issues, irrelevant in the limit" doesn't work. More on this here: https://forum.effectivealtruism.org/posts/JtE2srazz4Yu6NcuQ/how-many-eas-failed-in-high-risk-high-reward-projects?commentId=WrtWTown797Kw77g8 . It happened to her then, and now it has happened to me. It helps me a lot emotionally that condemning Russia is so widespread now.
  • I do not believe that cancel culture is such a big deal in this case. I do disagree with Timnit on another quite personal and big issue (who should be included in our little "AI resistance" thingy, and how to help people facing #metoo SLAPP suits), but I do not see myself being canceled for that.

I would like to present my position, but I do not want to overstep anyone's consent by talking too much about all of that here. If you want me to talk, please comment on what you would like to hear from me, or upvote. I believe both sides (the EA community/longtermists and the AI Ethics people) would benefit from updating our models of the world given each other's experiences, and would thus have more impact. Together.

How many EAs failed in high risk, high reward projects?
Answer by sergia · Apr 26, 2022

I have failed to do any meaningful work on recommender systems alignment. We launched an association, and YouTube acknowledged the disinformation problem when we talked to them privately (for example, COVID disinformation coming from Russia), but said they would not do anything about it, with or without us. We worked alone, and I was the single developer. I burned out to the point of being angry and alienating people around me (I understand what Timnit Gebru has gone through, because Russia, my home country, is an aggressor country, and there is also a war in Tigray, which involves Ethiopia, her home country). I have sent many angry/confusing emails that made perfect sense to me at the time... I went through homelessness and unemployment after internships at CHAI Berkeley and Google and a degree from a prestigious European university. I felt really bad for not being able to explain the importance of the problem and stop Putin before it was too late... Our colleagues' papers on the topic were silenced by their employers. Now I am slowly recovering and feel I want to write about all of that: some sort of guide / personal experience on aligning real systems and organizations, and on how real change comes really, really hard.

We are a foundation that fights against malnutrition in Colombia's La Guajira, where in recent years more than 5,000 children have died of thirst and hunger.

It would be awesome to know more. I personally feel EA could be a bit more down-to-earth in terms of doing actual things in the world and saving actual people :)