Joined Oct 2020



For those wondering "what about AGI and left vs. right brains": I did some work on this in my thesis (a left-hemisphere causal model and a right-hemisphere neural decoder), which I released under a Creative Commons non-commercial license. Merging symbolic and neural systems is probably the way to go, but maybe we don't want to rush so fast into the unknown, imho.

And date me, yes.

I don't get this: I posted it as a comment, and the post itself went to about -2 after being at +3 or so... Causality? Correlation? Shall I delete it? It was just a joke, folks. So strange that people don't want to discuss dating...

And now it's +8

Maybe I'm just seeing things that aren't there, oh la la.

I wrote something about empathy and strength, vibes and rationality; maybe that helps. ("Left and right brain" framing is not well received by modern neuroscientists, so treat it as an abstraction rather than a direct anatomical mapping.)

I feel and think I need both. With only the left brain I forget about the now (as in right now, not "what my actions would yield" or "what happened in the past when similar actions were taken"), and I don't see some of the options available when I forget the now. With only the right brain I forget about systems that sometimes help. The left brain implements models; the right brain tunes and "reloads" them when they become too bloated or too contradictory and need new axioms. Those spiritual vibes (instantiated, for example, in different styles of music) can be seen as different "cultures" of axiom-building. That's why people who recover from trauma (like me) talk about spirituality and vibes: the old axioms ran into contradictions and could no longer explain, predict, or suggest actions to adapt to the environment. So it's time for music and new axioms.


Such an oh la la. I feel and think the EA Forum needs more jokes, or it's all L and no R.

Question: would an impactful but not cool/popular/elegant topic interest you? What's your balance between coolness and impact?

The soldiers can just be listed officially as "MIA" while in reality they are with the Ukrainians; I don't feel it's such a problem. There is always a solution :)

  • I am one of the people who supported Timnit in that (or a similar) Twitter thread. See more of my position here:
  • I am also actively involved in EA (local AI safety reading groups, a CHAI Berkeley internship, a Google research internship on interpretability). I saw Yudkowsky doing the weight-loss show at a Christmas party in 2019, and I feel we do indeed have a cultural problem: ignoring Africa as "current issues irrelevant in the limit" doesn't work. More on this here: . It happened to her then, and now it has happened to me. It helps me a lot emotionally that condemning Russia is so widespread now.
  • I do not believe cancel culture is such a big deal in this case. I do disagree with Timnit on another quite personal and big issue (who should be included in our little "AI resistance" thingy, and how to help people facing #metoo SLAPP suits), yet I do not see myself canceled for that.

I would like to present my position, but I do not want to overstep anyone's consent by talking too much about all that here. If you want me to talk, please comment on what you want to hear from me, or upvote. I believe we both (the EA community/longtermists and AI Ethics people) would benefit from updating our models of the world given each other's respective experiences, and thus have more impact. Together.

Answer by sergia, Apr 26, 2022

I have failed to do any meaningful work on recommender systems alignment. We launched an association, and YouTube acknowledged the disinformation problem when we talked to them privately (about COVID coming from Russia, for example), but said they would not do anything, with or without us. We worked alone, and I was the only developer. I burned out to the point of being angry and alienating people around me (I understand what Timnit Gebru went through, because Russia, my home country, is an aggressor country, and there is a war in Tigray as well, in her home country of Ethiopia). I sent many angry/confusing emails that made perfect sense to me at the time... I went through homelessness and unemployment after internships at CHAI Berkeley and Google and a degree from a prestigious European university. I felt really bad for not being able to explain the importance of the problem and stop Putin before it was too late... Our colleagues' papers on the topic were silenced by their employers. Now I'm slowly recovering, and I feel I want to write about all that: some sort of guide / personal experience on aligning real systems and organizations, and on how real change comes really, really hard.

It would be awesome to know more. I personally feel EA could be a bit more down-to-earth in terms of doing actual things in the world and saving actual people :)