All posts

Week of Friday, 5 April 2024


Quick takes

Please, people, do not treat Richard Hanania as some sort of worthy figure who is a friend of EA. He was a Nazi, and whilst he claims he has moderated his views, he is still very racist as far as I can tell. Hanania called for trying to get rid of all non-white immigrants in the US and the sterilization of everyone with an IQ under 90, indulged in antisemitic attacks on the allegedly Jewish elite, and even after his reform was writing about the need for the state to harass and imprison Black people specifically ('a revolution in our culture or form of government. We need more policing, incarceration, and surveillance of black people' https://en.wikipedia.org/wiki/Richard_Hanania).

Yet in the face of this, and after he made an incredibly grudging apology about his most extreme stuff (after journalists dug it up), he's been invited to Manifold's events and put on Richard Yetter Chappell's blogroll. DO NOT DO THIS. If you want people to distinguish benign transhumanism (which I agree is a real thing*) from the racist history of eugenics, do not fail to shun actual racists and Nazis. Likewise, if you want to promote "decoupling" factual beliefs from policy recommendations, which can be useful, do not duck and dive around the fact that virtually every major promoter of scientific racism ever, including allegedly mainstream figures like Jensen, worked with or published with actual literal Nazis (https://www.splcenter.org/fighting-hate/extremist-files/individual/arthur-jensen).

I love most of the people I have met through EA, and I know that, despite what some people say on Twitter, we are not actually a secret crypto-fascist movement (nor is longtermism specifically, which, whether you like it or not, is mostly about what its EA proponents say it is about). But there is, in my view, a disturbing degree of tolerance for this stuff in the community, mostly centered around the Bay specifically.
And to be clear, I am complaining about tolerance for people with far-right and fascist ("reactionary" or whatever) political views, not people with any particular personal opinion on the genetics of intelligence. A desire for an authoritarian government enforcing the "natural" racial hierarchy does not become okay just because you met the person with that desire at a house party and they seemed kind of normal and chill, or super-smart and nerdy.

I usually take a way more measured tone on the forum than this, but here I think real information is conveyed by getting shouty.

*Anyone who thinks it is automatically far-right to think about any kind of genetic enhancement at all should go read some Culture novels and note the implied politics (or indeed, look up the author's actual die-hard libertarian socialist views). I am not claiming that far-left politics is innocent, just that it is not racist.
The meat-eater problem is under-discussed. I've spent more than 500 hours consuming EA content, and I had never encountered the meat-eater problem until today: https://forum.effectivealtruism.org/topics/meat-eater-problem. (I had sometimes thought about the problem, but I didn't even know it had a name.)
Looks like Charity Navigator is taking a leaf from the EA book! Here they're previewing a new 'cause-based giving' tool: they talk about rating charities based on effectiveness and refer to research by Founders Pledge.
One of the best experiences I've had at a conference was when I went out to dinner with three people I had never met before. I simply walked up to a small group of people at the conference and asked, "Mind if I join you?"

Seeing the popularity of matching systems like Donut in Slack workspaces, I wonder if something analogous could be useful for conferences. I'm imagining a system in which you sign up for a timeslot (breakfast, lunch, or dinner) and are put into a group with between two and four other people. You are assigned a location/restaurant that is within walking distance of the conference venue, so the administrative work of figuring out where to go is more or less handled for you. I'm no sociologist, but I think that a small group is better for conversation than a large group, and generally also better than a two-person pairing. An MVP version of this could perhaps just be a Google Sheet with some RANDBETWEEN formulas.

The topics of conversation were pretty much what you would expect for people attending an EA conference: we sought advice about interpersonal relationships, spoke about careers, discussed moral philosophy, meandered through miscellaneous interests, shared general life advice, and so on. None of us were taking any notes. None of us sent any follow-up emails. We weren't seeking advice on projects or trying to get the most value possible. We were simply eating dinner and having casual conversation.

When I claim this was one of the best experiences, I don't mean "best" in the sense of "most impactful," but rather that it was 1) fairly enjoyable/comfortable, 2) distinct from the talks and the one-on-ones (which often tend to blur together in my memory), and 3) a setting where I felt like I was actually interacting with people rather than engaging in "the EA game."[1] I think that third aspect felt the most important to me.

Of course, it could simply be that this particular group of individuals just happened to mesh well, and that this specific situation isn't something which can be easily replicated.

1. ^ "The EA game" is very poorly conceptualized on my part. I apologize for the sloppiness of it, but I'll emphasize that this is a loose concept I've just started thinking about, rather than something rigorous. I think of it as something along the lines of "trying to extract value or trying to produce value": exploring job opportunities, sensing if someone is open to a collaboration of some type, getting advice on career plans, picking someone's brain on their area of expertise, getting intel on new funders and grants, and so on. It is a certain type of professional and para-professional networking. You have your game face on, because there is some outcome that is dependent on your actions and on how people perceive you. This is in contrast to something like interacting without an agenda, or being authentic and present.
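(The RANDBETWEEN-in-a-spreadsheet MVP mentioned above could just as easily be a few lines of Python. This is a rough sketch under my own assumptions — the function name, the 3-to-5 group sizes, and the leftover-handling rule are all mine, not anything the quick take specifies:)

```python
import random

def make_meal_groups(attendees, min_size=3, max_size=5, seed=None):
    """Randomly partition attendees into small meal groups.

    Shuffle the sign-up list, then chunk it into groups of
    min_size..max_size people, shrinking the last full group
    when needed so no one is left in a too-small remainder.
    """
    rng = random.Random(seed)  # seed only for reproducibility in testing
    pool = list(attendees)
    rng.shuffle(pool)

    groups = []
    i = 0
    while i < len(pool):
        remaining = len(pool) - i
        size = min(max_size, remaining)
        # If taking a full group would strand fewer than min_size
        # people at the end, take a smaller group now instead.
        if 0 < remaining - size < min_size:
            size = remaining - min_size
        groups.append(pool[i:i + size])
        i += size
    return groups
```

(With 13 sign-ups this yields groups of 5, 5, and 3; every attendee lands in exactly one group.)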
Here’s a puzzle I’ve thought about a few times recently: the impact of an activity (I) is due to two factors, X and Y, which combine multiplicatively to produce impact. Examples include:

* The funding of an organization and the people working at the org
* A manager of a team who acts as a lever on the work of their reports
* The EA Forum acting as a lever on top of the efforts of the authors
* A product manager joining a team of engineers

Let’s assume in all of these scenarios that you are only one of the players in the situation, and you can only control your own actions. From a counterfactual analysis, if you can increase your contribution by 10%, then you increase the impact by 10%, end of story. From a Shapley value perspective, it’s a bit more complicated, but we can start with a prior that you split the impact evenly with the other players.

Both these perspectives have a lot going for them. The counterfactual analysis has important correspondences to reality: if you do 10% better at your job, the world gets 0.1I better. Shapley values prevent the scenario where the multiplicative impact causes the involved agents to collectively contribute too much.

I notice myself feeling relatively more philosophically comfortable running with the Shapley value analysis in scenarios where I feel aligned with the other players in the game. And potentially the downsides of the Shapley value approach go down if I actually run the math. (Fake edit: I ran a really hacky guess at how I’d calculate this using this calculator and it wasn’t that helpful.)

But I don’t feel 100% bought in to the Shapley value approach, and I think there’s value in paying attention to the counterfactuals. My unprincipled compromise would be to take some weighted geometric mean and call it a day. Interested in comments.
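(The tension between the two attributions can be made concrete with a toy model. The sketch below assumes the simplest multiplicative setup — impact X·Y is produced only when both parties participate, and either party alone produces nothing; the player names `funder` and `org` and the numbers are illustrative, not from the quick take:)

```python
from itertools import permutations

def shapley_values(players, value):
    """Shapley value of each player: average marginal contribution
    over all orderings in which the coalition could be assembled."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = value(coalition)
            coalition.append(p)
            totals[p] += value(coalition) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Toy multiplicative model: funding X and org labour Y only
# produce impact together; either factor alone produces nothing.
X, Y = 10.0, 4.0
def v(coalition):
    return X * Y if {"funder", "org"} <= set(coalition) else 0.0

print(shapley_values(["funder", "org"], v))  # {'funder': 20.0, 'org': 20.0}
# Counterfactual credit, by contrast, hands each player the full impact,
# since removing either one drops I to zero:
print(v(["funder", "org"]) - v(["org"]))     # 40.0
```

(This is the "split evenly" prior from the text: Shapley credits each party XY/2 = 20, while naive counterfactual credit assigns both parties the full 40, so claimed credit sums to 2I — the over-contribution problem Shapley values avoid.)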