Taymon


Comments

Some quick notes on "effective altruism"

Reading this thread, I sort of get the impression that the crux here is between people who want EA to be more institutional (for which purpose the current name is kind of a problem) and people who want it to be more grassroots (for which purpose the current name works pretty okay).

There are other issues with the current name, like the way it opens us up to accusations of hypocrisy every time we fail to outperform anyone on anything, but I'm not sure that that's really what's driving the disagreement here. Partly, this is because people have tried to come up with better names over the years (though not always with a view towards driving serious adoption of them; often just as an intellectual exercise), and I don't think any of the candidates have produced widespread reactions of "oh yeah, I wish we'd thought of that in 2012", even among people who see problems with the current name. So coming up with a name that's better than "effective altruism", by the lights of what the community currently is, seems like a pretty hard problem. (Obviously this is skewed somewhat by the inertia behind the current name, but I don't think that fully explains what's going on here.) When people do suggest different names, it tends to be because they think some or all of the community is emphasizing the wrong things, and they want to pivot towards the right ones.

"Global priorities community" definitely sounds incompatible with a grassroots direction; if I said that I was starting a one-person global priorities project in my basement, this would sound ridiculously grandiose and like I'd been severely Dunning-Krugered, whereas with an EA project this is fine.

For what it's worth, I'd prefer a name that's clearly compatible with both the institutional and the grassroots side, because it seems clear to me that both of these are in scope for the EA mandate and it's not acceptable to trade off either of them. The current name sounds a little more grassroots than I'd like, but again, I don't have any better ideas.

At one point I pitched Impartialist Maximizing Rationalist-Empiricist-Epistemological Welfarist-Axiological Ideology, or IMREEWAI for short, but for some strange reason nobody liked that idea :-P

Where are you donating in 2020 and why?

Do you think the Biden campaign had room for more funding, i.e., that your donation made a Biden victory more likely on the margin (by enough to be worth it)? I am pretty skeptical of this; I suspect they already had more money than they were able to spend effectively. (I don't have a source for this other than Maciej Cegłowski, who has relevant experience but whom I don't agree with on everything; on the other hand, I can't recall ever hearing anyone make the case that U.S. presidential general-election campaigns do have room for more funding, and I'd be pretty surprised if there were such a case and it was strong.)

"Neglectedness" is a good heuristic for cause areas, but when donating to specific orgs I think it can wind up just confusing things; RFMF is the better thing to ask about.

I'm less certain about the Georgia campaign but still skeptical there, partly because it's a really high-profile race (since it determines control of the Senate and isn't competing for airtime with any other races) and partly because I think substantive electoral reform is likely to remain intractable even if the Democrats win. But I'd be interested to see a more thorough analysis of this.

Where are you donating in 2020 and why?

Alcor claims on their brochure that membership dues "may be" tax-deductible. It's not clear to me how they concluded that. Somebody should probably ask them.

Plan for Impact Certificate MVP

The second point there seems like the one that's actually relevant. It strikes me as unlikely that doing this with blockchain is less work than with conventional payment systems, even if the developers have done blockchain things before, and conventional payment systems are faster and more fungible with other assets than Ethereum. I'm reading the second point as suggesting that you're hoping funding for this will come in substantial part from people who are blockchain enthusiasts rather than EAs, and who therefore wouldn't be interested if it used conventional payment infrastructure?

(I agree that the "relics" idea is, at best, solving a different problem.)

The Hammer and the Dance

The post seems relatively optimistic. I'm worried that this may be motivated reasoning, and/or political reasoning (e.g., that people won't listen to anyone who isn't telling them that we can solve the crisis without doing anything too costly). Mind you, I'm not any kind of expert; I'm just suspicious-by-default, given that most other analysis I've seen seems less optimistic (note that there are probably all kinds of horrible selection biases in what I'm reading, and I have no idea what they are). Also, the author isn't an expert. They seem to have consulted experts for the post, but this still reduces my confidence in its conclusions, because those experts could have been selected for agreeing with a conclusion that the author came up with for non-expert-informed reasons.

Advice for getting the most out of one-on-ones

I'm more likely to do this if there's a specific set of data I'm supposed to collect, so that I can write it down before I forget.

Should you familiarize yourself with the literature before writing an EA Forum post?

Yeah, I should have known I'd get called out for not citing any sources. I'm honestly not sure I'd particularly believe most studies on this no matter which side they came out on; there are too many ways they could fail to generalize. I am pretty sure I've seen LW and SSC posts get cited as more authoritative than their epistemic-status disclaimers suggested, and that's most of why I believe this; generalizability isn't a concern here, since we're talking about basically the same context. Ironically, though, I can't remember which posts. I'll keep looking for examples.

Should you familiarize yourself with the literature before writing an EA Forum post?

"Breakthroughs" feel like the wrong thing to hope for from posts written by non-experts. A lot of the LW posts that the community now seems to consider most valuable weren't "breakthroughs". They were more like explaining a thing, such that each individual fact in the explanation was already known, but the synthesis of them into a single coherent explanation that made sense either hadn't previously been done, or had been done only within the context of an academic field buried in inferential distance. Put another way, it seems like it's possible to write good popularizations of a topic without being intimately familiar with the existing literature, if it's the right kind of topic. Though I imagine this wouldn't be much comfort to someone who is pessimistic about the epistemic value of popularizations in general.

The Huemer post kind of just felt like an argument for radical skepticism outside of one's own domain of narrow expertise, with everything that implies.

Should you familiarize yourself with the literature before writing an EA Forum post?

It seems clear to me that epistemic-status disclaimers don't work for the purpose of mitigating the negative externalities of people saying wrong things, especially wrong things in domains where people naturally tend towards overconfidence (I have in mind anything that has political implications, broadly construed). This follows straightforwardly from the phenomenon of source amnesia, and anecdotally, there doesn't seem to be much correlation between how much, say, Scott Alexander (whom I'm using here because his blog is widely read) hedges in the disclaimer of any given post and how widely that post winds up being cited later on.

Information security careers for GCR reduction

This post caused me to apply to a six-month internal rotation program at Google as a security engineer. I start next Tuesday.
