Taymon's Comments

Should you familiarize yourself with the literature before writing an EA Forum post?

Yeah, I should have known I'd get called out for not citing any sources. I'm honestly not sure I'd particularly believe most studies on this no matter what side they came out on; too many ways they could fail to generalize. I am pretty sure I've seen LW and SSC posts get cited as more authoritative than their epistemic-status disclaimers suggested, and that's most of why I believe this; generalizability isn't a concern here since we're talking about basically the same context. Ironically, though, I can't remember which posts. I'll keep looking for examples.

Should you familiarize yourself with the literature before writing an EA Forum post?

"Breakthroughs" feel like the wrong thing to hope for from posts written by non-experts. A lot of the LW posts that the community now seems to consider most valuable weren't "breakthroughs". They were more like explaining a thing, such that each individual fact in the explanation was already known, but the synthesis of them into a single coherent explanation that made sense either hadn't previously been done, or had been done only within the context of an academic field buried in inferential distance. Put another way, it seems like it's possible to write good popularizations of a topic without being intimately familiar with the existing literature, if it's the right kind of topic. Though I imagine this wouldn't be much comfort to someone who is pessimistic about the epistemic value of popularizations in general.

The Huemer post mostly felt like an argument for radical skepticism outside one's own narrow domain of expertise, with everything that implies.

Should you familiarize yourself with the literature before writing an EA Forum post?

It seems clear to me that epistemic-status disclaimers don't do much to mitigate the negative externalities of people saying wrong things, especially in domains where people naturally tend toward overconfidence (I have in mind anything with political implications, broadly construed). This follows straightforwardly from the phenomenon of source amnesia: readers remember the claim but not the caveats attached to it. Anecdotally, there doesn't seem to be much correlation between how heavily, say, Scott Alexander (whom I cite because his blog is widely read) hedges in the disclaimer of a given post and how widely that post winds up being cited later on.

Information security careers for GCR reduction

This post caused me to apply to a six-month internal rotation program at Google as a security engineer. I start next Tuesday.

What would EAs most want to see from a "rationality" or similar project in the EA space?

I would like to see efforts at calibration training for people running EA projects. This would help push those projects in a more strategic direction by having people lay out predictions about outcomes at the outset and check them against results later, kind of like what Open Phil does with its grants.
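For concreteness, here's a minimal sketch of what the scoring side might look like. The Brier score and bucketing below are one standard approach, not anything Open Phil or any particular project has specified, and the prediction data is made up for illustration:

    # Minimal sketch of calibration scoring for project predictions.
    # Each entry is (stated probability, whether the outcome happened).
    predictions = [
        (0.9, True),   # e.g., "90% chance we hit our Q1 signup target"
        (0.6, False),
        (0.7, True),
        (0.2, False),
    ]

    # Brier score: mean squared error between stated probability and outcome.
    # 0.0 is perfect; always guessing 50% scores 0.25.
    brier = sum((p - int(happened)) ** 2 for p, happened in predictions) / len(predictions)
    print(f"Brier score: {brier:.3f}")

    # Calibration check: among predictions made at ~70%, did ~70% come true?
    def bucket_calibration(preds, lo, hi):
        in_bucket = [happened for p, happened in preds if lo <= p < hi]
        if in_bucket:
            print(f"{lo:.0%}-{hi:.0%} bucket: {sum(in_bucket) / len(in_bucket):.0%} came true")

    for lo in (0.0, 0.25, 0.5, 0.75):
        bucket_calibration(predictions, lo, lo + 0.25)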

[Link] The Optimizer's Curse & Wrong-Way Reductions

Can you give an example of a time when you believe the EA community got the wrong answer to an important question because it didn't follow your advice here, and explain how following it would have led to the right answer?

How Can Each Cause Area in EA Become Well-Represented?

Apologies if this is a silly question, but could you give examples of specific, concrete problems that you think this analysis is relevant to?

EAs and EA Orgs Should Move Cash from Low-Interest to High-Interest Options

Does your recommendation account for the staff-time cost of doing anything other than whatever an org's current setup is? Orgs like CEA have cited that cost as the reason they don't pursue financial optimizations like this one.

EAGx Boston 2018 Postmortem

I don't think there was necessarily anything wrong with it; I'd just encourage future organizers to consider more explicitly what the goal is and how to achieve it.
