Pre-announcing a contest for critiques and red teaming

Hmmm I think it’s actually really hard to critique EA in a way that EAs will find convincing. I wrote about this below. Curious for feedback: https://twitter.com/tyleralterman/status/1511364183840989194?s=21&t=n_isE2vL3UIJsassqyLs8w

Pre-announcing a contest for critiques and red teaming

I would expect there to be higher quality submissions if the team running this were willing to compile a list of (what they consider to be) all the high quality critiques of EA thus far, from both the EA Forum and beyond. Otherwise I expect you’ll get submissions rehashing the same or similar points.

Milan Griffes on EA blindspots

Fwiw, re: “… Tyler Alterman who don't glom with EA” — just to clarify, I glom very much with broad EA philosophy, but I don’t glom with many cultural tendencies inside the movement, which I believe make the movement an insufficient vehicle for implementing the philosophy. There seems to be an increasing number of former hardcore EA movement folks with the same stance. (Though this is what you might expect as movements grow, change, and/or ossify.)

(I used to do EA movement-building full time. Now I think of myself as an EA who collaborates with the movement from the outside rather than the inside.)

Planning to write up my critique and some suggested solutions soon.

Why more effective altruists should use LinkedIn


Though I suspect it will be difficult to get to a sufficient threshold of EAs using LinkedIn as their social network without something similar to a marketing campaign. Any takers?

Should you start your own project now rather than later?

I agree with Owen's comments and the others. The basic message of my post, however, seems to be something like, "Make sure you compare your plans to reality" while emphasizing the failure mode I see more often in EA (that people overestimate the difficulty of launching their own project).

Would it be correct to say that your comments don't disagree with the underlying message, but rather believe that my framing will have net harmful effects because you predict that many people reading this forum will be incited to take unwise actions?

Should you start your own project now rather than later?

Fascinating - this ranks as both my most downvoted and most shared post of all time.

Why and how to assess expertise

Yup, this is an important thing to keep in the background of expert assessment.

Why and how to assess expertise

I'm glad you think it's nonsense, since - in some strange state of affairs - a certain unnamed person has been crushing on the communal Pom sheet lately. =P

Why and how to assess expertise

Well-observed! Here's my guess on where I rank on the various conditions above:

  • P - Process: Medium. I think my explicit process is still fairly decent, but my implicit processes still need work. E.g., I might perform well at identifying an expert if you gave me a decent amount of time to check markers with my framework, but I'm not fluent enough in my explicit models to do expertise assessments on the fly very well, Sherlock Holmes-style.
  • I - Interaction: Medium. I've spent dozens of hours interacting with expertise assessment tasks, as mentioned in the article. However, for much of this interaction with the data, I did not have strong explicit models (I only developed the expert assessment framework last month.) Since my interaction with the data was not very model-guided for the majority of the time, it's likely that I often didn't pay attention to the right features of the data. So I may have been rather like Bob above:

    Bob, a graphic design novice, pays no attention to the signs and advertisements along the side of the street, even though they are within his field of vision.

    It may have been that lots of data relating to expertise was literally and metaphorically in my field of vision, but that I wasn't focusing on it very well, or wasn't focusing on the proper features.

  • F - Feedback: Low. Since I've only had well-developed explicit models for about a month, I have so far gotten only minor feedback on my predictive power. I have run a few predictive exercises - they went well, but the n is still small. My primary feedback method has been to generate lots of examples of people I am confident have expertise and check whether each marker can be found in all the examples. I also did the opposite: generate lots of examples of people I am confident lack expertise, and check whether each marker is absent from all the examples. I also used normal proxy methods that one can apply to check the robustness of theories without knowing much about them. (E.g., are there logical contradictions?) I used a couple of other methods (e.g., running simulations and checking whether my system 1 yielded error signals), but I'd need to write a full-length article about them for these to make sense. For now, I will just say that they were weak but useful feedback processes. Overall, I looked for correlation between the various feedback methods.
  • T - Time: Low-medium. I have probably spent more time training specifically in domain-general expertise assessment than most people in the world. But this is not saying much, since domain-general expertise assessment is not a thriving or even recognized field, as far as I can tell. Also, I have spent only a small amount of time on the skill relative to the amount of training required to become skilled in domains in a similar reference class. (E.g., I think expertise assessment could be its own scientific discipline, and people spend years gaining sufficient expertise in scientific disciplines.)
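The marker-validation loop described under Feedback - check that each marker is present in all positive examples of expertise and absent from all negative ones - can be sketched in code. This is a minimal illustration only: the marker functions and example data below are hypothetical placeholders, not the actual markers from the framework.

```python
# Hypothetical sketch of the marker-validation feedback loop.
# Each "marker" is a predicate over a person; we check it holds for
# every confident-expert example and fails for every non-expert example.

def has_track_record(person):
    # Placeholder marker: does the person have a verifiable track record?
    return person.get("track_record", False)

def has_tight_feedback_loops(person):
    # Placeholder marker: did the person train with rapid, clear feedback?
    return person.get("feedback_loops", False)

MARKERS = [has_track_record, has_tight_feedback_loops]

# Illustrative example sets (in practice: many real people per set).
experts = [{"name": "A", "track_record": True, "feedback_loops": True}]
non_experts = [{"name": "B", "track_record": False, "feedback_loops": False}]

def validate(markers, experts, non_experts):
    """Return, per marker, whether it separates experts from non-experts:
    present in all experts AND absent from all non-experts."""
    results = {}
    for marker in markers:
        present_in_all = all(marker(p) for p in experts)
        absent_in_all = not any(marker(p) for p in non_experts)
        results[marker.__name__] = present_in_all and absent_in_all
    return results

print(validate(MARKERS, experts, non_experts))
```

A marker that fails this check (e.g., one also found among non-experts) is weak evidence at best, which mirrors the point in the text that correlation across several feedback methods, not any single check, is what builds confidence.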