This is a special post for quick takes by Joseph Lemien. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Ben West recently mentioned that he would be excited about a common application. It got me thinking a little about it. I don't have the technical/design skills to create such a system, but I want to let my mind wander a little bit on the topic. These are just musings and 'thinking out loud,' so don't take any of this too seriously.

What would the benefits be for some type of common application? For the applicant: send an application to a wider variety of organizations with less effort. For the organization: get a wider variety of applicants.

Why not just have t... (read more)

One of the best experiences I've had at a conference was when I went out to dinner with three people that I had never met before. Seeing the popularity of matching systems like Donut in Slack workspaces, I wonder if something analogous could be useful for conferences. I'm imagining a system in which you sign up for a timeslot (breakfast, lunch, or dinner), and are put into a group with between two and four other people. You are assigned a location/restaurant that is within walking distance of the conference venue, so the administrative work of figuring out where to go is more-or-less handled for you. I'm no sociologist, but I think that having a small group is better for conversation than a large group, and better than a two-person group. An MVP version of this could perhaps just be a Google Sheet with some RANDBETWEEN formulas.
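To make that MVP concrete, here is a minimal sketch of the matching logic, written in Python rather than spreadsheet formulas (the names and restaurants are hypothetical placeholders, not anything from a real system): shuffle the signups for a timeslot and deal them into small groups, each assigned a nearby restaurant.

```python
import random

# Minimal sketch: shuffle this timeslot's signups and deal them into
# groups of up to four people, each assigned a nearby restaurant.
# (Names and restaurants are made-up placeholders.)
signups = ["Alice", "Bo", "Carmen", "Dev", "Esin", "Farid", "Grace", "Hana"]
restaurants = ["Noodle Bar", "Falafel House", "Pizzeria"]

random.shuffle(signups)
group_size = 4
groups = [signups[i:i + group_size] for i in range(0, len(signups), group_size)]

for group, restaurant in zip(groups, restaurants):
    print(f"{restaurant}: {', '.join(group)}")
```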

The topics of conversation were pretty much what you would expect for people attending an EA conference: we spoke about interpersonal relationships, careers, moral philosophy, miscellaneous interests, general life advice, and so on. None of us were taking any notes. None of us sent any follow-up emails. We weren't seeking advice on projects. We were si... (read more)

4
James Herbert
22d
@Christoph Hartmann has developed a tool that might be useful! Might try to see if we can use it at EAGxUtrecht. Below is a message he sent me explaining it:  
1
Christoph Hartmann
22d
Thanks for tagging me! Fully agree with you Joseph that an easier way to socialise with strangers at conferences would be great and that's exactly what I'm trying to do with this app. Let me know if you know anybody organising conferences or communities for whom this could be helpful.

Note: I'm sharing this an undisclosed period of time after the conference occurred, because I don't want to inadvertently reveal who this individual is, and I don't want to embarrass them.

I'm preparing to attend a conference, and I've been looking at the Swapcard profile of someone who lists many areas of expertise that I think I'd be interested in speaking with them about: consulting, people management, operations, policymaking, project management/program management, global health & development... wow, this person knows about a lot of different areas. Wow, this person even lists Global coordination & peace-building as an area of expertise! And AI strategy & policy! Then I look at this person's LinkedIn. They finished their bachelor's degree one month ago. So many things arise in my mind.

  • One is about how this typifies a particular subtype of person who talks big about what they can do (which I think has some overlap with "grifter" or "slick salesman," and has a lot of overlap with people who promote themselves on social media).
  • Another is that I notice that this person attended Yale, and it makes me want to think about
... (read more)

I list "social media manager" for Effective Altruism on LinkedIn - but I highlight that it's a voluntary role, not a job. I have done this for over 10 years, maintaining the "effective altruism" page amongst others, as well as other volunteering for EA.

4
Joseph Lemien
4mo
Ya know what? That strikes me as 100% legitimate. I had approached it from the perspective of "there isn't an organization called Effective Altruism, so anyone claiming to work for it is somehow stretching/obfuscating the truth," but I think I was wrong. While I have seen people use an organization's name on LinkedIn without being associated with the organization, your example of maintaining a resource for the EA community seems permissible, especially since you note that it is volunteering.

+1 to the EAG expertise stuff, though I think that it’s generally just an honest mistake/conflicting expectations, as opposed to people exaggerating or being misleading. There aren’t concrete criteria for what to list as expertise so I often feel confused about what to put down.

 

@Eli_Nathan maybe you could add some concrete criteria on swapcard?

e.g. expertise = I could enter roles in this specialty now and could answer questions of curious newcomers (or currently work in this area)

interest = I am either actively learning about this area, or have invested at least 20 hours learning/working in this area.

4
Ivan Burduk
4mo
Hi Caleb, Ivan from the EAG team here — I'm responsible for a bunch of the systems we use at our events (including Swapcard). Thanks for flagging this! It's useful to hear that this could do with more clarity. Unfortunately, there isn't a way we can add help text or subtext to the Swapcard fields due to Swapcard limitations. However, we could rename the labels/field names to make this clearer? For example:

  • Areas of Expertise (3+ months work experience)
  • Areas of Interest (actively seeking to learn more)

Does that sound like it would help you know what to put down? I'll take this to the EAG team and see if we can come up with something better. Let me know if you have other suggestions!
2
Joseph Lemien
26d
For what it is worth, I'd want the bar for expertise to be a lot higher than a few months of work experience. I can't really think of any common career (setting aside highly specialized fields with lots of training, such as astronaut) in which a few months of work experience make someone an expert. Maybe Areas of Expertise (multiple years work experience)? It is tricky, because there are so many edge cases; maybe someone has read all the research on [AREA] and is incredibly knowledgeable without ever having worked in that area.
2
calebp
4mo
That would help me! Right now I mostly ignore the expertise/interest fields, but I could imagine using this feature to book 1:1s if people used a convention like the one you suggested.
8
Tyler Johnston
4mo
The mention of "Pareto Productivity Pro" rang a bell, so I double-checked my copy of How to Launch a High-Impact Nonprofit — and sure enough, towards the end of the chapter on productivity, the book actually encourages the reader to add that title to their LinkedIn verbatim. Not explicitly as a certification, nor with CE as the certifier, but just in general. I still agree that it could be misleading, but I imagine it was done in fairly good faith given the book suggests it.

However, I do think this sort of resume padding is basically the norm rather than the exception. Somewhat related anecdote from outside EA: Harvard College has given out a named award for many decades to the "top 5% of students of the year by GPA." Lots of people — including myself — put this award on their resume hoping it will help them stand out among other graduates. The catch is that grade inflation has gotten so bad that something like 30-40% of students will get a 4.0 in any given year, and they all get the award on account of having tied for it (despite it now not signifying anything like "top 5%"). But the university still describes it as such, and therefore students still describe it that way on resumes and social media (you can actually search "john harvard scholar" in quotes on LinkedIn and see the flexing yourself). Which just illustrates how even large, reputable institutions support this practice through fluffy, misleading awards and certifications.

This post actually spurred me to go and remove the award from my LinkedIn, but I still think it's very easy and normal to accidentally do things that make yourself look better in a resume — especially when there is a "technically true" justification for it (like "the school told me I'm in the top 5%" or "the book told me I could add this to my resume!"), whether or not this is really all that informative for future employers. Also, in the back of my mind, I wonder whether choosing to not do this sort of resume padding creates bad selection effects.
5
Joseph Lemien
4mo
Thanks for mentioning this. I wasn't aware of this context, which changes my initial guesswork quite a bit. I just looked it up: in Chapter 10 (Take Planning), section 10.6 has this phrase: "As you implement most or some of the practices introduced here, you have every right to add the title Pareto Productivity Pro to your business card and LinkedIn profile." So I guess that is endorsed by Charity Entrepreneurship. While I disagree with their choice to encourage people to add what I view as a meaningless title to LinkedIn, I think I can't put so much blame on the individual who did this.
3
Tyler Johnston
4mo
Yeah, agreed that it's an odd suggestion. The idea of putting it on a business card feels so counterintuitive to me that I wonder how literally it's meant to be taken, or if the sentence is really just a rhetorical device the authors are using to encourage the reader.
4
Joseph Lemien
4mo
That is definitely something for us to be aware of. The simplistic narrative of "lots of people are exaggerating and inflating their experiences/skills, so if I don't do it I will be at a disadvantage" is something that I think of when I am trying to figure out wording on a resume.
6
PeterSlattery
4mo
Thanks for writing this, Joseph. Minor, but I don't really understand this claim: Someone made a forum post about taking several months off work to hike, claiming that it was a great career decision and that they gained lots of transferable skills. I see this as LinkedIn-style clout-seeking behavior. I am curious why you think this i) gains them clout or ii) was written with that intention? It seems very different to the other examples, which seem to be about claiming unfair competencies or levels of impact etc. I personally think that taking time off work to hike is more likely to cost you status than give you status in EA circles! I therefore read that post more as an attempt to promote new community norms (around work-life balance, self-discovery, etc.) than to gain status. One disclaimer here is that I think I know this person, so I am probably biased. I am genuinely curious though and not feeling defensive etc.
9
Joseph Lemien
4mo
Sure, I'll try to type out some thoughts on this. I've spent about 20-30 minutes pondering this, and this is what I've come up with. I'll start by saying I don't view this hiking post as a huge travesty; I have a general/vague feeling of a little yuckiness (and I'll acknowledge that such gut instincts/reactions are not always a good guide to clear thinking), and I'll also readily acknowledge that just because I interpret a particular meaning doesn't mean that other people interpreted the same meaning (nor that the author intended that meaning).

(I'll also note that if the author of that hiking post reads this: I have absolutely no ill-will toward you. I am not angry, I enjoyed reading about your hike, and it looked really fun. I know that tone is hard to portray in writing, and that the internet is often a fraught place with petty and angry people around every corner. If you are reading this it might come across as if I am angrily smashing my keyboard simply because I disagree with something. I assure you that I am not angry. I am sipping my tea with a soft smile while I type about your post. I view this less like "let's attack this person for some perceived slight" and more like "let's explore the semantics and implied causation of an experience.")

One factor is that it doesn't seem generalizable. If 10,000 people took time off work to do a hike, how many of them would have the same positive results? From the perspective of simply sharing a story of "this is what happened to me" I think it is fine. But the messaging of "this specific action I took helped me get a new job" seems like the career equivalent of "I picked this stock and it went up during a decade-long bear market, so I will share my story about how I got wealthy."

A second factor is the cause-and-effect. I don't know for sure, but I suspect that the author's network played a much larger role in getting a job than the skills picked up while hiking. The framing of the post was "It was a great career decision…"
2
PeterSlattery
4mo
Thanks for the detailed response, I appreciate it!
4
Rebecca
4mo
The main Swapcard example you mention seems to me like a misunderstanding of EAGs and 1-1s. To take consulting as an example, say I am a 1st year undergrad looking to get into management consulting. I don’t need to speak to a consulting expert (probably they should change the name to be about experience instead of expertise), but I’d be very keen to get advice from someone who recently went through the whole consulting hiring process and got multiple offers, say someone a month out of undergrad.

Or another hypothetical: say I’m really interested in working in an operations/HR role within global health. I reach out to the handful of experts in the field who will be at the conference, but I want to fit in as many 1-1s as I can, and anyway the experts may be too busy, so I also reach out to someone who did an internship on the operations team of a global health charity during college. They’re not an expert in the field, but they could still brain-dump a bunch of stuff they learnt from the internship in 25 min. And these could be about the same recently graduated person.

With the trekking example, I also know the person, and it seems extremely unlikely to me they were trying to gain power or influence (i.e. clout) by writing the post. It also seems to be the case that it did result in some minor outdoorsy career opportunities. A lot of the points about transferability seem like they would apply to many job-to-job changes - e.g. ‘why would you think your experience running a startup would be transferable to working for a large corporation?’ But people change career direction all the time, and indeed EA has a large focus on helping people to do so.
4
SiebeRozendal
4mo
I agree with everything but the last point. Director or CEO simply refers to the name of the position, doesn't it?
4
Joseph Lemien
4mo
Yes, it refers to a position. So if this is actually someone's job title, then there kind of isn't anything wrong with it. And I sympathize with people who found or start their own organization. If I am 22 and I've never had a job before but I create a startup, I am the CEO. So by the denotation there is nothing wrong with it. The connotation makes it a bit tricky, because (generally speaking) the title of CEO (or director, or senior manager, or similar titles) refers to people with a lot of professional experience. I perceive a certain level of... self-aggrandizement? inflating one's reputation? status-seeking? I'm not quite sure how to articulate the somewhat icky feeling I have about people giving themselves impressive-sounding titles.
4
cata
4mo
I don't know if this is a fair assessment, but it's hard for me to expect anything else as long as many EAs are getting sourced from elite universities, since that's basically the planetary focus for the consumption and production of inflated credentials.

A very tiny, very informal announcement: if you want someone to review your resume and give you some feedback or advice, send me your resume and I'll help. If we have never met before, that is okay. I'm happy to help you, even if we are total strangers.

For the past few months I've been active with a community of Human Resources professionals, and I've found it quite nice to help people improve their resumes. I think there are a lot of people in EA who are looking for a job as part of a path to greater impact, but many people feel somewhat awkward or ashamed to ask for help. There is also a lot of 'low-hanging fruit' for making a resume look better, from simple formatting changes that make a resume easier to understand to wordsmithing the phrasing.

To be clear: this is not a paid service, I'm not trying to drum up business for some kind of a side-hustle, and I'm not going to ask you to subscribe to a newsletter. I am just a person who is offering some free low-key help.

This is both a very kind and a very helpful thing to offer. This is something that can help people an awful lot in terms of their career. 

4
Clifford
6mo
Just to say I took Joseph up on this and found it very helpful! I recommend doing the same!

Best books I've read in 2023

(I want to share, but this doesn't seem relevant enough to EA to justify making a standard forum post. So I'll do it as a quick take instead.)

People who know me know that I read a lot.[1] Although I don’t tend to have a huge range, I do think there is a decent variety in the interests I pursue: business/productivity, global development, pop science, sociology/culture, history. Of all the books I read in 2023, here is my best guess as to the ones that would be of most interest to an effective altruist.

For people who haven’t explored much yet

  • Scrum: The Art of Doing Twice the Work in Half the Time. If you haven’t worked in 'startupy' or lean organizations, this book may introduce you to some new ideas. I first worked for a startup in my late 20s, and I wish that I had read this book at that point.
  • Developing Cultural Adaptability: How to Work Across Differences. This 32-page PDF is a good introduction to ideas of working with people from other cultures. It will be particularly useful if you are going to work in a different country (although there are cultural variations within a single country). This is a fairly light introduction, so don't stop h
... (read more)
4
Stephen Clare
4mo
Super interesting list! I hadn't heard of most of these and have ordered a few of them to read. Thank you!

I'm currently reading a lot of content to prepare for HR certification exams (from HRCI and SHRM), and in a section about staffing I came across this:

some disadvantages are associated with relying solely on promotion from within to fill positions of increasing responsibility:
■ There is the danger that employees with little experience outside the organization will have a myopic view of the industry

Just the other day I had a conversation about the tendency of EA organizations to over-weight how "EA" a job candidate is,[1] so it particularly struck me to come across this today. We had joked about how a recent grad with no work experience would try figuring out how to do accounting from first principles (the unspoken alternative was to hire an accountant). So perhaps I would interpret the above quotation in the context of EA as "employees with little experience outside of EA are more likely to have a myopic view of the non-EA world." In a very simplistic sense, if we imagine EA as one large organization with many independent divisions/departments, a lot of the hiring (although certainly not all) is internal hiring.[2]

And I'm wondering how much expertise, skill, or experience i... (read more)

I think that the worries about hiring non-EAs are slightly more subtle than this.

Sure, they may be perfectly good at fulfilling the job description, but how does hiring someone with different values affect your organisational culture? It seems like in some cases it may be net-beneficial to have someone around with a different perspective, but it can also have subtle costs in terms of weakening the team spirit.

Then you get into the issue where, if there are some roles you are fine hiring non-EAs for and some where you want value alignment, you may have an employee who you would not want to receive certain promotions or be elevated into certain positions, which isn't the best position to be in.

Not to mention, often a lot of time ends up being invested in skilling up an employee and if they are value-aligned then you don't necessarily lose all of this value when they leave.

2
Joseph Lemien
10mo
Chris, would you be willing to talk more about this issue? I'd love to hear about some of the specific situations you've encountered, as well as to explore broad themes or general trends. Would it be okay if I messaged you to arrange a time to talk?
2
Chris Leong
10mo
Sorry, I’m pretty busy. But feel free to chat if we ever run into each other at an EA event, or to book a 1-on-1 at an EA Global.

I'm concerned whenever I see things like this:

"I want to place [my pet cause], a neglected and underinvested cause, at the center of the Effective Altruism movement."[1]

In my mind, this seems anti-scouty. Rather than finding what works and what is impactful, it is saying "I want my team to win." Or perhaps the more charitable interpretation is that this person is talking about a rough hypothesis and I am interpreting it as a confident claim. Of course, there are many problems with drawing conclusions from small snippets of text on the internet, and if I meet this person and have a conversation I might feel very differently. But at this point it seems like a small red flag, demonstrating that there is a bit less cause-neutrality here (and a bit more being wedded to a particular issue) than I would like. But it is hard to argue with personal fit; maybe this person simply doesn't feel motivated about lab grown meat or bednets or bio-risk reduction, and this is their maximum impact possibility.

  1. ^

    I changed the exact words so that I won't publicly embarrass or draw attention to the person who wrote this. But to be clear, this is not a thought experiment of mine, someone actually wrote thi

... (read more)

In my experience, many of those arguments are bad and not cause-neutral, though to me your take seems too negative -- cause prioritization is ultimately a social enterprise and the community can easily vet and detect bad cases, and having proposals for new causes to vet seems quite important (i.e. the Popperian insight, individuals do not need to be unbiased, unbiasedness/intersubjectivity comes from open debate).

3
Joseph Lemien
2mo
You make a good point. I probably allow myself to be too affected by claims (such as "saving the great apes should be at the center of effective altruism"), when in reality I should simply allow the community sieve to handle them.

This feels misplaced to me. Making an argument for some cause to be prioritised highly is in some sense one of the core activities of effective altruism. Of course, many people who'd like to centre their pet cause make poor arguments for its prioritisation, but in that case I think the quality of argument is the entire problem, not anything about the fact they're trying to promote a cause. "I want effective altruists to highly prioritise something that they currently don't" is in some sense how all our existing priorities got to where they are. I don't think we should treat this kind of thing as suspicious by nature (perhaps even the opposite).

8
Ian Turner
2mo
Hi Ben,

It seems to me that one should draw a distinction between “I see this cause as offering good value for money, and here is my reasoning why” and “I have this cause that I like and I hope I can get EA to fund it”. Sometimes the latter is masquerading as the former, using questionable reasoning. Some examples that seem like they might be in the latter category to me:

  • https://forum.effectivealtruism.org/posts/Dytsn9dDuwadFZXwq/fundraising-for-a-school-in-liberia
  • https://forum.effectivealtruism.org/posts/R5r2FPYTZGDzWdJEY/how-to-get-wealthier-folks-involved-in-mutual-aid
  • https://forum.effectivealtruism.org/posts/zsLcixRzqr64CacfK/zzappmalaria-twice-as-cost-effective-as-bed-nets-in-urban

In any case though, I’m not sure it makes a difference in terms of the right way to respond. If the reasoning is suspect, or the claims of evidence are missing, we can assume good faith and respond with questions like “why did you choose this program”, “why did you conduct the analysis in this way”, or “have you thought about these potentially offsetting considerations”. In the examples above, the original posters generally haven’t engaged with these kinds of questions.

If we end up with people coming to EA looking for resources for ineffective causes, and then sealioning over the reasoning, I guess that could be a problem, but I haven’t seen that here much, and I doubt that sort of behavior would ultimately be rewarded in any way.

Ian

The third one seems at least generally fine to me -- clearly the poster believes in their theory of change and isn't unbiased, but that's generally true of posts by organizations seeking funding. I don't know if the poster has made a (metaphorically) better bednet or not, but thought the Forum was enhanced by having the post here.

The other two are posts from new users who appear to have no clear demonstrated connection to EA at all. The occasional donation pitch or advice request from a charity that doesn't line up with EA very well at all is a small price to pay for an open Forum. The karma system took care of preventing diversion of the Forum from its purposes. A few kind people offered some advice. I don't see any reason for concern there.

1
Ian Turner
2mo
I agree, and to be clear I’m not trying to say that any forum policy change is needed at this time.
4
Elizabeth
2mo
Those posts all go out of their way to say they're new to EA. I feel pretty differently about someone with an existing cause discovering EA and trying to fundraise vs someone who integrated EA principles[1] and found a new cause they think is important.
  1. ^ I don't love the phrase "EA principles"; EA gets some stuff critically wrong and other subcultures get some stuff right. But it will do for these purposes.
2
Joseph Lemien
2mo
I think that to a certain extent that is right, but this context was less along the lines of "here is a cause that is going to be highly impactful" and more along the lines of "here is a cause that I care about." Less "mental health coaching via an app can be cost effective" and more like "let's protect elephants." But I do think that in a broad sense you are correct: proposing new interventions, new cause areas, etc., is how the overall community progresses.

I think a lot of the EA community shares your attitude regarding exuberant people looking to advance different cause areas or interventions, which actually concerns me. I am somewhat encouraged by the disagreement with your comment that makes this disposition more explicit. Currently, I think that EA, in terms of extension of resources, has much more solicitude for thoughts within or adjacent to recognized areas. Furthermore, an ability to fluently convey one's ideas in EA terms or with an EA attitude is important.

Expanding on jackva re the Popperian insight: having individuals passionately explore new areas to exploit is critical to the EA project, and I am a bit concerned that EA is often uninterested in exploring in directions where a proponent lacks some of EA's usual trappings and/or lacks status signals. I would be inclined to be supportive of passion and exuberance in the presentation of ideas where this is natural to the proponent.

4
Joseph Lemien
2mo
I suspect you are right that many of us (myself included) focus more than we ought to on how similar an idea sounds in relation to ideas we are already supporting. I suppose maybe a cruxy aspect of this is how much effort/time/energy we should spend considering claims that seem unreasonable at first glance? If someone honestly told me that protecting elephants (as an example) should be EA's main cause area, the two things that go through my head first are that either this person doesn't understand some pretty basic EA concepts,[1] or there is something really important to their argument that I am completely ignorant of. But depending on how extreme a view it is, I also wonder about their motives. Which is more-or-less what led me to viewing the claim as anti-scouty. If John Doe has been working on elephant protection (sorry to pick on elephants) for many years and now claims that elephant protection should be a core EA cause area, I'm automatically asking if John is A) trying to get funding for elephant protection or B) trying to figure out what does the most good and to do that. While neither of those are villainous motives, the second strikes me as a bit more intellectually honest. But this is a fuzzy thing, and I don't have good data to point to. I also suspect that I myself may have an over-sensitive "bullshit detector" (for lack of a more polite term), so I end up getting false positives sometimes.
  1. ^ Expected value, impartiality, ITN framework, scout mindset, and the like
6
Brad West
2mo
I agree that advocacy inspired by other-than-EA frameworks is a concern; I just think that the EA community is already quite inclined to express skepticism for new ideas and possible interventions. So, the worry that someone with high degrees of partiality for a particular cause manages to hijack EA resources is much weaker than the concern that potentially promising cases may be ignored because they have an unfortunate messenger.

the worry that someone with high degrees of partiality for a particular cause manages to hijack EA resources is much weaker than the concern that potentially promising cases may be ignored because they have an unfortunate messenger

I think you've phrased that very well. As much as I may want to find the people who are "hijacking" EA resources, the benefit of that is probably outweighed by how it disincentivizes people to try new things. Thanks for commenting back and forth with me on this. I'll try to jump the gun a bit less from now on when it comes to gut-feeling evaluations of new causes.

2
Brad West
2mo
I can only aspire to be as good a scout as you, Joseph. Cheers
4
Jason
2mo
I think it's important to consider that the other person may be coming from a very different ethical framework than you are. I wouldn't likely support any of the examples in your footnote, but one can imagine an ethical framework in which the balance looks closer than it does to me. To be clear, I highly value saving the lives of kids under five as the standard EA lifesaving projects do. But: I can't objectively show that a framework that assigns little to no value to averting death (e.g., because the dead do not suffer) is a bad one. And such a significant difference in values could be behind some statements of the sort you describe.

Some people involved in effective altruism have really great names for their blogs: Ollie Base has Base Rates, Diontology from Dion Tan, and Ben West has Benthamite. It is really cool how people are able to take their names and with some slight adjustments make them into cool references. If I was the blogging type and my surname wasn't something so uncommon/unique, I would take a page from their book.

"When life gives you Lemiens"?

2
Joseph Lemien
2mo
Oh, that's not bad! Maybe I'll use that someday. 🤣 Unfortunately, I think that will encourage people to mispronounce my surname; it is pronounced less like "lemon" and more in a way that rhymes with "the mean" or "the keen."
1
Arvin
2mo
"Lemiently Stoic"

I just had a call with a young EA from Oyo State in Nigeria (we were connected through the excellent EA Anywhere), and it was a great reminder of how little I know regarding malaria (and public health in developing countries more generally). In a very simplistic sense: are bednets actually the most cost effective way to fight against malaria?

I've read a variety of books from the development economics canon, I'm a big fan of the use of randomized control trials in social science, I remember worm wars and microfinance not being so amazing as people thought and... (read more)

Some questions cause me to become totally perplexed. I've been asked these (or variations of these) by a handful of people in the EA community. These are not difficulties or confusions that require PhD-level research to explain; instead, I think they represent a sort of communication gap/challenge/disconnect and differing assumptions.

Note that these are fuzzy musings on communication gaps, and on differing assumptions of what is normal. In a very broad sense you could think of this as an extension of the maturing/broadening of perspectives that we all do when w... (read more)

A brief thought on 'operations' and how it is used in EA (a topic I find myself occasionally returning to).

It struck me that operations work and non-operations work (within the context of EA) maps very well onto the concept of staff and line functions. Line functions are those that directly advance an organization's core work, while staff functions are those that do not. Staff functions play an advisory and supporting role; they help the line functions. Staff functions are generally things like accounting, finance, public relations/communication, legal, and HR. Line functions are generally things like sales, marketing, production, and distribution. The details will vary depending on the nature of the organization, but I find this to be a somewhat useful framework for bridging concepts between EA and the broader world.

It also helps illustrate how little information is conveyed if I tell someone I work in operations. Imagine 'translating' that into non-EA verbiage as I work in a staff function. Unless the person I am talking to already has a very good understanding of how my organization works, they won't know what I actually do.

A very minor thought.

TLDR: Try to be more friendly and supportive, and to display/demonstrate that in a way the other person can see.

Slightly longer musings: if you attend an EA conference (or some other event that involves you listening to a speaker), I suggest that you:

  • look at the speaker while they are speaking
  • have some sort of smile, nodding, or otherwise encouraging/supportive body language or facial expression.

This is likely less relevant for people who are very experienced public speakers, but for people who are less comfortable and at ease speaking in front of a crowd[1] it can be pretty disheartening to look out at an audience and see the majority of people looking at their phones and laptops.

I was at EAGxNYC recently, and I found it a little disheartening how many people in the audience were paying attention to their phones and laptops instead of the speaker.[2] I am guilty of doing this in at least one talk that I didn't find interesting, and I am moderately ashamed of my behavior. I know that I wouldn't want someone to do that to me if I was speaking in front of a crowd. One speaker mentioned to me later that they appreciated my n... (read more)

I'm skimming through an academic paper[1] that I'd roughly describe as cross-cultural psychology about morality, and the stark difference between what kinds of behaviors Americans and Chinese people view as immoral[2] was surprising to me.

The American list has so much of what I would consider as causing harm to others, or malicious. The Chinese list has a lot of what I would consider as rude, crass, or ill-mannered. The differences here remind me of how I have occasionally pushed against the simplifying idea of words having easy equivalents between English and Chinese.[3]

There are, of course, issues with taking this too seriously: issues like spitting, cutting in line, or urinating publicly are much more salient issues in Chinese society than in American society. I'm also guessing that news stories about murders and thefts are more commonly seen in American media than in China's domestic media. But overall I found it interesting, and a nice nudge/reminder against the simplifying idea that "we are all the same."

  1. ^

    Dranseika, V., Berniūnas, R., & Silius, V. (2018). Immorality and bu daode, unculturedness and bu wenming. Journal of Cultural Cognitive Science, 2, 71-84.

  2. ^

    Note that th

... (read more)
4
MichaelStJules
8mo
I wonder if the main difference is that the Americans and Lithuanians are responding more based on how bad the things seem to be, while the Chinese are responding more based on how common they are. Most of the stuff on the Chinese list also seems bad to me, just not nearly as bad as violence.
4
Siao Si
8mo
I'd think the article you're referencing (link) basically argues against considering "daode" to mean "morality" and vice-versa.  The abstract: "In contemporary Western moral philosophy literature that discusses the Chinese ethical tradition, it is a commonplace practice to use the Chinese term daode 道德 as a technical translation of the English term moral. The present study provides some empirical evidence showing a discrepancy between the terms moral and daode."
3
Joseph Lemien
8mo
Yes. The idea of English immoral and Chinese bu daode not being quite the same is a big part of the paper.
2
trevor1
8mo
I think this is a really big and valuable finding, and generally agree with your thinking about language and morality differences, which are valuable research areas.

Anyone doing a deeper dive in the paper might want to think about whether Chinese survey participants are surprised to see relatively extreme and serious crimes like theft and violence and decide not to touch those concepts with a ten foot pole, and default to things that people frequently talk about or are frequently criticized by official news sources and propaganda. Not that they're super afraid of checking a box or anything; it's just that it's only a survey and they don't know the details of what's going on, and by default the tiny action is not worth something complicated happening or getting involved in something weird that they don't understand.

Or maybe it's only that they think it's acceptable to criticize things that everyone is obviously constantly criticizing, especially in an unfamiliar environment where everything is being recorded on paper permanently (relative to verbal conversations which are widely considered safer and more comfortable). It's not that people are super paranoid, but, like, why risk it if some unfair and bizarre situation could theoretically happen (e.g. corruption-related, someone's filling quotas), and conformity is absolutely guaranteed to be safe and cause no major or minor disturbances to your daily life?

I didn't read the paper, and these musings should only be seriously considered as potentially helpful for people reading the paper. The paper seems to have run other forms of surveys that point towards similar conclusions.
3
Timothy Chan
8mo
From the study it looks like participants were given a prompt and asked to "free-list" instead of checking boxes, so it might be more indicative of what's actually on people's minds. The immoral behaviors prompt being:

My impression is that the differences between the American and Chinese lists (with the Lithuanian list somewhat in between) appear to be a function of differences in the degree of societal order (i.e., crime rates, free speech), cultural differences (i.e., extent of influence of: Anglo-American progressivism, purity norms of parts of Christianity, traditional cultures, and Confucianism), and demographics (i.e., topics like racism/discrimination that might arise in contexts that are ethnically diverse instead of homogenous).

I'm very pleased to see that my writing on the EA Forum is now referenced in a job posting from Charity Entrepreneurship to explain to candidates what operations management is, described as "a great overview of Operations Management as a field." This gives me some warm fuzzy feelings.

I wish that people wouldn't use "rat" as shorthand for "rationalist."

For people who aren't already aware of the lingo/jargon it makes things a bit harder to read and understand. Unlike terms like "moral patienthood" or "mesa-optimizers" or "expected value," a person can't just search Google to easily find out what is meant by a "rat org" or a "rat house."[1] This is a rough idea, but I'll put it out there: the minimum a community needs to do in order to be welcoming to newcomers is to let newcomers figure out what is being said.

Of course, I don't expect that reality will change to meet my desires, and even writing my thoughts here makes me feel a little silly, like a linguistic prescriptivist telling people to avoid dangling participles.

  1. ^

    Try searching Google for what is rat in effective altruism and see how far down you have to go before you find something explaining that rat means rationalist. If you didn't know it already and a writer didn't make it clear from context that "rat" means "rationalist", it would be really hard to figure out what "rat" means.

3
Buck
10mo
For what it’s worth, gpt4 knows what rat means in this context: https://chat.openai.com/share/bc612fec-eeb8-455e-8893-aa91cc317f7d
4
Joseph Lemien
10mo
(I'm writing with a joking, playful, tongue-in-cheek intention) If we are setting the bar at "to join our community you need to be at least as well read as GPT-4," then I think we are setting the bar too high. More seriously: I agree that it isn't impossible for someone to figure out what it means, it is just a bit harder than I would like. Like when someone told me to do a "bow tech" and I had no idea what she was talking about, but it turns out she was just using a different name for a Fermi estimate (a BOTEC).
4
Buck
10mo
I agree that we should tolerate people who are less well read than GPT-4 :P
1
JanPro
10mo
I have the opposite stance, it is a cool and cute shorthand, so I'd like for it to be the widely accepted meaning of rat.

I want to provide an alternative to Ben West's post about the benefits of being rejected. This isn't related to CEA's online team specifically, but is just my general thoughts from my own experience doing hiring over the years.

While I agree that "the people grading applications will probably not remember people whose applications they reject," two scenarios[1] come to mind for job applicants that I remember[2]:

  • The application is much worse than I expected. This would happen if somebody had a nice resume, a well-put together cover letter, and then showed up to an interview looking slovenly. Or if they said they were good at something, and then were unable to demonstrate it when prompted.[3]
  • Something about the application is noticeably abnormal (usually bad). This could be the MBA with 20 years of work experience who applied for an entry-level part-time role in a different city & country than where he lived.[4] This could be the French guy I interviewed years ago who claimed to speak unaccented American English, but clearly didn't.[5] It could be the intern who came in for an interview and requested a daily stipend that was higher than the salary of anyone on my team. I
... (read more)

Anyone can call themselves a part of the EA movement.

I sort of don't agree with this idea, and I'm trying to figure out why. It is so different from a formal membership (like being a part of a professional association like PMI), in which you have a list of members and maybe a card or payment.

Here is my current perspective, which I'm not sure that I fully endorse: on the 'ladder' of being an EA (or of any other informal identity) you don't have to be on the very top rung to be considered part of the group. You probably don't even have to be on the top handful of rungs. Is halfway up the ladder enough? I'm not sure. But I do think that you need to be higher than the bottom rung or two. You can't just read Doing Good Better and claim to be an EA without any additional action. Maybe you aren't able to change your career due to family and life circumstances. Maybe you don't earn very much money, and thus aren't donating. I think I could still consider you an EA if you read a lot of the content and are somehow engaged/active. But there has to be something. You can't just take one step up the ladder, then claim the identity and wander off.

My brain tends to jump to analogies, so I'll use t... (read more)

7
zchuang
10mo
To give more colour to this: during the hype of FTX Future Fund, a lot of people called themselves EAs in order to show value alignment and try to get funding, and it was painfully awkward and obvious. I think the feeling you're naming is something like a fair-weather EA effect that dilutes trust within the community and the self-commitment of the label.
6
Joseph Lemien
10mo
That is a good point, and I like the phrasing of fair-weather EA.
4
Julia_Wise
10mo
I interpreted it in a more literal way, like it's just true that anyone can literally call themselves part of EA. That doesn't mean other people consider it accurate.
2
Joseph Lemien
10mo
Good point.
2
NickLaing
10mo
I get the sentiment, but what's the alternative? I don't think you can define who gets to identify as something, whether that's gender or religion or group membership. I'm a Christian and I think anyone should be able to call themselves a Christian, no issue with that at all no matter what they believe or whatever their level of commitment or how good or bad they are as a person. Any alternative means that someone else has to make a judgement call based on objective or subjective criteria, which I'm not comfortable with. TBH I doubt people will be clamouring for the EA title for status or popularity haha.
5
Joseph Lemien
10mo
Yeah, I think you are right in implying there aren't really any good alternatives. We could try having a formal list of members who all pay dues to a central organization, but (having put almost no thought into it) I assume that would come with its own set of problems. And I also feel some discomfort with the implication that we should have someone else making a judgment based on externally visible criteria. I probably wouldn't make the cut! (I hardly donate at all, and my career hasn't been particularly impactful either.)

Your example of Christianity makes me think about EA being a somewhat "action-based identity." This is what I mean: I can verbally claim a particular identity (Christianity, or EA, or something else), and that matters to an extent. But what I do matters a lot also, especially if it is not congruent with the identity I claim. If I claim to be Christian but I fail to treat my fellow man with love and instead I am cruel, other people might (rightly) question how Christian I am. If I claim to be an EA but I behave in anti-EA ways (maybe I eat lots of meat, I fail to donate discretionary funds, I don't work toward reducing suffering, etc.) I won't have a lot of credibility as an EA. I'm not sure how to parse the difference between a claimed identity and a demonstrated identity, but I'd guess that I could find some good thoughts about it if I were willing to spend several hours diving into some sociology literature about identity. I am curious about it, but I am 20-minutes curious, not 8-hours curious. Haha.

EDIT: after mulling over this for a few more minutes, I've made this VERY simplistic framework that roughly illustrates my current thinking. There is a lot of interpretation to be made regarding what behavior counts as in accordance with an EA identity or incongruent with an EA identity (eating meat? donating only 2%? not changing your career?). I'm not certain that I fully endorse this, but it gives me a starting point for thinking about it.
2
NickLaing
10mo
100% I really like this. You can claim any identity, but how much credibility you have with that identity depends on your "demonstrated identity". There is risk to the movement, though, with this kind of all-takers approach. Before, I would have thought that the odd regular person behaving badly while claiming to be EA wasn't a big threat. Then there was SBF and the sexual abuse scandals. These, however, were not so much an issue of fringe, non-committed people claiming to be EA and tarnishing the movement, but mostly high-profile central figures tarnishing the movement. Reflecting on this, perhaps the actions of high-profile or "core" people matter more than those of people on the edge, who might claim to be EA without serious commitment.
3
zchuang
10mo
I mean, I think it'll come in waves. As I said in my comment below, when FTX Future Fund was up and regrants abounded, I had many people around me fake the EA label, with hilarious epistemic tripwires. Then when FTX collapsed those people went quiet. I think as AI Safety gets more prominent this will happen again in waves. I know a few humanities people pivoting to talking about AI Safety, and AI bias people thinking of how to get grant money.

If anybody wants to read and discuss books on inclusion, diversity, and similar topics, please let me know. This is a topic that I am interested in, and a topic that I want to learn more about. My main interest is in the angle/aspect of diversity in organizations (such as corporations, non-profits, etc.), rather than broadly society-wide issues (although I suspect they cannot be fully disentangled).

I have a list of books I intend to read on DEI topics (I've also listed them at the bottom of this quick take in case anybody can't access my shared Notio... (read more)

Every now and then I see (or hear) people involved in EA refer to Moloch,[1] as if this is a specific force that should be actively resisted and acted against. Genuine question: are people just using the term "Moloch" to refer to incentives[2] that nudge us to do bad things? Is there any reason why we should say "Moloch" instead of "incentives," or is this merely a sort of in-group shibboleth? Am I being naïve or otherwise missing something here?

  1. ^

    Presumably, Scott Alexander's 2014 Meditations on Moloch essay has been very widely read among EAs.

  2. ^

    As well as the other influences on our motives from things external to ourselves, such as the culture and society that we grew up in, or how we earn respect and admiration from peers.

5
Will Howard
8mo
I see it as "incentives that nudge us to do bad things", plus this incentive structure being something that naturally emerges or is hard to avoid ("the dictatorless dictatorship"). I think "Moloch" gets this across a bit better than just "incentives" which could include things like bonuses which are deliberately set up by other people to encourage certain behaviour.
4
trevor1
8mo
This is actually a pretty big issue. It was basically locked in to Meditations on Moloch because it was too good. The essay does a really good job explaining it, and giving examples that create the perspective you need to understand the broad applicability of the concept, but has too many words; "incentives" or even a single phrase (e.g. "race to the bottom") would have fewer words, but it wouldn't give the concept the explanation that it's worth. Maybe there could be some kind of middle ground.
3
Joseph Lemien
8mo
I'll admit that I really like how there are so many examples shared in Meditations on Moloch, which helps it serve as a kind of intuition flooding.
0
trevor1
8mo
oh my GOD I cannot tell you how much I needed this

In a recent post on the EA forum (Why I Spoke to TIME Magazine, and My Experience as a Female AI Researcher in Silicon Valley), I couldn't help but notice that comments from famous and/or well-known people got lots more upvotes than comments by less well-known people, even though the content of the comments was largely similar.

I'm wondering to what extent this serves as one small data point in support of the "too much hero worship/celebrity idolization in EA" hypothesis, and (if so) to what extent we should do something about it. I feel kind of conflicted, because in a very real sense reputation can be a result of hard work over time,[1] and it seems unreasonable to say that people shouldn't benefit from that. But it also seems antithetical to the pursuit of truth, philosophy, and doing good to weight the messenger so heavily over the message.

I'm mulling this over, but it is a complex and interconnected enough issue that I doubt I will create any novel ideas with some casual thought.

Perhaps just changing the upvote buttons to something more like this content nurtures a discussion space that lines up with the principles of EA? I'm not confident that would change muc... (read more)

I'm not convinced by this example; in addition to expressing the view, Toby's message is a speech act that serves to ostracize behaviour in a way that messages from random people do not. Since his comment achieves something the others do not it makes sense for people to treat it differently. This is similar to the way people get more excited when a judge agrees with them that they were wronged than when a random person does; it is not just because of the prestige of the judge, but because of the consequences of that agreement.

8
Joseph Lemien
10mo
I'm glad that you mentioned this. This makes sense to me, and I think it weakens the idea of this particular circumstance as an example of "celebrity idolization." If the EA forum had little emoji reactions for this made me change my mind or this made me update a bit, I would use them here. 😁
2
Jason
10mo
I agree as to the upvotes but don't find the explanation as convincing on the agreevotes. Maybe many people's internal business process is to only consider whether to agreevote after having decided to upvote?
3
Larks
10mo
Yeah, and in general there's an extremely high correlation between upvotes and agreevotes, perhaps higher than there should be. It's also possible that some people don't scroll to the bottom and read all the comments.
3
Habryka
10mo
I definitely think you should expect a strong correlation between "number of agree-votes" and "number of approval-votes", since those are both dependent on someone choosing to engage with a comment in the first place, my guess is this explains most of the correlation. And then yeah, I still expect a pretty substantial remaining correlation. 
2
Joseph Lemien
10mo
I wish that it was possible for agree votes to be disabled on comments that aren't making any claim or proposal. When I write a comment saying "thank you" or "this has given me a lot to think about" and people agree-vote (or disagree-vote!), it feels odd: there isn't even anything to agree or disagree with there!
1
Aleksi Maunu
10mo
In those cases I would interpret agree votes as "I'm also thankful" or "this has also given me a lot to think about"

If we interpret an up-vote as "I want to see more of this kind of thing", is it so surprising that people want to see more such supportive statements from high-status people?

I would feel more worried if we had examples of e.g. the same argument being made by different people and the higher-status person getting rewarded more. Even then - perhaps we do really want to see more of high-status people reasoning well in public.

Generally, insofar as karma is a lever for rewarding behaviour, we probably care more about the behaviour of high-status people and so we should expect to see them getting more karma when they behave well, and also losing more when they behave badly (which I think we do!). Of course, if we want karma to be something other than an expression of what people want to see more of then it's more problematic.

2
Jason
10mo
Toby's average karma-per-comment definitely seems higher than average, but it isn't so much higher than that of other (non-famous) quality posters I spot-checked as to suggest that there are a lot of people regularly upvoting his comments due to hero worship/celebrity idolization. I can't get the usual karma leaderboard to load to more easily point to actual numbers as opposed to impressionistic ones.
2
quinn
10mo
I have this concept I've been calling "kayfabe inversion" where attempts to create a social reality that $P$ accidentally enforces $\not P$. The EA vibe of "minimize deference, always criticize your leaders" may just be, by inscrutable social pressures, increasing deference and hero worship and so on. Spurred by my housemate's view of DoD and its ecosystem of contractors (because their dad has a long career in it): that perhaps the military's explicit deference and hierarchies actually make it easier to do meaningful criticism of or disagreement with leaders, compared to the implicit hierarchies that emerge when you say that you want to minimize deference. Something along these lines. Perhaps this hypothesis is made clear by a close reading of tyranny of structurelessness, idk.
2
Joseph Lemien
10mo
Could I bother you to rephrase "$P$ accidentally enforces $\neg P$"? I don't know what you mean by using these symbols.
2
quinn
10mo
Oh sorry I just meant a general form for "any arbitrary quality a community may wish to cultivate" 

I suspect that the biggest altruistic counterfactual impact I've had in my life was merely because I was in the right place at the right time: a moderately heavy cabinet/shelf thing was tipping over and about to fall on a little kid (I don't think it would have killed him. He probably would have had some broken bones, lots of bruising, and a concussion). I simply happened to be standing close enough to react.

It wasn't as a result of any special skillset I had developed, nor of any well-thought-out theory of change; it was just happenstance. Realistically, ... (read more)

This is in relation to the Keep EA high-trust idea, but it seemed tangential enough and butterfly idea-ish that it didn't make sense to share this as a comment on that post.

Rough thoughts: focus a bit less on people and a bit more on systems. Some failures are 'bad actors,' but my rough impression is that far more often bad things happen because either:

  • the system/structures/incentives nudge people toward bad behavior, or
  • the system/structures/incentives allow bad behavior

It very much reminds me of "Good engineering eliminates users being able to do the wrong thing as much as possible. . . . You don't design a feature that invites misuse and then use instructions to try to prevent that misuse." I've also just learned about the hierarchy of hazard controls, which seems like a nice framework for thinking about 'bad things.'

I think it is great to be able to trust people, but I also want institutions designed in such a way that it is okay if someone is in the 70th percentile of trustworthiness rather than the 95th percentile of trustworthiness.

Low confidence guess: small failures often occur not because people are malicious or selfish, but because they aren't aware of better ways to do t... (read more)

Decoding the Gurus is a podcast in which an anthropologist and a psychologist critique popular guru-like figures (Jordan Peterson, Nassim N. Taleb, Brené Brown, Ibram X. Kendi, Sam Harris, etc.). I've listened to two or three previous episodes, and my general impression is that the hosts are too rambly/joking/jovial, and that the interpretations are harsh but fair. I find the description of their episode on Nassim N. Taleb to be fairly representative:

Taleb is a smart guy and quite fun to read and listen to. But he's also an infinite singularity of arrogance and hyperbole. Matt and Chris can't help but notice how convenient this pose is, when confronted with difficult-to-handle rebuttals.

Taleb is a fun mixed bag of solid and dubious claims. But it's worth thinking about the degree to which those solid ideas were already well... solid. Many seem to have been known for decades even by all the 'morons, frauds and assholes' that Taleb hates.

To what degree does Taleb's reputation rest on hyperbole and intuitive-sounding hot-takes?

A few weeks ago they released an episode about Eliezer Yudkowsky titled Eliezer Yudkowsky: AI is going to kill us all. I'm only partway through listening to it... (read more)

7
Manuel Del Río Rodríguez
10mo
You're right, but it does feel like some pretty strong induction, though not just for not accepting the claim at face value, but for demanding some extraordinary evidence. I'm speaking from the p.o.v. of a person ignorant of the topic, just making the inference from the perennially recurring apocalyptic discourses.
5
titotal
10mo
True, but you only have a finite amount of time to spend investigating claims of apocalypses. If you do a deep dive into the arguments of one of the main proponents of a theory, and find that it relies on dubious reasoning and poor science (like the "mix proteins to make diamondoid bacteria" scenario), then dismissal is a fairly understandable response. If AI safety wants to prevent this sort of thing from happening, it should pick better arguments and better spokespeople, and be more willing to call out bad reasoning when it happens.

I'm reading Brotopia: Breaking Up the Boys' Club of Silicon Valley, and this paragraph stuck in my head. I'm wondering about EA and "mission alignment" and similar things.

Which brings me to a point the PayPal Mafia member Keith Rabois raised early in this book: he told me that it’s important to hire people who agree with your “first principles”—for example, whether to focus on growth or profitability and, more broadly, the company’s mission and how to pursue it. I’d agree. If your mission is to encourage people to share more online, you shouldn’t hire some

... (read more)

I've been thinking about small and informal ways to build empathy[1]. I don't have big or complex thoughts on this (and thus I'm sharing rough ideas as a quick take rather than as a full post). This is a tentative and haphazard musing/exploration, rather than a rigorous argument.

  • Read about people who have various hardships or suffering. I think that this is one of the benefits of reading fiction: it helps you more realistically understand (on an emotional level) the lives of other people. Not all fiction is created equal, and you probably won't develop
... (read more)

I didn't learn about Stanislav Petrov until I saw announcements about Petrov Day a few years ago on the EA Forum. My initial thought was "what is so special about Stanislav Petrov? Why not celebrate Vasily Arkhipov?"

I had known about Vasily Arkhipov for years, but the reality is that I don't think one of them is more worthy of respect or idolization than the other. My point here is more about something like founder effects, path dependency, and cultural norms. You see, at some point someone in EA (I'm guessing) arbitrarily decided that Stanislav Petrov was ... (read more)

4
Aaron Gertler
11mo
The origin of Petrov Day, as an idea for an actual holiday, is this post by Eliezer Yudkowsky. Arkhipov got a shout-out in the comments almost immediately, but "Petrov Day" was the post title, and it's one syllable shorter. There are many other things like Petrov Day, in this and every culture — arbitrary decisions that became tradition.

But of course, "started for no good reason" doesn't have to mean "continued for no good reason". Norms that survive tend to survive because people find them valuable. And there are plenty of things that used to be EA/rationalist norms that are now much less influential than they were, or even mostly forgotten. The first examples that come to mind for me:

  • Early EA groups sometimes did "live below the line" events where participants would try to live on a dollar a day (or some other small amount) for a time. This didn't last long, because there were a bunch of problems with the idea and its implementation, and the whole thing faded out of EA pretty quickly (though it still exists elsewhere).
  • The Giving What We Can pledge used to be a central focus of student EA groups; it was thought to be really important and valuable to get your members to sign up. Over time, people realized this led some students to feel pressure to make a lifelong decision too early on, and some regretted it later. The pledge gradually attained an (IMO) healthier status — a cool part of EA that lots of people are happy to take part in, but not an "EA default" that people implicitly expect you to do.
4
DC
11mo
I would be happy to celebrate an Arkhipov Day. Is there anything that could distinguish the rituals and themes of the day? Arkhipov was in a submarine and had to disagree with two other officers IIRC? (Also when is it?)
3
Joseph Lemien
11mo
Haha, I don't think we need another holiday for Soviet military men who prevented what could have been WWIII. Rather, I think we should ask ourselves (often) "Why do we do things the way we do, and should we do things that way?"
2
Pablo
11mo
As Aaron notes, the "Petrov Day" tradition started with a post by Yudkowsky. It is indeed somewhat strange that Petrov was singled out like this, but I guess the thought was that we want to designate one day of the year as the "do not destroy the world day", and "Petrov Day" was as good a name for it as any.

Note that this doesn't seem representative of the degree of appreciation for Petrov vs. Arkhipov within the EA community. For example, the Future of Humanity Institute has both a Petrov Room and an Arkhipov Room (a fact that causes many people to mix them up), and the Future of Life Award was given both to Arkhipov (in 2017) and to Petrov (in 2018).

I think Arkhipov's actions are in a sense perhaps even more consequential than Petrov's, because it was truly by chance that he was present in that particular nuclear submarine, rather than in any of the other subs from the flotilla. This fact justifies the statement that, if history had repeated itself, the decision to launch a nuclear torpedo would likely not have been vetoed. The counterfactual for Petrov is not so clear.

Random musing from reading a reddit comment:

Some jobs are proactive: you have to be the one doing the calls and you have to make the work yourself and no matter how much you do you're always expected to carry on making more, you're never finished. Some jobs are reactive: The work comes in, you do it, then you wait for more work and repeat.

Proactive roles are things like business development/sales, writing a book, marketing and advertising, and research. You can almost always do more, and there isn't really an end point unless you want to impose an arbitrar... (read more)

This is about donation amounts, investing, and patient philanthropy. I want to share a simple Excel graph showing the annual donation amounts from two scenarios: 10% of salary, and 10% of investment returns.[1] A while back a friend was astounded at the difference in dollar amounts, so I thought I should share this a bit more widely. The specific outcomes will change based on the assumptions that we input, of course.[2] A person could certainly combine both approaches, and there really isn't anything stopping you from donating more than 10%, so int... (read more)
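
For anyone who wants to play with the numbers, here is a minimal Python sketch of the two scenarios. The exact inputs behind the graph aren't reproduced here, so every figure below is an illustrative assumption: a flat $60,000 salary, a 7% annual return, and (for the second scenario) saving 10% of salary while donating 10% of each year's investment gains.

```python
# A rough sketch of the two donation scenarios. All numbers are illustrative
# assumptions, not the inputs behind the original graph.

salary = 60_000   # assumed flat annual salary
r = 0.07          # assumed annual investment return
years = 40

portfolio = 0.0
for year in range(1, years + 1):
    donation_a = 0.10 * salary        # scenario A: donate 10% of salary
    gains = portfolio * r
    donation_b = 0.10 * gains         # scenario B: donate 10% of investment returns
    portfolio += gains - donation_b   # reinvest the remaining gains...
    portfolio += 0.10 * salary        # ...and keep saving 10% of salary
    if year % 10 == 0:
        print(f"Year {year}: A donates ${donation_a:,.0f}, B donates ${donation_b:,.0f}")
```

Under these made-up inputs, scenario B donates almost nothing in the early years and grows as the portfolio compounds; the crossover point is very sensitive to the assumed return and savings rates.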

2
Benny Smith
4mo
Yeah I think this is a good point! Donor-advised funds seem like a good way to benefit from compound interest (and tax deductions) while avoiding the risk of value drift.

I've been reading about performance management, and a section of the textbook I'm reading focuses on The Nature of the Performance Distribution. It reminded me a little of Max Daniel and Ben Todd's How much does performance differ between people?, so I thought I'd share it here for anyone who is interested.

The focus is less on true outputs and more on evaluated performance within an organization. It is a fairly short and light introduction, but I've put the content here if you are interested.

A theme that jumps out at me is situational specificity, as it ... (read more)

2
aogara
11mo
Very interesting. Another discussion of the performance distribution here. 
4
Joseph Lemien
11mo
Thanks for sharing this. I found this to be quite interesting.

(caution: grammatical pedantry and ridiculously low-stakes musings. possibly the most mundane and unexciting critique of an EA org ever)

The name of Founders Pledge should actually be Founders' Pledge, right? It is possessive, and the pledge belongs to multiple founders. If I remember my childhood lessons, apostrophes come after the s for plural things:

  • the cow's friend (this one cow has a friend)
  • the birds' savior (all of these birds have a savior)

A new thought: maybe I've been understanding it wrong. I've always thought of the "pledge" in Founders Pledge as a... (read more)

7
Larks
8mo
I assumed it was functioning as a compound noun rather than a possessive. The word 'Founders' is modifying the type of Pledge, not claiming ownership of it.

I just finished reading Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. I think the book is worth reading for anyone interested in truth and figuring out what is real, but I especially liked the aspirational Mertonian norms, a concept I had never encountered before, and which served as a theme throughout the book.

I'll quote directly from the book to explain, but I'll alter the formatting a bit to make it easier to read:

In 1942, Merton set out four scientific values, now known as the ‘Mertonian Norms’. None of them

... (read more)

(not well-thought-out musings; I've only spent a few minutes thinking about this.)

In thinking about the focus on AI within the EA community, the Fermi paradox popped into my head. For anyone unfamiliar with it and who doesn't want to click through to Wikipedia, my quick summary of the Fermi paradox is basically: if there is such a high probability of extraterrestrial life, why haven't we seen any indications of it? 

On a very naïve level, AI doomerism suggests a simple solution to the Fermi paradox: we don't see signs of extraterrestrial life because c... (read more)

I remember being very confused by the idea of an unconference. I didn't understand what it was and why it had a special name distinct from a conference. Once I learned that it was a conference in which the talks/discussions were planned by participants, I was a little bit less confused, but I still didn't understand why it had a special name. To me, that was simply a conference. The conferences and conventions I had been to had involved participants putting on workshops. It was only when I realized that many conferences lack participative elements that I r... (read more)

I was recently reminded about BookMooch, and read a short interview with the creator, John Buckman.

I think that the interface looks a bit dated, but it works well: you send people books you have that you don't want, and other people send you books that you want but you don't have. I used to use BookMooch a lot from around 2006 to 2010, but when I moved outside of the USA in 2010 I stopped using it. One thing I like is that it feels very organic and non-corporate: it doesn't cost a monthly membership, there are no fees for sending and receiving books,[1] ... (read more)

I guess shortform is now quick takes. I feel a small amount of negative reaction, but my best guess is that this reaction is nothing more than a general human "change is bad" feeling.

Is quick takes a better name for this function than shortform? I'm not sure. I'm leaning toward yes.

I wonder if this will nudge people away from writing longer posts using the quick takes function.

These are random musings on cultural norms, mainstream culture, and how/where we choose to spend our time and attention.

Barring the period when I was roughly 16-20 and interested in classic rock, I've never really been invested in music culture. By 'music culture' I mean things like knowing the names of the most popular bands of the time, knowing the difference between [subgenre A] and [subgenre B] off the top of my head, caring about the lives of famous musicians, etc.[1] Celebrity culture in general is something I've never gotten into, but avoiding ... (read more)

This is just for my own purposes. I want to save this info somewhere so I don't lose it. This has practically nothing to do with effective altruism, and should be viewed as my own personal blog post/ramblings.

I read the blog post What Trait Affects Income the Most?, written by Blair Fix, a few years ago, and I really enjoyed seeing some data on it. At some point later I wanted to find it and couldn't, and today I stumbled upon it again. The very short and simplistic summary is that hierarchy (a fuzzy concept that I understand to be roughly "cla... (read more)

I vaguely remember reading something about buying property with a longtermist perspective, but I can't remember the justification against doing it. This is basically exploiting people's inclination to choose immediate rewards over rewards that come later. The scenario was (very roughly) something like this:

You want to buy a house, and I offer to help you buy it. I will pay for 75% of the house, you will pay for 25% of the house. You get to own/use the house for 50 years, and starting in year 51 ownership transfers to me. You get a huge discount to

... (read more)
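The intuition here is just discounting: to anyone with a typical discount rate, ownership that begins in year 51 is worth very little today. A minimal sketch with assumed numbers (a $400,000 house and a 5% annual discount rate; neither figure comes from the scenario above):

```python
# Present value today of taking ownership of the house in year 51,
# under an assumed 5% annual discount rate. All numbers are illustrative.

house_price = 400_000   # assumed value of the house
discount = 0.05         # assumed annual discount rate

pv = house_price / (1 + discount) ** 50
print(f"Present value of year-51 ownership: ${pv:,.0f}")  # ~ $34,900
```

At that discount rate, the patient party's 75% upfront payment ($300,000 here) buys something worth only about $35,000 today, so the arrangement only makes sense for a funder whose effective discount rate is far below the market's — which is the patient-philanthropy premise, and Jason's comment below points at the practical problems.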
4
Jason
11mo
That's like what is known as a "life estate", except for a fixed term of years. It has similarities to offering a long-term lease for an upfront payment... and many of the same problems. The temporary possessor doesn't care about the value of the property in year 51, so has every incentive to defer maintenance and otherwise maximize their cost/benefit ratio. Just ask anyone in an old condo association about the tendency to defer major costs until someone else owns their unit... If you handle the maintenance, then this isn't much different than a lease... better to get a bank loan and be an ordinary lessor, because the 50-year term and upfront cash requirement are going to depress how much you make. If you plan on enforcing maintenance requirements for the other person, that will be a headache and could be costly.

I'm grappling with an idea of how to schedule tasks/projects, how to prioritize, and how to set deadlines. I'm looking for advice, recommended readings, thoughts, etc.

The core question here is "how should we schedule and prioritize tasks whose result becomes gradually less valuable over time?" The rest of this post is just exploring that idea, explaining context, and sharing examples.


Here is a simple model of the world: many tasks that we do at work (or maybe also in other parts of life?) fall into either sharp decrease to zero or sharp reduction in value... (read more)
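
One way to make that split concrete (my own toy framing, not something from the post above) is to give each task a value-as-a-function-of-completion-time and schedule greedily by how much value is lost per hour of delay. The task names, durations, and decay shapes below are all made up for illustration:

```python
# A toy scheduler for tasks whose value decays over time.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    hours: float                        # time the task takes to complete
    value_at: Callable[[float], float]  # value if finished at hour t from now

def cliff(v: float, deadline: float) -> Callable[[float], float]:
    """Full value up to a deadline, zero afterwards (sharp decrease to zero)."""
    return lambda t: v if t <= deadline else 0.0

def decay(v: float, rate: float) -> Callable[[float], float]:
    """Value erodes smoothly the later the task finishes (gradual reduction)."""
    return lambda t: v * (1 - rate) ** t

tasks = [
    Task("conference talk slides", 4, cliff(100, deadline=24)),
    Task("hiring-round feedback", 2, decay(80, rate=0.02)),
    Task("blog post draft", 6, decay(50, rate=0.005)),
]

# Greedy heuristic: always work next on the task that loses the most value
# per hour of additional delay. (A toy model, not an optimal scheduler.)
now = 0.0
while tasks:
    def value_lost_per_hour(task: Task) -> float:
        return task.value_at(now + task.hours) - task.value_at(now + task.hours + 1)
    task = max(tasks, key=value_lost_per_hour)
    tasks.remove(task)
    now += task.hours
    print(f"{task.name}: finishes at hour {now:.0f}, value {task.value_at(now):.1f}")
```

In this toy model, cliff-shaped tasks jump to the front of the queue as their deadline approaches, while gradually decaying tasks absorb the remaining time; a real scheduler would also need to handle interruptions, estimation error, and tasks whose value functions you can only guess at.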

Would anyone find it interesting/useful for me to share a forum post about hiring, recruiting, and general personnel selection? I have some experience running hiring for small companies, and I have been recently reading a lot of academic papers from the Journal of Personnel Psychology regarding the research of most effective hiring practices. I'm thinking of creating a sequence about hiring, or maybe about HR and managing people more broadly.

1
Yitz
2y
Please do! I'd absolutely love to read that :)
1
[comment deleted]
2y

I've been reading a few academic papers on my "to-read" list, and The Crisis of Confidence in Research Findings in Psychology: Is Lack of Replication the Real Problem? Or Is It Something Else? has a section that made me think about epistemics, knowledge, and how we try to make the world a better place. I'll include the exact quote below, but my rough summary of it would be that multiple studies found no relationship between the presence or absence of highway shoulders and accidents/deaths, and thus they weren't built. Unfortunately, none of the studies had... (read more)

Evidence-Based Management

What? Isn't it all evidence-based? Who would take actions without evidence? Well, often people make decisions based on an idea they got from a pop-business book (I am guilty of this), off of gut feelings (I am guilty of this), or off of what worked in a different context (I am definitely guilty of this).

Rank-and-yank (I've also heard it called forced distribution and forced ranking, and Wikipedia describes it as vitality curve) is an easy example to pick on, but we could easily look at some other management practice in hiring, mark... (read more)

2
Linch
2y
I'm curious if you have evidence-based answers to Ben West's question here.
1
Joseph Lemien
2y
I haven't read any research or evidence demonstrating one leadership style is better than another. My intuitions and other people's anecdotes that I've heard tell me that certain behaviors are more likely or less likely to lead to success, but I haven't got anything more solid than that to go on at the moment. Similarly, I haven't read any research showing (in a fairly statistically rigorous way) that lean, or agile, or the Toyota Production System, or other similar concepts are effective. Anecdote tells me that they are, and the reasoning for why they work makes sense to me, but I haven't seen anything more rigorous.

Nicholas Bloom's research is great, and I am glad to see his study of consulting in India referenced on the EA forum. I would love to see more research measuring impacts of particular management practices, and if I were filthy rich that is probably one of the things that I would fund. I'm assuming that there are studies about smaller-level actions/behaviors, but it is a lot easier to A-B test what color a button on a homepage should be than to A-B test having a cooperative work culture versus a competitive work culture.

I think one of the tricky things is how much context matters. Just because practice A is more effective than practice B in a particular culture/industry/function doesn't mean it will apply to all situations. As a very simplistic example, rapid iteration is great for a website's design, but imagine how horrible it would be for payroll policy.