This description has been out of date for a long time, and I thought the Forum team had updated it a while ago. Might be a merge issue that reinstituted some old language, or maybe we updated in some places but not others.
While I'm no longer a moderator, I should clarify that we never used "Personal Blog" to "torpedo a post's visibility". If we thought a post shouldn't have been visible, we moved it back to a draft (many instances of spam, and exactly one case I can recall involving an infohazard concern). Otherwise, it's always been up to the voters.
Personal Blo…
Zach's comment linked to a study of different "elevator pitches" for giving. That study has also been discussed on the Forum.
I've seen a bunch of threads on Slack/Facebook where groups share elevator pitches and other introductory materials, and I wouldn't be surprised if this question has been asked before on the Forum, but I don't have time to hunt down other threads at the moment. (You may want to try asking in the EA Groups Slack or in EA Group Organizers to see if anyone knows about past threads or wants to share a current pitch.)
I didn't expect it to get as many votes as it did, but I think people just like hearing nice things — there's not really anything to miss.
Most of the top posts of all time have 2+ karma per vote, while this post has less. That lines up with many people reading it, most people liking it, but few people loving it.
(Note: No longer a moderator, just thinking out loud.)
If this is meant to be forum rules, it's not where it needs to be. I didn't see a TOS or a set of rules before I got an account, and as a forum veteran I was looking for those.
The "About the Forum" page is linked from the main menu, which appears on every page of the Forum. I assume that the title made it sound like it wasn't worth checking to look for rules, which is helpful to know.
Things that could make the rules easier to find, in case moderators want to try a change:
Thanks for the notice — looks like we just merged this feature from LW! Thrilled to now be removing this from the post.
A competition for steering-type material seems like a reasonable contest theme (along the lines of the contests we had for creative writing and Wiki entries).
Now that I won't be running any future events, I'm not sure what the best place to put ideas like this is. Perhaps a comment here (I imagine that future EA Forum leaders will check that thread when thinking about contests).
I've also added your idea to the document I'll send our next Content Specialist, but that's a really long document, so having the idea in more places seems good!
(Finally, when the n…
If a post is meant to be private to a certain audience, maybe it's better not to share — I just think sharing is a good default outside of extenuating circumstances.
TLYCS moves relatively little money compared to GiveWell, and cataract surgery rarely comes up on the Forum. That's where my point came from. (Note that my explanation is meant to be a guess at why AECS hasn't gotten much attention, not an argument that it shouldn't get much attention.)
My rough summary of GiveWell's cataract research (as of 2018): it seems cost-competitive with other work, but there's a lot of uncertainty around cost-effectiveness estimates and they've struggled to find orgs that meet their standards for monitoring and evaluation.
While this post (on reducing catastrophic risk through a career in infosec) isn't strictly about protecting the United States (and EA, as an international community, isn't especially focused on U.S. interests), it's the most relevant one I'm aware of for your question.
In cases where you do this, I strongly recommend linking back to the original question on Facebook. This lets people see any edits someone has made to the question + the answers they've already gotten (so that someone doesn't waste time writing something on the Forum that's redundant with something on Facebook).
Here are two promising first steps, which I'd love to see publicised much more widely.
This recommendation appears briefly in the middle of a long post, but it seems like one of the most concrete things someone could do (and among the easiest).
Have you considered just doing this yourself by writing a separate post that highlights these two opportunities? (And maybe talking to Will/Jonas first to see if there are details they would add?)
Will's comment might already be the most popular in the Forum's history (not sure), but it wouldn't hurt to have more eyeba…
Questions like this, which involve a really specific paper or program, are much more likely to get good answers if they include a summary of the relevant parts of the paper/program.
Someone who starts reading this post will see the words "I am reading that the petty trade option...".
Their next thoughts are likely to be "the petty trade option of what? What is this post talking about? Do I need to read this entire paper to understand the post?"
The post would be easier to understand if you started by explaining what the THP is, the fact that they've tri…
A comment on some of your recent posts (not an answer to this question): I find your use of bold text for lots of words to be very distracting. I think your posts would be easier to read if you didn't use bold text at all.
It might be good to send a version of this question to the Fred Hollows Foundation; it seems fairly technical, and I'd guess that very few Forum users will have the requisite knowledge to weigh in on whether the system should be used more widely than it currently is. Having one conversation with someone who works in this sector could…
Some organizations maintain lists of people who have either applied to roles there or who might be good candidates for future roles. They often share these lists around when other orgs in their space are hiring. (I only know directly of CEA's list, but when I was recently looking to hire a content person, I got ideas for candidates from staff at many other orgs, often with quick turnaround that made me think they had those names on hand already.)
Typing at the speed of thought, not very confident in any of the below:
This project seems reasonable for someone…
We could instead have "praise" tags to match each of the "criticism" tags — maybe that would make more sense?
I do think the external/internal distinction matters much more for praise. We should take criticism seriously whether it comes from outside or inside of EA, but praise for EA from people who are already deeply embedded in the movement seems qualitatively different than praise from people who admire from afar.
In the sphere of secular politics, a sober-minded philanthropist gradually learns to divide into three classes the reforms which he is anxious to bring about: those which he can begin to carry out himself, trusting to the direct effect of his individual energy and the indirect influence of his example; those which it is worthwhile to attempt, if a sufficiently powerful private organization can be set on foot; and those which necessitate the intervention of the State, and, consequently, a great stirring of the public mind on the subject.
— Henry Sidgwick, Th…
I'd prefer to move this to the front page — would that be alright with you? I think it deserves more readers than it's gotten.
Is there a specific type of venture you're thinking about starting?
And are you looking specifically for "value alignment" with effective altruism, or just having as big a positive impact as you can? Do you see any difference between those two things?
Changing the title to distinguish this from our tag for effective giving as a practice.
I am anonymous because vocally disagreeing with the status quo would probably destroy any prospects of getting hired or funded by EA orgs (see my heavily downvoted comment about my experiences somewhere at the bottom of this thread).
This clearly doesn't apply to Rubi, so what's up?
There are many reasons for people to use pseudonyms on the Forum, and we allow it with few restrictions. It's also fine to have multiple accounts.
To clarify, that's not to say Rubi is necessarily Seán Ó hÉigeartaigh. I have no idea and I don't know Seán.
However, this situation is…
I don't have much time to respond here and haven't thought much about my thesis since I wrote it almost seven years ago (and would probably find much of it embarrassing now in the light of the replication crisis + my better grasp on philosophy). A few notes:
While this comment was deleted, the moderators discussed it in its original form (which included multiple serious insults to another user) and decided to issue a two-week ban to Charles, starting today. We don't tolerate personal insults on the Forum.
There are many reasons that humans tend to do this, and I'm very familiar with them! I wrote part of my thesis on this topic.
Nevertheless, my feelings remain. The problem isn't ignorance, but (the concept I was trying to represent with "irrationality").
Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/OpenPhil not to have funded more work in that area? (Founder's Pledge released their report in early 2019 and was presumably working on it much earlier, so they wouldn't seem to be blameworthy.)
I can't say much more here without knowing the details of how Michael/others' work was received when they presented it to funders. The situation I've outlined seems to be compatible both with "this work wasn't taken seriously enough" and "this work was…
Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/OpenPhil not to have funded more work in that area?
Tl;dr. Hard to judge. Maybe: Yes for GW. No for Open Phil. Mixed for EA community as a whole.
I think I will slightly dodge the question and answer a separate one: are these orgs doing enough exploratory-type research? (I think this is a more pertinent question, and although I think subjective wellbeing is worth looking into as an example, it is not clear it is at the very top of t…
If, for instance, someone who has written about AI more than once argues that the Chinese government funding AI research for solely humanitarian reasons...
I think there are a bunch of examples we could use here, which fall along a spectrum of "believability" or something like that.
Where the unbelievable end of the spectrum is e.g. "China has never imprisoned a Uyghur who wasn't an active terrorist", and the believable end of the spectrum is e.g. "gravity is what makes objects fall".
If someone argues that objects fall because of something something the lumi…
I was using very casual language here, and there might be a better word than "irrational".
The complex concept I was casually representing: "It seems good to be someone who feels more satisfaction when they do more good for more people. This isn't how my own feelings of satisfaction work, which makes me less motivated to do more good for more people than I wish I were."
"Irrational" refers to the desire to feel a different way than I actually feel, with a hint of "this is especially awkward because I've had plenty of time to reflect on these feelings and try to change them". Maybe "unreasonable" is a better word, or even "imperfect".
One reason is that the studies may consist of filtered evidence—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.
The "incentives" point is reasonable, and it's part of the reason I'd want to deprioritize checking into claims with dishonest origins.
However, I'll note that establishing a rule like "we won…
Personally I read this as a straightforward accusation of dishonesty - something I would expect moderators to object to if the comment was critical (rather than supportive) of EA orthodoxy.
As a moderator, I wouldn't object to this comment no matter who made it. I see it as a criticism of someone's work, not an accusation that the person was dishonest.
If someone wrote a paper critiquing the differential technology paradigm and spoke to lots of reviewers about it — including many who were known to be pro-DT — but didn't cite any pro-DT arguments, it would be…
I agree that knowing someone's personal motives can help you judge the likelihood of unproven claims they make, and should make you suspicious of any chance they have to e.g. selectively quote someone. But some of the language I've seen used around Torres seems to imply "if he said it, we should just ignore it", even in cases where he actually links to sources, cites published literature, etc.
Of course, it's much more difficult to evaluate someone's arguments when they've proven untrustworthy, so I'd give an evaluation of Phil's claims lower priority than…
I've seen "in bad faith" used in two ways:
While it's obvious that we should point out lies where we see them, I think we should distinguish between (1) and (2). An argument's original promoter not believing it isn't a reason for no one to believe it, and shouldn't stop us from engaging with arguments that aren't obviously false.
(See this comment for more.)
I agree that there is a relevant difference, and I appreciate your pointing it out. However, I also think that knowledge of the origins of a claim or an argument is sometimes relevant for deciding whether one should engage seriously with it, or engage with it at all, even if the person presenting it is not himself/herself acting in bad faith. For example, if I know that the oil or the tobacco industries funded studies seeking to show that global warming is not anthropogenic or that smoking doesn't cause cancer, I think it's reasonable to be skeptical…
In your view, what would it look like for EA to pay sufficient attention to mental health?
To me, it looks like there's a fair amount of engagement on this:
I've only just seen this and thought I should chime in. Before I describe my experience, I should note that I will respond to Luke’s specific concerns about subjective wellbeing separately in a reply to his comment.
TL;DR Although GiveWell (and Open Phil) have started to take an interest in subjective wellbeing and mental health in the last 12 months, I have felt considerable disappointment and frustration with their level of engagement over the previous six years.
I raised the "SWB and mental health might really matter" concerns in meetings with GiveWell st…
To me (as someone who has funded the Happier Lives Institute), I just think it should not have taken founding an institute and six years of repeating this message (and feeling largely ignored and dismissed by existing EA orgs) to reach the point we are at now.
I think expecting orgs and donors to change direction is certainly a very high bar. But I do think we should pride ourselves on being a community that pivots and changes direction when new data (e.g. on subjective wellbeing) is made available to us.
As a moderator: the "basic background knowledge" point is skirting the boundaries of the Forum's norms; even if you didn't intend to condescend, I found it condescending, for the reasons I note in my other reply.
The initial comment — which claims that Halstead is misrepresenting a position, when "he understands and disagrees" is also possible — also seems uncharitable.
I do see this charitable reading as an understandable thing to miss, given that everyone is leaving brief comments about a complex question and there isn't much context. But I als…
Even if it's only a "mildly insulting caricature", it's still a way to claim that certain people are unintelligent or unserious without actually presenting an argument.
The first of these feels like it's trying to do the same thing as the second, without actually backing up its claim.
When I read the second, I feel like someone is trying to make me think. When I read the first, I feel like someone is trying to make me stop thinking.
I think Halstead knows what degrowth advocates claim about degrowth (that it won't have built-in humanitarian costs). And I think he disagrees with them, which isn't the same as not understanding their arguments.
Imagine people arguing whether to invade Iraq in the year following the 9/11 attacks. One of them points out that invading the country will involve enormous built-in humanitarian costs. Their interlocutor replies:
"Your characterization of an Iraq invasion as having "enormous humanitarian costs" "built in" is flatly untrue in a way that is obvious t…
As a moderator, I thought Lukas's comment was fine.
I read it as a humorous version of "this doesn't sound like something someone would say in those words", or "I cast doubt on this being the actual thing someone said, because people generally don't make threats that are this obvious/open".
Reading between the lines, I saw the comment as "approaching a disagreement with curiosity" by implying a request for clarification or specification ("what did you actually hear someone say?"). Others seem to have read the same implication, though Lukas could have…
As a moderator, I agree with David that this comment doesn't abide by community norms.
It's not a serious offense, because "oh dear" is a mild comment that isn't especially detrimental to a conversation on its own. But if a reply implies that a post or comment is representative of some bad trend, or that the author should feel bad/embarrassed about what they wrote, and doesn't actually say why, it adds a lot more heat than light.
Note: I discuss Open Phil to some degree in this comment. I also start work there on January 3rd. These are my personal views, and do not represent my employer.
Epistemic status: Written late at night, in a rush, I'll probably regret some of this in the morning but (a) if I don't publish now, it won't happen, and (b) I did promise extra spice after I retired.
I think you contributed something important, and wish you had been met with more support.
It seems valuable to separate "support for the action of writing the paper" from "support for the arguments…
This is a great comment, thank you for writing it. I agree - I too have not seen sufficient evidence that could warrant the reaction of these senior scholars. We tried to get evidence from them and tried to understand why they explicitly feared that OpenPhil would not fund them because of some critical papers. Any arguments they shared with us were unconvincing. My own experience with people at OpenPhil (sorry to focus the conversation only on OpenPhil, obviously the broader conversation about funding should not only focus on them) in fact suggests the opp…
I don't think I've seen anyone reference the Culture series in connection with these posts yet. The series places a utopian post-scarcity and post-death society — the Culture, run by benevolent AIs that do a good job of handling human values — in conflict with societies that are not the Culture.
I've only read The Player of Games myself, and that book spends more time with the non-utopian than the utopian society, but it's still a good book, and one that many people recommend as an entry point into the series.
This Twitter thread from economist Chris Blattman, who "spent the last 15 years studying cash and also CBT", is an interesting response to the Vox article based on this study. An excerpt:
There ought to be huge amounts of investment in testing whether these techniques can be automated into apps, implemented by non experts, performed in groups or over mass media. Some of this testing is already happening but it needs to explode in scale.
That’s because scaling these interventions is harder than the CBT enthusiasts are letting on. Helping an average villager b…
As a mod, I was a bit confused when I saw two events with identical titles and thought you might have double-posted accidentally. You may want to include dates in your titles when you share two events that are this similar.
Thanks, Stuart! This answer was outstanding. I'll follow up with you privately about the bounty payment.
You had a stray space between [LinkedIn] and your URL, which broke the Markdown link. I removed it.
(Note that you don't need to turn on the Markdown editor to edit your bio — the bio is in Markdown no matter what.)
See this post — outdated in places, but the "personal blog" section is still accurate.
Currently meant to be "personal blog":
Posts related to personal health or productivity (unless there is a clear connection to EA work; for example, a post on research productivity)
That's why it's hard to categorize the stress post. It could make some reader more productive and impactful, but if that's the case, so would a post about buying a more comfortable chair, or a post about finding the best ice cream to make yourself happier and more motivated — there's a lin…
This question is oddly worded, such that it seems meant to elicit only answers about dishonesty, rather than more nuanced takes on the balance of honesty and dishonesty in recruiting.
When I went through a series of interviews with many organizations in 2018, I mostly remember it feeling really honest:
This may be the best execution I've seen of one of my EA Forum writing prompts:
Have you tried to explain EA to anyone recently? How did it go? Based on your experience, are there any frames or phrasings that you would/wouldn’t recommend?
I don't remember the book's plot very well, but I do remember thinking it was brilliantly written, and I'd recommend it highly.
I agree that it could be easier to get back to the index — there's a lot more we can do with sequences!
When you get to the end of a post, you have to scroll all the way up and look for the hard-to-see arrow to move to the next post.
When you get to the end of a post, you should see a navigation area like this:
Are you not seeing that? If so, what browser are you using?
These posts are mostly about personally improving one's own life, but also have an element of "these are promising ways many people could improve their lives, and this problem could be important to focus on". This makes it hard to place them conclusively in "frontpage" vs. "personal blog".
I wound up leaving the sleep post in "frontpage" and will do the same here, but I'd be happy to hear from anyone who disagrees/doesn't want to see content like this on frontpage.
You should expect to see the announcement early next week!
I'm surprised to hear that so many people you speak with feel that way. My experience of using Facebook (with an ad blocker) is that it's a mix of interesting thinkposts from friends in EA or other academic circles + personal news from people I care about, but would be unlikely to proactively keep in touch with (extended family, people I knew in college, etc.).
I certainly scroll past my fair share of posts, but the average quality of things I see on FB is easily competitive with what I see on Twitter (and I curate my Twitter carefully, so this is pra…