All of Aaron Gertler's Comments + Replies

Open Thread: Winter 2021

This description has been out of date for a long time, and I thought the Forum team had updated it a while ago. Might be a merge issue that reinstituted some old language, or maybe we updated in some places but not others.

While I'm no longer a moderator, I should clarify that we never used "Personal Blog" to "torpedo a post's visibility". If we thought a post shouldn't have been visible, we moved it back to a draft (many instances of spam, exactly one instance I can recall of an infohazard concern). Otherwise, it's always been up to the voters.

Personal Blo…

Most successful EA elevator pitches?

Zach's comment linked to a study of different "elevator pitches" for giving. That study has also been discussed on the Forum.

I've seen a bunch of threads on Slack/Facebook where groups share elevator pitches and other introductory materials, and I wouldn't be surprised if this question has been asked before on the Forum, but I don't have time to hunt down other threads at the moment. (You may want to try asking in the EA Groups Slack or in EA Group Organizers to see if anyone knows about past threads or wants to share a current pitch.)

Bryan Caplan on EA groups

I didn't expect it to get as many votes as it did, but I think people just like hearing nice things — there's not really anything to miss.

Most of the top posts of all time have 2+ karma per vote, while this post has less. That lines up with many people reading it, most people liking it, but few people loving it.

How to use the Forum

(Note: No longer a moderator, just thinking out loud.)

If this is meant to be the forum rules, it's not where it needs to be. I did not see a TOS or a set of rules before I got an account, and as a forum veteran I was looking for those.

The "About the Forum" page is linked from the main menu, which appears on every page of the Forum. I assume that the title made it sound like it wasn't worth checking to look for rules, which is helpful to know.

Things that could make the rules easier to find, in case moderators want to try a change:

  • Adding hover text to the "Abo…
HoratioVonBecker (7d, 3 karma): Even a scrollover requires that the user be actively trying to explore the site on a meta level - and the mobile-based UX has made finding the site nav bar a nontrivial endeavor! I would at least put it as the 'before you start typing' background. r/ChangeMyView is, by its very subject, going to dodge most of the reasons a forum would need to enforce policies like this. The approach also seems quite dependent on moderators. Hacker News has the best debate policy I've ever seen. I do have a soft spot for snark, but I can live without it. Thank you for introducing me to it! Quora looks so elitist. 'Correct' grammar and punctuation? No-explanation 'hate speech'? Also, most of their content policy is only incorporated by linking it, which is a huge pet peeve of mine. (I'm the sort of person who tries to actually read any contract I agree to, and lying about how long the TOS is feels so disrespectful.) This was very interesting, thank you!
How to use the Forum

Thanks for the notice — looks like we just merged this feature from LW! Thrilled to now be removing this from the post.

Rowing and Steering the Effective Altruism Movement

A competition for steering-type material seems like a reasonable contest theme (along the lines of the contests we had for creative writing and Wiki entries).

Now that I won't be running any future events, I'm not sure what the best place to put ideas like this is. Perhaps a comment here (I imagine that future EA Forum leaders will check that thread when thinking about contests).

I've also added your idea to the document I'll send our next Content Specialist, but that's a really long document, so having the idea in more places seems good!

(Finally, when the n…

jtm (14d, 1 karma): Sounds good! I'll post a comment and make sure to reach out to the next content specialist. Thanks!
How Big a Problem is Status Quo Bias in the EA Community?

If a post is meant to be private to a certain audience, maybe it's better not to share — I just think sharing is a good default outside of extenuating circumstances.

Evan_Gaensbauer (15d, 2 karma): It wasn't a private group, it's just that people need to request to join if they're on Facebook. I agree with you, though.
Should the Aravind Eye Care System be funded in lieu of The Fred Hollows Foundation?

TLYCS moves relatively little money compared to GiveWell, and cataract surgery rarely comes up on the Forum. That's where my point came from. (Note that my explanation is meant to be a guess at why AECS hasn't gotten much attention, not an argument that it shouldn't get much attention.)

My rough summary of GiveWell's cataract research (as of 2018): it seems cost-competitive with other work, but there's a lot of uncertainty around cost-effectiveness estimates and they've struggled to find orgs that meet their standards for monitoring and evaluation.

*****

Do y…
brb243 (16d, 1 karma): Oh, that is interesting. I see that TLYCS moves a few million $/year, GW more than a hundred million. Yes. It is interesting that TLYCS specifies a 10× greater cost-effectiveness than GW estimates in its updated report. OK, thank you for the tip!
Cybersecurity as a career

While this post (on reducing catastrophic risk through a career in infosec) isn't strictly about protecting the United States (and EA, as an international community, isn't especially focused on U.S. interests), it's the most relevant one I'm aware of for your question.

How Big a Problem is Status Quo Bias in the EA Community?

In cases where you do this, I strongly recommend linking back to the original question on Facebook. This lets people see any edits someone has made to the question + the answers they've already gotten (so that someone doesn't waste time writing something on the Forum that's redundant with something on Facebook).

Evan_Gaensbauer (15d, 2 karma): That's a good idea, but the post was in a private group, so I figured that might complicate things if people aren't on Facebook or have to join a whole other group before they can join the conversation. I'll do it next time, though. Thanks for the suggestion.
Rowing and Steering the Effective Altruism Movement

Here are two promising first steps, which I'd love to see publicised much more widely.

This recommendation appears briefly in the middle of a long post, but it seems like one of the most concrete things someone could do (and among the easiest).

Have you considered just doing this yourself by writing a separate post that highlights these two opportunities? (And maybe talking to Will/Jonas first to see if there are details they would add?)

Will's comment might already be the most popular in the Forum's history (not sure), but it wouldn't hurt to have more eyeba…

jtm (15d, 2 karma): Thanks, Aaron, this is a great suggestion! I'll try to get around to writing a very brief post about it this weekend. On a related note, I'd be curious to hear what you think of the idea of using EA Forum prizes for this sort of purpose? Of course, there'd have to be some more work on specifying what exactly the prize should be for, etc. If you know who will be working on the Forum going forward, I'd love to get a sense of whether they'd be interested in doing some version of this. If so, I'd be more than happy to set up a meeting to discuss.
Should Founders Pledge support only the non-conscious asset transfer arm of Bandhan’s Targeting the Hardcore Poor poverty graduation program?

Questions like this, which involve a really specific paper or program, are much more likely to get good answers if they include a summary of the relevant parts of the paper/program.

Someone who starts reading this post will see the words "I am reading that the petty trade option...". 

Their next thoughts are likely to be "the petty trade option of what? What is this post talking about? Do I need to read this entire paper to understand the post?"

The post would be easier to understand if you started by explaining what the THP is, the fact that they've tri…

brb243 (16d, 1 karma): OK, the wording has been changed. This is also a semi-rhetorical question, something like: wouldn't it be better if animals weren't factory farmed by humans but rather given considerate, motivated care in exchange for pleasant cooperation on meaningful objectives? These can sound a bit weird if they are presented in a way that does not compel people to empathize but rather to gather data in a concise manner to make further progress? Am I too influenced by the outside-of-EA world? Yes, it makes sense. Maybe some people prefer livestock, just like many GD beneficiaries, because it provides a continuous source of income (such as from milk) and can also be sold in cases of emergency. Still, assuming that there are enough persons who would benefit from the non-livestock transfer option (while those who would rather or more feasibly receive an animal asset would be left without funding), supporting only the non-conscious asset beneficiaries can set an important institutional norm of human economic growth not at the cost of other individuals' suffering?
Should the Aravind Eye Care System be funded in lieu of The Fred Hollows Foundation?

A comment on some of your recent posts (not an answer to this question): I find your use of bold text for lots of words to be very distracting. I think your posts would be easier to read if you didn't use bold text at all.

*****

It might be good to send a version of this question to the Fred Hollows Foundation; it seems fairly technical, and I'd guess that very few Forum users will have the requisite knowledge to weigh in on whether the system should be used more widely than it currently is. Having one conversation with someone who works in this sector could…

brb243 (16d, 1 karma): Ok, noted. But then if people just want to skim the post in seconds (especially those who may not be so interested in the first place), do you think maybe headings or infographics would be more appropriate? What would you recommend? To the Fred Hollows Foundation? Will they not first assert that they do not operate in India and that their services are needed where they operate, plus that they are not an investing company? This is why funders are a better audience to decide? Especially considering the innovativeness of EAs? Nevertheless, noted. To answer: Extent of FHF support from EA: FHF is a TLYCS-recommended [https://www.thelifeyoucansave.org/best-charities/fred-hollows-foundation/] charity. The cost of cataract surgery versus that of training a guide dog for blind people is a commonly cited [https://forum.effectivealtruism.org/posts/SMRHnGXirRNpvB8LJ/fact-checking-comparison-between-trachoma-surgeries-and] example in EA. The revenue of FHF was $650m [https://www.charitynavigator.org/ein/822851329] in 2019. I am not finding any funding from Open Philanthropy/Good Ventures, but I think EA Brazil selected [https://doebem.org.br/caviver/] a cataract organization as one of its 3 non-GW ones, and in the Philippines they seem to have also tried to find charities that do work similar to GW organizations, and include [https://forum.effectivealtruism.org/posts/RhenjBX6aDx6xTjnv/feedback-request-ea-philippines-local-charity-effectiveness#EA_Philippines__tentative_list_of_recommended_charities_in_the_Philippines] The Fred Hollows Foundation on their list [https://forum.effectivealtruism.org/posts/RhenjBX6aDx6xTjnv/feedback-request-ea-philippines-local-charity-effectiveness#EA_Philippines__tentative_list_of_recommended_charities_in_the_Philippines]. Other local effective donation organizations may include FHF. So, I would suggest that people are at least thinking, if not donating. Cost-effectiveness analyses: From the TLYCS page, which cites a World Bank [https…
Does anything like a "resume book" exist in EA?

Some organizations maintain lists of people who have either applied to roles there or who might be good candidates for future roles. They often share these lists around when other orgs in their space are hiring. (I only know directly of CEA's list, but when I was recently looking to hire a content person, I got ideas for candidates from staff at many other orgs, often with quick turnaround that made me think they had those names on hand already.)

Typing at the speed of thought, not very confident in any of the below:

This project seems reasonable for someone…

jlemien (16d, 2 karma): Regarding using something like an "EA group" on LinkedIn, I like that concept. I think that might be a better idea for an MVP than a Google Sheet. Thanks for mentioning it. I'll let the idea percolate for a while. Regarding several orgs agreeing to handle their hiring through a single system, my first thought about hiring was actually somewhat related: having a centralized hiring/recruiting team for multiple large EA orgs. This way, rather than organizations A, B, C, and D all employing a recruiter, getting a subscription to an applicant sourcing service, and learning/training how to write job descriptions and do interviews, a "shared services" team (part of a traditional HR model) could do these tasks for all member organizations. I'm not highly confident that it would be a net positive, but I am intrigued by the idea and it seems worth exploring. Of course, that would require changes/agreement from multiple large/central orgs to get started, which would be another challenge. Regarding not making it easier to get hired, my thought is that this would be something that would be "background active." If there are 10 openings a job seeker is interested in, she has to spend effort to seek out (generally reviewing far more than 10) and apply to each one. If she submits information to a system like this (if she has relevant skills), then multiple hiring managers would see the info over the coming weeks with no additional effort from the job seeker. The hiring managers might not contact her if she is not a good fit, so this system might not do any good for job seeker morale. But my hunch (totally untested and unproved) is that there would be more job openings that she would be considered for. (I suspect that part of the source for my hunch is that I personally applied to a role at one organization a while back, and about a month later I got an email from a different organization saying that they had been passed my info and would like to consider me for a role; thus I'm fairly co…
External praise for EA

We could instead have "praise" tags to match each of the "criticism" tags — maybe that would make more sense?

I do think the external/internal distinction matters much more for praise. We should take criticism seriously whether it comes from outside or inside of EA, but praise for EA from people who are already deeply embedded in the movement seems qualitatively different than praise from people who admire from afar.

Pablo (18d, 2 karma): I like having a single "external praise" tag rather than three "praise" tags corresponding to the three "criticism" tags, for the reasons you note.
Effective altruism quotes

In the sphere of secular politics, a sober-minded philanthropist gradually learns to divide into three classes the reforms which he is anxious to bring about: those which he can begin to carry out himself, trusting to the direct effect of his individual energy and the indirect influence of his example; those which it is worthwhile to attempt, if a sufficiently powerful private organization can be set on foot; and those which necessitate the intervention of the State, and, consequently, a great stirring of the public mind on the subject.

— Henry Sidgwick, Th…

Hits-based development: funding developing-country economists

I'd prefer to move this to the front page — would that be alright with you? I think it deserves more readers than it's gotten.

Michael_Wiebe (2d, 1 karma): Hey, go for it!
What are some of the best books/reads on starting value-aligned for-profit ventures?

Is there a specific type of venture you're thinking about starting? 

And are you looking specifically for "value alignment" with effective altruism, or just having as big a positive impact as you can? Do you see any difference between those two things?

Afrothunder (22d, 3 karma): Hi Aaron! Thanks for your response. Yes, some friends and I have been thinking about one/two ventures that sell carbon credits in exchange for financing transitions to plant-based consumption or production (can clarify further if that would be helpful). I'm thinking of value alignment more broadly, i.e. that the venture starters may hold some values (in our case, concern for animal welfare and global + local pollution) that they would like to see the venture advance, or at least not compromise, while also satisfying investor demand for profit. But the values could also be other things. For instance, Lyft might have held as a founding value to pay drivers 'fair' wages - what could they have read/learned to help them guide the growth of their company in such a way that it was aligned with this goal? Let me know if that makes sense - not sure it does!
Effective Giving (organization)

Changing the title to distinguish this from our tag for effective giving as a practice.

Democratising Risk - or how EA deals with critics
Aaron Gertler (1mo, 23 karma) · Moderator Comment

I am anonymous because vocally disagreeing with the status quo would probably destroy any prospects of getting hired or funded by EA orgs (see my heavily downvoted comment about my experiences somewhere at the bottom of this thread).

This clearly doesn't apply to Rubi, so what's up?

There are many reasons for people to use pseudonyms on the Forum, and we allow it with few restrictions. It's also fine to have multiple accounts.

To clarify, that's not to say Rubi is necessarily Seán Ó hÉigeartaigh. I have no idea and I don't know Seán.

However, this situation is…
It's OK to feed stray cats

I don't have much time to respond here and haven't thought much about my thesis since I wrote it almost seven years ago (and would probably find much of it embarrassing now in the light of the replication crisis + my better grasp on philosophy). A few notes:

  • I think that humans do something akin to CEV as part of our daily lives — we experience impulses, then rein them back ("take a deep breath", "think of what X would say", "put yourself in their shoes"...) It seems like we're usually happier with our choices when we've taken more time to think them over (…
acylhalide (1mo, 1 karma): Thank you for replying! Fair enough. I guess I meant that our desires have evolved into our neural circuitry as part of System 1. And we can't use thinking alone (System 2 activation) to decide what our goals are; we first need first-hand experiences of pleasure or pain. You're right that humans do end up doing this decision + retroactive decision, I just don't think that always leads to one consistent place; different humans can justify different things to themselves (or even the same human at different points in time). There are a bunch of different things that System 1 strongly reacts to (our "core values"), and I don't think our brains naturally have a way of trading them off against each other that doesn't lose pennies [https://arbital.com/p/coherence_theorems/]. System 2 tries its best to ignore these inconsistencies, but then where it ends up is random, because there's no real way to decide what to ignore. We don't often encounter situations where we're forced to make such trades, but we can in theory, as in trolley problems. Edit: deleted "prisoner dilemma" that was there by mistake
Democratising Risk - or how EA deals with critics
Aaron Gertler (1mo, 10 karma) · Moderator Comment

While this comment was deleted, the moderators discussed it in its original form (which included multiple serious insults to another user) and decided to issue a two-week ban to Charles, starting today. We don't tolerate personal insults on the Forum.

It's OK to feed stray cats

There are many reasons that humans tend to do this, and I'm very familiar with them! I wrote part of my thesis on this topic.

Nevertheless, my feelings remain. The problem isn't ignorance, but (the concept I was trying to represent with "irrationality").

acylhalide (1mo, 1 karma): I read your post - and decided to write up my thoughts anyway. It might be a weird take, but I would really appreciate your opinion on it, if you have the time. I've spent way too much time unsure about the best way to explain it, yet felt the need to explain it in so many places, so it would mean a lot to me if you read it. It also describes why I'm kinda skeptical of AI alignment being solvable.

*****

I just found this from your post. I somehow feel like System 2 has no genuine desires of its own; it simply borrows them from System 1. System 1 = desires + primitive tools to trade off different desires. These tools aren't super advanced; they can't do math or formal logic and are mostly heuristics, they often throw up random answers if situations don't cleanly fit into exactly one heuristic, and decisions guided purely by System 1 will lose pennies [https://arbital.com/p/coherence_theorems/]. System 1 is also super inflexible - you can't simply choose to rewire all of your System 1; this is beyond the reach of free will. (Maybe neurosurgery can change it.) System 2 = advanced reasoning tools. System 2 just borrows desires from System 1. You won't do cold-hearted calculation on what saves the most lives unless you've already had System-1 experiences of other people's pain or joy, and a System-1 desire to help people. The problem with System 2 is, no matter how much math or logic it throws at the problem, it can't find a consistent way of trading off different desires that is also consistent with System 1. Why? Because System 1 was never coherent in the first place. It also can't choose to just ignore System 1 and formulate its all-important theory of ethics, because it has no desires (/values/ethics) of its own, nor any objective way to compare them. Ground truth on such matters comes from System 1. (I see ethics as a subset of desires, btw; I don't think we should assume something fundam…
acylhalide (1mo, 1 karma): Oh okay, nice to know. I will check it out.
Democratising Risk - or how EA deals with critics

Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/OpenPhil not to have funded more work in that area? (Founder's Pledge released their report in early 2019 and was presumably working on it much earlier, so they wouldn't seem to be blameworthy.)

I can't say much more here without knowing the details of how Michael/others' work was received when they presented it to funders. The situation I've outlined seems to be compatible both with "this work wasn't taken seriously enough" and "this work was…

Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/OpenPhil not to have funded more work in that area?

TL;DR: Hard to judge. Maybe: yes for GW; no for Open Phil; mixed for the EA community as a whole.

 

I think I will slightly dodge the question and answer the separate question – are these orgs doing enough exploratory-type research? (I think this is a more pertinent question, and although I think subjective wellbeing is worth looking into as an example, it is not clear it is at the very top of t…

Democratising Risk - or how EA deals with critics

If, for instance, someone who has written about AI more than once argues that the Chinese government funding AI research for solely humanitarian reasons...

I think there are a bunch of examples we could use here, which fall along a spectrum of "believability" or something like that.

Where the unbelievable end of the spectrum is e.g. "China has never imprisoned a Uyghur who wasn't an active terrorist", and the believable end of the spectrum is e.g. "gravity is what makes objects fall".

If someone argues that objects fall because of something something the lumi…

It's OK to feed stray cats

I was using very casual language here, and there might be a better word than "irrational".

The complex concept I was casually representing: "It seems good to be someone who feels more satisfaction when they do more good for more people. This isn't how my own feelings of satisfaction work, which makes me less motivated to do more good for more people than I wish I were."

"Irrational" refers to the desire to feel a different way than I actually feel, with a hint of "this is especially awkward because I've had plenty of time to reflect on these feelings and try to change them". Maybe "unreasonable" is a better word, or even "imperfect".

acylhalide (1mo, 1 karma): Thank you, this makes sense. I somehow feel like I understand why humans tend to do this; I'll write it up one day and let you know!
Democratising Risk - or how EA deals with critics

 One reason is that the studies may consist of filtered evidence—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.

The "incentives" point is reasonable, and it's part of the reason I'd want to deprioritize checking into claims with dishonest origins. 

However, I'll note that establishing a rule like "we won…

Pablo (1mo, 6 karma): Thanks for the comments. They have helped me clarify my thoughts, though I feel I'm still somewhat confused. Yes, I agree that this is a concern. I am reminded of an observation by Nick Bostrom [https://www.nickbostrom.com/revolutions.pdf]: So I recognize both that it is sometimes legitimate (and even required) to refuse to engage with arguments based on how they originated, and that a norm that licenses this behavior has significant abuse potential. I haven't thought about ways in which the norm could be refined, or about heuristics one could adopt to decide when to apply it. I'd like to see someone (Greg Lewis?) investigate this issue more. I mostly agree. My sense is that we often misclassify as "specific piece[s] of evidence that would be damning if true" things that should be assessed as part of a much larger whole. E.g. it is sometimes relevant to consider the sheer number of things someone has said when deciding how outraged to be that this person said something seemingly outrageous.
Democratising Risk - or how EA deals with critics
Aaron Gertler (1mo, 11 karma) · Moderator Comment

Personally I read this as a straightforward accusation of dishonesty - something I would expect moderators to object to if the comment was critical (rather than supportive) of EA orthodoxy.

As a moderator, I wouldn't object to this comment no matter who made it. I see it as a criticism of someone's work, not an accusation that the person was dishonest.

If someone wrote a paper critiquing the differential technology paradigm and spoke to lots of reviewers about it — including many who were known to be pro-DT — but didn't cite any pro-DT arguments, it would be…

anonymousEA (1mo, 7 karma): Honestly, fair enough.
Noticing the skulls, longtermism edition

I agree that knowing someone's personal motives can help you judge the likelihood of unproven claims they make, and should make you suspicious of any chance they have to e.g. selectively quote someone. But some of the language I've seen used around Torres seems to imply "if he said it, we should just ignore it", even in cases where he actually links to sources, cites published literature, etc.

Of course, it's much more difficult to evaluate someone's arguments when they've proven untrustworthy, so I'd give an evaluation of Phil's claims lower priority than…

Democratising Risk - or how EA deals with critics

I've seen "in bad faith" used in two ways:

  1. This person's argument is based on a lie.
  2. This person doesn't believe their own argument, but they aren't lying within the argument itself.

While it's obvious that we should point out lies where we see them, I think we should distinguish between (1) and (2). An argument's original promoter not believing it isn't a reason for no one to believe it, and shouldn't stop us from engaging with arguments that aren't obviously false.

(See this comment for more.)

I agree that there is a relevant difference, and I appreciate your pointing it out. However, I also think that knowledge of the origins of a claim or an argument is sometimes relevant for deciding whether one should engage seriously with it, or engage with it at all, even if the person presenting it is not himself/herself acting in bad faith. For example, if I know that the oil or the tobacco industries funded studies seeking to show that global warming is not anthropogenic or that smoking doesn't cause cancer, I think it's reasonable to be skeptical…

Democratising Risk - or how EA deals with critics

In your view, what would it look like for EA to pay sufficient attention to mental health?

To me, it looks like there's a fair amount of engagement on this:

  • Peter Singer obviously cares about the issue, and he's a major force in EA by himself.
  • Michael Plant's last post got a positive writeup in Future Perfect and serious engagement from a lot of people on the Forum and on Twitter (including Alexander Berger, who probably has more influence over neartermist EA funding than any other person); Alex was somewhat negative on the post, but at least he read it.
  • Forum…

I've only just seen this and thought I should chime in. Before I describe my experience, I should note that I will respond to Luke’s specific concerns about subjective wellbeing separately in a reply to his comment.

TL;DR Although GiveWell (and Open Phil) have started to take an interest in subjective wellbeing and mental health in the last 12 months, I have felt considerable disappointment and frustration with their level of engagement over the previous six years.

I raised the "SWB and mental health might really matter" concerns in meetings with GiveWell st…

To me (as someone who has funded the Happier Lives Institute), I just think it should not have taken founding an institute and six years of repeating this message (and feeling largely ignored and dismissed by existing EA orgs) to reach the point we are at now.

I think expecting orgs and donors to change direction is certainly a very high bar. But then I don't think we should pride ourselves on being a community that pivots and changes direction when new data (e.g. on subjective wellbeing) is made available to us.

Democratising Risk - or how EA deals with critics
Aaron Gertler (1mo, 3 karma) · Moderator Comment

As a moderator: the "basic background knowledge" point is skirting the boundaries of the Forum's norms; even if you didn't intend to condescend, I found it condescending, for the reasons I note in my other reply. 

The initial comment — which claims that Halstead is misrepresenting a position, when "he understands and disagrees" is also possible — also seems uncharitable. 

I do see this charitable reading as an understandable thing to miss, given that everyone is leaving brief comments about a complex question and there isn't much context. But I als…

anonymousEA (1mo, −7 karma)
Democratising Risk - or how EA deals with critics

Even if it's only a "mildly insulting caricature", it's still a way to claim that certain people are unintelligent or unserious without actually presenting an argument.

Compare:

  • "A small handful of incredibly wealthy techbros"
  • "A small handful of incredibly wealthy people with similar backgrounds in technology, which could lead to biases X and Y"

The first of these feels like it's trying to do the same thing as the second, without actually backing up its claim. 

When I read the second, I feel like someone is trying to make me think. When I read the first, I feel like someone is trying to make me stop thinking.

Democratising Risk - or how EA deals with critics
Aaron Gertler (1mo, 7 karma) · Moderator Comment

I think Halstead knows what degrowth advocates claim about degrowth (that it won't have built-in humanitarian costs). And I think he disagrees with them, which isn't the same as not understanding their arguments.

Imagine people arguing whether to invade Iraq in the year following the 9/11 attacks. One of them points out that invading the country will involve enormous built-in humanitarian costs. Their interlocutor replies:

"Your characterization of an Iraq invasion as having "enormous humanitarian costs" "built in" is flatly untrue in a way that is obvious t... (read more)

Democratising Risk - or how EA deals with critics
Aaron Gertler (1mo, 14 karma) · Moderator Comment

As a moderator, I thought Lukas's comment was fine.

I read it as a humorous version of "this doesn't sound like something someone would say in those words", or "I cast doubt on this being the actual thing someone said, because people generally don't make threats that are this obvious/open".  

Reading between the lines, I saw the comment as "approaching a disagreement with curiosity" by implying a request for clarification or specification ("what did you actually hear someone say?"). Others seem to have read the same implication, though Lukas could have…

Democratising Risk - or how EA deals with critics
Aaron Gertler (1mo, 16 karma) · Moderator Comment

As a moderator, I agree with David that this comment doesn't abide by community norms. 

It's not a serious offense, because "oh dear" is a mild comment that isn't especially detrimental to a conversation on its own. But if a reply implies that a post or comment is representative of some bad trend, or that the author should feel bad/embarrassed about what they wrote, and doesn't actually say why, it adds a lot more heat than light.

Democratising Risk - or how EA deals with critics

Note: I discuss Open Phil to some degree in this comment. I also start work there on January 3rd. These are my personal views, and do not represent my employer.

Epistemic status: Written late at night, in a rush, I'll probably regret some of this in the morning but (a) if I don't publish now, it won't happen, and (b) I did promise extra spice after I retired.

I think you contributed something important, and wish you had been met with more support. 

It seems valuable to separate "support for the action of writing the paper" from "support for the arguments…

This is a great comment, thank you for writing it. I agree - I too have not seen sufficient evidence that could warrant the reaction of these senior scholars. We tried to get evidence from them and tried to understand why they explicitly feared that OpenPhil would not fund them because of some critical papers. Any arguments they shared with us were unconvincing. My own experience with people at OpenPhil (sorry to focus the conversation only on OpenPhil, obviously the broader conversation about funding should not only focus on them) in fact suggests the opp…

Comments for shorter Cold Takes pieces

I don't think I've seen anyone reference the Culture series in connection with these posts yet. The series places a utopian post-scarcity and post-death society — the Culture, run by benevolent AIs that do a good job of handling human values — in conflict with societies that are not the Culture.

I've only read The Player of Games myself, and that book spends more time with the non-utopian than the utopian society, but it's still a good book, and one that many people recommend as an entry point into the series.

Taymon (21d, 1 karma): The Fun Theory Sequence (which is on a similar topic) had some things to say [https://www.lesswrong.com/posts/vwnSPgwtmLjvTK2Wa/amputation-of-destiny] about the Culture.
tessa (1mo, 4 karma): I haven't read the Culture series, but/and I really enjoyed this meta piece about it: Why The Culture Wins: An appreciation of Iain M. Banks [https://www.sciphijournal.org/index.php/2017/11/12/why-the-culture-wins-an-appreciation-of-iain-m-banks/], a really excellent discussion of meaning-seeking within a post-scarcity utopia. An excerpt:
Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being

This Twitter thread from economist Chris Blattman, who "spent the last 15 years studying cash and also CBT", is an interesting response to the Vox article based on this study. An excerpt:

There ought to be huge amounts of investment in testing whether these techniques can be automated into apps, implemented by non experts, performed in groups or over mass media. Some of this testing is already happening but it needs to explode in scale.

That’s because scaling these interventions is harder than the CBT enthusiasts are letting on. Helping an average villager b

... (read more)
2021/2022 Principle-driven Reflection/Planning Workshop Dec 30th Session

As a mod, I was a bit confused when I saw two events with identical titles and thought you might have double-posted accidentally. You may want to include dates in your titles when you share two events that are this similar.

gty3310 (1mo, 1 karma): Thanks for the feedback, Aaron; changing the name now!
(Answered) Tax question: Help us donate millions, get a $500 bounty!

Thanks, Stuart! This answer was outstanding. I'll follow up with you privately about the bounty payment.

Open Thread: Winter 2021

You had a non-syntactical space between [LinkedIn] and your URL. I removed it.

(Note that you don't need to turn on the Markdown editor to edit your bio — the bio is in Markdown no matter what.)

JackM (1mo, 2 karma): Thanks Aaron!
Stress - effective ways to reduce it

See this post — outdated in places, but the "personal blog" section is still accurate.

Currently meant to be "personal blog":

Posts related to personal health or productivity (unless there is a clear connection to EA work; for example, a post on research productivity)

That's why it's hard to categorize the stress post. It could make some reader more productive and impactful, but if that's the case, so would a post about buying a more comfortable chair, or a post about finding the best ice cream to make yourself happier and more motivated — there's a lin…

With how many EA professionals have you noticed some degree of dishonesty about how impactful it would be to work for them?

This question is oddly worded, such that it seems meant to elicit only answers about dishonesty, rather than more nuanced takes on the balance of honesty and dishonesty in recruiting.

When I went through a series of interviews with many organizations in 2018, I mostly remember it feeling really honest:

  • I had applied for a position as Stuart Russell's personal assistant. When I spoke to him about the role, he frankly told me that he wasn't sure the position would work out at all, and that past personal assistants had done very little to boost his productivity…
High School Seniors React to 80k Advice

This may be the best execution I've seen of one of my EA Forum writing prompts:

Have you tried to explain EA to anyone recently? How did it go? Based on your experience, are there any frames or phrasings that you would/wouldn’t recommend?

Wonderful work!

[Book rec] The War with the Newts as “EA fiction”

I don't remember the book's plot very well, but I do remember thinking it was brilliantly written, and I'd recommend it highly.

The Effective Altruism Handbook

I agree that it could be easier to get back to the index — there's a lot more we can do with sequences!

When you get to the end of a post, you have to scroll all the way up and look for the hard-to-see arrow to move to the next post.

When you get to the end of a post, you should see a navigation area like this:

Are you not seeing that? If so, what browser are you using?

casebash (1mo, 2 karma): Oh, I feel silly, I missed that!
Stress - effective ways to reduce it

These posts are mostly about personally improving one's own life, but also have an element of "these are promising ways many people could improve their lives, and this problem could be important to focus on". This makes it hard to place them conclusively in "frontpage" vs. "personal blog".

I wound up leaving the sleep post in "frontpage" and will do the same here, but I'd be happy to hear from anyone who disagrees/doesn't want to see content like this on frontpage.

Ben Williamson (1mo, 4 karma): I appreciate how this straddles both "frontpage" and "personal blog". I'll be publishing a case for the value of the project as a cost-effective mental health intervention later this week, which may help better demonstrate this as a 'problem area' somewhat separate from more general life advice. Happy to defer to collective opinion though on the best placement of the content.
Brendon_Wong (1mo, 3 karma): Out of curiosity, what are the inclusion criteria for frontpage posts? Ignoring the broader global well-being considerations, if this is a "meta intervention" to increase the well-being/effectiveness of EAs, would that be "relevant to doing good effectively", which is the stated description for frontpage posts?
Dvir Caspi (1mo, 2 karma): Aaron, any update on this?
Flimsy Pet Theories, Enormous Initiatives

I'm surprised to hear that so many people you speak with feel that way. My experience of using Facebook (with an ad blocker) is that it's a mix of interesting thinkposts from friends in EA or other academic circles + personal news from people I care about, but would be unlikely to proactively keep in touch with (extended family, people I knew in college, etc.). 

I certainly scroll past my fair share of posts, but the average quality of things I see on FB is easily competitive with what I see on Twitter (and I curate my Twitter carefully, so this is pra…
