All of IanDavidMoss's Comments + Replies

FWIW, in the (rough) BOTECs we use for opportunity prioritization at Effective Institutions Project, this has been our conclusion as well. GCR prevention is tough to beat for cost-effectiveness even only considering impacts on a 10-year time horizon, provided you are comfortable making judgments based on expected value with wide uncertainty bands.

I think people have a cached intuition that "global health is most cost-effective on near-term timescales" but what's really happened is that "a well-respected charity evaluator that researches donation opportunit... (read more)

Could you say more about what you see as the practical distinction between a "slow down AI in general" proposal vs. a "pause" proposal?

Fun! I'm glad that you're working with experts on administering this and applaud the intention to post lessons learned. If you haven't already come across them, you might find these resources on participatory grantmaking helpful.

2
David Clarke
1y
Thanks very much! I've spoken to a couple of experts on PGM and have signed up to this mailing list: https://www.participatorygrantmaking.org/. I hadn't seen those resources though so I'll check them out. 

a system of governance that has been shown repeatedly to lead to better organizational performance.

This is a pretty strong empirical claim, and I don't see documentation for it either in your comment or the original post. Can you share what evidence you're basing this on?

Several years ago, 12 self-identified women and people of color in EA wrote a collaborative article that directly addresses what it's like to be part of groups and spaces where conversation topics like this come up. It's worth a read: Making discussions in EA groups inclusive

I'll bite on the invitation to nominate my own content. This short piece of mine spent little time on the front page and didn't seem to capture much attention, either positive or negative. I'm not sure why, but I'd love for the ideas in it to get a second look, especially by people who know more about the topic than I do.

Title: Leveraging labor shortages as a pathway to career impact? [note: question mark was added today to better reflect the intended vibe of the post]

Author: Ian David Moss

URL: https://forum.effectivealtruism.org/posts/xdMn6FeQGjrXDPnQj/le... (read more)

Hi David, thanks for your interest in our work! I need to preface this by emphasizing that the primary purpose of the quantitative model was to help us assess the relative importance of, and promise of engaging with, different institutions implicated in various existential risk scenarios. There was less attention given to the challenge of nailing the right absolute numbers, so those should be taken with a super-extra-giant grain of salt.

With that said, the right way to understand the numbers in the model is that the estimates were about the impact over 1... (read more)

2
Denkenberger
1y
Thanks for the clarification. I would say this is quite optimistic, but I look forward to your future cost-effectiveness work.

Dustin & Cari were also among the largest donors in 2020: https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valleys

Wow, I didn't see it at the time but this was really well written and documented. I'm sorry it got downvoted so much and think that reflects quite poorly on Forum voting norms and epistemics.

I like how Hacker News hides comment scores. Seems to me that seeing a comment's score before reading it makes it harder to form an independent impression.

I fairly frequently find myself thinking something like: "this comment seems fine/interesting, and yet it's got a bunch of downvotes; the downvoters must know something I don't, so I shouldn't upvote." If others also reason this way, the net effect is herd behavior. What if I only saw a comment's score after voting/opting not to vote?

Maybe quadratic voting could help, by encouraging everyone to focus t... (read more)

I think the post ended up around 0 or 1 karma, is that right? (I mean before people changed their voting based on hindsight!) I think it's important to distinguish between "got downvoted a lot but ended up at neutral karma" vs. "got downvoted double digits into no longer being visible." The former reflects somewhat poorly on EA, the latter very poorly. 

Moreover, Sven Rone is a pseudonym. The author used a pen name as their views were unpopular and underappreciated at the time; they likely feared career repercussions if they went public with them. It's unfortunate that this was the environment they found themselves in.

43
Arepo
1y

Seconded. This whole saga has really made me sour on some already mixed views on EA epistemics.

I think it would have been very easy for Jonas to communicate the same thing in less confrontational language. E.g., "FWIW, a source of mine who seems to have some inside knowledge told me that the picture presented here is too pessimistic." This would have addressed JP's first point and been received very differently, I expect.

2
Emrik
1y
To clarify, was it this sentence you found confrontational? (I'm not counter-arguing, I am genuinely asking, because I seem to lack an eye for this sort of thing, or alternatively I'm usually right and most people are wrong. The truth is probably in the middle somewhere if I were to guess.)
4
JP Addison
1y
Agreed.

I understood the heart of the post to be in the first sentence: "what should be of greater importance to effective altruists anyway is how the impacts of all [Musk's] various decisions are, for lack of better terms, high-variance, bordering on volatile." While Evan doesn't provide examples of what decisions he's talking about, I think his point is a valid one: Musk is someone who is exceptionally powerful, increasingly interested in how he can use his power to shape the world, and seemingly operating without the kinds of epistemic guardrails that EA leaders try to operate with. This seems like an important development, if for no other reason than that Musk's and EA's paths seem more likely to collide than diverge as time goes on.

2
Jackson Wagner
1y
As I write in my answer above, I think high-variance and volatile decisions are kinda just the name of the game when you are trying to make billions of dollars and change industries in a very-competitive world. Agreed that Musk is "operating without the kinds of epistemic guardrails that EA leaders try to operate with", and that it would be better if Musk was wiser.  But it is always better if people were wiser, stronger versions of themselves!  The problem is that people can't always change their personalities very much, and furthermore it's not always clear (from the inside) which direction of personality change would be an improvement.  The problem of "choosing how epistemically modest I should be", is itself a deep and unsettled question. (Devil's advocate perspective: maybe it's not Musk that's being too wild and volatile, but EAs who are being too timid and unambitious -- trying to please everyone, fly under the radar, stay apolitical, etc!  I don't actually believe this 100%, but maybe 25%: Musk is more volatile than would be ideal, but EA is also more timid than would be ideal.  So I don't think we can easily say exactly how much more epistemically guard-railed Musk should ideally be, even if we in the EA movement had any influence over him, and even if he had the capability to change his personality that much.)
3
Evan_Gaensbauer
1y
Strongly upvoted. You've put my main concern better than I knew how to put it myself.
6
Charles He
1y
What you said seems valid. However, unfortunately, it seems low EV to talk a lot about this subject. Maybe the new EA comms and senior people are paying attention to the issues, and for a number of reasons that seems best in this situation. If that's not adequate, it seems valid to push or ask them about it.

I agree this is an important point, but also think identifying top-ranked paths and problems is one of 80K's core added values, so don't want to throw out the baby with the bathwater here.

One less extreme intervention that could help would be to keep the list of top recommendations, but not rank them. Instead 80K could list them as "particularly promising pathways" or something like that, emphasizing in the first paragraphs of text that personal fit should be a large part of the decision of choosing a career and that the identification of a top tier of car... (read more)

I was also going to say that it's pretty confusing that this list is not the same as either the top problem areas listed elsewhere on the site or the top-priority career paths, although it seems derived from the latter. Maybe there are some version control issues here?

I feel like this proposal conflates two ideas that are not necessarily that related:

  1. Lots of people who want to do good in the world aren't easily able to earn-to-give or do direct work at an EA organization.
  2. Starting altruistically-motivated independent projects is plausibly good for the world.

I agree with both of these premises, but focusing on their intersection feels pretty narrow and impact-limiting to me. As an example of an alternative way of looking at the first problem, you might consider instead or in addition having people on who work in high... (read more)

2
Stan Pinsent
2y
Thanks, Ian. You make an excellent point: I don't want to unnecessarily narrow my focus here. Perhaps I should focus on 1) because it also allows a broader scope of episode ideas. "How can ordinary people maximise the good they do in the world?" allows lots of different responses. Independent projects could be one of them. On the other hand 2) seems more neglected. There's probably lots out there about startups or founding charities, but I can't find anything on running altruistic projects (except a few one-off posts).

Hmm, I guess I'm more optimistic about 3 than you are. Billionaires are both very competitive and often care a lot about how they're perceived, and if a scaled-up and properly framed version of this evaluation were to gain sufficient currency (e.g. via the billionaires who score well on it), you might well see at least some incremental movement. I'd put the chances of that around 5%.

I thought this was great! With a good illustrator and some decent connections I think you could totally get it published as a picture book. A couple of feedback notes:

  • The transition from helping people in Johnny's life to helping people far away via the internet felt a bit forced. If Johnny is supposed to be a student in primary school like the intended reader, it wasn't clear where he gets his donation budget from, and I wonder how relatable that would be (a donation of $25 is mentioned, which I guess could come from allowance/gift money, but it's im
... (read more)
1
William Spaul
2y
Thanks very much for the encouragement and feedback Ian, those are excellent ideas, will get writing! 

I'm not aware of anyone working on it really seriously!

It's possible there's a more comprehensive writeup somewhere, but I can offer two data points regarding the removal of $30B in pandemic preparedness funding that was originally part of Biden's Build Back Better initiative (which ultimately evolved into the Inflation Reduction Act):

  • I had an opportunity to speak earlier this summer with a former senior official in the Biden administration who was one of the main liaisons between the White House and Congress in 2021 when these negotiations were taking place. According to this person, they couldn't fight effec
... (read more)
5
weeatquince
1y
I don’t follow US pandemic policy closely, but wasn’t some $bn (albeit much less than $30bn) still approved for pandemic preparedness, and isn't more still being discussed (a very quick google points to $0.5b here and $2b here etc., and I expect there is more)? If so, that seems like a really significant win. Also, your reply was about government, not about EA or adjacent organisations. I am not sure anyone in this post / thread has given any evidence of a "valiant effort" yet, such as listing campaigns run or even policy papers written etc. The only post-COVID policy work I know of (in the UK, see comment below) seemed very successful, and I am not sure it makes sense to update against "making the government sane" without understanding what the unsuccessful campaigns have been. (Maybe also Guarding Against Pandemics: are they doing stuff that people feel ought to have had an impact by now, and has it?)
5
Susan II
2y
As opposed to speaking with Congressmen, is "prepare a scientific report and meet with the NIH director/his advisors" an at-all plausible mechanism for shutting down the specific research grant Soares linked? Or if not, becoming NIH peer reviewers?
6
Peter Wildeford
2y
Surely grassroots support for pandemic preparedness wouldn't be too hard to get, would it? Is anyone working on this? Should someone work on this?

I have some sympathy for the second view, although I'm skeptical that sane advisors have significant real impact. I'd love a way to test it as decisively as we've tested the "government (in its current form) responds appropriately to warning shots" hypotheses.

On my own models, the "don't worry, people will wake up as the cliff-edge comes more clearly into view" hypothesis has quite a lot of work to do. In particular, I don't think it's a very defensible position in isolation anymore....if you want to argue that we do need government support but (fortunatel

... (read more)

"I think the second view is basically correct for policy in general, although I don't have a strong view yet of how it applies to AI governance specifically. One thing that's become clear to me as I've gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that's possible in those settings."

This is a problem I've spoken often about, and I'm curren... (read more)

Amazing resource, thanks so much! I'll add that the Effective Institutions Project is in the process of setting up an innovation fund to support initiatives like these, and we are planning to make our first recommendations and disbursements later this year. So if anyone's interested in supporting this work generally but doesn't have the time/interest to do their own vetting, let us know and we can get you set up as a participant in our pooled fund (you can reach me via PM on the Forum or write info@effectiveinstitutionsproject.org).

Also worth noting that you can be influential on Twitter without necessarily having a large audience (e.g., by interacting strategically with elites and frequently enough that they get to know you).

It seems worth noting that you can get famous on Twitter for tweeting, or you can happen to be famous on Twitter as a result of becoming famous some other way. The two pathways imply very different promotional strategies and theories of impact. But my sense is that it's pretty hard to grow an audience on Twitter through tweeting alone, no matter how good your content is.

3
IanDavidMoss
2y
Also worth noting that you can be influential on Twitter without necessarily having a large audience (e.g., by interacting strategically with elites and frequently enough that they get to know you).

He seems like a natural fit for the American economist-public intellectual cluster (Yglesias/Cowen/WaitButWhy/etc.) that's already pretty sympathetic to EA. The Twitter content is basically "EA in depth," but retaining the normie socially responsible brand they've come to expect and are comfortable with. Max Roser would be another obvious candidate to promote Peter. I'd start there and see where it goes.

I'm curious how this applies to infohazards specifically. Without actually spilling any infohazards, could you comment on how one could do a good job applying this model in such a situation?

1
ChanaMessinger
2y
Perhaps "I won't tell you things I think will be negative for the world to be more public" or "by default, I won't tell you things I think will make you worse off"

I'm a little surprised that Rob Wiblin doesn't have more followers, but he's already high-profile enough that it wouldn't take that big of a push to get him into another tier. He's also the most logical person to leverage 80K's broader content on social media given his existing profile and activity. (ETA: although Habiba could do this too, per your suggestion.)

Amanda Askell consistently has thoughtful and underrated takes on Twitter.

2
Nathan Young
2y
Having read your link, she’s an AI expert, great suggestion.
2
Nathan Young
2y
what’s her field again?

Peter Wildeford is an A+ follow on Twitter IMHO. I think it's realistic to get him a bunch more followers if that's something he wanted.

4
Nathan Young
2y
What strategy would you propose to get him more followers?

I assume you're being modest in not suggesting "Nathan Young," so I'll do it for you.

2
Nathan Young
2y
I don’t think I should be high on this list. I’m a good networker but I don’t think EA as a whole benefits from me being a much bigger account.

Do we know that he doesn't already have a social media manager? He's had a lot of help to promote the book.

3
Nathan Young
2y
Given how superb the team were at promoting Will's book, I struggle to believe they are seriously attempting Twitter promotion right now.

In light of the two-factor voting, I'm unclear what you mean by "upvote." I would suggest using the "agree/disagree" box as the scoring, with "upvote/downvote" meant to refer to your wisdom in suggesting the person and/or the analysis you provided. But I think you should clarify which one you intend to actually pay attention to.

4
Nathan Young
2y
You’re right, but I think the opposite way round makes more sense: “Use upvotes to signal the priority of the answer, and the agree/disagree to support the specific reasoning given by the answer. I.e., the upvote ordering should be the correct ordering.”

I think raising one's own kids is often significantly more rewarding than raising adopted kids, just because one's own kids will share so much more of one's cognitive traits, personality traits, quirks, etc, that you can empathize better with them.

I'm extremely skeptical of this claim. Many parents I know with multiple biological children report that they have immensely different personalities, and it seems intuitively obvious that any statistical correlations of such traits between child and parent that are driven by genes will be overwhelmed by statistic... (read more)

9
Jeff Kaufman
2y
One confounding factor here is that the children that you might potentially adopt are pretty different from the children you might have biologically. Most adoptees have gone through some form of trauma, they are rarely newborns, they often had worse prenatal environments, their biological parents probably wouldn't enjoy the forum, etc. I think if somehow one of my children had been swapped at birth with a child from similar parents it probably wouldn't have much of an impact on what raising them would be like, but that's not really what we're talking about? (I do also think it's cute the various more specific ways our kids resemble us, but I agree this is not a major contribution to the experience of parenting.)

Haha, well it would depend a lot on the specifics but we'd probably at least be up for having a conversation about it :)

Maybe indirectly? Addressing talent gaps within the EA community isn't a primary focus of ours, but it does seem that our outreach is helping to increase the pool of mid-career and senior people out in the world who take EA seriously.

5
Yonatan Cale
2y
May I be a capitalist for a moment? If another EA org would offer you $300,000 for finding them a really good really senior person they'd hire, how would you feel about that? </terribleCapitalist>

Effective Institutions Project here. As of now I'd say our number is more like $150-200K, assuming we're talking about an annual commitment. The number is lower because our networks give us access to a large talent pool and I'm fairly optimistic that we can fill openings easily once we have the budget for them.

4
Yonatan Cale
2y
Hearing this, I wonder if you could maybe close talent gaps in other orgs?
2
Yonatan Cale
2y
Thank you! I added a link directly here from the post

Thanks for the response!

I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside

That's fair, and I should also be clear that I'm less familiar with LTFF's grantmaking than some others in the EA universe.

It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.

Oh, I totally agree that the kind of risk analysis I mentioned is not costless, a... (read more)

I strongly agree with Sam on the first point regarding downside risks. My view, based on a range of separate but similar interactions with EA funders, is that they tend to overrate the risks of accidental harm [1] from policy projects, and especially so for more entrepreneurial, early-stage efforts.

To back this up a bit, let's take a closer look at the risk factors Asya cited in the comment above. 

  • Pushing policies that are harmful. In any institutional context where policy decisions matter, there is a huge ecosystem of existing players, rang
... (read more)
7
abergal
2y
“even if the upside of them working out could really be quite valuable” is the part I disagree with most in your comment. (Again, speaking just for myself), I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside; my overall calculus was something like “this doesn’t seem like it has big upside (because the policy asks don’t seem all that good), and also has some downside (because of person/project-specific factors)”. It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.

On potential risk factors:

  • I agree that (1) and (2) above are very unlikely for most grants (and are correlated with being unusually successful at getting things implemented).
  • I feel less in agreement about (3): my sense is that people who want to interact with policymakers will often succeed at taking up the attention of someone in the space, and the people interacting with them form impressions of them based on those interactions, whether or not they make progress on pushing that policy through.
  • I think (4) indeed isn’t specific to the policy space, but is a real downside that I’ve observed affecting other EA projects. I don’t expect the main factor to be that there’s only one channel for interacting with policymakers, but rather that other long-term-focused actors will perceive the space to be taken, or will feel some sense of obligation to work with existing projects / awkwardness around not doing so.

Caveating a lot of the above: as I said before, my views on specific grants have been informed heavily by others I’ve consulted, rather than coming purely from some inside view.

Re: "Why haven't I heard of OR?", I think your comments on the fragmentation and branding challenges are extremely on point. Last year Effective Institutions Project did a scoping exercise looking at different fields and academic disciplines that intersect with institutional decision-making, and it was amazing to see the variety of names and frames for what is ultimately a collection of pretty similar ideas. With that said, I think the directions that have been explored under the OR banner are particularly interesting and impressive, and am really glad to ... (read more)

One thing that occurs to me is that your post assumes that the only way to address the issues raised here is to hire different people and/or give them different responsibilities. But another possible route is for EA organizations to make more use of management consultancies. That could be a path worth considering for small nonprofits whose leaders mainly do just want to hire someone to take care of all the tasks they don't want to do themselves, and whose opportunity to make use of more strategic and advanced operations expertise is likely to be too sporad... (read more)

1
Weaver
2y
Well thanks for putting this brain worm into my ear. As I'm trying to make a decision between more project management for the Government or going back to the private sector, this looks very appealing.

I think this post is excellent overall, but I do want to register a disagreement with your bid to separate operations work from the work that PAs do in most small nonprofit organizations. You have a keen observation about how the nature of operations work changes with scale: at top levels of a multinational corporation, the notion of a senior operations executive doing PA-style work is ludicrous. But for most EA organizations, that comparison is kind of nonsensical; we're talking about small outfits with 2-6 staff members and a mishmash of interns, contrac... (read more)

2
Joseph Lemien
2y
I think you are right that there can be a lot of overlap between the type of work that an operations associate/junior staff member does and the work that a personal assistant does. My main push is that I don't want people to conflate the two. Seeing job descriptions for "operations managers" that involve managing a boss's calendar and handling emails for the boss made me think of how frustrated a person would be to apply for and accept a manager-level job only to be given menial tasks, similar to what was written in Senior EA 'ops' roles: if you want to undo the bottleneck, hire differently. Nonetheless, I think your point stands that on a small team the border can be quite fuzzy.

Regarding the janitor, you make a good point. I had thought about my own experience working with small and medium enterprises. I hadn't even thought about the facilities department being overseen by the COO, but now that you mention it, it makes a lot of sense.

EDIT: An Operations Manager role at Open Philanthropy describes the work as including:

  • processing invoices
  • keeping various repositories of internal information organized and up-to-date
  • scheduling/calendar management for a senior staff person
  • data cleaning
  • handling mail
  • managing reception
  • taking notes on calls
  • assisting the recruiting team with tasks such as emailing candidates

Do you think that some of the people who would have been attracted to effective philanthropy in the past now just join effective altruism?

Some, sure. EA seems to be a lot more mainstream now than it was even 3-4 years ago, so that's probably the main reason.

While I think EP has been influential, I just didn't find the work from CEP and similar places as intellectually engaging as what EA puts out (or as important overall).

I think the main thing EA has going for it over EP is that it has a much better track record of taking ideas seriously. EP explored a lo... (read more)

I wasn't there at the very beginning, but have followed the effective philanthropy "scene" since 2007 or so. My sense is that most EA community members aren't very knowledgeable about this whole side of institutional philanthropy, so I was pleasantly surprised to see the history recounted pretty accurately here! With that said, one quibble is that the book you cited entitled Effective Philanthropy by Mary Ellen Capek and Molly Mead is not one I'd ever heard of before reading this post; I think this is just a case of a low-profile resource happening to get ... (read more)

9
BrownHairedEevee
2y
I don't think EP has fizzled out entirely. ImpactMatters is perhaps part of the second wave of EP. Charity Navigator acquired it in 2020 and incorporated its impact ratings into its Encompass Rating System.
4
ColdButtonIssues
2y
Do you think that some of the people who would have been attracted to effective philanthropy in the past now just join effective altruism?  I've had a loose interest in institutional philanthropy for a while (I knew of your work at Createquity while it was ongoing!) and while I think EP has been influential, I just didn't find the work from CEP and similar places as intellectually engaging as what EA puts out (or as important overall).

I don't have any inside info here, but based on my work with other organizations I think each of your first three hypotheses are plausible, either alone or in combination.

Another consideration I would mention is that it's just really hard to judge how to interpret advocacy failures over a short time horizon. Given that your first try failed, does that mean the situation is hopeless and you should stop throwing good money after bad? Or does it mean that you meaningfully moved the needle on people's opinions and the next campaign is now likelier to succeed? ... (read more)

One context note that doesn't seem to be reflected here is that in 2014, there was a lot of optimism for a bipartisan political compromise on criminal justice reform in the US. The Koch network of charities and advocacy groups had, to some people's surprise, begun advocating for it in their conservative-libertarian circles, which in turn motivated Republican participation in negotiations on the hill. My recollection is that Open Phil's bet on criminal justice reform funding was not just a "bet on Chloe," but also a bet on tractability: i.e., that a relativ... (read more)

24
CW
2y

I do not believe this explains the funding rationale. If you look at the groups funded (as per my comment), these are not groups interested in bipartisan political compromise. If OP were interested in bipartisan efforts there are surely better and more effective groups to fund in that direction rather than the groups funded here with very particular, and rather strong, political beliefs which cannot in many cases (even charitably) be described as likely to contribute to bipartisan efforts at reform.

So this is good context. What are your thoughts on why they kept donating?

Separating out how important networking is for different kinds of roles seems valuable, not only for the people trying to climb the ladder but also for the people already on the ladder. (e.g., maybe some of these folks desperate to find good people to own valuable projects that otherwise wouldn't get done should be putting more effort into recruiting outside of the Bay.)

I like this comment because it does a great job of illustrating how socioeconomic status influences the risks one can take. Consider the juxtaposition of these two statements:

(from the comment)

Maybe this is mainly targeted at undergraduate students, who are more likely to have a few months of time over the summer with no commitments. But in that case how do they have the money to do what is basically an extended vacation? Most students aren't earning much/any money. 

  • Maybe this is only targeted at students who have wealthy families willing to fund expe
... (read more)
6
Charles He
2y
In theory, there is funding specifically to cover exactly the scenarios you are worried about (“40%”), for promising AI safety people going to the Bay Area. If there is a systemic gap, the funders would very much like to know and people should comment (or PM and concerns can be referred if appropriate).

Really appreciate you writing this! Echoing others, I think many of these more self-serving motivations are pretty common in the community. With that said, I think some of these are much more potentially problematic than others, and the list is worth disaggregating on that dimension. For example, your comment about EA helping you not feel so fragile strikes me as prosocial, if anything, and I don't think anyone would have a problem with someone gaining hope that their own suffering could be reduced from engaging in EA.

The ones that I think are most worryin... (read more)

I think the issue is more that different users have very disparate norms about how often to vote, when to use a strong vote, and what to use it on. My sense (from a combination of noticing voting patterns and reading specific users' comments about how they vote) is that most are pretty low-key about voting, but a few high-karma users are much more intense about it and don't hesitate to throw their weight around. These users can then have a wildly disproportionate effect on discourse because if their vote is worth, say, 7 points, their opinion on one piece ... (read more)
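The arithmetic behind that "wildly disproportionate effect" is easy to make concrete. Here is a toy calculation (the weights are hypothetical and this is not the Forum's actual karma algorithm, just an illustration of how a couple of high-weight strong votes can swamp many ordinary ones):

```python
# Toy model: a comment's score as the sum of weighted votes.
# direction is +1 (upvote) or -1 (downvote); weight is the voter's
# hypothetical vote strength.
def comment_score(votes):
    """Sum weight * direction over all (weight, direction) pairs."""
    return sum(weight * direction for weight, direction in votes)

# Ten ordinary users upvote with weight 1...
ordinary = [(1, +1)] * 10
# ...but two high-karma users strong-downvote with weight 7 each.
heavy = [(7, -1)] * 2

print(comment_score(ordinary))          # 10
print(comment_score(ordinary + heavy))  # -4
```

Under these assumed weights, two strong-downvoters outweigh ten ordinary upvoters and flip the comment's score negative, even though 10 of the 12 voters approved of it.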

3
Charles He
2y
It would be interesting to get people's guesses/hypotheses about this specific behavior, then see how often it actually occurs. Personally, my guess is that EA Forum accounts with large karma don't often do this behavior "negatively" (they rarely strong downvote and negatively comment). When this does happen, I expect to find it to be positive and reasonable.

In case anyone is interested, my activity referred to above, begun almost half a year ago, provides data to identify instances of the behavior mentioned by IanDavidMoss, and other potentially interesting temporal patterns of voting and commenting. There are several other interesting things that can be examined too. I would be willing to share this data, as well as provide various kinds of technical assistance to people working on any principled technical project[1] related to the forum or relevant data.

1. ^ I do not currently expect to personally work on an EA associated project related to the forum or accept EA funding to do so.
4
Linch
2y
This feels reasonable to me. Personally, I very rarely strong-upvote (except the default strong-upvote for my own posts), and almost never strong-downvote unless it's clear spam. If there's a clearer "use it or lose it" policy, I think I'd be more inclined to ration out more strong-upvotes and strong-downvotes for favorite/least favorite (in terms of usefulness) post or comment that week.

Sorry if I'm being dense, but where is this 4-tuple available?

2
Charles He
2y
I sent a PM.

I would be in favor of eliminating strong downvotes entirely. If a post or comment is going to be censored or given less visibility, it should be because a lot of people wanted that to happen rather than just two or three.

1
MichaelStJules
2y
Ya, I agree. I think the only things I strong downvote are things worth reporting as norm-breaking, like spam or hostile/abusive comments. We could also just weaken (strong) downvotes relative to upvotes.
Load more