
We should put all possible changes/reforms in one big list that everyone can upvote/downvote and agree/disagree with.

EA is governed by a set of core EAs, so if you want change, I suggest that giving them less to read and a strong signal of community consensus is good.

The top-level comments should be short, clear explanations of possible changes. If you want to comment on a change, do it as a reply to the relevant top-level comment.
 

This other post gives a set of reforms, but they are in a big, long list at the bottom. Instead, we can have a list that is ordered by our opinions! https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1


Note that I do not agree with all comments I post here.

38 Answers

Beyond transparently disclosing financial and personal relationships with (e.g.) podcast guests or grantees, EA institutions should avoid apparent conflicts of interest more strictly. For example, grant reviewers should recuse themselves from reviewing proposals by their housemates.

I'd be curious to hear disagreements with this.

I guess the latter half of this suggestion already happens.

[This comment is no longer endorsed by its author]
Muireall
Does it? The Doing EA Better post made it sound like conflict-of-interest statements are standard (or were at one point), but recusal is not, at least for the Long-Term Future Fund. There's also this Open Philanthropy grant to OpenAI, which is infamous enough that even I know about it. That was in 2017, though, so maybe it doesn't happen anymore.
Nathan Young
Sorry, what was the CoI with that OpenAI grant?
Muireall
I'm mainly referring to this, at the bottom: Holden is Holden Karnofsky, at the time OP's Executive Director, who also joined OpenAI's board as part of the partnership initiated by the grant. Presumably he wasn't the grant investigator (who isn't named), just the chief authority of their employer. OP's description of their process does not suggest that he or the OP technical advisors from OpenAI held themselves at any remove from the investigation or the decision to recommend the grant.
Nathan Young
Hm. I still don't really see the issue here. These people all work at OpenPhil, right? I guess maybe it looks fishy, but in hindsight, do we think it was?

No, Dario Amodei and Paul Christiano were at the time employed by OpenAI, the recipient of the $30M grant. They were associated with Open Philanthropy in an advisory role.

I'm not trying to voice an opinion on whether this particular grant recommendation was unprincipled. I do think that things like this undermine trust in EA institutions, set a bad example, and make it hard to get serious concerns heard. Adopting a standard of avoiding appearance of impropriety can head off these concerns and relieve us of trying to determine on a case-by-case basis how fishy something is (without automatically accusing anyone of impropriety).

Give users the ability to choose among several karma-calculation formulas for how they experience the Forum. If they want a Forum experience where everyone's votes have equal weight, there could be a Use Democratic Karma setting. Or stick with Traditional Karma. Or Show Randomly / No Karma. There's no clear need for the Forum to impose the same sorting values on everyone.
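For illustration, here is a minimal sketch of what selectable sorting formulas might look like. The setting names, data model, and weights are my assumptions, not the Forum's actual implementation:

```python
import random

def sort_score(votes, mode):
    """Compute a sort score for a post from a list of (voter_weight, direction) votes.

    direction is +1 for an upvote, -1 for a downvote.
    """
    if mode == "traditional":   # weight votes by voter karma, roughly as now
        return sum(weight * direction for weight, direction in votes)
    if mode == "democratic":    # every voter counts equally
        return sum(direction for _, direction in votes)
    if mode == "random":        # ignore karma entirely
        return random.random()
    raise ValueError(f"unknown mode: {mode}")

votes = [(16, +1), (1, -1), (1, -1), (1, -1)]
print(sort_score(votes, "traditional"))  # 13: dominated by one high-karma voter
print(sort_score(votes, "democratic"))   # -2: most individual voters disliked it
```

The same post can land in very different places depending on the formula chosen, which is the point of letting users pick.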

That’s actually a pretty creative idea.

EA should engage more with existing academic research in fields such as Disaster Risk Reduction, Futures Studies, and Science and Technology Studies.

I'd recommend splitting these up into different answers for scoring. I imagine this community is much more interested in some of these fields than others.

Ways of engaging #3: inviting experts from fields to EAG(X)s

Ways of engaging #2: proactively offering funding to experts from the respective fields to work on EA-relevant topics

Ways of engaging #1: literature reviews and introductions to each field for an EA audience.

Nathan Young
And put them on the forum wiki.

Ways of engaging #4: making a database of experts in fields who are happy to review papers and reports from EAs

Ways of engaging #5: prioritise expertise over value alignment during hiring (for a subset of jobs).

...and updated research on climate risk.

Nathan Young
80k's view is pretty recent right?

Set up at least one EA fund that uses the quadratic funding mechanism combined with a minimal vetting process to ensure that all donation recipients are aligned with EA.

How this could work:

  • Users can nominate organizations/projects to be included in the quadratic funding pool.
  • Admins vet donation candidates against a minimal set of criteria to ensure quality, e.g. "Does this project clear a bar for cost-effectiveness according to a defined value system?"
  • Approved projects are displayed on the website with estimates of cost-effectiveness w.r.t. relevant outcomes.
  • Users donate money to any project in the pool that they want, and their donations are matched according to the QF mechanism every month or quarter.
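As a rough illustration of the matching step, here is a minimal sketch assuming the standard quadratic funding formula (a project's raw match is the square of the sum of the square roots of its donations, minus the donations themselves, scaled to fit the matching pool). The function and variable names are mine, not any existing platform's API:

```python
from math import sqrt

def qf_match(donations_by_project, matching_pool):
    """Sketch of standard quadratic funding matching.

    donations_by_project: dict mapping project -> list of individual donations.
    Returns a dict mapping project -> matching funds allocated.
    """
    # Raw QF amount per project: (sum of sqrt(donation))^2 - sum(donations)
    raw = {
        project: sum(sqrt(d) for d in donations) ** 2 - sum(donations)
        for project, donations in donations_by_project.items()
    }
    total_raw = sum(raw.values())
    if total_raw == 0:
        return {project: 0.0 for project in raw}
    # Scale so the matching pool is fully (and only) distributed.
    return {project: matching_pool * amount / total_raw for project, amount in raw.items()}

# Example: 100 donors giving $10 each attract far more matching
# than a single donor giving $1000, even though the totals are equal.
print(qf_match({"A": [10] * 100, "B": [1000]}, matching_pool=5000))
```

The key property is that broad support is rewarded over a few large donations, which is why QF pairs naturally with a vetting step to keep the pool EA-aligned.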

This dovetails with increasing diversity of moral views in EA.

Have you considered applying for funding for running one?

Eevee🔹
I have thought of it but it wasn't a priority for me at the time. Gitcoin has retired their original grants platform, but they're replacing it with a new decentralized grants protocol that anyone can use, which will launch in early Q2, 2023. I would like to wait until then to use that.

There should be one searchable database of all EA grants.

EA orgs should experiment with hiring ads targeted specifically at experts in the field, and consider how much those experts need knowledge of EA for the specific role.

I'm going to interpret this to include "hiring outreach beyond ads" for fields where hiring isn't done mostly through ads.

Wait, is this not the case? 0.0

I worked in some startups and a business consultancy, and this is like the first thing I learned in hiring/headhunting. While writing up Superlinear prize ideas, I made a few variations of SEO prizes targeting mid- to senior-level experts via keywords such as field-specific jargon, upcoming conferences, common workflow queries and new regulations.

This seems like an inefficient way to approach experts initially

Acknowledge that sometimes issues are fraught and that we should discuss them more slowly (while still having our normal honesty norms)

I don't understand this suggestion. How is this not just applause lights? What would be a sensible opposing view?

Tsunayoshi
While Nathan's suggestion is certainly framed very positively, people might object that sometimes the only way to change a system where power is highly concentrated at the top is to use anger about current news as a coordination mechanism to demand immediate change. Once attention invariably fades away, it becomes more difficult to enact bottom up changes. Or to put it differently: often slowing down discussions really is an attempt at shutting them down ("we will form a committee to look into your complaints"). That's why I think that even though I agreed with the decision to collect all Bostrom discussion in one post, it's important to honestly signal to people that their complaints are read and taken seriously.
Nathan Young
It certainly felt like the Bostrom stuff needed to be discussed immediately. I wish I'd felt comfortable saying "let's wait a couple of days".

How would we ensure this happens? Censorship, e.g. keeping related posts in Personal category rather than Community? Heavier moderation?

MichaelStJules
Or should it just be the EA Forum mods' job to pin comments to such posts or to make centralized threads with such reminders? Or is it everyone's job? Will responsibility become too diffuse, so that nothing changes?
Nathan Young
I think "ensure" is too strong. I think if several people say "let's take a day" then that would be effective.

Employees of EA organisations should not be pressured by their superiors against publishing work critical of core beliefs. 

Is there evidence that they are?

While I agree with this question in the particular, there's a real difficulty because absence of evidence is only weak evidence of absence with this kind of thing.

titotal
There are allegations of this occurring in the Doing EA Better post. Ironically, if this is occurring, then it easily explains why we don't have concrete evidence of it yet: people would be worried about their jobs/careers.
Arepo
Can you point me to where? I don't have time to read the post in full, and searching 'pressure' didn't find anything that looked relevant (I saw something about funding being somewhat conditional on not criticising core beliefs, but didn't see anything about employees specifically feeling so constrained).

A study should be conducted that records and analyses the reactions and impressions of people when first encountering EA. Special attention should be paid to the reactions of underrepresented groups, whether defined by demographics (age, race, gender, etc.), worldview (politics, religion, etc.), or background (socio-economic status, major, etc.).

EA should recruit more from the humanities and social studies fields.

We should recruit more from every field.

Is a more precise idea: "EA should spend less time trying to recruit from philosophy, economics and STEM, in order to spend more time trying to recruit from the humanities and social studies"?

Edit: although with philosophy and economics, those are already humanities and social studies...

titotal
I think this is revealing of the shortcomings of making decisions using this kind of upvoted and downvoted poll, in that the results will be highly dependent on the "vibe" or exact wording of a proposal. I think your wording would end up with a negative score, but if instead I phrased it as "the split between STEM and humanities focus should be 80-20 instead of 90-10" (using made-up numbers), then it might swing the other way again. The wording is a way of arguing while pretending we're not arguing.
kbog
I think the format is fine, you just have to write a clear and actionable proposal, with unambiguous meaning.

Any answer below this shouldn't happen.

I.e. any answer with fewer upvotes on its top-level comment shouldn't happen. This is a way to broadly signal at what point you think the answers "become worth doing". Edited for clarity, thanks Guy

What is the line: Karma or Agreement?

Max Clarke
It's karma - which is kind of wrong here.
Nathan Young
Can an opinion be right but unimportant?
Max Clarke
Definitely, for example if people are bikeshedding (vigorously discussing something that doesn't matter very much)

I'm confused, what did you mean to happen with this comment?

This post makes it harder than usual for me to tell if I'm supposed to upvote something because it is well-written, kind, and thoughtful vs whether I agree with it.

I'm going to continue to use up/downvote for good comment/bad comment and disagree/agree for my opinion on the goodness of the idea.

[EDIT: addressed in the comments. Nathan at least seems to endorse my interpretation]

I think because the sorting is solely on karma, the line is "Everything above this is worth considering" / "Everything below this is not important" as opposed to "Everything above this is worth doing"

The parent of this comment shouldn't happen.

Paradox!

Cap the number of strong votes per week.

Strong votes with large weights have their uses in uncommon situations. But these situations are uncommon, so instead of weakening strong votes, make them rarer.

The guideline says use them only in exceptional cases, but there is no mechanism enforcing it: socially, strong votes are anonymous and look like standard votes; and technically, any number of them can be used. They can make a comment section appear very one-sided, whereas if they were rarer, a few ideas could still be lifted or hidden while the rest of the section stayed more diverse.

I do not think this is a problem now, because current power users are responsible. But that is good fortune rather than a guarantee, and it could change in the future. Incidentally, a cap would also set a bar for what counts as exceptional, e.g. "this comment is in the top X this week".
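One way a cap like this could be enforced, as a rough sketch only (the weekly limit and all names here are made up for illustration, not a concrete proposal):

```python
from collections import defaultdict
from datetime import datetime, timedelta

STRONG_VOTES_PER_WEEK = 5  # illustrative number, not a suggested cap

strong_vote_log = defaultdict(list)  # user id -> timestamps of recent strong votes

def try_strong_vote(user_id, now=None):
    """Return True and record the vote if the user is under their weekly cap."""
    now = now or datetime.utcnow()
    week_ago = now - timedelta(days=7)
    recent = [t for t in strong_vote_log[user_id] if t > week_ago]
    if len(recent) >= STRONG_VOTES_PER_WEEK:
        return False  # over the cap; the client would fall back to a normal vote
    recent.append(now)
    strong_vote_log[user_id] = recent
    return True
```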

The guideline says use them only in exceptional cases

I've never noticed this guideline! If this is the case, I would prefer to make it technically harder to do. I've just been doing it if I feel somewhat strongly about the issue...

Jason
Do we know what number of votes Forumwide are strong vs standard? If it is fairly low, publishing that might help us all understand how to use them better. (My take is that I should not be using a looser standard than the norm because that would make my voice count more than it should. So if I saw data suggesting my standard were looser than the norm, it would inform when I strongvote in the future.)

I think some kind of "strong vote income", perhaps just a daily limit as you say, would work.

I will sort of admit to not being that responsible. I probably use a couple of strong votes a blog - when I think something is really underrated usually.  I guess I might be more sparing now.

One situation I use strong votes for is whenever I do "upvote/disagree" or "downvote/agree". I do this to offset others who tend not to split their votes.

Central EA organizations should not make any major reforms for 6 months to allow for a period of reflection and avoid hasty decisions

Every EA-affiliated org should clearly state on their website any source of funding that contributed over $100k.

Why? I don't see the point except that then a reader can shame the org for taking money from someone the reader doesn't like. Let orgs be judged on their outputs per dollar spent please

Jaime Sevilla
More transparency about money flows seems important for preventing fraud, understanding centralization of funding (and so correlated risk) and allowing people to better understand the funding ecosystem!

I have to be honest: I think this is a horrible solution for all three of those problems. As in, if you enact this solution you can't say you've made meaningful progress on any of those.

Not only that but I don't think EA actually contains those 3 as "problems" to a degree that they would even warrant new watchdogging policies for orgs. Like, maybe those 3 aspects of EA aren't actually on fire or otherwise notably bad?

Example: People like to say that funding is not transparent in EA. But are they talking about the type of transparency which would be solved by this proposal? I think not. I think EA Funds and OPP are very transparent. You just have to go to their websites, which is a much better tactic than visiting dozens of EA org grantee websites. I think what people who are in the know mean when they say "EA needs funding transparency" is something like "people should be told why their grants were not approved" and "people ought to know how much money is in each fund so applicants know how likely it is to get a grant at what scale of project and so donors know which funds are neglected". Which is fair, but it has nothing to do with EA orgs listing their major donors on their websites…

The fact that a commonsensical proposal like this gets downvoted so much is actually fairly indicative of current problems with tribalism and defensiveness in EA culture.

I disagree; I think people just disagree with it. If downvoting it counts as tribalism, then upvoting it would be tribalism too.

Michael_PJ
You really don't think there are any legitimate reasons to disagree with this? I can think of at least a few:
  • The cost in terms of time and maintenance is non-negligible.
  • The benefit is small, especially if you think that funding conflicts are not actually a big deal right now.

We should encourage and possibly fund adversarial collaborations on controversial issues in EA.

I thought the general sense was that adversarial collaboration is a bit overrated.

Tsunayoshi
[epistemic status: my imprecise summaries of previous attempts] Well, I guess it depends on what you want to get out of them. I think they can be useful as epistemic tools in the right situation: they tend to work better if they are focused on empirical questions, and they can help by forcing the collaborators to narrow down broad statements like "democratic decision making is good/bad for organisations". It's probably unrealistic, however, to expect that the collaborators will change their minds completely and arrive at a shared conclusion. They might also be good for building community trust. My instinct is that it would be really helpful in the current situation if the two sides see that their arguments are being engaged with reasonably by the other side (see this adversarial collaboration on transgender children transitioning, where nobody in the comments expresses anger at the author holding opposite views).

We could pester Scott Alexander to do another, EA themed, adversarial collaboration contest.

pradyuprasad
this seems like a very good idea!

Peer reviewed academic research on a given subject should be given higher credence than blogposts by EA friendly sources. 

Seems highly dependent on the subject and how established the field is.

Really depends on context and I don't recall a concrete example of the community going awry here. You're proposing this as a change to EA, but I'm not sure it isn't already true.

If you compare apples to apples, a paper and a blog answering the same question, and the blog does not cite the paper, then sure the paper is better. But usually there are good contextual reasons for referring to blogs.

Also, peer review is pretty crappy; the main thing is having an academic sit down and write very carefully.

Karma should have equal weight between users.

edited to add "between users"

I feel like there is an inherent problem with trying to use the current upvote system to determine whether the current upvote system is good. 

Nathan Young
Ehhh only if you don't think you can convince people to change their minds. 

Another proposal: Visibility karma remains 1 to 1, and agreement karma acts as a weak multiplier when either positive or negative.

So:

  • A comment with [ +100 | 0 ] would have a weight of 100
  • A comment with [ +100 | 0 ] but with 50✅ and 50❌ would have a weight of 100 × log10(50 + 50) = 200
  • A comment with [ +100 | 100✅ ] would have a weight of, say, 100 × log10(100) = 200
  • A comment with [ +0 | 1000✅ ] would have a weight of 0

Could also give karma on that basis.

However thinking about it, I think the result would be people would start using the visibility vote to express opinion even more...
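A small sketch of the weighting described above, under my reading of the examples (the exact formula, including the max(...) floor that stops the multiplier from shrinking a comment's weight, is an assumption):

```python
from math import log10

def comment_weight(karma, agrees, disagrees):
    """Visibility karma scaled by a weak multiplier based on total agreement activity."""
    total_reactions = agrees + disagrees
    if total_reactions == 0:
        return karma
    # max(...) keeps the agreement multiplier from ever reducing the weight below karma
    return karma * max(1.0, log10(total_reactions))

print(comment_weight(100, 0, 0))     # 100
print(comment_weight(100, 50, 50))   # 200.0
print(comment_weight(100, 100, 0))   # 200.0
print(comment_weight(0, 1000, 0))    # 0.0
```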

A little ambiguous between  "disagree karma & upvote karma should have equal weight" and "karma should have equal weight between people"

Noting that I strongly disagreed with this, rather than it being the case that someone with weighty karma did a normal disagree. 

Guy Raveh
Both weak and strong votes increase in power when you get more karma, although I think for every currently existing user the weak vote is at most 2 (and the strong vote up to 9).

Making EA appeal to a wider range of moral views

EA is theoretically compatible with a wide range of moral views, but our own rhetoric often conflates EA with utilitarianism. Right now, if you hold moral views other than utilitarianism (including variants of utilitarianism such as negative utilitarianism), you often have to do your own homework as to what those views imply you should do to achieve the greatest good. Therefore, we should spend more effort making EA appeal to a wider range of moral views besides utilitarianism.

What this could entail:

  • More practical advice (including donation and career advice) for altruists with common moral views besides utilitarianism, such as:
    • Views that emphasize distributive justice, such as prioritarianism and egalitarianism
      • This blog post from 2016 claims that EA priorities, at least in the global health and development space, are aligned with prioritarian and egalitarian views. However, this might not generalize to other EA causes such as longtermism.
    • Special consideration for rectifying historical injustices
  • Creating donation funds for people who hold moral views other than utilitarianism
  • Describing EA in ways that generalize to moral views besides utilitarianism, at least in some introductory texts

Would this include making EA appeal to and include practical advice for views like nativism and traditionalism?

kbog
Let's not forget retribution - ensuring that wrongdoers experience the suffering that they deserve. Or more modestly, disregarding their well-being.
EricHerboso
I incorrectly (at 4a.m.) first read this as saying "Would this include making EA apparel…for views like nativism and traditionalism?", and my mind immediately started imagining pithy slogans to put on t-shirts for EAs who believe saving a single soul has more expected value than any current EA longtermist view (because ∞>3^^^3).
Eevee🔹
What do you mean by nativism and traditionalism?
Ariel Simnegar 🔸
A nativist may believe that the inhabitants of one's own country or region should be prioritized over others when allocating altruistic resources. A traditionalist may perceive value in maintaining traditional norms and institutions, and seek interventions to effectively strengthen norms which they perceive as being eroded.
Eevee🔹
Thanks for clarifying. Yes, I think EA should (and already does, to some extent) give practical advice to people who prioritize the interests of their own community. Since many normies do prioritize their own communities, doing this could help them get their feet in the door of the EA movement. But I would hope that they would eventually come to appreciate cosmopolitanism. As for traditionalism, it depends on the traditional norm or institution. For example, I wouldn't be comfortable with someone claiming to represent the EA movement advising donors on how to "do homophobia better" or reinforce traditional sexual norms more effectively, as I think these norms are bad for freedom, equality, and well-being. At least the views we accommodate should perhaps not run counter to the core values that animate utilitarianism.

I actually think EA is inherently utilitarian, and a lot of the value it provides is allowing utilitarians to have a conversation among ourselves without having to argue the basic points of utilitarianism with every other moral view. For example, if a person is a nativist (prioritizing the well-being of their own country-people), then they definitionally aren't an EA. I don't want EA to appeal to them, because I don't want every conversation to be slowed down by having to argue with them, or at least find another way to filter them out. EA is supposed to be the mechanism to filter the nativists out of the conversation.

For those disagreeing with this idea, is it because you think EA should only appeal to utilitarians, should not try to appeal to other moral views more than it does, or should try to appeal to other moral views but not too much?

kbog
#2. From the absolute beginnings, EA has been vocal about being broader than utilitarianism. The proposal being voted on here looks instead like elevating progressivism to the same status as utilitarianism, which is a bad idea.

"EA institutions should select for diversity with respect to hiring"

Paraphrased from https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

I am hesitant to agree. Often proponents for this position emphasize the value of different outlooks in decision making as justification, but the actual implemented policies select based on diversity in a narrow subset of demographic characteristics, which is a different kind of diversity.

Arepo
I'm sceptical of this proposal, but to steelman it against your criticism, I think we would want to say that the focus should be diversity of a) non-malleable traits that b) correlate with different life experiences - a) because that ensures genuine diversity rather than (e.g.) quick opinion shifts to game the system, and b) because it gives you better protection against unknown unknowns. There are experiences a cis white guy is just far more/less likely to have had than a gay black woman, and so when you hire the latter (into a group of otherwise cisish whiteish mannish people), you get a bunch of intangible benefits which, by their nature, the existing group are incapable of recognising.

The traits typically highlighted by proponents of diversity tend to score pretty well on both counts: ethnicity, gender, and sexuality are very hard to change and (perhaps in decreasing order these days) tend to go hand in hand with different life experiences. By comparison, say, a political viewpoint is fairly easy to change, and a neurodivergent person probably doesn't have that different a life experience than a regular nerd (assuming they've dealt with their divergence well enough to be a remotely plausible candidate for the job).
kbog
If you want different life experiences, look first for people who had a different career path (or are parents), come from a foreign country with a completely different culture, or are 40+ years old (rare in EA). I think these things cause much more relevant differences in life experience compared to things like getting genital surgery, experiencing microaggressions, getting called a racial slur, etc.
Tsunayoshi
Thanks for the reply! I had not considered how easily gameable some selection criteria based on worldviews would be. Given that on some issues the worldview of EA orgs is fairly uniform, and given the competition for those roles, it is very conceivable that some people would game the system!

I should however note that the correlation between opinions on different matters should a priori be stronger than the correlation between these opinions and e.g. gender. I.e. I would wager that the median religious EA differs more from the median EA in their worldview than the median woman differs from the median EA.

Your point about unknown unknowns is valid. However, it must be balanced against known unknowns, i.e. when an organization knows that its personnel is imbalanced in some characteristic that is known or likely to influence how people perform their job. It is e.g. fairly standard to hire a mix of mathematicians, physicists and computer scientists for data science roles, since these majors are known to emphasize slightly different skills. I must say that my vague sense is that for most roles the backgrounds that influence how people perform are fairly well known, because the domain of the work is relatively fixed. Exceptions are jobs where you really want decisions to be anticorrelated and where the domain is constantly changing, like maybe an analyst at a venture fund. I am not certain at all however, and if people disagree I would very much like links to papers or blog posts detailing such examples.

I sense that EA orgs should look at some appropriate baseline for different communities and then aim to be above that via blind hiring, advertising outside the community, etc.

dan.pandori
It's hard to be above baseline for multiple dimensions, and eventually gets impossible.
dan.pandori
Agreed with the specific reforms. Blind hiring and advertising broadly seem wise.

Further question: if EA has diversity hires, should this be explicitly acknowledged? And what are the demographic targets?

"EA should establish public conference(s) or assemblies for discussing reforms within 6 months, with open invitations for EAs to attend without a selection process. For example, an “online forum of concerns”:

  • Every year invite all EAs to raise any worries they have about EA central organisations
  • These organisations declare beforehand that they will address the top concerns and worries, as voted by the attendees
  • Establish voting mechanism, e.g. upvotes on worries that seem most pressing"

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

OpenPhil should found a counter foundation that has as its main goal critical reporting, investigative journalism and “counter research” about EA and other philanthropic institutions.

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

I think that paying people/orgs to produce critiques of EA ideas etc. for an EA audience could be very constructive, i.e. from the perspective of "we agree with the overall goal of EA, here's how we think you can do it better".

By contrast, paying an org to produce critiques of EA from the perspective of EA being inherently bad would be extremely counterproductive (and there's no shortage of people willing to do it without our help).

There could be a risk of fake scandals, misquoting and taking things out of context that will damage EA.

The wiki should aim to contain distillations of useful knowledge in other fields in EA language - feminism, psychology etc.

Curious to hear from people who disagree with this

Ariel Simnegar 🔸
Hi Nathan! If a field includes an EA-relevant concept which could benefit from an explanation in EA language, then I don’t see why we shouldn’t just include an entry for that particular concept. For concepts which are less directly EA-relevant, the marginal value of including entries for them in the wiki (when they’re already searchable on Wikipedia) is less clear to me. On the contrary, it could plausibly promote the perception that there’s an “authoritative EA interpretation/opinion” of an unrelated field, which could cause needless controversy or division.
Chris Leong
I don’t think the wiki adequately covers EA topics yet, so I wouldn’t expand the scope until we've covered these topics well.
niplav
Writing good Wiki articles is hard, and translating between worldviews even harder. If someone wants to do it, that's cool and I would respect them, but funding people to do it seems odd—"explain X to the ~10k EAs in the world". Surely those fields have texts that can explain themselves?
1mkl32j201091
I didn't vote, but would assume the feminism part is an issue for some. I think it's a good idea, but distillations of controversial issues might look like unanimous endorsement or might be wrong on certain matters. Very relevant is the current controversy about psychometric testing, race, etc.

"When EA books or sections of books are co-written by several authors, co-authors should be given appropriate attribution"

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

Though EA books are still published by normal publishers, and this may be a big ask. I asked someone about this in relation to WWOTF, and they said that Will did a huge amount to acknowledge contributions, and that while it would be good to acknowledge everyone, that's just not how publishing works.

I'd still like us to push for a film-like model ("directed by", "produced by"), but it's not a high priority.

EA should periodically re-evaluate and re-examine core beliefs to see if they still hold up over time. 

Disagree-voted for being too vague - what specifically would people do to implement this?

What is the EA that you think should do this re-examining? In what sense is something that has different beliefs still EA? If an individual re-evaluates their beliefs and changes their mind about core EA ideas, wouldn't they leave EA and go do something else, so that EA gets smaller, newer and better philosophies get bigger, and resources therefore get allocated as they should?

Feel less of a need to quantify everything

"I am 90% sure that" - you don't need to say it.

Note I am providing options for people to vote on. I disagree with this one.

Note that as someone who strongly agrees with this, saying you're 90% sure is still fine sometimes. More problematic are things like over-simplifying and flattening ideas by conflating them with a small set of numbers, or giving guesses and confidence intervals on things you basically have no idea about.

EA orgs should aim to be less politically and demographically homogenous.

I'm curious how people are interpreting the "and" here. Because EA is only 3% right or center right politically, it seems that increasing demographic diversity along lines of race/gender/sexuality, at least in developed countries, would make EA more politically homogeneous. So is the suggestion that EA recruit more older people, people from rural areas, and potentially people from low and middle income countries?

You should be able to give away your forum karma.

If this is allowed without restraints, then note: it opens up an influence market, which could lead to plutocracy.

There should be some way of telling whether a karma score is caused by a number of small upvotes by several people or whether it is a result of a single strong upvote/downvote by one person. Edit: Turns out there's already a way to do this, see the comment below.

Hovering over the karma score displays how many votes there are. Does that address your request, or is there something missing?

Lin BL
This does not give a complete picture though. Say something has 5 karma and 5 votes. First obvious thought: 5 users upvoted the post, each with a karma of 1. But that's not the only option:
  • 1 user upvotes (value +9), 4 users downvote (each value -1)
  • 2 users upvote (values +4 and +6), 3 users downvote (values -1, -1 and -3)
  • 3 users upvote (values +1, +2 and +10), 2 users downvote (values -1 and -7)
Or a whole range of other permutations one can think of that add up to 5, given that different users' votes have different values (and in some cases strong up/downvoting). Hovering just shows the overall karma and overall number of people who have voted, unless I am missing a feature that shows this in more detail?
Sarah Cheng
Yeah, I was wondering if this was what the question asker was getting at. Thank you for clearly explaining it. You're right that this doesn't exist. My instinct is that this doesn't provide enough value to be worth the cost of the extra UX complication and the slight deanonymizing effect on voting. I'd be curious to hear how this kind of feature would be helpful for you.
Lin BL
They'd have the information of upvotes and downvotes already (to calculate the overall karma). I don't know how the forum is coded, but I expect they could do this without too much difficulty if they wanted to. So if you hover, it would say something like: "This comment has x overall karma (y upvotes and z downvotes)." So the user interface/experience would not change much (unless I have misinterpreted what you meant there). It'll give extra information. Weighting some users higher due to their contribution to the forum may make sense with the argument that these are the people who have contributed more, but even if this is the case it would be good to also see how many people overall think it is valuable or agree or disagree.

Current information:
  • How many votes
  • How valuable these voters found it, adjusted by their karma/overall Forum contribution

New potential information:
  • How many votes
  • How valuable these voters found it, adjusted by their karma/overall Forum contribution
  • How many voters overall found this valuable

E.g. 2 people strongly agreeing and 3 people weakly disagreeing may update me differently to 5 people weakly agreeing. One is unanimous, the other is more divided, and it would be good for me to know that as it might be useful to ask why (when drawing conclusions based on what other people have written, or when getting feedback on my own writing). I would like to see this implemented, as the cost seems small, but there is a fair bit of extra information value.
Coafos
Note: I tried to do it on mobile, and it's not working everywhere? I tried to tap on post karma or question answer karma but it did not show total vote count. (On my laptop it works.)
Sarah Cheng
Yeah, the forum relies a lot on hover effects, which don't work very well on mobile. To avoid that in this case seems like it would overcomplicate the UI though, so I'm not sure what an improved UX would look like. I'll add this to our backlog for triage.

"Funding bodies should not be able to hire researchers who have previously been recipients in the last e.g. 5 years, nor should funders be able to join recipient organisations within e.g. 5 years of leaving their post"

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Employment

"EA institutions should recruit known critics of EA and offer them e.g. a year of funding to write up long-form deep critiques"

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique

For me, this would depend heavily on how good these critics are; it's probably not sensible to pay people who are just going to use their time to write more attacks rather than constructive feedback.

Mostly it seems to me that, at least with EA as it currently is, they won't be interested.

We should consider answers on this thread based on agreement karma not upvoting

I honestly don't know. I personally agreevoted but did not upvote suggestions that, for example, I thought would be good in theory but impossible to implement.

There should be a way to repost something with 0 karma so that I don't have to keep writing this same post every few months. 

Can you elaborate? I don't understand what problem this solves.