
We should put all possible changes/reforms in one big list that everyone can upvote/downvote and agree/disagree-vote on.

EA is governed by a set of core EAs, so if you want change, I suggest that giving them less to read and a strong signal of community consensus is good.

Each top-level comment should be a short, clear explanation of a possible change. If you want to comment on a change, do it as a reply to that top-level comment.
 

This other post gives a set of reforms, but they are in one big long list at the bottom. Instead we can have a list that reorders itself according to our opinions! https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1


Note that I do not agree with all comments I post here.

38 answers, sorted by karma

Beyond transparently disclosing financial and personal relationships with (e.g.) podcast guests or grantees, EA institutions should avoid apparent conflicts of interest more strictly. For example, grant reviewers should recuse themselves from reviewing proposals by their housemates.

I'd be curious to hear disagreements with this.

I guess the latter half of this suggestion already happens.

[This comment is no longer endorsed by its author]
Muireall:
Does it? The Doing EA Better post made it sound like conflict-of-interest statements are standard (or were at one point), but recusal is not, at least for the Long-Term Future Fund. There's also this Open Philanthropy OpenAI grant, which is infamous enough that even I know about it. That was in 2017, though, so maybe it doesn't happen anymore.
Nathan Young:
Sorry what was the CoI with that OpenAI grant?
Muireall:
I'm mainly referring to this, at the bottom: Holden is Holden Karnofsky, at the time OP's Executive Director, who also joined OpenAI's board as part of the partnership initiated by the grant. Presumably he wasn't the grant investigator (not named), just the chief authority of their employer. OP's description of their process does not suggest that he or the OP technical advisors from OpenAI held themselves at any remove from the investigation or decision to recommend the grant.
Nathan Young:
Hm. I still don't really see the issue here. These people all work at OpenPhil right?  I guess maybe it looks fishy, but in hindsight do we think it was?

No, Dario Amodei and Paul Christiano were at the time employed by OpenAI, the recipient of the $30M grant. They were associated with Open Philanthropy in an advisory role.

I'm not trying to voice an opinion on whether this particular grant recommendation was unprincipled. I do think that things like this undermine trust in EA institutions, set a bad example, and make it hard to get serious concerns heard. Adopting a standard of avoiding appearance of impropriety can head off these concerns and relieve us of trying to determine on a case-by-case basis how fishy something is (without automatically accusing anyone of impropriety).

Give users the ability to choose among several karma-calculation formulas for how they experience the Forum. If they want a Forum experience where everyone's votes have equal weight, there could be a Use Democratic Karma setting. Or stick with Traditional Karma. Or Show Randomly / No Karma. There's no clear need for the Forum to impose the same sorting values on everyone.
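As a rough illustration of how user-selectable sorting could work (a minimal sketch; `Vote`, `sort_score`, and the mode names are assumptions, not the Forum's actual code):

```python
import random
from dataclasses import dataclass

@dataclass
class Vote:
    direction: int  # +1 for an upvote, -1 for a downvote
    weight: int     # the voter's karma-based vote strength

def sort_score(votes: list[Vote], mode: str) -> float:
    """Score used to order posts/comments under a user-selected karma mode."""
    if mode == "traditional":
        # current behaviour: higher-karma users' votes count for more
        return sum(v.direction * v.weight for v in votes)
    if mode == "democratic":
        # one person, one vote
        return sum(v.direction for v in votes)
    if mode == "random":
        # ignore karma entirely
        return random.random()
    raise ValueError(f"unknown mode: {mode}")

# Each reader's feed would then be sorted by the formula chosen in their settings, e.g.
# posts.sort(key=lambda p: sort_score(p.votes, reader.karma_mode), reverse=True)
```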

That’s actually a pretty creative idea.

EA should engage more with existing academic research in fields such as Disaster Risk Reduction, Futures Studies, and Science and Technology Studies.

I'd recommend splitting these up into different answers, for scoring.  I imagine this community is much more interested in some of these groups than others.

Ways of engaging #3: inviting experts from fields to EAG(X)s

Ways of engaging #2: proactively offering funding to experts from the respective fields to work on EA-relevant topics

Ways of engaging #1: literature reviews and introductions of each field for an EA audience.

Nathan Young:
And put them on the forum wiki.

Ways of engaging #4: making a database of experts in fields who are happy to review papers and reports from EAs

Ways of engaging #5: prioritise expertise over value alignment during hiring (for a subset of jobs).

...and updated research on climate risk.

Nathan Young:
80k's view is pretty recent right?

Set up at least one EA fund that uses the quadratic funding mechanism combined with a minimal vetting process to ensure that all donation recipients are aligned with EA.

How this could work:

  • Users can nominate organizations/projects to be included in the quadratic funding pool.
  • Admins vet donation candidates against a minimal set of criteria to ensure quality, e.g. "Does this project clear a bar for cost-effectiveness according to a defined value system?"
  • Approved projects are displayed on the website with estimates of cost-effectiveness w.r.t. relevant outcomes.
  • Users donate money to any project in the pool that they want, and their donations are matched according to the QF mechanism every month or quarter.

This dovetails with increasing diversity of moral views in EA.
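For readers unfamiliar with the mechanism, a minimal sketch of the standard quadratic funding match is below (the function name and the proportional scaling to the pool size are assumptions; a real fund would also need the vetting step and anti-collusion checks described above):

```python
from math import sqrt

def quadratic_matches(contributions: dict[str, list[float]], pool: float) -> dict[str, float]:
    """
    contributions: project -> individual donation amounts in one matching round.
    Standard QF: ideal funding = (sum of sqrt(each donation))^2; the match is the
    gap between that ideal and the raw total, scaled down if the pool is too small.
    """
    raw_match = {}
    for project, donations in contributions.items():
        ideal = sum(sqrt(d) for d in donations) ** 2
        raw_match[project] = ideal - sum(donations)
    total = sum(raw_match.values())
    scale = min(1.0, pool / total) if total > 0 else 0.0
    return {project: match * scale for project, match in raw_match.items()}

# Broad support beats concentrated support: 100 donors giving $1 each
# attracts far more matching than 1 donor giving $100.
print(quadratic_matches({"many_small": [1.0] * 100, "one_big": [100.0]}, pool=5000))
```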

Have you considered applying for funding for running one?

BrownHairedEevee:
I have thought of it but it wasn't a priority for me at the time. Gitcoin has retired their original grants platform, but they're replacing it with a new decentralized grants protocol that anyone can use, which will launch in early Q2, 2023. I would like to wait until then to use that.

There should be a single searchable database of all EA grants.

EA orgs should experiment with hiring ads targeted specifically at experts in the field, and consider how much those experts need knowledge of EA for the specific role.

I'm going to interpret this to include "hiring outreach beyond ads" for fields where hiring isn't done mostly through ads.

Wait, is this not the case? 0.0

I worked in some startups and a business consultancy, and this is like the first thing I learned in hiring/headhunting. While writing up Superlinear prize ideas, I made a few variations of SEO prizes targeting mid- to senior-level experts via field-specific jargon, upcoming conferences, common workflow queries and new regulations.

This seems like an inefficient way to approach experts initially

Acknowledge that sometimes issues are fraught and that we should discuss them more slowly (while still having our normal honesty norms)

I don't understand this suggestion. How is this not just applause lights? What would be a sensible opposing view?

Tsunayoshi:
While Nathan's suggestion is certainly framed very positively, people might object that sometimes the only way to change a system where power is highly concentrated at the top is to use anger about current news as a coordination mechanism to demand immediate change. Once attention invariably fades away, it becomes more difficult to enact bottom up changes. Or to put it differently: often slowing down discussions really is an attempt at shutting them down ("we will form a committee to look into your complaints"). That's why I think that even though I agreed with the decision to collect all Bostrom discussion in one post, it's important to honestly signal to people that their complaints are read and taken seriously.
Nathan Young:
It certainly felt like the Bostrom stuff needed to be discussed now. I wish I'd felt comfortable to say "let's wait a couple of days". 

How would we ensure this happens? Censorship, e.g. keeping related posts in Personal category rather than Community? Heavier moderation?

MichaelStJules:
Or should it just be the EA Forum mods' job to pin comments to such posts or to make centralized threads with such reminders? Or is it everyone's job? Will responsibility become too diffuse, so that nothing changes?
Nathan Young:
I think "ensure" is too strong. I think if several people say "let's take a day" then that would be effective.

Employees of EA organisations should not be pressured by their superiors against publishing work critical of core beliefs. 

Is there evidence that they are?

While I agree with this question in the particular, there's a real difficulty because absence of evidence is only weak evidence of absence with this kind of thing.

titotal:
There are allegations of this occurring in the Doing EA Better post. Ironically, if this is occurring, then it easily explains why we don't have concrete evidence of it yet: people would be worried about their jobs/careers.
Arepo:
Can you point me to where? I don't have time to read the post in full, and searching 'pressure' didn't find anything that looked relevant (I saw something about funding being somewhat conditional on not criticising core beliefs, but didn't see anything about employees specifically feeling so constrained).

A study should be conducted that records and analyses the reactions and impressions of people when first encountering EA. Special attention should be paid to the reactions of underrepresented groups, whether defined by demographics (age, race, gender, etc.), worldview (politics, religion, etc.), or background (socioeconomic status, major, etc.).

EA should recruit more from the humanities and social studies fields.

kbog:

We should recruit more from every field.

Would a more precise idea be: "EA should spend less time trying to recruit from philosophy, economics and STEM, in order to spend more time trying to recruit from the humanities and social studies"?

Edit: although with philosophy and economics, those are already humanities and social studies...

titotal:
I think this is revealing of the shortcomings of making decisions using this kind of upvoted and downvoted poll, in that the results will be highly dependent on the "vibe" or exact wording of a proposal.  I think your wording would end up with a negative score, but if instead I phrased it as "the split between STEM and humanity focus should be 80-20 instead of 90-10" (using made up numbers), then it might swing the other way again. The wording is a way of arguing while pretending we're not arguing. 
kbog:
I think the format is fine, you just have to write a clear and actionable proposal, with unambiguous meaning.

Any answer below this shouldn't happen.

I.e. any answer with fewer upvotes on its top-level comment shouldn't happen. This is a way to broadly signal at what point you think the answers "become worth doing". Edited for clarity, thanks Guy.

What is the line: Karma or Agreement?

Max Clarke:
It's karma - which is kind of wrong here.
Nathan Young:
Can an opinion be right but unimportant?
Max Clarke:
Definitely, for example if people are bikeshedding (vigorously discussing something that doesn't matter very much)

I'm confused, what did you mean to happen with this comment?

This post makes it harder than usual for me to tell if I'm supposed to upvote something because it is well-written, kind, and thoughtful vs whether I agree with it.

I'm going to continue to use up/downvote for good comment/bad comment and disagree/agree for my opinion on the goodness of the idea.

[EDIT: addressed in the comments. Nathan at least seems to endorse my interpretation]

I think because the sorting is solely on karma, the line is "Everything above this is worth considering" / "Everything below this is not important" as opposed to "Everything above this is worth doing"

The parent of this comment shouldn't happen.

Paradox!

Cap the number of strong votes per week.

Strong votes with large weights have their uses in uncommon situations. But these situations are uncommon, so instead of weakening strong votes, make them rarer.

The guideline says to use them only in exceptional cases, but there is no mechanism enforcing it: socially, strong votes are anonymous and look like standard votes; and technically, any number of them could be used. They could make a comment section appear very one-sided; if they were rarer, a few ideas could still be lifted or hidden while the rest of the section stayed more diverse.

I do not think this is a problem now, because current power users are responsible. But that is good fortune rather than a guarantee, and it could change in the future. Incidentally, a cap would also set a bar for what counts as exceptional, e.g. "this comment is in the top X this week".

The guideline says use them only in exceptional cases

I've never noticed this guideline! If this is the case, I would prefer to make it technically harder to do. I've just been doing it if I feel somewhat strongly about the issue...

Jason:
Do we know what fraction of votes Forum-wide are strong vs standard? If it is fairly low, publishing that might help us all understand how to use them better. (My take is that I should not be using a looser standard than the norm, because that would make my voice count more than it should. So if I saw data suggesting my standard were looser than the norm, it would inform when I strong-vote in the future.)

I think some kind of "strong vote income", perhaps just a daily limit as you say, would work.
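A minimal sketch of how a cap like this could be enforced, assuming a hypothetical weekly budget (`STRONG_VOTES_PER_WEEK` and the function name are illustrative, not existing Forum settings):

```python
from datetime import datetime, timedelta

STRONG_VOTES_PER_WEEK = 5  # illustrative number only

def can_strong_vote(past_strong_vote_times: list[datetime], now: datetime) -> bool:
    """Allow a strong vote only if the user still has budget in the trailing week."""
    window_start = now - timedelta(days=7)
    used = sum(1 for t in past_strong_vote_times if t >= window_start)
    return used < STRONG_VOTES_PER_WEEK
```

A daily "income" that accumulates up to a maximum would work the same way, just with a different window and budget.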

I will sort of admit to not being that responsible. I probably use a couple of strong votes per blog post, usually when I think something is really underrated. I guess I might be more sparing now.

One situation I use strong votes for is whenever I do "upvote/disagree" or "downvote/agree". I do this to offset others who tend not to split their votes.

Central EA organizations should not make any major reforms for 6 months to allow for a period of reflection and avoid hasty decisions

Every EA-affiliated org should clearly state on their website the sources of funding that contributed over $100k.

Why? I don't see the point except that then a reader can shame the org for taking money from someone the reader doesn't like. Let orgs be judged on their outputs per dollar spent please

Jaime Sevilla:
More transparency about money flows seems important for preventing fraud, understanding centralization of funding (and so correlated risk) and allowing people to better understand the funding ecosystem!

I have to be honest: I think this is a horrible solution for all three of those problems. As in, if you enact this solution you can't say you've made meaningful progress on any of those.

Not only that but I don't think EA actually contains those 3 as "problems" to a degree that they would even warrant new watchdogging policies for orgs. Like, maybe those 3 aspects of EA aren't actually on fire or otherwise notably bad?

Example: People like to say that funding is not transparent in EA. But are they talking about the type of transparency which would be solved by this proposal? I think not. I think EA Funds and OPP are very transparent. You just have to go to their websites, which is a much better tactic than visiting dozens of EA org grantee websites. I think what people who are in the know mean when they say "EA needs funding transparency" is something like "people should be told why their grants were not approved" and "people ought to know how much money is in each fund so applicants know how likely it is to get a grant at what scale of project and so donors know which funds are neglected". Which is fair, but it has nothing to do with EA orgs listing their major donors on their we... (read more)

The fact that a commonsensical proposal like this gets downvoted so much is actually fairly indicative of current problems with  tribalism and defensiveness in EA culture.

I disagree; I think people just disagree with it. If it's tribalism when people downvote it, it would be tribalism if they upvoted it too.

Michael_PJ:
You really don't think there are any legitimate reasons to disagree with this? I can think of at least a few:
  • The cost in terms of time and maintenance is non-negligible.
  • The benefit is small, especially if you think that funding conflicts are not actually a big deal right now.

We should encourage and possibly fund adversarial collaborations on controversial issues in EA.

I thought the sense was adversarial collab was a bit overrated.

Tsunayoshi:
[Epistemic status: my imprecise summaries of previous attempts] Well, I guess it depends on what you want to get out of them. I think they can be useful as epistemic tools in the right situation: they tend to work better if they are focused on empirical questions, and they can help by forcing the collaborators to narrow down broad statements like "democratic decision making is good/bad for organisations". It's probably unrealistic, however, to expect that the collaborators will change their minds completely and arrive at a shared conclusion. They might also be good for building community trust. My instinct is that it would be really helpful in the current situation if the two sides saw that their arguments were being engaged with reasonably by the other side. (See this adversarial collaboration on transgender children transitioning; nobody in the comments expresses anger at the author holding opposite views.)

We could pester Scott Alexander to do another, EA themed, adversarial collaboration contest.

pradyuprasad:
This seems like a very good idea!

Peer-reviewed academic research on a given subject should be given higher credence than blog posts by EA-friendly sources.

Seems highly dependent on the subject and how established the field is.

Really depends on context and I don't recall a concrete example of the community going awry here. You're proposing this as a change to EA, but I'm not sure it isn't already true.

If you compare apples to apples, a paper and a blog answering the same question, and the blog does not cite the paper, then sure the paper is better. But usually there are good contextual reasons for referring to blogs.

Also, peer review is pretty crappy; the main thing is having an academic sit down and write very carefully.

Karma should have equal weight between users.

edited to add "between users"

I feel like there is an inherent problem with trying to use the current upvote system to determine whether the current upvote system is good. 

Nathan Young:
Ehhh only if you don't think you can convince people to change their minds. 

Another proposal: Visibility karma remains 1 to 1, and agreement karma acts as a weak multiplier when either positive or negative.

So:

  • A comment with [ +100 | 0 ] would have a weight of 100
  • A comment with [ +100 | 0 ] but with 50✅ and 50❌ would have a weight of 100 × log10(50 + 50) = 200
  • A comment with [ +100 | 100✅ ] would have a weight of, say, 100 × log10(100) = 200
  • A comment with [+0 | 1000✅ ] would have a weight of 0.

Could also give karma on that basis.

However thinking about it, I think the result would be people would start using the visibility vote to express opinion even more...
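A minimal sketch of one reading of the proposal above; the exact multiplier is an assumption, chosen so the worked examples come out as stated:

```python
from math import log10

def comment_weight(visibility_karma: float, agrees: int, disagrees: int) -> float:
    """Agreement activity acts as a weak multiplier on visibility karma."""
    reactions = agrees + disagrees
    multiplier = max(1.0, log10(reactions)) if reactions > 0 else 1.0
    return visibility_karma * multiplier

assert comment_weight(100, 0, 0) == 100
assert comment_weight(100, 50, 50) == 200   # heavy engagement counts, even if split
assert comment_weight(100, 100, 0) == 200
assert comment_weight(0, 1000, 0) == 0      # agreement alone doesn't create visibility
```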

A little ambiguous between  "disagree karma & upvote karma should have equal weight" and "karma should have equal weight between people"

Noting that I strongly disagreed with this, rather than it being the case that someone with weighty karma did a normal disagree. 

Guy Raveh:
Both weak and strong votes increase in power when you get more karma, although I think for every currently existing user the weak vote is at most 2 (and the strong vote up to 9).

Making EA appeal to a wider range of moral views

EA is theoretically compatible with a wide range of moral views, but our own rhetoric often conflates EA with utilitarianism. Right now, if you hold moral views other than utilitarianism (including variants of utilitarianism such as negative utilitarianism), you often have to do your own homework as to what those views imply you should do to achieve the greatest good. Therefore, we should spend more effort making EA appeal to a wider range of moral views besides utilitarianism.

What this could entail:

  • More practical advice (including donation and career advice) for altruists with common moral views besides utilitarianism, such as:
    • Views that emphasize distributive justice, such as prioritarianism and egalitarianism
      • This blog post from 2016 claims that EA priorities, at least in the global health and development space, are aligned with prioritarian and egalitarian views. However, this might not generalize to other EA causes such as longtermism.
    • Special consideration for rectifying historical injustices
  • Creating donation funds for people who hold moral views other than utilitarianism
  • Describing EA in ways that generalize to moral views besides utilitarianism, at least in some introductory texts

Would this include making EA appeal to and include practical advice for views like nativism and traditionalism?

kbog:
Let's not forget retribution - ensuring that wrongdoers experience the suffering that they deserve. Or more modestly, disregarding their well-being.
EricHerboso:
I incorrectly (at 4a.m.) first read this as saying "Would this include making EA apparel…for views like nativism and traditionalism?", and my mind immediately started imagining pithy slogans to put on t-shirts for EAs who believe saving a single soul has more expected value than any current EA longtermist view (because ∞>3^^^3).
BrownHairedEevee:
What do you mean by nativism and traditionalism?
Ariel Simnegar:
A nativist may believe that the inhabitants of one's own country or region should be prioritized over others when allocating altruistic resources. A traditionalist may perceive value in maintaining traditional norms and institutions, and seek interventions to effectively strengthen norms which they perceive as being eroded.
BrownHairedEevee:
Thanks for clarifying. Yes, I think EA should (and already does, to some extent) give practical advice to people who prioritize the interests of their own community. Since many normies do prioritize their own communities, doing this could help them get their feet in the door of the EA movement. But I would hope that they would eventually come to appreciate cosmopolitanism. As for traditionalism, it depends on the traditional norm or institution. For example, I wouldn't be comfortable with someone claiming to represent the EA movement advising donors on how to "do homophobia better" or reinforce traditional sexual norms more effectively, as I think these norms are bad for freedom, equality, and well-being. At least the views we accommodate should perhaps not run counter to the core values that animate utilitarianism.

I actually think EA is inherently utilitarian, and a lot of the value it provides is allowing utilitarians to have a conversation among ourselves without having to argue the basic points of utilitarianism with every other moral view. For example, if a person is a nativist (prioritizing the well-being of their own country-people), then they definitionally aren't an EA. I don't want EA to appeal to them, because I don't want every conversation to be slowed down by having to argue with them, or at least find another way to filter them out. EA is supposed to be the mechanism to filter the nativists out of the conversation.

For those disagreeing with this idea, is it because you think EA should only appeal to utilitarians, should not try to appeal to other moral views more than it does, or should try to appeal to other moral views but not too much?

kbog:
#2. From the absolute beginning, EA has been vocal about being broader than utilitarianism. The proposal being voted on here looks instead like elevating progressivism to the same status as utilitarianism, which is a bad idea.

"EA institutions should select for diversity with respect to hiring"

Paraphrased from https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

I am hesitant to agree. Often proponents of this position emphasize the value of different outlooks in decision making as justification, but the actual implemented policies select based on diversity in a narrow subset of demographic characteristics, which is a different kind of diversity.

Arepo:
I'm sceptical of this proposal, but to steelman it against your criticism, I think we would want to say that the focus should be diversity of a) non-malleable traits that b) correlate with different life experiences - a) because that ensures genuine diversity rather than (eg) quick opinion shifts to game the system, and b) because it gives you a better protection against unknown unknowns. There are experiences a cis white guy is just far more/less likely to have had than a gay black woman, and so when you hire the latter (into a group of otherwise cisish whiteish mannish people), you get a bunch of intangible benefits which, by their nature, the existing group are incapable of recognising.  The traits typically highlighted by proponents of diversity tend to score pretty well on both counts - ethnicity, gender, and sexuality are very hard to change and (perhaps in decreasing order these days) tend to go hand in hand with different life experiences. By comparison, say, a political viewpoint is fairly easy to change, and a neurodivergent person probably doesn't have that different a life experience than a regular nerd (assuming they've dealt with their divergence well enough to be a remotely plausible candidate for the job).
kbog:
If you want different life experiences, look first for people who had a different career path (or are parents), come from a foreign country with a completely different culture, or are 40+ years old (rare in EA). I think these things cause much more relevant differences in life experience compared to things like getting genital surgery, experiencing microaggressions, getting called a racial slur, etc.
Tsunayoshi:
Thanks for the reply! I had not considered how easily game-able some selection criteria based on worldviews would be. Given that on some issues the worldview of EA orgs is fairly uniform, and the competition for those roles, it is very conceivable that some people would game the system! I should however note that the correlation between opinions on different matters should apriori be stronger than the correlation between these opinions and e.g. gender. I.e. I would wager that the median religious EA differs more from the median EA in their worldview than the median woman differs from the median EA. Your point about unknown unknowns is valid. However, it must be balanced against known unknowns, i.e. when an organization knows that its personnel is imbalanced in some characteristic that is known or likely to influence how people perform their job. It is e.g. fairly standard to hire a mix of mathematicians, physicists and computer scientists for data science roles, since these majors are known to emphasize slightly different skills. I must say that my vague sense is that for most roles the backgrounds that influence how people perform in a role are fairly well known because the domain of the work is relatively fixed. Exceptions are jobs where you really want decisions to be anticorrelated and where the domain is constantly changing, like maybe an analyst at a venture fund. I am not certain at all however, and if people disagree would very much like links to papers or blog posts detailing to such examples.

I sense that EA orgs should look at some appropriate baseline for different communities and then aim to be above that by blind hiring, advertising outside the community, etc.

dan.pandori:
It's hard to be above baseline for multiple dimensions, and eventually gets impossible.
dan.pandori:
Agreed with the specific reforms. Blind hiring and advertising broadly seem wise.

Further question: If EA has diversity hires, should this be explicitly acknowledged? And what are the demographic targets?

"EA should establish public conference(s) or assemblies for discussing reforms within 6 months, with open invitations for EAs to attend without a selection process. For example, an “online forum of concerns”:

  • Every year invite all EAs to raise any worries they have about EA central organisations
  • These organisations declare beforehand that they will address the top concerns and worries, as voted by the attendees
  • Establish voting mechanism, e.g. upvotes on worries that seem most pressing"

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

OpenPhil should found a counter foundation that has as its main goal critical reporting, investigative journalism and “counter research” about EA and other philanthropic institutions.

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

I think that paying people/orgs to produce critiques of EA ideas etc. for an EA audience could be very constructive, i.e. from the perspective of "we agree with the overall goal of EA, here's how we think you can do it better".

By contrast, paying an org to produce critiques of EA from the perspective of EA being inherently bad would be extremely counterproductive (and there's no shortage of people willing to do it without our help).

There could be a risk of fake scandals, misquoting and taking things out of context that will damage EA.

The wiki should aim to contain distillations of useful knowledge in other fields in EA language - feminism, psychology etc.

Curious to hear from people who disagree with this

Ariel Simnegar:
Hi Nathan! If a field includes an EA-relevant concept which could benefit from an explanation in EA language, then I don’t see why we shouldn’t just include an entry for that particular concept. For concepts which are less directly EA-relevant, the marginal value of including entries for them in the wiki (when they’re already searchable on Wikipedia) is less clear to me. On the contrary, it could plausibly promote the perception that there’s an “authoritative EA interpretation/opinion” of an unrelated field, which could cause needless controversy or division.
Chris Leong:
I don’t think the wiki adequately covers EA topics yet, so I wouldn’t expand the scope until we've covered these topics well.
niplav:
Writing good Wiki articles is hard, and translating between worldviews even harder. If someone wants to do it, that's cool and I would respect them, but funding people to do it seems odd—"explain X to the ~10k EAs in the world". Surely those fields have texts that can explain themselves?
Ives Parr:
I didn't vote, but I would assume the feminism part is an issue for some. I think that it's a good idea, but on controversial issues it might look like unanimous endorsement or might be wrong on certain matters. Very relevant is the current controversy about psychometric testing, race, etc.

"When EA books or sections of books are co-written by several authors, co-authors should be given appropriate attribution"

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique 

Though EA books are still published by normal publishers, and this may be a big ask. I asked someone about this in relation to WWOTF, and they said that Will did a huge amount to acknowledge contributions, and that while it would be good to acknowledge everyone, that's just not how it works.

I'd still like us to push for a film-like model ("directed by", "produced by"), but it's not a high priority.

EA should periodically re-evaluate and re-examine core beliefs to see if they still hold up over time. 

Disagree-voted for being too vague: what specifically would people do to implement this?

What is the "EA" that you think should do this re-examining? In what sense is something that has different beliefs still EA? If an individual re-evaluates their beliefs and changes their mind about core EA ideas, wouldn't they leave EA and go do something else, so that EA gets smaller, newer and better philosophies get bigger, and resources therefore get allocated as they should?

Feel less of a need to quantify everything

"I am 90% sure that" - you don't need to say it.

Note I am providing options for people to vote on. I disagree with this one.

Note that as someone who strongly agrees with this, saying you're 90% sure is still fine sometimes. More problematic are things like over-simplifying and flattening ideas by conflating them with a small set of numbers, or giving guesses and confidence intervals on things you basically have no idea about.

EA orgs should aim to be less politically and demographically homogenous.

I'm curious how people are interpreting the "and" here. Because EA is only 3% right or center right politically, it seems that increasing demographic diversity along lines of race/gender/sexuality, at least in developed countries, would make EA more politically homogeneous. So is the suggestion that EA recruit more older people, people from rural areas, and potentially people from low and middle income countries?

You should be able to give away your forum karma.

If this is allowed without restraints, note that it opens up an influence market, which could lead to plutocracy.

There should be some way of telling whether a karma score is caused by a number of small upvotes by several people or whether it is a result of a single strong upvote/downvote by one person. Edit: Turns out there's already a way to do this, see the comment below.

Hovering over the karma score displays how many votes there are. Does that address your request, or is there something missing?

Lin BL:
This does not give a complete picture though. Say something has 5 karma and 5 votes. First obvious thought: 5 users upvoted the post, each with a karma of 1. But that's not the only option:
  • 1 user upvotes (value +9), 4 users downvote (each value -1)
  • 2 users upvote (values +4 and +6), 3 users downvote (values -1, -1 and -3)
  • 3 users upvote (values +1, +2 and +10), 2 users downvote (values -1 and -7)
Or a whole range of other permutations one can think of that add up to 5, given that different users' votes have different values (and in some cases strong up/downvoting). Hovering just shows the overall karma and overall number of people who have voted, unless I am missing a feature that shows this in more detail?
Sarah Cheng:
Yeah, I was wondering if this was what the question asker was getting at. Thank you for clearly explaining it. You're right that this doesn't exist. My instinct is that this doesn't provide enough value to be worth the cost of the extra UX complication and the slight deanonymizing effect on voting. I'd be curious to hear how this kind of feature would be helpful for you.
Lin BL:
They'd have the information of upvotes and downvotes already (to calculate the overall karma). I don't know how the forum is coded, but I expect they could do this without too much difficulty if they wanted to. So if you hover, it would say something like: "This comment has x overall karma (y upvotes and z downvotes)." So the user interface/experience would not change much (unless I have misinterpreted what you meant there). It'll give extra information.

Weighting some users higher due to contribution to the forum may make sense with the argument that these are the people who have contributed more, but even if this is the case it would be good to also see how many people overall think it is valuable or agree or disagree.

Current information:
  • How many votes
  • How valuable these voters found it, adjusted by their karma/overall Forum contribution

New potential information:
  • How many votes
  • How valuable these voters found it, adjusted by their karma/overall Forum contribution
  • How many overall voters found this valuable

E.g. 2 people strongly agreeing and 3 people weakly disagreeing may update me differently to 5 people weakly agreeing. One is unanimous, the other is more divided, and it would be good for me to know that, as it might be useful to ask why (when drawing conclusions based on what other people have written, or when getting feedback on my own writing). I would like to see this implemented, as the cost seems small, but there is a fair bit of extra information value.
Coafos:
Note: I tried to do it on mobile, and it's not working everywhere? I tried to tap on post karma or question answer karma but it did not show total vote count. (On my laptop it works.)
Sarah Cheng:
Yeah, the forum relies a lot on hover effects, which don't work very well on mobile. To avoid that in this case seems like it would overcomplicate the UI though, so I'm not sure what an improved UX would look like. I'll add this to our backlog for triage.

"Funding bodies should not be able to hire researchers who have previously been recipients in the last e.g. 5 years, nor should funders be able to join recipient organisations within e.g. 5 years of leaving their post"

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Employment

"EA institutions should recruit known critics of EA and offer them e.g. a year of funding to write up long-form deep critiques"

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#Critique

For me, this would depend heavily on how good these critics are; it's probably not sensible to pay people who are just going to use their time to write more attacks rather than constructive feedback.

Mostly seems to me like at least with EA as it currently is, they won't be interested.

We should consider answers on this thread based on agreement karma, not upvotes.

I honestly don't know. I personally agreevoted but did not upvote suggestions that, for example, I thought would be good in theory but impossible to implement.

There should be a way to repost something with 0 karma so that I don't have to keep writing this same post every few months. 

Can you elaborate? I don't understand what problem this solves.

At least on LessWrong you can move something to drafts and then publish it again, IIRC. Given the underlying infrastructure is the same this should also work on the EA forum?

New users' strong vote equals two votes, and the moment you get 100 karma it equals 5 votes. But after that it doesn't keep increasing.

(Agree vote this even if you don't agree with the specific numbers but just the general gist of it.)
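A minimal sketch of the proposed capped schedule (the numbers are the ones given above; the function name is illustrative):

```python
def strong_vote_weight(karma: int) -> int:
    """2 votes for new users, 5 votes from 100 karma onward, and no further growth."""
    return 5 if karma >= 100 else 2
```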

I would rather have no increases at all, or perhaps a nominal one (eg an unlock of a 2-karma strong upvote) after a relatively cursory amount of karma - just enough to prove that you're not a troll.

I do not think that my contributions to this forum merit me having ~3.5x as much weight as someone like Jobst Heitzig just because he's too busy with a successful academic career to build up a backlog on this forum. Weighted karma selects for people whose time has low market value in the same way that long job interviews do.

Karma weighting also encourages Goodha... (read more)

Nathan Young:
I think Jobst is very unrepresentative. From the recommendations he's getting, I wish I could transfer some of my karma to him.
Arepo:
I don't know about unrepresentative. New posters to this forum run a gamut from "probably above averagely smart" to "extremely intelligent and thoughtful". Obviously we're going to have far more of the former, but we should also expect some number of the latter, and the karma system hides both. I think Scott's argument for openness to eccentrics, on the grounds that a couple of great ideas have far more positive value than a whole bunch of bad ideas have negative value, generalises to an argument for being open to "eccentrics" who comprise large numbers of new or intermittent posters.
Gordon Seidoh Worley:
You've got to consider the base rates. Most eccentrics are actually just people with ungrounded ideas that are wrong, since it's easy to have wild ideas and hard to have correct ideas, and thus even harder to have wild and correct ideas. In the old days of Less Wrong, excess criticism was actually a huge problem and did silence a bunch of folks incorrectly. EAF and Less Wrong (which has basically the same cultural norms) have this problem to a much lesser extent now due to a few structural changes:
  • New posters don't post directly to the front page and instead can only post there once they get enough karma or explicit approval by moderators.
  • This lets new posters work out the site norms without being exposed to the full brunt of the community.
  • Weighted voting also allows respected users to correct errors on their own, so when they see something of value they can give it a strong upvote rather than it languishing due to five other new people voting it down.
If your concern is that the site is not making it easy enough for eccentrics with good ideas to post here, I can say from the experience of the way Less Wrong used to run that it's likely they'd have an even worse time if it weren't for weighted voting.
Arepo:
It is tiresome to have conversations in which you assume I only started thinking about this yesterday and haven't considered basic epistemic concepts.
a) I am not talking about actual eccentrics; I'm drawing the analogy of a gestalt entity mimicking (an intelligent) eccentric. You don't have to agree that the tradeoff is worthwhile, but please claim that about the tradeoff I'm proposing, not some bizarre one where we go recruiting anyone who has sufficiently heterodox ideas.
b) I am not necessarily suggesting removing the karma system. I'm suggesting toning it down, which could easily be accompanied by other measures to help users find the content they'd most like to see. There's plenty of room for experimentation: the forum seems to have been stuck in a local maximum (at best, perhaps not a maximum) for the last few years, and CEA should have the resources for some A/B testing of new ideas.
c) Plenty of pre-Reddit internet forums have been successful in pursuing their goal with no karma system at all, let alone a weighted one. Looking at the current posts on the front page of the EA Reddit, only one is critical of EA, and that's the same Bostrom discussion that's been going on here. So I don't see good empirical evidence that toning down the karma system would create the kind of wild west you fear.
Grayden:
If only there were some kind of measure of an individual's contribution. Maybe we could call it something like PELTIV.

Why do people think vote weight should keep on increasing after a certain amount of karma? I'm curious!

This is a mechanism for maintaining cultural continuity.

Karma represents how much the community trusts you, and in return, because you are trusted, you're granted greater ability to influence what others see because your judgement has been vetted over a long series of posts. The increase in voting power is roughly logarithmic with karma, so the increased influence in practice hits diminishing returns pretty quickly.

If we take this away it allows the culture of the site to drift more quickly, say because there's a large influx of new folks. Right now existing members can curate what happens on the Forum. If we take away the current voting structure, we're at greater risk of this site becoming less the site the existing user base wants.

I don't speak for the Forum by any means, but as I see it we're trying to create a space here to talk about certain things in a certain way, and that means we want new people to learn the norms and be part of what exists first before they try to change it, since outsiders often fail to understand why things work the way they do until they've gotten enough experience to see how the existing mechanisms make things work. Once you understand how things work, it becomes possible to try to change things in ways that keep what works and change what doesn't. The voting mechanism is downstream of this and is an important tool of the membership to curate the site.

That said, you can also just ignore the votes if you don't agree with them and read whatever you want.

Jeroen Willems:
I really don't think the libertarian "if you don't like it, go somewhere else" works here, as the EA forum is pretty much the place where EA discussions are held. Sure, they happen on Twitter and Reddit too, but you have to admit it's not the same. Most discussions start here and are then picked up there. I agree with your other arguments; I don't want the culture of the site to drift too quickly because of a large influx of new folks. But why wouldn't a cut-off be sufficient for that? I don't see why the power has to keep on increasing after, say, 200 karma, because at that point value lock-in might become an issue. Reminds me a bit of the average age of US senators being 64 years old. Not to dismiss the wisdom of experienced people, but insights from new folks are important too.
Arepo:
This doesn't seem self-evidently bad or obviously likely.
Gordon Seidoh Worley:
Sure, not everyone likes curated gardens. If that's not the kind of site you want, there are other places. Reddit, for example, has active communities that operate under different norms. The folks who started the Forum prefer the sort of structure it has. If you want something else and you don't have an argument that convinces us, you're free to participate in discussions elsewhere. As to deeper reasons why the Forum is the way it is, see, for example, https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-gardens-die-by-pacifism
Arepo:
"There are other places" seems like a terrible benchmark to judge by. Reddit is basically the only other active forum on the internet for EA discussion, and nowhere else has any chance of materially affecting EA culture. The existence of this place suppresses alternatives: I used to run a utilitarianism forum that basically folded into this one because it didn't seem sensible at the time to compete with people we almost totally agreed with. Posting a single unevidenced LW argument as though it were scripture, as an argument against being exposed to a wider range of opinions, seems like a poor epistemic practice. In any case, that thread is about banning, which I've become more sympathetic to, and which is totally unrelated to the karma system.

EA should deprioritize human welfare causes i.e. global health (unless it is an existential risk) and global poverty.

I think this post fails on three really important metrics: 1) the EA forum is a highly selective, non-representative sample. It's not at all legitimate to suggest the views of people 2) the EA forum is a forum for discussion; it is not a way to

[This comment is no longer endorsed by its author]

Can you move this to the comments?

  1. I think it's better than any other discussion space we have
AlasdairGives:
Apologies for the mistake!

An investigation into the encouragement of ADHD diagnoses and amphetamine use within EA.

I think at this point, I'll say that I have a diagnosis of ADHD and take Adderall, before y'all eat me alive.

I mainly wrote it because someone said this was taboo, and I don't think it is.

I can't tell if this comment is saying 1 or 2:

  1. "let's encourage more EAs who might have ADHD to get assessed, and treated if needed" (which I strongly agree with)

  2. "Let's do an investigation into why so many EAs do amphetamines recklessly or are falsely diagnosed with ADHD" (which I strongly disagree with doing, and disagree is an issue)

I voted against this one because it's not specific to EA. This is a general phenomenon of people who have a "dysfunction" of not having 99th-percentile executive function seeking ADHD diagnoses to get access to amphetamines. It might be happening in EA, but it's not clear there is an EA problem rather than a society-wide problem.

Addressing it as a general problem might be worthwhile, but we'd need to analyze it (maybe someone already has!).

This is good for calibrating what the votes mean across the responses

You already got a lot of karma/voting-power when you asked the same thing last month. As I pointed out then, we cannot conclude what the community believes or wants based on an undemocratic karma system.

EDIT: Everyone who wants some easy undemocratic voting-power, go to last months question-post and copy the top level suggestions.

FWIW I was delaying engaging with recent proposals for improving EA, and I really appreciate that Nathan is taking the time to facilitate that conversation.

Sure, but the discussion is happening again and this feels like a strictly better way to do it rather than in the comments at the bottom of a massive post.

So what you're saying is that the mechanism which exists to reward people for doing things that other people will like, is incentivising people (Nathan) to do things that people like (make helpful posts with polls). Seems good to me.

Large EA institutions should be run democratically.

Almost no organization in the world that gets stuff done on reasonable timelines operates this way. I think there's a very high prior against this.

Democracy makes sense for things you are forced into, like the government you're born under and forced to be ruled by. EA organizations are voluntary orgs that are already democratic in that funders can take their money and go elsewhere if they don't like how they are run. This would add a level of complication to decision making that would basically guarantee that large EA orgs would fail to achieve their missions.

[anonymous]:

do the votes mean that it would be undemocratic to impose democratic rule?

Comments (18)

As of this writing, the suggestion "EA institutions should select for diversity with respect to hiring" has a karma of 17 upvotes, -21 disagreement (with 52 votes).

My suggestion "EA orgs should aim to be less politically and demographically homogenous" has 14 upvotes, +27 agreement (with 21 votes). 

Why are these two statements so massively different in agreement score?

These suggestions, while not exactly equivalent, seem very similar. (How exactly will you become less demographically homogenous without aiming to be more diverse in hiring?) 

My hypothesis is either that EA likes vaguer statements but is allergic to more concrete proposals, or that people are reflexively downvoting anything that comes off as culture-warrish or "woke". I'd be interested in hearing from anyone who downvoted statement 1 and upvoted statement 2.

This also reveals the limitations of this method for actually making decisions: small changes in wording can have a huge effect on the result. 

Statement 2 can be furthered by a number of methods -- e.g., seeking new people and new hires in more/different places. It's easy to agree as long as you think there is at least one method of furthering the end goal you would support.

Statement 1 reads like a specific method with a specific tradeoff/cost. As I read it, it calls for sometimes hiring Person X for diversity reasons even though you think Person Y would have been a better choice otherwise (otherwise, "select for diversity" isn't actually doing any work).

I don't think this is just a small change in wording. It's unsurprising to me that more people would endorse a goal like Statement 2 than a specific tradeoff like Statement 1.

I think that makes sense as a reason, if that's how people interpreted the two statements. However, statement 1 was explicitly not referring to a narrow "hire a worse candidate" situation. Statement 1 came from the megapost, which was linked along with statement 1. Here's a relevant passage:

Worryingly, EA institutions seem to select against diversity. Hiring and funding practices often select for highly value-aligned yet inexperienced individuals over outgroup experts, university recruitment drives are deliberately targeted at the Sam Demographic (at least by proxy) and EA organisations are advised to maintain a high level of internal value-alignment to maximise operational efficiency. The 80,000 Hours website seems purpose-written for Sam, and is noticeably uninterested in people with humanities or social sciences backgrounds.

They are advocating for the exact same things you are, e.g. "seeking new people and new hires in more/different places", and that's what they meant by selecting for diversity in hiring.

I think this makes it clearer what happened. Statement 1 resembles an existing culture war debate, so people assumed it was advocating for a side and position in said debate, and downvoted, whereas statement 2 appeared more neutral, so it was upvoted. I think this really just tells us to to be careful with interpreting these upvote/downvote polls. 

People likely read it as a standalone statement without referring back to the megapost, and gave "select" its most common everyday meaning. I agree that the wording of these items is tricky and can skew outcomes; I just feel the summary here did not accurately capture what the broader statement said. So I am not convinced that voters were actually inconsistent or that this finding represents a deep problem with this kind of sorting exercise.

To be clear, you're saying that Nathan took the megapost out of context in a way that suggested a different interpretation of their words, which led to a highly downvoted answer. (I'm not suggesting he did this on purpose.) In other words, the framing of an answer has a large effect on the final result.

I think this does represent a problem with the sorting exercise. If it hadn't been for my followup, the takeaway could have easily been "EA doesn't like diversity", when the actual takeaway is "EA likes diversity, but doesn't like this one specific hiring tactic, which was never actually mentioned anywhere". 

Yes. We may not be that far apart on this one now. The validity of the results is only as good as the extent to which the answer stems accurately convey what you are trying to measure.

Although I understand why Nathan wrote it as he did, this answer stem isn't (in my opinion) a good reflection of the underlying text because that text used "select" in a less common way that is only clear in context. Thus, the response to the stem only has validity, at most, for what the stem itself actually says.

I think the need for a summary to accurately reflect the idea in question is endemic to all attempts to gauge opinion, not just this method. Writing good summaries can be hard.

See my comment above on the political version - usually when people call for more diversity, they are not referring to adding political diversity. So I think the addition of "political" makes it significantly different.

How are we supposed to use agree/disagree votes? It looks to me like regular votes are to be used to move responses up and down the page.

You can think something is important but wrong. I'm not allowed to agree or disagree with my own posts, but if I could I would upvote but disagree with this. 

It's a good discussion but the point is wrong.

So I have time for Bob Jacob's criticism that this is the same post as I posted last month. It looks similar, doesn't it?

But it's gonna get highly upvoted, so I don't think people felt they had that discussion. Lizka could post this every 2 months if she wants, but I think the desire for this discussion is here and this is the best way to have it. If I get the karma, so be it.

I used to allow ways to donate my karma, but it's just a huge hassle to try and create that, and it confused everyone.

You could've made a poll. That wouldn't have given you nearly as much karma/voting-power, and that wouldn't have given those who already have a lot of power the ability to influence the results. For the record I'm not angry at you, I'm angry at the karma system and the groupthink it generates. Given that I also have undemocratic power, I will stick to my own principles and not vote on these questions.

I don't like how much karma I have. I agree that's a bit ridiculous at this stage, though some disagree. But I think that those who have spent a long time on the forum do tend to be better informed and I do want their votes to count for more.

Democracy is good at avoiding famine and war, but I am unconvinced it is best at making decisions. So a little upweighting of those who the community tends to agree with seems good. 

Honestly, I might suggest it more. 

But I think that those who have spent a long time on the forum do tend to be better informed and I do want their votes to count for more.

You made about +448 karma from the last post. When an actual scientist like Jobst comes here and posts a very well-informed post, it gets +1 karma (from me, love ya Jobst). People like Jobst have a full-time job as a scientist and are too productive to spend most of their time online, and when they do go online they are so well informed it won't give them any voting power, because terminally online people like us are simply not informed enough to understand him, and we have all the voting power. If you say something true but unpopular to those who already have power, you might even lose karma. There is no reason to think that those who have more voting power are more informed, more productive or more altruistic.

EDIT: To clarify: not literally his last post, his last post like this. Splitting things up into smaller vote-able chunks (like this post) nets you more voting-power than making the big posts of criticisms that inspire them. Having a high quantity is a better path to gaining voting power than high quality. This allows a few highly active (and thus most likely orthodox) users to boost or tank any piece of writing.
When we combine this with the fact that low karma comments are hidden we basically allow people with high karma (most likely orthodox users) to soft-censor their own critics.

If it's a good post, can't it convince people to upvote it? I think the question is whether, on average, people with high karma have a better sense of what the community is going to value than those with low karma. Maybe I would like Jobst to have more, but most people aren't Jobst.

On balance I still like that the top forum users have the ability to do some moderation.

But I'd be open to turning it off and seeing how that affects stuff.

I think the "fulltime job as a scientist" situation could be addressed with an "apply for curation" process, as outlined in the second half of this comment.

Would you gift your karma if that option was available?

Destroying it seems better. Gifting it requires identification of a worthy recipient and seems like it opens all kinds of additional problems.

Yes. I think so, haven't looked at the utility curves but I imagine I can find people I think are underrated.
