I often see people thinking that this is bragading or something, when actually most people just don't want to write a response; they simply like or dislike something.
If it were up to me I might suggest an anonymous "I don't know" button and an anonymous "this is poorly framed" button.
When I used to run a lot of Facebook polls, it was overwhelmingly men who wrote answers, but if there were options to vote, the gender split was much more even. My hypothesis was that a certain kind of argumentative, usually male, person tended to enjoy writing long responses more. So blocking lower-effort, less antagonistic, more anonymous responses meant I heard more from this kind of person.
I don't know if that is true on the forum, but I would guess that the higher effort it is to respond the more selective the responses become in some direction. I guess I'd ask if you think that the people spending the most effort are likely to be the most informed. In my experience, they aren't.
More broadly I think it would be good if the forum optionally took some information about users - location, income, gender, cause area, etc and on answers with more than say 10 votes would dis... (read more)
It seems like we could use the new reactions for some of this. At the moment they're all positive but there could be some negative ones. And we'd want to be able to put the reactions on top level posts (which seems good anyway).
I think that it is generally fine to vote without explanations, but it would be
nice to know why people are disagreeing or disliking something. Two scenarios
come to mind:
* If I write a comment that doesn't make any claim/argument/proposal and it
gets downvotes, I'm unclear what those downvotes mean.
* If I make a post with a claim/argument/proposal and it gets downvoted without
any comments, it isn't clear what aspect of the post people have a problem
with.
I remember writing in a comment several months ago about how I think that theft
from an individual isn't justified even if many people benefit from it, and
multiple people disagreed without continuing the conversation. So I don't know
why they disagreed, or what part of the argument they thought was wrong. Maybe I
made a simple mistake, but nobody was willing to point it out.
I also think that you raise good points regarding demographics and the
willingness of different groups of people to voice their perspectives.
2
Nathan Young
23d
I agree it would be nice to know, but in every case someone has decided they do
want to vote but don't want to comment. Sometimes I try and cajole an answer,
but ultimately I'm glad they gave me any information at all.
1
Rebecca
24d
What is bragading?
4
Brad West
24d
I think he was referring to "brigading", which is discussed in this thread.
Generally, it is voting more out of allegiance or affinity to a particular
person, rather than an assessment of the quality of the post/comment.
Looking forward to how it plays out! LessWrong made the intentional decision not
to do it, because I thought posts are too large and have too many claims, so
agreement/disagreement didn't really have much natural grounding any more. But
we'll see how it goes. I am glad to have two similar forums so we can see
experiments like this play out.
4
NickLaing
7d
My hope would be that it would allow people to decouple the quality of the post
and whether they agree with it or not. Hopefully people could even feel better
about upvoting posts they disagreed with (although based on comments that may be
optimistic).
Perhaps combined with a possible tweak to what upvoting means (as mentioned by a
few people). Someone mentioned we could change "how much do you like this
overall" to something that moves away from basing the reaction on emotion. I
think someone suggested something like "Do you think this post adds value?"
(That's just a rough first pass at an alternative; I'm sure there are far better ones.)
4
Nathan Young
8d
I think another option is to have reactions on a paragraph level. That would be
interesting.
I think 90% of the answer to this is risk aversion from funders, especially LTFF and OpenPhil, see here. As a result, many things struggled for funding, see here.
We should acknowledge that doing good policy research often involves actually talking to and networking with policy people. It involves running think tanks and publishing policy reports, not just running academic institutions and publishing papers. You cannot do this kind of research well in a vacuum.
That fact, combined with funders who were (and maybe still are) somewhat against funding people (except for people they knew extremely well) to network with policy makers in any way, has led to (and maybe is still leading to) very limited policy research and development happening.
I am sure others could justify this risk-averse approach, and there are real benefits to being risk averse. However, in my view this was a mistake (and is maybe an ongoing mistake). I think it was driven by the fact that funders were/are: A] not policy people, so do/did not understand the space and are/were hesitant to make grants; B] heavily US-centric, so do/did not understand the non-US policy space; and C] heavily capacity constrained, so do/did ... (read more)
I am going on what Ngo said. So I guess, what does he think of it?
-3
Larks
13d
This sounds like the sort of question you should email Richard to ask before you
make blanket accusations.
4
Nathan Young
13d
Ehhh, not really. I think it's not a crazy view to hold and I wrote it on a
shortform.
9
Habryka
14d
My current model is that actually very few people who went to DC and did "AI
Policy work" chose a career that was well-suited to proposing policies that help
with existential risk from AI. In general, people tried to choose more of a path
of "try to be helpful to the US government" and "become influential in the
AI-adjacent parts of the US government", but there are almost no people working
in DC whose actual job it is to think about the intersection of AI policy and
existential risk. Mostly just people whose job it is to "become influential in
the US government so that later they can steer the AI existential risk
conversation in a better way".
I find this very sad and consider it one of our worst mistakes, though I am also
not confident in that model, and am curious whether people have alternative
models.
5
Lukas_Gloor
14d
That's probably true because it's not like jobs like that just happen to exist
within government (unfortunately), and it's hard to create your own role
descriptions (especially with something so unusual) if you're not already at the
top.
That said, I think the strategy you describe EAs as having pursued can be
impactful? For instance, now that AI risk has gone mainstream, some groups in
government are starting to work on AI policy more directly, and if you're
already working on something kind of related and have a bunch of contacts and so
on, you're well-positioned to get into these groups and even get a leading
role.
What's challenging is that you need to make career decisions very autonomously
and have a detailed understanding of AI risk and related levers to carve out
your own valuable policy work at some point down the line (and not be complacent
with "down the line never comes until it's too late"). I could imagine that
there are many EA-minded individuals who went into DC jobs or UK policy jobs
with the intent to have an impact on AI later, but they're unlikely to do much
with that because they're not proactive enough and not "in the weeds" enough
with thinking about "what needs to happen, concretely, to avert an AI
catastrophe?"
Even so, I think I know several DC EAs who are exceptionally competent and super
tuned in and who'll likely do impactful work down the line, or are already about
to do such things. (And I'm not even particularly connected to that sphere,
DC/policy, so there are probably many more really cool EAs/EA-minded folks there
that I've never talked to or read about.)
4
OllieBase
12d
The slide Nathan is referring to. "We didn't listen" feels a little strong; lots
of people were working on policy detail or calling for it, it just seems ex post
like it didn't get sufficient attention. I agree directionally though, and
Richard's guesses at the causes (expecting fast take-off + business-as-usual
politics) seem reasonable to me.
Also, *EAGxBerlin.
This is neat, kudos!
I imagine it might be feasible to later add probability distributions, though
that might unnecessarily slow people down.
Also, some analysis would likely be able to generate a relative value function,
after which you could do the resulting visualizations and similar.
4
Nathan Young
13d
Note I didn't build the app, I just added the choices. Do you think getting the
full relative values is worth it?
1
Nathan Young
13d
Why do people give to EA funds and not just OpenPhil?
4
David Mears
11d
does OpenPhil accept donations? I would have guessed not
3
ChrisSmith
11d
It does not. There are a small number of co-funding situations where money from
other donors might flow through Open Philanthropy operated mechanisms, but it
isn't broadly possible to donate to Open Philanthropy itself (either for opex or
regranting).
I continue to think that a community this large needs mediation functions to avoid lots of harm with each subsequent scandal.
People asked for more details, so I wrote the below.
Let's look at some recent scandals and I'll try and point out some different groups that existed.
FTX - longtermists and non-longtermists, those with greater risk tolerance and those with less
Bostrom - rationalists and progressives
Owen Cotton-Barratt - looser norms vs more robust, weird vs normie
Nonlinear - loyalty vs kindness, consent vs duty of care
In each case, the community disagrees on who we should be and what we should be. People write comments to signal that they are good and want good things and shouldn't be attacked. Other people see these and feel scared that they aren't what the community wants.
This is tiring and anxiety inducing for all parties. In all cases here there are well intentioned, hard working people who have given a lot to try and make the world better who are scared they cannot trust their community to support them if push comes to shove. There are people horrified at the behaviour of others, scared that this behaviour will repeat itself, with all the costs attached. I feel this way, and I ... (read more)
The CEA community health team does serve as a mediation function sometimes, I
think. Maybe that's not enough, but it seems worth mentioning.
5
Chris Leong
17d
Community health is also like the legal system in that they enforce sanctions,
so I wonder if that reduces the chance that someone reaches out to them to mediate.
2
Nathan Young
17d
I think this is the wrong frame tbh
3
Chris Leong
17d
How so?
2
Nathan Young
16d
I think I want them to be a mediation and boundary-setting org, not just a legal
system.
Some things I don't think I've seen around FTX, which are probably due to the investigation, but still seems worth noting. Please correct me if these things have been said.
I haven't seen anyone at the FTXFF acknowledge fault for negligence in not noticing that a defunct phone company (North Dimension) was paying out their grants.
This isn't hugely judgemental from me, I think I'd have made this mistake too, but I would like it acknowledged at some point
The FTX Foundation grants were funded via transfers from a variety of bank accounts, including North Dimension-8738 and Alameda-4456 (Primary Deposit Accounts), as well as Alameda-4464 and FTX Trading-9018
I haven't seen anyone at CEA acknowledge that they ran an investigation in 2019-2020 into someone who would turn out to be one of the largest fraudsters in the world, and failed to turn up anything despite what seem to have been a number of flags.
I remain confused
As I've written elsewhere I haven't seen engagement on this point, which I find relatively credible, from one of the Time articles:
Did you mean for the second paragraph of the quoted section to be in the quote
section?
2
Nathan Young
2mo
I can't remember but you're right that it's unclear.
3
Rían O.M
2mo
I haven't read too much into this and am probably missing something.
Why do you think FTXFF was receiving grants via north dimension? The brief
googling I did only mentioned north dimension in the context of FTX customers
sending funds to FTX (specifically this SEC complaint). I could easily have
missed something.
7
Jason
2mo
Grants were being made to grantees out of North Dimension's account -- at least
one grant recipient confirmed receiving one on the Forum (would have to search
for that). The trustee's second interim report shows that FTXFF grants were
being paid out of similar accounts that received customer funds.
It's unclear to me whether FTX Philanthropy (the actual 501c3) ever had any
meaningful assets to its name, or whether (m)any of the grants even flowed
through accounts that it had ownership of.
Certainly very concerning. Two possible mitigations though:
Any finding of negligence would only apply to those with duties or oversight responsibilities relating to operations. It's not every employee or volunteer's responsibility to be a compliance detective for the entire organization.
It's plausible that people made some due diligence efforts that were unsuccessful because they were fed false information and/or relied on corrupt experts (like "Attorney-1" in the second interim trustee report). E.g., if they were told by Legal that this had been signed off on and that it was necessary for tax reasons, it's hard to criticize a non-lawyer too much for accepting that. Or more simply, they could have been told that all grants were made out of various internal accounts containing only corporate monies (again, with some tax-related justification that donating non-US profits through a US charity would be disadvantageous).
It is worth noting when systems introduce benefits in a few obvious ways but many small harms. An example is blocking housing. It benefits the neighbours a lot - they don't have to have construction nearby - and the people who are harmed are just random marginal people who could have afforded a home but now can't.
But these harms are real and should be tallied.
Much recent discussion in EA has suggested common sense risk reduction strategies which would stop clear bad behavior. Often we all agree on the clear bad behaviour... (read more)
I notice some people (including myself) reevaluating their relationship with EA.
This seems healthy.
When I was a Christian it was extremely costly for me to reduce my identification and resulted in a delayed and much more final break than perhaps I would have wished[1]. My general view is that people should update quickly, and so if I feel like moving away from EA, I do it when I feel that, rather than inevitably delaying and feeling ick.
Notably, reducing one's identification with the EA community need not change one's stance towards effective work/donations/earning to give. I doubt it will change mine. I just feel a little less close to the EA community than I once did, and that's okay.
I don't think I can give others good advice here, because we are all so different. But the advice I would want to hear is "be part of things you enjoy being part of, choose an amount of effort to give to effectiveness and try to be a bit more effective with that each month, treat yourself kindly because you too are a person worthy of love"
I think the strategy fortnight worked really well. I suggest that another one is put in the calendar (for, say, 3 months' time) and then, rather than drip-feeding comment, we sort of wait and then burst it out again.
It felt better to me, anyway, to be like "for these two weeks I will engage".
I hold that there could be a well-maintained wiki article on top EA orgs, and then people could anonymously have added many Nonlinear stories a while ago. I would happily have added comments about their move-fast-and-break-things approach and maybe had a better way to raise it with them.
There would have been edit wars and an earlier investigation.
How much would you pay to have brought this forward 6 months or a year? And likewise for whatever other startling revelations there are. In which case, I suggest a functional wiki is worth 5% - 10% of that amount, per case.
My question is "Who would want to run an EA org or project in that kind of
environment?". Presumably, you'd be down, but my bet is that the vast majority
of people wouldn't.
2
Nathan Young
16d
Given that people are suggesting a lengthy set of org norms, I'm not sure that
avoiding taxing orgs is their top concern.
2
Nathan Young
16d
While I support your right to disagreevote anonymously, I also challenge someone
to articulate the disagreement.
I was reading this article about Nuclear winter a couple of days ago and I struggled. It's a good article but there isn't an easy slot in my worldview for it. The main thrust was something like "maybe nuclear winter is worse than other people think". But I don't really know how bad other people think it is.
Compare this to community articles, I know how the community functions and I have opinions on things. Each article fits neatly into my brain.
If I had a globe of my worldview, the EA community section would be very well mapped out... (read more)
I wouldn't recommend people tweet about the nonlinear stuff a lot.
There is an appropriate level of publicity for things, and right now I think the forum is the right level for this. It seems like there is room for people to walk back and apologise. If it's posted more widely, I'm not sure there will be.
If you think that appropriate actions haven't been taken in say a couple months then I get tweeting a bit more.
I think the substance of your take may be right, but there is something that
doesn't sit well with me about an EA suggesting to other EAs (essentially) "I
don't think EAs should talk about this publicly to non-EAs." (I take it that is
the main difference between discussing this on the Forum vs. Twitter—like,
"let's try to have EA address this internally at least for now.") Maybe it's
because I don't fully understand your justification—"there is room for people to
walk back and apologize"—but the vibe here feels a bit to me like "as EAs, we
need to control the narrative around this ('there is an appropriate level of
publicity,')" and that always feels a bit antithetical to people reasoning about
these issues and reaching their own conclusions.
I think I would've reacted differently if you had said: "I don't plan to talk
about this publicly for a while because of x, y, and z" without being
prescriptive about how others should communicate about this stuff.
5
Nathan Young
17d
Yeah, I get that.
I think in general people don't really understand how virality works in
community dynamics. Like there are actions that when taken cannot be reversed.
I don't say "never share this" but I think sharing publicly early will just make
it much harder to have a vulnerable discussion.
I don't mind EAs talking about this with non-EAs but I think twitter is
sometimes like a feeding frenzy, particularly around EA stuff. And no, I don't
want that.
Notably, more agree with me than disagree (though some big upvotes on agreement
obscure this - I generally am not wild about big agreevotes).
As I've written elsewhere I think there is a spectrum from private to public.
Some things should be more public than they are and other things more private.
Currently I am arguing this is about right. I think it turned out that many
issues with FTX were too private.
I think that a mature understanding of sharing things is required for navigating
vulnerable situations (and I imagine you agree - many disliked the sharing of
victims' names around the Time article, because that was too public for that
information, in their opinion).
I appreciate that you said it didn't sit well with you. It doesn't really sit
well with me either. I welcome someone writing it better
3
lilly
16d
Yeah, again, I think you might well be right on the substance. I haven't tweeted
about this and don't plan to (in part because I think virality can often lead to
repercussions for the affected parties that are disproportionate to the
behavior—or at least, this is something a tweeter has no control over). I just
think EA has kind of a yucky history when it comes to being prescriptive about
where/when/how EAs talk about issues facing the EA community. I think this is a
bad tendency—for instance, I think it has, ironically, contributed to the
perception that EA is "culty" and also led to certain problematic behaviors
getting pushed under the rug—and so I think we should strongly err on the side
of not being prescriptive about how EAs talk about issues facing the community.
Again, I think it's totally fine to explain why you yourself are choosing to
talk or not talk about something publicly.
2
Nathan Young
16d
I guess I plan for the future, not the past. But I agree that my stance is
generally more public than most EAs. I talk to journalists about stuff, for
instance, and I think more people should.
I imagine we might agree in some cases.
Feels like we've had about 3 months since the FTX collapse with no kind of leadership comment. Uh, that feels bad. I mean, I'm all for "give cold takes", but how long are we talking?
Do you think this is not due to "sound legal advice"?
5
Habryka
7mo
I am pretty sure there is no strong legal reason for people to not talk at this
point. Not like totally confident but I do feel like I've talked to some people
with legal expertise and they thought it would probably be fine to talk, in
addition to my already bullish model.
I know of at least 1 NDA from an EA org silencing someone from discussing bad behaviour that happened at that org. Should EA orgs be in the practice of making people sign such NDAs?
I think I want a Chesterton's TAP for all questions like this that says "how
normal are these and why" whenever we think about a governance plan.
2
Peter Wildeford
7mo
What's a "Chesterton's TAP"?
2
ChanaMessinger
7mo
Not a generally used phrase, just my attempting to point to "a TAP for asking
Chesterton's fence-style questions"
2
Peter Wildeford
7mo
What's a TAP? I'm still not really sure what you're saying.
4
NunoSempere
7mo
"Trigger action pattern", a technique for adopting habits proposed by CFAR
<https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps>.
7
Peter Wildeford
7mo
Thanks!
"Chesterton's TAP" is the most rationalist buzzword thing I've ever heard LOL,
but I am putting together that what Chana said is that she'd like there to be
some way for people to automatically notice (the trigger action pattern) when
they might be adopting an abnormal/atypical governance plan and then reconsider
whether the "normal" governance plan may be that way for a good reason even if
we don't immediately know what that reason is (the Chesterton's fence)?
2
ChanaMessinger
7mo
Oh, sorry! TAPs are a CFAR / psychology technique.
https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps
2
Nathan Young
8mo
I am unsure what you mean? As in, because other orgs do this it's probably
normal?
4
ChanaMessinger
8mo
I have no idea, but would like to! With things like "organizational structure"
and "nonprofit governance", I really want to understand the reference class
(even if everyone in the reference class does stupid bad things and we want to
do something different).
0
Yitz
10mo
Strongly agree that moving forward we should steer away from such organizational
structures; much better that something bad is aired publicly before it has a
chance to become malignant
I get why I and others give to GiveWell rather than catastrophic risk - sometimes it's good to know your "impact account" is positive even if all the catastrophic risk work was useless.
But why do people not give to animal welfare in this case? Seems higher impact?
And if it's just that we prefer humans to animals that seems like something we should be clear to ourselves about.
Also I don't know if I like my mental model of an "impact account". Seems like my giving has maybe once again become about me rather than impact.
This is exactly why I mostly give to animal charities. I do think there's higher
uncertainty of impact with animal charities compared to global health charities
so I still give a bit to AMF. So roughly 80% animal charities, 20% global
health.
3
Aaron Bergman
4mo
Thanks for bringing our convo here! As context for others, Nathan and I had a
great discussion about this which was supposed to be recorded...but I managed to
mess up and didn't capture the incoming audio (i.e. everything Nathan said) 😢
Guess I'll share a note I made about this (sounds AI written because it mostly
was, generated from a separate rambly recording). A few lines are a little
spicier than I'd ideally like but 🤷
7
Jason
4mo
Thanks for posting this. I had branching out my giving strategy to include some
animal-welfare organizations on the to-do list, but this motivated me to
actually pull the trigger on that.
4
RedStateBlueState
4mo
I think most of the animal welfare neglect comes from the fact that if people
are deep enough into EA to accept all of its "weird" premises they will donate
to AI safety instead. Animal welfare is really this weird midway spot between
"doesn't rest on controversial claims" and "maximal impact".
8
Aaron Bergman
4mo
Definitely part of the explanation, but my strong impression from interaction
irl and on Twitter is that many (most?) AI-safety-pilled EAs donate to GiveWell
and much fewer to anything animal related.
I think ~literally except for Eliezer (who doesn’t think other animals are
sentient), this isn’t what you’d expect from the weirdness model implied.
Assuming I’m not badly mistaken about others’ beliefs and the gestalt (sorry) of
their donations, I just don’t think they’re trying to do the most good with
their money. Tbc this isn’t some damning indictment - it’s how almost all
self-identified EAs’ money is spent and I’m not at all talking about ‘normal
person in rich country consumption.’
I'm sorry to hear this (and grateful that you're reporting them). We have
systems for flagging when a user's DM pattern is suspicious, but it's imperfect
(I'm not sure if it's too permissive right now).
In case it's useful for you to have a better picture of what's going on, I think
you get more of the DM spam because you're very high up in the user list.
2
Nathan Young
12d
I don't really mind. It's not hard for me to just report the user (which is what
you'd like, right?).
This is like 1 minute a week, so not a big deal for me. Thanks again for your
and the team's work.
"I don't think drinking is bad, but we have a low-alcohol culture so the fact you host parties with alcohol is bad"
Often the easiest mark of bad behaviour is that it breaks a norm we've agreed on. Is it harmful in a specific case to shoplift? Depends on what was happening to the things you stole. But it seems easier just to appeal to our general norm that shoplifting is bad. On average it is harmful, and so even if it wasn't in this specific case, being willing to shoplift is a bad sign. Even if you're stealing me... (read more)
A previous partner and I did a sex and consent course together online. I think it's helped me be kinder in relationships.
Useful in general.
More useful if you:
- have sex casually
- see harm in your relationships and want to grow
- are poly
As I've said elsewhere, I think a very small proportion of people in EA are responsible for most of the relationship harms. Some are bad actors, who need to be removed; some are malefactors, who have either lots of interactions or engage in high-risk behaviours and accidentally cause harm. I would guess I have more traits of the second category than almost all of you. So people like me should do the most work to change.
So most of you probably don't need this, but if you are in some of the above groups, I'd recommend a course like this. Save yourself the heartache of upsetting people you care about.
I have heard one anecdote of an EA saying that they would be less likely to hire someone on the basis of their religion, because it would imply they were less good at their job - less intelligent/epistemically rigorous. I don't think they were involved in hiring, but I don't think anyone should hold this view.
Here is why:
As soon as you are in a hiring situation, you have much more information than priors. Even if it were true that, say, people with ADHD[1] were less rational, the interview process should provide much more information than such a prior. If that's not the case, get a better interview process; don't start being prejudiced!
People don't mind meritocracy, but they want a fair shake. If I heard that people had a prior that ADHD folks were less likely to be hard working, regardless of my actual performance in job tests, I would be less likely to want to be part of this community. You might lose my contributions. It seems likely to me that we come out ahead by ignoring small differences in groups so people don't have to worry about this. People are very sensitive to this. Let's agree not to defect. We judge on our best guess of your performance, not on appearances.
I would be unsurprised if this kind of thinking cut only one way. Is anyone suggesting they wouldn't hire poly people because of the increased drama or men because of the increased likelihood of sexual scandal?
In the wake of the financial crisis it was not uncommon to see suggestions that banks etc. should hire more women to be traders and risk managers because they would be less temperamentally inclined towards excessive risk taking.
I have not heard such calls in EA, which was my point.
But neat example.
6
Joseph Lemien
4mo
These thoughts are VERY rough and hand wavy.
I think that we have more-or-less agreed as societies that there are some traits
that it is okay to use to make choices about people (mainly: their
actions/behaviors), and there are some traits that it is not okay to use
(mainly: things that the person didn't choose and isn't responsible for). Race,
religion, gender, and the like are widely accepted[1] as not socially acceptable
traits to use when evaluating people's ability to be a member of a team.[2] But
there are other traits that we commonly treat as acceptable to use as the basis
of treating people differently, such as what school someone went to, how many
years of work experience they have, if they have a similar communication style
as us, etc.
I think I might split this into two different issues.
1. One issue is: it isn't very fair to give or withhold jobs (and other
opportunities) based on things that people didn't really have much choice in
(such as where they were born, how wealthy their parents were, how good of
an education they got in their youth, etc.)
2. A separate issue is: it is ineffective to make employment decisions (hiring,
promotions, etc.) based on things that don't predict on-the-job success.
Sometimes these things line up nicely (such as how it isn't fair to base
employment decisions on hair color, and it is also good business to not base
employment decisions on hair color). But sometimes they don't line up so nicely:
I think there are situations where it makes sense to use "did this person go to
a prestigious school" to make employment decisions because that will get you
better on-the-job performance; but it also seems unfair because we are in a
sense rewarding this person for having won the lottery.[3]
In a certain sense I suppose this is just a mini rant about how the world is
unfair. Nonetheless, I do think that a lot of conversations about hiring and
discriminations get the two different issues conflated.
1. ^
Pe
0
quinn
4mo
I know lots of people with lots of dispositions experience friction with just
declining their parents' religions, but that doesn't mean I "get it"; i.e.,
conflating religion with birth lotteries and immutability seems a little
unhinged to me.
There may be a consensus that it's low status to say out loud "we only hire
Harvard alum" or maybe illegal (or whatever), but there's not a lot of pressure
to actually try reducing implicit selection effects that end up in effect quite
similar to a hardline rule. And I think Harvard undergrad admissions have way
more in common with lotteries than religion does!
I think the old sequencesy sort of "being bad at metaphysics (rejecting
reductionism) is a predictor of unclear thinking" is fine! The better response
to that is "come on, no one's actually talking about literal belief in literal
gods, they're moreso saying that the social technologies are valuable or they're
uncomfortable just not stewarding their ancestors' traditions" than like a DEI
argument.
4
Nathan Young
4mo
There is more to get into here but two main things:
* I guess some EAs, including some who I think do really good work, do literally
  believe in literal gods
* I don't actually think this is that predictive. I know some theists who are
  great at thinking carefully and many atheists who aren't. I reckon I could
  distinguish the two in a discussion better than rejecting the former out of
  hand.
5
Aaron Gertler
4mo
Some feedback on this post: this part was confusing. I assume that what this
person said was something like "I think a religious person would probably be
harder to work with because of X", or "I think a religious person would be less
likely to have trait Y", rather than "religious people are worse at jobs".
The specifics aren't very important here, since the reasons not to discriminate
against people for traits unrelated to their qualifications[1] are collectively
overwhelming. But the lack of specifics made me think to myself: "is that
actually what they said?". It also made it hard to understand the context of
your counterarguments, since there weren't any arguments to counter.
1. ^
Religion can sometimes be a relevant qualification, of course; if my
childhood synagogue hired a Christian rabbi, I'd have some questions. But I
assume that's not what the anecdotal person was thinking about.
7
Kirsten
4mo
The person who was told this was me, and the person I was talking to straight up
told me he'd be less likely to hire Christians because they're less likely to be
intelligent
Please don't assume that EAs don't actually say outrageously offensive things -
they really do sometimes!
Edit: A friend told me I should clarify this was a teenage edgelord - I don't
want people to assume this kind of thing gets said all the time!
8
Nathan Young
4mo
And since posting this I've said this to several people and 1 was like "yeah no
I would downrate religious people too"
I think a poll on this could be pretty uncomfortable reading. If you don't, run
it and see.
To put it another way, would EAs discriminate against people who believe in
astrology? I imagine more than the base rate would. Part of me agrees with that;
part of me thinks it's norm-harming to do. But I don't think this one is "less
than the population".
6
Aaron Gertler
4mo
That's exactly what I mean!
"I think religious people are less likely to have trait Y" was one form I
thought that comment might have taken, and it turns out "trait Y" was
"intelligence".
Now that I've heard this detail, it's easier to understand what misguided ideas
were going through the speaker's mind. I'm less confused now.
"Religious people are bad at jobs" sounds to me like "chewing gum is dangerous"
— my reaction is "What are you talking about? That sounds wrong, and also...
huh?"
By comparison, "religious people are less intelligent" sounds to me like
"chewing gum is poisonous" — it's easier to parse that statement, and compare it
to my experience of the world, because it's more specific.
*****
As an aside: I spend a lot of time on Twitter. My former job was running the EA
Forum. I would never assume that any group has zero members who say offensive
things, including EA.
5
Linch
4mo
I think the strongest reason to not do anything that even remotely looks like
employer discrimination based on religion is that it's illegal, at least for the
US, UK, and European Union countries, which likely jointly encompasses >90% of
employers in EA.
(I wouldn't be surprised if this is true for most other countries as well, these
are just the ones I checked).
4
Jason
4mo
There's also the fact that, as a society and subject to certain exceptions,
we've decided that employers shouldn't be using an employee's religious beliefs
or lack thereof as an assessment factor in hiring. I think that's a good rule
from a rule-utilitarian framework. And we can't allow people to utilize their
assumptions about theists, non-theists, or particular theists in hiring without
the rule breaking down.
The exceptions generally revolve around personal/family autonomy or expressive
association, which don't seem to be in play in the situation you describe.
4
Joseph Lemien
4mo
I think that I generally agree with what you are suggesting/proposing, but there
are all kinds of tricky complications. The first thing that jumps to my mind is
that sometimes hiring the person who seems most likely to do the best job ends
up having a disparate impact, even if there was no disparate treatment. This is
not a counterargument, of course, but more so a reminder that you can do
everything really well and still end up with a very skewed workforce.
3
Timothy Chan
4mo
I generally agree with the meritocratic perspective. It seems a good way (maybe
the best?) to avoid tit-for-tat cycles of "those holding views popular in some
context abuse power -> those who don't like the fact that power was abused
retaliate in other contexts -> in those other contexts, holding those views
results in being harmed by people in those other contexts who abuse power".
Good point about the priors. Strong priors about these things seem linked to
seeing groups as monoliths with little within-group variance in ability.
Accounting for the size of variance seems under-appreciated in general. E.g., if
you've attended multiple universities, you might notice that there's a lot of
overlap between people's "impressiveness", despite differences in official
university rankings. People could try to be less confused by thinking in terms
of mean/median, variance, and distributions of ability/traits more, rather than
comparing groups by their point estimates.
Some counter-considerations:
* Religion and race seem quite different. Religion seems to come with a bunch
of normative and descriptive beliefs that could affect job performance -
especially in EA - and you can't easily find out about those beliefs in a job
interview. You could go from one religion to another, from no religion to
some religion, or some religion to no religion. The (non)existence of that
process might give you valuable information about how that person thinks
about/reflects on things and whether you consider that to be good
thinking/reflection.
* For example, from an irreligious perspective, it might be considered
evidence of poor thinking if a candidate thinks the world will end in ways
consistent with those described in the Book of Revelation, or think that
we're less likely to be in a simulation because a benevolent, omnipotent
being wouldn't allow that to happen to us.
* Anecdotally, on average, I find that people who have gone through the
2
Joseph Lemien
4mo
Oh, another thought. (sorry for taking up so much space!) Sometimes something
looks really icky, such as evaluating a candidate via religion, but is actually
just standing in for a different trait. We care about A, and B is somewhat
predictive of A, and A is really hard to measure, then maybe people sometimes
use B as a rough proxy for A.
I think that this is sometimes used as the justification for sexism/racism/etc,
where the old-school racist might say "I want a worker who is A, and B people
are generally not A." If the relationship between A and B is non-existent or
fairly weak, then we would call this person out for discriminating unfairly. But
now I'm starting to think of what we should do if there really is a correlation
between A and B (such as sex and physical strength). That is what tends to
happen if a candidate is asked to do an assessment that seems to have nothing to
do with the job, such as clicking on animations of colored balloons: it appears
to have nothing to do with the job, but it actually measures X, which is
correlated with Y, which predicts on-the-job success.
I'd rather be evaluated as an individual than as a member of a group, and I
suspect that in-group variation is greater than between-group variation, echoing
what you wrote about the priors being weak.
0
Nathan Young
4mo
You don't need to apologise for taking up space! It's a short form, write what
you like.
As with many statements people make about people in EA, I think you've identified something that is true about humans in general.
I think it applies less to the average person in EA than to the average human. I think people in EA are more morally scrupulous and prone to feeling guilty/insufficiently moral than the average person, and I suspect you would agree with me given other things you've written. (But let me know if that's wrong!)
I find statements of the type "sometimes we are X" to be largely uninformative when "X" is a part of human nature.
Compare "sometimes people in EA are materialistic and want to buy too many nice things for themselves; EA has a materialism problem" — I'm sure there are people in EA like this, and perhaps this condition could be a "problem" for them. But I don't think people would learn very much about EA from the aforementioned statements, because they are also true of almost every group of people.
The vibe at EAG was chill, maybe a little downbeat, but fine. I can get myself riled up over the forum, but it's not representative! Most EAs are just getting on with stuff.
(This isn't to say that forum stuff isn't important; it's just as important as it is, rather than what should define my mood.)
I hope Will MacAskill is doing well. I find it hard to predict how he's doing as a person. While there have been lots of criticisms (and I've made some), I think it's tremendously hard to be the Schelling person for a movement. There is a separate axis, however, and I hope in himself he's doing well, and I imagine many feel that way. I hope he has an accurate picture here.
Being open minded and curious is different from holding that as part of my identity.
Perhaps I never reach it. But it seems to me that "we are open minded people so we probably behave open mindedly" is false.
Or more specifically, I think that it's good that EAs want to be open minded, but I'm not sure that we are purely because we listen graciously, run criticism contests, talk about cruxes.
The problem is the problem. And being open minded requires being open to changing one's mind in difficult or set situations. And I don't have a way that's guaranteed to get us over that line.
I guess African, Indian and Chinese voices are underrepresented in the AI Governance discussion. And in the unlikely case we die, we all die, and I think it's weird that half the people who will die have no one loyal to them in the discussion.
We want AI that works for everyone, and it seems likely you want people who can represent the billions who don't currently have a loyal representative.
I'm actually more concerned about the underrepresentation of certain voices as it applies to potential adverse effects of AGI (or even near-AGI) on society that don't involve all of us dying. In the everyone-dies scenario, I would at least be similarly situated to people from Africa, India, and China in terms of experiencing the exact same bad thing that happens. But there are potential non-fatal outcomes, like locking in current global power structures and values, that affect people from non-Western countries much differently (and more adversely) than they'd affect people like me.
Yeah, in a scenario with "nation-controlled" AGI, it's hard to see people from
the non-victor sides not ending up (at least) as second-class citizens - for a
long time. The fear/lack of guarantee of not ending up like this makes
cooperation on safety more difficult, and the fear also kind of makes sense?
Great if governance people manage to find a way to alleviate that fear - if it's
even possible. Heck, even allies of the leading state might be worried - doesn't
feel too good to end up as a vassal state. (Added later (2023-06-02): It may be
a question that comes up as AGI discussions become mainstream.)
Wouldn't rule out both Americans and Chinese outside of their respective allied
territories being caught in the crossfire of a US-China AI race.
Political polarization on both sides in the US is also very scary.
3
Nathan Young
4mo
Sorry, yes. I think that ideally we don't all die. And in those situations
voices loyal to representative groups seem even more important.
5
Joseph Lemien
4mo
This strikes me as another variation of "EA has a diversity problem." Good to
keep in mind that it is not just about progressive notions of inclusivity,
though. There may be VERY significant consequences for the people in vast swaths
of the world if a tiny group of people make decisions for all of humanity. But
yeah, I also feel that it is a super weird aspect of the anarchic system (in the
international relations sense of anarchy) that most of the people alive today
have no one representing their interests.
It also seems to echo consistent critiques of development aid not including
people in decision-making (along the lines of Ivan Illich's To Hell with Good
Intentions, or more general post-colonial narratives).
1
harfe
4mo
What do "have no one loyal to them" and "with a loyal representative" mean? Are
you talking about the Indian government? Or are you talking about EAs taking
part in discussions, such as yourself? (In which case, who are you loyal to?)
3
Nathan Young
4mo
I think that's part of the problem.
Who is loyal to the Chinese people?
And I don't think I'm good here. I think I try to be loyal to them, but I don't
know what the Chinese people want, and I think if I try and guess I'll get it
wrong in some key areas.
I'm reminded of when GiveWell (?) asked recipients how they would trade money
for children's lives, and they really fucking loved saving children's lives. If
we are doing things for others' benefit, we should take their weightings into
account.
We have thought about that. Probably the main reason we haven't done this is because of this reason, on which I'll quote myself on from an internal slack message:
Currently if someone makes an anon account, they use an anonymous email address. There's usually no way for us, or, by extension, someone who had full access to our database, to deanonymize them. However, if we were to add this feature, it would tie the anonymous comments to a primary account. Anyone who found a vulnerability in that part of the code, or got an RCE on us, would be able to post a dump that would fully deanonymize all of those accounts.
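To make the trade-off concrete, here is a minimal, hypothetical sketch (not the Forum's actual data model; the field names are invented for illustration). In the first design an anonymous comment stores no link back to a primary account, so even a full database dump reveals nothing about authorship; in the second, a single owner id column means one dump deanonymizes every such comment at once.

```python
# Hypothetical illustration only - not the Forum's real schema.
from dataclasses import dataclass


@dataclass
class AnonCommentUnlinked:
    """Current approach: no reference back to a primary account."""
    comment_id: str
    body: str
    # A full dump of this table cannot, by itself, reveal who wrote the comment.


@dataclass
class AnonCommentLinked:
    """Proposed feature: the anon comment is tied to a primary account."""
    comment_id: str
    body: str
    owner_user_id: str  # hypothetical field linking back to the main account
    # Anyone who obtains a dump (or an RCE) can join this column against the
    # users table and deanonymize every comment of this kind in one step.
```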
If you're commenting on a post, it helps to start off with points of agreement
and genuine compliments about things you liked. Try to be honest and
non-patronizing: a comment where the only good thing you say is "your English is
very good" will not be taken well, nor will a statement that "we both agree that
murder is bad". And don't overthink it; a simple "great post" (if honest) is
never unappreciated.
Another point is that the forum tends to have a problem with "nitpicking", where
the core points of a post are ignored in favor of pointing out minor,
unimportant errors. Try to engage with the core points of an argument, or if you
are pointing out a small error, preface it with "this is a minor nitpick", and
put it at the end of your comment.
So a criticism would look like:
"Very interesting post! I think X is a great point that more people should be
talking about. However, I strongly disagree with core point Y, for [reasons].
Also, a minor nitpick: statement Z is wrong because [reasons]"
I think the above is way less likely to feel like an "attack", even though the
strong disagreements and critiques are still in there.
I talked to someone outside EA the other day who said that in a competitive tender they wouldn't apply to EA funders, because they thought the process would be likely to go to someone with connections to OpenPhil.
Someone told me they don't bet as a matter of principle. And that this means EA/Rats take their opinions less seriously as a result. Some thoughts
I respect individual EAs preferences. I regularly tell friends to do things they are excited about, to look after themselves, etc etc. If you don't want to do something but feel you ought to, maybe think about why, but I will support you not doing it. If you have a blanket ban on gambling, fair enough. You are allowed to not do things because you don't want to
Gambling is addictive; if you have a problem with it,
I don't bet because I feel it's a slippery slope. I also strongly dislike how opinions and debates in EA are monetised, as this strengthens even more the neoliberal vibe EA already has, so my drive to refrain from doing this in EA is stronger than outside.
Edit: and I too have gotten dismissed by EAs for it in the past.
* I don't want you to do something you don't want to.
* A slippery slope to what?
3
Guy Raveh
3mo
To gambling on anything else and taking an actual financial risk.
2
Nathan Young
3mo
Yeah, I guess if you think there is a risk of gambling addiction, don't do it.
But I don't know that that's a risk for many.
Also I think many of us take a financial risk by being involved in EA. We are
making big financial choices.
2
Guy Raveh
3mo
There's a difference between using money to help others and using it for
betting?
2
Nathan Young
3mo
Yes obviously, but not in the sense that you are investing resources.
Is there a difference between the financial risk of a bet and of a standard
investment? Not really, no.
6
DC
3mo
I don't bet because it's not a way to actually make money given the frictional
costs to set it up, including my own ignorance about the proper procedure and
having to remember it and keep enough capital for it. Ironically, people who are
betting in this subculture are usually cargo culting the idea of
wealth-maximization with the aesthetics of betting with the implicit assumption
that the stakes of actual money are enough to lead to more correct beliefs when
following the incentives really means not betting at all. If convenient,
universal prediction markets weren't regulated into nonexistence then I would
sing a different tune.
2
Nathan Young
3mo
I guess I do think the "wrong beliefs should cost you" idea is a lot of the
gains. I also think it's important that bets can be at the scale of the
disagreement, but I think that's a much more niche view.
5
Jason
3mo
There are a number of possible reasons that the individual might not want to
talk about publicly:
* A concern about gambling being potentially addictive for them;
* Being relatively risk-averse in their personal capacity (and/or believing
that their risk tolerance is better deployed for more meaningful things than
random bets);
* Being more financially constrained than their would-be counterparts; and
* Awareness of, and discomfort with, the increased power the betting norm could
give people with more money.
On the third point: the bet amount that would be seen as meaningful will vary
based on the person's individual circumstances. It is emotionally tough to say
-- no, I don't have much money, $10 (or whatever) would be a meaningful bet for
me even though it might take $100 (or whatever) to be meaningful to you.
On the fourth point: if you have more financial resources, you can feel freer
with your bets while other people need to be more constrained. That gives you
more access to bet-offers as a rhetorical tool to promote your positions than
people with fewer resources. It's understandable that people with fewer
resources might see that as a financial bludgeon, even if not intended as such.
-2
Nathan Young
3mo
I think the first one is good, the others not so much.
I think there is something else going on here.
5
Sol3:2
3mo
I have yet to see anyone in the EA/rat world make a bet for sums that matter, so
I really don't take these bets very seriously. They also aren't a great way to
uncover people's true probabilities because if you are betting for money that
matters you are obviously incentivized to try to negotiate what you think are
the worst possible odds for the person on the other side that they might be dumb
enough to accept.
2
Nathan Young
3mo
Kind of fair. I'm pretty sure I've seen $1000s.
2
Mohammad Ismam Huda
3mo
If anything... I probably take people less seriously if they do bet (not saying
that's good or bad, but just being honest), especially if there's a
bookmaker/platform taking a cut.
2
Nathan Young
3mo
I think this is more about 1-1 bets.
I guess it depends if they win or lose on average. I still think knowing I
barely win is useful self knowledge.
Can we have some people doing AI Safety podcast/news interviews as well as Yud?
I am concerned that he's gonna end up being the figurehead here. I assume someone is thinking of this, but I'm posting here to ensure that it is said. I am pretty sure that people are working on this, but I think it's good to say this anyway.
We aren't a community who says "I guess he deserves it"; we say "who is the best person for the job?". Yudkowsky, while he is an expert, isn't a median voice. His estimates of P(doom) are on the far tail of EA experts here. So if I could pick 1 person I wouldn't pick him, and frankly I wouldn't pick just one person.
Some other voices I'd like to see on podcasts/ interviews:
Toby Ord
Paul Christiano
Ajeya Cotra
Amanda Askell
Will MacAskill
Joe Carlsmith*
Katja Grace*
Matthew Barnett*
Buck Shlegeris
Luke Muehlhauser
Again, I'm not saying no one has thought of this (80% they have). But I'd like to be 97% sure, so I'm flagging it.
I am a bit confused by your inclusion of Will MacAskill. Will has been on a lot
of podcasts, while for Eliezer I only remember 2. But your text sounds a bit
like you worry that Eliezer will be too much on podcasts and MacAskill too
little (I don't want to stop MacAskill from going on podcasts btw. I agree that
having multiple people present different perspectives on AGI safety seems like a
good thing).
4
Nathan Young
7mo
I think in the current discourse I'd like to see more of Will, who is a balanced
and clear communicator.
8
RobertM
7mo
I don't think you should be optimizing to avoid extreme views, but in favor of
those with the most robust models, who can also communicate them effectively to
the desired audience. I agree that if we're going to be trying anything
resembling public outreach it'd be good to have multiple voices for a variety of
reasons.
On the first half of the criteria I'd feel good about Paul, Buck, and Luke. On
the second half I think Luke's blog is a point of evidence in favor. I haven't
read Paul's blog, and I don't think that LessWrong comments are sufficiently
representative for me to have a strong opinion on either Paul or Buck.
The Scout Mindset deserved 1/10th of the marketing campaign of WWOTF. Galef is a great figurehead for rational thinking and it would have been worth it to try and make her a public figure.
I think much of the issue is that:
1. It took a while to ramp up to being able to do things such as the marketing
   campaign for WWOTF. It's not trivial to find the people and buy-in necessary.
   Previous EA books haven't had similar campaigns.
2. Even when you have that capacity, it's typically much more limited than we'd
want.
I imagine EAs will get better at this over time.
I sense that it's good to publicly name serial harassers who have been kicked out of the community, even if the accuser doesn't want them to be. Other people's feeling matter too and I sense many people would like to know who they are.
I think there is a difference between different outcomes, but if you've been banned from EA events then you are almost certainly someone I don't want to invite to parties etc.
I notice I am pretty skeptical of much longtermist work and the idea that we can make progress on this stuff just by thinking about it.
I think future people matter, but I will be surprised if, after x-risk reduction work, we can find 10s of billions of dollars of work that isn't busywork and shouldn't be spent attempting to learn how to get eg nations out of poverty.
Several journalists (including those we were happy to have write pieces about WWOTF) have contacted me but I think if I talk to them, even carefully, my EA friends will be upset with me. And to be honest that upsets me.
We are in the middle of a mess of our own making. We deserve scrutiny. Ugh, I feel dirty and ashamed and frustrated.
To be clear, I think it should be your own decision to talk to journalists, but I do also just think that it's just better for us to tell our own story on the EA Forum and write comments, and not give a bunch of journalists the ability to greatly distort the things we tell them in a call, with a platform and microphone that gives us no opportunity to object or correct things.
I have been almost universally appalled at the degree to which journalists straightforwardly lie in interviews, take quotes massively out of context, or make up random stuff related to what you said, and I do think it's better that if you want to help the world understand what is going on, that you write up your own thoughts in your own context, instead of giving that job to someone else.
I suggest there is waaaay too much to be on top of in EA and no one knows who is checking what. So some stuff goes unchecked. If there were a narrower set of "core things we study" then it seems more likely that those things would have been gone over by someone in detail and hence fewer errors in core facts.
One of the downsides of EA being so decentralized, I guess. I'm imagining an
alternative-history EA in which it was all AI alignment, or all tropical
disease prevention, and in those worlds the narrowing of "core things we study"
would possibly result in more eyeballs on each thing.
2
Nathan Young
4mo
I think we could still be better in this universe, though I have no idea how.
My call: EA gets 3.9 out of 14 possible cult points.
The group is focused on a living leader to whom members seem to display excessively zealous, unquestioning commitment.
No
The group is preoccupied with bringing in new members.
Yes (+1)
The group is preoccupied with making money.
Partial (+0.8)
Questioning, doubt, and dissent are discouraged or even punished.
No
Mind-numbing techniques (such as meditation, chanting, speaking in tongues, denunciation sessions, debilitating work routines) are used to suppress doubts about the group and its leader(s).
No
The leadership dictates sometimes in great detail how members should think, act, and feel (for example: members must get permission from leaders to date, change jobs, get married; leaders may prescribe what types of clothes to wear, where to live, how to discipline children, and so forth).
No
The group is elitist, claiming a special, exalted status for itself, its leader(s), and members (for example: the leader is considered the Messiah or an avatar; the group and/or the leader has a special mission to save humanity).
Partial (+0.5)
The group has a polarized us- versus-them mentality, which causes conflict with the w
I went through and got 5.2/14 cult points:
I think this is nonzero, I think subsets of the community do display
"excessively zealous" commitment to a leader given "What would SBF do" stickers.
Outside views of LW (or at least of older versions of it) would probably worry
that this was an EY cult.
+0.1
+1
+1
I think this is probably partial, given claims in this post, and
positive-agreevote concerns here (though clearly all of the agree voters might
be wrong).
+0.2
No
No (outside of Leverage research, perhaps)
Yes for elitist, and yes for saving humanity.
+0.5
+0.1
No
+1
No (if we only consider "intentional" inducement)
+0.5
+0.8
No
1
Peter Wildeford
6mo
I think you may have very high standards? By these standards, I don't think
there are any communities at all that would score 0 here.
~
I was not aware of "What would SBF do" stickers. Hopefully those people feel
really dumb now. I definitely know about EY hero worship but I was going to
count that towards a separate rationalist/LW cult count instead of the EA cult
count.
5
pseudonym
6mo
I think where we differ is that I'm not assessing whether EA is worse or better
than other groups on this. If every group scores in the range of
0.5-1, I'll still score 0.5 as 0.5, and not scale 0.5 down to 0 and 0.75 down to
0.5. Maybe that's the wrong way to approach it but I think the least culty
organization can still have cult-like tendencies, instead of being 0 by
definition.
Also if it's true that someone working at GPI was facing these pressures from
"senior scholars in the field", then that does seem like reason for others to
worry. There also has been a lot of discussion on the forum about the types of
critiques that seem like they are acceptable and the ones that aren't etc. Your
colleague also seems to believe this is a concern, for example, so I'm currently
inclined to think that 0.2 is pretty reasonable and I don't think I should
update much based on your comment, but happy for more pushback!
4
MHR
6mo
I think
has to get more than 0.2, right? Being elitist and on a special mission to save
humanity is a concerningly good descriptor of at least a decent chunk of EA.
3
Peter Wildeford
6mo
Ok updated to 0.5. I think "the leader is considered the Messiah or an avatar"
being false is fairly important.
1
Paul_Crowley
6mo
>> The group teaches or implies that its supposedly exalted ends justify means
that members would have considered unethical before joining the group (for
example: collecting money for bogus charities).
> Partial (+0.5)
This seems too high to me, I think 0.25 at most. We're pretty strong on "the
ends don't justify the means".
>>The leadership induces guilt feelings in members in order to control them.
> No
This on the other hand deserves at least 0.25...
I don't think it makes sense to say that the group is "preoccupied with making money". I expect that there's been less focus on this in EA than in other groups, although not necessarily due to any virtue, but rather because of how lucky we have been in having access to funding.
It was pointed out to me that I probably vote a bit wrong on posts.
I generally just up and downvote how I feel, but occasionally if I think a post is very overrated or underrated I will strong upvote or downvote even though I feel less strong than that.
But this is, I think, the wrong behaviour and a defection. If we all did that, we'd all be manipulating posts to where we think they ought to be, and we'd lose the information held in the median of where all our votes leave them.
Withholding the current score of a post until after a vote is cast (where the
casting is committal) should be enough to prevent strategic behavior. But it
comes with many downsides. (I think feed ordering / recsys could work with
private information, so the scores may in principle be inferrable from patterns
in your feed, but you probably won't actually do that. The worse problem is
commitment: I do like to edit my votes quite a bit after initial impressions.)
I imagine there's a more subtle instrument; withholding the current score until
committal votes have been cast seems almost like a limit case.
5
MichaelStJules
2mo
This isn't in response to your specific case (correcting for overrated
or underrated posts), but in response to
I think it's okay to "defect" to correct the results of others' apparent
defection or to keep important information from being hidden. I've used upvotes
correctively when I think people are too harsh with downvotes or when the
downvotes will make important information/discussion much less visible. To
elaborate, I've sometimes done this for cases like these:
1. When a comment or post is at low or negative karma due to downvotes, despite
being made in good faith (especially if it makes plausible, relevant and
useful claims), and without being uncivil or breaking other norms, even if
it expresses an unpopular view (e.g. opinion or ethical view) or makes some
significant errors in reasoning. I don't think we should disincentivize or
censor such comments, and I think that's what disagreement voting and
explanations should be used for. I find it especially unfair when people use
downvotes like this without explanation. This also includes cases where
downvotes crush well-intentioned and civil but poorly executed newbie
posts/comments, which I think is unkind and unwelcoming. (I've used upvotes
correctively like this even before we had disagree voting.)
2. For posts with low or negative karma due to downvotes, if they contain (imo)
important information, possibly even if poorly framed, with bad arguments in
them, or made in apparently bad faith, if there's substantial valuable
discussion on the issue or it isn't being discussed visibly somewhere else
on the EA Forum. Low karma risks effectively hiding (making much less
visible) that information and surrounding discussion through the ranking
algorithm. This is usually for community controversies and criticism.
I very rarely downvote at all, but maybe I'd refrain from downvoting something I
would otherwise downvote because its karma is already l
5
Jason
2mo
Right -- in my view, net-negative karma conveys a particular message (something
like "this post would be better off not existing") that is meaningfully stronger
than the median voter's standard for downvoting. It can therefore easily exist
in circumstances where the median voter would not have endorsed that conclusion.
4
MichaelStJules
2mo
FWIW, I don't think this is against the explicit EA Forum norms around voting,
and using upvotes and strong upvotes this way seems in line with some of their
"suggestions" in the table from that section. In particular, they suggest it's
appropriate to strong upvote if
These could be more or less true depending on the karma of the post or comment
and how visible you think it is.
I don't think using downvotes against overrated posts or comments falls under
the suggestions, though, but doing it only for upvotes and not downvotes could
bias the karma.
How are we going to deal emotionally with the first big newspaper attack against EA?
EA is pretty powerful in terms of impact and funding.
It seems only a matter of time before there is a really nasty article written about the community or a key figure.
Last year the NYT wrote a hit piece on Scott Alexander and while it was cool that he defended himself, I think he and the rationalist community overreacted and looked bad.
I would like us to avoid this.
If someone writes a hit piece about the community, Givewell, Will MacAskill etc, how are we going to avoid a kneejerk reaction that makes everything worse?
I suggest if and when this happens:
individuals largely don't respond publicly unless they are very confident they can do so in a way that leads to deescalation.
articles exist to get clicks. It's worth someone (not necessarily me or you) responding to an article in the NYT, but if, say a niche commentator goes after someone, fewer people will hear it if we let it go.
let the comms professionals deal with it. All EA orgs and big players have comms professionals. They can defend themselves.
if we must respond (we often needn't) we should adopt a stance of grace, curiosity and hu
Yeah, I think the community response to the NYT piece was counterproductive, and I've also been dismayed at how much people in the community feel the need to respond to smaller hit pieces, effectively signal boosting them, instead of just ignoring them. I generally think people shouldn't engage with public attacks unless they have training in comms (and even then, sometimes the best response is just ignoring).
Unbalanced karma is good, actually. It means that the moderators have to do less. I like the takes of the top users more than those of the median user, and I want them to have more, but not total, influence.
Appeals to fairness don't interest me - why should voting be fair?
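For illustration, here is a minimal sketch (a hypothetical weighting, not the Forum's actual formula) of vote strength that grows with a user's karma but stays bounded, i.e. top users get more influence without getting total influence:

```python
# Minimal sketch (hypothetical weighting, not the Forum's actual formula) of vote
# strength that grows with a user's karma but is capped.
import math

def vote_strength(karma: int, cap: float = 8.0) -> float:
    # Logarithmic growth: gaining another point of influence requires roughly 10x the karma.
    return min(cap, 1.0 + math.log10(max(karma, 1)))

for k in (1, 10, 100, 1_000, 10_000, 1_000_000):
    print(f"karma {k:>9,}: vote counts as {vote_strength(k):.1f}")
```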
GiveDirectly has a President (Rory Stewart) paid $600k, and is hiring a Managing Director. I originally thought they had several other similar roles (because I looked on the website) but I talked to them and seemingly that is not the case. Below is the tweet that tipped me off, but I think it is just mistaken.
One could still take issue with the $600k (though I don't really)
I wish the forum had a better setting for "I wrote this post and maybe people will find it interesting but I don't want it on the front page unless they do, because that feels pretentious"
I think if I knew that I could trade "we all obey some slightly restrictive set of romance norms" for "EA becomes 50% women in the next 5 years" then that's a trade I would advise we take.
That's a big if. But seems trivially like the right thing to do - women do useful work and we should want more of them involved.
To say the unpopular reverse statement: if I knew that such a set of norms wouldn't improve the average wellbeing of women in EA and of EA as a whole, then I wouldn't take the trade.
Seems worth acknowledging there are right answers here, if only we knew the outcomes of our decisions.
I've been musing about some critiques of EA and one I like is "what's the biggest thing that we are missing"
In general, I don't think we are missing things (lol) but here are my top picks:
It seems possible that we reach out to sciency tech people because they are most similar to us. While this may genuinely be the cheapest way to get people now there may be costs to the community in terms of diversity of thought (most Sci/tech people are more similar than the general population)
I'm glad to see more outreach to people in developing nations
More likely to me is a scenario of diminishing returns. Ie, tech people might be
the most important to first order, but there's already a lot of brilliant tech
people working on the problem, so one more won't make much of a difference.
Whereas a few brilliant policy people could devise a regulatory scheme that
penalises reckless AI deployment, etc, making more differences on a marginal
basis.
In defence of Will MacAskill and Nick Beckstead staying on the board of EVF
While I've publicly said that on priors they should be removed unless we hear arguments otherwise, I was kind of expecting someone to make those arguments. If no one will, I will.
MacAskill
MacAskill is very clever, personally kind, and a superlative networker and communicator. Imo he oversold SBF, but I guess I'd do much worse in his place. It seems to me that we should want people who have made mistakes and learned from them. Seems many EA orgs would be glad to have someone like... (read more)
I would like to see posts give you more karma than comments (which would hit me hard). Seems like a highly upvoted post is waaaaay more valuable than 3 upvoted comments on that post, but it's pretty often the latter gives more karma than the former.
Sometimes comments are better, but I think I agree they shouldn't be worth
exactly the same.
6
ChanaMessinger
8mo
People might also have a lower bar for upvoting comments.
-1
Nathan Young
8mo
There you go, 3 mana. Easy peasy.
2
Pat Myron
8mo
simplest first step would be just showing both separately like Reddit
2
Nathan Young
8mo
You can see them separately, but it's how they combine that matters.
3
Pat Myron
8mo
I know you can figure them out, but I don't see them presented separately on
users pages. Am I missing something? Is it shown on the website somewhere?
1
jimrandomh
8mo
They aren't currently shown separately anywhere. I added it to the ForumMagnum
feature-ideas repo but not sure whether we'll wind up doing it.
3
Nathan Young
8mo
They are shown separately here:
https://eaforum.issarice.com/userlist?sort=karma
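Here is a minimal sketch (a hypothetical data model, not ForumMagnum's actual schema) of showing post karma and comment karma separately, plus one illustrative combined score that weights posts more heavily than comments; the 3x weighting is purely for illustration:

```python
# Minimal sketch (hypothetical data model, not ForumMagnum's actual schema) of
# splitting karma by contribution type and combining it with a post-heavy weight.
from dataclasses import dataclass

@dataclass
class Contribution:
    kind: str    # "post" or "comment"
    karma: int

def karma_breakdown(contributions: list[Contribution], post_weight: float = 3.0):
    post_karma = sum(c.karma for c in contributions if c.kind == "post")
    comment_karma = sum(c.karma for c in contributions if c.kind == "comment")
    combined = post_weight * post_karma + comment_karma  # weighting is illustrative
    return post_karma, comment_karma, combined

history = [Contribution("post", 120), Contribution("comment", 40), Contribution("comment", 25)]
print(karma_breakdown(history))  # (120, 65, 425.0)
```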
Feels like there should be some kind of community discussion and research in the wake of FTX, especially if no leadership is gonna do it. But I don't know how that discussion would have legitimacy. I'm okay at such things, but honestly tend to fuck them up somehow. Any ideas?
If I were king:
Use the ideas from all the various posts
Have a big google doc where anyone can add research and also put a comment for each idea and allow people to discuss
Then hold another post where we have a final vote on what should happen
Then EA orgs can see at least what some kind of community consensus thinks
I wrote a post on possible next steps but it got little engagement -- unclear if it was a bad post or people just needed a break from the topic. On mobile, so not linking it -- but it's my only post besides shortform.
The problem as I see it is that the bulk of proposals are significantly underdeveloped, risking both applause light support and failure to update from those with skeptical priors. They are far too thin to expect leaders already dealing with the biggest legal, reputational, and fiscal crisis in EA history to do the early development work.
Thus, I wouldn't credit a vote at this point as reflecting much more than a desire for a more detailed proposal. The problem is that it's not reasonable to expect people to write more fleshed-out proposals for free without reason to believe the powers-that-be will adopt them.
I suggested paying people to write up a set of proposals and then voting on those. But that requires both funding and a way to winnow the proposals and select authors. I suggested modified quadratic funding as a theoretical ideal, but a jury of pro-reform posters as a more practical alternative. I thought that problem was manageable, but it is a problem. In particular, at the proposal-development stage, I didn't want tactical voting by reform skeptics.
Strong +1 to paying people for writing concrete, actionable proposals with clear
success criteria etc. - but I also think that DEI / reform is just really,
really hard, and I expect relatively few people in the community to have 1) the
expertise 2) the knowledge of deeper community dynamics / being able to know the
current stances on things.
(meta point: really appreciate your bio Jason!)
8
ChanaMessinger
7mo
I really liked Nate's post and hope there can be more like it in the future.
Let's assume that the Time article is right about the amount of sexual harassment in EA. How big a problem is this relative to other problems? If we spend $10mn on EAGs (a guess), how much should we spend if we could halve sexual harassment in the community?
The whole sexual harassment issue isn't something that can be easily fixed with
money I think. It's more a project of changing norms and what's acceptable
within the EA community.
The issue is it seems like many folks at the top of orgs, especially in SF, have
deeply divergent views from the normal day-to-day folks joining/hearing about
EA. This is going to be a huge problem moving forward from a public relations
standpoint IMO.
7
Jason
8mo
Money can't fix everything, but it can help some stuff, like hiring
professionals outside of EA and supporting survivors who fear retaliation if
they choose to speak out.
You are an EA, if you want to be. Reading this forum is enough. Giving a little of your salary effectively is enough. Trying to get an impactful job is enough. If you are trying even with a fraction of your resources to make the world better and chatting with other EAs about it, you are one too.
I imagine that it has cost, and does cost, 80k to push for AI safety stuff even when it was weird, and now it seems mainstream.
Like, I think an interesting metric is when people say something which shifts some kind of group vibe. And sure, catastrophic risk folks are into it, but many EAs aren't and would have liked a more holistic approach (I guess).
Sam Harris takes Giving What We Can pledge for himself and for his meditation company "Waking Up"
Harris references MacAskill and Ord as having been central to his thinking and talks about Effective Altruism and existential risk. He publicly pledges 10% of his own income and 10% of the profit from Waking Up. He also will create a series of lessons on his meditation and education app around altruism and effectiveness.
Harris has 1.4M Twitter followers and is a famed Humanist and New Atheist. The Waking Up app has over 500k downloads on Android, so I guess over 1 million overall.
Harris is a marmite figure - in my experience people love him or hate him.
It is good that he has done this.
Newswise, it seems to me it is more likely to impact the behavior of his
listeners, who are likely to be well-disposed to him. This is a significant but
currently low-profile announcement, as the courses on his app will be.
I don't think I'd go spreading this around more generally; many don't like
Harris, and for those who don't like him, it could be easy to see EA as more of
the same (callous, superior progressivism).
In the low probability (5%?) event that EA gains traction in that space of the
web (generally called the Intellectual Dark Web - don't blame me, I don't make
the rules) I would urge caution for EA speakers, who might be pulled into polarising
discussions which would leave some groups feeling EA ideas are "not for them".
This seems quite likely given EA Survey data where, amongst people who indicated they first heard of EA from a Podcast and indicated which podcast, Sam Harris' strongly dominated all other podcasts.
More speculatively, we might try to compare these numbers to people hearing about EA from other categories. For example, by any measure, the number of people in the EA Survey who first heard about EA from Sam Harris' podcast specifically is several times the number who heard about EA from Vox's Future Perfect. As a lower bound, 4x more people specifically mentioned Sam Harris in their comment than selected Future Perfect, but this is probably dramatically undercounting Harris, since not everyone who selected Podcast wrote a comment that could be identified with a specific podcast. Unfortunately, I don't know the relative audience size of Future Perfect posts vs Sam Harris' EA podcasts specifically, but that could be used to give a rough sense of how well the different audiences respond.
Notably, Harris has interviewed several figures associated with EA; Ferriss only
did MacAskill, while Harris has had MacAskill, Ord, Yudkowsky, and perhaps
others.
3
David_Moss
3y
This is true, although for whatever reason the responses to the podcast question
seemed very heavily dominated by references to MacAskill.
This is the graph from our original post, showing every commonly mentioned
category, not just the host (categories are not mutually exclusive). I'm not
sure what explains why MacAskill really heavily dominated the Podcast category,
while Singer heavily dominated the TED Talk category.
4
Nathan Young
3y
The address (in the link) is humbling and shows someone making a positive change
for good reasons. He is clear and coherent.
Good on him.
I'll sort of publicly flag that I sort of break the karma system. Like the way I like to post comments is little and often and this is just overpowered in getting karma.
e.g. I recently overtook Julia Wise, and I've been on the forum for years less than anyone else.
I don't really know how to solve this - maybe someone should just 1 time nuke my karma? But yeah it's true.
Note that I don't do this deliberately - it's just how I like to post and I think it's honestly better to split up ideas into separate comments. But boy is it good at getting karma. And soooo m... (read more)
Having EA Forum karma tells you two things about a person:
They had the potential to have had a high impact in EA-relevant ways
They chose not to.
I wouldn't worry too much about the karma system. If you're worried about having undue power in the discourse, one thing I've internalized is to use the strong upvote/downvote buttons very sparingly (e.g. I only strong-upvoted one post in 2022 and I think I never strong-downvoted any post, other than obvious spam).
Hey Nathan,
thank you for the ranking list. :)
I don't think you need to start with zero karma again. The karma system is not
supposed to mean very much. It is skewed towards certain aspects rather than
being a true representation of your skill or trustworthiness as a user on this
forum. It is more or less an XP bar for social situations and an indicator that
someone posts good content here.
Let's look at an example:
Aaron Gertler, someone held in high regard, retired from the forum, which got a
lot of attention and sympathy. Many people were interested in the post, and it's
an easy topic to participate in. So many were scrolling down to the comments to
write something nice and thank him for his work.
JP Addison did so too. He works for CEA and as a developer for the forum. His
comment got more karma than any post he has made so far.
Karma is used in many places with different concepts behind it. The sum of it
gives you no clear information. What I would think in your case: you are an
active member of the forum and participate positively, with only one post with
negative karma. You participated in the FTX crisis discussion, which was an
opportunity to gain or lose significant amounts of karma, but you survived it,
probably with a good score.
Internet points can make you feel fantastic; they are a system to motivate
social interaction and adherence to community norms (in positive and negative
ways).
Your modesty suits you well, but there is no need for it. Stand tall. There will
always be those with few points but really good content, and those who far
outpace the gems through sheer activity.
I agree that it’s worth saying something about sexual behaviour. Here are my broad thoughts:
I am sad about women having bad experiences, I think about it a lot
I want to be accurate in communication
I think it's easy to reduce harms a lot without reducing benefits
Firstly, I'm sad about the current situation. Seems like too many women in EA have bad experiences. There is a discussion to be had about what happens in other communities, and about tradeoffs. But first, it's really sad.
Daniel's Heavy Tail Hypothesis (HTH) vs. this recent comment from
Brian saying that he thinks that classic piece on 'Why Charities Usually Don't
Differ Astronomically in Expected Cost-Effectiveness' is still essentially
valid.
Seems like Brian is arguing that there are at most 3-4 OOM differences between
interventions whereas Daniel seems to imply there could be 8-10 OOM differences?
Similarly here: Valuing research works by eliciting comparisons from EA
researchers - EA Forum (effectivealtruism.org)
And Ben Todd just tweeted about this as well.
7
Nathan Young
7mo
Here is my first draft. Basically, there will be a play-money prediction market
predicting what the community will vote on a central question (here "are the
top 1% more than 10,000x as effective as the median?"), then we have a discussion,
we vote, and then the market resolves.
https://docs.google.com/document/d/14WpLjsS6idm8Ma-izKFOwkzy-B2F6RDpZ0xlc8aHlXg/edit
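To illustrate why the answer to that central question turns on how heavy-tailed you think cost-effectiveness is, here is a minimal sketch (illustrative lognormal parameters only, not anyone's actual estimates) comparing the top 1% to the median under different assumed spreads:

```python
# Minimal sketch (illustrative parameters, not anyone's actual estimates) of how
# the top-1%-vs-median ratio depends on how heavy-tailed cost-effectiveness is.
import numpy as np

rng = np.random.default_rng(0)

def top1_vs_median_ratio(sigma_ooms: float, n: int = 100_000) -> float:
    # Cost-effectiveness drawn lognormally; sigma_ooms is the spread in orders of magnitude.
    effectiveness = 10 ** rng.normal(0.0, sigma_ooms, size=n)
    top_1_percent_mean = np.mean(np.sort(effectiveness)[-n // 100:])
    return top_1_percent_mean / np.median(effectiveness)

for sigma in (0.5, 1.0, 2.0):
    print(f"sigma = {sigma} OOMs -> top 1% is ~{top1_vs_median_ratio(sigma):,.0f}x the median")
```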
It used to be done by just typing the @ symbol followed by the person's name,
but that doesn't seem to work anymore.
4
Sarah Cheng
2mo
That's right, you should be able to mention users with @ and posts with #.
However, it does seem like they're both currently broken, likely because we
recently updated our search software. Thanks for flagging this! We'll look into
it.
It is unclear to me that, if we chose cause areas again, we would choose global development
The lack of a focus on global development would make me sad
This issue should probably be investigated and mediated to avoid a huge community breakdown - it is naïve to think that we can just swan through this without careful and kind discussion
Does EA have a clearly denoted place for exit interviews? Like if someone who was previously very involved was leaving, is there a place they could say why?
Please post your jobs to Twitter and reply with @effective_jobs. Takes 5 minutes, and the jobs I've posted and then tweeted have got 1000s of impressions.
Or just DM me on twitter (@nathanpmyoung) and I'll do it. I think it's a really cheap way of getting EAs to look at your jobs. This applies to impactful roles in and outside EA.
Here is an example of some text:
-tweet 1
Founder's Pledge Growth Director
@FoundersPledge are looking for someone to lead their efforts in growing the amount that tech entrepreneurs give to effective charities when they IPO.
I listened to this episode today Nathan, I thought it was really good, and you
came across well. I think EAs should consider doing more podcasts, including
those not created/hosted by EA people or groups. They're an accessible medium
with the potential for a lot of outreach (the 80k podcast is a big reason why I
got directly involved with the community).
I know you didn't want to speak for EA as a whole, but I think it was a good
example of EA talking to the leftist community in good faith,[1] which is
(imo) one of our biggest sources of criticism at the moment. I'd recommend
others check out the rest of Rabbithole's series on EA - it's a good piece of
data on what the American Left thinks of EA at the moment.
Summary:
+1 to Nathan for going on this podcast
+1 for people to check out the other EA-related Rabbithole episodes
1. ^
A similar podcast for those interested would be Habiba's appearance on
Garrison's podcast The Most Interesting People I Know
I sense that Conquest's law is true -> that organisations that are not specifically right wing move to the left.
I'm not concerned about moving to the left tbh but I am concerned with moving away from truth, so it feels like it would be good to constantly pull back towards saying true things.
Any time that you read a wiki page that is sparse or has mistakes, consider adding what you were trying to find. I reckon in a few months we could make the wiki really good to use.
I am frustrated and hurt when I take flack for criticism.
It seems to me that people think I'm just stirring shit by asking polls or criticising people in power.
Maybe I am a bit. I can't deny I take some pleasure in it.
But there are a reasonable amount of personal costs too. There is a reason why 1-5 others I've talked to have said they don't want to criticise because they are concerned about their careers.
I more or less entirely criticise on the forum. Believe me, if I wanted to actually stir shit, I could do it a lot more effectively than shortform comments.
Nuclear risk is in the news. I hope:
- if you are an expert on nuclear risk, you are shopping around for interviews and comment
- if you are an EA org that talks about nuclear risk, you are going to publish at least one article on how the current crisis relates to nuclear risk or find an article that you like and share it
- if you are an EA aligned journalist, you are looking to write an article on nuclear risk and concrete actions we can take to reduce it
Any EA leadership has my permission to put scandal on the back burner until we have a strategy on Bing, by the way. Feels like a big escalation to have an ML system reading its own past messages and running a search engine.
EA internal issues matter but only if we are alive.
Reasons I would disagree: (1) Bing is not going to make us 'not alive' on a coming-year time scale. It's (in my view) a useful and large-scale manifestation of problems with LLMs that can certainly be used to push ideas and memes around safety etc, but it's not a direct global threat. (2) The people best-placed to deal with EA 'scandal' issues are unlikely to perfectly overlap with the people best-placed to deal with the opportunities/challenges Bing poses. (3) I think it's bad practice for a community to justify backburnering pressing community issues with an external issue, unless the case for the external issue is strong; it's a norm that can easily become self-serving.
I think the community health team should make decisions on the balance of harms rather than beyond reasonable doubt. If it seems likely someone did something bad they can be punished a bit until we don't think they'll do it again. But we have to actually take all the harms into account.
"beyond reasonable doubt" is a very high standard of proof, which is reasonable
when the effect of a false conviction is being unjustly locked in a prison. It
comes at a cost: a lot of guilty people go free and do more damage.
There's no reason to use that same standard for a situation where the punishments
are things like losing a job or being kicked out of a social community. A high
standard of proof should still be used, but it doesn't need to be "beyond
reasonable doubt" level. I would hate to be falsely kicked out of an EA group,
but at the end of the day I can just do something else.
4
Jason
7mo
I agree that the magnitude of the proposed deprivation is highly relevant to the
burden of proof. The social benefit from taking the action on a true positive,
and the individual harm from acting on a false positive also weigh in the
balance.
In my view, the appropriate burden of proof also takes into account the extent
of other process provided. A heightened burden of proof is one procedure for
reducing the risk of erroneous deprivations, but it is not the only or even the
most important one.
In most cases, I would say that the thinner the other process, the higher the
BOP needs to be. For example, discipline by the bar, medical board, etc is
usually more likely than not . . . but you get a lot of process like an
independent adjudicator, subpoena power, and judicial review. So we accept 51
percent with other procedural protections in play. (And as a practical matter,
the bar generally wouldn't prosecute a case it thought was 51 percent anyway due
to resource constraints). With significantly fewer protections, I'd argue that a
higher BOP would be required -- both as a legal matter (these are government
agencies) and a practical one. Although not beyond a reasonable doubt.
Of course, more process has costs both financial and on those involved. But it's
a possible way to deal with some situations where the current evidence seems too
strong to do nothing and too uncertain to take significant action.
I strongly dislike the following sentence on effectivealtruism.org:
"Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on."
It reads to me as arrogant, and epitomises the worst caricatures my friends make of EAs. Read it in a snarky voice (such as one might if they struggled with the movement and were looking to do research): "Rather than just doing what feels right..."
I suggest it gets changed to one of the following:
"We use evidence and careful analysis to find the very best causes to work on."
"It's great when anyone does a kind action no matter how small or effective. We have found value in using evidence and careful analysis to find the very best causes to work on."
I am genuinely sure whoever wrote it meant well, so thank you for your hard work.
I also thought this when I first read that sentence on the site, but I find it
difficult (as I'm sure its original author does) to communicate its meaning in a
subtler way. I like your proposed changes, but to me the contrast presented in
that sentence is the most salient part of EA. To me, the thought is something
like this:
"Doing good feels good, and for that reason, when we think about doing charity,
we tend to use good feeling as a guide for judging how good our act is. That's
pretty normal, but have you considered that we can use evidence and analysis to
make judgments about charity?"
The problem IMHO is that without the contrast, the sentiment doesn't land. No
one, in general, disagrees in principle with the use of evidence and careful
analysis: it's only in contrast with the way things are typically done that the
EA argument is convincing.
3
Nathan Young
3y
I would choose your statement over the current one.
I think the sentiment lands pretty well even with a very toned down statement.
The movement is called "effective altruism". I think in-groups are often worried
that outgroups will not get their core differences, when generally that's all
outgroups know about them.
I don't think that anyone who visits that website will fail to realise that
effectiveness is a core feature. And I don't think we need to be patronising (as
EAs are caricatured as being in conversations I have) in order to make known
something that everyone already knows.
[epistemic status - low, probably some elements are wrong]
tl;dr - communities have a range of dispute resolution mechanisms, from voting to public conflict to some kind of civil war - some of these are much better than others - EA has disputes and resources and it seems likely that there will be a high profile conflict at some point - What mechanisms could we put in place to handle that conflict constructively and in a positive sum way?
When a community grows as powerful as EA is, there can be disagreements about resource allocation.  ... (read more)
By and large I think this aspect is going surprisingly well, largely because
people have adopted a "disagree but respect" ethos.
I'm a bit unsure of such a fund - I guess that would pit different cause areas
against each other more directly, which could be a conflict framing.
Regarding the mechanism of bargains, it's a bit unclear to me what problem that
solves.
I'm relatively pro casual sex as a person, but I will say that EA isn't about being a sex-positive community - it's about effectively doing good. And if one gets in the way of the other, I know what I'm choosing (doing good).
I think there is a positive sum compromise possible, but it seems worth acknowledging how I will trade off if it comes to it.
No engagement: I’ve heard of effective altruism, but do not engage with effective altruism content or ideas at all
Mild engagement: I’ve engaged with a few articles, videos, podcasts, discussions, events on effective altruism (e.g. reading Doing Good Better or spending ~5 hours on the website of 80,000 Hours)
Moderate engagement: I’ve engaged with multiple articles, videos, podcasts, discussions, or events on effective altruism (e.g. subscribing to the
I think this is part of a more general problem that people say things like "I'm
not totally EA" when they donate 1%+ of their income and are trying hard. Why
create a club where so many are insecure about their membership?
I can't speak for everyone, but if you donate even 1% of your income to
charities which you think are effective, you're EA in my book.
5
Aaron Gertler
2y
It is one of my deepest hopes, and one of my goals for my own work at CEA, that
people who try hard and donate feel like they are certainly, absolutely a part
of the movement. I think this is determined by lots of things, including:
1. The existence of good public conversations about donations, cause
prioritization, etc., where anyone can contribute
2. The frequency of interesting news and stories about EA-related initiatives
that make people feel happy about the progress their "team" is making
I hope that the EA Survey's categories are a tiny speck compared to these.
3
Aaron Gertler
2y
Thanks for providing a detailed suggestion to go with this critique!
While I'm part of the team that puts together the EA Survey, I'm only answering
for myself here.
1. People can consider themselves anything they want! It's okay! You're
allowed! I hope that a single question on the survey isn't causing major
changes to how people self-identify. If this is happening, it implies a
side-effect the Survey wasn't meant to have.
2. Have you met people who specifically cited the survey (or some other place
the question has showed up — I think CEA might have used it before?) as a
source of disillusionment?
I'm not sure I understand why people would so strongly prefer being in a "highly
engaged" category vs. a "considerably engaged" category if those categories
occupy the same relative position on a list. Especially since people don't use
that language to describe themselves, in my experience. But I could easily be
missing something.
I want someone who earns-to-give (at any salary) to feel comfortable saying "EA
is a big part of my life, and I'm closely involved in the community". But I
don't think this should determine how the EA Survey splits up its categories on
this question, and vice-versa.
*****
One change I'd happily make would be changing "EA-aligned organization" to
"impact-focused career" or something like that. But I do think it's reasonable
for the survey to be able to analyze the small group of people whose
professional lives are closely tied to the movement, and who spend thousands of
hours per year on EA-related work rather than hundreds.
(Similarly, in a survey about the climate movement, it would seem reasonable to
have one answer aimed at full-time paid employees and one answer aimed at
extremely active volunteers/donors. Both of those groups are obviously critical
to the movement, but their answers have different implications.)
Earning-to-give is a tricky category. I think it's a matter of degree, like the
difference betwee
2
Nathan Young
2y
It's possible that this question is meant to measure something about non-monetary
contribution size, not engagement. In which case, say that.
Call it "non-financial contribution" and put 4 as "I volunteer more than X
hours" and 5 as "I work on a cause area directly or have taken a below-market-salary
job".
A friend asked about effective places to give. He wanted to donate through his payroll in the UK. He was enthusiastic about it, but that process was not easy.
It wasn't particularly clear whether GiveWell or EA Development Fund was better and each seemed to direct to the other in a way that felt at times sketchy.
It wasn't clear if payroll giving was an option
He found it hard to find GiveWell's spreadsheet of effectiveness
Feels like making donations easy should be a core concern of both GiveWell and EA Funds and my experience made me a little embarrassed to be honest.
People voting without explaining is good.
Being able to agree and disagreevote on posts feels like it might be great. Props to the forum team.
Richard Ngo just gave a talk at EAG Berlin about errors in AI governance, one being a lack of concrete policy suggestions.
Matt Yglesias said this a year ago. He was even the main speaker at EAG DC https://www.slowboring.com/p/at-last-an-ai-existential-risk-policy?utm_source=%2Fsearch%2Fai&utm_medium=reader2
Seems worth asking why we didn't listen to top policy writers when they warned that we didn't have good proposals.
I think 90% of the answer to this is risk aversion from funders, especially LTFF and OpenPhil, see here. As such many things struggled for funding, see here.
We should acknowledge that doing good policy research often involves actually talking to and networking with policy people. It involves running think tanks and publishing policy reports, not just running academic institutions and publishing papers. You cannot do this kind of research well in a vacuum.
That fact, combined with funders who were (and maybe still are) somewhat against funding people (except for people they knew extremely well) to network with policy makers in any way, has led to (and is maybe still leading to) very limited policy research and development happening.
I am sure others could justify this risk averse approach, and there are totally benefits to being risk averse. However in my view this was a mistake (and is maybe an ongoing mistake). I think it was driven by the fact that funders were/are: A] not policy people, so do/did not understand the space, so were/are hesitant to make grants; B] heavily US centric, so do/did not understand the non-US policy space; and C] heavily capacity constrained, so do/did ... (read more)
What do you think of Thomas Larson's bill? It seems pretty concrete to me, do you just think it is not good?
Relative Value Widget
It gives you sets of donations and you have to choose which you prefer. If you want you can add more at the bottom.
https://allourideas.org/manifund-relative-value
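For anyone curious how pairwise choices like these can be turned into a ranking, here is a minimal sketch (hypothetical options and votes, not the widget's actual results) that aggregates pairwise wins into a simple win-rate ordering:

```python
# Minimal sketch (hypothetical data, not the actual widget results) of turning
# pairwise "which donation do you prefer?" answers into a ranking by win rate.
from collections import defaultdict

# Each tuple is (winner, loser) from one pairwise choice.
choices = [
    ("$5k to AMF", "$5k to a local food bank"),
    ("$5k to AMF", "$5k to corporate campaigns for hens"),
    ("$5k to corporate campaigns for hens", "$5k to a local food bank"),
    ("$5k to a local food bank", "$5k to AMF"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for winner, loser in choices:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

ranking = sorted(appearances, key=lambda o: wins[o] / appearances[o], reverse=True)
for option in ranking:
    print(f"{option}: {wins[option]}/{appearances[option]} wins")
```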
I continue to think that a community this large needs mediation functions to avoid lots of harm with each subsequent scandal.
People asked for more details, so I wrote the below.
Let's look at some recent scandals and I'll try and point out some different groups that existed.
In each case, the community disagrees on who we should be and what we should be. People write comments to signal that they are good and want good things and shouldn't be attacked. Other people see these and feel scared that they aren't what the community wants.
This is tiring and anxiety inducing for all parties. In all cases here there are well intentioned, hard working people who have given a lot to try and make the world better who are scared they cannot trust their community to support them if push comes to shove. There are people horrified at the behaviour of others, scared that this behaviour will repeat itself, with all the costs attached. I feel this way, and I ... (read more)
I'd bid for you to explain more what you mean here - but it's your quick take!
Some things I don't think I've seen around FTX, which are probably due to the investigation, but still seems worth noting. Please correct me if these things have been said.
- I haven't seen anyone at CEA acknowledge that they ran an investigation in 2019-2020 on someone who would turn out to be one of the largest fraudsters in the world and failed to turn up anything despite seemingly a number of flags.
- I remain confused
- As I've written elsewhere I haven't seen engagement on this point, which I find relatively credible, from one of the Time articles:
... (read more)
Yeah seems right, but uh still seems worth saying.
Certainly very concerning. Two possible mitigations though:
Clear benefits, diffuse harms
It is worth noting when systems introduce benefits in a few obvious ways but many small harms. An example is blocking housing. It benefits the neighbours a lot - they don't have to have construction nearby - and the people who are harmed are just random marginal people who could have afforded a home but just can't.
But these harms are real and should be tallied.
Much recent discussion in EA has suggested common sense risk reduction strategies which would stop clear bad behavior. Often we all agree on the clear bad behaviour... (read more)
I notice some people (including myself) reevaluating their relationship with EA.
This seems healthy.
When I was a Christian it was extremely costly for me to reduce my identification and resulted in a delayed and much more final break than perhaps I would have wished[1]. My general view is that people should update quickly, and so if I feel like moving away from EA, I do it when I feel that, rather than inevitably delaying and feeling ick.
Notably, reducing one's identification with the EA community need not change one's stance towards effective work/donations/earning to give. I doubt it will change mine. I just feel a little less close to the EA community than I once did, and that's okay.
I don't think I can give others good advice here, because we are all so different. But the advice I would want to hear is "be part of things you enjoy being part of, choose an amount of effort to give to effectiveness and try to be a bit more effective with that each month, treat yourself kindly because you too are a person worthy of love"
- ^
... (read more)
I think a slow move away from Christianity would have been healthier for me. Strangely I find it possible to imagine still being a Christian, had thi
I think the strategy fortnight worked really well. I suggest that another one is put in the calendar (for, say, 3 months' time), and then rather than drip-feeding comments we sort of wait and then burst it out again.
It felt better to me, anyway, to be like "for these two weeks I will engage".
I also thought it was pretty decent, and it caused me to get a post out that had been sitting in my drafts for quite a while.
This could have been a wiki
I hold that there could be a well-maintained wiki article on top EA orgs, and then people could anonymously have added many Nonlinear stories a while ago. I would happily have added comments about their move-fast-and-break-things approach and maybe had a better way to raise it with them.
There would have been edit wars and an earlier investigation.
How much would you pay to have brought this forward 6 months or a year? And likewise for whatever other startling revelations there are. In which case, I suggest a functional wiki is worth 5% - 10% of that amount, per case.
Space in my brain.
I was reading this article about Nuclear winter a couple of days ago and I struggled. It's a good article but there isn't an easy slot in my worldview for it. The main thrust was something like "maybe nuclear winter is worse than other people think". But I don't really know how bad other people think it is.
Compare this to community articles, I know how the community functions and I have opinions on things. Each article fits neatly into my brain.
If I had a globe of my worldview, the EA community section is very well mapped out... (read more)
I wouldn't recommend people tweet about the Nonlinear stuff a lot.
There is an appropriate level of publicity for things and right now I think the forum is the right level for this. Seems like there is room for people to walk back and apologise. Posting more widely and I'm not sure there will be.
If you think that appropriate actions haven't been taken in say a couple months then I get tweeting a bit more.
Feels like we've had about 3 months since the FTX collapse with no kind of leadership comment. Uh, that feels bad. I mean, I'm all for "give cold takes", but how long are we talking?
I know of at least 1 NDA of an EA org silencing someone for discussing bad behaviour that happened at that org. Should EA orgs be in the practice of making people sign such NDAs?
I suggest no.
Confusion
I get why I and others give to GiveWell rather than catastrophic risk - sometimes it's good to know your "Impact account" is positive even if all the catastrophic risk work was useless.
But why do people not give to animal welfare in this case? Seems higher impact?
And if it's just that we prefer humans to animals that seems like something we should be clear to ourselves about.
Also I don't know if I like my mental model of an "impact account". Seems like my giving has maybe once again become about me rather than impact.
ht @Aaron Bergman for surfacing this
I've been getting more spam mail on the forum recently.
I realise you can report users, which I think is quicker than figuring out who to mail and then copying the name over.
I really like saved posts.
I think if I save them and then read mainly from my saved feed, that's a better, less addictive, more informative experience.
Norms are useful so let's have useful norms.
"I don't think drinking is bad, but we have a low-alcohol culture so the fact you host parties with alcohol is bad"
Often the easiest mark of bad behaviour is that it breaks a norm we've agreed. Is it harmful in a specific case to shoplift? Depends on what was happening to the things you stole. But it seems easier just to appeal to our general norm that shoplifting is bad. On average it is harmful and so even if it wasn't in this specific case, being willing to shoplift is a bad sign. Even if you're stealing me... (read more)
A previous partner and I did a sex and consent course together online. I think it's helped me be kinder in relationships.
Useful in general.
More useful if you:
- have sex casually
- see harm in your relationships and want to grow
- are poly
As I've said elsewhere, I think a very small proportion of people in EA are responsible for most of the relationship harms. Some are bad actors, who need to be removed; some are malefactors, who have either lots of interactions or engage in high-risk behaviours and accidentally cause harm. I would guess I have more traits of the second category than almost all of you. So people like me should do the most work to change.
So most of you probably don't need this, but if you are in some of the above groups, I'd recommend a course like this. Save yourself the heartache of upsetting people you care about.
Happy to DM.
https://dandelion.events/e/pd0zr?fbclid=IwAR0cIXFowU7R4dHZ4ptfpqsnnhdnLIJOfM_DjmS_5HR-rgQTnUzBdtQEnjE
I have heard one anecdote of an EA saying that they would be less likely to hire someone on the basis of their religion because it would imply they were
less good at their job (less intelligent/epistemically rigorous). I don't think they were involved in hiring, but I don't think anyone should hold this view. Here is why:
- As soon as you are in a hiring situation, you have much more information than priors. Even if it were true that, say, people with ADHD[1] were less rational, then the interview process should provide much more information than such a prior. If that's not the case, get a better interview process, don't start being prejudiced!
- People don't mind meritocracy, but they want a fair shake. If I heard that people had a prior that ADHD folks were less likely to be hard working, regardless of my actual performance in job tests, I would be less likely to want to be part of this community. You might lose my contributions. It seems likely to me that we come out ahead by ignoring small differences in groups so people don't have to worry about this. People are very sensitive to this. Let's agree not to defect. We judge on our best guess of your performance, not on appearances.
- I would b
... (read more)
In the wake of the financial crisis it was not uncommon to see suggestions that banks etc. should hire more women to be traders and risk managers because they would be less temperamentally inclined towards excessive risk taking.
I think EAs have a bit of an entitlement problem.
Sometimes we think that since we are good we can ignore the rules. Seems bad
As with many statements people make about people in EA, I think you've identified something that is true about humans in general.
I think it applies less to the average person in EA than to the average human. I think people in EA are more morally scrupulous and prone to feeling guilty/insufficiently moral than the average person, and I suspect you would agree with me given other things you've written. (But let me know if that's wrong!)
I find statements of the type "sometimes we are X" to be largely uninformative when "X" is a part of human nature.
Compare "sometimes people in EA are materialistic and want to buy too many nice things for themselves; EA has a materialism problem" — I'm sure there are people in EA like this, and perhaps this condition could be a "problem" for them. But I don't think people would learn very much about EA from the aforementioned statements, because they are also true of almost every group of people.
The vibe at EAG was chill, maybe a little downbeat, but fine. I can get myself riled up over the forum, but it's not representative! Most EAs are just getting on with stuff.
(This isn't to say that forum stuff isn't important, it's just as important as it is rather than what should define my mood)
I hope Will MacAskill is doing well. I find it hard to predict how he's doing as a person. While there have been lots of criticisms (and I've made some) I think it's tremendously hard to be the Schelling person for a movement. There is a separate axis however, and I hope in himself he's doing well and I imagine many feel that way. I hope he has an accurate picture here.
Being open minded and curious is different from holding that as part of my identity.
Perhaps I never reach it. But it seems to me that "we are open minded people so we probably behave open mindedly" is false.
Or more specifically, I think that it's good that EAs want to be open minded, but I'm not sure that we are purely because we listen graciously, run criticism contests, talk about cruxes.
The problem is the problem. And being open minded requires being open to changing one's mind in difficult or set situations. And I don't have a way that's guaranteed to get us over that line.
I guess African, Indian and Chinese voices are underrepresented in the AI Governance discussion. And in the unlikely case we die, we all die, and I think it's weird that half the people who will die have no one loyal to them in the discussion.
We want AI that works for everyone, and it seems likely you want people who can represent the billions who don't currently have a loyal representative.
I'm actually more concerned about the underrepresentation of certain voices as it applies to potential adverse effects of AGI (or even near-AGI) on society that don't involve all of us dying. In the everyone-dies scenario, I would at least be similarly situated to people from Africa, India, and China in terms of experiencing the exact same bad thing that happens. But there are potential non-fatal outcomes, like locking in current global power structures and values, that affect people from non-Western countries much differently (and more adversely) than they'd affect people like me.
Feels like there should be a "comment anonymously" feature. Would save everyone having to manage all these logins.
We have thought about that. Probably the main reason we haven't done this is the following, which I'll quote from an internal Slack message of mine:
It is just really hard to write comments that challenge without seeming to attack people. Anyone got any tips?
We are good at discussion but bad at finding the new thing to update to.
Look at the recent Happier Lives Institute discussion: https://forum.effectivealtruism.org/posts/g4QWGj3JFLiKRyxZe/the-happier-lives-institute-is-funding-constrained-and-needs
Lots of discussion, a reasonable amount of new information, but what should our final update be:
- Have HLI acted fine or badly?
- Is there a pattern of misquoting and bad scholarship?
- Have global health orgs in general moved towards Self-reported WellBeing (SWB) as a way to measure interventions?
- Has HLI generally done g
... (read more)
I talked to someone outside EA the other day who said that in a competitive tender they wouldn't apply to EA funders because they thought the process would likely go to someone with connections to OpenPhil.
Seems bad.
Someone told me they don't bet as a matter of principle, and that EA/Rats take their opinions less seriously as a result. Some thoughts:
- I respect individual EAs preferences. I regularly tell friends to do things they are excited about, to look after themselves, etc etc. If you don't want to do something but feel you ought to, maybe think about why, but I will support you not doing it. If you have a blanket ban on gambling, fair enough. You are allowed to not do things because you don't want to
- Gambling is addictive, if you have a problem with it,
... (read more)
I don't bet because I feel it's a slippery slope. I also strongly dislike how opinions and debates in EA are monetised, as this strengthens even more the neoliberal vibe EA already has, so my drive for refraining from this in EA is stronger than outside.
Edit: and I too have gotten dismissed by EAs for it in the past.
Can we have some people doing AI Safety podcast/news interviews as well as Yud?
I am concerned that he's gonna end up being the figurehead here. I am pretty sure people are already thinking about and working on this, but I'm posting to make sure it gets said.
We aren't a community that says "I guess he deserves it"; we say "who is the best person for the job?" Yudkowsky, while he is an expert, isn't a median voice. His estimates of P(doom) are on the far tail of EA experts here. So if I could pick one person I wouldn't pick him, and frankly I wouldn't pick just one person.
Some other voices I'd like to see on podcasts/interviews:
Again, I'm not saying no one has thought of this (80% that they have), but I'd like to be 97% sure, so I'm flagging it.
*I am personally fond of this person so am biased
The Scout Mindset deserved 1/10th of the marketing campaign of WWOTF. Galef is a great figurehead for rational thinking and it would have been worth it to try and make her a public figure.
I sense that it's good to publicly name serial harassers who have been kicked out of the community, even if the accuser doesn't want them to be. Other people's feeling matter too and I sense many people would like to know who they are.
I think there is a difference between different outcomes, but if you've been banned from EA events then you are almost certainly someone I don't want to invite to parties etc.
I notice I am pretty skeptical of much longtermist work and the idea that we can make progress on this stuff just by thinking about it.
I think future people matter, but I will be surprised if, after x-risk reduction work, we can find tens of billions of dollars' worth of work that isn't busywork and that shouldn't instead be spent on learning how to get, e.g., nations out of poverty.
I would appreciate being able to vote on forum articles with both agree/disagree and upvote/downvote.
There are lots of posts I think are false but interesting, or true but boring.
Several journalists (including those we were happy to have write pieces about WWOTF) have contacted me but I think if I talk to them, even carefully, my EA friends will be upset with me. And to be honest that upsets me.
We are in the middle of a mess of our own making. We deserve scrutiny. Ugh, I feel dirty and ashamed and frustrated.
To be clear, I think it should be your own decision to talk to journalists, but I do also just think that it's just better for us to tell our own story on the EA Forum and write comments, and not give a bunch of journalists the ability to greatly distort the things we tell them in a call, with a platform and microphone that gives us no opportunity to object or correct things.
I have been almost universally appalled at the degree to which journalists straightforwardly lie in interviews, take quotes massively out of context, or make up random stuff related to what you said, and I do think it's better that if you want to help the world understand what is going on, that you write up your own thoughts in your own context, instead of giving that job to someone else.
I suggest there is waaaay too much to be on top of in EA and no one knows who is checking what. So some stuff goes unchecked. If there were a narrower set of "core things we study", it seems more likely that those things would have been gone over by someone in detail, and hence there would be fewer errors in core facts.
Seems worth considering that
A) EA has a number of characteristic of a "High Demand Group" (cult). This is a red flag and you should wrestle with it yourself.
B) Many of the "Sort of"s are peer pressure. You don't have to do these things. And if you don't want to, don't!
In what sense is it "sort of" true that members need to get permission from leaders to date, change jobs, or marry?
I think that one's a reach, tbh.
(I also think the one about using guilt to control is a stretch.)
My call: EA gets 3.9 out of 14 possible cult points.
No
Yes (+1)
Partial (+0.8)
No
No
No
Partial (+0.5)
... (read more)
I don't think it makes sense to say that the group is "preoccupied with making money". I expect that there's been less focus on this in EA than in other groups, although not necessarily due to any virtue, but rather because of how lucky we have been in having access to funding.
I notice we are great at discussing stuff but not great at coming to conclusions.
It was pointed out to me that I probably vote a bit wrong on posts.
I generally just up and downvote how I feel, but occasionally if I think a post is very overrated or underrated I will strong upvote or downvote even though I feel less strong than that.
But this is, I think, the wrong behaviour and a defection: if we all did that, we'd all be manipulating posts toward where we think they ought to be, and we'd lose the information held in the median of where all our votes leave them.
Sorry.
How are we going to deal emotionally with the first big newspaper attack against EA?
EA is pretty powerful in terms of impact and funding.
It seems only a matter of time before there is a really nasty article written about the community or a key figure.
Last year the NYT wrote a hit piece on Scott Alexander and while it was cool that he defended himself, I think he and the rationalist community overreacted and looked bad.
I would like us to avoid this.
If someone writes a hit piece about the community, Givewell, Will MacAskill etc, how are we going to avoid a kneejerk reaction that makes everything worse?
I suggest if and when this happens:
- individuals largely don't respond publicly unless they are very confident they can do so in a way that leads to de-escalation
- articles exist to get clicks. It's worth someone (not necessarily me or you) responding to an article in the NYT, but if, say, a niche commentator goes after someone, fewer people will hear it if we let it go
- let the comms professionals deal with it. All EA orgs and big players have comms professionals. They can defend themselves
- if we must respond (we often needn't) we should adopt a stance of grace, curiosity and hu
Yeah, I think the community response to the NYT piece was counterproductive, and I've also been dismayed at how much people in the community feel the need to respond to smaller hit pieces, effectively signal boosting them, instead of just ignoring them. I generally think people shouldn't engage with public attacks unless they have training in comms (and even then, sometimes the best response is just ignoring).
Unbalanced karma is good actually. It means that the moderators have to do less. I like the takes of the top users more than the median user and I want them to have more, but not total, influence.
Appeals to fairness don't interest me - why should voting be fair?
I have more time for transparency.
I still don't really like the idea of CEA being democratically elected but I like it more than I once did.
Post I spent 4 hours writing on a topic I care deeply about: 30 karma
Post I spent 40 minutes writing on a topic that the community vibes with: 120 karma
I guess this is fine - it's just people being interested - but it can feel weird at times.
If you type "#" followed by the title of a post and press enter, it will link that post.
Example:
Examples of Successful Selective Disclosure in the Life Sciences
This is wild
edited
Give Directly has a President (Rory Stewart) paid $600k, and is hiring a Managing Director. I originally thought they had several other similar roles (because I looked on the website) but I talked to them and seemingly that is not the case. Below is the tweet that tipped me off, but I think it is just mistaken.
One could still take issue with the $600k (though I don't really).
https://twitter.com/carolinefiennes/status/1600067781226950656?s=20&t=wlF4gg_MsdIKX59Qqdvm1w
Seems in line with CEO pay for US nonprofits with >$100M in budget, at least when I spot check random charities near the end of this list.
I feel confused about the president/CEO distinction however.
I wish the forum had a better setting for "I wrote this post and maybe people will find it interesting, but I don't want it on the front page unless they do, because that feels pretentious".
89 people responded to my strategy poll so far.
Here are the areas of biggest uncertainty.
Seems we could try and understand these better.
Poll link: https://viewpoints.xyz/polls/ea-strategy-1
Analytics like: https://viewpoints.xyz/polls/ea-strategy-1/analytics
I think if I knew that I could trade "we all obey some slightly restrictive set of romance norms" for "EA becomes 50% women in the next 5 years" then that's a trade I would advise we take.
That's a big if. But seems trivially like the right thing to do - women do useful work and we should want more of them involved.
To say the unpopular reverse statement: if I knew that such a set of norms wouldn't improve the wellbeing of women in EA and of EA as a whole, on some average, then I wouldn't take the trade.
Seems worth acknowledging there are right answers here, if only we knew the outcomes of our decisions.
I am so impressed at the speed with which Sage builds forecasting tools.
Props @Adam Binks and co.
Fatebook: the fastest way to make and track predictions looks great.
I've been musing about some critiques of EA and one I like is "what's the biggest thing that we are missing?"
In general, I don't think we are missing things (lol) but here are my top picks:
- It seems possible that we reach out to sciency tech people because they are most similar to us. While this may genuinely be the cheapest way to get people now, there may be costs to the community in terms of diversity of thought (most sci/tech people are more similar to each other than the general population is)
- I'm glad to see more outreach to people in developing nations
- It seems obvious t
... (read more)
In defence of Will MacAskill and Nick Beckstead staying on the board of EVF
While I've publicly said that on priors they should be removed unless we hear arguments otherwise, I was kind of expecting someone to make those arguments. If no one will, I will.
MacAskill
MacAskill is very clever, personally kind, and a superlative networker and communicator. Imo he oversold SBF, but I guess I'd do much worse in his place. It seems to me that we should want people who have made mistakes and learned from them. Seems many EA orgs would be glad to have someone like... (read more)
I would like to see posts give you more karma than comments (which would hit me hard). Seems like a highly upvoted post is waaaay more valuable than 3 upvoted comments on that post, but it's pretty often that the latter give more karma than the former.
Feels like there should be some kind of community discussion and research in the wake of FTX, especially if no leadership is gonna do it. But I don't know how that discussion would have legitimacy. I'm okay at such things, but honestly tend to fuck them up somehow. Any ideas?
If I were king
I wrote a post on possible next steps but it got little engagement -- unclear if it was a bad post or people just needed a break from the topic. On mobile, so not linking it -- but it's my only post besides shortform.
The problem as I see it is that the bulk of proposals are significantly underdeveloped, risking both applause light support and failure to update from those with skeptical priors. They are far too thin to expect leaders already dealing with the biggest legal, reputational, and fiscal crisis in EA history to do the early development work.
Thus, I wouldn't credit a vote at this point as reflecting much more than a desire for a more detailed proposal. The problem is that it's not reasonable to expect people to write more fleshed-out proposals for free without reason to believe the powers-that-be will adopt them.
I suggested paying people to write up a set of proposals and then voting on those. But that requires both funding and a way to winnow the proposals and select authors. I suggested modified quadratic funding as a theoretical ideal, but a jury of pro-reform posters as a more practical alternative. I thought that problem was manageable, but it is a problem. In particular, at the proposal-development stage, I didn't want tactical voting by reform skeptics.
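(For reference, here is a rough sketch of how standard quadratic funding allocates matching funds; the modified version would presumably adjust this base formula, and the function name below is just illustrative.)

```python
# A rough sketch of standard quadratic funding matching (illustrative only;
# a "modified" version would adjust this formula).
from math import sqrt

def qf_match(contributions: list[float]) -> float:
    """Match for one proposal: (sum of square roots of contributions)^2 minus the raw total."""
    raw_total = sum(contributions)
    matched_total = sum(sqrt(c) for c in contributions) ** 2
    return matched_total - raw_total

# Broad support attracts a larger match than concentrated support of the same size:
print(qf_match([1.0] * 100))  # 100 donors giving $1 each -> $9,900 match
print(qf_match([100.0]))      # 1 donor giving $100       -> $0 match
```

The point of using it here would be that proposals backed by many distinct supporters get disproportionately more development funding than proposals backed by one enthusiastic person.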
Let's assume that the Time article is right about the amount of sexual harassment in EA. How big a problem is this relative to other problems? If we spend $10mn on EAGs (a guess), how much should we spend if we could halve sexual harassment in the community?
Dear reader,
You are an EA, if you want to be. Reading this forum is enough. Giving a little of your salary effectively is enough. Trying to get an impactful job is enough. If you are trying even with a fraction of your resources to make the world better and chatting with other EAs about it, you are one too.
I imagine that it has cost, and does cost, 80k to push for AI safety stuff, even when it was weird, and now it seems mainstream.
Like, I think an interesting metric is when people say something which shifts some kind of group vibe. And sure, catastrophic risk folks are into it, but many EAs aren't and would have liked a more holistic approach (I guess).
So it seems a notable tradeoff.
Sam Harris takes Giving What We Can pledge for himself and for his meditation company "Waking Up"
Harris references MacAskill and Ord as having been central to his thinking and talks about Effective Altruism and existential risk. He publicly pledges 10% of his own income and 10% of the profit from Waking Up. He will also create a series of lessons on his meditation and education app around altruism and effectiveness.
Harris has 1.4M Twitter followers and is a famed humanist and New Atheist. The Waking Up app has over 500k downloads on Android, so I guess over 1 million overall.
https://dynamic.wakingup.com/course/D8D148
I like letting personal thoughts be up or downvoted, so I've put them in the comments.
My guess is people who like Sam Harris are disproportionately likely to be potentially interested in EA.
This seems quite likely given EA Survey data where, amongst people who indicated they first heard of EA from a Podcast and indicated which podcast, Sam Harris' strongly dominated all other podcasts.
More speculatively, we might try to compare these numbers to people hearing about EA from other categories. For example, by any measure, the number of people in the EA Survey who first heard about EA from Sam Harris' podcast specifically is several times the number who heard about EA from Vox's Future Perfect. As a lower bound, 4x more people specifically mentioned Sam Harris in their comment than selected Future Perfect, but this is probably dramatically undercounting Harris, since not everyone who selected Podcast wrote a comment that could be identified with a specific podcast. Unfortunately, I don't know the relative audience size of Future Perfect posts vs Sam Harris' EA podcasts specifically, but that could be used to give a rough sense of how well the different audiences respond.
I'll sort of publicly flag that I sort of break the karma system. Like the way I like to post comments is little and often and this is just overpowered in getting karma.
eg I recently overtook Julia Wise, despite having been on the forum for years less than her.
I don't really know how to solve this - maybe someone should just 1 time nuke my karma? But yeah it's true.
Note that I don't do this deliberately - it's just how I like to post and I think it's honestly better to split up ideas into separate comments. But boy is it good at getting karma. And soooo m... (read more)
To modify a joke I quite liked:
I wouldn't worry too much about the karma system. If you're worried about having undue power in the discourse, one thing I've internalized is to use the strong upvote/downvote buttons very sparingly (e.g. I only strong-upvoted one post in 2022 and I think I never strong-downvoted any post, other than obvious spam).
It is frustrating that I cannot reply to comments from the notification menu. Seems like a natural thing to be able to do.
Some thoughts on: https://twitter.com/FreshMangoLassi/status/1628825657261146121?s=20
I agree that it’s worth saying something about sexual behaviour. Here are my broad thoughts:
Firstly, I’m sad about the current situation. Seems like too many women in EA have bad experiences. There is a discussion about what happens in other communities or tradeoffs. But first it’s really sad.
More th... (read more)
What is a big, open, factual, non-community question in EA? I have a cool discussion tool I want to try out.
Should we want OpenAI to turn off Bing for a bit? We should, right? Should we create memes to that effect?
Sorry, how do I tag users or posts? I've forgotten and can't find a shortcuts section on the forum
Should I tweet this? I'm very on the margin. Agree/disagree-vote (which doesn't change karma).
Does EA have a clearly denoted place for exit interviews? Like if someone who was previously very involved was leaving, is there a place they could say why?
EAs please post your job posting to twitter
Please post your jobs to Twitter and reply with @effective_jobs. It takes 5 minutes, and the jobs I've posted and then tweeted have got thousands of impressions.
Or just DM me on Twitter (@nathanpmyoung) and I'll do it. I think it's a really cheap way of getting EAs to look at your jobs. This applies to impactful roles in and outside EA.
Here is an example of some text:
-tweet 1
Founder's Pledge Growth Director
@FoundersPledge are looking for someone to lead their efforts in growing the amount that tech entrepreneurs give to effective charities when they IPO.
Salary: $135 - $150k
Location: San Francisco
https://founders-pledge.jobs.personio.de/job/378212
-tweet 2, in reply
@effective_jobs
-end
I suggest it should be automated but that's for a different post.
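(A rough sketch of what that automation might look like, assuming tweepy and Twitter API credentials; the helper name and structure below are just illustrative, not an existing tool.)

```python
# Sketch: post a job tweet, then a reply tagging @effective_jobs.
# Assumes tweepy and Twitter API credentials; fields are illustrative.
import tweepy

client = tweepy.Client(
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

def post_job(title: str, org: str, blurb: str, salary: str, location: str, url: str) -> None:
    # Tweet 1: the job itself
    text = f"{title}\n\n{org} {blurb}\n\nSalary: {salary}\nLocation: {location}\n{url}"
    first = client.create_tweet(text=text)
    # Tweet 2, in reply: tag @effective_jobs so it gets picked up
    client.create_tweet(text="@effective_jobs", in_reply_to_tweet_id=first.data["id"])

post_job(
    "Founder's Pledge Growth Director",
    "@FoundersPledge",
    "are looking for someone to lead their efforts in growing the amount that tech entrepreneurs give to effective charities when they IPO.",
    "$135 - $150k",
    "San Francisco",
    "https://founders-pledge.jobs.personio.de/job/378212",
)
```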
If I find this forum exhausting to post on some times I can only imagine how many people bounce off entirely.
I did a podcast where we talked about EA, would be great to hear your criticisms of it. https://pca.st/i0rovrat
Should I do more podcasts?
I think the EA forum wiki should allow longer and more informative articles. I think that it would get 5x traffic. So I've created a market to bet.
I sense that Conquest's law is true -> that organisations that are not specifically right wing move to the left.
I'm not concerned about moving to the left tbh but I am concerned with moving away from truth, so it feels like it would be good to constantly pull back towards saying true things.
Any time that you read a wiki page that is sparse or has mistakes, consider adding what you were trying to find. I reckon in a few months we could make the wiki really good to use.
I am frustrated and hurt when I take flack for criticism.
It seems to me that people think I'm just stirring shit by asking polls or criticising people in power.
Nuclear risk is in the news. I hope:
- if you are an expert on nuclear risk, you are shopping around for interviews and comment
- if you are an EA org that talks about nuclear risk, you are going to publish at least one article on how the current crisis relates to nuclear risk or find an article that you like and share it
- if you are an EA aligned journalist, you are looking to write an article on nuclear risk and concrete actions we can take to reduce it
I would quite like Will MacAskill back right about now. I think he was generally a great voice in the discourse.
Confidence 60%
Any EA leadership have my permission to put scandal on the back burner until we have a strategy on Bing, by the way. Feels like a big escalation to have an ML system reading its own past messages and running a search engine.
EA internal issues matter but only if we are alive.
Reasons I would disagree:
(1) Bing is not going to make us 'not alive' on a coming-year time scale. It's (in my view) a useful and large-scale manifestation of problems with LLMs that can certainly be used to push ideas and memes around safety etc, but it's not a direct global threat.
(2) The people best-placed to deal with EA 'scandal' issues are unlikely to perfectly overlap with the people best-placed to deal with the opportunities/challenges Bing poses.
(3) I think it's bad practice for a community to justify backburnering pressing community issues with an external issue, unless the case for the external issue is strong; it's a norm that can easily become self-serving.
I think the community health team should make decisions on the balance of harms rather than beyond reasonable doubt. If it seems likely someone did something bad they can be punished a bit until we don't think they'll do it again. But we have to actually take all the harms into account.
I strongly dislike the following sentence on effectivealtruism.org:
"Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on."
It reads to me as arrogant, and epitomises the worst caricatures my friends make of EAs. Read it in a snarky voice (such as one might use if they struggled with the movement and were looking to do research): "Rather than just doing what feels right..."
I suggest it gets changed to one of the following:
I am genuinely sure whoever wrote it meant well, so thank you for your hard work.
Are the two bullet points two alternative suggestions? If so, I prefer the first one.
Factional infighting
[epistemic status - low, probably some element are wrong]
tl;dr
- communities have a range of dispute resolution mechanisms, ranging from voting to public conflict to some kind of civil war
- some of these are much better than others
- EA has disputes and resources and it seems likely that there will be a high profile conflict at some point
- What mechanisms could we put in place to handle that conflict constructively and in a positive sum way?
When a community grows as powerful as EA is, there can be disagreements about resource allocation.  ... (read more)
The amount of content on the forum is pretty overwhelming at the moment and I wonder if there is a better way to sort it.
I'm relatively pro casual sex as a person, but I will say that EA isn't about being a sex-positive community - it's about effectively doing good. And if one gets in the way of the other, I know what I'm choosing (doing good).
I think there is a positive-sum compromise possible, but it seems worth acknowledging how I will trade off if it comes to it.
I dislike the framing of "considerable" and "high engagement" on the EA survey.
This copied from the survey:
... (read more)
A friend asked about effective places to give. He wanted to donate through his payroll in the UK. He was enthusiastic about it, but that process was not easy.
Feels like making donations easy should be a core concern of both GiveWell and EA Funds and my experience made me a little embarrassed to be honest.
EA infrastructure idea: Best Public Forecaster Award