I thought the same thing when I read it. I got similar vibes from some of the other merch (with “do good better” giving the strongest).
I think that “moral consideration for all” reads similarly. Especially since most people reading it will read it out of context.
Also “maximise impact” a little bit.
I’ve been considering writing a post about my experience of receiving a grant, and the downsides I didn’t anticipate beforehand. It would probably look similar to this comment but with more info.
General encouragement for having done something risky (a wacky title) and then deciding against it and changing it. The first sentence of the changed post made me laugh.
“Better” could mean lots of things here. Including: more entertaining; higher quality discussion; more engagement; it’s surpassed a ‘critical mass’ of people to sustain a regular group of posters and a community; better memes; more intellectually diverse; higher frequency of high quality takes; the best takes are higher quality; more welcoming and accessible conversations etc.
The aims of EA Twitter are different to the forum. But I think the most important metrics are the “quantity of discussion” ones.
My impression is that:
If I was going to spend longer on this post, I'd make it more empirical and talk through evidence for/against the effectiveness of ACT.
As it is, I didn't want to spend significantly longer writing it, so I've gone for a summary of the core ideas -- so that readers can assess the vibe and see if it's something that sounds interesting to them.
This might have been the wrong call though.
I did a shallow review of the evidence for ACT last year:

> **Anxiety defusion and acceptance (acceptance and commitment therapy)**
> Mind Ease’s anxiety defusion exercise is based on acceptance and commitment therapy (ACT), which is backed by the following evidence:
> Traditional ACT with a therapist: A 2017 review of RCTs of ACT to treat anxiety and depression shows that ACT improves depression relative to no treatment up to 6-months follow-up (ds = 0.32 to 1.18). Two studies compared ACT with minimally active comparison conditions (expressive writing and minima...
Sorry that your experience of this has been rough.
Some quick thoughts I had whilst reading:
Hi Evie,
I appreciate that you decided to post this.
I think the first point is subtly wrong in an important way.
EAGs are not only useful insofar as they let community members do better work in the real world. EAGs are useful insofar as they result in a better world coming to be.
One way in which EAGs might make the world better is by fostering a sense of community, validation, and inclusion among those who have committed themselves to EA, thus motivating people to so commit themselves and to maintain such commitments. This function doesn't bear on "letting" people do better work per se.
Insofar ...
There was a vague tone of "the goal is to get accepted to EAG" instead of "the goal is to make the world better," which I felt a bit uneasy about when reading the post. EAGs are only useful insofar as they let community members do better work in the real world.
Hm, I understand why you say that, and you might be right (e.g., I see some signs of the OP that are compatible with this interpretation). Still, I want to point out that there's a risk of being a bit uncharitable. It seems worth saying that anyone who cares a lot about having a lot of impact...
nearly everyone I know with EA funding would be willing to criticise CEA if they had a good reason to.
I have received EA funding in multiple capacities, and feel quite constrained in my ability to criticise CEA publicly.
I could be wrong, but I have a pretty strong sense that nearly everyone I know with EA funding would be willing to criticise CEA if they had a good reason to. I'd be surprised if {being EA funded} decreased willingness to criticise EA orgs. I even expect the opposite to be true.
I disagree. I know several people who fit this description (5 off the top of my head) who would find this very hard. I think it very much depends on factors like how well networked you are, where you live, how much funding you've received and for how long, and whether you think you could work for an org in the future.
...
- There was a vague tone of "the goal is to get accepted to EAG" instead of "the goal is to make the world better," which I felt a bit uneasy about when reading the post. EAGs are only useful insofar as they let community members do better work in the real world.
- Because of this, I don't feel strongly about the EAG team providing feedback to people on why they were rejected. The EAG team's goal isn't to advise on how applicants can fill up their "EA resume." It's to facilitate impactful work in the world.
- I remembered a comment that I really lik
(Currently reading the post and noticing that many of the links go to the top of the same google doc. I assume this isn’t supposed to be the case. This could be because I’m on mobile, but also could be an error with the links.)
(Also congrats on your first forum post! Go you :) )
This relates to a caveat in my recent post:
Part of me wants to fles...
A distinction I've found useful is "object-level" vs "social reality". They are both labels for types of conversations/ideas.
Object-level discussions are about ideas and actions (e.g. AI timelines, the mechanics of launching a successful startup). Object-level ideas are technical, empirical, and often testable. Object-level refers to what ideas are important or make sense. It is focused on truth-seeking and presenting arguments clearly.
Social reality discussions are about people and organisations (e.g. Will MacAskill, Open Philanthropy)...
Thanks for your comment!
this could've been mostly avoided by a consideration of Chesterton's Fence
Meh, I don't think so. This taken to its extreme looks like "be normie."
I'm now worried about e.g. high school summer camp programs that prioritize the development of unbalanced agency.
I'm pretty confident that (for ESPR at least) this was a one-off fluke! I'm not worried about this happening again (see Gavin's comment above).
Hey, thanks for writing.
I also used to feel extremely confused about this (e.g. I thought that in-person university groups were "woefully inefficient" compared to social media outreach). I did not understand why there weren't EA youtubers or social media marketing campaigns. Much of my own social conscience had been shaped by online creators (e.g. veganism and social justice ideas), and it felt like a tragedy that EA was leaving so much lying on the table.
I now am less optimistic about short-form social media outreach. Mostly because:
If I ask three people for their time, they don't know whether they're helping me get from 0 to 1 person helping with this, from 1 to 2, or from 9 to 10.
Nice, yup, agreed that the information asymmetry is a big part of the problem in this case. I wonder if it could be a good norm to give someone this information when making requests. Like, explicitly say in a message "I've asked one other person and think that they will be about as well placed as you to help with this" etc
I do also think that there's a separate cost to making requests, in that it act...
Hmm, interesting. Thanks for clarifying, that does work better in this context (although it's confusing if you don't have the info above)
Hey, thanks for asking.
On the first point:
Thanks for your comment Aaron! :)
I don't think I ever got the sense (even intuitively or in a low-fidelity way) that "agency" was identical to / implied / strongly overlapped with "a willingness to be socially domineering or extractive of others' time and energy"
I wrote about this because it was the direction in which I noticed myself taking "be agentic" too far. It's also based on what I've observed in the community and conversations I've had over the past few months. But I would expect people to "take the message too far" in different ways (obvs whether s...
Wait, I'm not actually sure I want to change the inside view thing, I'm confused. I was kinda just describing a meme-y version of hustling -- therefore the low-resolution version of "has inside views" is fine.
has strong inside views which overrule the outside view
I'm not really sure what you mean by this.
Thanks :)
Your learned helplessness is interesting, because to me the core of agency is indeed nonsocial
Yeah, the learned helplessness is a weird one. I felt kinda sad about it because I used to really pride myself on being self-sufficient. I agree that the social part of agency should only be a small part -- I think I leaned too far into it.
strong inside views which overrule the outside view
Thanks for the inside view correction. Changing that now, and will add that Owen originally coined social agency.
ESPR 2021 went a little too hard on agency
Fwiw I did ge...
Ah nice, I was missing that context. Yup, the angle of confidence building seems good for this audience.
Congrats for organising EAGx -- that's huge! :)
Sorry for being a downer, but I want to push back on the subtext that it's (always) good for people to be willing to "lend a helping hand, whether it's sending a message, reviewing a draft or hopping onto a call?"
My rough thoughts:
Thanks for pointing this out, it is an important consideration, and this might not be a good exercise depending on the audience present.
For EAGxSG, I was definitely more afraid that the audience would err on the side of not asking for help. I also checked with some experienced EAs at the conference on whether or not this was something I should mention. Context: The attendees of EAGxSG were mainly from areas without large established EA communities (Africa, Asia, Middle East), and many of them would likely not be able to go to other conferences due t...
Thanks! Also this is a small point but I find it easier to skim articles when they have formatted headings (so there's an overview of the article on the left hand side). You can do this using the forum formatting features.
“C1: A person in a poor country whose life is saved experiences less welfare than a person in a rich country whose life is saved”
(Asking a dumb question here, but) is this true? I.e., does an increase in material wealth actually increase psychological wellbeing?
I have an intuition that psychological well-being is mostly affected by how wealthy you are compared to your peer group.
Maybe you’re talking about individuals in poor countries who are below the poverty line (in which case, I agree that they would experience much less psychological wellbeing).
But I would be surprised if individuals in rich countries are actually happier than individuals in poor countries (who have all their basic needs met).
Thanks for writing this post! It resonated, and I feel like I've fallen into a similar mindset before.
It reminds me of a point made here: "like, will we wish in 5 years that EAs had more outside professional experience to bring domain knowledge and legitimacy to EA projects rather than a resume full of EA things?"
When reading the post, this felt especially true and unfortunate: "They get the reputation as someone who can “get shit done” but in practice, they’re usually solving ops bottlenecks at the cost of building harder-to-acquire skills."
I like this, and think that networking as a teen is super useful and high ROI. (Maybe I'm biased because networking opened up opportunities for me.)
I really like this post as a starting guide! Thanks for writing
I feel concerned about versions of this where there is implicit social pressure to:
Like, if it's implicitly socially costly to opt out, it's pretty hard for an individual to opt out.
I also think that it is hard to avoid these pressure-y dynamics in practice. Especially when people really want to be included in the social group.
I can imagine a scenario where there is a subtext of:
"You can opt out. Of course. But, as we know, the real hard-core and truth-seeking people stay. An...
Given that I was aiming to spend only a few mins on the census, I don't expect that I would have scrolled through the post to find the description of the cause area.
But some people might, so could be useful.
Pretty confused by what some of the cause areas are (e.g. epistemic institutions). I expect my responses were less helpful/accurate because I didn't know what some of them meant.
Thank you for the comments!
I agree with some of what you wrote. I don't want the subtext of the post to be "you should amass social capital so that senior people will do you favours."
Some thoughts:
I get where you're coming from (although I think domineeringness is less universally rewarded than intelligence across different parts of society). But given that we don't think the ideal society consists of people being very domineering, I worry that the indirect harms of pushing this in EA culture may be significant. I think it's harder to know what these are than the benefits, but I'm worried that it's a kind of naive consequentialist stance to privilege the things we have cleaner explicit arguments for.
At the very least I think there's something like a...
My guess is that it’s just very context dependent — I’m not sure how generalisable these sorts of numbers are.
It also seems like the size of favours would vary a ton and make it hard to give a helpful number.
I'm sure it's context dependent and depends on size of favours. But I'm not sure it depends that much -- and I'm worried that if we don't discuss numbers it's easy for people who are naturally disinclined to ask to think "oh I'm probably doing this enough already" (or people who are naturally inclined to do this a lot already to think "oh yeah I totally need to do that more").
Maybe you could give a context where you think my numbers are badly off?
[On the title -- you gotta have fun with these things haha]
Thanks Gavin!
Yes, the laws of equal and opposite advice defo apply here.
I also wonder whether this sort of thing becomes zero-sum within a small enough environment (e.g. if everyone starts lowering their bar for asking for help, people will raise their bar for saying yes, because they will be inundated with requests). Could lead to competitive dynamics (discussed in the comments of this post), which seems unfortunate.
I really like the point of spending years 'becoming yourself'. Like, I ...
From memory:
As an occasional antidote to forced-march life: consider yourself as a homeostatic organism with a particular trajectory. Like a plant in a pot.
What does a plant need? Water, light, space, soil, nitrogen, pest defence, pollinators. What are the potted human equivalents? What would an environment which gave you this without striving look like? What do you need to become yourself?
(You can reshape a plant, like bonsai, but really not too much or you'll kill it or stunt it.)
To avoid the "opposite advice" thing, maybe we can just talk about in absolute terms what are good amounts to ask for help?
My guess is that people should ask their friends/colleagues/acquaintances for help with things a few times a week, and ask senior people they don't know for help with things a few times a year. This is based on a sense of "imagining everyone was doing this" and wondering where I want to turn the dial to. I'm interested if others have different takes about the ideal level.
I think if people are asking noticeably less than that they shoul...
This comment is great, and resonates with a lot of the stuff I found hard when I was first immersed in the community at an EA hub.
I really really loved section 2 of this post!! It articulates a mindset shift that I think is important and valuable, and I've not seen it written out like that before.
Maybe imperative sentences will always tend to read as preachy. I’m not sure.