All of Evie's Comments + Replies

Maybe imperative sentences will always tend to read as preachy. I’m not sure.

I thought the same thing when I read it. I got similar vibes from some of the other merch (but “do good better” was the strongest).

I think that “moral consideration for all” reads similarly. Especially since most people reading it will read it out of context.

Also “maximise impact” a little bit.

1
MHR
1y
FWIW, I get a bit of a holier-than-thou vibe from the "do good better" shirt but don't see a problem with the "moral consideration for all" stickers.
1
Evie
1y
Maybe imperative sentences will always tend to read as preachy. I’m not sure.
Evie
2y
16
1
0

I’ve been considering writing a post about my experience of receiving a grant, and the downsides I didn’t anticipate beforehand. It would probably look similar to this comment but with more info.

4
Joseph Lemien
2y
I imagine that such a post could be quite helpful for other young people who are considering applying for funding, and it could also be helpful for other people to understand more of this "ecosystem." I, for one, would be interested to read your story.
Evie
2y
10
5
0

General encouragement for having done something risky (a wacky title) and then deciding against it and changing it. The first sentence of the changed post made me laugh.

6
Aaron Bergman
2y
Haha thanks! But be sure to only read this comment if it's a good use of your time
Evie
2y
38
4
1

(Removed this comment. Don't know how to delete it.)

7
maxfieldwallace
2y
Thanks for sharing your experience. I'm sure I would have also felt shame and guilt if I were in your situation, though obviously this is not what we want to happen! My general feeling about situations like this is that there are some grants that are better off not being shared publicly, if the context allows for it (this depends on many complex social factors). Wealthy people spend money on all kinds of outlandish things all over the world yet receive comparatively little opprobrium, simply because this spending is rarely public. It's unfair for you to be exposed to the vitriol from regular people expressing their frustration with inequality.

I'm reluctant to say too much about your particular circumstance (given I don't have context, and this is quite a personal thing), but I think if it were me, I might look for ways to tactfully omit discussion of the grant when first getting to know non-EAs socially. Not because it *is* shameful but just because it may unconsciously make some people uncomfortable. If it does come up, I think there is a way to "check your privilege" while also expressing confidence that you did nothing wrong.

I've found in my experience that, ironically, if I express contrition about something, people are more likely to actually think I did something shameful. Whereas if I sound confident, they tend to have a positive impression of me. These aren't necessarily bad people, that's just how humanity is. While socializing with EAs is wonderful, I agree that it is better to have a diverse social circle including non-EAs too!
1
Sharmake
2y
This could be titled as "The curse of non-consequentialist ethics plus social media means that there is no reasonable way to prioritize what matters, and the news contributes to that by essentially equalizing all crises under similar names, especially in the headline."
Evie
2y
10
4
0

“Better” could mean lots of things here. Including: more entertaining; higher quality discussion; more engagement; it’s surpassed a ‘critical mass’ of people to sustain a regular group of posters and a community; better memes; more intellectually diverse; higher frequency of high quality takes; the best takes are higher quality; more welcoming and accessible conversations etc.

The aims of EA Twitter are different to those of the Forum. But I think the most important metrics are the “quantity of discussion” ones.

My impression is that:

  • There are more “high quality ta
... (read more)

Thanks for sharing! That's useful to know.

I'll look into adding to the post later today.

If I was going to spend longer on this post, I'd make it more empirical and talk through evidence for/against the effectiveness of ACT. 

As it is, I didn't want to spend significantly longer writing it, so I've gone for a summary of the core ideas -- so that readers can assess the vibe and see if it's something that sounds interesting to them.

This might have been the wrong call though.

I did a shallow review of the evidence for ACT last year: "Anxiety defusion and acceptance (acceptance and commitment therapy) Mind Ease’s anxiety defusion exercise is based on acceptance and commitment therapy (ACT), which is backed by the following evidence: Traditional ACT with a therapist: A 2017 review of RCTs of ACT to treat anxiety and depression shows that ACT improves depression relative to no treatment up to 6-months follow-up. (ds = 0.32 to 1.18). Two studies compared ACT with minimally active comparison conditions (expressive writing and minima... (read more)

3
alexherwix
2y
While I have not done a deep dive into the literature and checked the claims in depth, afaik ACT counts as one of the more evidence-based psychotherapies, with several hundred studies including RCTs demonstrating good effects.

There is also a whole scientific paradigm, “contextual behavioral science”, based on “functional contextualism”, which grounds the development of ACT. This is one of the clearest theoretical foundations for a scientific field I have come across (i.e., it’s a coherent account grounded in Pragmatism) and should be refreshing to have a look at for people interested in philosophy of science as well as behavioral science in general. I am pretty bullish on ACT and would recommend anyone interested in mental health to have a good look for aspects that might work for them.

What I would maybe add to the post is a short description of the ACT Matrix, which is a thinking tool that can be useful for organizing thoughts about problematic situations. While it certainly depends on the person, some friends I have shown it to found it easy to grasp and very helpful for navigating difficult situations. It’s not a panacea but may be a good starting point for people who appreciate a hands-on learning approach.

I also recommend the tools section in A Liberated Mind. Should be pretty relatable for people who have done or are generally interested in CFAR workshops / rationality techniques. Thanks for writing the post!
Evie
2y
44
18
0

I also wanna give general encouragement for sharing a difficult rejection story.

Evie
2y
61
26
30

Sorry that your experience of this has been rough. 

Some quick thoughts I had whilst reading:

  • There was a vague tone of "the goal is to get accepted to EAG" instead of "the goal is to make the world better," which I felt a bit uneasy about when reading the post. EAGs are only useful in so far as they let community members do better work in the real world.
    • Because of this, I don't feel strongly about the EAG team providing feedback to people on why they were rejected. The EAG team's goal isn't to advise on how applicants can fill up their "EA resum
... (read more)

Hi Evie,

I appreciate that you decided to post this. 

  • Tone - I did worry that the tone might read like that. To me, getting into EAG was only instrumental for my greater goal of making the world a better place. I do have a tendency to focus a lot of energy on perceived barriers to efficacy, so it might have come off like getting into EAG was my final objective. Please feel free to point out various parts of the post that seem to suggest otherwise and I can update them.
  • Making the world a better place - This is a really difficult thing to meas
... (read more)
2
Arepo
2y
I'm wary of this claim. Obviously in some top level sense it's true, but it seems reminiscent of the paradox of hedonism, in that I can easily believe that if you consciously optimise events for abstract good-maximisation, you end up maximising good less than if you optimise them for the health of a community of do-gooders. (I'm not saying this is a case for or against admitting the OP - it's just my reaction to your reaction)
9
Cornelis Dirk Haupt
2y
EDIT: Lukas Gloor does a much better job than me at getting across everything I wanted to in this comment here

From my reading her goals are not simply to get into EAG. It seems obvious to me that her goal to get into EAG is instrumental to the end of making the world a better place. The crux is not "Constance just wants to get into EAG." The crux, I think, is that Constance believes she can help make the world a better place much more through connecting with people at EAG. The CEA does not appear to believe this to be the case. The crux should be the focus. Focusing on how badly she wants to get into EAG is a distraction.

For many EAs you cannot have a well-run conference that makes the world a better place without it also being a place that makes many EAs very happy. I'd think the two goals are synonymous for a great many EAs.

In their comment Eli says: Let's also remember that EAs that get rejected from EAG who believe their rejection resulted in the world being a worse place overall will also be sad - probably more so, because they get both the FOMO and also a deeper moral sting. In fact, they might be so sad it motivates them to write an EA Forum post about it in the hopes of making sure that the CEA didn't make a mistake.

I like Eli's comment. It captures something important. But I also don't like it, because it can provide a false sense of clarity - separating goals that aren't actually always that separate - and this false clarity can provide a motivated-reasoning basis that can be used to more easily believe the EAG admission process didn't make a mistake and make the world a worse place. Why? Because it makes it easier to dismiss an EA who is very sad about being rejected from EAG as just someone who "wants to get into EAG."

I think the first point is subtly wrong in an important way. 

EAGs are not only useful in so far as they let community members do better work in the real world. EAGs are useful insofar as they result in a better world coming to be.

One way in which EAGs might make the world better is by fostering a sense of community, validation, and inclusion among those who have committed themselves to EA, thus motivating people to so commit themselves and to maintain such commitments. This function doesn't bear on "letting" people do better work per se.

Insofar ... (read more)

There was a vague tone of "the goal is to get accepted to EAG" instead of "the goal is to make the world better," which I felt a bit uneasy about when reading the post. EAGs are only useful in so far as they let community members do better work in the real world.

Hm, I understand why you say that, and you might be right (e.g., I see some signs of the OP that are compatible with this interpretation). Still, I want to point out that there's a risk of being a bit uncharitable. It seems worth saying that anyone who cares a lot about having a lot of impact... (read more)

Arepo
2y
54
15
1

nearly everyone I know with EA funding would be willing to criticise CEA if they had a good reason to.

I have received EA funding in multiple capacities, and feel quite constrained in my ability to criticise CEA publicly.

I could be wrong, but I have a pretty strong sense that nearly everyone I know with EA funding would be willing to criticise CEA if they had a good reason to. I'd be surprised if {being EA funded} decreased willingness to criticise EA orgs. I even expect the opposite to be true.

I disagree, I know several people who fit this description (5 off the top of my head) who would find this very hard. I think it very much depends on factors like how well networked you are, where you live, how much funding you've received and for how long, and whether you think you could work for an org in the future.

  • There was a vague tone of "the goal is to get accepted to EAG" instead of "the goal is to make the world better," which I felt a bit uneasy about when reading the post. EAGs are only useful in so far as they let community members do better work in the real world.
    • Because of this, I don't feel strongly about the EAG team providing feedback to people on why they were rejected. The EAG team's goal isn't to advise on how applicants can fill up their "EA resume." It's to facilitate impactful work in the world.
  • I remembered a comment that I really lik
... (read more)

Evie
2y
17
1
0

(Currently reading the post and noticing that many of the links go to the top of the same google doc. I assume this isn’t supposed to be the case. This could be because I’m on mobile, but also could be an error with the links.)

(Also congrats on your first forum post! Go you :) )

-1
Constance Li
2y
Thank you! And yeah I noticed one link was broken and fixed that. Otherwise I think I agree with Rebecca that it is probably just a mobile issue unfortunately.
1
Rebecca
2y
I had the same issue. Also guessing that it’s a mobile issue
4
Luke Freeman
2y
Thanks for letting me know 😀

This relates to a caveat in my recent post:

  • I’m concerned about too much EA meta conversation — about worlds where most of EA dialogue is talking about EAs talking about EAs (lots of social reality and not enough object-level). 
  • These sorts of convos are often very far removed from {concrete things that help the world}, and I worry about them taking away attention from more important ones. 
  • I think it’s probably much better (for the world) for conversations to stay focused on the real world, object-level claims and arguments.

Part of me wants to fles... (read more)

A distinction I've found useful is "object-level" vs "social reality". They are both adjectives that describe types of conversation/ ideas.

Object-level discussions are about ideas and actions (e.g. AI timelines, the mechanics of launching a successful startup). Object-level ideas are technical, empirical, and often testable. Object-level refers to what ideas are important or make sense. It is focused on truth-seeking and presenting arguments clearly. 

Social reality discussions are about people and organisations (e.g. Will MacAskill, Open Philanthropy)... (read more)

2
Evie
2y
This relates to a caveat in my recent post:

  • I’m concerned about too much EA meta conversation — about worlds where most of EA dialogue is talking about EAs talking about EAs (lots of social reality and not enough object-level).
  • These sorts of convos are often very far removed from {concrete things that help the world}, and I worry about them taking away attention from more important ones.
  • I think it’s probably much better (for the world) for conversations to stay focused on the real world, object-level claims and arguments.

Part of me wants to flesh this thought out properly soon. But even this conversation is meta! And I'm trying to encourage/focus more on object-level ideas. So do I write it? I'm not sure.

Thanks for your comment!

this could've been mostly avoided by a consideration of Chesterton's Fence

Meh, I don't think so. This taken to its extreme looks like "be normie."

I'm now worried about e.g. high school summer camp programs that prioritize the development of unbalanced agency.

I'm pretty confident that (for ESPR at least) this was a one-off fluke! I'm not worried about this happening again (see Gavin's comment above).

Evie
2y
11
3
0

Hey, thanks for writing. 

I also used to feel extremely confused about this (e.g. I thought that in-person university groups were "woefully inefficient" compared to social media outreach). I did not understand why there weren't EA YouTubers or social media marketing campaigns. Much of my own social conscience had been shaped by online creators (e.g. veganism and social justice ideas), and it felt like a tragedy that EA was leaving so much lying on the table.

I now am less optimistic about short-form social media outreach. Mostly because:

  • It seems re
... (read more)
1
SereneDesiree
2y
The part about being associated with specific influencers could be a major downside. If someone gains a negative reputation on the internet, will the movement as a whole suffer? Quite possibly.

I'm unsure about whether EA being a low-resolution household name would be net good or bad. I wonder how often people don't give to charity because it's too much effort to find good organizations. If this is a legitimate problem, having EA in the back of their mind might make it easier to donate without thinking about it. Also, it may simply introduce people to the concept that we can measure the usefulness of charities, and we have a really good idea of which ones are the best.

Selecting for nerdiness and curiosity is great, but I believe EA becoming a household name will attract more nerdy, curious people. I believe there are lots of people who would be involved in EA, but simply don't know it exists. I don't think we should expect people to find EA by themselves; part of our job should be to pique their interest.

I follow a lot of internet debaters, and I'm just not seeing that sort of stuff in the EA space. Why? YouTube is a great place for people to hash out ideas, have conversations, and for the audience to get involved. Blogs aren't as accessible as long-form video content.

As for short-form content, it has a lot of downsides. The upside, however, is that we can reach a lot of people with a single message, which I believe could have large-scale political influence (something I believe EA lacks).

If I ask three people for their time, they don't know whether they're helping me get from 0 to 1 person helping with this, 1 to 2 or 9 to 10 for all they know.

Nice, yup, agreed that the information asymmetry is a big part of the problem in this case. I wonder if it could be a good norm to give someone this information when making requests. Like, explicitly say in a message "I've asked one other person and think that they will be about as well placed as you to help with this" etc

I do also think that there's a separate cost to making requests, in that it act... (read more)

1
ChanaMessinger
2y
1. I got a request just last night and was told that the person was asking three people, and while this isn't perfect for them, I think it was a great thing from my perspective to know.
2. I don't think it's massively relevant for me right now except vaguely paying attention to my mental health and well being, but I think it's super relevant for new-to-EA and/or young people deciding very quickly how invested to be.

Hmm, interesting. Thanks for clarifying, that does work better in this context (although it's confusing if you don't have the info above)

Yup, another commenter is correct in that I am assuming that the goals are altruistic.

Hey, thanks for asking.

On the first point:

  • Throughout both of my posts, I've been using a nicher definition than the one given by the Stanford philosophy dictionary. In my last post, I defined it as "the ability to get what you want across different contexts – the general skill of coming up with ambitious goals [1] and actually achieving them, whatever they are."
  • But, as another commenter said, the term (for me at least) is now infused with a lot of community context and implicit assumptions. 
  • I took some of this as assumed knowledge when writi
... (read more)

Thanks, that's useful to know! :)

Thanks for your comment Aaron! :)

I don't think I ever got the sense - even intuitively or in a low-fidelity way - that "agency" was identical to/implied/strongly overlapped with "a willingness to be socially domineering or extractive of others' time and energy"

I wrote about this because it was the direction in which I noticed myself taking "be agentic" too far. It's also based on what I've observed in the community and conversations I've had over the past few months. But I would expect people to "take the message too far" in different ways (obvs whether s... (read more)

Wait, I'm not actually sure I want to change the inside view thing, I'm confused. I was kinda just describing a meme-y version of hustling -- therefore the low-resolution version of "has inside views" is fine. 

has strong inside views which overrule the outside view

I'm not really sure what you mean by this.

3
Gavin
2y
1. "Having inside views": just having your own opinion, whether or not you shout about it and whether or not you think that it's better than the outside view.
2. "Having strong inside views...": asserting your opinion when others disagree with it, including against the majority of people, majority of experts, etc.

(1) doesn't seem that agenty to me, it's just a natural effect of thinking for yourself. (2) is very agenty and high-status (and can be very useful to the group if it brings in decorrelated info), but needs to be earned.

Thanks :)

Your learned helplessness is interesting, because to me the core of agency is indeed nonsocial

Yeah, the learned helplessness is a weird one. I felt kinda sad about it because I used to really pride myself on being self sufficient. I agree that the social part of agency should only be a small part -- I think I leaned too far into it.

strong inside views which overrule the outside view

Thanks for the inside view correction. Changing that now, and will add that Owen originally coined social agency.

ESPR 2021 went a little too hard on agency

Fwiw I did ge... (read more)

1
Evie
2y
Wait, I'm not actually sure I want to change the inside view thing, I'm confused. I was kinda just describing a meme-y version of hustling -- therefore the low-resolution version of "has inside views" is fine.

I'm not really sure what you mean by this.

Ah nice, I was missing that context. Yup, the angle of confidence building seems good for this audience.

Evie
2y
11
6
2

Congrats for organising EAGx -- that's huge! :)

Sorry for being a downer, but I want to push back on the subtext that it's (always) good for people to be willing to "lend a helping hand, whether it's sending a message, reviewing a draft or hopping onto a call?"

My rough thoughts:

  • Some people say yes to too many things and don't value their time highly enough. 
  • Sometimes, it's the right call for someone to say no to helping others in their immediate environment.
  • It's often hard to say no, even when it's the right call.
  • I'm worried about a culture where {sayi
... (read more)
Dion
2y
17
8
0

Thanks for pointing this out, it is an important consideration, and this might not be a good exercise depending on the audience present.

For EAGxSG, we were definitely more afraid that the audience would err on the side of not asking for help. I also checked with some experienced EAs at the conference on whether or not this was something I should mention. Context: The attendees of EAGxSG were mainly from areas without large established EA communities (Africa, Asia, Middle East), and many of them would likely not be able to go to other conferences due t... (read more)

3
Kirsten
2y
Yes, I would be hesitant about putting my hand up (I'd help with some things, but not others, what exactly is being asked?) but I'd be really embarrassed if everyone else was putting their hands up and I didn't.

Thanks! Also this is a small point but I find it easier to skim articles when they have formatted headings (so there's an overview of the article on the left hand side). You can do this using the forum formatting features.

1
Fai
2y
Thank you for the recommendation! I spent a few minutes looking for the function but I can't find it. I wonder if you can teach me?
6
Fai
2y
Argh! I shouldn't assume people would understand insider terminology. It means "plant-based/cultivated meat". I changed the title.

“C1: A person in a poor country whose life is saved experiences less welfare than a person in a rich country whose life is saved”

(Asking a dumb question here, but) is this true? I.e., does an increase in material wealth actually increase psychological wellbeing?

I have an intuition that psychological well-being is mostly affected by how wealthy you are compared to your peer group.

Maybe you’re talking about individuals in poor countries who are below the poverty line (in which case, I agree that they would experience much less psychological wellbeing).

But I would be surprised if individuals in rich countries are actually happier than individuals in poor countries (who have all their basic needs met).

4
freedomandutility
2y
So my understanding of the economics literature on income and subjective well-being is that currently we think that:

  • Relative income has a large effect on subjective well-being
  • Absolute income has a smaller effect on subjective well-being, but the effect is still there

Relevant abstract: https://scholar.google.co.uk/scholar?q=income+and+subjective+well-being&hl=en&as_sdt=0&as_vis=1&oi=scholart#d=gs_qabs&t=1662383007403&u=%23p%3DXTFeCIDXcGcJ
Evie
2y
22
0
0

Thanks for writing this post! It resonated and I feel like I've fallen into a similar mindset before.

It reminds me of a point made here: "like, will we wish in 5 years that EAs had more outside professional experience to bring domain knowledge and legitimacy to EA projects rather than a resume full of EA things?"

 

When reading the post, this felt especially true and unfortunate: "They get the reputation as someone who can “get shit done” but in practice, they’re usually solving ops bottlenecks at the cost of building harder-to-acquire skills."

I like this, and think that networking as a teen is super useful and high ROI. (Maybe I'm biased because networking opened up opportunities for me.)

I really like this post as a starting guide! Thanks for writing

Evie
2y
31
0
0

I feel concerned about versions of this where there is implicit social pressure to:

  • stay;
  • seem fine with the critiques given;
  • participate in the first place.

Like, if its implicitly socially costly to opt out, it's pretty hard for an individual to opt out.

I also think that it is hard to avoid these pressure-y dynamics in practice. Especially when people really want to be included in the social group.

 

I can imagine a scenario where there is a subtext of: 

"You can opt out. Of course. But, as we know, the real hard-core and truth-seeking people stay. An... (read more)

6
Amy Labenz
2y
Makes sense, thanks for flagging.

I also think normally, people tend to have strong social rules in place to "be nice". When someone shares a goal they have, or if someone has something that seems to be holding them back, there is pressure not to say the ways we expect them/their efforts to fail. For example, I think one of the most common pitfalls with new managers is ruinous empathy.

A Doom Circle is meant to be a special, voluntary space where these rules are suspended. I agree there are risks and some people probably shouldn't participate. I also think it is useful to have facilitators who can share the risks mentioned in the "Should you try this" section as part of the setup.

I'll be testing a new variant of this at an event soon so might have more suggestions in the next month or so.

Given that I was aiming to spend only a few mins on the census, I don't expect that I would have scrolled through the post to find the description of the cause area. 

But some people might, so could be useful. 

Evie
2y
10
0
0

Pretty confused by what some of the cause areas are (e.g. epistemic institutions). I expect my responses were less helpful/ accurate bc of not knowing what some of them meant.

5
Niel_Bowerman
2y
Thanks for the feedback. Epistemic institutions is one of the FTX Future Fund project categories that they use in this post. I appreciate that that is fairly obscure! Do you think it would be helpful to link to this post and the 80k problem areas page from that question?

Wow haha this is pretty cool! And also an entertaining read

Thank you for the comments!

I agree with some of what you wrote. I don't want the subtext of the post to be "you should amass social capital so that senior people will do you favours."

Some thoughts:

  • It’s generally the case that ‘social domineeringness’ is a trait that is rewarded by society. Similar to intelligence, people who have this quality will probs be more likely to achieve their goals. (This makes me kinda uncomfortable, but I think it’s broadly true and it doesn’t seem good to ignore it).
  • Given that this is the case, I want to encourage this qu
... (read more)

I get where you're coming from (although I think domineeringness is less universally rewarded than intelligence across different parts of society). But given that we don't think the ideal society consists of people being very domineering, I worry that the indirect harms of pushing this in EA culture may be significant. I think it's harder to know what these are than the benefits, but I'm worried that it's a kind of naive consequentialist stance to privilege the things we have cleaner explicit arguments for.

At the very least I think there's something like a... (read more)

My guess is that it’s just very context dependent — I’m not sure how generalisable these sorts of numbers are. 

It also seems like the size of favours would vary a ton and make it hard to give a helpful number.

I'm sure it's context dependent and depends on size of favours. But I'm not sure it depends that much -- and I'm worried that if we don't discuss numbers it's easy for people who are naturally disinclined to ask to think "oh I'm probably doing this enough already" (or people who are naturally inclined to do this a lot already to think "oh yeah I totally need to do that more").

Maybe you could give a context where you think my numbers are badly off?

[On the title -- you gotta have fun with these things haha]

Thanks Gavin! 

Yes, the laws of equal and opposite advice defo apply here. 

I also wonder whether this sort of thing becomes zero sum within a small enough environment (e.g. if everyone starts lowering their bar for asking for help, people will raise their bar for saying yes, because they will be inundated with requests). Could lead to competitor dynamics (discussed in the comments of this post), which seems unfortunate.

I really like the point of spending years 'becoming yourself'. Like, I ... (read more)

From memory:

As an occasional antidote to forced-march life: consider yourself as a homeostatic organism with a particular trajectory. Like a plant in a pot.

What does a plant need? Water, light, space, soil, nitrogen, pest defence, pollinators. What are the potted human equivalents? What would an environment which gave you this without striving look like? What do you need to become yourself?

(You can reshape a plant, like bonsai, but really not too much or you'll kill it or stunt it.)

To avoid the "opposite advice" thing, maybe we can just talk about in absolute terms what are good amounts to ask for help?

My guess is that people should ask their friends/colleagues/acquaintances for help with things a few times a week, and ask senior people they don't know for help with things a few times a year. This is based on a sense of "imagining everyone was doing this" and wondering where I want to turn the dial to. I'm interested if others have different takes about the ideal level.

I think if people are asking noticeably less than that they shoul... (read more)

This comment is great, and resonates with a lot of the stuff I found hard when I was first immersed in the community at an EA hub.

I really really loved section 2 of this post!! It articulates a mindset shift that I think is important and valuable, and I've not seen it written out like that before. 

2
Akash
2y
Thanks, Evie!