by Evie

Summary

  • Recently, “agency” has become heavily encouraged and socially rewarded within EA, and I am concerned about the unintended consequences.
  • The un-nuanced encouragement of “social agency” could be harmful for the community; it encourages violating social norms and social dominance. At the extreme, it can feel parasitic, with dominant individuals monopolising resources.
  • “Agency” being a high-status buzzword incentivises Goodharting – where big and bold actions are encouraged (at the expense of actions that actually achieve an individual's goals).
  • I have an impression that, sometimes, people say “agency” and others hear “work more hours; be more ambitious; make bolder moves! Why haven’t you started a project already?!” “Agency” doesn’t entail “hustling hard” – it entails acting intentionally, and these are importantly different. Agency is not a tradeoff against rest; in fact, doing things to high standards and achieving your goals often requires tons of slack.
  • My experience and mistakes
    • If you feel uncomfortable when violating a social norm or making a bold move, take that feeling into consideration; there is probably a reason for it.
    • It’s not always low cost for others to say no to requests; failing to model this can make others uncomfortable and lead to a relative overconsumption of resources.
    • Making authorities upset with you is (emotionally and instrumentally) costly. Don’t ignore this cost when making big and bold moves.
    • Being too willing to ask for help made me worse at solo problem-solving. 

Introduction

Recently, “agency” has been heavily encouraged within EA. I often feel like it’s become both a status symbol and an unquestionable good.[1]

I wrote a post on agency a few months ago, which I now think missed important caveats. While I broadly think that the world (and EA) could do with an injection of proactiveness and intentionality, I’m concerned about some of the unintended consequences.

In this post, I lay out some concerns, some unexpected costs of taking agentic actions for me (and mistakes I’ve made), and things I’ve changed my mind on since my last post.

Important caveats

  1. Most people don’t ask for help enough, and this post is applicable to a minority of people; [2]
  2. This post is many layers of abstraction away from object-level issues that improve the world. I’m worried about meta-conversations (like this) taking away attention from more important and relevant topics. If you’ve read the summary, I’m not sure how much benefit you’ll get from the rest of the post. Consider not reading it.

“Social Agency” in moderation

I’m going to define “social agency”[3] as a willingness to make bold social moves, leverage social capital, and make trades within your social network to achieve your goals.[4]

My previous post mostly focused on encouraging social agency – “ask people for help,” “network online,” and “don’t be too constrained by social boundaries,” are some key themes. I am now concerned that this lacked nuance.

Social agency is a small part of doing good. Having a network and mentors and friends-in-high-places is not enough to actually do meaningful work in the world. The other part of agency is about Actually Doing Things: the nitty-gritty engagement with reality that actually makes things happen. Taking heroic responsibility; learning about the technical, object-level details of important problems; intentionally building models of the world; noticing gaps in existing strategies; developing a plan to solve bottlenecks; executing those plans and actually trying; developing feedback loops; rapidly iterating your existing plans.

The un-nuanced encouragement of social agency could be harmful for the community; it encourages individuals to violate social norms to achieve their goals, be socially dominant, and not constrain their use of finite resources (e.g. the time and attention of senior people). Taken to its extreme, agency can feel parasitic, with grasping, “agentic” individuals monopolising resources at the expense of others. I worry that operating with high “social agency” pattern-matches onto climbing a status ladder, with little regard for the consequences on others.

Don’t get me wrong – social agency is important and often necessary for achieving your goals. And the flip side is too much “Actually Doing Things,” where a lone wolf fails to benefit from cooperation, mentorship, and feedback.

Goodharting agency

"Agency" simply refers to the ability to achieve your goals, which is always instrumentally useful, but low-resolution versions can be harmful.

I claim that the following traits are being rewarded due to the “agency is high status and always good” meme:

  • Willingness to violate social norms to achieve your goals;
  • Being a confident go-getter;
  • Having a low bar for making requests of others (and assuming that others will say no to your requests if they want to);
  • Willingness to be socially dominant and “take up space”;
  • Not attempting to limit your consumption of finite resources (e.g. an in-demand person’s time; the attention of a teacher in a class);
  • A “hustle hard” attitude.

Most of these are helpful in moderation. But in a community where “agency” becomes an increasingly virtuous trait (and “you’re so agentic!!” is highly desired praise), we may see a lot of Goodharting.

I’m concerned that this meme incentivises big and bold actions – that pattern-match to “being agentic” – instead of the actions that actually achieve your goal. [5]

It’s important not to take actions that look agentic for their own sake; use agency as a vehicle to get to the things you actually care about. Agency isn’t a helpful terminal goal, but like productivity or ambition, it is instrumentally useful.

“Agency” ≠ “hustle hard”

I am concerned about the agency meme having Hustle Culture undertones.[6] There seems to be a caricature of agency that: founds a start-up; sends an absurd number of cold emails; does weird and unconventional experiments; writes lots of blog posts; has strong inside views which overrule the outside view.
 

I prefer a version of agency that advocates for acting intentionally. Agency does not entail working long hours and having an ambitious to-do list, and it is not a tradeoff against rest; in fact, doing things to high standards and achieving your goals often requires tons of slack.

We all (probably) have goals that do not feel like hustling and striving, for example: 

  • make sure I can get 8-10 hours of sleep a night; 
  • create room for slack in my life; 
  • have meaningful and emotionally close relationships; 
  • call my mum regularly; 
  • be a caring friend.

Intentionally taking steps to achieve these goals is just as agentic as reaching out to a potential mentor or upskilling at [virtuous technical skill].[7]

Ways that “agency” has been costly for me (and mistakes I’ve made)

Most people are not proactive enough, so the law of equal and opposite advice applies throughout all the following examples. 

However, the following might especially apply to people who: have spent time in an environment where agency is socially rewarded; aren’t afraid to “take up space” socially; or have deliberately worked on the skill of becoming more agentic.

Ignoring a gut feeling that I was overstepping a boundary

Sometimes it is “worth it” to violate social norms to achieve your goals. Sometimes it is not.

Last year, I attended the European Summer Program on Rationality (ESPR), where agency was a core theme and was socially encouraged. A common joke was to enthusiastically yell “AGENCY!” whenever anyone did anything weird/bold/unconventional.[8] This was useful and it helped me overcome fears that others would perceive me as obnoxious or annoying if I “took up space” socially – something that was seriously holding me back at the time.

But afterwards, whenever I had a voice of doubt in my head (you might be overstepping a boundary! You might be being “too much”! You might be making it harder for others to participate in this conversation! Are you sure it’s good for you to ask for this?), I would crush it with the memory of an instructor yelling “AGENCY!”[9]

Unfortunately, this voice in my head was serving a very real purpose. It was trying to make sure that others around me were comfortable, and that I wasn’t socially domineering. 

I am concerned that a takeaway from the agency meme could be: “do big and bold things, even if you’re uncomfortable and scared! Don’t let fear hold you back,” and I want to give a reminder that discomfort is serving a purpose.

Assuming that it is low-cost for others to say “no” to requests

Over the past year, I have leaned increasingly into ask culture. I have developed a low bar for asking for things from others and a (mostly) thick skin for people saying “no”.

I recently realised that I had been expecting others to know and enforce their own boundaries, and overestimating how easy it is to say “no”. I now think that this is an unrealistic expectation to have.

Someone recently told me that I was “terrible at guess culture.” This was helpful honest feedback, so I polled some friends on whether they agreed.[10] After pondering the evidence for a while (and considering whether I was too neurodiverse to be good at silly social games like guess culture[11]), I realised that I thought it wasn’t my responsibility to participate in guess culture – I implicitly believed it was dumb.

My implicit belief was something like:

"If someone doesn't want to agree to something I ask, then it’s their responsibility to say no! We’re all agents with decision making capacity here! I want to take people at their word – if they agree to something I ask, then I’ll believe them. It would be patronising to second guess them. I don’t want to be tracking subtext and implicit social cues to see whether they actually mean what they say. 

Of course, I want to deliberately make it easy for people to say no to me. I only want them to agree if they actually want to. I don’t want to exert pressure on people at all. But I also want to be careful to not micromanage their emotions and wrap them in a blanket – they can enforce their own boundaries."

I still endorse much of the sentiment here.[12] But I now want to incorporate the reality that some people actually just find it hard to say no, and this is fine, and I don’t want to put them in uncomfortable situations. (Very big caveat that it is not unvirtuous to find it hard to say no to things! I do not want that to be the subtext of this section.)

Now, I’m making more of an effort to track how people respond when I make a request (e.g. body language, tone of voice, eye contact). I am also trying to build models of whether I expect a given person to feel comfortable saying no. For people who might find it harder to say no, I will usually: make fewer requests; be very clear that it’s fine if they don’t respond (and I want them not to respond if that’s the right call); give them space (and time) to think about whether they actually want to say yes; and generally approach the conversation with more tact.

Ideally, we would implicitly reward decisions not to respond when it’s the right call (but I’m not really sure what this looks like in practice).

Not regulating my consumption of finite resources (e.g. the time of others)

I now realise that I used to model resources (in EA) as a free market. For example, I thought that:

  • I can ask people for their time, because if it’s not worth it, they will say no;
  • I can ask lots of questions in a classroom, because the instructor will answer fewer of my questions if they aren’t useful for the whole class;
  • I can send emails to whoever I want, because they will ignore them if they don’t want to answer.

I now think this model is flawed, and I want to regulate my consumption of resources within the community more – especially with regard to people’s time, which is valuable and finite.

(Again, this will not apply to most people reading this.)

Not cooperating with authority systems

For two months last year, I was studying for an important, difficult university entrance exam. Unfortunately, I had to be in school for seven hours a day, where the working conditions were unideal. There wasn’t a quiet place where I could reliably work uninterrupted; it was costly to take my textbooks to school every day; my school mandated that everyone attend sessions that I didn’t find useful (for example, about apprenticeships and applying to university). 

I decided to be absent from school for three weeks before the exam (which, combined with a school break, meant that I didn’t attend school for five weeks straight). At home, I had control over my sleep schedule, working routine, and eating times – which I optimised the hell out of. I also gained the useful data that I enjoyed self-studying.[13]

Unfortunately, my poorly explained absence made my teachers and peers unhappy with me. My mum had told the school that I was unwell, but this was suspected to be false.[14] I believe that my teachers felt betrayed and confused: my actions had been uncooperative. It was perceived as me arrogantly thinking I was above the rules. This burned through a lot of trust and social capital with my teachers – which had been slowly built up over years. One day, I was taken out of class and scolded, which felt really bad and shame-provoking. I also felt more distant from my peers upon returning. The strained relationships with my teachers and peers made school a much more unpleasant and difficult environment for me to be in.[15]

To be clear, I endorse having taken time off school to study. This decision meaningfully increased my exam success (and therefore the likelihood that I could attend my top-choice university). But, there were major costs to not cooperating with my school authorities. For one, I care about the feelings of people around me, and this situation left others feeling undermined, which I wish I could have avoided. For another, life is much more enjoyable when you’re playing cooperatively with other agents in your environment. Life is much more enjoyable when you're regarded as a "nice person" who others like and trust. [16]

I’m honestly not sure what I would have done differently here.[17] My closing point is just that actions have costs – and that big, “agentic,” unconventional actions have bigger costs.

Learned helplessness from leaning too far into social agency

I suspect that leaning heavily into “social agency” can make people worse at “Actually Doing Things agency.” If you’re in a headspace of readily seeking help from others, you might temporarily develop a helplessness around doing things for yourself.

I experienced a period of reduced self-sufficiency recently. When I encountered a problem, my default thought process was often “who can help me?” instead of “how can I solve this problem myself?”

Examples:

  • My bike chain fell off and I didn’t really consider putting it back on myself (even though I would be perfectly capable of figuring out how to do that). Instead, some guy on the street helped.[18]
  • When I feel sad, my default is to message friends and talk about it with them. This can be a very good default – definitely better than feeling too ashamed to tell my friends what’s bothering me. But my friends aren’t always available and it’s important to me that I don’t need to depend on them to feel better. If my friends aren’t available, I often feel stuck in a rut until I can speak to them.
  • Occasionally, my Anki flashcards build up and I develop an ugh-field around reviewing my flashcards. When this happens, I tend to avoid it until I am stressed enough to tell a friend about it. Then, the friend usually helps me overcome it (by coworking, or encouragement, or a monetary fine if I don’t do it). Until I tell the friend, I feel helpless and defeated. I then rely on my friend to overcome it.
  • I rarely debug anything by myself, instead defaulting to problem-solving with friends. 

This is a clear case where the laws of equal and opposite advice apply; all of these examples are completely fine in moderation, but I could certainly do with developing my solo-problem-solving skills more than my asking-for-help skills.

Things I’ve changed my mind on since my last post

I think my previous post (perhaps unhelpfully) contributed to the “agency is high status and always good” meme.

I have changed my mind on the following things since then:

  • The title of my previous post on agency was meant to be ironic, but I was glorifying the idea of being “unstoppable,” in a way that I no longer endorse. “Unstoppable” has some of the “parasite-like” undertones.
  • In the post, I advocate for having a very low bar for asking for help, but I no longer think that this is universally good. Some people could certainly do with asking for help more. But, it’s possible for dominant individuals to monopolise resources – in a way that is net harmful to the community.
  • I said in a comment: "it seems strictly good for EAs to be socially domineering in non-EA contexts. Like… I want young EAs to out-compete non-EAs for internship or opportunities that will help them skill build. (This framing has a bad aesthetic, but I can’t think of a nicer way to say it.)"  I now disagree with this; it feels selfish in a way that's hard to imagine is good, deontologically.[19]
  • It’s less clear to me that Twitter is a good use of a given person’s time.[20]

 I really like Owen’s comments on the post.

“I feel uneasy about the reinforcement of the message "the things you need are in other people's gift", and the way the post (especially #1) kind of presents "agency" as primarily a social thing (later points do this less but I think it's too late to impact the implicit takeaway).

Sometimes social agency is good, but I'm not sure I want to generically increase it in society, and I'm not sure I want EA associated with it. I'm especially worried about people getting social agency without having something like "groundedness".

... At the very least I think there's something like a "missing mood" of ~sadness here about pushing for EAs to do lots of  [being socially dominant]. The attitude I want EAs to be adopting is more like "obviously in an ideal world this wouldn't be rewarded, but in the world we live in it is, and the moral purity of avoiding this generally isn't worth the foregone benefits". If we don't have that sadness I worry that (a) it's more likely that we forget our fundamentals and this becomes part of the culture unthinkingly, and (b) a bunch of conscientious people who intuit the costs of people turning this dial up see the attitudes towards it and decide that EA isn't for them."

 

I'm grateful for the conversations that prompted much of this post, and for helpful feedback from friends on a draft.

  1. ^

    "Agency" refers to a range of proactive, ambitious, deliberate, goal-directed traits and habits. See the beginning of this post for a less abstract definition. 

  2. ^

    A friend commented that this post could be an info-hazard for people who aren't proactive enough. 

  3. ^

    This was coined by Owen Cotton-Barratt in a comment on the original post.

  4. ^

    For example: reaching out to someone you admire online and asking to have a call with them; organising social events; asking for help on a research project from someone more senior; asking your peers for feedback; organising a coworking session with a friend where you both overcome your ugh-fields.

  5. ^

    For example, maybe the best action for a given individual is quiet and simple, like studying hard in their room. But maybe they are incentivised to do other things like organise events and write blog posts, because these actions look agentic.

  6. ^

    This website defines Hustle Culture: "Also known as burnout culture and grind culture, hustle culture refers to the mentality that one must work all day every day in pursuit of their professional goals."

  7. ^

    To be clear, I mean "virtuous" in a tongue-in-cheek way here.

  8. ^

     Examples: yell to a room to be quiet and listen; give an impromptu lightning talk; jump in a river at night; ask for more food at dinner; seek out instructors and ask them for a 1-1.

  9. ^

    I mainly used this example because I find it funny, and I am not blaming ESPR or any staff member. I am responsible for my own actions.

  10. ^

    My close friends all felt moderately to strongly positive towards my communication style – which is to be expected because of the extremely strong selection bias. Someone who I know less well said that they have sometimes found it hard to say no to me, which was very helpful feedback.

  11. ^

    This is a joke; I don't actually think guess culture is silly. I think it serves a valuable purpose, socially.

  12. ^

    Of treating people as agents and not trying to enforce their boundaries for them.

  13. ^

    Note that I found self-studying much harder when I did it for five months this year.

  14. ^

    Towards the end, she told the school that I was staying home to study for the exam.

  15. ^

    I had been struggling in school before then, but my absence certainly amplified it.

  16. ^

    Obviously, my goal isn't to be regarded as a nice person, it's to be kind and act with care towards the emotional wellbeing of others (and the two are importantly different). 

    But it does feel bad (for me at least) when others don't think of you as a "nice person," and this is a cost worth tracking, even when taking actions that you overall endorse.

  17. ^

    Maybe I would have discussed it with my teachers beforehand. Maybe I would have told the school that I was taking time off to study, instead of saying I was ill. I don’t know; I’m doubtful that either of those solutions would have led to better outcomes.

  18. ^

    This happened again recently and I put the chain back on my bike by myself! Win.

  19. ^

    Multiplied, this strategy destroys everything and eats up all the resources. 

  20. ^

    I am concerned about an implicit assumption that engaging with the EA community on Twitter makes the world better. I also feel uncomfortable that an implicit goal of Twitter is to climb the Twitter status ladder. My rough thoughts/feelings now are probably: “if you’re bottlenecked by your network, consider using Twitter to grow your network. But seriously consider the possibility that Twitter could be a time and attention sink.” The above is conditional on using Twitter for impact and networking. But if you find Twitter fun and exciting (which is great), it might be worth it.

    I personally find Twitter too addictive if I use it regularly, and it's better for my mental health if I'm on social media less (but this is just my experience).

Comments (34)



Well done on public correction! That's always hard. 

It's key to separate out "social agency"  from the rest of the concept, and coining that term makes this post worthwhile on its own. Your learned helplessness is interesting, because to me the core of agency is indeed nonsocial: fixing the thing yourself, thinking for yourself, writing a blog for yourself, taking responsibility for your own growth (including emotional growth, wisdom, patience, and yes chores).

has inside views

I think you mean "has strong inside views which overrule the outside view". Inside views are innocuous if you simultaneously maintain an "all things considered" view.

Because of a quirk of the instructors and students that landed in our sample, ESPR 2021 went a little too hard on agency. We try to promote agency and wisdom in equal measure, which usually ends up sounding a lot like this post. Got there in the end!

Small nitpick: "social agency" coined by OCB in comments on the original.

Thanks :)

Your learned helplessness is interesting, because to me the core of agency is indeed nonsocial

Yeah, the learned helplessness is a weird one. I felt kinda sad about it because I used to really pride myself on being self sufficient. I agree that the social part of agency should only be a small part -- I think I leaned too far into it.

strong inside views which overrule the outside view

Thanks for the inside view correction. Changing that now, and will add that Owen originally coined social agency.

ESPR 2021 went a little too hard on agency

Fwiw I did get a lot of value out of the push for agency at ESPR. Before that,  I was too far in the other direction. Eg: I was anxious that others would think I was "entitled" if I asked for anything or just did stuff; felt like I had to ask for permission for things; cared about not upsetting authority figures, like teachers. I think that I also cared about signalling that I was agreeable -- and ESPR helped me get over this.

Wait, I'm not actually sure I want to change the inside view thing, I'm confused. I was kinda just describing a meme-y version of hustling -- therefore the low-resolution version of "has inside views" is fine. 

has strong inside views which overrule the outside view

I'm not really sure what you mean by this.

  1. "Having inside views": just having your own opinion, whether or not you shout about it and whether or not you think that it's better than the outside view.
  2. "Having strong inside views...": asserting your opinion when others disagree with it, including against the majority of people, majority of experts, etc.

 

(1) doesn't seem that agenty to me, it's just a natural effect of thinking for yourself. (2) is very agenty and high-status (and can be very useful to the group if it brings in decorrelated info), but needs to be earned.

Hmm, interesting. Thanks for clarifying, that does work better in this context (although it's confusing if you don't have the info above)

Your learned helplessness is interesting, because to me the core of agency is indeed nonsocial

Some quotes on agency that I liked, which I think are more representative of the "do it yourself" attitude.

Really good post, and I feel like I have way too many thoughts for this one comment. But anyway here are a few...

  • Maybe I'm the odd one out (or wrong about my intuition) but I don't think I ever got the sense – even intuitively or in a low-fidelity way – that "agency" was identical to/implied/strongly overlapped with "a willingness to be socially domineering or extractive of others' time and energy"
    • Insofar as I have something like a voice in my head giving me corrective advice on this front, it's asking "if not you, then who?" much more than it's saying "go get it!"
      • Of course, "if not you, then who?" isn't always rhetorical; sometimes, you really should refrain from doing something you're not qualified to do or don't understand!
  • Worth distinguishing between being a "team player" in a vibes/aesthetic sense and in a moral/functional sense. If skipping school meant that you were getting out of, say, a tutoring commitment that you had signed up for, I'd say that yeah, you should think hard about whether it's worth reneging
    • But what you wrote seems to imply that there was no functional or causally attributable harm to anyone from your missing school. If so, I think doing the weird rationalist thing and resisting the dysfunctional social norm of going to school was completely the right thing to do. 
      • Also, while emotional reactions often encode valuable information, I think the negative emotional reaction you got from being scolded by your teachers was more like a non-functional misfiring due to being out of the evolutionary context; in a 100-person tribe, being scolded by two others indicates that your behavior is indeed likely to harm you, and in a pseudo-moral sense may be meaningfully uncooperative. In this case, (probably) it's fine for you, your teachers, and whomever else if your teachers don't think highly of you.

I don't think you're the odd one out, I think via people's psychology and other factors, some people hear what you heard and some people hear what Evie describes.

Agree with you that the school example for me doesn't track what the broader thing is about.

Thanks for your comment Aaron! :)

I don't think I ever got the sense – even intuitively or in a low-fidelity way – that "agency" was identical to/implied/strongly overlapped with "a willingness to be socially domineering or extractive of others' time and energy"

I wrote about this because it was the direction in which I noticed myself taking "be agentic" too far. It's also based on what I've observed in the community and conversations I've had over the past few months. But I would expect people to "take the message too far" in different ways (obvs whether someone has taken it too far is subjective, but you know what I mean).

But what you wrote seems to imply that there was no functional or causally attributable harm to anyone from your missing school.

Yeah, nobody was harmed, and I do endorse that I did it. It did feel like a big cost that my teachers trusted/liked me less though. 

Note that I was a bit reluctant to include the school example, because there's lots of missing context, so it's not conveying the full situation. But the main point was that doing unconventional stuff can make people mad, and this can feel bad and has costs.

I enjoyed reading these updated thoughts!

A benefit of some of the agency discourse, as I tried to articulate in this post, is that it can  foster a culture of encouragement. I think EA is pretty cool for giving people the mindset to actually go out and try to improve things; tall poppy syndrome and 'cheems mindsets' are still very much the norm in many places!

I think a norm of encouragement is distinct from installing an individualistic sense of agency in everyone, though. The former should reduce the chances of Goodharting, since you'll ideally be working out your goals iteratively with likeminded people (mitigating the risk of single-mindedly pursuing an underspecified goal). It's great to have conviction — but conviction in everything you do by default could stop you from finding the things you really believe in.

Great post. This put words to some vague concerns I've had lately with people valorizing "agent-y" characteristics. I'm agentic in some ways and very unagentic in other ways, and I'm mostly happy with my impact, reputation, and "social footprint". I like your section on not regulating consumption of finite resources: I think that modeling all aspects of a community as a free market is really bad (I think you agree with this, at least directionally).

This post, especially the section on "Assuming that it is low-cost for others to say 'no' to requests"  reminded me of Deborah Tannen's book That's Not What I Meant — How Conversational Style Makes or Breaks Relationships. I found it really enlightening, and I'd recommend it for help understanding the unexpected ways other people approach social interactions.

Props for writing! Some things I think strengthen this point even more:
1. 

"I want to take people at their word – if they agree to something I ask, then I’ll believe them."

If I ask three people for their time, they don't know whether they're helping me get from 0 to 1 person helping with this, from 1 to 2, or from 9 to 10. I also may not know that, but it's a different ask, and people can reasonably set their bar at being the 0-to-1 person, or at the asker having already put in a certain amount of effort (which they've assumed has been put in).

2.

We all (probably) have goals that do not feel like hustling and striving, for example: 

  • make sure I can get 8-10 hours of sleep a night; 
  • create room for slack in my life; 
  • have meaningful and emotionally close relationships; 
  • call my mum regularly; 
  • be a caring friend.

I think people who care about EA and / or their impact should probably be willing to take a bunch of steps / take on a bunch of costs to avoid resenting EA in the medium to long term.

3. A frame I've found useful is to model EA as sort of an agent. If ten minutes of someone else's time can save me two hours of mine, that can be a very reasonable trade. If I could have spent 30 minutes figuring something out to save someone else 20, I might value their time that much more, and then wouldn't want to ask them to give that up.
 

If I ask three people for their time, they don't know whether they're helping me get from 0 to 1 person helping with this, from 1 to 2, or from 9 to 10.

Nice, yup, agreed that the information asymmetry is a big part of the problem in this case. I wonder if it could be a good norm to give someone this information when making requests. Like, explicitly say in a message "I've asked one other person and think that they will be about as well placed as you to help with this" etc

I do also think that there's a separate cost to making requests, in that it actually does impose a cost on the person. Like, saying no takes time/energy/decision-power. Obviously this is often small, and in many cases it's worth asking. But it's a cost worth considering.

 

(Now I've written this out, I realise that you weren't claiming that the info asymmetry is the only problem, but I'm going to leave the last paragraph in).

to avoid resenting EA in the medium to long term

This is great and only something I've started modelling recently. Curious about what you think this looks like in practice. Like, is it more getting at a mindset of "don't beat yourself up when you fall short of your altruistic ideals"? Or does it also inform real world decisions for you?

 model EA as sort of an agent

Nice 

  1. I got a request just last night and was told that the person was asking three people, and while this isn't perfect for them, I think it was a great thing from my perspective to know.
  2. I don't think it's massively relevant for me right now except vaguely paying attention to my mental health and well being, but I think it's super relevant for new-to-EA and/or young people deciding very quickly how invested to be.
[anonymous]

If you’ve read the summary, I’m not sure how much benefit you’ll get from the rest of the post. Consider not reading it.

Okay. Still upvoting though for this general thing:

...things I’ve changed my mind on since my last post.

[anonymous]

Incidentally, I also appreciate comments like the first quote - not only have you given a summary, you've also given an indication of how much of the value of the post is contained in the summary 🙏 

Thanks, that's useful to know! :)

I'm glad you wrote this! I was worried about your previous post, and was thinking about writing something on this dimension myself.

It's funny: this could've been mostly avoided by a consideration of Chesterton's Fence and the EMH? ("If AGENCY was so good, why wouldn't everyone do it?")

Anyways, I'm now worried about e.g. high school summer camp programs that prioritize the development of unbalanced agency.

Thanks for your comment!

this could've been mostly avoided by a consideration of Chesterton's Fence

Meh, I don't think so. This taken to its extreme looks like "be normie."

I'm now worried about e.g. high school summer camp programs that prioritize the development of unbalanced agency.

I'm pretty confident that (for ESPR at least) this was a one off fluke! I'm not worried about this happening again (see gavin's comment above).

This was a great read! I relate to a lot of these thoughts - have swung back and forth on how much I want to lean into social norms / "guess culture" vs be a stereotypically rationalist-y person, and had a very similar experience at school. I think it's great you're thinking deeply and carefully about these issues. I've found that my attitude towards how to be "agentic" / behave in society has affected a lot of my major object-level decisions, both good and not so good.

LB

I'm curious about your definition of agency and I wonder if it's one that is shared by other effective altruists? Agency for me, from not an EA background although a long-time observer, has little to do with the sort of necessarily self-serving goals that you are now arguing need to be reined in a bit, maybe. Rather, agency is your ability to consider and pursue them at all – from the Stanford philosophy dictionary: "...an agent is a being with the capacity to act..." Agency is, for me then, more akin to self-determination than an imperative to pursue something for one's self.

You end your current essay affirmatively quoting what I see as a quite clearly paternalistic view about how much  agency, as you define it, others should basically be allowed to have. I am curious, do you believe that agency is something for you or anyone to control for anyone else, in any capacity? A thing to be regulated? I'd be curious to know if you believe that for your definition of agency as well as my definition (or the one from the Stanford philosophy dictionary).

I am asking these questions neutrally - its genuine curiosity to understand your perspective a bit better.

Am not the author of this post, but I think EAs and rationalists have somewhat coopted the term "agentic" and infused it with a load of context and implicit assumptions about how an "agentic" person behaves, so that it no longer just means "person with agency". This meaning is transmitted via conversations with people in this social cluster as well as through books and educational sessions at camps/retreats etc. 

Often, one of the implicit assumptions is that an "agentic" person is more rational and so pursues their goal more effectively, occasionally acting in socially weird ways if the net effect of their actions seems positive to them. 

LB

OK, how interesting. I tend to stick to the academic EA literature and 'visionary'-type EA books, and when on this forum, generally not these kinds of posts. So I suppose I might not encounter too much of this... appropriation... for lack of a better term. You imply this self-serving quality is also infused into what it means for EAs to be rational? As if the economic definition of 'rational self-interest' is the definition of rational? Am I reading that correctly?

Thanks for the response!

As NinaR said, 'round these parts the word "agentic" doesn't imply self-interest. My own gloss of it would be "doesn't assume someone else is going to take responsibility for a problem, and therefore is more likely to do something about it". For example, if the kitchen at your workplace has no bin ('trashcan'), an agentic person might ask the office manager to get one, or even just order one in that they can get cheaply. Or if you see that the world is neglecting to consider the problem of insect welfare, instead of passively hoping that 'society will get its act together', you might think about what kind of actions would need to be taken by individuals for society to get its act together, and consider doing some of those actions.

LB

That's a definition I would assume an EA would give for the question – what it means to have agency for EAs – but from my view, it doesn't mesh with how the OP is describing agency and therefore what is "agentic" in this post, nor the post this post is a self-response to. In that first post, the one this is referencing, the OP gives seven recommendations that are clearly self-interested directives (e.g., "Figure out what you need, figure out who can help you get it, ask them for it"). If karma is an indicator, that first post was well received. What I am getting here, if I try to merge your position and my view of the OP's position, is that self-interested pursuits are a means to an end, which is a sort of eventual greater good? Like, I can pursue everything "agentic-ly" because my self-interest is virtuous (this sounds like objectivism to me, btw)?

Now, I actually think this current post is about rolling back that original post a bit, but only a little. Hence my questions about regulating 'agency' in others - to try and get some parameters or lane markers for this concept here.

These aren't defined topics (agency, agentic) in the EA forum, btw. This is partly why I am so interested in how people define them, since they seem to be culturally defined here (if in multiple ways) differently than they would be in non-critical philosophy and mainstream economics (in which EA is rooted).

When I read the original OP that this OP is a response to, I am "reading in" some context or subtext based on the fact I know the author/blogger is an EA; something like "when giving life advice, I'm doing it to help you with your altruistic goals". As a result of that assumption, I take writing that looks like 'tips on how to get more of what you want' to be mainly justified by being about altruistic things you want.

I don't fully understand your comment, but I think agency is meant to be a relatively goal-agnostic cognitive tool, similar to e.g. "being truth-seeking" or "good social skills." Altruism is about which goals you load in, but at least in theory this is orthogonal to how high your ability to achieve your goals is.

LB

That definition aligns more with the sort of traditional, non-critical philosophical and mainstream economic approach to agency, which means there are now basically three definitions of agency in this post and comment thread from EA peeps. I'm glad I asked, because I've always just assumed the sort of standard non-critical philosophy and mainstream economic definition (because that's the origin story of EA too) when reading things here and encountering the term agency – which I do not feel fits at all with the definition intimated or literally given by the OP in this post or the referenced, original post (which was highly karma'd).

"Agency" refers to a range of proactive, ambitious, deliberate, goal-directed traits and habits.

I see that as a definition driven by self-interest; the original essay confirms this. There's probably room for debate there, but it's definitely not a "relatively goal-agnostic cognitive tool"-type definition, for instance.

So, I think I've got to stop assuming a shared definition here for this one and, probably more importantly, stop assuming that EAs actually take it under consideration, really (there'd probably be a terms entry if its definition were valuable here, I suppose). I had no idea there were such post-y things in EA. How exciting. Ha!

If I were to guess what the 'disagreement' downvotes were picking up on, it would be this:

I see that as a definition driven by self-interest

Whereas to me, all of the adjectives 'proactive, ambitious, deliberate, goal-directed' are goal-agnostic, such that whether they end up being selfish or selfless depends entirely on what goal 'cartridge' you load into the slot (if you'll forgive the overly florid metaphor).

I don't think self-interest is relevant here if you believe that it is possible for an agent to have an altruistic goal.

Also, as with all words, "agentic" will have different meanings in different contexts, and my comment was based on its use when referring to people's behaviour/psychology which is not an exact science, therefore words are not being used in very precise scientific ways :) 

LB

It was the OP that injected self-interest into the concept of agency, hence my original question. And I totally agree about the meaning of words varying. None of this is a science at all, in my opinion, just non-critical philosophy and economics and where the two meet and intertwine. I'm just trying to understand how EA and EAs define these things, that's all. 

Thanks so much for your input. I really appreciate it. 

Yup, another commenter is correct in that I am assuming that the goals are altruistic.

Hey, thanks for asking.

On the first point:

  • Throughout both of my posts, I've been using a nicher definition than the one given by the Stanford philosophy dictionary. In my last post, I defined it as "the ability to get what you want across different contexts – the general skill of coming up with ambitious goals [1] and actually achieving them, whatever they are."
  • But, as another commenter said, the term (for me at least) is now infused with a lot of community context and implicit assumptions. 
  • I took some of this as assumed knowledge when writing this post, so maybe that was a mistake on my part.

On the second point:

  • I'm a bit confused by the question. I'm not claiming that there's an ideal amount of agency or that it should be regulated. 
  • Saying that, I expect that some types of agency will be implicitly socially regulated. Like, if someone frequently makes requests of others in a community, other people might start to have a higher bar for saying yes. Ie, there might be some social forces pushing in the opposite direction. 
    • I don't think that this is what you were getting at, but I wanted to add.