Sophia

Working (0-5 years experience)
553 · Joined Aug 2015

Bio


What I am thinking about at the moment, and why my profile is not my full name

Effective altruism community building

I used to do a lot of effective altruism community building and I am currently thinking about how the effective altruism community can maximize its expected impact, given the rapidly changing conditions. 

My own personal productivity and wellbeing bottlenecks: why I think prioritising work on these is important, the progress I have made, and my current challenges
 

I suspect that working on my personal productivity and wellbeing bottlenecks is a great way for me to increase how much impact I can expect to have over my career.

I am currently focused on working out how to become much more reliable than I have been in the past.

In particular, I am working on finishing my top-priority tasks in a timely manner, becoming better calibrated about which commitments I can and cannot meet, getting better at promising the 5th-percentile scenario rather than the 50th (or the 95th), and communicating quickly when plans break down. I find this quite challenging for a number of reasons (one being the cluster of personality traits that got me an ADHD diagnosis, another being a number of non-ADHD mental health and self-esteem issues that I'm slowly but surely working through).


Progress I have made so far: I have become much more self-compassionate over the years, which has helped a great deal with many of the mental health struggles I've had.

I have a process that consistently gets me started on my top priority (though it is very time-consuming and may only work when I control enough of the variables; for example, I'm not sure it would work if my home and work environments were very different from the ones I have now).

My next challenges: I struggle to keep the scope narrow enough to finish a lot of what I start. I finish what I want to finish some of the time, but nowhere near consistently enough to feel confident that I won't sometimes become the bottleneck in a team (and thereby slow the whole team's progress a lot).

I also find it very hard to under-promise. When I'm excited about things, I find it hard to accept that I realistically need to promise only 15% of what I think is actually possible, especially when everything I'm saying no to feels important if only I could fit it in.


More broadly, I am currently thinking about how to leverage the upsides of being somewhat more ADHD than the average person while managing the downsides, so that I can still reach the payoffs I care about.

Please do reach out about pretty much anything, as long as you're okay with me possibly not getting around to replying: messages and emails every now and again turn into a massive ugh field, and I can only make progress on so many ugh fields at a time.



Why my forum username is not my full name

I don't have my full name on the forum because I want to be able to be honest about how I'm thinking about the best strategies to increase my expected impact over my lifetime (even though anyone who knows me in person can probably tell who I am, because I always say the same things; I'm a bit of a broken record sometimes). That means sometimes discussing my personal productivity bottlenecks and my mental health, as I do here, and how I'm working on them. I don't necessarily want to be this open about my mental health, and the other parts of who I am that I'm working on improving (to, hopefully, do more good in the longer term), with anyone who thinks to google my full name.

(But I promise I'm still a real person, though maybe that is what a fake person would say.)
 

How others can help me

Hearing how other people manage their own productivity bottlenecks to reach the payoffs they care deeply about (especially people who have been diagnosed with ADD, or who relate to the experiences of people who have been diagnosed with ADD).[1][2]

  1. ^

or anyone who relates to Tim Urban's description, on his blog Wait But Why, of the feeling of absolutely not wanting to do anything but the shiny thing offered up by the fun monkey.

The Wait But Why description of what goes on in a master procrastinator's head is a pretty accurate description of what my mind does naturally, and seems to have caused a lot of the "productivity bottlenecks" that made a psychologist think to get me assessed for ADHD. (I know at least one person, whom I've had the good fortune to observe up close, who doesn't relate much to Tim Urban's description; their strategies are therefore a little less useful to me because they don't address the challenges I seem to face at the moment.)

Some more rambly thoughts on this train of thought, which I might not endorse at all if I thought more about them and may later really regret leaving here (but I'm going to leave them anyway, because I think it'll be easier to get back to what I'm supposed to be doing if I put them somewhere for the moment):
I'm pretty sure a lot of people with varying levels of these characteristics wouldn't meet the clinical criteria for ADHD. I'd guess that the characteristics I and other people in my family have in common, which cause higher levels of some ADHD symptoms than in people I am close to who lack them, are not binary: they seem to come on a pretty continuous spectrum among the people I know well enough (the ones I've seen try to work often enough, and who have been open about what is and isn't easy for them). If these characteristics do sit on a continuous spectrum, then there are plenty of people who probably have great strategies for using the best of this nature and managing its challenges, yet who wouldn't be diagnosable with ADHD; for many of them this is due to some combination of having less extreme levels of the characteristic and/or particularly good mindsets and strategies that help them make the most of who they are. For example, I think it is not outside the realm of possibility that people in my family who share my challenges, in how their dopamine seems to distribute itself among all the various activities they could be doing, might not actually be diagnosable with ADHD, because their strategies mean they don't have the symptoms to the same extreme level I had when I got my diagnosis.

(The reason these thoughts seem relevant to my "how can you help me help others" profile is that managing my "productivity bottlenecks" is my current focus, so that I can better help others over my lifetime; if there is a place to put these reflections, this doesn't seem the worst one. This way people might also add nuance to these low-confidence thoughts, so that if my current best-guess model is inaccurate, I can build a better one and progress on these bottlenecks faster.)

  2. ^

I have started documenting some of the strategies that work for me too (but there is a lot of personal info, so I've decided not to leave up a public Google Doc). Feel free to reach out if you're curious to see what progress I've made so far and what bottlenecks I'm tackling next, whether that's to cross-reference your own experiences and see if anything I've done seems helpful, or because you're wonderfully helpful and want some specifics to work out which of your own strategies might apply to where I am right now, or a bit of both, or some other reasonable reason I haven't conjured up.

How I can help others

[loading...] 

In the meantime, while this field loads, feel free to reach out to me about anything where I seem like I might be helpful based on the various things I've said elsewhere.

Note: if I take a while to respond, it's probably because your message got a bit ugh (I care about replying, so I feel bad about not having replied already). I tend to overcome most of my ugh fields eventually, but I have a lot of them, so it sometimes takes a while for any particular one to reach the top of the pile, which means it sometimes takes me a little longer than ideal to get back to people.

Comments (140)

tl;dr:

  • I am not sure that the pressure on community builders to communicate all the things that matter is having good consequences.
  • This pressure makes people try to say too much, too fast.
  • Making too many points too fast makes reasoning less clear.
  • We want a community full of people who have good reasoning skills.
  • We therefore want to make sure community builders are demonstrating good reasoning skills to newcomers.
  • We therefore want community builders to take the time they need to communicate the key points.
  • This sometimes realistically means not getting to all the points that matter.

I completely agree that you could replace "jargon" with "talking points".

I also agree with Rohan that it's important not to shy away from getting to the point if you can make the point in a well-reasoned way.

However, I actually think it's possibly quite important for improving the epistemics of people new to the community for there to be less pressure to communicate "all the things that matter". At least, I think there needs to be less pressure to communicate all the things that matter all at once.

The Sequences are long for a reason. Legible, clear reasoning is slow. I think too much pressure to get to every bottom line in a very short time makes people skip steps. This means that not only are we failing to show newcomers what good reasoning processes look like, we are also going to be off-putting to people who want to think for themselves and aren't willing to make huge jumps that are missing important parts of the logic.

Pushing community builders to get to all the important key points, all the bottom lines, may make it hard for newcomers to feel they have permission to think for themselves and make up their own minds. Feeling rushed to a conclusion, feeling like you must come to the same conclusion as everyone else no matter how important it is, will always make clear thinking harder.

If we want a community full of people who have good reasoning processes, we need to create environments where good reasoning processes can thrive. I think this, like most things, is a hard trade-off and requires community builders to be pretty skilled or to have much less asked of them.

If it's a choice between effective altruism societies creating environments where good reasoning processes can occur and communicating all the bottom lines that matter, I think it might be better to focus on the former. It makes a lot of sense for effective altruism societies to be about exploration.

We still need people to execute. I think having AI-risk-specific societies, bio-risk societies, broad longtermism societies, poverty societies (and many other more conclusion-focused mini-communities) might make this less of a hard trade-off (especially as the community grows and there becomes room for more than one effective-altruism-related society on any given campus). It is much less confusing to be rushed to a conclusion when that conclusion is well labelled from the get-go (and effective altruism societies can then point interested people in the right direction to find out why certain people think certain bottom lines are sound).

Whatever the solution, I do worry that rushing people to too many bottom lines too quickly does not create the community we want. I suspect we need to ask community builders to communicate less (we may need to triage our key points more) in order for them to communicate those key points in a well-reasoned way.

Does that make sense?

Also, I'm glad you liked my comment (sorry for writing an essay objecting to a point made in passing, especially since your reply was so complimentary; clearly succinctness is not my strength, so perhaps other people face this trade-off much less than me :p).

Lucky people maybe just have an easier time doing anything they want to do, including helping others, for so many reasons.

I didn't go to an elite university but I am exceptionally lucky in so many extreme ways (extremely loving family, friends, citizen of a rich country, good at enough stuff to feel valued throughout my life including at work etc).

While there is of course a counterfactual world where I could have put myself in a much worse position, it would have been impossible for most people to have it as good as I have it, even if they worked much harder than me their entire lives.

Because of my good luck, it is much easier for me to think about people (and other sentient beings) beyond my immediate friends and family. It is very hard to have a wide moral circle when you, your friends and your family are under real threat. My loved ones are not under threat and haven't ever been. I care a tonne about the world. Clearly this is largely due to luck. I have no idea what I would have become without a life of good fortune.

I think it makes sense to try and find people who are in a position to help others significantly even though it is always going to be largely through luck. Things just are incredibly unfair. If they were fairer, effective altruism would be less needed.

It's probably easier to find people who are exceptionally lucky at elite universities.

I do think it makes sense to target the luckiest people to use their luck as well as possible to make things better for everyone else.

The challenge is doing this while making sure a much wider variety of people can feel a sense of belonging here in this community.

I do think we have to be better at making it clear that many different types of people can belong in the effective altruism movement.

The group of people who should feel welcome, should they stumble upon us and want to contribute, is much, much larger than the group to whom we should spend scarce resources promoting the idea that lucky people can help others enormously.

I think part of the answer comes from the people who don't fit the mould who stay engaged anyway because they resonate with the ideas. Because they care a tonne about helping others. These people are trailblazers. By being in the room, they make anyone else who walks in the room who is more like them and less like the median person in the existing community feel like this space can be for them too.

I don't think this is the full answer. Other social movements have had great successes from making those with less luck notice they have much more power than they sometimes feel they do. I'm not sure how compatible EA ideas are with that kind of mass mobilisation, though. The message isn't simple, so when it's spread en masse, key points have a tendency to get lost.

I do think it's fair to say that due to comparative advantage and diminishing returns, there is a tonne of value to building a community of people who come from all walks of life who have access to all sorts of different silos of information.

Regardless, I think it's incredibly important to not mistake the focus on elite universities for a judgement on whether they deserve to be there.

I think it is actually purely a judgement on what they might be able to do from that position to make things better for everyone else.

Effective altruism is about working out how to help others as much as we can.

If lucky people can help others more, then maybe we want to focus on finding lucky people to make all sentient beings luckier for the rest of time. If less lucky people can help more on the margin to help others effectively, then we should focus our efforts there.

This is independent of value judgements on anyone's intrinsic worth. Everyone is valuable. That's why we want to help everyone as much as we can. Hopefully we can do this and still make sure everyone in our community feels valued. This is hard, because people naturally don't feel as valued when we all tie our sense of self-worth to our instrumental value, even though instrumental value is usually pretty much entirely luck.

This is a challenge we maybe need to rise to, rather than a tension we can just accept, because a healthy, happy effective altruism community where everyone feels valued will simply be more effective at helping others. I think it's pretty clear that everyone can contribute (e.g. extreme poverty still exists, and a small amount of money sadly still goes a very long way). I know I can contribute much, much less than many others, but being able to contribute something is enough. I don't need to contribute more than anyone else to be a net-positive member of this community.

We're all on the same team. It's a good thing if other people are able to do much more than me. If luckier people can do more, then I'm glad they are the ones being most encouraged to use their luck for good. If those with less luck want to contribute what they can, I hope they can still feel valued regardless.

Hopefully we can all feel valued for being a part of something good and contributing what we can independent of whether luckier people are, due to their luck, able to do more (and therefore might be focused on more in efforts to communicate this community's best guesses on how to help others effectively).

I liked this commentary even if I disagreed with a lot of the bottom line conclusions. Since we have an inferential gap that could be quite large, I don't expect everything you say to make sense to me.

You are probably directionally correct so I have strong upvoted this to encourage you to continue writing.

I don't have the energy right now to get into the object-level but feel free to share future draft posts as your thoughts develop. If I have a spare moment, I'd be very happy to share any feedback I have on your future thoughts with you.

(all good humor tends to be pointing to some angle of the truth that needs time to become nuanced enough to be more widely legible)

I strongly agree. I think this question getting downvoted reveals everything wrong with the EA movement. I am thinking it might be time to start a new kind of revolution of compassion, patience and rationality.

What do you think?

I think that you are pointing to an important grain of truth.

I think that crossing inferential gaps is hard.

Academic writing is one medium. Facial expressions carry a tonne of information that is hard to capture in writing but can be captured in a picture. To understand maths, writing is fine. To understand the knowledge in people's heads, higher-fidelity mediums than writing (like video) are better.

Cool. I'm curious: how does this feeling change for you if you found out today that AI timelines are almost certainly less than a decade?

I'm curious because my intuitions change momentarily whenever a consideration pops into my head that makes me update towards AI timelines being shorter.

I think my intuitions change when I update towards shorter AI timelines because legibility/the above outlined community building strategy has a longer timeline before the payoffs. Managing reputation and goodwill seem like good strategies if we have a couple of decades or more before AGI.

If we have time, investing in goodwill and legibility to a broader range of people than the ones who end up becoming immediately highly dedicated seems way better to me.

Legible, high-fidelity messages are much more spreadable than less legible messages, but they still take more time to disseminate. Why? The simple bits sound like platitudes, and the interesting takeaways require too many steps in logic from the platitudes to go viral.

However, word-of-mouth spread of legible messages that require multiple steps in logic still seems like it might be exponential (just with a lower growth rate than simpler viral messages).

If AI timelines are short enough, legibility wouldn't matter in those possible worlds. Therefore, if you believe timelines are extremely short then you probably don't care about legibility or reputation (and you also don't advise people to do ML PhDs because by the time they are done, it's too late).

Does that seem right to you?

Thanks for this analysis! I would be excited to see this cause area explored/investigated further.

Note: edited significantly for clarity the next day

Tl;dr: Weirdness is still a useful sign of sub-optimal community building. Legibility is the appropriate fix to weirdness. 

I know I used the terms "nuanced" and "high-fidelity" first but after thinking about it a few more days, maybe "legibility" more precisely captures what we're pointing to here?

My hunch that the advice "don't be weird" would lead community builders to be more legible now seems like the underlying reason I liked the advice in the first place. However, you've very much convinced me that you can avoid sounding weird by just not communicating any substance. Legibility seems to capture what community builders should do when they sense they are being weird and alienating.

EA community builders should probably stop and reassess when they notice they are being weird: "weirdness" is a useful smoke alarm for a lack of legibility. They should then aim to be more legible. To be legible, they should probably pick their battles strategically on which claims they prioritise justifying to newcomers. They are legibly communicating something, but they're probably not making alienating, uncontextualised claims they can't back up in a single conversation.

They are also probably using clear language that the people they're talking to can understand.

I now think the advice "make EA more legible" captures the upside without the downsides of the advice "make EA sound less weird". Does that seem right to you?

I still agree with the title of the post. I think EA could and should sound less weird, by prioritising legibility at events where newcomers are encouraged to attend.

Noticing and preventing weirdness by being more legible seems important as we get more media attention and brand lock-in over the coming years. 

I definitely appreciate the enthusiasm in this post; I'm excited about Will's book too.

However, for the reasons Linch shared in their comment, I would recommend editing this post a little.

I think it is important to only recommend the book to people we know well enough to judge that they would probably get a lot out of a book like this one, and to whom we can legibly articulate why we think they'd get a lot out of it.

A recommended edit to this post

I recommend editing the friends bit to something like this (in your own words of course, my words are always lacking succinctness so I recommend cutting as you see fit):

If you feel comfortable reaching out to a couple of friends who you think might find What We Owe the Future a good read, now might be a particularly excellent time to give them this book recommendation.

Now seems like a particularly good time because the book just launched, so it is getting more media attention than it likely will in the future. Your friends are therefore more likely to be reminded of your recommendation, and so more likely to actually read the book, if you recommend it now rather than later.

Having said that, for certain people a better time to share the book might be after you have read it yourself, or after you've come across something in it they'd find interesting, so it's probably best to make a judgement call based on the particular friend you're considering.

Other thoughts on who to recommend this book to, and how to do it in a sensitive way that leaves room for them to say no if they don't feel it's their vibe

I think it's particularly good to recommend the book to people to whom you can explain clearly why you think they would get a lot out of it. I also think it could be very good to explicitly encourage them to think critically about it and to send you their critical thoughts, or discuss them with you in person. If you send it to people you know well enough to tell they would probably enjoy the book, you're almost definitely going to genuinely want to hear what they have to say after reading it, so that's a double bonus: you've started a conversation!

I think requests like this can come off as pushy if the person you are recommending the book to doesn't understand why you think they should read it. By making the reasoning clear, and also leaving room in the message for them to simply not follow your recommendation (by not assuming they'll read it just because you think they'd enjoy it), a recommendation can give a good vibe instead of a bad one. Basically, it's important to vibe it and only recommend the book in ways that leave everyone feeling good about the interaction, whether or not they read the book.

For example, you could say something like (obviously find your own words, this is definitely a message written with my vibe so it would probably be weird to copy it exactly word-for-word because it probably wouldn't sound like you and therefore wouldn't sound as genuine):

I am so excited about this book being released because I think that future generations matter a tonne and Will MacAskill thinks a little differently to other people who have thought hard about this so I'm looking forward to seeing what he has to say. I thought I'd send you a link too because from previous conversations we've had, I get the sense that you'd enjoy a book detailing someone's thinking on what we can do to benefit future generations. 

I'd be really keen to discuss it with you if you do end up reading it, and especially keen to hear any pushback you have, because I'm a little in my bubble so I won't necessarily be able to see my water as well as you could.

Let me know what you think about my suggestions (and feel free to push back on anything I've said here). I think I could easily change my mind on the above; I'm not at all confident in my recommendations, so I'd be keen to hear what you or anyone else thinks.
