All of Darius_M's Comments + Replies

Emphasizing emotional altruism in effective altruism

At the risk of self-promotion, I wrote a motivational essay on EA a few years ago, Framing Effective Altruism as Overcoming Indifference

3 · MichelJusten · 1mo
Thanks for sharing! I hadn’t come across this but I like the framing.
Wikipedia editing is important, tractable, and neglected

Well done! The article receives about 50,000 page views each year, so there are a lot of people out there who benefit from your contribution.

4 · AllAmericanBreakfast · 1mo
Wow, I hadn't thought to check! Thanks for pointing that out, and for writing this post!
Affectable universe?

Toby Ord explains several related distinctions very clearly in his paper 'The Edges of Our Universe'. Highly recommended: https://arxiv.org/abs/2104.01191

Books / book reviews on nuclear risk, WMDs, great power war?

Copied from my post: Notes on "The Myth of the Nuclear Revolution" (Lieber & Press, 2020)

I recently completed a graduate school class on nuclear weapons policy, where we read the 2020 book “The Myth of the Nuclear Revolution: Power Politics in the Atomic Age” by Keir A. Lieber and Daryl G. Press. It is the most insightful nuclear security book I have read to date, and while I disagree with some of the book’s outlook and conclusions, it is interesting and well written. The book is also very accessible and fairly short (180 pages). In sum, I believe more

... (read more)
What moral philosophies besides utilitarianism are compatible with effective altruism?

In "The Definition of Effective Altruism", William MacAskill writes that 

"Effective altruism is often considered to simply be a rebranding of utilitarianism, or to merely refer to applied utilitarianism...It is true that effective altruism has some similarities with utilitarianism: it is maximizing, it is primarily focused on improving wellbeing, many members of the community make significant sacrifices in order to do more good, and many members of the community self-describe as utilitarians.

But this is very different from effective altruism being the... (read more)

What moral philosophies besides utilitarianism are compatible with effective altruism?

The following paper is relevant: Pummer & Crisp (2020). Effective Justice, Journal of Moral Philosophy, 17(4):398-415.

From the abstract: 
"Effective Justice, a possible social movement that would encourage promoting justice most effectively, given limited resources. The latter minimal view reflects an insight about justice, and our non-diminishing moral reason to promote more of it, that surprisingly has gone largely unnoticed and undiscussed. The Effective Altruism movement has led many to reconsider how best to help others, but relatively little ... (read more)

Effectiveness is a Conjunction of Multipliers

Great post! While I agree with your main claims, I believe the numbers for the multipliers (especially in aggregate and for ex ante impact evaluations) are nowhere near as extreme in reality as your article suggests, for the reasons that Brian Tomasik elaborates on in these two articles:

(i) Charity Cost-Effectiveness in an Uncertain World 

(ii) Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness  

4 · Adam Binks · 5mo
I agree - and if the multiplier numbers are lower, then some claims don't hold, e.g. this doesn't hold if the set of multipliers includes 1.5x. Instead we might want to talk about the importance of hitting as many big multipliers as possible, and being willing to spend more effort on these over the smaller (e.g. 1.1x) ones. (But I want to add that I think the post in general is great! Thanks for writing this up!)
4 · gwern · 5mo
Well, you know what the stereotype is about women in Silicon Valley high tech companies & their sock needs... (Incidentally, when I wrote a sock-themed essay [https://www.gwern.net/Socks], which was really not about socks, I was surprised how many strong opinions on sock brands people had, and how expensive socks could be.) If you don't like the example 'buy socks', perhaps one can replace it with real-world examples like spending all one's free time knitting sweaters for penguins [https://web.archive.org/web/20111025143342/http://www.giantflightlessbirds.com/2011/10/the-great-penguin-sweater-fiasco/] . (With the rise of Ravelry and other things, knitting is more popular than it has been in a long time.)

I mostly agree; the uncertain flow-through effects of giving socks to one's colleagues totally overwhelm the direct impact and are probably at least 1/1000 as big as the effects of being a charity entrepreneur (when you take the expected value according to our best knowledge right now). If Ana is trying to do good by donating socks, instead of saying she's doing 1/20,000,000th the good she could be, perhaps it's more accurate to say that she has an incorrect theory of change and is doing good (or harm) by accident.

I think the direct impacts of the best int... (read more)

EA Projects I'd Like to See

Excellent post! I really appreciate your proposal and framing for a book on utilitarianism. In line with your point, William MacAskill, Richard Yetter Chappell and I also perceived a lack of accessible, modern, and high-quality resources on utilitarianism (and related ideas). This is what motivated us to create utilitarianism.net, an online textbook on utilitarianism. The website has been getting a lot of traction over the past year, and we are still expanding and improving its content (including plans to experiment with non-text media and translations int... (read more)

Thanks for working on this website, it's a great idea!

Possible additions to your list of books (I've only read the first one so forgive me if they aren't as good/relevant as I think they are):

... (read more)
7 · finm · 5mo
Big fan of utilitarianism.net [https://www.utilitarianism.net/] — not sure how I forgot to mention it!
4 · JackRoyal · 5mo
As a beginner exploring normative ethics this looks very helpful. Thanks!
What psychological traits predict interest in effective altruism?

Strong upvote! I found reading this very interesting and the results seem potentially quite useful to inform EA community building efforts.

Open Thread: Spring 2022

Hi Timothy, it's great that you found your way here! I am also from Germany and am happy to report that there is a vibrant German EA community (including an upcoming conference in Berlin in September/October that you may want to join). 

Regarding your university studies, I essentially agree with Ryan's comment. However, while studying in the UK and US can be great (I've done both!), I appreciate that doing so may be daunting and financially infeasible for many young Germans. If you decide to study in Germany and are more interested in the social scienc... (read more)

Open Thread: Spring 2022

Time to up your game, Linch! 😉

4 · Linch · 4mo
I'm ahead of both him and MichaelA now. Currently #2!
Would you like to run the EARadio podcast?

I want to express my deep gratitude to you, Patrick, for running EA Radio for all these years! 🙏 Early in my EA involvement (2015-16), I listened to all the EA Radio talks available at the time and found them very valuable. 

The Toba Supervolcanic Eruption

Excellent! A well-deserved second prize in the Creative Writing Contest.

Is EA compatible with technopessimism?

In my experience, many EAs have a fairly nuanced perspective on technological progress and aren't unambiguous techno-optimists. 

For instance, a substantial fraction of the community is very concerned about the potential negative impacts of advanced technologies (AI, biotech, solar geoengineering, cyber, etc.) and actively works to reduce the associated risks. 

Moreover, some people in the community have promoted the idea of "differential (technological) progress" to suggest that we should work to (i) accelerate risk-reducing, welfare-enhancing tec... (read more)

1 · acylhalide · 8mo
Thanks, this is super useful. Although I guess the question now becomes: should we improve existing institutions or build new ones in ways that allow for differential tech progress, or is it better to prevent all progress?
Arguing for utilitarianism

Utilitarianism.net has also recently published an article on Arguments for Utilitarianism, written by Richard Yetter Chappell. (I'm sharing this article since it may interest readers of this post)

1 · Omnizoid · 8mo
Yeah, that has some good arguments, thank you for sharing that.
Wikipedia editing is important, tractable, and neglected

Thanks, it's valuable to hear your more skeptical view on this point! I've included it after several reviewers of my post brought it up and still think it was probably worth including as one of several potential self-interested benefits of Wikipedia editing. 

I was mainly trying to draw attention to the fact that it is possible to link a Wikipedia user account to a real person and that it is worth considering whether to include it in certain applications (something I've done in previous applications). I still think Wikipedia editing is a decent signal ... (read more)

Wikipedia editing is important, tractable, and neglected

Thanks for this comment, Michael! I agree with all the points you make and should have been more careful to compare Wikipedia editing against the alternatives (I began doing this in an earlier draft of this post and then cut it because it became unwieldy). 

In my experience, few EAs I've talked to have ever seriously considered Wikipedia editing. Therefore, my main objective with this post was to get more people to recognize it as one option of something valuable they might do with a part of their time; I wasn't trying to argue that Wikipedia editing i... (read more)

Wikipedia editing is important, tractable, and neglected

I strongly agree that we should learn our lessons from this incident and seriously try to avoid any repetition of something similar. In my view, the key lessons are something like:

  1. It's probably best to avoid paid Wikipedia editing
  2. It's crucial to respect the Wikipedia community's rules and norms (I've really tried to emphasize this heavily in this post)
  3. It's best to really approach Wikipedia editing with a mindset of "let's look for actual gaps in quality and coverage of important articles" and avoid anything that looks like promotional editing

I think it wou... (read more)

6 · Pablo · 9mo
I strongly endorse each of these points.
Wikipedia editing is important, tractable, and neglected

As an example, look at this overview of the Wikipedia pages that Brian Tomasik has created and their associated pageview numbers (screenshot of the top 10 pages below). The pages created by Brian mostly cover very important (though fringe) topics and attract ~ 100,000 pageviews every year.  (Note that this overview ignores all the pages that Brian has edited but didn't create himself.)

Wikipedia editing is important, tractable, and neglected

Someone (who is not me) just started a proposal for a WikiProject on Effective Altruism! To be accepted, this proposal will need to be supported by at least 6-12 active Wikipedia editors. If you're interested in contributing to such a WikiProject, please express "support" for the proposal on the proposal page.  

The proposal passed!! Everyone who's interested should add themselves as a participant on the official WikiProject! https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Effective_Altruism

Wikipedia editing is important, tractable, and neglected

This is the best tool I know of to get an overview of Wikipedia article pageview counts (as mentioned in the post); the only limitation with it is that pageview data "only" goes back to 2015.

How can we make Our World in Data more useful to the EA community?

Create a page on biological weapons. This could include, for instance,

  1. An overview of offensive BW programs over time (when they were started, stopped, funding, staffing, etc.; perhaps with a separate section on the Soviet BW program)
  2. An overview of different international treaties relating to BW, including timelines and membership over time (i.e., the Geneva Protocol, the Biological Weapons Convention (BWC), Australia Group, UN Security Council Resolution 1540)
  3. Submissions of Confidence-Building Measures in the BWC over time (including as a percentage of the
... (read more)
3 · EdMathieu · 6mo
Not much yet, but on (5) we now have this world map: Number of biosafety level 4 facilities [https://ourworldindata.org/grapher/number-of-biosafety-level-4-facilities-by-country?country=USA~HUN~KOR~CHN~FRA~PHL~ITA~AUS]

(This does sound useful, though I'd note this is also a relatively sensitive area and OWID are - thankfully! - a quite prominent site, so OWID may wish to check in with global catastrophic biorisk researchers regarding whether anything they'd intend to include on such a page might be best left out.)

For many people interested in but not yet fully committed to biosecurity, it may make more sense to choose a more general master's program in international affairs/security and then concentrate on biosecurity/biodefense to the extent possible within their program.

Some of the best master's programs to consider to this end:

  1. Georgetown University: MA in Security Studies (Washington, DC; 2 years) 
  2. Johns Hopkins University: MA in International Relations (Washington, DC; 2 years)
  3. Stanford University: Master's in International Policy (2 years)
  4. King's College Lon
... (read more)

The GMU Biodefense Master's is also offered as an online-only degree.

Georgetown University offers a 2-semester MSc in "Biohazardous Threat Agents & Emerging Infectious Diseases". Course description from the website: "a one year program designed to provide students with a solid foundation in the concepts of biological risk, disease threat, and mitigation strategies. The curriculum covers classic biological threats agents, global health security, emerging diseases, technologies, CBRN risk mitigation, and CBRN security."

New Articles on Utilitarianism.net: Population Ethics and Theories of Well-Being

Website traffic was initially low (i.e. 21k pageviews by 9k unique visitors from March to December 2020) but has since been gaining steam (i.e. 40k pageviews by 20k unique visitors in 2021 to date) as the website's search performance has improved. We expect traffic to continue growing significantly as we add more content, gather more backlinks, and rise up the search rankings. For comparison, the Wikipedia article on utilitarianism has received ~ 480k pageviews in 2021 to date, which suggests substantial room for growth for utilitarianism.net.

2 · trammell · 1y
Thanks!
6 · kuhanj · 1y
Pageviews would also go up a lot if (as suggested in the post) articles from the website were included in intro fellowships/other educational programs. I'll discuss adding these articles/others on the site to our intro syllabi. One potential concern with adding articles from utilitarianism.net is that many (new-to-EA) people (from experience running many fellowships) have negative views towards utilitarianism (e.g. find it off-putting, think people use it to justify selfish/horrible/misguided actions, think it's too demanding (e.g. implications of the drowning child argument), think it's naive, etc.). I think utilitarianism is often not brought up very charitably in philosophy/other classes (again, based on my impressions running fellowships), so I worry about introducing ideas through the lens of utilitarianism. One potential solution is to include these readings in fellowship syllabi after talking about utilitarianism more broadly (for what it's worth, in our fellowship we try to present utilitarianism as we/EAs tend to interpret it and address misconceptions, but we can only do so much), or to bring them up in in-depth fellowships/non-intro programs where what I've brought up might be less of a concern.
Towards a Weaker Longtermism

I'm not sure what counts as 'astronomically' more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii).

This may be the crux - I would not count a ~ 1000x multiplier as anywhere near "astronomical" and should probably have made this clearer in my original comment. 

Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.

All my comment was meant to say is that it seems hi... (read more)

Towards a Weaker Longtermism

I'd like to point to the essay Multiplicative Factors in Games and Cause Prioritization as a relevant resource for the question of how we should apportion the community's resources across (longtermist and neartermist) causes:

TL;DR: If the impacts of two causes add together, it might make sense to heavily prioritize the one with the higher expected value per dollar.  If they multiply, on the other hand, it makes sense to more evenly distribute effort across the causes.  I think that many causes in the effective altruism sphere interact more multip

... (read more)
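To make the additive-vs-multiplicative point in that TL;DR concrete, here is a minimal toy sketch of my own (the per-dollar values 10 and 4 and all names are made-up assumptions, not numbers from the linked essay): when two causes' impacts add, the best allocation of a fixed budget goes entirely to the higher-value cause, whereas when they multiply, an even split does better.

```python
# Toy illustration (hypothetical numbers): split a fixed budget between two
# causes, A and B, with assumed per-dollar values a and b, and compare the
# best split when impacts add versus when they multiply.

a, b = 10.0, 4.0  # assumed per-dollar value of cause A and cause B

def additive_impact(x):
    """Total impact if the two causes' impacts simply add."""
    return a * x + b * (1 - x)

def multiplicative_impact(x):
    """Total impact if the two causes' impacts multiply."""
    return (a * x) * (b * (1 - x))

splits = [i / 100 for i in range(101)]  # share of the budget given to cause A
best_additive = max(splits, key=additive_impact)
best_multiplicative = max(splits, key=multiplicative_impact)

print(f"Additive case: best share for cause A = {best_additive:.2f}")              # 1.00
print(f"Multiplicative case: best share for cause A = {best_multiplicative:.2f}")  # 0.50
```

Under these assumed numbers the additive case concentrates everything on the higher-value cause, while the multiplicative case favours an even split, which is the intuition behind spreading effort more evenly across causes that interact multiplicatively.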
Towards a Weaker Longtermism

Please see my above response to jackmalde's comment. While I understand and respect your argument, I don't think we are justified in placing high confidence in this  model of the long-term flowthrough effects of near-term targeted interventions. There are many similar more-or-less plausible models of such long-term flowthrough effects, some of which would suggest a positive net effect of near-term targeted interventions on the long-term future, while others would suggest a negative net effect. Lacking strong evidence that would allow us to accurately ... (read more)

Yep, not placing extreme weight. Just medium levels of confidence that when summed over, add up to something pretty low or maybe mildly negative. I definitely am not like 90%+ confidence on the flowthrough effects being negative.

Towards a Weaker Longtermism

No, we probably don’t. All of our actions plausibly affect the long-term future in some way, and it is difficult to (be justified to) achieve very high levels of confidence about the expected long-term impacts of specific actions. We would require an exceptional  degree of confidence to claim that the long-term effects of our specific longtermist intervention are astronomically (i.e. by many orders of magnitude) larger than the long-term effects of some random neartermist interventions (or even doing nothing at all). Of course, this claim is perfectly... (read more)

9 · anonymous_ea · 1y
Phil Trammell's point in Which World Gets Saved [https://forum.effectivealtruism.org/posts/cYf6Xx8w7bt9ivbon/which-world-gets-saved] is also relevant:
4 · Jack Malde · 1y
For the record I'm not really sure about 10^30 times, but I'm open to 1000s of times. Pretty much every action has an expected impact on the future in that we know it will radically alter the future, e.g. by altering the times of conceptions and therefore who lives in the future. But that doesn't necessarily mean we have any idea of the magnitude or sign of this expected impact. When it comes to giving to the Against Malaria Foundation, for example, I have virtually no idea of what the expected long-run impacts are and if this would even be positive or negative - I'm just clueless. I also have no idea what the flow-through effects of giving to AMF are on existential risks. If I'm utterly clueless about giving to AMF but I think giving to an AI research org has an expected value of 10^30, then in a sense my expected value of giving to the AI org is astronomically greater than giving to AMF (although it's sort of like comparing 10^30 to undefined so it does get a bit weird...). Does that make any sense?
[PR FAQ] Sharing readership data with Forum authors

Agreed, I'd love this feature! I also frequently rely on pageview statistics to prioritize which Wikipedia articles to improve.

Towards a Weaker Longtermism

There is a big difference between (i) the very plausible claim that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, and (ii) the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

Basically, in this context the same points apply that Brian Tomasik made in his essay "Why Ch... (read more)

I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, which makes me wonder what I'm missing.

I don't want the EA community to stop working on all non-longtermist things. But the reason is that I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community; I don't mean indirect effects more broadly in the sense of 'better health in poor countries' --> '... (read more)

8 · Habryka · 1y
I think I believe (ii), but it's complicated and I feel a bit confused about it. This is mostly because many interventions that target the near-term seem negative from a long-term perspective, because they increase anthropogenic existential risk by accelerating the speed of technological development. So it's pretty easy for there to be many orders of magnitude in effectiveness between different interventions (in some sense infinitely many, if I think that many interventions that look good from a short-term perspective are actually bad in the long term).

It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

Really? This surprises me. Combine (i) with the belief that we can tractably influence the far future and don't we pretty much get to (ii)?

3 · Davidmanheim · 1y
I'm unwilling to pin this entirely on the epistemic uncertainty, and specifically don't think everyone agrees that, for example, interventions targeting AI safety aren't the only thing that matters, period. (Though this is arguably not even a longtermist position.) But more generally, I want to ask the least-convenient-world question of what the balance should be if we did have certainty about impacts, given that you seem to agree strongly with (i).
Writing about my job: Research Fellow, FHI

I really appreciated the many useful links you included in this post and would like to encourage others to strive to do the same when writing EA Forum articles.

Thanks Darius! It was my pleasure.

How to reach out to orgs en masse?

Happy to have you here, Linda! It sounds like you have some really important skills to offer and I hope you find great opportunities to apply them.

1 · LindaMartin · 1y
Thank you, Darius! I'm excited to use my skills to give back after a long career.
AMA: The new Open Philanthropy Technology Policy Fellowship

The listed application documents include a "Short essay (≤500 words)" without further details. Can you say more about what this entails and what you are looking for?

4 · Technology Policy Fellowship · 1y
The specific prompts were included in the application form. Apologies that this was not clear. We've now added a note along those lines to the fellowship page. The prompts are:

  • Personal statement: "What do you want to get out of this fellowship? Why do you think you are a good fit? Please describe your interest in (and any experience with) policy as well as your area of focus (e.g. AI or biosecurity)."
  • Short essay: "What is one specific policy idea related to AI, biosecurity, or related emerging technology areas that you think the US government should pursue today? Why do you think this idea would be beneficial?"
  • Statement of motivation: "How do your interests and plans align with Open Philanthropy's goals related to societal impacts of technology?"
AMA: The new Open Philanthropy Technology Policy Fellowship

Are non-US citizens who hold a US work authorization disadvantaged in the application process even if they seek to enter a US policy career (and perhaps aim to become naturalized eventually)?

2 · Technology Policy Fellowship · 1y
Non-citizens are eligible to apply for the program if they do not require visa sponsorship in order to receive a placement. For example, someone with a green card should be eligible to work at any think tank. As long as applicants are eligible to work in the roles that they are applying for, non-citizens who aspire to US policy careers will not be disadvantaged.

It's our understanding that it is difficult for non-citizens to get a security clearance, which is required for many federal government roles, and executive branch offices are generally hesitant about bringing on non-citizens. Congressional offices are legally allowed to take on permanent residents (and even some temporary visa holders), but individual offices may adopt policies favoring US citizens. Out of the three categories, we therefore expect non-citizens to have the easiest time matching with a think tank.

However, a lot depends on individual circumstances, so it is difficult to generalize. We encourage non-citizens with work authorization to apply, and would work through these sorts of questions with them individually if they reach the later stages of the application process.
What novels, poetry, comics have EA themes, plots or characters?

There is Eliezer Yudkowsky's Harry Potter fan fiction "Harry Potter and the Methods of Rationality" (HPMOR), which conveys many ideas and concepts that are relevant to EA: http://www.hpmor.com/

Please note that there is also a fan-produced audio version of HPMOR: https://hpmorpodcast.com/

3 · MaxRa · 1y
I also really enjoyed the unofficial sequel, Significant Digits. http://www.anarchyishyperbole.com/p/significant-digits.html
The EA Forum Podcast is up and running

Great initiative! Unfortunately, I cannot seem to find the podcast on either of my two podcast apps (BeyondPod and Podcast Addict). Do you plan to make the podcast available across all major platforms?

9 · D0TheMath · 1y
Anchor sends messages to podcast platforms to get the podcast on them. They say this takes a few business days to complete. In the meantime, you can use Ben Schifman's method.
5 · BenSchifman · 1y
On mobile (but for some reason not on the web version) there is a "more platforms" button that gives you an RSS feed that should work on any player: https://anchor.fm/s/62cbeec4/podcast/rss
You are allowed to edit Wikipedia

Strongly agree! I'm currently writing an EA Forum post making the case for Wikipedia editing.

6 · ChristianKleineidam · 1y
Given the discussion here and over at LessWrong, where I crossposted this, I think that when it comes to writing a larger post to make a more effective argument, it's important to explain how Wikipedia works. It seems to me that many people think changing Wikipedia articles is just about making an edit and hoping it doesn't get reverted. This works for smaller issues, but for big issues it takes more than one person to create change. I'm currently in a deep discussion on a contentious issue where I wrote a lot. If 3-4 people joined in and backed me up, I could likely make the change, and it wouldn't take much effort for any of those people.

When it comes to voting in an election, you don't need to explain to people that even though they didn't get what they wanted, this doesn't mean there wasn't a democratic election. People have a mental model for how elections work, but they don't have one for how decisions on Wikipedia get made, and thus think that if they alone don't have the power to create change, it's not worth speaking up on the talk page.

I also read that people think the goal of Wikipedia is truth, when it isn't; it's to reflect what secondary sources say. While it might be great to have an encyclopedia that has truth as a goal, having a place where you can find a synthesis of secondary sources is valuable. Understanding that helps you know when it's worth speaking up and when it isn't.
The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020)

Awesome episode! I really enjoyed listening to it when it came out and was excited for Sam's large audiences across Waking Up and Making Sense to learn about EA in this way.

Open Thread: June 2021

Welcome Naghma! It is great to have you here and learn about your background and interests.

Know anyone interested in litigation?

Hi Alene! I suspect you already know Jay Shooster (https://www.richmanlawpolicy.com/team/jay-shooster)? In case you don't, he might be a great contact for you.

YESSSSS he's amazing and has volunteered to help me think through some things! Jay Shooster, if you're reading this: You rock. 

Help me find the crux between EA/XR and Progress Studies

Regarding your question:

What would moral/social progress actually look like?

This is a big and difficult question, but here are some pointers to relevant concepts and resources:

  • Moral circle expansion (MCE) - MCE is "the attempt to expand the perceived boundaries of the category of moral patients." For instance, this could involve increasing the moral concern in the wider public (or, more  targeted, among societal decision-makers) for non-human animals or future people. Arguably,  MCE could help reduce the risk of societies committing further atroc
... (read more)
Help me find the crux between EA/XR and Progress Studies

Regarding your question:

Does XR consider tech progress default-good or default-bad?

Leopold Aschenbrenner's paper Existential risk and growth provides one interesting perspective on this question (note that while I find the paper informative, I don't think it settles the question).

A key question the paper seeks to address is this:

Does faster economic growth accelerate the development of dangerous new technologies, thereby increasing the probability of an existential catastrophe?

The paper's (preliminary) conclusion is 

we could be living in a unique “time

... (read more)
4 · Gavin · 1y
Aschenbrenner's model strikes me as a synthesis of the two intellectual programmes, and it doesn't get enough attention.
AMA: Working at the Centre for Effective Altruism

My impression is that as an organisation CEA has undergone substantial change over time. How might working at CEA today be different compared to working there, say, 3/5/7 years ago?

8 · MaxDalton · 1y
I agree with a lot of Amy/Julia's impressions. Some other thoughts:

7 years ago (I was an intern over the summer, so I'm probably missing some things): I think "CEA" was really just a legal entity for a wide variety of other projects. There was a bit more research being done in-house (e.g. Global Priorities Project), and I think basically everything was happening in Oxford. Compared to then: more cohesive, less research, people more distributed across the world.

5 years ago: things were beginning to get a bit more integrated. Different teams were coming together and trying to figure out what the internal culture was. I think CEA was also really figuring out what to focus on: there were research projects, projects promoting effective giving, EA community building etc. Compared to then: narrower focus and more established/consistent team culture.

2 years ago: I think there was a lot of uncertainty: we were searching for new leadership, and didn't have a solid long-term strategy. However, I think we were beginning to integrate a bunch of cool hires that we made in 2018, and we had a supportive culture. We were focused on making sure we followed through on existing commitments (rather than ambitious goals/new things). We had an office in Berkeley as well as in Oxford. Compared to then: clearer goals/leadership, more focus on expansion, no Berkeley office and more focus on remote work.

I think I listed mostly good or neutral things. When I reflect on what I miss from previous eras, the main thing is the in-person office culture (though I hope we'll get this back as we move into our new Oxford office).
9 · Amy Labenz · 1y
I agree that a lot has changed! Five years ago CEA only had a couple of staff members and was more of an "umbrella organization" to incubate other projects. My ops role was with "CEA core" and my events role was with EA Outreach (one of the projects we supported, which no longer exists as a separate org). Julia was on the EA Outreach team at that time along with a handful of teammates - I don't think anyone else on the current team worked with us at the time. During my first 6 months or so at CEA, EAO merged with CEA core.

In terms of culture, five years ago it felt much more like CEA was a startup. I think teammates were much more stretched then: I worked two roles at once (Director of US Operations and Head of Events), and the people I managed were working on different projects. And many of the EAO teammates were part time. That meant I worked lots of hours and still couldn't always execute at the level I would have liked.

Now I have a team of people helping with events (see our post, we are hiring!). And a whole different team manages Operations. This helps us improve our execution. CEA has also shifted to a culture that emphasizes self care more. I feel more able to take time off now because I have a team around me, which helps because I have two very young kids (2 yo and 4 months old). Julia managed this back in the old days but I don't think I could have at the time.