Thanks for the thoughtful reply.
I think we are probably agreed that we should be cautious about prescribing that EAs go to charities or cause areas where the culture doesn't seem welcoming. Given the younger age of many EAs, and the lower income and career capital produced by some charities, this could be a very difficult experience or even a trap for some people.
I think I have updated based on your comment: having not just acceptance but also active discussion and awareness of "non-canonical" cause areas seems useful.
I wonder, to what degree would your post or concerns be addressed if new cause areas were substantively explored by EAs to add to the "EA roster"? (Even if few cause areas were ultimately "added" as a result, e.g. because they aren't feasible.)
The identification of EA with a small set of cause areas has many manifestations, but the one I’m mostly worried about is the feeling shared by many in the community that if they work on a cause that is not particularly prioritized by the movement (like feminism) then what they do "is not really EA", even if they use evidence and reason to find the most impactful avenues to tackle the problem they try to solve… However, this calculus can be somewhat incomplete, as it doesn’t take into account the personal circumstances of the particular biologist debating her career.
I think I strongly agree with this and I expect most EAs do too.
My interpretation is that EA as a normative, prescriptive guide for life doesn’t seem right. Indeed, if anything, there’s evidence that EA doesn’t do this well, or maybe even substantively neglects it while appearing to do it, in a pernicious way. From a “do no harm” perspective, addressing this is important. This seems like a “communication problem” (which seems historically undervalued in EA and other communities).
From the perspective of the entire EA movement, it might be a better strategy to allocate the few individuals who possess the rare “EA mindset” across a diverse set of causes, rather than stick everyone in the same 3-4 cause areas. Work done by EAs (who explicitly think in terms of impact) could have a multiplying effect on the work and resources that are already allocated to causes. Pioneer EAs who choose such “EA-neglected” causes can make a significant difference, just because an EA-like perspective is rare and needed in those areas, even in causes that are well-established outside of EA (like human rights or nature conservation). For example, they could carry out valuable intra-cause prioritization (as opposed to inter-cause prioritization).
This is a really different thought than your others above, and I want to comment more to make sure I understand.
While I agree with the essence, I think I differ, and I want to get at the crux of the difference:
Overall, I think “using data”, cost-effectiveness analysis, measurement, and valuation aren’t far from mainstream in major charities. To get a sense of this: I have spoken with (and worked with) leaders in, say, environmental movements, and they specifically “talk the talk”, e.g. there are specific grants for “data science”-like infrastructure. However, while nominally trying, many of these charities don’t succeed; the reason is an immense topic beyond the scope of this comment or post.
But the point is that it seems hard to make the methodological or leadership changes that motivate the dissemination you propose.
Note that it seems very likely we would agree with, and trust, any EA who reported that a particular movement or cause area would benefit from better methods.
However, actually effecting change is really difficult.
To make this tangible, imagine trying to get Extinction Rebellion to use measurement and surveys to regularly interrogate their theory of change.
For another example, the leadership and cohesion of many movements can be far lower than they appear. Together with the fact that applying this kind of reasoning might foreclose large sections of activity or initiatives, this would make implementation impractical.
While rational, data-driven, and reasoned approaches are valuable, it’s unclear if EA is the path to improving this, and this is a headwind to your point that EAs should disseminate widely. I guess the counterpoint is that focus is valuable, which supports the focus on canonical cause areas that you argue against.
I'm sorry but I just saw this comment now. My use of the forum can be infrequent.
I think your point is fascinating, and your shift in perspective and use of history is powerful.
I take your point about this figure and how disruptive (in the normal, typical sense of the word, not the SV sense) he was.
I don't have many deep thoughts. I guess it is true that institutions are more important now, if only because with 8 billion people, single individuals should have less agency.
I am usually suspicious about stories like this since it's unclear how institutions and cultures are involved. But I don't understand the context well (classical period Greece). I guess they had https://en.wikipedia.org/wiki/Ostracism#Purpose for a reason.
This is so well written, so thoughtful and so well structured.
BE VERY CAREFUL NOT TO GET SUCKED INTO HORRIBLE PUBLISHING INCENTIVES.
This theme or motif has come up a few times. It seems important but maybe this particular point is not 100% clear to the new PhD audience you are aiming for.
For clarity, do you mean:
Also, note that "publications" can be so different between disciplines.
A top publication in economics during a PhD is rare, but would basically be worth $1M in net present value over a career. It's probably totally optimal to target such a publication, even in business, because of the signaling value.

Note that my academic institution is far below yours in academic prestige/rank/productivity. It would be interesting to know more about your experiences at MIT and what it offers.
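For intuition on the $1M figure, here is a rough NPV sketch; the salary premium, career length, and discount rate are my own illustrative assumptions, not data:

```python
# Back-of-the-envelope NPV of a top publication's salary premium.
# All numbers are illustrative assumptions: a $40k/year premium over a
# 35-year career, discounted at 3% per year.
premium = 40_000   # assumed annual salary premium (USD)
years = 35         # assumed remaining career length
rate = 0.03        # assumed discount rate

# Discount each year's premium back to the present and sum.
npv = sum(premium / (1 + rate) ** t for t in range(1, years + 1))
print(f"NPV: ${npv:,.0f}")  # on the order of $860k, i.e. roughly $1M
```

Of course, different (also defensible) assumptions move this up or down by a factor of two or so, which is why "basically $1M" seems like a fair summary.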
C-dawg in the house!
I have concerns about how this post and research is framed and motivated.
This is because its methods imply a certain worldview, and it is trying to inform hiring or recruiting decisions in EA orgs, so we should be cautious.
Loosely speaking, I think “star systems” is a useful concept / counterexample to this post.
In this view of the world, a “star system” is one where a small number of people get all the rewards, but not from anything we would comfortably call productivity or performance.
So, for intuition: most Olympic athletes train near poverty, but a small number manage to “get on a cereal box” and become millionaires. They have higher ability, but we wouldn’t say that gold medal winners are 1000x more productive than someone they beat by 0.05 seconds.
You might view “star systems” negatively because they are unfair. Yes, and in addition to inequality, they may have very negative effects: they promote echo chambers in R1 research, and they support abuse like that committed by Harvey Weinstein.
However, “star systems” might be natural and optimal given how organizations and projects need to be executed. For intuition, there can be only one architect of a building or one CEO of an org.
It’s probably not difficult to build a model where people of very similar ability work together and end up, under a CEO model, with very unequal incomes. It’s not clear this isn’t optimal or even “unfair”.
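To make this concrete, here is a toy sketch of such a model; all parameters (number of workers, ability spread, wage, prize) are my own illustrative assumptions, not from the paper:

```python
import random

# Toy "star system" model: many workers of near-identical ability compete,
# and the single top performer (the "CEO" / "architect" role) captures a
# large fixed prize while everyone else earns a base wage.
random.seed(0)
n = 100
abilities = [random.uniform(0.95, 1.05) for _ in range(n)]  # within ~10%
base_wage, prize = 50_000, 5_000_000

incomes = [base_wage] * n
incomes[abilities.index(max(abilities))] += prize  # winner takes the prize

ability_ratio = max(abilities) / min(abilities)  # under ~1.11
income_ratio = max(incomes) / min(incomes)       # exactly 101
print(f"ability ratio: {ability_ratio:.2f}, income ratio: {income_ratio:.0f}")
```

The point of the sketch: income inequality (101x) vastly exceeds ability inequality (~1.1x), so reading incomes or rewards as a measure of productivity would be badly misleading here, even though the allocation might be perfectly functional.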
Your paper is a study or measure of performance.
But as suggested almost immediately above, it seems hard (frankly, maybe even harmful) to measure performance if we don't take into account structures like "star systems", and probably many other complex factors.
Your intro is well written and very clear; it suggests we care about productivity because 1) it seems like a small number of people are very valuable, and 2) this matters in the most direct and useful sense for how EA orgs should hire.
Honestly, I only took a quick scan (it’s 51 pages long! I’m willing to do more if there’s a specific need in the reply). But I know someone who is experienced in empirical economic research, including econometrics, history of thought, causality, and how various studies, methodologies, and worldviews end up being adopted by organizations.
It’s hard not to pattern-match this to something reductive like “cross-country regressions”, which is basically inadequate (one might say an also-ran or a reductive dead end).
Overall, you are measuring things like finance, number of papers, and equity, and I don’t see you making a comment or nod to the “star systems” issue, which may be one of several relevant structural concepts.
To me, getting into performance/productivity/production functions seems to be a deceptively strong statement.
It would influence cultures and worldviews, and could greatly worsen things if, for example, this became an echo chamber.
Alternative / being constructive?
It's nice to try to end with something constructive.
I think this is an incredibly important area.
I know someone who built multiple startups and teams. Choosing the right people, from a cofounder to the first 50 hires, is absolutely key. Honestly, it’s something akin to dating, for many of the same reasons.
So my 15-second response is that I would consider approaching this in a different way:
I think if the goal is to help EA orgs, you should study successful and unsuccessful EA orgs and figure out what works. Their individual experience is powerful: start from interviews of successful CEOs and work upwards to the lessons that are important and effective in 2021 and beyond in the specific area.
If you want to study exotic, super-star beyond-elite people and figure out how to find/foster/create them, you should study exotic, super-star beyond-elite people. Again, this probably involves huge amounts of domain knowledge, getting into the weeds and understanding multiple world-views and theories of change.
Well, I would write more, but it's not clear there are more than 5 people who will read to this point, so I'll end now.
Also, here's a picture of a cat:
This seems fantastic, both for doing the work itself and sharing it!
I know someone who built multiple orgs; based on this, startups seem to be a dizzying mess: basically, you need to do everything at 1000% quality (e.g. early hires and strategic decisions) while having 1000% more tasks than time/energy to do them.
This makes writing about the process difficult (it's a surreal situation where it's hard to know what reality is, and writing itself can create ideas and structures that may not be Truth).
It's impressive that you are writing about it!
Now, I'm writing this comment because of what you said here:
Most of the writings are going to be very basic for most of you guys here, as it really is intended for a non-EA audience, but it might still be interesting to take a look behind the scenes.
I don't know what this means, or more honestly, I disagree with it.
Even if someone has seen 100 startups, they would still benefit from learning about your instance of a startup and your experiences, especially shared as honestly as you seem to share them.
Also, in your post you say:
How looking for inspiration in other countries helps shape the idea, building an MVP, getting together the initial team (and how we find unexpected help). There will for sure also be an article on how our egos almost got in the way of the easiest solution to the problem (and how it still might) and what we do to get around it.
How do you start a nonprofit? Why should you start a nonprofit? How do you have the biggest impact you can? How do you raise donations? How do you get a bank account? How do you become a registered charity?
It's not clear why the EA community would have advanced skills for building startups, non-profits, MVPs or teams.
Really, what I'm saying is that it could be the opposite: this knowledge is new and valuable (and if so, an easy read would be particularly valuable).
(But maybe I'm wrong, maybe I'm misrepresenting your views, and maybe someone else reading this will correct me.)
Thanks for your great post!
These seem like great points, and of course, your question stands.
I wanted to say that most R1 research is problematic for new grads: because of the difficulty of success, low career capital, and, frankly, sometimes dubious "impact". It is also hard to get started; it typically requires a PhD and post-doc(s), all poorly paid. Contrast this with, say, software engineering.
My motivation for writing the above is for others, akin to the "bycatch" article—I don't think you are here to read my opinions.
Thanks for responding thoughtfully and I'm sure you will get an interesting answer from Holden.
(Uh, I just interacted with you but this is not related in any sense.)
I think you are interpreting Open Phil's giving to "Scientific research" to mean it is a distinct cause priority, separate from the others.
For example, you say:
... EA groups and CEA don't feature scientific research as one of EA's main causes - the focus still tends to be on global health and development, animal welfare, longtermism, and movement building / meta
To be clear, in this interpretation, someone looking for an altruistic career could go into "scientific research" and make an impact distinct from "Global Health and Development" and other "regular" cause areas.
However, instead, is it possible that "scientific research" mainly just supports Open Philanthropy's various "regular" causes?
For example, a malaria research grant is categorized under "Scientific Research", but for all intents and purposes is in the area of "Global Health and Development".
So in this interpretation, funding sits under "Scientific Research" sort of as an accounting matter, not because it is a distinct cause area.
In support of this interpretation, taking a quick look at the recent grants for "Scientific Research" (on March 18, 2021) shows that most are plausibly in support of "regular" cause areas:
Similarly, sorted by largest amount of grant, the top grants seem to be in the areas of "Global Health", and "Biosecurity".
Your question does highlight the importance of scientific research in Open Philanthropy.
Somewhat of a digression (but interesting) are secondary questions:
I don't have the energy to fully engage with these, but maybe we just misunderstand each other in terms of what we define as UI/UX design. To me, and many other UI/UX designers, the UI/UX design is the end-to-end experience of using a website, product, or service, so I think everything I pointed out still falls into the realm of UI/UX design. It's not just about better interactions. And I think content choices / tradeoffs still can be considered part of the UI/UX design.
It seems improbable that control or selection of specific content, especially the choices you illustrated, is under the purview of UX.
That would unworkably expand UX into decisions that are basically always controlled by other parts of the organization (e.g. the executive).
To see this another way with examples: we would not accept exec blaming their UX designers for racist or inappropriate content. Similarly, a board would find it ridiculous if a CEO said their "community groups" initiative failed because their UX designer decided it did not belong on the front page.
I know someone who worked adjacent to this space (e.g. hiring and working with the people who hire UX designers).
Someone presenting a UX design that comprised the choices in your upper-level comment would risk being perceived as advancing an agenda.
Thanks for the thoughtful reply:
There's some comments below. They verge on debate, but I am not trying to be contentious.
Comments on #1-3:
I think your points #1-#3 are along the lines of a specific "business choice". Importantly, choices have drawbacks: promoting one aspect or feature in a limited space is a choice to use a limited resource.
Based on what you said, it seems like #1 and #2 are important and valuable. If one of EA's core activities is its communities, that should be emphasized, and adding it would be a huge improvement. If EA's contributors are substantially non-white, this can't be neglected in photos.
Now, I personally like the idea of promoting communities and genuinely reflecting the population. However, it also verges into what I might call "politics", or at least non-UX improvements.

Comments on #4:
Overall, I think the visual design of CEA's and GWWC's current websites is better than the EA website's. I think CEA's is really good currently, mainly because of their use of nice photos, especially of people.
Below are the top of these pages and maybe what you are referring to:
The pages are excellent, but they are also not what I would call "UX design" as I imagined it.
They use visual principles that I see commonly on many websites made in the last 5 years.
To try to emphasize this: for a side project, someone I know created a similar page (similar, I think, in every sense: performance, design, and high-quality photos) in a few hours, and it took off. I might be brutalizing/offending UX designers here.
Also, the main difference in design is the simplicity of the elements; in particular, CEA's design is an extremely simple and effective "landing page". GWWC's, also simple, presents a strong narrative in a top-down scroll. (I might be messing up terms of art.) The current EA website is busier: it has a few more elements, does not really use scrolling, and has more words. Again, as in my previous comment, it's not clear this is a bad thing, and I might prefer it.
The theme of this comment is that your reply is different than what I expected. I expected to learn of a "UX improvement" as some strictly better design choice ("stop using garish colors") or a better mode of use in some sense ("swiping right on Tinder").
I agree that design (e.g. "minimalism" or something) might help EA and I wanted to learn what this is.
But my bias is to avoid technological solutions unless it's clearly needed.
Also, if you have a distinct goal, "we need more non-white people in photos as it better reflects and welcomes the actual community", I prefer to just state that, instead of risking conflating distinct objectives.
Also, really going off topic here, I would like to know more about your experiences with your ethnicity if you have them (note that technically I might have the same ethnicity as you).
You seem to have a lot of thoughtful content and this would be an interesting perspective.