All of Benevolent_Rain's Comments + Replies

As Brad points out, even now, and with some (high?) likelihood in the near future, EA will be begging for people to start new things. So please disregard downvotes. Instead, if you think you can pull this off and have the credentials, just take tips like mine, Brad's and others', and read downvotes as "not ready yet"; do not interpret them as "a project similar to this is not worthwhile". There are tons of people right now working on starting new things, and this will only accelerate as the need for it is large.

And criticism is on the EA community if we make p... (read more)

Have you read AIM's (formerly Charity Entrepreneurship) material? They have a book out on starting a non-profit. If you read that and present this idea either absorbing its lessons or clearly arguing why the idea is still good as-is, I think that would make it easier for readers here to assess it and potentially consider joining. Some more detail on your background would probably help too. I am a bit sad that you are getting downvoted as a newcomer - we want people to join and be agentic, which is exactly what you are doing.

1
Elli
Dear Benevolent_Rain, thank you very much for your valuable feedback. I have already incorporated some of your points into my post and will definitely consider making bigger changes after reading the book you suggested. At the same time, this post is also meant to test whether this high-level concept can get any traction at all before I dive deep into the details. If I get mostly downvotes and no one shows interest, that is also feedback for me, as it might mean the idea is not viable in its current form.

This resonates a lot. I’m keen to connect with others who are actively thinking about when it becomes justified to hand off specific parts of their work to AI.

Reading this, it seems like the key discovery wasn’t “Claude is good at critique in general,” but that a particular epistemic function — identifying important conceptual mistakes in a text — crossed a reliability threshold. The significance, as I read it, is that you can now trust Claude roughly like a reasonable colleague for spotting such mistakes, both in your own drafts and in texts you rely on a... (read more)

8
Linch
I wouldn't go quite this far, at least from my comment. There's a saying in startups, "never outsource your core competency", and unfortunately reading blog posts and spotting conceptual errors of a certain form is a core competency of mine. Nonetheless I'd encourage other Forum users who are less good at spotting errors (which is most people) to try something like this: give posts that seem a little fishy to Claude and see if it's helpful.[1] For me, Claude is more helpful for identifying factual errors, and for challenging my own blog posts at different levels (e.g. spelling, readability, conceptual clarity, logical flow, etc.). I wouldn't bet on it spotting conceptual/logical errors in my posts that I missed, but again, I have a very high opinion of myself here. [1] (To be clear, I'm not sure the false positive/false negative ratio is good enough for other people.)

Has anyone seen any of the following?

1 - EA orgs skipping tests/trials on candidates and instead using candidate performance on tests/trials from other EA orgs? The closest I have seen is the "top candidate" tag in the HIP database that some EA orgs send
2 - Hints of top talent applying to fewer positions due to "test/trial burn-out"? I think this might be especially severe for top talent, as they often get to test/trial stages, and might be doing back-to-back tests and trials for weeks or months on end (and for mid-career professionals in ... (read more)

Ah, now I see - thanks for clarifying. Yes, historically I do not know how much each setback to nuclear mattered. I can see that, e.g., constantly changing regulation during builds (which I think Isabelle actually mentioned) could pose a significant hurdle for continuing build-out. Here I would defer to other experts like you and Isabelle.

Porting this over to "we might over-regulate AI too", I am realizing it is actually unclear to me whether people who use the "nuclear is over-regulated" example mean the literal same "historical" thing could ... (read more)

Good question. I agree: people in EA who’ve actually worked on nuclear don’t usually claim over-regulation is the only or even dominant driver of the cost/buildout problem.

What I’m reacting to is more the “hot take” version that shows up in EA-adjacent podcasts — often as an analogy when people talk about AI policy: “look at nuclear, it got over-regulated and basically died, so don’t do that to AI.” In that context it’s not argued carefully, it’s just used as a rhetorical example, and (to me) it’s a pretty lossy / misleading compression of what’s going on.... (read more)

What I’m reacting to is more the “hot take” version that shows up in EA-adjacent podcasts — often as an analogy when people talk about AI policy: “look at nuclear, it got over-regulated and basically died, so don’t do that to AI.” In that context it’s not argued carefully, it’s just used as a rhetorical example, and (to me) it’s a pretty lossy / misleading compression of what’s going on.

I agree it's a bit lossy and sometimes reflexive (this is what I meant by relying on libertarian priors), but I am still confused about your argument.

Because the ar... (read more)

Good question — I think it’s mostly untrue as commonly used. It implies regulation is the main bottleneck, but as the podcast lays out, there are likely much better levers for driving down cost. So it’s both misleading and counterproductive as a talking point, even if you’re broadly pro-nuclear (which I and the podcast guest are).

6
jackva
Out of curiosity: Where have EAs argued that "nuclear is overregulated" and, more specifically, where have EAs argued that over-regulation is the only or dominant driver of the cost problem? It's probably true that this sometimes happens -- especially when EAs outside of climate/energy point to "nuclear is overregulated" as something in line with libertarian / abundance-y priors -- but I think those in EA who have done work on nuclear would not subscribe to or spread the view that regulation is the only driver of the nuclear problem. That said, it seems clearly true -- and I do think Isabelle agrees with that -- that regulatory reform is a necessary component of making nuclear in the West buildable at scale again (alongside many other factors, such as sustained political will, technological progress, re-established supply chains, valuing clean firm power for its attributes, etc).

This is genuinely incredibly impressive — a proof point that a small, dedicated team can create meaningful x-risk reduction impact through "policy" (at least if scientific consensus is a precursor to policy action). If so, subsequent progress here may also be relatively cost-effective: compared to stockpiles or hard infrastructure, the marginal public spend to adopt guidance and implement early measures could be low.

Also: I think this is extra impressive because my (anecdotal) experience is that many people in mainstream bio who hear “mirror bio” dismiss it as a non-issue — so shifting scientific consensus here seems like a significant achievement.

I’m pro-nuclear, but the commonly used EA framing of “nuclear is overregulated” seems net negative more often than not. Clearer Thinking’s new nuclear episode is one of the more epistemically rigorous discussions I’ve heard in EA-adjacent spaces (and Founders Pledge has also done nuanced work).

Nuclear is worth pursuing, but we should argue for it clear-eyed.

2
Hugh P
Net negative because it is a true statement? Or some other reason?

My read was that a major success was reaching broad, initial agreement, even among previously bullish scientists, that we should be extremely cautious in developing the scaffolding of mirror bio, if we develop it at all. I think that is truly remarkable, borderline historic. This is agreement across national borders and scientific disciplines, and the argument they put forward was not watertight - there was no definitive proof that mirror bio would assuredly be catastrophic. So this consensus was built on plausible risk only. It was extremely well pulled off. It is the kind of thing skeptics could easily dismiss, and still do dismiss, as "sci-fi".

I ran this very lightweight poll and super crudely (probably massive sampling bias) 4 out of 9 EAs residing in the US considered moving abroad.

4
Denkenberger🔸
Thanks for doing this and for pointing it out to me. Yeah, participation bias could be huge, but it's still good to get some idea.

Naïve question: Do you know if there is data on YouTube's potential to convert people into highly engaged EAs who would not otherwise convert? I think YouTube is worth testing, but if there is little data already, I would be interested to see anything on conversion or even proxies for it. I know 80k hrs is rigorous, so they probably have some hypothesis for why it can work, or maybe they have hard evidence.

3
James Brobin
I haven't found anything about how much YouTube converts people to highly engaged EAs. I also haven't seen anything about what actually motivates people to be highly engaged in EA either. That said, I did just find this article from 80,000 Hours, which discusses how the organization moved away from ads and sponsorships but started to focus more on making their own videos. As such, it's probably not too unlikely we'll have a good answer from them in the near future.

I would really recommend looking into pre-schools in the Nordics. They have high sickness rates and, importantly, the government pays parents to stay home with sick kids. Even a 5% reduction in absence is worth millions, and the government explicitly asks for solutions to this.

But there is more: anyone can set up a nursery, and the authorities track absence rates across pre-schools (I know, because kids who are immunocompromised get preference in pre-schools with the lowest absence rates). Setting up one's own pre-school is paid for by the state - they... (read more)

Just a note that if anyone is interested in talking about this, please drop me a DM. I have some experience and think there might be something to do in this space.

Do you know if Longview does something like assign a person to the new potential donor? I think, for example, a donor going to their first EAG might not have enough bandwidth themselves to make sense of the whole ecosystem and get the most out of engaging with all donation opportunities.

4
OscarD🔸
My understanding is Longview does a combination of actively reaching out (or getting existing donors to reach out) to possible new donors, and talking to people who express interest to them directly. But I don't know much about their process or plans.

My alma mater! A completely irrational and sentimental upvote from me haha!

This warms my heart, thanks for writing, Julia! A note from a dad trying to be supportive: I also want to acknowledge the mothers who let dads take care of the kids their own way. While it is not possible to generalize, having observed dads with children, at least here in Scandinavia, they often do things differently. Letting fathers parent their own way and trusting them makes it much easier for dads to care for children. Someone mentioned interest in taking care of kids - this interest can be increased, in my experience drastically, by letting fathers ta... (read more)

To be clear, I think there is absolutely no intention of doing this. EA existed before AI became hot, and many EAs have expressed concerns about the recent, hard pivot towards AI. It seems in part, maybe mostly (?), to be a result of funding priorities. In fact, a feature of EA that hopefully makes it more immune than many impact-focused communities to donor influence (although far from total immunity!) is the value placed on epistemics - decisions and priorities should be argued clearly and transparently, including why AI should take priority over other cause areas. Glad to have you engage skeptically on this!

Love this framing — in my own EA work I’ve found that leaning into boldness in marketing outperforms caution. Still, I’d be really curious if anyone has data on how coolness affects downstream outcomes — not just reach, but who we attract and any data that might indicate how it shapes culture over time.

What I’ve learned from informal background checks in EA

I sometimes do informal background or reference checks on "semi-influential" people in and around EA. A couple of times I decided not to get too close — nothing dramatic, just enough small signals that stepping back felt wiser. (And to be fair, I had solid alternatives; with fewer options, one might reasonably accept more risk.)

I typically don’t ask for curated references, partly because it feels out of place outside formal hiring and partly because I’m lazy — it’s much quicker to ask a trusted friend ... (read more)

This is super helpful - do you feel like your overview even points at which potentially useful safety work is currently not covered by anyone?

5
Sudhanshu Kasewa
"anyone" is a high bar! Maybe worth looking at what notable orgs might want to fund, as a way of spotting "useful safety work not covered by enough people"? I notice you're already thinking about this in some useful ways, nice. I'd love to see a clean picture of threat models overlaid with plans/orgs that aim to address them.  I think the field is changing too fast for any specific claim here to stay true in 6-12m.

Very good point on coming to EA new. Maybe hearing about different cause areas in an intro workshop, then landing here and wondering if it is the Alignment Forum. It might even feel a bit like a bait and switch? If this is a recurring theme for newcomers to EA, it is something that should be looked at. Not sure if anyone is tracking the onboarding funnel into EA? If so, one might see people being interested initially, then dropping off when they hit a "wall of AI".

1
AndreuAndreu
This is concerning if the bait is cool, old-fashioned volunteering, and the switch is to AI. Read my answer to David's comment: from my background I interpret AI risk to be a fad, not without its merits, that will become relevant when/if robots self-manufacture and also control all the means of production, but that realistically is at least 2-3 human generations away. A cool read on a related topic, the technosphere: https://theconversation.com/climate-change-weve-created-a-civilisation-hell-bent-on-destroying-itself-im-terrified-writes-earth-scientist-113055 and the original coining of the term in 2014 by Peter Haff: https://journals.sagepub.com/doi/10.1177/2053019614530575

I’m skeptical that corporate AI safety commitments work like @Holden Karnofsky suggests. The “cage-free” analogy breaks: one temporary defector can erase ~all progress, unlike with chickens.

I'm less sure about corporate commitments to AI safety than Karnofsky. In the latest 80k hrs podcast episode, Karnofsky uses the cage-free example to argue it might be effective to push frontier AI companies on safety. I feel the analogy might fail in a potentially significant way: it breaks down in terms of how many companies need to be convinced:
-For cage fre... (read more)

I like the idea of just accepting it as moral imperfection rather than rationalizing it as charity — thanks for challenging me! One benefit of framing it as imperfection is that it helps normalize moral imperfection, which might actually be net positive for the most dedicated altruists, since it could help prevent burnout or other mental strain.

Still, I’m not completely decided. I’m unclear about cases where someone needs to use their runway:

A. They might have chosen not to build runway and instead donated effectively, and then later, when needing runway, ... (read more)

Thanks for posting this — I came to similar conclusions during a recent strategy sprint for a small org transitioning off major-donor dependence.

One thing I tried to push further was: how can small orgs actually operationalize this tradeoff? A few concrete ideas that might help others:

  • Run small experiments early — not just to test donor conversion, but to triage which sources are worth pursuing at all. You might find several are cost-efficient, in which case diversification isn’t so costly. Quick tests: EA Forum post, alumni fundraising email to 100–300 pe
... (read more)

Just to add my personal experience, if you might be planning direct work, especially entrepreneurship and/or might want to have children - a personal runway has served me well. Not sure if this is stretching the "giving 10%" too far, but you could mentally consider it donated and in case you don't need it later, you can donate it then. I think at least 12 months of runway at your anticipated future expenses might be the right level (so not a student expense, but if you might want children, accounting for all related expenses). Another situation that could ... (read more)

6
Davidmanheim
Strongly both agree and disagree - it's incredibly valuable to have savings, it should definitely be prioritized, and despite being smart, it's not a donation! So if you choose to save instead of fulfilling your full pledge, I think that's a reasonable decision, though I'd certainly endorse trying to find other places to save money instead. But given that, don't claim it's charitable; say you're making a compromise. (Moral imperfection is normal and acceptable, if not inevitable. Trying to justify such compromises as actually fully morally justified, in my view, is neither OK, nor is it ever necessary.)

Have you checked with a nearby local EA group whether they have younger people looking for mentors? I find that sometimes the youthful optimism energizes me too - like going to church!

7
Michael_PJ
I have not - maybe I should!

Btw, for anyone this helps: my Norton antivirus did not like the download. I decided this was high-trust enough that I disabled it, and as far as I know nothing bad happened. I could turn it on again after installing the excellent software.

3
Christoph Hartmann 🔸
Ah yes that unfortunately happens sometimes. Because the software offers (optional) keystroke tracking it has some dependencies that I'd imagine trigger antivirus software.

Yesssss!!!! I am trying it right away. I also think for many here, using timers to set limits is important. Like capping your work week at 50 or at most 60 hours (or less if you have caretaking responsibilities). That way you don't let guilt push you into unhealthy territory. That's how I use timers. Also great for parents who are both ambitious, to make sure one does not get a career advantage just by feeling more nervous or something.

8
Christoph Hartmann 🔸
Yes fully agree that capping is important. I'd probably cap it much lower (I guess I average about 20-30h/week of actual work on DoneThat). I like this post where people share how many hours they work https://forum.effectivealtruism.org/posts/byMQvEHWur23bLpQw/how-much-do-you-actually-work#GBXjoJZudHpLh72Mg. Anecdotally I also talked with somebody who tracked productive hours in a high-paid US tech job, averaged about 4h/d and got promoted with that.

I agree. Reading your comment made me think that it might be interesting — even if just as a small experiment — to map out which historical figures we feel struck the ~right balance between ambition and caution.

I don’t know if it would reveal much, but perhaps reading about a few such people could help me (and maybe others) better calibrate our own mix of drive and risk aversion. I find it easier to internalize these balances through real people and stories than through abstract arguments. And that kind of reflection could, perhaps only in a small way, help prevent future crises of judgment like FTX.

Perhaps this is mentioned elsewhere here, but if we look for precedents of people doing an enormous amount of good (I can only think of Stanislav Petrov and people making big steps in curing disease), these people did not, I think, actually act recklessly. It seems more like they persistently applied themselves to a problem, not forcing an outcome too hard and aligning a lot with others (like those eradicating smallpox). So if one wants a hero mindset, it might be good to emulate actual heroes whom we agree did a lot of good and who also reduced the risk of their actions.

I think there are examples supporting many different approaches and it depends immensely on what you're trying to do, the levers available to you and the surrounding context. E.g. in the more bold and audacious, less cooperative direction, Chiune Sugihara or Oskar Schindler come to mind. Petrov doesn't seem like a clear example in the "non-reckless" direction, and I'd put Arkhipov in a similar boat (they both acted rapidly under uncertainty in a way the people around them disagreed with, and took responsibility for a whole big situation when it probably would have been very easy to say to themselves that it wasn't their job to do things other than obey orders and go with the group).

I am really sorry to hear that it got this bad. I must admit I did not actually consider the diversity of our community's experiences when crafting this poll, and instead wrote it quickly, knee-jerk, from a white, het-cis, male perspective. But as you point out, the situation might be much worse for people affected more directly, and might also extend to reproductive rights and more. I really hope you will soon find a place where you are safe, and I feel a bit inadequate for not having the capacity to do more than write these words.

A proposal for an "Anonymity Mediator" ("AM") in EA. This would be a person that mostly would strip identity from information. For example, if person A has information about an EA (person B) enabling dangerous work at a big AI lab, the AM would be someone person A could connect with, giving extremely minimal information in a highly secure way (ideally in-person with no devices). The AM would then be able to alert people that perhaps should know, with minimal chance of person A's identity being revealed. I would love to see a post for a proposal for such a person and if it seems helpful (community issues, information security, etc.) maybe a way to make progress on funding and finding such a person.

A combined guide for EA talent to move to stable democracies and a call to action for EA hubs in such countries to explore facilitating such moves. I know there are people working on making critical parts of the EA ecosystem less US-centric. It might be that I am missing other work in this direction, but I think this is a good time for EA hubs in e.g. Switzerland and the Nordics to see if they can help make EA more resilient, should it be needed in possibly rough times ahead. Perhaps also preparing for sudden influxes of people, or facilitating more rapid support in case things start to change quickly.

4
LintzA
Interesting! Hadn't read this newsletter yet. Excerpting the text here: "It remains a good idea for readers concerned about tail risks to consider getting a residency permit, or a passport, in countries such as Mexico, Panama, Paraguay, Uruguay, etc., in case the political climate in the US becomes more turbulent."

Since you are pursuing E2G, you might actually want to let your job search dictate your choice of city - just an idea. There are several good contenders, and flights between cities in Europe are cheap. Berlin and Stockholm have good tech scenes if you are thinking of joining a start-up early. Otherwise you might just want to look for jobs across the top EA cities and pick the one where you find the highest wage. Depending on your AI timelines, you might or might not want to consider career progression - something like how many CS jobs there are in the city in total, and whether any large tech companies with high wages have HQs or large offices there.

4
ceselder
Yeah! Those are my thoughts exactly! I will just mass-apply across European hubs and take the highest-impact position (within reason; living in a car-dependent city, for example, is a no-no for me). I wonder if there would be interest in a follow-up post breaking down every major EA hub city and comparing general pros/cons (to the extent that is even possible). I could break down tax laws as well.

Thanks Neel, I totally agree. I hope me updating the relevant answers to "U.S. citizens or green card/work permit holders" is not too hard to understand.

Non-US (MIC/LMIC) – started considering moving abroad in the last 12–24 months

Non-US (HIC) – started considering moving abroad in the last 12–24 months

U.S. citizens or green card/work permit holders – not considering moving abroad

U.S. citizens or green card/work permit holders – started considering moving abroad in the last 12–24 months

3
Wyatt S.
a. Probably Canada or Australia/NZ. b. Lack of ability to do good in authoritarian/anti-science (in particular related to vaccinations and medicine) regimes is the first factor. The second factor is questioning/fluctuating gender identity, which may cause me to be targeted. Additional Notes: I think I started making serious efforts around June 2024, but I thought the authoritarianism would be seriously limited, so for the sake of my mental health, I decided not to transfer colleges. So far, it seems like most tell-tale signs of authoritarianism (threats to invade countries, kidnappings off the streets) have come to pass. So I am now trying to move again, hopefully by this winter or spring.

Community > Epistemics
Community is more important to EA than epistemics. What drives EA's greater impact isn’t just reasoning, but collaboration. Twenty “90% smart” people are much more likely to identify impactful interventions than two “100% smart” people.

I may be biased by how I found EA—working alone on “finding most impactful work” before stumbling into the EA community—but this is the point: EA isn’t unique for asking, “How can I use reason to find the most impactful interventions?” Others ask that too. EA is unique because it gathers those people, and facilitates funding and coordination, enabling far more careful and comprehensive work.

1
idea21
EA, unlike other large humanitarian organizations such as Oxfam (for example), has the valuable originality of focusing its activity not so much on the humanitarian tasks to be carried out, but rather on the altruistic disposition of donors and collaborators (who thus form a community). It therefore has its own ideological character, which is rationalist and apolitical, and which can be further developed in this sense.

I'm not so sure, there are quite a lot of groups that gather together, but not as many that trade off the community side in favour of epistemics (I imagine EA could be much bigger if it focused more on climate or other less neglected areas).

I also wouldn't use the example of 20 vs 2, but with 10,000 people with average epistemics vs 1,000 with better epistemics I'd predict the better reasoning group would have more impact.

I have not deeply contemplated the meaning of Rethink Priorities' findings on cross-cause prioritization, but my perhaps shallow understanding was that despite a somewhat high likelihood of AI catastrophe arriving quite soon, "traditional" animal welfare looked good in expectation. I think the point was something like: despite quite high chances of AI catastrophe, the even higher chances (but far from 100%) of survival mean that in expectation animal welfare looks very good. So while it is not guaranteed animal welfare interventions will pay off due to an interven... (read more)

One non-expert idea here is to assume that all the building blocks of mirror bacteria exist - what would it take then to create effective mirror phages? Is there any way we can make progress on this already, without those building blocks but knowing roughly what they are, and in a defense-favoring way? Again, I would really align with other biosec folks at OP, Blueprint and MBDF on this, as I feel very hesitant about unilateral actions. But something like this might have legs, especially if some plausible work can be outlined that can be done with current techniques.

1
Nnaemeka Emmanuel Nnadi
I have an idea that appears harmless but would help us see how normal phages would interact with mirror bacteria. However, I do not know how to approach any of these funders.

Hi Nnaemeka, yeah I totally agree about not doing anything that could potentially advance the creation of dangerous mirror organisms. I am commenting just to reiterate what I said about "defense-favoring" - I know little of microbiology but thought I would mention it just in case there might be some way to very lightly modify an existing non-mirror phage to "hunt and kill" mirror microbes (e.g. just altering its "tracking" and "ingestion" system). This is probably an incredibly naive idea, but I thought I would put it out there as there is a whole chapter on phages ... (read more)

I know little of microbiology, but I know there is some focus on mirror bacteria. One possible pivot that could attract funding would be to look if phages can be made to track and consume mirror bacteria. This is a super speculative idea, but I think there might be some funding for defenses against mirror life. Perhaps you have already looked at the detailed report on mirror life published at the end of last year (my non-expert read was that it was believed phages would not work - but maybe it is possible to make "mirror phages" in a defense-favoring way)?

3
Nnaemeka Emmanuel Nnadi
Thank you for sharing this—it’s a fascinating idea. I haven’t read the detailed report you mentioned, but I’ve followed some of the broader discussions around mirror life. You’re right that conventional phages wouldn’t work against mirror bacteria because of the chirality mismatch. In theory, only “mirror phages” built from mirror-biological components could infect them. The idea of mirror phages is interesting because, if mirror organisms were ever discovered or engineered, they might be immune to all our natural defenses and medical tools. In that context, mirror phages could represent one of the very few biological defenses available. Exploring that possibility would also stretch our understanding of what life could look like beyond Earth, which is scientifically exciting.

My concern, however, is twofold. First, the technical barrier is enormous—we don’t currently have the capacity to build entire mirror-biological systems. Second, and more importantly, creating self-replicating mirror entities—whether bacteria or phages—would carry profound risks. Once released, they would operate on completely different biochemistry, outside the checks and balances of our ecosystems. We could neither predict nor easily contain their behavior, because no existing biological process in our world could break them down. That means even if they posed no direct harm to us, they could persist indefinitely, occupying niches, competing for resources, or interacting with the environment in ways we cannot anticipate.

Another layer of complexity is that phages are natural genetic transducers—they move genes between organisms. If mirror phages were ever created, we cannot be certain how they might interact with ordinary bacteria. While direct gene transfer across chiral systems seems unlikely, biology has a way of surprising us, and even small, indirect interactions could have unforeseen consequences. This uncertainty makes their study both intriguing and potentially risky. So while t

One point I have raised earlier: if one is worried about neocolonialism, reducing the risk from powerful technology might look like a better option. It is clear that the global south is bearing a disproportionate burden from fossil fuel burning by rich nations. Similarly, misuse or accidents in nuclear, biotechnology and/or AI might also cause damage to people who had little say in how these technologies were rolled out. Nuclear winter in particular seems like it would disproportionately harm poor people, so preventing it looks especially relevant here, but I think AI Safety and Biosecurity are also likely candidates for lowering the risk of perpetuating colonial dynamics.

Fixed! Thanks for pointing that out.
