I have received funding from the LTFF and the SFF and am also doing work for an EA-adjacent organization.
My EA journey started in 2007 as I considered switching from a Wall Street career to instead help tackle climate change by making wind energy cheaper – unfortunately, the University of Pennsylvania did not have an EA chapter back then! A few years later, I started having doubts about whether helping to build one wind farm at a time was the best use of my time. After reading a few books on philosophy and psychology, I decided that moral circle expansion was neglected but important and donated a few thousand pounds sterling of my modest income to a somewhat evidence-based organisation. Serendipitously, my boss stumbled upon EA in a thread on Stack Exchange around 2014 and sent me a link. After reading up on EA, I then pursued E2G with my modest income, donating ~USD 35k to AMF. I have done some limited volunteering to help build the EA community here in Stockholm, Sweden. Additionally, I set up and was an admin of the ~1k-member EA system change Facebook group (apologies for not having time to make more of it!). Lastly (and I am leaving out a lot of smaller stuff like giving career guidance, etc.), I have coordinated with other people interested in doing EA community building in UWC high schools and have even run a couple of EA events at these schools.
Lately, and in consultation with 80k hours and some “EA veterans”, I have concluded that I should instead consider working directly on EA priority causes. Thus, I am determined to keep seeking opportunities for entrepreneurship within EA, especially considering whether I could contribute to launching new projects. Therefore, if you have a project where you think I could contribute, please do not hesitate to reach out (even if I am engaged in a current project - my time might be better spent getting another project up and running and handing over the reins of my current one to a successor)!
I can share my experience working at the intersection of people and technology in deploying infrastructure and a new technology (wind energy) globally. I can also share my experience of coming from "industry" and moving into EA entrepreneurship/direct work. Or anything else you think I can help with.
I am also concerned about the "Diversity and Inclusion" aspects of EA and would be keen to contribute to making EA a place where even more people from all walks of life feel safe and at home. Please DM me if you think there is any way I can help. Currently, I expect to have ~5 hrs/month to contribute to this (a number that will grow as my kids become older and more independent).
The book "Careless People" starts as a critique of Facebook — a key EA funding source — and unexpectedly lands on AI safety, x-risk, and global institutional failure.
I just finished Sarah Wynn-Williams' recently published book. I had planned to post earlier — mainly about EA’s funding sources — but after reading the surprising epilogue, I now think both the book and the author might deserve even broader attention within EA and longtermist circles.
The early chapters examine the psychology and incentives behind extreme tech wealth — especially at Facebook/Meta. That made me reflect on EA’s deep reliance (although it is unclear how much, as OllieBase helpfully pointed out after I first published this Quick Take) on money that ultimately came from:
These issues are not new (they weren’t to me), but the book’s specifics and firsthand insights reveal a shocking level of disregard for social responsibility — more than I thought possible from such a valuable and influential company.
To be clear: I don’t think Dustin Moskovitz reflects the culture Wynn-Williams critiques. He left Facebook early and seems unusually serious about ethics.
But the systems that generated that wealth — and shaped the broader tech landscape — could still matter.
Especially post-FTX, it feels important to stay aware of where our money comes from. Not out of guilt or purity — but because if you don't occasionally check your blind spots, you might cause damage.
Meta is now a major player in the frontier AI race — aggressively releasing open-weight models with seemingly limited concern for cybersecurity, governance, or global risk.
Some of the same dynamics described in the book — greed, recklessness, detachment — could well still be at play. And it would not be completely surprising if such a culture is, to some extent, being replicated across other labs and institutions involved in frontier AI.
In the final chapters, Wynn-Williams pivots toward global catastrophic risks: AI, great power conflict, and nuclear war.
Her framing is sober, high-context, and uncannily aligned with longtermist priorities. She seems to combine rare access (including relationships with heads of state), strategic clarity, and a grounded moral compass — the kind of person who can get in the room and speak truth to power. People recruiting for senior AI policy roles might want to reach out to her if they have not already.
I’m still not sure what the exact takeaway is. I just have a strong hunch this book matters more than I can currently articulate — and that Wynn-Williams herself may be an unusually valuable ally, mentor, or collaborator for those working on x-risk policy or institutional outreach.
If you’ve read it — or end up reading it — I’d be curious what it sparks for you. It works fantastically as an audiobook: a real page-turner with lots of wit and vivid descriptions.
I think this is super useful to share - thanks! One question: do you think you are striking the right balance between detail and speed of putting an application together? I am asking because, e.g., Lightspeed Grants tried to make applying for funding as quick and easy as possible, and after a quick skim, your application seems to pull more in the direction of impressive detail. I am commenting mostly because I could see some first-time grant applicants taking away the impression that funding applications require a lot of time to put together.
Actually, reading this again, I think maybe you have a point about the complexity of arguments/assumptions. I am not sure if it is Occam's Razor, but if one has to contort an argument into something weird and winding with unusual assumptions, maybe that strained attempt at something like "rationalization" should be a warning flag. That said, the world is complex and unpredictable, so perhaps reasoning about it has to be complex too - I guess this is an age-old debate with no clear answer!
Animal welfare, on the other hand, seems extremely easy to argue is important. Global poverty a little less so, but still easier than x-risk (the debate there is more about whether handing out mosquito nets is better than economic growth, democracy, human rights, etc.).
That is true, and perhaps I could have chosen a better wording. This is also why I put "cause neutrality" in quotation marks. I would welcome any suggestions for wording that might be less confusing. Apologies if I have caused confusion - I have now changed it to "cause balanced"; hopefully that is better and less confusing.
Are there events or discussion boards, perhaps not organized by the funders with an AI focus? A list of those might be a more "cause-neutral EA space" to explore initially, rather than EAG and listening to 80k hours, which will be dominated by AI. Perhaps people conducting EA introduction events could anchor on such a set of spaces and events, and categorize 80k hours in particular as an "AI org" - just one org in one of many parts of the EA ecosystem. That way, if newcomers start listening to 80k hours or go to an EAG, they know they are "really tapping into a single-cause AI" space.
I think the key assumptions for me are cubic or faster value increases, and that we will mostly have a future with very low risk - that it is only now, or during a few periods, that risk will be extremely high. In a sense, I see these assumptions as being in tension, as high value often comes with high risk. I was also just made aware that even sending digital beings to far-away galaxies looks extremely expensive energy-wise, even if one keeps only the minimum power requirement during a multi-year journey between solar systems (a rough back-of-the-envelope sketch is below).

I guess, in essence, I feel that to justify these assumptions one would have to really look into what they materially mean, and use historical precedent and reasonable analysis across a wide range of scenarios to see if they make sense. For me this is more intuition and a scepticism that enough work has been done to get certainty about these assumptions.

To some degree, I also feel that AI safety was a direction where funders might get more of a feeling "of doing something" - something I have been guilty of myself. Just chipping away at the stubborn problems of poverty/global health or animal welfare is likely to leave them "unsolved" even with billions more invested. Moreover, they lack novelty, and these "industries" are less prone to being affected, while AI is new and one can see more systemic effects. Maybe this last point actually supports AI safety - it might be more tractable in a sense.

Sorry this was long and not underpinned by much analysis, so I would welcome any analysis on these points, especially analysis that might change my mind.
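To make the energy point a bit more concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (probe mass, cruise speed, idle power draw, and the rough order-of-magnitude figure for annual world electricity) are my own illustrative assumptions rather than anything from the analyses above, and it considers a trip to a nearby star system rather than another galaxy:

```python
# Rough Fermi estimate: energy cost of sending a small probe carrying a
# "digital being" to a nearby star system. All parameter values below are
# illustrative assumptions, not established figures.

C = 299_792_458           # speed of light, m/s
LY = 9.4607e15            # one light year, m

probe_mass_kg = 1_000     # assumed payload mass
cruise_speed = 0.1 * C    # assumed cruise speed: 10% of light speed
distance_ly = 4.2         # roughly the distance to the nearest star system
idle_power_w = 100        # assumed minimum power draw to keep the payload running

# Classical kinetic energy (relativistic corrections are modest at 0.1c)
kinetic_j = 0.5 * probe_mass_kg * cruise_speed ** 2

# Energy to keep the payload powered for the whole trip
travel_time_s = distance_ly * LY / cruise_speed
idle_j = idle_power_w * travel_time_s

total_j = kinetic_j + idle_j
world_electricity_j_per_year = 1e20   # rough order-of-magnitude figure

print(f"Kinetic energy: {kinetic_j:.1e} J")
print(f"Idle energy:    {idle_j:.1e} J over ~{travel_time_s / 3.15e7:.0f} years")
print(f"Total:          {total_j:.1e} J, "
      f"~{total_j / world_electricity_j_per_year:.1%} of a year of world electricity")
```

Even under these fairly generous assumptions, accelerating a single such payload costs on the order of half a percent of a year of world electricity generation, and intergalactic distances are roughly a million times longer. This is only meant to illustrate why the "extremely expensive" claim did not seem implausible to me, not to settle it.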
I just have to call out the amazing work by Rethink Priorities and those who funded this sequence of analyses (not sure who that is - would welcome info!): https://forum.effectivealtruism.org/s/WdL3LE5LHvTwWmyqj
I guess this might be the "last, properly funded EA analysis", unless something came out after it that I missed ("last" in the sense that, going forward, funders seem to be doubling down on AI and might not rethink this decision in the near future)? I think the takeaway from this work by Rethink Priorities, for me, is that it is not at all unreasonable to focus on things other than AI, as going all in on AI seemed to require a set of quite extreme beliefs/assumptions. I would be happy to be corrected if my simple takeaway is overly naive.
Yeah, if $100 can save you even just 3 filter replacements, that sounds like a good investment. Maybe I should do this myself. For now, I will just hope that motor sound plus my intuition/tacit knowledge of air purifier airflow is enough for me to realize when a filter definitely needs replacing. Thanks Jesse!
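For what it's worth, here is the trivial break-even arithmetic I had in mind, as a small Python sketch; the filter price is an assumption I made up for illustration:

```python
import math

# Break-even sketch: when does a one-off ~$100 purchase (e.g. an air quality
# monitor) pay for itself by avoiding unnecessary filter replacements?
# The filter price below is an illustrative assumption.

monitor_cost_usd = 100
filter_cost_usd = 40   # assumed price of one replacement filter

replacements_to_break_even = math.ceil(monitor_cost_usd / filter_cost_usd)
print(f"Pays for itself after avoiding {replacements_to_break_even} unnecessary replacements.")
```

With a $40 filter that works out to 3 avoided replacements, which is where my "even just 3" came from; cheaper filters would obviously push the break-even point higher.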
Fixed! Thanks for pointing that out.