There are a number of EA infrastructure projects that provide free or cheap support to other effective altruists, two of which I'm involved with. Something they all seem to have in common is that a major limiting factor is spreading the word about the availability of the service or product and keeping the word spread.

One can justify a handful of posts here and in Facebook groups about any given project, but after that, unless it has changed in some way, further posts seem increasingly like spam - even when the project is still useful and still limited by EAs remembering it or ever learning of it.

So this is a quick post with a few related objectives:

  1. to call for better standardisation of norms for promoting such projects
  2. to open discussion of some kind of such advertising on this forum, since it's probably the single place where it would be most visible
  3. to discuss the possibility of advertising in other venues. E.g. such projects could promote each other - although again a limiting factor is that not all such projects know about all the others
  4. partly to help with 3, partly for its own sake, to provide a list of such projects. I'll mention all the ones that occur to me below with the best description I can easily find. Please let me know any I've missed in the comments, and I'll edit them into this post

The last could be quite subjective since anything could qualify as a resource and I don't want the list to become overwhelming, so I'll give some initial guidelines - feel free to persuade me they should be changed:

  • The resource should be free to use, or available at a substantial discount to relatively poor EAs
  • It should be aimed specifically at EAs
  • It should make the lives of the people using it better, not just 'enable them to do more good'
  • It should be available to people across the world (i.e. not just a local EA group)
  • It should be a service or product that someone is putting ongoing work into (i.e. not just a list of tips, or Facebook/Discord/Slack groups with no purpose other than discussion of some EA subtopic)

At the very least, hopefully this post can then become a useful reference for people looking to see what benefits the community can provide them.

Coworking/socialising:

  • EA Gather Town - An always-on virtual meeting place for coworking, connecting, and having both casual and impactful conversations
  • EAVR - A community for people interested in Effective Altruism who use VR to connect and collaborate
  • EA Anywhere - An online EA community for everyone
  • EA coworking Discord - A Discord server dedicated to online coworking

Professional services:

  • Altruistic Agency - provides free tech support and development to organisations
  • Tech support from Soof Golan
  • Legal advice from Tyrone Barugh - a practice under consideration, probably based in the UK, with the primary aim of providing legal support to EA orgs and individual EAs
  • EA mental health navigator - aims to boost people's well-being by connecting them with mental health resources considered to be effective and to have the greatest likelihood of being helpful
  • SEADS - Data Science services to EA organizations
  • User-Friendly - an EA-aligned marketing agency
  • Anti Entropy - offers operations-related services for EA organizations
  • Arb - Our consulting work spans forecasting, machine learning, and epidemiology. We do original research, evidence reviews, and large-scale data pipelines.
  • Pineapple Operations - Maintains a public database of people who are seeking operations or Personal Assistant/Executive Assistant work (part- or full-time) within the next 6 months in the Effective Altruism ecosystem

Coaching:

  • Training for Good - coaching for EAs (https://www.trainingforgood.com/coaching)
  • Free coaching for software developers in the EA community (https://forum.effectivealtruism.org/posts/FkWHn6WaFGzrzqb9P/i-m-offering-free-coaching-for-software-developers-in-the-ea)

Financial and other material support:

  • CEEALAR (formerly the EA Hotel) - provides free or subsidised serviced accommodation and board, and a moderate stipend for other living expenses.
  • Nonlinear Productivity Fund - a low-barrier fund paying for productivity-enhancing tools for top longtermists. Supported services and products include coaching, therapy, sleep coaching, medication management, personal assistants, research assistants, virtual assistants, tutors (e.g. ML, CS, language), productivity apps (e.g. Asana, FocusMate, Zapier), and items such as A/C units, dishwashers, and SAD lamps.
  • Effective Altruism Funds - Whether an individual, organisation or other entity, we’re eager to fund great ideas and great people.
  • Nonlinear fund - We incubate longtermist nonprofits by connecting founders with ideas, funding, and mentorship
  • FTX Future Fund - Supports ambitious projects to improve humanity's long-term prospects
  • Survival and Flourishing Fund - A “virtual fund”: we organize application submission and evaluation processes to help donors decide where to make donations.
  • Open Philanthropy Project - a research and grantmaking foundation that aims to share its findings openly
  • Berkeley Existential Risk Initiative - Supports university research groups working to reduce x-risk, by providing them with free services and support.
Comments (24)



Couldn't agree more! 

Some others to add to the list:

Amazing, thanks Kat!

This is a great list. Always happy to have people on EA Radio to talk about their projects if they want! 

That's a cool idea, though I have the feeling that for most projects such as these there'd be relatively little to talk about beyond letting people know the product or service exists. How would you imagine structuring a longer conversation?

It doesn't have to be that long but I think there are a couple more things we could discuss: what groups are using these platforms, what connections are being made, what events have been done or are in the works, what do people like about the platforms, how do they differ from other ones, what features are being added or are people considering adding, what projects or impacts have resulted. 

More professional services: 

  • SEADS offers Data Science services to EA organizations
  • User-Friendly is an EA-aligned marketing agency
  • Anti Entropy offers operations-related services for EA organizations

Together with Altruistic Agency, these agencies are working to form an umbrella organisation; more info on this will be announced soon.

Thanks! I don't have time to check all the links atm. Do you know whether any/all of them offer free or strongly discounted services?

They all offer free or strongly discounted services for EA Orgs. 

Yeah, I have this pain point with: https://forum.effectivealtruism.org/posts/FkWHn6WaFGzrzqb9P/i-m-offering-free-coaching-for-software-developers-in-the-ea

And lately some other services I offer, like improving job posts

How could I forget? O_O Added to the OP!

Under coaching there is also https://www.trainingforgood.com/coaching and https://lynettebye.com/services.

The list in the article seems like it's probably very incomplete. I'm not aware of other similar lists others have made, but they may exist.

I've added TfG. Lynette Bye doesn't look as though she meets the 'free or heavily discounted' requirement.

Any information I have is multiple years outdated, but her about page says "Many thanks to EA Grants, the EA Meta Fund, and the Long-Term Future Fund for supporting EA Coaching" so I assume she at least still offers some heavily discounted coaching to at least some EAs. Probably worth it for whoever is taking on the job of trying to make the list somewhat complete to reach out.

I had 4 coaching calls with her for free after 80,000 Hours directed me to her.

There's a newer version of this post here. I suggest linking it at the top of this post so that people don't spend a bunch of time reading through this only to realize there is a more up-to-date list of resources. I see the final comment here mentions it, but it would be better to put it prominently at the top. Thanks!

Unclear if research consultancies count as infra, but Arb answer hard questions for people.

Looks as though they'd charge consultancy fees, though?

We've done pro bono stuff before, to each according to need.

The EA Opportunity Board to help members find actionable next steps!

Consider adding the Berkeley Existential Risk Initiative (BERI) to the list, either under Professional Services or under Financial and other material support. Suggested description: "Supports university research groups working to reduce x-risk, by providing them with free services and support."

Thanks! I've added them now.

This is a great idea! Would also add Magnify Mentoring, which provides (free!) services to support more people with mentorship, particularly those from traditionally underrepresented groups.

Sorry Jessica, I somehow missed this comment until now. I've just added them to the latest version of this post.
