
The EA Hub will retire on the 6th of October 2024.

In 2022 we announced that we would stop further feature development and maintain the EA Hub until it is ready to be replaced by a new platform. In July this year, the EA Forum launched its People Directory, which offers a searchable directory of people involved in the EA movement, similar to what the Hub provided in the past.

We believe the Forum has now become better positioned to fulfil the EA Hub’s mission of connecting people in the EA community online. The Forum team is much better resourced and users have many reasons to visit the Forum (e.g. for posts and events), which is reflected in it having more traffic and users. The Hub’s core team has also moved on to other projects.

EA Hub users will be informed of this decision via email. All data will be deleted after the website has been shut down.

We recommend that you use the EA Forum’s People Directory to continue to connect with other people in the effective altruism community. The feature is still in beta mode and the Forum team would very much appreciate feedback about it here.

We would like to thank the many people who volunteered their time in working on the EA Hub.


Thanks for all the hard work that went into building/rebuilding/maintaining EA Hub!

It's always sad to see old projects get shuttered, especially ones that were a labour of love, so kudos on recognising that it's the right time to do this.

You may want to see if anyone wants the URL.

Personally, I’d love to see a resource like aisafety.com which provides a broad overview of resources.

effectivealtruism.com only links to a narrow range of resources because it’s run by the Centre for Effective Altruism, so I expect there would be a need for them to “make it official”.

I’d be happy to assist with trying to find a new home for it, if that would help.

Thanks for thinking about how this resource could be used!

Do you mean that another project could use the domain "eahub.org"? I am sceptical that this is a good idea. There was already confusion with another project of the same name in the past, so adding a third project into the mix would be confusing, and might make it harder for that third project to build a clear brand.

Only so many good short names though.

Would it be possible to pass some of the data to the EA Forum's directory, perhaps with a simple permission tick box? This could help kickstart things a lot by building network externalities.

This also seems like a good precedent to me. People will be more likely to participate in data-gathering projects like this in the future if they think that, if an initiative is retired, there is the opportunity to re-purpose their data for its successor.

We did consider this but decided against it, for several reasons:
- We felt the only right way to do this would have been to make it opt-in, i.e., users would have had to actively agree to it. We assumed that very few users would have done so and that the effort would not have been worth it.
- We raised this option with the Forum team in 2023 when discussing a possible handover, and they felt similarly.
- The vast majority of the profiles will be quite outdated by now (the % of users returning to the Hub to update their profile was quite low), so the quality of the data would not have been great either.

In the email we're sending to users, we encourage them to use the Forum's People Directory, so we hope that those users who are on the Hub but not on the Forum will migrate their profiles (I believe this will be a rather small subset of those Hub users who are still actively involved in EA).

I understand the logic, but I think there could be some workarounds that make this doable. If you put a “last updated” date on profiles, people can judge for themselves whether the information is rusty.* I would worry a bit that not doing this makes people think “why should I fill in my information on this next thing when I’m just going to have to do so again when they switch it up next time?”

*By the way, voice recognition replaced “trustworthy” with “rusty”, but actually it works just as well. :-)

You're of course right, there would be some advantages to doing it. It comes down to whether it's worth the time, and we concluded it is probably not.

That’s fair. But maybe hold onto the previous database if possible, in case the signup for this one is low and it needs a kickstart?

As communicated here and to users via email, all data will be deleted after the website goes down, so this is not an option. 

However, regarding your use case (finding lawyers/accountants), I have tried using:
- free text search on the People Directory: it seems not to work so well, and is something that could be improved.
- the role filter feature: that seems like it might be useful to you!

E.g. I was looking for EA/earning-to-give lawyers and accountants to hire and had trouble finding them.

Dunno if it's still helpful, but https://www.highimpactprofessionals.org/talent-directory is a directory of EAs looking for work and contained several each of lawyers and accountants on a quick search.

Did you find them? Perhaps they should be added to the periodic EA Advertisements post?

Helpful, but to disambiguate: that is a directory of professionals who want to do impactful work.

I am also looking to favour “earning to give” professionals willing to do non-impactful work.

À la my “corporate bake sale” post and @Brad West's “Introducing the Profit for Good Blog: Transforming Business for Charity”.

I was referring to this post, which I think includes services for those who are earning to give?

Thanks for all of the hard work you put into developing and maintaining it! 

Posting from an alt account...

Very disappointed that the EA Hub is not at least retaining the information for future use, or otherwise making any effort to convey it. Collectively, EAs, including myself, have spent a lot of time submitting this information. EAs spend their valuable time contributing to these projects trusting that the information will be stewarded and used. Now it seems no effort is being made to preserve even the possibility of future use of this information. I don't know what the cost of retaining the information would be, but I would be very surprised if it were higher than the value of the possibility of its future gainful use, plus the harm from the loss of trust caused by the products of community members' time being discarded so cavalierly.

The lack of care for the information gathered will likely cause people to slightly update against spending their time to collaborate in networking projects. 

Perhaps someone would want to champion this and get EA ~Meta or EA community funds to cover the cost of their time and tech in maintaining this? As @Midtermist12 says, I think the investment would be worth it, both for the use of this info itself, and for the knock-on effects for future collaboration. 

This seems potentially important for initiatives like @Brad West's Profit For Good initiative.  
A warm list for this ... Earning to give/EA people in relevant businesses. 

 

Thanks for thinking of us, @david_reinstein.

Right now, we're focused on gathering information about Profit for Good businesses. Down the line, we’re definitely interested in compiling a guide of individuals or businesses that might offer favorable terms to Profit for Good enterprises, especially those benefiting effective charities. However, at the moment, we don’t have the capacity to work on compiling and developing this list.

Sebastian addressed this in a comment below. I'll also add that the Hub is a volunteer-run project, and we have limited time / resources. 

Receiving the news that the hub was shutting down is the reason I finally came here and looked around properly. 

Glad I did.

People have expressed reasons why they think this is disappointing but I will add another one. This is yet another way that the groupthink-generating karma system will distort our community.

The EA hub didn't sort people based on karma, but now this new system does. The karma system generates soft-censorship, self-censorship and groupthink. If you write a critique on a post with high-karma authors, they can just strong-downvote it and delete it from the frontpage, which just leaves you with less karma and voting power in the future (and other people can see your low comment-to-karma ratio, or the little icon indicating it, and dismiss you).

People with more voting power can downvote people they disagree with giving them less voting power (and thereby less voting power they can distribute to other people of similar sentiment... ad infinitum) while conversely upvote things they agree with giving those people more voting power (and thereby more voting power they can distribute to people they agree with... ad infinitum).

Getting sorted lower on the directory becomes yet another reason why people wouldn't want to criticize popular EA people/ideas (and the people that disregard this and go against their own self-interest to help EA as a whole will get yet another punishment for it).
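The compounding dynamic described above can be illustrated with a toy simulation. Everything here is made up for illustration: the vote-strength rule, the camp sizes, and the voting behaviour are hypothetical assumptions, not the Forum's actual karma algorithm.

```python
# Toy model of a karma feedback loop: vote strength grows with a voter's
# karma, and two camps vote along camp lines. All rules are hypothetical.
import random

random.seed(0)

def vote_strength(karma):
    # Assumed rule: more karma -> stronger votes, capped at 10.
    return min(10, 1 + karma // 100)

majority = [100] * 8   # starting karma of the larger camp
minority = [100] * 2   # starting karma of the smaller camp

for _ in range(50):  # 50 rounds of voting
    # Each camp upvotes a random member of its own camp and
    # downvotes a random member of the other camp.
    for camp, other in ((majority, minority), (minority, majority)):
        strength = sum(vote_strength(k) for k in camp)
        camp[random.randrange(len(camp))] += strength
        j = random.randrange(len(other))
        other[j] = max(0, other[j] - strength)

print("majority total karma:", sum(majority))
print("minority total karma:", sum(minority))
```

Because the majority's aggregate vote strength is larger from the start, its karma (and hence its future vote strength) compounds each round, while the minority's is pushed toward zero: a rich-get-richer loop, under these assumptions.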
