
If one were to create a custom social network for the effective altruism community, what would it look like?

In this article, I present an example concept and ask: what are its flaws or risks? What could be improved? What would be a different way to do it?

I tried to make it a simple, impact- and numbers-driven network that could add a lot of value and cover most of the aspirations EA community members have.

Summary

The concept is centered around goals and action.

  1. Members add goals to their profiles.
  2. Then, they add activities to those goals (for example, a raw idea for doing something).
  3. Artificial intelligence automatically matches people and organizations having similar or opposite goals (for example, “needs and gives”).
  4. Then, users select the best matches for their circles around the goal.
  5. There, users see each other's activities and collaborate within them.
  6. Numbers are added to measure goals, and graphs gamify progress.
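The steps above could be sketched as a minimal data model. This is only an illustration of the concept; all class and field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    title: str
    kind: str  # "idea", "task", or "conversation"

@dataclass
class Goal:
    title: str
    description: str
    target_number: float        # the core number used to measure the goal
    current_number: float = 0.0
    activities: list[Activity] = field(default_factory=list)
    circle: list[str] = field(default_factory=list)  # ids of matched members

# Example: a member adds a goal, then attaches a raw-idea activity to it
goal = Goal("Reduce local food waste", "Connect shops with charities", target_number=12)
goal.activities.append(Activity("Map nearby shops that discard food", "idea"))
print(len(goal.activities))  # → 1
```

A real implementation would persist these objects and attach members' profiles, but the shape of the data (goals holding activities and a circle) is the core of the concept.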

In more detail

In the profile, members add their goals.

A goal consists of a title, a picture, a description, and numbers.

Goal example:

Profile example:

Automatic suggestions of people and organizations

For each goal, artificial intelligence suggests like-minded members based on the similarity of goal descriptions, taking into account all other variables (such as the similarity between other goals and other numbers). Opposite goals, like the "Needs" and "Gives" concepts, also get matched here.
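One simple way to implement such matching is to rank other members' goal descriptions by text similarity. The sketch below uses a plain bag-of-words cosine similarity; a real system would likely use embeddings and extra signals (the "other variables" above). All names and the example data are hypothetical:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two goal descriptions."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def suggest_matches(goal: str, others: dict[str, str], top_n: int = 2) -> list[str]:
    """Rank other members by how similar their goal descriptions are."""
    ranked = sorted(others, key=lambda m: cosine_similarity(goal, others[m]), reverse=True)
    return ranked[:top_n]

members = {
    "alice": "reduce food waste in local shops",
    "bob": "improve AI safety research",
    "carol": "cut food waste by connecting shops and charities",
}
print(suggest_matches("reduce food waste from shops", members))  # → ['alice', 'carol']
```

Matching opposite goals ("Needs" and "Gives") would need an extra step, e.g. comparing a "need" description against the pool of "give" descriptions rather than against all goals.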

Personal circles around goals

If a person gets good suggestions of similar goals from other members and organizations, they add those members to the goal's circle and follow its activities.

Collaboration together

Users create activities for a goal. An activity can be a raw idea, a task, or a conversation.

This example list could fill up with many ideas from the circle. Now I will try to predict, speculatively, what that could mean.

One scenario: since people are connected about something they care about, there is a degree of passion. If the goal is very specific, then the activities the circle members post are relevant. The whole tool encourages raw ideas to be presented. Conversations happen here, ideas combine and mix, or bad ones get postponed and forgotten. The result is problems being solved and goals achieved using common brain power.

On the other hand, it is uncertain how this would actually play out. This is what the development team would need to test.

Value for organizations

  1. Since organizations create their own profiles with goals, the network would connect them with people and other organizations that share those goals. For example: if organizations have similar needs around technology, maybe they could join up and solve them together.
  2. Since organizations create activities to achieve their goals, they could crowd-source ideas or tasks. It might become a streamlined way for organizations to attract motivated volunteers. And for people, it would lower the barrier to contributing: by joining an organization's work today, they might eventually become employees.

Simplified project management system

If the previously mentioned concepts work in the real world, then activities could be grouped into projects, categories, tags, and statuses to organize all the work. This would not add much new complexity; rather, it would simplify things.
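Such grouping is a thin layer over the flat activity list. A minimal sketch, with hypothetical field names and example data:

```python
from collections import defaultdict

# Hypothetical: group a flat list of activities by project tag for the simplified view
activities = [
    {"title": "Draft outreach email", "project": "Shop partnerships", "status": "open"},
    {"title": "List local charities", "project": "Charity map", "status": "done"},
    {"title": "Call three shops", "project": "Shop partnerships", "status": "open"},
]

by_project = defaultdict(list)
for a in activities:
    by_project[a["project"]].append(a["title"])

print(dict(by_project))
# → {'Shop partnerships': ['Draft outreach email', 'Call three shops'],
#    'Charity map': ['List local charities']}
```

Categories, tags, and statuses would be grouped the same way, which is why this adds little complexity: the underlying data does not change, only the views over it.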

Integrated measurement

People would add core numbers to their goals to measure progress, which would be reflected in graphs for gamification. Which measurement tools would fit best here?

Dashboard

In the dashboard, everything important is in one place.

Next steps?

A #custom-ideas channel was created in the "EA Public Interest Technologists" Slack [3]. We can also talk about it in the comments under this post.

Why explore and develop new concepts?

In order to innovate. If some of the concepts prove to work in the real world, they could be merged into existing platforms like eahub.org.

References

  1. Example icons taken from https://commons.wikimedia.org/
  2. Example photos taken from https://generated.photos/
  3. Link to join the Slack: https://join.slack.com/t/ea-pub-interest-tech/shared_invite/zt-tar2i03b-3xqmTh1lLFn8NWB6X1ZA6Q
Comments



Update 2022-03-04.

A prototype was created to test this concept. More info in a new article: 

https://forum.effectivealtruism.org/posts/Rtfvyoj5wfA3wYpsq/prototype-of-re-imagined-social-network-for-ea-community

Thanks for sharing! I've had the feeling for a while that it would be great if EA managed to make goals/projects/activities of people (/organizations) more transparent to each other. E.g. when I'm working on some EA project, it would be great if other EAs who might be interested in that topic would know about it. Yet there are no good ways that I'm aware of to even share such information. So I certainly like the direction you're taking here.

I guess one risk would be that, however easy to use the system is, it is still overhead for people to have their projects and goals reflected there. Unless it happens to be their primary/only project management system (which however would be very hard to achieve).

Another risk could be that people use it at first, but don't stick to it very long, leading to a lot of stale information in the system, making it hard to rely on even for highly engaged people.

I guess you could ask two related questions. Firstly, let's call it "easy mode": assuming the network existed as imagined, and most people in EA were in fact using this system as intended - would an additional person who first learns of it start using it in the same productive way?

And secondly, in a more realistic situation where very few people are actively using it, would it then make sense for any single additional person to start using it, share their goals and projects, and keep things up to date persistently, probably with quite a bit of overhead on their part because it would happen on top of their actual project management system?

I think it's great to come up with ideas about e.g. "the best possible version of the EA Hub" and just see what comes out, even though it's hard to come up with ideas that would answer both of the above questions positively. Which is why improving the EA Hub generally seems more promising to me than building any new type of network, as at least you'd be starting with a decent user base and would take away the hurdles of "signing up somewhere" and "being part of multiple EA related social networks".

So long story short, I quite like your approach and the depth of your mock-up/prototype, and think it could work as inspiration for EA Hub to a degree. Have my doubts that it would be worthwhile actually building something new just to try the concept. Except maybe creating a rough interactive prototype (e.g. paper prototype or "click dummy"), and playing it through with a few EAs, which might be worthwhile to learn more about it.

True, this has to be done through smaller steps like a prototype. It can be implemented in many shapes and forms. I am working on such ideas right now. All the other points you wrote are valid and, I guess, solvable. Good to hear the concept is not flawed in fundamental ways. Thanks for your comment.

Interesting article! Thank you!! For me, the best social network is LinkedIn, since it is a work and business focused network.

When creating an account for your network above, would I be able to pull in my LinkedIn Profile? If so, that would make your network slightly easier to start using. 

Good point about LinkedIn. It seems like a very good networking tool for enterprise. It helps businesses find employees, partners, and clients, and vice versa. Then they network in order to do business together. It looks like LinkedIn already covers the needs of business, so this concept may add no value there, mainly because of companies' competitive nature and large scale. Companies will not want to share their day-to-day activities, nor their day-to-day goals.

But it would add new value for non-profit organizations and effective altruists working to solve world problems, because of their collaborative nature and decentralization. EA members work towards common goals with decentralized initiative - there is a lot of space to innovate this process, mainly because a community like effective altruism has never existed before.

True, it may be difficult to define goals and start using the tool. Possibly, it may not be an issue - users may find a lot of inspiration in the many examples on other profiles. There might be a button to clone those examples and modify them.

Goals could be smaller, practical things to achieve. If you need something and don't know how to get it, you add it to your profile, possibly in a private, hidden mode too. Then the AI should search for a match.

It may be difficult to import a LinkedIn profile, because the structure is completely different. Unless artificial intelligence could scan your profile and define goals for you - but that does not look realistic, unless LinkedIn has some profile information that could be converted to goal statements and descriptions?
