This is part 1 of what I expect to be a 3-part series. The next parts will focus on the costs of these technical projects and more specific ways to kickstart them.
Rigor: Writing based on my experiences and education in the tech industry. I spent around 50 hours on this series plus some sporadic thinking and discussion. I focused more on idea quantity than neatness or data.
Intended Audience: Effective altruists interested in either funding or working on ambitious software projects.
History: This series was originally written as one Google Doc in March 2021 for private conversations. I've since done some updating to post it publicly. One important update since then is that the CEA tech team seems to have ramped up hiring considerably. This seems great, though I'm sure we could use even more, so it still seemed useful to post this.
About me (Ozzie Gooen):
I’ve been thinking about this a lot over the last several years. I joined 80,000 Hours as a web developer in 2013 hoping to do major/ambitious work, but there I realized that funding at the time wasn’t sufficient for most ambitious software efforts. I’ve since spent time at other startups, founded my own, and consulted for small and large companies.
Recently I’ve been planning what work QURI should do. We’re considering focusing on engineering. Some of the motivation for this topic is to help make decisions for QURI.
Acknowledgments: Many thanks to Aaron Gertler, Rachel Bui, JP Addison, Daniel Kokotajlo, Adam Gleave, Daniel Eth, Oliver Habryka, Jonas Vollmer, and Nuño Sempere for their comments.
When I think of:
“There’s an elite community with a net worth of over $40 billion, made up of many genius mathematicians, engineers, and entrepreneurs, and they’re trying to optimize the future of the world,”
I think of things like this:
(From Iron Man 2)
Right now we clearly don’t have this. Open Philanthropy and other core effective altruism organizations are highly philosophically focused as opposed to technically focused. This seems like a good beginning, but perhaps a suboptimal end.
Large technical implementations would be expensive ($10 Million to $100 Million+ per year). They would be different from current setups in some key ways. But I think they could be worth it.
Perhaps the best analogy is the finance industry. Early on, clever individuals or small teams would intuitively make bets with very little information. Making financial bets is very similar to making altruistic decisions. Charitable funding decisions are particularly analogous, but so are more generic things like career decisions. Over time the financial sector became far more sophisticated, with intense specialization, formalization, and expertise. Data-heavy and quant-based approaches have become decisive in much of the market, and continue to expand. We might expect altruistic decisions to follow a similar trajectory: begin with clever people using intuition (like existing EA funders), then expand to use more data and automation.
One counterexample within finance is venture capital firms, which invest with (relatively) little data. However, they exist in an environment with applications like Crunchbase, AngelList, and other tools that organize and charge for data.
Select Project Ideas
What should we code? I'm less certain about particular software interventions than I am about us being able to find some good bets (assuming we have good people).
I personally see software opportunities all around me, but it's difficult to group them into distinct large-project-size clusters. Many internal tools start really small; a few turn out to be surprisingly useful, and these sometimes scale.
Below are some ideas I've been personally interested in. I'm sure there are many more; this is just one take. My background is in the effective altruism community, strategy, and meta-research, so these ideas are heavily biased towards those areas.
Some ideas include:
- Dramatically better data management for all things of interest to effective altruists. This encompasses a whole lot of possible projects and could absorb a lot of investment.
- Advanced infrastructure to allow for great online research collaboration. For example, private internal networks with organized documents, videos, podcasts, and discussions. LessWrong and similar platforms would count.
- An open-source Twitter equivalent for use by effective altruists and surrounding communities, made in the spirit of LessWrong. The challenge would be figuring out how to make a social network that makes discussions better, not worse.
- Forecasting, evaluation, and estimation infrastructure. I have more upcoming work to outline projects in this area. For example:
- Great interfaces to make forecasting results (like on Metaculus) dramatically more accessible and prominent. Have them integrated into popular news outlets.
- Guestimate++ software. See Squiggle for one direction. I think there’s a ton of work to do in this area.
- Organized data infrastructure to be used for forecasting. For example, if we had clean APIs for effective altruist organizational data, forecasters could use them for continuously updating forecasts.
- Knowledge graph software for small and decentralized groups.
- Tooling for building many more calculators similar to microCOVID.
- A big list of key moral and philosophical claims that will later be evaluated by expert panels in the long-term future. This would be used as the basis of forecasting questions on forecasting platforms.
- Big lists of estimates of the values of every AI safety paper, then EA paper, then scientific paper, then most other things.
- Coordination software. I’m personally excited about radical legal+technical innovation, but there are multiple approaches.
- A collaborative application similar to Google Docs but aimed at the needs of EA researchers. Better LaTeX editors.
- Attempts to automate comments, some discussion, and other repetitive research workflows using GPT-n. Writing style transfer to clean up posts and converse with multiple communities. For example, lots of new effective altruists make similar errors when starting to write (overconfidence, statistical mistakes, not knowing of previous work). It would be great to have a bot spot these and provide help. Automated therapy. (Note: Ought is doing some work here. Also, I've heard that Robert Miles is doing some cool experiments with chatbots and his Discord community.)
- Convert all writing relevant to EA interests into a huge knowledge graph. Then, make this easily explorable. This means that if someone mentioned paper X in a comment on one EA Forum thread, that mention would be accessible anywhere else someone reads, or reads about, paper X.
- Advanced and customized education platforms or portals, targeting altruistic & important topics. Education experiences that would act as full online classes and similar.
- Better versions of Charity Navigator for more mainstream donors.
- Anything that helps improve wisdom and intelligence.
- Software to help more people set up their own programs like Fast Grants or ACX Grants.
- Recommender systems to recommend valuable research, blog posts, news sources, books, and more.
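To make the estimation-tooling ideas above (Guesstimate++, microCOVID-style calculators) a bit more concrete, here is a minimal sketch of the kind of Monte Carlo estimation such tools automate. The scenario, numbers, and helper function are all hypothetical, purely for illustration:

```python
import math
import random
import statistics

def sample_lognormal_90ci(low, high, n):
    """Sample n values from a lognormal fit to a 90% credence interval (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)  # 90% CI spans ~±1.645 sigma
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

random.seed(0)
N = 100_000

# Hypothetical question: annual cost of running a small EA software tool.
salaries = sample_lognormal_90ci(150_000, 400_000, N)        # team salaries, USD/year
overhead = [s * random.uniform(0.2, 0.5) for s in salaries]  # hosting, ops, etc.
totals = sorted(s + o for s, o in zip(salaries, overhead))

print(f"median: ${statistics.median(totals):,.0f}")
print(f"90% CI: ${totals[int(0.05 * N)]:,.0f} to ${totals[int(0.95 * N)]:,.0f}")
```

Tools like Guesstimate and Squiggle let non-programmers express this kind of model (roughly, "150k to 400k, times an uncertain overhead factor") in a spreadsheet-like or notation-based interface; the point of dedicated tooling is to make these ten lines of code unnecessary.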
These ideas are very software-dominated. There are also large classes of promising ideas that would look like “tech bundled with services”. EA Funds is an example. There’s solid software in the backend, but also a lot of continuous grant evaluation and operations work.
I also recommend going through LessWrong and the EA Forum for more ideas. Some tags and series that stand out include:
- Software Engineering (EA Forum)
- Public Interest Technology (EA Forum)
- Software Tools (LessWrong)
- Kickstarter for Public Action (LessWrong)
- Spaced Repetition (LessWrong)
Key Categories of Potential Projects
I think of EA software in three broad categories:
- Internal tools. Things only for the EA community.
- External tools. Standard software tools created for other groups.
- Fundamental technologies. Core technologies, like internet protocols, that we think are valuable enough to fund, whether for internal or external use.
Internal Tools (to be used by EAs)
Internal tools software has a unique place in industry. The term typically refers to tools built for use within a specific company. I'd argue that tools "only used by EA organizations" would count as "EA internal tools" even if multiple (EA) organizations use them. Internal tools in industry get little product manager oversight, meaning that they are rough around the edges. You can read more about their use here and here.
- Process automation. The EA Funds funding infrastructure has a lot of custom code.
- Cause-specific data infrastructure. Faunalytics is an example.
- Organized datasets and analysis of everything important for EA purposes (which adds up to a lot of stuff). Think of Crunchbase, or IMDB Pro.
- Organization-specific workflows. Rethink Priorities has a bunch of Python code for generating and analyzing their surveys, for example.
- EA-only blogging and collaboration platforms. LessWrong and Guesstimate are examples.
External Tools (to be used by non-EAs)
Note: Arguably Public Interest Technology would fall into this bucket.
- Things like the internal tools that we think will help influence external groups in positive ways. The Open Philanthropy COVID dashboard, microcovid. Platforms like LessWrong, Guesstimate, and Roam Research, insofar as they help with this purpose. Our World In Data.
- Marketing technology for EA purposes. Automation, community-specific web portals, etc.
- Tools for other important groups. For example, whatever is most helpful to nuclear nonproliferation nonprofits.
Fundamental Technologies
EAs are doing fundamental philosophical, mathematical, and biodefense research (decision theory, MIRI work). It sometimes makes sense for us to either directly pursue or fund other fundamental engineering work. With QURI I’ve been working on some fairly low-level probabilistic open-source tools; I could imagine much more work being done in this space. Another area that comes to mind is fundamental internet technologies like the Dat protocol and knowledge graph software.
Most technical AI safety research would count as "fundamental technologies".
One would think that fundamental technologies (even outside technical AI safety) wouldn’t be neglected, but in practice, from what I can tell, many seem to be. Honestly, I find it very strange.
Douglas Engelbart was one of the most significant innovators behind modern computer interfaces but faced substantial funding challenges for much of his career. Many lectures by Computer Science greats involve them complaining that the really innovative work (language design in particular) isn't funded. My impression is that there actually isn't that much agency+money+altruism out there for foundational R&D. See this post for more discussion on open-source software in particular.
Sometimes it’s worth collaborating with existing organizations, rather than making things from scratch. This could be especially true in cases where there are existing information sources that we'd simply like to add data to for public consumption (Wikipedia, as an example). Partnerships could include effective altruists giving advice, time, and funding.
- There are currently several websites for browsing academic papers. Instead of making our own, we might be able to find an existing one to implement functionality that we care about. Some examples include Google Scholar, Meta, Semantic Scholar, Connected Papers, and Shortscience. We could ask that they include LessWrong and EA Forum posts, and occasionally add effective-altruist-specific metrics.
- We could work with Wikidata or Wikipedia to make sure that a lot of EA-relevant materials are well covered.
- We could partner with existing social media or chat software and get them to add epistemically beneficial features, instead of making our own versions.
- We could convince a group similar to Crunchbase to make a version for EA organizations.
- We could work with Our World In Data to focus more on EA endeavors or add forecasting abilities.
Generic Benefits of Software Engineering
Note: I moved this section to the bottom because I found that people found the other sections more interesting.
Software brings with it some unique advantages over more traditional effective altruist research.
Engineering can be much more scalable than research. Engineers are far more similar to each other than researchers are, and many engineering systems are relatively well understood, even if they take a lot of time to build.
Thus, engineering projects can be surprisingly predictable, and their costs estimable. The main contractor I paid for much of the work on Foretold cost $30/hr. They were based in Ukraine and assigned to me by an agency; they didn’t speak English particularly well, but we got through a lot of functionality together. It cost around $50k total, for probably around 70% of the Foretold codebase. I think this was a pretty good deal. Arguably it would have been nice for an effective altruist to do this, but I don’t think any available EA programmer would have gone for a similar rate. Given that the project was experimental and I wasn’t sure about a long-term plan, this was a good fit.
The predictability of certain software projects is one reason why software startups are so popular. VCs can be fairly confident that a team will be able to scale their technology according to expectations.
If requirements are known upfront, it’s often possible to have a rough idea of how much a project will cost. Agencies will help provide estimates. These agencies can be expensive, but typically for a reason. I’ve worked with one agency, Gigster, and realized that despite the markup, they actually seemed to be about breaking even. It turned out that in cases of severe project failure, Gigster would foot the bill, and these expenses were considerable.
I believe that small to medium size projects by some of these agencies can routinely cost between $60k and $400k. (Note that these of course don't include indefinite maintenance, which would be required for a continuing project.)
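As a rough sanity check on these figures, the arithmetic from the Foretold example above is easy to reproduce. The rate and total spend are from that example; the full-time-hours assumptions are mine:

```python
HOURLY_RATE = 30        # USD/hr, contractor rate from the Foretold example
TOTAL_SPEND = 50_000    # USD, approximate total spend on that contractor

hours = TOTAL_SPEND / HOURLY_RATE
# Assuming ~40 hours/week and ~4.33 weeks/month of full-time work
months_full_time = hours / (40 * 4.33)

print(f"{hours:,.0f} hours ≈ {months_full_time:.1f} months of full-time work")
```

In other words, $50k bought on the order of ten contractor-months. This is the kind of back-of-the-envelope estimate that makes software budgets relatively predictable compared to open-ended research.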
Engineering projects are particularly outcomes-based and measurable. It’s typically much easier to evaluate the quality of software than it is to build it, and it’s relatively straightforward to get a lot of metrics on its use. In comparison, more traditional effective altruist research can be very difficult to measure.
If we had more software engineering talent accessible, we could more easily pull people onto urgent or strategic projects.
First, we might want general-purpose abilities for unforeseen future disasters. During the start of COVID-19, several EA groups got involved to different extents, and some of this was very useful. At the same time, if we had been better prepared, I would guess that much more could have been done. This is important in particular because future disasters might be more unexpected and much worse.
Second, it could be good to have broad abilities available if/when AI ramps up. There are many directions things might take.
This is more an argument for software in the short-term, rather than software in total.
If we think we will want to have ambitious software projects in the future, it would be useful to get started soon, even if the short-term projects are much smaller.
So far, effective altruists have collectively spent many years on research. That's come with a lot of benefits for future research work:
- We have much better impressions of what works
- We've developed better research questions for future work
- We've developed talent that's better positioned for future work
- We raised awareness among potential future research hires
All of these benefits should also hold for software engineering work.
That was a lot of stuff. Again, the goal was to be more comprehensive than neat. I have a lot of thoughts on this topic and am trying to get them all out.
We haven't yet touched on the costs of these technical ventures. Unfortunately, I think the costs are very high. Recent startup valuations have been very high, programmers can cost a lot of money, and strong technical teams have incredibly high opportunity costs.
However, costs are high not because supply is low, but because demand is high. It's high for a reason: software projects can be incredibly scalable when carefully targeted and managed. There's a reason why entrepreneurs are paid so much.
Our World in Data isn’t exactly made by effective altruists, but I think it’s a good example of the kind of project we might want more of. That said, it does actively collaborate with our community. ↩︎
This is in comparison to fundamental hardware or biological ventures, for example. Software startups do commonly fail for technical reasons; they just do so less often than other ambitious ventures. ↩︎