Brendon_Wong

739 · Joined Jul 2015

Bio

I'm a social entrepreneur and product manager who's been involved in EA since 2013! Right now, my interests lie in self/life improvement and societal transformation.

I'm the COO of Roote, an educational hub and startup studio focused on systems change to ensure humanity has a bright future. This includes reducing human and animal suffering as well as x-risks (see Roote's article on meta existential risks).

I'm also the founder of Better, a research organization and startup studio that is working on improving well-being and well-doing. We're specifically operating in the space of evidence-based self-improvement. Our theory of change is that recommendations we make can significantly amplify the efforts of EAs and EA organizations as well as improve people's lives in a highly cost effective manner.

Comments (147)

Hi Inga, thanks for commenting!

First: I like the framework and the fact that you help to make impact investing a viable option for the EA space. It indeed might open up more opportunities for entering a multitude of markets and funding sources. Having a streamlined, standard, EA-aligned framework for this in the Global Health and Wellbeing space could make the investment process more attractive (smoother), more efficient (options clearer and more comparable), and lead to better decisions (if the background analysis is high-quality).

Thanks! I personally think that impact investing is an incredibly promising space for EA.

Might it be useful to add something like "neglectedness of funding"? E.g. in the Mindease case, I believe it was moderately likely (depending on the quality of the presented scaling strategy) that another investor would have jumped in to take the lead. There might be value in identifying and helping (1) ventures that have promising impact prospects but low funding chances, or (2) ventures that look like they might not have promising impact prospects, but, because you have some rare specialists in the corresponding field, you know they are better than other options in the field. E.g. if Mindease had a really promising approach (as evaluated by the specialists) to solving the low retention of users or the lack of sustained effects of mental health interventions.

That's already included under "Investor Impact"! See "funding probability." If I recall correctly, the 15% indicates that there's a ~1/6 chance this investment was counterfactually impactful, and it draws from the much lengthier documents written by TPP that this writeup is based on.

The evaluation of neglectedness (of the solution/problem) seemed partly confusing to me. It is correct that lots of people suffering from mental health issues do not receive treatment. This is also true in HICs, where digital solutions are widely available. The truly neglected problem seems to be the distribution of these interventions; this has been hard for all companies offering services like that, and is only possible if you adapt your service to the specifics of the different cultures and countries and then also tailor the distribution strategy. This means that currently Mindease just does something in HICs that other apps such as Sanvello already offer (so perhaps no additional value for people, even though this calculation credits it with the DALYs), and it does not have a product or market strategy for LMICs, where it would be neglected. Convincing providers like Sanvello to offer their services in LMICs, and then specializing in tailoring the product for people in a specific large country (e.g. Nigeria) as well as nailing distribution there, might be much more impactful. E.g. the UK-based charity Overcome does something along these lines.

Did you read Hauke's analysis, or just our brief summary of it? Here's a direct link to the 3.5 pages covering neglectedness in Hauke's report. The extended TPP report covers several distribution strategies, which I personally believe to be very compelling and differentiated from existing apps, and some of them may resemble strategies you proposed. I do not recall if the full report is public, but Jon Harris would know the latest status.

Would it be useful also to add something like "people's time resources" needed to the equation? E.g. it makes a difference whether 10 or 30 EAs spend their time working on this solution, because they could also be making an impact elsewhere.

I think that's an interesting idea! I haven't seen too many examples of that in EA analyses, but let me know if you find any on the EA Forum! I can think of a few difficulties with doing that, and I'm not sure how people would overcome them: establishing a counterfactual personal impact benchmark (what is the baseline impact of the EA that would have been hired?) and modeling the additive impact of additional team members (does going from 10 to 30 team members triple the impact? under what impact scenarios would that occur?). Also, not all team members would need to be "EAs" (what even counts as an EA?), and I think that can also be hard to forecast and is somewhat dependent on the team and how things end up working out (whether people end up joining from within or outside EA).

I believe that governance is a technology. Thus, while there may be no "perfect" governance system, humanity's knowledge of it will improve greatly over time. That will improve the default governance models in use (right now, representative democracy is a very common default for company shareholders, most developed countries, etc.) as well as humanity's ability to customize governance models to particular situations. Since representative democracy is so commonplace, I think making default models better will produce most of the benefit, rather than adapting models to context as you mention.

Regarding getting there: as indicated in Holden's article, governance can be applied to many human systems, not just governments. Governments change, of course, but organizations change faster and emerge at a higher rate. Take public benefit corporations (PBCs), for example. Delaware (the most popular state for incorporation) passed PBC legislation in 2013, and we already have PBCs IPOing.

There are also very creative ways to influence governments with technology. For example, in Taiwan, while the governance model hasn't changed, the government is deploying technologies like Polis to improve democracy, using it to effectively come up with policy proposals for potentially contentious issues that improve society and enjoy high consensus. I think that developing "add-ons" to entrenched governance models is a decent strategy, and it's one of the routes that our Civic Abundance project is taking.

It's so cool seeing articles that align with what I've been wanting to see for years! Holden also recently wrote about governance but with a more external-to-EA lens (for example, to responsibly govern AI companies) rather than using better governance to solve issues in EA causes and within EA itself.

This work is very aligned with what Roote (one of the organizations I help run) is working on, including our work on meta existential risk (although we don't explicitly name collective intelligence as a potential part of the solution to this problem). We think that improving governance and collective intelligence is very important. We're fans of the civic tech efforts in Taiwan, and we're building societal-level information and coordination software with our Civic Abundance project (we are looking for funding and trying to hire a project lead!). Among the things we've looked at is integrating with Manifold to experiment with futarchy (but in the context of an informational dashboard, not tied to the actual governance process).

We're very aligned with Web3 for social good (e.g. Greenpilled and Gitcoin). I personally believe that EA itself needs a better mechanism to fund public goods within the EA community, given that EAIF seems to commonly ask public goods projects to become revenue positive, which, well, is usually not possible for public goods without their becoming private goods. Very unfortunate.

I was not aware that there was any research on the specific implications of collective intelligence for existential risk. I would be excited to read a quick summary of the main points/findings or some hyperlinked articles.

Thanks for writing this!!

Antigravity Investments Public Impact Log

This is an interim post sharing examples of Antigravity Investments' impact over the years. Organizations that have not explicitly consented to being named have had their details anonymized (we only asked CEA due to time constraints).

August 2022: One example from my ongoing correspondence with the EA operations team member (see December 2021 below): I identified that an AI research organization with $6 million in cash could access yields of ~2.5% instead of the 0% at their existing bank, generating an additional ~$150,000 per year for the organization (at current interest rates) at essentially zero cost to themselves.
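The figure above is easy to sanity-check. A back-of-the-envelope sketch, using only the illustrative numbers from this example:

```python
# Annual yield gained by moving idle treasury cash from a 0% bank
# account to a ~2.5% cash management option (numbers from the example).
cash = 6_000_000        # idle cash in USD
current_rate = 0.000    # yield at the existing bank
available_rate = 0.025  # yield of the alternative option

extra_per_year = cash * (available_rate - current_rate)
print(f"${extra_per_year:,.0f}")  # → $150,000
```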

April 2022: Agreed to help a relatively core EA organization administer $1M+ in a DAF because their existing advisor could not do it for free. The situation seems unlikely to move forward due to logistical problems with the DAF provider.

December 2021+: An operations team member who has worked at multiple EA-aligned AI organizations started an active correspondence with me regarding treasury cash management and operations in general. Questions included the base rates of defaults for various cash management options.

December 2020: GiveWell had their team investigate cash management, very likely as a direct result of my article aimed specifically at them (see GiveWell's comment). Hopefully they have implemented it by now (we will likely be able to assess the impact in their 2021 Form 990, which is expected shortly).

Early 2020: A nonprofit that has assisted with certain EA endeavors committed $10M+ to one of our recommended cash management solutions, which later became $20M+ after they had tested it for a while and expanded into a second recommended cash management option.

2019: The Center for Effective Altruism started setting up a brokerage account at Vanguard after I reached out. According to their Form 990, they moved the majority of their funds into some form of interest-bearing account or investment during 2020. This may or may not impact CEA's fiscally sponsored projects like 80,000 Hours and Longview Philanthropy.

Mid-2019: A core EA charity was considering utilizing one of our recommended cash management solutions for $1M+. They discovered that they had an agreement requiring them to bank exclusively with their bank for a certain period. We explored alternatives like opening a brokerage account. The charity corresponded with their bank regarding their low interest rate, and as a result, the bank raised it.

2018: An animal charity committed $1M+ in assets under management to Antigravity with a below-market-rate fee.

This is a great article! Agree that the EA space doesn't have well-developed models for this. The two major organizations I am founding, Better and Roote, are both highly involved in this space.

Better is working on identifying, verifying, and distributing high-impact "informational interventions" for people and organizations, which corresponds to the theory of change of your BOTEC: people having better lives and organizations having higher effectiveness if they have access to optimal information. We're also developing knowledge management/collective intelligence software called Cosmic to help with that.

Roote is working on ideas that are related to your scalable/meta proposals. For example:

  • "Capable and 'value-aligned' recommender systems in practically every area" - We are building Tweetscape, a general-purpose aligned recommender for Twitter
  • "An interested author writing a pop-science book on the topic" - We're authoring a book on how information spreads, What Information Wants, which is related to how humans exchange information, although maybe not specifically exchanging value through information
  • We're working on societal information and coordination infrastructure with Civic Abundance to help groups align on the best actions to take to advance shared goals

Roote is actively hiring (and fundraising) so anyone interested is welcome to reach out!

My Journey in Effective Altruism

My journey in effective altruism started after I stumbled across The High Impact Network (THINK) online in 2012. At the time, I was 14 and a first-year in high school. I reached out to THINK and interned at its parent organization for a week in summer 2013 and was invited to the very first EA Summit that summer. Due to a twist of fate, I had an international trip planned and couldn’t attend, which in retrospect might’ve resulted in my experience in EA going in a very different direction. After the summit, I was very shy and also still attending high school, so I didn't develop any network to speak of in the burgeoning EA movement.

I had a strong interest in social entrepreneurship and dabbled in various projects over the next few years. These projects included utilizing quantified self methods to help people improve their physical and mental health, connecting people in developing countries with higher-paying work opportunities via the internet, and helping animal charities like NutritionFacts apply for and/or utilize Google Ad Grants to spread awareness of plant-based nutrition to encourage dietary behavior change. I briefly worked on a venture called The Giving Basket to create freely accessible, zero-minimum donor-advised funds (DAFs) for everyone. These DAFs could be independently directed, directed via crowdsourcing mechanisms like voting, or directed via expert recommendations.

While I decided not to proceed with the idea, The Giving Basket involved investing the funds inside the DAFs before they were disbursed, thus increasing the total amount donated in expectation. That gave me the general idea of using investing to increase the amount that people were able to donate. Thus my next venture, Antigravity Investments, was born. It was 2016, and I had graduated high school and headed off to study at UC Berkeley. The original idea of Antigravity Investments was to help donors invest money before they donated it, much like The Giving Basket had intended. Over time, this expanded into helping people with investing in general (which would hopefully, counterfactually, increase the amount people could afford to donate if they were themselves better off), and later, helping institutions like foundations and nonprofits invest. The latter audience is where Antigravity Investments has derived the majority of its impact, counterfactually donating millions of dollars to charity in transparent and verifiable ways.
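The "increasing the total amount donated in expectation" point can be made concrete with a small sketch. The numbers below are hypothetical, not The Giving Basket's actual figures; they just show how a contribution invested for a few years before disbursal grows relative to sitting in cash:

```python
# Hypothetical illustration: a DAF contribution invested at an assumed
# expected annual return r for t years before being granted out.
donation = 10_000  # initial contribution in USD (hypothetical)
r = 0.05           # assumed expected annual return
t = 3              # years until the funds are disbursed

granted = donation * (1 + r) ** t  # amount ultimately granted, in expectation
print(f"${granted:,.2f}")  # → $11,576.25, vs. $10,000.00 if held as cash
```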

After years of work, Antigravity Investments met my threshold for success, but I faced considerable challenges executing it, including securing enough vetting (especially in Antigravity’s later years) and funding. I spent some time working on infrastructure to solve those issues, including prototyping an EA Projects Platform (which would connect projects, evaluations, funders, and team members) and spinning up the first EA Angel Group. I also interned in product management, working on big data and AI products (applying ACR, CRNNs, and other interesting technologies).

After that, I graduated college in 2020 in the midst of the pandemic. I started working as a product manager at Capital One and, on the side, explored high-impact ideas. I participated in the Longtermist Entrepreneurship Fellowship, where I worked on an idea we termed “GiveWell for Impact Investing.” This culminated in an article on the EA Forum demonstrating that impact investing could counterfactually generate impact to a degree that, in my opinion, could be comparable to or exceed that of a donation, depending on the specific opportunity at hand. This seemed revolutionary, but it didn’t seem to make many waves, perhaps because the article was very conservative in its messaging.

I was still in an exploratory mode and briefly explored “prediction markets for good” (including variants similar to ideas that Manifold Markets has implemented) before settling on launching a general-purpose version of my previous ideas. I noticed that many of my ideas, like my idea to increase charitable funding or help charities better leverage Google Ads, were based on very simple yet very high impact recommendations. I decided to launch a research organization and startup studio called Better to identify, validate, and share such recommendations to help the EA community and the world at large increase well-being and well-doing (well-doing referring to the combination of effectiveness and altruism). A few months later, I discovered an opportunity to be the COO of an upcoming nonprofit called Roote working on societal systems change. Roote touches on many topics I'm excited about including reducing meta existential risk, improving governance, and improving upon capitalism, so I joined to cofound that as well!

These days, I'm busy jamming away at getting thought leadership and ventures at Roote and Better off the ground, and chatting with people to help them along in their own journeys as my network in and knowledge of EA, and social impact in general, has grown :)

There is a section in the article that says:

Submissions must be posted or submitted no later than 11:59 pm BST on September 1st, and we’ll announce winners by the end of September.

(I nearly missed this as well)

Seeing this late, but this is a wonderful idea! Will Roderick and I worked on "GiveWell for Impact Investing" a while ago and published this research on the EA Forum. We ultimately pursued other professional priorities, but we continue to think the space is very promising, stay involved, and may reenter it in the future.

I think that’s a good idea to reduce groupthink! Also, I think it can be helpful to uncover if specific individuals and sub-groups think a proposal is promising based on their estimates, since rarely will an entire group view something similarly. This could bring individuals together to further discuss and potentially support/execute the idea.
