
BrownHairedEevee

Funemployed
5047 karma · Joined · Working (0-5 years) · New York, NY, USA · sunyshore.substack.com

Bio

Participation
5

I'm interested in effective altruism and longtermism broadly. The topics I'm interested in change over time; they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.

A comment I've written about my EA origin story

Pronouns: she/her

"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." —Uncle Iroh

Sequences
8

Philosophize This!: Consciousness
Mistakes in the moral mathematics of existential risk - Reflective altruism
EA Public Interest Tech - Career Reviews
Longtermist Theory
Democracy & EA
How we promoted EA at a large tech company
EA Survey 2018 Series
EA Survey 2019 Series

Comments
757

Topic contributions
122

I can speak for myself: I want AGI, if it is developed, to reflect the best values we currently have (i.e. liberal values[1]), and I believe it's likely that an AGI system developed by an organization based in the free world (the US, EU, Taiwan, etc.) would embody better values than one developed by an organization based in the People's Republic of China. There is a widely held belief in science and technology studies that all technologies have embedded values; the most obvious way values could be embedded in an AI system is through its objective function. It's unclear to me how much these values would differ depending on whether the AGI were developed in a free country or an unfree one, because many of the AI systems the US government uses could also be used for oppressive purposes (and arguably already are used in oppressive ways by the US).

Holden Karnofsky calls this the "competition frame" - in which it matters most who develops AGI. He contrasts this with the "caution frame", which focuses more on whether AGI is developed in a rushed way than on whether it is misused. Both frames seem valuable to me, but Holden warns that most people will gravitate toward the competition frame by default and neglect the caution frame.

Hope this helps!

  1. ^

    Fwiw I do believe that liberal values can be improved on, especially in that they seldom include animals. But the foundation seems correct to me: centering every individual's right to life, liberty, and the pursuit of happiness.

Thank you for posting this! I've been frustrated with the EA movement's cautiousness around media outreach for a while. I think that the overwhelmingly negative press coverage in recent weeks can be attributed in part to us not doing enough media outreach prior to the FTX collapse. And it was pointed out back in July that the top Google Search result for "longtermism" was a Torres hit piece.

I understand and agree with the view that media outreach should be done by specialists - ideally, people who deeply understand EA and know how to talk to the media. But Will MacAskill and Toby Ord aren't the only people with those qualifications! There's no reason they need to be the public face of all of EA - they represent one faction out of at least three. EA is a general concept that's compatible with a range of moral and empirical worldviews - we should be showcasing that epistemic diversity, and one way to do that is by empowering an ideologically diverse group of public figures and media specialists to speak on the movement's behalf. It would be harder for people to criticize EA as a concept if they knew how broad it was.

Perhaps more EA orgs - like GiveWell, ACE, and FHI - should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting.

I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:

  • Attended the virtual EA Global (I didn't register, just watched it live on YouTube)
  • Read The Precipice
  • Participated in two EA mentorship programs
  • Joined Covid Watch, an organization developing an app to slow the spread of COVID-19. I'm especially involved in setting up a subteam trying to reduce global catastrophic biological risks.
  • Started posting on the EA Forum
  • Ran a birthday fundraiser for the Against Malaria Foundation. This year, I'm running another one for the Nuclear Threat Initiative.

Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed because "doing good" had become a big part of my identity but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. As I got involved in the EA community again, I had an overwhelmingly positive experience. The EAs I was interacting with were kind and open-minded, and they encouraged me to get involved, whereas before, I had encountered people who seemed more abrasive.

Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.

VSL isn't directly comparable across countries. It measures how much money people in a given country are willing to pay for small reductions in their own risk of death. For example, if someone would pay up to $125,000 to reduce their chance of dying by 1%, then their VSL is $125,000 / 0.01 = $12.5 million. These amounts are lower in poor countries simply because the people there have less money to spend, not because their lives are any less valuable.
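A minimal sketch of that arithmetic (the function name and numbers are illustrative, not from any official VSL methodology):

```python
def value_of_statistical_life(willingness_to_pay: float, risk_reduction: float) -> float:
    """VSL = willingness to pay for a mortality-risk reduction, divided by the size of that reduction."""
    return willingness_to_pay / risk_reduction

# Example from the comment above: paying $125,000 to cut one's chance of dying by 1 percentage point.
print(value_of_statistical_life(125_000, 0.01))  # 12500000.0
```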

Disclaimer: This shortform contains advice about navigating unemployment benefits. I am not a lawyer or a social worker, and you should use caution when applying this advice to your specific unemployment insurance situation.

Tip for US residents: Depending on which state you live in, taking a work test can affect your eligibility for unemployment insurance.

Unemployment benefits are typically reduced based on the number of hours you've worked in a given week. For example, in New York, you are eligible for the full benefit rate if you worked 10 hours or less that week, 25-75% of the benefit rate if you worked 11-30 hours, and 0% if you worked more than 30 hours.[1]
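As a rough sketch of how such a tiered schedule behaves (the band boundaries come from the paragraph above; the fraction applied within the 11-30 hour band is a parameter you would look up in the state's schedule, not something I'm asserting):

```python
def weekly_benefit(full_rate: float, hours_worked: float, partial_fraction: float) -> float:
    """Illustrative partial unemployment benefit for one week.
    partial_fraction is whatever fraction (between 0.25 and 0.75 in New York)
    the state's schedule assigns within the 11-30 hour band."""
    if hours_worked <= 10:
        return full_rate          # full benefit rate
    elif hours_worked <= 30:
        return full_rate * partial_fraction
    else:
        return 0.0                # more than 30 hours: no benefit

# E.g. a $500 full benefit rate and 15 hours worked, if the schedule gives 75% for that band:
print(weekly_benefit(500, 15, 0.75))  # 375.0
```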

New York's definition of work is really broad: "any activity that brings in or may bring in income at any time must be reported as work... even if you were not paid". Specifically, "A working interview, where a prospective employer asks you to work - with or without pay - to demonstrate that you can do the job" is considered work.[1]

Depending on the details of the work test, it may or may not count as work under your state's rules, meaning that if it is unpaid, you are losing money by doing it. If so, consider asking for remuneration for the time you spend on the work test to offset the unemployment money you'd be giving up by doing it. Note, however, that getting paid may also reduce the amount of unemployment benefits you are eligible for (though not necessarily dollar for dollar).

  1. ^

    Unemployment Insurance Claimant Handbook. NYS Department of Labor, pp. 20-21.

It seems like these terms would constitute theft if the equity awards in question were actual shares of OpenAI rather than profit participation units (PPUs). When an employee is terminated, their unvested RSUs or options may be cancelled, but the company would have no right to claw back shares that are already vested as those are the employee's property. Similarly, don't PPUs belong to the employee, meaning that the company cannot "cancel" them without consideration in return?

Are there currently any safety-conscious people on the OpenAI Board?

Status: Fresh argument I just came up with. I welcome any feedback!

Allowing the U.S. Social Security Trust Fund to invest in stocks like any other national pension fund would enable the U.S. public to capture some of the profits from AGI-driven economic growth.

Currently, and uniquely among national pension funds, Social Security is only allowed to invest its reserves in non-marketable Treasury securities, which are very low-risk but also provide a low return on investment relative to the stock market. By contrast, the Government Pension Fund of Norway (also known as the Oil Fund) famously invests up to 60% of its assets in the global stock market, and the Japanese Government Pension Investment Fund invests in a 50-50 split of stocks and bonds.[1]

The Social Security Trust Fund, which is currently worth about $2.9 trillion, is expected to run out of reserves by 2034 as the retirement-age population increases. It has been proposed that letting the Trust Fund invest in stocks would allow it to remain solvent through the end of the century, avoiding the need to raise taxes or cut benefits (e.g. by raising the retirement age).[2] However, this policy could put Social Security at risk of insolvency in the event of a stock market crash.[3] Given that the stock market has returned about 10% per year for the past century, I am not very worried about this.[4]
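To make the difference in returns concrete, here is a minimal compound-growth sketch comparing a Treasury-like return with a stock-like return on the current reserves. The rates, the 30-year horizon, and the simplification of ignoring benefit outflows and new contributions are all illustrative assumptions, not projections:

```python
def compound(principal_trillions: float, annual_return: float, years: int) -> float:
    """Value of a principal after compounding at a fixed annual return, ignoring cash flows."""
    return principal_trillions * (1 + annual_return) ** years

reserves = 2.9  # trillions of dollars, the figure cited above
print(f"Treasury-like (assumed 3%/yr, 30 yr): ~${compound(reserves, 0.03, 30):.1f}T")
print(f"Stock-like   (assumed 10%/yr, 30 yr): ~${compound(reserves, 0.10, 30):.1f}T")
```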

More to the point, if and when "transformative AI" precipitates an unprecedented economic boom, it is possible that a disproportionate share of the profits will accrue to the companies involved in producing AGI rather than to the economy as a whole. This includes companies directly involved in creating AGI, such as OpenAI (and its shareholder Microsoft) or Google DeepMind, and companies farther down the value chain, such as semiconductor manufacturers. If this happens, then owning shares of those companies will put the Social Security Trust Fund in a good position to benefit from the economic boom and distribute those gains to the public. Even if these companies don't disproportionately benefit, and transformative AI instead juices the returns of the stock market as a whole, Social Security will be well positioned to capture those returns.

  1. ^

    "How does GPIF construct its portfolio?" Government Pension Investment Fund.

  2. ^

    Munnell, Alicia H., et al. "How would investing in equities have affected the Social Security trust fund?" Brookings Institution, 28 July 2016.

  3. ^

    Marshall, David, and Genevieve Pham-Kanter. "Investing Social Security Trust Funds in the Stock Market." Chicago Fed Letter, No. 148, December 1999.

  4. ^

    "The average annualized return since [the S&P index's] inception in 1928 through Dec. 31, 2023, is 9.90%." (Investopedia)

I think Oppenheimer was a missed opportunity to raise money for the nuclear security space. I would have liked it if Universal had pledged to donate 10% of its profits from the film to organizations advancing nuclear security.

It looks to me like the nuclear security space isn't in dire need of funding, despite MacArthur ending its nuclear security program. The Nuclear Threat Initiative (NTI) ran a deficit in 2022 (they reported $19.5M in expenses versus $14M in revenues), but they had net assets of $79M according to their Form 990. Likewise, the Carnegie Endowment has no shortage of major funders. Is it important for the EA movement to make up for the funding shortfall?
