Jack Cunningham

178 karma · Joined November 2021

Bio

Currently enrolled in an MA in Economics at the University of Texas at Austin. I'm interested in transitioning my career in a more effective direction; applied research at an org like Open Phil or Rethink Priorities would be ideal. The causes I'm currently most interested in are existential risk (of course), general longtermism, and land use policy.

I used to teach a high school philosophy class that included a unit on ethics, which meant I got to introduce my students to Effective Altruism.

I donated a kidney altruistically in April 2020.

The charities I donate to regularly include GiveWell's Maximum Impact Fund and the Clean Air Task Force. I've taken the Giving What We Can pledge.

Sometimes I write things down here: ordinaryevents.substack.com

Comments (11)

Great read! Am I the only one who heard Will's Scottish brogue in my ear as I was reading?

I'm pleased to hear that you're running this fellowship again and extremely excited about applying!

A question about the application process: For the think tank tracks, you require a writing sample. "Applicant should be the sole or main author, ≤5 pages, can be an excerpt. Required for think tank track, optional for congressional and federal track. Please do not create new material." Can you give more detail on what you're looking for, especially as far as content and style go? Would a well-researched EA Forum post qualify, or more of an academic paper? Should it relate explicitly to tech policy?

Thanks for the clarification! I've edited the post to reflect your feedback.

"I think this high success rate [at receiving meeting requests] was due to a few key things:"

It might not be due to such key things at all! I was at EAGx Boston this weekend and also had quite a high success rate at scheduling 1:1 meetings, yet my experience doesn't have much in common with yours: most of my messages were sent no more than a couple of days before the conference, I mostly asked people how they could help me, and I have no full-time EA projects at the moment.

It might just be the case that EAs who attend such conferences are inclined to meet with you - whether for their own selfish reasons (perhaps more common than you think!), out of altruistic inclination, or because they recognize that the vibe of an EAGx is geared toward students and early-career professionals.

My point is that people should plausibly expect a fairly high success rate with 1:1 meeting requests at EA conferences, even without being diligent about making those requests ahead of time or feeling like they have much to offer the people they meet.

Thanks Akhil!

There are a couple of good reasons to think that more fine-grained monitoring could be effective. For one thing, PM2.5 levels are often much more localized than we realize, so some neighborhoods and microregions are exposed to much higher concentrations than others. They are also time-dependent, meaning that some days and times are much worse than others. So more fine-grained data can improve our understanding of the hardest-hit areas at the neighborhood level, while giving local residents better information as well - imagine if everyone had the kind of understanding of air quality conditions that Bay Area residents have during wildfires.

I also think it’s possible that better local monitoring creates its own momentum, since local residents now have quantifiable proof of their air quality conditions. It’s possible that this kind of information would elevate the issue to a more pressing political priority in the hardest-hit areas, though I am still uncertain about that.

Note: I recently wrote a post that tries, in part, to answer this question. The post is more of a 15-minute answer than a 2-minute one, so I've adapted some of it below to offer a more targeted response.

Let’s agree that the 8 billion people alive right now have moral worth - their lives mean something, and their suffering is bad. They constitute, for the time being, our moral circle. Now, fast forward thirty years. Billions of new people have been born. They didn’t exist before, but now they do.

Should we include them in our moral imagination now, before they are even born? There are good reasons to believe we should. Thirty years ago, many who are alive today (including me!) weren’t born. But we exist now, and we matter. We have moral worth. And choices that people and societies made thirty years ago affect people who were not yet born but who have moral worth now. Our lives are made better or worse by the causal chain that links the past to the present.

Aristotle teaches us that time, by itself, is not efficacious. He's wrong about that in some respects - in modern economies, time alone is enough to inflate the value of currency and to let new policies and technologies emerge and scale, which might tempt us to discount the future accordingly. But he's right when it comes to the moral worth of humans. The moral worth of humans existing now isn't any less than the moral worth of humans a generation ago; for the same reason, the moral worth of humans a generation from now matters just as much as ours does right now.
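To make concrete what "discounting the future accordingly" would imply for moral worth, here's a minimal back-of-envelope sketch (the rates and horizons are my own illustrative choices, not drawn from any particular economic model):

```python
# Illustrative only: a constant annual discount rate r gives a life
# t years in the future the weight 1 / (1 + r) ** t.
for r in (0.01, 0.03, 0.05):      # hypothetical annual discount rates
    for t in (30, 100, 300):      # years into the future
        weight = 1 / (1 + r) ** t
        print(f"r = {r:.0%}, t = {t:>3} yr: weight = {weight:.6f}")
```

At a 5% rate, a life 300 years out counts for less than a millionth of a life today - exactly the conclusion the argument above rejects when it comes to moral worth.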

Our choices now have the power to influence the future, the billions of lives that will come to exist in the next thirty years. Our choices now affect the conditions under which choices will be made tomorrow, which affect the conditions under which choices will be made next year, etc. And future people, who will have moral worth, whose lives will matter, will be affected by those choices. If we take seriously the notion that what happens to people matters, we have to make choices that respect the moral worth of people who don’t even exist yet.

Now expand your moral circle once more. Imagine the next thirty generations of people. So far, there have been roughly 7,500 generations of humans, starting with the evolution of Homo sapiens roughly 150,000 years ago. One estimate puts the total number of human beings who have ever lived at just over 100 billion. The next thirty generations will bring at least that many humans into existence again. Each of these humans will have the same moral worth as you or I. Why should we discount their moral worth simply because they occupy a different spot on the timeline than we do?
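Here's a quick back-of-envelope version of that arithmetic (the birth-rate input is my own rough, hypothetical assumption, not part of the estimate cited above):

```python
# Back-of-envelope check of the generation figures above.
years_of_homo_sapiens = 150_000
generations_so_far = 7_500
years_per_generation = years_of_homo_sapiens / generations_so_far  # 20.0

# Hypothetical assumption: today's ~130 million births per year holds
# steady over the next thirty generations (roughly 600 years).
births_per_year = 130e6
future_generations = 30
future_people = births_per_year * years_per_generation * future_generations
print(f"{years_per_generation:.0f} years per generation")
print(f"~{future_people / 1e9:.0f} billion births over the next "
      f"{future_generations} generations")
```

That lands on the order of 80 billion births - the same order of magnitude as everyone who has ever lived. The exact figure depends on how the birth rate evolves, but the order of magnitude is all the argument needs.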

If possible, we should strive to influence the future in a positive direction, because future people have just as much moral worth as we do. Anything less would be a catastrophic failure of moral imagination.

I love that you are celebrating your successes here! Your parenthetical apologizing for potentially sounding self-congratulatory made me think, "Huh, I'd actually quite like to see more celebration of when theory turns to action." The fact that your work influenced FP to start the Patient Philanthropy Fund is a clear connection demonstrating the potential impact of this kind of research; if you were to shout that from the rooftops, I wouldn't begrudge you! If anything, clarity about the real-world impacts of transformational research into the long-term future would likely inspire others to pursue the field (citation needed).

I'm quite sympathetic to your mission of developing a robust understanding of the parameters of cause prioritization. I do have a maybe-dumb question: what is your Theory of Change? You write,

"In GPI’s first few years, we have made a good start on producing high-quality and mission-aligned research papers. In 2022 we are planning to continue the momentum and have set ourselves ambitious targets on the number of papers we want to get through different stages of the publishing pipeline, as well as that we want to post as working papers on our website."

What do you plan on doing with your research output? What would you like to see others do with it, concretely? Is the goal to let your research percolate throughout EA-space/academia and maybe influence others' work? Is there a more direct policy or philanthropic goal of your research?

I suppose you answer some of these questions here:

"In 2021, we commenced a project to design and then begin tracking more sophisticated progress metrics. This project was put on hold, for reasons of capacity constraint, with the resignation of our Research Manager. We plan to continue the project once we have succeeded in hiring the successor of this role."

But I'm still interested in, like, your top-level thinking around your theory of change, or maybe your gut-check.

Open questions:

What's the incentive structure here? If I'm following the money, it seems likely that there's a much higher expected return if you hype up your plausibly-really-important product - and if you believe the hype yourself. I don't see why Musk or Zuckerberg should ask themselves the hard questions about their mission given that there's no incentive, as far as I can see, for them to do so. (Which seems bad!)

What can be done? Presumably we could fund two FTEs in-house at any given EA research organization to red-team any given massive corporate effort like SpaceX, but I don't have a coherent theory of change as to what that would accomplish. Pressure the SEC to require annual updates to SEC filings? Might be closer...

[epistemic status: strong opinion]

I see Policy Design and Implementation as a neglected cause area for Effective Altruism.

Effective policy changes in developed countries could unleash many trillions of dollars in economic potential. This is especially true for immigration reform and land use policy. While political concerns are often cited as obstacles to progress on these issues, we still aren't investing enough time or money in finding creative solutions to those obstacles, especially considering the size of the trillion-dollar bill we're leaving on the sidewalk.

In developing countries, economic policy changes might have an even higher impact. We don't yet have a good sense of which factors allow countries to climb the income ladder, but it seems clear that policy and governance have quite a lot to do with it! There are a few exciting ideas in this space, charter cities among them.

Policy design and implementation, as a cause area, is likely to be high-risk, high-reward. Unlike an intervention like GiveDirectly, which has a well-defined outcome, policy change carries wide error bars around its expected effect size. That fits well with Open Philanthropy's model of hits-based giving - throw a lot of interventions against the wall and see what sticks.

Policy design and implementation is also a complement to many of effective altruism's other important cause areas. When we think of safeguarding the long-term future, that includes effective policy around AI governance and biosecurity as well as nuclear policy. The single most effective change we could make in the animal welfare space would plausibly be a coordinated global policy outlawing battery cages.

Historically, my read is that policy change has been seen as less tractable than other kinds of interventions. However, with the money and talent that currently exists within the effective altruism movement, there's a good case to be made that policy change should be revisited. We should be thinking about how to conduct and share effective policy research, communicate the results of that research broadly, and get our people into government agencies and even legislatures.
