Recent Discussion

In a recent comment, Ben Todd of 80,000 Hours wrote about his desire to see more EA entrepreneurs who can scale their charities to spend larger amounts of money:

I'm especially excited about finding people who could run $100m+ per year 'megaprojects', as opposed to more non-profits in the $1-$10m per year range, though I agree this might require building a bigger pipeline of smaller projects.

He later tweeted:

It's striking that the projects that were biggest in pre-2015 (OP, GiveWell, MIRI, CEA, 80k, FHI) are still among the biggest today, when additional resources should make new types of project possible.

It is striking and surprising that these are still some of the largest projects in the EA community. However, it's not surprising that these types of projects aren't spending $100...

I wonder if this is better or worse than buying up fractions of AI companies?

I'd be keen to know how this helps.

Investors and researchers who don't believe in your stances or leadership can probably exit and form new companies, and if they do believe, you don't necessarily need to buy shares to get them to listen. Capital isn't a major constraint as far as I can see. But I'm an outsider to this so I'd be keen to know more.

P.S. Even within the EA community there's disagreement on safety/capabilities tradeoffs, or on what safety work actually works. I wonder how you'll pick good leadership for this that the whole EA community is comfortable with.

Summary

It seems unlikely that green growth alone (or absolute decoupling, measures of efficiency) will be sufficient to reduce greenhouse gas (GHG) emissions in time [1]. In order not to spend the world’s carbon budget within the coming decade(s), it seems we will also need to change our behavior (measures of sufficiency). However, it is very difficult for political parties to advocate such measures - and to properly inform the general public - out of a fear of electoral loss. Yet the longer we wait to implement effective GHG emission reductions, the more far-reaching such measures will need to be. The solution this text proposes is an (online-first) preferendum, in which a government or parliament polls the broad population about which of a set of additional measures, compiled ideally...

I think where we disagree is that you seem to accept warming of considerably more than 2°C, whereas I refuse to accept such a scenario and look for ways to avoid it.

Clearly, in order to reach net-zero emissions at some point, we need technological innovation and market forces. But if we want to stay well below 2°C, preferably at 1.5°C, waiting for those won't cut it: during the coming decade(s) we would spend all of the carbon budget that remains for staying within those boundaries with some degree of certainty.

In order to sufficiently reduce our emissions the comi... (read more)

Summary

  • The Community Building Grants (CBG) programme will be narrowing its scope to support groups in certain key locations and universities.
  • Harri Besceli, who has been running the programme since 2018, is stepping down from the CBG manager role, so we will be hiring a new programme manager. Individuals interested in the role can apply here.
  • The CBG programme has opened applications for certain universities (see list below).
  • The CBG programme plans to run hiring rounds in select city/national locations (see list below). The timeline for these application rounds will be announced once a new Programme Manager is hired.
  • The Effective Altruism Infrastructure Fund (EAIF) will start assessing applications for grants to pay for full- and part-time organisers from groups not covered by the CBG programme. The
...
JoanGass: Within the CEA Groups team, we have several different sub-teams. Two of the sub-teams are focused on experimenting and understanding what a model looks like with full-time community builders in a focused set of locations (one sub-team for university groups, another for city/national groups). This is because the type of centralized support CEA might provide and the type of skills/characteristics required of someone working full-time running a university group or a city/national professional network might look very different depending on the ultimate model. Our staff capacity is limited (for hiring, piloting, and scaling), and we think that this focus will enable faster scaling in the long term.

I also want to note a couple of things:

  • In addition to the sub-teams mentioned above, we have two sub-teams supporting part-time organizers. One team provides foundational support to all part-time/volunteer group organizers (basic funding, resources hub, EA Slack, phone calls), and another team runs the University Groups Accelerator Program to help part-time university organizers launch their groups.
  • Additionally, just because the CEA Groups team building up the ‘full-time’ model is prioritizing certain locations, that doesn't mean we want to stop experiments in other locations. We'd encourage people interested in full-time organizing in places that aren't on the locations list above to apply to the EAIF, help us innovate on the community-building model in different locations, and share back your learnings with other organizers and on the Forum.

Thanks! I need to ask a lot of clarifying questions:

When you say "This is because the type of centralized support CEA might provide and the type of skills/characteristics required of someone working full-time running a university group or a city/national professional network might look very different depending on the ultimate model.", (1) does "This" refer to the fact that you have 2 subteams working with focus locations as opposed to everyone working on all locations? (2) If so, could I reword the explanation the sentence gives to "We need to work on focu... (read more)

A new volume of essays on EA from religious perspectives is out.
  
The volume is open access. The first three chapters - after Lara Buchak's foreword - provide a general take on EA from a Buddhist, Christian, and Orthodox Jewish perspective. Among the other chapters, I would particularly like to highlight Stefan Riedener’s chapter. Given the focus on extinction risks in EA, his clear and helpful analysis from a Christian (more specifically: Thomist) perspective is a very valuable contribution. 
 
How you can help: if you know academics in the field of religion, you can point them towards the volume or to one of these chapters. If you know anyone to whom I should send a physical copy, let me know.
 
This volume is not the final word on the intersection...

A lot has happened since I wrote about my PhD in August. Seems like a reflection is in order!

In short, I have decided to suspend my PhD. For the next year I will be working as a contractor for the Open Philanthropy Project, helping them develop better models of how transformative AI might result in economic growth loops. If this goes well, I plan to transition into doing more contractor work for EA-aligned organizations.

But before we get to the juicy details, more on these last few months.

  • Between September and October I did a PhD secondment at Maastricht University. I had a fantastic time and met some great people. Having an office and being surrounded by colleagues while working made a huge difference to my happiness and productivity.

    During
...

Congratulations on the new position, it sounds really exciting!

Content warning: gambling is addictive and generally loses money. Please don't make any bets with negative expected value.

tl;dr: New York State recently legalized online sports gambling, so casinos are offering large incentives for signing up on their websites. I got about $4,400 out of this (you can probably do better; see below) and will be donating it to GiveWell. You must be located in New York or another eligible state while signing up and making the bets.

How to exploit this

There are various casinos, sportsbooks, and other gambling establishments located in New York State. Online sports gambling was legalized there in January 2022, so they are giving out introductory offers where they will match your deposits or give you free bets when you sign up. Basically, you can...

KaseyShibayama: I agree with that analysis (and someone risk-neutral should bet the 2nd game on whichever game has the lowest vig [https://www.bookmakers.bet/1794/the-vig/]). Worth considering taxes, though.
david_reinstein: Should we be concerned at all about casinos refusing to release funds because of 'wagering requirements' [https://calbizjournal.com/what-to-do-when-an-online-casino-refuses-to-pay-out-your-winnings/] (also see HERE [https://www.playnow.com/mb/resources/documents/casino/promotions/understanding-wagering-requirements.pdf]), or something like this? Are we sure these offers don't come with hidden wagering requirements or other catches? Also, these tips [https://www.gamblingsites.org/sports-betting/essentials/why-bookmakers-limit-accounts/] seem helpful for not getting your account shut down.

My understanding is that wagering requirements which severely reduce the EV of promotional offers (while still leaving it positive in some cases) are standard in the industry. That said, maybe the situation in NY is exceptional right now.
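
To make the worry concrete, here's a rough back-of-the-envelope sketch (made-up numbers, not the terms of any actual NY offer) of how a wagering requirement can eat into a promo's expected value, assuming each required wager loses roughly the book's hold:

```python
# Rough sketch: how a wagering requirement cuts into a promo's expected value.
# Made-up numbers, not the terms of any actual New York offer.
bonus = 1_000        # promotional bonus ($)
hold = 0.045         # average sportsbook hold/vig lost per $ wagered (~4.5%)

for wagering_multiple in (1, 10, 25):
    expected_loss = wagering_multiple * bonus * hold
    ev = bonus - expected_loss
    print(f"{wagering_multiple}x requirement: EV ≈ ${ev:,.0f}")

# 1x:  EV ≈ $955
# 10x: EV ≈ $550
# 25x: EV ≈ $-125  -> negative, so the fine print can flip the sign
```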


Summary: Creating the right incentive structures in science could make science more fluid, efficient, and painless. Indeed, there seem to be multiple reasons why people complain about how science is done now. In this post, I analyze some of the problems science has and offer some ideas that could improve the situation. The aim of this post is to suggest science policy as a possible research area for EAs, one where it might be possible to make progress that results in better science.

Introduction

Science is one of the key enablers of progress in our world. Yet it seems to me that there are many ways it could be improved. Anecdotal evidence suggests that most scientists are in fact not happy about the incentive...

Thanks for this! I've been thinking quite a bit about this (see some previous posts), and there is a bit of an emerging EA/metascience community - I'd be happy to chat if you're interested!

Some specific comments:

In consequence, a possible solution is some kind of coordinated action by scientists (or universities) to decline being referees for high-fee journals.

Could you elaborate on the change in the system you envision as a result of something like this? My current thinking (but I'm very open to being convinced otherwise) is that lower fees to access publication... (read more)

How does Replaceability apply to overcrowded academic areas that have an oversupply of PhDs slaving away in postdocs, never to find a permanent job? This glut of PhDs must mean the actual impact of me joining a field like Particle Physics would be very low. Is that correct?

How is that different from one of the career paths recommended by 80,000 Hours, Biomedical Research, which is also competitive but is marked as high impact?

For context, I must mention that Biomedical Research seems to tackle more 'solvable' problems, while Theoretical Physics has been mostly stagnant since the Standard Model (little progress has been made in several decades). For example, a few scientists predicted the Higgs boson within a short time of each other; Higgs was merely the first to...

tl;dr: We should value large expected impact[1] rather than large inputs, but should get especially excited about megaprojects anyway because they're a useful tool we're now unlocking.

tl;dr 2: It previously made sense for EAs to be especially excited about projects with very efficient expected impact (in terms of the dollars and labour required). Now that we have more resources, we should probably be especially excited about projects with huge expected impact (especially, but not only, if they're very efficient). Those projects will often be megaprojects. But we should remember that what we're really excited about is the capacity to achieve a lot of impact, not the capacity to absorb a lot of inputs.

We should be excited about the blue and green circles, including but not limited to their overlaps with the orange circle. We should not...

Peter Wildeford: So my understanding is as follows. Imagine that we had these five projects (and only these projects) in the EA portfolio:

  • Alpha: Spend $100,000 to produce 1,000 units of impact (after which Alpha will be exhausted and will produce no more units of impact; you can't buy it twice).
  • Beta: Spend $100,000,000 to produce 200,000 units of impact (after which Beta will be exhausted; you can't buy it twice).
  • Gamma: Spend $1,000,000,000 to produce 300,000 units of impact (after which Gamma will be exhausted; you can't buy it twice).
  • GiveDeltaly: Spend any amount of money to produce a unit of impact for each $2,000 spent (GiveDeltaly cannot be exhausted and you can buy it as many times as you want).
  • Research: Spend $200,000 to create a new opportunity with the same "spend X for Y" profile as Alpha, Beta, Gamma, or GiveDeltaly.

Early EA (say ~2013), with relatively fewer resources (we didn't have $100M to spend), would've been ecstatic about Alpha because it only costs $100 to buy one unit of impact, which is much better than Beta's $500 per unit, GiveDeltaly's $2,000 per unit, or Gamma's $3,333.33 per unit. But "modern" EA, with lots of money and a shortage of opportunities to spend it on, would gladly buy Alpha first but would be more excited by Beta, because it allows us to deploy more of our portfolio at a better effectiveness. (And no one would be excited by Gamma: even though it's a huge megaproject, it doesn't beat our baseline of GiveDeltaly.)

Now let's think of things as allocating an EA bank account and use Research. What should we use Research for? Early EA would want us to focus our research efforts on finding another opportunity like Alpha, since it is very cost-effective! But modern EA would rather we look for opportunities like Beta: even though it is less effective than Alpha, it can use up 1,000x more funds.
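
For concreteness, here is the cost-per-unit arithmetic from the comment above as a tiny Python sketch (the figures are the comment's hypothetical ones, not real projects):

```python
# Cost per unit of impact for the hypothetical projects above (illustrative only).
projects = {
    "Alpha": (100_000, 1_000),           # spend $100k, get 1,000 units (one-off)
    "Beta": (100_000_000, 200_000),      # spend $100M, get 200,000 units (one-off)
    "Gamma": (1_000_000_000, 300_000),   # spend $1B, get 300,000 units (one-off)
    "GiveDeltaly": (2_000, 1),           # $2,000 per unit, can be bought indefinitely
}

for name, (cost, units) in projects.items():
    print(f"{name}: ${cost / units:,.2f} per unit of impact")

# Alpha:       $100.00   per unit -> most efficient, but only absorbs $100k
# Beta:        $500.00   per unit -> less efficient, but absorbs $100M above baseline
# GiveDeltaly: $2,000.00 per unit -> the always-available baseline
# Gamma:       $3,333.33 per unit -> worse than the baseline, so never worth buying
```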

I completely agree with everything you said (and my previous comment was trying to convey part of this, admittedly in a much less transparent way).

MichaelA: Doubling the cost-effectiveness while maintaining cost absorbed, and doubling cost absorbed while maintaining cost-effectiveness, would both take work (scaling without dilution/breaking is also hard). Probably one tends to be harder, but that’d vary a lot between cases.

But if we could achieve either for free by magic, or alternatively if we assume equal hardness for either, then doubling cost-effectiveness would very likely be better, for the reason stated above. (And that’s sufficient for “literally the same” to have been an inaccurate claim.) I think that’s just fairly obvious. If you really imagine you could press a button to have either effect on 80k for free, or for the same cost either way, I think you really should want to press the “more cost-effective” button; otherwise you’re basically spending extra talent for no reason. (With the caveat given above. Also a caveat that absorbing talent helps build their career capital - I should’ve mentioned that earlier. But that’s still probably less good than them doing some other option and 80k getting the extra impact without the extra labour.)

As noted above, we’re still fairly constrained on some resources, especially certain types of talent. We don’t have leftovers of all types of resources. (E.g. I could very easily swap from my current job into any of several other high-impact jobs, but won’t, because there’s only one me and I think my current job is the best use of current me - and I know several other people in this position. With respect to such people, there are leftover positions/project ideas, not leftover resources-in-the-form-of-people.)
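
One way to see the point about leftover talent: a toy model (made-up numbers, nothing MichaelA computed) where impact = cost-effectiveness × resources absorbed, and any resources an org doesn't absorb get redeployed to a less effective counterfactual use:

```python
# Toy model: impact = cost_effectiveness * resources_absorbed, plus whatever the
# unabsorbed resources achieve in their next-best (counterfactual) use.
TOTAL_RESOURCES = 200       # e.g. units of talent available to the community
COUNTERFACTUAL_CE = 4       # impact per unit of resource in its next-best use

def total_impact(cost_effectiveness, resources_absorbed):
    leftover = TOTAL_RESOURCES - resources_absorbed
    return cost_effectiveness * resources_absorbed + COUNTERFACTUAL_CE * leftover

baseline_ce, baseline_absorbed = 10, 100
print(total_impact(baseline_ce, baseline_absorbed))        # baseline:        1000 + 400 = 1400
print(total_impact(2 * baseline_ce, baseline_absorbed))    # double CE:       2000 + 400 = 2400
print(total_impact(baseline_ce, 2 * baseline_absorbed))    # double absorbed: 2000 + 0   = 2000
```

Both "buttons" double the org's direct impact, but the more-cost-effective button leaves 100 units of talent free for other work, which is the gap being pointed at.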