I would like to share the policy platform I've written for maximizing total welfare, applicable to the U.S. government.

This incorporates content from older Candidate Scoring System reports, but it now has better ideas, sources, and focus, and is halfway readable and user-friendly.

This is meant to provide academic solutions to debates and to inform EA and EA-adjacent audiences. It's not well optimized for persuasion of general audiences or government officials (although it contains plenty of content that could help you with those things).

As always, challenges are welcome; I actively revise the page with new ideas and sources.


(As always, personal opinion, not my employer's.)

I think this looks like an interesting project, and it would be great if more EAs were more involved in politics.

One piece of feedback though – I hope it's useful: I generally recommend against using the EA branding for such projects, for several reasons:

1) It likely discourages others from attempting similar projects, as they think the space is already covered.
2) If you don't do a great job, that could reflect badly on all of EA, as your project will automatically be perceived by some people as being representative of EA if you're using that branding.
3) You might unnecessarily limit your target audience: not everyone might like or understand the EA philosophy, but they might still be interested in your project.

(Over the past years, I have recommended against using EA branding in the projects that I've been involved with myself for these reasons, unless they represent a central part of EA infrastructure. For instance, I've rebranded EAF to CLR.)

I hope that's helpful feedback!

kbog

I think there are countervailing reasons in favor of doing so publicly, described here.

Additionally, prominent EA organizations and individuals have already displayed enough politically contentious behavior that a lot of people already perceive EA in certain political ways. Restricting politically contentious public EA behavior to those few orgs and individuals maximizes the problems of 1) and 2), whereas having a wider variety of public EA points of view mitigates them. I'd use a different branding if I were less convinced that politically engaged audiences already perceive EA as having political aspects.

(As always, personal opinion, not my employer's.)

While I agree that it could be good for EAs to become more politically active, I don't think there are good arguments for using an EA branding.

My main point: By not putting "EA" into the name of your project, you get free option value: If you do great, you can still always associate with EA more strongly at a later stage; if you do poorly, you have avoided causing any problems for EA. By choosing an EA branding for your project, you selectively increase the downside risk, but not the upside/benefits.

Quoting from that link:

You might be worried that Effective Altruism will get a public relations problem if members associate with something politically controversial.

I'm not worried about this; I think EAs doing something politically controversial is both a risk worth taking and mostly unavoidable. I'm only worried about associating the EA brand itself with something politically controversial (or, perhaps as big a risk, something that's perceived as amateurish).

Where political work earns criticism from some, it earns accolades from others. 

The concern is not that political work earns criticism (I think that's a risk worth taking), but that this criticism would be perceived as being relevant to all of EA (rather than just your project).

People are glad to see Effective Altruists supporting their rights and interests. 

I think this is not a strong argument:

  • The EA community is small and isn't widely perceived as having a lot of resources.
  • A lot of EA issues are inherently controversial, with a small supporter base. Partly by definition, EA focuses on neglected issues, helping those who don't have a supporter base. Non-human animals and people in the long-term future might be glad about the support we provide, but they cannot help us gain more political influence now.

The movement is perceived as more serious and potent when it tackles political issues in addition to regular charities and careers. 

I think this mainly holds if your project is successful; see my point about option value above.

I perceive your website as framed in an EA-ingroup-y way. I don't think this is bad; in fact, I really like some work of this type (e.g., Brian Tomasik's essays). But I don't think it's a great way to get more ordinary people to perceive EA as "more serious and potent" – instead, I think it'll make EA look somewhat weird and niche.

Finally, as time goes by, our efforts will probably be increasingly regarded as being on the right side of history, due to our generally superior epistemics and ethics.

I appreciate your optimism, but I think it'll be a relatively small minority of people who perceive it that way – most will just believe whatever is most advantageous given the short-term incentives they face. E.g., I don't think atheists/deists and experts are very highly regarded in politics, despite being on the right side of history.

most people do not think about politics in the same way as the Very Online left and right … Don’t let Twitter define your understanding of what counts as good or bad PR. … The people who get most outraged about political disagreement generally wouldn’t contribute positively to EA causes anyway, so we can let them go.

I agree; I'm mainly worried about the perception by public intellectuals, policy professionals, and politicians.

EA already has a contentious reputation among some people who are highly politically animated, either because they cannot stand the diversity of political opinions within the EA movement, or because we do not often support certain political causes. Those people are simply a lost cause.

(I don't think this is an important point here, but you could still make things much worse by causing major backlash, shitstorms, etc.)

Finally, Effective Altruism grows best when it offers something for everyone. And for people who are not well equipped or interested in our other cause areas, civic action may be that something.

You can do this just as well without putting "EA" into the name of the project.

Additionally, prominent EA organizations and individuals have already displayed enough politically contentious behavior that a lot of people already perceive EA in certain political ways. Restricting politically contentious public EA behavior to those few orgs and individuals maximizes the problems of 1) and 2), whereas having a wider variety of public EA points of view mitigates them.

I agree with this. As far as I know, none of these orgs and individuals currently use an EA branding. That seems good to me, and I hope that everyone launching a political EA project will follow suit.

I hope this is helpful, and I hope it's clear that I wrote this comment to help you improve the project and have more impact; I'm overall excited about this work. I haven't looked at the handbook in detail, but based on skimming it, it looks really interesting, so thanks for putting that together!

My main point: By not putting "EA" into the name of your project, you get free option value: If you do great, you can still always associate with EA more strongly at a later stage; if you do poorly, you have avoided causing any problems for EA. 

I've already done this. I shared much of this content for over a year without this name and website. My impression was that it neither did great nor did poorly (except among EAs, who have mostly been positive). One of the problems was that some people seemed confused and suspicious because they didn't grasp who I was and what point of view I was coming from.

I agree with this. As far as I know, none of these orgs and individuals currently use an EA branding. 

A few do. And most may not literally have "EA" in their name, but they still explicitly invoke it, and audiences are smart enough to know that they are associated with the EA movement. 

And they get far larger audiences and attention than me, so they are the dominant images in the minds of people who have political perceptions of EA. Whatever I do to invoke EA will create a more equal diversity of public political faces of the movement, not a monolithic association of the EA brand with my particular view.

 

RE: the rest of your points, I won't go point by point, because you are making some general arguments which don't necessarily apply to your specific worry about the presence or absence of "EA" in the name. It would be more fruitful to first clarify exactly which types of people will perceive the project differently on this basis. Then we can talk about whether the differences in perception for those particular people will be good or bad.

You already say that you are mainly worried about "public intellectuals, policy professionals, and politicians." Any of these who reads my website in detail or understands the EA movement well will know that it relates to EA without necessarily being the only EA view. So we are imagining someone in the political elite who knows little about EA and looks briefly at my website. A lot of the general arguments don't apply here, and to me it seems like a good idea to (a) give this person a hook to take the content seriously and (b) show this person that EA can be relevant to their own line of work.

Or maybe we are imagining someone who previously didn't know about EA at all, in which case introducing them to the idea is a good thing.
