My apologies if this has been posted elsewhere!

The article basically suggests that the Horizon Policy Fellows might be sympathetic to big AI Labs because of their ties to Open Philanthropy, and/or that Fellows might be interested in types of long-term risk that aren’t important right now.

There are also a lot of implications that Open Philanthropy and EA are suspicious and are extending shadowy tendrils into government policy.


I think there is a conversation to be had about EA funders' strategy of working from inside AI companies, and whether that creates issues. For example, Holly Elmore writes that many EA institutions don't want to fund AI Pause advocacy to the public, because that could jeopardize their influence with those companies. (https://twitter.com/ilex_ulmus/status/1690097834123755521?t=FSq1OWL406KkdF74IN0T3w&s=19) That seems like an unhealthy dynamic.

However, Politico has a history of bad-faith EA coverage (https://forum.effectivealtruism.org/posts/Q4TJ2vPQnD5Zw2aiz/james-herbert-s-shortform?commentId=wXZzJgJqZ8TxArEN6), and also here I find that they're using a much lower standard for EA criticism* than for the defense.

I do wonder if there are any particular ways in which those working on catastrophic risk should visibly signal cooperativeness with those focused on current-model risk.

* Especially this part: “There’s a push being made that the only thing we should care about is long-term risk because ‘It’s going to take over the world, Terminator, blah blah blah,’” Venkatasubramanian said.

I'm pro dismissiveness of sneer/dunk culture (most of the time it comes up), but I think the conflict-of-interest point about Open Phil's correlated investments, board seats, and marriage ties is a very reasonable thing to raise and is not sneer/dunk material. I get the sense from what's been written publicly that Open Phil has tried its best not to manipulate Horizon fellows toward parochial/selfish gains for senior Open Phil staff, but I don't think that people who are less trusting than me about this are inherently acting in bad faith.

In an "isolated demand for rigor" sense it may turn back into opportunism or sneer/dunk---- I kinda doubt that any industry could do anything ever without a little corruption, or a considerable risk of corruption, especially new industries. (i.e. my 70% hunch is that if an honest attempt to learn about the reference class of corporation and foundation partnerships wining and dining people on the hill and consulting on legislation was conducted, these risks from horizon in particular would not look unusually dicey. I'd love for someone to update me in either direction). But we already know that we can't trust rhetorical strategies in environments like this. 

To extend your comment about lower standards for EA criticism, I thought the remainder of Venkatasubramanian's quote was quite interesting:

"...Terminator, blah blah blah,’” Venkatasubramanian said. “I think it’s important to ask, what is the basis for these claims? What is the likelihood of these claims coming to pass? And how certain are we about all this?

The EA community has spilled heaps of words on every single one of these issues, but the article nevertheless portrays the EA community as if it is pushing frivolous, ill-considered ideas instead of supporting the Real, Serious concerns held by Thoughtful and Reasonable people.

It's interesting to consider why the portrayal is so off-base, because a few minutes of Googling and reading EA content could have disabused the reporter of the notion that EA has an unserious, careless bent toward long-term AI risk.

On the other hand, if you Google "effective altruism AI," the first result is this Wired article with a very negative take on EA and AI. There are a few top-level results from 80K and EA.org, but most of the first-page results are articles that basically say, "So there's this weird group of people who care a lot about AI...", with varying but mostly negative levels of sympathy.

I guess it could be the case that the reporter, the outlet, or both have a level of antipathy for EA that precludes due diligence. Or they could be attempting basic due diligence but mainly reading sources that have a very negative take on EA.

Either way, EA's public image (specifically regarding AI) is not ideal. Your suggestion about making a greater effort to visibly signal cooperativeness might be a really good one!

the article nevertheless portrays the EA community as if it is pushing frivolous, ill-considered ideas instead of supporting the Real, Serious concerns held by Thoughtful and Reasonable people.

What bothers me is that many criticisms of EA that hinge on "EA is neglecting this angle in a careless and malicious manner" could have been addressed with basic Googling.

I don't expect the average Joe to actively research EA, but someone who's creating a longform written or video essay with multiple sources should be held to higher standards.

One example: "Effective Altruism and the Cult of Rationality: Shaping the Political Future from FTX to AI" (Columbia Political Review, cpreview.org)

Thus, we must be wary of the power behind a mindset focused solely on the hypothetical future and allow space and empathy for the short term needs of society. A tyranny of the quantifiably rational majority would lead to more quantifying of human suffering than policy change.

This conclusion somehow manages to completely ignore the neartermist cause areas, the frequent discussions about prioritising neartermism vs. longtermism, and the neartermist research that does focus on qualitative wellbeing. I genuinely don't know how someone can read dozens of pages about EA and not come across any reference to neartermism.

Ultimately, I've read so many EA criticism pieces where it feels like the writer hasn't talked to EAs, or conveniently ignores that most EAs spend most of their work time thinking about how to solve real problems that affect people.

These articles paint a picture of EAs so completely divorced from my actual interactions with EAs. The actual object-level work done by EA orgs is often described in 1-2 sentences, while an entire article is devoted to organisational drama/conflict. Like ... how do you talk about EA without mentioning the work EAs do to ... solve problems???

EA people may have written thousands of posts on why AI is dangerous, but those posts were written for the community, assuming readers already believed in Bostrom's work etc. They use a certain jargon and presuppose many things that a mainstream audience doesn't assume at all. And advocacy about AI is still very much debated within EA and far from taken for granted (see the recent post on advocacy on the forum; I can link it if people are confused). Also, the way Open Phil allocates money, and its clear concentration on GCRs to the detriment of more neartermist, global health causes, is a fact, and its influence on Congress is also a fact. Facts can be skewed towards a certain thinking perspective, true, but they're there.

Despite the feeling some might have that most in EA consider existential risks related to AI THE most pressing issue, I'm not sure how true that is. The forum is a nice smokescreen, given that the people posting and commenting on these posts are always the same, and the Rethink Priorities survey is NOT representative: I ran the numbers for my own EA group against the trends RP reported for it, and they don't align; the survey is clearly skewed towards a minority of forum regulars and AI aficionados.

So we can be mad all we want and rant that these journalists are dense (I don't deny Politico's bad coverage of EA, btw; it's just not the only outlet drawing these conclusions), but as long as we don't take advocacy seriously and try to get these arguments out there, nothing better will happen. So let's take these articles as an opportunity to do better, instead of taking our arguments for granted. There is work to do about this inside and outside the community.

And let me anticipate the downvotes that these opinions usually get me (quite bad, btw, for a community that is supposed to seek truth and not just cede to the human impulse of 'I don't like it, I'll downvote it without arguing'): if you disagree on these specific points, let me know why. Be constructive. It's also an issue in itself: imagine a journalist who creates an account to better understand the EA community and comment on posts, and who gets downvoted every time they dare to raise negative opinions or ask uncomfortable questions about AI safety. Well, so much for our ability to be constructive.

asking uncomfortable questions about AI safety.

Can you give some examples here? What are some uncomfortable questions about AI safety (that a journalist might ask)?

Sure! So far there are arguments showing how general AI, or even superintelligence, could be created, but the timelines vary immensely from researcher to researcher, and there is no data-based evidence that would justify pouring all this money into it. EA is supposed to be evidence-based, and yet all we seem to have are arguments, not evidence. I understand this is the nature of these things, but it's striking to see how rigorous EA tries to be when measuring GiveWell's impact versus the impact created by pouring money into AI safety for longtermist causes. Impact evaluation for AI safety feels non-existent at worst and not good at best (see Rethink Priorities' useful post on impact-based evidence regarding x-risks).

Yeah, I think it's quite likely that these reporters are basing their opinions on others' opinions, rather than what people from the EA and AI safety communities are saying. I wonder if some search engine optimization could help with this?

This part was also interesting & frustrating:

“Regulations targeting these frontier models would create unique hurdles and costs specifically for companies that already have vast resources, like OpenAI and Anthropic, thus giving an advantage to less-resourced start-ups and independent researchers who need not be subject to such requirements (because they are building less dangerous systems),” Levine wrote (emphasis original).

I think this is a really good point that deserves to be signal-boosted more (and obviously policy needs to actually do this). But then the article immediately follows it with this refutation, offered without any supporting arguments 😭:

Many AI experts dispute Levine’s claim that well-resourced AI firms will be hardest hit by licensing rules. Venkatasubramanian said the message to lawmakers from researchers, companies and organizations aligned with Open Philanthropy’s approach to AI is simple — “‘You should be scared out of your mind, and only I can help you.’” And he said any rules placing limits on who can work on “risky” AI would put today’s leading companies in the pole position.
