https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/

(Not mine) This post looks at ghostwriting and other misleading/dishonest behavior in EA. Maybe some people who have accounts here can clarify whether it was intentional or not.


A few updates: I have e-mailed the Open Philanthropy Project to ask about their activities, in particular about whether anyone at the Open Philanthropy Project has tried to influence which ideas (about, for example, moral philosophy, value theory, or the value of the future) a grant recipient or potential grant recipient talks or writes about in public. I have also asked whether I can share their replies in public, so hopefully there will be more public information about this. They have not replied yet, but I have elaborated on this issue in the following section: https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/#Troublesome_work_behind_the_scenes_including_censoring_research_and_suppressing_ideas_and_debates

I have exchanged e-mails with Bostrom about his claim that his “Undergraduate performance set national record in Sweden”, and I have talked to the university he studied at. Again, this is a less important issue, but it looks strange to me, it looks like part of a broader pattern, and it feels valuable to check it. My latest published information on the issue can be found at https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/#Potentially_dishonest_self-promotion. Part of that information is the following: On Oct. 23, 2019, Bostrom replied and gave me permission to share his reply in public, the relevant part of which reads as follows:

The record in question refers to the number of courses simultaneously pursued at one point during my undergraduate studies, which – if memory serves, which it might not since it is more than 25 years ago – was the equivalent of about three and a half programs of full time study, I think 74 ’study points’. (I also studied briefly at Umea Univ during the same two-year period I was enrolled in Gothenburg.) The basis for thinking this might be a record is simply that at the time I asked around in some circles of other ambitious students, and the next highest course load anybody had heard of was sufficiently lower than what I was taking that I thought statistically it looked like it was likely a record.

A part of my e-mail reply to Bostrom on Oct. 24, 2019:

My impression is that it may be difficult to confirm that no one else had done what you did. One would need to check what a vast number of students did at different universities potentially over many years. I don’t even know if that data is accessible before the 1990s, and to search all that data could be an enormous task. My picture of the situation is as follows: You pursued unusually many courses at some point in time during your undergraduate studies. You asked some students and the next highest course load anyone of them had heard of was sufficiently lower. You didn’t and don’t know whether anyone had done what you did before. (I do not know either; we can make guesses about whether someone else had done what you did, but that would be speculation.) Then you claim on your CV “Undergraduate performance set national record in Sweden.” I am puzzled by how you can think that is an honest and accurate claim. Will you change your CV so that you no longer claim that you set a record?

Information about university studies seems to be publicly available in Sweden. When I called the University of Gothenburg on Oct. 21, 2019, the person there was not aware of any such national records and said they have the following information for Niklas Boström, born 10 March 1973: one bachelor’s degree (Swedish: fil. kand.) from the University of Gothenburg, awarded in January 1995, with coursework that included theoretical philosophy, and one master’s degree (Swedish: magister or fil. mag.) from Stockholm University. He also did some additional coursework. He started to study at the university in Lund in fall 1992. I asked Bostrom whether this is him, but he did not reply.

More information that I noted from my call with the university: the person could see information from different universities in Sweden, and there are in total 367.5 higher education credits in the system (from different Swedish universities) for Boström, according to the current method for counting credits. 60 credits is a normal academic year (assuming one does not, e.g., take summer courses). Boström’s bachelor’s degree corresponds to 180 credits, which is the exact requirement for a bachelor’s degree. The total number of credits (367.5) corresponds to 6.125 years of full-time study (again assuming, e.g., no summer courses or extra evening courses). According to the university, he started studying in 1992 and, according to Bostrom’s CV, he studied at Stockholm University until 1996. I asked Bostrom, and I gather he confirmed that he only has one bachelor’s degree. Overall, I doubt he set such a record (I think no one knows, including Bostrom himself), and I think he presents the situation in a misleading way.
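To spell out the arithmetic behind those figures (using the university's numbers and the stated assumption that 60 credits correspond to one academic year of full-time study):

\[ \frac{367.5\ \text{credits}}{60\ \text{credits/year}} = 6.125\ \text{years}, \qquad \frac{180\ \text{credits}}{60\ \text{credits/year}} = 3\ \text{years (the bachelor's degree)}. \]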

Given the post's focus on purported efforts to silence alternative & dissenting views in EA, I think it'd be better for folks here to make comments rather than just downvote.

I downvoted. I'll just say I'm all for criticizing EA. I have little allegiance to the people criticized here and if this was good criticism I would be one of the first people to say it. There is great criticism of EA out there. This is not it. These concerns are trumped up, and the piece amounts to petty drama.

Point by point: guidelines to not encourage people to try to end the world seem very reasonable, "creat[ing] syllabi with writings by themselves and those who agree with them" also sounds super standard, I do personally consider Will a "cofounder of the effective altruism movement", I have no idea what Nick Bostrom’s CV is about but putting inflated impressive-sounding things on a CV is also super typical, and the ghostwriting sounds super benign even after the author made a big deal about it and then had to retract it.

Also, this essay made essentially no attempt to verify claims or to give the people accused a right of reply, which I consider very bad practice. I also don't think the author is at risk of death for writing this essay, and I think the stated reasoning for not attempting to do more to verify claims and provide a right of reply is irresponsible and lazy.

I also downvoted because I think it's a poor, overly dramatic piece, and it's not the type of work I'd want others to encounter on the Forum. I have literally never met anyone mentioned in the piece; I just think it's badly done, largely for the reasons mentioned above.

I also think it's generally bad practice to publish confidential correspondence.

Peter writes:

I have little allegiance to the people criticized here and if this was good criticism I would be one of the first people to say it.

It should be noted that Peter was profiled by William MacAskill (one of the main subjects of this post) in Quartz and was one of the few people profiled in William's book Doing Good Better. Chapter 9 of the book begins with:

As Peter Hurford entered his final year at Denison University, he needed to figure out what he was going to do with his life. He was twenty-two, majoring in political science and psychology, and he knew he wanted a career that would both be personally satisfying and would make a big difference. Graduate school was the obvious choice for someone with his interests, but he didn’t know what his other options were, or how to choose among them.

How should young people like Peter who want to make a difference in their careers go about their decisions?

Peter also writes:

I have no idea what Nick Bostrom’s CV is about but putting inflated impressive-sounding things on a CV is also super typical

But what Bostrom wrote is not just an "inflated impressive-sounding thing". He seems to have falsely claimed to have set a national record in undergraduate performance. Does Peter consider false claims about setting academic records to be an acceptable practice?

Peter also writes:

"creat[ing] syllabi with writings by themselves and those who agree with them" also sounds super standard

As an academic: no, this is not a standard academic practice. The standard academic practice is to represent the views you support and the views you reject fairly. Note that Peter ignores the following part of the post in his comment:

Toby Ord is a trustee at CEA and part of the team at FHI. His 2013 essay against negative utilitarianism (NU) is a one-sided and misleading attempt to convince lay people away from negative utilitarianism. I try to be polite in my response to it, but I will try to be blunter here. His text is so bad partly for the following reasons: Toby writes in the role of a university researcher with a PhD in philosophy, and he writes for non-experts. He spends the whole essay essentially trashing a moral view that is opposite to his own. He does little to refer the reader to more information, especially information that contradicts what he writes. He describes the academic literature incorrectly in a way that benefits his case. He writes that “A thorough going Negative Utilitarian would support the destruction of the world (even by violent means)” without mentioning that for many years, a published objection to his favoured view (classical utilitarianism) is that it implies that one should kill everyone and replace us, if one could thereby maximize the sum of well-being (see my paper The World Destruction Argument).

It should be noted that Peter was profiled by William MacAskill (one of the main subjects of this post) in Quartz and was one of the few people profiled in William's book Doing Good Better.

I don't get what you're implying, and I don't see this as a source of bias. This was mainly just about being in the right place at the right time. I interact with Will very infrequently, get no personal benefit from helping Will, suffer no harm from criticizing Will, and am currently not associated with Will in any way. I like Will, but that's solely because Will is likable, not because of any background conspiracy.

But what Bostrom wrote is not just an "inflated impressive-sounding thing". He seems to have falsely claimed to have set a national record in undergraduate performance. Does Peter consider false claims about setting academic records to be an acceptable practice?

I don't know. It still is very unfair to not hear from Bostrom - or even ask Bostrom! - what he meant by this.

But Peter, he just didn't have time, and the CV issue was too unimportant (not too unimportant to publish, just too unimportant to verify):

The issue with Bostrom’s CV is a minor thing compared to the other things I write about in this text. For example, if I were to ask Bostrom something, I would rather ask him about the seemingly problematic behaviour of the organisation FHI he leads. There are also many other people that I mention in this text who I could have asked about more important things than a CV before publishing this text. But I doubt I would have time for that work, so I prefer to write based on the information I have in a hedged way using phrases such as ‘I doubt’ and ‘I suspect.’

Anon, do you think publishing something that attacks people's individual reputations and damages the reputation of negative utilitarians as a whole despite "not having time" to do it right is an acceptable practice?


The EA Syllabus is not an academic syllabus for a course, and "Why I'm Not a Negative Utilitarian" is not a journal-published academic paper (although it certainly looks like one, given its citations and structure, it is listed on Ord's website as an "unpolished idea"). Knutsson thinks that since it's directed toward the general public rather than an academic audience, it's even more important that it represent all academic views fairly instead of just what the author believes. I think it might be good to do that, but it's not unacceptable not to, since we can't apply academic standards to something that isn't academic.

Do I understand you correctly that you believe the following (copied from the comment you're replying to) are acceptable practices in the type of essay Toby Ord published?

He [Toby Ord] describes the academic literature incorrectly in a way that benefits his case. He writes that “A thorough going Negative Utilitarian would support the destruction of the world (even by violent means)” without mentioning that for many years, a published objection to his favoured view (classical utilitarianism) is that it implies that one should kill everyone and replace us, if one could thereby maximize the sum of well-being (see my paper The World Destruction Argument).

What's unacceptable about this in your opinion, anon account?

"Note, that Peter ignores the following part of the post in his comment: Toby Ord is a trustee at CEA and part of the team at FHI. His 2013 essay against negative utilitarianism (NU) is a one-sided and misleading..."

I'm not going to ask someone to quit being a trustee because they wrote an opinionated essay in 2003. I write one-sided pieces all the time, trying to convince people of a particular view - hopefully people won't try to remove me from any boards in 2035 because of that!

Why I'm Not a Negative Utilitarian was published in 2013, not 2003.

Sorry, my mistake

Also, if everyone who wrote one-sided and/or misleading articles in EA had to be kicked out of EA, we'd have a very small movement. :P

Thanks, this is helpful.

None of the accusations here is shocking, and they often reflect the author's naivete more than any wrongdoing on the part of the accused. Assistants contribute to writing books (however, private correspondence is meant to stay private). Organizations set ethical standards for the conduct and sharing of their research. People present themselves in the best light possible. Will is a co-founder of EA: not of the idea of maximizing social impact, but of the set of ideas and practices that governs this community today.

I'm generally pretty in favour of public criticism of EA orgs, and of public disputatiousness in general, but this piece is (a) quite long-winded and hard to read and (b) where I did get a good idea of what it was claiming, not especially compelling. A piece on the same themes that was 1/3 as long and better researched could have been valuable.
