
Tl;dr: One of the biggest problems facing any kind of collective action today is the fracturing of the information landscape. I propose a collective, issue-agnostic observatory with a mix of algorithmic and human moderation for the purpose of aggregating information, kept separate from advocacy (i.e. "what is happening", not "what should happen").

Introduction

There is a crisis of information happening right now. 500 hours of video are uploaded to YouTube every minute. Extremely rapid news cycles, empathy fatigue, the emergence of a theorised and observed polycrisis, and the general breakdown of traditional media institutions in favour of algorithms designed to keep you on-platform for as long as possible mean that we receive more data than ever before, but are consequently more easily overwhelmed than ever before. The pace of research output has increased drastically while the pace of research intake (i.e. our reading speed) has not. The recent emergence of AI technology able to manufacture large amounts of spurious disinformation or "botshit" has added to this breakdown in information.

Any kind of corrective or preventative action to address global issues requires an accurate understanding of those issues first. John Green's video on The Only Psychiatric Hospital in Sierra Leone gives a powerful example of why: charities with ample resources but incorrect information donated electrical generators that were too powerful for the hospital's electrical grid, causing more harm than good. In the same way, misguided, ill-informed, and over-aggressive "assistance" can be worse than no assistance at all.

Why existing traditional information sources are insufficient

Most of us likely rely on some form of news media for information on world events. These outlets are updated around the clock by teams of dedicated staff who provide broad-based coverage of a wide variety of events. However, they are plagued by well-documented issues surrounding bias, censorship, misaligned economic incentives, billionaire ownership, etc. Furthermore, they are unlikely to give particular focus to the cause areas that most EAs are concerned about.

Why existing specialised information sources are insufficient

Good work is currently being done by organisations like BlueDot Impact who work to collect information on cause areas like AI Safety and Biosecurity. However, these sources are also limited in specific ways that may be hard to discern at first glance.

Update delay

Since these sources position themselves as authorities within their cause areas, they rightfully feature a delay before incorporating speculative announcements or new developments. However, in fast-moving fields like AI safety, developments happen at a pace that exceeds the capacity for expert review. As such, resources can quickly become outdated or incorrect without being flagged as such.

Topical focus

Limited resources mean that sites usually focus on a single cause area or area of interest. As a result, information becomes fractured and siloed into specialist communities, and opportunities for inter-group interaction fall. This encourages readers to hyper-specialise in one field and may lead them to discount common systemic factors that heighten x-risk across many fields.

More broadly, having many such fractured information sources reduces the visibility of information as a whole by diluting the space of information sources, making it easier for vital resources to become lost in the noise. This results in information transmission being based on ad-hoc sharing rather than coordinated dissemination, reducing the efficiency of information spread.

Cause advocacy

The people designing resources for a cause area usually have preconceived notions about what should be done in that area. More importantly, they have usually reached these conclusions before putting the resources together. This can of course be helpful in setting priorities, but it can also reduce the diversity of ideas in complex, fast-moving fields.

Since researchers are likely to put material they find useful and relevant into a resource for others, specialised resources (especially those which collate links to other resources) are likely to suffer from confirmation bias. This narrows the possibility space for interventions by preventing readers from learning about interventions the authors do not find useful or productive. Furthermore, if small groups of similarly-opinionated experts create entry-level resources that are not subject to scrutiny, a form of anchoring bias is likely to take hold in the cause area community as a whole.

To be clear, I am not accusing BlueDot Impact or any other resource of intentional or unintentional bias. I am also not suggesting that these resources are counterproductive. However, the nature of specialised cause groups performing advocacy work is that they are likely to find information which agrees with their position more valuable. In a high-risk world where many conclusions are counterintuitive, putting all of our eggs in one basket, however well designed, is a dangerous gamble.

Why community forums are insufficient

While forums like this one are a valuable place to collect news, insights, and updates, they are diluted by their multipurpose function as discussion forums. The moderators of these forums do not have disseminating news as their first priority, nor should they. News collection, news presentation, and journalism are also specialised skillsets that are not easily replaced by AI bots or content algorithms.

Proposed model: Joint algorithmic-human observatory

The proposed model involves a separate website with two functions:

  1. Crowdsourced information collection: Modelled on sites like Hacker News or Reddit, users should be able to submit links for other users and moderators to vote on. Unlike those sites, there will be no generic "upvote" or "downvote" button. Instead, users will tag content with a variety of emojis based on whether they feel it is relevant to a cause area, of general interest, fair and balanced, etc. Comments will not be enabled except as user-submitted factual corrections (i.e. further points of clarification or points of information). Discussion is reserved for forums like the present one. Ranking of links/comments will be based on Reddit's ranking algorithm, with a bias towards recent, broadly relevant, and high-quality content (see the sketch after this list).
  2. Human information collection: A team of paid editors with subject-matter expertise or journalism skills should be retained as staff to process both user-submitted links and any news they themselves receive. This would function as a specialised newsroom producing weekly digests or long-read articles that act as manual filters for the week's events. Access to such articles might be gated behind a subscription to recoup the costs of running the site.
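
To make the ranking idea concrete, here is a minimal sketch of how emoji-tag reactions could be folded into a recency-biased score. The tag names, weights, and decay window are hypothetical placeholders, not a finished design:

```python
from datetime import datetime, timedelta, timezone
from math import log10

# Hypothetical emoji tags and weights; names and values are illustrative only.
TAG_WEIGHTS = {
    "cause_relevant": 3.0,
    "fair_and_balanced": 2.0,
    "general_interest": 1.0,
}

def rank_score(tag_counts: dict[str, int], submitted_at: datetime, now: datetime) -> float:
    """Combine emoji-tag reactions with a recency bonus, in the spirit of Reddit-style ranking."""
    signal = sum(TAG_WEIGHTS.get(tag, 0.0) * n for tag, n in tag_counts.items())
    order = log10(max(signal, 1.0))                # diminishing returns on pile-ons
    age_hours = (now - submitted_at).total_seconds() / 3600
    freshness = max(0.0, 48.0 - age_hours) / 48.0  # linear decay to zero over ~2 days (arbitrary)
    return order + freshness

# Example: a link tagged "cause_relevant" by 10 users, submitted 6 hours ago.
now = datetime.now(timezone.utc)
print(rank_score({"cause_relevant": 10}, now - timedelta(hours=6), now))
```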

Critically, this source of information is not an advocate for action. Only news and factual corrections are presented, without calls to action. Cause advocacy organisations will not be able to submit op-eds or articles for publication at the observatory. This does not mean that the observatory is "neutral": it will report that global warming is real, but it will not host a post arguing that geoengineering is the answer.

Potential counter-arguments

Possible biases

Counter-argument: The human moderators and editors of the observatory would hold a position of power through which to determine which sources of information are important and which are not. In effect, they would replicate the position of specialist authors collecting information for specialised resources. Even if the observatory has a position of non-advocacy, how information is presented affects how it is received. Something being described as a "1-in-1000 moonshot" is very different from something being described as "a rapidly maturing technology".

Response: Biases are present in all sources of information. There is no such thing as a non-biased source, as even neutrality is a position on a subject: the position that all the parties involved are equally credible. The existence of the community-submitted section should act as a counterbalance to the editorial team and hopefully alert them to developments that they have missed or erroneously dismissed as unimportant.

"Source of Truth" risks

Counter-argument: A source of truth is an authoritative source that other actors in a system consult to verify that their information is accurate. For example, the TPM (Trusted Platform Module) in a computer is a tamper-resistant piece of hardware that certifies the computer's OS or hardware has not been compromised at boot time. Importantly, information only flows one way from a source of truth: the computer cannot change the TPM; otherwise, malware on the computer would be able to certify itself as safe.

As you can imagine, any such authoritative source, if compromised, poses a massive security risk. If the TPM is compromised (see the "Attacks" section of the Wikipedia article), the computer has no way of correcting itself and will blindly trust the compromised TPM. Similarly, a single authoritative information source for a community can produce information gaps or spread misinformation to the community as a whole.

Response: The observatory does not position itself as a single source of truth. It acts as a link hub for many other existing sources of truth, reducing the likelihood that a compromised observatory could spread misinformation. Furthermore, users in this model would be able to submit corrections to the observatory, which can be acted upon by the human staff.

Conclusion

I hope this idea is useful and sparks a fruitful discussion. I look forward to addressing any further ideas on this topic.

Comments

I think it's great to think about what projects should maybe exist and then pitch them! Kudos to you for doing that; it seems potentially one of the highest-value activities on the Forum.

I think that information flows are really important, and in principle projects like this could be really high-value already in the world today. Moreover I agree that the general area is likely to increase in importance as the impacts of language models are more widely felt. But details are going to matter a lot, and I'm left scratching my head a bit over this:

  • When I read the specific pitch here, I don't think I have a clear enough picture of what kind of topics this is going to cover, and what audiences it will serve 
    • Is it best thought of like "Wikipedia, but for news"? Something more EA-focused than that?
  • You talk about the importance of having things that are just news, not advocacy
    • But it also sounds like most of what you're imagining is links to other sources of information
      • Most news sources at the moment come with some degree of opinionated views slanting how they're presented; presumably you're not going to exclude anything being linked just because of that?
    • If this impartiality is really important, would it maybe be better to more just collect the bare facts, rather than link to external articles?
      • This could be more efficient in information-per-word, as well as reducing spin

Hi, the general model for the platform would be something akin to a web-based news site (e.g. WIRED, Vox, etc.) and a subreddit combined. There's the human-run in-depth coverage part, where the work should be done to increase impartiality, but there's also the linklist part which allows community members to "float" content they find interesting without getting bogged down in writing it up, so to speak. The links shared will be opinionated, definitely, but that should be mitigated by the human coverage, and the limitations of human coverage (speed of updates, long reading time) can hopefully be compensated for by the linklist/subreddit portion of the site.

My initial thoughts around this are that yeah, good information is hard to find and prioritize, but I would really like better and more accurate information to be more readily available. I actually think AI models like ChatGPT achieve this to some extent, as a sort of not-quite-expert on a number of topics, and I would be quite excited to have these models become even better accumulators of knowledge and communicators. Already it seems like there's been a sort of benefit to productivity (one thing I saw recently: https://arxiv.org/abs/2403.16977). So I guess I somewhat disagree with AI being net negative as an informational source, but do agree that it's probably enabling the production of a bunch of spurious content, and I have heard arguments that this is going to be disastrous.

But I guess the post is focused more on news itself? I appreciate the idea of a sort of weekly digest in that it would somewhat detract from the constant news hype cycle; I guess I'm more in favor of longer time horizons for examining what is going on in the world. The debate on COVID's origin comes to mind, especially considering Rootclaim, as an attempt to create more accurate information accumulation. I guess forecasting is another form of this, whereby taking bets on things before they occur and being measured by your accuracy is an interesting way to consume news which also has a sort of 'truth' mechanism to it - and notably has legible operationalization of truth! (Edit: guess I should also couch this more in what already exists on EAF, and lesswrong and rationality pursuits in general seem pretty adjacent here)

To some extent my lame answer is just AI enabling better analysis in the future as probably the most tractable way to address the information problem. (Idk, I'm no expert on information and this seems like a huge problem in a complex world. Maybe there are more legible interventions on improving informational accuracy; I don't know them and don't really have much time, but would encourage further exploration, and you seem to be checking out a number of examples in another comment!)

I think overall this post plays into a few common negative stereotypes of EA: Enthusiastic well-meaning people (sometimes with a grandiose LoTR reference username) proposing grand plans to solve an enormously complex problem without really acknowledging or understanding the nuance.

Suggesting that we simply develop an algorithm to identify "high quality content" and that a combination of crowds and experts will reliably be able to identify factual vs non-factual information seems to completely miss the point of the problem, which is that both of these things are extremely difficult and that's why we have a disinformation crisis.

Responding to this because I think it discourages a new user from trying to engage and test their ideas against a larger audience, maybe some of whom have relevant expertise, and maybe some of those will engage - seems like a decent way to try and learn. Of course, good intentions to solve a 'disinformation crisis' like this aren't sufficient, ideally we would be able to perform serious analysis on the problem (scale, neglectedness, tractability and all that fun stuff I guess) and in this case, seems like tractability may be most relevant. I think your second paragraph is useful in mentioning that this is extremely difficult to implement but also just gestures at the problem's existence as evidence.

I share this impression though, that disinformation is difficult and also had a kinda knee-jerk about "high quality content". But idk, I feel like engaging with the piece with more of a yes-and attitude to encourage entrepreneurial young minds and/or more relevant facts of the domain could be a better contribution.

But I'm doing the same thing and just being meta here, which is easy, so I'll try too in another comment

It is true that this is not likely to solve the disinformation crisis. It is also true that the successful implementation of such a platform would be quite difficult. However, there are reasons why I outlined the platform as I did:

  • Small online newsrooms like 404 Media have recently come into existence with subscriber-based models that allow them to produce high-quality content while catering to specialised audiences. If sufficient resources are there to attract high-quality reporters (who, as I note in the post, perform a function that cannot be easily replaced by algorithms), then the platform has a good chance of producing technical, scientific, or cause-based news that is worth reading on its own.
  • Subreddits have been widely noted as efficient ways of finding answers to complex domain-specific questions, largely because they concentrate a regular, technically literate, domain-specific userbase and feature ruthless downvoting of posts that spread misinformation. Similarly, certain reactions in Facebook's system of emoji reacts have been found to correlate strongly with the spread of inflammatory news. Of course, both of these platforms have monetisation incentives that mean they cannot act properly on these signals. A subscription-based model would hopefully reduce these perverse incentives and allow for better algorithms than exist today.
  • "High Quality" as an indicator here is not about the quality of the reporting, evidence etc. in a given link but "relative quality" in a manner similar to content-agnostic ranking algorithms like PageRank. Since the model approximates news tickers with new links coming in over time rather than having websites linking to each other spatially, a version of reddit's content ranking algorithms (which are open sourced) can be used.
  • Finally, I understand being dismissive of certain expert groups and some forms of crowd-based information sourcing. However, if you reject both of them at once, then we're really left with quite limited options for information gathering.
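
As a rough illustration of the ranking point above, this is a sketch adapted from the hot-ranking formula published in Reddit's open-sourced codebase; in the observatory, weighted emoji-tag counts would stand in for the raw up/down votes:

```python
from datetime import datetime, timezone
from math import log10

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def hot(ups: int, downs: int, submitted_at: datetime) -> float:
    """Reddit-style 'hot' score: log-damped vote margin plus a submission-time bonus."""
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (submitted_at - EPOCH).total_seconds() - 1134028003  # offset from Reddit's reference epoch
    return round(sign * order + seconds / 45000, 7)

# A post needs roughly 10x the net votes to outrank one submitted ~12.5 hours later.
print(hot(100, 10, datetime.now(timezone.utc)))
```

Because the time term grows linearly while the vote term grows logarithmically, recent submissions naturally displace older ones, which matches the bias towards recent content described in the proposal.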

Again, this is not a solution in the sense of a silver bullet. But it is also not as fanciful as it perhaps appears at first glance. A lot of the technology is already here, and with proper investment and application it can be used to have a positive impact.
