I have now reviewed and edited the relevant section.
My feeling when I drafted it was along the lines of Ozzie's comment: as long as I was transparent, I thought it was OK for readers to judge the quality of the content as they saw fit.
Part of my rationale for this being OK was that it was right at the end of a 15-page write-up. Larks wrote that many people will read this post. I hope that's true, but I didn't expect that many people would read the very last bits of the appendix. The fact that someone noticed this at all, let alone almost immediately after this post was published, was an update for me.
Hence my decision to review and edit that section at the end of the document, and remove the disclaimer.
You wrote:
Consider these types of questions that AI systems might help address:
- What strategic missteps is Microsoft making in terms of maximizing market value?
- What metrics could better evaluate the competence of business and political leaders?
- Which public companies would be best off by firing their CEOs?
- <...>
I'm open to the possibility that a future AI may well be able to answer these questions more quickly and more effectively than the typical human who currently handles them.
The tricky thing is how to test this.
Given that these things are not easily testable, I think it might be hard for people to gain enough confidence in the AI to actually use it. (I guess that too might be surmountable, but it's not immediately obvious to me how.)
I don't think the rationale for bringing the ISS down in a controlled way is the risk that it might hit someone on Earth, or "the PR disaster" of us "irrationally worrying more about the ISS hitting our home than we are about getting in their car the next day".
Space debris is a potentially material issue.
The geopolitics of space debris gets complicated.
I haven't done a cost-effectiveness analysis to assess whether $1bn is a good use of the money, but I think it's more valuable than this article seems to suggest.
A donor-pays philanthropy-advice-first model solves several of these problems.
This explains why SoGive adopted this model.
Hi Ozzie, I typically find the quality of your contributions to the EA Forum to be excellent. Relative to my high expectations, I was disappointed by this comment.
> Would such a game "positively influence the long-term trajectory of civilization," as described by the Long-Term Future Fund? For context, Rob Miles's videos (1) and (2) from 2017 on the Stop Button Problem already provided clear explanations for the general public.
> It sounds like you're arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?
This struck me as strawmanning.
> It seems insane to even compare, but was this expenditure of $100,000 really justified when these funds could have been used to save 20–30 children's lives or provide cataract surgery to around 4000 people?
> These are totally different modes of impact. I assume you could make this argument for any speculative work.
I'm more sympathetic to this, but I still didn't find your comment to be helpful. Maybe others read the original post differently than I did, but I read the OP as simply expressing the concept that "funds have an opportunity cost" (arguably in unnecessarily hyperbolic terms). This meant that your comment wasn't a helpful update for me.
On the other hand, I appreciated this comment, which I thought to be valuable:
> I also like grant evaluation, but I would flag that it's expensive, and often, funders don't seem very interested in spending much money on it.
> Donors contribute to these funds expecting rigorous analysis comparable to GiveWell's standards, even for more speculative areas that rely on hypotheticals, hoping their money is not wasted, so they entrust that responsibility to EA fund managers, whom they assume make better and more informed decisions with their contributions.
I think it's important that the author had this expectation. Many people initially got excited about EA because of the careful, thoughtful analysis of GiveWell. Those who are not deep in the community might reasonably see the branding "EA Funds" and have exactly the expectations set out in this quote.
I'm working from brief conversations with the relevant experts, rather than having conducted in-depth research on this topic. My understanding is:
I guess in either case it's possible for the food/agriculture lobby to nonetheless recognise that alt proteins could be a threat to them and object. I don't know how common it is for this to actually happen.
It does sound sort of interesting, but I don't think I have a clear picture of the theory of change. How does the dashboard lead to better outcomes? If the theory of change depends on certain key people (media? civil servants? someone else?) making use of the dashboard, would it make sense to check with those people and see whether they would find it useful? Should we check whether they're willing to be involved in the creation process, to provide the feedback that helps ensure it's worth their while to use it?