
Sanjay


It does sound sort of interesting, but I don't think I have a clear picture of the theory of change. How does the dashboard lead to better outcomes? If the theory of change depends on certain key people (media? civil servants? someone else?) making use of the dashboard, would it make sense to check with those people and see whether they would find it useful? Should we check whether they're willing to be involved in the creation process to provide feedback, which helps ensure it's worth their while to use it?

Shortly after I wrote this, the news reported nationwide protests on topics pretty closely aligned with what I'm talking about here. This might mean that my assessment of neglectedness should be updated.

I have now reviewed and edited the relevant section.

My feeling when I drafted it was as per Ozzie's comment -- as long as I was transparent, I thought it was OK for readers to judge the quality of the content as they see fit.

Part of my rationale for this being OK was that it was right at the end of a 15-page write-up. Larks wrote that many people will read this post. I hope that's true, but I didn't expect that many people would read the very last bits of the appendix. The fact that someone noticed this at all, let alone almost immediately after this post was published, was an update for me.

Hence my decision to review and edit that section at the end of the document, and remove the disclaimer.

You wrote:

Consider these types of questions that AI systems might help address:

  • What strategic missteps is Microsoft making in terms of maximizing market value?
  • What metrics could better evaluate the competence of business and political leaders?
  • Which public companies would be best off by firing their CEOs?
  • <...>

I'm open to the possibility that a future AI may well be able to answer these questions more quickly and more effectively than the typical human who currently handles those questions.

The tricky thing is how to test this.

Given that these are not easily testable things, I think it might be hard for people to gain enough confidence in the AI to actually use it. (I guess that too might be surmountable, but it's not immediately obvious to me how.)

Can you give an indication of how common the problem is? (i.e. how often do papers get lost or deleted?) My intuition says not very often, and that when it does happen it's most likely to be the least useful papers, but I could believe my intuition is wrong.

I don't think the case for bringing the ISS down in a controlled way rests on the risk that it might hit someone on Earth, or on "the PR disaster" of us "irrationally worrying more about the ISS hitting our home than we are getting in their car the next day".

Space debris is a potentially material issue.

  • There are around 23,000 objects larger than 10 cm (4 inches) and about 100 million pieces of debris larger than 1 mm (0.04 inches). Tiny pieces of junk might not seem like a big issue, but that debris is moving at 15,000 mph (24,140 kph), 10 times faster than a bullet. (Source: PBS)
  • This matters because debris threatens satellites. Satellites are critical to GPS systems and international communication networks. They are used for things like helping you get a delivery, helping the emergency services reach their destination, or supporting military operations. 
  • Any one piece of space debris probably isn't a big deal if you ignore knock-on effects. However, a phenomenon called Kessler Syndrome could make things much worse: debris collides with satellites, creating more debris, in a vicious circle.

 The geopolitics of space debris gets complicated.

  • The more space debris there is, the more legitimate it is to have weapons on a satellite (to keep your satellite safe from debris). 
  • However such weapons could be dual-purpose, since attacking an enemy's satellite could be of great tactical value in a conflict scenario.

I haven't done a cost-effectiveness analysis to justify whether $1bn is a good use of that money, but I think it's more valuable than this article seems to suggest.

A donor-pays philanthropy-advice-first model solves several of these problems.

  • If your model focuses primarily on providing advice to donors, your scope is "anything which is relevant to donating", which is broad enough that you're bound to have lots of high-impact research to do, which helps with constraint 1.
  • Strategising and prioritisation are much easier when you're knee-deep in supporting donors with their donations -- this highlights the pain points in making good giving decisions, which helps with constraint 2.
  • If donors perceive that the research is worth funding, and have potentially had input into the ideation of the research project, they are likely to be willing to fund it, which helps with constraint 6.

This explains why SoGive adopted this model.

Hi Ozzie, I typically find the quality of your contributions to the EA Forum to be excellent. Relative to my high expectations, I was disappointed by this comment.

> Would such a game "positively influence the long-term trajectory of civilization," as described by the Long-Term Future Fund? For context, Rob Miles's videos (1) and (2) from 2017 on the Stop Button Problem already provided clear explanations for the general public.

> It sounds like you're arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?

This struck me as strawmanning.

  • The original post asked whether the game would positively influence the long-term trajectory of civilisation. It didn't spell it out, but presumably we want that to be a material positive influence, not a trivial rounding error -- i.e. we care about how much positive influence.
  • The extent of that positive influence is lowered when we already have existing clear and popular explanations. Hence I do believe the existence of the videos is relevant context.
  • Your interpretation "It sounds like you're arguing that no other explanations are useful, because Rob Miles had a few videos in 2017 on the issue?" is a much stronger and more attackable claim than my read of the original.

> It seems insane to even compare, but was this expenditure of $100,000 really justified when these funds could have been used to save 20–30 children's lives or provide cataract surgery to around 4000 people?

> These are totally different modes of impact. I assume you could make this argument for any speculative work.

I'm more sympathetic to this, but I still didn't find your comment to be helpful. Maybe others read the original post differently than I did, but I read the OP as simply expressing the concept that "funds have an opportunity cost" (arguably in unnecessarily hyperbolic terms). This meant that your comment wasn't a helpful update for me.

On the other hand, I appreciated this comment, which I thought to be valuable:

> I also like grant evaluation, but I would flag that it's expensive, and often, funders don't seem very interested in spending much money on it.

> Donors contribute to these funds expecting rigorous analysis comparable to GiveWell's standards, even for more speculative areas that rely on hypotheticals, hoping their money is not wasted, so they entrust that responsibility to EA fund managers, whom they assume make better and more informed decisions with their contributions.

I think it's important that the author had this expectation. Many people initially got excited about EA because of the careful, thoughtful analysis of GiveWell. Those who are not deep in the community might reasonably see the branding "EA Funds" and have exactly the expectations set out in this quote.

I'm working from brief conversations with the relevant experts, rather than having conducted in-depth research on this topic. My understanding is:

  • the food security angle is most useful for a country which imports a significant amount of its food; where this is true, the whole argument is premised on the idea that domestic food producers will be preserved and strengthened, so it doesn't naturally invite opposition. 
  • the economy / job creation angle is again couched in terms of "increasing the size of the pie" -- i.e. adding more jobs to the domestic economy and not taking away from the existing work. Again, this doesn't seem to naturally invite opposition from incumbent food producers.

I guess in either case it's possible for the food/agriculture lobby to nonetheless recognise that alt proteins could be a threat to them and object. I don't know how common it is for this to actually happen.
