Matthew_Barnett

Comments

A proposal for a small inducement prize platform

Potential ways around this that come to mind:

Good ideas. I have a few more:

  • Have a feature that allows bounty posters to charge a fee to people who submit work. This would help compensate the arbitrator who has to review the work, and would discourage people from submitting low-quality work in the hope of fooling someone into awarding them the bounty.
  • Instead of awarding the bounty to whoever first provides a summary/investigation, award it to the person who provides the best summary/investigation at the end of some time period. That way, if someone thinks the current submissions omit important information, or are badly written, they can take the prize for themselves by submitting a better one.
  • Similar to your first suggestion: have a feature that restricts people from submitting answers unless they meet certain basic criteria, e.g. "You aren't eligible unless you are verified to have at least 50 karma on the Effective Altruism Forum or LessWrong." This would ensure that only people from within the community can contribute to certain questions.
  • Use adversarial meta-bounties: at the end of a contest, offer a bounty to anyone who can convince the judge/arbitrator to change their mind on the decision they have made.
A proposal for a small inducement prize platform

What is the likely market size for this platform?

I'm not sure, but I just opened a Metaculus question about this, and we should begin getting forecasts within a few days. 

How should longtermists think about eating meat?

Eliezer Yudkowsky wrote a sequence on ethical injunctions in which he argued that things like this are wrong (from his own, longtermist perspective).

How should longtermists think about eating meat?
And it feels terribly convenient for the longtermist to argue they are in the moral right while making no effort to counteract or at least not participate in what they recognize as moral wrongs.

This is only convenient for the longtermist if they do not have equivalently demanding obligations to the long term. Otherwise we could turn it around and say that it's "terribly convenient" for a short-termist to ignore the long-term future too.

Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics

Regarding the section on estimating the probability of AI extinction, I think a useful framing is to focus on disjunctive scenarios in which AI ends up being used. If we imagine a highly detailed scenario in which a single artificial intelligence goes rogue, then of course that type of scenario will seem unlikely.

However, my guess is that AI will gradually become more capable and integrated into the world economy, and there won't be a discrete point at which we can say "now the AI was invented." Over the broad course of history, we have witnessed numerous instances of populations displacing other populations, e.g. species displacing one another in ecosystems, and human populations displacing other human populations. If we think about AI as displacing humanity's seat of power in this abstract way, then an AI takeover doesn't seem implausible anymore, and indeed I find it quite likely in the long run.

Matthew_Barnett's Shortform

A trip to Mars that brought back human passengers also has the chance of bringing back microbial Martian passengers. This could be an existential risk if microbes from Mars harm our biosphere in a severe and irreparable manner.

From Carl Sagan in 1973, "Precisely because Mars is an environment of great potential biological interest, it is possible that on Mars there are pathogens, organisms which, if transported to the terrestrial environment, might do enormous biological damage - a Martian plague, the twist in the plot of H. G. Wells' War of the Worlds, but in reverse."

Note that the microbes would not need to have independently arisen on Mars. It could be that they were transported to Mars from Earth billions of years ago (or the reverse occurred). While this issue has been studied by some, my impression is that effective altruists have not looked into it as a potential source of existential risk.

One line of inquiry could be to determine whether there are any historical parallels on Earth that could give us insight into whether Mars-to-Earth contamination would be harmful. The introduction of an invasive species into a new region loosely mirrors this scenario, but much tighter parallels might still exist.

Since Mars missions are planned for the 2030s, this risk could arrive earlier than essentially all the other existential risks that EAs normally talk about.

See this Wikipedia page for more information: https://en.wikipedia.org/wiki/Planetary_protection

If you value future people, why do you consider near term effects?

I recommend the paper The Case for Strong Longtermism, as it covers and responds to many of these arguments in a precise philosophical framework.

Growth and the case against randomista development
It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.

If this is true, is there a post that expands on this argument, or is it something left implicit?

I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."

I think Bostrom has talked about something similar: namely, differential technological development (he talks about technology rather than economic growth, but the two are closely related). The idea is that fast innovation in some fields is preferable to fast innovation in others, and we should try to identify which areas to speed up the most.

Growth and the case against randomista development
Growth will have flowthrough effects on existential risk.

This makes sense as an assumption, but the post itself didn't argue for this thesis at all.

If the argument is that the best way to help the long-term future is to minimize existential risk, and the best way to minimize existential risk is to increase economic growth, then you'd expect the post to primarily talk about how economic growth decreases existential risk. Instead, the post focuses on human welfare, which is important, but secondary to the argument you are making.

This is something very close to my personal view on what I'm working on.

Can you go into more detail? I'm also very interested in how increased economic growth impacts existential risk. This is a very important question because it could determine the impact of accelerating economic-growth-inducing technologies such as AI and anti-aging.

Growth and the case against randomista development

I'm confused about what type of EA would primarily be interested in strategies for increasing economic growth. Perhaps someone can help me understand this argument better.

The reason presented for why we should care about economic growth seemed to be a longtermist one. That is, economic growth has large payoffs in the long run, and if we care about future lives equally to current lives, then we should invest in growth. However, Nick Bostrom argued in 2003 that a longtermist utilitarian should primarily care about minimizing existential risk, rather than increasing economic growth. Therefore, accepting this post requires you both to be a longtermist and to reject Bostrom's argument. Am I correct in that assumption? If so, what arguments are there for rejecting his thesis?
