markus_over

Comments

The recent push for productization is making everyone realize that alignment is a capability. A gaslighting chatbot is a bad chatbot compared to a harmless helpful one. As you can see currently, the world is phasing out AI deployment, fixing the bugs, then iterating.

While that's one way to look at it, another way is to notice the arms race dynamics and how every major tech company is now rushing LLMs out to the public head over heels, even when they still have some severe flaws. Another observation is that e.g. OpenAI's safety efforts are not very popular among end users, given that in their eyes these safety measures make the systems less capable/interesting/useful. People tend to get irritated when their prompt is answered with "As a language model trained by OpenAI, I am not able to <X>", rather than feeling relief over being saved from a dangerous output.

As for your final paragraph, it is easy to say "<outcome X> is just one out of infinite possibilities", but that equates trajectories with outcomes. The existence of infinitely many possible trajectories doesn't really help when there's a systematic reason causing many or most of them to end in the same outcome, namely human extinction. Whether this is actually the case is of course an open and hotly debated question, but just claiming "it's a single point on the x axis, so the probability mass must be 0" is surely not how you get closer to an actual answer.
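To make the trajectory/outcome distinction concrete, here's a toy sketch in Python (entirely made-up assumptions, purely illustrative): every individual trajectory is astronomically unlikely, and yet almost all of the probability mass can land on a single outcome if the process has a structural feature pushing it there.

    import random

    # Toy model (illustrative only, made-up assumptions): a trajectory is a
    # sequence of 20 steps, and we assume that a single "bad" step anywhere
    # makes the final outcome bad.
    def final_outcome(trajectory):
        return "bad" if "bad" in trajectory else "fine"

    random.seed(0)
    trajectories = [[random.choice(["good", "bad"]) for _ in range(20)]
                    for _ in range(10_000)]
    share_bad = sum(final_outcome(t) == "bad" for t in trajectories) / len(trajectories)

    # Each specific trajectory has probability 2^-20 (about 1 in a million),
    # yet virtually all of them share the same outcome:
    print(f"share of sampled trajectories ending 'bad': {share_bad:.2%}")

"It's just one out of a million possibilities" is true of every single trajectory here, and still tells you nothing about where the probability mass of the outcomes ends up.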

why the overemphasis/obsession on doom scenario?

Because it is extremely important that we do what we can to avoid such a scenario. I'm glad that e.g. airlines still invest a lot in improving flight safety and preventing accidents even though flying is already the safest way of traveling. Humanity is at this very moment boarding a giant AI-rplane that is about to take off for the very first time, and I'm rather happy there are a number of people out there looking at the possible worst case and doing their best to figure out how we can get this plane safely off the ground, rather than saying "why are people so obsessed with the doom scenario? A plane crash is just one out of infinite possibilities, we're gonna be fine!".

it's not AI, more code completion with crowd-sourced code

Copilot is based on GPT-3, so imho it is just as much AI (or not AI) as ChatGPT is. And given that it's pretty much at the forefront of currently available ML technology, I'd be very inclined to call it AI, even if it's (superficially) limited to the use case of completing code.

This seems like a very cool project, thanks for sharing! I agree that this type of project can be considered a "moonshot", which implies that most of the potential impact lies in the tail end of possible outcomes. Consequently, estimating the expected value becomes very tricky. If the EV is dominated by a few outlier scenarios, reality will most likely turn out to be underwhelming.
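To illustrate with entirely made-up numbers: a project with a 1% chance of huge impact and a 99% chance of very little can have a great expected value while almost certainly looking underwhelming in hindsight.

    # Made-up numbers, just to illustrate why moonshot EV estimates are tricky:
    p_tail, tail_impact = 0.01, 1_000    # rare outlier scenario
    p_typical, typical_impact = 0.99, 1  # what usually happens

    expected_value = p_tail * tail_impact + p_typical * typical_impact
    print(expected_value)    # 10.99 -- dominated by the 1% tail
    print(typical_impact)    # 1     -- the outcome you'll most likely actually observe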

I'm not sure if one can really make a good case that working on such a game is worthwhile from an impact perspective. But looking at the state of things and the community as a whole, it does still seem preferable to me that somebody somewhere puts some significant effort into EA games (sorry for the pun).

Also, to add one possible path to impact this might enable: it might be yet another channel one can deliberately nudge people towards in order to expose them to the key EA ideas in an entertaining way (HPMOR being another such example). So your players might not all end up being "random people"; a portion of them might be preselected in a way.

Lastly, it seems like at least 5-10 people (and probably considerably more) in EA are interested or involved in game development. I'm not aware of any way in which this group is currently connected - it would probably be worth changing that. Maybe something low on overhead such as a Signal group would work as a start?

Sometimes I think that this is the purpose of EA. To attempt to be the "few people" to believe consequentialism in a world where commonsense morality really does need to change due to a rapidly changing world. But we should help shift commonsense morality in a better direction, not spread utilitarianism.

Very interesting perspective and comment in general, thanks for sharing!

Very good argument imo! It shows that a different explanation than "people don't really care about dying embryos" can be derived from this comparison. People tend to differentiate between what happens "naturally" (or accidentally) and deliberate human actions. When it comes to wild animal suffering, even if people believe it exists, many will think something along the lines of "it's not human-made suffering, so it's not our moral responsibility to do something about it" - which is weird to a consequentialist, but probably quite intuitive for most people.

It takes a few non-obvious steps in reasoning to get to the conclusion that we should care about wild animal suffering. And while fewer steps may be required in the embryo situation, it is still very conceivable that a person who actually cares a lot about embryos might not initially get to the conclusion that the scope of the problem exceeds abortion.

This seems very useful! Thank you for the summaries. Some thoughts:

  • having this available as a podcast (read by a human) would be cool
  • at one point you hinted at happenings in the comments (regarding GiveWell); this generally seems like a good idea. Maybe in select cases it would make sense to also summarize, on a very high level, what discussions are going on beneath a post.
  • this sentence is confusing to me: "Due to this, he concludes the cause area is one of the most important LT problems and primarily advises focusing on other risks due to neglectedness." - is it missing a "not"?
  • given this post has >40 upvotes now, I'm looking forward to reading the summary of it next week :)

  • Flow and distribution of information (inside EA, and in general)
  • how to structure and present information to make it as easily digestible as possible (e.g. in blog posts or talks/presentations)

A bit less pressing maybe, but I'd also be interested in seeing some (empirical) research on polyamory and how it affects people. It appears to be rather prevalent in rationality & EA, and I know many people who like it, as well as people who find it very difficult and complicated.

Sort of! Firstly, I have a field next to each prediction that automatically computes its "bucket number" (which is just FLOOR(<prediction> * 10)). Note that this is Google Sheets, so I'm not sure to which degree it transfers to Excel. For context: column C contains my predicted probabilities, column D (from row 19 onwards) contains 1 or 0 depending on whether the prediction came true, column K contains the computed bucket numbers, A14 is the bucket for which I'm computing this, and D14 is the number of predictions in that bucket. The formulas are then:

  • average predicted probability of a bucket: =AVERAGE(INDEX(FILTER(C$19:K, K$19:K=A14), , 1))
  • number of predictions in that bucket: =ROWS(FILTER(K$19:K, K$19:K<>"", K$19:K=A14))
  • ratio of predictions in that bucket that ended up true: =COUNTIF(FILTER(D$19:K, K$19:K=A14), "=1") / D14
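In case it helps, here's roughly the same computation sketched in Python (my own rough translation, assuming a plain list of (predicted probability, outcome) pairs rather than the spreadsheet layout; the example data is made up):

    from collections import defaultdict
    from math import floor

    # Each entry stands in for one row from 19 onwards:
    # (column C = predicted probability, column D = 1/0 outcome).
    predictions = [(0.72, 1), (0.65, 0), (0.91, 1), (0.15, 0), (0.68, 1)]

    buckets = defaultdict(list)
    for prob, outcome in predictions:
        buckets[floor(prob * 10)].append((prob, outcome))  # bucket number, as in column K

    for bucket, entries in sorted(buckets.items()):
        avg_predicted = sum(p for p, _ in entries) / len(entries)  # the AVERAGE(...) step
        hit_rate = sum(o for _, o in entries) / len(entries)       # ratio that came true
        print(f"bucket {bucket}: n={len(entries)}, "
              f"avg predicted={avg_predicted:.2f}, actually true={hit_rate:.0%}")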

If this doesn't help, let me know and I can clean up one such spreadsheet, see if I can export it as an xlsx file, and send it to you.
