April 19 - 25
Better Futures Highlight

Read the Better Futures series here, and discuss it here, all week.

Quick takes

Did any of the boosters of real-money prediction markets correctly predict that prediction market platforms would be quickly dominated by thinly disguised sports gambling? (I mean this question literally and earnestly, not as a snide takedown of prediction markets or their proponents)
I wrote a quick post about why I think people committed to working on ASI+animals should be making sure we don't spread wild animal suffering throughout the universe.  Full post here: https://naiveconsequentialism.substack.com/p/dont-green-the-universe
Lizka · 2d
People have pretty different background expectations about what the most relevant or worrying kind of AI misalignment/takeover scenario would look like. This also corresponds to different views on when they expect signs of it to be visible (such that not seeing those signs, or seeing something else, would update them). Among other issues, I think this confuses discussion around whether (e.g.) "alignment is easy" or how we should be updating.[1]

My brain likes pictures, so I've found it useful to tag different views and discussions via the following diagrams (these are pretty "raw"/not-distilled):

1) [first diagram]

2) [second diagram] A second one, roughly: how much systems at a given capability level appear safe vs. where they actually are on the path or spectrum to the kind of safety we care about.[2] (This also has some more notes on how people might relate differently to the same results/evidence.)

These are very messy sketches! I'm sharing because I made a hacky commitment to post short things, and in case it's useful for someone (or in case a comment helps clarify things for me, which I'd definitely appreciate). There's some chance that I'll clean these up and update them later.

1. ^ Related: "conflationary alliances" (see also a post with a version of this dynamic about "charity" on the Forum).
2. ^ Again, a huge part of the problem/confusion seems to be that this is a very underdetermined term; see the footnote above. It also feels sort of related to things I wrote about here (although I'm guessing that's partly because this is a basically unedited sketch: I first drew it because a similar image had come to mind in a variety of contexts, and I wanted a version I could adapt as needed, i.e. it was meant to be flexible. If I were making a v2 I'd probably want to commit more, though.)
a_mai · 3d
Shorter timelines don't clearly warrant less caution about infohazards

I have started to worry that some people are reaching premature conclusions about what AGI timelines imply for biosecurity. One example is infohazards, and how cautious we should be about them. A claim I've heard twice (from two fairly influential people in biosecurity) is that shorter timelines warrant less caution about infohazards. I think the reasoning might be this:

1. For any piece of information I'm considering releasing, shorter timelines imply that AIs will disclose this information sooner.
2. If AIs disclose this information sooner, then my withholding it delays its dissemination by a shorter period of time.
3. During a shorter period of delay, the piece of information has less time to accrue risk, and so the delay averts less risk.
4. Conclusion: we have less reason to withhold this piece of information.

I think that is too quick. The key issue is with the third premise: "delaying its dissemination by a shorter period of time means we avert less risk." Importantly, if we have shorter timelines, the risk of a biological attack per unit time arguably increases, since AIs would provide greater uplift to enterprising bad actors. Shorter timelines therefore potentially concentrate the same (or more) biological risk into a shorter period of time. As a result, delaying a piece of information by one month under short timelines could be as valuable as, or more valuable than, delaying the same piece of information by two months under longer timelines.

I therefore do not think that shorter timelines generally warrant less caution about a given infohazard.
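The arithmetic behind this objection can be sketched with a toy model: risk averted by withholding ≈ per-month risk rate × months of delay. All numbers below are made-up illustrations, not estimates; the point is only that if shorter timelines raise the per-month risk rate enough, a shorter delay can still avert more total risk.

```python
# Toy model: risk averted by withholding a piece of information is roughly
# (risk per month that the info enables) x (months the delay lasts).
# The specific numbers are hypothetical, chosen only to illustrate the argument.

def risk_averted(risk_per_month: float, months_of_delay: float) -> float:
    """Expected risk averted by delaying dissemination of the information."""
    return risk_per_month * months_of_delay

# Longer timelines: low per-month risk, but withholding buys a long delay.
long_timelines = risk_averted(risk_per_month=1.0, months_of_delay=2.0)

# Shorter timelines: AI uplift triples the per-month risk, while withholding
# only buys half the delay -- yet more total risk is averted.
short_timelines = risk_averted(risk_per_month=3.0, months_of_delay=1.0)

print(short_timelines > long_timelines)  # True
```

The conclusion is sensitive to the assumed ratio: premise 3 only goes through if the per-month risk rate stays roughly constant as timelines shorten, which is exactly what the quick take disputes.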