
Phib

309 karma

Bio

This point is not to identify with it. It’s a fib.

Comments (36)

FWIW Habryka, I appreciate all that I know you've done, and I expect there's a lot more I don't know about that I should be appreciative of too.

I would also appreciate it if you'd write up these concerns? I guess I want to know whether I should feel similarly, even though I rather trust your judgment. Sorry to ask, and thanks again.

Editing to note I've now seen some of your comments elsewhere.

Yeah, thank you. I guess I was trying to say that the evidence that the Bay Area's "AI is the only game in town" attitude is accurate only seems to get stronger over time.

Insofar as: various AI capabilities have arrived faster than both superforecasters' and AI insiders' predictions; transformative AI timelines (at Open Phil, on prediction markets, and among AI experts, I think) have shortened significantly over the past few years; the performance of LLMs has increased at an extraordinary rate across benchmarks; and we expect the next decade to extrapolate this scaling to some extent (with essentially hundreds of billions, if not tens of trillions, to be invested).

Although, yeah, I think to some extent we can't know whether this continues to scale as prettily as we'd expect, and it's especially hard to predict categorically new futures like exponential growth (10%, 50%, etc. growth/year). Given the forecasting efforts and trends thus far, it feels like there's a decent chance of these wild futures, and people are kinda updating all the way? Maybe not Open Phil entirely (to the point that EA isn't just AIS), since they are hedging their altruistic bets in the face of some possibility that this decade could be "the precipice", or one of the most important ever.

Misuse and AI risk seem like the negative valence of AI's transformational potential. I personally buy the arguments that transformational technologies need more reasoned steering and safety, and I also buy that EA has probably been a positive influence and that alignment research has been at least somewhat tractable. Finally, I think there's more that could be done to safely navigate this transition.

Also, re: David (Thorstad?), yeah, I haven't engaged with his stuff as much as I probably should, and I really don't know how to reason for or against arguments around the singularity, exponential growth, and the potential of AI without deferring to people more knowledgeable/smarter than me. I do feel like I have seen the start and middle of trends they predicted, and predict will extrapolate, through my own personal use and some early reports on productivity increases.

I do look forward to your sequence and hope you do really well on it!

Hi, I went to LessOnline after registering for EAG London. My impression of why both events were held on the same weekend is something like:

  1. Events around the weekend (Manifest being held the weekend after LessOnline) informed LessOnline's dates (but why not the weekend after Manifest, then?)

  2. People don't travel internationally as much for EAGs (someone cited ~10% of attendees to me, but on reflection this seems like an underestimate).

  3. I imagine EAG Bay Area: Global Catastrophic Risks in early Feb also somewhat covered the motivation for an "AI Safety/EA conference".

I think you're right that it's not entirely a coincidence that LessOnline conflicted with EAG London, but I think this was done somewhat casually and probably reasonably.

I think it's odd, and others have noted this too, that the most significant AI safety conference shares space with things unrelated on an object level. It's further odd to consider something I've heard people say: why bother going to a conference like this when I live in the same city (Berkeley/SF) as the people I'd most want to talk with?

Finally, I feel weird about AI, since I think insiders are only becoming more convinced of, and confirmed in, the likelihood of extreme events from AI capabilities. I think it has only become more important, by virtue of most people updating their timelines to be earlier, not later, and this includes Open Phil's version (Ajeya Cotra's and Joe Carlsmith's AI timelines). In fact, I've heard arguments that it's actually less important, by virtue of "the cat's out of the bag and not even Open Phil can influence trajectories here." Maybe AI safety feels less neglected because it's being advocated for by large labs, but that may be both a result of EA/EA-adjacent efforts and not really enough to solve a unilateralizing problem.


(I feel a little awkward just pushing news, but I feel some obligation to completeness on this subject.)

My initial thoughts around this are that, yeah, good information is hard to find and prioritize, but I would really like better and more accurate information to be more readily available. I actually think AI models like ChatGPT achieve this to some extent, as a sort of not-quite-expert on a number of topics, and I would be quite excited to see these models become even better accumulators and communicators of knowledge. Already there seems to have been some benefit to productivity (one thing I saw recently: https://arxiv.org/abs/2403.16977). So I guess I somewhat disagree that AI is net negative as an informational source, but I do agree that it's probably enabling the production of a bunch of spurious content, and I have heard arguments that this is going to be disastrous.

But I guess the post is focused more on news itself? I appreciate the idea of a sort of weekly digest, in that it would somewhat counteract the constant news hype cycle; I guess I'm more in favor of longer time horizons for examining what is going on in the world. The debate on COVID's origin comes to mind, especially considering Rootclaim, as an attempt at more accurate information accumulation. I guess forecasting is another form of this, whereby taking bets on things before they occur and being measured on your accuracy is an interesting way to consume news that also has a sort of 'truth' mechanism to it, and notably a legible operationalization of truth! (Edit: I guess I should also couch this in what already exists on the EA Forum; LessWrong and rationality pursuits in general seem pretty adjacent here.)
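To make that "legible operationalization of truth" a bit more concrete, here is a minimal sketch (my own illustration, not anything from the post or the platforms mentioned) of the Brier score, one common way forecasting platforms measure accuracy:

```python
# Minimal sketch of Brier scoring: mean squared error between stated
# probabilities and realized outcomes. Lower is better; 0 is perfect.
# (Illustrative only; real platforms vary in their scoring rules.)

def brier_score(forecasts):
    """forecasts: list of (predicted probability, outcome as 0 or 1)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A forecaster who said 90% on something that happened and 20% on
# something that didn't beats one who hedged at 50% on both.
print(brier_score([(0.9, 1), (0.2, 0)]))  # 0.025
print(brier_score([(0.5, 1), (0.5, 0)]))  # 0.25
```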

To some extent my lame answer is just that AI enabling better analysis in the future is probably the most tractable way to address information quality. (Idk, I'm no expert on information, and this seems like a huge problem in a complex world. Maybe there are more legible interventions for improving informational accuracy; I don't know them and don't really have much time, but I would encourage further exploration, and you seem to be checking out a number of examples in another comment!)


Responding to this because I think it discourages a new user from trying to engage and test their ideas against a larger audience, maybe some of whom have relevant expertise, and maybe some of those will engage; that seems like a decent way to try to learn. Of course, good intentions to solve a 'disinformation crisis' like this aren't sufficient; ideally we would be able to perform serious analysis on the problem (scale, neglectedness, tractability, and all that fun stuff, I guess), and in this case tractability seems most relevant. I think your second paragraph is useful in mentioning that this is extremely difficult to implement, but it also just gestures at the problem's existence as evidence.

I share this impression, though: disinformation is difficult, and I also had a kinda knee-jerk reaction to "high quality content". But idk, I feel like engaging with the piece with more of a yes-and attitude, to encourage entrepreneurial young minds, and/or with more relevant facts of the domain could be a better contribution.

But I'm doing the same thing and just being meta here, which is easy, so I'll try it myself in another comment.

Yeah, wow, the views-to-engagement ratio is the most unbalanced I've seen (not saying this is a bad or good thing, just noting my surprise).

I sometimes think of the expanding moral circle instead as an abstracting moral, uh, circle, where I'm able to abstract suffering over a distance, over time into the future, onto other species at some rate, into numbers, into probabilities and the meta, into complex understandings of ideas as they interact.

Agreed; the evidence is solely "according to at least two sources with direct knowledge of the situation, who asked to remain anonymous."

Appreciate the post quite a bit, thank you for taking the time to share.
