machinaut

11 · Joined Jun 2015

Bio

Alex Ray. Currently I do AI Alignment @ OpenAI.

I got into EA through CFAR, and then spent a bunch of time thinking about 80k guides that I read.

I'm in the SF Bay Area, and hoping to see/meet more EAs in person soon.

Posts (1)


Comments (5)

I liked the transcript; it's much easier to skim and skip around in than a video.  At different points in my life/career I would have liked the video more -- having both seems like an accessibility win.

I liked that this is organized -- the table-of-contents is a great way to explore and jump around.

I wish there were a summary of key points or takeaways.  It seems like that could have gone with, or in place of, the sections overview at the top.  It seems like a bunch of care/preparation went into having good questions, so here I'd put a lot of trust in the interviewer's brief.

Also, I think it would be more skimmable if there were clearer typographic indications of who is speaking in each paragraph.  I'm not exactly sure how to do it on the AF site, but each speaker could be highlighted in a slightly different color, or set off with indentation, or something like that.  Right now my assumption is that there isn't a great way to do that on this site.

I really like the idea here and think it's presented well.  (A+ use of illustrative graphs.)  The tradeoff of "invest more in pondering vs invest in exploring object-level options" is very common.

Two thoughts I'd like to add to this post:

Re-initiating deliberation & non-monotonic credence

I think that the credal ranges are not monotonically narrowing, mostly because we're imperfect/bounded reasoners.

There are events in people's lives, observations, etc. that cause us to realize we've incorrectly narrowed credence in the past and must now re-expand our uncertainty.

This theory still makes a lot of sense in that world -- where termination might be followed up by re-initiation in the future, and uncertainty-expanding events would constitute a clear trigger for that re-initiation.

Value of information for updating our meta-uncertainty

Given the above point -- that our judgement about when/how to narrow credal ranges is flawed -- I think we should care about improving that meta-judgement.

This adds an additional value of information to pondering more -- that we improve our judgement for when to stop pondering.

I think this is important to call out because this update is highly asymmetric -- it's much easier to get feedback that you pondered too long (by doing extra pondering for very little update) than to get feedback that you stopped too soon (because you don't know what you'd think if you'd pondered longer).

In cases with this very asymmetric value of information, I think a useful heuristic is "if in doubt, ponder too long rather than too short."  (This doesn't really account for the fact that it's not Yes/No so much as it is the opportunity cost of other actions, but hopefully the heuristic can be adapted to be useful.)

(Coda: this seems more like rationality than the modal EA forum post -- maybe would get additional useful/insightful comments on LW)

I really appreciated the Get In The Van post, and my inner Kantian immediately turned it categorical.

What should I be doing to make more GITV things happen?  I think the obvious thing is 'drive more vans' or something like that.

Maybe a simpler/smaller thing is to host more events where there's an opportunity for people who don't know each other to meet.

(I'm a pretty big proponent of the social value created by hosting gatherings -- and the added possibility of GITV moments seems like another point in its favor.)

(1) seems worth funding to the extent that it's fund-able (like if it were an open source software project)

I'm less optimistic about public advocacy.  As ML models have had a greater impact on people's lives, there's already been more of a public movement looking for transparency and accountability for these models (which could include structured access).  But this doesn't seem to be a very strong incentive on existing companies' products.

(5) I like a lot, and it would fit well with structured evaluation programmes like BIG-Bench.

What incentives and mechanisms do you think would be most effective at getting industrial and academic labs to provide structured access to their models?