The EAGxNordics talks are starting to appear on the CEA YouTube channel. Here are some notes on which ones I liked and would recommend watching (for reference, I put together the raw footage we got from the venue into publishable form, and have therefore already watched them all).

(I will check back and add a link when Sibylle's talk is published.)

On the longlist would also be talks by Anders Sandberg, Caroline Jeanmaire, Rachel Waddell & Mathias Bonde, Henri Thunberg, Jacob Arbeid, and Phil Trammell.

Now, I have more recommendations, but I think there's also a pernicious problem with people giving talks: some people are really good at giving talks, but I'm not sure how good their underlying content is. So, the talks I thought were really good, but which I haven't really sat down and thought about much, were:

  • Kristina Mering on her experience with lobby work and building an organization
  • Suvi Auvinen on her path "From Activist to Consultant"
  • Ryuji Chua on Engaging skeptics and effective communication

Yeah, I'm starting to believe that a severe limitation of Brier scores is the inability to use them in a forward-looking way. Brier scores reflect the performance of specific people on specific questions, and using them as evidence of future prediction performance seems really fraught... but it's the best we have, as far as I can tell.
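(For concreteness, the Brier score on binary questions is just the mean squared difference between stated probabilities and outcomes. A minimal sketch in Python; the function name and the numbers are mine, purely for illustration:)

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.

    forecasts: probabilities in [0, 1]; outcomes: 0/1 resolutions.
    Lower is better; always guessing 0.5 scores 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Someone who forecast 0.9, 0.7, 0.2 on questions that resolved yes, yes, no:
print(brier_score([0.9, 0.7, 0.2], [1, 1, 0]))  # ~0.047
```

Note how the score only summarizes those three resolved questions; nothing in it tells you how the forecaster will do on a new, different question, which is exactly the forward-looking gap above.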

This was a great post, and I appreciated the comments towards the end about the train to Crazy Town, like "Stops along the train to Crazy Town simply represent the place where we see the practical limits to a certain kind of quasi-formal reasoning." In my own (but noticeably Yudkowsky-derived) words, I was thinking that this applies in areas where "the probabilities are there to make clear our beliefs and values. If, on reflection, you find that the probabilities advocate for something you do not believe or value, then the model you built your probabilities on doesn't capture your beliefs/values well enough."

This is noticeably narrower than the AI x-risk prediction case, where I see beliefs about possible/relevant models as less clustered than beliefs about the set of human beliefs. And now I'm noticing that even here I might be trapped inside the Bayesian Mindset, as the previous sentence is basically a statement of credences over the spread of those sets of beliefs.

Have you had a chance to read Vanessa Kosoy and Diffractor's work on Infra-Bayesianism? From what I can tell (I haven't spent much time engaging with it myself yet), it's very relevant here, as it aims to be a theory of learning that applies "when the hypothesis is not included in the hypothesis space". Among other things, they talk about infradistributions: sets of probability distributions that act like a prior over environments.
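(To give a flavour of the "set of distributions" idea: the toy below is just the credal-set intuition from imprecise probability, scoring an action by its worst-case expected utility over a set of priors. This is my simplification, not Kosoy and Diffractor's actual formalism, which involves convex sets of sub-probability measures and more; all names and numbers here are illustrative.)

```python
def worst_case_expectation(utility, credal_set):
    """Score an action by its minimum expected utility over a set of priors."""
    return min(
        sum(prob * utility[state] for state, prob in prior.items())
        for prior in credal_set
    )

# Two candidate environments we can't decide between:
credal_set = [{"rain": 0.2, "sun": 0.8}, {"rain": 0.6, "sun": 0.4}]
picnic_utility = {"rain": -1.0, "sun": 3.0}
print(worst_case_expectation(picnic_utility, credal_set))  # ~0.6, the pessimistic prior
```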

On critique

Does anyone have any good pieces on the nature of critique? Some of my own thoughts are below; I might write something longer on this at some point, as thoughts on it (and on feedback in video game production) have been pinging around my head recently.


First, critique is good. It encourages those who create [a business venture, a piece of art or entertainment, an analysis] to do their best work, so that there are no holes in what they produce. Letting others see your project or idea - having it face reality - is good practice.

Second, in practice a lot of critique is bad. It is often directed at what is not the core of the product - that is, the critic has missed the point of the venture, or (more rarely) has gotten the point but is arguing that the point is bad - and so makes a critique that would optimize the product not along the dimensions valuable to the producer, but along those valuable or obvious to the critic.

Third, nor can we simply throw back at the critics the unequivocal burden so often phrased as "do it better yourself, then". While those who critique can often be in the best position to create something based on the principles of that critique (the positive version of this remark), it seems to me that more often we are merely heaping a larger emotional burden onto the shoulders of those who disagree with the means or aims of someone else (the negative version).

I think a lot of things fall into this third bucket, where A makes a thing, B says A is not taking X into account, and C says "well, B, what would a project taking X into account look like?". And B, who probably has other obligations and is already trying to make their own projects work, has neither the time nor the energy to ideate or instantiate a project around X.

Nice article! There's a lot to unpack here, as it covers quite a lot of ground, but I wanted to focus on something that caught my attention in section 4.

It appeared to me that you discuss two types of resiliency without - to my mind - making much of a distinction between them. The first is institutional resiliency, and the second is resilient interventions. In my mind, the latter comes across as object-level interventions for specific problems - drought-tracking, etc. - and the former as meta-level organisational design/interventions for ensuring that our current institutions can continue to operate under (future, probable) conditions of high organisational stress.

Is this a conceptual divide you would endorse, or do you see institutional resiliency more as another object-level area of resiliency interventions, in line with the others, to be upgraded/updated in tandem with them? (E.g. as we get better drought-tracking capabilities, the system is designed so that institutional resiliency in its usage is a built-in feature.)

Hi Ian. This write-up made me even more excited to follow the work of the EIP in the coming years.

One thing I wanted to ask for more information about was the "system-centered" approach you detailed towards the end of the post. I'm not sure I entirely see the difference between an approach that "seeks out and prioritizes the highest-impact opportunities without artificial boundaries on scope related to issues, audiences, or interventions" and using a mish-mash of interventions from the traditional improvement strategies.

Is the proposal that, e.g., instead of coming to an organisation and saying "we think you need forecasting", there should be a more open-ended analysis of its structure, needs, incentives, etc., to tailor both *which* interventions/products to implement *and* (as usually done) how to implement them? And that, before even coming to an organisation, you will have carried out an analysis like the one in this post, and by doing these two things, be issue-, audience-, and product-agnostic?


I am not familiar with many examples or case studies of carrying out large changes in organizations, but in an area I *am* familiar with, performance psychology, one of the big limiters on this kind of approach is resources. As a one-person operation, you don't have the capacity to play around with approaches, and have to specialize in one issue/implementation, e.g. mindfulness in athletes. Do you expect that the EIP will be structured so as to, and have the resources to, do the needs analysis of relevant institutions, and then have individually specialised expert staff on demand to deploy as the needs analysis suggests? Or will you have staff with general competency across these different levels?

 

I see that these issues may be things that will be considered and dealt with in due time; if you think they are not timely for the current state of the EIP, that's fine.