SiebeRozendal

I am an (aspiring) x-risk researcher and have been president of EA Groningen for the past 2 years. I am especially interested in crucial considerations within longtermism.

I have a background in (moral) philosophy, business admin, and moral psychology.

Comments

How have you become more (or less) engaged with EA in the last year?

I recently moved to a (nearby) EA hub to live temporarily with some other EAs (and some non-EAs), while figuring out my next steps in life and career.

This has considerably increased my involvement. Being able to talk about EA over lunch and dinner, and to join meetups that are 5 minutes away, makes a big difference, as does finding nice people I connect with socially/emotionally.

I suppose COVID had somewhat of a positive influence here too: I am less likely to attend a wide range of events, because I don't know people's approaches to safety. This leaves more time for EA.

Use resilience, instead of imprecision, to communicate uncertainty

Although communicating a precise estimate together with its resilience conveys more information, in most situations I prefer to give people ranges. I find it a good compromise between precision and communicating uncertainty, while remaining concise and understandable for laypeople and not losing the weirdness credits that I prefer to spend on more important topics.

This also helps me epistemically: sometimes I cannot represent my belief state in a precise number, because multiple numbers feel equally justified or no number feels justified. However, there are often bounds beyond which I think it's unlikely (roughly <20% or <10%, as rough estimates) that I would end up with an estimate, even with an order of magnitude of additional effort.

In addition, I think preserving resilience information is difficult in probabilistic models, but easier with ranges. Of course, resilience can be translated into ranges. However, a mediocre model builder might make the mistake of discarding the resilience if precise estimates are the norm.
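To make the point about ranges and resilience concrete, here is a minimal sketch (my own illustration, not from the original post) of how the same point estimate can map to different ranges depending on how resilient it is. It treats resilience as something like an effective sample size on a Beta distribution, which is just one possible modelling choice:

```python
from scipy import stats

def credence_range(point_estimate, resilience, interval=0.80):
    """Turn a point estimate plus a rough resilience score into a range.

    Resilience is treated like an effective sample size: higher
    resilience -> more concentrated Beta distribution -> narrower range.
    (An illustrative modelling choice, not a standard definition.)
    """
    a = point_estimate * resilience
    b = (1 - point_estimate) * resilience
    lo, hi = stats.beta.interval(interval, a, b)
    return round(lo, 2), round(hi, 2)

# The same 30% point estimate, communicated with different resilience:
print(credence_range(0.30, resilience=5))    # low resilience  -> wide range
print(credence_range(0.30, resilience=100))  # high resilience -> narrow range
```

The ranges I would quote in conversation are rougher than this, but the sketch shows how a wider range can carry the "I would easily update on new information" signal that a single precise number drops.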

EA Focusmate Group Announcement

Just to clarify: Focusmate isn't meant for talking about your work in depth, so most people don't try to find partners with in-depth knowledge. I mostly don't explain things in detail and don't feel like I need to. It's more an accountability thing and a way to share general progress (e.g. "I wanted to get 3 tasks done: write an email, draft an outline for a blog post, and solve a technical issue for my software project. I got 2 of them done, and realized I need to ask a colleague about #3, so I did that instead.").

CEA's Plans for 2020

Thanks for the elaborate reply!

I think there's a lot of open space between sending out surveys and giving people binding voting power. I'm not a fan of asking people to vote on things they don't know about. However, I have something in mind like "inviting people to contribute to a public conversation and decision-making process". Final decision power would still rest with CEA, but input would be more than one-off, the decision-making would be more transparent, and a wider range of stakeholders would be involved. Obviously, this does not work for all types of decisions: some are too sensitive to discuss publicly. Then again, it may be tempting to classify many decisions as "too sensitive". In any case, an organisation's opening up should be an incremental process, and I would definitely recommend experimenting with more democratic procedures.

CEA's Plans for 2020

Hi Max, good to read an update on CEA's plans.

Given CEA's central and influential role in the EA community, I would be interested to hear more about the approach to democratic/communal governance of CEA and the EA community. As I understand it, CEA consults plenty with a variety of stakeholders, but mostly anonymously and behind closed doors (correct me if I'm wrong). I see a lack of democracy and a lack of community support for CEA as substantial risks to the EA community's effectiveness and existence.

Are there plans to make CEA more democratic, including in its strategy-setting?

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

Global society will have a lot to learn from the current pandemic. Which lesson would be most useful to "push" from EA's side?

I assume this question sits in between "the best lesson to learn" and "the lesson most likely to be learned": we probably want to push a lesson that is useful to learn and where our push actually helps bring it into policy.

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

Given the high uncertainty of this question, would you (Toby) consider giving imprecise credences?

Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar?

Not a funding opportunity, but I think a grassroots effort to use social norms to enforce social distancing could be effective in countries at an early stage where authorities are not enforcing it, e.g. the Netherlands, the UK, the US, etc.

Activists (student EAs?) could stand with signs in public places, non-aggressively asking people to please go home.

State Space of X-Risk Trajectories

I think this article very nicely undercuts the following common-sense research ethic:

If your research advances the field more towards a positive outcome than it moves the field towards a negative outcome, then your research is net-positive

In fact, whether research is net-positive depends on the field's current position relative to both outcomes (assuming that once either outcome is reached, the other can no longer be achieved). The article replaces the heuristic above with another one:

To make a net-positive impact with research, move the field towards the positive outcome and towards the negative outcome in a ratio at least as large as distance-to-positive : distance-to-negative.

If we add uncertainty to the mix, we could calculate how risk-averse we should be (risk aversion should be larger when the research step is larger, as small projects probably carry much less risk of accidentally making a big step towards UAI).

The ratio and risk aversion could lead to some semi-concrete technology policy. For example, if the distances to FAI and UAI are (100, 10), technology policy could refuse to fund any project that either has a distance-ratio (for lack of a better term) lower than 10 or has a 1% or higher probability of taking a 10-unit step towards UAI.
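As a very rough illustration (my own sketch, using the hypothetical (100, 10) distances and thresholds from above, not anything proposed in the article), such a funding rule could look like this:

```python
def passes_policy(step_towards_fai, step_towards_uai, p_big_uai_step,
                  dist_to_fai=100.0, dist_to_uai=10.0, p_threshold=0.01):
    """Check a proposed project against the two conditions sketched above.

    1. Its expected distance-ratio (progress towards FAI : progress towards
       UAI) must be at least dist_to_fai : dist_to_uai.
    2. Its probability of taking a big (here, 10-unit) step towards UAI
       must stay below p_threshold.
    All numbers are illustrative placeholders.
    """
    required_ratio = dist_to_fai / dist_to_uai  # 100 : 10 -> 10
    ratio_ok = (step_towards_uai == 0 or
                step_towards_fai / step_towards_uai >= required_ratio)
    tail_risk_ok = p_big_uai_step < p_threshold
    return ratio_ok and tail_risk_ok

# A project expected to move the field 5 units towards FAI and 1 towards UAI,
# with a 0.5% chance of a 10-unit jump towards UAI, fails the ratio test:
print(passes_policy(5, 1, p_big_uai_step=0.005))   # False (ratio 5 < 10)
print(passes_policy(30, 1, p_big_uai_step=0.005))  # True  (ratio 30 >= 10)
```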

Of course, the real issue is whether such a policy can be plausibly and cost-effectively enforced or not, especially given that there is competition with other regulatory areas (China/US/EU).

Without policy, the concepts can still be used for self-assessment. And when a researcher/inventor/sponsor assesses the risk-benefit profile of a technology themselves, they should discount for their own bias as well, because they are likely to have an overly optimistic view of their own project.
