I am an (aspiring) x-risk researcher and have been president of EA Groningen for the past two years. I am especially interested in crucial considerations within longtermism.

I have a background in (moral) philosophy, business admin, and moral psychology.

SiebeRozendal's Comments

State Space of X-Risk Trajectories

I think this article very nicely undercuts the following common sense research ethics:

If your research advances the field more towards a positive outcome than it moves the field towards a negative outcome, then your research is net-positive

Instead, whether research is net-positive depends on the field's current position relative to both outcomes (assuming that once either outcome is achieved, the other can no longer be). The article replaces the heuristic above with another:

To make a net-positive impact with research, move the field towards the positive outcome and towards the negative outcome in a ratio of at least distance-to-positive : distance-to-negative.

If we add uncertainty to the mix, we could calculate how risk-averse we should be (risk aversion should be greater for larger research steps, as small projects probably carry much less risk of accidentally taking a big step towards UAI).

The ratio and risk-aversion could lead to some semi-concrete technology policy. For example, if the distances to FAI and UAI are (100, 10), technology policy could prevent funding any projects that either have a distance-ratio (for lack of a better term) lower than 10 or that have a 1% or higher probability of taking a 10d step towards UAI.
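As a minimal sketch of the distance-ratio test (the function name and all numbers here are my own illustrations, not estimates from the article; distances are abstract positions in the state space):

```python
# Hypothetical sketch of the distance-ratio heuristic; the function name
# and all numbers are illustrative, not estimates from the article.

def is_net_positive(dist_to_pos, dist_to_neg, step_to_pos, step_to_neg):
    """A research step is net-positive under the heuristic if it moves the
    field towards the positive outcome vs. the negative outcome in at least
    the ratio of the remaining distances (dist_to_pos : dist_to_neg)."""
    # Cross-multiplied form of step_to_pos/step_to_neg >= dist_to_pos/dist_to_neg,
    # which avoids dividing by zero for steps with no movement towards the
    # negative outcome.
    return step_to_pos * dist_to_neg >= step_to_neg * dist_to_pos

# With distances (100, 10) as in the example, the required ratio is 10:1.
print(is_net_positive(100, 10, 30, 2))   # 30:2 = 15:1, clears the bar
print(is_net_positive(100, 10, 15, 2))   # 15:2 = 7.5:1, falls short
```

The cross-multiplied comparison is just the ratio condition rearranged, so a project that moves the field only towards the positive outcome always passes.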

Of course, the real issue is whether such a policy can be plausibly and cost-effectively enforced or not, especially given that there is competition with other regulatory areas (China/US/EU).

Without policy, the concepts can still be used for self-assessment. And when a researcher/inventor/sponsor assesses the risk-benefit profile of a technology themselves, they should also correct for their own bias, because they are likely to hold an overly optimistic view of their own project.

Comparing Four Cause Areas for Founding New Charities

I really love Charity Entrepreneurship :) A remark and a question:

1. I notice one strength you mention for family planning is "Strong funding outside of EA" - I think this is a very interesting and important factor that's somewhat neglected in EA analyses because it goes beyond cost-effectiveness. We are then not just asking 'given our resources, how can we spend them most effectively?' but the more general (and more relevant) 'how can we do the most good?' I'd like to see 'how much funding is available outside of EA for this intervention/cause area?' become a standard question in EA cost-effectiveness analyses :)

2. Is there anything you can share about expanding to two of the other cause areas: long-termism and meta-EA?

Final update on EA Norway's Operations Project

A consulting organisation aimed at EA(-aligned) organisations, as far as I'm aware:

Mark McCoy, mentioned in this post, is the Director of Strategy for it.

Thoughts on doing good through non-standard EA career pathways

This might just be restating what you wrote, but regarding learning unusual and valuable skills outside of standard EA career paths:

I believe there is a large difference in the context of learning a skill. Two 90th-percentile quality historians with the same training would come away with very different usefulness for EA topics if one learned the skills keeping EA topics in mind, while the other only started thinking about EA topics after their training. There is something about immediately relating and applying skills and knowledge to real topics that creates more tailored skills and produces useful insights during the whole process, which cannot be recreated by combining EA ideas with the content knowledge/skills at the end of the learning process. I think this relates to something Owen Cotton-Barratt said somewhere, but I can't find where. As far as I recall, his point was that 'doing work that actually makes an impact' is a skill that needs to be trained, and you can't just first get general skills and then decide to make an impact.

Personally, even though I did a master's degree in Strategic Innovation Management with longtermism ideas in mind, I didn't have enough context and engagement with ideas on emerging technology to apply the things I learned to EA topics. In addition, I didn't have the freedom to apply the skills. Besides the thesis, all grades were based on either group assignments or exams. So some degree of freedom is also an important aspect to look for in non-standard careers.

Thoughts on doing good through non-standard EA career pathways

Can I add the importance of patience and trust/faith here?

I think a lot of non-standard career paths involve doing a lot of standard stuff to build skill and reputation, while maintaining a connection with EA ideas and values and keeping an eye open for unusual opportunities. It may be 10 or 20 years before someone transitions into an impactful position, but I see a lot of people disengaging from the community after 2-3 years if they haven't gotten into an impactful position yet.

Furthermore, trusting that one's commitment to EA and self-improvement is strong enough to lead to an impactful career 10 years down the line can create a self-fulfilling prophecy where one views their career path as "on the way to impact" rather than "failing to get an EA job". (I'm not saying it's easy to build, maintain, and trust one's commitment though.)

In addition, I think having good terminology is really important for keeping these people motivated and involved. We have "building career capital" and Tara MacAulay's term "Journeymen", but I'm afraid these are not catchy enough.

Final update on EA Norway's Operations Project

(Off-topic @JPAddison/@AaronGertler/@BenPace:)

Is tagging users going to be a feature on the Forum someday? It'd be quite useful! Especially for asking questions of non-OPs where the answer can be shared and would be useful publicly.

Final update on EA Norway's Operations Project

(@Meta Fund:)

Will any changes be made to the application and funding process in light of how this project went? I can imagine that it would be valuable to plan a go/no-go decision for projects with medium to large uncertainty/downside risk, and perhaps add a question or two (e.g., 'what information would you need to learn to make a go/no-go decision?') if that does not bloat the application process too much. I think this could be very valuable to explore more risky funding opportunities. For example, a two-stage funding commitment can be made where the involved parties can pre-agree to a number of conditions that would decide the go/no-go decision, making follow-up funding much more efficient than going through a new complete funding round.

Final update on EA Norway's Operations Project

(@Mark McCoy:)

I wonder what is currently happening with Good Growth and how it relates to this current, so-far-nameless operations project. It seems to be an unfunded merging of the two projects? Could you briefly elaborate on the plans and funding situation for the project?

Final update on EA Norway's Operations Project

Props for making a no-go decision and switching the focus of the project - I think that is very commendable!

I am very curious about what is going to happen further, and have a few questions:

@EA Norway: Do you have any ideas/opinions on addressing operations bottlenecks that might also be highly impactful, such as

a) organisations doing highly impactful work but not explicitly branded as EA (e.g. top charities, research labs) and

b) other EA projects, such as large local/national groups, and early-stage projects.

Long-term investment fund at Founders Pledge

This is a really interesting idea and I'm glad you are taking this up! Some considerations off the top of my head:

1. This set-up would probably not only 'take away' money that would otherwise have been donated directly. There is some percentage of 'extra' money this set-up would attract. So the discussion should not be decided solely by 'would the money be better spent invested or donated now?'

2. There is probably a formal set-up for this (optimization) problem, and I think some economist or computer scientist would find it a worthwhile and publishable research question to work on. I'm sure there is related work somewhere, but I suppose the problem is somewhat new with the assumptions of 'full altruism', time-neutrality, and letting go of the fixed-resource assumption.

3. There is a difference between investing money for a) later opportunities that seem high-value and can be found by careful evaluation, and b) later opportunities that seem high-value and require a short time frame to respond. I hope this fund would address both, and I think the case for b) might be stronger than for a). One option for b) would be a global catastrophic response fund. As far as I am aware, there is no coordinated protocol to respond to global catastrophes or catastrophic crises, and the speed of funding can play a crucial role. A non-governmental fund would be much faster than trying to coordinate an international response. Furthermore, I think a) and b) play substantially different roles in the optimization problem.
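To illustrate the trade-off in point 2, a toy version of the invest-vs-donate comparison might look like the following (all parameter names and numbers are my own assumptions for illustration, not claims about actual returns or opportunities):

```python
# Toy model of "donate now" vs "invest, then donate later": invested money
# grows at rate r per year, while the cost-effectiveness of the best
# available opportunity declines at rate d per year. All numbers are
# hypothetical.

def value_donate_now(amount, cost_effectiveness=1.0):
    # Impact of donating immediately at today's cost-effectiveness.
    return amount * cost_effectiveness

def value_invest_then_donate(amount, years, r, d, cost_effectiveness=1.0):
    # Impact of investing for `years`, then donating at the (declined)
    # cost-effectiveness available at that time.
    grown = amount * (1 + r) ** years
    later_effectiveness = cost_effectiveness * (1 - d) ** years
    return grown * later_effectiveness

# Investing wins only if growth outpaces the decline of opportunities,
# i.e. roughly when (1 + r) * (1 - d) > 1.
now = value_donate_now(100)
later = value_invest_then_donate(100, years=20, r=0.05, d=0.02)
print(now, later)
```

A fuller formalisation would also need the 'extra money attracted' effect from point 1 and the fast-response option value from point 3, which is part of what makes the problem feel publishable rather than trivial.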
