
Jens Aslaug

77 karma · Joined Aug 2023 · Pursuing a graduate degree (e.g. Master's) · Denmark

Bio

I'm in my final year of studying dentistry in Lithuania with the intention of earning to give (most likely in Denmark), but I'm currently evaluating whether this was the right choice (due to limited earning potential and limited options for doing good). If I continue on this path, I expect to donate at least 50% (aiming for 65-70%), or $40,000-60,000 annually (at the beginning of my career). While I expect mainly to do "giving now", in periods with limited effective donation opportunities I expect to do "investing to give".

As a longtermist and (for the most part) total utilitarian, my goal is to find the cause that increases utility the most time- and cost-effectively, no matter the time or type of sentient being involved. In pursuit of this goal, I so far care mostly about WAW (wild animal welfare), x-risks and s-risks (but feel free to change my mind).

I heard about EA for the first time in 2018 through an animal rights organization I worked for part-time (Animal Alliance in Denmark). However, I have only had minimal interactions with EAs.

Due to reading and my time working at Animal Alliance etc., I'm relatively knowledgeable in the following areas: effective communication, investing (stocks) and personal development.

Male, 23 years old, and diagnosed with Asperger's (ASD) and dyslexia.

How others can help me

You are welcome to give me advice on being more effective (e.g. career advice). 

How I can help others

Any suggestions for ways I can help or questions are welcome :)

Comments (14)

I agree. :) Your idea of lobbying and industry-specific actions might also be more neglected. In terms of WAW, I think it could help reduce the amount of human-caused suffering of wild animals, but it would likely not have an impact on naturally caused suffering.

Thanks a lot for the post! I’m happy that people are trying to combine the fields of longtermism and animal welfare.

 

Here are a few initial thoughts from a non-professional (note that I didn't read the full post, so I might have missed something):

I generally believe that moral circle expansion, especially for wild animals and artificial sentience, is one of the best universal ways to help ensure a net-positive future. I think that invertebrates or artificial sentience will make up the majority of moral patients in the future. I also suspect this to be good in a number of different future scenarios, since it could lower the chance of s-risks and improve the situation for animals (or artificial sentience) whether or not there is a lock-in scenario.

I think progress on short-term, direct WAW interventions is also very important, since I find it hard to believe that many people will care about WAW unless they can see a clear way of changing the status quo (even if current WAW interventions only have a minimal impact). I also think short-term WAW interventions could help change the narrative that interfering in nature is inherently bad.
(Note: I have personally noticed that several people who share my values, in terms of caring greatly about WAW in the far future, care only a little about short-term interventions.)

It could of course be argued that working directly on reducing the likelihood of certain s-risks, or working on AI alignment, might be a more efficient way of ensuring a better future for animals. I certainly think this might be true; however, I think these measures are less reliable due to the uncertainty of the future.

I think Brian Tomasik has written great pieces on why an animal-focused hedonistic imperative and gene drives might be less promising and less likely than they seem. I personally also believe it's unlikely to ever happen at a large scale for wild animals. However, if it happens and it's done right (without severely disrupting ecosystems), I think genetic engineering could be the best way of increasing net well-being in the long term. But I haven't thought that much about this.

 

Anyways, I wouldn't be surprised if you already have considered all of these arguments. 

 

I’m really looking forward to your follow-up post :)  

I do agree that t in the formula is quite complicated to understand (and doesn't mean the same as what is typically meant by tractability). I tried to explain it, but since no one edited my work, I might be overestimating how understandable my formulations are. "t" is something like "the cost-effectiveness of reducing the likelihood of x-risk by 1 percentage point" divided by "the cost-effectiveness of increasing net well-being by 1 percent".
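Written out (in my own notation, since the original post's formula isn't reproduced here), that ratio would look roughly like:

$$ t \;\approx\; \frac{\mathrm{CE}(\text{reduce x-risk probability by 1 percentage point})}{\mathrm{CE}(\text{increase future net well-being by } 1\%)} $$

where CE(·) stands for cost-effectiveness; the lower t is, the more tractable WAW looks relative to x-risk reduction.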

That said, I still think the analysis lacks an estimate of how good the future will be, which could make the numbers for "t" and "net negative future" (or u(negative)) "more objective".

I do somewhat agree (my beliefs on this have also somewhat changed after discussing the theory with others). I think "conventional" WAW work has some direct (advocacy) and indirect (research) influence on people's values, which could help avoid certain lock-in scenarios or make them less severe. However, I think this impact is smaller than I previously thought, and I now believe that more direct work on how we can mitigate such risks is more impactful.

If I understand you correctly, you believe the formula does not take into account how good the future will be. I somewhat agree that there is a related problem in my analysis; however, I don't think the problem lies in the formula itself.

The problem you're talking about is actually taken into account by "t". Note that the formula is about "net well-being", i.e. "all well-being" minus "all suffering". So if future "net well-being" is very low, then the tractability of WAW will be high (i.e. "t" will be low). For example, if "net well-being" = 1 (in a made-up unit), it's going to be a lot easier to increase it by 1% than if "net well-being" = 1000.
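To restate that example with explicit numbers (my own arithmetic, just unpacking the sentence above): a 1% increase is an absolute change proportional to the baseline, so

$$ 1\% \times 1 = 0.01 \qquad \text{vs.} \qquad 1\% \times 1000 = 10, $$

i.e. the same relative improvement requires a thousand times more absolute change in well-being when the baseline is 1000. This is the sense in which "t" already encodes an expectation about how good the future is.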

However, I do agree that an estimate of how good the future is expected to be is technically needed to do this analysis correctly, specifically for estimating "t" and "net negative future" (or u(negative)) in the main formula. I may fix this in the future.

(I hope it’s not confusing that I'm answering both your comments at once). 

While I will have to consider this for longer, my preliminary thought is that I agree with most of what you said, which means I may no longer stand by some of my previous statements.

Thanks for the link to that post. I do agree and I can definitely see how some of these biases have influenced a couple of my thoughts. 

--

On your last point, about future-focused WAW interventions, I'm thinking of things that you mention in the tractability section of your post:...

Okay, I see. Well, actually, my initial thought was that all four of those options had a similar impact on the long-term future, which would justify focusing on short-term interventions and advocacy (corresponding to working on points three and four). However, after further consideration, I think the first two have a higher impact when considering the far future, which means I (at least for right now) agree with your earlier statement:

“So rather than talking about "wild animal welfare interventions", I'd argue that you're really only talking about "future-focused wild animal welfare interventions". And I think making that distinction is important, because I don't think your reasoning supports present-focused WAW work.”

While I still think the "flow-through effect" is very real for WAW, I do think it's probably true that working on s-risks more directly might be higher impact.

--

I was curious whether you have any thoughts on these conclusions (based on a number of things you said and my personal values):

  • Since working on s-risks directly is more impactful than working on them indirectly, direct work should be done when possible.
  • There is no current organization working purely on animal-related s-risks (as far as I know). So if that's your main concern, your options are a start-up or convincing an "s-risk mitigation organization" that you should work on this area full time.
    • Animal Ethics works on advocating moral circle expansion. But since this has less direct impact on the long-term future, it has less of an effect on reducing s-risks than more direct work.
  • If you're also interested in reducing other s-risks (e.g. from artificial sentience), then working for an organization that directly tries to reduce the probability of a number of s-risks is your best option (e.g. the Center on Long-Term Risk or the Center for Reducing Suffering).

I'd argue there's a much lower bar for an option value preference. To have a strong preference for option value, you need only assume that you're not the most informed, most capable person to make that decision. 

I do agree that there are people more capable than me of making that decision, and that there will be even more capable people in the future. But I don't believe this is the right assessment of the desirability of option value. I think the more relevant question is whether the future person or people in power (whose view, in the case of a "singleton democracy", may be that of the average human) would be more capable than me.

I feel unsure whether my morals will be better or worse than those of that future person or people, for the following reasons:

  • The vast majority of moral patients currently are, to my knowledge, invertebrates (excluding potential/unknown sentient beings like aliens, AI made by aliens, sentient AI that humans have already made unknowingly, microorganisms, etc.). My impression is that the mean moral circle is wider than it was 10 years ago, and that most people's moral circles expand as poverty decreases, personal problems decrease and free time increases. However, whether the majority will ever care about "ant suffering" and believe that interventions should be done is unclear to me. (So this argument can go both ways.)
  • A similar argument can be made for future AI sentience. My impression is that a lot of humans care somewhat about AI sentience and that this will most likely increase in the future. However, I'm unsure how much people will care if sentient AIs mainly take the form of non-communicating computers that have next to nothing in common with humans.

To what extent do you think approaches like AI alignment will protect against s-risks? Or phrased another way, how often will unaligned superintelligence result in an s-risk scenario?

Well, I think working on AI alignment could significantly decrease the likelihood of s-risks where humans are the main ones suffering. So if that's your main concern, then working on AI alignment is the best option (under both your beliefs and mine).

While I don't think the probability of an "AGI-caused s-risk" is high, I also don't think an AGI will protect or care much about invertebrates or artificial sentience. E.g. I don't think an AGI will stop a person from carrying out directed panspermia or prevent the development of artificial sentience. I think an AGI will most likely have values similar to those of the people who created it or control it (which might again be, in part, the whole human adult population).

I'm also worried that if WAW concerns are not spread, nature conservation (or, less likely but even worse, the spread of nature) will become the enforced value, which could prevent our attempts to make nature better and ensure that natural suffering continues.

And since you asked for my beliefs about the likelihood, here you go (partly copied from my explanation in Appendix 4):

  • I put the "probability" of an "AI-misalignment-caused s-risk" pretty low (1%), because most scenarios of AI misalignment will, according to my previous statements, be negligible (talking about s-risk, not x-risk). It would in this case only be relevant if AI keeps us and/or animals alive "permanently" to live net-negative lives (which would most likely require travelling outside the solar system). I also put "how bad the scenario would be" pretty low (0.5), because I think the impact on animals will most likely (but not guaranteed) be minimal (which technically might mean it would not be considered an s-risk). A rough illustration of how these two numbers combine follows below.
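As a rough illustration of how those two numbers might combine (my own multiplicative framing; the exact weighting in the appendix formula may differ):

$$ \underbrace{0.01}_{\text{probability of the scenario}} \times \underbrace{0.5}_{\text{relative badness}} = 0.005, $$

so this scenario contributes only a small term to the overall expected disvalue.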

I want to try explore some of the assumptions that are building your world model. Why do you think that the world, in our current moment, contains more suffering than pleasure? What forces do you think resulted in this equilibrium? 

I would argue that whether the current world is net positive or net negative depends on the experience of invertebrates, since they make up the majority of moral patients. Most people who care about WAW believe one of the following:

  • That invertebrates most likely suffer more than they experience pleasure.  
  • It is unclear whether invertebrates suffer or experience pleasure more. 

I'm actually leaning more towards the latter. My guess is there's a 60% probability that they suffer more and a 40% probability that they experience more pleasure.

So the cause of my belief that the current world is slightly more likely to be net negative is simply that evolution did not take ethics into account. (So the current situation is unrelated to my faith in humanity.)

With all that said, I still think the future is more likely to be net positive than net negative. 

I think that the interventions that decrease the chance of future wild animal suffering are only a subset of all WAW things you could do, though. For example, figuring out ways to make wild animals suffer less in the present would come under "WAW", but I wouldn't expect it to make any difference to the more distant future. That's because if we care about wild animals, we'll figure out what to do sooner or later.

I do agree that current WAW interventions have a relatively low expected impact compared with other WAW work (e.g. moral circle expansion) if only direct effects are counted. 

Here are some reasons why I think current interventions/research may help the long-term future.

  • Doing more foundational work now means we can start more important research and interventions earlier, once the technology is available. (Probably a less important factor)
  • Current research gives us a better answer to how much pleasure and suffering wild animals experience, which helps inform future decisions on the spread of wildlife. (This may not be that relevant yet)
  • Showing that interventions can have a positive effect on the welfare of wildlife could help convince more people that helping wildlife is tractable and the morally right thing to do (even if it's unnatural). (I think this is the most important effect)

So I think current interventions could have a significant impact on moral circle expansion, especially because I think you need to hold two beliefs to care about WAW work: that the welfare of wildlife is important (especially for smaller animals like insects, which likely make up the majority of suffering), and that interfering with nature can be positive for welfare. The latter may be difficult to achieve without proven interventions, since few people think we should intervene in nature.

Whether direct moral circle expansion or indirect expansion (via interventions) is more impactful is unclear to me. Animal Ethics works mainly on the former and Wild Animal Initiative mainly on the latter. I'm currently expecting to donate to both.

So rather than talking about "wild animal welfare interventions", I'd argue that you're really only talking about "future-focused wild animal welfare interventions". And I think making that distinction is important, because I don't think your reasoning supports present-focused WAW work. 

I think having an organization working directly on this area could be highly important (as far as I know, only the Center for Reducing Suffering and the Center on Long-Term Risk work partly on it). But how do you think it's currently possible to work on "future-focused wild animal welfare interventions"? Other than doing research, I don't see how else you can work specifically on "WAW future scenarios". It's likely just my limited imagination or me misunderstanding what you mean, but I don't see how we can work on that now.

Yes, that's correct, and I do agree with you. To be honest, the main reasons were limited knowledge and simplification. Putting any high number on the likelihood of "artificial sentience" would make it the most important cause area (which, based on my mindset, it might be).
But I'm currently trying to figure out which of the following I think is most impactful to work on: AI alignment, WAW or AI sentience. This post was only about the first two.

That said, I do think AI sentience is a lot less likely than many EAs think (which still doesn't justify "0.01%"). Note that these are just initial thoughts based on limited information. Anyway, here's my reasoning:

  • While I do agree that it might be theoretically possible and could cause suffering on an astronomical scale, I don't understand why we would intentionally or unintentionally create it. Intentionally, I don't see any reason why a sentient AI would perform any better than a non-sentient AI. Unintentionally, I can imagine that with some unknown future technology it might be possible, but no matter how complex we make AI with our current technology, it will just become a more "intelligent" binary system.
  • Even if we create it, it would only be relevant as an s-risk if we don’t realize it and fix it. 

However, I do think the probability that I change my mind is high.

Thanks a lot for your thoughts on my theory. I had never heard the term "option value" before; I'm going to read more about it and see if it changes my beliefs. Here are my thoughts on your points:

- I didn't directly consider option value in my theory/calculation, but I think there is a strong overlap, since my calculation only considers "permanent" changes (similar to the "lock-in" you're referring to).

- To clarify, I don't believe that the mean future will be net negative. However, the possibility of a "net-negative lock-in scenario" lowers the expected value of working on preventing x-risks.

- I'm somewhat skeptical of the value of option value, because it assumes that humans will do the right thing. It's important to remember that the world is not made up of EAs or philosophers, and an AGI will likely not have much better values than the people who created it or control it. And because of humans' naturally limited moral circle, I think it's likely that the majority of sentient experiences (most likely from (wild) animals or artificial sentience) will be mostly ignored, which could mean that the future will be net negative overall, even if we don't end up in any "lock-in" scenario.

- With all that said, I still think the expected value of working on x-risk mitigation is extremely high; it may even be the most or second-most impactful cause area based on total utilitarianism and longtermism. But I do think that the likelihood and scale of certain "lock-in" net-negative futures could potentially make working on s-risks, directly or indirectly, more impactful.

Feel free to change my mind on any of this.
