All of Paul_Lang's Comments + Replies

Hi @Leandro Franz thank you very much for this post. I'd be curious to have a look at your document or a summarized version of it. Could you double check the link to the document? It does not work for me.

I am also lacto-vegetarian and wanted to buy https://veganpowah.com/product/vegan-powah-180/. They have some good information about their ingredients on that website. However, they are out of stock, so I purchased most ingredients in powder form, except for things I take separately or don't need, like Omega-3 (I have a product with higher EPA; I also don't know how Vegan Powah got the oil into powder form, and I have concerns about chemical stability if I mix it in myself), iron (it inhibits zinc absorption, so I take it separately), selenium (I just eat ~2 Brazil nuts/day) and B vitam...

I am wondering if 80k should publicly speak of a PINT framework instead of the INT framework (with P for personal fit). This is because I get the impression that the INT framework contributes to a reputation hierarchy of cause areas to work on, and many (young) EAs tend to over-emphasize reputation over personal fit, which basically sets them up for failure a couple of years down the line. Putting the "P" in there might help to avoid this.

Is there any evidence that translation efforts are effective at reaching people who do not have English as their first language? My impression is that native German speakers under 35 with a university degree understand written English perfectly well, although some prefer German. Listening and especially speaking can be a bit more challenging. As a rule of thumb, the younger the person, the better their English (due to YouTube, Netflix, etc.).

I suggest exploiting Facebook's Dating App instead, roughly like so (still needs some testing; dm'd you, Affective Altruist): https://docs.google.com/document/d/1VTRO12Nsl3H9P7Zpx3mcyeQ1HWNapxkUlaf45xS5OcU/edit?usp=sharing

1
Adrià Garriga Alonso
2y
This seems better than having to make an entirely new dating site.

Good to hear that there are EAs working on that within governments.

Thanks for sharing your insights Mako! After reading your response and the IEEE Spectrum article you mentioned, I am much more optimistic that the metaverse can/will move in the right direction. Is there anything that could be done (by governments, companies, NGOs, the general public, or whatever player) to make this even more likely?

I also liked your example of Twitter, where addictiveness was not designed into the system but happened accidentally. Accidents usually prompt investigations to improve regulations, for instance in the aircraft industry. Do y...

2
mako yass
2y
Fair prompt. I get the impression that the most impactful thing you can do is to make sure that the people leading the standards dialog have strong technical vision and good taste. That'll also make it more likely to even succeed at establishing a standard. I guess that's something that EA (with so much software engineering acumen) could probably do better than most NGOs! But yeah, it looks like that might already be the case, I'm not sure.

I don't know what the addictive social media systems of VR will look like. It might just be twitter again, but with bigger text. Hmm... I guess VR social systems might orient around VR's adaptation to voice chats, ubiquity of mics, support for body language (filtered through an avatar, which will often make people more comfortable) and a more natural sense of presence. I find it difficult to imagine many novel systems about that, because it seems like it's constrained to the sorts of arrangements that're already pretty natural for humans. People walking around in a room and making sounds at each other. If you're rude, people remember, and you don't get invited next time. It doesn't seem obvious that the information or the social bonds can be structured in any alarmingly novel ways.

Well, I guess one big difference is that the social cliques can end up a lot more globe-sprawling and specific and extreme. But I'm not sure. There will still be lots of cross-linking. You'll tend to meet your friends' friends. Maybe systems will end up being less about structuring information, and more about structuring relationships, controlling group matchmaking or timetabling.

To me that sounds like a project that could be listed on https://www.eawork.club/. I once listed a task there to translate the German Wikipedia article on Bovine Meat and Milk Factors into English, because I did not have the rights to do it myself. A day later, somebody had done it. And in the meantime, somebody has apparently translated it into Chinese as well.

Regarding media: to keep track of media coverage and potentially react accordingly, it seems that https://www.google.com/alerts can be helpful.

I agree with your statement that "The message of the post is that specific impact investments can pass a high effectiveness bar".

But when you say "I think the message of this post isn't that compatible with general claims like 'investing is doing good, but donating is doing more good'", I think I must have been misled by the decision matrix. To me, it suggested this comparison between investment and donation, while not being able to resolve a difference between the columns "Pass & Invest to Give" and "Pass & Give now" (and a hypothetical c...

Thanks for the response. My issue was just that the money flow from the customer to the investor was accounted for as positive for the investor, but not as negative for the customer. I see the argument that the customers are reasonably well-off non-EAs, whereas the investor is an EA. I am not sure this can be used to justify the asymmetry in the accounting.

Perhaps it would make sense that an EA investor is only 10% altruistic and 90% selfish (somewhat in line with the 10% GW pledge)? The conclusion of that would be that investing is doing good, but donating is doing more good.

6
jh
2y
I'm still not sure I understand your point(s). The payment of the customers was accounted for as a negligible (negative) contribution to the net impact per customer.

To put it another way: think of the highly anxious customers. Each will get $100 in benefits from the App, plus 0.02 DALYs averted (for themselves) on top of this, the additional DALYs being discounted for the potential that they could use another App. Say the App fee is $100. This means that to unlock the additional DALYs, the users as a group will pay $400 million over 8 years. The investor puts in their $1 million to increase the chances that the customers have the option to spend the $400m. In return they expect a percentage of the $400m (after operating costs, other investors' shares, the founders' shares). But they are also having a counterfactual effect on the chance the customers have/use this option.

This is basically a scaled-up version of a simple story where the investor gives a girl called Alice a loan so she can get some therapy. The investor would still hope Alice repays them with interest. But they also believe that without their help to get started she would have been less likely to get help for herself. Should they have just paid for her therapy? Well, if she is a well-off, western iPhone user who comfortably buys lattes every day, then that's surely ineffective altruism. Unless she happens to be the investor's daughter or something, so that it makes sense for other reasons.

I think the message of this post isn't that compatible with general claims like "investing is doing good, but donating is doing more good". The message of the post is that specific impact investments can pass a high effectiveness bar (i.e. $50/DALY). If the investor thinks most of their donation opportunities are around $50/DALY, then they should see Mind Ease as a nice way to add to their impact. If their bar is $5/DALY (i.e. they see much more effective donation opportunities), then Mind Ease will be less attractive. It might
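For concreteness, a back-of-the-envelope sketch of the figures in the reply above (my own arithmetic, using only the numbers stated there):

```python
# Back-of-the-envelope check of the figures in the reply above.
app_fee = 100            # $ subscription fee per customer
total_spend = 400e6      # $ the customers pay as a group over 8 years
dalys_per_customer = 0.02

# Implied number of paying customers, and total DALYs averted on top
# of the $100 benefit each customer gets.
customers = total_spend / app_fee
total_dalys = customers * dalys_per_customer
print(int(customers))    # implied customer count
print(int(total_dalys))  # implied total DALYs averted
```

This is just the arithmetic implied by the stated figures, not an endorsement of the underlying estimates.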

I would have thought that this is orders of magnitude easier, because (with the exception of my last sentence) it uses existing technology (although, AFAIK, the artificial ecosystems we tried to create on Earth failed after some time, so maybe a bit more fine-tuning is needed). Whereas we still seem to be far away from understanding humans or uploading them to computers. But in the end, perhaps we would not want to colonise space with a rocket-like structure, but with the lightest stuff we can possibly build, due to relativistic mass increase. Who knows. The lightweight argument would certainly work in favour of the upload-to-computer solution.

From an impartial perspective, I think it is also necessary to account for the wallet of the customer, not only that of the investor. After all, the only reason why the investor gets their money back is that customers are paying for the product.

In other words, one could add a row "Financial loss customer" to the decision matrix. For the "Pass & Give Now" column it would be 0% (there is no customer who pays the investor back). For all other columns it would be 100%, I think. That is, once the customer's wallet is taken into account, the best world would b...

8
jh
2y
Thanks for this comment and question, Paul. It's absolutely true that the customers' wallets are worth potentially considering. An early reviewer of our analysis also made a similar point. In the end we are fairly confident this turns out not to be a key consideration.

The key reason is that mental health is generally found to be a service for which people's willingness to pay is far below the actual value (to them). Especially for the likely paying customer markets of e.g. high-income-country iPhone users, the subscription costs were judged to be trivial compared to the changes in their mental health. This is why, if I remember correctly, this consideration didn't feature more prominently in Hauke's report (on the potential impacts on the customers). Since it didn't survive there, it also didn't make it into the investment report.

I'm not quite sure I understand the point about the customer donating to the BACO instead. That could definitely be a good thing. But it would mean an average customer with anxiety choosing to donate to a highly effective charity (presumably instead of not buying the App). This seems unlikely. More importantly, it doesn't seem like the investor can influence it?

In short, since the expected customers are reasonably well-off non-EAs, concerns about customer wallets or donations didn't come into play.

I really enjoyed this podcast. But regarding space colonization, I do not think that uploading humans to computers is the only alternative to transporting human colonies in space ships. For instance, we could send facilities for producing nutrition, oxygen and artificial human wombs there, plus two tiny test tubes of undamaged egg and sperm cells. Of course, once synthetic biology gives us the ability to create cells ourselves, we can also upload the human (epi)genome to a storage medium and synthesize the DNA and zygotes on the new planet.

2
Linch
2y
Maybe we can do that eventually, but doesn't that just seem unnecessarily hard? 

Did anyone else use Google Pay? Didn't seem to incur fees for me.

2
Jon Wolverton
2y
No fees with Google Pay for me either, but they didn't give me that option until I'd used up my $25 referral credit.

Love to hear that there is important work being done in that area! Are there approaches to measure SWB as a function of "objective" well-being (OWB)? And what are their shortcomings? For instance, to me it feels that SWB could be a weighted sum of one's own OWB (x), the recent change in OWB, and how one's own OWB compares to the OWB of others (x_i), roughly:

SWB = w0·x + w1·Δx + w2·(x − mean(x_i))

The weights are probably person-specific parameters. Persons with low w1 and w2 might be resilient, persons with high w2 might like status symbols, etc.
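A minimal sketch of what such a weighted-sum model could look like, assuming a simple linear form; the weights and numbers here are made up for illustration:

```python
# Sketch of SWB as a weighted sum of one's own objective well-being (OWB)
# level x, its recent change dx, and the comparison to others' OWB.
# All weights and values are hypothetical.
def swb(x, dx, others, w0=1.0, w1=0.5, w2=0.5):
    x_bar = sum(others) / len(others)  # mean OWB of the comparison group
    return w0 * x + w1 * dx + w2 * (x - x_bar)

# A "resilient" person (low w1 and w2) barely reacts to a recent decline
# or to being worse off than the comparison group...
print(swb(5, -2, [8, 9], w1=0.1, w2=0.1))
# ...while a "status-driven" person (high w2) is hit much harder by the
# exact same objective situation.
print(swb(5, -2, [8, 9], w1=0.1, w2=2.0))
```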

4
MichaelPlant
3y
I'm not sure exactly what you mean by "objective well-being". Here are two options.

One thing you might have in mind is that well-being is constituted by something subjective, e.g. happiness or life satisfaction, but you then wonder how objective life circumstances (health, wealth, relationship status, etc.), positional concerns, etc. contribute to that subjective thing. In this case, health etc. are determinants of well-being, not actually well-being itself. This approach is pretty much exactly what the SWB literature does: you see how the right-hand-side variables, many of which are objective, relate to the left-hand-side subjective one. I'm not sure what the shortcomings of this approach are in general - if you think well-being is subjective, this is just the sort of analysis you would want to undertake.

An alternative thing you might mean is that well-being is properly constituted (at least in part) by something objective. One might adopt an objective list theory of well-being. If one had this view, your question would be about how well-being, which is objective, relates to how people feel about their well-being. It's not clear what the purpose of this project would be: if you already know what well-being is, and you think it's something objective, why would you care how having well-being causes people to feel about their lives?

So, I assume you mean the former!

OK, I realised the flaw in my argumentation. If I have 1000 GBP to give away, I could either 'walk' 1000 GBP in the direction of charity x or 1000 GBP in the direction of charity y, but only sqrt(x^2 + y^2) in a combination of x and y, e.g. along the maximal gradient. The optimal allocation (x, y) of money is what maximises the scalar product of the gradient (dU/dx, dU/dy) with (x, y) under the restriction that x + y = 1000. If dU/dx = dU/dy, a 50/50 allocation is as good as an allocation of all money to the most effective charity. Otherwise, giving all money to the most effective charity maximises utility. Sorry for the confusion and thanks for the discussion.
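A quick numerical check of this conclusion, with hypothetical constant marginal utilities (not figures from the discussion):

```python
# Hypothetical constant marginal utilities (utility per GBP) for two charities.
dU_dx, dU_dy = 0.05, 0.03
budget = 1000

# Utility of giving x to charity X and (budget - x) to charity Y.
def utility(x):
    return dU_dx * x + dU_dy * (budget - x)

# Check every whole-pound split. With constant (linear) marginal utilities,
# the maximum sits at a corner: everything to the charity with the larger dU.
best = max(range(budget + 1), key=utility)
print(best)  # 1000 -> give the whole budget to charity X
```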

No, I am not trying to account for uncertainty.

But look for instance at this or this picture and assume it shows utility z as a function of the budgets of two charities x and y. For almost every point (x, y), the steepest slope is neither in direction (0, 1) nor in direction (1, 0), but in a combination of both directions. In other words, to optimize utility, I should give part of my money to charity x and part to charity y.
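This intuition does hold when each charity has diminishing returns. A small sketch with a made-up concave utility (my own illustrative choice, not from the linked pictures), where an interior split comes out best:

```python
import math

# Made-up concave utility: each charity has diminishing marginal returns.
def U(x, y):
    return math.sqrt(x) + math.sqrt(y)

budget = 1000
# Evaluate every whole-pound split of the budget.
best_x = max(range(budget + 1), key=lambda x: U(x, budget - x))
print(best_x, budget - best_x)  # 500 500 -> an interior split is optimal here
```

With diminishing returns the gradient argument favours splitting; the later realisation in this thread applies to the case of constant marginal utilities, where a corner solution wins.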


Thank you both for your answers. I am really just thinking of marginal utility, as I am just a student with a limited budget. I just do not think that the vector that maximizes marginal utility points in the direction of the most efficient charity (which I thought was true a few weeks ago). Now I think it should point in the direction of the gradient of U(x1, ..., xn), the utility as a function of the budgets of charities x1 to xn. That is, it points in the direction of many charities, and I should split my spending accordingly between the charities to maximise the benefit (assuming negligible overhead costs of splitting).

2
Aaron Gertler
5y
Why do you think this? Are you trying to account for uncertainty about what the most efficient charity is? Otherwise, I don't understand this particular argument for "multiple charities" over "one charity".