
Paul_Lang

41 karmaJoined May 2019


    I am wondering if 80k should publicly speak of a PINT framework instead of the INT framework (with P for personal fit). I get the impression that the INT framework contributes to a reputation hierarchy of cause areas to work on, and many (young) EAs tend to over-emphasise reputation relative to personal fit, which basically sets them up for failure a couple of years down the line. Putting the "P" in there might help to avoid this.

    Is there any evidence that translation efforts are effective at reaching people who do not have English as their first language? My impression is that native German speakers under 35 with a university degree understand written English perfectly well, although some prefer German. Listening and especially speaking can be a bit more challenging. As a rule of thumb, the younger the person, the better their English (due to YouTube, Netflix, etc.).

    I suggest exploiting Facebook's Dating App instead, roughly like so (still needs some testing; dm'd you, Affective Altruist): https://docs.google.com/document/d/1VTRO12Nsl3H9P7Zpx3mcyeQ1HWNapxkUlaf45xS5OcU/edit?usp=sharing

    Good to hear that there are EAs working on that within governments.

    Thanks for sharing your insights Mako! After reading your response and the IEEE Spectrum article you mentioned, I am much more optimistic that the metaverse can/will move in the right direction. Is there anything that could be done (by governments, companies, NGOs, the general public, or whatever player) to make this even more likely?

    I also liked your example of Twitter, where addictiveness was not designed into the system but happened accidentally. Accidents usually prompt investigations to improve regulations, for instance in the aircraft industry. Do you think there are any concrete key learnings from the Twitter case about how to prevent similar accidents in the future of the internet or metaverse? If so, could or should some of these be baked into better designs? And are current incentives aligned with this, or would it require some governmental regulation (since you are worried about liberalisation)?

    I still believe that Meta is a major player in the market. And while I do agree that they have no direct interest in destroying democracy or creating an unliveable world, I think they act in line with Milton Friedman's view and would simply try to maximise their profits. I am not sure there is anything wrong with that in principle, as long as the rules of the game ensure that maximising profits aligns well with overall utility. In the past, I don't think the rules of the social media game aligned well with overall utility. And I am not sure that the need for, and support of, open standards by players like Meta alone is sufficient to align profit maximisation with overall utility in the metaverse. If this assessment is correct, it would make sense to brainstorm ideas for such an alignment as the metaverse develops.

    Btw. thanks also for sharing your LW article on Webs of Trust (on my reading list) and your thoughts on RoamResearch (pm’d you with a question on Roam vs. Obsidian).

    To me that sounds like a project that could be listed on https://www.eawork.club/ . I once listed a request to translate the German Wikipedia article on Bovine Meat and Milk Factors into English because I did not have the rights to do it myself. A day later somebody had done it. And in the meantime somebody has apparently translated it into Chinese.

    Regarding media: to keep track of media coverage and potentially react accordingly, it seems that https://www.google.com/alerts can be helpful.

    I agree with your statement that "The message of the post is that specific impact investments can pass a high effectiveness bar".

    But when you say >>I think the message of this post isn't that compatible with general claims like "investing is doing good, but donating is doing more good".<<, I think I must have been misled by the decision matrix. To me it suggested exactly this comparison between investment and donation, while being unable to resolve a difference between the columns "Pass & Invest to Give" and "Pass & Give now" (and a hypothetical column "Pass & Invest to keep the money for yourself", presumably with all-zero rows): all three would result in zero total portfolio return. The differences between these three options would only become visible if the consumer wallet were included and "Pass & Invest to Give" created impact through giving, as the "Pass & Give now" column does.

    Anyway, I now understand that comparing investing and donating was never the message of the post, so all good.

    Thanks for the response. My issue was just that the money flow from the customer to the investor was counted as positive for the investor, but not as negative for the customer. I see the argument that the customers are reasonably well-off non-EAs whereas the investor is an EA, but I am not sure it can be used to justify the asymmetry in the accounting.

    Perhaps it would make sense to model an EA investor as only 10% altruistic and 90% selfish (somewhat in line with the 10% GWWC pledge)? The conclusion of that would be that investing is doing good, but donating is doing more good.
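    To make the accounting point concrete, here is a minimal sketch; the welfare weights are purely illustrative assumptions, not numbers from the original post. If every money flow is counted on both sides, a transfer from customer to investor nets to zero under symmetric weights, and only becomes positive if the customer's marginal dollar is weighted lower than the investor's:

    ```python
    def net_impact(flow, w_investor_gain=1.0, w_customer_loss=1.0):
        """Count a money flow from customer to investor on BOTH sides.

        flow: amount transferred from customer to investor.
        The weights are hypothetical welfare weights for each party's
        marginal dollar (illustrative, not taken from the post).
        """
        return flow * w_investor_gain - flow * w_customer_loss

    # Symmetric accounting: the transfer nets to zero.
    print(net_impact(100.0))  # 0.0

    # The asymmetry argument: well-off non-EA customers get a low weight,
    # an EA investor who gives gets a high weight, so the flow nets positive.
    print(net_impact(100.0, w_investor_gain=1.0, w_customer_loss=0.2))  # 80.0
    ```

    Under this toy model, the accounting asymmetry is not a bookkeeping error but an implicit claim about welfare weights, which is what I would want made explicit.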

    I would have thought that this is orders of magnitude easier, because (with the exception of my last sentence) it uses existing technology (although, AFAIK, the artificial ecosystems we have tried to create on Earth failed after some time, so maybe a bit more fine-tuning is needed). By contrast, we still seem far from understanding humans or uploading them to computers. But in the end, perhaps we would not want to colonise space with a rocket-like structure, but with the lightest thing we can possibly build, due to the relativistic mass increase. Who knows. The lightweight argument would certainly work in favour of the upload-to-computer solution.
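    On the relativistic point: the kinetic energy needed to reach a given fraction of c scales linearly with rest mass, which is the whole case for sending the lightest payload possible. A rough sketch (the speeds and masses are illustrative, not from any proposal):

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def lorentz_factor(v_frac: float) -> float:
        """Lorentz factor gamma for a speed given as a fraction of c."""
        return 1.0 / math.sqrt(1.0 - v_frac**2)

    def kinetic_energy_joules(mass_kg: float, v_frac: float) -> float:
        """Relativistic kinetic energy: (gamma - 1) * m * c^2."""
        return (lorentz_factor(v_frac) - 1.0) * mass_kg * C**2

    # Energy to accelerate 1 kg vs 1 g to half the speed of light:
    print(kinetic_energy_joules(1.0, 0.5))    # ≈ 1.39e16 J
    print(kinetic_energy_joules(0.001, 0.5))  # ≈ 1.39e13 J, 1000x cheaper
    ```

    Since the energy bill drops in direct proportion to the payload mass, a gram-scale uploaded mind beats a tonne-scale life-support system by many orders of magnitude.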
