790 karma · Joined Oct 2014


I'm just a normal, functioning member of the human race, and there's no way anyone can prove otherwise


I think the purpose of the 'overall karma' button on comments should be changed. 

Currently, it asks 'how much do you like this overall?'. I think this should be amended to something like 'how much do you think this is useful or important?'. 

This is because I think the correlation between 'liking' a comment and 'agreeing' with it is still too strong.

For example, in the recent post about Nonlinear, many people are downvoting comments by Kat and Emerson. Given that the post concerns their organisation, their responses should not be at risk of being hidden - their comments should be upvoted because it's useful/important to be able to read their responses, regardless of whether someone likes/agrees with the content.

This is a very helpful post. I'm surprised the events are so expensive, but the breakdown of costs and the explanations make sense.

That said, this makes me much more skeptical about the value of EAG given the alternative potential uses of funds - even just in terms of other types of events. 

As suggested by Ozzie, I'd definitely like to see a comparison with the potential value of smaller events, as well as experimentation. 

Spending $2k per person might be good value, but I think we could do better. Perhaps there is an analogy with cash transfers as a benchmark - what event could someone put on if they were just given that money?

For example, with $2k, I expect I could hire a pub in central London for an evening (or maybe a whole day), with perhaps around 100 people attending. So that's $20 per person, or 1% of the cost of EAG. Would they get as much benefit from attending my event as attending EAG? No, but I'd bet they'd get more than 1% of the benefit. 
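The back-of-envelope comparison above can be sketched as a few lines of Python. All the figures here are the illustrative assumptions from my comment (a $2k pub hire, 100 attendees), not actual EAG accounting:

```python
# Back-of-envelope sketch of the pub-event comparison.
# All figures are illustrative assumptions, not real cost data.

EAG_COST_PER_PERSON = 2_000   # dollars, the per-person figure from the post
PUB_HIRE_COST = 2_000         # assumed cost to hire a pub for an evening
PUB_ATTENDEES = 100           # assumed attendance

pub_cost_per_person = PUB_HIRE_COST / PUB_ATTENDEES
relative_cost = pub_cost_per_person / EAG_COST_PER_PERSON

print(f"Pub event: ${pub_cost_per_person:.0f} per person")  # $20 per person
print(f"Relative cost: {relative_cost:.1%} of EAG")         # 1.0% of EAG

# The bet: if attendees get more than 1% of the benefit of EAG,
# the pub event delivers more value per dollar spent.
```

The interesting question is then purely about the benefit side: how steeply does per-person value fall when you move from a curated multi-day conference to a cheap informal meetup?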

Now what if 10 or 20 people pooled their $2k per person? 

Nice study, thanks for sharing!

Environmental and health concerns were found to be of increasing importance among those adopting their diet more recently, which may reflect increasing awareness of and advocacy regarding possible health benefits of plant-based diets, as well as increasing concerns over anthropogenic climate change

Could this also be due to survivorship bias? If environmental/health motivations are associated with giving up being veg*n sooner than animal welfare motivations, then in cohorts that adopted their diet longer ago, relatively more of the environmental/health motivated people would have dropped out compared to more recent cohorts. 

It costs time to read it! Do you happen to know of a 10 minute summary of the key points? 

I'd also note that hundreds of billions of dollars are spent on biomedical research generally each year. While most of this isn't targeted at anti-aging specifically, there will be a fair amount of spillover that benefits anti-aging research, in terms of increased understanding of genes, proteins, cell biology etc.

Thanks for sharing!

Our funding bar went up at the end of 2022, in response to a decrease in the overall funding available to long-term future-focused projects

Is there anywhere that describes what the funding bar is and how you decided on it? This seems relevant to several recent discussions on the Forum, e.g. this, this, and this.

Sounds like he'd be good to have at the debate! But it seems very unlikely he'll make the first one in a few weeks' time. There seem to be three requirements to qualify for the first debate:

  1. Pledge support for the eventual nominee. Hurd has said he won't do this.
  2. (from 538) "they must earn 1 percent support in three national polls, or in two national polls and two polls from the first four states voting in the GOP primary, each coming from separate states, based on polls recognized by the RNC and conducted in July and August before the debate."
    1. "As of Sunday [July 23rd], he had only one qualifying poll to his name..." 
  3. (from 538) "Meanwhile, a candidate must also attain at least 40,000 unique donors, with at least 200 contributors from 20 or more states and/or territories."
  1. "...and said last week that he was about one-fifth of the way to 40,000 contributors"

It sounds like he needs a big boost from somewhere. Maybe if, say, Elon Musk were to tweet about him and endorse his position on AI, that would get him there (and convince him to change his mind re 1, though I'm not sure briefly speaking about AI alignment justifies this)?!

Re 2 - ah yeah, I was assuming that at least one alien civilisation would aim to 'technologize the Local Supercluster' if humans didn't. If they all just decided to stick to their own solar system or not spread sentience/digital minds, then of course that would be a loss of experiences.

Thanks for clarifying 1 and 3!

Interesting read, and a tricky topic! A few thoughts:

  1. What were the reasons for tentatively suggesting using the median estimate of the commenters, rather than being consistent with the SoGive neartermist threshold?
  2. One reason against using the very high-end of the range is the plausible existence of alien civilisations. If humanity goes extinct, but there are many other potential civilisations and we think they have similar moral value to humans, then preventing human extinction is less valuable.
    1. You could try using an adapted version of the Drake equation to estimate how many civilisations there might be (some of the parameters would have to be changed to take the different context into account, i.e. you're not just estimating civilisations that could currently communicate with us in the Milky Way, but the number there could ever be in the Local Supercluster).
  3. I'm still not entirely sure what the purpose of the threshold would be.
    1. The most obvious reason is to compare longtermist causes with neartermist ones, to understand the opportunity cost - in which case I think this threshold should be consistent with the other SoGive benchmarks/thresholds (i.e. what you did with your initial calculations).
      1. Indeed the lower end estimate (only valuing existing life) would be useful for donors who take a completely neartermist perspective, but who aren't set on supporting (e.g.) health and development charities
    2. If the aim is to be selective amongst longtermist causes so that you're not just funding all (or none) of them, then why not just donate to the most cost-effective causes (starting with the most cost-effective) until your funding runs out?
      1. I suppose this is where the giving now vs giving later point comes in. But in this case I'm not sure how you could try to set a threshold a priori
        1. It seems like you need some estimates of cost-effectiveness first. Then (e.g.) choose to fund the top x% of interventions in one year, and use this to inform the threshold in subsequent years. Depending on the apparent distribution of the initial cost-effectiveness estimates, you might decide 'actually, we think there are plenty of interventions out there that are better than all the ones we have seen so far, if only we search a little bit harder'
  4. Trying to incentivise more robust thinking around the cost-effectiveness of individual longtermist projects seems really valuable! I'd like to see more engagement by those working on such projects. Perhaps SoGive can help enable such engagement :)
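To make point 2.1 concrete, an adapted Drake-style estimate is just a product of factors. Every parameter value below is a placeholder I've made up for illustration, not a defended estimate; note also that, unlike the classic Drake equation, there are no rate or lifetime terms, since we care about the total number of civilisations that could ever arise in the Local Supercluster, not those currently communicating:

```python
# Hedged sketch of an adapted Drake-style estimate for the
# Local Supercluster. All parameter values are placeholders.

n_stars = 1e15      # rough star count in the Local Supercluster (assumed)
f_planets = 0.5     # fraction of stars with planetary systems (assumed)
n_habitable = 0.1   # habitable planets per planetary system (assumed)
f_life = 1e-3       # fraction of habitable planets that develop life (assumed)
f_civ = 1e-3        # fraction of those that produce a civilisation (assumed)

expected_civilisations = n_stars * f_planets * n_habitable * f_life * f_civ
print(f"Expected civilisations: {expected_civilisations:.0e}")
```

Even with deliberately conservative placeholder fractions, the huge star count can dominate, which is why this consideration pushes against the very high end of the threshold range.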

Assuming it could be implemented, I definitely think your approach would help prevent the imposition of serious harms. 

I still intuitively think the AI could just get stuck though, given the range of contradictory views even in fairly mainstream moral and political philosophy. It would need a process for making decisions under moral uncertainty, which might entail putting additional weight on the views of certain philosophers. But because this is (as far as I know) a very recent area of ethics, the only existing work could be quite badly flawed.
