This is a very helpful post. I'm surprised the events are so expensive, but the breakdown of costs and the explanations make sense.
That said, this makes me much more skeptical about the value of EAG given the potential alternative uses of the funds - even just in terms of other types of events.
As suggested by Ozzie, I'd definitely like to see a comparison with the potential value of smaller events, as well as experimentation.
Spending $2k per person might be good value, but I think we could do better. Perhaps there is an analogy with cash transfers as a benchmark - what event could someone put on if they were just given that money?
For example, with $2k, I expect I could hire a pub in central London for an evening (or maybe a whole day), with perhaps around 100 people attending. So that's $20 per person, or 1% of the cost of EAG. Would they get as much benefit from attending my event as attending EAG? No, but I'd bet they'd get more than 1% of the benefit.
Now what if 10 or 20 people pooled their $2k per person?
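To make the comparison concrete, here's a rough back-of-the-envelope sketch. The venue cost, attendee numbers and pooling scenarios are my own illustrative assumptions, not figures from the post:

```python
# Back-of-the-envelope comparison of per-attendee cost (illustrative numbers only).
EAG_COST_PER_PERSON = 2_000  # USD, roughly the figure from the post

def cost_per_attendee(total_budget, attendees):
    """Cost per person for a self-organised event."""
    return total_budget / attendees

# One evening's pub hire in central London: ~$2k, ~100 attendees (my guesses).
pub = cost_per_attendee(2_000, 100)
print(f"Pub meetup: ${pub:.0f}/person ({pub / EAG_COST_PER_PERSON:.0%} of EAG's cost)")

# If 10 or 20 people pooled their hypothetical $2k 'allocation', the total
# budget scales up while the per-attendee cost can stay low if attendance scales too.
for poolers in (10, 20):
    budget = poolers * EAG_COST_PER_PERSON
    print(f"{poolers} people pooling -> ${budget:,} total budget")
```

On these (made-up) numbers the pub event comes out at $20/person, i.e. 1% of the EAG cost, which is where the "more than 1% of the benefit" comparison comes from.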
Nice study, thanks for sharing!
"Environmental and health concerns were found to be of increasing importance among those adopting their diet more recently, which may reflect increasing awareness of and advocacy regarding possible health benefits of plant-based diets, as well as increasing concerns over anthropogenic climate change."
Could this also be due to survivorship bias? If environmental/health motivations are associated with giving up being veg*n sooner than animal welfare motivations, then in cohorts that adopted their diet longer ago, relatively more of the environmental/health motivated people would have dropped out compared to more recent cohorts.
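A toy simulation of the mechanism I have in mind. The drop-out rates here are made-up numbers purely to illustrate how differential attrition could produce the observed cohort pattern, even if the mix of motivations at adoption never changed:

```python
import random

# Toy model: each person adopts a veg*n diet for one primary motivation.
# Environmental/health-motivated people are assumed (purely for illustration)
# to lapse at a higher annual rate than animal-welfare-motivated people.
ANNUAL_LAPSE_RATE = {"animal welfare": 0.05, "environment/health": 0.15}

def surviving_share_by_motivation(years_since_adoption, n=10_000):
    """Share of still-veg*n respondents citing each motivation, by cohort age."""
    counts = {m: 0 for m in ANNUAL_LAPSE_RATE}
    for motivation, rate in ANNUAL_LAPSE_RATE.items():
        for _ in range(n):  # equal numbers adopt for each motivation
            if all(random.random() > rate for _ in range(years_since_adoption)):
                counts[motivation] += 1
    total = sum(counts.values())
    return {m: round(c / total, 2) for m, c in counts.items()}

# The long-ago cohort looks much more animal-welfare motivated than the recent
# one, purely because more of the environment/health people have dropped out.
print("Adopted 1 year ago: ", surviving_share_by_motivation(1))
print("Adopted 10 years ago:", surviving_share_by_motivation(10))
```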
I'd also note that hundreds of billions of dollars are spent on biomedical research generally each year. While most of this isn't targeted at anti-aging specifically, there will be a fair amount of spillover that benefits anti-aging research, in terms of increased understanding of genes, proteins, cell biology etc.
Thanks for sharing!
"Our funding bar went up at the end of 2022, in response to a decrease in the overall funding available to long-term future-focused projects."
Is there anywhere that describes what the funding bar is and how you decided on it? This seems relevant to several recent discussions on the Forum, e.g. this, this, and this.
Sounds like he'd be good to have at the debate! But it seems very unlikely he'll make the first one in a few weeks' time. There seem to be 3 requirements to qualify for the first debate:
It sounds like he needs a big boost from somewhere - maybe if, e.g., Elon Musk were to tweet about him and endorse his position on AI, that would get him there (and convince him to change his mind re 1, though I'm not sure briefly speaking about AI alignment justifies this)?!
Re 2 - ah yeah, I was assuming that at least one alien civilisation would aim to 'technologize the Local Supercluster' if humans didn't. If they all just decided to stick to their own solar system or not spread sentience/digital minds, then of course that would be a loss of experiences.
Thanks for clarifying 1 and 3!
Interesting read, and a tricky topic! A few thoughts:
Assuming it could be implemented, I definitely think your approach would help prevent the imposition of serious harms.
I still intuitively think the AI could just get stuck though, given the range of contradictory views even in fairly mainstream moral and political philosophy. It would need to have a process for making decisions under moral uncertainty, which might entail putting additional weight on the views of certain philosophers. But because this is (as far as I know) a very recent area of ethics, what little existing work there is could be quite badly flawed.
I think the purpose of the 'overall karma' button on comments should be changed.
Currently, it asks 'how much do you like this overall?'. I think this should be amended to something like 'how much do you think this is useful or important?'.
This is because I think there is still too strong a correlation between 'liking' a comment and 'agreeing' with it.
For example, in the recent post about Nonlinear, many people are downvoting comments by Kat and Emerson. Given that the post concerns their organisation, their responses should not be at risk of being hidden - their comments should be upvoted because it's useful/important for their responses to be visible, regardless of whether someone likes/agrees with the content.