All of Arran McCutcheon's Comments + Replies

That's right, he said 'It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road, and so, goodbye anthill.'

Adaptation: Assuming that advanced AI would preserve humanity is the same as an ant colony assuming that real estate developers would preserve their nest. Those developers don’t hate ants, they just want to use that patch of ground for something else (I may have seen this ant analogy somewhere else but can't remember where).

2
Peter S. Park
2y
I think Elon Musk said it in a documentary about AI risks. (Is this correct?)

If the capabilities of nuclear technology and biotechnology advance faster than their respective safety protocols, the world faces an elevated risk from those technologies. Likewise, increases in AI capabilities must be accompanied by an increased focus on ensuring the safety of AI systems.

Human history can be summarised as a series of events in which we slowly and painfully learned from our mistakes (and in many cases we're still learning). We rarely get things right the first time. The alignment problem may not afford the opportunity to learn from our mistakes. If we develop misaligned AGI, we will go extinct, or at the very least cede control of our destiny and miss out on the type of future that most people want to see.

GiveWell for AI alignment

Artificial intelligence

When choosing where to donate to have the largest positive impact on AI alignment, the current best resource appears to be Larks' annual literature review and charity comparison on the EA/LW forums. Those posts are very high-quality, but they're only published once a year and ultimately reflect the views of one person. A frequently updated donation recommendation resource, contributed to by various experts, would improve the volume and coordination of donations to AI alignment organisations and projects.

T... (read more)

Website for coordinating independent donors and applicants for funding

Empowering exceptional people, effective altruism

At EAG London 2021, many attendees indicated in their profiles that they were looking for donation opportunities. Donation autonomy is important to many prospective donors, and increasing the range of potential funding sources is important to those applying for funding. A curated website that allows applicants to post requests for funding, and allows potential donors to browse those requests and offer full or partial funding, seems like an effective solution.

Research scholarships / funding for self-study 

Empowering exceptional people

The value of a full-time researcher in some of the most impactful cause areas has been estimated as being between several hundred thousand to several million dollars per year, and research progress is now seen by most as the largest bottleneck to improving the odds of good outcomes in these areas. Widespread provision of scholarships / funding for self-study could enable far more potential researchers to gain the necessary experience, knowledge, skills and qualifications to ma... (read more)

Thanks Khorton for the feedback and additional thoughts.

I think the impact of cold emails is normally neutral; it would have to be a really poorly written or antagonising email to make the reader actively go and do the opposite of what the email suggests! I guess neutral also qualifies as 'not good'.

But it seems like people with better avenues of contact to DC have been considering contacting him anyway, through cold means or otherwise, so that’s great.

Exactly, he has written posts about those topics, and about 'effective action', predictions and so on. And there is this article from 2016 which claims 'he is an advocate of effective altruism', although it then says 'his argument is mothball the department (DFID)', which I'm fairly sure most EAs would disagree with.

But as he's also written about a huge number of other things, day-to-day distractions are apparently the rule rather than the exception in policy roles, and value drift is always possible, it would be go... (read more)

Although the blog post is seeking applications for various roles, the email address to send applications to is ‘ideas for number 10 at gmail dot com’.

If someone/some people took that address literally and sent an email outlining some relatively non-controversial EA-aligned ideas (e.g. collaboration with other governments on near-term AI-induced cyber security threats, marginal reduction of risks from AI arms races, pandemics and nuclear weapons, enhanced post-Brexit animal welfare laws, maintenance of the UK’s foreign aid commitment an... (read more)

9
Kirsten
4y
I don't think cold emailing is usually a good idea. I've sent you a private message with some more thoughts.

Thanks for sharing. More research like yours and WAI's is definitely needed regarding which species, and which stages of development within species, are likely to experience suffering, and how we should weigh the importance of moderate/extreme suffering/pleasure.

As far as I'm aware, the lives of invertebrates are considered likely to be net negative due to r-selection (most, if not all, species reproduce by producing a large number of offspring, most of whom die at a very young age) and short lifespans in general, which tend to end in painful deaths by dehydration, being eaten alive, etc. (the extreme suffering involved in this type of death is thought to typically outweigh any positive aspects of the individual's short life).

I don’t know of any explicit calculations apart from Charity Entrepreneurship’s weighted animal welf

... (read more)
8
utilitarian01
5y
I don't think it's enough to say they're net negative because of r-selection, though. Insect larvae probably have around two orders of magnitude fewer neurons, and they might not even be conscious in the first place. Also, I saw those welfare reports but really didn't like them, because they left out the duration of suffering, which is a huge factor in how bad something is. A broiler chicken experiencing a moderate amount of stress for its entire life could be much, much worse than it being boiled alive for a few seconds. This is my welfare spreadsheet, but I didn't intend to share it, so if you want citations for the numbers I can try to link them.
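The duration point can be illustrated with a toy calculation. All numbers below are made up purely for illustration (they are not welfare estimates from any report or spreadsheet); the model simply assumes total suffering scales with intensity times duration:

```python
# Toy model: total suffering ~ intensity x duration.
# Intensity is on an arbitrary 0-10 scale; duration is in seconds.
# All numbers are hypothetical, chosen only to illustrate the point.

def total_suffering(intensity, duration_seconds):
    """Crude welfare model: suffering scales linearly with both factors."""
    return intensity * duration_seconds

# Hypothetical comparison:
chronic = total_suffering(intensity=3, duration_seconds=42 * 24 * 3600)  # ~42 days of moderate stress
acute = total_suffering(intensity=10, duration_seconds=10)               # ~10 seconds of extreme pain

# Under this (very simplistic) linear model, chronic stress dominates
# the brief acute episode by orders of magnitude.
print(chronic / acute)
```

A linear model is obviously a simplification (extreme pain may deserve superlinear weight), but it shows why ignoring duration can flip a welfare comparison entirely.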

In his 2007 paper ‘Protagonistic Pleiotropy’, de Grey comments on ‘...the concept of antagonistic pleiotropy (AP) proposed by Williams (Williams, 1957) and now recognised to play a widespread role in aging’. He also mentioned it in an interview posted on fightaging.org last year.

So presumably antagonistic pleiotropy is already accounted for in the SENS Foundation’s ongoing work in developing repair technology, even if it’s not defined as one of the seven major classes of cellular and molecular damage in its own right.

2
John
4y
Everything that works in a machine is antagonistic pleiotropy. Car brakes stop the car from crashing, but using the brakes wears them out (damages them). Machines damage themselves through their own normal operation, also known as wear and tear. To keep a machine functioning forever, you just need to repair it at a rate faster than the damage is laid down. Damage is defined as structural change to the machine that impairs its function. A machine can only tolerate a finite amount of structural change; too much structural change stops the machine functioning, leading to death.