Matrice Jacobine🔸🏳️‍⚧️

Student in fundamental and applied mathematics
772 karma · Joined · Pursuing a graduate degree (e.g. Master's) · France

Bio

Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist

Posts
41


Comments
118

Topic contributions
1

MacAskill:

Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common language to refer to this cluster of views was just to say something like ‘people interested in x-risk reduction’. There are a few reasons why this terminology isn’t ideal [...]

For these reasons, and with Toby Ord’s in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I proposed the term ‘longtermism’, with the following definition:

Yes. One of the Four Focus Areas of Effective Altruism (2013) was "The Long-Term Future", and "Far future-focused EAs" appear on the map of Bay Area memespace (2013). This social and ideological cluster existed long before this exact name was coined to refer to it.

I'm not sure what StopAI meant by Mr. Kirchner not having -- to its knowledge -- "yet crossed a line [he] can't come back from," but to be clear: his time working on AI issues in any capacity has to be over.

This unfortunately does not seem to be StopAI's stance.

Ilya Sutskever today on X:

One point I made that didn’t come across:

- Scaling the current thing will keep leading to improvements. In particular, it won’t stall.
- But something important will continue to be missing.

Social media recommendation algorithms are typically based on machine learning and generally fall under the purview of near-term AI ethics.

Sorry, I don't know where I got that R from.

I'm giving a ∆ to this overall, but I should add that conservative AI policy think tanks like FAI are probably overall accelerating the AI race, which should be a worry for both AI x-risk EAs and near-term AI ethicists.

You can formally mathematically prove a programmable calculator correct. You just can't formally mathematically prove every possible programmable calculator correct. On the other hand, if you can't mathematically prove a given programmable calculator correct, it might be a sign that your design is a horrible sludge. On the other other hand, deep-learned neural networks are definitionally horrible sludge.
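To illustrate the first half of that claim, here is a minimal sketch in Lean 4 (all names here are my own illustrative choices, not from any particular verified-calculator project): a tiny expression language for a calculator, its evaluator, and a machine-checked proof of a correctness property about that specific evaluator.

```lean
-- A tiny calculator language: numeric literals and addition.
inductive Expr where
  | lit : Nat → Expr
  | add : Expr → Expr → Expr

-- The calculator's evaluator.
def eval : Expr → Nat
  | Expr.lit n   => n
  | Expr.add a b => eval a + eval b

-- A formally proved property of this particular calculator:
-- swapping the operands of `add` never changes the result.
theorem eval_add_comm (a b : Expr) :
    eval (Expr.add a b) = eval (Expr.add b a) := by
  simp [eval, Nat.add_comm]
```

This is exactly the "given programmable calculator" case: the proof is about one fixed `eval`. The impossibility in the second half is a statement about all programs at once (Rice's theorem: no algorithm decides a non-trivial semantic property for every possible program), which does not prevent verifying any particular well-designed one.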
