EgilElenius

7 karma · Joined Apr 2018

Comments (6)

Some ideas going through my mind, not too well refined:

Reading this post, I came to think of this old joke:

A police officer sees a drunken man intently searching the ground near a lamppost and asks him the goal of his quest. The inebriate replies that he is looking for his car keys, and the officer helps for a few minutes without success. Then he asks whether the man is certain that he dropped the keys near the lamppost.
“No,” is the reply, “I lost the keys somewhere across the street.” “Why look here?” asks the surprised and irritated officer. “The light is much better here,” replies the man.

So, how could this be applied to cause prioritisation? For one thing, I think the area where the keys could be lost — the space of possible causes — is quite large.


My second thought is that "How do we prioritise what to do, to achieve the most good?" sounds to me partly like an existential question, a bit like "What is the meaning of life?" Perhaps this goes back to the dropped keys, with current global priorities research focusing on the visible area of what can be done concretely. Trying to answer the question of global priorities without a grand narrative of what the globe is to become seems incomplete to me.


Insofar as the EA movement wants to answer the concrete question of how to create change according to one's values, instead of discussing values as such, I would expect the different branches to remain interested in their respective agendas and not in how to compare them to one another. That would be counterproductive.


Also, despite EA's philosophical roots, I think perhaps not enough different parts of philosophy are being used. For example, if value and meaning are created by ourselves, what implications does that have for global priorities research? Has the subconscious been considered when it comes to increasing well-being? To me, the EA movement seems to sit within a humanistic, individualistic worldview, and if a new grand narrative were to come, like that outlined in Homo Deus or Digital Libido, and the EA movement stayed in the old paradigm, it could very well end up looking to outsiders as if its primary question of concern were akin to how many angels can dance on the head of a pin.

Has anyone looked into the implications of deepfakes? To me, it seems like a highly impactful technology that will obstruct the use of the internet as an information-gathering tool, among other things.

First of all, thank you for this post! A well-written article on a topic I'm surprised I haven't thought about myself.


My first thought is that the post does well in exploring BCI from a risk perspective, but that more perspectives are needed. I think this quote is a good place to start:

" In a scenario where all citizens (including the dictator) are implanted with BCIs, and their emotions are rewired for loyalty to the state, resistance to the dictatorship, or any end to dictatorship would be incredibly unlikely. "

That is, if everybody is loyal to this state, is it really to be understood as a dictatorship? I don't see today's Western democracies, based on free will, as the last and only legitimate form of government, and I think we need to adjust to the idea of the governments of tomorrow being unlike ours based on individualism and free will. I've yet to read about it more in depth myself, but what Bard and Söderqvist call the sensocracy could be worth looking at.

Quick thought: the human mind evolved for a hunter-gatherer society, and since then humanity has had many different kinds of societies in which different ways of thinking are favourable. Perhaps some sort of cognitive enhancement that makes it easier for the human mind to see what situation it is in, and quickly readjust accordingly, could increase its cognitive performance.

A quickly written idea that could need some time to develop:

Something I've thought about myself, which isn't quite a Task Y but is still similar, is a widely accepted framework for what aspiring EAs can do, with "levels" of depth depending on how much they are willing to commit.

If the two dominant ideologies of Western society to which EA has to relate are consumerism and environmentalism/social responsibility, then to me the primary means would be to spend money, and the goal would be to have an environmentally/socially non-negative impact on the world. I get the impression that the moral message we receive is that we should have as little impact as possible in our consumption, or buy products that are less harmful.

I would like to explore the idea of indulgence: to create a (relatively easy?) framework telling people that if they live in a way that uses x units of resources and causes y units of suffering, they should give a corresponding amount of money to a given list of organisations.
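To make the shape of the idea concrete, here is a minimal sketch in Python. All of the rates and figures are placeholders I've made up purely for illustration, not actual estimates of anything:

```python
# Hypothetical indulgence calculator. All rates below are invented
# placeholders, purely to illustrate the shape of the framework.

RATE_PER_RESOURCE_UNIT = 2.0   # e.g. EUR per unit of natural resources used
RATE_PER_SUFFERING_UNIT = 5.0  # e.g. EUR per unit of suffering caused

def suggested_donation(resource_units: float, suffering_units: float) -> float:
    """Map a lifestyle's footprint (x resource units, y suffering units)
    to a suggested donation amount."""
    return (resource_units * RATE_PER_RESOURCE_UNIT
            + suffering_units * RATE_PER_SUFFERING_UNIT)

# Example: a lifestyle using 10 resource units and causing 3 suffering units
# would correspond to a suggested donation of 35.0 to the listed organisations.
print(suggested_donation(10, 3))
```

The point of such a sketch would be its simplicity: two numbers in, one number out, which fits the "levels of depth" idea above.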

Something to stress in the case of natural resources is that this is not the same as the resources never having been spent; rather, at least insofar as they will be spent anyway, the payment would be a positive act of enabling an at least comparable amount to come in their place.

I personally don't believe EA, because of its intellectually ambitious commitments, has good conditions to spread to the public at large, whereas I believe a relatively straightforward framework for how to act/spend would have an easier time.

That said, somebody else might already have written about this?

[European Union Election 2019]

In one year, the election to the European Parliament will be held. The differences between parties can be unclear even at a national level, and so far I haven't seen any attempts to systematically compare how the party groups have voted and what they claim to stand for.

To make it easier to cast a well-informed vote, I would like to gather interested EAs and make a handy guide on which group to vote for depending on one's values. To make it more useful, I'd suggest that it be written not only with EAs in mind, but also other voters.
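As a rough illustration of how the guide's core logic could work, here is a minimal Python sketch that ranks party groups by how well they match a voter's stated values. The group names, issues, and position scores are invented placeholders, not actual voting-record data:

```python
# Hypothetical value-matching sketch. Party groups, issues and position
# scores are invented placeholders, not real voting-record data.

# How strongly each (fictional) party group's voting record aligns with
# each issue, on a -1..1 scale.
group_positions = {
    "Group A": {"climate": 0.8, "animal welfare": 0.2, "aid": 0.5},
    "Group B": {"climate": 0.3, "animal welfare": 0.7, "aid": 0.1},
}

# How much an individual voter cares about each issue (weights sum to 1).
voter_weights = {"climate": 0.5, "animal welfare": 0.3, "aid": 0.2}

def alignment(positions: dict, weights: dict) -> float:
    """Weighted sum of a group's issue positions, weighted by the voter's values."""
    return sum(weights[issue] * positions.get(issue, 0.0) for issue in weights)

# Rank groups by alignment with this voter's values.
for group, pos in sorted(group_positions.items(),
                         key=lambda kv: -alignment(kv[1], voter_weights)):
    print(f"{group}: {alignment(pos, voter_weights):.2f}")
```

The hard part, of course, would be filling in the position scores from actual voting records rather than the made-up numbers above.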