Congratulations. Are you planning to upload recordings of the presentations? Where can I access the conference program?
This was a nice post. I hadn't thought about these selfishness concerns before, but I did think about possible dangers arising from aligned servant AI being used as a tool to improve military capabilities in general: a pretty damn risky scenario, in my view, and one that will hugely benefit whoever gets there first.
Here (https://thehumaneleague.org/animals) you'll find many articles on the subject. For example, this one: What really happens on a chicken farm.
He later abdicated the throne in 2014, ending the monarchy.
Not really. He abdicated in favor of his son, who is the present king of Spain. Ending the monarchy is an idea that never crossed his mind.
In case you'd prefer the EA Forum format, this post was also crossposted here some time ago: https://forum.effectivealtruism.org/posts/oRx3LeqFdxN2JTANJ/epistemic-legibility
Smatterings of Latin
I can't think of a single post where this is a serious issue. There may be exceptions I'm not aware of, but generalizing from them would be an exaggeration.
Sounds exotic, but once you've said the word ten times, you stop noticing it.
I believe this happens because, to my knowledge, German words ending in -ismus are only formed from proper names ('Marxismus') or foreign words (especially adjectives), that is, loanwords (Lehnwörter) like 'Liberalismus' or 'Föderalismus'. But I'm not a native speaker, so I can't really tell how "exotic" this neologism sounds.
Have you checked this page: https://forum.effectivealtruism.org/events? There are some meetups in Berkeley.
Great article! Another thing I just realized: I dislike the clock metaphor. It seems to suggest that we will eventually reach midnight, no matter what. Perhaps a time bomb (which can be deactivated) would be a better illustration.
My version tried to be an intuitive simplification of the core of Bostrom's paper. I actually can't identify the assumptions you mention. If you are right, I may have presupposed them while reading the paper, or my memory may be betraying me for the sake of making sense of it. Anyway, I really appreciate that you took the time to comment.
I would like to understand how that is a valid objection, because I honestly don't see it. To simplify a bit, if you think that 1 ('humanity won't reach a posthuman stage') and 2 ('posthuman civilizations are extremely unlikely to run vast numbers of simulations') are false, it follows that humanity will probably both reach a posthuman stage and run a vast number of simulations. Now if you really think this will probably happen, I can see no reason to deny that it has already happened in the past. Why postulate that we will be the first simulators? There's...
crucial information! I.e., we know that we are not in any of the simulations that we have produced.
I think the point has to do with belief consistency here. If you believe that our posthuman descendants will probably run a vast number of simulations of their ancestors (the negation of the first and second alternatives), then you have to accept that the particular case of being a non-simulated civilization is one among a vast number, and therefore highly improbable, and therefore we are almost certainly living in a simulation. You cannot know that you are not ...
Actually they did:
...In 1784, the French mathematician Charles-Joseph Mathon de la Cour wrote a parody of Benjamin Franklin’s then-famous Poor Richard’s Almanack. In it, Mathon de la Cour joked that Franklin would be in favour of investing money to grow for hundreds of years and then be spent on utopian projects. Franklin, amused, thanked Mathon de la Cour for the suggestion, and left £1,000 each to the cities of Philadelphia and Boston in his will. This money was to be invested and only to be spent a full 200 years after his death. As time went by, the money
All of these are arguably either neglected or less-discussed, or at least that's what the posts discussing these causes suggest. I suppose the same goes for your posts (I just didn't have the time to read them in detail yet) and that's why I lean towards including them.
I’ll add them soon, thanks! Yes, you’re right about the beneficial influence of improving institutional decision-making on other causes. This is something that occurs very frequently among other causes as well (though not always, as the meat-eater problem has shown). I look forward to reading that post.
Thanks for raising this point. I agree that such a category could include enhancements not strictly limited to "being smarter". I think this is a legitimate cause area, but I'm not sure I would include Magnus's excellent post; I just don't feel he is proposing this as a cause area. Anyway, the real reason I didn't include it was far more trivial: it was published in April, and this update is only supposed to cover up to March. I'm thinking about ways of extending the limit and keeping this up to date on a regular basis.
I like that one very much, but I stopped listing posts in March, that's why it is not included. Thanks anyway.
Could anyone help me downvote the 'Job listing (open)' tag? Applications closed two days ago. Thanks
That's true, but that comment was only meant for you, who seemed confused about what kind of 'should' you should use in a normative sentence. I took for granted that you already knew 'normative', because you had posted a nice and useful answer to the original question.
Aristotle would answer "'should' is said in many ways". I was of course thinking of the normative 'should', which I believe is the first that comes to mind when someone asks about normative sentences. But I'd be highly interested in a different kind of counterexample: a normative sentence without a 'should' stated or implied.
To achieve this you could create a "community user" and share the password at the top of the post. People would log in with it, make changes, and explain them in the comments. I'm not sure whether sharing the password would be against the Forum's rules, though.
It has happened to me that, when trying to make an edit, I accidentally clicked OK on the warning that says "We've found a previously saved state for this document, would you like to restore it?", thus restoring an old version of the article and reverting someone else's edits.
I don't think I will elaborate on policies, given that they are the last thing to worry about. Even RP's negative report counts new policies among the benefits of charter cities. Now that we supposedly have effective ways to improve welfare, why wouldn't we build a new city, start from scratch, do it better than everybody else, and show it to the world? While I agree that this can't be done without putting a lot of thought into it, I believe it must be done sooner or later. From a longtermist point of view: how could we ever expect to carry out a rational c...
Mere libertarians may have failed, as anarchists did in similar attempts. But I believe that EAs can do better. An EA city would be a perfect place to apply many of the ideas and policies we are currently advocating for.
Here is an even more ambitious one:
Found an EA charter city
Effective Altruism
A place where EAs could live, work, and research for long periods, with an EA school for their children, an EA restaurant, and so on. Houses and a city UBI could be interesting incentives.
Kelsey Piper has written an excellent article on different ways to help Ukrainians, including how to donate directly to the Ukrainian military. But she wisely points out that "[s]uch donations occupy a tricky ethical and even legal area... A safer choice would be to direct money to groups that are providing medical assistance on the ground in Ukraine, like Médecins Sans Frontières or the Ukrainian Red Cross."
Every culture has always been concerned about the future, the afterlife, and so on, but it seems to me that worries about "remote" future generations are relatively recent. There are probably isolated counterexamples, though, which I believe are the ones you are looking for. Aside from that, in the animal kingdom there is of course the instinctive concern for the "next" generation, which is in turn reproduced in every following generation.
I'm not updating this anymore. But your post made me curious. I will try to read it shortly.