New episode of La Bisagra de la Historia:
Jaime Sevilla on trends in artificial intelligence models
As I said last time, trying to quantify agreement/disagreement is much harder to determine and to read than simply measuring how many millions of an extra $100m people would assign to global health versus animal welfare. The banner would go from 0 to 100, and whatever you vote, say $30m, would mean that $30m should go to one cause and $70m to the other. As it is, to mention just one paradox, if I wholly disagree with the question, it means that I think it wouldn't be better to spend the money on animal welfare than on global health, which in turn ...
I think I would prefer to strongly disagree, because I don't want my "half agree" to be read as if I agreed to some extent with the 5% statement. "Half agree" is ambiguous here: people could take it to mean 1) something around 2.5% of funding/talent, or 2) that 5% could be OK with some caveats. This should be clarified so we can know what the results actually mean.
This is a great experiment. But I think it would have been much clearer if the question had been phrased as "What percentage of talent+funding should be allocated to AI welfare?", with the banner showing a slider from 0% to 100%. As it is now, if I strongly disagree with allocating 5% but strongly agree with 3% or whatever, I feel like I should still place my icon on the extreme left of the line. This would make it look like I'm entirely against this cause, which wouldn't be the case.
In case anyone is interested, here is the recording of Condor Initiative's director Carmen Csilla Medina talking about Condor Camp
The expected impact of waiting to sell will diminish as time goes on, because you are liable to change your values or, more probably, your views about what and how best to prioritize. This is especially true if you have a track record of changing your mind about things (like most of us). While the expected impact of waiting is, say, the value of two kidneys conditional on not changing your mind, that same impact is equal to the value of one kidney, or less, if you have a 50% chance or more of changing your mind. So I guess your comment is valid only if you are very confident that you will not change your mind about donating a kidney between now and the estimated time when you could sell it.
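To spell out the arithmetic behind this (a minimal sketch with my own labels, not figures from the original exchange): let p be the probability that you still endorse selling when the option becomes available, and V the value of donating one kidney now. Then
\[
\mathbb{E}[\text{impact of waiting}] \;=\; p \cdot 2V \;+\; (1-p)\cdot 0 \;=\; 2pV,
\]
so whenever p ≤ 0.5 the expected impact of waiting is at most V, i.e. no better than donating a single kidney today.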
This was a nice post. I haven't thought about these selfishness concerns before, but I did think about possible dangers arising from aligned servant AI used as a tool to improve military capabilities in general. A pretty damn risky scenario in my view and one that will hugely benefit whoever gets there first.
Here (https://thehumaneleague.org/animals) you'll find many articles on the subject. For example, this one: What really happens on a chicken farm.
In case you'd prefer the EA Forum format, this post was also crossposted here some time ago: https://forum.effectivealtruism.org/posts/oRx3LeqFdxN2JTANJ/epistemic-legibility
Sounds exotic, but once you've said the word ten times you stop noticing it.
I believe this happens because, to my knowledge, German words ending in -ismus are only combined with proper names ('Marxismus') or foreign words (especially adjectives), that is, Lehnwörter like 'Liberalismus' or 'Föderalismus'. But I'm not a native speaker, so I can't really tell how "exotic" this neologism sounds.
Have you checked this https://forum.effectivealtruism.org/events? There are some meetups in Berkeley.
My version tried to be an intuitive simplification of the core of Bostrom's paper. I don't actually see the assumptions you mention in it. If you are right, I may have presupposed them while reading the paper, or my memory may be betraying me for the sake of making sense of it. In any case, I really appreciate that you took the time to comment.
I would like to understand how that is a valid objection, because I honestly don't see it. To simplify a bit, if you think that 1 ('humanity won't reach a posthuman stage') and 2 ('posthuman civilizations are extremely unlikely to run vast numbers of simulations') are both false, it follows that humanity will probably reach a posthuman stage and run a vast number of simulations. Now, if you really think this will probably happen, I can see no reason to deny that it has already happened in the past. Why postulate that we will be the first simulators? There's...
crucial information! I.e., we know that we are not in any of the simulations that we have produced.
I think the point here has to do with belief consistency. If you believe that our posthuman descendants will probably run a vast number of simulations of their ancestors (the negation of the first and second alternatives), then you have to accept that being the non-simulated civilization is just one case among a vast number, and therefore highly improbable, and therefore we are almost certainly living in a simulation. You cannot know that you are not ...
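A minimal sketch of the counting behind this (N is my own placeholder, not a figure from the thread): suppose the non-simulated civilization runs N ancestor simulations whose inhabitants have experiences indistinguishable from ours. Then, among the N + 1 civilizations having this kind of experience, only one is non-simulated, so by a bland indifference principle
\[
P(\text{we are not simulated}) \;\approx\; \frac{1}{N+1},
\]
which is vanishingly small once N is vast. That is why claiming to know we are not simulated sits uneasily with expecting our descendants to run vast numbers of simulations.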
Actually they did:
...In 1784, the French mathematician Charles-Joseph Mathon de la Cour wrote a parody of Benjamin Franklin’s then-famous Poor Richard’s Almanack. In it, Mathon de la Cour joked that Franklin would be in favour of investing money to grow for hundreds of years and then be spent on utopian projects. Franklin, amused, thanked Mathon de la Cour for the suggestion, and left £1,000 each to the cities of Philadelphia and Boston in his will. This money was to be invested and only to be spent a full 200 years after his death. As time went by, the money
I'll add them soon, thanks! Yes, you're right about the beneficial influence of improving institutional decision-making on other causes. This kind of spillover occurs very frequently among causes as well (though not always, as the meat-eater problem has shown). I look forward to reading that post.
Thanks for raising this point. I agree that such a category could include enhancements not strictly limited to "being smarter". I think this is a legitimate cause area, but I'm not sure I would include Magnus's excellent post; I just don't feel he is proposing this as a cause area... Anyway, the real reason I didn't include it was far more trivial: it was published in April, and this update is supposed to cover only up to March. I'm thinking about ways of extending that cutoff and keeping this up to date on a regular basis.
Aristotle would answer "'should' is said in many ways". I was of course thinking of the normative 'should', which I believe is the first that comes to mind when someone asks about normative sentences. But I'd be highly interested in a different kind of counterexample: a normative sentence without a 'should' stated or implied.
I don't think I will elaborate on policies, given that they are the last thing to worry about. Even RP's negative report counts new policies among the benefits of charter cities. Now that we supposedly have effective ways to improve welfare, why wouldn't we build a new city, start from scratch, do it better than everybody else, and show it to the world? While I agree that this can't be done without putting a lot of thought into it, I believe it must be done sooner or later. From a longtermist point of view: how could we ever expect to carry out a rational c...
Kelsey Piper has written an excellent article on different ways to help Ukrainians, including how to donate directly to the Ukrainian military. But she wisely points out that "[s]uch donations occupy a tricky ethical and even legal area... A safer choice would be to direct money to groups that are providing medical assistance on the ground in Ukraine, like Médecins Sans Frontières or the Ukrainian Red Cross."
This is the only post that quoted it last year. It explains the idea, but it doesn't look like the one you're looking for.
Every culture has always been concerned about the future, the afterlife and so on, but it seems to me that worries about "remote" future generations are relatively recent. There are probably isolated counterexamples, though, which I believe are the ones you are looking for. Aside from that, in the animal kingdom there is of course the instinctive concern for the "next" generation, which is in turn reproduced in every following generation.
This is the best simple case I have read so far. Well done!