All of Leo's Comments + Replies

Leo

This is the best simple case I have read so far. Well done!

I see, thanks. I guess I would have preferred a more accurate, unambiguous aggregation of everyone’s opinion, to have a clearer sense of the preferences of the community as a whole, but I'm starting to think that it's just me.

Toby Tremlett🔹
That's fair enough Leo! It's definitely not just you. But if that were my only goal, I'd probably run a survey rather than a debate week. During this week we are hoping to see conversations which change minds, which means juggling a few goals. Maybe in the future (no promises) we will introduce polls or debate sliders to add to posts, and then we could get more niche and precise data.

As I said last time, trying to quantify agreement/disagreement is much more confusing to determine and to read than just measuring how many millions, out of an extra $100m, people would assign to global health vs. animal welfare. The banner would go from 0 to 100, and whatever you vote, say 30m, would mean that $30m should go to one cause and $70m to the other. As it is, just to mention one paradox, if I wholly disagree with the question, it means that I think it wouldn't be better to spend the money on animal welfare than on global health, which in turn ...

Toby Tremlett🔹
Thanks Leo! I remember your comment from last time, it's a fair point.  We did consider framing the question exactly like that (i.e. splitting 100m between the two), but I decided against it. The main reason was that a vote would actually seem to project far more certainty if you had to give a precise number than with this question, which might introduce a far higher barrier to voting. The reason we have voting in a debate week is not to produce a perfectly accurate aggregation of everyone's opinion (though, all else equal, a more accurate aggregation is better), but rather to encourage and enable valuable conversation on crucial questions. So a question that is framed in a way which still makes sense and represents a preference, but will get more votes, is probably a better one.  I do understand that the meaning of a vote is ambiguous, but this is why we are introducing commenting, so you can explain the reason behind your vote. Hopefully, this means that ambiguities like the one you mention won't matter too much.   
Answer by Leo

There's substantial discussion on this topic following Eliezer's take on this.

I think I would prefer to strongly disagree, because I don't want my half-agree to be read as if I agreed to some extent with the 5% statement. This is because "half agree" is ambiguous here. People could think that it means (1) something around 2.5% of funding/talent, or (2) that 5% could be OK with some caveats. This should be clarified so we can know what the results actually mean.

Toby Tremlett🔹
Makes sense Leo, thanks. I don't want to change anything very substantial about the banner after so many users have voted, but I'll bear this in mind for next time. 
Leo

This is a great experiment. But I think it would have been much clearer if the question was phrased as "What percentage of talent+funding should be allocated to AI welfare?", with the banner showing a slider from 0% to 100%. As it is now, if I strongly disagree with allocating 5% and strongly agree with 3% or whatever, I feel like I should still place my icon on the extreme left of the line. This would make it look like I'm all against this cause, which wouldn't be the case.

Toby Tremlett🔹
Good point (I address similar concerns here). For the time being, personally I would treat a half agree as some percentage under 5%, and explain your vote in the discussion thread if you want to make sure that people know what you mean. 

The expected impact of waiting to sell will diminish as time goes on, because you are liable to change your values or, more probably, your views about what and how best to prioritize. This is especially true if you have a track record of changing your mind about things (like most of us). While the expected impact of waiting is, say, the value of two kidneys, conditional on not changing your mind, this same impact will be equal to the value of one kidney, or less, if you have a 50% chance or more of changing your mind. So I guess your comment is valid only if you are very confident that you will not change your mind about donating a kidney between now and the estimated time when you can sell it.
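The arithmetic in the comment above can be sketched in a few lines (a minimal illustration only — the "kidney units" and the probabilities are made-up numbers for exposition, not figures from the discussion):

```python
def expected_impact_of_waiting(p_change_mind: float, value_if_committed: float = 2.0) -> float:
    """Expected impact of waiting, in 'kidney units', assuming the impact
    drops to zero if you change your mind before you can sell."""
    return (1 - p_change_mind) * value_if_committed

# With no chance of changing your mind, waiting is worth two kidneys;
# with a 50% chance of changing your mind, it is worth only one.
print(expected_impact_of_waiting(0.0))  # 2.0
print(expected_impact_of_waiting(0.5))  # 1.0
```

So the comment's conclusion follows directly: once the probability of changing your mind reaches 50%, the expected value of waiting falls to one kidney or below.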

I'm not updating this anymore. But your post made me curious. I will try to read it shortly.

Congratulations. Are you planning to upload recordings of the presentations? Where can I access the conference program?

Leo
In case anyone is interested, here is the recording of Condor Initiative's director Carmen Csilla Medina talking about Condor Camp.
Hugo Ikta
Hi Leo, in order to reduce costs we decided not to record the presentations this time. The final program was only published on Swapcard but you can find the draft of the schedule here.

This was a nice post. I haven't thought about these selfishness concerns before, but I did think about possible dangers arising from aligned servant AI used as a tool to improve military capabilities in general. A pretty damn risky scenario in my view and one that will hugely benefit whoever gets there first. 

Leo

He later abdicated the throne in 2014, ending the monarchy.

 

Not really. He abdicated in favor of his son, who is the present king of Spain. Ending the monarchy is an idea that never crossed his mind.  

In case you'd prefer the EA Forum format, this post was also crossposted here some time ago: https://forum.effectivealtruism.org/posts/oRx3LeqFdxN2JTANJ/epistemic-legibility   

Jeremy
Ah thanks. I should remember to check for that.
Leo

Spatterings of Latin

I can't think of one single post where this is a serious issue. There may be exceptions I'm not aware of, but generalizing this is an exaggeration.

Tyner🔸
There are dozens of posts/comments that use phrases like ex-ante and modus tollens.  
tlevin
I think there are certain Latin academic phrases that crop up in a lot of writing in general (not specifically EA Forum writing) and that, ceteris paribus, would be better if translated to plain English.
Leo

Was the winner 'efflorescence' or 'peripeteia'?

Davidmanheim
Efflorescence

Klingt exotisch, aber wenn man das Wort 10x sagt, dann merkt man das nicht mehr ("Sounds exotic, but once you've said the word ten times you no longer notice it")

I believe this happens because, to my knowledge, German words ending in -ismus are only combined with proper names ('Marxismus') or foreign words (especially adjectives), that is, Lehnwörter (loanwords), like 'Liberalismus' or 'Föderalismus'. But I'm not a native speaker, so I can't really tell how "exotic" this neologism sounds.

Have you checked this https://forum.effectivealtruism.org/events? There are some meetups in Berkeley. 

I think this is very useful. Added. 

Langzeitigkeit

Langzeitethizismus

But I think the best is the already proposed 'Langzeitethik'.

Great article! Another thing I just realized: I dislike the clock metaphor. It seems to suggest that we will eventually reach midnight, no matter what. Perhaps a time bomb (which can be deactivated) would be a better illustration.

christian.r
Thank you! I also really struggle with the clock metaphor. It seems to have just gotten locked in as the Bulletin took off in the early Cold War. The time bomb is a great suggestion — it communicates the idea much better

My version tried to be an intuitive simplification of the core of Bostrom's paper. I actually can't identify the assumptions you mention. If you are right, I may have presupposed them while reading the paper, or my memory may be betraying me for the sake of making sense of it. Anyway, I really appreciate that you took the time to comment.

I would like to understand how that is a valid objection, because I honestly don't see it. To simplify a bit, if you think that 1 ('humanity won't reach a posthuman stage') and 2 ('posthuman civilizations are extremely unlikely to run vast numbers of simulations') are false, it follows that humanity will probably both reach a posthuman stage and run a vast number of simulations. Now if you really think this will probably happen, I can see no reason to deny that it has already happened in the past. Why postulate that we will be the first simulators? There's...

bmg
To be clear, I'm not saying the conclusion is wrong - just that the explicit assumptions the paper makes (mainly the Indifference Principle) aren't sufficient to imply its conclusion. The version that you've just presented isn't identical to the one in Bostrom's paper -- it's (at least implicitly) making use of assumptions beyond the Indifference Principle. And I think it's surprisingly non-trivial to work out exactly how to formalize the needed assumptions, and make the argument totally tight, although I'd still guess that this is ultimately possible.[1]

1. Caveat: Although the conclusion is at least slightly wrong, since - if we're willing to assign non-zero probability to the hypothesis that we're hallucinating the world, because we're ancestor simulations - it seems we should also assign non-zero probability to the hypothesis that we're hallucinating for some other reason. (The argument implicitly assumes that being an ancestor simulation is the only 'skeptical hypothesis' we should assign non-zero probability to.) I think it's also unclear how big a deal this caveat is.
Answer by Leo

Crucial information! I.e., we know that we are not in any of the simulations that we have produced.

I think the point has to do with belief consistency here. If you believe that our posthuman descendants will probably run a vast number of simulations of their ancestors (the negation of the second and first alternatives), then you have to accept that the particular case of being a non-simulated civilization is one in a vast number, and therefore highly improbable, and therefore we are almost certainly living in a simulation. You cannot know that you are not ...

Alex Williams
Vasco Grilo🔸
Thanks!

Actually they did:

In 1784, the French mathematician Charles-Joseph Mathon de la Cour wrote a parody of Benjamin Franklin’s then-famous Poor Richard’s Almanack. In it, Mathon de la Cour joked that Franklin would be in favour of investing money to grow for hundreds of years and then be spent on utopian projects. Franklin, amused, thanked Mathon de la Cour for the suggestion, and left £1,000 each to the cities of Philadelphia and Boston in his will. This money was to be invested and only to be spent a full 200 years after his death. As time went by, the money

...

All of these are arguably either neglected or less-discussed, or at least that's what the posts discussing these causes suggest. I suppose the same goes for your posts (I just didn't have the time to read them in detail yet) and that's why I lean towards including them. 

IanDavidMoss
Yeah, it also seems like you're trying to highlight posts where the author is doing a lot of work to define and/or make the case for the cause area. Maybe that's an easier way to think about the logic for inclusion.

I'll add them soon, thanks! Yes, you're right about the beneficial influence of improving institutional decision-making on other causes. This is something that occurs very frequently with other causes as well (though not always, as the meat-eater problem has shown). I look forward to reading that post.

Thanks for raising this point. I agree that such a category could include enhancements not strictly limited to "being smarter". I think this is a legitimate cause area, but I'm not sure I would include Magnus's excellent post; I just don't feel he is proposing this as a cause area. Anyway, the real reason I didn't include it was far more trivial: it was published in April, and this update is supposed to cover up to March. I'm thinking about ways of extending the limit and keeping this up to date on a regular basis.

There is a rough draft. I’ll try to update it and let you know.

Added. This is a great contribution. Thanks a lot!

I like that one very much, but I stopped listing posts in March, that's why it is not included. Thanks anyway.

Question Mark
This partially falls under cognitive enhancement, but what about other forms of consciousness research besides increasing intelligence, such as what QRI is doing? Hedonic set-point enhancement, i.e. making the brain more suffering-resistant, and research into creating David Pearce's idea of "biohappiness" are arguably just as important as intelligence enhancement. Having a better understanding of valence could also potentially make future AIs safer. Magnus Vinding also wrote this post on personality traits that may be desirable from an effective altruist perspective, so research into cognitive enhancement could also include figuring out how to increase these traits in the population.

Could anyone help me downvote the 'Job listing (open)' tag? Applications closed two days ago. Thanks

That's true, but that comment was only meant for you, who seemed confused about what kind of  'should' you should use in a normative sentence. I took for granted that you already knew 'normative', because you had posted a nice and useful answer to the original question.  

Aristotle would answer "'should' is said in many ways". I was of course thinking of the normative 'should', which I believe is the first that comes to mind when someone asks about normative sentences. But I'd be highly interested in a different kind of counterexample: a normative sentence without a 'should' stated or implied.

Arepo
Defining a normative statement as 'a statement with a normative "should"' has certain problems...
Answer by Leo

There's a 'should' either stated or implied.

Vynn
Do "must" and "may" imply a should?
Arepo
'If you add 1 to 1 you should get 2' is not a statement people would necessarily consider normative.

To achieve this you could create a "community user" and share the password at the top of the post. People would log in with it, make changes, and explain them in the comments. Not sure if sharing the password would be against the Forum's rules.

Nathan Young
That would be a massive effort for people to do and I'm almost certain few would. Cool idea though.

It has happened to me that, when trying to make an edit, I accidentally clicked OK on the warning that says "We've found a previously saved state for this document, would you like to restore it?", thus restoring an old version of the article and reverting someone else's edits.

JuanGarcia
Ah it must have been that, thanks for letting me know

I don't think I will elaborate on policies, given that they are the last thing to worry about. Even RP's negative report counts new policies among the benefits of charter cities. Now that we are supposed to have effective ways to improve welfare, why wouldn't we build a new city, start from scratch, do it better than everybody else, and show it to the world? While I agree that this can't be done without putting a lot of thinking into it, I believe it must be done sooner or later. From a longtermist point of view: how could we ever expect to carry out a rational c...

Mere libertarians may have failed, as anarchists did in similar attempts. But I believe that EAs can do better. An EA city would be a perfect place to apply many of the ideas and policies we are currently advocating for.

RyanCarey
Could you elaborate on the policies? And what, roughly, are you picturing - an EA-sympathising municipal government, or a more of a Honduran special economic zone type situation?
Leo

Here is an even more ambitious one:

Found an EA charter city

Effective Altruism

A place where EAs could live, work, and research for long periods, with an EA school for their children, an EA restaurant, and so on. Houses and a city UBI could be interesting incentives.

RyanCarey
What would be the value add of an EA city, over and above that of an EA school and coworking space? For example, I don't see why you need to eat at an EA restaurant, rather than just a regular restaurant with tasty and ethical food. Note also that the libertarian "Free State Project" seems to have failed, despite there being many more libertarians than effective altruists.

Kelsey Piper has written an excellent article on different ways to help Ukrainians, including how to donate directly to the Ukrainian military. But she wisely points out that "[s]uch donations occupy a tricky ethical and even legal area... A safer choice would be to direct money to groups that are providing medical assistance on the ground in Ukraine, like Médecins Sans Frontières or the Ukrainian Red Cross."

This is the only post that quoted it last year. It explains the idea, but it doesn't look like the one you're looking for.

Answer by Leo

Every culture has always been concerned about the future, the afterlife, and so on, but it seems to me that worries about "remote" future generations are relatively recent. There are probably isolated counterexamples, though, which I believe are the ones you are looking for. Aside from that, in the animal kingdom there is of course the instinctive concern for the "next" generation, which is in turn reproduced in every following generation.