Ramiro

Brazilian legal philosopher and financial supervisor

Comments

Should we think more about EA dating?

I loved this post and its comments. I'd add:

1. You should totally tell that girl (and maybe everyone else) about the drowning child; the real challenge is finding the best way to do it. Also, instead of emphasizing how having a significant other aligned with your goals might improve your prospects, I wonder how it affects your own personal happiness. People don't have to identify as EAs to support you or share your ultimate goals, but it sure helps. This can be demanding, as others emphasized above, but the effect of your personal lifestyle is usually not so big, so you can compromise a little if your acquaintances do it, too. The real problem, in my opinion, is that you'll probably live far better if your significant other understands why something is important to you, instead of just accepting it as some sort of peculiar hobby. Now, if that significant other loves you because of that...

Plus, the opposite is also true. You may fall in love with someone for their charm, wit & beauty, but passion fades; now if you're with someone because you love what they do and you can in some sense feel a part of it...

I'm definitely outside my expertise here (I can only provide negative examples); I wouldn't go so far as "Nuca Zaria: Effective Dating", but I'd advise young people to seriously entertain the idea that their choice of partner might be comparable (from a personal point of view) to some decisions about career paths.

2. This problem extends to friendships, though in a milder way. I'm profoundly grateful to my EA friends for how comfortable they make me feel. I've always felt like something of an outsider in my personal social life, and even now, with other people, I'm often that guy who stops in the middle of a sentence to refrain from quoting The Precipice or shedding a few tears over human suffering and dreams. I don't want to be the one who lends EA a cult-like appearance.

3. I'd totally welcome EA tips on social life in general; not about how to be charming (that's useful, but I've learned a trick or two), but focused on how to be happy with all this. Besides my own welfare, I believe it could make me more effective; even if I'm not always trying to "convert" my acquaintances, I want to have a positive impact on (and through) them. Personally, I sometimes admit to my old friends - at least those I think can sort of understand it - that I'm trying to "use" them to maximize something like general expected utility through our interactions. I don't think that's the optimal strategy, but it's hard to lie to smart friends, and I sort of see this as a higher form of friendship; so they might forgive my lame or cynical comments like "Wow, this wine is totally worth 20 bednets", or "Now you face Global Warming, the Red Dragon, Destroyer of Worlds; roll initiative."

4. MacAskill is just too handsome; it's counterfactually more effective to pick less dreamy characters. I'd prefer Toby Ord, who sees the present as a more hingey moment.

Is region-level cause prioritization research valuable to spot promising long-term priority causes worldwide?

Discussions of local vs. global remind me of the contrast between the performances of two GiveDirectly programs: 100+ (cash transfers for American families), which received US$114.3 million, and Covid-19 Africa, which received US$53.7 million. I can see reasons for GD supporting 100+, and I'm not surprised that a dollar is more likely to be donated to poor Americans than to sub-Saharan Africa, but this made me (and other people, of course, but I speak only for myself) wonder whether we can draw a line between "we're using parochialism to promote EA-like goals" and "we're compromising with parochialism, diverting scarce resources and giving up effectiveness". I don't see this as a major issue so much as a puzzle; it would be interesting to have some research on public criteria, or at least clues, for telling the difference.
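One rough way to frame that line is a break-even test. Here's a toy sketch (the effectiveness ratio and the "cannibalization" share are made-up numbers for illustration, not estimates of the real programs):

```python
# Toy break-even test for "using parochialism to promote EA-like goals".
# Suppose US$1 to the global program does K times as much good as US$1 to
# the local one, and a share s of the local program's donations would
# otherwise have gone to the global program. Raising money locally is then
# net positive iff the good created (1 unit per dollar) exceeds the good
# foregone (s dollars cannibalized, each worth K units), i.e. iff s * K < 1.
# K and s below are hypothetical.

K = 10.0  # assumed effectiveness ratio, global vs. local
s = 0.05  # assumed share of local donations diverted from global giving

print("net positive?", s * K < 1)  # True: 0.5 units foregone per 1 created
```

The point is only that "how much would have been donated anyway, and how much more effective is the alternative?" can, in principle, be made quantitative.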

Is region-level cause prioritization research valuable to spot promising long-term priority causes worldwide?

Thanks for the post! Would you have any examples of causes that could be a local priority, but not a global one?

Mike Huemer on The Case for Tyranny

The conclusion:

That’s the problem with freedom, in an advanced society. What can be done about it?
a. Targeted restrictions: The most natural thought is that we should tightly control just the really dangerous technologies, the ones that could be used to kill millions of people. So far, that’s worked because there aren’t that many such technologies (esp. nuclear weapons). It may not work in the future, though, when there are more such technologies. [...]
b. Defensive technologies: We’ll build defenses against the main threats. E.g., we’ll build defenses against nuclear weapons, we’ll engineer ourselves to resist genetically engineered viruses, etc. Problem: same as above; we may not be able to anticipate all the threats in advance. Also, defense is generally a losing game. It’s easier and cheaper to destroy things than to protect them. That’s why we have the saying “the best defense is a good offense”.
[...]
c. Tyranny/the End of Privacy: Maybe in the future, everyone will need to be closely monitored at all times, so that, if someone starts trying to destroy the world, other people can immediately intervene. Sam Harris suggested this in a podcast somewhere. Note: obviously, this applies as well (especially!) to government officials.
d. A better alternative . . . ?
Someone please fill in (d) for me. Thanks.

I don't think (c) works much better than the others. It implies a single point of failure and bad incentives due to lack of accountability, besides the really hard problem of monitoring everyone.

Transhumanists would say (d) is superintelligent AGI, but that's basically (c) with more tech.

(An interplanetary civilization would possibly solve it... but, as Huemer remarked, we're closer to destruction than to spreading through the galaxy.)

Ramiro's Shortform

Legal personality & AI systems

From the first draft of the UNESCO Recommendation on AI Ethics:

Policy Action 11: Ensuring Responsibility, Accountability and Privacy

94. Member States should review and adapt, as appropriate, regulatory and legal frameworks to achieve accountability and responsibility for the content and outcomes of AI systems at the different phases of their lifecycle. Governments should introduce liability frameworks or clarify the interpretation of existing frameworks to make it possible to attribute accountability for the decisions and behaviour of AI systems. When developing regulatory frameworks governments should, in particular, take into account that responsibility and accountability must always lie with a natural or legal person; responsibility should not be delegated to an AI system, nor should a legal personality be given to an AI system.

I see that the point of the last sentence is to prevent individuals and companies from escaping liability for AI failures. However, the last bit also seems to prevent us from creating some sort of "AI DAO" - i.e., a legal entity wholly implemented by an autonomous system. This doesn't seem reasonable; after all, what is a company if not some sort of artificial agent?

Collection of good 2012-2017 EA forum posts

We should have some sort of e-book with some of the "best picks" from each year.

Is it possible, and if so how, to arrive at ‘strong’ EA conclusions without the use of utilitarian principles?

[epistemic status: very uncertain, but I've been thinking about this for a while; there's probably a more persuasive argument out there]

I think you can easily extrapolate from a Kantian imperfect duty to help others to EA (though I understand people seldom have the patience to engage with this point in Kantian philosophy); also, I remember seeing a recent paper that used normative uncertainty to argue, quite successfully, that a deontological conception of moral obligation, given uncertainty, would end up in some sort of maximization. Other philosophers (Shelly Kagan, Derek Parfit) have persuasively argued that plausible versions of the most accepted moral philosophies tend to collapse into each other.

It'd be wonderful if someone could easily provide an argument reducing consequentialism, deontology and virtue ethics to each other. People could then stop arguing along the lines of "you can only accept that if you're an x-utilitarian..." and focus on how to effectively realize moral value (which is a hard enough subject).

My own personal and sketchy take here would be something like:

To consistently live with virtue in society, I must follow moral duties defined by social norms that are fair, stable and efficient - norms that, in some way, strive for general happiness (otherwise, society will change or collapse).

To maximize general happiness, I need to recognize that I am a limited rational agent, and devise a life plan that includes acquiring virtuous habits, and cooperating with others through rules and principles that define moral obligations for reasonable individuals.

To act taking Reason in me as an end in itself and according to the moral law, I need to live in society, and to recognize my own limitations and my dependence on other rational beings, thus adopting habits that prevent vice and allow me to be recognized as a virtuous cooperator. Consistently doing this, at least in scenarios of factual and normative uncertainty, implies acting in a way that can be described as restrictedly optimizing a cardinal social welfare function.

Maximizing the Long-Run Returns of Forced Retirement Savings

I think there's a small typo, probably from your previous post on prisons:

Note that each prison’s profit-maximizing bid is independent of the other prisons’ bids

I like this idea; we should have many more second-price auctions out there. Do you have any further references on it?
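For anyone who hasn't seen the mechanism, here's a minimal sketch (the bidders and values are hypothetical; it just illustrates the independence property quoted above):

```python
# Minimal sketch of a sealed-bid second-price (Vickrey) auction.
# The key property: the price you pay never depends on your own bid,
# so your profit-maximizing bid is independent of everyone else's,
# and bidding your true value is a dominant strategy.

def second_price_auction(bids):
    """Highest bidder wins but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the runner-up's bid, not the winner's own
    return winner, price

bids = {"A": 100, "B": 80, "C": 60}  # hypothetical truthful bids
print(second_price_auction(bids))    # ('A', 80)

# Sanity check: whether A bids 90, 100, or 120, A still wins and still
# pays B's bid of 80. Changing your bid changes *whether* you win,
# never the price you pay when you do.
for a_bid in (90, 100, 120):
    assert second_price_auction({**bids, "A": a_bid}) == ("A", 80)
```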

Ramiro's Shortform

Should donations be counter-cyclical, at least as a matter of when to give? (I remember a similar previous conversation on Reddit, but it was mainly about deciding where to donate.) I don't think patient philanthropists should "give now instead of later" just because of that (we'll probably face worse crises), but it seems like frequent donors (like GWWC pledgers) should consider bringing their donations forward (particularly if their personal spending has decreased) - and also take into account expectations about future exchange rates (a toy version of this point is sketched below). Does that make sense?
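To make the exchange-rate point concrete, a toy comparison (every figure is hypothetical, just to show the shape of the calculation):

```python
# Toy comparison for a donor earning in BRL and giving to a USD-denominated
# charity: donate now, or invest locally and donate in a year?

donation_brl = 1000.0  # amount pledged, in local currency
fx_now = 5.0           # BRL per USD today
fx_next_year = 5.5     # expected rate in a year (BRL depreciates)
local_return = 0.03    # what the money earns locally if you wait

usd_now = donation_brl / fx_now                               # 200.00
usd_later = donation_brl * (1 + local_return) / fx_next_year  # ~187.27

print(f"donate now:  US${usd_now:.2f}")
print(f"wait a year: US${usd_later:.2f}")

# Under these assumptions the local return doesn't offset the expected
# depreciation, so a donor who expects their currency to weaken has a
# (pro tanto) reason to bring the donation forward.
```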
