Pablo Villalobos

Research assistant at Epoch


Is green growth or degrowth the best near-term future? 

From the longtermist perspective, degrowth is not that bad as long as we are eventually able to grow again. For example, we could hypothetically halt or reverse some growth and work on safe AGI, nanotechnology, human enhancement, or space exploration until we are able to bypass Earth's ecological limits.

A small-scale version of this happened during the pandemic, when economic activity was greatly reduced until the situation stabilized and we had better tools to fight the virus.

But make no mistake: growth (perhaps measured by something other than GDP) is pretty much the goal here. If we have to forgo growth temporarily, it's because we have failed to find clever ways of bypassing the current limits. It's not a strategy; it's what losing looks like.

It's also probably politically infeasible: rising inflation and energy prices alone are enough to make most people completely forget about the environment. Degrowth could never be a planned policy; it could only arrive as a consequence of economic forces.

It's as if Haber and Bosch had never invented their nitrogen-fixation process around 1910: we would have run out of fertilizer, and population growth would have had to slow down or even reverse.

Potatoes: A Critical Review

Great question. The paper does mention micronutrients but does not try to evaluate which of these advantages had a greater influence. I used the back-of-the-envelope calculation in footnote 6 as a sanity check that the effect size is plausible but I don't know enough about nutrition to have any intuition on this.

Why should we care about existential risk?

Even if you think all sentient life is net negative, extinction is not a wise choice. Unless you completely destroy Earth, animal life will probably evolve again, so there will be suffering in the future.

Moreover, what if there are sentient aliens somewhere? What if some form of panpsychism is true and there is consciousness embedded in most systems? What if some multiverse theory is true?

If you want to truly end suffering, your best bet would be something like creating a non-sentient AGI that transforms everything into non-sentient matter, and then spends eternity thinking and experimenting to determine whether there are other universes or other pockets of suffering, and how to influence them.

Of course, this would entail human extinction too, but it's a very precise form of extinction. And even then, the AGI would have to be aligned with your suffering-minimizing ethics.

So for now, even if you think life is net negative, preventing ourselves from losing control of the future is a very important instrumental goal. And anything that threatens that control, even if it's not an existential threat, should be avoided.

Avoiding Moral Fads?

I don't think embryo selection is remotely a central example of 20th-century eugenics, even if it involves 'genetic enhancement'. No one is being killed, sterilized, or otherwise subjected to nonconsensual treatment.

In the end, it's no different from other non-genetic interventions to 'improve' the general population, like the education system. Education transforms children for life in a way that many consider socially beneficial.

Why are we okay with such massive interventions on a child's environment (30 hours a week for 12+ years!), but not on a child's genes? After all, phenotype is determined by genes plus environment. Why is it okay to change one but not the other?

What is morally wrong about selecting which people come into existence based on their genes, when we already make such decisions based on every other aspect of their lives? There are almost no illiterate people in the Western world, and almost no people with stunted growth. We've selected them out of existence via environmental interventions. Should we stop doing that?

A valid reason to reject this new eugenics would be the fear that the eugenic selection pressure could end up controlled by political processes, which could be dangerous. But the education system is already controlled by political processes in most countries, and again this is mostly seen as acceptable.

I want to be replaced

Strongly agree, but I want to emphasize something. The word 'better' is doing a lot of work here.

I want to be replaced by my better future self, but not my future self who is great at rationalizing their decisions.

I want to be replaced by a better partner, but not by someone who is great at manipulating people into a relationship.

I want to be replaced by a better employee, but not by one who is great at getting the favor of the manager.

I want to be replaced by a machine which can do my job better, but not by an unaligned AGI.

I want to be replaced by better humans, but not by richer humans if they are lonely and depressed.

I want to be replaced by a simulation that feels like the best holiday ever, but not by a contract-drafting em.

I want to be replaced if and only if I'm being replaced by something that is, in a very precise sense, better. If the process that will replace me does not share my values, then I want to replace it with one that does.

Is EA compatible with technopessimism?

The fact that risk from advanced AI is one of the top cause areas is, to me, an example of at least part of EA being technopessimist about a concrete technology. So I don't think there is any fundamental incompatibility, nor that the burden of proof is particularly high, as long as we are talking about specific classes of technology.

If technopessimism requires believing that most new technology is net harmful, that's a very different question, and one that probably doesn't even have a well-defined answer.

A Case for Improving Global Equity as Radical Longtermism

(When I say 'we', I mean 'me, if I had control over the EA community'. This is just my view; the actual reasons behind funding decisions are probably somewhat different.)

Well, I'm not sure about the numbers, but I'd say a pretty substantial percentage of EA funding and donations goes to GiveWell-style global health initiatives. So it's not like we are ignoring the plight of people alive right now.

The reason there is more money than we can spend is that we don't know a lot of effective interventions to reduce, say, pandemic risk that scale well with more money.

We could just spend all that money on interventions that might help, like trying to develop broad-spectrum antivirals, but it's legitimately a hard problem, and it's likely we would end up with no more money to spend without having solved anything.

Going back to improving equity: the three groups you mentioned (Rohingya, Yemeni, Afghan) are victims of war and persecution. The root causes of their suffering are political. We could spend hundreds of billions trying to improve their political systems so that this does not happen again, but Afghanistan itself is an example of just how hard that is.

In short, even though helping people now is very valuable, we don't know many interventions that scale well with money. Malaria nets and deworming are the exception, not the rule. Remember that the entire world has been trying to eliminate poverty for centuries; it's just a hard problem.

Maybe paying for vaccines in lower-income countries is an effective and scalable intervention. The right way to evaluate this is with a cost-benefit analysis, not by how much money the WHO says it needs.

New Top EA Causes for 2020?

Turning the United Nations into a Decentralized Autonomous Organization

The UN currently runs on ancient technology [source], is extremely centralized [source], and uses outdated voting methods and consensus rules [source]. The result is a slow, inefficient organization, vulnerable to regulatory capture and with messed-up incentives.

Fortunately, we now have much better alternatives: Decentralized Autonomous Organizations (DAOs) are blockchain-based organizations which run on smart contracts. They offer many benefits compared to legacy technology:

1. Since the blockchain is always online and permanent, they are always available, fast, and 100% transparent by design.

2. They are decentralized and invulnerable to any attacks:

The blockchain-based DAO system works in a fully decentralized way and is immune to both outside and inside attacks. At the same time, operations of such system is only controlled by pre-defined rules; thus, the uncertainty and errors caused by human processes are greatly reduced.

[source]

3. The rules are enforced by code, so they are unbreakable.

When a government’s powers are encoded on a blockchain, its limitations will not be mere redress in a court of law, but will be the code itself. The inherent capabilities of blockchain technology can ex ante prevent a government from acting ultra vires; it can prevent government over-reach before the government act occurs.

[source]

4. They support new forms of governance and voting, such as futarchy or quadratic voting [source] (see the sketch after this list).

5. Since everything runs on Ethereum, and cryptocurrencies always go up, a small investment in Ether now could provide enough funds to run the UN forever, freeing states from having to contribute funds [source].
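To make point 4 a bit more concrete, here is a minimal sketch of the quadratic voting mechanism in Python. This is a toy model, not any real DAO framework; all the names are illustrative. The key idea is that casting v votes on a proposal costs v² voice credits in total, so voters can express strong preferences, but only at a steep price:

```python
# Toy quadratic voting: casting v votes on a proposal costs v**2
# voice credits in total, so strong preferences get expensive fast.
# Purely illustrative; not based on any real DAO framework.

class QuadraticBallot:
    def __init__(self, credits: int):
        self.credits = credits           # voice-credit budget for this voter
        self.votes: dict[str, int] = {}  # proposal -> signed vote count

    def cast(self, proposal: str, votes: int) -> None:
        """Add `votes` (positive = for, negative = against) to a proposal."""
        old = self.votes.get(proposal, 0)
        new = old + votes
        extra_cost = new ** 2 - old ** 2  # marginal cost of the new total
        if extra_cost > self.credits:
            raise ValueError("not enough voice credits")
        self.credits -= extra_cost
        self.votes[proposal] = new


def tally(ballots: list["QuadraticBallot"], proposal: str) -> int:
    """Sum the signed votes on a proposal across all ballots."""
    return sum(b.votes.get(proposal, 0) for b in ballots)


# Example: 9 credits buy only 3 votes (3**2 = 9), not 9 votes.
alice = QuadraticBallot(credits=9)
alice.cast("Turn the UN into a DAO", 3)
print(alice.credits)                             # 0
print(tally([alice], "Turn the UN into a DAO"))  # 3
```

The quadratic cost is the whole point of the mechanism: doubling your influence on one issue quadruples the price, which pushes voters to spread their credits across the issues they actually care about.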
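And to check point 5's arithmetic: if we grant the heroic assumption of a guaranteed annual return r, an endowment E can fund a yearly budget B forever exactly when E · r ≥ B, i.e. E ≥ B / r. With a hypothetical r of 20% per year, funding the UN in perpetuity would take an endowment of only five times its annual budget. The only load-bearing assumption is that cryptocurrencies always go up.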


Given the ample benefits, I'm sure a quick email to UN Secretary General António Guterres will convince everyone to switch to DAOs. Thus, we only need a small team of developers to write the code, which should take maybe a couple of months.

What is the expected impact? The UN recently prohibited nuclear weapons [source], contributing to reduced nuclear risk. An improvement in UN efficiency and capabilities is likely to reduce existential risk via better global coordination on issues like AI safety.

Note that the savings from reduced operating costs will be much greater than the implementation cost, so this could even be a profitable intervention.