All of Pablo Villalobos's Comments + Replies

Hi, thank you for your post, and I'm sorry to hear about your (and others') bad experience in EA. However, I think if your experience in EA has mostly been in the Bay Area, you might have an unrepresentative perspective on EA as a whole. Most of the worst incidents of the kind you mention that I've heard about in EA have taken place in the Bay Area; I'm not sure why.

I've mostly been involved in the Western European and Spanish-speaking EA communities, and as far as I know there have been far fewer incidents here. Of course, this might just be because these comm... (read more)

7
Elika
1y
Your comment (at least how it reads, which may be different from your intentions) comes across as "that's a particularly problematic location, just go to a different one". That doesn't solve the problem. That doesn't hold the Bay* or any community accountable or push for change in a positive direction. I think that sort of logic is a common response to what Maya writes about and doesn't help or make anything better. *and this is coming from an ex-Berkeley community builder

I'm personally still reserving judgment until the dust settles. I think in this situation, given the animosity towards SBF from customers, investors, etc., there are clear incentives to speak out if you believe there was fraud, and to stay quiet if you believe it was an honest (even if terrible) mistake. So we're likely seeing biased evidence.

Still, a mistake of this magnitude seems at the very least grossly negligent. You can't preserve belief in both SBF's integrity and his competence after this. And I agree that it's hard to know whether you're competent... (read more)

Well, I also think that the core argument is not really valid. Engagement does not require conceding that the other person is right.

The way I understand it, the core of the argument is that AI fears are based on taking a pseudo-trait like "intelligence" and extrapolating it to a "super" regime. The author claims that this is philosophical nonsense and thus there's nothing to worry about. I reject the premise that AI fears are based on such pseudo-traits.

AI risk is not in principle about intelligence or agency. A sufficient amount of brute-force search is enough to be... (read more)

Upvoted because I think the linked post raises an actually valid objection, even though it does not seem devastating to me and it is somewhat obscured by a lot of philosophy that also seems not that relevant to me.

There was a linkpost for this on LessWrong a few days ago; I think the discussion in the comments is good.

1
Locke
2y
The top-voted comment in LW says: "(I kinda skimmed, sorry to everyone if I’m misreading / mischaracterizing!)" All the comments there just seem to assert that VGR's core argument isn't really valid. That's not real engagement.

I quite liked this post, but I have a minor quibble. Engram preservation still does not directly save lives; it buys us an indefinite amount of time, which is hopefully enough to develop the technology to actually save them.

You could say that it's impossible to save a life since there's always a small chance of untimely death, but let's say we consider a life "saved" when the chance of death in unwanted conditions is below some threshold, like 10%.

I would say widespread engram preservation reduces the chance of untimely death from ~100% (assuming no longevity advances in the near future) to the probability of x-risks. Depending on the threshold, you might have to deal with x-risks to consider these lives "saved".

3
John Smart
2y
Pablo, I submit you haven't thought carefully enough about the nature of the postbiological future. Once humanity has the capacity to preserve and emulate minds, those minds are as impervious to x-risks as is the entire network of backups. Once minds are stored redundantly, both on and off Earth, x-risks themselves become negligible. There is something deeply accelerative and protective of advanced complexity in our universe that is typically ignored by the x-risk community. It doesn't serve their fundraising and political purposes to see it, and it is truly strange compared with biology's dependence on planets, suns, etc. Yet it is apparently (most likely, the default model) how evolutionary development works on all Earthlikes in our universe. We just need the courage to see and learn from it.
3
aurellem
2y
Would you say that anesthesia doesn't "directly" extend life? After all, it only makes it possible to do certain surgeries, and it's really the surgery that is "directly" extending the life. And yet "the hospital" extends lives through its interventions, one of which is anesthesia, and without which the hospital would not function or be able to do surgeries effectively.

This is just the standard problem of assigning credit when multiple causes are involved. I'd propose the same sorts of tests we use in other cases, such as considering whether, in the absence of preservation, it would still be possible to save someone's life with future technology. The conclusion I draw is that preservation technology saves lives in a similar way to how anesthesia extends lives by enabling better surgeries. So it's perfectly sensible to talk about preservation directly saving lives even though it's not the only technology required to do so -- after all, if the life does get saved eventually, then preservation deserves a hefty amount of the credit. Just as anesthesiologists deserve a hefty amount of credit whenever a surgery is performed successfully, and can be said to be directly extending people's lives through their work as a critical pillar of a surgical team.

Dealing with x-risks in a satisfactory way and inventing uploading technology are also necessary to save someone's life, and will deserve substantial credit if lives are truly saved. And preservation is a substantial and irreplaceable part of the constellation of truly life-saving technologies for people alive today.

Well, capital accumulation does raise productivity, so traditional pro-growth policies are not useless. But they are not enough, as you argue.

Ultimately, we need either technologies that directly raise productivity (like atomically precise manufacturing, fusion energy, or other cheap energy sources) or technologies that accelerate R&D and commercial adoption. Apart from AI and an increasing global population, I can think of four:

  • boosting average intelligence via genetic engineering
  • reforming science and engineering, as well as education (a la dath ilan)
  • n
... (read more)

From the longtermist perspective, degrowth is not that bad as long as we are eventually able to grow again. For example, we could hypothetically halt or reverse some growth and work on creating safe AGI or nanotechnology or human enhancement or space exploration until we are able to bypass Earth's ecological limits.

A small scale version of this happened during the pandemic, when economic activity was greatly reduced until the situation stabilized and we had better tools to fight the virus.

But let's not be mistaken: growth (perhaps measured by something oth... (read more)

1
Goran Haden
2y
Yes, degrowth now might mean more growth in the future than otherwise. It's better to let some air out of the growth balloon than to inflate it so hard that it bursts. If we had done the "right" things historically, we could have done so much more space exploration and made other valuable choices before we caused the environmental crisis of today. But we have wasted Earth's resources in so many useless and destructive ways, in a global consumption society that now even challenges our mental health. What we need now globally is not more overconsumption, but enough basic needs met for everyone within the planetary boundaries, and free extensive sharing of the best tools for well-being, like 29k.org. Most people agree to reduce their consumption if everyone has to do it, so we should try rationing as in past huge crises. You can offer more free time instead of higher salaries at a societal level, and compensate poor people.

Great question. The paper does mention micronutrients but does not try to evaluate which of these advantages had a greater influence. I used the back-of-the-envelope calculation in footnote 6 as a sanity check that the effect size is plausible but I don't know enough about nutrition to have any intuition on this.

Even if you think all sentient life is net negative, extinction is not a wise choice. Unless you completely destroy Earth, animal life will probably evolve again, so there will be suffering in the future.

Moreover, what if there are sentient aliens somewhere? What if some form of panpsychism is true and there is consciousness embedded in most systems? What if some multiverse theory is true?

If you want to truly end suffering, your best bet would be something like creating a non-sentient AGI that transforms everything into some non-sentient matter, and then sp... (read more)

I don't think embryo selection is remotely a central example of 20th century eugenics, even if it involves 'genetic enhancement'. No one is getting killed, sterilized, or otherwise subjected to nonconsensual treatments.

In the end, it's no different from other non-genetic interventions to 'improve' the general population, like the education system. Education transforms children for life in a way that many consider socially beneficial.

Why are we okay with having such massive interventions on a child's environment (30 hours a week for 12+ years!), but no... (read more)

Strongly agree, but I want to emphasize something. The word 'better' is doing a lot of work here.

I want to be replaced by my better future self, but not my future self who is great at rationalizing their decisions.

I want to be replaced by a better partner, but not by someone who is great at manipulating people into a relationship.

I want to be replaced by a better employee, but not by one who is great at getting the favor of the manager.

I want to be replaced by a machine which can do my job better, but not by an unaligned AGI.

I want to be replaced by better... (read more)

The fact that risk from advanced AI is one of the top cause areas is, to me, an example of at least part of EA being technopessimist about a concrete technology. So I don't think there is any fundamental incompatibility, nor that the burden of proof is particularly high, as long as we are talking about specific classes of technology.

If technopessimism requires believing that most new technology is net harmful that's a very different question, and probably does not even have a well defined answer.

1
vivek
2y
"risk from advanced AI is one of the top cause areas is to me an example of at least part of EA being technopessimist"  ...assuming that particular example is a concern of such an impact primarily on humans, could that be articulated as anthropocentric technopessimism ? On a broader sidebar, there is discussion around technology (particularly computing) in regards to ecological and other limits - e.g. https://computingwithinlimits.org

(When I say 'we' I mean 'me, if I had control over the EA community'. This is just my view, and the actual reasons behind funding decisions are probably somewhat different)

Well, I'm not sure about the numbers but I'd say a pretty substantial percentage of EA funding and donations is going to GiveWell-style global health initiatives. So it's not like we are ignoring the plight of people right now.

The reason why there is more money than we can spend is that we don't know a lot of effective interventions to reduce, say, pandemic risk, which scale well with mor... (read more)

2
LiaH
2y
Yes, I agree on the point that interventions are best assessed with cost-benefit analysis, rather than by propping up inefficient institutions. I was not necessarily suggesting support for WHO, only indicating that the purported leader in global health is spending more time fundraising than leading. I, perhaps mistakenly, thought EA, particularly Open Phil, was about funding high-risk, low-yield, but fat-tailed causes, vs the "sure thing" that GiveWell funds. For pandemic risk, what about funding campaigns to back the TRIPS waiver proposal for all pandemic vaccines? For people of conflict-affected countries, what about supporting impartial organizations which can access people in need? Advocate for impartial access? The latter would scale well, if effective, because all of southern Afghanistan is unvaccinated against every childhood disease, not just COVID. I agree with you that, when the root cause of suffering is political, the solution is complicated, and improving the political system would be costly and ineffective. This is why I think the creativity of the EA community could be so beneficial, by seeking other solutions.

Turning the United Nations into a Decentralized Autonomous Organization

The UN is now running on ancient technology[source], is extremely centralized[source], and uses outdated voting methods and consensus rules[source]. This results in a slow, inefficient organization, vulnerable to regulatory capture and with messed-up incentives.

Fortunately, we now have much better alternatives: Decentralized Autonomous Organizations (DAOs) are blockchain-based organizations which run on smart contracts. They offer many benefits compared to legacy technology:

1. Since the ... (read more)
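To make the smart-contract idea above a bit more concrete, here is a minimal Python sketch of the kind of proposal-and-voting logic a DAO encodes on-chain. It is purely illustrative: the member list, quorum rule, and the ToyDAO/propose/vote/passed names are my own assumptions, not a description of any existing DAO framework or of how a UN DAO would actually be built.

```python
# Illustrative sketch only: real DAOs implement this logic as a smart
# contract executed on a blockchain, not as ordinary Python.
from dataclasses import dataclass, field


@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0
    voters: set = field(default_factory=set)


class ToyDAO:
    def __init__(self, members, quorum=0.5):
        self.members = set(members)  # hypothetical voting members
        self.quorum = quorum         # fraction of members that must vote
        self.proposals = []

    def propose(self, description):
        proposal = Proposal(description)
        self.proposals.append(proposal)
        return proposal

    def vote(self, proposal, member, support):
        # Rules are enforced by code rather than by a central administrator.
        if member not in self.members:
            raise ValueError("only members can vote")
        if member in proposal.voters:
            raise ValueError("each member votes only once")
        proposal.voters.add(member)
        if support:
            proposal.votes_for += 1
        else:
            proposal.votes_against += 1

    def passed(self, proposal):
        turnout = len(proposal.voters) / len(self.members)
        return turnout >= self.quorum and proposal.votes_for > proposal.votes_against


# Example: three members, two-thirds quorum
dao = ToyDAO(members=["A", "B", "C"], quorum=2 / 3)
p = dao.propose("Adopt new consensus rules")
dao.vote(p, "A", support=True)
dao.vote(p, "B", support=True)
print(dao.passed(p))  # True: quorum met and a majority voted in favor
```

The point of the analogy is only that in a DAO rules like these are written into code that executes automatically, which is what makes the organization transparent and harder to capture than one run on manual procedures.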