I'm personally still reserving judgment until the dust settles. In this situation, given the animosity towards SBF from customers, investors, etc., there are clear incentives to speak out if you believe there was fraud, and to stay quiet if you believe it was an honest (even if terrible) mistake. So the evidence we're seeing is likely biased.
Still, a mistake of this magnitude seems at the very least grossly negligent. You can't defend both SBF's integrity and his competence after this. And I agree that it's hard to know whether you're competent...
Well, I also think that the core argument is not really valid. Engagement does not require conceding that the other person is right.
The way I understand it, the core of the argument is that AI fears are based on taking a pseudo-trait like "intelligence" and extrapolating it to a "super" regime. The author claims that this is philosophical nonsense and thus there's nothing to worry about. I reject the claim that AI fears are based on those pseudo-traits.
AI risk is not in principle about intelligence or agency. A sufficient amount of brute-force search is enough to be...
Upvoted because I think the linked post raises a genuinely valid objection, even though it does not seem devastating to me and is somewhat obscured by a lot of philosophy that also seems not that relevant.
There was a linkpost for this on LessWrong a few days ago; I think the discussion in the comments is good.
I quite liked this post, but one minor quibble: engram preservation still does not directly save lives; it buys us an indefinite amount of time, which is hopefully enough to develop the technology to actually save them.
You could say that it's impossible to save a life since there's always a small chance of untimely death, but let's say we consider a life "saved" when the chance of death in unwanted conditions is below some threshold, like 10%.
I would say widespread engram preservation reduces the chance of untimely death from ~100% (assuming no longevity advances in the near future) to roughly the probability of x-risks. Depending on the threshold, you might also need to reduce x-risks before these lives count as "saved".
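To make the arithmetic concrete (the x-risk figure below is purely illustrative, not an estimate anyone here has made):

$$P(\text{untimely death} \mid \text{engrams preserved}) \approx P(\text{x-risk})$$

If, say, $P(\text{x-risk}) = 0.15$, then $0.15 > 0.10$, so under a 10% threshold these lives would not count as "saved" until x-risk itself is pushed below 10%.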
Well, capital accumulation does raise productivity, so traditional pro-growth policies are not useless. But they are not enough, as you argue.
Ultimately, we need either technologies that directly raise productivity (like atomically precise manufacturing, fusion energy or other cheap energy source) or technologies that accelerate R&D and commercial adoption. Apart from AI and increasing global population, I can think of four:
From the longtermist perspective, degrowth is not that bad as long as we are eventually able to grow again. For example, we could hypothetically halt or reverse some growth and work on creating safe AGI or nanotechnology or human enhancement or space exploration until we are able to bypass Earth's ecological limits.
A small scale version of this happened during the pandemic, when economic activity was greatly reduced until the situation stabilized and we had better tools to fight the virus.
But let's not be mistaken: growth (perhaps measured by something oth...
Great question. The paper does mention micronutrients but does not try to evaluate which of these advantages had a greater influence. I used the back-of-the-envelope calculation in footnote 6 as a sanity check that the effect size is plausible but I don't know enough about nutrition to have any intuition on this.
Even if you think all sentient life is net negative, extinction is not a wise choice. Unless you completely destroy Earth, animal life will probably evolve again, so there will be suffering in the future.
Moreover, what if there are sentient aliens somewhere? What if some form of panpsychism is true and there is consciousness embedded in most systems? What if some multiverse theory is true?
If you want to truly end suffering, your best bet would be something like creating a non-sentient AGI that transforms everything into some non-sentient matter, and then sp...
I don't think embryo selection is remotely a central example of 20th century eugenics, even if it involves 'genetic enhancement'. No one is being killed, sterilized, or otherwise subjected to nonconsensual treatments.
In the end, it's no different from other non-genetic interventions to 'improve' the general population, like the education system. Education transforms children for life in a way that many consider socially beneficial.
Why are we okay with having such massive interventions on a child's environment (30 hours a week for 12+ years!), but no...
Strongly agree, but I want to emphasize something. The word 'better' is doing a lot of work here.
I want to be replaced by my better future self, but not my future self who is great at rationalizing their decisions.
I want to be replaced by a better partner, but not by someone who is great at manipulating people into a relationship.
I want to be replaced by a better employee, but not by one who is great at getting the favor of the manager.
I want to be replaced by a machine which can do my job better, but not by an unaligned AGI.
I want to be replaced by better...
The fact that risk from advanced AI is one of the top cause areas is, to me, an example of at least part of EA being technopessimist about a concrete technology. So I don't think there is any fundamental incompatibility, nor that the burden of proof is particularly high, as long as we are talking about specific classes of technology.
If technopessimism requires believing that most new technology is net harmful, that's a very different question, and probably does not even have a well-defined answer.
(When I say 'we' I mean 'me, if I had control over the EA community'. This is just my view, and the actual reasons behind funding decisions are probably somewhat different.)
Well, I'm not sure about the numbers but I'd say a pretty substantial percentage of EA funding and donations is going to GiveWell-style global health initiatives. So it's not like we are ignoring the plight of people right now.
The reason why there is more money than we can spend is that we don't know a lot of effective interventions to reduce, say, pandemic risk, which scale well with mor...
Turning the United Nations into a Decentralized Autonomous Organization
The UN is now running on ancient technology[source], is extremely centralized[source], and uses outdated voting methods and consensus rules[source]. The result is a slow, inefficient organization, vulnerable to regulatory capture and with messed-up incentives.
Fortunately, we now have much better alternatives: Decentralized Autonomous Organizations (DAOs) are blockchain-based organizations which run on smart contracts. They offer many benefits compared to legacy technology:
1. Since the ...
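To give a flavor of what "rules enforced by code" means in practice, here is a minimal, hypothetical sketch of DAO-style voting logic. It's written in plain Python for readability; a real DAO would implement this as a smart contract on a blockchain, and the names, members, and passing rule here are all invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0

@dataclass
class ToyDAO:
    # Hypothetical, simplified stand-in for an on-chain governance contract.
    members: set = field(default_factory=set)
    proposals: dict = field(default_factory=dict)
    voted: set = field(default_factory=set)  # (member, proposal_id) pairs

    def submit(self, proposal_id: str, description: str) -> None:
        self.proposals[proposal_id] = Proposal(description)

    def vote(self, member: str, proposal_id: str, support: bool) -> None:
        # Rules are enforced by code: non-members and double votes are
        # rejected automatically, with no committee in the loop.
        if member not in self.members:
            raise PermissionError(f"{member} is not a member")
        if (member, proposal_id) in self.voted:
            raise ValueError(f"{member} already voted on {proposal_id}")
        self.voted.add((member, proposal_id))
        proposal = self.proposals[proposal_id]
        if support:
            proposal.votes_for += 1
        else:
            proposal.votes_against += 1

    def passed(self, proposal_id: str) -> bool:
        # Simple majority of votes cast; real DAOs encode richer quorum
        # and supermajority rules, but the principle is the same.
        proposal = self.proposals[proposal_id]
        return proposal.votes_for > proposal.votes_against

# Invented example usage:
dao = ToyDAO(members={"France", "Kenya", "Brazil"})
dao.submit("res-1", "Adopt the proposed climate resolution")
dao.vote("France", "res-1", True)
dao.vote("Kenya", "res-1", True)
dao.vote("Brazil", "res-1", False)
print(dao.passed("res-1"))  # True: 2 votes for, 1 against
```

The point of the design is that membership checks, double-vote prevention, and the passing rule are all explicit in the code, so they execute the same way for everyone rather than depending on anyone's discretion.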
Hi, thank you for your post, and I'm sorry to hear about your (and others') bad experience in EA. However, if your experience of EA has mostly been in the Bay Area, I think you might have an unrepresentative perspective on EA as a whole. Most of the worst incidents of this type that I've heard about in EA have taken place in the Bay Area; I'm not sure why.
I've mostly been involved in the Western European and Spanish-speaking EA communities, and as far as I know there have been far fewer incidents here. Of course, this might just be because these comm...