Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).
I reckon my donations this year will amount to about:
I recently reconfigured my giving to be about 85% animal welfare and 15% global health, for reasons similar to those spelled out in this post (I think, though I only skimmed that post, and came to my decision independently).
Some non-fiction books I enjoyed this year were James Gleick's The Information (a sprawling book about information theory, communication, and much else), Wealth and Power by Orville Schell & John Delury (about the intellectual history of modern China), Fawn M. Brodie's No Man Knows My History (about Joseph Smith and the early days of the LDS Church, or Mormonism), and David Stove's The Plato Cult (polemics against Popper, Nozick, idealism, and more). Some of these are obviously rather narrow, and you probably would not enjoy them if you are not at all interested in the subject matter.
You can find it here, but use this power responsibly as I assume the author deleted it for a reason.
I agree that the idea could be restated in a clearer way. Here is an alternative way of saying essentially the same thing:
The project of doing good is a project of making better decisions. One important way of evaluating decisions is to compare the consequences they have to the consequences of alternative choices. Of course we don't know the consequences of our decisions before we make them, so we must predict the consequences that a decision will have.
Those predictions are influenced by some of our beliefs. For example, do I believe animals are sentient? If so, perhaps I should donate more to animal charities, and less to charities aiming to help people. These beliefs pay rent in the sense that they help us make better decisions (they get to occupy some space in our heads since they provide us with benefits). Other beliefs do not influence our predictions about the consequences of important decisions. For example, whether or not I believe that Kanye West is a moral person does not seem important for any choice I care about. It is not decision-relevant, and does not "pay rent".
In order to better predict the consequences of our decisions, it is better to have beliefs that more accurately reflect the world as it is. There are a number of things we can do to get more accurate beliefs -- for example, we can seek out evidence, and reason about said evidence. But we have only so much time and energy to do so. So we should focus that time and energy on the beliefs that actually matter, in that they help us make important decisions.
It's embarrassing for the EA movement, too. It's another SBF situation. Some EAs get control over billions of dollars, and act completely irresponsibly with that power.
Probably disagree? Hard to say for sure since we lack details, but it's not obvious to me that the board acted irresponsibly, let alone to the degree that SBF did. For one, it seems fairly likely that Ilya Sutskever initiated the whole thing, not the EAs on the board. And for another, the board members have fiduciary duties to further the OAI nonprofit's mission, i.e., to ensure that AGI benefits all of humanity. (They do not have a duty to ensure OAI is valued at billions of dollars, except in so far as that helps further its mission.)
If the board members had reason to believe that Sam Altman was acting contrary to OAI's mission of ensuring that AGI benefits all humanity, perhaps moving to fire him was the responsible thing to do (even if it turns out to be bad ex post), and what has been irresponsible are the efforts of investors and others to try to reinstate him. I guess we will know better within the next weeks, but I think it's premature to say that the board acted irresponsibly right now.
That looks like a great interview subject!
Hugo argues that while many people believe that human beings are gullible and easily persuaded of false ideas, in fact people are surprisingly good at telling who is trustworthy, and generally aren’t easily convinced of anything they don’t already think.
That’s because communication couldn’t evolve among humans unless it was beneficial to both the sender and receiver of information. If the receiver generally lost out, they would stop listening entirely.
I'm confused. I thought the general take was "people are tricked into believing things that are not true", not "people are tricked into believing things that are bad for them". The above argument is a reason to think the second claim is false, but not the first claim (since you can have false beliefs that are nonetheless not bad for you).
Also, could you not have communication evolve even if people are gullible, so long as it is good for groups to have unity/cohesion/obedience? Groups and tribes with more gullible members might have outcompeted groups with more independent-minded members if the former were more united/cohesive.
Some other questions:
I have a somewhat different perspective, in that I don't really consider earning "only" 70k a "sacrifice". Maybe it could be considered a "relative sacrifice"? But even that language makes me uncomfortable.
Any sacrifice is relative. You can only sacrifice something if you had or could have had it in the first place.
I think the idea is that lots of money is spent on treating diseases caused by aging, but little is spent on preventing aging in the first place. So I don't see a contradiction.