
Erich_Grunewald

Associate Researcher @ Institute for AI Policy and Strategy
2060 karma · Joined Dec 2020 · Working (6-15 years) · Berlin, Germany · www.erichgrunewald.com

Bio

Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).

Comments (255)

I reckon my donations this year will amount to about:

  • $3.7K to animal welfare, via Effektiv Spenden.
  • $1.7K to global health and development, via Effektiv Spenden.
  • $1.1K to the Donation Election Fund.
  • And my labour to mitigating risks from AI. In a way, this amounts to far more than the above, given that I would be earning 2x+ what I earn now if I were still doing what I did before, i.e., software engineering.

However, I recently reconfigured my giving to be about 85% animal welfare and 15% global health, for reasons similar to those spelled out in this post (I think; I only skimmed that post and came to my decision independently).

Some non-fiction books I enjoyed this year were James Gleick's The Information (a sprawling book about information theory, communication, and much else), Wealth and Power by Orville Schell & John Delury (about the intellectual history of modern China), Fawn M. Brodie's No Man Knows My History (about Joseph Smith and the early days of the LDS Church, or Mormonism), and David Stove's The Plato Cult (polemics against Popper, Nozick, idealism, and more). Some of these are obviously rather narrow, and you probably would not enjoy them unless you are interested in their subject matter.

You can find it here, but use this power responsibly, as I assume the author deleted it for a reason.

I agree that the idea could be restated in a clearer way. Here is an alternative way of saying essentially the same thing:

The project of doing good is a project of making better decisions. One important way of evaluating decisions is to compare their consequences to the consequences of alternative choices. Of course we don't know the consequences of our decisions before we make them, so we must predict them.

Those predictions are influenced by some of our beliefs. For example, do I believe animals are sentient? If so, perhaps I should donate more to animal charities, and less to charities aiming to help people. These beliefs pay rent in the sense that they help us make better decisions (they get to occupy some space in our heads since they provide us with benefits). Other beliefs do not influence our predictions about the consequences of important decisions. For example, whether or not I believe that Kanye West is a moral person does not seem important for any choice I care about. It is not decision-relevant, and does not "pay rent".

To predict the consequences of our decisions well, we need beliefs that accurately reflect the world as it is. There are a number of things we can do to get more accurate beliefs -- for example, we can seek out evidence and reason about it. But we have only so much time and energy to do so. So we should focus that time and energy on the beliefs that actually matter, in that they help us make important decisions.

It's embarrassing for the EA movement, too. It's another SBF situation. Some EAs get control over billions of dollars, and act completely irresponsibly with that power.

Probably disagree? Hard to say for sure since we lack details, but it's not obvious to me that the board acted irresponsibly, let alone to the degree that SBF did. I guess one, it seems fairly likely that Ilya Sutskever initiated the whole thing, not the EAs on the board. And two, the board members have fiduciary duties to further the OAI nonprofit's mission, i.e., to ensure that AGI benefits all of humanity. (They do not have a duty to ensure OAI is valued at billions of dollars, except in so far as that helps further its mission.) 

If the board members had reason to believe that Sam Altman was acting contrary to OAI's mission of ensuring that AGI benefits all humanity, perhaps moving to fire him was the responsible thing to do (even if it turns out to be bad ex post), and what has been irresponsible is the effort by investors and others to reinstate him. I guess we will know better within the next few weeks, but I think it's premature to say right now that the board acted irresponsibly.

That looks like a great interview subject!

Hugo argues that while many people believe that human beings are gullible and easily persuaded of false ideas, in fact people are surprisingly good at telling who is trustworthy, and generally aren’t easily convinced of anything they don’t already think.

That’s because communication couldn’t evolve among humans unless it was beneficial to both the sender and receiver of information. If the receiver generally lost out, they would stop listening entirely.

I'm confused. I thought the general take was "people are tricked into believing things that are not true", not "people are tricked into believing things that are bad for them". The above argument is a reason to think the second claim is false, but not the first claim (since you can have false beliefs that are nonetheless not bad for you).

Also, could you not have communication evolve even if people are gullible, so long as it is good for groups to have unity/cohesion/obedience? Groups and tribes with more gullible members might have outcompeted groups with more independent-minded members if the former were more united/cohesive.

Some other questions:

  • What does he make of the claim that all cognitive biases at heart are just confirmation bias based around a few "fundamental prior" beliefs?
  • Is he an atheist, and if so what does he make of humanity's history of belief in religion? I am thinking especially of times and places that were especially fertile ground for new religious ideas, e.g., the Mediterranean prior to and during the spread of Christianity, the Second Great Awakening, and the Taiping Rebellion in China. I think those were times when many people readily believed false ideas -- why?
  • On social media and fake news, can he imagine any plausible information ecologies that would cause major problems? How would those look, and why will we avoid them?
  • Similarly, can he imagine an ideal information ecology? How different is it from what we have today, and how much would things change if we could switch over?
  • You could argue that fake news is a problem not because it convinces people of falsehoods, but because it spurs them into action, or extremizes their beliefs (e.g., by providing more extreme evidence of their beliefs' truth than does reality). What does he make of that argument?
  • Presumably people sometimes do change their mind. What's his model of how that typically happens? (Presumably it mostly involves things you would not call persuasion.)
  • Does he think LLMs and voice synthesis will be widely used for scams in the next decade? If not, why not? If yes, does scamming not involve persuasion?
  • Why did the ad media industry have over $800B in revenue last year?

I have a little bit of a different perspective in that I don't really consider earning "only" 70k a "sacrifice". Maybe it could be considered a "relative sacrifice"? But even that language makes me uncomfortable.

Any sacrifice is relative. You can only sacrifice something if you had or could have had it in the first place.

  • Do you think it was a mistake (ex ante) for some folks to de-emphasize earning to give a few years back?
  • What sorts of field building efforts around earning to give are you more excited about? E.g., focusing on promising students versus trying to recruit high-net-worth individuals (aka rich people).
  • Which of your past donations do you feel best/worst about?

You may draw some ideas from a previous discussion of this topic here.

Nice work. Do you have any intuitions about whether the same patterns also apply to federal regulations in the US?
