david_reinstein

I am an academic economist and a 'Distinguished Researcher' at Rethink Priorities (https://www.rethinkpriorities.org/our-team)

My previous and ongoing research focuses on the determinants and motivators of charitable giving (propensity, amounts, and 'to which cause?'), drivers of and barriers to effective giving, and the impact of pro-social behavior and social preferences in market contexts.

I'm working to impact EA fundraising and marketing; see https://daaronr.github.io/ea_giving_barriers/index.html, innovationsinfundraising.org, and giveifyouwin.org.

Twitter: @givingtools

Comments

david_reinstein's Shortform

Variant of the Chinese room argument? This seems ironclad to me; what am I missing?

My claims:

Claim: AI feelings are unknowable. Maybe an advanced AI can have positive and negative sensations. But how would we ever know which are which (or how extreme they are)?

Corollary: If we cannot know which are which, we can do nothing that we know will improve or worsen the "AI feelings"; so the question is not decision-relevant.

Justification I: As we ourselves are bio-based living things, we can infer from the apparent sensations and expressions of bio-based living things that they are happy/suffering. But for non-bio things, this analogy seems highly flawed. If a dust cloud converges on a ‘smiling face’, we should not think it is happy.

Justification II (related): AI, as I understand it, is coded to learn to solve problems and maximize things: to optimize certain outcomes or do things it "thinks" will yield positive feedback.

We might think, then, that the AI 'wants' to solve these problems, and that things that bring it closer to the solution make it 'happier'. But why should we think this? For all we know, it may feel pain as it gets closer to the objective, and pleasure when it moves away from it.

Does it tell us that coming closer to the solution makes it happy? That may be merely because we programmed it to learn how to come to a solution, and one thing it 'thinks' will help is telling us it gets pleasure from doing so, even though it actually feels pain.

A colleague responded:

If we get the AI through a search process (like training a neural network) then there's a reason to believe that AI would feel positive sensations (if any sensations at all) from achieving its objective since an AI that feels positive sensations would perform better at its objective than an AI that feels negative sensations. So, the AI that better optimizes for the objective would be more likely to result from the search process. This feels analogous to how we judge bio-based living things in that we assume that humans/animals/others seek to do those things that make them feel good, and we find that the positive sensations of humans are tied closely to those things that evolution would have been optimizing for. A version of a human that felt pain instead of pleasure from eating sugary food would not have performed as well on evolution's optimization criteria.

OK, but this seems to hold only if we:

  1. Knew how to induce or identify "good feelings"
  2. Decided to induce these and tie them in as a reward for getting close to the optimum.

But how on earth would we know how to do (1) (without biology, at least), and why would we bother doing so? Couldn't the machine be just as good an optimizer without getting a 'feeling' reward from optimizing?
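To illustrate what I mean (this is my own minimal sketch, not anything from the colleague's reply; the toy problem, function names, and numbers are all made up): in the simplest kind of training, the "objective" is just a scalar that an update rule consumes. Nothing in the procedure attaches a positive or negative valence to that number, and the observable behavior would be identical either way.

```python
# A minimal, hypothetical sketch: plain gradient descent on a toy 1-D regression.
# The loss is just a number fed into an arithmetic update; the procedure never
# assigns it a "good" or "bad" feeling, and an outside observer only ever sees
# the resulting behavior (the weight moving toward the optimum).

def loss(w, data):
    """Mean squared error on a toy 1-D regression problem."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    """Gradient of the loss with respect to the single weight w."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def train(data, steps=200, lr=0.05):
    """Repeatedly move w in the direction that reduces the loss."""
    w = 0.0
    for _ in range(steps):
        w -= lr * grad(w, data)  # the update uses the gradient and nothing else
    return w

if __name__ == "__main__":
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x
    w = train(data)
    print(f"learned weight: {w:.3f}, final loss: {loss(w, data):.4f}")
```

Whether real, large-scale training selects for anything like "feelings" tied to the objective is exactly the question at issue; the sketch only shows that optimization per se doesn't seem to require them.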

Please tell me why I'm wrong.

Part 1: EA tech work is inefficiently allocated & bad for technical career capital

My impression is that EA orgs are far more mission-aligned and have more scope for cooperation than typical nonprofits and charities, which tend to compete with each other and are very concerned with self-preservation.

Open Thread: July 2021

I agree. I was having some similar ideas. I'm particularly thinking about data surrounding effective giving choices and attitudes towards 'EA issues'.

I was thinking of some tagging of EA-relevant data on kaggle.com, but the GitHub repo idea seems great.

The EA Forum Podcast is up and running

Airtable 'sign up form'... here

If you want to be a reader/editor/commenter/organizer, please sign up and/or contact me or @dothemath.

[Link] Reading the EA Forum; audio content

The "EA Forum podcast" collab seems to be catching on. If you want to be a reader/editor/commenter/organizer, please sign up here and/or contact me or @dothemath.

The EA Forum Podcast is up and running

Fwiw my podcast with more recordings is HERE. @dothemath and I are in contact and we will probably merge the organization of our content at some point.

I'm also planning to make an airtable (database) to keep track of this and for people to sign up to do readings.

Which EA forum posts would you most like narrated?

(Fwiw my podcast with recordings is HERE). @dothemath and I are in contact and we will probably merge the organization of our content at some point.

I'm also planning to make an airtable (database) to keep track of this and for people to sign up to do readings.

[Link] Reading the EA Forum; audio content

So far the episodes I recorded have about 15-20 listens each (although I'm not sure how many listened to the whole thing vs. just a curious snippet).

That seems decent, as I haven't promoted it much yet. Probably worth continuing, but not yet worth investing a lot in production value... until listenership goes, say, over 100 per episode.

[Link] Reading the EA Forum; audio content

and the audio versions averaged 6% of the number of downloads of the short versions.

What are the base rate numbers here?

EA syllabi and teaching materials

Thank you!

Other than the one from David Bernard and Matthias Endres, do you know if any of these are Economics-focused and/or involve formal maths and/or quantitative content?
