Kelsey Piper

Comments

Stop scolding people for worrying about monkeypox (Vox article)

To be clear, though, I don't think EAs should worry about monkeypox more than they currently are - EAs are already pretty aware that pandemics can be very bad, are in favor of doing more to detect them early, understand how exponential growth works, and are in a pretty functional information ecosystem where they'll hear about monkeypox if it becomes a matter of greater personal safety concern or if we get to the point where it's a good idea for people to get smallpox vaccinations.

Stop scolding people for worrying about monkeypox (Vox article)

Huh, interesting example of "should you reverse any advice you hear?". I have mostly encountered US articles in which experts from the CDC and elsewhere are quoted telling the public unhelpful things like "very few people have monkeypox in the US right now", "there's no evidence this variant is more transmissible", and "don't panic".

Liars

I hadn't thought of this and I'm actually intrigued - it seems like prediction markets might specifically be good for situations where everyone 'knows' something is up but no one wants to be the person to call it out. The big problem to my mind is the resolution criterion: even if someone's a fraud, it can easily be ten years before there's a big article proving it.

Disclaimer that I've given this less than ten minutes of thought, but I'm now imagining a site pitched at journalists as an aggregated, anonymous 'tip jar' about fraud and misconduct. I think lots of people would at least look at that when deciding which stories to pursue. (Paying sources, or relying on sources who'd gain monetarily from an article about how someone is a fraud, is extremely not okay by journalistic ethics, which substantially limits what you can do here.)

Liars

ooooops, I'm sorry re: the imposter syndrome - do you have any more detail? I don't want to write in a way that causes that!

Liars

I think checking whether results replicate is also important and valuable work which is undervalued/underrewarded, and I'm glad you do it. 

One dynamic that seems unique to fraud investigations specifically is that while most scientists have some research that has data errors or isn't robust, most aren't outright fabricating. Clear evidence of fake data more or less indicts all of that scientist's other research (at least to my mind) and is a massive change to how much they'll tend to be respected and taken seriously. It can also get papers retracted, while (infuriatingly) papers are rarely retracted for errors or lack of robustness.

But in general I think of fraud as similar in some important ways to other bad research, like the lack of incentives for anyone to investigate it or call it out and the frequency with which 'everyone knows' that research is shady or doesn't hold up and yet no one wants to be the one to actually point it out. 

Vox's Future Perfect is hiring

Update: I have since been told that the deadline is going to be sooner, August 4th! So sorry for the late change.

Vox's Future Perfect is hiring

August 18th and unfortunately US only - I'm hoping to change that someday but Vox has not taken the legal and regulatory steps that'd make it possible for them as a US-based company to make hires outside the US.

Climate Change Is, In General, Not An Existential Risk

One way in which geoengineering increases societal fragility: if we pump particles into the atmosphere, find ourselves obliged to keep pumping them to maintain the effects, and then suffer a significant collapse of infrastructure that leaves us unable to continue, the result could be extremely sudden warming and a rapid, unpredictable change in weather patterns. Something would have to go very wrong first, of course, but this could compound an existing catastrophe and take it from recoverable to irrecoverable.

The case for taking AI seriously as a threat to humanity

Hmm. I think I'm thinking of concern for justice-system outcomes as a values difference rather than a reasoning error, and so treating it as legitimate feels appropriate in the same way it feels appropriate to say 'an AI with poorly specified goals could wirehead everyone, which is an example of optimizing for one thing we wanted at the expense of other things we wanted' even though I don't actually feel that confident that my preferences against wireheading everyone are principled and consistent.

I agree that most people's conceptions of fairness are inconsistent, but that's only because most people's values are inconsistent in general; I don't think it means they'd necessarily have my values if they thought about it more. I also think that 'the U.S. government should impose the same prison sentence for the same crime regardless of the race of the defendant' is probably correct under my value system, which probably influences me towards thinking that other people who value it would still value it if they were less confused.

Some instrumental merits of imposing the same prison sentence for the same crime regardless of the race of the defendant:

1. I want to gesture at something in the direction of pluralism: we agree to treat all religions the same, not because they are of equal social value or because we think they are equally correct, but because this is social technology to prevent constant warring over whose religion is correct or of the most social value. I bet some religious beliefs predict less recidivism, but I prefer not using religion to determine sentencing because I think there are a lot of practical benefits to the pluralistic compromise the U.S. uses here. This generalizes to race.

2. There are ways you can greatly exacerbate an initially fairly small difference by updating on it in ways that are all technically correct. I think the classic example is a career path with lots of promotions, where one thing people are optimizing for at each level is the odds of being promoted at the next level; this will result in a very small difference in average ability producing a huge difference in odds of reaching the highest level (a quick simulation sketch follows this list). I think it is good for systems like the U.S. justice system to try to adopt procedures that avoid this, where doing so is sane and the tradeoffs relatively small.

3. (Least important.) Justice systems run on social trust. If they use processes which undermine social trust, even if they do this because the public is objectively unreasonable, they will work less well: people will be less likely to report crimes, cooperate with police, testify, serve on juries, reach truthful verdicts as jurors, etc. I know that when crimes are committed against me, I weigh whether I expect the justice system to behave according to my values when deciding whether to report the crimes. If this is common, there's reason for justice systems to use processes that people consider aligned with their values. If we want to change what people value, we should use instruments for this other than the justice system.
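
To make item 2 concrete, here's a minimal simulation sketch. Every number in it is a made-up illustration rather than anything from the comment above: the 0.1-standard-deviation ability gap, the four promotion rounds, the 20% promotion rate, and the noisy per-round evaluations are all assumed parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two equally sized groups. Group B's mean "ability" is higher by a
# hypothetical 0.1 standard deviations - a gap so small that in a
# head-to-head comparison B only wins about 53% of the time.
n = 500_000
ability_a = rng.normal(0.0, 1.0, n)
ability_b = rng.normal(0.1, 1.0, n)

# Four promotion rounds. Each round, every surviving candidate gets a
# noisy performance score, and the top 20% of the pooled scores advance.
for level in range(1, 5):
    score_a = ability_a + rng.normal(0.0, 1.0, ability_a.size)
    score_b = ability_b + rng.normal(0.0, 1.0, ability_b.size)
    cutoff = np.quantile(np.concatenate([score_a, score_b]), 0.80)
    ability_a = ability_a[score_a > cutoff]
    ability_b = ability_b[score_b > cutoff]
    share_b = ability_b.size / (ability_a.size + ability_b.size)
    print(f"level {level}: group B holds {share_b:.1%} of positions")
```

In runs like this, group B's share of positions climbs round over round, ending up far more skewed than the barely-detectable underlying gap would suggest, and the amplification grows with more levels or more selective promotion rates - even though every individual promotion decision is a technically correct update on the available evidence.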

The case for taking AI seriously as a threat to humanity

This is not for criminal investigation. It's for estimating, once a person has been convicted of a crime, when to release them (by estimating how likely they are to commit another crime).
