All of technicalities's Comments + Replies

Career advice for Australian science undergrad interested in welfare biology

Not a bio guy, but in general: talk to more people! List people you think are doing good work and ask em directly.

Also generically: try to do some real work in as many of them as you can. I don't know how common undergrad research assistants are in your fields, or in Australian unis, but it should be doable (if you're handling your courseload ok).

PS: Love the username.

ripbennyharvey (4d): Thanks so much for replying! Really appreciate the advice. I definitely should try to contact some more experts, thanks for that push. Getting some more experience is a good idea too, it's a bit tricky at the moment due to uncertainties with COVID but as soon as things start to open back up, I'll do my best to get in contact. I'm sure it'll be useful regardless of the field I go into. Means a lot that you took the time to reply, I'll do my best to follow your advice and I'll hopefully leave an update here at some point to say how I'm going with it. P.S. thanks so much, I see we both have refined tastes :P
Writing about my job: Data Scientist

Big old US >> UK pay gap imo. Partial explanation for that: 32 days holiday in the UK vs 10 days US. 

(My base pay was 85% of total; 100% seems pretty normal in UK tech.)

Other big factor: this was in a sorta sleepy industry that tacitly trades off money for working the contracted 37.5 h week, unlike say startups. Per hour it was decent, particularly given 10% study time. 

If we say hustling places have a 50 h week (which is what one fancy startup actually told me they expected), then 41 h looks fine.
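A rough per-hour version of the comparison, using the holiday, hours, and study-time figures above but with made-up salary numbers (swap in your own offers):

```python
# Per-hour pay comparison. Holiday counts (32 vs 10 days), the 37.5 h
# contracted week, the 10% study time, and the 50 h startup week come from
# the comment above; the salary figures are invented placeholders.

UK_SALARY = 60_000   # hypothetical UK total comp
US_SALARY = 120_000  # hypothetical US total comp (same units)

def hours_per_year(week_hours, holiday_days, study_fraction=0.0):
    """Working hours per year after holidays and study time."""
    weeks_worked = 52 - holiday_days / 5   # assume 5-day working weeks
    return weeks_worked * week_hours * (1 - study_fraction)

uk_hours = hours_per_year(37.5, holiday_days=32, study_fraction=0.10)
us_hours = hours_per_year(50.0, holiday_days=10)

print(f"UK: {UK_SALARY / uk_hours:.0f} per worked hour over {uk_hours:.0f} h/year")
print(f"US: {US_SALARY / us_hours:.0f} per worked hour over {us_hours:.0f} h/year")
```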

The case against “EA cause areas”

Agree with the spirit - there is too much herding, and I would love for Schubert's distinctions to be core concepts. However, I think the problem you describe appears in the gap between the core orgs and the community, and might be pretty hard to fix as a result.

What material implies that EA is only about ~4 things?

  • the Funds
  • semi-official intro talks and Fellowship syllabi
  • the landing page has 3 main causes and mentions 6 more
  • the revealed preferences of what people say they're working on, the distribution of object-level post tags

What emphasises cause diverg... (read more)

How to explain AI risk/EA concepts to family and friends?

Brian Christian is incredibly good at tying the short-term concerns everyone already knows about to the long-term concerns. He's done tons of talks and podcasts - not sure which is best, but if 3 hours of heavy content isn't a problem, the 80k one is good.

There's already a completely mainstream x-risk: nuclear weapons (and, popularly, climate change). It could be good to compare AI to these accepted handles. The second species argument can be made pretty intuitive too.

Bonus: here's what I told my mum.

AIs are getting better quite fast, and we will probably

... (read more)
Undergraduate Making Life-Altering Choices While Sober, Please Advise

[I don't know you, so please feel free to completely ignore any of the following.]

I personally know three EAs who simply aren't constituted to put up with the fake work and weak authoritarianism of college. I expect any of them to do great things. Two other brilliant ones are Chris Olah and Kelsey Piper. (I highly recommend Piper's writing on the topic for deep practical insights and as a way of shifting the balance of responsibility partially off yourself and onto the ruinous rigid bureaucracy you are in. She had many of the same problems as you, and thin... (read more)

What is an example of recent, tangible progress in AI safety research?

Not recent-recent, but I also really like Carey's 2017 work on CIRL (https://arxiv.org/pdf/1709.06275.pdf). Picks a small, well-defined problem and hammers it flush into the ground. "When exactly does this toy system go bad?"

What is an example of recent, tangible progress in AI safety research?

If we take "tangible" to mean executable:

But as Kurt Lewin once said "there's nothing so practical as a good theory". In particular, theory scales automatically and conceptual work can stop us from wasting effort on the wrong things.

  • CAIS (2019) pivots away from the classic age
... (read more)
A Viral License for AI Safety

I think you're right, see my reply to Ivan.

A Viral License for AI Safety

I think I generalised too quickly in my comment; I saw "virality" and "any later version" and assumed the worst. But of course we can take into account AGPL backfiring when we design this licence!

One nice side effect of even a toothless AI Safety Licence: it puts a reminder about safety at the top of every repo. Sure, no one reads licences (and people often ignore health and safety rules when they get in the way, even at their own risk). But maybe it makes things a bit more tangible, in the way LICENSE.md gives law a foothold in the minds of devs.

Matsés - Are languages providing epistemic certainty of statements not of the interest of the EA community?

That's cool! I wonder if they suffer from the same ambiguity as epistemic adjectives in English though* (which would suggest that we should skip straight to numerical assignments: probabilities or belief functions).

Anecdotally, it's quite tiring to put credence levels on everything. When I started my blog I began by putting a probability on all major claims (and even wrote a script to hide this behind a popup to minimise aesthetic damage). But I soon stopped.

For important things (like Forum posts?) it's probably worth the effort, but even a document-level ... (read more)

michaelchen (1mo): Something similar to explicit credence levels on claims is how Arbital (https://arbital.com/) has inline images of probability distributions. Users can vote on a certain probability and contribute to the probability distribution.
michaelchen (1mo): Interesting! Could you provide links to some of these blog posts?
mikbp (2mo): I think one very cool feature of having something like this embedded in the language is that you learn to do it automatically. I can think of a couple of examples:
  • Cases: in English, Catalan, or Spanish one does not mark the case of a noun, but in German one does. This makes learning German more difficult, but if you are native, or after practising a lot, it becomes automatic.
  • Directions: I recall having read about a language that does not give relative directions (right, left) but absolute ones (East, West). That sounds like a very difficult thing to do for us, but for the people who speak that language it comes naturally.
My guess is that if we fluently spoke Matsés, it would be just as natural for us to indicate the degree of certainty. And that would be very cool :-)
A Viral License for AI Safety

This is a neat idea, and unlike many safety policy ideas it has scaling built in.

However, I think the evidence from the original GPL suggests that this wouldn't work. Large companies are extremely careful to just not use GPL software, and this includes just making their own closed source implementations.* Things like the Skype case are the exception, which make other companies even more careful not to use GPL things. All of this has caused GPL licencing to fall massively in the last decade.** I can't find stats, but I predict that GPL projects will have mu... (read more)

Daniel_Eth (1mo): I'm not sure how well the analogy holds. With GPL, for-profit companies would lose their profits. With the AI Safety analog, they'd be able to keep 100% of their profits, so long as they followed XYZ safety protocols (which would be pushing them towards goals they want anyway – none of the major tech companies wants to cause human extinction).
IvanVendrov (2mo): This is a helpful counterpoint. From big tech companies' perspective, I think that GPL (and especially AGPL) is close to the worst-case scenario, since it destroys the ability to have proprietary software and can pose an existential risk to the company by empowering their competitors. Most of the specific clauses we discuss are not nearly so dangerous – they at most impose some small overhead on using or releasing the code. Corrigibility is the only clause that I can see being comparably dangerous: depending on the mechanism used to create future versions of the license, companies may feel they are giving too much control over their future to a third party.
Help me find the crux between EA/XR and Progress Studies

Aschenbrenner's model strikes me as a synthesis of the two intellectual programmes, and it doesn't get enough attention.

Why should we *not* put effort into AI safety research?

Robin Hanson is the best critic imo. He has many arguments, or one very developed one, but big pieces are:

  • Innovation in general is not very "lumpy" (discontinuous). So we should assume that AI innovation will also not be. So no one AI lab will pull far ahead of the others at AGI time. So there won't be a 'singleton', a hugely dangerous world-controlling system.
     
  • Long timelines [100 years+] + fire alarms
     
  • Opportunity cost of spending / shouting now 
    "we are far from human level AGI now, we'll get more warnings as we get closer, and by saving $ y
... (read more)
What are your favorite examples of moral heroism/altruism in movies and books?

Spoilers for Unsong:

Jalaketu identifies the worst thing in the world - hell - and sacrifices everything, including his own virtue and impartiality, to destroy it. It is the strongest depiction of the second-order consistency, second-order glory of consequentialism I know. (But also a terrible tradeoff.)

Voting reform seems overrated

Shouldn't the title be "Proportional Representation seems overrated"?

PR is often what people mean by voting reform, in the UK, but there are options without these problems, e.g. approval voting.

What are your main reservations about identifying as an effective altruist?

I see "effective altruist" as a dodgy shorthand for the full term: "aspiring effective altruist". I'm happy to identify as the latter in writing (though it is too clunky for speech).

I'm the same. I'm a "member" and even a "community leader" in the "EA movement", and happy to identify as such. But calling yourself an "Effective Altruist" is to call yourself an "altruist", at least in the ears of someone who isn't familiar with the movement, and I think it will sound morally pretentious or self-aggrandizing. Generally the label "altruist" should be given to a person by others, not claimed, if it should be applied to any specific individual at all; even then it seems a bit weird, whoever is bestowing it.

I scraped all public "Effective Altruists" Goodreads reading lists

I call shotgun on "On Certainty", one of the most-wanted books. (The author and I have butted heads before. He is much better at headbutting than me.)

AGI risk: analogies & arguments

I felt much the same writing it. I'll add that to my content note, thanks.

AGI risk: analogies & arguments

The opposite post (reasons not to worry) could be good as well. e.g.

EA capital allocation is an inner ring

In this one, it's that there is no main body, just a gesture off-screen. Only a small minority of readers will be familiar enough with the funding apparatus to complete your "exercise to the reader..." Maybe you're writing for that small minority, but it's fair for the rest to get annoyed.

In past ones (from memory), it's again this sense of pushing work onto the reader. Sense of "go work it out".

Milan_Griffes (4mo): Yes, I want people to think about this for themselves. (I don't think that's esoteric.)
EA capital allocation is an inner ring

It might be better to collate and condense your series into one post, once it's finished (or starting now). These individual posts really aren't convincing, and probably hurt your case if anything. Part of that is the Forum's conventions about content being standalone. But the rest is clarity and evidence: your chosen style is too esoteric.

I don't think it's our unwillingness to hear you out. Some of the most well-regarded posts on here are equally fundamental critiques of EA trends, but written persuasively / directly:

https://forum.effectivealtruism.org/p... (read more)

Milan_Griffes (4mo): What about my style stands out as esoteric? (From my perspective, I'm trying to be as clear & straightforward as possible in the main body of each post. I am also using poetic quotes at the top of some of the posts.)
Can a Vegan Diet Be Healthy? A Literature Review

Worth noting that multivitamins are associated with very slightly increased mortality in the general population. Cochrane put this down to them overdosing vitamins A and E and beta-carotene, which I don't expect vegans to be deficient in, so the finding might transfer. (Sounds like you've done blood tests though, so ignore me if it helps you.)

https://www.cochrane.org/CD007176/LIVER_antioxidant-supplements-for-prevention-of-mortality-in-healthy-participants-and-patients-with-various-diseases

MichaelStJules (4mo): I haven't dug through the studies, but these were specific supplements, not multivitamins, right? I'd imagine ~100% recommended daily value in a multivitamin and ~200% in your entire diet is safe for pretty much any nutrient, but ya, some multivitamins go way over for some nutrients (although are typically below upper limits). Supplements for specific nutrients may be worse. I use https://labdoor.com/ to pick supplements. The multivitamin I'm using now (https://labdoor.com/review/deva-vegan-multivitamin) is poorly-rated, but significantly above average for safety, including all nutrients below upper limits, but maybe the upper limits are set too high.
What are some potential coordination failures in our community?

The cycle of people coming up with ideas about how to organise people into projects, or prevent redundant posts, or make the Forum more accretive, being forgotten a week later. i.e. We fail to coordinate on coordination projects.

Progress Open Thread: December 2020

Can anyone in clean meat verify this news? The last time I checked, we were still years off market release.

Conditional on it being a real shock, hooray!

https://www.theguardian.com/environment/2020/dec/02/no-kill-lab-grown-meat-to-go-on-sale-for-first-time

EdoArad (7mo): Update here: https://www.cnbc.com/2020/12/18/singapore-restaurant-first-ever-to-serve-eat-just-lab-grown-chicken.html – chicken nuggets for $23, a mix of cells and plants.
EdoArad (8mo): From my understanding (by asking people in clean meat), the deal is with Just, which apparently has a stem-cell-based chicken product. This is probably still very expensive and limited in its taste and texture, although reports on similar products, such as SuperMeat's The Chicken (https://thechicken.kitchen/), say: "Feedback from multiple tasting panels was consistent that it was indistinguishable from conventionally manufactured chicken, and simply a great-tasting chicken burger."
The Case for Space: A Longtermist Alternative to Existential Threat Reduction

Some more prior art, on Earth vs off-world "lifeboats". See also section 4.2 here for a model of mining Mercury (for solar panels, not habitats).

The academic contribution to AI safety seems large

This makes sense. I don't mean to imply that we don't need direct work.

AI strategy people have thought a lot about the capabilities : safety ratio, but it'd be interesting to think about the ratio of complementary parts of safety you mention. Ben Garfinkel notes that e.g. reward engineering work (by alignment researchers) is dual-use; it's not hard to imagine scenarios where lots of progress in reward engineering without corresponding progress in inner alignment could hurt us.

The academic contribution to AI safety seems large

Thanks!

"research done by people who are trying to do something else will probably end up not being very helpful for some of the core problems."

Yeah, it'd be good to break AGI control down more, to see if there are classes of problem where we should expect indirect work to be much less useful. But this particular model already has enough degrees of freedom to make me nervous.

"I think that it might be easier to assign a value to the discount factor by assessing the total contributions of EA safety and non-EA safety."

That would be great! I used headcount bec... (read more)

The academic contribution to AI safety seems large

An important source of capabilities / safety overlap, via Ben Garfinkel:

Let’s say you’re trying to develop a robotic system that can clean a house as well as a human house-cleaner can... Basically, you’ll find that if you try to do this today, it’s really hard to do that. A lot of traditional techniques that people use to train these sorts of systems involve reinforcement learning with essentially a hand-specified reward function...
One issue you’ll find is that the robot is probably doing totally horrible things because
... (read more)
The academic contribution to AI safety seems large

Thanks for this, I've flagged this in the main text. Should've paid more attention to my confusion on reading their old announcement!

The academic contribution to AI safety seems large

If the above strikes you as wrong (and not just vague), you could copy the Guesstimate, edit the parameters, and comment below.
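For anyone who'd rather tinker in code than in Guesstimate, here is a minimal Monte Carlo sketch of the kind of headcount-times-discount estimate discussed above. The parameter ranges are invented placeholders, not the post's actual values; edit them and re-run to see how the academic share moves.

```python
import random

def sample_academic_share():
    # All ranges below are illustrative assumptions, not the post's numbers.
    ea_safety_researchers = random.uniform(50, 300)        # direct EA safety headcount
    academic_researchers = random.uniform(1_000, 10_000)   # academics in adjacent fields
    relevance_discount = random.uniform(0.01, 0.20)        # fraction of academic work that helps safety
    academic_contribution = academic_researchers * relevance_discount
    # Share of total safety-relevant work coming from academia.
    return academic_contribution / (academic_contribution + ea_safety_researchers)

shares = sorted(sample_academic_share() for _ in range(10_000))
print("median academic share:", round(shares[5_000], 2))
print("90% interval:", round(shares[500], 2), "to", round(shares[9_500], 2))
```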

How can I apply person-affecting views to Effective Altruism?

Welcome!

It's a common view. Some GiveWell staff hold this view, and indeed most of their work involves short-term effects, probably for epistemic reasons. Michael Plant has written about the EA implications of person-affecting views, and emphasises improvements to world mental health.

Here's a back-of-the-envelope estimate for why person-affecting views might still be bound to prioritise existential risk though (for the reason you give, but with some numbers for easier comparison).
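(The shape of that back-of-the-envelope, with invented placeholder numbers rather than the figures in the linked estimate:)

```python
# Illustrative placeholder numbers only -- not the linked estimate's figures.
current_population = 7.8e9        # people alive today: the only ones a strict
                                  # person-affecting view counts
p_catastrophe = 0.01              # assumed chance of an extinction-level event this century
relative_risk_reduction = 0.01    # assumed fraction of that risk an intervention removes
programme_cost = 1e9              # assumed cost of that intervention, in dollars

expected_current_lives_saved = current_population * p_catastrophe * relative_risk_reduction
print(f"Expected (currently existing) lives saved: {expected_current_lives_saved:,.0f}")
print(f"Cost per expected life saved: ${programme_cost / expected_current_lives_saved:,.0f}")
```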

Dominic Roser and I have also puzzled over Christian longtermism a ... (read more)

What would a pre-mortem for the long-termist project look like?

Great comment. I count only 65 percentage points - is the other third "something else happened"?

Or were you not conditioning on long-termist failure? (That would be scary.)

alexrjl (1y): I was not conditioning on long-termist failure, but I also don't think my last three points are mutually exclusive, so they shouldn't be naively summed.
Azure (1y): Additionally, is it not likely that those scenarios are correlated?
(How) Could an AI become an independent economic agent?

IKEA is an interesting case: it was bequeathed entirely to a nonprofit foundation with a very loose mission and no owner(?)

https://www.investopedia.com/articles/investing/012216/how-ikea-makes-money.asp

Not a silly question IMO. I thought about Satoshi Nakamoto's bitcoin - but if they're dead, then it's owned by their heirs, or failing that by the government of whatever jurisdiction they were in. In places like Britain I think a combination of "bona vacantia" (unclaimed estates go to the government) and "treasure trove" (old treasure also) cover the edge ca

... (read more)
Milan_Griffes (1y): Fascinating:
What posts do you want someone to write?

A nice example of the second part, value dependence, is Ozy Brennan's series reviewing GiveWell charities.

Why might you donate to GiveDirectly?
  • You need a lot of warmfuzzies in order to motivate yourself to donate.
  • You think encouraging cash benchmarking is really important, and giving GiveDirectly more money will help that.
  • You want to encourage charities to do more RCTs on their programs by rewarding the charity that does that most enthusiastically.
  • You care about increasing people’s happiness and don’t care about saving the lives of small
... (read more)
What posts do you want someone to write?

Collating predictions made by particularly big pundits and getting calibration curves for them. Bill Gates is getting a lot of attention now for warning of pandemic in 2015; what is his average, though? (This is a bad example, since I expect his advisors to be world-class and to totally suppress his variance.)

If this could be hosted somewhere with a lot of traffic, it could reinforce good epistemics.
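A minimal sketch of the scoring step, assuming the predictions have already been collated as (stated probability, outcome) pairs; the sample data is made up:

```python
from collections import defaultdict

# Toy input for one pundit: (stated probability, did it happen) pairs.
# Real data would come from the collation step the post asks for.
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False),
    (0.2, False), (0.2, True), (0.1, False),
]

# Bucket by stated probability, then compare to observed frequency.
buckets = defaultdict(list)
for stated_p, happened in predictions:
    buckets[stated_p].append(happened)

for stated_p in sorted(buckets):
    outcomes = buckets[stated_p]
    observed = sum(outcomes) / len(outcomes)
    print(f"said {stated_p:.0%} -> happened {observed:.0%} (n={len(outcomes)})")
```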

What posts do you want someone to write?

A case study of the Scientific Revolution in Britain as an intervention by a small group. This bears on one of the most surprising facts: the huge gap, 1.5 centuries, between the scientific and industrial revolutions. Could also shed light on the old marginal vs systemic argument: a synthesis is "do politics - to promote nonpolitical processes!"

https://forum.effectivealtruism.org/posts/RfKPzmtAwzSw49X9S/open-thread-46?commentId=rWn7HTvZaNHCedXNi

What are some 1:1 meetings you'd like to arrange, and how can people find you?

Who am I?

Gavin Leech, a PhD student in AI at Bristol. I used to work in international development, official statistics, web development, data science.

Things people can talk to you about

Stats, forecasting, great books, development economics, pessimism about philosophy, aphorisms, why AI safety is eating people, fake frameworks like multi-agent mind. How to get technical after an Arts degree.

Things I'd like to talk to others about

The greatest technical books you've ever read. Research taste, and how it is transmitted. Non-opportunistic ways to do A... (read more)

Open Thread #46

Suggested project for someone curious:

There are EA profiles of interesting influential (or influentially uninfluential) social movements - the Fabians, the neoliberals, the General Semanticists. But no one has written about the biggest: the scientific revolution in Britain as intentional intervention, a neoliberal style coterie.

A small number of the most powerful people in Britain - the Lord Chancellor, the king's physicians, the chaplain of the Elector Palatine / bishop of Chester, London's greatest architect, and so on - apparently pushed a ma... (read more)

Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism

To my knowledge, most of the big names (Bentham, Sidgwick, Mill, Hare, Parfit) were anti-speciesist to some degree; the unusual contribution of Singer is the insistence on equal consideration for nonhumans. It was just not obvious to their audiences for 100+ years afterward.

My understanding of multi-level U is that it permits not using explicit utility estimation, rather than forbidding using it. (U as not the only decision procedure, often too expensive.) It makes sense to read (naive, ideal) single-level consequentialism as the converse, forbidding or di

... (read more)
What are the key ongoing debates in EA?

I read it as 'getting some people who aren't economists, philosophers, or computer scientists'. (:

(Speaking as a philosophy+economics grad and a sort-of computer scientist.)

willbradshaw (1y): I think there's quite a large diversity in what people in EA did in undergrad / grad school. There's plenty of medics and a small but nontrivial number of biologists around, for example. What they wish they'd done at university, or what they're studying now, might be another matter.
What are the key ongoing debates in EA?

Not sure. 2017 fits the beginning of the discussion though.

Linch (1y): I thought most of the fights around the worm wars were in 2015 [1]? I really haven't been following. [1] https://chrisblattman.com/2015/07/24/the-10-things-i-learned-in-the-trenches-of-the-worm-wars/
What are the key ongoing debates in EA?

I've had a few arguments about the 'worm wars', whether the bet on deworming kids, which was uncertain from the start, is undermined by the new evidence.

My interlocutor is very concerned about model error in cost-benefit analysis, about avoiding side effects (and 'double effect' in particular); and not just for the usual PR or future credibility reasons.

Linch (1y): What's the new evidence? I haven't been keeping up with the worm wars since 2017. Is there more conclusive data or studies since?
What are the best arguments that AGI is on the horizon?

It can seem strange that people act decisively about speculative things. So the first piece to understand is expected value: if something would be extremely important if it happened, then you can place quite low probability on it and still have warrant to act on it. (This is sometimes accused of being a decision-theory "mugging", but it isn't: we're talking about subjective probabilities in the range of 1% - 10%, not infinitesimals like those involved in Pascal's mugging.)
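A toy version of that expected-value point, with the probability and the stakes as illustrative placeholders:

```python
# Illustrative numbers only: the point is that a low probability times very
# large stakes can still beat a near-certain but modest option.
p_transformative_ai = 0.05      # a "quite low" subjective probability (1%-10% range)
stakes_if_it_happens = 1_000    # relative importance if it happens (arbitrary units)
stakes_of_safe_option = 10      # importance of a typical less speculative option

ev_speculative = p_transformative_ai * stakes_if_it_happens   # = 50
ev_safe = 1.0 * stakes_of_safe_option                         # = 10
print(f"speculative EV {ev_speculative} vs safe EV {ev_safe}")
```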

I think the most-defensible outside-view argument is: it cou... (read more)

rohinmshah (1y): Just wanted to note that while I am quoted as being optimistic, I am still working on it specifically to cover the x-risk case and not the value lock-in case. (But certainly some people are working on the value lock-in case.) (Also I think several people would disagree that I am optimistic, and would instead think I'm too pessimistic, e.g. I get the sense that I would be on the pessimistic side at FHI.)
Growth and the case against randomista development

Great work. I'm very interested in this claim

"Of the top ten most prescribed medicines, many work on only a third of the patients."

In which volume was this claim made?

HaukeHillebrandt (2y): [37] (https://docs.google.com/document/d/e/2PACX-1vREIVXc8XyErrS6Ui7YwU_MbyLXoaU8H-zeYmCVaid2ICg1KwpD2A56FQFjB3Z_5r4zAkMxrQssmmxC/pub#ftnt_ref37) Pritchett, 'Randomizing Development: Method or Madness?' (2019), pp. 23-24. See: https://d101vc9winf8ln.cloudfront.net/documents/32264/original/RCTs_and_the_big_questions_10000words_june30.pdf#page=23
In praise of unhistoric heroism

Some (likely insufficient) instrumental benefits of feeling bad about yourself:

  • When I play saxophone I often feel frustration at not sounding like Coltrane or Parker; but when I sing I feel joy at just being able to make noise. I'm not sure which mindset has led to better skill growth.
  • Evaluations can compare up (to a superior reference class) or compare down. I try to do plenty of both, e.g. "Relative to the human average I've done a lot and know a lot." Comparing up is more natural to me, so I have an emotional-support Anki deck of
... (read more)

A recent book discusses the evolutionary causes of "bad feelings", and to what extent they have instrumental benefits: Good Reasons for Bad Feelings: Insights from the Frontier of Evolutionary Psychiatry.

Against value drift

Sure, I agree that most people's actions have a streak of self-interest, and that posterity could serve as this even in cases of sacrificing your life. I took OP to be making a stronger claim, that it is simply wrong to say that "people have altruistic values" as well.

There's just something up with saying that these altruistic actions are caused by selfish/social incentives, when the strongest social incentive in play was ostracism or the death penalty for doing them.

Against value drift

How does this reduction account for the many historical examples of people who defied local social incentives, with little hope of gain and sometimes even destruction? (Off the top of my head: Ignaz Semmelweis, Irena Sendler, Sophie Scholl.)

We can always invent sufficiently strange posthoc preferences to "explain" any behaviour: but what do you gain in exchange for denying the seemingly simpler hypothesis "they had terminal values independent of their wellbeing"?

(Limiting this to atheists, since religious martyrs are explained well by incentives.)
