Not a bio guy, but in general: talk to more people! List people you think are doing good work and ask em directly.
Also generically: try to do some real work in as many of them as you can. I don't know how common undergrad research assistants are in your fields, or in Australian unis, but it should be doable (if you're handling your courseload ok).
PS: Love the username.
Big old US >> UK pay gap imo. Partial explanation for that: 32 days holiday in the UK vs 10 days US.
(My base pay was 85% of total; 100% seems pretty normal in UK tech.) Other big factor: this was in a sorta sleepy industry that tacitly trades off money for working only the contracted 37.5 h week, unlike say startups. Per hour it was decent, particularly given 10% study time.
If we say hustling places have a 50 h week (which is what one fancy startup actually told me they expected), then 41 looks fine.
Agree with the spirit - there is too much herding, and I would love for Schubert's distinctions to be core concepts. However, I think the problem you describe appears in the gap between the core orgs and the community, and might be pretty hard to fix as a result.
What material implies that EA is only about ~4 things?
What emphasises cause diverg...
Brian Christian is incredibly good at tying the short-term concerns everyone already knows about to the long-term concerns. He's done tons of talks and podcasts - not sure which is best, but if 3 hours of heavy content isn't a problem, the 80k one is good.
There's already a completely mainstream x-risk: nuclear weapons (and, popularly, climate change). It could be good to compare AI to these accepted handles. The second species argument can be made pretty intuitive too.
Bonus: here's what I told my mum.
AIs are getting better quite fast, and we will probably...
[I don't know you, so please feel free to completely ignore any of the following.]
I personally know three EAs who simply aren't constituted to put up with the fake work and weak authoritarianism of college. I expect any of them to do great things. Two other brilliant ones are Chris Olah and Kelsey Piper. (I highly recommend Piper's writing on the topic for deep practical insights and as a way of shifting the balance of responsibility partially off yourself and onto the ruinous rigid bureaucracy you are in. She had many of the same problems as you, and thin...
Not recent-recent, but I also really like Carey's 2017 work on CIRL. Picks a small, well-defined problem and hammers it flush into the ground. "When exactly does this toy system go bad?"
If we take "tangible" to mean executable:
But as Kurt Lewin once said, "there's nothing so practical as a good theory". In particular, theory scales automatically, and conceptual work can stop us from wasting effort on the wrong things.
I think you're right, see my reply to Ivan.
I think I generalised too quickly in my comment; I saw "virality" and "any later version" and assumed the worst. But of course we can take into account AGPL backfiring when we design this licence!
One nice side effect of even a toothless AI Safety Licence: it puts a reminder about safety at the top of every repo. Sure, no one reads licences (and people often ignore health and safety rules when they get in the way, even at their own risk). But maybe it makes things a bit more tangible, in the way LICENSE.md gives law a foothold in the minds of devs.
Seems I did this in exactly 3 posts before getting annoyed.
That's cool! I wonder if they suffer from the same ambiguity as epistemic adjectives in English though* (which would suggest that we should skip straight to numerical assignments: probabilities or belief functions).
Anecdotally, it's quite tiring to put credence levels on everything. When I started my blog I began by putting a probability on all major claims (and even wrote a script to hide this behind a popup to minimise aesthetic damage). But I soon stopped.
For important things (like Forum posts?) it's probably worth the effort, but even a document-level ...
This is a neat idea, and unlike many safety policy ideas it has scaling built in.
However, I think the evidence from the original GPL suggests that this wouldn't work. Large companies are extremely careful to just not use GPL software, and this includes making their own closed-source implementations.* Things like the Skype case are the exception, and they make other companies even more careful not to use GPL things. All of this has caused GPL licensing to fall massively in the last decade.** I can't find stats, but I predict that GPL projects will have mu...
Aschenbrenner's model strikes me as a synthesis of the two intellectual programmes, and it doesn't get enough attention.
Robin Hanson is the best critic imo. He has many arguments, or one very developed one, but big pieces are:
Spoilers for Unsong:
Jalaketu identifies the worst thing in the world - hell - and sacrifices everything, including his own virtue and impartiality, to destroy it. It is the strongest depiction I know of the second-order consistency, the second-order glory, of consequentialism. (But also a terrible tradeoff.)
Shouldn't the title be "Proportional Representation seems overrated"?
In the UK, PR is often what people mean by voting reform, but there are options without these problems, e.g. approval voting (sketched below).
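For concreteness, a minimal sketch of an approval-voting tally in Python (the ballots are invented for illustration): each voter approves any subset of candidates, and the candidate with the most approvals wins, so similar candidates don't split the vote.

```python
# Minimal approval-voting tally (illustrative only).
from collections import Counter

def approval_winner(ballots):
    tallies = Counter()
    for ballot in ballots:
        tallies.update(ballot)           # one point per approved candidate
    return tallies.most_common(1)[0][0]  # most-approved candidate wins

# Hypothetical ballots over three candidates:
ballots = [{"A", "B"}, {"B"}, {"B", "C"}, {"A"}, {"C"}]
print(approval_winner(ballots))  # -> "B" (3 approvals vs 2 and 2)
```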
I see "effective altruist" as a dodgy shorthand for the full term: "aspiring effective altruist". I'm happy to identify as the latter in writing (though it is too clunky for speech).
I'm the same. I'm a "member" and even a "community leader" in the "EA movement", and happy to identify as such. But calling yourself an "Effective Altruist" is to call yourself an "altruist", at least in the ears of someone unfamiliar with the movement, and I think it will sound morally pretentious or self-aggrandizing. Generally the label "altruist" should be bestowed by others rather than claimed - and even then, applying it to a specific individual seems a bit weird, whoever is doing the bestowing.
I call shotgun on "On Certainty", one of the most-wanted books. (The author and I have butted heads before. He is much better at headbutting than me.)
I felt much the same writing it. I'll add that to my content note, thanks.
The opposite post (reasons not to worry) could be good as well. e.g.
In this one, it's that there is no main body, just a gesture off-screen. Only a small minority of readers will be familiar enough with the funding apparatus to complete your "exercise to the reader..." Maybe you're writing for that small minority, but it's fair for the rest to get annoyed.
In past ones (from memory), it's again this sense of pushing work onto the reader. Sense of "go work it out".
It might be better to collate and condense your series into one post, once it's finished (or starting now). These individual posts really aren't convincing, and probably hurt your case if anything. Part of that is the Forum's conventions about content being standalone. But the rest is clarity and evidence: your chosen style is too esoteric.
I don't think it's our unwillingness to hear you out. Some of the most well-regarded posts on here are equally fundamental critiques of EA trends, but written persuasively / directly:
https://forum.effectivealtruism.org/p...
Worth noting that multivitamins are associated with very slightly increased mortality in the general population. Cochrane put this down to their overdosing vitamin A, vitamin E, and beta-carotene - none of which I expect vegans to be deficient in - so the finding might transfer. (Sounds like you've done blood tests though, so ignore me if it helps you.)
The cycle of people coming up with ideas about how to organise people into projects, or prevent redundant posts, or make the Forum more accretive - ideas which are then forgotten a week later. i.e. we fail to coordinate on coordination projects.
Can anyone in clean meat verify this news? The last time I checked, we were still years off market release.
Conditional on it being a real shock, hooray!
Follow-up post to ARCHES with ranking of existing fields, lots more bibliographies.
Some more prior art, on Earth vs off-world "lifeboats". See also 4.2 here for a model of mining Mercury (for solar panels, not habitats).
This makes sense. I don't mean to imply that we don't need direct work.
AI strategy people have thought a lot about the capabilities:safety ratio, but it'd be interesting to think about the ratio between the complementary parts of safety you mention. Ben Garfinkel notes that e.g. reward engineering work (by alignment researchers) is dual-use; it's not hard to imagine scenarios where lots of progress in reward engineering without corresponding progress in inner alignment could hurt us.
research done by people who are trying to do something else will probably end up not being very helpful for some of the core problems.
Yeah, it'd be good to break AGI control down more, to see if there are classes of problem where we should expect indirect work to be much less useful. But this particular model already has enough degrees of freedom to make me nervous.
I think that it might be easier to assign a value to the discount factor by assessing the total contributions of EA safety and non-EA safety.
That would be great! I used headcount bec...
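A toy version of that headcount-based estimate, in Python. Every number here is a placeholder I've made up, purely to show the shape of the calculation:

```python
# Estimating a discount factor for non-EA safety work from
# total contributions and headcounts. All numbers are placeholders.
ea_heads, non_ea_heads = 100, 1000        # assumed headcounts
ea_total, non_ea_total = 50.0, 100.0      # assumed total contributions
                                          # (arbitrary units)

ea_per_head = ea_total / ea_heads              # 0.5 units per person
non_ea_per_head = non_ea_total / non_ea_heads  # 0.1 units per person

# Discount factor: value of a marginal non-EA safety researcher
# relative to a marginal EA safety researcher, under these assumptions.
discount = non_ea_per_head / ea_per_head
print(f"discount factor ~ {discount:.2f}")     # -> 0.20
```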
An important source of capabilities / safety overlap, via Ben Garfinkel:
Let’s say you’re trying to develop a robotic system that can clean a house as well as a human house-cleaner can... Basically, you’ll find that if you try to do this today, it’s really hard to do that. A lot of traditional techniques that people use to train these sorts of systems involve reinforcement learning with essentially a hand-specified reward function...
One issue you’ll find is that the robot is probably doing totally horrible things because...
Thanks for this, I've flagged this in the main text. Should've paid more attention to my confusion on reading their old announcement!
If the above strikes you as wrong (and not just vague), you could copy the Guesstimate, edit the parameters, and comment below.
It's a common view. Some GiveWell staff hold this view, and indeed most of their work involves short-term effects, probably for epistemic reasons. Michael Plant has written about the EA implications of person-affecting views, and emphasises improvements to world mental health.
Here's a back-of-the-envelope estimate for why person-affecting views might still be bound to prioritise existential risk though (for the reason you give, but with some numbers for easier comparison).
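A minimal sketch of the kind of comparison I mean, in Python. Every number below is an assumption chosen for illustration, not a claim:

```python
# Back-of-the-envelope: expected present lives saved by x-risk work,
# counting only currently-alive people (person-affecting-friendly).
# All inputs are illustrative assumptions.
present_people = 8e9             # people alive today
p_extinction = 0.01              # assumed probability this century
fraction_of_risk_removed = 1e-3  # assumed effect of a large program
program_cost_usd = 1e9           # assumed cost

expected_lives = present_people * p_extinction * fraction_of_risk_removed
cost_per_life = program_cost_usd / expected_lives
print(f"{expected_lives:,.0f} expected lives; ${cost_per_life:,.0f} each")
# -> 80,000 expected lives at ~$12,500 per life: in the same ballpark as
# top global-health charities, even with no future people counted.
```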
Dominic Roser and I have also puzzled over Christian longtermism a ...
Great comment. I count only 65 percentage points - is the other third "something else happened"?
Or were you not conditioning on long-termist failure? (That would be scary.)
IKEA is an interesting case: it was bequeathed entirely to a nonprofit foundation with a very loose mission and no owner(?)
Not a silly question IMO. I thought about Satoshi Nakamoto's bitcoin - but if they're dead, then it's owned by their heirs, or failing that by the government of whatever jurisdiction they were in. In places like Britain I think a combination of "bona vacantia" (unclaimed estates go to the government) and "treasure trove" (old treasure also) cover the edge cases.
A nice example of the second part, value dependence, is Ozy Brennan's series reviewing GiveWell charities.
Why might you donate to GiveDirectly?
You need a lot of warmfuzzies in order to motivate yourself to donate.
You think encouraging cash benchmarking is really important, and giving GiveDirectly more money will help that.
You want to encourage charities to do more RCTs on their programs by rewarding the charity that does that most enthusiastically.
You care about increasing people’s happiness and don’t care about saving the lives of small children.
Collating predictions made by particularly big pundits and getting calibration curves for them. Bill Gates is getting a lot of attention now for warning of a pandemic in 2015; what is his average, though? (He's a bad example, admittedly, since I expect his advisors to be world-class and to totally suppress his variance.)
If this could be hosted somewhere with a lot of traffic, it could reinforce good epistemics.
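A minimal sketch of the calibration computation in Python; the predictions are invented, just to show the shape of the output:

```python
# Calibration curve: bucket a pundit's stated probabilities and compare
# them with how often the predicted events actually happened.
from collections import defaultdict

# (stated probability, did it happen?) - invented example data
predictions = [(0.9, True), (0.9, True), (0.8, False), (0.6, True),
               (0.6, False), (0.3, False), (0.2, True), (0.2, False)]

buckets = defaultdict(list)
for p, happened in predictions:
    buckets[round(p, 1)].append(happened)

for p in sorted(buckets):
    outcomes = buckets[p]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> observed {observed:.0%} (n={len(outcomes)})")
# A well-calibrated pundit's observed frequencies match their stated ones.
```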
A case study of the Scientific Revolution in Britain as intervention by a small group. This bears on one of the most surprising facts: the huge distance, 1.5 centuries, between the scientific and industrial revolutions. Could also shed light on the old marginal vs systemic argument: a synthesis is "do politics - to promote nonpolitical processes!"
Who am I?
Gavin Leech, a PhD student in AI at Bristol. I used to work in international development, official statistics, web development, data science.
Things people can talk to you about
Stats, forecasting, great books, development economics, pessimism about philosophy, aphorisms, why AI safety is eating people, fake frameworks like multi-agent mind. How to get technical after an Arts degree.
Things I'd like to talk to others about
The greatest technical books you've ever read. Research taste, and how it is transmitted. Non-opportunistic ways to do A...
Suggested project for someone curious:
There are EA profiles of interesting influential (or influentially uninfluential) social movements - the Fabians, the neoliberals, the General Semanticists. But no one has written about the biggest: the scientific revolution in Britain as an intentional intervention by a neoliberal-style coterie.
A small number of the most powerful people in Britain - the Lord Chancellor, the king's physicians, the chaplain of the Elector Palatine / bishop of Chester, London's greatest architect, and so on - apparently pushed a ma...
To my knowledge, most of the big names (Bentham, Sidgwick, Mill, Hare, Parfit) were anti-speciesist to some degree; the unusual contribution of Singer is the insistence on equal consideration for nonhumans. It was just not obvious to their audiences for 100+ years afterward.
My understanding of multi-level U is that it permits not using explicit utility estimation, rather than forbidding using it. (U as not the only decision procedure, often too expensive.) It makes sense to read (naive, ideal) single-level consequentialism as the converse, forbidding or di...
I read it as 'getting some people who aren't economists, philosophers, or computer scientists'. (:
(Speaking as a philosophy+economics grad and a sort-of computer scientist.)
Not sure. 2017 fits the beginning of the discussion though.
I've had a few arguments about the 'worm wars', whether the bet on deworming kids, which was uncertain from the start, is undermined by the new evidence.
My interlocutor is very concerned about model error in cost-benefit analysis, about avoiding side effects (and 'double effect' in particular); and not just for the usual PR or future credibility reasons.
It can seem strange that people act decisively about speculative things. So the first piece to understand is expected value: if something would be extremely important were it to happen, then you can place quite a low probability on it and still have warrant to act on it. (This is sometimes accused of being a decision-theory "mugging", but it isn't: we're talking about subjective probabilities in the range of 1% to 10%, not infinitesimals like those involved in Pascal's mugging. A toy version of the arithmetic is sketched below.)
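A minimal sketch in Python, with made-up stakes and probability, just to make the expected-value point concrete:

```python
# Expected value with an ordinary (non-infinitesimal) probability.
# Units and numbers are illustrative assumptions.
p_event = 0.05             # assumed subjective probability (1-10% range)
value_if_prevented = 1e6   # importance if it happened (arbitrary units)
cost_of_acting = 1e4       # cost of acting now (same units)

expected_benefit = p_event * value_if_prevented   # 50,000 units
print(expected_benefit > cost_of_acting)          # True: acting is warranted
# No Pascalian move here: 5% is a perfectly ordinary probability.
```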
I think the most-defensible outside-view argument is: it cou...
Welcome! This is a fine thing - you could link to your story here, for instance:
Great work. I'm very interested in this claim
the top ten most prescribed medicines many work on only a third of the patients
In which volume was this claim made?
Some (likely insufficient) instrumental benefits of feeling bad about yourself:
A recent book discusses the evolutionary causes of "bad feelings", and to what extent they have instrumental benefits: Good Reasons for Bad Feelings: Insights from the Frontier of Evolutionary Psychiatry.
Sure, I agree that most people's actions have a streak of self-interest, and that posterity could serve as this even in cases of sacrificing your life. But I took OP to be making a stronger claim: that it is simply wrong to say "people have altruistic values" at all.
There's just something off about saying that these altruistic actions are caused by selfish/social incentives when the strongest incentives on offer - ostracism, even the death penalty - pointed against doing them.
How does this reduction account for the many historical examples of people who defied local social incentives, with little hope of gain and sometimes even destruction? (Off the top of my head: Ignaz Semmelweis, Irena Sendler, Sophie Scholl.)
We can always invent sufficiently strange post-hoc preferences to "explain" any behaviour: but what do you gain in exchange for denying the seemingly simpler hypothesis, "they had terminal values independent of their wellbeing"?
(Limiting this to atheists, since religious martyrs are explained well by incentives.)