alexlintz

Comments

Learnings about literature review strategy from research practice sessions

Yeah, maybe I should change some text... but I guess I have an assumption built in that when finding papers which seem relevant you'd be reading the abstract, getting a basic idea of what they're about, and then adjusting search terms.
 

The reason having a pile of papers is useful is that the value of papers is extremely uneven for any given question, and having a pile gives you a better feel for the range of what people say about a topic before diving into one perspective. On the first point, I'd argue that in most cases there are one or two papers which would be perfect for getting an overview. Reading those might be 100x more valuable than reading something which is just kind of related (what you are likely to find on the first search). If that's true, it's clearly worth spending a lot of time looking around for the perfect paper rather than jumping into the first one you find. Obviously this can be overdone, but I expect most people err toward too little search. Note that you might also find the perfect paper by skimming through an imperfect one. I tend to see this as another way of searching, since you can look for it without actually 'reading' the paper, just by skimming through its lit review or intro.

Ask Rethink Priorities Anything (AMA)

Yeah, this would be nice to have! It's a lot of text to digest as it is now, and I guess most people won't see it here going forward.

Ask Rethink Priorities Anything (AMA)

I don't work at Rethink Priorities, but I couldn't resist jumping in with some thoughts as I've been doing a lot of thinking on some of these questions recently.

Thinking vs. reading. I’ve been playing around with spending 15-60 min sketching out a quick model of what I think about something before starting in on the literature (by no means a consistent thing I do though). I find it can be quite nice and helps me ask the right questions early on.

Self-consciousness. Idk if this fits exactly but when I started my research position I tried to have the mindset of, ‘I’ll be pretty bad at this for quite a while’. Then when I made mistakes I could just think, ‘right, as expected. Now let’s figure out how to not do that again’. Not sure how sustainable this is but it felt good to start! In general it seems good to have a mindset of research being nearly impossibly hard. Humans are just barely able to do this thing in a useful way and even at the highest levels academics still make mistakes (most papers have at least some flaws). 

Optimal hours of work per day. I tend to work about 4-7 hours per day including meetings and everything. Counting only mentally intensive tasks, I probably get around 4-5 hours a day. Sometimes I’m able to get more if I fall into a good rhythm with something. Looking around at estimates (RescueTime says the average is just ~3 hours of productive work per day), it seems clear I’m hitting a pretty solid average. I still can’t shake the feeling that everyone else is doing more work. Part of this is because people claim they do much more work. I assume this is mostly exaggeration, though, because hours worked is used as a signal of status and of being a hard worker. But still, it's hard to shake the feeling.

Learning a new field. I just do a lot of literature review. I tend to search for the big papers and meta-analyses, skim lots of them, and try to make a map of what the key questions are and what answers different authors propose for each question (noting citations for each answer). I think this helps to distill the field and serves as something relatively easy to reference. Generally there’s a lot of restructuring that needs to happen as you learn more about a topic area and see that some questions you used were ill-posed or some papers answer somewhat different questions. In short this gets messy, but it seems like a good way to start and sometimes it works quite well for me.

Hard problems. I have a maybe-controversial take that research (even in LT space) is motivated largely by signalling and status games. On this view, the advice many gave about talking to people about the problem sounds good. Then you generate some excitement as you’re able to show someone else you’re smart enough to solve it, or they get excited to share what they know, etc. I think if you had a nice working group on any topic, no matter how boring, everyone would get super excited about it. In general, connecting the solution of a hard problem to social reward is probably going to work well as a motivator by this logic.

Emotional motivators. I’ve been thinking a lot recently about what I’m calling ‘incentive landscaping’. The basic idea is that your system 2 has a bunch of things it wants to do (e.g. have impact). You can then shape your incentive landscape such that your system 1 is also motivated to do the highest-impact things. Working for someone who shares your values is the easiest way to do this, as then your employer and peers will reward you (either socially or with promotions) for doing things which are impact-oriented. This still won’t be perfectly optimized for impact but it gets you close. Then you can add in some extra motivators, like a small group you meet with to talk about progress on something which otherwise seems badly motivated, or asking others to make your reward conditional on you completing something your system 2 thinks is important. Still early days for me on this though, and I think it’s a really hard thing to get right.

Typing speed. At least when I'm doing reflections or broad thinking, I often circumvent this by doing a lot of voice notes with Dragon. That way I can 'type' at the speed of thought. It’s never perfect but ~97% of it is readable, so it’s good enough. Then if you want to actually have good notes you go through and summarize your long jumble of semi-coherent thoughts into something decent sounding. This has the side effect of some spaced repetition learning as well!

Tiredness, focus, etc. I’ve had lots of ongoing and serious problems with fatigue and have tried many interventions. Certainly caffeine (ideally with l-theanine) is a nice thing to have, but tolerance is an issue. Right now what seems to work for me (no idea why) is a greens powder called Athletic Greens. I’m also trying pro/prebiotics, which might be helping. Magnesium supplementation also might have helped. A medication I was taking was also causing some problems, including occasional really intense fatigue (again, probably…). It’s super hard to isolate cause and effect in this area as there are so many potential causes. I’d say it’s worth dropping a lot of money on different supplements and interventions and seeing what helps. If you can consistently increase energy by 5-10% (something I think is definitely on the table for most people), that adds up really quickly in terms of the amount of work you can get done, happiness, etc. Ideally you’d do this by introducing one intervention at a time for 2-4 weeks each. I haven’t had the patience for that and am currently just trying a few things at once; then I figure I can cut out one at a time and see what helped. Things I would loosely recommend trying (aside from exercise, sleep, etc.): prebiotics, good multivitamins, checking for food intolerances, checking whether any pills you take are having adverse effects.
I do also work through tiredness sometimes and find it helpful to do some light exercise (for me, games in VR) to get back some energy. That also works as a decent gauge for whether I'll be able to push past the tiredness. If playing 10 min of Beatsaber feels like a chore, I probably won't be able to work.
How you rest might also be important. E.g. you might need time with little input so your default mode network can do its thing. No idea how big of a deal this is, but I’ve found going for more walks with just music (or silence) to maybe be helpful, especially in that I get more time for reflection.
I’ve also been experimenting with measuring heart rate variability using an app called Welltory. That’s been kind of interesting in terms of raising some new questions though I’m still not sure how I feel about it/how accurate it is for measuring energy levels.

Learnings about literature review strategy from research practice sessions

Thanks!

Yes! You're totally right that going down the citation trail with the right paper can be better than search; I just edited to reflect that.

This spreadsheet seems great. So far we've only found ways to practice the early parts of literature review, so we never created anything so sophisticated, but that seems like a good method.

Learnings about literature review strategy from research practice sessions

Iris.ai sounds potentially useful, I'll definitely check it out!

So far we've done some things on inspectional note-taking, finding the logical argument structure of articles, and breaking down questions into subquestions. I'm not too sure what the next big thing will be though. Some other ideas have been to practice finding flaws in articles (but that takes a bit too long for a 2hr session and is too field-specific), abstract writing, making figures, and picking the right research question.

I haven't been spending too much time on this recently though, so the ideas for actually implementing these aren't top of mind.

Objections to Value-Alignment between Effective Altruists

That said, I do agree we should work to mitigate some of the problems you mention. It would be good to get people more clear on how uncertain things are, to avoid groupthink and over-homogenization. I think we shouldn't expect to diverge very much from how other successful movements have developed in the past, as there's not really precedent for that working, though we should strive to test it out and push the boundaries of what works. In that respect I definitely agree we should get a better idea of how homogeneous things are now and get more explicit about what the right balance is (though explicitly endorsing some level of homogeneity might have its own awkward consequences).

Objections to Value-Alignment between Effective Altruists

I agree with some of what you say, but find myself less concerned about some of the trends. This might be because I have a higher tolerance for some of the traits you argue are present and because AI governance, where I'm mostly engaged now, may just be a much more uncertain topic area than other parts of EA given how new it is. Also, while I identify a lot with the community and am fairly engaged (was a community leader for two years), I don't engage much on the forum or online so I might be missing a lot of context.

I worry about the framing of EA as not having any solutions, and the argument that we should just focus on finding the right paths without taking any real-world action on the hypotheses we currently have for impact. I think that to understand things like government, and to refine community views of how to affect it and what should be affected, we need to engage. Engaging quickly exposes ignorance and forces us to be beholden to the real world, not to mention gives us a lot of reason to engage with people outside the community.

Once a potential path to impact is identified and thought through to a reasonable extent, it seems almost necessary to take steps to implement it as a next step in determining whether it is a fruitful thing to pursue. Granted, after some time we should step back and re-evaluate, but while you are pursuing the objective it's not feasible to be second-guessing constantly (a similar idea to Nate Soares' post Diving In).

That said, it seems useful to give people on the outside a clearer view of just how uncertain things are. While beginning to engage with AI governance, it took a long time for me to realize just how little we know about what we should be doing. This despite some explicit statements by people like Carrick Flynn in a post on the forum saying how little we know, and a research agenda which is mainly questions about what we should do. I'm not sure what more could be done, as I think it's normal to assume people know what they're doing, and for me this was only solved by engaging more deeply with the community (though now I think I have a healthier understanding of just how uncertain most topic areas are).

I guess a big part of the disagreement here might boil down to how uncertain we really are about what we are doing. I would agree a lot more with the post if I were less confident about what we should be doing in general (and again I frame this mostly in the AI governance area as it's what I know best). The norms you advocate are mostly about maintaining cause agnosticism and focusing on deliberation and prioritization (right?) as opposed to being more action-oriented. In my case, I'm happier with the action-prioritization balance I observe than I guess you are (though I'm of course not as familiar with how the balance looks elsewhere in the community and don't read the forum much).

The ITN framework, cost-effectiveness, and cause prioritisation

I think your critique of the ITN framework might be flawed (though I haven't read section 2 yet). I assume some of my critique must be wrong as I still feel a bit confused about it, but I really need to get back to work...

One point that I think is a bit confusing is that you use the term 'marginal cost-effectiveness'. To my knowledge this is not an established term in economics or elsewhere. What I think you mean instead is the average benefit given a certain amount of money.

Cost-effectiveness is (according to Wikipedia at least) generally expressed as something like 100 USD/QALY. This is done by looking at how much a program cost and how many QALYs it created, which gives us the average benefit of each 100 USD spent on the program. However, we gain no insight into what happened inside the program. Maybe the first 100 USD did all the work and the rest ended up being fluff; we don't know. More likely, the money had diminishing marginal returns.

When talking about tractability you say:

with importance and tractability alone, you could calculate the marginal cost-effectiveness of work on a problem, which is ultimately what we care about

You would know cost-effectiveness if you knew the amount spent so far and the amount of good done. You know the amount spent from neglectedness, but you don't know the amount of good already done with that money. I guess 'marginal cost-effectiveness' = average benefit from X more dollars. Let’s say that X is a doubling of the amount spent so far. I don’t think we can construe this as marginal, though, as doubling the money is not an ‘at the margin’ change. I think then that tractability gives you the average benefit from X more dollars (so no need for scale).

We still need neglectedness and scale though to do a proper analysis.

Scale matters because if something weren’t a big problem, why solve it? And to look at neglectedness, let's use some made-up numbers:

Say that we as humanity have already spent 1 trillion USD on climate change (we use this to measure neglectedness) and got a 1% reduction in the risk of an extinction event (use this to calculate the amount of good = 0.01 × the present value of all future lives). That gives us cost-effectiveness (cost/good done). We DON'T know, however, what happens at the margin (if we put more money in); we just have an average. Assuming constant returns may seem (almost) reasonable for an intervention like bed net distribution, but it seems less reasonable when we've already spent 1 trillion USD on a problem. Then what we really need to know is the benefit of, say, another 1 trillion USD. This, I think, is what 80k's tractability measure is trying to get at: the average benefit (or cost-effectiveness) of another big chunk of money/resources.
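To make the average-vs-marginal distinction concrete, here's a minimal numerical sketch using the made-up climate numbers above. The logarithmic returns curve and the constants in it (K, S0) are purely illustrative assumptions on my part, not anything from the post:

```python
import math

# The spending and benefit numbers are the made-up ones from above; the
# logarithmic benefit curve B(S) = K * ln(1 + S / S0) is my own assumption,
# chosen only as one simple way to model diminishing marginal returns.
S_SPENT = 1e12    # total spent so far on the problem (USD), made up
B_SO_FAR = 0.01   # benefit so far: 1% reduction in extinction risk, made up
S0 = 1e11         # curvature of the assumed returns curve (USD), made up

# Calibrate K so the assumed curve reproduces the toy numbers above.
K = B_SO_FAR / math.log(1 + S_SPENT / S0)

def benefit(spend: float) -> float:
    """Assumed cumulative risk reduction after `spend` USD."""
    return K * math.log(1 + spend / S0)

# 1) Average cost-effectiveness of past spending (what a plain CEA reports).
avg_past = B_SO_FAR / S_SPENT

# 2) Marginal return right now (what neglectedness tries to locate):
#    the derivative dB/dS at the current spending level.
marginal_now = K / (S0 + S_SPENT)

# 3) Average return on a big new chunk (what tractability tries to capture):
#    the benefit of doubling the spend, divided by the extra trillion.
extra = S_SPENT
avg_next_chunk = (benefit(S_SPENT + extra) - benefit(S_SPENT)) / extra

print(f"average CE of past spend:  {avg_past:.2e} risk reduction per USD")
print(f"marginal return now:       {marginal_now:.2e} risk reduction per USD")
print(f"avg return of next 1T USD: {avg_next_chunk:.2e} risk reduction per USD")
```

With these (made-up) assumptions, the average cost-effectiveness of past spending overstates both the marginal return today and the average return on the next trillion, which is exactly the gap the tractability question is trying to surface.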

So, defending neglectedness a bit. If we think that the marginal benefit of more money is not constant (which seems eminently reasonable), then it makes sense to try to find out where we are on the curve. Neglectedness helps to show us where we might be on the curve, even though we have little idea what the curve looks like (though I would generally find it safe to assume decreasing marginal returns). If we're on the flat bit of the diminishing marginal returns curve, then we sure as hell want to know, or at least find evidence which would indicate that to be likely.

So neglectedness is trying to find where we are on the curve, which helps us understand the marginal return to one more person/dollar entering (the true margin). This might mean that even if a problem is unsolvable, there might be easy gains to be had in terms of reducing risk on the margin. For something that is neglected but not tractable, we might be able to get huge benefits by throwing a few people/dollars in (getting x-risk reductions, for example), but that might peter out really quickly, thus making it intractable. It would then be less attractive overall because putting a lot of people in would not be worth it.

Tractability asks: if we were to dump in lots more money, what will the average returns look like? If we are now at the flat part of the curve, average returns might be FAR lower than they were in a cost-effectiveness analysis of what we already spent (i.e. the average returns of past spending).

Maybe new intuitions for these:

Neglectedness: How much bang for the buck do we get for one more person/dollar?

Tractability: Is it worth dumping lots of resources into this problem?


What actions would obviously decrease x-risk?

Just to play devil's advocate with some arguments against peace (in a not-so-well-thought-out way)... There's a book called 'The Great Leveler' which puts forward the hypothesis that the only times widespread redistribution has happened are after wars. This means that without war we might expect consistently rising inequality. This effect has been due to mass mobilization ('Taxing the Rich' argues that mass political willpower to increase redistribution has only arisen when veterans could claim they had served and deserved compensation) and the destruction of capital (in Europe much of the capital was destroyed in WW2, leading to a massive decrease in inequality; the US less so on both fronts) (haven't read the book though).

Spinning this further, we could be approaching a time where great power war would not have this effect, because less labor is required and it would be higher skilled; perhaps there would be little use for low-skilled grunts in near-future wars (or already). If we also saw less destruction of capital (maybe information warfare is the way of the future?), then we lose the mechanisms which made war a leveller in the past. So we might be in the last period where a great power war (one of the only things we know reduces inequality) would be able to reduce inequality. If inequality continues to increase we could see suboptimal societal values which could continue on indefinitely and/or cause a large amount of suffering in the medium run. This could also lead to more domestic unrest in the medium run, which would imply a peace-now vs. peace-later trade-off. Depending on how hingey the current moment is for the long-term future, it could be better to have peace later. Also, the UN was created post-WW2; maybe we only have appetite for major international cooperation after nasty wars?

Anyway... Even after considering all that, peace and cooperation are probably good on net, but it's not as obvious as it may seem. (Wrote this on mobile, sorry for any errors and for having read no more than a few pages of the books I cited.)

EA Handbook 3.0: What content should I include?

I always recommend Nate Soares' post 'On Caring' to motivate the need for rational analysis of problems when trying to do good. http://mindingourway.com/on-caring/

