All of alexlintz's Comments + Replies

Digital People Would Be An Even Bigger Deal

I see a lot of talk about digital people and making copies, but wouldn't a dominant strategy (presuming more compute = more intelligence/ability to multitask) be to just add compute to any given actor? In general, why copy people when you can make one actor, who you know to be relatively aligned, much more powerful? It seems likely, though not totally clear, that one mind with 1,000 compute units would be strictly better at seeking power than 100 minds with 10 compute units each.

For example, companies might compete with one another to have the sma... (read more)

Holden Karnofsky (+2, 1mo): I think this depends on empirical questions about the returns to more compute for a single mind. If the mind is closely based on a human brain, it might be pretty hard to get much out of more compute, so duplication might have better returns. If the mind is not based on a human brain, it seems hard to say how this shakes out.
Digital People Would Be An Even Bigger Deal

That seems true for many cases (including some I described) but you could also have a contingent of forward-looking digital people who are optimizing hard for future bliss (a potentially much more appealing prospect than expansion or procreation). Seems unclear that they would necessarily be interested in this being widespread.

Could also be that digital people find that more compute = more bliss without any bounds. Then there is plenty of interest in the rat race with the end goal of monopolizing compute. I guess this could matter more if there were just o... (read more)

Digital People Would Be An Even Bigger Deal

One thing that seems interesting to consider for digital people is the possibility of reward hacking. While humans certainly have quite a complex reward function, once we have a full understanding of the human mind (and very good understanding could be a prerequisite for digital people anyway), we should be able to figure out how to game it.

A key idea here is that humans have built-in limiters on their pleasure. E.g., if we eat good food, that feeling of pleasure must subside quickly, or else we'd just sit around satisfied until we died of hunger. Digital... (read more)

kokotajlod (+6, 2mo): Digital people that become less economically, militarily, and politically powerful--e.g. as a result of reward hacking making them less interested in the 'rat race'--will be outcompeted by those that don't, unless there are mechanisms in place to prevent this, e.g. all power centralized in one authority that decides not to let that happen, or strong effective regulations that are universally enforced.
Report on Whether AI Could Drive Explosive Economic Growth

I did my master's thesis evaluating Kremer's paper from the '90s, which makes the case for the more people->more growth->more people feedback loop. It essentially supports Ben's post from a while ago (https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity) [fyi I did work with Ben on this project] in arguing that, with radiocarbon data (which I hold is much better than the guesstimate data Kremer uses), the more people->more growth relationship doesn't seem to hold. In terms of population it seem... (read more)
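For readers unfamiliar with the feedback loop being tested, the Kremer-style dynamic can be sketched as a toy simulation. This is a minimal sketch with made-up parameters, not anything from the thesis or Kremer's paper: it only illustrates the "more people -> more ideas -> higher population ceiling" loop that produces accelerating (hyperbolic-style) growth.

```python
# Toy sketch of the Kremer-style feedback loop: more people -> faster
# technological progress -> a higher Malthusian population ceiling -> more
# people. All parameter values are hypothetical illustrations.

def simulate(pop=1.0, tech=1.0, steps=50, dt=0.1, a=0.05, malthus=1.0):
    """Discrete-time sketch: tech growth is proportional to population
    (each person contributes some chance of new ideas), and population
    rises to the ceiling set by current technology."""
    history = []
    for _ in range(steps):
        tech += a * pop * tech * dt   # more people -> more ideas
        pop = malthus * tech          # population tracks the tech-set ceiling
        history.append((pop, tech))
    return history

hist = simulate()
# Growth accelerates: later increments are larger than earlier ones.
assert hist[-1][0] > hist[0][0]
```

The point of contention in the comment is whether the `a * pop` term (growth rate increasing in population) actually shows up in the historical data once better population estimates are used.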

A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it

Cool, thanks for the feedback everyone! I haven't done much thinking about root causes vs. symptoms, but I agree that, especially with mental health, 'root cause' isn't really a useful term given the complexity. I changed up that last recommendation a bunch to get rid of the symptom/root-cause dichotomy:

"[revised] Try a bunch of other things. There are a lot of medications and pills you can take which have relatively low downsides and which can potentially be game-changers. This includes things like antidepressants, various supplements, nootr... (read more)

A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it

Oh yeah, I think you're right on that! I shouldn't have been so down on symptom-reducing treatment. It does seem clearly better to fix root causes, but given that they can be so hard to fix, the best solution is often to treat symptoms (and in some cases, like mental health, that can help improve the root cause as well). I'll change that language so it's more positive on those.

alexlintz (+4, 3mo): Cool thanks for the feedback everyone! I haven't done much thinking about root cause vs symptoms but I agree that especially with mental health it does seem right that 'root cause' isn't really a useful term given the complexity. I changed up that last recommendation a bunch to get rid of symptom/root cause dichotomy:

"[revised] Try a bunch of other things. There are a lot of medications and pills you can take which have relatively low downsides and which can potentially be game-changers. This includes things like antidepressants, various supplements, nootropics, or other medication. Again, it's probably worth thinking of these as abnormally good lottery tickets. Expect most to fail but eventually something might really work. [see comments section for more on how to think about treating symptoms vs root causes]"

Fwiw, for mental health I'm not sure whether therapy is more likely to treat the 'root causes' than medications are. You could have a model where some 'chemical thingie' that can be treated by meds is the root cause of mental illness, and the actual cognitive thoughts treated by therapy are the symptoms.

In reality, I'm not sure the distinction is even meaningful given all the feedback loops involved. 

Hm, I'm a bit unhappy with the framing of symptoms vs. root causes, and am skeptical about whether it captures a real thing (when it comes to mental health and drugs vs. therapy). I'm worried that drawing a distinction between the two contributes to the problems alexrjl pointed out.

Note: I have no clinical expertise and am just spitballing. E.g., I understand the following trajectory as archetypal for what others might call "aha! First a patch and then root causes":

[Low energy --> takes antidepressants --> then has enough energy to do therapy ... (read more)

Learnings about literature review strategy from research practice sessions

Yeah, maybe I should change some text... but I guess I have an assumption built in that when finding papers which seem relevant, you'd be reading the abstract, getting a basic idea of what they're about, and then adjusting search terms.
 

The reason having a pile of papers is useful is that the value of papers is extremely uneven for any given question, and by having a pile you get a better feel for the range of what people say about a topic before diving into one perspective. Wrt the first point, I'd argue that in most cases there are one or t... (read more)

Ask Rethink Priorities Anything (AMA)

Yeah, this would be nice to have! It's a lot of text to digest as it is now, and I guess most people won't see it here going forward.

Ask Rethink Priorities Anything (AMA)

I don't work at Rethink Priorities, but I couldn't resist jumping in with some thoughts, as I've been doing a lot of thinking on some of these questions recently.

Thinking vs. reading. I’ve been playing around with spending 15-60 min sketching out a quick model of what I think about something before starting in on the literature (by no means a consistent thing I do, though). I find it can be quite nice and helps me ask the right questions early on.

Self-consciousness. Idk if this fits exactly but when I started my research position I tried to have the mindset ... (read more)

Denis Drescher (+3, 9mo): Whee! Thank you too! Yeah, I think that perspective on self-consciousness is helpful!

Work hours: I also wonder how much this varies between professions. Maybe that’s worth a quick search and writeup for me at some point. When you go from a field where it’s generally easy to concentrate for a long time every day to a field where it’s generally hard, that may seem disproportionately discouraging when you don’t know about that general difference.

“Try to make a map of what the key questions are and what the answers proposed by different authors are”: Yeah, combining that with Jason’s tips seems fruitful too: when talking to a lot of people, always also ask what those big questions and proposed answers are.

More nonobvious obvious advice! :-D I may try out social incentives and dictation software, but social things are usually draining and sometimes scary for me, so there’d be a tradeoff between the motivation and my energy. And I feel like I think in a particular and particularly useful way while writing but can often not think new thoughts while speaking, though that may be just a matter of practice. We’ll see! And even if it doesn’t work, these questions and answers are not (primarily) for me, and others probably find them brilliantly useful!

I’ve bought some Performance Lab products (following a recommendation from Alex in a private conversation). They have better reviews on Vaga [https://www.vaga.org/health/performance-lab-whole-food-multi-review/] and are a bit cheaper than the Athletic Greens.

“Default mode network”: Interesting! I didn’t know about that.
Learnings about literature review strategy from research practice sessions

Thanks!

Yes! You're totally right that going down the citation trail with the right paper can be better than search; I just edited to reflect that.

This spreadsheet seems great. So far we've only found ways to practice the early parts of literature review, so we never created anything so sophisticated, but that seems like a good method.

Learnings about literature review strategy from research practice sessions

Iris.ai sounds potentially useful, I'll definitely check it out!

So far we've done some sessions on inspectional note-taking, finding the logical argument structure of articles, and breaking down questions into subquestions. I'm not too sure what the next big thing will be, though. Some other ideas have been to practice finding flaws in articles (but that takes a bit too long for a 2hr session and is too field-specific), abstract writing, making figures, and picking the right research question.

I haven't been spending too much time on this recently, though, so the ideas for actually implementing these aren't top of mind.

Objections to Value-Alignment between Effective Altruists

That said, I do agree we should work to mitigate some of the problems you mention. It would be good to get people clearer on how uncertain things are, to avoid groupthink and over-homogenization. I think we shouldn't expect to diverge very much from how other successful movements have operated in the past, as there's not really precedent for that working, though we should strive to test it out and push the boundaries of what works. In that respect I definitely agree we should get a better idea of how homogeneous things are now and get more explicit about what the right balance is (though explicitly endorsing some level of homogeneity might have its own awkward consequences).

Objections to Value-Alignment between Effective Altruists

I agree with some of what you say, but find myself less concerned about some of the trends. This might be because I have a higher tolerance for some of the traits you argue are present, and because AI governance, where I'm mostly engaged now, may just be a much more uncertain topic area than other parts of EA, given how new it is. Also, while I identify a lot with the community and am fairly engaged (I was a community leader for two years), I don't engage much on the forum or online, so I might be missing a lot of context.

I worry about the framing of ... (read more)

MichaelA (+3, 1y): I think I agree with all of what you say. A potentially relevant post is The Values-to-Actions Decision Chain: a lens for improving coordination [https://forum.effectivealtruism.org/posts/Ekzvat8FbHRiPLn9Z/the-values-to-actions-decision-chain-a-lens-for-improving]. Just in case future readers are interested in having the links, here are the post and agenda I'm guessing you're referring to (feel free to correct me, of course!):

* Personal thoughts on careers in AI policy and strategy [https://forum.effectivealtruism.org/posts/RCvetzfDnBNFX7pLH/personal-thoughts-on-careers-in-ai-policy-and-strategy]
* AI Governance: A Research Agenda [https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf]
alexlintz (+1, 1y): That said, I do agree we should work to mitigate some of the problems you mention. It would be good to get people clearer on how uncertain things are, to avoid groupthink and over-homogenization. I think we shouldn't expect to diverge very much from how other successful movements have operated in the past, as there's not really precedent for that working, though we should strive to test it out and push the boundaries of what works. In that respect I definitely agree we should get a better idea of how homogeneous things are now and get more explicit about what the right balance is (though explicitly endorsing some level of homogeneity might have its own awkward consequences).
The ITN framework, cost-effectiveness, and cause prioritisation

I think your critique of the ITN framework might be flawed (though I haven't read Section 2 yet). I assume some of my critique must be wrong, as I still feel a bit confused about it, but I really need to get back to work...

One point that I think is a bit confusing is that you use the term 'marginal cost-effectiveness'. To my knowledge this is not an established term in economics or elsewhere. What I think you mean instead is the average benefit given a certain amount of money.

Cost-effectiveness is (according to wikipedia at least) generally expressed a... (read more)
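The marginal-vs-average distinction the comment is drawing can be illustrated numerically. This is a toy sketch with a made-up benefit function and made-up spending levels, purely to show why the two quantities diverge under diminishing returns:

```python
# Toy illustration of marginal vs. average cost-effectiveness under
# diminishing returns. The benefit function B(x) = 100 * sqrt(x) and the
# spending level are hypothetical, chosen only to make the gap visible.
import math

def benefit(x):
    return 100 * math.sqrt(x)

spend = 400.0

# Average cost-effectiveness: total benefit per dollar, over all spending.
average = benefit(spend) / spend                           # = 5.0

# Marginal cost-effectiveness: benefit of the next dollar (numerical derivative).
eps = 1e-6
marginal = (benefit(spend + eps) - benefit(spend)) / eps   # ~= 2.5

# With diminishing returns, the average overstates the value of extra funds.
assert marginal < average
```

This is why evaluating where to send the *next* dollar should use the marginal figure; the average figure answers a different question (how well the money spent so far has done).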

ishi (+1, 2y): I think you identified the same problem I saw. If you have a small problem, there's no reason to call it 'neglected' if you put enough resources into solving that small problem. You have to put all problems into context--there's no reason to spend a lot of resources to 100% solve a small problem when you put no resources into trying to solve a big problem. This is like spending a lot of money to give sandwiches to solve a temporary hunger problem for a few people, while 'neglecting' the entire issue of global hunger or food scarcity.
What actions would obviously decrease x-risk?

Just to play devil's advocate with some arguments against peace (in a not-so-well-thought-out way)... There's a book called 'The Great Leveler' which puts forward the hypothesis that the only times widespread redistribution has happened are after wars. This means that without war we might expect consistently rising inequality. This effect has been due to mass mobilization ('Taxing the Rich' asserts that there has only been mass political willpower to increase redistribution when veterans could claim they had served and deserved to be compensated).

... (read more)
Pablo (+7, 2y): This seems like a point worth highlighting, especially vis-à-vis Bostrom's own views about the importance of global governance in 'The Vulnerable World Hypothesis [https://doi.org/10.1111/1758-5899.12718]'. It's also worth noting that the League of Nations was created in the aftermath of WW1.
EA Handbook 3.0: What content should I include?

I always recommend Nate Soares' post 'On Caring' to motivate the need for rational analysis of problems when trying to do good. http://mindingourway.com/on-caring/


An overview of arguments for concern about automation

Wage growth:

It took a surprisingly long time to find anything on real wage trends in Europe, but it looks like, judging by the graphs on page 5 of this paper, Sweden, Norway, and in part the UK are exceptions to the quite slow real-wage growth. Germany, France, Italy, Spain, and Denmark follow the wage stagnation of the US.

I very much agree, though, that my analysis is very focused on the US (as is the discussion in general). This paper demonstrates that, at least on a micro level, there are demonstrated effects on wages and employment from automation in the UK.... (read more)

Long-Term Future Fund: April 2019 grant recommendations

Yeah, I tend to agree that sending the whole thing is unnecessary. The first 17 chapters of the printed version distributed at CFAR workshops (I think; I haven't actually been to one) are enough to get people engaged enough to move to the online medium. I'm guessing sending just that small-looking book will make people more likely to read it, as seeing a 2k-page book would definitely be intimidating enough to stop many from actually starting.

I do tend to think giving the print version is useful, as it creates some sort of reciprocity which should incentivize reading it.

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation

I agree that quick and decisive input from someone very knowledgeable about EA and the topic involved would be very useful and would save a lot of time and indecision for people evaluating career options.

I think we can provide a bit of this through more engaged online communities around given topic areas. Not nearly as good as in-person talks, but people can at least get some general feedback on career ideas. I'm hoping to host an event later this year that will gather people interested in a cause area and use that as a catalyst to form a more cohe... (read more)

Strongly agreed. I really like Raemon's analysis of why it's so hard to get EA careers: we're network-constrained. [This isn't exactly how he frames it; it's more my take on his idea.]

Right now, EA operates very informally, relying heavily on the fact that the several hundred people working at explicitly EA orgs are all socially networked together to some degree. This social group was significantly inherited from LessWrong and Bay Area rationalism, and EA has had great success in co-opting it for EA goals.

But as EA grows beyond its roots, more ... (read more)

Rationality vs. Rationalization: Reflecting on motivated beliefs

Yes! Totally agree. I think I mentioned very briefly that one should also be wary of social dynamics pushing toward EA beliefs, but I definitely didn't address it enough. Although I think the end result was positive and that my beliefs are true (with some uncertainty, of course), I would guess that my update toward long-termism was due in large part to lots of exposure to the EA community and the social pressure that brings.

I basically bought some virtue signaling in the EA domain at the cost of signaling in broader society. Given I hang ou... (read more)