All of aggg's Comments + Replies

aggg · 3y · 13

While I can't find any EA work on economic policy in poor countries, two Charity Entrepreneurship-incubated charities are working on health policy: 

I think the most obvious reason that this work isn't happening is just that the EA community is overwhelming... (read more)

Just wanted to flag a new development on the topic: it looks like US Treasury Secretary Janet Yellen is pushing for a global minimum corporate tax, which might be another good solution to the problem!

8
tamgent
3y
The international minimum corporate tax rate got finalised last month! After only 28 years of discussion. https://www.oecd.org/newsroom/130-countries-and-jurisdictions-join-bold-new-framework-for-international-tax-reform.htm
1
Alex Barnes
3y
Thank you! That sounds very encouraging. The exact details will be very important.

It would be fantastic if we could set up RSS feeds for individual tags!

2
Aaron Gertler
3y
There are multiple ways to accomplish something like this. You can subscribe to a tag, which will notify you whenever a post gets that tag. Or you can set a tag as "required", which will show you only posts with that tag, creating an instant "feed".

Giving Green should be added to that list of EA-aligned charity evaluators - they provide recommendations for high-impact giving in the climate change space (which is probably particularly helpful given how much corporate giving/CSR is focused on climate change)! 

They also state on their website that they "provide bespoke consulting services to organizations who want to bring more data and evidence to their pro-climate activities" - so they might be able to provide research tailored to different companies' needs, or to your new nonprofit!

2
mss74
3y
Thank you for your comment and this reference! I wasn't aware of this group. I've edited the post to include them!! 
aggg · 3y · 13

You mentioned in your 2021 update that you're starting a research internship program next year (contingent on more funding) in order to identify and train talented researchers, and thereby contribute to EA-aligned research efforts (including your own). 

Besides offering similar internships, what do you think other EA orgs could do to contribute to these goals? What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?

7
Linch
3y
I think this is a relatively minor thing, but trying to become close to perfectly calibrated (aka being able to put precise numbers on uncertainty) in some domains seems like a moderate-sized win, at very low cost. I mainly believe this because I think the costs are relatively low. My best guess is that the majority of EAs can become close to perfectly calibrated on numerical trivia questions in much less than 10 hours of deliberate practice, and my median guess for the amount of time needed is around 2 hours (e.g., practice here).

I want to be careful with my claims here. I think sometimes people have the impression that getting calibrated is synonymous with rationality, or intelligence, or judgement. I think this is wrong:
1. Concretely, I just don't think being perfectly calibrated is that big a deal. My guess is that going from median-EA levels of general calibration to perfect calibration on trivia questions improves good research/thinking by 0.2%-1%. I will be surprised if somebody becomes a better researcher by 5% via these exercises, and very surprised if they improve by 30%.
2. In forecasting/modeling, the main quantifiable metrics include both a) calibration (roughly speaking, being able to quantify your uncertainty) and b) discrimination (roughly speaking, how often you're right). In the vast majority of cases, calibration is just much less important than discrimination.
3. There are generalizability issues: good calibration on trivia questions may not transfer to good calibration overall. The latter is likely to be much harder to train precisely, or even to quantify precisely (though I'm reasonably confident that going from poor calibration on trivia to perfect calibration should generalize somewhat; Dave Bernard might have clearer thoughts on this).
4. I think calibration matters more for generalist/secondary research (much of what RP does) than for things that either a) require relatively narrow domain expertise, like ML-heavy AI Safe
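(A quick sketch to illustrate the calibration-vs-discrimination distinction in point 2, using the standard Murphy decomposition of the Brier score; the sample forecasts and the 10-bin scheme here are assumptions chosen just for illustration.)

```python
# Murphy decomposition of the Brier score into a calibration term
# (reliability) and a discrimination term (resolution).
# Illustrative sketch: bin count and sample data are made up.
from collections import defaultdict

def brier_decomposition(forecasts, outcomes, n_bins=10):
    """forecasts: probabilities in [0, 1]; outcomes: 0/1 results."""
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[min(int(f * n_bins), n_bins - 1)].append((f, o))
    reliability = 0.0  # calibration: 0 means perfectly calibrated
    resolution = 0.0   # discrimination: higher means more informative
    for members in bins.values():
        k = len(members)
        mean_f = sum(f for f, _ in members) / k
        mean_o = sum(o for _, o in members) / k
        reliability += k * (mean_f - mean_o) ** 2 / n
        resolution += k * (mean_o - base_rate) ** 2 / n
    # Brier score ~= reliability - resolution + base_rate * (1 - base_rate)
    return reliability, resolution

# A forecaster who always predicts the base rate is perfectly calibrated
# (reliability 0) but has zero discrimination (resolution 0).
forecasts = [0.1, 0.2, 0.8, 0.9, 0.7, 0.3]
outcomes = [0, 0, 1, 1, 1, 0]
rel, res = brier_decomposition(forecasts, outcomes)
print(f"reliability (calibration) = {rel:.3f}, resolution (discrimination) = {res:.3f}")
```

Point 2's claim then corresponds to saying that, past a certain point, gains in resolution matter much more than further reductions in reliability.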
5
MichaelA
3y
Misc thoughts on "What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?"

There was some relevant discussion here. Ideas mentioned there include:
* getting mentorship outside of EA orgs (either before switching into EA orgs after a few years, or as part of a career that remains outside of explicitly EA orgs longer-term)
* working as a research assistant for a senior researcher

I think the post SHOW: A framework for shaping your talent for direct work is also relevant.
2
MichaelA
3y
Research training programs, and similar things

(You said "Besides offering similar internships". But I'm pretty excited about other orgs running similar internships, and/or running programs that are vaguely similar and address basically the same issues but aren't "internships". So I'll say a bit about that cluster of stuff, with apologies for sort-of ignoring instructions!)

David wrote:

I second all of that, except swapping GPI's Early Career Conference Programme (which I haven't taken part in) for the Center on Long-Term Risk's Summer Research Fellowship. I did that fellowship with CLR from mid-August to mid-November, and found it very enjoyable and useful.

I recently made a tag for posts relevant to what I called "research training programs". By this I mean things like FHI and CLR's Summer Research Fellowships, Rethink Priorities' planned internship program, CEA's former Summer Research Fellowship, probably GPI's Early Career Conference Programme, probably FHI's Research Scholars Program, maybe the Open Phil AI Fellowship, and maybe ALLFED's volunteer program. Readers interested in such programs might want to have a look at the posts with that tag.

I think that these programs might be one of the best ways to address some of the main bottlenecks in EA, or at least in longtermism (I've thought less about areas of EA other than longtermism). What I mean is related to the claim that EA is vetting-constrained, and to Ben Todd's claim that some of EA's main bottlenecks at the moment are "organizational capacity, infrastructure, and management to help train people up". There was also some related discussion here (though it's harder to say whether that overall supported the claims I'm gesturing at).

So I'm really glad a few more such programs have recently popped up in longtermism. And I'm really excited about Rethink's internship program (which I wasn't involved in the planning of, and didn't know about when I accepted the role at Rethink). And I'd be keen to see m
3
MichaelA
3y
My own story & a disclaimer

(This is more of a tangent than an answer, but might help provide some context for my other responses here and elsewhere in this AMA. Feel free to ignore it, though!)

I learned about EA in late 2018, and didn't have much relevant expertise, experience, or credentials. I'd done a research-focused Honours year and published a paper, but that was in an area of psychology that's not especially relevant to the sort of work that, after learning about EA, I figured I should aim towards. (More on my psych background here.) I was also in the midst of the two-year Teach For Australia program, which involves teaching at a high school, and also wasn't relevant to my new EA-aligned plans. Starting then and continuing through to mid-2020 ish, I made an active effort to "get up to speed" on EA ideas, as described here.

In 2019, I applied for ~30 EA-aligned roles, mostly research-ish roles at EA orgs (though also some non-research roles or roles at non-EA orgs). I ultimately got two offers: one for an operations role at an EA org and one for a research role. I think I had relevant skills but didn't have clear signals of this (e.g., more relevant work experience or academic credentials), so I was often rejected at the CV screening stage but often did OK if I was allowed through to work tests and interviews. And both of the offers I got were preceded by work tests.

Then in 2020, I wrote a lot of posts on the EA Forum and a decent number on LessWrong, partly for my research job and partly "independently". I also applied for ~11 roles this year (mostly research roles, and I think all at EA orgs), and ultimately received 4 offers (all research roles at EA orgs). So that success rate was much higher, which seems to fit my theory that last year I had relevant skills but lacked clear signals of this.

So I've now got a total of ~1.5 years FTE of research experience, ~0.5 of which (in 2017) was academic psychology research and ~1 of which (this year) was sp
4
MichaelA
3y
Hi Arushi,

Good questions! I'll split some thoughts into a few separate comments for readability.

Writing on the Forum

I second Peter's statement that (Though in some cases it might make sense to publish the post to LessWrong instead or in addition.)

This statement definitely seems true in my own case (though I imagine for some people other approaches would be more effective): I got an offer for an EA research job before I began writing for the EA Forum. But I was very much lacking in the actual background/credentials the org said they were looking for, so I'm almost certain I wouldn't have gotten that offer if the application process hadn't included a work test that let me show I was a good fit despite that lack of relevant background/credentials. (I was also lucky that the org let me do the work test rather than screening me out before that.) And the work test was basically "Write an EA Forum post on [specific topic]", and what I wrote for it did indeed end up as one of my first EA Forum/LessWrong posts.

And then this year I've gotten offers from ~35% of the roles I've applied to, as compared to ~7% last year, and I'd guess that the biggest factors in the difference were:
1. I now had an EA research role on my CV, signalling I might be a fit for other such roles.
2. Going from 1 FTE of non-EA stuff (teaching) in 2019 to only ~0.3 FTE of non-EA stuff (a grantwriting role I did for a climate change company on the side of my ~0.7 FTE EA work till around August) allowed me a lot of time to build relevant skills and knowledge.
3. In 2020 I wrote a bunch of (mostly decently/well received) EA Forum or LessWrong posts, helping to signal my skills and knowledge, and also just "get my name out there".
* "Getting my name out there" was not part of my original goal, but did end up happening, and to quite a surprising degree.
4. Writing EA Forum and LessWrong posts helped force and motivate me to build relevant skills and knowledge.
5. Comments and feedback from others on my EA

Hi Arushi,

I am very hopeful the internship program will let us identify, take on, and train many more staff than we could otherwise, and then either hire them directly or recommend them to other organizations.

While I am wary of recommending unpaid labor (that's why our internship is paid), I otherwise think one of the best ways for a would-be researcher to distinguish themselves is writing a thoughtful and engaging EA Forum post. I've seen a lot of great hires distinguish themselves like this.

Other than opening more researcher jobs and internships, ... (read more)

What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?

There are some relevant answers in here and here.

I’m happy to see an increase in the number of temporary visiting researcher positions at various EA orgs. I found my time visiting GPI during their Early Career Conference Programme very valuable (hint: applications for 2021 are now open, apply!) and would encourage other orgs to run similar sorts of programmes to this and FHI’s (summer) research scholars programme. I'm very excited to see how our internship program develops, as I really enjoy mentoring.

I think I was competitive for the RP job because of my T-shaped skills, broad knowledge in lots of ... (read more)

aggg · 3y · 42

I started as the full-time Co-Director of EA NYC two months ago! Since Aaron Mayer and I started, we've accomplished a bunch of things, including launching a new website (just published yesterday!), starting 1-1 calls, creating a NYC job board (updated weekly), and kicking off our new Rings program to help EA NYCers start projects together!

I'm loving the new job and am really excited about what we've been able to do in such a short time :) Hopefully the coming 10 months are just as productive!

Btw we are still taking applications for Rings, if anyone is interested in appl... (read more)

2
jojo_lee
3y
Awesome, Arushi and Aaron! Hope you keep the momentum up and keep doing great things <3 -Jojo
3
Ben_West
3y
Congrats! And Rings is a cool idea. I hope you write up a Forum post about the results! 

Interesting results. I personally do like the moral duty option - I think it does have a pretty different connotation than an obligation. Obligation suggests something forced upon you by outside forces, while moral duty suggests something done out of a sense of responsibility, but more joyfully and consciously chosen.

I'm just wondering why Muslim is not an option for the religious beliefs question? This seems like a silly oversight since it is a major religion.

4
RandomEA
7y
It actually was an option (see the survey here). I suspect they left it out of the results because nobody chose it.

I was also interested in this book - I've ordered a copy and I'm excited for it to arrive! The news that they haven't replied to questions about the data is disappointing, but I think there is still value in the book. Particularly, on the "solutions" page on the site, they state: "The list is comprised primarily of “no regrets” solutions—actions that make sense to take regardless of their climate impact since they have intrinsic benefits to communities and economies."

Considering some of the solutions that actively make lives better (such... (read more)

I've been thinking about this as well lately, specifically in terms of reducing hatred and prejudice (racism, sexism, etc). For example, this is anecdotal, but one (black) man named Daryl Davis says that he has gotten more than 200 KKK members to disavow the group by simply approaching them and befriending them. Over time they would realize that their views were unfounded, and gave up their KKK membership of their own volition. This is an interview with Davis: http://www.npr.org/2017/08/20/544861933/how-one-man-convinced-200-ku-klux-klan-members-to-give-up... (read more)

0
astupple
7y
I bet a more neglected aspect of polarization is the degree to which the left (which I identify with) literally hates the right for being bigots, or seeming bigots (I agree with Christian Kleineidam below). This is literally the same mechanism of prejudice and hatred, with the same damaging polarization, but for different reasons. There's much more energy devoted to addressing alt-right polarization than to that of the not-even-radical left (many of my friends profess hatred of Trump voters qua Trump voters; it gives me the same pit-of-the-stomach feeling as when I see blatant racism). Hence, addressing the left is probably more neglected (unsure how you'd quantify this, but it seems pretty evident). The trouble I find is that the left's prejudice and hatred seem more complex and harder to fix. In some ways, the bigots are easier to flip toward reason (anecdotes about befriending racists, families changing when their kids come out, etc.). Have you ever tried to demonstrate to a passionate liberal that maybe they've gone too far in writing off massive swaths of society as bigots? Just bringing it up challenges the friendship, in my experience. I think polarization is incredibly bad and there are neglected areas, but neglectedness seems to be outweighed by intractability.
1
ChristianKleineidam
7y
I don't think the rise of polarization in the US over the last decade is driven by a rise in racism or sexism. Activism to reduce either of them might be valuable, but I don't think it solves the issue of polarization.