All of Jan-Willem's Comments + Replies

I think well-roundedness is maybe unfavorable for some EAs (mainly in academia), but not for a majority of EAs.

My experience from observing some of my most successful friends in non-EA orgs (policy roles, consulting, PE, etc.) is that well-roundedness is a good predictor of success. Of course, this is no scientific proof, but you can imagine that abilities such as quickly understanding social norms, showing grit, and manoeuvring in complex social (not purely intellectual) environments help you in those careers. These are things that you (partially) practice and learn in sports, board roles, and some types of work you can do in college.

Amazing update, thanks. Very much interested in your fiscal sponsorship model; is it possible to indicate interest already?

Curious to hear what your current intuitions and latest updates in beliefs are on:

  1. What the most promising / impactful path is for most private sector professionals from those described above 
  2. Where HIP could add most value for professionals looking at this map and if you eventually want to focus on a subset of activities
  3. The most pressing bottlenecks that professionals face that prevent them from having / maximising impact
0
Jona
2y
+1

Thanks for this clear write-up; like many others, I definitely share some of your worries. I liked that you wrote that the extra influx of money could make the CB position accessible to people from different socioeconomic backgrounds, since this point seems to be a bit neglected in EA discussions.

I think it is true for many other impactful career paths that decent wages and/or some financial security (e.g. smoothing career transitions with stipends) could help to widen the pool of potential applicants, e.g. to more people from less fortunate socioeco... (read more)

tae
2y

Adding on: Increasing EA spending in certain areas could certainly support diversity, but it could have the opposite effect elsewhere.

I’m concerned that focusing community-building efforts at elite universities only increases inequality. I’m guessing that university groups do much of the recruiting for all-expenses-paid activities. In practice, then, students at elite universities will benefit, while students at state schools and community colleges won’t even hear about these opportunities. So the current EA community-building system quite accurately selects for privileged students to give money to.

Curious about any work to change this pattern!

Hi Tessa, although biorisks can be included in risks coming from high-priority emerging technologies, we decided for this round to focus on AI / cybersecurity risks for placements and therefore also for our training content.

After the program we will re-evaluate and possibly re-run it, including expansion to other areas (such as biorisks). We will announce this on the Forum; feel free to subscribe to our newsletter to receive updates.

Hi Aryeh, really interested in this as well. Can you link me to any literature, experts, videos, software, etc. from DA that you deem valuable?

Would be really useful for future training programs from Training For Good!

Wow! Spot on, Adam. I wanted to respond to this question, but there's no need anymore after reading this.

Does membership of a political party increase the odds of landing a traineeship?

1
EP intern
2y
It's certainly not a requirement. I was surprised by how many interns of MEPs did not belong to their MEP's party (maybe 50% or more). (There was even one case of a Conservative MEP who had a Green intern.) This is even more true for the interns who work for the group as a whole rather than for an individual MEP. However, of course, some MEP interns had got their internship through party connections with that MEP. Party membership certainly raises the odds significantly, even without knowing MEPs.

Thanks for this clear write-up. I will include this post in the content of Training For Good's Impactful Policy Careers workshop. Are you open to 1-on-1s with EAs interested in this career path? Feel free to respond via PM.

Great idea! At TFG we have similar thoughts and are currently researching whether we should run a program like this and the best way to do so. Would love to get input from people on this.

Awesome topic! Curious to read book reviews from people who have read it.

Great idea! At TFG we have similar thoughts and are currently researching the best way to run a program like this. Feel free to PM me to provide input.

Hi Chris! We run this on a recurring basis with Training For Good. We have already had a few dozen people on the program and we are currently measuring the impact.

See https://www.trainingforgood.com/salary-negotiation

3
Chris Leong
2y
I was suggesting an actual service and not just training.

Interesting thoughts. Apart from the sections finm mentioned, this one stood out to me as well:

status engineering - redirecting social status towards productive ends (for instance on Elon Musk making engineers high status)

I think this is something that the EA community is doing already and maybe could/should do even more. Many of my smartest (non-EA) friends from college work in rent-seeking sectors / sectors that have neutral production value (not E2G). This seems to be an incredible waste of resources, since they could also work on the most pressing probl... (read more)

4
SamuelKnoche
2y
I believe Sam Harris is working on an NFT project for people who have taken the GWWC pledge, so that would be one example. Academia seems like the highest-leverage place one could focus on. Universities are to a large extent social status factories, and so aligning the status conferred by academic learning and research with EA objectives (for example, by creating an 'EA University') could be very high impact. This also relates to the point about 'institutions.'

Thanks a lot, this looks like a great resource. This would add a lot of value, I think: properly evaluating existing policy ideas to identify and promote the set of high-quality ideas.

It would be really interesting to see how different experts within the EA community rank the ideas within the same category (e.g. AI) on certain criteria (e.g. impact, tractability, neglectedness, though there are probably better criteria), or to enrich this data with the group that is probably most able to push for certain reforms (e.g. American civil servants, people who engage with party politics, etc.).

This would make the database actionable to the EA community and thereby even more valuable.

If, however, you believe the EU will become irrelevant (argument 5 against), all policy careers for EAs in mainland Europe suddenly become quite unappealing. This makes me think: if you believe the EU market and political environment favor AGI safety (argument 4 in favor), shouldn't it be a priority for European EAs to keep the EU a relevant political force?

2
[anonymous]
2y
I think that's a very indirect intervention whose cost-effectiveness is probably lower than many other priorities (given the political forces and the many other stakeholders' strong interests, neglectedness is quite low) - but maybe I am missing something? It definitely sounds relevant as a "byproduct" of one's career, though. Would it therefore be a good principle to, ceteris paribus, push for the interventions that strengthen the EU while working on one's own priority?

I think there is one argument I really want to back, but I also want to provide a different angle: “Growing the political capital of AGI-concerned people”

I think that even if you think there are substantial odds that the EU won't play an important role in the regulation of AGI, having political capital could still be useful for other (tech-related) topics. Quite often I think there is a "halo effect" related to being perceived as a tech expert in government. That means that if you are perceived as a tech expert in government because you know a lot about A... (read more)

2
[anonymous]
2y
Agreed on the halo effect, but besides one's prestige, I think one's region-specific knowledge and network matter a lot and do not transfer. As a result, if the EU is less relevant, building up prestige in the EU might not be as efficient as building up prestige in China or the US, given that in parallel you'd be developing a network and region-specific knowledge that will be more helpful for being impactful overall. (That being said, even though I wanted to avoid anchoring the reader by expressing my opinion in the post, I expect the EU to be most relevant right now for AGI governance given the institutional precedents it sets. I believe the lack of investment of EA time & money there is an unfortunate mistake. So the "if the EU is less relevant" scenario should be disregarded, in my opinion.)

I applied for an OPP strategy role during Summer 2021 and received no feedback in the first and second test task rounds. I wasn't disappointed, because it was well-compensated.

On a different note, however: this is one of the largest advantages I see coming from an EA recruitment agency that would be able to give feedback to EA candidates. It feels like quite a miss that I didn't get any, since I have to do very similar work for my own organisation, Training For Good. Maybe there is something really obvious I can improve on, but due to the lack of feedback I don't know what.

Haha, same here, Jonas! Let me know if you know more.

I thought this was really interesting: we should open the way to a constitutional assembly leading to the development of a federal European state

Curious to hear thoughts from other EAs about the new German administration's appetite for a federalist Europe. I think a stronger, federalist Europe is something we should want from an EA perspective for the following reasons:

  • A stronger Europe increases the chance of European collective investments in AI leading to the development of "human-centric" AI
  • Increases the chance of a strong green European energy policy
... (read more)
2
ludwigbald
2y
I think there are definitely some areas where further European integration is warranted and popular across most of the political spectrum. We need a common foreign policy, and we need a common migration policy. Just to get these wins, the EU will have to centralize some more power. Some other things also make sense and are rather popular, like allowing pan-European party lists in EU parliament elections. I think over time the direction is towards tighter integration, but this will have to follow cultural integration of the people, not precede it.
1
JohannWolfgang
2y
I think of federalism and further European integration as opposite ideas. More integration means moving towards having a single point of failure where we currently have 27. For instance, the Commission bungled the acquisition of vaccines in 2020. Consequently, vaccination rates in the European Union lagged behind those in Britain by about one month (see this graph). Neither do I think that joint investments in AI and climate require further integration, but I guess it would strengthen the European position with respect to foreign policy.

I am personally also very unsure how to feel about European federalism. At the present moment it seems to me there is neither a strong political majority for further political integration, nor is there one for a significant roll-back. I expect the next years to be about management of the status quo.

While I think that a federal EU would be desirable in principle, at the present moment the risk of backlash seems high enough to me that I don't think EAs should invest resources into pushing for it. That said, if such a push were to happen, there seem to be many opportunities in steering the process, as I expect it to be in large part elite-driven.

Thanks, great overview! How can early-career European EAs contribute to this? Do you know which of the organisations you mentioned have the capacity to absorb interns / starters from the EA community?

I also sent you an email about possible involvement from Training For Good.

Great work, Michael! I've already included this Airtable in the curriculum of Training For Good's upcoming Impactful Policy Careers workshop. Well done, this work is of high value!

3
MichaelA
2y
Glad to hear that you think this'll be helpful! (Btw, your comment also made me realise I should add Training For Good to the database, so I've now done so.)

Hi, you say you will provide "housing in the Bahamas for up to 6 months".

Is there a certain minimum length of stay required (in terms of months)? 

We haven't decided on a minimum; we were figuring we'd see how long people want to stay / what their other commitments are and work around that.

Thanks, Jared, for thinking about TFG! We will make use of existing resources as much as possible. I hadn't heard of this but just got the book.

Our focus on EMEA is temporary and depends on the kind of program. E.g. our program for policymakers is aimed at EMEA because of substantial differences from the US market.

1
jared_m
3y
Congratulations on the launch, Jan-Willem! Good luck in the final months of 2021 and the new year - and if/when you expand to the Americas, feel free to drop me a line. Would be happy to chat and share any resources that may be helpful.

"People who expressed interest in a program about doing good" seems to be the best description. Marketing was focused on Dutch speaking people that wanted to do more good. 

No prior EA knowledge was needed; most people had heard about EA but had no real prior knowledge.

Hi Jack,

Jan-Willem here, one of the other co-founders of Training for Good. I actually have some data on the tractability of outreach to an older generation. As chapter director at EA Netherlands we organised a series of workshops targeted at a slightly older audience (average age ~35).

Three out of 25 people in this program committed to considerable changes in their lives (pledging large amounts of money and switching into high-impact roles). We didn't use a control group, but it is a good sign of tractability.

3
Jack R
3y
Thanks! This is the exact kind of thing I was interested in hearing about. If you don’t mind sharing, is there any significant way in which the 25 people were selected for? E.g. “people who expressed interest in a program about doing good” vs “people who had engaged with EA for at least N hours and were the top 25 most promising from our perspective out of 100 who applied.” I’m hoping for the sake of meta-EA tractability that it was closer to the former :)

Hi Michael! Thanks for your response and your question. About TFG: we are considering a management-for-EA-orgs program for our second year of existence. As mentioned in our longer introduction post, we are even open to changes in the second half of this year's training schedule, if new information shifts our beliefs about the added value of certain programs.

Hi Sarah, thanks for writing this great article.

As someone else mentioned in the comments, most EAs work in non-EA orgs, judging by the EA surveys. According to the last EA Survey I checked, only ~10% of respondents worked in EA orgs, and this is probably an overestimate (people in EA orgs are more likely to complete the survey, I assume).

So I think the problem is not that EAs are not considering these jobs; I would say the bottleneck for impact is something else:

 1) Picking the right non-EA orgs; as mentioned in the comments, the differences are massive h... (read more)

Interesting thoughts, Sanjay, and I agree that we neglect the 60% for-profit sector.

My biggest concern with your solution in one sentence: as long as people mostly care about money, they want to act on advice that maximises their financial return. Of course we could "subsidise" a service like that for social profit, but as long as it is not in the system's interest to act on our advice, it's useless.

So changing the incentives of the system (through policy advocacy) or movement building (expanding the moral circle) seem more promising from this viewpo... (read more)

2
Sanjay
3y
When I started thinking about these issues last year, my thinking was pretty similar to what you said. I thought about it and considered that for the biggest risks, investors may have a selfish incentive to model and manage the impacts that their companies have on the wider world -- if only because the wider world includes the rest of their own portfolio! It turns out I was not the first to think of this concept, and its name is Universal Ownership. (I've described it on the forum here.) Universal Ownership doesn't go far enough, in my view, but it's a step forward compared to where we are today, and gives people an incentive to care about social impacts (or social "profits").

Thanks for your response, Benjamin (and to Ben West for asking a question).

Sorry for not being completely clear about this, but I pointed towards the profile of an (EA-style) charity entrepreneur, which is indeed different from the regular SV co-founder (there are similarities, but let's not go into the details). I think the mini profile you wrote about a non-profit entrepreneur is great and I am happy to see that 80k pushes this. Hopefully the Community Building Program will follow, since national and local chapters are for many people the first point of ent... (read more)

Great idea! I have a few questions:

  • Do you know of similar voting methods that have worked on a small scale?
  • What are the next steps in terms of research / action?
1
Bob Jacobs
3y
I haven't seen the use of categories and columns before, but the voting systems I used have already seen a bunch of analysis and real world use (the electo-wiki I linked to is a good starting point if you want to look into it). If with "small scale" you mean "you and a bunch of friends need to find a place to eat" I wouldn't use columns and categories (takes too long), but would instead use a simple Approval Vote. If you have a specific scenario in mind, feel free to message me and maybe I can help you out. I'm not a professional voting theorist, so I'm going to wait and see if someone finds a flaw in the idea of using columns, categories or departments. If not, I might be able to publish it in a couple years if my university/a journal is interested. I think from an activism perspective we should first focus on introducing a better voting system. Something like Approval Voting would be easier to explain/get the public on board with than this more complex electoral reform. If I run into some people that are passionate about voting reform I will certainly share this idea with them, but for now I don't really have an audience for it beyond this forum. If you have a project in mind, feel free to message me.

One comment regarding:

But the presence of the overhang makes them even more valuable. Finding an extra grantmaker or entrepreneur can easily unlock millions of dollars of grants that would otherwise be left invested.

If we really think that this is the case for EA / charity entrepreneurs I think we should consider the following:

We spend too little effort on recruiting entrepreneurial types into the movement. Being relatively new to the movement (coming in as an entrepreneur), I think we should foster a more entrepreneurial culture than we currently do. I... (read more)

1
Manuel Allgaier
3y
Agree that this seems neglected. EA Germany (and I personally) are happy to support EA projects that have the potential to grow into impactful EA organisations. If you have ideas on how to better do that (within the limited capacity of national group organisers), feel free to get in touch! (I also agree on the importance of having founders who are value-aligned and have good epistemics, which I think some entrepreneurs are but many others may not be.)
7
Benjamin_Todd
3y
One extra thought is that there was a longtermist incubator project for a while, but they decided to close it down. I think one reason was they thought there weren't enough potential entrepreneurs in the first place, so the bigger bottleneck was movement growth rather than mentoring. I think another bottleneck was having an entrepreneur who could run the incubator itself, and also a lack of ideas that can be easily taken forward without a lot more thinking. (Though I could be mis-remembering.)

As someone who's spent a fair amount of time with the SV startup scene (have cofounded multiple companies) and the EA scene, I'd flag that the cultures of at least these two are quite different and often difficult to bridge. 

Most of the large EA-style projects I'd be excited about are ones that would require a fair amount of buy-in and trust from the senior EA community. For example, if you're making a new org to investigate AGI safety, bio safety, or expand EA, senior EAs would care a lot about the leadership having really strong epistemics and under... (read more)

I agree - people able to run big EA projects seem like one of our key bottlenecks right now. That was one of my motivations for writing this post, and this mini profile.

I'm especially excited about finding people who could run $100m+ per year 'megaprojects', as opposed to more non-profits in the $1-$10m per year range, though I agree this might require building a bigger pipeline of smaller projects.

I also agree it seems plausible that the culture of the movement is a bit biased against entrepreneurship, so we're not attracting as many people with this ski... (read more)

I feel like these conversations often get confusing because people mean different things by the term "entrepreneur", so I wonder if you could define what you mean by "entrepreneur" and what you think they would do in EA?

Even with very commercializable EA projects like cellular agriculture, my experience is that the best founders are closer to scientists than traditional CEOs, and once you get to things like disentanglement research the best founders have almost no skills in common with e.g. tech company founders, despite them both technically being "entrepreneurs" in some sense.

Thanks for this great post; I think it's a must-read for everyone working in the EA meta space.

Some thoughts on the following: 

"I continue to think that jobs in government, academia, other philanthropic institutions and relevant for-profit companies (e.g. working on biotech) can be very high impact and great for career capital."

I think we sometimes forget that these jobs in developing countries usually pay quite well. I wouldn't see earning to give and working in these institutions as opposites. There are jobs that give career capital with earning to give ... (read more)

In the Netherlands we have DonerEffectief, launched last year around May. 

Happy to share some of our experiences, and very interested in your story as well, Pablo. We registered many effective charities to make them tax-deductible over the last year. According to https://www.imperial.ac.uk/news/165846/donations-sci-deductible-across-european-union/ this status applies to residents of other EU countries as well?

Wondering if there are any other EU parties who want to capitalise on this? 

1
pmelchor
3y
Hi Jan-Willem, thanks for the info! I am interested in learning more about DonerEffectief: I will DM you about that. As for EU-wide deductibility, last time I checked it was one of those cases where: 1. EU rules establish something. 2. The countries' tax authorities  are not happy about it. 3. National rules make sure the process is as hellish as possible so that only heroes can push through.  I am quite sure that is still the case in Spain, but I will use this as a nudge to look into it again :-)

We (EAN) run a large project around improving decision-making at our MFA. We try to incorporate the newest insights from research; happy to talk. @Laura: I've reached out to you.

2
MichaelA
3y
Out of interest, what does MFA stand for, in this context?

I think this should be an important part of a potential EA training institute; see https://forum.effectivealtruism.org/posts/L9dzan7QBQMJj3P27/training-bottlenecks-in-ea

To have impact, you need personal impact skills as well, besides object-level knowledge.

Hi, great report, thanks a lot! I think this is a very inspirational case. Congrats on the results! During presentations to young professionals I always mention your initiative.

I will reach out to you to exchange experiences.

1
Jack Lewars
3y
Thanks Jan - looking forward to hearing from you!

Nice, thanks. I use this message a lot during broader outreach outside of the EA world; I think it works!

Hi there,

I am a former management / strategy consultant (3 years) and currently an entrepreneur (4 years now), of which the last year has been in the EA space (leading EA Netherlands). I think we have a very similar profile.

Happy to talk to you in June; I will send you my email in a PM!

1
NoteworthyTrain
3y
Sounds great! Thank you very much :)

Thanks for this! I've sent you an email. Especially regarding caveat #2, I believe you can help with relatively little time and resources.

Thanks, Barry, it would be great to have someone on the team who is able to give a verdict on the mental health / social media influence question. Let me know if you have someone in mind. I think working on that question should be a separate work stream in this (small) GPR project.

1
Barry Grimes
3y
I don't have a specific person in mind, I'm afraid, but you could post in the Effective Altruism, Mental Health, and Happiness Facebook group and see if anyone there would like to get involved.

Great, thanks! Have you already listened to https://80000hours.org/podcast/episodes/tristan-harris-changing-incentives-social-media/?

New 80k episode, partially dedicated to this argument.

1
kokotajlod
3y
Not yet, thanks for introducing it to me!

Can you elaborate on the EU's AI Ethics guidelines case? What did they try and why didn't they succeed?

1
Aleksi Maunu
3y
I'd also like to know

Thanks, great response, kokotajlod. Do we know whether there are already other EAs seriously investigating this, to see how probable and large the danger is and to try to brainstorm tractable solutions?

At the moment I am quite busy with community-building work for EA Netherlands, but I would love to be in a smaller group to have some discussions about it. I am relatively new to this forum; what would be the best way to find collaborators for this?

5
kokotajlod
3y
Here are some people you could reach out to: Stefan Schubert (IIRC he is skeptical of this sort of thing, so maybe he'll be a good addition to the conversation), Mojmir Stehlik (he's been thinking about polarization), and David Althaus (he's been thinking about forecasting platforms as a potentially tractable and scalable intervention to raise the sanity waterline). There are probably a bunch of other people who are also worth talking to, but these are the ones I know of off the top of my head.

Thanks! I would love to see more opinions on your first argument: 

  • Do we believe that there is no significant increase in X-risk? (no scale)
  • Do we believe there is nothing we can do about it? (not solvable)
  • Do we believe there are many overfunded parties working on this issue? (not neglected)
4
kokotajlod
3y
I can't speak for anyone else, but for me: Short-term AI risks like you mention definitely increase X-risk, because they make it harder to solve AI risk (and other x-risks too, though I think those are less probable). I currently think there are things we can do about it, but they seem difficult: figuring out what regulations would be good and then successfully getting them passed, probably against opposition, and definitely against competition from other interest groups with other issues. It's certainly a neglected issue compared to many hot-button political topics. I would love to see more attention paid to it and more smart people working on it. I just think it's probably not more neglected than AI risk reduction. Basically, I think this stuff is currently at the stage of "there should be a couple of EAs seriously investigating this, to see how probable and large the danger is and to try to brainstorm tractable solutions." If you want to be such an EA, I encourage you to do so, and would be happy to read and give comments on drafts, video chat to discuss, etc. If no one else was doing it, I might even do it myself. (Like I said, I am working on a post about persuasion tools, motivated by feeling that someone should be talking about this...) I think probably such an investigation will only confirm my current opinions (yup, we should focus on AI risk reduction directly rather than on raising the sanity waterline via reducing short-term risk), but there's a decent chance that it would change my mind and make me recommend more people switch from AI risk stuff to this stuff.

Hi Sella and Gidon,

Great to read all this; thoughtful considerations on many topics. I think the EA Netherlands (EAN) strategy is comparable to yours, and therefore I would love to collaborate in the future. First, a few comments from our experience:

1) About direct work / broad scope of activities

I think the Dutch and Israeli mentalities are very similar: people want to do stuff when they are part of a community. In addition to the pro-direct-work arguments and goals you've mentioned, I think you can add another goal of direct projects: changing the way pe... (read more)
