ShayBenMoshe

Comments

List of Under-Investigated Fields - Matthew McAteer

Thanks for linking this; it looks really interesting! If anyone is aware of other similar lists, or of more information about those fields and their importance (whether positive or negative), I would be interested in that.

My Career Decision-Making Process

Thanks for detailing your thoughts on these issues! I'm glad to hear that you are aware of the different problems and tensions, and made informed decisions about them, and I look forward to seeing the changes you mentioned being implemented.

I want to add one comment about the How to plan your career article, even if this has already been mentioned. I think it's really great, but it might be a little too long for many readers' first exposure. I just realized that you have a summary on the Career planning page, which is good, but it might be too short. I found the (older) How to make tough career decisions article very helpful, and I think it offers a great balance of information and length; I personally still refer people to it for their first exposure. I think it would be very useful to have a version of this page (i.e. of similar length) reflecting the process described in the new article.

With regards to longtermism (and expected values), I think that I do indeed disagree with the views taken by most of 80,000 Hours' team, and that's ok. I do wish you offered a more balanced take on these matters, and maybe even separated the parts which are pretty much a consensus in EA from the more specific views you take, so that people can make their own informed decisions, but I know that might be too much to ask, and the lines are very blurred in any case.

Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement

Thanks for publishing negative results. I think that it is important to do so in general, and especially given that many other groups may have relied on your previous recommendations.

If possible, I think you should edit the previous post to reflect your new findings and link to this post.

(Autistic) visionaries are not natural-born leaders

Thanks to Aaron for updating us, and thanks to guzey for adding the clarification at the top of the post.

How EA Philippines got a Community Building Grant, and how I decided to leave my job to do EA-aligned work full-time

Thank you for writing this post, Brian. I appreciate your choices and would be interested to hear in the future (say in a year, and even after) how things work out, how excited you are about your work, and whether you are able to sustain this financially.

I also appreciate the fact that you took the time to explicitly write those caveats.

(Autistic) visionaries are not natural-born leaders

I meant the difference between using the two; I don't doubt that you understand the difference between autism and (lack of) leadership. In any case, this was not my main point, which is that the word autistic in the title does not help your post in any way, and spreads misinformation.

I do find the rest of the post insightful, and I don't think you are intentionally trying to start a controversy. If you really believe that this helps your post, please explain why (you haven't so far).

(Autistic) visionaries are not natural-born leaders

I don't understand how you can seriously not see the difference between the two. Autism is a developmental disorder which manifests itself in many ways, most of which are completely irrelevant to your post, whereas being a "terrible leader", as you call them, is a personal trait that bears almost no resemblance to autism.

Furthermore, the word autistic in the title is not only completely speculative, but also does nothing to help your case.

I think that by using that term so explicitly in your title, you spread misinformation, and with no good reason. I ask you to change the title, or let the forum moderators handle this situation.

My Career Decision-Making Process

Hey Arden, thanks for asking about that. Let me start by also thanking you for all the good work you do at 80,000 Hours, and in particular for the various pieces you wrote that I linked to at 8. General Helpful Resources.

Regarding the key ideas vs old career guide, I have several thoughts, which I have written below. Because 80,000 Hours' content is so central to EA, I think that this discussion is extremely important. I would love to hear your thoughts about this, Arden, and I will be glad if others could share their views as well, or even have a separate discussion somewhere else just about this topic.

Content

I think that two important aspects of the old career guide are much less emphasized on the key ideas page: the first is general advice on how to have a successful career, and the second is how to make a plan and get a job. Generally speaking, I felt like the old career guide gave the reader more tools, rather than only information. Of course, the key ideas page also discusses these issues to some extent, but much less so than the previous career guide. I think this was very good career advice which could potentially have a large effect on your readers' careers.

Another important point is that I dislike, and disagree with, the choice to emphasize longtermism and AI safety. Personally, I am not completely persuaded by the arguments for choosing a career based on a longtermist view, and even less by the arguments for AI safety. More importantly, I had several conversations with people in the Israeli EA community and with people I gave career consultations to, who were alienated by this emphasis. A minority of them felt like me, and the majority understood it as "all you can meaningfully do in EA is AI safety", which was very discouraging for them. I understand that this is not your only focus, but people whose first exposure to your website is the key ideas page might get that feeling if they are not explicitly told otherwise.

Another point is that the "Global priorities" section takes a completely top-down approach. I agree that this is sometimes a good approach, but many times it is not. One reason is the tension between opportunities and cause areas, which I already wrote about. The other is that some people might already have their career going, or are particularly interested in a specific path. In these situations, while it is true that they can change their careers or realize that they can enjoy a broader collection of careers, it is somewhat irrelevant and discouraging to read about rethinking all of your basic choices. Instead, it would be much better to help such people optimize their current path towards more important goals. To give an example, someone who studies law might get the impression that their choice is wrong and not beneficial, while I believe that if they tried, they could find highly impactful opportunities (for example, the recently established Legal Priorities Project looks very promising).

I think that these are my major points, but I do have some other smaller reservations about the content (for example, I disagree with the principle of maximizing expected value, and definitely don't think that this is the way it should be phrased as part of "the big picture").

Writing Style

I really liked the structure of the previous career guide. It was very straightforward to know what you were about to read and where you could find something, since it was so clearly separated into different pages with clear titles and summaries. Furthermore, its modularity made it very easy to read the parts you are interested in. The key ideas page is much more convoluted: it is very hard to navigate, and all of the expandable boxes do not make it easier.

My Career Decision-Making Process

Thanks for spelling out your thoughts, these are good points and questions!

With regards to potentially impactful problems in health: first, you mentioned anti-aging, and I wish to emphasize that I didn't try to assess it at any point (I am saying this because I recently wrote a post linking to a new Nature journal dedicated to anti-aging). Second, I feel that I am still too new to this domain to really have anything serious to say, and I hope to learn more myself as I progress in my PhD and work at KSM institute. That said, my impression (which is mostly based on conversations with my new advisor) is that there are many areas in health which are much more neglected than others, and in particular receive much less attention from the AI and ML community. From my very limited experience, it seems to me that AI and ML techniques are just starting to be applied to problems in public health and related fields, at least in research institutes outside of the for-profit startup scene. I wish I had something more specific to say, and hopefully I will in a year or two.

I completely agree with your view of AI for good being "a robustly good career path in many ways". I would like to mention once more, though, that in order to have a really large impact in it, one needs to really optimize for that and avoid the trap of lower counterfactual impact (at least in the later stages of one's career, after gaining enough experience and credentials).

It is very hard for me to say where the highest-impact positions are, and this is somewhat related to the view I express in the subsection Opportunities and Cause Areas. I imagine that the best opportunities for someone in this field highly depend on their location, connections and experience. For example, in my case it seemed that joining the flood prediction efforts at Google, and the computational healthcare PhD, were significantly better options than the next options in the AI and ML world.

With regards to entering the field, I am super new to this, so I can't really answer. In any case, I think that entering the fields of AI, ML and data science is no different for people in EA than for others, so I would follow the general recommendations. In my situation, I had enough other credentials (a background in math and in programming/cyber-security) to make people believe that I could become productive in ML after a relatively short time (though at least one place did reject me for not having a background in ML), so I jumped right into working on real-world problems rather than dedicating time to studying.

As to estimating the impact of a specific role or project, I think it is sometimes fairly straightforward (when the problem is well-defined and the probabilities are fairly high, you can "just do the math" [don't forget to account for counterfactuals!]), while in other cases it might be difficult (for example, more basic research or things having more indirect effects). In the latter case, I think it is helpful to have a rough estimate: understand how large the scope is (how many people have a certain disease or die from it every year?), figure out who is working on the problem and which techniques they use, and try to estimate how much of the problem you imagine you can solve (e.g. can we eliminate the disease? [probably not.] how many people can we realistically reach? how expensive is the solution going to be?). All of this together can help you figure out the orders of magnitude you are talking about.

Let me give a very rough example of an outcome of these estimates: a project will take roughly 1-3 years, seems likely to succeed, and, if successful, will significantly improve the lives of 200-800 people suffering from some disease every year, and there's only one other team working on the exact same problem. This sounds great! Changing the variables a little might make it seem much less attractive, for example if only 4 people will be able to pay for the solution (or suffer from the disease to begin with), or if there are 15 other teams working on exactly the same problem, in which case your impact will probably be much lower. One can also imagine projects with lower chances of success, which if successful will have a much larger effect. I tend to be cautious in these cases, because I think that it is much easier to be wrong about small probabilities (I can say more about this).
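To make this kind of arithmetic concrete, here is a minimal Python sketch of such a back-of-envelope estimate. It is illustrative only: the function, its parameters, and in particular the simple 1 / (1 + number of competing teams) counterfactual adjustment are all hypothetical choices of mine, not an established methodology.

```python
# A rough, illustrative back-of-envelope estimate of a project's expected
# impact, in the spirit described above. All numbers are hypothetical.

def expected_annual_impact(p_success, helped_low, helped_high, n_competing_teams):
    """Expected number of people helped per year, adjusted for counterfactuals.

    p_success:         probability the project succeeds
    helped_low/high:   range of people helped per year if it succeeds
    n_competing_teams: other teams working on the exact same problem
    """
    # Midpoint of the range of people helped per year.
    people_helped = (helped_low + helped_high) / 2
    # Crude counterfactual adjustment: the more teams working on the same
    # problem, the smaller your marginal contribution is likely to be.
    counterfactual_share = 1 / (1 + n_competing_teams)
    return p_success * people_helped * counterfactual_share

# The example above: likely to succeed, 200-800 people helped per year,
# one other team working on the same problem.
print(expected_annual_impact(0.7, 200, 800, 1))   # ~175 people per year

# The same project with 15 competing teams looks far less attractive.
print(expected_annual_impact(0.7, 200, 800, 15))  # ~22 people per year
```

Even a toy calculation like this makes the orders of magnitude explicit, and shows how sensitive the conclusion is to the counterfactual assumptions.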

Let me also mention that it is possible to work on multiple projects at the same time, or over a few years, especially if each one consists of several steps in which you gain more information and can re-evaluate it along the way. In such cases, you'd expect some of the projects to succeed, and you'd learn how to calibrate your estimates over time.

Lastly, with regards to your description of my views, that's almost right, except that I also see opportunities for high impact not only on particularly important problems but also on smaller problems which are neglected for some reason (e.g. things that are less prestigious or don't have economic incentives). I'd also add that at least in my case in computational healthcare I also intend to apply other techniques from computer science besides AI and ML (but that's really a different story than AI for good).

This comment has already become way too long, so I will stop here. I hope that it is somewhat useful and, again, if someone wants me to write more about a specific aspect, I will gladly do so.

My Career Decision-Making Process

Thanks for your comment, Michelle! If you have any other comments on my process (both positive and negative), I think that would be very valuable for me and for other readers as well.

Important Edit: Everything I wrote below refers only to technical cyber-security (and formal verification) roles. I don't have strong views on whether governance, advocacy or other types of work related to those fields could be impactful. My intuition is that these are indeed more promising than technical roles.

I don't see any particularly important problem that can be addressed using cyber-security or formal verification (now or in the future) which is not already being addressed by the private or public sector. Surely these areas are important for the world, and are therefore utilized and researched outside of EA. For example, (too) many cyber-security companies provide solutions for other organizations (including critical organizations such as hospitals and electricity providers) to protect their data and computer systems. Another example is governments using cyber-security tools for intelligence operations and surveillance. Both examples are obviously important, but not at all neglected.

One could argue that EA organizations need to protect their data and computer systems as well, which is definitely true, but this can easily be solved by purchasing the appropriate products or hiring infosec officers, just like in any other organization. Other than that, I haven't found any place where cyber-security can be meaningfully applied to assist EA goals.

As for formal verification, I believe the case is similar: these kinds of tools are useful for certain (and very limited) problems in the software and hardware industry, but I am unaware of any interesting applications to EA causes. One caveat is that I believe it is plausible (though not very probable) that formal verification can be used for AI alignment, as I outlined in this comment.

My conclusion is that, right now, I wouldn't recommend that people in EA build skills in any of these areas for the sake of having direct impact (of course, cyber-security is a great industry for EtG). To convince me otherwise, someone would have to come up with a reasonable suggestion of where these tools could be applied. If anyone has any such ideas (even rough ones), I would love to hear them!
