Toph

136 karma · Joined Dec 2018

Bio

Working in the area of AI governance. Previously worked on quantum computing and AI hardware.

Posts
1

Comments
8

Thank you so much for the time and energy put into this post! I have lots more to take in, but I wanted to highlight some points I took away on my first read-through.

In allocating collective effort to this issue, I hope we can create protective norms, practices, or processes that take care of those affected and encourage the kind of behaviors we want in the future.

One thing that seems especially noteworthy about the general framing of this post, and this statement in particular, is that it describes a healthier community that only needs a little extra effort from each individual. This reminded me of what I see as the norms around welcoming new people to EA groups. At least in my experience, most people recognize the importance of tailoring conversations with newcomers toward their interests and avoiding unnecessary jargon. Since many people are both capable of having this kind of conversation and aware of its importance, many feel licensed to help a newcomer feel more comfortable, and no single individual has to be the one to do it on a given day. There are also plenty of people who have developed introductory curricula to be even more welcoming to new people, and who train group leaders to do a better job of this in their individual groups. Applying a similar level of care to sexual misconduct reinforces your concrete points about how we could get to this healthier community: treating each individual with compassion and putting in the time to think about how to do this better.

Another example of norms that feel like a step in the right direction is how the EA community addresses burnout. EA seems to stigmatize driving people to burnout much more than other groups I've been a part of. This also gives me hope that EAs could realistically handle situations like the ones described in this post better than other communities. This feeling especially resonated with the final line of the post:

We’re a community based in altruism, in being generous and caring about the well-being of all people. We should take responsibility for the harm people experience in this community and take responsibility for preventing it when we can. Especially where we can do more, we shouldn’t let the survivors of misconduct among us shoulder the burden of improving things.

Thanks, so glad to see another engineer here! I'll put down some rough ideas here, but if you're interested in chatting sometime I'd be very happy to go into more detail. Please feel free to reach out via DM!

I'm pretty uncertain about which of the paths listed has the potential to be the most effective (or whether the most effective path is even on the list!). I would think that comparative advantage plays an important role here. I think Arden's point that it's important to be selective about what to work on is a very good one. My (very inexperienced) intuition is that, to be very selective in many jobs related to computer hardware, one needs to be a real standout candidate, and to manage that the topic probably has to be one you find really motivating.

If I had to make a bet on just one path independent of comparative advantage, I'd lean toward hardware security for AI. Part of this is that it touches many other paths (it seems like the kind of area that's forward-looking in AI hardware, and quite relevant to policy). Another part is the point you brought up: this seems less likely to speed up timelines without increasing safety. I'm not really sure how having a master's vs. a PhD would change any of this.

Thinking about other career paths less related to AI: if you're more interested in the bio/materials side of EE, I've looked a little into atomically precise manufacturing (which was mentioned in this other post from 80,000 Hours on the forum). It seems like a very interesting topic, but my impression was that (1) it's not clear exactly what an EA should want to do in this space (though people are actively thinking about this!), and (2) if you want to go into it as an engineer, you'd need to put a lot of work into building it up as a field.

Thanks for writing this up! I think this is a really important topic and I'm glad it's being discussed. I'm hoping to discuss some of the possible solutions brought up:

Should we re-consider some decisions like de-emphasizing earning to give 

I have only been involved in the EA community for a couple of years, so I may not have the full picture of what de-emphasizing means. My impression of the current advice on earning to give is that it's presented as one great option among many, and as a sort of baseline impact you can have even in a fall-back job. In this way, I've thought the discussion is becoming increasingly inclusive about what sort of role counts as an "EA" role. Are others perhaps not getting the same message I am? (E.g., I could see that if the advice felt like it went from "everyone should be earning to give" to "no one should be earning to give", that could alienate people.)

Should we re-consider...  reducing the size of EA global?

At least in my experience as a grad student applying to conferences, I don't think there's a perception that small conferences are exclusive, perhaps because it's common knowledge (or maybe common lore?) that not everyone within one research group will be accepted. So research groups self-select only one or two people to apply per year, with the understanding that if you aren't going this year, it will be your turn in a future year. This ends up working out fine because there are lots of small conferences to choose from, and people within one group can rotate through which one they attend in any given year. Things probably can't work exactly like this because EA is a lot less compartmentalized than, e.g., physics research, but perhaps some similar mechanism design could help people self-select out rather than being rejected? (A toy sketch of the rotation idea is below.)
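To illustrate the rotation idea, here is a minimal toy sketch (with hypothetical names, not a proposal for any specific EA event): one member of a group attends per year, and members take turns, so everyone knows their slot will come up without anyone needing to be rejected.

```python
# Toy rotation: one group member attends per year, and members take
# turns, so everyone knows when their slot comes up.
# All names here are hypothetical placeholders.
members = ["Alice", "Bob", "Carol"]

for year in range(2021, 2027):
    attendee = members[(year - 2021) % len(members)]
    print(year, "->", attendee)
```

Of course, the hard part is whether a decentralized community can approximate this without a central scheduler, which is where the mechanism design would come in.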

Thank you so much for the detailed comment! I would be very excited to chat offline, but I'll put a few questions here that relate directly to it:

For one, I think while the forecasts in that report are the best publicly available thing we have, there's significant room to do better

These are all super interesting points! One similarity that strikes me between many of them is that it's not straightforward which metric (transistors per chip, price-performance, etc.) is most useful for forecasting AI timelines. Do you think price-performance for certain applications could be one of the better ones to use on its own? Or is it perhaps better practice to keep an index of some number of trends? (A rough sketch of what I mean by an index is below.)
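To make the index idea concrete, here is a minimal sketch of one way it could work, assuming you normalize each trend to a common base year and combine the growth ratios with a geometric mean so that no single metric's units dominate. The metric names and numbers below are made-up placeholders, not real data:

```python
import math

# Hypothetical trend data: each series maps year -> metric value.
# All values are illustrative placeholders, not real measurements.
trends = {
    "transistors_per_chip": {2016: 1.0, 2018: 2.1, 2020: 4.0},
    "price_performance":    {2016: 1.0, 2018: 1.8, 2020: 3.2},
    "memory_bandwidth":     {2016: 1.0, 2018: 1.5, 2020: 2.4},
}

def trend_index(trends, year, base_year=2016):
    """Geometric mean of each metric's growth relative to the base year."""
    ratios = [series[year] / series[base_year] for series in trends.values()]
    return math.prod(ratios) ** (1 / len(ratios))

for year in (2016, 2018, 2020):
    print(year, round(trend_index(trends, year), 2))
```

A weighted version could emphasize whichever metrics seem most relevant to AI progress, which is partly why I'm asking which ones you'd prioritize.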

I think it's not the case that we have access to enough people with sufficient knowledge and expert opinion. I've been really interested in talking to hardware experts, and I think I would selfishly benefit substantially from experts who had thought more about "the big picture" or more speculative hardware possibilities

Are there any specific speculative hardware areas you think may be neglected? I mentioned photonics and quantum computing in the post because those are the only ones I've spent more than an hour thinking about. I vaguely plan to read up on the other technologies in the IRDS, but if some are worth looking into more than others (or some didn't make their list at all!), knowing that would help focus this plan significantly.

A  recent report estimates that ASICs are poised to take over 50% of the hardware market in the coming years

Thank you for pointing this out! I talked with someone working in hardware who gave me the opposite impression, and I hadn't thought to actually look into it myself. (In retrospect, this may have been their sales pitch to differentiate themselves from their competitors. FWIW, I think their argument was that AI moves fast enough that an ASIC start-up will be irrelevant before its product hits the market.) I look forward to updating this impression!

I think things like understanding models of hardware costs, the overall hardware market, cloud computing, etc. are not well-encapsulated by the kind of understanding technical experts tend to have.

I would naively think this is another point in favor of working at start-ups compared to more established companies. My impression is that start-ups have to spend more time thinking carefully about what their market is in order to attract funding (and their small size means technical people are more involved in that thinking). Does that seem reasonable?

This is a cool list, thanks for compiling it! For ease of viewing, I'll just list a couple that seemed most in the direction of the "hard" physics-based engineering described in the original comment:

"7. Open source leaf grinder for leaf protein extract from tree/crop leaves. Leaf protein has been produced at the household and industrial scale. - (S)

10. Work out how to recover industry as quickly as possible, e.g. focusing on making replacement parts destroyed by EMP (and estimate time of recovery). - (E)

16. Quantify the impact on energy/electricity production of nuclear winter, particularly solar, wind, and hydroelectricity. – (S)

25. Develop open source wood chipper. - (S)

27. Develop an open source shortwave (HAM) radio system (two way or just receiver). - (E)"

Hi Hervé, glad to see another engineer here! I was a physics undergrad and I'm now working near the area of quantum computing hardware. I agree that there's not a lot of advice for engineers on 80k, though it may be good to peruse this page (which mentions engineering as well as some other related areas). A few comments about this:

one of the more "hard" physics-based engineering field (electrical, mechanical, chemical) 

I think one area I might add to the list of "hard" physics-based engineering is certain types of bioengineering/biophysics. You mention a couple of areas that I think could fit nicely into this category (nanotechnology and clean meat, and probably alternative energy). If you haven't listened already, the 80k podcast with Marie Gibbons lists some of the technical skills needed for clean meat, which all seem highly transferable to other exciting (if less EA) technical jobs. (I can't remember if it's in that episode, but I think one 80k podcast mentions that the tissue engineering behind clean meat could also transfer to human tissue engineering for anti-aging.) Another direction you could take bioengineering is toward biorisks like pandemics. This is definitely less physics and more biology/public health, but certainly very EA. This brings up another point:

Be very helpful for directly addressing EA related problems

If this is what gets you excited, I think it would be helpful to list out a couple of precisely defined careers you're interested in and consider whether you're choosing the right major for them. I picked my current path before hearing about EA, and I've found it really challenging to figure out how to use my skills to do good. (See my top-level comment for what I've been thinking about.)

Better hardware for AI?

I think this is very interesting and would love to chat more (also see my top-level comment). However, if you're interested in working on AI, I think it would be helpful to really give software an honest chance, since it's much clearer that you're directly working on solving the problem from that approach. (I echo alexrjl's comment that better hardware isn't necessarily positive.)

Also, am I right in avoiding potentially dangerous fields such as nanotechnology? Or would those be even more important to get into just to shape development positively?

I'm reading nanotechnology in this context in the Eric Drexler sense, related to the idea of atomically precise manufacturing (APM). As best I can tell, there's no consensus about whether it's worth going into APM, but if you're just starting college I think there's no harm in trying to get some research experience in similar fields to help you come to your own conclusions. I think the research skills are transferable to lots of other exciting areas, too. A few people in EA have thought about this, though I think little is written publicly except this Open Phil post and talks from Eric Drexler on YouTube.

Really glad you posted this! I'm happy to talk more about any of it; you can DM me on the forum.

I want to first say thanks for making this thread! This has helped me set a deadline for myself to write down my thoughts and ask for some feedback. As described below, I’d love some feedback about my career plans, and also this draft post of notes about what it could mean to be an expert in AI hardware, which I wrote up while working on these plans.

For a little background on me: I'm currently a grad student working near the area of quantum computing hardware, and I'm on track to get my PhD in summer 2022. I think my main strength is laboratory work in experimental physics. I find that I enjoy leadership roles, though I find it hard to gauge whether I'm actually skilled at them. (For more background, see my resume.) I'm also planning to do an internship in summer 2021, and I'm hoping to figure out what could be particularly good uses of my time for the internship and my first couple of roles after grad school. I currently have no constraints on location.

I think I am pretty cause neutral, but given my skill set some of the areas I’ve thought about focusing on are:

  • AI Hardware
  • AI Policy
  • AI Technical research
  • Earning to give (and continuing to work on my personal cause prioritization)
  • Atomically Precise Manufacturing (APM)

I talked with a few people about APM. My impression is that it’s not clear if anyone should be working on actually making this technology right now. However, if one were to do this anyway, one of the most promising approaches would be essentially biology work in academia, trying to start the subfield. This made me less interested in the area, since the low expected value doesn’t seem to merit trying to start a subfield that I have no experience in.

I think I would enjoy some earning to give roles, but I view this as a solid backup option after getting a feel for the impact I can have with direct work.

I looked into AI Safety research for a little while, but it’s not clear to me that it’s my comparative advantage (as more of a lab person) compared to the other people pursuing these roles. 

One idea I came across from the post “Some promising career ideas beyond 80,000 Hours' priority paths” was AI Hardware. I think this does play more to my comparative advantage, and may let me work on the same problems. I spent some time taking notes on what it might mean to be an expert in AI Hardware, and I’m planning to make a forum post about this.

Part of the point of this comment is to welcome any and all comments on this draft post about what it means to be an expert in AI Hardware!

From this research, I’ve come up with the following plan:

  • Next month: Gain experience in one emerging AI Hardware platform (photonics) through an edX course recommended by someone in the field
  • Next summer: Expand my experience with real AI hardware by doing an internship in summer 2021, prioritizing companies working on near-term hardware like Google, though any internship would be better than nothing.
  • Next year: Apply for the AAAS Science and Technology Policy Fellowship (for the positions starting in September 2022) to see how well suited I am to policy

These experiences will probably update my thoughts on my career significantly, but I'm currently most excited about two possible career paths:

  • Plans A/B: Work on AI hardware in industry or a national lab, but take "tour of duty" roles such as program manager at IARPA. Alternatively, try for a career in something like AI hardware policy (which was recommended as perhaps the less risky route in the 80k podcast with Danny Hernandez) at a place like CSET. I'm hoping my experience with the AAAS STPF (including getting rejected) would help me decide how far into policy to go.

And some backup career paths I’m considering

I would love some feedback on this plan. I think one major flaw is that almost all of these careers are outside my area of expertise, and I'm not sure I'm being detailed enough about what skills I lack and how to get them (though if you take an expansive view of what AI hardware means, I think I would be a competitive candidate at a quantum computing company right after graduation). Also, if there are any other careers that seem worth considering, I'd love to hear about them!

Thanks for putting this all together! I just wanted to expand on Michael's suggestion of quantum computing. There is increasing interest in industry in providing quantum computing tools. A few standouts that I see publishing/talking at conferences are

These companies are advertising their product as a tool for machine learning (among other things). Further, these companies are all just cloud service providers: their goal is to get their product online as fast as possible, for use by whoever can afford it. I would need to do more research to figure out how quantum machine learning affects x-risk (the impact could be small if machine learning on classical computers develops quickly while scaling quantum hardware stays slow), but playing the right role in these companies could be very high impact.

I also wanted to add a possibly off-topic note of caution about physics research. I am currently a grad student in experimental physics, so this is possibly overly pessimistic about university research (the grass is always greener on the other side). However, I would be cautious about optimizing your resume completely toward research as a professor, which is what I think many physicists do. Academic positions are extremely competitive and best suited to people who are interested in the research for its own sake rather than for the impact it has. So I think it's important either to be very sure how excited you are about physics for its own sake (does it match the level of your young professors?) or to build a resume that could also veer toward your favorite non-physics direction if needed (e.g., taking a few classes on writing good code, or doing an internship in industry).