All of Abby Hoskin's Comments + Replies

Not sure what the inclusion criteria are for conferences, but I thought it was interesting that the Cognitive Neuroscience Society made it onto the list you linked. I would also consider the Society for Neuroscience conference, just because it has tens of thousands of attendees, so somebody will be presenting on the neuro topic you're interested in there: https://www.sfn.org/
 

This is so, so, so, wonderful! Thanks for organizing such a fantastic event, as well as sharing all this analysis/feedback/reflection. I want to go next year!!!!

2
Agustín Covarrubias
14d
Hope to see you next year! 🤝

So glad somebody is finally fixing Swapcard!

1
Ivan Burduk
15d
It's been a long haul, but we've finally convinced their CEO (a Pisces), that redeveloping their core architecture to support star-sign matching would be a good business decision.

Any plans to have this printed on t-shirts?

This needs to be discussed internally, but I think a better description is Cooperative with EA (CEA)

This is so interesting, thanks for writing this up, Jess! As one of your 80k coworkers, I'm always blown away by how organized and detail-oriented you are. Reading about your general approach to solving problems and your mindset about your job, I'm not surprised that you're always trying to anticipate how to improve processes for the team, but it's still super impressive!

To others reading this post: I also endorse 80k as a cool place to work ;)

These are great things to check! It's especially important to do this kind of due diligence if you're leaving your support network behind (e.g. moving country). Thanks for spelling things out for people new to the job market ❤️

Thanks so much for sharing this, Michelle! It's always strange to visit our past selves, remembering who we used to be and thinking about all of the versions of ourselves we chose not to become. 

I'm glad you became who you are now ❤️

3
Michelle_Hutchinson
2mo
<3

This is a really interesting question! Unfortunately, it was posted a little too late for me to run it by the team to answer. Hopefully other people interested in this topic can weigh in here. This 80k podcast episode might be relevant? https://80000hours.org/podcast/episodes/michael-webb-ai-jobs-labour-market/

This is an interesting idea! I don't know the answer. 

Thanks for the interesting questions, but unfortunately, they were posted a little too late for the team to answer. Glad to hear writing them helped you clarify your thinking a bit!

On calls, the way I do this is to not assume people are part of the EA community, and instead see what their personal mindset is when it comes to doing good.

I think 80k advisors give good advice. So I hope people take it seriously but don't follow it blindly.

Giving good advice is really hard, and you should seek it out from many different sources. 

You also know yourself better than we do; people are unique and complicated, so if we give you advice that simply doesn’t apply to your personal situation, you should do something else. We are also flawed human beings, and sometimes make mistakes. Personally, I was miscalibrated on how hard it is to get technical AI safety roles, and I think I was overly optimisti... (read more)

Tricky, multifaceted question. So basically, I think some people obsess too much about intelligence and massively undervalue the importance of conscientiousness and getting stuff done in the real world. I think this leads to silly social competitions around who is smarter, as opposed to focusing on what’s actually important, i.e. getting stuff done. If you’re interested in AI Safety technical research, my take is that you should try reading through existing technical research; if it appeals to you, try replicating some papers. If you enjoy that, consider a... (read more)

Alex Lawsen, my ex-supervisor who just left us for Open Phil (miss ya 😭), recently released a great 80k After Hours Podcast on the top 10 mistakes people make! Check it out here: https://80000hours.org/after-hours-podcast/episodes/alex-lawsen-10-career-mistakes/ 

We had a great advising team chat the other day about “sacrificing yourself on the altar of impact”. Basically, we talk to a lot of people who feel like they need to sacrifice their personal health and happiness in order to make the world a better place. 

The advising team would actually prefer for people to build lives that are sustainable; they make enough money to meet their needs, they have somewhere safe to live, their work environment is supportive and non-toxic, etc. We think that setting up a lifestyle where you can comfortably work in the long... (read more)

I love my job so much! I talk to kind-hearted people who want to save the world all day, what could be better?

I guess people sometimes assume we meet people in person, but almost all of our calls are on Zoom. 

Also, sometimes people think advising is about communicating “80k’s institutional views”, which is not really the case; it’s more about helping people think through things themselves and offering help/advice tailored to the specific person we’re talking to. This is a big difference between advising and web content; the latter has to be aime... (read more)

Yeah, I always feel bad when people who want to do good get rejected from advising. In general, you should not update too much on getting rejected from advising. We decide not to invite people for calls for many reasons. For example, there are some people who are doing great work who aren’t at a place yet where we think we can be much help, such as freshmen who would benefit more from reading the (free!) 80,000 Hours career guide than speaking to an advisor for half an hour. 

Also, you can totally apply again 6 months after your initial applicatio... (read more)

Sudhanshu is quite keen on this, haha! I hope that at the moment our advisors are more clever and give better advice than GPT-4. But keeping my eye out for Gemini ;) Seriously though, it seems like an advising chat bot is a very big project to get right, and we don’t currently have the capacity.

This is pretty hard to answer because we often talk through multiple cause areas with advisees. We aren’t trying to tell people exactly what to do; we try to talk through ideas with people so they have more clarity on what they want to do. Most people simply haven’t asked themselves, “How do I define positive impact, and how can I have that kind of impact?” We try to help people think through this question based on their personal moral intuitions.  Our general approach is to discuss our top cause areas and/or cause areas where we think advisees could ... (read more)

Studying economics opens up different doors than studying computer science. I think econ is pretty cool; our world is incredibly complicated, but economic forces shape our lives. Economic forces inform global power conflict, the different aims and outcomes of similar sounding social movements in different countries, and often the complex incentive structures behind our world’s most pressing problems. So studying economics can really help you understand why the world is the way it is, and potentially give you insights into effective solutions. It’s often a ... (read more)

Mid-career professionals are great; you actually have specific skills and a track record of getting things done! One thing to consider is looking through our job board, filtering for jobs that need mid/senior levels of experience, and applying for anything that looks exciting to you. As of me writing this answer, we have 392 jobs open for mid/senior level professionals. Lots of opportunities to do good :) 

It would be awesome if there were more mentorship/employment opportunities in AI Safety! Agree this is a frustrating bottleneck. Would love to see more senior people enter this space and open up new opportunities. Definitely the mentorship bottleneck makes it less valuable to try to enter technical AI safety on the margin, although we still think it's often a good move to try, if you have the right personal fit. I'd also add this bottleneck is way lower if you: 1. enter via more traditional academic or software engineer routes rather than via 'EA fellowshi... (read more)

1
Huon Porteous
7mo
To add on to Abby, I think it’s true of impactful paths in general, not just AI safety, that people often (though not always) have to spend some time building career capital without having much impact before moving across. I think spending time as a software engineer, or ML engineer before moving across to safety will both improve your chances, and give you a very solid plan B. That said, a lot of safety roles are hard to land, even with experience. As someone who hasn’t coped very well with career rejection myself, I know that can be really tough.

Our advising is most useful to people who are interested in or open to working on the top problem areas we list, so we’re certainly more likely to point people toward working on causes like AI safety than away from them. We don’t want all of our users focusing on our very top causes, but we have the most to offer advisees who want to explore work in the fields we’re most familiar with, which include AI safety, policy, biosecurity, global priorities research, EA community building, and some related paths. The spread in personal fit is also often larger t... (read more)

I totally agree that more life experience is really valuable. For example, I recently updated my bio to reflect how I'm a mom (of two now, ahhhh!); somebody mentioned they booked in with me because they specifically wanted to chat with a parent, so it's great we have an advisor with that kind of experience on the team. If you have recommendations for experienced people who you think would be good advisors, feel free to shoot me a DM with names!

So cool! I've read through the updated career guide and really love it. Surprisingly engaging, and shockingly well balanced between high-level strategic advice and concrete advice for practical next steps. The personal anecdotes sprinkled throughout the guide were super inspirational as well; real people have followed this advice and done huge amounts of good! Edit: These are my views and do not reflect those of my employer (80k lol) ;)

So cool! And thanks for sharing your syllabus :) Do you have any interest in collaborating with the Princeton EA club this semester? Hit me up at anovick@princeton.edu

I agree with Jaime's answer about how alignment should avoid deception. (Catastrophic misgeneralization seems like it could fall under your "alignment as capabilities" argument.)

I sometimes think of alignment as something like "aligned with universal human values" more than "aligned with the specific goal of the human who programmed this model". One might argue there aren't a ton of universal human values. Which is correct! I'm thinking very basic stuff like, "I value there being enough breathable oxygen to support human life". 

Thanks for this very thorough write-up. I appreciate this level of transparency on what's needed for two of our community's biggest grantmaking orgs!

I didn't even know you could make a table and then embed youtube videos within the table on EA Forum posts! Very cool. 

-1
wes R
8mo
Thanks :)

Just saw how strongly downvoted this parent comment is! OP asked "Why do EA people think a thing?" And I responded with "This is why I, an EA person, think a thing." You can disagree with my opinion, but you can't deny that I have this opinion. I'm not obsessed with EA forum karma, but it's kind of annoying how badly people are following discourse norms here by downvoting opinions that they simply don't like. (There's a disagree button for this exact purpose, people!)

1
yanni kyriacos
8mo
i find this a strange feature of this forum tbh.  i don't think ive ever downvoted anything?  but yeah, the best strategy is not to care imo

I finished my PhD, but it would have been reasonable for me to quit. I wrote a little bit about this here: https://forum.effectivealtruism.org/posts/rJ9LBoSt9MvXJrbEf/how-to-apply-for-a-phd

Relevant section:

Don't get stuck

If you get into a program and then realize that you're wasting your time, you can always drop out of graduate school. If you're deeply unhappy in graduate school but don't want to drop out "because you're the kind of person who completes academic courses" (ht Robert Miles), take a moment to consider what you value about your personal ident

... (read more)

This looks really cool! I will recommend it to 80,000 Hours advisees :)

I'm interested to hear why you're asking this question. How would this affect your confidence in certain beliefs and the way you defer?

2
aprilsun
8mo
I've become much more familiar with EA; historically I considered the two communities to be similarly rational, and I thought the two were generally a lot more similar in their beliefs than I do now. So when I learn of a difference of opinion, I update my outside view and the extent to which I consider people the relevant experts. E.g., when I learn that Eliezer thinks pigs aren't morally relevant because they're not self-aware, I lose a bit of confidence in my belief that pigs are morally relevant and I become a bit less trustful that any alignment 'solutions' coming from the rationalist community would capture the bulk of what I care about.

I think individuals donating less than $1 million a year need very different advice than big donors moving millions a year (e.g., Dustin Moskovitz). 

If you are in the former category, any smart normal financial advisor can give good advice. It is hard to find smart retail financial advisors who aren't trying to sell you some random high-fee product, so it makes sense for you to collect recommendations. I just don't think they need to be EA-aligned; lots of wealthy people ask these exact same questions with the goal of maximizing their donations to whatever their chosen cause is.

Great to hear the water infrastructure is improving! Seems like a huge boost to quality of life :) 

The mystery of the beans continues though...

A lot of EAs are into mindfulness/meditation/enlightenment. You link to Clearer Thinking, and I consider Spencer Greenberg to be part of our community. If you want to get serious about tractable, scalable mental health interventions, SparkWave (also from Spencer Greenberg) has a bunch of very cool apps that focus on this. 

I'm personally not into enlightenment/awakening because meditation doesn't do much for me, and a lot of the "insights" I hear from "enlightened" people strike me as the sensation of insight more than the discovery of new knowledge. I... (read more)

5
Abby Hoskin
8mo
Just saw how strongly downvoted this parent comment is! OP asked "Why do EA people think a thing?" And I responded with "This is why I, an EA person, think a thing." You can disagree with my opinion, but you can't deny that I have this opinion. I'm not obsessed with EA forum karma, but it's kind of annoying how badly people are following discourse norms here by downvoting opinions that they simply don't like. (There's a disagree button for this exact purpose, people!)

This is not central to the original question (I agree with you that poverty and preventable diseases are more pressing concerns), but for what it's worth, one shouldn't be all that nonplussed at how the “insights” one might hear from “enlightened” people sound more like the sensation of insight than the discovery of new knowledge. Most people who've found something worthwhile in meditation—and I'm speaking here as an intermediate meditator who's listened to many advanced meditators—would agree that progress/breakthroughs/the goal in meditation is not about gaining new knowledge, but rather, about seeing more clearly what is already here. (And doing so at an experiential level, not a conceptual level.)

3
Rebecca
8mo
I think Yanni actually works at SparkWave :)

Random thought: you mention it's not always easy to get clean drinking water. Is there anything in the water in Uganda that could become dangerous to consume if left sitting around for 12 hours? Maybe there are different bean soaking norms in Uganda compared to other countries because you get sick after consuming stagnant water there? (Bean soaking is the norm for other developing countries I'm aware of.)

Also, now I'm really hungry for beans ;)

7
NickLaing
8mo
Haha, we ate beans just now (as we do a few nights a week). After soaking the beans, the water is discarded and the new water is boiled for an hour. I have fairly high confidence there are no major issues here. As a side note, more and more boreholes and protected springs (usually pipes coming out of the ground) are available around Uganda, and the national water piping system is spreading around cities. Development is real, and this has been one clear, positive improvement over the last few years here.

You have obviously put a lot of thought into this, which I think is super valuable both for deciding your own next steps and for other people in the community to see how it relates to them. I strongly encourage you to apply for 80,000 Hours Advising; I think chatting through these considerations with an advisor would be super useful. Also, it's free ;) https://80000hours.org/speak-with-us/

My hot take is, at the level of donations you're considering, your main consideration should be how impactful your actual job is/how impactful the job that you're pivotin... (read more)

1
Péter Drótos
8mo
Thank you for taking a look and for the suggestions! Not saying I've tried super hard to talk these through with an advisor but my attempts did not get much attention so far. Completely agree that one should prioritize long-term impact. I'm just saying that in case of a temporary funding constraint, choosing not to donate may prevent other, at least equally promising candidates who are in need of funding from investing into their own careers.

Really great post, I don't often read community posts that completely resonate with me. Thanks for sharing these insights, hope they're useful for others, especially community builders :) 

(This is just my personal opinion and does not reflect the views of my employer.)

What is the bottleneck: There aren't a lot of people who are well established in things like AI Safety or AI Governance because these fields are relatively young. There are a lot of people interested in entering these fields because AI has recently become much more legibly a problem. So there are more newcomers than experienced experts. We want experienced experts to actually do the thing they're expert in, not just mentor newcomers. So there's a limit to how many new people can get mentored per year. 

What is the solution: I don't really know what kin... (read more)

3
Péter Drótos
8mo
I think the effect you describe makes sense; you can’t just grow a field arbitrarily fast. But I think it might be useful to talk more about whether we still think that trying to get into one of these very competitive programmes is currently the best use of the talented people who are happy to use their career to do the most good.

Good question! This place seems designed to do high impact final projects, but I don't think they use an EA framework to define impact. They seem quite altruistic though. https://www.datascienceforsocialgood.org/

Thanks for sharing this! Good points here too:

I find it interesting how he says that there is no such thing as AGI, but acknowledges that machines will "eventually surpass human intelligence in all domains where humans are intelligent" as that would meet most people's definition of AGI.

I also observe that he has framed his responses to safety on "How to solve the alignment problem?". I think this is important. It suggests that even people who think aligning AGI will be easy have started to think a bit more about this problem and I see this as a victory in

... (read more)

Patrick, thanks for sharing your story! Seeing all these high school students get involved in EA and starting high impact research projects before they even go to university can feel overwhelming; how could a mid-career professional compete with that???

Actually, mid-career professionals are awesome. They know how the real world works, and they know how to get stuff done. People who have only been students don't know how to navigate government bureaucracies, bring a product from conception to market, or manage large teams. Indeed, many of our most impressiv... (read more)

So cool! Looking forward to the projects that come out of this :)

Thanks, really helpful context! 

Looking around and realizing you're the grown up now can be startling. When did I sign up for this responsibility????
