Habiba Banu

Co-founder; Director of Operations and Research @ Spiro
1187 karma · Joined Sep 2018

Comments (16)

Hi Andrew,

Thanks for your comment!

It wasn’t clear to me from the post whether you’re planning to do an impact evaluation of an existing government TB programme, or to trial a new kind of screening and preventive treatment programme in partnership with a government (which wouldn’t otherwise do it without you).

Apologies it wasn't clear!

Our current plan is the latter: to start a new program (in partnership with the government).

Programs like this do exist in many countries and regions, but we are hoping to show that this kind of program can work well in a particular context where it hasn't been tried before.

Our program may well have elements that are not yet widespread, e.g. particular drug regimens, diagnostic tools, or methods of program delivery (such as through schools or using mobile vans).

Have I understood correctly that the Global Fund wouldn’t be willing to fund the proof-of-concept and pilot programmes itself?

Yes, you've understood correctly: we don't expect Global Fund money to go towards these early stages.

The Global Fund works on three-year cycles and provides money for a country's national TB program over that period.

We're hoping to introduce a new program that we don't expect a government / national TB program to already have funding for from the Global Fund. Though, in time, we hope that it will become part of the national TB program activities.

Hope that answers your questions!

Thanks so much for sharing this. Not following US politics closely, I'd missed this. It would be so tragic if this weren't renewed :(

I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's incubator programme this summer!

The summer 2023 incubator round is focused on biosecurity and scalable global health charities, and I'm really excited to see what's the best fit for me and hopefully launch a new charity. The ideas that the research team have written up look really exciting, and I'm trepidatious about the challenge of being a founder but psyched to get started. Watch this space! <3

I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+ advising calls I did, and I feel very privileged that I got to talk to so many people and try to help them along in their careers!

I've learned so much during my time at 80k. And the team at 80k has been wonderful to work with - so thoughtful, committed to working out what is the right thing to do, kind, and fun - I'll for sure be sad to leave them.

There are a few main reasons why I'm leaving now:

  1. New career challenge - I want to try out something that stretches my skills beyond what I've done before. I think I could be a good fit for being a founder and running something big and complicated and valuable that wouldn't exist without me - I'd like to give it a try sooner rather than later.
  2. Stepping away from EA community building a bit after the recent EA crises - Events in EA over the last few months made me re-evaluate how valuable I think the EA community and EA community building are, as well as re-evaluate my personal relationship with EA. I haven't gone to the last few EAGs, and I switched my work away from doing advising calls for the last few months while processing all this. I have been somewhat sad that there hasn't been more discussion and change by now, though I have been glad to see more EA leaders share things recently (e.g. this from Ben Todd). I do still believe that EA prioritises some really important ideas, but I'm more circumspect about some of the things I think we're not doing as well as we could (e.g. Toby's thoughts here, Holden's caution about maximising here, and things I've posted about myself). Overall, I'm personally keen to step away from EA meta, at least for a bit, and try to do something that helps people where the route to impact is more direct and doesn't go via the EA community.
  3. Less convinced of the case for working on AI risk - Over the last year I've also become relatively less convinced about x-risk from AI, especially the case that agentic, deceptive, strategically-aware, power-seeking AI is likely. I'm fairly convinced by the counterarguments (e.g. this and this), and I'm worried at the meta level about the quality of reasoning and discourse (e.g. this). I'm still worried about a whole host of non-x-risk dangers from advanced AI, though. That makes me much more excited to work on something bio or global health related.

So overall it seemed good to move on to something new, and it took me a little while to find something I was as excited about as CE's incubator programme!

I'll be at EAG London this weekend! And hopefully you'll hear more from me later this year about the new thing I'm working on - so keep an eye out, as no doubt I'll be fundraising and/or hiring at some point! :)

Thanks so much for making this offer, Ulrik! I think it is really helpful for there to be a range of folks that people can reach out to :)

I think people's tastes may vary but I appreciated the humour in this post, thanks :)

Thanks for writing this <3

Haha this is a great hypothetical comment! 

The concreteness is helpful because I think my take is that, in general, writing something like this is emotionally exhausting (not to mention time-consuming!) - especially so if you've got skin in the game and, across your life, you often come across things like this to respond to and keep feeling the pressure to force your feelings into a more acceptable format.

I reckon that crafting a message like that, if I were upset about something, could well take half a work day. And the whole time I'd have in my head all the being upset / being angry / being scared that people on the forum would find me unreasonable / resentment that people might find me unreasonable / self-doubt. (Though I know that plausibly I'm in part just describing the human condition there. Trying to do things is hard...!)

Overall, I think I'm just more worried than you that requiring comments to be too far in this direction has too much of a chilling effect on discourse and is too costly for the individuals involved. And it really just is a matter of degree here and what tradeoffs we're willing to make.

(It makes me think it'd be an interesting exercise to write a number of hypothetical comments, arrange them on a scale of how much they major on carefully explaining priors, caveating, communicating meta-level intention, etc., and then see where we'd draw the line of acceptable / not!)

Just a quick note to say thanks for such a thoughtful response! <3

I think you're doing a great job here modelling discourse norms and I appreciate the substance of your points! 


Ngl I was kinda trepidatious opening the forum... but the reasonableness of your reply and the warmth of your tone are legit making me smile! (It probably doesn't hurt that, happily, we agree more than I realised. :P )

I may well write a little more substantial response at some point but will likely take a weekend break :)

P.S. Real quick re social media... The things I was thinking about were phrases from fb like "EAs f'd up" and the "fairly shameful initial response" - which I wondered were stronger than what you were expressing here, but probably were just you saying the same thing. And in this twitter thread you talk about the "cancel mob" - but I think you're talking there about a general case. You don't have to justify yourself on those - I'm happy to read it all via the lens of the comments you've written on this post.

I wanted to say a bit about the "vibe" / thrust of this comment when it comes to community discourse norms...

(This is somewhat informed by your comments on twitter / facebook, which themselves are phrased more strongly than this and are less specific in scope.)

I suspect you and I agree that we should generally encourage posters to be charitable in their takes and reasonable in their requests - and it would be bad overall for discussions in general were this not the case. Being angry on the internet is often not at all constructive!

However, I think that being angry or upset where it seems like an organisation has done something egregious is very often an appropriate emotional response. I think that the ideal amount of expressing that anger / upset that community norms endorse is non-zero! And yes, when people are hurt they may go somewhat too far in what they request / suggest / speculate. But again, the optimal amount of "too strong requests" is non-zero.

I think that expressing those feelings of hurt / anger / upset explicitly (or implicitly expressing them through the kinds of requests one is making) has many uses, and there are costs to restricting it too much.

Some uses to expressing it:

  • Conveying the sheer seriousness or importance of the question to the poster. That can be useful information for the organisation under scrutiny about whether / how much people think it messed up (which itself is information about whether / how much it actually messed up). It will lead to better outcomes if organisations in fact get the information that some people are deeply hurt by their actions. If the people who are deeply hurt cannot / do not express this, the organisation will not know.
  • Individuals within a community expressing values they hold dear (and which of those are strong enough to provoke the strongest emotional reaction) is part of how a community develops and maintains norms about behaviour that is / isn't acceptable. 

Some costs to restricting it:

  • People who have stronger emotional reactions are often closer to the issue. It is very hard, when you feel really hurt by something, to have to reformulate that in terms acceptable to people who are not at all affected by the thing.

  • If people who are really hurt by something get the impression from community norms that expressing their hurt is not welcome they may well not feel welcome in the community at all. This seems extra bad if you care about diversity in the community and certain issues affect certain groups more. (E.g. antisemitism, racism, sexism etc.)
  • If people who are really hurt by something do not post, the discourse will be selected towards people who aren't hurt / don't care as strongly. That will systematically skew the discussion towards a specific set of reactions and lead you further away from understanding what people across the community actually think about something.

I think that approaching online discussions on difficult topics is really really hard! I do not think I know what the ideal balance is. I have almost never before participated in such discussions and I'm personally finding my feet here. I am not arguing in favour of carte blanche for people making unreasonable angry demands.

But I want to push back pretty strongly against the idea that people should never be able to post hurt / upset comments, or the idea that the comments above seem very badly wrong. (Or that they warrant the things you said on facebook / twitter about EA discourse norms.)

P.S. I'm wondering whether you would agree with me on all the above if the organisational behaviour were egregious enough by your / anyone's lights? [Insert thought experiment here about shockingly beyond-the-pale behaviour by an organisation that people on the forum express angry comments about]. If yes, then we just disagree on where / how to draw the line, not on whether there is a line at all. If not, then I think we have a more fundamental disagreement about how humans can be expected to communicate online.
