I advance global challenges where transformative impact can be achieved by integrating frontier (research) knowledge with innovative practice.
To achieve this, I build theories of change and execute program models end-to-end, in particular where strategic convening across disciplines, sectors and systems will drive innovation critical to impact.
Currently skill-building:
- Innovation incentive models (problem framing & solution design, e.g. prizes, challenges, accelerators, networks)
- Biosecurity, pandemic & health emergency preparedness
- AI safety & governance
- Storytelling & communications
I am generally super curious and love encountering surprising-but-probably-important new ideas and findings.
I'm seeking connections and career opportunities to engage in building, implementing & evaluating theories of change and program models.
I have years of deep and broad experience, but would really love to pair with others to build some new things that we can design, prototype and test-test-test, then maybe get some funds to implement.
I have experience in strategic planning, operational planning, measurement & evaluation, reporting, and fundraising.
Other EAers have told me they find it helpful to talk out their ideas & plans with me and hear me synthesize back what they've said, including unrecognized gaps or opportunities.
Nice opportunity to share, thanks for posting.
I was just sifting through NATO-DIANA Challenge/Accelerator topics to identify shared opportunities that go by different names. I think space, defense, remote communities, extreme environments, etc. could bring much more synergy (and funding) than GCR folks recognize. I might map some of this out in the coming weeks, if only to expand the field of funding opportunities people are thinking about.
Adding some weight to others' comments: since 80k went whole-hog for AI-more-AI-nothing-but-AI, what was initially interesting & compelling AI content for me to listen to, as part of a broader repertoire of distinctly EA takes on things, has felt like a firehose, and there isn't interesting content I look to the podcast for now. I miss the other areas of content a lot.
Encountering these episodes, which I'd listen to in 30-45 min chunks over a few days, was indescribably useful. The ones with Ajeya Cotra on worldview diversification, Rachel Glennerster on market shaping, Karen Levy on program development & evaluation, and Hugh White on Donald Trump and US change were so genuinely novel and informative to me that the perspectives they shared are now baked into how I think about things. The podcast's shift since then to 1000 angles of AI risk has nowhere near this value.
Editing to add something less crabby:
Some areas of AI risk content that would be substantially interesting and useful, and would re-engage me, would be around building out an actual understanding of AI risk. The AI discourse given any attention here has been representative of a dangerously homogeneous group for something prioritized for its existential level of risk, global impact, etc. (mostly white men, almost entirely W.E.I.R.D. countries, middle-class, narrowly technical interests) -- more or less a mirror of the same people causing the risk. For novel + valuable content, I want to hear perspectives that can help fill out even a bit more of the ENTIRE REST OF HUMANITY's views on this one: countries/regions, ethnicities, life stages, genders, walks of life, socio-economic statuses, faiths, sectors, families, education experiences. I have a sense we can't possibly have a good grasp of what the major risks are if our understanding is based exclusively on what's most valued by the narrowest group of people. It would also open up so much rich space for new problem frames --> new solutions. I would avidly listen to this kind of content. The podcast team expansions would ideally reflect people with the abilities to build this out...
The simple answer to your question about the noteworthy salaries at core EA orgs: Symbolic Capitalism.
A truly EA approach to EA work would be to carry everything out at very reasonable wages in the lowest-cost labour markets in the world, across every level of an organization, because even paying outright for staff members to undertake whatever specific niche skill training a role might need would still never add up to anywhere near the entry-level salaries at some of these US- and UK-based organizations.