
A post for:

  • Young people focused on having a positive impact with their study and careers, looking to apply for undergraduate courses at top UK universities (e.g. Oxford & Cambridge)
  • EAs in contact with high school students who they may wish to refer for UK admissions support (e.g. students who have engaged with EA through outreach programmes or their local EA group)

I'm offering a free programme of 1:1 admissions support for students applying to top UK universities in the forthcoming admissions cycle.

UPDATE: Note this programme is paused for 2023 while I am on maternity leave, but if you are in need of support with your university application, or have any questions, please get in touch and I will help as much as I can.

Goals of the programme

  • To support young people who are focused on having a positive impact with their study and careers as they prepare UK university applications, particularly to the most elite and competitive universities such as Oxford & Cambridge. Rationale: by gaining admission to the most elite and high-quality universities, students increase their chances of having impactful academic and career opportunities in the future.
  • To guide & mentor students in degree course/major choices that best suit their interests and abilities, and that facilitate their long-term academic/career goals. Rationale: by choosing the most appropriate degree course, students will be better placed to pursue their future plans and optimise their future positive impact.

The Programme - Apply Here!

Note the programme is paused for 2023 - please get in touch if there’s anything I can help you with though.

The programme is free of charge. Successful applicants will receive remote 1:1 support, with regular sessions covering application preparation, including:

  • Choosing a degree course
  • Strategic university choices (& Oxbridge college choices)
  • Supercurricular development
  • Preparing the UCAS personal statement (& SAQ for Cambridge)
  • Preparing for admissions tests and interviews

If the programme is oversubscribed with eligible applicants, support may also be offered via group sessions, webinars, resources, and correspondence.

The programme is being run pro bono by Hannah Rowberry, a Cambridge graduate and former Oxford Admissions & Access Officer, now working as an admissions consultant. Hannah developed her interest in Effective Altruism through volunteering with SHIC, developing resources and programmes for schools.

Eligibility and selection priorities

You are eligible to apply for this programme if:

  • You are applying to UK universities for undergraduate courses in the forthcoming admissions cycle (i.e. the October 15th 2022 or January 25th 2023 deadlines). If you are applying in subsequent years but would like guidance on your current preparations for university, please get in touch.
  • You are interested in having a positive impact and making the world a better place, and would like to pursue this further within your studies and career
  • You have a strong academic profile and would be able to make a competitive application to top UK universities, such as Oxford and Cambridge (e.g. you have mostly 7/8/9s at GCSE, and are predicted A/A*s in your A levels)

Priorities in selection if oversubscribed:

  • Students with a strong commitment and interest in having a positive impact with their studies and careers, with clear intentions to pursue this
  • Students for whom UK universities are the main (or equal) priority compared to other options globally
  • Students with fewer sources of support elsewhere (e.g. your school does not have much expertise in UK/Oxbridge admissions)
  • Students from backgrounds which are underrepresented in UK universities (minority ethnic background, first generation, low income household, disabled)

Note that all eligible students are encouraged to apply; the above prioritisation will only be used if the programme is oversubscribed, to direct support towards the students who will benefit from it most.

Application process

Please apply via the Application Form. Applications will be considered on a rolling basis; however, spaces are limited, so early application is recommended. I will aim to respond within a week of an application being submitted, with support starting from May onwards.

If you have any questions or would like to find out more, please feel free to get in touch or connect via LinkedIn.

If you are sharing this with student contacts, the above information is also available in this flyer, if that is a more useful format.

 

Thank yous: many thanks to Bastian Stern, Catherine Low, Jamie Harris, & Peter McIntyre for your inspiration, support & encouragement!
 

Comments (6)



Nice! I'm particularly excited by the emphasis on support with choosing degree courses. I think this is important and really underprovided in general.

Have you thought about framing the programme as about helping people who want to have a positive impact with their work rather than about helping young EAs? I'm a little worried about community effects if "joining EA" comes to be perceived as a way to get generic boosts to one's career (and that people who join in circumstances where they didn't really let themselves think about why they might not want to will be worse long-term contributors than if they had space to think clearly about it). But maybe I'm missing some advantages of framing in terms of EAs.

Update: the EA terminology was actually already becoming problematic for referrals of students engaged with outreach orgs, so I've updated the language both within the post and in related materials to be more inclusive - many thanks for your insights on this, Owen!

Thanks Owen! And thank you for raising this, it's been something I've been thinking on, and I agree a broader 'positive impact' framing could be better on a number of levels, and is likely something I'd look to do if I scale this into a larger project. My current reasoning is simply about the practicalities of a small-scale project (just me & my spare time!), with recruiting & screening students more broadly being less operationally feasible at this point.

This sounds like a really great idea! I'd be curious to know, what are the factors limiting this to being a UK-only project? Would something like capacity or knowledge of other university systems in other countries be the issue? Because this sounds like a project I'd love to see in the US too and think could generally scale quite well!

Thanks Aris! The limiting factors are as you guess - I’m limited by my own capacity and starting out with the UK purely because that’s my own strongest area of expertise. But likewise hope to scale in both capacity and global reach - thank you for your encouragement! I’m currently effectively in a pilot phase assessing demand and impact prior to potentially scaling further. If so, I’d be looking to either develop my own global admissions expertise and/or find regional admissions specialists - I’d love to hear from anyone who might be interested in getting involved!

I have done around 20 practice interviews for mathematical subjects for Oxbridge. If, on the off chance, you need somebody to help with this, feel free to reach out. (If yes, let me know and I will share my contact details with you via LinkedIn.)

Regardless, good luck with this! It's very useful for teenagers to get this advice, which is otherwise often unavailable.
