
In late 2023 I read Brandon Hendrickson’s review of Kieran Egan’s book The Educated Mind on ACX. I’m a teacher and it lit a fire in me. I spent 2024 cycling with my young family, and while I kept reading some Egan-related things, I didn’t really “work on it”. At some point that year I decided to dedicate a bunch of 2025 time (around 20 hours a week) to understanding Egan’s ideas and working out what I should do with them. One possibility is starting an Egan school.

 

I’m posting because I would like feedback on my school and education ideas, so I welcome any comments. Please be polite, but pull no punches. If you have ideas of your own that you think belong in any good school, let me know those too. If you’d like to leave feedback anonymously, use this link.

 

The problem: I think of it as “soft edges”, by which I mean: there are things you’re literally allowed to change about education, but because people (teachers, students, parents) are set in their ways, those changes happen in a small way at best, so progress is slow or nonexistent. Which is to say I don’t think anyone is doing anything wrong: not teachers, school execs, students, parents, the Department, or the curriculum folks.

 

My solution: tell people “we’re doing a different thing over here, so if you want to work at/attend/send your spawn to this school, you’ll do it this way”. This will snap them out of their local inadequate equilibrium and we can explore the landscape of educational possibilities a bit. I also not-so-secretly want this to succeed, then to make a template for how other people can do it easily and export it to the rest of the country/the world. As in: here’s the menu, Egan is set, but you can pick from the other options depending on what you want your school to be like and what people near you want. Here’s how to do the financial bits, the legal bits, the marketing bits, the enrolment bits, all the other bits. At the end of the day I want more variety, i.e. actual options for parents. Except Egan. Everyone must eat their Egan. (I don’t really mean that; if someone wants not-Egan they should be able to have it.)

 

Even though the first idea was purely “make an Egan school”, I’m departing from that because I don’t think it’s marketable/distinct enough in a small city like mine, and because I have a bunch of other ideas (which you’re welcome to tell me are bad/incompatible) that I think are obviously good.

 

The differences to a “normal” school: 

  1. Every lesson is Eganised. Imaginative education is in the middle of everything. I’m also into Socratic Method teaching and I think the two can fit together; I just need to work out how. I love Michael Strong’s Substack; I just need to cash it out in terms of what’s happening in the classroom. I’m also into a very limited selection of Building Thinking Classrooms, partly because I think Egan/Socratic already solves a bunch of that stuff, and partly because no direct instruction at all is madness.
  2. Subjects aren’t Maths/English/HASS/Science. The world isn’t broken into those categories, so why would learning best be thought of that way? Thinking certainly isn’t. Instead we have fun things like Defence Against the Dark Arts (of manipulation, e.g. sports betting, marketing), Reality Levels 1, 2, etc. (think material science for the things we interact with, with physics/chem/bio coming in that way), Tool Use (think Conrad Wolfram’s maths curriculum, which focuses less on the Calculate step of solving problems mathematically and more on the preceding Define and Abstract steps and the following Interpret step), Thinking (rationality, probably a bunch of stuff based on LW posts), Progress (kinda progress studies, but it also goes into the past; something something grokking the arc of history and realising we’re living in it), Money (financial literacy; understanding that a dollar is only 50c in some places and $100 in others), and Food (with the goal of people having a healthy relationship with food and being able to easily cook at least seven cheap, delicious, healthy meals that they personally enjoy). There will also be subjects like Math Appreciation (probably compulsory) and Math Theory for people who like maths and want to do more of it. And yes, I do kinda think that going to school should feel like going to Hogwarts: if kids learnt how things actually work in the real world, it would basically feel like magic anyway!
  3. TECH! LLMs are a crazy tool to have access to! Each kid will be taught to be good at using them. They will have a context document and “train” their own LLM on their interests and their way of understanding, so that every explanation is tailored to them. Assessment might be: use Sonnet/4o/r1 to learn about a topic (one hour), then have a 10-minute teacher/class discussion about it, teaching the “teacher” (I like to think of us as “facilitators”). They will know how to make the LLM use Egan’s tools/Socratic Method to tell them stories and ask them (the student) questions, to give them understanding rather than information. (A rough sketch of how this could be wired up is below, after this list.)
  4. There is (limited) self-directed inquiry (SDI). This is less “study what you want” and more “I will make you explore the world”. Super smart and wise kids I know have done free-for-all SDI (e.g. Big Picture) and said it was mostly a waste of time because they couldn’t predict what they would be interested in even six months hence, let alone in a few years’ time. So for us, each project will be a term in duration. They will concurrently do things in a few categories:
    1. Skill junkie: you will end up with a box ticked on a report and maybe something on your resume after each of these. Think touch typing, making the perfect barista coffee, video editing, the three-plate carry, quick sketching, spreadsheets.
    2. What you want: go deep on anything you want. 
    3. Interesting part of a category: imagine a non-music person being forced to find the most interesting thing (to them) in music production and spending 30 hours working on it, or a nerd doing woodwork. We make them do it because it’ll be good for them to be broad, not because it’s necessarily what they would choose to do right now. We will spend time getting them on board with this plan by explaining it rather than holding a whip.
    4. Exploration: I’m into spending 90% of learning time going deep and 10% exploring, so they must explore.
  5. Things aren’t age-based. After an intro period to get the core stuff (focus, agency, confidence, LLM use, the culture) down, they are working at their level. Templestowe College does this and it seems great. I’m not super sure about this one, as I think it logistically works better for bigger schools (500+) and I don’t think I want these to be big schools.
  6. “After-school program”. The idea here is that from an early finish time (between 2pm and 2:30pm) until about 5pm there are things to do at school. At the start of that window they’d tend to be more organised, and towards the end they’d be more “hang out”. This is not school, because I want it to be a relatively independent time for the kids, kind of like forcing them to hang out in person rather than be on their phones the whole time. Also, it means parents can work a full day. Also, I don’t know how this fits with school buses: maybe cheaper because it’s off-peak, probably more expensive because it’s later and nobody else is running buses then.
  7. Mentorship. I haven’t thought about this one a lot yet, because I know there are hundreds of successful mentorship programs in the world and I figure I can learn from them. I think every kid both has and is a student mentor, and the oldest kids have an external mentor, but we’ll see how it plays out.
  8. PE isn’t about learning sports; it’s about learning how your body works, being in the right zone throughout the school day, and having good habits when you leave school (e.g. actually enjoying sport and therefore playing it, playing with friends, etc.).
  9. The kids coming out are functional humans. For example, they should all be able to cook at least seven cheap, healthy, delicious meals. They have money skills. When they need to apply for a job/business number/university course, they know how to use a language model to optimise that process and will five-minute-rule/more-dakka their way through until things get done. They will have good habits. (This is the goal anyway; shit will hit the fan as soon as kids get involved, it always does.)
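
To make the LLM idea in point 3 a bit more concrete, here is a minimal sketch of one way a per-student context document and an Egan/Socratic system prompt could be wired together. Everything in it is illustrative rather than decided: the student_context.md filename, the prompt wording, and the model name are assumptions, and any provider’s API would do just as well.

```python
# Hypothetical sketch only: point an LLM at a student's context document and
# ask it to teach Socratically using Egan's cognitive tools.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set;
# "student_context.md" and the prompt wording are invented for illustration.
from pathlib import Path

import anthropic

# The student's context document: interests, favourite analogies, current level, etc.
student_context = Path("student_context.md").read_text()

system_prompt = (
    "You are a Socratic tutor who uses Kieran Egan's cognitive tools "
    "(stories, binary opposites, vivid images, jokes, extremes of reality).\n"
    "Tailor every explanation to this student:\n\n"
    f"{student_context}\n\n"
    "Teach mostly by telling short stories and asking questions. "
    "Aim for understanding, not information transfer."
)

client = anthropic.Anthropic()  # reads the API key from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": "Why don't bridges fall down?"}],
)

print(response.content[0].text)
```

The point is just that the “personal LLM” doesn’t need anything exotic: a well-maintained context file plus a standing system prompt gets most of the way there, and the kids can edit both themselves.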

 

Experiments

I can test Egan/Socratic teaching and LLM use this year as a regular high school teacher. My view is that this isn’t really a science experiment, because I’m just one teacher trying out some stuff, so I’m not going to pretend that it is. I just want to get an idea of whether these things work, and I want that idea to be as accurate as possible. If someone thinks they know how I could actually measure things at the start and the end (beyond a test of what the kids know or how they think they think), and disentangle the difference from what would have happened with an alternative 40 weeks for the kids, I’d love to hear it. Possibly I’d pick three or four behaviours I’d like to see (e.g. when someone gets stuck on a problem they start trying effective strategies to solve it; kids can answer questions on adjacent topics because they understand rather than merely know) and get someone to observe at the start and end of the year.

 

I must confess

I live in the real world as much as possible. My estimate of the base rate of people who want to start schools actually doing it is low. But everyone who does start a school has a plan and some belief that it might happen. I’m trying to do the things that a smart person who starts a great school would do. Can a medium INT player play a high INT character? With help from some high INT players, maybe?

Comments



In the UK, it seems as though if you form an independent school, you get a ton of leeway about what you teach and how. If it could fund itself, it could be a really cost-effective experiment with big implications if it's better and others adopt it. 

You probably need:

1. A few rich early adopters who're die-hard haters of traditional schooling, all concentrated in a single location, ideally with children around the same age.
2. An inexpensive school building to start with. Perhaps an office near a large park.
3. A model that lets you be the sole teacher, and insurance for a cover teacher if you fall ill. 
4. A mentor who's started an independent school before.
5. A large loan/grant from a UHNWI. You likely have little hope of getting a grant from a foundation, so it's probably not worth trying (if the education sector is anything like the rest of the charity sector).

It's likely that starting a full school right away is completely hopeless. You probably need a series of intermediate steps to move you in that direction slowly. For example, if you homeschooled for rich parents, that'd let you build a network of rich parents while testing / refining your ideas. 

For now, I suspect you'd be better off asking for advice from domain experts than EAs. If you're serious about this, you might want to meet with the author and try to sell them on promoting your school. I suspect marketing would be your biggest issue by far, at least for the first few years. 

Hi John, thanks for the thoughtful comment!

My vague lead-in plan was along the lines of "run a weekend/holiday program to build network and when/if the kids and parents love it, see if they want more." I will get some government funding per student, as long as I'm doing some of the things they want, which I would be. 

1 would be amazing and I should think more at some point in the future about how to do that - thanks for the idea, I hadn't thought of it!

2 is what I'm looking for and it's possible given my location. 

3 I have a bunch of teachers who are interested, so I think I'd be doing this with at least one other person, even if it's small, so hopefully we just don't all get sick at the same time. And I'd have a network to draw on if that happens. 

4 I'm going to look for a mentor who has done a similar thing. Now is not a good time to look because the school year (I'm in Australia) starts in a week, so everyone's very busy. 

5 I have a guy who can help in finding startup money and who thinks it's possible, so I'm deferring worrying about this for a while. 

Thanks again for the comment! You're right that this possibly isn't the best place for this post. So I'll likely not be posting here again about this unless it's more relevant. 
