This post is part of a series of six interviews. As EAs, we want to use our careers or donations to do the most good - but it’s difficult to work out what exactly that looks like for us. I wanted to interview effective altruists working in different fields and on different causes and ask them how they chose their cause area, as well as how they relate to effective altruism and doing good more generally. During the Prague Fall Season residency, I interviewed six EAs in Prague about what they are doing and why they are doing it. I’m grateful to my interviewees for giving their time, and to the organisers of PFS for supporting my visit. You can read my first interview, with Tyler Johnston, here.

I’m currently working as a freelance writer and editor. If you’re interested in hiring me, book a short call or email me at ambace@gmail.com. More info here.


Daniel is a recent graduate of Stanford University and CEO of Inference Health, a medtech startup. He's currently mentoring EAs interested in publishing AI research & startups! You can reach him at dan@inferencehealth.com.

We talked about:

  • his career experiments in research and earning to give
  • his life as a digital nomad
  • his healthcare startup
  • his scepticism about longtermism
  • his thoughts on entrepreneurship, the EA community, cause agnosticism, and having a youthful and agentic personality

On testing different career paths

Daniel: I was born in 2000 and grew up in Florida. In high school, I was really into biology. I did a lot of research at cancer centres: computational modelling for cancer progression and management. I went to college at Stanford, starting in 2018. I was originally studying bioengineering. My background was in radiation oncology, so I was pretty sure that I wanted to do oncology research. But within a month, I switched to computer science.

This was because I was getting really into EA at the time. I joined Stanford EA and read Doing Good Better. Will MacAskill makes a pretty compelling argument in that book about the opportunity cost of careers, and he uses medicine as a specific example. I think the argument is along the lines of: medical school admissions in the US are very quota-based - about a third of applicants don’t get in every year, and about the same number of doctors come out every year. So the relative benefit [to the world] of you versus the person who would have taken your spot in medical school is probably not that much. I think they did some rough math and figured out that [US doctors] save something like 4 people over the course of their career [counterfactually]. So I was like, ‘this seems like a pretty reasonable argument [not to go into medicine].’

So I switched my major to Computer Science, but I was like ‘Okay, now I have no idea what I'm gonna do.’ So I decided to just go down the 80,000 Hours list of impactful careers and try literally everything. I ended up trying AI research, earning to give, and entrepreneurship, which is what I’m doing now.

Amber: How did you decide which one to start with? Or how to order them?

Daniel: It was just greedy optimization: I started with the things that were easiest to do right now. I don't have a particular personal passion for any given cause, except for some natural affinity towards global health. So I shopped around a bit.

For the first two years in college, I tried AI safety research, working on AI interpretability. I joined the Stanford Vision Lab, and published a couple of papers about model interpretability, model robustness, that kind of stuff. But I got bored out of my mind – research is just so glacially paced. And there’s this feeling of ‘Oh, even once you have the chops, you still need to get a PhD and spend all this time climbing the academic ladder to get a tenure track position’ - that felt kind of boring to me. I wanted to go a lot faster. Around this time, COVID hit, and I decided to take two years off of school.

Amber: Why? Did you ever go back?

Daniel: I went back for ten weeks and graduated at the same time as my friends. I decided to take a break because of COVID. The classes were virtual, there wasn't really a campus community. So it was like…

Amber: …why am I paying so much money for this terrible experience?

Daniel: Yeah. Also, a lot of my personal decision-making is very heuristic-based. I see myself as somebody who just does shit - a person who wants to do something, and who is comfortable with risk. Going to school during COVID felt like the default option, and a lot of my ambitious friends took the time to do something else. So it was like, ‘Okay, I know I need to take this time and do something. So I'm just gonna start by taking time off of school and I’ll figure out what I want to do along the way.’

I spent the first six months trying out earning to give. I worked for Two Sigma, a quantitative finance firm in New York, and donated a significant portion of my income. Out of all the 9-to-5s that I’ve worked, it felt like the best 9-to-5. It was fun in some sense, and intellectually interesting, but it didn't feel as meaningful as I would like.

On life as a digital nomad

Daniel: After that, I spent the next year and a half digital nomading. This was super duper fun, and honestly, cheaper than staying in the Bay. I went all over the place - South America, Europe, Thailand. Being involved in so many global communities gave me a real sense of global scope.

Amber: Say more about that: it gave you a sense of the largeness of the world?

Daniel: Yes; it also gave me a big appreciation for cultural relativism and diversity across different locales. It's very important to remember that EA is primarily a Western philosophy with hotbeds in the Bay Area and the UK, whereas lots and lots of people live and operate under completely different moral structures and frameworks. Similarly, ideas of what altruism is vary quite significantly across borders. For example, in China and other more relationship-based cultures, there is a much stronger sense of familial responsibility, and respect, that kind of thing.

Amber: Did you have a particular aim while you were digital nomading?

Daniel: The thought process behind this was really simple. First, even with the cost of flights, it’s cheaper to take flights and live anywhere else than it is to live in the Bay Area. Second, being immersed in a foreign place gives me a lot of energy, and lets me passively learn a lot of things. So for example, since I’ve been living in Prague for a while, I now know a tiny bit of Czech, and I’ve gotten a taste for Czech food, and I know what Prague is like. If you stay in one place, it's much easier to stagnate or get into routines. It’s harder to get value out of your walk to work, or your dinner, because it’s stuff you’ve seen before. When I travel, I have a bunch of tiny challenges for myself. One of them is to never go to the same place twice. This is the first time I've been to this cafe!

While I’ve been travelling, I’ve been working on my startup. You can code from anywhere. The wildest time was in Ukraine: I spent July there with some friends, shooting war documentaries.

Amber: How did you get into that?

Daniel: I can’t take any credit for it. A couple of friends said ‘Hey, Daniel, do you want to go to Ukraine with us to film a documentary?’ And I said, ‘Yes’ and booked the ticket the next day.

On his healthcare startup

Daniel: So I’ve spent about a year and a half wandering the world. And about three or four months into that, my friend and I started a startup called Inference Health (interestingly, entrepreneurship is the third or fourth option on the 80,000 Hours list of recommended careers). It’s a healthcare technology startup. I did this because I felt like I had a good personal fit for the industry. I’d done a lot of healthcare research, so I felt like healthcare technology was a good place to enter the startup world.

We try to break down the ridiculousness of the healthcare pricing system in the US. We focus on getting significant cash pay discounts for folks who are uninsured, and folks who are self-insured by their employer. As a very concrete example, a total knee replacement will cost you about $50,000 at a hospital. One of our knee surgery bundles was about $15,000. So, pretty significant savings.

Amber: How does that work?

Daniel: Long story short: medical bankruptcy is the biggest cause of bankruptcy in the US. This is because medicine in the US is one of those really strange markets where you can buy a product without knowing how much it costs. So a very common story is, you go to the hospital because you need a surgery, you get the surgery done, and six months later, you get a bill in the mail and you find out that you owe them like $90,000.

Every insurance company pays different amounts for different things. As a doctor, you set a “list price” for each medicine or procedure – ‘I’ll sell this asthma inhaler for $2,000’. And then every single insurance company comes to you and says, ‘Okay, look, we're not going to put you in network unless you give us a discount on the asthma inhaler.’ And you're like, ‘Okay, I'll give it to you for $800’. Every insurance company will negotiate these discounts for all their prices, oftentimes pretty significant discounts, and then they can market those discounts when they sell their insurance to companies, saying, ‘Look, buy our insurance for your employees, because we have the biggest discounts for all of these things.’

But the issue is, when you don't have insurance, you have to pay the list price, which is $2,000. And most doctors understand this, so many of them have these secret ‘cash pay prices’ for when patients pay them directly instead of going through insurance; but plenty of doctors don’t offer these. So if you’re uninsured and go to a doctor, you might end up with a surprise bill of $590,000 or something.

The really interesting thing is, until recently, nobody actually knew how much insurance companies paid doctors for their services. The negotiated rates were considered the intellectual property of the insurance companies; all a patient would know was their copay. But a law was passed recently that required all insurance companies to publicly disclose information about the rates they negotiated.

Amber: So you now know how much they’re marking it up?

Daniel: Exactly. So with this data, our startup can go to doctors and say ‘Look, here's how much you’re making from insurance.’ We negotiated a couple of arrangements with doctors and built this open marketplace of health care prices. And we direct people who are uninsured, or companies who are self-insured, towards these transparently-priced offers for significant discounts.

Amber: Right. So you're saying, ‘Here's a doctor, and here's what their services cost.’ And the patients pay you a bit of markup, but not as much as they would to insurance companies, so doctors get more patients, and you get that markup as profit?

Daniel: Yes, that's right. And it's pretty cool, because a big problem in US healthcare is that folks who are uninsured just don't get health care. The only free health care in the US is emergency room health care. A hospital cannot refuse to treat you if you come into the emergency room, but anything before that, you usually have to pay for. So we've worked with people who have told us ‘I wouldn't have gotten this care otherwise’.

Amber: How did you come to found Inference Health? You mentioned that it made sense because of your medical background. How did you come up with the concept? And how did you decide to found this type of medical startup as opposed to a different kind?

Daniel: Honestly, the answer is we initially had absolutely no bloody clue what we wanted to do. We spent literally half a year asking hospital CEOs and insurance CEOs what problems we should solve, and they gave us like twenty different answers. We built three different products, solving completely different problems; we had one that would reduce medical paperwork, and one that would speed up insurance claims. The breakthrough moment was when we got admitted to a startup accelerator called TechStars, which was sponsored by the biggest health insurance company in the US, United Healthcare. We met a lot of really high profile people in the industry there. They talked through this stuff with us and, as we looked at the market shifts, we felt like the legislation around healthcare price transparency was going to be pretty transformative for the market. Similarly, the problems of uninsured folks not getting healthcare, and medical bankruptcy, felt very acute. But the beginning of the process was totally just a random walk.

Amber: Do you have longer-term goals? Do you think you'll do this for a while?

Daniel: I don't know. At our stage, about 99% of startups go bankrupt within the next couple of months or so, so it's definitely an uphill battle. I don't know if we'll stick with this idea in particular or this company in particular. Hopefully, we'll have a pretty reasonable outcome and help a lot of people out on the way. But yeah, I’m 22, I honestly have no idea what I’ll be doing.

On cause agnosticism and intellectual diversity

Amber: Do you have any interest in working in global health, as opposed to US health? You mentioned that you have an affinity for this cause.

Daniel: Yes: with my current startup, the problem is very much within the US healthcare system, and it hasn't felt like we’re making as large an impact as I want. And even the limited amount of impact we've had so far, it feels like I'm fighting tooth and nail to get. So global health does feel like an important place to go. Do I know what route I'm gonna take to get there? Absolutely no clue.

Amber: You said that apart from that slight affinity for healthcare, you don’t have a strong attachment to any particular cause. Say more about that.

Daniel: So I have two layers of thoughts here. I have thoughts about cause agnosticism at the organisational level [of the EA movement], and thoughts about myself. At the organisational level, EA has a lot of cause-agnostic people. I think that’s because the EA recruitment pathways focus on innocent 18-year-olds who are just starting college, and nobody ever knows what they want to do in college!  You’re not likely to find people with really strong attachments to causes at that stage. I don’t think this is a good thing: right now EA has a critical mass of cause-agnostic people who are relatively junior in their careers, and because of that, 80% of people in EA tend to defer to experts on various topics, which doesn't lead to a lot of intellectual diversity.

Amber: Yeah, and it makes that diversity less relevant: even if we’re a community of thousands, it’s only a small minority of the community who determine priorities, because other people are unsure and therefore end up deferring, or just getting swept along with things.

Daniel: Right. You can see the opposite dynamic in US politics, where like 80-90% of Americans are very certain whether they're Democrat or Republican, and the remaining 10% are asked to adjudicate between two bipolar sets of views.

I imagine there's some happy medium where we have some people who are like, ‘I have very strong beliefs on animal welfare, and I'm going to continue working on animal welfare, period, regardless of whether the rest of the community’s funding is being spent on AI safety’. This does happen, but it’s a relatively small minority. I think this is probably true not just for cause areas, but for opinions and hot takes as well. Rather than just temporarily red-teaming our own views, our community should have people who genuinely believe different things from everybody else – in a principled way, of course. We should have more of a marketplace of ideas.

On scepticism about AI risk and longtermism

Daniel: As for my personal cause-agnosticism: there are lots of causes that EAs are interested in but that I don’t believe in — for example, a lot of longtermist stuff.

Amber: What sort of longtermist stuff, and why? 

Daniel: One thing is AI risk, and the first reason is that my AI timelines are actually really long. This is based on my experience doing AI research. I think lots of people are overestimating the speed at which development occurs. Secondly, when you're thinking about existential risks more generally, you’re looking at a very small probability mass with a very high impact, so you run into random noise and black swan events. This means that the vast majority of existential risks are likely to be unseen or unpredictable ones, as opposed to the known risks: these are the tip of the iceberg.

Amber: So it's like: if we go extinct, it's not likely to be from any of the cause areas that EAs are working hard on, like biosecurity or nuclear risk, but instead something that we haven't even thought of, or something within those areas that we haven't thought to work on?

Daniel: Right. That's a mitigating argument, obviously: it’s not a reason to stop working on those things. But my point is that if I have to psychologically accept the unknowable low probabilities of extinction, I should be able to accept the knowable ones as well.

Amber: Can you say more about why your AI research convinced you about long timelines?

Daniel: There are just various specific arguments that longtermists cite for shorter timelines, for example the argument about the singularity, or the argument about scaling computing power, and I have specific responses to each of those arguments.

I always like to make comparisons to similar moments in the past where EAs might have thought they were at ‘pivots of history’ [eras which have an unusually large influence on the future]. Obviously, at the development of nuclear weapons, but also the industrial revolutions and stuff like colonisation. I can imagine that 100 years from now, maybe everyone will be worried about existential risk from martian invaders and prioritize preventing martian invasion. They might look back and go, ‘Oh, it’s so weird how those people in 2020 thought AI was the biggest risk’, just like how we go, ‘Oh, it's so weird that in the Cold War we thought that nuclear war would be the real existential risk.’

On entrepreneurship and broadening the EA community

Amber: You mentioned before that there’s lots of cause-agnosticism in EA because EA recruits young university students who often don't have it all figured out, and you said that that might be bad. Might it be better to recruit more people who are strongly committed to certain causes? What sort of people should the EA community have more of?

Daniel: I remember this quote from someone that was like ‘it’s much easier to make an entrepreneur an EA than to make an EA a great entrepreneur.’ I was thinking about why, and I think it's because as an organisation, EA is really good at teaching people about EA, but we’re really bad at teaching people how to become like Bill Gates.

Amber: Yeah, I guess entrepreneurship is a rare skill.

Daniel: Yeah. I think this is probably true across many fields. For example, it feels relatively important to get traditional academia to turn towards AI safety, rather than trying to build a new academia from scratch around AI safety. That means that instead of publishing on LessWrong or on blogs, we should be publishing with ICLR, CVPR, the big AI conferences.

I also think the ability to reach outside of traditional EA fields is relatively important. Everybody was so freaking excited when Kelsey Piper started working at Vox. Everyone was like, ‘I didn't even know anybody in EA could write English!’ So we’ve gone from being limited to the fields of math and computer science, to having journalists. But it’s like, what fields have we not gone to? Basically everything else!

Amber: So we should try to bring EA values into existing fields and institutions?

Daniel: Yeah. This is a perennial problem that faces a lot of social movements: there's a very strict trade-off between going deep and going broad, where if you go broad, you need to dilute the message to make it understandable to lots of people, whereas if you go deep, you end up with lots of “EA-isms” or shibboleths.

Amber: You said that it’s maybe easier to make an entrepreneur an EA, than to make EAs entrepreneurs. You’re an entrepreneur: if people did want to make more entrepreneurs, do you have any advice? What sort of character traits are important? Or is it not about character?

Daniel: I mean, I guess this reduces to the question of ‘what makes a good startup founder.’ And honestly, the answer is just not letting the world put you down. One common thing I’ve seen across all my entrepreneurial friends is that people will shit on them 90% of the time for their idea. They'll say it's dumb. But we just go, ‘No, the world is wrong and I'm correct, and I’m going to pour 100 hours a week into this until I show the world that I'm correct.’ Most of the time, we're wrong, but sometimes we’re right. Honestly, this is in some ways at odds with the need to use evidence - oftentimes, you have to completely work off of unfounded assumptions.

Amber: Yeah, because you might be the first person to try an idea - so you can't just ask yourself ‘has this worked before?’ 
 


On being a ‘fringe EA’

Amber: Is there anything you’d like to talk about that we haven’t talked about yet?

Daniel: I guess: I think there are lots of really cool people in EA, but I still consider myself very much a fringe EA.

Amber: Why is that?

Daniel: There was this interesting experiment I ran at EAG DC. I asked people, ‘if you think of all of the people that you've had a serious conversation with in the last few months, what percentage of them were EA?’ Can you guess what the distribution looks like?

Amber: I think some people will have said ‘almost everyone’, some will have said ‘almost no-one’.

Daniel: That's exactly it. It was super bimodal - it was either 80% or 20%. I follow the 20% pattern - I spend most of my day interacting with people in healthcare. So in that sense, I consider myself to be one of the fringe EAs who spend a lot of time interacting with non-EA people. Whereas the 80% group have a lot of shared characteristics, e.g., being interested in certain things.

Amber: That's interesting, because I can imagine that some people in the 20% category are having a huge impact, by EA standards, because they’re busy doing the thing rather than having nerdy chats. Whereas I was very plugged into the EA social community before I started having any impact by EA lights.

Daniel: Yeah, ‘fringe’ means something different from ‘not impactful’ - it means not being buried deep in the community. Anyway, as you can imagine, those fringe EAs are usually much more diverse in terms of career paths, so seek out those people at conferences, and they’ll tell you interesting things.

 

On having an agentic personality and staying young at heart

Amber: You said that many of your decisions came from the fact that you see yourself as someone who just does stuff. Have you always been like that? And if not, how did you start?

Daniel: I’ve always tried to cultivate a pretty random personality – that’s another reason I consider myself to be a fringe EA. Your pure EA would sit down and work on the one thing that they care about for like, 30 hours a day until they get it done, and I'm very much not like that. Back in senior year of high school, I was like, ‘Daniel, your life is super duper boring. All you do is study and research all day.’ I remember googling ‘things that interesting people do’.

Over the years, every time I hear about something cool, I add it to a bucket list of stuff I want to experience or do. And I've been crossing off one or two of these items a month, but then adding three or four. So that's caused me to do a bunch of random stuff. Learning Czech was one, getting scuba certified was one, getting a private pilot’s licence was one…

Amber: Are there any bucket list items that you regretted doing?

Daniel: Totally - skydiving. It's one of those things where, you know you'll hate it before you do it, you hate it while you're there, and maybe you'll regret it after you do it. But then afterwards, you're glad you did it. Because now you don't have to do it anymore!

I think youth is a pretty interesting concept. People have a physical age, but there’s also mental age. It feels sad but true that as you get older, you get more boring – that's my hot take! When I think about all the older people I know who still feel very young, they share this trait of being up for doing crazy things. It keeps them refreshed and young.

I think if you get a 9-to-5 job and just work, it's pretty easy to get old. Whereas if you do startups, shit is burning down around you all day, so you don’t have time to get old. And similarly, digital nomads and people who travel a lot seem very adventurous, even if they’re old.

Interested in more EA stories? Read the first interview in this series, with Tyler Johnston, here. You can also hire me to write and edit things.
 
