Hi, and welcome to the a16z Podcast. I'm Hanne, and today we're talking about AI in medicine, but we want to talk about it in a really practical way: what it means to use it in practice, and in a medical practice; what it means to build medical tools with it; but also what creates the conditions for AI to really succeed in medicine, and how we design for those conditions, both from the medical side and from the software side. The guests joining for this conversation, in the order in which you will hear their voices, are Mintu Turakhia, a cardiologist at Stanford and director of the Center for Digital Health; Brandon Ballinger, CEO and founder of Cardiogram, a company that uses heart rate data to predict and prevent heart disease; and Vijay Pande, General Partner here at a16z and head of our bio fund.

So let's maybe just do a quick breakdown of what we're actually talking about when we talk about introducing AI to medicine. What does that actually mean? How will we actually start to see AI intervene in medicine, and in hospitals, and in waiting rooms?

AI is not new to
medicine. Automated systems in health care have been described since the 1960s, and they went through various iterations of expert systems and neural networks, and were called many different things.

In what way would those show up in the '60s and '70s?

So at that time there was no high-resolution data and there weren't too many sensors, and it was about a synthetic brain that could take what a patient describes as the inputs, and what a doctor finds on the exam as the inputs.

Using verbal descriptions?

Yeah, basically words. People created what are called ontologies, and literary and classification structures, but you put in the ten things you felt, and a computer would spit out the top ten diagnoses in order of probability. And even back then, they were outperforming sort of an average physician. So this is not new.

So, basically, doing what hypochondriacs do?

Yeah, right. So Google is in some ways an AI expression of that, where it's actually used ongoing inputs and classification to do that over time; a much more robust neural network, so to speak.
So an interesting case study is the MYCIN system, which is from 1978, I believe. This was an expert system built at Stanford. It would take inputs that were just typed in manually, and then it would essentially try to predict what a pathologist would show, and it was put to the test against five pathologists, and it beat all five of them. It was already outperforming doctors. But when you go to the hospital, they don't use MYCIN or anything similar, and I think this illustrates that sometimes the challenge isn't just the technical aspects or the accuracy; it's the deployment path. Some of the issues around there are: OK, is there a convenient way to deploy this to actual physicians? Who takes the risk? What's the financial model for reimbursement?
If you look at the way the financial incentives work, there are some things that are backwards. For example, if you think about a hospital from the CFO's perspective, misdiagnosis actually earns them more money, because when you misdiagnose, you do follow-up tests, and our billing system is fee-for-service, so every little test that's done is billed for.

But nobody wants to be giving out wrong diagnoses, so where's the incentive?

The incentive is just in the system, the money that results from it. No one wants to give an incorrect diagnosis; on the other hand, there's no budget to invest in making sure of that, and I think that's been part of the problem. And so things like fee-for-value are interesting, because now you're paying people for, say, an accurate diagnosis, or for a reduction in hospitalizations, depending on the exact system. I think that's the case where accuracy is actually rewarded with the greater payment, which sets up the incentives so that AI can actually win.
In the circumstance where I think AI has come back at us with force, it came to healthcare as a hammer looking for a nail. What we're trying to figure out is where you can implement it easily, without too much friction and without a lot of physicians going crazy, and where it's going to be very, very hard. That, I think, is the challenge, in terms of building and developing these technologies, commercializing them, and seeing how they scale. And so the use cases really vary across that spectrum.

Yeah, I think about there being a couple of different cases where AI can intervene. One is to substitute for what doctors do already, and people use radiology as an example of that. The other area, which I think is maybe more interesting, is that AI can complement what doctors can't do already. It would be possible for a doctor to, say, read an ECG and tell you whether you're in an abnormal heart rhythm; no doctor right now can read your Fitbit data and tell you whether you have a condition like sleep apnea. I mean, if you look at your own data, you can kind of see restful sleep and well-structured REM cycles, so you can see some patterns there. That said, the gold standard that a sleep doctor would use is a sleep study, where they wire you up with six different sensors and tell you to sleep naturally. There's a big difference here between the very noisy consumer sensors, which may be less interpretable, and what a doctor is used to seeing. Or it could be that the data is on the device, but the analysis can't be done yet; maybe the AI needs a gold-standard data set to compare to. There are a lot of missing parts beyond just gathering the data from the patient in the first place. I think there are some inherent challenges in the data.
Yeah. Healthcare is unpredictable; it's stochastic. You can predict a cumulative probability, like the probability of getting condition X or diagnosis X over a time horizon of five or ten years, but we are nowhere near saying, you know, you're going to have a heart attack in the next three days. Prediction is very, very difficult. Where prediction might have a place is where you're getting high-fidelity data, whether it's from a wearable or a sensor. One, it's so dense that a human can't possibly do it; a doctor is not going to look at it. And two, it's relatively noisy: inaccurate, poor classifiers, missing periods where you don't actually have the continuous data that you really want for prediction. In fact, in a lot of wearable studies, the biggest predictor of someone getting ill is missing data, because they were too sick to wear the sensor.

Oh, so the very absence of the data is a big indicator: I was too sick to put on my what-have-you.

Right, and that means something's not right, possibly. Or you're on vacation.

And that's the problem; that's the other challenge of AI: context.

So what are some of the simpler problems, where you have clean data structures, less noise, and very clear training for these algorithms? I think that's where we've seen AI really pick up. In imaging-like studies, it's a closed-loop diagnosis: there is a nodule on an X-ray that is cancer, based on a biopsy proven later in the training data set, or there isn't. And in the case of an EKG,
well, we already have expert systems that can give us a provisional diagnosis; they're not really learning. And that's a great problem, because most arrhythmias don't need context: you can look at it and make the diagnosis; we don't need them to learn. That's why it's good to apply this technology to them right away. You don't need everything; you don't need to mine the EMR to get all this other stuff. You can get the image and say: does it have a diagnosis or not? And so imaging of the retina, imaging of skin lesions, X-rays, MRIs, echocardiograms, EKGs: that's where we're really seeing AI pick up.

I sort of divide the problems into inputs and outputs, and we talked a little bit about some of the inputs that have newly become available, like EMR and imaging data. I think it's also interesting to think about what the outputs of an AI algorithm would be. In these examples, they're kind of self-contained, well-defined outputs that fit into the existing medical system, but I think it's also interesting to imagine what could happen if you were to reinvent the entire medical system with the assumption that we have a lot of data, and intelligence is artificial and therefore cheap, so we can do continuous monitoring. So one of the things I think about is: what are the gaps for people who do not have access to EKGs? Right, most people... I've actually never had an EKG done, aside from the ones I do myself, and most people in the US actually get their first EKG when they turn 65, as part of their Medicare check.

Wow, so late.
You know, people like my parents. My dad's an excavator, so he digs foundations for houses, and he hasn't seen a doctor in 20 years; if he left a job site, the entire job site would shut down. So it's hard for some people, I think, to go into the doctor's office between the hours of 9:00 and 5:00. And if you look at that in aggregate, about half of people in the US have a primary care physician at all, which seems astonishingly low, but that's kind of the fact. There's kind of a gap: about a third of people with diabetes don't realize they have it; for hypertension it's about a fifth; for afib it's 30 or 40 percent; for sleep apnea it's like 80 percent.

I think it's one thing to just find out but not be able to do anything about it; the actionable aspect, I think, really is a huge game changer. It means that you can have both better outcomes for patients and, in principle, lower costs for payers.

Right, and these are areas where there are clear ways of addressing these kinds of specific conditions.

I will take a little bit of a different view here,
which is that I don't know if artificial intelligence is needed for earlier and better detection and treatment. To me, that may be a data collection issue.

How is that different from what we're saying about finding it early?

Because that may have to do with getting sensors out of hospitals and getting them to patients, and that's not inherently an AI problem.

It could be a last-mile AI problem, though, if you want to scale the ability to get this stuff out.

OK, so let's say we get to a point where our bathroom tiles have built-in EKG sensors and scales, and the data is just collected while we brush our teeth.

It's the sensing technology that may detect things discretely, like an arrhythmia; you may not necessarily need intelligence. But who's going to look at the data? So this is about scaling.

Well, no, the idea is that, yes, over time AI could look at the data. And the other thing is that if you're using this for screening, you want to make the accuracy as high as possible to avoid false positives, and AI would have a very natural role there too. But it's interesting that you're saying it's not necessarily about the analysis; it's about where the data comes from and when. I think there are two different problems.
There may be a point where it truly outperforms the cognitive abilities of physicians, and we have seen that with imaging so far. Some of the most promising aspects of the imaging studies and the EKG studies are that the confusion matrices, the way humans misclassify things, are recapitulated by the convolutional neural networks.

To actually break that down for a second: what are those confusion matrices?

The confusion matrix is a way to graph the errors and which directions they go. So for rhythms on an EKG, a rhythm that's truly atrial fibrillation could get classified as normal sinus rhythm, or atrial tachycardia, or supraventricular tachycardia. The names are not important; what's important is that the algorithms are making the same types of mistakes that humans are making. It's not that it's making a mistake that's necessarily more lethal, or just nonsensical, so to speak; it recapitulates humans. And to me, that's the core thesis of AI in medicine, because if you can show that you're recapitulating human error, you're not going to make it perfect, but that tells you that, kept in check and with controls, you can allow this to scale safely, since it's liable to do what humans do. And so now you're automating tasks. You know, I'm a cardiologist, I'm an electrophysiologist, but I don't enjoy reading 400 ECGs when it's my week to read them.

So you're saying it doesn't have to be better; it just has to be making the same kinds of mistakes for you to feel that you can trust a decision.

And you dip your toe in the water by having it be assistive. And then, at some point, we as a society will decide if it can go fully autonomous, without a doctor in the loop. That's a societal issue; that's not a technical hurdle at this point.

Right.
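As a concrete illustration, the confusion matrix described above, with a row for each true rhythm and a column for each label the reader (human or algorithm) assigned, takes only a few lines to compute. The rhythm labels and reads below are fabricated for illustration, not taken from any study:

```python
from collections import Counter

# Hypothetical rhythm labels; the counts below are made up.
RHYTHMS = ["afib", "sinus", "atrial_tach", "svt"]

def confusion_matrix(y_true, y_pred, labels):
    """Rows are true labels, columns are predicted labels."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

y_true = ["afib", "afib", "afib", "sinus", "sinus", "svt"]
y_pred = ["afib", "sinus", "afib", "sinus", "sinus", "afib"]

m = confusion_matrix(y_true, y_pred, RHYTHMS)
# Row "afib": 2 read correctly, 1 mistaken for sinus rhythm. The
# *direction* of the errors is what you compare between humans and models.
for label, row in zip(RHYTHMS, m):
    print(f"{label:>12}: {row}")
```

Comparing a model's matrix against a panel of physicians' matrices is one way to check whether the algorithm's mistakes "recapitulate" human ones rather than being nonsensical.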
Well, and you can imagine, just as for self-driving cars, you have different levels of autonomy. It's not nothing versus everything; you know, in the convention, level 1, level 2, level 4, level 5, as in self-driving cars.

And I think that would be the most natural way, because we wouldn't want to go from nothing to everything.

Exactly. And just like with a self-driving car, we as a society have to define who's taking that risk on. You can't really sue a convolutional neural network, but you might be able to make a claim against the physician, the group, the hospital that implements it. And how does that shake out? To figure out, literally, how to insure against these kinds of errors. I think
the way you think about some of these error cases kind of depends on whether the AI is substituting for part of what a doctor does today, or whether the AI is doing something that's truly novel. In the novel cases, you might not actually care whether it makes mistakes that would be similar to a human's.

Oh, that's an interesting point, because it's doing something that we couldn't achieve otherwise. What kinds of novel cases like that can you imagine?

Wearables are an interesting case, because they'll generate about two trillion data points this year, so there's no cardiologist, or team of cardiologists, who could even possibly look at those. That's a case where you can actually invert the way the medical system works today: rather than being reactive to symptoms, you can actually be proactive, and the AI can essentially be something that tells you to go to the doctor, rather than something the doctor uses when you're already there.

Just to take radiology as an example: you could have one level where it's as good as a common doctor, another level where it's as good as the consensus of doctors. Another level is that it's not just using the labels a radiologist would put on the image; it's using a higher-level gold standard, like predicting what the biopsy would say.

Which would be, in your terms, novel: something no human being can do.

Yes. And it can do that because, in principle, it can fuse the data from the image, and maybe blood work and other things, which are easier to get, and much less risk-inducing, than removing tissue in a biopsy.

So pooling those multiple streams of information into one, sort of synthesizing them, is another area that's very difficult for a human being to do and very natural for a computer.

It is very natural, but I think
we need a couple of things to get there. We need really dense, high-quality data to train on. And the more data you put in a model... I mean, so much of machine learning, done poorly, is statistical overfitting.

That's like saying driving is driving a car off a cliff. You know, poor driving is poor driving, but machine learning tries to avoid statistical overfitting.

It does. My point is that one of the unknowns with any model, and it doesn't matter if it's machine learning or regression or a risk score, is calibration. And as you start including fuzzy and noisy data elements, well, first of all, often the validation data sets don't perform as well as the training data set, no matter what math you use.

And why is that?

Well, that's a sign of overfitting, and usually it's because there wasn't sufficient regularization during the training process.

So overfitting is a concept in statistics that effectively indicates your model has been so highly tuned and specified to the data you see in front of it that it may not apply to other data; it can't generalize.

So if you had to use a model to pick, out of a bunch of kids in a classroom, the kid who's the fastest, an overfitted model might say it's the redheaded kid wearing Nikes, because in that class, that was the case with the one child, but that has no plausible biological basis. And so then you take that to a place where the prevalence of Nike shoes or redheads is low. These are some of the issues that underlie shifts in population. In the natural language processing that's embedded in AI, the lexicon that people use, how doctors and clinicians write what they're saying with their patient, is different, not even from specialty to specialty, but between hospitals' sort of mini-subcultures: it's going to be different at Stanford than it was at UCSF, which is going to be different at the Cleveland Clinic.

I think that's actually a nice thing about wearable data: Fitbits are the same all over the world.
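The overfitting failure described above, a model that in effect memorizes "the redheaded kid wearing Nikes," can be demonstrated in a few lines. This toy sketch, entirely synthetic and not from the episode, uses a one-nearest-neighbor model that memorizes a training set of pure noise: it scores perfectly on the data it saw and falls to chance on fresh data.

```python
import random

random.seed(0)

# Toy data: one random feature, one random binary label, NO real signal.
def make_data(n):
    return [(random.random(), random.randint(0, 1)) for _ in range(n)]

train, test = make_data(200), make_data(200)

def nearest_neighbor_predict(x, data):
    # 1-nearest-neighbor: pure memorization of the training set.
    return min(data, key=lambda d: abs(d[0] - x))[1]

def accuracy(data, model):
    return sum(model(x) == y for x, y in data) / len(data)

train_acc = accuracy(train, lambda x: nearest_neighbor_predict(x, train))
test_acc = accuracy(test, lambda x: nearest_neighbor_predict(x, train))
print(f"train accuracy: {train_acc:.2f}")  # 1.00: every point matches itself
print(f"test accuracy:  {test_acc:.2f}")   # roughly chance on fresh noise
```

The gap between training and test accuracy is exactly the "validation set doesn't perform as well as the training set" signal mentioned above; regularization techniques exist to shrink that gap.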
This label problem, though, is interesting, because in our context each label represents a human life at risk. It's a person who came into the hospital with an arrhythmia, and so you're not going to get a million labels the way you might for a computer vision application; it would be kind of unconscionable to ask for a million labels in this case. So I think one of the interesting challenges is training deep-learning-based models, which tend to be very data-hungry, with limited labeled data.

The kitchen-sink approach of taking every single data element, even if you're looking at an image, can lead to these problems of overfitting, and what Brandon and Vijay are both alluding to is: you limit the labels, but they're really high-quality labels, and you see if you can go there. So don't complicate your models unnecessarily, and don't build models that are overly complicated for the amount of data you have.

Right, because if you have the case where you're doing so much better on the training set than the test set, that's proof that you're overfitting and you're doing the modeling wrong. Modern ML practitioners have a whole set of techniques to deal with overfitting, so I think that problem is very solvable with well-trained practitioners. One
thing you alluded to is the interpretability aspect. So let's say you train on a population that's very high in diabetes, but then you're testing on a different population which has a higher, or lower, prevalence. That is kind of interesting: identifying shifts in the underlying data. What would that mean? Let's say we train on people in San Francisco, where, you know, everyone runs to work and eats quinoa all day, and then we go to a different part of the country where maybe obesity is higher, or somewhere in the stroke belt, where the rate of stroke is higher. It may be that the statistics you trained on don't match the statistics you're now testing on. That's fundamentally a data quality problem: if you collect data from all over the world, you can address this, but it will be a while for that to happen.
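One way to see, in numbers, why statistics trained on one population may not transfer to another is to hold a screening test's sensitivity and specificity fixed and let disease prevalence shift; Bayes' rule then shows the positive predictive value collapsing in low-prevalence populations. The 90% sensitivity and 95% specificity below are hypothetical, chosen only for illustration:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(disease | positive test), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same hypothetical model "moves" from a high-prevalence population
# to lower-prevalence ones; its positive calls become mostly false alarms.
for prev in (0.20, 0.05, 0.01):
    print(f"prevalence {prev:4.0%} -> PPV {ppv(0.90, 0.95, prev):.0%}")
```

This is also why, in the screening scenario discussed earlier, accuracy has to be pushed as high as possible: at 1% prevalence, most positives from this hypothetical test would be false.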
As we start gathering the data in different ways, how does that actually even happen? How are these streams of data funneled in, and examined, and fed into a useful system?

So it used to be that the way you'd run a clinical trial is that you'd have, you know, one hospital, and you'd recruit patients from that hospital, and that would be it. If you got a couple of hundred patients, that might actually be quite difficult to attain. I think with ResearchKit, HealthKit, Google Fit, all of these things, you can now get 40 or 50 thousand people into a study from all over the world, which is great, except for the challenge that the first five ResearchKit apps had: they got 40,000 people, and then they lost 90 percent of them in the first 90 days.

So everybody just drops out.

Everyone just drops out, because the initial versions of the apps weren't engaging. So this adds an interesting new dimension: as a medical researcher, you might not think about building an engaging, well-designed app, but actually you have to bring mobile design in as a discipline that you're good at.

So there has to be some incentive to continue to engage.

Yeah, exactly. And, I mean, you need to measure cohorts the same way Instagram or Facebook or Snapchat does. So I think
the teams that are starting to succeed here tend to be very interdisciplinary. They bring in the clinical aspect, because you need that to choose the right problem to solve, but also to design the study well, so that you have convincing evidence. You need the AI aspect, but you also often need mobile design, if it's a mobile study, and you may need domain expertise in other areas if your data is coming from somewhere else.

Then it all has to be, like, gamified and fun to do.

Yeah. I mean, gamification is sort of extrinsic motivation, but you can also give people intrinsic motivation, right? Giving them insights into their own health, for instance; that's a pretty powerful thing for people.

What are the system's incentives? I mean, of course doctors want it if it makes them more accurate or helps them scale better, and patients want it if you can predict whether or not you're going to have a problem. How do we incentivize the system?
I believe, fundamentally, it is going to come down to cost and scale, and to willingness. What willingness does a healthcare entity, whoever that may be, whether it's employer-based programs, insurer-based programs, accountable care organizations, have to take on risk to see the rewards of cost and scale? And so the early adopters will be the ones who've taken on a little more risk.

Yeah. I think, you know, it is a challenge. The hope is there, in terms of value and in terms of better outcomes, but one has to prove it out, and hospitals will want to see that. The regulatory risk is being largely addressed by this new office of digital health at the FDA, which is much more forward-thinking about it. But there are going to be challenges that we have to solve, and I'll give you one, just to get the group's input here:
should you be versioning AI, or do you just let it learn on the fly? Normally, when we have firmware, hardware, or software updates in regulated, FDA-approved products, they're static; they don't learn on the fly. If you keep them static, you're sort of losing the benefit of learning as you go. On the other hand, bad data could heavily bias the system and cause harm: if you start learning from bad inputs that come into the system for whatever reason, you could, intentionally or unintentionally, cause harm. And so how do we deal with versioning in deep learning?

I mean, the parameterization of versioning, from a computer science point of view, is trivial. There's the deeper statistical question of when you version: you could version every day, every week, every month. What you want to do, to the point we were talking about earlier, is bring in new validation sets, things the model has never seen before, because you don't want to just test each version on the same validation set; now you're intrinsically sort of overfitting to it. And so what you always want to be doing is holding out bits of data, so that you can test each version separately.

Because you want to make sure, with very strict confidence, that this is doing no harm and this is OK. It's like you're introducing a whole new data set of a different kind of thing, and that's when you make new considerations.

Data is coming in all the time, and so, you know, you just version on what came in today, and that's it. I mean, it's pretty straightforward. This is the way speech recognition works on Android phones: data is obviously coming in continuously, every time someone says "OK Google" or "Hey Siri" it's coming in to either Google or Apple, but you train a model in batch, then you test it very carefully, and then you deploy it. And the versions are indeed tagged by the date of the training data; it's already embedded in the system.
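The batch-train, test, then deploy workflow described above can be sketched as follows. Everything here is a toy stand-in (the training and scoring functions, the threshold, the version naming are all hypothetical), just to show the shape: train on a batch, evaluate on a fresh held-out set that no previous version was tuned against, and release only a version that clears a strict bar, tagged by its training-data date.

```python
import datetime

def train(batch):
    # Stand-in for real training: the "model" is just the mean of the labels.
    return sum(batch) / len(batch)

def evaluate(model, holdout):
    # Stand-in metric: 1 minus mean absolute error on held-out labels.
    return 1 - sum(abs(model - y) for y in holdout) / len(holdout)

released = {}
THRESHOLD = 0.6  # minimum score to ship: "strict confidence it does no harm"

def maybe_release(batch, fresh_holdout, cutoff_date):
    """Train in batch; gate release on a holdout never used before."""
    model = train(batch)
    score = evaluate(model, fresh_holdout)
    version = f"model-{cutoff_date.isoformat()}"  # tagged by data date
    if score >= THRESHOLD:
        released[version] = (model, score)
    return version, score

v, s = maybe_release([0, 1, 1, 1], [1, 1, 1], datetime.date(2018, 3, 1))
print(v, f"score={s:.2f}", "released" if v in released else "held back")
```

The key design point from the conversation is that `fresh_holdout` must change between versions; reusing one validation set turns the release gate itself into a source of overfitting.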
Who are the decision-makers that are green-lighting this? Like, OK, we're going to try this new algorithm, we're going to start applying it to these radiology images. What are the decision points?

So with EKGs, they used expert systems to just ease the pain point of me having to write out and code out every single diagnosis: the super low-hanging fruit. Can you improve the accuracy of physicians? Yes. Can you increase their volume and bandwidth? Can you actually use it to see which physicians are maybe going off course? What if you start having a couple of physicians whose error rates are going up? Right now, with quality, the QI process isn't really based on random sampling; there are actually no standardized metrics for QI in any of this. When people read EKGs and sign them off, they just sign them; there's nothing telling anyone that this guy has a high error rate. And so that is a great use case, where you're not making diagnoses, but you're helping anchor. If you believe this algorithm is good and broadly generalizable across data, you're sort of restating the calibration problem. It's not that the algorithm has necessarily gotten worse, because in fact, for seven of the eight doctors, it's right on par with them; it's that if this other doctor is not agreeing with the algorithm, which is agreeing with the other seven, that doctor is actually not agreeing with the other seven. And so now you have an opportunity to train and relearn. Those are the use cases: to train and relearn, to have the person address their reading errors and coding errors, and see what's going on. And that qualitative look, I think, is very, very valuable.
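The QI idea just described, using an algorithm that agrees with most readers as an anchor and flagging the outlier, can be sketched in a few lines. All reads, reader names, and the flagging threshold below are fabricated for illustration:

```python
from statistics import mean

# Hypothetical algorithm reads on six tracings, and three readers' calls.
algorithm = ["afib", "sinus", "afib", "sinus", "afib", "sinus"]
readers = {
    "dr_a": ["afib", "sinus", "afib", "sinus", "afib", "sinus"],
    "dr_b": ["afib", "sinus", "afib", "afib",  "afib", "sinus"],
    "dr_c": ["sinus", "afib", "sinus", "afib", "sinus", "afib"],  # outlier
}

def agreement(reads, anchor):
    """Fraction of tracings where a reader matches the anchor algorithm."""
    return sum(r == a for r, a in zip(reads, anchor)) / len(anchor)

rates = {name: agreement(reads, algorithm) for name, reads in readers.items()}
peer_avg = mean(rates.values())
for name, rate in rates.items():
    # Flag readers far below the peer average for qualitative review,
    # not for automatic judgment; the threshold here is arbitrary.
    flag = "  <-- review" if rate < peer_avg - 0.25 else ""
    print(f"{name}: {rate:.0%} agreement with algorithm{flag}")
```

As in the conversation, the output is not a diagnosis; it only surfaces which reader disagrees with an anchor that the other readers agree with, as a starting point for retraining.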
So what are the ways we're actually going to start seeing it in the clinical setting, you know, the tools that we might see our doctor actually use with us, or not?

I think it's going to be these adjacencies around treatment and management. There are a lot of things that happen in the hospital that seem really primitive and arcane, and no one wants to do them, and I'll give you a simple one, which is OR scheduling.

So is it actually the way it looks in Grey's Anatomy? Is it just a whiteboard?

It is a whiteboard, and somebody at the phone at the OR front desk. Unbelievable. There's a back end of scheduling that happens for the outpatients, but you have add-ons, you have emergencies. I mean, even an Excel sheet would be better than a whiteboard. The way OR scheduling works now is primitive, and it also involves gaming: it involves convincing staff X, Y, and Z to stay late, or to do it tomorrow. So there's so much behind-the-scenes human negotiation. For example, when I do catheter ablations, we have many different moving parts: equipment, the support personnel of the equipment manufacturer, anesthesia, fellows, nurses, whatever, and everyone has little pieces of that scheduling. It all comes together, but it comes together in the art of human negotiation, and very simple things, like: this is your block time, and if you want to go outside your block time, you, you know, need to write a Hallmark card to somebody. So it's a very simple problem where there are huge returns in efficiency if you can have AI do it. And the AI inputs over time could be that you can really, truly know which physicians are quick and speedy and which ones go over their allotted times, which patient cases might be high-risk, which ones may need more backup, which should be done during daytime hours.

You could add their Fitbit data, and then you could tell who is drowsy at any given moment.

Oh, that's vast: whether or not they want to do it, how rested they're feeling. And so people stay at the times that they're really needed, and that kind of elasticity can come with automation, where we fail right now. And so this is a great place where you're not making a diagnosis; there's nothing you're being committed to from a kind of, you know, basic regulatory framework; you're just optimizing scheduling.

So who actually
says yes? Say that technology is available: how do you actually get it in? You know, where's the confluence of the regulation and the actual rollout, and how does it actually make its way into a hospital, into a waiting room?

There's an alternative model I've seen, which is startups acting as full-stack health care providers. Omada Health or Virta would be examples of this, where, if you have pre-diabetes or diabetes respectively, the physician can actually refer the patient to one of these services. They have physicians on staff, they're registered as providers with national provider IDs, they bill insurance just like a doctor would, and they're essentially acting as a provider who addresses the whole condition end to end. I think that case actually simplifies decision-making, because you don't necessarily have to convince both, you know, Stanford and UnitedHealthcare to adopt this thing; you can convince just a self-insured employer that they want to include one of these startups as part of their health plan. So I think that simplifies the decision-making process and ensures that the physicians and the AI folks are under the same roof. I think that's a model that we're going to see probably get the quickest adoption.

Right. And the payer?
Well, there are many models, and which is the best model will depend on how you're helping, and the indication, and the accuracy, and what you're competing against, and so on. So this is a case where we'll probably see the healthcare industry maybe reconstitute itself by vertical, with AI-based diagnostics or therapeutics. Because if you think about it, right now providers are geographically structured, but with AI, every data point makes the system more accurate, so presumably, in an AI-based world, providers will be more oriented around a particular vertical: you might have the best data network in radiology, the best data network in pathology.

Oh, that's interesting. Well, thank you so much for joining us on the a16z Podcast.

Great, thank you.

Thank you.

Thanks for having us.