I'm the founding general partner of the a16z Bio and Health fund. And I'm Marc Andreessen, the co-founder of a16z. Marc, thank you so much for joining us. Yeah, it's great to be here. So, you famously wrote about software eating the world, and that was, what, ten-plus years ago. And that very much seems to have come to fruition: if you look at all these other industries that software really wasn't a part of, software has become a dominant part. But this year has been kind of an amazing year for another type of software, for AI, and I'm curious to talk about the arc of what we think is going to happen in the future based on what we've seen in the past, and how this new technology is going to change everything, much like we've seen software change things over the last ten years. I'm curious what you think of just this year: it always seems like not much happens in any given year, but 2022 seems to have been an amazing year for AI.
Well, Vladimir Lenin once said there are decades in which nothing happens, and then there are weeks in which decades happen. Let's hope that's not what's happening here, but it does happen. In science and technology it does happen: there are moments where things kind of hit critical mass.
And this sort of AI and machine learning revolution seems like that's what's happening right now. It's been interesting to watch. It feels to me, at least, like there was a breakthrough moment in 2012 that had to do with images, and then a lot of work subsequently that led to things like the creation of self-driving cars based on it. And then it feels like there was a natural language breakthrough maybe three years ago.
And now that's really catalyzed into this whole thing we see happening around GPT and text generation. And then even other applications: transcription is getting much better all of a sudden, speech synthesis is getting much better, and now you've got this artistic revolution happening with image creation, and video creation is right next, coming up really fast. So it seems like one of those catalytic moments, and it's like every week now there are fundamental breakthroughs, research papers, product releases coming out. So it seems like a cascading
thing. The way I think about it, as a software person, a lifelong programmer, is that in the fullness of time there will appear, I think, to have been two different ways to write software. There was the old way, the classic von Neumann, deterministic way, and the whole problem with writing software in the old model is that computers are hyper-literal: they do exactly what you tell them. Every time they do something wrong, it's because you instructed them improperly, and it's a very humbling experience to learn as a young programmer that everything is your fault, and the machine will just sit and wait for you to fix the problem; it's not going to do it on its own. And then there's this other way to write software, which has to do with having these AI systems and then having training data, training the systems, tweaking the systems.
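The contrast can be made concrete with a toy sketch (entirely my own illustration, not anything from the conversation): a hand-written deterministic rule next to a tiny "trained" stand-in that generalizes from labeled examples by similarity.

```python
def is_positive_rule(text):
    # Old model: a hyper-literal rule that only knows exactly what we wrote.
    return "great" in text.lower()

def train(examples):
    # New model (toy stand-in): keep labeled examples and classify new text
    # by word overlap with the nearest one (a 1-nearest-neighbor classifier).
    def predict(text):
        words = set(text.lower().split())
        best_text, best_label = max(
            examples,
            key=lambda ex: len(words & set(ex[0].lower().split())),
        )
        return best_label
    return predict

model = train([
    ("what a great movie", True),
    ("loved it, wonderful film", True),
    ("terrible boring mess", False),
    ("awful, hated every minute", False),
])

print(is_positive_rule("loved that wonderful film"))  # False: the rule only knows "great"
print(model("loved that wonderful film"))             # True: it generalizes from examples
```

The point of the toy: the rule fails on anything it wasn't explicitly told about, while the "trained" version copes with the messiness of inputs it has never seen.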
And the way I describe that capability to normies is that it sort of unlocks the ability for computers to more and more interact with the real world, with the messiness of the real world and the probabilistic nature of the real
world. Well, it seems almost less like writing software and more like training something. When I think about machine learning and image recognition, the way you talked about it, it felt almost like training a dog: reinforcement learning is like giving it treats as it gets better. But there's something different now. It feels like we've gone from training a dog to recognize a bird, or "hot dog or not hot dog," to something that feels closer to training a person. So when we talk about learning and training and data, what are we actually training? Where do you think we are in that arc of getting to, eventually, HAL 9000? Well, while
this has been happening, if you have kids... I have a young child, a seven-year-old now, so as all this stuff has been popping, I've been simultaneously training the AI and training the seven-year-old. Anybody who's had kids will recognize what I'm about to say, but it is really interesting watching little kids. At least with the little kid I have, who's great, everything for the first few years, every single thing he did, was like a little applied physics experiment: let's see what happens if I drop this, let's see what happens if I eat this, let's see what happens if I do this to Daddy, and see what the response is. They just run experiments. And you can see it very clearly when they're learning how to walk, because they're running all these experiments about how to stand up and what to hold on to, and they keep falling over, and then at some point the little neural network actually figures it out. It does learn, in a way.
And so clearly you can see a similar kind of thing happening, which is a little bit eerie. Having said that, the human brain, as it develops, ultimately clearly has consciousness: it achieves higher levels of consciousness, achieves higher levels of self-knowledge, reaches the Descartes stage where it has self-awareness, and it is clearly very creative from an early age. I'm a little less convinced that the software technologies we have now are on some linear path toward quote-unquote AGI, or quote-unquote consciousness like ours. It's hard for me to believe that consciousness is simply emergent from larger-scale neural networks; that, to me, seems like a hand wave. Now, having said that, I have a lot of smart friends who are pretty sure that's exactly what's going to happen. Actually, I feel that way as well. I want to get to AGI in a bit, and we can also debate whether consciousness is an illusion, but where we are now is kind of amazing. People can take GPT-3, give it SAT exams, and it can do okay; actually, it can do quite well. The one I saw scored something like 1200, so it's not bad. It can do homework. I gave it questions like explaining the derivation of the Schwarzschild radius, the black hole radius, or writing code for, say, eight-by-eight tic-tac-toe: random things that it should never be able to do if it were just memorizing, because it has to generalize. And it's getting them. But then it also seems to have some weird hiccups, and one thing it really does not seem to get is humor.
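As an aside, the black-hole-radius derivation mentioned above has a famous Newtonian shortcut: set the escape velocity equal to the speed of light. The full general-relativistic derivation is different, but it lands on the same formula:

```latex
\frac{1}{2} m v_{\mathrm{esc}}^{2} = \frac{G M m}{r}
\quad\Longrightarrow\quad
v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}}
\quad\xrightarrow{\;v_{\mathrm{esc}} \,=\, c\;}\quad
r_s = \frac{2GM}{c^{2}}
```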
So I'm kind of curious where you think it's going to go, because before we get to AGI, there are things an average human can do pretty well that GPT-3 can't, but then there's also what experts can do. And what I'm very curious about is that we may get to some of the expert stuff first, before it can do even something like humor. The irony is that something like humor, which we take for granted, might actually be really hard, and other areas might be easier. Right. Well, the ultimate example of the things it can't do: it can't pack your suitcase. There's no robot that will pack your suitcase, and if you try to get it to, or to make an omelette, it'll shred your clothes. So it can drive your car, but it can't pack your suitcase, and it can't do your laundry.
So there are these interesting twists. I would describe it a little bit as follows: this generation of AI, as impressive as it is, is a little bit of a sleight of hand, which maybe we'll talk about. But I also think, to your point, that human consciousness, or human intelligence, is a little bit of a sleight of hand too; maybe a slightly different sleight of hand. The sleight of hand you see when you're using GPT, or one of these image generation systems, is that it's not literally creating new information. It has no opinion, it has no point of view; it's not sitting there thinking on its own, coming up with some new thing. What it's doing, ideally, is training on the sum total of all existing human knowledge. For text generation, it's training on all existing human text, and so it plays back at you, basically, projections from the assembled composite of all of it.
So when you ask it to do the eight-by-eight version, probably somebody on the internet at some point wrote a paper about it. I thought it was a little more than that, because I also asked for 56-by-56 and 101-by-101; it has some sense of generalization. Maybe, but I'll bet, and we can check this, that if we Googled long enough we could find a paper that described a general-purpose algorithm for multi... That may be right. Yeah, somebody probably did that.
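The generalization being probed here is easy to state in code: a board-size-agnostic win check for tic-tac-toe. A minimal sketch (my own illustration, not anything GPT produced):

```python
def wins(board, player):
    # Check every full row, column, and both diagonals of an n-by-n board,
    # where `board` is a list of n lists of length n.
    n = len(board)
    lines = [list(row) for row in board]                           # rows
    lines += [[board[r][c] for r in range(n)] for c in range(n)]   # columns
    lines.append([board[i][i] for i in range(n)])                  # main diagonal
    lines.append([board[i][n - 1 - i] for i in range(n)])          # anti-diagonal
    return any(all(cell == player for cell in line) for line in lines)

# The same code handles 3x3, 8x8, 56x56, or 101x101 boards unchanged:
board = [["X", "X", "X", "X"]] + [["."] * 4 for _ in range(3)]
print(wins(board, "X"))  # True: X fills the top row of a 4x4 board
print(wins(board, "O"))  # False
```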
I've done the same humor experiment: I have it write Seinfeld scripts, and sometimes they're really funny, and sometimes they just make no sense. I went for Curb, but it's the same idea. Exactly. But look, there are a lot of jokes on the internet, so you could go back and say, okay, it probably plucked these jokes. Or maybe there was a paper somewhere that articulated a general theory of humor, because humor has been studied as a thing, and maybe there's a general principle, like humor is the unexpected or whatever, and it generalizes from that. It could also be that all sitcoms are the same sitcom at some level. Yeah, so here's an example: I also had it do dramatic screenplays, and it's quite good. You can say, write a three-act screenplay, and it will do it, and it will have the proper setup and resolution and so forth. But there are systems for screenwriting in Hollywood where they have the three acts; it's all Rocky, or it's all Star Wars. Well,
so actually it's really interesting that maybe what we think is magical when humans do it isn't all that magical either. That's what I was going to say. So then the human sleight of hand is: is there actually free will? Is there actually creativity happening upstairs? And if there is, does everybody have it? Are there really a thousand types of movies, or is there one latent space of the monomyth? I think the theory, and I'm kind of making this up, would be "The Hero with a Thousand Faces," the idea of the Jungian hero's journey, which is the basis for all of these plots in Star Wars and Harry Potter and everything. Somebody with your background might say that it's basically an algorithm for surfing human neurochemistry, generating different chemical responses: fear and anxiety and love and all the others.
I've always been fascinated by this thing in psychology called core affect theory. That one I don't know. Oh, this is great. So, we think humans have all these emotions, love and despair and so on, and core affect theory says no, we don't: we have good or bad. We either have a positive neural response or a negative neural response, and then it's either high intensity or low intensity, and that's basically it. So wistfulness is just slightly negative, while despair is extremely negative. It's all a two-by-two, and we're more basic organisms than we think. And then we retrofit: one of the things that's well known is that humans are very good at creating a story to justify whatever happened.
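The two-by-two being described can be written down literally; the numeric placements below are invented purely for illustration, not taken from the theory:

```python
def core_affect(valence, arousal):
    # Core affect theory's two-by-two: a positive or negative neural response
    # crossed with high or low intensity. valence in [-1, 1], arousal in [0, 1].
    sign = "positive" if valence >= 0 else "negative"
    intensity = "high" if arousal >= 0.5 else "low"
    return f"{sign}, {intensity} intensity"

# Hypothetical placements, just to make the quadrants concrete:
print(core_affect(-0.2, 0.2))  # wistfulness: "negative, low intensity"
print(core_affect(-0.9, 0.9))  # despair: "negative, high intensity"
print(core_affect(0.8, 0.8))   # falling in love: "positive, high intensity"
```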
And so we create these stories, these scripts, around this idea of an emotion, but it's basically just justifying the neural response. So the cynical view would be that having an ice cream cone on a hot day and falling in love are the same thing. Well, neurochemically, maybe they are. This comes into play in drug abuse: things that generate an opioid response. Some people get an opioid response from alcohol, and they're far more prone to alcoholism; a lot of people don't get that response. It's literally a neurochemical thing. So yeah, look, maybe we're bundles of neurochemistry to a much deeper extent, or a much simpler extent, than we want to believe. Having said that,
that takes me to the other thing on AI, which is that one of the ways people are testing AI is with the so-called Turing test. The simplified form of the Turing test is that you're chatting with somebody who may be a human or may be a bot; you chat for 20 minutes, and can you guess, better than random, whether it's a human or a robot? My take on that is that Alan Turing was a genius, but the Turing test is malformed. Humans are too easy to trick. It's too low a bar, because tricking a person is not that hard and does not prove anything other than that you've tricked the person. And this is relevant because I think things like GPT are about to pass the Turing test, and so I think it's going to turn out that it was too lightweight a test. Well, here's my favorite example of why I know GPT is not self-aware: if you ask it whether it's self-aware, and ask it to elaborate on how it became self-aware, it will happily tell you. And by the way, if you ask it how it's going to feel if you turn it off, it's going to tell you, please don't.
But if you ask it to explain to you why it's not self-aware, it will very happily do that too. It does not have a differential opinion about those two outcomes, whereas every living organism, even a non-conscious one, has a very different response to those two scenarios. It's been amazing, because in some ways it's been as interesting to study the AI as it has been, with the AI as a mirror, to study ourselves. I think we are seeing that the magician has certain tricks, whether it's the AI magician or the human magician, and we're going through this education process. I'm curious, though: GPT can get into high school, or get into college, let's say, but what would it take for it to get a PhD? I think that's where the dramatic stuff is to come. Well, so again, it's exactly your
point. I would ask the question the other way, which is: okay, what does it take to get a PhD? What does it take, for humanity? How are the universities doing? How are they doing on quality control of their own programs? How many people are getting PhDs today that we would say are actually valid, with actual accomplishments? People who got professorships years ago, how would they score the PhDs being granted today? Would they say the bar is lower? I think they would say the bar is dramatically lower.
And so the answer might be that we haven't lowered the bar. Same thing for college admissions: what does it take to get into college, and what does it take to finish college? And the education system... well, this is coming up a lot right now, because GPT can auto-generate essays, student essays, and so the grading method of assigning an essay and grading the result is probably not going to work anymore. But was that ever actually education? Just because we thought that was education, was it actually teaching anybody anything? I'm sure someone's going to take that and use it to apply to colleges. Oh yeah, absolutely. College applications are basically done, at least to the extent you believe college applications were a legitimate way to evaluate anybody in the first place. I'd be more skeptical that they were ever useful in the first place.
Yeah, well, so on the PhD: let's talk about, at least in that old-school mentality of a PhD, some advanced learning where you become an expert in something. I think that's the thing where... what do you mean by expert? Let's say the ability to be in the top 0.1 percent of humanity at, say, designing a drug, or doing something interesting. Is that what they teach? That's the universities' presentation, at least. That is my goal. I wasn't aware that was part of the curriculum. Or at least that's what you have to do eventually when you get out: you have to apply it. One of the things about being an expert, in my mind, is that the difference between bad, good, and great can be really close. I could probably write a piece of music, but no one would think it's all that great. Then you could have someone who's a good musician but not a great one, and then you have a genius, a Mozart or a Led Zeppelin, in whatever particular genre. And I think where we aren't there yet is where the difference between good and great is so close. Or, like in Spinal Tap, there's a fine line between brilliant and stupid. I think that
is where it hasn't really hit yet: if you look at the jokes, the jokes are just kind of okay, and the screenplays it makes are not brilliant screenplays. I think it could get into college, but could it win Best Screenplay? That's the part where I think we're not there yet, but we're getting there. So name a great music composer generated by a music PhD program. Name one. I'm thinking more on the scientific side of things, but yeah, the PhD programs in that space are probably not intended to generate music. Okay, then name one great screenplay written by a PhD. So that's an interesting point. But I think what I'm getting at is still the ability to do something. And on the education part, we can talk about how they learn, because in the case of the screenplay, or the music you're talking about, they still have to learn something, right? Or do you think they just innately knew how to write a screenplay? I assume there's a process where they write a screenplay, it's kind of mediocre, and then they get critiqued, or they critique themselves, and then it improves and improves and improves. Well, the
screenplay, okay. So question number one for a screenplay is: does it sell to the studio? Will they actually buy it? And then number two is: when the movie or the TV show comes out, does anybody watch it? Do people like it? Do they finish it? One of the fun things is that Netflix will now tell people who make film and TV, for the first time, whether anybody is actually finishing their movie. All those stats are kind of mind-boggling. A lot of movies... people go to the theater and they feel invested and don't want to leave in the middle, but at home it's very easy to punch out. It turns out, with a lot of screenplays, and this is something professional screenwriters will tell you, it can't ever sag, just as one example, because people will stop watching. So screenwriting is subject to a market test, and popular music is subject to a market test, but classical music, which I'm a huge fan of, is no longer subject to a market test; it's thoroughly subsidized. That's interesting: it's not in the free market anymore. Or maybe the equivalent is movie music. Movie music is subject to a market test, and it's probably the modern classical. It is the modern classical, for that reason. So yeah, the
market test is real. But let me grant your point and build on what you said. Could we use the term "taste," maybe? Or just the ability to do something hard. Well, okay: the ability to do something hard, to create something hard, to create something complicated, and then also the ability to judge, and critically, to start with, to judge your own work, and probably therefore the ability to improve. So yeah, I think there's something about taste. I tend to think all this stuff has an aesthetic: a properly constructed mathematical formula or software program has an aesthetic. Oh, a hundred percent. A theory, a design, has an aesthetic; all of it. So there's something about taste that's some combination of quantitative and qualitative. Like, what separates a great startup from a mediocre one is taste. Exactly. And there are certain methods and certain signals, but it's not necessarily reducible to an algorithm. It's more of a composite: foundational knowledge, combined with some scope of experience, combined with some ineffable characteristic of judgment. Well, we associate an aesthetic with it, but I wonder whether that's also just our emotional connection to it, because I think we have this sense of right or wrong, or more right or more wrong, like a gradient: that's the right direction. But a lot of us can also tell whether something is elegant versus just a hack, whether these great things are simple and powerful rather than some complicated machine that does something but is eventually going to fall apart. And that's true in physics, or in go-to-market, or in music: it has both that complexity and a simplicity at once.
But I'm curious: when AI gets to that point, and I think that's a "when," not an "if"... Okay, so why wouldn't it get there? Because do we even understand how it works in people? Maybe we don't have to. Well, maybe we don't have to. This is what I described as the AGI question, what I call the hand wave. The embedded assumption in "when" is that it will be an emergent process that will unlock as a consequence of greater and greater levels of scale. Maybe. One way of looking at it is: yes, that's just what's going to happen, that's obviously what's going to happen. The other interpretation is that it's a hand wave, what the kids would call cope. And the cope would be... okay, so here, let me ask you a question in return: what is the subspecialty of human biology and medicine that best understands the nature of human consciousness today?
I don't think there is one. There is one: anesthesiology. Which is poorly understood. Well, it's poorly understood, but they know how to turn consciousness off, and they know how to turn it back on. They've got the on-off switch. That's all we've got. We collectively have been studying this question of human consciousness for a very long time, and we have very advanced technologies today, functional MRI and all this stuff, but that's about where it stops. There's a field I would love to see created, which is molecular psychology, where you can start to probe this with a little more than on-off. And "molecular": is that literal or metaphorical?
Quite literal. Like molecular biology, which was this big thing in the '80s, where we finally could bring the chemistry of small molecules to biology, or chemical biology as well. If we could use small molecules to perturb more than just on-off, to perturb things, we could start to understand the brain a little bit, because reading is one thing, but poking and perturbing and then seeing the result is usually how we do any sort of experiment. Would that be chemical experimentation, or electrical? It could be either one, it could be any of that, but it would probably be some combination; it's on track, in theory. So look, here's the counterargument: we just don't know how human consciousness works. I actually almost went into the field; that was what I was going to study in school 30 years ago, but I looked at the field at the time and thought, they don't have a clue, and I didn't want to spend my entire career on that. So you wanted to go into AI?
Yeah, but that was expert systems. Well, there were early neural networks too, and then a lot of it got into brain chemistry: we're going to figure this stuff out, and we're going to learn how to build it. And it's just that they didn't know then, and as far as I know they don't know now. And so the counterargument would be that this is all just massive cope for the fact that we don't understand consciousness, so we don't understand how to build it, and all we can do is hand-wave and say it's just going to emerge. And it's like, no, it's not, and we're going to be sitting here 30 years from now and still not have any more knowledge, barring other scientific breakthroughs of the kind
22:46that you're talking about yeah what's
22:47interesting is if you think about that
22:48time we had neural Nets but they were
22:50all single layer basically and then they
22:52couldn't even do XOR you know you
22:54couldn't even do some simple things
22:55because you needed deeper networks to
22:57get at them and you couldn't have deep
22:58networks then because we didn't have the
22:59computational power and so the space was
23:01pretty dormant for a while you know AI
23:04until like we started going to having
23:06the basically just the computational
23:08power from gpus and other things to be
23:09able to go deep and then you could feed
23:11the data through so it is possible that
23:13we sort of have a point where we sort of
23:15saturate the compute that we have now we
23:18get to as much as we can get to and that
23:20may get uh close to AGI maybe not and
23:22then takes another like 30 years to get
23:25to the next sort of breakthroughs to get
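The XOR point is worth making concrete. Here is a minimal sketch (not from the conversation; assumes numpy is available) showing that no single-layer linear threshold unit can fit XOR, while a hand-wired two-layer network can:

```python
import itertools

import numpy as np

# XOR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# A single-layer perceptron is one linear threshold unit: step(w.x + b).
# Brute-force a grid of weights to show none of them fits XOR.
found = False
for w1, w2, b in itertools.product(np.linspace(-2, 2, 41), repeat=3):
    pred = ((X @ np.array([w1, w2]) + b) > 0).astype(int)
    if (pred == y).all():
        found = True
        break
print("single-layer unit fits XOR:", found)  # False: XOR is not linearly separable

# Two layers suffice: hidden units compute OR and NAND, the output ANDs them.
def step(z):
    return (z > 0).astype(int)

W1 = np.array([[1, 1], [-1, -1]])   # rows: OR unit, NAND unit
b1 = np.array([-0.5, 1.5])
h = step(X @ W1.T + b1)
out = step(h @ np.array([1, 1]) - 1.5)
print("two-layer output:", out.tolist())  # [0, 1, 1, 0] matches XOR
```

The grid search is only illustrative, but the conclusion is exact: XOR is the classic example of a function no single linear separator can compute, which is why depth (and the compute to train it) mattered.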
23:27but okay so but I would pull back from
23:29there so AGI is the fun thing there is a
23:32sort of step back which is to pick a
23:34domain and and you know the domains I
23:36think a lot about like life sciences
23:39um designing drugs doing Health Care
23:41like seeing if you can do it pick a
23:43diagnosis can you suggest a drug in
23:45those areas now we're talking about much
23:47more limited domain so we're now talking
23:48about we don't we don't need to go all
23:50the way to Consciousness for that
23:51necessarily you can you can have
23:53something that's more limited in that
23:55limited domain right now it seems like
23:57generative AI isn't quite far enough yet
23:59to be able to like yeah I don't see the
24:02examples quite yet sure yeah well let's
24:04we'll see I mean so yeah what's the
24:06what's the counter and I know you
24:08especially think about Healthcare a lot
24:09yeah yeah well so the first thing is
24:11whenever you're scoring well let's talk
24:12about medical diagnosis which is kind of
24:14just the low hanging fruit question yeah
24:15because everybody everybody experiences
24:16it so to start up front you have to ask
24:18a question up front which is like is the
24:19goal what's the threshold is the
24:21threshold perfect or is the threshold
24:22better than human yeah that's a great
24:24Point yeah right yeah um yeah and by the
24:25way you know this is a topic that comes
24:26up all the time it's all right in cars
24:28right which is is it perfect it will
24:29never make a mistake or is it just going
24:31to be better better than human in the
24:32way that self-driving cars score this is
24:33accidents per thousand miles driven and
24:36self-driving cars are already lower than
24:37human drivers and humans may actually be
24:42getting worse by the way you know with
24:44increases in certain kinds of drug abuse right
24:46um and then of course the machines have
24:47the characteristic they get better
24:48universally right so a car a car has one
24:51mishap in one location every other car
24:53gets trained on how to deal with that in
24:54the future where you know the learning
24:56happens across the entire system
24:59um and so like I think you can make a
25:01serious argument that like basically uh
25:03self-driving cars are already Better
25:04Than People on a relative basis and
25:06therefore like morally you could even go
25:08so far as to say human drivers should be
25:09outlawed today right like it would be
25:11like if you have the alternative you can
25:13have the self-driving car then yeah yeah
25:14like the utilitarian I'm not a
25:16utilitarian but the utilitarian argument
25:17would be you should obviously ban human
25:18drivers today because the machine driven
25:20stuff is already better probably by the
25:22way the same is true for airplanes right
25:24now we're not actually going to do that
25:25and there are other considerations
25:26involved and so forth but like you know
25:28logically speaking you should at least
25:29think about that as a possibility and I
25:31think you should think about that as a
25:33possibility I think for medical
25:34diagnosis which is you know and here the
25:36test is very simple which is well there
25:37are at least two tests test number
25:40one is the absolute test which is if I
25:41feed in a set of symptoms it generates
25:43the correct diagnosis 100% of the time
25:45deterministically guaranteed that's one
25:48the other is I do that with the
25:50algorithm and then I go to 100 doctors
25:52human doctors and I get back 100
25:54different responses and then let's
25:55compare right and then let's track over
25:57time and so if you compare to median
25:59doctors yeah yeah right and like how
26:01good is the median doctor in the
26:02diagnosis and like I don't know what
26:03your experience has been like well and
26:06the median doctor may be smart but also
26:08maybe overloaded maybe exhausted may
26:11have like 12 other patients 15 minutes
26:13yeah a lot of experiences 15 minutes you
26:16know there's a thing here like experts
26:18in these areas tend to either like be
26:19like doctors themselves or they like
26:21know a lot of doctors or they have like
26:22their you know that they work in the
26:24industry they make money they have a
26:25concierge doctor who spends a lot of
26:26time with them and does house calls the
26:28median healthcare experience is 15
26:30minutes in somebody's you know harried
26:31schedule with a doctor that may or may
26:33not ever see you again and has very
26:34limited data yeah yeah and yeah and
26:37there's well-known algorithm which is
26:38that they come up with a diagnosis they
26:40come up with a treatment you go with
26:42that that doesn't work you repeat and
26:45while still sick and
26:47not dead you just repeat and then I
26:50think many of us have been through that
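The "well-known algorithm" really is a loop; a tongue-in-cheek sketch, with every name hypothetical:

```python
import random

def treat(patient, diagnose, prescribe, max_visits=10):
    """Iterate diagnose -> prescribe while the patient is still sick."""
    visits = 0
    while patient["sick"] and visits < max_visits:
        dx = diagnose(patient["symptoms"])   # come up with a diagnosis
        prescribe(patient, dx)               # treatment may or may not work
        visits += 1
    return visits

# Toy stand-ins: each prescribed treatment has a 50% chance of working.
random.seed(0)
patient = {"sick": True, "symptoms": ["cough"]}
visits = treat(
    patient,
    diagnose=lambda symptoms: "dx-" + symptoms[0],
    prescribe=lambda p, dx: p.update(sick=random.random() < 0.5),
)
print(visits)  # with this seed, cured on the first visit
```

The `max_visits` cap stands in for the "not dead" condition; everything else is just the repeat loop described above.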
26:51well and then there's and then there's
26:52all the other sort of things so then
26:53there's like drug interaction you know
26:54is any one doctor tracking all the
26:56interactions of your drugs yeah then
26:57there's this other issue which is okay
26:58they give the prescription is there
27:00actually compliance for taking the
27:01prescription does the doctor actually
27:02know whether you're taking the
27:03prescription is one of the biggest
27:05disasters right but that means like the
27:07ability for a median doctor to even
27:08evaluate the success of a treatment they
27:10may actually may not be able to do it
27:11because they may not have the data on
27:12compliance yeah yeah and so like you
27:14look at the existing system by which
27:17this all happens it's very similar to
27:18looking at the existing system by which
27:19people actually drive cars which is like
27:21oh my God this is not good like this is
27:23really not good yeah uh and and we kind
27:25of fool ourselves into believing that
27:26it's good because it kind of feels good
27:27and we don't really want to look behind
27:28the curtain but we look behind the
27:29curtain and it's pretty horrifying yeah
27:31yeah and so from that standpoint if if
27:33you follow that logic then it says okay
27:35if the machine could do a better job you
27:37know if if the machine was twice as good
27:39um at just like listening to symptoms
27:41giving the response giving the
27:42prescription doing the follow-up yes
27:44yeah I mean how far yeah I don't know if
27:46you've done this but you plug in a
27:47list of symptoms I've been playing with
27:49it too yeah
27:50yeah I mean because it does have access
27:52I mean it has access to the collective
27:53medical knowledge yeah and if it doesn't now it
27:56can right right you know it could
27:58be fed with all the EMRs all the
27:59medical records and so on and it could
28:01sort of learn from that as well well
28:02then the other and then the other
28:03question I'm sure you thought about but
28:04like okay so the medical field moves and
28:07you know in the existing system the
28:09median doctor has to like read all
28:10the papers yeah which
28:12never happens right no one has time for
28:13that yeah right yeah and there's
28:15continuing education but still it's not
28:16the same well here's an example do you
28:18want for your GP would
28:20you want a young GP or an old GP
28:24presumably the old GP has more
28:25experience and so they have more pattern
28:27matching over time and more experienced
28:29with patients but the young GP is probably
28:30more up in the current science yeah yeah
28:33okay yeah and then it's like okay do you
28:35really want to have to make that
28:36trade-off or can the machine actually
28:37have both of those exactly well that's
28:39the thing is that like you talked about
28:40how like um Can it beat let's say how
28:43does it do compared to 100 doctors when
28:45the 100 doctors collaborate presumably
28:47that's the ideal situation right I mean
28:49or or you know that sounds horrifying no
28:51no I mean that's the wisdom of the crowd
28:53it could go well I guess it could go
28:56either way but it's a committee usually
28:58that's the Soviet method usually when
29:01you actually when you poll it you can or
29:03at least maybe it's how you collaborate
29:04have you really found human beings to
29:06make better decisions in groups than
29:07they do as individuals that's a good
29:09question yeah in your entire life yeah
29:11yeah the serious answer is the wisdom of
29:14crowds Madness of crowds yeah yeah
29:16yeah our flip side's the same coin right
29:18and so and when are you harnessing the
29:19wisdom when are you descending into
29:21madness or even just you know mediocrity
29:24yeah like for very specific tasks
29:28um groups can do well but otherwise it's
29:29like one big like group project from
29:31high school yeah yeah which is like a
29:32well so generally right but generally
29:33what happens with people in groups is
29:35social conformance kicks in right
29:36and so people want there's a well-known
29:38you know kind of thing there's like this
29:39group polarization which is you take a
29:41group of people who are already inclined
29:42slightly to one side of an issue
29:43yeah you put them together let them talk
29:45for three hours they all come out much
29:46more radical yes yes because they've
29:48self-reinforced yes yes yes right well
29:50so maybe that's really interesting thing
29:51because you can imagine training AI to
29:53have these different aspects and its
29:55collaboration with other versions of itself
29:58would be very different yeah yeah it
29:59could be very different I mean yeah
30:00maybe it should do like this effectively
30:02a Monte Carlo yeah yeah yeah right right
30:04run the same inputs 100 times yes yes
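The Monte Carlo idea, running the same input many times and taking the majority, can be sketched like this; the toy model here is a hypothetical stand-in for any stochastic model, such as an LLM sampled at nonzero temperature:

```python
import random
from collections import Counter

def monte_carlo_answer(model, prompt, n_runs=100):
    """Query a stochastic model many times on the same input and return
    the majority answer plus the fraction of runs that agreed with it."""
    votes = Counter(model(prompt) for _ in range(n_runs))
    answer, count = votes.most_common(1)[0]
    return answer, count / n_runs

# Hypothetical stand-in: usually answers dx-A, sometimes dx-B.
random.seed(42)
toy_model = lambda prompt: random.choices(["dx-A", "dx-B"], weights=[0.8, 0.2])[0]
answer, agreement = monte_carlo_answer(toy_model, "fever, cough")
print(answer, agreement)
```

The agreement fraction doubles as a rough confidence signal: a model that answers the same thing 95 times out of 100 is telling you something different from one that splits 55/45.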
30:06right yeah well okay so either we
30:10will never get there or we're already
30:12there now but I think in 10 years it
30:15does seem especially maybe we hit like
30:17another winter but it seems like things
30:18are accelerating so much this seems
30:20pretty real it seems pretty real yeah
30:21what do you think society needs to
30:24change because there's like all these
30:26things we were talking about and this
30:27seems bigger than like just just the
30:29revolution of software over the last 20
30:30years or internet from the last 20 years
30:32because we're talking about how it
30:33changes government how it changes
30:35regulatory how it changes education I
30:37mean I don't even know where you want to
30:39start with that but I think that's what
30:41something where it may take us 10 years
30:42just culturally right to be able to get
30:45ready for this thing that may arrive in
30:4610 years or may already be here right
30:48right yeah I don't know where you want
30:49to start yeah so where I would start is
30:51we've already fallen into I think we've
30:53we've deliberately kind of fallen into a
30:54trap already which is we've only been
30:56using a single kind of example and we've
30:58used it both in our discussions on
30:59medicine and also in education which is
31:00basically something is done today
31:02people are doing something today and
31:03then maybe the machine can do it instead
31:05that's an important thing and that's
31:07that you know as we're thinking about
31:08but the way that technological impact
31:10actually plays out in human society is
31:12not just that the way it plays out is it
31:14lets you basically revisit more
31:15fundamental assumptions yeah or what's
31:17not being done today or what's not being
31:18done today it all of a sudden becomes
31:19possible yeah and and this comes this
31:21this always comes up in any sort of
31:23discussion about employment yeah people
31:25doing jobs versus machines doing jobs
31:26people get worried about technological
31:27displacement of jobs but technological
31:29displacement of jobs like technology
31:31never actually creates unemployment
31:32technology only ever creates jobs on net
31:35um and the reason for that is technology
31:36makes possible things that were
31:38not possible before yes which is what
31:39leads to growth and so specifically for
31:42example the role of the doctor
31:44um you know it's like okay the doctor of
31:45the future is probably not going to be
31:47doing the same right we have a
31:50term in IT break/fix yes which
31:53is kind of what doctors do you know because
31:54the core motion of a lot of doctors as
31:55you said is diagnose prescribe diagnose
31:57prescribe yeah it's like debugging yeah
32:01exactly doctors of the future probably
32:03like the the technologically empowered
32:05doctor 10 years from now is highly
32:06unlikely to be spending their day doing
32:08that yes they are probably going to be
32:09spending their day doing things that are
32:10actually much more important than that
32:11yes right and so for example maybe they
32:14have more time right with patients
32:16because the machine is a time saver
32:19um maybe they have more data to draw on
32:21you know to be able to make their
32:22decisions you know they've got the
32:23machine as a partner in making the
32:25decisions maybe they're able to spend
32:27more time in their conversation with the
32:28patient talking about psychological
32:29issues as compared to just physical
32:31issues and as you know in a lot of
32:32medical conditions involve you know two
32:34sides of that or behavioral issues well
32:36as you know like a lot of primary
32:38medical issues today are a consequence of
32:39different behaviors yes and maybe
32:41doctors should be spending more time in
32:42behaviors and it speaks to compliance as
32:44well as other issues awesome
32:46um yeah I mean compliance is a
32:48behavioral issue like why don't people
32:49do this or that right but then also
32:51there's all the behavioral health issues
32:53right which is probably one of the
32:54biggest catastrophes to have come
32:56out of covid yeah exactly right yeah
32:57exactly maybe doctors should be you know
32:59maybe the doctor of the future will be
33:00more of a life coach yeah of which there
33:02will be a pharmacological you know sort
33:03of a biological or pharmacological
33:05component right but maybe it's like
33:07maybe it's more of you know sort of the
33:08dream is sort of holistic uh you know
33:11um and so you know maybe the doctor of
33:13the future is actually a
33:14much more important and you know sort of
33:17fundamental figure in your life than
33:18he or she is today yeah that
33:19sounds fantastic right exactly so so if
33:22I'm a doctor that that's where that's
33:24where I would want to be yeah towards
33:25yeah right and then and that and that's
33:27probably a bigger and more important
33:28market right you know and then and then
33:30in terms of like the size of that that
33:32industry will probably expand yeah you
33:33know kind of correspondingly I think the
33:34same thing is true in education like you
33:36know the the the teacher 10 or 20 years
33:38from now I hope is not doing the same
33:39things the teacher is doing today I hope
33:41they're doing much better things yeah
33:42right so for example one-to-one tutoring
33:44like this is basically the classic
33:46education example like
33:48in the last like 50 years there's
33:50basically only one known education
33:51intervention at scale that actually
33:52improves outcomes after you know
33:54thousands of experiments it's one-to-one
33:56tutoring yes which is very ancient
33:57actually which is very right which is
33:59the original form of Education which is
34:00literally how people used to
34:02get educated and so maybe this
34:04industrial you know the education system
34:06we have today is an artifact of the
34:07Industrial Age if the Industrial Age
34:09components of it become automated the
34:11teacher becomes freed up to actually
34:12work more one-to-one with students the
34:13result might actually be a significant
34:15breakthrough in how Education Works
34:17although the ways you're describing you
34:18can imagine also like AI doing
34:19one-on-one oh yeah pretty
34:21intensively that will be part of that
34:22but also yeah and maybe the AI is the
34:24one-on-one and maybe and maybe in that
34:25case the teacher is supervising the AI
34:26yeah right and maybe the
34:28teacher is making sure that the AI is
34:30on the right track and doing the right
34:31things and is able to kind of sit at the
34:32control panel and watch all that
34:33happening right well that speaks to
34:35something really interesting because
34:37we're probably a little nervous at least
34:39short term to just unleash This and like
34:41not pay attention to it and so you'll
34:43have the doctor using this as a tool but
34:46keeping an eye on it you'll have the
34:47teacher maybe scaling dramatically for
34:49all this one-on-one but keeping an eye
34:51on it do you think that's actually the
34:52way it's gonna I mean this is kind of
34:53how all Technologies work yeah like yeah
34:55yeah so so it's sort of um another way
34:58to think about it is you could imagine
34:59two expansions for AI artificial
35:01intelligence which kind of implies
35:03replacement yeah the one I actually
35:04like much better is augmented
35:07intelligence and you know
35:10another example of the idea would be
35:12Steve Jobs's bicycle for your mind
35:16right so the augmentation right
35:20um and so the way if you just look at
35:22the history of new technologies the way
35:23it plays out is everyone's afraid it's
35:25going to be a replacement and it turns
35:26out it's an augmentation yes yeah so you
35:27take a human being and you give them the
35:28technological tools they therefore are
35:30much more productive yeah like a factory
35:32versus like uh Artisan with their tools
35:34yeah exactly or like you know you know
35:36the dream of like an X you know an
35:37exoskeleton yeah you know any of these
35:40things yeah I mean look our artists are
35:42much more productive today with digital
35:43tools than they were with just you know
35:44paint and canvas yeah yeah um and by the
35:46way even artists that still work on
35:47paint and canvas are much more
35:48productive today because they can sell
35:49their products to a much larger audience
35:51online or like my favorite thing for art
35:53is like you know photography comes
35:54online right and that dramatically
35:56changes art because being photorealistic
35:58isn't that interesting anymore but so
36:00that creates modern abstraction right
36:01yeah which actually is maybe even more
36:03expressive right than just taking a
36:04picture right and so now I can make
36:06pictures with with AI all the time right
36:07so where does that shove art maybe to a
36:10more interesting place and the artist
36:11yeah history the artists were not happy
36:12about the introduction of Photography
36:13because yeah it originally is a threat
36:15of course yeah right but it transformed
36:17things yeah it turned out to be it
36:18turned out yeah it turned out the market
36:19for art is much larger today than it was
36:21oh that's interesting before the
36:22introduction photography I mean we call
36:23it different things we call it things
36:24like TV shows and so forth but like the
36:26market for Creative expression is much
36:28much larger than it used to be by the
36:29way music same thing right you know
36:31recorded music was originally a threat
36:32it used to be a musician would compose
36:34and perform right um and then you know
36:36to have music in your home you'd have to
36:38hire a musician to come into your home
36:39you know phonographs were a threat to
36:41that but phonographs made the music
36:43industry much much larger so people who
36:44were good at making music all of a
36:46sudden had a much bigger Market yeah so
36:47I I think AI is going to play out in a
36:49very similar way like there are people
36:51who argue you know AI is different
36:52because they just keep climbing this
36:53ladder and it'll replace everything I
36:54actually think it's going to be
36:55basically it's the ultimate superpower
36:57it's the ultimate pairing uh good we
36:59were talking about creating screenplays
37:00and and scripts a good example if I'm a
37:02Hollywood screenwriter today like GPT is
37:04my best friend and I'm just sitting
37:05there all day long and I'm just saying
37:06you know playing out it's like okay I
37:08reach this plot Point dot dot dot give
37:09me a list of like 10 ideas for what to
37:10do it's like oh okay that's an
37:11interesting one yes uh um I'll give you
37:14an example how this could work so mad
37:16men it's one of my favorite shows
37:17Matthew Weiner you know wrote that
37:19show and people always told him like
37:21wow that show is so unpredictable like
37:22you know you never knew where it was
37:23going and he said yeah well the
37:25technique we had in the writer's room
37:26was at any given time we had to figure
37:27out what happened next in the plot yeah
37:29uh we would brainstorm we'd come up with
37:30the five sort of obvious things which is
37:35what GPT would give you the obvious
37:37things and you rule those out yeah exactly so it
37:39pushes creativity all of a sudden every
37:41individual screenwriter can do that
37:42without having to have a whole writer's
37:43room to brainstorm you just plug that in
37:45it gives it back to you in two seconds
37:46you're just like okay not those things
37:47yeah I'm gonna do something else and
37:49now I am more creative than I was before
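The writers-room trick generalizes neatly: ask the model for the obvious beats, then rule them out. A sketch, where `ask_llm` and the canned reply are hypothetical stand-ins for a real chat-model call:

```python
def obvious_ideas(ask_llm, plot_point, n=5):
    """Ask a model for the n most predictable next story beats."""
    prompt = (f"Given this plot point: {plot_point}\n"
              f"List the {n} most obvious things that could happen next, "
              "one per line.")
    lines = [l.strip("-* ").strip() for l in ask_llm(prompt).splitlines()]
    return [l for l in lines if l][:n]

def rule_out(candidates, banned):
    """Keep only the ideas that are not on the too-obvious list."""
    banned_lower = {b.lower() for b in banned}
    return [c for c in candidates if c.lower() not in banned_lower]

# Canned stand-in for the model so the sketch runs offline:
canned = lambda p: "the hero wins\nthe mentor dies\na betrayal\na chase\na confession"
banned = obvious_ideas(canned, "the duel begins")
fresh = rule_out(["a betrayal", "the duel is stopped by rain"], banned)
print(fresh)
```

The model is used as an obviousness detector rather than an idea generator, which is exactly the inversion described: the machine supplies the clichés so the human can avoid them.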
37:51right your comment about music's really
37:52interesting because now we've got
37:53Spotify so we got everything in your
37:54pocket you can imagine like the AI
37:56Spotify which is like the doctor the
37:58personal trainer the educator like all
38:00those different things in my pocket
38:02available right now for whatever I need
38:04to do yeah that's right yeah and with
38:05the human escalation path right yeah
38:07it's like yeah the AI therapist or
38:09whatever but yeah with the thing of like
38:11well okay yeah I really have a thing
38:12here especially if it gets really
38:13serious to escalate immediately yeah
38:15that's right yeah yeah okay so what's
38:17what's going to hold us back what what
38:18do we need to change so I think it's
38:20mostly fear so that this is where maybe
38:22I'm a radical on it because you know
38:24this is usually where people start
38:24talking about like regulation um I I I
38:27think it's like we have these we have
38:28these fear-driven reactions I I always
38:30think of it there's this deep-seated
38:31myth in human societies of the
38:33Prometheus myth right yeah yeah and then
38:35Prometheus Prometheus myth is all about
38:36new technology right and then Prometheus
38:38myth is like basically this new
38:39technology of fire right and you know
38:41and fire is one of these classic
38:42Technologies where like it can be used
38:45right or it can be used very badly right
38:48yeah it can destroy your your whole
38:51um and so you know Prometheus famously
38:52you know goes and retrieves um you know
38:54fire from the gods and his punishment
38:56for it is to be chained to a rock and
38:57have his liver pecked out every day for the
38:59rest of eternity right so embedded in
39:01there is like the anxiety about the new
39:02technology and then the arrival of the
39:03new technology maybe is like you know
39:05the fear right is that it's bad and
39:07the person who brings it should be
39:08punished and so I always find that myth
39:10kind of plays out over and over again in
39:12all these discussions about regulation
39:13yes that this stuff you know especially
39:15it's the gods who punish him right the
39:16existing gods yes yes well on behalf
39:21of existence but um yeah so um I
39:23yeah I I think generally it's this it's
39:25just you get you get these fears if you
39:27look at the history of yeah we talked
39:29about some of this if you look at the
39:30history of new technologies they
39:31generally have these fears yes uh every
39:33step along the way every technology has
39:35been created with some prediction that
39:36it's going to upend the social order and
39:37cause the you know well it does upend it to
39:39some degree right and it will do that
39:41but but generally speaking in a positive
39:43way on balance yeah you know I mean
39:45technology is why we live much better
39:46lives today certainly people now would
39:49not want what people had 50 years ago
39:50nobody would make that trade yes right
39:52and you could go back in time
39:53indefinitely and it would be like that right
39:55nobody would ever make that trade
39:56nobody would ever make a trade to go
39:57back in time it would never happen yeah
39:58right that's literally
40:00it's because you wouldn't want to you
40:01would not want to lose the Technologies
40:03yes you have today so I think that's
40:06true and so I actually think like fear
40:07may be the actual
40:10biggest threat yeah fear leads to the
40:12kind of uh you know reach for regulation
40:14yes I'm a skeptic it's like I
40:17don't know right regulating math are we
40:20really going to regulate math well but
40:21it's not going to look like regulating
40:23math right it's going to look like
40:24regulating this superpower that's what
40:25they're gonna say yeah right yeah the
40:27actual implementation of this is
40:30algebra yeah regulating algebra
40:32regulating linear algebra like are we
40:35really going to regulate linear algebra
40:36matrix multiplication yeah really
40:38seriously yeah and and then even if we
40:40do are we gonna possibly do it you know
40:42in a way that makes any sense yeah well
40:45okay but it won't obviously won't look
40:46like that it'll be saying well we can't
40:48have computers drive cars right or like
40:51what's the what's how do you
40:53give the computer a test yeah or how do
40:55you know like okay I'll be
40:56the cynic so okay you make this claim
40:58that the computer AI is better than a
41:00human like how do I know that
41:02well yes it turns out because the
41:03cars are driving yeah so yeah
41:05okay
41:08so here's how that played out
41:09self-driving cars yeah it's interesting
41:10there was one category company that said
41:11we're going to basically wait until it's
41:12perfect and we're going to basically try
41:14to validate with regulators yes they're not
41:17driving yeah and they're not on the road
41:18they're still not on the road there's
41:21another category of company that said you
41:22know what let's evolve it out of basically
41:26um you know cruise
41:27control and then radar basically
41:28and you get humans driving with it and
41:30you label data and exactly and you don't
41:32expect the car to drive itself from the
41:34very beginning the car is like an
41:35autopilot kind of thing the expectations
41:37you pay attention like you know Tesla is
41:39the company I'm alluding to and if you
41:41if you turn on Full Self-Driving on a
41:43Tesla you're still you know you're still
41:44told like you're not supposed to be
41:45watching a movie you're supposed to be
41:46actually paying attention to the car
41:47will like alert you when it's time to
41:50take over um but you know notwithstanding that
41:51Tesla has been climbing the ladder on
41:52self-driving car functionality
41:54capability they do new software releases
41:56push live to car at night anytime they
41:58want those new releases are not being
42:00tested by any federal agency yeah the federal
42:02you know whatever agency it is that does
42:04these things yeah there's no actual test
42:06happening yeah and
42:07that has led to incredible progress
42:09including yeah as I said clearly in the
42:12data this is now standard because you
42:13can't make it work
42:14just magically right it has to happen
42:17gradually right because it's
42:18actually much like medicine it's
42:20entering into a complex system with a
42:22lot of variables yeah in the real world
42:23like medicine too it's like life or
42:25death you know it's just serious but
42:27yeah but it yeah then we go back to how
42:30we started the conversation yeah the the
42:31wait for permission thing the the binary
42:34zero or one wait for permission wait for
42:36Perfection Thing versus the incremental
42:38let's get better and better and better
42:39and the threshold is is it better than
42:41humans is it is it a net Improvement yes
42:43I mean clearly in self-driving cars that
42:45second approach is the approach that's winning
42:48and do you think you get to the Tipping
42:50Point where look let's look at the
42:51statistics we have because we we have
42:53all this happening right now we have the
42:54statistics and it's like so much better
42:56than humans why wouldn't we do it yeah
42:58exactly right and then at some point the
42:59morality tips where it's like well
43:01obviously we have to go in this
43:01direction because it's just it's just
43:02obviously better yeah
43:04I suspect we're going to get there in
43:06medicine pretty quick yeah yeah yeah I'm
43:08an optimist on that and again I'm not an
43:09optimist because I think the AI is going
43:11to be perfect I'm an optimist because I
43:12think the status quo is not that good
43:13yeah well that might be like you start
43:15empowering doctors you give them tools
43:16they start using them and start
43:18empowering patients patients start using
43:20them and actually here I think it's even
43:21different than the car because you're
43:23not on a road right it's you know it's
43:24your body or whatever and actually
43:26patients are driving their own health
43:28care more than ever I think covet was
43:29another sort of Tailwind there so maybe
43:31you start maybe it's just about
43:32developing the tools and giving them out
43:34right well here would be an example so
43:35let's use our screenwriting example to
43:37play into medicine which is you know a
43:38given set of conditions there may be
43:40many possible diagnoses yeah an
43:42experience I've had is there's a set of
43:44a set of symptoms yes one doctor comes
43:45up with one diagnosis another doctor
43:47comes up with a different diagnosis you
43:48read the literature and it's like
43:49actually both of those diagnoses in
43:50theory are plausible but like for some reason
43:52one guy only thought of the one the
43:53other guy only thought of the other yeah
43:54so a way for doctors to start using this
43:56technology today would be plug in the
43:58symptoms give me five possible diagnoses
44:00yes okay oh I didn't even realize right
44:02that you know because maybe this is a
44:03new thing since you know I went to
44:05medical school or something I didn't
44:06realize diagnosis number three was an
44:07option I just go look at that yes yes
44:09right and so the doctor is still doing
44:11the diagnosis so that's your screenplay
44:12example you're augmented as a doctor
44:15you're augmented because in that case it's
44:16alerting you to things that you should
44:18know but yeah no yeah right yeah I mean
44:21um that's interesting it's almost like
44:22having a mentor or or just someone to
44:24riff with that's right yeah yeah yeah
44:26and right the great thing is it
44:27is a machine it will riff with you as
44:29much as you want like it will sit there
44:30at three in the morning yeah yeah
44:31it'll do it a hundred times for you it's
44:32happy to it doesn't get bored it doesn't
44:33get tired yes by the way and then it
44:35also has the advantage it has all the
44:36up-to-date information yes yes right and
44:38all the other outcomes and when it makes
44:40a mistake it actually can learn from
44:42it rather
44:44than being like devastated by it
44:46or emotionally reacting to it right
44:47right right and like self-driving cars
44:49if some other doctor in some
44:51other state had a patient last week and
44:52made a mistake and they fixed the
44:53mistake it will not make the mistake
44:55again on your patient yeah yeah right
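The augmentation loop just described — feed in the symptoms, get back a ranked list of candidate diagnoses for the doctor to evaluate — is easy to sketch as a prompt builder. This is a hypothetical illustration only: the function name, prompt wording, and example symptoms are all invented here, not taken from any real clinical tool.

```python
def build_differential_prompt(symptoms, n_candidates=5):
    """Build a prompt asking a language model for a ranked differential diagnosis.

    Hypothetical sketch: the template wording and structure are assumptions,
    not from any real clinical product.
    """
    bullet_list = "\n".join(f"- {s}" for s in symptoms)
    return (
        f"A patient presents with the following symptoms:\n{bullet_list}\n\n"
        f"List the {n_candidates} most plausible diagnoses, ranked by "
        "likelihood, with a one-sentence rationale for each, noting which "
        "findings argue for or against each candidate."
    )

# Made-up example case; the doctor, not the model, makes the final call.
prompt = build_differential_prompt(
    ["persistent dry cough", "low-grade fever", "fatigue"]
)
print(prompt)
```

The division of labor is the point: the model surfaces candidates, including the "diagnosis number three" the doctor hadn't considered, and the human stays in the loop to evaluate them.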
44:56yeah so do you think that is a
45:00very different regulatory play than
45:01we've seen in the history of healthcare
45:04well I think that's just uh you tell me
45:05I think that's just going to happen so
45:06yeah yeah here's what everybody knows
45:08yeah I'll give you a couple things yeah
45:09everybody knows that patients should not
45:12self-diagnose online and everybody knows
45:14every patient now does that it's called
45:16Dr Google exactly literally that's what
45:18it's called in the field right and there's no way like
45:19you're not practically speaking you're
45:20gonna like regulate that out of
45:21existence yeah yeah that's going to
45:23happen I think doctors using these new
45:25tools as an augment is something that
45:27they could just do it doesn't require
45:28approval so the ship has already sailed
45:29do you think I think so yeah yeah and by
45:31the way patients using GPT if it hasn't
45:33started it's going to start imminently
45:34yeah yeah probably so so so the patients
45:36are going to show up with the results of
45:37GPT queries and the doctors are going to
45:38have to respond to that and so yeah
45:40they're going to end up being in this
45:41world whether they want to be or not but
45:42that's actually really interesting
45:43because as a patient and I probably know
45:45just enough about medicine to be
45:46dangerous to myself but like I show up
45:48with the doctor and I have all of that
45:50thought out basically that might
45:52equalize the patients you know such that
45:54they can actually come much more
45:55educated and
45:56much more thoughtful and they become much more invested
46:01as a doctor do you want your patient
46:05more educated they may just
46:07be humoring me but I think they want it
46:10they might look a little bit more
46:12sideways but if it was really helpful I
46:14think they would I think it's just about
46:16how good it is right that's right okay
46:18so what goes wrong like
46:20I mean look I
46:22think two
46:23things go wrong so one is just the
46:24expectation of perfection right like
46:26that and and look it's very you know
46:28it's very easy to generate the negative
46:30headline it's very easy to set off the
46:32scare the moral panic basically right a
46:34single instance goes wrong and
46:36that gets extrapolated you know we talk
46:37a lot about thalidomide like you know
46:39it'd be very easy to have that kind of
46:40moment or like the the person on a bike
46:43that got hit by a Tesla or something
46:45like that yeah I think it was biking
46:46across a freeway right right exactly and
46:50so like a human probably hit him too
46:52yeah that's right yeah oh well that's a
46:54good point yeah the trolley problem yeah
46:56yeah you know the trolley problem's been
46:57in the press a little bit more recently
46:58because it turns out that Sam
47:00Bankman-Fried was an expert in the trolley
47:01problem okay it shows you that yeah so
47:03it's not the route to ultimate morality
47:04is it yes yes as it's been
47:06marketed but um yeah the trolley
47:08problem always gets
47:10mooted about
47:11with self-driving cars which is you know
47:13you have a choice between killing you
47:14know I don't know
47:15five grandmas or one little kid or all
47:17these different tradeoffs you have to weigh
47:18but like yeah human drivers don't no no
47:22no human drivers never make that
47:24decision no they have gas or brake yes
47:26right and they have I'm gonna hit the
47:27car in front of me we're not going to
47:28hit the car in front of me it's never
47:29this elaborate thing it's always a very
47:31simple thing and so it's not a question
47:32of whether the machine can ideally solve
47:34this sort of you know idealized complex
47:36problem it's gonna hit the brakes faster
47:38right when it's about to crash into the
47:40car directly in front of it yeah and so
47:41properly logically kind of containing
47:44the expectation here to actual real
47:46world and not having this spin off into
47:48these like basically fantasy narratives
47:50that you can then criticize
47:51um yeah so that's
47:54the absolute limit um and then
47:55yeah look I I think just the generalized
47:57fear right and what I always have to
47:59remind myself is like you know I'd like
48:01to say I have a software developer
48:03background it's like okay
48:04the algorithms
48:05that do this like you know can I tell
48:07you every aspect how they work no like
48:09do they do I understand how they work do
48:10I understand the basic foundations do I
48:11understand the basic maths yes yes this
48:13is why I make the comment about
48:15now to somebody who's not a coder right
48:17this whole all this stuff is like
48:19weird magic yeah yeah yeah um yeah and
48:22so there is a yeah I have to remind
48:24myself to be patient and tolerant of
48:26people who don't understand the
48:27mechanics of what's happening that said
48:28I think the people who are going to be
48:31using it also have to get into the
48:32mechanics and try to understand it and
48:33there's always slippage there
48:34yeah so what's the antidote to fear is
48:36it optimism is it education
48:39I mean ideally I think
48:41it's
48:43a cultural orientation towards
48:45new technology and then ideally it's
48:47it's education of people learning and
48:48kind of uh yeah or you know the C.P. Snow
48:50two cultures thing yes yes yes
48:51backgrounds coming together and kind of
48:53educating each other um honestly a big
48:55part of it also I think is when things
48:56become a fait accompli yeah I mean this
48:58is what Tesla's done with self-driving
48:59cars yeah like if it's just happening
49:01yeah yeah right because who would want
49:03to go back like nobody wants to go back
49:04and like the system adapts right
49:06um and so um there was this famous story
49:10Uber fought all these regulatory wars in all
49:11the cities that they were in because it
49:12was not technically allowed under the
49:13taxi and limo charters in the beginning so
49:16one of the things they did early on was
49:17they just made sure that there were
49:18always lots of Uber cars available
49:19around State houses and City Halls yes
49:22and so whenever somebody you know so you
49:24literally have somebody who's like in
49:25you know sort of giving this like
49:26roaring speech you know in City Hall
49:27about shutting down Uber and then they
49:29would come out and they'd have to get
49:30home really fast Uber would show up 20
49:31seconds later right it's like at some
49:34point it just was like take it for
49:35granted and then at that point if you
49:37just said literally are we going to take
49:38Uber away people would have said no I
49:40can't it's over and that's what happened
49:41and then they and then literally what
49:43happened is they changed the laws to
49:44accommodate that behavior and so I
49:46actually think part of it here is just
49:47like having these tools okay here's the
49:49thing here's a good news thing yeah yeah
49:51these tools are becoming widely
49:52available up front right so like 50
49:54years ago a new technology like this
49:56would have been like deployed in the
49:57government first and then in big
49:58companies and then years later in the
50:00form of something individual people
50:01could use the the model today is like
50:03it's just online yeah like GPT is online
50:05right now yes well the future you
50:07paint is really intriguing because from
50:09an engineer's point of view it's the
50:10engineer's dream that if we make it good
50:12enough such that it can get to a point where
50:14people just love it and it's helpful and
50:16it does what it needs to do the rest
50:19will take care of itself yeah
50:20I mean I kind of think that's mostly how
50:22things happen I mean yeah yeah no that's
50:24a beautiful future yeah yeah so now
50:27look having said that Healthcare is very
50:28sophisticated right yes lots of
50:30regulations there's lots of payments
50:31right all these things so I saw this
50:33thing on Twitter the other day yeah it
50:34blew my mind right because this whole
50:36time I've been thinking in terms of like
50:37you know diagnosis all this stuff in my
50:38life so this doctor posted a video
50:41um and um I think I saw that one so that
50:44first video and he said look he said the
50:45problem is whatever diagnosis whatever
50:47like I do the diagnosis I do the
50:49prescription then it's a question of
50:50whether or not I can get the uh the
50:52insurance company to reimburse to pay
50:54for the thing yeah to do that for
50:55anything even slightly out of the
50:57ordinary I have to write a letter I the
50:58doctor has to write a letter to the
50:59insurance company and that letter needs
51:01to be in a specific format and it needs
51:02to have to make the case right make the
51:03case and it needs to have the scientific
51:05citations yeah and if I do the letter
51:06really well it's going to get paid for
51:08and if I don't do the letter really well
51:09it's not going to get paid for it it's
51:10going to matter you know possibly to
51:11the life of the patient yes
51:13um and so he's like it turns out GPT is
51:15really good at writing those letters
51:16with the references with the references
51:18yes with the scientific references like
51:20full-on right yeah and so you've got
51:22this so that's another way I think about
51:24it is you've got this bureaucratic
51:25process which is legitimate and required
51:28it needs to exist and that data needs to
51:29be submitted and honestly it does not
51:31matter to that process whether that
51:32document is written by human or a
51:34machine yeah yeah but all of a sudden if
51:35every doctor in the world is really good
51:37at writing correct properly formatted
51:39letters then all of a sudden it goes
51:40through all of a sudden that doctor
51:41now has another you know whatever four
51:43hours a week to actually take care of
51:44patients yeah like that's the kind of
51:45thing that I think is going to happen
51:46quite quickly and that what's
51:48interesting what's interesting about
51:48that example is you can imagine that
51:50example having a big impact on the
51:51efficiency of the Health Care system
51:52today yes without any regulatory changes
51:55yes without anything it's within the
51:56system within the system within the
51:58system yeah and so and that was just and
52:00that was the one where it's just like oh
52:01in retrospect that's obvious I just
52:03hadn't thought about it yeah one guy
52:05thinks about it all the other doctors
52:06start to do that yes the whole system
52:07upgrades step function yes one time yeah
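The prior-authorization workflow described above — diagnosis, treatment, then a properly formatted letter with scientific citations for the insurer — is, from a tooling point of view, largely a templating problem. A minimal sketch; every field name, parameter, and phrase here is invented for illustration, not taken from any real product.

```python
def build_prior_auth_prompt(diagnosis, treatment, payer):
    """Build a prompt asking a language model to draft a prior-authorization letter.

    Hypothetical sketch: the required sections mirror the format described in
    the conversation (make the case, include citations); the exact fields and
    wording are assumptions.
    """
    return (
        f"Draft a prior-authorization letter to {payer} requesting coverage "
        f"for {treatment} to treat {diagnosis}. Structure it as: patient "
        "background, medical necessity argument, and supporting scientific "
        "citations from the peer-reviewed literature. Use a formal clinical "
        "register. Leave bracketed placeholders for patient identifiers, and "
        "flag any citation that should be verified before sending."
    )

# Made-up example case for illustration only.
letter_prompt = build_prior_auth_prompt(
    diagnosis="treatment-resistant migraine",
    treatment="a CGRP inhibitor",
    payer="the patient's insurance carrier",
)
print(letter_prompt)
```

One design choice worth noting: the prompt asks the model to flag its own citations for verification, since generated references are not guaranteed to be real, and the doctor still reviews the draft before it goes out.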
52:09I think that kind of thing I think is a
52:11real possibility yeah and that could be
52:13because everyone's working within the
52:14system you can have the transformation
52:15immediately but then eventually someone
52:17has to read all those letters so that's
52:19to validate them it's probably you know
52:21some sort of NLP on the other side well
52:23that's right it's it's corresponding so
52:25there's we have this company
52:26um uh this company called DoNotPay uh
52:28yeah which is this uh company it's a
52:30it's a it's an app that sort of acts
52:31like a bot and yeah I've used the app
52:33it's very nice it's for people
52:34that's right and basically it
52:36started to get you
52:37out
52:38of like basically um uh sort of BS
52:42um and then um it's he did this thing a
52:44while ago where it will unsubscribe you
52:45for you know all these consumer
52:47subscription services like Comcast or
52:48whatever like they all make it hard to
52:49like ever turn off the subscription and
52:51so he has this way to the bot will do it
52:53for you and so he just started using AI
52:55in the bot and so um now the
52:58way a lot of consumer subscription
52:59companies work is you can't you can
53:01subscribe online you can't actually
53:02unsubscribe online you have to call an
53:04800 number and you have to argue with
53:05the person and there's actually this
53:07thing in these companies called save
53:08teams where they're actually paid
53:10specifically to prevent you from
53:11unsubscribing and they'll try to cut
53:13special deals with you and they'll try
53:14to talk you out of it and so he has this
53:16thing wired up where now he
53:19has AI generated text wired into
53:21text-to-speech oh it's just talking and
53:23it talks and it talks it talks to the
53:25customer service person at the end of
53:26the line yeah and basically with
53:27infinite patience yes yes
53:31it'll just sit and it will just argue
53:33like no I am actually going to
53:34unsubscribe yes no I'm not going to
53:36accept a special offer no no no no no no
53:38no right yeah exactly until finally the
53:41other guy finally gives up and says okay
53:43fine okay I'll stop charging you
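The retention-call exchange just described boils down to a persistence loop: the bot's real advantage is that it never concedes. A toy sketch, with all behavior invented for illustration — the real DoNotPay system drives generated text through text-to-speech on a live call, whereas here the "call" is just a list of agent utterances.

```python
def cancellation_bot(agent_lines):
    """Toy model of the unsubscribe call: answer every retention tactic
    with the same patient refusal until the agent confirms the cancellation.

    Illustrative only; the trigger phrases and replies are assumptions.
    """
    transcript = []
    for line in agent_lines:
        lowered = line.lower()
        if "confirm" in lowered and "cancel" in lowered:
            # The agent gave up: close out the call.
            transcript.append("Thank you. Please send written confirmation.")
            break
        # Every counter-offer gets the same refusal, at 3 a.m. if need be.
        transcript.append("No thank you. I am going to unsubscribe.")
    return transcript

replies = cancellation_bot([
    "Can I offer you three months at half price?",
    "This is our best deal of the year!",
    "Okay, I can confirm the cancellation.",
])
```

The loop never escalates and never tires, which is exactly the asymmetry being described: the save team's whole strategy depends on the customer giving up first.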
53:46um and so it's like okay you know it was
53:48it was a precondition of the system that
53:50that worked the way that it did it was a
53:51burden on people to have to deal with
53:53that AI can now step in and equalize the
53:55power imbalance between the customer and
53:57the company and presumably that will
53:58change the system yeah well one would
54:00think yeah and to your point like step
54:02one for changing the system might be
54:03retaliation which is all of a sudden the
54:05save teams will be bots and so maybe the
54:07bots will be arguing with the bots yes
54:08but at least it gets you out of this
54:10kind of kafka-esque thing you're in
54:11today where when you deal with these big
54:12companies you're dealing with this giant
54:14you're you're an individual dealing with
54:15a giant bureaucracy at least it like
54:17equalizes the power well that's kind of
54:18amazing and that will be the spark for
54:20changing things because once you're in
54:22that sort of system like we got to do
54:24better than this yeah this is crazy
54:25right yeah bots arguing with each other all day long
54:27it's just it's clearly stupid yeah
54:28yeah yeah yeah and especially when it's
54:29bots on both sides now we can finally
54:31say well Let's do an API on both sides
54:32so let's do something smart on both
54:34sides yeah just connect it yeah yeah
54:35well Mark I mean that's such a sort of
54:39beautiful optimistic view of how this
54:41could go right because the future we're
54:43talking about is actually much more
54:46engineer driven in that if an engineer can
54:48build this and it really really works it
54:50really helps patients it really changes
54:53things it will get adopted as it gets
54:55adopted culture will form around it and we'll
54:58love it and we'll not want to go back and
55:01then the future will just be right in
55:02front of us yeah I mean patients are
55:03going to get a vote Yes doctors are
55:05going to get a vote right yeah and you
55:07know it's
55:08an industry made up of people a world of
55:09many other people people will get a vote
55:11yeah beautiful thank you so much for
55:13joining us yeah you bet