00:00 Hi everyone, welcome to the a16z Podcast. Today we have a special turnabout-is-fair-play episode with Marc Andreessen, who is always in the hot seat being grilled for answers, instead asking a bunch of tough tech and policy questions of partners Frank Chen, Vijay Pande, and Alex Rampell, who cover AI, healthcare, and fintech respectively. They were put in the hot seat at our recent Tech Policy Summit in Washington, D.C., where they covered everything from the implications of the AHCA for healthcare innovation and Dodd-Frank for fintech innovation, and tackled a bunch more tough topics: using tech to discriminate for risk pooling; nature versus nurture, or rather genetics versus behavior; addiction and the opioid crisis; redlining versus predatory lending; and finally the ethics of AI and machine learning beyond the old trolley problem. What will the future look like?

00:42 I'm really, really fortunate at our firm to get to work with some super bright people, and I think you got a sense of that from earlier today. But this is the session where I get to ask the questions that I want to know the answers to. Many of these questions I have not actually asked these guys before, so this is my free-fire zone. And to make it hopefully a little more enjoyable for me, they haven't been pre-briefed on the questions, so I'm hoping to get at least one look of shock and alarm along the way. So, Vijay: big day for healthcare.
01:09HCA passed the House today so two-part
01:11question part 1 so what you know is
01:14you've been involved in as we're
01:15involved in all these different areas of
01:16innovation new healthcare technologies
01:18new business models for healthcare and
01:20then the sort of new present kind of
01:22question for anything new for kind of
01:23how to insert into the health system and
01:25then ultimately for the new companies
01:27who pays kind of being a key question
01:29what's been the biggest impact of the
01:30ACA on how in your view healthcare
01:32innovation and especially healthcare
01:34innovation and technology
01:35I think the AC is part of a graduate of
01:37Ellucian that started with macro for pay
01:39for value and instead of pay for service
01:42and I think that is a huge thing and so
01:44pay for services like you think of
01:46health care having someone like renovate
01:48your house or something like that you
01:50know carpenter come in it comes in he
01:52puts up a wall you charge for the wall
01:54or whatever you're paying for the
01:55service first is paying for value you're
01:57thinking about are we keeping the
01:59patient healthy and you know from a
02:01patient point of view I'd actually
02:03rather pay not for service you know I'd
02:05rather not have to go through treatments
02:07and I want to pay for value I want to
02:08stay healthy and the key thing here is
02:10if we align it right we can pay for
02:13treatments or involvement so I said we
02:15can actually minimize the cost yeah it's
02:17gonna say it's pay for value always
02:18better yeah I've always better for the
02:20patient and then it's always better for
02:21everybody else in the ecosystem or who
02:23does it whose access to the core I can't
02:25at the moment think of a counter example
02:26but I think there is this general idea
02:28that if we keep people healthy they'll
02:30be cheaper and that does seem to be
02:32generally the most emic standpoint at
02:34least maybe on an individual basis some
02:36treatments don't and some seizures don't
02:38happen but then we should all be happy
02:39they're not happening exactly so that I
02:40think is a huge thing I think we need to
02:42keep that going the idea that we can
02:44also with enough information be the in
02:46the construction algae the general
02:48contractor of our health and our body
02:49would be key and that also start off
02:51with I guess under george w bush to have
02:54larger deductibles meant that people
02:56were a little more incentive to take
02:57care of things that combination actually
02:59could be quite powerful right and then
03:01 And then, granted, it's too early to tell what the true impact of the AHCA is, because it's only passed the House; it hasn't passed the Senate and hasn't been signed into law. But in the event that it simply gets approved in its current form, what do you think will be the biggest impact on what we all do?

03:13 One of the things I think, reading between the lines, and it has only been a few hours, is that the states will be given much more prerogative in terms of what they decide to do. What we're going to see is a lot of disparity depending on whether you're in California or whether you're in Texas and so on. This will be an interesting challenge for startups, because they'll have to deal with a much more heterogeneous system, and that's the first thing I would worry about.

03:34 There's a concept of regulatory arbitrage. The good news about state-level regulation versus federal is that with federal, whatever the government decides applies to all 50 states; with states, if some allow you to do something and some don't, that can actually be better for startups. Is there a possible benefit here, or does the complexity end up overwhelming it?

03:50 You probably don't want healthcare done at, like, the county level, but at the state level, in that sense, it could be an opportunity. We'll see how it all shakes out.
03:58 Okay, got it. Then this is a hybrid question that involves both health and fintech. There are laws against using certain kinds of, for example, genetic information for the purposes of risk scoring. This has been a huge part of the Obamacare debate and the AHCA debate; a huge issue at the last minute was what to do about high-risk patients, and I guess there's a part of the AHCA now where they break out some of the high-risk patients into their own pools. So this whole concept of risk pooling in health insurance is central to everything that's happening right now. And the nature of technology evolution, at least as I understand it, and correct me if I'm wrong, is that with genomics and with all these new diagnostics and all the rest of it, we're going to have much more of a technological ability to risk-score by person than we've had in the past. There's a federal law called GINA, signed in 2008, that prohibits the use of certain kinds of genomic data for the purpose of risk scoring. There's an anti-discrimination case for it, but the counter-case is that we have this information we could use, and we're basically plugging our ears and choosing not to, which seems like an untenable long-term position. So I guess the question is: if we have much better individual-level risk scoring, does the current model of risk-pooled health insurance survive in any form over 10 or 20 years, or does that entire concept at some point start to cave in?

05:01 I think the reason the law might make sense is that I don't know if we have the technology to go from genome to risk quite yet, so that's an important caveat, and if you did that incorrectly you could have very nasty downside effects: people would not get health insurance that they might need, or vice versa. So I think that would be an issue. Environmental factors would also be key: you might have great DNA, but if you have every meal at a fast-food place, that might not be great. And actually, who knows, maybe your phone's GPS could register the check-ins at McDonald's and Burger King, and then we would know your risk anyway. So the big problem is that it's a long gap from genomics to risk, and the environmental factors are also so key.

05:39 Let me push you
and pin you down on that: I think 23andMe got, I don't know, blessed by the FDA, and one of the things they'll give you, for example, is increased risk of Parkinson's. I know a big concern on the FDA's part with these personal genetic tests has been: what value do they actually have? Apparently they do have value, or at least the FDA now thinks so. So how can the FDA claim these tests have value in that dimension and then say they shouldn't be used for this purpose?

06:00 I think this is a broader societal, almost philosophical question. You have immutable traits, and in many cases those are genetic. The problem is that they're not entirely deterministic. To your point, it would be very nice if you saw that this SNP led conclusively to this particular problem. But I did 23andMe when it first came out, and it says you have a 0.15 percent higher chance of getting X; it's not really deterministic. Still, the idea of discriminating based on immutable traits is one that I feel we as a society agree is wrong, and if you start with that as your axiom you can follow a lot of different things, at least for the deterministic cases: if it's "if X, then Y," we should not discriminate on that. Where it's behavioral, you have a broader set of questions.
06:43 Discrimination, as the word is used in the English language, is basically only pejorative. But you want to discriminate against people that don't pay their bills on time. You want to discriminate against people that smoke, and insurance companies do that: "Do you smoke, yes or no?" is the very first question you ask if you're a life insurance company or a health insurance company, because that is a choice you're making. You could argue that there might be a genetic component to being addicted to nicotine, and therefore, was it really a choice?

07:07 By the way, there is such a component.

07:09 There is, so I totally get that, which is why I started by saying it's not purely deterministic. It would be very, very simple if you could just bifurcate the world into immutable traits, which are genetic (we both have young kids, and I think about how they do have these immutable traits), and then other things that are behavior-based. And what you want to do is avoid lopsided risk pools. Forget about the category that we've almost axiomatically agreed as a society we will protect: you don't want a lopsided risk pool around behavior either. I see this in the lending space a lot. The only way you reduce interest rates for people is either you discriminate based on outcomes, outcome-based payments, which often requires machine learning and more data, or you're able to charge higher interest rates over time. So if you have a massive pool and 50 percent of the people don't pay you back, and you can't filter that pool anymore, then you have to charge a 100 percent interest rate in order to break even. And if you can't charge 100 percent interest, you just don't loan money to the whole pool, and that's bad. I don't think people should be denied access to credit because they're poor or because they're part of a group. So in that sense, even though it sounds bad, discrimination, assuming it's not based on immutable traits like gender, ethnicity, and whatnot, but actually based on past behavior, seems fair.

08:16 But the devil's in the details, because what you might call an immutable trait might not be, and vice versa.
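The break-even arithmetic in that lending example can be sketched in a few lines; the 50 percent and 95 percent repayment figures are just the illustrative numbers from the conversation:

```python
def break_even_rate(repay_fraction: float) -> float:
    """Interest rate at which lending $1 to everyone in a pool breaks even.

    Only `repay_fraction` of borrowers repay principal plus interest,
    so break-even requires repay_fraction * (1 + r) = 1.
    """
    return 1.0 / repay_fraction - 1.0

# If half the pool defaults, you must charge 100% interest to break even:
print(break_even_rate(0.50))  # 1.0, i.e. 100%

# Filter the pool so only 5% default, and ~5.3% interest covers the losses:
print(round(break_even_rate(0.95), 3))  # 0.053
```

This is why better filtering (risk scoring) and lower rates for everyone who remains in the pool go hand in hand.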
08:24 Going out on a slightly longer timeframe: there is more and more neurological research showing genetic, or otherwise biological, origins of behavior. My understanding, and you tell me, is that there are genetic traits that predispose people to addictive behavior.

Yep.

And such addictive behavior might include things like gambling, might include things like being a more dangerous driver. So if you go out 10, 20, 30 years, there are behaviors that we would obviously like to discriminate against, and yet all of a sudden we may start reclassifying them as not free-will-based.

08:47 Okay, but with CRISPR we could just take care of that.

08:51 Ah yes, we'll do a little snipping here and there, as they say. Yeah, let's move on.
08:57 Frank, AI is obviously a hot topic in general and it's been a hot topic today. In your view, what's the most surprising AI upside, say over a 10-year period? And it can't be an obvious answer like self-driving cars. What's the thing where, over the next ten years, we'll just say, wow, I can't believe they could get it to do that?

09:14 I don't know if this is going to come true, but already we have AI that can do creative things like compose music, though with training data: make it sound like something you're familiar with, right? Make it sound like jazz, or make it sound like Baroque music, or make it sound like this singer's voice. What I'm looking forward to is: could an AI create a brand-new genre of music that people find desirable? It doesn't sound like anything today; it's just this breakthrough new genre. In 10 years, I think we have a shot at doing that.

09:45 Would that be scored according to whether humans like it?

09:48 I think it would just be: is it at the top of the Billboard chart?

09:51 Okay, so maybe AI just gets very good at weaponized virality, very good at figuring out how to be, like, the next Justin Bieber.

10:01 That's exactly right.

10:02 So is that true creativity, as opposed to derivative work, which remixes something that already sounds like something? Would that be true creativity, or would it just be that it starts to understand us really well and learns how to really manipulate us?

10:14 Yeah, so the question is: did Mozart do anything different than that?

10:17 Yeah, I don't know exactly. And then on the other side: AI has become like the salve that people rub on themselves to convince themselves that anything in the future is possible and almost likely, that AI can do everything. So what's the one thing where the current dialogue might say, yes, clearly AI is going to be able to do X, and in ten years we'll still be sitting here asking why they haven't been able to get it to do that?
10:39 First, just to echo your comment about AI being the salve: this is my favorite conversation to have with startup entrepreneurs now. They all come in and say, "I have an AI startup," and I ask, "That's awesome, what machine learning techniques are you using in your software?" And then they're kind of squirming. It's become the cliché: five years ago this was mobile-first or cloud-native; AI-centric is the new flavor du jour. The other tipoff is they'll show their five areas of differentiation, and AI will be the fifth bullet. Now, I got so excited telling that story that I forgot the question.

11:14 So what's the wall? What's the AI wall this time? What will we not be able to do?

11:20 Look at the applications where you go to either a website, or Facebook Messenger, or Skype, and you basically have a chat conversation using text. Most of those experiences are miserable, because the second you wander off what it understands, it does some pretty ridiculous things. I was at a conference, a chatbot conference, and the t-shirts all said: "What do we want? Chatbots! When do we want them? I'm sorry, I didn't understand the question." Best conference swag ever. Because AI is so hot, the expectation around what the AI can accomplish is just unrealistic. What you're expecting is sentience, and what we actually have is automation on steroids: we can now automate things that we couldn't automate before, because we don't have to painstakingly describe the rules behind them; we just let the computer figure out the rules from data. But behind that, the expectation is, wow, I expect to be talking to a normal human being, and we're not there yet.
12:18 We've all gotten up here on stage today and said self-driving cars are going to happen, and autonomous drones are going to happen, and AI forecasting heart attacks and diagnosing cancer and all these things. So what's so hard about chatbots? If we can do all these other amazing things, what is it about language that makes it so hard?

12:32 This is almost a philosophical question. My favorite story from the literature is from the early days of natural language translation: what they would do is take English, turn it into Russian, and then go backwards, just to make sure you could close the loop. So you feed in the sentence "The spirit is willing but the flesh is weak," turn it into Russian, send it back into English, and in the early days you would get some truly hysterical results. This is my favorite: you get back "The vodka is good but the meat is rotten," and you can actually see exactly how it made that mistake. So one area of thinking is this:
13:08 Why is English so hard? Because words don't mean what they mean, or rather, they only mean what they mean in a specific context. And so we actually had generations of AI researchers who, having reached that epiphany, said, okay, the answer to this is a million rules. There's a professor, originally at Stanford, who said we're never going to be able to solve AI unless we can solve these corner cases, so I'm going to codify all of these rules. Since the mid-1970s he's been doing that. He has a system, and a rule-processing engine that can figure out the priority of the rules: if I have two rules in conflict, which one wins? To give you a sense of the type of rules he'd put in the system: when you go to a quick-serve restaurant, the rule is you pay first and get your food later; if you go to a fancy restaurant, the rule is you order, get your food, and pay at the end. Okay, so we must have a rule for that, and so rule number one in the system is "that's how restaurants work." And you can imagine going down this line of millions and millions and millions of rules. He's been trying his entire life, and it just didn't work, because for every set of rules you write, there is some exception, there is some weird idiom, there is some trick of the words where the ambiguity resolves some other way and you just don't get the right result.
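The rules-plus-priorities approach described there can be sketched in miniature; the restaurant rules are the ones from the conversation, while the priority scheme and context keys are invented for illustration:

```python
# Toy rule engine: each rule has a priority, a condition over a context,
# and a conclusion. When two rules conflict, the higher-priority
# (more specific) rule wins, which is the conflict-resolution job the
# real system's rule-processing engine had to do at massive scale.
rules = [
    (1, lambda ctx: ctx["place"] == "restaurant",
        "order, eat, then pay"),
    (2, lambda ctx: ctx["place"] == "restaurant"
                    and ctx["style"] == "quick-serve",
        "pay first, eat later"),
]

def conclude(ctx: dict):
    matches = [(p, c) for p, cond, c in rules if cond(ctx)]
    if not matches:
        return None  # the real-world failure mode: there's always an unmatched case
    return max(matches)[1]  # highest priority wins

print(conclude({"place": "restaurant", "style": "sit-down"}))
print(conclude({"place": "restaurant", "style": "quick-serve"}))
```

The punchline of the anecdote is that no finite pile of such rules ever covers language: every idiom needs another rule, and every rule spawns new conflicts.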
14:18 Is any of this unique to English, or is it true of all languages?

14:20 It's true of all languages, and every language has its own quirks. For instance, in Chinese you have this quirk of homophones: lots of words sound exactly the same and are deeply ambiguous. When you're trying to type Chinese, you have a very hard problem, because there's no alphabet. Even if you gave it an alphabet, you'd type a series of letters and then get back, here are the 12 characters that could mean that thing, and then you have to go and select one. That's literally how a Chinese keyboard works; if you've never seen it, it's terrifying: you type the letters and then you have to pick one of twelve characters that it could be. This is why talking to your phone in Chinese is now three times faster than typing, and why Baidu is spending so much money trying to do speech-to-text magically.
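That candidate-selection flow (type romanized letters, pick among homophonous characters) can be sketched as follows; the tiny dictionary below is an invented sample, not a real input-method database:

```python
# Toy pinyin input method: one romanized syllable maps to many
# homophonous Chinese characters, so the user must disambiguate
# by picking from a candidate list.
CANDIDATES = {
    "shi": ["是", "十", "时", "事", "市", "石", "食", "使"],
    "ma":  ["妈", "马", "吗", "骂"],
}

def candidates(pinyin: str) -> list:
    """Return the homophonous characters for a typed syllable."""
    return CANDIDATES.get(pinyin, [])

print(candidates("shi"))      # many characters for one syllable
chosen = candidates("ma")[0]  # the user resolves the ambiguity by choosing
print(chosen)
```

Real input methods rank candidates by frequency and sentence context, which is exactly where the ambiguity-resolution problem Frank describes comes back in.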
15:03 And so every language has its own quirk like that, and the holy grail of this space would be sort of one learning algorithm to learn them all, in other words, one deep learning framework that could learn all languages equally effectively. I don't think we're quite there yet.

15:18 Okay, Vijay, back to you,
and on health: a much, much more serious topic, opioids. The inventors of opioids, I would imagine, had a very strong, morally positive view that they were giving the world a wonderful thing, because lots of people suffer from pain of various kinds, and it seriously degrades people's quality of life and has serious economic impact. So advancement in pain medication, as viewed by the researchers involved, is a very good thing. And yet we now have this public health crisis, with opioids leading to some very, very adverse effects for human beings, and also for our country and for our economy. Has the opioid crisis changed your view of how to think about innovation in healthcare?

15:52 Actually, you know, it's just a different type of innovation, in that historically opioids are derived from the same stuff, the poppy, and so it's really old. I mean, the pharmaceutical version is better and so on.

16:03 So what was the innovation? Maybe I'm just a bit curious: what's new and different about opioids?

16:08 Chemically it's very similar; it's just slight modifications to make it more powerful, or last longer, or things like that. So even something like remifentanil has the same connection back.

16:21 Okay, so in a sense it's a very old thing; it's like morphine, or like laudanum in the old Westerns.

16:27 Yeah, I guess so. Then I would say it's been consumerized: it's been medicalized, been made available. So really the innovation there is that people have pain and therefore they need pain treatments. And there's an interesting sort of psychological or societal aspect to this, because I don't know if you know the study with rats, the so-called Rat Park. You take a rat and you keep it in a cage, and it has a nasty life, and you give it the choice of food versus opioids: it'll take the opioids until it dies, it just overdoses. And then you take rats and put them in Rat Park, which is a great place with all their friends, like Club Med for rats, and you give them the same choice: they don't overdose. And so this is the intriguing thing when we talk about addiction: part of addiction is, what are your options? If you're living in a box, or you're in a war that you don't want to be fighting, there are lots of reasons why people's lives can be very nasty, such that this becomes an alternative. So I think we could turn it around: the opioid crisis might not be as much about the drugs as about the state of these people's lives, and alternate interventions would be the way to go after it.

17:35 With that in mind, is this an area in which we, "we" being our broad community of innovation, healthcare innovation and technology in healthcare, can have a positive impact? Or do you think the answers to this therefore lie outside?

17:46 My guess is it's much more societal, but there are interesting alternatives. We've seen technologies like TENS, an electronic device that can be used for certain types of pain care and doesn't have this addiction, so things like that will play a role, and those devices are already rolling out. So there will be some areas where technology can help, but I think this is a much more systemic issue.
18:04 Alex, obviously healthcare regulation is in the news today; financial services regulation is another hot topic, and the new administration is making statements that they plan fundamental repeal, potentially including everything up to and including repealing Dodd-Frank in some form. A two-part question, and I'll start with the first part: what's been the impact of Dodd-Frank, do you think, on fintech and on what it means to be a fintech innovator today, versus, say, before Dodd-Frank?

18:26 Part of Dodd-Frank is this idea of risk retention, which is a great idea in theory, especially if you're Chase or Wells Fargo and you're originating trillions of dollars of subprime mortgages, as an example, because that never happened, right? Originating means: I have all these brokers, they get people to sign up for loans that they can't afford, I know that they're all bad, I get somebody to say they're not bad, because they haven't actually looked at all the data and don't know they're bad; they just look at the properties, and the properties look like they have good value. I bundle them up and then I sell them, done, and I earn a commission on both sides. That sounds like a great business to be in. So the idea of risk retention is that one of the ways you prevent that is not just being able to claw back bonuses from big bank executives, but to have the bank itself retain some of the risk of these securitizations.

19:12 Maybe hold some percentage?

19:14 Yes, and now 5 percent is a legal requirement. If you take your clients down, you're going down with them, so the interests are aligned. You have it both at the personal level, in terms of "my bonus can get clawed back," which you actually saw with Wells Fargo for a different issue, and at the corporate-wide level, where it's, okay, we shouldn't do this because we have to hold on to the risk, and we actually have to finance that risk. For every dollar we securitize we're holding five cents of it, and we don't have infinite money, so we have to be careful about our lending practices.

19:42 Now, is that a good idea for a brand-new company that has no money in the bank, for them to have to do risk retention around loans they might originate? That's kind of tricky, and it might not make sense. And what kind of societal downfall can come from a brand-new company that is helping refinance your existing high-rate debt into low-rate debt, when they only pony up 1 percent alongside it and only make 100 loans because it turns out there's no demand for their product?
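The skin-in-the-game arithmetic behind that 5 percent retention requirement can be sketched simply; the pro-rata structure and the dollar figures here are simplifying assumptions for illustration (real securitizations vary in how the retained slice is structured):

```python
def retained_loss(originated: float, loss_rate: float,
                  retention: float = 0.05) -> float:
    """Dollar loss the originator eats on its retained slice.

    Simplified model: the originator holds `retention` of every
    securitization pro rata, so it absorbs that share of the losses.
    """
    return originated * retention * loss_rate

# Originate $1B of loans; 10% of them go bad:
print(retained_loss(1_000_000_000, 0.10))  # $5,000,000 of the originator's own money
```

The point of the rule is exactly this line item: the originator can no longer earn commissions on both sides while bearing none of the downside.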
20:10 That's just one example; regulations apply to small companies and big companies alike, and Dodd-Frank just has so many different components. There's a part that regulates interchange, which is how much the credit card companies charge for debit card products; that's probably a good thing. Small merchants actually benefit if they don't have to pay as much money: if you have a low-margin consumer service and you're paying 2 percent out the door to the credit card companies, or the credit card conglomerate, that's not good for you. So part of Dodd-Frank, this thing called the Durbin Amendment, allowed the Fed to actually cap debit interchange, and if you look at some of the financial results of somebody like Stripe or Square, they have this great tailwind from the fact that debit interchange went down dramatically. So it's a multi-faceted thing. If you're doing trillions of dollars a year of loans, or you somehow pose a systemic risk to society, okay, I can buy the fact that we don't want to let one company bring down the entire financial system. But when a company is just trying to figure out whether to buy Google ads or Facebook ads to get people to find out about their product, it's a little bit premature for them to vaporize half of their funding on legal bills because of regulations that are really meant to watch after the companies with hundreds of billions of dollars in market cap.
21:22things that predate dodd-frank which I
21:24actually think are the most
21:25anachronistic so this idea of fair
21:27lending actually goes back to this
21:28machine learning concept fair lending is
21:30a very very well-intentioned law like I
21:32should not as a bank be allowed to
21:33discriminate based on marital status or
21:35ethnicity religion any of that stuff so
21:37fair lending has an appendage called
21:39disparate impact it's not just are you
21:41are you discriminating outright on one
21:43of these factors but are you having a
21:45disparate impact against somebody in one
21:47of those factors now the funny thing
21:48about computer code is that a you can
21:50examine it and be its dispassionate like
21:52if you have a in your code if there's a
21:54statement that says if race equals y
21:57then reject applicant like great go to
21:59jail I'm totally in favor of that but
22:01that's not how machine learning works
22:02you basically it's like linear algebra
22:04you have like here's every borrower ever
22:06here's every attribute ever let's look
22:09for patterns over time and by the way
22:10we're not even collecting things like
22:12gender ethnicity we're not inferring any
22:14of that kind of stuff so it might turn
22:15out you have four cats you have two cars
22:17and you'd like to watch Seinfeld every
22:19night and the combination of those
22:20factors means that you are a bad risk
22:22for lending like but you have purple
22:25hair and that's a protected class
22:26therefore you can't use those prior
22:28three things even though we might
22:30consider the developing world as being
22:31behind in many respects they're ahead
22:33because the regulations and everything
22:36there can actually recognize the fact
22:38that you are using newer techniques to
22:40positively discriminate against
22:42deadbeats and only deadbeats and you can
22:44look at the code to verify that as
22:46opposed to doing abyss and anything that
22:48really should be a protected class
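The distinction drawn here — an explicit, inspectable rule versus a learned score over non-protected behavioral features — plus the "disparate impact" test mentioned earlier can be sketched in a toy example. Everything concrete below (feature names, weights, approval counts, the four-fifths threshold convention) is invented for illustration, not taken from any actual lender:

```python
def explicit_rule(applicant):
    # The kind of statement regulators could spot by reading the code:
    # an outright branch on a protected attribute ("go to jail").
    if applicant.get("race") == "Y":
        return "reject"
    return "approve"

def learned_score(applicant, weights):
    # A model in the spirit described above: protected attributes are
    # never collected; the score is just a weighted sum of behavioral
    # features (cats owned, cars owned, hours of Seinfeld watched...).
    return sum(w * applicant.get(f, 0) for f, w in weights.items())

def disparate_impact_ratio(approved_a, total_a, approved_b, total_b):
    # The "four-fifths rule" heuristic used in disparate-impact
    # analysis: one group's approval rate divided by the other's.
    # Ratios below 0.8 are commonly treated as red flags.
    return (approved_a / total_a) / (approved_b / total_b)

weights = {"num_cats": -0.5, "num_cars": -0.2, "seinfeld_hours": -0.1}
print(round(learned_score(
    {"num_cats": 4, "num_cars": 2, "seinfeld_hours": 7}, weights), 2))  # -3.1
print(round(disparate_impact_ratio(40, 100, 60, 100), 2))  # 0.67 -> below 0.8
```

The point of the sketch: the first function is auditable at a glance, while the second can produce a disparate impact (the ratio check) without any protected attribute ever appearing in the code.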
22:49whereas here you have to actually when
22:51you get denied from a loan in
22:52the U.S. it's like here are the three
22:54reasons why you were denied for a loan
22:55and if it's like if there are four
22:56thousand micro reasons that altogether
22:58sum up to this like you can't say
23:00Seinfeld and cats and cars like it
23:01doesn't compute there was a time not
23:03that long ago there was a huge redlining
23:05debate on mortgages and lending and was
23:06lending being fundamentally denied to
23:08people by virtue of where
23:09they live which was code for who they
23:11are right at some point through some
23:12consequence of either regulatory changes
23:14or political changes or what you know
23:16incentive changes or whatever at some
23:17point it flipped from those banks are
23:20racist and denying loans to people who
23:21should get them to these
23:24banks are predatory they're evil
23:26because they're denying loans to people
23:27versus they're predatory because they're
23:29giving loans to people I mean in
23:31particular many of the same people in
23:32our cases that were being redlined
23:33like people living for example at lower
23:35income neighborhoods right and so you
23:37know by 2007 or 2008 everybody knew for
23:39a fact that it was evil for a bank to
23:41give a loan to somebody in a low-income
23:42neighborhood who couldn't pay it off and
23:44I know bank executives who are like well
23:45that's the exact opposite of what we're
23:47being accused of five years ago so for
23:49example new lending companies or new
23:50insurance companies how do we think
23:51about the line the line between
23:53redlining and predatory lending well
23:55ideally and this is where risk retention
23:57or reputation retention really makes
23:59sense which is I mean part of the
24:01problem there is that people were
24:02willingly and willfully making loans to
24:04people that they knew couldn't afford it
24:06and my my view of regulation is that it
24:08should enforce the transparency like the
24:10reason why payday loans are bad is
24:12because the fine print isn't even like
24:14nine point font it's not even eight
24:15point font it's like two point five you
24:17can't see it and then if you are the
24:19even scummier payday lending company
24:21that makes it one point font you'll win
24:22against the payday lending company that
24:24makes it three point font so it's this
24:25race to the bottom and nobody wins
24:27everybody loses it's a tragedy of the
24:31commons where regulation can be helpful is like
24:33okay let's make this incredibly
24:34transparent when you have
24:36securitizations where you kind of pass the buck off
24:38to the greater fool that's where you
24:40have the predatory problem and actually
24:42regulation I really do believe has a
24:43role you know before joining this firm I
24:45ran an ad network around payments we had
24:48a lot of what I would consider not very
24:50friendly competitors and I think in
24:52their heart of hearts they didn't set
24:53out to build bad businesses or be bad
24:55people it's just the only way to win is
24:58to kind of out-trick consumers and
25:01that's the greatest role that regulation
25:03can really have I think when we back a
25:05company ultimately we're at the stage
25:06where it only works if the consumer
25:08wants it and you have to kind of fight
25:10for the hearts and minds of consumers
25:11that have 9,000 other different banking
25:14services available to them and then and
25:16only then will you have a chance of
25:17being used by that particular consumer
25:19and you know we're trying to back things
25:21that fundamentally have a 10x
25:23improvement over existing products in
25:25the banking system okay another ethical
25:26question if you have somebody who's
25:27kind of following the whole discussion
25:28on AI they bring up AI ethics they'll cite
25:31something called the trolley problem and
25:32the trolley problem is basically this
25:34problem of you've got a self-driving car
25:35and it's barreling down the street and
25:37there's different versions of the
25:38problem but basically it
25:40has a choice to make it's
25:41either gonna keep going or it's gonna
25:43slam on the brakes and the computer is
25:44gonna calculate as the computer is gonna
25:46know a lot about what's happening around
25:47it if I keep going I'm gonna run into a
25:49car and I'm gonna kill five nuns who are
25:52all seventy years old and if I hit the
25:54brakes I'm gonna hit and kill two
25:56six-year-olds with their entire life in
25:57front of them and I have a thirtieth of
25:59a second to make the decision right I
26:00think Frank you and I would probably
26:02agree that that is what might be
26:04politely called an edge case that is a
26:06hypothetical which just goes to say that
26:08hopefully nobody in the audience had to
26:09make that decision recently if the
26:11actual decisions that we have to make is
26:12am I gonna you know how can I text all
26:14the way through this stop sign or
26:17should I look up at some point so
26:19contra that or tell me if you disagree
26:20that what do you think is for example a
26:23very very serious AI ethics problem that
26:25people are maybe not discussing
26:27enough ooh that is a good question
26:29but before I answer it I want to say a
26:30little bit about the trolley problem and
26:32sort of approaches industry is taking
26:34as it's sort of trying to solve this
26:35problem so one you might posit the
26:39existence of an ethical decision making
26:41as a service company and you're like why
26:45would we want one company to hire a bunch
26:47of philosophers to do this well it would
26:49mean it's literally a company where you
26:50submit an ethical problem and it gives
26:52you an answer on the face of it it sounds
26:53crazy how could we have an ethical
26:55decision making as a service company
26:56well your alternative is that you're
26:58gonna allow all of your motorcycle
27:00vendors and car vendors and ambulance
27:03vendors and truck vendors to figure it
27:05out themselves they're not philosophers and
27:06maybe not even take that into
27:08consideration at all if you look at the
27:10current crop of machine learning
27:12algorithms that drive autonomy they're
27:13not making high-level decisions like
27:15let's calculate the life expectancy of
27:18the people that I'm about to wipe out
27:20they're not doing that they're looking
27:21at there's an object in front of me and
27:24I'm trying to figure out the probability
27:25what is the highest probability
27:27safe path that's the calculation it's
27:29performing right but let's say like we
27:32can do all of the probability adjusted
27:34safe paths and we could actually take
27:36into account the sort of life
27:38expectancies that you know what's the
27:41utility function I think you'd say what if
27:43we had everybody's genomes what if we
27:44could do risk scores that's right that guy was
27:47gonna get addicted to nicotine anyway
27:49don't worry these nuns are the healthiest in the
27:52history of the world they're all gonna live to be a
27:53hundred and forty that's exactly it so if we could
27:55get to the point where those
27:56calculations started factoring into what
27:59the car would do you kind of want the
28:01philosophers to encode those types of
28:03rules by the way you've never met people
28:04who are as fired up about the ethics of
28:07self-driving cars as philosophers yeah so
28:09we're not at the point where that's
28:10even entered the conversation except for
28:12futurists who are positing the
28:14existence that maybe we ought to have an
28:15ethical decision making sort of as a
28:17service so what kinds of things are
28:19people not taking seriously well it's
28:21the conversation we just had which is
28:23these machine learning algorithms might
28:25be selecting people for loans or no
28:27loans or a diagnosis you know should you
28:29be part of this clinical trial or not
28:31right where there might be a high
28:34correlation with something that we want
28:35to be a protected class and we don't
28:37really understand why it is that
28:39algorithms are making those decisions
28:41broadly speaking there's sort of classes
28:43of machine learning algorithms where if
28:44you sort of poked at them you could
28:46figure out why did you make that
28:47decision the classic algorithm where you
28:49can do that is called a decision tree
28:51you can actually look at all the
28:53branches in the tree and say oh you made
28:55this decision because you followed all
28:56these branches down the tree and that's
28:57why you rejected this person
28:58for a loan the modern more powerful more
29:02accurate algorithms that belong to a
29:04class of algorithms called deep learning
29:06algorithms they don't have that feature
29:07they are notorious black boxes you
29:09cannot ask it why did you make this
29:12decision all it is is basically a vast
29:14linear algebra matrix of weights and so
29:17there's no way to sort of query it now
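The contrast being drawn — a decision tree whose every branch you can walk and read back, versus a matrix of learned weights you cannot query — can be illustrated with a toy hand-rolled tree. The features, thresholds, and decisions below are invented for the sketch:

```python
# Each internal node: (feature, threshold, low_subtree, high_subtree);
# each leaf: a decision string. A real tree would be fit from data,
# but the interpretability property is the same.
TREE = ("income", 50_000,
        ("debt_ratio", 0.4,
         "approve",
         "deny"),
        "approve")

def predict_with_path(node, applicant, path=None):
    # Walk the tree, recording every branch taken, so the decision
    # can be explained after the fact -- the "here are the reasons
    # you were denied" that regulators require.
    path = path or []
    if isinstance(node, str):
        return node, path
    feature, threshold, low, high = node
    if applicant[feature] < threshold:
        path.append(f"{feature} < {threshold}")
        return predict_with_path(low, applicant, path)
    path.append(f"{feature} >= {threshold}")
    return predict_with_path(high, applicant, path)

decision, path = predict_with_path(TREE, {"income": 30_000, "debt_ratio": 0.5})
print(decision)  # deny
print(path)      # ['income < 50000', 'debt_ratio >= 0.4']
```

A deep network offers no analogue of `path`: its "reason" is distributed across millions of weights, which is exactly the black-box problem described next.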
29:20the kind of argument and the reason
29:23these things are being used as opposed
29:25to the old-fashioned decision trees is
29:26they are more accurate just
29:28plain and simple they're gonna make the
29:30right decision more times than the
29:32decision trees and so when you challenge
29:35somebody in the community who's working
29:36on these black box algorithms their
29:39defense will be hey look and I just
29:41actually went through this my
29:42sixteen-year-old son just got licensed
29:44to be a driver a terrifying experience
29:46for everybody who's had teenagers
29:48there's actually no way to query in his
29:51brain what he's gonna do in these edge
29:53cases either but yet we licensed him to
29:55drive we made a regulatory decision as a
29:58society that just because he passed a
30:01couple of behavioral tests and a couple of
30:03written ones he's now licensed to drive and so you
30:05have no more inspectability or
30:07understandability of the
30:09sixteen-year-old's brain compared to the
30:11deep learning algorithms all you have is
30:12behavior and at the end of the day what
30:15we're gonna judge cars on is accidents
30:18per million miles driven right just like
30:20you'd judge humans well the way that
30:22you'd interrogate the deep learning one is
30:23you just have the deep learning
30:24algorithm run in a simulation and run
30:26through you know a trillion scenarios
30:27yeah with all kinds of variables and
30:29then see what comes out the other end
30:30which is much harder to do with the 16
30:32year old so you can only watch him
30:33playing Grand Theft Auto so many times
30:35right so for behavior we're
30:38gonna send the self-driving algorithms
30:40through tons of simulation in fact most
30:42of the big companies will say how many
30:44real miles versus how many simulator
30:46miles we'll feed the networks and
30:48they're saying like it could be 50/50
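The behavioral, simulation-based evaluation described here — you can't query the network's weights, but you can run its policy through huge numbers of simulated scenarios and judge it on outcomes, the way we judge human drivers — can be sketched minimally. The scenario model and the stand-in "policy" below are entirely invented:

```python
import random

def policy(obstacle_distance, speed):
    # Stand-in for a learned driving policy: brake whenever
    # time-to-collision (seconds) drops below a fixed margin.
    return "brake" if obstacle_distance / max(speed, 1e-9) < 2.0 else "continue"

def run_simulation(n_scenarios, seed=0):
    # Behavioral test harness: randomized scenarios in, failure rate out.
    rng = random.Random(seed)
    crashes = 0
    for _ in range(n_scenarios):
        distance = rng.uniform(1, 100)   # meters to obstacle
        speed = rng.uniform(5, 40)       # meters per second
        action = policy(distance, speed)
        # Crude outcome model: continuing with under one second to
        # collision counts as a crash.
        if action == "continue" and distance / speed < 1.0:
            crashes += 1
    return crashes / n_scenarios

print(run_simulation(100_000))  # 0.0: this policy brakes well before 1s TTC
```

The harness never inspects the policy's internals; like the licensing test for the sixteen-year-old, it only measures behavior, just across far more scenarios than any road test could cover.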
30:49yep yep so a final final question for
30:52all three of you so a lot of us in the
30:53audience a lot of people on stage have
30:54kids what will be the single biggest
30:57change in daily life when all of our
30:59kids are our ages so 20 to 25 years out
31:03let's say daily life I definitely
31:06believe they're not going to be driving
31:07they'll be picked up and you know
31:10whether the big question will be 2D
31:12or 3D right getting you from point A to B
31:12I actually think here's another
31:14interesting thing which is if there were
31:16self-driving cars trucks robots shopping
31:19carts there will probably be a Cisco
31:21routing service on top of that that can
31:23calculate the best way to get object a
31:25from point A to B which is like some
31:28combination of handoffs between the
31:30bikes and the shopping carts and
31:32somebody's got to write the routing
31:34algorithms I've been actively looking
31:35for one of those so if you see one
31:37send it my way I think there will be a
31:38Cisco-class company that basically moves
31:41atoms from point A to point B how
31:42stuff moves like if you
31:46were the business school student
31:47designing FedEx now you would basically
31:49be riding on top of all of the things
31:51that can move themselves now you
31:52wouldn't be buying trucks and cars
31:54yourself right no I'd also go for the 2d
31:56versus 3d so if everybody knows the
31:58Fermi paradox it's if there are
32:01roughly a hundred billion to two hundred
32:02billion galaxies each one has roughly
32:04100 billion to 200 billion stars
32:05each one probably has at least one or
32:07two planets orbiting it why haven't we
32:10heard from anybody and it's
32:12really interesting there are a
32:13hundred different explanations for why
32:15it could be that we're about to implode
32:17it could be that they all imploded lots of
32:18different reasons my favorite one though
32:18actually relates to this which is you
32:20know what traveling interstellar it's
32:22not fun like you're just sitting in a
32:24spaceship like to go to Mars that takes
32:25nine months if you just kind of upload
32:27yourself to the machine and you just
32:29yeah why do you travel and why don't we
32:31travel here if telepresence is actually
32:34sufficient and I believe it will be in
32:35the future like that that would be a big
32:37change by telepresence you mean like
32:39what would the experience be like I mean
32:40it's probably AR/VR but you know without
32:43clunky goggles you know or a headset or
32:45something where it's almost as good it
32:47really is almost as good as being there
32:49and you have the bandwidth for it and
32:51you can kind of simulate experiences
32:53I mean if anybody's really has tried
32:54like a VR headset now that is already
32:56amazing I think right now but just if
32:58you follow the trajectory of that and
33:00where we'll be in 20 or 30 years like
33:01you don't have to travel across the
33:03country much less the galaxy what
33:05would be the biggest let's say
33:06second-order impact from if telepresence
33:08gets almost as good as actually
33:09physically being somewhere the real
33:11estate prices would crash I guess that's
33:13probably the biggest one because the
33:14location location location would be
33:16shrunk down to like location doesn't
33:18matter that's probably the biggest one yeah
33:19but it could also mean by the way to the
33:21extent that urbanization is
33:22driving inequality because if you're in
33:26a city you have access to economic
33:27opportunity and if you're not you don't
33:28that could be very positive from an
33:30inequality standpoint yeah you still
33:31need to grow food but maybe we'll figure
33:33that out too have robots do it or we'll be
33:34drinking Soylent by then yeah on the
33:38healthcare side you know we're already
33:39getting close imagine you wake up you
33:42pee that gets analyzed automatically you
33:45know you've got sensors on you just as
33:46part of how you live it gets to a point where
33:49living to a thousand sounds ridiculous
33:52right so I think that's pretty far
33:54off but just imagine what that would be
33:58like if you had to have a car that lasts
34:00even a hundred years or a thousand years
34:02what would that look like well you'd
34:03take good care of the car you don't
34:05smash into things you don't do silly
34:07things but then also as parts sort of
34:09wear out you replace them yeah and so
34:12you know with these new technologies
34:14like stem cells where you can make new
34:15organs and CRISPR we could modify those
34:18before they get made you can imagine
34:20replacing parts with not just a heart
34:22transplant from a donor but with your
34:25own heart what's your prognosis like
34:27what's your best guess for a time frame
34:28when would that mainstream it's hard to
34:30know I mean they can already grow
34:32beating heart tissue from stem cells
34:35created from your own blood yeah so you can
34:37do that right now and actually you could
34:39do that to test drugs on not just like
34:41how it behaves in a mouse or behaves in
34:43someone else but how it behaves in me
34:44yeah so that's already there now growing
34:46a whole heart that's a big deal yeah but
34:48then again like 20 to 25 years that's a long
34:50time so in the meantime yeah
34:52don't have a double serving of french
34:53fries tonight good thank you everybody
34:55thank you all so much yeah for coming on
34:57behalf of the whole firm I would like to
34:59thank all of you for coming out here
35:01today to discuss tech policy and wrestle
35:04with some of these issues thanks very
35:06much see you next year