00:00hi and welcome to the a16z podcast I'm
00:03Hannah and this episode is a Valentine's
00:05special where I talk with Kate Darling
00:07researcher at the MIT Media Lab all about our
00:10emotional relationships with robots we
00:13already know that we have an innate
00:14tendency to anthropomorphize robots but
00:16as we begin to share more and more
00:18spaces both social and private with
00:20these machines what does that actually
00:22mean for how we'll interact with them
00:24from our lighter sides from affection
00:26and love and emotional support to our
00:28darker sides what do these relationships
00:30teach us about ourselves our tendencies
00:32and our behaviors how will these
00:34relationships in turn change us and what
00:37models should we be thinking about as we
00:39develop these increasingly sophisticated
00:41relationships with our robots besides
00:44just that natural instinct that we have
00:46to anthropomorphize all sorts of things
00:48how is it different with robots robots
00:50are just so fascinating because we know
00:53rationally that they're machines that
00:55they're not alive but we treat them like
00:57living things with robots I think they
01:00speak to our primal brain even more than
01:03like a stuffed animal or some other
01:05object that we might anthropomorphize
01:08because they combine movement and
01:10physicality in this way that makes us
01:12automatically project intent onto them
01:15so that's why we project more onto them
01:17than like the Samsung monitor that I'm
01:19looking at back here because it looks
01:21like it has agency in the world yeah I
01:24think that tricks our brains there are a
01:26lot of studies that show that we respond
01:27differently to something in our physical
01:29space than just something on a screen so
01:31even though people will imbue everything
01:34with human-like qualities robots take it
01:36to a new level because of this physical
01:39movement people will do it even with
01:41very very simple robots just like the
01:43Roomba vacuum cleaner you know it's not
01:45a very compelling anthropomorphic robot
01:48and yet people will name them and feel
01:51bad for them when they get stuck and
01:52insist on getting the same one back
01:54if it gets broken and so if you take
01:59that and then you create something that
02:00is specifically designed to push those
02:03buttons in us then it gets really
02:05interesting so what is the reason why we
02:07should be aware of this tendency besides
02:09like this cute you know attachment to
02:14our Vectors or whatever our pet robots
02:16are why does it matter that we develop
02:18these relationships with them well I
02:19think it matters because right now
02:21robots are really moving into these new
02:23shared spaces I mean we've had robots
02:26for decades but they've been you know
02:28increasing efficiency and manufacturing
02:31contexts they haven't been sharing
02:32spaces with people and as we integrate
02:36these robots into these shared spaces
02:38it's really important to understand that
02:40people treat them differently than other
02:42devices and they don't treat them like
02:44toasters they treat them subconsciously
02:45like living things and that can lead to
02:48some almost comical challenges as we try
02:53and figure out how to treat these things
02:57as tools in contexts where they're
02:59meant to be tools but then at the same
03:02time kind of want to treat them
03:03differently when you talk about these
03:05giant manufacturing robots that do exist
03:08in plants and factories on the floor do
03:10we see that there so there's this
03:12company in Japan that has this you know
03:16standard assembly line for manufacturing
03:18and like a lot of companies they have
03:20people working alongside the robots on
03:23the assembly line and so the people will
03:25come in and they do these aerobics in
03:27the morning to warm up their bodies for
03:28the day and they have the robots do the
03:31aerobics with their arms with the people
03:34so that they'll be perceived more like
03:36colleagues and less as machines people
03:38are more accepting of the technology
03:40people enjoy working with it more and I
03:43think it's really important to
03:44acknowledge this emotional connection
03:47because you can harness it too so we
03:50know we have this tendency if we think
03:51about you know being aware of it and
03:54being able to foster it or diminish it
03:57what are some of the ways in which we
03:59negotiate those relationships my focus
04:01is trying to think about what that means
04:04ethically what that means in terms of
04:06maybe changing human behavior what
04:09challenges we might want to anticipate
04:12we're seeing a lot of interesting use
04:15cases for social robots that are
04:17specifically designed to get people to
04:19treat them like living things to develop
04:21an emotional connection one of my
04:24favorite current use cases for this is a
04:27replacement for animal therapy we have
04:30therapeutic robots that are used as medical
04:32devices with dementia patients or in
04:35nursing homes I saw an article about
04:37that recently and specifically with
04:39people with dementia yeah there's a baby
04:41seal that's very popular that's right
04:43that's the one I read about a lot of
04:44people think it's creepy like we're
04:46giving old people robots and like having
04:49them nurture something that's not alive
04:50but then you look at some of the
04:53advantages of having them have this
04:57experience when their lives have been
04:59reduced to being taken care of by other
05:00people and it's actually an important
05:03psychological experience for them to
05:04have they've been able to use these
05:06robots as alternatives to medication for
05:08calming distressed patients this isn't a
05:12replacement for human care and that's
05:14also not how it's being used it's really
05:16being used as a replacement for animal
05:18therapy where we can't use real animals
05:21because people will consistently treat
05:23them more like a living thing than a
05:25device what is the initial interaction
05:28like when you hold something like that
05:29is there a prelude that's necessary do
05:32you have to educate a little bit those
05:34patients or do they just put
05:36the robot seal in their arms the most
05:39clever robotic design doesn't require
05:41any prelude or anything because you will
05:44automatically respond to the cues the
05:47baby seal is very simple it just makes
05:49these little cute sounds and movements
05:51and responds to your touch and will purr
05:55a little bit and so it's very intuitive
05:57and it's also not trying to be a cat or
06:02anything that you would be more
06:03intimately familiar with because no one
06:05has actually held a baby seal before so
06:08it's much easier to suspend your
06:09disbelief and just let go with it
06:12so what are sort of some of the very
06:13broad umbrella concerns that we want to
06:17be thinking about as we're watching
06:19these interactions develop a lot of my
06:22work has been around empathy and
06:23violence towards robotic objects are we
06:25already being violent towards them
06:28sometimes there was this robot called
06:31hitchBOT that hitchhiked all the way
06:34across the entire country of Canada just
06:36relying on the kindness of strangers it
06:39was trying to do a road trip through the
06:41US it made it to Philadelphia and then it got
06:44vandalized beyond repair and I want to
06:46defend Philadelphia by the way New Jersey as
06:50you're telling me this story I'm already
06:52like imagining this alien life doing a
06:55little journey through the world I'm
06:58completely projecting this narrative
07:00onto it and that was the interesting
07:01thing about the story it wasn't that the
07:03robot got beat up but it was people's
07:05response to that that they were like
07:08empathizing with this robot that was
07:10just trying to like hitchhike around and
07:12that it got like people were so sad when
07:15this robot got destroyed poor little stranger in a
07:17strange land like there was news about
07:20this all over the world that hit
07:21international news and what do we learn
07:23from that why is it interesting that we
07:26empathize with them even more
07:27interesting to me is the question how
07:30does interacting with these very
07:31lifelike machines influence our behavior
07:35so could you use them therapeutically to
07:38you know to help children or prisoners
07:41or help improve people's behavior but
07:44then the flip side of that question is
07:46could it be desensitizing to people to
07:49be violent towards robotic objects that
07:51behave in a really lifelike way is that
07:54a healthy outlet for people's violent
07:57behavior to go and beat up robots that
07:59respond in a really lifelike way or is
08:02that kind of training our cruelty muscles
08:05isn't that sort of like a new version of
08:07almost the old video game argument I
08:09mean so how is it shifting so it's the
08:12exact same question which by the way I
08:15don't think we've ever really resolved
08:17we mostly kind of decided that people
08:21can probably compartmentalize but
08:24children we're not sure about and so we
08:27restrict very violent games to adults so
08:30we've kind of decided that you know we
08:32might want to worry about the kids but
08:34you know adults can probably handle it
08:37now robots I think make us need to re
08:42ask the question because they have this
08:45visceral physicality that we know from
08:48research people respond differently to
08:50than things on a screen and so there's a
08:53question of whether we can compartmentalize
08:55with robots specifically because they
08:58are so present in the world with us yes
09:01so do you think that's because it's
09:03almost a somatic relationship to them
09:06will it matter the same way when we are
09:09immersed in say virtual reality I mean
09:11as virtual reality gets more physical I
09:13think that the two worlds merge and so
09:17even though the answer could very well
09:20be people can still distinguish between
09:23what's fake and what's real and just
09:26because they like beat up their robot
09:29doesn't mean that they're gonna go and
09:30beat up a person or that their barrier
09:32to doing that is lower but we don't know
09:35how do you start looking at that what
09:36are the details that start giving you an
09:38inkling one way or the other
09:40the way that I think we're beginning to
09:43start to get at the question is just
09:45trying to figure out what those
09:47relationships look like at first so I've
09:50done some work on how do people's
09:52tendencies for empathy relate to their
09:54hesitation to hit a robot just to try
09:58and establish that people do empathize
10:01with the robots because we
10:02have to show that for sure
10:04first it's so interesting we all know
10:06what that feeling is but to show to
10:08demonstrate to model it and then see it
10:11and recognize it in our kind of research
10:14experimentation how do you actually you
10:16know categorize the response of empathy
10:20one of the things we did was have people
10:21come into the lab and smash robots with
10:24hammers and time how long they hesitated
10:28to smash the robot when we told them to
10:31smash it did you give them a framework
10:33around this experiment or just have them
10:36walk in and just start definitely they
10:37did not know that they were going to be
10:40asked to hit the robot okay and we did
10:42psychological empathy testing with them
10:44to try and establish a baseline for how
10:47they scored on empathic concern
10:49generally but also like we had a variety
10:53of conditions so what we were trying to
10:55look at was a difference for example in
10:57would people hesitate more if the robot
11:00had a name in a backstory versus if it
11:04was introduced to them as an object
11:06oh well presumably the name made a
11:10difference not a huge surprise and the
11:13robot's name is Frank
11:14yeah sorry Frank yeah we actually tried
11:19measuring like slight changes in the
11:23sweat on their skin oh my god she if
11:26they were more physically aroused
11:29unfortunately those sensors were really
11:32unreliable as I couldn't get reliable
11:33data from that we tried coding the
11:36facial expressions which was also
11:39difficult that's what I was wondering
11:40about because as one human reading
11:43another human you do have some sense
11:45right and I have to say like the videos
11:49of this experiment are much more
11:51compelling than just the hesitation data
11:53because people really did like one woman
11:57was like looking at this robot which was
12:00a very simple like looked kind of like a
12:02cockroach like it was just a thing that
12:04moved around like an insect and so this
12:07one woman is like holding the mallet and
12:09like steeling herself and she's
12:10muttering to herself it's just a bug
12:12it's just a bug so it was compelling but
12:17we just didn't find it easy
12:21enough to code them in a way that would
12:23be scientifically sound or reliable so we
12:26relied just on the timing of the
12:28hesitation other studies have measured
12:29people's brain waves while they watch
12:31videos of robots being tortured so there
12:34are a bunch of different ways that
12:36people have tried to get at this so when
12:38we start learning about our capacity for
12:41violence towards robots are you thinking
12:43about that in terms of what it teaches
12:45us back about humans or about what going
12:49forward the reason we need to know that
12:50we are actually learning more about
12:53human psychology as we watch people
12:55interact with these machines that don't
12:59communicate back to us in an authentic way
13:03so that's interesting but I think that
13:05it's mainly important because we're
13:07already facing some questions of
13:11regulating robots for example there's
13:14been a lot of moral panic around sex
13:16robots we already need to be answering
13:19the question do we want to allow this
13:21type of technology to exist and
13:23be sold do we want to only allow for
13:26it in therapeutic context do we want to
13:28ban it completely and the fact is we
13:31have no evidence to guide us in what we
13:34should be doing so it's all coming down
13:36to the same question of like is this
13:38desensitizing or is this enhancing
13:40basically yeah unfortunately a lot of
13:43the discussions are just fueled by you
13:46know superstition or moral panic or in
13:50this context a lot of it is science
13:53fiction in pop culture and our constant
13:56tendency to compare robots to humans and
13:58look at them as human replacements
14:01versus thinking a little bit more
14:03outside of the box and viewing them as
14:05something that's more supplemental to
14:06humans do we have a model for what that
14:09even might be I've been trying to argue
14:12that animals might be the better analogy
14:16to these machines that can sense and
14:20think and make autonomous decisions and
14:22learn and that we kind of treat like
14:24they're alive but we know that they're
14:26not actually alive or feel anything or
14:30have emotions or can make moral
14:32decisions right they are still
14:35controlled by humans property property
14:38their property and throughout history
14:40we've treated some animals as property
14:42as tools some animals we've turned into
14:47our companions and I think that that is
14:50how we're going to start integrating
14:52robotic technology as well we're going
14:54to be treating a lot of it like products
14:56and tools and property and some of it
14:58we're gonna become emotionally attached
15:00to and we might integrate in different
15:02ways but we definitely should stop
15:05thinking about robots as human
15:07replacements and start thinking about
15:09how to harness them as a partner that
15:14has a different skillset so while you're
15:16talking I'm thinking about the
15:17incredibly fraught space of how we
15:19relate to animals some people might
15:21argue you know that since that's such a
15:24gray area as it is and we're always
15:26feeling our way you know and that model
15:29is always changing it almost sounds like
15:31it just makes it Messier anyway right
15:33and I also think there's a way in which
15:37this like primal instinct of how to
15:39relate to animals do you think we have
15:41the same kind of seed for a primal
15:43relationship with robots there I think
15:45we do I think that ironically we're
15:49learning more about our relationship to
15:51animals through interacting with robots
15:54because we're realizing that we're
15:56complete hypocrites yeah
15:59we fancy ourselves as caring about you
16:04know the inner biological workings of
16:08the animals and whether animals can
16:09suffer and we actually don't care about
16:12any of that we care about what animals
16:14we relate to and a lot of that is
16:17cultural and emotional and a lot of that
16:19is based on which animals are cute for
16:21example in the United States we don't
16:25eat horses that's considered taboo
16:28whereas in a lot of parts of Europe
16:30people are like well horses and cows are
16:33both delicious why would you distinguish
16:35between the two there's no inherent
16:37biological reason to distinguish right
16:39and by the way we boil them into glue
16:41and yet culturally we feel this like
16:44bond with horses in the US as this
16:46majestic beast and it seems so wrong to
16:49us to eat them the history of animal
16:52rights is full of stories like this like
16:54the save the whales campaign didn't
16:57start until people recorded whales
17:00singing before that people did not care
17:03about whales but then once we heard that
17:05they can sing and make this beautiful
17:07music we were like oh we must save these
17:09beautiful creatures that we can now
17:11suddenly relate to because it needs to
17:13be about us kind of on some deep level
17:16this sad but important realization is
17:19that we relate to things that are like
17:21us and we can build robots that are like
17:24that and we are going to relate to those
17:26robots more than to other robots so
17:28it's a it's a principle almost of like
17:30design thinking then when you think
17:32about like well I want this robot to
17:35have a relationship to humans like
17:38cattle pulling a plow it gives you a
17:41sort of vision of a different spectrum
17:43of relationships for starters I mean
17:45we've even tried to design animals
17:47accordingly we've bred dogs to be cuter
17:51so that we relate more to them and the
17:54interesting thing about robots is that
17:55we have even more freedom to design them
17:58in compelling ways than we do with
18:00animals and it takes a while to breed
18:02yeah animals yeah generations yeah so I
18:05think we're gonna see the same types of
18:08manipulations of the robot breeds why
18:12would you go down on that spectrum to
18:14the lesser relationships when it's
18:17something that is performing a service
18:18to humans if it's not directly harmful
18:22to have people develop an emotional
18:24attachment it's probably not a bad idea
18:26to do but a lot of the potential for
18:31robots right now is in taking over tasks
18:34that are dirty dull and dangerous and so
18:37if we're using robots as tools to go do
18:40this thing it might make sense to design
18:42them in a way that's less compelling to
18:44people so that we don't feel bad for
18:46them when they're doing the dirty dull
18:47dangerous work there are contexts where
18:50it can be harmful so for example you
18:53have in the military you have soldiers
18:54who become emotionally attached to the
18:56robots that they work with and that can
18:58be anything from inefficient to
19:00dangerous because you don't want them
19:01hesitating for even a second to use
19:04these machines the way that they're
19:05intended to be used Oh like police dogs
19:07that's a great analogy if you become too
19:11attached to the thing that you're
19:14working with if it's intended to go into
19:17harm's way in your place for example
19:20which is a lot of how we're using robots
19:22these days bomb disposal units and stuff
19:25you don't want soldiers becoming
19:27emotionally affected by sending the
19:29robot into harm's way because that can
19:31mean they could risk their lives so
19:34it's really important to understand that
19:36these emotional connections we form with
19:38these machines can have real-world
19:40consequences another interesting area is
19:44responsibility for harm because it does
19:47get a lot of attention from policy
19:50makers and from the general public and
19:52with robots generally there's a lot of
19:53throwing up our hands like how can we
19:56possibly hold someone accountable for
19:58this harm if the robot did something no
20:01one could anticipate and I think we have
20:04a ton of history with animals where
20:08we have things that we've treated as
20:11property that can make autonomous
20:13unpredictable decisions that can cause
20:15harm so there's this whole body of
20:18legislation that we can look to
20:22the smorgasbord of different solutions
20:24we've had is really compelling I mean
20:27the Romans even had rules around you
20:30know if your oxen tramples the neighbor's
20:33field the neighbor might actually be
20:37able to appropriate your oxen or even
20:39kill your oxen we've had animal trials
20:43oh I talked about that in a podcast with
20:46Peter Leeson about the trials of the rats
20:48for decimating crops there's different
20:51ways even today that we like to assign
20:55responsibility for harm there's like the
20:57very pragmatic okay how do we compensate
21:00the victim of harm how do we hold the
21:03person who caused the harm accountable
21:05so that there's an incentive to not do
21:07it again right and a lot of that is done
21:09through civil liability okay
21:12there's also however criminal law that
21:14is kind of a primitive concept when you
21:20think about it there was just this case
21:22in India where an old man was stoned to
21:26death with bricks by monkeys who were
21:28intentionally flinging bricks and the
21:30family tried to get the police to do
21:34something about the monkeys and hold the
21:36monkeys criminally accountable for what
21:39happened just because of that human
21:42assigning of blame yes because it wasn't
21:45enough to just you know have some sort
21:49of monetary compensation they really
21:52wanted these monkeys to suffer a
21:54punishment for what they had done and I
21:57know it seems silly but we do sometimes
21:59have that tendency so it's interesting
22:01to think about ways that we might
22:04actually want to hold machines
22:05themselves accountable and ways that
22:08that's problematic as well so can you
22:10illustrate what that would look like
22:12with robots when we think about those
22:14different ways of assigning responsibility
22:17yeah so for example the way that we
22:19regulate pitbulls currently in some
22:22countries is really interesting Austria
22:24has decided there are some breeds of
22:26dogs that we are going to place much
22:29stricter requirements on than you know
22:33just the other dog breeds so you need to
22:36get what's basically the equivalent of a
22:39driver's license to walk these dogs they
22:42have to have special collars and they
22:45have to be registered and you could
22:47imagine for certain types of robots
22:48having a registry having requirements
22:51having a different legal accountability
22:54like strict liability versus oh did I
22:57intend to cause this harm or did I
22:59cause it through negligence the way that
23:01we distinguish for example between wild
23:04animals and pets if you have a tiger and
23:07the tiger kills the postal service
23:09worker that's gonna be your fault
23:11regardless of how careful you were with
23:13a tiger because we say having a tiger is
23:16just inherently dangerous it's almost
23:18the model is sort of developing
23:21different ideas around certain
23:22categories and groups of the way we
23:24relate to them depending on whether
23:27those relationships are based on then
23:29sort of our emotional narratives around
23:32it or an evidence base yeah no it becomes
23:34really important part of it is that we
23:38need to recognize that social robots
23:40could have an impact on people's
23:42behavior and that it's something that we
23:45might actually need to regulate one of
23:47the interesting conversations that's
23:49happening right now is around autonomous
23:51weapon systems and accountability for
23:54harm in settings of war where we have
23:57war crimes but they require
23:59intentionality and if a robot is
24:01committing a war crime then there's
24:04maybe not this moral accountability but
24:07wouldn't it obviously be whoever
24:08programmed and owned the robot because
24:11you need someone to have intentionally
24:14caused this rather than accidentally the
24:17thing about robots is that they can
24:20actually now make decisions based on the
24:23data that they gather that isn't a
24:25glitch in the code but is something that
24:28we didn't foresee happening
24:30we've used autonomous unpredictable
24:34agents as weapons in war previously for
24:37example the Soviets they trained dogs to
24:42run under tanks enemy tanks and they had
24:47explosives attached to them and they
24:49were meant to blow up the tanks and a
24:52bunch of things went wrong so first of
24:56all they had trained the dogs on their
24:59own tanks which means that the dogs
25:05would sometimes blow up their own tanks
25:08instead of the enemy tanks they didn't
25:11train the dogs to be able to deal with
25:14some of the noise on the battlefield and
25:16the shooting so the dogs got scared and
25:19would run back to their handlers with
25:22these explosives attached to them and
25:24the handlers had to end up shooting the
25:27dogs and you know we're not perfect at
25:29programming robots either there's a lot
25:32of things that can go wrong that don't
25:34necessarily they're not glitches in code
25:37right it's unanticipated consequences so
25:41when we're thinking about regulating
25:42things I think that's a pretty good
25:44analogy to to look at the history of how
25:47we've handled these things in the past
25:49and who we've held accountable the
25:51interesting thing that occurs to me is
25:53how do we both acknowledge our human
25:55emotional attachment and yet not let it
25:57direct us too much what's that balance
26:01like step one is probably awareness
26:03right but is it something we can manage
26:05and navigate or is it kind of beyond our
26:07control I think we struggle with that
26:11culturally as well because we have this
26:14Judeo-Christian distinction like we have
26:17this clear line between things that are
26:19alive and things that are not alive
26:21whereas in some other countries they
26:23don't necessarily make that distinction
26:25like in Japan they have this whole
26:27history of Shintoism and treating
26:29objects as things with souls and so it's
26:32I think easier for them to view robots
26:37as just another thing with a soul and
26:40they don't have this contradiction
26:42inside themselves of
26:44oh I'm treating this thing like a living
26:45thing but it's just a machine oh that's
26:47so fascinating cuz I would have thought
26:49it would be the other way if you think
26:50everything has a soul it's sort of
26:52harder to disentangle but you're saying
26:54you sort of are desensitized to it in a
26:57way or you're more used to viewing
26:59everything as connected but different
27:02and so you know you still face the same
27:05design challenges of how do you get
27:08people to treat robots like tools and
27:10settings where you don't want them to
27:12get emotionally attached to them so
27:13those design challenges still exist but
27:15I think as a society you're not also
27:17dealing with this contradiction of I
27:20want to treat this thing like a machine
27:22but I'm treating it differently right
27:23the sort of ethical wrappers around this
27:26that we need to be aware of when we're
27:28starting to introduce these different
27:30types of interactions as these
27:31relationships become more sophisticated
27:33thank you so much for joining us on the
27:35a16z podcast thanks for having me