Tell me about a time when you discovered that one of the processes you used in your organization was inefficient, and how you came up with a plan for improving it.
Hey everyone, welcome back to another Exponent program manager mock interview. My name is Kevin Wei, and I'm super excited to have Maddie on for today's program manager mock interview. We're going to be doing a behavioral question, but before we get into that, Maddie, do you want to introduce yourself to the audience?
Sure, yeah. Hi everyone, my name is Maddie, and I'm currently an engineering program manager at Google, similar to a TPgM role. Prior to Google I was at Facebook and Amazon, and I've been in the tech and TPgM space for about six years.
Great, super awesome. So here's a question: tell me about a time when you discovered that one of the processes you used in your organization was inefficient, and how you came up with a plan for improving it.
Sure, yes, absolutely. As a TPgM, I took over a pre-existing program, and the program was tasked with reducing fraud and abuse in the payments space. Since the program was pre-existing, I wanted to dive into all of the documentation, and as I was doing so I realized that the program was defining fraud in a way that I thought could be improved pretty drastically. For example, the program was defining fraud as something totally static: if you were to purchase two gift cards for a hundred dollars tomorrow from different stores, maybe you would be tagged as a fraudster. Having some experience in the fraud and abuse space, and currently working in trust and safety, I knew that fraud is really dynamic, and we could probably do better in terms of the definitions we were using. So as the new program owner, I tasked myself with really diving in to better define these foundational metrics for the program.
As far as the actions I took: I was thinking about how we could be more dynamic, and I'd had some exposure to ML models in the past, so I met with our machine learning team and we chatted about potentially building a supervised model. Since I had examples of what past fraudulent behavior had looked like, I was able to use those to teach the model we were building to start to identify fraud, as an alternative to these completely static definitions. Of course, training a model is not something that happens super quickly, so this was kind of a laborious process over time.
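The shift Maddie describes, from a static rule to a supervised model trained on labeled historical transactions, can be sketched as a toy example. Everything below (the features, the training data, the rule) is invented for illustration; the interview does not describe her team's actual system or features.

```python
# Toy contrast between a static fraud rule and a supervised model.
# All features, data, and thresholds are invented for illustration.
import math

def static_rule(txn):
    """Old-style static definition: flags only an exact pattern match."""
    return txn["gift_cards_bought"] >= 2 and txn["distinct_stores"] >= 2

def train_logistic(examples, labels, epochs=2000, lr=0.1):
    """Tiny logistic regression trained by gradient descent on
    historical labeled transactions (the 'past fraudulent behavior')."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted fraud probability
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def score(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Invented training rows: [amount_usd / 100, distinct_stores, txns_last_hour]
X = [[1.0, 2, 5], [0.5, 3, 8], [2.0, 4, 6],   # past confirmed fraud
     [0.3, 1, 1], [1.5, 1, 0], [0.2, 1, 2]]   # legitimate activity
y = [1, 1, 1, 0, 0, 0]

w, b = train_logistic(X, y)
# A novel pattern the static rule misses: one store, rapid-fire purchases.
print(round(score(w, b, [0.4, 1, 9]), 2))
```

The point of the sketch is the last line: the static rule would pass this transaction (only one store), while a model trained on labeled history can still score it as high-risk because it resembles past fraud along other dimensions.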
But through that initial training, and then also through securing some human reviewers (actual humans who were reviewing transactions through my organization) and feeding their labels back into the model, we were able to get to a confidence level that I was comfortable with. I was able to use that to actually launch the model and produce definitions that were much more dynamic and much more accurately reflected the state of the business. And not only did we have more accurate definitions, there were definitely some metrics wins. The big one was a year-over-year 30% decrease in fraudulent activity of the type the program was measuring. The second one might seem negative, but we actually increased the dollar value of fraud captured versus the prior year. I think this was a big win because it meant we had been missing this piece of the pie with the previous definitions. So, definitely not something that happened very quickly, but I'm happy with the results overall.

So essentially what I'm hearing is: initially there was a very static definition, I guess some rules that fraud had to exactly match in order to be flagged as fraud, and you drove this effort for the team to build a machine learning model such that fraud would be tagged dynamically, based on the data the team had from the past, instead of using a very static model or static definition.
Exactly, and the benefit of the ML model is that it would learn from continued fraud patterns and try to predict what future fraud behavior could look like. We're all familiar with scams, for example, and scams look a little bit different every day. Since we had a model, we could adapt to the changing fraud space, more accurately capture it, and then hopefully prevent it.

Got it. So I know that models need a lot of data. How were you able to get all this data for the model to be trained in production such that it was very robust?
Great question. Initially, because we had the definitions and the program was pre-existing, we did have those examples of fraud, so we knew with some confidence that the static definitions were actually capturing fraud; it's just that I thought they were also missing all of this other fraudulent activity. So we were able to start by feeding the model years of that data. The other piece, which I touched on a little bit, is that we had a pool of human reviewers, similar to content moderators for social media, who were able to look at live transactions, tag them as either fraudulent or not fraudulent, add details, feed those into the model in real time, and continue to iterate from there to get to a higher confidence level.
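That feedback loop, bootstrapping on historical labels and then folding live reviewer verdicts back into the model, might look roughly like this. This is a sketch under assumptions: the online-update rule, the data shapes, and the review queue are all invented, since the interview only describes the loop at a high level.

```python
# Sketch of the human-review feedback loop: reviewers tag live
# transactions and their verdicts are folded back into the model.
# The update rule and data shapes are invented for illustration.
import math

class FraudModel:
    """Minimal online logistic model, updated one labeled example at a time."""
    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, label):
        """One SGD step from a single human-reviewed transaction."""
        err = self.score(x) - label
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = FraudModel(n_features=2)

# Simulated stream of reviewer verdicts: (features, fraudulent? 1/0).
review_queue = [([1.0, 0.9], 1), ([0.1, 0.0], 0)] * 50

for features, verdict in review_queue:
    model.update(features, verdict)

# Score for a fraud-like transaction after the feedback has been absorbed.
print(round(model.score([0.9, 1.0]), 2))
```

The design point is that each reviewer verdict nudges the model immediately, so the definitions keep adapting as fraud patterns shift, rather than being retrained only on a fixed historical snapshot.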
When you were building this model, can you tell me more about how you implemented the solution?
Sure, yeah. I think the biggest consideration was obviously the team who was using these definitions. The folks who were actually actioning fraud, who had entire processes based off of these definitions, were very tied to the previous version, so I didn't want to change anything abruptly. Before implementing any of it, I first held a meeting where I explained why I thought it was important that we make changes here, and how I thought more accurately capturing fraud could benefit their teams and their businesses. At that kickoff meeting I got early initial sign-off from the team to move forward with what we were doing. From there, I was able to send regular updates, typically a weekly update, throughout the launch, because as I said it took quite a long time to launch, about four months, and I wanted to make sure the team was kept in the loop as we went through that process. Then, when it came to the actual final implementation where we fully rolled out, I again held a meeting where I explained what the model looked like, and I put together a quick one-pager with an overview for folks who weren't able to make it or needed a reference later. The other big piece was that we kept the old definitions live for a while; I think we sunset them over a period of maybe four weeks or so, so folks who had processes that depended on the old definitions, or who were just used to what we were doing before, were able to transition slowly into using the new definitions.

Got it.
It sounds like you communicated cross-functionally about this: you did a really good job of making sure the team was aligned on the new and the old definitions, you documented everything, and you made sure that everyone was on the same page. One other question I have: you mentioned that you looked at confidence levels in order to know whether the new model was good enough. I'm curious how you made that call, or who you worked with to make that call, on whether to proceed with this new model versus keeping the old definitions.
Yeah, so I think it's really tricky, particularly in the fraud space, because the actions we take on fraud include, for example, closing someone's Facebook account if we think they're up to suspicious activity, which causes a ton of customer friction. So we wanted to be super confident. The minimum I heard from the ML team directly was, I believe, 70%. But we also implemented, long term, a secondary human review for any major action. For example, for those account closures, or for any customer who's getting a message about potentially fraudulent activity, we get a second check from a human. So we were able to automate a lot of the lower-level, lower-friction actions, but because we weren't able to reach something like a 99% confidence level, which felt very unattainable, we implemented this second check, really a false-positive protection, for customers we were taking serious actions against.
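The tiered policy Maddie outlines, automate low-friction actions above the model's confidence threshold but require a second human check before any major action, can be sketched like this. The 70% threshold comes from her answer; the action names and routing logic are invented for the example.

```python
# Sketch of the tiered enforcement policy: low-friction actions are
# automated, but major actions (e.g. account closure) are routed
# through a second human review as false-positive protection.
# The 0.70 threshold is from the interview; action names are invented.

MODEL_THRESHOLD = 0.70          # minimum model confidence to act at all
MAJOR_ACTIONS = {"close_account", "notify_customer_of_fraud"}

def decide(fraud_score, proposed_action):
    """Return what the system should do for one flagged transaction."""
    if fraud_score < MODEL_THRESHOLD:
        return "no_action"
    if proposed_action in MAJOR_ACTIONS:
        # High customer friction: never fully automated; a human reviewer
        # double-checks before the action proceeds.
        return "queue_for_human_review"
    return "auto_" + proposed_action   # low-friction actions are automated

print(decide(0.65, "close_account"))        # no_action
print(decide(0.90, "close_account"))        # queue_for_human_review
print(decide(0.90, "rate_limit_payments"))  # auto_rate_limit_payments
```

The design choice here mirrors her reasoning: since a 99% confidence level was unattainable, the human second check compensates for model error exactly where a false positive would hurt customers most.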
Thanks, Maddie. I think this wraps up our mock interview for the program manager behavioral interview. Before we wrap up the entire video, here are my reactions. I think you did a really good job showing that whenever you come into a new space, you listen, you're open to new ideas, and you're very proactive about proposing those new ideas. You also work very well cross-functionally, and I think that's very important for a program manager, because you're not going to be the expert on the data, on the models, or on engineering, but you worked cross-functionally to make sure the ideas you had were validated and you had support to check whether they should be used or not. And you did a good job of documenting and communicating, just making sure everyone was aligned, so there were no surprises from leadership or anyone else. So overall, a super good mock interview. Personally, I'm a product manager at Coinbase, and I actually work in the payments risk, or anti-fraud, organization. We don't have a program manager; it would have been great if you were on our team. The fraud space is very dynamic, like you said, and I think what you proposed made a lot of sense, at least from our conversation here.

So before we wrap up the interview, did you have maybe one tip that you want to give viewers who are preparing for their program manager interview?
I think I have two: one is very general, and the second is specific to Google, if that's okay. The first one: mock interview as much as possible, which is the classic advice, but I've seen a lot of folks who have very detailed notes and write down exactly what they want to say, and that doesn't necessarily translate very well to speaking. So I'd say practice speaking out loud, in front of the mirror or with folks you know, as much as possible. The second thing is specific to Google; it's a little bit different from other tech companies where I've interviewed in that they love a lot of questions. So for a "tell me about a time" question, Google often likes it if you ask questions back, such as "Would you like me to talk about a pre-existing program or a new program?", and drill down that way, which I think is unique to Google. Those would be my top two tips.

Great.
And for the audience who's watching at home and preparing for their program manager mock interviews or program manager interviews: good luck, and we hope Exponent helps.

Thanks so much for watching. Don't forget to hit the like and subscribe buttons below to let us know this video is valuable for you, and of course check out hundreds more videos just like this on Exponent. Thanks for watching, and good luck on your upcoming interview!