00:04RACHEL BEEN: Hi, audience.
00:08I'm Rachel, Creative
Director of Material Design.
00:10This is Kunal and Phil,
Senior Designers on the team.
00:14Let's start out with why we're doing this. 00:16Why are we talking about this? 00:17Why is a design systems team concerned 00:19with machine learning?
00:21OK, let's do a quick primer for
those of you maybe unfamiliar
00:24with machine learning.
00:25I love using cats to talk
about ML, so humor me.
00:29So what is machine learning?
00:30It basically gives
computers the ability
00:32to make predictions
and solve problems
00:34without specific instructions.
00:36So by identifying
really complex patterns,
00:38machine learning can
be used for a variety
00:40of product experiences.
00:42Identifying music, suggesting responses in a chat 00:45app, identifying imagery.
00:49And products, as many
of you saw yesterday,
00:52like ML Kit and Firebase,
really make this technology
00:55available to developers
of any skill set.
00:59But the issue is this
technology is not effective
01:02if users can't
understand its benefits.
01:04So this is an example of one of the original ML Kit demos 01:08(sorry, ML Kit, if anyone from ML Kit is in the audience) 01:11showing some object detection patterns.
01:13Now, it's picking up the dog to some degree, 01:16but how beneficial is this experience to users 01:19seeing it on the front end?
01:22So the goal of
material design is
01:24to build the design system that
harnesses technology like this
01:27into a beautiful and
usable interface.
01:31And this well-designed interface
is what's useful to people.
01:35But a well-designed interface also 01:36allows you to implement the technology faster and provides 01:40assurance that your audience and your users 01:42will understand the interface 01:45and that it will work for them.
01:48One of the most important
things about a design system for 01:51machine learning is really the design solutions for when
01:54things do not go as planned.
01:56Because machine learning can
be fluid, and unpredictable,
01:59and augmented by users, these
situational design solutions
02:04must be considered for a
functional product experience.
02:09So there's no single design pattern that, 02:11as many of you probably know, 02:12encompasses the entire myriad of things 02:15that machine learning can do.
02:17This is an example of the
plethora of features that
02:20machine learning can power.
02:22We selected some of the most prevalent machine learning patterns: 02:26visual search patterns.
02:27What does that mean?
02:28Really it means
using your camera
02:30to search the world, instead of more traditional text input. 02:33Kunal and Phil are going to talk about these patterns shortly.
02:38But before we start, I want to
mention a few tactical pieces
02:41of information to make this
guidance more understandable.
02:45What are we providing?
02:46We're providing three visual
search pattern articles
02:48on material.io, which is the guidelines 02:50site for material design.
02:53There's also a great demo
app provided by the ML Kit
02:55team, available on GitHub, that is specific to material design 02:58and to these patterns, and that you can download.
03:02There are also really great demos from Adidas, Ikea,
03:05and Flutter that we'll talk
a little bit about later.
03:07And with that, I'm going
to hand it off to Kunal
03:09to start talking
about the patterns.
03:12KUNAL PATEL: All right.
03:16So as Rachel
mentioned, we're going
03:17to focus on two ML Kit
APIs today and walk
03:20through three patterns.
03:22As we were working through
the object detection
03:24and tracking API,
we realized there
were significant differences in the user experience for how a user 03:29would go through this flow 03:31with the streaming mode of the API, 03:33which uses a live camera feed, versus the static mode, 03:35which uses a provided image.
03:36So we've split that one API
into two separate design
03:39guidelines based on the
mode that you're using.
03:44And as we go
through these flows,
03:46for those of you who are
familiar with material design,
03:48you'll see a number of
familiar components.
03:50A top app bar, buttons, dialogs, 03:53and also some new elements that we've added, 03:55such as the reticle and object markers, 03:57to extend the experience to work for these visual search use cases.
04:03In addition, building on
our announcement last year
04:06of material theming, 04:07we wanted to make sure that
just like the rest of material
04:10design, these new visual search
experiences and the elements
04:13that we were adding can be
customized to match the look
04:15and feel of your application.
04:17What you're seeing
here is a sticker sheet
04:19of visual search elements for our material study, Shrine.
04:22And we'll be using this as an
example in all of our flows.
04:27So now I'm going to talk
about the first pattern we're
04:29going to walk through,
which is object
04:31detection in a live camera.
04:33You may be asking, what do
I mean by object detection?
04:36What does live camera mean?
04:38And why should I care about
either of those terms?
04:42So object detection is really the first step 04:44of a visual search journey, as Rachel was talking about. 04:48The advantage of searching by an image, 04:50or visually, is that if
users don't know what to type
04:53or say, they can
use an image instead
04:57to do that work for them.
04:59In addition, with the streaming
mode of the ML Kit object
05:02detection and tracking
API, they can do this
05:04without having to manually take a photo themselves. 05:07Just point at an object
and learn about it.
05:10Our guidelines for
this flow are going
05:12to cover tracking a single
prominent object in the user's camera view.
05:19So how does this
work technically?
05:22Users can detect an object on their device 05:26using the ML Kit object detection and tracking API.
05:29The API is going to crop
the camera frame just
05:33to the image area of
the detected object
05:36and send that along
to your own machine
05:38learning model
for classification
05:39and more information.
05:41Once you have results
available, those
05:43are sent back to the user.
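To ground that flow, here is a minimal Kotlin sketch of the streaming setup described above. It assumes the Firebase ML Kit object detection and tracking API roughly as documented at the time (FirebaseVisionObjectDetectorOptions, STREAM_MODE, and so on); classifyWithMyModel and showResults are hypothetical stand-ins for your own classification model and UI.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetectorOptions

// Configure the on-device detector for a live camera feed (STREAM_MODE).
// Without enableMultipleObjects(), only the most prominent object is tracked.
val options = FirebaseVisionObjectDetectorOptions.Builder()
    .setDetectorMode(FirebaseVisionObjectDetectorOptions.STREAM_MODE)
    .build()
val detector = FirebaseVision.getInstance().getOnDeviceObjectDetector(options)

fun onCameraFrame(frameBitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(frameBitmap)
    detector.processImage(image)
        .addOnSuccessListener { objects ->
            // With the prominent-object setting there is at most one result.
            val detected = objects.firstOrNull() ?: return@addOnSuccessListener
            // Crop the frame to the detected bounding box (clamp it to the
            // frame bounds in production code)...
            val box = detected.boundingBox
            val crop = Bitmap.createBitmap(
                frameBitmap, box.left, box.top, box.width(), box.height()
            )
            // ...and hand just that region to your own model for
            // classification / visual search (hypothetical calls).
            classifyWithMyModel(crop) { results -> showResults(results) }
        }
        .addOnFailureListener { /* fall back to the error messaging below */ }
}
```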
05:47When we were thinking about
designing these experiences,
05:50we separated the
user's experience
into these three phases: 05:53sensing, recognizing,
and communicating.
05:56Let's take a closer
look at each one.
06:02So in the sensing
phase, this is when
06:03users have opened the visual
search feature in your app,
06:07and the app has begun
looking for an object.
06:10This could be to learn
more about a plant
06:11in a garden, an item
at a museum, or a shoe
06:15that they want to purchase.
06:19For first time
users, we want to be
06:21sure to explain how
this feature works.
06:24Many users are familiar with using their smartphone camera, 06:29but not as a kind of remote control 06:32with which to learn about objects around them.
06:34So we recommend having
a light and fast
06:37one-screen on-boarding
experience that
06:39focuses on what the user
can do, what kinds of things
06:42they can detect, and using animation 06:45to show how moving their device helps to actually identify an object.
06:52After that on-boarding process,
when they're in the experience,
06:55we want to communicate
that the app is looking
06:58for objects in the camera.
07:00As I mentioned, our guidelines
for object detection
07:03in a live camera cover using
the prominent setting of ML Kit's API, 07:08which is going to look
for the largest object
07:10in the center of the camera.
07:12So we want to draw users'
attention to that area
07:14and let them know this is
where they have to search.
07:17We do that using this
new visual element
07:19that we call the reticle.
07:21It's animated to draw attention
to the center of the screen
07:24and to let the user know that it's actively looking for objects.
07:28In addition, we reinforce
what the reticle
07:30is doing with a tooltip at
the bottom of the screen that
07:33prompts the user to point
their camera at an object.
07:40A user may have trouble
detecting objects
07:43based on conditions in the
environment around them.
07:46Maybe there are no objects of the type
07:48that are recognized by your
app in their environment.
07:51Maybe it's too bright or
dark, or objects are too close
07:54together for the app to get
an accurate reading of the objects.
07:59What we recommend doing is
setting a detection timeout, 08:03so that if you notice that
users haven't detected something
08:05after a certain
amount of time, you
08:07can bring up an error
message through a banner,
08:09and direct them to help
to troubleshoot the issues
08:11that they may be having.
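As a rough illustration of that recommendation, here is a small Kotlin sketch of a detection timeout. The timeout value and the showBanner and openHelpScreen helpers are hypothetical and would be replaced by your own UI code.

```kotlin
import android.os.Handler
import android.os.Looper

// Show a troubleshooting banner if nothing has been detected after a preset
// timeout. The exact duration is an assumption to tune for your app.
private const val DETECTION_TIMEOUT_MS = 8_000L
private val timeoutHandler = Handler(Looper.getMainLooper())

fun startDetectionTimeout() {
    timeoutHandler.postDelayed({
        // No object detected in time: surface a banner with a link to Help.
        showBanner(
            message = "Having trouble finding anything to search",
            actionLabel = "Get help",
            onAction = { openHelpScreen() }
        )
    }, DETECTION_TIMEOUT_MS)
}

fun onObjectDetected() {
    // Cancel the pending timeout as soon as any object is detected.
    timeoutHandler.removeCallbacksAndMessages(null)
}
```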
08:15In the second phase
of the experience,
08:17when the user has
found an object
08:18that they want to
search for, the app
08:20needs to detect that
object and begin
08:22the visual search for them.
08:26When we do recognize an
object in the camera,
08:30we want to let the user
know that we've identified it 08:32by adding a border
to the bounding box
08:35coordinates that we're getting
back from the ML Kit API.
08:39You'll notice that the
frame here is rectangular.
08:41This matches the
coordinates that you're
08:44going to get back from
the API, and also lets
08:46the user know that
this is going to crop
08:48an area maybe a
little bit larger
08:50than the object
that's being shown.
08:56However, when we
detect an object,
08:58we don't want to necessarily
begin a search immediately.
09:01Users may be moving their device
around looking for objects,
09:04and we don't want
to start a search
09:05for every single thing we see.
09:07That's not going to align
to their expectation,
09:10and it's going to be very
expensive for your app to do.
09:12Instead, we need
some light signal
09:14to confirm that
users are actually
09:16interested in this object.
09:17That they want to search for it.
09:19And the way we do
that is by asking
09:20them to keep their camera
still for a moment.
09:23Sort of hover over that object
for a brief moment to let us
09:27know that they're interested
in searching for it.
09:30That amount of time that we wait before beginning the search
09:32can be preset by
your own application.
09:34So it can be customized.
09:36We want to communicate
that to the user
09:38by including a loader
within that center
09:40element, the reticle, so they
know how much longer they have
09:43to keep their device still.
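Here is a minimal sketch of that "hold still to confirm" behavior in Kotlin: the search only starts once the same tracked object has stayed in view for a preset dwell time, and the loader inside the reticle fills as that time elapses. The dwell duration, the reticle view, and beginVisualSearch are hypothetical names, not part of any library.

```kotlin
// Assumed, customizable dwell time before a search is triggered.
private const val CONFIRM_DWELL_MS = 1_500L

private var dwellTrackingId: Int? = null
private var dwellStartMs = 0L

fun onObjectTracked(trackingId: Int?, nowMs: Long) {
    if (trackingId == null || trackingId != dwellTrackingId) {
        // A new (or no) object under the reticle: restart the dwell timer.
        dwellTrackingId = trackingId
        dwellStartMs = nowMs
        reticle.setProgress(0f)
        return
    }
    val elapsed = nowMs - dwellStartMs
    // Drive the loader inside the reticle so users know how long to hold still.
    reticle.setProgress((elapsed.toFloat() / CONFIRM_DWELL_MS).coerceAtMost(1f))
    if (elapsed >= CONFIRM_DWELL_MS) {
        beginVisualSearch(trackingId)  // hypothetical: pause camera, start search
    }
}
```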
09:47Once they do indicate
that they want
us to search for this object, 09:50the first thing
we're going to do
09:52is pause that live camera feed.
09:54This is a strong
signal to the user
09:56that they can now
move their device
09:58to a more comfortable
position without losing
10:00track of the object.
10:01And the second thing we're going
to do is remove the reticle
10:04and replace it with an
indeterminate progress
10:06indicator to let them know that
we've started running a search
10:09and are looking
for results for this item.
10:14At this stage, one
thing that can go wrong,
10:17or that you want to
keep an eye out for,
10:19is how far the user
is from the object.
10:22So while an object may be
detected from a distance,
10:25you may want to set a minimum
size for detected objects
10:28in order to begin your search.
10:30And this is because
the image that's
10:33going to be used by your
own model to find results
10:36is based on what's
in the camera view.
10:38So if the object is very
small in the camera,
10:40it may be missing
details that are
10:42going to be helpful for
identifying that item.
10:44So we included a partial
border style for you
10:48to use for an object
that's detected,
10:50but is maybe too far away.
10:51And tips on how to message
and change the reticle style
10:54to let a user know
that they need
10:55to move closer before
we can actually
10:57search for this object.
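A simple way to express that minimum-size check is to compare the detected bounding box against the camera frame, as in the Kotlin sketch below. The 10% threshold is an assumption, and startDwellConfirmation, the reticle's partial border, and the tooltip are hypothetical app-side helpers.

```kotlin
import android.graphics.Rect

// Assumed minimum fraction of the frame the object must cover before searching.
private const val MIN_BOX_AREA_FRACTION = 0.10f

fun isObjectLargeEnough(box: Rect, frameWidth: Int, frameHeight: Int): Boolean {
    val boxArea = box.width().toFloat() * box.height()
    val frameArea = frameWidth.toFloat() * frameHeight
    return boxArea / frameArea >= MIN_BOX_AREA_FRACTION
}

fun onObjectDetected(box: Rect, frameWidth: Int, frameHeight: Int) {
    if (isObjectLargeEnough(box, frameWidth, frameHeight)) {
        startDwellConfirmation(box)     // hypothetical: proceed as usual
    } else {
        reticle.showPartialBorder(box)  // hypothetical partial border style
        tooltip.show("Move closer to search this item")
    }
}
```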
11:01In the last phase
of this experience,
11:03the app has results for
that object that it's
11:05ready to send back to the user.
11:06And the user needs
to be able to focus
11:08on them to complete their task.
11:12The first thing we
want to do is use
11:14a thumbnail of that image
of the detected object
11:17and present it above
the results space.
11:20This serves two
important functions.
11:22One, it's a bridge between
the recognizing phase
11:24and the communicating phase.
11:26It confirms what the user
was looking to search for
and provides it alongside the results 11:31for easy comparison with any information 11:33that you're returning to them.
11:37The second thing is that
we're using a modal bottom
11:39sheet to present the results.
11:41And this has a
couple advantages.
11:43One, modal bottom sheets
come with this layer that
11:45separates the sheet from
the rest of the app UI.
11:50This scrim darkens the
camera view behind and brings
11:53more emphasis to the results.
11:55And also provides a way for
users to return to the camera.
12:00Users can also
return to the camera
12:01to conduct another search
by tapping the header
12:03of the results sheet.
12:07In terms of the
layout for results,
we really want this to be based
on the needs of your app.
12:13What is the specific kind
of content that you're
12:15presenting to users?
12:17How many results
are you returning?
12:19And what is your
confidence in the results?
12:21If you're returning
multiple search results
12:23to a user based on
a visual search,
12:25we recommend using
a list or grid
12:27format to show those results.
12:30And if you just have
a single result,
12:31or a high-confidence result
that you want to promote,
12:35customize that
layout to your needs.
12:40If your results are
mostly low-confidence,
12:43which is to say that
your model doesn't
12:45believe the result is very similar to the detected object.
12:49It doesn't have a lot of
faith in its prediction.
12:52Let users know that they
may want to search again.
12:56You want to make
sure that you're also
12:57setting a confidence
threshold for what results
13:00you're presenting to users.
13:01And if the results are
kind of borderline,
13:04let users know that
they can search again.
13:06And maybe provide
some tips on what
13:07they could do to improve
in case the results don't
13:10meet their expectations.
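One way to act on those recommendations is to filter and route results by confidence before choosing a layout, as in this hedged Kotlin sketch. The SearchResult type, the thresholds, and the show* helpers are all assumptions standing in for your own model's output and your own UI.

```kotlin
// Assumes your own model returns results with a confidence score in [0, 1].
data class SearchResult(val title: String, val confidence: Float)

private const val MIN_CONFIDENCE = 0.3f   // below this, don't show at all (assumed)
private const val HIGH_CONFIDENCE = 0.8f  // above this, promote a single result (assumed)

fun presentResults(raw: List<SearchResult>) {
    val results = raw.filter { it.confidence >= MIN_CONFIDENCE }
        .sortedByDescending { it.confidence }
    when {
        // Nothing crosses the threshold: suggest searching again, with tips.
        results.isEmpty() -> showTryAgainMessage()
        // One clearly strong match: promote it in a customized single-result layout.
        results.first().confidence >= HIGH_CONFIDENCE ->
            showSingleResultLayout(results.first())
        // Otherwise show a list or grid of candidate results.
        else -> showResultList(results)
    }
}
```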
13:13If no results are found, maybe
because the user was too far
13:17away from the object,
didn't capture it
13:19from the angle that was expected, 13:22or the environment affected how bright or dark the object appears, 13:26you want to make sure
you have a way to message
13:28these error cases
and direct the user
13:30to Help for more information.
13:32A lot of these
issues are not going
13:34to be things that you can detect or help 13:36a user with in your app; they are things 13:38they are going to need to change about their environment.
13:40Maybe it's turning
on their camera flash
13:42to increase the brightness
of the object that's being scanned, 13:45or changing their position
to try photographing
13:47the item from another angle.
13:51So I mentioned
that we were going
13:52to look at how these
experiences could be customized.
13:56So let's take a look at how
this live object detection
13:58flow can be customized
to match your app.
14:03So we're going to use
Shrine, which is our material
14:05study for a retail app.
14:07It has a very minimal and clean aesthetic.
14:10It uses these angled
corners for key elements
14:12that's based on the geometric
logo that Shrine has.
14:16And it has this light
pink brand color.
14:20So when thinking about how this gets applied to a visual search 14:24experience, we wanted it to feel seamless.
14:25We want these to blend in with
the rest of your application
14:28and carry over key elements
of color, typography, and shape.
14:34If we take a closer look at
some of the new elements we
14:37introduced here, like the
reticle, in our baseline flow
14:40it has this very rounded shape.
14:42But since Shrine has a more
geometric approach to the app
14:45and to key elements, we've
gone with a diamond shape
14:47instead to reflect
the brand and fit
14:49in with the rest of the
key elements of the app.
14:52Our tooltip, which
in the baseline form
14:54uses Roboto and a
black container,
14:57now uses Rubik, which is Shrine's font, 15:00and the light pink
background color that's
15:02used in the rest of the app.
15:06So now that we've walked
through how detecting
15:08objects from a
live camera works,
15:10and how it can be
customized to your app,
15:12I'm going to turn
it over to Phil
15:13to talk about how this
works in a static image.
15:22If you like that, wait till
you see what's coming up next.
15:26So object detection
in a static image
15:28allows users to select
an image on their device.
15:31And then detect up to five
objects located inside it.
15:34This feature is
really useful for when
15:36users would like to analyze
an image that they captured earlier, 15:40or if they are not able
to detect something
15:43right there on the spot.
15:44So one of the ways that
this can be integrated
15:46is in a search flow.
15:48Like what we see
here in this example
15:50where a user is looking
for plants in a photo.
15:55So let's break down
the technical flow.
15:58This is pretty similar
to the live camera flow
16:00that Kunal shared earlier,
just that here, our source is different.
16:03We're using a static image
instead of a live camera.
16:07So first, the objects
are detected on device.
16:11Then the objects are
classified with your own model.
16:15And finally, the results are
presented back to the user.
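Here is the same flow as a minimal Kotlin sketch for the static-image case, assuming the Firebase ML Kit API of that era: single-image mode plus multiple-object detection. As before, classifyWithMyModel and the UI helpers (showScrimAndLoader, addObjectMarkerAndCard, showErrorBanner) are hypothetical stand-ins.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetectorOptions

// Single-image mode with multiple objects enabled (the API detects up to five).
val staticOptions = FirebaseVisionObjectDetectorOptions.Builder()
    .setDetectorMode(FirebaseVisionObjectDetectorOptions.SINGLE_IMAGE_MODE)
    .enableMultipleObjects()
    .build()
val staticDetector = FirebaseVision.getInstance().getOnDeviceObjectDetector(staticOptions)

fun searchStaticImage(imageBitmap: Bitmap) {
    showScrimAndLoader()  // hypothetical: translucent scrim + indeterminate loader
    staticDetector.processImage(FirebaseVisionImage.fromBitmap(imageBitmap))
        .addOnSuccessListener { objects ->
            objects.forEach { detected ->
                // Crop each detected region and classify it with your own model.
                val box = detected.boundingBox
                val crop = Bitmap.createBitmap(
                    imageBitmap, box.left, box.top, box.width(), box.height()
                )
                classifyWithMyModel(crop) { result ->
                    addObjectMarkerAndCard(box, result)  // hypothetical UI calls
                }
            }
        }
        .addOnFailureListener { showErrorBanner() }
}
```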
16:19Now, as before,
we split this flow
16:20into three distinct phases.
16:22And these should probably look
familiar to you at this point.
16:25So here we have
Input, where the user
16:28selects an image to search.
16:30Recognize, where we
wait for the objects
16:31to be detected and identified.
16:33And Communicate, where
we review the results
16:35and complete the task.
16:37So in the Input phase,
we're introducing the flow
16:39to the user, and we ask them
to provide an image to search.
16:42When the user opens a
feature for the first time,
16:45it's important to explain how it
works so that there's a better
16:47chance for a successful search.
16:49And the best way to
do this is, again,
16:50with a simple
on-boarding screen.
16:52So as Kunal had
mentioned earlier,
16:54we're limiting
this to one screen
16:57and providing a
short explanation
16:58for how this feature works.
16:59So it's important
here that we don't
17:01want to use this moment to go
through every possible error
17:04that a user can encounter.
17:05Instead, we're focusing
on the user interactions.
17:07What does the user
need to do to get
17:09to the information they want?
17:11And once we get through
the on-boarding screen
17:13and into the image
selection screen,
17:15we recommend using
the operating system's
17:16native selection screens
so that users are already
17:19familiar with how
to select an image.
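For reference, launching the OS-native picker is a one-intent affair on Android; a minimal sketch is below. The request code is an arbitrary value for this example.

```kotlin
import android.app.Activity
import android.content.Intent

const val REQUEST_PICK_IMAGE = 1001  // arbitrary request code for this sketch

// Launch the system's own image picker so users see a selection screen
// they already know how to use.
fun launchImagePicker(activity: Activity) {
    val intent = Intent(Intent.ACTION_GET_CONTENT).apply {
        type = "image/*"
    }
    activity.startActivityForResult(intent, REQUEST_PICK_IMAGE)
}

// In onActivityResult, the selected image URI arrives as data?.data and can be
// passed along to the static-image detection flow sketched earlier.
```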
17:22So that's the Input phase done.
17:23Now we move on to
the Recognize phase,
17:25where we enter a
transition state
17:26while the user is waiting
for objects in the image
17:29to be detected and identified.
17:31In this phase, we're using
an indeterminate loader
17:34paired with a tooltip to
clearly communicate to the user
17:37that they should be
expecting a short delay.
17:39And in addition, we're also
using a translucent scrim
17:42on top of the image.
17:42So this does two things for us.
17:44The first is that it helps
to obscure the image slightly
17:46so that users know
the image is not quite
17:48ready to interact with yet.
17:50The second is that it helps
provide adequate contrast
17:54for the loader and the
tooltip to be visible on top.
17:57And sometimes
certain factors can
17:59affect whether an image is
suitable for object detection.
18:02These are things like
poor image quality,
18:04the object in the image
being too small, low contrast
18:08between an object
and its background,
18:10or an object being shown
from an unrecognizable angle.
18:13Other times, the issue might not be with the image itself, but rather that the user lost
connection to the network.
18:18So if a user
encounters errors,
18:21it's important to
anticipate them,
18:23and to facilitate a smooth
experience by explaining
18:26the issue in a banner.
18:28And giving the user an
opportunity to try a new image.
18:30You can also include a way
for the user to learn more
18:33in a dedicated help section.
18:36Finally, once we have
the objects detected,
18:39we move on to the last
phase, Communicate.
18:41So in this part of
the flow, we show
18:42the user which objects
have been detected,
18:46and give them the opportunity
to inspect those results.
18:49So the way that we
identify detected
18:51objects is through these
cute little object markers.
18:54They should be placed in the
center of a detected object's
18:56bounding box, and they're
elevated with a shadow
18:58to make sure that they're
visible against an image.
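Positioning those markers is mostly a matter of taking the center of each bounding box, as in this small Kotlin sketch. The marker view itself (its shape, elevation, and the addMarkerView helper) is a hypothetical custom view in your app, and you would still need to translate from image coordinates to on-screen view coordinates.

```kotlin
import android.graphics.PointF
import android.graphics.Rect

// Center point of a detected object's bounding box, in image coordinates.
fun markerCenterFor(box: Rect): PointF =
    PointF(box.exactCenterX(), box.exactCenterY())

fun addMarkers(boxes: List<Rect>) {
    boxes.forEach { box ->
        val center = markerCenterFor(box)
        // Hypothetical helper that places an elevated marker view at this point.
        addMarkerView(centerInImage = center)
    }
}
```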
19:00So remember how
earlier Kunal talked
19:02about using a thumbnail
in the results
19:04view of the live camera mode?
19:05This is conceptually doing
the same thing, right.
19:07We're helping the user to
compare the object in the image
19:11to the object in the results.
19:15And in addition to
the object markers,
19:16we're giving the users
a preview of the results
19:18through these little
mini cards at the bottom.
19:21We place them in this
horizontally scrolling carousel
19:24at the bottom of the screen so that it's easier for users to browse.
19:29And finally, how do we encourage
users to explore and interact with the results? 19:34With a little something
called the power of design.
19:38The first is through
the motion transition.
19:41So the way that these elements
appear on screen is staggered,
19:45and that helps to
demonstrate that there
19:47are multiple results in view.
19:49And second, the
tooltip at the bottom
19:52allows us to use language
to prompt the user
19:55to explore the results.
19:56And finally at the
end, notice how
19:59as the carousel is
being scrolled through,
20:01each marker scales up in size as its corresponding card comes into view.
20:07So this helps the user to draw
a relationship between each card
20:10and its matching dot.
20:13And then tapping each
card or the object marker
20:16brings up more details about
it in the bottom sheet.
20:19Same as what Kunal showed us
earlier in the live camera flow.
20:25And now, again, when we talk
about errors and issues,
20:28it's important to account
for result confidence.
20:30If a search returns
a result with only
20:32low-confidence scores,
you can let the user know
20:34at the bottom of the list itself
with links to search again
20:38and tips to improve
their search.
20:39And aside from
low-confidence results,
20:41a search can just fail
and return without matches
20:44for several reasons.
20:45Like if the object isn't
in a known set of objects,
20:48or if the image is low quality.
20:49So when this happens we
recommend displaying a banner
20:52that guides users
to a help section
20:55for more information about
how to improve their search.
20:59OK, now in terms of
theming, let's quickly
21:02talk about how we can take
these baseline patterns
21:04and customize them to
express your brand.
21:09Once again, we're going to
use the example of Shrine
21:11that Kunal shared earlier.
21:12So here's an overview of
what those key phases look
21:15like in these screens.
21:16The user might tap on the image
search icon in the top bar,
21:20select an image
from their device,
21:21and then explore the results
in the following screen.
21:25Like we mentioned,
Shrine's visual language
21:27is all about using those
angular shapes, right.
21:29So we've expressed that here
through the object markers,
21:32where we turn them from
circles into diamonds,
21:34and through the cut
corners of the cards.
21:36And, of course, we've
transferred the typography
21:39and the color scale of Shrine
into the card title and shape.
21:46So that's object detection in
a static image in a nutshell.
21:49So, now I want to welcome
Kunal back to the stage
21:52to walk you through the last
experience, barcode scanning.
21:59KUNAL PATEL: Thanks, Phil.
22:04So barcodes are an easy
and convenient format
22:06for passing information from
the real world to your device.
22:11ML Kit's barcode scanning API
reads most standard barcode
22:15formats and provides an
easy way for your own app
22:18to be able to recognize barcodes
without users having to open up
22:21a separate application.
22:22And this is a great
way to either have
22:25users be able to search
in a different way
22:28or to automatically input
information by scanning a code,
22:31rather than by manually
typing things in.
22:36So how this works, technically,
is the ML Kit barcode
22:39scanning API can detect
most common one-dimensional
22:43and two-dimensional
barcode formats
22:45and then read the value.
22:46And if that value is
a string, like an ID that
22:49needs to be looked up,
you can send it off
22:51to your own database to get
results back and present them to the user.
22:55But one cool feature of
the barcode scanning API
22:57is that if the barcode's
value is in one
23:00of several common structured
data formats for contact
23:04information, event
details, things
23:07like that, the API
can automatically
23:09parse that structured data
and let you present it
23:12immediately to users without
you having to do any extra work.
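Here is a rough Kotlin sketch of that branching, assuming the Firebase ML Kit barcode scanning API of the time (FirebaseVisionBarcode and its value types). lookUpInMyDatabase and the show* helpers are hypothetical app-side functions.

```kotlin
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcode
import com.google.firebase.ml.vision.common.FirebaseVisionImage

val barcodeDetector = FirebaseVision.getInstance().visionBarcodeDetector

fun scanFrame(image: FirebaseVisionImage) {
    barcodeDetector.detectInImage(image)
        .addOnSuccessListener { barcodes ->
            val barcode = barcodes.firstOrNull() ?: return@addOnSuccessListener
            when (barcode.valueType) {
                // Structured formats are parsed by the API for you...
                FirebaseVisionBarcode.TYPE_CONTACT_INFO ->
                    showContactFields(barcode.contactInfo)
                FirebaseVisionBarcode.TYPE_CALENDAR_EVENT ->
                    showEventFields(barcode.calendarEvent)
                // ...anything else is treated as a raw value to look up yourself.
                else -> lookUpInMyDatabase(barcode.rawValue) { result ->
                    showResultSheet(result)
                }
            }
        }
}
```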
23:18So we thought about this
experience in the same three
23:20phases that we used for the
other live camera experience, 23:23the object detection flow.
23:24So I'm going to try to focus on
what's unique to barcodes here.
23:29So in the sensing phase,
the user has opened the feature, 23:31and the app has begun
looking for barcodes.
23:35We want to do that same
type of on-boarding, 23:40focused on the motion that users might need to make to get a
barcode into view to scan.
23:47And once they're in the
experience, instead of
23:50the reticle that we used in
the live object detection flow,
23:53we have what we call a
barcode frame.
23:56This barcode frame is also
at the center of the screen,
23:58but it's setting the area
that the barcode will
24:00be automatically
scanned in once entered.
24:03It provides a
more prominent area
24:05for users to place
the barcode to scan.
24:08And we're also animating
it to draw attention
24:10to the center of the screen.
24:12One last thing to note
is that the aspect ratio of
24:15this barcode frame
can be adjusted
24:16to match your own application.
24:18So if you're only
looking to scan QR codes,
24:21then this frame can take
on a more square shape
24:24to provide another
hint or cue to the user
24:26that those are the
types of barcodes
24:27that they should be looking for.
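In code, restricting the detector to QR codes and matching the frame's aspect ratio to that choice might look like the sketch below, assuming the Firebase ML Kit barcode options of the time. The barcodeFrame view and its setAspectRatio call are hypothetical.

```kotlin
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcode
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcodeDetectorOptions

// Only look for QR codes, which also speeds up detection.
val qrOnlyOptions = FirebaseVisionBarcodeDetectorOptions.Builder()
    .setBarcodeFormats(FirebaseVisionBarcode.FORMAT_QR_CODE)
    .build()
val qrDetector = FirebaseVision.getInstance().getVisionBarcodeDetector(qrOnlyOptions)

fun configureFrameForQr() {
    // QR codes are square, so hint at that with a 1:1 frame; a wider ratio
    // (an assumed value) would suit one-dimensional formats better.
    barcodeFrame.setAspectRatio(widthToHeight = 1f)  // hypothetical frame view
}
```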
24:31If they have any difficulties
with detecting barcodes, as we 24:35covered in our previous flows, 24:38we want to have the same kinds of error cases and error styles
24:41for the barcode frame and
for the rest of the app
24:43that we mentioned in
the earlier flows.
24:47In the Recognition phase,
the user has scanned a barcode,
24:50the app has read its value,
and it's loading results.
24:56So similar to what we talked
about in object detection.
25:00For most simple
barcode formats, you
25:02shouldn't need to really set
a high minimum detection size.
25:06Most formats are
graphically simple 25:09and should be able to be read
pretty quickly once they're
25:12in that barcode frame area.
25:13But more complex
formats, such as PDF 417,
25:17have a lot of detail that
will need a higher quality
25:20image in order to be
accurately read by the barcode scanning API.
25:25For these types of formats,
we recommend setting
a minimum detection size, 25:28and using this
partial fill style
25:31for the barcode frame and a
tooltip message to let the user
25:34know that they need to move
closer in order to get a higher
25:36quality image to be scanned.
25:41If you need to send
information about the barcode
25:44off to another
database or other part
25:46of your app for
processing, we want
25:48to communicate any
loading time to the user
25:50by turning the barcode frame
into an indeterminate progress indicator.
25:54If the value can be
read immediately,
25:56you can skip straight
to showing results.
26:01So when we do have results
available to users,
26:03there are a couple of unique
things about displaying results
26:06that we haven't covered
in our previous flows.
26:10The first, as I mentioned
in the technical flow
26:12demonstration, is that the
barcode scanning API can read
26:17both structured data and what we'll
call unstructured data, or data
26:20that you need to
look up, and that
26:21may have custom information.
26:24For structured data
that's in key value pairs,
26:27text fields provide
a really great format
26:29for representing
this information.
26:32The key and value
kind of relationship
26:34matches really well to a text
field's label and input areas,
26:38and it gives you room to
present actions to copy,
26:40and also a consistent
format for users
26:42to scan the different
types of information
26:45you may have available.
26:47For more custom
information that you're
26:48looking up and returning,
customize the layout
26:51of the container that you're
using to present the results.
26:56Another difference
with barcode scanning
26:58is that the types of
tasks users are doing
27:00may be very different.
27:02For cases where users are going
to be scanning multiple items,
let's say they're in an
e-commerce app, and maybe doing
27:08price comparison, and
looking up multiple items.
27:11We recommend displaying
the barcode results
in a modal bottom sheet
so that users can easily
27:16return to the camera and
scan another barcode.
27:22But for tasks like completing a
form, like scanning a gift card
27:26and getting its balance
added to your account,
27:28displaying that information over
the barcode scanning feature
27:30doesn't make a lot of
sense for the user.
27:32It's actually creating
more work for them.
27:35So we recommend, for
these types of flows
27:37where you're
completing data entry,
27:39returning a
user automatically
27:40back to the previous screen with
the appropriate fields filled in.
27:44We want to do that work
for them when we can.
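On Android, "doing that work for them" typically means returning the scanned value to the previous screen as an activity result, as in this minimal sketch. The extra key is a hypothetical name for your own app; the calling form screen would read it and fill in the field.

```kotlin
import android.app.Activity
import android.content.Intent

const val EXTRA_SCANNED_VALUE = "extra_scanned_value"  // hypothetical key name

// Return the scanned value (for example a gift card number) to the previous
// screen and close the scanning feature, instead of showing results over the camera.
fun returnScannedValue(activity: Activity, rawValue: String?) {
    val data = Intent().putExtra(EXTRA_SCANNED_VALUE, rawValue)
    activity.setResult(Activity.RESULT_OK, data)
    activity.finish()  // back to the form, which reads the extra and fills the field
}
```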
27:48A number of tasks may
fall somewhere in between.
27:52For reviewing contact
information, for example,
27:55you may want to
edit someone's name,
27:57add additional notes, before
you save that information
27:59and exit the barcode feature.
28:01So we recommend bringing
those results up
28:04in a modal bottom sheet
or a full screen dialog,
28:06so that if a user needs to
go back and change anything,
28:10they have this
lightweight way
28:11of editing that information
before saving and exiting
28:13the feature entirely.
28:17So as we did in the
other flows, I'll
28:19quickly share an example of
how barcode scanning can be
28:22customized to match your app.
28:26So as we're seeing
here in the flow,
28:27we're bringing in some of
Shrine's colors and typography.
28:31One thing to call out
here is that the scrim that
28:34separates the area that can
be scanned for the barcode
28:38from the rest of
the camera view,
28:40is using Shrine's
text color instead
28:42of pure black for a slightly
more branded experience.
28:47In addition, if we take a closer
look at the barcode frame, where
28:50we usually have these
slightly rounded corners,
28:53we're using sharp
corners to reflect
28:56the sharp cuts in the logo.
28:59The same typography
and color choices
29:01for the tooltip that we
discussed in the previous flows
29:03carry through here as well.
29:07So we've gone through
three different patterns
29:10in a lot of detail, thrown
a lot of information
29:12and micro-interaction
decisions at you.
29:14But hopefully you
noticed there were
29:15some common themes across
all three of these flows.
29:19I'm going to turn it back over
to Phil to talk about some
29:21of the design principles
that we used to guide 29:23our decision-making process.
29:30PHILLIPE CAO: Thanks, Kunal.
29:31So now that we're familiar
with these three experiences,
29:33why don't we zoom
out for a second
29:35and walk through the
core design principles
29:36that you should keep
in mind as you're
29:38implementing these patterns
into your products.
29:40Our first principle
is to make the camera
29:42UI minimal and meaningful.
29:44We want to make sure that the
central means of input, which
29:46is the camera, is unobscured.
29:48And when things do
have to be overlaid
29:50on top, like a reticle,
or an object marker,
29:53we're making sure that they're
legible against any kind
29:55of image, light or dark.
29:58Our second principle is to keep
users informed at every moment.
30:01These new patterns can be
pretty unfamiliar to users,
30:04and explaining what's
going on at every step
30:05is really important to
ensure a smooth experience.
30:09So first, really rely
on these design phases
30:11that we've shown you to
help organize your flows.
30:14And use multiple methods
like language, motion,
30:17and components to implicitly
and explicitly communicate
30:20to your users
what's happening now
30:22and what they
should expect later.
30:24And finally, outside
of the main flow,
30:26introduce users to the
essentials of your app
30:28through an on-boarding
experience.
30:30And give them a persistent way
to learn more about the feature
30:33through a dedicated
help section.
30:34Our last principle
is to anticipate
30:36opportunities for issues.
30:37This is what Rachel was talking
about at the beginning,
30:40about designing for the
fringes of these use cases.
30:43These flows are prone to errors
from a variety of factors,
30:47and it could be really
frustrating for your user
30:49if you don't take these
factors into account.
30:51So before designing
these flows, before even
30:53getting into the
design side, really
30:55test and learn the
model's boundaries.
30:57What is it good at doing?
30:58What is it bad at doing?
30:59And what common errors
might you encounter
31:01because of those boundaries?
31:03Also, always account for
environmental conditions.
31:06So most apps wouldn't otherwise
care how bright or dark
31:09a user's environment is, but
for something like visual search
31:12that's really important, right.
31:14So these are also
common issues that
31:16are applicable to
all kinds of users,
31:18regardless of their
technical familiarity
31:20or how good their device is.
31:23And finally, adapt your
design based on the confidence of your results.
31:28So this is less about explicitly
communicating the confidence
31:31at every step of the way.
31:32But more about
accommodating when you do
31:35have suboptimal
results so that users
31:37have a better understanding
of where their information is coming from.
31:42So to recap, here are our three
high level design principles.
31:45Make your camera UI
minimal and meaningful.
31:48Keep users informed
at every moment.
31:50And anticipate
opportunities for issues.
31:53And now to close things
off, I'm going to throw it
31:55all the way back to Rachel.
31:58RACHEL BEEN: OK, I'm back.
31:59Let's close this out.
32:02So really quick,
I'm going to talk
32:04about some collaborations
we did that I
32:05mentioned in the beginning.
32:08We worked with Ikea and Adidas.
32:09We worked with these two
early-access partners
32:11to really look at
what these flows would
32:14look like in actual products.
32:15And most specifically,
using real training data.
32:18It was really fantastic
to work with these teams,
32:20not only to get feedback,
but to really see how
32:23these teams approached theming.
32:26How they really took this
same pattern of live object
detection, but really tailored the minute details to their own brands.
32:33You can check out both of these demos in the AI/ML tent.
32:40Secondly, we worked
with Flutter.
32:42They have downloadable
source code on GitHub
32:44that you can see a
hypothetical barcode flow.
32:49It is using our [? FA ?]
e-commerce app, Shrine,
32:51to showcase what this
hypothetical flow would look like. 32:55And it's really valuable
to see what that
32:56might look like in practice.
33:00Real quick recap on
some of the resources
33:02to actually help you implement.
33:04ML Kit, and all of the
dev docs, and the demo
33:08that is really specific
to material design
33:10are available on the ML
Kit site and on GitHub.
33:14For those of you at the talk yesterday, 33:16the People + AI Research team
recently launched a guidebook.
33:18A really fantastic resource
providing tools and best
33:21practices for designing
with machine learning.
33:24That's available at pair.withgoogle.com 33:26and smallthanks.withgoogle.com.
33:27And of course, all of
these three patterns
33:30are available on the material.io
site in depth and in detail.
33:35And with that, thank you
for coming to our talk.
33:37We are going to be in the
sandbox right next door right
33:39after, if you want to
ask us any questions.