Welcome to No Priors. Today we're speaking with Olivier Pomel, the co-founder and CEO of Datadog, the company at the forefront of the DevOps revolution. Datadog is a leading observability and security platform for cloud applications. Its execution and ambition have impressed me for years, especially since learning more about the company after it acquired Sqreen, a security startup I was a board member for, in 2021. I'm excited to be talking about the potential for AI in DevOps. Olivier, welcome to No Priors.

Thanks for having me.
So let's start with a little bit of personal background. You're French, and you've been in the US working on startups since '99. How did you start to think about starting a company, and Datadog in particular?

Yes, so I'm from France, yes, nobody's perfect. I'm an engineer. I got into computers through computer graphics, and when I was a kid I used to follow the demo scene in Europe, which was all about 3D and doing interesting things in real time. This led me later on to be one of the first authors of VLC, the media player, which I think is mostly used for viewing illegally downloaded videos. And I should say most of the people who made it that successful came in after me; they picked up the project and did something fantastic with it after I left. I then moved to the US to work, of all places, for IBM Research, and ended up staying; I thought it would be six months and I have been here since 1999. I worked for a number of startups through, I would say, the tail end of the dot-com boom, so I arrived right in time for the bust. After that I worked for, I think, eight years for an education software company, an edtech startup that was doing SaaS for schools. It was at this company that I spent quite a bit of time with the person who's my co-founder at Datadog, and that's where we had the idea to start it, basically. I used to run the dev team there and he used to run the ops team, and even though on our teams we tried hard not to hire jerks, and we were very good friends (we'd known each other since the IBM days, basically), we still ended up with dev and ops upset at each other, people pointing fingers at each other all day long, big fights. So the starting point for Datadog was not monitoring, and it was not even the cloud initially; it was: let's get dev and ops on the same page, let's give them a platform, some place they can work together and see the same things.
Yeah, that's actually quite different from thinking of it as, like, a ticketing relationship, right, a quite siloed relationship between the two areas. Because I think most people assume that Datadog comes from a place that was more like metrics, or "we knew the cloud was coming," and I'm sure both these things are true, but it's interesting that the core starting point is really around dev and ops. You are, I think, pretty long on NYC, and you've been challenged on building infra in NYC as a thing to even attempt before. Tell me about your original thinking on that, or if it even crossed your mind.

Well, I stayed in the US because I loved NYC; that's why I ended up staying here. I love the energy, I love the diversity of the city. I also met my wife in NYC, and she's not French and also not American, so it also made sense for us to stay in the city. So when we started, it made total sense to start a company in New York. We also knew fantastic engineers we could hire in New York, so it was very obvious from that standpoint. I would say it was less obvious when we started fundraising, because we didn't come from systems management or observability or anything like that, and we were based in New York, which was not seen as a great place to start an infrastructure company at the time. For most investors, especially the Bay Area investors, I think it was considered some form of mental impairment to stay in New York at the time. It made things harder in different ways, and I think as a result it made us more successful, because we were so scared of getting it wrong, and so scared of not being able to fund the company any further, that we really doubled down on building the right product, for one thing, but we also built a company that was fairly efficient from day one and hovered around profitability throughout its whole existence, pretty much. And I think in the long run it's been an advantage. Like everything else, everything that has a long-term advantage turns out to be very difficult in the short term, typically.
How else do you think New York has benefited you? Because I feel like now it's kind of an obvious place to start a company, and to your point, when you first got started it was very different. It feels like there were always some good talent pools there; I know Google had a giant office there and Meta set up one, and it was really flourishing over the last decade, and now it definitely feels like a very strong standalone ecosystem. But are there other aspects, of either recruiting or other things, that you really benefited from by being in New York?

Yeah, I would say there are really two things. The first one is, from a customer perspective, we're sort of out of the echo chamber of the Bay Area, which makes it easier, I would say, to latch on to what really matters to customers, and not just a fantastic idea you told three people, they repeated it to three others, and then it came back to you and sounds even better now. And there are a lot of companies, basically a lot of non-tech companies, in New York you can sell to, and you can get a good idea of what they need. The second aspect I think benefited us is that it's a bit more difficult to recruit in New York; there's less pure tech talent, less deep tech talent, in New York than there is in the Bay Area, but the retention is a lot higher. So if you give people great responsibilities and interesting work and treat them well, they're going to stay with you for three, four, five years or more, which I think in the Bay Area is pretty much the very high end of what you can expect. We see it from looking at data: we have data from both our customers and our own engineers, and most of our users are engineers, so we see when their individual accounts churn at our customers' organizations, and we see that it's not rare for companies in the Bay Area to have engineers churn every 18 months. We think it's really hard to build a successful company that way; I think you do have to over-invest if you want to do that.
One last background question before we ask you to talk a little bit about Datadog today: can you explain the name?

Yeah, it's interesting, because I'm actually not a dog person and I've never had any dogs. At my previous company we used to name production servers after dogs, and the "data dogs" were the production databases. Datadog-17 was a horrible Oracle database that everybody lived in fear of, that had to double in size every six months to support the growth of the business, and that could not go down. So for us it was a name of pain; it was the old world, it was where we were coming from. So we used it as a code name when we started the product; we actually called it datadog-17. Everybody remembered Datadog, so we kept it, and we dropped the 17 so it wouldn't sound like a MySpace handle. Then we had a designer propose a puppy as the logo, instead of, you know, alpha dogs and hunting dogs and things like that, and I think the smartest branding decision we've made was to keep the name and to keep the puppy.

Love it. So Datadog is clearly a leader in observability and security for cloud environments. You've had enormous success: I think you're now approaching a two-billion-dollar run rate, with 26,000 customers, and you're really mission critical to a variety of different folks who spend in some cases more than 10 million dollars a year with you. But for those listeners we have who may be a little bit less familiar, could you give us a quick background on what Datadog provides, almost like a Datadog 101?

Yeah, so what we do is basically gather all the information our customers have about the infrastructure they are running and the applications they're running: how these applications are changing over time, what the developers are doing to them, how the users of these applications are using them (what they're clicking on, where they're going next), what the applications are logging, and what they're telling us about what they're doing themselves on the systems. So we cover everything end to end, basically. We sell to engineers, so the folks who buy our product are typically the ops teams or DevOps teams in a company, and the vast majority of the users are developers and engineers. And then some of our users are going to be product managers, security engineers, all of the other functions that gravitate around product development and
product operations.

How do you think about translating some of those products or areas into the generative AI or LLM world? I know that obviously cloud spend is now 25% of IT, and there have been these really big shifts in terms of adoption of AI, and it's very early, right, extremely early days, at least for this new wave of LLMs; obviously we've been using machine learning for 10 or 20 years. How do you think about where observability and other aspects of your product go in this new world?

Yeah, so we find the area quite exciting, actually. There are two parts to it. One part is the demand side: what's happening in the market that is driving the use of compute and the building of more applications and things like that. And the other part is what we're doing on the solution side with the product, and how we can use generative AI there. On the demand side it's exciting at so many levels. If you think of the highest level possible and what might happen in the long run, we think there are going to be so many more applications written, because it's going to improve the productivity of engineers. At a high level, if you imagine that one person is going to be 10 times more productive, it means they're going to write 10 times more stuff, but they're also going to understand what they're writing 10 times less, just because they don't have the time to sit and understand everything they do. As a result, we think it actually moves or transfers a lot of the value from writing the software to then understanding it, securing it, running it, and modifying it when it breaks, which ends up being what we do. So we're thinking that, for our industry, it's great in general. From a workload perspective, we already see an explosion of workloads in terms of providing AI services or consuming AI services; it actually consumes a lot of infrastructure to train those models and to run them, so we're going to see a lot of that. We also see a lot of new technologies being used there, new components, this whole new stack that is emerging. So overall it's exciting at every single level. But to your earlier point, it's still very early, so it's hard to tell what is actually going to be the killer app for all of that in six months or in a year. It's possible that some of the things we've seen with LLMs, where all of a sudden everything is a chatbot, it's possible that that's not the way people want to interact with everything two years from now. For example, when you start your car you don't want to play 20 questions with it, you just want to start it. It might be the same thing with a lot of the products that are today starting to implement LLMs. But what I think is pretty certain is that we're going to see an expansion of the use cases and an expansion of workloads, and also maybe an acceleration of the transformations, whether it's digital transformation or cloud migration, that are bringing all of that and making all of that possible. If you want to adopt AI, you actually have to have your data in digital form; it sounds obvious, but it's still not the case everywhere. And second, you also have to be in the cloud; how else would you do it? If you tried to build everything on-prem today, you wouldn't even know what to buy, because the technology is changing so fast. So I think it's accelerating all those trends, which is very exciting.

I've definitely
seen a lot of enterprise buyers change their minds almost on a monthly basis in terms of what they view as the primary components of the stack they're using, and that could be the specific LLM they're using, or whether they should use a vector database or not; the whole set of components seems to be very rapidly morphing. You mentioned earlier that you're seeing this emergence of a new stack. Are there any specific components you'd like to call out, or that you think are going to stick, or how do you view the evolution of the stack?

So, as I was sort of saying, it's extremely hard to know what's going to stick in the end, and it's actually a very new place for us as a company. We've been very good over the past 10 years at understanding which trends are picking up and which are actually going to be the winning platforms: when the world went from VMs to cloud instances, from cloud instances to containers, from containers to managed containers with Kubernetes, from that to serverless. It always took a year, two or three years for those technologies to gain mass adoption, and it was very clear what the killer apps and the winners were going to be. With generative AI it's not the case: it's changing so fast, and everybody is exploring all of the various permutations of the stack and all the various technologies so fast, that it's really, really hard to tell what's going to stick.

If you can take a guess?

I think the one thing that's been the most surprising was the speed at which the open-source ecosystem has been innovating and building better and better models. Nobody has quite caught up to OpenAI yet in terms of the frontier models and the maximum level of performance you can get, but I think we've all been surprised by the amount of new technology that has come out in open source that does as well or even better on smaller, more identified use cases, and I think we should expect to see a lot more of that. So when we look at our customers, and we do serve the largest providers of AI but also the largest consumers, we see that today everybody is testing and prototyping with one of the best API-gated models, typically OpenAI's, and there are a few others. But everyone is also keeping in the back of their minds what they can then bring in-house with an open-source model they train themselves and host themselves, and what part of the functionality they can break down to which one of those models. So I think we'll probably have a very different picture a year or two years from now.

Yeah, it definitely feels like the
most sophisticated users are basically asking when they can fall back on the cheapest possible solution, when they need to use the most advanced technology, and then how they route a specific prompt or user action or something else against those, so it's very interesting to watch this evolve. I know that you all released an LLM API. Is there a whole new observability and tooling stack needed for AI, and if so, what are the main components?

Yeah, well, first there is the observability stack that you need, period, right? You do need to fold that in, because it's just one more component you're using, and as I said earlier there's a whole new stack that's emerging. So if you're going to use a frontier model from one of the big API-gated providers, you need to monitor that: you need to understand what goes into it, you need to understand whether what it responds to you is right or wrong, and how it interacts with the rest of your application. Then if you use a vector database, or if you host the model yourself and you use computing infrastructure for that and you have GPUs and things like that, you also need to instrument and observe all of that and figure out how you can optimize it. So there's a whole new set of components from this new stack that needs to be observed, and that can, for the most part, be observed pretty much the same way anything else can be observed, which is with metrics, traces, logs and that sort of stuff.
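To make that concrete, here is a minimal sketch of what folding an LLM call into that same metrics, traces and logs setup can look like. It assumes the ddtrace Python library; the call_llm helper and the tag names are hypothetical placeholders, not an official Datadog convention.

```python
import time
from ddtrace import tracer  # assumes ddtrace is installed and an agent is running


def call_llm(prompt: str) -> dict:
    """Hypothetical stand-in for whichever API-gated model is being called."""
    return {"text": "...", "prompt_tokens": 42, "completion_tokens": 128}


def answer_question(prompt: str) -> str:
    # Wrap the model call in a span so it shows up in the same trace as the
    # rest of the request, next to databases, queues and everything else.
    with tracer.trace("llm.request", service="support-bot", resource="chat.completion") as span:
        span.set_tag("llm.provider", "hosted-api")       # illustrative tag names
        span.set_tag("llm.prompt_chars", len(prompt))
        start = time.time()
        response = call_llm(prompt)
        span.set_metric("llm.latency_ms", (time.time() - start) * 1000)
        span.set_metric("llm.prompt_tokens", response["prompt_tokens"])
        span.set_metric("llm.completion_tokens", response["completion_tokens"])
        return response["text"]
```

The same wrapping applies to the other new components mentioned above, such as a vector database query or a GPU-backed inference call, which is what lets them appear alongside the rest of the application in the same traces.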
I would say there's probably also a whole new set of use cases around what used to be called MLOps, or what a number of people now call LLMOps. That's a field, by the way, that we were only watching from afar over the past few years, and the reason was that we saw a hundred different companies, few of them reaching true traction, because the use cases were all over the map and the users tended to be very small groups of data scientists who also preferred to build things themselves in a very bespoke way. So it was very difficult to actually come up with a product that would be widely applicable and that would also be something you can sell to your customers. I think today that has changed quite a bit, because LLMs are the killer app, everybody is trying to use them, and the users, instead of just being a handful of data scientists in every company, end up being pretty much every single developer. And they are less interested in building the model itself than they are in making use of it in an application in a way that is reliable, makes sense, and can run day in and day out for their customers. So I think there's a whole new set of use cases around that which are very likely to emerge and be very valuable to those developers, and these have more to do with understanding what the model is doing, whether it is doing it right or wrong, whether it is changing over time, and whether the various changes to the application improve it or not.
It seems like one of the things that has given Datadog enormous nimbleness is this unified platform that you've built, which is both a big advantage and a big investment, and my understanding is that a pretty large proportion of the Datadog team is working on the platform right now. How do you think about resources being allocated towards the main platform and maintaining it, versus new initiatives like AI and LLMs?

Yeah, so the rule of thumb for us is that about half the team is on the platform, and that relates to what we do, right, we sell a unified platform. So internally, as I said, half is on the platform, and the other half is on the products, the specific use cases. But even the way we organize those use cases, the teams that work on them tend to be focused on problems that are forward-looking, or where the market's going, whereas what we sell to our customers tends to be more aligned to categories that are more backward-looking, which is how people are used to buying stuff. And that's very important: when you talk to customers in our space, there are like 12, 15, 20 different categories, always with interesting acronyms, and they correspond to things that customers have been buying for 10, 15, 20 years, and that's how they understand the market. So we sell into that, while at the same time delivering everything as part of the unified platform, which is itself shaped more towards what we think the world is going to be. So it's very possible, even likely, that five or ten years from now our SKUs and our products will have changed drastically, because they correspond to the evolution of the market as opposed to being pinned into a very specific and static definition of categories. An example of that: observability is emerging as one big super-category that really encompasses what used to be infrastructure monitoring, application performance monitoring and log management. We still sell those three as different SKUs today, but I think it's very likely that five or ten years from now you won't even think of them as being separate categories anymore; they really become part of one super-integrated category. I would say there's a specific cost when it comes to maintaining a unified platform, which is that we also do some M&A, and we acquire companies, such as Sqreen, the company Sarah was on the board of and graciously signed the order to sell. When we do so, the first thing we do is completely re-platform what we've acquired, so we spend the first year post-acquisition really rebuilding everything the company had built on top of our unified platform. So it's an extra cost, but again, what we deliver to our customers is end-to-end integration, bringing everybody into the same conversation, into the same space, the same place: more use cases, more people, more different teams into the same place. We see it as a necessary part of maintaining our differentiation there.

You've made a
handful of other acquisitions to expand the product suite, and, I think, the talent pool at Datadog. What else has made them successful? Because it seems to me that they have really continued to drive useful product innovation at Datadog, which is not always true with acquisitions.

Yeah, I mean, you realize that making an acquisition is easy: signing a piece of paper and wiring money is super easy, anybody can do that. The problem is what happens next; now you've done it, now you have to make the two things work together. In general, the way we approach acquisitions is that they always correspond to product areas we want to develop, and we're fairly ambitious: there are a lot of different product areas we want to cover in the end, spanning from observability to security to a number of other things. At the end of the day we're ready to build them all, but if we can find some great companies to get us two or three years of a head start in a specific area, we'll do it whenever we can. So we start with a very broad pipeline, a very large funnel of companies, and then we focus on the ones that are going to be fantastic fits for us post-acquisition, meaning they are teams that want to build, and entrepreneurs who can really take us to the next step with the experience they've gained in a specific area. One thing we're very careful of is that we select for entrepreneurs who want to stay and want to build, as opposed to entrepreneurs who are tired and want to move on; that's a fine reason to sell your company, it's not a great reason for us to buy. The other thing we do is when we close the acquisition, and actually before we close, we have a very specific and very short fuse on the integration plan afterwards. We have a plan that calls for shipping something together within three months of the acquisition, which is very short: after an acquisition people celebrate a little bit, then they have to get oriented, sort out HR and whatnot, so three months is a very short time. We don't really care what gets shipped within three months, but we care that something gets shipped within three months. What it does is force everyone to find their footing, and it also makes it very easy for the acquired company to start showing value, which then builds trust. Because the main issue you have when you acquire companies, the main risk, is not that you waste the money spent on the acquisition; it's that you demoralize everybody else in the company, because they see these new companies being acquired and they don't understand the value, or they wonder why you paid new people a lot of money instead of paying the people you already have a lot of money to do the same thing. So it's very, very important to show value very quickly, and we put a high emphasis on doing that. So far we've done it well; I'm still expecting us to make some mistakes there, but so far it's worked out for us.

I want to go back to something
you were saying, right, around calling, let's say, environmental technology changes well: the progression from VMs to containers, and here we are now with managed Kubernetes and such. But Datadog as a company has always been amazing to me, because the spectrum of things you're ambitious to go after is very broad, right? I was at Greylock investing in companies that were APM companies and logging companies, and you have this platform advantage, but you're also still attacking many different customer problems in different categories. Can you talk a little bit about how you organize that effort, in your mind or as a leadership team, and how you sequence it?

So, in terms of what we go after, first of all it took us a very long time to go beyond our first product. I think we spent the first six or seven years of the company on our first product, and the reason is that it was really hard to catch up with the demand for that product. We also realized after the fact that we were fairly lucky in terms of when we entered the market: we had an opportunity to enter what is a sticky market, a market that's hard to enter, because of the re-platforming that came with the cloud. So we could start with a smaller product and then expand it as customers were themselves growing into the cloud; everybody was new to the cloud at the time. So we had to spend that time getting to, I would say, the minimum full product for infrastructure monitoring. After that, what has driven the expansion of the platform was really what we saw our customers build themselves. Before we started building APM, we saw a number of customers build a poor man's APM on top of Datadog; we didn't have the primitives for it, but our product is open-ended enough that they could actually build and script around it and do all sorts of things. So we saw that, and if it made sense for them to build it and for us to be part of the solution there, we thought it would make sense for us to build it for them. In great part that's what guides the development of the platform. The first threshold was really to get from a single-product company to having two or more products that were successful; it was not easy. Once we had done that, that's what gave us, for one thing, the confidence to take the company public, because we understood we could grow it a lot for a very long time, but it also really opened us up to: okay, let's look at what our customers are doing with our products, what problems we can solve for them, and use the secret weapon of the company, which is the surface of contact we have with customers. We're deployed on every single one of the servers they have, because we start with infrastructure monitoring, so we're present everywhere and we touch every single engineer; we're used by everyone every day. And that surface of contact is then what lets us expand, solve more problems for the customers and build more products.
You guys have, I think, over 5,000 security customers now, but relative to the overall Datadog base, or to the security industry, it's still a newer effort. I've been on boards of companies that sell to IT and security, but it is hard; the conventional wisdom is that they're quite different audiences, even if, as you said, the surface area is there, or it makes architectural sense to consolidate the tools because a lot of the data is the same. I think the world, or the investor base, might see this as a bigger jump than some of the products you've released before. What do you need to do as a business to succeed in security?

Yeah, it's a great question, and it is true that it is a bigger jump, because you can argue that most of the other new products we've released outside of security were part of the same larger category around observability: the users were the same, and the buyers and the buying logic were somewhat the same. With security we get new types of users and new types of buyers. Our approach to it was that there's actually been no shortage of security solutions: there's tons of technology, and it's typically sold very well, in a top-down fashion, to the CISO. What it's not doing well is actually producing great outcomes: everybody's buying security software, and nobody is more secure as a result. So our ambition there is to actually start by delivering better outcomes, and for that we think we need a different approach. We think that if you sell a very sharp point solution to a CISO, which is how it's done today, you're not going to have these great outcomes. On the flip side, if you rely on the large numbers of developers and operations engineers to operationalize security, and you deploy it everywhere, on the infrastructure and in the application at every single layer, you have a chance of delivering better outcomes. The analogy I would make is that there are great medicines today for security, in terms of technology, but for them to work you need to inject them into every single one of your organs every day, and nobody's doing that. The way we intend to do it is: we can deliver it to you in an IV, and that's it, you're going to have it always on and it's going to be fine. So again, it requires approaching the market fundamentally differently, because we are building on usage and deployment, we are building on ubiquity, not on great sales performance at the top level. It's possible that later on we need to combine that with great sales performance at the top level, because that's how it's done at larger enterprises, but for now our focus is really on getting to better outcomes.

I want
to go back to sort of the AI opportunity for you guys, as Elad was touching on. If you just take one very naive example, anomaly detection on logs, on metrics, on security data has existed for a really long time; you guys have this Watchdog intelligence layer, and I'm sure you're working on lots of interesting things with classical ML approaches in security as well. How would you rank the AI opportunity within your products, in these different domains?

There are so many new doors that we can open, and that's really exciting. One thing I would say is that, in general, we've been careful about not over-marketing the AI, and the reason is that we think it's very easy to over-market AI and very easy to disappoint customers with it. That's the one thing I find a little bit worrying with the current AI explosion: the expectations are going completely wild in terms of what can be done with it, and I think there's going to be maybe a little bit of disillusionment after that. Though I actually am a believer that we can deliver things that were impossible just a few years ago, I don't think the old methods are going away, because you still need to do numerical reasoning on data streams, for example. You mentioned Watchdog, which is watching every single metric, all the time, everywhere, for statistical deviation; there are methods that work fantastically well for that which don't involve language models or Transformers or any of that. It's possible that we see some new methods emerge using Transformers, because there's so much work being done on that today, but that's not yet the case. So I think those methods are not going anywhere, and those methods are also a lot more precise than what you get with large language models.
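As a rough illustration of the kind of purely numerical check being described (a toy sketch, not Watchdog's actual algorithm), flagging statistical deviation on a metric stream needs no language model at all:

```python
import numpy as np


def zscore_anomalies(values, window=60, threshold=4.0):
    """Flag points that deviate strongly from a trailing window.

    A toy version of watching a metric for statistical deviation;
    not Datadog's actual method.
    """
    values = np.asarray(values, dtype=float)
    flags = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mean, std = baseline.mean(), baseline.std()
        # Skip flat windows to avoid dividing by zero.
        flags.append(bool(std > 0 and abs(values[i] - mean) / std > threshold))
    return flags


# A steady latency series with one spike at the end: the spike gets flagged.
series = [100 + np.random.randn() for _ in range(300)] + [180]
print(any(zscore_anomalies(series)))  # True
```

The appeal is exactly the precision argument above: a fixed statistical rule like this is cheap, predictable, and easy to tune against false positives.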
I'll give you an example of why that precision matters. If you talk to a customer and you ask them, would you rather have a false positive, where you'll decide whether it's right or wrong but the computer is going to bring up new situations for you, or just nothing, meaning if we're not sure we won't tell you, customers will all tell you: give me the false positive, I'll decide. The reality is, you send them two false positives in the middle of the night and they'll turn you off forever. So you need to be really, really precise, and with operational workflows like ours you're making judgments a thousand times a minute, so if you're wrong even two percent of the time it becomes really painful really quickly. You need to set the bar really high, and for that, those methods are not going away. Now, what's really interesting is that there are a number of other new doors that are opened by LLMs. One of them is that there's so much data that was off limits before that we can put to use now: everything that's in knowledge bases, in email threads and everything else, all of that can actually be used to complement the numerical information. I've seen LLMs described online, in a way I thought was very astute, as basically calculators for language, and you can use that really well: what you can do is structure or bring together metadata from many different places, output from many different numerical models, and use the language models to combine that, maybe with some other wikis you have internally. This allows you to combine data in ways that were impossible before, and a lot of new intelligence is going to emerge out of that. Now, the challenge of course is that you still need to be correct, and I think that's what we're working on right now. I'll give you one last example. We've obviously been working on this quite a bit; the first thing people do when they see an error in their production environment and they have a GPT window open is they take the stack trace of their error and they ask GPT what's wrong. Does it work? Well, 100% of the time it will tell you, oh, this thing here is wrong. The problem is that in the majority of cases it is itself wrong, and there's a good reason for it: you just can't know, because you don't have the program state. What we found, though, is that if you combine that stack trace with the actual program state, meaning what the variables were and what the state of things was when it errored out, you actually can get a very, very precise answer from the large language model pretty much all of the time.
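As a rough sketch of that combination (the ask_llm helper is hypothetical; the transcript doesn't describe Datadog's actual implementation), the useful part is that the prompt carries the variables captured at the moment of the error, not just the trace:

```python
import traceback


def ask_llm(prompt: str) -> str:
    """Hypothetical call to whichever large language model you use."""
    return "..."


def explain_error(exc: BaseException) -> str:
    # The stack trace alone forces the model to guess; the captured locals
    # are what make a precise answer possible.
    stack = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    frames = []
    tb = exc.__traceback__
    while tb is not None:
        local_vars = {name: repr(value)[:200] for name, value in tb.tb_frame.f_locals.items()}
        frames.append(f"{tb.tb_frame.f_code.co_name}: {local_vars}")
        tb = tb.tb_next
    prompt = (
        "Explain the most likely root cause of this error.\n\n"
        f"Stack trace:\n{stack}\n"
        "Local variables per frame:\n" + "\n".join(frames)
    )
    return ask_llm(prompt)


def average_order_value(orders):
    return sum(o["price"] for o in orders) / len(orders)


try:
    average_order_value([])  # ZeroDivisionError: the empty list in the program state is the real clue
except Exception as err:
    print(explain_error(err))
```

In a real system the state would come from something like an error tracker or a debugger snapshot rather than a live except block, but the shape of the prompt is the same.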
So I think that, at least in the short term, the magic is going to come from combining the language models with the other sources of data and, I would say, the more traditional numerical models, to bring insights to our users.

That makes a lot of sense. It seems like there's a real opportunity to build, and it sounds like this is the first step towards building, almost an AI SRE copilot, or eventually a more automated solution that can really help not only surface different things but also understand them in real time and provide an opinion on what a potential issue may be.

Definitely. I think we are at the cusp of maybe doing more automation than we could before. I would say at the cusp, because we're still not there; there's still quite a bit that needs to happen. The best test for that is that even at places that are extremely uniform and have very large scale, such as the Googles of the world, the level of automation is still fairly low. It is there, it's increasing, but it's still fairly low. And if you use the self-driving car metaphor, Google is on a highway, so even if that can be automated, most of the other companies out there cannot be, because most other companies are like downtown Rome; it's quite a bit more complicated.

What do you think is missing, though? Do you think it's a technology issue or do you think it's just implementation? Because I feel like LLMs are so new, right, ChatGPT launched six months ago and GPT-4 launched three months ago. So is it just new technology and people need to adapt to it, or do you think there are other obstacles to actually increasing automation through the application of this technology?

For one thing, it's very hard. Many of the problems you need to solve there: think of a self-driving car, where everybody over the age of 15 can drive, whereas for debugging production issues in a complex environment you need a team of PhDs, it would take them time, and they will disagree. So I think it's hard; that's one reason. That being said, I think a lot of it might be possible in the end. The biggest question, I think, not just for observability but for everything else with LLMs, is: is this the innovation we need, or do we need another breakthrough on top of it to make it to the end? To use another analogy (I love analogies, so I keep throwing them at you), with LLMs we clearly have ignition: there's innovation everywhere, it's happening. We might have liftoff, probably in the next year or so, with real production use cases, because right now most of the stuff is still not in production, it's still demoware, with no private data and that sort of thing. I think the question is, do we need a second stage or not? I don't know, and I think that won't be clear for another couple of years, as this current wave of innovation reaches production. I would say, look, there are some use cases for which it's very clearly there, for example creativity, generating images: as humans we're good enough at debugging what comes out of the machine. You generate an image and the dog has five legs, you immediately re-roll the dice and you get something that works. When you try to write code or debug an issue in production, it's less obvious when the system is wrong, and that's what we still need to work on.

Yeah, makes sense. Are
there any other near-term productivity gains that you think are most likely to occur, either for Datadog or your customers, in terms of this new type of AI? Because I feel like, to your point, there's a lot of forward-looking complexity in some of these problems, and some of them may be like the self-driving example, where you need more stuff to happen before you can actually solve them. And then, separate from that, it seems like there's a class of problems where this new technology actually is dramatically performant, if you look at Notion's incorporation of LLMs into how it's rethinking documents, or if you look at Midjourney and art creation, to your point on images, and the subsumption of things like clip art or marketing copy. I'm just curious where you think the nearest-term gains for your own area are likely to come from.

Well, we see it already in everything that has to do with authoring, writing, drafting. I think it's there, it's already good enough. It hasn't dramatically changed anybody's processes just yet, but I think it's going to happen in the near term. We see, and will see, much more of an improvement when it comes to developer productivity. I think there's a whole class of development tasks that are becoming a lot easier with an AI advisor: using a new API, for example, used to be painful, and now it just takes a few minutes to ask the machine to show you how to do it, and that's a real, immediate productivity gain there. I think there are some areas that will probably be completely rewritten by AI, in the sense that not only can we do different things, but we'll also stop doing some things because they become inefficient as everybody does them. I'll give you an example: email marketing and things like that. When you don't need a human to send an email anymore, and you can send a million of them from a machine, and everybody's doing it, that whole avenue, that whole field, might change quite drastically.

You think we just killed the...?

It will have to take a different shape, I would say; people will have to get there a different way.
Super interesting, and I love the analogies. We're running out of time, so I want to ask a few questions on leadership to wrap up, if that's okay, because we have a lot of founders and CEOs who listen to the podcast, and Datadog is a company that just keeps executing. The company delivered 30% revenue growth when a lot of people are slowing as they face a different macro environment, and you released a cloud cost optimization product, so you're doing certain things that are very specific to the environment, or maybe they were in the works for a long time. What else are you changing about how you run the business, if anything?

So we're actually not changing how we run the business. We've always run the business with profitability in mind, so we've always looked at the margins, looked at building the system from the ground up in a way that was sustainable and efficient; we never built things top-down, thinking we'll get it to work first and then we'll optimize it. The reason for that is, again, that we were scared initially that we wouldn't be able to finance the business, but also that we think it's really hard to shed bad habits: once you start doing things in an inefficient way, it's really hard to move away from that, and we've definitely seen that around us. The one thing we're a bit more careful of is tuning our engine a little bit differently when it comes to understanding customer value and product-market fit, because customers themselves are more careful about what they buy, how they buy, and how much of it they buy, so we need to adjust our sights a little bit so we don't make the wrong decisions based on that, to the point I was making earlier. At the same time, we also need to move a lot faster in some areas, such as generative AI, just because the field itself is moving so fast, which is also causing us to manage the teams internally a little bit differently. So we're telling teams: hey, I know you're used to being really good at filtering out the noise and taking three or four quarters to zero in on the right use case; for generative AI you can't actually do that, because the noise is part of what's being developed, so accept the fact that you might be wrong a little bit more, and we need to iterate on it with the rest of the market.
Do you guys need new talent to go attack these areas, or does it change your view on talent at all?

Not much, really. I think it is always about finding people who are entrepreneurial, who want to build, who want to grow, and who are going to prove themselves by making a whole area of problems in the business just disappear. The best people just obliterate problem areas; they're black holes for problems, you send them problems and they disappear. And that's how you can promote them. You can easily find them in organizations, because you see all the work going to them: work avoids people that are not performing and it finds people that are fantastic, so following the work is a great way to find differentiated performance.

I want to ask one last question, because I feel like Datadog has so many unique attributes as a company. Another, perhaps less obvious, path that the company took was to serve many different customer segments, from very small engineering teams to Fortune 100s, and my understanding is you've done that for quite a long time; you've grown up a little bit into the enterprise, as everyone does. How did you think about that, or why did it work for you?
Well, I mean, look, the starting point was to bring everybody on the same page, so we focused on the humans. We figured the humans, whether they are in dev or in ops, are wired the same, so let's bring them onto the same page, on the same platform. And it turns out the humans are also largely the same whether they work for a tiny company or for a large one. I think it was also made possible by the fact that, in the cloud and open-source generation, the tooling is the same for all companies. If you go back 15 years, if you were a startup you were building on open source, and if you were a large company you were buying whatever Oracle or Microsoft was selling and building on top of that enterprise-y platform. Today everybody's building on AWS with the same components up and down the stack, so it's really possible to serve everyone with the same product. And it's been good for us; it's given us a lot of network effects that you wouldn't find in an enterprise software company, by giving us this very broad spectrum of customers. It's a great differentiator, because it's really hard to replicate: you can't just replicate the product, you have to replicate the company around it, which is hard from a competitive perspective. It also creates some complexities, because as much as the users are humans and they feel the same across the whole spectrum, commercially you don't deal the same way with individuals and with large enterprises, and it's hard for your messaging not to leak from one side to the other. One example of that: recently there were some articles in the news about some of our customers that pay us tens of millions of dollars a year, and on the individual side you have people who wonder how it is even possible to pay tens of millions of dollars a year, whereas on the high end, obviously, customers do that because it's commensurate with the infrastructure they have, and they do it because it saves them money in the end. So you do have this balancing act between the very, very long tail of users and the very high end of large enterprises.

Awesome, thanks so much for joining us on the podcast, this is great.

Thank you so much. Thank you.