00:05Palantir has built a data-driven
00:07application platform over the past two
00:08decades that is used by governments
00:10militaries and some of the world's
00:12largest companies for analytics and
00:13operational decisions Palantir recently
00:15announced a new platform AIP as part of
00:18a push to invest in AI this week on the
00:20podcast Sarah and I talked to Palantir
00:22CTO Shyam Sankar who's the company's
00:26first business hire and has led the
00:28company for nearly two decades
00:29previously as COO Shyam welcome to No
00:32Priors thanks for having me great to be
00:33here Sarah so I think you have a very
00:36unique background I believe you grew up
00:37in Nigeria and then moved to the United
00:39States you got interested in computers
00:41reasonably early it'd be great to just
00:42hear your personal story and background
00:45yeah I uh spent the first three years of
00:47my life really in Nigeria my father had
00:49built the first pharmaceutical
00:51manufacturing facility on the continent
00:53until then all the drugs were really
00:55imported and we fled Nigeria during some
00:58violence and really resettled in the US
01:01as kind of like refugees so a great deal
01:04of gratitude understanding the
01:06counterfactual reality of like you know how the
01:09world could have ended up there
01:10and so I grew up in Florida uh I think
01:13relevant to the current age I grew up in
01:15a time where when the space shuttles
01:16would launch we would all file out into
01:18uh the recess Courtyard to actually just
01:20watch it you know and that seemed quite
01:22normal and it also seemed really normal
01:24that on Saturday morning at like 6 a.m
01:26you'd be woken up to double Sonic booms
01:28every now and then uh and so I'm eagerly
01:31awaiting the return of the Space Age
01:33that that commercial space and new space
01:34have been bringing back to us here uh
01:37but I made my way out to Silicon Valley
01:42and started getting involved with
01:43startups the first company I
01:45worked at was Xoom with an X uh that was
01:47founded by Kevin Hartz and it was an
01:49international money transfer company and
01:51then after three years at Xoom I started
01:53at Palantir as the 13th employee uh
01:55really the first person on the business
01:59side and I've had the most fantastic ride ever since
02:01but never been more excited about what
02:02we're doing than than what we're doing
02:04right now with with AIP and the
02:06opportunities that are in front of us
02:07how did you originally find Palantir
02:09because you know it's a very secretive
02:11company very early on it was sort of a
02:13very small community in technology and
02:14Silicon Valley I'd actually heard of it
02:15so I was just sort of curious like how
02:17you connected with the company and got involved
02:19one of my uh friends uh so Peter was
02:23also a seed fund investor in Xoom so I
02:26got some exposure there and one of my
02:28friends was actually a freshman year
02:30roommate at Stanford with Joe Lonsdale
02:32and so I had heard about this company
02:34that was you know very small at the time
02:35and it really tugged at my heartstrings
02:37my uncle was a victim of of terrorism
02:40um I was in New York during 9/11 and it
02:43just like this was going to be I you
02:45know would rather fail on working on a
02:46problem this important than do anything
02:50yeah and then you joined as you
02:51mentioned as employee number 13 the
02:53first business hire Etc and you've had a
02:55variety of roles over time could you
02:57explain a little bit how your role has
02:58changed over the lifetime of the company
02:59I think the the kind of Common Thread
03:02through it all is just doing what you
03:03need to do which I know sounds like a
03:05banality but really when it when it
03:06first started I like wrote our first
03:08kind of candidate management system and
03:10what are you doing as a business hire
03:11before you have really a product here
03:13and I was I was uh an aggressive QA
03:16tester you could say but the the real
03:19initial contribution was what we call
03:21forward deployed engineering it comes
03:23from an insight that Alex kind of bats
03:25around you know he muses
03:27why are French restaurants so good
03:29well maybe one theory is that the wait
03:31staff is actually part of the kitchen
03:33staff you know it's that
03:35they have like deep context and
03:36understanding of the food and so the
03:39forward deployed engineering idea was that
03:40the people who are going to be
03:41interacting with customers in the field
03:43we're going to be computer scientists
03:45who could actually understand uh what
03:47does the product do today what does it
03:49need to do today under what
03:50conditions is it going to work and
03:52how do you kind of create this hybrid
03:54role that's product management customer
03:56success and Engineering all in one and
03:59that that's really the team that I first
04:01built up and then as we went from Gotham
04:04to Foundry and now AIP there's a lot
04:07to do there there's like whether it's
04:09interacting more often with the
04:10customers thinking about the technology
04:11uh or really thinking about how are you
04:14going to get to Value and I think a lot
04:16of what forward deployed engineering really
04:18gets you to think about is how do you do
04:19these things backwards instead of really
04:21going forward from the technology how do
04:23you work backwards from the problems
04:24that your customers actually experience
04:26and use that to create an accountability
04:27function like does what I'm doing
04:29matter did I make the life of this
04:32customer better today can I do better
04:33tomorrow and using that absolute
04:36standard to judge yourself rather than
04:37saying you know did my software work
04:39according to spec like who cares about
04:41the spec who cares about what my
04:42ambition was yesterday what's my
04:43ambition today and why shouldn't it be
04:46that framework is super practical was it
04:50the customers that you were working with
04:52early that drove the need for this were
04:54you trying to solve a particular problem
04:56one the customers you're working with in
04:58government it's just motivational right
04:59like you kind of view it and you're
05:00thinking about it like how do I walk as
05:02many miles in their shoes as possible
05:03like how could I possibly just think my
05:06job is done because I checked the box
05:08here it's like the job is defined by how
05:10are they doing and what more could I be
05:12doing for them then I think there's a
05:14more kind of cynical component of this
05:16which is like okay well are you as a
05:18company going to succeed in this sort of
05:20environment particularly with early
05:22government customers if you don't have
05:23that sort of mentality because if you
05:26think about the vertical stack you need
05:27to deliver your outcome you're dependent
05:29on so many things going right so if you
05:32just want to build the software this
05:33component of the stack if anything down
05:34here isn't working and you know
05:36certainly at the time we were doing this
05:38there really wasn't AWS yet so you
05:40couldn't depend on any of that stuff
05:41working and so your vision
05:43of what you would need to own so that
05:46you succeeded needed to be quite
05:50um before AIP the company had three key
05:52platforms Gotham Foundry and Apollo
05:55could you tell us about what these
05:56different things do for the sake of our
05:58audience since they're less familiar
05:59necessarily with the company and then
06:01how does AIP fit into this yeah Gotham
06:04is our Flagship government product it's
06:05really focused on intelligence and
06:07defense customers and it helps them
06:09integrate and model their data to really
06:12Drive decisions in the context of their
06:14Enterprise so in the defense Community
06:16that'd be thinking about in terms of the
06:17kill chain how do I go from targets to
06:19effects on those targets and
06:21intelligence it's it's often a kind of a
06:24different sort of structure but how do I
06:25track and gain context and understanding
06:27of the things in the world that I
06:28have a responsibility to understand
06:30and you can think of Gotham as kind of
06:32conceptually at the highest part of the
06:34stack much of Gotham then depends on
06:36Foundry Foundry is our general purpose
06:39data integration platform it allows you
06:41to deal with structured and unstructured
06:43data and to transform that data to
06:45really treat data like code and then
06:47drive that through to an ontology a
06:49semantic layer that models not only the
06:51nouns of your Enterprise like the
06:53concepts that you think about but the
06:54the verbs the actions and so I think you
06:57know the buzzword for this is often
06:58digital twin and that can mean a lot of
07:00things to a lot of people but you know
07:01how do I have some sort of conceptual
07:03understanding and model of what we do as
07:06a business and use that to actually
07:07affect decisions and then drive that all
07:10the way through to the application and
07:11decision making layer so not dashboards
07:14that give me visibility but really
07:15pixels where I can make changes right so
07:18if I want to allocate inventory I need a
07:20platform that's going to allow me to
07:21write back to SAP or read and write from
07:24my transactional systems and orchestrate
07:26my Enterprise and Foundry I think really
07:28gives its customers the ability to kind
07:30of squint and model their Enterprise as
07:33a chained series of decisions or kind of
07:35like a decision web and then giving them
07:37the modeling and simulation ability to
07:39understand and ask counterfactual
07:40questions what happens if I do this and
07:43this is the same platform that was used
07:44to build the COVID vaccine response
07:47distribution in the US and the UK same
07:49platform that commercial companies were
07:51using to manage the supply chain crises
07:53when suddenly the steady state kind of
07:55equilibrium wasn't really there and
07:56being able to model the counterfactuals
07:58became really really crucial yeah it's
08:00kind of interesting that you mentioned
08:01those various customers for example on
08:03the covid vaccine distribution side or
08:05things like that because you know the
08:07perception of the company is it very
08:09early on a lot of the earliest customers
08:11were intelligence and defense and then
08:13it kind of broadened from there is that
08:14a correct assessment and was that
08:16intentional at the time or was it just
08:17you found that there's a pocket of
08:19customers that really cared about your
08:20product and you know we're a good fit
08:22for what you were doing initially
08:24well yeah we founded the company to to
08:26work with intelligence and defense
08:28organizations and and really I think
08:30um we expanded almost reluctantly you
08:33know I think it was like 2010 or 2011
08:34where we started working uh with our
08:36first commercial customer but really
08:38what we realized was that it took
08:40something as sexy as James Bond to
08:42motivate engineers to work on a problem as
08:44boring as data integration but this sort
08:46of we had our own ideas what would be
08:48valuable in these spaces and we built
08:49software for it but all of those ideas
08:51kind of presupposed that the data was
08:54integrated and you know kind of I think
08:56the kind of popular view is like this is
08:58a boring and solved problem but I think
08:59it might be kind of a boring and highly
09:02unsolved problem that people are kind of
09:04like duct taping together everywhere
09:05they go and so by productizing a
09:08solution to that we kind of expanded our
09:10market and the sorts of
09:12problems in the world that we could go
09:13after Apollo is quite an interesting
09:15platform as well so like we really
09:16originally built Apollo for ourselves if
09:19you think about our customers we're
09:21deploying in air-gapped environments so how
09:23do you deploy modern software when
09:26you know you can't CI/CD to the target
09:28we had to build this entire
09:30infrastructure that allowed us because
09:32you know our software it's modern
09:33software we have 550 microservices we're
09:35releasing multiple times a day for each
09:36one of these Services they need to be
09:37able to upgrade independently but also
09:39our environments are complicated like
09:41hey the submarine only wants to upgrade
09:43in these windows or these
09:45environments are not connected to the
09:47internet so how is that going to happen
09:48we had to build kind of what we think of
09:51as kind of a successor to CI/CD which is
09:53like autonomous software delivery and
09:56so Apollo allows you to think about your
09:58software and the environments you're
10:00deploying in separately model the
10:02dependencies and kind of hand that to
10:03Apollo to manage and orchestrate the
10:05upgrade it will understand what's your
10:07blue green upgrade pattern how do you
10:09think about health checks how do I roll
10:10forward how do I roll backward how is
10:12that integrated with my uh understanding
10:14of vulnerabilities and CVEs when do I
10:16need to like recall software or block
10:20and that software has started to get a
10:22lot more traction as people are dealing
10:24more and more with complicated
10:25environments not only
10:27air gapped customer environments in
10:29defense and intelligence but if you go
10:30to Europe where people have a strong
10:32push towards sovereign SaaS or a lot of
10:33people want you to deploy inside of
10:35their VPC as a SaaS company how are you
10:37going to manage now having a thousand
10:39customer environments to manage and
10:42Apollo makes that really easy
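The Apollo model he describes — declare the software and each environment's constraints separately, and let the platform decide when an upgrade may proceed — can be sketched roughly as below. Palantir has not published Apollo's internals; the `Environment` shape, the window-based constraint model, and every name here are illustrative assumptions, not Apollo's API.

```python
from dataclasses import dataclass, field

# Toy sketch of "autonomous software delivery": instead of CI/CD pushing a
# release to every target, each environment declares its constraints and a
# planner decides when an upgrade is allowed. All names are hypothetical.

@dataclass
class Environment:
    name: str
    air_gapped: bool = False  # no direct internet path; artifacts must be carried in
    upgrade_windows: list = field(default_factory=list)  # allowed hours (0-23); empty = any time
    current_version: str = "1.0.0"

def can_upgrade(env: Environment, hour: int, target_version: str) -> bool:
    """Return True if this environment may move to target_version right now."""
    if env.current_version == target_version:
        return False  # already up to date
    if env.upgrade_windows and hour not in env.upgrade_windows:
        return False  # e.g. the submarine only upgrades inside its window
    return True

def plan_rollout(envs, hour, target_version):
    """Partition environments into those upgraded now vs. deferred."""
    now = [e.name for e in envs if can_upgrade(e, hour, target_version)]
    deferred = [e.name for e in envs if e.name not in now]
    return now, deferred

cloud = Environment("cloud-saas")
sub = Environment("submarine", air_gapped=True, upgrade_windows=[2, 3])

now, deferred = plan_rollout([cloud, sub], hour=14, target_version="1.1.0")
print(now, deferred)  # the submarine waits for its 02:00-03:00 window
```

A real orchestrator would also model the blue/green pattern, health checks, and rollback he mentions; this only shows the declarative core.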
10:44and then how does uh AIP tie into this
10:46and can you tell us more about AIP and
10:49yeah AIP is really a core
10:52set of technologies that allow you to
10:54bring llm-powered experiences to your
10:57private Network on your private data to
11:00to drive the the decision making
11:02everything from how do I integrate this
11:03data but I think much more interestingly
11:05how do I build these AI enabled
11:06applications you know it's like an
11:08application Forge and a core part of the
11:12proposition here is that these llms they
11:14need tools like that certainly there's
11:15something quite magical about this kind
11:17of non-algorithmic compute you know it's
11:19neither human thought nor kind of
11:21traditional computer science and they're
11:23very good at what they're good at and
11:24they're also quite bad at what they're
11:25bad at and and so like getting this
11:27right is really about providing
11:29sometimes people call it plugins but I
11:31think tool might be a more appropriate
11:32word but how do I give my customers not
11:35just a tool bench but really a tool
11:37Factory to go make uh the tools they
11:40need to get the most out of llms like an
11:42llm is not going to know anything about
11:43orbital simulation or weaponeering or
11:46um predicting forward inventory 30 days
11:48from now it's certainly not going to do
11:49that well but with the right tool it's
11:51going to do that quite excellently and
11:53it's going to give you a lot of Leverage
11:54on the workflows they already have uh
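The "tool factory" idea above — the LLM emits a structured call, and deterministic domain tools like an inventory forecaster do the actual computation the model is bad at — might look roughly like this. The registry, the `forecast_inventory` tool, and the call format are hypothetical illustrations, not Palantir's API.

```python
# Minimal sketch of LLM tool use: the model doesn't predict forward inventory
# itself; it emits a structured call that is routed to real domain code.

TOOLS = {}

def tool(name):
    """Decorator that registers a function as an LLM-callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("forecast_inventory")
def forecast_inventory(current_units: int, daily_net_demand: int, days: int) -> int:
    """Deterministic domain logic the model cannot reliably do in its head."""
    return max(0, current_units - daily_net_demand * days)

def dispatch(tool_call: dict):
    """Route a model-emitted call like {'tool': ..., 'args': {...}} to real code."""
    return TOOLS[tool_call["tool"]](**tool_call["args"])

# In practice this dict would come out of the model's structured output.
call = {"tool": "forecast_inventory",
        "args": {"current_units": 5000, "daily_net_demand": 120, "days": 30}}
print(dispatch(call))  # 5000 - 120*30 = 1400
```

The design point is the split of responsibilities: the model chooses which tool to call and with what arguments; the tool supplies the correctness.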
11:57and and I think in some sense much of
11:59the foundational work that we've done
12:01with with Foundry has enabled people to
12:03run really quickly with AIP build
12:05co-pilots that deploy into their
12:08existing decision making surface area
12:10like how do I allocate inventory how do I
12:12adjudicate auto claims and then
12:16um get that efficiency in in weeks at
12:19what point did you decide you wanted to
12:21make a big investment in LLMs and
12:23sort of what did the company do first
12:25uh I think it was it was really
12:28around Q4 you know the last part of the
12:31last year there where it just felt like
12:35obviously the LLMs are exciting but what
12:37was more exciting to us is it felt like
12:39the llms were just waiting for something
12:40like ontology you know it's like to
12:42really get the value out of the llm the
12:45way that we had modeled the world it's
12:46almost like accidentally we had spent
12:47the last 20 years really thinking hard
12:49about Dynamic ontologies how you model
12:51them why they're valuable to humans and
12:54you can kind of think about the ontology
12:55as as having the semantic layer that
12:57gives you an incredible amount of
12:58compression that you're putting into the
13:00context window and allows you to build
13:03llm-backed functions in very reliable
13:05ways and I think part of this is just
13:09the llms are more like statistics
13:11than calculus and I think this is one of
13:13the impedance mismatches for a lot of
13:15Engineers who are working on them
13:16they kind of model them
13:18a lot like
13:19calculus and then you know when it works
13:21it works magically when it doesn't work
13:23it falls off a cliff so how are you
13:25actually going to get this to work when
13:26you kind of have this like stochastic genie
13:29I think you're going to need a kind of a
13:31whole tool chain around that that kind
13:32of presupposes it's a stochastic Genie
13:34and uh I think the ontology is one of
13:37these things that massively grounds your
13:39llm in your reality in your business
13:41context it allows you to manage that
13:43without having to you know change the underlying model
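One way to picture "ontology as compression": render the enterprise's nouns (object types) and verbs (actions) into a few schema lines that go into the model's context window, grounding it without retraining. The schema below is invented for illustration; Palantir's actual ontology format is not public.

```python
# Sketch: flatten an ontology (objects + actions) into a compact context
# string for an LLM prompt. The example schema is hypothetical.

ONTOLOGY = {
    "objects": {
        "Shipment": ["id", "origin", "destination", "eta"],
        "Warehouse": ["id", "location", "capacity"],
    },
    "actions": {
        "reroute_shipment": ["shipment_id", "new_destination"],
        "allocate_inventory": ["warehouse_id", "sku", "quantity"],
    },
}

def render_context(ontology: dict) -> str:
    """Emit one schema line per object type and per action."""
    lines = []
    for name, props in ontology["objects"].items():
        lines.append(f"object {name}({', '.join(props)})")
    for name, params in ontology["actions"].items():
        lines.append(f"action {name}({', '.join(params)})")
    return "\n".join(lines)

context = render_context(ONTOLOGY)
print(context)
```

A few lines like these carry what would otherwise take pages of raw table dumps, which is the compression he is pointing at.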
13:47what are some of those components that
13:49you think are the tool chain that you
13:50need to sort of bottle the stochastic
13:51Genie and I love that phrase by the way
13:53I think that's a really good way to put
13:55it so you're probably going to need
13:56everything you kind of need with the dev
13:58tool chain but you're going to have to
14:00adjust it for the fact that it's
14:01stochastic so you even see it like
14:03people call it eval and not unit tests
14:05but you're going to need like how many
14:06unit tests do you need if you're going
14:08to write an llm backed function and it's
14:10a stochastic Genie how how many times
14:12does it need to execute before you have
14:13confidence that's going to do what you
14:15want and then so then you can think
14:16about that that's like Day Zero okay so
14:18I build this thing how do I think about
14:19it but what sort of telemetry and
14:21production log data do I need uh and how
14:24often am I going to be looking at those
14:26traces and it's like I might even be
14:28writing unit tests against my traces I
14:30guess you could call that like a health
14:30check right and like there's going to be
14:34an emphasis that you're going to need
14:36there as an engineer as you think about
14:37using this and then there's going to
14:39have to be some calibration on the use
14:40case the best use cases are going to be
14:43when the llm gets it right there's
14:45massive upside and when it doesn't work
14:48it's a no-op right uh and so picking
14:51those ones I think are going to be quite
14:53important as you build and tune the
14:55specific applications of these
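A toy version of the "eval, not unit test" point above: because the function is stochastic, you execute it many times and require a pass rate before trusting it, rather than asserting on a single run. The `fake_llm_classifier` stands in for a real model call, and the trial count and threshold are arbitrary choices for the sketch.

```python
import random

# Eval harness for a stochastic, LLM-backed function: run it N times on a
# test case and pass only if the success rate clears a threshold.

def fake_llm_classifier(text: str) -> str:
    """Stand-in for an LLM call: right ~90% of the time, occasionally wrong."""
    return "invoice" if random.random() < 0.9 else "receipt"

def run_eval(fn, case_input, expected, trials=200, threshold=0.8):
    """Execute the stochastic function many times; report pass/fail and rate."""
    passes = sum(fn(case_input) == expected for _ in range(trials))
    rate = passes / trials
    return rate >= threshold, rate

random.seed(0)  # make the sketch reproducible
ok, rate = run_eval(fake_llm_classifier, "INV-2023-114", "invoice")
print(ok, round(rate, 2))
```

The same harness shape extends to the production side he mentions: point it at logged traces instead of canned cases and it becomes the "unit tests against my traces" health check.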
14:57going back to this idea of an ontology I
15:00feel like I suddenly understand this
15:02much better in that there are a lot of
15:04companies right now trying to figure out
15:06how to take all of their uh messy less
15:09than perfectly integrated and largely
15:11unstructured data and create some sort
15:13of intermediate representation that the
15:16models can handle well and if you have
15:18something like an existing ontology of
15:21your business then leveraging it with
15:23LLMs does feel like a really
15:25natural magical fit that's exciting
15:28um maybe to make it a little bit more
15:31real you could walk us through an
15:32example of like an AI tool using this
15:35tool chain ontology that you're excited
15:37about that one of your customers is
15:40yeah sure I'll just pick an example that
15:42was something we worked on recently in
15:43Hawaii which is how do I do automated
15:46um COA generation courses of action
15:49generation from an operational plan you
15:51know so the Department of Defense has
15:53these OPLANs they call them that are
15:55kind of like that's the other thing you
15:57have these industries where there's just
15:58a tremendous amount of Doctrine whether
15:59that's Pharmaceuticals or defense where
16:02there's so much knowledge and how we
16:03want to do things that's been written
16:05down so you have this OPLAN that
16:08lays out the phases of a potential conflict
16:10and the key risks and assumptions
16:13and so you might want to do something
16:14like a non-combatant evacuation
16:16operation so if if conflict happens here
16:19how will we get all the civilians out of
16:23um okay well we've thought about that
16:25and we've written that down it's in this
16:26document and so how do I then just say
16:28like build me uh a course of action to
16:32drive this evacuation well the plan has
16:34specified the resources that you're
16:35probably going to need the types of
16:37resources the phases the timing of it uh
16:39the risks and assumptions you need to to
16:41worry about so then how do I take those
16:43words and then hydrate the application
16:45state that people use to manage the
16:47common operating picture and that's a
16:49big part of what we're really thinking
16:50about which is you know I kind of think
16:51of like chat is a massively limiting
16:54interface you know at the limit prompts
16:56are for developers now I think that's
16:57really hard but sometimes prompts leak
17:00over to users and users sometimes want
17:02to chat and lots of people start with
17:03that because of the popularity of
17:05ChatGPT but really what I want to do in this
17:07context is they're entering a request
17:09like generate a course of action
17:11for this evacuation operation
17:13and what I'm getting back it's not words
17:15I'm actually getting a map with
17:18resourcing a resource Matrix and the
17:20requisition of the necessary resources
17:22and you know I can hit a button and say
17:23yes and that that comes to life uh and
17:26so those are the sorts of experience
17:27that we're starting to build with
17:28customers now uh and on the commercial
17:30end it's really co-pilots to help people
17:32I was just looking at a demo this
17:35morning from my team on helping a major
17:38auto manufacturer adjudicate quality and
17:40claim so how do I manage the cost of
17:41quality cost of warranty on the
17:43production line and post-production when
17:44these cars are in service well I need to
17:46be able to Cluster these claims and more
17:48efficiently understand what what
17:50supplier sub components where in the
17:52supply chain and how do I remediate
17:54these issues how do I drive down my cost
17:56of warranty and recall so building
17:57co-pilots that are kind of looking at
17:59the text of the claims understanding the
18:01components helping them identify
18:03um early indicators and signs of the
18:06kind of conditions under which these
18:08parts need to be recalled and managed
18:09and there it's really about human agent
18:11teaming I was listening to the uh I
18:15think you guys hosted like an AIP day
18:17like a set of like demos and
18:19presentations recently and one of the um
18:23points of view you take is that the
18:25models themselves are increasingly
18:26commoditized and and certainly more
18:28broadly available and available in open
18:30source how do you think about the value
18:32that Palantir is building for its customers
18:35well I think it's how do you how do you
18:37actually use the models to drive these
18:39experiences so there's kind of two ends
18:41of that one is the existing experiences
18:43that people have today so how do I build
18:45a co-pilot that's going to help me
18:47adjudicate Auto claims
18:49um or help me understand my production
18:50process on the other end of that it's
18:52like how do I develop trust in the
18:54underlying models you know if we go back
18:55to the stochastic Genie here maybe we
18:57should actually think about these as
18:58like slightly deranged uh mad Geniuses
19:02and then you know are you gonna only ask
19:05one of these experts to help you
19:07solve your problem like how do you think
19:09about the configuration of of mad
19:10Geniuses that you actually really want
19:13to have and I think I want an ensemble
19:15of mad Geniuses I'll feel better about
19:16that yeah I think that's that's probably
19:19the correct direction and I think one
19:21that that model companies will have a
19:22hard time with right because I think
19:23they they need to have kind of one model
19:25to rule them all or directionally that's
19:26that's where it needs to go but if
19:28you're an Enterprise and you're thinking
19:29about I have high consequence decisions
19:31that I'm trying to drive here how do I
19:33do that and I think certainly if you're
19:35living in a chat world where the output
19:37is chat that's valuable and you want to
19:38think about like where do people agree
19:40and disagree but if you start thinking
19:42about actually the output of the model
19:45being DSL or JSON that's even easier then now
19:49can actually parse these things in very
19:50structured ways to understand not only
19:52the consensus view but maybe Divergent
19:54views and then then you're like kind of
19:56more authentically treating this as
19:58statistics and not calculus and then
20:00that forces you to flow that through to
20:01the UI and and how you're designing this
20:03for actual human users to interact with
20:05it in a way that's thoughtful
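The "ensemble of mad geniuses" with structured output could be sketched like this: ask several models for a JSON answer, parse the responses, and treat them statistically — surface the consensus view and flag the divergent ones. The response format and the stubbed model outputs below are hypothetical.

```python
import json
from collections import Counter

# Stand-ins for three different models' structured outputs to one question.
responses = [
    '{"decision": "approve_claim", "confidence": 0.9}',
    '{"decision": "approve_claim", "confidence": 0.8}',
    '{"decision": "escalate_to_human", "confidence": 0.6}',
]

def ensemble_view(raw_responses):
    """Parse each JSON response and split consensus from divergent answers."""
    decisions = [json.loads(r)["decision"] for r in raw_responses]
    counts = Counter(decisions)
    consensus, n = counts.most_common(1)[0]
    divergent = sorted(d for d in counts if d != consensus)
    return {"consensus": consensus,
            "support": n / len(decisions),
            "divergent": divergent}

print(ensemble_view(responses))
```

Because the outputs are parsed rather than read as prose, the disagreement itself becomes a signal the UI can show a human for high-consequence decisions.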
20:08so you've said that you don't think that
20:12you know chat is the be-all end-all and it's
20:14quite limiting especially when you think
20:15about like these complex workflows the
20:17outputs that are most efficient for your
20:20end users uh what are you imagining in
20:23the near term that turns into is it
20:25assistance overlaid on the existing
20:28software that your customers use whether
20:30it's things they've built with you or uh
20:33software they already have is it just
20:35automatic workflows and outputs instead
20:39um help us imagine it a bit yeah I
20:42think the ideal uh visualization of it
20:44is is something like I have an
20:46application State and I have an intent
20:48as a user uh combine those two things to
20:51give me a new application state
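The "application state + intent → new application state" loop can be sketched as below, with the model returning a structured patch rather than words. The patch format and state shape here are invented for illustration; a real system would validate the model's output before applying it.

```python
# Sketch: the LLM's answer to a user intent is not prose but a patch
# ({'set': {...}, 'append': {...}}) that the application applies to its state.

def apply_intent(state: dict, patch: dict) -> dict:
    """Apply a model-emitted patch to the app state, without mutating the input."""
    new_state = {k: (v.copy() if isinstance(v, list) else v)
                 for k, v in state.items()}
    for key, value in patch.get("set", {}).items():
        new_state[key] = value
    for key, items in patch.get("append", {}).items():
        new_state.setdefault(key, []).extend(items)
    return new_state

app_state = {"map_layer": "blank", "resources": []}

# In practice this patch would be the LLM's structured response to an intent
# like "generate a course of action for this evacuation".
patch = {"set": {"map_layer": "evacuation_routes"},
         "append": {"resources": ["C-17", "field_hospital"]}}

print(apply_intent(app_state, patch))
```

Keeping the old state untouched makes the "hit a button and say yes" step natural: the user reviews the proposed new state before it replaces the current one.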
20:54now that can be very hard I think it
20:56depending on the workflow that might be
20:58super obvious I think if you're thinking
20:59about this like
21:01GitHub Copilot it's more obvious I have
21:03an application State and an intent and
21:04you know you you generate something for
21:06me now it becomes less obvious when some
21:08of these things get a little more
21:09complex and you may need a little bit of
21:10hinting and a little bit of user
21:12prompting kind of that's where I think
21:13the art is so so that would be that
21:15would be one piece of it then another
21:17piece of it is like okay let's just say
21:18that's too hard or doesn't fit
21:20the use case properly it's not
21:21just about it being hard
21:23you have the prompt and then the
21:27return is JSON or a DSL that is
21:29manipulating the application state right
21:31the whole point is not to give me
21:33answers but to change my app and then
21:35that starts changing how you think about
21:36interacting with these things it becomes
21:38a new UI layer you know kind of
21:41the most extreme version of this is like
21:43why have any UI at all if you have
21:45really beautifully done apis and you
21:47have let's call it data APIs and an
21:49ontology so the data that's
21:51actually going between these APIs is
21:52incredibly well modeled I think you can
21:54actually use the llm to generate a lot
21:56of the experiences that you want as an
21:58end user some people have been talking
22:00about this in the context of um you know
22:02everything that comes in a sort of
22:03programmatic agent-driven world
22:05where five years from now or ten years
22:07from now you just have agents that
22:08represent you as a user with a specific
22:10task interacting with other agents or
22:12APIs and to your point you really
22:14minimize the UI dramatically do you
22:16think that's that's the most likely
22:17future or how do you sort of think about
22:18where all this stuff is heading from a
22:20UI perspective it's hard to see so far
22:23in the future on this but what I think
22:24it definitely does when I think about
22:25the integration layer like when I like
22:27you look at like the Gorilla paper and
22:28can you you know can you
22:30fine-tune an LLM to basically tell you
22:33what API to call with what parameters
22:34like yes it turns out and so okay so if
22:37that's true what does system integration
22:39look like in the future that's going to
22:40be quite different so then I think it
22:42allows you to create more single panes
22:43of glass that are actually truly
22:45which is incredibly hard right now I
22:48think there's some subtle and
22:48interesting benefits here like one of
22:50the consequences of a hack we had a
22:51number of months ago was that I had an
22:53engineer who could build a feature uh in
22:56a couple hours that we had previously
22:58scoped it was on the road map it was a
22:59feature that was going to take like two
23:00months and two people and it's just
23:02simply because the amount of UI that was
23:03involved was so intensive you just
23:05replace the UI with language the whole
23:07thing changes so like that's that's one
23:09way of thinking about okay well what
23:10sort of UI are you not building today
23:11that you actually don't even have to
23:13build today and that you you probably
23:15have the tools the primitives in the
23:16back end or the application that you can
23:18now surface so I think that's that's an
23:21interesting place to go there like I I
23:22don't know about the extreme view of
23:24like look there's going to be no UI just
23:25build every UI custom every time but I
23:27think the ability for a user to get the
23:29last mile to be what they need is going
23:31to be really powerful it's a big unlock
23:33and I certainly think in the Enterprise
23:34context we live in I see that all the
23:36time where there's kind of there's no
23:37perfect solution for all the different
23:39kind of tugs you're getting from from
23:41the customer to to generalize that or
23:44the cost of generalizing it is so high
23:46that you can't actually meet the need
23:48but now they can make it specific
23:51actually to their needs yeah the systems
23:53integrator point that you made I think
23:54is super interesting because I think
23:55there's a lot of companies who are part
23:57of their defensibility is the fact that
23:58you basically had so much specialized
24:01data or integrations you know that would
24:02be the saps of the world or different
24:05Erp systems workday et cetera like a lot
24:07of these things have moats in part
24:08because it takes six months to implement
24:10them and to integrate against and
24:11customize them and to your point this
24:14can really simplify a lot of those
24:15things down in a pretty performant way
24:17so it's this pretty interesting macro
24:20shift that you're witnessing firsthand
24:22in terms of where your customers are
24:24I think so I think it's going to change
24:25a lot of things you think about like how
24:27much control it gives the customer in
24:29terms of like how do you manage your API
24:30surface area like people have tried to
24:32do this with like API buses or Bridges
24:35and it's like no that stuff's not really
24:36working because just having all
24:37your apis in a big list turns out
24:39doesn't help you but having something
24:40that allows you to think about how you
24:42string them together
24:43is is pretty transformative and you
24:45think about some of these big systems as
24:47you know yesterday I was seeing a note
24:48where I'm sure it's somewhat
24:49dramatized but Boeing and the Navy are
24:51fighting about data rights for you know
24:55yeah but that's kind of like this whole
24:58problem is just the consequence of the
24:59fact that it's hard to do
25:01this with the data rights and that
25:03actually the more that we get to a world
25:06where you can fine tune an llm that is
25:07the F-18 production design llm that's
25:10that's a very different world
25:12what do you imagine doing with that
25:14well how you manage it so the ability for a
25:18third party let's say the government that
25:19doesn't want to be locked in it's like
25:20how can I interrogate the design specs
25:22or um how do I understand how I'm going
25:24to do maintenance here so how much of
25:26that is just kind of locked up because
25:27the expertise is so difficult to acquire
25:30you know so part of it is when I'm
25:32turning over my first plane am
25:33I also turning over the LLM is that part
25:36of the value proposition that helps you
25:37manage this and the kind of hardened
25:39pipelines behind that and the training
25:40behind that so how do I give you more
25:42leverage over something that's insanely
25:43complicated yeah and you think about
29:45this system it's a set of
29:47complex subsystems behind it that
25:50have all been integrated together like
25:51how were they integrated and what if I
25:53want to swap out a component in the
25:54future how do I reintegrate that and so
25:56that's the part that's really hard and
25:57it maps to an Enterprise if you think
25:59about okay I need to swap out Workday
26:00for something else like what is it going
26:01to break what does it all touch how do I
26:03do that and I think that's
26:05going to get a lot easier
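The swap-out question here, what breaks and what it all touches, is essentially graph reachability over the integration map. A minimal sketch, with made-up system names rather than any real enterprise inventory:

```python
# Illustrative sketch of "what does it all touch": walk an integration
# graph to find every system transitively downstream of the one you
# want to swap out. The systems and edges here are invented examples.

def impacted_by(graph, target):
    """Return all systems reachable from target via integration edges."""
    seen, stack = set(), [target]
    while stack:
        node = stack.pop()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical integration map: edges point from a system to the
# systems that consume its data.
integrations = {
    "workday": ["payroll", "directory"],
    "directory": ["sso"],
    "payroll": [],
    "sso": [],
}
affected = impacted_by(integrations, "workday")
```

Swapping out "workday" in this toy map would touch payroll, the directory, and, through the directory, SSO.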
26:06Chan can you talk about some of the
26:09technical challenges that you guys feel
26:11like you need to solve or that you're
26:13working on now for customers to make
26:16I think the key one is more
26:18conceptual it's realizing that it is a
26:20stochastic Genie so you know where I
26:22want to invest the most in the tooling
26:26um kind of like a robust eval IDE
26:29environment that enables people to
26:31unlock the value of this and and like
26:33how do I develop trust and put these
26:35kind of co-pilots through
26:36probation how do my users think about
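One way to picture the "probation" idea is an eval harness that scores a copilot against expected checks and only promotes it past a threshold. A hypothetical sketch, not Palantir's actual eval tooling:

```python
# Hypothetical "probation" harness for a copilot: run prompts through
# the model, score outputs against checker functions, and promote only
# once the pass rate clears a threshold. All names are illustrative.

def run_probation(model, eval_cases, pass_threshold=0.9):
    """Score a model against (prompt, checker) pairs."""
    passed = 0
    for prompt, checker in eval_cases:
        output = model(prompt)
        if checker(output):
            passed += 1
    score = passed / len(eval_cases)
    return {"score": score, "promoted": score >= pass_threshold}

# Toy stand-in "model" and eval set, purely for illustration.
toy_model = lambda p: p.upper()
cases = [
    ("reset pump 7", lambda out: "PUMP" in out),
    ("check valve a", lambda out: "VALVE" in out),
]
result = run_probation(toy_model, cases)
```

The log of passes and failures is what builds the trust he describes: the copilot earns wider authority as its track record accumulates.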
26:39another big part of the investment right
26:41now is in making the sort of co-pilot
26:44models accessible to Everyday users I
26:47think there's a fair amount of companies
26:49I see going after kind of let's call it
26:50the canonical data scientist as an
26:53archetype of like I want to fine tune a
26:55model and I'm going to go do that
26:57I see a smaller number trying to go
26:58after devs as an archetype but I want to
27:00go after the head of Maintenance
27:02um at an institution who's like look I
27:04know all these things and I
27:07I don't need all the knobs exposed to me
27:09but I need you to have an opinion on
27:11all those knobs underneath that and I
27:13can get my whole team to help generate
27:15the QA pairs like there's a lot
27:17that I'm willing to put into this that
27:19revolves around my expertise help me
27:20get a co-pilot in production that's
27:22affecting the lives of my users and I
27:25think that's just hard like a
27:26lot of hard systems integration like how
27:28do I integrate like all those building
27:29blocks exist how do I make that a very
27:32smooth workflow that you can trust and
27:34actually use and I don't feel like
27:36we've solved all the kind of
27:38statistical differences here I
27:40think that's the more you kind of pull
27:42on that thread the more you realize like
27:43okay like this this way of approaching
27:45it the way we would have
27:47done with traditional code you actually
27:49have to account for you need more
27:51defensibility in your thought process
27:52here because it's not traditional code
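The Q&A-pairs idea above, where the head of maintenance's team contributes examples built on their expertise, usually ends up as a fine-tuning dataset. A sketch under stated assumptions: the chat-style JSONL layout is a common convention for tuning APIs, and the questions and answers are invented:

```python
# Illustrative sketch: pack expert-contributed Q/A pairs into a
# chat-format JSONL fine-tuning dataset. The field names follow a
# common convention; the maintenance content is entirely made up.
import json

def to_jsonl(qa_pairs):
    """Serialize (question, answer) pairs as one JSON record per line."""
    lines = []
    for question, answer in qa_pairs:
        record = {"messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

pairs = [
    ("What torque spec for the aft panel bolts?", "42 Nm, per rev C."),
    ("When is bearing 12 inspected?", "Every 600 flight hours."),
]
dataset = to_jsonl(pairs)
```

The point is that the expert supplies domain knowledge in a form their whole team can produce, without any of the tuning knobs being exposed to them.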
27:55do you feel like you are doing a lot of
27:57customer education on how to like absorb
28:00this or is it more like you need to do
28:02that work in the product so that they
28:04don't have to understand like the
28:05stochastic output we have to do both
28:08um and and even internally like you know
28:09it's like as engineers ramp up and
28:11the more they play with it
28:12the more they kind of say okay I'm
28:13starting to get my hands around it
28:17um so so I think part of it is like a
28:19lot of customers aren't seeing past chat
28:20right now everything that's
28:22interesting is kind of like a chatbot
28:23and I think for some of the most
28:25sophisticated customers it actually scares
28:26them because they're like well you know
28:28like how could I trust the output the
28:30textual output of this thing to make a
28:32decision of this sort of consequence and
28:34like getting them to realize that
28:35actually we're you know you should be
28:37thinking about how this this manipulates
28:38your application layer and how you can
28:40be using that and then how do you
28:42validate the outputs of that now that's
28:44kind of fitting into the context of your
28:46state machine which is probably my
28:47biggest comment on agents is like I
28:49almost can't use that word because the
28:51connotation of it has come to be the
28:53agent has to come up with its own kind
28:56of plan like the planning is the
28:57exoticism of the agent as opposed to
28:59really the practicalities of an
29:02Enterprise is there's either an implicit
29:03or explicit and often it's kind of 50/50
29:05or some combination of implicit and explicit
29:07state machine that represents that
29:09Enterprise so the idea that you're just
29:11going to have an agent that kind of
29:12comes up with a plan and does things is
29:14it's not going to meet reality
29:17but the idea that you're going to have
29:18an agent that has context of a part of
29:21the state machine understands its
29:23authorities the guardrails to the left and
29:24right of what states am I allowed
29:26to manipulate and you probably want to
29:27start pretty small like one state you
29:29have authority over one state transition
29:31that's it uh and then how do I build
29:33that up so that I'm linking these
29:34together to drive the real Automation
29:36and that's going to map pretty cleanly
29:38to the humans you have in the Enterprise
29:39like there's a human who probably owns
29:41that one state transition and so now
29:42you're naturally building these
29:45human-in-the-loop systems and you're upgrading or promoting
29:48your individual contributor to being a
29:50manager of agents and that's a
29:52pretty safe way from a change management
29:54perspective it generates the log data I
29:55need to have trust that this is actually
29:57valuable and helpful and is assisting
29:59the agent and it's probably more akin to
30:01how Tesla has tackled
30:03um self-driving as opposed to you know
30:05Cruise like big long shot we're just
30:07going to go all or nothing it's like
30:09actually we're going to get a little
30:09more self-driving every single day yeah
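The guardrailed agent described here, one holding authority over a single state transition, can be sketched roughly like this. The state names and the authority check are illustrative assumptions, not a real AIP API:

```python
# Illustrative sketch: an agent scoped to exactly one state transition
# of an explicit enterprise state machine, with every decision logged
# so humans can audit and gradually extend its authority.

ALLOWED = {("alert_open", "alert_triaged")}  # the one transition granted

def agent_step(state, proposed_next, audit_log):
    """Apply a transition only if the agent holds authority for it."""
    if (state, proposed_next) not in ALLOWED:
        audit_log.append(("rejected", state, proposed_next))
        return state  # guardrail: refuse out-of-scope transitions
    audit_log.append(("applied", state, proposed_next))
    return proposed_next

log = []
s = agent_step("alert_open", "alert_triaged", log)   # within authority
s2 = agent_step(s, "alert_closed", log)              # out of scope
```

The audit log is the change-management piece: it is the record a human manager reviews before granting the agent its next transition.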
30:12instead of end-to-end magic planning we can
30:15figure out one state transition at a
30:17time with an existing ontology the
30:19business understands and and get
30:21feedback on along the way that that
30:23feels really promising especially with
30:25as you mentioned like things like the
30:26Gorilla paper if you're helping people
30:28fine-tune to their own data
30:30um even a small amount of that seems to
30:32like dramatically increase
30:34um quality of tool choice and such so
30:36sounds really exciting one of the things
30:38that you mentioned a few times is really
30:39different ways that Engineers or people
30:41on your team have gotten familiar with
30:42the technology and the capabilities of
30:44it and I feel like LLMs are very
30:47non-intuitive relative to both
30:49traditional engineering but also
30:52you know I feel like a lot of
30:53organizations have kind of had to adapt
30:55to thinking differently a little bit in
30:56terms of what are the actual
30:57capabilities and how does it work and
31:00you know where doesn't it work were there
31:02specific things that Palantir did early
31:05on to onboard people to sort of this new way
31:07of thinking or have people play around
31:08with the models in specific ways I mean
31:10you mentioned a hackathon I'm just sort
31:11of curious how this all got started and
31:13you know how you now incorporated into
31:15how people think about these problems
31:17yeah we've made it a huge organizational
31:19Focus really to experiment and play with
31:21these things so how could you
31:23bring this to your own
31:25um area of the product but as
31:26importantly like how do we build this
31:28into our tool chain so hey we are doing
32:31incident response on our stacks so
32:33let's build a
31:35co-pilot for ourselves to go manage that
31:37more efficiently and so by trying to
31:39solve your own problems with it you get
31:41much stronger intuition of like where
31:43it's amazing and where it falls off a
31:45cliff and how you have to think about
31:46that as you build it so aggressively
31:48adopting it to drive our own
31:50productivity has been one dimension of
31:52it and who came up with that mandate
31:54like who was the person who said okay
31:55let's go do this and that was really my
31:57push you know it's like I think that's
31:58part of the FDE mindset though you know
32:00it's like it's no credit to me it's
32:02almost an obvious consequence
32:04of our culture of like what's the
32:05ambition how do we aggressively dog food
32:07everything and like uh if it doesn't
32:09work for us why would it work for anyone
32:10else and so then I think the the other
32:13part of this is like we live in a world
32:15where we can't count on GPT-4 everywhere
32:17like we don't have that in classified
32:18environments right so it is a beautiful
32:21kind of easy button for lots of problems
32:24to go after but then when you start
32:27using open source models you're like oh
32:28this one works in this in this context
32:30for these sorts of problems and so then
32:32how do I start exposing Engineers to
32:33that because I think what maybe one of
32:34the easiest ways to understand the
32:37models is to use multiple models
32:38concurrently and understand the outputs
32:40of it and so kind of like our internal
32:42version of ChatGPT isn't one model it's
32:45actually multiple models and you're
32:47able to evaluate the output of
32:49each of these how long it took
32:51the amount of tokens it's putting out
32:53you're able to control and tune it and
32:54kind of almost inductively explore the
32:56surface area of these models one
32:58thing that was a claim that Alex made
33:00that is um a wonderful level of ambition
33:03is that Palantir as a company is aiming
33:06for the entire market share of AI what
33:09does that mean what does that Vision
33:10look like I mean I think it's it's
33:11exactly what he said it's like we
33:14um think that we have done the necessary
33:18pre-work essentially like the
33:20foundational technologies that we've
33:21built that allow Enterprises to securely
33:24protect their data to bring these
33:26llms to their private networks and then
33:28to deploy them operationally to get
33:30beyond kind of the dabbling and the
33:32innovation I can hearken back to a
33:34period when data was quite early and
33:35everyone had something like a data
33:38innovation lab now they're calling it a
33:40generative AI innovation lab but it's
33:41kind of structurally similar right now where
33:43people are kind of really working hard
33:45to think about what use cases and how
33:46will it be valuable if you think about
33:48this from the customer end uh and
33:50actually it's like I'll tell you what
33:51use cases the same use cases you were
33:53working on a year ago like the problems
33:55haven't changed you know you need to be
33:56applying these Technologies to the
33:58problems that are the most important
33:59problems in your business and
34:01where you are we've already made
34:03those investments and how do I manage
34:04and model your digital twin and I you
34:07know how do I already connect up the
34:10different decisions you're making
34:11together so like if you think about this
34:12kind of like connected company to that
34:13decision web idea no decision is truly
34:15independent you know like
34:17the decisions you made upstream
34:19affected it and the decision you're about to
34:20make is about to have severe
34:22consequences for the decisions that can
34:23be made downstream so if I can bring
34:25that visibility to you you're actually
34:27in some sense simplifying the problem in
34:29terms of how hard of a problem
34:33I'm asking the LLM to solve and how
34:34incrementally can I deploy these things
34:36to go faster and I think that's the
34:38compounding loop so we feel like
34:41the value is really going to accrete to
34:43folks who own the application layer and
34:45the Enterprises and we're going to go
34:47after that very hard
34:48so one thing we've talked a little
34:50bit about are some of the customer use
34:51cases on the dod side are you know more
34:53General sort of defense and related
34:55areas and one area that I know that
34:58Palantir added quite early on as a
34:59vertical is Healthcare and you mentioned
35:01some of the work that you did during
35:02covid I know that last year Palantir
35:05announced I believe a 10-year partnership
35:07with Cleveland Clinic to improve patient
35:09care and when I look at the implications
35:12of llms and generative AI to healthcare
35:15there's so much low hanging fruit
35:16because it's such a big people-intensive
35:19Services industry it'd be great to just
35:21hear your Viewpoint in terms of how you
35:22work with some of these Healthcare
35:23customers and what you think this coming
35:26wave of AI will do like what are the
35:27areas that will be most impacted by that
35:30yeah Healthcare is roughly a third of
35:31our business it's certainly I mean
35:33it's one of the fastest growing
35:35parts of our business as well and we do
35:37that uh you know in in a number of
35:39countries so the NHS in the UK and
35:41multiple Hospital Systems in the U.S
35:44and across both kind of dimensions of
35:46clinical care and operational care like
35:48hospital operations
35:50and I think that's relevant because the
35:52pace of adoption for these will
35:54vary and kind of the challenges you
35:56solve for the use cases with llms is
35:57different between them
35:59I think the operational context is is
36:02very obvious in the sense that it's just
36:04like operating any institution really
36:06you have kind of supply and demand you
36:09have Labor inputs to that you're trying
36:10to manage that so that you can deliver
36:11the product the care that you actually
36:13have and there it fits very cleanly to
36:15how we help you know auto companies
36:17get better at what they're doing or how
36:19we help manufacturers or energy
36:22and and there I think probably the
36:24archetypal pattern that I see across all
36:26Industries is something like you today
36:28have something if you squint at it it
36:29looks like an alert inbox where your
36:31state machine is essentially saying
36:33here's an exception or something that I
36:34need someone to think about
36:35the human then says there are so many
36:38exceptions I need some help prioritizing
36:40all these alerts and then you prioritize
36:42them and you deal with them
36:44what the promise of
36:46LLMs and what we're focused on with AIP
36:48is turning that from a place where I'm
36:49surfacing alerts to a human
36:52so instead of saying here's an alert
36:54what should we do about it it's here's a
36:56recommendation here's a staged scenario
36:59of what you could do about this alert
37:01do you approve or reject it and that's a
37:04concrete manifestation of the
37:07kind of co-pilot and what I really like
37:09about that is to the point of
37:10having done the foundational work
37:12for that to really work you need a
37:14primitive that is the scenario that is
37:16this like staged edit you know like a
37:18branch in git right
37:21that's a very powerful primitive and
37:22without it you lose a lot of
37:24capabilities if you have to build that
37:25at every customer over and over again
37:27so having something where the LLM
37:28could say look here's a branch and
37:30here's a staged set of edits and then I
37:32can have a human evaluate that in the
37:33operational context of how they view a
37:38that's one set of
37:40workflows and then on the clinical side
37:42I think it's really about reducing human
37:43toil like I don't think you're trying
37:44to get the LLM to decide what you
37:47know what to do for the patient here
37:49that's probably exactly the domain
37:50of the doctor here but what's in the
37:53clinical records in the clinical
37:54histories how do I drive the workflows
37:56what is it that the doctor can't get to
37:57or the nurse staff can't get to today
37:59because it's just too much toil
38:02that we can turn that into something
38:03that takes you know maybe 400
38:04milliseconds that's going to improve
38:06what's happening at the point of care so
38:08driving completeness in that picture uh
38:10and and I kind of see that as a natural
38:12dichotomy between operational and
38:14analytical workflows you know the other
38:16thing I was looking at today is how do I
38:18improve the throughput of a state machine
38:20like I was looking at this claims
38:21processing workflow and then I was
38:22looking at this like claims optimization
38:24like what's wrong with my state machine
38:25is almost the question and this
38:27second one is for like a manager
38:28who's looking down at this and they're
38:30saying like oh there's a cycle here and
38:31you know and so the sorts of
38:33manipulations you're trying to do with
38:34the llm is structurally more analytical
38:37you're not asking it to change the state
38:39machine you're not asking it to you know
38:40there's no magic button there to press
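The manager's "oh, there's a cycle here" observation is a classic graph question over the state machine's transitions. A small sketch, with invented claim-processing states:

```python
# Illustrative sketch of the analytical question described: given the
# transitions of a claims-processing state machine, detect whether a
# cycle exists (the "there's a cycle here" moment a manager spots).

def has_cycle(transitions):
    """DFS cycle check over a dict of state -> list of next states."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True  # back edge found: cycle
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(n) for n in transitions.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(node) for node in transitions)

# Hypothetical claims workflow: review and needs_info ping-pong forever.
claims = {"submitted": ["review"], "review": ["needs_info"],
          "needs_info": ["review"]}
```

This is the structurally analytical use he contrasts with operational ones: the LLM helps surface the diagnosis, but nobody is asking it to rewrite the state machine.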
38:43in the operational context you can get
38:44closer to something that's more like
38:45give me a recommended action that I can
38:48evaluate as a human and then there's
38:50kind of thresholding and learning over
38:52where might that be most valuable and I
38:54certainly think one thing that's
38:55promising about that is today we're so
38:56constrained by is it worth solving this
38:59alert you know because of what my
39:01human cost would be to go after solving it
39:04in a world where the LLM can process all
39:06the alerts and give you a staged set of
39:08actions now you're
39:09prioritizing not on the severity of the
39:11alert but on the possible consequences
39:13of the solution uh so that's already an
39:16improvement in the sort function and
39:17then you're much more likely to be able
39:18to get through all of them
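The two ideas in this answer, a staged scenario per alert and a queue sorted by the consequence of the proposed action rather than raw alert severity, can be combined in a small sketch. All names, alerts, and risk numbers here are invented for illustration:

```python
# Sketch: each alert carries a staged, branch-like set of edits a human
# can approve or reject, and the queue is sorted by the risk of the
# proposed action rather than the severity of the alert itself.

def stage(base, edits):
    """Stage edits on a copy, leaving the base record untouched."""
    branch = dict(base)
    branch.update(edits)
    return branch

alerts = [
    {"id": 1, "severity": 9, "action_risk": 0.9,
     "staged": stage({"pump_7": "in_service"},
                     {"pump_7": "replace_unit"})},
    {"id": 2, "severity": 3, "action_risk": 0.1,
     "staged": stage({"valve_2": "in_service"},
                     {"valve_2": "schedule_inspection"})},
]

by_severity = sorted(alerts, key=lambda a: -a["severity"])     # old sort
by_consequence = sorted(alerts, key=lambda a: a["action_risk"])  # new sort
```

Under the new sort, the low-risk inspection surfaces first even though its alert is milder, which is the improved sort function described: every alert gets a proposed fix, and human attention goes where approval is cheapest or riskiest to grant.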
39:20it's a really useful framing yeah I
39:22think that covered all the things we
39:23wanted to talk about I mean it's a
39:24really great overview of what Palantir
39:26is doing and some of the really exciting
39:27initiatives and customers that you work
39:29with thank you so much for joining us
39:30today on No Priors it was really a lot of
39:32fun a lot that we learned so thank you
39:34so much Sarah thanks for having me it's