00:05 Hi listeners. For potential AI founders: my early-stage AI fund, Conviction, is accepting applications for its Embed accelerator for two more days. Embed offers $150,000 on an uncapped SAFE, more than half a million dollars of free compute and API credits, a hand-selected set of peers, and access to leading founder and research mentors. Apply at embed.conviction by March 1st.

00:27 Hi listeners,
and welcome to another episode of No Priors. Today we're excited to be talking to the CTO of AMD, Mark Papermaster. Mark has had a storied career in chips and hardware, with previous leadership positions at IBM, Apple, and Cisco. We're excited to have Mark on to get into GPUs and the competition that's been driving this industry. Welcome, Mark.

00:48 Thanks, Sarah. Glad to be here with you and Elad.

00:50 Can you start by telling us a bit about your background? You've worked on all sorts of interesting things, from the iPhone and the iPad to the latest generation of AMD supercomputing chips.

00:59 Oh, sure.
01:02 I've been around a while, so what's really fun is that my timing was pretty good getting into the industry. As an electrical and computer engineering grad from the University of Texas, I got really interested in chip design, and it was back at a time when chip design was radically changing the kind of technology everyone uses today. CMOS was just coming into production usage, so I got on IBM's very first CMOS projects and created some of the first designs. I got to get my hands dirty and do just about every facet of chip design, and I had a number of years at IBM where I took on different roles, including driving microprocessor development, first across their PowerPCs, which meant working with Apple and Motorola, as well as the big iron, the big computing chips that we had in the mainframe and in the big RISC servers. So I really got all facets of technology there, including working on some of their server development. But then I shifted over to Apple; Steve Jobs hired me in to run the iPhone and iPod. I was there for a couple of years, but it was a time of great transition in the industry, and for me it was a great opportunity, because I ended up in fall 2011 taking the role here at AMD of being both CTO and really running technology and engineering, right at a point where Moore's law was starting to slow down, and so tremendous innovation was needed.

02:45 Yeah, I want to get into that
and sort of what we can expect in terms of computing innovation if we're not just cramming more transistors onto chips, or we're unable to do that. Every one of our listeners, I think, has heard of AMD, but can you give a very brief overview of the major markets you serve?

03:03 Sure. So AMD is a storied company; it's been around well over 50 years. It started out as a second-source company, really bringing second sources of key components and x86 microprocessors. But fast-forward to today, and it's a very, very broad portfolio. When Lisa Su, our CEO, and I were brought into the company just over 10 years ago, it was with a mandate to get AMD back to very strong competitiveness. So we started with the CPU line, brought the CPU back to being very competitive, and then went really across the portfolio; just in February of 2022 we acquired Xilinx, which expanded the portfolio further. So today AMD creates the world's largest supercomputers. It's got a massive install base in the cloud, so many of the cloud operations you're running are running on AMD EPYC x86 CPUs. In gaming we're huge: we're underneath all the Xboxes and all the PlayStations, as well as many of the gaming devices you buy when you buy your add-in boards. And then there are embedded devices, with all of that rich Xilinx portfolio as well as embedded x86. We also acquired Pensando, which extends the portfolio right into the networking interconnect that we need as we scale out these workloads. So, a very, very broad portfolio.
04:42 Yeah, AMD has had a pretty amazing run over the last decade-plus since you joined. One of the things that you folks have really emphasized over the last couple of years as well is AI, and there's been a big shift, both in terms of the adoption of AI over the last decade or so with traditional CNN, RNN, and other types of neural-network architectures, and also in terms of this shift to transformers and diffusion models and everything else. Can you tell us a little bit more about what initially caught your attention in the AI landscape, how AMD started to focus more and more on it over time, and what sort of solutions you've come up with?

05:13 You bet.
05:16 Well, we all know the AI journey has been going on since really the race began, when the application space for AI opened up, and GPUs were obviously pivotal there. Look at the key work that Hinton had done in showing how GPUs could drastically improve the accuracy of image recognition and natural language processing; that's been known for some time. So what we did at AMD is we saw the opportunity right away; the question was plotting our course to be that strong player in AI. It was a very thoughtful and deliberate strategy, because at AMD we had to turn the company around. If you look at where AMD was from 2012 through really 2017, largely all of the revenue was based on PCs and then gaming. So it was about making sure that the portfolio, the building blocks, were competitive. Those building blocks had to be leadership products; they had to attract people to get onto the AMD platform for high-performance applications. So first we actually had to rebuild the CPU roadmap, and that was the Zen microprocessors that we released in 2017, in both PCs with our Ryzen line as well as EPYC, our x86 server line. That started the revenue ramp for the company and started extending our portfolio. And right about that
time, in parallel, as we saw where heterogeneous computing was going (we had called the ball on heterogeneous computing before myself, before Lisa ever joined the company), AMD had made a great acquisition of ATI that brought GPU into the portfolio. It's one of the big reasons I was attracted to AMD in this role: it was really the only company that had both a very strong CPU portfolio and a GPU portfolio, and to me it was clear that the industry needed that powerful combination of the serial, scalar computing of traditional CPU workloads and the massive parallelization that you get from a GPU. So we started with that heterogeneous compute and created an architecture around it. We've been shipping CPUs and GPUs combined for PC applications longer than anyone; we started shipping those in 2011 with what we call APUs, accelerated processing units. Then for big-data applications we started with HPC, the kind of high-performance compute technology that's in national labs and oil-exploration companies. We focused first on the big government bids that ended up leading to supercomputer wins, so that we now have AMD CPUs and AMD GPUs underneath the world's largest supercomputers. But that work started years ago, and it was equally a hardware and a software effort. We've been building that hardware and software capability, and it really culminated on December 6th of last year, when we announced our flagship, the MI300, which is just a beast. It takes on high-performance compute with one variant we have, and it takes high-performance AI, for both training and inference, head-on with a variant optimized for those AI applications. So it's been a long journey, and we're really pleased to be where we are, with our sales taking off.

09:11 No, it's
fantastic! I mean, I guess when you launched the MI300 you had public commitments from Meta and Microsoft, for example, to purchase it, and you just mentioned that there's a series of applications that you're pretty excited about. Can you tell us more about which AI applications and workloads you're most excited about?

09:29 Sure. So if you think about where the
bulk of AI is today, you're still seeing tremendous capital expenditures in building up the accuracy and capabilities for large-language-model training and inference. So it's the likes of ChatGPT, of Bard, and the other LLMs that you can ask just about anything, because they're trying to ingest the vast amount of data that's out there and can be trained upon, really with an ultimate goal of artificial general intelligence, of AGI-type capability. And so that is where we focused the MI300: to start with that halo product that could take on the industry leader. In fact, MI300 has done that: it's competitive on training, and it leads in inferencing. It has over a 2x advantage if you look at FP16 FLOPS, which is a metric that generally everyone can run; it's got a tremendous performance advantage, and we did that very purposely. We created very efficient engines for the math processing that you need for that training or inference processing, but we also brought the memory that you need for more efficient computing. So that's more computing at less power and less rack space than you need with the competition.

10:59 A big front of competition
is, as you just pointed out: there's overall performance, there's efficiency, and then there's the software platform, like CUDA, ROCm, etc. How do you think about the investment in the optimized math libraries, and how do you want developers to understand your approach versus competitors?

11:20 Yeah, you're so
right, Sarah; it's multifaceted to be able to compete in this arena. You see many startups going after the space, but the fact is, the bulk of inferencing done today is done on general-purpose CPUs: not the huge LLM inferencing, but general inferencing for AI applications. For large-language-model applications it's almost all on GPUs, because that's where the software and developer ecosystems are. So we've been competitive on CPUs; we've been gaining share at a rapid clip because we've got a very strong CPU, generation after generation, released on the schedules we've laid out for the industry. But for GPUs it did take us until now to develop really world-class hardware and world-class software. What we've done is ensure that, because we're a GPU, it should be easy to deploy, really making sure we leverage having all the GPU semantics: if you're a coder, it's just easy to code, even if you're using the lower-level semantics. But we also support all of the key software libraries that are out there. When you think about the frameworks, whether it be PyTorch (we're a founding member of the PyTorch Foundation), whether it be ONNX, whether it be TensorFlow, we are out there very closely working with developers. And so
now that we have a competitive and leadership offering, what you'll see when you're deploying with AMD is that it's very easy. Let's say you're using Hugging Face, with any of the thousands and thousands of open-source LLMs out there. Well, we partnered with Clem's team: as they release any of those language models, they're testing on AMD with our Instinct GPUs, equally as they're testing on Nvidia. We've really done the same thing with PyTorch, where we're one of two qualified offerings, so all of that testing is being done routinely, with regression testing run literally every night on any software release. The other thing that's key is to learn from deployments. We've had early engagements like Lamini, who's running on AMD; they've been offering services for getting onto AMD and running your LLMs on their cloud, on their rack configurations, and they've already been working with customers. And as you saw from the other people on stage with us at our December event, we're in there with a key hyperscaler, we're also being sold through many OEMs, and we're working directly with customers. There's nothing like the feedback from key customers running on your platform to speed us up in ensuring that we can be easily deployed and that it's a seamless process.
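As a concrete illustration of the "GPU semantics" point above (a minimal sketch of my own, not AMD's documentation, and it assumes a local PyTorch install): ROCm builds of PyTorch expose AMD Instinct GPUs through the same `torch.cuda` API that CUDA builds use, which is what lets existing framework-level code run largely unchanged.

```python
import torch

# On ROCm builds of PyTorch, AMD GPUs are surfaced through the familiar
# torch.cuda API, so existing GPU code typically needs no porting.
device = "cuda" if torch.cuda.is_available() else "cpu"

# torch.version.hip is a version string on ROCm builds and None otherwise;
# this is the usual way to tell the two backends apart at runtime.
is_rocm = torch.version.hip is not None

# The same tensor code targets NVIDIA or AMD hardware transparently:
x = torch.randn(4, 8, device=device)
w = torch.randn(8, 2, device=device)
y = x @ w  # a small matmul, the core operation behind LLM inference
print(y.shape, "ROCm" if is_rocm else "CUDA/CPU")
```

On a machine without any GPU this falls back to CPU, which is part of the appeal: one code path covers all three targets.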
14:44 Yeah, Lamini is a portfolio company for me, and Sharon and Greg are great. I think it's an indication of you guys having a big ecosystem of software developers and machine-learning people that want to see competition and more heterogeneous compute out there for these AI applications.

15:00 Sarah, you cannot underestimate that. It tells you that it was a very constrained environment; there was a lack of competition, which is bad for everybody, by the way, because without competition you really end up with a stagnant industry. You can look at the CPU industry before we brought competitive and leadership products: it was really getting stagnant, with just incremental improvements. The industry knows that, and we've had tremendous pull and partnership, and we're very appreciative of that. In return, we're going to keep providing generation after generation of competitive products.
15:39 For such a huge software stack like ROCm to be open source: talk about that philosophy.

15:43 No, it's a great question, and it's very near and dear to us, because we are, as I mentioned, all about collaboration; that's just such a strong part of our culture, and what open source does is open up technology to the community. If you look at the history of AMD, it's been very focused on open source. The compiler for our CPUs is LLVM; it's open source. LLVM is underneath our compilers on our GPU, too, but more than just the compiler on the GPU, we've opened up the ROCm stack. It is our enabling stack, and it was a huge piece in our winning supercomputing bids with such large installations. Why is it our philosophy? By the way, Xilinx had exactly the same philosophy, so bringing Xilinx and AMD together in 2022 did nothing but deepen that commitment to open source. But, Sarah, the point is we're not about locking someone in with a proprietary walled-garden software stack. What we want is to win with the best solution. We're committed to open source, and we're committed to giving our customers choice. We expect to win by having the best solution, but we're not going to lock our customers in; we're going to win on merit, generation in and generation out.

17:15 I guess one of the areas that I
think is evolving very rapidly right now is the clouds for AI compute. There are obviously the hyperscalers, Azure from Microsoft, AWS from Amazon, and GCP from Google, but there are also other players that have been emerging: Baseten, Together, Modal, Replicate, etc. One could argue that they're providing differentiated services in terms of different tooling, API endpoints, and so on that the hyperscalers don't currently have, but also that in part they have access to GPUs, and there's a GPU shortage, so that's also driving part of the utilization. How do you think about that market as it evolves over the next three or four years, as GPUs perhaps become a bit more accessible and the shortages or constraints fall away?

18:02 Well,
that's definitely happening; the supply constraint will go away, and we'll be a part of that. We're ramping up and shipping as we speak on our Instinct line, and it's going quite well, according to plan. But moreover, to answer your question, I think the way to think about it is that it's just breathtaking how rapidly the market is expanding. I said earlier that most of the applications today, which started with generative AI and these LLMs, have been largely cloud-based, and not just cloud-based but hyperscaler-based, because such a massive cluster is required, not just for the training but frankly for quite a bit of that type of generative-AI LLM inferencing, which also runs on these massive clusters. But what's
happening now is we're getting application after application that is just taking off nonlinearly. What we're seeing is a proliferation: people are understanding how they can tailor their models, how they can fine-tune them, how they can have smaller models that don't have to answer any question you have or support any application, but might be just for your business and your area of exploration. That allows a tremendous variety in the size of compute and in how you need to configure the cluster. So it's a rapidly expanding market, with application-specific configurations needed for your compute cluster, and it's moving even further, not just from the massive hyperscalers to what I'll call tier-2 data centers, but it just keeps on going, because when you think about applications which are really bespoke, they can be run at the edge, right on your factory floor, where you need very low latency, putting the inferencing right at the source of data creation, and right on end-user devices. So we've added our AI inference accelerators right onto our PCs; we've been shipping them throughout all of 2023, and at CES this year we already announced our next generation of AI-accelerated PCs. And then, of course, with our Xilinx portfolio across embedded devices, we're getting a lot of pull from industries that have bespoke inferencing applications, in a plethora of embedded applications. So with that trend, we're going to see more of these tailored compute installations, in an attempt to service this ballooning demand.

20:43 Yeah, that
makes a lot of sense. I mean, I guess a lot, or a subset, of inference is going to push to the edge, and obviously we'll have things on-device, on laptops as well as phones, in terms of where certain small models will be running. And then it seems like there may be some potential set of constraints for larger models or larger data centers, at least in the short run. What are the main drivers of the constraints on the GPU supply side? I've heard things around packaging, I've heard things around TSMC capacity, I've heard a mix of potential drivers of constraints, and some people say the next constraint after that is whether you have enough power into data centers to actually run these. I just don't know what's real in all this, and I'm a little bit curious how to think about what the constraints are and when the supply side comes a bit more into balance.

21:29 Yeah, supply and demand is frankly
something that any chip manufacturer has to manage; you have to secure your supply. Look at the pandemic: we actually had a tremendous run on our devices that stretched our supply chain, because the demand for PCs went way up with people working from home, and the demand for our x86 servers went way up. So we were in scramble mode during the pandemic, and we did very well. We had shortages of substrates, and we secured more substrate manufacturing capability. We worked closely with our primary wafer foundry supplier, TSMC; we have such a deep partnership with them, and have had for decades, that if we get out ahead of it and understand the signals, we're generally able to meet the supply, or if there is a shortage, it's generally well contained. And so
what's happening with AI is, yes, it's clear that we're seeing this massive increase in demand, and the fabs are responding. But you can't think of it just as wafer fab; you're absolutely right, it's also the packaging. Ourselves and our GPU competitor both use advanced packaging. I'll show you; I don't know if it'll come across on camera, but this is our MI300, and what you see is a whole set of chiplets: smaller chips with, say, a CPU function, or I/O and a memory controller. It can be the CPU: for the variant we have that focuses on high-performance compute, we literally drop our CPU chip right into that same integration, with all the high-bandwidth memory around it to be able to feed those engines. Those are connected laterally, and on the MI300 we connect those devices vertically as well. So it's a complex supply chain.
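To see why that chiplet approach pays off, here is the classic first-order yield argument, sketched with a simple Poisson defect model and purely illustrative numbers (my assumptions, not AMD's actual process data): small dies are exponentially more likely to come out defect-free than one large monolithic die.

```python
import math

def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson yield model: probability that a die has zero defects."""
    return math.exp(-defects_per_mm2 * area_mm2)

# Illustrative numbers only:
D = 0.002          # assumed defect density, defects per mm^2
monolithic = 800   # one large 800 mm^2 die
chiplet = 100      # eight 100 mm^2 chiplets covering the same silicon area

# A defect anywhere on a monolithic die scraps the whole die; with
# chiplets, each small die is tested ("known good die") before packaging,
# so a defect only scraps one cheap chiplet.
print(f"monolithic yield:  {die_yield(monolithic, D):.1%}")  # ~20%
print(f"per-chiplet yield: {die_yield(chiplet, D):.1%}")     # ~82%
```

Real cost models also fold in the extra packaging steps and the inter-chiplet interconnect, which is why the advanced packaging Papermaster describes becomes its own supply-chain consideration.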
But it's one that we are very, very good at. We're a fabless company; we've been fabless for coming on 18 years now, so we've got it down. Hats off to the AMD supply chain team. And I think, overall, as an industry, you'll hear that generally we're going to move beyond those types of constraints. Now, you mentioned power: this, I think, is ultimately going to be a key constraint, and you see all the major operators looking for sources of power. For us, as a developer of the engines which are consuming that power, it brings tremendous focus on the energy efficiency that we can drive into each generation of our design, and we are committed to that; it's certainly a very top priority.

24:25 One thing
you said before, Mark, is that you were actually excited about the innovation at the end of Moore's law, and that being a reason you actually wanted to go to AMD. What directions of innovation should we expect investment in? I don't know if it's too deep to ask you to give us a layman's understanding of, say, 3D stacking, but I think it is really interesting to think about at a time when it's not obvious where to go.

24:52 Well, Sarah, it's
a great question. The reason I was so attracted to AMD is, one, it had a storied history of being a disruptor in the industry, and I certainly felt very strongly that AMD could disrupt with very strong CPUs and GPUs. But more importantly, there was putting the pieces together: the idea of chiplets was just coming together, there was early exploration of it around that time, and the engineering team here at AMD was able to really get the team and the key leadership rallied around it and drove that innovation. The reason it's so important is that when Moore's law slows down, the easy way to think about it is this: it used to be that the chip technology itself, the foundry going from one generation to the next, did most of the heavy lifting. You could just bank on that new semiconductor technology node shrinking your devices, giving you more performance, at less power, and at the same cost. That's what Moore's law was about. With Moore's law slowing, you still get those device improvements, but it costs more, and your power is not coming down as much as it used to. You're still getting that integration; you're certainly still able to pack in more devices. But it demands more
innovation; it demands what I call holistic design. You're going to rely on those new transistor devices, the new foundry nodes, but also on how you use heterogeneous computing, meaning bringing the right compute engine to the right application: a CPU, a GPU, or a dedicated engine like the super-low-power AI acceleration we have in our PC devices and our embedded devices. So it's about getting tailored engines for the right application, leveraging chiplets that you combine, putting each of those chiplets, each of those functions, on whatever is the best technology node for it. And then, frankly, holistic design means you've got to keep going right up through the packaging: how you package it together, how you interconnect it, and how you think about the software stack. The optimization has to be the full circle, from transistor design all the way up through the integration of your computing devices, equally with a view of the software stack and applications. What I'm thrilled about, along with all the engineers I work with at AMD, is that we have that opportunity; we have the building blocks, and we are built on collaboration. It's just such a part of our culture that we don't need to develop the entire system; we don't need to be the ones developing the application stack and the end applications. What we do is partner incredibly deeply and ensure that the solution is optimized end to end.

28:04 I think everybody is very
suddenly interested in the chip industry from a strategic perspective as well. I think everybody's thinking more about the supply chain, from the TSMC near-monopoly to the idea of fab security in an increasingly complex geopolitical environment. How does AMD prepare for this, or think about these issues?

28:25 Well, you know, you have to
think about these things. We are very supportive of working with certainly the US government, and other governments across the world, which have exactly that question: our countries are running now on chip designs that power such essential systems that it becomes a matter of national security to make sure there will be continuity of supply. So we build that into our strategy, and we build it in with our partners. We've been supportive of fab expansion: you see TSMC building fabs in Arizona, and we're partnering with them; you see Samsung building fabs in Texas. And it's not just in the US; they're actually expanding globally, with facilities in Europe and other parts of Asia. It goes beyond the foundry, too; it's the same thing with the packaging. As you put those chips onto carriers and need to interconnect them, you need that ecosystem to have geographic diversity as well. So the way we think about it is that it is a matter of importance for everybody to know that there will be geographic diversity, and we are heavily engaged; actually, I'm quite pleased with the progress we're making. It doesn't happen overnight; that's the difference between chip design and software. With software you can come up with a new idea, get that MVP out very quickly, and it can go viral, but it takes years of prep to expand the supply chain. See, the whole semiconductor industry was built up historically as a global industry, creating geographic pockets of expertise; that's how we got to where we are today. But when you have a more volatile macro environment, as we're facing today, with political tensions and economic tensions, it's just imperative that we spread out that manufacturing capability, and it's well underway.

30:41 I
guess one of the other things that's been happening a lot recently, and you've been involved with some of the most interesting and exciting new consumer hardware platforms, like the iPhone and iPad, and obviously AMD now is powering many interesting types of devices and applications: what's your point of view on the new hardware things that people are building today? There's the Vision Pro; there's Rabbit, which is sort of an AI-first device; there's Humane; there's things focused on the health side; there's Figure. It seems like there's suddenly an explosion of new hardware devices, and I was just curious to get your perspective on what you think tends to predict success for those types of products, and what tends to predict failure: how to think about this whole suite of new things and devices that are coming our way?

31:24 Well, that's a great
question. I'll give you one point to start, just from a technological point of view. I'm proud of the fact that chip design is part of the reason you're seeing all these different types of applications, because you're getting more and more compute capability that is shrunk down and draws such low power that you can see more and more of these devices with simply incredible computing and audiovisual capabilities. Look at Meta Quest and Vision Pro and things like that: this isn't happening overnight. Look at the earlier versions; they were simply too heavy, too big, without enough computing oomph, because if the lag between seeing a photon on the screen of your head-mounted device and it actually being processed is too high, you get physically ill wearing it while trying to watch a movie or play a game. So one, I'm very proud of the technology advances we've been able to make as an industry, and we're certainly very proud of the aspects of it we drive from AMD.

32:44 But the broader question you've asked is: how do you know what's going to be successful? The technology is an enabler, but if there's one thing I learned at Apple, it's that devices are successful when they really serve a need. They give you a capability that you love. It's not just "oh, it's incremental, I can do this a little better than something I did before"; it's got to be something that you love, and that creates a new category. So it's enabled by technology, but it is the product itself that has to really excite you and give you new capabilities.

33:19 I will mention one
thing: I mentioned the AI enablement in PCs. I think that's almost going to make PCs a new category, when you think of the kind of applications you'll be able to run with super-high-performance yet low-power inferencing. Imagine right now that I don't speak English at all and I'm watching this podcast; say it's broadcast live, and I click a live-translation button. I could just have it translated into my spoken language with no perceptible delay.
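That live-translation flow can be pictured as a streaming loop, where each caption chunk must clear a latency budget before the next one arrives. The sketch below is purely illustrative, not AMD software: the phrase table stands in for a real on-device translation model, and the 100 ms budget is an assumed threshold for "no perceptible delay."

```python
import time

LATENCY_BUDGET_MS = 100  # assumed ceiling for "no perceptible delay"

# Toy stand-in for a real on-device translation model.
PHRASE_TABLE = {
    "hello listeners": "hola oyentes",
    "welcome back": "bienvenidos de nuevo",
}

def translate_chunk(chunk: str) -> tuple[str, float]:
    """Translate one caption chunk and report elapsed milliseconds."""
    start = time.perf_counter()
    translated = PHRASE_TABLE.get(chunk.lower(), chunk)  # pass unknown text through
    elapsed_ms = (time.perf_counter() - start) * 1000
    return translated, elapsed_ms

def translate_stream(chunks: list[str]) -> list[str]:
    """Process chunks as they arrive, enforcing the latency budget per chunk."""
    out = []
    for chunk in chunks:
        text, ms = translate_chunk(chunk)
        assert ms < LATENCY_BUDGET_MS, "chunk blew the latency budget"
        out.append(text)
    return out
```

The point of the structure is that the budget is enforced per chunk, not per stream: a viewer notices each late caption, so an average-latency target is not good enough.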
34:00 And that's just one of a myriad of new applications that will be enabled.

34:02 Yeah, I think it's a really interesting time, because for many years (and AMD benefited from some of this, right, since you're also in the data center) there was so much compute load moving to servers: the era of cloud, the era of all these complex consumer social applications. I think in the new era of trying to create experiences, all these new application companies are fighting latency as a primary consideration, because you have the network, the models are slow, you're trying to chain models, and you have things you want to do on device once again. I just think that hasn't been a real design consideration for a while.

34:48 Sarah, I agree with you, and I
34:48for a while sir I I agree with you and I
34:51think it's it's one of the next set of
34:53challenges uh and that is really
34:56tackling the idea of not just enabling a
35:01high performance and AI applications on
35:03the cloud on the edge and these end end
35:05user devices but thinking about how are
35:08they working together synergistically
35:10writing applications that where you
35:12don't have that latency that uh you know
35:15that dependency on on a lag in Computing
35:18run it on the cloud it's going to be the
35:19most uh it's going to be the most
35:21efficient because you're optimizing this
35:23massive data center uh with the most
35:27but write the algorithm such that where
35:29you do have that need for super low
35:31latency you just need that instance
35:33response have those aspects of the
35:35algorithms be at the edge or in fact uh
35:38on your end user device and often when
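One way to picture that split is a small dispatcher that sends each inference task to the cloud by default, but keeps it local when the task's latency budget is tighter than the network round trip, or when the cloud is unreachable. This is a hedged sketch of the pattern, not an AMD API; the task type, the 80 ms round-trip figure, and the handler names are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

CLOUD_ROUND_TRIP_MS = 80.0  # assumed network + queueing cost of a cloud call

@dataclass
class InferenceTask:
    name: str
    max_latency_ms: float  # how quickly this task's result must arrive

def route(task: InferenceTask,
          on_device: Callable[[InferenceTask], str],
          in_cloud: Callable[[InferenceTask], str],
          cloud_reachable: bool = True) -> str:
    """Run latency-critical work locally; default the rest to the cloud."""
    if task.max_latency_ms < CLOUD_ROUND_TRIP_MS or not cloud_reachable:
        return on_device(task)  # budget too tight, or no connectivity
    return in_cloud(task)

# Toy handlers standing in for real local and remote model calls.
local = lambda t: f"{t.name}: device"
remote = lambda t: f"{t.name}: cloud"
```

With this shape, a 10 ms obstacle-detection task always stays on device, a 500 ms summarization task defaults to the cloud, and losing connectivity degrades to local execution rather than stalling.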
35:41 And often, when you need to react quickly, it just has to be that way. I mean, do you want to be in your vehicle, being driven with a high degree of autonomy, and suddenly get a loss of signal back to the cloud, and you just stop because it says "I don't have a signal"? You wouldn't stand for that.

36:02 So our audience is lots of engineers, founders, tech executives, consumers too. What do you want people to know about AMD's focus in 2024?

36:13 Well, this for us is a huge
year, because we have spent so many years developing our hardware and software capabilities for AI. We've just completed AI-enabling our entire portfolio: cloud, edge, our PCs, our embedded devices, our gaming devices (we're enabling our gaming devices to upscale using AI). 2024 is really a huge deployment year for us; now the bedrock is there, the capability is there, and I talked to you about all the partners we're working with. 2024 is for us a huge deployment year. I think we're often unknown in the AI space; everyone knows our competitor. But we not only want to be known in the AI space: based on the results, the capabilities, and the value we provide, we want to be known over the course of 2024 as the company that really enabled and brought AI across that breadth of applications. Yes, in the cloud, in those massive LLM training and inference workloads for generative AI, but equally across the entire compute space.
37:34 And I think this is also the year that that expanded portfolio of applications comes to life. I look at what Microsoft is talking about in terms of the enablement they're doing of capabilities from cloud to client, and it's incredibly exciting; many ISVs I've talked to are doing the same thing. And frankly, Sarah, they're addressing the very question you asked: how do I write my application such that I give you the best experience, tapping both the cloud and the device that's in your hand, or your laptop, as you're running the application? So it will be a transformational year, and we're so excited at AMD to be right in the middle of it.

38:22 Awesome. Looking forward
to the year ahead and seeing great things. Thank you so much for joining us.

38:28 Yeah, thanks for joining us.

38:30 Well, thank you both. Like I said, you two have done a wonderful job here with No Priors, and I'm very happy and appreciative that you invited us on; time with you is a real pleasure.

38:43 Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen; that way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.