00:00So here's my first question for you.
00:02very simple question.
00:04What makes you human?
00:08you both have to answer what makes you human?
00:10In one word. You get one word.
00:18Um To confirm you're both human.
00:20I'm gonna need you to confirm which of these boxes have a traffic light.
00:28Uh I think AI can do that too now.
00:31you were actually here nine years ago at our first Tech Live and I actually wanna roll the
00:36clip of what you said.
00:41with AI or machine intelligence in general is that it replaces drivers or doctors or
00:46but the optimistic view on this and certainly what
00:51backs up what we're seeing is that computers and humans are very good at very different things.
00:56So a computer doctor will out-crunch the numbers and do a better job than a
01:00human at looking at a massive amount of data and saying this.
01:03But on cases that require judgment or creativity or empathy,
01:07we are nowhere near any computer system that is any good at this.
01:15Partially right and partially wrong.
01:17It could have been worse,
01:18could have been worse.
01:19What's your outlook?
01:25I think the prevailing wisdom back then,
01:27was that AI was gonna um do the kind of like robotic jobs really well.
01:33So it would have been a great robotic surgeon,
01:35something like that.
01:35Um And then maybe eventually it was gonna do the,
01:39the sort of like higher judgment tasks.
01:43then it would kind of do the empathy and then maybe never,
01:46it was gonna be like a really great creative thinker. And creativity,
01:52the definition of the word creativity is up for debate,
01:56has in some sense been easier for AI than people thought. You can
02:01see DALL·E 3 generate these like amazing images um or write these creative
02:05stories with GPT-4 or whatever.
02:07Um So that part of the answer maybe was not right.
02:15I certainly would not have predicted GPT-4 nine years ago, um, quite how it turned out.
02:21But a lot of the other parts, like people still really wanting a human doctor,
02:24Uh That's definitely very true.
02:27And I wanna quickly shift to AGI.
02:32If you could just define it for everybody in the audience,
02:36I will say it's a system that can generalize
02:41across many domains that,
02:45would be equivalent to human work.
02:49Um That would produce a lot of uh productivity and economic value.
02:55we're talking about one system that can generalize across a lot of
02:59digital domains of human work.
03:03Why is AGI the goal?
03:08the two things that I think will matter most over the next decade or
03:13few decades um to improving the human condition,
03:17the most giving us sort of just more of what we want.
03:21are abundant and inexpensive intelligence,
03:25um, the more powerful the better.
03:28Uh I think that is AGI and then,
03:30and then abundant and cheap energy.
03:31And if we can get these two things done in the world,
03:34then uh it's almost like difficult to imagine how much else we could do.
03:40we're big believers that you give people better tools and they do things that astonish you.
03:44And I think AGI will be uh the best tool humanity has yet created.
03:50we will be able to solve all sorts of problems.
03:52We'll be able to express ourselves in new creative ways.
03:54We'll make just incredible things um for each other,
04:00for kind of this unfolding human story.
04:04it's new and anything new comes with change, and change is
04:07uh not always all easy.
04:10Um But I think this will be just absolutely tremendous upside and
04:18We're gonna, in nine more years,
04:20If you're nice enough to invite me back,
04:21you'll roll this question and people will say,
04:24how could we have thought we didn't want this?
04:28I guess two parts to that:
04:31when will it be here, and how will we know it's here?
04:37You can both predict. How long? I think we'll call you in 10 years and we'll tell you if you're wrong.
04:42probably in the next decade,
04:43but I would say it's a bit tricky because we,
04:47when will it be here?
04:49And I just kind of gave you a definition, but then often we talk about intelligence and you know,
04:54how intelligent is it or whether it's conscious and sentient and all of these terms.
04:59they're not quite right because they sort of define our,
05:04our own intelligence, and we're building something slightly different. And you can kind of see
05:08how the definition of intelligence evolves from,
05:12machines that were really great at chess and AlphaGo, and now the GPT series,
05:17and then what's next,
05:19but it continues to evolve and it pushes
05:22how we define intelligence.
05:25we kind of define AGI as like the thing we don't have quite yet.
05:29So we've moved, I mean,
05:30there were a lot of people who 10 years ago would have said that if you could make something like GPT-4, GPT-5,
05:34maybe that would have been an AGI and,
05:37and now people are like,
05:38it's like a nice little chat bot or whatever.
05:40And I think that's wonderful.
05:41I think it's great that the goalposts keep getting moved.
05:43It makes us work harder.
05:44Um But I think we're getting close enough to whatever that
05:49AGI threshold is gonna be that we no longer get to hand-wave at it, and the definition is gonna
05:54matter. So, less than a decade, for some definition.
06:01The goalpost is moving.
06:05You've used a word,
06:08when describing AGI,
06:09the term median human.
06:12can you explain what that is?
06:18I think there are experts in areas that are gonna be
06:24better than AI systems for a long period of time.
06:28you could come to like some area where I'm like,
06:31really an expert at some task, and I think
06:34GPT-4 is doing a horrible job there.
06:39But you can come to other tasks where I'm ok,
06:42but certainly not an expert.
06:45maybe like an average of what different people in the world could do with something.
06:50uh then I might look at it and say,
06:52this is actually doing pretty well.
06:54So what we mean by that is that in any given area,
07:00humans, uh, like experts in any area, can
07:05like just do extraordinary things.
07:06And that may take us a while to be able to do with these systems.
07:10But for kind of the more average case performance.
07:13me doing something that I'm like,
07:16maybe our future versions can help me with that a lot.
07:19So am I a median human uh at some tasks?
07:23And at some, clearly. At this,
07:25you're a very expert human and no GPT is taking your job anytime soon.
07:29That makes me feel that makes me feel a little better.
07:33How's GPT-5 going?
07:37Um We're not there yet,
07:40but it's kind of on a need-to-know basis.
07:43I'll let you know. That's such a diplomatic answer.
07:46I'm gonna ask Mira about all of this.
07:49I would have just said,
07:50here's what's happening.
07:52we're not sending him back here.
07:53Who paired these two?
07:55Whose idea was this?
07:56Um You're working on it?
08:00We're always working on the next thing.
08:05Just do a staring contest.
08:08That's what makes us human.
08:11Um All of these steps though,
08:15GPT-3, 3.5, 4 are steps towards AGI.
08:21Are you looking for a benchmark?
08:22Are you looking for?
08:24This is what we want to get to?
08:27before we had the product,
08:29we were sort of looking at how well these models were doing on
08:34academic benchmarks and,
08:37OpenAI is known for betting on scaling,
08:40throwing a ton of compute and data at these uh neural networks
08:45and seeing how they get better and better at predicting the next token.
08:50But it's not that we really care about the prediction of the next token,
08:53we care about the tasks in the real world to which this correlates
08:59And so that's actually what we started seeing once we put out um
09:03research in the real world and we,
09:07we built out products through the API, eventually through ChatGPT as well.
09:11And so now we actually have real world examples.
09:15We can see how our customers do in um specific domains,
09:19how it moves the needle for specific businesses.
09:25we saw that it did really well in um exams like
09:30SAT and LSAT and so on.
09:33So it kind of goes to our earlier point that we're,
09:36continually evolving our definition of what it means for these
09:41models to be more capable.
09:45as we increase the capability vector,
09:48what we really look for is reliability and safety.
09:53Uh these are very interweaved and it's very important to make systems that
09:58are increasingly capable,
10:00but that you can truly rely on and they are robust and that
10:05you can trust the output of the system.
10:07So we're kind of pushing in uh both of these vectors at the same time.
10:14as we build the next model,
10:17the next set of technologies,
10:18we're both continuing to bet on scaling.
10:22But we're also looking at,
10:25this other uh element of multimodality.
10:28Um because we want these models to kind of perceive the world in a
10:33similar way to how we do and,
10:36we perceive the world,
10:37not just in text but images and sounds and so on.
10:41So we want to have robust representations of the world
10:47in these models. Will GPT-5 solve the
10:51hallucination problem?
10:58um we've made a ton of progress on the hallucination issue um
11:05but we're still not where,
11:08where we need to be,
11:10we're sort of on the right track and it's,
11:14it could be that uh continuing in this path of reinforcement learning with human
11:20feedback, we can get all the way to really reliable outputs.
11:24And we're also adding other elements like retrieval and search.
11:29So you can um you have the ability to,
11:32to provide more factual answers or to get more factual outputs from the model.
11:37So there is a combination of technologies that we're putting together to kind of reduce
11:42the hallucination issue.
11:45I'll ask you about the data,
11:49maybe some people in this audience may not be thrilled about some of the data that you guys have
11:54used to train some of your models.
11:56Not too far from here in,
11:58people have not been thrilled.
11:59Uh publishers, when you're,
12:01when you're considering now, as you're working towards this,
12:08what are the conversations you're having around the data?
12:12So a few thoughts in different directions here,
12:15we obviously only wanna use data that people are excited about us using.
12:23we want the model of this new world to,
12:25to work for uh everyone.
12:28And we wanna find ways to make people say like,
12:31you know what I see why this is great.
12:32I see why this is like gonna be a new,
12:34it may be a new way that we think about some of these issues around data ownership
12:39and uh like how economic flows work.
12:42But we want to get to something that everybody feels really excited about.
12:45But one of the challenges has been that people,
12:48different kinds of data owners, have very different pictures.
12:50So we're just experimenting with a lot of things we're doing partnerships of different shapes.
12:55Um And we think that like with any new field,
12:58we'll find something that sort of just becomes a,
13:01a new standard also,
13:03uh I think as these models get
13:08smarter and more capable,
13:11we will need less training data.
13:13So I think there's this view right now,
13:14which is that, like,
13:17models are gonna have to train on every word humanity has ever produced or whatever.
13:23Technically speaking,
13:24I don't think that's what's gonna be the long term path here,
13:27like we have existence proof with humans that
13:30that's not the only way to become intelligent.
13:32Um And so I think the conversation gets a little bit um
13:38led astray by this because what,
13:41what really will matter in the future is like particularly valuable data.
13:45People trust the Wall Street Journal and they want to see content from it.
13:50And the Wall Street Journal wants that too.
13:51And we'll find new models to make that work.
13:55the conversation about data and the shape of all of this uh because of the technological
14:00progress we're making,
14:02it's about to shift.
14:04publishers like my mine who might be out there somewhere.
14:08They want money for that data is the future of this
14:12entire race about who can pay the most for the best data.
14:18that was sort of the point I was trying to make,
14:20I guess inelegantly.
14:22But you still need some.
14:27Like, the thing that people really like about a GPT model
14:32is not fundamentally that it knows particular knowledge,
14:36there's better ways to find that. It's that it has this larval reasoning capacity, and that's
14:41gonna get better and better.
14:42that's really what this is gonna be about.
14:44And then there will be ways that you can set up all sorts of economic arrangements as a user or as a company
14:49making the model or whatever to say.
14:52I understand that you would like me to go get this data from the Wall Street Journal.
14:56but here's the deal that's in place.
14:58So there will be things like that.
14:59but the fundamental thing about these models is not that they memorize a lot of data.
15:03So sort of like the model where right now you've got Bing integrated,
15:07it goes out, looks for some of that data and can bring back some of that.
15:13Back in the early days of the internet,
15:14there were a lot of conversations about what the different models could be, and we all kind of decided on
15:19the core framework, and there's different pieces in there.
15:21And we're all gonna have to figure that out for AI.
15:26you and Satya Nadella,
15:27your $10 billion friend: friends or frenemies?
15:32I won't pretend that it's like a perfect relationship, but nowhere near the frenemy category.
15:37Like we have our squabbles.
15:40It just seems like increasingly as you guys are releasing more and more products that they,
15:45they seem to compete in some places.
15:54I think that that's,
15:55that there's something core about this language interface that is a big
16:00deal and so there's gonna be a lot of people doing things for that and,
16:05and then there's other places like,
16:07we offer a version of API,
16:08they offer a version of API but like that just,
16:11that's like a very friendly thing and we all,
16:14we work it out so that we all benefit and we're all happy and,
16:16and we just want like we jointly want as much usage of our models uh
16:22So we're super aligned there.
16:27it's like it makes sense,
16:38you over the last number of months have started to roll out a lot more
16:42personalization to the model,
16:44You can put in custom instructions.
16:46get the voice now, and we're gonna bring out one of the,
16:50bring down the screen to hear one of these clips.
16:52But I wanted to play one of these conversations I had with ChatGPT.
17:00It's a slow screen. ChatGPT responds quicker.
17:05That's really embarrassing.
17:08We're gonna get that better though.
17:11We'll get it faster. That I can't help you with,
17:13but we'll get ChatGPT faster.
17:15I'm saying ChatGPT is faster than this,
17:18and I'm saying that's like a,
17:19it's not often that people say ChatGPT is faster than X.
17:25I've got a big interview with OpenAI executives and I'm nervous.
17:33Given your background as a technology columnist,
17:36you're well equipped for this.
17:39Do your homework on OpenAI's latest developments and trust in your expertise.
17:45they'll likely appreciate thoughtful and well researched questions and hey,
17:49everyone gets nervous.
17:51Take a deep breath and rock it.
17:58Are we rocking it up here?
18:02But what's so striking about that?
18:04First of all, I want everyone to know,
18:05that's the voice of ChatGPT.
18:07It's one of five.
18:13It sounds so natural.
18:15It knows about me because I've already put into custom instructions.
18:17I'm a tech journalist.
18:19It also knows I'm allergic to avocado.
18:20It's always putting that in there.
18:22I'm not asking about avocado.
18:26We got some work to do.
18:28a future, and this is what you're maybe trying to build here, where we have deep
18:33relationships with this type of bot. It's going to be a
18:37significant relationship,
18:41we're building the systems that are going to be everywhere in,
18:45in your educational environment,
18:46in your work environment.
18:49when you're having fun.
18:50And so that's why it's actually so important to get it right?
18:55And we have to be so careful about how we design this interaction so
19:01that it's elevating and it's fun and it,
19:04it makes productivity better and it enhances creativity.
19:10this is ultimately where we're trying to go.
19:12And as we increase the capabilities of the technology,
19:15we also want to make sure that,
19:18on the product side,
19:19um we feel in control of
19:24these systems in the sense that we can steer them to do the things that we want
19:29them to do and the output is reliable,
19:32that's very important.
19:34we want it to be personalized,
19:38as it has more information about your preferences,
19:41the things you like,
19:42the things you do um and the capabilities of the models
19:47increase and other features like memory and so on.
19:52it will become more personalized and that means
19:55it will become more useful and it's,
19:57it's going to become uh more fun and more creative, and it's not just one.
20:02Like you can have many such systems personalized for specific
20:08That's a big responsibility though.
20:10And you guys will be sort of in control of people's friends,
20:16and it gets to being people's lovers.
20:19How do you guys think about that control?
20:25we're not gonna be the only player here,
20:27like there's gonna be many people.
20:30we get to put like our nudge on the trajectory of this technological development and we've got
20:35Uh but, A, we really think that the decisions belong to sort of humanity,
20:41whatever you wanna call it.
20:42And, B, we will be one of many actors building sophisticated systems here.
20:46So it's gonna be a society wide discussion.
20:50and there's gonna be all of the normal forces,
20:52there'll be competing products that offer different things,
20:54there will be different kind of like societal embraces and pushbacks,
20:58there'll be regulatory stuff.
21:00Uh It's gonna be like the same complicated mess that any new
21:04technological birthing process goes through and then we,
21:07we pretty soon will turn around and we'll all feel like we had smart AI in our lives forever.
21:13that's the way of progress and I think that's awesome.
21:15Um I personally have deep misgivings
21:20about this vision of the future where everyone is like super close to AI friends, like more so than human
21:25friends or whatever.
21:26I personally don't want that.
21:28Uh I accept that other people are gonna want that.
21:33some people are gonna build that and if that's what the world wants and what we decide makes sense,
21:38we're gonna get that.
21:40I personally think that personalization is great.
21:44Personality is great,
21:45but it's important that it's not like personified, and that you know
21:51when you're talking to an AI and when you're not.
21:53We named it ChatGPT and not,
21:55it's a long story behind that,
21:56but we named it ChatGPT and not a person's name very intentionally.
22:00And we do a bunch of subtle things in the way you use it to like,
22:03make it clear that you're not talking to a person.
22:07I think what's gonna happen is that in the same way that people
22:12have a lot of relationships with people,
22:14they're gonna keep doing that.
22:15And then there will also be these like AIs in the world, but you kind of know they're just a different thing.
22:22Speaking of that, this is another question for you.
22:24What is the ideal device that we'll interact with these on?
22:28And I'm wondering if you,
22:30I hear you and Jony Ive have been talking.
22:33Did you bring something to show us?
22:38I think there is something great to do but I don't know what it is yet.
22:42You must have some idea,
22:45I'm interested in this topic.
22:47I think it is possible.
22:48I think most of the current thinking out there in the world is quite
22:53bad about what we can do with this new technology in terms of a new computing platform.
22:58And I do think every sufficiently big new technology
23:03enables some new computing platform.
23:06Um but lots of ideas, but like in the very nascent stage.
23:14I guess the question for me is is there something about a smartphone or ear
23:18buds or a laptop or a speaker that doesn't quite work right now.
23:24so much smartphones are great.
23:26Like I have no interest in trying to go compete with a smartphone.
23:31Like it's a phenomenal thing uh at what it does.
23:36But I think what AI enables
23:41is so fundamentally new um that it is possible. And maybe,
23:48maybe for a bunch of reasons it just doesn't happen.
23:51But I think it's like,
23:51well worth the effort of talking about or thinking about,
23:56what's possible now that, before we had computers that could think,
24:01or computers that could understand, whatever you wanna call it, was not possible.
24:04And if the answer is nothing,
24:06I would be like a little bit disappointed.
24:10it sounds like it doesn't look like a humanoid robot,
24:17I don't think that quite works.
24:19Speaking of hardware,
24:21are you making your own chips?
24:24You want an answer now?
24:28Uh Are we making our own chips?
24:30We are trying to figure out what it is going to take
24:36to deliver at the scale that we think the world will demand.
24:40Um And at the model scale that we think that the research can support,
24:43um that might not require any custom hardware.
24:49Um And we have like wonderful partnerships right now with people who are doing amazing work.
24:54Um So the default path would certainly be not to,
25:00I would never rule it out.
25:02Are there any good alternatives to NVIDIA out there?
25:06Uh NVIDIA certainly has something amazing,
25:11I think like the magic of capitalism is doing its thing and a lot of other people are trying and
25:16we'll see where it all shakes out.
25:17We had Rene Haas here from Arm.
25:19I hear you guys have been talking. Are you friends?
25:25Not as close as Satya.
25:27you're not as close as,
25:29Um, this is where we're getting,
25:34we're getting to the hard,
25:35we're about to get to the hard-hitting stuff.
25:36So um my colleagues recently reported you guys are,
25:41actually looking at a valuation of 80 to 90 billion and that you're
25:45expected to reach a billion in revenue.
25:48Are you raising money?
25:51Always, but not like this minute.
25:55There are people here with money.
26:00we will need huge amounts of capital to complete our mission and we have
26:05been extremely upfront about that.
26:08Um There has got to be something more interesting to talk about in our limited time
26:13here together than our future capital raising plans,
26:16but we will need a lot more money.
26:18We don't know exactly how much we don't know exactly how it's gonna be structured,
26:21what we're gonna do.
26:24it shouldn't come as a surprise because we have said this all the way through.
26:29Like it's just a tremendously expensive endeavor where,
26:32Which part of the business though right now is growing the most? Mira, you can also answer.
26:38Definitely on the product side.
26:40With the research team it's very important to have
26:44small teams that innovate quickly. On the product side,
26:48we're doing a lot of things.
26:49We're trying to push great uses of A I out there both on platform side and first
26:54party and work with customers.
26:56So that's certainly,
26:58And the revenue is coming mostly from that API,
27:03the revenue for the company?
27:10My subscription to ChatGPT Plus.
27:14How many people here actually are subscribers to ChatGPT Plus?
27:17Thank you all very much.
27:19You guys should make a family plan.
27:24It's serious because I'm paying for two, and we'll talk about it.
27:28This is what we're really here for tonight.
27:31moving out a little bit into policy and,
27:33and some of the fears. It's not like super cheap to run. If we had a way to, like,
27:39give you way more for the 20 bucks or whatever, we would like to.
27:45And as we make the models more efficient,
27:46we'll be able to offer more,
27:48it's not for like lack of us wanting more people to use it that we don't do things like a family,
27:53a family plan for like $35 for two people, that kind of thing.
27:59I gave you the sweatshirt.
28:02there's something we can do there.
28:04How do we go from the chat that we just heard that told me to rock it to one
28:09that can rock the world and end the world?
28:13I don't think we're gonna have like a chat bot that ends the world.
28:16But how do we go to this idea of?
28:19We've got simple chat bots. They're not simple,
28:21they're advanced, what you guys are doing.
28:22But how do we go from that idea to this fear that is now
28:26pervading everywhere.
28:32if we are right about the trajectory,
28:34things are going to stay on and if we are right about,
28:37not only the kind of like scaling of the GPTs but new techniques that we're interested in that
28:42could help generate new knowledge and someone with access to a,
28:46a system like this can say,
28:47like help me hack into this computer system or help me design
28:53like a new biological pathogen that's much worse than COVID or any number of other things.
28:57It seems to us like it doesn't take much imagination to think about
29:02scenarios that deserve great caution.
29:07we all come and do this because we're so excited about the tremendous upside
29:11and that the incredibly positive impact.
29:14And I think it would be like a moral failing not to go pursue that for humanity,
29:18but we've got to address and this happens with like many other technologies,
29:22we've got to address the downsides that come along uh with this.
29:27And it doesn't mean you don't do it,
29:28it doesn't mean you just say like, this AI thing,
29:33we're gonna like go full Dune and like blow up,
29:35and not have computers or whatever.
29:37Um But it means that you like,
29:39are thoughtful about the risks.
29:41You try to measure what the capabilities are and you try to build your own
29:45technology in a way
29:49that mitigates those risks.
29:50And then when you say like,
29:51here's a new safety technique,
29:52you make that available to others.
29:55And as you guys are thinking about building in,
30:02what are some of those specific safety measures you're looking to put in?
30:09you've got the capabilities and then there is always a downside whenever you have such
30:14immense and great capability,
30:16there's always a downside.
30:17So we've got a fierce task ahead of us to figure
30:22out what these downsides are,
30:26and build the tools to mitigate them.
30:32you usually have to intervene everywhere from the data to the
30:36model to um the tools in the product.
30:42And then thinking about the entire regulatory and um
30:46societal infrastructure that can kind of keep up with these technologies.
30:53what we want is to slowly roll out these capabilities
30:58in a way that makes sense and allow society to adapt.
31:04The progress is incredibly rapid and we
31:08want to allow for adaptation and for the whole
31:13infrastructure that's needed for these technologies to actually be absorbed
31:18productively to exist and be there.
31:21when you think about what are sort of the concrete safety
31:26um uh measures along the way,
31:31number one is actually rolling out the technology um
31:36and slowly making contact with reality,
31:38understanding how it affects um uh certain use cases
31:43and industries and actually dealing with the implications of that,
31:47whether it's regulatory, copyright,
31:50whatever the impact is actually absorbing that and dealing with that
31:55and moving on to more and more capabilities.
31:58I don't think that building the technology in a lab in a vacuum without contact with the
32:03real world and with the friction that you see with reality is a
32:07good way to actually deploy it safely and this might be where you're
32:13it seems like right now you're also policing yourself,
32:16You're setting the bar and,
32:18that's where I was gonna ask you.
32:19you seem to spend more time in Washington than Joe Biden's dogs right now and I'm sure
32:24I've only been twice this year.
32:26I think his dog has been, like, three days or so.
32:29but what is it specifically that you would rather the government and our
32:33regulators do versus you have to do?
32:37The point I was making,
32:38is really important that,
32:40that it's very difficult to make a technology safe in the lab.
32:46society uses things in different ways and adapts in different ways.
32:49And I think the more we deploy A I,
32:52the more A I is used in the world,
32:53the safer A I gets and the more we kind of like,
32:55collectively decide,
32:56here's a thing that is not an acceptable risk tolerance and this other thing that people are worried about,
33:05like we see this with many other technologies,
33:08airplanes have gotten unbelievably safe.
33:11even though they didn't start that way and it was,
33:14it was like careful,
33:15thoughtful engineering and,
33:17understanding why when something went wrong it went wrong and how to address it.
33:22the shared best practices there,
33:24I think we're gonna see in all sorts of ways that the things that we worry about with A I in theory don't
33:29quite play out in practice.
33:32There's just like a ton of talk right now about deep fakes and,
33:37the impact that's gonna have on uh,
33:40society in all these different ways.
33:43I think that's an example of where we were thinking about the last generation too much, and AI
33:48will disrupt society in all of these ways.
33:51We all kind of are like,
33:53that's a deep fake, or oh,
33:54it might be a deep fake,
33:55that picture or video or audio. Like we,
33:58we learn quickly but,
33:59but maybe the real problem,
34:01this is like speculation.
34:02This is hard to know in advance is not the deep fake ability,
34:06but the sort of customized one-on-one persuasion.
34:09And that's where the influence happens.
34:11it's not like the fake image.
34:12It's that this thing has a subtle ability,
34:15these things have a subtle ability to influence people, and then we learn that that's the problem and we,
34:20Uh So in terms of what we'd like to see from governments,
34:24uh I think we've been like very mischaracterized here.
34:27We do think that international regulation is gonna be important for the most powerful systems.
34:33Nothing that exists today,
34:34nothing that will exist next year.
34:36Uh But as we get towards a real super intelligence,
34:39as we get towards a system that is like more capable uh than like
34:45Um I think it's very reasonable to say we need to treat that with like caution
34:50and uh and a coordinate approach.
34:52But like we think what's happening with open source is great.
34:55We think startups need to be able to train their own models and deploy them into the world, and a regulatory
35:00response on that would be a disastrous mistake for this country or others.
35:05Um So the message we're trying to get across is you gotta embrace what's happening
35:10You gotta like make sure that we get the economic benefits and the societal benefits of it.
35:17Look forward at where this,
35:18where we believe this might go, and let's not be caught flat-footed if that happens.
35:24You mentioned deep fakes and I,
35:25I wanna talk about AI-generated content that's all over the internet.
35:30Who do you guys think is responsible, or
35:33should be responsible, for policing some of this, or not policing but
35:38detection of some of this? Is this on the social media companies?
35:41Is this on OpenAI and all the other AI companies?
35:46we're definitely responsible for the technologies that we develop and put out there and
35:51misinformation and that's,
35:53that's clearly a big issue as we create more and more capable models.
35:58And we've been developing technologies to deal with, um, the
36:02provenance of an image or a text and to detect output,
36:07but it's a bit complicated because,
36:09you want to give the user sort of flexibility and
36:14you also don't want them to feel monitored.
36:16And so you have to consider the user and you also have to consider people that are impacted by the
36:21system that are not users.
36:23And so these are quite nuanced issues that require a
36:28lot of interaction and input, not just from the users of the product but also
36:33from society more broadly, and figuring out,
36:37also with partners um that,
36:39that bring on this technology and integrate it,
36:42what are the best ways to,
36:44to deal with these issues?
36:45Because right now there's no way, no tool from OpenAI,
36:50that I can put in an image or some text and ask:
36:54is this AI-generated? For images,
36:56we actually have technology that's really good, almost,
37:04but we're still testing it.
37:05It's early and we want to be sure that it's going to work.
37:09And even then it's not just a technology problem;
37:12misinformation is such a nuanced and broad problem.
37:15So you still have to be careful about how you roll it out, where you integrate it.
37:20Um But we're certainly working on the research side, and for images,
37:25at least, we have a very reliable tool in,
37:28in the early stages.
37:32when might you release this?
37:36you're working on this right now.
37:37Is this something you plan to release?
37:41For both images and text,
37:44we're trying to figure out what actually makes sense.
37:49it's a bit more
37:51straightforward of a problem.
37:53Um But in either case,
37:55we definitely test it out, because we don't have all the answers.
37:58Like we're building these technologies first,
38:00we don't have all the answers.
38:01So often we will experiment,
38:04we will put out something,
38:05we will get feedback,
38:06but we want to do it in a controlled way.
38:09Um And sometimes we'll take it back and we'll make it better.
38:16I think this idea of watermarking content is not something that everybody has the
38:21same opinion about, what is good and what is bad.
38:23There's a lot of people who really don't want their generated content watermarked, and that's understandable in many cases.
38:30it's not gonna be super robust to everything.
38:32Like maybe you could do it for images,
38:34maybe for longer text,
38:35maybe not for short text.
38:37But over time there will be systems that don't put the watermarks in.
38:40And also there will be people who really feel like,
38:44this is a tool, and it's up to the human user
38:47how you use the tool.
38:48And this is why we want to engage in the conversation:
38:52we are willing to sort of follow the collective wishes of
38:57society on this point.
38:59And I don't think it's a black and white issue.
39:02Uh, I at least think people are still evolving as they understand all the different ways we're gonna use these tools;
39:07they're still evolving
39:08their thoughts about what they're gonna want here. Also, to Sam's earlier point,
39:14um it's not just about truthfulness,
39:18what's real and what's not real.
39:21I think in the world that we're going towards, marching towards, the,
39:25the bigger risk is really this individualized persuasion
39:30and how to deal with that, and that's going to be a very tricky problem to deal with.
39:35I realize I have five minutes left and we were gonna do some audience questions so we can get to one
39:40audience or two audience questions.
39:42I'm gonna finish with one last thought here.
39:45Um I can actually not see a thing out there.
39:48So um I will ask one last question,
39:51we'll hopefully have time for one or two.
39:53So, 10 years. You were here 10 years ago.
39:58we touched on this as we were,
39:59we're starting here.
40:00But what is your biggest fear about the future?
40:04And what is your biggest hope with this technology?
40:08I think the future is gonna
40:09be like amazingly great.
40:11We wouldn't come work so hard on this if we didn't.
40:14I think this is gonna be like,
40:17I think this is one of the most significant inventions humanity has yet created.
40:22Um So I'm super excited to see it all play out.
40:27Uh I think like things can get so much better for people than,
40:32than they are right now.
40:34I feel very hopeful about that.
40:36we covered a lot of the fears.
40:38We're clearly dealing with something very powerful that's gonna impact all of us in ways we,
40:43we can't perfectly foresee.
40:46but like what a time to be alive and,
40:49and get to witness this.
40:52You're not so fearful. I,
40:53I was gonna actually ask this:
40:55do you have a bunker?
40:59this is the question,
41:01not better than you.
41:02I'm gonna let that clock run.
41:03I'm not gonna pay attention to that.
41:04But as we're thinking about fears,
41:07I'm wondering if you have a bunker, and what you would say. I would say I have, like,
41:12structures, but I wouldn't say a bunker.
41:15None of this is gonna help if AGI goes wrong.
41:18it's a ridiculous question to be honest.
41:23What's your hope and fear?
41:26The hope is definitely to push our civilization ahead with
41:33our collective intelligence. And the fears,
41:36we talked a lot about the fears.
41:37we've got this opportunity right now.
41:40and we've got summers and winters in AI and so on.
41:46when we look back 10 years from now,
41:48I hope that we get this right.
41:51And I think there are many ways to get it wrong.
41:55Um And we've seen that with many technologies,
41:59so I hope we get it right.
42:02We've got time right here.
42:07preferably uh sensory consumer products.
42:11AI. My question has to do with the inflection point.
42:14We are where we are with respect to AI and AGI.
42:19What is the inflection point?
42:21How do you define that moment where we go from where we are now
42:26to however you would choose to define what AGI is?
42:37it's gonna be much more continuous than that.
42:39We're just on this beautiful exponential curve.
42:42Whenever you're on a curve like that,
42:46it looks horizontal.
42:47That's true at any point on there.
42:49So a year from now, we'll be in a dramatically more impressive place than a year ago,
42:54when we were in a dramatically less impressive place,
42:56but it'll be hard to point to a single moment.
42:58People will try and say,
42:59it was AlphaGo that did it,
43:00it was GPT-3 that did it,
43:01it was GPT-4 that did it.
43:03But it's just brick by brick,
43:04one foot in front of the other, climbing this exponential curve.
43:10right here in the front.
43:16My name is Mariana Michael.
43:18I'm the chief information officer at the Port of Long Beach,
43:20but I'm also a computer scientist by training, from a few decades ago.
43:25I remember working with some of the early A I people.
43:27I have a general question.
43:29This is one of the most significant innovations to happen.
43:34One of the things I've struggled with over the last 20 years in thinking about this,
43:38we're about to change the nature of work.
43:41This is that significant and I feel that people are not talking about it,
43:46There will be a significant,
43:47there'll be a transition
43:48period where a significant population in the world and in this country
43:53will not have had the types of discussions, and the sense, that we have.
43:58society needs to be a part of it.
43:59There's a large portion of society that's not even in this discussion.
44:03So the nature of work will change.
44:06It used to be that it was just routine things that were, um, gonna be automated.
44:10There will be a time where people who have defined themselves by work
44:15for thousands of years will not have that, and we're hurtling towards it.
44:20What can we do to make sure that we take that into account?
44:23Because when we talk about society,
44:25it's not like they're all together,
44:26ready to discuss this.
44:27Some of the effects of some of the technologies that we brought into the world have actually made people
44:32separate from each other.
44:33How do we get some of those, not regulations, but how do we come up with some of
44:38those frameworks, and voluntarily bring things about that will actually result in a
44:43better world that doesn't leave everybody else behind?
44:52I'll give you my perspective.
44:54I think I completely agree with you: it's one of,
44:59it's the ultimate technology that could really increase inequality and
45:03make things so much worse for us as human beings and as a civilization.
45:09Or it could be really amazing, and it could bring along a lot of creativity and
45:14productivity and enhance us.
45:17And maybe a lot of people don't want to work, um, eight hours a day or 100 hours a week;
45:22maybe they want to work four hours a day and do a bunch of other things.
45:28I think it's certainly going to lead to a lot of disruption in the
45:33workforce and we don't know exactly the scale of that,
45:37or the trajectory along the way,
45:42And one of the things that,
45:47it's not that we specifically planned it,
45:50but in retrospect I'm happy about, is that with the release of ChatGPT,
45:53we sort of brought AI into the
45:57collective consciousness, and people are paying attention because they're not just reading
46:02about it in the press.
46:03Um It's not just people telling them about it; they can play with it.
46:07They can interact with it and get a sense for the capabilities.
46:11And so I think it's actually really important to bring these technologies into the
46:16world and make them as widely accessible as possible.
46:20Sam mentioned earlier,
46:21like we're working really hard to make these models cheaper and faster,
46:26so they're accessible very broadly.
46:29But I think that's key for people themselves to actually interact with the
46:33technology and experience it.
46:35Um And sort of visualize how it might change their way of life,
46:40their way of being and participate uh as you know,
46:45as in providing uh product feedback.
46:50And institutions need to actually prepare for these changes in the workforce.
46:57I'll give you the last word.
46:58I think it's a super important question.
47:00Um, every technological revolution affects the job market, and
47:07every, maybe, 100 years,
47:08you hear different numbers for this, 150 years,
47:11half the kinds of jobs go away or
47:12totally change, whatever.
47:14Um I'm not afraid of that at all.
47:16I think that's good.
47:17I think that's the way of progress and we'll find new and better jobs.
47:20The thing that I think we do need to confront as a society is the speed at which this is going to happen.
47:28Over probably two generations, we can adapt;
47:30society can adapt to almost any amount of,
47:32of job market change.
47:34But a lot of people like their jobs, or they dislike change, and
47:39going to someone and saying,
47:40the future will be better,
47:41I promise you, and society is gonna win, but you're gonna lose here,
47:48that's not an easy message to get across.
47:51And although I tremendously believe that we're not gonna run
47:56out of things to do, people that want to work less, fine,
47:58they'll be able to work less.
48:00Probably many people here don't need to keep working, and
48:03there's great satisfaction in expressing yourself,
48:06in being useful and sort of contributing back to society. That's not going away.
48:11that is such an innate human desire like evolution doesn't work that fast.
48:14Uh Also the ability to creatively express yourself and to
48:21add something back to the trajectory of the species,
48:27that's like a wonderful part of the human experience.
48:29So we're gonna keep finding things to do, and the people in the future,
48:34we will probably
48:36think some of the things those people do are very silly and not real work, in the way that
48:40a hunter-gatherer probably wouldn't think this is real work either:
48:43we're just trying to entertain ourselves with some silly status game.
48:46That's fine with me,
48:51but we are gonna have to really do something about this transition.
48:55It is not enough to just give people a universal basic income.
48:59People need to have agency,
49:02the ability to influence this.
49:04we need to sort of jointly be architects of the future.
49:06And one of the reasons that we feel so strongly about deploying this technology as we do is:
49:13not everybody is in these discussions, but more and more are every year.
49:16And by putting this out in people's hands, making this super widely available, and getting billions of people to use ChatGPT,
49:21not only do people have the opportunity to think about what's
49:26coming and participate in that conversation,
49:28um, but people use the tool to push the future forward.
49:32Um And that's really important to us.