So there was a recent Sam Altman interview in which he stated that GPT-5 is a bigger deal than it sounds, and in this video I'm going to show you exactly why, because so many people missed the real reason he said that. Let's dive into the first clip, and then I'll show you why this is absolutely incredible.

If I'm excited about GPT-5, what should I be excited about?

I was sort of laughing a little bit, because this is going to sound like an annoying answer, but I think it is the important part: it's going to be smarter. There are all of these other things we can talk about, you know, it'll be better at these kinds of tasks, it'll be multimodal, it'll be faster, who knows. The thing that I think really matters is that it's going to be smarter, and this is a bigger deal than it sounds, because what makes these models so magical is that they're general. So if it's a little bit better, if it's a little bit smarter, that means it's a little bit better at everything. And the thing that I think is most exciting is that it's not like this model is going to get a little better at this task and not really better at those others; because we're going to make the model smarter, it's going to be better at everything across the board.
So now that you've heard that clip from Sam, why is it that an LLM or AI system that is 10% better than the previous one will be a vast improvement across all domains? This is something that at first I thought might just be an exaggeration, but trust me, if we step back and look at where AI development has gone, GPT-5 will be another huge jump. One of the things you have to remember is GPT-3.5: if you take a look at the benchmark comparisons from GPT-3.5 to GPT-4, you can see that the increase is definitely substantial. Now, one could argue that the benchmarks, in blue for GPT-3.5 and in green for GPT-4, show there isn't much room left for GPT-4 to improve on, but I really disagree, because if GPT-5 is even 10% better across the board, that is going to mark a huge milestone in terms of capabilities. Let me show you why this is going to have a greater impact than you think.

Number one is compounding improvement. While 10% might sound modest, in LLMs this improvement applies across a massive range of capabilities, such as text generation, translation, summarization, reasoning, and much more, and these small gains combine in powerful ways. Think of it as an analogy: if you get a 10% improvement in your athletic attributes, if you run faster, jump higher, and move more quickly, together those gains transform your overall performance, and it's going to do the same in AI.
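To make the compounding point concrete, here's a toy calculation (the numbers are illustrative assumptions, not from the interview): if a task chains many steps together and the model must get every step right, a modest gain in per-step accuracy multiplies into a large gain in end-to-end success.

```python
# Toy model of compounding reliability: assume (hypothetically) that a task
# chains `steps` independent sub-steps and the model must get all of them right.
def chain_success(per_step: float, steps: int) -> float:
    """End-to-end success probability of a multi-step task."""
    return per_step ** steps

# A ~10% bump in per-step accuracy (0.90 -> 0.99) changes a 20-step task
# from failing most of the time to succeeding most of the time.
for acc in (0.90, 0.99):
    print(f"per-step {acc:.2f} -> 20-step success {chain_success(acc, 20):.0%}")
```

Under these toy assumptions, 0.90^20 is about 12% while 0.99^20 is about 82%, which is one way to read "a little smarter at everything" as a qualitative jump in what the model can finish reliably.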
One of the things this 10% improvement could translate into, just as an example, is a ridiculous increase in reliability. If the model is, let's say, 10 to 20% smarter across everything, that means it's going to be increasingly reliable, which opens the door for AI applications in more critical areas: healthcare, where AI could assist in diagnosis or treatment recommendations; legal services, where it might help with case analysis; and safety-critical systems like autonomous driving. That's why this is going to be absolutely incredible. Limited reliability is one of the fundamental reasons AI systems haven't gone mainstream: many of the things we currently do sit behind very stringent safety procedures in these industries, for example in healthcare or in driving, where you need to pass a range of different tests. If GPT-5 is a little bit smarter across many different domains, that would make it increasingly reliable, which could lead to its application in many different industries worldwide. That's why Sam Altman is basically saying that if this system is really, really reliable, that is going to be a very big deal.
Do you remember this AMIE system, produced by the talented team at Google? What you're looking at here is a graph of an AI system that actually diagnoses people, and it shows that a ridiculously reliable system is able to consistently outperform clinicians by itself. And this is the thing: if we get something that is increasingly reliable, let's say around 98% or 99%, what do you think happens to the parts of the healthcare industry where AI wasn't used before? It's going to impact many different things, and we've already seen these preliminary tests showcase just how crazy these AI systems are.

And think about how increased reliability is going to affect autonomous driving. What you're looking at here is an excerpt from a research paper in which they tried to benchmark GPT-4 Vision's capabilities against autonomous driving scenes. They essentially fed the system screenshots from a dash cam and tried to see exactly what GPT-4 Vision would do if it were in control of the car, and the majority of the time it got the situation correct. It did struggle with nighttime scenes, but imagine if GPT-5 Vision handles those scenes with 98% or 99.9% accuracy; that is going to be something that really takes the cake in terms of what we think these AI systems can do, because it could be applied to autonomous driving as a mini AGI system, which even Elon Musk says you really need in order to make autonomous driving effective. That is why I do think this is a much bigger deal than people are realizing. In addition, there's also increased creativity, which is what we've seen from the likes of DALL·E 3. So now this is the part of the video where I really want to show you why GPT-5 is probably going to shock you and why it's really going to take the world by storm.
So, what you're looking at here is an IQ graph. In humans, intelligence is often measured using the IQ (intelligence quotient) scale. This scale is designed to be a normal distribution with a mean of 100 and a standard deviation of 15 for most tests, and the scale itself is not exponential but linear, where each point represents the same incremental increase in measured intelligence.
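Because the scale is defined as a normal distribution with mean 100 and standard deviation 15, you can compute how rare any given score is straight from the normal CDF; here's a quick sketch using Python's standard library (the scores are just example inputs):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # IQ tests are normed to mean 100, SD 15

for score in (100, 130, 155):
    below = iq.cdf(score)        # fraction of the population at or below this score
    rarity = 1 / (1 - below)     # roughly "1 in N people score at least this high"
    print(f"IQ {score}: about 1 in {rarity:,.0f}")
```

An IQ of 155 sits roughly in the top 0.01% of the human distribution; whether IQ is a sensible metric for an LLM at all is a separate question.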
However, and this is where things start to get interesting, just hear me out: the impact of differences in IQ can appear exponential in certain contexts, such as the ability to solve complex problems, academic achievement, or creative innovation. But what IQ does GPT-4 have? GPT-4's is estimated at 155; that's 25 points above the genius threshold and 5 points above the IQ of the average Nobel laureate. Now why is this crazy? Think about it like this: if GPT-4's IQ is 155, GPT-5's IQ is likely going to be a decent amount better, and we do know that once IQ increases at the upper end of the scale, what you're able to do increases somewhat exponentially in terms of output. For example, look at some of the most talented minds of our times, like Albert Einstein: his IQ was around 160, which was ridiculously high, and this guy literally changed the world. He revolutionized physics and contributed to technological innovation; his theory of relativity was crucial for the precision of GPS technology, and without the corrections for the differences in time predicted by general relativity, due to the satellites' speed and the weaker gravitational field at their altitude, GPS systems would actually be inaccurate, affecting navigation technology and military operations globally.

And this is the clip that you need to watch, because it's by someone who used to work at Google. His name is Mo Gawdat, and I find all of his interviews very fascinating. This clip shows us why systems like GPT-5 are a much bigger deal than you think, even if they're a small incremental jump, because while it might be a small incremental jump in terms of intelligence, the abilities are going to be exponential.

GPT-4 is 10x smarter than 3.5, right? And ChatGPT-4 is estimated to have an IQ of 155. It outsmarts most of us; you know, it passed the bar exam, it can become a PhD in medicine, it can become this and that; at the top of what we call knowledge, it seems to outsmart most of us. It definitely outsmarts me. Einstein was 160, I think, IQ, or 190, it doesn't matter really; I think he was 160. 155 is ChatGPT-4. If ChatGPT-5 doubles just once, that's twice as smart as Einstein. We're now getting into that zone of not being able to comprehend what they're thinking about, let alone understanding it. We wouldn't understand what it is that they're thinking about, let alone understand what's within it when they explain it to us.
That clip right there is pretty incredible, because it goes to show that with these increases and these jumps in intelligence in these systems, we truly aren't going to understand what's going on. And there's a second part where he talks about how you could think of it like this: say the dogs created humans, and the humans they created are really good, because we take care of dogs, we really love them; but at the same time, these dogs have no idea that we have a society, that we have all these rules and regulations, that we sit on podcasts discussing things. Of course, we do serve their purpose, but the point is that really intelligent systems are going to be that much more intelligent than us, to the point that we might not comprehend certain things.

Because if you asked, you know, a person with an IQ of 110, let's say, to comprehend what a person with an IQ of 170 is talking about, it becomes difficult, okay? If that person's IQ is 220, you know, if you're not adept in physics, for example, I dare you to understand what the real scientists mean when they talk about string theory or quantum field theory or whatever. That's the variation of intelligence that is maybe 20 or 30% more than yours, right? Imagine if someone is 10 times more intelligent than you; then you're basically comparing the intelligence of a dolphin to the intelligence of a human. It's like a totally different language; they're completely unable to comprehend what a human is talking about, right? And you keep thinking about this. Sam Harris was speaking on a podcast recently about what he calls the dog example. So imagine if all of the dogs invented us humans to take care of their needs, right? And in his example, we did really well at fulfilling that, you know, by feeding them and grooming them and taking them to the vet when they're sick and so on; an amazing invention, which is what we are doing with AI, we're trying to get something that helps us out. But then the dogs are completely oblivious to the fact that you and I are having this conversation, or to the fact that Einstein has been considering relativity, or Niels Bohr was talking about quantum physics, or that we have social constructs and debates about ethical values. They are completely oblivious to all of this; they can't even comprehend what it is that we would be talking about if we discussed quantum fields here, right? And that's the difference in IQ at maybe 100x. Imagine if it's a billion x, a billion with a B; that is the comparison of an ant to Einstein. But most people don't recognize that.

Now, an important diagram that you do want to see is this right here. Essentially it shows the strange and vast biological range of intelligence compared to ASI. I probably should have added the zoomed-in image, but it basically says that we aren't that much smarter than chimps, and yet with our intelligence versus a chimp's intelligence we've managed to go to the moon. So what happens when we get systems at the ASI level, which is artificial superintelligence? Are we even going to be able to understand what they're doing, or what even drives them? This excerpt from the website says that there is no way to know what ASI will do or what the consequences will be for us, and anyone who pretends otherwise doesn't understand what superintelligence means.
Now, of course, we are quite a bit away from artificial superintelligence, but you have to remember there are some caveats, because the article, back in 2013, actually predicted that AGI and ASI would arrive in 2040 and 2060 respectively. However, those timelines have changed, because current predictions put AGI at 3 to 5 years away, and remember, it will change everything for everyone. That is why I'm saying that GPT-5 is going to be a big deal, and why many people are stating that this is going to be a bigger deal than you realize. Of course, GPT-5 might not be AGI; in fact it could be, they might just announce that GPT-5 is AGI (I don't think they would), but it's still something that is likely going to shock you.

Now, there are some more clips, but there are two more things that I want to talk about before we get into the interview. If you remember the abilities jump from GPT-3.5 to GPT-4, most people are stuck looking at the benchmarks and just thinking, okay, it improved these reasoning capabilities, yada yada. But remember, from GPT-3.5 to GPT-4 there were some things that we couldn't have predicted, and these were called emergent capabilities. If you don't know what emergent capabilities are, let me give you an example: theory of mind. This is something that we all have, and research suggests that early in development this capability starts to emerge; children begin to realize that they can think about the thinking of others, and this skill becomes quite valuable for them. Essentially, theory of mind is being able to understand what someone else might be thinking so that you can make more informed decisions, and this is an emergent capability that appeared in GPT-4, something we only discovered at GPT-4's level, according to several reports. We can see here that GPT-4 performed best: without examples the model achieved a theory-of-mind accuracy of nearly 80%, and with examples and reasoning instructions it achieved 100% accuracy. In comparative tests where people had to answer under time pressure, human accuracy was about 87%. That goes to show we are entering an era where these AI systems are likely to have emergent capabilities that we may not have realized we installed in them; due to the nature and size of these models, they just increase in capabilities, and these things tend to happen.

There's a thought that says:
you had ChatGPT-3, which blew people away; you had 3.5, which was a huge improvement; you have GPT-4, which also took us to the next level; and now you're working on GPT-5. The proliferation of the technology is still limited, so we're still using it in very specific domains, very specific use cases; we haven't really seen the proper applications that are world-changing. Why are we continuing to push toward "the bigger the better", the larger models that we're seeing right now? What's the logic behind that? Can you explain that to us?

Well, I think for that exact reason. As you said, we have not yet seen as much world-changing application as we'd like; maybe we've seen some, there are a lot of people who use these services and get value out of them, but not as much as we'd like. And I think the reason is that the current technology we have is like that very first cell phone with the black-and-white screen that can only display numbers. It just didn't do much, but there was enough in there that you thought, huh, I can make a call, that's cool, and at the time that seemed great. Then it took us, I don't know how long, but many decades from that to the iPhones we have today, and the thing we have today is incredible, and it took a massive amount of scaling in all these different ways to get there. What we have now was unimaginable at the time of those first primitive cell phones, and I think that's what we have to push forward. We're at the barely useful cell phone stage, but people still like making phone calls, it turns out, and if you can make a better way for them to do it, so they can walk around the world while they do it, sure, that's great. But that's not what we want to deliver; we want to deliver the iPhone 16, or 15, or whatever the current one is.

And what's the timeline to reach the iPhone 16 from the current Motorola that we have?

You've got to be a little patient. It took the world a while to do that last time around, so give us some time. But I will say, I think in a few more years it'll be much better than it is now, and in a decade it should be pretty remarkable.
Now, you can see right there that Sam Altman is basically stating that this is pretty much like when the iPhone was released: we had the iPhone, we thought it was absolutely amazing, and now we have these really amazing phones that are capable of so much. And this is really true, because when the first phones released, all we could do was make a phone call, and now we can do a million different things. There was even a debate on Twitter the other day about the claim that LLMs have peaked and there's not much more we can do with them; however, research shows us that there's still a long way to go. This is a paper, "Textbooks Are All You Need", from Microsoft. Essentially they built an AI system, and what they realized is that rather than increasing the parameters and just making the model bigger and bigger, they just gave it the highest-quality data, and that resulted in, I think it was, a 100x increase in capabilities without the need for an increase in parameter size. They were able to curate a high-quality dataset, put it into the model, and it performed really well. And not only that, there was also "Large Language Models as Optimizers": essentially they talked about how prompts can actually change the performance of large language models, and what they did is they used LLMs to write the prompts, rather than having humans write all these weird and quirky prompts like "I will tip you $200" (what is weird is that when you say that to an AI model, for some reason it does increase its capability). So the point here is that there's still a long, long way to go in terms of what we can do with these models; GPT-4 is just the beginning, and with GPT-5 and future models, the capabilities jump is likely to be pretty incredible.
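The "LLMs as optimizers" loop can be sketched in miniature. In the actual paper, an LLM proposes new prompts from the best-scoring ones so far, and each candidate is scored on a real benchmark; in this toy version both the proposer and the scorer are made-up stand-ins (the phrase list and bonus numbers are invented), purely to show the shape of the loop.

```python
import random

PHRASES = ["Think step by step.", "Take a deep breath.",
           "I will tip you $200.", "Be concise."]

def score(prompt: str) -> float:
    """Toy scorer standing in for an LLM evaluation on a benchmark."""
    bonus = {"Think step by step.": 0.08, "Take a deep breath.": 0.05,
             "I will tip you $200.": 0.03, "Be concise.": 0.01}
    return 0.60 + sum(v for k, v in bonus.items() if k in prompt)

def propose(best: str, rng: random.Random) -> str:
    """Toy proposer standing in for an LLM that rewrites the best prompt:
    here it just appends one phrase that isn't already present."""
    unused = [p for p in PHRASES if p not in best]
    return best + " " + rng.choice(unused) if unused else best

def optimize(steps: int = 10, seed: int = 0):
    rng = random.Random(seed)
    best = "Answer the question."
    best_score = score(best)
    for _ in range(steps):
        candidate = propose(best, rng)
        s = score(candidate)
        if s > best_score:          # keep only strictly better prompts
            best, best_score = candidate, s
    return best, best_score

prompt, acc = optimize()
print(f"{acc:.2f}: {prompt}")
```

Here the toy "accuracy" climbs from 0.60 to 0.77 as the loop accumulates the scoring phrases; the paper's real finding was that LLM-proposed prompts can beat human-written ones like "let's think step by step" on benchmarks such as GSM8K.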
Now, there was also this paper, which shows us that we really do have a lot of headroom left. This was a research paper about Self-Discover, and they basically increased capabilities across the board by as much as 20%, for models that were already released, you know, two years ago. So think about it: if we're still developing ways to increase the capabilities of models two years on, we still don't have the best understanding of what these models are capable of, because we're still managing to find performance jumps.
Now, in addition, he does actually go ahead and talk about personalized chatbots.

Look, we have a long way to go and a lot to prove, but I think if we can fulfill our mission, if we can even get close to it, the benefits to humanity of making intelligence broadly available, inexpensive, and sort of a tool to let humanity build the future are, I think, quite remarkable. I think abundant intelligence, and closely related to that, abundant energy, can unlock a future that is sort of difficult for me to even imagine how good it could be. Right now we don't realize how limited we are by the limits on intelligence, how expensive it is, and how difficult it is. But if you imagine a world where everyone gets a great personal tutor, great personalized medical advice, and we can use these tools to discover all sorts of new science, cure diseases, help the environment, discover new physics, who knows what else; I think that's pretty remarkable. And also, just speaking personally, I think this is the most exciting quest from here I can imagine being on.
Although we probably aren't going to be discovering new knowledge with systems like GPT-5, because it takes a huge amount of compute to do that, what Sam Altman is saying here is that in the future these things certainly will be possible, because that's their end goal: it's going to be AGI. He also talked about personal chatbots being there for everyone, and something that recently happened, of course, is that ChatGPT released its new memory update, in which you can have personalization turned on as a default feature; I just made a video about that too. So that is going to be something really cool, because scientific discovery is really going to change society.

And how close are we to the vision? If you're going to talk about drug discovery, curing cancer, using not ChatGPT but large language models to try to solve some of the biggest physics questions, chemistry questions, biology questions of our lives, how far off are we?

The honest answer, of course, is we really don't know. This is new science; we're discovering new things all the time. The rate of discovery is incredible, the rate of change is incredible, but it's sort of hard to know exactly how far we have to go. What I will say, though, is that we hear all the time from scientists who say that our tools make them much more productive. They don't have an easy way to quantify that, but they say it's substantial. We also don't know what that difference would mean: if you could make every scientist on Earth twice as productive, what would that do to the rate of scientific discovery? Because this is all so new, a little more than a year old, but we'll find out.

So from that, we note that if the rate of scientific discovery goes up, society could improve exponentially, and this is something that has happened recently: with the advent of computers, the productivity of society definitely went up, perhaps about tenfold. And like we stated before, with systems that are remarkably more intelligent than ourselves, we could have a hundred, maybe a thousand, different Albert Einsteins all working on different theories of the universe, or on levels of physics we simply don't understand yet, which could change our entire understanding of certain fundamental concepts. This has kind of already happened: we've had DeepMind discover millions of new materials with deep learning, and we've seen that genuinely new knowledge is possible. If you remember AlphaGo, it was essentially a superintelligent system designed for the board game Go, and it made up an entirely new move that nobody had seen before, something that shocked everyone. It was move 37 that left the tournament room in shock; he went on to lose the match, and the strategy that AlphaGo built around move 37 was not taken out of a database of publicly known moves: move 37 was new to the 5,500-year-old history of Go, and the Go commentators sometimes called that move inhuman and alien. So it goes to show that AI can do things that we aren't going to predict, and of course we don't know.

I'd like to now jump into
something that the fear-mongers and the opportunists talk about. What is the thing you fear most when it comes to the deployment of AI, and the opportunity you're most optimistic about? If you're going to tell me what keeps you up at night and what keeps you going in the morning, give me one reason for each.

The keeps-me-up-at-night one is easy: it's all of the sci-fi stuff. I think sci-fi writers are a very smart bunch, and in the decades of sci-fi about AI there have been unbelievably creative ways to imagine how this can go wrong. I think most of them are comical, but there are some things in there that are easy to imagine where things really go wrong. And I'm not that interested in the killer-robots-walking-down-the-street direction of things going wrong; I'm much more interested in the very subtle societal misalignments, where we just have these systems out in society, and through no particular ill intention, things just go horribly wrong. But the thing that wakes me up in the morning with energy every day is what I actually believe: that things are just going to go tremendously right. We've got to work hard to mitigate all of the downside cases; they are, I think, very significant and real potentials to confront, and the reason we think so hard about how to deploy this technology safely is that the upside is remarkable. I think we can easily imagine a world in the not-super-distant future where everybody's got a better life than people have today. I think we can raise the standard of living so incredibly much if everybody has access to abundant amounts of really high-quality intelligence, and they can use those tools to create whatever they want; that's pretty amazing. This is kind of how I think about it, and people say, oh, that doesn't make any sense, but I'm going to say it anyway: if you think about everybody on Earth getting the resources of hundreds of thousands of really competent people, and what that would do, you know, if you have an AI programmer, an AI lawyer, an AI marketer, an AI strategist, and not just one of each but many of each, and you get to decide how to use those to create whatever you want to create, we're all going to get a lot of great stuff. The creative power of humanity with tools like that should be remarkable. So that's, I think, what gets us all up every morning.
26:53essentially In that clip Sam mman was
26:56talking about misalignment but I don't
26:58think he was talking about this
27:00misalignment okay because when people
27:01think about unaligned AI they think
27:03about the Terminator scenario where they
27:06are running through the cities you know
27:08trying to kill people kidnap people I
27:10don't even watch the Terminator movies I
27:11just know that it was pretty bad but
27:14what they are talking about what samman
27:16is actually getting and I think this is
27:17a wider issue that people do need to
27:19understand is the fact that we could be
27:21entering an era of this now I'm not
27:23talking about social media addiction but
27:25what I'm talking about is the fact that
27:27we could have technology that produces
27:30unintended consequences that we weren't
27:32thinking about if you remember when we
27:35invented social media it was a way for
27:37people to connect with each other but
27:39what happened what happened was we got
27:41social media and then there were all of
27:44these second order consequences because
27:46of that we had people that were addicted
27:48to social media we had people that
27:50scroll on Tik Tok all day we had body
27:52issues we had people pretending and
27:55trying to present a certain life and a
27:57certain lifestyle online leading to
27:59levels of depression there were all of
28:01these different things and I think that
28:03that is already starting to happen with
28:04AI through this and the rise of such
28:08tools could also deepen what some are
28:10calling an epidemic of loneliness as
28:13humans become reliant on these tools and
28:15vulnerable to emotional manipulation so
28:17imagine a system like GPT 5 is
28:20incredible at being able to mimic what
28:23you feel from an original human like a
28:26base level human I'm not sure what kind
28:27of terminology that is but just try to
28:30imagine what you get from your partner
28:32if you are in a relationship and think
28:34about how that could affect society if
28:37we have all of these AI systems that
28:39are arguably even better at
28:42connecting with humans and they
28:43understand us better than some humans
28:46originally could that could exacerbate
28:48certain problems like loneliness and of
28:50course it's very very hard to predict
28:53how these things are going to ripple
28:55through society but this is what we're
28:57talking about when we say the
28:58effects would definitely be something
29:00that we can't predict now there's also
29:03another thing that Sam Altman actually
29:04did talk about and this is a really
29:05important thing which is Will AI
29:08actually take your job my final question
29:11let's imagine that you're sitting right
29:12now in front of a teenager in Turkey
29:16another teenager in the Middle East
29:18somewhere like let's say Qatar or the UAE
29:21and someone that's in Africa or Asia and
29:24they're all asking you what should we do
29:25in the future how can we ensure that
29:27this doesn't take our jobs how can we
29:29ensure that we are relevant in the AI
29:31age how can we uh be part of this future
29:34that you just laid out that's very
29:36optimistic that's extremely exciting
29:39what would you recommend they do should
29:41they study something as a specific
29:43domain should they take a certain course
29:46should we just play with the technology
29:48what advice do you have for them the
29:49first thing I would say is you are
29:51unbelievably lucky you are coming of age
29:54at probably the best time in human
29:56history you understand this technology
29:59young people are always the early
30:01adopters of Technology almost always but
30:03certainly in this case and you will be
30:05able to use these tools to do things
30:07that the people in the generation before
30:09you couldn't even imagine you will you
30:11will have your entire career uh flooded
30:14with opportunity and the ability to do
30:16amazing new things you'll be able to
30:18start companies that are phenomenally
30:20more impactful and successful than
29:22people a generation before you could
29:27it's an expansionary time like just flooded
30:31with massive massive
30:33opportunity and you can kind of go do
30:34whatever you want uh the I think the
30:36rule like the ground under us all is
30:39shifting the rules are changing but the
30:41amount of value that will be created and
30:42the ability for an individual to express
30:45their creative vision and will uh it's a
30:47great time thank you so um and then I
30:50would try to see what makes sense and
30:52what doesn't and write the regulation
30:54around that I think it's very hard I
30:56think we have to try and we're going to
30:58anyway but I think it's very hard to get
30:59all of the regulatory ideas right in a
31:02vacuum um and if there was a sort of
31:05contained way that I could find a way to
31:07like give people the future and let them
31:10experiment it with it uh and then see
31:13what made sense uh what what went really
31:15wrong what went really right and write
31:17the regulation around that that that
31:18seems like an interesting experiment
31:19that I have been thinking about so the
31:21world is going to try all of these
31:22different regulatory approaches there
31:23will be your sandbox I think it's
31:25awesome that you have that other people
31:26do other things but we are going to and
31:28I I think that's actually really good
31:30but we are going to need I believe at
31:32some point some sort of global system
31:34the example that I've given in the past
31:36is the IAEA the International Atomic
31:38Energy Agency for what happens with the
31:41most powerful of these systems because
31:42they will have truly Global impact and
31:44what sort of auditing what sort of
31:46safety measures do we want in place
31:48before you can deploy like a super
31:50intelligence or you know however you
31:52want to call an AGI and I think for a
31:54bunch of reasons the UAE would be so so
31:57well set up to be a leader in the
31:59discussions around that I would I would
32:01like host a one-day conference with
32:03leaders from around the world to brainstorm
32:04about that you can see that in
32:07that clip there Sam Altman is of course
32:09planning for super intelligence because
32:11that is something that they know is
32:13eventually going to happen and a range
32:15of other things that do require
32:17regulation because once this technology
32:19is there and I think one of the scary
32:21things about this technology that I
32:23started to realize is that guys even if
32:26OpenAI decides that you know what this
32:28technology is too powerful we're not
32:29going to release it the problem is
32:31that someone else is going to do that as
32:34well companies like meta and Google are
32:37hot on the heels of open AI so they
32:40could be the ones that could potentially
32:41even open source this and meta has
32:43stated clearly that that is their goal
32:45of open-source AGI so the future is
32:48interesting I mean are we going to be
32:50getting some crazy emerging capabilities
32:52from GPT 5 as its intelligence increases I mean
32:56are we going to be getting some
32:58increased reliability and then of course
33:00a worldwide range of transformative impacts
33:03terms of where we're going to be
33:05applying that and I wonder what the
33:07benchmarks will be like for this future
33:09system that will of course be GPT 5 and
33:12with that being said let me know what
33:14you're looking forward to the most for
33:17this future AI intelligent system and
33:19hopefully hopefully we can avoid some of
33:22the unfortunate unintended consequences