There are usually many ways a problem can be tackled when writing computer programs, and often there is no single best approach, although programmers are frequently weighing up trade-offs between processing time, memory or disk requirements, and the readability of an algorithm. Sometimes, however, when there is more than one approach that could be taken, some algorithms are better than others in terms of how efficiently they execute. In this first video of a two-part series we explore how algorithms can be measured, a method of determining efficiency, and how we can use a notation to express this.
So let's take a very simplistic piece of code and consider how we could measure it. Here, in this algorithm, we're reading three names into an array called names. We can see there are seven lines of code that need to be executed, so we could state that there are seven steps to solve the algorithm: it has a complexity of 7.
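The transcript doesn't include the code itself, so the following is only a minimal C# sketch of what a seven-line, loop-free version might look like; the prompts and variable names (other than names, which the video mentions) are assumptions.

    using System;

    string[] names = new string[3];     // 1: reserve space for three names
    Console.Write("Enter name 1: ");    // 2
    names[0] = Console.ReadLine();      // 3
    Console.Write("Enter name 2: ");    // 4
    names[1] = Console.ReadLine();      // 5
    Console.Write("Enter name 3: ");    // 6
    names[2] = Console.ReadLine();      // 7

Every extra name needs its own extra lines of code, which is exactly what the loop in the next version avoids.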
Now let's look at a piece of code that achieves the same result, but this time with a simple for loop. Here we can see there are seven lines of code again, but this time three of them are part of a loop, so we could express the complexity of the algorithm as three lines plus three lines plus one line, or 3 + 3 + 1.
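Again, the exact code isn't shown in the transcript, so this is just a sketch of what the looped version might look like; which lines are counted as the "three in the loop" is an assumption, but the shape is three set-up lines, a few lines that run once per pass of the loop, and one line at the end.

    using System;

    int n = 3;                                   // 1
    string[] names = new string[n];              // 2
    Console.WriteLine("Enter " + n + " names:"); // 3
    for (int i = 0; i < n; i++)                  // these lines run n times
    {
        Console.Write("Name: ");
        names[i] = Console.ReadLine();
    }
    Console.WriteLine("All names stored.");      // the one line at the end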
You probably appreciate by now that we use loops where possible because they keep the program very manageable in terms of the number of variables and the number of lines of code. They give us a significant advantage because they make a program scalable: in other words, we could increase the value of n and it would have no effect on the number of lines of code, but the so-called time complexity would increase.
For example, let's assume we wanted to read in 30 names instead of three. All we need to do is change the value of n in this program from three to thirty, and it would still work without any additional lines of code. Unlike the first example we looked at above, the number n is fairly arbitrary: as long as you have sufficient memory, it could be any value. We're therefore better off expressing our algorithm in terms of 3 + 3n + 1, meaning three lines of code, plus three lines of code that execute n times depending on the value of n, plus one line of code, the Console.WriteLine at the end.
In fact, since we were already using an array, we would probably write the program like this to give the user maximum flexibility. Here we're asking the user how many students they want to enter, before reserving the memory and then entering that number of students into the array. So in terms of complexity we can represent this as four lines of code, plus three lines of code executed n times depending on the number of students, plus one line of code: 4 + 3n + 1.
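A sketch of that more flexible version might look like the one below; again the prompts and variable names are assumptions, but the shape is four lines before the loop, three lines that run n times, and one line at the end, giving 4 + 3n + 1.

    using System;

    Console.Write("How many students? ");    // 1
    int n = int.Parse(Console.ReadLine());   // 2: the user chooses n
    string[] names = new string[n];          // 3: reserve the memory
    Console.WriteLine("Enter the names:");   // 4
    for (int i = 0; i < n; i++)              // these lines run n times
    {
        Console.Write("Name: ");
        names[i] = Console.ReadLine();
    }
    Console.WriteLine(n + " names stored."); // the one line at the end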
03:20common misconception here is that
03:22complexity is about how difficult the
03:24algorithm is to write that's not true
03:27there are very simple algorithms that
03:30GCSE or a level students would be
03:32capable of including this one complexity
03:35or time efficiency is about how many
03:38steps are required to complete the
03:40algorithm another misconception is that
03:43complexity is a measure of processing
03:45time although clearly related to
03:47complexity that's not strictly true
03:49either a four gigahertz processor is
03:52clearly going to execute code more
03:54quickly than a 1 gigahertz processor but
03:57the number of steps taken in an
03:58algorithm of the same there's also user
04:01input involved in this algorithm so
04:03actually the time to complete is
04:05entirely dependent on how quickly the
04:07user makes the inputs when considering
04:10efficiency we're looking at the number
04:13of steps to be taken called the time
04:15complexity and the memory requirements
04:17known as the space complexity you could
04:20have two different algorithms programmed
04:22in two different ways
04:23requiring no input and executing on the
04:26same processor architecture they're
04:28likely to have different time and space
04:31even though they achieved the same task
04:34so the interesting thing to consider
04:36when we express the complexity of an
04:38algorithm is that not all parts are
04:40equal in our last example 4 + 3 n plus 1
04:45it's really only the 3n part that is
04:48significant that is the part of the
04:50calculation that has the most impact if
04:53N is a thousand then 3n is three times a
04:57thousand which equals three thousand so
05:01the numbers four and one really have no
05:03overall impact on the complexity we
05:06always think about scaling up an
05:08algorithm and not scaling it down
05:11therefore 4 + 3 n + 1 can actually be
05:15represented simply as 3 n there are two
05:20simple rules to follow remove all the
05:24terms except for the one with the
05:25largest factor or exponent and remove
05:28any constants there is a notation
05:32computer scientists used to express the
05:34complexity of an algorithm named after
05:36the German number theoretician Edmund
05:39Landau he described the scalability of
05:42an algorithm called its order or Oh for
05:45short this is lando symbol and in
05:48computer science we call it Big O so our
05:53algorithm had a complexity of 3 n we can
05:57remove the constant 3 and all we have
06:00left is n this is expressed as Big O n
06:06the algorithm is set to have linear
06:08complexity as the data set grows the
06:12number of lines of code to be executed
06:14grows at the same rate consider a piece
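One way to see this linear growth, not shown in the video but offered here as a small illustration, is to count the steps rather than time them: the loop body runs once per item, so multiplying n by ten multiplies the count by ten.

    using System;

    // Count how many times the loop body executes for different values of n.
    foreach (int n in new[] { 10, 100, 1000 })
    {
        int steps = 0;
        for (int i = 0; i < n; i++)
        {
            steps++;    // stands in for whatever work is done per item
        }
        Console.WriteLine("n = " + n + " -> " + steps + " steps");
    }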
06:19of code that initializes a
06:20two-dimensional array how do we measure
06:23the complexity of this algorithm when
06:25the number of lines to execute is not
06:27determined just by one loop but by a
06:30nested loop 1 loop inside another in
06:33this situation we would refer to it as N
06:36squared or quadratic complexity and for
06:413 dimensional arrays n cubed known as
06:45t-there for expressed in Big O this
06:48would be O N squared there are other
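A sketch of such a nested loop is below; the array name and size are assumptions. The innermost line runs n times for each of the n passes of the outer loop, so it executes n × n = n² times in total.

    using System;

    int n = 10;
    int[,] grid = new int[n, n];               // a two-dimensional array
    for (int row = 0; row < n; row++)          // outer loop: n passes
    {
        for (int col = 0; col < n; col++)      // inner loop: n passes per outer pass
        {
            grid[row, col] = 0;                // executes n * n = n² times
        }
    }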
06:53complexities you need to be aware of so
06:55we will summarize these now do some
06:57examples with complexity expressions and
06:59in the next video consider how this
07:02relates to our common algorithms in
07:03computer science with the aim of proving
07:06that one algorithm is often better than
07:08another oh one described constant
07:12complexity it's an algorithm that always
07:15executes in the same time regardless of
07:18the size of the data set these are the
07:20very best algorithms o log n is
07:24logarithmic complexity an algorithm that
07:27halves the data set in each pass is the
07:29opposite to exponential it's efficient
07:32with large data sets these are good
07:34algorithms o n is linear complexity an
07:39algorithm whose performance is
07:41proportional to the size of the data set
07:42and declines as the data set grows it
07:46reduces efficiency with large data sets
07:48these are fair algorithms o N squared is
07:53polynomial complexity an algorithm whose
07:56performance is proportional to the
07:58square of the size of the data set it
08:01significantly reduces efficiency with
08:03increasingly large data sets these are
08:06poor algorithms and Oh 2 n is
08:10exponential complexity an algorithm that
08:13doubles or more with each addition to
08:16the data set in each pass it's the
08:18opposite to logarithmic it's inefficient
08:21and extremely poor algorithms that
08:23should be avoided so let's consider some
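As a rough illustration (not code from the video), these loop shapes correspond to some of the growth rates above; an exponential example would typically involve recursion that branches on every call, so it is omitted here.

    using System;

    int n = 1000;

    // O(1): constant - a single step, whatever the value of n
    Console.WriteLine(n);

    // O(log n): logarithmic - n is halved on each pass
    for (int i = n; i > 1; i /= 2) { /* ... */ }

    // O(n): linear - one pass per item
    for (int i = 0; i < n; i++) { /* ... */ }

    // O(n²): quadratic - a loop nested inside another loop
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) { /* ... */ }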
So let's consider some examples of expressing the complexity of algorithms. Here we have 3n² + 4n + 2. Remember, we can remove all the terms except for the one with the highest exponent, which leaves us with 3n². We can then remove the constant, which leaves us with n², or O(n²) in Big O notation. This is an algorithm with polynomial, or quadratic, complexity; it is relatively poor.
Let's have a look at another example. Here we've got n² + 6n³ + 2n + 1. Following the rules, we remove all the terms except for the one with the largest factor or exponent, so in this case we're going to keep 6n³, because it's the term that has the largest impact on the algorithm. When we remove the constant we're left with n³, or cubic complexity. In Big O notation this is O(n³), which is polynomial; the algorithm is poor.
Here's another example: 2n⁴ + n - 1. Again, applying the rules, we're looking for the term that has the most significant impact on the algorithm, and in this case it's 2n⁴. We can now remove any constants, and that leaves us with n⁴, or O(n⁴). It's polynomial complexity again.
Okay, let's just do one more example. Here we've got 12 + 6n + 3 + 4n. Applying the rules, we're going to look for the term with the largest factor, and here we'd say it's 6n. We can remove all the other parts of the expression, then remove the constant, and we're left with n, so O(n). It's linear complexity; this is a fair algorithm.