Chapter 1
Computer science is a field of study that deals with
solving a variety of problems by using computers.
The problem to be solved could be as simple as
performing the addition of two numbers, or as
complex as designing a robot capable of making
decisions in a real-time environment. To solve a
given problem by using computers, you need to
design an algorithm for it. The nature of an
algorithm often depends closely on the nature of the
data on which the algorithm works. Therefore, the
study of algorithms also involves the study of the
data structures that algorithms work on.
This chapter discusses the role of algorithms and
data structures in problem solving through
computers. It also discusses some standard
techniques that can be used to design algorithms. In
addition, it discusses the effect of the selected
algorithm on the efficiency of the solution.
Objectives
In this chapter, you will learn to:
„ Explain the role of data structures and algorithms in problem solving through computers
„ Identify techniques to design algorithms and measure their efficiency
Role of Algorithms and Data Structures in Problem Solving
Problem solving is an essential part of every scientific discipline. In today’s world,
computers are widely used to solve problems pertaining to various domains, such as
banking, commerce, medicine, manufacturing, and transport.
To solve a given problem by using a computer, you need to write a program for it. A
program consists of two components, algorithm and data structure.
Many different algorithms can be used to solve the same problem. Similarly, various
types of data structures can be used to represent a problem in a computer.
To solve the problem in an efficient manner, you need to select a combination of
algorithms and data structures that provide maximum efficiency.
Role of Algorithms
The word algorithm is derived from the name of the Persian mathematician al-Khwarizmi.
An algorithm can be defined as a step-by-step procedure for solving a problem. It helps
the user arrive at the correct result in a finite number of steps. Consider the following
step-by-step procedure to display the first 10 natural numbers:
1. Set the value of counter to 1
2. Display counter
3. Increment counter by 1
4. If counter <= 10, go to step 2
The preceding step-by-step procedure is an algorithm because it produces the correct
result in a finite number of steps.
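For illustration, the preceding algorithm could be translated into a short program. The following is a minimal Python sketch written for this discussion (the variable name counter is taken from the algorithm itself):

# Display the first 10 natural numbers, following the preceding algorithm.
counter = 1                  # Step 1: set the value of counter to 1
while counter <= 10:         # Step 4: repeat while counter <= 10
    print(counter)           # Step 2: display counter
    counter += 1             # Step 3: increment counter by 1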
An algorithm has five important properties:
„ Finiteness: An algorithm terminates after a finite number of steps.
„ Definiteness: Each step in an algorithm is unambiguous. This means that the action
specified by the step cannot be interpreted in multiple ways and can be performed
without any confusion.
„ Input: An algorithm accepts zero or more inputs.
„ Output: An algorithm produces at least one output.
„ Effectiveness: An algorithm consists of basic instructions that are realizable. This
means that the instructions can be performed by using the given inputs in a finite
amount of time.
A problem can be solved by using a computer only if an algorithm can be written for it. In
addition, the use of algorithms provides many other benefits:
„ While writing an algorithm, you identify the step-by-step procedure, the major
decision points, and the variables necessary to solve the problem. This helps you in
the development of the corresponding program.
„ Identification of the procedure and decision points reduces the problem into a series
of smaller problems of more manageable size. Therefore, problems that would be
difficult or impossible to solve as a whole can be approached as a series of small,
solvable subproblems.
„ With the use of an algorithm, decision making becomes a more rational process. This
is because algorithms comprise subtasks, where each subtask is atomic in nature
and is supported by facts.
„ With the use of an algorithm, the same specified steps are used for performing the
task. This makes the process more consistent and reliable.
Multiple algorithms can be designed to solve a particular problem. However, the
algorithms may differ in how efficiently they can solve the problem. In such a situation,
an algorithm that provides the maximum efficiency should be used for solving the
problem. Efficiency here means that the algorithm should work in minimal time and use
minimal memory.
Role of Data Structures
One of the basic techniques for improving the efficiency of algorithms is to structure the
data that they operate on in such a way that the resulting operations can be efficiently
performed.
The way in which the various data elements are organized in memory with respect to each
other is called a data structure.
Data can be organized in many different ways; therefore, you can create as many data
structures as you want. However, there are some standard data structures that have proved
useful over the years. These include arrays, linked lists, stacks, queues, trees, and graphs.
You will learn more about these data structures in the subsequent chapters.
All these data structures are designed to hold a collection of data items. However, the
difference lies in the way the data items are arranged with respect to each other and the
operations that they allow. Because of the different ways in which the data items are
arranged with respect to each other, some data structures prove to be more efficient than
others to solve a given problem.
Suppose you have to write an algorithm that enables a printer to service the requests of
multiple users on a first-come-first-served basis. In this case, using a data structure that
stores and retrieves the requests in the order of their arrival would be much more efficient
than a data structure that stores and retrieves the requests in a random order.
In addition to improving the efficiency of an algorithm, the use of appropriate data
structures also allows you to overcome some other programming challenges, such as:
„ Simplifying complex problems
„ Creating standard, reusable code components
„ Creating programs that are easy to understand and maintain
Consider an example where you have to find the maximum value in a set of 50 numbers.
In this scenario, you can either use 50 variables or a data structure, such as an array of
size 50, to store the numbers. When 50 different variables are used to store the numbers,
the algorithm to determine the maximum value among the numbers can be written as:
1. Accept 50 numbers and store them in num1, num2, num3, …, num50
2. Set max = num1
3. If num2 > max then:
max = num2
4. If num3 > max then:
max = num3
5. If num4 > max then:
max = num4
.
.
6. If num50 > max then:
max = num50
7. Display max
On the other hand, when an array of size 50 is used, the algorithm can be written as:
1. Set max = num[0]
2. Repeat step 3 varying i from 1 to 49
3. If num[i] > max
max = num[i]
4. Display max
From the preceding two algorithms, it can be seen that the algorithm using an array
manipulates memory much more efficiently than the algorithm using 50 variables. Also,
the algorithm using an array involves very few steps and is, therefore, easier to understand
and implement as compared to the algorithm that uses 50 variables.
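As an illustration, the array-based algorithm could be coded as the following Python sketch. The sample values and the names num and max_value are assumptions made for this example:

# Find the maximum value in an array of numbers.
num = [12, 45, 7, 89, 23, 56, 3, 71, 40, 67]   # in practice, the 50 accepted numbers

max_value = num[0]                   # Step 1: set max to the first element
for i in range(1, len(num)):         # Step 2: vary i over the remaining positions
    if num[i] > max_value:           # Step 3: update max if a larger value is found
        max_value = num[i]

print(max_value)                     # Step 4: display max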
Data structures also enable the creation of reusable code components. Suppose you have
created a class to implement a data structure that stores and retrieves requests in the order
of their arrival. Once the class is created, the same class can be used in several different
applications that need to service the requests of multiple users on a first-come-first-served
basis.
This means that a data structure once implemented can be used as a standard component
to provide standard solutions to a specific set of problems. The use of standard
components helps simplify the maintenance process. This is because the standard
components are time tested and therefore, do not need much maintenance.
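As a sketch of such a reusable component, the following Python class stores and retrieves requests on a first-come-first-served basis. The class name PrintQueue and its method names are assumptions introduced for this illustration; they are not part of the original text:

from collections import deque

class PrintQueue:
    # A minimal first-come-first-served store for requests.
    def __init__(self):
        self._requests = deque()

    def add_request(self, request):
        self._requests.append(request)      # new requests join at the rear

    def next_request(self):
        return self._requests.popleft()     # requests leave in order of arrival

# Usage: the same class can be reused by any application that services
# requests of multiple users on a first-come-first-served basis.
queue = PrintQueue()
queue.add_request("report.doc")
queue.add_request("invoice.doc")
print(queue.next_request())                 # report.doc is serviced first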
Types of Data Structures
Data structures can be classified under the following two categories:
„ Static: These are data structures whose size is fixed at compile time, and does not
grow or shrink at run time. An example of a static data structure is an array. Suppose
you declare an array of size 50, but store only 5 elements in it; the memory space
allocated for the remaining 45 elements will be wasted. Similarly, if you have
declared an array of size 50 but later on want to store 20 more elements, you will not
be able to store these extra required elements because of the fixed size of an array.
„ Dynamic: These are data structures whose size is not fixed at compile time and that
can grow and shrink at run time to make efficient use of memory. An example of a
dynamic data structure would be a list of items for which memory is not allocated in
advance. As and when items are added to the list, memory is allocated for those
elements. Similarly, when items are removed from the list, memory allocated to
those elements is deallocated. Such a list is called a linked list.
Arrays and linked lists are basic data structures that are used to implement other data
structures such as stacks, queues, trees, and graphs.
An array is always a static data structure, and a linked list is always a dynamic data
structure. However, the other data structures can be static or dynamic depending on
whether they are implemented using an array or a linked list.
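To make the contrast concrete, the following Python sketch places a fixed-size array next to a minimal linked list node. The Node class is an assumption introduced here for illustration; linked lists are covered in detail in the subsequent chapters:

# Static structure: a fixed-size array of 50 slots allocated up front.
arr = [None] * 50          # memory for all 50 elements is reserved immediately

# Dynamic structure: a linked list grows one node at a time, as items arrive.
class Node:
    def __init__(self, data):
        self.data = data   # the value stored in this node
        self.next = None   # reference to the next node, if any

head = Node(10)            # memory for this node is allocated only now
head.next = Node(20)       # adding an item allocates memory for one more node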
Just a minute:
An array is a _____________ data structure, and a linked list is a ___________ data
structure.
Answer:
static, dynamic
Designing Algorithms and Measuring Their Efficiency
Identifying Techniques for Designing Algorithms
Designing an algorithm for a given problem is a difficult intellectual exercise. This is
because there is no systematic method for designing an algorithm. Moreover, there may
exist more than one algorithm to solve a given problem. Writing an effective algorithm
for a new problem, or a better algorithm than an existing one, is an art
as well as a science because it requires both creativity and insight.
Although there is no systematic method for designing an algorithm, there are some
well-known techniques that have proved to be quite useful in designing algorithms. Two
commonly used techniques for designing algorithms are:
„ Divide and conquer approach
„ Greedy approach
Divide and Conquer Approach
The divide and conquer approach is an algorithm design technique that involves breaking
down the problem recursively into subproblems until the subproblems become so small
that they can be directly solved. The solutions to the subproblems are then combined to
give a solution to the original problem.
Divide and conquer is a powerful approach for solving conceptually difficult problems. It
simply requires you to find a way of:
1. Breaking the problem into subproblems.
2. Solving the trivial cases.
3. Combining the solutions to the subproblems to solve the original problem.
Divide and conquer often provides a natural way to design efficient algorithms.
Consider an example where you have to find the minimum value in a list of numbers. The
list is as shown in the following figure.
7 3 4 2 1 9 8 10
List of Numbers
To find the minimum value, you can divide the list into two halves, as shown in the
following figure.
7 3 4 2  1 9 8 10
List Divided into Two Equal Parts
Again, divide each of the two lists into two halves, as shown in the following figure.
7 3  4 2  1 9  8 10
List Divided into Four Equal Parts
Now, there are only two elements in each list. At this stage, compare the two elements in
each list to find the minimum of the two. The minimum value from each of the four lists
is shown in the following figure.
3  2  1  8
Minimum Values in the Four Lists
Again, compare the first two minimum values to determine their minimum. Also compare
the last two minimum values to determine their minimum. The two minimum values thus
obtained are shown in the following figure.
2  1
Minimum Values in the Two Halves of the Original List
Again, compare the two final minimum values to obtain the overall minimum value,
which is 1 in the preceding example.
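The divide and conquer strategy described above can be expressed as a recursive function. The following Python sketch, written for this illustration, splits the list into halves and combines the two minima at each level:

def find_min(values):
    # Divide and conquer: return the minimum value in a non-empty list.
    if len(values) <= 2:                 # trivial case: at most two elements
        return min(values)
    mid = len(values) // 2
    left_min = find_min(values[:mid])    # solve the left half
    right_min = find_min(values[mid:])   # solve the right half
    return min(left_min, right_min)      # combine the two sub-solutions

print(find_min([7, 3, 4, 2, 1, 9, 8, 10]))   # displays 1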
Greedy Approach
The greedy approach is an algorithm design technique that selects the best possible option
at any given time. Algorithms based on the greedy approach are used for solving
optimization problems, where you need to maximize profits or minimize costs under a
given set of conditions. Some examples of optimization problems are:
„ Finding the shortest distance from an originating city to a set of destination cities,
given the distances between the pairs of cities.
„ Finding the minimum number of currency notes required for an amount, where an
arbitrary number of notes for each denomination are available.
„ Selecting items with maximum value from a given set of items, where the total
weight of the selected items cannot exceed a given value.
Consider an example where you have to fill a bag of capacity 10 kg by selecting items from a given set, whose weights and values are listed in the following table.
Item Weight (in kg) Value (in $/kg) Total Value (in $)
A 2 200 400
B 3 150 450
C 4 200 800
D 1 50 50
E 5 100 500
Weights and Values of Items
A greedy algorithm acts greedily, and therefore selects the item with the maximum total
value at each stage. Therefore, first of all, item C with total value of $800 and weight 4 kg
will be selected. Next, item E with total value $500 and weight 5 kg will be selected. The
next item with the highest value is item B with a total value of $450 and weight 3 kg.
However, if this item is selected, the total weight of the selected items will be 12 kg (4 + 5
+ 3), which is more than the capacity of the bag.
Therefore, we discard item B and search for the item with the next highest value. This is item A, which has a total value of $400 and a total weight
of 2 kg. However, this item also cannot be selected because if it is selected, the total
weight of the selected items will be 11 kg (4 + 5 + 2). Now, there is only one item left,
that is, item D with a total value of $50 and a weight of 1 kg. This item can be selected as
it makes the total weight equal to 10 kg.
The selected items and their total values and weights are listed in the following table.
Item Weight (in kg) Total value (in $)
C 4 800
E 5 500
D 1 50
Total 10 1350
Items Selected Using Greedy Approach
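For illustration, the greedy selection described above could be sketched in Python as follows. The item data mirrors the preceding table, and the function name greedy_fill is an assumption made for this example:

def greedy_fill(items, capacity):
    # Greedy approach: repeatedly pick the remaining item with the highest
    # total value that still fits in the bag.
    chosen = []
    remaining = capacity
    for name, weight, total_value in sorted(items, key=lambda x: x[2], reverse=True):
        if weight <= remaining:
            chosen.append(name)
            remaining -= weight
    return chosen

# (name, weight in kg, total value in $), as in the preceding table
items = [("A", 2, 400), ("B", 3, 450), ("C", 4, 800), ("D", 1, 50), ("E", 5, 500)]
print(greedy_fill(items, 10))   # ['C', 'E', 'D'], with a total value of $1350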
For most problems, greedy algorithms usually fail to find the globally optimal solution.
This is because they usually do not operate exhaustively on all data. They can make
commitments to certain choices too early, which prevent them from finding the best
overall solution later.
This can be seen from the preceding example, where the use of a greedy algorithm selects
items with a total value of $1350 only. However, if the items were selected in the
sequence depicted by the following table, the total value would have been much greater,
with the weight being 10 kg only.
Item Weight (in kg) Total value (in $)
C 4 800
B 3 450
A 2 400
D 1 50
Total 10 1700
Optimal Selection of Items
In the preceding example, you can observe that the greedy approach commits to item E
very early. This prevents it from determining the best overall solution later. Nevertheless,
the greedy approach is useful because it is quick and easy to implement. Moreover, it often
gives good approximations to the optimal value.
Just a minute:
The _____________ technique involves selecting the best available option at each step.
Answer:
Greedy
Designing Algorithms Using Recursion
Recursion refers to the technique of defining a process in terms of itself. It is used to solve
complex programming problems that are repetitive in nature.
The basic idea behind recursion is to break a problem into smaller versions of itself, and
then build up a solution for the entire problem. This may sound similar to the divide and
conquer technique. However, recursion is not the same as divide and conquer: divide and conquer is a design concept that may be implemented in a computer program with the help of recursion.
Recursion is implemented in a program by using a recursive procedure or function. A
recursive procedure or function is one that invokes itself.
Consider a function f(n), which is the sum of the first n natural numbers. This function can
be defined in several different ways.
In mathematics, the function will be defined as:
f(n) = 1 + 2 + 3 + 4 + 5 +…+ n
However, the same function can be defined in a recursive manner as:
f(n) = f(n – 1) + n
where n > 1; and f(1) = 1
In this case, the recursive definition of the function f(n) calls the same function, but with
its arguments reduced by one. The recursion will end when n = 1, in which case f(1) = 1
has been defined.
To understand this concept, consider a factorial function. A factorial function is defined
as:
n! = 1 × 2 × 3 × 4 × … × n
This same factorial function can be redefined as:
n! = (n – 1)! × n     where n ≥ 1; and 0! = 1
This definition of n! is recursive because it refers to itself when it uses (n – 1)!. The value
of n! is explicitly given when n = 0; and the value of n! for arbitrary n is defined in terms
of the smaller value of n, which is closer to the base value 0.
If you have to calculate 3! by using recursion, you first define 3! in terms of 2!:
3! = (3 × 2!)
Now, you will define 2! in terms of 1!:
3! = (3 × (2 × 1!))
Now, you will define 1! in terms of 0!:
3! = (3 × (2 × (1 × 0!)))
Now, 0! is defined as 1. Therefore, the expression becomes:
3! = (3 × (2 × (1 × 1)))
3! = (3 × (2 × 1))
3! = (3 × 2)
3! = 6
The recursive algorithm for determining the factorial of a number n can be written as:
Algorithm: Factorial(n)
1. If n = 0, then:  // Terminating condition
a. Return (1)
2. Return (n × Factorial(n – 1))
Please note that every recursive algorithm should have a terminating condition.
Otherwise, the algorithm will keep on calling itself infinitely.
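A direct Python rendering of the preceding recursive algorithm is sketched below; note that the terminating condition is tested first, exactly as in the algorithm:

def factorial(n):
    if n == 0:                      # terminating condition: 0! = 1
        return 1
    return n * factorial(n - 1)     # n! = n * (n - 1)!

print(factorial(3))                 # displays 6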
The main advantage of recursion is that it is useful in writing clear, short, and simple
programs. One of the most common and interesting problems that can be solved by using
recursion is the Tower of Hanoi problem.
Tower of Hanoi
Tower of Hanoi is a classical problem, which consists of n different-sized disks and three
pins over which these disks can be mounted. All the disks are placed on the first pin with
the largest disk at the bottom and the remaining disks in decreasing order of their size, as
shown in the following figure.
Tower of Hanoi
The objective of the game is to move all disks from the first pin to the third pin in the
least number of moves by using the second pin as an intermediary.
To play this game, you need to follow these rules:
„ Only one disk can be moved at a time
„ A larger disk cannot be placed over a smaller one
Let n be the number of discs. If n = 3, it will require seven moves to transfer all the discs from pin 1 to pin 3, as shown in the following table.
Steps  Moves
1. move top disc from pin 1 to pin 3
2. move top disc from pin 1 to pin 2
3. move top disc from pin 3 to pin 2
4. move top disc from pin 1 to pin 3
5. move top disc from pin 2 to pin 1
6. move top disc from pin 2 to pin 3
7. move top disc from pin 1 to pin 3
Sequence of Moves for n = 3
The moves given in the preceding table are illustrated in the following figure.
Moves for Solving the Tower of Hanoi Problem (Steps 0 through 7)
When n = 2, we would move the top disc from pin 1 to pin 2, move the top disc from pin
1 to pin 3, and then move the top disc from pin 2 to pin 3.
The solution for n = 1 will be to move the disc from pin 1 to pin 3.
In general, to move n discs from pin 1 to pin 3 using pin 2 as an intermediary, you first need to move the top n – 1 discs from pin 1 to pin 2 using pin 3 as an intermediary, then move the largest disc from pin 1 to pin 3, and finally move those n – 1 discs from pin 2 to pin 3 using pin 1 as an intermediary.
The following algorithm can be used to move the top n discs from the first pin START to
the final pin FINISH through the temporary pin TEMP:
MOVE (n, START, TEMP, FINISH)
1. When n = 1:
a. MOVE a disc from START to FINISH
b. Return
2. Move the top n – 1 discs from START to TEMP using FINISH as an intermediary
[MOVE (n – 1, START, FINISH, TEMP)]
3. Move the top disc from START to FINISH
4. Move the top n – 1 discs from TEMP to FINISH using START as an intermediary
[MOVE (n – 1, TEMP, START, FINISH)]
In general, this solution requires 2ⁿ – 1 moves for n discs.
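The MOVE algorithm can be expressed in Python as shown in the following sketch, which simply prints each move rather than manipulating any data structure:

def move(n, start, temp, finish):
    # Move the top n discs from start to finish, using temp as an intermediary.
    if n == 1:
        print("Move a disc from", start, "to", finish)
        return
    move(n - 1, start, finish, temp)                 # top n - 1 discs to temp
    print("Move a disc from", start, "to", finish)   # largest disc to finish
    move(n - 1, temp, start, finish)                 # n - 1 discs from temp to finish

move(3, "pin 1", "pin 2", "pin 3")                   # prints the seven moves for n = 3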
Just a minute:
Identify the problem in the following algorithm that attempts to find the sum of the first
n natural numbers:
Algorithm: Sum (n)
1. s = n  + Sum(n – 1)
2. Return(s)
Answer:
There is no terminating condition in the given recursive algorithm. Therefore, it will
call itself infinitely. The correct algorithm would be:
1. If (n = 1)
Return(1)
2. s = n + Sum(n – 1)
3. Return(s)
Determining the Efficiency of an Algorithm
The greatest difficulty in solving programming problems is not how to solve the problem,
but how to solve the problem efficiently. Factors that affect the efficiency of a program
include the speed of the machine, the compiler, the operating system, the programming
language, and the size of the input. However, in addition to these factors, the way the data of a program is organized and the algorithm used to solve the problem also have a significant impact on the efficiency of a program.
There can be cases where a number of methods and algorithms can be used to solve a
problem. In such a situation, it can be difficult to decide which algorithm to use.
When there are several different ways to organize data and devise algorithms, it becomes
important to develop criteria to recommend a choice. Therefore, you need to study the
behavior of algorithms under various conditions and compare their efficiency.
The efficiency of an algorithm can be computed by determining the amount of resources
it consumes. The primary resources that an algorithm consumes are:
„ Time: The CPU time required to execute the algorithm.
„ Space: The amount of memory used by the algorithm for execution.
The fewer resources an algorithm uses, the more efficient it is.
Time/Space Tradeoff
To solve a given programming problem, many different algorithms may be used. Some of
these algorithms may be extremely time-efficient and others extremely space-efficient.
Time/space tradeoff refers to a situation where you can reduce the use of memory at the
cost of slower program execution, or reduce the running time at the cost of increased
memory usage.
An example of a situation where a time/space tradeoff can be applied is that of data
storage. If data is stored in a compressed form, the memory usage is less because data
compression reduces the amount of space required. However, it is more time consuming
because some additional time is required to run the compression algorithm. Similarly, if
data is stored in its uncompressed form, the memory usage is more, but the running time
is less.
Memory is generally perceived to be extensible because you can increase the memory of
your computer. Time, however, is not extensible. Therefore, time considerations generally
override memory considerations.
Method for Determining Efficiency
Although the efficiency of an algorithm depends on how efficiently it uses time and
memory space, the scope of this course is limited to determining only the time efficiency
of an algorithm.
To measure the time efficiency of an algorithm, you can write a program based on the
algorithm, execute it, and measure the time it takes to run. The execution time that you
measure in this case would depend on a number of factors such as:
„ Speed of the machine
„ Compiler
„ Operating system
„ Programming language
„ Input data
However, to determine how efficiently an algorithm solves a given problem, you would
like to determine how the execution time is affected by the nature of the algorithm.
Therefore, you need to develop fundamental laws that determine the efficiency of a
program in terms of the nature of the underlying algorithm.
To understand how the nature of an algorithm affects the execution time, consider a
simple example. Suppose assignment, comparison, write, and increment statements take a,
b, c, and d time units to execute respectively. Now, consider the following code used to
display the elements stored in an array:
1. Set I = 0    // 1 assignment
2. While (I < n):    // n comparisons
a. Display a[I]   // n writes
b. Increment I by 1  // n increments
The execution time required for the preceding algorithm is given by:
T = a + b  ×  n + c  ×  n + d  ×  n
T = a + n (b + c + d)
Here, T is the total running time of the algorithm expressed as a linear function of the
number of elements (n) in the array. From the preceding expression, it is clear that T is
directly proportional to n.
In fact, the total running time T is directly proportional to the number of iterations
involved in the algorithm. The number of iterations can be determined by counting the
number of comparisons involved in the algorithm.
In the preceding code segment, a total of n comparisons are being made. Therefore, the
total running time of the algorithm, T is directly proportional to n.
As T is directly proportional to n, an increase in the value of n will result in a proportional
increase in the value of T, as shown in the following figure.
Rate of Change of T with an Increase in the Value of n (execution time T plotted against problem size n)
Now, consider the following algorithm:
1. Set I = 0     // 1 assignment
2. While (I < n):     // n comparisons
a. Set J = 0     // n assignments
b. While (J < n):    // n × n comparisons
i. Display (a[I][J])   // n × n writes
ii. Increment J by 1   // n × n increments
c. Increment I by 1    // n increments
If you count the number of comparisons in the preceding code segment, they come out to be (n² + n), which is a quadratic function of n. Therefore, the total running time is directly proportional to n².
Although the number of comparisons is n² + n, the value of n is very small as compared to the value of n² (especially when n is very large). Therefore, the value of n can be ignored for finding the approximate running time.
As the running time is directly proportional to n², an increase in the value of n will result in a quadratic increase in the running time. This means that if the value of n is doubled, the running time will increase four times. The rate of change of T with an increase in the value of n is depicted in the following figure.
Rate of Change of T with an Increase in the Value of n (execution time T plotted against problem size n)
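The difference between the two algorithms can also be observed by counting comparisons for a few values of n, as in the following illustrative Python sketch (the function names are assumptions made for this example):

def comparisons_in_single_loop(n):
    # Count the (i < n) tests performed by the single display loop.
    count = 0
    i = 0
    while i < n:
        count += 1          # one comparison for this iteration
        i += 1
    return count            # about n comparisons

def comparisons_in_nested_loop(n):
    # Count the comparisons performed by the nested display loops.
    count = 0
    i = 0
    while i < n:
        count += 1          # outer-loop comparison
        j = 0
        while j < n:
            count += 1      # inner-loop comparison
            j += 1
        i += 1
    return count            # about n * n + n comparisons

for n in (10, 100, 1000):
    print(n, comparisons_in_single_loop(n), comparisons_in_nested_loop(n))
# Doubling n doubles the first count but roughly quadruples the second.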
From the preceding discussion, you can conclude that the running time of a program is a
function of n, where n is the size of the input data. The rate at which the running time of
an algorithm increases as a result of an increase in the volume of input data is called the
order of growth of the algorithm.
The order of growth of an algorithm is defined by using the big O notation. The big
O notation has been accepted as a fundamental technique for describing the efficiency of
an algorithm.
The following table lists some possible orders of growth, and their corresponding big O
notations.
Order of Growth    Big O Notation
Constant           O(1)
Logarithmic        O(log n)
Linear             O(n)
Loglinear          O(n log n)
Quadratic          O(n²)
Cubic              O(n³)
Exponential        O(2ⁿ), O(10ⁿ)
Big O Notations
If an algorithm has a linear order of growth, the algorithm is said to be of the order O(n).
Similarly, if an algorithm has a quadratic order of growth, the algorithm is said to be of
the order O(n²).
Selecting an Efficient Algorithm
Now that you know how the efficiency of a particular algorithm is determined, let us see
how this knowledge can be used to select an efficient algorithm.
According to their orders of growth, the big O notations can be arranged in an increasing
order as:
O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(10ⁿ)
Therefore, if a problem can be solved by using algorithms of each of the preceding orders of growth, an algorithm of the order O(1) will be considered the best, and an algorithm of the order O(10ⁿ) will be considered the worst. The goal of algorithm development should be to make algorithms of the smallest possible orders.
The following figures depict the order of growth for each of the preceding big O notations, with execution time (T) plotted against problem size (n): O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), and O(10ⁿ).
Orders of Growth
Now, consider that assignment, comparison, write, and increment statements take a, b, c,
and d time units to execute, respectively. Also, suppose all arithmetic operations require e
time units to execute. Now, consider the following two algorithms to find the sum of the
first n natural numbers:
Algorithm A
1. Set sum = 0    // 1 assignment
2. Set i = 1    // 1 assignment
3. While (i <= n):   // n comparisons
a. Set sum = sum + i   // n arithmetic operations, n assignments
b. Increment i by 1   // n increments
4. Display (sum)    // 1 write
Algorithm B
1. Set sum = (n × (n + 1))/2  // 3 arithmetic operations, 1 assignment
2. Display (sum)    // 1 write
Both Algorithm A and Algorithm B perform the same task, that is, both determine the
sum of the first n natural numbers. Algorithm A adds each number iteratively to a variable
sum. However, Algorithm B uses a formula to calculate the sum of the first n natural
numbers.
The execution time required for Algorithm A is given by:
T = (n + 2) × a + n × b + 1 × c + n × d + n × e
T = an + 2a + bn + c + dn + en
T = 2a + c + n(a + b + d + e)
T is a linear function of n. Therefore, the algorithm is of the order O(n).
Now determine the time required to execute algorithm B:
T = 1 × a + 1 × c + 3 × e
T = a + c + 3e
Unlike Algorithm A, the time taken by Algorithm B is a constant, and does not depend on
the value of n. Therefore, the algorithm is of the order O(1).
Because Algorithm A is of the order O(n), the execution time of Algorithm A increases
linearly with the value of n. However, Algorithm B is of the order O(1). Therefore, the
execution time of Algorithm B is constant. This means that an increase in the value of n
does not have any impact on the execution time of the algorithm. Therefore, however large the problem is, Algorithm B solves it in the same amount of time.
Suppose for n = 10, both Algorithm A and Algorithm B take 10 nanoseconds (ns) to
execute. However, when n is increased to 100, Algorithm A will take 100 ns to execute,
but Algorithm B will take only 10 ns to execute. Similarly, when n is increased to 1000,
Algorithm A will take 1000 ns to execute, but Algorithm B will take only 10 ns.
This means that when the problem is very large in size, Algorithm B would prove to be
much more efficient than Algorithm A.
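Both algorithms can be written in Python as in the following sketch, which makes it easy to see that the second version performs a fixed amount of work regardless of n:

def sum_a(n):
    # Algorithm A: iterative summation, order O(n).
    total = 0
    i = 1
    while i <= n:
        total = total + i
        i += 1
    return total

def sum_b(n):
    # Algorithm B: closed-form summation, order O(1).
    return (n * (n + 1)) // 2

print(sum_a(1000), sum_b(1000))   # both display 500500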
Best, Worst, and Average Case Efficiency
Suppose you have a list of names in which you have to search for a particular name. You
have designed an algorithm that searches the name in the list of n elements, by comparing
the name to be searched with each element in the list sequentially.
The best case in this scenario would be if the first element in the list matches the name to
be searched. The efficiency in that case would be expressed as O(1) because only one
comparison was made.
Similarly, the worst case in this scenario would be if the complete list is traversed and the
element is found at the end of the list or is not found in the list. The efficiency in that case
would be expressed as O(n) because n comparisons were made.
Continuing with the same example, the average case efficiency can be obtained by finding
the average number of comparisons. Here,
Minimum number of comparisons = 1
Maximum number of comparisons = n
Therefore, average number of comparisons = (n + 1)/2
(n + 1)/2 is a linear function of n. Therefore, the average case efficiency will be expressed
as O(n).
The worst case efficiency of the preceding search program can be improved by using an alternate search algorithm that provides a better worst case efficiency. One such algorithm is binary search, which provides an efficiency of O(log n) in the worst case. You will learn more about this algorithm in the subsequent chapters.
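A minimal Python sketch of the sequential search described above is shown below; the best, worst, and average cases correspond to where (or whether) the name occurs in the list. The sample names are assumptions made for this example:

def sequential_search(names, target):
    # Return the position of target in names, or -1 if it is not present.
    for index, name in enumerate(names):
        if name == target:          # one comparison per element examined
            return index
    return -1

names = ["Anita", "Brian", "Chen", "Divya"]
print(sequential_search(names, "Anita"))   # best case: found after 1 comparison
print(sequential_search(names, "Divya"))   # worst case: found after n comparisons
print(sequential_search(names, "Zubin"))   # worst case: n comparisons, not found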
Group Discussion: Dependence of Efficiency on Selected Algorithm
Problem Statement
You need to write an algorithm to search for a given word in a dictionary. Discuss how
different algorithms and different ways of organizing the dictionary data affect the
efficiency of the process.
Solution
To search for a given word in a dictionary, many different strategies can be used. Both the
algorithm used and the way the data is organized have a significant impact on the
efficiency of the process. This is explained through various examples in the following
table.
1. How is the data organized? Words are arranged in alphabetic order.
Which algorithm is used? Starting from the first word in the dictionary, each word is compared to the word to be searched. The search stops when either the word is found, a word greater than the word being searched is found, or the end of the dictionary is reached.
How efficient is the process? This process is very inefficient because it has a very large search space, and no attempt has been made to reduce that search space. If there were 10,000 words in the dictionary, it would make 5,000 comparisons on an average.

2. How is the data organized? Words are arranged in alphabetic order and grouped under alphabetic sections.
Which algorithm is used? First, search for the section in which the word falls. Then, search for the word in that section.
How efficient is the process? This process is far more efficient because there are only 26 sections. When the correct section is found, you need to search only the words in that section.

3. How is the data organized? Groups of words are ordered onto successive pages, which are numbered. The top of each page indicates the first word and the last word contained on that page, so the word being searched for can be located within this shortened search space.
Which algorithm is used? Starting from the first page in the dictionary, compare the word to be searched with the first word and the last word on each page. If the word falls in the range of a particular page, then starting from the first word on that page, each word is compared to the word to be searched until either the word is found or the end of the page is reached.
How efficient is the process? If we assume 20 words per page, the search space of 10,000 words reduces to 500 pages, and then to 20 words within the correct page.

4. How is the data organized? Words are arranged in alphabetic order and grouped under alphabetic sections. In addition, the first and last word on each page are written at the top of that page.
Which algorithm is used? First, search for the section in which the word falls. Then, starting from the first page in that section, compare the word to be searched with the first and last word on each page. If the word falls in the range of a particular page, then starting from the first word on that page, compare each word to the word to be searched until either the word is found or the end of the page is reached.
How efficient is the process? After the correct section is found, you do not have to search all the words in that section. You can first do a page-wise search, and after the correct page is found, you can search the words on that page. This further improves the efficiency.

5. How is the data organized? Words are arranged in alphabetic order.
Which algorithm is used? Physically open the dictionary into two halves and check whether the word lies in the left half or the right half. If it is in the left half, again split that half into two equal halves. Continue this process until the word is found.
How efficient is the process? In each step, the search space is reduced to half. Therefore, if there are 10,000 words in the dictionary, after the first step the search space is reduced to 5,000, after the second step it is reduced to 2,500, and so on. This process proves to be very efficient, especially for large search spaces.

Dependence of Efficiency on Selected Algorithms
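The last strategy in the preceding table is essentially a binary search over an alphabetically ordered list of words. The following Python sketch illustrates the idea; the sample word list is an assumption made for this example:

def binary_search(words, target):
    # Binary search over an alphabetically sorted list of words.
    low, high = 0, len(words) - 1
    while low <= high:
        mid = (low + high) // 2
        if words[mid] == target:
            return mid                 # word found
        elif words[mid] < target:
            low = mid + 1              # discard the left half
        else:
            high = mid - 1             # discard the right half
    return -1                          # word is not in the dictionary

words = ["apple", "banana", "cherry", "grape", "mango", "orange", "peach"]
print(binary_search(words, "mango"))   # displays 4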
Practice Questions
1. Which of the following CANNOT be a static data structure?
a. Stack
b. Queue
c. Array
d. Linked list
2. Which of the following is NOT a characteristic of an algorithm?
a. Finiteness
b. Effectiveness
c. Efficiency
d. Input
3. Do greedy algorithms always find the globally optimal solution? Why or why not?
4. Which of the following is considered most efficient?
a. An algorithm of the order O(n²)
b. An algorithm of the order O(n)
c. An algorithm of the order O(log n)
d. An algorithm of the order O(n log n)
Summary
In this chapter, you learned that:
„ An algorithm can be defined as a step-by-step procedure for solving a problem that
produces the correct result in a finite number of steps.
„ An algorithm has five important properties:
z Finiteness
z Definiteness
z Input
z Output
z Effectiveness
„ An algorithm that provides the maximum efficiency should be used for solving the
problem.
„ Data structures can be classified under the following two categories:
z Static
z Dynamic
„ Two commonly used techniques for designing algorithms are:
z Divide and conquer approach
z Greedy approach
„ Recursion refers to a technique of defining a process in terms of itself. It is used to
solve complex programming problems that are repetitive in nature.
„ The primary resources that an algorithm consumes are:
z Time: The CPU time required to execute the algorithm.
z Space: The amount of memory used by the algorithm for execution.
„ Time/space tradeoff refers to a situation where you can reduce the use of memory at
the cost of slower program execution, or reduce the running time at the cost of
increased memory usage.
„ The total running time of an algorithm is directly proportional to the number of
comparisons involved in the algorithm.
„ The order of growth of an algorithm is defined by using the big O notation.
Exercises
Exercise 1
Write an algorithm to check whether a number is a prime number or not. In addition,
write a program to implement this algorithm.
Exercise 2
Write an algorithm to generate the first 10 prime numbers. In addition, write a program to
implement this algorithm.
Exercise 3
Write an algorithm to accept a number between 1 and 9 and display a pattern. For
example, if the number entered is 5, the following pattern should be displayed:
1
2     1
3     2     1
4     3     2     1
5     4     3     2     1
Write a program to implement this algorithm.
Exercise 4
Write an algorithm to accept a number between 1 and 9 and display a pyramid. For
example, if the number entered is 5, the following pyramid will be displayed:
1
1     2     1
1     2     3     2     1
1     2     3     4     3     2     1
1     2     3     4     5     4     3     2     1
Write a program to implement this algorithm.
Exercise 5
Write an algorithm to accept two strings and check whether the second string exists
within the first string. For example, if the first string is “concatenation” and the second
string is “cat”, the algorithm should display “Substring found at position 4 in the string”.
However, if the first string is “concatenation” and the second string is “tent”, the
algorithm should display “Substring not found in the string”. Write a program to
implement this algorithm.
Exercise 6
Suppose you have two arrays of size 10 each containing elements in ascending order.
Write an algorithm to merge the two arrays in such a way that the elements in the
resulting array are arranged in the ascending order. In addition, write a program to
implement this algorithm.
Exercise 7
Write an algorithm to find the Highest Common Factor (HCF) of three numbers. In
addition, write a program to implement this algorithm.
Exercise 8
Write an algorithm to multiply two 3 × 3 matrices. In addition, write a program to
implement this algorithm.
Hint: Given two 3 × 3 matrices, A and B:
A = | a00  a01  a02 |
    | a10  a11  a12 |
    | a20  a21  a22 |

B = | b00  b01  b02 |
    | b10  b11  b12 |
    | b20  b21  b22 |
The product of the two matrices is a 3 × 3 matrix, C, in which the various terms are
calculated by using the following relations:
c00 = a00 × b00 + a01 × b10 + a02 × b20 //Row 1 of Matrix A  ×  Column 1 of Matrix B
c01 = a00 × b01 + a01 × b11 + a02 × b21 //Row 1 of Matrix A  ×  Column 2 of Matrix B
c02 = a00 × b02 + a01 × b12 + a02 × b22 //Row 1 of Matrix A  ×  Column 3 of Matrix B
c10 = a10 × b00 + a11 × b10 + a12 × b20 //Row 2 of Matrix A  ×  Column 1 of Matrix B
c11 = a10 × b01 + a11 × b11 + a12 × b21 //Row 2 of Matrix A  ×  Column 2 of Matrix B
c12 = a10 × b02 + a11 × b12 + a12 × b22 //Row 2 of Matrix A  ×  Column 3 of Matrix B
c20 = a20 × b00 + a21 × b10 + a22 × b20 //Row 3 of Matrix A  ×  Column 1 of Matrix B
c21 = a20 × b01 + a21 × b11 + a22 × b21 //Row 3 of Matrix A  ×  Column 2 of Matrix B
c22 = a20 × b02 + a21 × b12 + a22 × b22 //Row 3 of Matrix A  ×  Column 3 of Matrix B
Exercise 9
Write a recursive algorithm to print the first n numbers in the Fibonacci series. In
addition, write a program to implement this algorithm.
Hint: The Fibonacci series is of the form: 0, 1, 1, 2, 3, 5 . . .