
Reading Questions from Introduction to The Design and Analysis of Algorithms
by Anany Levitin
 Chapter 1.1 (Including the introduction to the chapter)
 What is the cornerstone of computer science?
 Give three reasons why the study of algorithms is important,
both within and beyond the field of computer science.
 Did algorithms exist before computers? Explain why your answer makes sense.
 Compute gcd(30,70) using each of the three algorithms described.
Make sure to show all of your work so it is clear that you understand how each
algorithm works.
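If you want to check your hand computations, here is a minimal Python sketch of two of the three methods from the section (Euclid's algorithm and consecutive integer checking); the function names are my own.

```python
def gcd_euclid(m, n):
    # Euclid's algorithm: repeatedly replace (m, n) with (n, m mod n).
    while n != 0:
        m, n = n, m % n
    return m

def gcd_consecutive(m, n):
    # Consecutive integer checking: try t = min(m, n), t-1, t-2, ...
    # until t divides both numbers.
    t = min(m, n)
    while m % t != 0 or n % t != 0:
        t -= 1
    return t

print(gcd_euclid(30, 70), gcd_consecutive(30, 70))  # both give 10
```

The third method (the middle-school procedure via prime factorizations) is left out here since it is easier to do on paper than in code.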
 Attempt to rank the three gcd algorithms based on how long they take to solve the problem,
from fastest to slowest.
 Chapter 1.2
 Name at least six different algorithm design techniques.
 Why is it important to learn about algorithm design techniques
(as opposed to just learning specific algorithms to solve specific problems)?
 Discuss the relationship between data structures and algorithms.
(Your answer should be longer than a sentence or two.)
 Give three ways that an algorithm can be specified.
 When we analyze an algorithm, what two things are we usually most interested in?
 Which is more important: the simplicity of an algorithm or its efficiency? Explain.
 What is meant by the optimality of an algorithm?
Is it just another name for efficiency? Explain.
 True or false: Designing algorithms is easy. Explain.
 True or false: Designing algorithms is a one-and-done sort of activity. Explain.
 Chapter 1.3
 If someone claims to have a comparison sorting algorithm
(that is, an algorithm that sorts by comparing keys) that takes n steps
for a list of size n, should you believe them?
Give a clear and complete explanation of your reasoning.
 Which of these are in-place sorting algorithms:
insertion sort, bubble sort, selection sort, quick sort, and merge sort?
 Hopefully you recall the binary search algorithm.
Give one or two benefits of the algorithm (versus sequential search, for example)
and one or two limitations of the algorithm.
 Give an example of a situation where string matching can be used to solve a problem.
 What is the difference between a graph algorithm and a graph problem?
Give examples that clearly demonstrate the difference.
 I would claim that many combinatorial problems
are both easy to solve and difficult to solve at the same time. What can I possibly mean by this?
How can they be both?
 What makes numerical problems different from the other types of problems discussed
in this section? There are at least two possible ways to answer this question (they are related,
but slightly different).
 Chapter 1.4
 What two different basic data structures can be used to store a list?
 Give several advantages and disadvantages of singly-linked lists
compared to doubly-linked lists.
 Give an example of a problem that can be solved with the help of a queue.
 Give an example of a problem that can be solved with the help of a stack.
 Give an example of a problem that can be solved with the help of a priority queue.
 What basic data structure can be used to implement a priority queue?
 Prove that an undirected graph with v vertices has at most v(v-1)/2 edges.
(Hint: Use induction or some of the combinatorics you have previously learned.)
 Give the adjacency list representation of the graph in Figure 1.9.
 What size matrix is needed to store the adjacency matrix representation
of the graph in Figure 1.9? How many entries of the matrix are 1s?
Give the third row of the matrix.
 Give an example of a problem that can be modeled using a weighted graph
whose solution involves a simple path.
 Given a graph, I want to know if there is a way to get from any vertex to any other
vertex by following edges. What property of the graph am I trying to determine?
 Given a graph, I want to travel along a series of edges such that I do not repeat any
edges and end up back where I started. What am I looking for?
 Explain how you can model a rooted tree using a (free) tree.
 Is a binary search tree a free tree? A rooted tree? An ordered tree? A binary tree?
 Assume I have a universal set of 8 elements. What bit vector is represented by the number
117? What set is represented by the number 117?
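As a sketch of the general technique (deliberately not worked for 117), here is how a number's binary representation can be read as a bit vector and then as a subset; the universal set U below is a hypothetical example.

```python
def to_bit_vector(num, size=8):
    # Binary representation of num, padded to `size` bits,
    # most significant bit first.
    return [(num >> i) & 1 for i in range(size - 1, -1, -1)]

def to_subset(num, universe):
    # Bit i (counting from the most significant end) set to 1
    # means universe[i] is in the subset.
    bits = to_bit_vector(num, len(universe))
    return {e for e, b in zip(universe, bits) if b}

# Hypothetical universal set of 8 elements:
U = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
print(to_bit_vector(5))      # [0, 0, 0, 0, 0, 1, 0, 1]
print(to_subset(5, U))       # the elements at the 1 positions
```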
 What is the best data structure? Explain.
 Chapter 2.1 (Including the introduction to the chapter)
 When analyzing an algorithm, what two resources do we typically focus on?
Of these two, which one is more important? Why is it more important?
 What is meant by size of input?
 Explain what is meant by basic operation
and what it has to do with analyzing an algorithm.
 Of worst-case, average-case, and best-case efficiencies,
which is least important and why?
 What is the difference between the worst-case and average-case efficiencies?
 Why is it typically more difficult to determine the average-case
complexity of an algorithm than the other two?
 Give an example of an algorithm whose three time complexities (best, average, worst)
are all the same
(give the complexities).
 Give an example of an algorithm whose three time complexities are not all the same
(give the complexities). Bonus points if you can think of one where they are all different.
 What is the space complexity of merge sort?
 Chapter 2.2
 Write down the definition of O-notation.
 Write down the definition of Θ-notation.
 In English, what does it mean that f(n)=Θ(g(n))?
 Give a simple Big-Θ bound on 13n^{3} + 8n^{2} - 5n + 35.
 If you can, prove that n^{2}=O(n^{3})
using limits.
If you can't, you might want to talk with me about limits
(especially those who haven't had calculus).
 If you can, prove that 13n^{3} + 8n^{2} - 5n + 35 = Θ(n^{3})
using limits.
If you can't, you might want to talk with me about limits
(especially those who haven't had calculus).
 Chapter 2.3
 Do exercise 2.3.4 (page 67).
 Analyze the algorithm from Exercise 1.3.1 (page 23).
Give both the best-case and worst-case efficiencies.
(Hint: It involves a double sum.
Also, if you have i or j in your answer then you are doing it wrong.)
 Chapter 2.4
 Do exercise 2.4.4 (page 77).
 Chapter 2.5
 Are Fibonacci numbers cool or what?
 Compute F(6) using the recursive algorithm on page 81.
(Draw a tree of the recursive calls.)
How many additions were required? (Hint: One addition is performed for every internal node of the tree.)
How much extra space was required?
 Compute F(6) using the iterative algorithm on page 82.
How many additions were required?
How much extra space was required?
 Explain why the recursive algorithm to compute F(n) is so bad.
Specifically, how exactly does it end up computing F(n)?
 Compute F(5) by computing the appropriate matrix product as suggested at
the end of the section. Show your work!
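For checking your matrix computation, here is a sketch using the identity [[1,1],[1,0]]^{n} = [[F(n+1), F(n)], [F(n), F(n-1)]] with F(0)=0 and F(1)=1; it multiplies the matrices naively, one at a time, rather than squaring as the book suggests.

```python
def mat_mult(A, B):
    # Product of two 2x2 matrices.
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def fib_matrix(n):
    # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]].
    result = [[1, 0], [0, 1]]   # 2x2 identity
    M = [[1, 1], [1, 0]]
    for _ in range(n):
        result = mat_mult(result, M)
    return result[0][1]         # F(n)

print(fib_matrix(5), fib_matrix(6))  # 5 8
```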
 Chapter 3.1 (Including the introduction to the chapter)
 Explain why the phrase "work harder, not smarter" is essentially saying
use a brute-force technique.
 Give a brief description of selection sort in your own words.
 Near the bottom of page 99, the book gives a formula, C(n), for the number of times the
basic operation is executed in the SelectionSort algorithm.
 Why is the comparison the basic operation instead of the min←j
or the swap? Clearly explain why each of those is the wrong choice, and why
the comparison is the right choice.
 Explain very clearly why the double sum given is correct.
 Finish simplifying the sum, showing all of the necessary steps.
(Note: They give the answer on the next page, but not the work to get the answer.)
There are two ways of simplifying the sum; try to use a clever approach that requires
less computation.
 Give a brief description of bubble sort in your own words.
 What is the main reason that bubble sort is worse than selection sort?
 Choose the best word in the following sentence and
explain why it is the correct choice:
Brute-force algorithms are always/sometimes/never
good algorithms.
 Chapter 3.2
 About how much faster is SequentialSearch2 than SequentialSearch?
Justify your answer.
 Give one strength of a brute-force approach to problem solving.
 Give one weakness of a brute-force approach to problem solving.
 Clearly explain why the BruteForceStringMatch algorithm
(page 105)
makes m(n-m+1) comparisons in the worst case.
(Hint: Your answer should contain phrases such as "the while loop executes...")
 Do exercise 3.2.6 on page 107.
(Hint: Don't forget that the book has hints for most exercises.)
 Chapter 3.3 (for 385)
 What is the Euclidean distance between two points?
Give both a technical definition and an explanation of what it represents.
 Explain why the sqrt is unnecessary in the BruteForceClosestPair
algorithm.
 Explain why removing the sqrt from BruteForceClosestPair
is a good idea.
 Use what you learned earlier in the semester to conclude that the
number of times the basic operation of BruteForceClosestPair is
executed is n(n-1). That is, do it without using summations.
 Is the convex hull of 3 points always a triangle? Explain.
 Let S be a set of n distinct points.
What are the possible sizes of the convex hull of S?
 How many extreme points does the set represented in
Figure 3.5 (page 111) have?
 Given the set of points S={ (0,0), (1,1), (0.5, 2), (3,0), (0,3) },
do the computation described on pages 112-113 to show that the line segment between
(0,0) and (3,0) is on the convex hull of S.
 Which points are on the convex hull of S
in the previous question?
 Chapter 3.4
 What's the good news about exhaustive search algorithms?
 What's the bad news about exhaustive search algorithms?
 Given an exhaustive search algorithm, is it sometimes possible to make it
"less exhaustive" (so that it is more efficient)
but still give a correct solution? Explain, using an example.
 Does anybody currently know a polynomial-time algorithm to solve the
Traveling Salesman Problem
or the Knapsack Problem? Explain how you know.
 Which exhaustive search algorithm is more efficient, the one for
the Knapsack Problem or the one for the Assignment Problem? Prove it.
(Note: It is a little silly to compare the efficiencies of algorithms that solve different
problems, but the point here is to review your understanding of the growth rates of functions.)
 Choose the best word in the following sentence and explain why it is the correct choice:
If a problem domain grows exponentially or faster
(e.g. a problem of size n has 2^{n} or n! possible solutions)
it is always/sometimes/never possible to find a polynomial-time algorithm
to solve the problem.
 Chapter 3.5
 Do Exercise 3.5.1 on page 128. Indicate on the original graph which edges are tree edges
and which are back edges.
 The book talks about a stack when it discusses DFS, yet no stack is explicitly mentioned
in the algorithm. So what are they talking about?
 Why are back edges not encountered in BFS?
 Do Exercise 3.5.4 on page 128. Indicate on the original graph which edges are tree edges
and which are cross edges.
 Given a vertex v in a graph, the shortest path problem gives the shortest
distance from v to every other vertex in the graph. So in the Example from Figures 3.12
on page 127, it would give the distance from a to both b and e as 1,
and from a to both c and f as 2, etc.
Modify BFS so it solves the shortest path problem.
Hint: Use a data structure that stores d(u), which is the distance from a
to u, which is initially set to ∞ for every vertex except a
for which clearly d(a)=0.
(You can think of d as an array that is indexed by vertices.)
You should only need to add a few things to BFS(G) (to initialize things) and
bfs(v) (to update things).
For simplicity, assume d is a global variable.
(Hint: If d(v)=k, and u is adjacent to v but has not been previously discovered,
what is d(u)?)
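The modification described in the hint can be sketched as follows; the adjacency-dict graph G is a hypothetical example chosen to match the distances mentioned (b and e at distance 1 from a; c and f at distance 2), not the book's figure.

```python
from collections import deque

def bfs_distances(G, a):
    # G: adjacency dict {vertex: [neighbors]}; returns d(u) for every vertex u.
    d = {u: float('inf') for u in G}   # d(u) = infinity initially
    d[a] = 0                           # except d(a) = 0
    queue = deque([a])
    while queue:
        v = queue.popleft()
        for u in G[v]:
            if d[u] == float('inf'):   # u not previously discovered
                d[u] = d[v] + 1        # one edge farther than v
                queue.append(u)
    return d

G = {'a': ['b', 'e'], 'b': ['a', 'c'], 'c': ['b', 'f'],
     'e': ['a', 'f'], 'f': ['c', 'e']}
print(bfs_distances(G, 'a'))  # d(b)=d(e)=1, d(c)=d(f)=2
```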
 What do DFS and BFS have in common? That is, what is the purpose of both algorithms?
 When do DFS and BFS produce the exact same tree? Can you give a precise answer?
 If you are going to implement DFS and/or BFS, is it better to use an adjacency matrix
representation of the graph, or an adjacency list representation? Explain.
 Choose either DFS or BFS and explain why the algorithm has complexity Θ(V+E).
Hint: This one is more difficult than algorithms we have previously analyzed. It is not as
easy as setting up and solving a recurrence relation.
If you do that, it will overestimate the complexity.
Instead, talk through how the algorithm works at a slightly higher level while still making
sure you account for everything.
 Chapter 4.1 (Including the introduction to the chapter)
 Which typically leads to a more efficient algorithm:
decrease by a constant or decrease by a constant factor? Explain.
 Which typically leads to a more efficient algorithm:
decrease by a constant factor or variable size decrease? Explain.
 Explain how to sort a hand of playing cards using insertion sort.
(Hint: Assume the cards start in a pile on the table and that you are going to sort them
into your hand, one by one.)
 Why is insertion sort generally considered to be better than selection sort?
 Demonstrate insertion sort on the array [3, 7, 8, 1, 4, 6, 2].
(Show the steps as they did in Figure 4.4.)
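To check your trace, here is a minimal insertion sort sketch that prints the array after each insertion (a plain array-based version in the spirit of the book's pseudocode, not a copy of it).

```python
def insertion_sort(A):
    # Decrease-by-one: assume A[0..i-1] is sorted, then insert A[i] into place.
    for i in range(1, len(A)):
        v = A[i]
        j = i - 1
        while j >= 0 and A[j] > v:
            A[j + 1] = A[j]   # shift larger elements one slot right
            j -= 1
        A[j + 1] = v
        print(A)              # state after each insertion
    return A

insertion_sort([3, 7, 8, 1, 4, 6, 2])
```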
 You already know a decrease-by-a-constant-factor algorithm
(you probably saw it in CSCI 235, and you certainly saw it earlier this semester).
What is it?
 Chapter 4.2
 Is the graph in Figure 1.6(b) (page 28) a dag? Explain.
 Can the graph in Figure 1.6(b) be topologically sorted? Explain.
 Go to
CS Course Catalog.
Create a directed graph that shows the dependencies
between the following courses, where a directed edge from A to B means A is a prerequisite for B:
112, 225, 235, 245, 255, 265, 321, 354, 361, 376, 385, 470.
 Find a valid order in which the courses from the previous question can be taken
by using the DFSbased algorithm to solve the topological sorting problem.
Make sure to show all of your work!
 Find a valid order in which the courses from question 3 can be taken
by using the algorithm based on decrease-and-conquer (the source removal algorithm)
to solve the topological sorting problem.
Make sure to show all of your work!
 Did you get the same solutions in the previous two questions? If so, will that always be the case?
If not, is that a problem? Explain in either case.
 What is the computational complexity of the source removal algorithm? Justify your answer.
 Chapter 4.3 (for 385)
 You first saw a decrease-by-a-constant-factor algorithm back in 235. What was it?
(Hint: If you can't figure this out, read the first part of section 4.4 on pages 150-152.
But try to figure it out before turning to those pages!)
 How might you represent the arrows in the Johnson-Trotter algorithm? Overall, what data structure(s) do you need?
(Note: You need to store a permutation, the arrows for a permutation, and a list of permutations.
How do you store a permutation? A list of permutations? Etc.)
Be complete and specific.
 What is a mobile element?
 If the Johnson-Trotter algorithm generates n! permutations, why isn't it obvious that it should
take Θ(n!) time? (This is a very important question, so make sure you give it some thought, and ask if
you aren't sure of the answer. Of course, you should always ask if you don't know the answer.)

Imagine you are in the middle of the Johnson-Trotter algorithm
and have this:
Show the next 3 steps of the algorithm.
 Given the permutation 352461, what are the next 3 permutations generated
by LexicographicPermute?
 What is the computational complexity of LexicographicPermute,
assuming a reasonable implementation?
 Explain why generating bit strings of length n and generating subsets of a set of n elements
are essentially the same thing.
 Why does the order in which bit strings are generated matter?
 Use the BRGC algorithm to generate all bit strings of length 4.
Make sure to show all of the intermediate steps.
 Give a recurrence relation for T(n), the number of steps it takes to call BRGC(n).
(There are a few possible answers to this depending on an assumption you make.
If you do not think you ever had to make an assumption when you analyzed the algorithm, you
probably were not thinking carefully enough.)
Do not forget to ask me about solving the recurrence relation in class!
(I would have you do it, but I do not want you to spend time on it if you happen to get the incorrect recurrence relation.)
 How much space is required by BRGC(n)?
 Chapter 4.4
 What does the binary representation of n have to do with the complexity of
binary search?
 The textbook demands that I ask you to do Exercise 4.4.10 on page 157. So do it!
 Compute 73*25 using the Russian peasant method.
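A sketch for checking the hand computation; the familiar halving/doubling table is implicit in the loop (halve m, double n, and add up the n values on rows where m is odd).

```python
def russian_peasant(m, n):
    # Russian peasant multiplication: halve m (discarding remainders),
    # double n, and sum the n values wherever m is odd.
    total = 0
    while m >= 1:
        if m % 2 == 1:
            total += n
        m //= 2
        n *= 2
    return total

print(russian_peasant(73, 25))  # 1825
```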
 Did you watch the Josephus video linked on the schedule?
 Chapter 4.5
 Run Quickselect to find the median of the array [5,4,3,2,1].
 Run Quickselect to find the median of the array [1,2,3,4,5].
 Discuss how efficient Quickselect was in the previous two questions.
Was the algorithm more or less efficient than the expected average case?
Which one was worse? What did you learn about what makes the algorithm behave badly?
 Demonstrate interpolation search for the number 62
in the list [1, 3, 10, 14, 27, 35, 50, 67, 83, 100].
Write down l, r, and x for every iteration
and show your computation of x each time.
 Give an example of an array of 10-20 elements for which interpolation search
might perform badly (i.e. inefficiently).
It has to be a sorted array so that the technique is valid, of course.
Give an example of a number to search for which the algorithm
is not efficient and explain why it behaves badly for the given number.
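As a reference while tracing, here is a sketch of interpolation search using the standard interpolation formula for the split index x; it prints l, r, and x on each iteration. (The formula and the guard against equal endpoint keys are my own rendering, not the book's exact pseudocode.)

```python
def interpolation_search(A, v):
    # Estimate where v should sit by linear interpolation
    # between A[l] and A[r].
    l, r = 0, len(A) - 1
    while l <= r and A[l] <= v <= A[r]:
        if A[r] == A[l]:                  # all keys in range equal; avoid dividing by zero
            return l if A[l] == v else -1
        x = l + (v - A[l]) * (r - l) // (A[r] - A[l])
        print(f"l={l}, r={r}, x={x}")
        if A[x] == v:
            return x
        elif A[x] < v:
            l = x + 1
        else:
            r = x - 1
    return -1                             # not found

interpolation_search([1, 3, 10, 14, 27, 35, 50, 67, 83, 100], 62)
```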
 Were you surprised to learn that several algorithms on a binary search tree
are an example of variable-size-decrease? Explain.
 Explain why binary search tree algorithms are generally variable-size-decrease
instead of decrease-by-a-constant-factor.
 For what type of binary search tree are algorithms decrease-by-a-constant-factor
(or at least close to it), if any?
 Chapter 5.1 (Including the introduction to the chapter)
 Are divide-and-conquer algorithms always efficient? If not, give an example
of an inefficient divide-and-conquer algorithm. (Hint: You may have seen one previously
this semester and/or in CSCI 235.)
 Explain why the Master Theorem is ideally suited to help analyze
many divide-and-conquer algorithms. Be detailed!
 Make an argument for why some decrease-and-conquer algorithms can be
considered a special case of divide-and-conquer. Use an example to clearly
illustrate the point.
 At the bottom of page 173, they give an exact formula for C_{worst}(n),
but do not provide the details. Fill in the details. In other words, give
an exact solution to the recurrence
relation C_{worst}(n) = 2C_{worst}(n/2) + n - 1 if n > 1,
C_{worst}(1) = 0.
 Name one or two advantages that merge sort has over other sorting algorithms
such as quick sort and heap sort.
 Name one or two disadvantages that merge sort has over other sorting algorithms
such as quick sort and heap sort.
 Chapter 5.2
 Of mergesort and quicksort, one of them essentially does the work on the way
down the recursive calls, and the other one does the work on the way back up from the
recursive calls. Which is which? Clearly justify your answer.
 Which partition algorithm did we discuss in CSCI 235, Lomuto or Hoare?
(You will have to base your answer on the details of the algorithm since I doubt that I
specified the name.)
 What sorts of arrays does quicksort perform worst on? Why do those arrays make it
perform badly?
 Can the worst-case behavior of quicksort be prevented, or at least the chances
of it occurring minimized? If so, explain how.
 Which algorithm is considered the best sorting algorithm:
quicksort, mergesort, or heapsort? Why?
 If you want to choose a pivot element other than the leftmost element of the array, how
do you have to modify the partition algorithm?
 Give two or three downsides of quicksort.
 Chapter 5.3
 Explain why it makes perfect sense that the divide-and-conquer technique
applies very naturally to binary trees.
 Did you notice that they solved the recurrence relation for A(n(T)) without really
"solving" it (i.e. with one of the techniques we learned earlier this semester)?
Hopefully it is clear why they couldn't solve it directly: it requires knowing the sizes of
the left and right subtrees, and we do not know that for a generic tree.
So how did they arrive at the answer instead? Explain why the technique they used makes sense and
indeed arrives at the correct answer.
 Give pseudocode for inOrderTraversal(Node r) (r is for root).
A "visit" to a node r should just call Visit(r).
 In languages like C++, you need to delete memory when you are done with it.
So if you want to delete a binary tree, you need to delete all of its nodes, one by one.
Give pseudocode for an algorithm deleteTree(Node r)
that deletes all of the nodes of a tree rooted at r.
(Hint: Use the appropriate traversal and when you "visit" a node u,
call delete(u).
Your algorithm should be very similar to your algorithm from the last
problem but with two differences.)
 For each of the following binary search tree operations, does it use
the divide-and-conquer technique or the variable-size-decrease technique?
insert, search/contains, preOrderTraversal, height,
maximum
 Chapter 5.4
 Do exercise 5.4.2 on page 191. Use the version of the algorithm based on the decimal
representation (not the binary representation).
Show all of the steps and organize your work so you can
get a clear sense of how the algorithm is working and how much work it requires.
This will involve about a page of computations,
so if you have fewer computations than this you probably aren't doing it recursively.
 Note that if we use the regular matrix multiplication algorithm,
c_{00}=a_{00}b_{00}+a_{01}b_{10}.
Show that this is what we get when we compute
m_{1}+m_{4}-m_{5}+m_{7}
by simplifying that expression (using the definitions of these given on the next page).
 Explain where the 7 A(n/2) comes from in the recurrence relation for
A(n) on page 191.
 Explain where the 18 (n/2)^{2} comes from in the recurrence relation for
A(n) on page 191.
 What is the best you could possibly hope for as the worst-case complexity of an algorithm
that multiplies two n× n matrices? Explain.
 Chapter 5.5 (for 385)
 List at least three divide-and-conquer algorithms that you have learned about previously.
 What does the Master Theorem have to do with divide-and-conquer algorithms?
 Consider the set of points {(1,4), (6,3), (2,3), (4,4), (8,2), (3,10)}.
Give the ordered lists P and Q as suggested in the book (page 192).
 Consider the divide-and-conquer algorithm to solve ClosestPairs.
 There are three possibilities for where the smallest distance is. What are they?
 Why do we only need to consider points that are of distance at most d from the
vertical strip? (Hint: in your answer, you will need to make use of how d is defined.)
 Why is it important to consider the points in order by y
coordinate in the second part of the algorithm
(the non-recursive part)?
 Redraw Figure 5.7(b), but include a point p along with 5 points you have to check it against.
(In other words, create a worst-case example.)
Clearly indicate which point is p and then draw big dots where the other points are.
Your points should obey the rules of minimum distance!
That is, if two points are on the same side,
they must be of distance at least d from each other.
 Why is the list Q important in the algorithm?
 In EfficientClosestPair (page 194), what is the maximum number of times the while loop
executes each time through the for loop? (This is important because the efficiency of the algorithm
depends in part on this.)

Consider the following points. When doing computations, you can just use your best guess since
the exact coordinates are not given.
Show the result of each of the following on separate drawings
(do your best to recreate the location of the points).
 Show the first step of the quickhull algorithm. Clearly label p_{1}
and p_{n}.
 Do the next step of the quickhull algorithm for the upper hull.
Clearly label p_{max}.
 Do the next step of the quickhull algorithm for the lower hull.
Clearly label p_{max}.
 Do one more step for the lower hull (which will involve 2 more subproblems).
 Describe a set of points that might be considered a worst-case input for quickhull.
 Chapter 6.1 (Including the introduction to the chapter)
 Describe how the PresortMode algorithm given on page 204 works.
That is, describe what the algorithm is doing in paragraph form.
 If I want to search for a single item in a list, does it make sense to sort the list
and use binary search to find the element? Explain.
 If a linear-time algorithm exists to solve a problem, should I attempt to
create a better algorithm using the idea of presorting? Explain why or why not.
 If an algorithm with complexity Θ(n log n)
exists to solve a problem, should I attempt to
create a better algorithm using the idea of presorting?
Explain why or why not.
 An algorithm of complexity Θ(n^{3}) exists to solve a certain problem.
If I sort the input, I can use an algorithm that has complexity Θ(n^{2}).
Is presorting worth it in this case? Clearly justify your answer.
 Chapter 6.2 (for 385)
Note: If you aren't very familiar with matrices and matrix multiplication, ask a classmate, upperclass student,
or me to help you understand the parts of the section that do not make sense.
 Answer the question posed on page 208:
"what general design technique would such a method be based upon?"
 Solve this system:
3x + 2y = 5
4x - y = 14
 Can you extend the technique you used to solve the previous problem
to a system of, say, 10 equations and 10 unknowns? Would it be fun?
 Give the matrices A and b for the following system of equations:
3x_{1} - 5x_{2} - 52x_{3} + 13x_{4} = 34
4x_{1} + 45x_{2} - 5x_{3} + 3x_{4} = 4
56x_{1} - 3x_{2} + 2x_{3} - 5x_{4} = 3
23x_{1} + 7x_{2} - 45x_{3} + 7x_{4} = 12
Note: The book does not explicitly state it, but the matrix x would be
the column vector (x_{1}, x_{2}, x_{3}, x_{4})^{T}.

Solve the system of equations represented by the matrices A' and b' given below.
Use variables x, y, and z, in that order since they are easier to write
than x_{1}, x_{2}, and x_{3}.
Note: This should be fairly straightforward.
If it is not, take another look at the middle of page 209 where they
describe the process. If that does not make sense,
ask around for some assistance in understanding how to do it.
Was this easier to solve because the matrix is upper-triangular rather than just a generic matrix? Explain.
 Do Exercise 6.2.1 on page 216.
(Hint: See Example 1 on page 210 for the technique.)
Show all of your work!
 What two problems are present in the algorithm ForwardElimination?
 What was the "glaring inefficiency" in ForwardElimination
that was fixed in BetterForwardElimination?
(Note: this is not asking about the two problems referred to in the previous question.
That question was about errors in the algorithm.
This question is about inefficiencies in the algorithm.)
 Carefully read through the work on page 212 where they simplify the summation.
Is it correct? If not, where is the error?

Note: The book does not make this entirely clear, but when doing the row operation
that replaces a row with a sum/difference of it and another row, you need to do it in a
very specific way in order to get the correct row multiples for an LU decomposition.
Let R_{i} be the ith row. Then you would replace
R_{i} with R_{i} - cR_{j} for some number c.
Notice two things: the ith row is not multiplied by anything, and the jth
row is subtracted. So the row multiple is c, not -c.
For instance, if you replace the first row with the first row plus 3 times the second row, the multiple
would be -3, not 3.
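The multiple convention can also be seen in code. This sketch (pure Python, no pivoting, and a hypothetical 3x3 matrix of my own) records each multiple c into L during forward elimination and then verifies that LU reproduces A.

```python
def lu_decompose(A):
    # Gaussian elimination without pivoting; U is the reduced matrix,
    # L holds the multiples c used in R_i <- R_i - c * R_j
    # (with 1s on its diagonal).
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            c = U[i][j] / U[j][j]          # the row multiple
            L[i][j] = c
            for k in range(n):
                U[i][k] -= c * U[j][k]     # R_i <- R_i - c * R_j
    return L, U

def mat_product(L, U):
    n = len(L)
    return [[sum(L[i][k] * U[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
L, U = lu_decompose(A)
print(mat_product(L, U))  # should reproduce A
```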
Now let's continue with Exercise 6.2.1, but this time solve it using LU decomposition.
 When you did Exercise 6.2.1, you created the matrix U, and you (hopefully)
wrote down the row multiples. So what is the matrix L for that problem?
 Compute the matrix product LU and verify that it is the same as the original matrix A
(which you constructed based on the equations).
(This is not helping us solve this problem. I just want you to see that it works.)
 Solve the system Ly=b from the previous problem.
(Note: the vector y is a new vector of unknowns that we create as an intermediate step to finding
x, which is what we are really trying to find.)
 Given the solution y from the previous problem, solve the system
Ux=y.
 What is the advantage of using an LU decomposition to solve systems of equations over
just using Gaussian elimination? In other words, in what situation(s) is it worth doing?
 Chapter 6.3
 Are BSTs an example of transform-and-conquer? Explain.
 Are balanced search trees an example of transform-and-conquer? Explain.
 Do Exercise 6.3.1 on page 225.

Since rotations change the structure of a tree, it seems possible that rotating might
cause a BST to not be a BST anymore. That is, the rotated version might violate the
binary search tree property.
Explain in detail why a BST remains a BST after a right rotation is performed.
Make sure you make a complete argument that it is still a BST.
Refer to Figure 6.4 on page 221 for the required notation (e.g. T_{1}, etc.)
 What's so great about AVL trees?
 What's not so great about AVL trees?
 Do Exercise 6.3.3 on page 226.
 Let's make sure you understand how 2-3 trees are supposed to work.
Consider the tree on the right side of the second row of Figure 6.8 on page 225 (the one that has
(3,8) as the root and (4,5) as its middle child).
How many key comparisons are needed to determine whether or not 7 is in the tree?
Which key comparisons are performed?
 Again, consider the 2-3 tree on the right side of the second row of Figure 6.8.
Show what the tree looks like after inserting 11.
Then show what it looks like after inserting 10.
(Remember: A node can have one or two keys. If a node has more than 2, the tree has to be fixed!)
 What's so great about 2-3 trees?
 What's not so great about 2-3 trees?
 Chapter 6.4
 What are the main three operations on a priority queue?
 Are heaps and priority queues the same thing? Explain.
 Generally speaking, an array implementation of a binary tree is not a good idea. Why not?
 Contrary to the previous question, heaps, which are binary trees, can be efficiently
implemented using arrays. What is so special about heaps that makes this possible?
 Explain what happens in each iteration of the for-loop in
HeapBottomUp.
In other words,
explain what each iteration of the loop accomplishes and how it accomplishes it.
Be as detailed as possible.
 Why does the for-loop in HeapBottomUp go from ⌊n/2⌋ down to 1?
Address both why it goes backwards and why it doesn't start at n.
(Hint: what is different about nodes 1 through ⌊n/2⌋ versus
nodes ⌊n/2⌋+1 through n?)
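The loop in question can be sketched as follows, using a 1-based array to mirror the book's indexing (H[0] is unused); the example array is my own. Each pass sifts the key at node i down until parental dominance holds in that subtree.

```python
def heap_bottom_up(H):
    # H[1..n] holds the keys; build a max-heap bottom-up.
    n = len(H) - 1
    for i in range(n // 2, 0, -1):        # only nodes 1..n/2 have children
        k, v = i, H[i]
        heap = False
        while not heap and 2 * k <= n:
            j = 2 * k
            if j < n and H[j] < H[j + 1]: # pick the larger of the two children
                j += 1
            if v >= H[j]:
                heap = True               # parental dominance holds
            else:
                H[k] = H[j]               # move the larger child up
                k = j
        H[k] = v
    return H

print(heap_bottom_up([None, 2, 9, 7, 6, 5, 8]))  # [None, 9, 6, 8, 2, 5, 7]
```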
 Which has more structure (i.e. which is more organized), a BST or a heap?
Which takes longer to construct? (Recall that it takes Θ(n log n) time to create a BST.)
Explain why this makes sense.
 (Optional) If you are bored, have extra time, and like to understand the details of things,
work out the final step of the derivation of C_{worst}(n) from page 230.
(It has several tricky/subtle parts, so you have to be really careful!)
 Heapsort is not as fast as quicksort. Thinking about how it works, can you see
what heapsort does that might make it slower?
 Chapter 6.5
 Let p(x) = 7x^{5} + 12x^{4} - 2x^{3} - 5x^{2} + 45x + 13.
 Compute p(4).
How many multiplications and additions did your computation require?
 Generalize the result: How many multiplications and additions are required to
evaluate a polynomial of degree n using the obvious algorithm?
(Hint: You should get a summation that you need to simplify.)
 Write p(x) in the form given in equation 6.2 on page 234.
(The example on page 235 might help see how to do it if the formula looks a bit confusing.)
 Compute p(4) using the Horner algorithm on page 235
(that is, based on the form given in the previous question).
How many multiplications and additions did your computation require?
 Were your two computations of p(4) the same? If not, hopefully you realize
that something must have gone wrong—try again!
 Again, generalize:
How many multiplications and additions are required to evaluate a polynomial of degree
n using Horner's rule?
(Hint: The book tells you.)
 Can you see why Horner's rule is so awesome? Explain.
 Explain why Horner's rule is an example of transform-and-conquer.
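For experimenting with these questions, here is a short Python sketch of Horner's rule (a generic version, not the book's pseudocode verbatim):

```python
def horner(coeffs, x):
    # coeffs lists the coefficients from the highest-degree term down,
    # e.g. [2, 3, 1] represents 2x^2 + 3x + 1.
    result = coeffs[0]
    for c in coeffs[1:]:
        result = result * x + c  # one multiplication and one addition per step
    return result
```

For a polynomial of degree n the loop runs n times, so the sketch performs n multiplications and n additions, which you can compare with your count for the obvious algorithm.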
 You will probably need to use something like Wolfram Alpha
to help you with the computations on this one.
 How many multiplications are required to compute 3^{37} using the obvious algorithm?
 Compute 3^{37} using the naive algorithm. (O.K., you can use Wolfram Alpha!)
 What is the binary representation of 37?
 Compute 3^{37} using LeftToRightBinaryExponentiation.
Make sure to show the details of each step of the algorithm.
 How many multiplications are required to compute 3^{37}
using LeftToRightBinaryExponentiation?
 Compute 3^{37} using RightToLeftBinaryExponentiation.
Make sure to show the details of each step of the algorithm.
(This one requires keeping track of more information at each step.)
 How many multiplications are required to compute 3^{37}
using RightToLeftBinaryExponentiation?
 Did you get the same answer for all 3 algorithms? If not, try again!
 Compare the three algorithms. Is one of them clearly the best? Is one clearly the worst? Explain.
 How can you modify the last two algorithms so you do not need to pass in the binary representation
of the exponent? Let's start small: Given n, how can you easily compute b_{i},
the ith bit of its binary representation?
(Hint: you need a bit shift or division and the modulus operator.)
 Rewrite either LeftToRightBinaryExponentiation or
RightToLeftBinaryExponentiation (choose your favorite!)
so that the second argument is n instead of
the binary representation of n.
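As a sketch of the last two questions combined, here is right-to-left binary exponentiation in Python that extracts the bits of n with % and // instead of taking the binary representation as an argument (my own naming, not the book's pseudocode):

```python
def power_right_to_left(a, n):
    term = a       # invariant: term == a^(2^i) when bit i is examined
    result = 1
    while n > 0:
        if n % 2 == 1:    # b_i, the current low-order bit of n
            result *= term
        term *= term      # square for the next bit
        n //= 2           # shift the bits of n right by one
    return result
```

A bit shift (`n >>= 1`) and bitwise AND (`n & 1`) would work just as well as the division and modulus shown here.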
 Chapter 6.6 (for 385)
 In your own words, describe the problem reduction strategy.
 What is the main benefit of problem reduction?
 Fill in the blank: The problem of computing lcm(m,n) reduces to computing __________.
 The problem of counting paths in graphs can be reduced to a problem on matrices.
It turns out that many graph problems can be reduced to matrix problems. Explain why this makes sense.
 Hopefully you recall learning about efficient shortest path algorithms for graphs.
Can finding a longest path be reduced to finding a shortest path?
Hint: Edges with negative weights can cause problems with shortest path algorithms.
(Feel free to do an online search for "longest simple path in graph" if you wish.)

My brother owns a company that makes two products, The Screamer Fuzz (SF)
and the Tap-A-Whirl (TAW). He knows he can sell at most 175 SFs and
at most 250 TAWs per day. His profit for an SF is $1 and for a TAW it is $5.
Combined, he can make at most 350 total products. He wants to maximize his profit.
Express this problem as a linear program, including all of the constraints
and the function that needs to be maximized or minimized.
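The linear program itself is the goal here, but since the instance is tiny you can sanity-check your formulation by brute force in Python (this enumerates integer production levels; it is only a check, not a linear-programming algorithm):

```python
# Try every feasible (SF, TAW) pair and keep the most profitable plan.
best = max(
    (sf + 5 * taw, sf, taw)          # profit first, then the plan itself
    for sf in range(176)             # at most 175 SFs per day
    for taw in range(251)            # at most 250 TAWs per day
    if sf + taw <= 350               # at most 350 total products
)
profit, sf, taw = best
```

If your LP's optimal objective value disagrees with this enumeration, recheck your constraints.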
 What is the difference between a regular linear programming problem and an
integer linear programming problem?
 Chapter 7.1 (Including the introduction to the chapter)
 Is ComparisonCountingSort a very good algorithm? Explain.
 Would you use DistributionCountingSort on an array of size 1,000 whose elements
are 32-bit integers (you do not know any other details about their possible values)? Explain.
 Explain why DistributionCountingSort processes the list in reverse order in the final step.
 What is the space complexity and time complexity of DistributionCountingSort on a list of
size n whose values are between 1 and m (a positive integer)?
 In the DistributionCountingSort algorithm,
why does it matter whether we copy the values into a new array or into the original array?
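A compact Python sketch of distribution counting sort may help with these questions (it writes into a new array; names are my own):

```python
def distribution_counting_sort(a, lo, hi):
    # Values of a lie in [lo, hi]. Count each value's frequency, turn the
    # counts into cumulative "distribution" positions, then place elements
    # into a new array by scanning the input right to left (which keeps
    # equal elements in their original relative order, i.e. a stable sort).
    counts = [0] * (hi - lo + 1)
    for v in a:
        counts[v - lo] += 1
    for i in range(1, len(counts)):
        counts[i] += counts[i - 1]       # counts[i] = # of values <= lo+i
    out = [None] * len(a)
    for v in reversed(a):
        counts[v - lo] -= 1
        out[counts[v - lo]] = v
    return out
```

Note how both the counts array and the output array contribute to the space cost.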
 Even though the name should make it somewhat obvious, give a clear explanation of the idea
behind the time and space tradeoff technique.
 Chapter 7.2 (through page 262)
 Use the ShiftTable algorithm to compute the shift table for the pattern
mississippi, assuming only lowercase alphabetic characters can occur in the string.
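You can check your table with a small Python sketch of Horspool's shift-table rule (characters not stored in the table shift by the full pattern length m):

```python
def shift_table(pattern):
    # Characters occurring among the first m-1 pattern positions get a
    # shift of m-1-i for the last such occurrence; every other character
    # gets the default shift m (use table.get(c, m) when searching).
    m = len(pattern)
    table = {}
    for i in range(m - 1):
        table[pattern[i]] = m - 1 - i
    return table

t = shift_table("mississippi")
```

Your hand-built table for all 26 lowercase letters should agree with this dictionary, with every letter absent from the dictionary shifting by 11.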
 Do Exercise 7.2.1 on page 267. How many comparisons were required?
 How many comparisons would be required to search for the pattern from the previous question
using the brute-force algorithm from Section 3.2?
 Does the number of comparisons from the previous two questions allow you to
make a conclusion about which algorithm is better? Explain.
 Chapter 7.3 (skipped)
 Chapter 7.4 (for 385)
 When analyzing data structures such as arrays and trees,
we usually care about the number of key comparisons used to perform an operation (e.g. search).
With B-trees this is not what we care about. What do we care about instead and why?
 Do problem 7.4.2 (both parts) on page 279. You might have to pull out your discrete mathematics textbook
for this one!
 Would you use linear search or binary search in a node of a B-tree with m=5?
What about m=1000? Justify your answers.
More generally, for what values of m (if any) would it be reasonable to use binary search?
 Can any of the algorithms you have previously learned be considered applications of the
space and time tradeoff technique?
Try to name 2 or 3, explaining how they exploit the technique for improved performance.
 Chapter 8.1 (Including the introduction to the chapter)
 Explain why dynamic programming is a special case of space and time tradeoff.
 Look back at Section 2.5.
The iterative algorithm given in that section was an example of dynamic programming. In this case,
it is a bottom-up implementation.
If you look at your answers to the reading questions, you should see that it was definitely
an improvement over the straightforward recursive algorithm.
Now, implement the same idea using the top-down approach.
That is, use a recursive algorithm that also uses the array to save time.
(Hint: Start with the recursive algorithm on page 81. Instead of just computing
F(N-1) and F(N-2), see if they have already been computed first.
For simplicity, assume you have a global array, and that it is initialized to
whatever values you want; if you want something other than 0, specify that.)
 Solve the coinrow problem for coins 1,5,3,9,10,4.
Give the maximum value of coins that can be taken along with which coins.
Show all of your work!
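You can verify your hand computation with a short Python sketch of the coin-row recurrence F(i) = max(c_i + F(i-2), F(i-1)):

```python
def coin_row(coins):
    # f[i] = maximum value obtainable from the first i coins
    # with no two adjacent coins picked.
    f = [0, coins[0]]
    for i in range(2, len(coins) + 1):
        f.append(max(coins[i - 1] + f[i - 2], f[i - 1]))
    return f
```

The whole table f is returned so you can compare it, entry by entry, with your work; recovering *which* coins were taken is then a matter of tracing back through the table.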
 Solve the following instance of the coincollecting problem:
Show the table of values (F) produced by the algorithm.
Then give the number of coins collected and draw the path that collects the coins.
 Chapter 8.2
 Give a very clear description of what F(i,j) stores for given values
of i and j.
 Consider solving the knapsack subproblem involving the first i objects.
Finish the following sentences.
 The dynamic programming algorithm for the knapsack
problem is based on the fact that there are two types of solutions
that involve the first i objects: those that ___________ and those that__________.
 If the ith item is not included in the optimal solution, then we can just look up the value
of the optimal solution. In other words, F(i,j)=______________ in this case.
 If the ith item is included in the optimal solution, then we can compute
the value by realizing that since the ith object weighs _________, the best we can do
is solve the problem involving the first _________ objects with weight _______ and then add the
ith object to that. Thus, F(i,j)=_________________ in this case.
 We overlooked an important case: If the ith item does not fit in the knapsack, then
the optimal solution must be the same as that involving the first _______ items. Thus, when computing
F(i,j), if w_{i}>______, then F(i,j)=______________.
Otherwise, we consider one of the two previous cases.
 Putting this all together, we get the following recursive formula: F(i,j)=________________.
(Hint: it should have two cases, one of which has a max in it.)
 Of course, no recursive definition would be complete without base case(s). So, our base cases are
that F(0,j)=______ for j≥0 (because _____________)
and F(i,0)=_____ for i≥0 (because ______________).
 Tricky question: Is the dynamic programming algorithm for the knapsack problem a
polynomial time algorithm? Explain.
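Once you have filled in the recurrence, you can check it against a bottom-up Python sketch (0-based item indexing; variable names are my own):

```python
def knapsack(weights, values, capacity):
    # f[i][j] = best value achievable using the first i items
    # with knapsack capacity j.
    n = len(weights)
    f = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, capacity + 1):
            if weights[i - 1] > j:            # item i does not fit
                f[i][j] = f[i - 1][j]
            else:                             # skip item i, or take it
                f[i][j] = max(f[i - 1][j],
                              values[i - 1] + f[i - 1][j - weights[i - 1]])
    return f[n][capacity]
```

Counting the table's entries (n+1 rows by capacity+1 columns) is a good starting point for the "tricky question" above.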
 Briefly explain how memory functions can be used to implement dynamic programming algorithms.
Are they used with the top-down or bottom-up approach (or both)?
 Name one or two advantages to the memory function approach to dynamic programming.
 Chapter 8.3 (for 385)
 What other data structure are binary search trees used to implement?
 What is the main point of an optimal binary search tree?
What makes it different than a normal binary search tree?
 Why is the exhaustive search algorithm to construct an optimal binary search tree not practical?
 The following questions are all about C(i,j).
 What is C(i,j)?
 What are the conditions on i and j? (e.g. what values can they take on?)
 What is the final goal of the algorithm? In other words, for what values of i and j
are we trying to find C(i,j)? Explain.
 When computing C(i,j), how many possible nodes might be the root of the optimal tree
containing keys a_{i} through a_{j}? (Be precise!)
 How does the algorithm determine which root is the optimal one?
 In the middle of page 299, the second step of the derivation of the formula for C(i,j)
has a summation from s=i to j of p_{s}. Explain where this came from.
 In the next step, the first two summations seem to be magically replaced with references to
C(i,k-1) and C(k+1,j). Explain the logic behind this step.
 Explain in your own words the logic behind/meaning of recurrence relation 8.8 on page 299.
Imagine you are explaining it to a CSCI 255 student who is just learning about dynamic programming.
Make sure to explain the logic behind the summation in the formula.
 Why is the second table needed in the algorithm?
 Do Exercise 8.3.1 on page 303 (finish the computation of the example). Make sure to write down all of your work.
 Chapter 8.4
 What does the matrix R^{(k)} represent? In other words, what do the entries tell us?
 What does it mean if r_{ij}^{(k)}=1? Be detailed!
 In order for there to be a path from u_{i} to u_{j} that uses only vertices
numbered 1 through k, either there must be a path from
u_{i} to u_{j} that contains only vertices numbered 1 through k-1,
or there must be paths from u_{i} to u_{k}
and from u_{k} to u_{j}
that contain only vertices numbered 1 through k-1.
In terms of the entries of matrices R^{(k)} and R^{(k-1)}, this means
that r_{ij}^{(k)}=_____________________.
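The fill-in above is exactly the update Warshall's algorithm performs; here is a minimal Python sketch (0-based vertex indices rather than the book's 1-based ones):

```python
def warshall(adj):
    # adj is an n x n 0/1 adjacency matrix; returns the transitive closure.
    n = len(adj)
    r = [row[:] for row in adj]       # R^(0) is the adjacency matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # reachable avoiding vertex k, or via vertex k
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r
```

Comparing this triple loop with Floyd's algorithm is a useful warm-up for the last question in this section.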
 Do Exercise 8.4.1 on page 311.
 What does the element d_{ij}^{(k)} from D^{(k)} represent?
Be detailed!
 Put formula (8.14) on page 310 into words. In other words, do the opposite of what I did in question #3 above
(where I gave the explanation and asked for the formula).
 Compare Warshall's algorithm and Floyd's algorithm. What do you notice?
 Chapter 9.1 (Including the introduction to the chapter)
 In your own words, describe the greedy technique. Be complete!
 Pick the best word: The greedy technique always/sometimes/never produces
optimal solutions. Explain, giving examples if it helps.
 Some textbooks discuss the term greedy choice property that a problem is required to have
if the greedy technique is to guarantee an optimal solution.
Based on this description, try to define the term. I'll get you started:
A problem has the greedy choice property if__________.
 If the greedy technique does not yield an optimal solution to a particular problem, then is it
worthless for that problem? Explain.

The complete graph, K_{n}, has n^{n-2} spanning trees.
Given this, does an exhaustive search algorithm to find a minimum spanning tree of
K_{n} seem reasonable? Explain.
 If a graph has about 2n edges (which really is not that many),
do you suppose an exhaustive search algorithm to find a
minimum spanning tree is reasonable?
For simplicity, assume the exhaustive search algorithm considers
every subset of n-1 edges, even though some of them will not be spanning trees.
(Did you realize that a spanning tree has to have n-1 edges? If not, make sure you
understand why.)
 Note that as presented on page 319, Prim is not a fully specified algorithm.
The first step in the for loop is not clearly specified—how do you find such an edge?
The correctness and efficiency of the algorithm depends on how that step is performed.
(For instance, if you just look through all of the edges each time you will get horrible performance!)
The book discusses how this can be done. Summarize how that step is performed, including specifying any
necessary data structure(s) and how they need to be initialized and updated at each step.
In fact, rewrite the algorithm so that the whole thing is more clearly specified.
The example in Figure 9.3 on page 321 should help you understand the details a little better—pay close
attention to how the numbers in parentheses change.
Also, read to the end of the section before answering this question.
 What makes Prim's algorithm greedy?
 Chapter 9.2 (through page 327)
 As with Prim in the last section, Kruskal on page 325 is not fully specified.
Which step(s) of the algorithm are unclear/ambiguous?
What details are needed to complete the implementation?
 The book claims that the complexity of Kruskal is Θ(E log E).
I claim that it is Θ(E log V). Who is correct? Explain.
 What makes Kruskal's algorithm greedy?
 Chapter 9.2 (pages 327331) (Skipped)
 Chapter 9.3 (for 385)
 Give at least 4 applications of the shortest path problem.
 Is the problem discussed in this section the same problem that is solved by
the Bellman-Ford algorithm (we might have covered it in 255)
and/or Floyd's algorithm (from Section 8.4)?
(Hint: one solves the same problem, the other solves a more general problem.)
 What limitation does Dijkstra's algorithm have?
 Why is it not a good idea for the algorithm to just iterate over all of the vertices that are already
in the tree, going through each one's adjacency list to find the next vertex to add?
 Explain what the algorithm does instead of what I suggested in the last question.
 What data structure(s) does Dijkstra's algorithm need to use to be efficient?
 When implementing Dijkstra's algorithm, should you use an adjacency matrix or
adjacency list? Explain.
 Modify Dijkstra on page 335 to turn it into Prim.
(Hint: You only need to make two minor changes to two lines of it.)
 What makes Dijkstra's algorithm greedy?
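For reference while answering these questions, here is a heap-based Python sketch of Dijkstra's algorithm (a "lazy deletion" variant rather than the book's priority-queue-with-decrease-key formulation):

```python
import heapq

def dijkstra(adj, source):
    # adj maps each vertex to a list of (neighbor, weight) pairs.
    # A binary heap replaces the "scan every fringe vertex" step.
    dist = {v: float("inf") for v in adj}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:          # stale heap entry; a shorter path was found
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:  # relax the edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```

The heap is what avoids the wasteful "iterate over every tree vertex's adjacency list" approach criticized above.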
 Chapter 9.4
 Give an example of a fixed-length encoding.
 What is a variable-length encoding?
 Why would one use a variable-length encoding instead of a fixed-length encoding?
 True or false: Huffman's algorithm always joins leaf nodes together before it joins other nodes together.
In other words, as long as there are leaf nodes that have not been joined, it will choose them.
Explain, using an example if appropriate.
 How many bits are required to store a file with 100 characters using the Huffman encoding
computed in the example on pages 339–340? (Of course, we assume the file's characters have the frequencies as
specified in the example.)
 What is the minimum number of bits to store the file from the previous question if a
fixed-length encoding is used?
 What makes Huffman's algorithm greedy?
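To experiment with bit counts like those in the previous two questions, here is a Python sketch that computes Huffman code lengths with a heap (the frequencies used in the test are illustrative; check them against the book's example before relying on them):

```python
import heapq

def huffman_code_lengths(freqs):
    # Repeatedly join the two least-frequent trees; each join adds one
    # bit to the code length of every symbol inside the joined trees.
    heap = [(f, [s]) for s, f in freqs.items()]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    while len(heap) > 1:
        f1, s1 = heapq.heappop(heap)
        f2, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, s1 + s2))
    return lengths
```

The total number of bits for a file is then the sum, over all symbols, of each symbol's count times its code length.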
 Chapter 10.1 (Skipped)
 Chapter 10.2 (for 385; Including the introduction to the chapter)
 There is a specific iterative improvement technique called hill climbing, which is
alluded to in the introduction to the chapter.
 Using the "hill" analogy, explain how iterative improvement algorithms work.
 The hill analogy also reveals a potential problem with the technique. Describe it.
 Look at the graphs in problem 4.2.1 on page 142. If we added positive weights to the edges,
could these be considered flow networks? If so, identify a valid source and sink.
If not, explain why not.
 Draw a flow network that has a maximum flow of 10. Try to come up with an example
that is not completely obvious (e.g. two vertices with an edge of weight 10 between them).
 Interpret equation 10.8 at the bottom of page 361. That is, describe what it is saying.
I will help you get started:
"For every vertex in the graph (except for the source and sink), __________"
 Explain the difference between u_{ij} and x_{ij}.
Is there a relationship between them? If so, what is it?
 Explain what a flow in a network is/means.
 Fill in the blanks in the following pseudocode for the Ford-Fulkerson method.
(Since it is pseudocode, you may use English phrases/sentences.)
While( ) {
}
(Note: You may need 1–3 lines of code in the while loop, depending on exactly how you specify the algorithm.)
 Does the original graph (with zero flow) contain forward edges, back edges, or both? Explain.
(Note: It is important to fully understand what forward and backward edges are in order to understand
how the algorithm works.)
 Explain what a flow-augmenting path is.
 What is r_{ij}? (Not the equation, but what is the meaning of it.)
How is it related to u_{ij} and x_{ij}?
 Why can decreasing the flow on a backward edge increase the overall flow?
 Explain why the Ford-Fulkerson method is considered an example of iterative improvement.
 Do you fully understand the example on page 365 of why the way we select augmenting paths is important
to the efficiency of the algorithm? Good. Then briefly explain the point behind it.
 Give a brief high-level description of the shortest-augmenting-path algorithm that is described
on pages 365–366 and specified more formally in pseudocode on pages 366–367.
 Given the analogy that a graph can be thought of as a bunch of balls (vertices)
attached to each other in various ways with strings (edges),
explain why the term cut makes sense by bringing scissors into the analogy.
 Give a minimum cut of the graph in Figure 10.4 on page 362 along with its capacity.
That is, which vertices are in X and which are in X̄ (the complement of X).
(Hint: Figure 10.7 might help.)
 True or false: It is possible that a graph has a cut with capacity c and a flow with value v,
such that c < v. Clearly explain.
 What is the worst-case efficiency of the ShortestAugmentingPath algorithm
(sometimes referred to as the Edmonds-Karp algorithm)?
Is it a good algorithm? A great algorithm? Explain.
 Chapter 10.3 (for 385)
 For what values of n is K_{n} (complete graph on n vertices) bipartite?
 For what values of n is C_{n} (cycle on n vertices) bipartite?
 Why are augmenting paths of bipartite graphs always of odd length?
 The way the algorithm works, it is suggested that an augmenting path has one more
edge from E-M than from M. Explain why.
 When you get stuck understanding what Cases 1 and 2 are trying to say at the top of page 376,
skip ahead to the algorithm and walk through the algorithm with the example in Figure 10.10.
Then reread the cases and hopefully they make more sense.
This isn't a question, by the way. It's just a helpful tip.
But since you are expecting a question, here's one: Did you do what I just suggested?
 Are you prepared to talk the class through the example in Figure 10.10 on page 377?
Because I might just call on you to do so!
 What is the worst-case complexity of the MaximumBipartiteMatching algorithm?
 Explain why the MaximumBipartiteMatching algorithm
is considered an example of iterative improvement.
 Chapter 10.4 (Skipped)
 Chapter 11.1 (for 385; Including the introduction to the chapter)
 Was Lord Kelvin wrong?
 Explain what a lower bound is and why it is a useful concept.
 What would a reasonable trivial lower bound be for a problem on a
graph with n vertices and m edges?
 What would the information-theoretic lower bound be for a problem that
needs to assign to each of the n vertices of a graph a number between -a and a
and also assign to each of the m edges a number between 1 and 32?
(The idea being that we need to determine what each of these numbers is, but I am not
specifying exactly how. But that should not affect your answer.)
 Describe in your own words what an adversary argument is and how it establishes a lower bound
on any algorithm that solves a problem.
 To show that P is at least as difficult to solve as Q, do you reduce
P to Q or Q to P? Clearly explain why.
 Which problem is more difficult, Euclidean minimum spanning tree or
element uniqueness? Explain.
 Chapter 11.2 (for 385)
 Are you convinced that if a binary tree of height h has l leaves, then
h ≥ ⌈ log_{2} l ⌉? If not, reread the argument on page 395.
 Answer the following questions about decision trees.
 What is in each node?
 What does the height of the tree tell you (relative to the problem/algorithm being analyzed)?
 What has to be true about the number of leaves of the tree
(relative to the problem/algorithm being analyzed)?
 If a problem has 2^{n} possible answers,
what can you say about the lower bound for any comparisonbased algorithm to solve that problem?
 Explain why the possible number of outcomes of a sorting algorithm is n!.
 Justify the second step of the algebra in the middle of page 396.
 Summarize the argument for why all comparison sorting algorithms require at least
n log_{2} n comparisons.
Make sure to mention what the height of the tree is, how many leaves are in the tree and why, etc.
 Chapter 11.3 (for 385)
 Name three tractable problems (you can't use any of those listed on page 402).
 Name one or two intractable problems.
 Technically speaking, is sorting in P? Explain.
 What is the difference between tractable/intractable and decidable/undecidable?
 Explain the contradiction about Q(Q) on page 403.
 Are Hamiltonian circuit and traveling salesman problems intractable? Explain.
 Give a longer description of problems in NP than "decision problems that can be solved
by nondeterministic polynomial algorithms." That is, expand on what that means.
 What is the most important open question in theoretical computer science?
 Would we rather it be the case that P = NP or that P ≠ NP? Explain.
 What does it mean that the problems in NP-complete are the most difficult problems in NP?
Be specific.
 What do you need to do to prove that a problem is NP-complete?
(Hint: you need to do 2 things.)
 What would it mean if I could find a polynomialtime algorithm to solve Hamiltonian circuit?
I mean, would it be a good thing, or a very good thing? Explain.
 Chapter 11.4 (for 385)
 What is the main difference between numerical algorithms and most of those we have studied?
 What are the two types of errors that are of concern with numerical algorithms?
Give as clear and precise a definition of each as possible. Make sure to also clearly explain
what the cause of each is. (Note: I am not talking about absolute and relative errors here.)
 What is the main challenge of implementing numerical algorithms? That is, what is the limitation
that computers have that makes it a challenge to implement algorithms that deal with real numbers?
 Clearly explain the difference between overflow and underflow.
 As mentioned in the book, real numbers are stored in a computer by storing the mantissa
and the exponent. First, define these. Second, can you see a connection between these and the
problems of overflow and underflow? What is it?
 Why do you suppose that relative error is of interest? In other words, why isn't
it enough to think about absolute error?
 Do you understand subtractive cancellation enough to explain what the problem is?
Well, at least try to.
 Use the procedure on the bottom of page 416 to estimate the square root of 25.
Use at least 4 iterations. How close is the estimate to the correct answer
(which should be 5, in case you can't compute it yourself)?
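If the procedure in the book is the classic Newton iteration for square roots (I am assuming so here; check against page 416 before relying on this), it looks like this in Python:

```python
def sqrt_estimate(a, iterations):
    # Newton's method applied to x^2 - a = 0: each step replaces x
    # with the average of x and a/x.
    x = a  # a simple (if crude) initial guess
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x
```

Running it with a few different iteration counts shows how quickly the estimate converges once it gets close.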
 Chapter 12.1 (for 385; Including the introduction to the chapter)
 Give three possible ways to approach solving a problem when you realize that it is NP-complete.
 How many children does a non-promising node have in a state-space tree?
 Find a solution for the 5-queens problem. (Hint: put the first queen in the second column and
you should pretty quickly find a solution.)
 Are the problems of finding a single solution to the n-queens problem and
finding all solutions equivalent? In other words, if one can be done in polynomial time, can the other?
 Draw the state-space tree of the backtracking algorithm applied to the subset-sum instance {3, 5, 6, 8} with d=17.
Label the nodes as in Figure 12.4 on page 428.
 Is an actual tree constructed when implementing a backtracking algorithm?
 How does one analyze the running time of a backtracking algorithm?
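A compact Python sketch of the backtracking search for a subset-sum instance may help you check your tree (the pruning tests are the standard promising-node checks; names are my own):

```python
def subset_sum(nums, target):
    # Depth-first walk of the state-space tree: at depth i we either
    # include nums[i] or not. A node is non-promising when the running
    # sum overshoots the target or the remaining numbers can't reach it.
    solutions = []

    def walk(i, current, chosen):
        if current == target:
            solutions.append(list(chosen))
            return
        if i == len(nums) or current + sum(nums[i:]) < target:
            return  # non-promising: prune this branch
        if current + nums[i] <= target:   # left child: include nums[i]
            chosen.append(nums[i])
            walk(i + 1, current + nums[i], chosen)
            chosen.pop()
        walk(i + 1, current, chosen)      # right child: skip nums[i]

    walk(0, 0, [])
    return solutions
```

Note that the function never builds an explicit tree; the recursion's call structure *is* the state-space tree, which bears on the second-to-last question above.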
 Chapter 12.2 (for 385)
 What is the difference between a feasible solution and an optimal solution?
 Is branch-and-bound a special case of backtracking, or vice versa (or neither)? Explain.
 Explain in detail what it means to prune a branch of a state-space tree in branch-and-bound.
 Explain how the best-first branch-and-bound technique works.
 In Figure 12.5 on page 434, explain why in node 3 the lower bound is lb=7+4+5+4.
In other words, why is the bound the sum of those 4 numbers? Be specific.
In order to be sure you get it, also explain the computation for node 4.
 In Figure 12.6 on page 435, what numbers were added to get lb=14 in node 6?
 Shortly after cost=13 is computed in node 8 in Figure 12.7,
the rest of the nodes become non-promising. Explain why.
 Is it a good idea to use the branch-and-bound algorithm to solve the assignment problem?
If so, why?
If not, why not and why do you think they presented this approach to solving that problem?
 Looking at Figure 12.8, when did node 2 become non-promising?
 Summarize the idea behind branch-and-bound. Explain what it does to improve upon the
idea of backtracking.
 Chapter 12.3 (Skipped)
 Chapter 12.4 (Skipped)
