Programming Resources
For Fun and Learning
Charles Cusack
Computer Science
Hope College
Reading Questions from Introduction to the Design and Analysis of Algorithms by Anany Levitin

  • Chapter 1.1 (Including the introduction to the chapter)
    1. What is the cornerstone of computer science?
    2. Give three reasons why the study of algorithms is important, both within and beyond the field of computer science.
    3. Did algorithms exist before computers? Explain why your answer makes sense.
    4. Compute gcd(30,70) using each of the three algorithms described. Make sure to show all of your work so it is clear that you understand how each algorithm works.
    5. Attempt to rank the three gcd algorithms based on how long they take to solve the problem, from fastest to slowest.
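To accompany question 4, here is a minimal Python sketch of Euclid's algorithm, one of the three gcd methods the section describes (the consecutive-integer-checking and prime-factorization methods are left for you to work by hand):

```python
def gcd_euclid(m, n):
    """Euclid's algorithm: repeatedly replace (m, n) with (n, m mod n)
    until the second number reaches 0."""
    while n != 0:
        m, n = n, m % n
    return m

print(gcd_euclid(30, 70))  # the pair from question 4
```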

  • Chapter 1.2
    1. Name at least six different algorithm design techniques.
    2. Why is it important to learn about algorithm design techniques (as opposed to just learning specific algorithms to solve specific problems)?
    3. Discuss the relationship between data structures and algorithms. (Your answer should be longer than a sentence or two.)
    4. Give three ways that an algorithm can be specified.
    5. When we analyze an algorithm, what two things are we usually most interested in?
    6. Which is more important: the simplicity of an algorithm or its efficiency? Explain.
    7. What is meant by the optimality of an algorithm? Is it just another name for efficiency? Explain.
    8. True or false: Designing algorithms is easy. Explain.
    9. True or false: Designing algorithms is a one-and-done sort of activity. Explain.

  • Chapter 1.3
    1. If someone claims to have a comparison sorting algorithm (that is, an algorithm that sorts by comparing keys) that takes n steps for a list of size n, should you believe them? Give a clear and complete explanation of your reasoning.
    2. Which of these are in-place sorting algorithms: insertion sort, bubble sort, selection sort, quick sort, and merge sort?
    3. Hopefully you recall the binary search algorithm. Give one or two benefits of the algorithm (versus sequential search, for example) and one or two limitations of the algorithm.
    4. Give an example of a situation where string matching can be used to solve a problem.
    5. What is the difference between a graph algorithm and a graph problem? Give examples that clearly demonstrate the difference.
    6. I would claim that many combinatorial problems are both easy to solve and difficult to solve at the same time. What can I possibly mean by this? How can they be both?
    7. What makes numerical problems different than the other types of problems discussed in this section? There are at least two possible ways to answer this question (they are related, but slightly different).

  • Chapter 1.4
    1. What two different basic data structures can be used to store a list?
    2. Give several advantages and disadvantages of doubly-linked lists compared with singly-linked lists.
    3. Give an example of a problem that can be solved with the help of a queue.
    4. Give an example of a problem that can be solved with the help of a stack.
    5. Give an example of a problem that can be solved with the help of a priority queue.
    6. What basic data structure can be used to implement a priority queue?
    7. Prove that an undirected graph with v vertices has at most v(v-1)/2 edges. (Hint: Use induction or some of the combinatorics you have previously learned.)
    8. Give the adjacency list representation of the graph in Figure 1.9.
    9. What size matrix is needed to store the adjacency matrix representation of the graph in Figure 1.9? How many entries of the matrix are 1s? Give the third row of the matrix.
    10. Give an example of a problem that can be modeled using a weighted graph whose solution involves a simple path.
    11. Given a graph, I want to know if there is a way to get from any vertex to any other vertex by following edges. What property of the graph am I trying to determine?
    12. Given a graph, I want to travel along a series of edges such that I do not repeat any edges and end up back where I started. What am I looking for?
    13. Explain how you can model a rooted tree using a (free) tree.
    14. Is a binary search tree a free tree? A rooted tree? An ordered tree? A binary tree?
    15. Assume I have a universal set of 8 elements. What bit vector is represented by the number 117? What set is represented by the number 117?
    16. What is the best data structure? Explain.
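Question 15 is easier to think about with a small concrete example in hand. The Python sketch below interprets an integer as a bit vector over an 8-element universal set; the convention that the leftmost bit corresponds to the first element is my assumption, so check it against the book's. It is demonstrated on 9, not on the 117 from the question:

```python
def to_bit_vector(k, size=8):
    """Interpret integer k as a bit vector over a universal set of `size`
    elements (leftmost bit = first element; an assumed convention)."""
    return [(k >> (size - 1 - i)) & 1 for i in range(size)]

def to_subset(k, universal):
    """Elements of `universal` whose corresponding bit is set in k."""
    size = len(universal)
    return {universal[i] for i in range(size) if (k >> (size - 1 - i)) & 1}

print(to_bit_vector(9))                # [0, 0, 0, 0, 1, 0, 0, 1]
print(to_subset(9, list("abcdefgh")))  # {'e', 'h'}
```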

  • Chapter 2.1 (Including the introduction to the chapter)
    1. When analyzing an algorithm, what two resources do we typically focus on? Of these two, which one is more important? Why is it more important?
    2. What is meant by size of input?
    3. Explain what is meant by basic operation and what it has to do with analyzing an algorithm.
    4. Of worst-case, average-case, and best-case efficiencies, which is least important and why?
    5. What is the difference between the worst-case and average-case efficiencies?
    6. Why is it typically more difficult to determine the average-case complexity of an algorithm than the other two?
    7. Give an example of an algorithm whose three time complexities (best, average, worst) are all the same (give the complexities).
    8. Give an example of an algorithm whose three time complexities are not all the same (give the complexities). Bonus points if you can think of one where they are all different.
    9. What is the space-complexity of merge sort?

  • Chapter 2.2
    1. Write down the definition of O-notation.
    2. Write down the definition of Θ-notation.
    3. In English, what does it mean that f(n)=Θ(g(n))?
    4. Give a simple Big-Θ bound on 13n^3 + 8n^2 - 5n + 35.
    5. If you can, prove that n^2 = O(n^3) using limits.
      If you can't, you might want to talk with me about limits (especially those who haven't had calculus).
    6. If you can, prove that 13n^3 + 8n^2 - 5n + 35 = Θ(n^3) using limits.
      If you can't, you might want to talk with me about limits (especially those who haven't had calculus).
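For reference, questions 5 and 6 rely on the standard limit comparison test (stated here assuming the limit exists):

```latex
\lim_{n\to\infty}\frac{f(n)}{g(n)} =
\begin{cases}
0 & \text{implies } f(n) \in O(g(n)),\\
c,\ 0 < c < \infty & \text{implies } f(n) \in \Theta(g(n)),\\
\infty & \text{implies } f(n) \in \Omega(g(n)).
\end{cases}
```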

  • Chapter 2.3
    1. Do exercise 2.3.4 (page 67).
    2. Analyze the algorithm from Exercise 1.3.1 (page 23). Give both the best-case and worst-case efficiencies. (Hint: It involves a double sum. Also, if you have i or j in your answer then you are doing it wrong.)

  • Chapter 2.4
    1. Do exercise 2.4.4 (page 77).

  • Chapter 2.5
    1. Are Fibonacci numbers cool or what?
    2. Compute F(6) using the recursive algorithm on page 81. (Draw a tree of the recursive calls.) How many additions were required? (Hint: One addition is performed for every internal node of the tree.) How much extra space was required?
    3. Compute F(6) using the iterative algorithm on page 82. How many additions were required? How much extra space was required?
    4. Explain why the recursive algorithm to compute F(n) is so bad. Specifically, how exactly does it end up computing F(n)?
    5. Compute F(5) by computing the appropriate matrix product as suggested at the end of the section. Show your work!
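A quick way to check your hand computations for questions 2 and 3 is to instrument both algorithms. This is a Python sketch (passing the addition counter in a one-element list is my implementation convenience, not the book's pseudocode):

```python
def fib_recursive(n, counter):
    """Naive recursion: one addition per internal node of the call tree."""
    if n <= 1:
        return n
    counter[0] += 1
    return fib_recursive(n - 1, counter) + fib_recursive(n - 2, counter)

def fib_iterative(n):
    """Bottom-up: exactly n - 1 additions for n >= 2, constant extra space."""
    if n <= 1:
        return n
    a, b = 0, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

adds = [0]
print(fib_recursive(6, adds), "with", adds[0], "additions")  # F(6) = 8
print(fib_iterative(6))                                      # 8
```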

  • Chapter 3.1 (Including the introduction to the chapter)
    1. Explain why the phrase "work harder, not smarter" is essentially saying use a brute-force technique.
    2. Give a brief description of selection sort in your own words.
    3. Near the bottom of page 99, the book gives a formula, C(n), for the number of times the basic operation is executed in the SelectionSort algorithm.
      1. Why is the comparison the basic operation instead of the min←j or the swap? Clearly explain why each of those is the wrong choice, and why the comparison is the right choice.
      2. Explain very clearly why the double sum given is correct.
      3. Finish simplifying the sum, showing all of the necessary steps. (Note: They give the answer on the next page, but not the work to get the answer.) There are two ways of simplifying the sum--try to use a clever approach that requires less computation.
    4. Give a brief description of bubble sort in your own words.
    5. What is the main reason that bubble sort is worse than selection sort?
    6. Choose the best word in the following sentence and explain why it is the correct choice:
        Brute force algorithms are always/sometimes/never good algorithms.

  • Chapter 3.2
    1. About how much faster is SequentialSearch2 than SequentialSearch? Justify your answer.
    2. Give one strength of a brute-force approach to problem solving.
    3. Give one weakness of a brute-force approach to problem solving.
    4. Clearly explain why the BruteForceStringMatch algorithm (page 105) makes m(n-m+1) comparisons in the worst case. (Hint: Your answer should contain phrases such as "the while loop executes...")
    5. Do exercise 3.2.6 on page 107. (Hint: Don't forget that the book has hints for most exercises.)

  • Chapter 3.3 (for 385)
    1. What is the Euclidean distance between two points? Give both a technical definition and an explanation of what it represents.
    2. Explain why the sqrt is unnecessary in BruteForceClosestPair algorithm.
    3. Explain why removing the sqrt from BruteForceClosestPair is a good idea.
    4. Use what you learned earlier in the semester to conclude that the number of times the basic operation of BruteForceClosestPair is executed is n(n-1). That is, do it without using summations.
    5. Is the convex hull of 3 points always a triangle? Explain.
    6. Let S be a set of n distinct points. What are the possible sizes of the convex hull of S?
    7. How many extreme points does the set represented in Figure 3.5 (page 111) have?
    8. Given the set of points S={ (0,0), (1,1), (0.5, 2), (3,0), (0,3) }, do the computation described on page 112-113 to show that the line segment between (0,0) and (3,0) is on the convex hull of S.
    9. Which points are on the convex hull of S in the previous question?

  • Chapter 3.4
    1. What's the good news about exhaustive search algorithms?
    2. What's the bad news about exhaustive search algorithms?
    3. Given an exhaustive search algorithm, is it sometimes possible to make it "less exhaustive" (so that it is more efficient) but still give a correct solution? Explain, using an example.
    4. Does anybody currently know a polynomial-time algorithm to solve the Traveling Salesman Problem or the Knapsack Problem? Explain how you know.
    5. Which exhaustive search algorithm is more efficient, the one for the Knapsack Problem or the one for the Assignment Problem? Prove it. (Note: It is a little silly to compare the efficiencies of algorithms that solve different problems, but the point here is to review your understanding of the growth rates of functions.)
    6. Choose the best word in the following sentence and explain why it is the correct choice:
        If a problem domain grows exponentially or faster (e.g. a problem of size n has 2^n or n! possible solutions) it is always/sometimes/never possible to find a polynomial-time algorithm to solve the problem.

  • Chapter 3.5
    1. Do Exercise 3.5.1 on page 128. Indicate on the original graph which edges are tree edges and which are back edges.
    2. The book talks about a stack when it discusses DFS, yet no stack is explicitly mentioned in the algorithm. So what are they talking about?
    3. Why are back edges not encountered in BFS?
    4. Do Exercise 3.5.4 on page 128. Indicate on the original graph which edges are tree edges and which are cross edges.
    5. Given a vertex v in a graph, the shortest path problem gives the shortest distance from v to every other vertex in the graph. So in the example in Figure 3.12 on page 127, it would give the distance from a to both b and e as 1, and from a to both c and f as 2, etc. Modify BFS so it solves the shortest path problem. Hint: Use a data structure that stores d(u), the distance from a to u, initially set to ∞ for every vertex except a, for which clearly d(a)=0. (You can think of d as an array that is indexed by vertices.) You should only need to add a few things to BFS(G) (to initialize things) and bfs(v) (to update things). For simplicity, assume d is a global variable. (Hint: If d(v)=k, and u is adjacent to v but has not been previously discovered, what is d(u)?)
    6. What do DFS and BFS have in common? That is, what is the purpose of both algorithms?
    7. When do DFS and BFS produce the exact same tree? Can you give a precise answer?
    8. If you are going to implement DFS and/or BFS, is it better to use an adjacency matrix representation of the graph, or an adjacency list representation? Explain.
    9. Choose either DFS or BFS and explain why the algorithm has complexity Θ(|V|+|E|). Hint: This one is more difficult than algorithms we have previously analyzed. It is not as easy as setting up and solving a recurrence relation. If you do that, it will over-estimate the complexity. Instead, talk through how the algorithm works to at a slightly higher level while still making sure you account for everything.
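For question 8, it may help to see how naturally BFS reads against an adjacency-list representation. A Python sketch on a small made-up graph (not the graph from Figure 3.12):

```python
from collections import deque

def bfs(adj, start):
    """BFS over an adjacency-list graph; returns vertices in discovery
    order. Each vertex is enqueued once and each edge examined a constant
    number of times, which is behind the Theta(|V| + |E|) bound in
    question 9."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for u in adj[v]:
            if u not in visited:
                visited.add(u)
                queue.append(u)
    return order

# A small example graph (my own, for illustration only):
adj = {"a": ["b", "e"], "b": ["a", "c"], "c": ["b", "f"],
       "e": ["a", "f"], "f": ["c", "e"]}
print(bfs(adj, "a"))  # ['a', 'b', 'e', 'c', 'f']
```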

  • Chapter 4.1 (Including the introduction to the chapter)
    1. Which typically leads to a more efficient algorithm: decrease by a constant or decrease by a constant factor? Explain.
    2. Which typically leads to a more efficient algorithm: decrease by a constant factor or variable size decrease? Explain.
    3. Explain how to sort a hand of playing cards using insertion sort. (Hint: Assume the cards start in a pile on the table and that you are going to sort them into your hand, one by one.)
    4. Why is insertion sort generally considered to be better than selection sort?
    5. Demonstrate insertion sort on the array [3, 7, 8, 1, 4, 6, 2]. (Show the steps as they did in Figure 4.4.)
    6. You already know a decrease by a constant factor algorithm (you probably saw it in CSCI 235, and you certainly saw it earlier this semester). What is it?
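A Python sketch of insertion sort, run on a different array than the one in question 5 so that exercise is still yours to do by hand:

```python
def insertion_sort(a):
    """Decrease-by-one: insert a[i] into the already-sorted prefix
    a[0..i-1] by shifting larger elements right."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```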

  • Chapter 4.2
    1. Is the graph in Figure 1.6(b) (page 28) a dag? Explain.
    2. Can the graph in Figure 1.6(b) be topologically sorted? Explain.
    3. Go to CS Course Catalog. Create a directed graph that shows the dependencies between the following courses, where a directed edge from A to B means A is a prerequisite for B: 112, 225, 235, 245, 255, 265, 321, 354, 361, 376, 385, 470.
    4. Find a valid order the courses from the previous question can be taken by using the DFS-based algorithm to solve the topological sorting problem. Make sure to show all of your work!
    5. Find a valid order the courses from question 3 can be taken by using the algorithm based on decrease-and-conquer (the source removal algorithm) to solve the topological sorting problem. Make sure to show all of your work!
    6. Did you get the same solutions in the previous two questions? If so, will that always be the case? If not, is that a problem? Explain in either case.
    7. What is the computational complexity of the source removal algorithm? Justify your answer.

  • Chapter 4.3 (for 385)
    1. You first saw a decrease-by-a-constant-factor algorithm back in 235. What was it? (Hint: If you can't figure this out, read the first part of section 4.4 on pages 150-152. But try to figure it out before turning to those pages!)
    2. How might you represent the arrows in the Johnson-Trotter algorithm? Overall, what data structure(s) do you need? (Note: You need to store a permutation, the arrows for a permutation, and a list of permutations. How do you store a permutation? A list of permutations? Etc.) Be complete and specific.
    3. What is a mobile element?
    4. If the Johnson-Trotter algorithm generates n! permutations, why isn't it obvious that it should take Θ(n!) time? (This is a very important question, so make sure you give it some thought, and ask if you aren't sure of the answer. Of course, you should always ask if you don't know the answer.)
    5. Imagine you are in the middle of the Johnson-Trotter algorithm and have this:
      1423
      Show the next 3 steps of the algorithm.
    6. Given the permutation 352461, what are the next 3 permutations generated by LexicographicPermute?
    7. What is the computational complexity of LexicographicPermute, assuming a reasonable implementation?
    8. Explain why generating bit strings of length n and generating subsets of a set of n elements are essentially the same thing.
    9. Why does the order in which bit strings are generated matter?
    10. Use the BRGC algorithm to generate all bit strings of length 4. Make sure to show all of the intermediate steps.
    11. Give a recurrence relation for T(n), the number of steps it takes to call BRGC(n). (There are a few possible answers to this depending on an assumption you make. If you do not think you ever had to make an assumption when you analyzed the algorithm, you probably were not thinking carefully enough.) Do not forget to ask me about solving the recurrence relation in class! (I would have you do it, but I do not want you to spend time on it if you happen to get the incorrect recurrence relation.)
    12. How much space is required by BRGC(n)?
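For question 10, it helps to have the recursive structure of BRGC in front of you. A Python sketch (shown for n = 2 so the n = 4 run is still yours to trace by hand):

```python
def brgc(n):
    """Binary reflected Gray code: take the (n-1)-bit list, prefix '0' to
    each string, then append the reversed list with '1' prefixed."""
    if n == 0:
        return [""]
    prev = brgc(n - 1)
    return ["0" + s for s in prev] + ["1" + s for s in reversed(prev)]

print(brgc(2))  # ['00', '01', '11', '10']
```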

  • Chapter 4.4
    1. What does the binary representation of n have to do with the complexity of binary search?
    2. The textbook demands that I ask you to do Exercise 4.4.10 on page 157. So do it!
    3. Compute 73*25 using the Russian peasant method.
    4. Did you watch the Josephus video linked on the schedule?
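A Python sketch of the Russian peasant method from question 3, demonstrated on 13 × 12 so the 73 × 25 computation is left for you:

```python
def russian_peasant(n, m):
    """Halve n (dropping remainders) and double m; sum the m values from
    the rows where n was odd."""
    total = 0
    while n >= 1:
        if n % 2 == 1:
            total += m
        n //= 2
        m *= 2
    return total

print(russian_peasant(13, 12))  # 156, i.e. 13 * 12
```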

  • Chapter 4.5
    1. Run Quickselect to find the median of the array [5,4,3,2,1].
    2. Run Quickselect to find the median of the array [1,2,3,4,5].
    3. Discuss how efficient Quickselect was in the previous two questions. Was the algorithm more or less efficient than the expected average case? Which one was worse? What did you learn about what makes the algorithm behave badly?
    4. Demonstrate interpolation search for the number 62 in the list [1, 3, 10, 14, 27, 35, 50, 67, 83, 100]. Write down l, r, and x for every iteration and show your computation of x each time.
    5. Give an example of an array of 10-20 elements for which interpolation search might perform badly (i.e. inefficiently). It has to be a sorted array so that the technique is valid, of course. Give an example of a number to search for which the algorithm is not efficient and explain why it behaves badly for the given number.
    6. Were you surprised to learn that several algorithms on a binary search tree are an example of variable-size-decrease? Explain.
    7. Explain why binary search tree algorithms are generally variable-size-decrease instead of decrease-by-a-constant-factor.
    8. For what type of binary search tree are algorithms decrease-by-a-constant-factor (or at least close to it), if any?
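The probe index is the heart of interpolation search (questions 4 and 5). A Python sketch of just that computation (integer division is my implementation choice; it assumes a[l] != a[r]), demonstrated with a different target value than the question's 62:

```python
def interpolation_probe(a, l, r, v):
    """The probe index x used by interpolation search: linear
    interpolation between a[l] and a[r]."""
    return l + (v - a[l]) * (r - l) // (a[r] - a[l])

a = [1, 3, 10, 14, 27, 35, 50, 67, 83, 100]
print(interpolation_probe(a, 0, 9, 27))  # 2
```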

  • Chapter 5.1 (Including the introduction to the chapter)
    1. Are divide-and-conquer algorithms always efficient? If not, give an example of an inefficient divide-and-conquer algorithm. (Hint: You may have seen one previously this semester and/or in CSCI 235.)
    2. Explain why the Master Theorem is ideally suited to help analyze many divide-and-conquer algorithms. Be detailed!
    3. Make an argument for why some decrease-and-conquer algorithms can be considered a special case of divide-and-conquer. Use an example to clearly illustrate the point.
    4. At the bottom of page 173, they give an exact formula for C_worst(n), but do not provide the details. Fill in the details. In other words, give an exact solution to the recurrence relation C_worst(n) = 2 C_worst(n/2) + n - 1 if n > 1, C_worst(1) = 0.
    5. Name one or two advantages that merge sort has over other sorting algorithms such as quick sort and heap sort.
    6. Name one or two disadvantages that merge sort has over other sorting algorithms such as quick sort and heap sort.

  • Chapter 5.2
    1. Of mergesort and quicksort, one of them essentially does the work on the way down the recursive calls, and the other one does the work on the way back up from the recursive calls. Which is which? Clearly justify your answer.
    2. Which partition algorithm did we discuss in CSCI 235, Lomuto or Hoare? (You will have to base your answer on the details of the algorithm since I doubt that I specified the name.)
    3. What sorts of arrays does quicksort perform worst on? Why do those arrays make it perform badly?
    4. Can the worst-case behavior of quicksort be prevented, or at least the chances of it occurring minimized? If so, explain how.
    5. Which algorithm is considered the best sorting algorithm: quicksort, mergesort, or heapsort? Why?
    6. If you want to choose a pivot element other than the leftmost element of the array, how do you have to modify the partition algorithm?
    7. Give two or three downsides of quicksort.

  • Chapter 5.3
    1. Explain why it makes perfect sense that the divide-and-conquer technique applies very naturally to binary trees.
    2. Did you notice that they solved the recurrence relation for A(n(T)) without really "solving" it (i.e. with one of the techniques we learned earlier this semester)? Hopefully it is clear why they couldn't solve it directly: it requires knowing the sizes of the left and right subtrees, and we do not know that for a generic tree. So how did they arrive at the answer instead? Explain why the technique they used makes sense and indeed arrives at the correct answer.
    3. Give pseudocode for inOrderTraversal(Node r) (r is for root). A "visit" to a node r should just call Visit(r).
    4. In languages like C++, you need to delete memory when you are done with it. So if you want to delete a binary tree, you need to delete all of its nodes, one by one. Give pseudocode for an algorithm deleteTree(Node r) that deletes all of the nodes of a tree rooted at r. (Hint: Use the appropriate traversal and when you "visit" a node u, call delete(u). Your algorithm should be very similar to your algorithm from the last problem but with two differences.)
    5. For each of the following binary search tree operations, does it use the divide-and-conquer technique or the variable-size-decrease technique? insert, search/contains, preOrderTraversal, height, maximum

  • Chapter 5.4
    1. Do exercise 5.4.2 on page 191. Use the version of the algorithm based on the decimal representation (not the binary representation). Show all of the steps and organize your work so you can get a clear sense of how the algorithm is working and how much work it requires. This will involve about a page of computations, so if you have fewer computations than this you probably aren't doing it recursively.
    2. Note that if we use the regular matrix multiplication algorithm, c_00 = a_00 b_00 + a_01 b_10. Show that this is what we get when we compute m_1 + m_4 - m_5 + m_7 by simplifying that expression (using the definitions of these given on the next page).
    3. Explain where the 7 A(n/2) comes from in the recurrence relation for A(n) on page 191.
    4. Explain where the 18(n/2)^2 comes from in the recurrence relation for A(n) on page 191.
    5. What is the best worst-case complexity you could possibly hope for in an algorithm that multiplies two n × n matrices? Explain.

  • Chapter 5.5 (for 385)
    1. List at least three divide-and-conquer algorithms that you have learned about previously.
    2. What does the Master Theorem have to do with divide-and-conquer algorithms?
    3. Consider the set of points {(1,4), (6,3), (2,3), (4,4), (8,2), (3,10)}. Give the ordered lists P and Q as suggested in the book (page 192).
    4. Consider the divide-and-conquer algorithm to solve Closest-Pairs.
      1. There are three possibilities for where the smallest distance is. What are they?
      2. Why do we only need to consider points that are of distance at most d from the vertical strip? (Hint: in your answer, you will need to make use of how d is defined.)
      3. Why is it important to consider the points in order by y coordinate in the second part of the algorithm (the non-recursive part)?
      4. Redraw Figure 5.7(b), but include a point p along with 5 points you have to check it against. (In other words, create a worst-case example.) Clearly indicate which point is p and then draw big dots where the other points are. Your points should obey the rules of minimum distance! That is, if two points are on the same side, they must be of distance at least d from each other.
      5. Why is the list Q important in the algorithm?
      6. In EfficientClosestPair (page 194), what is the maximum number of times the while loop executes each time through the for loop? (This is important because the efficiency of the algorithm depends in part on this.)
    5. Consider the following points. When doing computations, you can just use your best guess since the exact coordinates are not given.

      Show the result of each of the following on separate drawings (do your best to recreate the location of the points).
      1. Show the first step of the quickhull algorithm. Clearly label p1 and pn.
      2. Do the next step of the quickhull algorithm for the upper hull. Clearly label pmax.
      3. Do the next step of the quickhull algorithm for the lower hull. Clearly label pmax.
      4. Do one more step for the lower hull (which will involve 2 more subproblems).
      5. Do one more step for the lower hull (which will involve 2 more subproblems).
    6. Describe a set of points that might be considered a worst-case input for quickhull.

  • Chapter 6.1 (Including the introduction to the chapter)
    1. Describe how the PresortMode algorithm given on page 204 works. That is, describe what the algorithm is doing in paragraph form.
    2. If I want to search for a single item in a list, does it make sense to sort the list and use binary search to find the element? Explain.
    3. If a linear-time algorithm exists to solve a problem, should I attempt to create a better algorithm using the idea of presorting? Explain why or why not.
    4. If an algorithm with complexity Θ(n log n) exists to solve a problem, should I attempt to create a better algorithm using the idea of presorting? Explain why or why not.
    5. An algorithm of complexity Θ(n3) exists to solve a certain problem. If I sort the input, I can use an algorithm that has complexity Θ(n2). Is presorting worth it in this case? Clearly justify your answer.

  • Chapter 6.2 (for 385)
    Note: If you aren't very familiar with matrices and matrix multiplication, ask a classmate, upper-class student, or me to help you understand the parts of the section that do not make sense.
    1. Answer the question posed on page 208: "what general design technique would such a method be based upon?"
    2. Solve this system:
          3x + 2y = 5
          4x -  y = 14
      
    3. Can you extend the technique you used to solve the previous problem to a system of, say, 10 equations and 10 unknowns? Would it be fun?
    4. Give the matrices A and b for the following system of equations
      3 x1 - 5 x2 - 52 x3 + 13 x4 = -34
      -4 x1 + 45 x2 - 5 x3 + 3 x4 = 4
      -56 x1 - 3 x2 + 2 x3 - 5 x4 = -3
      23 x1 + 7 x2 - 45 x3 + 7 x4 = 12
      Note: The book does not explicitly state it, but the matrix x would be the column vector
      x1
      x2
      x3
      x4
    5. Solve the system of equations represented by the matrices A' and b' given below. Use variables x, y, and z, in that order since they are easier to write than x1, x2, and x3.
      Note: This should be fairly straightforward. If it is not, take another look at the middle of page 209 where they describe the process. If that does not make sense, ask around for some assistance in understanding how to do it.
      A' = | 2  0   1 |      b' = |  8 |
           | 0  1  -1 |           | -3 |
           | 0  0   2 |           |  4 |
      Was this easier to solve because the matrix is upper-triangular rather than just a generic matrix? Explain.
    6. Do Exercise 6.2.1 on page 216. (Hint: See Example 1 on page 210 for the technique.) Show all of your work!
    7. What two problems are present in the algorithm ForwardElimination?
    8. What was the "glaring inefficiency" in ForwardElimination that was fixed in BetterForwardElimination? (Note: this is not asking about the two problems referred to in the previous question. That question was about errors in the algorithm. This question is about inefficiencies in the algorithm.)
    9. Read through the work on page 212 where they are simplifying the summation very carefully. Is it correct? If not, where is the error?
    10. Note: The book does not make this entirely clear, but when doing the row operation that replaces a row with a sum/difference of it and another row, you need to do it in a very specific way in order to get the correct row multiples for an LU-decomposition. Let Ri be the ith row. Then you would replace Ri with Ri - c Rj for some number c. Notice two things: the ith row is not multiplied by anything, and the jth row is subtracted. So the row multiple is c, not -c. For instance, if you replace the first row with the first row plus 3 times the second row, the multiple would be -3, not 3.
      Now let's continue with Exercise 6.2.1, but this time solve it using LU decomposition.
      1. When you did Exercise 6.2.1, you created the matrix U, and you (hopefully) wrote down the row multiples. So what is the matrix L for that problem?
      2. Compute the matrix product LU and verify that it is the same as the original matrix A (which you constructed based on the equations). (This is not helping us solve this problem. I just want you to see that it works.)
      3. Solve the system Ly=b from the previous problem. (Note: the vector y is a new vector of unknowns that we create as an intermediate step to finding x, which is what we are really trying to find.)
      4. Given the solution y from the previous problem, solve the system Ux=y.
    11. What is the advantage of using an LU decomposition to solve systems of equations over just using Gaussian elimination? In other words, in what situation(s) is it worth doing?

  • Chapter 6.3
    1. Are BSTs an example of transform-and-conquer? Explain.
    2. Are balanced search trees an example of transform-and-conquer? Explain.
    3. Do Exercise 6.3.1 on page 225.
    4. Since rotations change the structure of a tree, it seems possible that rotating might cause a BST to not be a BST anymore. That is, the rotated version might violate the binary search tree property. Explain in detail why a BST remains a BST after a right rotation is performed. Make sure you make a complete argument that it is still a BST. Refer to Figure 6.4 on page 221 for the required notation (e.g. T1, etc.)
    5. What's so great about AVL trees?
    6. What's not so great about AVL trees?
    7. Do Exercise 6.3.3 on page 226.
    8. Let's make sure you understand how 2-3 trees are supposed to work. Consider the tree on the right side of second row of Figure 6.8 on page 225 (the one that has (3,8) as the root and (4,5) as its middle child). How many key comparisons are needed to determine whether or not 7 is in the tree? Which key comparisons are performed?
    9. Again, consider the 2-3 tree on the right side of the second row of Figure 6.8. Show what the tree looks like after inserting 11. Then show what it looks like after inserting 10. (Remember: A node can have one or two keys. If a node has more than 2, the tree has to be fixed!)
    10. What's so great about 2-3 trees?
    11. What's not so great about 2-3 trees?

  • Chapter 6.4
    1. What are the main three operations on a priority queue?
    2. Are heaps and priority queues the same thing? Explain.
    3. Generally speaking, an array implementation of a binary tree is not a good idea. Why not?
    4. Contrary to the previous question, heaps, which are binary trees, can be efficiently implemented using arrays. What is so special about heaps that makes this possible?
    5. Explain what happens in each iteration of the for-loop in HeapBottomUp. In other words, explain what each iteration of the loop accomplishes and how it accomplishes it. Be as detailed as possible.
    6. Why does the for-loop in HeapBottomUp go from ⌊n/2⌋ down to 1? Address both why it goes backwards and why it doesn't start at n. (Hint: what is different about nodes 1 through ⌊n/2⌋ versus nodes ⌊n/2⌋+1 through n?)
    7. Which has more structure (i.e. which is more organized), a BST or a heap? Which takes longer to construct? (Recall that it takes Θ(n log n) time to create a BST.) Explain why this makes sense.
    8. (Optional) If you are bored, have extra time, and like to understand the details of things, work out the final step of the derivation of C_worst(n) from page 230. (It has several tricky/subtle parts, so you have to be really careful!)
    9. Heapsort is not as fast as quicksort. Thinking about how it works, can you see what heapsort does that might make it slower?
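Questions 5 and 6 are easier to think about with a concrete version of the construction in front of you. Here is a sketch of bottom-up max-heap construction in the book's 1-indexed style (index 0 unused); this is a paraphrase of the idea, not the book's exact pseudocode.

```python
def heap_bottom_up(H):
    """Build a max-heap in place from H[1..n] (H[0] is unused) by sifting
    down each internal node, from node floor(n/2) backwards to the root."""
    n = len(H) - 1
    for i in range(n // 2, 0, -1):         # nodes n//2+1..n are leaves: already heaps
        k, v = i, H[i]                     # sift the value v down from node i
        heap = False
        while not heap and 2 * k <= n:
            j = 2 * k
            if j < n and H[j + 1] > H[j]:  # pick the larger of the two children
                j += 1
            if v >= H[j]:
                heap = True                # v dominates both children: stop
            else:
                H[k] = H[j]                # promote the larger child
                k = j
        H[k] = v
    return H
```

Each loop iteration turns the subtree rooted at node i into a heap, relying on the fact that both of i's subtrees are already heaps, which is exactly why the loop must run backwards.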

  • Chapter 6.5
    1. Let p(x) = 7x^5 + 12x^4 - 2x^3 - 5x^2 + 45x + 13.
      1. Compute p(4). How many multiplications and additions did your computation require?
      2. Generalize the result: How many multiplications and additions are required to evaluate a polynomial of degree n using the obvious algorithm? (Hint: You should get a summation that you need to simplify.)
      3. Write p(x) in the form given in equation 6.2 on page 234. (The example on page 235 might help you see how to do it if the formula looks a bit confusing.)
      4. Compute p(4) using the Horner algorithm on page 235 (that is, based on the form given in the previous question). How many multiplications and additions did your computation require?
      5. Were your two computations of p(4) the same? If not, hopefully you realize that something must have gone wrong—try again!
      6. Again, generalize: How many multiplications and additions are required to evaluate a polynomial of degree n using Horner's rule? (Hint: The book tells you.)
      7. Can you see why Horner's rule is so awesome? Explain.
    2. Explain why Horner's rule is an example of transform-and-conquer.
    3. You will probably need to use something like Wolfram Alpha to help you with the computations on this one.
      1. How many multiplications are required to compute 3^37 using the obvious algorithm?
      2. Compute 3^37 using the naive algorithm. (O.K., you can use Wolfram Alpha!)
      3. What is the binary representation of 37?
      4. Compute 3^37 using LeftToRightBinaryExponentiation. Make sure to show the details of each step of the algorithm.
      5. How many multiplications are required to compute 3^37 using LeftToRightBinaryExponentiation?
      6. Compute 3^37 using RightToLeftBinaryExponentiation. Make sure to show the details of each step of the algorithm. (This one requires keeping track of more information at each step.)
      7. How many multiplications are required to compute 3^37 using RightToLeftBinaryExponentiation?
      8. Did you get the same answer for all 3 algorithms? If not, try again!
      9. Compare the three algorithms. Is one of them clearly the best? Is one clearly the worst? Explain.
      10. How can you modify the last two algorithms so you do not need to pass in the binary representation of the exponent? Let's start small: Given n, how can you easily compute b_i, the ith bit of its binary representation? (Hint: you need a bit shift or division and the modulus operator.)
      11. Rewrite either LeftToRightBinaryExponentiation or RightToLeftBinaryExponentiation (choose your favorite!) so that the second argument is n instead of the binary representation of n.
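If you want to check your hand computations, here are sketches of both techniques. These are paraphrases of the ideas, not the book's exact pseudocode; the second one also extracts the bits of n on the fly, in the spirit of questions 10 and 11.

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule. coeffs is listed from the
    highest degree down, e.g. [7, 12, -2, -5, 45, 13] for the p(x) above."""
    p = 0
    for c in coeffs:
        p = p * x + c          # one multiplication and one addition per coefficient
    return p

def left_to_right_power(a, n):
    """LeftToRightBinaryExponentiation, scanning n's binary representation
    from the most significant bit: square at every bit, and multiply by a
    whenever the bit is 1."""
    result = 1
    for bit in bin(n)[2:]:     # the binary digits of n, most significant first
        result *= result
        if bit == '1':
            result *= a
    return result
```

For example, `horner([7, 12, -2, -5, 45, 13], 4)` evaluates the p(4) from question 1, and `left_to_right_power(3, 37)` computes the 3^37 from question 3, so you can compare against your by-hand answers.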

  • Chapter 6.6 (for 385)
    1. In your own words, describe the problem reduction strategy.
    2. What is the main benefit of problem reduction?
    3. Fill in the blank: The problem of computing lcm(m,n) reduces to computing __________.
    4. The problem of counting paths in graphs can be reduced to a problem on matrices. It turns out that many graph problems can be reduced to matrix problems. Explain why this makes sense.
    5. Hopefully you recall learning about efficient shortest path algorithms for graphs. Can finding a longest path be reduced to finding a shortest path? Hint: Edges with negative weights can cause problems with shortest path algorithms. (Feel free to do an online search for "longest simple path in graph" if you wish.)
    6. My brother owns a company that makes two products, The Screamer Fuzz (SF) and the Tap-A-Whirl (TAW). He knows he can sell at most 175 SFs and at most 250 TAWs per day. His profit for an SF is $1 and for a TAW it is $5. Combined, he can make at most 350 total products. He wants to maximize his profit. Express this problem as a linear program, including all of the constraints and the function that needs to be maximized or minimized.
    7. What is the difference between a regular linear programming problem and an integer linear programming problem?

  • Chapter 7.1 (Including the introduction to the chapter)
    1. Is ComparisonCountingSort a very good algorithm? Explain.
    2. Would you use DistributionCountingSort on an array of size 1,000 whose elements are 32-bit integers (you do not know any other details about their possible values)? Explain.
    3. Explain why DistributionCountingSort processes the list in reverse order in the final step.
    4. What is the space-complexity and time-complexity of DistributionCountingSort on a list of size n whose values are between 1 and m (a positive integer)?
    5. In the DistributionCountingSort algorithm, why does it matter whether we copy the values into a new array rather than back into the original array?
    6. Even though the name should make it somewhat obvious, give a clear explanation of the idea behind the time and space trade-off technique.
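A sketch of the idea behind DistributionCountingSort may help with questions 2 through 5 (this is my own compact version, not the book's exact pseudocode). The reverse final pass that question 3 asks about is what makes the sort stable.

```python
def distribution_counting_sort(A, m):
    """Sort A, whose values are integers in 1..m, into a new array S.
    Time Θ(n + m); extra space Θ(n + m) for S and the count array D."""
    D = [0] * (m + 1)
    for v in A:                    # count the frequency of each value
        D[v] += 1
    for j in range(2, m + 1):      # D[v] becomes the number of elements <= v
        D[j] += D[j - 1]           # (the "distribution values")
    S = [0] * len(A)
    for v in reversed(A):          # reverse scan keeps equal keys in their
        D[v] -= 1                  # original order, i.e. the sort is stable
        S[D[v]] = v
    return S
```

Seeing the Θ(n + m) counts laid out in code also makes question 2 concrete: with 32-bit integers, m can be around four billion, dwarfing n = 1,000.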

  • Chapter 7.2 (through page 262)
    1. Use the ShiftTable algorithm to compute the shift table for the pattern mississippi, assuming only lower-case alphabetic characters can occur in the string.
    2. Do Exercise 7.2.1 on page 267. How many comparisons were required?
    3. How many comparisons would be required to search for the pattern from the previous question using the brute-force algorithm from Section 3.2?
    4. Does the number of comparisons from the previous two questions allow you to make a conclusion about which algorithm is better? Explain.
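For question 1, it may help to see how the shift table can be computed. This is my own dict-based sketch of the idea behind ShiftTable, not the book's pseudocode.

```python
def shift_table(pattern, alphabet):
    """Horspool's shift table. A character's shift is the distance from its
    rightmost occurrence among the first m-1 pattern characters to the
    pattern's end; characters not in the pattern get the full length m."""
    m = len(pattern)
    table = {c: m for c in alphabet}      # default: shift by the whole pattern
    for i in range(m - 1):                # last character intentionally excluded
        table[pattern[i]] = m - 1 - i
    return table
```

Running this on "mississippi" over the lowercase alphabet gives you a check on your hand-built table for question 1.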

  • Chapter 7.3 (skipped)

  • Chapter 7.4 (for 385)
    1. When analyzing data structures such as arrays and trees, we usually care about the number of key comparisons used to perform an operation (e.g. search). With B-trees this is not what we care about. What do we care about instead and why?
    2. Do problem 7.4.2 (both parts) on page 279. You might have to pull out your discrete mathematics textbook for this one!
    3. Would you use linear search or binary search in a node of a B-tree with m=5? What about m=1000? Justify your answers. More generally, for what values of m (if any) would it be reasonable to use binary search?
    4. Can any of the algorithms you have previously learned be considered applications of the space and time trade-off technique? Try to name 2 or 3, explaining how they exploit the technique for improved performance.

  • Chapter 8.1 (Including the introduction to the chapter)
    1. Explain why dynamic programming is a special case of space and time trade-off.
    2. Look back at Section 2.5. The iterative algorithm given in that section was an example of dynamic programming. In this case, it is a bottom-up implementation. If you look at your answers to the reading questions, you should see that it was definitely an improvement over the straightforward recursive algorithm.
      Now, implement the same idea using the top-down approach. That is, use a recursive algorithm that also uses the array to save time. (Hint: Start with the recursive algorithm on page 81. Instead of just computing F(N-1) and F(N-2), see if they have already been computed first. For simplicity, assume you have a global array that is initialized to whatever values you want; if you want something other than 0, specify that.)
    3. Solve the coin-row problem for coins 1,5,3,9,10,4. Give the maximum value of coins that can be taken along with which coins. Show all of your work!
    4. Solve the following instance of the coin-collecting problem:
      Show the table of values (F) produced by the algorithm. Then give the number of coins collected and draw the path that collects the coins.

  • Chapter 8.2
    1. Give a very clear description of what F(i,j) is storing for given values of i and j.
    2. Consider solving the knapsack subproblem involving the first i objects. Finish the following sentences.
      1. The dynamic programming algorithm for the knapsack problem is based on the fact that there are two types of solutions that involve the first i objects: those that __________ and those that __________.
      2. If the ith item is not included in the optimal solution, then we can just look up the value of the optimal solution. In other words, F(i,j)=______________ in this case.
      3. If the ith item is included in the optimal solution, then we can compute the value by realizing that since the ith object weighs _________, the best we can do is solve the problem involving the first _________ objects with weight _______ and then add the ith object to that. Thus, F(i,j)=_________________ in this case.
      4. We overlooked an important case: If the ith item does not fit in the knapsack, then the optimal solution must be the same as that involving the first _______ items. Thus, when computing F(i,j), if w_i > ______, then F(i,j)=______________. Otherwise, we consider one of the two previous cases.
      5. Putting this all together, we get the following recursive formula: F(i,j)=________________. (Hint: it should have two cases, one of which has a max in it.)
      6. Of course, no recursive definition would be complete without base case(s). So, our base cases are that F(0,j)=______ for j≥0 (because _____________) and F(i,0)=_____ for i≥0 (because ______________).
      7. Tricky question: Is the dynamic programming algorithm for the knapsack problem a polynomial time algorithm? Explain.
    3. Briefly explain how memory functions can be used to implement dynamic programming algorithms. Are they used with the top-down or bottom-up approach (or both)?
    4. Name one or two advantages to the memory function approach to dynamic programming.
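Once you have filled in the blanks in question 2, you can check your recurrence against a bottom-up sketch like this one (a standard formulation of the knapsack DP; the instance in the comment is my own small example).

```python
def knapsack(weights, values, W):
    """Bottom-up 0/1 knapsack. F[i][j] = best value achievable using the
    first i items with capacity j (items are 1-indexed in F)."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]   # base cases F(0,j) = F(i,0) = 0
    for i in range(1, n + 1):
        w, v = weights[i - 1], values[i - 1]
        for j in range(1, W + 1):
            if w > j:
                F[i][j] = F[i - 1][j]           # item i does not fit
            else:                               # leave item i out, or take it
                F[i][j] = max(F[i - 1][j], v + F[i - 1][j - w])
    return F[n][W]

# Example: four items with weights 2,1,3,2 and values 12,10,20,15, capacity 5.
```

The table F has (n+1)(W+1) entries, which is worth keeping in mind for the "tricky question" about whether this counts as polynomial time.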

  • Chapter 8.3 (for 385)
    1. What other data structure are binary search trees used to implement?
    2. What is the main point of an optimal binary search tree? What makes it different than a normal binary search tree?
    3. Why is the exhaustive search algorithm to construct an optimal binary search tree not practical?
    4. These questions are all about C(i,j).
      1. What is C(i,j)?
      2. What are the conditions on i and j? (e.g. what values can they take on?)
      3. What is the final goal of the algorithm? In other words, for what values of i and j are we trying to find C(i,j)? Explain.
      4. When computing C(i,j), how many possible nodes might be the root of the optimal tree containing keys a_i through a_j? (Be precise!)
      5. How does the algorithm determine which root is the optimal one?
      6. In the middle of page 299, the second step of the derivation of the formula for C(i,j) has a summation from s=i to j of p_s. Explain where this came from.
      7. In the next step, the first two summations seem to be magically replaced with references to C(i,k-1) and C(k+1,j). Explain the logic behind this step.
      8. Explain in your own words the logic behind/meaning of recurrence relation 8.8 on page 299. Imagine you are explaining it to a CSCI 255 student who is just learning about dynamic programming. Make sure to explain the logic behind the summation in the formula.
      9. Why is the second table needed in the algorithm?
    5. Do Exercise 8.3.1 on page 303 (finish the computation of the example). Make sure to write down all of your work.
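A compact sketch of the dynamic program behind recurrence (8.8) is below, with the base case C(i, i-1) = 0 for empty trees. The indexing choices are my own, not the book's exact pseudocode.

```python
def obst_cost(p):
    """Expected number of comparisons in an optimal BST for keys a_1..a_n
    with search probabilities p[1..n] (p[0] unused). Implements
    C(i,j) = min over k in i..j of [C(i,k-1) + C(k+1,j)] + sum(p_i..p_j)."""
    n = len(p) - 1
    # C[i][j] with rows 1..n+1 and columns 0..n; C[i][i-1] = 0 (empty tree)
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]                     # a single key costs one comparison
    for d in range(1, n):                  # d = j - i, growing subproblem size
        for i in range(1, n - d + 1):
            j = i + d
            total = sum(p[i:j + 1])        # every key gets one comparison at the root
            C[i][j] = total + min(C[i][k - 1] + C[k + 1][j]
                                  for k in range(i, j + 1))
    return C[1][n]
```

The `min` over k is the algorithm trying each key a_k as the root, and `total` is exactly the summation from s=i to j of p_s asked about in questions 4.6 and 4.8.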

  • Chapter 8.4
    1. What does the matrix R^(k) represent? In other words, what do the entries tell us?
    2. What does it mean if r_ij^(k) = 1? Be detailed!
    3. In order for there to be a path from u_i to u_j that uses only vertices numbered 1 through k, either there must be a path from u_i to u_j that uses only vertices numbered 1 through k-1, or there must be paths from u_i to u_k and from u_k to u_j that use only vertices numbered 1 through k-1. In terms of the entries of matrices R^(k) and R^(k-1), this means that r_ij^(k)=_____________________.
    4. Do Exercise 8.4.1 on page 311.
    5. What does the element d_ij^(k) from D^(k) represent? Be detailed!
    6. Put formula (8.14) on page 310 into words. In other words, do the opposite of what I did in question #3 above (where I gave the explanation and asked for the formula).
    7. Compare Warshall's algorithm and Floyd's algorithm. What do you notice?
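A sketch of Warshall's algorithm (my own compact version; the book computes a separate matrix R^(k) for each k, while this overwrites one matrix in place):

```python
def warshall(A):
    """Transitive closure of the adjacency matrix A (entries 0/1).
    After processing k, R[i][j] = 1 iff there is a path from i to j whose
    intermediate vertices are all among the first k vertices:
    r_ij^(k) = r_ij^(k-1) or (r_ik^(k-1) and r_kj^(k-1))."""
    n = len(A)
    R = [row[:] for row in A]          # R^(0) is the adjacency matrix itself
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R
```

For question 7 it is worth noting that Floyd's algorithm is the very same triple loop on a distance matrix, with the update `D[i][j] = min(D[i][j], D[i][k] + D[k][j])` in place of the or/and.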

  • Chapter 9.1 (Including the introduction to the chapter)
    1. In your own words, describe the greedy technique. Be complete!
    2. Pick the best word: The greedy technique always/sometimes/never produces optimal solutions. Explain, giving examples if it helps.
    3. Some textbooks discuss the term greedy choice property that a problem is required to have if the greedy technique is to guarantee an optimal solution. Based on this description, try to define the term. I'll get you started: A problem has the greedy choice property if __________.
    4. If the greedy technique does not yield an optimal solution to a particular problem, then is it worthless for that problem? Explain.
    5. The complete graph, K_n, has n^(n-2) spanning trees. Given this, does an exhaustive search algorithm to find a minimum spanning tree of K_n seem reasonable? Explain.
    6. If a graph has about 2n edges (which really is not that many), do you suppose an exhaustive search algorithm to find a minimum spanning tree is reasonable? For simplicity, assume the exhaustive search algorithm considers every subset of n-1 edges, even though some of them will not be spanning trees. (Did you realize that a spanning tree has to have n-1 edges? If not, make sure you understand why.)
    7. Note that as presented on page 319, Prim is not a fully specified algorithm. The first step in the for loop is not clearly specified—how do you find such an edge? The correctness and efficiency of the algorithm depends on how that step is performed. (For instance, if you just look through all of the edges each time you will get horrible performance!) The book discusses how this can be done. Summarize how that step is performed, including specifying any necessary data structure(s) and how they need to be initialized and updated at each step. In fact, rewrite the algorithm so that the whole thing is more clearly specified. The example in Figure 9.3 on page 321 should help you understand the details a little better—pay close attention to how the numbers in parentheses change. Also, read to the end of the section before answering this question.
    8. What makes Prim's algorithm greedy?
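For question 7, here is one way the fully specified algorithm can be sketched: a priority queue of fringe edges with lazy deletion, where each fringe vertex's best entry carries the weight of the cheapest edge connecting it to the tree (the parenthesized numbers in Figure 9.3). This is my own formulation, not the book's pseudocode; `graph` is a hypothetical dict mapping each vertex to a dict of neighbor weights.

```python
import heapq

def prim(graph, start):
    """Prim's MST with a binary heap. Returns (list of tree edges, total weight)."""
    in_tree = {start}
    # heap entries: (edge weight, tree endpoint, fringe vertex)
    fringe = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(fringe)
    mst, total = [], 0
    while fringe and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(fringe)
        if v in in_tree:
            continue                       # stale entry: v joined the tree earlier
        in_tree.add(v)
        mst.append((u, v, w))
        total += w
        for x, wx in graph[v].items():     # relax v's edges into the fringe
            if x not in in_tree:
                heapq.heappush(fringe, (wx, v, x))
    return mst, total
```

Instead of deleting outdated heap entries when a cheaper connecting edge is found, this version simply pushes the new entry and discards stale ones as they are popped, one common way of handling the "update the fringe" step.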

  • Chapter 9.2 (through page 327)
    1. As with Prim in the last section, Kruskal on page 325 is not fully specified. Which step(s) of the algorithm are unclear/ambiguous? What details are needed to complete the implementation?
    2. The book claims that the complexity of Kruskal is Θ(|E| log |E|). I claim that it is Θ(|E| log |V|). Who is correct? Explain.
    3. What makes Kruskal's algorithm greedy?

  • Chapter 9.2 (pages 327-331) (Skipped)

  • Chapter 9.3 (for 385)
    1. Give at least 4 applications of the shortest path problem.
    2. Is the problem discussed in this section the same problem that is solved by the Bellman-Ford Algorithm (we might have covered it in 255) and/or Floyd's Algorithm (from Section 8.4)? (Hint: one solves the same problem, the other solves a more general problem.)
    3. What limitation does Dijkstra's algorithm have?
    4. Why is it not a good idea for the algorithm to just iterate over all of the vertices that are already in the tree, going through each one's adjacency list to find the next vertex to add?
    5. Explain what the algorithm does instead of what I suggested in the last question.
    6. What data structure(s) does Dijkstra's algorithm need to use to be efficient?
    7. When implementing Dijkstra's algorithm, should you use an adjacency matrix or adjacency list? Explain.
    8. Modify Dijkstra on page 335 to turn it into Prim. (Hint: You only need to make two minor changes to two lines of it.)
    9. What makes Dijkstra's algorithm greedy?
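A priority-queue sketch of Dijkstra's algorithm (my own formulation with lazy deletion, not the book's pseudocode on page 335; `graph` is a hypothetical dict of neighbor weights):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for nonnegative edge weights.
    Returns a dict of distances from source to every reachable vertex."""
    dist = {source: 0}
    pq = [(0, source)]                  # (tentative distance, vertex)
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue                    # stale entry: u was already finalized
        done.add(u)
        for v, w in graph[u].items():
            nd = d + w                  # distance to v through u
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

For question 8, notice how little would change if the priority were the edge weight `w` alone rather than the accumulated distance `d + w`: that is essentially the difference between this algorithm and Prim's.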

  • Chapter 9.4
    1. Give an example of a fixed-length encoding.
    2. What is a variable-length encoding?
    3. Why would one use a variable-length encoding instead of a fixed-length encoding?
    4. True or false: Huffman's algorithm always joins leaf nodes together before it joins other nodes together. In other words, as long as there are leaf nodes that have not been joined, it will choose them. Explain, using an example if appropriate.
    5. How many bits are required to store a file with 100 characters using the Huffman encoding computed in the example on pages 339-340? (Of course, we assume the file's characters have the frequencies as specified in the example.)
    6. What is the minimum number of bits to store the file from the previous question if a fixed-length encoding is used?
    7. What makes Huffman's algorithm greedy?
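A compact sketch of Huffman's greedy merging (my own heap-of-codebooks formulation, not the book's tree-based pseudocode):

```python
import heapq
from itertools import count

def huffman_code(freq):
    """Greedy Huffman coding: repeatedly merge the two lowest-frequency
    subtrees. freq maps symbol -> frequency; returns symbol -> bit string."""
    tie = count()   # tiebreaker so the heap never tries to compare the dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)           # two smallest frequencies...
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}   # left subtree
        merged.update({s: "1" + code for s, code in c2.items()})  # right subtree
        heapq.heappush(heap, (f1 + f2, next(tie), merged))   # ...become one tree
    return heap[0][2]
```

The pop of the two smallest frequencies at every step is the greedy choice question 7 asks about; note that nothing forces those two to be leaves, which bears on question 4.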

  • Chapter 10.1 (Skipped)

  • Chapter 10.2 (for 385; Including the introduction to the chapter)
    1. There is a specific iterative improvement technique called hill climbing which is alluded to in the introduction to the chapter.
      1. Using the "hill" analogy, explain how iterative improvement algorithms work.
      2. The hill analogy also reveals a potential problem with the technique. Describe it.
    2. Look at the graphs in problem 4.2.1 on page 142. If we added positive weights to the edges, could these be considered flow networks? If so, identify a valid source and sink. If not, explain why not.
    3. Draw a flow network that has a maximum flow of 10. Try to come up with an example that is not completely obvious (e.g. two vertices with an edge of weight 10 between them).
    4. Interpret equation 10.8 at the bottom of page 361. That is, describe what it is saying. I will help you get started: "For every vertex in the graph (except for the source and sink), __________"
    5. Explain the difference between u_ij and x_ij. Is there a relationship between them? If so, what is it?
    6. Explain what a flow in a network is/means.
    7. Fill in the blanks for the following pseudocode for the Ford-Fulkerson method below. (Since it is pseudocode, you may use English phrases/sentences.)
      	    While(          ) {
      		
      		} 
      (Note: You may need 1-3 lines of code in the while loop, depending on exactly how you specify the algorithm.)
    8. Does the original graph (with zero flow) contain forward edges, back edges, or both? Explain. (Note: It is important to fully understand what forward and backward edges are in order to understand how the algorithm works.)
    9. Explain what a flow-augmenting path is.
    10. What is r_ij? (Not the equation, but what is the meaning of it.) How is it related to u_ij and x_ij?
    11. Why can decreasing the flow on a backward edge increase the overall flow?
    12. Explain why the Ford-Fulkerson method is considered an example of iterative improvement.
    13. Do you fully understand the example on page 365 of why the way we select augmenting paths matters to the efficiency of the algorithm? Good. Then briefly explain the point behind it.
    14. Give a brief high-level description of the shortest-augmenting-path algorithm that is described on pages 365-366 and specified more formally in pseudocode on pages 366-367.
    15. Given the analogy that a graph can be thought of as a bunch of balls (vertices) attached to each other in various ways with strings (edges), explain why the term cut makes sense by bringing scissors into the analogy.
    16. Give a minimum cut of the graph in Figure 10.4 on page 362 along with its capacity. That is, which vertices are in X and which are in its complement, X̄. (Hint: Figure 10.7 might help.)
    17. True or false: It is possible that a graph has a cut with capacity c and a flow with value v, such that c < v. Clearly explain.
    18. What is the worst-case efficiency of the ShortestAugmentingPath algorithm (sometimes referred to as the Edmonds-Karp algorithm)? Is it a good algorithm? A great algorithm? Explain.
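The shortest-augmenting-path algorithm of questions 14 and 18 can be sketched compactly: BFS on the residual network, then augment along the path found. This is my own net-flow formulation, not the book's pseudocode; `capacity` is a hypothetical dict-of-dicts representation of the network.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp (shortest augmenting paths). Returns the max flow value."""
    nodes = set(capacity) | {v for u in capacity for v in capacity[u]}
    flow = {u: {v: 0 for v in nodes} for u in nodes}

    def residual(u, v):
        # r_uv = unused forward capacity plus cancelable backward flow
        return capacity.get(u, {}).get(v, 0) - flow[u][v] + flow[v][u]

    total = 0
    while True:
        # BFS for a shortest flow-augmenting path from s to t
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total                    # no augmenting path: flow is maximum
        # collect the path's edges and its bottleneck residual capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual(u, v) for u, v in path)
        for u, v in path:
            flow[u][v] += bottleneck        # implicitly cancels any flow on (v, u)
        total += bottleneck
```

The `residual` function is r_ij in code: a path step can be a forward edge with spare capacity or a backward edge whose flow can be pushed back, which is how augmenting along the path can only increase the overall flow.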

  • Chapter 10.3 (for 385)
    1. For what values of n is K_n (the complete graph on n vertices) bipartite?
    2. For what values of n is C_n (the cycle on n vertices) bipartite?
    3. Why are augmenting paths of bipartite graphs always of odd length?
    4. From the way the algorithm works, an augmenting path always has one more edge from E-M than from M. Explain why.
    5. When you get stuck understanding what Cases 1 and 2 are trying to say at the top of page 376, skip ahead to the algorithm and walk through the algorithm with the example in Figure 10.10. Then re-read the cases and hopefully they make more sense. This isn't a question, by the way. It's just a helpful tip. But since you are expecting a question, here's one: Did you do what I just suggested?
    6. Are you prepared to talk the class through the example in Figure 10.10 on page 377? Because I might just call on you to do so!
    7. What is the worst-case complexity of the MaximumBipartiteMatching algorithm?
    8. Explain why the MaximumBipartiteMatching algorithm is considered an example of iterative improvement.

  • Chapter 10.4 (Skipped)

  • Chapter 11.1 (for 385; Including the introduction to the chapter)
    1. Was Lord Kelvin wrong?
    2. Explain what a lower bound is and why it is a useful concept.
    3. What would a reasonable trivial lower bound be for a problem on a graph with n vertices and m edges?
    4. What would the information-theoretic lower bound be for a problem that needs to assign to each of the n vertices of a graph a number between -a and a and also assign to each of the m edges a number between 1 and 32? (The idea being that we need to determine what each of these numbers is, but I am not specifying exactly how. But that should not affect your answer.)
    5. Describe in your own words what an adversary argument is and how it establishes a lower bound on any algorithm that solves a problem.
    6. To show that P is at least as difficult to solve as Q, do you reduce P to Q or Q to P? Clearly explain why.
    7. Which problem is more difficult, Euclidean minimum spanning tree or element uniqueness? Explain.

  • Chapter 11.2 (for 385)
    1. Are you convinced that if a binary tree of height h has l leaves, then h ≥ ⌈ log2 l ⌉? If not, re-read the argument on page 395.
    2. Answer the following questions about decision trees.
      1. What is in each node?
      2. What does the height of the tree tell you (relative to the problem/algorithm being analyzed)?
      3. What has to be true about the number of leaves of the tree (relative to the problem/algorithm being analyzed)?
    3. If a problem has 2^n possible answers, what can you say about the lower bound for any comparison-based algorithm to solve that problem?
    4. Explain why the possible number of outcomes of a sorting algorithm is n!.
    5. Justify the second step of the algebra in the middle of page 396.
    6. Summarize the argument for why all comparison sorting algorithms require at least n log2 n comparisons. Make sure to mention what the height of the tree is, how many leaves are in the tree and why, etc.
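For question 6, the whole argument compresses into one chain of inequalities (my paraphrase of the book's derivation, not a quotation of it):

```latex
% h = height of the decision tree = worst-case number of key comparisons;
% l = number of leaves, and l >= n! because every permutation of the input
% must be a distinguishable outcome of the sort.
C_{worst}(n) = h \ge \lceil \log_2 l \rceil \ge \lceil \log_2 n! \rceil \approx n \log_2 n
% The last step uses Stirling's formula: \log_2 n! \in \Theta(n \log n).
```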

  • Chapter 11.3 (for 385)
    1. Name three tractable problems (you can't use any of those listed on page 402).
    2. Name one or two intractable problems.
    3. Technically speaking, is sorting in P? Explain.
    4. What is the difference between tractable/intractable and decidable/undecidable?
    5. Explain the contradiction about Q(Q) on page 403.
    6. Are Hamiltonian circuit and traveling salesman problems intractable? Explain.
    7. Give a longer description of problems in NP than "decision problems that can be solved by nondeterministic polynomial algorithms." That is, expand on what that means.
    8. What is the most important open question in theoretical computer science?
    9. Would we rather it be the case that P = NP or that P ≠ NP? Explain.
    10. What does it mean that the problems in NP-complete are the most difficult problems in NP? Be specific.
    11. What do you need to do to prove that a problem is in NP-complete? (Hint: you need to do 2 things.)
    12. What would it mean if I could find a polynomial-time algorithm to solve Hamiltonian circuit? I mean, would it be a good thing, or a very good thing? Explain.

  • Chapter 11.4 (for 385)
    1. What is the main difference between numerical algorithms and most of those we have studied?
    2. What are the two types of errors that are of concern with numerical algorithms? Give as clear and precise a definition of each as possible. Make sure to also clearly explain what the cause of each is. (Note: I am not talking about absolute and relative errors here.)
    3. What is the main challenge of implementing numerical algorithms? That is, what is the limitation that computers have that makes it a challenge to implement algorithms that deal with real numbers?
    4. Clearly explain the difference between overflow and underflow.
    5. As mentioned in the book, real numbers are stored in a computer by storing the mantissa and the exponent. First, define these. Second, can you see a connection between these and the problems of overflow and underflow? What is it?
    6. Why do you suppose that relative error is of interest? In other words, why isn't it enough to think about absolute error?
    7. Do you understand subtractive cancellation enough to explain what the problem is? Well, at least try to.
    8. Use the procedure on the bottom of page 416 to estimate the square root of 25. Use at least 4 iterations. How close is the estimate to the correct answer (which should be 5, in case you can't compute it yourself)?
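A sketch you can use to check your iterations for question 8. I am assuming the page 416 procedure is Newton's method for square roots; the recurrence below is the standard one, and the starting guess `x0` is my own choice.

```python
def sqrt_estimate(a, iterations=4, x0=1.0):
    """Newton's method for sqrt(a): x_{n+1} = (x_n + a / x_n) / 2,
    starting from the (arbitrary) initial guess x0."""
    x = x0
    for _ in range(iterations):
        x = (x + a / x) / 2
    return x
```

The convergence is quadratic once the iterates get close, so a handful of extra iterations already pins the answer down to machine precision.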

  • Chapter 12.1 (for 385; Including the introduction to the chapter)
    1. Give three possible ways to approach solving a problem when you realize that it is NP-complete.
    2. How many children does a nonpromising node have in a state-space tree?
    3. Find a solution for the 5-queens problem. (Hint: put the first queen in the second column and you should pretty quickly find a solution.)
    4. Are the problems of finding a single solution to the n-queens problem and finding all solutions equivalent? In other words, if one can be done in polynomial time, can the other?
    5. Draw the state-space tree of the backtracking algorithm applied to the subset-sum instance S = {3, 5, 6, 8} with d=17. Label the nodes as in Figure 12.4 on page 428.
    6. Is an actual tree constructed when implementing a backtracking algorithm?
    7. How does one analyze the running time of a backtracking algorithm?
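For questions 3 and 6, a sketch of n-queens backtracking may help (my own recursive formulation; the "tree" exists only implicitly in the recursion, which is relevant to question 6).

```python
def queens(n):
    """Backtracking for n-queens: one queen per row, represented by the list
    of column choices so far. A nonpromising partial placement is abandoned
    immediately, so such a node of the state-space tree has no children."""
    solutions = []

    def place(cols):
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            # promising iff no earlier queen shares this column or a diagonal
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)
                place(cols)
                cols.pop()      # backtrack and try the next column

    place([])
    return solutions
```

Since the whole state-space tree is explored, this also answers question 4 in code form: finding all solutions is the same traversal as finding one, just without stopping early.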

  • Chapter 12.2 (for 385)
    1. What is the difference between a feasible solution and an optimal solution?
    2. Is branch-and-bound a special case of backtracking, or vice-versa? (or neither?). Explain.
    3. Explain in detail what it means to prune a branch of a state-space tree in branch-and-bound.
    4. Explain how the best-first branch-and-bound technique works.
    5. In Figure 12.5 on page 434, explain why in node 3 the lower bound is lb=7+4+5+4. In other words, why is the bound the sum of those 4 numbers? Be specific. In order to be sure you get it, also explain the computation for node 4.
    6. In Figure 12.6 on page 435, what numbers were added to get lb=14 in node 6?
    7. Shortly after cost=13 is computed in node 8 in Figure 12.7, the rest of the nodes become nonpromising. Explain why.
    8. Is it a good idea to use the branch-and-bound algorithm to solve the assignment problem? If so, why? If not, why not and why do you think they presented this approach to solving that problem?
    9. Looking at Figure 12.8, when did node 2 become nonpromising?
    10. Summarize the idea behind branch-and-bound. Explain what it does to improve upon the idea of backtracking.

  • Chapter 12.3 (Skipped)

  • Chapter 12.4 (Skipped)