Week 3: Exhaustive Search, Graph Traversal, and Divide and Conquer
This week in Design and Analysis of Algorithms we explored several algorithms that use the brute-force design technique. We learned that brute force is one of the easiest techniques to apply, but rarely the most efficient. For example, in the brute-force approach to sequential search we step through every element one by one until the target is found or the list ends; it is essentially trying every option until something works. The lecture mentioned that this technique is feasible for many problems, though you usually have to refine it later into a more efficient approach.
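As a quick illustration, here's how I'd write that sequential search in Python (the function name and the sample list are mine, not the lecture's code):

```python
# Brute-force sequential search: check every element one by one until the
# target is found or the list runs out.
def sequential_search(items, target):
    for i, value in enumerate(items):
        if value == target:   # target found at index i
            return i
    return -1                 # reached the end without finding the target

print(sequential_search([4, 7, 1, 9], 9))  # -> 3
```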
We covered a couple of algorithms that use brute-force or exhaustive techniques. First we took a brief look at the string-matching problem: given a text of n characters and a pattern of m characters, find the substring of the text that matches the pattern. The brute-force approach tries each starting index in the text and checks whether the pattern matches the substring there, and we analyzed the best-case and worst-case scenarios. The best case is an immediate match at the very start; the worst case is when we have to try every starting index in the text and each attempt fails only at the last letter of the pattern.

We also examined exhaustive-search algorithms for problems such as the traveling salesman problem, where we must find the optimal route that visits each city in a set exactly once and returns to the origin city. In addition we tackled the knapsack problem: given a set of items, each with a weight and a value, find the combination of items with the greatest total value whose total weight does not exceed a given capacity. I thought these problems were very tedious because of how many sets of possibilities we had to enumerate and how many weights and values we had to calculate.

Furthermore, we looked at ways to solve graph-traversal problems using depth-first search and breadth-first search. In depth-first search we track visited vertices with a mark array and a stack: from each vertex we visit its adjacent vertices in sorted order until none remain, then backtrack and check whether any other adjacent vertices are still unvisited. Breadth-first search also tracks visited vertices with a mark array, but uses a queue instead of a stack, so it explores all of a vertex's neighbors before moving a level deeper. I thought the problems for these algorithms were quite simple to understand compared to the recurrence-relation problems from last week. I've sketched the string matcher, the exhaustive knapsack search, and both traversals below.
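Here's a minimal sketch of the brute-force matcher in Python (my own function name and example string, not the lecture's code):

```python
# Brute-force string matching: try every starting index i in the text
# and compare the pattern character by character.
def brute_force_match(text, pattern):
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):      # every candidate starting index
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1                  # characters match so far
        if j == m:                  # the whole pattern matched
            return i
    return -1                       # no substring matched the pattern

print(brute_force_match("NOBODY_NOTICED_HIM", "NOT"))  # -> 7
```

In the best case the first comparison at i = 0 already matches the whole pattern; in the worst case all n - m + 1 windows are checked at up to m characters each, which is why the worst case is O(nm) comparisons.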
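And a sketch of the exhaustive knapsack search, which generates every subset of items and keeps the best feasible one (the weights and values here are made up for illustration):

```python
from itertools import combinations

# Exhaustive search for the knapsack problem: enumerate all 2^n subsets,
# keep the most valuable one whose total weight fits within the capacity.
def knapsack_exhaustive(items, capacity):
    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):               # subsets of every size
        for subset in combinations(items, r):
            weight = sum(w for w, _ in subset)
            value = sum(v for _, v in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

items = [(7, 42), (3, 12), (4, 40), (5, 25)]      # (weight, value) pairs
print(knapsack_exhaustive(items, 10))             # -> (65, ((4, 40), (5, 25)))
```

Enumerating all 2^n subsets by hand is exactly what made these exercises so tedious.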
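Finally, here's how I'd sketch the two traversals in Python, using a visited set in place of the mark array from lecture (the graph is a toy example I made up):

```python
from collections import deque

def dfs(graph, start):
    visited, order = set(), []
    def visit(v):                      # recursion plays the role of the stack
        visited.add(v)
        order.append(v)
        for w in sorted(graph[v]):     # adjacent vertices in sorted order
            if w not in visited:
                visit(w)               # go deep first, backtrack on return
    visit(start)
    return order

def bfs(graph, start):
    visited, order = {start}, []
    queue = deque([start])             # the queue tracks adjacent vertices
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in sorted(graph[v]):     # finish all neighbors before going deeper
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
print(dfs(graph, "a"))  # -> ['a', 'b', 'd', 'c']
print(bfs(graph, "a"))  # -> ['a', 'b', 'c', 'd']
```

Running both on the same graph shows the difference nicely: DFS dives from a to b to d before backtracking to c, while BFS finishes both of a's neighbors before reaching d.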
Lastly we took a brief look at divide and conquer and the master theorem. Divide and conquer breaks a complex problem into smaller subproblems and solves them independently, mostly through recursive calls, then combines the sub-solutions to solve the original problem. We will look at one of these algorithms, merge sort, next week. The master theorem was very interesting; I personally thought it looked overwhelming the first time I saw it, but it turned out to be relatively easy. It gives a quicker way to find the time complexity of a divide-and-conquer algorithm: instead of solving the recurrence from scratch, you compare it to a simple function, drop it into one of three cases, and the matching case identifies the time complexity. I've written the cases out below so I don't forget them.
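This is the simplified form we compared recurrences against, for T(n) = aT(n/b) + f(n) with f(n) in Θ(n^d), d ≥ 0 (written out as I have it in my notes, so take my rendering with a grain of salt):

```latex
T(n) \in
\begin{cases}
\Theta(n^d)          & \text{if } a < b^d, \\
\Theta(n^d \log n)   & \text{if } a = b^d, \\
\Theta(n^{\log_b a}) & \text{if } a > b^d.
\end{cases}
```

For example, merge sort's recurrence T(n) = 2T(n/2) + Θ(n) has a = 2, b = 2, d = 1, so a = b^d and T(n) ∈ Θ(n log n).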
There was a lot of problem solving this week, and I really need to go back and review the past weeks' modules because the midterm is coming up. Other than that it has been another busy week, although this time it was quite enjoyable. Here's to another week!

