Basic Algorithm Design Techniques

Index

  1. Divide and Conquer
  2. Greedy Approach
  3. Randomization
  4. Dynamic Programming
  5. Branch and Bound
  6. Backtracking Algorithms

Algorithm Design Techniques

For a given problem, there are many ways to design an algorithm to solve it; for example, insertion sort follows an incremental approach. The following is a list of several popular design approaches.

  • Divide-and-Conquer (D and C)
  • Greedy Approach
  • Dynamic Programming
  • Branch-and-Bound
  • Randomized Algorithms
  • Backtracking Algorithms

Divide-and-Conquer

  • Divide the original problem into a set of subproblems.
  • Solve every subproblem individually, recursively.
  • Combine the solutions of the subproblems into a solution of the whole original problem.

Traditionally, an algorithm is only called "divide and conquer" if it contains at least two recursive calls.

Application:

  • Quicksort
  • Merge sort
  • Binary Search
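The three steps above can be sketched with merge sort; the following is a minimal illustration, not an optimized implementation:

```python
def merge_sort(items):
    """Sort a list by divide and conquer (merge sort)."""
    if len(items) <= 1:                  # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])       # divide and solve left half
    right = merge_sort(items[mid:])      # divide and solve right half
    # combine: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Note the two recursive calls, which is what traditionally qualifies the algorithm as divide and conquer.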

Greedy Approach

Greedy algorithms seek to optimize a function by making choices (the greedy criterion) that are locally best, without considering the global problem.

The result is a good solution but not necessarily the best one.

A greedy algorithm does not always guarantee the optimal solution; however, it generally produces solutions that are close in value to the optimal.

An optimization problem is one in which we want to find, not just a solution, but the best solution.

A "greedy algorithm" sometimes works well for optimization problems.

A greedy algorithm works in phases:
  • We take the best we can get right now, without regard for future consequences.
  • We hope that by choosing a local optimum at each step, we will end up at a global optimum.

Application:

  • Fractional knapsack
  • Activity Selection
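The fractional knapsack shows the greedy criterion directly: always take the item with the best value-to-weight ratio first. The sketch below uses illustrative item values and weights:

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Greedy by value/weight ratio."""
    # greedy criterion: best ratio first
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # take all of it, or a fraction
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

Because items can be split, this local choice is provably optimal here; for the 0-1 variant (items cannot be split) the same greedy rule can fail, which is why that variant appears later under dynamic programming.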

Randomized Algorithms

A randomized algorithm is defined as an algorithm that is allowed to access a source of independent, unbiased random bits, and it is then allowed to use these random bits to influence its computation.

A randomized algorithm uses a random number at least once during its computation to make a decision.

Application:

  • In Quicksort, using a random number to choose a pivot
  • Trying to factor a large number by choosing random numbers as possible divisors.
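The quicksort application above can be sketched as follows; the random pivot choice is the only place randomness enters the computation:

```python
import random

def quicksort(items):
    """Quicksort with a randomly chosen pivot (a randomized algorithm)."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)   # the random decision: pick any element
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 7, 1, 4, 1, 9]))  # [1, 1, 3, 4, 7, 9]
```

The random pivot makes the worst case unlikely for any fixed input, giving expected O(n log n) time regardless of input order.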

Dynamic Programming

Dynamic programming is a technique for efficiently computing recurrences by storing partial results.

It is a method of solving problems exhibiting the properties of overlapping subproblems and optimal substructure that takes much less time than naive methods.

A dynamic programming algorithm remembers past results and uses them to find new results. Dynamic programming is generally used for optimization problems.

In such problems, multiple solutions may exist, and we need to find the best one.

It "requires substructure" and "overlapping subproblems".

Optimal substructure means an optimal solution contains optimal solutions to its sub-problems.

Overlapping sub-problems means the same sub-problems recur, so their solutions can be stored and reused, typically in a bottom-up fashion.

This differs from Divide and Conquer, where sub-problems generally need not overlap.

Application:

  • 0-1 Knapsack
  • Subset-sum problem
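The 0-1 knapsack illustrates storing partial results. The sketch below is a standard bottom-up formulation (the item values, weights, and capacity are illustrative):

```python
def knapsack_01(values, weights, capacity):
    """Bottom-up DP: best[c] = max value achievable with capacity c."""
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # 220
```

Each entry of `best` is computed once and reused many times; a naive recursion would recompute the same sub-problems exponentially often.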

Branch and Bound

In a branch and bound algorithm, a subproblem that cannot be solved or pruned directly is divided into at least two new, more restricted subproblems.

Branch and bound algorithms are methods for global optimization in nonconvex problems.

Branch and bound algorithms can be (and often are) slow: in the worst case they require effort that grows exponentially with problem size. In some cases, however, we are lucky, and the method converges with much less effort.

Branch and Bound algorithms are generally used for optimization problems.

As the algorithm progresses, a tree of subproblems is formed.

The original problem is considered the "root problem".

A method is used to construct an upper and lower bound for a given problem.

At each node, apply the bounding methods.

If the bounds match, that particular sub-problem is solved, yielding a feasible solution.

If the bounds do not match, partition the problem represented by that node, and make the two sub-problems into child nodes.

Continue, using the best-known feasible solution to trim sections of the tree, until all nodes have been solved or trimmed.
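The tree-of-subproblems process above can be sketched on the 0-1 knapsack. Here the lower bound is the best feasible solution found so far, and the upper bound at each node is the greedy fractional relaxation (a standard choice); the instance data is illustrative:

```python
def knapsack_bb(values, weights, capacity):
    """Branch and bound for 0-1 knapsack."""
    # sort items by value/weight ratio so the relaxation is easy to compute
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    vals = [values[i] for i in order]
    wts = [weights[i] for i in order]
    n = len(vals)
    best = 0  # best known feasible solution (global lower bound)

    def upper_bound(i, cap, value):
        # fractional relaxation over the remaining items
        for j in range(i, n):
            if wts[j] <= cap:
                cap -= wts[j]
                value += vals[j]
            else:
                return value + vals[j] * cap / wts[j]
        return value

    def branch(i, cap, value):
        nonlocal best
        best = max(best, value)          # the value so far is feasible
        if i == n or upper_bound(i, cap, value) <= best:
            return                       # trim: this subtree cannot improve
        if wts[i] <= cap:                # child 1: take item i
            branch(i + 1, cap - wts[i], value + vals[i])
        branch(i + 1, cap, value)        # child 2: skip item i

    branch(0, capacity, 0)               # the root problem
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

Branching (take vs. skip an item) builds the tree of subproblems, and the bound comparison trims subtrees whose relaxation cannot beat the best-known feasible solution.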

Application:

  • Artificial Intelligence

Backtracking Algorithms

A backtracking algorithm tries each possibility until it finds one that works.

It performs a depth-first search of the set of possible solutions.

During the search, if an alternative doesn't work, the search backtracks to the choice point, the place which presented different alternatives, and tries the next alternative.

When the alternatives are exhausted, the search returns to the previous choice point and tries the next alternative there.

If there are no more choice points, the search fails.

Backtracking algorithms are based on a depth-first recursive search.

Application:

  • Solving the n-queens problem
  • Maze solving
  • The 8-puzzle
  • Coloring a map with no more than four colors
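A minimal backtracking sketch for the n-queens problem, with explicit choice points (columns in a row) and undo on failure:

```python
def solve_n_queens(n):
    """Enumerate placements of n non-attacking queens via depth-first search."""
    solutions = []

    def place(row, cols, diag1, diag2, queens):
        if row == n:
            solutions.append(queens[:])   # every row filled: a solution
            return
        for col in range(n):              # the choice point: try each column
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                  # attacked: try the next alternative
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            queens.append(col)
            place(row + 1, cols, diag1, diag2, queens)
            queens.pop()                  # backtrack: undo this choice
            cols.discard(col)
            diag1.discard(row - col)
            diag2.discard(row + col)

    place(0, set(), set(), set(), [])
    return solutions

print(len(solve_n_queens(6)))  # 4
```

When every column in a row is attacked, the loop ends and the recursion returns to the previous row's choice point, exactly as described above.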
