By Kameron Hertel, David Meierer, Kyle Ptacek, Rafael Rivera, and Brian Turner

What is an Algorithm?

An algorithm is defined as a specific, unambiguous set of instructions for carrying out a procedure or solving a problem.

Real world examples of algorithms include:

Cooking recipes

Mathematical procedures, such as long division

A guide to building a model

Pretty much anything that requires a defined sequence of steps

Why is an Algorithm Important to Computer Science?

Coding and developing algorithms have helped computer scientists tackle a variety of complex problems, whether mathematical or logical.

In a way, most computer programs can be described as groups of algorithms operating on sets of organized data.

As computer and software technology grows exponentially, new problems come to the surface, which ultimately leads to the creation of new algorithms.

Ultimately, a computer scientist tries to code an algorithm that uses the fewest resources while still solving the problem quickly.

History of an Algorithm

The word algorithm originates with the ninth-century Persian mathematician Abu Abdullah Muhammad ibn Musa al-Khwarizmi. Originally, the word derived from his name was algorism, and it referred to the rules of arithmetic using the now-customary Hindu-Arabic numerals. The word algorithm itself was established in the late eighteenth century. The first formalization of an algorithm in computer science did not come until 1936, with the combined efforts of Alan Turing's Turing machine and Alonzo Church's lambda calculus. Later, in the late 1930s, J.B. Rosser defined algorithms as "an effective method of solving certain sets of problems exists if one can build a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer."

S.C. Kleene also noted, in connection with his famous "Church-Turing thesis," a section he defined as "Algorithmic theories... In setting up a complete algorithmic theory, what we do is to describe a procedure, performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, 'yes' or 'no,' to the question, 'is the predicate value true?'" (Kleene 1943:273)

Today, computer scientists continue to refine the coding and execution of algorithms, especially in the foundations of mathematics and in the philosophy of mind, which focuses on how the brain and body interact. This applies to the artificial intelligence side of the discipline, where researchers attempt to have computers think about and adapt to problems the way a human would.

Computational Complexity Theory

Computational complexity theory is used in theoretical computer science to classify computational problems by their difficulty. When looking at algorithms, the theory analyzes how many steps and how much memory an algorithm needs for the computer to complete the problem.

Computer scientists also consider these questions when figuring out how to create an ideal algorithm:

What is the problem at hand?

What type of algorithm is needed?

Does the problem require more than one algorithm?

How fast can we make the algorithm solve the problem?

Expressing Algorithms

    Algorithms can be expressed in a variety of notations, including:

Natural Languages 

        These are less structured, more ambiguous ways to express an algorithm. They are used for simpler problems.

Pseudocode, flowcharts, and control tables

        These are more structured ways to express algorithms than natural language.

Representations of algorithms have also been split into three categories based on Alan Turing's work on his Turing machine:

    “1 High-level description:

     "...prose to describe an algorithm, ignoring the implementation details. At this level we do not need to mention how the machine manages its tape or head."

    2 Implementation description:

     "...prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level we do not give details of states or transition function."

    3 Formal description:

     Most detailed, "lowest level", gives the Turing machine's "state table".”

(Pulled from Wikipedia's Algorithms article; see Works Cited.)

Types of Algorithms Using Big O Notation

What is Big O notation? Quite simply, big O notation describes the performance of an algorithm: how its execution time, or the amount of memory it requires, grows as the input size grows.

Examples include:

Linear Time

Log Time

Quadratic Time

Factorial Time


Explanations of O Notation Algorithms

Linear Time (O(n))

An algorithm is said to run in linear time when the running time grows in direct proportion to the input size: if the number of inputs "n" doubles, the running time roughly doubles as well. Linear time is considered ideal, and many computer scientists have exploited the concept of parallelism to try to achieve it. Examples include the Boyer-Moore algorithm and Ukkonen's algorithm.
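For instance, finding the largest value in an unsorted list is a classic linear-time task, since every element must be examined exactly once. A minimal Python sketch (the function name is ours, for illustration):

```python
def find_max(values):
    """Scan the list once; the work grows linearly with the input size, O(n)."""
    largest = values[0]
    for v in values[1:]:        # one comparison per element
        if v > largest:
            largest = v
    return largest

print(find_max([3, 41, 26, 7]))  # 41
```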

Log Time (O(log n))

A logarithmic-time algorithm repeatedly cuts the problem in half to reach its output. These algorithms do get slower as the number of inputs increases, but only gradually: if the input size doubles, the running time grows only by a constant amount rather than in proportion to the input. Examples of this behavior can be found in binary trees and in binary search.
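Binary search is the standard illustration: each comparison discards half of the remaining search range, so a sorted list of a million items needs only about twenty comparisons. A minimal sketch:

```python
def binary_search(sorted_items, target):
    """Halve the search range on every step -> O(log n) comparisons."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid              # found: return its index
        if sorted_items[mid] < target:
            low = mid + 1           # target is in the upper half
        else:
            high = mid - 1          # target is in the lower half
    return -1                       # not found

print(binary_search([2, 5, 8, 12, 16, 23], 12))  # 3
```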

Quadratic Time (O(n^2))

Quadratic algorithms are typically used on relatively small problems. Whenever the number of inputs doubles, the running time increases by a factor of four. The quadratic sieve algorithm (analyzed below) is often mentioned here, though its name actually comes from the quadratic polynomial it sieves rather than from a quadratic running time.
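A simple illustration of quadratic time is any algorithm that compares every pair of elements, such as this naive duplicate check (the function name is ours, for illustration):

```python
def has_duplicate_pair(items):
    """Compare every pair of elements: about n*(n-1)/2 comparisons, O(n^2)."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):   # nested loop -> quadratic growth
            if items[i] == items[j]:
                return True
    return False

print(has_duplicate_pair([1, 2, 3, 2]))  # True
```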

Factorial Time (O(n!))

Mathematics defines a factorial as the product of all positive integers less than or equal to n. This means that in a factorial-time algorithm, whenever the number of inputs "n" increases by one, the running time increases by roughly a factor of n. Factorial-time algorithms have been used for brute-force solutions to problems such as the traveling salesman problem.
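The brute-force traveling salesman approach mentioned above can be sketched by simply trying every ordering of the cities; with n cities that means examining n! tours. The distance matrix here is made up for illustration:

```python
from itertools import permutations

def shortest_tour(dist):
    """Try every ordering of the cities -> O(n!) tours examined."""
    cities = range(len(dist))
    best = None
    for tour in permutations(cities):        # n! candidate tours
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
        length += dist[tour[-1]][tour[0]]    # return to the starting city
        if best is None or length < best:
            best = length
    return best

# A made-up symmetric distance matrix for three cities
dist = [[0, 1, 4],
        [1, 0, 2],
        [4, 2, 0]]
print(shortest_tour(dist))  # 7
```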

Analysis of Some Special Algorithms

Boyer-Moore Algorithm (mentioned in linear time)

This algorithm is considered one of the most efficient string-matching algorithms of its kind. It is designed to scan text and look for occurrences of a given pattern of characters.

This example from wordiQ (see Works Cited) explains the idea:

“If it starts a search at the beginning of a text for the word "WIKIPEDIA", for instance, it checks the ninth position of the text to see if it contains an "A". If it finds the "A", it moves to the eighth position to see if that contains the last "I" of the word, and so on until it checks the first position of the text for a "W".

Why Boyer-Moore takes this backward approach is clearer when we consider what happens if the verification fails -- for instance, if instead of an "A" in the ninth position, we find an "X". The "X" doesn't appear anywhere in "WIKIPEDIA", and this means there is no match for the search string at the very start of the text -- or at the next eight positions following it, since those would all fall across the "X" as well. After checking just one character, we're able to skip ahead and start looking for a match starting at the tenth position of the text, just after the "X".

This explains why the best-case performance of the algorithm, for a text of length N and a fixed pattern of length M, is N/M: in the best case, only one in N characters needs to be checked. This also explains the somewhat counter-intuitive result that the longer the pattern we are looking for, the faster the algorithm will be usually able to find it.

The algorithm precomputes two tables to process the information it obtains in each failed verification: one table calculates how many positions ahead to start the next search based on the identity of the character that caused the match attempt to fail; the other makes a similar calculation based on how many characters were matched successfully before the match attempt failed. (Because these two tables return results indicating how far ahead in the text to "jump", they are sometimes called "jump tables", which should not be confused with the more common meaning of jump tables in computer science.)”
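A compact way to sketch this idea in Python is the Horspool simplification of Boyer-Moore, which builds only the first of the two jump tables described above (the "bad character" table); the full algorithm adds the second. This is a sketch, not a complete Boyer-Moore implementation:

```python
def horspool_search(text, pattern):
    """Simplified Boyer-Moore (Horspool variant): compares the pattern
    right to left and skips ahead using a bad-character shift table."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1
    # Shift distance when the text character aligned with the pattern's
    # last position is c; characters not in the pattern shift the full m.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    pos = 0
    while pos <= n - m:
        j = m - 1
        while j >= 0 and text[pos + j] == pattern[j]:
            j -= 1                  # compare right to left
        if j < 0:
            return pos              # full match found at pos
        pos += shift.get(text[pos + m - 1], m)
    return -1

print(horspool_search("READ THE WIKIPEDIA ARTICLE", "WIKIPEDIA"))  # 9
```

Note how a mismatch on a character that does not appear in the pattern lets the search jump the entire pattern length at once, which is exactly the skipping behavior the quoted example describes.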

Quadratic Sieve Algorithm (mentioned in quadratic time)

The quadratic sieve algorithm is an integer factorization algorithm. It is considered the second-fastest method for such factorizations, behind only the general number field sieve. In practice the algorithm tries to set up a congruence of squares modulo n, where n is the target integer to be factored. It does this in two phases: data collection, where the algorithm gathers information that could lead to congruent squares, and data processing, where it puts the collected data into matrices to solve for the congruence of squares. The main steps are summarized here:

1. Choose a smoothness bound B. The number π(B), denoting the number of primes less than B, controls both the length of the vectors and the number of vectors needed.

2. Use sieving to locate π(B) + 1 numbers a_i such that b_i = (a_i^2 mod n) is B-smooth.

3. Factor the b_i and generate exponent vectors mod 2 for each one.

4. Use linear algebra to find a subset of these vectors which add to the zero vector. Multiply the corresponding a_i together mod n, naming the result a, and multiply the b_i together, which yields a B-smooth square b^2.

5. We are now left with the equality a^2 = b^2 (mod n), from which we get two square roots of (a^2 mod n): one by taking the square root in the integers of b^2, namely b, and the other the a computed in step 4.

6. We now have the desired identity: compute the GCD of n with the difference (or sum) of a and b. This produces a factor, although it may be a trivial factor (n or 1). If the factor is trivial, try again with a different linear dependency or different a.

(Pulled from Wikipedia's entry on the quadratic sieve; see Works Cited.)
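The congruence-of-squares finish (steps 5 and 6) can be demonstrated in a few lines of Python. The sieving and linear-algebra phases are omitted here; the smooth values are hard-coded from the textbook example n = 1649, where 41^2 mod n = 32 and 43^2 mod n = 200, and 32 * 200 = 6400 = 80^2:

```python
from math import gcd

n = 1649                     # the number to factor (equals 17 * 97)
# Steps 1-4 (sieving + linear algebra) would produce these values;
# here they are hard-coded from the classic worked example.
a = (41 * 43) % n            # product of the a_i, reduced mod n -> 114
b = 80                       # integer square root of the smooth product 6400

assert pow(a, 2, n) == pow(b, 2, n)   # step 5: a^2 = b^2 (mod n)
factor = gcd(a - b, n)                # step 6: extract a factor
print(factor, n // factor)            # 17 97
```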


A Top-Down Approach to Algorithms using a Warnier-Orr Diagram

Another way to look at algorithms is not in a code format but in a general top-down format. One way to do this is to use a Warnier-Orr diagram to lay out how the entire program will be organized. With this diagram the programmer can see how easy it is to build a program from the top down. The picture below depicts different kinds of algorithms in different formats.

Let’s walk-through the first algorithm:

Algorithm 1 starts by inputting x and y. These values can be anything, but for now we will assume they are integer values (1, 2, 3, etc.). The user can input these, or they can be read from a file. After the two variables have been assigned values, the algorithm goes to the next step, which is to take the sum of x and y and assign that value to the variable z. The final step for this algorithm is to output, or print, the value stored in z.
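This walk-through translates directly into code. Here is one way Algorithm 1 might look in Python (the function name is ours, and the inputs are fixed rather than read from the user or a file):

```python
def algorithm_one(x, y):
    """Input x and y, assign their sum to z, then output z."""
    z = x + y        # step 2: sum the inputs
    print(z)         # step 3: output the result
    return z

algorithm_one(1, 2)  # prints 3
```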

The other two algorithms feature looping and conditional statements. These can make your algorithm more and more complex, which makes organization even more important. Keep that in mind when you design your algorithms: organization will save you a lot of troubleshooting in the future.

Now that we have walked through the first algorithm together, do you think that you could walk-through the others by yourself? Give it a try. You might be surprised at what you can do with algorithms.

Works Cited

Lund University. Algorithms. 14 October 2009. 29 September 2012.

Princeton University. Analysis of Algorithms. 5 August 2011. 29 September 2012.

Scriptol. History of Algorithms and Algorithmics. n.d. 28 September 2012.

Wikipedia. Algorithms. n.d. 30 September 2012.

Wikipedia. Big O Notation. n.d. 28 September 2012.

Wikipedia. Quadratic Sieve. n.d. 29 September 2012.

wordiQ. Boyer-Moore String Searching Algorithm - Definition. 2010. 1 October 2012.