An algorithm is a method of solving a problem using a computer. An implementation of an algorithm consists of a finite set of steps, each of which can involve one or more operations. Nowadays, we hear about algorithms increasingly often and in many different contexts.

Back in the medieval era, mathematicians regarded algorithms as rules for making arithmetic calculations. With the development of computers, however, the term has acquired a special meaning, turning from a specific instrument used in mathematics into a fundamental way to address problems in various fields.

In computer science, an algorithm is a method of solving problems of a certain type. Solving a problem means producing the right output for any given input data. The algorithm itself consists of a sequence of operations that describe, step by step, how to compute the output data (the result) based on the input. For example, an algorithm for computing the greatest common factor of two numbers would produce ‘5’ as output for the inputs ‘10’ and ‘15’.
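As a quick illustration of the input/output relationship described above, Python's standard library happens to expose this computation directly (here `math.gcd` computes what the text calls the greatest common factor, more commonly known as the greatest common divisor):

```python
import math

# The greatest common factor of 10 and 15 is 5:
print(math.gcd(10, 15))  # → 5
```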

As mentioned above, each step of an algorithm requires one or more operations. First, these operations must be clearly defined before we can implement them on a computer. Second, every operation must be effective. This means, in principle, that any person can perform it in a finite amount of time using just pencil and paper. For example, integer arithmetic is effective. Real-number arithmetic, on the other hand, is not, because some numbers can only be expressed by infinite sequences of digits. In general, we require that any algorithm end after a finite number of operations, in a reasonable, finite time period.

A good algorithm for solving a problem is one whose correctness can be proved. In addition, choosing appropriate data structures and a programming language well suited to expressing the algorithm are also important steps.

For example, suppose we have to compute the greatest common factor of two given positive integers. Given the inputs 35 and 14, we could perform the following steps:

- Subtract the smaller number from the bigger one and replace the bigger with the result of the subtraction (in our example 35 – 14 = 21, so we end up with 21 and 14);
- Repeat the previous step until the two numbers become equal (21 – 14 = 7, so now we have the pair 14 and 7, and finally 14 – 7 = 7, so we end up with 7 and 7);
- The greatest common factor is the value we have obtained at the end of the previous step (7 in this case).
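The steps above translate directly into code. Here is a minimal Python sketch of the subtraction-based method (the function name is ours, chosen for illustration):

```python
def gcd_by_subtraction(a: int, b: int) -> int:
    """Greatest common factor of two positive integers,
    computed by repeated subtraction as in the steps above."""
    while a != b:
        # Replace the bigger number with the difference.
        if a > b:
            a = a - b
        else:
            b = b - a
    # When the two numbers become equal, that value is the answer.
    return a

print(gcd_by_subtraction(35, 14))  # → 7
```

Note that the loop mirrors the bullet points exactly: each iteration performs one subtraction, and the loop condition is the "repeat until the two numbers become equal" rule.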

The steps just mentioned represent an algorithm. In order to prove its correctness, we must show that it terminates after a finite number of steps and that it produces the right result. The algorithm just described always terminates because, regardless of the two positive integers we give as input, they will always end up with an equal value after repeated subtractions. Finally, we know that the greatest common factor of any two numbers divides both their sum and their difference, which means our algorithm produces the right result as well.
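The key fact used in this correctness argument can be checked empirically. The following small Python check (ours, not part of the original text) verifies, for a range of pairs, that the greatest common factor divides both the sum and the difference of the two numbers:

```python
import math

# For every pair of positive integers a >= b, the greatest common
# factor must divide both their sum and their difference.
for a in range(1, 50):
    for b in range(1, a + 1):
        g = math.gcd(a, b)
        assert (a + b) % g == 0 and (a - b) % g == 0
```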

One could write algorithms to solve problems in any domain.

For example, any recipe can be considered an algorithm because it starts with raw materials (input) and leads to a finished product (output), obtained after a finite sequence of operations.