Algorithms are the most important and durable part of computer science because they can be studied in a language- and machine-independent way. This means we need techniques that enable us to compare the efficiency of algorithms without implementing them. Our two most important tools are:
- The RAM model of computation, and
- The asymptotic analysis of worst-case complexity.
In this post we are going to discuss the RAM model.
The RAM Model of Computation
Machine-independent algorithm design depends upon a hypothetical computer called the Random Access Machine, or RAM (be careful: this RAM is different from random access memory, the physical memory in your computer). Under this model of computation, we are confronted with a computer where:
- Each simple operation \((+, *, -, =, if, call)\) takes exactly one time step.
- Loops and subroutines are not considered simple operations. Instead, they are compositions of many single-step operations. It makes no sense for "sort" to be a single-step operation, since sorting 1,000,000 items will certainly take much longer than sorting 10 items. The time it takes to run through a loop or execute a subprogram depends upon the number of loop iterations and the specific nature of the subprogram.
- Each memory access takes exactly one time step. Further, we have as much memory as we need. The RAM model takes no notice of whether an item is in cache or on disk.
Under the RAM model, we measure run time by counting up the number of steps an algorithm takes on a given problem instance. If we assume that our RAM executes a given number of steps per second, this operation count converts naturally to actual running time.
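To make the step-counting idea concrete, here is a small sketch (not from the original post; the function name and the exact per-operation charges are illustrative assumptions) that tallies RAM-model steps while finding the maximum of a list:

```python
# Illustrative RAM-model accounting: charge one step per simple
# operation (memory access, comparison, assignment) while scanning
# a list for its maximum. The exact charging scheme is an assumption
# made for this sketch; any consistent scheme gives the same growth rate.

def find_max_with_steps(items):
    """Return (maximum, step_count) under a simplified RAM-model accounting."""
    steps = 0
    best = items[0]          # one memory access + one assignment
    steps += 2
    for x in items[1:]:      # each iteration does a few single-step operations
        steps += 1           # memory access
        steps += 1           # comparison (the "if")
        if x > best:
            best = x
            steps += 1       # assignment
    return best, steps

_, small_steps = find_max_with_steps(list(range(10)))
_, large_steps = find_max_with_steps(list(range(1_000_000)))
print(small_steps, large_steps)  # the step count grows with the input size
```

Notice that the count depends only on the input size and the algorithm's structure, not on any particular machine, which is exactly what makes the model machine-independent.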
Loopholes in the RAM Model
The RAM is a simple model of how computers perform. Perhaps it sounds too simple to capture execution on a real computer. Some examples:
- Multiplying two numbers takes more time than adding two numbers on most processors, which violates the first assumption of the model.
- Fancy compiler loop unrolling and hyper-threading may well violate the second assumption.
- Certainly memory access times differ greatly depending on whether data resides in cache or on disk, which violates the third assumption.
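The third loophole can be seen directly. The rough, machine-dependent sketch below (not from the original post) sums the same array twice, once through indices in sequential, cache-friendly order and once through the same indices shuffled into a cache-hostile order. The RAM model charges both loops identically, yet on real hardware the shuffled pass is typically slower (the gap is modest in Python, and much larger in lower-level languages):

```python
# Timing sketch: the RAM model says every memory access costs one step,
# but real caches make access order matter. Both loops touch exactly the
# same elements and compute the same sum; only the visiting order differs.
import random
import time

N = 1_000_000
data = list(range(N))

sequential = list(range(N))   # indices in order: cache-friendly
shuffled = list(range(N))
random.shuffle(shuffled)      # same indices, cache-hostile order

def timed_sum(indices):
    start = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return total, time.perf_counter() - start

seq_total, seq_time = timed_sum(sequential)
rand_total, rand_time = timed_sum(shuffled)
assert seq_total == rand_total  # identical work under the RAM model
print(f"sequential: {seq_time:.3f}s, shuffled: {rand_time:.3f}s")
```

The exact timings vary from machine to machine, which is precisely the kind of detail the RAM model deliberately ignores.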
Despite these complaints, the RAM proves an excellent model for understanding how an algorithm will perform on a real computer. It strikes a fine balance by capturing the essential behavior of computers while being simple to work with. We use the RAM model because it is useful in practice.
Every model has a size range over which it is useful. Take, for example, the model that the earth is flat. You might argue that this is a bad model, since it has been fairly well established that the earth is in fact round. But when laying the foundation of a house, the flat-earth model is sufficiently accurate that it can be reliably used. It is so much easier to manipulate a flat-earth model that it is inconceivable that you would try to think spherically when you don't have to.
The same situation is true with the RAM model of computation. We make an abstraction that is generally very useful. It is quite difficult to design an algorithm for which the RAM model gives you substantially misleading results. The robustness of the RAM model enables us to analyze algorithms in a machine-independent way.
This article is contributed by Ram Kripal.