You may have come across the word latency while checking your PC or laptop specifications. If you looked closely, you would have noticed entries such as L1 cache latency and L2 cache latency. Have you ever wondered what latency means and why it is such a crucial word in our times? Latency can make or break a device, because nobody wants to own a slow PC or laptop.

In a computer network, latency is defined as the amount of time it takes for a packet of data to travel from one designated point to another. In simple terms, latency means wasted time: the lower the latency, the better the communication between two nodes in any device. More generally, it is the amount of time between a cause and the observation of its effect. Latency greatly affects how usable and enjoyable electronic and mechanical devices, as well as communications, are. Latency in communication is evident in live transmissions between distant points on the earth, where each hop, from a ground transmitter to a satellite and from the satellite to a receiver, takes time.

As you would expect, latency is important, very important. As a programmer, you probably know that reading from disk takes longer than reading from memory, and that the L1 cache is faster than the L2 cache. But do you know the orders of magnitude by which these operations are faster or slower than one another? The table below presents the latency of the most common hardware operations. These figures are only approximations and will vary with the hardware and the execution environment of your code. However, they serve their primary purpose, which is to enable us to make informed technical decisions to reduce latency. To make the multi-fold increases in latency easier to grasp, scaled figures are also provided, computed by assuming that an L1 cache reference takes 1 second.
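The scaling idea is simple arithmetic, and a short sketch makes it concrete. The latency values below are widely quoted ballpark approximations, not measurements from any particular machine, so treat them as assumptions; the point is only to show how the "L1 reference = 1 second" scale is derived.

```python
# Approximate latencies in seconds (ballpark figures only; actual values
# depend entirely on your hardware and are assumed here for illustration).
latencies = {
    "L1 cache reference": 0.5e-9,     # ~0.5 ns
    "L2 cache reference": 7e-9,       # ~7 ns
    "main memory reference": 100e-9,  # ~100 ns
    "disk seek": 10e-3,               # ~10 ms
}

# Rescale every operation so that an L1 cache reference takes 1 second.
scale = 1.0 / latencies["L1 cache reference"]
scaled = {op: t * scale for op, t in latencies.items()}

for op, seconds in scaled.items():
    if seconds >= 86400:  # report very large values in days for readability
        print(f"{op}: ~{seconds / 86400:,.1f} days")
    else:
        print(f"{op}: ~{seconds:,.1f} s")
```

On this scale, a main-memory reference stretches to a few minutes and a disk seek to several months, which is exactly the kind of intuition the scaled column of the table is meant to give.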