Consider a moving camera capturing a video sequence. A purely discrete structure-recovery algorithm treats adjacent frames as if they contained no useful information, often using them only to facilitate point tracking. Differential algorithms, on the other hand, are designed for the small motion between adjacent frames but fail when the motion grows too large. In this paper, we observe that the distinction between these two classes of algorithms is often artificial: proper normalization of the data enables discrete algorithms to handle differential data without the noise introduced by the differential approximation. We term this method “Time Normalization” (TN). We show how TN can be used to overcome degeneracies (or instabilities) in existing vision algorithms, and we provide a geometric understanding of the problem.