This paper considers undiscounted Markov decision problems. Without restricting either the periodicity or the chain structure of the problem, we show that the value iteration method for finding maximal gain policies exhibits a geometric rate of convergence whenever convergence occurs. In addition, we study the behaviour of the value iteration operator: we give bounds on the number of steps needed for contraction, describe the ultimate behaviour of the convergence factor, and give conditions for the existence of a uniform convergence rate.
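To fix ideas, the following is a minimal illustrative sketch (not taken from the paper) of undiscounted value iteration on a hypothetical two-state, two-action MDP. The iteration is v_{n+1} = max_a (r_a + P_a v_n), and the geometric convergence studied in the paper shows up in the span of the successive differences v_{n+1} - v_n, whose midpoint estimates the maximal gain. The transition matrices, rewards, and tolerance below are invented for illustration.

```python
import numpy as np

def value_iteration_span(P, r, tol=1e-10, max_iter=10_000):
    """Undiscounted value iteration v_{n+1} = max_a (r_a + P_a v_n).

    Stops when the span (max minus min) of v_{n+1} - v_n falls below
    tol; at that point the midpoint of the difference approximates the
    maximal gain. Returns (gain estimate, iterations used).
    P has shape (n_actions, n_states, n_states); r has shape
    (n_actions, n_states).
    """
    n_states = r.shape[1]
    v = np.zeros(n_states)
    for it in range(1, max_iter + 1):
        q = r + P @ v                 # shape (n_actions, n_states)
        v_new = q.max(axis=0)         # maximize over actions
        diff = v_new - v
        span = diff.max() - diff.min()
        if span < tol:
            gain = 0.5 * (diff.max() + diff.min())
            return gain, it
        v = v_new
    raise RuntimeError("no convergence within max_iter")

# Hypothetical aperiodic, unichain example: 2 states, 2 actions.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.6, 0.4]]])
r = np.array([[1.0, 0.0],
              [2.0, 0.5]])
gain, iters = value_iteration_span(P, r)
```

On this example the span of v_{n+1} - v_n shrinks geometrically, in line with the rate studied in the paper; for a periodic or multichain problem the raw iteration need not converge, which is why the paper treats convergence conditionally.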