A series of conventions governs how Confirmatory Factor Analysis is applied, from the minimum sample size, to the number of items representing each factor, to the magnitude factor loadings must reach to be interpretable. In practice, these rules sometimes lead to unjustified decisions, because they sideline important questions about a model’s practical significance and validity. Through a Monte Carlo simulation study, the present research shows the compensatory effects of sample size, number of items, and strength of factor loadings on the stability of parameter estimates in Confirmatory Factor Analysis. The results point to various scenarios in which poor decisions are easy to make and cannot be detected through goodness-of-fit evaluation. In light of these findings, we alert researchers to the possible consequences of arbitrarily following such rules when validating factor models. Before applying them, we recommend that applied researchers conduct their own simulation studies to determine what conditions would guarantee a stable solution for the particular factor model in question.
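As an illustration of the kind of study we have in mind, the sketch below simulates data from a one-factor model and tracks how accurately the loadings are recovered across crossed conditions of sample size, number of items, and loading strength. It is a minimal sketch under simplifying assumptions, not a full study design: for a single factor, a one-factor maximum-likelihood exploratory fit (scikit-learn's `FactorAnalysis`) recovers the same loadings a CFA would and stands in here for dedicated SEM software; the condition levels, replication count, and RMSE criterion are illustrative choices.

```python
# Minimal Monte Carlo sketch: recovery of standardized loadings in a
# one-factor model across sample size, number of items, and loading strength.
# Assumptions: equal loadings within a condition, standard-normal factor and
# residuals; FactorAnalysis is an exploratory stand-in for a one-factor CFA.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2024)  # arbitrary seed for reproducibility

def simulate_sample(n, loadings):
    """Draw n observations from a one-factor model with the given loadings."""
    p = len(loadings)
    factor = rng.standard_normal(n)                  # latent factor scores
    uniqueness = np.sqrt(1.0 - loadings ** 2)        # standardized residual SDs
    noise = rng.standard_normal((n, p)) * uniqueness
    return factor[:, None] * loadings + noise        # n x p data matrix

def loading_rmse(n, n_items, loading, n_reps=200):
    """Root-mean-square error of estimated loadings across replications."""
    true = np.full(n_items, loading)
    sq_errs = []
    for _ in range(n_reps):
        X = simulate_sample(n, true)
        fa = FactorAnalysis(n_components=1).fit(X)
        est = fa.components_[0]
        est = est * np.sign(est @ true)              # resolve sign indeterminacy
        sq_errs.append(np.mean((est - true) ** 2))
    return np.sqrt(np.mean(sq_errs))

# Crossed conditions: a compensatory pattern shows up as comparable RMSE in,
# e.g., a small-n/strong-loading cell and a large-n/weak-loading cell.
for n in (100, 250, 500):
    for n_items in (3, 6):
        for loading in (0.4, 0.7):
            print(f"n={n:4d} items={n_items} lambda={loading}: "
                  f"RMSE={loading_rmse(n, n_items, loading):.3f}")
```

A multi-factor version of the same exercise would require SEM software proper (for instance, lavaan or simsem in R), but the logic of crossing conditions and summarizing estimation error over replications carries over unchanged.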