Multiple response (MR) items, such as multiple true-false, multiple-select, and select-N items, are increasingly used in assessments to identify partial knowledge and differentiate latent abilities more accurately. By allowing multiple selections, MR items provide richer information and reduce guessing effects compared with single-answer multiple-choice items. However, traditional scoring methods (e.g., Dichotomous, Ripkey, and Partial scoring) compress response combination (RC) data, losing valuable information and ignoring issues such as local dependence and incompatibility across item types. To address these challenges, we introduce a novel psychometric modeling framework: the Multiple Response Model with Inter-option Local Dependencies (MRM-LD) and its simplified version, the Multiple Response Model (MRM). These models preserve RC data across MR item types, offering a more comprehensive understanding of MR assessments. Parameters of the MRM-LD and MRM were estimated with Markov chain Monte Carlo algorithms implemented in Stan and R. Empirical data from an eighth-grade physics test showed that the MRM-LD and MRM outperform the Graded Response Model and the Nominal Response Model combined with the three scoring methods, retaining more test information, improving reliability and validity, and providing a more detailed analysis of item characteristics. Simulation studies confirmed that the proposed models perform robustly under various conditions, including small samples and few items, demonstrating their applicability across diverse testing scenarios.