In this paper, we propose and implement a manipulation framework that enables parametric learning of complex action trajectories together with their haptic feedback profiles. Our framework extends the Dynamic Movement Primitives (DMP) method with a new parametric nonlinear shaping function and a novel force-feedback coupling term. The nonlinear trajectories of the action control variables and the haptic feedback trajectories measured during execution are encoded with parametric temporal probabilistic models, namely parametric hidden Markov models (PHMMs). PHMMs enable autonomous segmentation of a taught skill based on the statistical information extracted from multiple demonstrations, and allow learning the relations between the model parameters and the properties extracted from the environment. Hidden states with high variance in their observation probabilities are interpreted as parts of the skill that could not be reliably learned and autonomously executed, due to possibly uncertain or missing information about the environment. In those parts, our proposed force-feedback coupling term, which computes the deviation of the actual force feedback from the one predicted by the force-feedback PHMM, acts as a compliance term, enabling a human to scaffold the ongoing movement trajectory to accomplish the task. Our method is verified in a number of tasks, including a real pick-and-place task that involves obstacles of different heights. Our robot, Baxter, successfully learned to generate trajectories that take into account the heights of the obstacles, to move its end effector stiffly (and accurately) along the generated trajectory while passing through apertures, and to allow human–robot collaboration in the autonomously detected segments of the motion, for example, when the gripper picks up an object whose position is not provided to the robot.
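To make the role of the force-feedback coupling term concrete, the sketch below integrates a minimal one-dimensional DMP transformation system whose forcing input is augmented by a term proportional to the deviation of the measured force from a predicted force profile. All names, gains, and signatures here (`run_dmp`, `alpha`, `beta`, `k_c`, `f_shape`, `f_meas`, `f_pred`) are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

def run_dmp(y0, goal, f_shape, f_meas, f_pred, k_c=0.0, tau=1.0, dt=0.01,
            alpha=25.0, beta=6.25):
    """Euler-integrate a 1-D DMP with a hypothetical force-feedback coupling:

        tau * v' = alpha * (beta * (goal - y) - v) + f_shape + k_c * (f_meas - f_pred)
        tau * y' = v

    When the measured force matches the prediction, the coupling vanishes and
    the system follows the learned shaping function; a mismatch (e.g. a human
    pushing the arm) perturbs the trajectory, yielding compliant behavior.
    """
    y, v = float(y0), 0.0
    traj = []
    for s, fm, fp in zip(f_shape, f_meas, f_pred):
        coupling = k_c * (fm - fp)  # deviation from the predicted force profile
        v += dt * (alpha * (beta * (goal - y) - v) + s + coupling) / tau
        y += dt * v / tau
        traj.append(y)
    return np.array(traj)
```

With a zero shaping function and matching force signals, the point attractor dynamics drive the state to the goal; a nonzero `k_c * (f_meas - f_pred)` term lets external contact forces reshape the motion online, which is the mechanism the abstract describes for human scaffolding in uncertain segments.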