Eye gaze tracking is increasingly popular due to improved technology and wider availability. In the domain of assistive device control, however, eye gaze tracking is often used in discrete ways (e.g., activating buttons on a screen) that do not harness the full potential of the gaze signal. In this article, we present a method for collecting both reactionary and controlled eye gaze signals via screen-based tasks designed to isolate various types of eye movements. The resulting data allow us to build an individualized characterization of eye gaze interface use. We present results from a study conducted with participants with motor impairments, offering insights into maximizing the potential of eye gaze for assistive device control. Importantly, we demonstrate the potential for incorporating direct continuous eye gaze inputs into gaze-based interface designs, an approach generally seen as intractable due to the ‘Midas touch’ problem of differentiating between gaze movements for perception and gaze movements for interface operation. Our key insight is to use an individualized measure of smooth pursuit characteristics to distinguish gaze for control from gaze for environment scanning. We also present results on gaze-based metrics for mental workload and show the potential for concurrently using eye gaze as a control input and as a means of assessing a user’s mental workload, both offline and in real time. These findings may inform the development of continuous control paradigms using eye gaze, as well as the use of eye tracking as the sole input modality to systems that share control between human-generated and autonomy-generated inputs.
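The core idea of separating gaze for control from gaze for environment scanning can be illustrated with a simple velocity-based heuristic; the sketch below is not the method developed in this article, and the function name, velocity band, and thresholds are illustrative assumptions (in our approach, the pursuit characterization is individualized per user rather than fixed).

```python
import numpy as np

def classify_gaze_segment(gaze_xy, sample_rate_hz=60.0,
                          pursuit_vel_range=(5.0, 30.0),
                          min_pursuit_fraction=0.8):
    """Heuristically label a short gaze segment as 'pursuit', 'fixation',
    or 'saccade/scan' from point-to-point angular velocity.

    gaze_xy: (N, 2) array of gaze positions in degrees of visual angle.
    pursuit_vel_range: velocity band (deg/s) treated as smooth pursuit;
        in practice this band would be calibrated for each user.
    """
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    # Point-to-point velocity magnitude in deg/s.
    velocities = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * sample_rate_hz

    lo, hi = pursuit_vel_range
    in_pursuit_band = (velocities >= lo) & (velocities <= hi)

    if np.mean(in_pursuit_band) >= min_pursuit_fraction:
        return "pursuit"       # sustained, moderate-velocity movement -> control
    if np.median(velocities) < lo:
        return "fixation"      # mostly stationary gaze
    return "saccade/scan"      # fast, ballistic movements -> environment scanning


# Usage: a 0.5 s window of gaze moving steadily rightward at ~10 deg/s
# falls inside the pursuit band and is labelled as smooth pursuit.
t = np.linspace(0.0, 0.5, 31)
window = np.column_stack([10.0 * t, np.zeros_like(t)])
print(classify_gaze_segment(window))  # -> "pursuit"
```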