I. INTRODUCTION
While rehabilitation is widely used as an effective symptomatic therapy for patients with balance disorders, many barriers still prevent in-home patients from undergoing rehabilitation, such as high medical expenses and the limited number of medical facilities that accept outpatients. Moreover, the number of patients who need rehabilitation is increasing in aging, aged, and hyper-aged societies. To provide such patients with a rehabilitation environment, low-cost, compact, and accurate in-home rehabilitation systems need to be developed.
Some in-home rehabilitation [Reference Chang, Chen and Huang1–Reference Galna3] and personalized medical care [Reference Clark, Bryant, Pua, McCrory, Bennell and Hunt4,Reference Stone and Skubic5] facilities employ recently developed game devices such as Kinect (Microsoft Corp.) and the Wii Balance Board (WBB) (Nintendo Corp.). These devices can acquire the image, depth, and center of pressure (COP) of a user at low cost (100–300 USD).
It has been suggested that Kinect is sufficiently accurate for measuring a user's posture for the instant diagnosis of balance disorders [Reference Funaya, Wada, Yamanaka and Shibata6–Reference Galna, Barry, Jackson, Mhiripiri, Olivier and Rochester10]. The measurement accuracy of Kinect can be further improved by integrating various sensors such as inertial sensors [Reference Bo, Hayashibe and Poignet11,Reference Hondori, Khademi and Lopes12]. However, these systems do not measure the user's posture and balance, which are the most important quantities from a physical therapeutic viewpoint. Posture and balance can be accurately measured using a motion capture system and a stabilograph, but such equipment is too expensive for personal use at home.
In this study, we developed a system that measures the user's posture for in-home rehabilitation using Kinect and the WBB. Because physical therapists use three medical indices for evaluating posture, namely the anterior folding angle, the lateral bending angle, and the COP, these values must be accurately estimated by our system. By using Kinect and modeling the relationship between Kinect data and optical motion capture data, we not only reduced the system cost but also achieved high accuracy. The experimental results showed that the correlation coefficients between the estimated and measured angles were high (0.599–0.998, 0.869 on average, for the anterior folding angle; 0.423–0.999, 0.752 on average, for the lateral bending angle), indicating the effectiveness of our system.
II. PROPOSED METHODS
A) System
Figure 1 depicts our system, which consists of a PC, Kinect, and the WBB. Kinect is an inexpensive RGB-D camera that acquires motion data in real time via the Kinect SDK, which is freely available from Microsoft Corp. The WBB is a piece of equipment that measures the COP [Reference Esculier, Vaudrin, Beriault, Gagnon and Tremblay2,Reference Clark, Bryant, Pua, McCrory, Bennell and Hunt4].
Our system calculates the anterior folding and lateral bending angles of a user from the skeleton data (green segments in Fig. 2) given by the Kinect SDK, measures the COP using the WBB (red circle in the lower left of Fig. 2), and feeds the current data back to the user in real time with animated posture images (white bars in Fig. 2) and the RGB image from Kinect. The system records the 3D skeleton, the two-dimensional (2D) COP, and the movie obtained by Kinect.
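For orientation, a minimal sketch of this per-frame processing loop is given below. The functions read_skeleton, read_cop, render_feedback, and log are hypothetical stubs standing in for the Kinect SDK, WBB, display, and recording interfaces, which are not detailed here; compute_angles implements equations (1)–(4) of Section II.B.

```python
# Hypothetical per-frame loop of the proposed system (sketch only).
# read_skeleton, read_cop, render_feedback, and log are caller-supplied stubs
# standing in for the Kinect SDK, WBB, display, and recording interfaces;
# they are not actual library calls.
import time
from typing import Callable

def rehabilitation_loop(read_skeleton: Callable, read_cop: Callable,
                        compute_angles: Callable, render_feedback: Callable,
                        log: Callable, fps: float = 30.0) -> None:
    period = 1.0 / fps
    while True:
        t0 = time.time()
        skeleton, theta_cam = read_skeleton()          # 3D joints and camera tilt
        theta_ab, theta_lf = compute_angles(skeleton, theta_cam)  # Section II.B
        cop = read_cop()                               # 2D COP from the WBB
        render_feedback(theta_ab, theta_lf, cop)       # animated posture and RGB image
        log(skeleton, cop)                             # record for later analysis
        time.sleep(max(0.0, period - (time.time() - t0)))
```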
B) Calculating the angles
A schematic view of the calculation is given in Fig. 3. The anterior folding and lateral bending angles are calculated using the center of the shoulder $P_{S\_CENTER}$ and the center of the hip $P_{H\_CENTER}$ obtained using the Kinect SDK, where the angles are calibrated using the camera angle $\theta_{cam}$ given by the accelerometer in the camera.
The details of the procedure are shown below:
(i) Calculate each component of the vertical vector $V$ expressing the direction of gravity as
$$V_x = 0, \quad V_y = \cos(\theta_{cam}), \quad V_z = \sin(\theta_{cam}). \tag{1}$$
(ii) Calculate the vector $H$ expressing the direction of the user's body as
$$H = P_{S\_CENTER} - P_{H\_CENTER}. \tag{2}$$
(iii) Calculate the anterior folding angle $\theta_{ab}$ and lateral bending angle $\theta_{lf}$ as
$$\theta_{ab} = -\operatorname{sign}\!\left(\frac{V_z}{|V_{yz}|} - \frac{H_z}{|H_{yz}|}\right) \arccos\frac{V_{yz}\cdot H_{yz}}{|V_{yz}|\,|H_{yz}|}, \tag{3}$$
$$\theta_{lf} = -\operatorname{sign}\!\left(\frac{V_x}{|V_{xy}|} - \frac{H_x}{|H_{xy}|}\right) \arccos\frac{V_{xy}\cdot H_{xy}}{|V_{xy}|\,|H_{xy}|}, \tag{4}$$
where $A_{xy}$ denotes the two-dimensional vector with the $x$- and $y$-components of the vector $A$, and so on.
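As a concrete illustration, the following Python sketch implements equations (1)–(4) with NumPy. The joint positions and camera angle in the example are placeholder values; in the actual system they come from the Kinect SDK and the camera accelerometer.

```python
import numpy as np

def body_angles(p_shoulder_center, p_hip_center, theta_cam):
    """Anterior folding and lateral bending angles, equations (1)-(4).

    p_shoulder_center, p_hip_center: 3D joint positions (x, y, z).
    theta_cam: camera tilt angle in radians from the Kinect accelerometer.
    """
    # (1) Vertical (gravity) vector in camera coordinates.
    v = np.array([0.0, np.cos(theta_cam), np.sin(theta_cam)])
    # (2) Body-direction vector from the hip center to the shoulder center.
    h = np.asarray(p_shoulder_center, float) - np.asarray(p_hip_center, float)

    def signed_angle(v2, h2, sign_term):
        # Angle between the projected 2D vectors, signed by the component difference.
        cos_angle = np.dot(v2, h2) / (np.linalg.norm(v2) * np.linalg.norm(h2))
        return -np.sign(sign_term) * np.arccos(np.clip(cos_angle, -1.0, 1.0))

    # (3) Anterior folding angle: projection onto the y-z plane.
    v_yz, h_yz = v[1:3], h[1:3]
    theta_ab = signed_angle(v_yz, h_yz,
                            v[2] / np.linalg.norm(v_yz) - h[2] / np.linalg.norm(h_yz))
    # (4) Lateral bending angle: projection onto the x-y plane.
    v_xy, h_xy = v[0:2], h[0:2]
    theta_lf = signed_angle(v_xy, h_xy,
                            v[0] / np.linalg.norm(v_xy) - h[0] / np.linalg.norm(h_xy))
    return theta_ab, theta_lf

# Example with placeholder joint positions (m) and a 10-degree camera tilt.
theta_ab, theta_lf = body_angles([0.02, 0.45, 2.1], [0.00, 0.00, 2.0], np.deg2rad(10.0))
print(np.rad2deg(theta_ab), np.rad2deg(theta_lf))
```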
III. EXPERIMENTS
To evaluate the accuracy of the anterior folding and lateral bending angles using the proposed system, we compared the calculated values with the angles acquired from an optical motion capture system.
A) Method
Each subject's posture was recorded simultaneously with Kinect at a sampling frequency of 30 Hz and with an optical motion capture system (MAC3D, Motion Analysis Corp.) at 180 Hz. The transformation from the Kinect coordinate system to the motion capture coordinate system (i.e., the transformation matrix between the two coordinate systems) was calibrated before the experiment using a combined marker composed of optical markers and an AR marker (Fig. 4). The motion capture data were downsampled to 30 Hz to match the sampling frequency of the Kinect data.
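One standard way to realize such a calibration is the Kabsch (orthogonal Procrustes) procedure, which estimates the rigid transform between the two coordinate systems from paired marker positions. The sketch below assumes the corresponding 3D points have already been extracted from the combined marker; this step is not specified here in detail.

```python
import numpy as np

def estimate_rigid_transform(p_kinect, p_mocap):
    """Least-squares rigid transform (R, t) such that R @ p_kinect + t ~ p_mocap.

    p_kinect, p_mocap: (N, 3) arrays of corresponding points measured in the
    Kinect and motion capture coordinate systems (e.g., from the combined marker).
    """
    p_kinect = np.asarray(p_kinect, float)
    p_mocap = np.asarray(p_mocap, float)
    mu_k, mu_m = p_kinect.mean(axis=0), p_mocap.mean(axis=0)
    # Cross-covariance of the centered point sets.
    h = (p_kinect - mu_k).T @ (p_mocap - mu_m)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_m - r @ mu_k
    return r, t

def to_mocap_frame(points_kinect, r, t):
    """Map Kinect-frame points of shape (N, 3) into the motion capture frame."""
    return np.asarray(points_kinect, float) @ r.T + t
```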
Because the center of the shoulder $P_{S\_CENTER}$ and the center of the hip $P_{H\_CENTER}$ obtained by the Kinect SDK lie inside the user's body (Fig. 4, right) [Reference Tamei, Funaya, Ikeda and Shibata13], we calculated the ground truth of the angles from the positions of two markers, the midpoint of the clavicle M1 and the midpoint of the anterior superior iliac spines (the bony projections on the outside of the ilium) M2 (Fig. 4, left), in the same way as the anterior folding angle $\theta_{ab}$ and lateral bending angle $\theta_{lf}$ in Fig. 3.
B) Participants
Two healthy adults (ages 22 and 33) with no balance disorders participated in this experiment. Each of them wore casual clothes such as a T-shirt and jeans and sequentially performed four tasks: standing with eyes open, standing with eyes closed, swaying in the anterior direction, and swaying in the lateral direction, each of which lasted for 30 s.
C) Evaluation
The correlation coefficient between the angles $\theta_{MAC3D\_ab}$ and $\theta_{kinect\_ab}$ and that between the angles $\theta_{MAC3D\_lf}$ and $\theta_{kinect\_lf}$ were evaluated. This means that we modeled the relationship between the angles as a linear model,
$$\theta_{MAC3D\_ab} = a\,\theta_{kinect\_ab} + b, \tag{5}$$
$$\theta_{MAC3D\_lf} = c\,\theta_{kinect\_lf} + d, \tag{6}$$
where $a$, $b$, $c$, and $d$ are appropriate constants.
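A minimal sketch of estimating these constants by ordinary least squares from the synchronized angle series (placeholder arrays below) might look as follows; the same routine also returns the correlation coefficient and the RMSE reported in Section III.D.

```python
import numpy as np

def fit_linear_calibration(theta_kinect, theta_mac3d):
    """Fit theta_mac3d ~ a * theta_kinect + b by ordinary least squares.

    Returns (a, b), the Pearson correlation coefficient, and the RMSE of the
    calibrated estimate against the ground-truth angles.
    """
    theta_kinect = np.asarray(theta_kinect, float)
    theta_mac3d = np.asarray(theta_mac3d, float)
    a, b = np.polyfit(theta_kinect, theta_mac3d, deg=1)
    r = np.corrcoef(theta_kinect, theta_mac3d)[0, 1]
    rmse = np.sqrt(np.mean((a * theta_kinect + b - theta_mac3d) ** 2))
    return (a, b), r, rmse

# Placeholder synchronized series (degrees) for, e.g., the anterior folding angle.
theta_kinect_ab = np.array([1.2, 2.5, 3.9, 5.1, 6.8])
theta_mac3d_ab = np.array([1.0, 2.4, 3.7, 5.0, 6.5])
(a, b), r, rmse = fit_linear_calibration(theta_kinect_ab, theta_mac3d_ab)
print(a, b, r, rmse)
```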
D) Results
The correlation coefficients were very high (0.826–0.998) for both directions (Fig. 5).
When the constants $a$, $b$, $c$, and $d$ in (5) and (6) were determined for each participant by the least-squares method, the root-mean-square errors (RMSE) of the estimated angles and their standard deviations (STD) were sufficiently small (Table 1 and Fig. 6).
IV. DISCUSSION
A) Calibration of the angles
The experimental results showed that our system can estimate the angles with high accuracy if it is well calibrated (Table 1). This implies that a calibration method that does not rely on an optical motion capture system is needed when applying our system to in-home rehabilitation.
One possible way to calibrate the parameters is to attach color markers to the user and capture them with the normal RGB video of Kinect, since it is easy to calculate the anterior folding and lateral bending angles from 2D images captured from the dorsal and lateral sides (Fig. 7).
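For instance, assuming two color markers attached near the shoulder and hip centers, their centroids can be extracted from an RGB frame by simple color thresholding, and the projected body angle follows from the 2D segment between them. The thresholds and marker placement below are illustrative assumptions, not a prescribed protocol.

```python
import numpy as np

def marker_centroid(rgb_image, lower, upper):
    """Centroid (x, y) of pixels whose RGB values lie within [lower, upper].

    rgb_image: (H, W, 3) uint8 array; lower/upper: length-3 RGB bounds
    (placeholder thresholds for a brightly colored marker).
    """
    mask = np.all((rgb_image >= lower) & (rgb_image <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def projected_angle(upper_marker, lower_marker):
    """Signed angle (rad) of the upper-to-lower marker segment from the image
    vertical, i.e. the body tilt seen in that view (image y points downward)."""
    dx = upper_marker[0] - lower_marker[0]
    dy = lower_marker[1] - upper_marker[1]
    return np.arctan2(dx, dy)
```

Applied to a lateral view this yields the anterior folding angle and, applied to a dorsal view, the lateral bending angle; these reference values could then be paired with the Kinect angles to fit the constants in (5) and (6).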
If the angles have only bias terms $b$ and $d$, that is, if the coefficients $a$ and $c$ are unity, the calibration becomes much simpler or even unnecessary. To confirm whether this simplification is acceptable, we conducted an additional experiment with $a=c=1$. As a result, the RMSEs of the angles for the biased model (Table 2) were comparable to those for the linear model (Table 1).
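Under the assumption $a=c=1$, the least-squares bias reduces to the mean offset between the two angle series, as in this small sketch (placeholder inputs as before).

```python
import numpy as np

def fit_bias_only(theta_kinect, theta_mac3d):
    """Bias-only calibration (a = 1): b is the mean offset between the series."""
    theta_kinect = np.asarray(theta_kinect, float)
    theta_mac3d = np.asarray(theta_mac3d, float)
    b = np.mean(theta_mac3d - theta_kinect)
    rmse = np.sqrt(np.mean((theta_kinect + b - theta_mac3d) ** 2))
    return b, rmse
```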
B) Necessity for WBB
Our system measures the COP using the WBB. However, the COP could possibly be estimated from the anterior folding and lateral bending angles; if so, the system could omit the WBB and would cost even less. To investigate whether the angles carry information about the COP, we measured the anterior folding and lateral bending angles and the COP simultaneously. We then calculated the correlation coefficient between the anterior folding angle and the corresponding component of the COP and that between the lateral bending angle and the corresponding component of the COP.
One patient with Parkinson's disease participated in this experiment under the guidance of a physical therapist. The participant measured the angles and the COP by himself at home using our system, performing four trials a day (in a comfortable posture and an upright posture, every morning and evening) for a week. Each trial consisted of standing with eyes open and then with eyes closed, for 30 s each.
As a result, the correlation coefficients had small averages and large deviations over the trials (Fig. 8). This is presumably because the subject moved not only the trunk (captured by the anterior folding and lateral bending angles) but also other body parts. In fact, the existence of coordination (synergies) of multiple joints and muscles for keeping balance has been suggested [Reference Horak14–Reference Kim17]. Since the COP reflects the motion of every part of the body, it should potentially include information about this coordination [Reference Panzer, Bandinelli and Hallett18]. It has also been found that the spatial relationship between the COP and the center of mass (COM) is related to dynamic balance control [Reference Hass, Waddell, Fleming, Juncos and Gregor19,Reference Tucker, Kavanagh, Morrison and Barrett20]. These findings indicate that the WBB is useful for assessing and rehabilitating dynamic balance control.
V. CONCLUSIONS
In this paper, we proposed a low-cost in-home rehabilitation system for patients with balance disorders. The system measures a participant's posture in terms of the anterior folding and lateral bending angles and the COP, the indices used by physical therapists.
Experiments confirmed that the system estimates the angles very accurately, using MAC3D as the ground truth. In fact, the angles measured by Kinect had high correlation coefficients with those measured by MAC3D and were related to them by a linear transform.
In addition, our system provided the participant with visual feedback of the postural information and COP in real-time, which would assist the participant in rehabilitating at home.
ACKNOWLEDGEMENTS
This work was supported by a Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (Grant no. 23240028).
APPENDIX
A. MEASURING METHOD FOR COP
Figure 9 shows the positions of the sensors of the WBB. The COP was calculated as
$$COP_x = \frac{BSW}{2}\,\frac{(F_{TR} + F_{BR}) - (F_{TL} + F_{BL})}{F_{TOTAL}},$$
$$COP_y = \frac{BSL}{2}\,\frac{(F_{TL} + F_{TR}) - (F_{BL} + F_{BR})}{F_{TOTAL}},$$
where $F_{i}\ (i=\hbox{TL}, \hbox{TR}, \hbox{BL}, \hbox{BR})$ are the forces measured by the four sensors, $F_{TOTAL}$ is their total force, and BSL and BSW are the distances between the sensors in the anteroposterior and lateral directions, respectively (Fig. 9).
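A small Python sketch of this computation is given below, assuming the four sensor forces have already been read from the board; the sensor distances in the example are placeholder values, not the board's specifications.

```python
def wbb_cop(f_tl, f_tr, f_bl, f_br, bsl, bsw):
    """Center of pressure from the four WBB sensor forces.

    f_tl, f_tr, f_bl, f_br: forces at the top-left, top-right, bottom-left,
    and bottom-right sensors (N). bsl, bsw: sensor spacing in the
    anteroposterior and lateral directions (m).
    """
    f_total = f_tl + f_tr + f_bl + f_br
    cop_x = (bsw / 2.0) * ((f_tr + f_br) - (f_tl + f_bl)) / f_total  # lateral
    cop_y = (bsl / 2.0) * ((f_tl + f_tr) - (f_bl + f_br)) / f_total  # anteroposterior
    return cop_x, cop_y

# Example with placeholder forces (N) and sensor distances (m).
print(wbb_cop(150.0, 160.0, 140.0, 155.0, bsl=0.24, bsw=0.43))
```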
Tomoya Tamei is an assistant professor at the Graduate School of Information Science, Nara Institute of Science and Technology, Japan. He received his Ph.D. degree from Nara Institute of Science and Technology in 2009. His research interests are robotics, motor neurophysiology, and machine learning. He received the best paper award from the Japanese Neural Network Society (2015) and the IROS 2015 Best Application Paper Award (2015).
Yasuyuki Orito received his B.E. from Gifu National College of Technology. His research interest is the development of in-home rehabilitation systems using ICT.
Hiroyuki Funaya received his B.E. and M.E. degrees from the Graduate School of Informatics, Kyoto University, and his Ph.D. degree from Nara Institute of Science and Technology in 2011. From 2011 to 2014, he was with the Graduate School of Information Science, Nara Institute of Science and Technology, as a postdoctoral fellow.
Kazushi Ikeda received his B.E., M.E., and Ph.D. degrees in mathematical engineering and information physics from the University of Tokyo in 1989, 1991, and 1994, respectively. He was a research associate with the Department of Electrical and Computer Engineering, Kanazawa University from 1994 to 1998. In 1995, he was a research associate at the Chinese University of Hong Kong for three months. From 1998 to 2008, he was with the Graduate School of Informatics, Kyoto University, as an associate professor. Since 2008, he has been a full professor at Nara Institute of Science and Technology. He is the former editor-in-chief of the Journal of the Japanese Neural Network Society, an action editor of Neural Networks, and an associate editor of IEEE Transactions on Neural Networks and Learning Systems.
Yohei Okada received his Ph.D. degree in health sciences from Osaka Prefecture University in 2012. He is currently an assistant professor in the Department of Physical Therapy, Faculty of Health Science, Kio University, and the Graduate School of Health Sciences, Neurorehabilitation Research Center, Kio University. His research interests are postural control and gait, motor learning, and rehabilitation in neurological disorders (e.g., Parkinson's disease). Research papers on these themes have been published in international refereed journals.
Tomohiro Shibata received his B.E., M.E., and Ph.D. degrees in 1991, 1993, and 1996 from the University of Tokyo. He is currently a professor at the Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology. His main research interest is understanding and assisting human motor control and decision making using interdisciplinary approaches. He received a young investigator award from the Robotics Society of Japan (1992), the best paper award from the Japanese Neural Network Society (2002), the Neuroscience Research Excellent Paper Award from the Japan Neuroscience Society (2007), and the IROS 2015 Best Application Paper Award (2015). He is a visiting professor at the Nara Institute of Science and Technology and Chubu University, an editorial board member of Neural Networks, and an executive board member of the NPO Agora Music Club.