Introduction
Cognitive health poses a serious challenge for the expanding elderly population, and worldwide prevalence of mild cognitive impairment (MCI) is estimated to be around 15% (Bai et al., Reference Bai, Chen, Cai, Zhang, Cheung, Jackson, Sha, Xiang and Su2022). Both normative and illness-related cognitive changes can undermine the capacity to perform everyday tasks and make autonomous decisions (Beach et al., Reference Beach, Czaja and Schulz2023; Marshall et al., Reference Marshall, Aghjayan, Dekhtyar, Locascio, Jethwani, Amariglio, Johnson, Sperling and Rentz2017). Further, the domains of cognitive functioning affected most by normal aging and cognitive disorders are those that are required to learn new skills and process novel information (Park & Schwarz, Reference Park and Schwarz2012), with research finding direct correlations between cognitive abilities and technology adoption (Czaja et al., Reference Czaja, Charness, Fisk, Hertzog, Nair, Rogers and Sharit2006). MCI is characterized by deterioration in cognitive abilities and daily functioning (Marshall et al., Reference Marshall, Rentz, Frey, Locascio, Johnson and Sperling2011), surpassing typical age-related decline but not yet reaching the criteria for a dementia diagnosis. While not everyone with MCI progresses to dementia, MCI with amnestic features (aMCI) is a risk factor for developing Alzheimer’s disease (AD; Albert et al., Reference Albert, DeKosky, Dickson, Dubois, Feldman, Fox, Gamst, Holtzman, Jagust, Petersen, Snyder, Carrillo, Thies and Phelps2011). The lack of an efficacious medication for cognitive impairment in MCI/early AD (Petersen et al., Reference Petersen, Thomas, Grundman, Bennett, Doody, Ferris, Galasko, Jin, Kaye, Levey, Pfeiffer, Sano, van Dyck and Thal2005) has prompted research on the development of non-pharmaceutical add-on interventions, specifically computerized cognitive training (CCT) of cognitive skills.
Meta-analyses support the overall efficacy of cognitive training for MCI (Hill et al., Reference Hill, Mowszowski, Naismith, Chadwick, Valenzuela and Lampit2017; Zhang et al., Reference Zhang, Huntley, Bhome, Holmes, Cahill, Gould, Wang, Yu and Howard2019), for AD in certain domains (Sherman et al., Reference Sherman, Mauser, Nuno and Sherzai2017), and for older individuals with normal cognition (NC; Lampit et al., Reference Lampit, Hallock, Valenzuela and Gandy2014). However, there are moderators of the efficacy of CCT interventions for cognition in these populations, including length of training session (sessions shorter than 30 minutes are less helpful) and dose per week (more than 3 sessions per week had diminishing returns). Studies have also suggested that lower baseline cognition scores (Roheger et al., Reference Roheger, Kalbe, Corbett, Brooker and Ballard2020), adding an exercise component (Gavelin et al., Reference Gavelin, Dong, Minkov, Bahar-Fuchs, Ellis, Lautenschlager, Mellow, Wade, Smith, Finke, Krohn and Lampit2021), and more structured CCT (Roheger et al., Reference Roheger, Kessler and Kalbe2019) lead to greater benefits. Commercially available cognitive training software was associated with wide-ranging gains in older people (Tetlow and Edwards, Reference Tetlow and Edwards2017), suggesting that specialized CCT software may not be required.
Studies in psychiatric conditions have repeatedly found that training that is titrated in difficulty, adjusted moment to moment with achievement, sustained over time, augmented by coaching, and built on engaging tasks leads to the greatest gains, both in schizophrenia (Bowie et al., Reference Bowie, Bell, Fiszdon, Johannesen, Lindenmayer, McGurk, Medalia, Penadés, Saperstein, Twamley, Ueland and Wykes2020) and in major depression (Douglas et al., Reference Douglas, Jordan, Inder, Crowe, Mulder, Lacey, Beaglehole, Bowie and Porter2020). Remote delivery or primarily home-based computer training has had mixed results, with some reviews reporting successful training outcomes but possibly greater attrition (Best et al., Reference Best, Romanowska, Zhou, Wang, Leibovitz, Onno, Jagtap and Bowie2023; Douglas et al., Reference Douglas, Milanovic, Porter and Bowie2020) and others suggesting that home-based training is not effective (Lampit et al., Reference Lampit, Hallock, Valenzuela and Gandy2014).
Even in studies where there were substantial cognitive gains with CCT alone and excellent near transfer to untrained cognitive skills (Edwards et al., Reference Edwards, Wadley, Myers, Roenker, Cissell and Ball2002), concurrent real-world functional gains were limited to improved performance on previously acquired functional skills such as everyday activities (Edwards et al., Reference Edwards, Wadley, Vance, Wood, Roenker and Ball2005) and driving (Ross et al., Reference Ross, Edwards, O’Connor, Ball, Wadley and Vance2016), with no impact on the acquisition of novel daily skills (Willis et al., Reference Willis, Tennstedt, Marsiske, Ball, Elias, Koepke, Morris, Rebok, Unverzagt, Stoddard and Wright2006). Our previous study of in-person training of 6 functional skills in MCI and NC found that over 50% of participants with NC and with MCI improved their completion time by one standard deviation or more across the 6 skills, indexed to NC baseline performance (Czaja et al., Reference Czaja, Kallestrup, Harvey and Pak2020). However, full mastery of all six tasks was more common in the NC participants. As important as the differences in task mastery were the differences in drop-out: 32% of the MCI participants, who had lower levels of task mastery, dropped out before completing training, compared to 13% of the NC participants. Thus, the drop-out rate in participants with MCI in that study, particularly in the combined training intervention, was more than double that of the NC sample despite the substantial training gains seen in those who completed training.
The current report comes from a study of updated skills training software. Specifically, a new version of the FUNSAT™ program was developed and tested in a randomized clinical trial, featuring fully remotely delivered cognitive and functional skills training and targeting the same 6 technology-based activities of daily living, in older adults with NC and MCI. This trial (NCT046779441) has three pre-planned outcomes presented separately: improvements in performance on the training simulations in errors and time to completion (Czaja et al., Reference Czaja, Kallestrup and Harvey2023) was the designated primary paper; real-world transfer of the technology-related skills, assessed with ecological momentary assessment (EMA), is the second (Dowell-Esquivel et al., Reference Dowell-Esquivel, Czaja, Kallestrup, Depp, Saber and Harvey2023); and near transfer to cognitive performance and far transfer to untrained functional capacity measures (Harvey et al., Reference Harvey, Zayas-Bazan, Tibiriçá, Kallestrup and Czaja2023) is the third. The study reported in this paper is a secondary analysis targeted at the earliest possible identification of the characteristics of participants who eventually failed to develop full mastery of the 6-task training program. Identification of participants at high risk for failure to master the tasks could allow for the development of corrective “secondary” interventions to support training and reduce tendencies toward drop-out. It thus seems important to identify individuals who are having challenging experiences in mastering the training tasks as rapidly as possible.
Our goal was to identify differences between participants who achieved full mastery of the training tasks, defined in this study as completion of all subtasks within each of the 6 training tasks with no errors, or in two consecutive attempts with no more than 1 error each. We aimed to compare the attributes of participants who achieved full proficiency in FUNSAT, referred to as graduates, with those of nongraduates. As we were interested in very early detection of failure to graduate, we used individual differences factors to predict mastery, including baseline performance on the Montreal Cognitive Assessment (MOCA; Nasreddine et al., Reference Nasreddine, Phillips, Bédirian, Charbonneau, Whitehead, Collin, Cummings and Chertkow2005) and years of education. We also used several FUNSAT task performance characteristics as potential predictors: the number of errors and time to completion at the baseline assessments, as well as training gains on the first post-baseline training session as training-related predictors.
We had several hypotheses. Given the previous reports that global cognitive status and lower baseline performance predicted more training gains, we expected that lower baseline scores on the FUNSAT and possibly scores on the MOCA would predict greater gains with training. Previous studies have reported that reduced engagement in CCT predicted reduced near transfer of training gains across populations (Harvey et al., Reference Harvey, Balzer and Kotwicki2019), so we hypothesized that reduced training gains on the first FUNSAT training session were candidate predictors of failure to master the full set of tasks.
Methods
Overall study design
This study was a randomized controlled trial carried out at a total of fourteen community centers in South Florida and New York City. These are nonmedical community facilities attended by community residents for a variety of social and personal reasons. All recruitment was done in person, through town hall meetings and word of mouth. After initial screening, participants underwent an orientation and an in-person baseline evaluation on a fixed difficulty assessment of six functional tasks. Participants then engaged in up to 12 weeks of self-administered computer-based training at home. The study received approval from the WCG IRB, and every participant gave their signed informed consent to participate.
Participants
The study included both male and female community members over 60 years of age, without limitations based on race or ethnicity. Subjects were required to be proficient in either English or Spanish, have at least 20/60 vision, be able to read from a computer screen, and operate a touch-screen device. A neuropsychological assessment based on the Jak–Bondi criteria (Jak et al., Reference Jak, Bondi, Delano-Wood, Wierenga, Corey-Bloom, Salmon and Delis2009) was used to determine MCI status of the participants. Based on these criteria, participants were categorized as either having normal cognitive function or falling into one of three MCI subcategories: amnestic: deficits in two or more memory domains but not more than one in a non-memory area; non-amnestic: deficits in two non-memory cognitive areas, yet not more than one in a memory-related domain; multi-domain: deficits in two or more tests in both memory and other cognitive domains. To assess performance, normative standards were applied, and impairment on any individual measure was defined as performance 1.0 or more standard deviations below the normative mean.
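The subtype rules above amount to a small decision procedure. The sketch below is purely illustrative (the function name and the integer-count interface are ours, not part of the study protocol), assuming each test has already been scored as impaired when performance falls 1.0 SD or more below the normative mean:

```python
def classify_cognitive_status(memory_impaired, nonmemory_impaired):
    """Illustrative Jak-Bondi-style classification.

    memory_impaired / nonmemory_impaired: counts of tests scored as
    impaired (>= 1.0 SD below the normative mean) in each domain group.
    """
    if memory_impaired >= 2 and nonmemory_impaired >= 2:
        return "multi-domain MCI"   # deficits in both domain groups
    if memory_impaired >= 2:        # by the check above, <= 1 non-memory deficit
        return "amnestic MCI"
    if nonmemory_impaired >= 2:     # and <= 1 memory deficit
        return "non-amnestic MCI"
    return "normal cognition"
```

Checking the multi-domain case first enforces the "not more than one deficit in the other domain" clause of the amnestic and non-amnestic definitions.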
Individuals were not eligible for the study if they had a MOCA score below 18, had a reading proficiency below a 6th-grade reading level in the language in which they had selected to be assessed and trained, or could not engage in assessments conducted in English or Spanish. Participants were disqualified if they had undergone a similar intervention in the previous year. Medical reasons for exclusion included a previous history of a serious psychiatric condition, except for depression, or histories of past neurological incidents such as seizures, brain tumors, cerebral vascular accidents, or severe traumatic brain injuries resulting in extended periods of unconsciousness.
Cognitive assessments
Data for the performance-based MCI criteria were gathered using cognitive evaluations. Assessments were conducted in the language preferred by the participants, either English or Spanish.
Montreal Cognitive Assessment (MOCA)
The MOCA evaluates cognitive abilities with scores ranging from 0 to 30 and all assessments were conducted by certified bilingual raters.
Reading performance
English-speaking participants’ literacy levels were assessed with the Wide Range Achievement Test (WRAT; Jastak, Reference Jastak1993), 3rd edition. Spanish speakers were assessed with the Woodcock-Munoz Language Survey, 3rd edition (WMLS-III; Woodcock et al., Reference Woodcock, Alvarado, Ruef and Shrank2017).
Wechsler memory scale-revised, logical memory I and II (Anna Thompson story)
The story was read aloud to participants, who were asked for immediate recall. After a 20-minute interval filled with other non-verbal tasks, they were asked for delayed recall of the original story.
Brief assessment of cognition (BAC): app version
The BAC evaluates cognitive domains associated with daily functioning (Keefe et al., Reference Keefe, Goldberg, Harvey, Gold, Poe and Coughenour2004). The application (Atkins et al., Reference Atkins, Tseng, Vaughan, Twamley, Harvey, Patterson, Narasimhan and Keefe2017) provides these assessments via a cloud-connected tablet, simplifying administration, and ensuring consistency.
The cognitive domains assessed include the following:
Verbal Memory; Working Memory; Motor Speed; Verbal Fluency; Symbol Coding; and Executive Functioning.
General procedures
The third generation of the FUNSAT™ program trains the same skills as previous generations. The skills include ATM usage, operating a ticket kiosk, Internet banking, online shopping, refilling a prescription using a telephone voice menu, and managing medication by both comprehending medication labels and organizing medications (Supplemental Figure 1). Each task was presented in a multi-media format including text, voice, and graphic representations. Baseline assessments included a fixed difficulty (Form A) version with 6 tasks, and all subtasks were administered without training or any corrective feedback. The 6 tasks had 3–6 subtasks with sequentially increasing difficulty demands. With each error made, the original instructions would reappear in a pop-up window. If a participant made more than four errors on any one item, the software automatically moved on to the next item. Completion time and errors were collected in real time while participants completed each task, with time measured while the participant was actively engaged in the task. Participants performed the baseline assessments at the research site before training at home; an assistant was present to give encouragement in case a participant stopped engaging with the assessment.
After the baseline assessments, training started. In each training session, lasting up to one hour, participants aimed to make as much progress as possible in mastering the items on individual subtasks. The program delivered training only on subtasks that had not yet been mastered. NC participants only trained with FUNSAT™ to develop normative standards for training gains. MCI participants were randomized into two groups: FUNSAT™ only or FUNSAT™ + CCT. Randomization was stratified by overall geographic area (NY vs. Miami) and sex. The FUNSAT™ program targeted development of proficiency in 6 functional tasks, with participants training 2 hours weekly for up to 12 weeks or until they achieved full mastery of all six tasks.
Those in the combined FUNSAT™ + CCT group underwent an intensive 3-week CCT training (two one-hour sessions weekly) before transitioning to FUNSAT™ for up to 9 weeks. After the 12-week period or upon mastering all tasks, participants were reevaluated using a different version of the fixed difficulty simulation administered at baseline. Follow-up evaluations took place around 30 days after completion or mastery and 3 months after that, with those results reported elsewhere. Participants were compensated $30.00 for each in-person assessment and received a bonus of $15.00 for each task mastered.
Training procedures
FUNSAT™
FUNSAT™ training was delivered through a cloud-based system on a touch-screen device with all training performed at the home of the participant. To connect to the Internet, participants had the option to use a provided hotspot or their own Wi-Fi connection. The training protocol was adaptive, with participants receiving immediate feedback about the first error within each subtask, with additional corrective feedback being given after all subsequent errors. For example, if a participant was attempting the ATM task and entered the wrong pin, a pop-up window would appear stating “Try Again! Your ATM PIN is 1234.” Following a second error, a new pop-up window would appear stating “Try Again! Remember, your PIN is 1234. Please enter 1234.” A third error would prompt the participant “Try Again! Press 1, then press 2, then press 3, and then press 4. Then press ENTER.” And finally, after a fourth error, each key would light up in sequence with a statement telling participants to click the corresponding key as they light up. A subtask was considered mastered if the participant completed the subtask once with no errors or twice consecutively with a maximum of one error on each attempt. Each of the tasks was considered mastered once all subtasks within a specific task were mastered. After any break from training, only the non-mastered subtasks were retrained. Training was considered complete after 12 weeks or when a participant mastered all 6 tasks, at which point the endpoint fixed difficulty assessment was delivered.
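The mastery rule described above can be expressed as a short check over the chronological sequence of per-attempt error counts. This is a sketch of the stated criterion only, not code from the FUNSAT™ system:

```python
def subtask_mastered(attempt_errors):
    """attempt_errors: list of error counts, one per attempt, in order.

    Mastered after a single zero-error attempt, or after two consecutive
    attempts with at most one error each.
    """
    for i, errors in enumerate(attempt_errors):
        if errors == 0:
            return True  # one perfect attempt suffices
        if errors <= 1 and i > 0 and attempt_errors[i - 1] <= 1:
            return True  # two consecutive attempts with <= 1 error each
    return False
```

A task is then mastered once this criterion holds for every one of its subtasks, and only unmastered subtasks are re-presented after a break from training.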
Computerized cognitive training
The BrainHQ™ “Double Decision” training exercise was selected as the CCT for the FUNSAT™+ CCT group. ACTIVE and other trials (Edwards et al., Reference Edwards, Wadley, Vance, Wood, Roenker and Ball2005; Harvey et al., Reference Harvey, Balzer and Kotwicki2019) have shown significant benefits from similar speed of processing training exercises. The exercise included two concurrent tasks where participants had to identify an item that appeared in the middle of the screen while simultaneously locating a specific stimulus among 7 others in the periphery. Participants also had the option to train up to 20% of their sessions on another BrainHQ task named “Hawk Eye” to increase variety in training.
Data analyses
The objective of the study was to contrast the characteristics of participants who successfully mastered all elements of FUNSAT prior to the end of the planned 12-week protocol, referred to as “graduates,” with those who did not, known as “nongraduates.” We compared the frequencies of graduation across overall site (Miami vs New York), cognitive status (MCI vs NC), racial status, and Latinx ethnicity. All analyses were performed with SPSS version 28 (IBM Corporation, 2023). As we expected that poor performance on less challenging tasks would be more informative, we limited our analyses to baseline performance on the three easiest tasks (Ticket Kiosk, ATM, and medication management) as defined by performance of the NC sample in the previous and current studies. Baseline information on completion time and errors from these three tasks was used to predict graduation status. The first analyses simply compared graduates and nongraduates in the total sample on the 6 baseline variables (3 tasks, 2 variables per task), the MOCA, and education. We also examined changes from baseline to the first training session within graduates and nongraduates across all six variables to see if the changes were significant.
We used discriminant function analyses to predict graduation status (yes/no), first entering any of the 6 baseline variables that differed between groups. We also used training gains (time and errors) after one training session as a subsequent predictor. We used a forward entry stepwise procedure with a p value of p < 0.05 for a variable to enter the equation. After conducting the first analysis, we kept any predictive variables and added the time and error variables for training gains from the first training session. After the best predictive variables were identified by the discriminant analysis, we added MOCA scores as a potential predictor. After final selection of predictors, we used ROC curve analysis to examine the area under the curve to quantify prediction of graduation status.
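With a single retained predictor and equal priors, a linear discriminant of the kind described above reduces to a cutoff midway between the two group means (assuming pooled variance). The toy sketch below illustrates that logic only; it is not the SPSS stepwise procedure used in the study, and the data are invented:

```python
def fit_midpoint_cutoff(scores_grad, scores_nongrad):
    """Midpoint between group means: the one-predictor, equal-prior,
    pooled-variance special case of a linear discriminant."""
    mean_grad = sum(scores_grad) / len(scores_grad)
    mean_nongrad = sum(scores_nongrad) / len(scores_nongrad)
    return (mean_grad + mean_nongrad) / 2

def predict_status(baseline_errors, cutoff):
    # Fewer baseline errors than the cutoff -> predicted graduate.
    return "graduate" if baseline_errors < cutoff else "nongraduate"

# Invented example data: graduates make fewer baseline errors.
cutoff = fit_midpoint_cutoff([0, 1, 1, 2], [4, 5, 6, 5])
```

New cases are then classified by which side of the cutoff their baseline error count falls on, which is how a single-variable discriminant function assigns group membership.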
Results
Figure 1 presents the patient flow in the study. As can be seen in the figure, 287 participants signed a consent form and 184 were randomized, with the most common reason for not being randomized being failure to attempt to train. For randomized participants in the two cognitively defined subject groups, MCI, and NC, drop-out from training was modest. Three of 75 NC participants (4%) did not complete training, with drop-out for MCI participants in skills training only at 6 out of 51 participants (11%) and for combined training at 4 out of 52 trainees (8%).
Table 1 presents the demographic information on the participants separated by MCI status, including graduation. MCI participants had significantly less education and lower MOCA scores than NC participants but did not differ in age. There were no site, race, or training language differences in MCI status. There were slightly more Latinx participants and slightly more male participants in the MCI group than in the NC group. Chi-square tests found that MCI status was significantly associated with lower rates of graduation from all training tasks, but ethnicity, race, location, and training language were not, all χ2(1) < 0.46, all p > .50. As we previously reported (Dowell-Esquivel et al., Reference Dowell-Esquivel, Czaja, Kallestrup, Depp, Saber and Harvey2023; Czaja et al., Reference Czaja, Kallestrup, Harvey and Pak2020), there were no site (NYC vs. Miami) differences in age, education, MOCA score, sex, or racial status due to our efforts to collect balanced samples. More participants reported Latinx ethnicity (66%) and trained in Spanish at the Miami site (54%) than in New York (41% and 28%), χ2(1) > 12.05, p < .001.
Table 2 presents the scores for the 6 baseline completion time and error variables across graduation status, education and MOCA scores, as well as training gains from baseline to the first training session. We used t-tests to compare the graduates and nongraduates on the baseline task performance variables, MOCA scores, and education. As seen in the table, nongraduates made more baseline errors and had slower baseline performance, lower MOCA scores, and less education than the graduates. Effect sizes for the differences were all d = 0.84 or larger. As the variance estimates appeared to be potentially unbalanced, we performed F tests for homogeneity of variance; only one (ATM baseline errors) was significant. When we used the Mann-Whitney U test to confirm the results of the t-tests, all 6 tests were significant, all U > 455, all z > 4.51, all p < .001. We performed similar analyses (data not shown) for the differences between graduates and nongraduates within the MCI participants alone. All 6 t-tests were statistically significant, with graduates performing better (all t > 2.41, all p < .022).
At the bottom of the table, we present change scores from baseline to the first training session. For graduates, all changes in completion time and errors were significant at p < 0.001, with effect sizes of d = 0.33 or larger. For the nongraduates, two of the variables did not change significantly from baseline to the first training session: time and errors on the medication management test. The effect sizes for group differences at baseline were uniformly larger, across all 6 measures, than the effect sizes for trial 1 training changes. Thus, all 6 baseline performance (time and error) variables and all trial 1 training gains were considered for use in the multivariate analyses.
Table 3 presents the results of the discriminant function analyses with the 6 baseline time and error variables. As can be seen at the top of the table, only baseline errors on the Ticket Kiosk Task entered the discriminant function, p < .001. This analysis yielded correct overall classification based on graduation status of 85%, while correctly identifying 94% of the graduates.
When we entered the training gains after 1 training session as a predictor of graduation status, including both changes in errors and time to completion, none of the variables entered the discriminant function, all F < 1.84, all p > .18.
In our final discriminant analysis, presented at the bottom of Table 3, we added MOCA scores to the original baseline variables as an additional predictor of graduation status. Interestingly, MOCA scores entered the analysis at a highly significant level but did not displace Ticket Kiosk baseline errors as the primary discriminator. Classification accuracy was improved by 2% overall, with detection accuracy for nongraduates increased by 2% and detection accuracy for graduates unaffected.
Figure 2 presents the ROC curve analysis for graduation status. Using Ticket Kiosk baseline errors as the predictor, the area under the curve (AUC) was 0.83, with a standard error of 0.042. The significance test yielded p < .001, and the 95% confidence interval for the AUC was 0.75–0.92.
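The AUC reported above has a direct probabilistic reading: it is the probability that a randomly chosen nongraduate made more baseline errors than a randomly chosen graduate. A minimal rank-based computation (the Mann-Whitney formulation; illustrative data only, not the study dataset) looks like this:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case (here, a nongraduate)
    outscores a negative case (a graduate); ties count as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

With perfect separation the value is 1.0 and 0.5 corresponds to chance, so the observed 0.83 sits well above chance-level discrimination.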
Discussion
In a well-characterized sample of participants with NC and MCI, nearly all NC participants and the majority of those with MCI fully mastered the 6 functional skills training tasks. Prediction of those who did not manifest full mastery suggested that errors on the very first, and easiest, of the fixed difficulty pre-training simulations, the Ticket Kiosk Task, were a substantial predictor of eventual mastery. Further, errors on the task, not completion time nor training gains after initiation of training, were the best predictor of eventual mastery. Adding MOCA scores as a predictor did not change the proportion of graduating cases identified.
With the high levels of graduation, the incremental prediction was not numerically substantial, because if everyone were designated a graduate, 78% of the classifications would be correct. The improvement was nonetheless statistically significant across two analysis strategies. Moreover, the finding that error scores on the very first task are the best predictor provides pragmatic information about how to rapidly identify likely failure to master the tasks.
Given that MCI and more severe cognitive challenges lead to disability, the availability of training that can lead to mastery of functionally relevant technology-related tasks may be a treatment advance. In our previous study with in-person training (Czaja et al., Reference Czaja, Kallestrup, Harvey and Pak2020), we found that drop-out from training, although minimal compared to pharmacological interventions, can handicap global training outcomes. Drop-out rates in this study for participants with MCI were less than half those seen in the previous intervention. In the current fully remote version of the training simulations, identifying challenges to completion as early as possible could allow the developers to modify the tasks to increase training efficiency, further reduce drop-out, and attenuate experiences of frustration on the part of potential participants and their families. Given the high levels of mastery among MCI participants, we see no reason that participants with slightly more severe impairments could not receive some benefit from training.
The origin of high early error rates and eventual failure to master all tasks cannot be clearly identified from these data. Poor motivation seems unlikely as a cause, because the participants who were identified as not mastering all tasks continued training until the end of the study. It is possible that reduced experience with technology-related tasks was associated with high error rates on the first simulation. It is also possible that the characteristics of the fixed difficulty assessment, where the task challenges are generally hierarchical in difficulty, may lead to participants “getting behind” and never catching up. Also, some requirements for successful performance of the task, such as the need to orient to the touch-screen and correctly execute responses, are not trained by the current version of the software. Other fixed difficulty functional capacity assessments, such as the Virtual Reality Functional Capacity Assessment Task (VRFCAT; Keefe et al., Reference Keefe, Davis, Atkins, Vaughan, Patterson, Narasimhan and Harvey2016), have a formal orientation training program that precedes the task itself. However, the VRFCAT does not have a remote delivery option, so combining remote delivery with a formal training period would be the optimal development. In the FUNSAT fixed difficulty simulations, participants have only 4 opportunities to complete each item before it is designated as failed and a progression takes place.
It is worth noting that the FUNSAT is fully modular, and any combination of training simulations can be administered to participants. Since errors on all the tasks were greater in nongraduates, if a protocol were targeting only ATM banking, for instance, high levels of errors on that simulation would also discriminate eventual graduates from nongraduates.
The limitations of the study include the inability to subdivide participants with MCI by Jak–Bondi subtype because we did not stratify at the time of selection. Racial and ethnic status was not balanced across the MCI subgroups, and fewer participants overall trained in Spanish than in English. Failing to achieve mastery of all tasks does not mean that training gains were not substantial in general (see Table 2) or that real-world transfer did not occur in that subset of participants.
Although training gains in the FUNSAT across simulations were previously reported to be similar across different racial, ethnic, language, educational, and baseline cognitive factors (Dowell-Esquivel et al., Reference Dowell-Esquivel, Czaja, Kallestrup, Depp, Saber and Harvey2023), there was still a subset of participants, generally limited to those with MCI, who did not fully master the training tasks. The fact that these participants can be identified very early on, through error rates at baseline rather than reduced early training gains (which require completing the full baseline assessment before training starts), suggests that targeting these participants with task-based interventions may be possible. Formal training for orientation to the task demands, possible alternative assessment strategies, and modification of training strategies, including more opportunities to pass easier items, smaller incremental training units, or more feedback, might reduce the learning challenges. Given the general absence of previous successful computerized skills training interventions targeting this population, a 65% success rate for full mastery with training of 6 technology-related functional skills for participants with MCI seems substantial. The importance of these training gains is underscored by the results of the previous papers from this study showing the following: (1) greater proportionate gains on training tasks for MCI participants than NC (Czaja et al., Reference Czaja, Kallestrup and Harvey2023); (2) real-world transfer of performance of the trained functional skills to the real-world environment, in both MCI and NC samples (Dowell-Esquivel et al., Reference Dowell-Esquivel, Czaja, Kallestrup, Depp, Saber and Harvey2023); and (3) training gains in cognition and functional capacity that were statistically significant, with effect sizes greater than d = 0.75 for MCI participants (Harvey et al., Reference Harvey, Zayas-Bazan, Tibiriçá, Kallestrup and Czaja2023). The fact that drop-out among MCI participants was reduced by 30% through adjustments in training delivery and standards for mastery suggests that eliminating failure to master all tasks through alterations in training delivery is not an unrealistic goal.
Conflicts of interest
Peter Kallestrup is CEO of i-Function, Inc. Sara J. Czaja is Co-Chief Scientific Officer of i-Function, Inc. Courtney Dowell-Esquivel, Justin Macchiarelli, and Alejandro Martinez have no competing interests. Philip D. Harvey is Co-Chief Scientific Officer of i-Function, Inc. He has other interests unrelated to the content of this paper: consulting fees or travel reimbursements from Alkermes, Boehringer Ingelheim, Karuna Therapeutics, Merck Pharma, Minerva Neurosciences, and Sunovion Pharma in the past year. He receives royalties from the Brief Assessment of Cognition in Schizophrenia (owned by WCG Endpoint Solutions, Inc. and contained in the MCCB). He is a scientific consultant to EMA Wellness, Inc.
Source of funding
Funded by NIA Grant 2 R44 AG057238-03A1A; Principal Investigator: Peter Kallestrup.
Description of author(s)’ roles
PD Harvey: Designed and supervised the study. Analyzed data and wrote and edited the manuscript.
P Kallestrup: Designed and supervised the study. Wrote and edited the manuscript.
J Macchiarelli: Collated and organized data, wrote first draft of the manuscript.
A Martinez: Collated and organized data, wrote first draft of the manuscript.
C Dowell-Esquivel: Conceptualized the specific substudy, collated data, and wrote first draft of the manuscript.
S Czaja: Designed and supervised the study. Wrote and edited the manuscript.
Acknowledgements
The study was funded by the National Institute on Aging, which had no input into the content. All people who worked on the paper are listed as authors and their roles are described.
Supplementary material
The supplementary material for this article can be found at https://doi.org/10.1017/S1041610224000115