Adults who stutter (AWS) frequently engage in linguistic monitoring to anticipate and manage stuttering. This monitoring may reallocate cognitive resources, with potential consequences for language production and memory. We investigated whether AWS’ heightened monitoring during production imposes dual-task costs that limit the memory-encoding benefits of speaking, or whether it enhances memory through deeper conceptual engagement. Thirty-two AWS and sixty-four adults who do not stutter (AWNS) completed a referential communication task in which they described or identified pictures with an experimenter. To simulate the linguistic monitoring of AWS, half of the AWNS (the AWNS-SA group) performed a simultaneous sound-avoidance task in which words beginning with certain phonemes were prohibited. After the communication task, participants completed a recognition memory test for the past referents. Results showed that AWS performed more similarly to AWNS than to AWNS-SA in both language production and memory, although AWS’ memory declined on trials in which stuttering occurred. These findings suggest that linguistic monitoring in AWS does not impose substantial dual-task costs overall, but that moments of stuttering can transiently disrupt memory encoding. Together, these results highlight the adaptive nature of linguistic monitoring in AWS and contribute to a broader understanding of how such monitoring supports language production and memory in both AWS and AWNS.