In practical applications, many robots rely on embedded devices with limited computing capability. These constraints, together with occlusions caused by moving objects, often degrade the positioning accuracy and efficiency of existing dynamic SLAM algorithms. This paper introduces a lightweight dynamic SLAM algorithm designed primarily to mitigate the interference caused by moving-object occlusions. The proposed approach combines a deep-learning object detector with a Kalman filter, providing per-frame prior information about dynamic objects to the SLAM pipeline. Using geometric techniques such as RANSAC and the epipolar constraint, the method filters out dynamic feature points and estimates the camera pose from static feature points only, which improves the SLAM algorithm's robustness in dynamic environments. Experiments on the public TUM dataset show that the approach improves positioning accuracy by approximately 54% and running speed by 75.47% in dynamic scenes.
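As a rough illustration of the epipolar-constraint filtering summarized above, the following sketch rejects matched feature points that lie far from their epipolar lines or fall inside detector-predicted dynamic boxes. It is not the paper's implementation: the OpenCV-based pipeline, the function name `filter_dynamic_matches`, the `dynamic_boxes` input, and the distance threshold are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): keep only matches that are
# consistent with a RANSAC-estimated fundamental matrix and lie outside the
# bounding boxes of objects flagged as dynamic by a detector + Kalman filter.
import cv2
import numpy as np

def filter_dynamic_matches(pts_prev, pts_curr, dynamic_boxes, dist_thresh=1.0):
    """Return a boolean mask of matches treated as static.

    pts_prev, pts_curr: (N, 2) arrays of matched keypoint coordinates.
    dynamic_boxes: list of (x1, y1, x2, y2) boxes predicted for moving objects.
    dist_thresh: max point-to-epipolar-line distance (pixels) to keep a match.
    """
    pts_prev = np.asarray(pts_prev, dtype=np.float64)
    pts_curr = np.asarray(pts_curr, dtype=np.float64)

    # RANSAC fundamental-matrix estimate; RANSAC outliers already hint at motion.
    F, inlier_mask = cv2.findFundamentalMat(pts_prev, pts_curr,
                                            cv2.FM_RANSAC, 1.0, 0.99)
    keep = (inlier_mask.ravel().astype(bool)
            if inlier_mask is not None else np.ones(len(pts_prev), bool))

    if F is not None and F.shape == (3, 3):
        # Epipolar lines in the current image induced by previous-image points.
        lines = cv2.computeCorrespondEpilines(
            pts_prev.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
        a, b, c = lines[:, 0], lines[:, 1], lines[:, 2]
        # Distance of each current point to its epipolar line.
        dist = np.abs(a * pts_curr[:, 0] + b * pts_curr[:, 1] + c) / np.sqrt(a**2 + b**2)
        keep &= dist < dist_thresh

    # Drop points inside boxes marked dynamic by the detector / Kalman filter.
    for (x1, y1, x2, y2) in dynamic_boxes:
        inside = ((pts_curr[:, 0] >= x1) & (pts_curr[:, 0] <= x2) &
                  (pts_curr[:, 1] >= y1) & (pts_curr[:, 1] <= y2))
        keep &= ~inside

    return keep
```

In such a scheme, only the matches surviving this mask would be passed to pose estimation, so moving objects contribute neither to the fundamental-matrix fit nor to the final camera pose.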