Computer Vision (CV) and SLAM are two different topics, but they can interact through what is called Visual SLAM (V-SLAM).
SLAM stands for Simultaneous Localization and Mapping, a technique used for autonomous robot navigation in unknown, GPS-denied environments.
The terms Computer Vision and Image Processing can be confusing, and you may wonder what the difference between them is. The goal of computer vision is to extract information from images, such as distance, depth, face detection, or the speed and direction of the camera. To achieve these goals, you first have to "process" and prepare your images so they are ready for the computer vision algorithms to run. For example, one technique to get depth from images is to use two images of the same object taken from different points of view. The depth algorithm requires that both images be aligned, so if our cameras are not aligned as the CV algorithm needs, we "process" the images by transforming and rotating them until they are aligned as required by the CV algorithm.
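To make the stereo-depth example concrete, here is a minimal sketch using OpenCV, assuming the left and right images are already rectified (aligned); the file names, focal length, and baseline below are placeholder values, not taken from the article.

```python
# Minimal stereo-depth sketch: assumes rectified images and OpenCV installed.
import cv2
import numpy as np

# Load the two views of the same scene as grayscale images (hypothetical paths).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching only works on aligned (rectified) pairs; if the cameras were not
# aligned, the "image processing" step (e.g. cv2.stereoRectify + cv2.remap)
# would come first.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth is inversely proportional to disparity: depth = focal_length * baseline / disparity.
focal_length_px = 700.0   # assumed focal length in pixels
baseline_m = 0.12         # assumed distance between the two cameras in metres
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_length_px * baseline_m / disparity[valid]
```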
SLAM is an algorithm for mapping the robot's unknown environment. Mapping needs a depth or range-finding sensor, and many sensors can be used for this, such as laser scanners and depth cameras. This is where computer vision and SLAM can interact with each other: through depth cameras.
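As a rough illustration of how a depth camera feeds a map, the sketch below back-projects a depth image into a 3D point cloud, which is the kind of range data a SLAM algorithm fuses into its map. The intrinsics (fx, fy, cx, cy) and the synthetic depth image are assumed values for illustration, not from any specific sensor.

```python
# Minimal sketch: convert a dense depth image (metres) into 3D points
# using an assumed pinhole camera model.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert an HxW depth image (metres) into an Nx3 array of 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example with a synthetic flat scene 1.5 m away (illustrative only).
depth = np.full((480, 640), 1.5, dtype=np.float32)
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # one 3D point per valid pixel, in the camera frame
```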