This paper aims to contribute to a sub-symbolic, feedback-based “theory of robotic grasping” in which no full geometrical knowledge of the shape is assumed. We describe experimental results on grasping generic 2D shapes without traditional geometrical processing. The grasping algorithms are used in conjunction with a vision system, and a robot manipulator with a three-fingered gripper is used to grasp several different shapes. The algorithms are run on the shape as it appears on the computer screen (i.e. directly on the output of the vision system). Simulated gripper fingers equipped with virtual sensors are configured and positioned on the screen; their sensor inputs are controlled by moving the fingers relative to the image until an equilibrium is reached among the control systems involved.
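The feedback loop outlined above can be illustrated with a minimal sketch; this is a hypothetical illustration under assumed conditions, not the implementation described in the paper. It assumes a binary image of the shape, one virtual contact sensor per simulated finger, and fingers that advance along fixed approach directions until every sensor reports contact (taken here as the equilibrium condition); the finger count, step size, and sensor model are all assumptions.

```python
import numpy as np

def virtual_sensor(image, pos):
    """Return True if the virtual sensor at pos lies on the shape in the binary image."""
    r, c = int(round(pos[1])), int(round(pos[0]))
    if 0 <= r < image.shape[0] and 0 <= c < image.shape[1]:
        return image[r, c] > 0
    return False

def grasp_2d_shape(image, start_positions, directions, step=1.0, max_iters=1000):
    """Move each simulated finger along its approach direction until its virtual
    sensor detects the shape; stop when all fingers are in contact (equilibrium)."""
    fingers = [np.asarray(p, dtype=float) for p in start_positions]
    dirs = [np.asarray(d, dtype=float) / np.linalg.norm(d) for d in directions]
    for _ in range(max_iters):
        in_contact = [virtual_sensor(image, f) for f in fingers]
        if all(in_contact):            # equilibrium: every finger touches the shape
            return fingers
        for i, f in enumerate(fingers):
            if not in_contact[i]:      # only non-contacting fingers keep closing in
                fingers[i] = f + step * dirs[i]
    return None                        # no equilibrium reached within the iteration budget

# Usage: three fingers approach a circular blob from three sides of the image.
if __name__ == "__main__":
    img = np.zeros((100, 100), dtype=np.uint8)
    yy, xx = np.ogrid[:100, :100]
    img[(xx - 50) ** 2 + (yy - 50) ** 2 <= 20 ** 2] = 1   # filled disc as the test shape
    starts = [(50, 5), (5, 80), (95, 80)]
    dirs = [(0, 1), (1, -0.6), (-1, -0.6)]
    print(grasp_2d_shape(img, starts, dirs))
```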