performs the visual analysis (i.e., updates the Kalman snake), computes the velocity screw according to Eq. (16), and sends the commands to act. Finally, a task vis displays the status of the system. A hardware-oriented communication level ensures correct timing with the physical devices [4].

In the following, distances are expressed in mm and angles in rad. Infinite impulse response digital filters were used to smooth the visual measurements. The filters and the feedback gains of Eq. (16) are tuned experimentally. A generic experiment with the robot system begins by bringing the robot to the desired configuration. Then an active contour is initialized (using the computer mouse) in order to track the image appearance of the target object. After recording the control points of the goal contour, the robot is moved to the initial configuration, while the current active contour keeps tracking the object appearance. Finally, the control command is issued to move the robot according to Eq. (16). The matrix B(x), whose inverse is used in the control law, is updated at each period T_ctr.

Two experiments are reported to evaluate the performance of the proposed hybrid visual control approach. Figs. 5(a) and (b) show, respectively, the initial and goal image appearances of the target object used in the experiments (a music tape). We consider the case of a static object (i.e., the object velocity is zero) and of regulation (i.e., the desired state is constant, ẋ^(d) = 0). The vector of saturation thresholds has been fixed to [0.5, 0.5, 0.1, 0.1, 0.1, 15]^T.
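To summarize the steps above, the following minimal Python sketch shows the structure of one control period: smooth the visual measurement with an IIR filter, recompute B(x) and its inverse, evaluate the velocity screw, and saturate it component-wise. The proportional form v = B(x)^{-1} K (x^(d) − x) is only a stand-in for Eq. (16), which is not reproduced here; the gain matrix K, the smoothing factor alpha, and the function names are illustrative assumptions, while the saturation thresholds are the values reported above.

```python
import numpy as np

K = np.diag([0.5] * 6)                              # hypothetical feedback gains (tuned experimentally in the paper)
V_MAX = np.array([0.5, 0.5, 0.1, 0.1, 0.1, 15.0])   # saturation thresholds reported in the text

def iir_smooth(x_filt, x_meas, alpha=0.7):
    """First-order IIR low-pass smoothing of the visual measurements (alpha is illustrative)."""
    return alpha * x_filt + (1.0 - alpha) * x_meas

def control_step(x_filt, x_meas, x_des, B):
    """One control period T_ctr: filter the state estimate, update B(x) and its
    inverse, compute the velocity screw, and saturate it before sending it to the robot."""
    x_filt = iir_smooth(x_filt, x_meas)
    v = np.linalg.inv(B(x_filt)) @ (K @ (x_des - x_filt))   # proportional stand-in for Eq. (16)
    v = np.clip(v, -V_MAX, V_MAX)                           # component-wise saturation
    return x_filt, v
```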
In the first trial, the eye-in-hand robotic system executes a relative positioning task with the following values of the state variables. The initial state is x_0 = [2.31, −0.41, −1.52, 0.41, −0.06, 231]^T, and the desired state is x^(d) = [−0.25, −0.09, −1.53, 0.33, −0.05, 209]^T. Plots of the coordinates of the surface's centroid and of the elements of the projection matrix are shown in Figs. 6(a) and (b); Figs. 6(c) and (d) show the 3D orientation and distance parameters; Figs. 6(e) and (f) show the components of the requested camera velocity twist. The desired relative values are also indicated. In the second experiment, whose results are shown in Fig. 7, another typical positioning task is performed. In this experiment there is a larger mismatch between the initial and desired configurations than in the first one. The initial state is x_0 = [0.47, 1.01, −1.82, −0.16, 0.07, 334]^T, and the desired state is x^(d) = [−2.23, 0.07, −1.52, −0.12, 0.02, 253]^T.

The experiments show the convergence and the robustness of the proposed hybrid visual control scheme. The centroid coordinates and the projection matrix converge well in both cases. However, the proposed control scheme exhibits some practical limits, which were found out during the implementation of the approach:

- Some problems concern the 3D parameters. Large initial errors in orientation are not compensated by the control system, and a steady-state error is observed for the distance parameter. This is due to the fact that these quantities are estimated from 2D information, since they are not directly measurable with the camera sensor.
- Very large mismatches between the initial and desired object appearances in the image plane can generate unfeasible 3D motions of the eye-in-hand robotic system, since joint limits are reached during visual servoing.
- The velocity commands do not exactly converge to zero. This is due to the resolution of the robot controller: the commands are stored in a 16-bit register, and velocities smaller than a certain threshold are truncated and do not produce any motion of the robot (a short numerical sketch of this effect is given at the end of the section).
- Finally, performance limits are also due to the high value of the control loop period T_ctr, which is determined by the computational capacity of the processor implementing the visual feedback.

5. Conclusions and future work

The problem of camera–object relative positioning is addressed from the point of view of nonlinear control theory. We take into account active vision requirements by reducing the size of the visual representation through an affine model of interaction.
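As an illustration of the command-truncation limit listed among the practical issues above, the following minimal sketch shows how velocity components below the controller's integer resolution map exactly to zero. The scale factor between velocity units and register counts, and the sample command, are illustrative assumptions, not the actual controller specification.

```python
import numpy as np

COUNTS_PER_UNIT = 100.0   # hypothetical scaling from velocity units to register counts

def quantize_command(v):
    """Truncate a velocity command to the controller's 16-bit integer resolution;
    components smaller than one count produce no motion of the robot."""
    counts = np.trunc(v * COUNTS_PER_UNIT).astype(np.int16)   # 16-bit command register
    return counts / COUNTS_PER_UNIT

v = np.array([0.004, -0.012, 0.0006, 0.02, -0.0003, 0.3])
print(quantize_command(v))   # the smallest components map exactly to zero
```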