Abstract

This paper is concerned with the performance evaluation of the H∞ filter for the 2-D visual servoing of a moving target. The efficacy of the H∞ filter is validated experimentally in comparison with the Kalman filter, using an eye-in-hand coordinated robot manipulator system. The H∞ filter can provide a powerful tool for solving vision-based control problems in robotics, just as the Kalman filter can. © 1998 Elsevier Science Ltd. All rights reserved.

Keywords: Robust estimation; visual motion; H∞ control; robust control; experimental evaluation

1. Introduction

By integrating control and vision, a robotic system can move appropriately in a dynamically changing workspace. Tracking and grasping of a moving object by a robot manipulator is a typical example of this category of problem in real situations. The combination of robot control with computer vision could become extremely important when dealing with a robotic system working in uncertain and dynamic environments. Recent research efforts in this direction are well collected in (Hutchinson et al., 1996) (see also the references therein).

In visual servoing in particular, a powerful estimation algorithm taken from systems and control theory plays an important role, since dynamic image scenes have to be processed.
The Kalman filter is a popular algorithm, and it has been frequently utilized, not only in visual servoing (Hutchinson et al., 1996) but also in the implementation of computer vision (Matthies et al., 1989). At present, the Kalman filter is accepted as a basic, standard tool for active/dynamic vision (Blake and Yuille, 1992).

The combination of Linear Quadratic (LQ) control with the Kalman filter gives the well-known Linear Quadratic Gaussian (LQG) theory, which was mainly developed during the 1960s and 1970s. Since the 1980s, however, a major shift has occurred from LQG theory to H∞ theory (Doyle et al., 1989). The H∞ theory provides the capability to handle model uncertainties in a more practical way. While the LQG theory considers the effects of uncertainty in a stochastic framework, the H∞ theory treats them in a functional-analytic framework. Further, it gives a certain min-max optimal solution to deal with the disturbances caused by uncertainties (Basar and Bernhard, 1991). It has been shown that the H∞ theory can be regarded as a natural generalization of the LQG theory (Doyle et al., 1989).

Recently, the H∞ theory has been successfully applied to visual feedback control (Ogura et al., 1994), where the emphasis was on the control aspect. Although the corresponding estimation theory in an H∞ setting has been developed (Nagpal and Khargonekar, 1991; Shaked and Theodor, 1992), not much work has been done on the use of the H∞ filter in robotics and/or vision research (Fujita et al., 1993, 1995). The superiority of the H∞ filter over existing estimation algorithms is theoretically convincing in some respects, since model uncertainties can be handled more adequately (Nagpal and Khargonekar, 1991; Shaked and Theodor, 1992). Hence, an experimental validation of its efficacy presents quite a challenge in the present situation.

The purpose of this paper is to validate the performance of the H∞ filter experimentally, using an eye-in-hand coordinated robot manipulator system.
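For concreteness, the min-max character of H∞ estimation can be stated in a standard discrete-time form. The formulation below is a generic one supplied for the reader, not an equation quoted from this paper; the symbols A, B, C, L, P₀ and the horizon N are illustrative.

```latex
% Generic discrete-time H-infinity estimation criterion (illustrative):
% for  x_{k+1} = A x_k + B w_k,  y_k = C x_k + v_k,  z_k = L x_k,
% an estimate \hat z_k achieving attenuation level \gamma > 0 satisfies
\sup_{(x_0,\, w,\, v) \neq 0}
  \frac{\sum_{k=0}^{N} \lVert z_k - \hat z_k \rVert^2}
       {\lVert x_0 - \hat x_0 \rVert^2_{P_0^{-1}}
        + \sum_{k=0}^{N} \lVert w_k \rVert^2
        + \sum_{k=0}^{N} \lVert v_k \rVert^2}
  < \gamma^2
```

In contrast to the Kalman filter, which minimizes the expected estimation error under Gaussian noise assumptions, this criterion bounds the worst-case energy gain from disturbances to estimation error; as γ → ∞, the H∞ filter recursion reduces to the Kalman filter.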
An empirical comparison is made, using the H∞ filter and the Kalman filter, for an estimation problem arising in visual servoing. The experimental results reveal the potential efficacy of the H∞ filter.
As in an important earlier paper (Papanikolopoulos et al., 1993), two-dimensional (2-D) visual servoing of a moving target is considered in the experiments. The development of the H∞ filter will lead to another powerful tool for the solution of vision-based control problems in robotics, in the same way as the Kalman filter is already widely accepted.

The rest of the paper is organized as follows. In Section 2, the configuration for the experiments with visual servoing is introduced. The modeling of the visual servoing system is presented in Section 3. The control strategies and the estimation problem involved in applying the H∞ filter are described in Section 4. Section 5 presents the algorithm of the H∞ filter. In Section 6, experimental results are presented, and the efficacy of the H∞ filter is discussed in comparison with the Kalman filter. Finally, in Section 7, the paper is summarized.

2. Eye-in-hand systems

A variety of configurations for experiments involving visual servoing have been proposed, depending on the design goals or the experimental setups of the robot and the camera concerned. This paper considers a standard eye-in-hand configuration, where the manipulator carries the camera on its end-effector. The goal is then to control the robot motion such that the camera tracks a moving target.

For the performance evaluation of the H∞ filter in visual servoing, the eye-in-hand coordinated robot manipulator system depicted in Fig. 1 was used in the experiments. This apparatus is standard, and consists of the following items:

- A 6-DOF industrial manipulator: MOTOMAN-K3S (YASKAWA).

220 A. Kawabata, M. Fujita / Control En

- A monochromatic CCD camera: TM-7EX (VIDTEX).
- A real-time image-processing unit using transputers: TRP-IMG (Concurrent Systems).
- A real-time control and estimation unit using transputers: TCS (Concurrent Systems).
- A host PC for real-time programming: PC9801FA (NEC).
- A host WS for off-line numerical computations: S-4/CL (Fujitsu).

All the real-time software was written using C-TOOLSET (INMOS). The off-line numerical computations for the filter design were performed on the host workstation using MATLAB (MathWorks) and MATRIXx (Integrated Systems).

Strictly speaking, in an electrically driven robot manipulator, the physical inputs are the voltages applied to the actuators, or the corresponding signals to the drivers. In the experimental system, the driver units have been appropriately modified using an ASISS control board (Tecno). Hence, the velocity signal for each joint can be directly applied, with the help of the low-level servo. Simultaneously, in the experiments, another robot moves the target, as in Fig. 1. Since the moving target has distinctive feature points on it (black on white), the necessary information can easily be computed from the dynamic images, using an elementary image-processing technique.

3. Modeling of visual servoing