Notice that it holds that $\dot{\boldsymbol{y}} = B_y(\boldsymbol{x})\,\boldsymbol{V}_c$, where, using Eqs. (3) and (10), the elements of $T$ are expressed as functions of the state variables as
$$
\begin{aligned}
t_{11} &= \tfrac{1}{2}\big[(c_\theta + 1)\,c_{x_3} + (c_\theta - 1)\,c_{2\gamma - x_3}\big],\\
t_{12} &= \tfrac{1}{2}\big[(c_\theta - 1)\,s_{2\gamma - x_3} - (c_\theta + 1)\,s_{x_3}\big],\\
t_{21} &= \tfrac{1}{2}\big[(c_\theta - 1)\,s_{2\gamma - x_3} + (c_\theta + 1)\,s_{x_3}\big],\\
t_{22} &= \tfrac{1}{2}\big[(c_\theta + 1)\,c_{x_3} - (c_\theta - 1)\,c_{2\gamma - x_3}\big],
\end{aligned}
\qquad (13)
$$
with $\gamma(\boldsymbol{x}) = \operatorname{atan2}(-q, -p)$ and $\theta$ the slant angle of the object plane introduced in Eq. (14) below.

In the following we show that the proposed hybrid visual representation of the camera–object interaction model overcomes the drawback of perspective linearization, i.e. pose ambiguity. In fact, the states of the system in Eq. (7) corresponding to ambiguous poses can be distinguished from one another. Consider two ambiguous poses $A$ and $A'$ of the target object, i.e. two poses that are indistinguishable under the linearized perspective model, as shown in Fig. 2. Let $\psi_A$, $\theta_A$, and $\varphi_A$ be the angles defining the orientation corresponding to pose $A$. From Eq. (2), the corresponding angles for pose $A'$ are $\psi_{A'} = \psi_A + \pi$, $\theta_{A'} = \theta_A$, and $\varphi_{A'} = \varphi_A + \pi$. The parameters $p$ and $q$ are related to the orientation of the object plane (that is, the direction of the $\boldsymbol{k}_o$ axis) with respect to the camera by
$$
p = -\tan\theta \cos\psi, \qquad q = -\tan\theta \sin\psi. \qquad (14)
$$
It follows that
$$
p_{A'} = -\tan\theta_{A'} \cos\psi_{A'} = -p_A, \qquad
q_{A'} = -\tan\theta_{A'} \sin\psi_{A'} = -q_A. \qquad (15)
$$
We conclude that the states of the system of Eq. (7) corresponding to the ambiguous poses $A$ and $A'$ can be distinguished, since the state variables $p$ and $q$ differ; in particular, they are opposite in sign.
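As a quick illustration of Eqs. (14) and (15), the following sketch (Python with NumPy; not part of the paper's Matlab implementation, and with arbitrary example angles) numerically checks that the two ambiguous orientations map to opposite values of $p$ and $q$:

```python
import numpy as np

def pq_from_angles(theta, psi):
    """Orientation parameters of the object plane, Eq. (14):
    p = -tan(theta) cos(psi), q = -tan(theta) sin(psi)."""
    return -np.tan(theta) * np.cos(psi), -np.tan(theta) * np.sin(psi)

# Pose A (arbitrary example values) and its ambiguous counterpart A',
# related by psi' = psi + pi, theta' = theta, phi' = phi + pi.
theta_A, psi_A = 0.6, 0.9
p_A, q_A = pq_from_angles(theta_A, psi_A)
p_A1, q_A1 = pq_from_angles(theta_A, psi_A + np.pi)

# Eq. (15): the ambiguous poses give opposite signs of p and q,
# so the corresponding states of system (7) are distinguishable.
assert np.isclose(p_A1, -p_A) and np.isclose(q_A1, -q_A)
print(p_A, q_A, p_A1, q_A1)
```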
3.2. Robust control

We introduce the following definitions. Let $\boldsymbol{x}^{(d)}$ and $\hat{\boldsymbol{x}} \in \mathbb{R}^n$ denote the desired and the estimated state vectors, respectively. The vectors $\tilde{\boldsymbol{x}} = \boldsymbol{x} - \boldsymbol{x}^{(d)}$ and $\hat{\tilde{\boldsymbol{x}}} = \hat{\boldsymbol{x}} - \boldsymbol{x}^{(d)} \in \mathbb{R}^n$ denote the real and the estimated error vectors, respectively. $C(\boldsymbol{x}, \boldsymbol{\varepsilon}) = \{\boldsymbol{x} \in \mathbb{R}^n : |x_i| \le \varepsilon_i,\ i = 1, \ldots, n\}$ is a closed hypercube. $A \setminus B$ denotes the subtraction between sets $A$ and $B$. $\operatorname{diag}(\boldsymbol{x})$ is the diagonal matrix with the elements of $\boldsymbol{x} \in \mathbb{R}^n$ on its diagonal. $\operatorname{sat}(\boldsymbol{x}/\boldsymbol{\Phi}) = [\operatorname{sat}(x_1/\Phi_1), \ldots, \operatorname{sat}(x_n/\Phi_n)]^{\mathrm T}$ is a vector of saturation functions, where
$$
\operatorname{sat}\!\left(\frac{x_i}{\Phi_i}\right) =
\begin{cases}
\Phi_i \operatorname{sgn}(x_i) & \text{if } |x_i| > \Phi_i,\\
x_i & \text{otherwise},
\end{cases}
$$
and $\operatorname{sgn}(\boldsymbol{x}) = [\operatorname{sgn}(x_1), \ldots, \operatorname{sgn}(x_n)]^{\mathrm T}$ is a vector of sign functions.
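For concreteness, here is a small sketch (Python/NumPy, assumed here rather than taken from the paper) of the vector saturation defined above; note that with this definition the function returns the component-wise clamped value inside the boundary layer, not the normalized ratio.

```python
import numpy as np

def sat(x, phi):
    """Vector saturation as defined in Section 3.2:
    sat(x_i/Phi_i) = Phi_i * sgn(x_i) if |x_i| > Phi_i, else x_i."""
    x = np.asarray(x, dtype=float)
    phi = np.asarray(phi, dtype=float)
    return np.where(np.abs(x) > phi, phi * np.sign(x), x)

# Example: components 1 and 3 are clamped to +/- Phi_i, component 2 passes through.
print(sat([0.3, -0.05, 1.2], [0.1, 0.1, 0.5]))  # -> [ 0.1  -0.05  0.5 ]
```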
As a preliminary result, we show that the input matrix $B(\boldsymbol{x})$, whose inverse will be used in the control law, is not singular (nonzero determinant) in the fronto-parallel configuration ($p = q = 0$). Indeed, by explicit inspection, the determinant in this particular configuration is equal to $\det(B(\boldsymbol{x})) = f^2/(c^2\,\sigma(\boldsymbol{x})^2)$, where $\sigma$ is the scale factor of Eq. (2), since $t_{12} = -t_{21}$, $t_{11} = t_{22}$ and, as results from Eq. (3), $t_{11}^2 + t_{12}^2 = 1$.

In order to stabilize the system of Eq. (7), we apply sliding-mode control in the form
$$
\boldsymbol{V} = \hat{B}(\hat{\boldsymbol{x}})^{-1}\left(-\operatorname{diag}(\boldsymbol{k})\operatorname{sat}\!\left(\frac{\hat{\tilde{\boldsymbol{x}}}}{\boldsymbol{\Phi}}\right) + \dot{\boldsymbol{x}}^{(d)}\right). \qquad (16)
$$
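The following sketch (Python/NumPy; an illustration assuming an estimate of the $6 \times 6$ input matrix is available, not the paper's actual implementation) shows how the sliding-mode law of Eq. (16) can be evaluated at each sampling instant:

```python
import numpy as np

def sliding_mode_control(B_hat, x_hat, x_des, x_des_dot, k, phi):
    """Sliding-mode law of Eq. (16):
    V = B_hat(x_hat)^{-1} ( -diag(k) sat(x_tilde_hat / Phi) + dx_des/dt ),
    where x_tilde_hat = x_hat - x_des is the estimated error vector and
    sat(.) is the component-wise saturation (clamp) of Section 3.2."""
    x_tilde_hat = x_hat - x_des
    sat_err = np.clip(x_tilde_hat, -phi, phi)   # equals the sat(.) defined above
    u = -np.diag(k) @ sat_err + x_des_dot
    return np.linalg.solve(B_hat, u)            # avoids forming B_hat^{-1} explicitly

# Illustrative call for the 6-state system of Eq. (7) with placeholder values.
rng = np.random.default_rng(0)
B_hat = np.eye(6) + 0.1 * rng.standard_normal((6, 6))
V = sliding_mode_control(B_hat,
                         x_hat=np.zeros(6), x_des=np.ones(6), x_des_dot=np.zeros(6),
                         k=np.full(6, 0.5), phi=np.full(6, 0.1))
print(V)
```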
We have the following propositions, covering both the case of exact model and state measurement and the case of bounded uncertainties. Proofs are provided in Appendix A.

Lemma 1. Assume an exact model and exact state measurements (i.e. $\hat{B}(\hat{\boldsymbol{x}}) = B(\boldsymbol{x})$, $\hat{\boldsymbol{x}} = \boldsymbol{x}$); then the system in Eq. (7) is asymptotically stable in the whole state space $X$, provided that the control law in Eq. (16) is used with gains $k_i > 0$, $i = 1, \ldots, 6$.

Lemma 2. Assume multiplicative uncertainties in the input matrix (i.e. $B(\boldsymbol{x}) = (I + \Delta)\hat{B}(\hat{\boldsymbol{x}})$, $|\delta_{ij}| \le D_{ij}$, $D_{ij} > 0$, $i, j = 1, \ldots, 6$) and additive uncertainties in the state variables (i.e. $x_i = \hat{x}_i + \Delta x_i$, $|\Delta x_i| \le \varepsilon_i$, $\varepsilon_i > 0$, $i = 1, \ldots, 6$). Fix a vector $\boldsymbol{\Phi} \in \mathbb{R}^6$ such that $\Phi_i > \varepsilon_i$, $i = 1, \ldots, 6$. If the maximum eigenvalue of $D$ is smaller than 1 and $|\dot{x}^{(d)}_i| \le \nu_i$, $\nu_i > 0$, $i = 1, \ldots, 6$, then the set $C(\tilde{\boldsymbol{x}}, 2\boldsymbol{\Phi})$ is a global attractor for the system (7), provided that the control law in Eq. (16) is used with gains
$$
\boldsymbol{k} = \operatorname{diag}(\boldsymbol{\Phi})^{-1}(I - D)^{-1}(D\boldsymbol{\nu} + \boldsymbol{\eta}), \qquad (17)
$$
with $\eta_i > 0$, $i = 1, \ldots, 6$.
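As a sketch of how the robust gains of Eq. (17) can be computed once the uncertainty bounds are fixed (Python/NumPy; the matrix and vectors below are illustrative placeholders, not the values identified in Section 4.1):

```python
import numpy as np

def robust_gains(phi, D, nu, eta):
    """Gains of Eq. (17): k = diag(Phi)^{-1} (I - D)^{-1} (D nu + eta).
    Lemma 2 requires the maximum eigenvalue of D to be smaller than 1."""
    D = np.asarray(D, dtype=float)
    assert np.max(np.abs(np.linalg.eigvals(D))) < 1.0
    return np.linalg.solve(np.eye(D.shape[0]) - D, D @ nu + eta) / phi

# Placeholder bounds for a 6-state system (the identified D of Section 4.1
# is not reported in the text, so arbitrary values are used here).
phi = np.full(6, 0.2)        # boundary-layer / attractor-size vector Phi
D   = 0.1 * np.ones((6, 6))  # bound on the multiplicative uncertainty
nu  = np.full(6, 0.5)        # bound on the desired-state rate |dx_des/dt|
eta = np.full(6, 0.05)       # positive design vector eta
print(robust_gains(phi, D, nu, eta))
```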
3.3. Visual measurements

To track the image patch, we use active contours (or Kalman snakes), represented by quadratic B-splines [1,2]. The proposed control system in Eq. (16) requires full state information, as defined in Eq. (6). The centroid $\boldsymbol{p}_c$ is computed directly from the control points of the snake. The parameters $p$, $q$, and $c$ are also related to 3D space information: these data can be computed from the image space, without any calibration procedure. The estimation procedure adopted in this work is based on cross-ratios, and exploits the property that the cross-ratio of four collinear space points is equal to the cross-ratio computed with their projected points [14]. From the observation model defined in Eq. (2), given at least three image points $[p_x, p_y]^{\mathrm T}$ of the visible surface (in general position) and the corresponding object coordinates $[o_x, o_y]^{\mathrm T}$ (which are straightforward to obtain if, for example, a CAD model of the object is available), the matrix $T$ is computed by a least-squares estimation. Finally, the angle $\psi - \varphi$ is computed using Eq. (9). Notice that the scale factor $\sigma$ in Eq. (2) can be expressed as a function of the state variables, since, using the perspective equations, it holds that $z_c = c/(1 - p\,(p_{cx}/f) - q\,(p_{cy}/f))$.
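The least-squares step for $T$ can be sketched as follows (Python/NumPy). Since Eq. (2) is not reproduced in this excerpt, the sketch assumes a simplified affine observation model $[p_x, p_y]^{\mathrm T} \approx T\,[o_x, o_y]^{\mathrm T} + \boldsymbol{t}$; with three or more correspondences in general position the system is solved in the least-squares sense.

```python
import numpy as np

def estimate_T(obj_pts, img_pts):
    """Least-squares estimate of the 2x2 matrix T (and a translation t)
    from >= 3 point correspondences, assuming an affine observation model
    [px, py]^T ~ T [ox, oy]^T + t. This model is an assumption made for
    illustration; the paper's Eq. (2) defines the actual observation model."""
    obj_pts = np.asarray(obj_pts, dtype=float)   # shape (N, 2), object coordinates
    img_pts = np.asarray(img_pts, dtype=float)   # shape (N, 2), image coordinates
    A = np.hstack([obj_pts, np.ones((obj_pts.shape[0], 1))])  # rows [ox, oy, 1]
    sol, *_ = np.linalg.lstsq(A, img_pts, rcond=None)         # least-squares fit
    T = sol[:2, :].T   # 2x2 linear part
    t = sol[2, :]      # translation part
    return T, t

# Example with three synthetic correspondences (arbitrary numbers).
T, t = estimate_T([[0, 0], [1, 0], [0, 1]], [[10, 12], [10.9, 12.1], [9.8, 13.0]])
print(T, t)
```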
4. Experimental results

In this section experimental results are presented. Our aim is to validate the proposed hybrid visual control approach, showing the convergence and robustness properties of the system both through simulations and in real conditions.

4.1. Simulation results

The camera–object interaction model in Eq. (7) and the control system in Eq. (16) have been simulated in the Matlab/Simulink environment. The state vector $\boldsymbol{x}$ is sampled at a rate of 10 Hz. In the following, distances are expressed in mm and angles in rad. The gains have been determined according to Eq. (17) by the following procedure. A first-attempt set of gains has been chosen according to Lemma 1, and a set of preliminary simulations has been performed with these gains in order to obtain a set of $N$ reference system trajectories. In order to determine an upper bound on the multiplicative uncertainties, the maximum value $D_{ij}$ of the element $\delta_{ij}$ (see Lemma 2) has been searched for over the range of variation of: (i) the values of the state variables along the $k$-th reference trajectory; (ii) the combinations of maximum additive uncertainties of the state variables; (iii) the index $k$ of the reference trajectory ($k = 1, \ldots, N$). Assuming as maximum additive uncertainties the vector $\boldsymbol{\varepsilon} = [0.5,\ 0.5,\ 0.04,\ 0.04,\ 0.04,\ 10]^{\mathrm T}$, the resulting maximum eigenvalue of $D$ is $\lambda_m(D) = 0.87$. Moreover, fixing the vector $\boldsymbol{\Phi} = [0.5,\ 0.5,\ 0.1,\ 0.1,\ 0.1,\ 15]^{\mathrm T}$, which determines the size of the global attractor, and choosing $\boldsymbol{\eta} = [0.08,\ 0.08,\ 0.001,\ 0.006,\ 0.02,\ 0.3]^{\mathrm T}$, the resulting control gains are $\boldsymbol{k} = [0.5,\ 0.5,\ 0.55,\ 0.35,\ 0.35,\ 0.65]^{\mathrm T}$.

Two different tests have been carried out to evaluate the performance of the proposed control law. In the first simulation we assumed exact measurement of the state variables. The initial state is $\boldsymbol{x}_0 = [0,\ 0,\ 1.04,\ -1.5,\ -0.866,\ 800]^{\mathrm T}$ and the desired state is $\boldsymbol{x}^{(d)} = [10,\ 10,\ 1.7,\ -0.5,\ -0.28,\ 800]^{\mathrm T}$. Plots of the coordinates of the surface centroid and of the elements of the projection matrix are shown in Figs. 3(a) and (b); Figs. 3(c) and (d) show the 3D orientation and distance parameters; Figs. 3(e) and (f) show the components of the requested camera velocity twist. In the second simulation, whose results are shown in Fig. 4, we repeated the previous simulation after introducing additive white noise on the state variables with variances $\sigma^2_{x_c} = \sigma^2_{y_c} = 0.0025$ (centroid coordinates), $\sigma^2_{\varphi} = \sigma^2_{p} = \sigma^2_{q} = 10^{-4}$ (orientation parameters), and $\sigma^2_{c} = 2$ (distance parameter). The first simulation shows the asymptotic convergence in the ideal case of exact state measurement. The second one shows the boundedness of the state error, as formally proven in Lemma 2. Notice that the control chattering has been reduced by a suitable low-pass filtering ($f_p = 0.3$ Hz) of the state variables. The stability proof given in Lemma 2 is still valid, simply by considering as estimated state variables the ones resulting from the filtering process, provided that the frequency response of the filter does not affect the assumption of bounded state measurements.

4.2. Real-time experiments

The eye-in-hand robotic system on which this approach has been tested consists of a PUMA 560 robot arm with a Sony CCD camera mounted on its wrist. The robot is commanded by the MARK III controller (which realizes an inner loop) and by a PC 486/66 MHz equipped with an Imaging Technology frame grabber, which implements an outer loop. The MARK III controller operates under VAL II programs and communicates with the PC through the ALTER real-time protocol using an RS-232 serial interface. All the acquisition and control activities running on the PC are executed under the HARTIK kernel [5], which has been specifically designed to support real-time control applications with timing constraints. The control system has been implemented as a multitasking application. A task $\tau_{act}$ with period $T_{act} = 28$ ms reads commands (i.e. the control velocity screw) from a CAB (Cyclic Asynchronous Buffer) and sends them to the robot controller via ALTER. The control task $\tau_{ctr}$, which has period $T_{ctr} = k \cdot 28$ ms, $k =$
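As an illustration of the outer-loop task structure described above, the following sketch (plain Python with threads; a schematic stand-in, since the actual system runs VAL II programs and tasks under the HARTIK kernel and communicates over ALTER) shows two periodic activities exchanging the velocity command through a single-slot, overwrite-on-write buffer that loosely mimics the CAB semantics.

```python
import threading
import time

import numpy as np

class SingleSlotBuffer:
    """Overwrite-on-write, read-latest buffer, loosely mimicking a CAB:
    the writer never blocks and the reader always gets the most recent value."""
    def __init__(self, initial):
        self._lock = threading.Lock()
        self._value = initial

    def write(self, value):
        with self._lock:
            self._value = value

    def read(self):
        with self._lock:
            return self._value

cab = SingleSlotBuffer(np.zeros(6))  # velocity screw command, initially zero
stop = threading.Event()

def control_task(period_s=0.056):
    """Placeholder for tau_ctr: estimate the state from vision and compute
    the command of Eq. (16); here a dummy zero command is written."""
    while not stop.is_set():
        command = np.zeros(6)        # replace with sliding_mode_control(...)
        cab.write(command)
        time.sleep(period_s)

def actuation_task(period_s=0.028):
    """Placeholder for tau_act (T_act = 28 ms): read the latest command from
    the buffer and forward it to the robot controller (via ALTER in the real
    system; here it is simply printed)."""
    while not stop.is_set():
        print("send to robot:", cab.read())
        time.sleep(period_s)

threads = [threading.Thread(target=control_task), threading.Thread(target=actuation_task)]
for t in threads:
    t.start()
time.sleep(0.2)
stop.set()
for t in threads:
    t.join()
```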
