The first-order motion field can be written as $\dot{p} = \dot{p}_c + M_c\,(p - p_c)$. The tensor (of degree 2) $M_c = \dot{A}(t)\,A(t)^{-1}$ can be characterized in terms of differential invariants [13], and involves the orientation parameters $p$ and $q$. The centroid dynamics is given by the following first-order motion field expression [1]:
$$
\dot{p}_c(t) =
\begin{bmatrix}
-f/z_c & 0 & p_{x_c}/z_c & 0 & -f & p_{y_c} \\
0 & -f/z_c & p_{y_c}/z_c & f & 0 & -p_{x_c}
\end{bmatrix}
{}^{c}V_{c\setminus o}. \qquad (8)
$$
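The $2 \times 6$ matrix of Eq. (8) maps the relative twist screw to the centroid image velocity. A minimal numerical sketch of this mapping is given below; the helper name and the numbers are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def centroid_interaction_matrix(p_xc, p_yc, z_c, f):
    """2x6 matrix of Eq. (8), mapping the relative twist cV_{c\\o} = [v; w]
    to the centroid velocity dp_c/dt. Hypothetical helper for illustration."""
    return np.array([
        [-f / z_c, 0.0,      p_xc / z_c, 0.0, -f,  p_yc],
        [0.0,      -f / z_c, p_yc / z_c, f,   0.0, -p_xc],
    ])

# Example: camera approaching along its optical axis (v_z > 0 only).
L = centroid_interaction_matrix(p_xc=10.0, p_yc=-5.0, z_c=0.8, f=600.0)
twist = np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.0])  # [vx, vy, vz, wx, wy, wz]
p_dot = L @ twist
# p_dot = [1.25, -0.625], i.e. the centroid moves radially as (v_z / z_c) p_c
```

Note how a pure translation along the optical axis scales the centroid position radially, as expected from the third column of the matrix.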
3. Hybrid visual servoing

In this section, a hybrid state space representation of the camera–object interaction is first derived, and a robust control law is then synthesized.

3.1. State representation
According to the first-order spatial structure of the motion field of Eq. (5), the dynamic evolution of any image patch enclosing the object has six degrees of freedom: the velocity centroid coordinates $v_c$, which account for rigid translations of the whole patch, and the entries of the $2 \times 2$ tensor $M_c$, which are related to changes in shape of the patch [13]. Let us choose as the state of the system the 6-vector
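The six-degree-of-freedom first-order model described above can be sketched as an affine motion field, in which every image point moves with the centroid velocity plus a linear deformation term. The numerical values below are illustrative assumptions:

```python
import numpy as np

# First-order (affine) motion field of an image patch:
# v(p) = v_c + M_c (p - p_c), six degrees of freedom in total.
v_c = np.array([1.0, -0.5])        # centroid velocity: rigid translation (2 DOF)
M_c = np.array([[0.1, -0.2],
                [0.3,  0.05]])     # 2x2 tensor: changes of shape (4 DOF)
p_c = np.array([10.0, -5.0])       # patch centroid (illustrative)

def motion_field(p):
    """Velocity of image point p under the first-order model."""
    return v_c + M_c @ (p - p_c)
```

At the centroid itself the deformation term vanishes, so `motion_field(p_c)` returns exactly `v_c`, consistent with $v_c$ accounting for the rigid translation of the whole patch.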
$$
x = [\,p_{x_c},\; p_{y_c},\; \psi - \varphi,\; p,\; q,\; z_c\,]^T, \qquad (6)
$$
which is a hybrid vector, since it includes both image-space 2D information and 3D orientation and distance parameters. Notice that the choice of $\psi - \varphi$ is due to the fact that this quantity is well defined also in the fronto-parallel configuration, which is a singularity of the orientation representation for the angles $\theta$, $\psi$, and $\varphi$. We demonstrate below that the state space representation of camera–object interaction can be written as
$$
\dot{x} = B(x)\, {}^{c}V_{c\setminus o}, \qquad (7)
$$
where the notation ${}^{a}V_{b\setminus c}$ stands for the relative twist screw of frame $\langle b\rangle$ with respect to frame $\langle c\rangle$, expressed in frame $\langle a\rangle$. The system described by Eq. (7) is a driftless, input-affine nonlinear system, where ${}^{c}V_{c\setminus o} = {}^{c}V_{c\setminus a} - {}^{c}V_{o\setminus a}$ is the relative twist screw of camera and object. Here ${}^{c}V_{c\setminus a} = [{}^{c}v_{c\setminus a}^T,\; {}^{c}\omega_{c\setminus a}^T]^T$ is the control input, ${}^{c}V_{o\setminus a} = [{}^{c}v_{o\setminus a}^T,\; {}^{c}\omega_{o\setminus a}^T]^T$ is a disturbance input, and $\langle a\rangle$ is an arbitrary reference frame.
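The driftless, input-affine structure of Eq. (7), with the control twist and the disturbance twist combined into the relative twist, can be sketched with a plain Euler integration. Since the entries of $B(x)$ are not reproduced here, the sketch takes $B$ as a caller-supplied function; the identity matrix and the proportional feedback in the example are stand-in assumptions for illustration only:

```python
import numpy as np

def simulate(B, V_ctrl, V_dist, x0, dt=1e-3, steps=1000):
    """Euler-integrate the driftless input-affine system of Eq. (7):
    xdot = B(x) (cV_{c\\a} - cV_{o\\a}).
    B: function returning the 6x6 state-dependent input matrix (placeholder).
    V_ctrl: state-feedback control twist; V_dist: constant disturbance twist."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x += dt * (B(x) @ (V_ctrl(x) - V_dist))
    return x

# Illustrative run: B = identity and proportional feedback, no disturbance.
x_final = simulate(B=lambda x: np.eye(6),
                   V_ctrl=lambda x: -x,
                   V_dist=np.zeros(6),
                   x0=np.ones(6))
```

With this stand-in choice the state contracts toward the origin, mimicking the closed-loop convergence discussed in the paper; the actual $B(x)$ and control law are of course different.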
Assuming that the object is almost centered in the visual field, and sufficiently far from the camera plane, it
Abstract: In this paper, the visual servoing problem is addressed by coupling nonlinear control theory with a convenient representation of the visual information used by the robot. The visual representation, based on a linear camera model, is extremely compact so as to comply with active vision requirements. Assuming an exact model and state measurements, the designed control law is proven to ensure global asymptotic stability in the sense of Lyapunov. It is also shown that, in the presence of bounded uncertainties, the closed-loop behavior is characterized by a global attractor. The well-known pose ambiguity arising from the use of a linear camera model is solved at the control level by choosing a hybrid visual state vector that includes both image-space (2D) information and 3D object parameters. A method for on-line visual state estimation that avoids camera calibration is also presented. Simulations and real-time experiments validate the theoretical framework in terms of system convergence and control robustness. © 1999 Elsevier Science B.V. All rights reserved.