Fig. 8 2D visual servoing architecture simulation.

Predictive visual servoing implementation. In this approach the goal is also to control the relative pose of the robot with respect to the target. The model corresponds to Fig. 8 with the controller substituted: in the first case a GPC is used and in the second an MPC controller. In both experiments the conditions and characteristics of the robot are the same. The goal is to control the end-effector from the image error between the current image and the desired image.

5.2 Visual servoing control results

Visual servoing using a PI controller. A PI controller was chosen to eliminate the position error. The point coordinates in operational space are:

pi = [0.35 −0.15 0.40 π 0 π]^T
pd = [0.45 −0.10 0.40 π 0 π]^T

The points pi and pd correspond to the robot positions from which the images used to control the robot are obtained. Fig. 9 shows the translation and rotation of the end-effector about the ox, oy and oz axes.

Fig. 9 Visual servoing using PI control.

Predictive GPC and MPC visual servoing control. In both experiments a 2D visual servoing architecture was used. From Figs. 9, 10 and 11 it can be seen that the GPC produces a more linear trajectory and is faster. The rise time is around 0.6 s for the PI, while for the GPC and MPC it is 0.1 s and 0.2 s, respectively. The settling time is 1 s for the PI, 0.3 s for the GPC and 0.9 s for the MPC. The results are less accurate for translation along z and for rotation of the end-effector.

Fig. 10 Results of the 2D visual servoing architecture using a GPC controller.
Fig. 11 Results of the 2D visual servoing architecture using an MPC controller.
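The image-based PI control law described above can be sketched as follows. The gains kp and ki, the constant feature depth Z, and the standard point-feature interaction matrix are illustrative assumptions of this sketch, not values taken from the paper:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized point
    feature (x, y) at depth Z, relating image motion to the camera
    twist (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def pi_visual_servo(s, s_star, Z, integral, kp=1.0, ki=0.1, dt=0.01):
    """One step of an image-based PI visual-servoing law.
    s, s_star : current / desired normalized image points, shape (N, 2).
    Returns the commanded camera twist and the updated error integral."""
    e = (s - s_star).reshape(-1)           # stacked image error
    integral = integral + e * dt           # accumulated error (I term)
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in s])
    L_pinv = np.linalg.pinv(L)             # Moore-Penrose pseudo-inverse
    v = -L_pinv @ (kp * e + ki * integral)
    return v, integral
```

When the current features match the desired ones, the commanded twist is zero, which is the equilibrium the PI loop drives the end-effector towards.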
Table 1 presents the computed errors for each algorithm, revealing the best performance for the GPC.

TABLE 1  r.m.s. values for the control algorithms

         Tx      Ty      Tz      θx      θy      θz    SSR error
PI      2.50    2.40    1.20    2.20    0.30    0.47      1.51
GPC     2.14    0.81    0.22    1.84    0.22    0.02      0.87
MPC     1.36    0.67    3.01    0.76    6.30    6.21      3.04

6. EXPERIMENTAL PROCEDURE

The experimental implementation of the proposed vision control algorithms used the whole previously developed simulation system as a platform. Although the presented simulation work was developed in an "eye-in-hand" configuration, the experimental work was performed in an "eye-to-hand" configuration. This choice was motivated by the need to protect the camera, which was placed outside the robot, providing a safer system in the initial phase. The homogeneous transformation matrix relating the camera frame to the robot frame, wTc, for the configuration used is given by:

wTc = [ 1    0    0    0.45
        0   −1    0    0.45
        0    0   −1    0.91
        0    0    0    1    ]                                 (27)

The application consisted in controlling the robot first with a PI controller and then with a generalized predictive controller (GPC).

6.1 Implemented system configuration

The experimental work exploited the capabilities offered by xPC Target and Matlab Simulink. This technology makes it possible to create an operating environment that allows real-time robot control. Two computers were used (Fig. 12): a host PC for visual information acquisition and processing, and a target PC that receives the processing results from the host PC and performs the robot control. The image acquisition system processes between 12 and 20 images per second and sends the processed data to the robot control system (target PC) through RS232.
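The eye-to-hand transform of Eq. (27) is used to express points measured in the camera frame in the robot base frame. A minimal sketch, assuming the entry values read above (the sign pattern of the rotation part is an inference from the extracted text):

```python
import numpy as np

# Homogeneous camera-to-robot transform, Eq. (27) (entries as reconstructed).
w_T_c = np.array([
    [1.0,  0.0,  0.0, 0.45],
    [0.0, -1.0,  0.0, 0.45],
    [0.0,  0.0, -1.0, 0.91],
    [0.0,  0.0,  0.0, 1.00],
])

def camera_to_robot(p_c):
    """Express a 3-D point given in the camera frame in the robot base frame."""
    p_h = np.append(p_c, 1.0)       # homogeneous coordinates [x, y, z, 1]
    return (w_T_c @ p_h)[:3]
```

For example, a point 0.91 m along the camera's optical axis maps to a point on the base plane of the robot frame, consistent with a downward-looking external camera.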
This external loop control frequency depends on the computational weight of the algorithm, the numerical capacity of the computers and the specific load of the Simulink program. Theoretically, the Vector camera used could reach a rate of 300 images per second. In order to create the robot control environment it was necessary to replace the original PUMA controller with an open control architecture. This procedure allows the system to be adapted to different kinds of controllers. In the present case the internal controller was replaced by a velocity controller with gravity compensation.

Fig. 12 Experimental scheme.

The target viewed by the camera is shown in Fig. 13. A planar target with 8 LEDs, placed at the vertices of two squares of 40 mm side spaced 150 mm from each other, is used. The choice of the number of points was conditioned by the sensitivity estimated from the results of the theoretical simulation study and by the limitations of the image processing system. Although this provides redundant information, this number of points led to better results, as verified in the simulation study.

Fig. 13 Target viewed by the camera.

The experimental work follows the configurations shown in Fig. 14.
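The 8-point target geometry described above can be laid out as follows. The exact placement of the two squares (side by side, centres 150 mm apart, axis-aligned) is an assumption of this sketch; the paper gives only the side length and the spacing:

```python
def square_vertices(cx, cy, side):
    """Vertices of an axis-aligned square centred at (cx, cy), in mm."""
    h = side / 2.0
    return [(cx - h, cy - h), (cx + h, cy - h),
            (cx + h, cy + h), (cx - h, cy + h)]

# Planar target: 8 LEDs at the vertices of two 40 mm squares whose
# centres are 150 mm apart (hypothetical side-by-side layout).
target = square_vertices(-75.0, 0.0, 40.0) + square_vertices(75.0, 0.0, 40.0)
```

Knowing the metric coordinates of the 8 LEDs lets the image processing system match extracted image points to the model when computing the image error.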