The width of a flange may vary depending on the part and the selected joining technology, with typical dimensions ranging from 5 to 10 mm. Due to a number of errors that can be traced to the production (stamping) of car door elements, the flanges' real positions do not always match their theoretical ones. Since the precision of the process is not always guaranteed, the flanges are designed wider than strictly necessary, in order to ensure that the welding spots are always placed within the flange area and sufficiently away from the flange edge. For this purpose, vision systems can be used to identify the flange's position and edges and provide feedback to the robot controller, indicating the offset necessary to ensure that the welding spot is performed within the desired area. However simple such an application may sound, several limitations and challenges are introduced by the assembly environment. The first challenge is deciding which type of vision system to use. Laser-based and active vision systems can generate very accurate 3D surface maps of the digitized parts, depending on the quality of the components used, but they can be slow, since the sensor has to be continuously moved around the workpiece [12]. However, automotive assembly cycle times of up to 30 seconds require fast recognition; therefore, complete part scanning is not an option. Moreover, the cost of active systems does not allow their application to multiple stations. Passive vision systems, on the other hand, although significantly less cost-intensive, present challenges such as susceptibility to lighting conditions and, most importantly, the correspondence problem. The correspondence problem involves matching pixels from the two images that correspond, in real life, to the same location.
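As an illustration, the correspondence problem along one rectified scanline can be solved by block matching with the sum of absolute differences (SAD). The sketch below is minimal and self-contained; the scanline data, window size and disparity range are invented for the example:

```python
# Minimal SAD block-matching sketch for one rectified scanline pair.
import numpy as np

def sad_disparity(left_row, right_row, x, block=3, max_disp=8):
    """Find the disparity of pixel x in a rectified scanline pair.

    Compares a small window around x in the left row against candidate
    windows shifted left in the right row, and returns the shift with
    the smallest sum of absolute differences.
    """
    half = block // 2
    ref = left_row[x - half : x + half + 1].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        xr = x - d
        if xr - half < 0:
            break                      # candidate window leaves the image
        cand = right_row[xr - half : xr + half + 1].astype(np.int32)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic scanlines: the right view is the left view shifted by 2 pixels.
left = np.array([10, 10, 50, 200, 50, 10, 10, 10, 10, 10], dtype=np.uint8)
right = np.roll(left, -2)               # the bright feature moves 2 px left
print(sad_disparity(left, right, x=3))  # → 2
```

In practice the matching cost is evaluated for every pixel of interest, which is exactly why the narrowing of the search area discussed below matters for cycle time.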
A plethora of algorithms has been proposed in the literature for the solution of this problem [9][13][14], with SAD (sum of absolute differences) and pattern matching algorithms [15] being the most widely used.

3. Method Description

To address the aforementioned problems, a passive vision system, combined with laser diodes that pinpoint the welding area, was developed. The laser diode is positioned so that the projected laser line(s) intersect the flange vertically.
The main principles are:
- The intensity of the laser beam allows it to remain visible after the application of filters on the image, thus allowing the isolation of the areas of interest.
- The deformation of the laser line at the edge of the flange allows the identification of the edge and thus ensures that the welding spot is offset towards the desired area on the flange.

The unique signature of the laser beam in the processed images allows a reduction in the image's background complexity, which is responsible for the poor performance of image analysis algorithms. The correspondence problem can therefore be solved more easily, further allowing the application of conventional stereo optics principles. The measurement process is presented in detail in the following sections.

3.1. Measurement

Following the concept discussed above, the measurement of the flange's distance from the cameras can be carried out using stereo triangulation. The setup of the system is similar to the one in Figure 2.

Fig. 2. Vision system concept.

Once the two images from the left and right cameras have been acquired, they are processed as follows:
1. Un-distortion: this process involves the removal of radial and tangential lens distortion. During the calibration of the system, the lens characteristics are identified and can be used for removing this effect from the picture [16].
2. Image rectification: this is the software processing of the images that ensures the pixels are arranged so as to compensate for any misalignment between the two camera axes. Practically, it means that the pixels representing the same point in the scene have equal height in both images.
3. Grey scale – thresholding: in order for the images to be processed, they are converted into binary ones.
At first, they are converted into grey-scale images, which are exclusively composed of grey shades, varying from black, at the weakest intensity, to white, at the strongest. The threshold filter processes individual pixels in an image and replaces them with a white pixel if their value is greater than the specified threshold value. In our case, the high intensity of the laser does not allow its trace to be eliminated.
4. Disparity map and re-projection to 3D space: this process involves finding the same features in the left and right camera views (also known as the correspondence problem). In order to reduce the time required for the identification of the same point in the two images, a hybrid method for identifying the proper pixels and their correspondence in both images has been developed. The method is based on the concept of the laser diode line that breaks at changes of geometry and effectively marks the area of interest, assisting in tracing the correct pixel. In order for this method to be implemented, a reference image of the laser line is used. This image can be acquired from both cameras during the calibration phase. After the calibration, these images undergo the rectification process, the threshold filter is applied, and the area of interest is cropped, leaving a small template image of 6 x 6 pixels. Such a template is shown in Figure 3.

Fig. 3. Laser line breaks in the rectified/thresholded image.

The developed software is able to match the pattern of this image within the respective images that are acquired by the vision system (both left and right cameras). Having calculated the correspondence in both images, the output of this step is a disparity map [17], where the disparities are the differences in the x-coordinates, on the image planes, of the same feature viewed in the left and right camera images.
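The thresholding and template-matching steps above, together with the standard disparity-to-depth conversion for rectified cameras (Z = f·B/d), can be sketched as follows. The images, the 2 x 2 template (the paper's is 6 x 6), and the camera parameters `f_px` and `baseline_mm` are all invented for the example:

```python
# Sketch of steps 3-4: threshold, locate the laser-break template in each
# rectified view, and convert the column disparity to depth.
import numpy as np

def threshold(img, t=128):
    """Step 3: binarise a grey-scale image; the bright laser trace survives."""
    return (img > t).astype(np.uint8)

def match_template(img, tpl):
    """Step 4: exhaustive SAD template match; returns (row, col) of best fit."""
    h, w = tpl.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(img.shape[0] - h + 1):
        for c in range(img.shape[1] - w + 1):
            cost = np.abs(img[r:r+h, c:c+w].astype(int) - tpl.astype(int)).sum()
            if cost < best:
                best, best_pos = cost, (r, c)
    return best_pos

def depth_from_disparity(x_left, x_right, f_px, baseline_mm):
    """Stereo triangulation for rectified cameras: Z = f * B / d."""
    d = x_left - x_right
    return f_px * baseline_mm / d

# Synthetic rectified pair: the same bright 2x2 patch at different columns
# (same rows, as rectification guarantees).
left = np.zeros((8, 16)); left[3:5, 9:11] = 255
right = np.zeros((8, 16)); right[3:5, 5:7] = 255
tpl = np.ones((2, 2), dtype=np.uint8)

_, xl = match_template(threshold(left), tpl)   # column in left image
_, xr = match_template(threshold(right), tpl)  # column in right image
print(depth_from_disparity(xl, xr, f_px=800, baseline_mm=100))  # → 20000.0 (mm)
```

The exhaustive search here is only for clarity; restricting it to the small cropped area of interest, as the method above does, is what makes the matching fast enough for assembly cycle times.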
Since the arrangement of the cameras is known, stereo triangulation can then be applied to convert the disparity map into distances of the points from the camera pair.

3.2. Path recalculation

The coordinates of the points, calculated in the previous step, are referenced to the left camera's coordinate frame. The coordinates that the robot should move to, in order to compensate for the part's actual location, are calculated by converting them to the robot base frame. Figure 4 shows these frames.

Fig. 4. Vision system frames of reference.

The conversion is achieved as follows:

\begin{bmatrix} W_X \\ W_Y \\ W_Z \\ 1 \end{bmatrix} =
\begin{bmatrix} i_x & j_x & k_x & T_X \\ i_y & j_y & k_y & T_Y \\ i_z & j_z & k_z & T_Z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} n_i & o_i & a_i & P_i \\ n_j & o_j & a_j & P_j \\ n_k & o_k & a_k & P_k \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} W_N \\ W_O \\ W_A \\ 1 \end{bmatrix}   (1)

where
- W_X, W_Y, W_Z are the coordinates of a point with respect to the base frame (B).
- W_N, W_O, W_A are the coordinates of a point with respect to the camera frame (C).
- T_X, T_Y, T_Z are the coordinates of the origin of the flange (F) frame with respect to the base frame (B).
- P_i, P_j, P_k are the coordinates of the origin of the camera (C) frame with respect to the flange frame (F).
- \begin{bmatrix} i_x & j_x & k_x \\ i_y & j_y & k_y \\ i_z & j_z & k_z \end{bmatrix} is the rotation matrix representing the rotation of the F frame with respect to B.
- \begin{bmatrix} n_i & o_i & a_i \\ n_j & o_j & a_j \\ n_k & o_k & a_k \end{bmatrix} is the rotation matrix representing the rotation of C with respect to F.

3.3. Path correction

Once the point coordinates have been calculated with respect to the base frame, the developed software provides the additional offset required to compensate for the diameter of the welding tool (gun) and sends the coordinates to the controller. No further conversions are required, since the controller's built-in routines for path calculation are used to instruct the robot on the motion to be performed.
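The chained frame conversion of Eq. (1) can be sketched with 4 x 4 homogeneous transforms. The poses below (flange frame rotated 90 degrees about the base Z axis, camera offset along the flange X axis) are invented for illustration and are not taken from the paper:

```python
# Sketch of Eq. (1): a point measured in the camera frame is converted to
# the robot base frame via the flange frame, using homogeneous transforms.
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 transform from a 3x3 rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed example poses (not from the paper).
Rz90 = np.array([[0., -1., 0.],
                 [1.,  0., 0.],
                 [0.,  0., 1.]])
T_base_flange = homogeneous(Rz90, [500., 200., 300.])  # [T_X, T_Y, T_Z]
T_flange_cam  = homogeneous(np.eye(3), [50., 0., 0.])  # [P_i, P_j, P_k]

# A measured point in the camera frame, [W_N, W_O, W_A], in mm.
p_cam = np.array([10., 20., 30., 1.])

# Eq. (1): base-frame coordinates [W_X, W_Y, W_Z].
p_base = T_base_flange @ T_flange_cam @ p_cam
print(p_base[:3])  # → [480. 260. 330.]
```

In the actual system these matrices come from the hand-eye calibration and the robot controller's reported flange pose; the welding-gun offset of Section 3.3 would then be added to `p_base` before the coordinates are sent to the controller.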
The desktop PC, to which the cameras are connected, needs to communicate the calculated coordinates to the controller; this is realized through the software architecture presented in the following section.

4. System Implementation

Following the analysis of Section 3, the realization of the vision system involved a) the development of the software system responsible for image capturing, analysis and the calculation of point coordinates,