Machine Vision Three-Coordinate Measurement: English Text and Chinese Translation (6)

3) calibration error of the camera; 4) range-measurement error of the binocular stereo vision method; and 5) measurement errors along the X, Y, and Z directions of the large-scale CMM [31], [32].

VI. EXPERIMENTAL RESULTS

The experimental setup is composed of the following: 1) an OPTON UMM 500 CMM, with a measuring range of 500 mm (length) × 300 mm (width) × 200 mm (height); 2) a Mintron MTV-1881EX black-and-white camera (795 × 596 pixels); 3) a CCTV lens with a focal length of 25 mm and an F-number of 1.4; 4) a VIDEO-VESA0-M real-time black-and-white frame grabber; and 5) a precision X–Y stage with a measuring range of 50 × 15 mm² and a measuring resolution of 10 μm. Fig. 4 is a photograph of the experimental setup.

The measuring repeatability of the monocular vision algorithm is verified with the precision X–Y stage. First, the measured workpiece is fixed, and the camera is mounted on the precision X–Y stage. (Note that the optical axis of the camera lens is collimated with the cross-cutting lines on the workpiece.) Second, the approximate position of the clearest image is found by eye as the camera is moved by the precision X–Y stage; equidistant fore and back positions are then selected, and images are shot at these three positions. Third, the approximate peak position is calculated by the position-from-defocus algorithm. Fourth, several positions around the approximate peak position are selected, their images are shot, and the accurate peak position is calculated by the position-from-focus algorithm.

Fig. 5 shows the measured energy-spectrum entropy function curve and its least-squares fitting curve; the X coordinates are the driving distances of the precision X–Y stage, and the Y coordinates are the energy-spectrum entropy function values. The approximate peak position found by the position-from-defocus algorithm is at 23.9 μm. Fifty images before and after the approximate peak position are shot with a moving step of 10 μm, and the energy-spectrum entropy function value of each image is calculated. The accurate peak position, calculated by least-squares fitting, is at 26.7 μm. (Sketches of the focus measure and of the peak fitting are given below.)

In the test of measurement repeatability, the same cross-cutting-lines area is measured ten times, and the calculated focusing positions are shown in Table I.

To accelerate the proposed image processing algorithm, only the small part (100 × 100 pixels) of the original image (620 × 420 pixels) that contains the intersection point of the cross-cutting lines is processed; the image shown in Fig. 6 is this 100 × 100 pixel region. The dimensions of the captured surface are about 1.8 × 1.8 mm². Fig. 6 is the original image of the cross-cutting lines and their intersection point.
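The energy-spectrum entropy focus measure is defined in an earlier section of the paper, which is not reproduced here. As an illustration only, the minimal sketch below assumes it is the Shannon entropy of the normalized 2-D FFT power spectrum; this definition and the function name are our assumptions, not the paper's statement (Python with NumPy):

import numpy as np

def energy_spectrum_entropy(image):
    # Assumed definition: Shannon entropy of the 2-D FFT power
    # spectrum normalized to a probability distribution. The paper
    # defines its own energy-spectrum entropy function earlier on.
    spectrum = np.abs(np.fft.fft2(image.astype(float))) ** 2
    p = spectrum / spectrum.sum()   # normalize so the values sum to 1
    p = p[p > 0]                    # drop zeros to avoid log(0)
    return float(-(p * np.log(p)).sum())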
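The accurate peak position "calculated by least-squares fitting" can likewise be sketched. The paper does not state the fitted model here, so the quadratic below is an assumption; the vertex of the fitted parabola is taken as the peak position:

import numpy as np

def peak_position_by_fit(positions, entropy_values):
    # Least-squares fit of y = a*x^2 + b*x + c to the sampled
    # focus-measure curve; the vertex -b/(2a) is the peak position.
    a, b, c = np.polyfit(positions, entropy_values, 2)
    if a >= 0:
        raise ValueError("fitted parabola has no maximum")
    return -b / (2.0 * a)

# Hypothetical usage with the 10-um sampling step from the text:
# positions = np.arange(50) * 10.0                    # um
# values = [energy_spectrum_entropy(img) for img in images]
# peak = peak_position_by_fit(positions, values)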
In Fig. 6, there is only one horizontal cutting line and one vertical cutting line. The horizontal cutting line is clearer than the vertical cutting line and lies in the middle of the vertical direction of the image, whereas the vertical cutting line lies in the middle of the horizontal direction of the image. The vertical lines to the left and right of the vertical cutting line are the fabrication texture; the vertical cutting line is parallel to the fabrication texture and a little wider than it.

The task of the proposed image processing algorithm is to obtain the 2-D image coordinates of the intersection point of the horizontal and vertical cutting lines. The partial image processing results for the cross-cutting lines and the 2-D image-coordinate calculation of the intersection point are described in Figs. 7–18. (Note that the image grayscale is scaled to 0–63.) The histogram of the original image is shown in Fig. 7. Fig. 8 gives the result of stretching and enhancing the grayscale of the original image to [0, 63], and Fig. 9 shows the histogram of Fig. 8.

Because the horizontal cutting line is perpendicular to the fabrication texture, it is relatively easy to extract. First, convolve the grayscale-stretched image in Fig. 8 two times with the low-pass filter operator [1/16 1/8 1/16; 1/8 1/4 1/8; 1/16 1/8 1/16]. Second, convolve the low-pass-filtered image with the operator [0 −1 0; 0 2 0; 0 −1 0]; after this, only the image of the horizontal cutting line is retained. Third, stretch the grayscale of the image of the horizontal cutting line to [0, 63]; the processed horizontal cutting line is shown in Fig. 10. Fourth, the upper and lower edges of the horizontal cutting line can be located with subpixel precision.

Because the vertical cutting line is parallel to the fabrication texture, it is relatively difficult to extract. First, convolve the grayscale-stretched image in Fig. 8 two times with the low-pass filter operator [1/16 1/8 1/16; 1/8 1/4 1/8; 1/16 1/8 1/16]. Second, convolve the low-pass-filtered image with the high-pass filter operator [−1 −1 −1; −1 10 −1; −1 −1 −1]. Fig. 11 gives the result of filtering Fig. 8 with the low- and high-pass filters, and Fig. 12 shows the histogram of Fig. 11. Third, stretch the grayscale of Fig. 11 to [0, 63]; the result is shown in Fig. 13, and Fig. 14 shows the histogram of Fig. 13. Fourth, perform edge detection on Fig. 13; Fig. 15 is the edge-detected image. Fifth, to eliminate image noise, the binary edge-detected image shown in Fig. 16 is obtained by comparing the edge-detected image of Fig. 15 with a threshold. Sixth, part of the fabrication texture can be eliminated by comparing, in each line of the binary edge-detected image, the width of the vertical cutting line with that of the fabrication texture; Fig. 17 gives the result of eliminating this partial fabrication texture from Fig. 16. Seventh, the left and right edges of the vertical cutting line can be located with subpixel precision. (The two filtering chains and the texture elimination are sketched below.)
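The two filtering chains can be written down directly from the kernels given in the text. A minimal sketch, using SciPy convolution in place of whatever implementation the authors used, and assuming the same [0, 63] grayscale stretching described above:

import numpy as np
from scipy.ndimage import convolve

LOW_PASS = np.array([[1/16, 1/8, 1/16],
                     [1/8,  1/4, 1/8],
                     [1/16, 1/8, 1/16]])
LINE_OP = np.array([[0, -1, 0],
                    [0,  2, 0],
                    [0, -1, 0]])      # responds to horizontal lines
HIGH_PASS = np.array([[-1, -1, -1],
                      [-1, 10, -1],
                      [-1, -1, -1]])  # sharpens the vertical line

def stretch_to_0_63(img):
    # Linear grayscale stretch to the [0, 63] range used in the text.
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) * 63.0 / (hi - lo) if hi > lo else img * 0.0

def extract_horizontal_line(img):
    smoothed = convolve(convolve(stretch_to_0_63(img), LOW_PASS), LOW_PASS)
    return stretch_to_0_63(convolve(smoothed, LINE_OP))

def extract_vertical_line(img):
    smoothed = convolve(convolve(stretch_to_0_63(img), LOW_PASS), LOW_PASS)
    return stretch_to_0_63(convolve(smoothed, HIGH_PASS))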
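The sixth step, eliminating fabrication texture by width, can be sketched as a per-row run-length test on the binary edge image: foreground runs narrower than the cutting-line width are treated as texture and cleared. The threshold min_width is a parameter we introduce for illustration; the text only says the cutting line is a little wider than the texture:

import numpy as np

def remove_narrow_runs(binary, min_width):
    # Clear, in each row, runs of foreground pixels narrower than
    # min_width; wider runs (the vertical cutting line) are kept.
    out = binary.copy()
    for r in range(out.shape[0]):
        c = 0
        while c < out.shape[1]:
            if out[r, c]:
                start = c
                while c < out.shape[1] and out[r, c]:
                    c += 1
                if c - start < min_width:
                    out[r, start:c] = 0   # narrow run: texture
            else:
                c += 1
    return out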
Now, the two subpixel edges of the horizontal cutting line and the two subpixel edges of the vertical cutting line have been located, and each edge consists of discrete subpixel edge points. Least-squares fitting lines of the discrete subpixel upper and lower edge points of the horizontal cutting line can be calculated. The central position of these two least-squares fitting lines is the Y-axis coordinate of the horizontal cutting line and also the Y-axis coordinate of the intersection point of the cross-cutting lines. Likewise, least-squares fitting lines of the discrete subpixel left and right edge points of the vertical cutting line can be calculated, and the central position of these two least-squares fitting lines is the X-axis coordinate of the vertical cutting line and also the X-axis coordinate of the intersection point of the cross-cutting lines. (A sketch of this step is given at the end of this section.)

In Fig. 18, white mark lines drawn according to the Y-axis coordinate of the horizontal cutting line and the X-axis coordinate of the vertical cutting line are superimposed on the original image. It can be seen that the 2-D coordinates of the intersection point of the cross-cutting lines are accurately extracted. These 2-D image coordinates should then be corrected according to the calibration results for the distortion coefficients of the camera optical system (a generic correction is also sketched at the end of this section).

To test the measurement repeatability and stability of the algorithm that calculates the 2-D image coordinates of the intersection point of the cross-cutting lines, ten images of the same cross-cutting-lines area are shot. The ten images are processed, and the 2-D image coordinates of the intersection point are calculated; the measuring results are shown in Table II.

The final measurement precision is the synthesis of the monocular vision measurement precision, the calibration precision of the camera, the range-measurement precision of the binocular stereo vision method, and the measurement precision of the large-scale CMM along the X, Y, and Z directions.
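The central-line construction above can be sketched as follows: each edge is fitted with a least-squares line, the two fitted edge lines of each cutting line are averaged into a central line, and the two central lines are intersected. The point representation (arrays of subpixel (x, y) edge points) is our assumption:

import numpy as np

def fit_line(points):
    # Least-squares fit y = k*x + b to an array of (x, y) points.
    x, y = np.asarray(points, dtype=float).T
    k, b = np.polyfit(x, y, 1)
    return k, b

def intersection_point(upper, lower, left, right):
    # Horizontal cutting line: average the fitted upper and lower
    # edge lines into a central line y = kh*x + bh.
    ku, bu = fit_line(upper)
    kl, bl = fit_line(lower)
    kh, bh = (ku + kl) / 2.0, (bu + bl) / 2.0
    # Vertical cutting line: fit with axes swapped (x = k*y + b)
    # because its edges are nearly vertical.
    kL, bL = fit_line([(py, px) for px, py in left])
    kR, bR = fit_line([(py, px) for px, py in right])
    kv, bv = (kL + kR) / 2.0, (bL + bR) / 2.0
    # Intersect y = kh*x + bh with x = kv*y + bv.
    y = (kh * bv + bh) / (1.0 - kh * kv)
    x = kv * y + bv
    return x, y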
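The distortion correction itself is not detailed in this section. Assuming a generic radial (Brown) model with calibrated coefficients k1, k2 and principal point (cx, cy), all hypothetical names here rather than the paper's, the correction of the intersection point could look like this fixed-point sketch:

def undistort_point(x, y, cx, cy, k1, k2, iterations=5):
    # Invert x_d = x_u * (1 + k1*r^2 + k2*r^4) (radial Brown model,
    # assumed here; the paper's calibrated model may differ) by
    # fixed-point iteration starting from the distorted point.
    xd, yd = x - cx, y - cy          # coordinates relative to center
    xu, yu = xd, yd
    for _ in range(iterations):
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / factor, yd / factor
    return xu + cx, yu + cy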