CMM. T_i can be calculated as follows:

$$
T_i = \begin{bmatrix} t_{Xi} \\ t_{Yi} \\ t_{Zi} \end{bmatrix}
    = T_0 + R \cdot \Delta_i
    = \begin{bmatrix} t_{X0} \\ t_{Y0} \\ t_{Z0} \end{bmatrix}
    + \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}
      \begin{bmatrix} \Delta X_i \\ \Delta Y_i \\ \Delta Z_i \end{bmatrix}
    = \begin{bmatrix}
        t_{X0} + r_{11}\,\Delta X_i + r_{12}\,\Delta Y_i + r_{13}\,\Delta Z_i \\
        t_{Y0} + r_{21}\,\Delta X_i + r_{22}\,\Delta Y_i + r_{23}\,\Delta Z_i \\
        t_{Z0} + r_{31}\,\Delta X_i + r_{32}\,\Delta Y_i + r_{33}\,\Delta Z_i
      \end{bmatrix}. \qquad (22)
$$

According to the geometrical principle of similar triangles between the CCD image plane coordinate system and the camera coordinate system, (X_Pi, Y_Pi) can be transformed to (x_Ci, y_Ci, z_Ci) as follows:

$$
x_{Ci} = \frac{X_{Pi}}{v} \cdot z_{Ci}, \qquad y_{Ci} = \frac{Y_{Pi}}{v} \cdot z_{Ci}. \qquad (23)
$$

Let

$$
x_{Wi} + \Delta X_i = \rho_{Xi}, \qquad y_{Wi} + \Delta Y_i = \rho_{Yi}, \qquad z_{Wi} + \Delta Z_i = \rho_{Zi}. \qquad (24)
$$

The geometrical model of camera calibration can be obtained by synthesizing the foregoing equations into (25), shown at the bottom of the page. This calibration model relates the 3-D coordinates of the measured feature points in the large-scale CMM coordinate system to the 2-D coordinates of the corresponding image points, which contain distortion errors, in the computer image coordinate system. The proposed calibration process is carried out on the basis of (25), and the specific calibration scheme can be divided into six steps.

1) The points in Ω_W and Ω_I that lie close to the center of the image plane are selected as calibration points, and the partial orthogonality constraints on R are applied. Equation (25) can be simplified into (26) by setting every parameter in set d to zero, so that only set m needs to be solved, i.e.,

$$
\begin{cases}
v \cdot \dfrac{r_{11}\rho_{Xi} + r_{12}\rho_{Yi} + r_{13}\rho_{Zi} + t_{X0}}{r_{31}\rho_{Xi} + r_{32}\rho_{Yi} + r_{33}\rho_{Zi} + t_{Z0}} = x_{Pi} = k_X \cdot s_X \cdot x_{Ii} = k_X \cdot s_X \cdot (\hat{x}_{Ii} - x_{I0}) \\[2ex]
v \cdot \dfrac{r_{21}\rho_{Xi} + r_{22}\rho_{Yi} + r_{23}\rho_{Zi} + t_{Y0}}{r_{31}\rho_{Xi} + r_{32}\rho_{Yi} + r_{33}\rho_{Zi} + t_{Z0}} = y_{Pi} = s_Y \cdot y_{Ii} = s_Y \cdot (\hat{y}_{Ii} - y_{I0})
\end{cases} \qquad (26)
$$

2) In the first step, the distortion errors are not considered and only partial orthogonality constraints on R are imposed, so the solved R does not satisfy all of the orthogonality properties in (21). Orthogonality penalty functions on R are therefore used to further refine R.

3) The first calibration result m^(1) of set m is obtained after the first and second steps. Then, m^(1) is held constant, and set d is solved: linear equations in set d are formed from Ω_W and Ω_I, and the first calibration result d^(1) of set d is obtained by the least-squares method.

4) d^(1) is held constant and substituted into (25), which yields nonlinear equations in m. First, the procedures of the first and second steps are repeated to obtain the second calibration result m^(2) of set m. Second, the procedure of the third step is repeated to obtain the second calibration result d^(2) of set d. Third, these procedures are repeated to obtain m^(λ) and d^(λ).

5) The objective function E(λ) is defined in (27) and (28), shown at the bottom of the page. m^(λ) and d^(λ) are accepted as the final calibration results of sets m and d when E(λ) is less than a predefined threshold ε.

6) 3-D coordinates in the large-scale CMM coordinate system can be transformed into the camera coordinate system by R and T_0 according to (20) and (22). Conversely, coordinates in the camera coordinate system can be transformed back into 3-D coordinates in the large-scale CMM coordinate system by (29), shown at the bottom of the page (note that we define R^{-1} = Ψ).

In the proposed camera calibration method, set d is solved linearly while set m is fixed, and set m is solved linearly while set d is fixed; this alternating procedure is repeated until E(λ) is less than the predefined threshold, so the global nonlinear iterative optimization process is avoided. The accuracy, efficiency, and reliability of camera calibration can thus be improved with less computational load and complexity [28]. Because the two optical axes of the camera are parallel in the proposed binocular stereo vision configuration, the calibration of the stereo camera pair reduces to the calibration of a single camera, which is simpler than other dual-camera calibration algorithms [29], [30].
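For concreteness, the alternating structure of steps 1)–5) can be summarized by the following minimal Python sketch. It is an outline only: the helper functions solve_m_linear, solve_d_linear, and objective_E are hypothetical placeholders for the linear solution of (26) with the orthogonality penalty on R, the least-squares solution of set d from (25), and the objective defined in (27) and (28), none of which are reproduced in this excerpt.

```python
import numpy as np

# Hypothetical placeholders: the actual linear systems come from (25)-(26)
# and the objective from (27)-(28), which are not shown in this excerpt.
def solve_m_linear(world_pts, image_pts, d):
    """Steps 1)-2): solve set m linearly with set d fixed, then refine R
    with the orthogonality penalty functions."""
    raise NotImplementedError

def solve_d_linear(world_pts, image_pts, m):
    """Step 3): solve set d by linear least squares with set m fixed."""
    raise NotImplementedError

def objective_E(world_pts, image_pts, m, d):
    """Step 5): objective function E(lambda) from (27) and (28)."""
    raise NotImplementedError

def calibrate(world_pts, image_pts, n_distortion, eps=1e-6, max_iter=100):
    """Alternate the two linear sub-problems (step 4) until E < eps,
    so that a global nonlinear iterative optimization is avoided."""
    d = np.zeros(n_distortion)                        # step 1): start with d = 0
    m = None
    for _ in range(max_iter):
        m = solve_m_linear(world_pts, image_pts, d)   # steps 1)-2), repeated in step 4)
        d = solve_d_linear(world_pts, image_pts, m)   # step 3), repeated in step 4)
        if objective_E(world_pts, image_pts, m, d) < eps:
            break                                     # step 5): accept m^(lambda), d^(lambda)
    return m, d
```

Because each sub-problem is linear when the other parameter set is held fixed, every iteration reduces to closed-form least-squares solves rather than a joint nonlinear search.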
V. BINOCULAR STEREO VISION ALGORITHM BASED ON THE LARGE-SCALE CMM

The measuring principle of the binocular stereo vision is shown in Fig. 3.

Fig. 3. Measuring principle of the binocular stereo vision.

Let P_L and P_R denote the image points of the measured feature point P on the CCD image plane at the left and right positions of the camera, respectively. Let u denote the distance between P and the line joining the principal points (O_L and O_R) of the lens at the left and right camera positions (the object distance); v has been described earlier in this paper; and let b denote the distance between the principal points of the lens at the left and right camera positions. Let X_PL and X_PR denote the X-axis coordinates of P in the CCD image plane coordinate systems of the left and right camera positions, respectively. Then, X_PL + X_PR is equal to the X-axis parallax of P between the left and right CCD image plane coordinate systems.

Let (X_IL, Y_IL) denote the 2-D image coordinates of P in the computer image coordinate system of the left camera position; (X_PL, Y_PL) can be calculated as X_PL = k_X · s_X · X_IL and Y_PL = s_Y · Y_IL. Let f denote the focal length of the lens when the measured point is at the focusing position of the camera; according to the principle of binocular stereo vision and the lens formula, v can be substituted by v = f · (β + 1). Then, the 3-D coordinates of P in the left-position camera coordinate system can be calculated as follows:

$$
\begin{cases}
x_{CL} = \dfrac{k_X \cdot s_X \cdot X_{IL} \cdot u}{v} \\[1.5ex]
y_{CL} = \dfrac{s_Y \cdot Y_{IL} \cdot u}{v} \\[1.5ex]
z_{CL} = u = \dfrac{b \cdot v}{X_{PL} + X_{PR}} = \dfrac{b \cdot v}{k_X \cdot s_X \cdot (X_{IL} + X_{IR})} = \dfrac{b \cdot f \cdot (\beta + 1)}{k_X \cdot s_X \cdot (X_{IL} + X_{IR})}
\end{cases} \qquad (30)
$$

If accurate 2-D image coordinates of P in the computer image coordinate systems of the left and right positions can be obtained, then the 3-D coordinates of P in the camera coordinate system can be calculated, as shown in the sketch at the end of this section. According to the proposed camera calibration method, the 3-D coordinates of P in the camera coordinate system can then be transformed into the unified large-scale CMM coordinate system.

The general binocular stereo vision method usually has to solve difficult problems such as stereo matching, calibration of stereo cameras, and calibration of b and v. In the proposed vision measuring system, these problems are handled in the following ways: 1) stereo matching is avoided by calculating the 2-D coordinates of the cross-cutting feature points in the computer image coordinate system separately for the left-position and right-position images; 2) the calibration of a single camera is simpler than that of stereo cameras; 3) b is obtained from the transversal moving distance of the camera; and 4) the calibration of v is avoided by the proposed camera calibration method.

The measuring system errors can mainly be classified as follows: 1) focusing position locating error; 2) calculating errors of the 2-D image coordinates of the intersection feature point of cross-cutting lines;
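As an illustration of the depth computation in (30), the short Python sketch below evaluates the 3-D coordinates of P in the left-position camera frame from the left and right computer-image coordinates. The function and argument names are ours, and the calibrated quantities k_X, s_X, s_Y, f, β, and b are assumed to be available from the procedures described above.

```python
def triangulate_left(X_IL, Y_IL, X_IR, b, f, beta, k_X, s_X, s_Y):
    """3-D coordinates (x_CL, y_CL, z_CL) of P in the left-position camera
    coordinate system, evaluated directly from (30).

    X_IL, Y_IL    : computer-image coordinates of P in the left-position image
    X_IR          : X computer-image coordinate of P in the right-position image
    b             : baseline, i.e., the transversal moving distance of the camera
    f, beta       : focal length and magnification, so that v = f * (beta + 1)
    k_X, s_X, s_Y : scale factors of the camera model
    """
    v = f * (beta + 1.0)                      # image distance
    u = b * v / (k_X * s_X * (X_IL + X_IR))   # object distance; X-axis parallax in the denominator
    x_CL = k_X * s_X * X_IL * u / v
    y_CL = s_Y * Y_IL * u / v
    z_CL = u
    return x_CL, y_CL, z_CL
```

The resulting camera-frame coordinates can then be mapped into the unified large-scale CMM coordinate system with Ψ = R^{-1} and T_0, as stated in step 6) of the calibration scheme.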