terms that we choose to add are larger, we use a spiral pattern of iterating through ig and ip. Another heuristic states that an object without any motion bias will have an average displacement of (0, 0). Since the spiral summation optimization works best when a minimum SSD value is found early in the search process, we gain an additional benefit by beginning cross-correlation searches at the origin.
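To make the interaction between the spiral ordering and the early-abort test concrete, the C sketch below approximates both ideas: the squared-difference terms are accumulated in concentric rings around the template centre (a stand-in for the spiral over the summation indices), the partial sum is abandoned as soon as it can no longer beat the best SSD found so far, and candidate displacements are visited outward from (0, 0) so that a small SSD tends to appear early. The function names, the 8-bit grayscale image layout, and the fixed search radius are illustrative assumptions rather than the implementation described here.

```c
/* A minimal sketch of spiral-ordered SSD matching with an early-abort
 * partial sum; names, the 8-bit grayscale layout, and the bounds handling
 * are illustrative assumptions.  The caller must keep (x0, y0) plus the
 * search radius inside the image. */
#include <limits.h>
#include <stdlib.h>

/* SSD between the template and the image patch whose top-left corner is
 * (x0, y0).  Terms are accumulated in concentric rings around the template
 * centre (approximating the spiral order), and the sum is abandoned as soon
 * as it can no longer beat `best`. */
static long ssd_ring_order(const unsigned char *img, int img_w,
                           const unsigned char *tpl, int tpl_w, int tpl_h,
                           int x0, int y0, long best)
{
    int cx = tpl_w / 2, cy = tpl_h / 2;
    int max_ring = (tpl_w > tpl_h ? tpl_w : tpl_h) / 2;
    long sum = 0;

    for (int ring = 0; ring <= max_ring; ring++) {
        for (int r = cy - ring; r <= cy + ring; r++) {
            for (int c = cx - ring; c <= cx + ring; c++) {
                if (r < 0 || r >= tpl_h || c < 0 || c >= tpl_w)
                    continue;                 /* outside the template       */
                if (abs(r - cy) != ring && abs(c - cx) != ring)
                    continue;                 /* interior: already summed   */
                int d = img[(y0 + r) * img_w + (x0 + c)] - tpl[r * tpl_w + c];
                sum += (long)d * d;
            }
        }
        if (sum >= best)
            return sum;                       /* partial sum already worse  */
    }
    return sum;
}

/* Search displacements outward from (0, 0): objects with no motion bias
 * have an average displacement of (0, 0), so a small SSD is usually found
 * early, which tightens the abort threshold for the remaining candidates. */
void spiral_ssd_search(const unsigned char *img, int img_w,
                       const unsigned char *tpl, int tpl_w, int tpl_h,
                       int x0, int y0, int radius,
                       int *best_dx, int *best_dy)
{
    long best = LONG_MAX;
    *best_dx = 0;
    *best_dy = 0;

    for (int ring = 0; ring <= radius; ring++) {
        for (int dy = -ring; dy <= ring; dy++) {
            for (int dx = -ring; dx <= ring; dx++) {
                if (abs(dx) != ring && abs(dy) != ring)
                    continue;                 /* perimeter of the ring only */
                long s = ssd_ring_order(img, img_w, tpl, tpl_w, tpl_h,
                                        x0 + dx, y0 + dy, best);
                if (s < best) {
                    best = s;
                    *best_dx = dx;
                    *best_dy = dy;
                }
            }
        }
    }
}
```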
Fig. 7. Selective object detection and tracking (continued on facing page). (a) This plot shows the robot tool's position as a function of time. (b-d) The robot moves toward an object of interest as it appears. (e-g) The robot tracks the object forward and backward. (h-k) The robot detects only the desired object of interest (calculator). (l-n) The robot ignores motion of other objects (spork and gum wrapper). (o-q) However, the robot does track the desired object's motion.
5. VISUAL DETECTION AND TRACKING HARDWARE
5.1. The Minnesota Vision Processing System
For our experiments, we use a hardware system called the Minnesota Vision Processing System (MVPS).28 The MVPS can receive input from either a live camera feed or recorded imagery. The transmission of image frames is performed by a Datacube MaxVideo 20 video processor. Because of the pipeline architecture of the MaxVideo 20, all of its calculations are performed at the input frame rate. The MaxVideo 20 contains image processing elements that are programmed with Datacube's ImageFlow software libraries to compute the figure and ground images.
The limited collection of MaxVideo 20 processing elements cannot perform the more complicated algorithms required for figure segmentation. Instead, figure segmentation occurs on a separate Intel Max860 processing unit. This RISC processor has a peak performance rating of 80 megaflops and receives image frames from the MaxVideo 20 through a 20 MHz P2 pixel bus. The segmentation of a single test object (200 x 300 pixels) in a 512 x 480 pixel image takes 150 msec. This time is reduced considerably through the use of multiple, smaller domains. Because of their computationally intensive nature, correlation computations are also performed on the Max860 processor.
A couple of other devices support the vision system. High-level control of the MVPS is performed by software executing on a VME-based Motorola MVME-147 single-board computer (SBC) running the OS-9 real-time operating system. The MVME-147 drives the other processing units through a portable 7-slot VME chassis. Finally, a Sun workstation is used to develop the software source code and to store multiple versions of the system.
5.2. The Minnesota Robot Control System
To apply the detection framework to the robotic domain, we have connected the MVPS to a robot control scheme called the Minnesota Robot Control System (MRCS). Input vectors provided by the vision system are filtered and validated by software that executes on a dedicated VME-based Ironics IV3230 (68030) SBC running Carnegie Mellon University's Chimera real-time robotic control environment.29 Chimera requires a Sun workstation to act as a host for the VME connection.
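The filtering and validation step is not specified in detail in this section; the sketch below only illustrates the kind of screening such a stage might perform on per-frame displacement vectors before they reach the controller. The function name validate_displacement, the magnitude limit, and the smoothing factor are hypothetical and are not drawn from the MRCS.

```c
/* Hypothetical sketch: screen and smooth displacement vectors from the
 * vision system before handing them to the robot controller.  The limits
 * and names below are illustrative assumptions, not the MRCS software. */
#include <math.h>
#include <stdbool.h>

#define MAX_PIXELS_PER_FRAME 40.0   /* assumed plausibility limit (pixels) */
#define SMOOTHING_ALPHA      0.5    /* assumed low-pass blend factor       */

typedef struct { double dx, dy; } Vec2;

/* Reject vectors whose magnitude exceeds a plausible per-frame motion;
 * otherwise blend them into a running estimate to damp measurement noise. */
bool validate_displacement(Vec2 raw, Vec2 *filtered)
{
    double mag = sqrt(raw.dx * raw.dx + raw.dy * raw.dy);
    if (mag > MAX_PIXELS_PER_FRAME)
        return false;                        /* implausible: drop the sample */
    filtered->dx = SMOOTHING_ALPHA * raw.dx + (1.0 - SMOOTHING_ALPHA) * filtered->dx;
    filtered->dy = SMOOTHING_ALPHA * raw.dy + (1.0 - SMOOTHING_ALPHA) * filtered->dy;
    return true;
}
```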