WO2000045229A1 - Uncalibrated dynamic mechanical system controller - Google Patents

Uncalibrated dynamic mechanical system controller

Info

Publication number
WO2000045229A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
robot
parameter
translation model
dynamic
Application number
PCT/US2000/001876
Other languages
French (fr)
Inventor
Jennelle Armstrong Piepmeier
Harvey Lipkin
Gary Von McMurray
Original Assignee
Georgia Tech Research Corporation
Application filed by Georgia Tech Research Corporation
Priority to AU33504/00A
Publication of WO2000045229A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
      • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
        • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
          • B25J 9/00: Programme-controlled manipulators
            • B25J 9/16: Programme controls
              • B25J 9/1602: Programme controls characterised by the control system, structure, architecture
                • B25J 9/1607: Calculation of inertia, jacobian matrixes and inverses
              • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
                • B25J 9/1697: Vision controlled systems
    • G: PHYSICS
      • G05: CONTROLLING; REGULATING
        • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
          • G05B 13/00: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
            • G05B 13/02: Adaptive control systems, electric
              • G05B 13/0205: Adaptive control systems, electric, not using a model or a simulator of the controlled system
                • G05B 13/024: Adaptive control systems in which a parameter or coefficient is automatically adjusted to optimise the performance
              • G05B 13/04: Adaptive control systems, electric, involving the use of models or simulators
                • G05B 13/042: Adaptive control systems in which a parameter or coefficient is automatically adjusted to optimise the performance
          • G05B 19/00: Programme-control systems
            • G05B 19/02: Programme-control systems, electric
              • G05B 19/18: Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
                • G05B 19/404: Numerical control [NC] characterised by control arrangements for compensation, e.g. for backlash, overshoot, tool offset, tool wear, temperature, machine construction errors, load, inertia
          • G05B 2219/00: Program-control systems
            • G05B 2219/30: Nc systems
              • G05B 2219/39: Robotics, robotics to robotics hand
                • G05B 2219/39062: Calculate, jacobian matrix estimator
                • G05B 2219/39397: Map image error directly to robot movement, position with relation to world, base not needed, image based visual servoing

Definitions

  • the present invention is generally related to an uncalibrated, model independent controller for a mechanical system and, more particularly, is related to a system and method employed by a robot in visual tracking of a moving target.
  • Control of mechanical systems by a computer, in the prior art, is typically based on a model of the mechanical system which relates controller inputs (sensor data) to the controller outputs (variables controlled by the computer).
  • One example of a mechanical system controller is the cruise control apparatus of a car.
  • the controller input variable for the cruise control apparatus is the speed of the car as sensed by a tachometer, a speedometer, or the like.
  • the detected speed of the car, or a representative parameter such as an error signal, is input into the controller computer.
  • Based on whether the car must accelerate or decelerate, the controller computer initiates a control action (controller output variable), such as depressing the gas pedal more to initiate an acceleration of the car.
  • the key to controlling any mechanical system is the accurate modeling of the mechanical system that relates the controller inputs (sensor data) to the appropriate setting of the controller outputs.
  • Models of a mechanical system have been traditionally developed in one of two ways.
  • the first prior art method is to construct the mechanical system model using analytical techniques. This method involves mathematical modeling of all the system parameters and physical parameters. While this method is clearly the dominant prior art modeling method, several significant problems arise.
  • the model of the mechanical system is only as good as the data used to construct the model. Thus, if the measurements of the physical dimensions of the mechanical system are not extremely accurate, the model will not be valid. In the world of mass production, this means that parts must be constructed very accurately in order to duplicate the mechanical system upon which the controller model is based.
  • a slight difference between one part of the product and the mechanical system model could render the entire controller model invalid for the product in which that part is used. Additionally, if any parameter in the system changes over time, then the model may no longer be valid. Any time a model no longer accurately represents the mechanical system, the controller computer will be incapable of performing as desired.
  • Another more recent prior art modeling technique for deriving a model of a mechanical system involves the use of neural networks and/or fuzzy logic to construct the relationship between controller inputs and outputs.
  • the appeal of this second approach is that some of the measurement errors that frequently plague the first modeling method can be avoided.
  • the actual construction of the input/output relationship varies slightly between the neural network and the fuzzy logic modeling methods.
  • a training period is required to construct the proper relationship between the input and output variables. This training period can involve a rather lengthy period of time because of the inputting of various levels of system inputs and the recording of the sensor data.
  • the computer controller can construct the appropriate mechanical system model.
  • Some advanced robots employ a dynamic look-and-move method which allows a robot to position its end-effector relative to a target on a workpiece such that the robot can complete a predetermined operation.
  • the robot's end-effector may be a tool, such as a cutter or a gripper, a sensor, or some other device.
  • An end-effector is typically located on the distal end of a robot arm.
  • the workpiece is the object of the robot's predetermined task.
  • a target is a predetermined reference point on or near the workpiece.
  • a robot may have an arm holding a paint sprayer (end-effector) for spray painting (predetermined task) an automobile (workpiece).
  • any movement or relocation of the camera will cause errors because reference locations will not be properly mapped into the visual servoing algorithm.
  • the model independent approach describes visual servoing algorithms that are independent of hardware (robot and camera systems) types and configuration of the working system (robot and workpiece).
  • the most thorough treatment of such a method has been performed by Jagersand and described in M. Jagersand, Visual Servoing Using Trust Region Methods and Estimation of the Full Coupled Visual-Motor Jacobian, IASTED Applications of Robotics and Control, 1996, and in M. Jagersand, O. Fuentes, and R. Nelson, Experimental Evaluation of Uncalibrated Visual Servoing for Precision Manipulation, Proceedings of International Conference on Robotics and Automation, 1997.
  • Jagersand demonstrates the robust properties of this type of control, demonstrating significantly improved repeatability over standard joint control even on an older robot with backlash.
  • Jagersand's work focuses on servoing a robot end-effector to a static target. That is, the workpiece is not moving.
  • a heretofore unaddressed need exists in the industry to address the aforementioned deficiencies and inadequacies.
  • the present invention provides an apparatus and method for enabling an uncalibrated, model independent controller for a mechanical system using a dynamic quasi-Newton algorithm which incorporates velocity components of any moving system parameter(s).
  • a preferred embodiment enabling tracking of a moving target by a robotic system having multiple degrees of freedom using an uncalibrated model independent visual servo controller is achieved by the method and apparatus of the present invention, and can be implemented as follows.
  • Model independent visual servo control is defined as using visual feedback to control robot servomotors without precisely calibrated kinematic robot models and camera models.
  • a sensing system such as a camera detects the position of at least a target on a workpiece and a point on the robot, such as on the robot's end effector.
  • the detected positions of the target and the point are converted by a processor into information used by the dynamic quasi-Newton algorithm of the present invention to calculate a translation model, hereinafter known as the Jacobian.
  • a Jacobian specifies differential changes in the robot position per differential changes in the joint position of each joint (rotary or prismatic) of the robot.
  • a controller sends control signals, as specified by the Jacobian calculated by the processor, to servomotors controlling the robot's rotating joints. With each subsequent update of the Jacobian, the positions of the robot's members are adjusted such that the end-effector is directed to a desired location relative to the target.
  • controller is equally applicable to controlling other types of mechanical systems. Because the controller does not require a kinematic model, the controller is particularly suited to mechanical systems having at least one moving parameter, or to mechanical systems for which a precise kinematic model is impractical or impossible to develop, such as when a parameter of the kinematic model is changing. Nor does the controller require a detailed model of the sensing system. For example, the camera of a visual tracking system can be repositioned or bumped without interrupting the tracking process or diminishing tracking accuracy. Similarly, the sensor system may utilize different types of sensors, such as mechanical sensors, magnetic sensors, visual sensors, optical sensors, sonic sensors, temperature sensors, infrared sensors, force sensors, proximity sensors or the like.
  • One object of the present invention is to enable a model independent controller for controlling a mechanical system.
  • Another object of the present invention is to enable a controller to control a mechanical system after a change in a system parameter.
  • Another object of the present invention is to enable a controller to control a mechanical system after a change in a system parameter after a shortened retraining period.
  • Another object of the present invention is to enable a model independent controller for tracking a moving target for use in a tracking system.
  • Another object of the present invention is to enable a model independent visual servoing controller for tracking a moving target for use by a robot.
  • Another object of the present invention is to enable an uncalibrated, model independent visual servoing controller for tracking a moving target for use by a robot. That is, the initial positioning of the robot end-effector, members and controlled couplers, or the initial positioning of the workpiece and the associated target, is not required. Also, exact positioning of the camera in a predetermined location is not required. The robot will effectively and accurately perform its predetermined task even if the camera is displaced or if the workpiece is not properly positioned on the moving conveyor belt.
  • Another object of the present invention is to enable a model independent controller for tracking a moving target for use in a controller system which utilizes a sensor, or combination of sensors, such as but not limited to a mechanical sensor, a magnetic sensor, an optical sensor, a visual sensor, a sonic sensor, a temperature sensor, an infra-red sensor, a force sensor, a proximity sensor or other sensors as commonly known in the art.
  • FIG. 1 is a diagram of a robot and workpiece system where the workpiece is moving along a conveyor.
  • FIG. 2 is a controller block diagram of the preferred embodiment of the invention employing a dynamic quasi-Newton algorithm.
  • FIG. 3 is a block diagram of an embodiment of the invention employing a dynamic quasi-Newton with a recursive least squares (RLS) algorithm.
  • FIG. 4 is a block diagram showing the Jacobian estimation scheme of FIG. 3.
  • FIG. 5 is a diagram of a one degree-of-freedom system used in testing the dynamic controller of FIG. 2.
  • FIG. 6A is a graphical representation illustrating the tracking error of the one degree-of-freedom system of FIG. 5 using prior art control methods.
  • FIG. 6B is a graphical representation illustrating the tracking error of the one degree-of-freedom system of FIG. 5 using control methods of the present invention.
  • FIG. 7 is a diagram of a car speedometer which is controlled by an alternative embodiment of the controller.
  • FIG. 8 is a diagram of a mechanical system having two cars separated by a distance which is controlled by an alternative embodiment of the controller.
  • FIG. 9 is a diagram of a robot and workpiece system where the robot is moving towards the workpiece.
  • FIG. 1 shows a mechanical system which includes a controller computer 14, a robot 10 and a workpiece 38.
  • the robot 10 has a multi-member robot arm 12 connected to a controller computer 14 via a control cord 16.
  • processor 17 resides within the controller 14; however, processor 17 may reside in any convenient location outside of the controller 14.
  • the robot 10, used to illustrate the method and apparatus of the present invention in a preferred embodiment, includes a pedestal 24, upon which the robot arm 12 is mounted.
  • the robot arm 12 has three members 18, 20, and 22. Member 18 is connected to pedestal 24 with a rotating joint 26.
  • Residing within rotating joint 26 is a sensor, known as a joint resolver (not shown), which detects the angular position of the rotating joint 26.
  • the angular position of rotating joint 26 is controlled by a servoing motor (not shown).
  • the position of member 18 is known when the angular position of rotating joint 26 is determined.
  • Controller 14 sends control signals to the servoing motor to adjust the angular position of rotating joint 26, thereby controlling the position of member 18.
  • Member 20 is connected to member 18 by rotating joint 28.
  • Member 22 is connected to member 20 by rotating joint 30. Rotating joint 30 and rotating joint 28 are controlled in a similar manner as rotating joint 26.
  • End-effector 34 is connected to arm member 22 by wrist joint 32. Wrist joint 32 may have up to three degrees of freedom of movement. The position of member 20, member 22 and end effector 34 is controlled by adjusting the angular position of their respective joints.
  • a camera 36 views workpiece 38 positioned on conveyor 42. In this example system, workpiece 38 is moving on conveyor 42 in the direction shown by the arrow 44. The camera 36 is viewing the target 40 located on the workpiece 38 and a predefined point 46 on end-effector 34.
  • the image recorded by camera 36, which contains images of the target 40 and point 46, is transmitted to a processor 17 residing in controller 14 via means commonly employed in the art, such as by cable (not shown), radio signals, infrared signals or the like.
  • the image is converted by the processor 17 into pixels employing methods commonly used in the art. Pixels associated with target 40 and point 46 are then marked and used as the points for reference necessary for the processing algorithm.
  • the processing algorithm is then executed by the processor 17 to generate a translation model, or Jacobian.
  • the Jacobian specifies differential changes in the robot position per differential changes in the angular position of rotating joints 26, 28, 30 and 32.
  • Processor 17 provides the controller 14 control signals which are sent to the robot 10 such that the end-effector 34 is directed to the desired location relative to the target 40.
  • processor 17 can reside at any convenient location and provide control signals to controller 14 using methods known in the art without departing substantially from the principles of the present invention.
  • an image-based visual servoing method, which can be classified as a dynamic look-and-move method, is illustrated.
  • processing method is endpoint closed-loop (ECL), which is a method employing a vision system that views both the target 40 and the point 46 located on the end effector 34.
  • FIG. 2 shows the controller block diagram as applied to the robot 10 of FIG. 1. Positions of rotating joints 26, 28 and 30, position of wrist joint 32, positioning of the pedestal (if movable) and any other moveable connectors of the robot 10 (see FIG. 1) are calculated using a nonlinear least squares optimization method which minimizes the error in the image plane.
  • the processing algorithm estimates the Jacobian on-line and does not require calibrated models of either the camera configuration or the robot kinematics.
  • a Jacobian, for the robot embodiment, is the group of servo motor angle positions controlled by the controller 14 (controller outputs). Often, servo motor angle position information is grouped in a matrix format, or the like, to facilitate the mathematical representation and calculation of the controller outputs by the processor 17.
  • the processing algorithm, hereinafter the dynamic quasi-Newton algorithm, provides a Jacobian (joint positions as servo motor angles) as reference inputs for the joint-level controllers.
  • the processing algorithms incorporate the velocity of the changing parameters to calculate a Jacobian update (new joint positions) for the controller such that the robot can perform its predetermined task (moving the end effector to the target).
  • the camera 36 captures an image which includes images of at least the target 40 and the point 46 on the end effector 34 (FIG. 1), as shown in block 210.
  • the image is then processed by the processor 17 residing in controller 14, shown by block 212, to generate a datum point, y*, of the moving target 40.
  • An error signal f is generated at block 214 by subtracting datum point y* from the datum point y(θ).
  • the error signal contains rate of change information, such as velocity of the moving target 40 of FIG. 1.
  • the Jacobian is updated by the processor at block 216 and joint angle positions are calculated at block 218. Control signals are sent to the robot 10 (FIG. 1) causing the robot 10 to adjust its position at block 220.
  • the camera 36 captures the image of the new positions of point 46 and the target 40 at block 230.
  • the processor 17 processes the image at block 240 to generate a datum point y(θ).
  • y(θ) is passed to block 214 for processing as described above.
  • the Taylor series expansion about $(\theta, t)$ is $F(\theta + h_\theta, t + h_t) = F(\theta, t) + F_\theta h_\theta + F_t h_t + \cdots$, where $F_\theta$, $F_t$ are partial derivatives and $h_\theta$, $h_t$ are increments of $\theta$, $t$.
  • minimizing for a fixed sampling period gives $0 = F_\theta + F_{\theta\theta} h_\theta + F_{t\theta} h_t + O(h_\theta^2)$, where $O(h_\theta^2)$ indicates second order terms and $h_t$ is absorbed into $h_\theta$ since it is assumed they are on the same order of magnitude.
  • Equation (1) is referred to as the "dynamic" Newton's method.
  • the Jacobian J can be replaced in the Gauss-Newton method by an estimated Jacobian, J , using Broyden estimation.
  • the iteration then becomes a quasi-Newton method where the term S is set to zero.
  • a dynamic Gauss-Newton method is similar to the dynamic Newton method provided S is not too large.
  • Broyden's method is a quasi-Newton method; the algorithm substitutes an estimated Jacobian for the analytic Jacobian based on changes in the error function corresponding to changes in state variables.
  • the estimated Jacobian is a secant approximation to the Jacobian.
  • the moving target scenario requires that the appropriate time function derivatives are included.
  • a dynamic Broyden's method for a moving target scenario can be derived in a similar manner to the dynamic quasi-Newton method. However, for brevity the derivation is omitted here.
  • let $\hat{J}_k$ represent the approximation to $J_k$.
  • the Jacobian represents a composite Jacobian including both robot and image Jacobians.
  • the Broyden update, $\Delta\hat{J}$, for a static target scenario is given by $\hat{J}_{k+1} = \hat{J}_k + \dfrac{\left(\Delta f - \hat{J}_k h_\theta\right) h_\theta^T}{h_\theta^T h_\theta}$.
  • the proposed dynamic Broyden update contains an additional term, $\dfrac{\partial f_k}{\partial t} h_t$, having a rate of change component.
  • the $\dfrac{\partial f}{\partial t} = -\dfrac{\partial y^*}{\partial t}$ term represents a rate of change in the detected target location (velocity).
  • a scalar may have an added constant, and/or may be further integrated, without departing substantially from the principles of the method and apparatus of the present invention.
  • the $\partial f/\partial t$ rate of change term includes the velocity (rate of change) of the target, the velocity (rate of change or movement) of the end-effector, and any movement or repositioning of the camera. For example, if the camera is accidentally bumped or intentionally repositioned during the target tracking process, the tracking process will not be interrupted and the tracking process will be successfully and accurately effected.
  • the algorithm may display instabilities in the presence of noise above some level.
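The dynamic Broyden update described in the preceding items can be written compactly in code. The following sketch is illustrative only and uses the notation above: delta_f is the change in the image-space error between samples, h_theta the change in joint angles, h_t the sampling period, and df_dt an estimate of the time rate of change of the error (for f = y(θ) − y*(t), the negated target velocity in image space). The function name and the small-motion guard are assumptions, not part of the patent.

```python
import numpy as np

def dynamic_broyden_update(J_hat, delta_f, h_theta, df_dt, h_t):
    """Rank-one dynamic Broyden update of the estimated Jacobian J_hat.

    delta_f : change in the error f between two samples
    h_theta : change in the joint angles between the same samples
    df_dt   : estimated rate of change of f with respect to time
    h_t     : sampling period
    Setting df_dt to zero recovers the static Broyden update.
    """
    denom = float(h_theta @ h_theta)
    if denom < 1e-12:                 # no joint motion: keep the current estimate
        return J_hat
    residual = delta_f - J_hat @ h_theta - df_dt * h_t
    return J_hat + np.outer(residual, h_theta) / denom
```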
  • Equations for $\hat{J}_k$ and $P_k$ define a recursive update for estimating $J_k$.
  • a dynamic quasi-Newton method using the RLS Jacobian estimation scheme follows.
  • the format is similar to the dynamic Broyden's method with the exception of $\lambda$ and $P_k$.
  • the parameter $\lambda$ can be tuned to average in more or less of the previous information. A $\lambda$ close to 1 results in a filter with a longer memory.
  • the RLS algorithm may be used for system identification, particularly for visually guided control.
  • One skilled in the art will realize that the above-described quasi-Newton method with RLS estimation will be equally applicable to other systems.
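A sketch of the RLS form of the Jacobian estimator is given below. It follows the standard exponentially weighted recursive least squares recursion, with lam the forgetting factor λ and P the weighting matrix carried between samples; the exact symbols, default values, and initialisation are assumptions for illustration rather than the patent's own listing.

```python
import numpy as np

def rls_jacobian_update(J_hat, P, delta_f, h_theta, df_dt, h_t, lam=0.95):
    """Recursive least squares (RLS) update of the estimated Jacobian.

    lam is the forgetting factor (0 < lam <= 1); a value close to 1 averages
    in more of the previous information, giving the filter a longer memory.
    P is carried between calls and is typically initialised to a scaled identity.
    """
    Ph = P @ h_theta
    denom = lam + float(h_theta @ Ph)
    gain = Ph / denom                                    # RLS gain vector
    residual = delta_f - J_hat @ h_theta - df_dt * h_t   # dynamic residual
    J_new = J_hat + np.outer(residual, gain)
    P_new = (P - np.outer(Ph, Ph) / denom) / lam
    return J_new, P_new
```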
  • FIG. 3 is a block diagram 310 representing the dynamic quasi-Newton method with RLS estimation.
  • Block 312 corresponds to the robot end-effector position.
  • Block 314 corresponds to the target position. The target position is subtracted from the robot end-effector position at node 316 to determine an error value.
  • Block 318 provides a gain to the error signal and block 320 provides a gain to the target position. The outputs of blocks 318 and 320 are summed at node 322 and passed onward as shown in FIG. 3.
  • FIG. 4 is a block diagram showing the Jacobian estimation scheme 324 of FIG. 3. The gain $K_k$ is given by $K_k = P_{k-1} h_{\theta_k}\left(\lambda + h_{\theta_k}^T P_{k-1} h_{\theta_k}\right)^{-1}$.
  • Summing node 412 sums the two values as shown in FIG. 4.
  • Block 414 provides a gain to the output of node 412.
  • Node 416 and block 418 provide a feedback loop as shown.
  • Block 420 provides a final gain to the output of block 416.
  • the output of block 420 is sent to the dynamic quasi-Newton block 328 (FIG. 3) and is sent back to node 412.
  • FIG. 5 shows a simple one degree-of-freedom (1 DOF) system that has been simulated to test the dynamic controller of the present invention and the dynamic Broyden update method.
  • a target 540 is shown on a workpiece 538.
  • An arm 510 fixed to a controllable rotary joint 520 is located 400 millimeters (mm) from the midpoint of the target 540 motion.
  • a sensory system such as the camera 36 and controller 14 of FIG. 1, determines the position of the target 540 and where the arm 510 crosses the line 560 of target 540 motion. No noise was added to the sensor data for this simulation. Error is measured as the distance between the arm 510 and the target 540 along the line of target motion.
  • the target 540 is moving sinusoidally with an amplitude of 250mm at 1.5 radians/second (rad/s) which results in a maximum speed of 375 millimeters/second (mm/sec).
  • the simulation is performed with a 50ms sampling period.
  • Velocity state estimates are computed by first order differencing of the target 540 position.
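A minimal simulation in the spirit of this one degree-of-freedom test is sketched below. The geometry is an assumption (the arm is taken to cross the line of target motion at a point 400 mm x tan(θ) from the line's midpoint), as are the initial joint angles and the initial Jacobian guess, so the number it prints will not necessarily reproduce the results reported for FIGS. 6A and 6B; it only illustrates the scalar dynamic quasi-Newton step and the scalar (secant) form of the dynamic Broyden update.

```python
import numpy as np

R = 0.400                 # distance from the rotary joint to the midpoint of the target line (m)
AMP, OMEGA = 0.250, 1.5   # 250 mm amplitude at 1.5 rad/s (maximum speed 375 mm/s)
H_T = 0.050               # 50 ms sampling period

def target(t):                      # target position along its line of motion
    return AMP * np.sin(OMEGA * t)

def arm_crossing(theta):            # where the arm crosses the line of target motion (assumed geometry)
    return R * np.tan(theta)

theta_prev, theta = 0.0, 0.02       # small assumed seed motion so the first secant update is defined
J_hat = R                           # rough initial guess of d(crossing)/d(theta)
y_star_prev = target(0.0)
f_prev = arm_crossing(theta_prev) - y_star_prev
errors = []

for k in range(1, 400):
    t = k * H_T
    y_star = target(t)
    f = arm_crossing(theta) - y_star          # error along the line of target motion
    errors.append(f)

    df_dt = -(y_star - y_star_prev) / H_T     # velocity estimate by first order differencing
    h_theta = theta - theta_prev

    # scalar (secant) form of the dynamic Broyden update
    if abs(h_theta) > 1e-9:
        J_hat += (f - f_prev - J_hat * h_theta - df_dt * H_T) * h_theta / (h_theta * h_theta)

    # scalar dynamic quasi-Newton step
    step = -(f + df_dt * H_T) / J_hat
    theta_prev, theta = theta, theta + step
    f_prev, y_star_prev = f, y_star

print("rms tracking error [mm]:", 1e3 * np.sqrt(np.mean(np.square(errors))))
```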
  • FIG. 6A shows the tracking error 610 for a controller operating under a prior art static quasi-Newton method for the system of FIG. 5.
  • FIG. 6B shows the tracking error 620 for a controller operating under a dynamic quasi-Newton method of the present invention for the system of FIG. 5.
  • the error 610 for the prior art static quasi-Newton method in FIG. 6A appears chaotic, and contains spikes several orders of magnitude higher than the amplitude of the target 540 (FIG. 5) motion.
  • the root mean square (rms) error for the prior art static quasi-Newton method in FIG. 6A is about 55mm.
  • the steady-state error 620 using the dynamic quasi-Newton method of the present invention, shown in FIG. 6B, has an rms value of approximately 1mm.
  • the above-described one-dimensional sensor-based control example demonstrates a significant performance improvement for the tracking of moving targets using the dynamic quasi-Newton method and dynamic Broyden Jacobian estimator for a 1-DOF manipulator. Similar success for visual servo control of a 6-DOF manipulator in simulation is described in detail in the above-mentioned IEEE proceedings paper, A Dynamic Quasi-Newton Method for Uncalibrated Visual Servoing, which is incorporated entirely herein by reference.
  • the dynamic quasi-Newton algorithm can be implemented in hardware, software, firmware, or a combination thereof.
  • the dynamic quasi-Newton algorithm is implemented in software or firmware that is stored in a memory and that is executed by a suitable instruction execution system.
  • the dynamic quasi-Newton algorithm can be implemented with any or a combination of the following technologies, which are all well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit having appropriate logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
  • a dynamic quasi-Newton algorithm program, which includes an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
  • a "computer-readable medium” can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium.
  • the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (magnetic), a read-only memory (ROM) (magnetic), an erasable programmable read-only memory (EPROM or Flash memory) (magnetic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical).
  • the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
  • a mechanical system controller employing the above-described processing algorithm is a car cruise control system.
  • a car speedometer 710 is shown in FIG. 7. Here, the present car speed of 30 miles per hour (MPH) is indicated by the speedometer needle 712. If the desired speed is 40 mph, the car is required to accelerate until the desired 40 mph speed is reached.
  • Such changing parameters include the mass of the car (weight), surface of the road (flat, uphill or downhill), and power delivery capabilities of the motor and power train (transmission gear state, fuel consumption rate, etc.).
  • Prior art methods have produced reasonably effective controllers for a car cruise control system which anticipate and account for many of these variable parameters.
  • other non-variable parameters in the prior art models could change, such as a change in a speed sensor as the speedometer cable stretches and wears with time. Changes in these types of parameters could not be accounted for in a prior art mechanical system model.
  • the car speed and the desired speed are the only two required controller inputs (mechanical system parameters) necessary for the computer controller.
  • the car speed could be sensed by the angular position of the speedometer needle 712.
  • the desired speed could be specified using any method commonly employed in the art.
  • a controller for a car cruise control system implemented by the processing algorithm would not be affected by changes in parameters modeled in the prior art controllers.
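The following sketch suggests how the same model-free update could be applied to a cruise control. The only controller input is the speed error; the sensitivity of that error to the throttle setting is estimated online by the scalar secant update, so changes in vehicle mass, road grade, power train state, or even a worn speedometer cable are simply absorbed into the running estimate. The class, its default parameters, and the sensor values in the usage lines are hypothetical illustrations, not part of the patent or of any real vehicle interface.

```python
class ModelFreeScalarController:
    """Scalar form of the dynamic quasi-Newton controller: no plant model is used;
    the sensitivity of the error to the control input is estimated from observed changes."""

    def __init__(self, j_initial=1.0, h_t=0.1):
        self.j_hat = j_initial      # estimated d(error) / d(control input)
        self.h_t = h_t              # sampling period in seconds
        self.f_prev = None
        self.du_prev = 0.0

    def step(self, error, error_rate=0.0):
        # scalar dynamic secant update of the sensitivity estimate
        if self.f_prev is not None and abs(self.du_prev) > 1e-9:
            residual = error - self.f_prev - self.j_hat * self.du_prev - error_rate * self.h_t
            self.j_hat += residual / self.du_prev
        # scalar dynamic quasi-Newton step on the control input
        du = -(error + error_rate * self.h_t) / self.j_hat
        self.f_prev, self.du_prev = error, du
        return du

# usage with hypothetical cruise-control values (speed error in mph, throttle in arbitrary units)
current_speed_mph, desired_speed_mph = 30.0, 40.0
ctrl = ModelFreeScalarController(j_initial=1.0, h_t=0.1)
throttle = 0.2
throttle += ctrl.step(current_speed_mph - desired_speed_mph)   # negative error: open the throttle
```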
  • the above-described processing algorithm enables a controller which is applicable to other tracking systems.
  • An example of another alternative embodiment is an automatic car spacing control system.
  • Such a controller would be significantly more complex than the above-mentioned car cruise control system.
  • a mechanical system 810 with two cars travelling on a road 811 is shown in FIG. 8.
  • the objective of the controller is to ensure that a minimum distance, D 812, is maintained.
  • D 812 is measured from the rear bumper 814 of the lead car 816 to the front bumper 818 of the following car 820.
  • the control action would be deceleration of the following car 820 whenever the minimum distance, D 812, criterion is violated.
  • Sensor 822 measures the actual distance between the front bumper 818 and the rear bumper 814.
  • Such a sensor could be based on any one of a variety of sensing techniques commonly employed in the art, such as infrared, microwave, radar, sound, vision or the like.
  • a controller employing the apparatus and method of the present invention would sense the actual distance and adjust the speed of the following car 820 whenever the minimum distance, D 812, criterion is violated.
  • a significant advantage of the controller is that a detailed model of the position of sensor 822 is not required. That is, the sensor could be disposed on the following car 820 at any convenient location. Also, since no mechanical system model is required, the controller would work equally well on any model, make or variation of any motorized vehicle. Furthermore, this embodiment of the controller would work equally well on any moving object, such as a boat, train, airplane or the like, particularly if the controller provided a warning indication coupled with the control action.
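As with the cruise-control sketch above, a hedged illustration of the spacing check is given below; j_hat would be maintained by the same scalar secant update, and the names and units are assumptions only. A real system would combine the computed deceleration with the warning indication mentioned above.

```python
def spacing_control_action(measured_distance_m, min_distance_m, j_hat, error_rate=0.0, h_t=0.1):
    """Speed adjustment for the following car; acts only when the minimum-distance
    criterion is violated. j_hat is the running estimate of d(spacing error)/d(speed
    command), maintained online exactly as in the cruise-control sketch."""
    error = min_distance_m - measured_distance_m       # positive when the cars are too close
    if error <= 0.0:
        return 0.0                                     # criterion satisfied: no action
    return -(error + error_rate * h_t) / j_hat         # negative value decelerates the following car
```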
  • FIG. 9 illustrates such a mechanical system having a mobile robot 910.
  • Mobile robot 910 is similar in construction to the robot 10 of FIG. 1.
  • This alternative embodiment operates in a similar manner as the preferred embodiment of the present invention, except that workpiece 938 does not move.
  • a pedestal 924 is mounted on a mechanized carriage 925, or the like.
  • Elements in FIG. 9 that are similar to those in FIG. 1 bear similar reference numerals in that reference numerals in FIG. 9 are preceded by the numeral "9".
  • Processor 917 provides control signals to controller 914 based upon the images of the target 940 on workpiece 938 and the point 946 on end effector 934 detected by camera 936.
  • the predetermined task of moving end effector 934 to the workpiece 938 is accomplished by moving robot arm 912 members 918, 920 and 922

Abstract

An apparatus and method for enabling an uncalibrated, model independent controller for a mechanical system using a dynamic quasi-Newton algorithm which incorporates velocity components of any moving system parameter(s) is provided. In the preferred embodiment, tracking of a moving target (40) by a robot (10) having multiple degrees of freedom is achieved using an uncalibrated model independent visual servo control. Model independent visual servo control is defined as using visual feedback to control a robot's servomotors without a precisely calibrated kinematic robot (10) model or camera (36) model. A processor (17) updates a Jacobian and a controller (14) provides control signals such that the robot's end effector (34) is directed to a desired location relative to a target (40) on a workpiece (38).

Description

UNCALIBRATED DYNAMIC MECHANICAL SYSTEM CONTROLLER
CLAIM OF PRIORITY
This application claims priority to, and the benefit of the filing date of, copending U.S. provisional application entitled, "Dynamic Quasi-Newton Method for Uncalibrated Visual Servoing," having ser. no. 60/117,829, filed January 29, 1999, which is entirely incorporated herein by reference.
TECHNICAL FIELD
The present invention is generally related to an uncalibrated, model independent controller for a mechanical system and, more particularly, is related to a system and method employed by a robot in visual tracking of a moving target.
BACKGROUND OF THE INVENTION
Control of mechanical systems by a computer, in the prior art, is typically based on a model of the mechanical system which relates controller inputs (sensor data) to the controller outputs (variables controlled by the computer). One example of a mechanical system controller is the cruise control apparatus of a car. Here, the controller input variable for the cruise control apparatus is the speed of the car as sensed by a tachometer, a speedometer, or the like. The detected speed of the car, or a representative parameter such as an error signal, is input into the controller computer.
Based on whether the car must accelerate or decelerate, the controller computer initiates a control action (controller output variable), such as depressing the gas pedal more to initiate an acceleration of the car.
In the prior art, the key to controlling any mechanical system is the accurate modeling of the mechanical system that relates the controller inputs (sensor data) to the appropriate setting of the controller outputs. Models of a mechanical system have been traditionally developed in one of two ways. The first prior art method is to construct the mechanical system model using analytical techniques. This method involves mathematical modeling of all the system parameters and physical parameters. While this method is clearly the dominant prior art modeling method, several significant problems arise. First, the model of the mechanical system is only as good as the data used to construct the model. Thus, if the measurements of the physical dimensions of the mechanical system are not extremely accurate, the model will not be valid. In the world of mass production, this means that parts must be constructed very accurately in order to duplicate the mechanical system upon which the controller model is based. A slight difference between one part of the product and the mechanical system model could render the entire controller model invalid for the product in which that part is used. Additionally, if any parameter in the system changes over time, then the model may no longer be valid. Any time a model no longer accurately represents the mechanical system, the controller computer will be incapable of performing as desired.
Another more recent prior art modeling technique for deriving a model of a mechanical system involves the use of neural networks and/or fuzzy logic to construct the relationship between controller inputs and outputs. The appeal of this second approach is that some of the measurement errors that frequently plague the first modeling method can be avoided. The actual construction of the input/output relationship varies slightly between the neural network and the fuzzy logic modeling methods. For neural networks, a training period is required to construct the proper relationship between the input and output variables. This training period can involve a rather lengthy period of time because of the inputting of various levels of system inputs and the recording of the sensor data. After the neural network system performs sufficient tests over the entire range of input and output variables, the computer controller can construct the appropriate mechanical system model.
Developing a controller model of a mechanical system using fuzzy logic requires a person knowledgeable in the mechanical system. The person must construct the fuzzy sets and derive the correct relationships in order for the controller to function properly. The time and cost necessary for such a highly skilled person to develop the necessary fuzzy logic algorithms and computer control code is often prohibitive. As with the first modeling technique, models based upon either neural networking or fuzzy logic become invalid if any mechanical system parameter should change with time. In such a case, the neural networking and fuzzy logic controllers must be retrained in order to work properly.
As a further illustrative example of a mechanical system, a simple robot is considered in detail. Robotic technology is a fast paced changing art allowing each new generation of robots to perform tasks of ever-increasing difficulty. Some advanced robots employ a dynamic look-and-move method which allows a robot to position its end-effector relative to a target on a workpiece such that the robot can complete a predetermined operation. The robot's end-effector may be a tool, such as a cutter or a gripper, a sensor, or some other device. An end-effector is typically located on the distal end of a robot arm. The workpiece is the object of the robot's predetermined task. A target is a predetermined reference point on or near the workpiece. For example, a robot may have an arm holding a paint sprayer (end-effector) for spray painting (predetermined task) an automobile (workpiece).
If the workpiece is moving, such as when the above-described automobile is travelling down a continuously moving assembly line, the robot's predetermined task becomes considerably more complicated. Tracking of moving targets with cameras is known in the art. However, these prior art control methods are model based and require a precise kinematic model of the robot and the camera system geometry. That is, control algorithms directing movement of the robot members through controlled couplers, such as a joint, screw or the like, must have a detailed model of the relationships of each member of the robot, each coupler of the robot, and the robot's end-effector. Also, the control algorithm requires a precise model of the relationship between the robot, the camera and the workpiece. An algorithm based upon a camera and servo control of the controlled couplers is known as a visual servoing algorithm.
Before the robot can begin its predetermined task, all necessary reference locations must be calibrated. That is, the initial position of the robot's end effector, members and controlled couplers must be determined. Also, the position of the workpiece and the associated target must be determined. All reference locations must be calibrated to the camera position. If any of the above described elements are not in the proper initial position, algorithms must be recalculated or the out-of-position element must be moved into its predetermined location. For example, the workpiece may have to be placed into a jig and the jig positioned at an initial starting position.
Also, any movement or relocation of the camera will cause errors because reference locations will not be properly mapped into the visual servoing algorithm.
Development of a model independent approach for robotics control has been considered. The model independent approach describes visual servoing algorithms that are independent of hardware (robot and camera systems) types and configuration of the working system (robot and workpiece). The most thorough treatment of such a method has been performed by Jagersand and described in M. Jagersand, Visual Servoing Using Trust Region Methods and Estimation of the Full Coupled Visual-Motor Jacobian, IASTED Applications of Robotics and Control, 1996, and in M. Jagersand, O. Fuentes, and R. Nelson, Experimental Evaluation of Uncalibrated
Visual Servoing for Precision Manipulation, Proceedings of International Conference on Robotics and Automation, 1997. Jagersand formulates the visual servoing problem as a nonlinear least squares problem solved by a quasi-Newton method using Broyden Jacobian estimation. That is, tracking a moving target is a predictive method employing a static based algorithm solving a series of static problems and equations.
Jagersand demonstrates the robust properties of this type of control, demonstrating significantly improved repeatability over standard joint control even on an older robot with backlash. However, Jagersand's work focuses on servoing a robot end-effector to a static target. That is, the workpiece is not moving. Thus, a heretofore unaddressed need exists in the industry to address the aforementioned deficiencies and inadequacies.
SUMMARY OF THE INVENTION
The present invention provides an apparatus and method for enabling an uncalibrated, model independent controller for a mechanical system using a dynamic quasi-Newton algorithm which incorporates velocity components of any moving system parameter(s). Briefly described, in architecture, a preferred embodiment enabling tracking of a moving target by a robotic system having multiple degrees of freedom using an uncalibrated model independent visual servo controller is achieved by the method and apparatus of the present invention, and can be implemented as follows. Model independent visual servo control is defined as using visual feedback to control robot servomotors without precisely calibrated kinematic robot models and camera models.
A sensing system, such as a camera, detects the position of at least a target on a workpiece and a point on the robot, such as on the robot's end effector. The detected positions of the target and the point are converted by a processor into information used by the dynamic quasi-Newton algorithm of the present invention to calculate a translation model, hereinafter known as the Jacobian. A Jacobian specifies differential changes in the robot position per differential changes in the joint position of each joint (rotary or prismatic) of the robot. A controller sends control signals, as specified by the Jacobian calculated by the processor, to servomotors controlling the robot's rotating joints. With each subsequent update of the Jacobian, the positions of the robot's members are adjusted such that the end-effector is directed to a desired location relative to the target.
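As a purely numerical illustration of what such a Jacobian encodes, the sketch below uses made-up numbers for a hypothetical two-joint arm watched by a single camera; neither the arm nor the values are taken from the patent. Each column states how the image-plane position of the observed point changes per unit change of one joint.

```python
import numpy as np

# Illustrative estimated Jacobian: pixels of image-plane motion (u, v) per radian of joint motion.
J_hat = np.array([[120.0, -35.0],
                  [ 10.0,  80.0]])

d_theta = np.array([0.01, -0.02])   # small joint increments in radians
d_image = J_hat @ d_theta           # predicted image-plane displacement in pixels
print(d_image)                      # approximately [1.9, -1.5]
```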
The above-described controller is equally applicable to controlling other types of mechanical systems. Because the controller does not require a kinematic model, the controller is particularly suited to mechanical systems having at least one moving parameter, or to mechanical systems for which a precise kinematic model is impractical or impossible to develop, such as when a parameter of the kinematic model is changing. Nor does the controller require a detailed model of the sensing system. For example, the camera of a visual tracking system can be repositioned or bumped without interrupting the tracking process or diminishing tracking accuracy. Similarly, the sensor system may utilize different types of sensors, such as mechanical sensors, magnetic sensors, visual sensors, optical sensors, sonic sensors, temperature sensors, infrared sensors, force sensors, proximity sensors or the like.
One object of the present invention is to enable a model independent controller for controlling a mechanical system.
Another object of the present invention is to enable a controller to control a mechanical system after a change in a system parameter.
Another object of the present invention is to enable a controller to control a mechanical system after a change in a system parameter after a shortened retraining period.
Another object of the present invention is to enable a model independent controller for tracking a moving target for use in a tracking system.
Another object of the present invention is to enable a model independent visual servoing controller for tracking a moving target for use by a robot. Another object of the present invention is to enable an uncalibrated, model independent visual servoing controller for tracking a moving target for use by a robot. That is, the initial positioning of the robot end-effector, members and controlled couplers, or the initial positioning of the workpiece and the associated target, is not required. Also, exact positioning of the camera in a predetermined location is not required. The robot will effectively and accurately perform its predetermined task even if the camera is displaced or if the workpiece is not properly positioned on the moving conveyor belt.
Another object of the present invention is to enable a model independent controller for tracking a moving target for use in a controller system which utilizes a sensor, or combination of sensors, such as but not limited to a mechanical sensor, a magnetic sensor, an optical sensor, a visual sensor, a sonic sensor, a temperature sensor, an infra-red sensor, a force sensor, a proximity sensor or other sensors as commonly known in the art. Other systems, methods, features, and advantages of the present invention will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present invention, and be protected by the accompanying claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present invention. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
FIG. 1 is a diagram of a robot and workpiece system where the workpiece is moving along a conveyor.
FIG. 2 is a controller block diagram of the preferred embodiment of the invention employing a dynamic quasi-Newton algorithm.
FIG. 3 is a block diagram of an embodiment of the invention employing a dynamic quasi-Newton with a recursive least squares (RLS) algorithm.
FIG. 4 is a block diagram showing the Jacobian estimation scheme of FIG. 3. FIG. 5 is a diagram of a one degree-of-freedom system used in testing the dynamic controller of FIG. 2.
FIG. 6A is a graphical representation illustrating the tracking error of the one degree-of-freedom system of FIG. 5 using prior art control methods.
FIG. 6B is a graphical representation illustrating the tracking error of the one degree-of-freedom system of FIG. 5 using control methods of the present invention. FIG. 7 is a diagram of a car speedometer which is controlled by an alternative embodiment of the controller.
FIG. 8 is a diagram of a mechanical system having two cars separated by a distance which is controlled by an alternative embodiment of the controller. FIG. 9 is a diagram of a robot and workpiece system where the robot is moving towards the workpiece.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
I. Description of a Preferred Embodiment
FIG. 1 shows a mechanical system which includes a controller computer 14, a robot 10 and a workpiece 38. The robot 10 has a multi-member robot arm 12 connected to a controller computer 14 via a control cord 16. In the preferred embodiment, processor 17 resides within the controller 14; however, processor 17 may reside in any convenient location outside of the controller 14. The robot 10, used to illustrate the method and apparatus of the present invention in a preferred embodiment, includes a pedestal 24, upon which the robot arm 12 is mounted. The robot arm 12 has three members 18, 20, and 22. Member 18 is connected to pedestal 24 with a rotating joint 26. Residing within rotating joint 26 is a sensor, known as a joint resolver (not shown), which detects the angular position of the rotating joint 26. The angular position of rotating joint 26 is controlled by a servoing motor (not shown). The position of member 18 is known when the angular position of rotating joint 26 is determined. Controller 14 sends control signals to the servoing motor to adjust the angular position of rotating joint 26, thereby controlling the position of member 18. Member 20 is connected to member 18 by rotating joint 28. Member 22 is connected to member 20 by rotating joint 30. Rotating joint 30 and rotating joint
28 are controlled in a similar manner as rotating joint 26. End-effector 34 is connected to arm member 22 by wrist joint 32. Wrist joint 32 may have up to three degrees of freedom of movement. The position of member 20, member 22 and end effector 34 is controlled by adjusting the angular position of their respective joints.
A camera 36 views workpiece 38 positioned on conveyor 42. In this example system, workpiece 38 is moving on conveyor 42 in the direction shown by the arrow 44. The camera 36 is viewing the target 40 located on the workpiece 38 and a predefined point 46 on end-effector 34. The image recorded by camera 36, which contains images of the target 40 and point 46, is transmitted to a processor 17 residing in controller 14 via means commonly employed in the art, such as by cable (not shown), radio signals, infrared signals or the like. The image is converted by the processor 17 into pixels employing methods commonly used in the art. Pixels associated with target 40 and point 46 are then marked and used as the points for reference necessary for the processing algorithm. The processing algorithm is then executed by the processor 17 to generate a translation model, or Jacobian. The Jacobian specifies differential changes in the robot position per differential changes in the angular position of rotating joints 26, 28, 30 and 32. Processor 17 provides the controller 14 control signals which are sent to the robot 10 such that the end-effector 34 is directed to the desired location relative to the target 40. When the robot 10 end-effector 34 is properly positioned, the robot 10 can begin its predefined task. One having ordinary skill in the art will realize that processor 17 can reside at any convenient location and provide control signals to controller 14 using methods known in the art without departing substantially from the principles of the present invention.
In a preferred embodiment, an image-based visual servoing method, which can be classified as a dynamic look-and-move method, is illustrated. In addition, the processing method is endpoint closed-loop (ECL), which is a method employing a vision system that views both the target 40 and the point 46 located on the end effector 34. One skilled in the art will appreciate that alternative embodiments may detect the position of two points using any detection means or combination of detection means, such as but not limited to, mechanical sensors, magnetic sensors, visual sensors, optical sensors, sonic sensors, temperature sensors, infrared sensors, force sensors, proximity sensors or the like. Additionally, one skilled in the art will realize that alternative embodiments of the invention will perform equally well with an eye-in-hand method, or an endpoint open loop system, where the camera is mounted on the robot's end effector.
II. Controller Block Diagram
FIG. 2 shows the controller block diagram as applied to the robot 10 of FIG. 1. Positions of rotating joints 26, 28 and 30, position of wrist joint 32, positioning of the pedestal (if movable) and any other moveable connectors of the robot 10 (see FIG. 1) are calculated using a nonlinear least squares optimization method which minimizes the error in the image plane. The processing algorithm estimates the Jacobian on-line and does not require calibrated models of either the camera configuration or the robot kinematics. A Jacobian, for the robot embodiment, is the group of servo motor angle positions controlled by the controller 14 (controller outputs). Often, servo motor angle position information is grouped in a matrix format, or the like, to facilitate the mathematical representation and calculation of the controller outputs by the processor 17. This means, for example, that the processing algorithm, hereinafter the dynamic quasi-Newton algorithm, provides a Jacobian (joint positions as servo motor angles) as reference inputs for the joint-level controllers. As system parameters change with time (work piece moving down the conveyor belt), the processing algorithms (dynamic quasi-Newton algorithm) incorporate the velocity of the changing parameters to calculate a Jacobian update (new joint positions) for the controller such that the robot can perform its predetermined task (moving the end effector to the target).
Describing FIG. 2 in detail, the camera 36 captures an image which includes images of at least the target 40 and the point 46 on the end effector 34 (FIG. 1), as shown in block 210. The image is then processed by the processor 17 residing in controller 14, shown by block 212, to generate a datum point, y*, of the moving target
40. An error signal f is generated at block 214 by subtracting datum point y* from the datum point y(θ). The error signal contains rate of change information, such as velocity of the moving target 40 of FIG. 1. The Jacobian is updated by the processor at block 216 and joint angle positions are calculated at block 218. Control signals are sent to the robot 10 (FIG. 1) causing the robot 10 to adjust its position at block 220.
The camera 36 captures the image of the new positions of point 46 and the target 40 at block 230. The processor 17 processes the image at block 240 to generate a datum point y(θ). y(θ) is passed to block 214 for processing as described above.

III. Dynamic Quasi-Newton Algorithm Defined
The dynamic quasi-Newton algorithm, as described above, is now described in greater detail. For a moving target 40 (FIG. 1) at position y*(t), and point 46 of the end-effector 34 at position y(θ), as seen in an image plane, the residual error between the target 40 and point 46 can be expressed as f(θ,t) = y(θ) - y*(t). The objective function to be minimized, F, is a function of the squared error:

F(θ,t) = (1/2) f^T(θ,t) f(θ,t)

The Taylor series expansion about (θ, t) is
F(θ + h_θ, t + h_t) = F(θ,t) + F_θ h_θ + F_t h_t + ...

where F_θ, F_t are partial derivatives and h_θ, h_t are increments of θ and t. For a fixed sampling period h_t, F is minimized by solving

0 = ∂F(θ + h_θ, t + h_t)/∂θ

0 = F_θ + F_θθ h_θ + F_tθ h_t + O(h_θ^2)

The term O(h_θ^2) indicates second-order terms, where h_t is absorbed into h_θ since it is assumed they are of the same order of magnitude. Dropping the higher-order terms yields

0 = F_θ + F_θθ h_θ + F_tθ h_t

θ_{k+1} = θ_k - (F_θθ)^{-1} (F_tθ h_t + F_θ)     (1)

where the discretization h_θ = θ_{k+1} - θ_k is introduced. Equation (1) is referred to as the "dynamic" Newton's method. If the target is static, F is a function of θ only and F_tθ = 0. This results in the "static" Newton's method, θ_{k+1} = θ_k - (F_θθ)^{-1} F_θ. Expanding the terms F_θ, F_θθ, and F_tθ,
F_θ = J^T f

F_θθ = J^T J + S

F_tθ = J^T ∂f/∂t

and substituting results in

θ_{k+1} = θ_k - (J_k^T J_k + S_k)^{-1} (J_k^T (∂f_k/∂t) h_t + J_k^T f_k)

where

S = Σ_i f_i (∂^2 f_i/∂θ^2)

and

J = ∂f/∂θ

the Jacobian. To compute the terms S and J analytically would require a calibrated system model. The term S is difficult to estimate, but as θ_k approaches the solution, it approaches zero. Hence, it is often dropped (also known as the Gauss-Newton method). The convergence properties of a dynamic Gauss-Newton method are similar to those of the dynamic Newton method provided S is not too large.
The Jacobian J can be replaced in the Gauss-Newton method by an estimated Jacobian, Ĵ, using Broyden estimation. The iteration then becomes a quasi-Newton method in which the term S is set to zero:

θ_{k+1} = θ_k - (Ĵ_k^T Ĵ_k)^{-1} Ĵ_k^T ((∂f_k/∂t) h_t + f_k)

The term ∂f_k/∂t is a rate-of-change term, where ∂f/∂t = -∂y*/∂t, such as the velocity of the target 40 (FIG. 1) in the image space viewed by the camera. Since only first-order information on the target 40 is available from the vision information, velocity estimates are used. Broyden's method is a quasi-Newton method; the algorithm substitutes an estimated Jacobian for the analytic Jacobian based on changes in the error function corresponding to changes in the state variables. The estimated Jacobian is a secant approximation to the Jacobian. As with Newton's method, the moving-target scenario requires that the appropriate time derivatives be included. A dynamic Broyden's method for a moving-target scenario can be derived in a manner similar to the dynamic quasi-Newton method; for brevity, the derivation is omitted here.
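In practice the velocity estimate is obtained by differencing successive target measurements (the simulation in Section V uses exactly this first-order differencing of the target position). A small illustrative Python helper, whose name and signature are assumptions rather than anything taken from the disclosure, is shown below.

import numpy as np

def estimate_dfdt(y_star_k, y_star_km1, h_t):
    """First-order difference estimate of the rate-of-change term.
    With f(theta, t) = y(theta) - y*(t), the explicit time derivative is
    df/dt = -dy*/dt, approximated here from two successive target images."""
    return -(np.asarray(y_star_k, dtype=float)
             - np.asarray(y_star_km1, dtype=float)) / h_t

# Example: a target at 100 px moving to 103 px over a 50 ms sample period
# gives an estimated df/dt of -60 px/s.
print(estimate_dfdt([103.0], [100.0], 0.05))   # -> [-60.]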
Let Ĵ_k represent the approximation to J_k. For this problem formulation, the Jacobian represents a composite Jacobian including both the robot and image Jacobians. The Broyden update, ΔĴ, for a static target scenario is given as follows:

ΔĴ = (Δf - Ĵ_{k-1} h_θ) h_θ^T / (h_θ^T h_θ)

Δf = f_k - f_{k-1}

f_k = f(θ_k),  h_θ = θ_k - θ_{k-1}

The proposed dynamic Broyden update contains an additional term, (∂f_k/∂t) h_t, having a rate-of-change component:

ΔĴ = (Δf - Ĵ_{k-1} h_θ - (∂f_k/∂t) h_t) h_θ^T / (h_θ^T h_θ)

Notice that if the target stops moving, the term ∂f_k/∂t = 0, and the dynamic Broyden update is identical to that for a static target.
Incorporating the dynamic Broyden update above results in the following quasi-Newton approach, known as the dynamic Broyden's method:

Dynamic Broyden's Method

Given f : R^n → R^m; θ_0, θ_1 ∈ R^n; Ĵ_0 ∈ R^{m×n}. Do for k = 1, 2, ...

Δf = f_k - f_{k-1}

h_θ = θ_k - θ_{k-1}

Ĵ_k = Ĵ_{k-1} + (Δf - Ĵ_{k-1} h_θ - (∂f_k/∂t) h_t) h_θ^T / (h_θ^T h_θ)

θ_{k+1} = θ_k - (Ĵ_k^T Ĵ_k)^{-1} Ĵ_k^T (f_k + (∂f_k/∂t) h_t)

Endfor
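For concreteness, the loop body above can be transcribed directly into a few lines of Python. The routine below is a sketch under stated assumptions: all vector arguments are 1-D numpy arrays, the caller supplies the current and previous error vectors and the differenced target-velocity estimate, and the pseudoinverse is used for the least-squares step; none of the names are taken from the patent.

import numpy as np

def dynamic_broyden_step(J, theta, theta_prev, f, f_prev, dfdt, h_t):
    """One iteration of the dynamic Broyden's method: secant update of the
    Jacobian estimate J (m x n) followed by the dynamic quasi-Newton step."""
    delta_f = f - f_prev
    h_theta = theta - theta_prev
    denom = h_theta @ h_theta
    if denom > 1e-12:                      # skip the update if the joints did not move
        J = J + np.outer(delta_f - J @ h_theta - dfdt * h_t, h_theta) / denom
    # theta_next = theta - (J^T J)^{-1} J^T (f + dfdt * h_t), via the pseudoinverse
    theta_next = theta - np.linalg.pinv(J) @ (f + dfdt * h_t)
    return J, theta_next

Each call corresponds to one pass through the Do loop, with f_k, f_{k-1} and the velocity estimate supplied by the vision processing.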
One skilled in the art will appreciate that the (∂f_k/∂t) h_t term, which represents a rate of change in the detected target location (velocity), may be multiplied by a scalar, may have an added constant, and/or may be further integrated, without departing substantially from the principles of the method and apparatus of the present invention. The ∂f_k/∂t term includes the velocity (rate of change) of the target, the velocity (rate of change or movement) of the end-effector and any movement or repositioning of the camera. For example, if the camera is accidentally bumped or intentionally repositioned during the target tracking process, the tracking process will not be interrupted and will still be effected successfully and accurately.
IV. Recursive Least Squares Estimation
Since the Jacobian update contains only information from the previous update, the algorithm may display instabilities in the presence of noise above some level.
Greater stability can be achieved if the Jacobian estimation considers data over a period of time instead of just the previous iteration. The increased stability can be achieved using an exponentially weighted recursive least squares (RLS) algorithm.
Equations for Ĵ_k and P_k define a recursive update for estimating J_k:

Ĵ_k = Ĵ_{k-1} + (Δf - Ĵ_{k-1} h_θ - (∂f_k/∂t) h_t)(λ + h_θ^T P_{k-1} h_θ)^{-1} h_θ^T P_{k-1}

P_k = λ^{-1} (P_{k-1} - P_{k-1} h_θ (λ + h_θ^T P_{k-1} h_θ)^{-1} h_θ^T P_{k-1})

A dynamic quasi-Newton method with RLS estimation is shown below.
Given f : R^n → R^m; θ_0, θ_1 ∈ R^n; Ĵ_0 ∈ R^{m×n}; P_0 ∈ R^{n×n}; λ ∈ (0, 1). Do for k = 1, 2, ...

Δf = f_k - f_{k-1}

h_θ = θ_k - θ_{k-1}

Ĵ_k = Ĵ_{k-1} + (Δf - Ĵ_{k-1} h_θ - (∂f_k/∂t) h_t)(λ + h_θ^T P_{k-1} h_θ)^{-1} h_θ^T P_{k-1}

P_k = λ^{-1} (P_{k-1} - P_{k-1} h_θ (λ + h_θ^T P_{k-1} h_θ)^{-1} h_θ^T P_{k-1})

θ_{k+1} = θ_k - (Ĵ_k^T Ĵ_k)^{-1} Ĵ_k^T (f_k + (∂f_k/∂t) h_t)

Endfor
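The RLS form translates into code just as directly. The following Python sketch, with assumed function names and calling conventions that do not come from the patent, implements the Ĵ_k and P_k recursions and the associated quasi-Newton step.

import numpy as np

def rls_jacobian_update(J, P, delta_f, h_theta, dfdt, h_t, lam=0.95):
    """Exponentially weighted RLS update of the Jacobian estimate J (m x n)
    and the matrix P (n x n), including the dynamic (target velocity) term."""
    denom = lam + h_theta @ P @ h_theta                      # scalar (lambda + h^T P h)
    K = (P @ h_theta) / denom                                # gain vector (column form)
    innovation = delta_f - J @ h_theta - dfdt * h_t          # prediction error
    J = J + np.outer(innovation, K)                          # rank-one Jacobian update
    P = (P - np.outer(P @ h_theta, h_theta) @ P / denom) / lam
    return J, P

def rls_quasi_newton_step(J, theta, f, dfdt, h_t):
    """Dynamic quasi-Newton step using the RLS-estimated Jacobian."""
    return theta - np.linalg.pinv(J) @ (f + dfdt * h_t)

As noted below, a λ close to 1 gives the estimator a longer memory, while a smaller λ discounts old data more quickly.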
The RLS algorithm minimizes a weighted sum of the squares of the differences at each iteration for (θ, t) = (θ_{k-1}, t_{k-1}). The format is similar to the dynamic Broyden's method with the exception of λ and P_k. The parameter λ can be tuned to average in more or less of the previous information. A λ close to 1 results in a filter with a longer memory. The RLS algorithm may be used for system identification, particularly for visually guided control. One skilled in the art will realize that the above-described quasi-Newton method with RLS estimation will be equally applicable to other systems. The expected output from the previous iteration,
Ĵ_{k-1} h_θ, is compared with the desired signal Δf - (∂f_k/∂t) h_t. This difference is multiplied by the gain K_{k-1} and the product is used to update the Jacobian estimate. FIG. 3 is a block diagram 310 representing the dynamic quasi-Newton method with RLS estimation. Block 312 corresponds to the robot end-effector position. Block 314 corresponds to the target position. The target position is subtracted from the robot end-effector position at node 316 to determine an error value. Block 318 provides a gain to the error signal and block 320 provides a gain to the target position. The outputs of blocks 318 and 320 are summed as shown at node 322 and sent to the
Jacobian Estimation block 324. Additionally, the error value from block 316 is summed with the output of block 320 as shown at node 326 and sent to the dynamic quasi-Newton block 328. The output of the dynamic quasi-Newton block 328 is returned to the target position, block 314, for calculations for the next time period. FIG. 4 is a block diagram showing the Jacobian estimation 324 scheme of
FIG. 3. The gain K_{k-1} is given below:

K_{k-1} = (λ + h_θ^T P_{k-1} h_θ)^{-1} h_θ^T P_{k-1}
Summing node 412 sums the two values as shown in FIG. 4. Block 414 provides a gain to the output of node 412. Node 416 and block 418 provide a feedback loop as shown. Block 420 provides a final gain to the output of node 416. The output of block 420 is sent to the dynamic quasi-Newton block 328 (FIG. 3) and is sent back to node 412.
One skilled in the art will realize that the RLS estimation is a special case of a Kalman filter.
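One concrete way to check that correspondence numerically is sketched below; the particular mapping (each row of Ĵ treated as a state with identity transition, covariance inflated by 1/λ at the prediction step, unit measurement noise) is an assumption chosen so that the Kalman recursion reduces to the RLS equations above, and is not spelled out in the patent.

import numpy as np

rng = np.random.default_rng(0)
n, lam = 3, 0.9
P = np.eye(n)
h = rng.standard_normal(n)

# RLS gain and covariance update as in Section IV
denom = lam + h @ P @ h
K_rls = P @ h / denom
P_rls = (P - np.outer(P @ h, h) @ P / denom) / lam

# Kalman filter: covariance inflated by 1/lam at prediction, measurement
# matrix h^T, unit measurement noise
P_pred = P / lam
K_kf = P_pred @ h / (1.0 + h @ P_pred @ h)
P_kf = (np.eye(n) - np.outer(K_kf, h)) @ P_pred

print(np.allclose(K_rls, K_kf), np.allclose(P_rls, P_kf))   # True True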
Derivations, theorems and proofs for the above-mentioned dynamic quasi-Newton algorithm, dynamic Broyden Jacobian update and the RLS estimation are described in detail in the Ph.D. thesis entitled Dynamic Quasi-Newton Method for Model Independent Visual Servoing, by Jennelle Armstrong Piepmeier, Georgia Institute of Technology, Atlanta, Georgia, July 29, 1999, which is incorporated entirely herein by reference.

V. Test Results of the Present Invention when Reduced to Actual Practice
FIG. 5 shows a simple one degree-of-freedom (1 DOF) system that has been simulated to test the dynamic controller of the present invention and the dynamic Broyden's update method. A target 540 is shown on a workpiece 538. An arm 510 fixed to a controllable rotary joint 520 is located 400 millimeters (mm) from the midpoint of the target 540 motion. A sensory system, such as the camera 36 and controller 14 of FIG. 1, determines the position of the target 540 and where the arm 510 crosses the line 560 of target 540 motion. No noise was added to the sensor data for this simulation. Error is measured as the distance between the arm 510 and the target 540 along the line of target motion. The target 540 is moving sinusoidally with an amplitude of 250 mm at 1.5 radians/second (rad/s), which results in a maximum speed of 375 millimeters/second (mm/s). The simulation is performed with a 50 ms sampling period. Velocity state estimates are computed by first-order differencing of the target 540 position. FIG. 6A shows the tracking error 610 for a controller operating under a prior art static quasi-Newton method for the system of FIG. 5. FIG. 6B shows the tracking error 620 for a controller operating under a dynamic quasi-Newton method of the present invention for the system of FIG. 5. The error 610 for the prior art static quasi-Newton method in FIG. 6A appears chaotic, and contains spikes several orders of magnitude higher than the amplitude of the target 540 (FIG. 5) motion. The root mean square (rms) error for the prior art static quasi-Newton method in FIG. 6A is about 55 mm.
The steady-state error 620 using the dynamic quasi-Newton method of the present invention, shown in FIG. 6B, has an rms value of approximately 1 mm. These results for the one degree-of-freedom system of FIG. 5, with a controller operating under the dynamic quasi-Newton method of the present invention, strongly validate the control law and dynamic Broyden Jacobian update of the present invention.
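The 1-DOF test can be reproduced, at least qualitatively, with a short simulation. The Python script below is a hedged reconstruction using the parameters stated above (400 mm offset, 250 mm amplitude, 1.5 rad/s, 50 ms sample period, noise-free measurements); the geometry of the arm crossing point, the initial joint angles and the initial Jacobian estimate are assumptions, since the patent does not specify them.

import numpy as np

# Assumed geometry: the rotary joint sits 400 mm from the midpoint of the
# target line, and the arm crossing point is D*tan(theta) along that line.
D, AMP, OMEGA, H_T = 400.0, 250.0, 1.5, 0.05

def arm_crossing(theta):
    return D * np.tan(theta)

def target(t):
    return AMP * np.sin(OMEGA * t)

theta_prev, theta = 0.0, 0.01            # assumed initial joint angles
J = np.array([[D]])                      # assumed initial Jacobian estimate (mm/rad)
x_star_prev = target(0.0)
f_prev = np.array([arm_crossing(theta_prev) - x_star_prev])
errors = []
for k in range(1, 600):                  # 30 s of simulated tracking
    t = k * H_T
    x_star = target(t)
    f = np.array([arm_crossing(theta) - x_star])
    dfdt = np.array([-(x_star - x_star_prev) / H_T])   # first-order differencing
    h_th = np.array([theta - theta_prev])
    if h_th @ h_th > 1e-12:              # dynamic Broyden update
        J = J + np.outer(f - f_prev - J @ h_th - dfdt * H_T, h_th) / (h_th @ h_th)
    step = np.linalg.pinv(J) @ (f + dfdt * H_T)         # dynamic quasi-Newton step
    theta_prev, theta = theta, float(theta - step[0])
    f_prev, x_star_prev = f, x_star
    errors.append(abs(f[0]))
print("rms tracking error (mm):", np.sqrt(np.mean(np.square(errors[100:]))))

With these assumptions the steady-state error should settle near the millimeter level reported for FIG. 6B, although the exact value depends on the assumed geometry and initial conditions.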
The above-described one-dimensional sensor-based control example demonstrates a significant performance improvement for the tracking of moving targets using the dynamic quasi-Newton method and dynamic Broyden Jacobian estimator for a 1-DOF manipulator. Similar success for visual servo control of a 6-DOF manipulator in simulation is described in detail in the above-mentioned IEEE proceedings paper, A Dynamic Quasi-Newton Method for Uncalibrated Visual Servoing, which is incorporated entirely herein by reference.
VI. Alternative Embodiments
The dynamic quasi-Newton algorithm can be implemented in hardware, software, firmware, or a combination thereof. In the preferred embodiment, the dynamic quasi-Newton algorithm is implemented in software or firmware that is stored in a memory and that is executed by a suitable instruction execution system. If implemented in hardware, as in an alternative embodiment, the dynamic quasi-Newton algorithm can be implemented with any or a combination of the following technologies, which are all well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit having appropriate logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
A dynamic quasi-Newton algorithm program, which includes an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a nonexhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (magnetic), a read-only memory (ROM) (magnetic), an erasable programmable read-only memory (EPROM or Flash memory) (magnetic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.

Another alternative embodiment of a mechanical system controller employing the above-described processing algorithm is a car cruise control system. A car speedometer 710 is shown in FIG. 7. Here, the present car speed of 30 miles per hour (mph) is indicated by the speedometer needle 712. If the desired speed is 40 mph, the car is required to accelerate until the desired 40 mph speed is reached. One skilled in the art will realize that this simple control problem is actually quite complex. Many system parameters are constantly changing. Such changing parameters include the mass of the car (weight), the surface of the road (flat, uphill or downhill), and the power delivery capabilities of the motor and power train (transmission gear state, fuel consumption rate, etc.). Prior art methods have produced reasonably effective controllers for a car cruise control system which anticipate and account for many of these variable parameters. However, other parameters that are assumed to be fixed in the prior art models can also change, such as the response of a speed sensor as the speedometer cable stretches and wears with time. Changes in these types of parameters could not be accounted for in a prior art mechanical system model. With the present invention, the car speed and the desired speed are the only two controller inputs (mechanical system parameters) required by the computer controller. For example, the car speed could be sensed by the angular position of the speedometer needle 712. The desired speed could be specified using any method commonly employed in the art. A controller for a car cruise control system implemented by the processing algorithm would not be affected by changes in parameters modeled in the prior art controllers.
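The cruise-control example can be captured with the same dynamic quasi-Newton machinery reduced to one dimension. The fragment below is a hedged sketch only: the read_speed and command_throttle interfaces, the desired_speed schedule, the sample period and the throttle limits are all assumptions and do not appear in the patent; it is included simply to show that no vehicle model enters the controller.

import numpy as np

def cruise_control_loop(read_speed, command_throttle, desired_speed,
                        h_t=0.1, n_steps=600):
    """1-D dynamic quasi-Newton cruise control: the controller only sees the
    measured speed and the desired speed; no vehicle model is used."""
    u_prev, u = 0.0, 0.1                       # previous and current throttle commands
    j_hat = 1.0                                # scalar Jacobian estimate (speed change per throttle change)
    t = 0.0
    v_star_prev = desired_speed(t)
    f_prev = read_speed() - v_star_prev
    for _ in range(n_steps):
        command_throttle(u)
        t += h_t
        v_star = desired_speed(t)
        f = read_speed() - v_star              # speed error
        dfdt = -(v_star - v_star_prev) / h_t   # rate of change of the set point
        h_u = u - u_prev
        if abs(h_u) > 1e-9:                    # scalar dynamic Broyden update
            j_hat += (f - f_prev - j_hat * h_u - dfdt * h_t) / h_u
        step = (f + dfdt * h_t) / j_hat        # scalar dynamic quasi-Newton step
        u_prev, u = u, float(np.clip(u - step, 0.0, 1.0))
        f_prev, v_star_prev = f, v_star
    return u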
Additionally, the above-described processing algorithm enables a controller which is applicable to other tracking systems. An example of another alternative embodiment is an automatic car spacing control system. Such a controller would be significantly more complex than the above-mentioned car cruise control system. A mechanical system 810 with two cars travelling on a road 811 is shown in FIG. 8. The objective of the controller is to ensure that a minimum distance, D 812, is maintained. D 812 is measured from the rear bumper 814 of the lead car 816 to the front bumper 818 of the following car 820. For this embodiment of the present invention, the control action would be deceleration of the following car 820 whenever the minimum distance, D 812, criterion is violated. Sensor 822 measures the actual distance between the front bumper 818 and the rear bumper 814. Such a sensor could be based on any one of a variety of sensing techniques commonly employed in the art, such as infrared, microwave, radar, sound, vision or the like. A controller employing the apparatus and method of the present invention would sense the actual distance and adjust the speed of the following car 820 whenever the minimum distance, D 812, criterion is violated. A significant advantage of the controller is that a detailed model of the position of sensor 822 is not required. That is, the sensor could be disposed on the following car 820 at any convenient location. Also, since no mechanical system model is required, the controller would work equally well on any model, make or variation of any motorized vehicle. Furthermore, this embodiment of the controller would work equally well on any moving object, such as a boat, train, airplane or the like, particularly if the controller provided a warning indication coupled with the control action.
The above-described controller is equally applicable to other types of robotic devices with at least one degree of freedom (DOF) in movement of an end effector. One such possible alternative embodiment would control a mobile robot when the workpiece position is fixed. FIG. 9 illustrates such a mechanical system having a mobile robot 910. Mobile robot 910 is similar in construction to the robot 10 of FIG. 1. This alternative embodiment operates in a similar manner as the preferred embodiment of the present invention, except that workpiece 938 does not move. A pedestal 924 is mounted on a mechanized carriage 925, or the like. Elements in FIG. 9 that are similar to those in FIG. 1 bear similar reference numerals in that reference numerals in FIG. 9 are preceded by the numeral "9".
Processor 917 provides control signals to controller 914 based upon the images of the target 940 on workpiece 938 and the point 946 on end effector 934 detected by camera 936. The predetermined task of moving end effector 934 to the workpiece 938 is accomplished by moving robot arm 912 members 918, 920 and 922
(by adjusting the angular positions of joints 926, 928, 930 and 932) in a manner substantially similar to that in the preferred embodiment. Additionally, the mobile robot 910 is moved towards the workpiece 938 by repositioning the carriage 925 in the direction shown by arrow 944. Using the above-described controller, a detailed model of the mechanical system shown in FIG. 9 is not required. Jacobian updates based upon the dynamic quasi-Newton algorithm would be similar to that of the preferred embodiment, and would include additional velocity components for the motorized carriage 925.
VII. Variations and Modifications
It should be emphasized that the above-described embodiments of the present invention, particularly, any "preferred" embodiments, are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the invention. Many variations and modifications may be made to the above-described embodiment(s) of the invention without departing substantially from the spirit and principles of the invention. All such modifications and variations are intended to be included herein within the scope of this disclosure and the present invention and protected by the following claims.

Claims

Therefore, having thus described the invention, at least the following is claimed:
1. A method for calculating control data for controlling a first parameter of a system, comprising the steps of: receiving data that represents a measurement of said first parameter in said system; translating said data using at least rate of change information of said first parameter; and producing a translation model, wherein said translation model enables control of said first parameter so that said first parameter converges toward a predefined second parameter.
2. The method of claim 1, wherein said translating step uses a dynamic quasi-Newton algorithm.
3. The method of claim 1, wherein said translation model includes a dynamic Broyden Jacobian update.
4. The method of claim 1, wherein said translation model includes a dynamic recursive least squares Jacobian update.
5. The method of claim 1, wherein said translation model is a linear mathematical Jacobian.
6. The method of claim 1, wherein said translation model is a non-linear mathematical Jacobian.
7. The method of claim 1, wherein said predefined second parameter is chosen from the group consisting of a distance, a specific location, a specific temperature, a temperature range, a specific pressure, a pressure range, a specific volume, and a volume range.
8. The method of claim 1, further comprising the step of producing at least one control signal for a controller, based upon said translation model, and without creating and using a kinematic model of said system.
9. The method of claim 1, wherein said translation model includes at least velocity information associated with said first parameter.
10. The method of claim 9, wherein said velocity information is modified by addition of a number.
11. The method of claim 9, wherein said velocity information is modified by multiplication by a number.
12. The method of claim 9, wherein said velocity information is modified by integration.
13. The method of claim 9, wherein said system has a workpiece and a robot having at least one degree of freedom based upon at least one moveable connector on said robot, said first parameter represents a distance between an end-effector of said robot and a target point associated with said workpiece, and said rate of change information includes velocity information associated with said workpiece.
14. The method of claim 9, wherein said system has a workpiece and a robot having at least one degree of freedom based upon at least one moveable connector on said robot, said first parameter represents a distance between an end-effector of said robot and a target point associated with said workpiece, and said rate of change information includes velocity information associated with said target point.
15. The method of claim 9, wherein said system has a workpiece and a robot having at least one degree of freedom based upon at least one moveable connector on said robot, said first parameter represents a distance between an end effector of said robot and a target point associated with said workpiece, and said rate of change information includes velocity information associated with said end effector.
16. The method of claim 9, wherein said system has a workpiece, a camera and a robot having at least one degree of freedom based upon at least one moveable connector on said robot, said first parameter represents a distance between an end effector of said robot and a target point associated with said workpiece, and said rate of change information includes velocity information associated with said camera.
17. The method of claim 9, wherein said system has a workpiece and a robot having at least one degree of freedom based upon at least one moveable connector on said robot, said first parameter represents a distance between an end effector of said robot and a target point associated with said workpiece, and said velocity information comprises an angle for each at least one moveable connector of said robot and a corresponding time associated with said angle.
18. The method of claim 17, wherein said at least one moveable connector has a joint.
19. The method of claim 1, wherein said second parameter represents a predefined distance threshold.
20. The method of claim 1, wherein said second parameter represents a predefined location having a parameter value of zero.
21. The method of claim 1 , wherein said first parameter represents a distance between a first point and a second point.
22. The method of claim 21, wherein said second point is moving and said method further comprises the step of adjusting said translation model based upon movement of said second point.
23. The method of claim 21, wherein said system has a robot and a workpiece.
24. The method of claim 23, wherein said first point is on said robot and said second point is a target point on said workpiece.
25. The method of claim 24, wherein said first point is an end-effector on said robot.
26. The method of claim 1 , further comprising the step of sensing said data.
27. The method of claim 26, wherein said predefined second parameter is any one of the following: a distance, a specific location, a specific temperature, a temperature range, a specific pressure, a pressure range, a specific volume, a volume range.
28. The method of claim 26, wherein the sensing step employs a visual detection system.
29. The method of claim 28, wherein said visual detection scheme employs at least one camera.
30. The method of claim 29, wherein said at least one camera is moving.
31. The method of claim 20, wherein said first parameter represents a distance between a first point and a second point and said system has a robot having at least one degree of freedom based upon at least one moveable connector, said method further comprising the steps of: acquiring at least one image of said first point and said second point; producing an error signal from said at least one image, said error signal representing a displacement between said first point and said second point; producing said translation model as a mathematical dynamic Broyden Jacobian update, said mathematical dynamic Broyden Jacobian update comprising at least one angle with a corresponding time for said error signal; producing control data for said system based upon at least one future error signal derived from a dynamic Broyden Jacobian update, said control data comprising said at least one angle and said corresponding time information; and adjusting said moveable connector according to said control data.
32. The method of claim 1, further comprising the step of determining control data for a controller from said translation model, said controller configured to control said first parameter.
33. The method of claim 32, wherein said control data comprises an angular position of a moveable connector associated with said system and a time associated with said angular position.
34. The method of claim 33, further comprising the step of adjusting said moveable connector according to said control data.
35. A method for controlling a system having at least one degree of freedom based upon at least one moveable connector so that said system learns how to track such that a first point tracks a moving second point, comprising the steps of: acquiring at least one image of said first point and said moving second point; producing an error signal from said at least one image, said error signal representing a displacement between said first point and said second point in said image; producing a translation model for converting image space coordinates into system space coordinates, said translation model having at least velocity information pertaining to said first point and said second point; and causing said first point to track said moving second point by producing control data for said first point based upon at least one future error signal and said translation model.
36. The method of claim 35, further comprising the step of causing said first point to converge toward a predefined distance of said second point.
37. An apparatus for controlling a first parameter in a system, comprising: a receiver which receives data that represents a measurement of said first parameter in said system; a translator which translates said data using at least rate of change information of said first parameter; and a translation model based upon translation of said data.
38. The apparatus of claim 37, wherein said translator uses a dynamic quasi-Newton algorithm.
39. The apparatus of claim 37, wherein said translation model includes a dynamic Broyden Jacobian update.
40. The apparatus of claim 37, wherein said translation model includes a dynamic recursive least squares Jacobian update.
41. The apparatus of claim 37, wherein said translation model is created without a kinematic model of said system.
42. The apparatus of claim 37, wherein said translation model includes at least velocity information associated with said first parameter.
43. The apparatus of claim 42, wherein said system has a workpiece and a robot having at least one degree of freedom based upon at least one moveable connector on said robot, said first parameter represents a distance between a first
' point and a second point, and said velocity information comprises an angle for each at least one moveable connector of said robot and a corresponding time associated with said angle.
44. The apparatus of claim 43, wherein said at least one moveable connector has a joint.
45. The apparatus of claim 43, wherein said first point is an end effector of said robot and said second point is a target point associated with said workpiece.
46. The apparatus of claim 43, further comprising a controller, said controller producing at least one control signal for controlling said first parameter.
47. The apparatus of claim 46, wherein said second point is moving and said translator adjusts said translation model based upon movement of said second point.
48. The apparatus of claim 47, wherein said at least one control signal adjusts said at least one moveable connector so that said apparatus learns how to track and tracks said moving second point.
49. The apparatus of claim 48, wherein said at least one control signal adjusts said at least one moveable connector causing said first point to converge toward a predefined distance of said second point.
50. The apparatus of claim 47, further comprising: a visual detector, said visual detector acquiring at least one image of said first point and said second point; and an error signal generator producing an error signal from said at least one image, each said error signal representing a displacement between said first point and said second point.
51. The apparatus of claim 50, wherein said control signal is based upon at least one future error signal derived from a dynamic Broyden Jacobian update.
52. A computer readable medium having a program for controlling a first parameter in a system, the program comprising: logic configured to receive data that represents a measurement of said first parameter in said system; logic configured to translate said data using at least rate of change information of said first parameter; and logic configured to generate a translation model based upon translation of said data.
53. The program as defined in claim 52, wherein said logic configured to translate said data uses a dynamic quasi-Newton algorithm.
54. The program as defined in claim 52, wherein said logic configured to generate said translation model includes a dynamic Broyden Jacobian update.
55. The program as defined in claim 52, wherein said logic configured to generate said translation model includes a dynamic recursive least squares Jacobian update.
56. The program as defined in claim 54, further comprising logic to include at least velocity information associated with the first parameter.
57. The program as defined in claim 56, further comprising: logic to control a robot having at least one degree of freedom based upon at least one moveable connector on said robot; logic configured to interpret said first parameter as a distance between a first point and a second point; and logic configured to determine said velocity information as an angle for each at least one moveable connector of said robot and a corresponding time associated with said angle.
58. The program as defined in claim 57, further comprising logic configured to generate a control signal to control adjustment of said moveable connector of said robot such that said first point converges toward a predefined distance of said second point.
59. The program as defined in claim 58, wherein said second point is moving.
60. The program as defined in claim 59, further comprising: logic to interpret an image from a visual detector, said visual detector acquiring at least one image of said first point and said second point; and logic configured to generate an error signal from said at least one image, each said error signal representing a displacement between said first point and said second point.
61. The program as defined in claim 60, wherein said logic configured to generate a control signal is based upon at least one future error signal derived from a dynamic Broyden Jacobian update.
62. The program as defined in claim 60, wherein said logic configured to generate a control signal is based upon at least one future error signal derived from a dynamic recursive least squares Jacobian update.
PCT/US2000/001876 1999-01-29 2000-01-27 Uncalibrated dynamic mechanical system controller WO2000045229A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU33504/00A AU3350400A (en) 1999-01-29 2000-01-27 Uncalibrated dynamic mechanical system controller

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11782999P 1999-01-29 1999-01-29
US60/117,829 1999-01-29

Publications (1)

Publication Number Publication Date
WO2000045229A1 true WO2000045229A1 (en) 2000-08-03

Family

ID=22375066

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/001876 WO2000045229A1 (en) 1999-01-29 2000-01-27 Uncalibrated dynamic mechanical system controller

Country Status (3)

Country Link
US (1) US6278906B1 (en)
AU (1) AU3350400A (en)
WO (1) WO2000045229A1 (en)

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19936148A1 (en) * 1999-07-31 2001-02-01 Abb Research Ltd Procedure for determining spray parameters for a paint spraying system
US8768516B2 (en) * 2009-06-30 2014-07-01 Intuitive Surgical Operations, Inc. Control of medical robotic system manipulator about kinematic singularities
US6944516B1 (en) * 2000-04-28 2005-09-13 Ford Global Technologies, Llc Method for error proofing body shop component selection
US6430474B1 (en) * 2001-04-03 2002-08-06 Xerox Corporation Tooling adapter for allowing selected manipulation of a workpiece
US6456901B1 (en) * 2001-04-20 2002-09-24 Univ Michigan Hybrid robot motion task level control system
JP2005515910A (en) * 2002-01-31 2005-06-02 ブレインテック カナダ インコーポレイテッド Method and apparatus for single camera 3D vision guide robotics
US6836700B2 (en) * 2002-07-29 2004-12-28 Advanced Robotic Technologies, Inc. System and method generating a trajectory for an end effector
US6866417B2 (en) * 2002-08-05 2005-03-15 Fmc Technologies, Inc. Automatically measuring the temperature of food
DE10242710A1 (en) * 2002-09-13 2004-04-08 Daimlerchrysler Ag Method for producing a connection area on a workpiece
JP4174342B2 (en) * 2003-02-19 2008-10-29 ファナック株式会社 Work transfer device
US7325667B1 (en) * 2003-10-10 2008-02-05 Damick Keith D Systems and methods for feeding articles to and removing articles from an automatic washer
US7181314B2 (en) * 2003-11-24 2007-02-20 Abb Research Ltd. Industrial robot with controlled flexibility and simulated force for automated assembly
JP4543967B2 (en) * 2004-03-31 2010-09-15 セイコーエプソン株式会社 Motor control device and printing device
JP4137862B2 (en) * 2004-10-05 2008-08-20 ファナック株式会社 Measuring device and robot control device
JP4266946B2 (en) * 2005-03-17 2009-05-27 ファナック株式会社 Offline teaching device
US8073528B2 (en) * 2007-09-30 2011-12-06 Intuitive Surgical Operations, Inc. Tool tracking systems, methods and computer products for image guided surgery
US10555775B2 (en) 2005-05-16 2020-02-11 Intuitive Surgical Operations, Inc. Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery
US8147503B2 (en) * 2007-09-30 2012-04-03 Intuitive Surgical Operations Inc. Methods of locating and tracking robotic instruments in robotic surgical systems
US8108072B2 (en) * 2007-09-30 2012-01-31 Intuitive Surgical Operations, Inc. Methods and systems for robotic instrument tool tracking with adaptive fusion of kinematics information and image information
US7904182B2 (en) * 2005-06-08 2011-03-08 Brooks Automation, Inc. Scalable motion control system
WO2007035943A2 (en) * 2005-09-23 2007-03-29 Braintech Canada, Inc. System and method of visual tracking
KR101234797B1 (en) * 2006-04-04 2013-02-20 삼성전자주식회사 Robot and method for localization of the robot using calculated covariance
US8437535B2 (en) 2006-09-19 2013-05-07 Roboticvisiontech Llc System and method of determining object pose
AT512731B1 (en) * 2007-09-19 2014-03-15 Abb Ag SYSTEM AND METHOD FOR SPEED AND / OR DISTANCE MEASUREMENT IN ROBOTIZED MANUFACTURING AND MANUFACTURING PROCESSES
DE102008007382A1 (en) * 2008-02-01 2009-08-13 Kuka Innotec Gmbh Method and device for positioning a tool on a workpiece of a disk in a motor vehicle
US8213706B2 (en) * 2008-04-22 2012-07-03 Honeywell International Inc. Method and system for real-time visual odometry
US8238612B2 (en) * 2008-05-06 2012-08-07 Honeywell International Inc. Method and apparatus for vision based motion determination
DE102008036759A1 (en) * 2008-08-07 2010-02-11 Esg Elektroniksystem- Und Logistik-Gmbh Apparatus and method for testing systems with visual output
US8559699B2 (en) 2008-10-10 2013-10-15 Roboticvisiontech Llc Methods and apparatus to facilitate operations in image based systems
US20100246893A1 (en) * 2009-03-26 2010-09-30 Ashwin Dani Method and Apparatus for Nonlinear Dynamic Estimation of Feature Depth Using Calibrated Moving Cameras
CN101870104B (en) * 2009-04-25 2012-09-19 鸿富锦精密工业(深圳)有限公司 Manipulator inverse moving method
US8600552B2 (en) * 2009-10-30 2013-12-03 Honda Motor Co., Ltd. Information processing method, apparatus, and computer readable medium
CN102791214B (en) 2010-01-08 2016-01-20 皇家飞利浦电子股份有限公司 Adopt the visual servo without calibration that real-time speed is optimized
US20110218550A1 (en) * 2010-03-08 2011-09-08 Tyco Healthcare Group Lp System and method for determining and adjusting positioning and orientation of a surgical device
JP5306313B2 (en) * 2010-12-20 2013-10-02 株式会社東芝 Robot controller
JP5948913B2 (en) * 2012-02-01 2016-07-06 セイコーエプソン株式会社 Robot apparatus, image generation apparatus, image generation method, and image generation program
JP6307431B2 (en) * 2012-05-25 2018-04-04 学校法人立命館 Robot control device, robot control method, program, recording medium, robot system
US9488971B2 (en) 2013-03-11 2016-11-08 The Board Of Trustees Of The Leland Stanford Junior University Model-less control for flexible manipulators
US9586320B2 (en) * 2013-10-04 2017-03-07 GM Global Technology Operations LLC System and method for controlling a vision guided robot assembly
JP6421408B2 (en) * 2013-10-31 2018-11-14 セイコーエプソン株式会社 Control device, robot, robot system and control method
JP6511715B2 (en) * 2013-10-31 2019-05-15 セイコーエプソン株式会社 Robot control device, robot system, and robot
MX368389B (en) * 2013-11-08 2019-10-01 Halliburton Energy Services Inc Estimation of three-dimensional formation using multi-component induction tools.
JP6351243B2 (en) * 2013-11-28 2018-07-04 キヤノン株式会社 Image processing apparatus and image processing method
JP6317618B2 (en) * 2014-05-01 2018-04-25 キヤノン株式会社 Information processing apparatus and method, measuring apparatus, and working apparatus
WO2016073367A1 (en) 2014-11-03 2016-05-12 The Board Of Trustees Of The Leland Stanford Junior University Position/force control of a flexible manipulator under model-less control
KR20180044946A (en) * 2015-08-25 2018-05-03 카와사키 주코교 카부시키 카이샤 Information sharing system and information sharing method between a plurality of robot systems
DE102015014485A1 (en) * 2015-11-10 2017-05-24 Kuka Roboter Gmbh Calibrating a system with a conveyor and at least one robot
JP2018024044A (en) * 2016-08-09 2018-02-15 オムロン株式会社 Information processing system, information processor, workpiece position specification method, and workpiece position specification program
JP6514156B2 (en) * 2016-08-17 2019-05-15 ファナック株式会社 Robot controller
JP6963748B2 (en) * 2017-11-24 2021-11-10 株式会社安川電機 Robot system and robot system control method
CN107932514A (en) * 2017-12-15 2018-04-20 天津津航计算技术研究所 Airborne equipment based on Robot Visual Servoing control mounts method
TWI648136B (en) * 2017-12-27 2019-01-21 高明鐵企業股份有限公司 Gripping assembly with functions of identification and auto-positioning
WO2019148428A1 (en) * 2018-02-01 2019-08-08 Abb Schweiz Ag Vision-based operation for robot
US10613490B2 (en) * 2018-02-05 2020-04-07 Mitsubishi Electric Research Laboratories, Inc. Method and apparatus for preconditioned predictive control
JP6748126B2 (en) * 2018-02-08 2020-08-26 ファナック株式会社 Work robot system
JP6816060B2 (en) * 2018-04-23 2021-01-20 ファナック株式会社 Work robot system and work robot
US11314220B2 (en) * 2018-04-26 2022-04-26 Liberty Reach Inc. Non-contact method and system for controlling an industrial automation machine
US10576526B2 (en) * 2018-07-03 2020-03-03 Komatsu Industries Corporation Workpiece conveying system, and workpiece conveying method
CN108972558B (en) * 2018-08-16 2020-02-21 居鹤华 Multi-axis robot dynamics modeling method based on axis invariants
CN109592057B (en) * 2018-12-07 2021-12-31 天津津航计算技术研究所 Vision servo-based aerial refueling machine oil receiving implementation method
US11883947B2 (en) * 2019-09-30 2024-01-30 Siemens Aktiengesellschaft Machine learning enabled visual servoing with dedicated hardware acceleration
CN112518748B (en) * 2020-11-30 2024-01-30 广东工业大学 Automatic grabbing method and system for visual mechanical arm for moving object
CN112757292A (en) * 2020-12-25 2021-05-07 珞石(山东)智能科技有限公司 Robot autonomous assembly method and device based on vision
US20220305646A1 (en) * 2021-03-27 2022-09-29 Mitsubishi Electric Research Laboratories, Inc. Simulation-in-the-loop Tuning of Robot Parameters for System Modeling and Control
CN113359461B (en) * 2021-06-25 2022-12-27 北京理工大学 Kinematics calibration method suitable for bionic eye system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4942538A (en) * 1988-01-05 1990-07-17 Spar Aerospace Limited Telerobotic tracker
US5059789A (en) * 1990-10-22 1991-10-22 International Business Machines Corp. Optical position and orientation sensor
US5293461A (en) * 1991-11-20 1994-03-08 The University Of British Columbia System for determining manipulator coordinates
US5684713A (en) * 1993-06-30 1997-11-04 Massachusetts Institute Of Technology Method and apparatus for the recursive design of physical structures
US6000827A (en) * 1993-09-10 1999-12-14 Fujitsu Limited System identifying device and adaptive learning control device
US5752513A (en) * 1995-06-07 1998-05-19 Biosense, Inc. Method and apparatus for determining position of object
DE69636230T2 (en) * 1995-09-11 2007-04-12 Kabushiki Kaisha Yaskawa Denki, Kitakyushu ROBOT CONTROLLER
US5726916A (en) * 1996-06-27 1998-03-10 The United States Of America As Represented By The Secretary Of The Army Method and apparatus for determining ocular gaze point of regard and fixation duration

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4338672A (en) * 1978-04-20 1982-07-06 Unimation, Inc. Off-line teach assist apparatus and on-line control apparatus
US4753569A (en) * 1982-12-28 1988-06-28 Diffracto, Ltd. Robot calibration
US5241608A (en) * 1988-11-25 1993-08-31 Eastman Kodak Company Method for estimating velocity vector fields from a time-varying image sequence
US4954762A (en) * 1989-02-01 1990-09-04 Hitachi, Ltd Method and apparatus for controlling tracking path of working point of industrial robot
US5390288A (en) * 1991-10-16 1995-02-14 Director-General Of Agency Of Industrial Science And Technology Control apparatus for a space robot
US5430643A (en) * 1992-03-11 1995-07-04 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Configuration control of seven degree of freedom arms
US5321353A (en) * 1992-05-13 1994-06-14 Storage Technolgy Corporation System and method for precisely positioning a robotic tool

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002033495A1 (en) * 2000-10-17 2002-04-25 Lumeo Software Oy Simulation of a system having a mechanical subsystem and a hydraulic subsystem
WO2005039836A2 (en) * 2003-10-20 2005-05-06 Isra Vision Systems Ag Method for effecting the movement of a handling device and image processing device
WO2005039836A3 (en) * 2003-10-20 2005-11-24 Isra Method for effecting the movement of a handling device and image processing device
CN102922522A (en) * 2012-11-19 2013-02-13 吉林大学 Control method for force feedback of electro-hydraulic servo remote control manipulator of multiple degrees of freedom
CN106144524A (en) * 2016-08-24 2016-11-23 东莞市三瑞自动化科技有限公司 With CCD vision positioning method and device in a kind of high-speed motion
CN110621447A (en) * 2017-05-22 2019-12-27 Abb瑞士股份有限公司 Robot conveyor calibration method, robot system and control system
WO2018215047A1 (en) * 2017-05-22 2018-11-29 Abb Schweiz Ag Robot-conveyor calibration method, robot system and control system
US11511435B2 (en) 2017-05-22 2022-11-29 Abb Schweiz Ag Robot-conveyor calibration method, robot system and control system
WO2020010876A1 (en) * 2018-07-09 2020-01-16 五邑大学 Mechanical arm control method based on least squares method for use in robot experimental teaching
CN111230866A (en) * 2020-01-16 2020-06-05 山西万合智能科技有限公司 Calculation method for real-time pose of six-axis robot tail end following target object
CN111230866B (en) * 2020-01-16 2021-12-28 山西万合智能科技有限公司 Calculation method for real-time pose of six-axis robot tail end following target object
CN111360879A (en) * 2020-02-19 2020-07-03 哈尔滨工业大学 Visual servo automatic positioning device based on distance measuring sensor and visual sensor
CN111775154A (en) * 2020-07-20 2020-10-16 广东拓斯达科技股份有限公司 Robot vision system
CN114378827A (en) * 2022-01-26 2022-04-22 北京航空航天大学 Dynamic target tracking and grabbing method based on overall control of mobile mechanical arm
CN114378827B (en) * 2022-01-26 2023-08-25 北京航空航天大学 Dynamic target tracking and grabbing method based on overall control of mobile mechanical arm

Also Published As

Publication number Publication date
US6278906B1 (en) 2001-08-21
AU3350400A (en) 2000-08-18

Similar Documents

Publication Publication Date Title
US6278906B1 (en) Uncalibrated dynamic mechanical system controller
Piepmeier et al. A dynamic quasi-Newton method for uncalibrated visual servoing
Cheah et al. Adaptive Jacobian tracking control of robots with uncertainties in kinematic, dynamic and actuator models
Wilson et al. Relative end-effector control using cartesian position based visual servoing
Xiao et al. Sensor-based hybrid position/force control of a robot manipulator in an uncalibrated environment
US7509177B2 (en) Self-calibrating orienting system for a manipulating device
CN114265364B (en) Monitoring data processing system and method of industrial Internet of things
Du et al. An online method for serial robot self-calibration with CMAC and UKF
WO2019110577A1 (en) Control system and control method of manipulator
Li et al. Real-time trajectory position error compensation technology of industrial robot
Armesto et al. On multi-rate fusion for non-linear sampled-data systems: Application to a 6D tracking system
Gaber et al. Uncertainty solution of robot parameters using fuzzy position control applied for an automotive cracked exhaust system inspection
CN116652939A (en) Calibration-free visual servo compliant control method for parallel robot
WO2023192681A1 (en) Inertia-based improvements to robots and robotic systems
Wang et al. An adaptive controller for robotic manipulators with unknown kinematics and dynamics
Mokogwu et al. Experimental assessment of absolute stability in bilateral teleoperation
Du et al. Human-manipulator interface using particle filter
Elsheikh et al. Practical path planning and path following for a non-holonomic mobile robot based on visual servoing
JP3206775B2 (en) Copying control method and copying control device for machining / assembly device
Islam et al. On the design and development of vision-based autonomous mobile manipulation
JP2737325B2 (en) Robot trajectory generation method
KR100644169B1 (en) Apparatus and method for estimating kinematic parameter in a robot
Alontseva et al. Development of Control System for Robotic Surface Tracking
Yu et al. Modeling and Performance Evaluation of 6-Axis Robot Attitude Sensor
Nunes et al. Sensor-based 3-D autonomous surface-following control

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)