Publication numberUS20070238073 A1
Publication typeApplication
Application numberUS 11/398,400
Publication dateOct 11, 2007
Filing dateApr 5, 2006
Priority dateApr 5, 2006
Publication numberUS 2007/0238073 A1
InventorsRocco Portoghese, Richard Hebb, Edward Purvis, James Purvis
Original AssigneeThe United States Of America As Represented By The Secretary Of The Navy
Projectile targeting analysis
US 20070238073 A1
Abstract
An exemplary embodiment of the invention relates to determining aimpoint relative to a target located within a firing range. The target can be either fixed or moving, and if the target is moving then it can be mounted on tracks or controlled by servomechanisms. In addition to aimpoint, predicted projectile effects and personnel performance can be calculated for evaluation and for training. The firing range is mapped so that placement of infrared emitters is associated with specific targets. Placement of the emitter external to the target perimeter produces accurate results so long as the emitter is within the field of view of a camera mounted upon the projectile launcher. The invention results in large firing ranges with the ability to place targets wherever desired without decreasing aimpoint measurement accuracy.
Images(15)
Claims(37)
1. A system for determining a location vector for an aimpoint relative to a target located within a firing range, comprising:
a. the target having a perimeter further defining a centroid for locating the target;
b. a near infrared emitter, placed within the firing range, the emitter being further placed outside the perimeter of the target and the emitter being associated with the centroid for determining the angular offset between the target and the emitter;
c. an emitter controller for wirelessly controlling the illumination sequence of the emitter;
d. a projectile launcher for testing;
e. a digital camera having a lens, the camera adjustably mounted on the projectile launcher for alignment of the camera optical axis with the projectile launcher sight line, the camera field of view encompassing the emitter, and the camera being mapped to derive the angular offset of the image formed by the emitter emission from the target centroid position;
f. a sensor responsive to the projectile launcher for detecting triggering events when the projectile launcher is fired;
g. communication means for transferring information between the computer and each of the sensor, the emitter, and the camera; and
h. a system computer for receiving inputs from the emitter, the camera and the sensor for controlling the emitter controller, and for determining the aimpoint vector at the triggering event.
2. The system of claim 1 wherein the sensor comprises a microphone.
3. The system of claim 1 wherein the camera includes an array of photonic detectors sensitive to near infrared light.
4. The system of claim 3 wherein the camera further includes a filter for blocking non near infrared light and permitting near infrared light to pass through the filter.
5. The system of claim 1 wherein the near infrared emitter comprises a halogen lamp and a filter for blocking non near infrared light.
6. The system of claim 1 wherein the communication means comprises relay radios.
7. The system of claim 1 wherein the field of coverage is between 10 meters and 100 meters.
8. The system of claim 1 wherein the emitter is located between 50 meters and 1000 meters from the camera lens plane.
9. The system of claim 1 wherein the computer includes a computer program for calculating the fall of the shot.
10. The system of claim 9 wherein the computer program calculates the aerodynamic effects of air velocity upon the fall of the shot.
11. The system of claim 9 wherein the computer includes a computer program for calculating the effects of the projectile launcher burst.
12. The system of claim 1 including a video recording means controlled by the system computer for capturing visual and projectile launcher event information.
13. The system of claim 12 wherein the visual and projectile launcher event information is captured within the audio portion of the recording.
14. The system of claim 12 wherein the video recording means comprises a digital videocassette recorder.
15. A system for determining a location vector for an aimpoint relative to a moving target located within a firing range, comprising:
a. the moving target having a perimeter further defining a centroid for locating the target;
b. a near infrared emitter, placed within the firing range, the emitter being further placed outside the perimeter of the moving target and the emitter being associated with the centroid for determining the angular offset between the target and the emitter;
c. an emitter controller for wirelessly controlling the illumination sequence of the emitter;
d. a projectile launcher for testing;
e. a digital camera having a lens, the camera adjustably mounted on the projectile launcher for alignment of the camera optical axis with the projectile launcher sight line, the camera field of view encompassing the emitter, and the camera being mapped to derive the angular offset of the image formed by the emitter emission from the target centroid position;
f. a sensor responsive to the projectile launcher for detecting triggering events when the projectile launcher is fired;
g. a downrange controller for controlling the position of the moving target, wherein the computer is responsive to output from the downrange controller for determining the aimpoint vector at the triggering event;
h. communication means for transferring information between the computer and each of the sensor, the emitter, and the camera; and
i. a system computer for receiving inputs from the emitter, the camera, the downrange controller and the sensor for controlling the emitter controller, and for determining the aimpoint vector at the triggering event.
16. The system of claim 15 wherein the moving target is movably mounted on tracks.
17. The system of claim 15 wherein the downrange controller is provided with capacity to store up to 60 seconds of motion.
18. The system of claim 15 wherein the sensor comprises a microphone.
19. The system of claim 15 wherein the camera includes an array of photonic detectors sensitive to near infrared light.
20. The system of claim 19 wherein the camera further includes a filter for blocking non near infrared light and permitting near infrared light to pass through the filter.
21. The system of claim 15 wherein the near infrared emitter comprises a halogen lamp and a filter for blocking non near infrared light.
22. The system of claim 15 wherein the communication means comprises relay radios.
23. The system of claim 15 wherein the field of coverage is between 10 meters and 100 meters.
24. The system of claim 15 wherein the emitter is located between 50 meters and 1000 meters from the camera lens plane.
25. The system of claim 15 wherein the computer includes a computer program for calculating the fall of the shot.
26. The system of claim 25 wherein the computer program calculates the aerodynamic effects of air velocity upon the fall of the shot.
27. The system of claim 25 wherein the computer includes a computer program for calculating the effects of the projectile launcher burst.
28. The system of claim 15 including a video recording means controlled by the system computer for capturing visual and projectile launcher event information.
29. The system of claim 28 wherein the visual and projectile launcher event information is captured within the audio portion of the recording.
30. The system of claim 28 wherein the video recording means comprises a digital videocassette recorder.
31. A method for determining projectile launcher aimpoint comprising:
a. selecting the location of a shooter at a firing line;
b. surveying a firing range for determining the location of target and emitter coordinates;
c. calculating the centroid of a target having a defined perimeter, the target being placed within the firing range;
d. placing emitters on the firing range externally to the target perimeter;
e. associating the target with the emitter;
f. mounting a camera sensitive to near infrared light to the projectile launcher;
g. calibrating the optical axis of the camera with the boresight of the projectile launcher;
h. mapping the camera with the target and emitter position;
i. providing equipment means for controlling the emitter and for determining and recording the predicted aimpoint at a triggering event; and
j. determining the projectile launcher aimpoint at the triggering event.
32. The method of claim 31 further comprising the following when the target is a moving target:
a. providing a downrange controller for controlling the position of the moving target, wherein the computer is responsive to output from the downrange controller for determining the aimpoint vector at the triggering event.
33. The method of claim 31 further comprising determining the fall of the shot.
34. The method of claim 33 further comprising determining projectile launcher burst effects.
35. The method of claim 31 further comprising recording the projectile launcher aimpoint determination data on the equipment means and providing performance feedback to personnel.
36. The method of claim 31 further comprising preparing an evaluation report for a projectile from data captured by the equipment means.
37. A system for determining the contemporaneous position of a moving target comprising, a movable platform, the target mounted on the platform, an emitter mounted on the platform, the emitter being in fixed relationship to the target, a reflector mounted on the platform, the reflector being in fixed relationship to the emitter, a rangefinder, and a downrange controller receiving range information from the rangefinder, wherein the position of the reflector is determined by the rangefinder and the position of the target centroid is determined by the target position relative to the emitter.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates generally to projectile targeting and more particularly to small projectile trajectory analysis.

2. Description of the Related Art

Determination of aimpoint is well known. Aimpoint is used in actual field situations to determine a fire control solution. Aimpoint is also used in training weapons operators to sharpen their skills and to improve their performance. U.S. Pat. Nos. 4,804,325 and 5,213,503 each of which is incorporated herein by reference illustrate representative relevant techniques in the prior art.

U.S. Pat. No. 4,804,325 discloses a weapons training simulator that predicts aimpoint and trajectory. The invention uses a sensor mounted on a weapon to generate a target position signal based on point sources located within the perimeter of simulated targets, thereby defining a diffusely illuminated target field. Set in a real-world environment, the aimpoint is determined using the output from light emitting diodes (LEDs) defining point sources, imaged onto a quadrature detector array, to create uniform diffuse sources.

U.S. Pat. No. 5,213,503 discloses a commercially available aimpoint infrared spot tracking system that includes a charge coupled device (CCD) video camera interfaced to a digital frame grabber operating at standard video rates for use in simulator training. A lens system images the tracking area (i.e., video projection screen) onto the CCD imaging sensor. The frame grabber digitizes each frame of video data collected by the CCD camera. This data is further processed with digital signal processing hardware as well as software algorithms to find position coordinates of the imaged IR spot. The '503 patent uses tracking system software to allow the aimpoint to be continuously monitored during a training scenario. For this application a CCD-based tracking system or similar device utilizing a two-dimensional position sensing detector (PSD) lateral-effect photodiode provides the aimpoint position data. The aimpoint analysis of the '503 patent is limited to a virtual environment with all targets displayed upon a video projection screen. In particular, an infrared source is mounted to the weapon and the beam from the infrared source is projected onto the screen.

Live testing is generally performed to determine the path of projectiles by physically tracking the projectile path (hereafter the “fall of shot”) from the muzzle exit to the actual end-point of the projectile path, which could be the location at detonation. In particular, aimpoint analysis is the focus of testing when experimenting with new weapon sights or fire control systems. As used in this specification, “weapon” and “projectile launcher” are used interchangeably.

There are several disadvantages to live testing including technical obstacles to tracking the projectile. For example, ballistic radar is the technology most commonly used to track the fall of shot, but radar cannot track sub-sonic rounds. In addition, measuring the fall of shot by any method cannot separate system error from user error, round-to-round dispersion or environmental error. Furthermore, live testing typically requires the expenditure of large numbers of projectiles: this can become expensive especially with prototypes. Finally, measuring the fall of shot captures only a snapshot of the result of the projectile's flight.

In addition to live fire testing, small arms weapons simulators are used extensively for training. During training, it is important for a simulator to replicate the environment that a shooter could encounter. In the real world, targets may be either stationary or moving. In training operators, determining the time required to acquire the target, engage the target, and manipulate a fire control system, as well as other projectile launcher-handling data, is useful. In training, it is also important to minimize the expenditure of ammunition while maximizing the training benefits. Especially important is determining the accuracy of the shooter so that the shooter can improve his or her skill. A need exists to measure a shooter's aimpoint and to predict the impact of the projectile from the projectile launcher aimpoint.

Therefore, there is a need for determining aimpoint that allows the small arms testing community to separate the aimpoint of a projectile launcher from the actual fall of shot in physical testing environments. In addition, there is a further need to determine aimpoint for training weapons operators.

BRIEF SUMMARY OF THE INVENTION

It is an object of the invention to track aimpoint.

It is another object of the invention to calculate the predicted trajectory rather than the actual trajectory of the projectile.

It is a further object of the invention to compute the difference between the actual aimpoint and the aimpoint required to hit the target: user error.

It is still another object of the invention to be able to modify the calculated projectile path to incorporate system error, dispersion, and external effects to determine the probability of a hit or contact (P(h)).

It is yet another object of the invention to capture the aimpoint at the moment of dry-fire.

It is further an additional object of the invention to predict the effects of the projectile in the impact zone.

It is a final object of the invention to collect digitized video of the projectile launcher's aimpoint during the aiming procedure to evaluate the shooter's aiming performance.

In order to accomplish the above objects, in accordance with a first aspect of the present invention there is provided an aimpoint tracking and data collection system for projectiles during live fire testing or training events. The exemplary embodiment uses infrared reference emitters and a projectile launcher-mounted camera system to measure aimpoint relative to predefined targets on a live testing or training range. Calculated aimpoint angles may be used in conjunction with a ballistics model to predict a projectile's miss distance at the plane of the intended target. In addition, if the projectile is from a weapon then the effects may be incorporated to predict the impact.
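The projection of calculated aimpoint angles onto the target plane can be sketched as below. This is a hypothetical flat-fire simplification for illustration only (the function name and units are assumptions, not taken from the patent); the ballistics model described here would additionally account for drag, drop, and wind.

```python
def predicted_miss_distance(az_err_mrad, el_err_mrad, target_range_m):
    """Project small angular aimpoint errors (in milliradians) onto the
    plane of the intended target to estimate horizontal and vertical
    miss distances in meters.

    Uses the small-angle approximation: arc length = angle (rad) * range.
    """
    dx = az_err_mrad * 1e-3 * target_range_m   # horizontal miss, meters
    dy = el_err_mrad * 1e-3 * target_range_m   # vertical miss, meters
    return dx, dy
```

For example, a 2 mrad azimuth error at 500 meters corresponds to a 1 meter horizontal miss at the target plane.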

The system according to the first aspect may further include moving targets.

The system according to the first aspect may further include a computer program for performing coordination and control of the equipment and calculations. The calculation may further include the effects of external conditions such as wind on the trajectory.
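One plausible way to fold system error, round-to-round dispersion, and external effects into a probability-of-hit figure is a Monte Carlo perturbation of the measured aimpoint. The sketch below is illustrative only; the sigma values, wind model, and rectangular target are assumptions and not the patent's stated method.

```python
import random

def probability_of_hit(aim_err_mrad, dispersion_mrad, range_m,
                       target_w_m, target_h_m, wind_drift_m=0.0,
                       trials=20000, seed=1):
    """Monte Carlo estimate of P(h): perturb the measured aimpoint error
    (azimuth, elevation in mrad) with Gaussian round-to-round dispersion
    (1-sigma, mrad) and a constant wind drift, then count impacts that
    land inside a rectangular target centered on the required aimpoint."""
    random.seed(seed)
    az0, el0 = aim_err_mrad
    hits = 0
    for _ in range(trials):
        dx = (az0 + random.gauss(0.0, dispersion_mrad)) * 1e-3 * range_m + wind_drift_m
        dy = (el0 + random.gauss(0.0, dispersion_mrad)) * 1e-3 * range_m
        if abs(dx) <= target_w_m / 2.0 and abs(dy) <= target_h_m / 2.0:
            hits += 1
    return hits / trials
```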

In a second aspect of the present invention, a method for determining projectile launcher aimpoint is disclosed. The method comprises surveying a firing range for determining the location of target and emitter coordinates; calculating the centroid of a target having a defined perimeter, the target being placed on the firing range; placing fixed infrared emitters on the firing range, the emitters being located external to the target perimeter; selecting the location of the shooter at a firing line; mounting a camera to the projectile launcher; calibrating the optical axis of the camera with the boresight of the projectile launcher; mapping the camera with the target and emitter position; providing equipment means for controlling the emitter and for determining the predicted aimpoint at a triggering event; determining the projectile launcher aimpoint at the triggering event; and outputting the measured projectile launcher aimpoint at the triggering event.
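The camera-mapping step above reduces, at its simplest, to converting an emitter-centroid pixel location into angular offsets from the camera optical axis. The linear mapping below is a minimal sketch under assumed field-of-view parameters; a fielded system would also apply a lens distortion model.

```python
def pixel_to_angle(px, py, fov_h_deg, fov_v_deg, width=640, height=480):
    """Map a pixel coordinate (origin at top-left, as in a digitized
    video frame) to angular offsets in degrees from the image center.
    Positive azimuth is right of center; positive elevation is above."""
    az = (px - width / 2.0) * (fov_h_deg / width)
    el = (height / 2.0 - py) * (fov_v_deg / height)
    return az, el
```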

These and other features and advantages of the present invention may be better understood by considering the following detailed description of certain preferred embodiments. In the course of this description, reference will frequently be made to the attached drawings.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING

Referring now to the drawing wherein like elements are numbered alike in the several FIGURES:

FIG. 1 is a perspective view of an exemplary embodiment illustrating the location of components of the present invention;

FIG. 2 is a signal flowchart of an exemplary embodiment of the present invention;

FIG. 3 is a signal flowchart for the emitter controller of the present invention;

FIG. 4 is a signal flowchart for the downrange controller of the present invention;

FIG. 5 is a signal flowchart for the system interface box of the present invention; and

FIG. 6 is a flow diagram of the software for controlling the illustrative embodiment of the present invention;

FIG. 7 is a software program flowchart of the System Interface Box (SIB);

FIG. 8 is a software program flowchart of the Emitter Controller (EC);

FIGS. 9a, 9b, and 9c (sheets 1, 2, and 3, respectively) are a flow diagram of the software for controlling the illustrative embodiment of the present invention;

FIG. 10 is a plan view illustrating a firing range showing the mathematical relationships for fixed targets, emitters and a projectile launcher, and includes the boresighting relationship between the projectile launcher aimpoint and the camera optical axis;

FIG. 11 is a plan view of a firing range showing the sightline and the sightline intersection with the moving target path plane indicating the mathematical relationship between the sightline and the moving target at shot fired time.

FIG. 12 is a plan view illustrating a track mounted moving target with attached emitter illustrating the target, shot, and round trajectories, along with equations for calculating the coordinates at the time of the round intercept with the target path plane.

DETAILED DESCRIPTION OF THE INVENTION

In the following detailed description of the preferred embodiment, reference is made to the accompanying drawings in which is shown by way of illustration a specific embodiment whereby the invention may be practiced. It is to be understood that other embodiments may be utilized and changes may be made without departing from the scope of the present invention.

Referring to FIG. 1 and FIG. 2, an exemplary embodiment of a projectile launcher aimpoint tracking system is shown generally at 10. A camera 12 is affixed to a mount 14, and the mount is affixed to a projectile launcher such as a rifle 16. Stationary targets 18 and moving targets 20, with their associated stationary and moving infrared emitters 24 and emitter controllers 22, are located in a firing range. A firing range is a volume where targets are located and can include an entire battle theatre. Infrared emitters 24, typically LEDs or optically filtered halogen lamps, are located downrange as references for the determination of target location. The associated emitter controller 22 controls each infrared emitter and receives signals from the system interface box 28 or a downrange controller 25, which communicates with the emitter controller by radio transmission. A computer 26 provides the overall control of the system, with interfaces provided through a system interface box 28, each being mounted in a rack 30. Radio transmitters 32, relays 34 and receivers 36, along with connecting cables 38, provide communication connections between system components. Sensors that detect projectile launcher related events (for example, but not limited to, firing and lasing) include one or more of a microphone 40 a, a powered sensor 40 b, or a normally open switch 40 c, each providing input to the system interface box 28. In addition, a video recorder 42, for example a videocassette recorder, digital recorder, or digital compact disk recorder, receives and stores the camera video and event information, including information from the trigger sensors.

The projectile launcher mounted camera 12 views the live-fire range from the perspective of the projectile launcher bore sight. Both the camera and its lens 44 are removable and may be replaced with a different camera and/or a different lens for each test depending upon the test objectives. Typical variables for selecting the camera and lens are the camera size, resolution, required field-of-view, and mounting requirements. For example, small-bore projectile launchers require a small, light camera to minimize interference with operator control and a narrow field-of-view to maximize angular resolution per video pixel. As a further example, a wider field-of-view (for example 30 degrees as opposed to 8 degrees) is required if the targets are dispersed over a wide firing range, such as a battle theatre, rather than a narrow target range.
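The field-of-view versus resolution trade described above can be quantified as angular resolution per pixel. The helper below is an illustrative sketch (the function name and the 640-pixel default are assumptions based on the digitized frame width mentioned later in this description).

```python
import math

def mrad_per_pixel(fov_deg, pixels=640):
    """Angular resolution covered by one video pixel, in milliradians,
    for a lens of the given horizontal field of view imaged across the
    given number of pixels. A narrower field of view yields finer
    resolution per pixel."""
    return math.radians(fov_deg) / pixels * 1000.0
```

For instance, an 8-degree lens over 640 pixels gives roughly 0.22 mrad per pixel, while a 30-degree lens gives proportionally coarser resolution.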

In the exemplary embodiment, the camera 12 is an ELMO CCD (charge-coupled device) black-and-white camera, model ME4111R, that does not have the typical near-infrared blocking filter installed (although other suitable cameras are commercially available, for example, a small rectangular-format camera with an electronic high speed shutter manufactured by Panasonic that can have its near-infrared blocking filter removed). The image pick-up device is an interline-transfer CCD with 768 (H) × 494 (V) effective picture elements (pixels). The camera typically consists of a camera head with lens, a camera control unit (CCU) and an interconnecting cable. The CCU is powered by 12 VDC from either a battery or an AC adapter. The camera is operated in the standard NTSC mode with a vertical frequency (field) of 59.94 Hz and a horizontal frequency (line) of 15.734 kHz, and provides a composite video output.

The ELMO camera uses an ELMO lens mounting and two lenses, having focal lengths of 24 mm and 36 mm. The Panasonic camera uses standard C-mount lenses and a single 35 mm lens, along with two focal length adapters that provide 1.5× and 2× focal length products. Using combinations of the adapters, the 35 mm lens can be extended to provide 52.5, 70, and 105 mm focal lengths.

The camera mount 14 is adjustable, permitting the camera lens optical axis to be aligned roughly with the projectile launcher bore sight line. A software-managed boresight process is used for fine alignment of the angular offset of the camera optical axis from the projectile launcher boresight line. After the camera is roughly aligned, the remaining angular and linear camera axis offsets from the sightline are recorded and used in subsequent projectile launcher aimpoint error calculations. In particular, when viewing targets at any distance, the angular and linear offsets of the camera optical axis from the sightline are corrected via the software calculations to remove effects upon the aimpoint measurement accuracy, thereby eliminating the need for optical corrective algorithms.
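The boresight correction described above can be sketched as subtracting the recorded angular offset and removing the parallax induced by the linear camera-to-bore separation at the target range. All names and the small-offset geometry below are illustrative assumptions, not the patent's stated calculation.

```python
import math

def sightline_angles(meas_az_deg, meas_el_deg,
                     bore_off_az_deg, bore_off_el_deg,
                     cam_offset_m=(0.0, 0.0), target_range_m=100.0):
    """Convert camera-frame angles to weapon-sightline-frame angles:
    subtract the calibrated angular boresight offset, then remove the
    parallax from the linear (horizontal, vertical) camera/bore
    separation at the given range."""
    par_az = math.degrees(math.atan2(cam_offset_m[0], target_range_m))
    par_el = math.degrees(math.atan2(cam_offset_m[1], target_range_m))
    return (meas_az_deg - bore_off_az_deg - par_az,
            meas_el_deg - bore_off_el_deg - par_el)
```

Note that a 5 cm lateral camera offset contributes only about 0.03 degrees of parallax at 100 meters, shrinking further with range.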

It is to be appreciated that in the illustrated embodiment the field of coverage must provide for camera 12 imaging of a target's reference emitter 24 when the projectile launcher 16 is sighted on a target 18, 20. This requirement originates from the need to mark the reference emitter when a projectile is fired at the target. It is to be further appreciated that the field of coverage is optimally between 10 meters and 100 meters at the target distance, thereby permitting location of the emitter outside the perimeter of the target so long as the emitter is within the field of camera coverage.
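The constraint that the emitter must fall within the field of coverage is simple geometry: the coverage half-width at the target range bounds how far off the sightline the emitter can be placed. The check below is an illustrative sketch with assumed names.

```python
import math

def emitter_in_coverage(lateral_offset_m, target_range_m, fov_deg):
    """Return True if an emitter placed lateral_offset_m to the side of
    the sightline still falls inside the camera's field of coverage at
    the target range (coverage half-width = range * tan(FOV / 2))."""
    half_width = target_range_m * math.tan(math.radians(fov_deg) / 2.0)
    return abs(lateral_offset_m) <= half_width
```

With an 8-degree lens at 500 meters, for example, the half-width is roughly 35 meters, so an emitter 10 meters to one side of the target remains usable.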

A titler 46, located in the operator station main rack 30, overlays text onto the incoming camera video and transmits the result to the VCR 42. The titler is responsive to the system computer 26 through an RS232 interface and provides output signals to the system computer through the RS232 interface and to the VCR through an RS170 video interface. The system computer provides commands to the titler to overlay test data onto the camera video for correlation to events of interest occurring during the test.

Referring to FIG. 2 and FIG. 5, the digital video recorder 42 is a typical, commercially available cassette unit, but may alternately be a digital recorder that utilizes other media such as digital disks or internal hard disks. In the exemplary embodiment, the digital recorder is mounted in the operator station main rack 30. During testing, the recorder records the video from the camera 12 and the titler 46, as well as the main and auxiliary trigger events from the system interface box 28, on the left and right audio analog channels. The trigger input is first transmitted to a tone generator 50 and then to the videocassette recorder 42. The near-infrared CCD image of the live fire range is also output from the video recorder 42, via an RS170 video interface, to the system computer's frame grabber board for marking the emitter centroid relative to the projectile launcher sightline. In this manner, the marking of the emitter can be performed immediately after an event. Alternately, the marking of the emitter may be performed during post-test processing, where the videocassette recorder provides recorded event data and test video to the system computer and the computer's frame grabber board, to be described hereinbelow. Microphone sensor 40 a inputs representing the trigger events recorded on the left and right audio channels are received by the VCR. The sensor signals 40 a, 40 b, 40 c are also output to the system interface box 28 main and auxiliary audio inputs for trigger event re-generation.

The frame grabber (not shown) is a commercially available Matrox Orion board that decodes the composite video from the CCD camera 12, described hereinabove, and digitizes the image to a 640 (H) by 480 (V) pixel format. All measurements of the emitter 24 centroid are made on video frames digitized by the frame grabber. The 10-nanosecond jitter of the Matrox Orion board, which equates to less than ten percent of a pixel width, has minimal effect on the emitter centroid measurement and the resulting system error.

As is well known in the art, the targets 18, 20 are those typically used for live fire ranges. The targets previously used have been E-silhouettes. Other targets may be used; the only consideration is that there is a defined target centroid. The targets are attached to supports that hold the targets in position.

Referring again to FIG. 1, both fixed targets 18 and moving targets 20 are illustrated in the exemplary embodiment. Each moving target is mounted on a track 52 and responds to signals from the system computer 26 to traverse a portion of the firing range through a preprogrammed path when commanded by the computer. Alternatively, a dedicated preprogrammed target motion controller (not shown) can control each target using servomotors. In yet another variation the target motion controller is able to respond to contemporaneous operator commands using servomotors. Mounting the moving targets on tracks simplifies the calculation of location within the firing range by the downrange controller (to be described hereinafter). In alternative embodiments, the location of the moving target is transmitted through alternate means, for example a multiplexed optical signal from a rangefinder or a global positioning system.
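Because a track-mounted target follows a preprogrammed path, its position at any instant can be recovered by interpolating the stored motion profile. The sketch below assumes a (time, distance-along-track) table; this representation is an illustrative assumption, not the downrange controller's documented format.

```python
def target_position(path, t):
    """Linearly interpolate a track-mounted target's distance along its
    track at time t from a preprogrammed list of (time_s, position_m)
    points sorted by time. Positions are clamped to the path endpoints
    before the start and after the end of the programmed motion."""
    if t <= path[0][0]:
        return path[0][1]
    if t >= path[-1][0]:
        return path[-1][1]
    for (t0, s0), (t1, s1) in zip(path, path[1:]):
        if t0 <= t <= t1:
            return s0 + (s1 - s0) * (t - t0) / (t1 - t0)
```

A target programmed to travel 20 meters over 10 seconds, for example, is reported at the 10-meter mark when queried at 5 seconds.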

The exemplary embodiment includes a moving target 20 measurement subsystem. The goal of the moving target measurement subsystem is to provide the 3D coordinates of the moving target and emitter centroids at the time that a shot is fired (to determine the projectile launcher sightline angles relative to the target center) and at the time of the terminal position of the fired projectile (where it intersects the target path plane, hits the ground, or air bursts), in order to differentiate the location of the moving target and the projectile when the projectile reaches its terminus or intersects the target. The location of the moving targets' centroid, like the fixed targets' centroid, is determined with reference to the infrared emitters' centroid.
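Since the emitter rides on the target platform in fixed relationship to the target, the centroid recovery step is a constant vector offset applied to the tracked emitter position. The sketch below is illustrative; the coordinate convention is an assumption.

```python
def target_centroid(emitter_xyz, emitter_to_centroid_xyz):
    """Recover the moving target's centroid coordinates from the tracked
    emitter position plus the surveyed, fixed emitter-to-centroid offset
    (both as (x, y, z) tuples in range coordinates, meters)."""
    return tuple(e + o for e, o in zip(emitter_xyz, emitter_to_centroid_xyz))
```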

The infrared emitters 24 (IR's) are located on the firing range and are used in conjunction with the projectile launcher-mounted CCD camera 12 to determine where a projectile launcher 16 is aimed relative to targets 18, 20 located on the range. In the exemplary embodiment, the emitters for fixed targets 18 are mounted to upright posts 54 that have been driven into the ground to rigidly fix the emitter position. Preferably, the ability to support live fire operations is provided by placing the emitters to one side of the target to minimize the possibility that an emitter will be struck by a projectile. The emitters are located on the range such that the camera 12 can sense the emitter during the projectile launcher aiming process. The camera does not need to sense the target so long as the emitter associated with the target under fire is in the camera's field of view and is identified as the target reference emitter in the system computer program software. The system computer 26 defines the target's reference emitter 24 via a target-emitter database.

The emitters 24 are halogen light sources that are optically filtered using an RG780, or generally equivalent, filter to block visible output while allowing near-infrared output to pass through. “Near infrared” is commonly defined as the wavelength region of the electromagnetic spectrum between 0.77 and 1.4 microns. The RG780 filter makes the emitter output invisible to a shooter, but provides near infrared for the CCD camera 12 to detect. The CCD camera is also optically filtered, preferably using an RT830 filter (or general equivalent) to optimize detection of near-infrared output. The filter blocks most of the visible wavelengths but passes near-infrared radiation that corresponds to the emitter output and is within the CCD's near-infrared response. The diameter of the infrared emitter 24 aperture is typically 50 millimeters, with smaller or larger sizes used for nearer or farther emitter positions.

The emitters used in the illustrated embodiment have angular output profiles that range from a 10-degree symmetrical cone to a 45-degree horizontal by 10-degree vertical compressed cone shape. Considering only the physical size of the emitter aperture, the calculated angular size of the emitter presented to the CCD camera 12 varies as a function of distance from the camera. The emitter subtends a relatively small angle at the 50-meter range, and the angle decreases in inverse proportion to range thereafter. In the exemplary embodiment, 50 meters is the closest planned range to an emitter, while 1000 meters is the maximum range to an emitter, although closer and farther ranges are within the contemplation of the illustrated embodiment. In general, an emitter placed at a farther range will present itself as a smaller object in the CCD camera image and would allow a more precise location to be derived from the emitter marking/location process. If an emitter subtends less than a pixel when imaged on the CCD camera, marking the emitter centroid will be difficult due to an inability to determine the emitter shape. However, the image of the emitter sensed by the CCD sensor is actually larger, due to a phenomenon known as blooming. Blooming becomes important as the angular size of an emitter aperture falls below the angle covered by a pixel.

Because of blooming, the physical size of the emitter 24 aperture does not provide a direct relationship to angular size in the CCD camera 12 image. In the exemplary embodiment, high output emitters are used to provide sufficient near-infrared output to overcome ambient radiation from the sun. This high output increases blooming. During actual live fire testing, captured images of the emitter at various ranges show larger angular sizes depending on the ambient illumination level and the emitter range.
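By way of illustration and not limitation, the geometric subtense described above (before blooming enlarges the sensed image) may be sketched numerically. The per-pixel angle below is a hypothetical placeholder for the value the Lens Mapping Procedure supplies for an actual camera and lens combination.

```python
import math

def subtended_angle_urad(aperture_m: float, range_m: float) -> float:
    """Angle subtended by an emitter aperture at a given range, in
    microradians; for these small angles atan(d/R) is essentially d/R."""
    return math.atan(aperture_m / range_m) * 1e6

# The typical 50 mm aperture at the nearest and farthest planned ranges.
near_urad = subtended_angle_urad(0.050, 50.0)     # roughly 1000 urad at 50 m
far_urad = subtended_angle_urad(0.050, 1000.0)    # roughly 50 urad at 1000 m

# Hypothetical per-pixel angle, standing in for the Lens Mapping value.
urad_per_pixel = 100.0

pixels_near = near_urad / urad_per_pixel   # emitter spans several pixels
pixels_far = far_urad / urad_per_pixel     # below one pixel: blooming dominates
```

The 1000-meter case falls below one pixel of geometric subtense, which is precisely the regime in which blooming governs the sensed emitter size.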

IR emitters 24 are commonly available along with power sources. A portable 12V power source may be used, for example a battery (not shown). The portable battery power source combined with the radio controlled emitter controllers, described hereinafter, advantageously provide for remote placement of the emitters within a live fire test range while maintaining communication with the system computer 26. As is well known, a DC generator or powered inverter may also be used to power the emitters.

An emitter controller 22 controls each emitter 24. The emitter controller communicates with the computer using a broadcast radio antenna 32 and receiver antenna 36, although other types of communication links, for example microwave, wide area wireless network, or telephonic network, are well known in the art.

Referring to FIG. 3, the emitter controller includes a Zworld BL1800 microprocessor 56. A 3.686 kHz resettable TX/RX clock 58 provides timing to the BL1800 microprocessor upon initiation of an enable signal from the microprocessor. A radio transceiver 60 connected to the microprocessor handles radio communications. The transceiver is a TEC T400 as is well known to those of skill in the art. The microprocessor sends a signal to the emitter power control circuit 64 to turn the emitter on and off. An ID select circuit 66 is provided to enable each emitter controller to be set to a unique ID, allowing radio broadcast commands to be addressed to specific controllers. A battery level detector circuit 68 is provided to monitor the battery charge state. Auxiliary circuits provide status LEDs and an override switch 70. The override switch is provided to allow a user at the emitter controller to cause the controller to power its attached emitter without recourse to radio commands, a feature useful during system setup or test.

In the exemplary embodiment, the emitters 24 are placed in relation to the firing range to provide a coordinate reference. In this arrangement the need to locate the IR's within close proximity to or at the fixed targets 18 is eliminated so long as the distance vector from the emitter to the target is known. The IR's are therefore generally located external to targets within the firing range but can also be located within the target boundary. The downrange placement of the IR's takes advantage of the high resolution of the camera 12 in order to render the optic and alignment errors negligible for determination of the aimpoint position vector. The Lens Mapping Procedure within the system computer software to be described hereinbelow is used to provide the microRadiansperPixel_H and microRadiansperPixel_V values associated with a particular CCD camera 12 and lens 44 combination that will be mounted to a projectile launcher 16 under test.

Referring to FIG. 4, the downrange controller (DRC) 25 consists of a Zworld BL1800 microprocessor 80, a RIEGL LD90-3100-VLS-FLP rangefinder 82, a TEC T400 radio transceiver 96, and a custom circuit board containing an array of control and interface circuits. The transceiver handles radio communications. A 3.686 kHz resettable TX/RX clock circuit 86 provides timing for radio communications. The system interface box (SIB) and all DRC's are tied together through a hardwired dual-channel current loop and RS485 channel. The two channels of the current loop are used by the SIB to send time-critical signals, while the RS485 serial channel is used for bulk data transfer. All hardwired connections to the DRC are optoisolated, protecting the DRC from potentially damaging transient voltages. The current loop channels are isolated by the custom isolation circuit 94. The RS485 serial channel is isolated by a B&B Electronics 485OPIN optoisolator 102. Either the SIB, through the current loop, or the DRC's internal microprocessor 80 may enable the 2 kHz Laser Trigger Clock 84. The Laser Trigger Clock in turn causes the rangefinder 82 to gather range data at 2 kHz. Data from the rangefinder is downloaded to the microprocessor for analysis and storage through an RS232 channel. An ID select circuit 100 provides each DRC with a unique address.

Referring again to FIG. 1, a downrange controller 25 is positioned at the end of each moving target's track 52 in order to monitor the location of the moving target 20. When enabled, the DRC's rangefinder takes 2000 range samples per second. The DRC's microprocessor averages groups of ten samples so that range data is recorded 200 times per second. Repetitive samples are discarded so that only motion is recorded. Each controller is provided with capacity to store up to 60 seconds of motion. The present embodiment has a capacity for 16 individually addressed downrange controllers, although it is within the contemplation of the present embodiment to include additional controllers. A DC supply, for example a battery (not shown), powers each controller 25, although it is within the contemplation of the invention to use other power sources including inverters, fuel cells and DC generators.
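By way of example and not limitation, the 10:1 sample averaging and repeat-sample discarding described above may be sketched as follows; the function names and the motion tolerance are illustrative only and do not appear in the embodiment.

```python
def decimate_ranges(samples, group=10):
    """Average consecutive groups of rangefinder samples, reducing
    2000 samples per second to 200 recorded values per second."""
    return [sum(samples[i:i + group]) / group
            for i in range(0, len(samples) - group + 1, group)]

def drop_static(records, tolerance=0.0):
    """Discard repetitive readings so that only target motion is recorded."""
    kept = []
    for r in records:
        if not kept or abs(r - kept[-1]) > tolerance:
            kept.append(r)
    return kept

raw = [100.0] * 20 + [100.5] * 10 + [101.0] * 10   # four groups of ten samples
averaged = decimate_ranges(raw)                     # 10:1 decimation
motion = drop_static(averaged)                      # repeated value removed
```

With 200 averaged values per second, the 60-second storage capacity of each controller corresponds to 12,000 recorded range values.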

Referring again to FIG. 2 and FIG. 5, trigger event sensors 40 a, 40 b, 40 c are used to initiate the aimpoint vector determination sequence. Three types of sensors are available for the initiation function: a microphone, a powered sensor, and a switch. Each trigger event sensor provides output to the system interface box (SIB) 28 and to the VCR 42.

The microphone 40 a generates a trigger event by detecting the sound of projectile launcher recoil and outputs an analog signal. The microphone is a standard commercial microphone having a sensitivity of 6 dB, although higher and lower sensitivities will function in the embodiment. The microphone is mounted on, or near, the projectile launcher such that the diaphragm of the microphone responds to the initiating sound of the projectile launcher firing.

The powered sensor 40 b is any digital sensor that requires external power. The powered sensor is provided with +5VDC from the system interface box (SIB) 28. The SIB generates a trigger event when the powered sensor digital output goes high. Powered sensors include Hall effect sensors that detect the motion of a projectile launcher part and accelerometers that detect the shock of projectile launcher recoil. The powered sensor is mounted on the projectile launcher such that the sensor is responsive to the movement or acceleration of the sensed component.

The switch 40 c is a normally open switch that completes a circuit when closed. The SIB 28 generates a trigger event when the switch is closed or pressed. The switch is used to detect a preselected physical projectile launcher operation, such as a trigger pull or a button press, or may be a hand switch held by the operator, test director or system operator.

When the particular microphone 40 a, powered sensor 40 b or switch 40 c is used, the associated output signal is sent to a glitch detector with lockout timer 110. The lockout timer provides a period after each trigger event during which further trigger events are ignored. The glitch detector output is sent to the Zworld BL 1800 microprocessor 112 within the SIB 28, alerting the microprocessor of the trigger event. The glitch detector output is also sent to a trigger tone generator 50. The trigger tone generator produces a short burst of audio line level tone that is recorded by the system video recorder 42, marking the trigger event on the video recording.

The BL 1800 microprocessor 112 of the SIB 28 serves as a general interface between the downrange and firing line hardware, including the emitter controllers 22 and downrange controllers 25, the trigger event sensors 40 a, 40 b, 40 c, the VCR 42 and the system computer 26. The TEC T400 radio transceiver 120 handles radio communications.

The SIB 28 communicates with the system computer 26 through three channels: RS232 serial, parallel digital, and RS485 serial. General communication between the SIB and the system computer is passed through the RS232 serial channel. The parallel digital signals are used to transmit time critical communication. The digital signals from the system computer are received by the SIB through a Data Translations STP68 interface board 124. Signal conditioning circuitry 122 turns two of the digital signals into the dual-channel current loop that joins the RS485 serial channel in hardwire linking the SIB and all of the DRC's. The current loops transmit time-critical events to the DRC's while the RS485 serial channel allows for bulk data transfer. A keypad 114 and a Matrix Orbital LK2204-25 LCD display 116 allow the user to interact with the SIB, monitor its status, issue commands, and run test functions.

The system computer 26 is a typical commercially available PC capable of running software embodied on the computer media. In the exemplary embodiment the computer includes a Windows 2000™ or higher operating system. PC requirements in the exemplary embodiment are at least 128 MB of RAM and a 733 MHz Pentium III processor or equivalent, although the system will run on other platforms. A CDROM drive or equivalent means, for example an external memory device or Internet download capability, is required for software installation. As is well known, the PC contains a memory device such as a hard disk, a microprocessor, and input and output devices. Preferably, the output devices include both a display screen and a printer. To accept the software and databases, the memory device, for example the hard disk, should preferably have at least 10 GB of free space for program operation and data storage. However, the memory requirement may vary and depends upon the expected size of the database. As is well known in the art, memory can be selected to match the database or increased by installing a higher capacity disk drive.

Software Control

The software for controlling the equipment that comprises the illustrative embodiment will now be described. It is to be appreciated that the software can be expressed in many forms by those skilled in the art and only the necessary functions will be described herein. The software provides control among downrange hardware, operation station hardware and firing line hardware.

SIB Program

Referring to FIG. 6, the compiled code in the Zworld BL 1800 microprocessor 112 (See FIG. 5) automatically initiates at system power up 150. At initialization 152 variables and arrays are created and the digital I/O system is initialized with outputs set to their default startup states. The main loop is started 154 and repeats until the system is powered down. The main and auxiliary trigger pulse digital inputs are polled 156 to check for hardware detection of projectile launcher events. The system computer is alerted 158 to a hardware detected projectile launcher event by raising the trigger flag. Then the main and auxiliary trigger reset digital inputs are polled 160 to determine if the system computer commands the main or auxiliary trigger flags to be lowered 162. The main or auxiliary trigger flags are lowered as commanded by the system computer. The RS232 serial buffer serving communications to the LCD display is polled 164 to determine if a keypad keypress has been received. In the handle keypress step 166 the program responds appropriately to the keypress. The RS232 serial buffer serving communications to the system computer is then polled 168 to determine if a computer command has been received. In the ping steps 170, 172, the computer determines whether a ping is received. All commands from the system computer other than a ping are meant for the emitter controllers via the radio link, and the command bytes are placed in the outgoing queue 174. In the bytes in outgoing queue step 176, the program determines if there are any computer commands waiting to be sent to the emitter controllers via the radio link. The SIB radio transceiver's received signal strength indicator line is polled 178 to determine if another unit's transceiver is currently transmitting. Then, all computer command bytes in the outgoing queue are sent to the emitter controllers via the radio transceiver 180. The bytes from the outgoing queue are cleared after being sent 182.
Next the SIB radio transceiver's received signal strength indicator is polled to determine if another unit's transceiver is currently transmitting 184. If another unit is transmitting, incoming radio messages are received 186 and the system responds as appropriate 188 to the received radio message. Finally, the main loop ends 190 and repeats in the start step 154.

Emitter Controller (EC) Program

Referring to FIG. 7, the emitter controller program will now be described. The compiled code resident in the emitter Zworld BL1800 microcontroller 56 (See FIG. 3) is initiated upon powerup 202. At initialization 204 variables and arrays are created and the digital I/O system is initialized with outputs set to their default startup states. The two digital inputs of the EC's 2-position DIP switch are polled to determine the controller's ID tens digit, and the four digital inputs of the 10-position rotary switch are polled to determine the controller's ID ones digit 206. The status LEDs are flashed to indicate the emitter identification number; the LEDs are cycled a number of times equal to the tens digit, then flashed a number of times equal to the ones digit 208.

The main loop starts 210 and repeats until power down. The program queries whether there are any bytes waiting to be sent to the system computer via radio link 212. In a responses allowed step 214, the EC settings are queried to determine if radio responses have been disabled by system computer command. Radio responses may be disabled to prevent confused radio traffic when many EC's are deployed. The program pauses for a duration determined by the emitter ID number 216, 218, 220. This pause will prevent the EC's from attempting to transmit simultaneously, as well as causing them to respond to broadcast commands in numerical order. The EC radio transceiver's received signal strength indicator line is polled to determine if another unit's transceiver is currently transmitting. Next, in the “Bytes in outgoing queue” step, the waiting messages in the outgoing queue are broadcast via the EC's radio transceiver 222. The outgoing queue is cleared 224 after the waiting messages are sent. The EC's set ID is checked 226. Then the EC determines its battery state 228 through two onboard voltage comparators, dividing the range of battery voltages into three categories: good, warn and critical. If manual override functionality has not been disabled by the system computer 230 the manual override switch state is polled and if “on” the manual override flag is set 232. The EC then polls the radio transceiver's RSSI line to determine if another unit's transceiver is currently transmitting 234. In steps 236 through 240 incoming bytes are received and the message transmitted by the bytes is handled. Appropriate responses, if any, are placed in the outgoing queue. The program polls whether the emitter manual override flag has been raised 242. If the flag is raised then the emitter is turned on 244 and the emitter status LED is flashed steadily 246, indicating that the emitter is on due to manual override, and the main loop ends. 
If the manual override flag is not raised, the program polls whether the emitter flag is raised 248. If the flag is raised then the emitter and the emitter status LEDs are turned on 250 and the main loop ends. If neither the manual override flag nor the emitter flag is raised, the emitter is turned off 254, the emitter status LED is turned off 256 and the main loop ends 258.
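By way of illustration and not limitation, the ID-based pause that prevents emitter controllers from transmitting simultaneously may be sketched as follows; the per-ID slot length is an assumed value not specified in the embodiment.

```python
SLOT_SECONDS = 0.05  # assumed per-ID slot length; not given in the description

def transmit_delay(emitter_id: int) -> float:
    """Stagger radio responses: each emitter controller waits a period
    proportional to its ID, so controllers answer a broadcast command in
    numerical order and never key their transceivers at the same time."""
    return emitter_id * SLOT_SECONDS

# Controllers 1, 2 and 7 answering the same broadcast command:
delays = {ec_id: transmit_delay(ec_id) for ec_id in (1, 2, 7)}
```

Because each delay is distinct and ordered by ID, the responses arrive sequentially and collision-free on the shared radio channel.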

Downrange Controller Program

Referring to FIG. 8, on powerup 262 the compiled code resident in the downrange controller Zworld BL 1800 microcontroller 80 starts and runs. Variables, states and settings are initialized 264. The hardware ID is checked 266 by polling the four digital inputs of the DRC's 16 position rotary switch to determine the unit's ID number. The handle temp function 268 uses onboard sensors to determine the DRC's current internal temperature. The DRC's fan, defroster, and/or heater are then activated as needed to keep the unit's internal temperature within tolerance limits.

The main loop 270 is then started and repeated until shutdown. Once every 15 seconds a subset of commands within the main loop is performed 272 wherein the hardware ID lines are checked and the DRC ID number is ascertained 274, the handle temperature routine performed 276, and the check battery voltage routine is performed 278 thereby ending 280 the periodic check cycle. When this main loop periodic subset is completed, the main loop functions are performed.

Each pass 282, the main loop begins by checking the state of Trigger1 284. Trigger1 is a current loop passing through the SIB and all of the DRC's. When Trigger1 is active, all rangefinders will begin ranging and sending data to their microcontrollers. If Trigger1 is active, the DRC then determines if the system computer has enabled the DRC for recording 286. If the DRC is enabled for recording, the microcontroller will begin processing and recording the incoming data from the rangefinder 288. The rangefinder will continue to range until Trigger1 returns inactive. The microcontroller will continue to record data from the rangefinder until no more data is forthcoming or a full sixty seconds of data have been recorded. Once all data has been recorded the DRC will disable itself 292 for recording if the system computer has placed it in disable after record mode. Disable after record mode 292 prevents new data from being accidentally overwritten before the data can be downloaded to the system computer. If the DRC is in the disabled after record mode then no further rangefinder data will be recorded until the DRC is again enabled by a system computer command. If Trigger1 is not active but rangefinder data has been received 294, the data is discarded. The RS485 serial buffer serving the communications with the system computer is then polled 298 to determine if commands have been received. If so, the microprocessor acts appropriately 300, 302. Any responses to the system computer are placed in the outgoing queue 304. The microprocessor next checks the outgoing queue. If the queue contains any bytes, they are sent to the system computer through the RS485 serial channel and the queue is cleared 306, 308. The RS232 serial buffer serving communications with the keypad/display unit is polled to determine if a keypad keypress has been received 310. Appropriate responses to any keypresses are generated 312 and the main loop ends 314.

System Computer Program

The system computer program comprises a main program and two threads: the frame grabber thread and the event detection thread. A process-events routine captures data for later analysis, but the routine can also execute in real time.

Referring to FIG. 9 a, a main program is illustrated at 300. The program starts 302 by initializing data structures 304 entered into databases 306, 308, 310. The data is collected from the on-site survey 312, the lens mapping procedure 314, and from the test plan and ballistic models 316. After the data structures are initialized, the program initiates communication linkage and verification with the system hardware through wires and radios along with its internal communications 318. The event detection and frame grabber threads are initialized and placed in a suspended state 320. The computer display shows the main menu 322, whereby the operator selects either the capture or analyze mode 324. In the capture mode filenames are assigned and new data files are opened and initialized for the event 326. In the analyze mode the data is analyzed 336 a. The detection processing mode is selected for either live processing 330, 332, 334, 336 b or post processing 338, 340, 342. During Live Processing, data from a single event is captured and immediately analyzed at 336 b. For Post Processing, where a large amount of data is to be captured sequentially, the data are recorded and analyzed later via AnalyzeData( ) at 336 a. The System Interface Box (SIB) turns on the default emitter for the selected target after a target is selected 329 or 337. The event detection sequence initiates when an event is selected at 330 or 338 and the video recorder is started.

Referring to FIG. 9 b, the thread for monitoring event detection ports on the SIB hardware is shown at 350. Following initialization, the event detection circuits are reset 354 and the event counters are initialized 356. The event detection loop starts 360. The video recorder status is checked 362 and the detection thread is suspended if the video recorder is off 364. If the video recorder is operating, the video frames from the projectile launcher-mounted camera are recorded on the digital video recorder. If Live Processing mode was selected 366, then the video frames are also input into a computer memory buffer configured as a six second first in first out (FIFO) memory buffer 368. If an event has occurred 370 then the event detector circuits are reset 372 and event data are processed 374. If an event has not occurred, then the burst timer is checked 380 and the burst count is incremented if the burst timeout has passed 382. If not in the burst mode, or after events are processed 372, 374, the event detection loop is restarted 360.

Referring to FIG. 9 c, the process for analysis of event data is illustrated at 400. The process handles the selection, display, and marking of the emitter locations for event images stored in the FIFO buffer by the video frame grabber. User controls for displaying, marking, and saving marking data are provided 404. An event is selected from the available events 410. The data related to an event is used to find and display the video frame for the event 412. The operator verifies the event occurrence and emitter marking, and the marked event data is saved 414. The marked emitter data is used with the target and emitter coordinate data provided via the survey process to determine aimpoint performance 416. Aimpoint data for the target and event of interest is saved 418 and combined with the ballistics model to provide flyout of the round towards the target. The operator may then select another event to be analyzed, or return to the previous process 420, 422.

A projectile dynamics model as is well known in the art is included for both training and testing. The dynamics model calculates the fall of the shot. Such testing may include modifications to the projectile launcher site or triggering mechanisms. The projectile dynamics model may include aerodynamic drag effects as well as lift and gravity forces upon the projectile. Wind and other shocks encountered by the projectile are also included in the dynamics model. Such modeling, including the effects of lift and drag caused by exogenous aerodynamic forces, is well known in the art.
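As one hedged illustration of such a dynamics model, and not the ballistic data of the embodiment, the sketch below integrates a point mass under gravity, speed-squared drag, and a horizontal wind; the drag coefficient and all other constants are assumed for illustration.

```python
import math

def fly_out(v0, elev_rad, drag_coeff=0.0008, wind_x=0.0, dt=0.001, g=9.81):
    """Integrate a point-mass trajectory under gravity and speed-squared
    drag until ground impact. Returns (downrange distance, time of flight).
    All constants are illustrative, not the embodiment's ballistic data."""
    x, y = 0.0, 0.0
    vx = v0 * math.cos(elev_rad)
    vy = v0 * math.sin(elev_rad)
    t = 0.0
    while y >= 0.0:
        # Drag acts on the projectile's velocity relative to the air mass,
        # which is how a horizontal wind enters the model.
        rvx, rvy = vx - wind_x, vy
        speed = math.hypot(rvx, rvy)
        vx -= drag_coeff * speed * rvx * dt
        vy -= (g + drag_coeff * speed * rvy) * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return x, t

# 250 m/s muzzle velocity at 2 degrees elevation, no wind.
impact_range, time_of_flight = fly_out(250.0, math.radians(2.0))
```

Comparing the computed fall of shot against the measured aimpoint yields the predicted projectile effects used for evaluation and training.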

In addition to the dynamics model, a projectile burst effect model, the development of which is well known in the art, may be included. Burst effects are correlated to the actual projectile parameters, for example, the quantity of explosive, the fracture characteristics of the casing and the effects of proximity devices.

Setup and Operation

The operation of the invention will now be described.

Range Survey

First, a survey of the firing range is performed. The system uses survey equipment to measure the three-dimensional coordinates of the targets, reference emitters and shooting position. Since all of the subsequent calculations are based upon these measurements, the accuracy of these coordinates bounds the accuracy of the system.

The scenario objectives, whether testing or training, are factored into the design of the range. The location of the firing line is determined along with placement of downrange hardware consisting of rangefinders and IR's, thereby defining the range and available targets. The static emitters preferably should be placed within 20 mils of the targets but may also be placed further from the target with an acceptable loss of accuracy. Next, a static target survey is performed in which range data for the shooter, static targets, and emitter three-dimensional coordinates within the live fire range are determined along with ground plane elevation at each target.

Equipment used to survey live fire test ranges typically includes theodolites, transits, laser range finders and global positioning systems. The exemplary embodiment uses a survey that incorporates a theodolite angular measurement device with a laser rangefinder to provide azimuth, elevation, and range to a retro-reflective marker with an option to calculate and output three-dimensional coordinates. Typically, measurements are referenced to geodetic coordinates. In the exemplary embodiment, a relative reference method is used by defining a local coordinate system origin (defined as (0, 0, 0)) that coincides with a predefined shooter position. The target and emitter coordinates are measured relative to the shooter position. Regardless of the measurement method, the desired results are coordinates reported in Northing, Easting, and Elevation coordinates (n, e, h).

Survey equipment, under ideal circumstances, can supply accuracies on the order of at least + or −1 mm. The uncertainty in the target and reference emitter position can increase to + or −0.1 meter (for each) due to additional errors that can occur during subsequent target and reference emitter placement. Preferably, the target and reference emitters will be positioned before the survey. The target and reference emitter coordinates will then be obtained by surveying to their centers. Even with the survey performed after target and emitter placement, the best achievable uncertainty in their placement is considered to be + or −0.01 meters (for each). Therefore, with respect to the total uncertainty error the survey contributes + or −0.01 meters for each of the shooter position, target position and emitter position.

Communication and control connections are made between the operation station (comprising the system computer 26 and the SIB 28) and the firing line and downrange hardware. Some connections are made through cables and other connections are made through radio means (including radio relays). After controls are established, the components are set to their specific addresses for communication with the SIB. In particular, the trigger events are established and each camera is associated with its projectile launcher. It is important that the location of each target and IR emitter within the range be precisely determined. With this range data determined, the Input Files are entered into the computer.

Surveyed Target and Emitter Coordinates

The targets and emitters are located on the firing range at specific northing, easting, and elevation coordinates provided by survey. The accuracy of the target and emitter positions depends on when the survey is performed relative to placing the targets and emitters on the range. One method is to survey and mark the desired positions on the ground, followed by the later placement of the targets and emitters. Adding measured elevation offsets to the ground survey positions gives the final coordinates of the targets and emitters. Using this method, estimated errors in the coordinates range from + or −50 mm to + or −100 mm.

In the exemplary embodiment, targets and emitters are located by securely positioning the targets on the firing range at the approximate ranges desired, and then surveying directly to the target and emitters. If surveying is performed after placement, the coordinates of the targets and emitters should fall within + or −10 mm or better. Measured offsets of the target centroids from the ground are recorded and used to evaluate projectile ground impact in the immediate area of the target.

During aimpoint analysis, when marking the location of the emitter relative to the pixel coordinates, the centroid of each emitter is located by visual interpolation. Marking precision is increased by allowing for sub-pixel marking via a zoom function (implemented in the system software). The zoom function magnifies the area of the image that contains the emitter signature. Sub-pixel marking precision is the inverse of the zoom factor chosen; a zoom factor of 4 provides sub-pixel precision of 0.25 pixels. Marking of the emitter centroid is performed by noting the horizontal and vertical dimensions of the emitter image in pixels. The location of the emitter centroid is determined by dividing these dimensions in half.
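The sub-pixel precision arithmetic described above may be expressed directly as follows; the bounding-box convention is an assumption about how the noted dimensions are recorded and is illustrative only.

```python
def marking_precision(zoom_factor: int) -> float:
    """Sub-pixel marking precision is the inverse of the chosen zoom factor."""
    return 1.0 / zoom_factor

def emitter_centroid(left_px: float, top_px: float,
                     width_px: float, height_px: float):
    """Centroid of the marked emitter signature: the noted horizontal and
    vertical dimensions divided in half, offset from the marked corner."""
    return left_px + width_px / 2.0, top_px + height_px / 2.0

precision = marking_precision(4)          # zoom factor of 4 -> 0.25 pixels
cx, cy = emitter_centroid(100, 60, 5, 3)  # centroid at (102.5, 61.5)
```

A larger zoom factor yields finer marking precision at the cost of magnifying a smaller portion of the image around the emitter signature.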

Correspondence between the Shooting Range and CCD Camera Image

Referring to FIG. 10, the geometry of the shooter, emitter and target measurement process is illustrated. The shooter, emitter, and target geometry forms a long, narrow triangle with the shooter at the apex and the emitter and target located downrange at the other triangle vertices. Equations describing the angles from the reference emitter to the target are shown along with the sight line-emitter angles with reference to the CCD camera image.

The shooting position will preferably be located beforehand by having the surveyor place a marker at the defined shooting position on the ground. The accuracy of the shooting position coordinates would follow the best-case coordinate accuracies of + or −10 mm.

With the shooter, emitter and target positions ascertained, the process for finding the sightline to target aimpoint angles is performed. The process consists of four basic steps: 1) Using the 3D coordinates of the shooter, emitter, and target, calculate the emitter and target angles (vertical and horizontal) relative to a line parallel to a reference (northing) axis. 2) Calculate the emitter to target angles by subtracting the emitter angles from the target angles. 3) Measure the sightline to emitter angles by marking the emitter centroid on the CCD image captured when a shot is fired, and correct for boresight angular errors. 4) Using the calculations made from the survey coordinates and the calculations made via the CCD measurements for the particular shooter, target, and emitter combination, calculate the sightline to target angles by subtracting the emitter to target angles from the sightline to emitter angles.
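By way of illustration and not limitation, the four steps may be sketched for a local coordinate system with the shooter at the origin (0, 0, 0); the function names and sign conventions follow the steps as stated and are not the embodiment's software.

```python
import math

def angles_from_shooter(n, e, h):
    """Step 1: horizontal and vertical angles (radians) of a surveyed point
    relative to a line through the shooter parallel to the northing axis.
    The shooter sits at the local origin."""
    horizontal = math.atan2(e, n)
    vertical = math.atan2(h, math.hypot(n, e))
    return horizontal, vertical

def sightline_to_target(target_nev, emitter_nev,
                        sight_to_emitter, boresight_error=(0.0, 0.0)):
    """Steps 2-4: emitter-to-target offset from survey angles, boresight
    correction of the CCD-measured sightline-to-emitter angles, and the
    final subtraction giving the sightline-to-target angles."""
    t_h, t_v = angles_from_shooter(*target_nev)
    e_h, e_v = angles_from_shooter(*emitter_nev)
    et_h, et_v = t_h - e_h, t_v - e_v                    # step 2
    se_h = sight_to_emitter[0] - boresight_error[0]      # step 3
    se_v = sight_to_emitter[1] - boresight_error[1]
    return se_h - et_h, se_v - et_v                      # step 4

# Target 500 m downrange and 5 m east; emitter directly downrange at 500 m;
# the camera measures the emitter exactly on the sightline (0, 0).
st_h, st_v = sightline_to_target((500.0, 5.0, 0.0), (500.0, 0.0, 0.0),
                                 (0.0, 0.0))
```

For this long, narrow triangle the resulting horizontal angle is about 10 milliradians, reflecting the 5-meter lateral offset at a 500-meter range.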

Lens Mapping

The Lens Mapping Procedure (LMP) will now be described. The lens mapping is preferably performed onsite after setup of the equipment to ensure that the lens mapping accurately reflects the camera and lens settings. During lens mapping, the location of an emitter in the camera image is used in finding the angular offset of the emitter from the camera's optical axis, and subsequently the angular offset from the projectile launcher's line of sight. In order to determine the angular offsets, camera and lens combinations are mapped to determine the angle represented by each pixel of the camera's imaging device. Camera/lens mapping comprises locating the camera and lens at a known distance in millimeters from a target calibrated in millimeters, and capturing the resulting image. The captured image (in digital format) is examined to determine the relationship between the pixels and the calibration target markings. The LMP is a function of the system computer software and is used to provide the microRadiansperPixel_H and microRadiansperPixel_V values associated with a particular CCD camera and lens combination that will be mounted to a projectile launcher under test during a scenario. The microradians per pixel across the field of view of the camera/lens combination are calculated from the relationship of the pixels to the linear target markings, and the known distance from the camera lens to the target. In the preferred embodiment a standard camera faceplate format of ½″ is typically used, although ¼″, ⅓″ and ⅔″ formats as well as nonstandard formats are suitable.
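Under the stated assumptions (a known lens-to-target distance and calibration markings of known size), the microradians-per-pixel calculation can be sketched as follows; the function name is illustrative, not from the specification:

```python
import math

def microradians_per_pixel(marking_span_mm: float,
                           distance_mm: float,
                           pixel_span: float) -> float:
    """Angle subtended by a known span of calibration-target markings,
    divided by the number of pixels that span occupies in the image."""
    subtended_urad = math.atan2(marking_span_mm, distance_mm) * 1e6
    return subtended_urad / pixel_span
```

For example, 500 mm of markings imaged at 50 m and spanning 100 pixels gives roughly 100 microradians per pixel.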

To begin the lens mapping process, the particular camera and lens that are suitable for the field of view and range are selected for the aimpoint measurement task. The selection of a specific lens is dependent on both the camera selected (due to the camera image format used and also due to the lens mounting requirements) and on the focal length needed to provide a suitable field of view for the downrange targets. As an example and not by way of limitation, focal length lenses from f=24 mm to f=100 mm can be used in combination with CCD image formats of ½ in. and ⅓ in. When using the same focal length lenses for the ½ in. and ⅓ in. formats, the field of view (FOV) for the ⅓ in. CCTV format is less than the FOV for the ½ in. CCTV format because of the difference in the physical size of the image surface. However, since the number of pixels is the same for the two image formats (768 h × 494 v), the ⅓ in. format provides a higher angular resolution at the expense of a smaller field of view. In the exemplary embodiment, the field coverage must provide for the camera's imaging of a target's reference emitter when the projectile launcher is sighted on the target. This requirement originates from the need to mark the reference emitter when a shot is fired at a target.

It is to be appreciated that each scenario will have its own particular requirements, and each scenario should be analyzed to determine what field coverage is required. In the exemplary embodiment, field coverage falls between 10 meters and 100 meters, although larger fields are within the contemplation of the present invention. Smaller field coverages lead to higher system measurement resolution, but the emitters will need to be closer to the targets. Larger field coverages will reduce the measurement resolution, but will allow larger target to emitter separations and will help to ensure that the emitter will be visible when a shot is fired.

After the camera and lens are selected the camera is mounted to a tripod and placed near the planned shooter position. Three reference emitters are located at a distance from the camera that approximates the range of the planned scenario targets. The three reference emitters should be placed in a straight line, preferably parallel to the horizon. To align the reference emitters, the CCD camera image is referred to and the spacing is arranged to provide for all three emitters to be in the camera's FOV when the center of the camera's FOV is aligned to the left-most emitter.

For the horizontal lens mapping, the second emitter should be preferably placed at about 25% of the image width from the image center towards the right side of the image. The third emitter should be placed preferably at about 70% of this same width. This placement of the emitters will allow for the camera to be rotated 90 degrees counterclockwise and the same three emitters to be used when performing the vertical lens mapping. Using survey equipment, the 3D coordinates of the camera and the three emitters are measured and recorded. If desired, the camera can be set to be the origin with the reference axis defined by the line between the camera and the left-most emitter. With the camera mounted on a tripod, the camera is panned and tilted to visually attempt to align the center of the left-most emitter to be in the center of the camera image. It is to be appreciated that all three emitters are operating and are within the camera's horizontal FOV.

Once these steps are completed the camera and emitter 3D coordinates are calculated. Although the calculations can be performed manually, in the preferred embodiment the data is entered into a software program that is part of the system software. When the software program is being used, the camera and emitter 3D coordinates are entered. After saving the coordinates, the camera image is brought up on the computer screen and the process of performing the fine camera alignment and marking the emitters for lens mapping is initiated. The mouse cursor is placed over the emitters and the (pix.x, pix.y) pixel coordinates are viewed on the screen. The cursor is placed over Eo and the pixel coordinates are checked against the center of the CCD image coordinates. If the emitter is not at the center, the camera is panned and tilted until Eo is at the pixel coordinates for the center of the image. Next the cursor is placed over the emitter E2H and a check is made as to whether the pix.y value for E2H is the same as that of Eo. If the pix.y coordinates are not the same, the camera is rolled about its axis to make both Eo and E2H (and also E1H) have the same pix.y value.

For the vertical lens mapping, the camera is rotated 90 degrees counterclockwise and aligned visually so that the left-most emitter becomes the center emitter within the CCD image, with the remaining emitters being within the camera's vertical FOV. The adjustment of the camera to properly image the three emitters for the vertical lens mapping is similar to the horizontal alignment, except that the pix.x coordinates of all three emitters should be the same value, which should be the pixel coordinate of the horizontal center of the image. The camera is panned, tilted, and rolled to perform the alignment and the emitters are then marked in the same order. After the last emitter is marked for the vertical lens mapping, the horizontal and vertical lens mapping values will be displayed and an option for storing the values will be given. Lens map values are stored with a name that corresponds with the camera and lens combination used, for later retrieval during the live fire aimpoint scenario.

The errors that can occur in the camera/lens mapping are: errors in the camera/lens distance to the target, errors in the target calibration, and errors in the reading of the pixel relationship to the target. In measurement of the camera and lens combination mapping function, standard deviation of the pixel in terms of angle was determined to be less than 0.15 milliradians (mrad). This angle is considered to be the lens mapping uncertainty contributing to total system error. The angular value is constant and does not change with range.

Camera Mounting and Boresight Calibration

The camera is mounted to the projectile launcher, preferably with a rigid mount providing a view of the emitters over the entire projectile launcher super-elevation range. The linear offsets (horizontal and vertical) of the camera aperture from the projectile launcher sight line are measured. The ideal alignment of the camera for measuring emitter angles would have the optical axis of the camera lens coaxial with the sightline of the projectile launcher, and the camera lens located at the surveyed shooter position coordinates. However, this is not practical for live fire since the camera must be mounted out of the way of the projectile path and any projectile launcher operations. The mounting requirements lead to coordinate offsets and angular deviations of the optical axis of the lens from the projectile launcher sightline, which are corrected by the boresight process.

As can be appreciated by those skilled in the art, the boresight process provides a set of boresight angles that are used to correct the subsequent measured emitter to sightline angles obtained during the firing event. When establishing a boresight to an emitter geometrical relationship at the boresight range, the boresight angles derived from the boresight emitter measurement include angular deviations between the sightline and the camera optical axis, as well as apparent angular deviations due to the coordinate offset from the sightline, assuming that the optical axis to sightline offsets are negligible. However, even small optical axis to sightline offsets will cause apparent angular deviations, and their effect should be considered.

It is to be appreciated that because the coordinate offsets are constant and the boresight emitter range can vary from the boresight target range, errors will occur in the aimpoint calculations as the boresight target range varies. The magnitude of these errors depends on the offsets and the range difference between the range to the target and the range to the emitter. Without correction, the aimpoint angles will only be correct when the target's emitter is at the boresight range.

To account for the error that will be introduced for firing at targets that are at different ranges than the boresight range, the portion of the boresight angles that is due to the sightline to camera offset is mathematically subtracted from the aimpoint calculations. This adjustment is done by calculating the angular deviations due to the sightline to camera offset at the boresight range, and then subtracting those angles from the angular deviations that are due to the camera lens offset at the target emitter range. This difference is then added to the boresight angles to produce the boresight correction angles for emitters at ranges other than the boresight range. As can be appreciated by those skilled in the art, there will still be some residual errors after this boresight correction process due to measurement uncertainty.
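The range-dependent correction just described can be sketched as follows. This is a minimal illustration with hypothetical names, assuming the offset-induced deviation is modeled as the angle subtended by the fixed linear offset at each range:

```python
import math

def offset_deviation(offset_m: float, range_m: float) -> float:
    """Apparent angular deviation (radians) caused by a fixed linear
    sightline-to-camera offset when viewed at a given range."""
    return math.atan2(offset_m, range_m)

def boresight_correction(boresight_angle: float, offset_m: float,
                         boresight_range_m: float,
                         emitter_range_m: float) -> float:
    """Per the text: the deviation at the target emitter range minus the
    deviation at the boresight range, added to the boresight angle."""
    difference = (offset_deviation(offset_m, emitter_range_m)
                  - offset_deviation(offset_m, boresight_range_m))
    return boresight_angle + difference
```

When the emitter sits at the boresight range the difference vanishes and the original boresight angles are used unchanged, consistent with the text.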

The CCD camera is preferably rigidly mounted to the projectile launcher under test, and it must be aligned to the projectile launcher sightline. Some adjustment capability is provided in the mount to adjust the optical axis of the camera lens to be approximately parallel to the projectile launcher sightline. If the optical axis is parallel, there will be a constant linear offset between the sightline and the optical axis, but no angular offset. However, the constant linear offset translates into an apparent angular offset, which reduces with increasing range. Since the mechanical alignment process is relatively coarse, an electronic alignment, or boresighting, is performed to find the residual error for removal during actual aimpoint error measurements.

The boresight calibration is performed by aiming the projectile launcher at a boresight target and emitter pair placed at a known range. An expert gunner fires one or more shots (preferably three) at the boresight target and the boresight emitter centroid is marked on the CCD image. The shot group is analyzed to verify proper aiming by the expert gunner relative to the projectile launcher under test. If acceptable, the average of the shot grouping is used as the boresight correction values to account for the remaining angular offset of the optical axis to the sightline. As can be appreciated by one skilled in the art, the sight settings remain unchanged after the boresight process is completed.

The acceptability of the aimpoint errors is dependent on the projectile launcher under test. For example, an expert M16 gunner using the iron sights can aim at a well-marked target to within 0.5 milliradians (mrad), while sighting errors for an M203 Quad sight may be as high as 4 mrad.

For adjustable sight projectile launchers, for example the OICW, OCSW, MK-19, M203, etc., the adjustable sights raise two additional concerns caused by the projectile launchers' significant changes in the sightline to barrel angle relative to changes in the target range. As is well known in the art, the adjustable sight design is used to produce super-elevation of the barrel to fly a projectile to a target. In some cases the elevation angle may be on the order of 36 degrees.

The change in the sightline to barrel angle produces a change in the camera optical axis (OA) to sightline (SL) angle and offset. The first concern is that since the camera is attached to the barrel of the projectile launcher, the camera will rotate with the barrel's super-elevation. If the barrel is super-elevated more than a few degrees, the infrared reference emitters may be out of the camera's field of view (FOV) and the emitter cannot be marked. Second, for each sightline to barrel angle setting, the coordinate and angular offsets of the optical axis to the sightline will be different, requiring a separate compensation for each new sight setting.

It is within the contemplation of the present invention that the loss of emitter in the camera's FOV be accommodated by a mount that provides for indexed angular rotation of the camera to counteract the super-elevation of the barrel. A selection of indexed camera rotation angles can be used to keep the emitter within the FOV for a group of sight angle settings. The number of camera rotation settings depends on the range of the sight angles versus the FOV. In an alternate embodiment, emitters can be mounted at higher elevations on the range so that they are in the FOV when the projectile launcher is super-elevated.

When the geometry of the projectile launcher sightline versus sight adjustment is ascertained, it is possible to boresight at an intermediate sight range, and then to add or subtract the angular changes in the sightline angles as the sight is indexed to settings for ranges other than the boresight range. These sightline angular changes would include both elevation and azimuth angles since adjustable sights also produce a horizontal change in the sightline angle to account for the increased effect of projectile precession with range. In addition to changing the OA to SL angles, there will also be a change in the optical axis to sightline offsets that will have to be taken into account.

In another embodiment for adjustable sight projectile launchers, a separate boresight for each indexed sight setting, with its corresponding indexed camera rotation, could be performed. In effect, each setting of the sight, along with any required rotation of the camera, would act as an individual fixed sight projectile launcher. In practice, a family of boresight and OA to SL offsets would need to be saved for each sight setting that is planned for use in an experiment or exercise.

For either of the hereinabove embodiments for adjustable sight use, the shooter would have to indicate the current indexed sight setting for the firing event. The software would then use the boresight and OA to SL offsets for that sight setting via either a mathematical sightline model or a sight setting lookup table. Using either of these two methods for adjustable sights in the software results in no additional uncertainties in aimpoint measurements.

Moving Target Measurements

The goal of the moving target measurement system is to provide the 3D coordinates of the moving target centroid at two times: when a shot is fired (to determine the projectile launcher sightline angles relative to the target center), and when the fired round reaches its terminal position (where it intersects the target path plane, hits the ground, or air bursts), so that the locations of the moving target and the round can be differentiated at the time the round reaches its terminus or intersects the target. The location of the moving target's centroid, like the fixed target's centroid, is determined with reference to the infrared emitters.

Moving targets, along with associated emitters and laser rangefinders, are established during setup. The laser rangefinders are aligned and the downrange controllers are set up. The laser rangefinder is zeroed at the moving target home, or initial, position. Then, the moving target and emitter 3D coordinates at the home (or initial) position and the end positions are obtained.

Referring to FIGS. 11 and 12, which depict the target path plane and identify the target angles from the shooter's position, the sightline offset angles from the target and the subsequent sightline angles, the method of calculating the sightline intersection with the target plane will now be explained. The 3D coordinates of the moving target and emitter at the time a shot is fired are found by measuring the target sled offset from the target sled home position via a rangefinder. This offset is used to produce a 3D offset relative to surveyed 3D coordinates of the target and emitter at the home position. The 3D offsets are added to the home position 3D coordinates to produce the target and emitter 3D coordinates within the test range coordinate system at the time of an event. These target and emitter 3D coordinates at shot fired are used, along with the CCD image pixel coordinates of the emitter, to produce the sightline offset angles relative to the target center. Once the target and emitter 3D coordinates within the test range are found, the calculation of the sightline offset angles follows the same procedure that is used for static targets described hereinabove.

The sightline offset angle is added to the horizontal and vertical angles of the target within the firing range coordinate system (derived from the target's surveyed 3D coordinates) in order to determine the horizontal and vertical angles of the sightline in the firing range coordinate system. After sightline orientation is found, the intersection of the sightline with the target path plane is calculated.

Once the drop of the round, which is calculated relative to the sightline, is provided by the ballistics model, the 3D coordinate of the fired round on the target path plane can be found by subtracting the drop from the sightline intersection with the target path plane.

The moving target setup will now be described. The target sled moves along a linear track under remote control. Preferably, motion is provided by an electric motor powered by 12 VDC batteries, although pneumatic, hydraulic or other mechanical drivers that are well known in the art may provide motion. The target sled can move in either direction along the track and the target can be popped up or down. Remote control may be independent of the system computer. The target track, while not necessarily level, is assumed to be flat and straight such that it does not cause up and down or left and right deviations of the target from a straight line of more than 20 mm. While an assumption of 20 mm may be optimistic, this assumption allows the target path to be considered as following a straight line. When the 20 mm assumption does not hold, additional measurements of the vertical position of the target centroid as the target moves along the track can be used to increase the vertical accuracy of the target centroid calculations. The beginning and ending target sled positions (also referred to as “home” and “end”) are electronically referenced to ensure that these positions are known and repeatable. Sensors are used on the track to detect the home and end reference positions of the target sled. The target and emitter centroid positions are measured in 3D coordinates (via survey to ±10 mm) for the home and end positions of the target sled. These 3D target and emitter centroid home and end positions are defined as TB & TE, and EB & EE, respectively. The moving target setup procedure requires that the rangefinder SetRangefinderZero( ) command be sent at the time of surveying the target's home position 3D coordinates. Subsequent readings of the target sled position via the rangefinder will provide the relative target sled offset from the target home position, TB. 
The 3D coordinates of TB and TE are used to produce a 3D unit vector tBtE. Multiplying this unit vector by the linear target offset from the home position of the track produces the 3D coordinate offset of the target sled, TO. Summing this offset with the 3D coordinates of the target home position, TB, provides the 3D coordinates of the intermediate target position.

A system emitter is physically attached to the moving target sled, and the position of the emitter centroid relative to the target centroid is fixed. The 3D coordinates of the emitter centroid at an intermediate target position are calculated by summing TO with the 3D coordinates of the emitter at the home position, EB (since the target and emitter are assumed to follow parallel paths). The 3D coordinates of the target and emitter centroid at the intermediate target sled position are used to find the shooter's aimpoint angles (εx, εy) relative to the target centroid. The aimpoint angles produced will be for the aimpoint at the point in time that corresponds to the video frame used to mark the emitter position. The target path plane, TPP, is defined as containing the TB and TE points and is constrained to be normal to the firing range n-e plane by defining a third point on the plane as TB′=TB+(0,0,1). These three points on the target path plane are used to produce the 3D equation of the target path plane, which will be used in finding the sightline intersection with the target path plane.

During an event, target linear offsets are captured continuously during target movement and referenced to an event start time. These offsets and time references are stored during the event and transferred to the system computer after the completion of the event. When analyzing the event via the recorded video frames, the point in time corresponding to the midpoint of the video frame selected for marking the emitter position, t@MarkedFrame, is used to retrieve the stored target linear offset for the marked frame. The point in time when the shot is fired, t@shotFired, is used to retrieve the target linear offset for when the shot was fired. The linear offset, doffset, of the target sled from the home position for a particular time is provided via an optical rangefinder that bounces an infrared beam off a retroreflector attached to the moving target sled. The offset is provided to an accuracy of ±10 mm. This is equal to the measurement accuracy specification of the rangefinder for an average of twenty 2000 Hz measurements providing a 200 Hz sample output (5 millisecond sample period). The linear target sled offset, doffset, is multiplied by the track unit vector tBtE, to arrive at the 3D coordinate offset of the target sled, TO, relative to the home position of the target sled. Adding TO to TB, the 3D coordinate of the target centroid at the home position, produces the 3D position of the target centroid, T3D, within the firing range (T3D=TB+TO). Adding TO to EB, the 3D coordinate of the emitter centroid at the home position, produces the 3D position of the emitter centroid, E3D, within the firing range (E3D=TO+EB). The location of the emitter in the CCD camera image will be used to determine the aimpoint angles (εx, εy) of the sightline relative to the moving target centroid at the t@MarkedFrame time. 
These aimpoint angles for the moving target will be calculated using the same methods as for static targets, except for the fact that the target and emitter 3D coordinates, TMF and EMF, will be provided via calculations that use the rangefinder and survey data.
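The offset arithmetic described above reduces to scaling the track unit vector by the rangefinder reading; a minimal sketch with hypothetical function names and (n, e, h) coordinate triples:

```python
import math

def track_unit_vector(TB, TE):
    """Unit vector tBtE along the track, from the home to the end position."""
    v = tuple(TE[i] - TB[i] for i in range(3))
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def positions_at_offset(TB, TE, EB, d_offset):
    """T3D = TB + TO and E3D = EB + TO, where TO is the track unit vector
    scaled by the rangefinder offset d_offset."""
    t = track_unit_vector(TB, TE)
    TO = tuple(c * d_offset for c in t)
    T3D = tuple(TB[i] + TO[i] for i in range(3))
    E3D = tuple(EB[i] + TO[i] for i in range(3))
    return T3D, E3D
```

Because the target and emitter are assumed to follow parallel paths, the same offset TO is applied to both home-position coordinates.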

Referring again to FIG. 11, at shot fired (SF) time the 3D coordinates of the target, TSF, will be found by retrieving the target linear offset from the rangefinder at the t@shotfired time. The aimpoint angles (εx, εy) will be summed with the angles to TSF to produce the sightline angles, (Ee,Eh), of the sightline vector, pslI, within the firing range relative to the surveyed shooter position P. A sightline unit vector is created by using the sightline angles (Ee,Eh) to find the direction of the sightline within the firing range. The sightline unit vector is defined as:
psl I=(cos(E e)/|psl I|,sin(E e)/|psl I|,sin(E h)/|psl I|)=(n psl ,e psl ,h psl)

Round intercept (RI) is determined next; the definition of RI depends on the type of round, either kinetic energy (KE) or high explosive (HE). For a KE round, the RI occurs when the round either passes through the target path plane or the round impacts the ground short of the target. For an HE round, the RI occurs at whichever occurs first: the round impacts the target, the round impacts the ground, or, if fused, the fuse time expires. In any of the different cases, the sightline angles and the sightline intersection with the target path plane must be known in order to define the path of the round. If the round impacts the ground, or airbursts, the distance from the shooter must also be known to calculate the 3D coordinate. Additionally, the time-of-flight of the round at round intercept, TOFRI, must be known to determine the target sled offset and to find the target 3D coordinates, TRI, at the time of RI.

KE Rounds

For KE rounds, the ground range to the target path plane in the n-e plane of the firing range and the sightline elevation angle determine the time-of-flight of the round, unless the round impacts the ground before the target path plane. In order to determine the ground range to the target path plane, the sightline vector intersection, SLI (to be described hereinbelow), with the target path plane is calculated. The 3D coordinates of the shooter, P, and the sightline intersection, SLI, are used to find the slant range to the target plane, rTP. The rRI range is the corresponding ground range to the target path plane parallel to the n-e plane of the firing range. The range rRI is used to set the maximum range for the ballistics model to fly the round towards the target path plane. The ballistics model provides the time-of-flight to round impact, TOFRI, for the KE round in use.

The flight time for a KE round depends upon whether the round passes through the target path plane or impacts the ground before the target. Therefore, depending on the outcome, TOFRI=TOFTP (time of flight to the target path plane) or TOFRI=TOFGI (time of flight to ground impact). By setting the maximum range for the ballistics model to rRI, the maximum time of flight from the ballistics model will automatically be TOFRI. Using the ballistics of the KE round in use, the drop, relative to the sightline, of the round at the target path plane is calculated and used to calculate the 3D coordinate, RRI, of the round as it intersects the target path plane. For a KE round, no consideration is given to the 3D location of the round after it passes through the target path plane. For this case, RRI=SLI−(0,0,drop).

For the possibility of a KE round impacting the ground before the target, RGI is defined as the 3D coordinate where the ballistics calculation indicates that the round drops to the plane of the ground. For this possibility, the time-of-flight of the round stops at the point of ground impact and the time is labeled TOFGI. If the round impacts the ground, then TOFRI=TOFGI and RRI=RGI=(pslI(n,e,0)*rGI)+P(n,e,0), which results in a 3D coordinate on the ground. The TOFRI will be used to determine the moving target centroid coordinates, TRI, at the time that the round arrives downrange. This is accomplished by obtaining a target sled offset, dRI, via the rangefinder at the shot fired time plus the TOFRI and repeating the process for finding the 3D coordinate of the target centroid within the firing range at the round intercept time, TRI. Once TRI and RRI are known, the distance between the two 3D coordinates can be used to determine whether a hit or miss occurred, based upon the size of the target.
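For a KE round that reaches the target path plane, the terminal coordinate and hit test reduce to the following sketch (hypothetical names; a radial hit criterion against the target size is assumed from the text):

```python
import math

def ke_round_intercept(SLI, drop, T_RI, target_radius):
    """RRI = SLI - (0, 0, drop); a hit is declared when the distance
    between RRI and the target centroid TRI is within the target radius."""
    R_RI = (SLI[0], SLI[1], SLI[2] - drop)
    return R_RI, math.dist(R_RI, T_RI) < target_radius
```

The drop supplied by the ballistics model is subtracted only in the elevation (h) component, matching RRI = SLI − (0,0,drop) above.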

High Explosive (HE) Rounds

For HE rounds, the TOFRI time and the RRI coordinate calculations have additional considerations. Airburst HE rounds are fused for a particular range and the time-of-flight for an air burst is dependent on the fuse range and the speed of the round. Since an HE round is still considered active after it passes the target, the intersection of the round with the target path plane is only important if the round actually hits the target (in which case the round is assumed to have exploded). If the round passes through the target path plane and then air bursts or impacts the ground, a hit or miss calculation can be performed using the target and round coordinates to determine if the target was hit. In case a target hit did not occur, the air burst or ground impact coordinates are calculated and can be used to find the distance from RRI to the target at TRI. In addition, if the round impacts the ground before the target path plane, the 3D coordinate RGI is also needed to determine the distance from RRI to the target at TRI.

There are three possible times-of-flight to consider for HE rounds. The first is the fuse range related TOFHE, the second is the ground impact TOFGI, and the third is the target impact TOFTP (note, TOFTP is the same as the KE round TOFTP). For each of these TOFs, there is a related 3D round terminus coordinate. The first step is to fly the round out using the ballistics model and the aimpoint angles derived from the emitter marking at shot fired. If the round is fused, then the round is only flown out to the fuse range. Then a check is made to determine if the round air burst or hit the ground. Then another check is made to determine if the round's terminal range (for either air burst or ground impact) is before or after the target path plane. If the terminal range is less than the range to the target path plane, the round has either hit the ground or airburst in front of the target path plane. No target impact is then possible. If the terminal range is greater than or equal to the range to the target path plane, then the round has either hit the target, hit the ground, or airburst. Since the determination of an air burst or ground impact has already been performed, the only possibility that needs to be checked is whether a target impact has occurred. The result is that the sightline intercept with the target path plane must be calculated to determine the TOFTP to the target path plane. This is the same calculation described hereinabove for the KE rounds.

If TOFHE is greater than or equal to TOFTP, then the 3D position of the round at TOFTP, RTP, can be compared with the 3D position of the target at TOFTP, TTP. The distance |RTPTTP| can be used to determine if a hit or miss occurred, where a hit occurs if the distance |RTPTTP| is less than the target radius. If a hit occurred, then RRI is set to RTP and there is no airburst or ground impact after the target path plane. If no target intersection occurs, then the 3D position of the round at either airburst or ground impact, RRI, is calculated and the distance |RRITRI| is calculated.
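The HE hit test at the target path plane can be sketched as follows (hypothetical names; the radial hit criterion against a target radius is taken from the text):

```python
import math

def he_hit_at_plane(R_TP, T_TP, target_radius, tof_he, tof_tp):
    """A hit requires the round to reach the target path plane before
    fusing (TOFHE >= TOFTP) and |RTP - TTP| less than the target radius."""
    if tof_he < tof_tp:
        return False   # round fuses (air bursts) short of the plane
    return math.dist(R_TP, T_TP) < target_radius
```

If the round fuses short of the plane, the airburst or ground-impact coordinates are used instead for the miss-distance calculation, as described above.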

Sightline Intersection with the Target Path Plane

An important task in the moving target calculations is to find the intersection of the sightline vector with the target path plane. The sightline vector is defined by the shooter position P and the sightline angles (Ee,Eh) into the firing range (FIG. 11). The target path plane contains the target home and end positions, TB and TE, and is specified to be a vertically oriented plane parallel to the h-axis.

The first step in defining the target path plane is to add an additional point to the beginning and ending target coordinates TB and TE, to make up the required third point for a plane. This third point is created by adding a unit-length vector, parallel to the h-axis of the firing range coordinate system, to the TB coordinate. This point is labeled T′B and is equal to:
TB(nTB, eTB, hTB) + (0, 0, 1) = T′B(nTB′, eTB′, hTB′).

These three points, TB, TE, and T′B, define a plane that includes the path of the target along the track and is vertically oriented to be parallel to the elevation axis.

The equation of the target path plane is produced by using the coordinates of TB, TE, and T′B to calculate the coefficients for the general form of the equation of a plane, Ax + By + Cz + D = 0. The A, B, and C coefficients are the components of a vector that is normal to the target path plane, TPPN. These coefficients are calculated by taking the cross product of the vectors TBTE and TBT′B on the target path plane, where:
TBTE = (nBE, eBE, hBE) = ((nE − nB), (eE − eB), (hE − hB))
TBT′B = (nBB′, eBB′, hBB′) = ((nB′ − nB), (eB′ − eB), (hB′ − hB)) = (0, 0, 1)

And the cross product result (A,B,C) simplifies to: A=eBE, B=−nBE, C=0.

The coefficient D is calculated by taking the negative determinant of the three co-planar points TB, TE, and T′B, and setting the result equal to D.

Dividing these four coefficients by the length of the vector TPPN,
|TPPN| = (A2 + B2 + C2)1/2,
produces the unit normal vector to the target path plane,
tppN = (A/|TPPN|, B/|TPPN|, C/|TPPN|) = (a, b, c),
and a scalar distance,
d = −D/|TPPN|,
that is the minimum distance from the origin to the plane. This unit normal vector tppN and distance d will be used in finding the intersection of the sightline vector with the target path plane.
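As a sketch, the plane construction above can be carried out as follows. The Python function name is hypothetical; coordinates are (n, e, h) tuples in the firing range system, and the cross product is pre-simplified using the fact that TBT′B = (0, 0, 1), as derived above:

```python
import math

def target_path_plane(TB, TE):
    """Return the unit normal (a, b, c) and signed distance d of the
    vertical plane through the target begin/end points TB and TE.

    The third point T'B = TB + (0, 0, 1) makes the plane parallel to
    the h-axis, so the cross product simplifies to (eBE, -nBE, 0).
    """
    nBE, eBE = TE[0] - TB[0], TE[1] - TB[1]
    A, B, C = eBE, -nBE, 0.0                      # normal TPP_N
    # D from the plane equation A*n + B*e + C*h + D = 0 evaluated at TB
    D = -(A * TB[0] + B * TB[1] + C * TB[2])
    mag = math.sqrt(A * A + B * B + C * C)        # |TPP_N|
    return (A / mag, B / mag, C / mag), -D / mag  # (a, b, c), d
```

Note that d as computed here is signed: its sign depends on which way the normal points along the track direction, and |d| is the minimum distance from the origin to the plane.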

The sightline vector is defined by a starting point, the shooter position P, and a set of easting and elevation angles (Ee, Eh), providing the projection direction from P into the firing range coordinate system. The general equation for the sightline vector is:
PSLI = P + (rTP * pslI).

The direction vector of the sightline vector, defined hereinabove, is:
pslI = (cos(Ee)/|pslI|, sin(Ee)/|pslI|, sin(Eh)/|pslI|) = (npsl, epsl, hpsl),
and rTP is the unknown distance from the shooter P to the sightline vector intersection with the target path plane at SLI.

The sightline vector intersection, SLI, with the target path plane is found by determining the unknown distance rTP and then using rTP in the general equation for the sightline vector. The distance is:
rTP = −((tppN * P) + d)/(tppN * pslI),
where the numerator is the distance from the shooter at P to the point on the plane at the minimum distance from the origin, and the denominator is the cosine of the angle between the target path plane unit normal, tppN, and the sightline unit vector, pslI. The result, rTP, is the length of the vector PSLI, which is used to find the sightline intersection with the target path plane, SLI, via the equation:
SLI = P + (rTP * pslI).
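A minimal sketch of this intersection computation in Python follows; the function name is illustrative. It adopts the signed-distance convention in which points x on the plane satisfy tppN·x = d, so the distance formula is written as rTP = (d − tppN·P)/(tppN·pslI), which is algebraically equivalent to the form above once the sign convention for d is fixed:

```python
import math

def sightline_plane_intersection(P, Ee, Eh, tpp_n, d):
    """Intersect the sightline from shooter P (easting/elevation angles
    Ee, Eh in radians) with the target path plane given by unit normal
    tpp_n and signed distance d (points x on the plane satisfy
    tpp_n . x = d).  Returns (r_tp, SL_I), or (None, None) if the
    sightline is parallel to the plane.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    v = (math.cos(Ee), math.sin(Ee), math.sin(Eh))
    mag = math.sqrt(dot(v, v))
    psl = tuple(c / mag for c in v)        # unit sightline vector psl_I
    denom = dot(tpp_n, psl)                # cosine between normal and sightline
    if abs(denom) < 1e-12:
        return None, None                  # sightline parallel to the plane
    r_tp = (d - dot(tpp_n, P)) / denom     # shooter-to-plane distance along sightline
    sl_i = tuple(p + r_tp * c for p, c in zip(P, psl))
    return r_tp, sl_i
```

For example, a shooter at the origin looking due east (Ee = π/2, Eh = 0) at a vertical plane two units east of the origin recovers rTP = 2 and an intersection point on that plane.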

After the scenario has begun, a shooter aligns the projectile launcher sight with a target. The center of the lens image at the trigger event determines the reference for aimpoint. The infrared emitters illuminate at the trigger event to provide distance measurements from the aimpoint reference to the fixed IR emitter or to the moving target position as determined by the downrange controller. With the information collected at the trigger event, the aimpoint and projectile launcher effects are calculated, preferably using the software resident in the system computer. Output from the sensors and the camera is integrated into the VCR with titling for event analysis. The information collected, including references and calculated aimpoint and projectile dynamics, provides information for training and projectile launcher testing.

Errors between the actual and predicted aimpoint result from uncertainties associated with the shooter, target, and emitter positions. These errors are range dependent and are minor contributors to the overall error. In addition, errors associated with the offset of the CCD camera from the projectile launcher sightline, the lens mapping, and the emitter marking process can also be introduced; these errors can be minimized through techniques to be described hereinafter. Finally, the uncertainty associated with the bore-sight as performed by an expert gunner is the limiting factor in the cumulative error of the system.

While preferred embodiments have been shown and described, various modifications and substitutions may be made thereto without departing from the spirit and scope of the present invention. Accordingly, it is to be understood that the present invention has been described by way of illustrations and not limitation.

Any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. 112 paragraph 6. In particular, the use of “step of” in the claims herein is not intended to invoke the provisions of 35 U.S.C. 112 paragraph 6.

Classifications
U.S. Classification: 434/21
International Classification: F41G3/26
Cooperative Classification: F41G3/2661, F41G3/323, G01S5/163, F41G3/2605
European Classification: F41G3/32B, F41G3/26C1F, G01S5/16B, F41G3/26B
Legal Events
Date: Apr 5, 2006
Code: AS
Event: Assignment
Owner name: GOVERNMENT OF THE UNITED STATES, SECRETARY OF THE
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PORTOGHESE, ROCCO;PURVIS, EDWARD JOHN;HEBB, RICHARD CHRISTOPHER;REEL/FRAME:017781/0977;SIGNING DATES FROM 20060329 TO 20060401