WO1992011563A1 - Teachable camera - Google Patents

Teachable camera

Info

Publication number
WO1992011563A1
WO1992011563A1 PCT/US1991/009081
Authority
WO
WIPO (PCT)
Prior art keywords
camera
focus
neural network
picture
aperture
Prior art date
Application number
PCT/US1991/009081
Other languages
French (fr)
Inventor
Constantine Nicholas Anagnostopoulos
Original Assignee
Eastman Kodak Company
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed
Application filed by Eastman Kodak Company filed Critical Eastman Kodak Company
Publication of WO1992011563A1 publication Critical patent/WO1992011563A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01GWEIGHING
    • G01G3/00Weighing apparatus characterised by the use of elastically-deformable members, e.g. spring balances
    • G01G3/12Weighing apparatus characterised by the use of elastically-deformable members, e.g. spring balances wherein the weighing element is in the form of a solid body stressed by pressure or tension during weighing
    • G01G3/14Weighing apparatus characterised by the use of elastically-deformable members, e.g. spring balances wherein the weighing element is in the form of a solid body stressed by pressure or tension during weighing measuring variations of electrical resistance
    • G01G3/142Circuits specially adapted therefor
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01GWEIGHING
    • G01G23/00Auxiliary devices for weighing apparatus
    • G01G23/18Indicating devices, e.g. for remote indication; Recording devices; Scales, e.g. graduated
    • G01G23/36Indicating the weight by electrical means, e.g. using photoelectric cells
    • G01G23/37Indicating the weight by electrical means, e.g. using photoelectric cells involving digital counting
    • G01G23/3707Indicating the weight by electrical means, e.g. using photoelectric cells involving digital counting using a microprocessor
    • G01G23/3714Indicating the weight by electrical means, e.g. using photoelectric cells involving digital counting using a microprocessor with feedback means
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01GWEIGHING
    • G01G23/00Auxiliary devices for weighing apparatus
    • G01G23/48Temperature-compensating arrangements
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01GWEIGHING
    • G01G3/00Weighing apparatus characterised by the use of elastically-deformable members, e.g. spring balances
    • G01G3/12Weighing apparatus characterised by the use of elastically-deformable members, e.g. spring balances wherein the weighing element is in the form of a solid body stressed by pressure or tension during weighing
    • G01G3/14Weighing apparatus characterised by the use of elastically-deformable members, e.g. spring balances wherein the weighing element is in the form of a solid body stressed by pressure or tension during weighing measuring variations of electrical resistance
    • G01G3/142Circuits specially adapted therefor
    • G01G3/147Circuits specially adapted therefor involving digital counting
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01GWEIGHING
    • G01G3/00Weighing apparatus characterised by the use of elastically-deformable members, e.g. spring balances
    • G01G3/18Temperature-compensating arrangements
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B7/00Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
    • G03B7/08Control effected solely on the basis of the response, to the intensity of the light received by the camera, of a built-in light-sensitive device
    • G03B7/091Digital circuits
    • G03B7/097Digital circuits for control of both exposure time and aperture
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S706/00Data processing: artificial intelligence
    • Y10S706/902Application using ai with detail of the ai system
    • Y10S706/903Control

Definitions

  • neural networks are certainly not the only way to alter the camera algorithms. However, neural networks are the preferable way of implementing changes in the camera controls. The reason is that, since the two most basic characteristics of a neural network are pattern matching and learning, they are the most suitable types of circuits for a teachable camera designed to learn the pattern of the desires of the photographer. Another benefit of using neural networks to implement all of the changes in control of the camera is that neural networks are faster than control by a microprocessor, since in neural networks all of the calculations are done simultaneously. Finally, the camera can be equipped with an external default switch which the photographer can activate to remove the pattern matching function which matches for a portrait picture pattern.
  • the default switch would cause the microprocessor 10 to send a zero voltage to the gates of all transistors 60-66 of the template matching network. This produces a zero current into the input of the amplifier 68 for all inputs from the focus sensor 12, and consequently a value of zero for M in all situations, thereby turning off the template matching and focus algorithm altering features of the present invention.
  • a user interface 44, which could be the conventional buttons and display of the camera, could be used.
  • because computers in general, and personal computers in particular, are becoming very common, it would be much more convenient to interface the camera to a personal computer 70.
  • the present invention is primarily aimed at the amateur photographer who has no technical knowledge about cameras or picture taking.
  • the program in the personal computer 70 must be constructed so that it asks questions in plain English and translates the responses into technical terms that can then be further translated into instructions for the camera 8. It is of course obvious that this methodology can be extended to technically knowledgeable photographers, who can directly input instructions into the personal computer 70 for compilation and downloading into the camera microprocessor 10. Any suitable computer language can be used to generate the questions to the photographer and compile the answers; however, an expert system language, as discussed later, is preferred.
  • the teaching process begins with the photographer shooting a number of scenes, having the film developed, connecting the camera 8 to the personal computer 70 and beginning the evaluation process.
  • the personal computer 70 asks a number of questions attempting to find out whether the photographer is pleased with the result and, if not, to determine the camera settings to be used the next time a similar scene is photographed.
  • One extra task the teachable camera 8 has to perform, compared to present day cameras, is to store in the memory of the microprocessor the settings for each photograph, for example, the subject distance (which is the focus), the aperture setting, the flash mode (direct, bounced, or both), and the others.
  • a simple modification to the programs of the camera would allow the microprocessor 10 to store these settings in the EEPROM or some other suitable memory.
  • the personal computer 70 would then access that data. In the case of an electronic camera, this process is further simplified in some instances since the photograph can be displayed immediately after the photograph is taken and the camera need not store settings for each photograph.
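The patent does not give a concrete record format for the stored settings; a minimal sketch of such a per-photograph log, with illustrative field names, might look like:

```python
from dataclasses import dataclass, field

@dataclass
class ShotRecord:
    """Hypothetical per-photograph record kept in the camera's EEPROM
    for later review on the personal computer. Field names are
    illustrative, not taken from the patent."""
    frame: int
    subject_distance_zone: int   # the 4-bit focus code
    aperture: float              # F-stop
    shutter_time: float          # seconds
    flash_mode: str              # e.g. "off", "direct", "bounce", "both"

@dataclass
class SettingsLog:
    shots: list = field(default_factory=list)

    def record(self, shot: ShotRecord) -> None:
        """Called by the camera after each exposure."""
        self.shots.append(shot)

    def download(self) -> list:
        """What the personal computer would read over the interface 42."""
        return list(self.shots)
```

The evaluation program would then walk this log frame by frame, matching each record against the photographer's answers about the corresponding print.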
  • Programs such as this are presently available and are generally artificial intelligence rule based type programs. They are applied in a variety of situations, including the more complicated task of medical diagnosis, as described in The Handbook of Artificial Intelligence, by Barr et al., Stanford: HeurisTech Press, 1980, Vol. 2, pages 177-192, incorporated by reference herein.
  • the program receives hard facts, such as lab data and the doctor's observations and the patient's description of the symptoms, which are typically not very well articulated.
  • the program arrives at a diagnosis by following one of various types of algorithms, such as the production rules augmented by certainty factors described in Barr et al., The Handbook of Artificial Intelligence.
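The certainty-factor mechanism referred to here can be illustrated with the classic MYCIN-style combination rule. This is a sketch of the general technique as it might apply to combining rules such as "subject distance suggests a portrait", not code from the cited handbook:

```python
def combine_cf(cf_a: float, cf_b: float) -> float:
    """MYCIN-style combination of two certainty factors in [0, 1]: a
    second supporting rule increases belief by a fraction of the
    remaining doubt. A sketch of the general technique, not code from
    the cited handbook."""
    return cf_a + cf_b * (1.0 - cf_a)

def overall_certainty(rule_cfs):
    """Fold the certainty factors of several supporting rules into one
    overall certainty for a conclusion (e.g. "this was a closeup
    portrait")."""
    total = 0.0
    for cf in rule_cfs:
        total = combine_cf(total, cf)
    return total
```

With two rules each supporting the same conclusion at certainty 0.6 and 0.5, the combined certainty is 0.8; a third rule at 0.5 raises it to 0.9, so belief accumulates without ever exceeding 1.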
  • the computer then examines the camera settings for photograph number 1 and finds that the focus indicates a subject distance consistent with a portrait situation.
  • the program has enough information to download instructions to the microprocessor 10 in the camera, so that the memory in the microprocessor 10 is loaded with the required data so that the next time a portrait situation arises, the neural network output M takes the value of one.
  • the microprocessor can then find all possible pairs of A and T that satisfy the above equation for the particular camera.
  • the pairs of A, T calculated are: (2.8, 1/60), (4, 1/30), (5.6, 1/15), (8, 1/8), (11, 1/4), (16, 1/2), (22, 1), and (32, 2).
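These pairs follow the usual exposure reciprocity, where shutter time scales with the square of the f-number for constant exposure. A short sketch, assuming the standard f-number series (the results differ slightly from the listed values because standard f-numbers are rounded):

```python
# Standard full-stop f-number series.
STANDARD_F_STOPS = [2.8, 4, 5.6, 8, 11, 16, 22, 32]

def equivalent_pairs(base_aperture=2.8, base_time=1/60):
    """Return (A, T) pairs giving the same exposure as the base pair,
    assuming the reciprocity relation T = k * A**2 for a fixed scene
    brightness."""
    k = base_time / base_aperture ** 2
    return [(a, k * a ** 2) for a in STANDARD_F_STOPS]
```

Calling `equivalent_pairs()` reproduces the progression above to within rounding: each full stop smaller in aperture doubles the required shutter time, ending near 2 seconds at f/32.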
  • One feature of nature scenes that can be used to identify them is their high light level, generally above 250 foot lamberts.
  • the microprocessor 10 is instructed to store in EEPROM the maximum F-stop along with the corresponding shutter speed whenever the light meter reading exceeds, for example, 255 foot lamberts.
  • the microprocessor 10 would load the smallest possible F-stop into the microprocessor memory.
  • a block diagram of a system for accomplishing these tasks is shown in Fig. 4.
  • a 12 bit word from the light meter 14, corresponding to 4096 different light levels, is split into two groups.
  • the first group, consisting of bits 0-7, is input to a template matching network which includes buffers, transistors and an amplifier which perform the function of an 8 bit AND gate 86. This function can be performed by inverting buffers, etc., like those performing the AND operation on bits 2 and 3 in Fig. 5.
  • the other bits 8-11 are input to the network which performs the function of a 5 input OR gate 88.
  • when the light meter reading is 255 or more, the output of the AND gate or the OR gate will be 1.
  • the A aperture and T shutter speed values transmitted to the shutter driver will be those coming from the EEPROM of the microprocessor 10, which will hold the highest F-stop calculated and the corresponding shutter speed. If the number output by the light meter 14 is less than 255, then both NMOS transistors 90 and 92 will be off, causing the output of the amplifier 94 to be high, which in turn will make the transmission gate 102 conductive and thus transmit to the shutter driver the values of A and T determined by the microprocessor 10.
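The gate logic described here amounts to testing whether the 12-bit reading is at least 255: the 8-input AND fires when the low byte is all ones, and the OR fires when any high-order bit is set. A behavioural sketch of that test and of the transmission-gate selection (function names are illustrative, not from the patent):

```python
def scene_exceeds_threshold(light_level: int) -> bool:
    """True when the 12-bit light-meter reading is at least 255, i.e.
    the low 8 bits are all ones (the 8-input AND over bits 0-7) or any
    of bits 8-11 is set (the OR over the high bits)."""
    low_all_ones = (light_level & 0xFF) == 0xFF
    high_any_one = ((light_level >> 8) & 0xF) != 0
    return low_all_ones or high_any_one

def select_settings(light_level, stored_pair, computed_pair):
    """Transmission-gate style selection: bright (nature) scenes use
    the stored maximum-F-stop pair from EEPROM; all other scenes use
    the pair computed by the microprocessor."""
    return stored_pair if scene_exceeds_threshold(light_level) else computed_pair
```

For example, a reading of 254 selects the microprocessor's pair, while 255 or anything above it selects the stored maximum-F-stop pair.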
  • the teaching process in this situation would indicate to the microprocessor 10 that the f-stop/shutter speed pair stored is based on the largest f-stop if the photographer desires to have the background sharp and the smallest f-stop if the background should be blurry.
  • taking this one step further, it is possible to provide an additional template matching neural segment which will determine whether the maximum f-stop matches the calculated f-stop and, if not, in a manner similar to the increase in the range setting previously discussed, set the f-stop up one setting and the shutter speed down one setting. This would allow the camera to adjust to less than the maximum and to only slightly decrease the sharpness of the background elements.
  • Fig. 6 illustrates a further embodiment of the present invention.
  • a pattern matching neural network 110 is substituted for the microprocessor 10 and network 40 of Fig. 1.
  • pattern matching segments for each of the picture taking algorithms in the microprocessor 10 are provided as well as pattern matching segments corresponding to the features of the network 40.
  • a camera system has been described which includes a conventional film or electronic camera that is augmented with alterable neural networks, or completely controlled by a neural network, an interface to a computer, and a computer that can accept the interface to the camera and that runs resident software of the artificial intelligence type. This camera system will be of most use to amateur photographers.
  • the concept can be extended to use by knowledgeable photographers, who will change their camera algorithms to optimize them for the scenes which they are photographing, and by professionals in studio or similar settings, who can have the computer make all the camera settings, based on instructions they issue to the computer, so that they do not have to make all the settings manually.
  • the description of the alterable network architecture and related circuit diagrams and the artificial intelligence program provide examples in which soft focus portraits and landscape photographs are taken. There are, however, many more situations in which the invention could be used to produce more pleasing photographs for amateur photographers. For example, by altering the exposure and direct vs bounce flash algorithms, faces can be made to come out darker or lighter than their actual tone.
  • when the camera control is implemented using neural nets, the time to calculate the camera settings is reduced compared to microprocessor controlled cameras, which is beneficial in many instances, such as when attempting to photograph a fast moving object.
  • the invention may be applied to either film or electronic still or movie cameras.
  • algorithm altering can be done dynamically, by viewing the captured image on a television screen as the picture is taken and altering the network immediately thereafter using the personal computer.

Abstract

A teachable camera (8) which includes an alterable template matching neural network (40) positioned between a microprocessor (10) that performs camera picture taking algorithms and the units (24-32) such as the shutter which control the characteristics of the picture. The network (40) alters the output of the algorithms to match the picture characteristics desired by the photographer. The network (40) is altered by a rule based expert system executing in a personal computer (70) which determines how to alter the matching template of the network (40).

Description

TEACHABLE CAMERA
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention is directed to a camera which a photographer can teach to take pictures as desired, and more particularly, to a camera in which a neural network, capable of learning the preferences of a photographer, changes camera control algorithms to match the photographer's desires.
Description of the Related Art
In automated or electronic cameras, sensors and circuits within the camera evaluate the scene to be photographed and determine, among other parameters, the proper focus, whether the flash should be activated, the exposure time, etc. To make these determinations the camera depends on hard wired or software algorithms resident within the camera. These algorithms have been developed by the camera designer and are based on statistical studies of many photographs taken by a random collection of amateur photographers. The primary shortcoming of this approach is that the camera algorithm, while valid on the average, is not generally optimum for each individual user or situation. There are cameras, as described in U.S. Patent 4,855,779, which allow the user to change the picture taking algorithm by replacing an electronic board within the camera. Changing camera components, such as an electronic board, is similar to having a manual camera. The shortcoming of this second approach is that the photographer must be technically knowledgeable not only about cameras and camera algorithms, but also about the taking of photographs. This approach also limits the photographer to a few fixed camera algorithms.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a camera which will adapt to a photographer's needs.
It is also an object of the present invention to allow all photographers to control camera picture taking algorithms.
It is a further object of the present invention to provide a camera which learns how to take desired pictures.
It is another object of the present invention to provide a camera controlled by a neural network. The above objects can be accomplished by a camera which includes a neural network that alters the camera picture taking algorithms based on user inputs concerning the desired characteristics of photographs.
These together with other objects and advantages which will be subsequently apparent, reside in the details of construction and operation as more fully hereinafter described and claimed, reference being had to the accompanying drawings forming a part hereof, wherein like numerals refer to like parts throughout.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 illustrates the architecture of a teachable camera in accordance with the present invention;
Fig. 2 illustrates a design for altering camera focus;
Fig. 3 depicts a template matching neural network for focus alteration;
Fig. 4 illustrates a design for F-stop or aperture selection;
Fig. 5 illustrates a template matching neural network for F-stop selection; and
Fig. 6 illustrates a neural network controlled camera.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
A teachable camera in accordance with the present invention allows a photographer a number of camera algorithmic changes, without manual intervention and without the need for the photographer to be aware of camera functions or to be knowledgeable about the technology of picture taking. By interfacing the camera to a computer, an amateur photographer can teach the camera to take photographs that are pleasing to the photographer. The computer interface to the cameras for making algorithmic changes is also of use to more experienced photographers.
A teachable camera is one which for the same set of inputs can produce many different sets of outputs. The camera learns to produce a preferred set of camera settings in certain situations. This behavior is similar to that of the brain.
In all cameras equipped with automatic focussing, the lens is set to produce the sharpest focus possible. However, in situations where closeup portrait photographs are taken, it is preferable to achieve a soft focus, which means that the lens of the camera must be set slightly out of focus. The following example will show how a camera has to be modified so that whenever a closeup portrait is taken the focus is altered to cause the image to be slightly out of focus.
Fig. 1 shows a block diagram of a camera 8 that includes a conventional microprocessor 10, such as a 68HC805B6 available from Motorola, for controlling the camera 8, which includes a nonvolatile programmable memory where the camera algorithms are stored. The conventional microprocessor 10 receives inputs from a conventional focus sensor 12, a conventional exposure sensor 14, a conventional film speed indicator 16, a conventional flash control unit 18, a conventional motion sensor 20 and a conventional film type indicator 22. In a conventional camera the microprocessor 10, based on inputs from these devices and the results of the conventional camera picture taking algorithms, controls the picture characteristic units which control the characteristics of the picture and include a flash unit 24, a shutter speed control unit 26, a lens focus unit 28, an aperture unit 30 and possibly any filters 32 which may be attached to the camera. However, in the present invention an alterable neural network is positioned between the microprocessor 10 and the picture characteristic units 24-32. This alterable network 40, as will be discussed in more detail later, changes the outputs of the picture algorithm of the microprocessor 10 and allows image defocusing in closeup portrait situations.
The alteration of the network 40 is performed through the microprocessor 10 from either a personal computer interface 42 or a user interface 44 included on the camera 8. The user interface 44 would typically be the conventional liquid crystal display and numerical keyboard which are provided on today's more sophisticated cameras. The alterable network includes subsystems for focus control, aperture control, shutter control and flash control, where the input parameters for controlling the devices can overlap. For example, aperture and shutter speed are inter-related, and the network subsystems for controlling the shutter speed and aperture units include common inputs. A more detailed description of the focus altering subsystem is illustrated in Fig. 2. In the embodiment illustrated in Fig. 2, the focus sensor 12 is of the range finding type and, when it completes the processing for determining subject focus, as initiated by the microprocessor, outputs a four bit binary code which identifies the range of the subject. Other types of focus sensors are of course possible. In the present embodiment, the code produced by the sensor represents one of 16 possible zones within which the subject is found to reside. The four bit code is fed to a VLSI neural network circuit 50, more details of which will be described with respect to Fig. 3 and the construction of which is described in Analog VLSI and Neural Systems by C. Mead, Reading, MA: Addison-Wesley, 1989, incorporated by reference herein. The subject is at a closeup range if the subject is found in zones 1 through 4. The algorithm that is needed to produce the slightly out of focus situation is then as follows: 1. In a case where the two most significant bits (bits 2 and 3) from the focus sensor 12 are zero (an AND function), add 1 to the range value. 2. In all other cases add zero. Thus, the function of the neural network 50 is to generate a 1 or a 0 for bit M, as shown in Fig. 2, when the two most significant bits are zero. The value of M is then added to the four bit number from the focus sensor 12 by three half adders 52, 54 and 56, and the final code is provided to a digital-to-analog converter 60 and converted into an analog voltage that drives a conventional lens stepper motor. Although this example adds one to the range value, it is also possible to subtract one from the range value.
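The add-one defocus algorithm just described can be sketched as a behavioural model of the network output M and the half-adder chain (a model of the logic, not of the actual circuit):

```python
def m_bit(range_code: int) -> int:
    """Network output M: 1 when bits 2 and 3 of the 4-bit range code
    are both zero (codes 0-3, i.e. the closeup zones 1 through 4),
    else 0."""
    return 1 if (range_code & 0b1100) == 0 else 0

def altered_range(range_code: int) -> int:
    """Add M to the range code, as the chain of half adders does, so a
    closeup portrait is driven one zone past sharpest focus; distant
    subjects pass through unchanged."""
    return (range_code + m_bit(range_code)) & 0b1111
```

For instance, code 3 (a closeup zone) becomes 4, slightly defocusing the lens, while code 7 passes through as 7. Subtracting rather than adding, as the text notes, would simply negate the offset.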
Fig. 3 illustrates the neural network 50 preferred by the present invention for focus control. It is a template matching type neural network as described in "Analog Electronic Neural Networks" by Graf et al., IEEE Circuits and Devices Magazine, pp. 44-55, July 1989, incorporated by reference herein. However, it is well within the skill of those of ordinary skill in the art to substitute a different type of neural network for the template matching type preferred here. The 4 bits from the autofocus sensor 12, in the network 50, are connected to inverting buffers 52-58, the outputs of which are provided to NMOS transistors 60-66. The outputs of all the transistors 60-66 connect to a common node and to a conventional sigmoid transconductance amplifier 68. The gate of each transistor is controlled by the microprocessor 10. During the teaching phase, to be discussed in more detail later, instructions have been stored in the microprocessor memory indicating that the gates of the two most significant bits should be biased high, that is, turned on, while the gates of the two least significant bits should be biased low. Alternately, if the transistors were of the floating gate type, they would have been programmed during the teaching phase. Because the common node of the transistors is connected to the input of the transconductance amplifier 68, when bits 2 and 3 are both zero, both NMOS transistors 60 and 62 contribute to the input current to the amplifier 68. Whenever the current supplied to amplifier 68 is high, amplifier 68 produces an output ("1"), and when it is low, amplifier 68 produces no output ("0"). However, if either bit is not zero, the current contributed by one of the transistors is removed by the other. The transistors of the other bits are off and do not contribute any current to the amplifier 68.
As discussed previously, the amplifier 68 preferred is a sigmoid type rather than a very high gain differential pair, to allow for some mismatching between the transistors of the most significant bits when one is removing the contribution of the other. The template matching network of Fig. 3 produces a value of 1 for M when both of the high order bits are 0, and a value of 0 when either or both of these bits are 1. In this case, the amount of defocusing amounts to advancing the lens by one least significant bit, that is, one zone. Of course, different templates could implement different algorithms, such as subtracting one bit or adding two bits.
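A behavioral model of the Fig. 3 template matcher can make the circuit's operation concrete. This sketch is ours, not the patent's: the gate biases programmed during teaching are represented as a template mask, and the sigmoid amplifier as a soft threshold on the summed "current":

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def template_match(bits: list, mask: list) -> int:
    """Behavioral model of the template matcher.  bits and mask are
    LSB-first.  Each enabled position (mask == 1) contributes current
    when its input bit is 0 (the inverting buffer drives the NMOS
    transistor); a 1 input removes that contribution.  The sigmoid
    amplifier thresholds the summed current into the output bit M."""
    current = sum(m * (1 - b) for b, m in zip(bits, mask))
    target = sum(mask)
    if target == 0:
        # default switch: all gates at zero volts, no current, M = 0
        return 0
    return 1 if sigmoid(4.0 * (current - target + 0.5)) > 0.5 else 0
```

With the mask enabling bits 2 and 3, the output M is 1 exactly when both high-order bits are zero, matching the portrait-range template; an all-zero mask models the default switch described later.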
Those familiar with digital circuit design can easily find other circuits that will implement the above algorithm equally well; neural networks are certainly not the only way to alter the camera algorithms. However, neural networks are the preferable way of implementing changes in the camera controls. The reason is that the two most basic characteristics of a neural network are pattern matching and learning, making neural networks the most suitable type of circuit for a teachable camera designed to learn the pattern of the desires of the photographer. Another benefit of using neural networks to implement all of the changes in control of the camera is speed: because all of the calculations in a neural network are done simultaneously, the settings are computed faster than under the current approach of control by a microprocessor. Finally, the camera can be equipped with an external default switch which the photographer can activate to remove the pattern matching function that matches for a portrait picture pattern. In the above example, the default switch would cause the microprocessor 10 to send a zero voltage to the gates of all transistors 60-66 of the template matching network. This produces a zero current into the input of the amplifier 68 for all inputs from the focus sensor 12, and consequently a value of zero for M in all situations, thereby turning off the template matching and focus algorithm altering features of the present invention. As previously discussed, to interface with the teachable camera, a user interface 44, which could be the conventional buttons and display of the camera, could be used. However, given that computers in general, and personal computers in particular, are becoming very common, it would be much more convenient to interface the camera to a personal computer 70. A simple personal computer interface 42 based on an RS232 type serial interface, similar to the ones used for keyboards, would be appropriate.
Having a computer 70 will allow much more complex software to be utilized in the teaching process and will not require extra circuits and software to reside within the camera, thereby keeping the camera cost down. As shown in Fig. 1, the personal computer 70 communicates through the interface 42 with the camera microprocessor 10, which in turn writes the required code into its own memory.
As previously discussed, the present invention is primarily aimed at the amateur photographer who has no technical knowledge about cameras or picture taking. As a result, the program in the personal computer 70 must be constructed so that it asks questions in plain English and translates the responses into technical terms that can then be further translated into instructions for the camera 8. It is of course obvious that this methodology can be extended to technically knowledgeable photographers, who can directly input instructions into the personal computer 70 for compilation and downloading into the camera microprocessor 10. Any suitable computer language can be used to generate the questions to the photographer and compile the answers; however, an expert system language, as discussed later, is preferred.
The teaching process begins with the photographer shooting a number of scenes, having the film developed, connecting the camera 8 to the personal computer 70, and beginning the evaluation process. As the photographer picks up the photographs one by one, the personal computer 70 asks a number of questions attempting to find out whether the photographer is pleased with the result and, if not, to determine the camera settings for the next time a similar scene is photographed. One extra task the teachable camera 8 has to perform, compared to present day cameras, is to store in the memory of the microprocessor the settings for each photograph, for example, subject distance (which is the focus), aperture setting, flash (direct or bounced or both), and the others. A simple modification to the programs of the camera would allow the microprocessor 10 to store these settings in the EEPROM or some other suitable memory. During the evaluation process the personal computer 70 would then access that data. In the case of an electronic camera, this process is further simplified in some instances, since the photograph can be displayed immediately after it is taken and the camera need not store settings for each photograph.
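The per-frame settings store can be sketched as follows; this is an editorial illustration, with field names of our own choosing rather than anything specified in the patent:

```python
from dataclasses import dataclass

@dataclass
class FrameSettings:
    """Settings the camera records for one photograph so the evaluation
    program can consult them later.  Field names are illustrative."""
    frame: int
    subject_distance_zone: int   # 4-bit focus range code, 0-15
    f_stop: float
    shutter_speed: float         # seconds
    flash_mode: str              # "off", "direct", "bounce", or "both"

class SettingsLog:
    """Stand-in for the camera EEPROM: record the settings as each
    picture is taken, then let the evaluation program read them back
    by frame number."""
    def __init__(self) -> None:
        self._log = {}

    def record(self, s: FrameSettings) -> None:
        self._log[s.frame] = s

    def recall(self, frame: int) -> FrameSettings:
        return self._log[frame]
```

During evaluation, the personal computer would call `recall(n)` for photograph n to check, for example, whether the focus zone is consistent with a portrait.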
Consider the example above in which the photographer likes to take portraits and, although not able to articulate it in technical terms, would prefer that his subject be slightly out of focus. It is the job of the computer program in the personal computer 70 to figure out that the photographer would like his portrait slightly out of focus, by examining the camera settings for each photograph and by asking the photographer a series of leading questions.
Programs such as this are presently available and are generally artificial intelligence rule based type programs. They are applied in a variety of situations, including the more complicated task of medical diagnosis, as described in The Handbook of Artificial Intelligence by Barr et al., Stanford: HeurisTech Press, 1982, Vol. 2, pp. 177-192, incorporated by reference herein. In medical diagnosis, for example, and similar to the present situation, the program receives hard facts, such as lab data and the doctor's observations, together with the patient's description of the symptoms, which is typically not very well articulated. The program then arrives at a diagnosis by following one of various types of algorithms, such as production rules augmented by certainty factors, described in Barr et al., The Handbook of Artificial Intelligence, Stanford: HeurisTech Press, 1981, Vol. 1, pp. 190-191, also incorporated by reference herein.
After the photographer has connected the camera 8 to the computer 70 and loaded the appropriate program, the following interchange between the computer 70 and the photographer could take place:

Computer: Are you pleased with photograph No. 1?
Photographer: No.

The computer then examines the camera settings for photograph number 1 and finds that the focus indicates a subject distance consistent with a portrait situation.

Computer: Is the subject in the center of the picture a person or an object?
Photographer: Person.

If it were not a person, the computer would have to make different assumptions as to the reason that the picture is not acceptable.

Computer: Is the person's face washed out or blurry?
Photographer: No.

If the photographer had answered yes, the computer would have to examine whether the exposure time was too long compared to the flash time length, which would indicate a camera malfunction. Otherwise, the computer would assume that the photographer shook the camera while taking the picture and would respond with a suggestion to use a tripod the next time such pictures are taken.

Computer: Are all the facial features clearly visible?
Photographer: Yes.
Computer: Are the eyelashes and other facial hair clearly visible?
Photographer: Yes.
Computer: Are small wrinkles and even skin pores visible?
Photographer: Yes.
Computer: Would you prefer if they were not?
Photographer: Yes.

At this point, the program has enough information to download instructions to the microprocessor 10 in the camera, so that the memory in the microprocessor 10 is loaded with the required data and, the next time a portrait situation arises, the neural network output M takes the value of one.
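The decision logic behind such an interchange can be caricatured as a few ordered rules. This toy sketch is ours; a real expert system of the kind cited above would use production rules with certainty factors rather than hard-coded branches:

```python
def evaluate_portrait(answers: dict) -> str:
    """Toy rule-based evaluation following the interchange above.
    Dictionary keys and returned action names are illustrative."""
    if answers["pleased"]:
        return "no_change"
    if answers["subject"] != "person":
        # a non-person subject calls for different assumptions
        return "non_portrait_analysis"
    if answers["washed_out_or_blurry"]:
        # either a malfunction (exposure vs. flash duration) or
        # camera shake, for which a tripod is suggested
        return "check_exposure_or_suggest_tripod"
    if answers["pores_visible"] and answers["prefer_softer"]:
        # teach the camera: next portrait, neural network output M = 1
        return "teach_soft_focus"
    return "no_change"
```

For the dialogue shown in the text, the answers lead the program to the "teach_soft_focus" action, i.e. downloading the template that sets M to one in portrait situations.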
The above example involved changing the output of the camera picture taking algorithm based on the input from a single sensor. However, as mentioned previously, a modification to the output of the camera picture taking algorithm based on the inputs of plural sensors is possible. For example, let us consider the situation where the photographer takes nature scenes with a person usually included. In this situation the photographer would have to teach his camera to use the smallest possible aperture, that is, the highest F-stop, every time nature shots are taken, so that both the subject as well as the background or other natural objects of the picture are in focus. To determine the aperture setting and the shutter speed, the microprocessor 10 in the camera 8 finds all possible solutions to the equation B = K·A²/(T·S), where B is the scene brightness in foot lamberts (fl) as determined by the light sensor 14, K is a constant, A is the F-stop (that is, 11 for F/11), T is the shutter speed in seconds, and S is the film ASA number. The microprocessor can then find all possible pairs of A and T that satisfy the above equation for the particular camera. For a camera where the value of K equals 3.91, the ASA equals 100 and the maximum F-stop is 32, the pairs of A, T calculated are: (2.8, 1/60), (4, 1/30), (5.6, 1/15), (8, 1/8), (11, 1/4), (16, 1/2), (22, 1), and (32, 2).
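The enumeration of exposure pairs can be reproduced with a short calculation. This is an editorial sketch: the stop and speed series and the snapping to standard shutter speeds are our assumptions; K, S and the f-stop range come from the example:

```python
# Solve B = K*A**2 / (T*S) for T at each standard f-stop.
STANDARD_STOPS = [2.8, 4, 5.6, 8, 11, 16, 22, 32]
STANDARD_SPEEDS = [1/60, 1/30, 1/15, 1/8, 1/4, 1/2, 1, 2]

def exposure_pairs(brightness_fl: float, k: float = 3.91, asa: int = 100):
    """Return (A, T) pairs satisfying B = K*A**2/(T*S), with T snapped
    to the nearest standard shutter speed.  Each one-stop increase in A
    roughly doubles A**2, so T doubles down the list."""
    pairs = []
    for a in STANDARD_STOPS:
        t_exact = k * a * a / (brightness_fl * asa)
        t = min(STANDARD_SPEEDS, key=lambda s: abs(s - t_exact))
        pairs.append((a, t))
    return pairs
```

A scene brightness of roughly 18.4 fl with K = 3.91 and ASA 100 reproduces the list in the text, from (2.8, 1/60) up to (32, 2); note in particular that the pair for F/16 is (16, 1/2), consistent with the doubling of T at each stop.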
One feature of nature scenes that can be used to identify them is their high light level, generally above 250 foot lamberts. During the teaching process, the microprocessor 10 is instructed to store in EEPROM the maximum F-stop along with the corresponding shutter speed whenever the light meter reading exceeds, for example, 255 foot lamberts. Conversely, if the photographer's intent is to have his subject in sharp focus and the background blurry, then the microprocessor 10 would load the smallest possible F-stop into the microprocessor memory. A block diagram of a system for accomplishing these tasks is shown in Fig. 4.
During the operation, as shown in Fig. 5, a 12 bit word from the light meter 14, corresponding to 4096 different light levels, is split into two groups. The first group, consisting of bits 0-7, is input to a template matching network which includes buffers, transistors and an amplifier that together perform the function of an 8 bit AND gate 86. This function can be performed by inverting buffers and the like, similar to those performing the AND operation on bits 2 and 3 in Fig. 3. The other bits, 8-11, are input to a network which performs the function of a 4 input OR gate 88. Thus, when the light level is 255 or more (the pattern to be matched by the template matching network), either the AND gate or the OR gate will output a 1. This will cause at least one of the two NMOS transistors 90 and 92 to conduct and bring the input of a sigmoid amplifier 94 to zero. This in turn, through the inverters 96 and 98, makes the transmission gate 100 conduct while keeping the transmission gate 102 in the off state. Thus, the aperture A and shutter speed T values transmitted to the shutter driver will be those coming from the EEPROM, which will hold the highest F-stop calculated and the corresponding shutter speed. If the number output by the light meter 14 is less than 255, then both NMOS transistors 90 and 92 will be off, causing the output of the amplifier 94 to be high, which in turn will make the transmission gate 102 conductive and thus transmit to the shutter driver the values of A and T determined by the microprocessor 10. The teaching process in this situation would indicate to the microprocessor 10 that the f-stop/shutter speed pair stored is based on the largest f-stop if the photographer desires to have the background sharp, and the smallest f-stop if the background should be blurry.
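The bit-level threshold test of Fig. 5 can be verified with a small model; this illustration is ours, with the gate structure taken from the description above:

```python
def bright_scene(light_word: int) -> bool:
    """Model of the Fig. 5 threshold detector: a 12-bit light meter
    word is 255 or more exactly when bits 0-7 are all ones (the 8 bit
    AND gate) or any of bits 8-11 is set (the OR gate)."""
    low_all_ones = (light_word & 0xFF) == 0xFF    # AND gate 86
    high_any_set = (light_word >> 8) & 0xF != 0   # OR gate 88
    return low_all_ones or high_any_set

def select_exposure(light_word: int, eeprom_pair, computed_pair):
    """Transmission-gate selection: a bright scene passes the stored
    maximum-F-stop pair from the EEPROM to the shutter driver, and a
    dimmer scene passes the microprocessor's computed pair."""
    return eeprom_pair if bright_scene(light_word) else computed_pair
```

The model confirms the template logic: 255 trips the AND gate, any value of 256 or more trips the OR gate, and 254 or less trips neither.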
Taking the embodiment of Fig. 5 one step further, it is possible to provide an additional template matching neural network segment which will determine whether the maximum f-stop matches the calculated f-stop and, if not, in a manner similar to the increase in the range setting previously discussed, set the f-stop up one setting and the shutter speed down one setting. This would allow the camera to adjust to less than the maximum f-stop and to only slightly decrease the sharpness of the background elements.
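One reading of "up one setting / down one setting" is a single exposure-equivalent step: the f-stop moves one stop toward a smaller aperture while the shutter speed doubles, keeping B = K·A²/(T·S) satisfied. The following sketch (ours, with the stop and speed lists assumed from the earlier example) implements that interpretation:

```python
STOPS = [2.8, 4, 5.6, 8, 11, 16, 22, 32]
SPEEDS = [1/60, 1/30, 1/15, 1/8, 1/4, 1/2, 1, 2]

def step_one_stop(pair):
    """Move one step toward a smaller aperture (higher f-stop) while
    keeping the exposure constant: T doubles as A**2 doubles, mirroring
    the one-zone range adjustment used for focus."""
    a, t = pair
    i = STOPS.index(a)
    if i + 1 < len(STOPS):
        return (STOPS[i + 1], SPEEDS[SPEEDS.index(t) + 1])
    return pair  # already at the maximum f-stop; no adjustment
```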
Fig. 6 illustrates a further embodiment of the present invention. In this embodiment a pattern matching neural network 110 is substituted for the microprocessor 10 and network 40 of Fig. 1. In this embodiment pattern matching segments for each of the picture taking algorithms in the microprocessor 10 are provided, as well as pattern matching segments corresponding to the features of the network 40.

A camera system has been described which includes a conventional film or electronic camera that is augmented with alterable neural networks, or completely controlled by a neural network, together with an interface to a computer, and a computer that can accept the interface to the camera and that includes resident software of the artificial intelligence type. This camera system will be of most use to amateur photographers. The concept, however, can be extended to use by knowledgeable photographers, who will change their camera algorithms to optimize them for the scenes which they are photographing, and by professionals in studio or similar settings, who can have the computer make all the camera settings based on instructions they issue to the computer, so that they do not have to make the settings manually. The description of the alterable network architecture and related circuit diagrams and the artificial intelligence program provide examples in which soft focus portraits and landscape photographs are taken. There are, however, many more situations in which the invention could be used to produce more pleasing photographs for amateur photographers. For example, by altering the exposure and direct vs. bounce flash algorithms, faces can be made to come out darker or lighter than their actual tone.
Because the camera control is implemented using neural networks, the time to calculate the camera settings is reduced compared to microprocessor controlled cameras; this is beneficial in many instances, such as when attempting to photograph a fast moving object. As previously mentioned, the invention may be applied to either film or electronic still or movie cameras. In electronic cameras, algorithm altering can be done dynamically, by viewing the captured image on a television screen as the picture is taken and altering the network immediately thereafter using the personal computer. The many features and advantages of the invention are apparent from the detailed specification, and thus it is intended by the appended claims to cover all such features and advantages which fall within the true spirit and scope thereof. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation illustrated and described. Accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the invention.

Claims

What is claimed is:
1. A control system for a camera including an input sensor and a picture characteristic unit controlling a characteristic of a taken picture, said control system comprising: means for receiving an input from the input sensor and determining an output for the picture characteristic unit; and means for modifying the output for the picture characteristic unit responsive to desired picture characteristics supplied by a user.
2. A system as recited in claim 1, wherein said means for modifying comprises a neural network connected to the input sensors, said means for receiving and the picture characteristic unit.
3. A system as recited in claim 2, further comprising teaching means for modifying said neural network responsive to desired picture characteristics.
4. A system as recited in claim 1, wherein said input sensor comprises a focus sensor producing a focus setting and said modifying means comprises a template matching neural network connected to said focus sensor and changing the focus setting when the focus setting matches a target focus.
5. A system as recited in claim 1, wherein the input sensor comprises a light meter and wherein said means for receiving produces a calculated aperture setting and a calculated shutter speed setting and an optimized aperture setting and an optimized shutter speed setting both optimized for a desired picture characteristic, and said modifying means comprises a template matching neural network for selecting the optimized shutter speed setting and the optimized aperture setting when the light meter indicates the optimized picture characteristic is desired.
6. A teachable camera, comprising: input sensors for focus, exposure, film speed, flash, motion and film type; output units controlling flash, shutter speed, lens focus, aperture and filters; a microprocessor connected to said input sensors and producing focus, shutter speed and aperture control signals; a focus control template matching neural network connected to said microprocessor, said input sensor for focus and said output unit controlling lens focus, and modifying the focus control signals applied to said output unit for focus; and an aperture control template matching neural network connected to said microprocessor, said input sensor for focus and said output units controlling shutter speed and aperture, and modifying the shutter speed and aperture control signals applied to said output units controlling shutter speed and aperture.
7. A camera as recited in claim 6, further comprising learning means for altering said focus control template matching neural network and said aperture control template matching neural network responsive to a user's desired picture characteristics.
8. A teachable camera, comprising: input sensors for focus, exposure, film speed, flash, motion and film type; output units controlling flash, shutter speed, lens focus, aperture and filters; a microprocessor connected to said input sensors and producing output unit control signals; a neural network connected to said input sensors, said output units and said microprocessor; and learning means for teaching said neural network to recognize characteristics of a visual scene from the input sensors, and said neural network modifying the control signals when the characteristics are recognized.
9. A camera as recited in claim 8, wherein said neural network is taught responsive to picture characteristics desired by a user.
10. A teachable camera, comprising: input sensors sensing scene characteristics of a scene; output units controlling picture characteristics of a picture of the scene; and a neural network connected to said input sensors and said output units and determining the picture characteristics responsive to the scene characteristics.
11. A method of teaching a camera how to take pictures with desired characteristics, comprising the steps of:
(a) recording camera settings when a picture is taken; (b) eliciting responses from a user concerning desired characteristics of the picture; and
(c) teaching a neural network in the camera to recognize the settings and change the settings responsive to the desired characteristics.
PCT/US1991/009081 1990-12-21 1991-12-10 Teachable camera WO1992011563A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US631,517 1990-12-21
US07/631,517 US5227835A (en) 1990-12-21 1990-12-21 Teachable camera

Publications (1)

Publication Number Publication Date
WO1992011563A1 true WO1992011563A1 (en) 1992-07-09

Family

ID=24531550

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1991/009081 WO1992011563A1 (en) 1990-12-21 1991-12-10 Teachable camera

Country Status (4)

Country Link
US (1) US5227835A (en)
EP (1) EP0516815A1 (en)
JP (1) JP3128072B2 (en)
WO (1) WO1992011563A1 (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5383027A (en) * 1992-02-27 1995-01-17 Lifetouch National School Studios Inc. Portrait printer system with digital image processing editing
US5751844A (en) * 1992-04-20 1998-05-12 International Business Machines Corporation Method and apparatus for image acquisition with adaptive compensation for image exposure variation
JP3513916B2 (en) * 1994-06-20 2004-03-31 株式会社ニコン Camera system
US5726737A (en) * 1995-11-02 1998-03-10 Eastman Kodak Company System for controlling photofinishing of photosensitive material
US5715371A (en) * 1996-05-31 1998-02-03 Lucent Technologies Inc. Personal computer-based intelligent networks
DE19726877C2 (en) * 1997-06-24 2002-01-24 Fraunhofer Ges Forschung Image processing system and image processing method for endoscopes
US5973734A (en) 1997-07-09 1999-10-26 Flashpoint Technology, Inc. Method and apparatus for correcting aspect ratio in a camera graphical user interface
US6317141B1 (en) 1998-12-31 2001-11-13 Flashpoint Technology, Inc. Method and apparatus for editing heterogeneous media objects in a digital imaging device
JP2001036773A (en) * 1999-07-21 2001-02-09 Canon Inc Electronic equipment, its controlling method and memory medium
JP4272882B2 (en) * 2002-10-09 2009-06-03 キヤノン株式会社 Imaging device, imaging method thereof, program for operating imaging method, and storage medium storing the program
GB2403365B (en) * 2003-06-27 2008-01-30 Hewlett Packard Development Co An autonomous camera having exchangeable behaviours
GB2404511B (en) * 2003-07-26 2007-08-22 Hewlett Packard Development Co Image capture device having a learning function
US8013909B2 (en) * 2004-12-29 2011-09-06 Nokia Corporation Method and apparatus for adjusting exposure in digital imaging
US20060170816A1 (en) * 2005-01-28 2006-08-03 Silverstein D A Method and system for automatically adjusting exposure parameters of an imaging device
US9224145B1 (en) 2006-08-30 2015-12-29 Qurio Holdings, Inc. Venue based digital rights using capture device with digital watermarking capability
US8446480B2 (en) * 2006-12-20 2013-05-21 Nokia Corporation Exposure control based on image sensor cost function
US8248482B2 (en) * 2008-05-15 2012-08-21 Samsung Electronics Co., Ltd. Digital camera personalization
US8498486B2 (en) * 2009-03-12 2013-07-30 Qualcomm Incorporated Response to detection of blur in an image
US8379130B2 (en) * 2009-08-07 2013-02-19 Qualcomm Incorporated Apparatus and method of processing images based on an adjusted value of an image processing parameter
KR101249524B1 (en) * 2012-08-23 2013-04-02 이우정 Defecating cage for pets
US9769367B2 (en) 2015-08-07 2017-09-19 Google Inc. Speech and computer vision-based control
US9836819B1 (en) 2015-12-30 2017-12-05 Google Llc Systems and methods for selective retention and editing of images captured by mobile image capture device
US10732809B2 (en) 2015-12-30 2020-08-04 Google Llc Systems and methods for selective retention and editing of images captured by mobile image capture device
US9838641B1 (en) 2015-12-30 2017-12-05 Google Llc Low power framework for processing, compressing, and transmitting images at a mobile image capture device
US9836484B1 (en) 2015-12-30 2017-12-05 Google Llc Systems and methods that leverage deep learning to selectively store images at a mobile image capture device
US10225511B1 (en) 2015-12-30 2019-03-05 Google Llc Low power framework for controlling image sensor mode in a mobile image capture device
US10594926B2 (en) 2017-01-25 2020-03-17 International Business Machines Corporation Preferred picture taking
US11093743B2 (en) 2018-08-10 2021-08-17 International Business Machines Corporation Intelligent personalization of operations of an image capturing device

Citations (2)

Publication number Priority date Publication date Assignee Title
US4855779A (en) * 1986-11-19 1989-08-08 Minolta Camera Kubushiki Kaisha Camera system
US4978990A (en) * 1988-10-04 1990-12-18 Olympus Optical Co., Ltd. Exposure control apparatus for camera

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
JPS52119961A (en) * 1976-03-31 1977-10-07 Kubota Ltd Weight inspecting apparatus
JPS5890856A (en) * 1981-11-26 1983-05-30 Toshiba Corp Sampling phase synchronizing circuit
JPH065179B2 (en) * 1983-10-25 1994-01-19 株式会社石田衡器製作所 Weighing device
JPS60220615A (en) * 1984-04-17 1985-11-05 Yokogawa Hokushin Electric Corp Digital filter
JPS62201304A (en) * 1986-02-28 1987-09-05 Canon Inc Measuring method for film thickness
US4728978A (en) * 1986-03-07 1988-03-01 Minolta Camera Kabushiki Kaisha Photographic camera
JPH076829B2 (en) * 1986-11-21 1995-01-30 株式会社石田衡器製作所 Weighing device
JPH0769232B2 (en) * 1987-02-18 1995-07-26 株式会社イシダ Method and apparatus for temperature compensation of load cell
JPS63282616A (en) * 1987-05-15 1988-11-18 Tanita:Kk Weigher
US4853733A (en) * 1988-07-08 1989-08-01 Olympus Optical Company Limited Program rewritable camera
JP2761391B2 (en) * 1988-10-04 1998-06-04 オリンパス光学工業株式会社 Focus detection device


Cited By (4)

Publication number Priority date Publication date Assignee Title
US5485315A (en) * 1992-04-17 1996-01-16 Asahi Kogaku Kogyo Kabushiki Kaisha Backlash removing mechanism for zoom lens assembly
FR2697096A1 (en) * 1992-10-20 1994-04-22 Asahi Optical Co Ltd Single lens reflex camera with learning function
US5682558A (en) * 1992-10-20 1997-10-28 Asahi Kogaku Kogyo Kabushiki Kaisha Camera with learning function
US5774746A (en) * 1992-10-20 1998-06-30 Asahi Kogaku Kogyo Kabushiki Kaisha Camera with learning function

Also Published As

Publication number Publication date
US5227835A (en) 1993-07-13
JPH05505044A (en) 1993-07-29
EP0516815A1 (en) 1992-12-09
JP3128072B2 (en) 2001-01-29

Similar Documents

Publication Publication Date Title
US5227835A (en) Teachable camera
US7672580B2 (en) Imaging apparatus and method for controlling display device
CN102685379B (en) Image processing apparatus and method with function for specifying image quality
US7262798B2 (en) System and method for simulating fill flash in photography
US20100110229A1 (en) Terminal having photographing function and display method for the same
CN101478641A (en) Digital photographing apparatus and method of controlling the same
CN105847666B (en) Picture pick-up device and its control method
CN102833479A (en) Imaging control device and imaging control method
CN103929586A (en) Focus aid system
CN105915791B (en) Electronic apparatus control method and device, electronic device
CN111771372A (en) Method and device for determining camera shooting parameters
JPH0462052B2 (en)
US20040212695A1 (en) Method and apparatus for automatic post-processing of a digital image
US9544462B2 (en) Image capturing apparatus and control method of image capturing apparatus
US10541002B2 (en) Imaging apparatus and imaging method
JP7114274B2 (en) Information recording device and information recording method
CN103037160B (en) Digital photographing apparatus and the method for controlling the digital photographing apparatus
JP7200030B2 (en) Learning device, imaging device, learning system, AI information providing device, learning method and learning program
JP2020137050A (en) Imaging device, imaging method, imaging program, and learning device
CN114979498B (en) Exposure processing method, device, electronic equipment and computer readable storage medium
US11595564B2 (en) Universal control interface for camera
Yang et al. Personalized attention-aware exposure control using reinforcement learning
JP2017184290A (en) Digital camera
JP2022040912A (en) Detection device and detection method
JP2023166258A (en) Image evaluation device, method for evaluating image, and image evaluation program

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU MC NL SE

WWE Wipo information: entry into national phase

Ref document number: 1992903047

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1992903047

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1992903047

Country of ref document: EP