WO1995024684A1 - A neural network - Google Patents
A neural network Download PDFInfo
- Publication number
- WO1995024684A1 WO1995024684A1 PCT/DK1995/000105 DK9500105W WO9524684A1 WO 1995024684 A1 WO1995024684 A1 WO 1995024684A1 DK 9500105 W DK9500105 W DK 9500105W WO 9524684 A1 WO9524684 A1 WO 9524684A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- network
- units
- signal
- dependence
- network units
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
Definitions
- A neural network
- the invention concerns a neural network of the type stated in the introductory portion of claim 1.
- Neural networks are used for data processing purposes on the basis of a plurality of complexly related input parameters to give the best possible response thereto without necessarily knowing the relation between the individual input parameters. This is extremely advantageous when no simple linear relation between the parameters exists.
- neural networks are therefore constructed according to the same basic principles as the human brain, comprising a multitude of decision-making cells or neurons as well as connections or synapses between these.
- these control parameters comprise a threshold value which determines whether the neuron concerned fires, i.e. emits an electric pulse, after having received corresponding pulses from other neurons.
- the fired pulses are transferred via one or more synapses to other neurons, and the strength or amplitude of the individual pulses transferred is one of the adjustable control parameters in the network.
- a plurality of learning approaches is known by means of which the parameters can be established for a given application; this takes place prior to putting the network into service.
- EP-A-492 641 discloses a neural network and a learning procedure for it.
- the learning procedure comprises submitting to the network an input data signal and a learning signal containing both desired and undesired data.
- the network will subsequently respond more expediently. If, e.g., the network controls a process system, it could be fatal if specific input data caused the network to increase a temperature, and thereby a pressure, beyond the value which the system could withstand.
- US-5 107 454 discloses a neural network for use in pattern recognition, and this network is based on feedback, since the learning procedure is iterative, which means that the pattern concerned and the subsequent intermediate result patterns are run through the network.
- EP-A-405 174 discloses a learning method which ensures that the network does not get "stuck" in a local maximum, which might happen if the correlation between input and output data in a region was of a certain size. The data processing system could then not get out of these conditions, and further adaptation would not be possible.
- US-5 010 512 discloses implementation of a neural network as MOSFET transistor elements.
- the neurons may here be regarded as being threshold switches having several inputs.
- the network can be operated in two modes, a learning mode in which the control parameters of the neurons are adjusted, and an associative mode in which the control parameters are constant.
- the object of the invention is to provide a new generation of neural networks which possess greater adaptability, the network being continuously capable of adapting to new conditions as well as new or changed surroundings.
- the object of the invention is achieved in that the network has associated with it a sensor unit which registers changes in the conditions under which the network works and adjusts the control parameters of the network units in response thereto, and in response to whether the network unit received and emitted a signal prior to the evaluated response.
- the network of the invention can therefore be used in tasks where the possible results are not known, or where they are difficult to define, and the network can continuously adapt itself to variations in the task as well as changes in the desired goals. While prior art neural networks are normally adapted to a given task in advance, it may be said that the network of the invention automatically adapts itself, the modes of the network units and the strengths of the connections between these being continuously updated during the execution of a task.
- the network of the invention may be said to be in a performing mode, since the network continues its learning during the execution of a task. Even though no external feedback to a network according to the invention takes place, it will still possess advantages over prior art neural networks because of a local reinforcement rule and a simple evaluative feedback in the learning.
- the network units are preferably constructed in layers, and all the network units in a layer receive signals from network units in the preceding layer and emit signals to network units in the subsequent layer. It is hereby ensured that a direct signal transfer path is established in the network without any risk of loops being formed internally in the network.
- the signals which the network units emit to each other via the network connections are formed by short pulses, and the pulse height is one of the parameters that can be controlled when the surroundings of the network change dynamically.
- the pulse height regulation will normally be incorporated in the network connections or the synapses.
- the network units may be considered as being pulse transmitters, and the pulses propagate through the network in a chain reaction.
- the time it takes for the pulses to propagate through the network, i.e. from when a parameter is received at the input of the network until it provides a response at the output, may be considered the response time of the network.
- One of the essential elements in the network of the invention is an internal activity control, e.g. implemented through a self-monitoring threshold value or strength adjustment.
- the network, which may be called a performing network, differs from other known artificial networks precisely by this activity control.
- it is attempted to keep down the total number of pulses in the network by means of an internal control mechanism.
- the total signal from a region, which can e.g. be identified with the output region, is detected. If this exceeds the unit signal, i.e. if more than one output unit fires, then the threshold value increases, and if the total signal is below the unit signal, i.e. there is no signal, then the threshold value decreases.
- the threshold value can hereby be controlled so that the number of firing neurons approaches a number of the same order as the number of layers. This means that the number of firing neurons will be small with respect to the total number of neurons in the network, but slightly larger than the number of neuron layers.
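The activity control described above can be sketched as a simple update rule. The step size `delta` is an assumed constant; the text only specifies the direction of the adjustment and requires the change to be small relative to the response time.

```python
def adjust_threshold(T, n_output_firings, delta=0.01):
    """Global threshold update following the described activity control.

    More than one firing output unit raises the threshold; no firing
    output unit lowers it; exactly one firing (the "unit signal")
    leaves it unchanged. The step size `delta` is an assumption.
    """
    if n_output_firings > 1:
        return T + delta
    if n_output_firings == 0:
        return T - delta
    return T
```

Iterating this rule is what produces the bounded oscillation of the threshold value seen later (between about 0.35 and 0.50 in fig. 5).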
- the network will hereby be very sensitive to changes in the dynamic conditions of the external system.
- the sensitivity of the network is increased precisely because there is a path of firing neurons through the network. If the input or the feedback is changed, more new paths will be formed, and when new stationary modes of the external system occur, these new paths will be narrowed again so that there will once more be a single path through the network.
- Each output of the network will be associated with a predetermined action in a preferred embodiment.
- the inputs of the network will be associated with sensing of corresponding action-specific parameters. This gives the network an unsurpassed adaptability, since the network adapts itself to the use concerned. The network therefore does not have to be tied to a specific use; the inputs and outputs may be connected randomly to the external system, since with time the feedback signal optimizes the parameters of the network to the given purpose.
- the invention also concerns a method of optimizing a neural network to changed conditions of operation, said method being characterized by the features defined in claim 8.
- the threshold value in the preferred embodiment is global, i.e. the same for all network neurons, while the strength of the connections is local.
- the network of the invention lends itself to adaptation to a given application, without having been adapted to the given task beforehand.
- the invention is particularly useful for tasks in which the surroundings change their nature.
- When the network has adapted itself to a given application, it can also adapt itself to changes occurring in that connection.
- the invention will thus be extremely useful in connection with networks in which learning takes place in a special learning phase, since the network of the invention will still be able to adapt itself to dynamic variations in the external system.
- fig. 1 is a schematic view of a preferred embodiment of a performing network according to the invention;
- fig. 2 shows a network unit in one layer and network units in other layers connected therewith;
- fig. 3 is a schematic view of the coupling between the neurons of the network;
- figs. 4a-d are schematic views of details of the control principles in connection with the preferred embodiment of a performing network according to the invention;
- fig. 5 shows the efficiency of a performing network according to the invention in connection with a first example of application;
- figs. 6a-c illustrate the activity of the network;
- fig. 7 illustrates the power spectrum of the network whose performance is shown in fig. 5;
- figs. 8 and 9 show the efficiency of the performing network according to the invention in connection with second and third examples of application;
- fig. 10 shows the power spectrum of the network whose performance is shown in figs. 8 and 9.
- Fig. 1 shows a preferred embodiment of a neural network according to the invention, and the network, which is of the performance type, will be explained below.
- the network is constructed as a matrix of network units 1 or neurons, each network unit 1 being connected to one or more other network units 1 through network connections 2 or synapses.
- the neural network is layered, the network units 1 in one layer being connected to a plurality of network units 1 in the adjacent layers.
- a network unit 1 receives signals or firings from network units 1 in the previous layer and emits signals or firings to network units in the following layer.
- Neural networks of this type are called feed-forward networks. It will be seen that the network has an input layer of network units 1.
- These input network units are supplied in a manner known per se with an input signal in the form of the parameters to which the network is to respond or react on the basis of the "experience" which the network has. There is likewise an output layer having network units 1 which supply a response or a representation in dependence on the parameter complex applied to the input layer which is to be evaluated.
- the network is connected to an external system 3, and this external system may e.g. be a process which is controlled by the network.
- the network can control many different systems, examples being traffic control, control of foreign exchange transactions, control of meat quality, etc.
- the network units 1 at the input of the network receive data from the external system via a connection 5. These received data have been processed in advance in a manner known per se, so that the data are represented in a correct data format. In dependence on this, the network provides at its output a response to the data applied, and this response is supplied via a connection 4 to the external system. This response will then contain instructions on the manner in which the external system is to be affected to achieve the desired processes.
- the feedback signal from the external system contains an evaluation of whether the system control of the network itself is good or bad. To prevent useful information from getting lost among random firings from the network units, it is generally desirable that the firing level is kept as low as possible, typically just a few firings in each network layer.
- the outputs from the neurons may be coupled to a summing unit so that the signals are added. The output from this may be coupled to a comparator, by means of which the total firing signal may be compared with a desired value, and the threshold value of the network is adjusted in response to this.
- a performing network according to the invention has four dynamic elements which vary in terms of time.
- the first one of these is the network units comprising both network units contained in the actual neural network and units contained in the input and output layers.
- the temporal mode variation of the network units corresponds to prior art known per se, since the variation solely depends on the signal applied to the input of the network.
- the network units are pulse transmitters which fire, i.e. emit a pulse, if the total received signal exceeds a given threshold value T.
- To use pulse transmitters as network units is essential according to the invention, but, naturally, it is no innovation per se.
- the threshold value may e.g. be controlled in that the total signal from a plurality of network units in a region, e.g. the output region, is detected and compared with a desired signal, the threshold value then being adjusted in response to this comparison. This means that if the total signal exceeds the unit signal, i.e. if more than one output unit fires, then the threshold value increases, and if the total signal is below the unit signal, so that there is no signal, then the threshold value decreases.
- This new idea is an internal control mechanism where the total number of pulses is kept down.
- the network may be composed of transistor logic, as will be explained below, and it will therefore have a certain response time, i.e. the time it takes from the application of a signal to the input of the network to the production of a response to this signal.
- This mechanism will thus cause the threshold value to oscillate in a complex manner. It is, inter alia, this property that gives the network its adaptability. It should be mentioned here that the change in the threshold value must be relatively small over a period corresponding to the response time of the network.
- the third dynamic element of a performing network is the connection between the individual network units.
- a strength q(t) is associated with each of these connections, q lying in the range between 0 and 1. The strength q thus determines the size of a possible pulse which is transmitted through the connection concerned.
- Each network unit 1 is connected to other pulse transmitters, from which signals are received, through input connections. If the total, received signal exceeds the threshold value concerned, the network unit transmits a pulse which is spread through the output connections, the sum of the strengths of the output connections being precisely one.
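The pulse propagation just described can be sketched as a single feed-forward step. The matrix layout and the strict inequality against T are illustrative assumptions; only the thresholding of the summed weighted pulses is stated in the text.

```python
import numpy as np

def propagate_layer(firings_prev, strengths, T):
    """One feed-forward step of the pulse network (illustrative sketch).

    firings_prev : 0/1 vector of pulses emitted by the previous layer.
    strengths    : strengths[i, j] = connection strength from unit i in
                   the previous layer to unit j in this layer (each
                   unit's outgoing strengths summing to 1).
    T            : global firing threshold.
    Returns the 0/1 firing vector of this layer.
    """
    total_input = firings_prev @ strengths   # summed weighted pulses per unit
    return (total_input > T).astype(int)     # fire iff the input exceeds T
```

Applying this layer by layer gives the chain reaction of pulses from input to output that defines the response time of the network.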
- the strength concept is well-known in connection with artificial neural networks of a known type in which the strength values are updated by pattern learning. It is here desired to strengthen the connections that lead to correct patterns.
- the performing network of the invention is connected to a peripheral, user-selected system via an input and an output as well as a feedback which is essential to the invention and which may be described by a value r(t).
- the feedback signal r(t) will normally not be a step function, since the individual feedbacks will be delta-shaped, i.e. in the form of voltage peaks. If the feedback occurs periodically, the signal r(t) may have the shape of a periodically jumping function consisting of segments of exponential functions with a negative argument.
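A feedback signal of the described shape, delta-like peaks followed by decaying exponential segments, might be modelled as follows. The decay constant `tau` is an assumption; the text only states that the segments are exponentials with a negative argument.

```python
import math

def feedback_signal(t, pulse_times, amplitudes, tau=1.0):
    """Piecewise-exponential feedback r(t) built from delta-like pulses.

    Each feedback event at time t_k with amplitude a_k (positive or
    negative) contributes a_k * exp(-(t - t_k) / tau) for t >= t_k, so
    between events r(t) jumps and then decays exponentially.
    """
    return sum(a * math.exp(-(t - tk) / tau)
               for tk, a in zip(pulse_times, amplitudes) if t >= tk)
```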
- each network unit in the central part of the network is connected to three network units in the preceding layer and to three network units in the subsequent layer.
- the network unit 10 has three output connections to three network units 14-16 in the subsequent layer; these network units 14-16 have the states (n1, n2, n3), and the connections to them have the strengths (q1, q2, q3).
- the network unit receives a signal which is the sum of the weighted firings from the connected units, i.e. s(t) = q1·n1(t) + q2·n2(t) + q3·n3(t).
- the threshold value is adjusted in small steps, T(t + Δt) = T(t) + ΔT, the sign of ΔT depending on whether the detected output signal exceeds or falls below the unit signal.
- the term r(t)f(q(t)) is only included in the adjustment of the strength of a connection when the two neurons which the connection joins fire successively, so that ni(t) and nj(t + Δt) both assume the value 1.
- the strength values are subsequently normalized so that the sum of the strengths from a neuron assumes the value 1. The strength may then be expressed by dividing each adjusted strength by the sum of the adjusted strengths of the outgoing connections of the neuron concerned.
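The local reinforcement rule can be sketched as follows: the term r(t)·f(q(t)) is applied only to connections between successively firing neurons, after which the outgoing strengths are renormalized to sum to one. The form of f is not given in the text; q·(1−q) is a placeholder chosen only to keep the strengths within (0, 1).

```python
import numpy as np

def update_strengths(q, fired_pre, fired_post, r, f=lambda q: q * (1 - q)):
    """Sketch of the local reinforcement rule with renormalization.

    q          : outgoing strength matrix, rows = presynaptic units.
    fired_pre  : 0/1 vector of units that fired at time t.
    fired_post : 0/1 vector of units that fired at time t + dt.
    r          : evaluative feedback signal r(t), positive or negative.
    f          : assumed form of f(q); the patent leaves it unspecified.
    """
    q = q.copy()
    # r(t)*f(q) is added only where pre- and postsynaptic units fired in succession
    mask = np.outer(fired_pre, fired_post).astype(bool)
    q[mask] += r * f(q[mask])
    q = np.clip(q, 0.0, None)
    # renormalize so that each unit's outgoing strengths again sum to 1
    row_sums = q.sum(axis=1, keepdims=True)
    return np.where(row_sums > 0, q / row_sums, q)
```

Positive feedback thus strengthens the connections along the active path at the expense of the inactive ones, while negative feedback weakens them, which is what narrows the firing region down to a single path.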
- Fig. 3 shows a functional diagram of the coupling between the neurons in two network layers.
- a network unit 1 receives a signal on the input which is the sum of the input signals from three network units in the preceding layer.
- in practice, a network unit may be a flip-flop manufactured by IC technology.
- the output signal from the flip-flop is a delta voltage or a voltage pulse.
- the strength of a connection between two neurons or network units is adjusted as follows.
- the output signals 27, 28 of the two neurons (see fig. 4b) are passed to a logic AND gate 29, and the signal from this is integrated in an integrator 30, which may e.g. be an operational amplifier with integrating feedback.
- the signal from the integrator 30 is used for adjusting the strength of the connection.
- the connection strength regulator 25, which appears in fig. 3, is shown in detail in fig. 4c, and it is the input V1 that receives the strength-regulating signal from the integrator 30.
- the strength-regulating element here consists of a dual-gate MOSFET transistor.
- the gain-changing, slowly varying input is passed to the gate electrode of the transistor through a resistor R1.
- the firing signal is likewise passed to the gate electrode of the transistor, with any DC components removed by a decoupling capacitor C1.
- the resistor R2 is adjusted together with the resistors R1, R3 and R4 to provide suitable working conditions for the transistor.
- the outputs O1 and O2 are inverting and non-inverting, respectively.
- the signal on the output O2 corresponds to the signal on the input V2, but with a changed strength or amplitude.
- the signal on O2 is passed from the connection strength regulator 25 to a multiplier 26 shown in fig. 3.
- the firing signal is multiplied here with the feedback signal r(t), and it is worth noting that the AND gate 29 ensures that only connections between firing neurons are strengthened, as long as the network works successfully.
- the output from the multiplier 26 is coupled to a network unit 1 in the next layer of network units.
- the actual neural network 34 forms part of a computing unit, which moreover has an input/output unit 35 which is connected via respective interfaces 37, 38 to the external system.
- the interface 37 generates input para ⁇ meters for the neural network, while the interface 38 controls the external system in response to the output of the network.
- a sensing unit 31 monitors the external system, and this sensing unit 31 may e.g. be an optical sensing unit that supplies a signal to the computing unit. If the neural network tries to minimize the distance to a moving object, the sensing unit 31 may supply a signal which represents the distance concerned.
- This signal is applied to a central unit 32 which compares it to a previously measured value of the distance, obtained from a storage 33. If the distance has become smaller, the central unit 32 supplies a positive feedback to the neural network 34, the feedback value concerned likewise being obtained from the storage 33. If, on the other hand, the distance has increased, a negative feedback signal is supplied to the neural network 34.
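The comparison performed by the central unit 32 can be sketched as follows. The fixed `reward` and `penalty` magnitudes stand in for the feedback values obtained from the storage 33 and are assumptions.

```python
def evaluate_feedback(distance, previous_distance, reward=1.0, penalty=-1.0):
    """Central-unit comparison from the distance-minimization example.

    Returns a positive feedback value when the distance to the target
    has decreased since the last measurement, a negative one when it
    has increased, and zero when it is unchanged (an assumed tie rule).
    """
    if distance < previous_distance:
        return reward
    if distance > previous_distance:
        return penalty
    return 0.0
```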
- the network is used for choosing a randomly selected activity among many, said activity being considered desirable.
- the performance P of the network is the fraction of the network's actions which are the desired one.
- Fig. 5 shows how the performance P of the network, averaged over 1000 time intervals, and the threshold value T change with time. It will be seen that the variation of the threshold value is confined between about 0.35 and 0.50. It is also noted that the performance P increases rapidly from 0 to about 0.8, after which the value of P fluctuates until it reaches 1.
- Fig. 6 shows how the average activity slowly narrows.
- the activity has been collected over 1000 time intervals and is concentrated within a range 20, which is slowly narrowed down to a single path 21.
- the neurons of the network will then form a pattern of firing neurons until either the input signal or the feedback signal changes.
- Inset in fig. 5 is a section of the temporal fluctuations in the feedback signal r and the threshold value T.
- the signal of more than 25000 time intervals was divided into 50 segments of 1024 intervals each (i.e. with 50% overlap), and a standard Parzen window was used for obtaining the power spectrum from these segments.
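The spectral estimate described here, overlapping segments of 1024 samples each tapered by a Parzen window, corresponds to Welch's method. A sketch using SciPy follows; the random signal is only a stand-in for the recorded data.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
x = rng.normal(size=25600)  # stand-in for the recorded network signal

# 50%-overlapping segments of 1024 samples, each tapered by a
# Parzen window, as described in the text
f, S = welch(x, window='parzen', nperseg=1024, noverlap=512)
```

Fitting a power law to the resulting S(f) would then yield the exponent discussed below.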
- S(f) = constant · f^(−α), where α is approximately equal to 1.1.
- the network is caused to find and follow a movable target, which is first assumed to move along a straight line.
- the network is formed by network units arranged in a 16 x 16 matrix, in which each unit in the output layer is associated with a motion of the sighting point in a predetermined direction and with a given size.
- adjacent units may thus be related to different motions. If several units in the output layer fire simultaneously, the motion is taken as the weighted average value. If the sighting point moves away from the object, the feedback will be negative; otherwise the feedback will be positive.
- Fig. 8 shows the linear motion of the object as a dashed line, while the sighting point is marked by a solid line, which coincides with the motion of the object after a short sequence.
- the variations in the threshold value are likewise shown.
- the inset image shows how the sighting follows the object, and how the feedback signal reacts during this sequence.
- Fig. 9 shows the same case, but here the objects move along a sine curve.
- the sighting point relatively rapidly captures the object and follows it.
- the feedback signal is shown in an inset image, and both the movement of the object and the movement of the sighting point may be followed. It is important to the invention that the feedback of the system does not lock, but constantly changes between negative and positive. It is primarily this that gives the performing network its adaptability and performance.
- Fig. 10 shows the power spectrum of the applications shown in figs. 8 and 9, and it will be noted that peak values occur in the power spectrum S(f) and in the distribution D(r), indicating that some time intervals occur particu ⁇ larly frequently during the performing process.
Abstract
Description
Claims
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DK95911225T DK0749601T3 (en) | 1994-03-08 | 1995-03-08 | Neural network |
DE69504316T DE69504316T2 (en) | 1994-03-08 | 1995-03-08 | NEURONAL NETWORK |
AU18898/95A AU1889895A (en) | 1994-03-08 | 1995-03-08 | A neural network |
EP95911225A EP0749601B1 (en) | 1994-03-08 | 1995-03-08 | A neural network |
US08/700,386 US5857177A (en) | 1994-03-08 | 1995-03-08 | Neural network |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DK0268/94 | 1994-03-08 | ||
DK26894 | 1994-03-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1995024684A1 true WO1995024684A1 (en) | 1995-09-14 |
Family
ID=8091632
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/DK1995/000105 WO1995024684A1 (en) | 1994-03-08 | 1995-03-08 | A neural network |
Country Status (7)
Country | Link |
---|---|
US (1) | US5857177A (en) |
EP (1) | EP0749601B1 (en) |
AT (1) | ATE170303T1 (en) |
AU (1) | AU1889895A (en) |
DE (1) | DE69504316T2 (en) |
DK (1) | DK0749601T3 (en) |
WO (1) | WO1995024684A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7962482B2 (en) | 2001-05-16 | 2011-06-14 | Pandora Media, Inc. | Methods and systems for utilizing contextual feedback to generate and modify playlists |
US20040064427A1 (en) * | 2002-09-30 | 2004-04-01 | Depold Hans R. | Physics based neural network for isolating faults |
US20040064425A1 (en) * | 2002-09-30 | 2004-04-01 | Depold Hans R. | Physics based neural network |
US20040064426A1 (en) * | 2002-09-30 | 2004-04-01 | Depold Hans R. | Physics based neural network for validating data |
US7831416B2 (en) * | 2007-07-17 | 2010-11-09 | Caterpillar Inc | Probabilistic modeling system for product design |
US8005781B2 (en) * | 2008-06-09 | 2011-08-23 | International Business Machines Corporation | Connection of value networks with information technology infrastructure and data via applications and support personnel |
US10949737B2 (en) | 2016-07-13 | 2021-03-16 | Samsung Electronics Co., Ltd. | Method for neural network and apparatus performing same method |
KR102399548B1 (en) * | 2016-07-13 | 2022-05-19 | 삼성전자주식회사 | Method for neural network and apparatus perform same method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4774677A (en) * | 1981-08-06 | 1988-09-27 | Buckley Bruce S | Self-organizing circuits |
US4933871A (en) * | 1988-12-21 | 1990-06-12 | Desieno Duane | Graded learning device and method |
US5274745A (en) * | 1989-07-28 | 1993-12-28 | Kabushiki Kaisha Toshiba | Method of processing information in artificial neural networks |
US5283855A (en) * | 1990-11-22 | 1994-02-01 | Ricoh Company, Ltd. | Neural network and method for training the neural network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4926064A (en) * | 1988-07-22 | 1990-05-15 | Syntonic Systems Inc. | Sleep refreshed memory for neural network |
-
1995
- 1995-03-08 DK DK95911225T patent/DK0749601T3/en active
- 1995-03-08 EP EP95911225A patent/EP0749601B1/en not_active Expired - Lifetime
- 1995-03-08 US US08/700,386 patent/US5857177A/en not_active Expired - Lifetime
- 1995-03-08 AT AT95911225T patent/ATE170303T1/en not_active IP Right Cessation
- 1995-03-08 WO PCT/DK1995/000105 patent/WO1995024684A1/en active IP Right Grant
- 1995-03-08 DE DE69504316T patent/DE69504316T2/en not_active Expired - Fee Related
- 1995-03-08 AU AU18898/95A patent/AU1889895A/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
DE69504316T2 (en) | 1999-01-21 |
DE69504316D1 (en) | 1998-10-01 |
DK0749601T3 (en) | 1999-02-08 |
ATE170303T1 (en) | 1998-09-15 |
EP0749601A1 (en) | 1996-12-27 |
US5857177A (en) | 1999-01-05 |
AU1889895A (en) | 1995-09-25 |
EP0749601B1 (en) | 1998-08-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8306931B1 (en) | Detecting, classifying, and tracking abnormal data in a data stream | |
US20220245429A1 (en) | Recursive coupling of artificial learning units | |
Lin et al. | Supervised learning in multilayer spiking neural networks with inner products of spike trains | |
Chiang et al. | A self-learning fuzzy logic controller using genetic algorithms with reinforcements | |
Beck et al. | Complex inference in neural circuits with probabilistic population codes and topic models | |
KR20190098106A (en) | Batch normalization layer training method | |
Guan et al. | Using a parallel distributed processing system to model individual tree mortality | |
CN110033081A (en) | A kind of method and apparatus of determining learning rate | |
WO1995024684A1 (en) | A neural network | |
Wang et al. | Evolving gradient boost: A pruning scheme based on loss improvement ratio for learning under concept drift | |
Florian | A reinforcement learning algorithm for spiking neural networks | |
Rani et al. | Neural Network Applications in Design Process of Decision Support System | |
Kassaymeh et al. | A hybrid salp swarm algorithm with artificial neural network model for predicting the team size required for software testing phase | |
Ji et al. | Automatic recall machines: Internal replay, continual learning and the brain | |
Shen et al. | Brain-inspired neural circuit evolution for spiking neural networks | |
Yellakuor et al. | A multi-spiking neural network learning model for data classification | |
US20220299232A1 (en) | Machine learning device and environment adjusting apparatus | |
Hu et al. | Incremental learning framework for autonomous robots based on q-learning and the adaptive kernel linear model | |
KR20200094354A (en) | Method for generating spiking neural network based on burst spikes and inference apparatus based on spiking neural network | |
Abdelfattah et al. | Evolving robust policy coverage sets in multi-objective markov decision processes through intrinsically motivated self-play | |
Kasi et al. | Energy-efficient event pattern recognition in wireless sensor networks using multilayer spiking neural networks | |
Konsoulas | Adaptive neuro-fuzzy inference systems (anfis) library for simulink | |
Ünal et al. | Artificial neural networks | |
Traub et al. | Learning precise spike timings with eligibility traces | |
Shahana et al. | Adaptive Neuro Fuzzy Data Aggregation Model for Developing and Planning for Aquaculture Farming Practices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AM AT AU BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU JP KE KG KP KR KZ LK LR LT LU LV MD MG MN MW MX NL NO NZ PL PT RO RU SD SE SG SI SK TJ TT UA UG US UZ VN |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): KE MW SD SZ UG AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1995911225 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 08700386 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 1995911225 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
NENP | Non-entry into the national phase |
Ref country code: CA |
|
WWG | Wipo information: grant in national office |
Ref document number: 1995911225 Country of ref document: EP |