|Publication number||US6539092 B1|
|Application number||US 09/347,493|
|Publication date||Mar 25, 2003|
|Filing date||Jul 2, 1999|
|Priority date||Jul 2, 1998|
|Also published as||CA2334597A1, CA2334597C, DE69935913D1, DE69935913T2, EP1092297A2, EP1092297A4, EP1092297B1, US7941666, US9852572, US20030188158, US20080049940, US20110113248, US20120017089, WO2000002342A2, WO2000002342A3|
|Inventors||Paul C. Kocher|
|Original Assignee||Cryptography Research, Inc.|
This application claims the benefit of U.S. provisional patent application No. 60/091,644, filed on Jul. 2, 1998.
This application is related to co-pending U.S. patent application Ser. No. 09/224,682, filed on Dec. 31, 1998.
The present invention relates to systems for securely managing and using cryptographic keys, and more specifically to methods and apparatuses for securing cryptographic devices against external monitoring attacks.
Attackers who gain access to cryptographic keys and other secrets can potentially perform unauthorized operations or forge transactions. Thus, in many systems, such as smartcard-based electronic payment schemes, secrets need to be protected in tamper-resistant hardware. However, recent work by Cryptography Research has shown that smartcards and other devices can be compromised if information about cryptographic secrets leaks to attackers who monitor devices' external characteristics such as power consumption or electromagnetic radiation.
In both symmetric and asymmetric cryptosystems, secret parameters must be kept confidential, since an attacker who compromises a key can decrypt communications, forge signatures, perform unauthorized transactions, impersonate users, or cause other problems. Methods for managing keys securely using physically secure, well-shielded rooms are known in the background art and are widely used today. However, previously-known methods for protecting keys in low-cost cryptographic devices are often inadequate for many applications, such as those with challenging engineering constraints (cost, size, performance, etc.) or that require a high degree of tamper resistance. Attacks such as reverse-engineering of ROM using microscopes, timing attack cryptanalysis (see, for example, P. Kocher, “Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems,” Advances in Cryptology—CRYPTO'96, Springer-Verlag, 1996, pages 104-113), and error analysis (see, for example, E. Biham and A. Shamir, “Differential Fault Analysis of Secret Key Cryptosystems,” Advances in Cryptology—CRYPTO'97, Springer-Verlag, 1997, pages 513-525) have been described for analyzing cryptosystems.
Key management techniques are known in the background art for preventing attackers who compromise devices from deriving past keys. For example, ANSI X9.24, “Financial services—retail key management,” defines a protocol known as Derived Unique Key Per Transaction (DUKPT) that prevents attackers from deriving past keys after completely compromising a device's state. Although such techniques can prevent attackers from deriving old keys, they have practical limitations and do not provide effective protection against external monitoring attacks in which attackers use partial information about current keys to compromise future ones.
Cryptography Research has also developed methods for using iterated hashing operations to enable a client and server to perform cryptographic operations while the client protects itself against external monitoring attacks. In such methods, the client repeatedly applies a cryptographic function to its internal secret between or during transactions, such that information leaked in each of a series of transactions cannot be combined to compromise the secret. However, the system described has a disadvantage in that the server must perform a similar sequence of operations to re-derive the symmetric session key used in each transaction. Thus, in cases such as where there are a large number of unsynchronized server devices (such as electronic cash applications where a large number of merchant terminals operate as independent servers) or if servers have limited memory, the server cannot reliably precompute all possible session keys clients might use. As a result, transaction performance can suffer since a relatively large number of operations may be required for the server to obtain the correct session key. For example, the n-th client session key can require n server operations to derive. A fast, efficient method for obtaining leak-resistant and/or leak-proof symmetric key agreement would thus be advantageous.
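The iterated-update approach described above can be sketched as follows. This is an illustrative sketch only: SHA-256 stands in for whatever cryptographic update function an actual system would use, and the secret values are arbitrary. The point is the server-side cost: re-deriving the client's n-th key requires n updates.

```python
import hashlib

def update(secret: bytes) -> bytes:
    """One-way update: replace the secret so that leaked partial
    information about the old value does not describe the new one."""
    return hashlib.sha256(secret).digest()

initial = hashlib.sha256(b"shared initial secret").digest()

# Client side: the secret is updated between transactions.
client_secret = initial
for _ in range(3):
    client_secret = update(client_secret)

# Server side: to re-derive the client's 3rd session key from the
# initial secret, the server must repeat all 3 updates -- O(n) work
# for the n-th transaction, which motivates the indexed scheme below.
server_secret = initial
for _ in range(3):
    server_secret = update(server_secret)

assert client_secret == server_secret
```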
The present invention describes ways to make smartcards (and other cryptographic client devices) secure even if attackers are able to use external monitoring (or other) attacks to gather information correlated to the client device's internal operations. In one embodiment, a cryptographic client device (e.g., a smartcard) maintains a secret key value as part of its state. The client can update its secret value at any time, for example before each transaction, using an update process that makes partial information that may have previously leaked to attackers about the secret no longer (or less) usefully describe the new updated secret value. (Information is considered useful if it can help or enable an attacker to implement an actual attack.) Thus, the secret key value is updated sufficiently frequently (perhaps as often as once per transaction) such that information leaked about the input state does not as usefully describe the updated state. By repeatedly applying the update process, information leaking during cryptographic operations that is collected by attackers rapidly becomes obsolete. Thus, such a system can remain secure against attacks involving repeated measurements of the device's power consumption or electromagnetic characteristics, even when the system is implemented using leaky hardware and software (i.e., that leak information about the secret values). (In contrast, traditional systems use the same secret value repeatedly, enabling attackers to statistically combine information collected from a large number of transactions.)
The present invention can be used in connection with a client and server using such a protocol. To perform a transaction with the client, the server obtains the client's current transaction counter (or another key index value). The server then performs a series of operations to determine the sequence of transformations needed to re-derive the correct session key from the client's initial secret value. These transformations are then performed, and the result is used as a transaction session key (or used to derive a session key).
The present invention can include a sequence of client-side updating processes that allow for significant improvements in the performance of the corresponding server operations, while maintaining leak-resistant and/or leak-proof security characteristics in the client device. In one embodiment of the invention, each process in the sequence is selected from among two forward cryptographic transformations (FA and FB) and their inverses (FA⁻¹ and FB⁻¹). Using methods that will be described in detail below, such update functions are applied by the client in a sequence that assures that any single secret value is never used or derived more than a fixed number of times (for example, three). Furthermore, the update functions and sequence also assure that the state of (and hence the secret session key value used in) any transaction is efficiently derivable from a starting state (such as the state used in the first transaction) within a small number of applications of FA and FB (or their inverses).
If the number of operations that can securely be performed by a client is n (i.e., n different transactions can be performed, without using the same secret value more than a fixed number of times), a server knowing or capable of obtaining the client's initial secret value K (or initial state corresponding thereto) can derive any resulting secret value (or corresponding state) in the series of transactions significantly faster than by performing n corresponding updates. Indeed, the state for any given transaction can often be derived by a server using O(log n) calculations of FA and FB (or their inverses). If the system designer has made n sufficiently large, this can allow a virtually limitless set of transactions to be performed by clients while providing excellent server performance.
FIG. 1 shows an exemplary embodiment of a key update process through a series of transactions.
FIG. 2 shows an exemplary client-side indexed key update process.
FIG. 3 shows an exemplary server process for deriving a transaction key from a key index and base key.
FIG. 4 shows exemplary embodiments of four state transformation operations.
Indexed Key Management
The invention enables parties to perform cryptographic operations with increased security against external monitoring attacks. Although exemplary embodiments are described involving two parties, a “client” and a “server”, the terms “client” and “server” are chosen for convenience and might not necessarily correspond directly to any particular role in a system design. For example, the client could be a smartcard, and the server could be a mainframe computer, or vice versa. Furthermore, although most cryptographic operations involve two parties (e.g., one at the client and one at the server), the invention can, of course, be applied in environments involving only one party (such as in secure memory or storage systems in which both client and server are under a single party's control or are combined in a single device) or in environments involving more than two parties and/or devices.
In an exemplary embodiment, the client is initialized with a secret key K0 for a symmetric cryptosystem, where K0 is also known to (or derivable by) the server. The key K0 is usually (but not necessarily) specific to a particular client device or party. The client also has a (typically non-secret) index or transaction counter C, which may be initialized to zero. An additional parameter is an index depth D. The value of D may also be non-secret, and (for example) may be client-specific or may be a system-wide global constant. The value of D determines the cycle length of the key update process.
FIG. 1 shows an exemplary sequence of client device secret state values usable to perform a series of transactions, typically (but not necessarily) using one state per transaction. (The client process used to produce the sequence will be described with respect to FIG. 2 and the corresponding server process will be described with respect to FIG. 3.) A state's secret value typically, but not necessarily, includes a secret session key; therefore, as a matter of convenience, the secret value will be denoted by K and the term “secret value” may be used somewhat interchangeably with “key.” Nevertheless, those skilled in the art will appreciate that they may be different in the general case. Also for clarity of exposition, the figure is drawn showing an exemplary key update process with D=5, meaning that five levels of key values are present. However, there is no specific limitation on D, and those skilled in the art will readily understand how the general principles underlying the exemplary embodiment can be used for other such cycle lengths. Indeed, commercially deployed systems would normally use larger values for D.
Each of the boxes in the figure represents a value of the secret value (KC). Thus, multiple dots in a box represent different states sharing the same secret value KC. The top row (row 0) of the figure contains one box, which corresponds to the initial state K0 110 as well as subsequent states K30 140 and K60 170, all of which share the same secret value KC. The next row (row 1) contains two boxes, the left of which corresponds to a trio of states (K1 111, K15, and K29) sharing the same secret value, and the right box in the second row corresponds to a second trio of states (K31, K45, and K59) sharing yet another secret value. Similarly, row 2 contains four boxes, representing a total of twelve states, of which four trios each share among themselves the same secret value. More generally, in this exemplary embodiment, row N (where N<D−1) contains 2^N boxes (or unique secret values) and 3·2^N states, and the last row (N=D−1) contains 2^N boxes and 2^N states. The thicker (curved) path diagrams the process by which the states are updated, starting from the initial state 110 and continuing through to the final state 170. As the states are updated, counter C is also updated (by one for each update).
The exemplary state update processes involve two functions (FA and FB), and their inverses (FA⁻¹ and FB⁻¹), for a total of four functions. At step 100, the client is initialized or personalized with a starting counter C=0 and a starting state having a starting secret value KC=K0. At step 110, the device performs the first transaction, using KC (or a key derived from KC). The key can be used in virtually any symmetric cryptographic transaction. (For example, such a transaction could involve, without limitation, computing or verifying a MAC (Message Authentication Code) on a message, encrypting or decrypting a message, producing a pseudorandom challenge value, deriving a key, etc. Examples of messages include, without limitation, data specifying the amounts of funds transfer operations, e-mail messages, challenge/response authentication data, parameter update authorizations, code updates, audio messages, digitized images, etc.)
After step 110, the client device's secret value KC is updated by applying the function FA and the counter C is incremented, i.e. by performing C←C+1 and KC←FA(KC). (Thus, at step 111, C=1 and KC=FA(K0).) The updated value of KC is used to perform a transaction at step 111. After step 111, C is incremented again and FA is again applied to KC, i.e. by performing C←C+1 and KC←FA(KC), yielding C=2 and the secret key used at step 112. The same pair of operations (C←C+1 and KC←FA(KC)) are similarly applied between steps 112 and 113, and between steps 113 and 114.
The transaction at step 115 should use the same value of KC as did the transaction at step 113, since steps 113 and 115 are shown in the same box. Thus, after the transaction at step 114 the update process is performed by computing C←C+1 (yielding C=5) and KC←FA⁻¹(KC). Note that K5 = FA⁻¹(K4) = FA⁻¹(FA(K3)) = K3. Thus, the value of KC used at step 115 is the same as the value used at step 113. After the transaction at step 115, KC is updated using function FB by incrementing C (yielding C=6) and computing KC←FB(KC). After the transaction at step 116, the secret value for transaction 117 is computed by applying the function FB⁻¹ to KC.
The update process operates such that after each transaction, a key state update process is performed. The key update involves incrementing C and applying one of the functions FA, FB, FA⁻¹, or FB⁻¹ to the state KC. The use of invertible functions allows a first state and a second state to share the same secret value, where the first state precedes entry into a child (lower level) box from a parent (upper level) box, and the second state is created by reentry into the parent box from the child box. Further, the multiplicity of functions (e.g., FA and FB in the exemplary embodiment) allows the creation of multiple child boxes from each parent box and, hence, a large number of allowable states before the sequence is exhausted (e.g., at end state 190). In going from one particular state to another particular state, the choice of functions (e.g., in the exemplary embodiment of FIG. 1, whether to use FA, FB, FA⁻¹, or FB⁻¹) depends on the current direction and location of the two particular states. In particular, referring again to the exemplary embodiment shown in FIG. 1, when moving downward from a parent box to the left-hand child, such as between steps 112 and 113, FA is applied by computing KC←FA(KC). Further, when moving downward from a parent box to the right-hand child, such as between steps 115 and 116, FB is applied. Still further, when moving from a left-hand child to its parent, such as between steps 114 and 115, FA⁻¹ is applied by computing KC←FA⁻¹(KC). Finally, when moving from a right-hand child to its parent, such as between steps 116 and 117, FB⁻¹ is applied. More generally, the choice of which function to apply in any particular state transition can be determined solely as a function of C, so the client need not maintain any information beyond its current state and its current counter value. This will be explained in greater detail in the section “Client-Side Indexed Key Update,” below, in the context of the exemplary embodiment of FIG. 1.
Eventually, the client may reach a point at which the entire table has been traversed. For example, the end of the process of FIG. 1 is reached at step 170, where C=60. After this transaction (or at an earlier point if the table length exceeds the maximum number of transactions allowed by the system), the client device could, and might typically, disable itself, such as by deleting its internal secrets. However, other actions may be preferable in some cases (e.g., by repeating back to step 110, entering a state in which rekeying is required, etc.). In the illustrated exemplary embodiment, the number of transactions that can be performed before the end of the process occurs is equal to 2^(D+1)−3.
(In the example with D=5, there can thus be 2^6−3=61 transactions.) By choosing a sufficiently large value for D, a system designer can make the maximum number of transactions so large that the “end” will never be reached. For example, D=39 will allow more than 1 trillion (10^12) transactions without repeating.
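The transaction-count formula can be checked directly; the following one-line sketch simply evaluates 2^(D+1)−3 for the figures quoted above:

```python
def max_transactions(depth: int) -> int:
    """Total states in the depth-D traversal of FIG. 1: 2^(D+1) - 3."""
    return 2 ** (depth + 1) - 3

assert max_transactions(5) == 61            # the D=5 example of FIG. 1
assert max_transactions(39) > 10 ** 12      # over a trillion transactions
```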
Client-Side Indexed Key Update
For the exemplary embodiment of FIG. 1, the processes of incrementing C and choosing which function to apply (FA, FB, FA⁻¹, or FB⁻¹) can be performed by the client as shown in FIG. 2. At step 210, the client device verifies that C is valid, for example by confirming that C is non-negative and that C is less than 2^(D+1)−3. (If C is invalid, then the transaction fails or other appropriate action is taken.) Since the client maintains C internally, step 210 can be omitted if the client is confident that C is valid. At step 220, the device initializes temporary depth and counter variables, N and V, with the values stored in D and C, respectively.
At step 230, the device tests whether the variable V is equal to the quantity 2^N−3. If equal, function FA⁻¹ should be applied, and processing proceeds to step 235 where the device increments C and updates KC by computing KC←FA⁻¹(KC). Otherwise, at step 240, the device tests whether the variable V is equal to the quantity 2(2^N−2). If equal, function FB⁻¹ should be applied, and processing proceeds to step 245 where the device increments C and updates KC by computing KC←FB⁻¹(KC). Otherwise, at step 250, the device tests whether the variable V is equal to zero. If equal, function FA should be applied, and processing proceeds to step 255 where the device increments C and updates KC by computing KC←FA(KC). Otherwise, at step 260, the device tests whether the variable V is equal to the quantity 2^N−2. If equal, function FB should be applied, and processing proceeds to step 265 where the device increments C and updates KC by computing KC←FB(KC).
At step 270, the device checks whether the value of V exceeds 2^N−2. If not, processing proceeds directly to step 280. If V is larger than 2^N−2, the value of V is diminished by 2^N−2 and processing proceeds to step 280. At step 280, V and N are each decremented, then processing proceeds to step 230.
After performing a state update function at step 235, step 245, step 255, or step 265, the client process terminates successfully at step 290. After the successful conclusion of the process of FIG. 2, the secret value KC is used to perform a cryptographic transaction (or derive a key used to perform the transaction, for example by hashing or encrypting KC, appending a salt or nonce, etc.).
Note that each iteration of the process of FIG. 2 corresponds to moving down one level in the drawing of FIG. 1, until the correct update operation is determined. Thus, the number of iterations of the loop cannot exceed D. Except for the key update functions (in the exemplary embodiment, FA, FB, FA⁻¹, and FB⁻¹), implementations of the function selection process need not be at all leak resistant; the function selection process of FIG. 2, its input value (i.e., C), and the choice of update functions need not be secret. Finally, as mentioned earlier and illustrated above in the case of the exemplary embodiment, the selection of which function to apply in any particular state transition can be characterized solely as a function of C, so the client need not maintain any information beyond its current state and its current counter value.
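The selection loop of FIG. 2 can be sketched as follows. This is an illustrative sketch, not the patent's reference code: the returned labels ("FA", "FA_inv", etc.) simply name the four update operations, and the caller is assumed to apply the corresponding function to KC and increment C.

```python
def select_update(C: int, D: int) -> str:
    """Return which update ('FA', 'FB', 'FA_inv', 'FB_inv') moves the
    client from state C to state C + 1 (the selection loop of FIG. 2)."""
    assert 0 <= C < 2 ** (D + 1) - 3, "counter out of range (step 210)"
    V, N = C, D
    while True:
        if V == 2 ** N - 3:
            return "FA_inv"        # return from a left child to its parent
        if V == 2 * (2 ** N - 2):
            return "FB_inv"        # return from a right child to its parent
        if V == 0:
            return "FA"            # descend into the left child
        if V == 2 ** N - 2:
            return "FB"            # descend into the right child
        if V > 2 ** N - 2:         # the state lies in the right subtree,
            V -= 2 ** N - 2        # so re-index relative to it
        V -= 1                     # drop one level (step 280)
        N -= 1
```

For D=5 this reproduces the path traced in FIG. 1: four applications of FA (states 0 through 4), then FA⁻¹ back to state 5, FB down to state 6, and FB⁻¹ back up.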
Server-Side Indexed Key Derivation
FIG. 3 shows an exemplary server-side process compatible with the exemplary client-side process of FIG. 2. Prior to commencing the process of FIG. 3, the server obtains the client's counter value C (typically by receiving C from the client device via a digital I/O interface), which is used as a key index. (In this exemplary embodiment, a transaction counter is used as a key index, but alternate embodiments can use a different value or representation of the key index.)
The server also obtains the client's base key value K0 (for example, by retrieving K0 from the server's memory, by cryptographically deriving K0 using other secret keys or secret algorithms, by obtaining K0 from a third party such as a key server, etc.). The server also knows or obtains D. At step 310, the server validates C to reject any possible invalid values of C. At step 320, the temporary variables N, V, and K are initialized with the values of D, C, and K0, respectively. At step 330, the server checks whether the value of V is equal to zero. If so, the value of K equals the client's current secret (KC), and the process concludes at step 390. Otherwise, processing continues to step 340 where the server tests whether V equals the value 2^N−2. If so, the value of K equals the client's current secret (KC), and the process concludes at step 390. Otherwise, processing continues to step 350 where the server tests whether V equals the value 2(2^N−2). If so, the value of K equals the client's current secret (KC), and the process concludes at step 390. Otherwise, at step 360, the server checks whether V is larger than 2^N−2. If not, processing continues at step 370 where V is decremented, K is updated by applying FA (i.e., K←FA(K)), and N is decremented. If the test at step 360 reveals that V is larger than 2^N−2, processing continues to step 380, where the value 2^N−1 is subtracted from V, K is updated by applying FB (i.e., K←FB(K)), and N is decremented. After either step 370 or step 380, processing continues at step 330. Processing continues until step 330, step 340, or step 350 indicates completion. When the process of FIG. 3 completes at step 390, the value contained in the variable K is equal to the value of KC at the client for counter value C. The client and server can thus use K=KC to secure a cryptographic transaction. If an error or error-causing attack occurs, K and KC will differ and the cryptographic transaction should fail.
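The server process of FIG. 3 can be sketched as follows; again an illustrative sketch, with FA and FB passed in as callables. Because each loop iteration descends one level, the key is derived in at most D applications of FA or FB, i.e. O(log n) in the number of transactions.

```python
def server_derive(C: int, D: int, K0, FA, FB):
    """Derive the client's state-C secret from the base key K0 using at
    most D applications of FA/FB (the derivation loop of FIG. 3)."""
    assert 0 <= C < 2 ** (D + 1) - 3, "invalid key index (step 310)"
    V, N, K = C, D, K0
    while True:
        # The target is reached when V indexes one of the (up to) three
        # visits to the current box: first, middle, or last (steps 330-350).
        if V == 0 or V == 2 ** N - 2 or V == 2 * (2 ** N - 2):
            return K
        if V > 2 ** N - 2:
            # Skip the root visit, the whole left subtree, and the revisit:
            # 1 + (2^N - 3) + 1 = 2^N - 1 states (step 380).
            V -= 2 ** N - 1
            K = FB(K)              # descend into the right child
        else:
            V -= 1                 # skip the root's first visit (step 370)
            K = FA(K)              # descend into the left child
        N -= 1
```

Using symbolic functions makes the derivation easy to inspect: with FA(k)="A(k)" and FB(k)="B(k)", index 31 with D=5 yields "B(K)", matching FIG. 1, where state 31 is the right child of the root.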
State Transformation Operations
The above discussion involved the exemplary cryptographic operations FA and FB, and their inverses FA⁻¹ and FB⁻¹, which will now be described in greater detail. A variety of such functions can be used, and the most appropriate form for these functions depends on the requirements and characteristics of the system.
In the exemplary functions shown in FIG. 4, the input and output of each function is 128-bits in size. For the function FA, input state 400 is divided into a left half 405 and a right half 410, which are each 64 bits. The right half is provided as the input to a DES operation 415, which encrypts its input (right half 410) using a fixed key KA1. The DES operation is only used as a nonlinear transformation that decreases or eliminates the usefulness of partial information an attacker might have about the input. Consequently, the key KA1 does not need to be secret and can be a published constant. At operation 420, the result of the DES encryption is XORed onto the left half of the input. The result of the XOR becomes both the result left half 435 and the input to a second DES operation 425. The second DES operation uses key KA2 to produce a result which, at operation 430, is XORed with the input right half 410. The XOR result becomes the result right half 440. The result left half 435 and result right half 440 are combined to produce the final result 445.
The structure of the function FB can be essentially identical, except that different keys are used. In particular, the first DES operation 455 encrypts the right half of input 450 using key KB1, and DES operation 460 encrypts the XOR of the left half and the first DES result using key KB2. As with FA, the result left half 465 and right half 468 are combined to produce the final result 470.
The function FA⁻¹ (the inverse of FA) is computed using the same operations as FA but in the opposite order. The input 475 is divided into a left half 476 and right half 477. At DES operation 478, the left half 476 is encrypted using the DES key KA2, and the result is XORed with the right half 477. The XOR result becomes the result right half 481 and is used as the input to DES operation 479 which encrypts using the key KA1. The result of the second DES operation 479 is XORed with the input left half 476 to produce the result left half 480. Finally, the result left half 480 and right half 481 are combined to produce the final result 482. The function FB⁻¹ is similar to FA⁻¹ except that the input 485 is transformed into output 490 using keys KB2 and KB1 instead of KA2 and KA1.
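The two-round structure of FIG. 4 can be sketched as follows. This is a minimal sketch, not a faithful reimplementation: truncated SHA-256 stands in for the DES operations (the patent only requires a public-keyed nonlinear mixing step), and the key strings are placeholders. Supplying the KA pair gives FA, the KB pair gives FB, and `inverse` applies the rounds in the opposite order.

```python
import hashlib

HALF = 8  # bytes per half: 64-bit halves, 128-bit state as in FIG. 4

def _mix(key: bytes, data: bytes) -> bytes:
    """Nonlinear mixing step with a public fixed key. The patent uses DES;
    truncated SHA-256 is substituted here purely for illustration."""
    return hashlib.sha256(key + data).digest()[:HALF]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def forward(state: bytes, k1: bytes, k2: bytes) -> bytes:
    """FA or FB of FIG. 4, depending on the (public) key pair supplied."""
    left, right = state[:HALF], state[HALF:]
    new_left = _xor(left, _mix(k1, right))        # first round
    new_right = _xor(right, _mix(k2, new_left))   # second round
    return new_left + new_right

def inverse(state: bytes, k1: bytes, k2: bytes) -> bytes:
    """FA^-1 or FB^-1: the same rounds undone in the opposite order."""
    left, right = state[:HALF], state[HALF:]
    old_right = _xor(right, _mix(k2, left))
    old_left = _xor(left, _mix(k1, old_right))
    return old_left + old_right
```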
The primary objective of the functions FA, FB, FA⁻¹, and FB⁻¹ is to destroy the usefulness of partial information about the input that might have been obtained by an attacker. For example, the DES operations used in the exemplary function FA shown in FIG. 4 make the function extremely nonlinear. An attacker with statistical information about the value of each of the 128 input bits (such as a guess of the bit's value that is correct with probability slightly greater than 0.5) will have statistical information about the input to the first DES operation 415. However, the DES output will be effectively randomized—even though attackers might know the DES key KA1. The two DES operations in each update process “mix” the entire input state. Thus, partial statistical information about individual DES input bits does not provide useful statistical information about the DES output bits, provided that attackers never gain enough information to be able to guess the transformation operation's entire input.
FIG. 4 shows just one exemplary set of functions for FA and FB; many other variant or alternate designs can be used. For example, functions produced using additional rounds can be used (for example, a 3-round Luby-Rackoff block cipher). More generally, encryption and decryption using any block cipher can be used for the functions and their inverses. The basic functions used to construct the update function only need to prevent partial information leaked about the input from providing useful information about the output, so the functions do not necessarily need to be cryptographically hard to invert. For example, reduced-round variants of DES can be used. Further, although FA and FB in FIG. 4 have similar structure, this is not necessary. FA and FB can also be selected or modified depending on the state position (for example by using different functions or modified functions for each of the D levels).
Other types of functions can be used for FA and FB. For example, if the input state is an odd value between 0 and 2^B, FA and FB could be implemented using multiplication modulo 2^B with odd constants, and the inverse functions could be implemented using multiplication with the constants' inverses, also mod 2^B. (Of course, other operations, such as multiplication modulo a prime, can also be used.) The foregoing are provided as examples only; one of ordinary skill in the art will appreciate that a wide variety of other functions exist that can be used to implement functions FA, FB, FA⁻¹, and FB⁻¹.
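The multiplicative variant can be sketched as follows, with B=128. The constants are hypothetical odd values chosen for illustration (any odd constants work); because odd constants are invertible modulo a power of two, each inverse update is simply multiplication by the modular inverse (Python 3.8+'s three-argument `pow` computes it).

```python
B = 128                 # state size in bits
M = 1 << B              # modulus 2^B

# Hypothetical odd constants -- illustrative only, not from the patent.
CA = 0x9E3779B97F4A7C15F39CC0605CEDC835
CB = 0xC2B2AE3D27D4EB4F165667B19E3779B9

def FA(k: int) -> int: return (k * CA) % M
def FB(k: int) -> int: return (k * CB) % M

# Odd constants are units mod 2^B, so the inverses exist.
CA_INV = pow(CA, -1, M)
CB_INV = pow(CB, -1, M)

def FA_inv(k: int) -> int: return (k * CA_INV) % M
def FB_inv(k: int) -> int: return (k * CB_INV) % M
```

Note that the product of odd values is odd, so these updates also preserve the "odd state" invariant required above.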
For additional leak resistance, larger states can be used, for example a 256-bit state can be implemented by using four 64-bit blocks and using four (or more) DES operations to update the state, or by using two (or more) applications of a 128-bit hash function.
In alternate embodiments of the invention, other key update processes can be used. For example, by using more than two update functions (and their inverses), each parent state can have more than two child states. In fact, parents can have any number of child states, although as the number of child states increases, the number of cryptographic operations involving the parent state value, and the number of states sharing the same secret key, also increase, potentially increasing attackers' opportunity to attack the system.
The type of state updating process illustratively described with respect to FIG. 1 is advantageous because it uses very little memory and very little processing overhead, while keeping small the maximum number of transactions that use the same secret value. (The more often such secret values are used, the greater the likelihood of a successful external monitoring attack.) Therefore, in an alternate embodiment, transactions are performed using only the states at the lowest level of the diagram (which are produced only once), so that secret values are not reused. This reduces the opportunity for information to leak, but increases the processing overhead per transaction to an average of about four updates. (Also, the amount of processing per transaction is not constant, since the number of update operations ranges from 2 to 2D−2. However, this is often not a problem, since few applications will ever need values of D larger than about 40 and many devices can perform thousands of cryptographic operations per second.)
In yet another alternate embodiment, the client can cache a value at each vertical level or row. By caching higher-up values, it is not necessary to perform inverse operations, but slightly more memory is required. In such an embodiment, an average of two applications of FA or FB (which, in such an embodiment, do not need to have easy inverse functions) are required per operation if only bottom-level (single-use) states are used for transactions. A diagram of the state update processes for such an implementation would resemble a hash tree. For implementations requiring constant-time or more predictable performance, the additional processing time available during operations requiring only a single application of FA or FB can be used to precompute values that will be needed in the future, and thereby limit the execution time to two FA or FB operations per transaction.
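The forward-only, hash-tree variant can be sketched as follows. This is an illustrative sketch: SHA-256 with distinct prefixes stands in for the two forward-only update functions, the per-row cache is omitted for brevity (deriving each leaf from the root costs D applications; caching the most recent value per row brings the average down to about two), and only bottom-row leaves are used as single-use keys.

```python
import hashlib

def FA(k: bytes) -> bytes:
    """Forward-only left update (no inverse needed in this variant)."""
    return hashlib.sha256(b"A" + k).digest()

def FB(k: bytes) -> bytes:
    """Forward-only right update."""
    return hashlib.sha256(b"B" + k).digest()

def leaf_key(index: int, depth: int, k0: bytes) -> bytes:
    """Derive the bottom-row (single-use) key for the given leaf index by
    walking down from the root: bit 0 -> FA (left), bit 1 -> FB (right)."""
    k = k0
    for level in range(depth - 1, -1, -1):
        k = FB(k) if (index >> level) & 1 else FA(k)
    return k
```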
In still other embodiments, the key index used by the server can be a value other than a transaction counter, since all the server requires is information sufficient to derive the current transaction key from the root key.
In some applications, C can be incremented periodically (e.g., if C is driven by a timer) or by some event other than transactions being performed. In such embodiments, if the client (or server) fails to correctly update C and derive the corresponding updated key, the transaction will fail. If the first value of C that is tried by the client (or server) fails, other likely session key values (such as those with close values of C) can be tried. (Of course, if the client and server versions of C diverge too far, the transaction will not proceed.) While the key index (e.g., C) is normally exchanged explicitly, in cases such as this the server might be able to guess or obtain C indirectly.
If both the client and server need to be secured against external monitoring attacks, the transaction can be performed using the larger of the two parties' transaction counters C. In particular, the client and server can exchange counter values, and (if the counters are not equal) each device can set its counter value to equal the larger of its value and the received value. The device with the lower value updates its secret to derive the appropriate transaction key. This update can be implemented by applying a combination of the usual update functions and their inverses. (For example, referring to the technique exemplified in FIG. 1, a client at state 117 could skip to state 136 by applying FA−1 twice, then applying FB three times. In general, the total number of update functions required should be less than 2D−1. This "fast-forward" capability maintains the property that no state is used or derived more than a fixed number of times, here three.) In devices implementing this capability, care should be taken to ensure that the system will not fail if a large, incorrect value of C is encountered. (For example, devices can reject excessively large jumps in C or can require additional cryptographic authentication, for example of the most significant bits of C.) Such a protocol can be used to agree on a transaction counter for embodiments involving more than two parties in cryptographic transactions.
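The climb-then-descend "fast-forward" can be sketched as follows (a Python toy: affine maps modulo a prime stand in for invertible cryptographic update functions, and tree positions are bit lists, with 0 selecting FA and 1 selecting FB; all of this is illustrative):

```python
P = 2**127 - 1  # toy modulus; real FA/FB would be cryptographic permutations

def fa(x): return (3 * x + 1) % P
def fa_inv(y): return ((y - 1) * pow(3, -1, P)) % P
def fb(x): return (5 * x + 7) % P
def fb_inv(y): return ((y - 7) * pow(5, -1, P)) % P

def fast_forward(state: int, cur_path, new_path) -> int:
    """Move a secret state between tree positions: apply inverse updates to
    climb to the deepest common ancestor, then ordinary updates to descend."""
    k = 0  # length of the common prefix of the two paths
    while (k < min(len(cur_path), len(new_path))
           and cur_path[k] == new_path[k]):
        k += 1
    for bit in reversed(cur_path[k:]):   # climb using FA inverse / FB inverse
        state = fb_inv(state) if bit else fa_inv(state)
    for bit in new_path[k:]:             # descend using FA / FB
        state = fb(state) if bit else fa(state)
    return state
```

The work is bounded by the sum of the two path suffixes, i.e., roughly twice the tree depth in the worst case, which is why a small D keeps fast-forwarding cheap.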
Finally, the transaction key can be the value produced by the transformation function itself, or a value derived from the transformation result. For example, the transformation result can be encrypted or hashed to produce the session key. A hashing step helps limit the number of operations performed with any given key and thus the amount of information about the key that can leak to attackers. Alternatively or additionally, additional hashing operations can be performed periodically during the use of the session key, or fresh session keys can be required periodically.
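One way to realize this separation is sketched below (hypothetical; SHA-256 and the domain-separation labels are illustrative choices, not mandated by the text):

```python
import hashlib

def session_key(transform_result: bytes) -> bytes:
    # Hashing decouples the key actually used from the internal secret state,
    # so operations with the session key reveal nothing directly about it.
    return hashlib.sha256(b"session" + transform_result).digest()

def refresh_key(key: bytes) -> bytes:
    # Periodic one-way refresh: even partial leakage of the current key
    # does not help an attacker recover earlier session keys.
    return hashlib.sha256(b"refresh" + key).digest()
```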
To observe the largest possible number of transactions with a given secret key, an attacker might try to reset a target device before the device's memory can be updated with the new value of KC (e.g., during or immediately after the computation of FA or FB). However, such a reset does not necessarily mean an attack is in progress, since resets can occur during the normal operation of many systems. (For example, power can be lost if a smartcard is removed during a transaction.) Therefore, in a preferred embodiment, a failure counter stored in nonvolatile memory is updated prior to each update process. Before the update begins, the counter is tested to determine whether the number of sequential failures exceeds a maximum value and, if not, the transaction proceeds normally. Once the new value of KC has been computed and safely written to memory and C has been incremented, the failure counter is reset. The probability that the counter threshold will be exceeded during normal operation of the device (i.e., when no attack is in progress) will be small, particularly if the update process is rapid.
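The failure-counter discipline can be sketched as follows (Python, with a dict standing in for nonvolatile memory; the threshold value is illustrative):

```python
class FailureCounterGuard:
    """Wraps the key update with a nonvolatile failure counter, so that
    repeated resets during the update window eventually lock the device."""

    MAX_SEQUENTIAL_FAILURES = 8  # illustrative threshold

    def __init__(self, nvram: dict):
        self.nvram = nvram  # stand-in for nonvolatile memory
        self.nvram.setdefault("failures", 0)

    def guarded_update(self, update_fn):
        # Test the counter before the update begins.
        if self.nvram["failures"] >= self.MAX_SEQUENTIAL_FAILURES:
            raise RuntimeError("device locked: possible reset attack")
        # Increment BEFORE the update: a reset mid-update leaves the
        # incremented count behind as evidence of the interruption.
        self.nvram["failures"] += 1
        update_fn()  # compute the new K_C, write it, and increment C
        self.nvram["failures"] = 0  # reached only if the update completed
```

An attacker who repeatedly resets the device during the update window drives the counter to its threshold, after which the device refuses further transactions; normal completed transactions keep resetting it to zero.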
The exemplary key update process described with regard to FIGS. 1, 2, and 3 assures that no secret key value is ever used in more than a relatively small number of (here, three) transactions. Attackers thus have the opportunity to collect information about the secret state during the three transactions themselves, the three key update processes that produce the transaction keys, and the three update processes that transform the transaction keys after the transactions. Implementers must make sure that the total amount of information about the secrets that leaks to attackers during these processes is not enough to compromise the secret state. When characterizing a design, it is often useful to determine or estimate the maximum amount of information that can leak from each transaction without compromising security.
Cryptographic operations should normally be checked to ensure that incorrect computations do not compromise keys or enable other attacks. Cryptographic implementations of the present invention can be, and in a preferred embodiment of the invention are, combined with error-detection and/or error-correction logic to ensure that cryptographic operations are performed correctly. For example, a simple and effective technique is to perform cryptographic operations twice, ideally using two independent hardware processors and implementations, with a comparator to verify that both produce identical results. If the results produced by the two units do not match, the comparator will prevent either result from being used. In situations where security is more important than reliability, the comparator can make the device self-destruct if serious errors occur. For example, the comparator can cause a self-destruct if two defective DES operations occur sequentially or if five defective DES operations occur during the lifetime of the device. In some cryptosystems, redundancy is not necessary. For example, with RSA, self-checking functions can be incorporated into the cryptosystem implementation itself or verification can be performed after the operations.
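A software model of such a comparator might look like the following (hypothetical sketch; the thresholds follow the example in the text, and "self-destruct" is modeled as permanently disabling the device):

```python
class Comparator:
    """Releases a result only when two independent implementations agree;
    tracks defective operations and disables the device past the example
    thresholds (two sequential or five lifetime mismatches)."""

    def __init__(self):
        self.sequential = 0
        self.lifetime = 0
        self.destroyed = False

    def run(self, impl_a, impl_b, data):
        if self.destroyed:
            raise RuntimeError("device disabled")
        ra, rb = impl_a(data), impl_b(data)
        if ra != rb:
            self.sequential += 1
            self.lifetime += 1
            if self.sequential >= 2 or self.lifetime >= 5:
                self.destroyed = True  # stand-in for self-destruct
            return None  # neither result is released on a mismatch
        self.sequential = 0
        return ra
```

In hardware the two implementations would ideally be independent processors, so that a fault or induced error in one is unlikely to produce the same wrong answer in the other.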
Self-diagnostic functions such as a POST (power-on-self-test) should also be incorporated to verify that cryptographic functions have not been damaged. In some smartcards and other devices, the ATR (answer-to-reset) must be provided before a comprehensive self-test can be completed. In such cases, the self-test can be deferred until the first transaction or until a sufficient idle period. For example, a flag indicating successful POST completion can be set upon initialization. While the card is waiting for a command from the host system, it can attempt the POST. Any I/O received during the POST will cause an interrupt, which will cancel the POST (leaving the POST-completed flag at zero). If any cryptographic function is called, the device will check the POST flag and (if it is not set) perform the POST first.
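The deferred-POST pattern can be modeled as follows (illustrative Python; a real implementation would hook I/O interrupts to abort an in-progress self-test, leaving the flag clear):

```python
class CryptoDevice:
    """Answers reset immediately and defers the power-on self-test (POST),
    but never performs a cryptographic operation before the POST passes."""

    def __init__(self):
        self.post_done = False  # flag set only on successful POST completion

    def answer_to_reset(self) -> bytes:
        return b"ATR"  # must respond before the self-test has run

    def run_post(self):
        # Self-test body; an I/O interrupt arriving here would abort it,
        # leaving post_done False so the test is retried later.
        self.post_done = True

    def on_idle(self):
        if not self.post_done:
            self.run_post()  # use idle time waiting for a host command

    def crypto_op(self, fn, *args):
        if not self.post_done:
            self.run_post()  # complete the POST before the first operation
        return fn(*args)
```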
The invention therefore encompasses a family of related techniques that enable the construction of devices significantly more resistant to attack than devices of similar cost and complexity that do not use the invention. In addition, multiple security techniques might be required to make a system secure; leak resistance can be used in conjunction with other security methods or countermeasures.
As those skilled in the art will appreciate, the techniques described above are not limited to particular host environments or form factors. Rather, they can be used in a wide variety of applications, including without limitation: cryptographic smartcards of all kinds including without limitation smartcards substantially compliant with ISO 7816-1, ISO 7816-2, and ISO 7816-3 ("ISO 7816-compliant smartcards"); contactless and proximity-based smartcards and cryptographic tokens; stored value cards and systems; cryptographically secured credit and debit cards; customer loyalty cards and systems; cryptographically authenticated credit cards; cryptographic accelerators; gambling and wagering systems; secure cryptographic chips; tamper-resistant microprocessors; software programs (including without limitation programs for use on personal computers, servers, etc. and programs that can be loaded onto or embedded within cryptographic devices); key management devices; banking key management systems; secure web servers; electronic payment systems; micropayment systems and meters; prepaid telephone cards; cryptographic identification cards and other identity verification systems; systems for electronic funds transfer; automatic teller machines; point of sale terminals; certificate issuance systems; electronic badges; door entry systems; physical locks of all kinds using cryptographic keys; systems for decrypting television signals (including without limitation, broadcast television, satellite television, and cable television); systems for decrypting enciphered music and other audio content (including music distributed over computer networks); systems for protecting video signals of all kinds; intellectual property protection and copy protection systems (such as those used to prevent unauthorized copying or use of movies, audio content, computer programs, video games, images, text, databases, etc.); cellular telephone scrambling and authentication systems (including telephone authentication smartcards); secure telephones (including key storage devices for such telephones); cryptographic PCMCIA cards; portable cryptographic tokens; and cryptographic data auditing systems.
All of the foregoing illustrates exemplary embodiments and applications of the invention, from which related variations, enhancements and modifications will be apparent without departing from the spirit and scope of the invention. Therefore, the invention should not be limited to the foregoing disclosure, but rather construed by the claims appended hereto.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4203166||Dec 5, 1977||May 13, 1980||International Business Machines Corporation||Cryptographic file security for multiple domain networks|
|US4214126 *||Apr 30, 1945||Jul 22, 1980||Rca Corporation||Cadence suppression system|
|US4243890 *||Aug 23, 1976||Jan 6, 1981||Miller Bruce J||Isolator/switching assembly for data processing terminal|
|US4908038 *||Oct 27, 1988||Mar 13, 1990||Toppan Printing Co., Ltd||High-security integrated-circuit card|
|US5136646||Mar 8, 1991||Aug 4, 1992||Bell Communications Research, Inc.||Digital document time-stamping with catenate certificate|
|US5241598||May 22, 1991||Aug 31, 1993||Ericsson Ge Mobile Communications, Inc.||Rolling key resynchronization in cellular verification and validation system|
|US5297201 *||Oct 13, 1992||Mar 22, 1994||J.D. Technologies, Inc.||System for preventing remote detection of computer data from tempest signal emissions|
|US5341423 *||Feb 6, 1987||Aug 23, 1994||General Electric Company||Masked data transmission system|
|US5369706||Nov 5, 1993||Nov 29, 1994||United Technologies Automotive, Inc.||Resynchronizing transmitters to receivers for secure vehicle entry using cryptography or rolling code|
|US5401950||Feb 18, 1992||Mar 28, 1995||Omron Tateisi Electronics Co.||IC card having improved security checking function|
|US5412379||May 18, 1992||May 2, 1995||Lectron Products, Inc.||Rolling code for a keyless entry system|
|US5420925||Mar 3, 1994||May 30, 1995||Lectron Products, Inc.||Rolling code encryption process for remote keyless entry system|
|US5544086||Sep 30, 1994||Aug 6, 1996||Electronic Payment Services, Inc.||Information consolidation within a transaction network|
|US5552776 *||Jun 24, 1994||Sep 3, 1996||Z-Microsystems||Enhanced security system for computing devices|
|US5559887||Sep 30, 1994||Sep 24, 1996||Electronic Payment Service||Collection of value from stored value systems|
|US5600324||Feb 29, 1996||Feb 4, 1997||Rockwell International Corporation||Keyless entry system using a rolling code|
|US5633930||Sep 30, 1994||May 27, 1997||Electronic Payment Services, Inc.||Common cryptographic key verification in a transaction network|
|US5733047||Dec 19, 1995||Mar 31, 1998||Nippon Soken, Inc.||Enciphering system applicable to various keyless entry systems|
|US5761306||Feb 22, 1996||Jun 2, 1998||Visa International Service Association||Key replacement in a public key cryptosystem|
|US5991415 *||May 12, 1997||Nov 23, 1999||Yeda Research And Development Co. Ltd. At The Weizmann Institute Of Science||Method and apparatus for protecting public key schemes from timing and fault attacks|
|US5995629 *||Aug 15, 1997||Nov 30, 1999||Siemens Aktiengesellschaft||Encoding device|
|EP0529261A2||Jul 10, 1992||Mar 3, 1993||International Business Machines Corporation||A hybrid public key algorithm/data encryption algorithm key distribution method based on control vectors|
|EP0582395A2||Jul 14, 1993||Feb 9, 1994||Digital Equipment Corporation||Computer network with modified host-to-host encryption keys|
|1||"Data Encryption Standard," Federal Information Processing Standards Publication (FIPS PUB) 46-2, U.S. Department of Commerce, National Institute of Standards and Technology, Dec. 30, 1993.|
|2||"Security Requirements for Cryptographic Modules," Federal Information Processing Standards Publication (FIPS PUB) 140-1, U.S. Department of Commerce, National Institute of Standards and Technology, Jan. 1994.|
|3||American National Standard for Financial Services, Secretariat—American Bankers Association (ANS/ABA X9.24-1997), "Financial Services Key Management," approved Apr. 6, 1992, American National Standards Institute.|
|4||Biham, E. et al., "Differential Fault Analysis of Secret Key Cryptosystems," in: Kaliski, B., Advances in Cryptology—CRYPTO '97 (Berlin, Springer, 1997), 17th Annual International Cryptology Conference, Aug. 17-21, 1997, pp. 513-525.|
|5||Bruce Schneier, Applied Cryptography, Second Edition: Protocols, Algorithms, and Source Code in C, p. 53.|
|6||*||Burt Kaliski, "Timing Attacks on Cryptosystems," RSA Laboratories Bulletin, No. 2, Jan. 23, 1996.*|
|7||Doug Conner (Technical Editor), "Cryptographic Techniques—Secure Your Wireless Designs," Jan. 18, 1996, pp. 57-68.|
|8||Friedrich L. Bauer, "Cryptology—Methods and Maxims," Technical University Munich, 1998, pp. 31-48.|
|9||G. Hornauer et al., Eurocrypt '91, 1991, pp. 453-460.|
|10||*||Hachez et al., "Timing Attack: What Can Be Achieved by a Powerful Adversary?," 1999.*|
|11||Koblitz, A Course in Number Theory and Cryptography, 2nd ed., 1994, Chapter III.|
|12||Krawczyk, H. et al., "HMAC: Keyed-Hashing for Message Authentication," Network Working Group Request for Comments RFC 2104, Feb. 1997.|
|13||Lai et al., Eurocrypt '91, 1991, pp. 17-38.|
|14||*||Paul Kocher, "Cryptanalysis of Diffie-Hellman, RSA, DSS, and Other Systems Using Timing Attacks," report, Dec. 7, 1995.*|
|15||*||Paul Kocher, "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems," CRYPTO '96, pp. 104-113, Aug. 1996.*|
|16||Robert R. Jueneman, "Analysis of Certain Aspects of Output Feedback Mode," Satellite Business Systems, 1998, pp. 99-127.|
|17||RSA Data Security, RSAREF Cryptographic Toolkit Source Code, File R-RANDOM.C, available from ftp://ftp.rsa.com.|
|18||Ryan, J., "Blinds for Thermodynamic Cipher Attacks," unpublished material on the World Wide Web at http://www.cybertrace.com/thrmatak.html, Mar. 1996.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7020281 *||Jan 18, 2001||Mar 28, 2006||Certicom Corp.||Timing attack resistant cryptographic system|
|US7363494 *||Dec 4, 2001||Apr 22, 2008||Rsa Security Inc.||Method and apparatus for performing enhanced time-based authentication|
|US7406502||Jul 9, 2003||Jul 29, 2008||Sonicwall, Inc.||Method and system for classifying a message based on canonical equivalent of acceptable items included in the message|
|US7477741||Oct 1, 2004||Jan 13, 2009||The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration||Analysis resistant cipher method and apparatus|
|US7539726||Apr 23, 2003||May 26, 2009||Sonicwall, Inc.||Message testing|
|US7554865||Sep 21, 2006||Jun 30, 2009||Atmel Corporation||Randomizing current consumption in memory devices|
|US7562122||Oct 29, 2007||Jul 14, 2009||Sonicwall, Inc.||Message classification using allowed items|
|US7599488||Oct 29, 2007||Oct 6, 2009||Cryptography Research, Inc.||Differential power analysis|
|US7613907||Nov 9, 2006||Nov 3, 2009||Atmel Corporation||Embedded software camouflage against code reverse engineering|
|US7668310||Aug 15, 2001||Feb 23, 2010||Cryptography Research, Inc.||Cryptographic computation using masking to prevent differential power analysis and other attacks|
|US7730518||Jul 31, 2003||Jun 1, 2010||Emc Corporation||Method and apparatus for graph-based partition of cryptographic functionality|
|US7787620||Oct 18, 2005||Aug 31, 2010||Cryptography Research, Inc.||Prevention of side channel attacks against block cipher implementations and other cryptographic systems|
|US7792287||Oct 30, 2007||Sep 7, 2010||Cryptography Research, Inc.||Leak-resistant cryptographic payment smartcard|
|US7826610 *||Jul 7, 2003||Nov 2, 2010||Gemalto Sa||Method to secure an electronic assembly against attacks by error introduction|
|US7882189||Oct 29, 2007||Feb 1, 2011||Sonicwall, Inc.||Using distinguishing properties to classify messages|
|US7908330||Oct 29, 2007||Mar 15, 2011||Sonicwall, Inc.||Message auditing|
|US7921204||Oct 29, 2007||Apr 5, 2011||Sonicwall, Inc.||Message testing based on a determinate message classification and minimized resource consumption|
|US7941666 *||Mar 24, 2003||May 10, 2011||Cryptography Research, Inc.||Payment smart cards with hierarchical session key derivation providing security against differential power analysis and other attacks|
|US7984301||Nov 9, 2006||Jul 19, 2011||Inside Contactless S.A.||Bi-processor architecture for secure systems|
|US8027926||Sep 22, 2009||Sep 27, 2011||Stamps.Com||Secure and recoverable database for on-line value-bearing item system|
|US8027927 *||Oct 27, 2009||Sep 27, 2011||Stamps.Com||Cryptographic module for secure processing of value-bearing items|
|US8031540||Jun 25, 2009||Oct 4, 2011||Atmel Corporation||Randomizing current consumption in memory devices|
|US8041032||Aug 19, 2005||Oct 18, 2011||Cardiac Pacemakers, Inc.||Symmetric key encryption system with synchronously updating expanded key|
|US8041644||May 18, 2010||Oct 18, 2011||Stamps.Com||Cryptographic module for secure processing of value-bearing items|
|US8054967 *||Apr 15, 2005||Nov 8, 2011||Panasonic Corporation||Computer system and computer program executing encryption or decryption|
|US8059818 *||Feb 11, 2005||Nov 15, 2011||Nokia Corporation||Accessing protected data on network storage from multiple devices|
|US8108477||Jul 13, 2009||Jan 31, 2012||Sonicwall, Inc.||Message classification using legitimate contact points|
|US8112486||Sep 20, 2007||Feb 7, 2012||Sonicwall, Inc.||Signature generation using message summaries|
|US8266215||Feb 20, 2003||Sep 11, 2012||Sonicwall, Inc.||Using distinguishing properties to classify messages|
|US8266446||Oct 17, 2008||Sep 11, 2012||Sandisk Il Ltd.||Software protection against fault attacks|
|US8271603||Jun 16, 2006||Sep 18, 2012||Sonicwall, Inc.||Diminishing false positive classifications of unsolicited electronic-mail|
|US8281129 *||Jan 18, 2006||Oct 2, 2012||Nader Asghari-Kamrani||Direct authentication system and method via trusted authenticators|
|US8296382||Apr 5, 2011||Oct 23, 2012||Sonicwall, Inc.||Efficient use of resources in message classification|
|US8301572||Aug 24, 2011||Oct 30, 2012||Stamps.Com||Cryptographic module for secure processing of value-bearing items|
|US8301890||Aug 10, 2006||Oct 30, 2012||Inside Secure||Software execution randomization|
|US8334705||Oct 27, 2011||Dec 18, 2012||Certicom Corp.||Analog circuitry to conceal activity of logic circuitry|
|US8341356 *||May 3, 2011||Dec 25, 2012||Intel Corporation||Protected cache architecture and secure programming paradigm to protect applications|
|US8369524 *||Oct 30, 2003||Feb 5, 2013||Thomson Licensing||Simplified method for renewing symmetrical keys in a digital network|
|US8379850||Oct 8, 2010||Feb 19, 2013||Xilinx, Inc.||Method and integrated circuit for secure encryption and decryption|
|US8386800||Dec 2, 2010||Feb 26, 2013||Cryptography Research, Inc.||Verifiable, leak-resistant encryption and decryption|
|US8396926 *||Mar 11, 2003||Mar 12, 2013||Sonicwall, Inc.||Message challenge response|
|US8463861||Jan 30, 2012||Jun 11, 2013||Sonicwall, Inc.||Message classification using legitimate contact points|
|US8484301||Jan 27, 2011||Jul 9, 2013||Sonicwall, Inc.||Using distinguishing properties to classify messages|
|US8498943||Aug 25, 2011||Jul 30, 2013||Stamps.Com||Secure and recoverable database for on-line value-bearing item system|
|US8539254||Jun 1, 2010||Sep 17, 2013||Xilinx, Inc.||Method and integrated circuit for protecting against differential power analysis attacks|
|US8583944||Aug 4, 2010||Nov 12, 2013||Xilinx, Inc.||Method and integrated circuit for secure encryption and decryption|
|US8615085||Sep 20, 2010||Dec 24, 2013||Zamtec Ltd||Encrypted communication system with limited number of stored encryption key retrievals|
|US8635452||Aug 12, 2009||Jan 21, 2014||Nxp B.V.||Method for generating a cipher-based message authentication code|
|US8635455||Sep 20, 2010||Jan 21, 2014||Zamtec Ltd||Encrypted communication device with restricted rate of encryption key retrievals from memory|
|US8635467||Oct 27, 2011||Jan 21, 2014||Certicom Corp.||Integrated circuit with logic circuitry and multiple concealing circuits|
|US8650408||Sep 8, 2010||Feb 11, 2014||Xilinx, Inc.||Protecting against differential power analysis attacks on decryption keys|
|US8688794||Jan 30, 2012||Apr 1, 2014||Sonicwall, Inc.||Signature generation using message summaries|
|US8707052||Feb 8, 2013||Apr 22, 2014||Cryptography Research, Inc.||Cryptographic device with resistance to differential power analysis and other external monitoring attacks|
|US8732256||Mar 6, 2013||May 20, 2014||Sonicwall, Inc.||Message challenge response|
|US8832462||Sep 8, 2010||Sep 9, 2014||Xilinx, Inc.||Protecting against differential power analysis attacks on sensitive data|
|US8879724||Dec 14, 2009||Nov 4, 2014||Rambus Inc.||Differential power analysis—resistant cryptographic processing|
|US8909941||Mar 31, 2011||Dec 9, 2014||Xilinx, Inc.||Programmable integrated circuit and a method of enabling the detection of tampering with data provided to a programmable integrated circuit|
|US8924484||Jul 16, 2002||Dec 30, 2014||Sonicwall, Inc.||Active e-mail filter with challenge-response|
|US8935348||Jun 8, 2013||Jan 13, 2015||Sonicwall, Inc.||Message classification using legitimate contact points|
|US8966253||Jun 1, 2010||Feb 24, 2015||Xilinx, Inc.||Method and apparatus for authenticating a programmable device bitstream|
|US8977864||Mar 7, 2014||Mar 10, 2015||Cryptography Research, Inc.||Programmable logic device with resistance to external monitoring attacks|
|US8990312||Oct 29, 2007||Mar 24, 2015||Sonicwall, Inc.||Active e-mail filter with challenge-response|
|US9009495||Jun 28, 2013||Apr 14, 2015||Envieta, LLC||High speed cryptographic combining system, and method for programmable logic devices|
|US9021039 *||Mar 26, 2014||Apr 28, 2015||Sonicwall, Inc.||Message challenge response|
|US9106405 *||Jun 25, 2012||Aug 11, 2015||Amazon Technologies, Inc.||Multi-user secret decay|
|US9189516||Jun 6, 2013||Nov 17, 2015||Dell Software Inc.||Using distinguishing properties to classify messages|
|US9215198||Oct 23, 2012||Dec 15, 2015||Dell Software Inc.||Efficient use of resources in message classification|
|US9275241 *||Dec 21, 2011||Mar 1, 2016||Giesecke & Devrient Gmbh||Cryptographic method|
|US9313158 *||Apr 27, 2015||Apr 12, 2016||Dell Software Inc.||Message challenge response|
|US9325649||Jan 10, 2014||Apr 26, 2016||Dell Software Inc.||Signature generation using message summaries|
|US9367693||Jun 26, 2015||Jun 14, 2016||Cryptography Research, Inc.||Bitstream confirmation for configuration of a programmable logic device|
|US9419790||Nov 3, 2014||Aug 16, 2016||Cryptography Research, Inc.||Differential power analysis—resistant cryptographic processing|
|US9420456 *||May 3, 2013||Aug 16, 2016||Telefonaktiebolaget L M Ericsson (Publ)||Centralized key management in eMBMS|
|US9497021||Aug 27, 2010||Nov 15, 2016||Nxp B.V.||Device for generating a message authentication code for authenticating a message|
|US9503255||Oct 17, 2013||Nov 22, 2016||Synopsys, Inc.||Cryptographic sequencing system and method|
|US9503406||Mar 3, 2015||Nov 22, 2016||Dell Software Inc.||Active e-mail filter with challenge-response|
|US9524334||Nov 11, 2015||Dec 20, 2016||Dell Software Inc.||Using distinguishing properties to classify messages|
|US9569623||Feb 9, 2015||Feb 14, 2017||Cryptography Research, Inc.||Secure boot with resistance to differential power analysis and other external monitoring attacks|
|US9576133||Jun 11, 2015||Feb 21, 2017||Cryptography Research, Inc.||Detection of data tampering of encrypted data|
|US9641641 *||Apr 21, 2014||May 2, 2017||Google Inc.||Temporal adjustment of identifiers|
|US9674126||Dec 14, 2015||Jun 6, 2017||Sonicwall Inc.||Efficient use of resources in message classification|
|US9703938||Oct 2, 2012||Jul 11, 2017||Nader Asghari-Kamrani||Direct authentication system and method via trusted authenticators|
|US9710675 *||Mar 26, 2015||Jul 18, 2017||Intel Corporation||Providing enhanced replay protection for a memory|
|US9727864||Sep 7, 2012||Aug 8, 2017||Nader Asghari-Kamrani||Centralized identification and authentication system and method|
|US9792229||Mar 27, 2015||Oct 17, 2017||Intel Corporation||Protecting a memory|
|US9852572 *||Sep 26, 2011||Dec 26, 2017||Cryptography Research, Inc.||Cryptographic token with leak-resistant key derivation|
|US20010033655 *||Jan 18, 2001||Oct 25, 2001||Ashok Vadekar||Timing attack resistant cryptographic system|
|US20010053220 *||Aug 15, 2001||Dec 20, 2001||Cryptography Research, Inc.||Cryptographic computation using masking to prevent differential power analysis and other attacks|
|US20020124178 *||Dec 3, 2001||Sep 5, 2002||Kocher Paul C.||Differential power analysis method and apparatus|
|US20030028771 *||Apr 29, 2002||Feb 6, 2003||Cryptography Research, Inc.||Leak-resistant cryptographic payment smartcard|
|US20030105964 *||Dec 4, 2001||Jun 5, 2003||Brainard John G.||Method and apparatus for performing enhanced time-based authentication|
|US20030131087 *||Jan 4, 2002||Jul 10, 2003||Shippy Keith L.||Method of using billing log activity to determine software update frequency|
|US20030188158 *||Mar 24, 2003||Oct 2, 2003||Kocher Paul C.||Payment smart cards with hierarchical session key derivation providing security against differential power analysis and other attacks|
|US20040015554 *||Jul 16, 2002||Jan 22, 2004||Brian Wilson||Active e-mail filter with challenge-response|
|US20040167968 *||Feb 20, 2003||Aug 26, 2004||Mailfrontier, Inc.||Using distinguishing properties to classify messages|
|US20050021990 *||Sep 4, 2002||Jan 27, 2005||Pierre-Yvan Liardet||Method for making secure a secret quantity|
|US20050036615 *||Jul 31, 2003||Feb 17, 2005||Jakobsson Bjorn Markus||Method and apparatus for graph-based partition of cryptographic functionality|
|US20050149457 *||Dec 24, 2003||Jul 7, 2005||Intel Corporation||Method and apparatus for establishing trust in smart card readers|
|US20050182934 *||Jan 21, 2005||Aug 18, 2005||Laszlo Elteto||Method and apparatus for providing secure communications between a computer and a smart card chip|
|US20050193199 *||Feb 11, 2005||Sep 1, 2005||Nokia Corporation||Accessing protected data on network storage from multiple devices|
|US20060045264 *||Oct 18, 2005||Mar 2, 2006||Kocher Paul C||Prevention of side channel attacks against block cipher implementations and other cryptographic systems|
|US20060104440 *||Oct 30, 2003||May 18, 2006||Alain Durand||Simplified method for renewing symmetrical keys in a digital network|
|US20060235934 *||Jun 16, 2006||Oct 19, 2006||Mailfrontier, Inc.||Diminishing false positive classifications of unsolicited electronic-mail|
|US20060269066 *||Dec 21, 2005||Nov 30, 2006||Schweitzer Engineering Laboratories, Inc.||System and method for converting serial data into secure data packets configured for wireless transmission in a power system|
|US20070053516 *||Aug 19, 2005||Mar 8, 2007||Cardiac Pacemakers, Inc.||Symmetric key encryption system with synchronously updating expanded key|
|US20070237326 *||Apr 15, 2005||Oct 11, 2007||Masao Nonaka||Computer System and Computer Program Executing Encryption or Decryption|
|US20070299684 *||Jun 13, 2007||Dec 27, 2007||Goodwin Jonathan D||Secure on-line ticketing|
|US20080021969 *||Sep 20, 2007||Jan 24, 2008||Sonicwall, Inc.||Signature generation using message summaries|
|US20080022146 *||Dec 21, 2006||Jan 24, 2008||Kocher Paul C||Differential power analysis|
|US20080040593 *||Nov 9, 2006||Feb 14, 2008||Atmel Corporation||Embedded software camouflage against code reverse engineering|
|US20080040607 *||Aug 10, 2006||Feb 14, 2008||Majid Kaabouch||Software execution randomization|
|US20080049940 *||Oct 24, 2007||Feb 28, 2008||Kocher Paul C||Payment smart cards with hierarchical session key derivation providing security against differential power analysis and other attacks|
|US20080072051 *||Nov 9, 2006||Mar 20, 2008||Atmel Corporation||Bi-processor architecture for secure systems|
|US20080104400 *||Oct 30, 2007||May 1, 2008||Kocher Paul C||Leak-resistant cryptographic payment smartcard|
|US20080123446 *||Sep 21, 2006||May 29, 2008||Stephen Charles Pickles||Randomizing Current Consumption in Memory Devices|
|US20080130869 *||Jul 7, 2003||Jun 5, 2008||Mehdi-Laurent Akkar||Method to Secure an Electronic Assembly Against Attacks by Error Introduction|
|US20080168145 *||Oct 29, 2007||Jul 10, 2008||Brian Wilson||Active E-mail Filter with Challenge-Response|
|US20090113214 *||Oct 17, 2008||Apr 30, 2009||Sandisk Il Ltd.||Software protection against fault attacks|
|US20090257295 *||Jun 25, 2009||Oct 15, 2009||Atmel Corporation||Randomizing Current Consumption in Memory Devices|
|US20100070765 *||Sep 22, 2009||Mar 18, 2010||Ogg Craig L||Secure and recoverable database for on-line value-bearing item system|
|US20100091982 *||Dec 14, 2009||Apr 15, 2010||Kocher Paul C||Differential power analysis - resistant cryptographic processing|
|US20100228674 *||May 18, 2010||Sep 9, 2010||Stamps.Com||Cryptographic module for secure processing of value-bearing items|
|US20110078449 *||Sep 20, 2010||Mar 31, 2011||Silverbrook Research Pty Ltd||Encrypted Communication System with Limited Number of Stored Encryption Key Retrievals|
|US20110078454 *||Sep 20, 2010||Mar 31, 2011||Silverbrook Research Pty Ltd||Encrypted communication device with restricted rate of encryption key retrievals from memory|
|US20110184976 *||Jan 27, 2011||Jul 28, 2011||Wilson Brian K||Using Distinguishing Properties to Classify Messages|
|US20110208907 *||May 3, 2011||Aug 25, 2011||Shlomo Raikin||Protected Cache Architecture And Secure Programming Paradigm To Protect Applications|
|US20110231503 *||Apr 5, 2011||Sep 22, 2011||Wilson Brian K||Efficient use of resources in message classification|
|US20130294603 *||May 3, 2013||Nov 7, 2013||Telefonaktiebolaget L M Ericsson (Publ)||Centralized key management in embms|
|US20130326235 *||Dec 21, 2011||Dec 5, 2013||Giesecke & Devrient Gmbh||Cryptographic method|
|US20140207892 *||Mar 26, 2014||Jul 24, 2014||Sonicwall, Inc.||Message challenge response|
|US20160283750 *||Mar 26, 2015||Sep 29, 2016||David M. Durham||Providing enhanced replay protection for a memory|
|EP2738974A1||Nov 28, 2013||Jun 4, 2014||Spirtech||Method for deriving multiple cryptographic keys from a master key in a security microprocessor|
|EP3089398A1 *||Apr 30, 2015||Nov 2, 2016||Nxp B.V.||Securing a cryptographic device|
|WO2007024828A2 *||Aug 23, 2006||Mar 1, 2007||Soft Resources, Llc||Authentication protection apparatus and method|
|WO2007024828A3 *||Aug 23, 2006||Apr 23, 2009||Andrei V Bardachenko||Authentication protection apparatus and method|
|WO2011038443A1||Sep 20, 2010||Apr 7, 2011||Silverbrook Research Pty Ltd||Communication system, method and device with limited encryption key retrieval|
|U.S. Classification||380/252, 713/193, 380/1, 713/194|
|International Classification||G06Q20/40, G06Q20/34, G07F7/10, H04L9/08|
|Cooperative Classification||H04L9/0891, H04L9/0625, H04L9/003, G07F7/1008, G06Q20/40975, G06Q20/341, G06F2207/7219|
|European Classification||G06Q20/341, H04L9/06C, G06Q20/40975, H04L9/08, G07F7/10D|
|Jul 2, 1999||AS||Assignment|
Owner name: CRYPTOGRAPHY RESEARCH, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KOCHER, PAUL C.;REEL/FRAME:011156/0288
Effective date: 19990702
|Sep 1, 2006||FPAY||Fee payment|
Year of fee payment: 4
|Aug 24, 2010||FPAY||Fee payment|
Year of fee payment: 8
|Sep 25, 2014||FPAY||Fee payment|
Year of fee payment: 12