While Security Innovation’s expertise isn’t limited to any specific platform or technology, we do have deep experience and specialization in automotive and embedded security.

- What is NTRU?
- What makes NTRU fast?
- What makes NTRU quantum computing resistant?
- Has the NTRU crypto algorithm been scrutinized and tested sufficiently?
- How fast is NTRU?
- Has NTRU been adopted in any standards?
- Which applications/industries would benefit most from NTRU?
- What business problem does NTRU solve?
- How does the key size compare to popular solutions like RSA?
- Why has NTRU been open source licensed?
- Doesn't open sourcing make NTRU implementations more vulnerable to hackers?
- Are there any commercial licenses available?
- Is replacing RSA with NTRU the best solution moving forward?
- How can I get the source code?

- Are quantum computers ever going to be a reality?
- Are there crypto algorithms that are resilient to quantum attacks?

- Why is NTRUEncrypt so much faster than RSA, El Gamal, and ECC? (A non-technical overview)
- How do we measure and compare the speeds of NTRUEncrypt, RSA and ECC?
- Will NTRU work on a 16-bit card?
- What benchmarks do you have that show feasibility of NTRUEncrypt running on an 8051 card without a co-processor?
- Is NTRU vulnerable to attacks based on quantum computing?
- What hard problem underlies the NTRUEncrypt Public Key Cryptosystem?
- Has NTRUEncrypt been reviewed by the cryptographic community?
- How is the security of NTRUEncrypt measured?
- I read that NTRU doesn’t always decrypt properly. Is that true?
- Why is the NTRUEncrypt Public Key Cryptosystem more secure than earlier lattice based systems, such as knapsack systems, that have been broken?
- Do you have any mathematical proof of the security of NTRUEncrypt and NTRUSign?
- Why is NTRUEncrypt resistant to parallelized attacks?
- How does NTRUEncrypt cope with SPA-based attacks?
- What is the core idea of the NTRUSign algorithm?
- Is there any difference between how NTRUSign and NTRUEncrypt use the NTRU lattice?
- What's the difference between this algorithm and NSS?
- What were the details of the attacks on NSS?
- What hard problems does NTRUSign depend on?
- Can you provide more background on Lattices?

NTRU is a lattice-based asymmetric (public/private key) cryptosystem from Security Innovation and the leading alternative to RSA and Elliptic Curve Cryptography (ECC), owing to its higher performance and resistance to attacks from quantum computers. NTRU was developed in 1996 as a visionary solution to the cybersecurity challenges of the twenty-first century. NTRU is based on a mathematical problem known as the approximate closest lattice vector problem and comprises three algorithms: NTRUEncrypt, NTRUSign, and PASSSign. It has been reviewed and published in scholarly journals, presented at Crypto, Eurocrypt, and RSA, and adopted in IEEE and X9 standards.

Because it is based on different mathematics from RSA and ECC, the NTRU algorithm has different cryptographic properties. At comparable cryptographic strength, NTRU performs costly private key operations much faster than RSA. Moreover, NTRU's comparative advantage grows with the level of security required: as key sizes increase by a factor of n, RSA's operations per second fall off as n^{3} whereas NTRU's fall off as n^{2}.

There are no known significant quantum computer attacks on NTRU. In contrast, a quantum computer running “Shor’s algorithm” would be able to break RSA or ECC of any practical size in negligible time, rendering today’s security obsolete. This has been validated by external reviewers such as the National Institute of Standards and Technology (NIST). NIST wrote: “Of the various lattice based cryptographic schemes that have been developed, the NTRU family of cryptographic algorithms appears to be the most practical... They are viable alternatives for both public key encryption and signatures that are not vulnerable to Shor’s Algorithm.”

Of course, future research could find a quantum computing based algorithm that attacks NTRU. However, finding such an algorithm has been a top priority for quantum computing researchers for 20 years and none has yet emerged.

There have been more than 20 reports issued regarding the NTRU algorithm over the past 16 years. This research came from academic institutions, including Brown University, L’École normale supérieure (ENS), University of California San Diego, and Shanghai Jiaotong University. This scrutiny has led to even stronger parameter choices and hardened implementations. Now that NTRU is available under an open source license, the algorithm will receive even more testing.

At comparable cryptographic strength, NTRU performs private key operations 20x to 200x faster than OpenSSL RSA. Faster means less processing time (cheaper) and the ability to encrypt more data (more secure). In addition, as key sizes (security levels) increase by a factor of n, RSA's operations per second decrease at a rate of n^{3} whereas NTRU's decrease at n^{2}. A University of Leuven report states, "NTRU is extremely fast on parallelizable processors." Ari Juels, Chief Scientist at RSA Labs, stated, "[NTRU] is considerably faster; that is something we acknowledge."

We are in the process of getting our performance results, including signature verification, included in eBATS so that users can compare NTRU's speed against a variety of encryption solutions.

NTRU has been adopted by two standards bodies, IEEE and the Financial Services Industry’s Accredited Standards Committee X9.

- IEEE P1363 Working Group for Standards In Public Key Cryptography.
- X9.98 Lattice-Based Polynomial Public Key Establishment Algorithm for the Financial Services Industry. “This standard specifies the use of the NTRUEncrypt algorithm to establish secure communications for financial services… X9.98 marks a particularly significant step forward in improving the robustness of systems based on X9 standards: it allows the deployment of systems that are protected against quantum computing attacks as well as against classical attacks.”

Additionally, an Internet Draft standardizing NTRU-based ciphersuites in Transport Layer Security (TLS) is currently progressing through the Internet Engineering Task Force (IETF) standards process.

Any application that requires fast performance (large amounts of data to be protected in a short amount of time) and/or high levels of security for the next 10 years would benefit from the NTRU solution. Furthermore, the small code size (small footprint) of the NTRU implementations makes them suitable for even small embedded processors.

These applications include Payment Systems, secure messaging and email, mobile eCommerce, Healthcare, Near Field Communications (NFC), Vehicle Communications (V2V, V2I), Military/Aerospace, Web Browsers and Servers, Remote Backup Solutions, Voice over IP (VoIP), Online Presentations/Virtual Classrooms, Infrastructure (railway switching, traffic lights, etc.), Utility meters, and Cloud Providers/Datacenters.

We're providing a data protection solution that can help ensure long-term privacy of internet and financial transactions, something that has been compromised lately with RSA/ECC. Industry needs a better and more transparent secure data communications solution, both now and in the future.

NTRU can improve communication efficiency while enhancing data security. The most commonly used encryption solution (RSA) is painfully slow, especially with the larger keys that are required for acceptable security standards. Rather than slow down data transmission, businesses today often choose to not protect all of their data.

NTRU, on the other hand, provides much stronger security with substantially better performance. Higher-performing NTRU encryption requires fewer servers while still protecting (encrypting) all data. If you are encrypting all transactions with a secure algorithm, the damage caused by intrusions can be significantly lessened. Secure encryption reduces the chances of costly data breaches, improves privacy and compliance, and saves money by reducing the need for some intrusion detection systems and other security solutions.

NTRU's key size is comparable to RSA's at similar security levels, but its speed advantages over RSA at those levels are substantial.

By offering NTRU source code and patents under the GNU General Public License (GPL) v2, we intend to remove barriers to widespread deployment. We want to enable the developers of the open-source software that powers the internet to test, use, deploy, and start transitioning to fast, future-proof cryptography. Recent revelations and speculation about NSA influence on both crypto algorithms and crypto implementations have made it clear that the security community desperately needs alternatives to existing crypto solutions.

Making NTRU open-source also removes barriers to testing of both the algorithm and the implementation. Open scrutiny and testing is the only way to instill confidence in any encryption solution.

Furthermore, the open source licensing allows users to implement the NTRU algorithm in other languages and for other operating systems beyond those we currently support.

Quite the opposite. NTRU has been tested by several external groups, in addition to the commercial implementations, over the past 10 years. Exposing it to even more users will further prove the strength of the algorithm and harden the implementations. Hiding behind a veil of patents and licensing does not equate to greater or lesser security: the underlying strength of the algorithm is unaffected by the chosen licensing model. In the event that a vulnerability is discovered in a particular implementation of the crypto algorithm, open source software allows users to build short-term mitigation defenses to protect themselves until the vulnerability is fixed. We feel this situation is better than leaving users exposed and unaware.

For commercial (not open source) applications, Security Innovation offers a commercial license that is not limited to use in open source applications.

We don’t think a single encryption solution is the best idea, regardless of the algorithm. Double encryption using two fast algorithms, such as NTRU plus another post-quantum crypto algorithm or even ECC, would provide far greater security at considerably higher performance than RSA alone today. Our Chief Scientist, William Whyte, wrote a blog post on this subject.

The source code is available on GitHub at https://github.com/NTRUOpenSourceProject/ntru-crypto. The source code is currently available in the C programming language; a Java-based implementation will be available in the near future.

Different experts have different opinions! According to a CIO article, Andrew Dzurak, a Professor in Nanoelectronics at the University of New South Wales, has predicted it will be 20 years before quantum computers capable of modelling and simulating complex biological and chemical systems to create new materials become commercially available. This most likely means governments and some large organizations will have working quantum computers much earlier. The D-Wave 2 is already commercially available, although it has only 512 qubits. Data encrypted by RSA/ECC could be stolen today and stored until quantum computers are able to break it. If the cost is low enough, why not protect against quantum computers now?

One class of public key algorithms is based on properties of a mathematical concept known as a lattice. No algorithms yet invented for a quantum computer have had the potential of undermining the security of these algorithms. In a report titled “Quantum Resistant Public Key Cryptography,” NIST analyzed public key cryptographic algorithms that are believed to be resistant to quantum computing based attacks. NTRU’s lattice-based cryptographic algorithms were found to be the most resilient and practical:

"Of the various lattice based cryptographic schemes that have been developed, the NTRU family of cryptographic algorithms appears to be the most practical... smallest key size... highest performance." - NIST, "Quantum Resistant Public Key Cryptography"

The NTRUEncrypt cryptosystem is much faster than exponentiation systems such as RSA, El Gamal, and ECC. One reason is that the basic operations used by NTRUEncrypt involve manipulation of small numbers, generally numbers less than 255. Exponentiation systems, on the other hand, require numbers with hundreds of digits. Let's say that you want to improve security, so you double the key length. In this case, the RSA cryptosystem will slow by a factor of 8 and the NTRU cryptosystem by a factor of 4. Here's one way to understand what's happening:

A private key for any cryptosystem can be thought of as a long list of bits, for example:

key = (1,1,0,1,0,0,1,0,0,1,1,1,1,0,0,0,1,0,1,0,........,0,0,1,0,1,1,1,0,0,1,0,1).

The number of 0's and 1's in the list is called the bit-length of the key. A typical key for NTRUEncrypt, RSA, or El Gamal might be between 1000 and 2000 bits long, so it's not a difference in key lengths that makes NTRUEncrypt faster. (ECC uses even shorter keys.) Instead, it's how NTRUEncrypt uses the key. When RSA, El Gamal, or ECC want to encrypt a message, they need to do lots of computations involving **every** single bit in the key; so they need to manipulate the entire long key.

By way of contrast, NTRUEncrypt breaks the key up into small chunks, generally consisting of about 12 bits each (8-bit chunks are shown below for readability). For example:

NTRUEncrypt key = (1,1,0,1,0,0,1,0) --- (0,1,1,1,1,0,0,0) ---....--- (1,1,1,0,0,1,0,1).

When encrypting a message, NTRUEncrypt works with the key chunks **one at a time**, so it never needs to manipulate extremely long lists of bits. This makes for speedy computations, even on low power processors.
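In more concrete terms, the "small chunks" become the coefficients of a polynomial, and the core NTRUEncrypt operation is a cyclic convolution of such polynomials with all arithmetic done modulo a small number q. The following toy sketch shows that operation; the parameters (N=11, q=32) and the sample polynomials are illustrative only and far too small for real security.

```python
# Toy sketch of the core NTRUEncrypt operation: cyclic (convolution)
# multiplication of polynomials in the ring Z_q[x]/(x^N - 1).
# Every arithmetic step involves only numbers smaller than q, which is
# why the computations stay cheap even on low-power processors.

def convolve(a, b, N, q):
    """Multiply polynomials a and b modulo x^N - 1 and modulo q."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % q
    return c

N, q = 11, 32                              # toy parameters, not secure
f = [1, -1, 0, 1, 0, 0, -1, 0, 1, 0, 0]    # small "key-like" polynomial
m = [1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0]      # small "message-like" polynomial
print(convolve([x % q for x in f], m, N, q))
```

Note that no coefficient ever grows beyond q during the computation, in contrast to the hundreds-of-digits intermediate values in RSA-style exponentiation.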

A careful mathematical analysis shows that for keys consisting of around N bits, the RSA, El Gamal, and ECC systems require on the order of N^{3} operations to encrypt or decrypt a message, while NTRUEncrypt requires only on the order of N^{2} operations. The tremendous difference between N^{3} and N^{2} can be clearly seen in the timing comparisons between NTRUEncrypt, RSA, and ECC.

[Technical Addendum: The above description is highly oversimplified. In practice, use of fancy techniques (Fast Fourier Transforms, Karatsuba multiplication, etc.) will increase RSA and ECC efficiency to N^{2} log(N) operations and increase NTRUEncrypt efficiency to N log(N) operations. NTRUEncrypt retains its order of magnitude advantage.]
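The scaling claims above can be sanity-checked with a one-line calculation: under an N^{3} cost model, doubling the key length multiplies the cost by 2^{3} = 8, while under an N^{2} model it multiplies it by 2^{2} = 4, matching the factors quoted earlier.

```python
# Doubling the key length under different asymptotic cost models.

def slowdown(exponent, factor=2):
    """Cost ratio when the key length is multiplied by `factor`,
    assuming cost grows as N^exponent."""
    return factor ** exponent

print(slowdown(3))  # RSA-style N^3 model: 8x slower
print(slowdown(2))  # NTRU-style N^2 model: 4x slower
```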

Statements like "Cryptosystem XXX is 100 times faster than Cryptosystem YYY" are meaningless without additional information. Unfortunately, the marketplace insists on succinct quantitative comparisons, so we and our competitors frequently publish these sorts of statements. To be accurate, three issues need to be addressed:

The operating speed of a cryptosystem depends heavily on the particular implementation, on the compiler, on the machine or device processor, and on a host of other similar factors. Optimizations for a particular machine or various programming tricks may easily give speed differentials of 2 to 3 times. We also note that we are comparing fully optimized implementations of RSA and ECC with the implementation of NTRUEncrypt in the first commercial release of Tumbler. We certainly expect significant implementation-derived speed improvements for NTRUEncrypt in future releases. So when we say that NTRUEncrypt 263 is currently 30 times faster than RSA 1024, it might be more accurate to say that the current release is 10 to 90 times faster, and that we expect the next release to be between 20 and 180 times faster. Similarly, the current release of NTRUEncrypt 503 is between 25 and 225 times faster than RSA 2048, which we feel justifies our claim of being 100 times faster.

Another important application that more than justifies our claimed 100x speed advantage is the use of disposable keys. For example, NTRUEncrypt is currently being implemented in a commercial product that will generate a new public/private key pair for each few seconds of audio and use that pair to encrypt and decrypt a symmetric key for that piece of audio. So in this real-world situation, a single application of the public key cryptosystem consists of three operations:

Generate Key / Encrypt One Block / Decrypt One Block

If we compare NTRUEncrypt N=263 with RSA 1024 in this situation, we find that NTRUEncrypt is far more than 100 times faster. Indeed, NTRUEncrypt key pair generation takes about 3.0 milliseconds, while Crypto++ takes over 1 second to generate an RSA key pair (on our 300 MHz machines), so the NTRUEncrypt speed advantage tops 300.

Many cryptosystems encrypt at a different speed than they decrypt, sometimes by using tricks (such as low exponent RSA) and sometimes due to the underlying algorithm. So it is important to compare apples to apples. We feel that a valid measurement is the total time it takes to first encrypt and then decrypt a single block of data. This is a useful quantity because public key cryptosystems are generally used to send a single block of data containing either a short message (e.g., a credit card number) or a key to be used by a private key cryptosystem such as triple DES or AES.
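As a sketch of that measurement methodology, the harness below times a full encrypt-then-decrypt round trip on one block and averages over many iterations. The XOR "cipher" is a placeholder, not NTRUEncrypt; substitute the encrypt/decrypt calls of whatever library you are actually benchmarking.

```python
# Benchmark sketch: total time to first encrypt and then decrypt a
# single block, averaged over many iterations for a stable figure.

import time

def encrypt(block, key):
    return bytes(b ^ k for b, k in zip(block, key))  # placeholder cipher

def decrypt(block, key):
    return bytes(b ^ k for b, k in zip(block, key))  # placeholder cipher

def round_trip_time(block, key, iterations=10_000):
    """Average seconds for one encrypt-then-decrypt round trip."""
    start = time.perf_counter()
    for _ in range(iterations):
        decrypt(encrypt(block, key), key)
    return (time.perf_counter() - start) / iterations

key = bytes(range(32))
block = bytes(32)
print(f"avg round trip: {round_trip_time(block, key) * 1e6:.2f} us")
```

Averaging the combined encrypt+decrypt time sidesteps tricks (such as low-exponent RSA) that make one direction artificially fast.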

Yes, NTRU algorithms are suited for any environment. NTRUEncrypt is unique among asymmetric cryptosystems in that it can be implemented on any processor, even 8-bit processors, without requiring a co-processor.

Encrypt: 214,000 cycles (107 ms)

Decrypt: 234,000 cycles (117 ms)

Much of this code is written in C, and we expect these figures to improve. Also, note that encrypting and decrypting with the standard NTRU formatting method requires five calls to SHA-1 when encrypting and four when decrypting, and these calls are included in the figures above. The figures for raw encrypt/decrypt are:

Encrypt: 47,000 cycles (23.5 ms)

Decrypt: 80,000 cycles (40 ms)
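The difference between the formatted and raw figures gives a rough estimate of the combined cost of padding, formatting, and the SHA-1 calls (a back-of-the-envelope split, not a profiled breakdown):

```python
# Overhead of the standard NTRU formatting method, estimated from the
# cycle counts quoted above (formatted minus raw).

formatted = {"encrypt": 214_000, "decrypt": 234_000}  # cycles
raw = {"encrypt": 47_000, "decrypt": 80_000}          # cycles

for op in ("encrypt", "decrypt"):
    overhead = formatted[op] - raw[op]
    print(f"{op}: {overhead:,} cycles of formatting + hashing overhead")
```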

There is no known quantum computing algorithm which speeds up attacks on NTRU.

In general, there are two quantum computing algorithms of great significance to cryptanalysis: Shor's algorithm and Grover's algorithm.

Shor's algorithm uses the fact that quantum computing methods can calculate the period of a periodic function very quickly. This can be used to massively speed up attacks on RSA, Discrete Log based cryptosystems such as DSA and Diffie-Hellman, and Elliptic Curve Discrete Log systems such as ECDSA, ECMQV, and so on. (Note that in the case of RSA the period of the function essentially is the private key). Shor's algorithm has no obvious application to NTRU: NTRU has no similar periodic function to exploit.
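What Shor's algorithm computes quantumly can be illustrated classically on toy numbers: find the period r of f(x) = a^x mod N, then recover factors of N from gcd(a^{r/2} ± 1, N). The sketch below finds the period by brute force, which is precisely the step that is hopeless classically at real key sizes but fast on a quantum computer.

```python
# Classical illustration of the quantity Shor's algorithm computes:
# the period r of f(x) = a^x mod N, which reveals the factors of N.

from math import gcd

def period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), by brute force."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

N, a = 15, 2
r = period(a, N)                    # r = 4 for this toy example
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(f"period {r}; factors of {N}: {p} x {q}")  # period 4; factors of 15: 3 x 5
```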

Grover's algorithm speeds up unsorted database and brute force searches: using Grover's algorithm, it is possible to search a list of N items in sqrt(N) time. This effectively halves the keylength of symmetric ciphers. However, there is no obvious application to NTRU, as in the case of NTRU the brute force attack is not the most efficient, even if sped up by Grover's algorithm. (Note that there is a known meet-in-the-middle attack on NTRU private keys which also speeds up a brute force search by a square-root factor; there is no known way to compose this speed-up with a quantum computing speed-up).
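The practical upshot of Grover's algorithm for symmetric ciphers is easy to state: searching 2^k keys takes roughly 2^{k/2} quantum steps, so the effective key length is halved.

```python
# Grover's square-root speedup applied to brute-force key search:
# a k-bit key offers only about k/2 bits of quantum security.

def grover_effective_bits(key_bits):
    """Effective security of a key of key_bits bits against Grover search."""
    return key_bits / 2

for bits in (80, 128, 256):
    print(f"{bits}-bit key -> ~{grover_effective_bits(bits):.0f} bits "
          "of effective quantum security")
```

This is why recommendations for the post-quantum era tend to double symmetric key lengths (e.g. AES-256 instead of AES-128) rather than abandon symmetric ciphers.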

Additional quantum computing algorithms exist (e.g. to find eigenvalues quickly): again, there is no obvious application to breaking NTRU.

In November 2009, NIST demonstrated a 'universal' programmable quantum processor that could be a module in a future quantum computer – a machine that theoretically could solve some important problems that are intractable today, but would also threaten to undermine the security assumptions upon which today’s public key cryptographic algorithms are based. In a report titled “Quantum Resistant Public Key Cryptography,” NIST analyzed public key cryptographic algorithms that are believed to be resistant to quantum computing based attacks. NTRU’s lattice-based cryptographic algorithms were found to be the most resilient and practical.

The NTRUEncrypt Public Key Cryptosystem is based on the hard mathematical problem of finding very short vectors in lattices of very high dimension. The process of solving this problem is called "Lattice Reduction", and the general study of small vectors in lattices goes by the name "Geometry of Numbers". This subject has a long history due to its importance in a variety of areas of mathematics (both pure and applied) and physics. For example, in 1880 Hermite proved an upper bound for the length of the shortest vector in a lattice, and in 1896 Minkowski gave an important criterion for when a symmetric convex body must contain a nonzero lattice vector. (This is the same Minkowski responsible for the formulation of space-time in relativity.) Minkowski also gave the subject its name in his book Geometrie der Zahlen (Geometry of Numbers), published in Leipzig in 1910.

The geometry of numbers has flourished as a mathematical field. A standard reference is Lekkerkerker's Geometry of Numbers (John Wiley, NY, 1969), a massive 510 page volume with a 32 page bibliography. When Lekkerkerker was updated (Elsevier, 1987), it grew to 732 pages with a 93 page bibliography. As a further indication of the importance of lattices in modern mathematics, a search of Mathematical Reviews found over 14,000 articles published between 1986 and 1999 that have the word "lattice" in their title. Although not all research on lattices is directly relevant to cryptography, there is no question that lattices are a fundamental object of study in algebra, geometry, analysis, and physics.

With the advent of modern computers, attention turned to the practical problem of finding short vectors in lattices. One can always find the shortest nonzero vector by an exhaustive search, but the search space grows exponentially in the dimension of the lattice, so people looked for better methods. The most important modern advance in the algorithmic theory of lattice reduction is the method discovered by Lenstra, Lenstra and Lovasz and dubbed LLL.

A. K. Lenstra, H. W. Lenstra Jr., and L. Lovasz, "Factoring polynomials with rational coefficients," Mathematische Annalen 261 (1982), 513-534.

The LLL algorithm finds a moderately small vector in polynomial time, but it will generally not find the smallest vector. There have been a number of improvements, mostly of a combinatorial nature, to the LLL algorithm due to Schnorr, Euchner, and others. These improvements go by names like "deep insertions", "block reduction", "pruning". But the problem of finding the smallest vector in a general lattice remains an exponentially hard problem using even the best known algorithms.
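In two dimensions the problem is actually easy: Gauss/Lagrange reduction, the two-dimensional ancestor of LLL, finds a shortest nonzero lattice vector exactly. The sketch below illustrates the idea; LLL generalizes it to high dimension, where only a moderately short vector is guaranteed and finding the true shortest vector remains exponentially hard.

```python
# Gauss/Lagrange reduction: the 2-D ancestor of LLL. Repeatedly
# subtract the nearest integer multiple of the shorter basis vector
# from the longer one until no further shortening is possible.

def gauss_reduce(u, v):
    """Return a reduced basis (u, v) of the lattice spanned by u and v,
    with u a shortest nonzero vector of the lattice."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # Subtract from v the integer multiple of u closest to it.
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u

u, v = gauss_reduce((5, 0), (3, 1))
print(u, v)  # u is a shortest nonzero vector of the lattice
```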

It is worth observing that LLL was invented BEFORE lattices became important in cryptography. LLL has numerous applications in signal processing, combinatorics, computer algebra, algebraic number theory, Diophantine equations, and physics, as well as cryptography. To indicate the widespread interest in LLL and computational lattice reduction, we note that the original LLL article was cited in more than 145 research articles between 1986 and 1999, the LLL algorithm is included in many computer packages (Mathematica, Maple, Pari, Simath, NTL, ...), and the LLL algorithm is featured in numerous books.

Important Disclaimer: We cannot prove that breaking NTRUEncrypt is as hard as finding a shortest vector in a general lattice. All we can assert is that the best method anyone has found for breaking NTRUEncrypt involves finding the shortest vector in certain lattices of high dimension, and that experiments indicate that known lattice reduction methods are incapable of breaking NTRUEncrypt if one uses the suggested parameter sets. Notice the similarity with RSA. No one knows if breaking RSA in general is equivalent to the general problem of factoring large numbers; and even if they are equivalent, it may still be possible that some numbers are easier to factor than others, so some particular RSA keys may be weak.

Since its introduction at Crypto '96, NTRUEncrypt has been subject to constant scrutiny by top cryptographers, including a EuroCrypt '97 security analysis by Adi Shamir and Don Coppersmith. NTRUEncrypt has been publicly presented at top cryptographic conferences, has been described through publication in refereed journals and conference proceedings, and has been reviewed by outside experts. Additionally, the National Institute of Standards & Technology (NIST) accredited NTRU with being the most practical lattice-based cryptographic solution that can resist a quantum computing attack.

The hard problem underlying the NTRUEncrypt Public Key Cryptosystem is that of finding a very short vector in a lattice of very high dimension. The best way known to attack NTRUEncrypt is to use the LLL lattice reduction method to search for the target vector. In order to test the security of NTRUEncrypt, we used LLL to run numerous tests on NTRUEncrypt lattices of various dimensions and graphed the amount of time it took to find the target vector. From this graph we extrapolated the running time for lattices of higher dimension and used these figures to select appropriate parameters for NTRUEncrypt. The following table gives estimated breaking times for NTRUEncrypt and RSA at various security levels. (Times are rounded to the nearest 10 MIPS-years.)

| Cryptosystem | Security Level | Estimated Breaking Time (MIPS-years) |
|---|---|---|
| RSA | 512 bits | 10 |

NTRUEncrypt experiments were done using the implementation of LLL in Victor Shoup's highly regarded NTL package and run on 400 MHz Celeron processors operating under Linux.

The security levels and estimated breaking times for RSA are taken from the P1363 draft standards.

Our discussion of security in this FAQ is of necessity brief and somewhat sketchy. For complete details, including experiments and a full security analysis, see the papers and technical notes at our Technical Center.

A MIPS-year is the number of operations performed in one year by a computer that is capable of performing one million instructions per second. Roughly speaking, a single 400 MHz computer executes about 400 million instructions per second (one instruction per clock cycle), so it is a 400 MIPS machine. According to the above table, it would have to run for about 10^{12}/400 years (that's about 2,500,000,000 years) to break an RSA 1024 bit key. So even if you could harness the power of one million 400 MIPS machines, they would still require 2,500 years to break a single key.
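The arithmetic in that paragraph, spelled out:

```python
# MIPS-year arithmetic: how long one machine, or a million machines,
# would need to deliver 10^12 MIPS-years of computation.

breaking_time_mips_years = 1e12   # quoted estimate for RSA 1024
machine_mips = 400                # a 400 MHz machine at ~1 instruction/cycle

years_one_machine = breaking_time_mips_years / machine_mips
years_million_machines = years_one_machine / 1e6

print(f"one machine: {years_one_machine:.1e} years")              # 2.5e+09
print(f"a million machines: {years_million_machines:.0f} years")  # 2500
```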

With NTRU, there's a tradeoff to be made in terms of the parameter q. The larger q is, the larger keys and ciphertexts are; the smaller it is, the greater the chance that a valid ciphertext will fail to decrypt. An attacker learns information from these decryption failures, so it's important to prevent them. For sufficiently large q, there will be no decryption failures at all, and in fact some sets of NTRU parameters have had this property. However, it's not necessary to go this far. In practice, we can choose q for a security level of k bits so that the chance of a decryption failure is 2^{-k}. There's no need for a higher level of protection against decryption failures, because once the decryption failure probability drops below 2^{-k}, the attacker will simply choose a different attack method. This choice of q gives the optimum size parameters for a given security level. So the answer to "Does NTRU decrypt correctly now?" is "Yes!" (except with negligible probability).

The original collection of cryptosystems broken by lattice reduction were those based upon variations of the knapsack problem. As each new variation appeared, hopeful authors did everything in their power to avoid potential transformations of their "hard problem" into one that could be attacked by lattice reduction. As the survey article by Odlyzko describes, no matter how obscured the original problem was, it still always proved in the end to be possible to transform it into one vulnerable to LLL lattice reduction methods. The last one to bite the dust was the Chor-Rivest system, broken by Schnorr's recent improvements to the LLL algorithm.

One of the fundamental problems these authors faced was the fact that the key size of their system grew with the square of the dimension of the lattice used for an attack. Because of this, it was impossible to make the dimension of an attack lattice greater than a couple of hundred without simultaneously making the cryptosystem impractical. Developments in LLL technology by Schnorr, Euchner, and others now allow lattices of dimension 200 or less to be analyzed in a short amount of time, and even certain lattices of dimension up to 300 can occasionally be broken. This is enough to dispose of all proposed practical knapsack systems. It was also enough to dispose of the lattice based system proposed in 1997 by Goldreich, Goldwasser and Halevi. The GGH system also had a key size which grew with the square of the lattice dimension, and thus either the lattice was too small for security or the key size was too large to be practical.

One important point of the NTRUEncrypt cryptosystem is to make the underlying lattice problem perfectly clear and straightforward from the outset. Thus unlike the knapsack cryptosystems, there is no question of trying to prevent the use of lattice reduction methods. A second crucial point for the NTRUEncrypt cryptosystem is that the key sizes are linear in the dimension. This means that it is easy to set up an NTRUEncrypt cryptosystem with practical key sizes (and blinding speed) which nevertheless relates to a lattice problem of dimension 500 to 1000. Current LLL techniques are completely incapable of dealing with lattice problems of this size.

Expanding on this last point, developments in LLL technology have been similar to, but somewhat less advanced than, advances in the problems of factoring large integers or finding discrete logarithms. For example, the number field sieve is a very sophisticated method of factoring which uses advanced mathematical ideas to solve the factorization problem in subexponential time. Improvements in LLL technology by Schnorr, Euchner and others have been at a simpler, more combinatorial, level. Their innovations have improved the efficiency of the LLL algorithm, but they still leave the breaking time at least exponential in the dimension.

For NTRU, we have taken the best LLL technology available today and applied it to the lattice problems associated with the NTRUEncrypt cryptosystem. We have extrapolated breaking times in much the same way that factoring attacks have been used to extrapolate breaking times for RSA 1024 or 2048 keys. Our estimates based on these experiments have been extremely conservative and have made generous assumptions about the abilities and power of the LLL programs.
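The extrapolation described above can be sketched as a simple least-squares fit of log(breaking time) against lattice dimension. The data points below are made up for illustration only; the real experiments used LLL timings on actual NTRU lattices.

```python
# Sketch of the breaking-time extrapolation: fit log(time) as a linear
# function of lattice dimension, then extrapolate to higher dimensions.
# The (dimension, seconds) pairs here are hypothetical.

import math

data = [(100, 3.0e2), (120, 2.4e3), (140, 1.9e4), (160, 1.5e5)]

xs = [d for d, _ in data]
ys = [math.log(t) for _, t in data]
n = len(data)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar

def predicted_time(dim):
    """Extrapolated breaking time (seconds) at the given dimension."""
    return math.exp(intercept + slope * dim)

print(f"extrapolated time at dimension 500: {predicted_time(500):.2e} s")
```

Because the fitted growth is exponential in the dimension, pushing the lattice dimension into the 500 to 1000 range puts the extrapolated breaking time far beyond reach.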

NTRUEncrypt uses techniques derived from an approach due to Fujisaki and Okamoto, which provide indistinguishability under adaptive chosen ciphertext attack (IND-CCA2). Details are in [1] and [2].

[1] E. Fujisaki and T. Okamoto, "How to Enhance the Security of Public-Key Encryption at Minimum Cost," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E83-A, No. 1, Special Issue on Cryptography and Information Security (January 2000).

[2] NTRU Technical Note 16 (NTRUTech016.pdf), available from the Technical Notes archive.

The security relies on various assumptions, of which the most important are:

- the hash function used behaves as a random oracle;
- the Shortest Vector Problem and Closest Vector Problem are hard to solve in the NTRU lattice.

NTRUSign, in the random oracle model, also reduces to the assumption that SVP and CVP are hard problems in the NTRU lattice.

NTRU Tech Notes 12 and 13 give the results of numerical experiments which allow us to estimate the difficulty of SVP and CVP in the NTRU lattice. There are no known techniques which take advantage of the special structure of the NTRU lattice; the best known technique to attack NTRU-based algorithms is the standard LLL technique. It certainly appears that, insofar as the NTRU lattice differs from a random lattice, those differences make standard lattice reduction techniques less likely, rather than more likely, to succeed. (For example, the lattice contains a set of public, moderately short vectors, which LLL will find in polynomial time, but which prevent the algorithm from finding the shorter vectors that actually act as keys.)

The hard problem underlying the NTRUEncrypt cryptosystem is that of finding short vectors in lattices of high dimension. To solve this problem, one uses lattice reduction methods, more specifically the Lenstra-Lenstra-Lovász (LLL) algorithm with various improvements due to Schnorr, Euchner, Horner, and others. In a similar manner, people use the number field sieve (NFS) to factor integers and break RSA, and they use the Pollard rho method (PRM) to find elliptic curve discrete logarithms and break ECC.

One way to make these methods (LLL, NFS, PRM) faster is to simultaneously use a large number of computers. In order to understand how this is done, we need to distinguish between two different ways that computers can interact:

If many computers are permanently connected so that they can perform computations while continuously communicating with one another, then they are able to perform parallel computations. Note that it is possible to have a single computer box that contains many computer chips, in which case that single box is doing parallel computing. In practice, it is often not necessary for all of the computer chips to be directly connected to one another; it is enough that they be arranged in a grid with each chip connected to the four adjacent chips. Thus it might be possible to connect all of the computers in a building to perform parallel processing, but it would not be possible to do this with any significant fraction of the computers in a city or a country.

Some tasks can be split into small pieces that can be distributed among widely separated computers. Each of the computers performs its piece of the computation without communicating with the other computers. Then, when each computer has done its part, the pieces are reassembled to solve the original problem. The internet is an ideal medium for distributed computing, since there are millions of individual computers attached to the internet.

Briefly, parallel computations require computers to be in continuous contact, while distributed computations can be performed on isolated computers with infrequent communications. Parallel computations are done by many computers (or computer chips) in a single room or building, while distributed computations may be done over a large diffuse network such as the internet.

Security levels for NTRUEncrypt, RSA, and ECC are generally given in MIPS-years. This measures how many years it would take a single computer, operating at one million instructions per second, to break the system. In order to establish suitable security levels, cryptographers estimate the total computational power, in MIPS, of all of the computers currently in existence, and they also make a reasonable guess for the total computational power of all computers that will exist at certain times in the future (10 years, 50 years, 100 years, etc.). They then assume that some portion of the world's computer power could be devoted to breaking a cryptosystem and use that number to suggest appropriate security levels. Thus security levels are set under the most conservative assumption that breaking a cryptosystem can be fully distributed, since no one believes that 1% (or even 0.01%) of the world's computational power will ever be parallelized. We have followed this conservative approach in setting NTRUEncrypt security levels, so even if lattice reduction is ever possible in a fully distributed environment, it will not affect the security of NTRUEncrypt.
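The MIPS-year unit itself is simple arithmetic. A minimal sketch (the 2^80 figure below is an illustrative attack cost, not an official NTRU security estimate):

```python
# Back-of-the-envelope conversion from an attack's total operation count to
# MIPS-years. The 2**80 figure is illustrative, not an official estimate.
SECONDS_PER_YEAR = 365 * 24 * 3600            # ~3.15e7 seconds

def mips_years(total_operations):
    """Years a single 1-MIPS machine (10**6 instructions/s) would need."""
    return total_operations / (1e6 * SECONDS_PER_YEAR)

print(f"2^80 operations ~ {mips_years(2**80):.1e} MIPS-years")  # ~3.8e10
```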

The basic LLL algorithm is sequential, which means that each step relies on the results of the previous step. Briefly, one starts with a list of not-very-short vectors, modifies them to get a list that is (hopefully) a little shorter, and then keeps repeating the process. After a large number of repetitions, one hopes to find some very short vectors. Lattice reduction computations (using the best current algorithms) cannot be distributed, since each step relies on the results of the previous step. Thus it does not appear to be possible to distribute lattice reduction problems over a large number of computers connected by the internet.
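The "shorten and repeat" loop described above is easiest to see in two dimensions, where the Lagrange-Gauss algorithm plays the role of LLL. This is only a low-dimensional sketch (real LLL works in high dimension and is far more involved), but it shows why each step depends on the result of the previous one:

```python
# Two-dimensional analogue of the sequential reduction loop: Lagrange-Gauss
# reduction repeatedly shortens a basis, each step using the previous result.

def norm2(v):
    return v[0] * v[0] + v[1] * v[1]

def gauss_reduce(u, v):
    """Reduce a 2-D lattice basis (u, v) until no further shortening helps."""
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # Subtract the best integer multiple of the shorter vector u from v.
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):          # no more progress: done
            return u, v
        u, v = v, u                       # otherwise repeat with roles swapped

print(gauss_reduce((1, 0), (99, 1)))      # → ((1, 0), (0, 1))
```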

Parallelization, on the other hand, does offer the possibility of improved performance, and there has been important research on this topic. Remember that LLL takes a list of vectors and "modifies" it to get a better list. This modification process itself requires a large number of basic operations (additions, subtractions, multiplications, etc.). The idea for parallelized LLL is to connect a large number of computers (or computer chips) in the form of an n-by-n grid and do the list modification in one (or a few) computer cycles. In principle, using n^2 computers can speed up LLL by a factor of n^2. However, there is an important caveat. Parallelization only helps up to n equal to the dimension of the lattice, so for NTRUEncrypt the theoretical maximum gain is at most 10^6. Thus assembling a million computers in one building might give a corresponding speed increase, but as noted above, NTRUEncrypt security levels are set using conservative assumptions that already account for this scenario. For those interested in pursuing this topic, see for example:

J.L. Roch and G. Villard, Parallel gcd and lattice basis reduction, Parallel Processing: CONPAR 92 - VAPP V (Lyon, 1992), Lecture Notes in Computer Science 634, Springer-Verlag, 1992, 557-564.

C. Heckler and L. Thiele, Complexity analysis of a parallel lattice basis reduction algorithm, SIAM Journal on Computing 27 (1998), 1295-1302.

The number field sieve (NFS) is the fastest known method for factoring large numbers. It consists of two distinct steps, "Relation Creation" and "Relation Reduction". The Relation Creation stage of NFS can be fully distributed; indeed, the factorization of a 512-bit RSA modulus by NFS performed Relation Creation on hundreds of computers distributed over the internet. The Relation Reduction stage, by contrast, does not seem to be distributable, since it is a sequential process similar in some ways to lattice reduction. (More precisely, it involves matrix reduction of very large sparse matrices over the field with two elements.) However, parallelization using a grid of computers might well allow Relation Reduction to be done more rapidly.
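The matrix step can be sketched with a toy dense elimination over GF(2). Real NFS implementations use sparse methods (block Lanczos or block Wiedemann); this sketch only shows the kind of sequential linear algebra involved:

```python
# Toy Gaussian elimination over GF(2), the field with two elements, where
# each row of the relation matrix is stored as an integer bitmask.

def gf2_rank(rows):
    """Rank of a 0/1 matrix over GF(2), rows given as integer bitmasks."""
    rows = list(rows)
    rank = 0
    while rows:
        pivot = max(rows)                 # row with the highest leading bit
        rows.remove(pivot)
        if pivot == 0:
            continue
        rank += 1
        high = 1 << (pivot.bit_length() - 1)
        # Clear that leading bit from every remaining row. Each pass depends
        # on the pivot just chosen, so the steps cannot run independently.
        rows = [r ^ pivot if r & high else r for r in rows]
    return rank

# Three relations whose exponent vectors XOR to zero: rank 2 < 3 rows, so a
# dependency exists -- exactly what the NFS matrix step is looking for.
print(gf2_rank([0b110, 0b011, 0b101]))    # → 2
```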

[Glossary: SPA = Simple Power Analysis; DPA = Differential Power Analysis; both are ways of extracting cryptographic data from a device by measuring the power it consumes. The difference between SPA and DPA is that DPA, which is by and large more powerful, averages results over a large number of runs.]

The core NTRU convolution is relatively easy to blind against SPA attacks. The operations are simply adds and reductions modulo a power of two. We would be happy to collaborate with interested device vendors on constructing an NTRU implementation that is entirely immune to SPA attacks.
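The uniform operation flow that makes SPA blinding tractable can be sketched as follows. These are toy parameters, and Python itself gives no timing guarantees, so this illustrates the branch-free add-and-mask structure only, not a hardened implementation:

```python
# Sketch of the core convolution: the same add and mask (mod a power of two)
# is performed for every coefficient pair, with no branches on secret data.

N, q = 11, 128                       # toy parameters; q is a power of two

def convolve(a, b):
    """Cyclic convolution of two length-N coefficient lists, reduced mod q."""
    out = [0] * N
    for i in range(N):
        for j in range(N):
            # Identical add + mask on every iteration, regardless of values.
            out[(i + j) % N] = (out[(i + j) % N] + a[i] * b[j]) & (q - 1)
    return out
```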

Full NTRU encryption requires the use of a hash function, typically SHA-1. This would need to be blinded against DPA and SPA to protect individual messages. It does not seem, however, that an SPA-based attack on the hash algorithm would endanger the private key.

The NTRUSign algorithm is based on the difficulty of finding a point in a lattice which is close to a random vector in space. This is known as the Approximate Closest Vector Problem, or appr-CVP. For an introduction to lattices, see Technical FAQ, question 3.

In NTRUSign, the signer knows a good basis for the entire lattice, and uses this to solve appr-CVP for an arbitrary point in space which is linked to the document to be signed. The verifier knows a bad, public, basis for the lattice, and so can verify that the signature is a lattice point and that it is close to the point associated with the signed document.
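The good-basis/bad-basis asymmetry can be seen in a two-dimensional toy (this uses Babai's round-off method on an illustrative lattice, not the NTRUSign lattice itself): write the target point in basis coordinates, round to integers, and map back. With a short, nearly orthogonal basis the result is close to the target; with a long, skewed basis of the same lattice it is not.

```python
# Babai round-off on a 2-D lattice: the quality of the answer depends
# entirely on the quality of the basis used.

def babai(basis, target):
    (a, b), (c, d) = basis                   # basis vectors as rows
    det = a * d - b * c
    # Coordinates of the target in this basis, rounded to integers.
    x = round((target[0] * d - target[1] * c) / det)
    y = round((target[1] * a - target[0] * b) / det)
    return (x * a + y * c, x * b + y * d)

good = ((1, 0), (0, 1))                      # short, orthogonal basis
bad = ((1, 0), (37, 1))                      # same lattice, skewed basis
t = (10.4, 6.3)
print(babai(good, t))                        # (10, 6): very close to t
print(babai(bad, t))                         # (-1, 6): a lattice point, far from t
```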

The underlying idea of basing a signature scheme on the use of a good basis to find close vectors in a lattice was proposed by Goldreich, Goldwasser and Halevi in their 1998 paper, Public-key cryptography from lattice reduction problems. However, in that case the keys and signatures were of size N^2 ln N (where N is the dimension of the lattice), making the keys impractically large if the scheme were to be secure. The breakthrough in NTRUSign is that we have been able to use the techniques of the NTRU lattice to achieve an N-fold improvement in speed and key length.

NTRUSign and NTRUEncrypt are based on exactly the same hard lattice problem. For an introduction to how this lattice is used in NTRUEncrypt, see the Learning Center.

In both NTRUSign and NTRUEncrypt, key generation starts by generating two small private polynomials, f and g, and the larger public polynomial h = g/f mod q. All of these polynomials have the same form in NTRUSign as they do in NTRUEncrypt, and they form a good basis for half of the NTRU lattice. However, in NTRUSign, the person generating the key must also find a good basis for the other half of the lattice. This means that NTRUSign private keys are larger than the corresponding private keys for NTRUEncrypt. However, the underlying lattice problem is the same.
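The shared key structure can be sketched at toy scale (tiny N and q, brute-force inversion; real parameter sets are vastly larger and f, g have specific small-coefficient forms): pick small f and g, compute h = g/f mod q in the convolution ring Z_q[x]/(x^N - 1), and check that multiplying h by f returns g.

```python
# Toy illustration of h = g/f mod q in a convolution ring. The brute-force
# inverse search is feasible only at this tiny size.
from itertools import product

N, q = 5, 7

def conv(a, b):
    """Cyclic convolution (ring multiplication) mod q."""
    out = [0] * N
    for i in range(N):
        for j in range(N):
            out[(i + j) % N] = (out[(i + j) % N] + a[i] * b[j]) % q
    return out

f = [1, 1, 0, 6, 0]                  # small private polynomial (6 = -1 mod 7)
g = [0, 1, 6, 0, 1]                  # small private polynomial
one = [1, 0, 0, 0, 0]

# Find f^-1 by trying all q**N candidates (toy only; real NTRU inverts fast).
finv = next(list(c) for c in product(range(q), repeat=N) if conv(f, list(c)) == one)
h = conv(g, finv)                    # public key h = g/f mod q
assert conv(h, f) == g               # multiplying h by f recovers g
print("h =", h)
```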

NSS, the NTRU Signature Scheme, was first proposed at the rump session of Crypto 2000, and successfully attacked (following a series of other results) by Gentry and Szydlo at the rump session of Crypto 2001. NSS was also related to solving appr-CVP in the NTRU lattice. However, because the signer only knew half of the good basis, it was necessary to introduce a certain structure into the points representing messages to be signed. This structure was exploited by attackers.

NTRUSign, in contrast, is a transparent implementation of appr-CVP in the NTRU lattice. The points to be signed do not have any structure, nor do they need to. None of the attacks against NSS have any applicability against NTRUSign.

None of the attacks on NSS compromised the security of the NTRUEncrypt algorithm, or of the principles underlying NTRU's technology. Indeed, many of the attacks were based on observations of how the NSS algorithm differed from NTRUEncrypt:

- In an early version of the NSS algorithm, the private key polynomials had some coefficients that were much larger than the average. This fact, and an attack based on it, were noted independently by Mironov and the NTRU research team. The attack does not apply to NTRUEncrypt or NTRUSign. In these algorithms, the secret polynomials have very little structure, and their coefficients lie within a narrow range.

- The NSS algorithm relied for security on the fact that some coefficients of the signature polynomial had been reduced modulo the parameter q. However, the signature polynomial was the product of two small polynomials, and because of this it was possible for an attacker to detect which coefficients had been reduced and correct for this reduction, "lifting" the polynomial to the space of the integers and halving the effective size of the lattice. This attack does not apply to NTRUEncrypt. Although NTRUEncrypt relies for its security on the fact that the coefficients of the ciphertext have been reduced modulo q, the ciphertext is the product of one small polynomial and one (statistically) random one. This means that almost all coefficients will naturally be reduced mod q, and there appears to be no way for an attacker to lift in this case.

- In the version of the NSS algorithm that appears in the Eurocrypt proceedings, the verification tests did not check tightly enough that the signature was bound to the correct message. There is no analogy between this and any possible attack on NTRUEncrypt or NTRUSign.

Forging a signature given only a public key, or recovering a private key from the corresponding public key, are equivalent respectively to solving the approximate closest vector problem and the short vector problem in the NTRU lattice.

All lattice-based signature schemes slowly leak information about the geometry of the lattice, and it is important to be able to quantify the point at which this information is useful to an attacker.

After about 10,000-20,000 signatures have been generated with a given key, the attacker can obtain additional information, namely the Gram matrix of the lattice basis. This information (known as "second-moment information") does not reduce the dimension of the lattice problem which must be solved, and there is currently no known way of exploiting it. The current version of the paper additionally suggests simple enhancements to the signing algorithm which appear to remove all possibility of recovering useful information from second-moment attacks.

After many, many more signatures - significantly more than a billion - have been generated by a given key, additional information (known as "fourth-moment information") becomes available to an attacker which may endanger the security of the private key. This is the only known attack that recovers the private key in less time than an attack on the core NTRU lattice problem. To prevent it, it is sufficient to roll over the signing key after a billion signatures have been generated. In order to be conservative, we recommend that an implementation roll over the signing key after 10^7 signatures. This frequency of key rollover is consistent with good security practice for virtually any architecture. The enhancements mentioned above can also be extended to greatly reduce or eliminate the risk posed by fourth-moment attacks.

The hard problem underlying the NTRUEncrypt Cryptosystem is the problem of finding short vectors (SVP) or close vectors (CVP) in a lattice. Although it has not been proven that breaking NTRU is equivalent to solving this problem in a random lattice, the most effective known attacks on NTRU involve solving an SVP or a CVP without taking the precise structure of the lattice into account. (This is analogous to the fact that although breaking RSA has not been proven to be equivalent to factoring, the most effective known attacks involve factoring.)

A lattice L of dimension N is the collection of vectors

a1·v1 + a2·v2 + … + aN·vN, for all integers a1, a2, …, aN,

where v1, v2, …, vN is a basis of vectors for R^N. Cryptography uses lattices having integer coordinates.

- The Shortest Vector Problem (SVP) in L is the problem of finding the shortest nonzero vector in L.
- The Closest Vector Problem (CVP) in L is the problem of finding the vector in L that is closest to a given vector w in R^N. In full generality, the CVP is NP-complete.
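A tiny brute-force example makes the SVP definition concrete (dimension 2 with an illustrative basis and search bound; the search space grows exponentially with the dimension, so this approach is hopeless at cryptographic sizes):

```python
# Exhaustive-search SVP in dimension 2: try all small integer combinations
# of the basis vectors and keep the shortest nonzero result.

def shortest_vector_2d(v1, v2, bound=25):
    """Shortest nonzero a1*v1 + a2*v2 with |a1|, |a2| <= bound."""
    best, best_len = None, None
    for a1 in range(-bound, bound + 1):
        for a2 in range(-bound, bound + 1):
            if a1 == 0 and a2 == 0:
                continue
            w = (a1 * v1[0] + a2 * v2[0], a1 * v1[1] + a2 * v2[1])
            n = w[0] * w[0] + w[1] * w[1]
            if best_len is None or n < best_len:
                best, best_len = w, n
    return best

# Both basis vectors are long, yet the lattice contains a unit vector:
print(shortest_vector_2d((23, 2), (34, 3)))   # a vector of length 1
```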

Two early important results in the theory of lattices are:

- Hermite's Theorem (ca. 1880) - Gives an upper bound for the shortest nonzero vector in a lattice in terms of the dimension and the covolume of the lattice.
- Minkowski's Convex Body Theorem (1896) - A convex symmetric set in R^N with volume larger than 2^N · Covolume(L) contains a nonzero lattice vector.

Lattices and the Geometry of Numbers

- Lattices and the key problems (SVP and CVP) have been the subject of intense mathematical investigation for over 100 years.
- Minkowski named this subject the Geometry of Numbers. (Geometrie der Zahlen, Leipzig, 1910.)
- The "bible" is Lekkerkerker's Geometry of Numbers (1969): 510 pages long with a 32-page bibliography. When Lekkerkerker was updated (2nd edition, 1987), it grew to 732 pages with a 93-page bibliography.
- There were more than 14,000 articles published between 1986 and 1999 with the word "Lattice" in their title.
- Not all research on lattices is directly relevant to cryptography; there is no question that lattices are a fundamental object of study in algebra, geometry, analysis, and physics, as well as in cryptography.

Lattice Reduction: Finding Small Vectors in Practice

- Small vectors in a lattice may always be found by an exhaustive search. The exhaustive search algorithm is exponential in the dimension of the lattice.
- The most important modern advance in the algorithmic theory of lattice reduction (i.e., finding small vectors in lattices) is the LLL method of Lenstra, Lenstra and Lovász.

A.K. Lenstra, H.W. Lenstra Jr., L. Lovász, Factoring polynomials with rational coefficients, Mathematische Annalen 261 (1982), 513-534.

- LLL finds a moderately small vector in polynomial time.
- LLL was invented before lattices became important in cryptography. Lattice reduction is of independent interest in many fields.
- Improvements to LLL are due to Schnorr, Euchner, and others (deep insertions, block reduction, pruning). But finding very short vectors remains exponentially difficult.
- LLL has numerous applications in signal processing, combinatorics, computer algebra, algebraic number theory, Diophantine equations and physics, as well as cryptography.
- There is widespread interest in LLL and lattice reduction:
- The original LLL article was cited in 146 research articles (1986-1999).
- The LLL algorithm is included in many computer packages, including: Mathematica, Maple, Pari, Simath, NTL,…
- The LLL algorithm is featured in numerous books.

Ajtai and Dwork (1997) and Goldreich, Goldwasser, and Halevi (1997) proposed public key cryptosystems based on lattice problems. In both cases the public key consists of an entire basis for the lattice, so the key size is on the order of N^2 bits for a lattice of dimension N. For this reason, they require very large keys (around 1 MB) to be secure, which makes them impractical.

Earlier knapsack cryptosystems (Merkle-Hellman, Chor-Rivest) were also broken using LLL because their underlying lattices, at practical key sizes, have relatively small dimension and other undesirable characteristics.

The NTRUEncrypt Public Key Cryptosystem is based on the closest and shortest vector problems (CVP and SVP). However, the NTRU lattice is associated to a quotient (convolution) ring and its public keys are associated to a cyclically generated basis for the lattice. An NTRUEncrypt public key has length on the order of N bits for a lattice of dimension N. (More accurately, approximately (N/2)*log2(N/4) bits.)

This means that NTRU encryption can be practical, and indeed extremely fast. This is true even for lattices of dimension 500 to 1000, which are well beyond the reach of current technology to break.
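Plugging illustrative dimensions into the key-size estimate above (approximately (N/2)*log2(N/4) bits) shows how the sizes scale; the N values here are only examples, not parameter recommendations:

```python
# Evaluate the approximate public-key-size formula for a few sample
# dimensions (illustrative values of N only).
import math

def pubkey_bits(N):
    return (N / 2) * math.log2(N / 4)

for N in (251, 347, 503):
    print(f"N = {N}: ~{pubkey_bits(N):.0f}-bit public key")
```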

Copyright © 2016 Security Innovation, Inc. All rights reserved.