
3.2 Integer Representation and Arithmetic

3.2.1 Non-Redundant Representation

The β-ary encoding of an integer is an example of a non-redundant fixed-radix representation [Kor93a, Chapter 1]. Let the integer x be β-ary encoded as a string of n digits, x_{n−1} x_{n−2} … x_0. Then each digit belongs to the digit set {0, 1, …, β−1} and the value of x is expressed by

x = Σ_{i=0}^{n−1} x_i β^i        (3.3)

In this sum the weight of digit x_i is the i-th power of a fixed integer β, which is called the radix. The smallest possible value of x is 0 and the largest possible value is β^n − 1. Therefore the interval [0; β^n − 1] is called the range of the representable integers. Obviously, all possible values of x are integer values. In total there are β^n different strings of n digits. It is easy to see that an n-digit string is a unique representation of an integer x: each digit x_i, i = 0, 1, …, n−1, is uniquely determined by the expression x_i = (x div β^i) mod β. Therefore, all β^n integers in the range [0; β^n − 1] can be represented by an n-digit string. Since no two different strings represent the same value, the representation is said to be non-redundant.
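The digit-extraction expression x_i = (x div β^i) mod β and the valuation of Equation (3.3) can be sketched as follows; the function names are illustrative, not part of any referenced system.

```python
def to_digits(x, beta, n):
    """Return the n-digit beta-ary encoding of x as a list
    [x_0, x_1, ..., x_{n-1}], least significant digit first,
    using x_i = (x div beta^i) mod beta."""
    assert 0 <= x < beta ** n, "x must lie in [0; beta^n - 1]"
    return [(x // beta ** i) % beta for i in range(n)]

def from_digits(digits, beta):
    """Recover the value of Equation (3.3): x = sum_i x_i * beta^i."""
    return sum(d * beta ** i for i, d in enumerate(digits))

# Each integer in [0; beta^n - 1] has exactly one n-digit representation:
x = 203
digits = to_digits(x, 10, 3)      # [3, 0, 2], i.e. "203" read right to left
assert from_digits(digits, 10) == x
```

The round trip through `to_digits` and `from_digits` is the identity on [0; β^n − 1], which is exactly the non-redundancy property stated above.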

The most natural representation of integers in computer systems is the binary encoding, which is identified as a non-redundant radix 2 representation with the digit set {0, 1}. However, a string of bits can also be interpreted as a radix 2^k representation with the digit set {0, 1, …, 2^k − 1}. Then each group of k bits, starting from the least significant bit, is a radix 2^k digit. The value of such a digit is binary encoded. It is common to call radix 2^k representations high-radix representations whenever k is greater than 1.

Since it is just a matter of interpretation of the meaning of a string of bits, there is no additional cost when a representation is changed between a binary encoding and a 2^k-ary encoding. In the literature on computer arithmetic the term high-radix multiplication usually refers to a method where the number of multiplier bits scanned in each iteration is more than one bit, say k bits.
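That the change of representation is free can be seen directly: regrouping the bits of a binary-encoded integer yields its radix 2^k digits with no arithmetic on the value. A minimal sketch (the function name is illustrative):

```python
def to_radix_2k_digits(x, k, n_digits):
    """Interpret the binary encoding of x as radix 2^k digits by grouping
    k bits at a time, starting from the least significant bit.
    Only shifts and masks are needed -- no arithmetic on the value."""
    mask = (1 << k) - 1
    return [(x >> (k * i)) & mask for i in range(n_digits)]

x = 0b10110110                      # 182
# Two radix-16 digits (k = 4): low group 0110 = 6, high group 1011 = 11.
assert to_radix_2k_digits(x, 4, 2) == [6, 11]
```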

This corresponds to the 2^k-ary exponentiation method from Section 2.1.1. Similarly, the term high-radix modular multiplication corresponds to the 2^k-ary exponentiation method.
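A high-radix multiplication of the kind just described can be sketched as below: the multiplier is scanned one radix 2^k digit at a time, most significant digit first, so each iteration costs one shift by k positions and one addition of a digit multiple of the multiplicand. (In actual hardware the digit multiples would typically be precomputed; this sketch simply forms them on the fly.)

```python
def high_radix_multiply(a, b, k):
    """Multiply a by b, scanning the multiplier b in groups of k bits
    (radix 2^k digits), most significant digit first."""
    mask = (1 << k) - 1
    n_digits = max(1, (b.bit_length() + k - 1) // k)
    result = 0
    for i in range(n_digits - 1, -1, -1):
        result <<= k                  # multiply running result by the radix 2^k
        digit = (b >> (k * i)) & mask # next radix 2^k digit of the multiplier
        result += digit * a           # add the digit multiple of a
    return result
```

With k = 1 this degenerates to the ordinary binary shift-and-add method; larger k trades fewer iterations for more work per iteration.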

The literature on computer arithmetic contains many different methods for efficient addition of binary numbers. Since these methods are described in standard textbooks on computer arithmetic (e.g. [Hwa79, Obe79, Spa81, Sco85, Kor93a]) and on VLSI design (e.g. [GD85, WE92]), they will not be treated in detail in this thesis. A simple addition method is known as carry ripple addition. The worst case computing time for this method is proportional to the operand bit-lengths; in fact, if the bit-lengths are n bits the computing time is equal to the delay of n full adders. The hardware architecture of a carry ripple adder is simple and very regular: it is a row of n full adders, where the communication is limited to the nearest neighbours. The fastest addition methods, e.g. carry lookahead addition, perform the addition in a time proportional to log2 n. However, the architectures for these methods are more complex and less regular than a carry ripple adder. Assuming that negative integers are represented properly, e.g. by two’s complement, subtraction is accomplished by addition of a negative integer. Furthermore, comparison is done by subtraction followed by inspection of the resulting sign. Hence, addition, subtraction and comparison of non-redundant binary numbers can all be done in a time proportional to log2 n. No faster methods for these operations are known. Indeed, according to Koren [Kor93a, pp. 80–81], theoretical results indicate that the lower bound for binary addition is logarithmic in the number of bits.
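The row-of-full-adders structure of a carry ripple adder can be sketched as follows; the bit-list convention (least significant bit first) is chosen for this illustration only.

```python
def full_adder(a, b, cin):
    """One full adder: sum bit and carry-out of two bits and a carry-in."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def carry_ripple_add(xs, ys):
    """Add two n-bit numbers given as bit lists, least significant bit
    first. The carry ripples through a row of n full adders, so the
    worst-case delay is proportional to n."""
    assert len(xs) == len(ys)
    carry, out = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry                 # n sum bits plus the final carry-out

# 0110 (6) + 0111 (7) = 1101 (13), written here least significant bit first:
assert carry_ripple_add([0, 1, 1, 0], [1, 1, 1, 0]) == ([1, 0, 1, 1], 0)
```

Each full adder communicates only with its immediate neighbour through the carry signal, which is precisely the regularity property noted above.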

Integer Multiplication and Modular Multiplication

In the beginning of this chapter it was explained how (modular) multiplication can be viewed similarly to exponentiation. It was argued that if the addition composition is associative, the results obtained for exponentiation can be reused for multiplication. However, integer multiplication is inherently more efficient than integer exponentiation when the integers are represented by a fixed-radix representation. E.g., assume the integers are binary encoded and the bit-length is n. Further, assume the binary left-to-right exponentiation method in Section 2.1 is applied for both exponentiation and multiplication. The worst case computing time for an exponentiation is n−1 squarings plus n−1 multiplications. Integer squaring cannot be done essentially faster than integer multiplication¹, so the total time is about 2(n−1) multiplications. This indicates that the time for integer multiplication is about 2(n−1) additions. But observe that a squaring operation b∗b in the exponentiation process corresponds to a doubling operation b+b in the multiplication process. Since doubling can be done by a simple left-shift, the time for this operation is negligible in comparison to the time for a general addition. Hence, the worst case time for integer multiplication is about (n−1) additions when the binary method is used. Now, the computing time for doubling is negligible in integer multiplication, but what about doubling in modular multiplication? According to the simple modular addition method in Equation (3.2), a modular doubling can be achieved by a left-shift and a subtraction. The subtraction is needed to perform a

modular reduction of the left-shifted intermediate result. Even though a modular doubling is faster than a general modular addition, it is not negligible. A modular doubling requires one integer addition, while a general modular addition requires two integer additions. Hence, the worst case computing time for modular multiplication performed by the binary method corresponds to n−1 plus 2(n−1) integer additions; a total of 3(n−1) integer additions. This is three times more than the computing time for integer multiplication.

¹If some very fast integer squaring method appeared, then an integer multiplication could be implemented by the quarter-square multiplication method. According to Chen [Che71] this method was used in analog computation. The method is based on the expression

a · b = ((a + b)² − (a − b)²) / 4

Hence, integer multiplication is not significantly slower than squaring.
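The operation counts above can be made concrete with a sketch of the left-to-right binary method for both plain and modular multiplication. The code assumes a < m in the modular case, so that a single conditional subtraction after each shift or addition suffices for the reduction of Equation (3.2).

```python
def binary_multiply(a, b, n):
    """Left-to-right binary multiplication of a by the n-bit multiplier b.
    Doubling is a free left-shift; only the conditional additions cost a
    full addition each, for at most n-1 additions beyond the first bit."""
    result = 0
    for i in range(n - 1, -1, -1):
        result <<= 1                  # doubling: negligible (a shift)
        if (b >> i) & 1:
            result += a               # one integer addition
    return result

def binary_modular_multiply(a, b, m, n):
    """Same scan, but each doubling and each addition is followed by a
    modular reduction: one conditional subtraction suffices because the
    intermediate result always stays below 2m (assuming a < m)."""
    result = 0
    for i in range(n - 1, -1, -1):
        result <<= 1                  # modular doubling: a shift ...
        if result >= m:
            result -= m               # ... plus one subtraction
        if (b >> i) & 1:
            result += a               # general modular addition:
            if result >= m:           # one addition plus
                result -= m           # one subtraction
    return result
```

Counting the subtractions as integer additions reproduces the 3(n−1) worst case derived above: one per doubling and two per general modular addition.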

If the parallel computation of the binary right-to-left method in Section 2.3 is applied, the computing time for the integer multiplication corresponds to n integer additions while the computing time for the modular multiplication corresponds to 2n integer additions.
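The right-to-left variant can be sketched as below. Here the doubling updates the multiplicand a rather than the running result, so the doubling and the accumulation operate on different operands and could proceed in parallel in hardware, which is the source of the improved operation counts just stated. The optional-modulus interface is an illustration device, not a construction from the referenced section.

```python
def binary_multiply_rtl(a, b, m=None):
    """Right-to-left binary multiplication: scan the multiplier b from the
    least significant bit. The doubling of a and the accumulation into
    result are independent operations. With a modulus m (and a < m),
    a conditional subtraction keeps every operand reduced."""
    result = 0
    while b > 0:
        if b & 1:
            result += a               # accumulate the current multiple
            if m is not None and result >= m:
                result -= m
        a <<= 1                       # doubling for the next bit
        if m is not None and a >= m:
            a -= m
        b >>= 1
    return result
```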