Using the Divide and Conquer approach to multiply arbitrarily large numbers
Summary
Multiplication is expensive, but addition and subtraction are cheap. Using some
clever algebra tricks, we can reduce the number of multiplications needed,
which decreases the asymptotic runtime of certain algorithms.
Given two complex numbers, a + bi and c + di:

(a + bi)(c + di) = (ac - bd) + (bc + ad)i

We need 4 real number multiplications:
ac
bd
bc
ad
Can we get by using only 3 multiplications?
We can compute bc + ad without computing bc or ad.
If we notice:

(a + b)(c + d) = ac + ad + bc + bd

We see the bc and ad terms that we want in our
previous example. We can then solve for the bc + ad term:

bc + ad = (a + b)(c + d) - ac - bd

So we can substitute this in for our original equation:

(a + bi)(c + di) = (ac - bd) + ((a + b)(c + d) - ac - bd)i

Which gets us down to 3 multiplication operations: ac, bd, and (a + b)(c + d)!
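A minimal Python sketch of this trick (the function name complex_mult_3 is mine, not from the notes), returning the real and imaginary parts of the product:

```python
def complex_mult_3(a, b, c, d):
    """Compute (a + bi)(c + di) using only 3 real multiplications."""
    ac = a * c
    bd = b * d
    t = (a + b) * (c + d)   # = ac + ad + bc + bd
    # real part: ac - bd; imaginary part: bc + ad = t - ac - bd
    return (ac - bd, t - ac - bd)

# (1 + 2i)(3 + 4i) = -5 + 10i
print(complex_mult_3(1, 2, 3, 4))
```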
Easy Multiply
Input: n-bit integers x and y, assume n is a power of 2
Goal: compute z = xy
Start by breaking each input into 2 halves (n/2 bits each):

x = 2^(n/2) * xl + xr
y = 2^(n/2) * yl + yr
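The split can be done with bit operations instead of string slicing (a sketch; split_bits is my name for the helper):

```python
def split_bits(x, n):
    """Split an n-bit integer x into high and low n/2-bit halves,
    so that x == (xl << (n // 2)) + xr."""
    half = n // 2
    xl = x >> half               # first n/2 bits
    xr = x & ((1 << half) - 1)   # last n/2 bits
    return xl, xr

# 182 = 10110110 in binary: high half 1011 = 11, low half 0110 = 6
print(split_bits(182, 8))
```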
Example
For x = 182 = 10110110 in binary (n = 8): xl = 1011 = 11, xr = 0110 = 6,
and indeed x = 2^4 * 11 + 6 = 182.
Multiplying out the two decompositions:

xy = (2^(n/2) * xl + xr)(2^(n/2) * yl + yr)
   = 2^n * (xl * yl) + 2^(n/2) * (xl * yr + xr * yl) + xr * yr

We have 4 products of n/2-bit numbers which we can recursively
compute:
xl * yl
xr * yr
xl * yr
xr * yl
Can we get down to 3?
Pseudocode
def EasyMultiply(x, y):
    # Base case: 1-bit inputs
    if x < 2 or y < 2:
        return x * y
    # n = bit length, padded to an even number so both halves are n/2 bits
    n = max(x.bit_length(), y.bit_length())
    if n % 2 == 1:
        n += 1
    # xl = first n/2 bits of x
    # xr = last n/2 bits of x
    binary = bin(x)[2:].zfill(n)
    xl = int(binary[:n//2], 2)
    xr = int(binary[n//2:], 2)
    # O(n) time to compute
    binary = bin(y)[2:].zfill(n)
    yl = int(binary[:n//2], 2)
    yr = int(binary[n//2:], 2)
    # Each takes T(n/2) time to compute
    A = EasyMultiply(xl, yl)
    B = EasyMultiply(xr, yr)
    C = EasyMultiply(xl, yr)
    D = EasyMultiply(xr, yl)
    # Shifts are O(n) and O(n/2) respectively
    Z = (2**n)*A + (2**(n//2))*(C + D) + B
    return Z
Runtime is T(n) = 4T(n/2) + O(n), or O(n^2).
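Unrolling this recurrence (a sketch: level k of the recursion tree has 4^k subproblems of size n/2^k, each contributing O(n/2^k) work):

```latex
T(n) = 4T(n/2) + O(n)
     = O\Big(\sum_{k=0}^{\log_2 n} 4^k \cdot \frac{n}{2^k}\Big)
     = O\Big(n \sum_{k=0}^{\log_2 n} 2^k\Big)
     = O\big(n \cdot 2^{\log_2 n}\big)
     = O(n^2)
```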
Fast Multiply
Given:

A = xl * yl
B = xr * yr
C = (xl + xr)(yl + yr) = xl*yl + xl*yr + xr*yl + xr*yr

Then:

xl * yr + xr * yl = C - A - B
xy = 2^n * A + 2^(n/2) * (C - A - B) + B
Pseudocode
def FastMultiply(x, y):
    # Base case: 1-bit inputs
    if x < 2 or y < 2:
        return x * y
    # n = bit length, padded to an even number so both halves are n/2 bits
    n = max(x.bit_length(), y.bit_length())
    if n % 2 == 1:
        n += 1
    # xl = first n/2 bits of x
    # xr = last n/2 bits of x
    binary = bin(x)[2:].zfill(n)
    xl = int(binary[:n//2], 2)
    xr = int(binary[n//2:], 2)
    # O(n) time to compute
    binary = bin(y)[2:].zfill(n)
    yl = int(binary[:n//2], 2)
    yr = int(binary[n//2:], 2)
    # Each takes T(n/2) time to compute
    A = FastMultiply(xl, yl)
    B = FastMultiply(xr, yr)
    C = FastMultiply(xl + xr, yl + yr)
    # Shifts are O(n) and O(n/2) respectively
    Z = (2**n)*A + (2**(n//2))*(C - A - B) + B
    return Z
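As a sanity check of this recursion, here is a standalone sketch (karatsuba is my name for it; it uses bit operations in place of string slicing) verified against Python's built-in multiplication:

```python
import random

def karatsuba(x, y):
    """3-multiplication divide and conquer, same recursion as FastMultiply."""
    if x < 2 or y < 2:              # base case: 1-bit inputs
        return x * y
    n = max(x.bit_length(), y.bit_length())
    n += n % 2                      # pad to an even bit length
    half = n // 2
    xl, xr = x >> half, x & ((1 << half) - 1)
    yl, yr = y >> half, y & ((1 << half) - 1)
    A = karatsuba(xl, yl)
    B = karatsuba(xr, yr)
    C = karatsuba(xl + xr, yl + yr)
    return (A << n) + ((C - A - B) << half) + B

# Verify against built-in multiplication on random 64-bit inputs
for _ in range(100):
    a, b = random.getrandbits(64), random.getrandbits(64)
    assert karatsuba(a, b) == a * b
```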
Runtime
T(n) = 3T(n/2) + O(n). Since the work per level forms an increasing geometric series, the total is on the order of the last term: O(n^(log2 3)), which is about O(n^1.585).
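The expansion of that series, as a sketch:

```latex
T(n) = 3T(n/2) + O(n)
     = O\Big(\sum_{k=0}^{\log_2 n} 3^k \cdot \frac{n}{2^k}\Big)
     = O\Big(n \sum_{k=0}^{\log_2 n} \big(\tfrac{3}{2}\big)^{k}\Big)
     = O\Big(n \cdot \big(\tfrac{3}{2}\big)^{\log_2 n}\Big)
     = O\big(3^{\log_2 n}\big)
     = O\big(n^{\log_2 3}\big) \approx O(n^{1.585})
```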