Wednesday, June 29, 2016

Supersum? Subproduct?

When I was in school, like all the other kids, I was taught addition, multiplication and, a little later, exponentiation. There are several ways to think about these operations. Some of them have been very fruitful in mathematics at large (like groups and geometry) and others less so, like iterated composition (unless you think of arrow composition in category theory as abstracted composition), so many mathematicians don't think about them much.

My plan is to try to write this post with as few prerequisites as possible and to tell a story about some of my explorations in this area. This may trump rigour, precision and formalism. Everything I will write about here can be proven and made completely formal. I will also write the story as if it had been presented to me coherently and clearly, which was, unfortunately, not the case. There were many holes, which I filled in on my own later. Of course, my memory of it is also diffuse after so much time.

The way addition was presented to me in school was by counting.

$$ a + b = \underbrace{1+1+\cdots}_{a\ times}+ \underbrace{1+1+\cdots}_{b\ times}$$.


Later followed by tables to make calculations easier, of course.


Then they defined multiplication, which was more interesting,

$$ a * b = \underbrace{a+a+a\cdots}_{b\ times}=\underbrace{b+b+b\cdots}_{a\ times}$$.

again, followed by tables.

By looking at the definition, it is not evident that the two ways of writing the same multiplication give the same result, i.e. that $a*b = b*a$.
We also used another, equivalent, definition which was geometric, appealing to the intuitive idea of area. This was done both discretely, by arranging dots in a rectangle and counting them, and continuously, with the idea of area (or, for three factors and three dimensions, volume).

The area inside the rectangle is the product of the two sides, so the operation has to be commutative. If I remember correctly, this was both the definition of area and a kind of proof of commutativity (this was at school; as I said, the exposition was not very precise).

We learnt about all the properties of both operations. Even if at the time we didn't know the name, we learnt that addition, and multiplication with the 0 removed, form Abelian groups:
  1. Associative, $(a+b)+c = a+(b+c)$ and $(a*b)*c = a*(b*c)$.
  2. Commutative, because the groups are Abelian, $a+b = b+a$ and $a*b = b*a$.
  3. Identity element, $a+0 = a$ (for multiplication, $a*1 = a$).
  4. For each element there is an inverse, $a-a = 0$, $a/a = 1$. We write the inverse of $a$ for addition as $-a$ and the inverse of $a$ for multiplication as $1/a$. $0$ has no inverse for multiplication, which is why it is left out; it is instead called the absorbing element, because $a*0 = 0$ for all $a$.
  5. A group also needs to be closed: the operation applied to any two numbers of the set cannot take us out of the set.
We have one more property, which relates addition and multiplication, the distributive property
$$a*(b+c) = a*b + a*c.$$
 This property is quite intuitive if we draw the operation.

We can use the distributive property to define negative products,
$$a*(b-b) = 0 = a*b + a*(-b),$$
so
$$a*(-b)  = -(a*b)$$
and so on.
 A little later, exponentiation was defined, which also has a definition as iterated composition
$$ a ^ b = \underbrace{a*a*a\cdots}_{b\ times},$$
where $a$ is called base and $b$ is the exponent.
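These iterated definitions are easy to play with in code. Here is a minimal Python sketch (the function names are mine, purely for illustration) that builds multiplication from repeated addition and exponentiation from repeated multiplication, for a non-negative integer second argument:

def times(a, b):
    # a * b as "add a to itself b times"
    result = 0
    for _ in range(b):
        result += a
    return result

def power(a, b):
    # a ** b as "multiply by a, b times"
    result = 1
    for _ in range(b):
        result *= a
    return result

print(times(3, 4), times(4, 3))  # 12 12: the asymmetric definition hides a symmetric result
print(power(2, 3), power(3, 2))  # 8 9: for exponentiation the symmetry is gone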
The first surprise is that the exponentiation is not commutative, so $a^b \neq b^a$ in general.
We learnt about many properties of exponentiation, but I will focus on two,
$$a^{b+c} = a^b*a^c $$
and
$$e^{b\ ln(a)} = a^b$$
where $e$ is a special number, the Euler number, $2.7182818284 \ldots$ and
$ ln(a) $ is the natural logarithm, the logarithm with base $e$, which is the inverse of the exponential operation: the solution $x$ to $e^x = a$.
Note that I have jumped from the definition of exponentiation by iterated composition (which only works for integer exponents) to something which can be defined for real numbers. Real numbers include fractions, like $0.5$, and numbers with infinitely many non-repeating digits after the decimal point, like $e$ or $\pi$. We spent a year going from one to the other, and this was the first time I was exposed to an interesting proof and a generalization.
With the iterated composition definition, it is easy to find the result with a fractional base, for example
$$0.5^3 = 0.5*0.5*0.5=0.125$$.
The problem appears when we want to use a fractional exponent,
$$3^{0.5} = ??? $$
How do we apply the operation $0.5$ times? The problem here is that the definition is inherently asymmetric (which is why it is surprising that multiplication, which is defined similarly, is commutative).
The approach used to fix the problem is very interesting. We can use the property above,
$a^{b+c} = a^b * a^c$, applied repeatedly, to find
$(a^b)^c = a^{b*c}$.
We need one more ingredient, negative exponents, but
$$a^{0} = a^{1-1} = a^{1}*a^{-1}$$
and $a^{0} = 1$ so $a^{-1} = mult\_inverse(a)$.
 The inverse element can be used to define the division, $a*mult\_inverse(b) = a/b$ which is the inverse operation of the multiplication.

Why is $a^0 = 1$? This is an intriguing question; it is not evident why applying the operation $0$ times should yield $1$. To see why, we can use again $a^{b+c} = a^b * a^c$:
when $c = 0$, $a^{b+0} = a^b * a^{0} = a^b$, so $a^{0}$ must be $1$. For everything to work, the identity of multiplication
has to be the result of exponentiating by the identity of addition.

We can finally find the value when the exponents are fractions by solving the equation $a = (a^{1/b})^b$. For example, if we are looking for $a^{0.5}$, we only need to find a number that multiplied by itself is $a$. For more general fractions, we can do it by parts, $a^{b/c} =(a^b)^{1/c}$.
We already know how to do it with negative exponents, and they can be combined. For more general exponents, i.e. real numbers, we can approximate them as much as we want by greater and greater $b$ and $c$ in the fractions $\frac{b}{c}$ (more generally using a limit). What we are doing here is finding the completion of the rational numbers (fractions) which are the real numbers. This is a theme that appears a lot in mathematics.
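A quick numerical sketch of this recipe in Python (my own throwaway code, nothing standard): get $a^{1/c}$ by solving $x^c = a$ with bisection, and combine it with integer powers for a general fraction $b/c$.

def nth_root(a, c, tol=1e-12):
    # solve x**c = a for x >= 0 by bisection (a >= 0, c a positive integer)
    lo, hi = 0.0, max(1.0, a)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** c < a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def rational_power(a, b, c):
    # a**(b/c) computed as (a**b)**(1/c)
    return nth_root(a ** b, c)

print(rational_power(3, 1, 2))  # ~1.7320508..., the number that multiplied by itself gives 3
print(3 ** 0.5)                 # the built-in result, for comparison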
 All this gives us a definition, but it is not very satisfactory for calculation. For that, we normally use more powerful tools: infinite sums, which in math are called series.
I am not going to derive them here, but I will write them down anyway, because they will appear later.

First let's define the exponential function as $f(x) = e^x$. For an integer value $x=n$, you can obtain the value of the function by using $e$, the number above (technically, we have to play with derivatives, series and other more advanced tools to define it), and the iterated composition definition. For example,
$f(3) = e*e*e$. For fractional or real exponents, you use the approach described above.
We can then define the inverse function of $e^x$, which we have already seen, the natural logarithm, defined by $ln(e^x) = x$ and conversely
$e^{ln(x)} = x$.

Euler discovered that the exponential can be written as an infinite series (infinite sum)
$$e^x = 1 + x+ \frac{x^2}{1*2} +\frac{x^3}{1*2*3} + \frac{x^4}{1*2*3*4}+\ldots$$.
We can write $1*2*3*\cdots*n = n!$, which is called the factorial.
There is a similar series for the logarithm,
$$ln(x) = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \frac{(x-1)^4}{4}\ldots$$

Even though the series are infinite, the terms keep getting smaller (for the logarithm series this is only true when $x$ is not too far from $1$), so, by picking a reasonable
number of terms, we get a good enough approximation of the value we are looking for.
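As a small Python sketch (the number of terms is an arbitrary choice of mine), the truncated series can be compared directly with the library functions:

import math

def exp_series(x, terms=30):
    # truncated series for e**x: sum of x**k / k!
    return sum(x ** k / math.factorial(k) for k in range(terms))

def ln_series(x, terms=200):
    # truncated series for ln(x) in powers of (x - 1); it only converges for 0 < x <= 2
    return sum((-1) ** (k + 1) * (x - 1) ** k / k for k in range(1, terms))

print(exp_series(1.0), math.exp(1.0))  # both ~2.718281828...
print(ln_series(1.5), math.log(1.5))   # both ~0.405465...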

We can use the fact that the logarithm is the inverse of the exponential,
$x^y = (e^{ln(x)})^y= e^{ln(x)*y}$,
and the series to calculate exponentiation for any base and exponent. We can also start with the series; this is an alternative but equivalent approach to defining exponentiation for any values.

So, after this rather long introduction, we are in a position to understand what my frame of mind was when I asked my high school teachers the following two questions:
  1. Does it make sense to define new operations by iterated composition beyond exponentiation? What are their properties?
  2. We have defined a kind of fractional composition for multiplication and exponentiation (multiplication by a fractional number and a fractional exponent, respectively); can we do the same in general and find an operation between addition and multiplication?
 The first question was the simplest and was the first one that drew my attention. Of course, in the intellectual wasteland that is (or at least used to be) secondary math education in Spain, my teachers didn't help. When I asked them any questions about material not in the syllabus their eyes would glaze over or I would get told off for being weird. Also, there was no internet because it didn't exist yet, so I had no way of knowing if what I was doing had been previously done. More concretely, I continued iterating the exponentiation to get a new operation,
$$\underbrace{a^{a^{a^{a\ldots}}}}_{b\ times}.$$
The operation can be interpreted in two ways, one of which is actually a new operation and one of which is not.

The first way to parenthesize,
$$\underbrace{{(({a^a})^a})^a\ldots}_{b\ times}=a^{a^{b-1}},$$
is not a new operation, whereas
$$\underbrace{a^{(a^{(a^{(a\ldots)})})}}_{b\ times}$$
is actually a new operation. We can continue defining an infinite sequence of operations by continuing to iterate further each of them.
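A small Python sketch (mine, just to make the difference concrete) shows the two bracketings; only the right-associative tower is a genuinely new operation:

def tetration(a, b):
    # right-associative tower a^(a^(a^...)) with b copies of a
    result = 1
    for _ in range(b):
        result = a ** result
    return result

def left_tower(a, b):
    # left-associative tower ((a^a)^a)^... with b copies of a
    result = a
    for _ in range(b - 1):
        result = result ** a
    return result

print(tetration(2, 4))                   # 2**(2**(2**2)) = 65536
print(left_tower(2, 4), 2 ** (2 ** 3))   # both 256, i.e. a**(a**(b-1))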
I wrote down all the properties I found. Later, the internet appeared and I discovered that
my operation was actually called tetration and the following ones are hyperoperations.
So I was late for these discoveries, but nevertheless, I got the pleasure of finding them myself.
There is a lot of work to do in this area still, but this post is about the second question.

The second question was much more difficult to answer. To make it more precise,
if we index the operations using a variant of Knuth notation, where there is an arrow to indicate the operation and a superscript to indicate which hyperoperation we are talking about:
$$a*b = a\uparrow^0 b$$
$$a^b = a\uparrow^1 b$$
Then, what would $a\uparrow^{0.5}b$ be?

I tried different strategies without success.
My first strategy was to try to generalize iterated function composition. To go from addition to multiplication, we are actually composing a function with itself many times. We start with
$$f(x, y) = x + y,$$
and then
$$x*y = \underbrace{f(x, f(x, \ldots f(x, 0)\ldots))}_{y\ times}.$$
Then we iterate for exponentiation and so on.

If we could iterate, say, $0.5$ times, then we could define an operation between the sum and the multiplication. I will be calling it supersum for now, but I don't know what the right name for it is, hence the title of the post.
There are various places in mathematics where the concept of fractional composition appears in some sense, including multiplication $a*0.5$ and exponentiation $a^{0.5}$. The most notable is in linear transformations. One can define the square root of a matrix, either by means of series or by diagonalization, and multiplying by this matrix represents a fractional application of the linear transformation.
Could we turn our problem into one of fractional composition?
Trying to apply this approach to our problem in an elegant way is very difficult, at least for me. I got stuck for a long time.
I tried various other strategies, also without success, which I don't even remember.
Then, at some point, while reading about Lie groups, I had my first break.
I had the idea of using the exponential map of Lie groups. The approach is simple. We can use an exponential function to map addition to multiplication
$$a^{(b+c)} = a^b * a^c.$$
I only had to find an exponential for the multiplication, a function $r(x)$ for which the analogous formula holds,
$$r(a*b) = r(a) *r(b).$$ Then I could interpolate somehow between both exponentials.
This does not uniquely define the function we are looking for; for example, the identity would fit the bill.
I had to get the inspiration elsewhere: the Mellin transform. I won't go into depth here, but the Mellin transform is invariant under dilation (multiplication), whereas the Fourier transform is invariant under translation (addition). We can use the functions found in the Mellin transform and they will be the exponentials we seek. The new exponential for multiplication is then $r(x) = 1/x$. Now we need a map which varies from one to the other. It is not going to be continuous, because one of them is not defined at $0$, so we have to "break" the exponential to convert it into $r(x)$.
After a lot of time staring at these functions, I finally got my second break. Enter the Mittag-Leffler function, which plays the role of the exponential for fractional derivatives.

$$E_\alpha(x) = 1 + \frac{x}{\Gamma(\alpha +1)} + \frac{x^2}{\Gamma(2\alpha +1)} + \cdots = \sum_{k=0}^{\infty}\frac{x^k}{\Gamma(\alpha k +1)}.$$
The gamma function, $\Gamma(\alpha)$, also discovered by Euler, is a continuous generalization of the factorial, indeed if $n$ is a positive integer number,
$\Gamma(n) = (n-1)!$.
For Mittag-Leffler functions, when $\alpha = 1$, we have $E_1(x) = e^x$. When $\alpha = 0$, the function is $E_0(x) = 1/(1-x)$.
$E_0$ is not the $r(x)$ we were looking for, but it is close enough. We can use the function to define the supersum, when $0\leq \alpha \leq 1$,
$$a\circledast_{\alpha} b = E_{\alpha}^{-1}(E_{\alpha}(a)*E_{\alpha}(b))$$
where $E_{\alpha}^{-1}(x)$ is the inverse of $E_{\alpha}(x)$. We leave the proof that it meets all the properties to the reader (it is in the paper linked at the end of the post).
 It has the particular cases
$$a\circledast_1b = a+b = ln(e^a*e^b) $$
and
$$a\circledast_0b = a+b-a*b = c,$$
which can be rewritten as
$$c-1 = -(a-1)*(b-1),$$
which is a form of multiplication (reversed and shifted, which can be fixed, but it is essentially a multiplication). It gets better if you use $E_{\alpha}(-x)$, and you can shift the neutral element,
but I will leave that for another post. You may want to see an interesting (introductory) video related to these ideas.
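The whole construction is easy to sketch numerically. The following Python code is my own throwaway sketch (not the paper's): it approximates $E_{\alpha}$ with a truncated series, inverts it by bisection, and checks the two particular cases above for small positive arguments.

import math

def mittag_leffler(x, alpha, terms=150):
    # truncated series E_alpha(x) = sum over k of x**k / Gamma(alpha*k + 1)
    return sum(x ** k / math.gamma(alpha * k + 1) for k in range(terms))

def ml_inverse(y, alpha):
    # solve E_alpha(x) = y for x >= 0 by bisection; E_alpha is increasing there
    lo, hi = 0.0, 1.0
    while mittag_leffler(hi, alpha) < y:
        hi *= 2
    for _ in range(80):
        mid = (lo + hi) / 2
        if mittag_leffler(mid, alpha) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def supersum(a, b, alpha):
    # a (*)_alpha b = E_alpha^{-1}( E_alpha(a) * E_alpha(b) )
    return ml_inverse(mittag_leffler(a, alpha) * mittag_leffler(b, alpha), alpha)

print(supersum(2.0, 3.0, 1.0))                         # ~5.0, plain addition
print(supersum(0.3, 0.4, 0.0), 0.3 + 0.4 - 0.3 * 0.4)  # both ~0.58
print(supersum(0.3, 0.4, 0.5))                         # something in between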
There are many ways to extend and apply this new group and I am still sifting through the results. For example, we can use it to define a transform (this is what abstract harmonic analysis is about, read more here), but again, this is a story for another post.
I have informally called this group (technically, the Lie group determined by the exponential map defined by the Mittag-Leffler functions applied to the multiplication) the supersum.

What would you name it?

For more details on all this, see the paper I wrote. It is an Accepted Manuscript of an article published by Taylor & Francis in Integral Transforms and Special Functions and is available online: http://www.tandfonline.com/doi/full/10.1080/10652469.2016.1267730.





Tuesday, April 26, 2016

Wallis sieve, and lp n-balls III

This is the third installment, do read the previous posts. I left some standing questions in the last post, and I am going to answer at least one of them: can we squeeze an $l_p$ ball's volume out of a generalized Wallis sieve? At the same time I will generalize and simplify the proof from the xkcd post. First, let's generalize the Wallis sieve.

In $d$ dimensions, starting with a $p$ sided hypercube and cutting it appropriately (I leave to the reader to draw it), we get the product,
$$A_n^d = \prod_{n=1}^\infty \frac{p^d n^{d-1} \Big(n+\frac{d}{p}\Big)} {(pn+1)^d},$$
which can be rewritten as the limit
$$A_n^d = lim_{n\to\infty} \frac{p^{nd}\Gamma(n+1)^{d-1}\Gamma\Big(n+1+\frac{d}{p}\Big)\Gamma\Big(1+\frac{d}{p}\Big)^d}{p^{nd}\Gamma(1+\frac{d}{p})\Gamma\Big(n+1+\frac{1}{p}\Big)^d}$$
where I have made use of the Euler gamma function recurrent properties (see previous posts).
Note that the $d$ in $A_n^d$ is an index, not an exponent.
 The limit can be separated into two factors (after cancelling the $p^{nd}$),
$$\lim_{n\to\infty} A_n^d = \lim_{n\to\infty}\Bigg[ \frac{\Gamma(n+1)^{d-1}\Gamma\Big(n+1+\frac{d}{p}\Big)}{\Gamma\Big(n+1+\frac{1}{p}\Big)^d} \Bigg]\Bigg[\frac{\Gamma\Big(1+\frac{1}{p}\Big)^d}{\Gamma\Big(1+\frac{d}{p}\Big)}\Bigg].$$
Remember that $\frac{\Gamma(n+a)}{\Gamma(n)}\sim n^a$, so the first term when $n\to\infty$
$$ \frac{\Gamma(n+1)^{d-1}\Gamma\Big(n+1+\frac{d}{p}\Big)}{\Gamma\Big(n+1+\frac{1}{p}\Big)^d} = \frac{\Gamma(n+1)^{d-1}\Gamma\Big(n+1+\frac{d}{p}\Big)}{\Gamma\Big(n+1+\frac{1}{p}\Big)^{d-1}\Gamma\Big(n+1+\frac{1}{p}\Big)} \sim \frac{n^{\frac{d-1}{p}}}{n^{\frac{d-1}{p}}}\sim 1.$$
So we obtain, finally,
$$\lim_{n\to\infty} A_n^d = \frac{\Gamma\Big(1+\frac{1}{p}\Big)^d}{\Gamma\Big(1+\frac{d}{p}\Big)} = V_d^p\Big(\frac{1}{2}\Big).$$
The value of the limit is the volume of the $l_p$ ball, with the case of the hypersphere $p=2$ being a particular case.
Another remarkable case happens when $p=1$: we obtain the volume of the $d$-dimensional cross-polytope of radius $R=\frac{1}{2}$, which is $\frac{1}{d!}$. The cross-polytope is the generalization of the octahedron to $d$ dimensions and is the dual of the hypercube we started with.
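As a sanity check, here is a small Python sketch of mine that compares the partial products of the generalized sieve with $V_d^p\big(\frac{1}{2}\big)$ (the number of factors is arbitrary; the convergence is slow, Wallis-like):

import math

def sieve_partial(d, p, terms):
    # partial product of the generalized Wallis sieve in d dimensions
    prod = 1.0
    for n in range(1, terms + 1):
        prod *= p ** d * n ** (d - 1) * (n + d / p) / (p * n + 1) ** d
    return prod

def lp_ball_volume(d, p, r=0.5):
    # volume of the l_p ball of radius r in d dimensions
    return (2 * math.gamma(1 / p + 1) * r) ** d / math.gamma(d / p + 1)

print(sieve_partial(3, 2, 100000), lp_ball_volume(3, 2))  # both ~0.5236, the ball of radius 1/2
print(sieve_partial(3, 1, 100000), lp_ball_volume(3, 1))  # both ~0.1667 = 1/3!, the octahedron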

This formula also lets us interpret the volume of various fat Cantor and other Smith-Volterra-Cantor sets.
The question still standing from the last post is: is there a geometrical interpretation for the intermediate $A_n^d$? And, of course, what more can we learn from this relation between $l_p$ balls and these sets?



Saturday, April 23, 2016

Wallis sieve, and lp n-balls II

This post is a continuation of this one. In it I talked about this video by Matt Parker and some of its consequences in two dimensions. In the video, he also asserted that, in three dimensions, cutting off pieces of a cube in the same way as is done in the Wallis sieve would give back the volume of a sphere (see the pretty drawings in the post by Evelyn Lamb).
This opened the question of whether this would happen in higher dimensions. Someone on Twitter (thanks!) pointed me to this post in the xkcd forum, where they prove this fact. I am going to transcribe, dissect and explain that proof. The original ideas are all from the post, and the mistakes are all mine.
First, remember the basic Wallis product formula,

$$\frac{\pi}{4}=\prod_{n=1}^{\infty}\frac{4n(n+1)}{(2n+1)^2}=lim_{n\to\infty} \frac{4^n n! (n+1)!}{(2n+1)!!^2}.$$

Remember that the double factorial is the product $n!! = n(n-2)(n-4)\cdots~$ which stops when the terms would cease to be positive, i.e. with $⌈n/2⌉$ terms. An important property of the double factorial is that it can be written in terms of the Euler gamma function,
$$\Gamma\Big(n+\frac{1}{2}\Big) = \frac{(2n-1)!!\sqrt{\pi}}{2^n}.$$
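Both facts are easy to check numerically; here is a small Python sketch of mine (the number of factors is arbitrary):

import math

def wallis_partial(terms):
    # partial product of 4n(n+1)/(2n+1)^2, which tends to pi/4
    prod = 1.0
    for n in range(1, terms + 1):
        prod *= 4 * n * (n + 1) / (2 * n + 1) ** 2
    return prod

def double_factorial(n):
    # n!! = n(n-2)(n-4)...
    return math.prod(range(n, 0, -2))

print(wallis_partial(100000), math.pi / 4)  # both ~0.785398...
n = 5
print(math.gamma(n + 0.5), double_factorial(2 * n - 1) * math.sqrt(math.pi) / 2 ** n)  # both ~52.34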

Remember also that the volume of an $l_p$ hyperball is
$$V_d^p(R) = \frac{(2\Gamma(\frac{1}{p} + 1)R)^d}{\Gamma(\frac{d}{p} + 1)}.$$
So, taking into account $\Gamma\big(\frac{3}{2}\big)=\frac{\sqrt{\pi}}{2}$, the volume of a $d$-hypersphere, which is the $l_2$ hyperball, is
$$V_d^2(R) = \frac{R^d\pi^{d/2}}{\Gamma(\frac{d}{2} + 1)}.$$

The Wallis product can be written,
$$\frac{\pi}{4}=\prod_{n=1}^{\infty}\Bigg[1-\frac{1}{(2n+1)^2}\Bigg],$$
which is convergent, and we can move around terms (if we are careful), because the series
$$\sum_{n=0}^{\infty}a_n=\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2},$$
is absolutely convergent and we can apply the test for product convergence.

The general product for the Wallis sieve in $d$ dimensions turns out to be
$$\prod_{n=1}^\infty\frac{2^dn^{d-1}(n+\frac{d}{2})}{(2n+1)^d}.$$

For even $d$ we can write this product as
$$lim_{n\to\infty} \frac{2^{nd}(n!)^{d-1}(n+\frac{d}{2})!}{(\frac{d}{2})!((2n+1)!!)^d}.$$
What we have done here is expand the factors coming from $n+\frac{d}{2}$ into $\frac{(n+\frac{d}{2})!}{(\frac{d}{2})!}$. This is the tricky part where we have assumed $d$ to be even. For odd $d$ we can either rewrite in terms of $k$, i.e. $d=2k+1$, and follow a similar approach, or be more general and use the gamma function.

 The trick to calculate the limit is to separate it into the product of two parts, one of which can be identified as a Wallis product, between square brackets,
$$\lim_{n\to\infty}  A_n^d = \lim_{n\to\infty}  \Bigg[\frac{4^{n-1} n! (n-1)!}{((2n-1)!!)^2}\Bigg]^{d/2}\Bigg(\frac{2^dn^{d/2}(n+\frac{d}{2})!}{(\frac{d}{2})!n!(2n+1)^d}\Bigg),$$
$$\lim_{n\to\infty}  A_n^d =\Big(\frac{\pi}{4}\Big)^{d/2}\Bigg(\frac{2^dn^{d/2}(n+\frac{d}{2})!}{(\frac{d}{2})!n!(2n+1)^d}\Bigg).$$

The last part is to show that the right term converges to $\frac{1}{(\frac{d}{2})!}$.
We show it by parts, first, when $n \to \infty$
$$\frac{\Big(n+\frac{d}{2}\Big)!}{n!}=(n+1)(n+2)\cdots(n+\frac{d}{2}) \sim n^{d/2},$$
 so the limit can be rewritten as
$$\lim_{n\to\infty}  A_n^d =\Big(\frac{\pi}{4}\Big)^{d/2}\Bigg(\frac{(2n)^d}{(\frac{d}{2})!(2n+1)^d}\Bigg).$$

 Finally, for big enough $n$, we can approximate the second term as
$$\frac{(2n)^d}{(\frac{d}{2})!(2n+1)^d}\sim\frac{1}{(\frac{d}{2})!}=\frac{1}{\Gamma(\frac{d}{2}+1)},$$
so
$$lim_{n\to\infty}  A_n^d = \frac{(\frac{\pi}{4})^{d/2}}{\Gamma(\frac{d}{2} + 1)}=V_d^2\Big(\frac{1}{2}\Big).$$
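To convince myself of the factorial rewriting and of the final limit, here is a quick Python check (my own sketch; $n$ is arbitrary and the convergence is slow):

import math

def sieve_partial_even_d(n, d):
    # 2^{nd} (n!)^{d-1} (n + d/2)! / ( (d/2)! ((2n+1)!!)^d ), for even d
    dbl = math.prod(range(2 * n + 1, 0, -2))  # (2n+1)!!
    num = 2 ** (n * d) * math.factorial(n) ** (d - 1) * math.factorial(n + d // 2)
    den = math.factorial(d // 2) * dbl ** d
    return num / den

d = 4
print(sieve_partial_even_d(2000, d))                     # slowly approaching the value below
print((math.pi / 4) ** (d / 2) / math.gamma(d / 2 + 1))  # the claimed limit, (pi/4)^2/2 ~ 0.3084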

So, now that we have the general formula for $d$ dimensions, the question still stands, can we interpret $A_n^d$ in geometrical terms as we did for $d=2$? What about $l_p$ for $p\neq2$? Stay tuned.

Edit: see the   next post.






Wednesday, April 20, 2016

Wallis sieve, and lp n-balls

I heard about the Wallis sieve for the first time in this video by Matt Parker, which is fascinating. I instantly recognized the pattern. I thought the relation between the Wallis sieve and the formula for the volume of an lp n-ball would be trivial and well-known; it turns out it is neither. Later, I read the blog post in Scientific American by Evelyn Lamb and I still thought it would be easy to relate both. Finally, I sat down, did the work and found that the result is not only surprising, but (at least to me) completely non-obvious, and has the potential to be very interesting.

The volume of an $l_p$ n-ball, a generalized ball of radius $R$, is (for more details on the calculation and the history of the formula, see Xiafu Wang's paper)
$$V_d^p(R) = \frac{(2\Gamma(\frac{1}{p} + 1)R)^d}{\Gamma(\frac{d}{p} + 1)}.$$

Note that the ball for $l_2$ is a hypersphere of dimension $d$.
This explains the relation between the Euler gamma function and $\pi$, one of my favorites,
 $$\Gamma\Big(\frac{3}{2}\Big) = \frac{\sqrt{\pi}}{2}.$$
At the same time, the gamma function is a generalization of the factorial and satisfies all sorts of recursive formulas similar to the Wallis sieve.

I will refer you to Evelyn Lamb's post for a detailed introduction, but the Wallis sieve can be easily written as a limit using the gamma formula,

$$\frac{\pi}{2} = \prod_{n=1}^{\infty}\Bigg[\frac{(2n)^2}{(2n-1)(2n+1)}\Bigg] = \frac{2\cdot 2\cdot 4\cdot 4\cdot 6\cdot 6\ldots}{1\cdot 3\cdot 3\cdot 5\cdot 5\cdot 7\ldots},$$
 which can be rewritten as a limit,
$$\lim_{n\to\infty} \frac{2^{4n}}{n{{2n}\choose{n}}^2} = \pi \lim_{n\to\infty} \frac{n \Gamma(n)^2}{\Gamma(\frac{1}{2}+n)^2} = \pi.$$

We can then apply the gamma duplication formula,

$$\Gamma(z)\Gamma(z+\frac{1}{2})= 2^{1-2z}\sqrt{\pi}\Gamma(2z),$$

and the functional relation,

$$\Gamma(z+1) = z\Gamma(z),$$

to rewrite again the limit,

 $$\pi = \lim_{n\to\infty} n\Bigg[\frac{\Gamma(n)^2}{\Gamma(2n)2^{1-2n}}\Bigg]^2 = \lim_{n\to\infty} \frac{1}{n}\Bigg[\frac{\Gamma(1+n)^2 2^{2n}}{\Gamma(2n+1)}\Bigg]^2,$$
so
$$\pi = \lim_{n\to\infty}\frac{\Big(V_2^{\frac{1}{n}}(2^n)\Big)^2}{16n}.$$

This is, to say the least, surprising. Instead of hyperspheres and a trivial relationship, we get something which looks like an astroid (the image comes from Wikipedia).

So it is the limit of the square of the volume of this figure as it collapses upon itself, its inner radius getting smaller while the outer radius grows. This result is bizarre and not at all trivial.
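A quick numerical check of the limit above, again a throwaway Python sketch of mine (the gamma function overflows in double precision well before $n=100$, so $n$ stays small):

import math

def v2_ball(n, r):
    # volume of the two-dimensional l_{1/n} ball of radius r
    return (2 * math.gamma(n + 1) * r) ** 2 / math.gamma(2 * n + 1)

for n in (10, 20, 40):
    print(v2_ball(n, 2 ** n) ** 2 / (16 * n))  # slowly (Wallis-like) approaching pi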

The formula can be generalized (I will play with this the next time I have some free time) to higher dimensions. The video talks about this, but I have not written their formula down. Also, it would be interesting to see if fat Cantor sets can be written in terms of hyperballs too.
My conjecture is that they will be related to the taxicab-measure astroid ball, whatever that is.

Edit: fixed a missing n in the denominator in the limit.

Edit: see the next post continuing this one.



Tuesday, April 12, 2016

Swap without temporary space

There is a really bad technical interview question which appears now and then.
It goes like this: “how would you exchange the values of two variables without
using temporary space?” Whenever I hear about this question I cringe. It is one of those questions that does not measure anything other than: have you seen this before? Or did you read Hacker's Delight (which, by the way, I wholeheartedly recommend)? And you may be a fine developer and human being and just be unlucky enough not to have seen this trick before.

It gets better, because the question becomes moot in some programming languages with tuple literals or multiple assignment. For example in Go, the solution is trivial,

a, b = b, a

And you are done with it. Even better, the compiler may generate a swap of registers, which is as efficient as it gets.

In any case, I was chatting about this question with a friend and I remembered some ideas I thought I had read somewhere, maybe in Hacker's Delight or The Art of Computer Programming, or maybe
somewhere else. After checking, apparently I hadn't read them in any of those, so maybe I came up with them myself. Anyway, the gist of it is: if you are ever asked this question, you can use matrices to go completely overboard with the answer.

So, say you want to swap two variables and you want to do it without temporary storage. One of the classic ways to do this is,

a = a + b
b = a - b
a = a - b

So how can we describe this in terms of matrices?
Well, each of the assignments is actually the multiplication of the vector
$\begin{bmatrix}a\\ b\end{bmatrix}$ by a matrix and as long as the matrix determinant is not zero, you
don't lose any information.

For example, the first assignment may be written in math,

$a' = a + b$
$b' = 0 + b$

or in matrix form:
$$\begin{bmatrix}a'\\ b'\end{bmatrix} = \begin{bmatrix}1 & 1\\0 & 1\end{bmatrix}\begin{bmatrix}a\\ b\end{bmatrix}$$

So, the three matrices describing the previous assignments are,

$$M = \begin{bmatrix}1 & 1\\0 & 1\end{bmatrix}$$
$$N = \begin{bmatrix}1 & 0\\1 & -1\end{bmatrix}$$
$$R = \begin{bmatrix}1 & -1\\0 & 1\end{bmatrix}$$
The multiplication of these matrices (be careful, the order has to be right) is
$$RNM = \begin{bmatrix}1 & -1\\0 & 1\end{bmatrix}\begin{bmatrix}1 & 0\\1 & -1\end{bmatrix}\begin{bmatrix}1 & 1\\0 & 1\end{bmatrix} = \begin{bmatrix}0 & 1\\1 & 0\end{bmatrix}$$

which is a reverse identity, i.e. a swap.
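A quick check of the matrix picture, in Python with numpy (nothing here is specific to the original trick, it is just the product computed explicitly):

import numpy as np

M = np.array([[1, 1], [0, 1]])   # a = a + b
N = np.array([[1, 0], [1, -1]])  # b = a - b
R = np.array([[1, -1], [0, 1]])  # a = a - b

print(R @ N @ M)      # [[0 1] [1 0]], the reverse identity

v = np.array([3, 7])
print(R @ N @ M @ v)  # [7 3], the values have been swapped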

This already works (even if it overflows). In truth, any factorization of the reverse identity into N such matrices does the trick. You may even rescale them, for example multiplying the first by 2 and the second by 1/2 if you are working in floating point, for more obscurity. Or use reciprocals for integers (another trick from Hacker's Delight).

We can go even further and work in $GF(2)$, i.e. with binary, bit by bit operations.
In this space, addition is the xor (^) and each number is its own additive inverse,
so the assignments above can be written,

a = a ^ b
b = a ^ b
a = a ^ b

You can also write this code in terms of factors of the reverse identity with binary matrices.

The three assignments with the xor is probably what the (now completely stunned) interviewer was aiming for.