Little endian vs. big endian

This is about how the bytes of a data type are arranged in memory. An int variable, for example, typically occupies 4 bytes in memory. In case of little endian, the least significant byte of the integer value will be first in memory (at the smallest address). In case of big endian, the most significant byte of the integer value will be the first byte in memory. For example, the 4-byte value 0x01020304 is stored as 04 03 02 01 on a little-endian machine and as 01 02 03 04 on a big-endian one.

Consider the following code that prints the bits of a number by extracting the last bit and shifting to the right. ‘size’ is the number of bits to print, ‘val’ is the number to print:

#include <cstdio>
#include <cstring>

// Prints the lowest 'size' bits of 'val', most significant bit first,
// with a space after every 8 bits.
void PrintBinary(int val, int size)
{
    unsigned char* b = new unsigned char[size];
    memset(b, 0, size);
    int pos = size - 1;
    while ( val != 0 )
    {
        b[pos] = val % 2;   // remember the last bit
        val = val >> 1;     // shift to the right
        pos--;
        if (pos < 0) break;
    }
    for (pos = 0; pos < size; ++pos)
    {
        printf("%d", b[pos]);
        if (pos % 8 == 7) printf(" ");
    }
    delete[] b;
}
 
int x = 8;
PrintBinary(x, 32);

The above code will print: 00000000 00000000 00000000 00001000

The code below will print ‘00001000 00000000 00000000 00000000’, which shows that it was run on a little-endian machine, because the least significant byte comes first.

unsigned char* b = (unsigned char*)&x;
for (size_t i = 0; i < sizeof(x); ++i)
{
    PrintBinary(b[i], 8);
}

Here is a method to determine the endianness of a machine:

bool IsLittleEndian()
{
    int b = 1;
    // the first byte in memory is 1 only on a little-endian machine
    return ((unsigned char*)(&b))[0];
}
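
A minimal test driver for it could look like this sketch:

#include <cstdio>

int main()
{
    // reports the byte order of the machine the program runs on
    printf(IsLittleEndian() ? "Little-endian\n" : "Big-endian\n");
    return 0;
}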

A more interesting approach is to use a union (a C++ facility that overlays several data types over the same memory space):

bool IsLittleEndian()
{
    union local_t
    {
        int i;
        unsigned char b;   // b overlaps the first byte of i
    };
    local_t u;
    u.i = 1;
    return u.b;
}
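
If C++20 is available, the standard library answers the question directly: the <bit> header defines std::endian, which turns the check into a compile-time constant:

#include <bit>

constexpr bool IsLittleEndianCpp20()
{
    // std::endian::native equals std::endian::little on little-endian targets
    return std::endian::native == std::endian::little;
}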

That was the C++ approach. Java offers an API for it:

import java.nio.ByteOrder;

if (ByteOrder.nativeOrder().equals(ByteOrder.BIG_ENDIAN)) {
    System.out.println("Big-endian");
} else {
    System.out.println("Little-endian");
}

In C#, the BitConverter class has the IsLittleEndian static field.


Just another counting problem

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.

Source: http://projecteuler.net/problem=1

Solution:

Below 1000 there are n_3 = \lfloor 999/3 \rfloor = 333 multiples of 3 and n_5 = \lfloor 999/5 \rfloor = 199 multiples of 5. Some of those numbers are multiples of both 3 and 5 (i.e. they are multiples of 15); to avoid summing them twice, we subtract them once.

There are n_{15} = \lfloor 999/15 \rfloor = 66 multiples of 15. Since the sum of the first m multiples of k is k(1+2+ \dots +m)=k \cdot \frac{m(m+1)}{2}, the result is:

3 \cdot \frac{n_3(n_3+1)}{2}+5 \cdot \frac{n_5(n_5+1)}{2}-15 \cdot \frac{n_{15}(n_{15}+1)}{2}=233168
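
Plugging in the numbers: 3 \cdot \frac{333 \cdot 334}{2}=166833, 5 \cdot \frac{199 \cdot 200}{2}=99500, 15 \cdot \frac{66 \cdot 67}{2}=33165, and 166833+99500-33165=233168.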

This is a simple C++ program that verifies the math:

#include <iostream>
using namespace std;

int main()
{
   int n = 1000;
   int sum = 0;
   // brute force: sum every multiple of 3 or 5 below n
   for ( int i = 1; i < n; i++ )
   {
      if ( ( i % 3 == 0 ) || ( i % 5 == 0 ) )
      {
         sum += i;
      }
   }
   // closed form via inclusion-exclusion
   int div3 = ( n - 1 ) / 3;
   int div5 = ( n - 1 ) / 5;
   int div15 = ( n - 1 ) / 15;
   int sumDirect = 3 * div3 * ( div3 + 1 ) / 2 +
                   5 * div5 * ( div5 + 1 ) / 2 -
                   15 * div15 * ( div15 + 1 ) / 2;
   if ( sum != sumDirect )
   {
      cout << "Bad idea!";
   }
   else
   {
      cout << "Excellent!";
   }
}

SVD – a simple proof

Every m \times n real matrix A can be decomposed as A=U \Sigma V^T, where U is an m \times m orthogonal matrix, \Sigma is an m \times n matrix having non-zero elements only on the diagonal, and V is an n \times n orthogonal matrix.

We know from the previous post that a symmetric matrix is diagonalizable, and can be diagonalized by an orthogonal matrix. In our case A^{T}A happens to be an n \times n symmetric matrix.

Therefore \exists v_i, \lambda _i, i=1..n, such that A^TAv_i=\lambda _iv_i. Real symmetric matrices have real eigenvalues, and additionally \lambda _i \geq 0 because:

<A^TAv_i, v_i>=<Av_i, Av_i>= \lambda _i<v_i, v_i>

Because <x, x> is greater than or equal to 0 for any vector x, it follows that \lambda _i=0 when Av_i=0 (i.e. v_i is in the null space of A) and \lambda _i>0 otherwise.

For \lambda _i > 0, multiplying A^TAv_i=\lambda _iv_i by {v_j}^T and considering that v_i and v_j are orthogonal unit vectors, we get:

{v_j}^TA^TAv_i=\lambda _i \delta _{i,j}  =>

{(Av_j)}^TAv_i=\lambda _i \delta _{i,j} =>

{(\frac {Av_j} {\sqrt {\lambda _j}})}^T{(\frac {Av_i} {\sqrt {\lambda _i}})}=\delta _{i,j}

Denoting u_i=\frac {Av_i} {\sqrt {\lambda _i}} we get: Av_i=\sqrt {\lambda _i}u_i =>

A[v_1 ... v_r v'_{r+1} ... v'_{n}]=[u_1 ... u_r u'_{r+1} ... u'_{m}]\begin{bmatrix} {\sqrt \lambda_1} & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & & \vdots \\ 0 & \cdots & {\sqrt \lambda_r} & \cdots & 0 \\ \vdots & & \vdots & & \vdots \\ 0 & \cdots & 0 & \cdots & 0 \end{bmatrix}

r is the rank of the matrix.

The set of vectors v_i is extended by the set of orthogonal vectors v'_j to form a basis in R^n.

The set of vectors u_i is extended by the set of orthogonal vectors u'_j to form a basis in R^m.
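
As a quick check of the construction, take A = \begin{bmatrix} 3 & 0 \\ 0 & -2 \end{bmatrix}. Then A^TA = \begin{bmatrix} 9 & 0 \\ 0 & 4 \end{bmatrix}, so \lambda_1=9, \lambda_2=4 with v_1=e_1, v_2=e_2, and u_1 = \frac{Av_1}{3} = e_1, u_2 = \frac{Av_2}{2} = -e_2. Indeed A = U \Sigma V^T with U = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \Sigma = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix} and V = I.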


Real symmetric matrices are diagonalizable

This article involves advanced linear algebra, but it is definitely worth understanding. The previous post contains a proof that a real symmetric matrix has real eigenvalues. Additionally, real symmetric matrices are diagonalizable by an orthogonal matrix. This means: \forall A symmetric, \exists P orthogonal (P^tP=I) and D diagonal, such that A=P^{t}DP

To continue the proof I will use the following result:

Let A be a real symmetric matrix, let V be a subspace of R^n and V^{\bot} its orthogonal complement (R^n=V \oplus V^{\bot}). If \forall v \in V, Av \in V, then w \in V^{\bot} \Rightarrow Aw \in V^{\bot}.

Proof: I will use the dot product defined in the previous post. Given v \in V and w \in V^{\bot}, <Av, w> = <v, Aw> because A is real and symmetric. But <Av, w>=0 because Av \in V. Thus <v, Aw>=0, \forall v \in V. This means that Aw \in V^{\bot}.

Getting back to the problem, A has at least one eigenvalue. It follows that there exist X_1 and \lambda _1 such that AX_1=\lambda _1X_1.

If V_1 is the vector space generated by X_1, then the operator A is also symmetric when applied to the subspace V_1^{\bot} (this can be proven by changing the basis). This means that there exists X_2 \in V_1^{\bot} such that AX_2 = \lambda_2X_2. Considering the vector space generated by X_1 and X_2, and applying the operator A to its orthogonal complement, we get AX_3 = \lambda_3X_3. By induction we get AX_i = \lambda_iX_i, i=1..n, where the vectors X_i are pairwise perpendicular: <X_i, X_j>=0, \forall i \neq j \in {1..n}.

Additionally, the vectors can be divided by their norm to make them unit vectors. In matrix form, the relations above can be written as:

A[X_1 X_2 ... X_n]=[X_1 X_2 ... X_n]diag(\lambda_1, \lambda_2, ... ,\lambda_n)

Or A=P diag(\lambda_1, \lambda_2, ... ,\lambda_n) P^{-1}, where the columns of P are the vectors X_1, X_2, ... X_n. P is orthogonal because the vectors X_i are pairwise perpendicular unit vectors. This also means that P^{-1}=P^t.
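
For example, A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} has eigenvalues \lambda_1=3, \lambda_2=1 with unit eigenvectors X_1 = \frac{1}{\sqrt 2}\begin{bmatrix} 1 \\ 1 \end{bmatrix} and X_2 = \frac{1}{\sqrt 2}\begin{bmatrix} 1 \\ -1 \end{bmatrix}, so A = P \, diag(3, 1) \, P^t with P = \frac{1}{\sqrt 2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.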


Real symmetric matrices have real eigenvalues

A real matrix is symmetric if A^t=A. I will show in this post that a real symmetric matrix has real eigenvalues.

I will need a dot product for the proof, and I’ll use the basic dot product for two vectors X and Y: <X,Y>=X^t\overline{Y}, where \overline{Y} is the complex conjugate of the vector Y.

The useful property of this dot product is that <AX,Y>=<X,{A^t}Y> for any real matrix A.

A simple proof is:

<AX,Y>=(AX)^t\overline{Y}=X^tA^t\overline{Y}=X^t\overline{A^tY}=<X,{A^t}Y>

Every eigenvalue has a corresponding eigenvector: AX=\lambda X.

We have <AX,X>=<\lambda X, X>=\lambda <X,X> and considering that A is symmetric <AX,X>=<X,{A^t}X>=<X,AX>=<X,\lambda X>=\overline{\lambda}<X,X>.

From \lambda<X,X>=\overline{\lambda}<X,X>, and because X is not a zero vector, it follows that \lambda=\overline{\lambda}; the imaginary part of \lambda is zero, so the eigenvalue is a real number.
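
For instance, the symmetric matrix \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} has the real eigenvalues \pm 1, while the non-symmetric matrix \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} has the purely imaginary eigenvalues \pm i.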
