Consider the following code that prints the binary digits of a number by remembering the last digit and then shifting the value to the right. 'size' is the number of bits to print, 'val' is the number to print:

#include <cstdio>
#include <cstring>

void PrintBinary(int val, int size)
{
    unsigned char* b = new unsigned char[size];
    memset(b, 0, size);
    int pos = size - 1;
    while (val != 0) {
        b[pos] = val % 2;
        val = val >> 1;
        pos--;
        if (pos < 0) break;
    }
    for (pos = 0; pos < size; ++pos) {
        printf("%d", b[pos]);
        if (pos % 8 == 7) printf(" ");
    }
    delete[] b;
}

int main()
{
    int x = 8;
    PrintBinary(x, 32);
}

Then the above code will print: 00000000 00000000 00000000 00001000

The code below will print '00001000 00000000 00000000 00000000' when run on a little endian machine, because the least significant byte is stored first.

unsigned char* b = (unsigned char*) &x;
for (int i = 0; i < (int) sizeof(int); i++) {
    PrintBinary(b[i], 8);
}

Here is a method to determine the endianness of a machine:

bool IsLittleEndian()
{
    int b = 1;
    return ((unsigned char*)(&b))[0];
}

A more interesting approach is to use a union (a C++ facility that aggregates multiple data types over the same memory space):

bool IsLittleEndian()
{
    union local_t {
        int i;
        unsigned char b;
    };
    local_t u;
    u.i = 1;
    return u.b;
}

That was the C++ approach. Java offers an API for it:

import java.nio.ByteOrder;
...
if (ByteOrder.nativeOrder().equals(ByteOrder.BIG_ENDIAN)) {
    System.out.println("Big-endian");
} else {
    System.out.println("Little-endian");
}
...

In C# the BitConverter class has the IsLittleEndian static field.

Source: http://projecteuler.net/problem=1

Solution:

There are 333 multiples of 3 and 199 multiples of 5 below 1000. Some of those numbers are multiples of both 3 and 5 (i.e. they are multiples of 15). To avoid summing them twice, we need to subtract those numbers once.

There are 66 multiples of 15 below 1000. The result is: 3*(333*334)/2 + 5*(199*200)/2 - 15*(66*67)/2 = 166833 + 99500 - 33165 = 233168.

This is a simple C++ program that verifies the math:

#include <iostream>
using namespace std;

int main()
{
    int n = 1000;
    int sum = 0;
    for (int i = 1; i < n; i++) {
        if ((i % 3 == 0) || (i % 5 == 0)) {
            sum += i;
        }
    }
    int div3 = (n - 1) / 3;
    int div5 = (n - 1) / 5;
    int div15 = (n - 1) / 15;
    int sumDirect = 3 * div3 * (div3 + 1) / 2
                  + 5 * div5 * (div5 + 1) / 2
                  - 15 * div15 * (div15 + 1) / 2;
    if (sum != sumDirect) {
        cout << "Bad idea!";
    } else {
        cout << "Excellent!";
    }
}

We know from the previous post that a symmetric matrix is diagonalisable, and can be diagonalised by an orthogonal matrix. In our case $A^T A$ happens to be a symmetric matrix.

Therefore $A^T A = V \Lambda V^T$, with $V$ orthogonal and $\Lambda$ diagonal, and the columns $v_i$ of $V$ are eigenvectors: $A^T A \, v_i = \lambda_i v_i$. Real symmetric matrices have real eigenvalues, and additionally $\lambda_i \ge 0$ because: $\lambda_i = v_i^T A^T A \, v_i = \|A v_i\|^2 \ge 0$

Because $\|A v\|^2$ is greater than or equal to $0$ for any vector $v$, it follows that $\lambda_i = 0$ when $A v_i = 0$ (i.e. $v_i$ is in the null space of $A$) and $\lambda_i > 0$ otherwise.

For $\lambda_i > 0$, multiplying $A^T A \, v_i = \lambda_i v_i$ on the left by $A$ and considering that the $v_i$ are orthogonal unit vectors, we get:

$A A^T (A v_i) = \lambda_i (A v_i)$

=> $A v_i$ is an eigenvector of $A A^T$ with the same eigenvalue $\lambda_i$

=> $\|A v_i\|^2 = v_i^T A^T A \, v_i = \lambda_i$

Denoting $\sigma_i = \sqrt{\lambda_i}$ and $u_i = A v_i / \sigma_i$ we get: $A v_i = \sigma_i u_i$, for $i = 1, \dots, r$.

$r$ is the rank of the matrix.

The set of vectors $u_1, \dots, u_r$ is extended by a set of orthogonal vectors $u_{r+1}, \dots, u_m$ to form a basis in $\mathbb{R}^m$.

The set of vectors $v_1, \dots, v_r$ is extended by a set of orthogonal vectors $v_{r+1}, \dots, v_n$ to form a basis in $\mathbb{R}^n$.

To continue the proof I will use the following result:

Let $A$ be a real symmetric matrix, let $S$ be a subspace of $\mathbb{R}^n$ and $S^\perp$ its orthogonal complement ($S \oplus S^\perp = \mathbb{R}^n$). If $A S \subseteq S$, then: $A x \in S^\perp$ for $x \in S^\perp$.

**Proof**: I will use the dot product defined in the previous post. Given $y \in S$ and $x \in S^\perp$, $\langle A x, y \rangle = \langle x, A y \rangle$ because $A$ is real and symmetric. But $\langle x, A y \rangle = 0$ because $A y \in S$. Thus $\langle A x, y \rangle = 0$, $\forall y \in S$. This means that $A x \in S^\perp$.

Getting back to the problem, $A$ has at least one eigenvalue. It results that there exist $\lambda_1$ and $v_1 \neq 0$ such that $A v_1 = \lambda_1 v_1$.

If $S_1$ is the vector space generated by $v_1$, then the operator $A$ is also symmetric when applied to the subspace $S_1^\perp$ (this can be proven by changing the basis). This means that there exists $v_2 \in S_1^\perp$ such that $A v_2 = \lambda_2 v_2$. Considering the vector space generated by $v_1$ and $v_2$, and by applying the operator to its orthogonal complement, we will get $v_3$. By induction we get: $A v_i = \lambda_i v_i$ for $i = 1, \dots, n$, where the vectors are pairwise perpendicular: $\langle v_i, v_j \rangle = 0$ for $i \neq j$.

Additionally, the vectors can be divided by their norm to make them unit vectors. In matrix form the relations above can be written as:

$A P = P \Lambda$

Or $A = P \Lambda P^{-1}$, where the columns of $P$ are the vectors $v_i$. $P$ is orthogonal because the vectors $v_i$ are pairwise perpendicular unit vectors. This also means that $P^{-1} = P^T$.

I will need a dot product for the proof, and I'll use the basic dot product for two vectors $x$ and $y$: $\langle x, y \rangle = \sum_i x_i \overline{y_i}$, where $\overline{y}$ is the complex conjugate of the vector $y$.

The useful property of this dot product is that $\langle A x, y \rangle = \langle x, A^T y \rangle$, for any real matrix $A$.

And considering that $A$ is real, a simple proof is: $\langle A x, y \rangle = (A x)^T \overline{y} = x^T A^T \overline{y} = x^T \overline{A^T y} = \langle x, A^T y \rangle$.

An eigenvalue $\lambda$ has a corresponding eigenvector $v \neq 0$: $A v = \lambda v$.

We have $\langle A v, v \rangle = \lambda \langle v, v \rangle$ and $\langle v, A v \rangle = \overline{\lambda} \langle v, v \rangle$, and considering that $A$ is symmetric, $\langle A v, v \rangle = \langle v, A^T v \rangle = \langle v, A v \rangle$.

From $\lambda \langle v, v \rangle = \overline{\lambda} \langle v, v \rangle$ and because $v$ is not a zero vector, it results that the imaginary part of $\lambda$ is zero, so the eigenvalue is a real number.

void solutionNaive(vector<int>& inVals, int sum) // O(n^2)
{
    size_t size = inVals.size();
    // naive: try all pairs
    for (size_t i = 0; i + 1 < size; i++) {
        for (size_t j = i + 1; j < size; j++) {
            if (inVals[i] + inVals[j] == sum) {
                printf("Values: %d and %d\n", inVals[i], inVals[j]);
                return;
            }
        }
    }
    printf("Not found\n");
}

This is O(n^2), and its performance is definitely not acceptable for large arrays. A faster solution is to use a hash set for storing the elements tested so far, and to test if S minus the current element was already stored in the hash set. More memory is used in this approach, but the complexity in this case is O(n), because searching in a hash set or hash map is O(1) on average:

void solutionOk(vector<int>& inVals, int sum) // O(n)
{
    // std::unordered_set is the standard replacement for the older hash_set
    unordered_set<int> checkedVals;
    for (vector<int>::iterator it = inVals.begin(); it != inVals.end(); ++it) {
        if (checkedVals.find(sum - (*it)) != checkedVals.end()) {
            printf("Values: %d and %d\n", sum - (*it), *it);
            return;
        } else {
            checkedVals.insert(*it);
        }
    }
    printf("Not found\n");
}

Another possible solution, not so efficient but interesting, is to sort the numbers and then, for each X in the array, to use binary search to check if S - X is there. The complexity is O(n log(n)), and no additional memory is used. An important aspect of this method is the way it handles the case when S - X = X:

void solutionNotSoBad(vector<int>& inVals, int sum) // O(n log(n))
{
    // sorting is O(n log(n))
    sort(inVals.begin(), inVals.end());
    for (vector<int>::iterator it = inVals.begin(); it != inVals.end(); ++it) {
        bool found = false;
        int toFind = sum - (*it);
        // search strictly before the current element; this handles
        // toFind == *it without matching the current element itself
        if (toFind <= *it) {
            found = binary_search(inVals.begin(), it, toFind);
        }
        // search strictly after the current element
        if (!found && (toFind >= *it)) {
            found = binary_search(it + 1, inVals.end(), toFind);
        }
        if (found) {
            printf("Values: %d and %d\n", *it, toFind);
            return;
        }
    }
    printf("Not found\n");
}

As a conclusion, keep in mind that hash based data structures are generally the most appropriate to use when fast search operations are needed.

1. The first line is full of zeros except the central element (position n for 0 based indexing as in C++, C#, Java etc., and position n+1 for Matlab, Pascal etc.), which is 1. For n = 3 the first line will be: 0 0 0 1 0 0 0

2. Line i is computed based on line i-1. Each element is computed based on its three upper neighbours. Thus, for each of the 8 possible combinations of values of the upper neighbours, we need to specify a value: 0 or 1

000 -> r0

001 -> r1

010 -> r2

011 -> r3

100 -> r4

101 -> r5

110 -> r6

111 -> r7

where each ri can be 0 or 1. The array r7 r6 r5 r4 r3 r2 r1 r0 is the base 2 representation of a number between 0 and 255 (the rule number).

For simplicity, the boundary elements of each row are always set to 0, because for those elements only two out of the three upper neighbours are known.

Based on this rule, different patterns can be generated, depending on the chosen rule number and the starting row.

Here is the Matlab code for generating patterns:

function [] = automata()
    n = 261;
    m_in = zeros(n, 2*n+1);
    rule = [0, 0, 0, 1, 1, 1, 1, 0]; %rule 30
    %rule = [0, 1, 0, 1, 1, 0, 1, 0]; %rule 90
    m_out = generate(m_in, rule);
    imshow(1 - m_out);

%---------------------------------------------------------------------
function [o] = generate(m, rule)
    problemSize = size(m, 1);
    %starting value
    m(1, problemSize+1) = 1;
    mid = problemSize+1;
    %for each row
    for row = 2:problemSize-1
        %for each column
        for col = mid-(row-1) : mid+(row-1)
            i1 = m(row-1, col-1);
            i2 = m(row-1, col);
            i3 = m(row-1, col+1);
            %based on the values from the previous row
            %and based on the rule, generate the values in
            %the current row
            n = i1*4 + i2*2 + i3;
            m(row, col) = rule(8-n);
        end
    end
    o = m;

A particularly interesting pattern is generated by rule 30. This is how it looks for n = 261:

Compared to other rules, it generates a chaotic pattern (see the right side of the result). Rule 30 shows that, using simple evolution rules and starting from something basic (a single value of 1 in this case), something complex can be generated.

This leads to the following idea: what if the universe was generated in a similar way? A simple initial state and a simple rule that evolves in time and leads to the complexity that we see around?

Source: Stephen Wolfram, “A New Kind of Science”, http://www.youtube.com/watch?v=_eC14GonZnU

The obvious solution is X being the North Pole. But there are other positions on earth that respect the conditions.

Consider, for example, going along a meridian toward the South Pole until the length of the parallel you are situated on is exactly the same as the distance to be traveled East. Moving East in this case means making a complete rotation along that parallel, and moving North then retraces the first segment of the trajectory in reverse.

More concretely, from any point situated 10 km North of the parallel having a length of 10 km (diameter 10/pi, about 3.2 km), we can travel as described above, so there are infinitely many points X.

Even more, from a point situated 10 km North of the parallel of length 5 km (that would be quite close to the South Pole), you can go South until reaching the parallel, make two complete rotations, then head back North.

Or, more generally, move 10 km South until reaching the parallel of length 10/k km, make k rotations, and then move back 10 km North.

Source: Martin Gardner, “The Colossal Book of Short Puzzles and Problems” http://www.amazon.com/Colossal-Book-Short-Puzzles-Problems/dp/0393061140

You are climbing a staircase. Each time you can climb either 1 stair or 2 stairs. The staircase has n stairs. In how many distinct ways can you climb the staircase?

Answer:

There are two possibilities for the last performed step: one stair or two stairs.

Thus, the number of possibilities for n stairs will be the number of possibilities to climb n-1 stairs (in case the last step is one stair long) plus the number of possibilities to climb n-2 stairs (in case the last step is two stairs long).

Looks like the answer is related to Fibonacci numbers:

f(n) = f(n-1) + f(n-2), with f(1) = 1 and f(2) = 2


In the previous post I presented an algorithm that checks the above statement (among any 6 persons there are either 3 that all know each other or 3 that all don't). Here I will show the mathematical proof. Thanks to J who actually found this nice proof ;).

Consider one person P. There are 5 other persons that P can know or not. Either P knows at least 3 of them (case 1), or there are at least 3 persons that P doesn't know (case 2).

Consider A, a set of 3 persons that P knows (case 1) or that P doesn't know (case 2).

For case 1, if there are two persons X and Y in A that know each other, then P, X and Y form a group of 3 persons that know each other. Otherwise, if no two persons from A know each other, then A is a group of 3 persons that don't know each other.

Case 2 is similar to case 1, just replace ‘don’t know’ with ‘know’ and ‘know’ with ‘don’t know’.
