Tuesday, December 9, 2014

Partitional clustering: number of clusters and performance, a quantitative analysis

Abstract
Partitional clustering methods such as k-medoids or k-means require an input parameter specifying the number of clusters into which the data should be partitioned.
The time complexity of such algorithms strictly depends on:
  • the number of clusters used to initialise the computation;
  • the steps required to update the centroids.
Whenever the similarity distance doesn't allow the centroids to be determined analytically, the time complexity tends to explode.
In this post I show a heuristic to minimise the time complexity in non-Euclidean spaces.
Number of computational steps for a standard k-means run. The chart depicts the steps by number of points $N$ in the range [1k, 10k] and by number of clusters in the range [5, $N/10$].
The cardinality of the clusters has been simulated assuming a uniform distribution.
The red line indicates the minimum number of steps.
Introduction
K-means and k-medoids are widely used because their implementation is very easy and the time complexity is low (it's linear in #iterations · #clusters · #points).
Is the complexity linear for every problem?
The answer is no!

The problem
The problem arises when the objects to be clustered don't lie in a space where the centroids can be computed analytically.
In this context, the computational cost of updating the centroids becomes dominant with respect to the other steps, and the overall complexity is no longer linear in time.

The complexity in case of non-Euclidean space
In a Euclidean space, the points of a cluster can be averaged. 
In non-Euclidean spaces there is no guarantee that the points have an "average" (...some purists of measure theory might turn up their nose at that), so we have to choose one of the members of the cluster as centroid.
As explained in the former post, for each cluster the computation of the centroid requires (in case of a symmetric distance) $\binom{z_i}{2}$ computational steps, where $z_i$ is the cardinality of the $i$-th cluster.
The overall complexity (for each iteration) is the following:
  $ \sum_{i=1}^k \binom{z_i}{2}+k \cdot n$ 

A dataset with many points might require a number of computations that is simply not feasible; in that case it is essential to tune every parameter of the algorithm in order to reduce the complexity.
A numeric example will clarify the point.
Let's assume a data set of 10.000 elements, and let's assume that the compression rate of the clustering is 90% (meaning the number of clusters is 1.000).

  • If the cluster cardinality is homogeneous, then the number of steps for the first iteration will be 10.045.000.
  • If the cluster cardinality is highly unbalanced, for instance when one cluster contains 95% of the points, then the number of steps is 50.360.655.
As you can see... even if the data set is relatively small, the computational effort is very high!
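To make the cost model concrete, here is a minimal Python sketch (my own illustration, assuming the per-iteration formula above) that evaluates $\sum_i \binom{z_i}{2} + k \cdot n$ for a given list of cluster sizes:

```python
from math import comb

def kmedoid_iteration_steps(cluster_sizes, n):
    """Per-iteration cost: centroid updates (C(z_i, 2) per cluster) plus assignments (k * n)."""
    k = len(cluster_sizes)
    centroid_updates = sum(comb(z, 2) for z in cluster_sizes)
    return centroid_updates + k * n

n, k = 10_000, 1_000
balanced = [n // k] * k                      # homogeneous cardinality: 10 points per cluster
print(kmedoid_iteration_steps(balanced, n))  # 10.045.000, as in the first bullet above
```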

Determination of the #clusters that minimises the complexity in time

There are not many choices... the only option is to determine the number of clusters that minimises the computational steps.

You might argue that the # clusters should be selected according to the distribution of the data set, but the following is also true:
  • In the real world you don't know a priori the exact number of clusters, and the heuristics to estimate it are quite often not really helpful in non-Euclidean spaces with many points. So I prefer a sub-optimal clustering solution that comes with a much faster answer :)
  • Clustering analysis is usually just one of the many analyses you perform to understand your data, therefore even if the clustering is not perfect, it doesn't hurt too much. Clustering is just one piece of the entire puzzle!
The approach
If the clustering divides the data set into clusters having more or less the same size, the solution is quite straightforward: we can find the minimum of the function described above through the analysis of the first derivative.
Assuming that each $z_i = \frac{n}{k}$ then the formula can be rewritten as:
$ f(n,k)=\frac{n^2}{2 k}+k n-\frac{n}{2}$
Taking the first derivative of the function with respect to $k$:
$ \frac{\partial f(n,k) }{\partial k}=\frac{\partial}{\partial k} \left( \frac{n^2}{2 k}+k n-\frac{n}{2} \right) = n-\frac{n^2}{2 k^2}$.
It vanishes (and the cost is minimised) at $k= \sqrt{\frac{n}{2}}$.
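As a quick sanity check, a minimal sketch of this rule of thumb (again assuming roughly balanced clusters):

```python
from math import comb, sqrt

def per_iteration_steps(n, k):
    """Cost model with balanced clusters of size ~n/k: k * C(n/k, 2) + k * n."""
    z = n // k
    return k * comb(z, 2) + k * n

n = 10_000
k_opt = round(sqrt(n / 2))                 # ~71 clusters for n = 10.000
print(k_opt, per_iteration_steps(n, k_opt))
```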

In all the other cases, where the cluster sizes are not homogeneous, we can easily simulate it!

Computations for a data set of 10k points.
The computations are estimated by number of clusters (cluster size ~ [2, 1000]) and with unbalanced cluster sizes.
The red line indicates the minimum.
As you can see, the difference between the computational steps required by the "best" configuration of #clusters and the other possible configurations is quite impressive: almost $10^7$ in the worst case. And this is for each iteration!
I let you calculate how much time you can save working with the best configuration :).
Stay tuned!
c.

Sunday, September 21, 2014

Text Clustering, a non-parametric version of the K-medoids algorithm for data de-duplication

Abstract
This post presents a quick and effective method to leverage text clustering for data de-duplication and normalisation.
The customised version proposed here bypasses the very well known problem of having to assign the number of clusters.
Text Clustering results: each row depicts a cluster found. 


Introduction
One of the most common problems in data management concerns consistency: customer addresses, company names, and any other attribute in string format can be represented in multiple formats, often mutually equivalent.
Data normalisation presents some challenges, especially in the following scenarios:
  • Variations/mutations of the trusted source of information are not known a priori.
  • The data to be normalised are characterised by high volume.
  • There are no deterministic and static rules to normalise the data.
  • ...you don't have plenty of time to find the perfect solution :), and you have to deliver fast!
Why the brute-force approach is a waste of time
Let's assume you have already found, for your specific use case, the best distance to judge how far one record is from another.
To make the story a bit more realistic, let's assume hereafter that the attribute to be normalised is a string.
...I would hope that nobody really thought of building a big matrix to keep track of all the possible distances among the possible values of the attribute... Why? If you have, for instance, 10k possible values, you need to compute at least 49.995.000 comparisons (assuming that your distance is symmetric!), because the complexity follows $\binom{N}{2}$.
You might perform the brute-force approach over a statistically representative sample set, but in that case it's really hard to validate the actual accuracy of your results.

Why is partitional clustering efficient?
As you know, this is a "practical means" blog, so I encourage you to refer to real books if you are looking for theoretical proofs. Anyhow, it's easy to get convinced of the following:
  • number of comparisons to be performed, assuming $K$ clusters, $I$ iterations and $N$ elements to be clustered: $N \cdot K \cdot I$;
  • number of comparisons to update the centroids: $I \cdot \left(\binom{K_1}{2}+\binom{K_2}{2}+\cdots+\binom{K_K}{2}\right) \ll \binom{N}{2}$.

The comparison between the computational complexity of the brute-force approach and that of partitional clustering is well depicted by the example below, where I assumed ten iterations to reach convergence, 10 clusters, and a data set growing from 1000 to 20.000 points:
The computational effort: in red the brute-force approach, in blue the clustering.
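As a rough illustration of the two counts above, here is a minimal sketch of my own, with a hypothetical configuration of 100 clusters, 10 iterations and balanced cluster sizes:

```python
from math import comb

def assignment_comparisons(n, k, iterations):
    return n * k * iterations                                    # N * K * I point-to-medoid comparisons

def update_comparisons(cluster_sizes, iterations):
    return iterations * sum(comb(z, 2) for z in cluster_sizes)   # I * sum_i C(K_i, 2)

n, k, iters = 10_000, 100, 10
balanced = [n // k] * k
print(assignment_comparisons(n, k, iters))   # 10,000,000
print(update_comparisons(balanced, iters))   # 4,950,000
print(comb(n, 2))                            # brute force: 49,995,000
```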
The approach in a nutshell 
1. Group together records that look similar to each other using k-medoids clustering.
2. Store the centroids found in an index.
3. Delete (only if the use case allows it) all the members of the clusters except the centroids.
4. For each new record to be stored in the DB, compare it with the elements of the index: if the distance is too high, add the new record to the index.
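A minimal sketch of steps 2-4 (my own; the difflib-based `distance` and the 0.3 threshold are placeholders for whatever metric and cutoff fit your data):

```python
import difflib

def distance(a: str, b: str) -> float:
    # placeholder string distance in [0, 1]; swap in your preferred metric
    return 1.0 - difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

class DedupIndex:
    def __init__(self, centroids, threshold=0.3):
        self.centroids = list(centroids)   # medoids coming from the clustering step
        self.threshold = threshold

    def normalise(self, record: str) -> str:
        """Return the closest known centroid, or register the record as a new entry."""
        if self.centroids:
            best = min(self.centroids, key=lambda c: distance(record, c))
            if distance(record, best) <= self.threshold:
                return best
        self.centroids.append(record)      # no centroid is close enough: new entity
        return record

index = DedupIndex(["123 Main Street, New York"])
print(index.normalise("123 Main St. New York"))
```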

K-medoids vs K-means
The main difference between K-medoids and the well-known K-means is in the assignment of the centroids:
  • in K-means the centroid is the barycentre of the cluster, so it might not be a real record;
  • in K-medoids the centroid is a record of the cluster that minimises the sum of the distances to all the other points of the cluster. Of course, depending on the case, you might prefer to minimise the variance, or even define more sophisticated minimisation criteria;
  • K-means is usually much faster, since the barycentre is usually computed as the average of the positions of the records in the space;
  • K-medoids requires (for each cluster) the additional comparison of the distances of each point of the cluster to all the other points of the cluster (which explains the binomial term in the complexity formula).
K-medoids is more robust than k-means in the presence of noise and outliers.
...But there is still something that makes me unhappy with K-medoids: it's parametric, meaning it requires a priori knowledge of the number of clusters.
In the real world, simply forget it!

Customised version for auto-determination of the number of clusters
  1. Instantiate the standard k-medoids with an arbitrary number of clusters (in the case analysed later I used just two clusters as initialisation).
  2. After the operation to update the medoids, compare the medoids with each other: if two medoids are very similar, then merge the two clusters.
  3. Calculate the mean distance for each cluster (this computation can take advantage of the operations performed during the medoid update): if the mean distance exceeds a threshold (which indicates the sparsity of the points of the cluster), then split the cluster. In this experiment I used a wild approach: each point of the split cluster has been considered as a new cluster.
  4. Go to step 2 until convergence.
You might argue that this customisation doesn't require the "number of clusters" as a parameter but introduces two thresholds instead: one for the cluster merging and another one for the splitting.
Actually, the threshold is just one:
  • if $\theta$ is the minimum acceptable distance for which two strings might be considered the same, then $1- \theta$ is the threshold beyond which two strings are absolutely different!
The advantage of the approach lies in the fact that the threshold can be easily estimated and verified. Estimating the #clusters is not that easy!
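Below, a minimal sketch of the merge/split loop described above (my own rendering: the difflib-based distance, the value of $\theta$ and the convergence test are all assumptions; the split policy is the wild "every point becomes a cluster" choice of step 3):

```python
import difflib
import random

def distance(a: str, b: str) -> float:
    return 1.0 - difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def medoid(cluster):
    # record minimising the sum of distances to the other members
    return min(cluster, key=lambda r: sum(distance(r, o) for o in cluster))

def nonparametric_kmedoids(records, theta=0.3, max_iter=50):
    """Merge/split k-medoids: starts from 2 clusters and lets the data decide how many survive."""
    medoids = random.sample(records, 2)
    for _ in range(max_iter):
        # assignment step: each record goes to its closest medoid
        clusters = {m: [] for m in medoids}
        for r in records:
            clusters[min(medoids, key=lambda m: distance(r, m))].append(r)
        # update + split step
        candidates = []
        for members in clusters.values():
            if not members:
                continue
            m = medoid(members)
            mean_d = sum(distance(m, r) for r in members) / len(members)
            if mean_d > 1 - theta:
                candidates.extend(members)    # too sparse: every point becomes its own cluster
            else:
                candidates.append(m)
        # merge step: drop medoids that are closer than theta to one already kept
        new_medoids = []
        for c in candidates:
            if all(distance(c, kept) > theta for kept in new_medoids):
                new_medoids.append(c)
        if set(new_medoids) == set(medoids):  # convergence: the medoid set is stable
            return new_medoids
        medoids = new_medoids
    return medoids

addresses = ["1 Main St", "1 Main Street", "22 Oak Ave", "22 Oak Avenue"]
print(nonparametric_kmedoids(addresses))
```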

Experiment: Addresses de-duplication.
As data set I downloaded from this link the addresses of the pizzerias in New York (I'm Italian, and I love pizza :D).
To assess the accuracy of the algorithm I randomised each entry, introducing Gaussian noise.

Data Perturbation:
The perturbation method I used is based on randomised swapping of the characters of the string:
  1. Select the number $n$ of characters you want to swap. Such a number is drawn from the range [0, StringLength[str]] using a Gaussian distribution.
  2. Draw $n$ couples of integers (each within [1, StringLength[str]]).
  3. For each couple, swap the corresponding first character with the second.
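A minimal Python rendering of this perturbation (the mean and spread of the Gaussian are my own assumptions):

```python
import random

def perturb(s: str) -> str:
    """Randomly swap characters of s; the number of swaps is drawn from a Gaussian."""
    n = min(len(s), max(0, round(random.gauss(mu=len(s) / 4, sigma=len(s) / 8))))
    chars = list(s)
    for _ in range(n):
        i, j = random.randrange(len(chars)), random.randrange(len(chars))
        chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

print(perturb("746 9th Avenue, New York"))
```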
Distance
The definition of the distance is crucial: several options are available. In this experiment I used the common Needleman-Wunsch similarity. I'm pretty sure that good results can also be achieved with the Jaro-Winkler and Levenshtein distances.
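Just to make the notion of string distance concrete, here is a compact normalised Levenshtein distance, one of the alternatives mentioned above (the experiment itself used Needleman-Wunsch, which is not reproduced here):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def norm_distance(a: str, b: str) -> float:
    return levenshtein(a, b) / max(len(a), len(b), 1)

print(norm_distance("Mulberry Street", "Mulbery Stret"))
```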

Experiment Setting
I first estimated the parameter of the customised k-medoids algorithm: the merging/splitting threshold.
The value depends on the nature of the data.
  • What's the minimum acceptable distance between two strings for them to be considered the same?
I randomly selected several couples of strings, measured their distances and sorted them by value.
I noticed that a matching level of 70% is the good one.
In the next posts I'll show a more refined technique based on the convergence slope of the clustering... but that's another story.

Results:
The data set contains 50 records:
  • 10 different addresses
  • for each address, 5 deviations generated with the aforementioned perturbation.
The customised K-medoids found the best result (that is, 10 clusters containing the 5 deviated addresses) in ~20% of the cases (first bar of the below chart).
The below chart shows how many times (out of 100 trials) the algorithm found exactly $k$ clusters.
The first bar says that 22 times out of 100 trials the algorithm returned 10 clusters.
  • As you can see, in 70% of the cases the algorithm partitions the data set into [9,12] clusters!
Below, the best and the worst case.
Best case: achieved in 20% of the cases. Steps to converge: ~26.
Worst case: returned 17 clusters. Convergence after 40 steps.
Convergence of the algorithm:
The time to converge is the following:

  • On average the algorithm converges after 36 steps.
I noticed that the longer the time to converge, the lower the probability of determining the number of clusters correctly.

As usual: stay tuned!
cristian

Tuesday, July 29, 2014

Fractals: parameters estimation (part IV)

Introduction
In the former posts we discussed the following points:
  1. There are special points on the contour of the fractal that can be used to trace that contour.
  2. Such points can be used to describe the fractal through an iterative and deterministic algorithm.
The main issue is still open: can we leverage the two findings above to determine the fractal's parameters?
The answer is yes, provided that we use a good technique to extract the contour points of the fractal image.

The algorithm (practical explanation)
We already noticed that the points on the contour of the fractal are in tight relationship with each other.
Since each linear transformation is described by exactly six parameters, for their determination we need at least six points for each transformation function.
The image below makes the concept clear:
Image-1: Relationship between points of the fractal contour

The steps to obtain the estimation of the params are the following:
  1. Extraction of the contour points through a convex hull algorithm;
  2. Building the relationships among the points.

Left image: extraction of contour points through Convex Hull.
Right image: relationships among the extracted points.
How to build the relationships among the points?
As shown both in the deterministic algorithm presented in the last post and in the above image, the points preserve an ordering that can be leveraged to reduce the complexity of the problem.
Consider for instance the fern case:
  • the transformation that leads to the fixed point lying on the contour (highlighted in Image-1) can be obtained (as explained in points 1 and 2 of the deterministic algorithm) just by creating the relationships among the consecutive points of the set $ l_1 $ described in Image-1 and in the below image:


Set of points to determine the Transformations
At this point we have obtained the parameters of the first transformation, but we still know neither the number of transformations we really need to describe the fractal image, nor the probabilities associated with them.
For the first problem we have two strategies: brute force, that is, trying all the possible combinations, or, more wisely, the soft computing approach described below.

The fuzzy clustering approach (...unfortunately some formulas are required here)
Fuzzy clustering is a very well known algorithm that assigns the membership of a point to a cluster with a certain "degree", so the point $p_i$ might belong to the cluster $A$ with degree $x$ and to the cluster $B$ with degree $y$.

In this problem I modified the algorithm in this way:
  • the clusters are the mapping functions that we want to discover;
  • the probability associated with each fractal map can easily be derived from the "size" of its cluster;
  • the fuzzy membership function is based on the minimisation of the distance between the point obtained by applying the contractive map to $x_i$, $x_i^{'} = \tau_j(x_i) $, and all the other points falling in a reasonable neighbourhood. The animation below describes it.



  • The update step of the centroids is aimed at minimising the distance $ d(\tau_j(x_i),\phi(x_i,\tau_j))$ computed over each $\tau_j$ and each $x_i$, where
  • $\phi(x_i,\tau_j) = \arg\min_{x_s} d(\tau_j(x_i),x_s) $
  • To minimise the above function, I used the gradient technique, working on the following cost function:
  • $ E(\theta)= \sum_{i=1}^{n}{(\mu_{x_i}(\tau_j)\cdot[d(\tau_j(x_i),\phi(x_i,\tau_j))]^2)} $
  • For each mapping function, the correction to each parameter is the following:
  • $\theta_i= \theta_i - \eta \frac{\partial E(\theta)}{\partial \theta_i}$
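A minimal numerical sketch of this update step (my own: the memberships are taken as uniform and the gradient is approximated by finite differences instead of the analytic correction above):

```python
import numpy as np

def apply_map(theta, pts):
    """Affine map with parameters theta = (alpha, beta, gamma, delta, epsilon, zeta)."""
    A = np.array([[theta[0], theta[1]], [theta[2], theta[3]]])
    Q = np.array([theta[4], theta[5]])
    return pts @ A.T + Q

def residuals(theta, pts):
    """d(tau(x_i), phi(x_i, tau)): distance from each mapped point to its nearest data point."""
    mapped = apply_map(theta, pts)
    d = np.linalg.norm(mapped[:, None, :] - pts[None, :, :], axis=2)
    return d.min(axis=1)

def cost(theta, pts, memberships):
    return np.sum(memberships * residuals(theta, pts) ** 2)

def gradient_step(theta, pts, memberships, eta=1e-3, h=1e-5):
    """theta <- theta - eta * dE/dtheta, with dE/dtheta estimated by finite differences."""
    grad = np.zeros_like(theta)
    base = cost(theta, pts, memberships)
    for k in range(len(theta)):
        bumped = theta.copy()
        bumped[k] += h
        grad[k] = (cost(bumped, pts, memberships) - base) / h
    return theta - eta * grad

# toy usage: random point cloud, one candidate map, uniform memberships
pts = np.random.rand(200, 2)
theta = np.array([0.5, 0.0, 0.0, 0.5, 0.1, 0.1])
mu = np.full(len(pts), 1.0 / len(pts))
for _ in range(50):
    theta = gradient_step(theta, pts, mu)
```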



    Results
    Starting from the image of the fern (a jpg image containing 100.000 points), the application of the algorithm for the determination of the contractive maps gives the following result:
    Contractive maps detection algorithm: in black the points of the original fern, in red, green, blue the three estimated maps.
    • The results are not bad; the weakness of the algorithm lies in the extraction of the points of the convex hull of the fractal.
    • The application of a smarter contour extraction algorithm should improve the accuracy further.
    Disclaimer
    In the last posts about fractals I often abused the term "contour of a fractal". ...Theoretically speaking, a fractal cannot have a contour, but to make the practical aspects clear I deliberately chose that terminology.

    Saturday, June 7, 2014

    Fractals: a deterministic and recursive algorithm to reproduce it (Part II)

    Abstract:
    Fern fractal estimation through a recursive and deterministic algorithm.

    A fractal can be described as the fixed point of a Markov process: given the proper contractive maps, it's easy to generate it.
    The question is: given the contractive maps, is it possible to generate the fractal using a purely deterministic approach?

    Problem
    We already observed that the iteration of the contractive maps, starting from a fixed point that lies on the border of the fractal, returns points of the convex hull of the fractal.

    • What happens if we recursively apply such a schema to the points obtained from the above procedure?
    • Is it possible to recreate the Markov process (or at least approximate it) removing any probabilistic aspect?
    The algorithm below, given the contractive maps of the fractal, returns at each iteration a better approximation of it.

    At each iteration, the result approximates the contour of the original image more closely.

    The Algorithm (by practical means)
    1. Consider a fixed point $P_0$ of a contractive map $\tau_1$ that lies on the convex hull of the fractal.
    2. Choose randomly a point $P_i$ of the fractal and apply the above contractive map until the distance to $P_0$ is negligible.
    3. Map the point calculated at step 2 using, sequentially, all the contractive maps.
    4. Map each point obtained from step 3 with $\tau_1$ until the distance to $P_0$ is negligible.
    5. If [ITERATIONS < K]:
      1. K--;
      2. For each point obtained from step 4, go to step 3.
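A minimal sketch of the procedure (my own rendering in Python, using the classic Barnsley fern coefficients as the contractive maps; here $\tau_1$ is taken as the map whose fixed point is the tip of the fern, and the tolerance is arbitrary):

```python
import numpy as np

# Barnsley fern contractive maps tau(p) = A p + Q, stored as (A, Q)
FERN_MAPS = [
    (np.array([[0.00, 0.00], [0.00, 0.16]]), np.array([0.0, 0.00])),
    (np.array([[0.85, 0.04], [-0.04, 0.85]]), np.array([0.0, 1.60])),
    (np.array([[0.20, -0.26], [0.23, 0.22]]), np.array([0.0, 1.60])),
    (np.array([[-0.15, 0.28], [0.26, 0.24]]), np.array([0.0, 0.44])),
]

def fixed_point(A, Q):
    """Solve p = A p + Q, i.e. (I - A) p = Q."""
    return np.linalg.solve(np.eye(2) - A, Q)

def orbit_towards(p, A, Q, p0, tol):
    """Steps 2/4: apply tau_1 repeatedly until the distance to its fixed point is negligible."""
    pts = []
    while np.linalg.norm(p - p0) > tol:
        pts.append(p)
        p = A @ p + Q
    return pts

def deterministic_ifs(maps, k, tau1=1, tol=1e-3):
    A1, Q1 = maps[tau1]
    p0 = fixed_point(A1, Q1)            # step 1: fixed point of tau_1 on the convex hull
    frontier, result = [p0], [p0]
    for _ in range(k):                  # step 5: recursion depth K
        next_frontier = []
        for p in frontier:
            for A, Q in maps:           # step 3: apply every contractive map
                next_frontier.extend(orbit_towards(A @ p + Q, A1, Q1, p0, tol))
        result.extend(next_frontier)
        frontier = next_frontier
    return np.array(result)

points = deterministic_ifs(FERN_MAPS, k=2)
print(points.shape)
```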
    To explain how it works I plotted the list of points got using $K=1$ iterations of the algorithm:



    The bigger $K$ is, the more accurate the result will be.
    The above procedure works only if the contractive map $\tau_1$ has a fixed point that lies on the convex hull of the fractal!!

    Results:

    I iterated the algorithm with $K=4$. At each iteration the algorithm returns a deeper description of the contour of the fractal (...even though the definition of contour for a fractal doesn't really make sense, it carries at least a practical meaning):
    Results of the Recursive Algorithm iterated with K=1,2,3,4. 

    If we try to overlap the results obtained with the original fern we get:
    Original Fern, and the overlap with the results obtained using the recursive algorithm (K=4).
    Conclusion and next steps
    I showed how to depict the IFS generated through the Markov process as a recursive and completely deterministic process.
    We also noticed that in the fractal there are special points (as for instance $P_0$) that play a crucial role in describing the IFS.

    The question now is the following:
    is it possible to leverage such special points and the above recursive algorithm to estimate the parameters of the contractive maps?
    ...I will show you a procedure that partially answers the question, and some other examples of the algorithm described above!
    Stay Tuned.
    Cristian

    Saturday, May 24, 2014

    Fractal parameters estimation thru contour analysis - Part I

    Introduction
    I really love fractal theory. Nothing fascinates me as much as the amazing structures generated by the chaotic processes that describe (hyperbolic) fractals.
    It's fairly easy to draw a fractal image:
    • The input
      • A small set of very simple functions (linear transformations).
      • A probability of choosing each of the aforementioned functions.
    • The process (a minimal sketch is given after this list)
      • Choose randomly a starting point.
      • Choose randomly one of the above functions according to the above defined probability.
      • Map the starting point using the selected function.
      • Iterate as long as you want :)
    • Admire the meaning of the life!
      • just plot the points
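Here is a minimal sketch of that process, the classic "chaos game" with the well-known Barnsley fern coefficients (plotting is left out):

```python
import random

# Barnsley fern: (a, b, c, d, e, f, probability) for x' = a x + b y + e, y' = c x + d y + f
MAPS = [
    (0.00, 0.00, 0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85, 0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26, 0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28, 0.26, 0.24, 0.0, 0.44, 0.07),
]

def chaos_game(n_points=100_000):
    x, y = 0.0, 0.0                        # arbitrary starting point
    weights = [m[6] for m in MAPS]
    points = []
    for _ in range(n_points):
        a, b, c, d, e, f, _ = random.choices(MAPS, weights=weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f   # apply the chosen affine map
        points.append((x, y))
    return points

pts = chaos_game()
```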
    Wherever you are, whatever decision you make, despite the chaos, the overall picture will always be the same. The more you move, the clearer the picture will be!

    When I approached fractals for the first time, after reading Barnsley's great book (Fractals Everywhere), I opened the laptop and wrote the routine to draw my first IFS!
    ...By mistake I also plotted the lines joining the points:



    ...It's quite a mess, isn't it?
    Sometimes it's better to give up connecting the points, just relax and wait until everything becomes clear. If we focus on every single detail we risk losing what really counts!




    What is a fractal (Iterated Fractal System)?
    There are many formal definitions to describe an IFS; in my opinion the most effective one describes a fractal as the fixed point of the Markov process mentioned above.
    The process converges to the fractal :)

    The power of fractals
    In the literature there are plenty of articles in which fractals are used to solve complex problems (financial analysis, stock exchange predictions, description of biological processes, ...).
    Fractals have also found a good place in Computer Science; think about image compression: the ivy leaf image takes around 25 MB of space, while all the information needed to obtain it through the IFS takes less than 1 KB!!
    The problem 
    Given the ivy leaf image, what are the parameters of the Markov process to generate it?
    A bit more formally what we have to estimate is:
    • A set of functions $\{ \tau_1, \cdots,\tau_N \}$, where $\tau_i(p_0)=A_i \cdot p_0 +Q_i$, with:
    $ A_i= \begin{bmatrix}\alpha & \beta\\ \gamma & \delta \end{bmatrix} \qquad Q_i= \begin{bmatrix} \epsilon\\ \zeta \end{bmatrix} $
    • We also need to estimate the probability of choosing each $\tau_j$.
    The combination of the collage theorem and the box counting approach is the most common technique to solve the problem.
    When I was a student I approached the problem from a different angle. I have to say that the results obtained were partial, but I still think that something more can be done :)
    Before starting we need one more notion: contractive maps (have a look at the Banach fixed-point theorem).
    Under certain conditions the iteration of $\tau_j$ leads to a fixed point:
    Example of contractive map applied to an ellipsoid.
    It converges to a fixed point.
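A tiny numerical illustration of the Banach fixed-point idea (the matrix and offset below are made up, chosen so that the map is contractive):

```python
import numpy as np

A = np.array([[0.5, 0.2], [-0.1, 0.4]])    # hypothetical contractive matrix (norm < 1)
Q = np.array([1.0, 2.0])

p = np.array([10.0, -7.0])                 # arbitrary starting point
for _ in range(60):
    p = A @ p + Q                          # iterate tau(p) = A p + Q

print(p)                                   # numerically indistinguishable from the fixed point
print(np.linalg.solve(np.eye(2) - A, Q))   # analytic fixed point: (I - A)^-1 Q
```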


    First conjecture:

    • An IFS is characterised by a fixed point that lies on its convex hull.
    • Starting from a fixed point that lies on the border of the IFS, the iterations of the contractive maps that generate the fractal return the convex hull of the fractal.

    The ivy leaf IFS is generated by 4 contractive maps. Each color describes the map used to generate the point.
    The above animated gif shows the meaning of the conjecture.
    Experimental evidence that at least one fixed point lies on the convex hull of the fractal can be obtained by changing the parameters of the maps:
    The light blue points depict the fixed points of the maps used to plot the fractals.
    Despite the changes in the maps, the fixed point at the top of the leaf still lies on the convex hull of the fractal.
    In the next post I'll show you a nice recursive algorithm I found to obtain different levels of the convex hull of an IFS.
    Stay tuned
    Cristian.




    Friday, May 2, 2014

    Waiting Time Polynomials: tech explanation - last part

    This is the last step to complete the explanation of the waiting time polynomials formula.

    Unfortunately it's a bit technical, but I'm sure it can be understood without deep math knowledge.
    After all, if you can't explain it simply, you don't understand it well enough! (A. Einstein)

    The trick
    Last time we left off with the tally of the overall waiting time $ w(x_i) = \phi(x_i)-|x_i| $, where $\phi(x_i)$ returns the position of the last occurrence of $x_i$ in the vector $V$.
    Let's have a look at the following example, which will be used during the remaining steps.
    There are two questions to be answered:

    • given $|x_1|= i, |x_2|= j, |x_3|= k $, what are the admitted tuples $\{w(x_1),w(x_2),w(x_3)\}$?
    • given $\{w(x_1)=I,w(x_2)=J,w(x_3)=K\}$, how many vectors $V$ can be built?
    To answer the first question, I noticed that the three coloured cells are the only ones that really count.
    The idea is the following:

    1. consider the three cells as placeholders
    2. analyse the admitted values for each of them
    3. replace the placeholders with all the possible permutations of the alphabet $\{x_1,x_2,x_3\}$.

    Let's start with the case depicted in the above image, where we assumed that $\phi(x_1) < \phi(x_3) < \phi(x_2) $; then we have the following facts:
    • $ \phi(x_1)$ can take values in the range $0 \leq \phi(x_1) \leq |x_1|+|x_2|+ |x_3|-2$
    • $ \phi(x_2)$ can take just one value: $|V|=|x_1|+|x_2|+ |x_3|$
    • The upper bound of $ \phi(x_3)$ is $|V|-1$, because it can slide up to the second-to-last element of $V$, that is $\phi(x_2)-1$
    • What about the lower bound of $\phi (x_3)$? We have the two cases depicted in the below image:



    To sum up, so far we have explained the following part of the formula (I hope you don't mind that I changed the index notation a bit):

    We now have to consider that for each configuration of $\{w(x_1)=I,w(x_2)=J,w(x_3)=K\}$ we can have more than one vector $V$.
    Do you like combinatorics? The remaining part is just a matter of tallying, and the only formula we need is the one for permutations with repetitions. I'll let you brush up on the concept on your favourite website for basic combinatorics.

    The formula can be split into two chunks, because we have two blocks of cells to be filled:

    1. In how many ways can we fill the cells between the positions $[1,\phi(x_1)]$?
    2. In how many ways can we fill the cells between the positions $[\phi(x_1),\phi(x_2)]$?

    To answer the first question we have to find the values for the denominator of the following:
    \[\frac{(\phi(x_1)-1)!}{\#(x_1)!\#(x_2)!\#(x_3)!}\]
    • we have $\phi(x_1)-1$ cells that can be filled;
    • they contain all the instances of $x_1$ except the last one, which occupies position $\phi(x_1)$;
    • the number of $x_3$ instances depends on $\phi(x_1)$ and $\phi(x_3)$:


    • the computation of the number of instances of $x_2$ in the first slot is straightforward, and it can easily be derived by difference:
      • $\frac{(\phi(x_1)-1)!}{(|x_1|-1)!\,(|x_3|-Min(|x_3|,\phi(x_3)))!\,\#(x_2)!}$;
      • $(|x_1|-1)+(|x_3|-Min(|x_3|,\phi(x_3)))+\#(x_2)=\phi(x_1)-1$
      • so $ \#(x_2)= \phi(x_1)-|x_1|-(|x_3|-Min(|x_3|,\phi(x_3))) $
    This explains the following boxed part of the formula:

    The final step is to count in how many ways we can fill slot 2, depicted by the interval $[\phi(x_1),\phi(x_2)]$; to make the formula more readable, let's rename $|x_3|-Min(|x_3|,\phi(x_3)) = \epsilon$.
    As we did for the first slot, we have to identify the values of the denominator of the formula below:
    \[\frac{(\phi(x_3)-\phi(x_1)-1)!}{\#(x_2)!\#(x_3)!}\]

    • Out of $|x_3|$ instances, $\epsilon$ have been placed in the slot 1, so the slot 2 contains exactly $|x_3|-1- \epsilon$.
    • again by difference we can get the instances of $x_2$:
      • the occurrences of $x_2$ before $\phi(x_3)$ are exactly $\phi(x_3)- (|x_1|+|x_3|)$
      • the occurrences of $x_2$ in the slot 1 (listed above) are: $ \#(x_2)= \phi(x_1)-|x_1|-\epsilon $
      • that is : $ \#(x_2)=\phi(x_3)- (|x_1|+|x_3|)- \phi(x_1)+|x_1|+ \epsilon$
      • finally we have: $ \#(x_2)=\phi(x_3)-\phi(x_1)-|x_3|+ \epsilon$

    That completes the proof of the formula.
    It's quite easy now to extend the formula to more variables. The only expedient that makes it easier is to remove the operator $Min$ from the formula by splitting it into two branches.
    I'll show it in the paper.
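For small cases, the counting can be double-checked by brute force: enumerate every arrangement of the multiset, compute the waiting times $w(x_i) = \phi(x_i) - |x_i|$, and tally how many vectors $V$ produce each tuple. A minimal sketch:

```python
from collections import Counter
from itertools import permutations

def waiting_times(v, symbols):
    """w(x_i) = position of the last occurrence of x_i (1-based) minus |x_i|."""
    return tuple(max(p + 1 for p, s in enumerate(v) if s == x) - v.count(x) for x in symbols)

def tally(counts):
    """counts: dict symbol -> multiplicity; returns a Counter of waiting-time tuples."""
    symbols = list(counts)
    multiset = [s for s, c in counts.items() for _ in range(c)]
    vectors = set(permutations(multiset))            # all distinct vectors V
    return Counter(waiting_times(v, symbols) for v in vectors)

# example: |x1| = 2, |x2| = 2, |x3| = 1
for tup, n_vectors in sorted(tally({"x1": 2, "x2": 2, "x3": 1}).items()):
    print(tup, n_vectors)
```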
    Note about Complexity
    What's the most painful point of this formula?
    ...The introduction of placeholders requires applying the formula once for each permutation of the variables involved. It means that, having $k$ variables, we need to calculate the formula $k!$ times.
    Anyway, I don't expect to use the formula with large sets of variables; after all, the principle of selecting a small and relevant set of features is always recommended!

    As usual Stay Tuned.
    Cristian.



      Saturday, April 19, 2014

      Waiting Time Polynomials: how to derive the analytical formula: Part IV

      Introduction before you start
      I got many requests for clarification about the Waiting Time Polynomials I published on the blog in the last three posts.
      The paper is almost ready to be submitted for review, but I think that some technical explanation might be interesting also for a non-academic audience.
      I consider myself a curious and hungry seasoned student, and I know how tedious it can be to read formulas and mathematical passages, especially on a blog!!
      So why technical explanations?
      The answer is in the following quote from one of my favourite scientists, Gregory Chaitin. In "The Quest for Omega" he wrote:

      The books I loved were books where the author’s personality shows through, books with lots of words, explanations and ideas, not just formulas and equations! I still think that the best way to learn a new idea is to see its history, to see why someone was forced to go through the painful and wonderful process of giving birth to a new idea! To the person who discovered it, a new idea seems inevitable, unavoidable. The first paper may be clumsy, the first proof may not be polished, but that is raw creation for you, just as messy as making love, just as messy as giving birth! But you will be able to see where the new idea comes from. If a proof is “elegant”, if it’s the result of two-hundred years of finicky polishing, it will be as inscrutable as a direct divine revelation, and it’s impossible to guess how anyone could have discovered or invented it. It will give you no insight, no, probably none at all. 


      That's the spirit that leads the following explanation!

      Definition of the problem
      Given an alphabet of 3 elements $\{X_1,X_2,X_3\}$, the function $w(X_i) $ counts  the number of failed trials before the last event $ X_i $.
      Consider now the following configuration: \[ \{\left\vert{X_1}\right\vert =i , \left\vert{X_2}\right\vert =j,\left\vert{X_3}\right\vert =k\}: i+j+k= Z \wedge i,j,k>0 \]

      • What are the admitted sequences  $\{w(X_1),w(X_2),w(X_3)\}$ ?

      Step I: Find all the possible configurations of events
      How can we list the sequences of length $Z$ that can be built with $ \{\left\vert{X_1}\right\vert =i , \left\vert{X_2}\right\vert =j,\left\vert{X_3}\right\vert =k\}: i+j+k= Z \wedge i,j,k>0$ ?

      Example of overall waiting time $w(x_i)$  in a succession of events.
      • once we set the values of the first two variables, the third is determined by $Z-i-j$;
      • we imposed that all the variables occur at least once, so $X_1$ can assume all the values in $[1,Z-2]$;
      • for each value of $X_1$, the variable $X_2$ can assume values in $[1,Z-i]$;
      • $p_i$ is the probability that $X_i$ occurs in a Bernoulli trial.
      Now we have all the ingredients to make the cake:

       $ \sum_{i=1}^{Z}\sum_{j=1}^{Z-i}\sum_{k=1}^{Z-i-j}{p_1^ip_2^jp_3^k}$  

      In the first two summations, the indices run up to $Z$ and $Z-i$ just to keep the formula clean.
      ...I'll let you prove why the result doesn't change :).
      One last point about this step: the limit of the above summation as $ Z \rightarrow \infty$ is $\frac{p_1 p_2 p_3}{\left(1-p_1\right) \left(1-p_2\right) \left(1-p_3\right)}$.
      Such a limit will be used to build the probability density function.
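A quick numerical sanity check of the summation and of its limit (the values of $p_i$ are chosen arbitrarily):

```python
def triple_sum(p1, p2, p3, Z):
    return sum(p1**i * p2**j * p3**k
               for i in range(1, Z + 1)
               for j in range(1, Z - i + 1)
               for k in range(1, Z - i - j + 1))

p1, p2, p3 = 0.3, 0.3, 0.4
limit = p1 * p2 * p3 / ((1 - p1) * (1 - p2) * (1 - p3))
print(triple_sum(p1, p2, p3, 60), limit)   # the partial sum approaches the limit as Z grows
```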
      Curiosity (helpful for complexity analysis...):
      • The number of sequences that can be built with vectors of length $[3,Z]$ are $\binom{Z}{3}$
      • The number of sequences that can be built with vectors of length exactly $Z$ is $\binom{Z-1}{2}$
      Step II: Waiting for an event!
      What's the easiest way to describe the overall waiting time for an event in a finite succession?
      There are many ways to get $w(x_i)$; the easiest I found is the position of the last occurrence of $x_i$ minus the number of occurrences of $x_i$.
      For instance, let's consider $w(x_1)$:
      • The position of the last occurrence of $x_1= 8$;
      •  $\left \vert{X_1} \right \vert = 4 $ 
      • $w(X_1)=4$
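In code, the same computation looks like this (a minimal sketch; the succession below is made up so that it matches the numbers of the example):

```python
def w(sequence, symbol):
    """Overall waiting time: position of the last occurrence (1-based) minus the count."""
    last_pos = max(i + 1 for i, s in enumerate(sequence) if s == symbol)
    return last_pos - sequence.count(symbol)

# x1 occurs 4 times, the last time at position 8
V = ["x1", "x2", "x1", "x3", "x1", "x2", "x3", "x1", "x2"]
print(w(V, "x1"))   # 8 - 4 = 4, as in the example above
```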
      Where we are:
      The first two steps explain the circled pieces of the formula:

      What is the "overall waiting time" for?
      For each event $X_i$ we are counting the holes among all its occurrences, so the smaller the overall waiting time, the closer together the occurrences of $X_i$ are: it's a measure of proximity for the occurrences of $X_i$.
      What I did is to extend such a measure (it would be interesting to prove that it really is a measure!) to different kinds of events (random variables) ${X_1, X_2,...,X_n}$ over the discrete time line.
      Applications
      There are several areas in which this kind of analysis might be helpful; last time I showed an application of it as a powerful document classifier, where each variable $X_i$ is a word of a document.
      If we consider a document as a succession of $Z$ words, the proximity measure induced by the waiting time polynomials is a sort of fingerprint of the document, since for similar documents we expect the same words to be characterised by similar overall waiting times.
      Moreover, the dependencies among the words are taken into account, since we are considering simultaneously an arbitrary number of words (the alphabet ${X_1, X_2,...,X_n}$).

      In the next step I'll explain the logic to get the remaining pieces of the puzzle, which will make it easier to generalise the approach to an arbitrary alphabet.
      Stay Tuned!
      cristian

        Tuesday, April 8, 2014

        Coefficients of Waiting Time Polynomials listed in OEIS Database!

        I'm happy to announce that the sequence generated by the coefficients of the waiting time polynomials has been listed in the OEIS database (The On-Line Encyclopedia of Integer Sequences).
        The sequence is: A239700.
        In the next posts I'm going to explain how I derived the analytical formula of the polynomials.
        As usual: Stay Tuned!
        cristian

        Friday, March 28, 2014

        Coefficients of Waiting Time Polynomials: a nice representation

        I was doing some simulations for the paper I'm writing about the waiting time polynomials, and I got something unexpected.
        If we sort in lexicographic order all the polynomials generated for a given length of the events vector, and we consider their coefficients, what you get is an apparently irregular series of integers.
        To capture some regularity, I decided to plot such a matrix:

        The matrix plot, obtained considering an alphabet of three words and vectors having length=35.

        Isn't it nice?
        Sometimes an appropriate graphical representation helps in capturing interesting aspects of your data.

        Stay Tuned
        Cristian

        Tuesday, March 18, 2014

        Waiting Time Polynomials Document Classifier - Part III

        Abstract
        This post presents an innovative definition of polynomials associated with waiting time processes (analyzed in the former posts). Such polynomials are here successfully used as a document classifier. Comparative tests with SVM show significant accuracy improvements.
        Boolean Classification tests based on test set of 8k randomly generated documents composed using an alphabet of three words.
        Introduction
        To encourage those who found the formula I presented a couple of posts ago quite intricate, I'm going to show you a practical application of it that might be a good incentive to analyze the formal aspects with more attention :)
        What I show you today is one of the several applications of this approach: a document classifier with higher accuracy than traditional methods such as SVM (trained with a Gaussian kernel) and back-propagated Neural Networks.

        Characteristics of the classifier
        • It's a supervised learning algorithm
        • It's completely non parametric
        • It can be used natively to classify multiple classes datasets.

        The Algorithm
        Let's assume we have a training set composed of two classes of documents: Cl_1, Cl_2.

            Learning Phase: Estimation of Geometrically distributed Random Variables.
        1. Define an alphabet of three words {w1,w2,w3} using frequency criteria or more sophisticated techniques.
        2. For each class of the training set:
          • estimate the parameters {p1, p2, p3} of the respective {X1(w1),X2(w2),X3(w3)} geometrically distributed random variables.
        3. Calculate the polynomials associated to {X1(w1),X2(w2),X3(w3)} using:



            Testing Phase: document classification
        1. For each document Di of the test set:
          • Identify the number of occurrences of {w1,w2,w3}: {O_w1,O_w2,O_w3}.
          • Select the polynomial associated with the term p1^O_w1 · p2^O_w2 · p3^O_w3, i.e. the one whose exponents match {O_w1,O_w2,O_w3}.
          • Calculate the value of the polynomials P_Cl_1, P_Cl_2 using:
            1. {p1, p2, p3} estimated for Cl_1
            2. {p1, p2, p3} estimated for Cl_2
        2. Classify the document:
          1. If (P_Cl_1>P_Cl_2) Di belongs to Cl_1
          2. Else Di belongs to Cl_2
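A skeleton of the two phases in Python (my own sketch: `waiting_time_polynomial` is a placeholder for the formula shown above, which is not reproduced here, and the frequency-based parameter estimation is a simplifying assumption):

```python
from collections import Counter

ALPHABET = ("w1", "w2", "w3")

def estimate_params(documents):
    """Crude estimate of {p1, p2, p3}: relative frequencies of the alphabet words in a class."""
    counts = Counter(word for doc in documents for word in doc if word in ALPHABET)
    total = sum(counts.values())
    return tuple(counts[w] / total for w in ALPHABET)

def waiting_time_polynomial(occurrences, params):
    """Placeholder: evaluate the polynomial selected by the occurrence counts at (p1, p2, p3)."""
    raise NotImplementedError("plug in the formula from the previous posts")

def classify(document, params_cl1, params_cl2):
    occ = tuple(document.count(w) for w in ALPHABET)   # {O_w1, O_w2, O_w3}
    p_cl1 = waiting_time_polynomial(occ, params_cl1)
    p_cl2 = waiting_time_polynomial(occ, params_cl2)
    return "Cl_1" if p_cl1 > p_cl2 else "Cl_2"
```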
        Examples
        1. How to select the polynomial

        Let's consider the polynomials calculated using the aforementioned formula, and assume that in document Di the word w1 is repeated 2 times, the word w2 is repeated 2 times and w3 is repeated 1 time.
        Then, in step 1 of the testing phase, we have to choose the polynomial boxed in the list below:
        Polynomials generated for O_w1+ O_w2+O_w3=5
        2. Same polynomial, different values of  p1, p2, p3.

        - How many polynomials are generated for O_w1+ O_w2+O_w3 = 25?
        The answer is straightforward: 276, that is, all the possible configurations of 3 addends whose sum is 25. In the Encyclopedia of Integer Sequences there are exciting explanations of such series.

        - How does the polynomial change for different values of p1, p2, p3?
        It's quite impressive how different two polynomials can be, despite the settings of {p1, p2, p3} being almost identical.
        Look at this example for which:
        • On the left chart I plotted a polynomial with p1= 0.33, p2= 0.33, p3=0.34.
        • On the right chart I plotted a polynomial with p1= 0.4, p2= 0.3, p3=0.3.
        • In the first case the polynomial takes its maximum value for O_w1=8, O_w2=8, O_w3=9 ...not a big surprise!
        • In the second case the polynomial takes its maximum value for O_w1=15, O_w2=9, O_w3=1. In this case the result is harder to explain!
        Document classification comparative test

        To test the accuracy of my method I performed a comparative test using a randomly generated document set.
        • The training set was composed of 100 documents (used to train a Gaussian-kernel SVM and to estimate the parameters p_i for my method).
        • The test set was composed of:
          • Cl_1: 4000 documents randomly generated using a configuration of {p1, p2, p3}.
          • Cl_2: 4000 documents randomly generated using a different configuration of {p1, p2, p3}.
        • The accuracy has been tested using different configurations of {p1, p2, p3} and considering different document sizes.
        Results
        First experiment: 
        Just the first 25 words have been considered to estimate the parameters {p1, p2, p3}, to train the SVM and to test the accuracy.
        Accuracy results considering just the first 25 words.
        Second experiment: 
        The same test as above, using the first 35 words of the documents.

        Accuracy results considering just the first 35 words.
        All the results shown refer to the accuracy achieved on average over 10 different randomly generated test sets, with the SVM parameters tuned to maximize the accuracy.
        Considerations
        As you can see, the method I'm proposing, based on the definition of "Waiting Time Polynomials", performs significantly better than SVM.
        More comparative tests will be shown in the publication I'm writing on such topic.

        Further notes
        Processes based on waiting times or geometrically distributed random variables are extremely important for risk assessment and risk evaluation.
        I'll show you in another post some applications of such polynomials in this field.
        As usual, stay tuned
        cristian

        Thursday, March 6, 2014

        The Mind Map project: a new concept of Enterprise Knowledge Management

        Abstract
        A project to build an innovative Information Management tool to extract, correlate and expose unstructured information.
        A screenshot of one of the mind maps generated by the tool.
        Different colors have been used to depict sub-concepts.


        The demo
        Some time ago I published on this blog some posts presenting algorithms I designed to extract relevant information from documents and, more generally, from unstructured content (such as tweets, blog posts, web pages).
        I don't want to spend too many words on it; I guess a demo better describes the idea I have in mind.
        It's still a prototype, and a lot of work is still required, but I hope that in this video you appreciate the idea.
        In the video I tested the prototype of the application using a Wikipedia page.

        PS
        To get the best quality, watch the video on YouTube with the "HD quality" option.

        ...Looking forward to receiving feedback!


        Stay Tuned
        cristian

        Monday, March 3, 2014

        Statistical Distribution to describe a Document - My Contribute Part II

        Abstract
        This post presents the second step in generalizing a probabilistic density function (based on the geometric distribution) to describe a document through the underlying stochastic process.
        Marginal PDF and CDF for a document described by three words.
        In red is depicted the distribution of a document having size = 35, in yellow size = 34, in blue size = 33.

        The marginal distribution that describes the density of a document of fixed size, composed of three words, is the following:


        Example of alpha, beta, gamma parameters determination. 


        Introduction
        The bag-of-words approach and almost all the related techniques to extract features from a document are based on the manipulation of the frequencies associated with the words of the document. Such methodologies tend to fail when documents are characterized by exactly the same bag of words and the same frequencies.

        The proposed approach bypasses this problem, since it takes into account the waiting times of the words in a document.

        From 2 variables to 3 variables
        In the former post I presented the easiest case: a document described by two words. The generalization of the problem is not a painless process.
        To understand the tricks used to tackle the problem, I'm going to explain the passage from 2 to 3 variables.
        ...To be continued.
        c.
        PS: I'm back!