Friday, January 13, 2012

Extract meta concepts through co-occurrence analysis and graph theory.

During the Christmas period I finally had the chance to read some papers about probabilistic latent semantic analysis and its applications in automatic classification and indexing.
The main concept behind “latent semantics” rests on the assumption that words occurring close together in a text are related to the same semantic construct.
Based on this principle, LSA (and partially also PLSA) builds a matrix to keep track of the co-occurrences of words in the text, and assigns these co-occurrences a score that also takes their distribution in the corpus into account.
Often the TF-IDF score is used to rank the words.
Anyway, I was wondering whether these techniques could also be useful for extracting key concepts from text.
Basically I thought: “in LSA we consider some statistics over the co-occurrences, so why not consider the links among the co-occurrences as well?”

So what I did is the following (be aware that this is not the formal implementation of LSA!); a minimal sketch in Python follows the list:
  1. Filter the text and reduce the words to their base form, as usual.
  2. Build the multidimensional sparse matrix of the co-occurrences.
  3. For each co-occurrence, compute the frequency with which it occurs in the corpus.
  4. For each co-occurrence, compute the frequency with which it occurs in the document.
  5. Weight the resulting TF-IDF score by the distance between the words of the co-occurrence.
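
A minimal sketch of steps 2-5, assuming the corpus is already tokenized and reduced to base forms (step 1), and taking 1/(1+d) as an illustrative distance penalty; the function and parameter names are only for illustration:

    import math
    from collections import Counter, defaultdict

    def cooccurrences(tokens, max_distance=1):
        """Yield ordered word pairs (A, B) with the number of words between them."""
        for i, a in enumerate(tokens):
            for d in range(max_distance + 1):
                j = i + 1 + d
                if j < len(tokens):
                    yield (a, tokens[j]), d

    def rank_cooccurrences(docs, max_distance=1):
        """Score each ordered pair with a TF-IDF-style weight, penalised by distance."""
        per_doc_counts = []           # term frequency of each pair, per document
        doc_freq = Counter()          # number of documents containing each pair
        penalty = defaultdict(float)  # best (closest) distance penalty seen per pair
        for tokens in docs:
            counts = Counter()
            for pair, d in cooccurrences(tokens, max_distance):
                counts[pair] += 1
                penalty[pair] = max(penalty[pair], 1.0 / (1 + d))
            per_doc_counts.append(counts)
            doc_freq.update(counts.keys())
        scores = Counter()
        for counts in per_doc_counts:
            total = sum(counts.values())
            for pair, tf in counts.items():
                idf = math.log(len(docs) / doc_freq[pair])  # corpus-level weight
                scores[pair] = max(scores[pair], (tf / total) * idf * penalty[pair])
        return scores

For example:

    docs = ["the network society shapes the media".split(),
            "media audiences in the network society".split()]
    for pair, score in rank_cooccurrences(docs).most_common(5):
        print(pair, round(score, 3))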

In this way we are able to rank all the co-occurrences and set a threshold to discard the items with a low rank.

In the last step I built a graph by linking the co-occurrences.
As you can see in the following examples, the graphs are initially pretty complex, so to refine the results I applied a filter based on the connected components of the graph.
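
One way to realize this filter, sketched with networkx (the threshold and the minimum component size are illustrative choices, not the exact values I used):

    import networkx as nx

    def build_concept_graph(scores, threshold=0.01, min_component_size=3):
        """Link the surviving co-occurrences, then prune small components."""
        g = nx.DiGraph()
        for (a, b), score in scores.items():
            if score >= threshold:
                g.add_edge(a, b, weight=score)
        # Filter based on the connected components of the (directed) graph.
        for component in list(nx.weakly_connected_components(g)):
            if len(component) < min_component_size:
                g.remove_nodes_from(component)
        return g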

The examples below represent the concepts extracted from the first three chapters of the book “The Media in the Network Society” by Gustavo Cardoso, available at http://ebooksgo.org/computer/networksecurity.pdf

Meta concepts (unfiltered) extracted from the "Introduction" chapter.
The arrows help to identify the correct direction in which to read the "meta concepts". Of course, a bit of effort is still required to identify the underlying semantics, but after a quick read of the original source it is easy to convince oneself that this representation fits the content pretty well.
Below are the outcomes for the first chapter:
Unfiltered meta concepts for the first chapter
Filtered meta concepts for the first chapter
And here are the outcomes for the second chapter:
Unfiltered meta concepts for the second chapter
Filtered meta concepts for the second chapter
I'm sure there is room for big improvements to this rough method, for instance by assigning the ranks of the co-occurrences in more sophisticated ways than the usual TF-IDF (such as using centrality-based measures, or considering cosine distances).

Are these constructs also useful to classify text?
…we will see the answer in the next post :)
Stay tuned
cristian


Comments:

  1. I've always wanted to use Lambda Sets (http://www.analytictech.com/borgatti/papers/borgatti%20-%20LS%20sets,%20lambda%20sets%20and%20other%20cohesive%20subsets.pdf) to identify cohesive subsets between documents in a similarity matrix of all documents. It would be interesting to see if a Lambda Set analysis would identify good cohesive groups in your concept graph.

  2. Dear Professor,
    first of all, thanks for your contribution and for the link to your paper.
    I'll read your work carefully to understand how we can combine the lambda sets with my graphs.
    Let's keep in touch!
    cristian

  3. What do you think of using the BM25 measure?

    Thanks,
    michele.

  4. Hi Michele,
    Okapi BM25 could be an option for assigning the weights to the connections, where the queries passed as arguments could be the co-occurrences.
    Do you have something in mind for the inner function of the BM25 meta-function?

    Just one consideration: BM25, at least in its most common implementation, is based on IDF; one disadvantage of that implementation is that it doesn't work well on corpora with a limited number of words, as for instance invoices, tables, and so on.
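
    For reference, a minimal sketch of the usual Okapi BM25 scorer (k1 and b are its standard free parameters; treating the two words of a co-occurrence as the query terms is only an assumption of this example):

        import math
        from collections import Counter

        def bm25(query_terms, doc, corpus, k1=1.5, b=0.75):
            """Okapi BM25 score of `doc` for `query_terms`; `corpus` is a list of token lists."""
            avgdl = sum(len(d) for d in corpus) / len(corpus)
            tf = Counter(doc)
            score = 0.0
            for term in query_terms:
                df = sum(1 for d in corpus if term in d)  # document frequency
                idf = math.log((len(corpus) - df + 0.5) / (df + 0.5) + 1)
                score += idf * tf[term] * (k1 + 1) / (
                    tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
            return score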

    Cheers.

  5. Hi,

    I have a quick question.
    How do you determine the direction of an arrow? What's the difference between "network--4-->society" and "society--1-->network"?

    Thanks.

    Replies
    1. Hi,
      the direction of the arrow depends on the position of the words (the co-occurrence A,B is different from B,A, because the relation isn't symmetric).
      Consider two consecutive words A,B. If the algorithm determines that the co-occurrence A,B is relevant with score 4, then in the graph you build the link A->B with weight 4.
      If the algorithm assigns B,A a rank of 1, then you build the link B->A with edge weight 1.
      Cheers

    2. Hi, thank you for your reply.

      But the notion of position raises another question for me. How do you define "co-occurrence"? Do two words co-occur when they appear in the same document, in the same sentence, or next to each other? If co-occurrence means two words appear next to each other in the same sentence, recording position information is reasonable. But if it means two words co-occur anywhere in the same document, I think their positions cannot tell us much. Right?

    3. A,B is defined as a co-occurrence if A and B are two consecutive words in the same document.
      In the experiment I ran, I also considered co-occurrences where the distance between A and B is greater than 0; for instance, co-occurrences where A and B are separated by a C.
      So A,C,B has been counted as a co-occurrence of A,B with distance 1,
      and I assigned the rank of the co-occurrence A,B a penalty proportional to the distance.
      -> As you can see, when you parse the docs in the corpus you must preserve the original order of the words!
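      A tiny self-contained sketch of this convention (the helper name is just for illustration):

          def pairs_with_distance(tokens, max_distance=1):
              # A,C,B yields the co-occurrence (A, B) with distance 1.
              for i, a in enumerate(tokens):
                  for d in range(max_distance + 1):
                      if i + 1 + d < len(tokens):
                          yield (a, tokens[i + 1 + d]), d

          for pair, d in pairs_with_distance("A C B".split()):
              print(pair, "distance", d)
          # prints ('A', 'C') distance 0, ('A', 'B') distance 1, ('C', 'B') distance 0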
      cheers
      cristian
