Do Computers Need Grammar Books Too?


The world’s first known grammar “book” was conceived by Pāṇini in fourth-century B.C. India, at a time when only a tiny minority of people could read or write, and when the greatest challenge for most was bare survival. His grammar contained 3,959 rules of the Sanskrit language, describing how words are composed, how they combine into sentences, and what they mean.

Extract from Pāṇini’s grammar, the Aṣṭādhyāyī

This grammar probably didn’t influence the everyday life of the population, as its primary goal was to teach the proper use of Sanskrit to the writers of sacred texts. Who could have foreseen at the time that this knowledge of how words are composed, how they combine into sentences and what they mean would one day become so critical in today’s world of information overload?

Today, one of the greatest challenges of the “information society” and the “knowledge economy” is making sense of, and getting the most benefit out of, “big data”, 80 per cent of which is textual. Company analysts trying to follow which company bought which company, for example, might be faced with reviewing hundreds of thousands of documents. Doctors need lists of patients eligible for a clinical trial, drawn from hundreds of thousands of patient reports. Companies need to learn about problems with their products that might be reported across hundreds of thousands of forum posts.

Understanding all the subtleties and complexities of human languages is, and probably always will be, the privilege of the human mind, but given the enormous quantities of text, we must rely on the help of automated processing. The better automated processes decode the meaning of texts, the more useful information they can extract from them, and thus the more knowledge they can mine.

Integrating grammar into the design of automated language processing tools can be of real help, since texts are composed of words in different forms and roles, those words make up sentences, and the sentences convey complex meanings, all of which are described by grammar rules.

How does grammar help access information and knowledge? 

Today, the most widely used tools that help us access information and knowledge in texts are search engines based on keyword search. When you type a word into the keyword box of a search engine, you get a list of documents that contain that word. But could a business analyst ask a standard keyword-based search engine to provide a ‘list of company transactions’? Could a doctor ask for a ‘list of patients who are eligible for a clinical trial’? Could a company manager obtain a ‘list of complaints’? The answer is clearly “no”.

To illustrate the limitations of keyword-based search for complex queries, let’s take the example of a business analyst who would like to submit the query ‘buyers and companies bought’. Why, after submitting such a query, would the title of the news article ‘Microsoft Acquires Sun Microsystems’ not be returned?

In order to return this answer, the search engine would have to be aware of how words are composed, how they combine into sentences and what they mean; i.e., it would have to “know” some grammar. It would need to master at least the following concepts:

  • The concept of a company – to be able to match ‘Microsoft’ and ‘Sun Microsystems’ as company names
  • The concept of a transaction – to match ‘acquires’ with a transaction
  • The concepts of ‘buyer’ and ‘thing bought’ and their expression in sentences – so that it can match ‘Microsoft’ as the buyer and ‘Sun Microsystems’ as the company bought

But keyword-based search engines are not aware of these concepts. More sophisticated search engines are required.
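To make the mismatch concrete, here is a minimal sketch in plain Python of the naive keyword matching described above. The query and the title come from our example; the function itself is a deliberately simplified stand-in for a real search engine, not an actual implementation of one:

```python
# A naive keyword matcher: a document matches a query only if the two
# share at least one (lower-cased) word.
def keyword_match(query: str, document: str) -> bool:
    query_words = set(query.lower().split())
    document_words = set(document.lower().split())
    return bool(query_words & document_words)

query = "buyers and companies bought"
title = "Microsoft Acquires Sun Microsystems"

# Prints False: the query and the title share no words at all, even though
# the title reports exactly the kind of transaction the analyst wants.
print(keyword_match(query, title))
```

Real search engines add stemming, synonym lists and ranking on top of this, but none of that supplies the grammatical concepts listed above.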

Beyond keyword search

Since the foundation of Xerox Research Centre Europe in the early nineties, one of our main research topics has been natural language processing. Over the past ten years, the focus has been on information extraction, so that automated tools can return answers to more in-depth queries. We have developed what we call ‘FactSpotter’, a sophisticated information extraction tool that takes into account the complexity of language structure and can handle the three concepts described in the example above.

Based on linguistic rules, FactSpotter can detect the names of people, companies or locations, dates and various other so-called “named entities” in texts (and it can do this in several different languages)[1]. In the example above, FactSpotter would identify ‘Microsoft’ and ‘Sun Microsystems’ as company names. It also analyses word forms and provides a formalism that allows users to build lists of words and expressions that convey the same concept. This ability makes it possible, for example, to associate the word ‘acquires’ with the concept of a transaction.
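FactSpotter itself is not publicly available, so the following sketch illustrates the same kind of named entity detection with the open-source spaCy library; the library, the model name and the code are stand-ins chosen for illustration, not FactSpotter’s actual implementation:

```python
import spacy

# Requires: pip install spacy
#           python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Microsoft acquires Sun Microsystems")

# Print every named entity the model detects, with its type label.
# Expected output (model-dependent): Microsoft/ORG, Sun Microsystems/ORG
for ent in doc.ents:
    print(ent.text, ent.label_)
```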

FactSpotter can also conduct syntactic and semantic analysis. Syntactic analysis identifies ‘Microsoft’ as the subject of ‘acquires’ and ‘Sun Microsystems’ as its direct object. Semantic analysis then maps these syntactic functions onto semantic roles: ‘Microsoft’ is recognized as the buyer and ‘Sun Microsystems’ as the company bought.

FactSpotter’s grammatical analysis of ‘Microsoft acquires Sun Microsystems’
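The same subject/object analysis can be sketched with open-source tools. Again, this uses spaCy as an illustrative stand-in for FactSpotter: a dependency parse recovers the subject and the direct object of ‘acquires’, which we then map onto the roles of buyer and company bought:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Microsoft acquires Sun Microsystems")

for token in doc:
    # 'nsubj' marks the grammatical subject, 'dobj' the direct object.
    # Taking the token's whole subtree recovers multi-word names
    # such as 'Sun Microsystems'.
    if token.dep_ == "nsubj":
        print("buyer:", " ".join(t.text for t in token.subtree))
    elif token.dep_ == "dobj":
        print("company bought:", " ".join(t.text for t in token.subtree))
```

With this sentence, the sketch prints ‘buyer: Microsoft’ and ‘company bought: Sun Microsystems’, mirroring the analysis in the figure above.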


Context is another aspect of human language that FactSpotter can handle. It can differentiate among several meanings of the same word (disambiguation); e.g., it knows that in the sentence ‘I can see, and I see a can’, the first ‘can’ is a verb and the second is a noun. It can also recognize different linguistic structures that carry the same meaning, e.g., ‘buy’, ‘acquire’ and ‘become the new owner of’, and it can recognize different expressions that refer to the same entity, e.g., ‘Microsoft Corporation’ and ‘it’ in the sentence ‘Microsoft Corporation announced after the close today that it will buy Sun Microsystems’. These skills are necessary to understand linguistic meaning, something keyword-based search engines simply cannot do.
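The part-of-speech half of this disambiguation is easy to reproduce with standard tools. Here is a sketch, once more using spaCy as a stand-in for FactSpotter, that tags the two occurrences of ‘can’ in the sentence above:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I can see, and I see a can")

# Print each token with its coarse part-of-speech tag. The first 'can'
# comes out as an auxiliary verb (AUX), the second as a noun (NOUN).
for token in doc:
    print(token.text, token.pos_)
```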

FactSpotter has been used in numerous information extraction tasks in different domains and languages: clinical decision making, event extraction and the establishment of chronological order in news articles, the detection of political risk, mining clients’ complaints for customer relationship management, the extraction of biological knowledge from research articles, and more. We are currently engaged in new research that will make it much easier to adapt FactSpotter to new tasks.

Grammatical rules were created some 2,500 years ago and have been taught for as long as schools have existed, to the regret of many a pupil! From guiding the writing of sacred texts, to regulating national languages, to learning and translating foreign languages, the practical uses of grammar rules have increased over the centuries. Today even computers are more effective if they have been through grammar school!


[1] FactSpotter’s named entity recognition capabilities ranked 1st in the following benchmark competitions: SemEval (2007), Named Entity Metonymy Resolution (English); TempEval (2007), Detection of Temporal Expressions (English); HAREM (2008), Named Entity Detection (Portuguese); ESTER 2 (2009), Named Entity Detection (French). The Xerox Incremental Parser component of FactSpotter can be accessed online at


About the author:

Ágnes Sándor is a researcher in the Parsing and Semantics group at the Xerox Research Centre Europe. Her research areas are information extraction from biological and medical documents, news articles and textual enterprise data, rhetorical analysis of argumentative discourse, and pragmatic analysis of social media postings.