Why Understanding Ambiguity in Natural Language Processing Is A Game Changer

Kimberly Surico |
 05/02/16 |
7 min read


“Last night I shot an elephant in my pajamas. How he got in my pajamas, I’ll never know.”

Why do we smile at Groucho Marx’s joke? Because it offers up the absurdly comical image of a multi-ton animal wearing Groucho’s night clothes, a consequence of the fact that language is an ambiguous system.

In Natural Language Processing (NLP), we are confronted every day with words and sentences that occupy multiple regions in the meaning space. How does NLP deal with an expression having two or more distinct denotations? In what respect is sentiment analysis connected to the efficacy of a grammar in recognizing, and resolving, ambiguous patterns?


Using Context to Parse Plausible Interpretations

Ambiguity is a type of meaning uncertainty giving rise to more than one plausible interpretation. Being ambiguous is therefore a semantic attribute of a form (a word, an idea, a sentence, even a picture) whose meaning cannot be resolved according to a rule or process with a finite number of steps. Context, however, plays a role, because something can be ambiguous in one context but not in another. For example, consider this Italian sentence:

1) Mi piace il cane            “I like the dog”    OR “I like the gun hammer”

Cane, in Italian, means both “dog” and “the hammer of a firearm.” It is a case of lexical ambiguity, in particular, of homophony. If we see this sentence in isolation, we do not know whether the positive sentiment refers to the animal or to the weapon.

But imagine that you hear this sentence on an Italian beach where a merry family is playing with a Labrador puppy. A child points at the animal and says, “Mi piace il cane!” No doubt remains, right? As a matter of fact, context helps disambiguation. If computers were people, then, NLP solutions would be easier (but not easy, as we will see).

Types of Ambiguity

Ambiguity goes beyond the lexical form and can affect units even smaller than words. Morphemes can also display a special kind of meaning uncertainty known as syncretism. This occurs when two or more morphological features have the same form, such as the English [s]. In addition to the possessive (Mike’s), an “s” marks both plural nouns (sheets) and the third-person singular of verbs (eats). And then there are other kinds of ambiguity, made of bigger stuff: semantic and syntactic/structural ambiguity. One instance of the first is, again, provided by Groucho Marx:

2) Time flies like an arrow. Fruit flies like a banana.

This kind of ambiguity differs from the lexical kind, as the set of interpretations is less constrained and less widely agreed upon. Here is another example:

3) I liked your picture.

Does this sentence mean that the person appreciated the picture, or that they pressed “like” on Facebook? Although this shows the growing effect of social media on our meaning spaces, it is syntactic ambiguity that interests us more: how we parse natural language. The elephant-in-my-pajamas sentence contains several NPs (noun phrases), including an elephant and my pajamas. An NP is a grammatical constituent with a noun as its head that can become part of something bigger: for instance, see how my pajamas, once introduced by a preposition, turns into a PP (prepositional phrase). The uncertainty arises from the fact that the PP in my pajamas could, in turn, attach either to the NP an elephant (Groucho’s interpretation) or to the verb (shot). We have no way of ruling out one of the options on the basis of syntactic features alone. This is a notorious issue among linguists, known as PP-attachment.
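
To make the attachment ambiguity concrete, here is a minimal sketch using the open-source NLTK library with a toy grammar written for this post (the grammar rules are our own illustrative assumptions, not a NetBase resource). A chart parser returns one tree per attachment:

import nltk

# Toy grammar that licenses both attachments of the PP "in my pajamas".
groucho_grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> Pron | Det N | Det N PP | Poss N
VP -> V NP | VP PP
PP -> P NP
Pron -> 'I'
Poss -> 'my'
Det -> 'an'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
""")

parser = nltk.ChartParser(groucho_grammar)
for tree in parser.parse("I shot an elephant in my pajamas".split()):
    print(tree)

The parser yields exactly two trees: one where in my pajamas hangs off the NP an elephant (Groucho’s reading) and one where it modifies the VP headed by shot, the reading we usually intend.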

The Nature of Ambiguity

Different types of ambiguity can also combine, usually without speakers even noticing, since we are largely unaware of the mechanisms of our own language. A famous example by Noam Chomsky involves both syncretism (the -ing suffix can mark both a verb and a derived adjective) and subject/object ambiguity:

4) Flying planes can be dangerous.[1]

Actually, the very possibility of alternative interpretations comes from how grammar is designed. To put it simply, one can think of our thoughts and ideas as a huge web, a web that can put the Internet to shame.

The human mind consists of a tremendous number of semantic networks, where concepts and representation are interconnected.

Now, humans need to talk, or sign, about such ideas. We need a device that translates, so to speak, associations and networks into hierarchical constituent structures that allow us to put one word after another. This device is syntax, which maps ideas onto forms, such as phrases. How these phrases are organized, then, determines the linear order of the words we speak (or write on Facebook) every day.

Ambiguity by Design

Syntax builds the structure where the PP in my pajamas depends either on the NP an elephant or on the verb shot. Structural ambiguity emerges because the reader cannot determine which projection from thoughts to language the syntax is expressing.

Note that ambiguity is present in natural languages, but not in formal languages, which are unambiguous by design. Interestingly, it also exists in naturalistic conlangs (constructed languages that aim to reproduce all the characteristics of natural languages). Dothraki, for example, the language of the Horse Lords in the TV show Game of Thrones and the book series A Song of Ice and Fire, displays syncretism[2]:

5)

  1. Anha lajak. “I am a warrior”   OR    “I fight”
  2. Kisha lajaki. “We are warriors”          OR    “We fight”

The agentive suffix is identical to the first person singular present tense suffix, and Dothraki has no copula. Moreover, this conlang has cases like:

6) Anha laj mahrazhes m’arakhoon.                    “I fought the man with a sword”

As you see, Khal Drogo’s language also has structural ambiguity.

Ambiguity and Sentiment Analysis

So, whether we are confronted with natural or invented languages, “ambiguity is a practical problem” (Church and Patil, 1982: 139)[3]. Manning and Schütze (1999, 18)[4] interestingly named a section of their book “The Ambiguity of Language: Why NLP Is Difficult”:

“An NLP system needs to determine something of the structure of the text – normally at least enough that it can answer “Who did what to whom?” …Therefore, a practical NLP system must be good at making disambiguation decisions of word sense, word category, syntactic structure, and semantic scope.”

Our system creates a series of dependency structures to capture the mapping from semantic networks of concepts to syntactic elements. As a consequence, our parsing algorithms must disambiguate strings of symbols like Groucho’s sentence and the Dothraki examples above.
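
To illustrate what a dependency structure looks like, here is a minimal sketch using the open-source spaCy library (not NetBase’s parser) on Groucho’s sentence; it assumes the small pretrained English model is installed:

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model has been downloaded
doc = nlp("I shot an elephant in my pajamas")

# Every token points to exactly one head, so the parser has to commit to a
# single attachment for "in" -- precisely where the ambiguity is resolved.
for token in doc:
    print(f"{token.text:10} --{token.dep_:>5}--> {token.head.text}")

Whether in ends up headed by elephant or by shot depends entirely on how the parser resolved the PP-attachment ambiguity.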

In this light, here is an example of how NetBase treats a specific kind of syntactic ambiguity. What follows are two real-life sound bites in Italian.

7) ho mangiato la crepe alla Nutella.   “I ate the crepe with Nutella”

Now, compare it with the following:

8) ho mangiato la crepe al sole.   “I ate the crepe in the sun”

We want to extract a positive sentiment frame for Nutella, the product eaten in the crepe, and definitely not for sole, which is only an adjunct PP. Since sentiment frames are built on top of dependency structures, the shape of the parse determines whether only Nutella, or both Nutella and sole, gets extracted as the “object” of the positive behavior. Extracting sole would obviously be wrong and would lower our precision.

What needs to happen in the Italian rules is that the PP alla Nutella depends on the NP la crepe. In contrast, we want the PP al sole to depend on the VP (verb phrase) ho mangiato. To disambiguate these sentences, three main things are needed: knowledge about grammar, knowledge about the world, and a way to combine them. The task is to provide the system with these three fundamental “skills.”

 
Solving the Nutella Puzzle

Which steps did we take at NetBase to solve the Nutella puzzle? First, Italian chunking and linking rules contribute the knowledge that the system has about grammar: we tell the system what the constituents are, what depends on what, which grammatical categories are in play, and so on. Second, we make use of lexical resources (lexicons) for the second kind of knowledge, the knowledge about the world (in a shallow sense). Finally, we adopt a probabilistic approach as the way to combine them. Let us see how we predict that alla Nutella is likely to modify crepe, while al sole should attach to the verb as an adjunct.

Here is the structure where the PP is a complement of the noun:

9) [[[ho mangiato]VP [[la crepe]NP [alla Nutella]PP]NP]VP]      “I ate the crepe with Nutella”

The dependencies look like this:

10) [dependency graph: the PP alla Nutella depends on the NP la crepe]

And here is where the PP modifies the verb:

11) [[[ho mangiato]VP [la crepe]NP]VP [al sole]PP]    “I ate the crepe in the sun”

The dependency structure looks different from (10):

12) [dependency graph: the PP al sole depends on the VP ho mangiato]

We decided not only to take the syntactic rules into consideration, but also to predict that, given a verbal group, a PP denoting something that can be eaten and adjacent to an NP that has to do with food will probably modify that NP. Conversely, if the PP does not have these features in the same kind of construction, it will directly modify the verb. We provided the system with food knowledge by assigning an edible feature to a large set of words, including crepe and Nutella. As a result, the relevant sentiment frame picks up the brand Nutella, but not the sun.
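
As a rough illustration of the idea (a toy sketch under our own assumptions, not NetBase’s actual rules or code), the decision can be thought of as a small function consulting a lexicon of words that carry the edible feature:

# Hypothetical toy lexicon of words carrying the "edible" feature.
EDIBLE = {"crepe", "nutella", "pizza", "gelato"}

def attach_pp(np_head: str, pp_head: str) -> str:
    """Decide whether the PP attaches to the adjacent NP or to the verb."""
    if np_head.lower() in EDIBLE and pp_head.lower() in EDIBLE:
        return "NP"  # "la crepe alla Nutella": the PP modifies the noun
    return "VP"      # "la crepe al sole": the PP modifies the verb

print(attach_pp("crepe", "Nutella"))  # -> NP
print(attach_pp("crepe", "sole"))     # -> VP

In the real system this lexical check is combined probabilistically with the chunking and linking rules, but the intuition is the same: world knowledge tips the balance between two syntactically legal attachments.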

This case is a simple example of how NetBase NLP handles syntactic ambiguity. Of course, one can object that there will always be a missing piece of lexical knowledge, an over-estimated probability, or a wrong link that “misunderstands” the uncertainty. Yes, computers sometimes get ambiguity wrong. After all, it is complicated to deal with ambiguity when you do not have a human mind!

Have questions about how NetBase can cut through ambiguity and uncover relevant data for your brand? Reach out!

[1]     Chomsky, N. (1957), Syntactic Structures, The Hague/Paris: Mouton

[2]    All Dothraki examples here are personal communication of Dothraki creator David J. Peterson.

[3]     Church, K. and R. Patil (1982), Coping with syntactic ambiguity or how to put the block in the box on the table. American Journal of Computational Linguistics, 8: 139-149, July-December.

[4]     Manning, C. D. and H. Schütze (1999), Foundations of Statistical Natural Language Processing. Cambridge, Massachusetts: MIT Press.

Image from: Chris Eason
