The goal of modern generative linguistics is to achieve a precise computational understanding of how language works: How do speakers turn the meanings they wish to communicate into utterances that can be spoken, written, or signed? How do listeners map these incoming signals to the meanings that they understand? And how do learners come to acquire the systems necessary to solve these problems?

Clearly, these are big questions whose answers will involve many components, ranging from an understanding of the perceptual systems of the human mind to a clearer idea of what it means to “mean.” As with all complex scientific problems, we need to make some simplifying assumptions in order to make progress. Generally speaking, building scientific theories involves two kinds of mutually reinforcing work: We delimit the empirical phenomena that we wish to explain, and we create simplified models of the world to explain these phenomena. These processes feed one another: Without some set of theoretical background assumptions, it is impossible to define what counts as a phenomenon; similarly, the phenomena we wish to explain dictate much about the modeling assumptions we must make. Both kinds of scientific work thus involve a loop of idealization (simplification of the complexity of the real world) and evaluation, and both are critical to building scientific theories.

What kinds of phenomena do we wish to explain in this course? What sorts of simplifications will we make in defining our phenomena, and what sorts of idealizations will we need in developing our models?

Our first simplifying assumption is that we will focus on just the problem of explaining the structure of sentences. Consider the following English sentence.

John loves Mary.

What can we say about a sentence like this? First, it is obviously built from a set of basic units like words and morphemes. Second, the meaning of the sentence is compositional: The meaning of the whole sentence results from the meanings of the individual words together with the way those words are combined. For example,

Mary loves John.

means something quite different from John loves Mary. As English speakers, we know something that tells us how the ordering of words affects their combined meaning in systematic and predictable ways.
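To make the idea of compositionality slightly more concrete, here is a minimal sketch in Python. It is a toy illustration of our own, not a serious theory of meaning: word meanings are represented as simple Python values and functions, and the meaning of a sentence is computed by combining them according to subject-verb-object order.

```python
# A toy illustration of compositional meaning (not a real semantic theory).
# Word meanings are simple Python values; the sentence meaning is built by
# applying the verb's meaning to the subject and object meanings in order.

def loves(lover, beloved):
    """Toy meaning of 'loves': a two-place predicate over individuals."""
    return ("LOVES", lover, beloved)

LEXICON = {"John": "john", "Mary": "mary", "loves": loves}

def interpret(sentence):
    """Compose word meanings according to subject-verb-object word order."""
    subject, verb, obj = sentence.split()
    return LEXICON[verb](LEXICON[subject], LEXICON[obj])

print(interpret("John loves Mary"))  # ('LOVES', 'john', 'mary')
print(interpret("Mary loves John"))  # ('LOVES', 'mary', 'john'), a different meaning
```

Reordering the same words yields a different composed meaning, just as our intuitions suggest. Many combinations of words, however, are not valid in English at all.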

* loves Mary John.

Here we are following the linguistic convention of placing the symbol * at the beginning of a sequence that is not a possible English sentence or, in technical terms, is ungrammatical. While we may be able to guess what the speaker intended (or maybe not) if we heard this sequence of words, we know that it isn’t valid English. In the fifties, Chomsky gave the following famous trio of examples, which illustrates this point more forcefully.

Revolutionary new ideas appear infrequently.
Colorless green ideas sleep furiously.
* Furiously sleep ideas green colorless.

In the preceding examples, the first two sentences are well-formed English, while the third is not. What is striking is that, despite being well-formed, the second sentence, unlike the first, doesn’t seem to mean anything sensible. Chomsky used this example to illustrate the point that whatever principles tell us what a possible English sentence is, they must be at least partially independent of whether or not the sequence has a definite meaning or is otherwise useful.

Another famous example comes from Lewis Carroll’s poem Jabberwocky, which begins:

’Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe.

Although most of the content words in this stanza are invented and have no established meaning, English speakers readily perceive it as well-formed English. These examples suggest that we might make a start on our study of sentence structure by asking which sequences of words are possible, or grammatical, English sentences and which are not.

With this in mind, in this course we will simplify our empirical problem to focus just on the domain of sentence structure, known in linguistics as syntax, and on the question of the grammaticality of particular sequences of words.
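One way to make this framing concrete is to treat “grammatical English” as a set of word strings and to ask whether a given string belongs to that set. Here is a minimal sketch in Python, using a tiny toy grammar invented for illustration; it covers only a handful of words and is not a claim about how English actually works.

```python
# A minimal sketch: treat "grammatical English" as a set of word strings and
# test membership using a tiny toy grammar. The grammar below is an invented
# illustration covering a handful of words, not a serious grammar of English.

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["John"], ["Mary"], ["ideas"], ["Colorless", "green", "ideas"]],
    "VP": [["loves", "NP"], ["sleep", "furiously"]],
}

def derives(symbol, words):
    """True if `symbol` can expand to exactly the word sequence `words`."""
    if symbol not in GRAMMAR:                      # terminal symbol: match one word
        return len(words) == 1 and words[0] == symbol
    return any(matches(expansion, words) for expansion in GRAMMAR[symbol])

def matches(symbols, words):
    """Try every way of dividing `words` among the symbols in `symbols`."""
    if not symbols:
        return not words
    first, rest = symbols[0], symbols[1:]
    return any(derives(first, words[:i]) and matches(rest, words[i:])
               for i in range(len(words) + 1))

def grammatical(sentence):
    return derives("S", sentence.rstrip(".").split())

print(grammatical("John loves Mary."))                        # True
print(grammatical("loves Mary John."))                        # False
print(grammatical("Colorless green ideas sleep furiously."))  # True
```

Framing things this way treats grammaticality as membership in a formal language, an idea that the material on formal languages develops much more carefully.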

Note that in choosing to study the concept of grammaticality we are setting aside many important questions, including even the very question we set out to answer at the beginning of this chapter: how form maps to meaning and vice versa! We are also simplifying greatly: the concept of grammaticality implies that strings of words can always be categorized as possible or impossible English sentences. But it is very easy to find cases where the answer isn’t clear-cut.

More people have visited Berlin than I have.

This sentence sounds correct, but on closer inspection it doesn’t seem to mean anything! Our intuitions about possible and impossible sentences come in degrees, not a simple binary distinction. Nevertheless, it is still useful to start with the idealization that grammaticality can be captured as a binary distinction.

It is very likely that the phenomenon of grammaticality as exemplified above isn’t a single phenomenon, but many. For example, the problems with the following two sentences seem to differ in kind rather than being the same form of ungrammaticality.

* The man walk quickly.

* Man the quickly walks.

Here we have the intuition that the subject-verb agreement error in the first example is quite different from the word salad in the second example. This suggests that several different mental processes are involved in understanding language, and that each might have a somewhat different notion of well-formedness.
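As a rough illustration of this point, here is a minimal sketch with two independent toy checks (rules invented for just this fragment, not a real analysis of English) that fail for different reasons on the two starred examples above.

```python
# Two independent toy well-formedness checks that diagnose the two starred
# examples above for different reasons. The rules are invented for this tiny
# fragment and are not a serious analysis of English.

DETERMINERS = {"the"}
NOUNS = {"man"}
VERBS = {"walk": "plural", "walks": "singular"}   # toy agreement features
ADVERBS = {"quickly"}

def tag(word):
    """Assign a toy part-of-speech category to a word."""
    for category, vocab in [("Det", DETERMINERS), ("N", NOUNS),
                            ("V", VERBS), ("Adv", ADVERBS)]:
        if word in vocab:
            return category
    return "?"

def diagnose(sentence):
    """Report which toy checks fail: word order, subject-verb agreement, or neither."""
    words = [w.strip(".").lower() for w in sentence.split()]
    problems = []
    if [tag(w) for w in words] != ["Det", "N", "V", "Adv"]:   # toy word-order template
        problems.append("word order")
    verbs = [w for w in words if w in VERBS]
    if verbs and VERBS[verbs[0]] != "singular":               # toy agreement with a singular subject
        problems.append("subject-verb agreement")
    return problems or ["none"]

print(diagnose("The man walk quickly."))    # ['subject-verb agreement']
print(diagnose("Man the quickly walks."))   # ['word order']
print(diagnose("The man walks quickly."))   # ['none']
```

The labels themselves are toys; the point is only that a single verdict of “ungrammatical” may lump together failures of rather different underlying systems.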

This is all to emphasize, right from the beginning, that we are making drastic empirical simplifications. We are introducing concepts that are somewhat vaguely defined, that capture only limited empirical phenomena, and whose relationship to the data may be complex. Whether grammaticality turns out to be a useful theoretical concept will depend largely on whether or not it suggests useful models, leads to interesting predictions, and generates valuable questions and refinements.

