What is the concept and theory behind Generative Grammar?

Generative Grammar is a linguistic theory that seeks to explain how humans acquire and use language. It is based on the idea that a set of rules and principles underlies all human languages and that these rules are innate to the human brain. The theory was developed by Noam Chomsky in the 1950s and has since become one of the most influential approaches in modern linguistics. In this introduction, we will explore the concept and theory behind Generative Grammar, its key principles, and its impact on our understanding of language and the human mind.

In theoretical linguistics, generative grammar refers to a particular approach to the study of syntax. A generative grammar of a language attempts to give a set of rules that will correctly predict which combinations of words will form grammatical sentences. In most approaches to generative grammar, the rules will also predict the morphology of a sentence.
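
To make the idea of rule-based prediction concrete, here is a minimal sketch in Python of a toy phrase-structure grammar that enumerates the word strings it generates. The categories, rules, and vocabulary are invented for illustration and are not drawn from any particular generative theory.

```python
# A toy phrase-structure grammar: a finite set of rewrite rules that
# generates (and thereby predicts) the grammatical strings of a tiny
# language fragment. Categories and vocabulary are invented examples.
TOY_GRAMMAR = {
    "S":   [["NP", "VP"]],         # a sentence is a noun phrase plus a verb phrase
    "NP":  [["Det", "N"]],         # a noun phrase is a determiner plus a noun
    "VP":  [["V", "NP"], ["V"]],   # a verb phrase is a verb, with or without an object
    "Det": [["the"], ["a"]],
    "N":   [["dog"], ["cat"]],
    "V":   [["sees"], ["sleeps"]],
}

def generate(symbol="S"):
    """Yield every word sequence the grammar derives from `symbol`."""
    if symbol not in TOY_GRAMMAR:            # terminal: an actual word
        yield [symbol]
        return
    for expansion in TOY_GRAMMAR[symbol]:    # try each rule for this category
        partials = [[]]                      # expand the rule left to right
        for sym in expansion:
            partials = [p + tail for p in partials for tail in generate(sym)]
        yield from partials

for sentence in generate():
    print(" ".join(sentence))                # e.g. "the dog sees a cat"
```

Because the rule set is finite but combinable, a handful of rules licenses many sentences; strings the rules cannot derive (e.g. "dog the sleeps") are predicted to be ungrammatical.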

Generative grammar originates in the work of Noam Chomsky, beginning in the late 1950s. Early versions of Chomsky’s theory were called transformational grammar, and this term is still used as a collective label that includes his subsequent theories. A number of competing versions of generative grammar are currently practiced within linguistics. Chomsky’s current theory is known as the Minimalist Program. Other prominent theories include or have included head-driven phrase structure grammar, lexical functional grammar, categorial grammar, relational grammar, link grammar, and tree-adjoining grammar.

Chomsky has argued that many of the properties of a generative grammar arise from an “innate” universal grammar. Proponents of generative grammar have argued that most grammar is not the result of communicative function and is not simply learned from the environment (see the poverty of the stimulus argument). In this respect, generative grammar takes a point of view different from cognitive grammar and from functional and behaviorist theories.

Most versions of generative grammar characterize sentences as either grammatically correct (also known as well-formed) or not. The rules of a generative grammar typically function as an algorithm to predict grammaticality as a discrete (yes-or-no) result. In this respect, it differs from stochastic grammar, which treats grammaticality as a probabilistic variable. However, some work in generative grammar (e.g., recent work by Joan Bresnan) uses stochastic versions of optimality theory.
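
The contrast can be shown with a short Python sketch: the first function treats grammaticality as membership in a set of licensed strings and returns a plain yes or no, while the second assigns each rewrite rule a probability, in the spirit of a stochastic grammar, so a derivation receives a graded score. All rules, weights, and sentences here are invented for illustration.

```python
# Two views of grammaticality. The grammars, weights, and sentences
# below are invented purely for illustration.

# Discrete view: a sentence either is or is not in the set of strings
# the grammar licenses, so the answer is yes or no.
LICENSED = {("the", "dog", "sleeps"), ("a", "cat", "sees", "the", "dog")}

def is_grammatical(words):
    return tuple(words) in LICENSED          # True or False, nothing in between

# Stochastic view: each rewrite rule carries a probability, so a
# derivation receives a graded score rather than a yes-or-no verdict.
RULE_PROBS = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("Det", "N")): 1.0,
    ("VP", ("V",)): 0.4,
    ("VP", ("V", "NP")): 0.6,
}

def derivation_probability(rules_used):
    """Multiply the probabilities of the rules used in one derivation."""
    p = 1.0
    for rule in rules_used:
        p *= RULE_PROBS[rule]
    return p

print(is_grammatical(["the", "dog", "sleeps"]))   # True
print(derivation_probability([
    ("S", ("NP", "VP")),
    ("NP", ("Det", "N")),
    ("VP", ("V",)),                               # "sleeps" as a bare verb phrase
]))                                               # 0.4
```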

Historical development of models of transformational grammar

The oldest known generative grammar that is still extant and in common use is the Sanskrit grammar of Pāṇini, called the Ashtadhyayi, composed by the middle of the 1st millennium BCE.

Generative grammar has been under development since the late 1950s, and has undergone many changes in the types of rules and representations that are used to predict grammaticality. In tracing the historical development of ideas within generative grammar, it is useful to refer to various stages in the development of the theory.
