Nativism vs. Empiricism: Ramifications for Artificial Natural Language Processing

Note: This is an essay I wrote for the subject Philosophy of Cognitive Science that was part of my bachelor’s course. I think it might be interesting to others, so I’ve decided to publish it here. The format is adapted slightly to be more suitable for this blog; the content is unchanged.


In the field of artificial intelligence, humans are often held up as prime examples of adaptable agents with general intelligence. The goal of some artificial intelligence researchers is to arrive at an artificial general, or human-level, intelligence: agents able to perform many of the same tasks, with the same adaptability, as humans. One of the few empirical certainties in the endeavour of creating such intelligent agents is that natural, human intelligence works. Thus, there is merit to artificial intelligence research that strives to mimic human intelligence by modelling human mechanisms.

An intriguing and far-from-settled debate concerns the origin of human knowledge, skills, abilities and thought in general. The major theories can be placed somewhere between the two extremes of full-blown nativism and full-blown empiricism [7]. Nativist theorists argue for innate knowledge: at least some of our capabilities arise from hard-wired pathways in our nervous system that are available at birth. In contrast, empiricists argue that these capabilities are learned from experience using the brain’s various capacities for learning. For example, a baby’s suckling after birth is likely innate, whereas the behavioural pattern of brushing your teeth is likely learned. Despite the seemingly simple distinction, it is still unknown where between these extremes the truth lies.

When striving to model human capacities in an artificial intelligence, knowing which parts of human intelligence and other capabilities are hard-wired and which parts arise from experiences should be of particular interest to artificial intelligence researchers. In the following, we will look at the innateness (or lack thereof) of language parsing and acquisition. From this, recommendations will be made regarding the high-level design of an artificial natural language processor.


A clear distinction between agents with artificial intelligence and those with natural intelligence is their disparate origins. An artificially intelligent agent is one that is designed by naturally intelligent agents. A naturally intelligent agent is “designed” by evolution.

Innateness in an artificially intelligent agent could be said to be any hard-coded rule-set. However, this rule-set can itself have arisen from a learning algorithm. In addition, any rule-set devised by a naturally intelligent agent can be argued to have arisen from the learning of that natural intelligence. As such, it might be argued that an artificial intelligence by its very nature is formed wholly by experience. There would be no room for innate capabilities. However, similarly, all innate capabilities of a natural intelligence have arisen from evolution. This process can itself be seen as a type of learning; it randomly tries introducing new capabilities or varies existing capabilities (mutations), and remembers those that work (natural selection). As such, by using the same reasoning, all natural intelligences would be formed wholly by experience as well. Evidently, the original reasoning is not sound.

We will define an “innate” capability of an artificial intelligence to be a capability of that intelligence that was available at its initialisation. This includes capabilities that are triggered at later times [14].

Language Learning

Many linguists suggest that the ability to parse natural languages is innate (e.g., [1]). For example, it might be possible that children are born with knowledge of the constraints applicable to the set of possible human languages [5]. These constraints can be seen as an untrained “language parser”. This parser is subsequently fine-tuned through integration of linguistic data (i.e., the child’s observations with regard to the target language). This process fits the parser to a specific grammar and lexicon.
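The idea of an untrained innate parser whose remaining degrees of freedom are fixed by linguistic input can be illustrated with a toy sketch. The example below is purely illustrative (the essay does not specify a mechanism): it assumes a "principles and parameters" style learner with a single invented binary parameter, head direction, that is unset at initialisation and is fixed by majority vote over observed phrases.

```python
# A toy sketch of an innate parser: the grammatical skeleton is fixed at
# initialisation; exposure to the target language only sets parameters.
# All names and the single "head direction" parameter are illustrative.

class InnateParser:
    """Universal skeleton with one binary parameter: head direction."""

    def __init__(self):
        self.head_initial = None  # unset at "birth"

    def observe(self, phrases):
        # Each phrase is a ("head", "comp") or ("comp", "head") pair;
        # count which order the input favours and set the parameter.
        votes = sum(1 if p[0] == "head" else -1 for p in phrases)
        self.head_initial = votes >= 0

    def parse(self, pair):
        if self.head_initial is None:
            raise RuntimeError("parameter not yet set by linguistic input")
        head, comp = pair if self.head_initial else (pair[1], pair[0])
        return {"head": head, "complement": comp}


parser = InnateParser()
# English-like input: heads tend to precede their complements ("eat apples").
parser.observe([("head", "comp"), ("head", "comp"), ("comp", "head")])
print(parser.parse(("eat", "apples")))  # {'head': 'eat', 'complement': 'apples'}
```

The point of the sketch is the division of labour: the structure of `parse` is "innate" (present before any input), while `observe` only selects among the options that structure already allows.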

Such an innate parser significantly eases grammar acquisition [6]. One of the major arguments in favour of an innate language parser is the so-called “poverty of the stimulus” argument [2]. It asserts that too little information is available to young children for them to learn a language in as deep a way as they do. Furthermore, the space of grammars consistent with the speech they are exposed to is so broad as to make inductive generalization highly improbable. To overcome this, children must have some sort of innate linguistic capacity that limits the set of possible grammars. In addition, the universal aspects of the structures of human languages seem to indicate common innate constraints.

Thus, whereas the actual language itself is learned, the grammar constraints are already given. This innateness of constraints appears to be at odds with connectionism. Connectionist models do not incorporate traditional grammars [12]. However, these models are highly suited to these types of learning problems; they deal well with noisy input, create abstract representations, and have the ability to deal with novel inputs [12]. It might be the case that such structures can be used for learning languages, but they should likely be mediated by the innate constraints on possible grammars. Moreover, there are no markers or grammatical categories present explicitly in the speech a child is exposed to. Thus, such a mediation method should also provide a linkage between the observations and the innate grammar system.

One such mediation method is “bootstrapping”. The term comes from the straps on boots that help one pull them on; more generally, a bootstrap is a small action or device that helps accomplish a larger one. In linguistics, a bootstrapping mechanism starts a child’s language acquisition by constraining the language processing [8]. Various bootstrapping theories have been proposed (e.g., syntactic [11] and prosodic [13]). Essentially, bootstrapping helps the child perform the task of matching the specific language’s properties to the innate grammatical system, i.e. the parser.

Such a bootstrapping mechanism can work in a variety of ways. Child-directed speech contains prosodic information about the syntactical structure of the sentence [4]. In the prosodic bootstrapping case, children have an innate ability to use these acoustic cues to separate syntactic boundaries before they have lexical knowledge [13].
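A crude sketch of what such prosodic segmentation might look like: split a stream of syllables into candidate phrases using only an acoustic cue (pause length), with no lexical knowledge at all. The threshold, function name, and data are invented for illustration; real prosodic cues are of course far richer than pause duration alone.

```python
# A minimal sketch of prosodic bootstrapping: segment an utterance into
# candidate syntactic phrases using only pause lengths (an acoustic cue),
# before any lexical knowledge is available.

def segment_by_prosody(syllables, pauses, threshold=0.3):
    """Split the syllable stream wherever the pause that follows a
    syllable exceeds `threshold` seconds; return candidate phrases."""
    phrases, current = [], []
    for syllable, pause in zip(syllables, pauses):
        current.append(syllable)
        if pause > threshold:
            phrases.append(current)
            current = []
    if current:
        phrases.append(current)
    return phrases


# "the dog [long pause] chased the cat"
sylls = ["the", "dog", "chased", "the", "cat"]
pauses = [0.05, 0.45, 0.05, 0.05, 0.0]
print(segment_by_prosody(sylls, pauses))
# [['the', 'dog'], ['chased', 'the', 'cat']]
```

Note that the segmenter never inspects what the words mean; the syntactic boundary falls out of the acoustics alone, which is the essence of the prosodic bootstrapping claim.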

An interesting observation can be made when looking at the brain areas associated with languages. In bilingual speakers, if the second language is acquired at a young age, the first and second languages are generally represented in a common area. If the second language is acquired late, the first and second languages tend to be spatially separated [10]. In addition, during the early stages of learning a second language, one often translates sentences word-by-word. In most cases this does not produce grammatically valid sentences in the target language. During the learning of the second language, sentences gradually become more grammatical. Additionally, learners of a second language can acquire knowledge of grammatical structures that are not available in their first language and that are not explicitly presented to them during their studies [3]. Thus, it appears that for late learners of a second language a second language parser is created and trained. This implies that such a parser, even if innate in itself, can be created in an empiricist (“on-demand”) fashion.

If an artificial intelligence is designed by mimicking human mechanisms, constraints are put on the design of that intelligence. If the above mechanisms are accurate descriptions of how humans acquire language, the constraints for an artificial natural language processor can be approximately identified. Firstly, such an intelligence must have an innate parser. The parser should contain parameters that influence its working. With the correct parameters, the parser should be able to parse any natural human language. Secondly, the raw speech input should be mediated in some way. This mediation should be performed by something similar to bootstrapping. It is not clear which type of bootstrapping is most appropriate, and likely a combination of possible bootstrapping methods is suitable. Thirdly, the artificial intelligence should be exposed to a large amount of speech. Adults change their speech when talking to children [9], and thus it might be appropriate to similarly have a corpus of “child-directed speech” for such an artificial intelligence to learn from.
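The three constraints above can be sketched as a pipeline. Everything in the example is a stand-in (the essay prescribes only the high-level design): a mediation step converts raw utterances into a structured representation, and a processor that exists before any exposure accumulates knowledge from a child-directed corpus.

```python
# The three design constraints, sketched as a pipeline: an innate
# (pre-initialised) processor, a bootstrapping-like mediator, and a
# corpus of child-directed speech. Every component is a stand-in.

def mediate(utterance):
    # Mediation stand-in: convert raw input into structured phrases.
    # Here "|" marks a prosodic boundary in the toy transcription.
    return [phrase.split() for phrase in utterance.split("|")]


class LanguageProcessor:
    def __init__(self):
        # "Innate": present at initialisation, before any exposure.
        self.lexicon = {}

    def expose(self, corpus):
        # Learn only from mediated input, never from the raw signal.
        for utterance in corpus:
            for phrase in mediate(utterance):
                for word in phrase:
                    self.lexicon[word] = self.lexicon.get(word, 0) + 1


corpus = ["look at the|doggy", "the doggy|runs"]  # toy child-directed speech
proc = LanguageProcessor()
proc.expose(corpus)
print(proc.lexicon["doggy"])  # 2
```

The design choice worth noting is that the learner only ever sees the mediator’s output, mirroring the claim that raw speech reaches the innate grammar system only through bootstrapping.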


Human language processing and acquisition appear to be complex combinations of innate and learned mechanisms. To mimic the abilities in a human way, an artificial intelligence should (i) contain an innate language parser that is tunable to specific languages through parameters, (ii) contain an innate mediation mechanism, such as bootstrapping, that converts speech input into a more readily usable representation, and (iii) have access to a corpus of appropriate speech, such as child-directed speech. The parser should then be tuned through supplying mediated speech input. Note that the innate mechanisms can themselves be created with machine learning methods; the only requirement is that they should be available at initialisation of the artificial intelligence in question. In the case of creating an artificial intelligence with the ability to learn more than one language, it is unclear as to whether it should augment the available routines (i.e., use the same “brain areas”) or create new routines. In the case of creating new routines, a new language parser should be trained.


  • [1] N. Chomsky, Aspects of the Theory of Syntax, MIT Press, 1965.
  • [2] N. Chomsky, “Language and problems of knowledge,” Teorema: Revista Internacional de Filosofía, pp. 5-33, 1997.
  • [3] C. J. Doughty and M. H. Long, The Handbook of Second Language Acquisition, John Wiley & Sons, 2008, vol. 27.
  • [4] C. Fisher and H. Tokura, “Acoustic cues to grammatical structure in infant-directed speech: cross-linguistic evidence,” Child Development, vol. 67, iss. 6, pp. 3192-3218, 1996.
  • [5] J. A. Fodor, The Modularity of Mind: An Essay on Faculty Psychology, MIT Press, 1983.
  • [6] J. D. Fodor, “Learning to parse?,” Journal of Psycholinguistic Research, vol. 27, iss. 2, pp. 285-319, 1998.
  • [7] S. Gross and G. Rey, “Innateness,” 2012.
  • [8] B. Höhle, “Bootstrapping mechanisms in first language acquisition,” Linguistics, vol. 47, iss. 2, pp. 359-382, 2009.
  • [9] V. Kempe, S. Schaeffler, and J. C. Thoresen, “Prosodic disambiguation in child-directed speech,” Journal of Memory and Language, vol. 62, iss. 2, pp. 204-225, 2010.
  • [10] K. H. Kim, N. R. Relkin, K. Lee, and J. Hirsch, “Distinct cortical areas associated with native and second languages,” Nature, vol. 388, iss. 6638, pp. 171-174, 1997.
  • [11] B. Landau and L. R. Gleitman, Language and Experience: Evidence from the Blind Child, Harvard University Press, 2009, vol. 8.
  • [12] M. S. Seidenberg, “Language acquisition and use: learning and applying probabilistic constraints,” Science, vol. 275, iss. 5306, pp. 1599-1603, 1997.
  • [13] M. Soderstrom, A. Seidl, D. G. Kemler Nelson, and P. W. Jusczyk, “The prosodic bootstrapping of phrases: evidence from prelinguistic infants,” Journal of Memory and Language, vol. 49, iss. 2, pp. 249-267, 2003.
  • [14] K. Sterelny, “Fodor’s nativism,” Philosophical Studies, vol. 55, iss. 2, pp. 119-141, 1989.
