Machine Translation Turing Test

Will computers ever reach the quality of professional translators?

The U.S. government spent 4.5 billion USD from 1990 through 2009 on outsourced translation and interpretation [4]. If these translations had been automated, much of this money could have been spent elsewhere. The research field of Machine Translation (MT) aims to develop systems capable of translating verbal language (i.e., speech and writing) from a given source language to a target language.

Because verbal language is broad, allowing people to express a great number of things, many factors must be taken into account when translating text from a source language to a target language. Melby [6] identifies three main difficulties: the translator must distinguish general vocabulary from specialized terms, must disambiguate among the possible meanings of a word or phrase, and must take into account the context of the source text.

Machine Translation systems must overcome the same obstacles as professional human translators in order to translate text accurately. To achieve this, researchers have pursued a variety of approaches over the past decades [3, 2, 5]. At first, the knowledge-based paradigm was dominant. After promising results from statistical systems [2, 1], the focus shifted towards this new paradigm.
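The statistical paradigm of [2, 1] can be summarized by the noisy-channel model: among candidate target sentences e, pick the one maximizing P(e) · P(f | e), where P(e) is a language model scoring fluency and P(f | e) is a translation model scoring adequacy for the source sentence f. The sketch below illustrates this decision rule with made-up toy probabilities (the word "maison" and all scores are invented for illustration, not taken from any real corpus):

```python
# Toy noisy-channel decoder: pick the target sentence e that
# maximizes P(e) * P(f | e) for a given source sentence f.
# All probabilities are hypothetical toy values.

def best_translation(source, candidates, language_model, translation_model):
    """Return the candidate e maximizing P(e) * P(source | e)."""
    return max(
        candidates,
        key=lambda e: language_model[e] * translation_model[(source, e)],
    )

# Hypothetical scores for translating the French word "maison".
lm = {"house": 0.04, "home": 0.03}                         # P(e): fluency
tm = {("maison", "house"): 0.8, ("maison", "home"): 0.5}   # P(f|e): adequacy

print(best_translation("maison", ["house", "home"], lm, tm))  # "house"
```

Real systems, of course, search over an enormous space of candidate sentences with far richer models; the point here is only the factorization into fluency and adequacy.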


[1] P. F. Brown, V. J. Della Pietra, S. A. Della Pietra, and R. L. Mercer, "The mathematics of statistical machine translation: parameter estimation," Computational Linguistics, vol. 19, no. 2, pp. 263–311, 1993.
[2] P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin, "A statistical approach to machine translation," Computational Linguistics, vol. 16, no. 2, pp. 79–85, 1990.
[3] D. A. Gachot, "The SYSTRAN renaissance," in MT Summit II, 1989, pp. 66–71.
[4] D. Isenberg, "Translating For Dollars," 2010. [Online; accessed 2-October-2013].
[5] P. Koehn, F. J. Och, and D. Marcu, "Statistical phrase-based translation," in Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, 2003, pp. 48–54.
[6] A. K. Melby, "Some difficulties in translation," 1999. [Online; accessed 2-October-2013].

Nativism vs. Empiricism: Ramifications for Artificial Natural Language Processing

Note: This is an essay I wrote for the subject Philosophy of Cognitive Science that was part of my bachelor’s course. I think it might be interesting to others, so I’ve decided to publish it here. The format is adapted slightly to be more suitable for this blog; the content is unchanged.


In the field of artificial intelligence, humans are often used as prime examples of adaptable agents with general intelligence. The goal of some artificial intelligence researchers is to arrive at an artificial general, or human-level, intelligence. Such agents should be able to perform many of the same tasks, with the same adaptability, as humans. One of the few empirical certainties in the endeavour of creating such intelligent agents is that natural, human intelligence works. Thus, there is merit to artificial intelligence research that strives to mimic human intelligence by modelling human mechanisms.

An intriguing and far-from-settled debate concerns the origin of human knowledge, skills, abilities and thought in general. The major theories can be placed somewhere between the two extremes of full-blown nativism and full-blown empiricism [1]. Nativist theorists argue for innate knowledge: at least some of our capabilities arise from hard-wired pathways in our nervous system that are available at birth. In contrast, empiricists argue that these capabilities are learned from experience, utilizing the brain's various capacities for learning. For example, a baby's suckling after birth is likely innate, whereas the behavioural pattern of brushing your teeth is likely learned. Despite this seemingly easy distinction, it remains unknown where between these two extremes the truth lies.

When striving to model human capacities in an artificial intelligence, it should be of particular interest to researchers which parts of human intelligence and other capabilities are hard-wired and which parts arise from experience. In the following, we will look at the innateness (or lack thereof) of language parsing and acquisition. From this, recommendations will be made regarding the high-level design of an artificial natural language processor.


[1] S. Gross and G. Rey, "Innateness," in The Oxford Handbook of Philosophy of Cognitive Science, 2012.