Inductively Defined Lists and Proofs

In this post, we will explore an inductive data structure as a type for lists of elements. We define two recursive functions that can be applied to lists of this type: an operation to append two lists and an operation to reverse a list. With these definitions in place, we set out to prove, by induction, a number of properties that should hold for these functions.

A list is a data structure that holds a collection of elements in a certain order. A list can be defined in multiple ways. One way is to define a list as an element (the “head” of the list) followed by another list (the “tail” of the list). Inductively, this could be formalized as follows:

Inductive list a := cons a (list a) | nil.

This means that a list l with elements of type a is either cons x tail, with x of type a and tail of type list a, or l is the empty list nil. Let’s look at some example lists whose elements are whole numbers.

// Empty list; []
l = nil

// List with one element; [1]
l = cons 1 nil

// Different list with one element; [2]
l = cons 2 nil

// List with two elements; [1,2]
l = cons 1 (cons 2 nil)

// List with three elements; [1,2,3]
l = cons 1 (cons 2 (cons 3 nil))

Note that because these are lists of integers, following our definition, the list l is of type list integer.

Multiple list operations can be defined, such as append and reverse. We defined our list inductively, so it would make sense to define these operations inductively (also known as recursively) as well. Because both the data structure and the operations are defined inductively, we should then be able to prove that certain properties of the operations hold; a sketch of the two operations follows below.
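
To make the direction concrete, here is a rough sketch of what these two operations could look like when written out in Haskell. This is my own illustrative translation of the definitions above, not the notation used in the remainder of the post.

-- Illustrative sketch; the names are my own, not from the post.
-- The inductive list type: a list is either empty (Nil) or an element
-- followed by another list (Cons).
data List a = Nil | Cons a (List a)

-- Appending walks down the first list and re-attaches each element in
-- front of the second list.
append :: List a -> List a -> List a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

-- Reversing reverses the tail and appends the head as a one-element list.
rev :: List a -> List a
rev Nil         = Nil
rev (Cons x xs) = append (rev xs) (Cons x Nil)

Both functions follow the structure of the data type: one case for nil and one case for cons, with the recursive call applied to the tail. That is exactly the structure the induction proofs will follow.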

Continue reading “Inductively Defined Lists and Proofs”

Illogical Reasoning and Reasonable Illogicality: The Differences Between Logic and Reason

What is reasonable is not necessarily logical. One can believe two things and realize that they imply a third thing. Instead of coming to believe the third thing, that person might find they should stop believing one of the first two if they have a good reason to believe the third is false. Similarly, a person with those beliefs, but without a good reason to believe the third is false, might still not infer that the third is true: they might be utterly uninterested in whether it is true or not. There are many things one believes, and though it would be logical to follow those beliefs to their logical conclusions, doing so would fill one’s mind with trivialities, a consequence that makes the action unreasonable.

In order to understand the difference between reason and logic, I think it is important to first understand reason more deeply.

Reason is traditionally split into two types. Theoretical reasoning is a type of thought that leads to a certain belief, whereas practical reasoning is a type of thought that changes plans or intentions. What is reasonable in one type is not necessarily reasonable in the other. For example, in practical reasoning one might be presented with multiple, equally satisfactory options. It would be rational to choose an arbitrary option; otherwise one would be stalled through inaction. The same does not hold for theoretical reasoning: when presented with multiple, equally satisfactory beliefs, it would not be rational to arbitrarily choose one to believe. However, it is possible to rationally choose which beliefs or questions one evaluates with theoretical reason; the conclusions themselves remain unaffected by that choice.

Ordinarily, reasoning is applied in a conservative manner. One’s current beliefs and intentions are changed if there is a special reason to do so, and conserved otherwise. This is in contrast with foundational reasoning, where a belief should be retained only if there is a justification for holding it. In such a reasoning system, there are some foundational beliefs that require no further justification, such as current perceptions and logical axioms. In general, humans reason conservatively.

Logical processes can be divided into three modes. One such mode is deduction, where logical conclusions are reached from premises. A proof that some conclusion is true through deduction starts with the premises, consists of a series of steps in which each step follows logically from the prior steps or the premises, and finally leads to the conclusion. Such a proof is sometimes called "deductive reasoning".

In reality, it is not a type of reasoning at all. In constructing such a proof, one can have many different considerations. One first determines what one is setting out to prove, and might then consider which intermediate steps could be useful. One then aims to prove these intermediate steps, and only then proves the conclusion using those intermediate results. The reasoning behind constructing a logical proof does not necessarily follow the same structure as the proof itself: the deductive rules must be satisfied by the proof, but are not necessarily followed in the proof’s construction.

As such, something that is reasonable is not necessarily logical, and something that is logical is not necessarily reasonable. One can reason about deductions, but deduction is not a kind of reasoning. Logic, at least its deductive mode, is not a theory of reasoning and does not tell us how we govern our beliefs and intentions.

If you would like to explore logic and reasoning in more detail, I highly recommend reading the article "Internal critique: A logic is not a theory of reasoning and a theory of reasoning is not a logic" by G. Harman (2002).

An Introduction to Turing Machines

Alan Turing

A Turing machine is a mathematical model of a mechanical machine. At its core, the Turing machine uses a read/write head to manipulate symbols on a tape. It was invented by the computer scientist Alan Turing in 1936. Interestingly, according to the Church-Turing thesis, this simple machine can compute anything that any other computer can compute, including our contemporary computers. Though these machines are not a practical or efficient means to calculate something in the real world, they can be used to reason about computability and other properties of computer programs.
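
As a rough sketch of the ingredients described above (the names and the Haskell rendering are my own, not part of the original post), the heart of such a machine is a transition function that, given the current state and the symbol under the head, says which state to enter next, what to write, and where to move:

-- Illustrative sketch; names are my own, not from the post.
-- Direction in which the read/write head moves after a step.
data Move = MoveLeft | MoveRight

-- A transition maps (current state, symbol under the head) to
-- (next state, symbol to write, head movement).
type Transition state symbol = (state, symbol) -> (state, symbol, Move)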

Continue reading “An Introduction to Turing Machines”

Alan Turing Quote from Computing Machinery and Intelligence

The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false.

— Alan Turing, 1950

Why You Should Never Go Data Fishing

Most likely you will have heard that you should never go data fishing, meaning that you should not repeatedly test data. In the case of statistical significance tests, perhaps you will have heard that, because of the nature of these tests, you will find an effect at the 5% significance level in 5% of cases when there actually is no effect, an effect at the 2% significance level in 2% of cases when there actually is no effect, and so on. It is less likely that you will have heard that you should not continue looking for an effect after your current test has concluded there is none. Here is why.
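
As a back-of-the-envelope illustration of my own (assuming independent tests, each at significance level $latex \alpha = 0.05$), the probability of finding at least one spurious “effect” grows quickly with the number of tests $latex k$:

$latex P(\text{at least one false positive}) = 1 - (1 - \alpha)^{k}$

For $latex k = 10$ tests this is already $latex 1 - 0.95^{10} \approx 0.40$, even though every single test keeps its nominal 5% false positive rate.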

Continue reading “Why You Should Never Go Data Fishing”

Competing in the ESA Moon Challenge

I have recently joined a group of people to form a team for competing in the ESA Moon Challenge. This challenge is held in light of recent plans to set up an inhabited station near the far side of the Moon, specifically at the second Lagrangian point of the Earth-Moon system ($latex \text{EML}_{2}$). In this orbit, the station will remain in a stable position relative to both the Earth and the Moon. This position brings great advantages: it offers excellent Lunar research capabilities and is relatively easy to reach with a rocket from Earth.

Additionally, due to this useful orbit, the station could be used as an “Exploration Gateway Platform”. This Gateway would allow for a great number of scientific missions, and would pave the way for sustainable Lunar exploration. Other long-term goals of the Gateway include exploration of asteroids and, ultimately, Mars. See the Global Exploration Roadmap (2013) for more information about the Gateway and potential missions.

The goal of the challenge is for international groups of university students to design a number of missions. The missions should be based on the rough architecture named "Human-Enhanced Robotic Architecture and Capabilities for Lunar Exploration and Science" (HERACLES); this architecture describes a number of landers, ascent modules, robots, and other elements that will work in concert to provide an unprecedented opportunity for exploring the Moon.

I am really excited to be participating in this challenge, and I think it was a great idea of ESA to organize it. Good luck to all competing teams, and have fun!

Plotting a Heat Map Table in MATLAB

A heat map table with the “copper” colormap.

A little while ago I found myself needing to plot a heat map table in MATLAB. Such a plot is a table where the cells have background colors; the colors depend on the value in the cell, e.g. a higher value could correspond to a warmer color. I found no existing function to do this easily, so I set out to create my own solution.

The code to plot a heat map table can be found here.

Usage is pretty simple. If you have a matrix $latex A$, just pass it into the function and it will do the rest! For example:

% Create a 7x7 matrix with a simple gradient of values.
A = zeros(7,7);
for i = 1:7
    for j = 1:7
        A(i,j) = i+j-2;
    end
end
% Plot the matrix as a heat map table.
tabularHeatMap(A);

A number of options are available; see the documentation in the code for more information. To further adjust the generated figure, such as to add labels, proceed as you would with other plotting functions. For example:

% Cross-tabulate responses against correct answers to get a confusion matrix.
confusion = crosstab(responses, correctAnswers);
% Plot it with the 'winter' colormap; the returned handle allows further tweaks.
h = tabularHeatMap(confusion, 'Colormap', 'winter');
title('Confusion Matrix');
xlabel('Correct');
ylabel('Response');
% Put the x-axis on top and label both axes with the answer categories.
h.XAxisLocation = 'top';
h.XTick = [1 2 3];
h.XTickLabel = {'A', 'B', 'C'};
h.YTick = [1 2 3];
h.YTickLabel = {'A', 'B', 'C'};

Training, Testing and Development / Validation Sets

Finding models that predict or explain relationships in data is a big focus in information science. Such models often have many parameters that can be tuned, and in practice we only have limited data to tune the parameters with. If we make measurements of a function $latex f(x)$ at different values of $latex x$, we might find data like in Figure (a) below. If we now fit a polynomial curve to all known data points, we might find the model that is depicted in Figure (b). This model appears to explain the data perfectly: all data points are covered. However, such a model does not give any additional insight into the relationship between $latex x$ and $latex f(x)$. Indeed, if we make more measurements, we find the data in Figure (c). Now the model we found in (b) appears not to fit the data well at all. In fact, the function used to generate the data is $latex f(x) = x + \epsilon$ with $latex \epsilon$ Gaussian noise. The linear model $latex f'(x) = x$ depicted in Figure (d) is the most suitable model to explain the found data and to make predictions of future data.

The overly complex model found in (b) is said to have overfitted. A model that has been overfitted fits the known data extremely well, but it is not suited for generalization to unseen data. Because of this, it is important to have some estimate of a model’s ability to be generalized to unseen data. This is where training, testing, and development sets come in. The full set of collected data is split into these separate sets.
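
As a minimal sketch of that split (my own toy helper in Haskell, assuming a 60/20/20 division and data that has already been shuffled), the idea is simply to carve one collected data set into three disjoint parts:

-- Illustrative sketch; splitData and the 60/20/20 ratio are my own choices.
-- Split a data set into training, development and test sets.
-- The data should be shuffled before calling this.
splitData :: [a] -> ([a], [a], [a])
splitData xs = (train, dev, test)
  where
    n             = length xs
    (train, rest) = splitAt (6 * n `div` 10) xs
    (dev, test)   = splitAt (2 * n `div` 10) rest

The model's parameters are then tuned on the training set only, so that its performance on the held-out sets gives an estimate of how well it generalizes to unseen data.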

Continue reading “Training, Testing and Development / Validation Sets”

Mistaken Logical Implication

One of the most common logical inferences uses logical implication. For example, you know that if it rains then the grass will be wet. If you look outside and see that it is raining, you do not have to look at the grass to know that it is wet. This inference is called modus ponens: if A implies B and A is true, then B is true. Formally, the implication can be written as:


$latex \text{it rains} \to \text{the grass is wet}$

The modus ponens belonging to this implication can be written as:


$latex \frac{\text{it rains} \to \text{the grass is wet},\; \text{it rains}}{\text{the grass is wet}}$

A commonly made mistake is to also assume the opposite: if the grass is wet, it is raining. This is called the converse:


$latex \text{the grass is wet} \to \text{it rains}$
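
Treating the converse as if it held licenses the invalid inference known as affirming the consequent: from the original implication and the observation that the grass is wet, one concludes that it rains, even though the grass may just as well be wet because a sprinkler ran. Schematically (my own rendering of the fallacious step, which is not a valid rule):

$latex \frac{\text{it rains} \to \text{the grass is wet},\; \text{the grass is wet}}{\text{it rains}}$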

Continue reading “Mistaken Logical Implication”

Nativism vs. Empiricism: Ramifications for Artificial Natural Language Processing

Note: This is an essay I wrote for the subject Philosophy of Cognitive Science that was part of my bachelor’s course. I think it might be interesting to others, so I’ve decided to publish it here. The format is adapted slightly to be more suitable for this blog; the content is unchanged.


In the field of artificial intelligence, humans are often used as prime examples of adaptable agents with general intelligence. The goal of some artificial intelligence researchers is to arrive at an artificial general, or human-level, intelligence. These agents should be able to perform many of the same tasks, with the same adaptability, as humans can. One of the few empirical certainties in the endeavour of creating such intelligent agents is that natural, human intelligence works. Thus, there is merit to artificial intelligence research that strives to mimic human intelligence by modelling human mechanisms.

An intriguing and far-from-settled debate concerns the origin of human knowledge, skills, abilities and thought in general. The major theories can be identified as lying somewhere between the two extremes of full-blown nativism and full-blown empiricism [1]. Nativist theorists would argue for innate knowledge: at least some of our capabilities arise from hard-wired pathways in our nervous system that are available at birth. In contrast, empiricists would argue that these capabilities are learned from experience, utilizing the brain’s various capacities for learning. For example, a baby’s suckling after birth is likely innate, whereas the behavioural pattern of brushing your teeth is likely learned. For this seemingly easy distinction, it is still unknown which combination of the two extremes is correct.

When striving to model human capacities in an artificial intelligence, knowing which parts of human intelligence and other capabilities are hard-wired and which parts arise from experiences should be of particular interest to artificial intelligence researchers. In the following, we will look at the innateness (or lack thereof) of language parsing and acquisition. From this, recommendations will be made regarding the high-level design of an artificial natural language processor.

Continue reading “Nativism vs. Empiricism: Ramifications for Artificial Natural Language Processing”

[1] S. Gross and G. Rey, "Innateness," 2012.