Competing in the ESA Moon Challenge

I have recently joined a group of people to form a team for competing in the ESA Moon Challenge. This challenge is held in light of recent plans to set up an inhabited station near the far side of the Moon; specifically, at the second Lagrangian point of the Earth-Moon system ($latex \text{EML}_{2}$). In this orbit, the station would keep a fixed position relative to both the Earth and the Moon. This position brings great advantages: it offers excellent capabilities for Lunar research, and it is relatively easy to reach with a rocket from Earth.

Additionally, due to this useful orbit, the station could be used as an “Exploration Gateway Platform”. This Gateway would allow for a great number of scientific missions, and would pave the way for sustainable Lunar exploration. Other long-term goals of the Gateway include exploration of asteroids and, ultimately, Mars. See the Global Exploration Roadmap (2013) for more information about the Gateway and potential missions.

The goal of the challenge is for international groups of university students to design a number of missions. The missions should be based on the rough architecture named “Human-Enhanced Robotic Architecture and Capabilities for Lunar Exploration and Science” (HERACLES); this architecture describes a number of landers, ascent modules, robots, and other elements that will work in concert to provide an unprecedented opportunity for exploring the Moon.

I am really excited to be participating in this challenge, and I think it was a great idea of ESA’s to organize it. Good luck to all competing teams, and have fun!

Plotting a Heat Map Table in MATLAB

A heat map table with the “copper” colormap.

A little while ago I found myself needing to plot a heat map table in MATLAB. Such a plot is a table where the cells have background colors; the colors depend on the value in the cell, e.g. a higher value could correspond to a warmer color. I found no existing function to do this easily, so I set out to create my own solution.

The code to plot a heat map table can be found here.

Usage is pretty simple. If you have a matrix $latex A$, just pass it into the function and it will do the rest! For example:

% Fill a 7-by-7 matrix with values that increase toward the bottom-right.
A = zeros(7,7);
for i = 1:7
    for j = 1:7
        A(i,j) = i + j - 2;
    end
end
tabularHeatMap(A);

There are a number of options available; see the documentation in the code for details. To further adjust the generated figure, such as to add labels, proceed as you would with other plotting functions. For example:

% Cross-tabulate your response and answer vectors, then plot the result.
confusion = crosstab(responses, correctAnswers);
h = tabularHeatMap(confusion, 'Colormap', 'winter');
title('Confusion Matrix');
xlabel('Correct');
ylabel('Response');
h.XAxisLocation = 'top';         % move the x-axis labels above the table
h.XTick = [1 2 3];               % one tick per column...
h.XTickLabel = {'A', 'B', 'C'};  % ...labeled with the answer categories
h.YTick = [1 2 3];
h.YTickLabel = {'A', 'B', 'C'};

Training, Testing and Development / Validation Sets

Finding models that predict or explain relationships in data is a central task in information science. Such models often have many parameters that can be tuned, and in practice we only have limited data to tune them with. If we make measurements of a function $latex f(x)$ at different values of $latex x$, we might find data like in Figure (a) below. If we now fit a polynomial curve to all known data points, we might find the model depicted in Figure (b). This model appears to explain the data perfectly: all data points are covered. However, such a model gives no additional insight into the relationship between $latex x$ and $latex f(x)$. Indeed, if we make more measurements, we find the data in Figure (c), and now the model we found in (b) appears not to fit the data well at all. In fact, the function used to generate the data is $latex f(x) = x + \epsilon$, with $latex \epsilon$ Gaussian noise. The linear model $latex f'(x) = x$ depicted in Figure (d) is the most suitable model to explain the found data and to make predictions of future data.
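
The effect is easy to reproduce. Below is a minimal MATLAB sketch (the sample sizes and noise level are assumptions, and this is not the code behind the figures): fit a degree-7 polynomial and a line to eight noisy measurements of $latex f(x) = x$, then compare their errors on new measurements.

rng(1);                                   % make the sketch reproducible
x = linspace(0, 5, 8)';                   % initial measurements (Figure a)
y = x + 0.3 * randn(size(x));             % f(x) = x plus Gaussian noise
p_over = polyfit(x, y, 7);   % degree 7 interpolates all 8 points exactly
                             % (MATLAB may warn that this fit is badly conditioned)
p_lin  = polyfit(x, y, 1);   % linear fit: matches the true structure
x_new = linspace(0, 5, 50)';              % later measurements (Figure c)
y_new = x_new + 0.3 * randn(size(x_new));
mse_over = mean((polyval(p_over, x_new) - y_new).^2);
mse_lin  = mean((polyval(p_lin,  x_new) - y_new).^2);
fprintf('MSE on new data: overfitted %.3f, linear %.3f\n', mse_over, mse_lin);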

The overly complex model found in (b) is said to have overfitted. A model that has been overfitted fits the known data extremely well, but it is not suited for generalization to unseen data. Because of this, it is important to have some estimate of a model’s ability to generalize to unseen data. This is where training, testing, and development sets come in: the full set of collected data is split into these separate sets.
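
As a minimal sketch of such a split in MATLAB (the 60/20/20 proportions and the placeholder data are assumptions for illustration):

data = randn(100, 3);                 % placeholder dataset, one example per row
n = size(data, 1);
idx = randperm(n);                    % shuffle so the sets are unbiased samples
nTrain = round(0.6 * n);              % 60% for training
nDev   = round(0.2 * n);              % 20% for development / validation
trainSet = data(idx(1:nTrain), :);
devSet   = data(idx(nTrain+1:nTrain+nDev), :);
testSet  = data(idx(nTrain+nDev+1:end), :);   % remaining 20% for testing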

Continue reading “Training, Testing and Development / Validation Sets”

Mistaken Logical Implication

One of the most common logical inferences uses logical implication. For example, you know that if it rains then the grass will be wet. If you look outside and see that it rains, you do not have to look at the grass to know that it is wet. This inference is called modus ponens: if A implies B and A is true, then B is true. Formally, the implication can be written as:


$latex \text{it rains} \to \text{the grass is wet}$

The modus ponens belonging to this implication can be written as:


$latex \frac{\text{it rains} \to \text{the grass is wet},\; \text{it rains}}{\text{the grass is wet}}$

A commonly made mistake is to erroneously also assume the opposite: if the grass is wet, it is raining. This is called the converse:


$latex \text{the grass is wet} \to \text{it rains}$
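
A truth table makes the mistake visible. Writing $latex A$ for “it rains” and $latex B$ for “the grass is wet”: the implication $latex A \to B$ also holds when $latex A$ is false, so the grass can be wet without rain (because of a sprinkler, say), and in exactly that case (the third row) the converse $latex B \to A$ fails.

$latex \begin{array}{cc|cc}
A & B & A \to B & B \to A \\
\hline
T & T & T & T \\
T & F & F & T \\
F & T & T & F \\
F & F & T & T
\end{array}$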

Continue reading “Mistaken Logical Implication”

Nativism vs. Empiricism: Ramifications for Artificial Natural Language Processing

[bibshow file=nativismvsempiricism.bib sort=firstauthor order=asc]
Note: This is an essay I wrote for the subject Philosophy of Cognitive Science that was part of my bachelor’s course. I think it might be interesting to others, so I’ve decided to publish it here. The format is adapted slightly to be more suitable for this blog; the content is unchanged.

Notebook Language

In the field of artificial intelligence, humans are often used as prime examples of adaptable agents with general intelligence. The goal of some artificial intelligence researchers is to arrive at an artificial general, or human-level, intelligence. These agents should be able to perform many of the same tasks, with the same adaptability, as humans. One of the few empirical certainties in the endeavour of creating such intelligent agents is that the natural, human intelligence works. Thus, there is merit to artificial intelligence research that strives to mimic human intelligence by modelling human mechanisms.

An intriguing and far-from-settled debate concerns the origin of human knowledge, skills, abilities and thought in general. The major theories can be identified as lying somewhere between the two extremes of full-blown nativism and full-blown empiricism [bibcite key=gross2012innateness]. Nativistic theorists would argue for innate knowledge: at least some of our capabilities arise from hard-wired pathways in our nervous system that are available at birth. In contrast, empiricists would argue that these capabilities are learned from experience utilizing the brain’s various capacities for learning. For example, a baby’s suckling after birth is likely innate, whereas the behavioural pattern of brushing your teeth is likely learned. Which combination of these two extremes is correct remains unknown, even though the distinction seems easy to draw.

When striving to model human capacities in an artificial intelligence, knowing which parts of human intelligence and other capabilities are hard-wired and which parts arise from experience is therefore of particular interest. In the following, we will look at the innateness (or lack thereof) of language parsing and acquisition. From this, recommendations will be made regarding the high-level design of an artificial natural language processor.

Continue reading “Nativism vs. Empiricism: Ramifications for Artificial Natural Language Processing”

Introduction to Artificial Neural Networks

A simple ANN

An artificial neural network (ANN) is a type of machine learning model. It is made up of a number of simple parts called units, or neurons. By combining a large number of these simple units, ANNs can solve real-world problems. For example, the main network that was used in my bachelor thesis research consisted of over 12,000 units. The name artificial neural network is slightly misleading: artificial and biological neural networks are mostly related through the fact that both are made up of many simple parts. Other than that, they are quite unrelated.
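
To give an idea of how simple these parts are, here is one unit as a minimal MATLAB sketch (the input values, weights, bias, and logistic activation are all assumptions for illustration): it computes a weighted sum of its inputs and passes it through an activation function.

x = [0.5; -1.2; 3.0];                % input vector (assumed values)
w = [0.4; 0.1; -0.6];                % the unit's incoming weights (assumed)
b = 0.2;                             % bias term (assumed)
sigmoid = @(z) 1 ./ (1 + exp(-z));   % logistic activation function
y = sigmoid(w' * x + b);             % the unit's output, a value in (0, 1)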

Continue reading “Introduction to Artificial Neural Networks”

Decreasing Information

During the training of several neural networks for my bachelor’s thesis (more on that later, maybe!) I noticed something fun. The networks’ weights (in this case, classification function parameters) are initialized with numbers drawn from the standard normal distribution, meaning the initial network state is random. Such randomness by its very nature has no actual structure, and thus has high entropy. This means that compressing the weights to save them to disk is less effective than it is for other, more structured, information. Initially, saving one such network’s weights required approximately 19.5 MB of disk space.

As the networks’ training progressed, the file sizes shrank! After a day of training, the space required for saving this network’s weights had decreased to 18.0 MB: a 1.5 MB decrease from the original value. I hadn’t thought about it before, but once I noticed it I soon realized what was happening. Training a network is, at its core, an exercise in finding structure in data. A neural network does this by learning some sort of representation of the data through continuously updating its weights while training. In other words, the weights are getting more structured as the network is getting smarter! When the weights’ structure increases, their entropy decreases, making compression more effective and our disks happier. Or unhappier, perhaps.
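
The effect can be imitated with a toy experiment in MATLAB. The sketch below is an analogy rather than a reproduction of my setup (the matrix sizes are assumptions, and version-7 MAT-files, which are deflate-compressed by default, stand in for the compression used on the real weights):

rng(0);
W_random = randn(1000);                          % i.i.d. noise: high entropy
W_structured = repmat(randn(1000, 1), 1, 1000);  % one column repeated: low entropy
save('random.mat', 'W_random', '-v7');           % -v7 MAT-files use compression
save('structured.mat', 'W_structured', '-v7');
r = dir('random.mat');
s = dir('structured.mat');
fprintf('random: %.1f MB, structured: %.1f MB\n', r.bytes / 1e6, s.bytes / 1e6);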

Rocket Fuel Requirements Revisited

In a previous post we looked at the fuel requirements for rockets to reach escape velocity, which we calculated using the rocket equation. This equation takes the conservation of momentum into account. However, momentum is not the only property influencing the velocity of the rocket during a launch.

Rockets expel their fuel over time, and during this time the rocket is pulled back by gravity. Only if a rocket could expel all of its fuel instantaneously, and if we ignore atmospheric drag, would escape velocity be reached instantaneously and the equation hold exactly.

Taking the burn time and gravity into account yields a difficult differential equation, but we can implement that equation in a computer program to simulate the launch.
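
For a taste of what such a simulation looks like, here is a minimal sketch (all vehicle parameters are assumed example values; gravity is held constant and drag is still ignored):

v_e   = 3000;     % exhaust velocity [m/s] (assumed)
m     = 500e3;    % initial mass [kg] (assumed)
m_dry = 50e3;     % dry mass after burnout [kg] (assumed)
mdot  = 2000;     % propellant mass flow [kg/s] (assumed)
g     = 9.81;     % gravitational acceleration [m/s^2], taken constant here
dt    = 0.1;      % time step [s]
v     = 0;        % velocity [m/s]
while m > m_dry
    a = v_e * mdot / m - g;   % thrust acceleration minus gravity loss
    v = v + a * dt;           % simple Euler integration step
    m = m - mdot * dt;
end
fprintf('Velocity at burnout: %.0f m/s\n', v);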

Continue reading “Rocket Fuel Requirements Revisited”

Black Holes

A black hole is an object from which nothing, including light, is able to escape. As nothing can go faster than light, this can be more formally defined as an object for which the escape velocity is greater than the speed of light. In my post about escape velocity we found an equation relating the velocity required to escape from the gravitational pull of an object and that object’s mass.


$latex v = \sqrt{2G \frac{M}{r}}$

In this equation, v is the escape velocity, M is the mass of the object we want to escape from and r is the distance from the center of mass of the object we’re escaping from. G is the gravitational constant.

If the required velocity v is greater than the speed of light, c, even light will not be able to escape the object, making it a black hole. We have two variables, M and r, so we can derive two equations from this. One equation gives the distance from the center of mass required to make escaping from the object impossible given the mass, and the other gives the mass required given the distance from the center of mass.


$latex \begin{aligned}
\sqrt{2G \frac{M}{r}} &= c \\
2G \frac{M}{r} &= c^2 \\
\frac{M}{r} &= \frac{c^2}{2G} \\
M &= \frac{r c^2}{2G} \\
r &= \frac{2GM}{c^2}
\end{aligned}$

The latter equation, $latex r = \frac{2GM}{c^2}$, is known as the Schwarzschild radius. It is the radius of the sphere around the object’s center of mass such that, if all the mass is within that sphere, the resulting escape velocity at the sphere’s surface equals the speed of light. In other words, if the object were smaller than this, it would become a black hole. For Earth, the radius is surprisingly small:


$latex \begin{aligned}
r &= \frac{2G M_{\oplus}}{c^2} \\
&= \frac{2 \cdot 6.674 \cdot 10^{-11} \cdot 5.972 \cdot 10^{24}}{(2.998 \cdot 10^8)^2} \approx 8.87 \cdot 10^{-3} \text{ m} = 8.87 \text{ mm}
\end{aligned}$

So, Earth would only become a black hole if it were compressed to the size of a marble. A black hole can be smaller than its Schwarzschild radius, however. In this case, the radius acts as the event horizon of the black hole: matter, or information, inside the radius is not necessarily inside the black hole itself, but it can no longer escape to outside the event horizon. In other words: everything that happens inside the event horizon of a black hole is invisible to outside observers.
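
As a quick sanity check of that marble-sized figure, here is the same computation in MATLAB (using standard values for the constants):

G = 6.674e-11;       % gravitational constant [m^3 kg^-1 s^-2]
M_earth = 5.972e24;  % mass of Earth [kg]
c = 2.998e8;         % speed of light [m/s]
r_s = 2 * G * M_earth / c^2;   % Schwarzschild radius [m]
fprintf('Schwarzschild radius of Earth: %.2f mm\n', r_s * 1e3);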

Escape Velocity (And: How Much Fuel Do Rockets Need?)

During the launch of a rocket, the Earth’s gravitational field is pulling the rocket back. The rocket needs a certain speed to be able to escape from the Earth’s gravitational field, such that it will neither fall back to Earth nor end up in an orbit around it. Escape velocity is the speed a rocket requires to be able to escape from a body without having to burn more fuel later during the maneuver. For a body as massive as Earth, the required velocity is relatively high, and this is why rockets literally need tonnes of fuel.

In this post, by making a few simplifications and using the rocket equation that we found earlier, we will derive an equation to calculate the amount of propellant needed to escape from Earth.
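As a preview of the result, the rocket equation $latex \Delta v = v_e \ln(m_0 / m_f)$ can be solved for the initial mass $latex m_0$. A minimal sketch with assumed example values (gravity and drag losses are ignored, as in the simplifications above):

v_esc = 11.2e3;   % escape velocity at Earth's surface [m/s]
v_e   = 3000;     % exhaust velocity [m/s] (assumed example value)
m_f   = 10e3;     % final mass after the burn [kg] (assumed)
m_0   = m_f * exp(v_esc / v_e);   % rocket equation solved for m_0
fprintf('Propellant needed: %.0f tonnes\n', (m_0 - m_f) / 1e3);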

Continue reading “Escape Velocity (And: How Much Fuel Do Rockets Need?)”