Tuesday, July 2, 2013
Pytables/Numpy: lesson learned
Today's PyTables/NumPy lesson that I learned the hard way (i.e. through time wasted): When you use the __getitem__ method of a PyTables Table and you pass an integer, you don't get back a record array. You get back the same thing as if you had passed an integer to the __getitem__ method of a record array - namely a numpy.void instance, which is used presumably because NumPy doesn't know what to call whatever you have stashed together in one record.
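A minimal sketch of the difference, assuming an already-existing HDF5 file 'data.h5' containing a table at /mytable (both names hypothetical; tables.open_file is the PyTables 3.x spelling):

>>> import tables
>>> h5 = tables.open_file('data.h5')
>>> type(h5.root.mytable[0])      # integer index: a scalar record
<type 'numpy.void'>
>>> type(h5.root.mytable[0:1])    # length-one slice: a real record array
<type 'numpy.ndarray'>
>>> h5.close()

So if you actually want a record array for a single row, slice instead of indexing.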
Labels:
Brain Sputter,
numpy,
programming,
pytables,
python,
Quantity,
Today I learned
Monday, July 1, 2013
Discussions and the power of language
Recently, I have followed the Armikrog
Kickstarter campaign. Doug TenNapel, one of the guys behind the project, and the
creator of Earthworm Jim and Neverhood, among other things, is against gay
marriage. He also did an interview where he said some things that admittedly
could have been said better.
I am going to offer a defense of Doug here, because in cases like this I almost always root for the underdog. And even though I myself am a Christian, I hope that anyone who reads this will consider the merit of what I say in its own right. My whole experience with this Kickstarter campaign and the discussions surrounding it has been the catalyst for this post, but the points are things I have been thinking about for a while, and they should be largely applicable to the wider world. I will try as hard as I can not to step on any toes, but to say what I think is right.
Yet another caveat before I start: I do realize this is a tough issue for many homosexuals. I realize that there are whole lives of oppression, judgement and ostracism involved in the issue. And perhaps the situation is too young to have stabilized, and it's too much to expect the discussion not to veer into the irrational when there are so many feelings involved. However, it seems to me that those who have been oppressed for their sexual orientation are now gaining legitimacy and, with it, more power. And anyone who has power has a responsibility to use that power in a good way. What I'm going to try to do here is point out tendencies towards this not being the case.
The first thing that springs to mind is how abused language can get sometimes. I am thinking in particular of the term 'homophobe' - a word packed with meaning in a small bundle. It's not clear exactly what it's supposed to mean. First of all, I resent putting '-phobe' or '-phobia' at the end of words unless they actually refer to a psychological disorder. Using the word in this way is almost invariably a way to discredit your opposition without rational argument, and nothing is more infuriating than being discredited irrationally. One might even say that people who use such terms have a 'discussiophobia' - an (irrational) fear of rational discussion. But I digress. The point is, calling someone a homophobe when you know nothing about their motivations or beliefs is incredibly disrespectful (for more on what those motivations or beliefs could be, see below). To me, the term 'homophobe' is so full of meaning that it is meaningless. It cannot be brought into a discussion in the hopes of continuing in a rational manner.
Another meaningful but meaningless term is 'hate speech'. What in the world is that supposed to mean? If you say you hate gay people, or something synonymous, then yes, you're engaging in hate speech. But that's usually not how the term is used. Many of Doug's statements are being labelled as 'hate speech', although nothing he says condones or promotes hating gay people (unless you consider being against gay marriage synonymous with hating gay people. But then, I think you need to look up the word 'hate' in some kind of dictionary.)
The second thing is how important context is. In the abovementioned interview, Doug and the interviewer had been chatting for a long while, in an informal style (if I remember correctly). He then made some remarks about gay marriage, and he used some poor analogies to illustrate his point. I agree that he should have thought twice about using those analogies specifically. However. The way those things were taken out of context and quoted in various social media and gamer articles, etc. was pathetic. Any article that devotes more than five sentences to the topic really should include at least the immediate context. Without it, it becomes a screamfest where those with the best quotes win.
The third thing is the comparison of homosexuality to things like paedophilia, incest, zoophilia, etc.: In some discussions about homosexuality, some of these other kinds of philias will be mentioned, and usually it results in massive chastisement from everyone (at least the pro-gay people involved in the discussion), since "you cannot compare homosexuality with [insert some philia here] - they're totally different".
However. Usually the person who's bringing up this other philia isn't really comparing homosexuality to other philias. In the discussions I have seen, it's usually a matter of trying to take the statement "Homosexuality is on the same footing as heterosexuality" (or some paraphrasing of it) to its logical conclusion. Usually this statement is accompanied by 'because two people are in love, and that's all that matters'. And the point is then: OK, let's try to apply this logic. Before homosexuality was accepted, it used to be that 'all that mattered' was that 'two people of the opposite sex are in love'. However, there are several other implied restrictions. I'll try to exhaust the restrictions in the following sentence: "All that matters is that exactly two living individuals of the species Homo sapiens, who are not closely related, whose age is no less than 16, and who are of the opposite sex, are in love". So why are we removing the 'of the opposite sex' clause if we're not willing to remove any of the others? You could say that it's because that clause is the only one whose removal wouldn't hurt anyone. But this is blatantly false. You could for instance imagine allowing siblings who have undergone sterilization to get married - no one would get hurt. You could allow someone to marry someone else post-mortem, as long as the person who died signed a contract saying it was his or her will. You could imagine an animal not being hurt by being married to a person. You could allow more than two persons to marry.
I really take issue with the "if you're against gay marriage, you're a homophobe" logic. It's a black-and-white logic that belongs in some kind of fascist state, not in a democracy with free speech. If you're against gay marriage, you're against the concept of two people of the same sex going through the ritual we call marriage. That's it. Now, the reasons behind such a stance are varied, and some people probably are what others could legitimately call homophobes (or at least homo-dislikers) - as in, they don't like people who are homosexual, period. However, most other people, even those who are against gay marriage, get along fine with people who are homosexual, and don't hate them. In fact, I want to stop using the word "hate" here, because it's a really strong term, and the feelings involved usually cannot be well described by it.
So what reasons can people have for being against gay marriage that are not related to an irrational hatred of homosexuals? Some people think it's just 'wrong'. That is, somewhere in their gut there is a feeling of wrongness about the concept of homosexuality, and they're not particularly inclined to suppress that feeling. And then you might say: "Well, those people are the same kind of people who thought interracial marriage was a bad thing, and they were clearly wrong!"
Well, no. They weren't "clearly wrong", because there are no criteria upon which to base a verdict of correctness, unless you demand that everyone should be a consequentialist - that is, racial intermarriage has had no significant negative consequences, and lots of positive ones, so it must be the right thing to do. And being a consequentialist is totally fine, but there must be room for other types of ethics as well. Some people base their ethics system upon how they feel about some issue. And saying that only consequentialists are allowed opinions on some matter is inherently undemocratic, so such people must be allowed their say as well. Then there is another set of ethics, coming from the religious sphere. And this is more critical, in my mind. Especially when it comes to allowing in-church gay marriage, or even forcing churches to marry gay people, I can see why people would react. The point in that case is this:
- A religious person could believe that the most important thing in the world is to serve God.
- Part of serving God is trying to live as He wishes us to live.
- There are parts of the Bible that indicate He doesn't wish us to live in homosexual relationships
And then the most cited answer is that Jesus told us to love one another, and so when two people are in love, we should sanctify it. Now, I'm not claiming to know what God wants, so I'm not going to say that people who think that God wants us to sanctify homosexuality are wrong. But I will point out that this particular line of reasoning is clearly wrong.
The main thing to say is that there are several kinds of 'love' talked about in the Bible. Usually, what Jesus and the apostles are talking about is 'agape' - unconditional love, that which is to be striven for by all Christians, as in love for your neighbor. The 'love' we're talking about in the context of marriage is usually affectionate and erotic love, which is something else. Admittedly, the Church has done its part in confusing these terms, since we usually quote at weddings a passage by Paul that talks about the virtues of love - but this is agape, and it shouldn't really be used in that context, except to say that as a Christian you should also love your partner unconditionally. Other than that, it has little to do with two people being in love. And this is why you cannot simply say that when two people are in love, God likes it, no matter who they are. Maybe He does, but that's not the point of much of the Bible, at least.
The last thing I would like to rid these debates of is the notion that people who are against gay marriage are somehow on a 'lower plane' of intelligence. Comments such as 'There is no point in discussing further with you, because you're obviously stuck in a prehistoric way of thinking' can sometimes be called for, but more often than not, when I see them in use, they just give the impression that you have run out of good arguments yourself and are using this to invalidate your opponent. Admittedly, this is a more general problem, but in the gay marriage debate I more often than not see the proponents of gay marriage assuming that they're somehow more enlightened than their counterparts.
Clearly, I have mostly been defending one side of the debate in this post, and as I mentioned at the beginning, this is because I tend to side with the underdog. However, it is also because I really cannot stand it when a debate revolves around strawmen and misunderstandings that arise from a lack of appreciation of context. Granted, both sides are guilty of this, but as I see it, the pro-gay-marriage side is at this point the more powerful party (in the arenas where I tend to be located, at least - I'm not saying it's the majority in the world or anything like that), and so it has more of a responsibility to do its debating properly.
Labels:
Christianity,
context,
discussion,
gay marriage,
Quality,
Thoughtful
Friday, June 28, 2013
Shared library writeup: Part 4
I ended last time by saying how the dynamic linker had three main tasks:
Determine and load dependencies, relocate the application and dependencies, and
initialize the application and dependencies, and how the key to speeding up all
of these was to have fewer dependencies in the application.
Now, we're going to look at the relocation process more thoroughly. First of all, what's going on? What does 'relocation' mean?
I'm by no means an expert in this, but I'm going to venture an attempt at an explanation: After an ELF object has been compiled, it has an entry point address - in other words, the memory address at which the object is assumed to reside; if control is transferred to that address, the ELF object will start executing.
However, there are at least a couple of caveats here. First of all: Even if your ELF object has a fixed entry point address, it doesn't mean it will be loaded into actual physical memory at this address. Each process gets its own virtual memory space, which is a mapping between a 'platonic' memory space and physical memory. So the application might get loaded at its entry point address in the virtual memory space, but this address will correspond to another address entirely in physical memory.
The second point is that if we're not talking about an executable, but rather a dynamic shared object, as we are here (or rather, we have one executable with a potentially high number of dynamic shared objects that are to be associated with it), the entry point address isn't even the entry point address it will end up with in the final executable - it will get shifted depending on what the linker determines is the best way to combine the addresses of all participating DSOs. This means that all 'internal' addresses in that object will be shifted by the same amount as well. This is what we're currently talking about when we use the term 'relocation'.
So the thing we're going to talk about is how the linker accomplishes this relocation - and especially the part where it has to synchronize all the load addresses, etc. First, it must be noted that there are two types of dependencies a given DSO can have. For one, you can have dependencies that are located within the same object - which I imagine happens when you create an object file with two functions/subroutines and one of them depends on the other - and for another, you can have dependencies that come from a different object.
The first kind of dependency is easy to handle, given that you know the 'new' entry point address of the object in question. For each such dependency, you calculate its relative offset from the entry point, and then simply add this offset to the new entry point.
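To make the arithmetic concrete, here's a toy illustration in Python (all addresses are made up; a real linker does this in C on raw relocation entries):

>>> link_time_base = 0x0            # base address the DSO was linked at
>>> runtime_base = 0x7f5a2c000000   # base the dynamic linker mapped it to
>>> func_link_addr = 0x1480         # an internal function, at link time
>>> offset = func_link_addr - link_time_base
>>> hex(runtime_base + offset)      # its relocated runtime address
'0x7f5a2c001480'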
The second type of dependency resolution is more involved, and I'm going to talk about that more the next time.
Labels:
ELF,
libraries,
programming,
Quality,
regurgitated information,
Useful
Thursday, June 27, 2013
Multiplying lots of matrices in NumPy
The other day I found myself needing to perform matrix multiplication in Python, using NumPy. Well, what's the big deal, you say? You do know that there exists a dot method, right?
Yes, I do know that, you smart internet person you. However, my problem was that I had a number of matrices for which I wanted to perform the same type of matrix multiplication. I had on the order of a hundred thousand five by two matrices that were to be transposed and multiplied with another hundred thousand five by two matrices.
Well, duh, you say. The dot method can handle more than two dimensions, you know. Yeah, I know that as well. However, it doesn't handle it the way I needed for this task. I wanted to end up with a hundred thousand two by two matrices. Had I used dot, I would have ended up with a hundred thousand by two by hundred thousand by two array.
So, I had to improvise:
>>> A.shape
(100000, 2, 5)
>>> B.shape
(100000, 2, 5)
>>> result = np.sum(np.swapaxes(A, 1, 2)[:, np.newaxis, :, :] * B[:, :, :, np.newaxis], 2)
Kind of involved, but it worked. I got the initial idea from here, but the solution given there only works for symmetric matrices - for non-symmetric ones you have to shift the newaxis one step to the left, or it will violate NumPy's broadcasting rules.
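As an aside (hedged appropriately): np.einsum, available since NumPy 1.6, should be able to express the same batched product without the newaxis juggling - assuming I've translated the index gymnastics correctly:

>>> result2 = np.einsum('ijm,ikm->ijk', B, A)
>>> np.allclose(result, result2)
True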
Wednesday, June 26, 2013
Super Size Suckage
Well, here's an uncontroversial post. I saw Super Size Me the other day.
And I thought it sucked.
I'm not the first to do so, but that's what I have to say today. So I might as well detail my criticism a bit more, to make it more constructive.
The premise itself is kind of funny, and the main reason I watched it was to see how much weight this guy could gain in a month. Little did I know that this was to be the subject matter of only thirty percent of the movie, whereas the rest... sucked.
My experience with previous such documentaries (Michael Moore, I'm looking at you) is that when the documentary maker has a very specific axe to grind, you just end up disbelieving everything that is presented, and you're actively trying to find flaws with the presentation. This is what I ended up doing. And the reason was that I very quickly got a lasting impression of the documentary film maker, which can be summed up thusly: "My vegan girlfriend hates McDonalds and I want to make a documentary. Why not kill two birds with one stone?" Seriously, that girlfriend should have been left out of the movie. I cringed every time she said anything, because it was always about how superior organic and vegan food was. In the end, when she said she would 'cleanse' Morgan's post-experiment system with her special vegan diet, I cringed doubly.
Now, the above was mostly a gut reaction, but it is symptomatic of one of the biggest problems with this movie: It's not clear what in the world it's trying to say.
On the one hand, it seems to say that McDonalds is bad, and that's the take-home message. On the other hand, it seems to say that organic? vegan? food is the best. And then on the third hand, one premise of the movie seems to be that a guy is trying to eat as much fast food as he can for a month and see how that affects his health and well-being.
Then you might say, 'Well, all of these are tied together and they make up one coherent story'. But they don't. First of all, McDonalds being bad is not the same as vegan and organic food being the best. In fact, I think you will find a lot of people who would agree with the former (to some extent) but not with the latter statement. Second of all, you don't prove that McDonalds is bad by EATING TWICE AS MANY CALORIES PER DAY AS RECOMMENDED. That just proves you're bad at cause and effect.
A much better demonstration that eating McDonalds is bad for you would be to eat the recommended number of calories each day, but only from McDonalds. If he had eaten five thousand calories worth of vegan food each day, he would also have gained weight.
As for the 'results' of this exercise, they're pretty much worthless as scientific facts towards demonstrating how McDonalds is bad for you. Very few of the changes that happened to his body can be said to be due solely to the fact that he was eating McDonalds and not to the fact that he was eating way too much. And some of them were pretty subjective. "I feel horrible". "My arms are twitching due to all the sugar". How do you know that?? "My sex life went down". Well, when you're binging on McDonalds food and have a vegan girlfriend, what do you expect?
Also, the most interesting result - how fat he got - was pretty underwhelming. He gained around ten kilograms, and I hardly noticed him getting fatter.
Another underwhelming result was how many times he had been asked whether he wanted a super-size menu, which was something he touted in the beginning of the movie. That was presented as one of the 'dramatic post-movie facts', you know - the ones that accompany some picture of whatever illustrates the fact best at the movie's end. He ate ninety times at McDonalds during this month, and was asked about a super-size menu nine times. Out of ninety. That's ten percent. I'm underwhelmed.
In addition to these things, you had the stock-standard Michael Moore-ish strawmen interviews, tying together unrelated facts to make a point, etc. that generally simply helped discredit the maker of the movie.
In short, I wish documentaries like this didn't get so much attention. I want to be on the right side of issues like this, but when the people who are supposedly on the right side use the same dirty tricks as those we claim to be fighting against, the lines get blurred. If what is presented is truly something that we should be shocked and appalled about, the facts will speak for themselves, and we don't need some dude or his vegan girlfriend to mix them together into a milkshake of dubious factual value.
Tuesday, June 25, 2013
Shared library writeup: Part 3
I ended last time by talking about how the dynamic linker had to run before an
ELF file could run. Currently, in other words, the dynamic linker has been
loaded into the memory space of the process that we want to run, and we're going
to run the linker before we can run our actual application.
We cannot run the linker quite yet, however. The linker has to know where it should transfer control to after it has completed whatever it is supposed to do. This is accomplished by the kernel - it puts an auxiliary vector on top of the process' stack. This vector is a name-value array (or in Python terms, a dictionary), and it already contains the values I talked about last time - the PT_INTERP value which is contained in the p_offset field of the ELF program header. In addition to these, a number of other values are added to the auxiliary vector. The values that can be added like this are defined in the elf.h header file, and have the prefix AT_. After this vector has been set up, control is finally transferred to the dynamic linker. The linker's entry point is defined in the ELF header of the linker, in the e_entry field.
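On Linux you can actually inspect this vector yourself - glibc's linker dumps it if you set LD_SHOW_AUXV=1, and the kernel exposes it as /proc/self/auxv. A hedged sketch of reading the latter (assumes a 64-bit system, where each entry is a pair of unsigned 64-bit integers):

import struct

# /proc/self/auxv is the raw auxiliary vector: (AT_* key, value) pairs
# of native unsigned longs, terminated by an AT_NULL (0) entry.
with open('/proc/self/auxv', 'rb') as f:
    data = f.read()
words = struct.unpack('%dQ' % (len(data) // 8), data)
auxv = dict(zip(words[0::2], words[1::2]))

AT_ENTRY = 9  # constant from elf.h
print(hex(auxv[AT_ENTRY]))  # e_entry of the running executable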
Trusting the linker
At this point the (dynamic) linker is supposed to do its magic. It has three tasks:
- Determine and load dependencies
- Relocate the application and all dependencies
- Initialize the application and dependencies in the correct order
Labels:
ELF,
libraries,
programming,
Quality,
regurgitated information,
Useful
Monday, June 24, 2013
Scheduling
It's weird how certain concepts simply stay out of your field of
'conceivability', so to speak, until they suddenly pop in and you feel silly for
not considering them earlier.
Setting up a schedule for myself has been such a concept. I have read about the concept and its advantages several times before, but for some reason I have just shrugged and never considered it seriously. And I don't really know why - that's the paradox of gestalt shifts - once you have shifted, you're unable to see the reasoning behind your old view (unless you have written it down, or something like that).
I believe that perhaps part of the reason I have been reluctant to set up a schedule is my slightly irregular sleeping habits. I have thought it more important to be rested than to wake up at a certain time. And I still do - working ten hours at sixty percent is worse than working eight at ninety. And my brain is really sensitive to this. It's like sleeping badly puts some kind of insulator between the synapses so they're unable to fire properly.
However, there are a couple of reasons why I am presently willing to try out a schedule nonetheless:
If it turns out that I'm unable to function properly because I am determined to wake up at a certain time, I could always wait until the morning of the same day to set up the schedule. That way, I know how much time I have at my disposal.
However, I presently have another theory: that my irregular sleep is partly due to my not having any obligations to get up in the morning. Currently, I have a research position, which means I can pretty much come and go as I want. Could this have a negative effect? Perhaps if I approach it more like I would a regular job, my brain somehow would get more 'incentive' to sleep properly during the night? You see, my problem isn't that I cannot fall asleep in the evening - I usually do so pretty quickly. Rather, the problem is that my sleep is light and not 'restful' enough. Also, I usually wake up earlier than I need to, and if I get up at that point, I will be tired.
In other words, this is going to be an experiment. I will schedule the following day the night before, including a time at which I wake up and a time at which I go to bed, and everything in between. Naturally, it will be impossible to follow such a schedule to the letter - unexpected events do occur, of course, and some tasks are hard to estimate in terms of time needed for completion. However, I believe those things will come with experience. The first hurdle is actually following through with it.
Labels:
Brain Sputter,
Insomnia,
personal,
Quantity,
scheduling,
self improvement
Friday, June 21, 2013
Game review: Phoenix Wright: Ace Attorney
Despite all my wishes to be a productive
person, sometimes I somehow end up playing some computer or video game.
Recently I have been playing Phoenix Wright: Ace Attorney, and thought I'd just
briefly review it.
First off, I don't like assigning one number to games, since the quality of a game can have many dimensions. So I'm just going to write what I like about the game and what I don't like.
The premise
The game I played is for Nintendo DS, and it's a kind of point-and-click mystery-solving and courtroom game, which to my knowledge is pretty unique in the market. It's animated with semi-moving anime frames. You play as Phoenix Wright, a lawyer straight out of law school, as he takes on his first cases as a defense attorney. The first mission is a simple trial, where you have to pick the witnesses' testimonies apart, pressing every point and using evidence to bring to light contradictions in their testimonies. Later, you also play the role of the evidence-gatherer, which you usually do much better than the local police force anyway. There are several clashes with the arch-nemesis, Miles Edgeworth (who has later gotten games of his own). This game is the first in a series of several.
The good
Phoenix Wright (the character) is pretty awesome, although playing the game I got a different impression from what I had from all the internet memes about him.
[Image: Y'know - these ones.]
He's a bit more insecure than I thought prior to playing. But I like the character, and Miles is also pretty cool, although sometimes I wish the anime industry would find another archetype than the 'brooding dark-haired guy' to be the cool dude.
The trials are hilarious and very entertaining. Whenever you manage to point out a contradiction and cool music starts playing, you feel like being a defense attorney would be the coolest job in the world. There's plenty of humor there, and especially if you're geared towards Japanese-style humor, you'll laugh out loud a lot. I did, at least. Most trials are pretty far-fetched in terms of how they are conducted and what is accepted as evidence and so on, but it's not much worse than your average American lawyer show.
Also, I want to mention the 'effects' as a good point of this game. In trial, when the attorneys make a point, they pound their fists on the desk in a really cool way. And whenever something 'unexpected' happens, the effects really help bring this out by changing the music, kind of shaking the screen, and in general putting surprised faces on everyone.
The bad
The evidence collection becomes pretty tedious, especially when you have to move through areas in a very slow manner (i.e. you cannot necessarily move from a given area to the area you want to be in - you have to go through all the 'intermediate' areas first).
The both
The music is really great at times (i.e. during the trials) but at other times it can get a bit jarring (i.e. during evidence collection).
The graphics... honestly, for a game such as this, realistic graphics are by no means something I want. The graphics do a good job without being extravagant.
In summary
Despite the shortcomings of the game (i.e. the evidence collection phase) I would heartily recommend playing it, simply because it has a unique experience to offer: Being an awesome defense attorney who fights injustice and tears down even the most arrogant of prosecutors. Get your OBJECTION!s on and play it!
Thursday, June 20, 2013
Check for duplicates in Python
Today's trick: Check whether a Python container cont contains duplicates!

if len(cont) != len(set(cont)): raise MyError

Neat!
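One caveat worth adding (my addition, not part of the original trick): this only works when the container's elements are hashable, since set() has to hash them. A quick demonstration:

>>> def has_duplicates(cont):
...     return len(cont) != len(set(cont))
...
>>> has_duplicates([1, 2, 3])
False
>>> has_duplicates(['a', 'b', 'a'])
True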
Wednesday, June 19, 2013
Fugitive.vim
All of my posts presently are written in the nick of time. It could be
symptomatic of either bad planning or too much to do, but I will try to keep up
the current schedule for a while longer at least.
As stated in another post, one of the plugins I am using for vim is the fugitive.vim plugin, written by Tim Pope. Its description is "A Git wrapper so awesome, it should be illegal". And so far, I have to agree - it is pretty darn awesome.
My favorite feature so far is the :Gdiff mode. It has done wonders for the tidiness of my git history. It used to be that I would edit a file, spot a minor bug that was unrelated to whatever I was currently implementing, and then I had to either a) fix the bug and implement whatever, committing everything in one large chunk, thus messing up the Git history, or b) stash the changes so far, fix the bug, commit it, then continue implementing.
Option b) isn't actually that bad in itself. It just takes a little more time. However, if you spot multiple bugs, or do several separate modifications of the file, the stashing can get a little messy.
Now, the Gdiff command opens up the file you're editing together with the current HEAD version of that file. (Or actually, that's not exactly what is opened, but I have to research more about how Git does stuff before I have more to say). It opens these two files in Diff mode (which I didn't even know about prior to this). It then allows you to choose hunks to stage for a commit, so that you don't have to commit everything in the file at once, if you don't want to. (A hunk is one continuous piece of changed 'stuff'). However, you can even break up the hunks by specifying the lines you want to 'diffput'.
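For reference, here's a minimal sketch of the staging workflow as I understand it (dp and the range form of diffput are built-in vim diff commands; :Gdiff and :Gcommit come from fugitive):

:Gdiff          " open the index version of the file in a diff split
]c              " built-in motion: jump to the next hunk
dp              " 'diffput': push the hunk under the cursor into the index
:'<,'>diffput   " or put only the visually selected lines of a hunk
:Gcommit        " commit what is now staged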
In short - it's awesome. It has other neat features as well, but those will have to come at another time. I might also write a more technical piece on the Gdiff thing.
Tuesday, June 18, 2013
TRAPs
As a true nerd, I am currently GMing an RPG campaign.
Preparing for sessions can be a chore - I find myself wondering how to structure the stuff I'm making up, and thinking about ways to organize often take more time than actual campaign writing.
However, there is one technique that I am extremely thankful for coming across - the TRAPs method, invented by Ry at Enworld forums: ry's Threats, Rewards, Assets and Problems (TRAPs)
[Image: Not that kind.]
It's really making my life as a GM a whole lot easier, because it's a simple algorithm for fleshing out an adventure or encounter: Everything you add should be either a Threat, Reward, Asset or Problem. If you're introducing something that's none of these, it's ineffective. And before you complain about 'atmosphere' and so on - you can easily turn any of these things into stuff that provides atmosphere.
Right now, I don't have enough time to write about this more elaborately (I have to prepare for the session), but once I have tried it out a bit more, I will try to write down my experiences.
Monday, June 17, 2013
Shared library writeup: Part 2
I ended last time talking about how the program header of an ELF file contains references to the file's segments. Now, these segments can have various access permissions - some parts are executable but not writable, while some are writable but not executable.
Having a lot of non-writable segments is a good thing, since it means, in addition to data being protected from unintentional or malignant modification, that these segments can be shared if there are several applications that use them.
The way the kernel knows which segments are of which type is by reading the program header table, where this information is located. This table is represented by C structs called ELF32_Phdr or ELF64_Phdr.
However, the program header table is not located at a fixed place in an ELF file. The only thing that is fixed is the ELF header, which is always put at 'offset' zero, meaning the beginning of the file, essentially. (Offset means how many bytes from the beginning something is located.) This header is also represented by a C struct, called ELF32_Ehdr or ELF64_Ehdr (the 32 or 64 refers to whether the computer architecture is 32-bit or 64-bit, respectively - i.e., all its registers, memory addresses and buses have sizes of 32 bits or 64 bits).
Now, the ELF header struct contains several pieces of information (fields) that are necessary to determine where the program header is. Writing down these pieces means essentially copy-pasting the article I'm reading, so I think I will not go down to that level of granulation.
Once the kernel has found the program header table, it can start reading information about each segment. The first thing it needs to know is which type the segment is, which is represented by the
However, even though the offset in memory is irrelevant for unlinked DSOs, the virtual memory size of the segment is relevant. This is because the actual memory space that the segment needs can be larger than the size of the segment in-file. When the kernel loads the segment into memory, if the requested memory size is larger than the segment size, the extra memory is initialized with zeroes. This is practical if there are so-called BSS sections in the segment. BSS is an old name for a section of data that contains only zero bits. Thus, as long as extraneous memory is initialized with zeroes, this is a good way to save space - you only need to know how large the bss section is, add that size to the current size of the segment, and the kernel handles the rest. An example of a BSS section is a section containing uninitialized variables in C code, since such variables are set to zero in C anyway.
Finally, each segment has a logical set of permissions that is defined in the
After this, the virtual address space for the ELF executable is set up. However, the executable binary at this point only contains the segments that had the
The dynamic linker is a program just like the executable we're trying to run, so it has to go through all the above steps. The difference is that the linker is a complete binary, and it should also be relocatable. Which linker is used is not specified by the kernel - it is contained in a special segment in the ELF file, which has the
This ends the second part of the writeup. And there's plenty left..
Having a lot of non-writable segments is a good thing, since it means, in addition to data being protected from unintentional or malignant modification, that these segments can be shared if there are several applications that use them.
The kernel knows what type each segment is by reading the program header table, where this information is located. This table is represented by C structs called ELF32_Phdr or ELF64_Phdr.
However, the program header table is not located at a fixed place in an ELF file. The only thing that is fixed is the ELF header, which is always put at offset zero - meaning the beginning of the file. (Offset means how many bytes from the beginning something is located.) This header is also represented by a C struct, called ELF32_Ehdr or ELF64_Ehdr (the 32 or 64 refers to whether the computer architecture is 32-bit or 64-bit - i.e., whether its registers, memory addresses and buses are 32 or 64 bits wide).
Now, the ELF header struct contains several pieces of information (fields) that are necessary to determine where the program header table is. Writing these down would essentially mean copy-pasting the article I'm reading, so I won't go down to that level of granularity.
Once the kernel has found the program header table, it can start reading information about each segment. The first thing it needs to know is the segment's type, which is represented by the p_type field of the program header struct. If this field has the value PT_LOAD, it means this segment is 'loadable'. (Other values this field can have are PT_DYNAMIC, which means the segment contains dynamic linking information, PT_NOTE, which means the segment contains auxiliary notes, et cetera.) If the p_type field has the value PT_LOAD, the kernel must, in addition to knowing where the segment starts, also know how big it is, which is specified in the p_filesz field. There are also a couple of fields that describe where the segment is located in virtual memory space. However, the actual offset in virtual memory space is irrelevant for DSOs that are not linked, since they haven't been assigned a specific place in virtual memory yet. For executables and so-called 'prelinked' DSOs (meaning they have been bound to an executable even though they're dynamic), the offset is relevant.
However, even though the offset in memory is irrelevant for unlinked DSOs, the virtual memory size of the segment is relevant. This is because the memory space the segment needs can be larger than the size of the segment in-file. When the kernel loads the segment into memory, if the requested memory size is larger than the segment size, the extra memory is initialized with zeroes. This is practical if there are so-called BSS sections in the segment. BSS is an old name for a section of data that contains only zero bits. As long as extraneous memory is initialized with zeroes, this is a good way to save space - you only need to add the size of the BSS section to the in-file size of the segment, and the kernel handles the rest. An example of a BSS section is one containing uninitialized global variables in C code, since such variables are guaranteed to be zero-initialized anyway.
Finally, each segment has a logical set of permissions, defined in the p_flags field of the program header struct - whether the segment is writable, readable, executable or any combination of the three.
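To make this concrete, here is a minimal sketch (in Python, since that's what I usually write) of how one could walk the program header table of a 64-bit, little-endian ELF binary. This is my own illustration and not from Drepper's paper; the field offsets follow the ELF64 layout, and the path /bin/ls is just an assumed example:

import struct

PT_NAMES = {1: 'PT_LOAD', 2: 'PT_DYNAMIC', 3: 'PT_INTERP', 4: 'PT_NOTE'}

with open('/bin/ls', 'rb') as f:
    ehdr = f.read(64)                 # the ELF header sits at offset zero
    assert ehdr[:4] == b'\x7fELF'     # the ELF magic number
    # e_phoff (offset 32): file offset of the program header table;
    # e_phentsize (offset 54) and e_phnum (offset 56): entry size and count.
    e_phoff, = struct.unpack_from('<Q', ehdr, 32)
    e_phentsize, e_phnum = struct.unpack_from('<HH', ehdr, 54)
    f.seek(e_phoff)
    for _ in range(e_phnum):
        (p_type, p_flags, p_offset, p_vaddr, p_paddr,
         p_filesz, p_memsz, p_align) = struct.unpack_from('<IIQQQQQQ',
                                                          f.read(e_phentsize))
        # p_memsz > p_filesz means the kernel zero-fills the rest (BSS)
        print(PT_NAMES.get(p_type, hex(p_type)),
              'filesz=%#x memsz=%#x flags=%#x' % (p_filesz, p_memsz, p_flags))

If you run something like this on a typical binary, you should see a PT_LOAD segment whose p_memsz exceeds its p_filesz - exactly the zero-filled BSS situation described above.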
After this, the virtual address space for the ELF executable is set up. However, at this point the executable binary only contains the segments that had the PT_LOAD value in the p_type field. The dynamically linked segments are not yet loaded - they only have an address in virtual memory. Therefore, before execution can start, another program must be executed - the dynamic linker.
The dynamic linker is a program just like the executable we're trying to run, so it has to go through all the above steps. The difference is that the linker is a complete binary, and it should also be relocatable. Which linker is used is not specified by the kernel - it is named in a special segment of the ELF file, which has the PT_INTERP value in the p_type field. This segment is just a null-terminated string specifying which linker to use. The load address of the linker must not conflict with that of the executable it is run on.
This ends the second part of the writeup. And there's plenty left...
Labels:
ELF,
libraries,
programming,
Quality,
regurgitated information,
Useful
Friday, June 14, 2013
Shared library writeup: Part 1
During my daily work this week, I found myself struggling with shared libraries, linking them, and the various compiler flags needed to make the type of library you want. I decided to actually learn this stuff once and for all, and so I am currently reading "How to write shared libraries" by Ulrich Drepper. I decided this was a perfect opportunity to multitask - both write stuff for the blog and learn something! Especially since you learn much better by writing about it. Hence, this will be the first part of my writeup of Drepper's paper.
In the most abstract terms, libraries are collections of code gathered into one file for easy reuse. They can be static, meaning that if you want to use the code in a program, the compiler must take the code contained in the library and bake it into the program upon compilation. Alternatively, they can be shared or dynamic, meaning that they are not included in the program at compile time; instead, the program contains a mention of the libraries, so that at run time it loads them and incorporates them.
Nowadays (on Unix-like systems), libraries use the so-called ELF (Executable and Linkable Format), a common file format that is used not just for libraries, but for executables and other types of files as well.
Earlier, other formats, such as a.out and the Common Object File Format (COFF), were used. The disadvantage of these formats was that they did not support relocation.
When you have a piece of compiled code (typically in what's called an object file), this file will contain a relocation table. Such a table is a list of pointers to various addresses within that object file, and these addresses are typically given relative to the beginning of the file (which is typically zero). When combining several such object files into one large executable, this object-file-specific list must typically be changed, since the object file is no longer located at 'zero', but rather at some arbitrary point within the new executable. Then, when the executable is to be executed, the addresses are again modified to reflect the actual addresses in RAM. This last part is what the old library formats did not support.
This essentially means that each library must be given an absolute address in virtual memory upon creation, and that some central authority must keep track of where the various shared libraries are stored. In addition: when we make additions to a library that is supposed to be shared, we don't want to have to tell all the applications that used the old version that the library has changed - as long as the new version still contains all the stuff our application needs, it should still work for that application without re-linking it against the new version of the library. This means that the table that points to where the various parts of the library are located must be kept separate from the actual library, and it must keep track of the pointer tables of all the old versions of that library - once a function has been added to a library, its address lasts forever. New additions to a library just append to the existing table. In short, a.out and COFF were not very practical for use as shared libraries, although they did make programs run fast, since there is no relocation of table pointers at run time.
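Since relocation is the pivotal concept here, a toy model might help. The following is a deliberately simplified Python sketch of link-time relocation - nothing like a real linker - where an 'object file' is just a list of words plus a relocation table listing which words are addresses relative to the start of that object file:

def link(object_files):
    # Concatenate the object files, patching each relative address by
    # the offset at which its object file ends up in the final image.
    image, base = [], 0
    for words, reloc_table in object_files:
        patched = list(words)
        for i in reloc_table:
            patched[i] += base
        image.extend(patched)
        base += len(words)
    return image

obj_a = ([10, 0, 2], [1, 2])    # words 1 and 2 point into obj_a itself
obj_b = ([7, 1], [1])           # word 1 points into obj_b itself
print(link([obj_a, obj_b]))     # -> [10, 0, 2, 7, 4]

The step that a.out and COFF lacked is a second round of the same patching at load time, against the address the program actually gets in RAM.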
Enter ELF
ELF is, as mentioned before, a common file type for applications, object files, libraries and more. It is therefore very easy to make a library once you know how to make an application - you just pass an additional compiler flag. The main difference between them is that applications usually have a fixed load address, that is, the (virtual) memory address into which they are loaded upon execution. There is a special class of applications, called Position Independent Executables (PIEs), that don't even have a fixed load address, and for those, the difference between applications and shared libraries is even smaller.

For an application that contains no dynamic components (no shared libraries etc.), execution is straightforward: the application is loaded into memory, then the instruction at the 'entry point' memory address is executed, which should start a chain of events that ends with the termination of the program.
For applications that do contain dynamic components, it is less straightforward: There must be another program that can coordinate the application with the DSOs (Dynamic Shared Objects) before execution of the program starts.
The ELF file structure
ELF files usually contain the following:
- the file header
- the Program header table
- the Section header table
- Sections
The section header table is a table with references to the various sections of the file. The program header table contains references to various groupings of the sections. So you might say that the section header table describes each 'atom' of the file, whereas the program header table collects these atoms into 'molecules' - sensible chunks, called segments, consisting of sections that work together to form a coherent whole.
End of part 1 of the writeup! And I'm only on page 3 of the paper!
Labels:
ELF,
libraries,
programming,
Quality,
regurgitated information,
Useful
Thursday, June 13, 2013
Manual labor
A couple of times lately I have helped my grandparents do some manual labor
(pruning their fruit trees and trimming the hedges).
I don't do much manual labor at home myself. I live in a rented apartment, so I don't have much maintenance to speak of, and my day job mainly consists of programming, which can only be thought of as manual labor if you are a pedant and use the original definition of the word.
[Image caption: But in that case, this is manual labor as well.]
However, whenever I get to do some real manual labor, I think I should do more
of it. It's both due to the 'getting to work your body' thing and the 'feeling
like you actually did stuff' thing. Together, they give a feeling of
wholesomeness.
If and when I ever get a family and/or own a house of my own, I suppose there will be more of this. Until then, I'll just have to help out my grandparents as much as I can.
Wednesday, June 12, 2013
Vim
I use vim for editing.
There is no overwhelmingly rational reason I chose vim. Several years ago, when I first started programming on a significant basis, I started reading about the editor wars, and I immediately knew I had to make a choice and stick with it. I think the article that stands out as the main reason for my choice is this one:
Pirates and ninjas: Emacs or Vi?
Of all the articles that could have been the basis for my choice, this is probably one of the least reasonable. However, when I read this article, I didn't know anything about what would be useful when programming. And so, connecting the editors to ninjas and pirates made it easy to make a choice (which, I think, matters not that much in the long run anyway).
[Image caption: Vim: for when cannons and cutlasses just won't cut it.]
Ninjas simply appeal more to me than pirates do, and knowing nothing else, I chose vim. I cannot say I regret the choice, but that could easily be just because I haven't tried emacs.
(Short aside: When I read the above article, I didn't know who Richard Stallman was. However, as it turns out, if I had known, there would have been more of an incentive to choose vim.)
Both editors benefit from plugins. I haven't manually installed many - I think the only ones I currently have installed are the fugitive plugin, written by Tim Pope, and the Python indent script written by Eric McSween. I will elaborate on the former in a later post.
Of course, knowing the commands available to you is also something that makes you effective, whichever editor you use. I don't know a tenth of all the stuff that vim can do in theory, but this Stack Exchange question was a lot of help to me.
There are a couple of keybindings I find very helpful. Mapping caps lock to ESC, 't to :tabnew, 's to :w and 'q to :q are some that save plenty of keystrokes in the long run.
The more you use an editor, the better you get at it and the less you gain by switching. So it's likely I will keep using vim for the foreseeable future. And that's ok.
Tuesday, June 11, 2013
Dip into finance
Today I attended a lecture by a relatively well-known academic within computational finance (the reason being that I'm trying to figure out what to do after my current position).
I wasn't too familiar with the terms used within finance, so I didn't follow the discussion. In this particular course (where the speaker was a guest lecturer) they seem to use Excel a lot. Probably this is just a particularity of the course level.
One thing that really stood out was that they talked a lot about master's theses. Every half hour or so, the lecturer or someone else would say something like "This is probably something that a master's student could have as their project." In my field, this very rarely comes up. It made me wonder how it is in other fields. Is it a sign of a shortage of researchers within finance?
After this lecture, I am less opposed to working in finance than I previously was. I asked the lecturer about future prospects based on my own history, and she said I would have very few problems entering quantitative finance in some way - I could perhaps take a couple of courses in finance first. Also, she said that she didn't find working with finance less intellectually stimulating than what she did before (she switched to finance after her Ph.D.).
I suppose what's standing in the way of a career in finance for me is the thought that it is less 'pure' or 'ideal' than what I am currently doing. After all, working with finance is not trying to figure out how the world works. But then again - working in the academic world has made me rethink the validity of stating that I am trying to figure out how the world works. At my institute, at least, it seems to be less and less true the older you get, as grant applications, teaching and so on take over.
I am still undecided about this. I have to try to gather as much information as possible about the experience of working with something else than what I am currently doing before I make a choice.
Monday, June 10, 2013
NumPy structured arrays
I'm programming quite a bit in Python, and my understanding of that language is incremental. Due to the nature of my work, I also work a lot with NumPy. Today I had to solve the following problem:
- Take an input dictionary
- Create a NumPy structured array with the keys as field names, the datatypes of the values as the field datatypes, and the values themselves as the array elements.
First of all, I thought that a structured array would be like a 'normal' NumPy array, just that one of the dimensions had field names and data types associated with them.
But I think I am wrong in this - I think it's more a matter of a structured array being a NumPy array, where each element in the array is a structure (which makes sense once I think about it).
For instance, you can't slice a structured array according to the first interpretation:
In [1]: dtype = ''.join(('uint8,', 4*'int16,', 'int16'))
In [2]: b = np.array([(0, 1, 2, 3, 4, 5)], dtype=dtype)
In [3]: b.shape
Out[3]: (1,)
In [4]: b[0, 3]
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
----> 1 b[0, 3]
IndexError: too many indices
However, if you first index out a single record and then index within it, it works:
In [5]: b[0][3]
Out[5]: 3
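Coming back to the original problem, here is a minimal sketch of the dict-to-structured-array conversion. The helper name dict_to_structured is made up, and it assumes scalar values; my actual code differs, but the idea is the same:

import numpy as np

def dict_to_structured(d):
    # Keys become field names, the values' dtypes become the field
    # dtypes, and the values fill the single record of the array.
    items = [(name, np.asarray(value)) for name, value in d.items()]
    arr = np.zeros(1, dtype=[(name, v.dtype) for name, v in items])
    for name, v in items:
        arr[name] = v
    return arr

rec = dict_to_structured({'id': 7, 'weight': 1.5})
print(rec.dtype.names)    # ('id', 'weight')
print(rec['weight'])      # [ 1.5]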
This is very basic, I know. But it's something I learned today. And that's what this blog mainly is for. Hopefully I will learn more interesting stuff later.
Labels:
numpy,
programming,
python,
Quantity,
Today I learned,
Useful
Friday, June 7, 2013
Bureaucracy and object-oriented programming
Today, as I had to grapple with certain aspects of real-life bureaucracy I was struck by the similarities between bureaucracy and object-oriented programming. I did a search, and found this:
Five (good) lessons the government teaches us about object-oriented programming.
I suppose there are some concepts in there that are outdated (in some communities) - for instance, I have the impression that in Python, the encapsulation concept isn't thought of as that central (cf. the 'consenting adults' paradigm). But still, the article makes good points, I think.
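As a tiny illustration of that 'consenting adults' stance (the Department class is entirely made up for the example):

class Department:
    def __init__(self):
        self._queue = []          # 'internal' by convention, not enforcement

    def submit(self, form):
        self._queue.append(form)

dept = Department()
dept.submit('form 27B/6')
print(dept._queue)                # allowed - Python trusts you not to abuse it

Nothing stops that last line; the leading underscore is merely a polite signal that you are poking at internals.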
I think the main difference between OO programming and bureaucracy - or rather, the reason these concepts work so well in one case and not so well in the other - is that humans working together in a bureaucracy are not remotely like a logical machine. One cannot trust the output from one 'object'. The processing times are much longer. And the instantiation overhead is far too expensive in bureaucracies - people have to learn to cope with new regulations, departments, and so on.
I wonder if this can be extended somehow... Is it possible to make a model of real-life bureaucratic processing based on other programming paradigms, like procedural programming? If I have time at some point, I'll try to think about this more.
Labels:
bureaucracy,
object-oriented programming,
Quantity,
Thoughtful
Thursday, June 6, 2013
First and foremost
As indicated on my profile, I am a Christian.
And in spite of the way it is 'casually' thrown in there as one of the things that define me, it is in fact the main thing that defines me. Everything else derives from that aspect of myself.
The reason I have put those defining traits together in such a haphazard manner, is that this isn't going to be a "Christian" blog - meaning, it isn't going to be a blog that is mainly focused around Christian life, inspiration for such a life and so on.
Rather, it's going to be a blog that's written by a Christian. Everything I write is written with a Christian backdrop, but that's not always going to be the main actor in every blog post.
Christians have kind of a bad rep in 'reason-focused' groups in the western world. I think part of that is that we're not making ourselves visible as serious actors in those groups, and that we define our own framework of thinking. That's fine (no, really - I will probably write more on that later), but we also need to engage with other frameworks of thinking.
It used to be that Christians were active in all kinds of activities - writing great literature, doing great science, building great buildings, etc. And believe it or not, there hasn't been any kind of great discovery that "disproves" Christianity, no matter what certain people might insist. Rather, there has been a shift in how Christianity is viewed. Hopefully, this can and will change in the future.
Sometimes, I will write about Christian stuff. The posts will be labeled accordingly.
Wednesday, June 5, 2013
Wrapping your body
Sometimes I sleep badly. I have no problem going to sleep, but sometimes I wake up too early for some reason and have trouble getting back to sleep. It's as if my mind is on some kind of high (excited about the coming day, maybe?) and isn't able to calm down until a while later, at which point I have missed two hours of sleep and just know that the day is going to be crap.
As long as you are single and have flexible work-hours, this doesn't need to be that big of a deal. If you wake up early you can go do some work and then go back to sleep when your mind has calmed down a little. However, that's a quite limited group of people.
Something that I've experimented with the last couple of sleepless early mornings is a relaxation technique that I did a couple of times as a teenager. It's pretty simple - you lie on your back, calm down, and then start thinking about your toes, relaxing them. You then move upwards through your body and focus on each body part, relaxing it, thinking that it becomes heavier. It feels a little like wrapping your body in some kind of 'Relax-o-wrap'. You end with your mouth, nose and eyes. After that, all of your body is mentally wrapped up, and you actually feel like lifting your arm, for instance, would ruin the wrap.
Then, you start focusing on your breath. You inhale deeply, down to the bottom of your lungs, so that it is your stomach that rises and falls, not your chest. I was taught to inhale through the nose and exhale through the mouth, but it's not vital. The vital part is that you focus on your breathing with your mind. In the beginning it's frustrating and hard to focus, but after a while you suddenly realize that you almost dozed off for a second. Then you actually do doze off for a second. Then for longer. Then you start dreaming - my dreams have been weird during this exercise - often they're about falling and flying etc.
After waking up, I usually do some kind of 'unwrapping' routine, focusing on each body part and making it 'unheavy' again. I don't know how important that is, but it maintains the illusion of a wrap around your body, which I think is important for this exercise.
So far, I haven't been able to go back to 'normal' sleep with this technique, but I find a certain type of sleep which I think is far superior to being awake. Maybe with time, regular sleep comes as well. Progress will be reported (if I can remember to do it).
Tuesday, June 4, 2013
Exercising
As mentioned here, I exercise regularly.
"Regularly" in this context means thrice a week, and it also means that I always exercise in the morning, right after waking up and before breakfast. Sometimes I skip exercising, though I shouldn't. Usually that's because I've slept badly (subject for another post!) and don't need more exhaustion. Sometimes it's because I was up late the day before and don't have time to exercise. Sometimes it's a combination (I slept badly, so I woke up late). But these are exceptions.
I exercise for about an hour. I usually listen to two podcasts of my favorite radio show while exercising, and they last for half an hour each.
The exercise is pretty tiring. I start with three repetitions of the following:
- x burpees (the push-up + jump-up variant), where x is a function of my fitness (currently x=13).
- Shadowboxing for y seconds, where I typically adjust y so that it takes as long as the burpees do. Currently y=45 seconds, although the burpees don't take that long, so I have to adjust a little.
- Do one more of both the above points.
- Rest for a couple of minutes.
- The Plank for z seconds, where z=90 the first repetition, z=60 the second, and z=45 the third.
- Rest for a minute or so.
After this, I do back and abdominal exercises for about twenty-five minutes, which I think is important when you sit as much during the day as I do. In between these, I do as many pull-ups as I can.
It is an important point for me to be able to exercise without too much hassle, because otherwise I usually never get around to it. The less overhead time, the better. So I prefer to exercise at home using only body weight. For those of us who are only reasonably fit, that's more than enough. If your goal is to stay fit, not build muscles, there really is no point in doing heavy weight-lifting, IMO. Body-weight exercise will only take you so far, though, so if you want to look really buff, then you should start lifting weights.
[Image caption: Or you can start doing experiments with certain drugs.]
When I first started doing burpees, they totally killed me. They're one of the most exhausting forms of exercise I know, as long as you do a proper jump up and a proper push-up each time. So in the beginning, x in the above regime was about two or three. It's nice to see improvement. I am a bit unsure about doing this for a long time, though. Although it's probably better for your legs and back to do burpees than to run (for a fixed amount of 'exercise'), it can still be a strain on the joints to do that many jump-ups. So far, though, so good - so I'll keep doing it until it starts hurting!
Anyway - the above regime works all major muscle groups in addition to being good cardio exercise. Combined with healthy eating, and remembering that being hungry for a little while isn't dangerous, you should notice an improvement in how you look and feel after a couple of weeks.
"Regularly" in this context means thrice a week, and it also means that I always exercise in the morning, right after waking up and before breakfast. Sometimes I skip exercising, though I shouldn't. Usually that's because I've slept badly (subject for another post!) and don't need more exhaustion. Sometimes it's because I was up late the day before and don't have time to exercise. Sometimes it's a combination (I slept badly, so I woke up late). But these are exceptions.
I exercise for about an hour. I usually listen to two podcasts of my favorite radio show while exercising, and they last for half an hour each.
The exercise is pretty tiring. I start with three repetitions of the following:
- x burpees (the push up + jump up variant), where x is a function of my fitness (Currently x=13).
- Shadowboxing for y seconds, where I typically adjust y so that it takes as long as the burpees do. Currently, y=45 seconds, although the burpees don't take that long, so I have to adjust a little.
- Do one more of both the above points.
- Rest for a couple of minutes.
- The Plank for z seconds, where z=90 the first repetition, z=60 the second repetition, and z=45 the third repetition.
- Rest for a minute or so.
After this, I do back and abdominal exercises for about twenty-five minutes, which i think is important when you sit as much during the day as I do. In between these, I do as many pull-ups as I can.
It is an important point for me to be able to exercise without too much hassle, because then I usually never get around to it. The less overhead time, the better. So I prefer to exercise at home using only body-weight. For those of us who are only reasonably fit, that's more than enough. If your goal is to stay fit, not build muscles, there really is no point in doing heavy weight-lifting, IMO. Body-weight exercise will only take you so far, though, so if you want to look really buff, then you should start lifting weights.
![]() |
Or you can start doing experiments with certain drugs. |
Anyway - the above regime works all major muscle groups in addition to being good cardio exercise. Combined with healthy eating, and remembering that being hungry for a little while isn't dangerous, you should notice an improvement in how you look and feel after a couple of weeks.
Labels:
Brain Sputter,
dieting,
exercising,
Quantity,
self improvement
Monday, June 3, 2013
Interesting tasks as motivation
I used to play a lot of video games. I dread to count the hours spent doing this. Now, I don't play much anymore, though I have occasional bouts where I go on a
total gaming spree. Usually that leaves me pretty depressed afterwards.
I currently have a hope that this will not happen anymore, now that I view learning programming as a fun 'hobby'. That is, when I'm working on science-related stuff now and I lack motivation, I tell myself that "once you're done with this, you can learn more programming". And it seems to work.
At least for now. I have found that many of these motivational techniques are fleeting, so it remains to be seen whether this technique stands the test of time. However, I do believe that the key to being productive is to combine several techniques that work for you. So if I combine the "learn programming once you're done" technique with some kind of variation on the Pomodoro technique mentioned in an earlier post, maybe the combination will yield good results.
In the end, though, I think it's a matter of teaching your brain to operate differently - to eke out new neuron patterns so that the brain has less resistance in the directions I want it to go. The way there can be hard and painful, though!
Labels:
brain whipping,
gaming,
motivation,
personal,
productivity,
Quantity,
self improvement
Friday, May 31, 2013
Update on the usefulness of the blog
I've been writing this blog for three weeks now, and it's time to reflect a little on what the impact is so far.
First of all, so far it seems the blog does have a positive effect. I find it easier to be structured, among other things. However, this might be due to some other underlying fact. For instance, maybe the blog was conceived during a boost in motivation, which in turn also affected how I structured myself and so on. Hard to tell as of now.
Second of all, so far it's been not too hard to write one blog post per day. Ideally I have no quality demands, and those posts which turn out to have a bit of quality to them, I label with the 'Quality' label anyway.
I do expect this to change, though. First of all, there will be holidays etc., during which it might be hard to actually publish something. Second of all, initially it's easy to find things to write about because you haven't written anything yet, so everything is up for grabs. I recently wrote about dancing, for instance. I don't have much more to say about that now, and might not for a while, so that source is tapped for the time being. If enough of these sources get tapped, it might get hard to find stuff to write about.
I also find that it does take a non-negligible amount of time to write a blog post, even with no quality standards. As of now, I spend at least half an hour per blog post (that doesn't just contain an update on how the blog is structured). Maybe it will turn out that half an hour each day is too much.
Third of all, I am generally not very satisfied with how I write on this blog. I know I can write much better, but I find myself just typing the words I need to state my point and not much more because I have work to do. Hopefully this will improve, but probably not unless I cut back on other work or update less often.
However, I know I need a regular schedule for this blog. I am thinking of only updating during weekdays, taking breaks during the weekends. This would enable me to write posts during the weekend, or at least edit posts that I made earlier if I have spare time. Which I probably don't.
I am excited to see how useful this blog will be for self improvement in the long term. At the very least, it should function as a kind of technical diary where I can write down what I have learned that day. However, if I don't have enough time to formulate what I have learned in a good way, I'm unsure of the utility of the blog.
We shall see. As of now, I will cut back to updating only on weekdays.
Thursday, May 30, 2013
Presentations
Yesterday I gave a presentation to a community of hobby enthusiasts in my field.
Writing presentations can be quite enlightening, since you ideally should know what you're talking about. Granted, I think it's easier to talk to non-specialists because then you can get away with more handwaving, but sometimes, when you want to explain stuff in the most basic way possible, it requires you to think things through - what's really going on in this process I'm trying to describe? How can it be explained in plain words?
There are some things I loathe when it comes to listening to a presentation:
- Slides containing walls of text
[Image caption: Like this]
- When the slides are exact copies of the speaker's manuscript
- When the speaker provides too little background. I would rather hear things one time too many than one time too little.
By contrast, here is what I try to do in my own presentations:
- I view slides not as the main conveyors of information, but rather as a tool to complement what I say. What I say is the most important thing in a presentation - not what's on the slides.
- I try to illustrate my points with pictures when possible - they're much easier to grasp, and you don't have to read while the presenter is talking.
- When I have to resort to text, I try to keep it as short and simple as possible. The slides provide cues for what I will say, but they're only cues, not manuscripts.
- I try (time permitting) to provide as much background as the listeners need to understand what I'm talking about.
As for choosing the style of your slides: sometimes I think certain slide styles can look cool, but elaborate slide designs can take focus away from what you're saying. I use the LaTeX beamer class with a very clean setup for my presentations.
Wednesday, May 29, 2013
Context in grading
This is another grading-related post, but it's also about context, which is one of my favourite terms.
I believe context is extremely important. Understanding context is what keeps humans from being machines. I think at least eighty percent of interpersonal conflict comes from disregarding or misunderstanding the context. Probably I will say something on this later.
But this post will be on context when grading. I don't have that much to say on the topic, but I needed to point out that when you grade exams, the ultimate goal is to assess whether the student has grasped the curriculum or not.
A person who is bureaucratic by nature (i.e. has a tendency to ignore context) will simply look at whether the student has written down the correct answer (in the natural sciences, that is). If it's not on the paper, it is irrelevant for the grading process.
And by doing so, the bureaucrat has failed to accomplish the goal of grading - namely to assess the student's grasp of the curriculum.
That is because what is written is not the only source of information available to the grader. Being a human, the grader also has access to the context of what is written.
As an example: if a student, during an explanation of some kind, uses the wrong word for a key term, the bureaucrat will automatically see that as an error. Don't get me wrong - it might be an error, but only insofar as it demonstrates a lack of understanding by the student. How do we ascertain this? By examining the context. If the student otherwise clearly shows what he/she is talking about, demonstrating an excellent command of the subject matter, then this error in wording shouldn't be taken as a symptom of a lack of understanding, but simply as a symptom of momentary forgetfulness. However, if, along with this error, the student writes an explanation which shows that he/she has just been memorizing the curriculum, not really understanding what is going on, the error can be taken as a symptom of a lack of understanding. In other words, the context determines whether this error should be penalized or not!
More on context later.
Tuesday, May 28, 2013
Dancing
I dance. It's a partner dance, and it is a source of great joy. I can heartily recommend learning to dance, especially a partner dance.
Someone once described dancing as 'illustrating the music'. This I find to be a beautiful and accurate description. Listening to the music, trying to anticipate what's coming, and then doing something that you think fits to that, is a lot of fun.
There is also the joy of finding a 'connection' with your partner. Sometimes, when you dance, something 'clicks' and you and your partner are able to read each other, complementing each other's moves. This is close to being a transcendental experience. It's as if you're drawing something on paper with another person, and you both know what you're going to draw, so the lead draws the main structure, and the follow embellishes the structure, turning it into something beautiful.
This experience does, though, require some skill from both the follow and the lead. I think there are two types of skill required: motoric skill (being able to control your body) and creative skill. I will elaborate on the latter below.
When you first start dancing, you learn 'turns', which are moves or short 'dance modules', if you like, that you can string together while dancing. Learning turns is vital, especially if you're a lead. However, one can easily get into the mindset that 'in order to become a good dancer, I have to learn a lot of turns'. This is incorrect - or rather, it is correct, but not for the reason you think.
Learning turns is an important means to another end; it's not an end in itself. The true end in dancing is to be able to illustrate the music in exactly the way you yourself want. Turns can help you on the way to that goal, but eventually, if you insist on only doing 'turns' that you have learned before, they will constrain your dancing. At some point, you will find yourself in the situation where you know that you want to illustrate the music in a specific way, but you find that you don't know the turn to do that. And that is the point at which you must start to break free from the turns. You must take what you know, based on doing turns, and turn that into creative music-illustration.
The above is probably true of all creative endeavors - you learn the ropes, but in order to be truly creative you have to understand that the ropes are structures that eventually will constrain you.
[Image: Some of their rules can be bent. Others... can be broken.]
Labels:
creativity,
dancing,
personal,
Quality,
Thoughtful
Monday, May 27, 2013
Healthy eating
Staying in shape can be tough when you're a desk-worker like me. I try to exercise regularly, three times a week, and since anecdotal evidence suggests that you can't outrun your fork, I also try to eat healthy.
I have no zen tips for accomplishing that. But once you start to actually see the contours of those abdominal muscles you thought were dissolved in fatty acids, you start to understand what Kate Moss meant when she said "Nothing tastes as good as skinny feels". Not trying to condone anorexia here, obviously. I am currently in no danger of having that condition.
[Image: Which means I won't be able to do this anymore.]
Another thing that has been important for me to keep in mind is that being hungry for an evening isn't dangerous. Going to sleep hungry isn't going to kill you. And usually you're not really hungry either, it's mostly just being half-full and/or bored.
A third important thing for me is not to fail miserably once I fail. As Jillian Michaels said: "Think of your weight loss journey as a car. If you were driving along and got a flat tire, would you slash the other 3 tires and call it a complete loss? No. You would fix that one tire and keep going."
There is one way of thinking within the fitness world that I simply find impractical: the idea that you should eat often and eat small meals. The reason I have a problem with this is that it's thinking about food that makes me want to eat. The less I have to think about food during one day, the less I feel the need to eat. Thus, I limit my meals to three a day, and once I have finished one of them, I know that I won't be eating again for a while. And usually my stomach then tells me when it's time again.
Motivational quotes get a lot of heat from the irony generation. But I find them to be useful - they're like someone jerking your shoulder when you're about to fall asleep. Maybe I'll do a compilation of my favorites one day, for the pleasure of all my imaginary readers.
Labels:
Brain Sputter,
dieting,
exercising,
motivation,
personal,
Quantity,
self improvement
Sunday, May 26, 2013
Assigning grades
In my series of extremely interesting grading-related posts, I want to raise a question about the assignment of grades.
Namely, how on earth are you supposed to assign grades? The ideal, at least in my institution, is that the grade distribution should be Gaussian after removing the fail grades. So far, there are three "schemes" I can think of:
- Giving grades based on some "objective" scale - i.e., if students get ninety percent or more correct on their exam, they get an A, a B if it's between eighty and ninety, and so on.
- Total relative adjustment of the scale: The top ten percent of the students gets an A, the next twenty percent gets a B, and so on.
- A "hybrid" solution. You find natural cutoff points that are not too far away from the "objective" scale so that the number of people who get an A and so on are approximately correctly distributed.
The second one is the most appealing to me, at least when the number of students taking the course is large, which is when you would expect something like a Gaussian distribution anyway. The main criticism against this scheme is that the grades become relative, so that an A one year is different from an A the next year. I posit that this is a problem with all of the above schemes. The "objective" scheme will be subject to variation because the teacher is not God, and because you typically don't give the same exam year after year. There is a point at which the second scheme will give larger variations than the objective scheme, but as long as the number of students is high, the total relative adjustment scheme will be more robust than the objective scheme.
The third option, I think, is an OK compromise if you feel uneasy about the total adjustment scheme. I dislike it because of its non-automated nature - i.e. even once you have assigned a percentage to an exam, you still have to make subjective judgements. It also seems to me that this method is prone to even more arbitrariness than either of the two previous ones.
It is worth noting that most grading systems explain grades in terms of level of understanding - i.e., an A means that "the student has an excellent command of the subject", etc. In these terms, the objective scheme is the preferred one - there should be no a posteriori tinkering with the results based on the distributions! However, it's impossible to know a priori where to draw the line. If you say that an A should be ninety percent correct or more, then you might end up with no students getting an A because your standards were too high. You might say "well, that's too bad for the students - we cannot lower the bar just because the students do badly." But the point is that you don't know whether you're lowering the bar, because the concept of an ideal 'objective' test is flawed from the outset! If you base your grades on the actual empirical distribution, there will still be an incentive for students to do well, because only the best ten percent of them will get an 'A'.
Ideally, then, one should change the whole meaning of the grading system. Instead of saying that grades reflect some kind of absolute skill level (which is a flawed concept anyway, unless you spend extreme amounts of time or unless the number of students is low), the grades should simply reflect which percentile you ended up in. I.e. an 'A' should just mean that the student was among the top ten percent of the class, and so on.
I'm not sure yet what we'll end up using for these exams, although we have used the third approach before. If I have the time, I will write some code to do some statistics on the results of this one to see if there are interesting patterns to be found.
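If I do, a first sketch of scheme two might look something like this (in Python, since that's what I use anyway). The scores and the percentile boundaries below are pure inventions for illustration; the point is just to hand out letter grades by percentile rank and take a quick look at the distribution:

```python
import numpy as np

# Invented exam scores, in percent - for illustration only.
scores = np.array([42.0, 91.5, 67.0, 78.5, 55.0, 88.0, 23.5, 71.0, 60.5, 83.0])

# Scheme two: the top ten percent get an A, the next twenty percent a B,
# and so on. These percentile boundaries are my own invention.
boundaries = [(90, 'A'), (70, 'B'), (40, 'C'), (20, 'D'), (0, 'E')]

def assign_grades(scores, boundaries):
    # Percentile rank of each score within the cohort (0 = worst, 100 = best).
    # Ties are broken arbitrarily, which is good enough for a sketch.
    ranks = 100.0 * scores.argsort().argsort() / (len(scores) - 1)
    grades = []
    for rank in ranks:
        for cutoff, letter in boundaries:
            if rank >= cutoff:
                grades.append(letter)
                break
    return grades

for score, grade in zip(scores, assign_grades(scores, boundaries)):
    print('%5.1f -> %s' % (score, grade))

# A quick sanity check against the Gaussian ideal:
print('mean: %.1f, std: %.1f' % (scores.mean(), scores.std()))
```

On a real set of results one would of course also histogram the scores, to see how close to the Gaussian ideal the distribution actually comes.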
Saturday, May 25, 2013
Grading as concentration practice
Grading exams is, as I have mentioned before, a mind-numbingly boring task. I am of the belief, though, that doing boring stuff can be good for you from time to time, especially if you use it for the right purposes.
I, for instance, have a slight problem concentrating on the task at hand. I'm surely not alone in this, even if I sometimes get the feeling that everyone else is much better at focusing than I am. My brain offers virtually zero resistance when being hijacked by the urge to check some social medium for updates. I need to teach my brain self-defense.
So far I haven't been very structured about it. I just learned about the Pomodoro technique, which I might try if I'm unable to hack this on my own.
But as of now, I am trying to hack this problem on my own - so I decided to use the grading process, in which I was stuck anyway, as a means to this end.
For the first couple of days of grading I didn't do this, and things basically degenerated to the point where, after each exam I graded, I would watch a YouTube video. Since every exam took about ten minutes to grade once I got up to speed, this made for a very attention-deficit-enhancing technique.
After that, though, I started setting limits, as in "no YouTube or social media before lunchtime, and constant grading until then". Yes, I told myself to grade for two to three hours straight with no breaks. For a task like this, which requires no creative input, I think that is defensible (you don't need a break to mull over what you're currently doing), and it promotes concentration for extended periods of time, which is currently my major weak spot when it comes to productivity. And another thing: many programmers talk about being in "the zone". How are you supposed to get into the zone with only 25 minutes (as per the Pomodoro technique) available at a time?
So how did the concentration practice go? I would very often slip, though I did notice an increased resistance from my brain when the impulse to check on social media came. However, the slips were shorter than usual, and I did find myself forcing my brain to accept that there would be no break after this exam, just another exam to grade. I was basically telling my brain to shut up and suck it up, because it would get no external stimuli, no rewards until the time was up.
All in all, I found it a good exercise. I imagine I am now better at focusing - I have taken no breaks, for example, while writing this blog post. I found the method of mentally allocating time for a task a good thing, and I will try to combine this with another zen-like technique which I'll write about later.
Hopefully, this has been an important step in making my brain less addicted to outer stimuli, which I think is the basic problem I have. God willing, I'll be able to keep this up!
[Image: You little scumbag! I got your name! I got your ass! You will not laugh! You will not cry!]
Friday, May 24, 2013
Grading: What is this I don't even
Grading can occasionally be a profound glimpse into the human psyche under pressure.
For instance, some people, when they don't know the answer to a question, will try to bullshit their way to one. This is much easier to pull off in an oral exam. In a written exam... not so much.
And the thing is, doing this might actually be harmful, for two reasons:
- It might ruin the overall impression of your exam - i.e. if you give a bullshit answer early on, the grader might be slightly predisposed to look less favorably upon the rest of your answers
- It might reveal your ignorance, thus actually giving you a lower grade than you would have gotten if you had just shut up.
As for the second item above, I'm not so sure it is a problem. If someone writes a satisfactory answer to a question - containing the bare minimum of what is required, not demonstrating superior understanding but still answering correctly - I might give eighty percent (say) for that question, simply because I must assume that the person knows what they are talking about. However, if that person feels like they haven't given a fulfilling answer, and then starts throwing in stuff they think might be true, then I'm getting definitive confirmation that this person indeed doesn't know what they are talking about - thus I might actually lower the grade. To me, this makes sense - it's a kind of "innocent until proven guilty". Others take a more liberal stance, saying that as long as the right answer has been written down, it doesn't matter what else is also written.
It does of course matter exactly what kind of bullshit has been written. If it's simply information that has no relevance to the question, such as demonstrating your knowledge of the human genome when asked about that of a pig, then I agree that you shouldn't be penalized - you're keeping the bullshit away from the breadbin. However, when the bullshit starts encroaching on the perfectly fine sandwich that is your basic answer, that sandwich, too, will start to smell.
Thursday, May 23, 2013
Grading
For the last week I've been grading exams for a college-level introductory course to my field for non-scientists. I'm a TA for this course, and so we also have to step up when the time for grading comes.
The grading process is very boring; doing it for extended periods of time has noticeably numbed my mind. However, while grading one starts to notice a couple of things, and those can be interesting... if you're in that kind of mood.
For instance, how are you supposed to grade an exam? That of course depends on what type of exam it is. This particular exam consisted of fifteen questions, and since it was an exam in a science course, the answers to the questions were relatively well-defined. However, as a qualitative science exam, there was still a bit of leeway.
Even so, after grading around twenty exams, you start to notice patterns, and you stop reading the answers as carefully as you did in the beginning. Got the formula right? Check. Drew this graph correctly? Check. Included that particular process in the explanation? Oops, missed that one. That's a couple of points off.
When grading previous exams in the same course, I have tried to set up a checklist for each question and then go through the checklist, awarding points for each item contained in the answer. However, there are a couple of problems with this approach.
First of all, even though there is a solution provided by the main teacher, the main teacher doesn't really know what his students know, especially for such a low-level course. Therefore, a checklist built from the solution provided by the teacher will prove to be a bad match when facing actual exams, in the sense that you will typically emphasize stuff that no one knows, or stuff that everyone gets right.
This is why you need a training set. You need to look at a number of exams, going through the answers and identifying which parts separate the wheat from the chaff. And then, ideally, you should go through the same set again, this time using your checklist to actually grade those exams.
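In code terms, by the way, a checklist is really nothing more than a lookup table. Here is a minimal sketch - the question, keywords and point values are all invented, and a real grader obviously reads for meaning rather than matching substrings:

```python
# A made-up checklist for a single (invented) question about photosynthesis.
checklist = {
    'photosynthesis': 2,  # named the key process
    'chlorophyll': 1,     # knows where the light is absorbed
    'sunlight': 1,        # identified the energy source
}

def score_answer(answer, checklist):
    # Award points for every checklist item that appears in the answer.
    # (Substring matching is not understanding - that is exactly the
    # limitation discussed below.)
    return sum(points for item, points in checklist.items()
               if item in answer.lower())

print(score_answer("Plants use sunlight and chlorophyll to drive "
                   "photosynthesis.", checklist))  # prints 4
```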
Going through the set twice, though, is... not going to happen. At least not for me. I used to simultaneously grade and build up my checklist, meaning that the first ten to twenty exams probably were a bit off. However! We are two people grading the same exams, so as long as the other person starts at a different point than me, this approach is still pretty sound.
The second problem with the checklist approach is that the checklist doesn't convey enough information. Sometimes, you read an exam and you just know that this person has an excellent command of the material. And sometimes you read an exam and you realize this person has simply memorized the material, not really understanding what's going on. However, the checklist doesn't really differentiate between them, unless you put in some kind of checkpoint that says "Deep understanding: two points".
This could work, and I did something like that the last time I graded. However, this time around, I tried not using a checklist, rather trying to give a more "holistic" number of points for each question. That is, I tried to identify to what degree the person had understood what was going on.
This doesn't always work, since many questions are simply of the "regurgitate what you have learned" type. In those cases I would still follow something like a mental checklist. But some questions require more understanding, and in those cases I felt this approach was better. Admittedly, this means that two students who answered the exact same thing might end up with different percentages for that particular question, but since a) the exam is made up of fifteen questions, b) we are two graders and c) you get a discretized letter grade anyway, I don't think this is a crucial problem.
This post is already pretty long... I think I will split this grading experience into several posts.
[Image: How grading makes me feel]
Wednesday, May 22, 2013
Sailing
Today I went sailing with some friends. I was originally going to grade exams, but being invited to go sailing is such a rare occurrence (it has never happened before) that I joined.
I enjoyed it a lot. It's a small sailboat, less than 20 feet, with no more room than necessary for the five of us. I mainly stood close to the bow while we were sailing, and I didn't do much in terms of raising and lowering the sails etc., since it was my first time.
I can really recommend sailing if you ever get the opportunity, especially on a small boat like this. It's a good way to learn what the wind does to the boat and sails, and it is interesting to see how you have to move the sails strategically in order to take advantage of the wind. I was also surprised at how close to the wind it is possible to sail and still make good speed.
It did make me feel a little bit like I would have enjoyed being an actual sailor on an old, large sailing ship, like a frigate. However, I think there is a slight difference between sailing for four hours like we did and sailing for four months like real sailors did. I didn't get scurvy once, for instance.
[Image: But that's probably because I brought one of these.]
And now it feels like everything is undulating.