I ended last time by saying that the dynamic linker had three main tasks:
Determine and load dependencies, relocate the application and dependencies, and
initialize the application and dependencies, and how the key to speeding up all
of these was to have fewer dependencies in the application.
Now, we're going to look at the relocation process more thoroughly. First of all,
what's going on? What does 'relocation' mean?
I'm by no means an expert in this, but I'm going to venture an attempt at an
explanation: After an ELF object has been compiled, it has an entry point
address - in other words, the memory address at which it expects to reside, such
that if control is transferred to that address, the ELF object will start executing.
However, there are at least a couple of caveats here. First of all: Even if your
ELF object has a fixed entry point address, it doesn't mean it will be loaded
into actual physical memory at this address. Each process gets its own
virtual memory space, which is a 'platonic' memory space that gets mapped onto
physical memory. So the application might get loaded at the entry point address
of the virtual memory space, but this address will correspond to another address
entirely in physical space.
The second point is that if we're not talking about an executable, but rather a
dynamic shared object, as we are here (or rather, we have one executable with a
potentially high number of dynamic shared objects that are to be associated with
it), the entry point address isn't even the entry point address it will end up
with in the final executable - it will get shifted depending on what the linker
determines is the best way to combine the addresses of all participating DSOs.
This means that all 'internal' addresses in that object will be shifted by the
same amount as well. This is what we're currently talking about when we use the
term 'relocation'.
So the thing we're going to talk about is how the linker accomplishes this
relocation - and especially the part where it has to synchronize all the load addresses, etc. First, it must be noted that there are two types of dependencies a given DSO can have. For one, you can have dependencies that are located within
the same object - which I imagine happens when you create an object file with
two functions/subroutines and one of them depends on the other - and for
another, you can have dependencies that come from a different object.
The first kind of dependency is easy to handle, given that you know the 'new'
entry point address of the object in question. For each such dependency, you
calculate its relative offset from the entry point, and then simply add this
offset to the new entry point.
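To make the arithmetic concrete, here is a toy sketch in Python (purely illustrative - the addresses are made up, and a real linker obviously doesn't work on Python lists):
old_base = 0x0            # the address the object was originally laid out for
new_base = 0x7f3a2000     # the address the linker actually picked (made up)
# addresses of things inside the object, as laid out at compile time
internal_addresses = [0x0040, 0x0118, 0x0230]
# every internal reference keeps its offset from the object's start,
# so relocating it is just re-basing that offset
relocated = [new_base + (addr - old_base) for addr in internal_addresses]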
The second type of dependency resolution is more involved, and I'm going to talk
about that more the next time.
Friday, June 28, 2013
Thursday, June 27, 2013
Multiplying lots of matrices in NumPy
The other day I found myself needing to perform matrix multiplication in Python,
using NumPy. Well, what's the big deal, you say? You do know that there
exists a
dot method, right?
Yes, I do know that, you smart internet person you. However, my problem was that I had a number of matrices for which I wanted to perform the same type of matrix multiplication. I had on the order of a hundred thousand five by two matrices that were to be transposed and multiplied with another hundred thousand five by two matrices.
Well, duh, you say. The
dot method can handle more than two
dimensions, you know. Yeah, I know that as well. However, it doesn't handle it
the way I needed for this task. I wanted to end up with a hundred thousand two
by two matrices. Had I used dot, I would have ended up with a
hundred thousand by two by hundred thousand by two matrix.
So, I had to improvise:
>>> A.shape
(100000, 2, 5)
>>> B.shape
(100000, 2, 5)
>>> result = np.sum(np.swapaxes(A, 1, 2)[:, np.newaxis, :, :] * B[:, :, :,
np.newaxis], 2)
Kind of involved, but it worked. I got the initial idea from here, but the solution given there only works for symmetrical matrices - for non-symmetrical ones you have to shift the newaxis one step to the left, or it will violate the broadcasting rules of NumPy.
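(As an aside, the same batched product can also be written with einsum, and a sufficiently new NumPy can broadcast matrix multiplication over the leading axis directly - a sketch, assuming A and B have the shapes above:)
>>> result_einsum = np.einsum('nij,nkj->nik', B, A)
>>> result_matmul = B @ np.swapaxes(A, 1, 2)  # needs Python 3.5+ and a NumPy whose matmul broadcasts
>>> np.allclose(result, result_einsum)
True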
Wednesday, June 26, 2013
Super Size Suckage
Well, here's an uncontroversial post. I saw Super Size Me the other day.
And I thought it sucked.
I'm not the first to do so, but that's what I have to say today. So I might as well detail my criticism a bit more, to make it more constructive.
The premise itself is kind of funny, and the main reason I watched it was to see how much weight this guy could gain in a month. Little did I know that this was to be the subject matter of only thirty percent of the movie, whereas the rest... sucked.
My experience based on previous such documentaries (Michael Moore, I'm looking at you) is that when the documentary maker has a very specific axe to grind, you just end up disbelieving everything that is presented, and you're actually trying to find flaws with the presentation. This is what I ended up doing. And the reason was that I very quickly got a lasting impression of the documentary film maker, which can be summed up thusly: "My vegan girlfriend hates McDonalds and I want to make a documentary. Why not kill two birds with one stone?" Seriously, that girlfriend should have been left out of the movie. I cringed every time she said anything, because it was always about how superior organic and vegan food was. In the end, when she said she would 'cleanse' Morgan's post-experiment system with her special vegan diet, I cringed doubly.
Now, the above was mostly a gut reaction, but it is symptomatic of one of the biggest problems with this movie: It's not clear what in the world it's trying to say.
On the one hand, it seems to say that McDonalds is bad, and that's the take-home message. On the other hand, it seems to say that organic? vegan? food is the best. And then on the third hand, one premise of the movie seems to be that a guy is trying to eat as much fast food as he can for a month and see how that affects his health and well-being.
Then you might say, 'Well, all of these are tied together and they make up one coherent story'. But they don't. First of all, McDonalds being bad is not the same as vegan and organic food being the best. In fact, I think you will find a lot of people who would agree with the former (to some extent) but not with the latter statement. Second of all, you don't prove that McDonalds is bad by EATING TWICE AS MANY CALORIES PER DAY AS RECOMMENDED. That just proves you're bad at cause and effect.
A much better demonstration that eating McDonalds is bad for you would be to eat the recommended number of calories each day, but eating only McDonalds. If he had eaten five thousand calories worth of vegan food each day he would also gain weight.
As for the 'results' of this exercise, they're pretty much worthless as scientific facts towards demonstrating how McDonalds is bad for you. Very few of the changes that happened to his body can be said to be due solely to the fact that he was eating McDonalds and not to the fact that he was eating way too much. And some of them were pretty subjective. "I feel horrible". "My arms are twitching due to all the sugar". How do you know that?? "My sex life went down". Well, when you're binging on McDonalds food and have a vegan girlfriend, what do you expect?
Also, the most interesting result - how fat he got - was pretty underwhelming. He gained around ten kilograms, and I hardly noticed him getting fatter.
Another underwhelming result was how many times he had been asked whether he wanted a super-size menu, which was something he touted in the beginning of the movie. That was presented as one of the 'dramatic post-movie facts', you know - the ones that accompany some picture of whatever illustrates the fact best at the movie's end. He ate ninety times at McDonalds during this month, and was asked about a super-size menu nine times. Out of ninety. That's ten percent. I'm underwhelmed.
In addition to these things, you had the stock-standard Michael Moore-ish strawmen interviews, tying together unrelated facts to make a point, etc. that generally simply helped discredit the maker of the movie.
In short, I wish documentaries like this didn't get so much attention. I want to be on the right side of issues like this, but when the people who are supposedly on the right side use the same dirty tricks as those we claim to be fighting against, the lines get blurred. If what is presented is truly something that we should be shocked and appalled about, the facts will speak for themselves, and we don't need some dude or his vegan girlfriend to mix them together into a milkshake of dubious factual value.
Tuesday, June 25, 2013
Shared library writeup: Part 3
I ended last time by talking about how the dynamic linker had to run before an
ELF file could run. Currently, in other words, the dynamic linker has been
loaded into the memory space of the process that we want to run, and we're going
to run the linker before we can run our actual application.
We cannot run the linker quite yet, however. The linker has to know where it should transfer control to after it has completed whatever it is supposed to do. This is accomplished by the kernel - it puts an auxiliary vector on top of the process' stack. This vector is a name-value array (or in Python terms, a dictionary), and it already contains the values I talked about last time - the
PT_INTERP value which is contained in the
p_offset field of the ELF program header. In addition to these, a
number of other values are added to the auxiliary vector. The values that can be
added like this are defined in the elf.h header file, and have the
prefix AT_. After this vector has been set up, control is finally
transferred to the dynamic linker. The linker's entry point is defined in the
ELF header of the linker, in the e_entry field.
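As a quick illustration, you can peek at this vector from a running process - a sketch that assumes a 64-bit Linux system, where /proc/self/auxv holds the entries as pairs of native unsigned 64-bit integers:
import struct
AT_NULL, AT_PHDR, AT_BASE, AT_ENTRY = 0, 3, 7, 9  # a few of the AT_ constants from elf.h
with open('/proc/self/auxv', 'rb') as f:
    data = f.read()
for a_type, a_val in struct.iter_unpack('=QQ', data):
    if a_type == AT_NULL:  # the vector is terminated by an AT_NULL entry
        break
    if a_type in (AT_PHDR, AT_BASE, AT_ENTRY):
        print(a_type, hex(a_val))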
Trusting the linker
At this point the (dynamic) linker is supposed to do its magic. It has three tasks:
- Determine and load dependencies
- Relocate the application and all dependencies
- Initialize the application and dependencies in the correct order
Labels:
ELF,
libraries,
programming,
Quality,
regurgitated information,
Useful
Monday, June 24, 2013
Scheduling
It's weird how certain concepts simply stay out of your field of
'conceivability', so to speak, until they suddenly pop in and you feel silly for
not considering them earlier.
Setting up a schedule for myself has been such a concept. I have read about the concept and its advantages several times before, but for some reason I have just shrugged and never considered it seriously. And I don't really know why - that's the paradox of gestalt shifts - once you have shifted, you're unable to see the reasoning behind your old view (unless you have written it down, or something like that).
I believe that perhaps part of the reason I have been reluctant to set up a schedule is my slightly irregular sleeping habits. I have thought it more important to be rested than to wake up at a certain time. And I still do - working ten hours at sixty percent is worse than working eight at ninety. And my brain is really sensitive to this. It's like sleeping badly puts some kind of insulator between the synapses so they're unable to fire properly.
However, there are a couple of reasons I presently have for wanting to try out a schedule nonetheless:
If it turns out that I'm unable to function properly because I am determined to wake up at a certain time, I could always wait with setting up the schedule until the morning of the same day. That way, I know how much time I have at my disposal.
However, I presently have another theory: That my irregular sleep is in part due to my not having any obligations to get up in the morning. Currently, I have a research position, which means I can pretty much come and go as I want. Could this have a negative effect? Perhaps if I approach it more like I would a regular job, my brain somehow would get more 'incentive' to sleep properly during the night? You see, my problem isn't that I cannot fall asleep in the evening - I usually do pretty quickly. Rather, the problem is that my sleep is light and not 'restful' enough. Also, I usually wake up before time, and if I get up at that time, I will be tired.
In other words, this is going to be an experiment. I will schedule the following day the night before, including a time at which I wake up and a time at which I go to bed, and everything in between. Naturally, it will be impossible to follow such a schedule to the point - unexpected events do occur, of course, and there are some tasks which are hard to approximate in terms of time needed for completion. However, those things I believe will come with experience. The first hurdle is actually following through with it.
Labels:
Brain Sputter,
Insomnia,
personal,
Quantity,
scheduling,
self improvement
Friday, June 21, 2013
Game review: Phoenix Wright: Ace Attorney
Despite all my wishes to be a productive
person, sometimes I somehow end up playing some computer or video game.
Recently I have been playing Phoenix Wright: Ace Attorney, and thought I'd just
briefly review it.
First off, I don't like assigning one number to games, since the quality of a game can have many dimensions. So I'm just going to write what I like about the game and what I don't like.
The premise
The game I played is for Nintendo DS, and it's a kind of point-and-click mystery-solving and courtroom game, which to my knowledge is pretty unique in the market. It's animated with semi-moving anime frames. You play as Phoenix Wright, a lawyer straight out of law school, as he takes on his first cases as a defense attorney. The first mission is a simple trial, where you have to pick the witnesses' testimonies apart, pressing every point and using evidence to bring to light contradictions in their testimonies. Later, you also play the role of the evidence-gatherer, which you usually do much better than the local police force anyway. There are several clashes with the arch-nemesis, Miles Edgeworth (who has later gotten games of his own). This game is the first in a series of several.
The good
Phoenix Wright (the character) is pretty awesome, although playing the game I got a different impression from what I had from all the internet memes about him.
[Image: Y'know - these ones.]
He's a bit more insecure than I thought prior to playing. But I like the character, and Miles is also pretty cool, although sometimes I wish the anime industry would find another archetype than the 'brooding dark-haired guy' to be the cool dude.
The trials are hilarious and very entertaining. Whenever you manage to point out a contradiction and cool music starts playing, you feel like being a defense attorney would be the coolest job in the world. There's plenty of humor there, and especially if you're geared towards Japanese-style humor, you'll laugh out loud a lot. I did, at least. Most trials are pretty far-fetched in terms of how they are conducted and what is accepted as evidence and so on, but it's not much worse than your average American lawyer show.
Also, I want to mention the 'effects' as a good point of this game. In trial, when the attorneys are making a point, they are punching their fists on the desk in a really cool way. And whenever something 'unexpected' is happening, the effects really help bring this out by changing the music, kind of shaking the screen and in general putting surprised faces on everyone.
The bad
The evidence collection becomes pretty tedious, especially when you have to move through areas in a very slow manner (i.e. you cannot necessarily move from a given area to the area you want to be in - you have to go through all the 'intermediate' areas first).
The both
The music is really great at times (i.e. during the trials) but at other times it can get a bit jarring (i.e. during evidence collection).
The graphics... honestly, for a game such as this, realistic graphics are by no means something I want. The graphics do a good job without being extravagant.
In summary
Despite the shortcomings of the game (i.e. the evidence collection phase) I would heartily recommend playing it, simply because it has a unique experience to offer: Being an awesome defense attorney who fights injustice and tears down even the most arrogant of prosecutors. Get your OBJECTION!s on and play it!
Thursday, June 20, 2013
Check for duplicates in Python
Today's trick: Check whether a Python container cont contains duplicates!
if len(cont) != len(set(cont)): raise ValueError("cont contains duplicates")
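A quick sanity check (note that this assumes the elements of cont are hashable):
>>> cont = [1, 2, 3, 2]
>>> len(cont) != len(set(cont))
True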
Neat!
Wednesday, June 19, 2013
Fugitive.vim
All of my posts presently are written in the nick of time. It could be
symptomatic of either bad planning or too much to do, but I will try to keep up
the current schedule for a while longer at least.
As stated in another post, one of the plugins I am using for vim is the fugitive.vim plugin, written by Tim Pope. Its description is "A Git wrapper so awesome, it should be illegal". And so far, I have to agree - it is pretty darn awesome.
My favorite feature so far is the
:Gdiff mode. It has done wonders
to the tidiness of my Git history. It used to be that I would edit a file, spot a
minor bug that was unrelated to whatever I was currently implementing, and then
I had to either a) fix the bug and implement whatever, committing everything in
one large chunk, thus messing up the Git history, or b) stash the changes so
far, fix the bug, commit it, then continue implementing.
Option b) isn't actually that bad in itself. It just takes a little more time. However, if you spot multiple bugs, or do several separate modifications of the file, the stashing can get a little messy.
Now, the Gdiff command opens up the file you're editing together with the current HEAD version of that file. (Or actually, that's not exactly what is opened, but I have to research more about how Git does stuff before I have more to say). It opens these two files in Diff mode (which I didn't even know about prior to this). It then allows you to choose hunks to stage for a commit, so that you don't have to commit everything in the file at once, if you don't want to. (A hunk is one continuous piece of changed 'stuff'). However, you can even break up the hunks by specifying the lines you want to 'diffput'.
In short - it's awesome. It has other neat features as well, but those will have to come at another time. I might also write a more technical piece on the Gdiff thing.
Tuesday, June 18, 2013
TRAPs
As a true nerd, I am currently GMing an RPG campaign.
Preparing for sessions can be a chore - I find myself wondering how to structure the stuff I'm making up, and thinking about ways to organize often take more time than actual campaign writing.
However, there is one technique that I am extremely thankful for coming across - the TRAPs method, invented by Ry at Enworld forums: ry's Threats, Rewards, Assets and Problems (TRAPs)
[Image: Not that kind.]
It's really making my life as a GM a whole lot easier, because it's a simple algorithm for fleshing out an adventure or encounter: Everything you add should be either a Threat, Reward, Asset or Problem. If you're introducing something that's none of these, it's ineffective. And before you complain about 'atmosphere' and so on - you can easily turn any of these things into stuff that provides atmosphere.
Right now, I don't have enough time to write about this more elaborately (I have to prepare for the session), but once I have tried it out a bit more, I will try to write down my experiences.
Monday, June 17, 2013
Shared library writeup: Part 2
I ended last time talking about how the program header table contains references to the segments of an ELF file. Now, these segments can have various access permissions - some parts are executable but not writable, while some are writable but not executable.
Having a lot of non-writable segments is a good thing, since it means, in addition to data being protected from unintentional or malignant modification, that these segments can be shared if there are several applications that use them.
The way the kernel knows which segment is of which type is by reading the program header table, where this information is located. This table is represented by C structs called
Elf32_Phdr or Elf64_Phdr.
However, the program header table is not located at a fixed place in an ELF file. The only thing that is fixed is the ELF header, which is always put at 'offset' zero, meaning the beginning of the file, essentially. (Offset means how many bytes from the beginning something is located). This header is also represented by a C struct, called
Elf32_Ehdr or Elf64_Ehdr (the 32 or 64 refers to whether the computer architecture is 32-bit or 64-bit, respectively - i.e., all its registers, memory addresses and buses have sizes of 32 bits or 64 bits.)
Now, the ELF header struct contains several pieces of information (fields) that are necessary to determine where the program header is. Writing down these pieces means essentially copy-pasting the article I'm reading, so I think I will not go down to that level of granularity.
Once the kernel has found the program header table, it can start reading information about each segment. The first thing it needs to know is which type the segment is, which is represented by the
p_type field of the program header table struct. If this field has the value PT_LOAD it means that this segment is 'loadable'. (Other values this field can have are PT_DYNAMIC, which means that this segment contains dynamic linking information, PT_NOTE, which means the segment contains auxiliary notes, et cetera.) If the p_type field has the value PT_LOAD, the kernel must, in addition to knowing where the segment starts, also know how big it is, which is specified in the p_filesz field. There are also a couple of fields that describe where the segment is located in virtual memory space. However, the actual offset in virtual memory space is irrelevant for DSOs that are not linked, since they haven't been assigned a specific place in virtual memory space. For executables and so-called 'prelinked' DSOs (meaning that they have been bound to an executable even if they're dynamic), the offset is relevant.
However, even though the offset in memory is irrelevant for unlinked DSOs, the virtual memory size of the segment is relevant. This is because the actual memory space that the segment needs can be larger than the size of the segment in-file. When the kernel loads the segment into memory, if the requested memory size is larger than the segment size, the extra memory is initialized with zeroes. This is practical if there are so-called BSS sections in the segment. BSS is an old name for a section of data that contains only zero bits. Thus, as long as extraneous memory is initialized with zeroes, this is a good way to save space - you only need to know how large the BSS section is, add that size to the current size of the segment, and the kernel handles the rest. An example of a BSS section is a section containing uninitialized variables in C code, since such variables are set to zero in C anyway.
Finally, each segment has a logical set of permissions that is defined in the
p_flags field of the program header struct - whether the segment is writable, readable, executable or any combination of the three.
After this, the virtual address space for the ELF executable is set up. However, the executable binary at this point only contains the segments that had the
PT_LOAD value in the p_type field. The dynamically linked segments are not yet loaded - they only have an address in virtual memory. Therefore, before execution can start, another program must be executed - the dynamic linker.
The dynamic linker is a program just like the executable we're trying to run, so it has to go through all the above steps. The difference is that the linker is a complete binary, and it should also be relocatable. Which linker is used is not specified by the kernel - it is contained in a special segment in the ELF file, which has the
PT_INTERP value in the p_type field. This segment is just a null-terminated string which specifies which linker to use. And the load address of the linker should not conflict with any of the executables on which it is being run.
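To tie the pieces of this part together, here is a rough sketch of reading these fields in Python - it assumes a 64-bit, little-endian ELF file, uses the field layout from elf.h, and the path is just an example:
import struct
PT_LOAD, PT_DYNAMIC, PT_INTERP, PT_NOTE = 1, 2, 3, 4
with open('/bin/ls', 'rb') as f:  # any ELF executable or DSO will do
    elf = f.read()
assert elf[:4] == b'\x7fELF'  # the ELF magic at offset zero
# The 64-bit ELF header fields that follow the 16-byte e_ident array:
# e_type, e_machine, e_version, e_entry, e_phoff, e_shoff, e_flags,
# e_ehsize, e_phentsize, e_phnum, e_shentsize, e_shnum, e_shstrndx
hdr = struct.unpack_from('<HHIQQQIHHHHHH', elf, 16)
e_phoff, e_phentsize, e_phnum = hdr[4], hdr[8], hdr[9]
for i in range(e_phnum):
    # each 64-bit program header entry: p_type, p_flags, p_offset, p_vaddr,
    # p_paddr, p_filesz, p_memsz, p_align
    p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz, p_memsz, p_align = \
        struct.unpack_from('<IIQQQQQQ', elf, e_phoff + i * e_phentsize)
    if p_type == PT_LOAD:
        print('PT_LOAD segment at vaddr', hex(p_vaddr), 'file size', p_filesz, 'memory size', p_memsz)
    elif p_type == PT_INTERP:
        print('dynamic linker:', elf[p_offset:p_offset + p_filesz].rstrip(b'\0'))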
This ends the second part of the writeup. And there's plenty left..
Labels:
ELF,
libraries,
programming,
Quality,
regurgitated information,
Useful
Friday, June 14, 2013
Shared library writeup: Part 1
During my daily work this week, I found myself struggling with shared libraries, linking them, and the various compiler flags needed to make the type of library you want. I decided to actually learn this stuff once and for all, and so I am currently reading "How to write shared libraries" by Ulrich Drepper. I decided this was a perfect opportunity to multitask - both write stuff for the blog and learn something! Especially since you learn much better by writing about it. Hence, this will be the first part of my writeup of Drepper's paper.
In the most abstract, libraries are collections of code gathered into one file for easy reuse. They can be static, meaning that if you want to use the code in a program, the compiler must take the code contained in the library and bake it into the program upon compilation. Alternatively, they can also be shared or dynamic, meaning that they are not included in the program upon compilation, but the program contains mention of the libraries, so that on run-time, the program loads the library and incorporates it into the program.
Nowadays (on Unix-like systems), libraries are handled by the so-called ELF (Executable and Linkable Format), which is a common file format that is used not just for libraries, but for executables and other types of files as well.
Earlier, other formats, such as a.out and the Common Object File Format (COFF), were used. The disadvantage with these was that these libraries did not support relocation.
When you have a piece of compiled code (typically in what's called an object file), this file will contain a relocation table. Such a table is a list of pointers to various addresses within that object file, and these addresses are typically given relative to the beginning of the file (which is typically zero). When combining several such object files into one large executable, this object-file-specific list must typically be changed, since the object file now is not located at 'zero' anymore, but rather at some arbitrary point within the new executable. Then, when the executable is to be executed, the addresses are again modified to reflect the actual addresses in RAM. This last part is what is not supported by the old library formats.
This essentially means that each library must be given an absolute address in virtual memory upon creation, and that some central authority must keep track of where the various shared libraries are stored. In addition: when we make additions to a library that is supposed to be shared, we don't want to have to tell all the applications that used the old version that the library has changed - as long as the new version still contains all the stuff we need for our application, it should still work for that application without having to re-link the application with the new version of the library. This means that the table that points to where the various parts of the library are located must be kept separate from the actual library, and it must actually keep track of the pointer tables of all the old versions of that library - once a function had been added to a library, its address lasted forever. New additions to a library would just append to the existing table. In short, a.out and COFF were not very practical for use as shared libraries, although they did make the program run fast, since there is no relocation of table pointers at run time.
Enter ELF
ELF is, as mentioned before, a common file type for applications, object files, libraries and more. It is therefore very easy to make a library once you know how to make an application - you just pass in an additional compiler flag. The only difference between them is that applications usually have a fixed load address, that is, the (virtual) memory address into which they are loaded upon execution. There is a special class of applications, called Position Independent Executables (PIEs), that don't even have a fixed load address, and for those, the difference between applications and shared libraries is even smaller.
For an application that contains no dynamic components (no shared libraries etc.), its execution is straightforward: The application is loaded into memory, then the instruction at the 'entry point' memory address is executed, which should start a chain of events that ends with the termination of the program.
For applications that do contain dynamic components, it is less straightforward: There must be another program that can coordinate the application with the DSOs (Dynamic Shared Objects) before execution of the program starts.
The ELF file structure
ELF files usually contain the following:
- the file header
- the Program header table
- the Section header table
- Sections
The section header table is a table with references to the various sections of the file. The program header table contains references to various groupings of the sections. So you might say that the section header table describes each 'atom' of the file, whereas the program header table collects these atoms into 'molecules' and makes sensible chunks, called segments, that are sections that work together to form a coherent whole.
End of part 1 of the writeup! And I'm only on page 3 of the paper!
Labels:
ELF,
libraries,
programming,
Quality,
regurgitated information,
Useful
Thursday, June 13, 2013
Manual labor
A couple of times lately I have helped my grandparents do some manual labor
(pruning their fruit trees and trimming the hedges).
I don't do much manual labor at home myself. I live in a rented apartment, so I don't have much maintenance to speak of, and my day job mainly consists of programming, which can only be thought of as manual labor if you are a pedant and use the original definition of the word.
[Image: But in that case, this is manual labor as well.]
However, whenever I get to do some real manual labor, I think I should do more of it. It's both due to the 'getting to work your body' thing and the 'feeling like you actually did stuff' thing. Together, they give a feeling of wholesomeness.
If and when I ever get a family and/or own a house of my own, I suppose there will be more of this. Until then, I'll just have to help out my grandparents as much as I can.
Wednesday, June 12, 2013
Vim
I use vim for editing.
There is no overwhelmingly reasonable reason I chose vim. Several years ago, when I first started programming on a significant basis, I started reading about the editor wars, and I immediately knew I had to make a choice and stick with it. I think the article that stands out as the main reason for my choice is this one:
Pirates and ninjas: Emacs or Vi?"
Of all the articles that could have been the basis for my choice, this is probably one of the least reasonable. However, when I did read this article, I didn't know anything about what would be useful when programming. And so, connecting both editors to ninjas and pirates made it easy to make a choice (which, I think, matters not that much in the long run anyway).
Ninjas simply appeal more to me than pirates do, and knowing nothing else, I chose vim. I cannot say I regret the choice, but that could easily be just because I haven't tried emacs.
[Image: Vim: for when cannons and cutlasses just won't cut it.]
(Short aside: When I read the above article, I didn't know who Richard Stallman was. However, as it turns out, if I had known, there would have been more of an incentive to choose vim.)
Both editors benefit from plugins. I haven't manually installed many - I think the only ones I currently have installed are the fugitive plugin, written by Tim Pope, and the python indent script written by Eric McSween. I will elaborate on the former in a later post.
Of course, knowing the commands available to you is also something that makes you effective, whichever editor you use. I don't know a tenth of all the stuff that vim can do in theory, but this Stack Exchange question was a lot of help to me.
There are a couple of keybindings I find very helpful. Mapping caps lock to ESC, 't to :tabnew, 's to :w and 'q to :q are some that save plenty of keystrokes in the long run.
The more you use an editor, the better you get at it and the less you gain by switching. So it's likely I will keep using vim for the unforeseeable future. And that's ok.
Tuesday, June 11, 2013
Dip into finance
Today I attended a lecture by a relatively well-known academic within computational finance (the reason for this is to try to figure out what to do after what I'm currently doing).
I wasn't too familiar with the terms used within finance, so I didn't follow the discussion. In this particular course (where the speaker was a guest lecturer) they seem to use Excel a lot. Probably this is just a particularity of the course level.
One thing that really stood out was that they talked a lot about master theses. Every half hour or so, the lecturer or someone else would say something like "This is probably something that a master student could have as their project." In my field, this very rarely comes up. It made me wonder how it is in other fields. Is it a sign of underabundance of researchers within finance?
After this lecture, I am less opposed to working in finance than I previously was. I asked the lecturer about future prospects based on my own history, and she said I would have very few problems entering quantitative finance in some way - I could perhaps take a couple of courses in finance first. Also, she said that she didn't find working with finance less intellectually stimulating than what she did before (she switched to finance after her Ph.D.).
I suppose what's standing in the way of a career in finance for me is the thought that it is less 'pure' or 'ideal' than what I am currently doing. After all, working with finance is not trying to figure out how the world works. But then again - working in the academic world has made me rethink the validity of stating that I am trying to figure out how the world works. At my institute, at least, it seems to be less and less true the older you get, as grant applications, teaching etc. take over.
I am still undecided about this. I have to try to gather as much information as possible about the experience of working with something else than what I am currently doing before I make a choice.
Monday, June 10, 2013
NumPy structured arrays
I'm programming quite a bit in Python, and my understanding of the language grows incrementally. Due to the nature of my work, I also use NumPy a lot. Today I had to solve the following problem:
- Take an input dictionary
- Create a NumPy structured array with the keys as field names, the datatypes of the values as the field datatypes, and the values themselves as the array elements.
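Something along these lines does the job. This is a minimal sketch - the example dictionary (field names and values) is made up purely for illustration, and I'm assuming all the values are NumPy arrays of equal length:

import numpy as np

# Made-up input; in reality the dictionary came from my data files.
data = {'station': np.array([1, 2, 3], dtype=np.uint8),
        'temperature': np.array([271, 265, 280], dtype=np.int16),
        'pressure': np.array([1013, 998, 1005], dtype=np.int16)}

# The keys become field names and the value dtypes become field dtypes ...
dtype = np.dtype([(key, value.dtype) for key, value in data.items()])

# ... and the values themselves fill the array, one field at a time.
length = len(next(iter(data.values())))
result = np.empty(length, dtype=dtype)
for key, value in data.items():
    result[key] = value

print(result.dtype.names)  # e.g. ('station', 'temperature', 'pressure')
print(result[0])           # the first 'structure', e.g. (1, 271, 1013)

Note that the field order simply follows the dictionary's iteration order, so if the order matters you'd want to iterate over an explicit list of keys instead.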
First of all, I thought that a structured array would be like a 'normal' NumPy array, just that one of the dimensions had field names and data types associated with them.
But I think I am wrong in this - I think it's more a matter of a structured array being a NumPy array, where each element in the array is a structure (which makes sense once I think about it).
For instance, you can't slice a structured array according to the first interpretation:
In [1]: import numpy as np; dtype = ''.join(('uint8,', 4*'int16,', 'int16'))
In [2]: b = np.array([(0, 1, 2, 3, 4, 5)], dtype=dtype)
In [3]: b.shape
Out[3]: (1,)
In [4]: b[0, 3]
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
----> 1 b[0, 3]
IndexError: too many indices
However, if you treat the result of a slice as a separate array, it works:
In [5]: b[0][3]
Out[5]: 3
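A nice consequence of that view, by the way, is that you can pull out a whole field by name across all the records (with a comma-separated dtype string like the one above, the fields get the default names 'f0', 'f1', and so on):
In [6]: b['f3']
Out[6]: array([3], dtype=int16)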
This is very basic, I know. But it's something I learned today. And that's what this blog mainly is for. Hopefully I will learn more interesting stuff later.
Labels:
numpy,
programming,
python,
Quantity,
Today I learned,
Useful
Friday, June 7, 2013
Bureaucracy and object-oriented programming
Today, as I had to grapple with certain aspects of real-life bureaucracy, I was struck by the similarities between bureaucracy and object-oriented programming. I did a search, and found this:
Five (good) lessons the government teaches us about object-oriented programming.
I suppose there are some concepts in there that are outdated (in some communities) - for instance, I have the impression that in Python, the encapsulation concept isn't thought of as that central (cf. the 'consenting adults' paradigm). But still, the article makes good points, I think.
I think the main difference between OO programming and bureaucracy - or rather, the reason these concepts work so well in one case and not so well in the other - is that humans working together in a bureaucracy are not remotely like a logical machine. One cannot trust the output from one 'object'. The processing times are much longer. And the instantiation overhead is way too expensive in bureaucracies - people have to learn to cope with new regulations, departments, and so on.
I wonder if this can be extended somehow... Is it possible to make a model of real-life bureaucratic processing based on other programming paradigms, like procedural programming? If I have time at some point, I'll try to think about this more.
Labels:
bureaucracy,
object-oriented programming,
Quantity,
Thoughtful
Thursday, June 6, 2013
First and foremost
As indicated on my profile, I am a Christian.
And in spite of the way it is 'casually' thrown in there as one of the things that define me, it is in fact the main thing that defines me. Everything else derives from that aspect of myself.
The reason I have put those defining traits together in such a haphazard manner is that this isn't going to be a "Christian" blog - meaning, it isn't going to be a blog that is mainly focused on Christian life, inspiration for such a life, and so on.
Rather, it's going to be a blog that's written by a Christian. Everything I write is written with a Christian backdrop, but that's not always going to be the main actor in every blog post.
Christians have kind of a bad rep in 'reason-focused' groups in the western world. I think part of that is that we're not making ourselves visible as serious actors in those groups, and that we define our own framework of thinking. That's fine (no, really - I will probably write more on that later), but we also need to engage with other frameworks of thinking.
Used to be that Christians were active in all kinds of activities - writing great literature, doing great science, building great buildings, etc. And believe it or not, there hasn't been any kind of great discovery that "disproves" Christianity, no matter what certain people might insist. Rather, there has been a shift in how Christianity is viewed. Hopefully, this can and will change in the future.
Sometimes, I will write about Christian stuff. The posts will be labeled accordingly.
Wednesday, June 5, 2013
Wrapping your body
Sometimes I sleep badly. I have no problem going to sleep, but sometimes I wake up too early for some reason and have trouble getting back to sleep. It's as if my mind is on some kind of high (excited about the coming day, maybe?) and isn't able to calm down until a while later, at which point I have missed two hours of sleep and just know that the day is going to be crap.
As long as you are single and have flexible work hours, this doesn't need to be that big of a deal. If you wake up early you can go do some work and then go back to sleep when your mind has calmed down a little. However, that's quite a limited group of people.
Something that I've experimented with the last couple of sleepless early mornings has been a relaxation technique that I did a couple of times as a teenager. It's pretty simple - you lie on your back, calm down, and then start thinking about your toes, relaxing them. You then move upwards through your body and focus on each body part, relaxing it, thinking that it becomes heavier. It feels a little like wrapping your body in some kind of 'Relax-o-wrap'. You end with your mouth, nose and eyes. After that, all of your body is mentally wrapped up, and you actually feel like lifting your arm, for instance, would ruin the wrap.
Then, you start focusing on your breath. You inhale deeply, down to the bottom of your lungs, so that it is your stomach that rises and falls, not your chest. I was taught to inhale through the nose and exhale through the mouth, but it's not vital. The vital part is that you focus on your breathing with your mind. In the beginning it's frustrating and hard to focus, but after a while you suddenly realize that you almost dozed off for a second. Then you actually do doze off for a second. Then for longer. Then you start dreaming - my dreams have been weird during this exercise - often they're about falling and flying etc.
After waking up, I usually do some kind of 'unwrapping' routine, focusing on each body part and making it 'unheavy' again. I don't know how important that is, but it maintains the illusion of a wrap around your body, which I think is important for this exercise.
So far, I haven't been able to go back to 'normal' sleep with this technique, but I find a certain type of sleep which I think is far superior to being awake. Maybe with time, regular sleep comes as well. Progress will be reported (if I can remember to do it).
Tuesday, June 4, 2013
Exercising
As mentioned here, I exercise regularly.
"Regularly" in this context means thrice a week, and it also means that I always exercise in the morning, right after waking up and before breakfast. Sometimes I skip exercising, though I shouldn't. Usually that's because I've slept badly (subject for another post!) and don't need more exhaustion. Sometimes it's because I was up late the day before and don't have time to exercise. Sometimes it's a combination (I slept badly, so I woke up late). But these are exceptions.
I exercise for about an hour. I usually listen to two podcasts of my favorite radio show while exercising, and they last for half an hour each.
The exercise is pretty tiring. I start with three repetitions of the following:
- x burpees (the push up + jump up variant), where x is a function of my fitness (Currently x=13).
- Shadowboxing for y seconds, where I typically adjust y so that it takes as long as the burpees do. Currently, y=45 seconds, although the burpees don't take that long, so I have to adjust a little.
- Do one more of both the above points.
- Rest for a couple of minutes.
- The Plank for z seconds, where z=90 the first repetition, z=60 the second repetition, and z=45 the third repetition.
- Rest for a minute or so.
After this, I do back and abdominal exercises for about twenty-five minutes, which I think is important when you sit as much during the day as I do. In between these, I do as many pull-ups as I can.
It is an important point for me to be able to exercise without too much hassle, because otherwise I usually never get around to it. The less overhead time, the better. So I prefer to exercise at home using only body-weight. For those of us who are only reasonably fit, that's more than enough. If your goal is to stay fit, not build muscles, there really is no point in doing heavy weight-lifting, IMO. Body-weight exercise will only take you so far, though, so if you want to look really buff, then you should start lifting weights.
[Image caption: "Or you can start doing experiments with certain drugs."]
When I first started doing burpees, they totally killed me. They're one of the most exhausting forms of exercise I know, as long as you do a proper jump up and a proper push up each time. So in the beginning, x in the above regime was about two or three. It's nice to see improvement. I am a bit unsure of doing this for a long time, though. Although it's probably better for your legs and back to do burpees than running (for a fixed amount of 'exercise'), it can still be a strain on the joints to do that many jump-ups. So far, though, so good, so I'll keep doing it until it starts hurting!
Anyway - the above regime works all major muscle groups in addition to being good cardio exercise. Combined with healthy eating, and remembering that being hungry for a little while isn't dangerous, you should notice an improvement in how you look and feel after a couple of weeks.
Labels:
Brain Sputter,
dieting,
exercising,
Quantity,
self improvement
Monday, June 3, 2013
Interesting tasks as motivation
I used to play a lot of video games. I dread to count the hours spent doing this. Now, I don't play much anymore, though I have occasional bouts where I go on a total gaming spree. Usually that leaves me pretty depressed afterwards.
I currently have a hope that this will not happen anymore, now that I view learning programming as a fun 'hobby'. That is, when I'm working on science-related stuff now and lack motivation, I tell myself that "once you're done with this, you can learn more programming". And it seems to work.
At least for now. I have found that many of these motivational techniques are fleeting, so it remains to be seen whether this technique stands the test of time. However, I do believe that the key to being productive is to combine several techniques that work for you. So if I combine the "learn programming once you're done" technique with some kind of variation on the Pomodoro technique mentioned in an earlier post, maybe the combination will yield good results.
In the end, though, I think it's a matter of teaching your brain to operate differently - to carve out new neuron patterns so that the brain has less resistance in the directions I want it to go. The way there can be hard and painful, though!
Labels:
brain whipping,
gaming,
motivation,
personal,
productivity,
Quantity,
self improvement