Monday, December 19, 2011
Yesterday I discussed my search for help with implementing a free list in RTEMS. Today I'm glad to say I was able to use partitions to achieve my objective of a growable free list, something akin to a slab allocator. A fundamental limit, the maximum number of partitions, still exists, but partitions can have an arbitrary size, so the limit might be circumvented safely by making each new partition much larger than the previous one or by combining old partitions into new ones. Further improvements can be made, but for now I'm satisfied.
Allocation
Iterate over available partitions until one returns a free object. If none succeeds, create a new partition (malloc a new set of objects) and get a free object from the new partition.
I have not handled what to do if partition creation fails. Performance could be improved by tracking the non-full partitions.
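Here is a minimal sketch of that allocation path using the Classic API Partition Manager directives. The object layout, the bookkeeping variables (parts, part_count), and the constants MAX_PARTS and OBJECTS_PER_PART are made up for illustration; they are not the code in my application.

#include <rtems.h>
#include <stdint.h>
#include <stdlib.h>

#define MAX_PARTS        8   /* assumed cap; one slot per created partition */
#define OBJECTS_PER_PART 64  /* assumed growth increment                    */

typedef struct {
  rtems_id part_id;          /* handle to the owning partition              */
  uint8_t  payload[ 60 ];    /* application data; keeps the buffer size
                                comfortably above the partition minimum     */
} my_object;

static rtems_id parts[ MAX_PARTS ];
static size_t   part_count;

my_object *my_object_allocate( void )
{
  void  *buffer;
  size_t i;

  /* Try every existing partition until one yields a free buffer. */
  for ( i = 0; i < part_count; ++i ) {
    if ( rtems_partition_get_buffer( parts[ i ], &buffer ) == RTEMS_SUCCESSFUL ) {
      ( (my_object *) buffer )->part_id = parts[ i ];
      return buffer;
    }
  }

  /* Every partition is exhausted: grow the free list with a new one. */
  if ( part_count == MAX_PARTS )
    return NULL;

  void *area = malloc( OBJECTS_PER_PART * sizeof( my_object ) );
  if ( area == NULL )
    return NULL;

  rtems_status_code sc = rtems_partition_create(
    rtems_build_name( 'F', 'R', 'E', 'E' ),
    area,
    OBJECTS_PER_PART * sizeof( my_object ),
    sizeof( my_object ),
    RTEMS_DEFAULT_ATTRIBUTES,
    &parts[ part_count ]
  );
  if ( sc != RTEMS_SUCCESSFUL ) {
    free( area );  /* the unhandled case mentioned above: simply give up */
    return NULL;
  }

  rtems_partition_get_buffer( parts[ part_count ], &buffer );
  ( (my_object *) buffer )->part_id = parts[ part_count ];
  ++part_count;
  return buffer;
}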
Deallocation
Return the object to its partition. I made this easy by storing a handle to the owning partition as part of each object. Non-invasive techniques, such as tracking address ranges for each partition, could be used for a generic solution.
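As a sketch, using the stored handle and the made-up my_object layout from the allocation sketch above, deallocation is a one-liner:

void my_object_free( my_object *object )
{
  /* Return the buffer to the partition recorded at allocation time. */
  rtems_partition_return_buffer( object->part_id, object );
}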
Sunday, December 18, 2011
RTEMS free list?
I implemented a free list in an RTEMS application as a pre-allocated array of objects on a linked list, but I foresee the need to have a fail-over case for growing my free list dynamically. Extending my current implementation to support dynamically changing the size of the free list is easy enough, but I'm interested to see if such support exists. I found two likely candidates: Partitions and Regions.
Partitions implement a simple free list of objects and have low overhead in both time and space. However, a partition cannot change the size of its memory, so an allocation will fail after the partition exhausts its preallocated set of buffers. This behavior is exactly what I have already implemented for my application.
Regions implement a more complex memory manager that allows for carving up a chunk of memory, called a region, and allocating segments from within a given region. However, since segments can be variable-sized, the region manager uses a first-fit allocator that coalesces free blocks. Such an allocator is overkill for a free list of fixed-size objects.
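To make the contrast concrete, the two allocation paths look roughly like this; pool_id and region_id stand for a partition and a region that already exist, and the 128-byte request size is arbitrary.

#include <rtems.h>

/* Illustrative only: exercise one get/return cycle on each manager. */
void compare_allocators( rtems_id pool_id, rtems_id region_id )
{
  void *buffer;
  void *segment;

  /* Partition: fixed-size buffers, constant-time get/return, cannot grow. */
  if ( rtems_partition_get_buffer( pool_id, &buffer ) == RTEMS_SUCCESSFUL )
    rtems_partition_return_buffer( pool_id, buffer );

  /* Region: variable-sized segments carved by a first-fit, coalescing heap. */
  if ( rtems_region_get_segment( region_id, 128, RTEMS_NO_WAIT,
                                 RTEMS_NO_TIMEOUT, &segment ) == RTEMS_SUCCESSFUL )
    rtems_region_return_segment( region_id, segment );
}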
Neither is quite what I want; where is the middle ground? My suspicion is that no such mechanism already exists because (hard) real-time applications usually have well-known resource requirements: the number of objects in the system is known, and the partition manager is sufficient. But when an unknown number of same-sized objects is employed, the current mechanisms appear to be insufficient.
Interestingly, the Object Manager, an internal RTEMS subsystem, does seem to support dynamically extending the size (up to a preconfigured, possibly unlimited, maximum) of a free list of fixed-size objects. Unfortunately the Object Manager is not accessible directly from the application level without violating API visibility rules.
Update: See my follow-up post regarding using partitions to implement a growable free list.
Wednesday, December 7, 2011
Science and Government
I just finished reading a brief lecture by C.P. Snow entitled "Science and Government" that, among other things, examines the role of science in decision making. Anyone who is involved in committees, politics (governmental, office, academic, or otherwise), or decision-making should read this lecture. I found its central story both entertaining and enlightening.
One of the main conclusions of the lecture is that decision-making ought not succumb to the euphoria of secrecy, and that decision-makers ought not be obsessed with gadgets: personally favored creations that have little value and lack a broad base of support. Snow says,
The euphoria of gadgets; the euphoria of secrecy...are the origin of 90 per cent of ill-judged scientific choices. Any scientist who is prone to these euphorias ought to be kept out of government decisions or choice-making, at almost any cost.
And as an argument against obsessive covertness, he claims that "societies at about the same level of technology will produce similar inventions."
He also discusses the roles of committees in decision-making (regardless of political organization, committees are almost invariably present) and gives some advice on what may constitute an effective committee, namely:
- The committee's objective must be clear and not too vast.
- The committee must be placed appropriately within the hierarchy of the larger (governmental) system.
- The committee must have powers of action. Advisory committees in particular have no power and therefore cannot make decisions.
I found it a quick, accessible read that remains relevant despite the lecture being 50 years old.
Monday, December 5, 2011
RTEMS Memory Protection Middle Layer
This is a continuation of my prior post in which I described an API for memory protection. Here I will describe the middle layer that glues the user API to the CPU support code that eventually interacts with hardware to enforce the permission attributes.
For those familiar with RTEMS, this code lives in libcpu and the design mimics that of the cache manager. The middle layer is just 5 functions:
/*
 * Initialize the hardware to prepare for
 * memory protection directives.
 */
rtems_status_code _CPU_Memory_protection_Initialize( void );

/*
 * Make sure memory protection permission
 * is valid for this CPU
 */
rtems_status_code _CPU_Memory_protection_Verify_permission(
  rtems_memory_protection_permission permission
);

/*
 * Check if memory protection region size
 * is valid for this CPU
 */
rtems_status_code _CPU_Memory_protection_Verify_size(
  size_t size
);

/*
 * Install (enforce) the memory protection entry mpe
 */
rtems_status_code _CPU_Memory_protection_Install_MPE(
  rtems_memory_protection_entry *mpe
);

/*
 * Uninstall the memory protection entry mpe
 */
rtems_status_code _CPU_Memory_protection_Uninstall_MPE(
  rtems_memory_protection_entry *mpe
);

These functions are called via thin wrappers from the high-level API. The install and uninstall entry functions are called iteratively by the higher-level functions that install and uninstall protection domains. The two "verify" functions are used proactively by the high-level API to avoid trying to install invalid entries. The remaining function, the initialization, interacts with the hardware directly to set up the memory protection mechanisms.
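To illustrate how the high-level wrappers drive these entry points, the domain install could iterate the active MPE chain roughly as follows. This is a sketch of the idea using the Classic API chain helpers, not the code as committed.

rtems_status_code rtems_memory_protection_install_domain(
  rtems_memory_protection_domain *the_domain
)
{
  rtems_chain_node *node = rtems_chain_first( &the_domain->active_mpe_chain );

  /* Walk the active chain and ask the CPU layer to enforce each entry. */
  while ( !rtems_chain_is_tail( &the_domain->active_mpe_chain, node ) ) {
    rtems_memory_protection_entry *mpe = (rtems_memory_protection_entry *) node;
    rtems_status_code sc = _CPU_Memory_protection_Install_MPE( mpe );

    if ( sc != RTEMS_SUCCESSFUL )
      return sc;

    mpe->installed = true;
    node = rtems_chain_next( node );
  }

  return RTEMS_SUCCESSFUL;
}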
Next I will briefly explain what each function does in my sparc64 implementation.
- _CPU_Memory_protection_Initialize
- Disables interrupts
- Selectively demaps pages that were loaded during system startup
- Flushes the cache
- Maps new pages for the kernel core services
- Installs exception handlers for MMU misses
- _CPU_Memory_protection_Verify_permission
- Returns true; needs a little refinement.
- _CPU_Memory_protection_Verify_size
- Ensures the size (bounds) is a multiple of the supported TLB page size (a minimal sketch of this check follows the list)
- _CPU_Memory_protection_Install_MPE
- Adds a TLB entry for the memory region specified in the MPE
- _CPU_Memory_protection_Uninstall_MPE
- Removes the TLB entry for the specified memory region
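As promised above, here is roughly what the size check amounts to. I assume an 8 KB base TLB page size, and MMU_PAGE_SIZE is a stand-in name rather than the constant used in the actual sources.

#include <rtems.h>

#define MMU_PAGE_SIZE 0x2000  /* assumed 8 KB base page for this sketch */

rtems_status_code _CPU_Memory_protection_Verify_size( size_t size )
{
  /* A region is mappable only if it covers a whole number of pages. */
  if ( size == 0 || ( size % MMU_PAGE_SIZE ) != 0 )
    return RTEMS_INVALID_SIZE;

  return RTEMS_SUCCESSFUL;
}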
RTEMS Memory Protection API
I am working on a solution to introduce real-time memory protection to RTEMS. I started from the work of two GSOC projects related to MMU support in RTEMS, and you can see most of the raw code that I have at http://code.google.com/p/gsoc2011-rtems-mmu-support-project/. This post describes the current design of the (high-level) API layer for setting attributes on regions of memory. I plan to post in the future about the middle and low-level interfaces. Any suggestions about design/naming/implementation etc. are welcomed.
The high-level interface comprises a region and attributes to create a memory protection entry (MPE), a set of which constitute a memory protection domain (MPD). Three structures are visible at the API layer, although only regions should ever be directly accessed; MPEs and MPDs are only meant to be used as handles in userland.
/**
 * A region of contiguous memory
 */
typedef struct {
  char   *name;
  void   *base;
  size_t  bounds;
} rtems_memory_protection_region_descriptor;

typedef uint8_t rtems_memory_protection_permission;

/**
 * A memory protection entry (MPE) is a region of contiguous memory
 * (rtems_memory_protection_region_descriptor) with a set of permissions.
 * If it is currently being enforced then the installed field is true.
 */
typedef struct {
  rtems_chain_node                           node;
  rtems_memory_protection_region_descriptor  region;
  bool                                       installed;
  rtems_memory_protection_permission         permissions;
} rtems_memory_protection_entry;

/**
 * A memory protection domain comprises a chain of active MPEs and
 * a freelist of idle MPEs. MPE storage is allocated from the Workspace
 * and managed in the MPD.
 */
typedef struct {
  rtems_memory_protection_entry *mpe_array;
  size_t                         mpe_array_size;
  rtems_chain_control            active_mpe_chain;
  rtems_chain_control            idle_mpe_chain; /* free list */
} rtems_memory_protection_domain;
Application developers can create regions, for example
rtems_memory_protection_region_descriptor text_region = {
  .name   = "text",
  .base   = TEXT_BEGIN,
  .bounds = TEXT_SIZE
};

where TEXT_BEGIN and TEXT_SIZE are specified elsewhere. The advantage of using a name to identify a region is that eventually we could layer a filesystem over the MPD structure and open memory files with specified attributes, for example int fd = open("/memory/text", O_RDWR);. I have not given much thought to how best to create regions, but that interface should be orthogonal to the protection API.
Regions must be encapsulated within MPEs and added to a MPD to have permissions enforced. The requisite functions are
/**
 * Allocates an array of max_number_of_mpes
 * memory protection entries from the workspace.
 */
rtems_status_code rtems_memory_protection_initialize_domain(
  rtems_memory_protection_domain *domain,
  size_t max_number_of_mpes
);

/**
 * Create a new memory protection entry for the region with
 * permissions in perms. This function adds the newly created mpe
 * to the active chain of the_domain but does not install it.
 */
rtems_status_code rtems_memory_protection_create_entry(
  rtems_memory_protection_domain* the_domain,
  rtems_memory_protection_region_descriptor * const region,
  const rtems_memory_protection_permission perms,
  rtems_memory_protection_entry** mpe_ret
);

/**
 * Install all mpes on the active list of the_domain.
 */
rtems_status_code rtems_memory_protection_install_domain(
  rtems_memory_protection_domain* the_domain
);

So to enforce the text_region from above:
rtems_memory_protection_domain protection_domain;  /* caller provides the MPD storage */
rtems_memory_protection_entry *mp_entry;
rtems_memory_protection_permission permission;

rtems_memory_protection_initialize_domain( &protection_domain, 8 );

permission = RTEMS_MEMORY_PROTECTION_READ_PERMISSION |
             RTEMS_MEMORY_PROTECTION_EXECUTE_PERMISSION;

rtems_memory_protection_create_entry(
  &protection_domain,
  &text_region,
  permission,
  &mp_entry
);

rtems_memory_protection_install_domain( &protection_domain );

One aspect I'm not happy with is the permissions field, an 8-bit field with preprocessor macros defining masks; a better solution would improve usability. The remainder of the API layer is a handful of functions to manage MPEs (find an MPE by address or MPD, delete an MPE from an MPD, set permissions) and MPDs (uninstall all MPEs and finalize to free resources).
Update: continued here
Wednesday, November 16, 2011
Gambler's fallacy
I was listening to ESPN First Take on Monday (podcast [mp3]) and the hosts were debating whether the Falcons should have punted or not on fourth and short in overtime against the Saints. In wrapping up the debate, Jay Crawford said (around minute 32 of the podcast):
"One thing on your [Jon Ritchie's] numbers deal, because I'm a math guy too. They [the Falcons] have gone for it twice earlier, similar situation, you said that there is a 66.1% probability [sic] of picking it up ... Here's why you don't go for it: they've tried it twice, they've made it twice [earlier in the game], it's probability. Because if you're sitting at a blackjack table and you're playing odds, you're playing three times, and the law of your averages tells you, it's simple math if you try it three times you're going to make it twice and miss it once, and that's exactly how it played out."
Jay then apologizes to Skip Bayless, I guess for being too "mathy". He should be sorry not for using math, but for using math poorly: if each attempt is roughly independent, the chance of converting the third try is still about 66% no matter how the first two turned out. That is the gambler's fallacy.
Saturday, November 12, 2011
OT: sisu
This is an excerpt from "The Finnish Settlement in the Eagle River Area", a memoir written by my grandmother, Elsa Ahola Bloom, who passed away in the fall of '99. Most of the memoir is published in Eagle River, Wisconsin: Its History & People, but this story, perhaps the most personal part of the brief history, was not included in the revised version.
Early every afternoon it was the duty of us younger children to start out looking for the cows and bring them home. We would travel the old road and follow the cow's tracks to whichever field they chose for their pasture for the day. We always loved these long walks and spent many happy hours bringing the cows home. Wild raspberries grew in patches along the road, and strawberries grew wild in the old fields, and we would pick them during the berry season.
When I was six years old they started to build a road to connect Rhinelander to the nearest city to the south, and Eagle River. This road, now State Highway 17, roughly followed the old wagon trail, and when completed, went right past our house and on to Eagle River. The county line, separating Oneida and Vilas County, was about a mile to the south of our home, and that portion of the road to the south was completed about a year before the road past our house was ready. Our cows would still take the old path through the woods but now they would end up on the completed portion of the road and cut off that to the old fields, or they would continue straight through to Camp Six [an abandoned logging camp] to pasture there.
One beautiful afternoon, in late summer, when the raspberries were ripe, I had an experience that I believe helped to mold the pattern of my life. My brother who was two years older than I, and I were walking along the completed portion of the road on our way to get the cows which this time seemed to be going straight along the road to Camp Six. We walked slowly along, pausing now and then to throw rocks at the chipmunks that sat on stumps alongside the road eating raspberries.
We had just rounded a turn in the road when about a quarter of a mile ahead of us we saw a large black bear cross the road. Black bear were scarce in those days, and I had never seen one before and had no idea what it was. My brother, who had seen one once when he stayed one summer at our uncle's farm in Michigan, recognized it for what it was. I was a little afraid to go on, but he assured me it would be gone into the woods long before we got to where it crossed. We continued on our way. As we walked along, I recall my brother telling me about how our uncle and some other men had shot and killed a bear in Michigan. We saw no further sign of the bear and took it for granted he was gone. We got to within a short distance of where we figured he had crossed when about a hundred feet away we saw him on the side of the road where he had stopped to eat some of the wild raspberries that grew alongside the road. He evidently saw or scented us at the same time for he came out of the ditch and out into the road. He got to the center of the road and stood up on his hind legs facing us. We stopped dead in our tracks.
"What can we do?" I whispered to my brother. He told me we must not turn and run because the bear might take after us. After several frantic seconds of speculation, my brother said he believed if we acted like we weren't afraid we could probably scare him off.
"When I say 'go' we will both start yelling at the top of our voices," he said, "and run right at him." I will never forget my terror as I waited for that word "go."
As my brother gave the signal, we started straight for that bear as fast as we could run, our hands waving in the air, and both of us yelling as loud as we could. After a moment of hesitation that bear turned, gave one great leap and went crashing off into the woods. We continued on our way, laughing shakily and agreeing that we had done the only thing we could do. In all the years following we never did see a bear again.
The grass of many years has grown over the grave of my brother, and I never did know what effect our adventure had on him. For me, though, whenever the clouds seem too dark, and the burdens of life too hard to bear, or when I face a seemingly unsurmountable problem, my thoughts travel back through the years and I see once more those two barefooted, terror-stricken children running as fast as they could straight toward what to them was the greatest danger, and I know that I can face whatever comes and win.
Elsa's oldest son and a stroke survivor: "Never give up"
Sunday, November 6, 2011
Symbolic links rock
I've used symbolic links (symlink or soft link) in the past, but recently I've started to realize their potential in two ways.
First, I keep almost all of my files in version control repositories, but I often want to use those files in a project that is not hosted in the repository. With centralized version control I was often copying files between my repo and source trees until I realized I can make a symbolic link between the file in my checked-out repository and where that file belongs in my source tree. Now I can work directly in my source tree and not have to remember to make copies to my repository; I just go to the repository to commit/update, and the symlinks automatically propagate the file data.
Second, when testing an improvement to a program I run an unmodified and modified program using the same input data. Sometimes I have to vary the input data, for example to cover a bunch of test cases or to do random input testing. Managing the input data sets can become tedious. One way to make my life easier is to generate the input data in a single location and make symlinks to the input data for each of my programs. Then a script easily can create input data, run the test programs, archive the input data and results, and repeat.
Saturday, October 29, 2011
GSOC Mentor Summit
This summer I mentored for the RTEMS project as part of the Google Summer of Code (GSoC) and was invited to attend the mentor summit, which is an unconference with mentors from the projects that participated in GSoC. Bringing together representatives from across open source communities is one of the benefits of the GSoC program: Many shared interests exist among the different organizations that participate in GSoC, and the social interactions and networking opportunities are nice.
I organized a session on open source in space, and I also got a lot out of the sessions on Women and new contributor outreach, Non-Profit Infrastructure for Software Freedom, the Open Source OS Summit, and Aging Project Infrastructure Evaluation. The latter two were technical whereas the session on space focused on raising awareness of the contributions to space science that FOSS makes. The session on contributor outreach gave advice on how to be more inclusive and welcoming to outsiders (including women, who are underrepresented in FOSS communities), and the non-profit session discussed the why and how of making an open source software project benefit from charitable donations.
Spacecraft Flight Software Workshop
MMS: a NASA mission that will fly RTEMS
As one of the few RTOSs that run on radiation-hardened platforms, including the LEON and RAD750, RTEMS enjoys a share of space missions both in the US and abroad. Among the trends at FSW 2011 was an increase in the number of RTEMS-related talks over prior years: Open source does well in a recession.
I got to see a little of how space users are using RTEMS, what they care about, and where development should proceed. Besides V&V, which are important to the FSW community, the trends in RTOS use appear to be toward encouraging reusable software components (mainly through OSAL), flexible runtime support for dynamic execution such as scripting and dynamically linked code, memory protection, improved storage, autonomy, and simulation. Among these, RTEMS will soon be supporting dynamic code loading thanks to Chris Johns, and I am investigating memory protection. With respect to storage, one of the talks described a FAT32 flash filesystem for VxWorks and RTEMS. Autonomy is important for deep-space missions in which communication with ground support is delayed (or obstructed), thus preventing real-time interaction.
All in all, I thoroughly enjoyed the workshop for both its technical and social aspects. I met a lot of smart, kind people who work hard to advance space science and exploration.
Wednesday, September 21, 2011
Clobbering a memory leak
I spent today dealing with a subtle bug that had me slogging through code trying to figure out why my application was running out of memory. Adding more memory allowed the program to run longer, so I suspected a (relatively) slow leak. I added reference counters to ensure that every allocation had a matching free, and the reference counters did not indicate any memory regions were being lost. I should not have been exhausting the available memory, and yet I was.
By checking the return value of free I was able to determine that I was passing free an invalid pointer to a location not on the heap. But I was passing it a pointer to an object I allocated! Something was corrupting the pointer.
Normally I would suspect a stack overflow, but since I was using some inline assembly macros nearby I double-checked them. Sure enough I had forgotten to list one of the clobber registers in one of the macros so the compiler did not know that I was using that register. The register was assigned to hold the argument to free but was overwritten by the assembly code causing the original pointer to be silently leaked. By adding the correct register to the clobber list I fixed the memory leak.
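Here is a minimal illustration of the bug class. I'll use x86-64 rather than sparc64 so the register names are widely familiar, and these functions are contrived stand-ins rather than my actual macros: the first asm scratches a register without declaring it, and the second declares the clobber so the compiler knows.

#include <stdlib.h>

/* BROKEN: the asm overwrites %rax but never says so. If the compiler had
 * kept a live value in %rax, e.g. a pointer that is later passed to free(),
 * that value would be silently destroyed. */
static inline void scratch_reg_broken( void )
{
  __asm__ volatile ( "movq $0, %%rax" : : );
}

/* FIXED: listing "rax" as a clobber lets the compiler keep live values out
 * of that register (or save and restore them) across the asm statement. */
static inline void scratch_reg_fixed( void )
{
  __asm__ volatile ( "movq $0, %%rax" : : : "rax" );
}

int main( void )
{
  char *p = malloc( 16 );
  scratch_reg_fixed();   /* safe: p cannot be lost to the asm scratch */
  free( p );
  return 0;
}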
Tuesday, September 20, 2011
OT: Gas Psychology
xkcd points out that going out of your way to save money on gas is rarely worth the time it takes. The alt text goes further to state that if you drive an average car more than a mile out of the way for each penny per gallon saved, you are spending more on gas than you are saving. I was intrigued, so I did a little number crunching.
Suppose I can get gas for $4.00 per gallon without going out of my way, and that I usually get 10 gallons of gas. Results will fluctuate depending on the base price and the amount of gas purchased. Here is a chart showing the farthest I can drive before I start to lose money.
Max distance to travel before losing money. Each line represents a different average miles per gallon from 10 m.p.g. to 50 m.p.g.
Taking a vertical slice out of the middle we can see how far to drive when the gas prices are 10 cents apart:
Max distance to save 10 cents per gallon
I also noticed that (at 10 gallons of gas and with $4.00 base) the alt text is correct for vehicles that get up to 40 mpg. At 40 mpg the break-even point is almost exactly 1 mile of travel for each penny per gallon saved. For my car, which gets about 20 to 25 mpg, the break-even point is about 1/2 mile for every penny saved.
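For the curious, the arithmetic behind the charts boils down to one line. The function below is my own sketch of it, pricing the detour fuel at the base price, which matches the numbers above closely enough.

#include <stdio.h>

/* Farthest extra distance (miles) that still saves money, assuming the
 * fuel burned on the detour is valued at the base price per gallon. */
static double break_even_miles(
  double base_price,   /* $/gallon without going out of the way  */
  double cheap_price,  /* $/gallon at the out-of-the-way station */
  double gallons,      /* gallons purchased                      */
  double mpg           /* vehicle fuel economy                   */
)
{
  double savings = ( base_price - cheap_price ) * gallons;
  return savings * mpg / base_price;
}

int main( void )
{
  /* One penny per gallon cheaper, 10 gallons, $4.00 base price. */
  printf( "40 mpg: %.2f miles\n", break_even_miles( 4.00, 3.99, 10.0, 40.0 ) );
  printf( "20 mpg: %.2f miles\n", break_even_miles( 4.00, 3.99, 10.0, 20.0 ) );
  return 0;
}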
Sunday, August 21, 2011
multilib
Back when Eugen and I were porting RTEMS to SPARC-V9, we had to consider the compiler tools for cross-compiling RTEMS to the target architecture. One of the issues was that RTEMS uses the multilib feature of GCC for building architectural variants, so we needed to consider the multilib options for SPARC-V9 and submit patches adding support for the desired multilibs to sparc64-rtems-gcc.
What are multilibs?
The GCC manual doesn't say anything other than to identify the flags that use multilib. Google isn't terribly helpful here either. The automake manual has this to say:
18.3 Support for Multilibs
Automake has support for an obscure feature called multilibs. A multilib is a library that is built for multiple different ABIs at a single time; each time the library is built with a different target flag combination. This is only useful when the library is intended to be cross-compiled, and it is almost exclusively used for compiler support libraries.
The multilib support is still experimental. Only use it if you are familiar with multilibs and can debug problems you might encounter.
Hardly helpful.
So we ignored multilibs and moved on. That was fine, although to this day the sparc64-rtems-gcc does not support multilibs. If a tighter integration of the sparc64-rtems-gcc and BSPs becomes important then this issue might be worth re-visiting. But I still had little knowledge about multilibs.
multilibs are...
It turns out that multilib is common, although most end users won't notice it. multilib is the process by which a library is built multiple times, each time with a different set of compiler flags. That means a library can be linked to applications built with similar flags, which may be necessary when the compiler flags change the ABI (application binary interface). An example is compiling legacy 32-bit applications on 64-bit machines: current Linux distros ship GCC with a 32-bit multilib, so the GCC libraries are available for compiling 32-bit applications. On a 64-bit computer without a 32-bit multilib, gcc cannot compile 32-bit programs.
RTEMS adds multilib flags to its rtems-flavored gcc compilers to allow RTEMS to link to the GCC libraries that best support the target hardware. For users of existing board support packages (BSPs), the multilib is invisible: the BSP specifies compiler flags (in make/custom/somebsp.cfg), and rtems-gcc links the right library. However, BSP developers should ensure that critical flags specified by the BSP match an available rtems-gcc multilib, or else a new multilib should be contributed. The gcc flags will pull in a multilib if there is a match in the output of gcc -print-multi-lib.
On the RTEMS multilib page there is also an explanation of how RTEMS can build its own multilib library, librtemscpu.a. Building a multilib RTEMS is mostly useful to save a little time and space when compiling many variants of the same processor family, for example multiple BSPs for the same CPU. Perhaps I'll follow up sometime to discuss RTEMS' librtemscpu.a.
Tuesday, August 9, 2011
Red Black Trees: Bottom-Up or Top-Down?
Conventional red-black trees are bottom-up: nodes are inserted/deleted at the bottom and coloring properties are corrected up. Last year I found a tutorial describing a top-down approach and claiming it is superior in terms of simplicity, ease of understanding, code size, and theoretical efficiency. Unfortunately these claims are unsubstantiated. Being curious, I investigated further. I implemented an iterative bottom-up red-black tree (with parent pointers) and compared it with the top-down approach.
Since complexity increases with code size, I used sloccount to measure the code size. For the top-down approach, the Total Physical Source Lines of Code (SLOC) = 131. For bottom-up, SLOC = 189. So far the evidence does favor a top-down approach for being less complex.
But what about efficiency? (Speed is King!) The tutorial claims that a top-down approach makes a single pass of the tree and so is theoretically more efficient.
I wrote a simple testing program that generates and inserts (pseudo)random numbers as keys in a red-black tree and then removes them. The program takes wall time measurements with gettimeofday before and after both blocks of insertion and removal. Using wall time is crude but serves for this demonstration. By dividing the total time of insertion (removal) by the number of keys I compute the average cost per insertion (removal). I ran the testing program 10 times and averaged the per operation costs for top-down and bottom-up for an increasing tree size from between 100 and one million nodes. The following two images show the results (smaller is better):
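For reference, the harness looked roughly like the sketch below. rbtree_insert and rbtree_remove are assumed names standing in for whichever implementation (top-down or bottom-up) gets linked in; the real program differed in its details.

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

extern void rbtree_insert(long key);   /* assumed interface of the tree under test */
extern void rbtree_remove(long key);

static double elapsed_us(struct timeval *a, struct timeval *b)
{
  return (b->tv_sec - a->tv_sec) * 1e6 + (b->tv_usec - a->tv_usec);
}

int main(int argc, char **argv)
{
  size_t n = (argc > 1) ? strtoul(argv[1], NULL, 10) : 100000;
  long *keys = malloc(n * sizeof(*keys));
  struct timeval t0, t1;
  size_t i;

  /* Pseudorandom keys, generated up front so key creation is not timed. */
  for (i = 0; i < n; ++i)
    keys[i] = random();

  gettimeofday(&t0, NULL);
  for (i = 0; i < n; ++i)
    rbtree_insert(keys[i]);
  gettimeofday(&t1, NULL);
  printf("avg insert: %f us\n", elapsed_us(&t0, &t1) / n);

  gettimeofday(&t0, NULL);
  for (i = 0; i < n; ++i)
    rbtree_remove(keys[i]);
  gettimeofday(&t1, NULL);
  printf("avg remove: %f us\n", elapsed_us(&t0, &t1) / n);

  free(keys);
  return 0;
}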
The top-down approach has a worse average time for both inserting and removing nodes. I did not bother with statistical rigor, but the gaps were larger than a few standard deviations of the measured data.
Although my experiments were far from complete I opted to use the more established, more familiar, and (apparently) more efficient bottom-up approach in an implementation of red-black trees that is now part of the RTEMS kernel.
If I had to guess why the top-down approach performs worse I would say it is pessimistic in how many tree rotations it does. The bottom-up approach seems to make about as many tree rotations as there are nodes. (I believe the number of rotations is bounded to 2 for insert and 3 for remove, but on average it appears to be 1/2 rotation per insert/remove.) The top-down approach makes about 2-4 times as many (albeit simpler) rotations, and I have no idea if the bounds apply.
Wednesday, July 20, 2011
Principles, principals, and protection domains
Recently, I've had some time to think about computer security as I wrap up a book chapter that I'm co-authoring on the topic. I recall my initial exposure to three fundamental security concepts: the principle of least privilege, principals, and protection domains. These three concepts appear in practical computer security, have been around for a long time (longer than me), and are intricately related. This post explores the relationship among the three as they relate to one aspect of my research.
The classic paper by Saltzer and Schroeder [1] was my first introduction to these concepts; I highly recommend this paper for anyone even slightly interested in computer security. Butler Lampson's pages on Protection [2] and Principles [3] also have good information.
Let's start with definitions [1].
Principle of Least Privilege (POLP): Every program and every user of the system should operate using the least set of privileges necessary to complete the job.
Principal: The entity in a computer system to which authorizations are granted; thus the unit of accountability in a computer system.
Protection domain: The set of objects that currently may be directly accessed by a principal.
The logical conclusion of the POLP is that every principal should be as small (fine-grained) as is feasible, and every protection domain should be minimal for each principal. But in general as things get smaller and more numerous, managing them gets harder.
Lampson has argued against the POLP. His argument is that if "the price of reliability is the pursuit of utmost simplicity," as Hoare famously said, and if reliability is required for security, then enforcing the finest granularity of privileges is wrong because it introduces complexity thus reducing reliability.
So an open challenge in computer security is supporting fine-grained privileges without introducing complexity (or overhead).
Privileges are inherent to both principals and protection domains. In *nix, the "user" (account) is the principal. All processes having the same user can access the same persistent objects, but not temporary objects outside of process context. So files are accessible with user rights, but not per-process open file descriptors. So who is the principal—the user or the process?
I think it is really about multiple "contexts" within a single computer system. In one context, the principal is the user, and the protection domain is the set of persistent objects that are managed by the OS (files, programs). In another context, the principal is the process, and the protection domain is the set of objects "in use" by the process (file descriptors, process address space). These two contexts are muddled by interfaces like /proc, which allow users to access process context, and also because a process has the user's privileges when accessing persistent objects.
By thinking in terms of multiple contexts that align with principals, I can more easily think about how user/process principals fit with other principals, such as the coarser-grained machine principal seen in network protocols or finer-grained principals like threads, objects, or even procedure invocations. Each principal has an associated protection domain and can co-exist with overlapping principals.
Pushing the POLP toward its limits is one aspect of joint work I have done: hardware containers [4,5] support procedure invocations as a fine-grained principal, and we have argued that software can reasonably manage permissions to establish tightly-bounded protection domains.
[1] J. H. Saltzer and M. D. Schroeder, “The protection of information in computer systems,” Proceedings of the IEEE, vol. 63, no. 9, pp. 1278-1308, 1975. Available at: http://www.cs.virginia.edu/~evans/cs551/saltzer/
[2] B. Lampson. Protection. Proc. 5th Princeton Conf. on Information Sciences and Systems, Princeton, 1971. Reprinted in ACM Operating Systems Rev. 8, 1 (Jan. 1974), pp 18-24. Available at: http://research.microsoft.com/en-us/um/people/blampson/08-Protection/WebPage.html
[3] B. Lampson, "Practical Principles for Computer Security," in Software System Reliability and Security, Proceedings of the 2006 Marktoberdorf Summer School. Available at: http://research.microsoft.com/en-us/um/people/blampson/74-PracticalPrinciplesSecurity/74-PracticalPrinciplesSecurityAbstract.htm
[4] E. Leontie, G. Bloom, B. Narahari, R. Simha, and J. Zambreno, “Hardware Containers for Software Components: A Trusted Platform for COTS-Based Systems,” in Computational Science and Engineering, IEEE International Conference on, Los Alamitos, CA, USA, 2009, vol. 2, pp. 830-836. Available at: http://home.gwu.edu/~gedare/publications.html#
[5] E. Leontie, G. Bloom, B. Narahari, R. Simha, and J. Zambreno, “Hardware-enforced fine-grained isolation of untrusted code,” in Proceedings of the first ACM workshop on Secure execution of untrusted code, Chicago, Illinois, USA, 2009, pp. 11-18. Available at: http://home.gwu.edu/~gedare/publications.html#
Friday, July 15, 2011
Writing and Reading
I've been writing a lot lately, and thinking about writing, and reading about writing, and now writing about writing; I'll spare you from any writing about thinking.
As I (re)learn more about proper English, I find that poor usage now grates on me. My ability to read efficiently suffers because I am now aware of improper writing—I keep reaching for a pencil to make edits!
I only hope I can learn to choose whether or not to read with a critical eye. Ignorance is bliss.
Sunday, July 10, 2011
Properly attired yogurt is delicious
In the grocery store today I impulsively bought some plain yogurt and a box of granola (oats & honey) with the idea—surely not original—to combine them with some blueberries and a bit of strawberry jam. The result is a delicious summertime snack that I am enjoying as I write.
The recipe is simple:
- (Optional) Mix a dollop of jam with plain yogurt.
- Combine equal parts yogurt, berries, and granola.
- Eat.
Saturday, July 9, 2011
I can has Knuth?
As I was working on my thesis, Misty decided my computer was a nice place to nestle. Then she decided TAOCP was interesting; after all, I was reading from it.
Tuesday, June 28, 2011
OT: TAL and WI
Acronymonious title to indicate I want to say a few words about the latest episode of This American Life, a public radio show that posts weekly 1-hour episodes to its website. The format of a TAL episode is a series of stories from around the country that relate to an overall theme. Each current episode can be downloaded as an mp3, and all episodes can be streamed.
This week's episode is themed from the phrase, "A house divided against itself cannot stand," which was used famously by Lincoln and appears to originate from Matthew 12:25. I downloaded and listened to this episode during my bike ride in to work today.
One of the stories this week is about the dissent, even ferment, in Wisconsin over the political debate raging about whether a state should forbid unionization of its employees. Activists in WI have triggered recall elections in the state, and the whole debacle has caused a rift and elicited behavior that is foreign to the WI that I know.
I found the episode to be enlightening, and encourage anyone with a half-hour to kill to listen to the first half of the episode in which the story is located. If you have more time, listen to the whole episode. The debate in WI is a microcosm of the larger debate of big vs little government. I do not care to discuss that here, and the radio show did a fair job of avoiding the subject by focusing on the people involved and interviewing both sides.
The notion that political activism can so quickly and deeply divide a community is amazing, although surely not unprecedented. I was taken aback when district 12 was highlighted: State Sen. Holperin has represented my home town in one form or another for a long time. I went to school with one of his children. It is odd to hear about the rural community of my youth in a national broadcast. The behavior that is discussed by Kim (I also knew one of her children, though he was ahead of me in school) about her treatment is astonishing.
I suppose the reactions are not that surprising, considering that the legislation affects the lives of so many, especially of beloved teachers. In a state that has little experience with such political strife, the division has polarized people who are otherwise good and reasonable folks.
It is saddening that party lines appear to supersede common sense time and again. Will the house soon stand again?
Tuesday, June 7, 2011
When to make thesis slides
I've been working on my thesis proposal for a while now, and I even took one attempt at writing it, which turned out to be a bit premature. My advisor suggested that I consider making my proposal defense slides before writing the proposal. I have now finished my first iteration on the slides and found it to be a most useful exercise. The content of my slides covers what I plan to address in my proposal, although the presentation still needs a unifying story to tie everything together. A presentation should have a strong motivation and story because it is important to hook the audience in the first couple of slides and to keep them engaged throughout.
My feeling is that if I started by writing the proposal, I would end up with a sub-par narrative that would translate poorly to a presentation. I usually start with an outline, then fill in the technical details, results, and conclusion, and only then write the background material: introduction, motivation, and related work. Making the presentation slides first will allow me to follow a single story throughout the proposal.
The act of thinking about how to present my work makes me consider the organization and narrative more deeply than I do when I simply sit down to write. By expending more energy on the why and less on the how, I can focus on a narrative that puts my proposed work in context and motivates my contribution. After I finish the slides, I will write my proposal using them as a guideline. This approach should yield a concise, readable proposal whose flow is consistent with the presentation.
I also expect that most of my slides will be reusable for the thesis defense (and potential job talks), since the motivation and high-level ideas will be largely unchanged. Although some details and results will change, these will amount to a small delta in the presentation that will be easy to update.
Friday, May 27, 2011
Evaluating Research by Squeezing
John Regehr writes about squeezing a research idea, an exercise for evaluating its potential. A squeeze is like a feasibility study; an idea can also be squeezed after the fact to determine its value or to identify future improvements. The basic idea is to bound the performance improvement the research idea can deliver.
To squeeze an idea you ask:
- How much of the benefit can be attained without the new idea?
- If the new idea succeeds wildly, how much benefit can be attained?
- How large is the gap between these two?
A good baseline, or control in more classical terms, yields the first step of a squeeze. Baseline results should represent state-of-the-art solutions that are readily available; comparing a proposed idea to an in-house or contrived baseline rarely makes for good science.
My experience has been that the second step is best accomplished by developing a model that stresses the proposed enhancement. Exploring system behavior in the limit puts the enhancement in perspective, and upper bounds on the usefulness of the idea can be determined by idealizing the constraint parameters of the model. See applications of Amdahl's law for examples of doing this right (and see the small numeric sketch at the end of this post).
For the last step, a metric that is easy to measure, explain, and visualize should be applied to both the baseline and the enhanced system. A good metric for evaluating a squeeze will also be useful later in presenting results and selling the research project.
In systems work, this analysis is often seen when results are normalized to a baseline, such as a benchmark on a stock system. Typical metrics in final results include speedup, throughput (bandwidth), and latency. Casting a squeeze in the same language as the final results eases the transition from pilot studies to experiments to disseminating results.
Squeezing an idea is a way to determine whether going forward with it makes sense. The results of the squeeze will be useful in motivating the contribution of the research idea, and gaps between experimental results and predicted upper bounds point to areas for further improvement.
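To make the second step concrete, here is a minimal sketch of idealizing a constraint parameter in the spirit of Amdahl's law. This is my own toy illustration, not something from Regehr's post, and the 80% figure is an arbitrary assumption:

```cpp
#include <cstdio>

// Amdahl-style bound: if a fraction p of baseline run time is affected by the
// enhancement, and that fraction is sped up by a factor s, the overall speedup is
//   speedup(p, s) = 1 / ((1 - p) + p / s).
// Idealizing the enhancement (s -> infinity) yields the upper bound 1 / (1 - p).
static double speedup(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}

int main() {
    const double p = 0.8;  // hypothetical: the idea can touch 80% of run time
    std::printf("s = 2:        %.2fx\n", speedup(p, 2.0));   // modest enhancement
    std::printf("s = 10:       %.2fx\n", speedup(p, 10.0));  // aggressive enhancement
    std::printf("s = infinity: %.2fx\n", 1.0 / (1.0 - p));   // idealized bound (5x)
    return 0;
}
```

If measured results land well below the idealized 5x, the gap itself identifies where further work, or a revised model, is needed.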
Monday, April 25, 2011
RTEMS GSoC 2011
Congratulations to the students accepted to participate in the GSoC 2011 with RTEMS as their mentoring organization!
Student: Christophe Huriaux
Project: Implementation of the ISO9660 filesystem
Goal: Implement the ISO9660 filesystem so that RTEMS can read CD, DVD, and similar media.
Student: Ricardo Aguirre Reyes
Project: POSIX Timing Tests
Goal: Implement a suite of tests to time POSIX routines.
Student: Cui Xiang
Project: POSIX Compliance Test Suite (misnomer)
Goal: Implement a suite of tests for filesystem testing.
Student: Petr Benes
Project: Porting of resource reservation framework to RTEMS executive
Goal: Port the FRESCOR resource reservation framework to support RTEMS.
Student: Quanming Shi
Project: RTEMS MMU Context Support
Goal: Implement an architecture-independent API and structure to manage memory blocks with access attributes based on MMU support.
Student: Scotty Smith
Project: Lua Scripting and Shell Support in RTEMS
Goal: Add Lua as a library in RTEMS and provide a Lua-based shell as an alternative to the current shell.
Student: Zhang Wenjie
Project: Hypervisor for RTEMS
Goal: Implement virtualization support in the RTEMS kernel and demonstrate Linux running on RTEMS.
Student: Jie Liu
Project: RTEMS port of the GNU Java Compiler
Goal: Support compiling Java applications for RTEMS with GCJ (at least for x86).
Monday, March 21, 2011
Traps, Interrupts, and Exceptions in GEMS
Today someone on the gems-users mailing list asked, "How does GEMS simulate traps?" I gave my best answer, and the topic is a good prelude to an upcoming post I'm planning.
GEMS is a computer hardware simulator, and I have discussed it previously. Simics/GEMS is primarily a three-headed monster: Simics, Opal, and Ruby. Simics+Opal implement the processor simulator, and Ruby implements the memory hierarchy for multicore platforms. My work primarily uses Simics+Opal, without Ruby. Opal is a "timing-first" simulator based on TFsim. It implements most of the processor model, but relies on the functional simulator (Simics) to verify simulation correctness and to provide some of the harder-to-model processor features.
One of the harder-to-model features of modern processors is the exception/interrupt handling mechanism. In the SPARC-v9 architecture, these are both referred to as traps. Opal models trap handling for a subset of the possible traps, including register window traps, TLB misses, and software interrupts. All trap handling is done in the retire stage of the instruction window, which allows speculation past traps. Non-modelled traps in Opal, for example I/O, rely on Simics to provide the functional implementation of taking the trap and updating the architected state. Modelled traps simulate the trap handling algorithms and should result in the same architected state as functionally correct trap handling.
Modelled traps improve simulation accuracy. In-flight instructions are squashed, the program counter (and other register state) is properly updated, and Opal continues to execute the workload. Simulator correctness depends on the ability of Opal to generate the same traps as Simics, and simulator accuracy requires Opal to model the effects of taking the trap.
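As a rough sketch of how retire-stage trap dispatch might be structured in a timing-first simulator, consider the following. This is my own illustration with hypothetical names; it is not taken from the Opal sources.

```cpp
// Hypothetical retire-stage trap dispatch for a timing-first simulator.
// All names and structure are illustrative; they do not match Opal's code.
enum class TrapKind { RegisterWindow, TlbMiss, SoftwareInterrupt, IoInterrupt, Other };

struct RetiringInstruction {
    bool raises_trap;
    TrapKind trap;
    unsigned long long trap_handler_pc;  // entry point of the trap handler
};

class TimingCore {
public:
    void retire(const RetiringInstruction& inst) {
        if (!inst.raises_trap) { commit(inst); return; }

        // Traps are acted on only at retirement, so the pipeline is free to
        // speculate past potentially trapping instructions.
        squash_younger_instructions();

        if (is_modelled(inst.trap)) {
            // Modelled trap: the timing model redirects fetch to the handler and
            // updates architected state itself; the result should match what
            // functionally correct trap handling would produce.
            redirect_fetch(inst.trap_handler_pc);
            update_architected_state_for_trap(inst);
        } else {
            // Non-modelled trap (e.g. I/O): let the functional simulator (Simics)
            // take the trap and update architected state, then resynchronize the
            // timing model with that state.
            ask_functional_simulator_to_take_trap(inst);
            resync_with_functional_state();
        }
    }

private:
    static bool is_modelled(TrapKind k) {
        return k == TrapKind::RegisterWindow || k == TrapKind::TlbMiss ||
               k == TrapKind::SoftwareInterrupt;
    }
    // Stubs standing in for the real machinery.
    void commit(const RetiringInstruction&) {}
    void squash_younger_instructions() {}
    void redirect_fetch(unsigned long long) {}
    void update_architected_state_for_trap(const RetiringInstruction&) {}
    void ask_functional_simulator_to_take_trap(const RetiringInstruction&) {}
    void resync_with_functional_state() {}
};
```

Either path squashes speculative work; the difference is whether the timing model or the functional simulator is responsible for producing the post-trap architected state.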
That's all for now. In the near future, I will be expanding on this topic to discuss how to add new traps to Simics/GEMS.
Saturday, March 19, 2011
Information Warfare
I had the pleasure of helping out at the 6th International Conference on Information Warfare and Security (ICIW 2011) and enjoyed re-engaging my brain's security neurons. This year's ICIW was quite diverse, with a large number of countries represented and participation from both academia and government. All in all, I thought it was a great conference on information operations.
Although not my core area of research, information operations (or "warfare") is an area of applied computer security that also includes aspects of the social sciences and psychology. In truth, information warfare is legitimized (state-sponsored) hacking.
Something that struck me was that the US appears to be lagging behind some other countries with respect to organizing and training information operators. It is widely believed that China has a hierarchy of hackers organized somehow within its armed forces or governmental regime. I also learned that Estonia has a volunteer "cyber militia", which I guess would be something like Minutemen hackers.
I know that various US government branches recruit students with technical skills, primarily via scholarship for service programs. I'm also given to understand that armed forces assign personnel to information operations work, but I think this is mainly in using regular IT infrastructure. The technically challenging work is relegated to contracting firms, who may employ white/gray-hat hackers as penetration testers and similar.
I don't believe much work is done in engaging the hobbyists and those with technical hacking abilities but without formal technical education beyond high school. In fact, it is hinted that many talented individuals end up in the black-hat community simply because they lack higher education.
Given the untapped talent pool out there, I wonder if the future holds the possibility for cyber-militia (a la the National Guard) and a new branch of armed forces aimed directly at recruiting and training cyber-infantry? If the current trends in information warfare continue, it seems likely that such skills will be required. They would also be transferable to the private sector, which I think is an interesting thought to ponder.
Friday, February 4, 2011
The Future (?) of Computing
Making predictions is a questionable exercise at best. However, it is important to look forward and try to see what is coming down the road. So I'm going to give my perspective on where I think the (electronic) computing industry is headed, along with an overview of where it has been.
the past
The computing industry as we know it was born out of the government and academic projects that sought to improve on the ENIAC machine by enabling stored-program computing. The EDSAC project showed that a stored-program computer was possible, and its complex control logic motivated the microprogrammable control unit. Many subsequent developments in computer architecture can be traced back to the philosophy and goals of microprogramming:
- Reduce the complexity of control. Regular hardware structures, interconnects, and single-cycle microinstructions are paralleled in the development of RISC architectures and modern CMP (multicore) designs.
- Improve programmer productivity. The microprogrammed control store translates assembly (the dominant language of the 50s and 60s) into sequences of machine operations. This translation mechanism leads to increased use of complex instructions and resultant CISC architectures.
- Compatibility across machine generations. By implementing a language interpreter in microcode, an assembly language can stay fixed while the underlying machine changes. Binary compatibility and bitslicing result from interpreted assembly and lead to ISA families. This use of microprogramming can still be seen in the Intel x86 architecture.
- Extract hardware parallelism. New hardware resources can be used in parallel by packing more operations into (wider) microinstructions, a technique known as horizontal microprogramming. Issuing multiple parallel operations this way leads to VLIW architectures. VLIW research drove compiler technology, because the compiler is responsible for determining which parallel instructions to pack into a VLIW instruction word. Intel brought VLIW to market with the Itanium and its IA-64 ISA, but it has not seen much success.
the present
In 1995, the memory wall was identified as the performance bottleneck caused by processing speed improving faster than off-chip peripheral performance, especially that of memory. It seemed that the "free lunch" of faster software due to better hardware would go away. However, computer architects brought to market techniques that mitigate memory access latencies: out-of-order (OoO) execution, caching, prefetching, and simultaneous multithreading (SMT). These are reasonably well explained on Wikipedia, although that is no replacement for an authoritative source such as Hennessy and Patterson.
In the 2000s, the power wall (see Hennessy and Patterson, 2009) emerged as a direct result of packing more transistors into chips. Enter the multicore. By using simpler pipelines in parallel, computer architects can keep clock frequencies and feature density low while still utilizing the available transistors. The main approach to exploiting parallelism is the chip multiprocessor (CMP), also called the multicore microprocessor or simply multicore. Intel, AMD, Sun Microsystems, IBM, and others have brought multicore computing to the public, and the paradigm seems poised to continue for at least the next generation of processor families.
the future
My opinion is that, barring unforeseen technology, the multicore era may herald the end of the "general purpose processor." If (when) the cloud computing paradigm catches on, the need for powerful desktops will be reduced and, therefore, the market factors that drive powerful general purpose processors will decline. Special-purpose processors will appear that serve niche markets. Already this is seen in the form of GPGPU-CPU hybrids, which serve the substantial videogaming and streaming video markets. Because not all application spaces can benefit from SIMD parallelism, other special-purpose hardware will appear.
My belief is that special-purpose hardware will take some form of (re)configurable logic, such as FPGAs, CGRAs, or an as-yet-unknown technology. The performance gap between current reconfigurable hardware and application-specific integrated circuits (ASICs) will matter less for future applications, especially in conjunction with the shift toward cloud computing. Furthermore, integrating reconfigurable logic with hard-wired functionality such as processor cores (e.g., http://www.eetimes.com/electronics-news/4210937/Intel-rolls-six-merged-Atom-FPGA-chips) further narrows the power and performance gaps. I'm not convinced FPGAs are the solution, but I think something similar will become the "general purpose processor" of the future, or at least will have the dominant market share for (non-embedded) end use.
I think the industry will continue to move forward in four broad directions: power-conscious servers, performance-hungry desktops, embedded devices, and thin clients. Servers are likely to continue using the multicore model, since they benefit greatly from thread-level parallelism. Compute-heavy desktops are likely to adopt the multicore and GPGPU models, while also relying on the ability to compose cores for better single-processor performance in applications that benefit from neither thread-level nor SIMD parallelism. Ever smaller, power-conscious platforms will enable processor-based computation in more everyday items, driving the development of concepts such as smart homes. Finally, thin clients will continue to evolve, from mobile platforms such as smartphones and tablet computers to the next generation of "cloud clients." Such clients will benefit from simple low-power processing cores and flexible hardware that can adapt to the specific workload of the client.
I believe the processor architectures of the future will be divergent, and there will be a variety of computer architecture families. Advances in multicore computing will improve servers and compute-heavy desktops. Single processor performance will continue to be important, especially in embedded and thin client markets, but also in compute-heavy desktops. Power will remain an important factor for all processors, but the programming tools to support the variety of processing models will be as important. Uniprocessor programming is pretty well understood, but programming parallel and reconfigurable computers remains hard.
The challenge for computer architects will continue to be to design platforms for servers, desktops, embedded systems, or thin clients that provide (1) appropriate performance, (2) power-consciousness, and (3) support for programmer abstractions.
Saturday, January 1, 2011
OT: Apple Pie
The holidays really give me a hankering for pie. I made some apple pies awhile back after going apple picking, and I took a couple photos.
Although there is no more pie to share, I can share some of the photos and my apple pie recipe. Just click through the jump for some apple pie goodness.