In this post I describe how to add a new scheduler implementation to the RTEMS open-source real-time operating system. The scheduler I use as an example is an earliest deadline first (EDF) scheduler which applies the EDF algorithm to schedule periodic tasks and a FIFO policy for non-periodic tasks. This post demonstrates how to use my modular scheduling framework to extend the scheduling capabilities of RTEMS.
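The core selection rule can be sketched in a few lines of C. This is my own illustrative sketch, not the actual RTEMS implementation; the task descriptor and field names are hypothetical. Among ready periodic tasks it picks the earliest absolute deadline, and only if no periodic task is ready does it fall back to the FIFO order of the non-periodic tasks.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical task descriptor -- field names are illustrative only. */
typedef struct task {
    int          is_periodic;
    uint64_t     abs_deadline;  /* only meaningful when is_periodic */
    struct task *next;          /* ready-list link, kept in arrival order */
} task_t;

/* Pick the next task to run: earliest deadline among ready periodic
 * tasks; otherwise the first (oldest) ready non-periodic task. */
static task_t *pick_next(task_t *ready_head)
{
    task_t *best_edf   = NULL;
    task_t *first_fifo = NULL;

    for (task_t *t = ready_head; t != NULL; t = t->next) {
        if (t->is_periodic) {
            if (best_edf == NULL || t->abs_deadline < best_edf->abs_deadline)
                best_edf = t;
        } else if (first_fifo == NULL) {
            first_fifo = t;  /* list is already in FIFO order */
        }
    }
    return best_edf != NULL ? best_edf : first_fifo;
}
```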
Sunday, December 12, 2010
Saturday, December 4, 2010
Architectural simulators
As a researcher in the areas of operating systems, compilers, and computer architecture, I spend a lot of time dealing with simulators. An inordinate amount of time. Much of this time is spent figuring out how well a given simulator provides:
- Useful timing measurements (cycle-accurate)
- Support for fully functional OSs (full-system)
- Auxiliary features for specific experiments
- Support for useful hardware platforms and instruction sets
- Openness to modifying microarchitectural features
- Availability of technical support
Monday, November 22, 2010
Interrupt Handling with MMU Hardware on an MMU-less OS
This post describes some work that I have only recently gotten to a point where it seems to be correct. It was a long-running project that started around May 2010.
Sunday, November 14, 2010
RTEMS Modular Task Scheduler
As I mentioned in my last post, this past summer I participated in the Google Summer of Code by working on the RTEMS project. I have hopefully ironed out the last few small details that prevented the basic infrastructure that I developed from being accepted into the RTEMS code base. In this post I describe the design of the new scheduler, and also comment on some of the problems that I faced when merging the work. At the end of the post is a list of remaining work for this project.
Pre-existing RTEMS Scheduler
New Modular Scheduler
Scheduler Control Structure
Ready Queue Encapsulation
Per-Thread Metadata
Scheduler Operations Table
Configuring the Scheduler
Merging the New Scheduler
Short-circuit Evaluation
PowerPC overheads
Open Issues
Improper Encapsulation
Interrupt Service Routine (ISR) Latency
Documentation
Configuration
New Technologies
Thursday, July 29, 2010
Retroactive Introduction to my GSOC project
I've been busy this summer, between my GSOC project, my academic progress, and life in general, so I thought I should catalog some of the work that I've done for GSOC 2010. I have previously talked about my work to port RTEMS to SPARC V9. Well, I also had a GSOC project accepted with the RTEMS project, which I have been working on this summer. The remainder of this blog post is excerpted from my project proposal and describes what I have been doing lately. The full project proposal can be read online at: http://code.google.com/p/rtems-sched/wiki/ModularSuperCoreSchedulerProject.
My project is to refactor the RTEMS task scheduler to make it more modular. The current RTEMS scheduler is tightly coupled with the thread management code. This tight coupling prevents easy implementation and testing of alternate scheduling mechanisms. Refactoring the scheduler into a new SuperCore Scheduler Manager enables RTEMS developers (namely, me) to explore new scheduling mechanisms.
When my project is finished, the RTEMS scheduler's internal data structures will be encapsulated in a module that provides a protected interface, such that none of the thread management code directly touches scheduling data.
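One way to picture the encapsulation is a table of function pointers behind which each scheduler implementation hides its ready-queue structures; the thread management code dispatches through the table and never touches the data directly. The names below are my own illustration, not the actual RTEMS interface, and the trivial single-slot policy exists only to show the shape of the idea.

```c
#include <stddef.h>

typedef struct Thread Thread;   /* opaque to callers of the scheduler */

/* Hypothetical operations table; one instance exists per policy. */
typedef struct {
    void    (*block)(Thread *t);    /* remove t from the ready structures */
    void    (*unblock)(Thread *t);  /* (re)insert t */
    Thread *(*schedule)(void);      /* choose the heir thread */
} Scheduler_Operations;

/* Thread management code dispatches through the active table and
 * never reaches into a scheduler's internal data. */
static const Scheduler_Operations *active_scheduler;

static Thread *select_heir(void)
{
    return active_scheduler->schedule();
}

/* A trivial policy: a single-slot "ready queue" hidden behind the table. */
static Thread *the_only_ready_thread;

static void    trivial_block(Thread *t)   { (void)t; the_only_ready_thread = NULL; }
static void    trivial_unblock(Thread *t) { the_only_ready_thread = t; }
static Thread *trivial_schedule(void)     { return the_only_ready_thread; }

static const Scheduler_Operations trivial_ops = {
    trivial_block, trivial_unblock, trivial_schedule
};
```

Swapping in a different policy is then just a matter of pointing `active_scheduler` at a different table.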
Perhaps the most important outcome of my project will be enabling exploration of scheduling for symmetric multiprocessing (SMP) and multicore platforms. An SMP system is a closely-coupled system with multiple CPU chips interconnected with main memory; a multicore system is even more tightly coupled, with multiple CPUs interconnected on a single chip. Although RTEMS currently supports multiprocessing systems, the support is designed for coarse-grained systems. Each processing element (core) runs a separate RTEMS application and executive, and communication is routed through a portion of shared memory. Scheduling decisions are made locally at each core, and tasks are not able to migrate between cores.
Although my project will not provide SMP support itself, a goal of refactoring the scheduler is to facilitate implementing an SMP scheduler, so care will be taken to ensure the interface does not impose any policies that might hinder one. By supporting global scheduling decisions and task migration, RTEMS will be able to balance a processing load across multiple CPUs. Further, unifying the executive across all the cores in the system allows RTEMS to make smarter decisions and reduces the effort of end users in deploying SMP and multicore configurations.
Friday, July 9, 2010
Understanding Energy and Power
Lately I've been looking at power as an evaluation metric for my research. Power consumption has always been an important design concern in embedded and resource-constrained devices. Recently, power has also become important in desktop and server computing environments, at the chip level and the rack level respectively.
Energy and power are related, and the two are so often used synonymously that confusing them is quite easy.
Power, typically measured in (kilo)watts, is a rate of production or consumption of energy. The watt is an expression of joules per second, where the joule is a unit of energy. So power is the rate of change of energy over time. Energy is typically measured in (kilo)watt-hours, which is what shows up on your "power" bill. So you pay for the total amount of energy consumed during the period of your bill, which is actually power integrated over that interval.
A common analogy to help understand the relationship between power and energy is that power is to a flow of water as energy is to a pool of water. The flow can be very slow, dripping even, but can fill a pool given enough time; or the flow can be very fast and fill a pool even quicker. The amount of water in the pool is analogous to energy, and the rate of the flow is analogous to power.
I find it is also helpful to consider the physical equations.
To understand power and energy, think back to middle or high school and recall Ohm's law, I = V / R (equivalently V = I * R), where I is current, V is voltage, and R is resistance. Current is the movement of electric particles, measured in amperes. Voltage is the force required to drive a current between two points, measured in volts. Resistance is opposition to current, measured in ohms. Note that current and voltage are only non-zero if there is a mismatch in the electrical potential between two connected points, and that bad things happen as resistance approaches zero.
Ohm's law is about as far as most people recall (if even that far!), and we haven't yet reached energy or power. We can get some interesting equations by substituting Ohm's law into Joule's first law, P = I^2 * R = V * I = V^2 / R, where P is the power dissipated by a resistor. Power dissipation is a more accurate term than power consumption, although the two are used interchangeably to mean the conversion of electrical energy into some other form, for example heat or sound. The power dissipated by the resistive elements of a circuit is equivalent to the instantaneous power applied to the circuit to generate current through it. For a constant current applied from time 0 to t to a circuit with resistance R, the electric energy dissipated by the resistive elements during that interval is E = P * t.
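As a small numeric check, the relations chain together nicely in code: pick V and R, derive I and P, and integrate constant power over time to get E. This is just a sketch with made-up values.

```c
/* Energy (in joules) dissipated by a resistance R (ohms) driven at a
 * constant V volts for t seconds, chaining Ohm's law into Joule's law. */
static double energy_joules(double V, double R, double t)
{
    double I = V / R;   /* Ohm's law: current in amperes */
    double P = V * I;   /* Joule's law: P = V*I = I*I*R = V*V/R, in watts */
    return P * t;       /* constant power integrated over the interval */
}
```

For example, 12 V across 6 ohms gives I = 2 A and P = 24 W; over one hour that is 86,400 J, which is 24 Wh, or 0.024 kWh on the bill.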
The moral of the story is that power is a rate of transfer of energy, and energy is an accumulation of power over time.
Friday, June 18, 2010
Building and Booting RTEMS sparc64
I've put together a package at http://code.google.com/p/rtemssparc64/downloads/list called buildsparc64.tgz that contains a directory structure with scripts to help build the niagara and usiii BSPs.
There is a README with some simple instructions. After extracting the archive, you'll need to create a link in the build-sparc64 directory to the RTEMS sources on your machine (that contain the sparc64 CPU model and BSPs). Assuming you have the sparc64-rtems4.11 tools installed, you will then be able to build the niagara or usiii BSPs. Then there are scripts in the boot subdirectory to help build a bootable ISO image for either BSP. The scripts are set up to only build and package some of the RTEMS sample applications. Note that only Niagara has been run on open source simulators so far. I have run both BSPs on the Simics simulator, and you can contact me privately for more information.
Once you successfully build the image.iso file, you can follow the instructions at my previous post about the M5 simulator to run the BSP. Simply make a link from the built image.iso file to the /dist/m5/system/disks/image.iso file, and make the changes I mention to the M5 configuration files.
Thursday, June 17, 2010
SPARC V9 (sparc64) RTEMS
I'm pleased to announce that the work I've done together with my colleague Eugen to get RTEMS working on the sparc64 processor architecture has been accepted for inclusion in the RTEMS project. We've been working on porting RTEMS to sparc64 for about 9 months, and have had a functional port for about 4 months.
We started with a basic template from the 32-bit SPARC V7 port of RTEMS, and modified or replaced its code with the appropriate support for the 64-bit SPARC V9. There were quite a few changes, since we started with SPARC V7 and skipped ahead to V9; SPARC V8 falls somewhere between the two ports. From a user-space compatibility standpoint, the processors are not substantially different; however, the privileged processor state is very different in SPARC V9, and even more so in the sun4v variant in which we were most interested. I've shared a Google Doc that contains a lot of our notes from the porting process. We started by focusing on the SPARC Niagara CPU, but have since added support for the UltraSPARC 3 CPU model and should be able to run on most UltraSPARC family processors.
A description of the Board Support Package (BSP) for the Niagara CPU can be found on the RTEMS Wiki at the http://www.rtems.com/wiki/index.php/Niagara page. I'm still working on instructions for running the BSP in the M5 open source simulator, which I hinted at in my previous post about M5.
It was a fair amount of work to clean up the code to submit it to RTEMS, but it will be nice to have the code in the repository. I guess we'll see if anyone else is actually interested in using it! One nice aspect, and I think one of the reasons the RTEMS people accepted the work, is that there are no other 64-bit architectures currently supported by RTEMS. So adding the SPARC V9 processor family provides a good way to test RTEMS' 64-bit capabilities. Eugen and I did not have too much difficulty in this regard, at least from a functionality point-of-view.
Hopefully now that the port is "upstream", I can again focus on my research efforts actually using RTEMS on the sparc64, and not just getting things to work!
Saturday, May 8, 2010
A week in M5
I spent the last week or so playing around with a relatively new simulator called M5. It is a very nice simulator, especially for architecture research. My interest is in using it for both operating system and architecture research. This type of research is difficult to do with most simulators, because fully functional simulators rarely provide accurate timing for evaluating overheads, while timing simulators rarely provide sufficient functionality to run an operating system. What is needed is a full-system simulator that provides fidelity for both functionality and timing.
In the past our research group has used SimpleScalar for architectural research. However, that project is no longer maintained, and the simulator is slowly becoming obsolete with respect to real-world systems. Also, it does not provide full system simulation, although a set of patches by Jack Whitham merges RTEMS and SimpleScalar to provide full system simulation for RTEMS. I tried to use Jack's patches, but I could not quite get everything working. When I sent him an e-mail to ask about it, he suggested that I look at M5 as a more realistic simulator instead. So I did.
M5 provides two types of simulators: Full System (FS) and Syscall Emulation (SE).
Thursday, May 6, 2010
vim Productivity
I use vim. Not because I think it is inherently superior to emacs or any other full-featured editor/IDE, but because I learned it first. A fellow vim user and friend of mine tried emacs for about a semester, and his experience was that the two are roughly equivalent once you learn how to use each, so I stick with vim. The only real difference between the two is the command syntax.
Through working on various projects, I have gathered a few tricks, tips, and features to improve my productivity while using vim. The rest of this post is a semi-tutorial and repository of my vim usage.
Built-ins:
- .vimrc
- Modes
- Cut-Paste
- Find-Replace
- Jumping Around
- Split
- Auto-Format
- Shell commands
- Latex-suite
- Cscope
- changelog
Sunday, April 25, 2010
Rant of the Week (ROTW): Trusted vs. Trustworthy
A common theme in computer security is "trust" -- and the implication that trust is equal to security is prevalent in quite a bit of literature and propaganda. Although Wikipedia isn't the greatest source, it does provide some insight into popular beliefs, and makes for a good vehicle for this discussion. Consider the article on the Trusted Platform Module, which starts with:
In computing, Trusted Platform Module (TPM) is both the name of a published specification detailing a secure cryptoprocessor that can store cryptographic keys that protect information, as well as the general name of implementations of that specification, often called the "TPM chip" or "TPM Security Device"
So what? Follow the links, and we get a little closer to some understanding:
A secure cryptoprocessor is a dedicated computer on a chip or microprocessor for carrying out cryptographic operations, embedded in a packaging with multiple physical security measures, which give it a degree of tamper resistance.
Thursday, April 15, 2010
Scripting with Lua
For about two weeks, I've been planning to put together a way to generate test cases for one of my current projects. The project involves adding new capabilities to the RTEMS scheduler, and to test it I need to create RTEMS applications that spawn various periodic and aperiodic tasks. I already had written two such applications by hand, but I found it tedious to generate the parameters.
At a high level, the parameters are task type (periodic or aperiodic), duration (execution time), period, relative deadline, and release time (a.k.a. phase). To spawn tasks, I defined some arrays in my RTEMS code with length equal to the total number of tasks. These arrays hold the values of the period, priority (relative deadline), release time, and execution time. My code then uses the arrays to generate tasks with the given parameters. This isn't too difficult to write up by hand, but my project also requires that I compute some runtime characteristics of the periodic tasks.
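The parameter arrays look something like the following hand-written sketch; the names and values are illustrative, not my actual RTEMS test code. Total utilization of the periodic tasks (the sum of execution time over period) is one example of a runtime characteristic that gets tedious to compute by hand as the task sets grow.

```c
#define NUM_TASKS 4

typedef enum { PERIODIC, APERIODIC } task_type_t;

/* One entry per task; index i describes task i. Values are made up. */
static const task_type_t task_type[NUM_TASKS]  = { PERIODIC, PERIODIC, PERIODIC, APERIODIC };
static const int         period[NUM_TASKS]     = { 10, 20, 40, 0 };  /* ticks; 0 = n/a */
static const int         exec_time[NUM_TASKS]  = {  2,  4,  8, 3 };
static const int         deadline[NUM_TASKS]   = { 10, 20, 40, 0 };  /* relative deadline */
static const int         release_at[NUM_TASKS] = {  0,  0,  5, 12 }; /* phase */

/* Total utilization of the periodic tasks: sum of C_i / T_i. */
static double periodic_utilization(void)
{
    double u = 0.0;
    for (int i = 0; i < NUM_TASKS; ++i)
        if (task_type[i] == PERIODIC)
            u += (double)exec_time[i] / (double)period[i];
    return u;
}
```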
Tuesday, March 30, 2010
Running your code!
One of the challenges of writing code as part of a large project is being able to actually test your code as you develop it. I've been working on some relatively small thread scheduling projects in RTEMS over the past couple of weeks, and my most recent work has just been really depressing. I wrote ~300 lines of code, but haven't been able to test any of it. I still haven't, but I'm really close now -- my test case compiles, and the system runs without crashing, but I have no idea if it is doing what I want it to. It has taken me probably 40+ hours of coding to get to this state. That is a very frustrating feeling, to code for that long without knowing if what you are writing will work.
So what can be done? One way to validate algorithms is to implement them in isolation. This works great, unless you need another 10K lines of code to actually exercise your algorithm. Then you want some way to emulate the rest of the system and create a test bench for your code -- in my case, an environment for implementing scheduling algorithms for RTEMS that provides all of the interfaces to the scheduler. A similar project is LinSched, which is an infrastructure for trying out Linux scheduler algorithms. This is one of the goals of the GSoC project that I am proposing, and I will post more on this later. :)
For now, I'm going to try and validate that my code actually works, and doesn't just "run to completion."
Tuesday, March 23, 2010
Extensible Data Structures in C
A lot of systems programming is done in C, primarily because it exposes explicit memory addresses, but for other reasons too. However, C has pretty poor language support for many of the helpful constructs of object-oriented languages that improve code re-use and readability, for example generics/templates, polymorphism, and inheritance. So systems programmers have developed a few design patterns to emulate such language constructs in C.
Lately I've been looking at how to design data structures in C that can provide flexible storage for data elements that can be used by different consumers. In particular, I'm looking at how a thread control block can provide storage for different implementations of scheduling systems.
There are three features of C that appear useful for this purpose: void pointers, unions, and typedefs.
union works well when you know ahead of time exactly what types of data the different consumers will use, and you are willing to place all of their declarations within the data structure. The advantages of union are that it is easy to read and efficient, providing multiplexed storage space to different data structures. The disadvantages are that the programmer has to know which field of the union to access, and that the size consumed by the union is equal to that of its largest member.
void pointers are type-less memory addresses, so a programmer can assign the value of the pointer to point to any structure, and later retrieve what was stored. However, as with unions, the programmer must know the type of the structure pointed to by the void pointer in order to use the structure. This usually is not a problem for the scenario I'm investigating.
typedefs provide a way to define new types for structures and primitives. They allow programmers to define opaque types whose representation can change, but whose type can always be checked. They have the advantage over union and void pointers of being able to be type-checked. The way to use typedef to provide extensibility is to combine it with C pre-processor conditional compilation to control which typedef is used. This way, multiple typedefs of the same type can be defined, and only one will be used based on pre-processor defines. However, code that uses the generic typedef may need to provide multiple cases based on the different specific typedefs provided.
I may come back to this topic and provide some examples. I'm playing around with typedefs right now, and like this approach because the ugliness can be hidden and the end result is high-performance code with little bloat in the compiled version.
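In the meantime, here is a rough sketch of all three approaches applied to per-thread scheduler data. The type and field names are hypothetical, not RTEMS code; the point is only the shape of each pattern.

```c
/* Two hypothetical consumers of per-thread scheduler storage. */
typedef struct { int deadline; } edf_data_t;
typedef struct { int priority; } prio_data_t;

/* 1. union: every consumer is declared up front inside the structure;
 *    the size is that of the largest member, and the caller must know
 *    which member is currently live. */
typedef struct {
    union {
        edf_data_t  edf;
        prio_data_t prio;
    } sched;
} tcb_union_t;

/* 2. void pointer: any structure can be attached, but the consumer
 *    must cast it back to the correct type -- no type checking. */
typedef struct {
    void *sched_info;
} tcb_voidp_t;

/* 3. typedef + conditional compilation: exactly one representation is
 *    selected at build time, and accesses remain fully type-checked. */
#ifdef USE_EDF
typedef edf_data_t  sched_info_t;
#else
typedef prio_data_t sched_info_t;
#endif

typedef struct {
    sched_info_t sched_info;   /* concrete, compiler-checked type */
} tcb_typedef_t;
```

The typedef variant keeps the pre-processor ugliness in one header while the code that uses `sched_info_t` stays clean, which matches why I currently favor it.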
Monday, March 15, 2010
Introduction to RTEMS
I will probably have quite a few posts related to RTEMS, so I thought an introduction would be appropriate. I've been doing a lot of work recently on a project with Eugen, a fellow Ph.D. student, to port the RTEMS Operating System (related blog) to the UltraSPARC T1 Niagara, a 64-bit SPARC-v9 processor.
RTEMS is a real-time operating system (RTOS), which means that its operations can be precisely and accurately timed and that it supports applications with strict timing requirements. Classes of such applications range from control systems to streaming data processors. Examples that we see (or don't see) every day are embedded in such things as planes, trains, and automobiles, or multimedia video and audio devices.
RTEMS is notable for a few reasons. First, it's free and open source. Second, it supports a large number of target architectures (platforms). Third, it is in space! Some of the platforms that RTEMS supports are radiation-hardened for outer space, and it is on some of the NASA and ESA equipment floating around up there.
But I don't plan to be a rocket scientist. My interest in RTEMS is for its support of a variety of computer architectures, including the SPARC-v8 architecture. Eugen and I have identified the Niagara as a promising architecture with which to continue our current research direction, and we wanted to have a low-level OS that is small enough to understand. Thus we chose RTEMS and started porting the existing support for SPARC-v8 to the newer SPARC-v9. This has been an ongoing effort for a while, but we have made quite a bit of progress, and we hope to contribute our work back to the RTEMS community.
If you want more info about RTEMS, check out its website, which is newly updated.
Sunday, March 14, 2010
Versioning my work
I've found it helpful to keep around clean and working versions of projects that I edit, especially when I am changing someone else's work. This is suggested behavior for generating patches for open source projects, but I find it useful for both programming and document editing.
Of course, version control (CVS or Subversion) helps, but I don't always want to commit my changes back to the repository. Someday I should also investigate a distributed version control system that lets other users check out my branches. However, most of the projects I work on use CVS and SVN. I also like to extract the changes I've made, in order to keep track of my versions by hand. This is useful when I reach milestones in a project and want to make off-line backups.
Post 0
I've been thinking about starting a blog for a while, and unlike some of my compulsions, I actually followed through this time. Although I've never been much into the "blog" scene, this might be a decent way to write down some of my random thoughts. I'll try to focus on issues related to my work, but I'll probably mix in some life-related postings.
What can you expect to find here? My plan is to post reflections, links to topics that interest me, and tips and tricks related to technology and life. Really this will be a repository for myself to look back and see some of the things that I've found to be interesting. But maybe someone else will benefit, too.
I'll do my best not to belabor points, although I may tend to wax philosophical, and I do enjoy writing. But I know that no one wants to read a lot of useless drivel.