One of the challenges of writing code as part of a large project is being able to actually test your code as you develop it. I've been working on some relatively small thread scheduling projects in RTEMS over the past couple of weeks, and my most recent work has just been really depressing. I wrote ~300 lines of code, but haven't been able to test any of it. I still haven't, but I'm really close now -- my test case compiles, and the system runs without crashing, but I have no idea if it is doing what I want it to. It has taken me probably 40+ hours of coding to get to this state. That is a very frustrating feeling, to code for that long without knowing if what you are writing will work.
So what can be done? One way to validate an algorithm is to implement it in isolation. This works great, unless you need another 10K lines of code to actually exercise your algorithm. Then you want some way to emulate the rest of the system and build a test bench for your code. For me, that would be an environment for implementing RTEMS scheduling algorithms that provides all of the interfaces the scheduler touches, so the algorithm can be driven without booting the whole system. A similar project is LinSched, an infrastructure for trying out Linux scheduler algorithms. This is one of the goals of the GSoC project that I am proposing to do, and I will post more on this later. :)
For now, I'm going to try to validate that my code actually works, and doesn't just "run to completion."