Today, someone on the gems-users mailing list asked, "How does GEMS simulate traps?" I gave my best answer, and the topic is a good prelude to an upcoming post I'm planning.
GEMS is a computer hardware simulator, and I have discussed it previously. Simics/GEMS is primarily a three-headed monster: Simics, Opal, and Ruby. Simics+Opal implement the processor simulator, and Ruby implements the memory hierarchy for multicore platforms. My work primarily uses Simics+Opal, without Ruby. Opal is a "timing-first" simulator based on TFsim. It implements most of the processor model, but relies on the functional simulator (Simics) to verify simulation correctness and to provide some of the harder-to-model processor features.
One of the harder-to-model features of modern processors is the exception/interrupt handling mechanism. In the SPARC-v9 architecture, both are referred to as traps. Opal models trap handling for a subset of the possible traps, including register window traps, TLB misses, and software interrupts. All trap handling is done in the retire stage of the instruction window, which allows speculation past traps. Non-modelled traps, I/O interrupts for example, rely on Simics to provide the functional implementation of taking the trap and updating the architected state. Modelled traps simulate the trap handling algorithms themselves and should result in the same architected state as functionally correct trap handling.
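The split between modelled and unmodelled traps can be pictured as a dispatch at retire time. Here is a minimal C++ sketch of that idea; all the type and function names are illustrative inventions, not Opal's actual code:

```cpp
#include <cassert>

// Hypothetical trap types drawn from the SPARC-v9 set discussed above.
enum class TrapType {
    RegisterWindowSpill,
    RegisterWindowFill,
    TlbMiss,
    SoftwareInterrupt,
    IoInterrupt
};

// Whether the timing model (Opal) simulates this trap itself.
bool isModelled(TrapType t) {
    switch (t) {
    case TrapType::RegisterWindowSpill:
    case TrapType::RegisterWindowFill:
    case TrapType::TlbMiss:
    case TrapType::SoftwareInterrupt:
        return true;
    default:
        return false;  // e.g. I/O: defer to Simics' functional model
    }
}

enum class Handler { TimingModel, FunctionalSimulator };

// Retire-stage dispatch: modelled traps are simulated in the timing
// model; everything else falls through to the functional simulator.
Handler retireTrap(TrapType t) {
    return isModelled(t) ? Handler::TimingModel : Handler::FunctionalSimulator;
}
```

The point of the dispatch is that either path must leave the architected state identical; the modelled path just also charges realistic timing.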
Modelled traps improve simulation accuracy. In-flight instructions are squashed, the program counter (and other register state) is properly updated, and Opal continues to execute the workload. Simulator correctness depends on Opal generating the same traps as Simics, and simulator accuracy requires Opal to model the effects of taking the trap.
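The squash-and-redirect step can be sketched in a few lines. Again, the data structures below are hypothetical simplifications, not Opal's instruction window:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal in-flight instruction record (illustrative only).
struct InFlight {
    uint64_t seqNum;        // program order
    bool squashed = false;
};

struct InstructionWindow {
    std::vector<InFlight> insts;
    uint64_t pc = 0;

    // When a trap is taken at retire for the instruction with sequence
    // number `trapSeq`, squash every younger in-flight instruction and
    // redirect fetch to the trap handler's entry point.
    void takeTrap(uint64_t trapSeq, uint64_t trapVectorPC) {
        for (auto& i : insts) {
            if (i.seqNum > trapSeq) {
                i.squashed = true;
            }
        }
        pc = trapVectorPC;
    }
};
```

Because the squash happens only at retire, instructions younger than a potentially-trapping one can execute speculatively and are discarded only if the trap actually fires.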
That's all for now. In the near future, I will be expanding on this topic to discuss how to add new traps to Simics/GEMS.