Measuring the execution speed of a simulation, or any other computer program, is not as straightforward as taking a stopwatch and measuring the time from when execution starts until it terminates. Furthermore, the actual implementations of the NAs are used when simulating the model, which has a large impact on the execution speed.
ModelSim includes a performance profiler, which pauses the simulation every x milliseconds and samples which part of the design is being executed. When the simulation run is completed, the distribution of samples between components can be reported. It is not the absolute number of samples taken in a specific component that is interesting, but rather the relative distribution between components. Furthermore, the distribution of samples does not necessarily indicate a difference in simulation performance between components. For example, if a system has two components and roughly 50% of the samples are taken in each, it can be deduced that roughly half of the simulation time is spent in each component. However, it might be that one component is activated only once while the other is activated thousands of times, indicating a huge difference in simulation performance between the two components; this is not reported by the performance profiler. In the test system used here, though, all components are activated equally often. To illustrate this point, the small sketch after this paragraph normalises sample counts by activation counts.
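The sketch below is purely illustrative and uses made-up sample and activation counts, not numbers from the measurements; it merely shows how two components with an equal share of profiler samples can differ greatly in cost per activation.

#include <cstdio>

// Illustrative only, with made-up numbers (not taken from the measurements):
// two components that each receive roughly 50% of the profiler samples can
// still differ enormously in cost per activation if one of them is activated
// far more often than the other.
int main() {
    const int samples_a = 5000, activations_a = 1;      // hypothetical component A
    const int samples_b = 5000, activations_b = 10000;  // hypothetical component B

    std::printf("A: %.1f samples per activation\n",
                static_cast<double>(samples_a) / activations_a);
    std::printf("B: %.4f samples per activation\n",
                static_cast<double>(samples_b) / activations_b);
    return 0;
}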
Apparently, the performance profiler does not report in which part of the SystemC code samples are taken; instead, all samples in SystemC code are reported under a common entry simply called NoContext. It is thus not possible to observe whether the samples are taken in the model or in the SystemC OCP cores, but as will be seen from the results, this is not really relevant.
The table in figure 8.3 shows the number of samples taken in the NAs, the network and SystemC code, and the percentage of the total number of samples taken in user code during the simulation. When simulating MANGO, 16325 samples were taken, 7266 of these, or 45%, in user code. For the model, the numbers were 3631 samples in total and 545, or 15%, in user code. The simulation environment was identical between the two simulations.
              Number of samples       % of samples
              MANGO     Model        MANGO     Model
Initiator NA    312       192          4.3      35.2
Target NA       510       347          7.0      63.7
Network        6440         −         88.6         −
SystemC           4         6          0.1       1.1
Total          7266       545          100       100
Figure 8.3: The distribution of samples between simulations of MANGO and the model. No samples are reported in the network in the model, since the profiler does not report in which part of the SystemC code samples are taken.
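As an illustration of how the percentages in figure 8.3 are obtained, the following sketch recomputes them from the raw sample counts; the numbers are those of the table, and the code itself is purely illustrative.

#include <cstdio>

// Illustrative only: recomputes the percentages in figure 8.3 from the raw
// sample counts, e.g. 312 / 7266 * 100 = 4.3% for the initiator NA in the
// MANGO simulation. No network samples are reported in the model, so that
// entry is skipped there.
int main() {
    const char* names[]   = { "Initiator NA", "Target NA", "Network", "SystemC" };
    const int   mango[]   = { 312, 510, 6440, 4 };  // samples, MANGO simulation
    const int   model[]   = { 192, 347,   -1, 6 };  // samples, model (-1 = none reported)
    const int   mango_tot = 7266, model_tot = 545;

    for (int i = 0; i < 4; ++i) {
        std::printf("%-12s  MANGO %5.1f%%", names[i], 100.0 * mango[i] / mango_tot);
        if (model[i] >= 0)
            std::printf("  model %5.1f%%", 100.0 * model[i] / model_tot);
        std::printf("\n");
    }
    return 0;
}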
As can be seen, the number of samples is significantly decreased in the model compared to MANGO. The drop in the relative number of samples in user code is not readily explained. It may be caused by the faster executing model, which prompts the threads in the OCP cores to be triggered more often when measured in wall clock time. As triggering a thread involves a context change, it is quite expensive and may be the cause of many of the samples taken outside user code. Also, the number of simulation events in the NAs is approximately constant between simulations, as the same flits and OCP transactions pass through, causing the amount of time spent by the simulation engine to handle these events to increase relative to the time spent in user code. However, this is difficult to state as a fact without more detailed knowledge of the implementation of the performance profiler than is given in the ModelSim user manual.
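The following SystemC sketch is purely illustrative and not part of the NA implementation; it indicates why triggering a thread process is more expensive than triggering a method process: the thread is resumed by the kernel through a context switch on every activation, whereas the method is invoked as an ordinary function call.

#include <systemc.h>

// Illustrative sketch only: each time the event 'trigger' is notified, the
// thread process is resumed by the simulation kernel through a context
// switch, whereas the method process is invoked as an ordinary function
// call and runs to completion.
SC_MODULE(activation_cost) {
    sc_event trigger;

    void thread_proc() {
        while (true) {
            wait(trigger);   // suspend here; the kernel must save and later
                             // restore this stack on every activation
        }
    }

    void method_proc() {
        // runs to completion and returns to the kernel; no stack to restore
    }

    SC_CTOR(activation_cost) {
        SC_THREAD(thread_proc);
        SC_METHOD(method_proc);
        sensitive << trigger;
        dont_initialize();
    }
};

int sc_main(int, char*[]) {
    activation_cost dut("dut");
    dut.trigger.notify(1, SC_NS);  // activates both processes once
    sc_start(10, SC_NS);
    return 0;
}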
Another notable difference between MANGO and the model is that fewer samples are taken in the NAs in the model than in MANGO, despite the fact that the same number of flits pass through in both. One possible explanation is that in the model, the entire data input to the NA from the network is set up at the same time, whereas in MANGO individual bits may arrive at slightly different times due to different delays through the standard cells. The model thus produces a single simulation event when setting up data, whereas MANGO may produce up to 32 events, as illustrated by the sketch below. Another possible explanation may be found in the smaller code base of the model, possibly producing fewer cache misses during simulation. However, determining this as a plausible cause also requires more detailed knowledge of how the performance profiler works.
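The sketch uses hypothetical signal names and is not taken from the model; it only contrasts driving the data as a single 32-bit word, as the model does, with driving 32 individual bit signals, as the MANGO netlist effectively does. In the real netlist the bits additionally settle at slightly different times because of the standard cell delays, which is not modelled here.

#include <systemc.h>

// Illustrative sketch with hypothetical signal names.
SC_MODULE(flit_data_driver) {
    sc_signal<sc_uint<32> > word_data;      // model style: one signal for the word
    sc_signal<bool>         bit_data[32];   // netlist style: one signal per bit

    void drive_model_style(const sc_uint<32>& flit) {
        word_data.write(flit);              // at most one value-change event per flit
    }

    void drive_netlist_style(const sc_uint<32>& flit) {
        for (int i = 0; i < 32; ++i)        // up to 32 value-change events per flit
            bit_data[i].write(flit[i].to_bool());
    }

    SC_CTOR(flit_data_driver) {}
};

int sc_main(int, char*[]) {
    flit_data_driver drv("drv");
    drv.drive_model_style(0x00001234);      // example flit value
    drv.drive_netlist_style(0x00001234);
    sc_start(1, SC_NS);
    return 0;
}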
Despite the apparent shortcomings of the performance profiler, it can be seen that the majority of samples taken in the model are taken in the NAs. Compared to MANGO, the NAs' combined percentage of samples has increased dramatically from 11.3% to 98.9%. At the same time, the percentage of samples taken in the network has decreased from 88.6% to at most 1.1%. This is a very dramatic increase in performance in this part of the system, and introducing a high-level model of the NAs should yield a significant increase in overall simulation performance. While the number of samples taken in SystemC code is very small, the expected speedup of a purely high-level SystemC model appears to be on the order of a factor of 1000
compared to simulating the netlists of standard cells. This factor is calculated as the ratio of the number of samples in the network in MANGO to the number of samples in SystemC code in the model, 6440/6 ≈ 1073. However, in the test vectors, only the lower 15 bits are different from '0', which halves the potential switching activity in MANGO, thereby reducing the possible number of simulation events considerably.
The speedup may thus be greater, but the number of samples in SystemC code is too small to conclude a specific speedup; a factor on the order of 1000 is, however, not unrealistic based on these measurements.
Chapter 9
Discussion
This chapter will discuss how to resolve the known issues in the current implementation of the model, how the model may be applied to system level modeling and simulation, and finally how the model may be expanded and improved upon in the future.