Likewise with waveform viewer text enumerations: some simulators allow textual enumerations to be specified for certain signal values. Whilst this is very useful, it also promotes lock-in and is not the best place to implement such functionality. I have always advocated adding signal textual enumerations in test bench code, as this is portable across simulators and, importantly, can be reused in monitors when displaying text reports (e.g. "register field updated to value 0x5 [A_TEXT_ENUMERATION]") as well as in functional coverage. A single point of decoding is thus reused for multiple outputs, which also aids maintainability.
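As a sketch of what I mean, the decode below lives in one place and is shared by the report and the coverage code; the field name and values are invented purely for illustration:

# A single point of decoding, shared by reports and coverage.
# The field name and values here are invented for illustration.
A_TEXT_ENUMERATION = {
    0x0 : 'IDLE',
    0x1 : 'BUSY',
    0x5 : 'DONE',
}

def decode_field(value):
    # fall back to the raw hex value when no enumeration exists
    return A_TEXT_ENUMERATION.get(value, hex(value))

def report_update(value):
    # the monitor's text report reuses the same decode ...
    print('register field updated to value %s [%s]' % (hex(value), decode_field(value)))

# ... as does functional coverage, so there is only one decode to maintain.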
Despite having originally been designed as a scripting language for IC design, Tcl is now a bit long in the tooth, despite many recent updates and a rich set of libraries. Python, Ruby or Lua might be better choices today. One aspect of this example is integrating Python into a simulator and importing the VPI as an API into the test bench and design hierarchy. The raw VPI API is not very Pythonic at all, so I've introduced a small amount of wrapping to make it appear a little more Pythonic. I'm familiar with Python, which is the main reason I chose it for this example. There are other libraries available on the web that have integrated the VPI with Ruby and Python too, so this is certainly not a novel approach. Whilst conceptually it only does the same thing as the commands the simulator offers in its Tcl shell, the advantage here is that we have full use of the VPI to give richer access than most simulators' Tcl would allow, and most importantly the commands are now under end-user control and can be used consistently across different simulators. Using a script in this environment is now an aid to portability, not a hindrance to it.
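To give a flavour of the wrapping, here is a minimal sketch. It assumes a SWIG-generated module named vpi exposing the raw C API (vpi_handle_by_name, vpi_get_value and friends); the exact struct construction and field access will depend on the SWIG interface file used:

import vpi

class Signal:
    """Wrap a raw VPI handle so a signal reads like a Python object."""
    def __init__(self, path):
        self.handle = vpi.vpi_handle_by_name(path, None)

    def get(self):
        # s_vpi_value construction and field access as SWIG might expose them
        val = vpi.s_vpi_value()
        val.format = vpi.vpiIntVal
        vpi.vpi_get_value(self.handle, val)
        return val.value.integer

# Signal('top.dut.status').get() now replaces three raw VPI calls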
The import is done with SWIG, the Simplified Wrapper and Interface Generator. Adding our own scripting shell also means that simulators without a Tcl interpreter now have a scripting interface too, and the same one as all the others. Any code written for this shell can therefore be used across all simulators, increasing portability between vendors and removing lock-in. I have been a Verilator user and advocate for a number of years and have successfully used it many times. It does not come packaged with a shell, so by default any option parsing needs to be done in C++ or Verilog. Whilst it is possible to do this in either, I believe that a script is the best place to handle option parsing, checking, and executing actions based on their values (see the sketch below). Of course, options don't need to be in --option=value format on the command line. XML, YAML, libconfig or a script fragment can be used as well, with a reference to the required test(s) passed in. CSV is not so flexibly extensible and therefore does not lend itself to wider reuse. Text formats are best as they can be concurrently edited; do not be tempted to use something in a binary format, e.g. a spreadsheet, as these tend to require exclusive locks when being edited (merging binary files is not easy in most version control systems), and file revision history is not visible inline via the version control system, i.e. you cannot use git annotate.
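As a sketch of the script-side approach, here is some minimal argparse-based option handling; the option names are illustrative only:

import argparse

parser = argparse.ArgumentParser(description='simulation options')
parser.add_argument('--test', required=True, help='name of the test to run')
parser.add_argument('--seed', type=int, default=1, help='random seed')
args = parser.parse_args()

# checking and acting on values is trivial here, unlike in hand-rolled
# C++ parsing or Verilog $value$plusargs handling
if args.seed < 0:
    parser.error('seed must be non-negative')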
Whilst the performance of the scripting shell is much lower than that of compiled C++, it is important to remember that it should only execute for a small portion of the simulation (the beginning, the end, and possibly some infrequent callbacks in between) and that some of this work would be performed in a script anyway. At this point I would also advocate the use of a profiler (e.g. google perftools) to check that this really is the case. Integrate this instrumentation and add it as a test in your regression test set to check for performance regressions. This will alert you if any change in your test bench or design code has a large effect on performance. It is otherwise easy to leak effective simulation Hertz as poorly performing code is added to the code base, whether in the script, Verilog or C++.
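A minimal sketch of such a performance regression test is shown below; the simulation command and the threshold are placeholders for whatever your environment uses:

import subprocess, time

MIN_HZ = 10000        # effective simulation Hertz we expect to sustain
SIM_CYCLES = 1000000  # cycles the reference test runs for

start = time.time()
subprocess.check_call(['./simulate', '--cycles', str(SIM_CYCLES)])
elapsed = time.time() - start

hz = SIM_CYCLES / elapsed
if hz < MIN_HZ:
    raise SystemExit('performance regression: %.0f Hz < %d Hz' % (hz, MIN_HZ))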
The example also has a simple logging library that allows messages of varying severities to be recorded through an API, with the logging calls available to the Verilog RTL through DPI wrappers. This API is also SWIG'ed and wrapped to be available in Python too. So now we have a unified approach to logging in the entire test: whether in a script, Verilog or C++, the same logging code is used. An error or warning within the test prologue/epilogue, RTL or test bench checking code will therefore be treated and reported the same way.
In Python :
import sys
import message
try:
    dosomething()
except:
    message.error('dosomething() raised ' + str(sys.exc_info()))
In Verilog :
`include "example.h"
if (error) `ERROR("error flag set");
In C++ :
#include "example.h"
if (error) ERROR("error flag set");
The API also facilitates adding callbacks on message events, so we can add a function that logs these messages into a database. This is the subject of a later post.
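A minimal sketch of registering such a callback is shown below; the add_callback registration function is an assumption for illustration, not necessarily the actual API:

import message

def log_to_database(severity, text):
    # e.g. insert the record into a database here (a later post's subject)
    print('callback saw [%s] %s' % (severity, text))

# assumed registration hook; the real entry point may differ
message.add_callback(log_to_database)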
In order to prevent further behaviour causing e.g. a segfault and a crash, we can stop simulation and flush all pending messages when a message of severity ERROR or higher is emitted. This should prevent any eventual crash from losing error messages. Additionally we add a SUCCESS severity level. In order to pass, a test must have one and only one SUCCESS message and no messages of severity ERROR or higher.
Additional functionality allows us to run a test beyond the first ERROR (to see how many more there are) and to adjust the verbosity of the output by silencing less important messages. We can also add callbacks to adjust the verbosity at events within the simulation.
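The pass criterion itself is simple to express. Here is a minimal sketch over a list of (severity, text) records, with an illustrative severity ordering:

# illustrative severity ordering; SUCCESS sits outside it
SEVERITY = {'DEBUG': 0, 'INFO': 1, 'WARNING': 2, 'ERROR': 3, 'FATAL': 4}

def test_passed(log):
    """log is a list of (severity, text) tuples collected during the run."""
    successes = sum(1 for sev, _ in log if sev == 'SUCCESS')
    errors = sum(1 for sev, _ in log if SEVERITY.get(sev, 0) >= SEVERITY['ERROR'])
    # pass requires exactly one SUCCESS and nothing at ERROR or above
    return successes == 1 and errors == 0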
Example Code
The sample code that accompanies these blog posts is verilog_integration at github. The example requires the boost and python development libraries to be installed, and you'll need a simulator. Verilator is the primary target, where you'll need 3.845 or later (because it has the required VPI support). Set VERILATOR_ROOT in the environment or in test/make.inc.
% git clone https://github.com/rporter/verilog_integration
% cd verilog_integration
% cat README
% test/regress -s verilator
Please also note that this is proof-of-concept code. It's not meant to be used in anger, as it has not been tested for completeness, correctness or scalability. It also has a number of shortcomings as presented (some of which I hope to fix shortly). It does, however, show what can be done, and how.