Wednesday, June 23, 2021

RiVer Core: A RISC-V Core Verification Framework

InCore and Tessolve announce the availability of our open-source RISC-V core verification tool - RiVer Core. RiVer Core is a Python-based, extensible and scalable framework aimed at providing a central control point for all major aspects of a RISC-V processor verification flow. The tool is fully open source under the permissive BSD-3-Clause license.

The repository is hosted on GitHub.

RiVer requires three major components for verifying a RISC-V core:

  1. A set of tests which need to be run on the target. These could be either random or directed in nature.
  2. A RISC-V target that needs to be tested. This is typically an RTL implementation, but could also include other micro-architectural models.
  3. A Reference model against which checks are performed to determine pass/fail conditions.

Through the RiVer Core framework, you can continue to build and generate new tests using an existing environment and scripts independently of the environment chosen/used by the target or the reference. Similarly, you can easily replace the reference models for different tests depending on the test's requirements. Unlike other conventional frameworks, RiVer takes a more holistic approach and avoids creating any environment tailored to a specific test-suite, target environment or reference environment, thereby allowing use of RiVer in existing environments.

What makes RiVer truly extensible is its approach of keeping the above three components completely independent and decoupled from each other, thereby enabling any combination of test-suite, target and reference model to co-exist effortlessly. RiVer achieves this broad extensibility via its plugin-based approach. RiVer splits the entire flow into three major parts - Test Generation, Running Tests on the Target/DUT, and Running Tests on the Reference Model - and defines specific plugin APIs for each of these parts, thereby enabling easy and early verification bring-up.
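As a rough illustration of this decoupling (the class and method names below are hypothetical, not RiVer's actual plugin API), the three parts of the flow can be thought of as three independent interfaces, so that any generator can be paired with any target and any reference model:

```python
from abc import ABC, abstractmethod


class GeneratorPlugin(ABC):
    """Produces tests plus a test-list describing them."""

    @abstractmethod
    def generate(self, work_dir):
        """Return a list of test entries (name, assembly file, etc.)."""


class DUTPlugin(ABC):
    """Compiles and runs tests on the target, emitting an execution log."""

    @abstractmethod
    def run_test(self, test_entry):
        """Return the path of the target's execution log."""


class ReferencePlugin(ABC):
    """Runs the same tests on a reference model (e.g. an ISS)."""

    @abstractmethod
    def run_test(self, test_entry):
        """Return the path of the reference model's execution log."""
```

Because each side only sees the interface, swapping a generator, target or reference model means swapping one concrete class, not reworking the whole environment.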


Compared to the immense growth of open-source RISC-V designs, the effort in building open-source verification test suites is not as impressive. One effort by RISC-V International (RVI) has been to build an Architectural Test Suite (ATS) (a.k.a. the compliance suite), which serves as a mechanism to confirm that the target operates in accordance with the RISC-V ISA specification. However, the ATS is not a verification suite, as it only checks for basic behaviors and does not test the functional aspects of the processor. Passing the ATS would only mean that the interpretation of the spec is correct; it does not mean that the processor is bug-free.

To fill this gap to a certain extent, there have been a few open-source random program generators like AAPG, RISC-V Torture and MicroTESK. AAPG is a Python-based pseudo-random assembly program generator with a large number of configurable knobs controlled by a YAML file. Torture has a Java-based backend to generate random assembly programs. MicroTESK provides a Ruby-based language to express test scenarios, or so-called templates. The challenge in using all of these generators is that each requires its own environment to configure and produces its own artifacts: linker scripts, header files, libraries, etc.
As one can imagine, in order to include tests from all the above sources, a significant amount of environment hacking would be required. RiVer, on the other hand, makes it easy by allowing each generator to use its own environment and execution sequence while expecting a standardized test-list format, which allows the target and the reference models to consume the generated tests easily. As of today, InCore and Tessolve have jointly provided 4 test-generator plugins, which can be found here. With RiVer you can now focus on open RISC-V verification and not on hacking the test environment.


One would readily agree that in spite of having a single ISA spec, RISC-V implementations vary from each other on a wide range of parameters. This is primarily due to the flexibility provided by the ISA and its lack of micro-architectural mandates. Similarly, the verification environments for targets can differ significantly from each other, to the point of having almost no common components. The points of difference could be: the language of choice for the test-bench, like SystemVerilog or C/C++; the choice of simulator (like Questa or VCS); or the choice of coverage metrics. Targets may also choose to use completely different tool-chains (open or closed) to compile tests. RiVer adapts to these variable requirements through its Python-based plugins, which can abstract away all the nitty-gritty details and provide simple access to the DUT. RiVer is not just open source; it is a truly open-architecture framework, agnostic but fully adaptive to any verification flow.


More often than not, you would like to validate your design against multiple reference models. The reasons could be as simple as improving confidence in your design, or because not all features implemented by the target exist entirely in one simulator. For such scenarios, RiVer ensures that you can easily plug out one reference model and plug in another without affecting the DUT or test-generation parts.

The current RiVer Core flow provides a log-based comparison solution. The target and the reference model are both required to generate execution logs for the same test. If these execution logs match, the test is declared "PASSED"; else it is declared "FAILED". The format of the logs itself is not mandated and can be anything defined by the user, thus providing flexibility.
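Since the log format is user-defined, the comparison itself can be as simple as a line-for-line diff of the two logs. A minimal sketch of such a check (illustrative, not RiVer's actual implementation) might be:

```python
import itertools


def compare_logs(dut_lines, ref_lines):
    """Compare two execution logs line by line.

    Returns ("PASSED", None) when the logs match exactly, else
    ("FAILED", n) where n is the first mismatching line number -
    a useful starting point when debugging a failure.
    """
    for lineno, (d, r) in enumerate(
            itertools.zip_longest(dut_lines, ref_lines), start=1):
        if d != r:
            return ("FAILED", lineno)
    return ("PASSED", None)
```

`zip_longest` ensures that a truncated log (e.g. a DUT that hung partway through the test) is also flagged as a failure rather than silently passing.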

Once the log comparison is done, RiVer generates an HTML report capturing all the runs (generation, target-sim, reference-sim, etc.) and provides a unified, one-stop view into all the logs, thereby easing the debugging of issues. RiVer encourages, but does not mandate, writing plugins on top of the available pytest framework, thereby facilitating safe parallel runs automatically.

From a database and coverage point of view, RiVer also provides an easy way to merge databases of multiple runs which can help filter out and rank tests based on their coverage contribution. Again, the coverage metrics and scripts are all DUT specific and don't really affect the RiVer framework in any other manner.
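One common way to rank tests by coverage contribution is a greedy selection over per-test coverage sets; the sketch below illustrates the idea (it is a generic technique, not RiVer's specific merging script, and the bin representation is an assumption):

```python
def rank_tests_by_coverage(per_test_coverage):
    """Greedily order tests by incremental coverage contribution.

    per_test_coverage maps a test name to the set of coverage bins
    it hits. Returns (ranked, covered): tests that add no new bins
    are left out, which is how redundant tests get filtered from a
    regression.
    """
    covered = set()
    ranked = []
    remaining = dict(per_test_coverage)
    while remaining:
        # Pick the test that contributes the most not-yet-covered bins.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        new = remaining.pop(best) - covered
        if not new:
            break  # nothing left to gain from the remaining tests
        ranked.append(best)
        covered |= new
    return ranked, covered
```

Running this over the merged databases of a regression yields a shorter, ordered list of tests that reaches the same total coverage.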

While the log-based solution discussed above is lightweight and useful during the early days of processor design, a slightly different variant of the flow will be required to carry out a more thorough and accelerated verification effort.


A significant portion of the initial generator and test-dut plugins were developed, validated and contributed by Tessolve. Using RiVer Core, engineers at Tessolve were able to rapidly define test-plans using the different generators and collect ranked coverage databases to further enhance confidence in the stability of the designs. Tessolve has been using this framework to perform regression runs for various RISC-V core configurations generated by InCore's Chromite Core Generator.

The coverage models developed leveraged a deep knowledge of the CPU micro-architecture. The generation of these coverage models is automated using a robust Python infrastructure, which also demonstrates the scalability and flexibility of the core generator. The infrastructure can, with minimal effort, be adapted to alternate micro-architectures. These coverage models can also be easily combined with the structural coverage of standard EDA tools, thereby increasing portability. This coverage flow adds significant value to the confidence in the functional correctness of the core and should add significant value to future projects too.

NEXT IN RiVer Core

Self-checking tests have the advantage of running on FPGA or silicon without much intervention from the host, thereby accelerating verification significantly. Self-checking tests also mean that the test includes certain results which have to be produced by a reference model. One way to achieve this would be to insert specific function calls - which does take a little bit of effort - that calculate a checksum of the architectural state and store it back into a reference-memory section of the test. This is similar to the self-checking approach used by AAPG. More thoughts on this here.
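The checksum idea can be sketched as follows. This is purely illustrative - the state folded in, the register width and the hash choice are assumptions, and AAPG's actual self-checking scheme may differ - but it shows how an entire register file collapses into one word that the test can compare against a value the reference model stored earlier:

```python
import zlib


def arch_state_checksum(regs):
    """Fold the RISC-V integer register file (x0..x31, assumed to be
    unsigned 64-bit values) into a single 32-bit CRC.  A self-checking
    test would compare this against the checksum the reference model
    pre-computed into the test's reference-memory section."""
    data = b"".join(regs[i].to_bytes(8, "little") for i in range(32))
    return zlib.crc32(data)
```

On hardware the same computation would of course be done by inserted assembly routines rather than Python; the point is that a single word per checkpoint is enough to detect a divergence in architectural state.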

In the log-based solution, the test-bench for the target is very minimal. However, to test the behavior of the target under external asynchronous events (like interrupts), a more involved and complex test environment is required - one which has the ability to synchronize the target and the reference model at the architectural level and compare their behaviors in response to an external stimulus. This mechanism also provides a considerably deeper level of context comparison than what is achieved by the log-based comparison technique. More thoughts on this here.
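A lock-step environment along these lines (again purely illustrative - RiVer's future flow may look quite different) would step both models instruction by instruction, deliver the same asynchronous stimulus to both at the same architectural point, and compare state after every step:

```python
def lockstep_compare(dut, ref, events, max_steps):
    """Run the DUT and reference model in lock-step.

    `dut` and `ref` are assumed (hypothetically) to expose
    inject(stimulus) and step() -> architectural-state dict.
    `events` maps a step number to an asynchronous stimulus
    (e.g. an interrupt) delivered to both sides simultaneously,
    so that both take it at the same architectural point.
    """
    for step in range(max_steps):
        if step in events:
            dut.inject(events[step])
            ref.inject(events[step])
        if dut.step() != ref.step():
            return ("FAILED", step)  # first diverging step
    return ("PASSED", None)
```

Compared to an after-the-fact log diff, this catches a divergence at the exact instruction where it happens, with the full architectural context of both models still available for inspection.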

While InCore and Tessolve continue to enhance and maintain RiVer Core, we also invite constructive feedback and contributions to this effort in any form possible.


Wednesday, July 8, 2020

Chromite-M SoC targeted for FPGAs

Today, InCore announces the availability of its open-source Chromite-M SoC targeted for FPGAs. The Chromite-M SoC is a minimal SoC that uses an RV64IMAC core generated from the Chromite Core Generator. This SoC is the first member of a family of open-source reference SoCs from InCore.

This release is targeted at the Digilent Arty-100 FPGA board and is accompanied by an SDK from Zephyr OS. The open-source community finally has a production-ready open-source core, interconnect and SoC with extensive documentation.

The Chromite-M SoC is targeted at high-performance RTOS-based embedded applications like networking, storage controllers, wearables and IoT aggregation. The Chromite core in the SoC has the following configuration:
  • Supports RV64IMACU and is compliant with the latest version of the Unprivileged and Privileged ISA specifications. 
  • Machine and User mode support. 
  • 16KiB 4-way set-associative Instruction Cache. 
  • 16KiB 4-way set-associative Data Cache. 
  • Gshare-based branch predictor with a 512-entry Branch History Table (BHT) and 32-entry Branch Target Buffer (BTB). 
  • An 8-entry Return Address Stack (RAS). 
  • 2-cycle integer multiplier, and a 64-cycle integer divider unit. 
  • Physical memory protection with up to 4 regions and a granularity of 8 bytes. 
  • AXI-4 compliant master interfaces for the I-cache and D-cache. 
  • A debugger with JTAG interface. 
The SoC includes the following open-source devices and interconnect IPs: 
  • PLIC: A Platform Level Interrupt Controller with 17 interrupts and 3 levels of priority. 
  • GPIOs: up to 16 configurable General Purpose IOs are available. 
  • UART: A simple interrupt-driven UART controller. 
  • CLINT: Core Local Interrupt Controller for timer and soft-interrupts. 
  • On-Chip Memory: a 16KiB BRAM. 
  • BootROM: A 4KiB boot ROM memory based on BRAMs. 
  • 64-bit AXI-4 compliant Crossbar: This cross-bar connects the cache masters, debug master, high speed peripherals (like DDR) and bridges to slow peripherals. 
  • AXI4-to-APB and AXI4-to-AXI4-Lite bridges. 
  • An AXI4 open slave to connect third-party IPs. 
The above set of device and interconnect IPs is completely open source, and is available under a permissive open-source license on GitLab by InCore. An attractive feature of the device IPs is that they are extremely configurable and include an abstract device-configuration wrapper which allows these devices to be mapped to almost any interconnect protocol seamlessly. 

The SoC also depends on external IPs for the following: 
  • DDRx: Xilinx DDR IP controller. 
  • JTAG: Xilinx BSCANE2 JTAG TAP cell for remote host debugging. While an open-source JTAG controller is available from InCore, using the BSCANE2 re-uses the same micro-USB interface for debugging and programming the FPGA. 

While the SoC itself is designed to be FPGA independent, the current release supports the Arty-100T FPGA board (from Digilent), which has an Artix-7 series FPGA. On this FPGA, the core currently runs at 50MHz while the entire SoC consumes less than 70% of the available LUTs. InCore plans to support other standard FPGA boards for the same SoC depending on interest from the community. 

InCore is continuously working on enhancing the SoC with other common peripherals like I2C, SPI, QSPI, watchdog timers and PWMs as part of its future releases. 

The SoC also comes with a full-featured Zephyr port supporting the current feature set. Zephyr shell applications can be run on the Chromite-M SoC on the Arty board. 

The Chromite-M SoC also includes extensive documentation of the devices, cores and SoC, which will allow SW developers to create new OS ports and build applications on the platform. 

By contributing code to the open-source community, InCore aims not only at democratizing silicon development, but also at encouraging open-source contributions to both HW and SW ecosystems.

Detailed steps for building and using the Chromite-M SoC can be found here: