Hardware Security Verification Example: CWE-1277 Firmware Not Updateable

Anders Nordstrom
Security Application Engineer at Tortuga Logic

 

Common Weakness Enumeration (CWE) is a community-developed list of software and hardware weakness types that can lead to security issues. The list is maintained by MITRE and can be found at cwe.mitre.org.

In my recently updated post, “Better Hardware Security Verification through CWE and Information Flow Checking,” I introduced Information Flow Checking as a way to verify that your design does not contain the weaknesses listed in the CWE database.

Here, let’s look at a more detailed example of how to translate a Hardware CWE into a security rule that you can include in your simulation or emulation regression runs.

CWE-1277 describes a product whose firmware cannot be updated or patched, leaving weaknesses present with no means of repair and the product vulnerable to attack.

Depending on the actual design implementation, there may be several vulnerabilities related to this CWE to verify. If the design does not have the ability to patch ROM code, this may be intentional or an implementation oversight. Security verification will not find a capability that is missing from the design and exists only in documentation, but it will find security violations caused by an incorrect implementation or design. Here, we consider the case where the implemented update mechanism can be bypassed. This case can be modeled as an information flow verification problem.

Note: See also CWE-1310, which describes a similar scenario.

In this example, consider the hypothetical SoC below:

The Trusted Microcontroller Unit (tmcu) in the Hardware Root of Trust (HRoT) loads firmware from the ROM in the HRoT. The ROM is programmed during manufacturing and cannot be changed. To provide a way to update the firmware, new firmware can be written to the trusted non-volatile memory (tnvm), and setting a bit in the tmcu instructs it to read firmware from a predetermined address in the tnvm instead.

Threat Model

An attacker may be able to read the updated firmware and later write malicious code back to the tnvm. The attacker may also clear the control bit in the tmcu so that the device falls back to executing the old firmware, which may have known security vulnerabilities, thereby enabling further attacks.

Security Requirement

The tmcu register that controls the firmware location must not be writable from outside the HRoT. The updated firmware must not be readable or writable from outside the HRoT while it is being used by the tmcu.

Radix Security Rules

These requirements can be expressed as two information flow verification problems:

  • Information (e.g. write data) from outside the HRoT must not flow to the alternative boot register
  • Information (e.g. read data) from the tnvm must not flow outside the HRoT

Using the Tortuga Logic Radix no-flow operator “=/=>”, we can write the following security rules.
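
As a rough sketch (not the actual rules from the original design), the two checks might be written along the following lines. The hierarchical signal names are hypothetical placeholders for the real design hierarchy, and the // comments are illustrative; only the no-flow operator “=/=>” itself is taken from the description above.

    // Hypothetical signal names: write data entering from outside the HRoT
    // must not reach the tmcu register that selects booting from the tnvm.
    soc_bus.wdata =/=> u_hrot.u_tmcu.boot_from_tnvm_reg

    // Hypothetical signal names: firmware data read from the tnvm
    // must not reach a read-data path leaving the HRoT.
    u_hrot.u_tnvm.rdata =/=> soc_bus.rdata

Each rule names a source signal on the left and a destination signal on the right, and it is violated if information from the source can reach the destination.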

Verification

Using the security rules above, the Tortuga Logic Radix tool builds a security monitor that, when simulated together with the design, flags any violation of the rules. Using information flow security rules, over 80% of the MITRE Hardware CWEs can be verified with Radix.

To learn more about how to write security rule checks based on the Hardware CWE list, see the “Radix Coverage for HW CWE Guide” white paper.

Author

  • Anders Nordstrom, Security Application Engineer at Tortuga Logic: Anders is a functional and security verification expert and an ASIC design and verification professional with over 25 years of experience, both as a design and verification engineer and in methodology development and customer support in the EDA field. He specializes in solving customer verification problems using formal verification tools and is an expert in hardware formal security verification. He architected a formal security verification tool, developed its prototype, and deployed the product at customer sites.
