Adding Secure Boot Capability to Embedded Processors

No single technique will prevent compromise of an embedded system. Layered security must be built upon a secure foundation or “root-of-trust,” since without design/device security it is virtually impossible to provide good data security.

by G. Richard Newell, Microsemi, and Robert K. Braden, Bradtec Security Consultants

Security threats to embedded systems have escalated to an unprecedented level, with routine reports of serious intrusions or compromise. A recent presentation by a Navy security expert suggested that cybercrime has now equaled the dollar value of all illegal drug trafficking. He emphasized that targeted malware and nation-sponsored attacks are clearly on the rise (e.g., Stuxnet). Even industrial controllers and medical systems that previously seemed safe from intrusion due to their isolated functionality have been targeted for ‘cyber blackmail’, data collection, potential terrorism, and recreational hacking. These serious attacks on financial, industrial, communications and military systems highlight the need for security and anti-tamper safeguards within electronic systems.

The sophistication of attacks and the availability of techniques and equipment for hacking processing systems are also expanding rapidly. With several thousand dollars of equipment, a moderately skilled hacker can extract encryption keys from many devices using differential power analysis (DPA). DPA is a powerful technique, first published around 1998, in which the power consumption of a device is monitored while it performs multiple cryptographic operations such as decryption. Using statistical methods, the noisy power consumption measurements are correlated with data calculated from known information, such as the input ciphertext, ultimately revealing the key the device was using. Many reverse engineering tools and hacking techniques are readily available on hacker websites.
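To make the statistical idea behind DPA concrete, the toy sketch below simulates noisy power traces whose signal leaks one bit of a hypothetical S-box output, then recovers a key byte with a difference-of-means test. Everything here (the random S-box, leakage model, noise level and trace count) is an illustrative classroom model, not a description of any real device or of the attacks cited above.

```python
# Toy difference-of-means DPA sketch. All parameters are illustrative.
import random

rng = random.Random(1)

# A random 8-bit S-box stands in for a real cipher's nonlinear lookup table.
SBOX = list(range(256))
rng.shuffle(SBOX)

SECRET_KEY = 0x3C
N_TRACES = 3000

# Simulated traces: leakage = LSB of SBOX[pt ^ key] plus Gaussian noise.
plaintexts = [rng.randrange(256) for _ in range(N_TRACES)]
traces = [(SBOX[pt ^ SECRET_KEY] & 1) + rng.gauss(0.0, 1.0) for pt in plaintexts]

def dpa_recover_key(plaintexts, traces):
    """For each key guess, split traces by the predicted leakage bit and
    measure the difference of the group means; the correct guess maximizes it."""
    best_guess, best_diff = None, -1.0
    for guess in range(256):
        ones = [t for pt, t in zip(plaintexts, traces) if SBOX[pt ^ guess] & 1]
        zeros = [t for pt, t in zip(plaintexts, traces) if not SBOX[pt ^ guess] & 1]
        diff = abs(sum(ones) / len(ones) - sum(zeros) / len(zeros))
        if diff > best_diff:
            best_guess, best_diff = guess, diff
    return best_guess

recovered = dpa_recover_key(plaintexts, traces)
```

With enough traces, the averaging washes out the noise and the correct key guess stands out, which is why unprotected implementations leak keys to an attacker with only modest equipment.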
In-depth security is now considered mandatory by many customers, particularly for Department of Defense systems. User recognition of the need for security provides an important product discriminator for the embedded system developer. Customers now recognize that security adds value and are willing to pay for quality protections. Lack of adequate security can result in loss of money, reputation, intellectual property and customers, and even in loss of life, notably for defense, medical, industrial and transportation systems. The severity of attacks must be countered with improved security measures.

Suppliers of embedded systems need to understand and adopt measures to embed security into their offerings. These security measures must include support for information assurance, design integrity and anti-tamper. One of the most important tenets of embedded systems security is to ensure that only authorized code is loaded and executed. If a hacker can modify or substitute the software or firmware the system runs, the security battle is already lost. This begins with the boot code executed immediately after power-on. The process of guaranteeing that the initial boot code and all subsequent code are authentic is called “secure boot,” and secure boot must be built upon a secure foundation or “root-of-trust.”

Establishing a Root-of-Trust

A root-of-trust is essential to system security. It is an entity that can be trusted to always behave in the expected manner. As a system element, it supports verification of system, software and data integrity and confidentiality, as well as the extension of trust to internal and external entities. The root-of-trust is the foundation upon which all further security layers are created; it is essential that its keys remain secret and that the process it follows is immutable.
In embedded systems, the root-of-trust works in conjunction with other system elements to ensure the main processor boots securely using only authorized code, thus extending the trusted zone to the processor and its applications. The trusted platform module (TPM) is an example of an industry-standard root-of-trust. TPM devices provide cryptographic services (hashing, encryption) with a static RSA key embedded in each device. Many of the security features of the TPM are now available in select SoC FPGAs with the recent addition of on-chip oscillators, cryptographic services, a true random number generator, and stronger design security and anti-tamper measures. SoC FPGAs are vastly more computationally powerful than typical TPMs, and have many more I/O pins and built-in interfaces.

Some FPGAs permanently program keys within the device, while others utilize battery-backed internal SRAM. A superior approach is hardware-based key generation, which creates a device-unique secret key upon power-up. This dynamic key can then be used to form the root-of-trust. Unlike other approaches, the secret key can be transient (ephemeral) and immediately cleared after use, enhancing security since the secret key is never present when the system is at rest.

Variations of individual circuits induced during the manufacturing process can be used to create a physically unclonable function (PUF). Security experts generally agree that PUFs can provide an excellent root-of-trust. Technical challenges include stability (over voltage and temperature) and aging of some types of PUFs. Fortunately, SRAM PUF technology from Intrinsic-ID has been extensively tested by Philips, Thales, Microsemi and a major US defense contractor, among others, verifying the maturity of the SRAM PUF technique. The PUF creates a highly secure ‘digital fingerprint’ of the device using a dedicated SRAM block and a controller.
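Because the SRAM power-up pattern is slightly noisy from one boot to the next, a stable key must be reconstructed with helper data and error correction (a so-called fuzzy extractor). The sketch below is a toy model of that idea using a simple repetition code; the parameters, noise model and names are assumptions for illustration, and real PUF controllers such as Intrinsic-ID’s use far stronger codes and additional safeguards.

```python
# Toy fuzzy-extractor sketch: regenerate a stable key from a noisy PUF
# response with a repetition code. Parameters are hypothetical.
import hashlib
import random

REP = 9                      # each key bit encoded as 9 repeated response bits
KEY_BITS = 64
RESP_LEN = REP * KEY_BITS

rng = random.Random(7)
reference = [rng.randrange(2) for _ in range(RESP_LEN)]  # power-up fingerprint

def enroll(key_bits, reference):
    """One-time enrollment: helper data = reference XOR repetition codeword.
    Helper data can be stored openly; by itself it does not reveal the key."""
    codeword = [b for b in key_bits for _ in range(REP)]
    return [r ^ c for r, c in zip(reference, codeword)]

def regenerate(noisy_response, helper):
    """Every boot: XOR out the helper data, then majority-vote each block."""
    noisy_codeword = [r ^ h for r, h in zip(noisy_response, helper)]
    return [int(sum(noisy_codeword[i:i + REP]) > REP // 2)
            for i in range(0, RESP_LEN, REP)]

key = [rng.randrange(2) for _ in range(KEY_BITS)]
helper = enroll(key, reference)

# Simulate power-up noise as sparse bit flips (at most one per 9-bit block,
# since flipped indices are 13 apart), which majority voting corrects.
noisy = [b ^ (1 if i % 13 == 0 else 0) for i, b in enumerate(reference)]

recovered = regenerate(noisy, helper)
# Hash the corrected bits down to a fixed-size device-unique key.
device_key = hashlib.sha256(bytes(recovered)).digest()
```

The ephemeral nature of the key follows naturally: the corrected bits and the derived key can be cleared from memory immediately after use, and only the public helper data persists.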
The controller collects underlying device characteristics, resulting in the generation of a unique-per-device hardware-based cryptographic key. One example of an SoC FPGA usable as a hardware root-of-trust is SmartFusion2 from Microsemi. SmartFusion2 FPGAs employ the Intrinsic-ID SRAM PUF—either in soft or hard form—along with immutable on-chip embedded non-volatile memory. These and other security features create a root-of-trust for configuration and secure boot of the SoC. The SoC can extend that trust to securely boot an external processor chip, even if the processor chip has limited or no intrinsic secure boot capability.

Typical Multi-Stage Boot

Initializing an embedded processing system from a powered-down state requires a secure boot process (Figure 1) that executes trusted code free from malicious content or compromise. Validation of each stage must be performed by the prior successful phase to ensure a ‘chain of trust’ all the way through to the top application layer. The initial boot loader (Phase 0) code is embedded within the SoC and is validated by the secure root-of-trust, which ensures the integrity and authenticity of the code. Each sequential phase of the secure boot is validated by the previously trusted system before code and execution are transferred to it. It is essential that the code be validated prior to delivery and execution to ensure that no compromise has occurred that could subvert or damage the boot of each phase.

This can be done using either symmetric or asymmetric key cryptographic techniques. One approach is to build an inherently trusted RSA or ECC public key into the immutable Phase-0 boot loader. The Phase-1 code is digitally signed by the developer using the corresponding RSA or ECC private key. During Phase 0, the root-of-trust subsystem validates the digital signature of the Phase-1 code before allowing execution. The boot process is aborted if the signature is invalid.
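The per-phase validation step can be sketched as follows. Python’s standard library has no RSA/ECC primitives, so this toy uses the symmetric (keyed-MAC) alternative the text mentions rather than a public-key signature; the key value, image contents and tag size are all illustrative assumptions.

```python
# Toy chain-of-trust check using HMAC-SHA256 (the symmetric alternative
# mentioned in the text; a real Phase-0 loader might instead verify an
# RSA or ECC signature against its embedded public key).
import hmac
import hashlib

DEVICE_KEY = b"\x42" * 32  # stand-in for a root-of-trust-protected key

def sign_image(key, image):
    """Developer side: attach an authentication tag to a boot image."""
    return image + hmac.new(key, image, hashlib.sha256).digest()

def validate_and_extract(key, signed_image):
    """Phase N: validate Phase N+1's image before transferring execution.
    Returns the image on success; raises (boot aborts) on tampering."""
    image, tag = signed_image[:-32], signed_image[-32:]
    expected = hmac.new(key, image, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise RuntimeError("boot aborted: image failed authentication")
    return image

phase1 = sign_image(DEVICE_KEY, b"phase-1 boot loader code")
assert validate_and_extract(DEVICE_KEY, phase1) == b"phase-1 boot loader code"

tampered = phase1[:5] + b"X" + phase1[6:]   # flip one byte of the image
try:
    validate_and_extract(DEVICE_KEY, tampered)
except RuntimeError:
    pass  # tampering detected; execution is never transferred
```

The constant-time comparison (`hmac.compare_digest`) matters even in a sketch: a naive byte-by-byte comparison can leak timing information an attacker could exploit.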
It is critically important that the inherently trusted public key and the immutable root-of-trust signature-checking process cannot be modified by a would-be hacker, for if an attacker could substitute another public key or subvert the process, they could spoof subsequently loaded digitally signed code. Preferably, continual feedback to each prior stage is used to confirm that no tampering has occurred during boot load. Each phase can continue to execute if all anti-tamper (AT) monitors confirm a safe environment. These monitors typically include voltage, temperature, clocks and any intrusion sensors. The AT monitoring must continue during operation and be capable of terminating operation. An SoC FPGA with cryptographic and anti-tamper features can be of great value for these functions.

Extending Secure Boot

Many of the security benefits of an SoC FPGA can be leveraged to provide a root-of-trust for external processors that inherently lack a secure Phase-0 boot capability. A target processor can be paired with the secure SoC FPGA (Figure 2), which assists in securing the Phase-0 boot process. The FPGA can independently provide run-time monitoring and corrective action, or a penalty, if called for. In this example, all loader code for Phases 1 and higher would reside in SPI flash memory (all code could be encrypted). The SoC would perform authenticity checks on the code for each stage, decrypt the code (if required) and feed it to the main MPU when requested via the MPU-to-FPGA SPI interface. For added security, the Phase-0 code would be stored in the embedded non-volatile memory (eNVM) of the SoC FPGA, which has strong protections against overwriting, and could be encrypted. After power-up, the FPGA would hold the main MPU in reset until it had completed its own integrity self-tests. When ready, it would release the reset. The MPU would be configured to boot from the interface to the FPGA (e.g., via its SPI interface).
The FPGA, acting as an SPI slave, would deliver the requested Phase-0 boot code to the MPU as it comes out of reset. Assuming the MPU does not inherently support secure boot, the challenge is to load some code into the MPU with high assurance that it hasn’t been tampered with. One approach is to have the very first part of the boot code copy the boot code to the MPU’s on-chip SRAM (or cache). Then, the code can perform an integrity check by computing a cryptographic digest of the SRAM contents. This result can be made to vary each time the MPU boots up by including a different true random number, used as a nonce, or “number used only once,” in the uploaded data. The MPU returns the digest value to the FPGA for validation. If it does not respond with the correct value, the FPGA would assume that either the data or the process had been tampered with, and it would terminate the boot process. If everything checks out, the boot process would continue by branching to the now-trusted code in the MPU’s SRAM. This code would contain what is needed to initiate the next phase, and could include a now-trusted RSA or ECC public key.

Another way to validate code in MPU SRAM would be to use another interface, such as JTAG, to independently verify the SRAM by reading it back to the FPGA. Once the code in the MPU SRAM is trusted, additional security measures can be employed. These could include establishing a shared key using public key methods, and encrypting all subsequent boot code transmitted between the FPGA and the MPU with that shared key. Additionally, it may be possible to bind all the hardware components of the system together cryptographically so that none would work without all the exact components of the original system. The SoC can additionally provide real-time monitoring of module environmental conditions such as temperature, voltage, clock frequency and other factors.
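The nonce-based challenge described above can be sketched as a simple three-step exchange. The function names, the SHA-256 digest choice and the 16-byte nonce length are illustrative assumptions, not the article’s exact protocol.

```python
# Toy sketch of the nonce-based SRAM integrity challenge described above.
import hashlib
import secrets

BOOT_CODE = b"phase-0 loader copied into MPU on-chip SRAM"

def fpga_issue_challenge():
    """FPGA side: draw a fresh nonce from the true random number generator
    so the expected digest differs on every boot (replay is useless)."""
    return secrets.token_bytes(16)

def mpu_respond(sram_contents, nonce):
    """MPU side: digest the uploaded SRAM image together with the nonce."""
    return hashlib.sha256(sram_contents + nonce).digest()

def fpga_validate(response, nonce):
    """FPGA side: recompute the digest over the known-good code. A mismatch
    means the data or the process was tampered with; boot is terminated."""
    expected = hashlib.sha256(BOOT_CODE + nonce).digest()
    return secrets.compare_digest(response, expected)

nonce = fpga_issue_challenge()
assert fpga_validate(mpu_respond(BOOT_CODE, nonce), nonce)            # boot continues
assert not fpga_validate(mpu_respond(b"patched code", nonce), nonce)  # boot aborts
```

The nonce is what prevents a precomputed or replayed digest from passing: without it, an attacker who once observed a valid response could answer the challenge without running the genuine code.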
The FPGA fabric can be securely configured to provide I/O for external tamper sensors and intrusion detectors. These can be monitored by the SoC to prevent vulnerability to known exploits that apply abnormal conditions to extract critical information. Physical anti-tamper monitoring can also be incorporated to sense intrusion or sniffing of the critical connections between the SoC and the target processor. If these conditions are detected, the SoC can immediately take action to terminate ongoing processes and perform zeroization (deletion) of any key material. Figure 2 suggests one type of tamper response, which would institute a power shutdown of point-of-load (PoL) power regulators, thereby disabling the module/system. Tight integration of the SoC FPGA with other board functions can make bypassing the hardware root-of-trust more difficult.

Once the main MPU is running trusted code, much higher security levels can be achieved with proper design. For many commercial and industrial applications, secure boot with a few low-cost anti-tamper measures may be enough. For financial and defense applications, additional monitors and a tamper-sensing, tamper-evident enclosure may be required.

Whether used as a self-contained processing element or in conjunction with adjunct processors, the SoC FPGA brings a new measure of security to embedded processing. While it is possible to construct an embedded processor module with specialty security devices that perform monitoring and static key storage, consolidating security features within the SoC FPGA provides much greater security, flexibility and performance. PUF-based ephemeral key generation, cryptographic, and anti-tamper protections centralized within the SoC FPGA make it extremely difficult for an attacker to acquire sufficient information to compromise the processing system.
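The monitor-and-zeroize policy above can be sketched as a small state machine: compare each sensor reading against a safe window and, on any violation, overwrite key material and latch a disabled state. All thresholds, sensor names and the response policy here are hypothetical illustrations, not values from any real device.

```python
# Toy anti-tamper monitor sketch: range-check environmental sensors and
# zeroize key material on a violation. All thresholds are hypothetical.
SAFE_RANGES = {
    "voltage_v":  (1.14, 1.26),    # assumed core supply window
    "temp_c":     (-40.0, 100.0),
    "clock_mhz":  (95.0, 105.0),
}

class AntiTamperMonitor:
    def __init__(self, key_material: bytearray):
        self.key = key_material       # mutable buffer so it can be zeroized
        self.tampered = False

    def check(self, readings: dict) -> bool:
        """Return True if all monitored values are in range; otherwise
        trigger the tamper response and report failure."""
        for name, (lo, hi) in SAFE_RANGES.items():
            if not lo <= readings[name] <= hi:
                self._respond()
                return False
        return True

    def _respond(self):
        for i in range(len(self.key)):  # zeroization: overwrite key material
            self.key[i] = 0
        self.tampered = True            # latched; e.g., shut down PoL regulators

key = bytearray(b"\xAA" * 16)
mon = AntiTamperMonitor(key)
assert mon.check({"voltage_v": 1.20, "temp_c": 25.0, "clock_mhz": 100.0})
assert not mon.check({"voltage_v": 0.60, "temp_c": 25.0, "clock_mhz": 100.0})
assert key == bytearray(16)  # key material cleared after the tamper event
```

Storing the key in a mutable buffer is deliberate: zeroization only works if the secret can actually be overwritten in place rather than copied around in immutable objects.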
The advanced security capabilities of SoC FPGAs such as the Microsemi SmartFusion2 can provide the essential root-of-trust and security tool set needed by embedded system developers to meet the challenge of today’s security threats.

Microsemi, Aliso Viejo, CA. (800) 713-4113. [www.microsemi.com]
Bradtec Security Consultants. (610) 212-6447. [[email protected]]
Intrinsic-ID, San Jose, CA. (408) 573-6186. [www.intrinsic-id.com]