How Do I Debug Firmware Before Hardware Is Done?


Debugging embedded designs is becoming increasingly difficult as the number of observed and possible interactions between hardware and software continues to grow, and as more features are crammed into chips, packages, and systems. But there appear to be some advances on this front, involving a mix of techniques that includes hardware trace, scan chain-based debug, and better simulation models.

Some of this is due to new tools, some is a combination of existing tools, and some involves a change in methodology in which tools are used in different combinations at different times in the design-through-manufacturing flow.

“Tracing and capturing the internal signals based on trigger events, and storing them in a trace buffer that can then be read from a debug port, allows seamless collection of information without disrupting the normal execution of the system,” said Shubhodeep Roy Choudhury, CEO of
Valtrix Systems. “A lot of hardware instrumentation may be required, but placing the trigger events near the point of failure can yield a lot of visibility into the issue.”
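The capture-on-trigger idea Roy Choudhury describes can be sketched in a few lines of C. This is an illustrative software model only; all names, sizes, and the ring-buffer scheme are invented here, not taken from any vendor's trace hardware:

```c
/* Sketch of trigger-based trace capture: samples flow into a ring buffer
 * continuously, and a trigger event freezes the buffer after a fixed
 * number of post-trigger samples, so a debug port can read the history
 * around the point of failure. All names here are illustrative. */
#include <stdint.h>

#define TRACE_DEPTH  256   /* entries held in the ring buffer        */
#define POST_TRIGGER 32    /* samples recorded after a trigger fires */

static uint32_t trace_buf[TRACE_DEPTH];
static unsigned trace_head;       /* next slot to write              */
static int      post_count = -1;  /* <0: armed, >=0: counting down   */

/* Record one signal sample; cheap enough to call from normal paths. */
void trace_capture(uint32_t signal)
{
    trace_buf[trace_head] = signal;
    trace_head = (trace_head + 1) % TRACE_DEPTH;
    if (post_count > 0)
        post_count--;             /* still collecting post-trigger data */
}

/* Called when the trigger condition (e.g. a fault flag) is observed. */
void trace_trigger(void)
{
    if (post_count < 0)
        post_count = POST_TRIGGER;
}

/* The debug port reads the frozen buffer once post_count reaches zero. */
int trace_ready(void)
{
    return post_count == 0;
}
```

Because capture keeps running for a short window after the trigger, the frozen buffer holds both the lead-up to the failure and its immediate aftermath.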

One of the big challenges revolves around the software that runs on an embedded system. “To test that software is really hard,” said Simon Davidmann, CEO of
Imperas Software. “I recall as an engineer trying to get access to the prototypes. You have to timeshare them because you never have enough. Use of simulation changes the ability to develop software for the embedded world, because if you can build a good simulation model of your platform, it means you can test it and run it and put it in regression farms, implement continuous integration, etc. You get much higher quality software because you’ve got much more access for verifying it. And when it comes to debugging, where it is today with simulation, it’s much more efficient than debugging on a prototype or hardware, and you get a few big benefits — versioning, controllability, observability, and the ability to abstract and stop where you want — and so see everything. With a simulator, you get observability that you couldn’t get otherwise.”

This gets much more complicated as different types of chips and memories are added into designs in order to improve performance and reduce power.

“In heterogeneous systems, a typical system will have IP blocks from multiple different vendors — as well as, in some cases, IP blocks provided by internal groups within the SoC developer — plus IP blocks from non-traditional IP suppliers,” said George Wall, product marketing director in the IP Group at
Cadence. “If they have licensed a hardware module from another SoC company, each one has its own set of standards, its own implementation. So it’s a very heterogeneous type of device. How do you integrate all of that at the debug level? This is a challenge. We do support open standards on the debug side, as do a lot of other commercial vendors. But the internal IP, and IP from non-traditional places, may be sourced from companies where those standards may not have been supported.”

This adds a whole other layer of complication because the engineering team is dealing with some black boxes. They don’t know what’s inside them, and they don’t want to tinker with them very much. “They just want to make sure there is visibility externally so they can see what’s going on,” Wall said. “There really is no industry standard to say, ‘Well, here’s the right amount of visibility.’ So it’s a challenge.”



Fig. 1: Cadence/Green Hills tool integration for embedded system design. Source: Cadence

Increasing visibility

But new approaches can create new efficiencies, Davidmann said. “We’ve seen engineers who are trying to get their firmware to run with their operating systems write traces and abstract things in order to monitor not only at the variable level or function level, but probe into the OS and watch what the OS is doing. Then they can trace that. This means instead of getting a billion lines of instruction trace, they’d get a few thousand lines of function trace or scheduler trace, so they can watch what’s happening and visualize some of it at a high level. Then when things go wrong, they can drill down. For example, one user implemented assertions to monitor the OS, and if something happened it would error. It kept a rolling buffer of, say, 10,000 instructions and 1,000 function calls in a trace. When it stopped, they could look back and see what had happened at that point.”
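A minimal sketch of that rolling-trace-plus-assertion scheme is shown below. The buffer size, function names, and the invariant being checked are all invented for illustration; the point is only the mechanism: keep a bounded history, and freeze it the moment an OS-level assertion fails:

```c
/* Sketch of a rolling function trace guarded by an OS-level assertion:
 * the last FTRACE_DEPTH traced entries are kept, and tracing freezes
 * when an invariant fails, so a debugger can walk the history back
 * from the failure. All names and sizes are illustrative. */
#define FTRACE_DEPTH 1000

static const char *ftrace[FTRACE_DEPTH]; /* rolling function-name trace */
static unsigned    ftrace_head;
static int         halted;               /* set once an assertion fires */

/* Called on entry to each traced function (e.g. by instrumentation). */
void ftrace_log(const char *func)
{
    if (halted)
        return;                          /* freeze history after failure */
    ftrace[ftrace_head] = func;
    ftrace_head = (ftrace_head + 1) % FTRACE_DEPTH;
}

/* OS monitor assertion, e.g. "the scheduler never picks a blocked task".
 * On failure it halts tracing instead of crashing, preserving context. */
void os_assert(int condition, const char *what)
{
    (void)what;                          /* would be logged in practice  */
    if (!condition)
        halted = 1;
}

int ftrace_halted(void)
{
    return halted;
}
```

Once `halted` is set, the buffer is effectively a snapshot of the last thousand scheduler or function events leading up to the violation.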

This can be done in hardware, but not easily. “You’ve got to build it all in,” he said. “With a simulator, engineering teams can write their own, extend it, and build their own tools in order to have better visibility, and they can put assertions in to monitor things better and point out bugs that may not be fatal but still affect operation. It’s about visibility and observability. If you use a simulator properly, nothing’s a black box.”


The big difference between debugging software applications running on generic hardware versus embedded applications is the unique hardware it’s running on.

“Generally, that’s something that’s been purpose-built for that particular application, rather than a generic compute node,” said Sam Tennent, senior manager for R&D at
Synopsys. “What’s key there are the interactions between the hardware-dependent software, which is that layer right at the bottom where the software must be aware of the hardware. There will be layers above that, which are abstracted away from the hardware. We see interest in the layer that has to talk to the hardware and the specific issues that brings up, which are different from the issues that you might see with higher-level software.”

The debug engineer needs to know a little bit about hardware, he said. “They need to be aware of things like device registers, interrupts — all of these things that happen down at the hardware level that can affect what the software’s doing. They really need the visibility of not just what’s happening in the software domain, but also what’s happening in the hardware domain. And they need to be able to correlate these things.”

Virtual prototypes
are one way to approach this. “The fact that virtual prototypes are using abstracted models of the hardware means you can get visibility into what’s happening at the hardware layer at the same time as you can see what’s happening in your software,” Tennent said. “Typically you can correlate these things. You can look at an event in your software, or you can look at your software routines and see exactly how those are interacting with the hardware. For example, if you know the hardware brings up an interrupt, you can trace that, and you can see exactly what the software does in reaction to that. This is really useful when you’re debugging problems down at that level.”
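The interrupt example Tennent gives can be reduced to a toy model. Nothing here comes from any real virtual prototyping tool; it only illustrates how the "hardware" event and the software reaction are two sides of one state the debugger can observe together:

```c
/* Toy model of hardware/software correlation in a virtual prototype:
 * a modeled device raises an interrupt line, and the traced software
 * handler services it. Both sides are plain state in one simulation,
 * so a debugger sees the raise and the reaction in lockstep.
 * Entirely illustrative; not any vendor's API. */
#include <stdint.h>

static volatile uint32_t irq_pending;  /* modeled interrupt line        */
static uint32_t          handled;      /* count of serviced interrupts  */

/* "Hardware" side of the model: the device raises its interrupt. */
void model_raise_irq(void)
{
    irq_pending = 1;
}

/* "Software" side: the handler the engineer would single-step into. */
void sw_irq_handler(void)
{
    if (irq_pending) {
        irq_pending = 0;   /* acknowledge in the modeled device         */
        handled++;         /* visible in both HW and SW trace views     */
    }
}

uint32_t irq_handled_count(void)
{
    return handled;
}
```

In a real virtual prototype the same principle holds at much larger scale: because both domains live in one simulation, an event in one can always be matched to its cause or effect in the other.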

Pure software simulators are still useful for debugging up at higher levels, but typically they are not used to model things like hardware registers. “They have a high-level API, which the software is using, but they’re not modeling right down at the register level,” Tennent said. “This means any issues that you have down at that level are not going to be picked up by something like these software emulators.”

Davidmann noted that when a processor model is encapsulated in Verilog, the engineering team can then debug using Cadence, Siemens EDA, and Synopsys tools. “They can debug the software stack in our debugger, all in the one simulation. As they are single-stepping, they can see the waveforms in the Cadence device, for instance, and can look at the hardware and the software all in one simulation. It’s not one debugger, because conceptually there are signals and wires in the hardware, but it can be done in one run and be synchronized so that the engineer can click on a point in the software and see what the waveform was doing at that point in time.”

Cadence’s Wall advises engineering teams to think upfront about how to ensure the debug of the firmware running on the system. “Consider the types of interactions that will be occurring between the firmware and other devices in your SoC,” he said. “Think about how to gain visibility into those interactions. One common method at the CPU level is to implement tracing capabilities, where the trace output can be used to at least tell you what code was running when certain things happened. There also are a lot of things that need to be done at the system level to ensure the visibility of those interactions. Special visibility registers can be added to periodically check the embedded firmware, providing a state of the system. There are other techniques of implementing trace instrumentation in the other blocks in the system that can be controlled or enabled by the firmware running on the processor. So if it’s having difficulty interacting with one particular block, it can turn on the trace for that block, and then read the trace from a memory location to understand the problem.”
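The firmware-controlled block trace Wall describes might look something like the following. The register layout, names, and the use of a plain variable instead of a fixed MMIO address are all assumptions made so the sketch stays self-contained:

```c
/* Hypothetical sketch of firmware-enabled trace in a peripheral block:
 * firmware sets an enable bit in the block's control register, lets it
 * capture, then reads the records back from trace memory. The register
 * layout is invented; real hardware would fix this at an MMIO address. */
#include <stdint.h>

typedef struct {
    volatile uint32_t ctrl;      /* bit 0: trace enable                 */
    volatile uint32_t count;     /* records captured so far             */
    volatile uint32_t data[64];  /* trace records readable by firmware  */
} block_trace_t;

/* Host-runnable stand-in for a memory-mapped block (e.g. a UART). */
static block_trace_t uart_trace;

void block_trace_enable(block_trace_t *t)
{
    t->ctrl |= 1u;
}

void block_trace_disable(block_trace_t *t)
{
    t->ctrl &= ~1u;
}

/* Copy up to max captured records out of the block's trace memory. */
unsigned block_trace_read(block_trace_t *t, uint32_t *dst, unsigned max)
{
    unsigned n = (t->count < max) ? t->count : max;
    for (unsigned i = 0; i < n; i++)
        dst[i] = t->data[i];
    return n;
}
```

The win is exactly what Wall outlines: when one block misbehaves, firmware can enable trace on just that block and pull the evidence out through normal memory reads.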

Improving low-level software debug

Whether the application runs on bare metal or on an RTOS can make a difference, too.

“The applications based on these two branches are always debugged using some integrated development environment, which is the primary entry point when you’re debugging something,” said Haris Turkmanovic, embedded software lead at
Vtool. “How complex the debugging process is depends on how complex the integrated development environment is. If there is a well-developed integrated embedded environment, it will be much easier to debug.”

So what does the process of debugging look like? “You first need to know what to expect,” Turkmanovic said. “If that expectation is not satisfied, you know that you have a problem. That expectation is based on various values, which are part of the memory. When you debug something, you need to go into the memory, look at the memory content to see how it’s changed, and go step by step through your code. Each debugging process consists of iteration. You go through your code, step by step, watching the memory content to see if something behaves unexpectedly, so you can catch the problem. Basically, you’re watching the memory to see if the content is written as expected. If it’s not, then you localize the problem. If you have a big system, you can divide it into parts and look at each part separately. This process can be easier if there is some kind of operating system, because embedded platforms that run embedded systems are very complex. They usually have a memory protection unit that allows you, for example, to divide a memory into multiple regions. If you want to access part of the code from one region to another region, the MPU will notice. The second approach when you have an operating system is to use the built-in functions, which will monitor the execution of your program. If you try to do something that was not planned, this function will be called, and breakpoints there can catch the error.”
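The "know what to expect" step can be captured in a tiny helper. This is a generic sketch of the idea, not part of any debugger; the function name and return convention are made up here:

```c
/* Sketch of expectation-based memory checking: compare a region against
 * its expected content and report the first mismatching offset, which
 * is exactly where to set the next breakpoint or watchpoint.
 * Illustrative only; names and conventions are invented. */
#include <stddef.h>
#include <stdint.h>

/* Returns -1 if the region matches expectations, otherwise the byte
 * offset of the first mismatch, localizing the problem. */
long mem_expect(const uint8_t *actual, const uint8_t *expected, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (actual[i] != expected[i])
            return (long)i;
    }
    return -1;
}
```

Run after each step of the iteration Turkmanovic describes, a check like this turns "something behaves unexpectedly" into a concrete address to investigate.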


Improving the efficiency of the debug process requires a systematic approach.

“The entry point of that systematic approach is to know what you expect and what the limits are,” Turkmanovic said. “If you don’t have a systematic approach to debug, and if you go into debugging just by guessing, you can get into an endless loop and the debugging will never end. If you don’t have an expectation, and if you don’t have a systematic approach, it’s very hard to debug. For example, if you try to get data at higher speeds than the system can handle, you will always get a bug, because you cannot do something that is impossible.”

In cases where automatically generated code and/or custom configurations are used, automation of processor core verification is particularly important. Formal verification techniques play a key role here.

“Formal verification
provides faster runtime than simulation, allowing simulation licenses to be freed up for other tasks like integration testing,” noted Rob van Blommestein, head of marketing for
OneSpin, a Siemens Business. “Setup is also much quicker and easier. RISC-V’s flexibility to create custom instructions creates a verification hurdle for simulation. Formal technology easily can be applied to verify custom extensions and instructions. Complete coverage of all corner cases can be achieved with formal with minimal to no effort in the development of the testbench. Unspecified behavior, such as undocumented instructions, also can be uncovered using formal. The engineering team will be able to understand coverage progress as it relates to ISA requirements throughout the verification process. Direct traceability of verification and coverage can be achieved.”

New techniques in formal verification technology also help verify that the set of assertions is sufficient to cover a RISC-V core design and ensure there is no unverified RTL code.

“Any extra functionality in the design, including hardware Trojans, is detected and reported as a violation of the ISA. This includes the systematic discovery of any hidden instructions or unintended side effects of instructions. Overall, formal delivers better quality of results with much less effort,” van Blommestein added.

With embedded Linux code, the picture gets more complex. “There we have multithreaded, multiprocessor systems, and it is not easy to debug using some kind of debugger that can go step by step, inspect memory, or something else,” said Gradimir Ljubibratic, embedded software Linux lead at Vtool. “In the Linux world, we are generally depending on debug logs and debug events in real time. We can see what is going on with the system, how the system is fluctuating, how different components interact, and so on. Before we even try to test everything on a real system, we use unit tests to test the different small components of the system. We are currently in the process of implementing continuous integration testing to help us detect bugs in early stages of development. Also, different tools can be used for memory profiling, to inspect what is going on with the code if we have some kind of stack overflow or memory violations and so on. This is mainly related to user-space development and application development.”

Taimoor Mirza, engineering director for the EPS IDE team at
Siemens EDA, agreed. “In modern software development, where multiple threads on multiple cores on multiple SoCs have to be covered, and where each of the components needs to talk to others in the system, traditional debug techniques quickly reach their limit. These are important and necessary to track failures on a single thread/core, but for the full-system understanding of debugging, the view needs to be extended. This is where analysis and profiling tools come into play to help in analyzing and understanding complex system behavior and in tracking down these sorts of problems. The user gets an overview of the overall system behavior, making it easy to spot areas where issues arise, such as networking problems, scheduling issues for operating systems, or problems in device drivers. Tools also can import external recorded data and show it in sync with the software traces.”




Fig. 2: Debugging with complex testbenches. Source: Siemens EDA

Trace points also can be added into the code to extend the usability, while APIs allow for the creation of custom agents that understand these trace points, better supporting the user with a hint of where to search for bugs, Mirza said. “Also, IDE firmware developers can add instrumentation and use it in their own agents to help end-users find issues with higher-level code. Once a problem area is zeroed in on, the user can use problem-specific techniques to try to get more details.”
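A source-level trace point can be as simple as a macro that drops a tagged record into a buffer for a custom agent to parse later. The record format, macro name, and buffer size below are invented for illustration:

```c
/* Sketch of source-level trace points: a macro records a (tag, value)
 * pair into a buffer that an external agent or IDE could decode to
 * narrow down where to search for a bug. Names and format invented. */
#include <stdint.h>

#define TP_MAX 128

typedef struct {
    uint16_t id;   /* which trace point fired          */
    uint32_t arg;  /* a value of interest at that spot */
} tp_rec_t;

static tp_rec_t tp_buf[TP_MAX];
static unsigned tp_n;

/* Drop a record; silently stops when the buffer is full. */
#define TRACE_POINT(tp_id, tp_arg)                                     \
    do {                                                               \
        if (tp_n < TP_MAX)                                             \
            tp_buf[tp_n++] = (tp_rec_t){ (tp_id), (uint32_t)(tp_arg) };\
    } while (0)

unsigned tp_count(void)
{
    return tp_n;
}

uint16_t tp_id_at(unsigned i)
{
    return tp_buf[i].id;
}
```

Scattering `TRACE_POINT(...)` calls along a suspect code path gives the agent a breadcrumb trail: the last recorded id tells it how far execution got.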

For instance, if the user finds issues in the Linux kernel or kernel modules, tools can be used to help debug them. If the user is having issues with an RTOS, RTOS awareness features can be used to provide additional information on the issue.

In general, Mirza said, there are certain tips and techniques that can help in any situation. For example, make code easier to debug through use of -O0 -fno-inline, which disables optimizations and inlining, so you can step through all code, and do so naturally. You also can use -Og instead of -O0 to specifically optimize for debugging. Essentially, this asks the optimizer to assist with debuggability, rather than hamper it or be disabled.
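As a concrete illustration, a debug build might carry those flags in a makefile fragment like the one below. The variable names and target are hypothetical; the flags themselves are standard GCC/Clang options:

```make
# Debug-friendly build: disable optimization and inlining entirely,
# so every source line and call frame is visible in the debugger.
CFLAGS_DEBUG = -O0 -fno-inline -g3

# Alternative: -Og keeps light optimization but asks the compiler
# to preserve debuggability rather than hamper it.
CFLAGS_FAST_DEBUG = -Og -g3

firmware_debug.elf: main.c
	$(CC) $(CFLAGS_DEBUG) -o $@ $<
```

Adding -g3 alongside either option emits full debug information, including macro definitions, which makes stepping through heavily macro-based firmware code far less painful.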

There are multiple other techniques available. In addition, debug teams can use static and dynamic analysis tools, such as Klocwork and the valgrind suite. These come with a learning curve and sometimes give false positives, but they can detect problems you weren’t even looking for, so it’s best to use them early in development and then continuously.

Running a project early on an emulator allows for better remote and parallel development, as well as better high-level test automation. Also, optimizing the edit-build-debug cycle can really pay off later. Development scripts can be created, makefiles tuned, and the IDE adjusted to automate the fastest build and load. These kinds of adjustments can have a big impact on debug schedules and overall time to market.

Security concerns

Security is becoming a bigger topic with developers, too. “There is a natural tension between security requirements and debug visibility,” said Wall. “The data the SoC designer wants to get while the SoC is running is also potentially vulnerable, but valuable to a hacker. You cannot think about these aspects in silos. You have to think upfront about how all these pieces will interact with each other. It has to be designed and architected upfront.”

“Most organizations with whom I’ve worked aren’t focused on better debugging practices,” said Mike Fabian, principal security consultant for Synopsys’ Software Integrity Group. “Resilience, quality, and safety are all goals and objectives of organizations at differing levels of maturity across the board. They are focused on finding errors earlier, using the latest advances to ensure routine releases are resilient and meet an accepted level of security. There needs to be mandatory use of vetted design blueprints, clear SDK/framework/coding standards, supply chain diligence, active mechanisms in place to protect customer privacy, and automated governance and technical guardrails to avoid preventable errors early on. Debugging an issue later in the development cycle, assuming that debugging means ‘this code isn’t working as intended,’ is a failure of those processes and controls. Finding bugs faster and earlier is more cost-effective.”

Conclusion

Finally, because late bugs are always a risk to the schedule, in addition to debug techniques, a high level of emphasis should also be put on the stimulus and test generators. “Enabling software-driven stimulus and real-world use cases early in the design lifecycle increases the chances of hitting the complex bugs,” said Valtrix’s Roy Choudhury. “And since application software is not developed with the mindset of finding design bugs, and is often complex to debug, using stimulus generators that can exercise the system better and use the debug infrastructure present in the system is always a good idea.”

Related

Debug: The Schedule Killer

Time spent in debug is unpredictable. It consumes a big portion of the development cycle and can disrupt schedules, but good practices can minimize it.

