Saturday, September 22, 2018

So You Want To Be An Embedded Systems Developer

Then listen now to what I say.
Just get an electric guitar
and take some time and learn how to play.

Oh, wait, that's a song by the Byrds. But the strategy is the same. Get some information and tools and learn how to use them. No need to sell your soul to the company.

The items I've listed below are sufficient to get you started on a career as an embedded systems developer. There are of course many additional resources out there. But these will arm you with enough knowledge to begin.

What's An Embedded System?

It's a computer that's embedded inside another product, like a car, a microwave, a robot, an aircraft, or a giant industrial machine in a factory; or an IoT device like an Amazon Echo, a Sonos speaker, or a SimpliSafe home security system. You think of the thing as the end product, not as a computer. The computer happens to be one of the things inside that makes it work.

The fascinating thing about embedded systems is that you get to have your hands in the guts of things. The code you write makes a physical object interact with the real world. It's a direct control feedback loop. Working on them is incredibly seductive.

Embedded systems are a multi-disciplinary endeavor. At a minimum they require a mix of electronics and software knowledge. Depending on the particular application (the end product you're building), they may also require some combination of mechanical, materials science, physics, chemical, biological, medical, or advanced mathematical knowledge.

The Primary Resources

These are all excellent resources that provide the minimum knowledge for a beginner. If you already have some knowledge and experience, they'll fill in the gaps.

These are well-written, very practical guides. There's some overlap and duplication among them, but each author has a different perspective and presentation, helping to build a more complete picture.

They also have links and recommendations for further study. Once you've gone through them, you'll have the background knowledge to tackle more advanced resources.

The most important thing you can do is to practice the things covered. This material requires hands-on work to really get it down, understand it, and be able to put it to use, especially if you're using it to get a job.

Whether you practice as you read along or read through a whole book first, invest the time and effort to actually do what it says. That's how you build the skills and experience that will help you in the real world.

Expect to spend a few days to a few weeks on each of these resources, adding up to a few months overall. While they're all introductory, some assume more background knowledge than others, such as familiarity with binary and hexadecimal numbers. You can find additional information on these topics online by searching on some of the keywords.

These are in a recommended order, but you can go through the software and electronics materials in parallel.
Embedded Software Engineering 101: This is a fantastic blog series by Christopher Svec, Senior Principal Software Engineer at iRobot. What I really like about it is that he goes through things at very fine beginner steps, including a spectacular introduction to microcontroller assembly language.
The C Programming Language, 2nd Edition, 1988, by Brian W. Kernighan and Dennis M. Ritchie: C is the primary language used for embedded systems software, though C++ is starting to become common. This is the seminal book on C, extremely well-written, that influenced a generation of programming style and other programming books.
Make: Electronics: Learning Through Discovery, 2nd Edition, 2015, by Charles Platt. This is hands down the best book on introductory electronics I've ever seen. Platt focuses primarily on other components rather than microcontrollers, covering what all those other random parts on a board do. See Review: Make: Electronics and Make: More Electronics for more information on this and the next book, and Learning About Electronics And Microcontrollers for additional resources.
Make: More Electronics: Journey Deep Into the World of Logic Chips, Amplifiers, Sensors, and Randomicity, 2014, by Charles Platt. More components that appear in embedded systems.
Making Embedded Systems: Design Patterns for Great Software, 2011, by Elecia White. This is an excellent book on the software for small embedded systems that don't use operating systems (known as bare-metal, hard-loop, or superloop systems), introducing a broad range of topics essential to all types of embedded systems. And yes, the topic of design patterns is applicable to embedded systems in C. It's not just for non-embedded systems in object-oriented languages. The details of implementation are just different.
Real-Time Concepts for Embedded Systems, 2003, by Qing Li and Caroline Yao. This is an introduction to the general concurrency control mechanisms in embedded operating systems (and larger-scale systems).
Designing Embedded Hardware: Create New Computers and Devices, 2nd Edition, 2005, by John Catsoulis. This covers the hardware side of things, an excellent complement to White's book. It provides the microcontroller information to complement Platt's books.
Test Driven Development for Embedded C, 2011, by James Grenning. This is a spectacular book on designing and writing high quality code for embedded systems. See Review: Test Driven Development for Embedded C, James W. Grenning for full details. Just as White's book applies concepts from the OO world to embedded systems, Grenning applies Robert C. Martin's "Clean Code" concepts that are typically associated with OO to embedded systems. We'll all be better off for it.
Modern C++ Programming with Test-Driven Development: Code Better, Sleep Better, 2013, by Jeff Langr. This is an equally spectacular book on software development. It reinforces and goes into additional detail on the topics covered in Grenning's book, so the two complement each other well. Even if you don't know C++, it's generally easy enough to follow and the material still applies.
Some Hardware
Texas Instruments MSP430F5529 USB LaunchPad Evaluation Kit, $12.99. An evaluation kit is a complete ready-to-use microcontroller board for general experimentation. Christopher Svec uses this kit in his blog series above, where he also covers using the free downloadable development software. It's one of a bazillion microcontroller boards out there that are useful for learning how to work on embedded systems. If you buy directly from the TI site, register as an "Independent Designer".
Saleae Logic 8 Logic Analyzer, 8 digital/analog inputs, 100 MS/s, $199 with awesome "enthusiast/student" discount of $200, using a discount code that you can request and apply to your cart when checking out, thanks guys! This is also covered briefly in Svec's blog series. A logic analyzer is an incredibly valuable tool that allows you to see complex signals in action on a board. They used to cost thousands of dollars and needed a cart to roll them around. The Saleae is a miraculously miniaturized version that fits in your pocket, at a price that makes it a practical must-have personal tool. The software is free to download from their site; you can play with it in simulation mode if you don't have an analyzer yet.
Extech EX330 Autoranging Mini Multimeter, $58.00. There are also a bazillion multimeters out there. This is one reasonable mid-range model. The multimeter is a vital tool for checking things on boards.
For a full shopping list to equip a personal electronics lab, see the Shopping List heading at Limor Fried Is My New Hero. That page also has many links to resources on how to use the tools.

Glossaries

It can be a bit maddening as you learn the vocabulary, with lots of terms, jargon, and acronyms being thrown around as if you completely understood them. As you get through the resources, the accumulation of knowledge starts to clarify things. Sometimes you'll need to go back and reread something once you get a little more information.
Other Links

These sites have articles and links useful for beginners through advanced developers.
Final Thought

Our society is becoming more and more dependent on embedded systems and the various backend and support systems they interact with. It's our responsibility as developers to build security in to make sure that we're not creating a house of cards ready to collapse at any moment. Because people's lives can depend on it.

If you think I'm overstating that, see Bruce Schneier's new book. We are the ones on the front lines.

Learning About Electronics And Microcontrollers


If you're new to electronics and microcontrollers and want to learn about them, here's my recommended reading list, in order (Amazon links):

Tutorial books: to read beginning to end. "Make: Electronics" is the best introduction to electronics I've ever seen in any form, a must read!

Reference books: to jump around and get more details as necessary.

There's been an evolutionary leap forward in the affordability and ease of use of microcontrollers as a result of inexpensive open-source hardware and free open-source software.

These have removed many of the traditional roadblocks that made learning to use microcontrollers daunting. Working with them is now within reach of everyone from elementary school children to adults.

Microcontrollers are thus excellent platforms for learning to code, because they allow you to interact directly with the physical world. Making the hardware do what your code told it to do is very satisfying.

Tutorial Books

First Two Books

These cover basic electronics, and make an outstanding starting point. You can read my review of them to see why I like them so much (as well as information on component kits for the experiments in the first book).

They don't focus on microcontrollers. Instead, they focus on the other parts that surround microcontrollers, as well as projects that don't need microcontrollers.

If you're impatient to get on to the microcontroller information, save Make: More Electronics until later. But you should definitely start with Make: Electronics no matter what.

Third Book

This briefly covers some of the same basics as the first two books, then covers other useful fundamentals. The first two and this one complement each other very well.

It then gets into working with Arduino and Raspberry Pi microcontrollers. It includes a number of simple projects for working with external modules and sensors.

Arduino is programmed in C on "bare metal", i.e. without an operating system, and Raspberry Pi is programmed in Python on embedded Linux, so the two illustrate the variety of programming and runtime environments for microcontrollers.
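
To make "bare metal" concrete, here's a minimal sketch of the superloop structure that such firmware boils down to. The register and helper names are illustrative placeholders, not any real device's API:

```c
/* Minimal bare-metal "superloop" sketch. LED_PORT and the helpers are
 * illustrative placeholders, not any real device's registers or API. */
#include <stdint.h>

static volatile uint8_t LED_PORT;     /* stands in for a memory-mapped register */

static void hardware_init(void) { LED_PORT = 0; }
static void delay_ms(uint32_t ms) { (void)ms; /* busy-wait on real hardware */ }

int main(void)
{
    hardware_init();
    for (;;) {                        /* the superloop: no OS, runs forever */
        LED_PORT ^= 0x01;             /* toggle an LED */
        delay_ms(500);
    }
}
```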

I found this to be a nice gentle introduction to the practicalities of working with microcontrollers and the huge array of third-party modules available. While it only skims the surface of a vast topic, it makes an excellent jumping off point for learning about embedded systems.

Reference Books

First Three Books

These gather in one place information from a wide array of resources on how to use a wide array of electronic components. They focus on practical concerns rather than theory, and are illustrated with the same excellent color diagrams and photos as Make: Electronics.

For each component, they contain the following sections: What It Does, How It Works, Variants, Values, How To Use It, and What Can Go Wrong.

Fourth Book

This is like multiple smaller books bound into one. It starts with an extensive chapter on theory and related math. The authors point out that much of the math throughout the book is simply to prove the theory, so if you're not interested in that level of detail, you can skip over it.

The remainder of the book covers a broad range of devices, providing both theory and practical material. It has a chapter on microcontrollers that makes a good follow-up to Hacking Electronics.

Using The Books

The tutorial books are meant to be read from beginning to end as you tinker with their projects. They are easy reading with hands-on experiments, where each chapter builds on previous material.

The reference books are meant to be read here and there, jumping around as you need more details on a specific topic.

Even though some of the topics are duplicated between all the books, each author and each book has a different perspective. Each has a different emphasis and presentation.

They complement each other to give a more complete picture because one author may delve deeper into details that another glosses over. You may prefer one author's explanation over another's. No single resource is ever able to give the whole story, so it helps to have multiple perspectives.

These books will give you a good foundation so that you'll be able to understand other books and resources.

If you're interested in doing embedded systems software development, see So You Want To Be An Embedded Systems Developer.

Electronics Suppliers

There are two outstanding suppliers of discrete electronics, microcontrollers, tools, modules, sensors, and breakout boards that cater to the small-scale needs of hobbyists, students, and experimenters:
You can read my paean to Adafruit at Limor Fried Is My New Hero, which includes the shopping list for setting up my small-scale electronics lab. For a simple example of using this equipment, see First Use Of New Tools.

There are several suppliers that serve industrial scale but also supply at small scale (do you need 10 pieces, or 10 million?):
Supplier Learning Resources

All these suppliers have extensive online learning resources. However, the industrial suppliers don't have much for the absolute beginner; they're good once you've built up some background knowledge.

Adafruit Learn and SparkFun Learn resources are in both written and video form. You can scan through videos for a quick overview pass by setting the speed in the YouTube window settings (the gear icon) to 2x, then come back and watch at normal speed for a second pass.

There's a lot of duplication between them (and between these resources and the books above), but it's useful to see how different people approach the same topics. Just like reading books by different authors, they provide additional perspectives to help fill in the gaps.

Both sites can be a bit overwhelming to dig through, so I've selected a number of beginner resources below, organized by supplier and then type of resource. Many of them have links to additional information.

These Adafruit videos by Collin Cunningham cover basic electronics lab skills:
  • Soldering and Desoldering: how to solder components together properly, and how to pull them apart for salvage and rework.
  • Surface Mount Soldering: how to solder surface-mount components.
  • Multimeters: how to use a meter for basic measurements.
  • Oscilloscopes: how to use an oscilloscope for advanced measurements and waveforms.
  • Hand Tools: the basic hand tools used for assembling and disassembling electronics.
  • Schematics: how to read schematics (no, they're not Greek!).
  • Breadboards and Perfboards: how to combine the parts on a schematic into a functioning circuit.
  • Ohm's Law: understanding the relationship between voltage, current, and resistance (a quick worked example follows this list).
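
A quick worked example: by Ohm's Law (V = I × R), a 9 V battery across a 470 Ω resistor drives a current of I = V / R = 9 V / 470 Ω ≈ 19 mA.
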
He also has these videos on the basics of various components:
These are Adafruit written guides by various contributors:
These SparkFun videos by Shawn Hymel cover basic electronics lab skills:
He also has these videos on electronics basics:
These are SparkFun written guides by various contributors:
Another great YouTube resource is Dave Jones' EEVblog.

Tuesday, August 14, 2018

Review: Test Driven Development for Embedded C, James W. Grenning



The TL;DR:
  • Test Driven Development for Embedded C by James W. Grenning is an outstanding book. 
  • The title says C, but if you work in C, C++, C#, Go, Objective-C, Java, Javascript, or anything else, this is worth reading.
  • It says embedded, but if you work in embedded systems, front end web apps, mobile apps, desktop apps, backend servers, or anything else, this is worth reading.
  • And it's not just TDD, it's all the concepts that go into good design.
  • Get it, read it, USE it. You won't regret it.
Background

I first learned about XP (eXtreme Programming) concepts in 2007, when I was introduced to Kent Beck's Test-Driven Development: By Example. I used TDD (Test-Driven Development) to develop a major component on a server system. I learned more in 2013, when I read Michael Feathers' Working Effectively With Legacy Code. I used that to apply TDD to an existing server codebase.

Over the past 3 months, I've been on a reading binge, triggered by reading Robert C. Martin's 2017 book Clean Architecture: A Craftsman's Guide to Software Structure and Design. I have an hour-long commuter rail ride, so I have lots of time to read and work on my laptop, plus a little lunchtime reading, and I always have a book open at home.

I read his Clean Code: A Handbook of Agile Software Craftsmanship, The Clean Coder: A Code of Conduct for Professional Programmers, and am currently in the middle of his Agile Software Development, Principles, Patterns, and Practices.

I read Sandro Mancuso's The Software Craftsman: Professionalism, Pragmatism, Pride, and am in the middle of Mike Cohn's Agile Estimating and Planning, both from Martin's series.

I read Andrew Hunt and David Thomas' The Pragmatic Programmer: From Journeyman to Master, and am halfway through Pete McBreen's Software Craftsmanship: The New Imperative; Martin Fowler's Refactoring: Improving the Design of Existing Code is waiting on the shelf.

I've encountered bits and pieces of this material over the years, but this was a chance to go back to primary sources, get the full details and parts I've missed out on, and really understand them. I highly recommend it.

Review

But maybe you don't have time for all that. Maybe you'd like to cut to the chase and see how to apply their principles in practice.

Test Driven Development for Embedded C by James W. Grenning does that. It draws from many of those sources and more, showing you real-world examples to put them into practice.

Grenning is one of the original authors of the Agile Manifesto (as are Beck, Fowler, Hunt, Martin, and Thomas). He contributed the chapter "Clean Embedded Architecture" to Clean Architecture, and is the inventor of the Agile planning poker estimation method.

The book was published in 2011, so is now 7 years old, but it remains as timely as ever. That's especially true as IoT vastly expands the number of embedded systems that we rely on in our daily lives. Effective testing is critically important. For instance, see Testing Is How You Avoid Looking Stupid.

If you work on embedded systems in C, this is a must read.

If you work in a language other than C, or on a different type of system than embedded systems, you may not think that a book on embedded C programming applies to you. But it's broadly applicable and worth reading.

The book is organized as an introductory chapter, the remaining chapters grouped into 3 parts, and appendices. I see it as three distinct portions, plus appendices: Chapter 1; Parts I and II (chapters 2-10); and Part III (chapters 11-15).

Throughout, Grenning addresses the common concerns people have with applying TDD to embedded systems. Embedded systems are a particular challenge, with particular target system constraints, so people might be skeptical.

This is a very hands-on, how-to book. I've included a number of lists from it, including those Grenning draws from other sources, because they illustrate the practical, pragmatic, disciplined approach. You can use this as a cheat sheet to remember them after you've read the book.

It might be tempting to think you can get by just with the information I've provided here and skip the book. But I've included it specifically with the hope that you'll realize you must read the book, and that it will be a worthwhile investment.

First Portion

This is the motivational portion, the appetizer. Grenning introduces TDD, its benefits in general, and the specific benefits for embedded systems.

He lists Kent Beck's TDD microcycle (a minimal C sketch follows the list):
  1. Add a small test.
  2. Run all the tests and see the new one fail, maybe not even compile.
  3. Make the small changes needed to pass the test.
  4. Run all the tests and see the new one pass.
  5. Refactor to remove duplication and improve expressiveness.
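
Here's a minimal sketch of one pass through the microcycle, using plain assert() instead of a real test harness; the CircularBuffer names are my own illustrative placeholders, not the book's code:

```c
/* One pass through the microcycle with plain assert() instead of a real
 * harness. The CircularBuffer names are illustrative, not the book's code. */
#include <assert.h>
#include <stdbool.h>

typedef struct { int count; } CircularBuffer;

/* Step 3: the small change needed to make the test below pass. */
static void CircularBuffer_Init(CircularBuffer *b) { b->count = 0; }
static bool CircularBuffer_IsEmpty(const CircularBuffer *b) { return b->count == 0; }

int main(void)
{
    /* Steps 1-2: this test was written first, and failed (to compile)
     * before the two functions above existed. */
    CircularBuffer buffer;
    CircularBuffer_Init(&buffer);
    assert(CircularBuffer_IsEmpty(&buffer));  /* step 4: run, see it pass */
    /* Step 5: refactor, then repeat with the next small test. */
    return 0;
}
```
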
The microcycle is critically important to the technique, so Grenning reminds you of it several times as he works through examples. This is what makes TDD effective, and I know from my own experience is also what makes it fun and extremely satisfying. He has a sidebar titled "Red-Green-Refactor and Pavlov's Programmer", which is very apt. That Pavlovian drive to take the next step in the cycle draws you into the zone and keeps you cranking.

For embedded systems, in addition to all the benefits that apply to other types of software, the primary benefits include:
  • Being able to develop tested, working code when the target hardware isn't available.
  • Being able to test off-target (i.e. not on the target embedded system), where you have all the benefits of a general-purpose system and none of the constraints of an embedded one, including a faster development turnaround cycle.
  • Being able to isolate hardware/software interactions.
  • Decoupling software from hardware dependencies.

That last point is part of the Big Lesson (see below) from all this. TDD in general, for any type of software, results in testable and tested software. But more than that, it drives development in a way that improves the design significantly.

That improved design means a much longer and happier life for the software and the systems that use it. They will be able to adapt to changes much more easily. It's not just about getting V1.0 done. It's about getting to V10.0.

In Software Craftsmanship, Pete McBreen starts off with the origin of the term software engineering. It was coined by a 1967 NATO study group working on "the problems of software." A 1968 NATO conference identified a software crisis and suggested that software engineering was the best way out of that crisis. They were concerned with very large defense systems. McBreen gives the example of the SAFEGUARD Ballistic Missile Defense System, developed from 1969 through 1975.

He says, "These really large projects are really systems engineering projects. They are combined hardware and software projects in which the hardware is being developed in conjunction with the software. A defining characteristic of this type of project is that initially the software developers have to wait for the hardware, and then by the end of the project the hardware people are waiting for the software. Software engineering grew up out of this paradox."

McBreen is questioning the value of that style of large-scale software engineering in the development of commercial products, suggesting that a different approach is needed.

But doesn't that situation sound familiar? Doesn't that sound like the problem embedded systems developers face all the time, that Grenning is addressing? This was a situation where TDD and off-target testing could have significantly alleviated the software crisis.

Granted, it was more complicated, since they were also developing the very processors and programming languages they would use, while modern systems rely on COTS (Commercial Off The Shelf) processors and languages. But we see that this has been a pervasive problem for some 50 years.

All types of systems, from embedded to frontend mobile apps to high-scale backend servers, in all those languages, from C to C++, Objective-C, Go, Java, Javascript, etc., can benefit.

All that code can be removed from its normal production environment and run off-target, off-platform, in a unit test environment that allows you to exercise every code path you want easily and quickly. That includes the obscure dark corners of the code trying to handle unusual error cases that are hard to produce on the target system.

For some of my own experience testing off-target, see Off-Target Testing And TDD For Embedded Systems.

Second Portion

This portion is the meat of the book, applying TDD to real-world embedded development and going through the mechanics with practical examples.

Following the lead of Martin's book, Grenning makes restrained use of UML diagrams. While some people dislike UML because they associate it with the heavyweight BDUF (Big Design Up Front) software engineering methodologies that McBreen was talking about, this is a very effective use of it that communicates information quickly. Which is the whole point of UML.

Grenning presents two unit test harnesses, Unity and CppUTest (of which he is one of the authors). All of the material applies just as well to other test harness tools, such as Google Test/Google Mock. It's equally applicable to other languages and their language-specific test harnesses.

He uses Gerard Meszaros' Four-Phase Test pattern to structure tests:
  • Setup: Establish the preconditions to the test.
  • Exercise: Do something to the system.
  • Verify: Check the expected outcome.
  • Cleanup: Return the system under test to its initial state after the test.
The rubber meets the road in his five examples of using TDD to develop embedded code:
  • LED driver
  • Light scheduler for a home automation system
  • Circular buffer
  • Flash driver for ST Microelectronics 16 Mb flash memory device
  • OS isolation layer (aka OSAL, OS Abstraction Layer) for Linux/POSIX, Micrium uC/OS-III, and Win32 (this is actually an appendix and only covers thread control, but establishes the pattern)
Clearly, these have real hardware dependencies on both the processor I/O interface and the attached devices, as well as the system clock, and real OS dependencies. Those are critical concerns for the embedded developer. The LED driver has very simple behavior, so it makes for a gentle introduction. The others are more complex.

Grenning discusses driver requirements, then shows the initial tests and code. Notice I said tests first. That's an important concept in TDD. You always write the test first; the test uses the code in the way that you want the code to work. Then you write the code that satisfies that usage. He emphasizes the save-make-run cycle that you do repeatedly during this process. Then you repeat for the next test and bit of code. That's how you make fast progress.

The key concept is faking out portions of the system, so that the Code Under Test (CUT) can run as if it was running on the real system. That's critical for making TDD work off-target and off-platform. There are several strategies for doing this. In the case of the LED driver, he uses virtual registers to simulate memory-mapped I/O. This is simply a variable under the control of the test suite.
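
Here's a sketch of the idea, in the spirit of the book's LED driver chapter (the names and API are illustrative, not the book's exact code), with Meszaros' four phases marked in comments:

```c
/* Virtual-register sketch in the spirit of the book's LED driver chapter;
 * the names and API are illustrative, not the book's exact code. */
#include <assert.h>
#include <stdint.h>

static uint16_t *ledsRegister;     /* in production: a memory-mapped I/O address */

void LedDriver_Create(uint16_t *address) { ledsRegister = address; *ledsRegister = 0; }
void LedDriver_TurnOn(int led)           { *ledsRegister |= (uint16_t)(1u << (led - 1)); }

int main(void)
{
    uint16_t virtualLeds = 0xFFFF;   /* Setup: a plain variable fakes the hardware */
    LedDriver_Create(&virtualLeds);  /* the driver sees it as the real register */
    LedDriver_TurnOn(1);             /* Exercise */
    assert(virtualLeds == 0x0001);   /* Verify: the test can inspect the "hardware" */
    return 0;                        /* Cleanup: nothing to restore here */
}
```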

He also talks about test-driving the interface before test-driving the internals. That's another critical concept, integral to the whole design process. That's design-for-change. Because things will change. A product with a long, useful life, that represents an ongoing revenue stream for a company, will change over that time to adapt to changes in underlying technologies, user requirements, and usage. TDD means you can make changes without fear of breaking things (because you'll find and fix breakage as a result of performing the microcycle).

He talks about the strategy of incremental progress and refactoring as you go. This is in the heat of development. Final code does not flow directly from your fingertips. It evolves in incremental steps as you work. Did you ever look at someone's code and marvel at how clean and easy to follow it was, despite the complexity of the job it was achieving? You might think you could never do something that easily. This process results in that kind of code. Like a novelist in the heat of writing a scene, the first draft is never the final product, and the story arc evolves over time.

This is where he covers several important guidelines for driving the TDD process effectively. He lists Robert Martin's Three Laws of TDD:
  • Do not write production code unless it is to make a failing unit test pass.
  • Do not write more of a unit test than is sufficient to fail, and build failures are failures.
  • Do not write more production code than is sufficient to pass the one failing unit test.
He describes Kent Beck's snappy acronym DTSTTCPW: Do The Simplest Thing That Could Possibly Work, which initially means just faking it (for instance, hard code a function to return false in order to get the test that uses it to pass). Then keep tests small and focused, and refactor on green (many unit test setups show a failing result in red, and a passing result in green).

As this evolves, the faked out code turns into real code (the hard coded false is changed to actual code that does something and returns true or false under the appropriate conditions). That builds out a verified test suite as it builds out verified code.
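
For instance (my own illustrative code, not an example from the book):

```c
/* DTSTTCPW in action (illustrative code, not an example from the book). */
#include <assert.h>
#include <stdbool.h>

#define BUFFER_CAPACITY 32

/* The first passing version was a fake: { return false; } -- just enough
 * for the one test that existed. A new test with a full buffer then failed,
 * forcing the real logic: */
bool Buffer_IsFull(int count) { return count >= BUFFER_CAPACITY; }

int main(void)
{
    assert(!Buffer_IsFull(0));               /* the original test */
    assert(Buffer_IsFull(BUFFER_CAPACITY));  /* the test that killed the fake */
    return 0;
}
```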

This leads to the TDD State Machine, which tells you what to do next. The guidelines above and the state machine take you through the mechanics of working in the TDD style. They answer the questions:
  • How should you start?
  • What should you do next?
  • How do you know when you're done?
Whenever you write some production code, ask yourself, "Do you have a test for that?". If not, stop, go back, and write the missing test.

He also covers Dave Thomas and Andrew Hunt's DRY principle: Don't Repeat Yourself. This mantra helps drive the refactoring so that you keep the code lean and clean. I'll throw in additionally the DAMP principle: use Descriptive And Meaningful Phrases, a concept the book applies without calling out by name. This favors readable function and variable names that express intent over cryptic abbreviations and syntax. The result is code that reads with a narrative flow.

Keeping your code DRY and DAMP makes it easy for others to understand and modify (which might be you when you come back to it six months or a year later). This is the same as Beck's microcycle step 5.

To some degree this all turns TDD into a very mechanistic process. But that's a good thing. It's not a random, ad hoc process where you're constantly questioning yourself about what to do. Instead it's an orderly stepwise process that makes effective progress. You quickly see and appreciate the value.

It's also very fun and satisfying, because that mechanistic aspect actually drives your creativity. What's the next thing you can add to it? What's the next test, the next bit of functionality? When you finish, you feel like you've accomplished something, and you have the evidence to prove it. It's addicting.

That leads to Grenning's Embedded TDD Cycle, which starts with TDD on the development system, then advances to the target processor and eval hardware, then the actual target hardware:
  • Stage 1: Write a unit test; make it pass; refactor. This is red-green-refactor, the TDD microcycle on the development platform.
  • Stage 2: Compile unit tests for target processor. This is a build check that verifies toolchain compatibility.
  • Stage 3: Run unit tests on the eval hardware or simulator.
  • Stage 4: Run unit tests on target hardware.
  • Stage 5: Run acceptance tests on target hardware. These are automated and manual tests of the integrated system.
This sequence gives you confidence in the code under test quickly; then you can address any hardware-dependent issues that start to arise, such as compiler, library, or primitive data type differences. Next you start exercising the hardware-dependent code.

Testing separately on eval hardware and actual target hardware helps shake out hardware issues in the actual target, since the eval hardware is presumably known good. One of the challenges in embedded development is always trying to determine if problems are due to the software or due to the hardware, since both are in active development and haven't had much soak time to prove them out.

For the other TDD examples, Grenning goes through a progression of different collaborator strategies. These are the test doubles, the fakes, that are substitutable for real components. They stand in for those components to break the test dependencies and allow you to simulate and monitor interactions. An important point is that they are much lighter weight than full-scale simulators. Full simulators can themselves require significant development. These fakes have only enough behavior to support the tests (part of the DTSTTCPW mindset).

He uses these types of doubles:
  • Spies
  • Stubs
  • Mocks
  • Exploding fakes
He goes through the following substitution methods, showing how to do them and discussing when they are appropriate:
  • Link-time substitution
  • Function pointer substitution
  • Preprocessor substitution
  • Combined link-time and function pointer substitution
These are fully-worked-out examples, although he starts omitting intermediate steps as he progresses in the interest of brevity. All the code is available online.
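
To give a flavor of one method, here's a minimal link-time substitution spy (illustrative names, not the book's code). In a real project the spy lives in its own file, and the test build links it in place of the production implementation:

```c
/* Link-time substitution with a spy (illustrative names, not the book's
 * code). The production build links the real Flash_Write(); the test build
 * links this spy instead, so the code under test never touches hardware. */
#include <assert.h>
#include <stdint.h>

void Flash_Write(uint32_t address, uint16_t data);  /* normally declared in a header */

/* --- spy implementation, linked only into the test build --- */
static uint32_t lastAddress;
static uint16_t lastData;
void Flash_Write(uint32_t address, uint16_t data)
{
    lastAddress = address;   /* record the interaction for later inspection */
    lastData    = data;
}

/* --- code under test calls Flash_Write() exactly as in production --- */
static void SaveReading(uint16_t reading) { Flash_Write(0x1000, reading); }

int main(void)
{
    SaveReading(42);
    assert(lastAddress == 0x1000 && lastData == 42);  /* verify via the spy */
    return 0;
}
```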

Third Portion

This portion completes the meal, complementing the meat in the second portion. It addresses design issues. This is important because design for testability also means design for flexibility and long product life.

Grenning starts out with Martin's SOLID principles:
  • S: Single Responsibility Principle (SRP)
  • O: Open Closed Principle (OCP)
  • L: Liskov Substitution Principle (LSP)
  • I: Interface Segregation Principle (ISP)
  • D: Dependency Inversion Principle (DIP)
He covers both how the previous chapters have incorporated these principles, and how to use them to guide the development process. TDD is closely intertwined with them.

Don't be put off by the apparent difference between non-object-oriented and object-oriented languages. The specific language used is irrelevant. The syntactic mechanics may be different, but the concerns and concepts are all the same. C can be every bit as object-oriented as Java; it just takes a little more developer discipline. That means that all of the concepts of the various principles above apply.

He uses the SOLID principles in four module design models of increasing complexity, applicable in different embedded system design cases:
  • Single-instance module: Encapsulates a module's internal state when only one instance of the module is needed.
  • Multiple-instance module: Encapsulates a module's internal state and lets you create multiple instances of the module's data.
  • Dynamic interface: Allows a module's interface functions to be assigned at runtime.
  • Per-type dynamic interface: Allows multiple types of modules with the same interface to have unique interface functions.
You'll probably recognize more than one of these in the systems you work on. You may also recognize object-oriented concepts, and in fact he shows how to implement, use, and test a C++ virtual function table (vtable) in C.
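
In compressed form, that technique looks something like this sketch (my names and simplifications, not the book's code):

```c
/* Per-type dynamic interface: a hand-rolled vtable in C (illustrative
 * names and simplifications, not the book's code). */
#include <stdio.h>

typedef struct {
    void (*on)(int id);
    void (*off)(int id);
} LightInterface;

typedef struct {
    const LightInterface *vtable;   /* shared per type, like a C++ vtable */
    int id;
} Light;

static void DimmerOn(int id)  { printf("dimmer %d ramps up\n", id); }
static void DimmerOff(int id) { printf("dimmer %d ramps down\n", id); }
static const LightInterface dimmerInterface = { DimmerOn, DimmerOff };

static void RelayOn(int id)   { printf("relay %d closes\n", id); }
static void RelayOff(int id)  { printf("relay %d opens\n", id); }
static const LightInterface relayInterface = { RelayOn, RelayOff };

/* Callers work through the interface, never the concrete type. */
static void Light_TurnOn(Light *l) { l->vtable->on(l->id); }

int main(void)
{
    Light dimmer = { &dimmerInterface, 1 };
    Light relay  = { &relayInterface, 2 };
    Light_TurnOn(&dimmer);   /* dispatches to the dimmer's functions */
    Light_TurnOn(&relay);    /* same call, different behavior */
    return 0;
}
```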

Part of good design is adapting to change. He covers Martin Fowler's concepts of refactoring, both the code smells that point to things that need to be refactored, and the strategies for doing it with TDD. He describes a disciplined stepwise process that avoids burning bridges.

This then leads into Michael Feathers' concepts of working on legacy code (which Feathers defines as "code without tests"). He lists Feathers' legacy code change algorithm:
  1. Identify change points.
  2. Find test points.
  3. Break dependencies.
  4. Write tests.
  5. Make changes and refactor.
He describes how to apply this to embedded systems. Two important types of unit tests during this process are characterization tests that establish how the legacy code behaves, and learning tests that help you learn how to work with third-party code.
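
A characterization test can be as blunt as this sketch (illustrative, not from the book): call the legacy code and assert whatever it actually does today, quirks and all, so later refactoring can't silently change it:

```c
/* Characterization test sketch (illustrative, not from the book). */
#include <assert.h>

/* Existing untested legacy code: */
int legacy_scale(int x) { return (x * 7) / 2; }

int main(void)
{
    /* Pin down what the code actually does today. Even if legacy_scale(3)
     * "should" arguably be 11, it returns 10, so 10 is what we assert;
     * behavior changes later are then deliberate, made under test. */
    assert(legacy_scale(3) == 10);
    return 0;
}
```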

The final chapter covers test patterns and antipatterns. This is useful for helping to build good, effective unit tests that are maintainable over the long term.

The Big Lesson

For embedded systems, working with the specific hardware is a critical detail. But as Martin points out in Clean Architecture, it's just a detail. For GUI-based mobile, web, and desktop apps, the GUI is just a detail. For either of these, as well as backend servers, the OS (or lack thereof on a bare metal system) is just a detail. The network is just a detail. The database or the filesystem is just a detail. The frameworks or third-party packages are just details.

All of those details, critical though they may be, can be isolated and segregated from the code that defines what it is your system is about. That code is called the business logic, which sounds a little too dry for me. But it's the stuff that makes your system something that other people want to use. So it's the stuff that makes your system drive a meaningful business.

Your business logic interacts with all those details to make a functioning system. TDD allows you to test that logic, in all its happy, twisty, and unhappy paths, separated from its dependencies on the details. The details are represented by test doubles: dummies, stubs, spies, mocks, and fakes.

This is where the Gang of Four's concept of programming to an interface, not an implementation, stated in their book Design Patterns, comes into play. You write your business logic to work to an interface to accomplish the detail interactions. In the production environment, you use the real detail components, the real implementations, with a thin adaptation layer that conforms to the interfaces.

In the test environment you can substitute test doubles that conform to the interfaces; these are alternate implementations. Since you're in control of the test doubles, you can drive any scenario you need to in order to exercise the business logic.

That isolation also allows you to substitute in other versions of production details, so it's a design strategy, not just a testability strategy. Maybe you want to use some different hardware in your embedded system, or run your app on a different mobile device with a different GUI, or deploy the system on a different OS, or use a different database.

By defining your details as abstract data types or abstract services, you can drop in replacements, with just the effort of implementing the interface layers.

Saturday, July 21, 2018

Off-Target Testing And TDD For Embedded Systems

I've recently started reading things by James Grenning (Wingman-sw.com), one of the authors of the Agile Manifesto. My interest in his work relates to Test-Driven Development (TDD) for embedded systems.

A copy of his book Test Driven Development for Embedded C is currently winging its way to me. His site links to a webinar he gave last summer, Test-Driven Development for Embedded Software, that makes a great introduction to the topic.

I found one of his answers on Quora interesting. The question was: Can I perform a unit test when creating C firmware for ARM Cortex-M MCUs? The answer, of course, is yes. Specifically, testing can be done off-target (i.e. not on the target embedded system).

I wrote a long comment on the answer, and decided it might make an interesting blog post. So the remainder of this post reproduces it substantially as it appears there, with some cleanup. He very kindly asked if I would be interested in adding it to his Stories From The Field.

My Three Stories

I can offer three anecdotes that show why I give a big thumbs up to off-target testing. Off-target testing puts you on target!

The first case was back in 1995. I had recently transferred to the DEChub group at Digital Equipment Corporation to work on networking equipment.

They had a problem with their popular DECbridge 90 product, an office- or departmental-scale stackable Ethernet bridge running an in-house custom RTOS on Motorola 68K, all written in C. It would run for weeks or months at a customer site, then suddenly crash. That would throw the LAN into a tizzy as it went through a Spanning Tree Protocol (STP) reconfiguration event. Then the LAN would do it again once the bridge came back up and advertised its links.

So it could be very disruptive to the local network and everyone running on it, completely unpredictable. No one had been able to reproduce the problem in the lab.

I was tasked with finding and fixing it. This platform had very little in the way of crash dump and debug support, and software update was done by burning and replacing an EPROM. It did have an emulator pod, so that was how most debugging was done.

The problem here was the long run time between failures. That made trying to collect useful information from repeated test runs, either real or via emulator, impractical to the point of impossibility.

The one clue we knew from the crash log was that it was an OOM condition (Out Of Memory). The question was why. Other than supporting STP, which is a bit of complex behavior, a bridge is a pretty simple device, just L2 forwarding. Packet comes in, look it up in the bridge tables, forward it out the appropriate interfaces.

The key dynamic structure was the MAC address table. A bridge is a learning device in that it learns which MAC addresses are attached to which links. It builds up the table as it runs, learning the local network topology and participating in STP. So this table was certainly a prime suspect, but it had capacity for thousands of entries, yet it was crashing in LANs with only tens or hundreds of nodes.

The table used a B-tree implementation that was public-domain software from some university. We speculated that it was a memory leak in either the B-tree itself, or our interfacing to it.

So I pulled out the B-tree code and built a test program for it that would go through tens of thousands of adds and deletes in various simple patterns. This is similar to the type of test fixture that Brian Kernighan and Rob Pike later talked about in their book The Practice Of Programming.

I ran this off-target, on a VAX/VMS. VMS supported a simple Ctrl-T key in the terminal interface that would show the process memory consumption, similar to what the Linux ps command shows. The turnaround time on playing with this setup was minutes, build and run, with the full support of an OS to help me chase things down, including good old printf logging and customized dumping of data structures (VMS also had a debugger similar to gdb).
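
In the same spirit (this is my reconstruction of the approach, not the original code), counting allocations directly gives you the same leak signal on any platform:

```c
/* Leak-hunting soak test (my reconstruction of the approach, not the
 * original code). Counting wrappers stand in for direct malloc/free calls,
 * so live-allocation growth is visible after every pass. */
#include <stdio.h>
#include <stdlib.h>

static long liveAllocations;

static void *counted_malloc(size_t n) { liveAllocations++; return malloc(n); }
static void counted_free(void *p)     { liveAllocations--; free(p); }

/* Stand-ins for the B-tree under test; the real fixture linked the actual code. */
static void *btree_insert(int key) { (void)key; return counted_malloc(64); }
static void btree_delete(void *node) { counted_free(node); }

int main(void)
{
    for (int pass = 0; pass < 10; pass++) {
        void *nodes[1000];
        for (int i = 0; i < 1000; i++) nodes[i] = btree_insert(i);
        for (int i = 0; i < 1000; i++) btree_delete(nodes[i]);
        /* A leak shows up as a count that climbs instead of returning to 0. */
        printf("pass %d: live allocations = %ld\n", pass, liveAllocations);
    }
    return 0;
}
```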

With this I could see that under some patterns, memory consumption was monotonically increasing. So yeah, a memory leak. Further exploration allowed me to home in on the right area of the code.

It was right there in the B-tree memory release code: it would free the main B-tree nodes (the main large data element being managed), but not the associated pointer nodes that it used for bookkeeping. So on every B-Tree node release, it would leak 8 bytes of memory.

This was a case of a very slow memory leak, that only manifested with lots of table changes. In a customer environment, it could take a long time to chew through the memory. In the lab running on-target, it was even slower, since we didn't know what the cause was, so we didn't know how to trigger and reproduce it.

Off-target, it took less than an hour to find. Code change: add two lines to free the pointer nodes. This was after many man-weeks of effort for all the people involved in trying to reproduce and chase down the problem, plus all the aggravation caused at customer sites. Ten minutes to code, build, and verify the fix.

The second case was just recently. I implemented an FSM based directly on the book Models to Code: With No Mysterious Gaps, by Leon Starr, Andrew Mangogna, and Stephen J. Mellor (backed up by Executable UML: A Foundation for Model Driven Architecture, by Stephen J. Mellor and Marc J. Balcer, and Executable UML: How to Build Class Models by Leon Starr; I highly recommend the trio of books). Thank you, Leon, Andrew, Stephen, and Marc!

I was also reading Robert C. Martin's Clean Architecture: A Craftsman's Guide to Software Structure and Design and Clean Code: A Handbook of Agile Software Craftsmanship at the time, which heavily influenced the work (and finally motivated me to fully commit to TDD; Grenning contributed the chapter "Clean Embedded Architecture" to Clean Architecture). Mellor and Martin are both additional Agile Manifesto authors.

A product of all this reading, the FSM was a hand-built and hand-translated version of the MDA (Model-Driven Architecture) approach, in C on a PIC32 running a bare-metal superloop.

The FSM performed polymorphic control of cellular communication modules connected via a UART. The modules use the old Hayes modem "AT" command set to connect to the cell network and perform TCP/IP communications.

It was polymorphic because it had to support 4 different modules from 2 different vendors, each with their own variation of AT commands and patterns of asynchronous notifications (URC's, Unsolicited Result Codes).

If you think LANs and WANs are squirrelly, just wait till you try cellular networks. I could hardly get two test runs to repeat the same path through the FSM. Worse, there were corner cases that the network would only trigger occasionally.

It was horribly non-deterministic. How can I be sure I've built the right thing when I can't stimulate the system to produce the behavior I want to exercise?

The solution: build a nearly-full-coverage test suite to run off-target. I built a trivial simulator with fake system clock and UART ISR that I ran on an Ubuntu VM on my Mac. That gave me full support for logging and gdb.
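
The simulator really was trivial. A sketch of its shape (illustrative of the approach, not the actual project code):

```c
/* Fake system clock and fake UART ISR for off-target testing (illustrative
 * of the approach, not the actual project code). */
#include <stdint.h>
#include <stdio.h>

static uint32_t fakeMillis;
uint32_t clock_millis(void)          { return fakeMillis; }  /* the FSM reads this   */
void fake_clock_advance(uint32_t ms) { fakeMillis += ms; }   /* the tests drive time */

/* On target this would be the real UART RX interrupt handler feeding the FSM. */
static void uart_rx_isr(uint8_t byte) { putchar(byte); }

/* Tests inject whole module responses, e.g. a modem's "OK\r\n": */
static void fake_uart_receive(const char *s)
{
    while (*s) uart_rx_isr((uint8_t)*s++);
}

int main(void)
{
    fake_uart_receive("OK\r\n");  /* simulate the module answering a command */
    fake_clock_advance(5000);     /* simulate a 5-second silence for timeout paths */
    printf("t = %u ms\n", (unsigned)clock_millis());
    return 0;
}
```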

This wasn't quite TDD, but it was one step away: it was Test-After Development, and instead of Google Test/Mock or some other framework, I built my own ad-hoc fakes and EXPECT handling.
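
The ad-hoc EXPECT handling needn't be more than a macro or two, something like this sketch (not the actual project code):

```c
/* Ad-hoc EXPECT handling can be as small as this (a sketch, not the actual
 * project code). */
#include <stdio.h>

static int failures;

#define EXPECT_EQ(expected, actual)                          \
    do {                                                     \
        if ((expected) != (actual)) {                        \
            printf("FAIL %s:%d: %s != %s\n",                 \
                   __FILE__, __LINE__, #expected, #actual);  \
            failures++;                                      \
        }                                                    \
    } while (0)

int main(void)
{
    EXPECT_EQ(4, 2 + 2);   /* passes silently */
    EXPECT_EQ(5, 2 + 2);   /* prints a FAIL line with file and line number */
    printf("%d failure(s)\n", failures);
    return failures != 0;
}
```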

With this I was able to create scenarios to test every path in the FSM, for all the module variants. Since I had control of the fake clock and ISR, I could drive all kinds of timing conditions and module responses. It did help that the superloop environment was pure RTC (Run To Completion, which coincidentally is required for Executable UML state machines), rather than preemptive multitasking/multithreading. But I could have faked that as well if necessary.

I was able to fix several bugs that would have been hell to reproduce and debug on-target. In all cases, just as with the B-tree, the code changes were trivial. The time-consuming and hard part is always the debug phase to figure out what's going wrong and what needs to be changed. Doing the actual changes is usually simple.

That debug phase is where non-TDD approaches run into trouble, especially when they have to be done on-target. It can consume unbounded amounts of development time. The time required to do TDD is far shorter, and for a significant number of problems can either completely eliminate the debug phase, or narrowly direct it to the specific code path of a failing test.

The third case was this past week, when I did my first true TDD off-target thing for some embedded code. The platform is embedded Linux on ARM, so full OS, with cross-compiled C++.

I built the code in my Ubuntu VM and used Google Test/Mock, mocking out the file I/O (standard input stream and file output) and system clock. The code wasn’t particularly complex, but it did have a corner case dealing with a full buffer that represented the greatest bug risk.

I used very thin InputStreamInterface, OutputFileInterface, and ClockInterface classes as the OSAL (Operating System Abstraction Layer) to provide testability (thank you, Robert and James!).

It was gloriously wonderful and liberating to build it TDD style, red-green-refactor, and I knew I had all the paths in the code covered, including the unusual ones. That instills great confidence in what I did. No more worrying that I got everything right. I was able to demonstrate that I did.

Did it take a little extra time? Sure, mostly because I’m still on the learning curve getting into the TDD flow. But if I hadn’t used TDD and this code had produced failures, it would take me longer after the fact to chase down the bug. Plus I was able to avoid the impact on all the other people in the organization affected by the development turnaround cycle.

And just today, I added more behavior to that component using the TDD method. I was able to work fully confident that I wasn't breaking any of the stuff I had already done, and just as confident in the new code.

So I'm definitely a believer in off-target testing, and from now on I'll be doing it TDD.

Another benefit of this off-target TDD model? Working out of that Ubuntu VM on my Mac, I'm totally portable. I can work anywhere, at a coffee shop, on the train commuting, at the airport, on a plane, at home. I can be just as productive as if I had my full embedded development environment in front of me. Then once I'm back at my full environment, I have tested, running code ready to go.

For reference, these are the books that taught me TDD while in different jobs, both highly recommended:
  • Test Driven Development: By Example, by Kent Beck (yet another Agile Manifesto author). I was introduced to the book and TDD in general by new coworker Steve Vinoski in 2007, whose cred in my eyes went way up when I noticed his name in the acknowledgements of James O. Coplien's Advanced C++ Programming Styles and Idioms.
  • Working Effectively With Legacy Code, by Michael C. Feathers. Amazon tells me I bought this in 2013. At the time I used it to start adding unit test coverage to our codebase at work. What makes this book particularly useful is the fact that nearly all software development requires working with legacy code to some degree, even on brand new projects. It also helps you avoid creating a legacy of code that future developers will curse. 

Tuesday, May 22, 2018

Review: Make: Electronics and Make: More Electronics


Charles Platt's Make: Electronics and Make: More Electronics.

Amazon links:
If you're interested in learning electronics, I highly recommend these two books by Charles Platt. They are hands down the best books I have ever seen on the subject, spectacular resources for the beginner.

Rather than focusing on theory, Platt jumps right into hands-on experimentation. The books are organized as a series of experiments and circuit-building projects that build knowledge incrementally.

He calls it "learning by discovery". He then follows up with just enough theory to explain what's going on. This is an extremely effective method that avoids getting bogged down.

I first learned electrical theory in high school science. I learned Ohm's Law and basic circuit layout, including the equations for computing series and parallel resistance. I learned further details in college physics.

But these didn't really cover the practical details of electronics. They didn't address detailed circuit design, combining components into useful projects.

I started to learn some of those details from the books of Forrest M. Mims III and from George Young's Digital Electronics: A Hands-on Learning Approach. The latter introduced me to integrated circuit chips (IC's) and digital logic, as well as breadboard experimentation.

That constituted the bulk of my electronics knowledge for the past 35 years. But there was still a lot missing, particularly an intuitive understanding of electronics and all those other random parts surrounding the IC's.

Then I found Platt's books. Platt has a real gift for explaining things at an intuitive level in just a few concise paragraphs and clear diagrams.

He delves deep into the practical details. No detail is too small. For a beginner trying to learn from a book, this is critical.

He explains the most basic things so that you know how to wire up a breadboard and check things with a meter. He shows how things work internally, both mechanically and electrically, so a component isn't just an opaque black box.

The color diagrams are outstanding. One thing I really like is the way he steps from a circuit schematic diagram, to a breadboard-friendly schematic, to a breadboard component and wiring diagram, to a component value diagram, to an under-the-covers diagram illustrating all the electrical paths in the wiring and the breadboard connections hidden beneath.

He takes several projects from breadboard to final soldered board built into a simple enclosure. This shows how you can turn your experiment into a completed useful or fun gadget.

The diagrams and project builds really show where other books fall short. Most books show a schematic, and maybe a completed breadboard or a completed wired-up project. But they don't show the stepwise process to get from the start to the end.

That process is not always obvious and is full of opportunities for mistakes, so having it laid out in detail is a huge benefit. He also covers some of the things that can go wrong and how to diagnose and fix them.

The clarity of the diagrams and overall layout make the books very readable. This is another improvement over other books.

If you send a registration email to Platt, he'll add you to his email list for a bonus project and book updates.

Microcontrollers

Platt doesn't emphasize microcontrollers in these books. In an age where Arduinos, Raspberry Pi's, and other microcontrollers allow you to solve nearly any problem with a little embedded software, he mostly shows you what you can do without them. One experiment does cover using an Arduino.

He also discusses the pros and cons of replacing the discrete components with microcontrollers in several experiments. This is actually very useful from an engineering standpoint, giving you choices in how to implement things.

That also ensures that if you do incorporate microcontrollers into your projects, you understand how to integrate them with external components. There are many books on getting started with microcontrollers, but they tend to gloss over the details of those other components, assuming you already understand them. Which you will if you read these books!

Component Kits

Component kits for all the experiments in the first book are available online. You can certainly gather parts on your own, but the kits offer you one-stop shopping of the correct parts.

I used ProTechTrader, the supplier he recommends in his email, and I recommend them highly based on my experience. Make sure you get the kits for the 2nd edition.

The kits are available for the best price directly from the ProTechTrader website. They offer 3 kits, covering experiments 1-11, 12-24, and 25-34. Each is available in regular and deluxe versions. The deluxe versions add things like a digital multimeter, soldering iron, 9V power supply, and upgraded magnet.

I purchased the regular version of each kit, since I already had most of the deluxe items, with free economy 3-10-day shipping. The kits arrived in 3 days.

While you pay a little extra per part for convenience vs. buying everything separately, it was well worth it. The parts are extremely well organized. They're bagged and labeled by value, stored in compartmentalized containers, and identified by experiment.

Don't underestimate the value of the labor that went into that. Platt dedicates several pages in his book to organization of workspace and parts. That's key to efficient work. Rummaging around in a box of loose parts will make you tear your hair out.

Additional Books


Platt's Make: Tools and 3-volume Encyclopedia of Electronic Components.

Platt has several additional books that make useful companions to this pair:
The first book (no, it's not about how to make tools, it's just part of the Make: series) covers the basic hand and small power tools you'll find at home centers and hardware stores, showing how to use them to build small projects. It features Platt's usual deep attention to practical details.

The book contains a number of simple projects in wood and plastic. The methods for working with plastic are particularly noteworthy, because while there are many books about woodworking, there aren't many about plastic.

These are the skills you need to build different styles of enclosures and stands for your electronics projects, and can also be applied to other mechanical aspects such as robotics.

The remaining books are a 3-volume encyclopedia of electronic components. This is all the information that he didn't have room for in the other books, plus more. Where those books were written as tutorials, this is a reference set.

He's compiled a vast trove of information culled from manufacturer data sheets, tutorials, reference books, and other sources to create a centralized, practical one-stop resource.

Need to know pinouts, sample circuits, voltage levels, alternative packages? You can find them here, in Platt's signature level of detail.

Sunday, April 29, 2018

How To Ace Calculus

This is the method I used to ace 3 semesters of calculus. Also linear algebra, differential equations, and a semester of physics. It should work for any math or science class, at any level.

When I say ace, I mean getting a grade of 100 on most homework, quizzes, tests, midterms, and finals. In all cases, the top grade in the class. Yes, I was the one breaking the curve.

Sounds arrogant? Well, I didn't start off in that lofty position. So before I give you the recipe for success, let me give you the recipe for failure.

The Recipe For Failure

In 1978, I entered Northwestern University, in Evanston, IL, as a mechanical engineering major. My dream was to work for NASA.

This was a major in the Technical Institute, requiring calculus, physics, and mechanics (statics and dynamics).

The recipe:
  1. Show up for all classes and pay attention.
  2. Complete all reading assignments on time.
  3. Complete all assigned homework on time.
  4. Study for tests, reviewing homework.
This was the recipe that had gotten me through high school, where I was usually able to do most of the homework in the last 5 minutes of class allocated for that purpose. Then just a few minutes at home to complete it, and another 10 or 15 to read the next section.

Sounds like a pretty good plan, right? Sounds like a good student, right?

The problem was that the material in college was more difficult and faster paced. Doing just the assigned problems was barely enough to keep your head above water. That didn't give you enough practice thinking through and performing the work.

Back in algebra, problems were simple: each had one procedure to follow to the solution. Calculus wasn't like that. There were multiple procedures depending on the style of the equation. A good portion of the battle was classifying the equation to determine what approach to bring to it.
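For example (my own illustration, not a problem from any particular course), consider two integrals that differ by a single exponent:

  ∫ x e^(x^2) dx = e^(x^2)/2 + C    (substitution, with u = x^2)
  ∫ x e^x dx = (x - 1) e^x + C      (integration by parts)

One yields to substitution, the other to integration by parts. The hard part isn't executing either procedure; it's recognizing, at a glance, which one applies.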

Physics was similar. Both subjects required a more analytical approach. That meant building up a database of problem types in your mind so you could pick the right approach. And that meant experience doing lots of problems.

The result of following that recipe? C's, D's, and finally, an F in physics. Where I had prided myself on my math and science abilities, my favorite subjects, I had failed. Distraught, I dropped out of Northwestern.

The Recipe For Success

About 5 years later, I started part-time classes at Richland College, part of the Dallas County Community College District.

Oh sure, you may say, community college. That's easy; it's not a real college.

Negative. Richland used exactly the same textbooks as Northwestern, just the next editions, so it was exactly the same material. And I had Ralph Esparza as instructor for calculus I and III. Ralph was feared among students as a tough math teacher; who cares if it's community college or Ivy League?

I was determined to repeat all those classes in my favorite subjects, and do well in them. Somewhere along the way, I had realized the need to do more than the minimum.

The recipe:
  1. Show up for all classes and pay attention.
  2. Complete all reading assignments on time.
  3. Complete all assigned homework on time.
  4. Complete all remaining odd-numbered problems in the section and check against answers in the back.
  5. Complete all remaining even-numbered problems in the section.
  6. Study for tests, redoing all problems.
This boils down to doing every problem in every section of the textbook at least twice.

You may say, that's a lot of work. Yes, it is.

In Nike ads, athletes show how tough they are. Just do it. Be tough.

The result? Redemption.

Saturday, April 21, 2018

Sodoto: See One, Do One, Teach One

Here's a useful strategy on this learning path: see one, do one, teach one. Sodoto.

Sodoto is a learning method and a teaching method rolled into one. It's the cycle of knowledge.

I'm familiar with it from medicine. My wife is a surgical nurse, and this is the traditional method of teaching in surgery. Obviously, safety concerns mean that you don't just watch a brain surgeon at work and then go try it yourself.

But this forms a useful pattern of mentoring and learning and passing knowledge along. It applies to any kind of knowledge- or skill-based activity.

It works with a single student at a time, or a whole group. You don't have to be a formal teacher.

See one: watch someone do a procedure.

Do one: do what you saw.

Teach one: show it to someone else.

Once you learn a procedure, you're primed to teach it. That's how knowledge spreads.

Here's the real kicker: the teaching step is actually a powerful learning step for you as the teacher. It locks the knowledge into your brain.

You have to have sorted out what you're talking about in order to teach it. You can't just vaguely know it and wave your hands in the air glossing over details. Your students will be annoyed and you'll feel stupid.

The process of getting ready to teach and then doing the teaching forces you to organize your thoughts and chase down details, because you don't want to look stupid, and you want to be prepared for questions.

That motivates you to dig deeper. As a result, you end up learning more yourself.

There are two keys to making this work: background knowledge, and the experience of doing it.

Background Knowledge

Background knowledge applies at each stage of see, do, and teach. Note that "see" can mean live and in person, or on video.

Whatever the subject (medicine, coding, building anything from woodworking to electronics), any knowledge you have before seeing the procedure will help you understand it. You can bet that surgeon learning how to do brain surgery brought a huge amount of background knowledge.

Some things take minimal background, just the random skills and knowledge you already have from life. But more difficult subjects benefit from whatever time you can invest beforehand. Videos, books, blogs, and articles, in print and online, are all good resources, as well as online forums.

That establishes the background knowledge you'll bring to seeing the procedure.

Once you've seen the procedure, as you prepare to do it yourself, it's useful to go back to your resources. Now that you know better what to look for, you can get more details. You can reinforce what you saw.

That expands the background knowledge you'll bring to doing the procedure.

Once you've done the procedure, as you prepare to teach it, go back to your resources again. As a result of doing, there will be details you want to fill in, and you may understand the material better. You may have run into some things that you wished you knew more about. You may anticipate additional questions from your students.

That further expands the background knowledge you'll bring to teaching the procedure.

Experience Of Doing

The experience of doing the procedure is critical. That's where you have the opportunity to work through mistakes and see what works and doesn't work for you. That's where you start to lock it into your brain.

Don't be afraid to make mistakes! Mistakes are great learning opportunities. As long as there's no injury and no damage, there's no harm done.

This is also where you can work out your own changes to the procedure. Just because you saw it done one way doesn't mean that's the only way to do it. That was one way. You can use it as your starting point, and add your own tweaks.

Or maybe you'll realize what you saw really was a good way and you shouldn't mess with it.

You might need to do the procedure more than once before teaching it. Some procedures take practice before you feel confident teaching them to someone else.

The experience of teaching the procedure will be different from the experience of doing it for yourself. Your students may have questions or difficulties that force you to think about things in different ways.

Plus there's the pressure of performing for an audience. But as you gain experience teaching, that will get easier. It's just a different kind of doing.

The experience of teaching is where you finish locking it into your brain.

Teamwork

Sodoto is a great method for dividing up a project. Whether at work, at school, or with your friends, each person can take on a part.

They go off and see how it's done, do it themselves until they feel ready, then bring it back to the team to teach everyone else.

What if you can't agree how to divide it up because multiple people want to do the same thing? Fine! Let them!

Each person will have their own take on the experience and teach it slightly differently. That helps explore all the possibilities in the procedure.