
Friday, November 23, 2018

Review: Web-Based Course Test-Driven Development For Embedded C/C++, James W. Grenning


Full disclosure: I was given a seat in this course by James Grenning.

I took James Grenning's 3-day web-based course Test-Driven Development for Embedded C/C++ September 4-6, 2018. It was organized as a live online delivery, 5 hours each day. The schedule worked out perfectly for me in Boston, starting at 9AM each morning, but he had attendees from as far west as California and as far east as Germany, spanning 9 time zones.

The participants ranged from junior level embedded developers to those with more than 20 years of experience. One worked in a fully MISRA-compliant environment. This was the first introduction to TDD for some of them.

The course was organized as blocks consisting of presentation and discussion, coding demo, then live coding exercises. It used CppUTest as the TDD framework.

The short answer: this is an outstanding course. It will change the way you work. I highly recommend it, well worth the investment in time and money. The remote delivery method worked great.

I had previously read Grenning's book, Test Driven Development for Embedded C, which I reviewed in August. I covered a lot of his technical information on TDD in the review, so I'll only touch on that briefly here. He covers the same material in the course presentation portions.

The course naturally has a lot of overlap with the book, so each can serve as a standalone resource. But I found the combination of the two to be extremely valuable. They complement each other well because the book provides room to delve more deeply into background information, while the course provides guided practice time with an expert.

Reading the book first meant I was fully grounded in the motivations and technical concepts of TDD, so I was ready for them when he covered them in the course. I was also already convinced of its value. What the live course brings to that part is the opportunity to ask questions and discuss things.

You can certainly take the course without first reading the book, which was the case for several of the participants.


Presentations

For the presentation portions, Grenning covered the issues with development using non-TDD practices, what he calls "debug-later programming" (DLP). This consists of a long phase of debug fire-fighting at the end of development that often leaves bugs behind.

He introduced the TDD microcycle, comparing the physics of DLP to the physics of TDD. By physics he means the time domain, the time taken from injection of a bug (repeat after me: "I are a ingenuer, I make misteaks, I write bugs") to its removal. This is one of the most compelling arguments for adopting TDD. TDD significantly compresses that time frame.

He covered how to apply the process to embedded code and some of the design considerations. He also talked about the common concerns people have about TDD.

One quote from Kent Beck that I really liked:
TDD is a great excuse to think about the problem before you think about the solution.
He covered the concept of test fakes and the use of "spies" to capture information. He covered mocks as well, including mocking the actual hardware so that you can run your tests off-target.
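The spy idea can be shown in a few lines. This is a minimal sketch, not code from the course; the `LightDriver` interface and the function names are hypothetical stand-ins for a hardware-facing API. The spy simply records what the production code asked the "hardware" to do, so a test can inspect it afterward:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical production-code interface for a light driver.
struct LightDriver {
    virtual ~LightDriver() = default;
    virtual void turnOn(int id) = 0;
    virtual void turnOff(int id) = 0;
};

// A "spy" fake: records calls so the test can verify what happened,
// without any real hardware in the loop.
struct LightDriverSpy : LightDriver {
    std::vector<std::string> calls;
    void turnOn(int id) override  { calls.push_back("on:" + std::to_string(id)); }
    void turnOff(int id) override { calls.push_back("off:" + std::to_string(id)); }
};

// Code under test depends only on the interface, so it runs off-target.
void flashLight(LightDriver& driver, int id) {
    driver.turnOn(id);
    driver.turnOff(id);
}
```

Because the production code talks to an interface rather than registers, the same code links against the spy in tests and against the real driver on-target.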

He covered refactoring to keep tests easy to follow and maintain. He also covered refactoring of "legacy code" (i.e. any production code that was not built using TDD), including "code smells" and "code rot", using TDD to provide a safety harness. This included a great quote from Donald Knuth (bold emphasis mine):
Let us change our traditional attitude to the construction of programs. Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.

Coding Demos

Grenning performed two primary live coding demos. First, he used TDD to build a circular buffer, a common data structure in embedded systems. He used this to demonstrate the stepwise process of applying the TDD microcycle.
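To give a flavor of where that stepwise process ends up, here is a minimal circular buffer of the kind the demo builds, written from my own understanding rather than transcribed from the course, and using plain asserts in place of CppUTest assertions. Each method corresponds to behavior that would have been driven out by one or more small tests:

```cpp
#include <cassert>
#include <cstddef>

// A minimal fixed-capacity circular buffer, the kind of structure that
// grows one failing-test-at-a-time under the TDD microcycle.
class CircularBuffer {
public:
    explicit CircularBuffer(std::size_t capacity)
        : buf_(new int[capacity]), cap_(capacity) {}
    ~CircularBuffer() { delete[] buf_; }
    bool isEmpty() const { return count_ == 0; }
    bool isFull() const  { return count_ == cap_; }
    bool put(int v) {                      // returns false when full
        if (isFull()) return false;
        buf_[(head_ + count_) % cap_] = v; // wrap around the fixed storage
        ++count_;
        return true;
    }
    bool get(int* out) {                   // returns false when empty
        if (isEmpty()) return false;
        *out = buf_[head_];
        head_ = (head_ + 1) % cap_;
        --count_;
        return true;
    }
private:
    int* buf_;
    std::size_t cap_;
    std::size_t head_ = 0;
    std::size_t count_ = 0;
};
```

In the TDD style, `isEmpty` would exist before `put`, and the wrap-around arithmetic would only appear once a test for filling past the end of the array failed.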

Second, he performed a refactoring demo on a set of tests. He used this to show how to apply the refactoring steps to simplify the tests and make the test suite more readable and maintainable.

This was just as valuable as the TDD microcycle, because a clean test suite means it will live a long and useful life. Failing to refactor and keep it clean risks making it a one-off throwaway after getting its initial value.


Coding Exercises

Grenning uses Cyber-Dojo to conduct exercises (as well as his demos). This is a ready-to-use, cloud-based code-build-run environment, hosted on Linux VMs, that allows each student to work individually while he monitors everyone's code as they work. This turned out to be one of the most valuable aspects of the course.

I should also mention that I had read Jeff Langr's book Modern C++ Programming with Test-Driven Development: Code Better, Sleep Better in between reading Grenning's book and taking this course. Langr puts a lot of emphasis on short, easily-readable tests, and that's something that also comes out in Grenning's class.

What was so valuable about doing these exercises in Cyber-Dojo is that Grenning was able to stop someone who was heading off in the wrong direction and quickly bring them back on track, or help them if they weren't sure what to do next. That fast feedback cycle is very much in tune with TDD itself. It works just as well as a teaching method.

So if someone started writing code without a test, or wrote too much code for what the test covered, or had too much duplication in tests, or had too much setup that could be factored out, he let them know and guided them back. In some cases he interrupted the exercise to go through someone's code on the screen with everybody watching, not to put them on the spot, but to cover the issues that we all run into.

That's critical because learning to truly work effectively in TDD style requires a reorientation of your thinking. We all have the coding habits of years that need to be overcome.

That doesn't happen automatically just because you read a book and have decided to follow it. It takes effort; half the effort is just noticing that you're straying from the intended path. That's the value of having a live instructor who can watch over your shoulder. It's like being an apprentice under the watchful eye of a master craftsman.

For me, this was ultimately the greatest value in the class. Having Grenning provide real-time guidance had an immediate effect on my coding, for both the test code and the production code. Whether it was talking about my mistakes or someone else's, I was able to immediately improve my work.

That made a huge difference between the test code I wrote before the class and the test code I wrote by the end of the class.

The coding exercises were building our own circular buffer, building a light controller spy, using TDD with the spy to implement a light scheduler, and implementing a flash chip driver. Note that these exercises are also covered in his book.

I also found that Cyber-Dojo made for an interesting example of pair programming, something I've never done before. Grenning provided initial files to work on, like a pair partner guiding you in the next step, then provided active feedback, like a partner asking questions and making suggestions: "Are you missing something there? What if you tried this? Wait, before you do that...".


The Big Lesson

The big lesson for me from this course was that it finally drove home that TDD is ALL ABOUT DEVELOPMENT! Sometimes I have to be clubbed over the head for something to really sink in, and that's what happened here.

We get so focused on the word "test" in TDD that we jump to the conclusion that it's just a test methodology. We emphasize test, as in TEST-Driven Development.

But really, the emphasis should be reversed, it's Test-Driven DEVELOPMENT. That means you apply design concepts and address the requirements of the product as you engage in a very active development thought process that is driven forward by tests.

Did you ever write some throwaway test code just so you could see how something worked, or to explore some design ideas? Hmmm, well TDD formalizes that.

The fact that you do end up with useful unit tests is almost a side effect of the process. An extremely valuable side effect, but a side effect nonetheless.

The real output of the process is working production code. That's what really matters. That's the real goal.

At some point on the last day of the course, I recognized the change in emphasis deep in my being. Maybe the difference is subtle, but it is critical.

That recognition first started to dawn after I read the book and applied it at work. I was amazed at the cleanliness of the resulting code. It was DRY and DAMP and SOLID, with no further refinement or debugging required.

Yes, I had a unit test suite. But look at the production code! It was breathtaking, right out of the chute. That was motivating.

It was in that receptive frame of mind that I did the coding exercises in the course. That was when the club hit. It was one of those moments of realization where you divide time into what came before, and what came after, the physical moment of grok, providing a whole new lens through which to perceive the work.

Savor that consideration for a moment.

People have been saying for years that TDD is about development, but we tend to focus on the test. Grenning emphasizes development when he talks about developing while "test-driving", meaning he is doing his development driven by tests. I guess it just takes time for the real implications to sink in.

One of Grenning's slides quotes Edsger Dijkstra:
Those who want really reliable software will discover that they must find means of avoiding the majority of bugs to start with, and as a result, the programming process will become cheaper. If you want more effective programmers, you will discover that they should not waste their time debugging, they should not introduce the bugs to start with.
While we all aspire to be like Dijkstra, this seems like a pipe dream. Until you realize that TDD does exactly that. It provides the shortest path to working software. I think he would have liked that.

Now that I've relegated the test aspect of this to second-class citizenship, let me bring it back to prominence.

The testing aspect approaches Dijkstra's ideal, because it finds bugs immediately as part of the code, build, test cycle. So the bugs are squashed before they've had time to scatter and hide in the dark corners. That reduces the dreaded unbounded post-development debug cycle to near zero.

If you don't let bugs get into the code, you won't have to spend time later removing them. Yeah, what Dijkstra said.

This doesn't guarantee bug-free code. There might still be bugs that arise during the integration of individually working parts (for example, one module uses feet while another uses inches), or the code may not have addressed the requirements properly (the customer wanted a moving average of the last 5 data points, while the code averages all data points). But as a functional unit, each module is internally consistent and working according to its API.

The resulting unit test suite is an extremely valuable resource, just as valuable as the production code. What makes it so valuable? Two things: safety harness, and example usage.

It provides a safety harness to allow you to do additional work on the code, then run the suite as a regression test to prove you haven't broken anything. Or to detect breakage so you can fix it immediately.

Using and extending the suite liberates you to make changes to the code safely. Need to add some functionality? Fix one of those integration or requirements bugs? Refactor for better performance or maintainability? Clean up some tech debt? Have at it.

You can instantly prove to yourself that you haven't screwed anything up, or show that you have, so that you can fix it before it ever gets committed to the codebase. No one will ever see that dirty laundry.

It provides example usage, showing how to use the API: how to call the various functions, in what order, how to setup and drive various behavioral scenarios, how to exercise the interfaces for different functional behaviors, how different parameters affect things, how to interpret return values.

This is real, live code, showing how to use the production code as a client. You can even get creative and add exploratory tests that push the production code in odd directions to see what happens. Grenning calls these characterization tests and learning tests.

The test suite is actually something quite magical: self-updating documentation! Since you need to invest the time to maintain the tests in order to get the development done, you are also automatically updating the example usage documentation for free.

You might argue that tools like Doxygen offer similar self-updating capability, but they still require updating textual explanations along with the code. They are subject to the same staleness that can happen with any comments, where the comments (or Doxygen annotations) aren't kept up to date with code changes (see Tim Ottinger's Rules for Commenting for advice to help avoid stale comments).

But if you want to really know how to use the production code, go read the tests! If you've truly followed the TDD process as Grenning shows you in this course, they will tell you how to produce every bit of behavior that it is capable of, because every bit of behavior implemented will have been driven by the tests.

That's the full-circle, closed-loop feedback power of test-driven DEVELOPMENT.

Doxygen still has its place. I think of the Doxygen docs as API reference, while the test suite is API tutorial, showing actual usage.


Another Lesson

I've already alluded to the other interesting lesson that I drew from this course: it takes practice! We're not used to working like this, so it takes practice and self-awareness to learn how to do it.

That was particularly driven home by the coding exercises. Even though I had just read his book and followed through the exact same exercises, and read Langr's book, and applied the knowledge at work, I still had trouble getting rolling on the first couple of exercises. It was a matter of instilling the new habits.

It took a few times having Grenning redirect me (or listen to the advice he gave someone else). By the final exercise, after the benefit of his live feedback, I was able to catch myself in time and start applying the habits on my own.

It's still going to take some time. I'll know I've gotten there when I start thinking of the tests automatically as the first step of coding.


Third Time's A Charm

At one point in the discussion I mentioned that Grenning's book and this course represented my third attempt at using TDD, and one of the participants said he would be interested in hearing about my previous attempts.

My first attempt was in 2007, when I was introduced to TDD by a coworker. I read Kent Beck's Test Driven Development: By Example and used it to develop the playback control module for a large video-on-demand server intended for use in cable provider head ends.

This was both a great success and a classic failure. It was a great success in that it accelerated my work on the module, avoiding many bugs and shortening the debug cycle. In that respect it lived up to the promise of TDD completely.

It was a classic failure in that I made the tests far too brittle. I put too much internal knowledge of the module in them, with many internal details that were useful when I was first developing the module, but that became a severe impediment to ongoing maintenance.

The classic symptom of this problem was that a minor change in implementation would cause a cascade of test failures. The production code was fine, but some internal detail such as a counter value that was being checked by the tests had changed. Otherwise the test code itself was also fine. But I had overburdened it with details that should have been hidden by encapsulation.
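A tiny, entirely hypothetical example shows the difference (this is not the actual playback module, just the shape of the mistake). The behavioral test survives implementation changes; the brittle one breaks the moment an internal counter is touched:

```cpp
#include <cassert>

// Hypothetical module used to contrast behavioral and brittle tests.
class Playback {
public:
    void play()  { playing_ = true;  ++transitions_; }
    void pause() { playing_ = false; ++transitions_; }
    bool isPlaying() const { return playing_; }
    // Internal bookkeeping; exposing it to tests invites brittleness.
    int transitionCount() const { return transitions_; }
private:
    bool playing_ = false;
    int transitions_ = 0;
};

void behavioralTest(Playback& p) {
    // Robust: asserts only externally observable behavior.
    p.play();
    assert(p.isPlaying());
    p.pause();
    assert(!p.isPlaying());
}

void brittleTest(Playback& p) {
    // Fragile: any harmless change to how transitions are counted
    // breaks this, even though the production behavior is unchanged.
    assert(p.transitionCount() == 2);
}
```

My 2007 suite was full of the second kind of assertion, which is why an innocuous implementation change could cascade into dozens of failures.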

The result was that ultimately I had to abandon the test suite. It had provided good initial value, but failed to deliver on-going value because it became a severe maintenance burden.

This is exactly the type of situation that Grenning's course seeks to prevent. During coding exercises, he watches out for cases of inappropriate information exposure. Thus another benefit of this is improved encapsulation and information hiding.

My second attempt was in 2013, when I wanted to refactor some of the code in an IP acceleration server as part of improvements to one of its features. I had read Michael Feathers' Working Effectively with Legacy Code, and found that many of the things he covered applied to the codebase I was working on.

This was a revenue-generating service product, so I needed to be sure I didn't break it.

The main strategy the book covers is to use TDD to provide that safety harness I mentioned above, in order to verify that the legacy code behaves the same after modification as it did before.

I began building a set of test fakes that could be used with Google Test. One issue was that the code relied heavily on the singleton pattern, so there always had to be some implementation of each class that would satisfy the linker. And of course there were chains of such dependencies interlocked in a web.

My first task was to replace that bit by bit with dependency injection. I focused just on the parts necessary to allow me to test the area I was modifying. Part of Feathers' strategy is to tackle just enough of the system at a time to be able to make progress, rather than a wholesale break-everything-down-and-rebuild approach.
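The mechanics of that replacement look roughly like this. The names here are hypothetical (the real codebase was an IP acceleration server, not a logger), but the pattern is the standard one: introduce an interface for the collaborator, inject it through the constructor, and the singleton call sites disappear:

```cpp
#include <cassert>
#include <string>

// Before: client code called Logger::instance() directly, so tests
// couldn't substitute a fake. After: the dependency is injected.
struct LoggerInterface {
    virtual ~LoggerInterface() = default;
    virtual void log(const std::string& msg) = 0;
};

// A fake that satisfies the linker and lets tests observe the calls.
struct FakeLogger : LoggerInterface {
    std::string last;
    void log(const std::string& msg) override { last = msg; }
};

// Production code now receives its collaborator as a constructor
// argument instead of reaching for a global singleton.
class RequestHandler {
public:
    explicit RequestHandler(LoggerInterface& log) : log_(log) {}
    void handle(const std::string& req) { log_.log("handled " + req); }
private:
    LoggerInterface& log_;
};
```

On-target, the real logger is passed in at startup; off-target, the fake is. Breaking one singleton at a time like this is exactly the "just enough of the system" incrementalism Feathers describes.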

I had enough success with this that once I finished my primary work on the feature changes, I embarked on a background project to put the entire codebase into 100% dependency injection. That would allow me to build unit tests for any arbitrary component, in combination with any set of faked dependencies, with the longer-term goal of building out near-100% unit test coverage incrementally.

However, not too long after starting this, I ended up changing jobs. So once again I got the short-term benefit from TDD, but didn't reap the long-term benefit. It was a useful exercise to go through, though, providing good experience on how to migrate such a codebase to TDD.

This is another area that Grenning's course covers.


Related Links

For the perspective of another class participant, see Phillip Johnston's post What I Learned from James Grenning's Remote TDD Course.

There are things about the TDD process that make people suspicious. Is it just hacking? In this interview with Grenning, embedded systems expert Jack Ganssle raises some of those concerns. Grenning explains how the process works to reach the goal of well-designed, working production code that meets customer requirements.

Elecia and Christopher White have a great interview podcast with Grenning. Best joke: how many Scrum masters does it take to manage one developer? Also good Shakespeare and Bradbury quotes that are much ado about programming.

Saturday, July 21, 2018

Off-Target Testing And TDD For Embedded Systems

I've recently started reading things by James Grenning (Wingman-sw.com), one of the authors of the Agile Manifesto. My interest in his work relates to Test-Driven Development (TDD) for embedded systems.

A copy of his book Test Driven Development for Embedded C is currently winging its way to me. His site links to a webinar he gave last summer, Test-Driven Development for Embedded Software, that makes a great introduction to the topic.

I found one of his answers on Quora interesting. The question was: Can I perform a unit test when creating C firmware for ARM Cortex-M MCUs? The answer, of course, is yes. Specifically, testing can be done off-target (i.e. not on the target embedded system).

I wrote a long comment on the answer, and decided it might make an interesting blog post. So the remainder of this post reproduces it substantially as it appears there, with some cleanup. He very kindly asked if I would be interested in adding it to his Stories From The Field.

My Three Stories

I can offer three anecdotes that show why I give a big thumbs up to off-target testing. Off-target testing puts you on target!

The first case was back in 1995. I had recently transferred to the DEChub group at Digital Equipment Corporation to work on networking equipment.

They had a problem with their popular DECbridge 90 product, an office- or departmental-scale stackable Ethernet bridge running an in-house custom RTOS on Motorola 68K, all written in C. It would run for weeks or months at a customer site, then suddenly crash. That would throw the LAN into a tizzy as it went through a Spanning Tree Protocol (STP) reconfiguration event. Then the LAN would do it again once the bridge came back up and advertised its links.

So it could be very disruptive to the local network and everyone running on it, completely unpredictable. No one had been able to reproduce the problem in the lab.

I was tasked with finding and fixing it. This platform had very little in the way of crash dump and debug support, and software update was done by burning and replacing an EPROM. It did have an emulator pod, so that was how most debugging was done.

The problem here was the long run time between failures. That made trying to collect useful information from repeated test runs, either real or via emulator, impractical to the point of impossibility.

The one clue we knew from the crash log was that it was an OOM condition (Out Of Memory). The question was why. Other than supporting STP, which is a bit of complex behavior, a bridge is a pretty simple device, just L2 forwarding. Packet comes in, look it up in the bridge tables, forward it out the appropriate interfaces.

The key dynamic structure was the MAC address table. A bridge is a learning device in that it learns which MAC addresses are attached to which links. It builds up the table as it runs, learning the local network topology and participating in STP. So this table was certainly a prime suspect, but it had capacity for thousands of entries, yet it was crashing in LANs with only tens or hundreds of nodes.

The table used a B-tree implementation that was public-domain software from some university. We speculated that it was a memory leak in either the B-tree itself, or our interfacing to it.

So I pulled out the B-tree code and built a test program for it that would go through tens of thousands of adds and deletes in various simple patterns. This is similar to the type of test fixture that Brian Kernighan and Rob Pike later talked about in their book The Practice Of Programming.

I ran this off-target, on a VAX/VMS. VMS supported a simple Ctrl-T key in the terminal interface that would show the process memory consumption, similar to what the Linux ps command shows. The turnaround time on playing with this setup was minutes, build and run, with the full support of an OS to help me chase things down, including good old printf logging and customized dumping of data structures (VMS also had a debugger similar to gdb).

With this I could see that under some patterns, memory consumption was monotonically increasing. So yeah, a memory leak. Further exploration allowed me to home in on the right area of the code.

It was right there in the B-tree memory release code: it would free the main B-tree nodes (the large data elements being managed), but not the associated pointer nodes that it used for bookkeeping. So on every B-tree node release, it would leak 8 bytes of memory.

This was a case of a very slow memory leak, that only manifested with lots of table changes. In a customer environment, it could take a long time to chew through the memory. In the lab running on-target, it was even slower, since we didn't know what the cause was, so we didn't know how to trigger and reproduce it.

Off-target, it took less than an hour to find. Code change: add two lines to free the pointer nodes. This was after many man-weeks of effort for all the people involved in trying to reproduce and chase down the problem, plus all the aggravation caused at customer sites. Ten minutes to code, build, and verify the fix.

The second case was just recently. I implemented an FSM based directly on the book Models to Code: With No Mysterious Gaps, by Leon Starr, Andrew Mangogna, and Stephen J. Mellor (backed up by Executable UML: A Foundation for Model Driven Architecture, by Stephen J. Mellor and Marc J. Balcer, and Executable UML: How to Build Class Models by Leon Starr; I highly recommend the trio of books). Thank you, Leon, Andrew, Stephen, and Marc!

I was also reading Robert C. Martin's Clean Architecture: A Craftsman's Guide to Software Structure and Design and Clean Code: A Handbook of Agile Software Craftsmanship at the time, which heavily influenced the work (and finally motivated me to fully commit to TDD; Grenning contributed the chapter "Clean Embedded Architecture" to Clean Architecture). Mellor and Martin are both additional Agile Manifesto authors.

A product of all this reading, the FSM was a hand-built and hand-translated version of the MDA (Model-Driven Architecture) approach, in C on a PIC32 running a bare-metal superloop.

The FSM performed polymorphic control of cellular communication modules connected via a UART. The modules use the old Hayes modem "AT" command set to connect to the cell network and perform TCP/IP communications.

It was polymorphic because it had to support 4 different modules from 2 different vendors, each with their own variation of AT commands and patterns of asynchronous notifications (URCs, Unsolicited Result Codes).

If you think LANs and WANs are squirrelly, just wait till you try cellular networks. I could hardly get two test runs to repeat the same path through the FSM. Worse, there were corner cases that the network would only trigger occasionally.

It was horribly non-deterministic. How can I be sure I've built the right thing when I can't stimulate the system to produce the behavior I want to exercise?

The solution: build a nearly-full-coverage test suite to run off-target. I built a trivial simulator with fake system clock and UART ISR that I ran on an Ubuntu VM on my Mac. That gave me full support for logging and gdb.

This wasn't quite TDD, but it was one step away: it was Test-After Development, and instead of Google Test/Mock or some other framework, I built my own ad-hoc fakes and EXPECT handling.

With this I was able to create scenarios to test every path in the FSM, for all the module variants. Since I had control of the fake clock and ISR, I could drive all kinds of timing conditions and module responses. It did help that the superloop environment was pure RTC (Run To Completion, which coincidentally is required for Executable UML state machines), rather than preemptive multitasking/multithreading. But I could have faked that as well if necessary.
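The fake-clock technique at the heart of that simulator can be sketched in C++ (my actual code was C, and this FSM fragment and its 5-second timeout are invented for illustration). Because the test advances time explicitly, timeout paths that are non-deterministic against a real cellular network become exactly repeatable:

```cpp
#include <cassert>
#include <cstdint>

// A fake system clock: tests advance time explicitly instead of waiting.
class FakeClock {
public:
    std::uint32_t nowMs() const { return now_; }
    void advanceMs(std::uint32_t ms) { now_ += ms; }
private:
    std::uint32_t now_ = 0;
};

// Hypothetical fragment of a modem FSM: waits for an "OK" response
// byte from the fake UART, timing out after 5 seconds.
class AtCommandFsm {
public:
    explicit AtCommandFsm(FakeClock& clk)
        : clk_(clk), deadlineMs_(clk.nowMs() + 5000) {}
    void onUartByte(char c) { if (c == 'K') gotOk_ = true; }
    bool timedOut() const { return !gotOk_ && clk_.nowMs() >= deadlineMs_; }
    bool succeeded() const { return gotOk_; }
private:
    FakeClock& clk_;
    std::uint32_t deadlineMs_;
    bool gotOk_ = false;
};
```

Driving bytes in through a fake ISR and jumping the clock past the deadline lets a test hit the timeout corner case on every run, something the real network would only produce occasionally.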

I was able to fix several bugs that would have been hell to reproduce and debug on-target. In all cases, just as with the B-tree, the code changes were trivial. The time-consuming and hard part is always the debug phase to figure out what's going wrong and what needs to be changed. Doing the actual changes is usually simple.

That debug phase is where non-TDD approaches run into trouble, especially when they have to be done on-target. It can consume unbounded amounts of development time. The time required to do TDD is far shorter, and for a significant number of problems can either completely eliminate the debug phase, or narrowly direct it to the specific code path of a failing test.

The third case was this past week, when I did my first true TDD off-target thing for some embedded code. The platform is embedded Linux on ARM, so full OS, with cross-compiled C++.

I built the code in my Ubuntu VM and used Google Test/Mock, mocking out the file I/O (standard input stream and file output) and system clock. The code wasn’t particularly complex, but it did have a corner case dealing with a full buffer that represented the greatest bug risk.

I used very thin InputStreamInterface, OutputFileInterface, and ClockInterface classes as the OSAL (Operating System Abstraction Layer) to provide testability (thank you, Robert and James!).
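As a sketch of what "very thin" means (the method signatures here are assumptions, not my actual interfaces), the input-stream seam plus a scripted fake is enough to drive the full-buffer corner case deterministically:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Thin OSAL seam: production code reads through this interface, so a
// test can feed scripted input instead of real file descriptors.
struct InputStreamInterface {
    virtual ~InputStreamInterface() = default;
    virtual bool readByte(char* out) = 0;   // false on end of input
};

struct FakeInputStream : InputStreamInterface {
    explicit FakeInputStream(std::string data) : data_(std::move(data)) {}
    bool readByte(char* out) override {
        if (pos_ >= data_.size()) return false;
        *out = data_[pos_++];
        return true;
    }
private:
    std::string data_;
    std::size_t pos_ = 0;
};

// Code under test: fills a fixed buffer and reports the corner case
// where input exceeds capacity -- the greatest bug risk in my component.
std::size_t fillBuffer(InputStreamInterface& in, char* buf,
                       std::size_t cap, bool* overflowed) {
    std::size_t n = 0;
    char c;
    *overflowed = false;
    while (in.readByte(&c)) {
        if (n == cap) { *overflowed = true; break; }
        buf[n++] = c;
    }
    return n;
}
```

The same interface is implemented over the real Linux stream in production; the test never opens a file at all.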

It was gloriously wonderful and liberating to build it TDD style, red-green-refactor, and I knew I had all the paths in the code covered, including the unusual ones. That instills great confidence in what I did. No more worrying that I got everything right. I was able to demonstrate that I did.

Did it take a little extra time? Sure, mostly because I’m still on the learning curve getting into the TDD flow. But if I hadn’t used TDD and this code had produced failures, it would take me longer after the fact to chase down the bug. Plus I was able to avoid the impact on all the other people in the organization affected by the development turnaround cycle.

And just today, I added more behavior to that component using the TDD method. I was able to work fully confident that I wasn't breaking any of the stuff I had already done, and just as confident in the new code.

So I'm definitely a believer in off-target testing, and from now on I'll be doing it TDD.

Another benefit of this off-target TDD model? Working out of that Ubuntu VM on my Mac, I'm totally portable. I can work anywhere, at a coffee shop, on the train commuting, at the airport, on a plane, at home. I can be just as productive as if I had my full embedded development environment in front of me. Then once I'm back at my full environment, I have tested, running code ready to go.

For reference, these are the books that taught me TDD while in different jobs, both highly recommended:
  • Test Driven Development: By Example, by Kent Beck (yet another Agile Manifesto author). I was introduced to the book and TDD in general by new coworker Steve Vinoski in 2007, whose cred in my eyes went way up when I noticed his name in the acknowledgements of James O. Coplien's Advanced C++ Programming Styles and Idioms.
  • Working Effectively With Legacy Code, by Michael C. Feathers. Amazon tells me I bought this in 2013. At the time I used it to start adding unit test coverage to our codebase at work. What makes this book particularly useful is the fact that nearly all software development requires working with legacy code to some degree, even on brand new projects. It also helps you avoid creating a legacy of code that future developers will curse. 

Tuesday, April 17, 2018

More C+-

In The Case For C+-, I talked about writing quick tools in a simple C style, but taking advantage of the C++ standard library, primarily the dynamic data structures. It ends up being C++ without any (or just a few) user-defined classes, so is something of a lightweight object-oriented approach (yes, yes, I'm sure OO purists are barfing at the thought). The main benefit is fast coding.

There I showed as an example the msgresolve tool, which I used to resolve messages logged by an IoT device (the client) and its server. This is a lot of string processing and cross-indexing, with logs containing potentially thousands or tens of thousands of messages.

Shortly after I had completed msgresolve, I needed to have a tool to help me sift through large text files of server logs, logging the TCP connections made by clients and their subsequent activity. I was chasing down a problem where some of the connections were shutting down sooner than expected.

I wasn't sure what was causing the early shutdowns, and wasn't even sure initially which connections had experienced it, so I wanted to be able to gather all the lines for a given connection and list them out for tracing through, for each connection.

That would help me identify the ones that were live at the end of the log sample vs. the ones that had ended early. The log entries for hundreds of connections were all intermixed.

Armed with the methods I had used in msgresolve.cpp, conceptual design was easy. I wanted an ordered list of connections, and associated with each one, the sequential list of log entries associated with the connection.

There were also connections with some internal addresses I wanted to ignore. I could have done this filtering with grep, but it was easy enough to build the capability into the program so that it could stand alone. That also helped me explore some additional string processing functions.

Given that architecture, the data structure I needed was a std::map that mapped a string (the connection identification) to a std::list of strings (the log lines for the connection).
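As a minimal sketch of that structure (the names here are illustrative, not the ones used in the program below, and this uses operator[] insertion where the real program uses find/insert):

```cpp
#include <cassert>
#include <list>
#include <map>
#include <string>

// Miniature of the connection map: the key is the connection ID
// string, the value is the ordered list of log lines for it.
typedef std::map<std::string, std::list<std::string> > ConnMap;

// Append a line under its connection key. operator[] default-constructs
// an empty list the first time a key is seen, so new connections are
// added automatically.
void addLine(ConnMap& conns, const std::string& key,
             const std::string& line)
{
    conns[key].push_back(line);
}
```

Because std::map keeps its keys ordered, iterating the map later yields the connections in sorted order for free.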

I had the program working in less than an hour. Then I spent at least another hour screwing around with the timestamps in the log entries, figuring out how to process them and deciding what to do with them. Then a little more time on refactoring and cleanup.

Throughout, I used a sample log file that had entries for several connections, including addresses I wanted to skip. I used that as a simple unit test to exercise the code paths.

The resulting code provided the impetus for a simple generalized string processing module, which I'll cover in another post. But you can see some clear patterns emerging in this code.

Doing quick tools like this is fun and very satisfying. It makes your serotonin flow. You have a problem you need to deal with, so you sit down and spew a bunch of code in a short time, refine it, and use the results.

This is actually quite different from long-term product development. That kind of work has its intense coding phases, but once the initial version of the product is out, a lot of the work is much smaller surgical changes.

Even fitting a major new feature in often involves many small bits of code scattered throughout the larger code base, integrating the tendrils. Getting that to work has a different kind of satisfaction.

Design Choices

These tools also give you a chance to think about different approaches. You can balance the variables of memory consumption, CPU consumption, I/O consumption, time, and code complexity (that is, ease of writing and maintaining the code, and compiled code space consumption, not algorithmic complexity) for a given situation.

For instance, the log files I was dealing with had over a million lines of data, some 200MB worth covering hundreds of connections.

That meant I had several choices:
  1. I could load all the data into memory and then print it out in an orderly manner. This is a single-pass solution, that consumes large amounts of memory.
  2. I could scan the file once, identifying all the individual connections, then for each connection, scan the file from beginning to end to read and print its lines. This is a multi-pass solution that requires little memory but significant file I/O.
  3. I could scan the file once, and for each identified connection, track the file position of the first and last line, then for each connection, just scan that range of the file. This is a multi-pass solution that reduces the total file I/O for a negligible increase in memory.
  4. I could do the same thing, but instead of tracking just the first and last line file positions, build a list of the file position and length of each line, then on each pass, just skip directly to the locations of the lines. This is still multi-pass, but significantly reduces the total file I/O because it only visits each file position twice, requiring a bit more complexity and a bit more memory.
The decision on which choice to use is system-dependent. If memory is cheap and plentiful, and file I/O is relatively expensive, either in terms of time or charges to transfer data over a data link (maybe the data is remote, accessed over a cellular link), then the single-pass solution is best, choice #1.

On a small-memory system, a multi-pass solution is better, and you just have to live with the extra I/O. In that case, choice #4, which is the most complicated code, has the best compromise of low memory and low I/O consumption.

Although if you're really pressed for code space, the simplest multi-pass solution that iterates over the entire file for each connection is the better choice, #2.

Realistically though, you don't run tools such as this locally on small systems. Where possible, you offload the data to a larger system and run the tools offline.

In this case, I'm running on a Mac with 16GB of memory. Slurping up a 200MB text file and holding everything in memory is nothing. So the single-pass solution is the way to go.
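For illustration, choice #4 could be sketched along these lines. This is a hedged sketch, not code from the program below: the keyOf() helper is a hypothetical stand-in that takes the first whitespace-delimited token as the key, and the real tool would parse out the connection ID instead.

```cpp
#include <cctype>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Index of file positions: for each key, the start offset of every
// line belonging to it.
typedef std::map<std::string, std::vector<long> > PosIndex;

// Hypothetical key extractor: first whitespace-delimited token.
std::string keyOf(const char* line)
{
    const char* end = line;
    while (*end && !isspace((unsigned char)*end)) {
        ++end;
    }
    return std::string(line, end);
}

// Pass 1: record the file position of each line, grouped by key.
PosIndex indexFile(std::FILE* file)
{
    PosIndex index;
    char buffer[1000];
    long pos = std::ftell(file);
    while (std::fgets(buffer, sizeof(buffer), file) != NULL) {
        index[keyOf(buffer)].push_back(pos);
        pos = std::ftell(file);
    }
    return index;
}

// Later passes: seek directly to one key's lines instead of
// rescanning the whole file.
void printKey(std::FILE* file, const PosIndex& index,
              const std::string& key)
{
    PosIndex::const_iterator match = index.find(key);
    if (match == index.end()) {
        return;
    }
    char buffer[1000];
    for (size_t i = 0; i < match->second.size(); ++i) {
        std::fseek(file, match->second[i], SEEK_SET);
        if (std::fgets(buffer, sizeof(buffer), file) != NULL) {
            std::fputs(buffer, stdout);
        }
    }
}
```

Each line of the file is read exactly twice (once to index, once to print), at the cost of holding one long per line in memory.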

Loading The Data

The log file may just be a chronological sample of all the activity logged by the server, so some connections will already exist at the start of the log, and some will remain at the end. In between, connection starts and terminations will appear in the log.

The program parses the lines from the log to find connection lines (i.e. some activity for a given connection). It identifies them by looking for a connection ID, which consists of an IPv4 address/port pair (a remote socket ID), and may optionally include a hexadecimal client ID.

It uses just the socket ID as the connection ID, which is the key to the connection map. When it finds a new connection ID, it adds it to the map with an empty list. For each connection line, it finds the connection map entry and appends the line to the list of lines for that connection.

As it loads connection lines, it filters them against the set of IP addresses to skip (these skip addresses come from logging of other types of connections besides the client connections). That helps reduce the noise from a large log file.

Printing The Data

The program prints the connections by first iterating through the map and printing out a summary of each one: the connection ID, the number of lines, the duration of the connection data found in the log, and how its lifetime relates to the overall log. Since the map is an ordered structure, connections always print sorted by connection ID (string order, not numeric order).

Then it makes a second pass through the connection map, iterating through each line for each connection and printing it out, with separators between connections. It prints a header and summary line before and after the activity lines for each connection, showing where the connection starts and ends relative to the start and end of the log.

As a last quick change, I added a threshold time value to clearly identify connections that ended at least 60 seconds before the end of the log. This would be a good candidate for a command-line parameter to override the default threshold.

All the output lines have unique greppable features or keywords so you can use other tools for additional postprocessing or extraction. For instance, I could grep out the end summary line of each connection, and maybe the last couple of activity lines before it, to see how each connection ended up. I could use the "threshold exceeded" indication to identify the ones that had ended early.

Some Design Evolution

This program adds the isMember() function, which determines whether a string is a member of a set of strings. Since my usage here only dealt with a small set, and other functions had a similar iterative structure, in the heat of battle I quickly coded it as a linear search of a vector of strings.

That worked fine here, but as I pulled a bunch of this code out into a general string processing module, I realized that was a bad choice, because it's an O(N) search.

That became especially bad when I wanted an overload that took a vector of strings and determined if they were all members. That meant an O(M) repetition of O(N) searches: an O(M*N) or effectively O(N^2) algorithm.

That gets out of hand fast as M and N get larger. Meanwhile, std::unordered_set is perfect for this: an average O(1) lookup for single searches, and O(M) when repeated for M items.

I've left the original isMember() implementation here as an example of the evolution of a concept as you generalize it for other uses.

I also threw in a few overloads that I didn't end up using, but that set the stage for running with the concept in the string processing module. More discussion of that in the post containing the module.
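For comparison, a minimal sketch of what the std::unordered_set version might look like (assumed names, not the string processing module's actual interface):

```cpp
#include <string>
#include <unordered_set>
#include <vector>

typedef std::unordered_set<std::string> StringSet;

// Average O(1) membership test, replacing the linear vector scan.
bool isMember(const std::string& str, const StringSet& set)
{
    return set.find(str) != set.end();
}

// Overload: true only if every string is a member. Average O(M) for
// M strings, instead of O(M*N) with the vector version.
bool isMember(const std::vector<std::string>& strs, const StringSet& set)
{
    for (size_t i = 0; i < strs.size(); ++i) {
        if (!isMember(strs[i], set)) {
            return false;
        }
    }
    return true;
}
```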

The Result

This program turned out to be another fun exercise in string processing once I had built the basic support functions and could see the problem in those terms. It felt more like working in Python, and in fact just as the dict structure from msgresolve.cpp was inspired by Python, so are the split() and join() functions here.

The funny thing is that it took doing stuff in Python to make me see this approach. That points out one of the advantages of working in multiple different languages: you start seeing opportunities to apply some of the common idioms from one language in another language.

Here's logsplit.cpp:

// Usage: logsplit <serverLog>
//
// Splits a server log file by IPv4 connection. Prints a
// summary list of the connections, then the log lines for
// each separate connection.
//
// This is an example of a C++ program that is written mostly
// in plain C style, but that makes use of the container and
// composition classes in the C++ standard library. It is a
// lightweight use of C++ with no user-defined classes.
//
// 2018 Steve Branam <sdbranam@gmail.com> learntocode

#include <cctype>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <ctime>
#include <iostream>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>
#include <list>
#include <map>

enum ARGS
{
    ARGS_PROGNAME,
    ARGS_SERVER_LOG,
    ARGS_REQUIRED,
    ARGS_SKIP = ARGS_REQUIRED
};

enum SERVER
{
    SERVER_DATE,
    SERVER_TIME,
    SERVER_THREAD,
    SERVER_SEVERITY,
    SERVER_FUNC,
    SERVER_CONN,
    SERVER_TIME_LEN = 16,
    SERVER_TIMESTAMP_LEN = 28
};

enum CONN
{
    CONN_IP,
    CONN_PORT,
    CONN_CLIENT_ID
};

enum
{
    END_TIME_THRESHOLD = 60
};

typedef std::string String;
typedef std::vector<String> StringVec;
typedef std::list<String> StringList;
typedef std::map<String, StringList> ConnMap;
typedef std::pair<String, StringList> ConnMapEntry;

const char *timeFormat = "%Y-%m-%d %H:%M:%S";
StringVec skipIps;
size_t lines = 0;
size_t skipped = 0;
String firstTimestamp;
String lastTimestamp;
ConnMap connections;

StringVec split(const String& str, const char* delim)
{
    char buffer[str.size() + 1];
    StringVec strings;

    strcpy(buffer, str.c_str());

    char *token = std::strtok(buffer, delim);
    while (token != NULL) {
        strings.push_back(token);
        token = std::strtok(NULL, delim);
    }
    
    return strings;
}

String join(const StringVec& strings, const String& sep,
            size_t start = 0, size_t end = 0)
{
    String str;

    if (!end) {
        end = strings.size();
    }
    for (size_t i = start; i < end; ++i) {
        str.append(strings[i]);
        if (i + 1 < end) {
            str.append(sep);
        }
    }
    return str;
}

bool isMember(const String& str, const StringVec& set)
{
    for (size_t i = 0; i < set.size(); ++i) {
        if (str == set[i]) {
            return true;
        }
    }

    return false;
}

typedef int (*CharMatch)(int c);

bool isToken(const String& token, CharMatch isMatch)
{
    if (token.empty()) {
        return false;
    }
    else {
        for (size_t i = 0; i < token.size(); ++i)
        {
            if (!isMatch(token[i])) {
                return false;
            }
        }
    }
    return true;
}

bool isToken(const StringVec& tokens, CharMatch isMatch,
             size_t start = 0, size_t end = 0)
{
    if (!end) {
        end = tokens.size();
    }
    for (size_t i = start; i < end; ++i) {
        if (!isToken(tokens[i], isMatch)) {
            return false;
        }
    }
    return true;
}

bool isNumeric(const String& token)
{
    return isToken(token, isdigit);
}

bool isHex(const String& token)
{
    return isToken(token, isxdigit);
}

bool isNumeric(const StringVec& tokens,
               size_t start = 0, size_t end = 0)
{
    return isToken(tokens, isdigit, start, end);
}

bool isIpv4Address(const String& str)
{
    StringVec tokens(split(str, "."));

    return ((tokens.size() == 4) &&
            isNumeric(tokens));
}

bool isIpv4Port(const String& str)
{
    return ((str.size() <= 5) &&
            isNumeric(str));
}

bool isIpv4Socket(const StringVec& strings)
{
    return ((strings.size() >= 2) &&
            isIpv4Address(strings[0]) &&
            isIpv4Port(strings[1]));
}

time_t getTime(const String& strTime, const char* format)
{
    std::tm t = {};
    std::istringstream ss(strTime);
    ss >> std::get_time(&t, format);
    return mktime(&t);
}

time_t getTime(const String& field)
{
    // Skip opening and closing brackets.
    return getTime(field.substr(1, SERVER_TIMESTAMP_LEN - 2),
                   timeFormat);
}

size_t getDuration(const time_t& start, const time_t& stop)
{
    size_t seconds(difftime(stop, start));
    return seconds;
}

size_t getDuration(const String& start, const String& stop)
{
    return getDuration(getTime(start), getTime(stop));
}

size_t getDuration(const time_t& start, const String& stop)
{
    return getDuration(start, getTime(stop));
}

size_t getDuration(const String& start, const time_t& stop)
{
    return getDuration(getTime(start), stop);
}

bool isServerTime(const String& str)
{
    if (str.size() == SERVER_TIME_LEN) {
        for (size_t i = 0; i < str.size(); ++i)
        {
            if (!isdigit(str[i]) &&
                (str[i] != ':') &&
                (str[i] != '.') &&
                (str[i] != ']')) {
                return false;
            }
        }
        return true;
    }
    return false;
}

bool isConnId(const String& str)
{
    StringVec fields(split(str, ":"));

    return (isIpv4Socket(fields) &&
            (fields.size() < CONN_CLIENT_ID + 1 ||
             isHex(fields[CONN_CLIENT_ID])));
}

bool isServerConn(const StringVec& fields)
{
    return ((fields.size() > SERVER_CONN) &&
            isServerTime(fields[SERVER_TIME]) &&
            isConnId(fields[SERVER_CONN]));
}

bool loadServer(const char* fileName)
{
    FILE* file = std::fopen(fileName, "r");
    
    if (file) {
        char buffer[1000];
        while (std::fgets(buffer, sizeof(buffer), file) != NULL) {
            String line(buffer);
            StringVec fields = split(buffer, " \t");

            if (isServerConn(fields)) {
                ++lines;
                lastTimestamp = line.substr(0, SERVER_TIMESTAMP_LEN);
                if (firstTimestamp.empty()) {
                    firstTimestamp = lastTimestamp;
                }
                
                strncpy(buffer, fields[SERVER_CONN].c_str(),
                        sizeof(buffer));
                StringVec conn = split(buffer, ":");

                if (isMember(conn[CONN_IP], skipIps)) {
                    ++skipped;
                }
                else {
                    String key(conn[CONN_IP]);
                    key.append(":");
                    key.append(conn[CONN_PORT]);

                    ConnMap::iterator match;
                    match = connections.find(key);
                    if (match == connections.end()) {
                        connections.insert(ConnMapEntry(key,
                                           StringList()));
                        match = connections.find(key);
                    }
                    match->second.push_back(line);
                }
            }
        }
        std::fclose(file);
        if (connections.empty()) {
            std::cout << "No connections found" << std::endl;
            return false;
        }
        return true;
    }
    std::cout << "Failed to open server file "
              << fileName << std::endl;
    return false;
}

void printSeparator()
{
    std::cout << std::endl
              << "=-=-=-=-" << std::endl
              << std::endl;
}

void listConnections()
{
    std::cout << connections.size() << " connections "
              << firstTimestamp << "-" << lastTimestamp << " "
              << lines << " lines, "
              << getDuration(firstTimestamp, lastTimestamp) << " sec:"
              << std::endl;
    if (skipIps.size()) {
        std::cout << "(skipped " << skipped << " connections with "
                  << join(skipIps, ", ") << ")" << std::endl;
    }
    std::cout << std::endl;
    for (ConnMap::iterator curConn = connections.begin();
         curConn != connections.end();
         curConn++) {        
        String conn(curConn->first);
        StringList connLogs(curConn->second);
        std::cout << conn << "\t"
                  << connLogs.front().substr(0, SERVER_TIMESTAMP_LEN)
                  << "-" << connLogs.back().substr(0, SERVER_TIMESTAMP_LEN)
                  << " " << connLogs.size() << " lines, "
                  << getDuration(connLogs.front(), connLogs.back())
                  << " sec" << std::endl;
    }
    printSeparator();
}

void logConnections()
{
    time_t timeFirst = getTime(firstTimestamp);
    time_t timeLast = getTime(lastTimestamp);
    
    for (ConnMap::iterator curConn = connections.begin();
         curConn != connections.end();
         curConn++) {        
        String conn(curConn->first);
        StringList connLogs(curConn->second);
        size_t duration(getDuration(connLogs.front(), connLogs.back()));
        
        std::cout << "Connection " << conn
                  << " " << connLogs.front().substr(0, SERVER_TIMESTAMP_LEN)
                  << "-" << connLogs.back().substr(0, SERVER_TIMESTAMP_LEN)
                  << " " << connLogs.size() << " lines, "
                  << duration << " sec:" << std::endl << std::endl;

        size_t seconds = getDuration(timeFirst, connLogs.front());
        std::cout << firstTimestamp << " Starts " << seconds
                  << " sec after start of log." << std::endl;

        for (StringList::iterator curLog = connLogs.begin();
             curLog != connLogs.end();
             curLog++) {        
            std::cout << *curLog;
        }

        seconds = getDuration(connLogs.back(), timeLast);
        std::cout << lastTimestamp
                  << " " << connLogs.size() << " lines, "
                  << duration << " sec. Ends "
                  << seconds << " sec before end of log.";
        if (seconds > END_TIME_THRESHOLD) {
            std::cout << " Exceeds threshold.";
        }
        std::cout << std::endl;
        printSeparator();
    }
}

int main(int argc, char* argv[])
{
    if (argc < ARGS_REQUIRED ||
        String(argv[1]) == "-h") {
        std::cout << "Usage: " << argv[ARGS_PROGNAME]
                  << " <serverLog> [<skipIps>]" << std::endl;
        std::cout << "Where <skipIps> is comma-separated list "
                  << "of IP addresses to skip." << std::endl;
        return EXIT_FAILURE;
    }
    else {
        if (argc > ARGS_REQUIRED) {
            skipIps = split(argv[ARGS_SKIP], ",");
        }
        
        if (loadServer(argv[ARGS_SERVER_LOG])) {
            listConnections();
            logConnections();
        } else {
            return EXIT_FAILURE;
        }
    }
    return EXIT_SUCCESS;
}

The sample log file, conns.log:

[2018-04-03 13:16:29.469659] [0x00007fb3ff129700] [debug]   start() 10.1.180.206:30450 TCP socket receive_buffer_size=117708
[2018-04-03 13:16:29.469678] [0x00007fb3ff129700] [debug]   start() 10.1.180.206:30450 TCP socket send_buffer_size=43520
[2018-04-03 13:16:29.867381] [0x00007fb3ff129700] [debug]   set_idle_send_timeout() 10.1.180.206:30450 set idle send timeout to 60 seconds
[2018-04-03 13:16:29.867394] [0x00007fb3ff129700] [debug]   set_idle_receive_timeout() 10.1.180.206:30450 set idle receive timeout to 120 seconds
[2018-04-03 13:16:29.867450] [0x00007fb3ff92a700] [info]    handle_connected() 10.1.180.206:30450 remote connected [2423/8951]
[2018-04-03 13:16:29.959877] [0x00007fb3ff129700] [debug]   install_connection() 10.1.180.206:30450:00003bd4 linkType 0
[2018-04-03 13:16:29.966599] [0x00007fb3ff129700] [debug]   receive() 10.1.180.206:30450:00003bd4 RX sum 0x4a2137a6, 231 bytes
[2018-04-03 13:16:29.966935] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0xa11f878a, 35 bytes, msg type 3
[2018-04-03 13:17:29.967117] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0x1c35386e, 29 bytes, msg type 1
[2018-04-03 13:17:29.967228] [0x00007fb3ff129700] [debug]   handle_idle_send_timeout() 10.1.180.206:30450:00003bd4 59 seconds
[2018-04-03 13:17:30.086722] [0x00007fb3ff129700] [debug]   receive() 10.1.180.206:30450:00003bd4 RX sum 0xf6e3f3bf, 29 bytes
[2018-04-03 13:17:30.086813] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0xfb208d9f, 29 bytes, msg type 17
[2018-04-03 13:17:40.086722] [0x00007fb3ff129700] [debug]   receive() 10.2.80.206:3050:00000bd4 RX sum 0xf6e3f3bf, 29 bytes
[2018-04-03 13:17:40.086813] [0x00007fb3ff129700] [debug]   send() 10.2.80.206:3050:00000bd4 TX sum 0xfb208d9f, 29 bytes, msg type 17
[2018-04-03 13:17:30.139377] [0x00007fb3ff129700] [debug]   receive() 10.1.180.206:30450:00003bd4 RX sum 0xa0f8a8e1, 78 bytes
[2018-04-03 13:18:29.867494] [0x00007fb3ff129700] [debug]   handle_idle_receive_timeout() 127.0.0.1:32450:00003bd4 60 seconds
[2018-04-03 13:18:29.967315] [0x00007fb3ff129700] [debug]   handle_idle_send_timeout() 10.1.180.206:30450:00003bd4 0 seconds
[2018-04-03 13:18:30.086988] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0x8fcf53f6, 29 bytes, msg type 1
[2018-04-03 13:18:30.087101] [0x00007fb3ff129700] [debug]   handle_idle_send_timeout() 10.1.180.206:30450:00003bd4 59 seconds
[2018-04-03 13:18:30.197029] [0x00007fb3ff129700] [debug]   receive() 10.1.180.206:30450:00003bd4 RX sum 0x515f1d4e, 29 bytes
[2018-04-03 13:18:30.197120] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0x6827d190, 29 bytes, msg type 17
[2018-04-03 13:18:30.249027] [0x00007fb3ff129700] [debug]   receive() 10.1.180.206:30450:00003bd4 RX sum 0xda66c5c7, 78 bytes
[2018-04-03 13:19:30.087189] [0x00007fb3ff129700] [debug]   handle_idle_send_timeout() 10.1.180.206:30450:00003bd4 0 seconds
[2018-04-03 13:19:30.139486] [0x00007fb3ff129700] [debug]   handle_idle_receive_timeout() 10.1.180.206:30450:00003bd4 60 seconds
[2018-04-03 13:19:30.197322] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0x812afb16, 29 bytes, msg type 1

Sample run:

$ g++ logsplit.cpp -o logsplit
$ ./logsplit conns.log 127.0.0.1,3.3.3.3
2 connections [2018-04-03 13:16:29.469659]-[2018-04-03 13:19:30.197322] 25 lines, 181 sec:
(skipped 1 connections with 127.0.0.1, 3.3.3.3)

10.1.180.206:30450 [2018-04-03 13:16:29.469659]-[2018-04-03 13:19:30.197322] 22 lines, 181 sec
10.2.80.206:3050 [2018-04-03 13:17:40.086722]-[2018-04-03 13:17:40.086813] 2 lines, 0 sec

=-=-=-=-

Connection 10.1.180.206:30450 [2018-04-03 13:16:29.469659]-[2018-04-03 13:19:30.197322] 22 lines, 181 sec:

[2018-04-03 13:16:29.469659] Starts 0 sec after start of log.
[2018-04-03 13:16:29.469659] [0x00007fb3ff129700] [debug]   start() 10.1.180.206:30450 TCP socket receive_buffer_size=117708
[2018-04-03 13:16:29.469678] [0x00007fb3ff129700] [debug]   start() 10.1.180.206:30450 TCP socket send_buffer_size=43520
[2018-04-03 13:16:29.867381] [0x00007fb3ff129700] [debug]   set_idle_send_timeout() 10.1.180.206:30450 set idle send timeout to 60 seconds
[2018-04-03 13:16:29.867394] [0x00007fb3ff129700] [debug]   set_idle_receive_timeout() 10.1.180.206:30450 set idle receive timeout to 120 seconds
[2018-04-03 13:16:29.867450] [0x00007fb3ff92a700] [info]    handle_connected() 10.1.180.206:30450 remote connected [2423/8951]
[2018-04-03 13:16:29.959877] [0x00007fb3ff129700] [debug]   install_connection() 10.1.180.206:30450:00003bd4 linkType 0
[2018-04-03 13:16:29.966599] [0x00007fb3ff129700] [debug]   receive() 10.1.180.206:30450:00003bd4 RX sum 0x4a2137a6, 231 bytes
[2018-04-03 13:16:29.966935] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0xa11f878a, 35 bytes, msg type 3
[2018-04-03 13:17:29.967117] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0x1c35386e, 29 bytes, msg type 1
[2018-04-03 13:17:29.967228] [0x00007fb3ff129700] [debug]   handle_idle_send_timeout() 10.1.180.206:30450:00003bd4 59 seconds
[2018-04-03 13:17:30.086722] [0x00007fb3ff129700] [debug]   receive() 10.1.180.206:30450:00003bd4 RX sum 0xf6e3f3bf, 29 bytes
[2018-04-03 13:17:30.086813] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0xfb208d9f, 29 bytes, msg type 17
[2018-04-03 13:17:30.139377] [0x00007fb3ff129700] [debug]   receive() 10.1.180.206:30450:00003bd4 RX sum 0xa0f8a8e1, 78 bytes
[2018-04-03 13:18:29.967315] [0x00007fb3ff129700] [debug]   handle_idle_send_timeout() 10.1.180.206:30450:00003bd4 0 seconds
[2018-04-03 13:18:30.086988] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0x8fcf53f6, 29 bytes, msg type 1
[2018-04-03 13:18:30.087101] [0x00007fb3ff129700] [debug]   handle_idle_send_timeout() 10.1.180.206:30450:00003bd4 59 seconds
[2018-04-03 13:18:30.197029] [0x00007fb3ff129700] [debug]   receive() 10.1.180.206:30450:00003bd4 RX sum 0x515f1d4e, 29 bytes
[2018-04-03 13:18:30.197120] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0x6827d190, 29 bytes, msg type 17
[2018-04-03 13:18:30.249027] [0x00007fb3ff129700] [debug]   receive() 10.1.180.206:30450:00003bd4 RX sum 0xda66c5c7, 78 bytes
[2018-04-03 13:19:30.087189] [0x00007fb3ff129700] [debug]   handle_idle_send_timeout() 10.1.180.206:30450:00003bd4 0 seconds
[2018-04-03 13:19:30.139486] [0x00007fb3ff129700] [debug]   handle_idle_receive_timeout() 10.1.180.206:30450:00003bd4 60 seconds
[2018-04-03 13:19:30.197322] [0x00007fb3ff129700] [debug]   send() 10.1.180.206:30450:00003bd4 TX sum 0x812afb16, 29 bytes, msg type 1
[2018-04-03 13:19:30.197322] 22 lines, 181 sec. Ends 0 sec before end of log.

=-=-=-=-

Connection 10.2.80.206:3050 [2018-04-03 13:17:40.086722]-[2018-04-03 13:17:40.086813] 2 lines, 0 sec:

[2018-04-03 13:16:29.469659] Starts 71 sec after start of log.
[2018-04-03 13:17:40.086722] [0x00007fb3ff129700] [debug]   receive() 10.2.80.206:3050:00000bd4 RX sum 0xf6e3f3bf, 29 bytes
[2018-04-03 13:17:40.086813] [0x00007fb3ff129700] [debug]   send() 10.2.80.206:3050:00000bd4 TX sum 0xfb208d9f, 29 bytes, msg type 17
[2018-04-03 13:19:30.197322] 2 lines, 0 sec. Ends 110 sec before end of log. Exceeds threshold.

=-=-=-=-