First Things First

When one decides to start a blog, it stands to reason that there has to be a first post. Naturally, then, the topic of the first post will be the first topic on the blog. It seems like it should be chosen with care, as there can be only one first topic (never mind that there can, by definition, be only one second topic, one third topic…). Where better to start than where it started for me, at my first full-time, professional job?

I chose to accept this job, among other reasons, because the product excited me. The purpose, as it was explained to me in my interview, was to speed up field asset inspection, make it safer for the inspectors, and automate the very manual production of paper forms. In an age when smartphones had only been around for a couple of years, and the prices of the phones and data plans had not yet fallen enough to make them ubiquitous, this technology was a marvel. It had a sensor suite that allowed the user to capture geotagged media from a standoff position: if you couldn’t get close enough to inspect something, you’d record the data from a distance and still have centimeter-level accuracy as to where on the Earth the subject of the photo or video was.

It was a portable computer, running Windows XP, that you literally wore on your person, attached to a harness in such a way that you could manipulate it with a stylus without holding the machine in your hands. The other half of the design was a device that looked a bit like how I imagined a policeman’s radar gun, and it connected to the computer via a very rugged cable. The “radar gun” served as a relatively simple sensor bed that fused the readings from the various sensors, and the PC ran custom applications to put this geotagged media into whatever reporting format the user wanted.

When I started, this is what the product looked like:

[Image: I’ve linked this from the National Park Service. They probably own the image.]

Over the course of my tenure at this company, this is what the product came to look like:

[Image: this is linked from Bally Systems. This is their design.]

I can’t commend my old boss enough for his vision and execution in taking the early hardware, through often-limited funding, to the design in this image. There were a couple of iterations in between, but the last design was quite amazing: it balanced its weight well, the screen was very visible, and the whole thing was genuinely ergonomic. I remember having an immense sense of pride when the prototype housing and electronic components were assembled for the first of many times. This thing had a lot of potential!

But we never really got much past there. The device had a bit of an identity crisis: it was simultaneously an instrument for collecting data and for reporting that data. This was revealed most clearly in its naming evolution. When I arrived, it was called the HAMMER, or “Hand-held Apparatus for Mobile Mapping and Expedited Reporting.” It was assumed that the device’s reporting capabilities were central to its justification as a product.

We would write software on this device to input data from the user and sensors, and then aggregate it in specific ways to automate the creation of paper reports. It didn’t start that way; initially (before I joined), the team had used the hardware platform to create a solution to automate the creation and population of ArcGIS projects.

The difficulty with this solution was the limitation of the ArcGIS data format of the time, .shp files. The expectation was that, as an end user, you’d be able to load these files into various GIS software and do the normal activities one does with a GIS program. It worked well to that end, but a shapefile’s attribute table can hold a reference to an image, not the image itself, so there was no way to distribute and view the media in association with the point data. This was simply a limitation of the third-party technologies at the time.
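
To illustrate the constraint, here is a minimal sketch using the modern pyshp library as a stand-in for the ArcGIS-era tooling (the asset IDs and filenames are invented for the example):

```python
# A minimal sketch of the era's constraint, using pyshp (pip install pyshp)
# as a stand-in; asset IDs and filenames are invented.
import shapefile

w = shapefile.Writer("inspection_points", shapeType=shapefile.POINT)
w.field("ASSET_ID", "C", size=16)
w.field("MEDIA_REF", "C", size=128)   # only a path or URL string fits here

w.point(-77.0365, 38.8977)                        # geotagged inspection point
w.record("BRIDGE-042", "photos/bridge_042.jpg")   # the photo itself stays behind
w.close()

# Whoever receives inspection_points.{shp,shx,dbf} gets the point and a path
# string, but not the photo, so the association breaks as soon as it moves.
```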

The leaders in the company realized that this limitation was killing their product’s potential. They worked with the customer to define a solution. It was understood that to make the product useful to the end users, they would need to ask these users to change their workflow. In hindsight, there were a few ways this could have been done:

  • Ask the users to collect their data using the device, with a custom application running on the device. The custom application would automate the production of the single-page reports plus substantiation images that they would typically create by hand. These generated reports would then be sent by the users as normal.
    • This had the advantage that only the data collection phase was different from their prior workflow.
    • It had the disadvantage that the data could not really be edited. Could you imagine writing a report for your superior, knowing you only get one chance to make it correct?
  • Ask the users to collect their data using the device in a generic fashion, with minimal training to keep the data from becoming a jumbled mess. Later, after offloading the device’s data to a desktop computer, the user would manually associate the data with the appropriate reports, which were then created by hand.
    • This had the advantage that data could be generically collected and used in any future context. Additionally, the collection process was highly adaptable to new uses.
    • It had the disadvantage that this didn’t save the end users any work, so the advantages of the product as a whole were limited to the device’s physical characteristics. This did not work in our favor, considering that the computer had to be strapped to the user and they had to carry the “radar gun” around. We would be making data collection physically more difficult while also making the reporting workflow more difficult.
  • A third option combined both approaches: users could collect data in a generic fashion, and custom off-device applications could be written to structure the data and produce the reports.
    • This had the advantage that the data could be edited after collection, and that the data collection process remained generic, capable of being used for any customer and context.
    • It had the disadvantage that a distinct set of tools would need to be written and deployed. This would have been exceptionally difficult with this customer, where there were tight restrictions on approved software. It would take months or longer to get approval, and each update would require an approval. To my knowledge, nobody thought to simply sell the customer an additional laptop to run these applications until the situation could be resolved.

In addition to this, the data collection could only happen seasonally. There were prime months for collection, and outside of that window, collection was essentially impossible. I think in an effort to make this window, the first approach was chosen: the data would be collected through a highly custom process, and the device itself would generate the report. In this way, we would only need to write one application, not several, and users would not need extensive training or workflow modifications. The kicker, though, was that the report generation was inextricably coupled to the applications that collected the data. There was no way to export the data; the PDF reports were the export!

And so, in this way of balancing all of these complex constraints, we sealed our fate. This method of reporting is what we demonstrated to potential customers, and this is what we were associated with.

Eventually, when we had to work with our then-customer to produce an export format to get data into their databases, neither we nor the customer grasped the obvious: we were not working in the interests of either group. The customer was buying a mobile automated reporting tool, not a data collection platform, because that is what we were selling them: specifically, a long effort to automate the production of one type of report. This had so permeated everyone’s thinking, especially the customer’s, that by the time we were exporting our data into their database, it was structured to mirror how the inspectors would record it on the form, and there was no other way it could be reported without destroying all of the associations that gave the data meaning.

In short, producing these forms was all this data could ever be used for without manual, painstaking readjustments. At the scale this was intended to be used, that just wasn’t feasible. Moreover, our point of contact in this organization had expended a lot of her energy selling this product to her colleagues for this specific purpose, and in the end, the pendulum had swung a bit too far in that direction. It didn’t help that we spent a considerable amount of time and funding trying to automate the data collection and reporting for just one form, and there was no way to adapt the user-facing software we had written to do more than this one thing without rewriting the application, the on-device data storage, and the data-export code.
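
For illustration, here is a hypothetical sketch (every field name is invented) of the difference between the form-shaped data we ended up with and a generic, association-preserving shape that would have kept the data reusable:

```python
# Hypothetical illustration of the trap described above; all names invented.

# What we effectively stored: records shaped like the one paper form, with
# the form's layout baked into the schema.
form_shaped_record = {
    "section3_line2_condition": "cracked",
    "section3_line2_photo": "img_017.jpg",
    "section4_remarks": "monitor next season",
}

# What would have stayed reusable: generic observations plus explicit
# associations, with any particular form layout applied only at report time.
observations = [
    {"id": 17, "lat": 38.8977, "lon": -77.0365,
     "attribute": "condition", "value": "cracked", "media": ["img_017.jpg"]},
]
report_bindings = [
    {"form": "inspection_v1", "slot": "section3_line2", "observation_id": 17},
]
```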

Getting to the heart of what customers really want can be difficult, because they usually don’t know themselves. And because we were so heavily focused on the reporting capabilities of the device, we became bogged down in the UX and workflow for automating the production of, again, a single one-page form. This took so long that we burnt through all of the funding for the project, and we couldn’t shake our image within the customer’s organization as a device that wasn’t solving the problem it claimed to solve. We missed the data collection window. By the time funding came around again, we were in the same position, and we missed the window for a second year for the same reasons.

Amusingly, in hindsight, it should have been a hint to us that we had taken what inspectors were writing on a single piece of paper and turned it into an application requiring five or more screens, each with several tabs. We never saw funding from this customer again in the time I was there.

Later, the leaders of the company had a bit of an epiphany, around the same time that mobile Internet connectivity became common. The earlier customers were not panning out, largely (but not entirely) because of the software limitations I described above. However, with the advent of affordable cell and satellite modems, it became possible to offload data as soon as it was collected, and a new customer was interested. This wasn’t just a device for automating the production of paper reports; it was for real-time communication! It was renamed “OMNI,” for “Operational Mapping and Networked Intelligence.” I suspect, after looking at the older picture above, that the company preferred the name OMNI to HAMMER, but they wanted to emphasize the reporting aspects of the product, so they changed the OMNI name’s meaning to “Observational Mapping and Noting Instrument” some time after I left. Note that this image is of the older hardware, not the newer. I can only suspect that this is because the newer, ergonomic hardware never made it out of the prototype, straw-man stage (and that’s a story for another day).

I have to admit, it was pretty neat sitting in the customer’s lab, watching data show up in real time on a monitor the size of my compact sedan at the time, as our salesman walked around outside, about a quarter mile away, collecting it. But the issue with this demo was that we still hadn’t shaken the mindset that this device was for reporting as much as for collecting data. The export format we supported was KML, which is the computational geography world’s equivalent of HTML, that is to say, not something one should use to store or transfer rich, highly associative data. Our lead UI programmer did some really neat work with this, getting images and video to play inside of Google Earth, but in the end, the media rested on one server and the data rested somewhere else. And of course, it was the device that was generating the KML, rather than the more intelligent pattern of an ingest application running on an offload server converting the data to KML. In a very practical sense, the data was ephemeral.
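
To make that distinction concrete, here is a minimal, hypothetical sketch of the ingest-server pattern (the record fields and values are invented): the device uploads plain structured records, and the server renders KML on demand as just one disposable view of data that persists elsewhere:

```python
# Hypothetical sketch of the ingest-server pattern: the device posts plain
# records, and KML is generated server-side as one view of durable data.
import json
from xml.sax.saxutils import escape

STORE = []  # stand-in for a real database

def ingest(raw):
    """The device uploads a structured record; the server keeps it as-is."""
    STORE.append(json.loads(raw))

def to_kml():
    """Render the stored records as KML on demand; the KML is disposable."""
    placemarks = "".join(
        f"<Placemark><name>{escape(rec['name'])}</name>"
        f"<Point><coordinates>{rec['lon']},{rec['lat']}</coordinates></Point>"
        f"</Placemark>"
        for rec in STORE)
    return ("<?xml version='1.0'?>"
            "<kml xmlns='http://www.opengis.net/kml/2.2'><Document>"
            + placemarks + "</Document></kml>")

ingest('{"name": "POLE-7", "lat": 38.8977, "lon": -77.0365}')  # invented data
print(to_kml())  # the export can be regenerated at any time; nothing is lost
```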

This again brought us to the point where the hardware was the selling point of the product, not improved customer workflows. And again, we lost on that front. The prototype hardware was not ready for the stress of a pilot project, and none happened.

The device’s true identity was never fully resolved in the years I was with the company, though the Director of Software Engineering did steer it much closer to the right direction, a fact I did not fully appreciate at the time. He drastically improved the UX by borrowing design ideas from the smartphones that had become more affordable by then, and we started to form application-specific structures around the collected data, rather than our prior mindset of using the application itself as the structure for the data.

But it was too late. For many reasons, we didn’t find a buyer for this product in time for it to live. Within a few months of the official end of the project, the company started laying off developers. I left before I knew whether I’d eventually be cut loose, and within a couple of months of that, the entire software department, except for one person, had either quit or been terminated.

And so, looking back, it fascinates me to reflect on the complex reasoning that went into these decisions, trying to strike a perfect balance among all of the constraints. As is often the case, though, optimizing for one constraint violated another. Stated differently, mitigating one risk magnified another, and we ended up running square into the exact situation we were trying to avoid.

By optimizing to keep the end users’ workflow almost identical to what they already had, we entered an expensive iterative design process with the customer from which we never emerged, because we missed the data collection window and came away with nothing to give them.

Strong plans can never assume one path from beginning to end. The plan needs to be granular enough that there are several points for reflecting on how the effort is progressing: does it seem as though we will be able to meet our goals? If not, do we need different goals? Moreover, each stage of the project needs to be constantly validated against the latest knowledge. This will often mean planning to take the most flexible option at each decision point while concurrently running small experiments to refine those choices into more concrete and focused ones. The feedback loops cannot be long: the longer we went before showing the customer our work, the more we had to go back and change.

And perhaps most importantly, any decision that can produce an outcome that cannot be altered is one that is operating on too large a domain, and it is increasing risk. This goes hand in glove with the previous paragraph: get feedback sooner. If you do not have the time or resources for this feedback, you almost certainly do not have the capacity to complete the project and meet your goals.
