Where I currently work, many process improvement projects end up requiring some sort of software deliverable that is created and left behind to aid in sustaining the process improvement work. Originally these were prototypes developed for a single site, but over time the need has arisen to make some of the tools more enterprise-ready so they can be deployed at multiple customer sites.

To this end, we have tried to build out a standard product template and approach. I will post on some of the patterns and frameworks we have decided upon in future posts. This post primarily outlines some things I learned from the most recent product I worked on.

First, the good

  1. The testing framework we created and have used in the last two products is a huge help for both manual and automated testing. I will post on this process in the future.
  2. The data caching framework worked incredibly well at reducing database noise, as we discovered when a dashboard was initially set to a 1-second refresh (a rough sketch of the general idea follows this list).
  3. Many of our products report very similar items, so the base table structure can be reused across many products and ultimately carried into a data warehouse that a product suite can sit on top of.
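
To make the caching point a little more concrete, here is a minimal sketch of one way a dashboard cache can shield the base tables. This is not our framework verbatim; every object name and the 30-second staleness window below are made up for illustration.

```sql
-- Hypothetical cache table; names and the 30-second window are illustrative only.
CREATE TABLE dbo.DashboardCache
(
    MetricName  NVARCHAR(100)  NOT NULL,
    MetricValue DECIMAL(18, 2) NOT NULL,
    CachedAtUtc DATETIME2      NOT NULL
);
GO

CREATE PROCEDURE dbo.GetDashboardData
AS
BEGIN
    SET NOCOUNT ON;

    -- Rebuild the cached copy only when it is older than 30 seconds.
    IF NOT EXISTS (
        SELECT 1
        FROM dbo.DashboardCache
        WHERE CachedAtUtc > DATEADD(SECOND, -30, SYSUTCDATETIME())
    )
    BEGIN
        DELETE FROM dbo.DashboardCache;

        INSERT INTO dbo.DashboardCache (MetricName, MetricValue, CachedAtUtc)
        SELECT v.MetricName, SUM(v.MetricValue), SYSUTCDATETIME()
        FROM dbo.ExpensiveReportingView AS v  -- stand-in for the costly query being shielded
        GROUP BY v.MetricName;
    END;

    -- A dashboard polling every second now reads this small table instead of
    -- re-running the expensive query on every refresh.
    SELECT MetricName, MetricValue, CachedAtUtc
    FROM dbo.DashboardCache;
END;
GO
```

A production version would also need to handle concurrent refreshes, but the shape of the idea is the same: the fast-polling consumer only ever touches a small, cheap table.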

Things to consider in the future

  1. Merge Data instead of reload – Most products we developed in the past were strictly reporting platforms and they performed data reloads, meaning we cleared all of the tables and then reloaded the data with each process run. Our latest product started the same way, but halfway through the development process we needed to merge loaded reporting data with data users were manually entering. We opted to keep the current process structure in place and add a manual entry caching mechanism. This works well enough, but it may have been a better pattern to perform a merge instead of a reload (see the first sketch after this list).
    1. Merging may better support a “Source System Down” state. Currently we bail out of the process if nothing was staged, whereas a merge could just process normally.
    2. This would also be a first step towards implementing an incremental process framework in our products, as I have done elsewhere.
  2. Manual Overrides – Another decision we made when we had to merge manually entered user data with the reporting data we extracted, transformed, and loaded was to keep the reporting data structures and processes unchanged and have views, functions, and procedures determine whether to show the original reporting data or the overrides, based on statuses and configurations. One of the main reasons we chose this approach was that our direction was near-real-time (1-minute) refreshes, and we were concerned about locking and deadlocking for users entering information while the process was running. Each view, function, or procedure used to determine which data to use is fairly straightforward in isolation. However, the fully implemented logic became overly complex and not as simple to maintain or test as we first thought; thankfully we had automated tests to tell us we did not break anything existing. In the future, we will probably change this to apply manual overrides when populating the reporting tables instead of determining the values to report on the fly (see the second sketch after this list).
    1. This will create simple, isolated processes that determine what is in the reporting tables:
      1. When the refresh process runs
      2. When a user saves overrides
    2. The reporting queries need only look at the tables
    3. The downside is that users may have to wait 5-10 seconds if they try to save while a data pull process is running, but we expect they really only use manual entry when the automation is down or the source system is unavailable.
    4. The only thing to account for is a snapshot of the source data, since we also need it in its un-overridden state for manual process entry.
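
For the first point, a merge-style load might look something like the sketch below. The table and column names are invented for illustration, and a real implementation would still have to decide how deletes should be handled.

```sql
-- Illustrative merge from a staging table into a reporting table.
-- Reporting.WorkOrder, Staging.WorkOrder, and the columns are hypothetical.
MERGE Reporting.WorkOrder AS target
USING Staging.WorkOrder AS source
    ON target.WorkOrderId = source.WorkOrderId
WHEN MATCHED THEN
    UPDATE SET target.Status      = source.Status,
               target.DueDate     = source.DueDate,
               target.LoadedAtUtc = SYSUTCDATETIME()
WHEN NOT MATCHED BY TARGET THEN
    INSERT (WorkOrderId, Status, DueDate, LoadedAtUtc)
    VALUES (source.WorkOrderId, source.Status, source.DueDate, SYSUTCDATETIME());
-- Note there is no WHEN NOT MATCHED BY SOURCE clause: if the source system is
-- down and nothing was staged, the merge simply touches no rows, instead of
-- leaving empty reporting tables the way a clear-and-reload would.
```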
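
For the second point, applying overrides while populating the reporting table (rather than resolving them in every view) could be as simple as the sketch below, run both when the refresh process finishes and when a user saves an override. Again, the object names are hypothetical, and the un-overridden source values would live in their own snapshot, per the last sub-point above.

```sql
-- Illustrative pass that folds active manual overrides into the reporting
-- table after it has been loaded. All object names are hypothetical.
UPDATE r
SET    r.Status  = COALESCE(o.StatusOverride,  r.Status),
       r.DueDate = COALESCE(o.DueDateOverride, r.DueDate)
FROM   Reporting.WorkOrder AS r
JOIN   ManualEntry.WorkOrderOverride AS o
    ON o.WorkOrderId = r.WorkOrderId
WHERE  o.IsActive = 1;  -- only overrides still flagged as in effect
-- Reporting queries then read Reporting.WorkOrder directly, with no
-- status or configuration logic baked into views or functions.
```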

Did this strike a chord with you?

Let’s discuss in the comments below.