Within the community of drug manufacturers, wholesalers, dispensers, solution providers and regulators, we have debated many different architectures that might serve all global regulatory and business needs. The easiest architecture to understand is the central database model, where all track-and-trace data on a particular drug is pushed to a single (usually country-based) database. At the other extreme is the distributed model, where each trading partner in the supply chain holds its own data and that data is retrieved by others when needed. Many variations on these two themes have been brought forward in an attempt to address the myriad issues faced by trading partners, regulators and solution providers. Most of those issues revolve around who gets to see this valuable data.
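To make the contrast concrete, here is a minimal sketch of how a full trace would be assembled under each model. All names here (TraceEvent, central_trace, distributed_trace) are hypothetical illustrations, not any standard's API: in the centralized model one query against a single repository suffices, while in the distributed model the requester must ask each partner and stitch the results together.

```python
from dataclasses import dataclass


@dataclass
class TraceEvent:
    """One custody/movement record for a serialized package."""
    package_id: str
    partner: str
    event: str  # e.g. "commissioned", "shipped", "received"


def central_trace(db: list[TraceEvent], package_id: str) -> list[TraceEvent]:
    """Centralized model: every partner pushed its events to one
    repository, so a single query returns the full history."""
    return [e for e in db if e.package_id == package_id]


def distributed_trace(partners: dict[str, list[TraceEvent]],
                      package_id: str) -> list[TraceEvent]:
    """Distributed model: each partner holds its own events; the
    requester queries each node in the supply chain path in turn
    and stitches the results together."""
    events: list[TraceEvent] = []
    for _name, store in partners.items():
        events.extend(e for e in store if e.package_id == package_id)
    return events
```

Note that the distributed version only works if every partner on the path responds, which is exactly where the data-governance and latency questions discussed below come in.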
The result is usually a debate between proponents of a seemingly simple architecture (centralized) with complex data governance (all data governance rules must be negotiated and centralized as well) and proponents of a seemingly complex architecture (distributed) with simple data governance (each company decides whether to share its data). Personally, I believe the distributed model holds the most value for each trading partner and for the supply chain as a whole. My reasoning is that although the centralized model is technically easy to set up, in the end it only provides a solution for a particular regulatory entity. All the rich data that is collected is not shared with trading partners for other purposes (I won’t go into those here, but as an analogy, some folks thought a mobile phone would only ever be used for phone calls; today, that is the least-used feature of our mobile phones). A distributed model, on the other hand, forces us to solve a number of technical challenges that will ultimately yield a much more robust and useful architecture (again, think of today's mobile phones), one that will serve the industry and individual trading partners far into the future and could be the basis for a revolution in supply chain efficiency (CPFR, proof of delivery, vendor-managed inventory and patient medication compliance being the low-hanging fruit here).
So, what does all of this have to do with the title of this article? One of the complexities we as an industry need to resolve is how to deal with data latency in a distributed architecture (and, for that matter, even in a centralized one). If the traceability data for all of the nodes in the supply chain path of a particular package isn’t available, what does that mean? Does it necessarily mean that nefarious activity has occurred? Or does it simply mean that some trading partners report product movement once a day, twice a day or in real time? How do we tell the difference when deciding whether to sell, ship or administer the medication?
The NIST (National Institute of Standards and Technology) draft Special Publication “Cloud Computing Service Metrics Description” provides some very interesting discussion of metrics for Service Agreements (SAs). I’m sure that other bodies (ISO, UN/CEFACT, etc.) have similar publications, but in this document NIST seems to have developed a meta-model that we in the traceability field might be able to leverage to predict the availability of a full set of traceability data. Put simply, if a set of traceability data leading back to the original manufacturer is not complete, a software system might be able to query the associated set of SAs to determine when the full set should be available, or to provide some insight into why a particular link was missing when we tried to access it.
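The idea can be sketched in a few lines. This is purely illustrative and not drawn from the NIST publication itself: the reporting windows and the missing_link_status function are hypothetical, standing in for SA metrics that would record how quickly each partner promises to publish its traceability events. With that metadata, a missing link can be classified as merely "not yet published" versus genuinely overdue.

```python
from datetime import datetime, timedelta

# Hypothetical per-partner SA metadata: how soon after a movement each
# partner promises to publish its traceability events.
REPORTING_WINDOW = {
    "manufacturer": timedelta(hours=1),   # near real time
    "wholesaler": timedelta(hours=12),    # twice a day
    "dispenser": timedelta(hours=24),     # once a day
}


def missing_link_status(partner: str, shipped_at: datetime,
                        now: datetime) -> str:
    """Classify a missing traceability link against the partner's SA.

    Returns "pending" if the partner's promised reporting window has
    not yet elapsed (the data may simply not be published yet),
    "overdue" if the window has passed and the gap warrants
    investigation, or "unknown" if no SA is on file.
    """
    window = REPORTING_WINDOW.get(partner)
    if window is None:
        return "unknown"  # no SA metrics available for this partner
    deadline = shipped_at + window
    return "pending" if now <= deadline else "overdue"
```

A verification system could run this check over every gap in a trace before raising an alarm: a "pending" gap from a once-a-day reporter is expected behavior, while an "overdue" gap is the signal worth escalating.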
I’d be interested in the opinions of my more technical colleagues in the field. I’ve made some claims that not everyone may agree with, but, if you could, take a look at the draft document and share with the community your opinion on whether this study adds to our long-running discussions or is irrelevant. Lastly, please don’t just shout “Irrelevant!” (using all caps, I guess) … provide some discussion.