Data has become a standard expectation for the management of any company, big or small. It is so synonymous with modernity that it is a required output of any contemporary ERP, WMS, or TMS. Yet if a company wanted to engineer its entire supply chain, the effort would take hundreds, if not thousands, of hours before yielding meaningful results. Why, in this age of "big data," have we not ended the massive labor hours tied to supply chain engineering? What challenges befall any company attempting to rationalize and expand its existing supply chain footprint? The answers lie in three distinct truths of our current world: the sources and quality of data are subjective at best, there are no standards for data collection and outputs across like systems, and the desired outcomes related to the use of data remain as much an art as a science.
Supply chains have become increasingly complex and require more consistent manipulation and ongoing management than ever before. The advent of concepts like near-market fulfillment, final-mile delivery, and omni-channel retail strategy has challenged even the most experienced supply chain teams to keep relevant product stocked where it is needed. Similarly, the days when only extremely large companies consumed supply chain engineering resources are behind us; many small-to-medium-sized businesses are now inclined to participate in these studies as awareness of their product grows faster than ever before. Demand for a product can go from 0 to 100 overnight, and back to 0 again.
Where a large company might once have embarked on a network analysis or out-of-stock engineering exercise once or twice a decade, some now do so annually at a minimum. In examining the state of the supply chain engineering world, it has become clear that those with good-quality data, and good business practice around standardization of domain and data processing, get better results and get them more often than the average company. Sadly, very few can claim that quality of data across the breadth of the network being engineered. As a result, nearly 90% of all supply chain engineering time is spent cleansing and preparing data for analysis. Furthermore, the sheer volume of data, and the inconsistencies in that data across systems, modes, and business lines, can make the job even more complex than it was 10 years ago, when data simply had to be entered by hand. Yes, even today, the consultancy tends to be more of a data janitor than a data engineer.
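To make the "data janitor" problem concrete, here is a minimal sketch of the kind of cleansing and standardization work described above. All field names, formats, and systems here are hypothetical: it assumes a TMS export that reports weight in pounds with US-style dates, and a WMS export that reports weight in kilograms with ISO dates, and maps both into one common schema.

```python
from datetime import datetime

def normalize_tms_record(rec):
    """Map a record from a hypothetical TMS export to a common schema."""
    return {
        "shipment_id": rec["ShipmentID"].strip().upper(),
        "ship_date": datetime.strptime(rec["ShipDate"], "%m/%d/%Y").date().isoformat(),
        "weight_lb": round(float(rec["WeightLbs"]), 1),
        "origin_zip": rec["OriginZip"].zfill(5),  # restore leading zero lost by a spreadsheet
    }

def normalize_wms_record(rec):
    """Map a record from a hypothetical WMS export; weight arrives in kg."""
    return {
        "shipment_id": rec["ship_ref"].strip().upper(),
        "ship_date": datetime.strptime(rec["shipped_on"], "%Y-%m-%d").date().isoformat(),
        "weight_lb": round(float(rec["weight_kg"]) * 2.20462, 1),  # convert kg -> lb
        "origin_zip": rec["origin_postal"].zfill(5),
    }

# The same shipment, as two systems might report it.
tms = {"ShipmentID": " s-1001 ", "ShipDate": "03/14/2024",
       "WeightLbs": "150", "OriginZip": "7310"}
wms = {"ship_ref": "S-1001", "shipped_on": "2024-03-14",
       "weight_kg": "68.0389", "origin_postal": "07310"}

a, b = normalize_tms_record(tms), normalize_wms_record(wms)
print(a == b)  # True: both systems now describe the shipment identically
```

Multiply this small mapping by dozens of systems, hundreds of fields, and millions of records, and the 90% figure above stops sounding surprising.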
3PLs have begun to bring a level of clarity and focus to this aspect of the industry. As these companies have come to manage entire continental or global supply chains, they have begun to possess volumes of data that are valuable to engineering. Through the "cleansing" of data at the integration point, combined with the ability to standardize data across disparate systems, many companies are beginning to find the flexibility and value in data they had hoped to achieve on their own. This ubiquity of information across business units and systems provides the clearinghouse of data necessary to quickly and effectively manipulate large volumes of data for accurate supply chain analysis. Many companies are finding that partnering with a provider on a pay-as-you-go model for managing their transportation, and thereby their data management, delivers far greater savings over the long term. The ability to rapidly engineer segments of their market as they move relevant product closer to the consumer, in step with demand, can save money in many forms. Cost savings from reduced labor have become a factor in these engineering models, as a consistent, trustworthy data source eliminates the need for hundreds or thousands of modeling hours. Setting aside time and its inherent expense, the low cost of passing that freight through a transparency hub, where its data is standardized and accessible, comes nowhere near the fees large consultancies charge for the same analysis.
Many think of our world as rapidly maturing when it comes to data. While there are certainly places where data drives every decision and helps make the world faster and more efficient, that is certainly not the case across the supply chain industry. A great deal of maturation has yet to happen before we can engineer supply chains on demand and use the results to drive behavioral change across a supply chain that spans the globe. To get there, many of us will need to continue to rely on trusted partners, until we can constantly re-engineer our world with the level of confidence that other industries enjoy today.