Data for the sake of data can do more harm than good.

Data can be very useful, in moderation! Over what seems like a very long career, I have both used data and seen it used for a great number of purposes. But “data,” of course, can mean different things.

Early in my career, during the 1970s, data were what I would consider “flat” numbers: if a machine had a counter, or if you could measure a dimension with some type of gauge, that was considered best-in-class “data.” During the 1980s, computers became more powerful, and programmers became more adept at extracting information that previously could not be easily obtained. The power of large-scale computing, combined with the evolving skills of programmers, ushered in previously impossible capabilities, such as logistics systems that could intelligently schedule the entire manufacturing process from procurement to shipping.

By the late 1980s, personal computers had evolved, and user-friendly software such as Lotus 1-2-3, Excel and Word enabled a far wider group of employees to collect information and create far more usable “data.” These PC users were often not professional programmers but the shop floor operators or supervisors who needed the data output and also understood the source of all the inputs. In many ways, the late 1980s through the 1990s was a period of quantum expansion in data use. The same people who generated the information could also build the tools, entering the information and slicing and dicing it through pivot tables to produce far more usable “data.” Of course, this worked only as well as the quality of the inputs and the formulas behind them.

More recently, thanks in large part to readily available, ever more powerful computing – combined with technological advances in sensors and software – almost any transaction, product, process or even idea can have thousands of data points to support or disprove its quality, performance or viability. At times, this progression can render what is considered “data” inaccurate, unhelpful or even detrimental to decision-making.

As data evolution moves forward, one cannot help but wonder what happens if and when artificial intelligence (AI) takes hold in a significant way. When humans become overwhelmed with data input, they can tap into their experience of what they did and why it worked (or didn’t) to regain focus or clarity. An AI has no such fallback: data for the sake of data, piled layer upon layer, may simply exacerbate the problem and produce a logical but very unsuccessful solution.

Which raises the question, “When is there too much data?” Early in my career, anyone swamped with too much information or other input was said to be experiencing analysis paralysis. In this mental state, you begin to question everything, and good and bad inputs are treated equally because they all look the same to the overwhelmed person analyzing them. In that environment, any time saved by good data is more than neutralized by the time wasted interpreting and validating mounds of bad data. All too often, this leads to bad decisions despite having tons of data.

Often less is more. Just because something can be measured or collected does not mean that it should be.

Experience and prioritizing are the linchpins for separating noise from usable information. Experience tells us when we would benefit from more information. Equally, it tells us what type of information is critical for the task at hand versus what might be interesting but of no value, or worse, confusing and distracting.

More importantly, experience is the accumulation of data compiled over years or decades of successes and failures. Seeing how data can make tasks easier, faster or less costly comes from the experience gained both when things go well and when they do not. Experience is commonsense knowledge that enables a person to realize what is helpful and what is not.

Prioritizing, on the other hand, is recognizing the inefficiency of too much data, then consciously identifying the most important information, followed by the next most important, and so on, reviewing the list again and again until only the top few truly important inputs are mined. Prioritizing, enhanced by experience, makes it more probable that only usable information will be compiled, and that input will become the valued data needed to move a product or process forward. Together, experience and prioritizing enable users to minimize the noise that distracts effort and wastes critical time.

Throughout my career, the collection of data has been pursued to provide clarity and to enable an individual, a team or an entire corporation to achieve success faster and more efficiently. Data analysis enables more rapid problem-solving while providing the building blocks to potentially prevent the same problem from recurring. Managing that data, and not letting the magnitude of all available inputs manage you, is imperative. Add the more critical input of experience, together with the discipline of prioritization, to avoid data noise and instead harness only the signal needed for the moment.

Peter Bigelow is president of FTG Circuits Haverhill (imipcb.com); pbigelow@imipcb.com. His column appears monthly.
