Data Science has been hailed as the silver bullet. With lots of interesting data we can make accurate, fast-paced decisions. We can see what is really going on in our companies. At least that is the idea…
Learning and development (L&D) departments and organizational managers can use learning analytics, people analytics and workforce analytics to identify current and future training needs. But, using data comes with inherent dangers. Here I highlight 5 typical mistakes made with analytics that can cost organizations significantly.
ONE: Capture the wrong data
First things first, it is essential to capture the right data. Whether capturing quantitative data (numbers) or qualitative (descriptive), it is important to know what data to get.
Once decided, effective means of data collection are needed. Surveys, observations, focus groups, tests and evaluations can all be used. But, many times these are poorly designed and do not measure the intended skill, knowledge or behavior, etc.
If the data collected does not truly represent what is going on, decision making is compromised. Using carefully designed and validated measurement tools is one way to mitigate the risk. But who uses the tools, and how, is just as important.
TWO: Manipulating questions and answers
Let us assume that the data collected is accurate and reliable. Another big pitfall is asking the wrong questions. I have worked with managers with access to huge data sets. What is interesting to observe is the different ways of asking questions.
What strikes me more than anything, however, is how some people have already answered their own question before they ask it. It is very common to observe data being selected to provide the desired answers. Cherry-picking data is dangerous, and confirmation bias equally so.
Sales is a classic example. It can be very easy to purposely select only a small sample of data to show either good or poor performance. Just pick up an end-of-quarter report and you will often see only a small slice of reality. Sandbagging results from one quarter to the next can be a great way to fool the people around us.
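The effect is easy to demonstrate. The sketch below uses invented weekly sales figures (a gentle downward trend plus noise); the full-year comparison shows decline, while a cherry-picked "best week" headline number looks impressive:

```python
import random

random.seed(42)

# Hypothetical weekly sales for a full year: a gentle downward trend
# plus noise. All numbers here are invented for illustration.
weeks = range(52)
sales = [1000 - 5 * w + random.gauss(0, 30) for w in weeks]

def average(xs):
    return sum(xs) / len(xs)

# Honest view: second half of the year vs first half (negative = decline).
full_year_trend = average(sales[26:]) - average(sales[:26])

# Cherry-picked view: report only the single best week.
headline_number = max(sales)

print(f"Second-half vs first-half change: {full_year_trend:.0f}")
print(f"Best single week (the headline): {headline_number:.0f}")
```

Both numbers come from the same data set; only the selection differs. That is the whole trick.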
THREE: Jump the gun
It can be tempting to think that the data in front of us is a true, entirely accurate depiction of reality. Philosophical beliefs aside, this is generally not the case. Data can be used to make observations about what is going on, but it won’t necessarily tell the whole story.
A common mistake I see is users of data jumping the gun, making snap decisions based on one data collection. This can have profound effects. For example, high performers who were having a bad day or quarter when the data was captured may be negatively affected. Perhaps they get reprimanded by their boss or even consigned to long, unnecessary training sessions. Even worse, some companies fire them.
Equally, after providing training to staff and observing small, short-term increases in performance, it can be tempting to believe it is mission accomplished. These improvements could be real, but they could equally just be poor performers regressing to the mean (average) level of performance. See Kahneman's account of Israeli flight instructors, who mistook regression to the mean for the effect of praise and punishment. It could just be natural variation in performance.
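Regression to the mean is easy to demonstrate with a quick simulation; all numbers below are hypothetical. Each person has a stable underlying skill plus day-to-day noise. The bottom decile on a first test scores markedly better on a second test even though nothing was done in between:

```python
import random

random.seed(0)

# Stable "true" skill plus day-to-day noise on each test.
# Scores are hypothetical; no real performance data is used.
n = 1000
true_skill = [random.gauss(50, 10) for _ in range(n)]
test1 = [s + random.gauss(0, 10) for s in true_skill]
test2 = [s + random.gauss(0, 10) for s in true_skill]  # no training in between

# Select the bottom 10% on the first test, as a manager might before training.
cutoff = sorted(test1)[n // 10]
bottom = [i for i in range(n) if test1[i] <= cutoff]

avg1 = sum(test1[i] for i in bottom) / len(bottom)
avg2 = sum(test2[i] for i in bottom) / len(bottom)

print(f"Bottom decile, test 1: {avg1:.1f}")
print(f"Bottom decile, test 2 (no intervention): {avg2:.1f}")
```

Had a training course been run between the two tests, the improvement would look like its effect. A control group that is tested twice but not trained is what separates the two explanations.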
FOUR: False causality and narrative fallacy
These are two closely related problems. In the case of false causality, it can be tempting to assume two things are connected when they are not. In reality, very few outcomes are affected by just one factor.
It is important for L&D designers and managers to ensure they look at all data within context. There are many possible causes for changes in performance and many potential confounding variables. Only by using proper experimental and control groups can causality be inferred with more confidence.
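A minimal sketch of why the control group matters, using invented numbers: here a seasonal upswing (a confounder) lifts everyone's performance, trained or not. A naive before/after comparison attributes the whole lift to training; comparing against an untrained control group isolates the real effect:

```python
import random

random.seed(1)

# Hypothetical scenario: a market upswing lifts everyone's numbers,
# whether or not they were trained. All effect sizes are invented.
def performance(trained):
    baseline = random.gauss(100, 5)
    seasonal_lift = 8                      # confounder: affects everyone
    training_effect = 3 if trained else 0  # the effect we actually care about
    return baseline + seasonal_lift + training_effect

before = [random.gauss(100, 5) for _ in range(500)]   # pre-training scores
trained_after = [performance(True) for _ in range(500)]
control_after = [performance(False) for _ in range(500)]

def avg(xs):
    return sum(xs) / len(xs)

# Naive before/after: confounder and training effect are mixed together.
naive_estimate = avg(trained_after) - avg(before)

# Controlled comparison: the seasonal lift cancels out.
controlled_estimate = avg(trained_after) - avg(control_after)

print(f"Naive estimate of training effect: {naive_estimate:.1f}")
print(f"Controlled estimate: {controlled_estimate:.1f}")
```

The naive estimate overstates the training effect by roughly the size of the confounder, which is exactly the story a quarterly report without a control group tells.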
Equally, narrative fallacy causes issues. Time and time again, after an event happens we are great at creating stories to explain it. Hindsight is a great thing. It is remarkably easy for managers and other users of data to create a story that justifies the result. The issue is that these stories can cover the reality.
FIVE: Personal Bias
Nobody likes to admit that they are biased. We all believe that we make impartial, objective judgements. In the case of using analytics in the workplace, we are often more biased than we realize.
It is easy for us to have preconceived ideas or prejudices about the people the data describes. Thus, we must seek to eliminate bias as much as possible. For example, in performance analysis, it can be deceptively easy to set up the measurement to ensure certain types of people perform poorly or well.
Take for example the ACT test that was established decades ago for college entry. It was designed to measure who was academically the best and ready for university. It had a secondary effect, however. Intentional or not, it created a separation of classes, races and genders, meaning the talent pool was vastly reduced and biased in favor of a few.
This is why it is often advisable to get outside help from a data specialist. Data analysts can guide companies on how to use data more effectively and how to ensure biases and discrimination do not occur. This is particularly important when analytics are used for hiring and performance management.
How can I avoid the mistakes?
As mentioned above, seeking out advice from a specialist is highly recommended. As data use becomes more prevalent, there are also likely to be legal implications. Employees often have the right to see this data and decide what happens with it. Protecting it will be of the utmost importance.
More than anything, however, it is vital to ensure data gets used in productive ways. In the case of L&D, analytics offer great possibilities. But if companies want to make the right decisions, it is vital to ask the right questions and deduce the most accurate answers.