Understand the major types of machine learning models, which ones perform best on your data, and the questions to ask predictive analytics providers.
The human resources department is a mission-critical function in most businesses. So the promise of better people decisions has generated interest in and adoption of advanced machine-learning capabilities.
In response, organizations are adopting a wide variety of data science tools and technology to produce economically optimal business outcomes. This trend is driven by the proliferation of data and the improved decision-making opportunities that come with harnessing the predictive value of that data.
What are the downsides to harnessing machine learning?
For one, machines lack ethics. They can be programmed to intelligently and efficiently drive optimal economic outcomes, and their use in decision making can appear to produce desirable organizational behaviors. But machines have no sense of fairness or justice, and optimal economic outcomes do not always correspond to optimal ethical outcomes.
So the key question facing human resources teams and the technologists who support them is: "How can we ensure that our people decisions are ethical when a machine is suggesting those decisions?"
The answer almost certainly requires radical transparency about how artificial intelligence and machine learning are used in the decision-making process. It is impossible to assess the ethics of a prediction made by a machine unless the input data and the transformations applied to that data are clear and understood as well. General differences between machine learning approaches have a profound impact on the ethics of the outcomes their predictions lead to. So let's begin by understanding some of those differences.
Let's focus on the three main types of machine learning models: the black box model, the canned model, and the custom built model.
What is a Black Box Model?
A black box model is one that produces predictions that can't be explained. Tools exist to help users probe black box models, but these models remain extremely difficult to understand.
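To make that concrete, here is a minimal sketch of one common way to probe an opaque model from the outside: permutation importance, which measures how much performance drops when each input is scrambled. The model, features, and data below are hypothetical stand-ins, not any vendor's actual system.

```python
# A minimal sketch of probing an opaque model with permutation importance.
# The classifier, feature names, and data are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "tenure_years": rng.uniform(0, 20, 500),
    "engagement_score": rng.uniform(1, 5, 500),
    "commute_minutes": rng.uniform(5, 90, 500),
})
# Synthetic attrition labels driven mostly by engagement.
y = (X["engagement_score"] + rng.normal(0, 0.5, 500) < 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much test accuracy drops:
# large drops indicate the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(X.columns, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Note that this kind of probing only tells you which inputs matter; it still cannot tell you why the model combines them the way it does.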
Many vendors build black box models for customers but are unable or unwilling to explain their techniques and the results those techniques tend to produce. Sometimes it is difficult for the model vendor to understand its own model! The result is that the model lacks any transparency.
Black box models are often trained on very large data sets, and larger training sets can greatly improve model performance. However, for that higher level of performance to generalize to your environment, many conditions must hold. Naturally, without transparency it is difficult to trust a black box model. As you can imagine, it is concerning to depend on a model that uses sensitive data when that model lacks transparency.
For example, asking a machine to determine if a photo has a cat in the frame doesn't require much transparency because the objective lacks an ethical aspect. But decisions involving people often have an ethical aspect to them. This means that model transparency is extremely important.
Black box models can cross ethical lines where people decisions are concerned. Models, like humans, can exhibit biases resulting from sampling or estimation errors. They can also use input data in undesirable ways. Furthermore, model outputs are frequently used in downstream models and decisions. In turn, this ingrains invisible systematic bias into the decision.
Naturally, the organization jeopardizes its ethical posture when human or machine bias leads to undesirable diversity or inclusion outcomes. One of the worst possible outcomes is a decision that is unethical or prejudicial. These bad decisions can have legal consequences or worse.
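For illustration, one simple check HR teams often apply to selection outcomes is the four-fifths (adverse impact) rule. The sketch below assumes a hypothetical table of model-assisted decisions with `group` and `selected` columns; it is one basic screen, not a complete fairness audit.

```python
# A minimal sketch of a four-fifths (adverse impact) check on model-assisted
# decisions. Column names ("group", "selected") are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group, then each group's rate relative to the highest.
rates = decisions.groupby("group")["selected"].mean()
impact_ratios = rates / rates.max()
print(impact_ratios)

# The four-fifths rule flags groups selected at under 80% of the top rate.
flagged = impact_ratios[impact_ratios < 0.8]
if not flagged.empty:
    print("Potential adverse impact:", list(flagged.index))
```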
What is a Canned Model?
The terms "canned model" or “off-the-shelf model” describe a model that was not developed or tailored to a specific user’s dataset. A canned model could also be a black box model depending on how much intellectual property the model’s developer is willing to expose. Plus, the original developer might not understand much about its own model.
Canned models are vulnerable to the same biases as black box models. Unrepresentative data sets can lead to unethical decisions. Even a representative data set can have features that lead to unethical decisions. So canned models aren't without their disadvantages either.
But even with a sound ethical posture, canned models can perform poorly in an environment that simply isn’t reflective of the environment on which the model was trained. Imagine a canned model that segmented workers in the apparel industry by learning and development investments. A model trained on Walmart’s data wouldn’t perform very well when applied to decisions for a fashion startup.
Canned models can be quite effective if your workforce looks very similar to the ones the model was trained on. But that training population is almost certainly more general than yours. Models perform best when the training data closely mirrors the real-life population the model will actually score.
What are Custom Built Models?
Which brings us to custom built models. Custom models are the kind that are trained on your own data. One AI is an example of the custom built approach: it delivers specialized models that understand your environment because they were trained on it, detecting patterns within your data to learn and make accurate predictions.
Custom models discover the unique aspects of your business and learn from those discoveries. To be sure, it is common for data science professionals to deploy the best performing model that they can. However, the business must ensure that these models comply with high ethical and business intelligence standards. That's because it is possible to make an immoral decision with a great prediction.
So for users of the custom built model, transparency is only possible when the development techniques themselves are open rather than cloudy or secret. Even with custom built models, it is important to assess the ethical impact a new model will have before it is too late.
Custom built models may incorporate some benefits of canned models, as well. External data can be incorporated into the model development process. External data is valuable because it can capture what is going on outside of your organization. Local area unemployment is a good example of a potentially valuable external data set.
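As a rough illustration, here is how an external series like local unemployment might be joined onto workforce records as a model feature. The tables, join keys, and rates below are hypothetical.

```python
# A minimal sketch of enriching workforce records with an external data set.
# The local unemployment figures and join keys are hypothetical.
import pandas as pd

employees = pd.DataFrame({
    "employee_id": [1, 2, 3],
    "metro_area":  ["Austin", "Denver", "Austin"],
})

# External series, e.g. published local-area unemployment rates.
unemployment = pd.DataFrame({
    "metro_area":        ["Austin", "Denver"],
    "unemployment_rate": [0.034, 0.041],
})

# Left-join so every employee row gains the external context as a feature.
features = employees.merge(unemployment, on="metro_area", how="left")
print(features)
```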
Going through the effort of building a model that is custom to your organization will provide a much higher level of understanding than just slamming a generic model on top of your data. You will gain the additional business intelligence that comes from understanding how your data, rather than other companies' data, relates to your business outcomes.
The insights gleaned during the model development process can be valuable even if the model is never deployed. Understanding how any model performs on your data teaches you a lot about your data. This, in turn, will inform which type of model and model-building technique will be advantageous to your business decisions.
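For example, a quick cross-validation run on your own data yields a distribution of scores rather than a single advertised number. The sketch below uses synthetic data and scikit-learn as one possible toolchain.

```python
# A minimal sketch of measuring how a candidate model performs on your own
# data before deciding whether to deploy it. The data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))            # stand-in for your HR features
y = (X[:, 0] + rng.normal(0, 1, 300) > 0).astype(int)

# Five-fold cross-validation gives a distribution of scores, not one number.
scores = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print(f"ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```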
Don’t Be Misled by Generic Model Performance Indicators
A canned model’s advertised performance can be deceptive. The shape of the data that the canned model learned from may be drastically different from the data in your specific business environment. For example, if 5% of the people in the model's sample work remotely, but your entire company is remote, then the impact and inferences drawn by the model about remote work are not likely to inform your decisions very well.
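A crude but useful sanity check is to compare the vendor's stated training distribution against your own population, as in this sketch. The column name and the 20% threshold are illustrative assumptions, not established standards.

```python
# A minimal sketch of comparing a vendor's stated training distribution with
# your own population, using the remote-work example above.
import pandas as pd

your_workforce = pd.DataFrame({"remote": [1] * 200})  # fully remote company

vendor_training_share = 0.05                # remote share in vendor's sample
your_share = your_workforce["remote"].mean()

gap = abs(your_share - vendor_training_share)
print(f"Remote share: vendor sample {vendor_training_share:.0%}, yours {your_share:.0%}")
if gap > 0.20:  # threshold is an illustrative choice, not a standard
    print("Large distribution gap: inferences about remote work may not transfer.")
```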
When to be Skeptical of Model Performance Numbers
Most providers of canned models are not eager to determine the specific performance of their model on your data because of the inherent weaknesses described above. So how do you sniff out performant models? How can you tell a good-smelling model from a bad-smelling one?
The first reason to be skeptical is a vendor that does not offer relative performance numbers. A relative performance value is a comparative one, so failing to disclose relative performance should smell bad. Data scientists understand the importance of measuring performance; they know it is crucial to understand performance before using a model's outputs. By avoiding relative performance numbers, the vendor is not being fully transparent.
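One reasonable way to operationalize "relative performance" is to score the candidate model against a naive baseline on the same held-out data. The sketch below uses synthetic data and scikit-learn's DummyClassifier as that baseline; it is an illustration, not a prescribed method.

```python
# A minimal sketch of relative performance: compare a candidate model against
# a naive baseline on the same held-out data. Data is synthetic.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(0, 1, 500) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = DummyClassifier(strategy="prior").fit(X_tr, y_tr)
model = LogisticRegression().fit(X_tr, y_tr)

base_auc = roc_auc_score(y_te, baseline.predict_proba(X_te)[:, 1])
model_auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"baseline AUC {base_auc:.3f}, model AUC {model_auc:.3f}, "
      f"lift {model_auc - base_auc:+.3f}")
```

A vendor unwilling to produce a number like that lift, on your data, is asking you to take performance on faith.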
The second reason to be skeptical concerns vendors who can't (or won't) explain which features are used in their model and the contribution that each feature makes to the prediction. It is very difficult to trust a model's outputs when the features and their effects lack explanation. So that would certainly smell bad.
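By contrast, here is the kind of feature-level answer a transparent vendor should be able to give. For a linear model, standardized coefficients map directly to each feature's contribution; the features and data below are hypothetical.

```python
# A minimal sketch of feature-level explanation using a transparent linear
# model. Feature names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = pd.DataFrame({
    "tenure_years":     rng.uniform(0, 20, 400),
    "engagement_score": rng.uniform(1, 5, 400),
    "pay_ratio":        rng.uniform(0.7, 1.3, 400),
})
y = (X["engagement_score"] < 2.5).astype(int)

# Standardize so coefficient magnitudes are comparable across features.
X_std = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_std, y)

for name, coef in sorted(zip(X.columns, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name}: {coef:+.2f}")
```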
One Model published a whitepaper listing the questions you should ask every machine learning vendor.
Focus on Relative Performance….or Else!
There are risks that arise when using a model's outputs without relative performance numbers. The most immediate risk to the business is that faith in the model itself could diminish, meaning internal stakeholders would not realize "promised" or "implied" performance. Of course, failing to live up to these promises is a trust-killer for a predictive model.
Employees themselves, and not just decision makers, can distrust models and object to decisions made with them. Even worse, employees could adjust their behavior in ways that circumvent the model in order to "prove it wrong".
But loss of trust by internal stakeholders is just the beginning. Legal, compliance, financial, and operational risk can increase when businesses fail to comply with laws, regulations, and policies. Therefore, it is appropriate for champions of machine learning to be very familiar with these risks and to ensure that they are mitigated when adopting artificial intelligence.
Finally, it is important to identify who is accountable for poor decisions that are made with the assistance of a model. The act of naming an accountable individual can reduce the chances of negative outcomes, such as bias, illegality, or imprudence.
How to Trust a Model
A visually appealing model that delivers "interesting insights" is not necessarily trustworthy. After all, a model that has a hand in false or misleading insights is a total failure.
At One Model, we feel that all content generated from predictive model outputs must link back to that model's performance metrics. An organization cannot consider itself engaged in ethical use of predictive data without this link.
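One way to maintain that link, sketched below as an assumption rather than One Model's actual implementation, is to carry the training-time metrics alongside every scored output.

```python
# A minimal sketch of keeping the link between outputs and the metrics of the
# model that produced them. The schema is a hypothetical illustration.
from dataclasses import dataclass, field

@dataclass
class ScoredOutput:
    prediction: float
    model_id: str
    # Performance metrics captured at training time travel with the output.
    metrics: dict = field(default_factory=dict)

output = ScoredOutput(
    prediction=0.82,
    model_id="attrition-v3",
    metrics={"roc_auc": 0.74, "evaluated_on": "holdout set"},
)
print(output)
```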
Canned and black box models are extremely difficult to understand, and it is even more difficult to predict how they will respond to your specific data. There are cases where these types of models can be appropriate, but those cases are few and far between in the realm of people data in the human resources function.
Instead, custom models offer a much higher level of transparency. Model developers and users understand their own data much better throughout the model building process. (This process is called Exploratory Data Analysis, and it is an extremely under-appreciated aspect of the field of machine learning.)
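A minimal EDA pass might look like the following; the data frame is a synthetic stand-in for your own people data.

```python
# A minimal sketch of the kind of exploratory data analysis that happens
# during custom model development. The frame stands in for your data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "tenure_years": rng.uniform(0, 20, 200),
    "salary":       rng.normal(80_000, 15_000, 200),
    "attrited":     rng.integers(0, 2, 200),
})
df.loc[df.sample(frac=0.1, random_state=0).index, "salary"] = np.nan

print(df.describe())        # distributions and ranges
print(df.isna().mean())     # missingness per column
print(df.corr())            # pairwise relationships
```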
At One Model, we spent more than five years building One AI to make it easier for all types of human resources professionals to build and deploy ethical custom models from their data, while ensuring model performance evaluation and model explainability. One AI includes robust, deep reporting functionality that provides clarity on which data was used to train models. It blends rich discovery with rapid creation and deployment. The result is the most transparent and ethical machine learning capability in any people analytics platform.
Nothing about One AI is hidden or unknowable. And that's why you can trust it.
Their Artificial Intelligence Still Needs Your Human Intelligence
Models are created to inform us of patterns in systems. The HR community intends to use models on problem spaces involving people moving through and performing within organizations. So HR pros should be able to learn a lot from predictive models.
But it is unwise to relinquish human intelligence to predictive models that are not understood.
The ultimate value of models (and all people analytics) is to make better, faster, more data-informed talent decisions at all levels of the organization. Machine learning is a powerful tool in that pursuit, but a tool on its own is not the solution.