Steam Powered Data Science for HR
We’re back with another installment of our One Model Difference series. On the heels of our One AI announcement, how could we not take this...
Organizations making AI investments should ask good questions about artificial intelligence before signing a contract. Below, we answer each of those questions for One AI.
John Sumser, one of the most insightful industry analysts in HR, recently wrote an article providing guidance on selecting machine learning/AI tools. That article, found HERE, can serve as a rubric for reviewing AI and predictive analytics tools for use in your people analytics practice or HR operations.
Much of our workday is filled with conversations about how the One Model tool fits into an organization's people analytics initiative. This is often a customer contact's first practical exposure to artificial intelligence (AI), so a significant amount of time is invested in explaining AI and the dangers of misusing it.
Our product, One AI, delivers a suite of easy-to-use predictive pipelines and data extensions, allowing organizations to build, understand, and predict workforce behaviors. Artificial intelligence, in its simplest form, is about automating a decision process. We class our predictive modeling engine as AI because it is built to automate the decisions a human data scientist usually makes when building and testing predictive models. In essence, we've built our own automated machine learning toolkit that rapidly discovers, builds, and tests many hundreds of potential data features, predictive models, and parameter tunings to ultimately select the best fit for the business objective at hand. Unlike other predictive applications in the market, One AI provides full transparency and configurability, which makes peer review possible: every predictive output is peer reviewable, not only at a given moment in time but for all time.
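One AI's internals are proprietary, but for readers who want a concrete picture of what automated model search means, here is a minimal sketch using scikit-learn; the candidate models, hyperparameters, and scoring metric are our assumptions for illustration, not One AI's actual stack:

```python
# Minimal sketch of automated model selection: try several candidate models
# and hyperparameter settings with cross-validation, then keep the best fit.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

pipeline = Pipeline([("scale", StandardScaler()),
                     ("model", LogisticRegression())])

# Candidate models and hyperparameters; the search evaluates every
# combination and selects the best performer for the objective.
param_grid = [
    {"model": [LogisticRegression(max_iter=1000)],
     "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier(random_state=0)],
     "model__n_estimators": [100, 300],
     "model__max_depth": [5, None]},
]

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="roc_auc")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```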
This post will follow a Q&A style as we comment on each of John’s 12 critical questions to ask an artificial intelligence company.
Ideally, all data available to One Model is fed to the machine learning engine: the more the better. You cannot overload One AI; it will wade through everything you throw at it, decide which data points are relevant and how much history to use, and then select, clean, and position that data as part of its process. This means you should feed in every system you have available, from the HRIS, ATS, survey, payroll, absence, and talent management systems: everything and the kitchen sink, as long as you are ethically okay with its potential use. This is not a one-size-fits-all algorithm; each model is unique to the customer, their data set, and their target problem.
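To make the "feed it everything and let it decide" idea concrete, here is a hypothetical sketch of merging several source systems and letting a model rank which data points are relevant; all column names and values are invented for illustration:

```python
# Hypothetical sketch: merge every available source on a shared key, then
# let a model rank candidate features and keep only the relevant ones.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

hris = pd.DataFrame({"employee_id": [1, 2, 3, 4],
                     "tenure_months": [6, 48, 23, 11],
                     "terminated": [1, 0, 0, 1]})
survey = pd.DataFrame({"employee_id": [1, 2, 3, 4],
                       "engagement": [2.1, 4.5, 3.8, 2.4]})
payroll = pd.DataFrame({"employee_id": [1, 2, 3, 4],
                        "pay_ratio_to_market": [0.85, 1.10, 1.00, 0.80]})

# HRIS, survey, payroll, and so on: everything joins on the employee key.
df = hris.merge(survey, on="employee_id").merge(payroll, on="employee_id")
X = df.drop(columns=["employee_id", "terminated"])
y = df["terminated"]

# Feature importances decide which data points survive into the model.
selector = SelectFromModel(RandomForestClassifier(random_state=0)).fit(X, y)
print(list(X.columns[selector.get_support()]))
```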
The content of the training data can also be user-defined. Users choose what types of data are brought into the modeling process and which variables, filters, or cuts are offered. If users want to specify how individual fields are treated, they can do so at any time, with the same kinds of levers they would have when building their own model externally.
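A hypothetical configuration like the one below illustrates the kinds of levers described above; every key name here is invented for illustration and is not One AI's actual API:

```python
# Invented configuration sketch: per-field treatment overrides of the kind
# a user might specify, alongside source and ethics-driven field filters.
model_config = {
    "target": "terminated_within_12_months",
    "include_sources": ["hris", "survey", "payroll"],  # data brought in
    "exclude_fields": ["gender", "ethnicity"],         # ethics-driven filter
    "field_overrides": {
        "tenure_months": {"treat_as": "numeric", "impute": "median"},
        "department":    {"treat_as": "categorical", "encode": "one_hot"},
        "last_rating":   {"treat_as": "ordinal"},
    },
    "history_window_months": 36,
}
```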
The scope of the data and the machine learning pipeline determine training time. The capacity to create models is intrinsically available in One AI, and training can take anywhere from 5 minutes to more than 20 hours.
For example, we automatically schedule the retraining of a turnover prediction model for a customer with 15,000 employees, and it completes in about 45 minutes.
Yes. Training data can be held static or refreshed every time the model is trained. One AI acts as a data science orchestration toolkit that automates the data refresh, training, build, and ongoing maintenance of the model. Models are typically scheduled to refresh on a regular basis, e.g., monthly.
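As a rough sketch of what a scheduled monthly refresh looks like in code, here is an assumed orchestration job using APScheduler; the function body and schedule are placeholders, not One AI's implementation:

```python
# Assumed orchestration sketch: retrain on the 1st of each month at 02:00.
from datetime import datetime
from apscheduler.schedulers.blocking import BlockingScheduler

def retrain_model():
    # 1. Pull the latest data snapshot (or reuse a frozen one if the
    #    training set is configured to be held static).
    # 2. Re-run the automated pipeline and evaluate the new model.
    # 3. Time-stamp and log the run's reports for later review.
    print(f"{datetime.now():%Y-%m-%d %H:%M} retraining started")

scheduler = BlockingScheduler()
scheduler.add_job(retrain_model, "cron", day=1, hour=2)
scheduler.start()
```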
With every run, extensive reports are created, time-stamped, and logged, so users can always return to summary reports of what the data looked like, the decisions that were made, and the model's performance at any given time.
One AI models and pipelines are completely persisted. They can be turned on and off with no loss of data or logic. We are a data science orchestration toolset for building and managing predictive models at scale.
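By way of analogy, persisting a fitted model amounts to serializing it so it can be restored later without loss of logic; this sketch uses joblib as an assumed mechanism (One AI's own storage is internal to the platform):

```python
# Persistence analogy: a fitted model is serialized to disk and can be
# restored later, exactly as it was, after being switched off and on again.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

joblib.dump(model, "attrition_model.joblib")      # persist
restored = joblib.load("attrition_model.joblib")  # restore with no loss
```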
Yes, customers own the results from their predictive models, and those results are easily downloaded. Results and models are based upon your organization's data. One Model customers see only their own results, and those results are never combined with other data for any purpose. Every decision the machine made in selecting a model is shown and could be used to recreate the model externally.
Predictive modeling, along with all other features of our One AI product, is included within the One Model suite subscription fee.
Each predictive model and its results are fully transparent. Once a One AI run is finished, two reports are generated for review: a Results Summary and an Exploratory Data Analysis report.
Models are typically scheduled to be retrained every month as new data is received. Each new model can be compared to the previous one using the output reports generated. Models are expected to degrade over time, so they should be replaced regularly with better-performing models that incorporate recent data. Doing this by hand is a huge burden on a human team, hence the need for data science orchestration that automates the manual process and takes data science delivery to scale.
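As an illustration of comparing a newly trained model against the incumbent before promoting it, here is a hedged sketch; the AUC metric, holdout set, and promotion margin are assumptions made for the example:

```python
# Illustrative promotion gate: replace the production model only if the
# fresh retrain beats it on recent holdout data by a chosen margin.
from sklearn.metrics import roc_auc_score

def should_promote(new_model, current_model, X_holdout, y_holdout,
                   margin=0.01):
    """Promote the new model only if it beats the incumbent on recent data."""
    new_auc = roc_auc_score(y_holdout,
                            new_model.predict_proba(X_holdout)[:, 1])
    old_auc = roc_auc_score(y_holdout,
                            current_model.predict_proba(X_holdout)[:, 1])
    print(f"new AUC={new_auc:.3f}, current AUC={old_auc:.3f}")
    return new_auc >= old_auc + margin
```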
One Model's customers are trained on all aspects of our people analytics tool. Training is offered so non-data scientists can interpret the Results Summary and Exploratory Data Analysis reports and feel comfortable deploying models. A named One Model Customer Service Manager is available to provide aid and guidance if needed.
One AI is built with change in mind. If the data changes in a way that breaks the model, or the model drifts enough that a retrain is necessary, users can restart the automated machine learning pipelines to bring in new data and build a new model, which can then be compared to the previous one. One AI also allows work to occur on a draft version of a model while the active model runs in production.
The Results Summary and Exploratory Data Analysis charts provide extensive model performance and diagnostic data. Real-world results can be used to assess a model's performance by overlaying predictions with outcomes within the One Model application, which is also typically how results are distributed to users through the main analytics visualization toolsets.
When comparing actual results against predictions, One Model cautions users to watch for underlying data changes or company behaviors that skew results. For example, an attrition model may flag an employee as at risk because they are under-trained. If that employee is then trained and chooses to remain with the organization, the model may well have been correct, but because the underlying data changed, the prediction and the outcome cannot be compared directly: the employee's risk score today would be lower than it was several months ago, before the training. The decision to provide additional training may itself have been the organization's response to the attrition risk, and mitigation actions like these must be captured to inform the model that they have taken place. In practice, the Results Summary and Exploratory Data Analysis reports build enough trust in cross-validation that system performance questions rarely become an issue.
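To illustrate the overlay of predictions with outcomes described above, here is a small sketch; the data, column names, and risk threshold are invented for the example:

```python
# Sketch of overlaying earlier risk predictions with observed outcomes.
import pandas as pd

scores = pd.DataFrame({"employee_id": [1, 2, 3, 4, 5],
                       "risk_score": [0.9, 0.2, 0.75, 0.4, 0.8]})
exits = pd.DataFrame({"employee_id": [1, 3], "terminated": [1, 1]})

joined = scores.merge(exits, on="employee_id", how="left")
joined["terminated"] = joined["terminated"].fillna(0).astype(int)

# How often were high-risk flags followed by an actual exit? A low hit rate
# is not automatically a model failure: mitigation actions (like the
# training example above) change the underlying data after the prediction.
high_risk = joined[joined["risk_score"] >= 0.7]
print("High-risk exit rate:", high_risk["terminated"].mean())
```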
One AI provides the tooling to create models, along with reports for model explanation and interpretation of results. All models and results are based exclusively on a customer's own data. The customer reviews the model's results and chooses whether to deploy them and how to use them within the organization. We provide transparency into our modeling, with explanations that give confidence and knowledge of what the machine is doing, rather than asking you to trust that a black-box algorithm is working (or not). This is different from other vendors, who may deliver inflexible canned models trained on data other than the customer's, or models that cannot adapt to the customer's unique data set relevant to the problem. I would be skeptical of any algorithm that cannot be explained or whose performance cannot be tracked over time.
Each One Model customer decides which specific models will be run for them and how to apply One AI. These predictive models typically include attrition risk, time to fill, promotability, and headcount forecasting. Customers own every model and every result generated within their One Model tool.
One AI empowers our customers to combine the appropriate science with a strong awareness of their business needs. Our most productive One AI users ask the tool critical business questions, understand the relevant data ethics, and provide appropriate guidance to their organization.
If you would like to learn more about One AI, and how it can address your specific people analytics needs, schedule some time with a team member below.