Featured
5 min read
Joe Grohovsky
In a recent editorial (here), Emerging Intelligence columnist John Sumser explains how pending EU Artificial Intelligence (AI) regulations will impact the technology's global use. A summary of those regulations can be found here. You and your organization should take an interest in these developments, because yes, there are HR legal concerns over AI. The moral and ethical concerns associated with the application of AI are something we must all understand in the coming years. Ignorance of AI capabilities and ramifications can no longer be an excuse.

Sumser explains how this new legislation will add obligations and restrictions beyond existing GDPR requirements, and that there is legislation applicable to human resource machine learning. The expectation is that legal oversight will arise that may expose People Analytics users and their vendors to liability. These regulations may bode poorly for People Analytics providers. It is worth your while to review what is being drafted related to machine learning and the law, as well as how your current vendor addresses the three primary topics from these regulations:

Fairness – This addresses both the training data used in your predictive model and the model itself. Potential bias toward attributes like gender or race may be obvious, but hidden bias often exists. Your vendor should identify biased data and allow you to either remove it or debias it.

Transparency – All activity related to your predictive runs should be identifiable and auditable. This includes the selection and disclosure of data, the strength of the models developed, and the configurations used for data augmentation.

Individual control over their own data – This relationship ultimately exists between the worker and their employer. Sumser's article expertly summarizes a set of minimum expectations your employees deserve.

When it comes to HR law, our opinion is that vendors should have already self-adopted these types of standards, and we are delighted this issue is being raised.

What are the differences between regulations and standards? Become a more informed HR leader by watching our Masterclass Series.

Why One Model is Preferred When It Comes to Machine Learning and the Law

At One Model we are consistently examining the ethical issues associated with AI. One Model already meets and exceeds the Fairness and Transparency recommendations; not begrudgingly but happily, because it is the right thing to do. Where most competitors put your data into a proverbial AI black box, One Model opens its platform and allows full transparency and even modification of the AI algorithm your company uses. One Model has long understood HR law and how the industry has an obligation to develop rigor and understanding around data science and machine learning. The obvious need for regulation and a legal standard for ethics has risen with the amount of snake oil and obscurity being heavily marketed by some HR People Analytics vendors.

One Model's ongoing plan to empower your HR AI initiatives includes:

Radical transparency.
Full traceability and automated version control (data + model).
Transparent local and model-level justifications for the predictions that our machine learning component, One AI, makes.

By providing justifications and explanations for our decision-making process, One Model builds paths for user education and auditability for both simple and complex statistics.
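What might "full traceability and automated version control (data + model)" look like in practice? Here is a minimal illustrative sketch in Python (our own hypothetical example, not One Model's actual implementation): every predictive run is fingerprinted and logged so the data, model parameters, and configuration behind any prediction can be audited later.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(obj) -> str:
    """Stable SHA-256 hash of any JSON-serializable artifact (data, config, model params)."""
    payload = json.dumps(obj, sort_keys=True, default=str).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def log_model_run(training_rows, model_params, run_config, audit_log: list) -> dict:
    """Append a timestamped record of one predictive run to an audit trail."""
    record = {
        "run_at": datetime.now(timezone.utc).isoformat(),
        "data_version": fingerprint(training_rows),    # which data the model saw
        "model_version": fingerprint(model_params),    # which model produced results
        "config": run_config,                          # e.g., filters, augmentation settings
        "row_count": len(training_rows),
    }
    audit_log.append(record)
    return record

# Usage: every run leaves a record that can be compared against any later run.
audit_log = []
rows = [{"employee_id": 1, "tenure_years": 3.5, "terminated": False}]
params = {"model": "gradient_boosting", "max_depth": 3}
print(log_model_run(rows, params, {"filters": ["active_only"]}, audit_log))
```

Even a simple trail like this makes it possible to answer "what data and what model produced this prediction?" long after the run.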
Our objective has been to advance the HR landscape by upskilling analysts within their day-to-day jobs while still providing the latest cutting edge in statistics and machine learning. Providing clear and educational paths to statistics is at the forefront of our product design and roadmaps, and One Model is just getting started.

You should promptly schedule a review of the AI practices being conducted with your employee data. Ignoring what AI can offer risks putting your organization at a competitive disadvantage. Incorrectly deploying AI practices may expose you to legal risk, employee distrust, compromised ethics, and incorrect observations. One Model is glad to share our expertise around People Analytics AI with you and your team.

High-level information on our One AI capability can be found in the following brief video and documents:

https://bit.ly/OneModelPredictiveModeling
https://bit.ly/OneModel-AI
https://bit.ly/HR_MachineLearning

For a more detailed discussion, please schedule a convenient time for a personal conversation: http://bit.ly/OneModelMeeting
Featured
10 min read
Joe Grohovsky
John Sumser, one of the most insightful industry analysts in HR, recently wrote an article providing guidance on the selection of machine learning/AI tools. That article is found HERE, and it can serve as a rubric for reviewing AI and predictive analysis tools for use in your people analytics practice or HR operations.

Much of our work day is filled with conversations regarding the One Model tool and how it fits into an organization's People Analytics initiative. This is often the first practical exposure a customer contact has to Artificial Intelligence (AI), so a significant amount of time is invested in explaining AI and the dangers of misusing it.

Good Questions to Ask About Artificial Intelligence Solutions - And Our Answers!

Our product, One AI, delivers a suite of easy-to-use predictive pipelines and data extensions, allowing organizations to build, understand, and predict workforce behaviors. Artificial Intelligence in its simplest form is about automating a decision process. We class our predictive modeling engine as AI because it is built to automate the decisions usually made by a human data scientist in building and testing predictive models. In essence, we have built our own automated machine learning toolkit that rapidly discovers, builds, and tests many hundreds of potential data features, predictive models, and parameter tunings to ultimately select the best fit for the business objective at hand. Unlike other predictive applications in the market, One AI provides full transparency and configurability, which implicitly encompasses peer review. Every predictive output is peer reviewable, not only at a given moment in time but for all time.

This post will follow a Q&A style as we comment on each of John's 12 critical questions to ask an artificial intelligence company.

1) Tell me about the data used to train the algorithms and models.

Ideally, all data available to One Model is used to feed the machine learning engine - the more the better. You cannot overload One AI, because it will wade through everything you throw at it, decide which data points are relevant and how much history to use, and then select, clean, and position that data as part of its process. This means we should feed every available system into the engine - HRIS, ATS, survey, payroll, absence, talent management - everything and the kitchen sink, as long as we are ethically okay with its potential use. This is not a one-size-fits-all algorithm; each model is unique to the customer, their data set, and their target problem.

The content of training data can also be user-defined. Users define what type of data is brought into the modeling process, choosing which variables, filters, or cuts will be offered. If users want to specify how individual fields are treated, they can do so with the same types of levers they would have when creating their own model externally.

2) How long will it take for the system to be trained?

The scope of data and the machine learning pipeline determine training time. The capacity to create models is intrinsically available in One AI, and training can take anywhere from 5 minutes to 20+ hours. For example, we automatically schedule retraining of a turnover prediction model for a 15,000-employee customer, and it completes in about 45 minutes.

3) Can we make changes to our historical data?

Yes. Data can be held static, or the model can use fresh data every time it is trained; the sketch below illustrates the choice.
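To illustrate that static-versus-fresh choice, consider this hypothetical sketch (not One AI's actual code) of a retraining job whose training window can either be pinned to a snapshot or slide forward on every scheduled run:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional, Tuple

@dataclass
class RetrainConfig:
    window_days: int = 365 * 3           # how much history each training run sees
    pin_end_date: Optional[date] = None  # set to freeze the training snapshot

def training_window(cfg: RetrainConfig, today: date) -> Tuple[date, date]:
    """Return the (start, end) dates of the data used for this training run.

    With pin_end_date set, every retrain sees the same static snapshot;
    left unset, the window slides forward so each run trains on fresh data.
    """
    end = cfg.pin_end_date or today
    return (end - timedelta(days=cfg.window_days), end)

# Fresh data on every scheduled run:
print(training_window(RetrainConfig(), date(2021, 6, 1)))
# Historical data held static at a chosen snapshot:
print(training_window(RetrainConfig(pin_end_date=date(2020, 12, 31)), date(2021, 6, 1)))
```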
One AI acts as a data science orchestration toolkit that automates the data refresh, training, build, and ongoing maintenance of the model. Models are typically scheduled to refresh on a regular basis, e.g., monthly. With every run, extensive reports are created, time-stamped, and logged, so users can always return to summary reports of what the data looked like, the decisions made, and the performance of the model at any given time.

4) What happens when you turn it off? How much notice will we receive if you turn it off?

One AI models and pipelines are completely persisted. They can be turned on and off with no loss of data or logic. We are a data science orchestration toolset for building and managing predictive models at scale.

Is AI being offered in a solution for your HR team? Download our latest whitepaper to get the questions you should ask in the next sales pitch when someone is trying to sell you technology with AI.

5) Do we own what the machine learned from us? How do we take those data with us?

Yes, customers own the results from their predictive models, and those results are easily downloaded. Results and models are based upon your organization's data. One Model customers only see their own results, and these results are not combined with other data for any purpose. All the decisions the machine made to select a model are shown and could be used to recreate the model externally.

6) What is the total cost of ownership?

Predictive modeling, along with all features of our One AI product, is included within the One Model suite subscription fee.

7) How do we tell when the models and algorithms are "drifting"?

Each predictive model is generated with fully transparent results. Once a One AI run is finished, two reports are generated for review:

Results Summary – Details the model selected and its performance.
Exploratory Data Analysis – Details the state of the data the model was trained on, so users can determine whether present-state data has changed drastically.

Models are typically scheduled to be retrained every month with any new data received. New models can be compared to the previous model using the output reports generated. It is expected that models will degrade over time, and they should be replaced regularly with better-performing models incorporating recent data. This is a huge burden on a human team, hence the need for data science orchestration: automating the manual process and taking data science delivery to scale.

8) What sort of training comes with the service?

One Model's customers are trained on all aspects of our People Analytics tool. Training is offered so that non-data-scientists can interpret the Results Summary and Exploratory Data Analysis reports and feel comfortable deploying models. A named One Model Customer Service Manager is available to aid and provide guidance if needed.

9) What do we do when circumstances change?

One AI is built with change in mind. If the data changes in a way that breaks the model, or the model drifts enough that a retrain is necessary, users can restart the automated machine learning pipelines to bring in new data and create a new pipeline. The new model can be compared to the previous one. One AI also allows work to occur on a draft version of a model while the active model runs in production.

10) How do we monitor system performance?

The Results Summary and Exploratory Data Analysis charts provide extensive model performance and diagnostic data. One simple "has the data changed" check is sketched below.
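One common way to quantify drift is the population stability index. The hypothetical sketch below (illustrative only, not One AI's implementation) compares a feature's distribution at training time with its distribution today:

```python
import numpy as np

def population_stability_index(expected, actual, bins: int = 10) -> float:
    """PSI between a feature at training time (expected) and today (actual).
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drifted."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_pct = np.histogram(expected, cuts)[0] / len(expected)
    a_pct = np.histogram(np.clip(actual, cuts[0], cuts[-1]), cuts)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
tenure_at_training = rng.gamma(2.0, 2.0, 5000)  # tenure distribution when trained
tenure_today = rng.gamma(2.0, 2.6, 5000)        # tenure distribution now
print(f"PSI: {population_stability_index(tenure_at_training, tenure_today):.3f}")
```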
Actual real-world results can be used to assess the performance of the model by overlaying predictions with outcomes within the One Model application. This is also typically how results are distributed to users, through the main analytics visualization toolsets. When comparing actual results against predictions, One Model cautions users to be aware of underlying data changes or company behaviors skewing results. For example, an attrition model may identify risk due to an employee being under-trained. If that employee is then trained and chooses to remain with the organization, the model may have been correct, but because the underlying data changed, the results can't really be compared. In the case of this employee, their risk score today would be lower than their risk score from several months ago, prior to the training. The action to provide additional training may indeed have been the organization's response to the attrition risk, and actions like these that are specifically taken to address risk must also be captured, to inform the model that mitigation has taken place. The Results Summary and Exploratory Data Analysis reports typically build enough trust in cross-validation that system performance questions are not an issue.

11) What are your views on product liability?

One AI provides tooling to create models, along with the reports for model explanation and interpretation of results. All models and results are based exclusively on a customer's own data. The customer must review the model's results and choose whether to deploy them and how to use them within the organization. We provide transparency into our modeling, with explanations that give confidence and knowledge of what the machine is doing, rather than mere trust that a black-box algorithm is working (or not). This is different from other vendors, who may deliver inflexible canned models that were trained on data other than the customer's, or that cannot use a unique customer data set relevant to the problem. I would be skeptical of any algorithm that cannot be explained or whose performance cannot be tracked over time.

12) Get an inventory of every process in your system that uses machine intelligence.

Each One Model customer decides how specific models will be run for them and how to apply One AI. These predictive models typically include attrition risk, time to fill, promotability, and headcount forecast. Customers own every model and result generated within their One Model tool. One AI empowers our customers to combine the appropriate science with a strong awareness of their business needs. Our most productive One AI users ask the tool critical business questions, understand the relevant data ethics, and provide appropriate guidance to their organization.

If you would like to learn more about One AI and how it can address your specific people analytics needs, schedule some time with a team member below.
Featured
5 min read
Joe Grohovsky
"Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat." - Sun Tzu Complex people analytics (PA) projects risk losing sight of what is profoundly important as they endeavor to fulfill all aspirational requirements. Identifying and delivering business insight is their purpose, not simply fulfilling a stakeholder’s tactical wish list of presentation-layer features. However, far too often PA initiatives are launched with requirements dominated by this tactical wish list without a true appreciation for the value of the metrics contained within each report. The funding and focus involved clearly classify these initiatives as Strategic HR projects. Instead of blindly focusing on what presentation tactics will be used, consider first a strategy for building better insights. These strategic conversations should begin with the number (metric/measure) in question. This number is critical and is the cornerstone for all other discussions. This number should be accurate and meaningful. Everything else within a PA initiative is the tactical positioning of that strategic number. Accuracy Without accurate numbers, a reporting effort is wasted. Ask yourself these questions. Is the number derived from trusted, validated source data? Is the source data modeled specifically for your organization? Does your definition of that number align with what will be provided? These questions are more than simplistic check boxes. Consider Headcount, which is the most basic HR measure. Is it based on the Start of the Period, End of the Period, or Average Daily Headcount? Are retroactive changes accommodated? What will happen when introducing additional data sources such as Engagement or Performance? Are you forced to work with templated data and a rigid data model? Interested in learning how to create a stellar People Data Platform? Read our latest whitepaper to understand the steps your team needs to take to create an analytics-ready data platform that will give your team reliable, accurate information that can help propel your people analytics projects toward success. Meaningfulness Not all numbers are equal or valuable. When considering specific metrics, consider these questions. Is this number important on its own, or does it merely provide context? Is it actionable? Considering the above, an easy analogy would be the numbers a physician uses during a patient’s annual physical examination. Those numbers include things like age, height, weight, blood pressure, etc. Age and height are uncontrollable and immune to any action. However, these numbers still provide valuable context for other numbers. Weight and blood pressure would be considered actionable and the focal point for discussion. Once actionable numbers are identified, ask yourself “So What.” Will this insight drive any internal decisions? If not, it is best to focus elsewhere. These questions will determine meaningfulness. Presentation of Numbers After accurate and meaningful numbers are established, a conversation on presentation tactics can occur. Awareness of internal culture and data consumer preferences is critical in this step. Most PA initiatives serve a broad spectrum of data consumers that may involve: HR Business Partners Analysts Center of Excellence Data Scientists Line of Business Managers Self-Service capabilities Senior Executives Each group is best served by providing varying amounts of support, flexibility, and handholding. 
Common differences for each group include the decision to provide summary or detailed data, the amount of context provided, and the amount of supporting documentation required to establish metric validity.

Summary

It is understandable that PA professionals become fascinated with the whiz-bang features of presentation capabilities. Ease of data consumption is important, but it trails behind generating accurate, meaningful numbers. Storytelling your way through People Analytics without substance supporting you is risky. For examples of impactful HR projects, or information on how One Model approaches this topic, please connect with us.
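As promised above, here is a minimal pandas sketch (with hypothetical columns and dates) showing why "headcount" needs a definition: start-of-period, end-of-period, and average daily headcount give three different answers for the same month.

```python
import pandas as pd

# Hypothetical employment spans (a missing term_date means still active).
emps = pd.DataFrame({
    "employee_id": [1, 2, 3, 4],
    "hire_date": pd.to_datetime(["2020-11-02", "2021-01-15", "2021-03-10", "2021-03-25"]),
    "term_date": pd.to_datetime([None, "2021-03-20", None, None]),
})

def active_on(day: pd.Timestamp) -> int:
    """Count employees whose employment span covers the given day."""
    started = emps["hire_date"] <= day
    not_ended = emps["term_date"].isna() | (emps["term_date"] > day)
    return int((started & not_ended).sum())

days = pd.date_range("2021-03-01", "2021-03-31", freq="D")
sop = active_on(days[0])                            # start of period
eop = active_on(days[-1])                           # end of period
avg = sum(active_on(d) for d in days) / len(days)   # average daily
print(f"SOP={sop}, EOP={eop}, average daily={avg:.1f}")  # three different answers
```

With a mid-month termination and two mid-month hires, the three definitions disagree, which is exactly why the definition must be agreed before the number is reported.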
Featured
10 min read
Joe Grohovsky
During my daily discussions with One Model prospects and customers, two consistent themes emerge: a general lack of understanding of predictive modeling, and a delay in considering its use until basic reporting and analytical challenges are resolved. Both are understandable, and I can offer a suggestion to overcome them. My suggestion is based upon seeing successful One Model customers gain immediate insights from their data by leveraging the technology found in our One AI component. These insights include data relationships that can surface even before customers run their first predictive model.

Deeper insights before predictive modeling? How?

To begin, let's rethink what you may consider to be a natural progression for your company and your People Analytics team. For years we've been told a traditional People Analytics Maturity Continuum has a building-block approach that is something like this:

The general concept of the traditional People Analytics maturity model is that a specific step must be mastered before progressing forward. Supposedly, increased value is derived as each step, and its accompanying complexity, is mastered. While this may seem logical, it is largely inaccurate in the real world. The sad result is that many organizations languish in the early stages and never truly advance, leaving diminished ROI and frustrated stakeholders.

What should we be doing instead?

The short answer is to drive greater value immediately when your people analytics project launches. Properly built data models will immediately allow for basic reporting and advanced analytics, as well as predictive modeling. I'll share a brief explanation of two One Model deliverables to help you understand where I'm going with this.

People Data Cloud™️

Core workforce data is the first data source ingested by One Model into a customer's People Data Cloud. Although additional data sources will follow, our initial effort is focused on cleaning, validating, and modeling this core workforce data. This analytics-ready data is leveraged in the customer's People Data Cloud instance. Once that has occurred, storyboards are created, reflecting the customer's unique metrics for reporting and analytics. It is at this point that customers can, and should, begin leveraging One AI (read more about People Data Cloud).

Exploratory Data Analysis

One AI provides pre-built predictive models for customers. The capability also exists for customers to build their own bespoke models, but most begin with a pre-built model like Attrition Risk. These pre-built models explore a customer's People Data Cloud to identify and select relevant data elements from which to understand relationships and build a forecast. The results of this selection and ranking process are presented in an Exploratory Data Analysis (EDA) report.

What is exploratory data analysis, you ask? It is a report that provides immediate insights into, and understanding of, data relationships even before a model is ever deployed.

Consider the partial EDA report below reflecting an Attrition Risk model. We see that 85 different variables were considered. One AI's EDA will suggest an initial list of variables relevant to this specific model, and we see it includes expected categories such as Performance, Role, and Age. This first collection of variables does not include Commute Time. But is Commute Time a factor in your ideal Attrition Risk model? If so, what should the acceptable time threshold be? Is that threshold valid across all roles and locations?
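Questions like these are exactly what an EDA-style variable screen surfaces. As a simplified, hypothetical sketch of the idea (not One AI's actual algorithm), candidate variables can be ranked by how much signal they carry about termination, for example with mutual information:

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

# Hypothetical, analytics-ready workforce extract.
df = pd.DataFrame({
    "tenure_years":    [0.5, 1.2, 3.0, 4.5, 6.1, 7.8, 2.2, 0.9],
    "performance":     [2, 3, 4, 3, 5, 4, 2, 3],
    "commute_minutes": [70, 15, 25, 80, 10, 20, 65, 75],
    "terminated":      [1, 0, 0, 1, 0, 0, 1, 1],
})

X, y = df.drop(columns="terminated"), df["terminated"]
scores = mutual_info_classif(X, y, random_state=0)

# Rank candidate variables by how much signal they carry about the target.
ranking = pd.Series(scores, index=X.columns).sort_values(ascending=False)
print(ranking)  # commute_minutes may rank high even if no one thought to include it
```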
One AI allows each customer to monitor and select relevant data variables and understand how they impact the insights of their predictive model.

Changing the People Analytics Maturity Model into a Continuum

Now that we realize the initial Core Workforce People Data Cloud can generate results not only for reporting and analytics but also for predictive modeling, we can consider a People Analytics Maturity Continuum like this:

This model recognizes the fact that basic reporting and analytics can occur simultaneously once a proper data lake is presented. It also introduces the concepts of Monitoring your data and Understanding how it relates to your business needs. These are the first steps in predictive modeling and can occur without a forecast being generated. The truth underlying my point is: analytics professionals should first understand their data before building forecasts. Ignoring One AI Exploratory Data Analysis insights from this initial data set is a lost opportunity. This initial model can and should be enhanced with additional data sources as they become available, but there is significant value even without a predictive output.

The same modeled data that drives basic reports can drive Machine Learning. The greater value of One AI is providing a statistical layer, not simply a Machine Learning output layer. The EDA report is a rich trove of statistical correlations and insights that can be used to build data understanding, a monitoring culture, and the facilitation of qualitative questions.

But the value doesn't stop there. Integrated services that accompany One AI also provide value for all data consumers. These integrated services are reflected in storyboards and include:

Forecasting
Correlations
Line of Best Fit
Significance Testing
Anomaly Detection

These integrated services are used to ask questions about your data that are more valid than what can be derived solely from traditional metrics and dimensions. For example, storyboards can reflect data relationships so that even casual users can gain early insights. The scatterplot below is created with Core Workforce data and illustrates the relationship between Tenure and Salary. One AI's integrated services not only render this view but also caution that, based upon the data used, this result is unlikely to be statistically significant (refer to the comment under the chart title below). More detailed information is contained in the EDA report, but this summary provides the first step in Monitoring and Understanding this data relationship.

Perhaps one of the questions arising from this monitoring involves understanding existing gender differences. This is easily answered with a few mouse clicks: this view begins to provide potential insight into gender differences involving Tenure and Salary, though the results are still not statistically significant. Analysts are thus guided toward discovering the collection of insights contained within their data.

List reports can be used to reflect feature importance and directionality. In the table report above, both low and high Date of Birth values increase Attrition Risk. Does this mean younger and older workers are more likely to leave than middle-aged workers? Interesting relationships begin to appear, and One AI automatically reports on the strength of those relationships and correlations. Iterations will increase the strength of the forecast, especially when additional data sources can be added.
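As a rough illustration of the tenure-versus-salary caution described above (a hypothetical scipy sketch, not One AI's implementation), a correlation can be reported together with its significance so casual users are warned when an apparent trend may be noise:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
tenure = rng.uniform(0, 15, 40)                               # years of tenure
salary = 60_000 + 500 * tenure + rng.normal(0, 15_000, 40)    # weak, noisy relationship

r, p_value = pearsonr(tenure, salary)
print(f"correlation r={r:.2f}, p={p_value:.3f}")
if p_value > 0.05:
    # Mirror the on-chart caution shown under the chart title.
    print("Caution: this relationship is unlikely to be statistically significant.")
```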
Leveraging One AI's capability at project launch provides a higher initial ROI, an accelerated value curve, and better-informed data consumers. At One Model, you don’t need to be a data scientist to get started with predictive modeling. Contact One Model to learn more and see One AI in action. Customers - Would you like more info on EDA reports in One Model? Visit our product help site.
Featured
6 min read
Joe Grohovsky
As a result of my blogs and customer conversations, I receive a variety of interesting comments and feedback from my contacts in the People Analytics space. A common topic is that different stakeholder groups within a People Analytics project have vastly different ideas as to what is acceptable in a People Analytics tool. This often leads to disappointment, failed initiatives, and wasted budget.

One example offered is that LOB (Line of Business) and general HR professionals tend to be attracted to, and satisfied with, "Convenience Analytics." Convenience Analytics is a term referring to simplistic, easy-to-digest metrics or reports. They are typically generated without much effort, often by the source system, but are limited in breadth, depth, and growth possibilities. The appeal of Convenience Analytics may be their low cost of entry and their non-threatening nature to the decision makers who use them, but they are extremely inflexible. Significant challenges occur when Convenience Analytics are deployed to an organization expecting deep insights, growth of use cases, or the addition of new data sources. The People Analytics and HRIS (Human Resource Information System) professionals supporting these Convenience Analytics projects ultimately suffer from a lack of long-term data quality and of the capability to drive future insight that is uniquely strategic to their organization rather than a pre-canned report.

One Model recognizes that a properly constructed People Analytics infrastructure has a system-agnostic HR Data Strategy, and this has driven our industry-leading Data Orchestration capabilities. Data Orchestration is a process that takes siloed data from multiple locations, combines it, and makes it available for data analysis. One Model breaks Data Orchestration into four activities/phases:

Data Ingestion – The process of extracting data from source systems and delivering it into One Model. We take a flexible approach and accommodate strategies ranging from API extraction, to file-based transfer over SFTP, to manual uploading of data through the One Model interface.

Data Modeling – After data is ingested, it is combined into a single, interconnected data model that supports a broad range of analytics. Taken together with the ingestion phase, these activities constitute ETL (extract, transform, and load). The result is what is recognized as a fact-and-dimension star schema style of data model (see the sketch below).

Data Quality – This phase is driven by rules and logic, which surface quality issues in the source data. These issues are captured and resolved during this time.

Data Destinations – The scheduling of data exports out of the One Model system and their delivery to SFTP sites, Amazon S3 buckets, and/or other destinations. This reflects the vision of our company: not to be the ultimate destination for your data, but a data asset existing amid your analytics infrastructure, feeding downstream systems and tools.

Data Accessibility is a noteworthy benefit of One Model's Data Orchestration process. A customer is not restricted to accessing their data only through our query engine. Access is also provided to your orchestrated data directly in the data warehouse hosted on AWS. This allows the use of your own tools, such as Tableau, Looker, or Qlik, for presentation purposes. Additional benefits include being able to run your own integrations or internal application development against a clean, comprehensive data set to solve challenges specific to your organization.
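As a minimal illustration of the fact-and-dimension star schema mentioned in the Data Modeling phase (a deliberately tiny, hypothetical example in pandas; One Model's real model is far richer), a central fact table of workforce events joins to dimension tables to answer metric questions:

```python
import pandas as pd

# Dimension tables: descriptive attributes, one row per member.
dim_employee = pd.DataFrame({
    "employee_key": [1, 2],
    "gender": ["F", "M"],
    "location": ["Austin", "Brisbane"],
})
dim_event_type = pd.DataFrame({
    "event_type_key": [10, 11],
    "event_type": ["Hire", "Termination"],
})

# Fact table: one row per workforce event, keyed to the dimensions.
fact_events = pd.DataFrame({
    "employee_key": [1, 2, 2],
    "event_type_key": [10, 10, 11],
    "effective_date": pd.to_datetime(["2020-02-03", "2020-05-11", "2021-01-29"]),
})

# A metric is a measure on the fact table cut by dimension attributes:
# here, terminations by location.
report = (fact_events
          .merge(dim_event_type, on="event_type_key")
          .merge(dim_employee, on="employee_key")
          .query("event_type == 'Termination'")
          .groupby("location").size())
print(report)
```

The same three tables answer "hires by gender," "terminations by location," and any other cut, which is exactly what makes the star schema suited to a broad range of analytics.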
Let us look at two of the most popular HRIS systems and some of the data orchestration advantages One Model offers.

Workday – Workday uses point-in-time (snapshot) based reporting. Snapshot reporting is recognized as limited and brittle in accommodating backdated changes and other HR analytics scenarios. External data is difficult to connect, and pulling and maintaining snapshots from Workday is a pain. One Model avoids the issues of snapshot reporting by rebuilding a data schema that is effective-dated and transactional in nature (a sketch of querying effective-dated data follows below). The result is a dataset perfect for delivering accurate, flexible reporting and analytics. We support both full and incremental refreshes of data from Workday.

SAP SuccessFactors – One Model has pre-built data processing logic that transforms data from the various SuccessFactors objects into a well-organized, effective-dated structure supporting a wide range of analytic use cases. The SAP SuccessFactors API allows us to identify customizations in your SuccessFactors configuration, and our data model readily supports the inclusion of those custom fields. We support both full and incremental refreshes of data from the SuccessFactors API.

One Model has perfected data orchestration so well that we are often included in searches for integration partners. Our tailored solution enables the accurate transfer of large files of complex data from existing tools, such as an ATS, into new replacement tools. This creates tremendous possibilities for efficiency in migrations and the adoption of new technology.

If you are interested in receiving full value from your People Analytics investment, please click here to reach out to One Model and schedule an in-depth discussion.

Listed below are links to various articles that provide further insight into this topic:

The End of Snapshot Reporting for People Analytics
The Need to Build Structural Views of SAP SuccessFactors Data
People Analytics for SAP SuccessFactors
Using People Analytics to Support System Migration

About One Model

One Model delivers a comprehensive people analytics platform to business and HR leaders that integrates data from any HR technology solution with financial and operational data to deliver metrics, storyboard visuals, and advanced analytics through a proprietary AI and machine learning model builder. People data presents unique and complex challenges which the One Model platform simplifies to enable faster, better, evidence-based workforce decisions. Learn more at www.onemodel.co.
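A postscript for the technically curious: the hypothetical pandas sketch below (illustrative only, not One Model's pipeline) shows why effective-dated data beats snapshots. Any historical state can be reconstructed on demand, and a backdated correction is just an updated row rather than a rebuilt snapshot.

```python
import pandas as pd

# Effective-dated job records: each row is valid from effective_date until end_date.
jobs = pd.DataFrame({
    "employee_id":    [1, 1, 2],
    "department":     ["Sales", "Marketing", "Sales"],
    "effective_date": pd.to_datetime(["2019-01-01", "2020-07-01", "2019-06-01"]),
    "end_date":       pd.to_datetime(["2020-06-30", None, None]),  # None = current
})

def as_of(records: pd.DataFrame, day: str) -> pd.DataFrame:
    """Reconstruct the workforce state on any historical day, no snapshots needed."""
    d = pd.Timestamp(day)
    valid = (records["effective_date"] <= d) & (
        records["end_date"].isna() | (records["end_date"] >= d))
    return records[valid]

print(as_of(jobs, "2020-01-15"))  # employee 1 still in Sales
print(as_of(jobs, "2021-01-15"))  # employee 1 now in Marketing
```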
Featured
11 min read
Joe Grohovsky
Most of my One Model work involves chatting with People Analytics professionals, discussing how our technology enables them to perform their roles more effectively. One Model is widely acknowledged for our superior ability to orchestrate and present customers' people metrics, as well as for leveraging Artificial Intelligence/Machine Learning for predictive modeling. My customer interactions always result in excited conversations around our data ingestion and modeling, and how a customer can leverage the flexibility of our many presentation options. However, when it comes to further exploring the benefits of Artificial Intelligence, enthusiasm levels often diminish, and customers become hesitant to explore how this valuable technology can immediately benefit their organization.

One Model customer contacts tend to be HR professionals. My sense is they view Artificial Intelligence/Machine Learning as very cool, but aspirational for both them and their organization. This is highlighted during implementations as we plan their launch and roll-out timelines; the use of predictive models is typically pushed out to later phases. The result is delayed adoption of an extraordinarily valuable tool.

Machine Learning is a subset of Artificial Intelligence and is the ability of algorithms to discern patterns within data sets. It elevates decision-support functions to an advanced level and, as such, can provide previously unrecognized insights. When used with employee data there is understandable sensitivity, because people's lives and careers risk being affected. HR professionals can successfully use Machine Learning to address a variety of topics that impact an array of areas throughout their company. Examples include:

Attrition Risk – impact at the organizational level
Promotability – impact at the employee level
Candidate Matching – impact outside the organization
Exploratory Data Analysis – quickly building robust understandings of any dataset or problem

With this basic understanding, let us explore three possible reasons why the deployment of Machine Learning is delayed, and how One Model works to increase a customer's comfort level and accelerate its usage.

#1: Machine Learning is undervalued

For many of us, change is hard. There are plenty of stories in business, sports, or government illustrating a refusal to use decision-support methods to rise above gut-instinct judgments. The reluctance or inability to use fact-based evidence to sway an opinion makes this the toughest category to overcome.

#2: Machine Learning is misunderstood

For many of us, numbers and math are frightening. Typically, relating possibility and probability to a prediction does not go beyond guessing at the weather for this weekend's picnic. Traditional metrics such as employee turnover or gender mix are simple and comfortable. Grasping how dozens of data elements from thousands of employees can interact to lead or mislead a prediction is an unfamiliar experience that many HR professionals would prefer to avoid.

#3: Machine Learning is intimidating

This may be the most prevalent reason, albeit subliminal. Admitting a weakness to colleagues, your boss, or even yourself is not easily done. Intimidation may arise from several sources. The first is the general lack of understanding referenced earlier, accompanied by a fear of liability due to data bias or unsupported conclusions.
Organizations with data scientists on staff may also pressure HR to transfer responsibility for People Analytics predictions to those scientists, to be handled internally with Python or R. This sort of internal project never ends well for HR; it is a buy/build situation akin to IT departments wanting to build their own People Analytics data warehouse with a BI front-end. Interestingly, when a customer's data science team is exposed to One Model's Machine Learning capabilities, they usually become some of our biggest advocates.

During my customer conversations, I avoid dwelling on this reluctance and simply explain how One Model's One AI component intrinsically addresses Machine Learning within our value proposition. Customers do not need familiarity with predictive modeling to enjoy these benefits. Additionally, I explain how One AI protects our customers by providing complete transparency into how training data is selected, how results are generated, and how models make decisions; by validating the strength of the resulting predictions; and by offering thorough flexibility to modify every data run to fit each customer's own data ethics. This transparency and flexibility provide protection against data bias and generally bad data science. Customers simply apply an understanding of their business requirements to One AI's predictions and adjust if necessary.

Below is a brief explanation of a few relevant components of One Model's Machine Learning strategy and the benefits they provide.

Selection of Training Data

After a prediction objective is defined, the next step is to identify and collect the relevant data points that will be used to teach One AI how to predict future or unseen data points. This can be performed manually, automatically, or as a combination of both. One AI offers automatic feature selection, using algorithms to decide which features are statistically significant and worth training upon. This shrinks the data set and reduces noise.

The context of fairness is critical, and it is at this point that One AI starts to measure and report on data bias. One measurement of group fairness that One AI supports is Disparate Impact. Disparate Impact refers to practices that adversely affect one group of people with a protected characteristic more than another, even if the policies involved are formally neutral. Disparate Impact is a simple measure of group fairness; it does not consider sample sizes, instead focusing purely on outcomes. These properties work well for preventing bias from getting into Machine Learning, and it is ethically imperative to measure, report, and prevent such bias. Disparate Impact reporting is integrated into One AI, along with methods to address identified bias, and One AI allows users to measure group fairness in many ways and on many characteristics at once, making it easy to make informed, ethical decisions. A sketch of the calculation follows below.

Promotability predictions serve as an example. If an organization's historical promotion data is collected for training purposes, the data set may reflect a bias toward Caucasian males who graduated from certain universities. Potential bias toward gender and race may be obvious, but there may also be a hidden bias toward those certain universities, or away from other universities that typically serve different genders or races. An example of how hidden bias affected Amazon can be found here.
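Disparate Impact lends itself to a very simple calculation. The hypothetical sketch below (illustrative only, using the common four-fifths rule of thumb, not One AI's implementation) compares promotion rates between groups:

```python
import pandas as pd

# Hypothetical historical promotion outcomes.
df = pd.DataFrame({
    "gender":   ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"],
    "promoted": [1,   1,   0,   1,   0,   0,   1,   0,   0,   0],
})

rates = df.groupby("gender")["promoted"].mean()
# Disparate impact ratio: unprivileged group's rate over privileged group's rate.
di_ratio = rates["F"] / rates["M"]
print(f"promotion rates:\n{rates}\ndisparate impact ratio: {di_ratio:.2f}")

# Four-fifths rule of thumb: a ratio below 0.8 flags potential adverse impact.
if di_ratio < 0.8:
    print("Flag: potential adverse impact against group F")
```

Here 60% of one group is promoted versus 20% of the other, a ratio of 0.33, so training a promotability model on this history without intervention would teach it the same bias.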
One AI can identify bias and help users remove it from data using the latest research. It is important to One Model that our users not only be informed of bias but can also act upon these learnings.

Generation of Results

After a predictive model is run, One AI takes further steps to ensure the predictions are as meaningful as possible. It is important to note that One AI does all the heavy lifting; our customers need only provide oversight as it applies to their specific business. Any required modifications or changes are easily handled.

An example can be found in an Attrition Risk model. After running this model, our Exploratory Data Analysis (EDA) report provides an overview of all variables considered for the model and identifies which were accepted, which were rejected, and why. A common reason for rejection is a "cheating" variable: one with too close to a one-to-one relationship with the target. If "Severance Pay" is rejected as a cheating variable, we will likely agree, because logically anyone receiving a severance package would be leaving the company. However, if "Commute Time 60+" is rejected as a cheating variable, we may push back and decide to include it, because commuting over an hour is something the organization can control. It is an easy modification to override the original exclusion and re-run the model.

One Model customers who are more comfortable with predictive modeling may even choose to dive deeper into the model itself. A report on each predictive run shows which model type was used, dataset IDs, dimensionality reduction status, etc. One Model's flexibility allows a customer to change these with a mouse click should they want to explore different models. Please remember that this is not a requirement at all; it is simply a reflection of the transparency and flexibility available to customers preferring this level of involvement.

My favorite component of our results summary reporting is how One AI ranks the variables impacting the model. Feature Importance is listed in descending order of importance to the result. In our Attrition Risk model above, the results summary report would provide a prioritized list of items to be aware of in your attempt to reduce attrition.

Strength of Prediction

It is important to remember that Machine Learning generates predictions, not statements of fact. We must realize that sometimes the appropriate data is simply not available to generate meaningful predictions, and such models would not be trustworthy. Measuring and reporting the strength of predictions is a solid step in developing a data-driven culture. There are several ways to evaluate model performance; many are reflected in the graphic below. One Model automatically generates multiple variations to help provide a broad view and ensure that users have the data they feel comfortable evaluating.

Both "precision" and "recall" are measured and displayed. Precision measures the proportion of positive identifications (people who terminate in the future) the model correctly identified. Put another way: when the model said someone would terminate, how often was it correct? Recall reflects the proportion of actual positives (people who terminate in the future) that were correctly identified by the model. Put another way: of all the people who actually terminated, how many did the model correctly identify? Precision and recall are just two of the many metrics that One AI supports; a quick sketch of both follows.
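As a quick illustration of those two definitions (hypothetical numbers, not One AI output), precision and recall can be computed directly from predicted and actual terminations:

```python
from sklearn.metrics import precision_score, recall_score

# 1 = terminated, 0 = stayed (hypothetical outcomes for ten employees).
actual    = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
predicted = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]

# Precision: when the model said someone would terminate, how often was it right?
print("precision:", precision_score(actual, predicted))  # 3 of 4 flagged -> 0.75
# Recall: of all who actually terminated, how many did the model catch?
print("recall:", recall_score(actual, predicted))        # 3 of 4 leavers -> 0.75
```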
If you or your team are more familiar with another method for measuring performance, we most likely already support it. One Model is glad to work with your team in refining your algorithms to build strong predictive models and ensure you have the confidence to interpret the results.

Summary

Machine Learning and Data Science are extremely valuable tools that should be a welcome conversation topic and an important part of project roll-out plans. People Analytics professionals owe it to their companies to incorporate these tools into their decision-support capabilities, even if they do not have access to internal data scientists. Care should be taken to ensure all predictive models are transparent and free from bias, and that your analytics vendor can prove it.

Want to Learn More?

Contact One Model and learn how we can put leading-edge technology in your hands and accelerate your People Analytics initiatives.
Featured
10 min read
Joe Grohovsky
To help understand why some People Analytics professionals are more successful than others, I undertook a worldwide request for insight. I have long held the opinion that three basic core competencies are prevalent in successful People Analytics professionals, but to generate a complete profile I wanted accompanying information on their professional backgrounds, career aspirations, and the organizations that gave them their first People Analytics role. The core competencies referred to are:

Close familiarity with the organization's needs and culture
Strong people skills
An open mind

Ultimately my request would suggest the importance of two additional factors:

Familiarity with data and HR (context)
An identified focus (definition of success)

Respondent Profile

Most respondents were already working within HR when they assumed their role, though their specific task at the time is unknown. HR was, and continues to be, the organizational home for most People Analytics roles. Almost half indicated their first People Analytics role emerged gradually from a previous role rather than being specifically created. It was an even split between the first role being a team of one or not. Two-thirds had no specific career path in mind, and the same portion feel their career's next step will remain within HR. However, almost 100% envision People Analytics (PA) being part of their future career, in or out of HR. The greatest self-reported strengths attributed to receiving the People Analytics role were familiarity with data and HR, with technology and math skills also being significant.

Lessons Learned

If we score SUCCESS and EMBRACING RESULTS separately, there are three areas where lessons can be learned in building our profile:

Employee background
Availability of People Analytics resources
Identification of a specific business problem

These lessons are inter-related, but they raise two new questions that are not fully answerable from these results. We discuss them in our recommendations; the questions are:

Can core competencies overcome deficiencies in the ideal profile?
Can a People Analytics role that fails to influence an organization be considered a success?

Employee Background

No link could be identified between a specific background attribute and success. However, there is a definite link between respondents' backgrounds and having their results embraced. Those whose results were not embraced heavily attributed data familiarity as a strength but reported no strength in HR. Perhaps this was a contextual issue, pointing to a weakness in understanding what is important to the company, in the correct perspective on HR data, or in people skills (core competencies).

Availability of People Analytics Resources

Resource availability seemed to have no impact on success. Slightly more than half of successful respondents were given specific tools, but 40% of successful respondents were provided no team, budget, tools, or other resources. This seems to be another area suggesting the need for core competencies. An open mind may allow the focus to remain on the problem to be solved instead of viewing it from the perspective of an available solution to be used. People skills can empower a professional to leverage resources from other areas of the organization.

Identification of a Specific Business Problem

Unsuccessful roles usually lacked an identified business problem to address. Stated another way, there was no stated focus.
It is my sense that defining focus is the biggest improvement opportunity, both for organizations new to People Analytics and for those who have been practicing for a while. We have already drawn a link between an employee's background and results not being embraced. Almost none of those situations had a specific business problem to address, and neither were they considered successful. In addition to pre-identifying business problems, many organizations do find value in exploring data to uncover unknown areas for improvement (focus) and following the insights provided. Predictive modeling is a common example of this in People Analytics. In these circumstances, business value is found both in historic metrics such as turnover and in predictive metrics such as attrition risk.

Conclusions

If we construct a candidate profile of a successful People Analytics professional whose work was embraced, they would be working within HR and have a well-rounded familiarity with HR, data, technology, and math. Their employer provides a clear definition of success by defining a problem on which to focus. The core competencies they possess allow them to overcome any shortage of resources and to deftly convey their insights back to their organization in an effective, appreciable manner. It is important to note that these core competencies could exist within a single individual or be spread among a team.

In initiatives that were not embraced, there are several identifiable trouble spots to address. The most visible is the lack of focus/defined business problem. It is not uncommon to expect data to tell you where to focus, but perhaps this is a distinct skill set beyond the stated core competencies. Another concern is highlighted by unembraced initiatives involving People Analytics professionals who reported strength in data familiarity but none in HR. Core competencies may provide the people skills to appropriately share insights; however, the nuance of people data and the HR process seems to be lacking in this subset. This possibly points to the need for some HR functional context, or for guidance on conveying their message.

To summarize, the ingredients for a successful People Analytics professional producing results that will be embraced by the organization seem to be:

1) Presence of the stated core competencies:
Close familiarity with the organization's needs and culture
Strong people skills
An open mind
2) Familiarity with data and HR (context)
3) An identified focus (definition of success)

Recommendations

The lack of core competencies in an individual does not necessarily doom a People Analytics initiative, or that individual's participation in it. This situation can be overcome by using formal or informal teams to ensure each skill set is available. It is also advisable to ensure proper context is in place. This involves more than simply examining how the defined business problem is impacting the organization. The People Analytics professional(s) involved may not have a full awareness of the nuances and breadth of the HR function itself. Perhaps an "HR 101" course could be used to explain the relevance of Recruiting, Learning, Total Rewards, Performance, etc., and why those employee processes and data are unique. An alternative is ensuring an HR expert closely reviews all results before they are shared with the business.
Perhaps the most significant recommendation is having a definition of success: an identified business problem was a strong component of successful initiatives. There is also a place for exploring your data to find areas of improvement. Caution should be used here, and this is where strong people skills come into play: without a defined focus, the People Analytics professional will have found a problem that was previously unidentified. Calling attention to it and providing suggestions on its resolution can be interpreted as criticizing an organizational leader and telling them how to do their job.

The two questions raised, but unanswerable from the provided insights, were:

Can core competencies overcome deficiencies in the ideal profile?
Can a People Analytics role that fails to influence an organization be considered a success?

Core competencies are true skills and reflect an ability to get things done. This ability powers People Analytics professionals to find resource alternatives, ideal communication techniques, and relevant focus topics. It is my opinion that these competencies do a tremendous job of overcoming any inherent shortcomings in a defined role. We must not settle for simply being right but also strive to be effective.

People Analytics cannot be successful when results are unembraced by the organization. The goal of any decision-support role is to empower better decision making and provide our data consumers with relevant insights in a meaningful way. Effective People Analytics professionals base their insights on trustworthy data and irrefutable metrics. This is especially relevant with the burgeoning use of artificial intelligence and predictive modeling. People Analytics professionals would do well to remain skeptical of any predictive model that is not fully transparent, cannot be explained, or cannot be verified as free of hidden bias.

Insight Purpose & Process

My insight request took the form of a survey shared across social media and industry websites so that as broad an audience as possible could be captured. Participants responded from all global regions, and the intent was to create a snapshot in time reflecting the circumstances when they undertook their first People Analytics role. These circumstances were then compared with both their success in that role and whether their organization embraced their results. The quest was not driven by simple curiosity but by a desire to help identify a replicable profile. My work in the People Analytics technology space involves helping my customers succeed in their roles and build a practice embraced by their organization. The resulting profile will be shared with my customers and used to identify areas where I can help them improve.

Where are you in your People Analytics Career or Journey?

One Model can provide guidance around all of the above profile ingredients and create a path for you to establish yourself as a People Analytics leader as you move forward.

Step 1: One Model can help you define your organization's critical metrics and understand how to present them to various layers of decision makers.
Step 2: Our team of data engineers can solve your problem of HR data portability and quickly integrate all relevant customer data sources into one platform.
Step 3: Our Machine Learning/Artificial Intelligence platform will equip you with a suite of easy-to-use predictive pipelines and data extensions that allow your organization to build, understand, and predict workforce behaviors.
If you would like further information on this study, or to learn more about One Model, please reach out to me: Joe Grohovsky | joe.grohovsky@onemodel.co

About One Model:

One Model delivers a comprehensive people analytics platform to business and HR leaders that integrates data from any HR technology solution with financial and operational data to deliver metrics, storyboard visuals, and advanced analytics through a proprietary AI and machine learning model builder. People data presents unique and complex challenges which the One Model platform simplifies to enable faster, better, evidence-based workforce decisions. Learn more at www.onemodel.co.