8 min read
    Gina Calvert

Are you as intentional about measuring the value of your data infrastructure and models as you are about building them? In the video below, our Solutions Architect Phil Schrader shares what he recently revealed at the People Analytics Summit in Toronto: the importance of (and some strategies for) using analytics to evaluate the impact of your analytics investments. From leveraging machine learning to track improvements to thinking creatively about integrating predictive models into everyday workflows, you'll gain insights on how to apply analytics to your own analytics. Short on time? We've summarized his presentation for you below.

The Core Problem: Evaluating Data Investments

When we talk about people analytics, we often focus on the tools, processes, and models that drive better decisions. But what happens when we turn that lens inward—when we use analytics to assess the very work of analytics itself? The idea is simple: if we're investing in building data infrastructure and models, we should be just as intentional about measuring the value of those investments. Anyone leading a people analytics team knows the balancing act. On one side, there's the pressure to deliver quick insights, the kind that keeps operations running smoothly. On the other side, there's the longer-term need to build out robust data systems that support advanced analytics. Yet, as essential as these data initiatives are, we often struggle to quantify their value. How do we measure the ROI of building a data lake? How do we ensure that the data we're collecting today will pay off down the road?

Solution: Analytics About Analytics

Here's where we can take a different approach—by applying analytics to our own analytics. The falling cost of technical work in machine learning (ML) has opened up new possibilities, allowing us to embed these tools within our day-to-day operations. Instead of just using ML models for predictions, we can use them as a means to measure how good our data is and how effective our processes are. Essentially, we can start to think analytically about how we do analytics, especially when it comes to creating a predictive model that measures improvements over time.

A Concrete Metric: Precision, Recall, and the F1 Score

The foundation of this approach lies in the well-known metrics used to evaluate machine learning models: precision, recall, and the F1 score. In brief:

- Precision asks: When the model makes a prediction, how often is it correct?
- Recall asks: Out of all the events that should have been predicted, how many did the model actually identify?
- The F1 score strikes a balance between these two metrics, offering a single number that reflects how well your model performs overall.

By tracking this metric, we can gauge the quality of our data and see how incremental improvements—like adding new data sources—translate into better predictive power. This kind of measurement becomes crucial as we think about the future of machine learning and how it integrates into everyday operations.

Building Analytics for Growth

This method doesn't just give us a way to measure progress; it gives us a framework to demonstrate that progress in tangible terms. Start with the basics—core HR data like job titles, tenure, and compensation. As you layer in additional data points—learning metrics, performance reviews, engagement scores—you can observe how each new addition boosts your model's F1 score. It's a practical way to quantify the value of your data and justify continued investment.
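To make that concrete, here is a minimal sketch (using scikit-learn and made-up labels, not taken from the presentation) of how precision, recall, and the F1 score are computed for a hypothetical attrition model:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical outcomes: 1 = employee left, 0 = employee stayed
actual    = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
predicted = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

precision = precision_score(actual, predicted)  # of predicted leavers, how many actually left?
recall = recall_score(actual, predicted)        # of actual leavers, how many did the model catch?
f1 = f1_score(actual, predicted)                # harmonic mean of precision and recall

print(f"Precision: {precision:.2f}  Recall: {recall:.2f}  F1: {f1:.2f}")
```

Re-running a check like this each time a new data source is added is one way to put a number on whether the added data actually improved predictive power.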
The Changing Landscape: Embedding Predictive Models

Predictive modeling no longer needs to be a separate, resource-intensive project. As the tools become more accessible, we can embed this capability directly into our workflows. Think of it as using predictive models the way we use pivot tables—regularly, as a quick check to see how well our data is performing. This kind of embedded analytics allows us to experiment, iterate, and find creative ways to leverage machine learning without overcommitting resources. With AI continually reshaping business practices, this shift will allow teams to use predictive models in increasingly versatile ways, driving more efficient decision-making.

Beyond Traditional Metrics: Rethinking the Value of Data

By adopting this approach, we're able to ask—and answer—a critical question: How valuable is our data, really? If we can demonstrate that our data is increasingly effective at predicting key outcomes like employee turnover or high performance, we're no longer just talking about data quality in abstract terms. We're providing a concrete metric that resonates with stakeholders and gives us a way to collaborate more effectively, whether it's across HR functions or with external vendors whose data feeds into our models.

Looking Ahead: Embracing Innovation as Costs Fall

The future of AI and the workplace is advancing quickly, blurring the line between strategic and routine applications. What was once a complex, time-consuming effort will soon be something we do without a second thought. This shift requires a mindset change—being open to ideas that may seem wasteful or unconventional today but could become standard practice tomorrow. The key is to embrace this shift and look for new, innovative ways to use predictive analytics.

In summary, by taking an "analytics for analytics" approach, we gain more than just better models—we gain clarity on the value of our data investments. The ability to measure progress in predictive power isn't just a technical exercise; it's a strategic advantage that drives smarter decision-making across the board.

Not sure where to start? Download Key Questions to Ask When Selecting an AI-Powered HR Tool to get the answers you need. Download Your Buying Guide Now

    Read Article

    4 min read
    Richard Rosenow

The buzz around Artificial Intelligence (AI) in the workplace is growing louder by the day. As organizations worldwide attempt to harness this revolutionary technology, particularly in the realm of Human Resources (HR), a fundamental question arises: Is our workforce data truly ready for AI and Machine Learning (AI/ML)?

The Reality of Data Readiness for AI and ML

In our modern business environment, HR teams are making use of workforce data for a variety of purposes. Traditionally, these teams focused on extracting data for reporting in the form of monthly extracts or daily snapshots. This approach, while useful for traditional needs, falls short of the data needs for AI and ML. That's because data preparation for AI isn't just about collecting and storing data to review later; it's about curating data in the right way to effectively train sophisticated models. AI tools today are highly complex and capable of predicting patterns with remarkable accuracy. However, vast amounts of high-quality, curated data are required to effectively train those models. The quality and relevance of the data are critical for the fine-tuning needed for specific tasks or domains like our use cases in HR.

The Need for a Paradigm Shift

From this perspective, most HR datasets and HR data stores that we had previously prepared are not ready for AI and ML (whether it's generative AI or "traditional" predictive AI). Without appropriately prepared training data, the algorithms we hope to launch will fall short in their learning. Potential benefits of AI in HR—from recruitment optimization to workforce alignment with business goals—could remain untapped or, worse, lead to unintended consequences if models are trained on poor or incorrect data. Preparing your HR team for this new phase of work isn't just about adopting new technologies; it's a paradigm shift in how we think about and handle data. This is even more pivotal in the areas of MLOps and LLM operations when we try to deploy these models at scale in a repeatable fashion. We're going to hear more about these terms and the operational needs of machine learning in the near future, and it's HR's responsibility to stay on top of the nuances in this space.

The First Step: Preparing and Unlocking Your Data

Data extraction is one of the most essential parts of preparing for AI and ML. We address the foundational importance of this step, robust data preparation and management, in our blog post 5 Tips for Getting Data Extraction Right. It explores these 5 action steps in greater detail:

- Prioritize and align extracted data with the needs of the business
- Be thoughtful about what you extract
- Build the business case to pull more
- Automate your extractions
- Extract for data science, not just reporting

The paradigm shift and these tips can help HR teams more effectively and efficiently adopt AI practices that will drive business value and insights.

Why One Model Stands Out in People Analytics AI

The final key in preparing for AI and ML is having the right technology in place to build a fine-tuned model that meets your company's unique needs. One of the main reasons I joined the One Model team stems from their foresight and commitment in this area. Due to that investment, we're now the only people analytics vendor with a machine learning platform that runs on a data model tailored to your firm, rather than offering last-minute AI features. This distinction is vital.
And "One Model" isn’t merely about preparing data for AI models; it’s an end-to-end platform encompassing data management, storytelling, model creation, evaluation, deployment, and crucially, audit-ready and transparent tools. Our platform empowers HR teams to manage and deploy customized ML models and MLOps effectively, beyond the traditional scope of data engineering teams. The dialogue around AI, ML, and MLOps in HR is already in full swing. Staying informed and engaged in this conversation is crucial. If you wish to delve deeper or discuss strategies and insights in this space, I, along with the One Model team, am more than willing to engage. We're keen to hear how your team is navigating the intricate landscape of MLOps in HR. Essential Questions to Ask When Selecting an AI-Powered HR Tool Learn the right questions to ask to make the right decisions as you explore incorporating AI in HR.

    Read Article

    7 min read
    Steve Hall

Reporting specialists and data analysts are often required to predict the future for stakeholder groups. They do this through a variety of models, including forecasting and annualization. Although both methodologies aim to predict future values, their applications and the mathematical logic behind them vary significantly, catering to different business needs.

What is Annualization and Its Significance?

Annualization is a mainstay for finance and accounting, but there are situations where it may be useful in HR contexts. It can be used to estimate year-end values for turnover rates, total new hires, and job openings filled, based on current data. Annualization works well when:

- There is little volatility in the metric across time periods
- There is little seasonality in the metric
- The metric is not likely to trend upward or downward during the course of the year

Simplify Annualization with One Model

One Model streamlines the computation of annualized metrics. By selecting the "Year to Date" option and "Annualize" in "Time Functions," the system will only consider the current year's data, offering a clear example of annualization at work.

The Case for Forecasting

Forecasting provides several benefits over annualization. While annualization typically only considers data points from the current year, forecasting can:

- Utilize data from a much wider time frame and range of data points
- Factor in seasonal fluctuations and trends
- Provide a more nuanced view of potential future states with confidence intervals, which is especially valuable for HR metrics that exhibit variability (e.g., number of hires, number of terminations, and termination rates)

Simplify Forecasting With One Model

One Model simplifies forecasting with its Embedded Insights feature. Just create a time-series line graph for your metric and use the feature to extend your forecast to the year's end. Increasing the number of data points, by adjusting the time metric from monthly to weekly or daily, for instance, can enhance forecast accuracy by capturing shorter-term cycles that may be present in the data. Including data from at least 30 data points will improve the accuracy of your forecasts, and if annual seasonality is present, including data covering two or more years will also improve accuracy. You can adjust forecast parameters to align the final forecast period with the year-end. After running the forecast, simply click on the last data point in the visualization to see the forecasted value and its confidence interval. For more complex situations where the current year data pattern is expected to shift relative to last year's pattern, One AI can be used to create a predictive model that incorporates additional internal and external features to improve accuracy.

Making the Choice: Annualizing or Forecasting?

Annualization and forecasting each have their strengths and weaknesses. Deciding between them depends on your data and your stakeholders' needs. Sometimes a rough approximation is good enough; other times, a precise estimate or a range of values (e.g., a confidence interval) will be required.
Annualization vs. Forecasting

- Annualization only considers data from the current year; forecasting can leverage data from multiple years.
- Annualization only needs a single month of data to start the estimation process; One Model needs at least 4 data points to produce a forecast, and forecast accuracy suffers with so few data points unless the metric progresses in a very linear fashion.
- Annualization does not adjust for seasonality or trend; forecasting accounts for trends and seasonality.
- Annualization is a very simple approach requiring little input regarding computations and is easy to understand; forecasting is a more sophisticated approach that may prompt questions from end users (luckily, One Model provides embedded information describing the forecast).
- Annualized estimates made early in the year are likely to be inaccurate; forecast estimates made early in the year are likely more accurate, especially when data from the prior year are utilized.
- Annualization will always underestimate or overestimate if a trend is present; forecasting can produce more accurate results even when a trend is present.

Alternatives and Strategic Adjustments

Alternatives like the 12-month rolling average provide another strategy for estimating year-end values, accommodating changes anticipated over the year. Depending on the metric being forecasted, it may also be reasonable to manually adjust the year-end value based on expert analysis or general projections for the year. For instance, to predict next year's annual turnover, start with the current end-of-year rate and refine it using projections from external experts or by considering the expected effects of internal measures aimed at reducing turnover.

One Model Simplifies Forecasting and Annualization

You might encounter scenarios where estimating year-end values for a metric is necessary. Although predicting these values with absolute precision is challenging, One Model can generate reasonable estimates, bearing in mind that sudden changes mid-year could significantly affect forecast accuracy. In practice, forecasting, particularly with One Model's Embedded Insights, tends to be more effective than annualization, especially at the start of the year. However, the accuracy of forecasting is impacted by decisions related to data inclusion and model parameters. Forecasting may also require a bit more effort to maintain, albeit minimal. Fortunately, One Model simplifies the use of both annualization and forecasting. In fact, using both methods to create estimates can be practical. When the results are close, opting for the annualized figure might be preferable for its simplicity. If results differ, the underlying data should be evaluated and the method that best aligns with the data's characteristics should be used. One Model has you covered regardless of the situation you face and the approach you prefer or choose.
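For readers who want to see the mechanical difference, here is a minimal, generic sketch (hypothetical numbers, plain pandas and numpy; an illustration of the two ideas, not One Model's implementation):

```python
import numpy as np
import pandas as pd

# Hypothetical monthly termination counts for January through June
ytd = pd.Series([12, 9, 11, 14, 10, 13],
                index=pd.period_range("2024-01", periods=6, freq="M"))

# Annualization: scale the year-to-date total up to a full 12 months.
annualized_total = ytd.sum() / len(ytd) * 12

# A simple trend-based forecast: fit a line to the observed months and
# project the remaining months, then add them to the year-to-date total.
months = np.arange(len(ytd))
slope, intercept = np.polyfit(months, ytd.values, 1)
projected_remaining = [slope * m + intercept for m in range(len(ytd), 12)]
forecast_total = ytd.sum() + sum(projected_remaining)

print(f"Annualized year-end estimate: {annualized_total:.0f}")
print(f"Trend-forecast year-end estimate: {forecast_total:.0f}")
```

A real forecast (like Embedded Insights) also handles seasonality and produces confidence intervals; even this simple sketch shows why the two estimates diverge when a trend is present.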

    Read Article

    5 min read
    Joe Grohovsky

In a recent editorial (here), Emerging Intelligence Columnist John Sumser explains how pending EU Artificial Intelligence (AI) regulations will impact its global use. A summary of those regulations can be found here. You and your organization should take an interest in these developments, and yes, there are HR legal concerns over AI. The moral and ethical concerns associated with the application of AI are something we must all understand in the coming years. Ignorance of AI capabilities and ramifications can no longer be an excuse. Sumser explains how this new legislation will add obligations and restrictions beyond existing GDPR requirements and that there is legislation applicable to human resource machine learning. The expectation is that legal oversight will arise that may expose People Analytics users and their vendors to liability. These regulations may bode poorly for People Analytics providers. It is worth your while to review what is being drafted related to machine learning and the law, as well as how your current vendor addresses the three primary topics from these regulations:

- Fairness – This can address both the training data used in your predictive model as well as the model itself. Potential bias toward things like gender or race may be obvious, but hidden bias often exists. Your vendor should identify biased data and allow you to either remove it or debias it.
- Transparency – All activity related to your predictive runs should be identifiable and auditable. This includes selection and disclosure of data, the strength of the models developed, and configurations used for data augmentation.
- Individual control over their own data – This relationship ultimately exists between the worker and their employer. Sumser's article expertly summarizes a set of minimum expectations your employees deserve.

When it comes to HR law, our opinion is that vendors should have already self-adopted these types of standards, and we are delighted this issue is being raised. What are the differences between regulations and standards? Become a more informed HR Leader by watching our Masterclass Series.

Why One Model is Preferred when it comes to Machine Learning and the Law

At One Model we are consistently examining the ethical issues that are associated with AI. One Model already meets and exceeds the Fairness and Transparency recommendations; not begrudgingly but happily, because it is the right thing to do. Where most competitors put your data into a proverbial AI black box, One Model opens its platform and allows full transparency and even modification of the AI algorithm your company uses. One Model has long understood HR law and how the industry has an obligation to develop rigor and understanding around Data Science and Machine Learning. The obvious need for regulation and a legal standard for ethics has risen with the amount of snake oil and obscurity being heavily marketed by some HR People Analytics vendors. One Model's ongoing plan to empower your HR AI initiatives includes:

- Radical transparency.
- Full traceability and automated version control (data + model).
- Transparent local and model-level justifications for the predictions that our Machine Learning component, One AI, makes.

By providing justifications and explanations for our decision-making process, One Model builds paths for user education and auditability for both simple and complex statistics.
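As a simple illustration of the kind of fairness check described above, here is a hypothetical sketch (generic pandas, invented data; not how One AI implements its checks) that compares a model's positive-prediction rates across groups and applies the common four-fifths rule of thumb:

```python
import pandas as pd

# Hypothetical model output: a predicted "promote" flag plus a protected attribute
predictions = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "predicted": [1,   1,   0,   1,   0,   1,   0,   0,   1,   0],
})

rates = predictions.groupby("group")["predicted"].mean()
adverse_impact_ratio = rates.min() / rates.max()

print(rates)
# A ratio below roughly 0.80 (the four-fifths rule) is a signal to investigate further.
print(f"Adverse impact ratio: {adverse_impact_ratio:.2f}")
```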
Our objective has been to advance the HR landscape by up-skilling analysts within their day-to-day job while still providing the latest cutting edge in statistics and machine learning. Providing clear and educational paths to statistics is at the forefront of our product design and roadmaps, and One Model is just getting started. You should promptly schedule a review of the AI practices being conducted with your employee data. Ignoring what AI can offer risks putting your organization at a competitive disadvantage. Incorrectly deploying AI practices may expose you to legal risk, employee distrust, compromised ethics, and incorrect observations. One Model is glad to share our expertise around People Analytics AI with you and your team. High-level information on our One AI capability can be found in the following brief video and documents:

https://bit.ly/OneModelPredictiveModeling
https://bit.ly/OneModel-AI
https://bit.ly/HR_MachineLearning

For a more detailed discussion, please schedule a convenient time to talk with us. http://bit.ly/OneModelMeeting

    Read Article

    10 min read
    Dennis Behrman

Ever play with a Magic 8 Ball? Back in the day, you could ask it any question and get an answer in just a few seconds. And if you didn't like its response, you could just shake it again for a new prediction. So simple, so satisfying. Today's HR teams and businesses obviously need more reliable ways of predicting outcomes and forecasting results than a Magic 8 Ball. But while forecasting and predicting sound similar, they're actually two different problem-solving techniques. Below, we'll go over both and explain what they're best suited for.

What is HR forecasting?

Remember the Magic 8 Ball? At first glance, the Magic 8 Ball "predicts" or "forecasts" an answer to your question. This is not how forecasting works (at least, not for successful companies or HR departments). Instead, HR forecasting is a process of predicting or estimating future events based on past and present data, most commonly by analysis of trends. "Guessing" doesn't cut it. For example, we could use predictive forecasting to discover how many customer calls Phil, our product evangelist, is likely to receive in the next day. Or how many product demos he'll lead over the next week. The data from previous years is already available in our CRM, and it can help us accurately predict and anticipate future sales and marketing events where Phil may be needed. A forecast, unlike a prediction, must have logic to it. It must be defendable. This logic is what differentiates it from the Magic 8 Ball's lucky guess. After all, even a broken watch is right twice a day.

What is predictive analytics?

Predictive analytics is the practice of extracting information from existing data sets in order to determine patterns and trends that could potentially predict future outcomes. It doesn't tell you what will happen in the future, but rather, what might happen. For example, predictive analytics could help identify customers who are likely to purchase our new One AI software over the next 90 days. To do so, we could indicate a desired outcome (a purchase of our people analytics software solution) and work backwards to identify traits in customer data that have previously indicated they are ready to make a purchase soon. (For example, they might have the decision-making authority on their people analytics team, have an established budget for the project, completed a demo, and found Phil likeable and helpful.) Predictive modeling and analytics would run the data and establish which of these factors actually contributed to the sale. Maybe we'd find out Phil's likability didn't matter because the software was so helpful that customers found value in it anyway. Either way, predictive analytics and predictive modeling would review the data and help us figure that out — a far cry from our Magic 8 Ball.

Managing your people analytics data: how do you know if you need to use forecasting vs. predictive analysis?

Interested in how forecasting and/or predictive modeling can help grow your people analytics capabilities? Do you start with forecasting or predictive modeling? The infographic below (credit to Educba.com - thanks!) is a great place to compare your options:

Recap: Should you use forecasting or predictive analysis to solve your question?

Forecasting is a technique that takes data and predicts the future value of the data by looking at its unique trends. For example: predicting average annual company turnover based on data from 10+ years prior.
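A minimal sketch of that kind of trend-based forecast (hypothetical turnover rates, plain numpy; any forecasting library would work just as well):

```python
import numpy as np

# Hypothetical annual turnover rates (%) for the past ten years
years = np.arange(2014, 2024)
turnover = np.array([14.2, 14.8, 15.1, 15.0, 15.9, 16.3, 17.1, 16.8, 17.5, 18.0])

# Fit a straight-line trend to the history and project the next year.
slope, intercept = np.polyfit(years, turnover, 1)
projection = slope * 2024 + intercept
print(f"Forecasted 2024 turnover: {projection:.1f}%")
```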
Predictive analysis factors in a variety of inputs and predicts future behavior - not just a number. For example: out of this same employee group, which of these employees are most likely to leave (turnover = the output), based on analyzing past employee data and identifying the indicators (inputs) that often precede the output? In the first case, there is no separate input or output variable, but in the second case, you use several input variables to arrive at an output variable. While forecasting is insightful and certainly helpful, predictive analytics can provide even deeper people analytics insights. People analytics leaders have definitely caught on. We can help you figure it out and get started. Want to see how predictive modeling can help your team with its people analytics initiatives? We can jump-start your people analytics team with our Trailblazer quick-start package, which changes the game by making predictive modeling an agile and iterative process. The best part? It allows you to start now and give your stakeholders a taste without breaking the bank, and it allows you to build your case and lay the groundwork for the larger-scale predictive work you could continue in the future. Want to learn more? Connect with Us.

Forecasting vs. Predictive Analysis: Other Relevant Terms

Machine Learning - Machine learning is a branch of artificial intelligence (AI) where computers learn to act and adapt to new data without being programmed to do so. The computer is able to act independently of human interaction. Read Machine Learning Blog.

Data Science - Data science is the study of big data that seeks to extract meaningful knowledge and insights from large amounts of complex data in various forms.

Data Mining - Data mining is the process of discovering patterns in large data sets.

Big Data - Big data is another term for a data set that's too large or complex for traditional data-processing software. Learn about our data warehouse.

Predictive Modeling - Predictive modeling is a form of artificial intelligence that uses data mining and probability to forecast or estimate more granular, specific outcomes. Learn more about predictive analytics.

Descriptive Analytics - Descriptive analytics is a type of post-mortem analysis in that it looks at past performance. It evaluates that performance by mining historical data to look for the reasons behind previous successes and failures.

Prescriptive Analytics - Prescriptive analytics is an area of business analytics dedicated to finding the potential best course of action for a given situation.

Data Analytics - Plain and simple, data analytics is the science of inspecting, cleansing, transforming, and modeling data in order to draw insights from raw information sources.

People Analytics - All these elements are important for people analytics. Need basics? Learn more about people analytics.

About One Model

One Model's people analytics solutions help thriving companies make consistently great talent decisions at all levels of the organization. Large and rapidly-growing companies rely on our People Data Cloud™ people analytics platform because it takes all of the heavy lifting out of data extraction, cleansing, modeling, analytics, and reporting of enterprise workforce data. One Model pioneered people data orchestration, innovative visualizations, and flexible predictive models. HR and business teams trust its accurate reports and analyses.
Data scientists, engineers, and people analytics professionals love the reduced technical burden. People Data Cloud is a uniquely transparent platform that drives ethical decisions and ensures the highest levels of security and privacy that human resource management demands.

    Read Article

    8 min read
    Josh Lemoine

With the introduction of One AI Recipes, One Model has created an intuitive interface for no-code machine learning in people analytics. One AI Recipes (Recipes) are a question-and-answer approach to framing your people data for predictive models that answer specific people analytics questions. Adding this capability to the existing One AI automated machine learning (autoML) platform results in a more accessible end-to-end no-code solution for delivering AI from right within your people analytics platform. We call them Recipes because they walk you through each of the steps necessary to create a delicious dish; a predictive model. Simply select the ingredients from your data pantry in One Model, then follow the steps in the Recipe to be guided through the process of creating a successful model. Recipes democratize the production and reproduction of AI models with consistency, accuracy, and speed. Understanding some of the terminology used above and how it relates to One AI will be useful in explaining why Recipes are so useful.

What is a no-code machine learning platform?

"No-code machine learning platform" is somewhat of a vague term. The definition is pretty straightforward. A no-code machine learning platform is a tool that enables you to apply artificial intelligence without writing any code. It provides a guided user experience that takes business context as an input and produces predictions and/or causal inferences as output. Where it becomes vague is in the range of complexity and flexibility of these platforms. On one end of the spectrum, there are simple-to-use AI builders where the user answers a few questions and is presented with predictions. These tend to only be useful in very standardized use cases. There is often very little transparency into what the machine learning model is actually doing. On the other end are the complex and powerful platforms like Azure ML. Azure doesn't require writing code and is also very powerful and flexible, but it is also complex. Anyone without a working knowledge of data science would be hard-pressed to create trustworthy models on platforms like this. One AI is aiming at the sweet (no dessert Recipe pun intended) spot on the spectrum. Being designed specifically for people analytics, it allows us to leverage the question-and-answer approach of Recipes. Experienced Chefs can still toss the Recipe aside and cook from scratch though. The One AI kitchen is well stocked with machine learning tools and appliances at its disposal.

What is autoML?

AutoML is a series of processes, or "pipeline," that performs data cleaning and preparation, algorithm selection, and parameter optimization for machine learning. Performing these tasks manually can be labor-intensive and time-consuming and requires expertise in data science techniques. AutoML automates these tasks while still delivering the benefits of machine learning. One AI has always provided an autoML pipeline, albeit one where any default setting can be overridden. Even so, there were two areas where we knew we could improve:

1. The data structure for analytic purposes is not the same as the data structure necessary for machine learning. Performing machine learning on data in One Model at times required additional data modeling, a task performed by an expert.
2. Framing up the problem and interpreting the results often required an expert to be involved to ensure accuracy and coherent insights.

Recipes address these challenges.
Recipes both re-frame the data in a way that a machine learning model can work with and provide a coherent statement that explains both what the model will be predicting and how it will be doing so.

How can you benefit from One AI with Recipes?

Resource Savings

Recipes lighten the load on the technical resources that are likely in high demand at your organization. People analytics is a key strategic business function, yet most people analytics teams aren't lucky enough to employ Data Engineers, Data Scientists, and Machine Learning Engineers. These teams often compete with other teams for the same technical resources: people who are very talented but can't possibly possess a deep understanding of every area of the business. Predicting and planning for outcomes has become a key deliverable of people analytics teams, yet they're often not well equipped to succeed. Companies are increasingly looking for software for automation in HR. Machine learning tools are making great strides in taking business context as an input and producing useful insights as an output. The full realization of this functionality is the no-code machine learning platform.

Time Savings

With Recipes, time-to-value for machine learning from your people data is substantially reduced. The difference in time required to manually perform this work versus leveraging a no-code machine learning platform is stark. It's weeks to months vs. hours to days. Even if you have Data Scientists on staff who have the skills necessary to build custom predictive models, they can save time by prototyping in a no-code environment.

Interpretability

Having the clear statements that Recipes provide, explaining what it is you're predicting and how you're going about it, makes the results easier to interpret. Contrast this with manual machine learning, where details can get lost in translation. This prediction statement is in addition to the exploratory data analysis (EDA), model explanation/performance, and Storyboards that One AI provides. One Model also employs a team of experts in the ML and AI space who are available to assist if uncertainty is encountered.

Transparency

Since One AI is part of One Model, your model configuration and performance data are available in the same place as the predictive or causal data and your people data (at large). Also, your models are trained on YOUR data. These are not "black box" models. At One Model we emphasize making model performance data easily available anywhere predictive or causal data is included.

Compliance

As a One Model customer, your potentially sensitive employee data resides in the same place as your machine learning. You do not need to export this data and move it around. On the flip side, the output from your models can be leveraged in your Storyboards in One Model without exporting or moving sensitive data outside of your people analytics solution. The predictive outputs can even be joined to your employee dimensions to help you identify where risk sits.

Control and Flexibility

Users have the option of configuring data and settings manually in a very granular way. Just because One AI offers a no-code option for creating machine learning models doesn't mean you're tied to it. Want to use a specific estimator? You can do that. Want to modify the default settings for that estimator? You can also do that. Recipes just expand the number of One Model user personas able to leverage AI on their data.
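For contrast with the no-code experience, here is a rough sketch of what the hand-built, code-first version of those autoML steps looks like in generic scikit-learn (hypothetical column names; this illustrates the work being automated, not One AI's internals):

```python
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Data cleaning and preparation: impute and encode numeric and categorical HR fields
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer()), ("scale", StandardScaler())]),
     ["tenure_years", "salary", "commute_minutes"]),            # hypothetical columns
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["department", "job_level"]),
])

# Algorithm selection and parameter optimization via cross-validated grid search
pipeline = Pipeline([("prep", preprocess), ("model", GradientBoostingClassifier())])
search = GridSearchCV(pipeline,
                      param_grid={"model__n_estimators": [100, 300],
                                  "model__max_depth": [2, 3]},
                      scoring="f1", cv=5)

# search.fit(X_train, y_train)  # X_train, y_train: your framed people data
```

Each of those decisions is something a Recipe asks about (or handles) for you, which is where the resource and time savings described above come from.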
In Summary

One AI Recipes provide a question-and-answer approach to building predictive models that answer key questions in the people analytics space. The resulting democratization of the production of AI models provides benefits including:

- Resource Savings
- Time Savings
- Interpretability
- Transparency
- Compliance
- Control and Flexibility

You can have all of this as part of your people analytics platform by choosing One Model. Since you won't learn about these Recipes by watching Food Network, schedule a demo here: Request a Demo

    Read Article

    5 min read
    Stacia Damron

Is your company meeting its diversity goals? More importantly, if it is, are you adequately measuring diversity and inclusion success? While we may have the best intentions, today's companies need to be focused on not just monitoring hiring metrics but effectively analyzing them in order to make a DE&I difference in the long term. But first, in order to do that, we need to take a look at key metrics for diversity and inclusion success. Let's talk about these diversity KPIs we're measuring and why we're measuring them. Without further ado, here are 4 out-of-the-box ways to measure diversity-related success that don't have to do with hiring - all of which can help you supplement and enhance your current reporting.

Number 1: Rate and Timing of an Individual's Promotions

Are non-minority groups typically promoted every year and a half while minorities are promoted every two years? Are all employees held accountable to the same expectations and metrics for success? Is your company providing a clearly-defined path to promotion opportunities, regardless of race or gender? Every hire should be rewarded for notable successes and achievements, and promoted according to a clear set of criteria. Make sure that's happening across the organization - including for minority groups. Digging into these metrics can help determine those answers and, at the very least, put you on a path to asking the right questions.

Number 2: Title and Seniority

Do employees with the same levels of educational background and qualifications receive equitable salaries and titles? Often, minorities are underpaid compared to their non-minority counterparts. Measuring and tracking rank and pay metrics are two good ways to spot incongruences and catch them early, giving your company a chance to correct a wage gap rather than inadvertently widening it over time. Quantitative measures of diversity like this can help you see trends over time, because changing diversity is a long, slow-turning process. Keep your eye on historically underpaid groups. A fairly paid employee is a happy, loyal employee.

Number 3: Exposure to Upper Management and Inclusion in Special Assignments

Global studies cited in a Forbes article revealed that a whopping 79 percent of people who quit their jobs cite 'lack of appreciation' as their reason for leaving. Do your employees - including minority groups - feel valued? Are you empowering them to make an impact? Unsurprisingly, people who feel a sense of autonomy and inclusion report higher satisfaction with their jobs - and are therefore more likely to stay. Are all groups within the organization equal-opportunity contributors? Bonus: On that note, are you performing any types of employee satisfaction surveys?

Number 4: Training and Education Programs and Partnerships

In 2014, Google made headlines for partnering with Code School. They committed to providing thousands of paid accounts to provide free training for select women and minorities already in tech. Does your company have a similar partnership or initiative in your community or company? As simple as it sounds, don't just set it and forget it: track the relevant diversity KPIs that determine success and measure the results of your programs to determine whether they are, in fact, helping you achieve your commitments toward improving diversity.

The Summary: Success Comes by Measuring Diversity and Inclusion

Hopefully, one or two (heck - maybe all four) of the items above resonated with you, and you're excited to go tinker with your reporting platform.
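If you want a concrete starting point for the first metric, a sketch like this (hypothetical column names and data; any HRIS promotion extract would do) compares the typical gap between promotions across groups:

```python
import pandas as pd

# Hypothetical promotion events: one row per promotion, with the gap since the prior one
promotions = pd.DataFrame({
    "employee_group":          ["minority", "minority", "minority",
                                "non-minority", "non-minority", "non-minority"],
    "months_since_last_promo": [24, 26, 23, 18, 17, 19],
})

# Median months between promotions, by group
print(promotions.groupby("employee_group")["months_since_last_promo"].median())
```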
But wait - what if you have all this data, and you WANT to make some predictive models and see correlations in the data - and you're all giddy to go do it - but you don't have the tools in place? That's where One Model can help. Give us your data in its messiest, most useless form, load it into our platform, and we'll help you fully leverage that data of yours. Want to learn more? Let's Connect About Diversity Metrics Today. Let's get this party started.

About One Model

One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.

    Read Article

    11 min read
    Taylor Clark

The human resources department is a mission-critical function in most businesses. So the promise of better people decisions has generated interest in and adoption of advanced machine-learning capabilities. In response, organizations are adopting a wide variety of data science tools and technology to produce economically-optimal business outcomes. This trend is the result of the proliferation of data and the improved decision-making opportunities that come with harnessing the predictive value of that data. What are the downsides to harnessing machine learning? For one, machines lack ethics. They can be programmed to intelligently and efficiently drive optimal economic outcomes, and it can seem as though using machines in decisions will naturally produce desirable organizational behaviors. But machines lack a sense of fairness or justice, and optimal economic outcomes do not always correspond to optimal ethical outcomes. So the key question facing human resources teams and the technology that supports them is: "How can we ensure that our people decisions are ethical when a machine is suggesting those decisions?" The answer almost certainly requires radical transparency about how artificial intelligence and machine learning are used in the decision-making process. It is impossible to understand the ethical aspect of a prediction made by a machine unless the input data and the transformations of that data are clear and understood as well. General differences between various machine learning approaches have a profound impact on the ethicality of the outcomes that their predictions lead to. So let's begin by understanding some of those differences. Let's focus on the various types of machine learning models: the black box model, the canned model, and the custom built model.

What is a Black Box Model?

A black box model is one that produces predictions that can't be explained. There are tools that help users understand black box models, but these types of models are generally extremely difficult to understand. Many vendors build black box models for customers, but are unable or unwilling to explain their techniques and the results that those techniques tend to produce. Sometimes it is difficult for the model vendor to understand its own model! The result is that the model lacks any transparency. Black box models are often trained on very large data sets. Larger training sets can greatly improve model performance. However, for this higher level of performance to be generalized, many dependencies need to be satisfied. Naturally, without transparency it is difficult to trust a black box model. As you can imagine, it is concerning to depend on a model that uses sensitive data when that model lacks transparency. For example, asking a machine to determine if a photo has a cat in the frame doesn't require much transparency because the objective lacks an ethical aspect. But decisions involving people often have an ethical aspect to them. This means that model transparency is extremely important. Black box models can cross ethical lines where people decisions are concerned. Models, like humans, can exhibit biases resulting from sampling or estimation errors. They can also use input data in undesirable ways. Furthermore, model outputs are frequently used in downstream models and decisions. In turn, this ingrains invisible systematic bias into the decision. Naturally, the organization jeopardizes its ethical posture when human or machine bias leads to undesirable diversity or inclusion outcomes.
One of the worst possible outcomes is a decision that is unethical or prejudicial. These bad decisions can have legal consequences, or worse.

What is a Canned Model?

The terms "canned model" or "off-the-shelf model" describe a model that was not developed or tailored to a specific user's dataset. A canned model could also be a black box model, depending on how much intellectual property the model's developer is willing to expose. Plus, the original developer might not understand much about its own model. Canned models are vulnerable to the same biases as black box models. Unrepresentative data sets can lead to unethical decisions. Even a representative data set can have features that lead to unethical decisions. So canned models aren't without their disadvantages either. But even with a sound ethical posture, canned models can perform poorly in an environment that simply isn't reflective of the environment on which the model was trained. Imagine a canned model that segmented workers in the apparel industry by learning and development investments. A model trained on Walmart's data wouldn't perform very well when applied to decisions for a fashion startup. Canned models can be quite effective if your workforce looks very similar to the ones that the model was trained on. But that training set is almost certainly a more general audience than yours. Models perform better when the training data resembles the real-life population the model will actually be applied to.

What are Custom Built Models?

Which brings us to custom built models. Custom models are the kind that are trained on your data. One AI is an example of the custom built approach. It delivers specialized models that best understand your environment because it's seen it before. So it can detect patterns within your data to learn and make accurate predictions. Custom models discover the unique aspects of your business and learn from those discoveries. To be sure, it is common for data science professionals to deploy the best performing model that they can. However, the business must ensure that these models comply with high ethical and business intelligence standards. That's because it is possible to make an immoral decision with a great prediction. So for users of the custom built model, transparency is only possible through development techniques that are not cloudy or secret. Even with custom built models, it is important to assess the ethical impact that a new model will have before it is too late. Custom built models may incorporate some benefits of canned models, as well. External data can be incorporated into the model development process. External data is valuable because it can capture what is going on outside of your organization. Local area unemployment is a good example of a potentially valuable external data set. Going through the effort of building a model that is custom to your organization will provide a much higher level of understanding than just slamming a generic model on top of your data. You will gain the additional business intelligence that comes from understanding how your data, rather than other companies' data, relates to your business outcomes. The insights gleaned during the model development process can be valuable even if the model is never deployed. Understanding how any model performs on your data teaches you a lot about your data. This, in turn, will inform which type of model and model-building technique will be advantageous to your business decisions.
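Here is a minimal sketch of what "seeing how a model performs on your data" can look like in practice: a candidate model scored against a naive baseline on the same (here synthetic) data. It uses generic scikit-learn and invented features; it is an illustration of the idea, not One AI's methodology.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features: tenure, commute time, engagement score (standardized)
X = rng.normal(size=(500, 3))
# Hypothetical target: 1 = employee left within a year
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0.8).astype(int)

baseline = cross_val_score(DummyClassifier(strategy="stratified", random_state=0),
                           X, y, cv=5, scoring="f1").mean()
candidate = cross_val_score(RandomForestClassifier(random_state=0),
                            X, y, cv=5, scoring="f1").mean()

print(f"Baseline F1: {baseline:.2f}   Candidate model F1: {candidate:.2f}")
```

The absolute score matters less than the comparison: how much better than a naive guess does the model do on your population? That is the relative performance the next section insists on.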
Don't Be Misled by Generic Model Performance Indicators

A canned model's advertised performance can be deceptive. The shape of the data that the canned model learned from may be drastically different from the data in your specific business environment. For example, if 5% of the people in the model's sample work remotely, but your entire company is remote, then the impact and inferences drawn by the model about remote work are not likely to inform your decisions very well.

When to be Skeptical of Model Performance Numbers

Most providers of canned models are not eager to determine the specific performance of their model on your data because of the inherent weaknesses described above. So how do you sniff out performant models? How can you tell a good-smelling model from a bad-smelling one? The first reason to be skeptical lies in whether the model provider offers relative performance numbers. A relative performance value is a comparative one, and therefore failing to disclose relative performance should smell bad. Data scientists understand the importance of measuring performance. They know that it is crucial to understand performance prior to using a model's outputs. So by avoiding relative performance, the vendor is not being 100% transparent. The second reason to be skeptical concerns vendors who can't (or won't) explain which features are used in their model and the contribution that each feature makes to the prediction. It is very difficult to trust a model's outputs when the features and their effects lack explanation. So that would certainly smell bad. One Model published a whitepaper listing the questions you should ask every machine learning vendor.

Focus on Relative Performance… or Else!

There are risks that arise when using data without relative performance. The closest risk to the business is that faith in the model itself could diminish. This means that internal stakeholders would not realize "promised" or "implied" performance. Of course, failing to live up to these promises is a trust-killer for a predictive model. Employees themselves, and not just decision makers, can distrust models and object to decisions made with them. Even worse, employees could adjust their behavior in ways that circumvent the model in order to "prove it wrong". But loss of trust by internal stakeholders is just the beginning. Legal, compliance, financial, and operational risk can increase when businesses fail to comply with laws, regulations, and policies. Therefore, it is appropriate for champions of machine learning to be very familiar with these risks and to ensure that they are mitigated when adopting artificial intelligence. Finally, it is important to identify who is accountable for poor decisions that are made with the assistance of a model. The act of naming an accountable individual can reduce the chances of negative outcomes, such as bias, illegality, or imprudence.

How to Trust a Model

A visually appealing model that delivers "interesting insights" is not necessarily trustworthy. After all, a model that has a hand in false or misleading insights is a total failure. At One Model, we feel that all content generated from predictive model outputs must link back to that model's performance metrics. An organization cannot consider itself engaged in ethical use of predictive data without this link. Canned and black box models are extremely difficult to understand, and even more difficult to predict how they will respond to your specific set of data.
There are cases where these types of models can be appropriate. But these cases are few and far between in the realm of people data in the human resources function. Instead, custom models offer a much higher level of transparency. Model developers and users come to understand their own data much better throughout the model building process. (This process is called Exploratory Data Analysis, and it is an extremely under-appreciated aspect of the field of machine learning.) At One Model, we spent a long time (more than 5 years) building One AI to make it easier for all types of human resources professionals to build and deploy ethical custom models from their data, while ensuring model performance evaluation and model explainability. One AI includes robust, deep reporting functionality that provides clarity on which data was used to train models. It blends rich discovery with rapid creation and deployment. The result is the most transparent and ethical machine learning capability in any people analytics platform. Nothing about One AI is hidden or unknowable. And that's why you can trust it.

Their Artificial Intelligence Still Needs Your Human Intelligence

Models are created to inform us of patterns in systems. The HR community intends to use models on problem spaces involving people moving through and performing within organizations. So HR pros should be able to learn a lot from predictive models. But it is unwise to relinquish human intelligence to predictive models that are not understood. The ultimate value of models (and all people analytics) is to make better, faster, more data-informed talent decisions at all levels of the organization. Machine learning is a powerful tool, but it is not a solution to that problem.

    Read Article

    10 min read
    Joe Grohovsky

During my daily discussions with One Model prospects and customers, two consistent themes emerge: a general lack of understanding of predictive modeling, and a delay in considering its use until basic reporting and analytical challenges are resolved. These are understandable, and I can offer a suggestion to overcome both. My suggestion is based upon seeing successful One Model customers gain immediate insights from their data by leveraging the technology found in our One AI component. These insights include data relationships that can surface even before customers run their first predictive model.

Deeper insights before predictive modeling? How?

To begin, let's rethink what you may consider to be a natural progression for your company and your People Analytics team. For years we've been told a traditional People Analytics Maturity Continuum has a building block approach that is something like this: The general concept of the traditional People Analytics maturity model is based upon the need to master a specific step before progressing forward. Supposedly, increased value can be derived as each step and its accompanying complexity are mastered. While this may seem logical, it is largely inaccurate in the real world. The sad result is that many organizations languish in the early stages and never truly advance, leaving them with diminished ROI and frustrated stakeholders.

What should we be doing instead?

The short answer is to drive greater value immediately when your people analytics project launches. Properly built data models will immediately allow for basic reporting and advanced analytics, as well as predictive modeling. I'll share a brief explanation of two One Model deliverables to help you understand where I'm going with this.

People Data Cloud™️

Core Workforce data is the first data source ingested by One Model into a customer's People Data Cloud. Although additional data sources will follow, our initial effort is focused on cleaning, validating, and modeling this Core Workforce data. This analytics-ready data is leveraged in their People Data Cloud instance. Once that has occurred, storyboards are then created, reflecting a customer's unique metrics for reporting and analytics. It is now that customers can and should begin leveraging One AI (Read more about People Data Cloud).

Exploratory Data Analysis

One AI provides pre-built predictive models for customers. The capability also exists for customers to build their own bespoke models, but most begin with a pre-built model like Attrition Risk. These pre-built models explore a customer's People Data Cloud to identify and select relevant data elements from which to understand relationships and build a forecast. The results of this selection and ranking process are presented in an Exploratory Data Analysis (EDA) report. What is exploratory data analysis, you ask? It is a report that provides immediate insights and understanding of data relationships even before a model is ever deployed. Consider the partial EDA report below, reflecting an Attrition Risk model. We see that 85 different variables were considered. One AI EDA will suggest an initial list of variables relevant to this specific model, and we see it includes expected categories such as Performance, Role, and Age. This first collection of variables does not include Commute Time. But is Commute Time a factor in your ideal Attrition Risk model? If so, what should the acceptable time threshold be? Is that threshold valid across all roles and locations?
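You can explore that kind of question yourself with a few lines of analysis. Here is a hypothetical sketch (generic pandas, invented column names and data; not the EDA report itself) that buckets commute time and compares termination rates across buckets:

```python
import pandas as pd

# Hypothetical employee-level extract
df = pd.DataFrame({
    "commute_minutes": [10, 25, 40, 55, 70, 90, 15, 35, 65, 80],
    "terminated":      [0,  0,  0,  1,  1,  1,  0,  0,  1,  0],
})

# Bucket commute time and compare termination rates across buckets
df["commute_bucket"] = pd.cut(df["commute_minutes"], bins=[0, 30, 60, 120],
                              labels=["under 30 min", "30-60 min", "over 60 min"])
print(df.groupby("commute_bucket", observed=True)["terminated"].mean())
```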
One AI allows each customer to monitor and select relevant data variables to understand how they impact the insights from your predictive model.

Changing the People Analytics Maturity Model into a Continuum

Now that we realize that the initial Core Workforce People Data Cloud can generate results not only for Reporting and Analytics but also for Predictive Modeling, we can consider a People Analytics Maturity Continuum like this: This model recognizes the fact that basic reporting and analytics can occur simultaneously once a proper data lake is in place. It also introduces the concept of Monitoring your data and Understanding how it relates to your business needs. These are the first steps in Predictive Modeling and can occur without a forecast being generated. The truth underlying my point is: analytics professionals should first understand their data before building forecasts. Ignoring One AI Exploratory Data Analysis insights from this initial data set is a lost opportunity. This initial model can and should be enhanced with additional data sources as they become available, but there is significant value even without a predictive output. The same modeled data that drives basic reports can drive Machine Learning. The greater value of One AI is providing a statistical layer, not simply a Machine Learning output layer. The EDA report is a rich trove of statistical correlations and insights that can be used to build data understanding, a monitoring culture, and the facilitation of qualitative questions. But the value doesn't stop there. Integrated services that accompany One AI also provide value for all data consumers. These integrated services are reflected in storyboards and include:

- Forecasting
- Correlations
- Line of Best Fit
- Significance Testing
- Anomaly Detection

These integrated services are used to ask questions about your data that are more valid than what can be derived solely from traditional metrics and dimensions. For example, storyboards can reflect data relationships so even casual users can gain early insights. The scatterplot below is created with Core Workforce data and illustrates the relationship between Tenure and Salary. One AI's integrated services not only render this view but also caution that, based upon the data used, this result is unlikely to be statistically significant (refer to the comment under the chart title below). More detailed information is contained in the EDA report, but this summary provides the first step in Monitoring and Understanding this data relationship. Perhaps one of the questions that may arise from this monitoring involves understanding existing gender differences. This is easily answered with a few mouse clicks: This view begins to provide potential insight into gender differences involving Tenure and Salary, though the results are still not statistically significant. Analysts are thus guided toward discovering their own collection of insights contained within their data. List reports can be used to reflect feature importance and directionality. In the above table report, both low and high Date of Birth values increase Attrition Risk. Does this mean younger and older workers are more likely to leave than middle-aged workers? Interesting relationships begin to appear, and One AI automatically reports on the strength of those relationships and correlations. Iterations will increase the strength of the forecast, especially when additional data sources can be added.
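The statistical caution under that chart title is the same kind of check you can reproduce with a simple significance test. A hypothetical sketch (SciPy, invented numbers; One AI surfaces this automatically in the storyboard):

```python
from scipy.stats import pearsonr

# Hypothetical tenure (years) and salary (thousands) for a small sample
tenure = [1, 2, 2, 3, 4, 5, 6, 7, 8, 10]
salary = [52, 55, 61, 58, 63, 60, 72, 68, 70, 75]

r, p_value = pearsonr(tenure, salary)
print(f"Correlation: {r:.2f}, p-value: {p_value:.3f}")
# A large p-value means the apparent relationship could easily be noise,
# which is exactly the caution shown under the chart title.
```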
Leveraging One AI's capability at project launch provides a higher initial ROI, an accelerated value curve, and better-informed data consumers. At One Model, you don’t need to be a data scientist to get started with predictive modeling. Contact One Model to learn more and see One AI in action. Customers - Would you like more info on EDA reports in One Model? Visit our product help site.

    Read Article

    10 min read
    Phil Schrader

    Post 1: Sniffing for Bull***t. As a people analytics professional, you are now expected to make decisions about whether to use various predictive models. This is a surprisingly difficult question with important consequences for your employees and job applicants. In fact, I started drafting up a lovely little three-section blog post around this topic before realizing that there was zero chance that I was going to be able to pack everything into a single post. There are simply no hard and fast rules you can follow to know if a model is good enough to use “in the wild.” There are too many considerations. To take an initial example, what are the consequences of being wrong? Are you predicting whether someone will click on an ad, or whether someone has cancer? In fact, even talking about model accuracy is multifaceted. Are you worried about detecting everyone who does have cancer-- even at the risk of false positives? Or are you more concerned about avoiding false positives? Side note: If you are a people analytics professional, you ought to become comfortable with the idea of precision and recall. Many people have produced explanations of these terms so we won’t go into them here. Here is one from “Towards Data Science”. So all that said, instead of a single, long post attempting to cover a respectable amount of this topic, we are going to put out a series of posts under the heading: Evaluating a predictive model: Good Smells and Bad Smells. And, since I’ve never met an analogy that I wasn’t willing to beat to death, we’ll use that smelly comparison to help you keep track of the level at which we are evaluating a model. For example, in this post we’re going to start way out at bull***t range. Sniffing for Bull***t As this comparison implies, you ought to be able to smell these sorts of problems from pretty far out. In fact, for these initial checks, you don’t even have to get close enough to sniff around at the details of the model. You’re simply going to ask the producers of the model (vendor or in-house team) a few questions about how they work to see if they are offering you potential bull***t. Remember that predictions are not real. Because predictive models generate data points, it is tempting to treat them like facts. But they are not facts. They are educated guesses. If you are not committed to testing them and reviewing the methodology behind them, then you are contenting yourself with bull***t. Technically speaking, by bull***t, I mean a scenario in which you are not actually concerned with whether the predictions you are putting out are right or wrong. For those of you looking for a more detailed theory of bull***t, I direct you to Harry G. Frankfurt. At One Model we strive to avoid giving our customers bull***t (yay us!) by producing models with transparency and tractability in mind. By transparency we mean that we are committed to showing you exactly how a model was produced, what type of algorithm it is, how it performs, how features were selected, and other decisions that were made to prepare and clean the data. By tractability we mean that the data is traceable and easy to wrangle and analyze. When you put these concepts together you end up with predictive models that you can trust with your career and the careers of your employees.
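Since that side note about precision and recall will keep coming up in this series, here is a tiny refresher sketch with entirely made-up numbers:

```python
# Precision and recall computed from raw confusion counts.
# All of the numbers here are invented purely for illustration.
true_positives = 40   # flagged as leavers who actually left
false_positives = 10  # flagged as leavers who stayed
false_negatives = 25  # actual leavers the model missed

precision = true_positives / (true_positives + false_positives)  # 0.80
recall = true_positives / (true_positives + false_negatives)     # ~0.62
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f}, recall={recall:.2f}, f1={f1:.2f}")
```

Which of the two you weight more heavily depends on the cancer-versus-ad-click question above: how costly is a miss compared to a false alarm?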
If, for example, you produce an attrition model, transparency and tractability will mean that you are able to educate your data consumers on how accurate the model is. It will mean that you have a process set up to review the results of predictions over time and see if they are correct. It will mean that if you are challenged about why a certain employee was categorized as a high attrition risk, you will be able to explain what features were important in that prediction. And so on. To take a counter example, there’s an awful lot of machine learning going on in the talent acquisition space. Lots of products out there are promising to save your recruiters time by using machine learning to estimate whether candidates are a relatively good or a relatively bad match for a job. This way, you can make life easier for your recruiters by taking a big pile of candidates and automagically identifying the ones that are the best fit. I suspect that many of these offerings are bull***t. And here are a few questions you can ask the vendors to see if you catch a whiff (or perhaps an overwhelming aroma) of bull***t. The same sorts of questions would apply for other scenarios, including models produced by an in-house team. Hey, person offering me this model, do you test to see if these predictions are accurate? Initially I thought about making this question “How do you” rather than “Do you”. I think “Do you” is more to the point. Any hesitation or awkwardness here is a really bad smell. In the talent acquisition example above, the vendor should at least be able to say, “Of course, we did an initial train-test split on the data and we monitor the results over time to see if people we say are good matches ultimately get hired.” Now later on, we might devote a post in this series to self-fulfilling prophecies, meaning in this case that you should be on alert for the fact that by promoting a candidate to the top of the resume stack, you are almost certainly going to increase the odds that they are hired and, thus, you -- or your model -- are shaping, rather than predicting, the future. But we’re still out at bull***t range so let’s leave that aside. And so, having established that the producer of the model does in fact test their model for accuracy, the next logical question to ask is: So how good is this model? Remember that we are still sniffing for bull***t. The purpose of this question is not so much to hear whether a given model has .75 or .83 precision or recall, but just to test if the producers of the model are willing to talk about model performance with you. Perhaps they assured you at a high level that the model is really great and they test it all the time-- but if they don’t have any method of explaining model performance ready for you… well… then their model might be bull***t.
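What does a non-bull***t answer look like in practice? Roughly this: hold out data the model never saw, report performance on it, and keep re-running the report as real outcomes arrive. A generic scikit-learn sketch on synthetic stand-in data (not any vendor's actual pipeline):

```python
# A generic sketch of the train/test discipline a credible answer implies.
# Synthetic data stands in for candidate features (X) and the outcome (y, e.g. 1 = hired).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=12, random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Performance is reported on data the model never saw during training; the same
# report can be re-run periodically as real outcomes (actual hires) accumulate.
print(classification_report(y_test, model.predict(X_test)))
```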
What features are important in the model? / What type of algorithm is behind these predictions? These follow-up questions are fun in the case of vendors. Oftentimes vendors want to talk up their machine learning capabilities with a sort of “secret sauce” argument. They don’t want to tell you how it works or the details behind it because it’s proprietary. And it’s proprietary because it’s AMAZING. But I would argue that this need not be the case and that their hesitation is another sign of bull***t. For example, I have a general understanding of how the original PageRank algorithm behind Google Search works. Crawl the web and work out the number of pages that link to a given page as a sign of relevance. If those backlinks come from sites which themselves have large numbers of links, then they are worth more. In fact, Sergey Brin and Larry Page published a paper about it. This level of general explanation did not prevent Google from dominating the world of search. In other words, a lack of willingness to be transparent is a strong sign of bull***t. How do you re-examine your models? Having poked a bit at transparency, these last questions get into issues of tractability. You want to hear about the capabilities that the producers of the model have to re-examine the work they have done. Did they build a model a few years ago and now they just keep using it? Or do they make a habit of going back and testing other potential models? Do they save off all their work so that they could easily return to the exact dataset that was used to train a specific version of the model? Are they set up to iterate, or are they simply offering you a one-size-fits-all algorithm? Good smells here will be discussions about model deployment, maintenance, and archiving. Streets-and-sewers type stuff, as one of my analytics mentors likes to say. Bad smells will be high-level, vague assurances or -- my favorite -- simple appeals to how amazingly bright the team working on it is. If they do vaguely assure you that they are tuning things up “all the time” then you can hit them with this follow-up question: Could you go back to a specific prediction you made a year ago and reproduce the exact data set and version of the algorithm behind it? This is a challenging question, and even a team fully committed to transparency and tractability will probably hedge their answers a bit. That’s ok. The test here is not just about whether they can do it, but whether they are even thinking about this sort of thing. Ideally it opens up a discussion about how they will support you, as the analytics professional responsible for deploying their model, when you get challenged about a particular prediction. It’s the type of question you need to ask now because it will likely be asked of you in the future. As we move forward in this blog series, we’ll get into more nuanced situations. For example, reviewing the features used in the predictions to see if they are diverse and make logical sense. Or checking to see if the type of estimator (algorithm) chosen makes sense for the type of data you provided. But if the model that you are evaluating fails the bull***t smell test outlined here, then it means that you’re not going to have the transparency and tractability necessary to pick up on those more nuanced smells. So do yourself a favor and do a test whiff from a ways away before you stick your nose any closer.
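As a practical footnote to that last question about reproducing a year-old prediction, here is one minimal way a team might archive enough to answer it. This is a generic sketch, not a description of One Model's or any other vendor's internals:

```python
# One plain way to make "can you reproduce last year's prediction?" answerable:
# snapshot the exact training data, the fitted model, and enough metadata to find
# both again. Paths and fields here are illustrative.
import hashlib
import json
from datetime import datetime, timezone

import joblib
import pandas as pd

def archive_run(train_df: pd.DataFrame, fitted_model, run_dir: str = ".") -> dict:
    data_bytes = train_df.to_csv(index=False).encode()
    meta = {
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "dataset_sha256": hashlib.sha256(data_bytes).hexdigest(),
        "n_rows": len(train_df),
        "columns": list(train_df.columns),
        "model_class": type(fitted_model).__name__,
    }
    train_df.to_csv(f"{run_dir}/training_snapshot.csv", index=False)  # exact data set
    joblib.dump(fitted_model, f"{run_dir}/model.joblib")              # exact model version
    with open(f"{run_dir}/run_metadata.json", "w") as fh:
        json.dump(meta, fh, indent=2)
    return meta
```

If the producer of your model can describe something equivalent, whatever the tooling, that is a good smell.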

    Read Article

    3 min read
    Nicholas Garbis

    We wrote this paper because we believe that AI/ML has the potential to be a very valuable and powerful technology to support better talent decisions in organizations – and it also has the potential to be mishandled in ways that are unethical and can do harm to individuals and groups of employees. In this paper, we provide some process-thinking substance to the conversation that has too often been dominated by hyperbolic “AI/ML is great!” and “AI/ML will destroy us!” headlines. In the paper, you will find a set of Guiding Principles … And, most importantly, a set of Processes for Ethical ML Stewardship that we believe you should be discussing (immediately) within your organizations. Each of these processes (and sub-processes) is defined in the paper in plain, readable language to enable the widest possible readership. We believe we are at a delicate and critical point in time where AI/ML has been embedded into so many HR technology solutions without sufficient governance amongst the buying organizations. Vendors (like One Model) need to have their AI/ML solutions challenged to provide sufficient transparency into the AI/ML models – model features, performance measures, bias detection, review/refresh commitments, etc. One Model has built our “One AI” machine learning toolset to enable the processes that our customers can use to ensure ethical model design and outputs. To be clear, this paper is not a promotional piece about One Model, but it is absolutely intended to challenge the sellers and buyers of HR technology to get this right. Without the appropriate focus on ethics, AI/ML products and projects could become too risky for organizations and then summarily eliminated along with all the potential value for individuals and organizations. DOWNLOAD PAGE: https://www.onemodel.co/whitepapers/ethics-of-ai-ml-in-hr

    Read Article

    2 min read
    Chris Butler

    One Model took home the Small Business Category of the Queensland Premier's Export Awards held last night at Brisbane City Hall. The award was presented by Queensland Premier and Minister for Trade, Hon Annastacia Palaszczuk MP and Minister for Employment, Small Business, Training and Skills Development, Hon Dianne Farmer MP. “We are delighted to receive this award given the quality of entrepreneurs and small business owners in Queensland,” One Model CEO, Chris Butler said. “It is a tribute to the exceptional team we have in Brisbane and the world leading people analytics product One Model has built.” “From our first client, One Model has been an export focussed business. With the profile boost this award gives us, we look forward to continuing to grow our export markets of the United States, Europe and Asia,” Mr Butler said. Following this win, One Model is now a finalist in the 59th Australian Export Awards to be held in Canberra on Thursday 25 November 2021. One Model was founded in Texas in 2015, by South-east Queensland locals Chris Butler, Matthew Wilton and David Wilson. One Model generates over 90% of its revenue from export markets, primarily the United States. One Model was also nominated in the Advanced Technologies Award Category. One Model would like to congratulate Shorthand for winning this award as well as our fellow finalists across both categories - Healthcare Logic, Tactiv (Advanced Technologies Category), iCoolSport, Oper8 Global, Ryan Aerospace and Solar Bollard Lighting (Small Business Category). The One Model team would like to thank Trade and Investment Queensland for their ongoing support. To learn more about One Model's innovative people analytics platform or our company's exports, please feel free to reach out to Bruce Chadburn at bruce.chadburn@onemodel.co. PICTURE - One Model Co-Founders Chris Butler, Matthew Wilton and David Wilson with Queensland Premier, Hon Annastacia Palaszczuk MP and the other award winners.

    Read Article

    15 min read
    Chris Butler

    The public sector is rapidly evolving. Is your people analytics strategy fit for purpose, and can it meet the increasing demands of a modern public sector? In this blog, we will highlight the unique challenges that public sector stakeholders face when implementing a people analytics strategy. In light of those challenges, we will then outline how to best design and implement a modern people analytics strategy in the public service. When it comes to people analytics, the public sector faces a number of unique challenges: The public sector has the largest and most complex workforce of any employer in Australia - a workforce that bridges everything from white-collar professionals to front-line staff and every police officer, teacher and social worker in between. Public sector workforces are geographically dispersed, with operations across multiple capital cities in the case of the Commonwealth Government, or a mix of city and regional staff in the case of both state and federal governments. The public service operates a multitude of HR systems acquired over a long time, leading to challenges of data access and interoperability. Important public service HR data may also be held in manual, non-automated spreadsheets prone to error and security risk. A complex industrial relations and entitlements framework, details of which are generally held in different datasets. Constant machinery of government (MoG) changes demand both organisational and technological agility from public servants to keep delivering key services (as well as the delivery of ongoing and accurate HR reporting). The public sector faces increased competition for talent, both within the public service and externally with the private sector. Citizen and political pressure for new services and methods of government service provision is at an all-time high - so not only are your critical stakeholders your customers, they are your voters as well. Cyber security and accessibility issues that are unique to the public sector. This all comes under the pressure of constant cost constraints that require bureaucracies to do more with limited budgets. As a result, understanding and best utilising limited human capital resources is crucial for the public sector at both a state and federal level. Now that we have isolated the unique people analytics challenges of the public sector, how do HR professionals within the public service begin the process of implementing a people analytics strategy? 1. Data Orchestration “Bringing all of your HR data together.” The first stage of any successful people analytics programme is data orchestration. Without access to all of your relevant people data feeds in one place, it is almost impossible to develop a universal perspective of your workforce. Having a unified analytical environment is critical, as it allows HR to: Develop a single source of truth for the data you hold on employees. Cross-reference employee data within and between departments to adequately benchmark and compare workforces and drive team-level, department-level and public-service-wide insights. Establish targeted interventions rather than one-size-fits-all solutions. For example, a contact centre is going to have very different metric results than your corporate groups like Finance or Legal. Blend data between systems to uncover previously hidden insights. Uncover issues such as underpayments that develop when different systems don’t communicate.
Using people analytics to mitigate instances of underpayment is covered extensively in this blog. Provide a clean and organised HR data foundation from which to generate predictive insights. Have the capacity to export modelled data to an enterprise data warehouse or another analytical environment (PowerBI, Tableau etc). Allow HR via people analytics to support the Enterprise data mesh - covered in more detail in this blog post. People data orchestration in the public sector is complicated by the reliance on legacy systems, as well as the constant changes in structure driven by machinery of government reforms. Successful data orchestration can only be achieved through an intimate knowledge of the source HR systems and a demonstrated capacity to extract information from those systems and then model that information in a unified environment. This takes significant technology knowledge, such as bespoke API integrations for cloud-based systems and proven experience working with on-premise systems. It also requires subject matter expertise in the nuances of HR data. It cannot be easily implemented without the right partners. Ideally, the end solution should be a fully flexible, open analytics infrastructure that future-proofs the public sector and allows for the ingestion of data from new people data systems as they arise (such as new LMS or pulse survey products) while also facilitating the migration of data from legacy systems to more modern cloud-based platforms. 2. Data Governance “Establishing the framework to manage your data.” Now that all of your data is in one place, it is important you develop a robust framework for how to manage that data - in our view this has two parts: data definition and data access. Data Definition Having consolidated multiple sources of data in one environment, the next step is metric definition, which is critical to being able to convert the disparate data sets that you have assembled into coherent, understandable language. It is all well and good to have your data in one place, but if you have five different definitions of what an FTE means from the five different systems you are aggregating, then the benefits you receive from your data orchestration phase will be marginal. Comprehensive metric definitions with clear explanations are needed to ensure your data is properly orchestrated and organisation-wide stakeholders have confidence that data is standardised and can be trusted. Data Access HR data is some of the most complex and sensitive data a government holds, so existing HR data management practices based on spreadsheets that can be easily distributed to non-approved stakeholders both inside and outside of your organisation are no longer fit for purpose. Since your people analytics data is coming from multiple systems, you need to provide an overarching security framework that controls who gets access to what information and why. This framework must be based on logical rules, aligned to broader departmental privacy policies, and flexible enough to accommodate organisational change and to scale to your entire department or agency regardless of its size. Critically, there needs to be a high level of automation and scalability to use role-based security as a mechanism for safely sharing data with decision makers. Today’s spreadsheet-based world relies on limiting data sharing, which also limits effective data-driven decision making.
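To make rule-based access concrete, here is a deliberately simplified sketch in Python. The roles, column names, and rules are hypothetical, and a real framework would also handle field-level masking and organisational change:

```python
# A deliberately simplified illustration of role-based access rules; the roles,
# column names, and rules are hypothetical.
import pandas as pd

ACCESS_RULES = {
    # A departmental secretary sees the whole department.
    "departmental_secretary": lambda df, user: df,
    # An HR business partner sees only the divisions they support.
    "hr_business_partner": lambda df, user: df[df["division"].isin(user["divisions"])],
    # A line manager sees only their own direct reports.
    "line_manager": lambda df, user: df[df["manager_id"] == user["employee_id"]],
}

def apply_access(workforce: pd.DataFrame, user: dict) -> pd.DataFrame:
    """Return only the rows this user's role entitles them to see."""
    return ACCESS_RULES[user["role"]](workforce, user)

# e.g. apply_access(headcount_df, {"role": "line_manager", "employee_id": "E1001"})
```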
Finally, these role-based security access frameworks need to be scalable so each new user or change in structure doesn’t require days of manual work from your team to ensure both access and compliance. 3. Secure People Analytics Distribution “Delivering people analytics content to your internal stakeholders.” The next step, once you have consolidated your data and established an appropriate data governance framework, is to present and distribute this data to your internal stakeholders. This is what we refer to as the distribution phase of your people analytics implementation. We established in the last section that for privacy and security reasons, different stakeholders require access to varying levels of information. The distribution phase goes one step further and places access within the prism of what individual stakeholders need in order to successfully do their jobs. For example, the information and insights necessary for a Departmental Secretary and an HR business partner to do their jobs are wildly different and therefore should be tailored to their particular needs. So: organisation-wide metrics and reports in the case of the Departmental Secretary, and team- or individual-level metrics for the HR BP or line manager. This is further complicated by disclosure requirements and reporting unique to the public service. This includes: Media requests regarding public servant pay and conditions Statutory reporting requirements for annual state of the public service reports Submissions to and appearances before parliamentary committees Disclosure to independent oversight inquiries or agencies As a result, public sector HR leaders are required to walk a tightrope of both breadth and specificity. So how do we recommend you do this? Offer a baseline of standardised metrics for the whole organisation. Tailor that baseline based on role-based access requirements, so stakeholders only see information that is relevant to drive data-driven decision making. Deliver those insights at scale - the wider the stakeholder group consuming your outputs the better. Ensure those outputs are timely and relevant - daily or weekly updates are recommended. Be able to justify your insights and offer access to raw data, calculations and metric definitions. Continually educate your stakeholders about best practice people analytics. Increase reporting sophistication based on the people analytics maturity of your stakeholders - simple reporting for entry-level stakeholders, more complicated predictive insights for the more advanced. To get the most out of your people analytics strategy you need to deliver two things: Role-based access to the widest stakeholder group across your department - the wider the group of employees that have access to detailed datasets, the easier it will be to deliver data-driven decision making. Support your team with a change management programme to grow their analytical capability over the course of time. 4. Extracting Value from your Data “Using AI + Data Science to generate predictive insights.” Now we get to the fun part - using data science to supercharge your analysis and generate predictive insights. However, to quote the great theologian and people analytics pioneer - Spiderman - “With great power comes great responsibility.” Most data science work today is performed by a very small number of people using arcane knowledge and coding in technologies like R or Python. It is not scalable and rarely shared.
The use of machine learning capabilities with people data requires a thoughtful approach that considers the following: Does your AI explain its decisions? Could the decisions your machine learning environment recommends withstand the scrutiny of a parliamentary committee? Do you adhere to ethical AI frameworks and decision making? What effort has been made to detect and remove bias? Does harnessing predictive insights require a data scientist, or can it be used by everyday stakeholders within your department? Will your use of AI adhere to current or future standards, such as those recently proposed by the European Commission? To learn more about the European Commission proposal regarding new rules for AI, click here. In integrating the use of machine learning into your people analytics programme, you must ensure that models are transparent and can be explained to both your internal and external stakeholders. 5. Using People Analytics to Support Public Sector Reform “Public sector HR driving data-driven decision making.” A people analytics strategy does not exist in isolation; it is a crucial aspect of any departmental strategy. However, in speaking to our public sector HR colleagues, we often hear that their priorities are sidelined or that they don’t have the resources to argue for their importance. A lot of this has to do with the absence of integrated datasets and outputs to justify HR prioritisation and investment. We see people analytics and the successful aggregation of disparate data sets as the way that HR can drive their people priorities forward. If HR can present an integrated and trusted dataset that allows comparison and cross-validation with data from other verticals, including finance, community engagement, procurement and IT, it gains the capability to be central to overall decision making and to support broader departmental corporate strategies from the ground up. We have written extensively about the importance of data-driven decision making in HR and using people analytics to support enterprise strategy; this content can be found on our blog at www.onemodel.co/blog. Why you should invest in people analytics and what One Model can do to help. The framework of a successful public sector people analytics project outlined above is the capability that the One Model platform delivers. From data orchestration to predictive insights, One Model delivers a complete HR Analytics Capability. The better you understand your workforce, the more ambitious the reform agendas you can fulfil. One Model is set up not only to orchestrate your data to help the public service understand the challenges of today, but also, through our proprietary One AI platform, to help you build the public service of the future. One Model’s public sector clients are some of our most innovative and pragmatic, and we love working with them. At One Model, we are constantly engaging with the public sector about best practice people analytics - last year, our Chief Product Officer, Tony Ashton (https://www.linkedin.com/in/tony-ashton/), himself a former Commonwealth HR public servant, appeared on the NSW Public Service Commission’s The Spark podcast to discuss how the public sector can use people data to make better workforce decisions. That podcast can be found here. Let’s start a conversation If you work in a public service department or agency and are interested in learning more about how the One Model solution can help you get the most out of your workforce, my email is patrick.mcgrath@onemodel.co

    Read Article

    11 min read
    Joe Grohovsky

    Most of my One Model work involves chatting with People Analytics professionals discussing how our technology enables them to perform their role more effectively. One Model is widely acknowledged for our superior ability to orchestrate and present customers’ people metrics, as well as for leveraging Artificial Intelligence/Machine Learning for predictive modeling purposes. My customer interactions always result in excited conversations around our data ingestion and modeling, and how a customer can leverage the flexibility of our many presentation options. However, when it comes to further exploring the benefits of Artificial Intelligence, enthusiasm levels often diminish, and customers become hesitant to explore how this valuable technology can immediately benefit their organization. One Model customer contacts tend to be HR professionals. My sense is they view Artificial Intelligence/Machine Learning as very cool, but aspirational for both them and their organization. This is highlighted during implementations as we plan their launch and roll-out timelines; the use of predictive models is typically pushed out to later phases. This results in a delayed adoption of an extraordinarily valuable tool. Machine Learning is a subset of Artificial Intelligence and refers to the ability of algorithms to discern patterns within data sets. It elevates decision-support functions to an advanced level and as such can provide previously unrecognized insights. When used with employee data there is understandable sensitivity because people's lives and careers risk being affected. HR professionals can successfully use Machine Learning to address a variety of topics that impact an array of areas throughout their company. Examples would include: Attrition Risk – impact at the organizational level Promotability – impact at the employee level Candidate Matching – impact outside the organization Exploratory Data Analysis - quickly build robust understandings of any dataset/problem With this basic understanding, let us explore three possible reasons why the deployment of Machine Learning is delayed, and how One Model works to increase a customer’s comfort level and accelerate its usage. #1: Machine Learning is undervalued For many of us, change is hard. There are plenty of stories in business, sports, or government illustrating a refusal to use decision-support methods to rise above gut-instinct judgments. The reluctance or inability to use fact-based evidence to sway an opinion makes this the toughest category to overcome. #2: Machine Learning is misunderstood For many of us, numbers and math are frightening. Typically, relating possibility and probability to a prediction does not go beyond guessing at the weather for this weekend’s picnic. Traditional metrics such as employee turnover or gender mix are simple and comfortable. Grasping how dozens of data elements from thousands of employees can interact to lead or mislead a prediction is an unfamiliar experience that many HR professionals would prefer to avoid. #3: Machine Learning is intimidating This may be the most prevalent reason, albeit subliminal. Admitting a weakness to colleagues, your boss, or even yourself is not easily done. Intimidation may arise from several sources. The first arises from the general lack of understanding referenced earlier, accompanied by a fear of liability due to data bias or unsupported conclusions.
Organizations with data scientists on staff may also pressure HR to transfer the responsibility for People Analytics predictions to these scientists to be handled internally with Python or R. This sort of internal project never ends well for HR; it is a buy/build situation akin to IT departments wanting to build their own People Analytics data warehouse with a BI front-end. Interestingly, when a customer’s data science team is exposed to One Model’s Machine Learning capabilities, they usually become some of our biggest advocates. During my customer conversations, I avoid dwelling on their reluctance and simply explain how One Model’s One AI component intrinsically addresses Machine Learning within our value proposition. Customers do not need familiarity with predictive modeling to enjoy these benefits. Additionally, I explain how One AI protects our customers by providing complete transparency into how training data is selected, how results are generated, and how models make their decisions; by validating the strength of each resulting prediction; and by offering the flexibility to modify every data run to fit within each customer’s own data ethics. This transparency and flexibility provide protection against data bias and generally bad data science. Customers simply apply an understanding of their business requirements to One AI’s predictions and adjust if necessary. Below is a brief explanation of a few relevant components of One Model's Machine Learning strategy and the benefits they provide. Selection of Training Data After a prediction objective is defined, the next step is to identify and collect the relevant data points that will be used to teach One AI how to predict future or unseen data points. This can be performed manually, automatically, or through a combination of both. One AI offers automatic feature selection, using algorithms to decide which features are statistically significant and worth training upon. This shrinks the data set and reduces noise. The context of fairness is critical, and it is at this point that One AI starts to measure and report on data bias. One measurement of group fairness that One AI supports is Disparate Impact. Disparate Impact refers to practices that adversely affect one group of people sharing a protected characteristic more than another, even when the organization does not overtly discriminate (i.e., its policies appear neutral). Disparate Impact is a simple measure of group fairness and does not consider sample sizes, instead focusing purely on outcomes. These limitations actually work well when the goal is to prevent bias from getting into Machine Learning. It is ethically imperative to measure, report, and prevent bias from making its way into Machine Learning. This Disparate Impact reporting is integrated into One AI along with methods to address the identified bias. One AI allows users to measure group fairness in many ways and on many characteristics at once, making it easy to make informed, ethical decisions. Promotability predictions could serve as an example. If an organization's historic promotion data is collected for training purposes, the data set may reflect a bias toward Caucasian males who graduated from certain universities. Potential bias toward gender and race may be obvious, but there may also be a hidden bias toward these certain universities, or away from other universities that typically serve different genders or races. An example of how hidden bias affected Amazon can be found here.
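For readers who want to see the arithmetic, here is a back-of-the-envelope disparate impact calculation on hypothetical promotion counts; the groups and numbers are invented purely for illustration:

```python
# A back-of-the-envelope disparate impact check; the groups and counts are invented.
import pandas as pd

promotions = pd.DataFrame({
    "group":    ["A", "A", "B", "B"],
    "promoted": [1,   0,   1,   0],
    "count":    [120, 380, 45,  455],
})

# Selection (promotion) rate per group.
rates = (promotions[promotions["promoted"] == 1].set_index("group")["count"]
         / promotions.groupby("group")["count"].sum())

# Disparate impact ratio: lowest selection rate divided by the highest.
# Values below the commonly cited 0.8 ("four-fifths") threshold are a warning sign.
di_ratio = rates.min() / rates.max()
print(rates.round(3).to_dict(), f"disparate impact ratio = {di_ratio:.2f}")
```

A ratio well below that threshold is exactly the kind of warning a fairness report should surface before a model is trained on that history.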
One AI can identify bias and, using the latest research, help users remove it from their data. It is important to One Model that our users not only be informed of bias but also be able to act upon these learnings. Generation of Results After a predictive model is run, One AI still takes steps that ensure the predictions are as meaningful as possible. It is important to note that One AI does all the “heavy lifting”; our customers need only provide oversight as it applies to their specific business. Any required modifications or changes are easily handled. An example can be found in an Attrition Risk model. After running this model, our Exploratory Data Analysis (EDA) report provides an overview of all variables considered for the model and identifies which were accepted, which were rejected, and why. A common reason for rejection is a “cheating” variable: one with too close to a one-to-one relationship with the target. If “Severance Pay” is rejected as a cheating variable, we likely will agree, because logically anyone receiving a severance package would be leaving the company. However, if “Commute Time 60+” is rejected as a cheating variable, we may push back and decide to include it, because commuting over an hour is something the organization can act on. It is an easy modification to override the original exclusion and re-run the model. One Model customers who are more comfortable with predictive modeling may even choose to dive deeper into the model itself. A report on each predictive run shows which model type was used, Dataset IDs, Dimensionality Reduction status, etc. One Model’s flexibility allows a customer to change these with a mouse click should they want to explore different models. Please remember that this is not a requirement at all, simply a reflection of the available transparency and flexibility for those customers preferring this level of involvement. My favorite component of our results summary reporting is how One AI ranks the variables impacting the model. Feature Importance is listed in descending order of importance to the result. In our Attrition Risk model above, the results summary report would provide a prioritized list of items to be aware of in your attempt to reduce attrition. Strength of Prediction It is important to remember that Machine Learning generates predictions, not statements of fact. We must realize that sometimes appropriate data is simply not available to generate meaningful predictions, and those models would not be trustworthy. Measuring and reporting the strength of predictions is a solid step in developing a data-driven culture. There are several ways to evaluate model performance; many are reflected in the graphic below. One Model automatically generates multiple variations to help provide a broad view and ensure that a user has the data they feel comfortable evaluating. Both “precision” and “recall” are measured and displayed. Precision measures the proportion of the model's positive identifications (people it predicted would terminate) that were correct. Put another way, when the model said someone would terminate, how often was it right? Recall reflects the proportion of actual positives (people who terminate in the future) that were correctly identified by the model. Put another way, of all the people who actually terminated, how many did the model correctly identify? Precision and recall are just two of the many metrics that One AI supports.
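As a rough illustration of what those performance measures look like side by side, here is a small scikit-learn sketch on hypothetical attrition predictions (all numbers invented):

```python
# Several common performance measures computed with scikit-learn on made-up predictions.
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score

y_true  = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]   # 1 = actually terminated
y_pred  = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]   # the model's yes/no call
y_score = [0.9, 0.2, 0.6, 0.8, 0.4, 0.1, 0.7, 0.3, 0.2, 0.1]  # raw predicted probability

print("precision:", precision_score(y_true, y_pred))  # of predicted leavers, how many left
print("recall:   ", recall_score(y_true, y_pred))     # of actual leavers, how many were caught
print("f1:       ", f1_score(y_true, y_pred))
print("roc auc:  ", roc_auc_score(y_true, y_score))   # ranking quality across all thresholds
```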
If you or your team is more familiar with another method for measuring performance, we most likely already support it. One Model is glad to work with your team in refining your algorithms to build strong predictive models and ensure you have the confidence to interpret the results. Summary Machine Learning and Data Science are extremely valuable tools that should be a welcome conversation topic and an important part of project roll-out plans. People Analytics professionals owe it to their companies to incorporate these tools into their decision-support capabilities even if they do not have access to internal data scientists. Care should be taken to ensure all predictive models are transparent, free from bias, and can be proven so by your analytics vendor. Want to Learn More? Contact One Model and learn how we can put leading-edge technology in your hands and accelerate your People Analytics initiatives.

    Read Article

    3 min read
    Stacia Damron

    The One Model team is excited to announce that Tony Ashton has moved from Vice President of Product Management at SAP SuccessFactors to become Chief Product Officer at One Model. One Model is an Austin-based HR technology company, with offices in the United States, United Kingdom, and Australia. Tony will join our Brisbane, Australia office, which headquarters our rapidly growing engineering team. With over seventeen years of experience leading the people analytics product team at SAP SuccessFactors and, before that, Infohrm (acquired by SuccessFactors), Tony brings a wealth of product leadership experience to the quickly-growing HR technology startup. “One Model is doing the most exciting, innovative work in the people analytics space today,” asserts Ashton. “No other company in the world is going as deep or innovating as fast as One Model in HR data modeling and the application of machine learning and artificial intelligence to the field of people analytics.” As One Model’s Chief Product Officer, Tony will play an instrumental role in driving One Model’s product innovation strategy and bringing the company's vision to life across our People Analytics Infrastructure, One AI, and Trailblazer offerings. “This strategic hire will support One Model as it continues to remain a market leader in product innovation, development, and people analytics strategy on a global scale,” says Stacia Damron, Senior Marketing Manager. “Scaling our team is the next step; the right hires will be instrumental in the creation and evolution of our offerings, and in our commitment to aligning those offerings with both current and future customers’ needs.” One Model CEO, Chris Butler, is thrilled with this addition to the team. “Tony is without doubt the highest calibre and most experienced product leader in the people analytics domain. I am incredibly excited about the capability that Tony brings to drive our product forward and focus on the success of our customers,” says Butler. About One Model One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team. Learn more at onemodel.co. About One AI Tony is instrumental in leading the One AI team. Making HR machine learning transparent and accessible to all is a key differentiator between One Model and other People Analytics tools on the market. Tony's passion for building the community is unprecedented. Learn more about One AI.

    Read Article

    13 min read
    Phil Schrader

    As the people analytics leader in your organization, you are responsible for transforming people data into a unique competitive advantage. You need to figure out what it is about your workforce that makes your business grow, and you need to get leaders across the organization on board with using data to make better employee outcomes happen. How will you do that in 2019? Will you waste another year attempting to methodically climb up the 4 or 5 stages of the traditional analytics maturity model? You know the one. It goes from operational reporting in the lower left, up through a few intermediate stages, and then in the far distant upper right, culminates with predictive models and optimization. Here’s the Bersin & Associates one for reference or flip open your copy of Competing on Analytics (2007) for another (p. 8). The problem with this model is that on the surface it appears to be perfect common sense while in reality, it is hopelessly naive. It requires you to undertake the most far-reaching and logistically challenging efforts first. Then in the magical future, you will have this perfect environment in which to figure out what is actually important. If this were an actual roadmap for an actual road it would say, “Step 1: Begin constructing four-lane highway. … Step 4: Figure out where the highway should go.” It is the exact opposite of the way we have learned to think about business in the last decade. Agile. The Lean Startup. Etc. In fact it is such a perfect inverse of what you should be doing that we can literally turn this maturity model 180 degrees onto its head and discover an extremely compelling way to approach people analytics. Here is the new model. Notice the axes. This is a pragmatic view. We are now building impact (y axis) in the context of increasing logistical complexity (x axis). Impact grows as more people are using data to achieve the people outcomes that matter. But, as more and more people engage with the data your logistical burden grows as well. These burdens will manifest themselves in the form of system integrations, data validation rules, metric definitions, and a desire for more frequent data refreshes. From this practical perspective, operational data no longer seems like a great place to start. It’s desirable because it’s the point at which many people in the organization will be engaging with data, but it will require an enormous logistical effort to support. This is a good time to dispense with the notion that operational data is somehow inferior to other forms of data. That it’s the place to start because it’s so simplistic. Actually, your business runs operationally. Amazon’s operational data, for example, instructs a picker in a warehouse to go and fetch a particular package from the shelves at a particular moment in time. That’s just a row of operational data. But it occurs at the end of a sophisticated analytics process that often results in you getting a package on the very same day you ordered it. Operational data is data at the point of impact. Predictive data also looks quite different from this perspective. It’s a wonderful starting point because it is very manageable logistically. And don’t be put off by the fact that I’ve labeled its impact as lower. Remember that impact in this model is a function of the number of people using your data. 
The impact of your initial predictive models will be felt in a relatively small circle of people around you, but it’s that group of people that will form your most critical allies as you seek to build your analytics program. For starters, it’s your boss and the executive team. Sometime around Valentine's Day they will no doubt start to ask, “Hey, how’s the roadmap coming along?” In the old model, you would have to say, “Oh well you know it’s difficult because it’s HR data and we need to get it right first.” Then you’d both nod knowingly and head off to LinkedIn to read more articles about HR winning a seat at the table. But this year you will say, “It’s going great! We’ve run a few hundred predictive models and discovered that we can predict {insert Turnover, Promotion, Quality of Hire, etc} with a decent degree of recall and precision. As a next step, we’re figuring out how to organize this data more effectively so we can slice and dice it in more ways. After that we will start seeking out other data sets to improve our models and make a plan for distributing this data to our people leaders.” Ah. Wouldn’t that feel nice to say? Next, you begin taking steps to better organize your data and add new data sets. This takes more logistical effort so you will engage your next group of allies: HR system owners and IT managers. Because they are not fools, they will be a little skeptical at first. Specifically, they’re going to ask you what data you need and why it’s worth going after. If you’re operating under the old model, you won’t really know. You might say, “All of it.” They won’t like that answer. Or maybe you’ll be tempted to get some list of predefined KPIs from an article or book. That’s safer, but you can’t really build a uniquely differentiating capability for your organization that way. You’re just copying what other people thought was important. If you adopt our upside-down model, on the other hand, you’ll have a perfectly good answer for the system owners and IT folks. You’ll say, “I’ve run a few hundred models and we know that this manageable list has the data elements that are the most valuable. These data points help us predict X. I’d like to focus on those.” “Amen,” they’ll say. How’s that for the first two months of 2019? You’re showing progress to your execs. Your internal partners are on board. You are building momentum. The more allies you win, the more logistical complexity you can take on. At this stage people have reason to believe in you and share resources with you. As you move up the new maturity model with your IT allies, you’ll start to build analytic data sets. Now you’re looking for trends and exploring various slices. Now is the time for an executive dashboard or two. Now is the time to start demonstrating that your predictive models are actually predictive. These dashboards are focused. They’re not a grab bag of KPIs. They might simply show the number of people last month who left the company and whether or not they were predicted by the model. Maybe you cut it by role and salary band. The point is not to see everything. The point is to see what matters. Your execs will gladly take three pieces of meaningful data once per month over a dozen cuts of overview data once a day. Remember to manage your logistical commitment. You need to get the data right about once a month. Not daily. Not “real time.” Finally, you’re ready to get your operational data right.
In the old world this meant something vague like being able to measure everything and having all the data validated and other unrealistic things. In the new world it means delivering operational data at the point of impact. In the old world you’d say, “Hey HRBP or line manager, here are all these reports you can run for all this stuff.” And they would either ignore them or find legitimate faults with them. In the new world, you say, “Hey HRBP or line manager, we’ve figured out how to predict X. We know that X is (good | bad) for your operations. We’ve rolled out some executive dashboards to track trends around X. Based on all that, we’ve invested in technology and process to get this data delivered to you as well.” X can be many things. Maybe it’s a list of entry-level employees likely to promote two steps based upon factors identified in the model. Maybe it’s a list of key employees at high risk of termination. Maybe it’s a ranking of employee shifts with a higher risk of a safety incident. Whatever it is for your business, you will be ready to roll it out far and wide because you’ve proven the value of data and you’ve pragmatically built a network of allies who believe in what you are doing. And the reason you’ll be in that position is because you turned your tired old analytics maturity model on its head and acted the way an agile business leader is supposed to act. Yeah but… Ok Phil, you say, that’s a nice story but it’s impossible. We can’t START with prediction. That’s too advanced. Back when these maturity models were first developed, I’d say that was true. The accessibility of data science has changed a lot in ten years. We are all more accustomed to talking about models and predictive results. More to the point, as the product evangelist at One Model I can tell you with first-hand confidence that you can, in fact, start with prediction. One Model’s One AI product offering ingests sets of data and runs them through a set of data processing steps, producing predictive models and diagnostic output. Here are the gory details on all that. Scroll past the image and I’ll explain. Basically there’s a bunch of time-consuming work that data scientists have to do in order to generate a model. This may include things like taking a column and separating the data into multiple new columns (One Hot Encoding), devising a strategy to deal with missing data elements, or checking for cheater columns (a column like “Severance Pay” might be really good at predicting terminations, for example). There are likely several ways to prepare a data set for modeling. After all that, a data scientist must choose from a range of predictive model types, each of which can be run with various different parameters in place. This all adds up to scrubbing, rescrubbing, running and re-running things over and over again. If you are like me, you don’t have the skill set to do all of that effectively. And you likely don’t have a data scientist loitering around waiting to grind through all of that for you. That’s why in the past this sort of thing was left at the end of the roadmap-- waiting for the worthy few. But I bet you are pretty good at piecing data sets together in Excel. I bet you’ve handled a vlookup or two on your way to becoming a people analytics manager. Well… all we actually need to do is manually construct a data set with a bunch of columns that you think might be relevant to predicting whatever outcome you are looking for. Then we feed the data into One AI.
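If you are curious what that automated heavy lifting roughly corresponds to, here is a hand-rolled sketch of the preparation steps described above -- imputation for missing values, one-hot encoding for categorical columns, and a candidate model -- using scikit-learn. The column names are illustrative, not a required schema:

```python
# A rough, hand-rolled version of the preparation steps described above, so you can
# see what is being automated. Column and variable names are illustrative.
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric = ["tenure_months", "salary", "commute_minutes"]
categorical = ["department", "job_level", "location"]

prep = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical),
])

model = Pipeline([("prep", prep), ("clf", LogisticRegression(max_iter=1000))])
# model.fit(features_df, left_company)  # hypothetical training data assembled in Excel/CSV
```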
It cycles through all the gnarly stuff in the image above and gives you some detailed output on what it found. This includes an analysis of all the columns you fed in and also, of course, the model itself. You don’t need to be able to do all the stuff in that image. You just need to be able to read and digest the results. And of course, we can help with that. Now, the initial model may not have great precision and recall. In other words, it might not be that predictive, but you’ll discover a lot about the quality and power of your existing data. This exercise allows you to scout ahead, actually mapping out where your roadmap should go. If the initial data you got your hands on doesn’t actually predict anything meaningful in terms of unique, differentiating employee outcomes-- then it’s damn good you didn’t discover that after three years of road building. That would be like one of those failed bridges to nowhere. Don’t do that. Don’t make the next phase of your career look like this. Welcome to 2019. We’ve dramatically lowered the costs of exploring the predictive value of your data through machine learning. Get your hands on some data. Feed it into One AI. If it’s predictive, use those results to build your coalition. If the initial results are not overly predictive, scrape together some more data or try a new question. Iterate. Be agile. Be smart. Sometimes you have to stand on your head for a better view. How can I follow Phil's advice and get started? About One Model One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.

    Read Article

    5 min read
    Josh Lemoine

    2019 Goals: With it being the dawn of a new year, a lot of us are setting goals for ourselves. This year, I set two goals: To write and publish a blog post To run a marathon As the father of two young children, I'm always looking for ways to maximize time management. As I ran on the treadmill recently, a bizarre idea came to me in between thoughts of "why do I do this to myself?" and "this sucks". I might be able to accomplish the first goal and get a start on the second at the same time. See, on my very first run 6 years ago, I brought my phone and tracked the run using a fitness tracker app. Since then, I never quit running and I never stopped tracking every single run using the same app. I have literally burned 296,827 calories building this data set... ...and this data deserves better than living in an app on my phone. As a Data Engineer, I feel ashamed to have been treating my exciting (to me) and certainly hard-earned data this way. What if I loaded the data into One Model and performed some analysis on it? If it worked, it would provide an excellent use case for just how flexible One Model is. It would also give me a leg up (running pun intended) on marathon training. One Model is flexible! One Model is a People Analytics platform. That said, it's REALLY flexible and very well positioned as the definition of "People Data" becomes more broad. The companies we work with are becoming increasingly creative in the types of data they're loading. And they're increasing their ROI by doing so. One Model is NOT a black box that you load HRIS and/or ATS data into that then spits out some generic reports or dashboards. The flexible technology platform coupled with a team of people with a massive amount of experience working with People Data is a big part of what differentiates One Model from other options. Would One Model be flexible enough to allow for analyzing running data in it? Yes. Not only was it flexible enough, but the data was loaded, modeled, and visualized without using any database tools. Everything you're about to see was done through the One Model front end. One Model has invested substantially over the past year in building a data scripting framework and it's accessible within the UI. This is a really exciting feature that customers will increasingly be able to utilize in the coming year. Years ago, as a customer of a People Analytics provider, I would have given my right arm for something like this. That said, as a One Model customer you also get access to a team of experts to model your data for you. What did I take away and what should you take away from this? Along with gaining a better understanding of my running, this exercise has gotten me more excited about running. Is "excited about running" even a thing? I plan to start capturing and analyzing more complete running data in 2019 with the use of a smart watch. I'll also be posting runs more consistently on social media (Strava). It'll be interesting to watch the changes as I train for a marathon. Aside from running though, it has given me some fresh perspective on what's possible in One Model. This will surely carry over into the work I do on a daily basis. Hopefully you can take something away from it as well. If you're already using One Model you might want to think about whether you have other data sources that can be tied to your more traditional People Data. If you're not using One Model yet but have an interesting use case related to People Analytics, One Model might be just the ticket for you. 
Without further ado, here's my running data in One Model: "Cool - this is all really exciting. How can I get started?" Did the above excite you? Could One Model help you with your New Year's resolution? I can't guarantee it'll help you burn any calories, but you could be up and running with your own predictive analytics during Q1 of 2019. One Model's Trailblazer quick-start program allows you to get started with predictive analytics now. Want to learn more? About One Model One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.

    Read Article

    10 min read
    Stacia Damron

    Wouldn't it be incredible to predict the future? Let's ask 63-year-old Joan Ginther. She's arguably one of the luckiest women in the world. This Texas woman defied the odds to win million-dollar lottery payouts via scratch-off cards not once, not twice, but four times over the past decade. Her first lottery win landed her $5.4 million, followed by $2 million, $3 million, and then a whopping $10 million jackpot over the summer of 2010. Mathematicians calculate the odds of this happening as one in eighteen septillion. Theoretically, this should only happen once in a quadrillion years. So how did this woman manage to pull it off? Was it luck? I'd certainly argue yes. Was it skill? Maybe. She did purchase all four scratch-off cards at the same mini mart. Most interestingly, did it have something to do with the fact that Joan was a mathematics professor with a PhD in statistics from Stanford University? Quite possibly. We'll never know for sure what Joan's secret was, but the Texas Lottery Commission didn't (and still doesn't) suspect any foul play. Somehow, Joan predicted it was the right time and right place to buy a scratch-off ticket. All we know for sure is that she's exceptionally lucky. And loaded. Most of us have a hard enough time predicting traffic on our morning commute. We can, however, make some insightful predictions for people analytics teams by running people data through predictive models. So, what is HR predictive analytics? Specifically, predictive analytics uses modeling, a form of artificial intelligence that applies data mining and probability, to forecast or estimate specific outcomes. Each predictive model is built from a set of predictors (variables) in the data that influence future results. When the program processes the data set, it creates a statistical model from it. Translation? Predictive analytics allows us to predict the future based on historical outcomes. Let's walk through an example of predictive analytics in HR. So predictive analytics can help HR professionals and business leaders make better decisions, but how? Maybe a company wants to learn where it's sourcing its best sales reps so it knows where to turn to hire more top-notch employees. First, the company must determine whether its "best" reps have measurable qualities. For the sake of this post, let's say they sell twice as much as the average sales rep. Perhaps all the best reps share several qualities, such as referral source (like Indeed), a similar skill (fluency in Spanish listed on their resume), or a personality trait (from personality tests conducted during the job interview). A predictive model would weigh all this data and compare it against the outcome: the superior sales quotas being hit. The model references the exploratory data analysis used to find correlations across all your data sources. This allows a company to run job candidates' resumes through the model in an effort to predict their future success in that role (a minimal illustrative sketch of this kind of model appears at the end of this article). Sounds great, right? Now, here are the problems to consider: 1) Predictive models can only predict the future based on historical data. If you don't have enough data, that could be a problem. 2) Even if you do have enough data, that can still be a problem.
Amazon, for example, recently scrapped its resume software (which evaluated resumes of current and previous employees to help screen potential ones) because it discovered the algorithm favored men over women for engineering roles and penalized candidates who listed women's organizations on their resumes. (And it's not really Amazon's fault; it's the data. Historically, most of those roles had been held by men.) Kudos to them for scrapping that. That's why it's so important to use a human capital predictive analytics tool that is transparent and customized to your data rather than to some other big company in your industry. Check out One Model's One AI. HR predictive analysis is helpful, but it's also a process. Are there more applications? What HR-related problems does it solve? Predictive analysis applications in people analytics are vast. The right predictive models can help you solve anything from recruiting challenges to retention and employee attrition questions, to absenteeism, promotions and management, and even HR demand forecasting. The sky's the limit if you have the right tools and support. Time for a people analytics infrastructure reboot Sure, a people analytics infrastructure reboot isn't as exciting as winning the lottery and buying a yacht, but it's really, really helpful in solving questions large corporations struggle with daily. If you haven't used predictive modeling to solve a burning business problem, this might be a great place for your people analytics team to dive in. For One Model customers, we recommend you push a couple of buttons and start with an exploratory data analysis. More and more companies are beginning to incorporate machine learning technology into their stack, and there's so much value that can be derived. If you're not sure where to get started, just keep it simple and bite off one piece of the puzzle at a time with One Model. One Model is built to turn your general HR team into people data scientists, no advanced degrees required. One Model provides the people analytics infrastructure: a platform for you to import your workforce data from all sources, transform it into one analytics-ready asset, and build predictive models to help you solve business challenges. Our customers are creating customized models, and you can too. It's not as intimidating as you might think. It's super easy to get started: One Model will work with you to pull your people data out of any source that's giving you trouble (for example, your Greenhouse ATS, Workday, or Birst). We'll export it, clean it up, and put it all in the same place. It takes just a few weeks. From there, you can glean some insights from it. To learn more about One Model's capabilities (or to ask us any questions about how we create our predictive models), click the button below and a team member will reach out to answer all of your questions! Let's Talk More About Predictive Analytics for HR. About One Model One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.
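As promised above, here is a minimal sketch of the kind of sales-rep model described in this article. It is purely illustrative: the column names (referral_source, speaks_spanish, personality_score), the tiny data set, and the choice of scikit-learn and logistic regression are assumptions made for the example, not a description of how One Model builds its models.

```python
# Illustrative only: a tiny predictive model in the spirit of the sales-rep
# example above. The data and column names are made up.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Historical reps: the predictors plus the outcome we care about
# (did they sell roughly twice the average quota?).
history = pd.DataFrame({
    "referral_source":   ["indeed", "referral", "indeed", "agency",
                          "referral", "agency", "indeed", "agency"],
    "speaks_spanish":    [1, 1, 0, 0, 1, 0, 1, 0],
    "personality_score": [82, 75, 64, 58, 88, 61, 79, 55],
    "top_performer":     [1, 1, 0, 0, 1, 0, 1, 0],
})

features = ["referral_source", "speaks_spanish", "personality_score"]
model = Pipeline([
    ("encode", ColumnTransformer(
        [("source", OneHotEncoder(handle_unknown="ignore"), ["referral_source"])],
        remainder="passthrough")),
    ("classify", LogisticRegression(max_iter=1000)),
])
model.fit(history[features], history["top_performer"])

# Score a new applicant the same way we scored historical reps.
applicant = pd.DataFrame([{
    "referral_source": "indeed", "speaks_spanish": 1, "personality_score": 80,
}])
print(model.predict_proba(applicant[features])[0, 1])  # estimated probability of being a top performer
```

In practice you would hold out a test set and check the model's accuracy before trusting scores like this, and you would watch for exactly the problems listed above: too little history, and history that bakes in bias, as in the Amazon example.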

    Read Article

    13 min read
    Stacia Damron

    What's machine learning? Is it artificial intelligence? Deep learning? Is it black magic, or better yet, just a phrase the industry's marketing folks say to pique your interest? The answer? Let's crack it open. What is it? Machine learning is an application of artificial intelligence (AI) that uses statistical techniques to give computer systems the ability to automatically learn and steadily improve their performance from their experience with the data - all without being explicitly programmed to do so. Think of it this way: it's a program that's automatically learning and adjusting its actions without human intervention. Cool, right? How is it used in data analytics? Machine learning is used to create complex models and algorithms that predict specific outcomes - hence the term predictive analytics. The predictive models it creates allow the end users (data scientists, engineers, researchers, or analysts) to "produce reliable, repeatable decisions and results" that reveal otherwise "hidden insights through learning historical relationships and trends in the data." [1] Here's what artificial intelligence (AI) and machine learning are not: 1) Glorified statistics. Sure - both statistics and machine learning address the question "how do we learn from data?" In its most basic definition, "Statistics is a branch of mathematics dealing with data collection, organization, analysis, interpretation and presentation." [2] Statistics uses samples, populations, and hypotheses to understand and interpret data. Machine learning, on the other hand, allows computers to act and make data-driven decisions without being directly programmed to carry out a specific task, and it comes in two main flavors: supervised and unsupervised learning. Picture supervised learning explained with apples. Supervised machine learning is when a program is trained on a pre-defined dataset. It's provided with example inputs (the data) and their desired outputs (results), and the computer's goal is to analyze these to learn the rule that maps inputs to outputs. It can then apply that knowledge to adjust and improve its future predictions about output values. In the apple example, you provide a data set that teaches the program, "these are apples; this is what apples look like." The desired output in this case is knowing and recognizing an apple. The program learns from this data, and next time it will be able to identify apples on its own. Voila! It has officially been trained. A real-world example of supervised learning is predicting a car's sale price based on a given dataset of previous auto sales for that make, model, and condition in that area. Now picture unsupervised learning explained with a bowl of mixed fruit. Unsupervised learning, on the other hand, is when a program automatically recognizes patterns or relationships in a given dataset. The algorithm is essentially left on its own to find structure in its input, since it isn't provided with classifications or labels ahead of time. Here, the raw data is the assortment of fruit. In it goes, and the algorithm finds structure in the data (it notices there are some apples, some bananas, and some oddly shaped oranges). It processes this information and clusters the items into groups to be classified. The output is the fruit sorted into neatly defined groups: one for apples, one for bananas, and one for the oranges.
Unsupervised learning helps you make inferences about the data and classify hidden structures within previously unlabeled data. Since unsupervised learning helps discover and classify hidden patterns in a dataset, a solid example would be a program grouping a variety of documents (the documents being the dataset) by subject with no prior knowledge or training. (A short code sketch contrasting supervised and unsupervised learning appears at the end of this article.) To summarize: while machine learning certainly utilizes statistics, it's a different way of addressing and solving a problem. It's not some magical version of stats that's going to suddenly provide all the answers. On that note... 2) It's not magic that will solve any problem with any data set with 100% accuracy. Machine learning algorithms can only analyze the data they're provided. For example, a machine learning system trained on a company's current customer data may only be able to predict the needs of customers who resemble the ones already in that data, overlooking customer segments that aren't represented in its training set. It can also inherit any biases baked into that data. Machine learning isn't perfect. Take Google, for example. The tech giant famously struggled with this in 2015, when its Google Photos software exhibited signs of accidental algorithmic racism. It made headlines when the machine learning algorithm mistakenly tagged people of certain ethnicities as gorillas. The company took immediate action, removed all gorilla-based learnings from the training data, and modified the algorithm. Google Photos will no longer tag any image as a gorilla, chimpanzee, or monkey - including the actual animals. While machine learning can make some extremely helpful and enriching business predictions, it's not always going to make accurate predictions. Machine learning is just that - constantly learning. 3) Marketing buzzwords. At this point, journalists are saying "AI" is on its way to becoming the meaningless, intangible tech-industry equivalent of "all natural." Yes, there are absolutely some companies that claim to have an AI component when they actually do not, just to hype up their product (and shame on them!). But for every company that's throwing the term around loosely, there are a few more that just don't know any better, in part because AI isn't well defined. As a result, any piece of software that employs a convolutional neural network, deep learning system, etc. is being marketed as "powered by artificial intelligence." Here are some questions you can ask to evaluate whether a company truly has an AI strategy: a) Is the company using machine learning? Artificial intelligence technology uses machine learning. Can they tell you what machine learning algorithms they're using? If you ask a rep this question and you're met with a blank stare, that's a red flag. b) Ask about the data. What data are you using to train your algorithms? Is there enough of it? According to this source, around 5,000 training examples are necessary to begin generating results, and 10 million training examples are needed to achieve human-level performance. Also, ask about a company's claim to reliably produce a certain result. How do they generate that number? How do they prevent overfitting errors? c) Get to know the technology and the company itself. Was this technology developed in-house? What was the company doing before? Were they always an AI company specializing in predictive analytics, or were they riding the bandwagon of whatever was cool and trendy before?
No one is an expert in one thing for a few years and then suddenly an expert in something totally different that happens to be hot right now. Who founded the company, and where does their industry expertise lie? Learn about the current leadership. If you stick with the checklist above to vet AI technology, you'll be able to dig up some answers pretty quickly - and you'll look pretty freakin' savvy while you're doing it. So, how is machine learning being used in the HR space? Well-informed leaders in the people analytics space are embracing AI and budgeting for the resources to incorporate machine learning technology into their HR strategies for the long term. Machine learning technology can create a variety of predictive models that help companies gain insights and solve challenges in the following areas: Recruiting - Where are you sourcing your best candidates from? Know where your high performers are coming from and get insights into the attributes their resumes or career histories have in common. Retention & Employee Attrition - Predictive analytics uses a company's historical data to identify potential attrition risks before they occur, giving leadership insights it wouldn't otherwise have and an opportunity to take preventative action. Absenteeism - The Bureau of Labor Statistics says that in 2017, the average employee missed 2.8 days of work annually. It doesn't seem like a lot, but if your company has 1,000 employees, that amounts to 2,800 missed days per year. According to Circadian, unscheduled absenteeism costs roughly $2,650 per salaried employee each year, which works out to roughly $2.65 million a year for a 1,000-person company. That's a huge incentive to find a solution. Predictive models can help identify patterns and trends in why employees are absent. Would they have been able to complete their assignments as scheduled if they were able to work from home? Are there a lot of absences under a particular manager? Or is a particular department under a high level of stress? The answers may lie in the data. Promotions and Management - What inputs in the company's datasets indicate a higher likelihood of minorities receiving promotions or opportunities? How can we encourage more women to apply for or join X department? Predictive models can analyze the data and provide helpful insight into the why behind these patterns. People Spend - Predictive models can forecast the effect of any type of spend on future workforce productivity, whether that's hiring more employees, increasing training and educational opportunities, or implementing new systems. What that means for today's people analytics leaders More and more companies are beginning to benefit from incorporating machine learning technology that supports their long-term strategy. If you're evaluating different tools to solve your people analytics challenges, add One Model to your list. One Model provides people analytics infrastructure: a platform for you to import your workforce data and build predictive models to help you solve business challenges such as the ones listed above (and many more). Our customers can create customized models or use our out-of-the-box integrations. To learn more about One Model's capabilities (or to ask us any questions about our machine learning algorithms and how we create our predictive models), click the button below and a team member will reach out to answer all of your questions.
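To make the apples-and-fruit distinction above concrete, here is a minimal sketch in Python using scikit-learn. It isn't how any particular product works; it simply contrasts a supervised classifier, which is handed both the data and the answers, with an unsupervised clustering step, which has to find the groups on its own. The two fruit measurements and their values are made up for the example.

```python
# Illustrative only: supervised vs. unsupervised learning on toy "fruit" data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

# Each fruit is described by two made-up measurements: weight (grams) and roundness (0-1).
fruit = np.array([
    [150, 0.90], [160, 0.95], [145, 0.88],   # apples
    [120, 0.30], [115, 0.25], [130, 0.35],   # bananas
    [200, 0.97], [210, 0.93], [190, 0.96],   # oranges
])
labels = ["apple"] * 3 + ["banana"] * 3 + ["orange"] * 3

# Supervised: we provide the data AND the desired outputs (the labels),
# then ask the trained model to identify a new, unseen fruit.
classifier = DecisionTreeClassifier(random_state=0).fit(fruit, labels)
print(classifier.predict([[155, 0.92]]))  # -> ['apple']

# Unsupervised: same data, no labels. The algorithm has to discover on its
# own that there are three natural groups in the input.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(fruit)
print(clusters)  # e.g. [1 1 1 0 0 0 2 2 2] -- group ids, not fruit names
```

The classifier can say "this is an apple" because it was trained on labeled examples; the clustering step can only say "these nine items fall into three groups," which is exactly the labeled-versus-unlabeled distinction described above.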
About One Model One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team. [1] "Machine Learning: What it is and why it matters". www.sas.com. Retrieved 2016-03-29. [2] Dodge, Y. (2006) The Oxford Dictionary of Statistical Terms, Oxford University Press. ISBN 0-19-920613-9

    Read Article

    3 min read
    Stacia Damron

    Today, at The HR Technology Conference and Exposition, HRExaminer unveiled its 2019 Watchlist - "The Most Interesting AI Vendors in HR Technology." One Model is one of thirteen companies named, narrowed down from a list of over 200 intelligence tools, only 70 of which were invited to provide a demo. One Model was featured alongside several notable vendors, including Google, IBM, Workday, and Kronos. The Criteria HRExaminer, an independent analyst of HR technology and intelligence tools, selected winners across five distinct categories: AI as a Platform, Data Workbench, Microservices, Embedded AI, and First Suite. One Model was named as one of two featured companies in HRExaminer's Data Workbench category and commended for its management of disparate data from disparate sources - specifically the platform's robust Analytics Integration. "Each of the companies on our 2019 Watchlist is demonstrating the best example of a unique value proposition. While we are in the early stages of the next wave of technology, they individually and collectively point the way," said John Sumser, HRExaminer's founder and Principal Analyst. "Congratulations are in order for the work that they do. The award is simply a recognition of their excellence." Sumser goes on to state, "There are two main paths to analytics literacy and working processes in today's market. The first is templated toolkits for specific purposes that can give employers a quick start and repeatable/benchmarkable processes. One Model represents the alternative: a complete set of tools for designing and building your own nuanced analytics, predictions and applications." One Model is currently exhibiting at The HR Technology Conference and Exposition in Las Vegas, September 11-13. Attendees can visit booth #851 for more information. About One Model: One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.

    Read Article

    6 min read
    Phil Schrader

    There will be over 400 HR product and service providers in the expo hall at HR Tech in September. A typical company makes use of 8 to 11 of these tools, some as many as 30. And that is wonderful. I love working in HR Technology. Companies are increasingly free to mix and match different solutions to deliver the employee experience that is right for them. New products come to market all the time. And the entrepreneurs behind these products are pretty consistently driven by a desire to make work better for employees. Better for employees that don't work in HR Operations and People Analytics, that is. Because all that innovation leads to data fragmentation. In your organization, you might recruit candidates using SmartRecruiters in some countries and iCIMS in others. You might do candidate assessments in Criteria Corp and Weirdly. Those candidates might get hired into Workday, have their performance reviews in Reflektive, and share their own feedback through Glint surveys. This would not be in the least bit surprising. And it also wouldn't be surprising if your internal systems landscape changed significantly within the next 12 months. The pace of innovation in this space is not slowing down. And the all-in-one suite vendors can't keep pace with 400 best-of-breed tools. So if you want to adopt new technology and benefit from all this innovation, you will have to deal with data fragmentation. How do you adopt new innovation without losing your history? What if the new technology isn't a fit? Can you try something else without having a gaping hole in your analytics and reporting? How will you align your data to figure out if the system is even working? This is where One Model fits into the mix. We're going to call this One Model Difference your Data Insurance Policy. One Model pulls together all the data from your HR systems and related tools, then organizes and connects this data as if it all came from a single source. This means you can transition between technology products without losing your data. This empowers you to choose which technology fits your business without suffering a data or transition penalty. I remember chatting about this with Chris back at HR Tech last year. At the time I was working at SmartRecruiters and I remember thinking... Here we are, all these vendors making our pitches and talking about all the great results you're going to get if you go with our product. And here's Chris literally standing in the middle of it all with One Model. And if you sign up with One Model, you'll be able to validate all these results for yourself because you can look across systems. For example, you could look at your time to hire for the last 5 years and see if it changed after you implemented a new ATS. If you switched out your HRIS, you could still look backwards in time from new system to old and get a single view of your HR performance. You could line up results from different survey vendors. You'd literally have "one model," and your choice of technology on top of that would be optional. That's a powerful thought. A few months later, here I am getting settled in at One Model. I'm getting behind the scenes, seeing how all this really comes together. And yeah, it looks just as good from the inside as it did from the outside. I've known Chris for a while, so it's not like I was worried he was BS-ing me.
But, given all the new vendors competing for your attention, you'd be nuts if you hadn't become a little skeptical about claims like data-insurance-policy-that-makes-it-so-you-can-transition-between-products-without-losing-your-data. So here are a couple of practical reasons to believe, beyond the whole cleaning up and aligning your data stuff we covered previously. First off, One Model is... are you ready... single tenant. Your data lives in its own separate database from everyone else's data. It's your data. If you want to have direct database access into the data warehouse that we've built for you, you can have it. Heck, if you want to host One Model in your own instance of AWS, you can do that. We're not taking your data and sticking it into some rigid multi-tenant setup at arm's length from you. That would not be data insurance. That would be data hostage-taking. Second, One Model doesn't charge per data source. That would be like one of those insurance policies where everything is out-of-network. With One Model, your systems are in-network. If you add a new system and you want the data in One Model, we'll add the data to One Model. If we don't have a connector, we'll build one. One of our clients has data from 40 systems in One Model. 40 systems. In one single model. In its own database. With no fees per data source. So go wild at HR Tech this fall. It is in Vegas, after all. Add all the solutions that are right for your employees. And tell all your new vendors you'll be able to hold them accountable for all those bold ROI-supporting metrics they're claiming. Because you can put all your data into One Model for all your people analytics. You can see for yourself. And if you swap that vendor out later, you'll take all your data with you. Just don't wait until then to reach out to us at One Model. We love talking shop. And if you happen to like what you see with One Model, we can have your data loaded well before you get to Vegas. About One Model: One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.
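To illustrate the time-to-hire example above with something concrete, here is a minimal sketch, assuming two hypothetical requisition extracts (one from an old ATS, one from its replacement) with made-up layouts and column names. It shows only the basic idea of aligning two sources into one continuous history so a metric survives a system switch; it is not a description of how One Model's own modeling layer works.

```python
# Illustrative only: stitching two hypothetical ATS extracts into one
# time-to-hire trend that spans the system switch.
import pandas as pd

# Old ATS export (used through 2017) and new ATS export (2018 onward).
old_ats = pd.DataFrame({
    "req_opened": ["2016-02-01", "2017-03-15", "2017-11-02"],
    "hired_on":   ["2016-03-20", "2017-05-01", "2017-12-20"],
})
new_ats = pd.DataFrame({
    "opened_date": ["2018-01-10", "2018-06-05", "2019-02-14"],
    "start_date":  ["2018-02-28", "2018-07-15", "2019-03-30"],
})

def normalize(df, opened_col, hired_col, source):
    """Map one source's columns onto a shared schema and tag its origin."""
    return pd.DataFrame({
        "opened": pd.to_datetime(df[opened_col]),
        "hired": pd.to_datetime(df[hired_col]),
        "source": source,
    })

hires = pd.concat([
    normalize(old_ats, "req_opened", "hired_on", "old_ats"),
    normalize(new_ats, "opened_date", "start_date", "new_ats"),
], ignore_index=True)

hires["time_to_hire_days"] = (hires["hired"] - hires["opened"]).dt.days

# One continuous metric across the ATS change: median time to hire by year.
print(hires.groupby(hires["hired"].dt.year)["time_to_hire_days"].median())
```

With the history aligned like this, the 2018 system switch doesn't punch a hole in the trend: you can compare the years before and after and judge for yourself whether the new ATS actually moved the metric.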

    Read Article

    3 min read
    Stacia Damron

    The One Model team is pleased to announce its official launch of One AI. The new tool integrates cutting-edge machine learning capabilities into the current One Model platform, equipping HR professionals with readily accessible, unparalleled insights from their people analytics data. One Model's core platform enables its customers to import multiple data sources into one extensible, cloud-based platform. Organizations are then able to take full control of their people and business data, gaining increased visibility and spotting trends in the data that would otherwise remain unnoticed. Machine Learning Insights like HR Professionals Have Never Seen Before One AI delivers a suite of out-of-the-box predictive models and data extensions, allowing organizations to understand and predict employee behavior like never before. One AI extends the current One Model platform's capabilities, so HR professionals can now access machine learning insights alongside their current people analytics data and dashboards. Additionally, the solution is open, allowing customers and their partners to create and run their own predictive models or code within the One Model platform, enabling true support for an internal data science function. "One AI is a huge leap into the future of workforce analytics," says Chris Butler, CEO of One Model. "By applying One Model's full understanding of HR data, our machine learning algorithms can learn from all of a customer's data and predict on any target that our customers select." The new tool offers faster insights: it can create a turnover risk predictive model in minutes, consuming data from across the organization that is cleaned, structured, and tested through dozens of ML models and thousands of hyperparameters. It uses these to create a unique, accurate model that can provide explanations and identify levers for reducing an individual employee's risk of turnover. This ability to explain and identify change levers is a cutting-edge capability. It allows One AI to choose a high-accuracy model that would otherwise be unintelligible and explain its choices to our users. "The launch of One AI will have a huge impact on current and future customers alike," says Stacia Damron, One Model's Senior Marketing Manager. "One AI's ability to successfully incorporate machine learning insights into an organization's people analytics strategy is significant. It means it's possible to quickly and automatically produce models that can analyze bigger, more complex data and deliver faster, more accurate results. By creating more precise models, and augmenting internal capabilities, an organization can better identify cost-saving opportunities and mitigate risk." The One Model team looks forward to sharing more information about One AI with this year's People Analytics World Conference attendees in London on April 11-12. Stop by the One Model booth if you would like to connect and learn more. About One Model: One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.
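The test-many-models-then-explain-the-winner workflow mentioned in this announcement can be sketched in generic terms. The snippet below is not One AI's implementation; it is a made-up, minimal example of the pattern using scikit-learn: generate some synthetic turnover data, try a couple of model families and hyperparameter settings, keep the best performer, and then report which features drive the prediction, a rough stand-in for the "levers" idea.

```python
# Illustrative only: a generic "search models, then explain" pattern for
# turnover risk. The features, data, and parameter grids are all made up.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "tenure_years":       rng.uniform(0, 15, n),
    "pay_ratio_to_peers": rng.normal(1.0, 0.15, n),
    "engagement_score":   rng.uniform(1, 5, n),
})
# Synthetic target: shorter tenure, lower relative pay, and lower engagement
# make an exit more likely.
risk = (0.6 - 0.03 * X["tenure_years"]
        - 0.4 * (X["pay_ratio_to_peers"] - 1.0)
        - 0.1 * (X["engagement_score"] - 3.0))
y = (rng.uniform(0, 1, n) < risk.clip(0.05, 0.95)).astype(int)

# Try a couple of model families and hyperparameter settings, scored by F1.
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300], "max_depth": [3, None]}),
]
best = max(
    (GridSearchCV(est, grid, scoring="f1", cv=5).fit(X, y) for est, grid in candidates),
    key=lambda search: search.best_score_,
)
print(best.best_estimator_, round(best.best_score_, 3))

# A rough stand-in for "explanations and levers": which features matter most?
model = best.best_estimator_
if hasattr(model, "feature_importances_"):
    print(dict(zip(X.columns, model.feature_importances_.round(3))))
else:
    print(dict(zip(X.columns, model.coef_[0].round(3))))
```

A production-grade tool layers far more on top of a loop like this (data cleaning, many more model families, proper explainability methods), but the overall shape, search broadly and then explain the chosen model, is the same one described above.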

    Read Article