7 min read
The One Model Team
People Analytics has evolved far beyond basic HR metrics like turnover rates or headcount tracking. As organizations seek to make smarter, proactive decisions about their workforce, they're turning to more sophisticated People Analytics techniques. Moving beyond foundational metrics, advanced analytics — like predictive modeling, sentiment analysis, employee journey mapping, and ethical AI considerations — offer HR professionals the opportunity to play a powerful, proactive role in shaping their organization's future. With these tools, HR leaders can anticipate challenges, influence key decisions, and drive meaningful, strategic change. This blog explores four advanced analytics techniques that help People Analytics leaders move from basic reporting to making decisions that resonate throughout the organization.

1. Predictive Analytics: Looking Ahead with Confidence

What It Is: Predictive analytics leverages historical data to forecast future workforce trends, such as turnover risks, performance outcomes, or employee engagement levels. By identifying patterns within existing data, organizations can make proactive decisions, positioning them to address issues before they arise.

Applications: Predictive analytics is highly valuable for HR teams aiming to prevent turnover in high-risk teams, pinpoint factors that impact employee engagement, or understand potential productivity trends. For instance, if a team shows signs of elevated turnover risk, leaders can intervene early — offering targeted support or resources to improve retention.

Example: Consider a company that uses predictive analytics to identify teams with high burnout risk based on previous data trends, like prolonged overtime hours or low engagement scores. With this foresight, HR can intervene with support initiatives, helping employees recharge and boosting retention. Learn more about our One AI and One AI Assistant predictive analytics.

2. Sentiment Analysis: Understanding Employee Emotion at Scale

What It Is: Sentiment analysis uses natural language processing (NLP) to interpret the emotional tone behind employee feedback, open-ended survey responses, and internal communication channels. By analyzing this data, companies gain a real-time understanding of employee morale and can detect early signs of dissatisfaction.

Applications: Sentiment analysis can track morale trends across the organization, identify engagement dips, and help HR better understand employee needs. This technique allows for "pulse" insights, where sentiment is monitored continuously, alerting leaders to shifts in morale.

Example: A company might use sentiment analysis to monitor feedback on a recent policy change. If negative sentiment spikes, leadership can quickly address concerns, maintaining trust and morale by responding with empathy and transparency.
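Sentiment analysis is easy to prototype with off-the-shelf NLP tooling. Here is a minimal sketch using NLTK's VADER scorer on hypothetical survey comments; it illustrates the technique only and is not how One Model implements it.

```python
# Score open-ended survey comments for sentiment (hypothetical comments).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

comments = [
    "The new hybrid policy gives me real flexibility.",
    "Nobody explained why the policy changed. Frustrating.",
]
for text in comments:
    score = sia.polarity_scores(text)["compound"]  # -1 (negative) to +1 (positive)
    print(f"{score:+.2f}  {text}")

# Averaging compound scores per week gives a simple morale "pulse" to monitor.
```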
3. Employee Journey Mapping: Visualizing the Employee Experience

What It Is: Employee journey mapping visualizes each stage of an employee's experience, from recruitment to exit, identifying critical touchpoints that affect engagement, satisfaction, and retention. By mapping these interactions, HR can see where employees thrive or struggle, allowing for targeted interventions.

Applications: Journey mapping is valuable for tracking specific experiences such as onboarding effectiveness, career development paths, and retention at pivotal moments. It provides insight into the employee lifecycle, helping HR design initiatives that enhance satisfaction and reduce turnover.

Example: Using Sankey diagrams, a company could visualize the journey from onboarding through various career milestones — revealing, for instance, that many employees exit after two years in certain roles. This insight enables HR to implement targeted engagement or development programs at critical points in an employee's journey.
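Sankey diagrams like the one just described are quick to prototype. Here is a minimal sketch with plotly, using made-up stage names and counts purely for illustration.

```python
# Employee-journey Sankey sketch with made-up flow counts (illustrative only).
import plotly.graph_objects as go

stages = ["Onboarding", "Year 1", "Year 2", "Promoted", "Exited"]
links = dict(
    source=[0, 1, 1, 2, 2],       # index into `stages`
    target=[1, 2, 4, 3, 4],
    value=[100, 80, 20, 45, 35],  # hypothetical headcounts moving between stages
)

fig = go.Figure(go.Sankey(node=dict(label=stages), link=links))
fig.update_layout(title_text="Employee journey: onboarding to exit")
fig.show()
```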
4. Ethical Considerations in Advanced Workforce Analytics

Why It Matters: As People Analytics methods become more advanced, ethical considerations grow in importance, especially around data privacy and employee consent. Ensuring responsible data use is essential for maintaining employee trust and aligning with broader organizational values.

Best Practices: To conduct People Analytics ethically, companies should anonymize data wherever possible, obtain clear employee consent, and maintain transparency about data collection and usage. A commitment to ethical guidelines isn't just about compliance — it strengthens trust and encourages openness to analytics-driven initiatives.

Example: Organizations risk overstepping by monitoring too closely, which can create a feeling of surveillance among employees. Ethical People Analytics is about balance: using data to benefit the organization while respecting employees' privacy and autonomy.

Conclusion: Moving from Insight to Impact

The field of People Analytics has grown into a powerful strategic tool, and advanced HR analytics techniques like these (and others) enable HR leaders to anticipate, understand, and enhance the employee experience in proactive, strategic ways. Ready to take your People Analytics impact to the next level? Measuring the Value of People Analytics dives even further into implementing these advanced analytics strategies and gaining a sustainable advantage. Complete the form below to download it today and empower your People Analytics team with the tools needed for meaningful, data-driven change.
5 min read
The One Model Team
Workforce planning and forecasting have become paramount for finance leaders navigating market uncertainty and staying ahead of the competition. One Model's advanced People Analytics platform enables finance leaders to make smarter, data-driven decisions, propelling their business toward sustainable growth and increased profitability.

Centralize HR and finance data for accurate predictions.

The foundation of effective workforce planning lies in the ability to consolidate data from various sources into a single, reliable location. One Model achieves this by seamlessly integrating HR data with finance data, creating a centralized hub of valuable insights. By breaking down silos and enabling data collaboration, finance leaders gain a comprehensive understanding of their workforce, leading to more accurate predictions and tactical strategies.

Slice and dice data more efficiently.

Traditional ERP systems often struggle to handle the sheer volume and complexity of workforce data, leading to sluggish reporting and analysis. One Model, on the other hand, offers the ability to slice and dice data with ease, providing real-time insights and granular, employee-level detail. Finance leaders can effortlessly examine cost and productivity drivers at a departmental or individual level, empowering them to implement strategic initiatives with surgical precision.

Identify high performers and the roles that deliver the most value.

Understanding the contribution of each role within an organization is crucial for effective workforce planning. One Model's advanced analytics capabilities offer improved visibility into productivity, revealing which roles deliver the most value to the organization. By identifying top-performing roles and focusing on their development, companies can reduce costly turnover, unleash the full potential of their workforce, and bolster overall performance.

Better prepare for mergers, acquisitions, and divestitures.

The financial services sector often sees mergers, acquisitions, and divestitures, which can lead to complex organizational changes and talent restructuring. With One Model, finance leaders can confidently embark on these transformations. One Model provides quick insight into topics such as spans and layers that would traditionally require high-cost, time-consuming consulting projects. From developing clear organizational structures to conducting talent audits that retain key personnel, One Model ensures a smooth transition and alignment of talent with strategic goals.

Make more data-informed business decisions.

Quick, informed decisions are critical for CFOs. With One Model, you can build your own metrics and definitions for headcount, FTE (full-time equivalents, updated daily), and other performance indicators to assess the return on investment from talent programs. And if Finance and HR can't agree on how a certain metric (e.g., headcount) is calculated, One Model can support both variations. With clear insight into headcount and FTEs, you can better measure performance and plan labor needs. One Model delivers a holistic view of talent distribution and performance so finance leaders can optimize headcount for the company's needs, maintain cost-efficiency, and strike the right balance between talent and resources.
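To see why supporting both variations matters, here is a minimal pandas sketch of two defensible headcount definitions. The column names are hypothetical; One Model handles this in-platform.

```python
# Two defensible answers to "what's our headcount?" (hypothetical columns).
import pandas as pd

emp = pd.read_csv("workforce.csv")  # one row per active or inactive employee record

# HR's view: count of active people, regardless of hours worked.
hr_headcount = (emp["status"] == "Active").sum()

# Finance's view: full-time equivalents, weighting part-timers by hours.
finance_fte = emp.loc[emp["status"] == "Active", "standard_hours"].sum() / 40.0

print(f"Employee count: {hr_headcount}, FTE: {finance_fte:.1f}")
# Both are "correct" -- the point is to make each definition explicit and shared.
```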
Facilitate deeper conversations between HR and Finance.

HR and Finance teams can have more meaningful, pointed conversations using One Model — where all the workforce data is captured, data quality is managed, and all related dimensions (e.g., hierarchies, employee attributes) are available for analysis. Bringing HR and Finance together can help your company accelerate its People Analytics journey and more easily identify opportunities to improve profitability. With One Model you can gain insight into advanced metrics like the Return on Human Investment Ratio (the operating profit, adding back total compensation expense, returned for every dollar invested in employee compensation and benefits) and hundreds of others to level up your HR and Finance decision-making. Two examples of content specifically designed to align HR and Finance teams and empower them to make smarter, data-driven decisions:

Headcount Storyboard — a storyboard showing headcount represented in multiple ways: FTEs vs. employee counts, variations of which statuses are included or excluded, and so on. This information becomes readily comparable, with the metric definitions only a click away. Even better, the storyboard can be shared with the finance and HR partners in the discussion to explore on their own after the session. One Model is the best tool for counting headcount over time because it can support multiple variations.

Hierarchy Storyboard — side-by-side views of headcount as seen through the supervisor and cost hierarchies, emphasizing that both are simultaneously correct (i.e., the grand total is exactly the same). This also provides an opportunity to investigate situations where the cost and organizational hierarchies are not aligned. In many cases these situations can be explained; occasionally, though, errors from previous reorganizations or transfers have left costing information out of date for a given employee (or group of employees).
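For the quantitatively inclined, the Return on Human Investment Ratio described above reduces to a one-line calculation. A sketch with illustrative figures only:

```python
# Return on Human Investment Ratio, as defined above (illustrative figures).
operating_profit = 12_000_000      # hypothetical annual operating profit
total_comp_expense = 48_000_000    # hypothetical compensation + benefits spend

# Operating profit with compensation added back, per dollar invested in people.
rhir = (operating_profit + total_comp_expense) / total_comp_expense
print(f"RHIR: {rhir:.2f}")  # 1.25 -> $1.25 returned per $1.00 invested
```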
One Model is your partner for profitable growth

One Model stands out as the ideal People Analytics partner for companies seeking to drive profitability through data-driven decision-making. If you're ready to learn more, download our eBook 4 Ways CFOs Can Increase Profitability with One Model's People Analytics Platform to discover even more ways our platform can enhance your profits.
5 min read
Joe Grohovsky
In a recent editorial (here), Emerging Intelligence columnist John Sumser explains how pending EU Artificial Intelligence (AI) regulations will impact its global use. A summary of those regulations can be found here. You and your organization should take an interest in these developments, and yes, there are legal concerns over AI in HR. The moral and ethical concerns associated with the application of AI are something we must all understand in the coming years. Ignorance of AI capabilities and ramifications can no longer be an excuse.

Sumser explains how this new legislation will add obligations and restrictions beyond existing GDPR requirements, and that there is legislation applicable to machine learning in human resources. The expectation is that legal oversight will arise that may expose People Analytics users and their vendors to liability. These regulations may bode poorly for People Analytics providers. It is worth your while to review what is being drafted related to machine learning and the law, as well as how your current vendor addresses the three primary topics from these regulations:

Fairness – This addresses both the training data used in your predictive model and the model itself. Potential bias around attributes like gender or race may be obvious, but hidden bias often exists. Your vendor should identify biased data and allow you to either remove it or debias it.

Transparency – All activity related to your predictive runs should be identifiable and auditable. This includes the selection and disclosure of data, the strength of the models developed, and the configurations used for data augmentation.

Individual control over their own data – This relationship ultimately exists between workers and their employer. Sumser's article expertly summarizes a set of minimum expectations your employees deserve.

When it comes to HR law, our opinion is that vendors should have already self-adopted these types of standards, and we are delighted this issue is being raised. What are the differences between regulations and standards? Become a more informed HR leader by watching our Masterclass Series.

Why One Model is Preferred When It Comes to Machine Learning and the Law

At One Model we are constantly examining the ethical issues associated with AI. One Model already meets and exceeds the Fairness and Transparency recommendations; not begrudgingly, but happily, because it is the right thing to do. Where most competitors put your data into a proverbial AI black box, One Model opens its platform and allows full transparency and even modification of the AI algorithm your company uses. One Model has long understood HR law and the industry's obligation to develop rigor and understanding around data science and machine learning. The need for regulation and a legal standard for ethics has grown with the amount of snake oil and obscurity being heavily marketed by some HR People Analytics vendors.

One Model's ongoing plan to empower your HR AI initiatives includes:

Radical transparency.

Full traceability and automated version control (data + model).

Transparent local- and model-level justifications for the predictions that our machine learning component, One AI, makes.

By providing justifications and explanations for our decision-making process, One Model builds paths for user education and auditability across both simple and complex statistics.
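To make the Fairness topic concrete, here is a minimal sketch of one widely used bias screen, the four-fifths (disparate impact) ratio, applied to a model's outputs. It is a generic illustration with hypothetical column names, not One Model's implementation, and passing it is not a legal determination.

```python
# Four-fifths rule check on model outcomes (generic sketch, hypothetical columns).
import pandas as pd

scored = pd.read_csv("predictions.csv")  # rows: employee, gender, selected (0/1)

# Selection rate per group, e.g., share of each gender flagged as "promote".
rates = scored.groupby("gender")["selected"].mean()

# Disparate impact: each group's rate relative to the most-favored group.
impact_ratio = rates / rates.max()
print(impact_ratio)

# A common (not legally definitive) screen: ratios under 0.8 warrant review.
flagged = impact_ratio[impact_ratio < 0.8]
if not flagged.empty:
    print("Review for potential adverse impact:", list(flagged.index))
```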
Our objective has been to advance the HR landscape by upskilling analysts within their day-to-day jobs while still providing the latest cutting edge in statistics and machine learning. Providing clear, educational paths into statistics is at the forefront of our product design and roadmap, and One Model is just getting started.

You should promptly schedule a review of the AI practices being conducted with your employee data. Ignoring what AI can offer risks putting your organization at a competitive disadvantage, while deploying AI incorrectly may expose you to legal risk, employee distrust, compromised ethics, and faulty conclusions. One Model is glad to share our expertise around People Analytics AI with you and your team. High-level information on our One AI capability can be found in the following brief video and documents:

https://bit.ly/OneModelPredictiveModeling
https://bit.ly/OneModel-AI
https://bit.ly/HR_MachineLearning

For a more detailed discussion, please schedule a convenient time to talk: http://bit.ly/OneModelMeeting
10 min read
Dennis Behrman
Ever play with a Magic 8 Ball? Back in the day, you could ask it any question and get an answer in just a few seconds. And if you didn't like its response, you could just shake it again for a new prediction. So simple, so satisfying.

Today's HR teams and businesses obviously need more reliable ways of predicting outcomes and forecasting results than a Magic 8 Ball. But while forecasting and predicting sound similar, they're actually two different problem-solving techniques. Below, we'll go over both and explain what they're best suited for.

What is HR forecasting?

Remember the Magic 8 Ball? At first glance, it "predicts" or "forecasts" an answer to your question. This is not how forecasting works (at least at successful companies and HR departments). Instead, HR forecasting is the process of predicting or estimating future events based on past and present data, most commonly through trend analysis. "Guessing" doesn't cut it.

For example, we could use forecasting to discover how many customer calls Phil, our product evangelist, is likely to receive in the next day. Or how many product demos he'll lead over the next week. The data from previous years is already available in our CRM, and it can help us accurately predict and anticipate future sales and marketing events where Phil may be needed.

A forecast, unlike a prediction, must have logic to it. It must be defendable. This logic is what differentiates it from the Magic 8 Ball's lucky guess. After all, even a broken watch is right twice a day.

What is predictive analytics?

Predictive analytics is the practice of extracting information from existing data sets in order to determine patterns and trends that could potentially predict future outcomes. It doesn't tell you what will happen in the future; rather, it tells you what might happen.

For example, predictive analytics could help identify customers who are likely to purchase our new One AI software over the next 90 days. To do so, we could indicate a desired outcome (a purchase of our people analytics software solution) and work backwards to identify traits in customer data that have previously indicated a customer is ready to make a purchase soon. (For example, they might have decision-making authority on their people analytics team, an established budget for the project, a completed demo, and found Phil likeable and helpful.) Predictive modeling and analytics would run the data and establish which of these factors actually contributed to the sale. Maybe we'd find out Phil's likability didn't matter because the software was so helpful that customers found value in it anyway. Either way, predictive analytics and predictive modeling would review the data and help us figure that out — a far cry from our Magic 8 Ball.

Managing your people analytics data: how do you know if you need forecasting vs. predictive analysis?

Interested in how forecasting and/or predictive modeling can help grow your people analytics capabilities? Do you start with forecasting or predictive modeling? The infographic below (credit to Educba.com - thanks!) is a great place to compare your options.

Recap: Should you use forecasting or predictive analysis to solve your question?

Forecasting is a technique that takes data and predicts its future value by looking at its unique trends. For example: predicting average annual company turnover based on data from the prior 10+ years.
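Here is a minimal sketch of that kind of forecast, fitting a simple trend line to hypothetical annual turnover data. (A production forecast would use proper time-series methods, but the idea is the same.)

```python
# Forecast next year's turnover rate from a simple linear trend (toy data).
import numpy as np

years = np.arange(2009, 2019)                        # ten years of history
turnover = np.array([11.2, 11.8, 12.1, 12.6, 12.4,
                     13.0, 13.5, 13.9, 14.2, 14.8])  # hypothetical annual %

slope, intercept = np.polyfit(years, turnover, 1)    # fit a straight-line trend
forecast_2019 = slope * 2019 + intercept
print(f"Forecast 2019 turnover: {forecast_2019:.1f}%")
# No separate input/output variables here: the series' own trend is the model.
```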
Predictive analysis factors in a variety of inputs and predicts future behavior, not just a number. For example: out of this same employee group, which employees are most likely to leave (turnover = the output), based on analyzing past employee data and identifying the indicators (inputs) that often precede the output?

In the first case, there is no separate input or output variable, but in the second case, you use several input variables to arrive at an output variable. While forecasting is insightful and certainly helpful, predictive analytics can deliver even deeper people analytics insights. People analytics leaders have definitely caught on. We can help you figure it out and get started.

Want to see how predictive modeling can help your team with its people analytics initiatives? We can jump-start your people analytics team with our Trailblazer quick-start package, which changes the game by making predictive modeling an agile, iterative process. The best part? It allows you to start now and give your stakeholders a taste without breaking the bank, and it lets you build your case and lay the groundwork for the larger-scale predictive work you could continue in the future. Want to learn more? Connect with us.

Forecasting vs. Predictive Analysis: Other Relevant Terms

Machine Learning - Machine learning is a branch of artificial intelligence (AI) in which computers learn to act and adapt to new data without being explicitly programmed to do so. The computer is able to act independently of human interaction. Read Machine Learning Blog.

Data Science - Data science is the study of big data that seeks to extract meaningful knowledge and insights from large amounts of complex data in various forms.

Data Mining - Data mining is the process of discovering patterns in large data sets.

Big Data - Big data is another term for a data set that's too large or complex for traditional data-processing software. Learn about our data warehouse.

Predictive Modeling - Predictive modeling is a form of artificial intelligence that uses data mining and probability to forecast or estimate more granular, specific outcomes. Learn more about predictive analytics.

Descriptive Analytics - Descriptive analytics is a type of post-mortem analysis in that it looks at past performance. It evaluates that performance by mining historical data for the reasons behind previous successes and failures.

Prescriptive Analytics - Prescriptive analytics is an area of business analytics dedicated to finding the best potential course of action for a given situation.

Data Analytics - Plain and simple, data analytics is the science of inspecting, cleansing, transforming, and modeling data in order to draw insights from raw information sources.

People Analytics - All of these elements are important for people analytics. Need the basics? Learn more about people analytics.
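And here is the matching sketch for the predictive-analysis side: a classification model that predicts who is likely to leave. Columns and data are hypothetical; it simply illustrates the several-inputs-to-one-output pattern described in the recap above.

```python
# Predict which employees are likely to leave (hypothetical columns).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

df = pd.read_csv("employee_history.csv")
X = df[["tenure_years", "engagement_score", "time_since_promotion"]]  # inputs
y = df["terminated"]                                                  # output

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

preds = model.predict(X_test)
print("precision:", precision_score(y_test, preds))
print("recall:", recall_score(y_test, preds))
# Unlike the trend forecast, several inputs combine to predict an outcome per person.
```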
About One Model

One Model's people analytics solutions help thriving companies make consistently great talent decisions at all levels of the organization. Large and rapidly growing companies rely on our People Data Cloud™ people analytics platform because it takes all of the heavy lifting out of data extraction, cleansing, modeling, analytics, and reporting of enterprise workforce data. One Model pioneered people data orchestration, innovative visualizations, and flexible predictive models. HR and business teams trust its accurate reports and analyses. Data scientists, engineers, and people analytics professionals love the reduced technical burden. People Data Cloud is a uniquely transparent platform that drives ethical decisions and ensures the highest levels of security and privacy that human resource management demands.
6 min read
Dennis Behrman
Artificial intelligence (AI) has become an integral part of various industries, revolutionizing the way organizations make decisions. However, with the rapid advancement of AI technology, concerns about its potential risks and ethical implications have emerged. As a result, governments around the world are preparing to enact regulations addressing the use of AI in people decisions. In this blog post, we will explore the scope of these forthcoming regulations and discuss how People Data Cloud can help ensure equitable, ethical, and legally compliant practices in automated decision-making across organizations.

Broad Scope of Regulations

While generative AI, such as ChatGPT, has been the catalyst for these regulations, it is important to note that their scope will not be limited to such technologies alone. Instead, the regulations are expected to encompass a wide range of automated decision technologies, including rule-based systems and rudimentary scoring methods. By extending the regulatory framework to cover diverse AI applications, governments aim to ensure fairness and transparency in all areas of decision-making.

Beyond Talent Acquisition

Although talent acquisition processes like interview selection and hiring criteria are likely to be subject to regulation, the scope of these regulations will go far beyond recruitment alone. Promotions, raises, relocations, terminations, and numerous other people decisions will also be included. Recognizing the potential impact of AI on employees' careers and well-being, governments seek to create an equitable and just environment across the entire employee lifecycle.

Focus on Eliminating Bias and Ensuring Ethical Practices

One of the primary objectives of these regulations will be to eliminate bias in AI-driven decision-making. Biases can arise from historical data, flawed algorithms, or inadequate training, leading to discriminatory outcomes. Governments will emphasize the need for organizations to proactively identify and mitigate biases, ensuring that decisions are based on merit and competence rather than factors such as race, gender, or age. Ethical considerations, including privacy and consent, will also be critical aspects of the regulatory landscape.

Be Prepared. Join the Regulations and Standards Masterclass today.

Learning about AI regulations and standards for HR has never been easier, with an enlightening video series from experts across the space sharing the key concepts you need to know.

A Holistic Approach to Compliance

To comply with forthcoming AI regulations, organizations must evaluate their entire people data ecosystem. This includes assessing where data resides, which technologies are involved in decision-making processes, the level of human review and transparency afforded, and the overall auditability of automated decisions. Achieving compliance will require robust systems that enable organizations to monitor and assess the fairness and transparency of their AI-driven decisions.

One AI is Your Automated People Decision Compliance Platform

As governments gear up to regulate AI in people decisions, organizations must be prepared to adapt and comply with the evolving legal landscape. The scope of these regulations will extend beyond generative AI to encompass a broad range of automated decision technologies. Moreover, regulations will address not only talent acquisition but also many other aspects of employee decision-making. By emphasizing the elimination of bias and ethical practices, governments seek to create fair and equitable workplaces.
To ensure compliance with AI regulations, organizations can leverage platforms like One Model's One AI, which is fully embedded into every People Data Cloud product. The platform provides the necessary machine learning and predictive modeling capabilities, acting as a "clean room" that enables compliant, data-informed people decisions. By leveraging such tools, organizations can future-proof themselves against audits and demonstrate their commitment to ethical, unbiased decision-making in the AI era.

Request a personal demo to see how One AI keeps your enterprise people decisions ethical, transparent, and legally compliant.

Learn more about One AI HR software.
10 min read
Phil Schrader
Post 1: Sniffing for Bull***t

As a people analytics professional, you are now expected to make decisions about whether to use various predictive models. This is a surprisingly difficult judgment with important consequences for your employees and job applicants. In fact, I started drafting a lovely little three-section blog post around this topic before realizing there was zero chance I could pack everything into a single post. There are simply no hard and fast rules you can follow to know if a model is good enough to use "in the wild." There are too many considerations.

To take an initial example, what are the consequences of being wrong? Are you predicting whether someone will click on an ad, or whether someone has cancer? In fact, even talking about model accuracy is multifaceted. Are you worried about detecting everyone who does have cancer -- even at the risk of false positives? Or are you more concerned about avoiding false positives?

Side note: If you are a people analytics professional, you ought to become comfortable with the ideas of precision and recall. Many people have produced explanations of these terms, so we won't go into them here. Here is one from Towards Data Science.

So, all that said, instead of a single long post attempting to cover a respectable amount of this topic, we are going to put out a series of posts under the heading: Evaluating a Predictive Model: Good Smells and Bad Smells. And since I've never met an analogy that I wasn't willing to beat to death, we'll use that smelly comparison to help you keep track of the level at which we are evaluating a model. For example, in this post we're going to start way out at bull***t range.

Sniffing for Bull***t

As this comparison implies, you ought to be able to smell these sorts of problems from pretty far out. In fact, for these initial checks, you don't even have to get close enough to sniff around at the details of the model. You're simply going to ask the producers of the model (vendor or in-house team) a few questions about how they work to see if they are offering you potential bull***t.

At One Model, we're always interested in sharing our thoughts on predictive modeling. One of these great chats is available on the other side of this form. Back to our scheduled programming.

Remember that predictions are not real. Because predictive models generate data points, it is tempting to treat them like facts. But they are not facts. They are educated guesses. If you are not committed to testing them and reviewing the methodology behind them, then you are contenting yourself with bull***t. Technically speaking, by bull***t I mean a scenario in which you are not actually concerned with whether the predictions you are putting out are right or wrong. For those of you looking for a more detailed theory of bull***t, I direct you to Harry G. Frankfurt.

At One Model we strive to avoid giving our customers bull***t (yay us!) by producing models with transparency and tractability in mind. By transparency we mean that we are committed to showing you exactly how a model was produced, what type of algorithm it is, how it performs, how features were selected, and what other decisions were made to prepare and clean the data. By tractability we mean that the data is traceable and easy to wrangle and analyze. When you put these concepts together, you end up with predictive models that you can trust with your career and the careers of your employees.
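What might transparency look like in practice? One hypothetical (and deliberately simple) sketch: every model ships with a self-describing summary that answers exactly the questions above. This is an illustration, not One Model's actual format.

```python
# A toy "model card": the kind of summary a transparent producer should hand over.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    algorithm: str
    features: list
    test_precision: float
    test_recall: float
    data_prep_notes: list = field(default_factory=list)

card = ModelCard(
    algorithm="gradient boosted trees",            # hypothetical choices throughout
    features=["tenure_years", "engagement_score", "overtime_hours"],
    test_precision=0.78,
    test_recall=0.64,
    data_prep_notes=[
        "dropped 'severance_pay' (leaks the outcome)",
        "imputed missing engagement scores with team median",
    ],
)
print(card)  # if a producer can't fill this in, you smell bull***t
```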
If, for example, you produce an attrition model, transparency and tractability will mean that you are able to educate your data consumers on how accurate the model is. It will mean that you have a process set up to review the results of predictions over time and see if they are correct. It will mean that if you are challenged about why a certain employee was categorized as a high attrition risk, you will be able to explain which features were important in that prediction. And so on.

To take a counterexample, there's an awful lot of machine learning going on in the talent acquisition space. Lots of products out there promise to save your recruiters time by using machine learning to estimate whether candidates are a relatively good or a relatively bad match for a job. This way, you can make life easier for your recruiters by taking a big pile of candidates and automagically identifying the ones that are the best fit. I suspect that many of these offerings are bull***t. And here are a few questions you can ask the vendors to see if you catch a whiff (or perhaps an overwhelming aroma) of bull***t. The same sorts of questions would apply in other scenarios, including models produced by an in-house team.

Hey, person offering me this model, do you test to see if these predictions are accurate?

Initially I thought about making this question "How do you" rather than "Do you." I think "Do you" is more to the point. Any hesitation or awkwardness here is a really bad smell. In the talent acquisition example above, the vendor should at least be able to say, "Of course, we did an initial train-test split on the data and we monitor the results over time to see if people we say are good matches ultimately get hired."

Now, later on we might devote a post in this series to self-fulfilling prophecies -- meaning, in this case, that you should be on alert for the fact that by promoting a candidate to the top of the resume stack, you are almost certainly going to increase the odds that they are hired, and thus you and your model are shaping, rather than predicting, the future. But we're still out at bull***t range, so let's leave that aside.

And so, having established that the producer of the model does in fact test their model for accuracy, the next logical question to ask is: So how good is this model?

Remember that we are still sniffing for bull***t. The purpose of this question is not so much to hear whether a given model has .75 or .83 precision or recall, but to test whether the producers of the model are willing to talk about model performance with you. Perhaps they assured you at a high level that the model is really great and they test it all the time -- but if they don't have any method of explaining model performance ready for you… well… then their model might be bull***t.
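That "we monitor results over time" answer is easy to verify in principle. Here is a hedged sketch of such a backtest; the file names and columns are hypothetical.

```python
# Backtest last quarter's "good match" predictions against what actually happened.
import pandas as pd

preds = pd.read_csv("predictions_q1.csv")   # candidate_id, predicted_match (0/1)
actual = pd.read_csv("hires_q2.csv")        # candidate_id, hired (0/1)

check = preds.merge(actual, on="candidate_id", how="left").fillna({"hired": 0})

# Of the candidates the model pushed to the top of the stack, how many were hired?
hit_rate = check.loc[check["predicted_match"] == 1, "hired"].mean()
print(f"Hired among predicted matches: {hit_rate:.0%}")
# Caveat from the post: promotion to the top of the stack itself boosts hiring odds.
```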
What features are important in the model? / What type of algorithm is behind these predictions?

These follow-up questions are fun in the case of vendors. Oftentimes vendors want to talk up their machine learning capabilities with a sort of "secret sauce" argument. They don't want to tell you how it works or the details behind it because it's proprietary. And it's proprietary because it's AMAZING. But I would argue that this need not be the case, and that their hesitation is another sign of bull***t. For example, I have a general understanding of how the original PageRank algorithm behind Google Search works: crawl the web and work out the number of pages that link to a given page as a sign of relevance. If those backlinks come from sites which themselves have large numbers of inbound links, then they are worth more. In fact, Sergey Brin and Larry Page published a paper about it. This level of general explanation did not prevent Google from dominating the world of search. In other words, a lack of willingness to be transparent is a strong sign of bull***t.

How do you re-examine your models?

Having poked a bit at transparency, these last questions get into issues of tractability. You want to hear about the capabilities the producers of the model have to re-examine the work they have done. Did they build a model a few years ago and now just keep using it? Or do they make a habit of going back and testing other potential models? Do they save off all their work so that they could easily return to the exact dataset used to train a specific version of the model? Are they set up to iterate, or are they simply offering you a one-size-fits-all algorithm? Good smells here will be discussions about model deployment, maintenance, and archiving. Streets-and-sewers type stuff, as one of my analytics mentors likes to say. Bad smells will be high-level vague assurances or -- my favorite -- simple appeals to how amazingly bright the team working on it is. If they do vaguely assure you that they are tuning things up "all the time," then you can hit them with this follow-up question:

Could you go back to a specific prediction you made a year ago and reproduce the exact data set and version of the algorithm behind it?

This is a challenging question, and even a team fully committed to transparency and tractability will probably hedge their answers a bit. That's OK. The test here is not just whether they can do it, but whether they are even thinking about this sort of thing. Ideally it opens up a discussion about how they will support you, as the analytics professional responsible for deploying their model, when you get challenged about a particular prediction. It's the type of question you need to ask now because it will likely be asked of you in the future.
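For reference, a good answer to that question implies plumbing roughly like this generic sketch, which archives the model alongside a fingerprint of its training data. No particular vendor's system is shown.

```python
# Archive enough to reproduce a prediction later: data hash, model, manifest.
import hashlib, json, time
from pathlib import Path
import joblib

def archive_run(model, train_df, version_dir):
    out = Path(version_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Fingerprint the exact training data so it can be matched to it later.
    data_hash = hashlib.sha256(train_df.to_csv(index=False).encode()).hexdigest()
    joblib.dump(model, out / "model.joblib")
    manifest = {
        "trained_at": time.strftime("%Y-%m-%d"),
        "training_data_sha256": data_hash,
        "model_file": "model.joblib",
    }
    (out / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return data_hash  # store alongside every prediction this version makes
```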
As we move forward in this blog series, we'll get into more nuanced situations. For example, reviewing the features used in the predictions to see if they are diverse and make logical sense. Or checking whether the type of estimator (algorithm) chosen makes sense for the type of data you provided. But if the model you are evaluating fails the bull***t smell test outlined here, then you're not going to have the transparency and tractability necessary to pick up on those more nuanced smells. So do yourself a favor and take a test whiff from a ways away before you stick your nose any closer.
6 min read
Nicholas Garbis
WATCH THE VIDEO! In this conversation on insight generation, our Chief Product Officer, Tony Ashton, shows how One Model's new insight function works.

Insight Generation

I believe a key element of People Analytics should be insight generation: reducing the time and cognitive load it takes for HR and business leaders to generate insights that lead to actions. Many people analytics teams have made this a priority in their service offering, some even including "insights" in the name of their team. With artificial intelligence, higher-quality and faster insight generation can be driven across an organization. An organization with a mature people analytics capability should be judged on the frequency and quality of insight generation away from the center.

Why I Stopped Liking Maturity Models

Humor me for a moment while I share a very short rant and a confession. I have grown to despise the "maturity curves" that have been circulating through people analytics for over a decade. My confession is that I have not (yet!) been able to come up with a compelling replacement. My main issues? The focus is on data and technology deliverables, not on actions and outcomes. And they are vague, implying that you proceed from one stage to the next, when in reality all of the stages can (and should) be constantly maturing and evolving without any of them ever being "done" or "perfect." Too many times I have heard (mostly newer) people analytics leaders say that they need to get their data and basic reporting right before they can consider any analytics. I don't believe that to be true -- things will get easier, faster, and better with your analytics, but you do not have to wait to make progress at any of the stages.

Action Orientation

For example, getting to "predictive" -- being able to foresee what is likely to happen -- appears in many maturity models. Yet it is easy to imagine, and you may have examples, where very mature predictive analytics deliverables have had little or no impact on the business. In my opinion, true maturity is not about the deliverable, but about the insights generated and the corresponding actions taken to drive business outcomes. Going further, getting to "prescriptive" means you have a level of embedded artificial intelligence that produces common-language actions for consideration. This assumes the "insight" component is completely handled by the AI, which then proceeds to select or create a recommended action. This is still quite aspirational for nearly all organizations, yet it is repeated often.

Focus on Designing for Insight Generation at the "Edges"

People analytics teams are typically centralized in a COE (center of excellence) model, where expertise in workforce data, analytics, dashboard design, data science, insight generation, and data storytelling can be concentrated and developed. The COE is capable of generating insights for the CHRO and HR leadership team, but what about the rest of the organization? What about the HR leaders and managers farther out at the edges of the org chart? The COE needs to design and deliver content to the edges of the organization that enables people there to generate insights without needing to directly engage the COE in the process. A storyboard or dashboard needs to be designed with the specific intention of shortening the time between a user seeing the content and having an accurate insight. A good design increases the likelihood of a "lightbulb" moment.
Humans and Machines Turning on "Lightbulbs" Together

We need to ensure that HR leaders and line managers are capable of generating insights from people analytics deliverables (reports, dashboards, storyboards, etc.). This will require some upskilling in data interpretation and data storytelling. With well-designed content, they will generate insights faster and with less effort.

Human-generated insights will never be fully replaced. Instead, they will be augmented by machines in the form of AI and machine learning. With that augmentation, humans get a boost, and together the human-machine combination is a powerful force for insights and then actions. Once AI augmentation is in place, we can stop trying to teach everyone statistical regression techniques they will never use. The central PA team can manage the AI toolset, ensure it is delivering valid interpretations, and then focus on enabling insight generation and storytelling by the humans: the HR leaders and line managers.

One Model Lights Up Our Customers' Data Visualizations

One Model has just introduced a "lightbulb" feature that is automatically enabled on storyboard tiles containing metrics that would benefit from forecasting or statistical significance tests. This is not limited to the content our team creates; it also automatically scans the data within storyboards created by our customers. And it is far more than basic language attached to a simple regression model. By integrating features of our One AI machine learning module into the user interface, we automatically interpret the type and structure of the data in the visual and then select the appropriate statistical model for determining whether there is a meaningful relationship, which is described in easy-to-interpret language. Where a forecast is available, it is based on an ARIMA model, and all the relevant supporting data is just a click away. With this functionality built directly into the user interface, each time you navigate into the data, filtering or drilling into an organization structure, the calculations automatically reassess the data and generate the interpretations for you. With automated insights generated through AI, One Model accelerates your people analytics journey, moving you from data to insights to actions.
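As a rough illustration of the kind of check such a feature automates, here is a sketch that only surfaces a trend when it is statistically meaningful. It is an assumption-laden toy with made-up data, not One Model's actual implementation (which selects among models, including ARIMA for forecasts).

```python
# Only surface a "lightbulb" when a metric's trend is statistically meaningful.
import numpy as np
from scipy import stats

months = np.arange(12)
attrition = np.array([2.1, 2.3, 2.2, 2.6, 2.5, 2.8,
                      2.9, 3.1, 3.0, 3.3, 3.4, 3.6])  # hypothetical monthly %

result = stats.linregress(months, attrition)
if result.pvalue < 0.05:
    direction = "up" if result.slope > 0 else "down"
    print(f"Attrition is trending {direction} by ~{result.slope:.2f} pts/month "
          f"(p={result.pvalue:.3f}).")
else:
    print("No statistically significant trend; stay quiet.")
```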
About One Model

One Model's industry-leading, enterprise-scale people analytics platform is a comprehensive solution for business and HR leaders that integrates data from HR systems with financial and operational data to deliver metrics, storyboard visuals, and predictive analytics through a proprietary AI and machine learning model builder. People data presents unique and complex challenges, which One Model simplifies to enable faster, better, evidence-based workforce decisions. Learn more at www.onemodel.co

One Model's new Labor Market Intel product delivers external supply and demand data at an unmatched level of granularity and flexibility. The views in LMI help you answer the questions you and your leaders need answered, with the added flexibility to create your own customized views. Learn more at www.onemodel.co/LMI
4 min read
Josh Lemoine
Software companies today aren't exactly selling the idea of "lovingly crafting you some software that's unique and meaningful to you." There's a lot more talk about best practices, consistency, and automation. It's cool for software capabilities to be generated by robots now, and that's great when it comes to things like making predictions. One Model is a leader in that space with One AI. This post isn't about machine learning, though. It's about modeling your company's people data.

The people at One Model work with you to produce a people data model that best suits your company's needs. It's like having your own master brewer guide you through the immense complexity that we see in people data. Why does One Model take this hands-on approach? Because the people employed at your company are unique, and your company itself is unique. Organizations differ not only in structure and culture but also in the combinations of systems they use to find and manage their employees. When you consider all of this together, it's a lot of uniqueness. The uniqueness of your company and its employees is also your competitive advantage. Why, then, would you want the exact same thing as other companies when it comes to your people analytics? The core goal of One Model is to deliver YOUR organization's "one model" for people data.

A Data Engineer builds your data model in One Model. The Data Engineer working with you will have actual conversations with you about your business rules and logic, and translates that information into data transformation scripting. One Model does not perform this work manually because of technical limitations or an immature product; it's actually kind of the opposite. Affectionately known as "Pipeo," One Model's data modeling framework is a major factor in allowing One Model to scale while still using a hands-on approach. Advantages of Pipeo include the following:

It's fast. Templates and the "One Model" standard models are used as the starting point. This gets your data live in One Model very quickly, allowing validation and subsequent logic changes to begin early in the implementation process.

It's extremely flexible. Anything you can write in SQL can be achieved in Pipeo. This allows One Model to deliver things outside the realm of a standard data model. We've created a data orchestration and integrated development environment with all the flexibility of a solution you might have built internally.

It's transparent. You, the customer, can look at your Pipeo. You can even modify your Pipeo if you're comfortable doing so. The logic does not reside in a black box.

It facilitates accuracy. Press a validation button, get a list of errors. Correct, validate, and repeat. The scripting does not need to be run to highlight syntax issues.

OMG, is it efficient. What used to take six weeks at our previous companies and roles, we can deliver in a matter of hours. Content templates help, but when you really need to push the boundaries, being able to do so quickly, with expertise at hand, lets you do more, faster.

It's fun to say Pipeo. You can even use it as a verb. Example: "I pipeoed up a few new dimensions for you."
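Pipeo itself is proprietary, so no real Pipeo syntax appears here. As a stand-in, this pandas sketch shows the kind of derived dimension such transformation scripting produces; the column names and cutoffs are hypothetical.

```python
# Derive a tenure-band dimension from raw hire dates (stand-in for Pipeo logic).
import pandas as pd

emp = pd.read_csv("employees.csv", parse_dates=["hire_date"])
years = (pd.Timestamp("2019-01-01") - emp["hire_date"]).dt.days / 365.25

emp["tenure_band"] = pd.cut(
    years,
    bins=[0, 1, 3, 5, 10, 100],
    labels=["<1 yr", "1-3 yrs", "3-5 yrs", "5-10 yrs", "10+ yrs"],
)
print(emp["tenure_band"].value_counts())
```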
The role the Data Engineer plays isn't a substitute for working with a dedicated Customer Success person from One Model; it's in addition to it. The Customer Success people at One Model bring many years of industry experience to the table, and they know their way around people data. They play a heavy role in providing guidance and thought leadership, as well as making sure everything you're looking for is delivered accurately. Customer Success will support you throughout your time with One Model, not just during implementation.

If you'd like to sample some of the "craft people analytics" that One Model has on tap, please reach out for a demo. We'll pour you a pint right from the source, because canned just doesn't taste as good.
13 min read
Phil Schrader
As the people analytics leader in your organization, you are responsible for transforming people data into a unique competitive advantage. You need to figure out what it is about your workforce that makes your business grow, and you need to get leaders across the organization on board with using data to make better employee outcomes happen. How will you do that in 2019?

Will you waste another year attempting to methodically climb the four or five stages of the traditional analytics maturity model? You know the one. It goes from operational reporting in the lower left, up through a few intermediate stages, and then, in the far distant upper right, culminates with predictive models and optimization. Here's the Bersin & Associates one for reference, or flip open your copy of Competing on Analytics (2007) for another (p. 8).

The problem with this model is that on the surface it appears to be perfect common sense, while in reality it is hopelessly naive. It requires you to undertake the most far-reaching and logistically challenging efforts first. Then, in the magical future, you will have this perfect environment in which to figure out what is actually important. If this were an actual roadmap for an actual road, it would say, "Step 1: Begin constructing four-lane highway. … Step 4: Figure out where the highway should go." It is the exact opposite of the way we have learned to think about business in the last decade. Agile. The Lean Startup. Etc. In fact, it is such a perfect inverse of what you should be doing that we can literally turn this maturity model 180 degrees onto its head and discover an extremely compelling way to approach people analytics.

Here is the new model. Notice the axes. This is a pragmatic view. We are now building impact (y-axis) in the context of increasing logistical complexity (x-axis). Impact grows as more people use data to achieve the people outcomes that matter. But as more and more people engage with the data, your logistical burden grows as well. These burdens will manifest themselves in the form of system integrations, data validation rules, metric definitions, and a desire for more frequent data refreshes.

From this practical perspective, operational data no longer seems like a great place to start. It's desirable because it's the point at which many people in the organization will be engaging with data, but it will require an enormous logistical effort to support. This is a good time to dispense with the notion that operational data is somehow inferior to other forms of data, or that it's merely the simplistic place to start. Actually, your business runs operationally. Amazon's operational data, for example, instructs a picker in a warehouse to go and fetch a particular package from the shelves at a particular moment in time. That's just a row of operational data. But it occurs at the end of a sophisticated analytics process that often results in you getting a package on the very same day you ordered it. Operational data is data at the point of impact.

Predictive data also looks quite different from this perspective. It's a wonderful starting point because it is very manageable logistically. And don't be put off by the fact that I've labeled its impact as lower. Remember that impact in this model is a function of the number of people using your data.
The impact of your initial predictive models will be felt in a relatively small circle of people around you, but it's that group of people who will form your most critical allies as you seek to build your analytics program. For starters, that's your boss and the executive team. Sometime around Valentine's Day they will no doubt start to ask, "Hey, how's the roadmap coming along?" In the old model, you would have to say, "Oh, well, you know, it's difficult because it's HR data and we need to get it right first." Then you'd both nod knowingly and head off to LinkedIn to read more articles about HR winning a seat at the table. But this year you will say, "It's going great! We've run a few hundred predictive models and discovered that we can predict {insert Turnover, Promotion, Quality of Hire, etc.} with a decent degree of recall and precision. As a next step, we're figuring out how to organize this data more effectively so we can slice and dice it in more ways. After that, we will start seeking out other data sets to improve our models and make a plan for distributing this data to our people leaders." Ah. Wouldn't that feel nice to say?

Next, you begin taking steps to better organize your data and add new data sets. This takes more logistical effort, so you will engage your next group of allies: HR system owners and IT managers. Because they are not fools, they will be a little skeptical at first. Specifically, they're going to ask you what data you need and why it's worth going after. If you're operating under the old model, you won't really know. You might say, "All of it." They won't like that answer. Or maybe you'll be tempted to get some list of predefined KPIs from an article or book. That's safer, but you can't really build a uniquely differentiating capability for your organization that way. You're just copying what other people thought was important. If you adopt our upside-down model, on the other hand, you'll have a perfectly good answer for the system owners and IT folks. You'll say, "I've run a few hundred models and we know that this manageable list has the data elements that are the most valuable. These data points help us predict X. I'd like to focus on those." "Amen," they'll say.

How's that for a first two months of 2019? You're showing progress to your execs. Your internal partners are on board. You are building momentum. The more allies you win, the more logistical complexity you can take on. At this stage people have reason to believe in you and share resources with you.

As you move up the new maturity model with your IT allies, you'll start to build analytic data sets. Now you're looking for trends and exploring various slices. Now is the time for an executive dashboard or two. Now is the time to start demonstrating that your predictive models are actually predictive. These dashboards are focused. They're not a grab bag of KPIs. They might simply show the number of people who left the company last month and whether or not they were predicted by the model. Maybe you cut it by role and salary band. The point is not to see everything. The point is to see what matters. Your execs will gladly take three pieces of meaningful data once per month over a dozen cuts of overview data once a day. Remember to manage your logistical commitment. You need to get the data right about once a month. Not daily. Not "real time."

Finally, you're ready to get your operational data right.
In the old world this meant something vague, like being able to measure everything and having all the data validated and other unrealistic things. In the new world it means delivering operational data at the point of impact. In the old world you'd say, "Hey HRBP or line manager, here are all these reports you can run for all this stuff." And they would either ignore them or find legitimate faults with them. In the new world, you say, "Hey HRBP or line manager, we've figured out how to predict X. We know that X is (good | bad) for your operations. We've rolled out some executive dashboards to track trends around X. Based on all that, we've invested in technology and process to get this data delivered to you as well."

X can be many things. Maybe it's a list of entry-level employees likely to promote two steps, based upon factors identified in the model. Maybe it's a list of key employees at high risk of termination. Maybe it's a ranking of employee shifts with a higher risk of a safety incident. Whatever it is for your business, you will be ready to roll it out far and wide because you've proven the value of data and you've pragmatically built a network of allies who believe in what you are doing. And the reason you'll be in that position is that you turned your tired old analytics maturity model on its head and acted the way an agile business leader is supposed to act.

Yeah, but…

OK Phil, you say, that's a nice story, but it's impossible. We can't START with prediction. That's too advanced. Back when these maturity models were first developed, I'd say that was true. But the accessibility of data science has changed a lot in ten years. We are all more accustomed to talking about models and predictive results. More to the point, as the product evangelist at One Model, I can tell you with first-hand confidence that you can, in fact, start with prediction. One Model's One AI product offering ingests sets of data and runs them through a series of data processing steps, producing predictive models and diagnostic output. Here are the gory details on all that. Scroll past the image and I'll explain.

Basically, there's a bunch of time-consuming work that data scientists have to do in order to generate a model. This may include things like taking a column and separating the data into multiple new columns (one-hot encoding), devising a strategy to deal with missing data elements, or checking for cheater columns (a column like "Severance Pay" might be really good at predicting terminations, for example). There are likely several ways to prepare a data set for modeling. After all that, a data scientist must choose from a range of predictive model types, each of which can be run with various different parameters in place. This all adds up to scrubbing, rescrubbing, running, and re-running things over and over again. If you are like me, you don't have the skill set to do all of that effectively. And you likely don't have a data scientist loitering around waiting to grind through all of it for you. That's why, in the past, this sort of thing was left at the end of the roadmap -- waiting for the worthy few.

But I bet you are pretty good at piecing data sets together in Excel. I bet you've handled a vlookup or two on your way to becoming a people analytics manager. Well… all we actually need to do is manually construct a data set with a bunch of columns that you think might be relevant to predicting whatever outcome you are looking for. Then we feed the data into One AI.
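Two of the prep steps named above are easy to illustrate. Here is a hedged sketch of one-hot encoding plus a crude cheater-column screen; the columns are hypothetical, and One AI automates far more than this.

```python
# One-hot encode a categorical column and screen for "cheater" columns.
import pandas as pd

df = pd.read_csv("training_set.csv")  # hypothetical extract pieced together in Excel

# One-hot encoding: split "department" into one 0/1 column per value.
df = pd.get_dummies(df, columns=["department"])

# Crude cheater check: features suspiciously correlated with the outcome.
corr = df.corr(numeric_only=True)["terminated"].drop("terminated").abs()
print(corr[corr > 0.95])  # e.g., "severance_pay" would light up here
```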
One AI cycles through all the gnarly stuff in the image above and gives you some detailed output on what it found. This includes an analysis of all the columns you fed in and also, of course, the model itself. You don't need to be able to do all the stuff in that image. You just need to be able to read and digest the results. And of course, we can help with that.

Now, the initial model may not have great precision and recall. In other words, it might not be that predictive, but you'll discover a lot about the quality and power of your existing data. This exercise allows you to scout ahead, actually mapping out where your roadmap should go. If the initial data you got your hands on doesn't actually predict anything meaningful in terms of unique, differentiating employee outcomes -- then it's damn good you didn't discover that after three years of road building. That would be like one of those failed bridges to nowhere. Don't do that. Don't make the next phase of your career look like this.

Welcome to 2019. We've dramatically lowered the costs of exploring the predictive value of your data through machine learning. Get your hands on some data. Feed it into One AI. If it's predictive, use those results to build your coalition. If the initial results are not overly predictive, scrape together some more data or try a new question. Iterate. Be agile. Be smart. Sometimes you have to stand on your head for a better view.

How can I follow Phil's advice and get started?

About One Model
One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.
Read Article
Featured
5 min read
Josh Lemoine
2019 Goals: With it being the dawn of a new year, a lot of us are setting goals for ourselves. This year, I set two goals:
1. To write and publish a blog post
2. To run a marathon

As the father of two young children, I'm always looking for ways to make the most of my time. As I ran on the treadmill recently, a bizarre idea came to me in between thoughts of "why do I do this to myself?" and "this sucks". I might be able to accomplish the first goal and get a start on the second at the same time.

See, on my very first run 6 years ago, I brought my phone and tracked the run using a fitness tracker app. Since then, I never quit running and I never stopped tracking every single run using the same app. I have literally burned 296,827 calories building this data set...

...and this data deserves better than living in an app on my phone. As a Data Engineer, I feel ashamed to have been treating my exciting (to me) and certainly hard-earned data this way. What if I loaded the data into One Model and performed some analysis on it? If it worked, it would provide an excellent use case for just how flexible One Model is. It would also give me a leg up (running pun intended) on marathon training.

One Model is flexible!

One Model is a People Analytics platform. That said, it's REALLY flexible and very well positioned as the definition of "People Data" becomes more broad. The companies we work with are becoming increasingly creative in the types of data they're loading. And they're increasing their ROI by doing so. One Model is NOT a black box that you load HRIS and/or ATS data into that then spits out some generic reports or dashboards. The flexible technology platform, coupled with a team of people with a massive amount of experience working with People Data, is a big part of what differentiates One Model from other options.

Would One Model be flexible enough to allow for analyzing running data in it? Yes. Not only was it flexible enough, but the data was loaded, modeled, and visualized without using any database tools. Everything you're about to see was done through the One Model front end. One Model has invested substantially over the past year in building a data scripting framework, and it's accessible within the UI. This is a really exciting feature that customers will increasingly be able to utilize in the coming year. Years ago, as a customer of a People Analytics provider, I would have given my right arm for something like this. That said, as a One Model customer you also get access to a team of experts to model your data for you.

What did I take away and what should you take away from this?

Along with gaining a better understanding of my running, this exercise has gotten me more excited about running. Is "excited about running" even a thing? I plan to start capturing and analyzing more complete running data in 2019 with the use of a smart watch. I'll also be posting runs more consistently on social media (Strava). It'll be interesting to watch the changes as I train for a marathon.

Aside from running though, it has given me some fresh perspective on what's possible in One Model. This will surely carry over into the work I do on a daily basis. Hopefully you can take something away from it as well. If you're already using One Model, you might want to think about whether you have other data sources that can be tied to your more traditional People Data. If you're not using One Model yet but have an interesting use case related to People Analytics, One Model might be just the ticket for you.
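If you're tempted to try something similar, the first step is just getting the app's export into an analytics-ready shape. Here's a rough pandas sketch; the export format and column names are hypothetical, since every app exports differently:

```python
# A rough sketch of shaping a fitness app's export for analysis.
# The export format and column names here are hypothetical.
import pandas as pd

runs = pd.read_csv("app_export.csv", parse_dates=["start_time"])
# expected columns: start_time, distance_km, duration_min, calories

runs["month"] = runs["start_time"].dt.to_period("M")
runs["pace_min_per_km"] = runs["duration_min"] / runs["distance_km"]

monthly = runs.groupby("month").agg(
    runs=("start_time", "count"),
    total_km=("distance_km", "sum"),
    calories=("calories", "sum"),
    avg_pace=("pace_min_per_km", "mean"),
)
print(monthly.tail(12))  # the last year of training at a glance
```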
Without further ado, here's my running data in One Model:

"Cool - this is all really exciting. How can I get started?"

Did the above excite you? Could One Model help you with your New Year's resolution? I can't guarantee it'll help you burn any calories, but you could be up and running with your own predictive analytics during Q1 of 2019. One Model's Trailblazer quick-start program allows you to get started with predictive analytics now. Want to learn more?

About One Model
One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.
Read Article
Featured
10 min read
Stacia Damron
Wouldn't it be incredible to predict the future? Let's ask 63-year-old Joan Ginther. She's arguably one of the luckiest women in the world. This Texas woman defied odds to win million-dollar lottery payouts via scratch cards not once, not twice, but four times over the past decade. Her first lottery win landed her $5.4 million, followed by $2 million, $3 million, and then a whopping $10 million jackpot over the summer of 2010. Mathematicians calculate the odds of this happening as one in eighteen septillion. Theoretically, this should only happen once in a quadrillion years.

So how did this woman manage to pull it off? Was it luck? I'd certainly argue yes. Was it skill? Maybe. She did purchase all four scratch-off cards at the same mini mart. Most interestingly, did it have something to do with the fact that Joan was a mathematics professor with a PhD in statistics from Stanford University? Quite possibly. We'll never know for sure what Joan's secret was, but the Texas Lottery Commission didn't (and still doesn't) suspect any foul play. Somehow, Joan predicted it was the right time and right place to buy a scratch-off ticket. All we know for sure is that she's exceptionally lucky. And loaded.

Most of us have a hard enough time predicting traffic on our morning commute. We can, however, make some insightful predictions for people analytics teams by running people data through predictive models.

So, what is HR predictive analytics? Specifically, predictive analytics uses modeling, a form of artificial intelligence that applies data mining and probability, to forecast or estimate specific outcomes. Each predictive model comprises a set of predictors (variables) in the data that influence future results. When the program processes the data set, it creates a statistical model based on that data. Translation? Predictive analytics allows us to predict the future based on historical outcomes.

Let's look at an example of predictive analytics in HR. Predictive analytics can help HR professionals and business leaders make better decisions, but how? Maybe a company wants to learn where they're sourcing their best sales reps so they know where to turn to hire more top-notch employees. First, they must determine whether their "best" reps have measurable qualities. For the sake of this post, let's say they sell twice as much as the average sales rep. Perhaps all the best reps share several qualities, such as referral source (like Indeed), a similar skill (fluency in Spanish listed on their resume), or a personality trait (from personality tests conducted during the job interview). A predictive model would weigh all this data and compare it against the outcome: the superior sales quotas being hit. The model references the exploratory data analysis used to find correlations across all your data sources. This allows a company to run job candidates' resumes through the model in an effort to predict their future success in that role. Sounds great, right?

Now, here are the problems to consider:

1) Predictive models can only predict the future based on historical data. If you don't have enough data, that could be a problem. (Download the Ethics of AI Whitepaper.)

2) Even if you do have enough data, that can still be a problem.
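You can see this second problem in your own pipeline with a quick audit of how often a model flags people in different groups. A minimal sketch, assuming a hypothetical file of scored candidates with made-up column names:

```python
# A minimal sketch of auditing a model's predictions by group.
# The file and column names are hypothetical stand-ins.
import pandas as pd

scored = pd.read_csv("scored_candidates.csv")
# expected columns: candidate_id, gender, predicted_hire (0/1)

rates = scored.groupby("gender")["predicted_hire"].mean()
print(rates)

# A common rule of thumb: a group's selection rate below ~80% of the
# highest group's rate is a red flag worth investigating.
print("disparate impact ratio:", rates.min() / rates.max())
```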
Amazon, for example, recently scrapped its resume-screening software (which evaluated the resumes of current and previous employees to help screen potential ones) because it discovered the algorithm favored men over women for engineering roles, penalizing candidates who listed women's organizations on their resumes. (And it's not Amazon's fault. It's the data; historically, most people in those roles had been men.) Kudos to them for scrapping that. That's why it's so important to use a human capital predictive analysis tool that is transparent and customized to your data, rather than to some other big-box company's data in your industry. Check out One Model's One AI.

HR predictive analysis is helpful, but it's also a process. Are there more applications? What HR-related problems does it solve? Predictive analysis applications in people analytics are vast. The right predictive models can help you solve anything from recruiting challenges to retention and employee attrition questions, to absenteeism, promotions and management, and even HR demand forecasting. The sky's the limit if you have the right tools and support.

Time for a people analytics infrastructure reboot

Sure, a people analytics infrastructure reboot isn't as exciting as winning the lottery and buying a yacht, but it's really, really helpful in solving questions large corporations struggle with daily. If you haven't used predictive modeling to solve a burning business problem, this might be a great place for your people analytics team to dive in. For One Model customers, we recommend you push a couple of buttons and start with an exploratory data analysis. More and more companies are beginning to incorporate machine learning technology into their stack, and there's so much value that can be derived. If you're not sure where to get started, just keep it simple and bite off one piece of the puzzle at a time with One Model.

One Model is built to turn your general HR team into people data scientists, no advanced degrees required. One Model provides the people analytics infrastructure: a platform for you to import your workforce data from all sources, transform it into one analytics-ready asset, and build predictive models to help you solve business challenges. Our customers are creating customized models and you can too. It's not as intimidating as you might think. It's super easy to get started: One Model will work with you to pull your people data out of any source that's giving you trouble (for example, your Greenhouse ATS, Workday, or Birst). We'll export it, clean it up, and put it all in the same place. It takes just a few weeks. From there, you can glean some insights from it.

To learn more about One Model's capabilities (or to ask us any questions about how we create our predictive models), click the button below and a team member will reach out to answer all of your questions! Let's Talk More About Predictive Analytics for HR.

About One Model
One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.
Read Article
Featured
8 min read
Phil Schrader
Last week I was doodling some recruiting graphs in my notebook, with an eye toward building out some new recruiting efficiency dashboards. I was thinking about how requisitions age over time and I got an idea for a cool stacked graph that counts up how many requisitions you have open each month and breaks them out into age buckets. Maybe some supporting breakouts like recruiter, some summary metrics, etc. Something like this:

Phil's Beautifully Hand-illustrated Cholesterol Graph (above)

This would be an awesome view. At a glance I could see whether my total req load was growing, and I could see if I'm starting to get a buildup of really old reqs clogging the system. This last part is why I was thinking of calling it the Requisition Cholesterol Graph. (That said, my teammate Josh says he hates that name. There is a comment option below… back me up here!)

But then I got to thinking, how am I actually going to build that? What would the data look like? Think about it:

Given: I have my list of requisitions and I know the open date and close date for each of them.

Problem #1: I want to calculate the number of open reqs I have at the end of each time period. Time periods might be years, quarters, months, or days. So I need some logic to figure out if the req is open during each of those time periods. If you're an Excel ninja, then you might start thinking about making a ton of columns and using some conditional formulas. Or… maybe you figure you can create some sort of pancake stacks of rows by dragging a clever formula down the sheet… Also, if you are an Excel ninja… High five! Being an Excel ninja is cool! But this would be pretty insane to do in Excel. And it would be really manual. You'd probably wind up with a static report based on quarters or something, and the first person you show it to will ask if they can group it by months instead. #%^#!!! If you're a full-on Business Intelligence hotshot or Python / R wiz, then you might work out some tricky joins to inflate the data set to include a record, or a script to count a value, for each period where the req's open date falls before or within that period. Doable. But then…

Problem #2: Now you have your overall count of reqs open in each period. Alls you have to do now is group the requisitions by age and you're… oh… shoot. The age grouping of the requisitions changes as time goes on! For example, let's say you created a requisition on January 1, 2017. It's still open. You should count the requisition in your open req count for January 2017 and you'd also count it in your open req count for June 2018 (because it's still open). Figuring all that out was Problem #1. But now you want to group your requisitions by age ranges. So back in January 2017, the req would count in your 0 - 3 months old grouping. Now it's in your > 1 year grouping. The grouping changes dynamically over time. Ugh. This is another layer of logic to control for. Now you're going to have a very wild Excel sheet or even more clever scripting logic. Or you're just going to give up on the whole vision, calculate the average days open across all your reqs, and call it a day.

$Time_Context is on my side (gets a little technical)

But I didn't have to give up. It turns out that all this dynamic grouping stuff just gets handled in the One Model data structure and query logic -- thanks to a wonderful little parameter called $Time_Context (and no doubt a lot of elegant supporting programming by the engineering team).
When I ran into $Time_Context while studying how we do Org Tenure, I got pretty excited and ran over to Josh and yelled, "Is this what I think it is!?" (via Slack). He confirmed for me that yes, it was what I hoped it was.

I already knew that the data model could handle Problem #1 using some conditional logic around effective and end dates. When you run a query across multiple time periods in One Model, the system can consider a date range and automatically tally up accurate end-of-period (or start-of-period) counts based on those date ranges. If you have a requisition that was opened in January 2017 and you want to calculate the number of reqs you have open at the end of every month, One Model will cycle through the end of each month, check to see if the req was opened before then and is not yet closed, and add it to the totals. We use this for all sorts of stuff, particularly headcount calculations using effective dates and end dates. So Problem #1 was no problem, but I expected this.

What I didn't expect, and what made me Slack for joy, was how easily I could also deal with Problem #2. It turns out I could build a data model and stick $Time_Context in the join to my age dimension. Then One Model would just handle the rest for me. If you've gotten involved in the database side of analytics before, then you're probably acquainted with terms like fact and dimension tables. If you haven't, just think vlookups in Excel. So, rather than doing a typical join or vlookup, One Model allows you to insert a time context parameter into the join. This basically means, "Hey One Model, when you calculate which age bucket to put this req in, imagine yourself back in time in whatever time context you are adding up at that moment. If you're doing the math for January 2017, then figure out how old the req was back then, not how old it is now. When you get to February 2017, do the same thing."

And thus, Problem #2 becomes no problem. As the query goes along counting up your metric by time period, it looks up the relevant requisition age grouping and pulls in the correct value as of that particular moment in time. So, with our example above, it goes along and says, "Ok, I'm imagining that it's January 2017. I'll count this requisition as being open in this period of time and I'll group it under the 0 - 3 month old range." Later it gets to June 2018 and it says, "Ok… dang, that req is STILL open. I'll include it in the counts for this month again and let's see… ok, it's now over a year old." This, my friends, is what computers are for! We use this trick all the time, particularly for organization and position tenure calculations.

TL;DR

In short, One Model can make the graph that I was dreaming of -- no problem. It just handles all the time complexity for me. Here's the result in all its majestic, stacked-column glory:

So now at a glance I can tell if my overall requisition load is increasing. And I can see down at the bottom that I'm starting to develop some gunky buildup of old requisitions (orange). If I wanted to, I could also adjust the colors to make the bottom tiers an ugly gunky brown, like the posters in your doctor's office. Hmmm… maybe Josh has a point about the name...

And because One Model can handle queries like this on the fly, I can explore these results in more detail without having to rework the data. I can filter or break the data out to see which recruiters or departments have the worst recruiting cholesterol. I can drill in and see which particular reqs are stuck in the system.
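If you're curious what One Model is sparing you from, here's roughly what the hand-rolled version of that query looks like in pandas. All file and column names are hypothetical, and One Model evaluates this dynamically at query time rather than in a batch loop like this:

```python
# A rough sketch of the logic $Time_Context handles for you: count open
# reqs at each month-end, bucketing each req by how old it was *in that
# month*, not how old it is today. All names are hypothetical.
import pandas as pd

reqs = pd.read_csv("requisitions.csv", parse_dates=["opened", "closed"])
month_ends = pd.date_range("2017-01-31", "2018-12-31", freq="M")

rows = []
for period_end in month_ends:             # the "time context"
    # Problem #1: which reqs were open as of this period's end?
    open_now = reqs[(reqs["opened"] <= period_end) &
                    (reqs["closed"].isna() | (reqs["closed"] > period_end))]
    # Problem #2: how old was each req *as of* this period's end?
    age_days = (period_end - open_now["opened"]).dt.days
    buckets = pd.cut(age_days, bins=[-1, 90, 180, 365, 100_000],
                     labels=["0-3 mo", "3-6 mo", "6-12 mo", "> 1 yr"])
    rows.append(buckets.value_counts().rename(period_end))

# One row per month-end, one column per age bucket -- the stacked graph.
cholesterol = pd.DataFrame(rows)
print(cholesterol.tail())
```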
And, if you hung on for this whole read, then you are awesome too. Kick back and enjoy some Rolling Stones: https://www.youtube.com/watch?v=wbMWdIjArg0.
Read Article