8 min read
    Gina Calvert

Are you as intentional about measuring the value of your data infrastructure and models as you are about building them? In the video below, recorded at the People Analytics Summit in Toronto, our Solutions Architect Phil Schrader shares the importance of (and some strategies for) using analytics to evaluate the impact of your analytics investments. From leveraging machine learning to track improvements to thinking creatively about integrating predictive models into everyday workflows, you'll gain insights on how to apply analytics to your own analytics. Short on time? We've summarized his presentation for you below.

The Core Problem: Evaluating Data Investments

When we talk about people analytics, we often focus on the tools, processes, and models that drive better decisions. But what happens when we turn that lens inward and use analytics to assess the very work of analytics itself? The idea is simple: if we're investing in building data infrastructure and models, we should be just as intentional about measuring the value of those investments.

Anyone leading a people analytics team knows the balancing act. On one side, there's the pressure to deliver quick insights, the kind that keeps operations running smoothly. On the other side, there's the longer-term need to build out robust data systems that support advanced analytics. Yet, as essential as these data initiatives are, we often struggle to quantify their value. How do we measure the ROI of building a data lake? How do we ensure that the data we're collecting today will pay off down the road?

Solution: Analytics About Analytics

Here's where we can take a different approach: applying analytics to our own analytics. The falling cost of technical work in machine learning (ML) has opened up new possibilities, allowing us to embed these tools within our day-to-day operations. Instead of using ML models only for predictions, we can use them to measure how good our data is and how effective our processes are. Essentially, we can start to think analytically about how we do analytics, especially by creating a predictive model that measures improvement over time.

A Concrete Metric: Precision, Recall, and the F1 Score

The foundation of this approach lies in the well-known metrics used to evaluate machine learning models: precision, recall, and the F1 score. In brief:

- Precision asks: When the model makes a prediction, how often is it correct?
- Recall asks: Out of all the events that should have been predicted, how many did the model actually identify?
- The F1 score strikes a balance between these two metrics, offering a single number that reflects how well your model performs overall.

By tracking this metric, we can gauge the quality of our data and see how incremental improvements, like adding new data sources, translate into better predictive power. This kind of measurement becomes crucial as we think about the future of machine learning and how it integrates into everyday operations.

Building Analytics for Growth

This method doesn't just give us a way to measure progress; it gives us a framework to demonstrate that progress in tangible terms. Start with the basics: core HR data like job titles, tenure, and compensation. As you layer in additional data points, such as learning metrics, performance reviews, and engagement scores, you can observe how each new addition boosts your model's F1 score. It's a practical way to quantify the value of your data and justify continued investment, as the sketch below illustrates.
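To make this concrete, here is a minimal sketch of how these metrics can be tracked in practice. It assumes a Python workflow with scikit-learn, and the labels below are made-up turnover outcomes rather than figures from the presentation:

```python
# Minimal sketch: scoring a turnover-prediction model with precision,
# recall, and F1. The labels are synthetic; in practice y_true would be
# observed outcomes (who actually left) and y_pred the model's calls.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]  # 1 = employee left, 0 = stayed
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]  # model predictions

precision = precision_score(y_true, y_pred)  # of predicted leavers, how many actually left?
recall = recall_score(y_true, y_pred)        # of actual leavers, how many did we flag?
f1 = f1_score(y_true, y_pred)                # single score balancing the two

print(f"Precision: {precision:.2f}  Recall: {recall:.2f}  F1: {f1:.2f}")

# Re-run this evaluation each time a new data source is layered in (learning
# metrics, performance reviews, engagement scores) and log the F1 score, so
# gains in predictive power can be tracked and reported over time.
```

Logging that single F1 number after each data addition is what turns "our data is getting better" from an abstract claim into a trend line you can show stakeholders.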
The Changing Landscape: Embedding Predictive Models

Predictive modeling no longer needs to be a separate, resource-intensive project. As the tools become more accessible, we can embed this capability directly into our workflows. Think of it as using predictive models the way we use pivot tables: regularly, as a quick check on how well our data is performing. This kind of embedded analytics allows us to experiment, iterate, and find creative ways to leverage machine learning without overcommitting resources. With AI continually reshaping business practices, this shift will allow teams to use predictive models in increasingly versatile ways, driving more efficient decision-making.

Beyond Traditional Metrics: Rethinking the Value of Data

By adopting this approach, we're able to ask, and answer, a critical question: How valuable is our data, really? If we can demonstrate that our data is increasingly effective at predicting key outcomes like employee turnover or high performance, we're no longer just talking about data quality in abstract terms. We're providing a concrete metric that resonates with stakeholders and gives us a way to collaborate more effectively, whether across HR functions or with external vendors whose data feeds into our models.

Looking Ahead: Embracing Innovation as Costs Fall

The future of AI in the workplace is advancing quickly, blurring the line between strategic and routine applications. What was once a complex, time-consuming effort will soon be something we do without a second thought. This shift requires a mindset change: being open to ideas that may seem wasteful or unconventional today but could become standard practice tomorrow. The key is to embrace this shift and look for new, innovative ways to use predictive analytics.

In summary, by taking an "analytics for analytics" approach, we gain more than just better models; we gain clarity on the value of our data investments. The ability to measure progress in predictive power isn't just a technical exercise; it's a strategic advantage that drives smarter decision-making across the board.

Not sure where to start? Download Key Questions to Ask When Selecting an AI-Powered HR Tool to get the answers you need.

Download Your Buying Guide Now


    7 min read
    Gina Calvert

RedThread Research has identified 7 skills verification methods that range from simple to more complex. In Part 1 of this 2-part job skills assessment series, we dive into the 4 simplest and most common job skills assessments. In Part 2, we examine 3 complex forms of skills verification that lean heavily on benchmarks and data. RedThread members may access the full report authored by Heather Gilmartin and Dani Johnson.

As skills-based recruiting grows more prevalent, HR leaders are beginning to grapple with how to verify skills so they can trust that their data is accurate. They're discovering that evaluating job skills is more complex than merely defining roles and hoping to find perfect matches. Decision-makers must weigh a variety of factors to determine the most suitable verification approach for their needs. You're likely already using some of these tactics to authenticate skills, but which are right for each role? And when should you level up to new ones?

1. Self-Assessment

If you're looking for simple ways to verify skills, having employees and applicants affirm their own expertise is the second most common approach, according to RedThread. It typically shows up in job applications, employee resumes, and interviews. But just because it's popular doesn't mean it's effective. While widely used, self-assessments can be unreliable. Discrepancies can occur for several reasons, including poor self-awareness, overconfidence, unintentional "self-presentation" bias, or, more seriously, candidate fraud. Many studies show that people are notoriously inaccurate when evaluating themselves subjectively, compared with objective measurement. Additionally, RedThread notes that this approach lacks specificity about skill level and doesn't contribute to the company's skills data set.

That's not to say there's no place for worker self-reviews. As long as leaders recognize the limitations and risks, self-assessments can be a good, low-cost first step in identifying top talent early on. Giving potential employees an opportunity to showcase their abilities and skills contributes to a better hiring experience.

2. Performance Feedback / Informal Observation

In this verification type, an observer validates skills against an informal set of standards using various modes of feedback and reviews. According to RedThread's report, 37% of surveyed organizations use performance feedback in their skills verification processes, making it the single most-used method by a wide margin. This is possibly because performance feedback was already in place before many organizations adopted a skills-based recruiting strategy.

These evaluations offer valuable insights into an employee's understanding and reveal knowledge gaps by reflecting overall performance over time or within a specific project. This approach contrasts with formal assessments, which isolate feedback to a single, often stressful event or test. One significant downside of this type of job skills assessment is that the observer's feedback can be subjective and influenced by personal biases.

3. Formal Observation

The key difference between formal and informal observation is that formal observation employs a specific framework to assess employee skills. A formal, structured set of standards helps managers hold difficult conversations, enables the clear identification of areas for improvement, and provides a foundation for coaching and knowledge transfer that improves performance levels.
Even beyond actual performance and skills, observation can provide insight into so-called "soft skills," such as how employees handle pressure, adapt to new challenges, and interact with colleagues. It's important to invest in the time and training needed to carry out effective, unbiased observation. Observers should also factor in the possibility that employee apprehension may lead to inconsistent results, and observation might not capture all aspects of an employee's capabilities.

4. Formal Assessment

Think tests, simulations, and sandboxes. RedThread reports that 53% of respondents who use formal job skills assessments do so because of compliance and regulatory requirements for certain roles, including necessary certifications or credentials.

Formal assessments can be very valuable. They increase objectivity, help clarify the role for applicants (who may be defining the skill differently than you do), provide leaders with data, and save time for recruiters. However, they don't always align with the role or tell you what you need to know. Paying attention to assessment quality is critical for the best outcomes in skills verification.

Upskilling Your Career Skills Assessment Approach

In this first installment of our exploration of skills verification approaches, the basic methods we've discussed serve as a foundational step. It's important to recognize, however, that these initial methods, while effective up to a point, might not suffice for roles requiring deeper or more specialized skill verification. And as the skills trend continues to evolve, leaders will increasingly want more confidence, accuracy, and granularity in their skills data. In Part 2 of this series, we explore 3 more rigorous and comprehensive approaches to meeting the evolving demands of talent acquisition and employee upskilling programs.

One Model: Skills-Based Recruiting Depends on Data

One Model provides a people analytics platform that enhances skills-based recruiting by leveraging data-driven insights to identify skill gaps and predict future talent needs. We help organizations make more informed hiring decisions and better align their recruitment strategies with their business objectives. Learn how to build a people data platform that will allow you to do better skills-based hiring.


    5 min read
    Gina Calvert

In Part 1 of our series on job skills assessments, we explored 4 simple ways to verify skills as identified by RedThread Research. RedThread members may access the full report authored by Heather Gilmartin and Dani Johnson. In Part 2, we delve into 3 sophisticated techniques that leverage both internal and external data to support a more accurate job skills assessment approach. As the landscape of skills-based recruiting expands, it becomes evident that some roles and contexts demand more nuanced and data-intensive verification methods than others.

1. Comparison to External Benchmarks

When verifying skills, it's crucial to measure them against established external standards. Yet, according to RedThread, only 11% of organizations do so. Benchmarking helps companies understand how their candidates' skills stack up against industry standards. In addition to providing a clear perspective on talent level relative to the broader market, it helps organizations future-proof their talent strategy and competitive edge.

However, relying solely on external benchmarks may overlook unique aspects of a company's culture or specific job roles that require customized skill sets. This approach also assumes that industry standards are up-to-date and sufficiently granular for an organization's needs, which may not always be the case in fast-changing industries. Effective benchmarking relies on advanced skills intelligence tools, and thus requires an investment in technology or access to benchmarking data. As with other verification methods, benchmarks are most effective when used in conjunction with internal assessments. These platforms can integrate with existing HR systems to provide deeper insights and real-time data that help refine benchmarking efforts against industry standards.

2. Inference from HR Data

Skills prediction based on HR data involves analyzing information from HR technology systems to infer employee skills. AI models predict employees' skills based on a range of data sources. It's quick, effective, and doesn't require much employee involvement, RedThread explains. They report that 13% of employers currently use this career skills assessment method.

This method uses historical data, such as past job performance, training records, and employee interactions, to predict skill levels and identify potential gaps. As the technology continues to evolve, the accuracy of skill predictions generally increases with the number of data points processed by AI. While powerful, this approach can be limited by the quality and completeness of the data collected. Biases in historical data can also lead to skewed predictions, making it essential to continuously update and review data inputs to ensure accuracy and fairness.

HR data on industry skills is typically purchased through Human Resource Information (HRIS), Learning Management (LMS), Talent Marketplace, Applicant Tracking (ATS), and Performance Management systems. Such systems enhance the accuracy of skills predictions by using machine learning models that improve as they process more diverse and comprehensive data sets.
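For readers curious what this kind of inference can look like under the hood, here is a minimal, illustrative sketch using pandas and scikit-learn. The features, labels, and records are hypothetical and are not drawn from RedThread's report or any particular vendor's model:

```python
# Minimal sketch: inferring a skill from HR-system signals. All column
# names and values are hypothetical illustrations.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Toy HR records combining tenure, learning history, and project counts.
hr = pd.DataFrame({
    "years_in_role":         [1, 4, 2, 6, 3, 5],
    "sql_courses_completed": [0, 3, 1, 2, 0, 4],
    "analyst_projects":      [0, 5, 2, 7, 1, 6],
    "has_sql_skill":         [0, 1, 0, 1, 0, 1],  # verified label (e.g., from an assessment)
})

features = ["years_in_role", "sql_courses_completed", "analyst_projects"]

# Fit on employees whose skill has already been verified, then infer for others.
model = RandomForestClassifier(random_state=0).fit(hr[features], hr["has_sql_skill"])

new_employee = pd.DataFrame({"years_in_role": [2],
                             "sql_courses_completed": [2],
                             "analyst_projects": [3]})
print(model.predict_proba(new_employee[features])[0, 1])  # inferred likelihood of the skill
```

The caveats above still apply: inference is only as good as the historical data behind it, and biased labels will produce biased predictions.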
3. Inference from Work Data

Using work system data to measure skills involves analyzing real-time data from work processes and outputs. By evaluating the quality, efficiency, and creativity of the work produced, organizations can gain a precise understanding of an employee's practical skills. This method requires sophisticated data analysis tools and expertise. It is also more complex than using HR data because it demands advanced technical integrations and substantial cross-functional collaboration to identify relevant metrics for specific skills. However, RedThread concludes that this is the only skills verification method that offers real-time insights into daily work and enables decisions at scale based on performance data.

This is where One Model shines: by seamlessly integrating with multiple data sources across the organization, it enables a more holistic, real-time assessment of employee skills based on actual work outputs.

How One Model Partnerships Elevate Job Skills Assessment with Advanced Data-Driven Approaches

Lightcast is a leading expert in the labor market. They collect and process a wide array of data, including job postings, resumes, and work history profiles, and this data is aligned to job titles and skills classifications every two weeks. By merging Lightcast's extensive knowledge of the external labor market with One Model's ability to unlock people data, organizations can gain business insights relative to industry-wide talent trends.

A One Model partnership empowers HR teams to:
- Enhance the consistency of data in reporting by adding standardized titles to current roles
- Analyze talent headcount, career paths, and retention with accuracy
- Better align skills with job roles to enhance skills knowledge and plan for the future

Ready to spend your time sharing insights, not integrating your people data? Learn how One Model integrations can help you see the whole picture. Or get a peek under the hood at how One Model could specifically benefit your organization. Request a demo.
