One Model Blog

AI and the Future of Work: Thinking Analytically About How We Do Analytics

Written by Gina Calvert | Sep 10, 2024 7:56:49 PM

Are you as intentional about measuring the value of your data infrastructure and models as you are about building them? 

In the video below, recorded at the People Analytics Summit in Toronto, our Solutions Architect Phil Schrader discusses the importance of (and some strategies for) using analytics to evaluate the impact of your analytics investments.

From leveraging machine learning to track improvements to thinking creatively about integrating predictive models into everyday workflows, you'll gain insights into how to apply analytics to your own analytics.

Short on time? We’ve summarized his presentation for you below. 


The Core Problem: Evaluating Data Investments

When we talk about people analytics, we often focus on the tools, processes, and models that drive better decisions. But what happens when we turn that lens inward—when we use analytics to assess the very work of analytics itself? The idea is simple: if we’re investing in building data infrastructure and models, we should be just as intentional about measuring the value of those investments.

Anyone leading a people analytics team knows the balancing act. On one side, there’s the pressure to deliver quick insights, the kind that keeps operations running smoothly. On the other side, there’s the longer-term need to build out robust data systems that support advanced analytics. 

Yet, as essential as these data initiatives are, we often struggle to quantify their value. How do we measure the ROI of building a data lake? How do we ensure that the data we’re collecting today will pay off down the road?

Solution: Analytics About Analytics

Here’s where we can take a different approach—by applying analytics to our own analytics. The falling cost of technical work in machine learning (ML) has opened up new possibilities, allowing us to embed these tools within our day-to-day operations. Instead of just using ML models for predictions, we can use them as a means to measure how good our data is and how effective our processes are. Essentially, we can start to think analytically about how we do analytics, especially when it comes to creating a predictive model that measures improvements over time.

A Concrete Metric: Precision, Recall, and the F1 Score

The foundation of this approach lies in the well-known metrics used to evaluate machine learning models: precision, recall, and the F1 score. In brief:

  • Precision asks: When the model makes a prediction, how often is it correct?
  • Recall asks: Out of all the events that should have been predicted, how many did the model actually identify?
  • The F1 score strikes a balance between these two metrics, offering a single number that reflects how well your model performs overall.

By tracking the F1 score, we can gauge the quality of our data and see how incremental improvements, such as adding new data sources, translate into better predictive power. This kind of measurement becomes crucial as we think about the future of machine learning and how it integrates into everyday operations.
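The three metrics above can be sketched in a few lines of Python. The labels here are illustrative (say, 1 = employee left, 0 = employee stayed), not real data:

```python
def precision_recall_f1(actual, predicted):
    """Compute precision, recall, and F1 for binary labels (1 = event occurred)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives

    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted events, how many were right?
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual events, how many were caught?
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of the two
    return precision, recall, f1

# Made-up example: did each of eight employees leave (1) or stay (0)?
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]
p, r, f1 = precision_recall_f1(actual, predicted)  # each works out to 0.75 here
```

In practice you would pull these numbers from your modeling library rather than hand-rolling them, but the arithmetic is exactly this simple.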

Building Analytics for Growth

This method doesn’t just give us a way to measure progress; it gives us a framework to demonstrate that progress in tangible terms. Start with the basics—core HR data like job titles, tenure, and compensation. As you layer in additional data points—learning metrics, performance reviews, engagement scores—you can observe how each new addition boosts your model’s F1 score. It’s a practical way to quantify the value of your data and justify continued investment.
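One way to make that progress tangible is to log the model's F1 score each time a new data layer is added and report the incremental lift. A minimal sketch; the layer names follow the examples above, but the scores are hypothetical:

```python
# Hypothetical F1 scores recorded after retraining with each new data layer.
f1_by_layer = [
    ("core HR (titles, tenure, compensation)", 0.58),
    ("+ learning metrics", 0.63),
    ("+ performance reviews", 0.69),
    ("+ engagement scores", 0.74),
]

def f1_lift_report(history):
    """Return (layer, f1, lift_vs_previous) tuples from a score history."""
    report = []
    prev = None
    for layer, f1 in history:
        lift = round(f1 - prev, 2) if prev is not None else 0.0
        report.append((layer, f1, lift))
        prev = f1
    return report

for layer, f1, lift in f1_lift_report(f1_by_layer):
    print(f"{layer}: F1={f1:.2f} (lift {lift:+.2f})")
```

A running report like this turns "our data is getting better" into a number stakeholders can see move with each investment.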

The Changing Landscape: Embedding Predictive Models

Predictive modeling no longer needs to be a separate, resource-intensive project. As the tools become more accessible, we can embed this capability directly into our workflows. Think of it as using predictive models the way we use pivot tables—regularly, as a quick check to see how well our data is performing. 

This kind of embedded analytics allows us to experiment, iterate, and find creative ways to leverage machine learning without overcommitting resources. With AI continually reshaping business practices, this shift will allow teams to use predictive models in increasingly versatile ways, driving more efficient decision-making.

Beyond Traditional Metrics: Rethinking the Value of Data

By adopting this approach, we’re able to ask—and answer—a critical question: How valuable is our data, really?

If we can demonstrate that our data is increasingly effective at predicting key outcomes like employee turnover or high performance, we’re no longer just talking about data quality in abstract terms. We’re providing a concrete metric that resonates with stakeholders and gives us a way to collaborate more effectively, whether it’s across HR functions or with external vendors whose data feeds into our models.

Looking Ahead: Embracing Innovation as Costs Fall

The future of AI and the workplace is advancing quickly, blurring the line between strategic and routine applications. What was once a complex, time-consuming effort will soon be something we do without a second thought. This shift requires a mindset change—being open to ideas that may seem wasteful or unconventional today but could become standard practice tomorrow. The key is to embrace this shift and look for new, innovative ways to use predictive analytics.

In summary, by taking an “analytics for analytics” approach, we gain more than just better models—we gain clarity on the value of our data investments. The ability to measure progress in predictive power isn’t just a technical exercise; it’s a strategic advantage that drives smarter decision-making across the board.

Not sure where to start? 

Download Key Questions to Ask When Selecting an AI-Powered HR Tool to get the answers you need.