Featured
4 min read
Steve Hall
When it comes to People Analytics, the most valuable tool is one that lets you ask the right questions and explore solutions. Canned insights can't answer the real questions you need to answer. Recently, during a demo with a prospective client, a question came up that perfectly illustrates how One Model is a platform built for problem-solving rather than just offering irrelevant canned insights.

The Situation: A Forecasting Challenge

The scenario began with a focus on Female Representation metrics, specifically forecasting whether the organization was on track to meet its diversity targets for women. The forecast feature showed trends for different job levels, and while representation looked promising for some levels, there was a noticeable downward trend at the executive level. Naturally, the prospect wanted to know: why is this happening?

This was not a question with an easy, pre-packaged answer. Instead, it required a deeper dive into the data, an approach that highlights One Model's value as a tool for discovery and insight generation.

Digging Deeper: How We Tackled the Problem

To address the question, we demonstrated how to use filters and visualizations to isolate and explore the data. Here's how it unfolded:

Applying Filters: We filtered the data by job level and gender to focus specifically on female executives. From there, we looked at key metrics like net hiring trends and termination rates.

Identifying Patterns: The data revealed a significant drop in representation between 2023 and 2024, which appeared dramatic due to the auto-scaling of the graph.

Exploring Causes: By clicking through different visualizations, we identified that termination rates, particularly "other" terminations, were higher than expected. Using One Model's hotspot maps, we further pinpointed the specific business unit and region where the issue was most acute.
Forming Hypotheses: Using this information, we leveraged One Model's built-in predictive AI capabilities to identify potential turnover drivers and develop actionable insights.

Flexibility Matters

This scenario underscores something critical about One Model: we don't solve all your problems; we give you the tools to solve them. Other platforms that rely on rigid, canned use cases might struggle in this situation; no solution can offer pre-built analyses for every possible scenario. Without a pre-built guide addressing their specific issue in their specific organization, users will hit a wall. One Model, by contrast, enables users to dynamically filter, explore, and analyze data to uncover answers.

Why This is Critical for People Analytics

This scenario demonstrates the real-world challenges of People Analytics. Insights are rarely handed to you on a silver platter. Instead, they require a combination of curiosity, exploration, and judgment, qualities not even AI will bring to the table. While some HRBP-level professionals might not engage in this level of analysis, advanced People Analytics practitioners understand that solving complex, niche problems, like representation trends at a specific level, requires more than surface-level data.

The One Model Advantage

Here's why One Model is different:

Speed: Because One Model creates a unified single source of truth for your organization, you can explore complex interactions without manually manipulating data, saving you time.

Flexibility: You're not limited to prebuilt Storyboards or canned content. You can adapt and dig into unique questions in real time, even when you need to create new metrics to explore an issue.

Depth of Insights: By enabling dynamic exploration, One Model allows for nuanced and complete answers that out-of-the-box solutions can't deliver.

The takeaway from this use case is simple: good insights require effort.
Platforms that promise quick, prebuilt solutions often oversimplify problems or deliver incomplete answers. One Model’s strength lies in empowering users to dig deeper and uncover real insights—even when the questions are complex. With One Model, you’re not just using a People Analytics platform—you’re solving real problems.
4 min read
The One Model Team
Having the right vendor partnership can make a huge difference. And the wrong one can lead to huge headaches. One Model understands this, and we strive to be more than just another software provider. We seek to be a trusted partner for both HR and IT teams, deeply invested in the success of both departments. By partnering with One Model, tech teams get:

Expert resources to field HR's requests

A common challenge many businesses face is HR teams' reliance on internal IT for business intelligence (BI) support. This not only strains IT resources but also may not always result in optimal solutions tailored to HR's needs. With One Model, HR gets access to expert People Analytics resources. This isn't just about having an extra set of hands; it's about having a skilled set of hands, well-versed in BI, ready to converse, collaborate, and create.

More time to focus on IT initiatives

With One Model, tech teams can channel their energies and expertise towards initiatives directly tied to their KPIs. Our proposition is simple: let us empower HR with solutions that meet their BI needs while IT reallocates their time towards other tech initiatives. This isn't about pitting departments against each other; it's about recognising and optimising the strengths of both groups.

Increased transparency and accessibility

If there's ever a need for IT to get involved, no problem. One Model's platform is built on transparency. Developers can literally inspect the SQL, ensuring a seamless integration of our platform into your ecosystem. This creates a harmonious interplay between HR and IT, with both departments benefitting.

A cost-effective approach to People Analytics

The cost of hiring and maintaining a single data engineer is substantial, and it's not easy to find IT candidates with People Analytics experience. Data engineers often earn an annual salary of over $110,000.
And this doesn't even include additional expenses your organisation will need for data architects, project managers, and other resources — especially as you scale. Partnering with One Model's team is much more cost-efficient, allowing you to allocate your resources more strategically.

"From the tech leader's perspective, there's a significant cost to having HR rely on your internal IT team for BI support. So as you consider building your own solution from scratch or buying a People Analytics tool, One Model's flexible platform is ideal because we'll partner with your HR team and deliver the best of both worlds. We specialise in supporting HR's needs, so tech teams can focus on their own KPIs. And, if developers ever have questions, One Model is open enough for them to jump in and literally look at the SQL. It's a win-win for HR and IT."
— Taylor Clark, Chief Data Scientist, One Model

Navigating the complexity of people data

While many development teams are adept builders, navigating the labyrinth of people data is a different beast altogether. A common misconception is that IT teams can effortlessly manage data extractions, transformations, and integrations from HR systems. The reality? People data is complex, intricate, and often disorganised.

"Many IT teams are already handling data extractions, transformations, and integrations across HR systems. With that experience, the justifiable assumption is that People Analytics will be a straightforward project. But the challenges of People Analytics are unique. For example, creating historically accurate, effective-dated data models across multiple systems. One Model is the only vendor that confronts these challenges head-on."
— John Carter, Senior Sales Engineer, One Model

With One Model, you're not just getting a People Analytics platform; you're gaining a partner skilled in deciphering, managing, and optimising people data. Where many falter, we excel.
The challenges that often stymie others, like managing Workday's unique constraints, are where our expertise comes to the forefront. We do the heavy lifting, ensuring that HR's data needs are met so tech teams can avoid the typical complexities. Our approach isn't just about providing a platform. It's about building a valuable, long-term partnership and a commitment to the success of HR, IT, and the overall company.

Ready to learn more?

Download our whitepaper, Why Tech Leaders Prefer One Model's People Analytics Platform, to learn 4 key reasons IT teams choose our platform over others on the market.
5 min read
Matthew Wilton
People data is the lifeblood that fuels insights and drives strategic decisions. Yet, for many leaders, extracting meaningful data from complex systems like Workday can be a daunting task. One Model's Workday Connector is designed to turn this challenge into an opportunity, providing a powerful solution that stands out in a crowded market. Here's why it's a game-changer for technical people analytics leaders.

The One Model Advantage: Beyond Brute Force

At its core, our Workday API Connector is built on a deep understanding of the intricacies of Workday. Unlike competitors who might rely on inefficient methods, such as pulling data for every employee every day, One Model has developed a sophisticated approach that is both clever and efficient.

Intelligent Data Retrieval

With a brute-force approach, querying a year's worth of data for a single employee requires 365 requests to the Workday API. For a 1,000-employee company, pulling a full year's data therefore requires 365,000 API requests. Workday's API returns data in large, complex XML files, and requests can take seconds to receive a response. For that 1,000-employee company, even if the API responds to every request in 1 second, it will take over 4 days to pull all the data. This brute-force method does not scale and is not practical, especially for larger enterprises.

Our solution? We focus on significant data change points, intelligently identifying the moments when meaningful changes occur in an employee's record. This approach not only reduces the volume of data processed but also ensures that we capture the most critical updates.

The Self-Healing Data Model: Scalability and Accuracy

One Model's unique self-healing data model is a standout feature that ensures accuracy and consistency in your analytics.
Here's how it works:

Intelligent Identification: By leveraging our deep understanding of the nuances of data locations and changes, our connector identifies and extracts only the necessary data points. This minimizes the load on Workday's API and speeds up the data retrieval process.

Error Detection and Correction: Our system automatically detects discrepancies and back-dated changes, correcting them without manual intervention. This self-healing capability ensures that your data remains up to date and accurate, even if historical changes are made.

Dynamic Processing: The connector dynamically adapts to changes in the Workday API, ensuring continuous, reliable data extraction without interruption.

Comprehensive Data Support: From Raw Workday Data to Analytical Models

One Model goes beyond mere data extraction. We transform raw data into analytical models, providing actionable insights rather than just raw numbers. Our approach integrates custom fields and user-defined reports, ensuring that even the unique aspects of your data are captured and analyzed.

Integration with Custom Reports

For those unique data points that aren't covered by standard API calls, One Model supports the integration of custom reports. Customers can create custom reports in Workday, which our connector then pulls and integrates into the overall data model. This flexibility means that no piece of data is left behind, giving you a comprehensive view of your workforce.

Unmatched Support and Stability

Our Workday Connector isn't just a tool; it's a platform-based service. Through the platform, we offer continuous monitoring, maintenance, and support to ensure your data pipeline remains robust and reliable. Beyond the platform, our team is on hand to address any issues, making sure that your focus remains on deriving insights, not on troubleshooting data pipelines.
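The error-detection idea above can be illustrated with a simple fingerprinting sketch. To be clear, this is not One Model's actual self-healing mechanism, just one minimal way to spot a back-dated change: hash the fields you care about on each extraction, and flag any employee whose fingerprint no longer matches for re-extraction.

```python
import hashlib
import json

def record_fingerprint(record: dict) -> str:
    """Stable hash of a worker record's analytically relevant fields."""
    payload = json.dumps(record, sort_keys=True, default=str)
    return hashlib.sha256(payload.encode()).hexdigest()

# Fingerprint stored from a previous extraction run (illustrative data only).
stored = record_fingerprint({"worker_id": 7, "dept": "Sales", "fte": 1.0})

# Fresh pull: a back-dated department change has quietly altered history.
fresh = record_fingerprint({"worker_id": 7, "dept": "Marketing", "fte": 1.0})

# A mismatch flags this worker's history for re-extraction and correction.
needs_resync = stored != fresh
```

In practice a connector would track fingerprints per effective-dated slice rather than per worker, but the core idea of comparing a stored digest against a fresh one is the same.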
Handling Workday Data Updates with Ease

Workday's frequent updates can pose challenges, but One Model's connector is designed to handle them seamlessly. By using versioned API endpoints and dynamic data processing, we ensure that changes in Workday's data model do not disrupt your analytics operations.

Why Choose One Model?

In a market where many solutions promise easy data extraction but fall short on delivering comprehensive, scalable, and accurate data models, One Model's Workday Connector stands out. Here's why:

Scalability: Efficient data retrieval methods that scale with your organization.

Accuracy: Self-healing models that ensure data integrity.

Flexibility: Integration of custom reports and fields.

Support: Continuous maintenance and monitoring from a dedicated team.

Many customers and prospects have come to us to solve their challenges in accessing, obtaining, and maintaining a historic data load from Workday. With our Workday Connector, you get more than a Workday data export: you get your data in a form that drives meaningful, actionable insights.

Unlock the full potential of your people data with One Model. Connect with us today or download our Workday People Analytics guide to learn more about our connection to Workday and how it can transform your analytics capabilities.
7 min read
Dennis Behrman
Few tasks can be as perplexing — and oddly satisfying — as the alchemy of turning headcount numbers into meaningful cost allocations by work days in a month, with the option to break them down by department or any other variable you desire. With business demands rapidly evolving, the age-old adage that "time is money" has never been more accurate. Yet navigating the complexities of cost allocation (also referred to as overhead allocation) and crafting the perfect cost allocation plan can be a Herculean task.

As you may know, cost allocation involves the identification and allocation of expenses to various activities, individuals, projects, or any relevant cost-related entities. Its primary objective is to equitably distribute costs among different departments, facilitate profitability calculations, and establish transfer pricing. Essentially, cost allocation serves as a means to gauge financial performance and enhance the decision-making process. Since your employees are by and large your greatest investment, understanding their cost allocation on many levels has immense benefits.

As Phil shows in the video above, One Model makes this process seamless — and it's all thanks to the power of our data orchestration model. Learn more about our People Data Cloud Platform.

The Changing Landscape of HR Data

It is no longer enough to get a holistic cost allocation from your headcount. Organizations across the globe need to be able to slice and dice their data to really understand how those costs are changing over time and how to best build a thriving workforce. Traditional views showing headcount over time are excellent starters, but the main course? That's translating those numbers into actionable cost insights. After all, understanding not just the size but also the cost of your workforce over time is the key to informed decision-making for both finance and operations teams.
For example, slicing and dicing dynamic cost allocation over time, such as a total-days-in-month breakout broken down by department, supervisor hierarchy level, or length of time employed, can lead to insights that change policy or articulate critical headcount needs.

How does One Model accomplish this?

One Model possesses unique capabilities that can transform your traditional headcount chart into a sophisticated cost analysis tool. What makes us unique? It all has to do with the data model. Once your data is modelled, you gain access to a variety of metrics that you can use as is or modify to fit your specific business needs. Diving into your compensation grouping of metrics, you can replace the "headcount, end of period" metric with "headcount, beginning of period" or append it with the "average salary, end of period" metric.

Delving deeper, the real magic happens as One Model enables you to convert that average salary into a robust cost allocation strategy. With the dynamic "compensation cost daily allocation" metric at your disposal, it's like having a personal assistant that adjusts effortlessly to varying time durations, including leap years. Furthermore, One Model recognises the fluctuations in costs, especially during shorter months or leap years, ensuring a more precise and insightful view of your financial landscape. This capability allows you to make more informed decisions and gain a deeper understanding of your organisation's financial dynamics.

Segmenting Cost Allocation Metrics

Each organisation is akin to a mosaic, with numerous sections and subdivisions. With One Model, you can delve into each segment, examining the cost allocation intricacies at every level. The insights gleaned can empower both finance and operations professionals, offering clarity in strategy and resource allocation.

Why is overhead allocation such an important metric?

Cost allocation is crucial for various reasons in business and financial management.
Here are four key reasons why it's important to pay attention to cost allocation:

Fairness and Equity

Overhead allocation ensures that costs are distributed fairly among different departments, products, or projects. This fairness is essential for budget allocation and growth in each department.

Performance Measurement

Allocating costs accurately allows for better measurement of the performance of different departments or business segments. By attributing costs to specific activities, it becomes easier to identify areas of inefficiency and make necessary improvements.

Profitability Analysis

Cost allocation helps in determining the profitability of products, services, or business units. This information is invaluable for making strategic decisions about resource allocation, product pricing, and business expansion. For other considerations when breaking down revenue, see our average revenue per employee blog.

Resource Allocation

When costs are allocated appropriately, organisations can allocate resources more effectively. It helps in identifying where additional resources are needed and where resources might be overallocated, leading to cost savings.

Visualising Cost: The Power of Representation

One Model lets you visualise your cost allocation journey over time through detailed charts. While this can present a plethora of data, each data point offers invaluable insights. For those who prefer a more structured representation, a tabulated view can provide clarity. All you need to do is create a data set that shows the amount of cost to allocate, along with the start and end dates of that allocation. From current headcount to cost allocation for recruiting, the process to get the answer is the same. For example, if you spent $10,000 on job advertisements on LinkedIn from Jan. 1, 2018, to Dec. 31, 2018, One Model can efficiently allocate that spend per day throughout the year. This becomes very useful when combined with other metrics over periods of time.
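To make the LinkedIn example concrete, here is a minimal sketch of per-day allocation in Python. The function name and figures are illustrative, not One Model's actual "compensation cost daily allocation" implementation:

```python
from datetime import date

def per_day_cost(total_cost: float, start: date, end: date) -> float:
    """Spread a cost evenly across each day from start to end, inclusive."""
    days = (end - start).days + 1
    return total_cost / days

# $10,000 of LinkedIn job-ad spend across calendar year 2018 (365 days).
daily = per_day_cost(10_000, date(2018, 1, 1), date(2018, 12, 31))

# Month totals then follow the calendar: February 2018 (28 days) naturally
# carries less of the spend than March (31 days).
february_spend = daily * 28
march_spend = daily * 31
```

A daily grain like this is what lets shorter months, leap years, and mid-period date ranges roll up correctly.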
For example, I can compare what I'm spending on LinkedIn with the number of applications I receive from LinkedIn during that period. This yields a "cost per application" metric that I can use to compare the effectiveness of LinkedIn relative to other sources.

The Takeaway

If the daunting task of juggling countless spreadsheets, numbers, and formulas sounds all too familiar, there's a better way. One Model is designed to transform the perplexing world of cost allocation, overhead allocation, and crafting a tailored cost allocation plan into a more straightforward, efficient process. So, if late-night data crunching is your current reality, it's time to explore the capabilities of One Model.

Let us show you how One Model does this 1:1.
6 min read
Phil Schrader
It's always good news when a prospective One Model customer tells me that they use SuccessFactors for recruiting. Given that HR technology in general, and applicant tracking systems in particular, seldom involve feelings of pleasure, my statement bears a bit of explanation. I wouldn't chalk it up to nostalgia, though like many members of the One Model team, I had a career layover at SuccessFactors. Instead, my feelings for SuccessFactors recruiting are based on that system's unique position in the evolution of applicant tracking systems. I think of SuccessFactors as the "Goldilocks ATS".

On one hand, SFSF doesn't properly fit in with the new generation of ATS systems like SmartRecruiters, Greenhouse, or Lever. But like those systems, SFSF is young enough to have an API and to have grown up in a heavily integrated technology landscape. On the other hand, SFSF can't really be lumped in with the older generation of ATS systems like Kenexa and Taleo either. Yet again, though, it is close enough to have picked up a very positive trait from that older crowd. Specifically, it still manages to concern itself with the mundane task of, ya know, tracking applicant statuses. (Yeah, yeah, new systems, candidate experience is great, but couldn't you also jot down when a recruiter reviewed a given application and leave that note somewhere we could find it later without building a report?)

In short, SFSF Recruiting is a tweener, and better for it. If you are like me, and you happen to have been born in the fuzzy years between Gen X and Millennials, then you can relate: you're young enough to have been introduced to web design and email in high school, and old enough to have not had Facebook and cell phones in college.

So let's take a look at the magic of tracking application status history using data from SuccessFactors RCM. While it seems like a no-brainer, not all ATSs provide full application status history via an API.
Since status history is basically the backbone of any type of recruiting analytics, it's fortunate that SuccessFactors does provide it. For those of you who want to poke around in your own data a bit, the data gets logged in an API object called JobApplicationStatusAuditTrail. In fact, not only is the status history data available, but custom configurations are accounted for and made available via the API as well. This is one of the reasons why, at One Model, we feel that SuccessFactors has without a doubt the best API architecture for getting data out to support an analytics program. Learn more about our SuccessFactors integration.

But there is something that not even the Goldilocks ATS can pull off: making sense of the data. It's great to know when an application hits a given status, but it's a mistake to think that recruiting is a calm and orderly process where applications invariably progress from status to status in a logical order. In reality, recruiters are out there in the wild doing their best to match candidates with hiring managers in an ever-shifting context of business priorities, human preferences, and compliance requirements. Things happen. Applicants are shuffled from requisition to requisition. Statuses get skipped. Offers are rescinded. Job requisitions get cancelled without applicants getting reassigned.

And that's where you need a flexible people analytics solution like One Model. You'll probably also want a high-end espresso machine and a giant whiteboard, because we're still going to need to work out some business logic to measure what matters in the hectic, nonlinear, applicant-shuffling real world of recruiting. Once we have the data, One Model works with customers to group and order their application statuses based on their needs. From there, the data is modeled to allow for reporting on the events of applications moving between statuses as well as the status of applications at any point in history.
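As a toy illustration of that point-in-time modeling, consider audit-trail rows shaped roughly like the JobApplicationStatusAuditTrail object mentioned earlier. The application IDs, status names, and dates here are invented, and real status pipelines are configured per customer:

```python
from datetime import date

# (application_id, status, effective_date) -- illustrative rows only.
audit_trail = [
    (101, "New",       date(2023, 1, 5)),
    (101, "Review",    date(2023, 1, 9)),
    (101, "Interview", date(2023, 2, 1)),
    (102, "New",       date(2023, 1, 7)),
    (102, "Rejected",  date(2023, 1, 20)),
]

def status_as_of(app_id: int, as_of: date):
    """Latest status on or before as_of; None if the application
    had no status events yet on that date."""
    events = [(d, s) for (a, s, d) in audit_trail
              if a == app_id and d <= as_of]
    return max(events)[1] if events else None
```

With `status_as_of`, you can reconstruct the whole pipeline for any historical date, which is exactly the kind of question the event log alone doesn't answer directly.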
You can even look back at any point in time and see how many applications were at a particular status, alongside the highest status those applications eventually made it to. And yes, we can do time to fill. There are a billion ways of calculating it. SuccessFactors does their customers a favor by allowing them to configure how they would like to calculate time to fill and then putting the number in a column for reporting. If you're like most customers, though, one calculation isn't enough. Fortunately, One Model can do additional calculations any way you want them, as well as offering a "days open" metric and grouped dimension that's accurate both at the current point in time and historically. "Days in status" is available as well, if you want to get more granular.

Plus, on the topic of time to fill, there's an additional tool in One Model's toolkit. It's called One AI, and it enables customers to use machine learning to help predict not only time to fill, but also the attributes of candidates that make them more likely to receive an offer or get hired. That, however, is a topic for another day.

For today, the good news is that if you have SuccessFactors Recruiting, we'll have API access to the status history data and customizations we need to help you make sense of what's going on in recruiting. No custom reports or extra connections are required. Connecting your ATS and HRIS data also means you can look at metrics like the cost of your applicant sourcing and how your recruiters affect your employee outcomes long term. So here's to SuccessFactors Applicant Tracking System, the Goldilocks ATS.

Ready to get more out of SuccessFactors? Click the button below and we'll show you exactly how, and how fast you can have it.

Quick Announcement: Click here to view our Success with SuccessFactors Webinar recording and learn how to create a people data strategy!
10 min read
Phil Schrader
The One Model difference that really sets us apart is our ability to extract all your messy data and clean it into a standardized data catalog. Let's dive deeper.

One Model delivers people analytics infrastructure. We accelerate every phase of your analytics roadmap. The later phases of that roadmap are pretty fun and exciting. Machine learning. Data augmentation. Etc. Believe me, you're going to hear a ton about that from us this year. But not today. Today we're going to back up for a minute and pay homage to an absolutely wonderful thing about One Model: we will help you clean up your data mess.

Messy Data? Don't distress.

Josh Bersin used this phrasing in his talk at the People Analytics and the Future of Work conference. From my notes at PAFOW on Feb 2, 2018: there are huge opportunities to act like a business person in people analytics. In the talk right before Josh's, Jonathan Ferrar reminded us that you get $13.01 back for every dollar you spend on analytics. But you have to get your house in order first. And that's going to be hard.

Our product engineering team at One Model has spent their careers figuring out how to pull data from HR systems and organize it all into effective data models that are ready for analytics. If your team prefers, your company can spend years and massive budgets figuring all this out... Or, you can take advantage of One Model. When you sign up with One Model:

1) We take on responsibility for helping you extract all the data from your HR systems and related tools.

2) We connect and refine all that data into a standard data catalog that produces answers your team will actually trust. Learn what happened to Synk when they finally had trust.

Big data cleansing starts with extracting the data from all your HR and related tools.

We will extract all the data you want from all the systems you want through integrations and custom reports. It's part of the deal. And it's a big deal!
For some perspective, check out this Workday resource document and figure out how you'll extract your workers' FTE allocation from it. Or if Oracle is your thing, you can go to our HRIS comparison blog and read about how much fun our founder, Chris, had figuring out how to get a suitable analytics data set out of Fusion. In fact, my coworker Josh is pulling some Oracle data as we speak, and let me tell you, I'm pretty happy to be working on this post instead.

Luckily for you, you don't need to reinvent this wheel! Call us up. We'll happily talk through the particulars of your systems and the relevant work we've already done. The documentation for these systems is (for the most part) out there, so it's not that this is a bunch of classified top-secret stuff. We simply have a lot of accumulated experience getting data out of HR systems and have built proprietary processes to ensure you get the most data from your tools. In many cases, like Workday, for example, we can activate the custom integration we've already built and have your core data set populated in One Model. If you go down that road on your own, it'll take you 2-3 days just to arrange the internal meeting to make a plan for extracting all this data. We spent over 10,000 development hours on our Workday extraction process alone. And once you do get the data out, there's still a mountain of work ahead of you. Which brings us to...

The next step is refining your extracted data into a standardized data catalog.

How do you define and govern the standard ways you are going to analyze your people data? Let's take a simple example, like termination rate. The numerator is actually pretty straightforward: you count up the number of terminations. Beyond that, you will want to map termination codes into voluntary and involuntary, exclude (or include) contractors, and so on. Let's just assume all this goes fine. Now what about the bottom part?
You had, say, 10 terminations in the given period of time, so your termination rate is... relative to what headcount? The starting headcount for that period? The ending headcount? The average headcount? How about the daily average headcount? Go with the daily average, for two reasons. 1) It's the most accurate. You won't unintentionally under- or overstate termination rate, giving you a more accurate basis of comparison over time and the ability to correctly pro-rate values across departments. See here for details. And 2) if you are thinking of doing this in-house, it'll be fun to tell your team that they need to work out how to deliver daily average headcounts for all the different dimensions and cuts to meet your data cleaning requirements.

If you really want to, you can fight the daily average headcount battle and many others internally. But we haven't even gotten to time modeling yet, which is so much fun it may get its own upcoming One Model Difference post. Or the unspeakable joy you will find managing organizational structure changes (see #10). On the other hand, One Model comes complete with a standard metrics catalog of over 590 metrics, along with the data processing logic and system integrations necessary to collect that data and calculate those metrics. You can create, tweak, and define your metrics any way you want to. But you do not have to start from scratch. If you think about it, this One Model difference makes all the difference.

Ultimately, you simply have to clean up your messy data. We recognize that. We've been through it before. And we make it part of the deal. Our customers choose One Model because we're raising the standard and setting the pace for people analytics. If you are spending time gathering and maintaining data, then the yardstick for good people analytics is going to accelerate away from you. If you want to catch up, book a demo below and we can talk. Tell us you want to meet.
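As a closing worked example of the daily-average-headcount denominator discussed above, here is a minimal sketch. All spans and dates are invented for illustration; real metric logic handles many more cuts and edge cases:

```python
from datetime import date, timedelta

# Employment spans as (hire_date, termination_date_or_None).
spans = [
    (date(2024, 1, 1), None),               # employed all of January
    (date(2024, 1, 1), date(2024, 1, 10)),  # terminated mid-month
    (date(2024, 1, 16), None),              # hired mid-month
]

def daily_average_headcount(spans, start: date, end: date) -> float:
    """Average of the daily headcounts over [start, end], inclusive.
    An employee counts on every day from hire through termination."""
    days = (end - start).days + 1
    total = 0
    for offset in range(days):
        day = start + timedelta(days=offset)
        total += sum(1 for hire, term in spans
                     if hire <= day and (term is None or day <= term))
    return total / days

avg = daily_average_headcount(spans, date(2024, 1, 1), date(2024, 1, 31))

# One termination in January, divided by the daily average headcount:
termination_rate = 1 / avg
```

Note how the mid-month hire and termination each contribute only the days they were actually employed, which is exactly what a start-of-period or end-of-period headcount would miss.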
About One Model: One Model helps thriving companies make consistently great talent decisions at all levels of the organization. Large and rapidly-growing companies rely on our People Data Cloud™ people analytics platform because it takes all of the heavy lifting out of data extraction, cleansing, modeling, analytics, and reporting of enterprise workforce data. One Model pioneered people data orchestration, innovative visualizations, and flexible predictive models. HR and business teams trust its accurate reports and analyses. Data scientists, engineers, and people analytics professionals love the reduced technical burden. People Data Cloud is a uniquely transparent platform that drives ethical decisions and ensures the highest levels of security and privacy that human resource management demands.
Read Article
Featured
13 min read
Nicholas Garbis
At some point, every successful People Analytics team will develop a meaningful partnership with the Finance organization. Unfortunately, this partnership is usually not easily achieved, and it's quite normal for initial alignment efforts to last for a couple of years (or more!). We are delighted to repost this insightful blog post authored by Nicholas Garbis on May 4, 2021. Revisiting his valuable insights will help us all foster a deeper understanding of how HR and Finance can collaborate more effectively. A new or maturing People Analytics team may fail to recognize the effort level required and not prioritize the work needed to establish this critical partnership with Finance. They do so at their own peril. The day will inevitably arrive when a great analytics product from the PA team is dismissed by senior leaders who see that the foundational headcount numbers do not match, and the PA team will lack a clear explanation that is supported by the CFO and Financial Planning & Analysis (FP&A) leaders. But why is this the case? And how can HR and People Analytics teams do a better job of establishing the partnership? Analyzing the analytics conflicts between finance and HR Lack of alignment on workforce data At the heart of the issue is a lack of alignment on the most basic workforce metric: headcount. Both Finance and HR teams are often sharing headcount data with senior leaders. In many companies, the numbers are different. This creates distrust and frustration, and I will contend that, given Finance’s influence in most organizations, the HR team is on the losing end of these collisions. The end result is that the organization spends time debating the figures (at a granular level) and misses the opportunity to make talent decisions that support the various company strategies (eg, growth, innovation, cultural reinvention, cost optimization). 
While headcount is at the foundation, there are several other areas where such disconnects arise and create similar challenges: workforce costs, contingent workers, position management, re-organizations, workforce budgets/plans, movements, etc... Solving the basic headcount alignment is the first step in setting the partnership. Source of the Disconnect: "Headcount Dialects" and "Dialectical Thinking" The disconnect in headcount figures is nearly always one of definition. Strange as it may sound, Finance and HR do not naturally count the workforce in the same way. It's as if there is a "headcount dialect" that each needs to learn in order to communicate with the other. Therefore, if they have not spent some intentional, focused time on aligning definitions and processes, they will continue to collide with each other (and HR will fail to gain the trust needed to build an analytics/evidence-based culture around workforce decisions). The dialectical thinking challenge is for Finance and HR to recognize that the same data can be presented in (at least) two different ways and both can be simultaneously accurate. It is for the organization to determine which definition is considered "correct" for each anticipated use case (and then stick to that plan). Primary disconnection points Two primary areas of disconnect are the definition of the term “headcount” and whether a cost or organizational hierarchy is being used. Definition of “Headcount”: There are several components of this, underscoring the need for alignment when it comes to finance headcount vs HR headcount. Using Full-Time Equivalent (FTE) or Employee Count: Employees are captured in the system with FTE values of 1.0 (full-time), 0.5 (half-time), and every fraction in between. The Employee Count, on the other hand, will count each employee as 1 (sometimes lightly referred to as a “nose count” to distinguish it from the FTE values). 
In some companies, interns/co-op employees are in the system with FTE value of 0, even though they are being paid. Determining Which Status Codes are to be Included: Employees are captured in the HR system as being active or inactive, on short-term or long-term leave of absence (LOA, “garden leave”), and any number of custom values that are used to align with the HR processes. In many companies, the FTE values are updated to align with the change in status. Agreeing on which status codes are counted in "headcount" is required for setting the foundation. Organization versus Cost Hierarchy: The headcount data can be rolled up (and broken down) in at least two ways: based on the organization/supervisor hierarchy structure or based on the cost center/financial hierarchy. Each has its unique value, and neither is wrong -- they are simply two representations of the same underlying data. It’s quite common that insufficient time has been spent in aligning, reconciling, and validating these hierarchies and determining which one should be used in which situations. Organization Hierarchy: This is sometimes called the “supervisory hierarchy” and represents “who reports to whom” up the chain of command to the CEO. This hierarchy is representative of how work is being managed and how the workforce is structured. Each supervisor, regardless of who is paying for their team members, is responsible for the productivity, engagement, performance, development, and usually the compensation decisions, too. Viewing headcount through the organization hierarchy will provide headcount values (indicating the number of resources) for each business unit, each central function, etc... The organization hierarchy is appropriate for understanding how work is being done, performance is being managed, the effectiveness of leaders and teams, and all other human capital management concerns. 
It is also useful in some cost-related analyses such as evaluation and optimization of span-of-control and organization layers. Cost Hierarchy: This is sometimes referred to as “who is paying for whom” and is rarely in perfect alignment with the organization hierarchy. There is a good reason for this, as there are situations when a position in one part of the organization (eg, research & development) is being funded by another (eg, a product or region business unit). In these cases, one leader is paying for the work and the work is being managed by a supervisor within another leader's organization. I have seen "cross-billing" situations going as high as 20% of a given organization. When headcount is shown in a cost hierarchy, it indicates what will hit the general ledger and the financial reporting of the business units. It has a valid and proper place, but it is mostly about accounting, budgeting, and financial planning. Which business unit is right? The truth is that as long as you have all the workforce data accurately captured in the system, everything is right. This sounds trite, but it puts emphasis on the task at hand, which is to determine a shared understanding and establish rules for what will be counted and how, which situations will use which variations, and what agreed-upon labeling will be in place for charts/tables shared with others. Some organizations that have a culture of compliance and governance could set this up as part of an HR data governance effort (where headcount and other workforce metrics would be defined, managed, and communicated). Going further, these agreed-upon ground rules need to be socialized beyond the Finance and HR/People Analytics leaders, across the broader Finance and HR organizations. These teams all need to be aligned. How does One Model help finance and HR collaborate? 
With a People Analytics solution like One Model in place, the conversations between HR and Finance can be had with much more clarity and speed. This becomes easier because, within One Model, all of the workforce data is captured, data quality is managed, and all related dimensions (eg, hierarchies, employee attributes) are available for analysis. Two examples of content that is specifically designed to facilitate the Finance-HR alignment discussions are: Headcount Storyboard. Setting up a storyboard which shows headcount represented in multiple ways: FTEs versus employee counts, variations of which statuses are included/excluded, etc. This information becomes readily comparable, with the metric definitions only a click away. Even better, the storyboard can be shared with the Finance and HR partners in the discussion to explore on their own after the session. One Model is the best tool for counting headcount over time. Hierarchy Storyboard. Providing views of the headcount as seen using the supervisor and cost hierarchies side-by-side will help to emphasize that both are simultaneously correct (ie, the grand total is exactly the same). This can also provide an opportunity to investigate some of the situations where the cost and organizational hierarchies are not aligned. In many cases, these situations can be understood. Still, occasionally there are errors from previous reorganizations/transfers that resulted in costing information not being updated for a given employee (or group of employees). With the data in front of the teams, the discussion can move from “Which one is right?” to “Which way should be used when we meet with leaders next time?” When you have One Model, you can bring HR and Finance together faster and more easily ... and that helps you to accelerate your people analytics journey. Need Help Talking to Finance? Let us know you'd like to chat.
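The "headcount dialects" described above can be made concrete with a small Python sketch. The records and status codes here are hypothetical, purely for illustration: the same four employees yield four different, equally defensible headcount figures depending on the definition chosen.

```python
# Hypothetical records: (employee_id, fte, status)
workforce = [
    ("E1", 1.0, "active"),
    ("E2", 0.5, "active"),  # part-time
    ("E3", 1.0, "loa"),     # on leave of absence
    ("E4", 0.0, "active"),  # intern carried at FTE 0
]

def employee_count(rows, statuses):
    """'Nose count': every employee with a matching status counts as 1."""
    return sum(1 for _, _, status in rows if status in statuses)

def fte_count(rows, statuses):
    """FTE count: part-timers count as a fraction, FTE-0 interns as nothing."""
    return sum(fte for _, fte, status in rows if status in statuses)

# Four defensible answers to "what is our headcount?":
print(employee_count(workforce, {"active"}))         # nose count, actives only
print(employee_count(workforce, {"active", "loa"}))  # nose count, including LOA
print(fte_count(workforce, {"active"}))              # FTE, actives only
print(fte_count(workforce, {"active", "loa"}))       # FTE, including LOA
```

Agreeing up front on which of these numbers is "headcount" for each use case, and labeling it accordingly, is exactly the Finance-HR alignment exercise the article describes.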
Read Article
Featured
5 min read
Phil Schrader
Analytics is a funny discipline. On one hand, we deal with idealized models of how the world works. On the other hand, we are constantly tripped up by pesky things like the real world. One of these sneaky hard things is how best to count up people at various points in time, particularly when they are liable to move around. In other words, how do you keep track of people at a given point in time, especially when you have to derive that information from a date range? Within people analytics, you run into this problem all the time. In other areas, it isn’t as big of a deal. Outside of working hours (sometimes maybe during working hours), I run into this when I’m in the middle of a spreadsheet full of NBA players. Let's explore by looking at an easy-to-reference story from 2018. Close your eyes and imagine I’m about to create an amazing calculation when I realize that I haven’t taken player trades into consideration. George Hill, for example, starts the season in Sacramento but ends it in Cleveland. How do you handle that? Extra column? Extra row? What if he had gotten traded again? Two extra columns? Ugh! My spreadsheet is ruined! Fortunately, One Model is set up for this sort of point-in-time metric. Just tell us George Hill’s effective and end dates and the corresponding metrics will be handled automatically. Given the data below, One Model would place him in the Start of Period (SOP) Headcount for Sacramento and End of Period (EOP) Headcount for Cleveland. Along the way, we could tally up the trade events. In this scenario, Sacramento records an outbound trade of Hill and Cleveland tallies an inbound trade. The trade itself would be a cumulative metric. You could ask, “How many inbound trades did Cleveland make in February?” and add them all up. Answer-- they made about a billion of them. Putting it all together, we can say that Hill counts in Cleveland’s headcount at any point in time after Feb 7. 
(Over that period Cleveland accumulated 4 new players through trades.) So the good news is that this is easy to manage in One Model.

Team         Effective Date   End Date
Sacramento   2017-07-10       2018-02-07
Cleveland    2018-02-08       ---

The bad news is that you might not be used to looking at data this way. Generally speaking, people are pretty comfortable with cumulative metrics (How many hires did we make in January?). They may even explore how to calculate monthly headcount and are pretty comfortable with the current point in time (How many people are in my organization?). However, being able to dip into any particular point in time is new. You might not have run into many point-in-time scenarios before-- or you might have run into versions that you could work around. But, there is no hiding from them in people analytics. Your ability to count employees over time is essential. Unsure how to count people over time? Never fear. We’ve got a video below walking you through some examples. If you think this point in time stuff is pretty cool, then grab a cup of coffee and check out our previous post on the Recruiting Cholesterol graph. There we take a more intense look beyond monthly and yearly headcount and dive deeper into point-in-time calculations. Also, if you looked at the data above and immediately became concerned about the fact that Hill was traded sometime during the day on the 8th of February and whether his last day in Sacramento should be listed as the 7th or the 8th-- then please refer to the One Model career page. You’ll fit right in with Jamie :) Want to read more? Check out all of our People Analytics resources. About One Model: One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own. 
Its newest tool, One AI, integrates cutting-edge machine learning capabilities into its current platform, equipping HR professionals with readily-accessible, unparalleled insights from their people analytics data.
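As a rough sketch of how effective-dated rows answer point-in-time questions (Python; the helper function is illustrative, not One Model's actual API), the George Hill data from the article resolves like this:

```python
from datetime import date

# Effective-dated assignments from the article; None means still current.
assignments = [
    ("George Hill", "Sacramento", date(2017, 7, 10), date(2018, 2, 7)),
    ("George Hill", "Cleveland",  date(2018, 2, 8),  None),
]

def team_on(player, day, rows):
    """Which team does a player count toward at a given point in time?"""
    for name, team, start, end in rows:
        if name == player and start <= day and (end is None or day <= end):
            return team
    return None  # not on any roster that day

# Start-of-period and end-of-period headcounts fall out of the same lookup:
sop_team = team_on("George Hill", date(2017, 10, 1), assignments)  # Sacramento
eop_team = team_on("George Hill", date(2018, 4, 1), assignments)   # Cleveland
```

No extra columns, no extra rows per trade: another transfer just means another effective-dated row, and every point-in-time question is answered by the same date-range lookup.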
Read Article
Featured
4 min read
Dennis Behrman
Phil Schrader and Stephen Haigh had an opportunity to attend the People Analytics World Conference in London April 26-27, 2023. During their visit, Phil was asked to give a public demonstration of how HR analytics software works. While we can't speak for other people analytics tools, we can speak to One Model. The crowd was mesmerized, and the Q&A at the end is definitely worth watching. Join Phil as he walks through data import, export, and all the magic in between — even showing in real time how an AI model is built exclusively on your data. Phil, always cheeky and fun to watch, is a great teacher of all the things you should look for when assessing which people analytics tool is right for you. You'll quickly see that One Model is more transparent, easier to use, and more open than any other option on the market. Want your own personal tour of One Model? Request time to meet today. During the video, Phil walks us through each of these layers: The Consumer Layer: At the top of the platform, users, such as HR Business Partners, can access data, insights, and storyboards through a user-friendly interface. The storyboard feature allows users to interpret data visually and navigate through various tools like Explore, Storyboards, and Data. These tools enable users to slice and dice analytics, explore heat mapping, and gain insights into different data sources. From Consumer to Analyst Layer: One Model's flexibility empowers users to transition from the consumer layer to the analyst layer effortlessly. Here, analysts can customize the views, rearrange elements, and dive deeper into the data. With simple clicks, they can transform data into charts, change metrics, and connect multiple systems to gain a holistic view. Configuring Metrics and Data Engineering: As analysts continue their exploration, they can configure metrics according to their organization's specific requirements. 
They can modify calculations, adjust inclusion/exclusion criteria, and create unique views tailored to their audience. Furthermore, One Model offers transparency into data engineering, allowing analysts to delve into the underlying data models, processing scripts, and data sources. Unleashing the Power of Data Science: Finally, One Model empowers advanced analysts and data scientists to build predictive models. With the augmentation feature, analysts can create and maintain multiple models, evaluate their performance, and put them on schedules. The platform provides a guided walkthrough for model building, enabling users to define their objectives, select relevant metrics, and generate predictions. The prediction capabilities extend to specific employee segments or the entire population.
Read Article
Featured
6 min read
Dennis Behrman
Artificial intelligence (AI) has become an integral part of various industries, revolutionizing the way organizations make decisions. However, with the rapid advancement of AI technology, concerns about its potential and ethical implications have emerged. As a result, governments around the world are preparing to enact regulations to address the use of AI in people decisions. In this blog post, we will explore the scope of these forthcoming regulations and discuss how People Data Cloud can help ensure equitable, ethical, and legally-compliant practices in automated decision-making across organizations. Broad Scope of Regulations While generative AI, such as ChatGPT, has been the catalyst for these regulations, it is important to note that the scope will not be limited to such technologies alone. Instead, the regulations are expected to encompass a wide range of automated decision technologies, including rule-based systems and rudimentary scoring methods. By extending the regulatory framework to cover diverse AI applications, governments aim to ensure fairness and transparency in all areas of decision-making. Beyond Talent Acquisition Although talent acquisition processes like interview selection and hiring criteria are likely to be subject to regulation, the scope of these regulations will go far beyond recruitment alone. Promotions, raises, relocations, terminations, and numerous other people decisions will also be included. Recognizing the potential impact of AI on employees' careers and well-being, governments seek to create an equitable and just environment across the entire employee lifecycle. Focus on Eliminating Bias and Ensuring Ethical Practices One of the primary objectives of these regulations will be to eliminate bias in AI-driven decision-making. Biases can arise from historical data, flawed algorithms, or inadequate training, leading to discriminatory outcomes. 
Governments will emphasize the need for organizations to proactively identify and mitigate biases, ensuring that decisions are based on merit and competence rather than factors such as race, gender, or age. Ethical considerations, including privacy and consent, will also be critical aspects of the regulatory landscape. Be Prepared. Join the Regulations and Standards Masterclass today. Learning about AI regulations and standards for HR has never been easier with an enlightening video series from experts across the space sharing the key concepts you need to know. A Holistic Approach to Compliance To comply with forthcoming AI regulations, organizations must evaluate their entire people data ecosystem. This includes assessing where data resides, which technologies are involved in decision-making processes, the level of human review and transparency afforded, and the overall auditability of automated decisions. Achieving compliance will require robust systems that enable organizations to monitor and assess the fairness and transparency of their AI-driven decisions. One AI is Your Automated People Decision Compliance Platform As governments gear up to regulate AI in people decisions, organizations must be prepared to adapt and comply with the evolving legal landscape. The scope of these regulations will extend beyond generative AI and encompass a broad range of automated decision technologies. Moreover, regulations will address not only talent acquisition but also various aspects of employee decision-making. Emphasizing the elimination of bias and ethical practices, governments seek to create fair and equitable workplaces. To ensure compliance with AI regulations, organizations can leverage platforms like One Model's One AI, which is fully embedded into every People Data Cloud product. This platform provides the necessary machine learning and predictive modeling capabilities, acting as a "clean room" to enable compliant and data-informed people decisions. 
By leveraging such tools, organizations can future-proof themselves against audits and demonstrate their commitment to ethical and unbiased decision-making in the AI era. Request a Personal Demo to See How One AI Keeps Your Enterprise People Decisions Ethical, Transparent, and Legally Compliant Learn more about One AI HR Software
Read Article
Featured
6 min read
Richard Rosenow
Once upon a time, in a bustling corporate office, there was a dedicated HR leader who was determined to improve the company's understanding of the workforce. Despite the challenges faced by the HR team, the leader was committed to improving how HR used data for decision-making and decided that getting their workforce data in order so they could make sense of it and analyze it would help the company. Upon researching the space, they decided that investing in a people data platform would best optimize HR processes and bring about positive change. Interested in learning how the HR Leader decided on a people data platform? Check out our whitepaper on the topic to learn more. Download and Read Today The HR leader knew that they needed the support of the other business functions to make this vision a reality. They approached the Data Engineering team, Information Technology team, and Enterprise Analytics team, seeking their assistance in crafting a compelling pitch for the people data platform. "Who will help me gather data and build a strong business case for the people data platform?" the HR leader asked. "Not I," replied the Data Engineering team, busy maintaining complex data pipelines for Finance. "Not I," said the Information Technology team, focused on streamlining the company's vendor landscape. "Not I," responded the Enterprise Analytics team, preoccupied with analyzing key metrics for marketing. Feeling disheartened but undeterred, the HR leader took it upon themselves to build the pitch. They researched the benefits of having a centralized, clean, and well-organized data model, highlighting how a people data platform would enable the HR team to visualize, report on, and analyze HR data effectively. The HR leader emphasized that this investment would not only help HR but would empower the leaders and managers in the company to make data-informed decisions about their workforce. 
After weeks of hard work, the HR leader completed the pitch but knew that securing the budget wouldn't be easy. They decided to run a pilot project to demonstrate the value of the people data platform to the senior management. "Who will help me with the pilot project to showcase the potential of a people data platform?" the HR leader asked the other business function leaders. "Not I," replied the Data Engineering team, focused on optimizing their data infrastructure. "Not I," said the Information Technology team, busy managing software updates and hardware maintenance. "Not I," responded the Enterprise Analytics team, occupied with supporting the Product team with their dashboards. Undaunted, the HR leader initiated the pilot project on their own, using limited resources and sheer determination. They collected data, created reports, and provided insights that highlighted the platform's potential to revolutionize HR processes. They learned about what was needed to secure HR data and how to best share progress with employees to communicate transparently about the systems. When the pilot project was completed, the HR leader presented the results of the pilot along with their pitch to the senior management. Impressed by the evidence and the potential impact on the company, the senior management team approved a substantial budget for the investment in HR’s very own people data platform. The news spread quickly throughout the company, and soon, the other business functions took notice. Seeing the approved budget, the Data Engineering team, Information Technology team, and Enterprise Analytics team approached the HR leader with newfound enthusiasm. "Can we use your approved budget to build an in-house solution by adding headcount to our teams and activating more licenses on our in-house systems?" they asked, their eyes gleaming with anticipation. 
The HR leader shook their head and replied, "No, when I asked for your help in building the pitch and running the pilot project, none of you were willing to support the project. I gathered the data, built the business case, executed the pilot, and secured the budget all by myself. This investment is dedicated to the HR team and we will determine how it will be spent on a people data platform." The other business functions couldn't help but feel a pang of regret for not having supported the HR leader earlier. They realized the importance of collaboration and the value of supporting each other's projects. From that day forward, the Data Engineering team, Information Technology team, and Enterprise Analytics team made it a priority to work closely with the HR team, ensuring that the platform launch went off without a hitch and that all departments benefited from the people data platform. The company thrived, as data-driven storytelling spread throughout the company and workforce data was securely and safely distributed to decision-makers, fostering a culture of shared success and mutual support. The moral of the story: Success comes from collaboration and supporting one another, and a company thrives when all its functions work together to support each other’s needs. A bit of a fairy tale ending? Absolutely, but it’s fun to dream. But are you ready to get some help? Reach out to our team for a demo and to learn more about how One Model makes People Analytics easy for HR leaders. You deserve good data and to work with a partner who knows how to help HR get there. We’re here to help.
Read Article
Featured
4 min read
Taylor Clark
Machine Learning Explainability for Human Resources One AI is the HR industry's leading machine learning and predictive analytics technology because it's flexible, secure, and transparent. Watch Taylor Clark, our Chief Data Scientist, demo a brilliant turnover forecast in seconds in the video below. In Taylor's example, you can see turnover going up over time and then coming back down a little. But how can you inform decisions about which direction the trend could go moving forward? And how can you confidently stand behind that prediction? With One AI, it only takes a click. A powerful forecast in a single click? Yes! Generally, Taylor suggests that people think about turnover trends from multiple contexts and perspectives. So in the linked video, he shares his screen to demonstrate the HR industry's fastest way to analyse turnover. In the People Data Cloud™️ platform, it's a single click on the "light bulb" icon of any table, chart, or graph. Once clicked, you see One AI thinking in the background and doing a lot of math. Suddenly, it produces a forecast. What does the cone mean? The shaded cone over the forecasted zone of the graph is the range of uncertainty, known to mathematicians and statisticians as an uncertainty or prediction interval. Here's what it's telling us: the actual results could lie anywhere within this cone when that time comes. The range of possible outcomes is also influenced by the modelling technique that is used by One AI. How can decision makers trust this forecast? Most responsible business leaders need to trust the analyses that they are given. Often, this involves understanding the assumptions and techniques used to generate the analysis. One AI has you covered! The easiest way to win confidence and trust in a forecast with One AI is to simply click on any point within the forecasted range. Then you'll see a pop-up on the screen that shows a whole bunch of information. 
It includes information such as the upper and lower bounds of the forecast, the different types of algorithms that were used in making the prediction, and even the transformations and data sets used to generate the possible outcomes. You don't need a PhD to know that this is the first and only HR technology to offer model explainability and explainable artificial intelligence directly in the reports, as an embedded feature of the analysis. Only One AI is this flexible and transparent. This approach to forecasting is unique in that it can be applied to any table or chart within the People Data Cloud platform. Gone are the days when your data science team needed to reinvent the wheel for every single forecast demanded by the business. And gone are the days when you need a data science team for every single type of forecast. Now your modeling teams can build a predictive tool that can be reused and reapplied across a broad range of talent decisions, including attrition, performance, engagement, advancement, and so on. See how One AI takes your forecasting game to the next level. Fill out the form below to get a live demonstration.
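For readers who want intuition for the cone itself, here is a toy sketch in Python: a naive mean forecast with a widening standard-error band. One AI's actual algorithms are more sophisticated, and these turnover numbers are made up for illustration.

```python
import math

# Made-up monthly turnover counts
history = [12, 15, 11, 14, 16, 13, 15, 14]

mean = sum(history) / len(history)
sd = math.sqrt(sum((x - mean) ** 2 for x in history) / (len(history) - 1))

# Forecast the next 6 months as the historical mean, with an approximate
# 95% band that widens with the horizon: uncertainty compounds over time,
# which is what gives the shaded region its cone shape.
cone = []
for horizon in range(1, 7):
    half_width = 1.96 * sd * math.sqrt(horizon)
    cone.append((mean - half_width, mean, mean + half_width))
```

Clicking a forecast point in One AI surfaces this kind of information for the chosen model: the central estimate plus the upper and lower bounds at that horizon.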
Read Article
Featured
9 min read
Richard Rosenow
It’s very difficult to do people analytics without data. Finding and extracting workforce data to use for analytics is perhaps the first and most common challenge that people analytics teams encounter. In this blog post, I’ll share tips I’ve learned about data extraction for HR teams, common challenges involved in extracting data, and best practices for overcoming these challenges. By applying these tips, HR teams can more effectively and efficiently extract data to drive business value and insights. What is Data Extraction? Data extraction is the process of extracting data from one or more sources and transforming it into a usable format for further analysis or processing. It is the "E" in "ETL". In the context of HR, data extraction is an essential process for collecting and organizing data related to the workforce, such as core HRIS records, employee demographics, performance data, and engagement data. By extracting this data, HR teams can more effectively analyze and utilize it to make informed decisions and drive business value. Data extraction may involve extracting data from various sources, such as databases, spreadsheets, and HR systems. This is the first in a series we're writing on the people data platform. If you'd like to learn more, download the whitepaper. Here are 5 Tips to Ensure HR Data Extraction Success 1. Prioritize and Align Extracted Data with the Needs of the Business First and foremost, it is important for people analytics teams to prioritize what data they go after based on the needs and challenges of the business. If the business is experiencing high attrition, start with the HRIS data and build an analysis on termination trends. However, if the business is concerned about understanding remote work, the starting point for data extraction may need to be the survey system to get insights on employee voice back to leadership teams. 
Delivering against critical business needs adds value to the company, builds trust, and creates the buy-in needed for future projects. There’s a time and a place to pursue novel data to generate insights that the business is not expecting, but without a foundation of trust and a history of delivering against core business concerns, that can be a difficult road. When you’re building your data extraction roadmap, start with the data where you can get to value quickly. 2. Be Thoughtful About What You Extract Workforce data is inherently different from other data in the company, as underneath each data point is a coworker with a livelihood, career, friends and family, and personal details. It is critical that People Analytics teams be careful about what they extract and that they are thoughtful about use cases for the data. It’s an important ethical decision to make sure the data is private, secured, and safe in storage as well as in the extraction tools and pipelines that get the data into storage. There are ethical approaches you should be thinking about, but we also live in an environment where there are hard legal requirements related to the extraction and storage of workforce data. Depending on the nature of the data and where you operate, you may be required to comply with CPRA (California), SOX, HIPAA, and GDPR, to name a few. Of note, GDPR can apply to organisations based outside the EU whenever they process the personal data of individuals in the EU, not only to companies established there. So if you employ anyone in the EU or are considering hiring there, GDPR regulations are critical when it comes to data extraction. 3. Build the Business Case to Pull More It can be difficult to convince IT teams or central data engineering functions to support HR data extraction. So when you do get someone to assist, there can be a certain anxiety around the idea of “what if I need more”. This can cause a team to over-extract data or pull too much of it too soon. The feeling is understandable. I’ve been there.
But as I’ve said before, the people analytics flywheel is a phenomenon that can be realised if you focus on prioritized business problems. This gives you the chance to revisit the data extraction conversation down the road should you need more. Your future arguments for data extraction will be stronger if business needs continue to be the rationale for additional requests for data extraction support. 4. Automate Your Extractions A native report is a report that comes pre-packaged with your HR system. While native reports are helpful for early data extraction wins, they can be difficult to scale and standardise. Native reports tend to have the following limitations. They are usually just a subset of the data within the system, typically pulled through a graphical user interface, which makes them rigid and difficult to repeat. They are prone to time out if you pull too much data or pull too frequently. They may end up looking different depending on which user pulled them, due to filters, permission settings, and the effective date range for the data pulled. (HR never closes the books!) Over time, you’ll need to move away from native reports to an API or another method of extracting the data from the system. An API gets you access to the full data set, pulls data more frequently, and introduces standardisation and repeatability by leveraging data extraction tools and relying less on GUIs. APIs never get bored, can be logged and audited, and can run on their own. Automation turns repetitive, high-variance tasks into trusted processes. 5. Extract for Data Science, not just Reporting Meaningful analysis requires more data, and often different data, than snapshot extraction methods like native reports can provide. Snapshot extraction can handle basics such as headcount reporting, but it cannot reconstruct what the company looked like on an arbitrary day.
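To make the snapshot limitation concrete, here is a minimal sketch using entirely hypothetical transaction-level events (employee id, effective date, action). With time-stamped transactions you can replay history and answer "what did the company look like on a given day", which a point-in-time snapshot cannot:

```python
from datetime import date

# Hypothetical transaction-level extract: (employee_id, effective_date, action)
events = [
    (1, date(2022, 1, 10), "HIRE"),
    (2, date(2022, 3, 1), "HIRE"),
    (1, date(2023, 6, 15), "TERMINATION"),
    (3, date(2023, 8, 1), "HIRE"),
]

def headcount_as_of(events, as_of):
    """Replay time-stamped transactions to rebuild headcount on any past date."""
    active = set()
    for emp_id, eff_date, action in sorted(events, key=lambda e: e[1]):
        if eff_date > as_of:
            break  # everything after the as-of date hasn't "happened" yet
        if action == "HIRE":
            active.add(emp_id)
        elif action == "TERMINATION":
            active.discard(emp_id)
    return len(active)

print(headcount_as_of(events, date(2022, 12, 31)))  # 2
print(headcount_as_of(events, date(2023, 7, 1)))    # 1
```

A monthly snapshot extract could only answer this question for the dates the snapshots happened to be taken.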
When you extract your HR data, make sure that you extract what you need for data science and not just your reporting needs. Data science applications require wider data sets and more features. The time component is the most important part of HR data science. An employee might touch 10 different HR systems as they join a company, so the data in each system needs to be joined to the same employee record in a harmonized and sequential order. Make sure that the data in each system is captured at the time of the action, with a time stamp. Naturally, this creates a “transaction-level” record. Without those transaction records, you can end up with messy data. Examples include data that shows someone being promoted before they were hired, or terminated before a transfer. HR is also notorious for back-dating work. Transaction-level records can prevent issues arising from those behaviors. Finally, data science depends on extracting the right components in the first place. Prioritise Data Extraction, But Be Aware of the Nuances Are you ready to explore how to extract HR data at your company? Data extraction is an essential part of conducting people analytics. It is important for people analytics teams to prioritize their data extractions based on the needs and challenges of the business, be thoughtful about which data points are extracted, consider automating their data extractions, and be careful about the nuances of the data they extract. Looking to Extract Data Out of Your Specific HRIS? Download our Resources Now! Delivering People Analytics out of Workday Delivering People Analytics from SuccessFactors
Featured
6 min read
Marcus Joseph
In the US, leave is having a moment. From the US President’s State of the Union to New York’s 12 weeks of fully paid parental leave, to the FAMILY Act legislation, leave has been all over the feeds, which is encouraging given the majority of US workers struggle to take advantage of our current policy’s benefits. While most of the coverage seems to focus on longer-term family leave, in today's working environment, paid sick leave is more important than ever. In fact, US workers without paid sick leave could be three to four times more likely to quit their job than comparable workers who have this benefit. This is especially true for hourly versus salaried employees and for female employees, who tend to disproportionately carry caregiving responsibilities outside of paid work. For most industrialized countries, sick worker pay is not a critical issue. In fact, 32 of 34 OECD countries guarantee paid leave for personal illness. Who are the two OECD countries holding out? The United States and South Korea. So, let’s dive into the American problem and what it can mean for businesses managing workers in the US. With a “tripledemic” threat of flu, COVID-19, and Respiratory Syncytial Virus (RSV), it’s evident that company sick pay is a critical benefit for companies of all sizes. Even US government studies concluded that there was a noticeable rise during 2020 in quits among workers who had only unpaid leave. FFCRA Leave and Changing Paid Sick Leave Law Amid COVID-19 The COVID-19 pandemic revealed that paid leave is essential to employee well-being and safety. In the past, paid leave was not considered critical to supporting the American economy. As COVID-19 cases ramped up, allowing workers to stay home or care for their sick family members helped meet real human needs, combat the spread of COVID-19, and mitigate the impact on the American economy.
The Families First Coronavirus Response Act (FFCRA) was eventually implemented, which required certain employers to provide FFCRA leave and expanded family and medical leave for specified reasons related to COVID-19. About 25% of US firms did increase their sick leave options, and one study found that states where workers gained increased leave benefits under FFCRA reported an average of 400 fewer cases of COVID-19 per day. However, 90% of companies reported these increases were intended to be temporary. Since COVID, there also seems to be renewed interest from the Biden administration in making paid leave a requirement. During the 2023 State of the Union, he backed his pledge to stop workers from being stiffed by committing to fight for paid family and medical leave. His secretary of labor is also calling for better national standards to mark the 30th anniversary of the Family and Medical Leave Act. Need to track COVID illnesses at your organisation? Try our free resource. Rising Turnover Reveals Paid Sick Leave Is Critical to Employee Retention One key reason why our people analytics teams should consider paid sick leave in our turnover models is the impact on retention. Certain populations of workers are much more likely to quit over paid leave. This means that employers who don't offer this benefit are at a disadvantage when it comes to retaining critical team members. The rise in turnover rates is already a nationwide problem. Plus, replacement costs for an employee can be as high as 50% to 60% of salary, with overall costs ranging from 90% to 200%. Offering paid sick leave is not only a critical benefit employees look for in a business, but it is also a great way to live out your values of caring about individual well-being and your desire for employees to stay with your company for the long haul. Increase Employee Productivity and Engagement With PTO It’s pretty clear that when employees are out sick, they are not able to work and be productive.
However, offering employees the ability to take the time they need to recover without worrying about losing pay will also positively impact productivity levels when those sick team members are back in action and healthy. When employees feel like their employer cares about them and their well-being, they are more likely to be engaged while at work. This leads to improved morale and a better work environment for everyone involved—improving life outcomes for individuals, the bottom line as an organization and your brand as an employer of choice. Ultimately, your standard sick leave policy is a factor your HR analytics team should consider when analyzing retention rates. Understanding how much your average PTO and sick leave is affecting your workforce this cold, flu and COVID season may be the difference between keeping and losing employees and remaining competitive in your market. HR teams should invest in knowing the internal and external story the data tells us and sharing it with leadership. Doing so could help improve employee retention rates, reduce turnover-related costs, and increase productivity in the long run–and help turn leave’s current “moment” into our new norm.
Featured
7 min read
Chris Butler
The employee survey is still perhaps the most ubiquitous tool HR has for giving employees a voice. It may be changing and being disrupted (debatable) by regular or real-time continuous listening and other feedback mechanisms. Regardless, employee survey data collection will continue. I am, however, constantly amazed by the amount of power that is overlooked in these surveys. We’re gathering some incredibly powerful and telling data, yet we barely use a fraction of the informational wealth it holds. Why? Most organizations don’t know how to leverage confidential employee survey results correctly while maintaining the privacy provisions they agreed to with their employees during data collection. The Iceberg: The Employee Survey Analytics You're Missing Specifically, you are missing out on connecting employee survey answers to post-survey behaviours. Did the people who said they were going to leave actually leave? Did the people who answered that they lack opportunity for training actually take a training course when offered? Did a person who saw a lack of advancement opportunities leave the company for a promotion? How do employee rewards affect subsequent engagement scores? There are hundreds of examples that could be thrown out there; it is an almost limitless source of questioning, and you don’t get this level of analysis ROI from any other data source. Anonymous vs. Confidential Surveys First, let me bring anyone who isn’t familiar with the difference up to speed. An anonymous survey is one where all data is collected without any identifiers at all on the data. It is impossible to link back to a person. There’s very little you can do with this data apart from what is collected at the time of questioning. A confidential survey, on the other hand, is collected with an employee identifier associated with the results.
This doesn’t mean that the survey is open; usually the results are not directly available to anyone from the business, which provides effective anonymity. The survey vendor that collected these results, though, does have these identifiers, and in your contract with them they have agreed to the privacy provisions requested and communicated to your employees. A number of survey vendors will be able to take additional data from you, load it into their systems, and show a greater level of analysis than you typically get from a straight survey. This is better than nothing but still far short of amazing. Most companies, however, are not aware that survey vendors are generally happy (accepting at least) to transfer this employee-identified data to a third party as long as all confidentiality and privacy restrictions that they, the customer, and the employees agreed to when the survey was collected are maintained. A three-way data transfer agreement can be signed where, in the case of One Model, we agree to secure access to the data and maintain confidentiality from the customer organization. Usually, this confidentiality provision means we need to: Restrict the data source from direct access. In our case, it resides in a separate database schema that is inaccessible even to a customer with direct access to our data warehouse. Provide ‘Restricted’ metrics that provide an aggregate-only view of the data, i.e. only show data where there are more than 5 responses or more than 5 employees in a data set. The definition of how this is restricted needs to be flexible to account for different types of surveys. Manage Restricted metrics as a vendor, preventing them from being created or edited by the company when a restricted data set is in use.
Support employee survey dimensionality that adheres to this restriction, so you can’t inadvertently expose data by slicing a non-restricted metric by a survey dimension and several other dimensions to create a cut of a population that otherwise may be identifiable. Get Ready to Level Up Employee Survey Analysis! Your employee survey analytics can begin once your survey data is connected to every other data point you hold about your employees. For many of our customers that means dozens of people data sources across the recruit-to-retire and business data spectrums. Want to know what the people who left the organization said in their last survey? Three clicks and a few seconds later, and you have the results. Want to know if the people you are recruiting are fitting in culturally, and which source of hire they were recruited from? Or if low-tenure terminations show any particular trends in engagement or culture responses? Or whether people who were previously highly engaged and have a subsequent drop in engagement have a lack of (choose your own adventure) advancement|compensation|training|skilled-peers|respect for management? Literally, you could build these questions and analysis points for days. This is what I mean: a whole new world opens up with a simple connection of a data set that almost every company has. What can I do? Go and check your last employee survey results and any vendor/employee agreements for how the data was to be collected and used. If the vendor doesn’t state how it’s being collected, check with them; often they are collecting an employee identifier (id, email, etc). If you are lucky, you might have enough leeway to designate a person or two within your company to run analysis directly. Otherwise, enquire about a data transfer agreement with a third party who will maintain confidentiality. I’ve had this conversation many times (you may need to push a little).
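The aggregate-only "Restricted" metric idea described above can be sketched in a few lines. This is illustrative only (the threshold and rounding are made up here; as noted, the real restriction definition needs to be configurable per survey):

```python
def restricted_mean(scores, threshold=5):
    """Aggregate-only view: return a mean only when more than `threshold`
    people are in the cut; otherwise suppress the result entirely."""
    if len(scores) <= threshold:
        return None  # suppressed: the group is small enough to risk identification
    return round(sum(scores) / len(scores), 2)

print(restricted_mean([4, 5, 3, 4, 5, 3]))  # 4.0 -- six respondents, safe to show
print(restricted_mean([2, 3, 4, 5, 4]))     # None -- five or fewer, suppressed
```

The same check has to be applied after every slice, which is why dimensionality support matters: a cut that is safe at the company level can shrink below the threshold once two or three dimensions are applied.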
If you don’t have data collected with an identifier, check with HR leadership on the purpose of the survey and the privacy you want to provide employees, and plan any changes for integration into the next survey. This is a massively impactful data set for your people analytics, and for the most part, it’s being wasted. However, always remember to respect the privacy promise you made to employees, and communicate how the data is being used and how their responses are protected from being identified. With the appropriate controls, as outlined above, you can confidentially link survey results to actual employee outcomes and take more informed action on the feedback you collected in the employee survey analysis. If you would like to take a look at how we secure and make survey data available for analysis, feel free to book a demonstration directly below. Ready to see us Merge Employee Survey Data with HRIS Data? Request a Demo!
Featured
11 min read
Taylor Clark
The human resources department is a mission-critical function in most businesses. So the promise of better people decisions has generated interest in and adoption of advanced machine-learning capabilities. In response, organizations are adopting a wide variety of data science tools and technology to produce economically-optimal business outcomes. This trend is the result of the proliferation of data and the improved decision-making opportunities that come with harnessing the predictive value of that data. What are the downsides to harnessing machine learning? For one, machines lack ethics. They can be programmed to intelligently and efficiently drive optimal economic outcomes, and it can seem as though using machines in decisions will produce desirable organizational behaviors. But of course machines lack a sense of fairness or justice, and optimal economic outcomes do not always correspond to optimal ethical outcomes. So the key question facing human resources teams and the technology that supports them is "How can we ensure that our people decisions are ethical when a machine is suggesting those decisions?” The answer almost certainly requires radical transparency about how artificial intelligence and machine learning are used in the decision-making process. It is impossible to understand the ethical aspect of a prediction made by a machine unless the input data and the transformations of that data are clear and understood as well. General differences between various machine learning approaches have a profound impact on the ethicality of the outcomes that their predictions lead to. So let's begin by understanding some of those differences. Let’s focus on three types of machine learning models: the black box model, the canned model, and the custom built model. What is a Black Box Model? A black box model is one that produces predictions that can’t be explained.
There are tools that help users understand black box models, but these types of models are generally extremely difficult to understand. Many vendors build black box models for customers, but are unable or unwilling to explain their techniques and the results that those techniques tend to produce. Sometimes it is difficult for the model vendor to understand its own model! The result is that the model lacks any transparency. Black box models are often trained on very large data sets. Larger training sets can greatly improve model performance. However, for this higher level of performance to generalize, many dependencies need to be satisfied. Naturally, without transparency it is difficult to trust a black box model. As you can imagine, it is concerning to depend on a model that uses sensitive data when that model lacks transparency. For example, asking a machine to determine if a photo has a cat in the frame doesn't require much transparency because the objective lacks an ethical aspect. But decisions involving people often have an ethical aspect to them. This means that model transparency is extremely important. Black box models can cross ethical lines where people decisions are concerned. Models, like humans, can exhibit biases resulting from sampling or estimation errors. They can also use input data in undesirable ways. Furthermore, model outputs are frequently used in downstream models and decisions, which ingrains invisible systematic bias into those decisions. Naturally, the organization jeopardizes its ethical posture when human or machine bias leads to undesirable diversity or inclusion outcomes. One of the worst possible outcomes is a decision that is unethical or prejudicial. These bad decisions can have legal consequences or worse. What is a Canned Model? The terms "canned model" or “off-the-shelf model” describe a model that was not developed or tailored to a specific user’s dataset.
A canned model could also be a black box model, depending on how much intellectual property the model’s developer is willing to expose. Plus, the original developer might not understand much about its own model. Canned models are vulnerable to the same biases as black box models. Unrepresentative data sets can lead to unethical decisions. Even a representative data set can have features that lead to unethical decisions. So canned models aren't without their disadvantages either. But even with a sound ethical posture, canned models can perform poorly in an environment that simply isn’t reflective of the environment on which the model was trained. Imagine a canned model that segmented workers in the apparel industry by learning and development investments. A model trained on Walmart’s data wouldn’t perform very well when applied to decisions for a fashion startup. Canned models can be quite effective if your workforce looks very similar to the ones that the model was trained on. But that training set is almost certainly drawn from a more general audience than yours, and models perform better when the training data resembles the population they will actually be used to score. What are Custom Built Models? Which brings us to custom built models. Custom models are the kind that are trained on your data. One AI is an example of the custom built approach. It delivers specialized models that best understand your environment because it’s seen it before. So it can detect patterns within your data to learn and make accurate predictions. Custom models discover the unique aspects of your business and learn from those discoveries. To be sure, it is common for data science professionals to deploy the best performing model that they can. However, the business must ensure that these models comply with high ethical and business intelligence standards. That's because it is possible to make an immoral decision with a great prediction.
So for users of the custom built model, transparency is only possible through development techniques that are not cloudy or secret. Even with custom built models, it is important to assess the ethical impact that a new model will have before it is too late. Custom built models may incorporate some benefits of canned models, as well. External data can be incorporated into the model development process. External data is valuable because it can capture what is going on outside of your organization. Local area unemployment is a good example of a potentially valuable external data set. Going through the effort of building a model that is custom to your organization will provide a much higher level of understanding than just slamming a generic model on top of your data. You will gain the additional business intelligence that comes from understanding how your data, rather than other companies' data, relates to your business outcomes. The insights gleaned during the model development process can be valuable even if the model is never deployed. Understanding how any model performs on your data teaches you a lot about your data. This, in turn, will inform which type of model and model-building technique will be advantageous to your business decisions. Don’t Be Misled by Generic Model Performance Indicators A canned model’s advertised performance can be deceptive. The shape of the data that the canned model learned from may be drastically different from the data in your specific business environment. For example, if 5% of the people in the model's sample work remotely, but your entire company is remote, then the impact and inferences drawn by the model about remote work are not likely to inform your decisions very well. When to be Skeptical of Model Performance Numbers Most providers of canned models are not eager to determine the specific performance of their model on your data because of the inherent weaknesses described above. So how do you sniff out performant models? 
How can you tell a good-smelling model from a bad-smelling one? The first reason to be skeptical lies in whether the model provider offers relative performance numbers. A relative performance value is a comparative one, and therefore failing to disclose relative performance should smell bad. Data scientists understand the importance of measuring performance. They know that it is crucial to understand performance prior to using a model’s outputs. So by avoiding relative performance, the vendor is not being 100% transparent. The second reason to be skeptical concerns vendors who can't (or won't) explain which features are used in their model and the contribution that each feature makes to the prediction. It is very difficult to trust a model's outputs when the features and their effects lack explanation. So that would certainly smell bad. One Model published a whitepaper listing the questions you should ask every machine learning vendor. Focus on Relative Performance….or Else! There are risks that arise when using a model without relative performance. The closest risk to the business is that faith in the model itself could diminish. This means that internal stakeholders would not realize “promised” or “implied” performance. Of course, failing to live up to these promises is a trust-killer for a predictive model. Employees themselves, and not just decision makers, can distrust models and object to decisions made with them. Even worse, employees could adjust their behavior in ways that circumvent the model in order to “prove it wrong”. But loss of trust by internal stakeholders is just the beginning. Legal, compliance, financial, and operational risk can increase when businesses fail to comply with laws, regulations, and policies. Therefore, it is appropriate for champions of machine learning to be very familiar with these risks and to ensure that they are mitigated when adopting artificial intelligence.
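To make "relative performance" concrete: it simply means comparing the model against a naive baseline on the same data. A minimal sketch with hypothetical attrition labels shows why an absolute number alone can mislead; a headline "90% accurate" is far less impressive next to an 80% always-predict-the-majority baseline:

```python
# Hypothetical labels: 1 = left the company, 0 = stayed
actual    = [0, 0, 0, 0, 1, 0, 0, 1, 0, 0]
predicted = [0, 0, 0, 1, 1, 0, 0, 1, 0, 0]

# Absolute accuracy of the model
accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)

# Naive baseline: always predict the majority class ("stays")
baseline = max(actual.count(0), actual.count(1)) / len(actual)

print(f"model accuracy:    {accuracy:.0%}")             # 90%
print(f"baseline accuracy: {baseline:.0%}")             # 80%
print(f"relative lift:     {accuracy - baseline:.0%}")  # 10%
```

A vendor who reports only the 90% figure, without the baseline, has told you very little about whether the model is actually learning anything.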
Finally, it is important to identify who is accountable for poor decisions that are made with the assistance of a model. The act of naming an accountable individual can reduce the chances of negative outcomes, such as bias, illegality, or imprudence. How to Trust a Model A visually appealing model that delivers "interesting insights" is not necessarily trustworthy. After all, a model that has a hand in false or misleading insights is a total failure. At One Model, we feel that all content generated from predictive model outputs must link back to that model's performance metrics. An organization cannot consider itself engaged in ethical use of predictive data without this link. Canned and black box models are extremely difficult to understand, and it is even more difficult to predict how they will respond to your specific set of data. There are cases where these types of models can be appropriate. But those cases are few and far between in the realm of people data in the human resources function. Instead, custom models offer a much higher level of transparency. Model developers and users come to understand their own data much better throughout the model building process. (This process is called Exploratory Data Analysis, and it is an extremely under-appreciated aspect of the field of machine learning.) At One Model, we spent more than five years building One AI to make it easier for all types of human resources professionals to build and deploy ethical custom models from their data, while ensuring model performance evaluation and model explainability. One AI includes robust, deep reporting functionality that provides clarity on which data was used to train models. It blends rich discovery with rapid creation and deployment. The result is the most transparent and ethical machine learning capability in any people analytics platform. Nothing about One AI is hidden or unknowable. And that's why you can trust it.
Their Artificial Intelligence Still Needs Your Human Intelligence Models are created to inform us of patterns in systems. The HR community intends to use models on problem spaces involving people moving through and performing within organizations. So HR pros should be able to learn a lot from predictive models. But it is unwise to relinquish human intelligence to predictive models that are not understood. The ultimate value of models (and all people analytics) is to make better, faster, more data-informed talent decisions at all levels of the organization. Machine learning is a powerful tool, but it is not a solution to that problem.
Featured
10 min read
Joe Grohovsky
During my daily discussions with One Model prospects and customers, two consistent themes emerge: a general lack of understanding of predictive modeling, and a delay in considering its use until basic reporting and analytical challenges are resolved. These are understandable, and I can offer a suggestion to overcome both. My suggestion is based upon seeing successful One Model customers gain immediate insights from their data by leveraging the technology found in our One AI component. These insights include data relationships that can surface even before customers run their first predictive model. Deeper insights before predictive modeling? How? To begin, let’s rethink what you may consider to be a natural progression for your company and your People Analytics team. For years we’ve been told a traditional People Analytics Maturity Continuum has a building-block approach that looks something like this: The general concept of the traditional People Analytics maturity model is based upon the need to master a specific step before progressing forward. Supposedly, increased value is unlocked as each step, and the complexity that accompanies it, is mastered. While this may seem logical, it is largely inaccurate in the real world. The sad result is that many organizations languish in the early stages and never truly advance, leaving diminished ROI and frustrated stakeholders. What should we be doing instead? The short answer is to drive greater value immediately when your people analytics project launches. Properly built data models will immediately allow for basic reporting and advanced analytics, as well as predictive modeling. I’ll share a brief explanation of two One Model deliverables to help you understand where I’m going with this. People Data Cloud™️ Core Workforce data is the first data source ingested by One Model into a customer's People Data Cloud.
Although additional data sources will follow, our initial effort is focused on cleaning, validating, and modeling this Core Workforce data. This analytics-ready data is leveraged in the customer’s People Data Cloud instance. Once that has occurred, storyboards are created, reflecting a customer’s unique metrics for reporting and analytics. It is now that customers can and should begin leveraging One AI (read more about People Data Cloud). Exploratory Data Analysis One AI provides pre-built predictive models for customers. The capability also exists for customers to build their own bespoke models, but most begin with a pre-built model like Attrition Risk. These pre-built models explore a customer's People Data Cloud to identify and select relevant data elements from which to understand relationships and build a forecast. The results of this selection and ranking process are presented in an Exploratory Data Analysis (EDA) report. What is exploratory data analysis, you ask? It is a report that provides immediate insights and understanding of data relationships even before a model is ever deployed. Consider the partial EDA report below reflecting an Attrition Risk model. We see that 85 different variables were considered. One AI EDA will suggest an initial list of variables relevant to this specific model, and we see it includes expected categories such as Performance, Role, and Age. This first collection of variables does not include Commute Time. But is Commute Time a factor in your ideal Attrition Risk model? If so, what should the acceptable time threshold be? Is that threshold valid across all roles and locations? One AI allows each customer to monitor and select relevant data variables to understand how they impact insights from their predictive model.
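The variable-selection step that an EDA report performs can be sketched in miniature. In this toy example (all data hypothetical), candidate variables are ranked by the strength of their correlation with an attrition flag; genuinely related variables such as commute time rise to the top, while a noise variable falls to the bottom:

```python
# Toy version of an EDA variable-ranking step: rank candidate variables by how
# strongly they correlate with the outcome (here, a hypothetical attrition flag).
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

attrition = [1, 0, 1, 0, 0, 1, 0, 0]  # 1 = left the company
features = {
    "commute_minutes": [55, 10, 60, 15, 20, 50, 25, 10],
    "tenure_years":    [1, 6, 2, 5, 7, 1, 4, 8],
    "shoe_size":       [9, 10, 8, 9, 11, 10, 9, 8],  # deliberate noise variable
}

ranked = sorted(features, key=lambda f: abs(pearson(features[f], attrition)),
                reverse=True)
print(ranked)  # strongest signals first; the noise variable lands last
```

A real EDA report does far more (significance, directionality, handling of categorical data), but the core idea is the same: measure and rank relationships before any model is deployed.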
Changing the People Analytics Maturity Model into a Continuum Now that we realize that the initial Core Workforce People Data Cloud can generate results not only for Reporting and Analytics but also for Predictive Modeling, we can consider a People Analytics Maturity Continuum like this: This model recognizes that basic reporting and analytics can occur simultaneously once a properly modeled data foundation is in place. It also introduces the concept of Monitoring your data and Understanding how it relates to your business needs. These are the first steps in Predictive Modeling and can occur without a forecast being generated. The truth underlying my point is: Analytics professionals should first understand their data before building forecasts. Ignoring One AI Exploratory Data Analysis insights from this initial data set is a lost opportunity. This initial model can and should be enhanced with additional data sources as they become available, but there is significant value even without a predictive output. The same modeled data that drives basic reports can drive Machine Learning. The greater value of One AI is providing a statistical layer, not simply a Machine Learning output layer. The EDA report is a rich trove of statistical data correlations and insights that can be used to build data understanding, foster a monitoring culture, and facilitate qualitative questions. But the value doesn’t stop there. Integrated services that accompany One AI also provide value for all data consumers. These integrated services are reflected in storyboards and include: Forecasting Correlations Line of Best Fit Significance Testing Anomaly Detection These integrated services are used to ask questions about your data that are more valid than what can be derived solely from traditional metrics and dimensions. For example, storyboards can reflect data relationships so even casual users can gain early insights.
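To give a rough sense of what "Line of Best Fit" plus "Significance Testing" mean under the hood, here is a minimal sketch that fits a least-squares line and flags whether the slope clears the conventional large-sample t-statistic cutoff of about 2. The data is invented, and One AI's actual tests are more sophisticated than this.

```python
# Toy sketch: least-squares fit plus a rough significance flag on the slope.
# The dataset is invented; real significance testing uses proper p-values.
from statistics import mean
from math import sqrt

def best_fit_with_significance(xs, ys, t_threshold=2.0):
    """Fit y = intercept + slope * x; flag slope significance via its t-statistic."""
    n = len(xs)
    mx, my = mean(xs), mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual standard error of the slope estimate
    resid = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    se_slope = sqrt(resid / (n - 2)) / sqrt(sxx)
    t_stat = slope / se_slope if se_slope else float("inf")
    return slope, intercept, abs(t_stat) >= t_threshold

tenure = [1, 2, 3, 4, 5, 6]
salary = [52, 75, 58, 82, 60, 85]  # noisy: an upward trend exists but is weak
slope, intercept, significant = best_fit_with_significance(tenure, salary)
```

A positive slope with a sub-threshold t-statistic is precisely the kind of "trend exists but is unlikely to be statistically significant" caution a chart annotation can surface for casual users.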
The scatterplot below is created with Core Workforce data and illustrates the relationship between Tenure and Salary. One AI's integrated services not only render this view but also caution that, based upon the data used, this result is unlikely to be statistically significant (refer to the comment under the chart title below). More detailed information is contained in the EDA report, but this summary provides the first step in Monitoring and Understanding this data relationship. Perhaps one of the questions that may arise from this monitoring involves understanding existing gender differences. This is easily answered with a few mouse clicks: This view begins to provide potential insight into gender differences involving Tenure and Salary, though the results are still not statistically significant. Analysts are thus guided toward discovering the insights contained within their own data. List reports can be used to reflect feature importance and directionality. In the above table report, both low and high Date of Birth values increase Attrition Risk. Does this mean younger and older workers are more likely to leave than middle-aged workers? Interesting relationships begin to appear, and One AI automatically reports on the strength of those relationships and correlations. Iterations will increase the strength of the forecast, especially when additional data sources can be added. Leveraging One AI's capability at project launch provides a higher initial ROI, an accelerated value curve, and better-informed data consumers. At One Model, you don’t need to be a data scientist to get started with predictive modeling. Contact One Model to learn more and see One AI in action. Customers - Would you like more info on EDA reports in One Model? Visit our product help site.
17 min read
Chris Butler
Workday vs SuccessFactors vs Oracle Ratings Based on Experience Integrating HR Tech for People Analytics This vendor-by-vendor comparison will be a living post, and we will continue to update it as we have time to collect thoughts on each vendor and as we complete integrations with new vendors. Not every source we work with will be listed here, but we'll cover the major ones that we often work with. At One Model we get to see the data and structure from a load of HR systems and beyond; basically anything that holds employee or person data is fair game as a core system to integrate for workforce analytics. After more than a decade of architecting HR analytics integrations that bring data from these systems directly into analytics and reporting solutions, we have a lot of experience to share. Below I'll share our experience with highlights from each system and how they align with creating a people analytics warehouse. Some are better than others from a data perspective, and there are certainly some vendors that have yet to understand that access to data is already a core requirement of buyers looking at any new technology. Bookmark this blog, add your email to the subscription email list to the right, or follow me (Chris Butler) and One Model on LinkedIn to stay up to date. A Quick Note on HRIS Platform Ratings Ratings are provided as an anecdotal and unscientific evaluation of our experience in gaining access to, maintaining, and working with the data held in the associated systems. They are my opinions. If you would like to make use of any of our integrations in a stand-alone capacity, we now offer a data-warehouse-only product where you utilize just our data pipeline and modelling engine to extract and transform data into a data warehouse hosted by One Model or your own data warehouse.
We'll be releasing some more public details soon, but if you are a company that likes to roll your own analytics and visualizations and just needs some help with the data side of the house, we can certainly help. Contact Us Cloud HRIS Comparison Workday One Model rating - 2.5/5 Method - API for standard objects, built-in reporting for custom objects (via reporting-as-a-service, or "RaaS") The Good - Great documentation; easy to enable API access and control of accessible fields; good data structures once you have access. The RaaS option does a good job but is limited. The Bad - Slow; Slow; Slow; no custom fields available in the API; geared towards providing a snapshot; number of parallel connections limited; constant tweaking required as new behaviors are identified; expert integration skills required; true incremental feeds require you to read and interpret a transaction log. Workday Requires a Custom-Built People Analytics Integration Architecture Workday analytics embedded into the product is underwhelming, and we have yet to see Prism Analytics make a dent in filling the needs that people analytics teams or HR analysts have beyond convenience analytics. So in the meantime, if you are serious about improving reporting and people analytics for Workday, you're going to need to get the data out of there and into somewhere else. On the surface, Workday looks to have a great API, and the documentation available is excellent. However, the single biggest downfall is that the API is focused on providing a snapshot, which is fine for simple list reports but does not allow a people analytics team to deliver any worthwhile historical analysis. You don't get the bulk history output of other systems or the ability to cobble it together from complete effective-dated transactions across objects. To capture the complete history we had to build an intense process of programmatically retrieving data, evaluating it, and running other API calls to build the full history that we need.
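To illustrate the general idea, not One Model's actual pipeline, collapsing a series of periodic snapshots into effective-dated change records can be sketched like this (fields invented):

```python
# Hypothetical sketch: turn periodic snapshots (all a snapshot-oriented API
# gives you) into effective-dated change records.
def snapshots_to_history(snapshots):
    """snapshots: list of (date, {field: value}) sorted by date.
    Returns effective-dated rows, one per detected change."""
    history = []
    prev = None
    for date, record in snapshots:
        if record != prev:
            if history:
                history[-1]["end_date"] = date  # close the prior interval
            history.append({"effective_date": date, "end_date": None, **record})
            prev = record
    return history

snaps = [
    ("2023-01-01", {"dept": "Sales", "level": "L3"}),
    ("2023-02-01", {"dept": "Sales", "level": "L3"}),      # no change: no new row
    ("2023-03-01", {"dept": "Sales", "level": "L4"}),      # promotion
    ("2023-04-01", {"dept": "Marketing", "level": "L4"}),  # transfer
]
history = snapshots_to_history(snaps)
```

Real Workday history is far messier, with retroactive corrections and rescinds across many objects, which is why the production process involves chained API calls and interpretation rather than a dozen lines of Python.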
If you want more detail, take a look at my blog post on the subject, "The end of the snapshot, Workday edition." The complexity of the integration, therefore, is multiplied, and the time taken suffers immensely due to the object-oriented architecture that requires you to load each object into memory in order to retrieve it. A full destructive data extraction means you're looking at 8+ hours for a small-to-medium enterprise, expanding to a week if you're a giant. The problem is exacerbated by the allowed number of parallel connections, which in practice run at a fraction of the stated limit. A full historical API integration here is not for the faint of heart or skill; we have spent 12+ months enhancing and tweaking our integration with each weekly release to improve performance and solve data challenges. To give a sense of scale, our integration generates some 500+ tables that we bring together in our modelling engine in preparation for analytics. Beware of Oversimplifying the API Integration Out-of-the-box integration plugins are going to be focused on the snapshot version of data as well, so if you don't have the integration resources available I wouldn't attempt an API integration. My advice is to stick with the built-in reporting tools to get off the ground. The RaaS tools do a good job of combining objects and running in a performant manner (better than the API). However, they will also be snapshot focused, and as painful as it will be to build and run each timepoint, you will at least be able to obtain a basic feed to build upon. You won't have the full change history for deeper analysis until you can create a larger integration, or can drop in One Model. Robert Goodman wrote a good blog a little while back looking at both the API and his decision to use RaaS at the time; take a read here. Workday API vs RaaS Regardless of the problems we see with the architecture, the API is decent and one of our favorite integrations to work with.
It is little wonder, however, given the data challenges we have seen and experienced, that half of our customers are now Workday customers. One Model Integration Capabilities with Workday One Model consumes the Public Web Service APIs for all standard objects and fields. One Model configures and manages the services for API extractions; customers need only create and supply a permissioned account for the extraction. Custom objects and fields need to use a RaaS (report-as-a-service) definition created by the customer in the Enterprise Interface Builder (EIB). The report can then be transferred by SFTP or can be interacted with as an API itself. Figure 1: One Model's data extraction from Workday SuccessFactors One Model rating - 4/5 Method - API The Good - A dynamic API that includes all custom MDF data!! Runs relatively quickly; comprehensive module coverage. The Bad - Several API endpoints that need to be combined to complete the data view; can drop data without indication; at times confusing data structures. 4 out of 5 is a pretty phenomenal rating in my book. I almost gave SuccessFactors a perfect 5, but there are still some missing pieces from the API libraries, and we've experienced some dropped data at times that has required some adaptations in our integration. Overall, the collection of SF APIs is a thing of beauty for one specific reason: it is dynamic and can accommodate any of the Meta Data Framework (MDF) custom changes in its stride. This makes life incredibly easy when working across multiple different customers and means we can run a single integration against any customer and accurately retrieve all customizations without even thinking about them. Compared to Workday, where the API is static in definition and only covers the standard objects, this facet alone is just awesome. This dynamic nature, though, isn't without its complexities.
It does mean you need to build an integration that can interrogate the API and iterate through each of its customizations. However, once it is complete it functions well and can adapt to changing configurations as a result. Prepare to Merge API Integrations for People Analytics Multiple API endpoints also require different integrations to be merged. This is a result both of upgrades in the available APIs (the older SuccessFactors API versus the newer OData API) and of separate APIs for acquired parts of the platform (e.g., Learning from the Plateau acquisition). We're actually just happy there is now an API to retrieve learning data, as this used to be a huge bugbear when I worked at SuccessFactors on the Workforce Analytics product. The only SF product I know of right now that doesn't have the ability to extract from an API is Recruiting Marketing (RMK), from the jobs2web acquisition; hopefully this changes in the future. Full disclosure: I used to hate working with SuccessFactors data when we had to deal with flat files and RDFs, but with the API integration in place, we can be up and running with a new SuccessFactors customer in a few hours and be confident all customizations are present. Another option - Integration Center I haven't spoken here about the Integration Center release from earlier last year, as we haven't used it ourselves and only have anecdotal evidence from what we've read. It looks like you could get what you need using the Integration Center and deliver the output to your warehouse. You will obviously need to build each of the outputs for the integration, which may take a lot of time, but the data structure, from what I can tell, looks solid for staging into an analytics framework. There are likely a lot of tables to extract and maintain, though; we currently run around 400+ tables for a SuccessFactors customer and model these into an analytics-ready model.
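The "merge several endpoints" pattern is generic enough to sketch. Everything here is hypothetical -- the endpoint names, the paging parameters, and the in-memory stand-in for the API -- but it shows the shape of paging through multiple sources and joining records on a shared key:

```python
# Hedged sketch of paging through REST/OData-style endpoints and merging on a
# key. Endpoint names and paging fields are invented, not the actual SF API.
def fetch_all(fetch_page, endpoint, page_size=200):
    """fetch_page(endpoint, skip, top) -> list of records; empty list = done."""
    records, skip = [], 0
    while True:
        page = fetch_page(endpoint, skip=skip, top=page_size)
        if not page:
            return records
        records.extend(page)
        skip += page_size

def merge_on_key(key, *record_sets):
    """Merge records from multiple endpoints into one row per key value."""
    merged = {}
    for records in record_sets:
        for rec in records:
            merged.setdefault(rec[key], {}).update(rec)
    return list(merged.values())

# Fake in-memory "API" standing in for two endpoints
DATA = {
    "/odata/EmpJob": [{"userId": "u1", "dept": "Sales"}, {"userId": "u2", "dept": "IT"}],
    "/odata/User": [{"userId": "u1", "email": "a@x.co"}, {"userId": "u2", "email": "b@x.co"}],
}
def fake_fetch(endpoint, skip, top):
    return DATA[endpoint][skip:skip + top]

rows = merge_on_key(
    "userId",
    fetch_all(fake_fetch, "/odata/EmpJob"),
    fetch_all(fake_fetch, "/odata/User"),
)
```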
If anyone has used the Integration Center in an analytics deployment, please feel free to comment below or reach out, and I would be happy to host your perspective here. One Model Integration Capabilities with SAP SuccessFactors One Model consumes the SF REST APIs for all standard fields as well as all customized fields, including any use of the MDF framework. One Model configures and manages the service for API extractions; customers need only create and supply a permissioned account for the extraction. SF has built a great API that is able to provide all customizations as part of the native API feed. We do use more than one API, though, as the new OData API doesn't provide enough information, and we have to use multiple endpoints in order to extract a complete data set. This is expertly handled by One Model software. Figure 2: One Model's data extraction from SuccessFactors Oracle HCM Cloud (Fusion) One Model rating - 2/5 Method - HCM Extracts functionality; all other methods discounted from use The Good - HCM Extracts is reasonable once you have it set up. History and all fields available. Public documentation. The Bad - The user interface is incredibly slow and frustrating. Documentation has huge gaps from one stage to the next where experience is assumed. API is not functional from a people analytics perspective: missing fields, missing history, suitable only for point-to-point integrations. Reporting/BI Publisher, if you can get it working, is a maintenance burden for enhancements. HCM Extracts works well, but the output is best delivered as an XML file. I think I lost a lot of hair and put on ten pounds (or was it ten kilos?!) working through a suitable extraction method for the HCM Cloud suite that was going to give us the right level of data granularity for proper historically accurate people analytics data. We tried every method of data extraction, from the API to using BI Publisher reports and templates.
I can see why people who are experienced in the Oracle domain stick with it for decades; the experience here is hard-won and akin to a level of magic. The barriers to entry for new players are just so high that even I, a software engineer and data expert with a career spent in HR data, could not figure out how to get a piece of functionality working that in other systems would take a handful of clicks. Many Paths to HRIS System Integration In looking to build an extraction for people analytics, you have a number of methods at your disposal. There is now an API, and the built-in reporting could be a reasonable option for you if you have some experience with BI Publisher. There are also the HCM Extracts built for bulk extraction purposes. We quickly discounted the API as not yet being up to scratch for people analytics purposes, since it lacks access to subject areas and fields and cannot provide the level of history and granularity that we need. I hope that the API can be improved in the future, as it is generally our favorite method for extraction. We then spent days, and probably weeks, trying to get the built-in reporting and BI Publisher templates to work correctly and deliver us the data we're used to from our time using Oracle's on-premises solutions (quite a good data structure). Alas, this was one of the most frustrating experiences of my life; it really says something when I had to go find a copy of MS Word 2006 in order to use a plugin that for some reason just wouldn't load in MS Word 2016, all to edit and build a template file to be uploaded, creating multiple manual touchpoints whenever a change is required. Why is life so difficult?? Even with a bunch of time lost to this endeavour, our experience was that we could probably get all the data we needed using the reporting/BI Publisher route, but that it was going to be a maintenance nightmare if an extract had to change, requiring an Oracle developer to make sure everything ran correctly.
If you have experienced resources, this may work for you still. We eventually settled on the HCM Extracts solution; while the interface used to build an extract is mind-numbingly frustrating, it will at least reliably provide access to the full data set and deliver it in an output that, with some tooling, can be ingested quite well. There are a number of options for how you can export the data, and we would usually prefer a CSV-style extraction, but the hierarchical nature of the extraction process here means that XML becomes the preferred method, unless you want to burn the best years of your life tediously creating individual outputs for each object by hand in a semi-responsive interface. We therefore figured it would be easier, and would enhance maintainability, if we built our own XML parser for our data pipeline to ingest the data set. There are XML-to-CSV parsers available (some for free) if you need to find one, but my experience with them is that they struggle with some files to deliver a clean output for ingestion. With an extract defined, though, there are a good number of options for how to deliver and schedule the output, and reliability is good. We've only had a few issues since the upfront hard work was completed. Changing an extract is also relatively straightforward; if you want to add a field or object, you can do so through the front-end interface in a single touchpoint. We do love Oracle data, and don't get me wrong: the construction and integrity are good, and we have a repeatable solution for our customer base that we can deliver at will. But it was a harrowing trip of discovery that, to me, explains why we see so few organizations from the Oracle ecosystem out there talking about their achievements. Don't make me go back, mommy! Want to Better Understand How One Model can Help You? Request a Demo Today. Other HRIS Comparisons Coming Soon ADP Workforce Now
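As a toy illustration of the XML-flattening step described above (the XML shape here is invented, and this is nothing like our production parser), a few lines of Python can turn a hierarchical extract into csv-style rows:

```python
# Illustrative only: flatten a hierarchical extract (invented XML shape) into
# flat rows, the kind of reshaping an XML extract output needs for analytics.
import xml.etree.ElementTree as ET

XML = """
<Extract>
  <Worker id="100">
    <Assignment dept="Sales" grade="3"/>
    <Assignment dept="Marketing" grade="4"/>
  </Worker>
  <Worker id="101">
    <Assignment dept="IT" grade="2"/>
  </Worker>
</Extract>
"""

def flatten(xml_text):
    """One flat row per child node, carrying the parent's key down."""
    rows = []
    for worker in ET.fromstring(xml_text).findall("Worker"):
        for asg in worker.findall("Assignment"):
            rows.append({"worker_id": worker.get("id"), **asg.attrib})
    return rows

rows = flatten(XML)
```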
10 min read
Phil Schrader
Post 1: Sniffing for Bull***t. As a people analytics professional, you are now expected to make decisions about whether to use various predictive models. These are surprisingly difficult decisions with important consequences for your employees and job applicants. In fact, I started drafting up a lovely little three-section blog post around this topic before realizing that there was zero chance I was going to be able to pack everything into a single post. There are simply no hard and fast rules you can follow to know if a model is good enough to use “in the wild.” There are too many considerations. To take an initial example, what are the consequences of being wrong? Are you predicting whether someone will click on an ad, or whether someone has cancer? In fact, even talking about model accuracy is multifaceted. Are you worried about detecting everyone who does have cancer-- even at the risk of false positives? Or are you more concerned about avoiding false positives? Side note: If you are a people analytics professional, you ought to become comfortable with the ideas of precision and recall. Many people have produced explanations of these terms, so we won’t go into them here. Here is one from “Towards Data Science”. So, all that said, instead of a single long post attempting to cover a respectable amount of this topic, we are going to put out a series of posts under the heading “Evaluating a Predictive Model: Good Smells and Bad Smells.” And, since I’ve never met an analogy that I wasn’t willing to beat to death, we’ll use that smelly comparison to help you keep track of the level at which we are evaluating a model. For example, in this post we’re going to start way out at bull***t range. Sniffing for Bull***t As this comparison implies, you ought to be able to smell these sorts of problems from pretty far out. In fact, for these initial checks, you don’t even have to get close enough to sniff around at the details of the model.
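Picking up the precision-and-recall side note above, the two ideas are easy to make concrete. A minimal sketch with invented labels:

```python
# Toy sketch of precision and recall from parallel 0/1 label lists.
def precision_recall(predicted, actual):
    """predicted/actual: parallel lists of 0/1 labels (1 = positive class)."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# A screening-style model tuned for recall: it flags aggressively,
# catching every true positive at the cost of false alarms.
actual    = [1, 1, 1, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 1, 1, 0, 0, 0]
precision, recall = precision_recall(predicted, actual)
```

Precision asks "of everything we flagged, how much was right?"; recall asks "of everything truly positive, how much did we catch?" The cancer-screening example above is the classic case where recall is prioritized.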
You’re simply going to ask the producers of the model (vendor or in-house team) a few questions about how they work to see if they are offering you potential bull***t. Remember that predictions are not real. Because predictive models generate data points, it is tempting to treat them like facts. But they are not facts. They are educated guesses. If you are not committed to testing them and reviewing the methodology behind them, then you are contenting yourself with bull***t. Technically speaking, by bull***t I mean a scenario in which you are not actually concerned with whether the predictions you are putting out are right or wrong. For those of you looking for a more detailed theory of bull***t, I direct you to Harry G. Frankfurt. At One Model we strive to avoid giving our customers bull***t (yay us!) by producing models with transparency and tractability in mind. By transparency we mean that we are committed to showing you exactly how a model was produced, what type of algorithm it is, how it performs, how features were selected, and what other decisions were made to prepare and clean the data. By tractability we mean that the data is traceable and easy to wrangle and analyze. When you put these concepts together you end up with predictive models that you can trust with your career and the careers of your employees. If, for example, you produce an attrition model, transparency and tractability will mean that you are able to educate your data consumers on how accurate the model is. It will mean that you have a process set up to review the results of predictions over time and see if they are correct.
It will mean that if you are challenged about why a certain employee was categorized as a high attrition risk, you will be able to explain what features were important in that prediction. And so on. To take a counter-example, there’s an awful lot of machine learning going on in the talent acquisition space. Lots of products out there are promising to save your recruiters time by using machine learning to estimate whether candidates are a relatively good or a relatively bad match for a job. This way, you can make life easier for your recruiters by taking a big pile of candidates and automagically identifying the ones that are the best fit. I suspect that many of these offerings are bull***t. And here are a few questions you can ask the vendors to see if you catch a whiff (or perhaps an overwhelming aroma) of bull***t. The same sorts of questions would apply for other scenarios, including models produced by an in-house team. Hey, person offering me this model, do you test to see if these predictions are accurate? Initially I thought about making this question “How do you” rather than “Do you”. I think “Do you” is more to the point. Any hesitation or awkwardness here is a really bad smell. In the talent acquisition example above, the vendor should at least be able to say, “Of course, we did an initial train-test split on the data and we monitor the results over time to see if people we say are good matches ultimately get hired.” Now, later on we might devote a post in this series to self-fulfilling prophecies -- meaning, in this case, that you should be on alert for the fact that by promoting a candidate to the top of the resume stack, you are almost certainly going to increase the odds that they are hired and, thus, your model is shaping, rather than predicting, the future. But we’re still out at bull***t range, so let’s leave that aside.
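The "train-test split" in that hypothetical vendor answer is conceptually simple: hold out data the model never saw and score against it. A toy sketch with invented data and a deliberately naive threshold "model":

```python
# Toy sketch of a holdout evaluation. Data and the "model" are invented.
import random

def train_test_split(rows, test_fraction=0.25, seed=7):
    """Shuffle deterministically, then hold out a test slice the model never sees."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

def accuracy(model, test_rows):
    hits = sum(1 for r in test_rows if model(r) == r["label"])
    return hits / len(test_rows)

# Toy data and a deliberately naive single-threshold "model"
data = [{"score": s, "label": int(s > 50)} for s in range(0, 100, 5)]
train, test = train_test_split(data)
naive_model = lambda r: int(r["score"] > 50)
acc = accuracy(naive_model, test)
```

Here the naive rule happens to match how the toy labels were generated, so accuracy is perfect; with real data it never is, and that held-out score is the number a non-bull***t vendor should be ready to discuss.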
And so, having established that the producer of the model does in fact test their model for accuracy, the next logical question to ask is: So how good is this model? Remember that we are still sniffing for bull***t. The purpose of this question is not so much to hear whether a given model has .75 or .83 precision or recall, but just to test if the producers of the model are willing to talk about model performance with you. Perhaps they assured you at a high level that the model is really great and they test it all the time-- but if they don’t have any method of explaining model performance ready for you… well… then their model might be bull***t. What features are important in the model? / What type of algorithm is behind these predictions? These follow-up questions are fun in the case of vendors. Oftentimes vendors want to talk up their machine learning capabilities with a sort of “secret sauce” argument. They don’t want to tell you how it works or the details behind it because it’s proprietary. And it’s proprietary because it’s AMAZING. But I would argue that this need not be the case and that their hesitation is another sign of bull***t. For example, I have a general understanding of how the original PageRank algorithm behind Google Search works: crawl the web and work out the number of pages that link to a given page as a sign of relevance. If those backlinks come from sites which themselves have large numbers of links, then they are worth more. In fact, Sergey Brin and Larry Page published a paper about it. This level of general explanation did not prevent Google from dominating the world of search. In other words, a lack of willingness to be transparent is a strong sign of bull***t. How do you re-examine your models? Having poked a bit at transparency, these last questions get into issues of tractability. You want to hear about the capabilities that the producers of the model have to re-examine the work they have done.
Did they build a model a few years ago and now they just keep using it? Or do they make a habit of going back and testing other potential models? Do they save off all their work so that they could easily return to the exact dataset that was used to train a specific version of the model? Are they set up to iterate, or are they simply offering a one-size-fits-all algorithm to you? Good smells here will be discussions about model deployment, maintenance, and archiving. Streets-and-sewers type stuff, as one of my analytics mentors likes to say. Bad smells will be high-level vague assurances or -- my favorite -- simple appeals to how amazingly bright the team working on it is. If they do vaguely assure you that they are tuning things up “all the time,” then you can hit them with this follow-up question: Could you go back to a specific prediction you made a year ago and reproduce the exact data set and version of the algorithm behind it? This is a challenging question, and even a team fully committed to transparency and tractability will probably hedge their answers a bit. That’s ok. The test here is not just about whether they can do it, but whether they are even thinking about this sort of thing. Ideally it opens up a discussion about how they will support you, as the analytics professional responsible for deploying their model, when you get challenged about a particular prediction. It’s the type of question you need to ask now because it will likely be asked of you in the future. As we move forward in this blog series, we’ll get into more nuanced situations. For example, reviewing the features used in the predictions to see if they are diverse and make logical sense. Or checking to see if the type of estimator (algorithm) chosen makes sense for the type of data you provided.
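One hedged sketch of what "could you reproduce it?" implies in practice: fingerprint exactly what went into each training run so that a year-old prediction can be traced back to its data and parameters. The field names and logging scheme here are illustrative, not a description of any particular product:

```python
# Illustrative sketch: snapshot a hash of the training data and model
# parameters alongside every run so old predictions stay reproducible.
import hashlib
import json

def fingerprint(dataset, model_params):
    """Stable fingerprint of exactly what went into a training run."""
    payload = json.dumps({"data": dataset, "params": model_params}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

run_log = {}

def record_run(run_id, dataset, model_params):
    run_log[run_id] = fingerprint(dataset, model_params)

dataset = [{"id": 1, "tenure": 3}, {"id": 2, "tenure": 7}]
params = {"algorithm": "logistic_regression", "version": "2024-01"}
record_run("run-001", dataset, params)

# A year later: can we prove we're looking at the same data and model?
same = run_log["run-001"] == fingerprint(dataset, params)
```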
But if the model that you are evaluating fails the bull***t smell test outlined here, then it means that you’re not going to have the transparency and tractability necessary to pick up on those more nuanced smells. So do yourself a favor and do a test whiff from a ways away before you stick your nose any closer.
4 min read
Nicholas Garbis
Our team recently published a whitepaper which explains the "how and why" of our approach to getting data out of Workday. In it we share a lot of challenges and a heap of technical detail regarding our approach. There are also a couple of embedded videos within the paper (unless you print it!). We produced this whitepaper to share the knowledge and experience we have gained working with our customers, many of whom have Workday as their core HCM. With these customers, we use our proprietary 'connectors' to extract the relevant data through Workday's APIs (adding in data from RaaS reports where needed). But that is just the beginning because, while the extraction is critical, what comes out of it is essentially 'dull data' that lacks analytical value in its pre-modeled state. We don't stop there. One Model's unique expertise kicks in at this point, converting the volumes of data from Workday (and other HR and non-HR systems) into what we like to call an "analytics-ready data asset". That raises the questions, "What exactly is an 'analytics-ready data asset'?" and "How does One Model create this data asset from Workday data?" So, here's a definition ... DEFINITION of an "Analytics-Ready Data Asset" A structured set of data, purpose-built to support a variety of analytics deliverables, including: Metrics that are pre-calculated, can be updated centrally, and have relevant metadata Queries that can range from simple to complex Reports that contain data in table format (rows and columns) with calculations Dashboards and Storyboards that deliver data in compelling visuals that accelerate insights Data science such as predictive modeling, statistical significance testing, forecasts, etc.
Integration of data from multiple sources (HR and non-HR) leveraging the effective-dated data structure Data feeds that can be set up to supply specific data to other systems (e.g., data lakes) Security model that enables controls over who can see which parts of the organization AND which data fields they will see (some of them at summary, others at employee-level detail) One of the key elements of building such a data asset from Workday is the conversion of the source data into an effective-dated structure which will support views that trend over time (without losing data or creating conflicting data points). This is much more difficult than you'd expect, given that we are conditioned to think of HR data as representative of the employee lifecycle, and many systems of the past were architected with that in mind. This is not a knock on Workday -- not at all -- it's a great HCM solution that has transformed the HR tech industry with its focus on manager and employee experience. They are not a huge success story by accident! However, delivering a great experience in a transactional HR system does not directly translate into an analytics capability that is powerful enough to support the people analytics needs of companies today (and for the future). To accelerate your people analytics journey, and to ensure you don't run out of runway, you need a solution like One Model to bring your Workday data to life. Download the whitepaper to get the full story. Go to www.onemodel.co/workday ABOUT ONE MODEL One Model’s industry-leading, enterprise-scale people analytics platform is a comprehensive solution for business and HR leaders that integrates data from HR systems with financial and operational data to deliver metrics, storyboard visuals, and predictive analytics through a proprietary AI and machine learning model builder. People data presents unique and complex challenges which One Model simplifies to enable faster, better, evidence-based workforce decisions.
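To picture what the effective-dated structure mentioned above buys you, here is a tiny sketch (schema invented for illustration): any record can be queried as of any date, so trended views never lose history or create conflicting data points.

```python
# Toy effective-dated rows: each row carries the date range in which it applies.
rows = [
    {"emp": "e1", "dept": "Sales", "from": "2022-01-01", "to": "2023-06-30"},
    {"emp": "e1", "dept": "Marketing", "from": "2023-07-01", "to": "9999-12-31"},
]

def as_of(rows, emp, date):
    """Return the single row in effect for an employee on a given date.
    ISO date strings compare correctly as plain strings."""
    for row in rows:
        if row["emp"] == emp and row["from"] <= date <= row["to"]:
            return row
    return None

then = as_of(rows, "e1", "2023-01-15")["dept"]  # historical view
now = as_of(rows, "e1", "2024-03-01")["dept"]   # current view
```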
Learn more at www.onemodel.co
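To make the effective-dated idea concrete, here is a minimal sketch in Python (with hypothetical field names, not One Model's actual data model): each change to an employee's record closes the prior row and opens a new one, so any point in time can be reconstructed without losing or overwriting history.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class JobRecord:
    employee_id: str
    job_level: str
    effective_from: date
    effective_to: date  # open-ended records use date.max

def as_at(records, when):
    """Return the records in force on a given date."""
    return [r for r in records if r.effective_from <= when < r.effective_to]

# A promotion closes the old record and opens a new one; nothing is overwritten.
history = [
    JobRecord("E1", "Analyst", date(2020, 1, 1), date(2022, 7, 1)),
    JobRecord("E1", "Manager", date(2022, 7, 1), date.max),
]

print(as_at(history, date(2021, 6, 30))[0].job_level)  # Analyst
print(as_at(history, date(2023, 1, 1))[0].job_level)   # Manager
```

Because every row carries its own validity window, trended views (headcount by level over time, for example) are just repeated `as_at` queries at each period end.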
6 min read
Nicholas Garbis
WATCH THE VIDEO! A conversation with our Chief Product Officer, Tony Ashton, on the topic of insight generation, in which he shows how One Model’s new insight function works.

Insight Generation

I believe that a key element of People Analytics should be insight generation - reducing the time and cognitive load required for HR and business leaders to generate insights that lead to actions. Many people analytics teams have made this a priority of their service offering, some of them even including "insights" in the name of their team. With artificial intelligence, faster and higher-quality insight generation can be driven across an organization. An organization with a mature people analytics capability should be judged on the frequency and quality of insight generation away from the center.

Why I Stopped Liking Maturity Models

Humor me for a moment while I share a very short rant and a confession. I have grown to despise the “maturity curves” that have been circulating through people analytics for over a decade. My confession is that I have not (yet!) been able to come up with a compelling replacement. My main issues? The focus is on data and technology deliverables, not on actions and outcomes. They are vague and imply that you proceed from one stage to the next, when in reality all of the stages can (and should) be constantly maturing and evolving without any of them ever being “done” or “perfect.” Too many times I have heard (mostly newer) people analytics leaders saying that they need to get their data and basic reporting right before they can consider any analytics. I personally don’t believe that to be true -- things will get easier, faster, and better with your analytics, but you do not have to wait to make progress at any of the stages.

Action Orientation

For example, getting to “predictive” -- being able to foresee what is likely to happen -- is shown in many maturity models.
It is easy to imagine, and you may have examples, where very mature predictive analytics deliverables have had little or no impact on the business. In my opinion, true maturity is not about the deliverable, but about the insights generated and the corresponding actions that are taken to drive business outcomes. Going further, getting to “prescriptive” means you have a level of embedded artificial intelligence that is producing common-language actions that should be considered. This assumes the “insight” component is completely handled by the AI, which then proceeds to select or create a recommended action. This is still quite aspirational for nearly all organizations, yet it is repeated often.

Focus on Designing for Insight Generation at the “Edges”

People analytics teams are typically centralized in a COE model, where expertise on workforce data, analytics, dashboard design, data science, insight generation, and data storytelling can be concentrated and developed. The COE is capable of generating insights for the CHRO and HR leadership team, but what about the rest of the organization? What about the HR leaders and managers farther out at the edges of the org chart? The COE needs to design and deliver content to the edges of the organization that enables those users to generate insights without needing to directly engage the COE in the process. A storyboard or dashboard needs to be designed with the specific intention of shortening the time between a user seeing the content and arriving at an accurate insight. A good design will increase the likelihood of a “lightbulb" moment.

Humans and Machines Turning on “Lightbulbs” Together

We need to ensure that HR leaders and line managers are capable of generating insights from the people analytics deliverables (reports, dashboards, storyboards, etc.). This will require some upskilling in data interpretation and data storytelling. With well-designed content, they will generate insights faster and with less effort.
Human-generated insights will never be fully replaced. Instead, they will be augmented by machines in the form of AI and machine learning. With the augmentation of AI, humans get a boost, and together the human-machine combination is a powerful force for insights and then actions. With AI augmentation, we can stop trying to teach everyone statistical regression techniques they will never use. The central PA team can manage the AI toolset, ensure it is delivering valid interpretations, and then focus on enabling insight generation and storytelling by the humans - the HR leaders and line managers.

One Model Lights Up Our Customers’ Data Visualizations

One Model has just introduced a “lightbulb” feature that is automatically enabled on storyboard tiles containing metrics that would benefit from forecasting or statistical significance tests. This is not limited to the content our team creates; it also automatically scans the data within storyboards created by our customers. This is far more than basic language attached to a simple regression model. By integrating features of our One AI machine learning module into the user interface, we automatically interpret the type and structure of the data in the visual and then select the appropriate statistical model for determining whether there is a meaningful relationship, which is described in easy-to-interpret language. Where a forecast is available it is based on an ARIMA model, and all the relevant supporting data is just a click away. With this functionality built directly into the user interface, each time you navigate into the data, filtering or drilling into an organization structure, the calculations will automatically reassess the data and generate the interpretations for you. With automated insights generated through AI, One Model accelerates your people analytics journey, moving you from data to insights to actions.
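One Model's forecasts are ARIMA-based; as a toy illustration of the autoregressive idea underneath such models (this is not One Model's implementation, and real ARIMA adds differencing and moving-average terms), here is a minimal AR(1) fit and forecast in pure Python:

```python
def fit_ar1(series):
    """Least-squares estimate of phi in x[t] ~ phi * x[t-1] (mean-removed)."""
    mean = sum(series) / len(series)
    x = [v - mean for v in series]
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return mean, num / den

def forecast(series, steps):
    """Project the series forward by repeatedly applying the fitted coefficient."""
    mean, phi = fit_ar1(series)
    last = series[-1] - mean
    out = []
    for _ in range(steps):
        last = phi * last
        out.append(mean + last)
    return out

# Hypothetical monthly headcount series.
headcount = [100, 104, 103, 107, 110, 112, 115, 117]
print(forecast(headcount, 3))
```

A production forecast would also select model orders and report confidence intervals; the point here is only that the model can be refit automatically whenever the underlying data changes, which is what the automatic re-assessment described above does each time you filter or drill.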
One Model’s new Labor Market Intel product delivers external supply & demand data at an unmatched level of granularity and flexibility. The views in LMI help you to answer the questions you and your leaders need answers to with the added flexibility to create your own customized views. Learn more at www.onemodel.co/LMI
14 min read
Chris Butler
If people analytics teams are going to control their own destiny, they're going to need to support the enterprise data strategy. You see, the enterprise data landscape is changing and IT has heard its internal customers. You want to use your own tools, your own people, and apply your hard-won domain knowledge in the way that you know is effective. Where IT used to fight against resources moving out of their direct control, they have come to understand it's a battle not worth fighting, and by facilitating subject matter experts to do their thing they allow business units to be effective and productive.

Enter the Enterprise Data Architecture

The movement of recent years is for IT to facilitate an enterprise data mesh into their architecture, where domain expert teams can build, consume, and drive analysis of data in their own function... so long as you can adhere to some standards and you can share your data across the enterprise. For a primer on this trend and the subject, read this article: Data Mesh - Rethinking Enterprise Data Architecture. The diagram heading this blog shows a simplified view of a data mesh; we'll focus on the people analytics team's role in this framework.

What is a Data Mesh?

A data mesh is a shared interconnection of data sets that is accessible by different domain expert teams. Each domain team manages its own data, applying its specific knowledge to its construction so it is ready for analytics, insight, and sharing across the business. When data is built to a set of shared principles and standards across the business, it becomes possible for any team to reach across to another domain and incorporate that data set into their own analysis and content. Take for example a people analytics team looking to analyze relationships between customer feedback and front-line employees' attributes and experience.
Alternatively, a sales analytics team may be looking at the connection between learning and development courses and account executive performance, reaching across into the people analytics domain data set. Data sharing becomes key in the data mesh architecture, and it's why you've seen companies like Snowflake do so well and incumbents like AWS bring new features to market to create cross-data-cluster sharing. There are two ways to share data across the enterprise:

- Cross-cluster / data warehouse sharing - each domain operates its own schemas or larger infrastructure that other business units are allowed to access. AWS has an example here: https://aws.amazon.com/redshift/features/data-sharing/
- Feeding domain Analytics-Ready data into a centralized enterprise data architecture - this is more typical today and is particularly useful if the organization has a data lake strategy. Data lakes are generally unstructured (more of a data swamp); in order to be useful the data needs to be structured, so providing Analytics-Ready data into either a data lake or data warehouse that adheres to common principles and concepts is a much more usable method of sharing value across data consumers.

One Model was strategically built to support your HR data architecture. If you'd like to learn more, check out our people analytics enterprise products and our data mesh product.

How can people analytics teams leverage and support the HR data architecture?

The trend to the mesh is growing, and you're going to receive support to build your people analytics practice in your own way. If you're still building the case for your own managed infrastructure, use these points to help others see the light and understand how you are going to support their needs.
Identify the enterprise data strategy

I'm sure you've butted heads against this already, but identify whether the organization is supportive of a mesh architecture; if not, you'll have to gear up to show your internal teams how you will give them what they need while taking away some of their problems. If they're running centralized or in a well-defined mesh, you will have different conversations to obtain or improve your autonomy.

Supporting the enterprise data mesh strategy

People analytics teams are going to be asked to contribute to the enterprise data strategy, if you are not already today. There are a number of key elements you'll need in order to do this:

- Extract and orchestrate the feeds from your domain source systems. Individual systems have nuances that your team will understand and that others in the enterprise won't. A good example is supervisor relationships that change over time and how they are stored and used in your HRIS.
- Produce and maintain clean feeds of Analytics-Ready data to the enterprise. This may be to a centralized data store or the sharing of your domain infrastructure across the business.
- Adhere to any centralized standards for data architecture; these may differ based on the tooling used to consume data. Data architected for consumption by Tableau is typically different (de-normalized) from a model architected for higher extensibility and maintenance (normalized), which would allow additional data to be integrated and new analyses to be created without re-architecting your core data tables. You can still build your own nuanced data set and combinations for your domain purpose, but certain parts of the feed may need to follow a common standard to enable easy interpretation and use across the enterprise.
- Define data, metrics, and attributes and their governance, ideally down to the source and calculation level, and document them for your reference and for other business units to better understand and leverage your data.
The larger your system landscape is, the harder this will be to do manually. Connect with other domain teams to understand their data catalogues and how you may use them in your own processes.

Why should people analytics care?

This trend to the data mesh is ongoing; we've seen it for a number of years and heard how IT thinks about solving the HR data problem. The people analytics function is the domain expertise team for HR. Our job is to deliver insight to the organization, but we are also the stewards of people data for our legacy, current, and future systems. To do our jobs properly we need to take a bigger-picture view of how we manage this data for the greater good of the organization. In most cases, IT is happy to hand the problem off to someone else, whether that's an internal team specialized in the domain or an external vendor who can facilitate it.

How does One Model support the Data Mesh Architecture for HR?

It won't surprise you to hear that we know a lot about this subject, because this is what we do. Our core purpose has been understanding and orchestrating people data across the HR Tech landscape and beyond. We built for a maturing customer that needed greater access to their data, the capability to use their own tools, and to feed their clean data to other destinations like the enterprise data infrastructure and external vendors. I cover below a few ways in which we achieve this, or you can watch the video at the end of the article.

Fault Tolerant Data Extraction

Off-the-shelf integration products and the front-end tools in most HRIS systems don't cater for the data nuances, scale of extraction, or maintenance activities of the source system. Workday, for example, provides snapshot-style data at a point in time, and its extraction capabilities quickly bog down for medium and large enterprises.
The result is that it is very difficult to extract a full transactional history to support a people analytics program without arcane workarounds that give you inaccurate data feeds. We ultimately had to build a process to interrogate the Workday API about dozens of different behaviors, view the results, and have the software run different extractions based on those results. Additionally, most integration tools don't cater for Workday's weekly maintenance windows, during which integrations will go down. We've built integrations to overcome these native and nuanced challenges for SuccessFactors, Oracle, and many other systems our customers work with. An example of a Workday extraction task is below.

Data Orchestration and Data Modelling

Our superpower. We've built for the massive complexity that is understanding and orchestrating HR data, enabling infinite extension while preserving maintainability. What's more, it's transparent: customers can see how their data is processed and its lineage, and can interact with the logic and data models. This is perfect for IT to understand what is being done with your data and to have confidence in the resulting Analytics-Ready Data Models.

Data Destinations to the Enterprise or External Systems

Your clean, connected data is in demand by other stakeholders. You need to be able to get it out and feed your stakeholders, in the process demonstrating your mastery of the people data domain. One Model facilitates this through our Data Destinations capability, which allows the creation and automated scheduling of data feeds to your people data consumers. Feeds can be created using the One Model UI in the same way you might build a list report, or from an existing table, and then simply added as a data destination.

Host the Data Warehouse or Connect Directly to Ours

We've always provided customers with the option to connect directly to our data warehouse to use their own tools like Tableau, Power BI, R, SAC, Informatica, etc.
Our philosophy is one of openness, and we want to meet customers where they are, so you can use the tools you need to get the job done. In addition, a number of customers host their own AWS Redshift data warehouse that we connect to. Data destinations can also feed other warehouses, or you can use external tooling to sync data to warehouses like Azure SQL, Google, Snowflake, etc. A few examples:

- Snowflake - https://community.snowflake.com/s/article/How-To-Migrate-Data-from-Amazon-Redshift-into-Snowflake
- Azure - https://docs.microsoft.com/en-us/azure/data-factory/connector-amazon-redshift

Data Definitions and Governance

With One Model, all metric definitions are available for reference, along with interactive explanations and drill-through to the transactional detail. Data governance can be centralized, with permission controls on who can edit or create their own strategic metrics, which may differ from the organizational standard.

HR-Specific Content and Distribution

We provide standard content tailored to the customer's own data, providing out-of-the-box leverage for your data as you stand up your people analytics programs. Customers typically take these and create their own storyboards strategic to their needs. It's straightforward to create and distribute your own executive, HRBP, recruiting, or analysis-project storyboards to a wide scale of users - all controlled by the most advanced role-based security framework, which ensures users see only the data they are permissioned for while virtually eliminating user maintenance through automated provisioning, role assignment, and contextual security logic that links each user to their own data point.

Watch the two minute video of what One Model does
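As a generic sketch of the fault-tolerance idea described above - retrying with backoff so a source outage such as a weekly maintenance window yields a delayed feed rather than a broken one - consider the following (hypothetical function names; this is not One Model's connector code):

```python
import time

def extract_with_retry(fetch_page, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Pull pages from an API, retrying each page with exponential backoff."""
    records, page = [], 0
    while True:
        for attempt in range(max_attempts):
            try:
                batch = fetch_page(page)
                break
            except ConnectionError:
                if attempt == max_attempts - 1:
                    raise  # give up only after repeated failures
                sleep(base_delay * 2 ** attempt)  # back off: 1s, 2s, 4s, ...
        if not batch:
            return records  # an empty page signals the end of the feed
        records.extend(batch)
        page += 1

# Simulated source: the first call fails (a maintenance window), then recovers.
calls = {"n": 0}
def flaky_source(page):
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("service in maintenance window")
    data = [[{"id": 1}, {"id": 2}], [{"id": 3}]]
    return data[page] if page < len(data) else []

result = extract_with_retry(flaky_source, sleep=lambda s: None)
print(result)
```

A real connector also has to vary its extraction strategy per endpoint, as described above for the Workday API, but the retry-and-resume skeleton is the common core.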
9 min read
Chris Butler
Following our blog last month about how systems issues can open the door to staff underpayment, a number of our stakeholders have asked if we might go deeper into how a people analytics solution - and specifically One Model - can solve this problem. We are nothing if not obliging here at One Model, so here we go! We thought we would answer this question by articulating the most common system-derived problems associated with people data and how One Model and an integrated people analytics plan can help resolve them.

PROBLEM NUMBER ONE - PEOPLE DATA IS STORED IN MULTIPLE NON-INTEGRATED SYSTEMS

As discussed previously, our experience is that most large organisations have at least 7 systems in which they store people data. In some larger organisations, that number can be more than 20! Data silos present a major risk to HR governance. Silos create the risk that information may differ between systems, or be updated in one system and not in others. If information in one non-integrated system is wrong or out of date, it becomes very hard - firstly, to isolate the issue and remediate it, and secondly, if the error was made months or years in the past, to understand which system holds the correct information. At One Model, we are consistently helping our customers create a single source of truth for their people data. Blending data together across siloed systems gives HR a great opportunity to cross-validate the data in those systems before it becomes an issue. Blended data quickly isolates instances of data discrepancy - allowing HR not only to resolve individual data issues, but to uncover systemic problems of data accuracy. Often, when people are working across multiple systems they will take shortcuts to updating and managing data; this is particularly prevalent when data duplication is involved.
If it isn’t clear which system has priority and data doesn’t automatically update in other systems, human error is an inevitable outcome. With One Model, you can decide which systems represent the most accurate information for particular data and merge all data along these backbone elements, resulting in greater trust and confidence. The data integration process that is core to the One Model platform can, in effect, create a single source of truth for your people data. This presentation by George Colvin at PAFOW Sydney neatly shows how the One Model platform was used by Tabcorp to manage people data silo issues.

PROBLEM NUMBER TWO - LIMITED ACCESS TO DATA IN OLD AND NON-SUPPORTED SYSTEMS

Further to the issue of data spread across multiple systems, our experience tells us that not only are most large organisations running multiple people data systems - at least one of those systems will be running software that is either out of date or no longer supported by the vendor. So even if you do wish to integrate data between systems, you may be unable to. It is always best if you can identify data issues in real time to minimise exposure and scope of impact, but this isn’t always possible, and you may have to dig into historical transactional data to figure out the scale of the issue and how it impacts employees and the company. If that wasn’t challenging enough, when changing or upgrading systems most companies, for reasons of cost and complexity, end up not migrating all of their historical data. This means you end up paying for the maintenance of your old systems or for managing an offline archived database. Furthermore, when you need to access that historical data, running queries is incredibly difficult. This is compounded when you need to blend the historical data with your current system. It is, to put it mildly, a pain in the neck!
One Model’s cloud data warehouse can hold all of your historical data and shield your company from system upgrades by managing the data migration to your new system, or by housing your historical data and providing seamless blending with the data in your current active systems. If you are interested in this topic and how One Model can help, have a read of this blog, which covers in more detail how One Model can mitigate the challenges associated with system migration.

PROBLEM NUMBER THREE - ACCESS TO KEY HR DATA IS LIMITED TO THE CENTRAL HR FUNCTION

As a result of technology, security, privacy and/or process, HR data in many large organisations is only accessible by the central HR department. As a result, individual line managers don’t have the autonomy or capability to isolate and resolve people data issues before they develop. Data discrepancies are more likely to be identified by the people closest to the real-world events reflected in the transactional system. Managers and HR Business Partners are your first line of defence in identifying data issues, as with any other HR issue. Of course, line managers need good people analytics to make better decisions and drive strategy, but a byproduct of empowering managers to oversee this information is that they are able to provide feedback on the veracity of the data and quickly resolve data accuracy issues. Sharing data widely requires a comprehensive and thoughtful approach to data sensitivity, security, and privacy. One Model has the most advanced people analytics permissions and role-based security framework in the world to help your company deploy and adopt data-driven decision making.

PROBLEM NUMBER FOUR - EVEN IF I RESOLVE A HISTORICAL UNDERPAYMENT, HOW DO I ENSURE THIS DOESN’T HAPPEN AGAIN?
One of the consistent pieces of feedback we received on the initial blog was that many stakeholders were comfortable that, once an issue had been identified, they would be able to resolve it - either internally or with the support of an external consulting firm. However, those stakeholders were concerned about their ability to uncover other instances of underpayment in their business or to ensure that future incidents did not occur. There is no silver bullet for this problem; however, our view is that a combination of the following factors can help organisations mitigate these risks:

- Integrated people data - having a one-stop single source of truth for your people data is crucial.
- Access to historical data - understanding when and how issues developed is also very important.
- Empowerment of line managers to isolate and resolve issues - managers are your first line of defence in understanding and resolving these issues, and you need to enable them to fix problems before they develop.

People analytics and the One Model product give organisations the tools to resolve all of these problems. If you are interested in continuing this conversation, please get in touch.

PROBLEM NUMBER FIVE - A COMPLEX INDUSTRIAL RELATIONS SYSTEM AND A LACK OF HR RESOURCES

Previously, most back office processes had a lot of built-in checks and balances. There were processes to cross-check work between team members and ensure transactions totaled up and reconciled correctly, and supervisors would double-check and approve changes. Over the last 20 years, large enterprises have been accelerating ERP adoption; in order to realise ROI from that investment, many back office jobs in payroll and other functions were removed, with organisations and management expecting that the systems would always get it right. Compounding this, and despite many attempts over the years to simplify the industrial relations system, the reality is that managing employee remuneration is incredibly complex.
This complexity means that the likelihood of making payroll system configuration, interpretation, or processing mistakes is high. So what to do? Of course you need expertise in your team, or to be able to access professional advice as needed (particularly for smaller companies). In addition, successful companies are investing in people analytics to support their team and trawl through the large volumes of data to find exceptions, look for anomalies, and track down problems. Our view at One Model is that organisations need to develop metrics to identify and detect issues early. It's what our platform does. We have developed data quality metrics to deal with the following scenarios:

- Process errors
- Data inconsistency
- Transactions contrary to business rules
- Human error

A combination of quality metrics, system integrations, and staff empowered to isolate and resolve issues before they become problems is key to minimising the chances of an underpayments scandal at your business. Thanks for reading. If you have any questions or would like to discuss how One Model can help your business navigate these challenges, please click the button below to schedule a demo or conversation.
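As a simplified illustration of such data quality metrics (the rules and field names here are hypothetical, not One Model's actual definitions), exception detection boils down to scanning every record against explicit business rules:

```python
# Hypothetical payroll feed rows.
records = [
    {"id": "E1", "hours": 38, "rate": 25.0, "paid": 950.0},
    {"id": "E2", "hours": 40, "rate": 30.0, "paid": 1100.0},  # paid != hours * rate
    {"id": "E3", "hours": -5, "rate": 28.0, "paid": -140.0},  # impossible hours
]

def quality_exceptions(records):
    """Flag rows that breach simple business rules."""
    exceptions = []
    for r in records:
        if r["hours"] < 0:
            exceptions.append((r["id"], "negative hours"))
        elif abs(r["paid"] - r["hours"] * r["rate"]) > 0.01:
            exceptions.append((r["id"], "pay does not reconcile to hours x rate"))
    return exceptions

for emp, reason in quality_exceptions(records):
    print(emp, "->", reason)
```

The value comes from running rules like these continuously across the whole blended data set, so a configuration or processing mistake surfaces as a spike in an exception metric rather than as a scandal years later.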
4 min read
Josh Lemoine
Software companies today aren't exactly selling the idea of "lovingly crafting you some software that's unique and meaningful to you". There's a lot more talk about best practices, consistency, and automation. It's cool for software capabilities to be generated by robots now. And that's great when it comes to things like making predictions - One Model is a leader in that space with One AI. This post isn't about machine learning, though. It's about modeling your company's people data.

The people at One Model work with you to produce a people data model that best suits your company's needs. It's like having your own master brewer guide you through the immense complexity that we see with people data. Why does One Model take this hands-on approach? Because the people employed at your company are unique, and your company itself is unique. Organizations differ not only in structure and culture but also in the combinations of systems they use to find and manage their employees. When you consider all of this together, it's a lot of uniqueness. The uniqueness of your company and its employees is also your competitive advantage. Why then would you want the exact same thing as other companies when it comes to your people analytics? The core goal of One Model is to deliver YOUR organization's "one model" for people data.

A Data Engineer builds your data model in One Model. The Data Engineer working with you will have actual conversations with you about your business rules and logic, and will translate that information into data transformation scripting. One Model does not perform this work manually because of technical limitations or an immature product. It's actually kind of the opposite. Affectionately known as "Pipeo", One Model's data modeling framework is a major factor in allowing One Model to scale while still using a hands-on approach. Advantages of Pipeo include the following:

- It's fast. Templates and the "One Model" standard models are used as the starting point.
This gets your data live in One Model very quickly, allowing validation and subsequent logic changes to begin early in the implementation process.
- It's extremely flexible. Anything you can write in SQL can be achieved in Pipeo. This allows One Model to deliver things outside the realm of creating a standard data model. We've created a data orchestration and integrated development environment with all the flexibility of a solution you may have built internally.
- It's transparent. You, the customer, can look at your Pipeo. You can even modify your Pipeo if you're comfortable doing so. The logic does not reside in a black box.
- It facilitates accuracy. Press a validation button, get a list of errors. Correct, validate, and repeat. The scripting does not need to be run to highlight syntax issues.
- OMG is it efficient. What used to take us six weeks at our previous companies and roles we can deliver in a matter of hours. Content templates help, but when you really need to push the boundaries, being able to do so quickly and with expertise at hand lets you do more, faster.
- It's fun to say Pipeo. You can even use it as a verb. Example: I pipeoed up a few new dimensions for you.

The role the Data Engineer plays isn't a substitute for working with a dedicated Customer Success person from One Model; it's in addition to it. Customer Success plays a key role in the process as well. The Customer Success people at One Model bring many years of industry experience to the table, and they know their way around people data. They play a major role in providing guidance and thought leadership, as well as making sure everything you're looking for is delivered accurately. Customer Success will support you throughout your time with One Model, not just during implementation. If you'd like to sample some of the "craft people analytics" that One Model has on tap, please reach out for a demo. We'll pour you a pint right from the source, because canned just doesn't taste as good.
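The validate-before-run loop described above can be mimicked in a few lines. As a generic sketch (using SQLite's parser as a stand-in; Pipeo itself works differently), each SQL fragment is compiled without being executed, surfacing syntax errors up front:

```python
import sqlite3

def validate_sql(statements):
    """Return (statement, error) pairs without running the queries."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (id TEXT, dept TEXT, salary REAL)")
    errors = []
    for sql in statements:
        try:
            # EXPLAIN compiles the statement, so syntax and unknown-column
            # errors are raised without executing the query itself.
            conn.execute("EXPLAIN " + sql)
        except sqlite3.Error as exc:
            errors.append((sql, str(exc)))
    return errors

# Hypothetical transformation pipeline with one deliberate typo.
pipeline = [
    "SELECT dept, AVG(salary) FROM employees GROUP BY dept",
    "SELEC id FROM employees",  # typo: should fail validation
]
for sql, err in validate_sql(pipeline):
    print("invalid:", sql, "|", err)
```

The correct-validate-repeat loop then becomes cheap: no data is moved until the whole list of statements compiles cleanly.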
4 min read
Stacia Damron
One Model is keen on ensuring our customers have an exceptional experience interacting with our software and our team alike. That experience begins the moment we meet. Often, the moment that relationship begins is on our website. One Model's platform helps HR and People Analytics teams simplify the messiest of their workforce data, strewn over multiple systems. Our software makes life easier - and our website needs to reflect that simplicity. It needs to be straightforward, easy to navigate, and provide helpful resources and tools to help you continue to grow your people analytics functions. For months, we have been diligently working to create a site that improves your experience - a place that provides you with tools and resources to support you in your data-wrangling journey. Well, now it's official - at the end of Q2, we launched it!

The new site has clearly defined solutions for companies looking to scale their people analytics capabilities at all levels - regardless of company size - including resources to get started for evolving teams and strategies to leverage for more mature people analytics programs. Namely, our new website will more effectively serve those seeking more information regarding people analytics platforms and data warehousing solutions. One Model helps HR departments better support their people analytics team. The new website contains more materials, including white papers, customer testimonials, videos, and data sheets. Our blog offers helpful tips, relevant articles, best practices, and useful insights for today's data-driven HR professionals and data scientists.

The new website includes:

- Updated navigation that better aligns customers with our offerings and core capabilities, reduces the number of clicks needed to navigate the website, and directs users to relevant, meaningful content and solutions.
- A list of integrations and partnerships that enables users to easily identify integrations that can add value with their current software or platforms.
An updated blog that enables users to quickly find applicable, informative content and industry news regarding workforce analytics, data warehouse management, data science techniques, and people analytics programs. More options to connect with the team via numerous information request forms, with more form variation that allows users to submit requests for quotes, demos, or discussions. Supplementary materials to aid in decision making, including white papers, customer testimonials, videos, and data sheets. A careers page that showcases open roles and allows job-seekers to apply directly. As our company continues to grow and expand within the US and UK markets, our new website will better represent One Model as we continue to set the bar for excellence in HR data warehouse management and people analytics team solutions. Visit onemodel.co for a comprehensive breakdown of our workforce data solutions. About One Model: One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own. Our newest tool, One AI, integrates cutting-edge machine learning capabilities into the platform, equipping HR professionals with readily accessible, unparalleled insights from their people analytics data.
Read Article
Featured
8 min read
Phil Schrader
Last week I was doodling some recruiting graphs in my notebook, with an eye toward building out some new recruiting efficiency dashboards. I was thinking about how requisitions age over time and I got an idea for a cool stacked graph that counts up how many requisitions you have open each month and breaks them out into age buckets. Maybe some supporting breakouts like recruiter, some summary metrics, etc. Something like this: Phil's Beautifully Hand-illustrated Cholesterol Graph (above) This would be an awesome view. At a glance I could see whether my total req load was growing and I could see if I’m starting to get a build up of really old reqs clogging the system. This last part is why I was thinking of calling it the Requisition Cholesterol Graph. (That said, my teammate Josh says he hates that name. There is a comment option below… back me up here!) But then I got to thinking, how am I actually going to build that? What would the data look like? Think about it: Given: I have my list of requisitions and I know the open date and close date for each of them. Problem #1: I want to calculate the number of open reqs I have at the end of each time period. Time periods might be years, quarters, months, or days. So I need some logic to figure out if the req is open during each of those time periods. If you’re an Excel ninja then you might start thinking about making a ton of columns and using some conditional formulas. Or… maybe you figure you can create some sort of pancake stacks of rows by dragging a clever formula down the sheet… Also if you are an Excel ninja… High Five! Being an Excel ninja is cool! But this would be pretty insane to do in Excel. And it would be really manual. You’d probably wind up with a static report based on quarters or something and the first person you show it to will ask if they can group it by months instead. #%^#!!! 
If you’re a full-on Business Intelligence hotshot or Python / R wiz, then you might work out some tricky joins to inflate the data set to include a record for each period, or write a script that counts a value each time the req’s open date falls before or within a given period, etc. Doable. But then… Problem #2: Now you have your overall count of reqs open in each period. All you have to do now is group the requisitions by age and you’re… oh… shoot. The age grouping of the requisitions changes as time goes on! For example, let’s say you created a requisition on January 1, 2017. It’s still open. You should count the requisition in your open req count for January 2017 and you’d also count it in your open req count for June 2018 (because it’s still open). Figuring all that out was problem #1. But now you want to group your requisitions by age ranges. So back in January 2017, the req would count in your 0 - 3 months old grouping. Now it’s in your > 1 year grouping. The grouping changes dynamically over time. Ugh. This is another layer of logic to control for. Now you’re going to have a very wild Excel sheet or even more clever scripting logic. Or you’re just going to give up on the whole vision, calculate the average days open across all your reqs, and call it a day. $Time_Context is on my side (Gets a little technical) But I didn’t have to give up. It turns out that all this dynamic grouping stuff just gets handled in the One Model data structure and query logic -- thanks to a wonderful little parameter called $Time_Context (and no doubt a lot of elegant supporting programming by the engineering team). When I ran into $Time_Context while studying how we do Org Tenure I got pretty excited and ran over to Josh and yelled, “Is this what I think it is!?” (via Slack). He confirmed for me that yes, it was what I hoped it was. I already knew that the data model could handle Problem #1 using some conditional logic around effective and end dates. 
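For the curious, that Problem #1 logic can be sketched in a few lines of Python: a req counts as open at a period end if it opened on or before that date and had not yet closed. (The requisition data here is made up purely for illustration; this is a sketch of the idea, not One Model's implementation.)

```python
from datetime import date, timedelta

# Hypothetical requisitions: (req_id, open_date, close_date or None if still open).
reqs = [
    ("R1", date(2017, 1, 1), None),
    ("R2", date(2017, 2, 15), date(2017, 5, 10)),
    ("R3", date(2017, 3, 1), date(2017, 3, 20)),
]

def month_ends(start, end):
    """Yield the last day of each month from start through end."""
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        ny, nm = (y + 1, 1) if m == 12 else (y, m + 1)
        yield date(ny, nm, 1) - timedelta(days=1)  # day before the 1st of next month
        y, m = ny, nm

# A req is open at period end if it opened on or before that date
# and either never closed or closed after that date.
open_counts = {
    period_end: sum(
        1 for _, opened, closed in reqs
        if opened <= period_end and (closed is None or closed > period_end)
    )
    for period_end in month_ends(date(2017, 1, 1), date(2017, 6, 30))
}
```

Even in this toy version you can see why Excel gets painful fast: change the period grain from months to quarters and every formula has to be rebuilt.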
When you run a query across multiple time periods in One Model, the system can consider a date range and automatically tally up accurate end-of-period (or start-of-period) counts based on those date ranges. If you have a requisition that was opened in January 2017 and you want to calculate the number of reqs you have open at the end of every month, One Model will cycle through the end of each month, check to see if the req was opened before then and is not yet closed, and add it to the totals. We use this for all sorts of stuff, particularly headcount calculations using effective dates and end dates. So problem one was no problem, but I expected this. What I didn’t expect, and what made me Slack for joy, was how easily I could also deal with Problem #2. Turns out I could build a data model and stick $Time_Context in the join to my age dimension. Then One Model would just handle the rest for me. If you’ve gotten involved in the database side of analytics before, then you’re probably acquainted with terms like fact and dimension tables. If you haven’t, just think vlookups in Excel. So, rather than doing a typical join or vlookup, One Model allows you to insert a time context parameter into the join. This basically means, “Hey One Model, when you calculate which age bucket to put this req in, imagine yourself back in time in whatever time context you are adding up at that moment. If you’re doing the math for January 2017, then figure out how old the req was back then, not how old it is now. When you get to February 2017, do the same thing.” And thus, Problem #2 becomes no problem. As the query goes along counting up your metric by time period, it looks up the relevant requisition age grouping and pulls in the correct value as of that particular moment in time. So, with our example above, it goes along and says, “Ok I’m imagining that it’s January 2017. 
I’ll count this requisition as being open in this period of time and I’ll group it under the 0 - 3 month old range.” Later it gets to June 2018 and it says, “Ok… dang that req is STILL open. I’ll include it in the counts for this month again and let’s see… ok it’s now over a year old.” This, my friends, is what computers are for! We use this trick all the time, particularly for organization and position tenure calculations. TL;DR In short, One Model can make the graph that I was dreaming of -- no problem. It just handles all the time complexity for me. Here’s the result in all its majestic, stacked column glory: So now at a glance I can tell if my overall requisition load is increasing. And I can see down at the bottom that I’m starting to develop some gunky buildup of old requisitions (orange). If I wanted to, I could also adjust the colors to make the bottom tiers look an ugly gunky brown like in the posters in your doctor’s office. Hmmm… maybe Josh has a point about the name... And because One Model can handle queries like this on the fly, I can explore these results in more detail without having to rework the data. I can filter or break the data out to see which recruiters or departments have the worst recruiting cholesterol. I can drill in and see which particular reqs are stuck in the system. And, if you hung on for this whole read, then you are awesome too. Kick back and enjoy some Rolling Stones: https://www.youtube.com/watch?v=wbMWdIjArg0.
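The dynamic age-grouping idea behind Problem #2 can also be sketched in plain Python. The key is that the bucket is computed in the time context of each period, not as of today -- the same idea as a $Time_Context join. (This is an illustration only; the bucket boundaries are my own assumptions, not One Model's.)

```python
from datetime import date

def age_bucket(open_date, as_of):
    """Return the age grouping of a requisition *as of* the period being
    counted. Computing it against `as_of` rather than today's date is what
    makes the grouping dynamic over time."""
    days = (as_of - open_date).days
    if days <= 90:
        return "0-3 months"
    if days <= 180:
        return "3-6 months"
    if days <= 365:
        return "6-12 months"
    return "> 1 year"

opened = date(2017, 1, 1)
jan_2017 = age_bucket(opened, date(2017, 1, 31))  # bucket back in January 2017
jun_2018 = age_bucket(opened, date(2018, 6, 30))  # bucket as of June 2018
```

The same requisition lands in different buckets depending on which period the query is "imagining" itself in, which is exactly the behavior described above.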
Read Article
Featured
6 min read
Chris Butler
A few weeks ago I gave a presentation at the Talent Strategy Institute’s Future of Work conference (now PAFOW) in San Francisco about how I see the long-term relationship between data and HR Technology. Essentially, I was talking through the thought process behind a vision I could no longer ignore, one I had to go start a company to chase down. So here it is. My conviction is that we need to (and we will) look at the relationship between our data and our technology differently: essentially, the two will be split. We will choose technology to manage our data and our workflows as we need it. We will replace that technology as often as our strategy and our business needs change. Those who know my team know that we have a long history of working with HR data. We started at Infohrm many years ago, which was ultimately acquired by SuccessFactors and, shortly after, SAP. Professionally this was fantastic: worlds opened up, and we were talking to many more organizations about the challenges they were facing across their technology landscape. How to achieve data portability. Over time I was thinking through the challenges our customers faced, a large one of which was how to help grease the wheels for the huge on-premise to cloud transition that was underway, and subsequently the individual system migrations we were witnessing across the HR landscape. The pace of innovation in HR was not slowing down. Over the years hundreds of new companies were appearing (and disappearing) in the HR Tech space. It was clear that innovation was everywhere and many companies would love to adopt, or at least try out, this innovation but couldn’t. They were being hampered by political, budgetary, and technology-landscape constraints that made any change a huge undertaking. System migration was on the rise. 
As companies adopted the larger technology suites, they realized that modules were not performing as they should, and there were still gaps in functionality that they had to fill elsewhere. The promise of the suite was letting them down and continues to let them down to this day. This failure, combined with the pace of innovation, meant the landscape was in continuous flux. Fragmentation was stifling innovation and analytical maturity. The big reason to move to a suite was to eliminate fragmentation, but even within the suites the modules themselves were fragmented, and we as analytics practitioners, lacking a method for managing this change, only added to the problem. We could adopt new innovation but we couldn’t make full use of it across our landscape. Ultimately this slows down how fast we can adopt innovation and, downstream, how we improve our analytical maturity. All HR Technology is temporary. The realization I started to come to is that all of the technology we were implementing and spending millions of dollars on was ultimately temporary. We would continue to be in a cycle of change to facilitate our changing workflows and make use of new innovation to support our businesses. This is important, so let me state it again. All HR technology is temporary. We’re missing a true HR data strategy. The mistake we were making was thinking about our technologies and our workflows as being our strategy for data management. This was the problem. If we as organizations could put in place a strategy and a framework that disconnected our data from our managing technology and planned for obsolescence, then we could achieve data portability. We need to understand the data at the level of its fundamental concepts. If we know enough to understand the current technology and we know enough about the future technology, then we can create a pathway between the two. We can facilitate and grease the migration of systems. 
In order to do this effectively and at scale, you have to develop an intermediate context of the data. This becomes the thoroughfare. The concept is powerful and, in essence, seems obvious, but it was too advanced for most organizations to wrap their minds around, and finding customers for it was going to be near impossible. We would have to find companies in the short window of evaluating a system change to convince them they needed to look at the problem differently. Analytics is a natural extension. With the intermediate thoroughfare and context of each of these systems, you have a perfect structure for delivering analytics from the data and powering downstream use cases. We could deliver data to vendors that needed it to supply a service to the organization. We could return data from these services and integrate it into the data strategy. We could write this data back to those core source systems. We could extend the data outside of these systems with sources that an organization typically could not access and make use of on its own. Wrap all this up in the burgeoning advanced analytics and machine learning capabilities and you have a truly powerful platform. We regain choice in the technology we use. In this vision, data is effectively separate from our technology, and we take the initiative back from our vendors in choosing who manages our data and how. An insurance policy for technology. With the freedom to move and to adopt new innovation, we effectively buy ourselves an insurance policy in how we purchase and make use of products. We can test; we can prove; we can make the most of the best-of-breed innovation that has been growing in our space. If we don’t like it, we can turn it off or migrate -- without losing any data history and while minimizing switching costs. This is a long-term view of how our relationship to data and our vendors will change. It is going to take time for this view to become mainstream, but it will. 
The efficiencies and pace that it provides to change the direction of our operations will deliver huge gains in how we work with our people and our supporting vendors. There are still challenges to making this happen. Vendors young and old need to provide open access to your data (after all, it’s your data). The situation is improving, but there are still some laggards. Our innovative customers bought One Model for our data and analytical capabilities today, but they recognize that we’re building them a platform for their future. We’ve been working with system integrators and HR transformation groups to deliver on the above promise. The pieces are here, and they’re being deployed; now we need to make the most of them.
Read Article
Featured
9 min read
Phil Schrader
We’re back with another installment of our One Model Difference series. On the heels of our One AI announcement, how could we not take this opportunity to highlight it as a One Model difference maker? In preparation for the One AI launch, I caught up with Taylor from our data science team and got an updated tour of how it all works. I’m going to try to do that justice here. The best analogy I can think of is that this thing is like a steam engine for data science. It takes many tedious, manual steps and lets the machine do the work instead. It's not wizardry. It's not a black box system where you have to point at the results, shrug, and say, “It’s magic.” This transparent approach is a difference in its own right, and I’ll cover that in a future installment. For now though, describing it as some form of data wizardry simply would not do it justice. I think it’s more exciting to see it as a giant, ambitious piece of industrial data machinery. Let me explain. You know the story of John Henry, right? John Henry is an African-American folk hero who, according to legend, challenged a steam-powered hammer in a race to drill holes to make a railroad tunnel. It’s a romantic, heart-breaking story. Literally. It ends with John Henry’s heart exploding from the effort of trying to keep pace. If you need a quick refresher, Bruce Springsteen can fill you in here. (Pause while you use this excuse to listen to an amazing Bruce Springsteen song at work.) Data science is quite a bit easier than swinging a 30-pound hammer all day, but I think the comparison is worthwhile. Quite simply, you will not be able to keep pace with One AI. Your heart won’t explode, but you’ll be buried under an exponentially growing number of possibilities to try out. This is particularly true with people data. 
The best answer is hiding somewhere in a giant space defined by the data you feed into the model multiplied by the number of techniques you might try out multiplied by (this is the sneaky one) the number of different ways you might prepare your data. Oh, and that’s just to predict one target. There’s lots of targets you might want to predict in HR! So you wind up with something like tedious work to the fourth power and you simply should not do it all by hand. All data science is tedious. The first factor, deciding what data to feed in, is something we’re all familiar with from stats class. Maybe you’ve been assigned a regression problem and you need to figure out which factors to include. You know that a smaller number of factors will probably lead to a more robust model, and you need to tinker with them to get the ones that give you the most bang for your buck. This is a pretty well known problem, and most statistical software will help you with this. This phase might be a little extra tricky to manage over time in your people analytics program, because you’ll likely bring in new data sets and have to retest the new combinations of factors. Still, this is doable. Hammer away. Of course, One AI will also cycle through all your dimensional data for you. Automatically. And if you add factors to the data set, it will consider those factors too. But what if you didn’t already know what technique to use? Maybe you are trying to predict which employees will leave the company. This is a classification problem. Data science is a rapidly evolving field. There are LOTS of ways to try to classify things. Maybe you decide to try a random forest. Maybe you decide to try neural nets using Tensorflow. Now you’re going to start to lose ground fast. For each technique you want to try out, you’ve got to cycle through all the different data you might select for that model and evaluate the performance. And you might start cycling through different time frames. 
Does this model predict attrition using one year of data but becomes less accurate with two years…? And so on. Meanwhile, One AI will automatically test different types of models and techniques, over different time periods, while trying out different combinations of variables and evaluating the outcomes. In comparison, you’ll start to fall behind pretty rapidly. But there’s more... Now things get kind of meta. HR data can be really problematic for data science. There is a bunch of manual work you need to do to prepare any data set to yield results. This is the standard stuff like weeding out bad columns, weeding out biased predictors, and trying to reduce the dimensionality of your variables. But this is HR DATA. The data sets are tiny and lopsided even after you clean them up. So you might have to start tinkering with them to get them into a form that will work well with techniques like random forests, neural nets, etc. If you’re savvy, you might try doing some adaptive synthetic sampling (making smaller companies appear larger) or principal component analysis. (I’m not savvy, I’m just typing what Taylor said.) So now you’re cycling through different ways of preparing the data, to feed into different types of models, to test out different combinations of predictors. You’ve got tedious work to the third power now. Meanwhile, One AI systematically hunts through these possibilities as well. Synthetic sampling was a dead end. No problem. On to the next technique and on through all the combinations to test that follow. This is not brute force per se-- that actually would introduce new problems around overfitting. The model generation and testing can actually be organized to explore problem spaces in an intelligent way. But from a human vs. machine perspective, yeah, this thing has more horsepower than you do. And it will keep working the models over, month after month. This is steam powered data science. Not magic. Just mechanical beauty. 
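To make the combinatorics concrete, here is a toy Python sketch of the three layers of tedium described above. Every name in it is invented for illustration, and the scoring is a stub -- this is emphatically not One AI's actual search algorithm, just a picture of how fast the space grows.

```python
from itertools import combinations

# Illustrative stand-ins for the three layers (all names hypothetical):
preparations = ["as-is", "synthetic_upsampling", "pca"]   # ways to stage the data
techniques = ["logistic", "random_forest", "neural_net"]  # model types to try
predictors = ["tenure", "salary_ratio", "manager_changes", "commute_distance"]

def score(prep, technique, subset):
    """Stub scorer: a real pipeline would prepare the data, fit the model,
    and return a validation metric such as AUC. Here it is a placeholder."""
    return len(subset)

# Enumerate every (preparation, technique, predictor-subset) combination.
candidates = [
    (prep, tech, subset)
    for prep in preparations
    for tech in techniques
    for size in range(1, len(predictors) + 1)
    for subset in combinations(predictors, size)
]
# 3 preparations x 3 techniques x 15 non-empty predictor subsets = 135 runs,
# before you even vary time windows or prediction targets.
best = max(candidates, key=lambda c: score(*c))
```

With just four predictors the toy space already has 135 candidate runs; add realistic numbers of columns, time windows, and targets, and a human cycling through by hand has no chance.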
And now that we have this machine for HR machine learning, we can point that three-phase cycle at different outcomes that we want to predict. Want to predict terminations? Of course you do. That’s what everyone wants to predict. But what if, in the future, you want to predict quality of hire based upon a set of pre-hire characteristics? One AI will hunt through different ways to stage that data, through different predictive techniques for each of those potential data sets, and through different combinations of predictors to feed into each of those models…and so on and so on. You can’t replicate this with human-powered data science alone. And you shouldn’t want to. There’s no reason to try to prove a John Henry point here. Rather than tediously cycling through models, your data science team can think about new data to feed into the machine, can help interpret the results and how they might be applied, or can devise their own wild, one-off models to try, because they won’t have to worry about exhaustively searching through every other option. This might turn out similar to human-computer partnership in chess. (https://www.bloomreach.com/en/blog/2014/12/centaur-chess-brings-best-humans-machines.html) One AI certainly supports this blended, cooperative approach. Each part of the prediction pipeline can be separated and used on its own. Depending on where you are in your own data science program, you might take advantage of different One AI components. If you just want your data cleaned, we can give you that. Or, if you already have the data set up the way you want it, we can save you time by running a set of state-of-the-art classifiers on it, etc. The goal is to have the cleaning/preprocessing/upsampling/training/etc. pieces all broken out so you can use them individually or in concert. In this way, One AI can deliver value whatever the size and complexity of your data science team, as opposed to an all-or-nothing scenario. In that regard, our human vs. 
machine comparison starts to break down. One AI is here to work with you. Imagine what John Henry could have done if they’d just given him the keys to the steam engine? Book some time on Phil's calendar below to get your HR data-related questions answered. About One Model: One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own. Our newest tool, One AI, integrates cutting-edge machine learning capabilities into its current platform, equipping HR professionals with readily-accessible, unparalleled insights from their people analytics data. Notable customers include Squarespace, PureStorage, HomeAway, and Sleep Number.
Read Article
Featured
5 min read
Stacia Damron
How did Spring cleaning become a thing, and why do we do it? It’s officially March. Daylight savings has us up an hour earlier, the weather’s teasing us by thinking about getting warmer, and most of us are envious of the students enjoying spring break on a beach somewhere. Supposedly, this odd combination of things gets us in the mood to clean house. And there’s research to back it up: according to the experts, the warm weather and extra light are responsible for giving us the additional boost of energy. What is it about cleaning that gets us so excited? Is it the fresh smell of mopped floors? Is it the sigh of relief when you can actually park your car in the garage instead of using it for storage? Or is it the look of shock on your significant other’s face when they realize their 10-year-old socks (the ones with the huge holes) are gone for good? It's kind of weird. Now, before we get too far in - I hope you didn’t get really excited about reading some “spot-free window cleaning tips” or “how to declutter your closet in 12 easy steps.” After all, 1) this is a software blog, and 2) I haven’t mastered either of those things. Spring cleaning is a way to refresh and reset. It feels GOOD to declutter. This is the premise here. Most people associate Spring cleaning with their home - but what if we went into Spring with that same excitement at work as well? What if we wanted to share that same, cathartic feeling with our teams and coworkers? You can! One Model can help you Spring clean your people analytics data and provide your team with access to more insights within your current workforce analytics data. We’re the experts at pulling data from as many as 40 or so sources. We can place it on a single platform (that will automatically refresh and update), allowing your team to see how it all interacts together - in one place. 
Say goodbye to the days of exporting data and poking around with Vlookups in Excel, only to have to manually create the same report over and over again. Using the One Model platform to manage your HR data is akin to having someone come in and untangle 200 feet of Christmas lights (but instead of lights, it’s untangling data from your workforce analytics systems). And when you use our platform, you won't have to untangle it again. How awesome is that? A work-related spring cleaning is even more satisfying than a spring cleaning at home. Honestly, it is. You’re not going to get a promotion for organizing your cookware cabinet. However, at work, you might be considered for one if you detangle your data and save your team hours of their valuable time and resources preparing data for analysis. So, if you suddenly get the itch to clean something - I urge you and your HR team to commit to participating in a workforce data spring cleaning. Call it a day, and contact One Model to sort out your data organization problem for you. Same satisfaction, less scrubbing - I promise. Then, go home and turn your Roomba on, knowing you just conquered spring cleaning on both frontiers. Book a demo. Or just book some time to get your HR data-related questions answered. About One Model: One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own. Our newest tool, One AI, integrates cutting-edge machine learning capabilities into its current platform, equipping HR professionals with readily-accessible, unparalleled insights from their people analytics data. Notable customers include Squarespace, PureStorage, HomeAway, and Sleep Number.
Read Article
Featured
4 min read
Stacia Damron
Find our team in a city near you, and stop by in person to learn more about our workforce analytics solutions. February 9, 2018 - Austin, TX - The One Model team recently returned from the People Analytics and Future of Work (PAFOW) conference in San Francisco, where we participated as a key sponsor and speaker. There, our CEO, Chris Butler, was invited to announce a preview of our latest feature: One AI. (Above) One Model CEO, Chris Butler, announces One Model's newest tool, One AI, at PAFOW in San Francisco. One AI is a huge leap into the future of workforce analytics. Finally - there's a tool that makes machine learning readily accessible to HR professionals. By applying One Model's full understanding of HR data, our machine learning algorithms can predict any target that our customers select. For example, this means a turnover risk predictive model can be created in minutes: data from across the organization is consumed, cleaned, structured, and tested through dozens of ML models and thousands of hyperparameters to select a unique, accurate model that can provide explanations and identify levers for reducing an individual employee's risk of turnover. Our Next Stop: London The One Model team will be showcasing One AI at the People Analytics World Conference in London this April. We invite HR professionals, people analytics experts, and partners to join. Come find the One Model team and learn more about our workforce analytics software for HR professionals and data scientists. If you'd like an opportunity to meet the team in person and learn more, we'll be attending the following events later this year: People Analytics Conference - London, England - April 11-12, 2018 HR Technology Conference and Expo - Las Vegas, NV - September 11-13, 2018 More events, TBD. “As One Model continues to expand our client base in the U.S. 
and abroad, we’re looking forward to participating in more international HR, data science, and AI events,” says One Model’s Senior Marketing Manager, Stacia Damron. “Both domestic and international trade shows have helped us showcase our workforce analytics solution to a broader, more diverse audience, and they offer us an opportunity to foster and maintain valuable relationships with clients and partners alike." About One Model: One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.
Read Article
Featured
13 min read
Chris Butler
I recently made a simple post on LinkedIn which received a crazy number of views and overwhelmed us with requests to take a look at what we had built. The simple release was that we had managed to take Workday's point-in-time (snapshot) based reporting and rebuild a data schema that is effective-dated and transactional in nature. The vast majority of organizations and people analytics vendors use snapshots for extracting data from Workday because this is really the only choice they've been given to access the data. We don't like snapshots for several reasons: They are inaccurate - you will typically miss the changes occurring between snapshots, which makes it impossible to track data/attribute changes in between, to pro-rate, or to create analysis any deeper than the snapshot's time context. They are inflexible - an object or time context has already been applied to the data, which you can't change without replacing the entire data set with a new context. They don't allow for changes - if data is corrected or changed in history you need to replace the entire data set, urggh. External data is difficult to connect - without effective dating, joining in any external data means you have to assume a connection point and apply that time context's values to the external data set. This compounds the inaccuracy problem if you end up having to snapshot the external data as well. A pain in the #$% - to pull snapshots from Workday now, you need to create the report for each snapshot period that you need to provide. Three years of data with a month-end snapshot: that's 36 reports to build and maintain. With our background in working with raw data directly from HR systems, this approach wasn't going to cut the mustard and couldn't deliver the accuracy that should be the basis of an HR data strategy. The solution is not to buy Workday's big data tools, because you're going to be living with many of the same challenges. 
You need to take the existing structure, enhance it, and fundamentally reconstruct a data architecture that solves these problems. We do just that: we extract all employee and object data, analyse the data as it flows, and generate additional requests to the Workday API that work through the history of each object. Data is materialized into a schema close to the original, but with additional effective-dated transactional records that you just wouldn't see in a snapshot-based schema. This becomes our raw data input into One Model, delivered to your own warehouse to be used any way you wish. The resulting dataset is perfect for delivering accurate, flexible reporting and analytics.

The final structure is actually closer to the traditional relational schemas used by the HRIS products sold by SAP, Oracle, PeopleSoft, etc. Say what you will about the interfaces of these systems, but for the most part the way they manage data is better suited to reporting and analytics. Now don't get me wrong: this is one area most people know Workday lags in, and in my opinion it should be a low-priority decision point when selecting an HRIS. Don't compromise the value of a good transactional fit for your business in an attempt to solve for reporting and analytics capability; ultimately you will be disappointed. Choose the HRIS that fits how your business operates, and solve for reporting and analytics needs in another solution as needed.

Time to get a little more technical. What I'm going to discuss below is the format data is natively available in, compared with the approach we take at One Model.

Object Oriented - the why of the snapshot

Okay, so we all know that Workday employs an object-oriented approach to storing data, which is impressively effective for its transactional use case. It's also quite good at storing the historical states of each object.
You can see what I mean by taking a look at the API references as below: The history itself is there, but the native format for access is a snapshot at a specific point in time. We need a way of accessing this history and making the data useful for more advanced reporting and analytics.

Time Context

In requesting a point in time, we are applying a time context to the data at the point of extraction. That context is then static and will never change unless you replace the data set with a different time context. Snapshot extractions are simply a collection of records with a time context applied. When extracting for analytics, companies will often take a snapshot at the end of each month for each person or object, producing a result set similar to the one below. This simple approach misses the changes that occur between snapshots; they're effectively hidden and ignored. When connecting external data sets that are properly effective-dated, you need to decide which snapshot to report against, and you simply don't have enough information to make that connection correctly. This is an inaccurate representation of what is really occurring in the data set; it's terrible for pro-rating calculations to departments or cost centers, and even something as basic as an average headcount is severely limited. Close enough is not good enough. If you are not starting from a basis of accuracy, then everything you do downstream has the potential to be compromised.

Remove the context of time

There's a better way to represent data for reporting and analytics:

1. Connect transactional events into a timeline.
2. Extract the details associated with the events.
3. Collapse the record set to provide an effective-dated set of records.
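The collapse step above can be sketched in a few lines. This is a minimal, illustrative version (the event data and attribute names are invented): redundant events are dropped, and each surviving record is end-dated the day before the next change takes effect.

```python
from datetime import date, timedelta

# Raw transactional events for one employee (illustrative data);
# consecutive events may carry identical attribute values.
events = [
    (date(2023, 1, 1), {"dept": "Sales", "level": "L3"}),
    (date(2023, 4, 1), {"dept": "Sales", "level": "L3"}),  # no real change
    (date(2023, 7, 1), {"dept": "Sales", "level": "L4"}),  # promotion
]

def collapse(events, open_end=date(9999, 12, 31)):
    """Collapse a date-sorted event stream into effective-dated records:
    skip no-op events, end-date each record the day before the next change."""
    rows = []
    for eff, attrs in events:
        if rows and rows[-1]["attrs"] == attrs:
            continue  # attributes unchanged: redundant event
        if rows:
            rows[-1]["end_date"] = eff - timedelta(days=1)
        rows.append({"effective_date": eff, "end_date": open_end, "attrs": attrs})
    return rows

for r in collapse(events):
    print(r["effective_date"], r["end_date"], r["attrs"])
```

Three events become two effective-dated rows, and the gap-free date ranges are what make joining other data at the correct point in time possible.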
The above distills the number of records down to only what is needed, and it matches transactional and other object changes, which means you can join to the data set at the correct point in time rather than approximating.

Time becomes a flexible concept

This change means you apply a time context at query time, providing infinite flexibility for aligning data with different time constructs, such as:

- Calendar
- Fiscal
- Pay periods
- Weeks
- Any time construct you can think of

It's a simple enough join to create the linkage:

left outer join timeperiods tp
  on tp.date between employee.effective_date and employee.end_date

We are joining at the day level here, which gives us the most flexibility and accuracy, but it will absolutely explode the number of records used in calculations into the millions and potentially billions of intersections. For us at One Model, accuracy is a worthwhile trade-off, and the volume of data can be dealt with using clever query construction and, of course, some heavy compute power. We recently moved to a Graphics Processing Unit (GPU) powered database, because really, why would you have dozens of compute cores when you can have thousands? (As a side note, it also allows us to run R and Python directly in the warehouse. #realtimedatascience) More on this in a future post, but for a quick comparison, take a look at the Mythbusters demonstration.

What about other objects?

We apply the same approach to the related objects within Workday, building a historical, effective-dated representation over time. Not all objects support this, so there are some alternative methods for building history.

Retroactive changes?

Data changes and corrections occur all the time. We regularly see change volumes concentrated in the most recent six months, but changes can land several years in the past. Snapshots ignore these changes unless you replace the complete data set on each load.
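The day-level join above can be expressed in plain Python to show both the accuracy gain and the record explosion. This is an illustrative sketch with two invented employee records; it mirrors the SQL predicate `tp.date between employee.effective_date and employee.end_date`:

```python
from datetime import date, timedelta

# Effective-dated employee records (illustrative): each row is valid
# from effective_date through end_date, inclusive.
employees = [
    {"id": 1, "effective_date": date(2024, 1, 1),  "end_date": date(2024, 1, 15)},
    {"id": 2, "effective_date": date(2024, 1, 10), "end_date": date(2024, 1, 31)},
]

def daterange(start, end):
    """Yield every date from start to end, inclusive (the time dimension)."""
    d = start
    while d <= end:
        yield d
        d += timedelta(days=1)

# The BETWEEN join: one intersection row per (day, valid employee) pair.
intersections = [
    (day, e["id"])
    for day in daterange(date(2024, 1, 1), date(2024, 1, 31))
    for e in employees
    if e["effective_date"] <= day <= e["end_date"]
]

# Daily headcounts fall out directly, so average headcount is exact
# rather than approximated from month-end snapshots.
avg_headcount = len(intersections) / 31
print(round(avg_headcount, 2))
```

Two employees over one month already produce 37 intersection rows; scale that to tens of thousands of employees over years of history and the need for serious compute is obvious.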
The smarter way is to identify changes and replace only the data that is affected (i.e., replace all historical data for a person who has had a retroactive change). This approach facilitates a changes-only feed and can get you close to a near-real-time data set. I say "close to near-real-time" because the Workday API is quite slow, so speed will vary with the number of changes occurring.

Okay, so how do you accomplish this magic?

We have built our own integration software specifically for Workday that accomplishes all of the above. It follows this sequence:

1. Extract all object data.
2. For each object, evaluate the data flow and identify where additional requests are needed to extract historical data at a different time context.
3. Merge these records, collapse them, and effective-date each record.

We now have an effective-dated historical extract of each object, sourced from the Workday API. This is the raw input source into One Model; it is highly normalized and enormous in scope, as most customers have 300+ tables extracted. The pattern in the image below represents each object coming through; you can individually select each object slice. The One Model modelling and calculation engines then take over to make sense of the highly normalized schema, connect any other available data sources, and deliver a cohesive data warehouse built specifically for HR data. Data is available in our toolsets, or you have the option to plug in your own software like Tableau, Power BI, Qlik, SAS, etc.

One Model is up and running in a few days. To accomplish all of the above, all we need is a set of authorized API credentials with access to the objects you'd like us to extract. With the data model constructed, the storyboards, dashboards, and querying capabilities are immediately available.
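The changes-only refresh described above amounts to a targeted delete-and-reinsert. Here is a minimal sketch of the idea, with the warehouse modeled as a plain list of dicts and all names and records invented for illustration:

```python
# Current warehouse state (illustrative): one history row per person per change.
warehouse = [
    {"person": "A", "effective_date": "2023-01-01", "dept": "Sales"},
    {"person": "B", "effective_date": "2023-01-01", "dept": "Ops"},
]

def apply_changes(warehouse, changed_person_ids, fresh_rows):
    """Drop all historical rows for the people whose history changed,
    then insert their freshly re-extracted history. Everyone else is untouched."""
    kept = [r for r in warehouse if r["person"] not in changed_person_ids]
    return kept + fresh_rows

# Person A had a retroactive correction, so A's entire history is re-extracted.
fresh = [
    {"person": "A", "effective_date": "2023-01-01", "dept": "Marketing"},
    {"person": "A", "effective_date": "2023-06-01", "dept": "Sales"},
]
warehouse = apply_changes(warehouse, {"A"}, fresh)
print(len(warehouse))  # 3 rows: B untouched, A fully replaced
```

Because only the affected people are reloaded, each refresh moves a small delta instead of the full history, which is what makes a frequent, near-real-time feed feasible despite a slow source API.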
Examples:

Flexibility - the biggest advantage you now have

We now have virtually all data extracted from Workday in a historically accurate, transaction-based format that is perfect for integrating additional data sources or generating output with any desired time context (you can even convert back to snapshots if required). Successful reporting and analytics with Workday starts with a data strategy for overcoming the inherent limitations of a native architecture that just wasn't built for this purpose. We're HR data and people analytics experts, and we do this all day long. If you would like to take a look, please feel free to contact us or book some time to talk directly below. Learn more about One Model's Workday Integration Book a Demo