7 min read
    Chris Butler

The employee survey is still perhaps the most ubiquitous tool HR has for giving employees a voice. It may be being disrupted (debatably) by regular or real-time continuous listening and other feedback mechanisms, but regardless, employee survey data collection will continue. I am, however, constantly amazed by how much of the power in these surveys is overlooked. We're gathering some incredibly powerful and telling data, yet we use barely a fraction of the informational wealth it holds. Why? Most organizations don't know how to leverage confidential employee survey results correctly while maintaining the privacy provisions they agreed with employees during data collection.

The Iceberg: The Employee Survey Analytics You're Missing

Specifically, you are missing out on connecting employee survey answers to post-survey behaviours. Did the people who said they were going to leave actually leave? Did the people who answered that they lack opportunity for training actually take a training course when offered? Did a person who saw a lack of advancement opportunities leave the company for a promotion? How do employee rewards affect subsequent engagement scores? There are hundreds of examples that could be thrown out there; it is an almost limitless source of questions, and you don't get this level of analysis ROI from any other data source.

Anonymous vs. Confidential Surveys

First, let me bring anyone who isn't familiar with the difference up to speed. An anonymous survey is one where all data is collected without any identifiers attached to it. It is impossible to link back to a person, and there's very little you can do with this data beyond what is collected at the time of questioning. A confidential survey, on the other hand, is collected with an employee identifier associated with the results. This doesn't mean the survey is open; usually the results are not directly available to anyone in the business, which provides effective anonymity. The survey vendor that collected the results, though, does have these identifiers, and in your contract with them they have agreed to the privacy provisions requested and communicated to your employees. A number of survey vendors will also be able to take additional data from you, load it into their systems, and show a greater level of analysis than you typically get from a straight survey. This is better than nothing but still far short of amazing.

Most companies, however, are not aware that survey vendors are generally happy (accepting, at least) to transfer this employee-identified data to a third party, as long as all confidentiality and privacy restrictions that they, the customer, and the employees agreed to when the survey was collected are maintained. A three-way data transfer agreement can be signed where, in the case of One Model, we agree to secure access to the data and maintain confidentiality from the customer organization. Usually, this confidentiality provision means we need to:

- Restrict the data source from direct access. In our case, it resides in a separate database schema that is inaccessible even to a customer with direct access to our data warehouse.
- Provide 'Restricted' metrics that give an aggregate-only view of the data, i.e. only show data where there are more than 5 responses or more than 5 employees in a data set. The definition of how this is restricted needs to be flexible to account for different types of surveys.
- Manage Restricted metrics as a vendor, preventing them from being created or edited by the company when a restricted data set is in use.
- Support employee survey dimensionality that adheres to this restriction, so you can't inadvertently expose data by slicing a non-restricted metric by a survey dimension and several other dimensions to create a cut of a population that may otherwise be identifiable.

Get Ready to Level Up Employee Survey Analysis!

Your employee survey analytics can begin once your survey data is connected to every other data point you hold about your employees. For many of our customers that means dozens of people data sources across the recruit-to-retire and business data spectrums. Want to know what the people who left the organization said in their last survey? Three clicks and a few seconds later you have the results. Want to know whether the people you are recruiting are fitting in culturally, and which source of hire they came from? Or whether low-tenure terminations show any particular trends in engagement or culture responses? Or whether people who were previously highly engaged and have had a subsequent drop in engagement report a lack of (choose your own adventure) advancement | compensation | training | skilled peers | respect for management? You could literally build these questions and analysis points for days. This is what I mean: a whole new world opens up with a simple connection of a data set that almost every company has.

What can I do?

Go and check your last employee survey results and any vendor/employee agreements for how the data was to be collected and used. If the vendor doesn't state how it's being collected, check with them; often they are collecting an employee identifier (ID, email, etc.). If you are lucky, you might have enough leeway to designate a person or two within your company to run analysis directly. Otherwise, enquire about a data transfer agreement with a third party who will maintain confidentiality. I've had this conversation many times (you may need to push a little). If you don't have data collected with an identifier, check with HR leadership on the purpose of the survey and the privacy you want to provide employees, and plan any changes for integration into the next survey.

This is a massively impactful data set for your people analytics, and for the most part it's being wasted. However, always remember to respect the privacy promise you made to employees: communicate how the data is being used and how their responses are protected from being identified. With the appropriate controls, as outlined above, you can confidentially link survey results to actual employee outcomes and take more informed action on the feedback you collected. If you would like to take a look at how we secure and make survey data available for analysis, feel free to book a demonstration directly below. Ready to see us Merge Employee Survey Data with HRIS Data? Request a Demo!
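For readers wondering what an aggregate-only 'Restricted' metric looks like in practice, below is a minimal sketch of small-group suppression using pandas. The column names and the threshold of 5 are illustrative assumptions, not One Model's actual implementation.

```python
import pandas as pd

MIN_GROUP_SIZE = 5  # suppress any cut with fewer than 5 respondents (illustrative threshold)

def restricted_engagement_score(responses: pd.DataFrame, dims: list) -> pd.DataFrame:
    """Average engagement by the requested dimensions, suppressing small groups.

    `responses` is assumed to hold one row per survey respondent, with an
    `engagement_score` column plus whatever dimensions you want to slice by.
    """
    grouped = responses.groupby(dims).agg(
        respondents=("engagement_score", "size"),
        avg_engagement=("engagement_score", "mean"),
    ).reset_index()

    # Aggregate-only view: blank out both the metric and the count for small cells
    large_enough = grouped["respondents"] >= MIN_GROUP_SIZE
    grouped["avg_engagement"] = grouped["avg_engagement"].where(large_enough)
    grouped["respondents"] = grouped["respondents"].where(large_enough)
    return grouped

# Example with made-up data: the Legal cut (3 respondents) comes back suppressed
df = pd.DataFrame({
    "department": ["Sales"] * 6 + ["Legal"] * 3,
    "engagement_score": [4, 5, 3, 4, 2, 5, 1, 2, 1],
})
print(restricted_engagement_score(df, ["department"]))
```

The same check has to be applied after any further slicing, which is why the dimensionality restrictions described above matter as much as the metric itself.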

    Read Article

    9 min read
    Chris Butler

HR professionals have heard the stories and read the news. Large organisations are having considerable success implementing a people analytics strategy. That may leave you wondering what people analytics success can do for your own organisation. Perhaps you fantasise about incredible dashboards, with charts and graphs that are elegant and easy to disseminate across your teams and decision-makers. Maybe you yearn for your organisation's people data to be governed and protected with the same diligence as other enterprise resource planning (ERP) data strategies. Or perhaps you simply want an end to the back-and-forth that's associated with custom analysis and forecasting. Wouldn't it be nice to have an HR data analytics technology that orchestrates everything needed for decision-makers to make brilliant decisions quickly?

Envision Winning on HR Analytics

It's important to think about how your organisation will win with an HR analytics approach that encompasses people analytics tools. For example, you will need to choose between buying an HR analytics platform or building one. If you choose to build a people analytics platform in-house (or you engage an outside party to build a custom people analytics platform for you), then you are accepting a loss in scalability, a slower time to value, and almost certainly a less complete analytics-ready data set. We explain more about this choice in a recent whitepaper. Learn more

On the other hand, if you choose to buy an off-the-shelf people analytics platform, you will surely find out that not all solutions are the same. As one of the industry's most respected people analytics platforms, One Model brings an obsession with customer success that is unique when compared to other solutions on the market. We asked a number of our team here at One Model to share why they're so passionate about customer success with people analytics. If you're asking yourself, "What can people analytics do for me?", keep reading.

Building a Product that Creates Success
Will Myers, One Model Product Lead

Will spends his days making sure that One Model's People Data Cloud™ people analytics platform delivers the results that our customers expect. He notes that, before you can create brilliant people stories or deliver impactful insights across your organisation, you must first access the data that is needed, wherever it may live. That's tricky because traditional data sources and repositories often lack the interfaces needed to do this. So you have to trust the team behind the technology to get data orchestration where it needs to be.

Delivering Results by Changing How HR Teams Work
Kelley Kirkpatrick, One Model Customer Success Lead in Australia

Throughout her career, Kelley has seen HR teams collaborate over people data in countless ways. She has a unique perspective when it comes to investing in human resources data analysis technology. In her video, she lets us know that both data and people are key things to "get right" when expanding people analytics capabilities. Transparency drives trust, so Kelley works to ensure that People Data Cloud is the most transparent people analytics tool for her customers. It's her favourite way to directly access metrics and models built from your data.

Quick Turn-arounds Lead to More Wins
Nicole Li, One Model Senior UX Designer

Nicole shares a great example that many technology buyers overlook when selecting a software vendor or technology partner.
Most customers expect continuous improvement and rapid innovation, but they rarely get that from large companies. She's extremely proud of One Model's approach: it's exciting to turn around upgrades and new features in a two-week sprint. As our Senior UX Designer, Nicole thrives on solving problems quickly for her customers. She has some exciting user experience innovations to roll out in the coming months, so stay tuned.

Everyone in Your Organisation Wins
Jen Lincoln, One Model Customer Success Specialist

Jen points out that unlocking your data for all the people leaders in the organisation generates excitement within her customers' internal teams. One Model is democratising analytics and machine learning, so more people can make better decisions, faster. Have you been able to guess another big One Model strength from these videos?

One Model People Make the Difference

You can clearly see in these videos that the people we have at One Model make all the difference in your company's people analytics success. We have talented product designers and developers who create unique, innovative tech, and customer success champions who roll up their sleeves and do the heavy lifting for all of our customers. Our difference boils down to three strengths: people, platform, and product. I'm honoured to work with every member of the One Model team. We love talking about winning on HR analytics! Want to have a conversation with a great member of the One Model team? Request Time to Chat with Us Today.

    Read Article

    17 min read
    Chris Butler

Workday vs SuccessFactors vs Oracle: Ratings Based on Experience Integrating HR Tech for People Analytics

This vendor-by-vendor comparison will be a living post, and we will continue to update it as we have time to collect thoughts on each vendor and as we complete integrations with new vendors. Not every source we work with will be listed here, but we'll cover the major ones we often work with. At One Model we get to see the data and structure from a load of HR systems and beyond; basically anything that holds employee or person data is fair game as a core system to integrate for workforce analytics. After more than a decade of HR analytics integration architecture experience, where the solution is directly integrating data from these systems into analytics and reporting solutions, we have a lot of experience to share. Below I'll share our experience with highlights from each system and how they align with creating a people analytics warehouse. Some are better than others from a data perspective, and there are certainly some vendors that are yet to understand that access to data is already a core requirement of buyers looking at any new technology. Bookmark this blog, add your email to the subscription list to the right, or follow me (Chris Butler) and One Model on LinkedIn to stay up to date.

A Quick Note on HRIS Platform Ratings

Ratings are provided as an anecdotal and unscientific evaluation of our experience in gaining access to, maintaining, and working with the data held in the associated systems. They are my opinions. If you would like to make use of any of our integrations in a stand-alone capacity, we now offer a data warehouse only product where you utilize just our data pipeline and modelling engine to extract and transform data into a data warehouse hosted by One Model or your own data warehouse. We'll be releasing some more public details soon, but if you are a company that likes to roll your own analytics and visualizations and just needs some help with the data side of the house, we can certainly help. Contact Us

Cloud HRIS Comparison

Workday

One Model rating - 2.5/5
Method - API for standard objects, built-in reporting for custom objects (via reporting-as-a-service, or "RaaS")
The Good - Great documentation; easy to enable API access and control of accessible fields; good data structures once you have access. The RaaS option does a good job but is limited.
The Bad - Slow; Slow; Slow; no custom fields available in the API; geared towards providing a snapshot; number of parallel connections limited; constant tweaking required as new behaviors are identified; expert integration skills required; true incremental feeds require you to read and interpret a transaction log.

Workday Requires a Custom-Built People Analytics Integration Architecture

Workday's embedded analytics is underwhelming, and we're yet to see Prism Analytics make a dent in filling the needs that people analytics teams or HR analysts have beyond convenience analytics. So in the meantime, if you are serious about improving reporting and people analytics for Workday, you're going to need to get the data out of there and into somewhere else. On the surface, Workday looks to have a great API, and the documentation available is excellent. However, the single biggest downfall is that the API is focused on providing a snapshot, which is fine for simple list reports but does not allow a people analytics team to deliver any worthwhile historical analysis.
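To illustrate the kind of workaround a snapshot-oriented API forces on an integration, here is a rough sketch of replaying snapshots across a date range to approximate a change history. The fetch_worker_snapshot helper is a hypothetical stand-in for whatever snapshot extraction you have available; this is not One Model's integration, which is considerably more involved.

```python
from datetime import date, timedelta

def fetch_worker_snapshot(as_of: date) -> dict:
    """Hypothetical placeholder: return {worker_id: attributes} as of a given date."""
    raise NotImplementedError("wire this up to your own snapshot extraction")

def build_change_history(start: date, end: date, step_days: int = 7) -> list:
    """Walk snapshots through time, keeping only rows that changed between steps."""
    history, previous = [], {}
    current = start
    while current <= end:
        snapshot = fetch_worker_snapshot(current)
        for worker_id, attrs in snapshot.items():
            if previous.get(worker_id) != attrs:  # new worker or changed attributes
                history.append({"worker_id": worker_id, "effective_date": current, **attrs})
        previous = snapshot
        current += timedelta(days=step_days)
    return history
```

Even this simplification shows the trade-off: step too coarsely and you miss intermediate changes; step daily and the slow calls pile up, which is why the much more involved process described next becomes necessary.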
You don't get the bulk history output of other systems or the ability to cobble it together from complete effective-dated transactions across objects. To capture the complete history we had to build an intense process of programmatically retrieving data, evaluating it, and running other API calls to build the full history that we need. If you want more detail, take a look at my blog post on the subject, The End of the Snapshot: Workday Edition. The complexity of the integration is therefore multiplied, and the time taken suffers immensely due to the object-oriented architecture, which requires you to load each object into memory in order to retrieve it. A full destructive data extraction means you're looking at 8+ hours for a small-to-medium enterprise, expanding to a week if you're a giant. The problem is exacerbated by the limited number of parallel connections, which in practice run at a fraction of the stated limit. A full historical API integration here is not for the faint of heart or skill; we have spent 12+ months enhancing and tweaking our integration with each (weekly) release to improve performance and solve data challenges. To give a sense of scale, our integration generates some 500+ tables that we bring together in our modelling engine in preparation for analytics.

Beware of Oversimplifying the API Integration

Out-of-the-box integration plugins are going to be focused on the snapshot version of data as well, so if you don't have the integration resources available I wouldn't attempt an API integration. My advice is to stick with the built-in reporting tools to get off the ground. The RaaS tools do a good job of combining objects and running in a performant manner (better than the API). However, they will also be snapshot focused, and as painful as it will be to build and run each timepoint, you will at least be able to obtain a basic feed to build upon. You won't have the full change history for deeper analysis until you can create a larger integration, or can drop in One Model. Robert Goodman wrote a good blog a little while back looking at both the API and his decision to use RaaS at the time; take a read here.

Workday API vs RaaS

Regardless of the problems we see with the architecture, the API is decent and one of our favorite integrations to work with. It is little wonder, given the data challenges we have seen and experienced, that half of our customers are now Workday customers.

One Model Integration Capabilities with Workday

One Model consumes the public web service APIs for all standard objects and fields. One Model configures and manages the services for API extractions; customers need only create and supply a permissioned account for the extraction. Custom objects and fields need to use a RaaS (Report as a Service) definition created by the customer in the Enterprise Interface Builder (EIB). The report can then be transferred by SFTP or can be interacted with as an API itself.

Figure 1: One Model's data extraction from Workday

SuccessFactors

One Model rating - 4/5
Method - API
The Good - A dynamic API that includes all custom MDF data!! Runs relatively quickly; comprehensive module coverage.
The Bad - Several API endpoints need to be combined to complete the data view; can drop data without indication; at times confusing data structures.

4 out of 5 is a pretty phenomenal rating in my book.
I almost gave SuccessFactors a perfect 5, but there are still some missing pieces from the API libraries and we've experienced some dropped data at times that has required adaptations in our integration. Overall, the collection of SF APIs is a thing of beauty for one specific reason: it is dynamic and takes any of the Metadata Framework (MDF) custom changes in its stride. This makes life incredibly easy when working across multiple different customers and means we can run a single integration against any customer and accurately retrieve all customizations without even thinking about them. Compared to Workday, where the API is static in definition and only covers the standard objects, this facet alone is just awesome. This dynamic nature isn't without its complexities, though. It does mean you need to build an integration that can interrogate the API and iterate through each of its customizations. However, once it is complete it functions well and can adapt to changing configurations as a result.

Prepare to Merge API Integrations for People Analytics

Multiple API endpoints also require different integrations to be merged. This is a result of upgrades in the APIs available (the older SuccessFactors API and the newer OData API), as well as of separate APIs for acquired parts of the platform (i.e. Learning, from the Plateau acquisition). We're actually just happy there is now an API to retrieve learning data, as this used to be a huge bugbear when I worked at SuccessFactors on the Workforce Analytics product. The only SF product I know of right now that doesn't have the ability to extract via an API is Recruiting Marketing (RMK), from the Jobs2Web acquisition; hopefully this changes in the future. Full disclosure: I used to hate working with SuccessFactors data when we had to deal with flat files and RDFs, but with the API integration in place we can be up and running with a new SuccessFactors customer in a few hours and be confident all customizations are present.

Another Option - Integration Center

I haven't spoken here about the Integration Center release from earlier last year, as we haven't used it ourselves and only have anecdotal evidence from what we've read. It looks like you could get what you need using the Integration Center and deliver the output to your warehouse. You will obviously need to build each of the outputs for the integration, which may take a lot of time, but the data structure, from what I can tell, looks solid for staging into an analytics framework. There are likely a lot of tables to extract and maintain, though; we currently run around 400+ tables for a SuccessFactors customer and model these into an analytics-ready model. If anyone has used the Integration Center in an analytics deployment, please feel free to comment below or reach out and I would be happy to host your perspective here.

One Model Integration Capabilities with SAP SuccessFactors

One Model consumes the SF REST APIs for all standard fields as well as all customized fields, including any use of the MDF framework. One Model configures and manages the service for API extractions; customers need only create and supply a permissioned account for the extraction. SF has built a great API that is able to provide all customizations as part of the native API feed. We do use more than one API, though, as the new OData API doesn't provide enough information and we have to use multiple endpoints in order to extract a complete data set. This is expertly handled by One Model software.
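As a small illustration of why a metadata-driven API is so convenient, here is a sketch of discovering entity sets from an OData $metadata document and paging through one of them. The base URL and credentials are placeholders, and the sketch leans only on generic OData v2 conventions ($metadata, $top, $skip, $format=json) rather than anything vendor-specific; a production integration handles authentication, retries, and dropped-data checks far more carefully.

```python
import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://api.example.com/odata/v2"  # placeholder OData v2 service root
AUTH = ("api_user@COMPANY_ID", "password")     # placeholder credentials

def list_entity_sets() -> list:
    """Read $metadata and return every entity set name, customizations included."""
    resp = requests.get(f"{BASE_URL}/$metadata", auth=AUTH, timeout=60)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    # EntitySet elements sit under the EntityContainer; match on the tag suffix
    # so we don't hard-code a specific EDM namespace version.
    return [el.attrib["Name"] for el in root.iter() if el.tag.endswith("}EntitySet")]

def fetch_all(entity_set: str, page_size: int = 500) -> list:
    """Page through an entity set using $top/$skip until a short page is returned."""
    rows, skip = [], 0
    while True:
        resp = requests.get(
            f"{BASE_URL}/{entity_set}",
            params={"$format": "json", "$top": page_size, "$skip": skip},
            auth=AUTH,
            timeout=300,
        )
        resp.raise_for_status()
        page = resp.json()["d"]["results"]  # OData v2 JSON envelope
        rows.extend(page)
        if len(page) < page_size:
            return rows
        skip += page_size
```

Because the entity list comes from the service itself, customer-specific MDF objects show up in list_entity_sets() without any per-customer configuration, which is exactly the property praised above.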
Figure 2: One Model's data extraction from SuccessFactors

Oracle HCM Cloud (Fusion)

One Model rating - 2/5
Method - HCM Extracts functionality; all other methods discounted from use.
The Good - HCM Extracts is reasonable once you have it set up; history and all fields available; public documentation.
The Bad - The user interface is incredibly slow and frustrating. Documentation has huge gaps from one stage to the next where experience is assumed. The API is not functional from a people analytics perspective: missing fields, missing history, suitable only for point-to-point integrations. Reporting/BI Publisher, if you can get it working, is a maintenance burden for enhancements. HCM Extracts works well, but the output is best delivered as an XML file.

I think I lost a lot of hair and put on ten pounds (or was it ten kilos?!) working through a suitable extraction method for the HCM Cloud suite that would give us the right level of data granularity for properly historically accurate people analytics data. We tried every method of data extraction, from the API to BI Publisher reports and templates. I can see why people who are experienced in the Oracle domain stick with it for decades; the experience here is hard-won and akin to a level of magic. The barriers to entry for new players are just so high that even I, as a software engineer and data expert with a career spent in HR data many times over, could not figure out how to get a piece of functionality working that in other systems would take a handful of clicks.

Many Paths to HRIS System Integration

In looking to build an extraction for people analytics you have a number of methods at your disposal. There's now an API, and the built-in reporting could be a reasonable option if you have some experience with BI Publisher. There are also the HCM Extracts built for bulk extraction purposes. We quickly discounted the API as not yet being up to scratch for people analytics purposes, since it lacks access to subject areas and fields and cannot provide the level of history and granularity that we need. I hope the API improves in the future, as it is generally our favorite method for extraction. We then spent days, and probably weeks, trying to get the built-in reporting and BI Publisher templates to work correctly and deliver the data we're used to from our time with Oracle's on-premise solutions (quite a good data structure). Alas, this was one of the most frustrating experiences of my life. It really says something when I had to go find a copy of MS Word 2006 in order to use a plugin that for some reason just wouldn't load in MS Word 2016, all to edit and build a template file to be uploaded, creating multiple manual touchpoints whenever a change is required. Why is life so difficult?? Even with a bunch of time lost to this endeavour, our experience was that we could probably get all the data we needed using the reporting/BI Publisher route, but that it was going to be a maintenance nightmare: any change to an extract would require an Oracle developer to make sure everything ran correctly. If you have experienced resources, this may still work for you. We eventually settled on the HCM Extracts solution: while the interface for building an extract is mind-numbingly frustrating to use, it will at least reliably provide access to the full data set and deliver it in an output that, with some tooling, can be ingested quite well.
There are a number of options for how you can export the data. We would usually prefer a CSV-style extraction, but the hierarchical nature of the extraction process here means that XML becomes the preferred method, unless you want to burn the best years of your life tediously creating individual outputs for each object by hand in a semi-responsive interface. We therefore figured it would be easier, and would enhance maintainability, if we built our own XML parser for our data pipeline to ingest the data set (a minimal sketch of this kind of flattening appears below). There are XML-to-CSV parsers available (some for free) if you need one, but my experience is that they struggle with some files to deliver a clean output for ingestion. With an extract defined, there is a good number of options for delivering and scheduling the output, and reliability is good; we've only had a few issues since the upfront hard work was completed. Changing an extract is also relatively straightforward: if you want to add a field or object, you can do so through the front-end interface in a single touchpoint. Don't get me wrong, we do love Oracle data - the construction and integrity are good and we have a repeatable solution for our customer base that we can deliver at will - but it was a harrowing trip of discovery that, to me, explains why we see so few organizations from the Oracle ecosystem out there talking about their achievements. Don't make me go back, mommy!

Want to Better Understand How One Model can Help You? Request a Demo Today.

Other HRIS Comparisons Coming Soon

ADP Workforce Now
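Picking up the XML parsing point from the Oracle section above, here is a minimal sketch of flattening a hierarchical XML extract into CSV rows using Python's standard library. The record element name is hypothetical and a real HCM Extracts payload is more deeply nested, so treat this as a starting point rather than a finished parser.

```python
import csv
import xml.etree.ElementTree as ET

def flatten_records(xml_path: str, record_tag: str, csv_path: str) -> None:
    """Flatten each <record_tag> element into one CSV row keyed by its leaf element names.

    Leaf names are used as column headers, so this sketch assumes leaf names are
    unique within a record; real extracts usually need smarter path handling.
    """
    rows, columns = [], []
    for _, elem in ET.iterparse(xml_path, events=("end",)):  # stream large files
        if elem.tag == record_tag:
            row = {
                leaf.tag: (leaf.text or "").strip()
                for leaf in elem.iter()
                if len(leaf) == 0  # a leaf has no child elements
            }
            for col in row:
                if col not in columns:
                    columns.append(col)
            rows.append(row)
            elem.clear()  # free memory as we go

    with open(csv_path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=columns, restval="")
        writer.writeheader()
        writer.writerows(rows)

# Hypothetical usage: flatten_records("workers.xml", "WorkerRecord", "workers.csv")
```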

    Read Article

    9 min read
    Chris Butler

A major shift has occurred in Human Resources over the past five years. The world went from a handful of companies experimenting with people analytics - early adopters - to thousands of companies investing in dedicated roles and teams to take on a new way of thinking about Human Resources. What can you learn from the early adopters? Every CHRO who has moved forward with people analytics has some secrets to success. These secrets will help you provide a better, faster way to more effectively deploy HR resources and build a successful organization.

Better human resources strategy means better business

You need to accept and measure change

There will be change. That is the only constant. Change could come from any number of places: new technology, new customers, new leadership, new products, new competition, new markets and strategies. You will get maximum benefit from your HR strategy if you accept the reality that change is the only constant - the only certainty is a world of uncertainty. If you want to survive in a world of uncertainty, you need a process to constantly take in new information to understand changing reality, and to use this new information to adapt. You need a way to measure whether your organization is changing in the way that you and your leadership team expect. Change is what people analytics is for.

Hire specific talent for a meaningful business advantage

The problem is that most HR strategies are far too general to develop any sustainable advantages: "We will hire great people." Great idea - you and everyone else! You cannot do everything well all the time, and the cost to attract and retain top talent just keeps getting more expensive, so you have to choose. You have to choose specifically where you want your business advantage to be, and then you have to figure out how to create this advantage through people. In this way we and others may realize that the people perspective of the business is not necessarily based in "soft stuff", "political correctness" and administrative minutiae, but in profound business insights that arise from observation and reason. Basically, there are two types of insight: that which is not based on any particular observable reason, and that which is backed by observable reason. In the case of the second type, an individual is motivated to examine an insight and investigate its relevance to his or her situation, needs and requirements. Actions are applied after seeing why the insight is advantageous. Change with people - new action - is motivated by new insight, and that insight is powered by people analytics.

Don't get led astray by "traditional" HR metrics

It is not always clear how to relate HR actions to business impact, and so we settle for monitoring activities as a traditional measure of progress. Measuring progress as activities that have an unknown relationship to current business objectives leads HR into waste. Because HR is broken into multiple functional centers of excellence (Staffing, Benefits, Compensation, Labor Relations, Talent Management, Organization Design), each with different goals and activities, we end up with hundreds of metrics that do not align with each other and do not drive towards a unified goal. This results in efforts that either have no impact or work against each other, not to mention waste in the process of analytics itself.
Because we have not previously devised a single HR metric with a direct business impact that can be applied universally across organizations and sub-organizations, we substitute simplistic measures that, while well-intentioned, may not be universally good ideas, may conflict with other objectives, and may not correlate in any way with measurable business impact. This results in the wrong efforts and objectives. Investing heavily in quantitative metrics doesn't automatically give us solutions. Metrics can usually tell us what's going wrong, but usually not why. The more you invest in quantitative metrics without a process for qualitative input, the more you end up drowning in a sea of non-actionable data.

Create a culture of success

Those leaders who want to create a healthy organization or "culture of success" are motivated (or should be motivated) to attain a genuine camaraderie with all people in an organization. When a group of people have a common vision, maintain a sense that they are all in it together, and have compassion for each other, then there is nothing that cannot be accomplished. At this juncture, in addition to many great spiritual teachings of varied doctrines, we also have a foundation of great insights in science and engineering, and access to those examples and methods. At the heart of it: we analyze people within an organization for the benefit of that organization - and that benefit is also the people's. It must be both. Useful analysis helps us all understand current reality and take the right actions now to achieve optimal outcomes: outcomes of joint benefit to managers, employees, shareholders and possibly society. A continual reduction in tenacious organizational problems and continual reinforcement of a culture of success is the ultimate result of useful analysis. Disciplined action (as opposed to frantic thrashing) is the benefit of useful analysis. Our concept of a healthy organization is not something physical. Therefore the spread of a healthy culture depends on increasing the depth of understanding of the benefit of new actions, to provide strong motivation to pursue those new actions. When we are able to reduce the defects in how we think about people in an organization, a healthy culture will naturally increase. Thus, effecting positive transformations in organizations through observation and feedback - situation by situation, subdivision by subdivision, manager by manager, and employee by employee - is the method we will employ to effect the change we desire. Unlike manufactured goods, culture is obviously not a tangible entity; it cannot be sold or bought in the marketplace or physically constructed.

Watch out for HR constraints (budget, credibility, time)

Most of the programs HR watches over have very large budgets. Labor costs are frequently 70% or more of revenue. Benefits may represent 30% or more of labor costs. On an absolute basis these costs increase over time as the employee base grows. Things go sideways when business plan projections get off track and the cost of labor grows faster than revenue, or when revenue retracts. It is critical for CHROs to be able to identify - quickly, early, and accurately - whether a project or activity is worth pursuing, rejecting, continuing or dropping, so they can protect their commitments and preserve resources for the programs that drive the most value. Besides the obvious constraint of budget, the other constraint is credibility.
In order to influence, HR professionals need to hold on to and build on what little credibility they start with. As CHRO you will have to justify HR's right to a seat at the business table by demonstrating the business impact of your programs to the CEO, management team or business line head you support. At some stage, you will all be called on to demonstrate progress. Finally, we are all constrained by time. Every minute spent on an activity that is doomed to fail is wasted. HR has historically relied on two categorical measures of progress: how much stuff they are doing and how much people like what they are doing. Unfortunately, both of these metrics are unreliable proxies for business impact, and both lead us down the wrong path - building something that ultimately does not matter, has no impact on the business or, worse, has the wrong impact.

People analytics can be hard

First, there is a misconception around how successful, earth-shattering people analytics gets built. The media loves stories of "wunderkind" nerds invading HR who are so smart they helped the moribund HR function (usually at some cool tech company) figure this problem out. The reality, however, rarely plays out quite so simply. Even the unveiling of the hiring algorithms at Google was, in Laszlo Bock's words, years in the making, built on the contributions of many and on several incremental innovations (and failures).

Second, the classic technology-centric Reporting or "Business Intelligence" approach front-loads some downstream business partner involvement during a "requirements-gathering phase" but leaves most of the HR business partner and business customer validation until after the reporting solution is released. There is a large "middle" when the analytics function disengages from the ultimate intended users of these reports for months, maybe even a year, while they build and test their solution. Sometimes the solution is rolled out in HR first, just to be sure it is safe for humans before inflicting it on the rest of the organization. Imagine a few wild-eyed HR people hiding in the bushes outside the office, preparing to jump out at an unsuspecting executive on his way into work one morning. During this time, it's quite possible for the analytics function to either build too much or be led astray from building anything remotely useful to the organization.

Third, people are complex and messy. People are not structural engineering challenges within the abilities of an engineer to control precisely. People and organizations are not like machines or computers. There is always a certain degree of uncertainty about the effect of our actions on people and organizations. We try things based on an entirely plausible premise and they fail. Usually we had not factored in or considered the thing or things that made them fail. There are too many variables, too many possibilities and too much change occurring within and all around us. Is this not, in some sense, the beauty of life? Would you rather take this away? In human systems, failure is not the problem; the problem is failing to learn from the failure. If we want to improve HR we should shift our attention to how we can learn more quickly.

These Secrets are the CHRO's Real Guide to People Analytics

People analytics gives CHROs a better, faster way to more effectively deploy HR resources and build successful organizations. People analytics enables better listening, learning, strategic focus, measurable business impact, and rigorous process.

    Read Article

    7 min read
    Chris Butler

When I first started work with InfoHRM in the people analytics domain back in 2006, we were the only vendor in the space and had been for over a decade. The product was called Workforce Analytics and Planning, and after its acquisition by SuccessFactors (2010) and SAP (2012), it's still called that today. So what's the difference? Why do we have Workforce Analytics, HR Analytics, and People Analytics, and can they be used interchangeably? I have to give credit to Hip Rodriguez for the subject of this blog. He posted about People Analytics vs HR Analytics a couple of weeks ago and I've followed the conversation around it. Hip's LinkedIn post is here.

So what does the data say? Workforce vs HR vs People?

Being an analytical person at heart, I turned to the data and analyzed job titles containing "HR", "Workforce", "People", or "Human" together with "Analytics" or "Analyst". As you can see in the table below (truncated for space), the data doesn't support people analytics being the most popular term. In fact, you have to go down to row 25 before you see a people analytics title. HR Analytics and Workforce Analytics related titles are the clear leaders here by volume. Keep in mind, though, that titles, particularly for less senior roles, can take time to adapt, especially in the more rigid position structures of larger organizations. Likely, many of these junior roles have a more basic reporting focus than an analytics focus. So why then does it feel like People Analytics has become the dominant term for what we do?

The Evolution of HR Analytics (and my opinion)

I believe it's not so much a difference between HR Analytics and People Analytics as an evolution in the term.

Let's Start with the Evolution of Workforce Analytics

Early on, when we were delivering Workforce Analytics, it was to only a handful of forward-thinking organizations that also had the budget to take workforce reporting seriously. I specifically say reporting because mostly that's what it was: getting data into the hands of executives and directors wasn't happening at scale, so even basic data would blow people's minds. It's crazy how often the same basic data still blows people's minds 20 years later. There were not many teams running project-focused analysis like there are today - for example, looking at drivers of turnover to trial different retention initiatives, or at how onboarding programs affect net promoter scores of recent hires. Workforce Analytics was for the most part aggregate reporting. The analysis was primarily driven by hardcore segmentation of the data, looking for nuggets of gold, by a handful of curious people. It was done at scale with large numbers and rarely focused on small populations.

A Look at the Difference Between HR Analytics

HR Analytics is by far and away the most common term and has lived alongside Workforce Analytics for a very long time now. It is a natural extension of the naming of the human resources department: you're in HR, looking at HR data from our Human Resources Information System (HRIS), and you are therefore an HR Analyst. If we were to align more literally with the term, we would be analyzing the effectiveness and efficiency of the HR function, e.g. HR staffing ratios and everything else that goes along with them. An HR Analyst in this sense would be more aligned with the Talent Acquisition Analyst roles that we see growing in the domain today. In my view, HR Analytics is really no different from Workforce Analytics, and we will see these titles transition towards People Analytics over time.
Why Evolve to People Analytics Then?

I do not believe there is a significant difference between people analytics vs HR analytics vs workforce analytics in terms of the work that we do. The evolution of the terms, in my opinion, has been more about how we view people as individuals in our organizations, as opposed to a large-scale aggregate workforce or, even worse to me, as "human resources". We've recognized as a discipline that people need to be treated and respected as individuals, that we need to provide career development and support for their lives, and that it is important that people actually take vacation time. It is treating people as people and not as numbers cranking out widgets. It is no coincidence that knowledge-worker organizations have been the biggest adopters of people analytics; they have the most to gain, especially in a tight labor market where choice and compensation are abundant. Care for workers must exist now, whereas many years ago it was a different story. I love the fact that we have people analytics teams going deep on how they promote a diverse workforce and how they create career development opportunities. We even have one customer that integrates cafeteria data into their solution to help identify what people are enjoying eating.

So is it just a branding change?

Yes and no. Our space has definitely matured, and our capabilities have grown. We've moved from basic reporting and access to data, which is now table stakes, to project-based analysis with intent and hypotheses to prove or dispel. People Analytics is a more mature discipline than it ever was, but effectively the same activities could roll up under either term. Impacting people's work lives through our analysis of data is ultimately our goal, and having that outcome in mind is why we'll see further adoption of People Analytics as a term. We'll see job titles change to reflect this move over time. And I'm certainly not always right; there are larger nuances between these terms applicable to some organizations. Heather Whiteman gives a good overview of a more nuanced definition here.

Interested in Learning More?

So whether you call it HR Analytics or People Analytics, if you're new to this and want to understand what it can do for an organization, check out the eBook by Heather Whiteman and Nicholas Garbis, Explore the Power of People Analytics, for a deeper dive into this area. Download eBook Today

    Read Article

    2 min read
    Chris Butler

    One Model took home the Small Business Category of the Queensland Premier's Export Awards held last night at Brisbane City Hall. The award was presented by Queensland Premier and Minister for Trade, Hon Annastacia Palaszczuk MP and Minister for Employment, Small Business, Training and Skills Development, Hon Dianne Farmer MP. “We are delighted to receive this award given the quality of entrepreneurs and small business owners in Queensland,” One Model CEO, Chris Butler said. “It is a tribute to the exceptional team we have in Brisbane and the world leading people analytics product One Model has built.” “From our first client, One Model has been an export focussed business. With the profile boost this award gives us, we look forward to continuing to grow our export markets of the United States, Europe and Asia,” Mr Butler said. Following this win, One Model is now a finalist in the 59th Australian Export Awards to be held in Canberra on Thursday 25 November 2021. One Model was founded in Texas in 2015, by South-east Queensland locals Chris Butler, Matthew Wilton and David Wilson. One Model generates over 90% of its revenue from export markets, primarily the United States. One Model was also nominated in the Advanced Technologies Award Category. One Model would like to congratulate Shorthand for winning this award as well as our fellow finalists across both categories - Healthcare Logic, Tactiv (Advanced Technologies Category), iCoolSport, Oper8 Global, Ryan Aerospace and Solar Bollard Lighting (Small Business Category). The One Model team would like to thank Trade and Investment Queensland for their ongoing support. To learn more about One Model's innovative people analytics platform or our company's exports, please feel free to reach out to Bruce Chadburn at bruce.chadburn@onemodel.co. PICTURE - One Model Co-Founders Chris Butler, Matthew Wilton and David Wilson with Queensland Premier, Hon Annastacia Palaszczuk MP and the other award winners.

    Read Article

    15 min read
    Chris Butler

The public sector is rapidly evolving. Is your people analytics strategy fit for purpose, and can it meet the increasing demands of a modern public sector? In this blog, we will highlight the unique challenges that public sector stakeholders face when implementing a people analytics strategy. In light of those challenges, we will then outline how best to design and implement a modern people analytics strategy in the public service.

When it comes to people analytics, the public sector faces a number of unique challenges:

- The public sector is the largest and most complex workforce of any employer in Australia - a workforce that bridges everything from white-collar professionals to front-line staff, and every police officer, teacher and social worker in between.
- Public sector workforces are geographically dispersed, with operations across multiple capital cities in the case of the Commonwealth Government, or a mix of city and regional staff in the case of both state and federal governments.
- The public service operates a multitude of HR systems acquired over a long time, leading to challenges of data access and interoperability. Important public service HR data may also be held in manual, non-automated spreadsheets prone to error and security risk.
- A complex industrial relations and entitlements framework, the details of which are generally held in different datasets.
- Constant machinery of government (MoG) changes demand both organisational and technological agility from public servants to keep delivering key services (as well as ongoing and accurate HR reporting).
- The public sector faces increased competition for talent, both within the public service and externally with the private sector.
- Citizen and political pressure for new services and methods of government service provision is at an all-time high - so your critical stakeholders are not only your customers, they are your voters as well.
- Cyber security and accessibility issues that are unique to the public sector.

This all comes under the pressure of constant cost constraints that require bureaucracies to do more with limited budgets. As a result, understanding and best utilising limited human capital resources is crucial for the public sector at both a state and federal level. Now that we have isolated the unique people analytics challenges of the public sector, how do HR professionals within the public service begin the process of implementing a people analytics strategy?

1. Data Orchestration

"Bringing all of your HR data together."

The first stage of any successful people analytics programme is data orchestration; without access to all of your relevant people data feeds in one place, it is almost impossible to develop a universal perspective of your workforce. Having a unified analytical environment is critical as it allows HR to:

- Develop a single source of truth for the data you hold on employees.
- Cross-reference employee data within and between departments to adequately benchmark and compare workforces, driving team-level, department-level and public-service-wide insights.
- Establish targeted interventions rather than one-size-fits-all solutions. For example, a contact centre is going to have very different metric results than corporate groups like Finance or Legal.
- Blend data between systems to uncover previously hidden insights.
- Uncover issues such as underpayments that develop when different systems don't communicate.
  (Using people analytics to mitigate instances of underpayment is covered extensively in this blog.)
- Provide a clean and organised HR data foundation from which to generate predictive insights.
- Have the capacity to export modelled data to an enterprise data warehouse or another analytical environment (Power BI, Tableau, etc.).
- Allow HR, via people analytics, to support the enterprise data mesh - covered in more detail in this blog post.

People data orchestration in the public sector is complicated by the reliance on legacy systems, as well as by the constant changes in structure driven by machinery of government reforms. Successful data orchestration can only be achieved through an intimate knowledge of the source HR systems and a demonstrated capacity to extract information from those systems and then model that information in a unified environment. This takes significant technology knowledge, such as bespoke API integrations for cloud-based systems and proven experience working with on-premise systems. It also requires subject matter expertise in the nuances of HR data. It cannot be easily implemented without the right partners. Ideally, the end solution should be a fully flexible, open analytics infrastructure that future-proofs the public sector, allowing the ingestion of data from new people data systems as they arise (such as new LMS or pulse survey products) while also facilitating the migration of data from legacy systems to more modern cloud-based platforms.

2. Data Governance

"Establishing the framework to manage your data."

Now that all of your data is in one place, it is important to develop a robust framework for how to manage that data. In our view this has two parts: data definition and data access.

Data Definition

Having consolidated multiple sources of data in one environment, the next step is metric definition, which is critical to converting the disparate data sets you have assembled into coherent, understandable language. It is all well and good to have your data in one place, but if you have five different definitions of what an FTE means from the five different systems you are aggregating, then the benefits you receive from your data orchestration phase will be marginal. Comprehensive metric definitions with clear explanations are needed to ensure your data is properly orchestrated and that organisation-wide stakeholders have confidence the data is standardised and can be trusted.

Data Access

HR data is some of the most complex and sensitive data a government holds, so existing HR data management practices based on spreadsheets - which can easily be distributed to non-approved stakeholders both inside and outside your organisation - are no longer fit for purpose. Since your people analytics data comes from multiple systems, you need an overarching security framework that controls who gets access to what information and why. This framework must be based on logical rules, aligned to broader departmental privacy policies, and flexible enough to accommodate organisational change and to scale to your entire department or agency regardless of its size. Critically, there needs to be a high level of automation and scalability to use role-based security as a mechanism for safely sharing data with decision-makers. Today's spreadsheet-based world relies on limiting data sharing, which also limits effective data-driven decision making.
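To make the idea of rule-based, role-based access concrete, here is a minimal sketch of filtering a people data set down to what a given role is permitted to see. The roles, org units and column rules are hypothetical examples rather than a prescribed framework; in practice the rules would be generated from the HR hierarchy so new users and machinery of government changes don't require manual rework.

```python
from __future__ import annotations

from dataclasses import dataclass
import pandas as pd

@dataclass
class AccessRule:
    """What a role may see: which org units (None means all) and which columns."""
    org_units: set | None
    allowed_columns: list

# Hypothetical role definitions
RULES = {
    "department_secretary": AccessRule(
        org_units=None,
        allowed_columns=["org_unit", "headcount", "turnover_rate"],
    ),
    "hrbp_frontline": AccessRule(
        org_units={"Service Delivery", "Contact Centre"},
        allowed_columns=["org_unit", "employee_id", "tenure_years"],
    ),
}

def apply_access(data: pd.DataFrame, role: str) -> pd.DataFrame:
    """Return only the rows and columns the given role is entitled to."""
    rule = RULES[role]
    scoped = data if rule.org_units is None else data[data["org_unit"].isin(rule.org_units)]
    return scoped[rule.allowed_columns]
```

The same rules can also feed aggregate-only suppression, so narrowly scoped stakeholders cannot reverse-engineer small populations from the cuts they are allowed to see.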
Finally, these role-based security access frameworks need to be scalable, so each new user or change in structure doesn't require days of manual work from your team to ensure both access and compliance.

3. Secure People Analytics Distribution

"Delivering people analytics content to your internal stakeholders."

The next step, once you have consolidated your data and established an appropriate data governance framework, is to present and distribute this data to your internal stakeholders. This is what we refer to as the distribution phase of your people analytics implementation. We established in the last section that, for privacy and security reasons, different stakeholders require access to varying levels of information. The distribution phase goes one step further and places access within the prism of what individual stakeholders need in order to do their jobs successfully. For example, the information and insights necessary for a Departmental Secretary and an HR business partner to do their jobs are wildly different and should therefore be tailored to their particular needs: organisation-wide metrics and reports in the case of the Departmental Secretary, and team- or individual-level metrics for the HR BP or line manager. This is further complicated by disclosure requirements and reporting unique to the public service, including:

- Media requests regarding public servant pay and conditions
- Statutory reporting requirements for annual state of the public service reports
- Submissions to and appearances before parliamentary committees
- Disclosure to independent oversight inquiries or agencies

As a result, public sector HR leaders are required to walk a tightrope of both breadth and specificity. So how do we recommend you do this?

- Offer a baseline of standardised metrics for the whole organisation.
- Tailor that baseline based on role-based access requirements, so stakeholders only see information that is relevant to driving data-driven decision making.
- Deliver those insights at scale - the wider the stakeholder group consuming your outputs, the better.
- Ensure those outputs are timely and relevant - daily or weekly updates are recommended.
- Be able to justify your insights and offer access to raw data, calculations and metric definitions.
- Continually educate your stakeholders about best-practice people analytics.
- Increase reporting sophistication based on the people analytics maturity of your stakeholders - simple reporting for entry-level stakeholders, more complicated predictive insights for the more advanced.

To get the most out of your people analytics strategy you need to deliver two things:

- Role-based access for the widest possible stakeholder group across your department - the wider the group of employees with access to detailed datasets, the easier it will be to deliver data-driven decision making.
- A change management programme that supports your team and grows their analytical capability over time.

4. Extracting Value from your Data

"Using AI + Data Science to generate predictive insights."

Now we get to the fun part: using data science to supercharge your analysis and generate predictive insights. However, to quote the great theologian and people analytics pioneer, Spiderman: "With great power comes great responsibility." Most data science work today is performed by a very small number of people using arcane knowledge and coding in technologies like R or Python. It is not scalable and rarely shared.
The use of machine learning capabilities with people data requires a thoughtful approach that considers the following:

- Does your AI explain its decisions? Could the decisions your machine learning environment recommends withstand the scrutiny of a parliamentary committee?
- Do you adhere to ethical AI frameworks and decision making? What effort has been made to detect and remove bias?
- Does harnessing predictive insights require a data scientist, or can it be done by everyday stakeholders within your department?
- Will your use of AI adhere to current or future standards, such as those recently proposed by the European Commission? To learn more about the European Commission proposal regarding new rules for AI, click here.

In integrating machine learning into your people analytics programme, you must ensure that models are transparent and can be explained to both your internal and external stakeholders.

5. Using People Analytics to Support Public Sector Reform

"Public sector HR driving data-driven decision making."

A people analytics strategy does not exist in isolation; it is a crucial aspect of any departmental strategy. However, in speaking to our public sector HR colleagues, we hear that they often feel their priorities are sidelined or that they don't have the resources to argue for their importance. A lot of this has to do with the absence of integrated datasets and outputs to justify HR prioritisation and investment. We see people analytics, and the successful aggregation of disparate data sets, as the way HR can drive its people priorities forward. If HR can present an integrated and trusted dataset, it allows comparison and cross-validation with data from other verticals including finance, community engagement, procurement and IT. This gives HR the capability to be central to overall decision making and to support broader departmental corporate strategies from the ground up. We have written extensively about the importance of data-driven decision making in HR and about using people analytics to support enterprise strategy; this content can be found on our blog here - www.onemodel.co/blog

Why you should invest in people analytics and what One Model can do to help

The framework of a successful public sector people analytics project outlined above is the capability that the One Model platform delivers. From data orchestration to predictive insights, One Model delivers a complete HR analytics capability. The better you understand your workforce, the more ambitious the reform agendas you can fulfil. One Model is set up not only to orchestrate your data to help the public service understand the challenges of today, but also, through our proprietary OneAI platform, to help you build the public service of the future. One Model's public sector clients are some of our most innovative and pragmatic, and we love working with them. At One Model, we are constantly engaging with the public sector about best-practice people analytics. Last year, our Chief Product Officer, Tony Ashton (https://www.linkedin.com/in/tony-ashton/), himself a former Commonwealth HR public servant, appeared on the NSW Public Service Commission's The Spark podcast to discuss how the public sector can use people data to make better workforce decisions. That podcast can be found here.

Let's start a conversation

If you work in a public service department or agency and are interested in learning more about how the One Model solution can help you get the most out of your workforce, my email is patrick.mcgrath@onemodel.co

    Read Article

    1 min read
    Chris Butler

    One Model has announced its appointment to the Australian Government's Digital Transformation Agency Cloud Marketplace, a digital sourcing arrangement of cloud computing offerings for the Australian government. One Model's globally recognised and award-winning People Analytics platform is now available via the Cloud Marketplace to all Australian federal, state, and territory government agencies seeking to reimagine and accelerate their People Analytics journey. One Model delivers a comprehensive people analytics platform to business and HR leaders that integrates, models and unifies data from the myriad HR technology solutions through an out-of-the-box metric library, storyboard visuals, and advanced analytics using a proprietary AI and machine learning model builder. People data presents unique and complex challenges which the One Model platform simplifies to enable faster, better, evidence-based workforce decisions. Many public sector departments and organisations around the world have realised the power of One Model and selected it as their partner for success, including the Australian Department of Health, the Australian Civil Aviation Safety Authority (CASA), and Tabcorp, to name just a few. The Cloud Marketplace can be accessed via the DTA's BuyICT platform.

    Read Article

    14 min read
    Chris Butler

    If people analytics teams are going to control their own destiny, they're going to need to support the enterprise data strategy. You see, the enterprise data landscape is changing and IT has heard its internal customers. You want to use your own tools, your own people, and apply your hard-won domain knowledge in the way that you know is effective. Where IT used to fight against resources moving out of their direct control, they have come to understand it's a battle not worth fighting, and by facilitating subject matter experts to do their thing they allow business units to be effective and productive.

Enter the Enterprise Data Architecture
The movement of recent years is for IT to facilitate an enterprise data mesh into their architecture where domain expert teams can build, consume, and drive analysis of data in their own function...so long as you can adhere to some standards, and you can share your data across the enterprise. For a primer on this trend and the subject, take a read of this article: Data Mesh - Rethinking Enterprise Data Architecture. The diagram heading this blog shows a simplified view of a data mesh; we'll focus on the people analytics team's role in this framework.

What is a Data Mesh?
A data mesh is a shared interconnection of data sets that is accessible by different domain expert teams. Each domain team manages its data, applying its specific knowledge to its construction so it is ready for analytics, insight, and sharing across the business. When data is built to a set of shared principles and standards across the business, it becomes possible for any team to reach across to another domain and incorporate that data set into their own analysis and content. Take for example a people analytics team looking to analyze relationships between customer feedback and front-line employees' attributes and experience. Alternatively, a sales analytics team may be looking at the connection between learning and development courses and account executive performance, reaching across into the people analytics domain data set. Data sharing becomes key in the data mesh architecture, and it's why you've seen companies like Snowflake do so well and incumbents like AWS bring new features to market to create cross-data cluster sharing. There are two ways to share data across the enterprise:
Cross Cluster / Data Warehouse sharing - each domain operates its own schemas or larger infrastructure that other business units are allowed to access. AWS has an example here https://aws.amazon.com/redshift/features/data-sharing/
Feeding domain Analytics-Ready data into a centralized enterprise data architecture - this is more typical today and is particularly useful if the organization has a data lake strategy. Data lakes are generally unstructured and more of a data swamp; in order to be useful the data needs to be structured, so providing Analytics-Ready data into either a data lake or data warehouse that adheres to common principles and concepts is a much more useable method of sharing value across data consumers. (A minimal sketch of this pattern follows below.)
One Model was strategically built to support your HR data architecture. If you'd love to learn more, check out our people analytics enterprise products and our data mesh product.

How can people analytics teams leverage and support the HR data architecture?
The trend to the mesh is growing and you're going to be receiving support to build your people analytics practice in your own way.
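Here is a minimal sketch of that second sharing pattern: the people analytics domain publishing an analytics-ready table into a shared enterprise store. The connection string, schema, and table names are hypothetical; any warehouse reachable via SQLAlchemy would work similarly.

```python
import pandas as pd
from sqlalchemy import create_engine

# Analytics-ready output of the people analytics domain (already cleaned and modelled).
employee_events = pd.DataFrame({
    "employee_id": ["E001", "E002"],
    "event_type":  ["Hire", "Termination"],
    "event_date":  pd.to_datetime(["2023-02-01", "2023-06-30"]),
})

# Hypothetical shared enterprise warehouse; the HR domain writes into its own
# agreed schema, which other domains can read under the enterprise standards.
enterprise = create_engine("postgresql+psycopg2://user:password@enterprise-dw:5439/analytics")
employee_events.to_sql(
    "employee_event",
    enterprise,
    schema="people_analytics",   # domain-owned schema, shared for consumption
    if_exists="replace",
    index=False,
)
```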
If you're still building the case for your own managed infrastructure, then use these points to help others see the light and to show how you are going to support their needs.

Identify the enterprise data strategy
I'm sure you've butted heads against this already, but identify whether the organization is supportive of a mesh architecture, or you'll have to gear up to show your internal teams how you will give them what they need while taking away some of their problems. If they're running centralized or in a well-defined mesh, you will have different conversations to obtain or improve your autonomy.

Supporting the enterprise data mesh strategy
People analytics teams are going to be asked to contribute to the enterprise data strategy, if you are not already. There are a number of key elements you'll need to be able to do this:
Extract and orchestrate the feeds from your domain source systems. Individual systems will have their nuances that your team will understand but that others in the enterprise won't. A good example is supervisor relationships that change over time and how they are stored and used in your HRIS. (A generic sketch of such an extraction loop appears further below.)
Produce and maintain clean feeds of Analytics-Ready data to the enterprise. This may be to a centralized data store or via the sharing of your domain infrastructure across the business.
Adhere to any centralized standards for data architecture; this may differ based on the tooling used to consume data. Data architected for consumption by Tableau is typically different (de-normalized) from a model architected for higher extensibility and maintenance (normalized), which would allow for additional data to be integrated and new analyses to be created without re-architecting your core data tables. You can still build your own nuanced data set and combinations for your domain purpose, but certain parts of the feed may need to follow a common standard to enable easy interpretation and use across the enterprise.
Define data, metrics, and attributes and their governance, ideally down to the source and calculation level, and document them for your reference and for other business units to better understand and leverage your data. The larger your system landscape is, the harder this will be to do manually.
Connect with other domain teams to understand their data catalogues and how you may use them in your own processes.

Why should people analytics care?
This trend to the data mesh is ongoing; we've seen it for a number of years and heard how IT thinks about solving the HR data problem. The people analytics function is the domain expertise team for HR. Our job is to deliver insight to the organization, but we are also the stewards of people data for our legacy, current, and future systems. To do our jobs properly we need to take a bigger-picture view of how we manage this data for the greater good of the organization. In most cases, IT is happy to hand the problem off to someone else, whether that's an internal team specialized in the domain or an external vendor who can facilitate it.

How does One Model support the Data Mesh Architecture for HR?
It won't surprise you to hear, but we know a lot about this subject because this is what we do. Our core purpose has been understanding and orchestrating people data across the HR Tech landscape and beyond. We built for a maturing customer that needed greater access to their data, the capability to use their own tools, and the ability to feed their clean data to other destinations like the enterprise data infrastructure and external vendors.
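Circling back to the first element in the list above (extract and orchestrate the feeds from your source systems), here is a generic, hypothetical sketch of the kind of defensive extraction loop involved: paging, retries, and back-off so transient errors don't silently truncate the feed. The endpoint and parameters are invented for illustration; this is not any particular vendor's API or One Model's connector.

```python
import time
import requests

BASE_URL = "https://hris.example.com/api/workers"   # hypothetical endpoint

def extract_all(page_size=500, max_retries=5):
    """Page through a source-system API, retrying transient failures."""
    records, offset = [], 0
    while True:
        for attempt in range(max_retries):
            try:
                resp = requests.get(
                    BASE_URL,
                    params={"limit": page_size, "offset": offset},
                    timeout=60,
                )
                resp.raise_for_status()
                break
            except requests.RequestException:
                # Exponential back-off; long enough to ride out a short outage
                # or maintenance window before giving up on the run.
                time.sleep(2 ** attempt)
        else:
            raise RuntimeError(f"Extraction failed at offset {offset} after {max_retries} attempts")
        page = resp.json().get("workers", [])
        if not page:
            return records
        records.extend(page)
        offset += page_size
```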
Below I cover a few of the ways One Model supports the data mesh architecture for HR, or you can watch the video at the end of the article.

Fault Tolerant Data Extraction
Off-the-shelf integration products and the front-end tools in most HRIS systems don't cater for the data nuances, scale of extraction, or maintenance activities of the source system. Workday, for example, provides snapshot-style data at a point in time, and its extraction capabilities quickly bog down for medium and large enterprises. The result is that it is very difficult to extract a full transactional history to support a people analytics program without arcane workarounds that give you inaccurate data feeds. We ultimately had to build a process to interrogate the Workday API about dozens of different behaviors, view the results, and have the software run different extractions based on those results. Additionally, most integration tools don't cater for Workday's weekly maintenance windows, where integrations will go down. We've built integrations to overcome these native and nuanced challenges for SuccessFactors, Oracle, and many other systems our customers work with. An example of a Workday extraction task is below.

Data Orchestration and Data Modelling
Our superpower. We've built for the massive complexity that is understanding and orchestrating HR data, to enable infinite extension while preserving maintainability. What's more, it's transparent: customers can see how their data is processed and its lineage, and can interact with the logic and data models. This is perfect for IT to understand what is being done with your data and to have confidence, ultimately, in the resulting Analytics-Ready Data Models.

Data Destinations to the Enterprise or External Systems
Your clean, connected data is in demand by other stakeholders. You need to be able to get it out and feed your stakeholders, in the process demonstrating your mastery of the people data domain. One Model facilitates this through our Data Destinations capability, which allows the creation and automated scheduling of data feeds to your people data consumers. Feeds can be created using the One Model UI in the same way you might build a list report, or from an existing table, and then simply added as a data destination.

Host the Data Warehouse or Connect Directly to Ours
We've always provided customers with the option to connect directly to our data warehouse to use their own tools like Tableau, Power BI, R, SAC, Informatica, etc. Our philosophy is one of openness and we want to meet customers where they are, so you can use the tools you need to get the job done. In addition, a number of customers host their own AWS Redshift data warehouse that we connect to. Data destinations can also feed other warehouses, or external capability can be used to sync data to warehouses like Azure SQL, Google, Snowflake, etc. A few examples:
Snowflake - https://community.snowflake.com/s/article/How-To-Migrate-Data-from-Amazon-Redshift-into-Snowflake
Azure - https://docs.microsoft.com/en-us/azure/data-factory/connector-amazon-redshift

Data Definitions and Governance
With One Model, all metric definitions are available for reference along with interactive explanations and drill-through to the transactional detail. Data governance can be centralized, with permission controls on who can edit or create their own strategic metrics, which may differ from the organizational standard.
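To make the "defined down to the source and calculation level" idea concrete, here is a small, hypothetical catalogue structure in plain Python. It illustrates the documentation habit, not One Model's actual metric catalogue format; the metric names, sources, and roles are assumptions.

```python
# Hypothetical metric catalogue: each entry records the calculation, the source
# tables it draws on, who owns it, and who is allowed to change the standard.
METRIC_CATALOGUE = {
    "termination_rate": {
        "calculation": "terminations in period / average headcount in period",
        "sources": ["hris.employee_event (event_type = 'Termination')", "hris.employee"],
        "owner": "People Analytics",
        "editable_by": ["people_analytics_admin"],
    },
    "time_to_fill": {
        "calculation": "mean(days between requisition open and offer accepted)",
        "sources": ["ats.requisition", "ats.application_event"],
        "owner": "Talent Acquisition Analytics",
        "editable_by": ["people_analytics_admin", "ta_analytics_admin"],
    },
}

# Publishing the catalogue (even as a simple print or shared page) lets other
# business units interpret and reuse the metrics consistently.
for name, spec in METRIC_CATALOGUE.items():
    print(f"{name}: {spec['calculation']} (sources: {', '.join(spec['sources'])})")
```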
HR Specific Content and Distribution
We provide standard content tailored to the customer's own data, providing out-of-the-box leverage for your data as you stand up your people analytics programs. Customers typically take these and create their own storyboards tailored to their strategic needs. It's straightforward to create and distribute your own executive, HRBP, recruiting, or analysis project storyboards to a wide scale of users. All of this is controlled by the most advanced role-based security framework, which ensures users can see only the data they are permissioned for, while virtually eliminating user maintenance through automated provisioning, role assignment, and contextual security logic where each user is linked to their own data point.
Watch the two-minute video of what One Model does.

    Read Article

    11 min read
    Chris Butler

    This week One Model was delighted to participate with an elite group of our industry peers in the HR Tech Alliances Virtual Collaboration Zone - Best New People Analytics Solution competition. I'm excited to share some detail on what the judges saw to justify the outcome. This wasn't an empty competition either, and it had some significant companies in the field. The overall scores were as below:
1st - One Model - 4.28
2nd - activ8 intelligence - 4.06
3rd - Visier - 3.93
Given how proud I am of our team for winning this award, I thought I would share our presentation. Before I do that, I would like to acknowledge how far the pure-play people analytics space has come in recent times. As an industry, this is something that we should celebrate as we continue on a path of innovation to deliver better products and better outcomes for our clients. People analytics is an exciting place to be as 2020 comes to its (merciful) conclusion! We'll take a quick tour through the highlights of our presentation and demonstration.

Who are we?
One Model provides its customers with an end-to-end people analytics platform that we describe as an infrastructure. We call it an infrastructure because, from the ground up, One Model is built to make everything we do open and accessible to our most important stakeholder: you, the customer. Everything from our data models to our content catalogues, right down to the underlying data warehouse, is transparent and accessible. One Model is not a black box. Over the last five years, we have been guided by the principle that, because of One Model's transparency and flexibility, our customers should feel as if this is a product that they built themselves.

Our History
For those of you who are unfamiliar with the history of One Model, the core of our team is derived from workforce analytics pioneer InfoHRM. InfoHRM was acquired by SuccessFactors in 2010 and subsequently by SAP in 2012. During our extraordinary ride from humble Australian business to integral part of one of the world's largest software companies, our team learned that while our solution gave low-maturity users what they needed in terms of the what, why and how of measuring their workforce, it remained an inflexible tool that customers outgrew as their own capabilities increased. With increased sophistication, customers were asking new and more complicated questions, and the solution simply couldn't evolve with them. Five years later and, sadly, this is what we continue to see from other vendors in our space. Meeting our customers where they are on their people analytics journey and supporting them through their evolution is fundamental to the One Model platform. Be open; be flexible; don't put a ceiling on your customers' capabilities. One Model takes care of the hard work of building a people analytics infrastructure. We built One Model to take care of low-maturity users, who need simple and supported content to understand the power of people analytics, while at the same time delivering an experience that customers grow into, where higher-maturity users can leverage the world-leading One AI data science and statistical engine. Furthermore, if they want to use their own tools or external data science teams, their people analytics platform should enable this, not stand in the way.

One Model's Three Pillar People Analytics Philosophy
Pillar 1: Data Orchestration
People data is useless if you can't get access to it. Data orchestration is critical to a successful people analytics program.
At One Model, Data Orchestration is our SUPERPOWER! Many thousands of hours have been invested by our team in bespoke integrations that overcome the native challenges of HR Tech vendors and provide full, historic and transactional extracts ready for analytics. This isn't easy. Actually, it's terrifyingly hard. Let's use Workday as an example: to put it mildly, the data from their reporting engine and the basic API used to download these reports is terrible. It's merely a snapshot that doesn't provide the transactional detail required for analytics. It's also impossible to sync history as it changes over time - an important feature given the nature of HR data. You have to go to the full API to manage a complete load for analytics. We are 25,000 hours in and we're still working on changes! To power our data orchestration, we built our own Integrated Development Environment (IDE) for managing the enormous complexity of people data and to house our data modelling tools. Data quality and validation dashboards ensure we identify and continue to monitor data over time for correction. Data destinations allow us to feed data out to other places; many of our customers use this to feed data to other vendors or push data to other business units (like finance) to keep them up to date. Unlike garden-variety superpowers (like flying), our data orchestration capability did not develop by serendipity or luck. It developed, and continues to develop, through the hard work and superior skills of our team.

Pillar 2: Data Presentation
Most other vendors in our space exist here. They don't provide open and flexible toolsets for Data Orchestration or Value Extraction / Data Science. When we started One Model, we hadn't planned on a visualization engine at all. We thought we could leverage a Tableau, Looker, or Birst OEM embedded in our solution. After much evaluation, we concluded that an embedded OEM just couldn't deliver the experience and capability that analyzing and reporting on HR data requires. Generic BI tools aren't able to deliver the right calculations, with the right views across time, in a fashion that allows wide distribution according to the intense security and privacy needs of HR. We had to build our own. Ultimately our vertical integration allows unique user security modelling and integration of One AI into the frontend UI, all while not being limited by the vagaries of someone else's product. Our implicit understanding of how HR reports, analyzes, and distributes data required us to build an HR-specific data visualization tool for One Model.

Pillar 3: Data Science / Value Extraction - One AI
I like to describe the third pillar of our people analytics philosophy as our 'Value Extraction' layer. This layer is vertically integrated on top of our data models; it allows us to apply automated machine learning and advanced statistical modelling, and to augment and extend our data with external capabilities like commute-time calculations. Predictive capabilities were our first target, and we needed to build unique models at scale for any customer, regardless of their data shape, size, or quality. A one-size-fits-all algorithm, which is what most other vendors in the HR space provide, wasn't going to cut it. Enter automated machine learning. Our One AI capability will look across the entire data scope for a customer, introduce external context data, select its own features, train potentially hundreds of models and permutations of those models, and select the best fit.
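Stripped of all the sophistication, the general idea looks something like the toy scikit-learn sketch below: try several candidate models, score each with cross-validation, and keep the best fit. This is a generic illustration on synthetic data, not One AI's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for workforce features (e.g. tenure, pay ratio, engagement)
# and an attrition flag; real pipelines would start from the modelled data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score every candidate with cross-validation and keep the best performer.
scores = {name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
print(scores, "-> selected:", best_name)
```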
One AI provides a detailed explanation of the decisions it made, enough to keep any data scientist happy. The best of all these models can be scheduled and repeated, so every month it could be set to re-learn, re-train, and provide an entirely different model fitted to your changing workforce. This unbelievable capability doesn't lock out an experienced team, but invites them in should they wish to pull their own levers and make their own decisions. The One AI engine is now being brought to bear in real time in our UI, tackling forecasting, Bayesian what-if analyses, bias detection, anomaly detection, and insight detection. We have barely scratched the surface of the capability, and our vertical integration with a clean, consistent data model allows these advanced tools to deliver the best outcomes to customers.

Labor Market Intelligence
One Model has the world's best understanding of your internal HR data set; we do wonderful things with the data you already have - but we were missing the context of the external labor market and how that impacted our customers' workforces. As a result, we have developed a proprietary Labor Market Intelligence (LMI) tool. LMI is being released in January 2021 as a standalone product providing labor market analytics to our customers. LMI retains the functionality that you love about our people analytics platform - the ability to flexibly navigate data, build your own storyboard content, and drill through to granular detail. Importantly, LMI will allow One Model enterprise customers to link external market data to internal people data, delivering outcomes like identifying people paid below the market rate in their region, identifying employees in roles at risk of poaching due to high market demand and turnover, and helping you understand whether your talent is leaving for promotions or lateral moves.

Collaboration with the HR Tech Ecosystem
Finally, One Model understands the power of collaboration in the HR Tech ecosystem. We are already working with leading consultancies like Deloitte and are embedded with HCM vendors, helping them consume and make sense of their own data to deliver people analytics and extract value for their customers. At the end of the day, our vision is to understand the entire HR Tech ecosystem at the data layer, to help customers realize their investment in these systems, and to provide a data insurance policy as they transition between systems. Analytics is a by-product of this vision and thankfully it also pays the bills ;)

    Read Article

    9 min read
    Chris Butler

    Following our blog last month about how systems issues can open the door to staff underpayment, a number of our stakeholders have asked if we might be able to go deeper into how a people analytics solution, and specifically One Model, can solve this problem. We are nothing if not obliging here at One Model, so here we go! We thought we would answer this question by articulating the most common system-derived problems associated with people data and how One Model and an integrated people analytics plan can help resolve these issues.

PROBLEM NUMBER ONE - PEOPLE DATA IS STORED IN MULTIPLE NON-INTEGRATED SYSTEMS
As discussed previously, our experience is that most large organisations have at least 7 systems in which they store people data. In some larger organisations, that number can be more than 20! Data silos present a major risk to HR governance. Silos create the risk that information may differ between systems, or be updated in one system and then not updated in others. If information in one non-integrated system is wrong or out of date, it becomes very hard, firstly, to isolate the issue and remediate it and, secondly, if the error was made months or years in the past, to understand which system holds the correct information. At One Model, we are consistently helping our customers create a single source of truth for their people data. Blending data together across siloed systems provides a great opportunity for HR to cross-validate the data in those systems before it becomes an issue. Blended data quickly isolates instances of data discrepancy, allowing HR not only to resolve individual data issues, but to uncover systemic problems of data accuracy. Often, when people are working across multiple systems they will take shortcuts to updating and managing data; this is particularly prevalent when data duplication is involved. If it isn't clear which system has priority and data doesn't automatically update in other systems, human error is an inevitable outcome. With One Model, you can decide which systems represent the most accurate information for particular data and merge all data along these backbone elements, resulting in greater trust and confidence. The data integration process that is core to the One Model platform can, in effect, create a single source of truth for your people data. This presentation by George Colvin at PAFOW Sydney neatly shows how the One Model platform was used by Tabcorp to manage people data silo issues.

PROBLEM NUMBER TWO - LIMITED ACCESS TO DATA IN OLD AND NON-SUPPORTED SYSTEMS
Further to the issue of data spread across multiple systems, our experience tells us that not only are most large organisations running multiple people data systems, at least one of those systems will be running software that is either out of date or no longer supported by the vendor. So even if you do wish to integrate data between systems, you may be unable to. It is always best if you can identify data issues in real time to minimise exposure and scope of impact, but this isn't always possible, and you may have to dig into historical transactional data to figure out the scale of the issue and how it impacts employees and the company. If that wasn't challenging enough, most companies, when changing or upgrading systems, end up not migrating all of their historical data for reasons of cost and complexity. This means that you are paying for the maintenance of your old systems or to manage an offline archived database.
Furthermore, when you need to access that historical data, running queries is incredibly difficult. This is compounded when you need to blend the historical data with your current system. It is, to put it mildly, a pain in the neck! One Model's cloud data warehouse can hold all of your historical data and shield your company from system upgrades by managing the data migration to your new system, or by housing your historical data and providing seamless blending with the data in your current active systems. If you are interested in this topic and how One Model can help, have a read of this blog that covers in more detail how One Model can mitigate the challenges associated with system migration.

PROBLEM NUMBER THREE - ACCESS TO KEY HR DATA IS LIMITED TO THE CENTRAL HR FUNCTION
Either as a result of technology, security, privacy and/or process, HR data in many large organisations is only accessible by the central HR department. As a result, individual line managers don't have the autonomy or capability to isolate and resolve people data issues before they develop. Data discrepancies are more likely to be identified by the people closest to the real-world events reflected in the transactional system. Managers and HR Business Partners are your first line of defence in identifying data issues, as well as any other HR issue. Of course, line managers need good people analytics to make better decisions and drive strategy, but a byproduct of empowering managers to oversee this information is that they are able to provide feedback on the veracity of the data and quickly resolve data accuracy issues. Sharing data widely requires a comprehensive and thoughtful approach to data sensitivity, security, and privacy. One Model has the most advanced people analytics permissions and role-based security framework in the world to help your company deploy and adopt data-driven decision making.

PROBLEM NUMBER FOUR - EVEN IF I RESOLVE A HISTORICAL UNDERPAYMENT, HOW DO I ENSURE THIS DOESN'T HAPPEN AGAIN?
One of the consistent pieces of feedback we received from the initial blog was that many stakeholders were comfortable that once an issue had been identified they would be able to resolve it, either internally or with the support of an external consulting firm. However, those stakeholders were concerned about their ability to uncover other instances of underpayment in their business or to ensure that future incidents did not occur. There is no silver bullet to this problem; however, our view is that a combination of the following factors can help organisations mitigate these risks:
Integrated people data - having a one-stop single source of truth for your people data is crucial.
Access to historical data - understanding when and how issues developed is also very important.
Empowerment of line managers to isolate and resolve issues - managers are your first line of defence in understanding and resolving these issues and you need to enable them to fix problems before they develop.
People analytics and the One Model product give organisations the tools to resolve all of these problems. If you are interested in continuing this conversation, please get in touch.

PROBLEM NUMBER FIVE - A COMPLEX INDUSTRIAL RELATIONS SYSTEM AND A LACK OF PEOPLE HR RESOURCES
Previously, most back office processes had a lot of in-built checks and balances. There were processes to cross-check work between team members and ensure transactions totaled up and reconciled correctly, and supervisors would double-check and approve changes.
Over the last 20 years, large enterprises have been accelerating ERP adoption. In order to realise ROI from that investment, many back office jobs in payroll and other functions were removed, with organisations and management expecting that the systems would always get it right. Compounding this, and despite many attempts over the years to simplify the industrial relations system, the reality is that managing employee remuneration is incredibly complex. This complexity means that the likelihood of making payroll system configuration, interpretation or processing mistakes is high. So what to do? Of course you need expertise in your team, or the ability to access professional advice as needed (particularly for smaller companies). In addition, successful companies are investing in people analytics to support their team and trawl through the large volumes of data to find exceptions, look for anomalies, and track down problems. Our view at One Model is that organisations need to develop metrics to identify and detect issues early. It's what our platform does. We have developed data quality metrics to deal with the following scenarios:
Process errors
Data inconsistency
Transactions contrary to business rules
Human error
A combination of quality metrics, system integrations, and staff empowered to isolate and resolve issues before they become problems is key to minimising the chances of an underpayments scandal at your business. Thanks for reading. If you have any questions or would like to discuss how One Model can help your business navigate these challenges, please click the button below to schedule a demo or conversation.
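As a postscript, here is a hedged sketch of the exception-style quality metrics described above: it scans a payroll extract for records that break two simple rules. The column names and the rules themselves are hypothetical; real awards and agreements are far more involved.

```python
import pandas as pd

# Hypothetical payroll extract (illustrative columns and values only).
payroll = pd.DataFrame({
    "employee_id":  ["E001", "E002", "E003"],
    "hours_worked": [38.0, 52.0, 37.5],
    "hourly_rate":  [31.50, 18.90, 29.80],
    "gross_pay":    [1197.00, 982.80, 1050.00],
})
AWARD_MINIMUM_RATE = 23.23   # illustrative minimum hourly rate, not a real award figure

exceptions = pd.concat([
    # Rule 1: paid below the applicable minimum rate (transaction contrary to business rules).
    payroll[payroll["hourly_rate"] < AWARD_MINIMUM_RATE].assign(rule="below_minimum_rate"),
    # Rule 2: gross pay does not reconcile with hours x rate (process or data error).
    payroll[(payroll["hours_worked"] * payroll["hourly_rate"] - payroll["gross_pay"]).abs() > 1]
        .assign(rule="pay_does_not_reconcile"),
])
print(exceptions[["employee_id", "rule"]])
```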

    Read Article

    31 min read
    Chris Butler

    The first in a series of posts tackling the individual nuances we see with HR technology systems and the steps we take in overcoming their native challenges to deliver a comprehensive people analytics program. Download the White Paper on Delivering People Analytics from SAP SuccessFactors.

Quick Links
A long history with SuccessFactors
Embedded Analytics won't cut it, you have to get the data out
World leading API for extraction
Time to extract data
Full Initial Load
Incremental Loads
Modelling Data Both SuccessFactors and External
SF Data Modelling
Analytics Ready Fact Tables
Synthetic Events
Core SuccessFactors Modules
MDF Objects
Snowflake Schema Inheritance
Metrics - Calculations - Analytics
Delivered Reporting and Analytics Content
Creating and Sharing your own Analytics Content
Using your own Analytical Tools
Feed Data to External Vendors
What About People Analytics Embedded?
What About SAP Analytics Cloud?
What About SuccessFactors Workforce Analytics?
The One Model Solution for SAP SuccessFactors

A long history with SuccessFactors
I'm starting with SuccessFactors because we have a lot of history with it. SuccessFactors acquired InfoHRM, where many of our team worked, back in 2010, and SAP subsequently acquired SuccessFactors in 2012. I personally built and led a team in the Americas region delivering the workforce analytics and planning products to customers and ensuring their success. I left SAP in 2014 to found One Model. Many of One Model's team members were in my team or leading other global regions and, of course, we were lucky enough to bring on a complete world-leading product team from SAP after they made the product and engineering teams redundant in 2019 (perfect timing for us! Thanks, SAP, they're doing a phenomenal job!). So let's dive in and explore SuccessFactors data for people analytics and reporting.

Embedded Analytics won't cut it, you have to get the data out.
It's no secret that all vendors in the core HR technology space espouse a fully integrated suite of applications, and that they all fall short to varying degrees. The SF product set has grown both organically and via acquisition, so you immediately have (even now) a disconnected architecture underneath that has been linked together where needed by software enhancements sitting above. Add in the MDF framework, with an almost unlimited ability to customize, and you quickly have a complexity monster that wasn't designed for delivering nuanced analytics. We describe the embedded reporting and analytics solutions as 'convenience analytics' since they are good for basic numbers and operational list reporting but fall short in providing even basic analytics like trending over time. The new embedded people analytics from SF is an example where the data set and capability are very limited. To deliver reporting and analytics that go beyond simple lists and metrics (and to do anything resembling data science), you will need to get that data out of SF and into another solution.

World leading API for data extraction
One Model has built integrations to all the major HRIS systems and, without a doubt, SuccessFactors has the best API architecture for getting data out to support an analytics program. Deep, granular data with effective-dated history is key to maintaining an analytics data store. It still has its issues, of course, but it has been built with incremental updates in mind and, importantly, can cater for the MDF framework's huge customizability. The MDF inclusion is massive.
It means that you can use the API to extract all custom objects and that the API flexes dynamically to suit each customer. As part of our extraction, we interrogate the API for available objects and work through each one to extract the full data set. It's simply awesome. We recently plugged into a huge SuccessFactors customer of around 150,000 employees and pulled more than 4,000 tables out of the API into our warehouse. The initial full load took about a week, so it was obviously a huge data set, but incremental loads can then be used for ongoing updates. Some smaller organizations have run in a matter of minutes, but clearly the API can support small through to enormous organizations, something other vendors (cough, cough ... Workday) should aspire to. To give you a comparison of the level of effort we've spent on the One Model API connectors: approximately 600 hours has been spent on SuccessFactors versus more than 12,000 hours on our Workday connector. Keep in mind that we have more stringent criteria for our integrations than most organizations, including fault tolerance, maintenance period traversal, increased data granularity, etc., that go beyond what most individual organizations would have the ability to build on their own. The point is, the hours we've invested show the huge contrast between the SF and Workday architectures as it relates to data access.

Time to Extract Data
Obviously, the time needed to extract the data depends on the size of the organization, but I'll give you some examples of both small and huge below.
Figure 1: Data extraction from SAP SuccessFactors using APIs

Full Initial Loads
In the first run we want everything that is available -- a complete historical dataset including the MDF framework. This is the most intense data pull and can vary from 20 minutes for a small organization of less than 1,000 employees to several days for a large organization above 100,000 employees. Luckily, this typically only needs to be done once during initial construction of the data warehouse, but there are times where you may need to run a replacement destructive load if there are major changes to the schema or the extraction, or if for some reason your synchronization gets out of alignment. APIs can behave strangely sometimes, with random errors or missing records, either due to the API itself or the transmission just losing data, so keep this process handy and build it to be repeatable in case you need to run it again in the future. The One Model connectors provide an infrastructure to manage these issues. If we're only looking for a subset of the data or want to restrict the fields, modules, or subject areas extracted, we can tell the connector which data elements to target.
Figure 2: Configuring the connector to SF in One Model platform

Incremental Updates
With the initial run complete, we can switch the extraction to incremental updates and schedule them on a regular basis. One approach we like to take when pulling incrementals is to take not just the changes since the last run but also a few extra time periods. For example, if you are running a daily update you might take the last two to three days' worth of data in case there were any previous transmission issues; this redundancy helps to ensure accuracy. Typically we run our incremental updates on a daily basis, but if you want to run more often than this you should first consider:
How long your incremental update takes to run.
SF is pretty quick, but large orgs will see longer times, sometimes stretching into multiple hours.
How long it takes your downstream processes to run and update any data.
Whether there's a performance impact to updating data more regularly; typically, if you have a level of caching in your analytics architecture, this will be blown away with the update and have to start over again.
The impact on users if data changes during the day. Yes, there can be resistance to data updating closer to real time. Sometimes it's better to educate users that the data will be static and updated overnight.
Whether or not the source objects support incremental updates. Not all can, and with SF there are a number of tables we need to pull in a full-load fashion, particularly in the recruiting modules.

Modelling data, both SuccessFactors and External
Okay, we have our SF data and, of course, we probably have just as much data from other systems that we're going to need to integrate together. SF is not the easiest data set to model, as each module operates with its own nuances that, if you're not experienced with them, will send you into a trial-and-error cycle. We can actually see a lot of the challenges the SF data can cause by looking at the failures the SF team themselves have experienced in providing cross-module reporting over the years. There have been issues with duplicates, incorrect sub-domain schemas, and customer confusion as to where you should be sourcing data from. A good example is pulling from Employee Profile versus Employee Central. The SAP on-premise data architecture is beautiful in comparison (yes, really, and look out soon for a similar post detailing our approach to SAP on-premise).

Modeling the SF Data
At this point we're modelling (transforming) the raw source data from SF into analytics-ready data models that we materialize into the warehouse as a set of fact and dimension tables. We like to keep a reasonable level of normalization between the tables to aid in the integration of new, future data sources and for easier maintenance of the data set. Typically, we normalize by subject area and usually around the same timescale. This can be difficult to build, so we've developed our own approaches to complete the time splicing and collapsing of records to condense the data set down to where changes occurred. The effort is worth it though, as the result is a full transactional history that allows the most flexibility when creating calculations and metrics, eliminating the need to go back and build a new version of a data set to support every new calculation (something I see regularly with enterprise BI teams). This is another example of where our team's decades of experience in modelling data for people analytics really comes to the fore. During the modelling process there are often a number of intermediate/transient tables required to merge data sets and accommodate modules that have different time contexts to each other, but at the end of the day we end up materializing them all into a single analytics-ready schema of tables (we call it our One schema). Some of what you would see is outlined below.

Analytics Ready Fact Tables
One.Employee - all employee effective-dated attributes
One.Employee_Event - all employee events, equivalent to action/reason events (e.g. Hire, Termination, Transfer, Supervisor change, etc.). Usually you'll need to synthetically create some events where they don't exist as action/reason combinations.
For example, many customers have promotions that aren't captured in the system as a transaction but are logically generated where a pay grade change occurs alongside a transfer, or any similar combination of logic.
One.Requisitions - all requisitions and events
One.Applications - all application events
One.Performance_Reviews - all performance review events
... the list goes on

Dimension Tables
One.dim_age - age breakout dimension with levelling
One.dim_gender - gender breakout dimension, typically a single level
One.organizational_unit - the multi-level organization structure
... we could go on forever; here's a sample of fields below.
Figure 3: Examples of tables and fields created in the One Model data schema

Synthetic Events
A core HRIS rarely captures all events that need to be reported on, either because the system wasn't configured to capture them or because the event classification is a mix of logic that doesn't fit into the system itself. These are perfect examples of why you need to get data out of the system to be able to handle unsupported or custom calculations and metrics. A frequently recurring example is promotions, where an action/reason code wasn't used or doesn't fit and a logic test needs to be used for reporting (e.g. a change in pay grade + a numeric increase in salary). We would implement this test in the data model itself to create a synthetic event in our Employee_Events model. It would then be seen as a distinct event just like the system-sourced events. In this fashion you can overcome some of the native limitations of the source system and tailor your reporting and analytics to how the business actually functions.

Core SuccessFactors Modules
Employee Central - aligns with our Employee and Employee Event tables and typically includes about 100+ dimensions as they're built out. The dimension contents usually come from the foundation objects, picklist reference tables, an MDF object, or just the contents of the field if usable. This is the core of the analytics build, and virtually all other modules and data sets will tie back to the core for reference.
Recruiting - aligns with our Applications, Application_Event, and Candidates fact tables covering the primary reporting metrics, and then their associated dimensional tables.
Succession - aligns with Successor and associated dimensions
Performance - Performance Reviews (all form types) and associated dimensions
Learning - Learning Events, Courses, Participants
Goals - Goals, Goal_Events

MDF Objects
MDF objects are generally built into the HRIS to handle additional custom data points that support various HR processes. Typically we'll see them incorporated into one of the main fact tables, aligning with the date context of the subject fact table (e.g. employee attributes in One.Employee). Where the data isn't relevant to an existing subject, or just doesn't align with the time context, it may be better to put the data into its own fact table. Usually the attribute or ID would be held in the fact table and we would create a dimension table to display the breakout of the data in the MDF object. For example, you might have an MDF object capturing whether an employee works from home. This would capture the person ID, the date, and the associated value (e.g. 'Works from Home' or 'Works from Office'). The attribute would be integrated into our Employee fact table with the effective date, and typically a dimension table would also be created to show the values, allowing the aggregate population to be broken out by these values in reporting and analysis.
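A minimal pandas sketch of that pattern, with entirely hypothetical column names (this is not the actual One schema): the MDF value is aligned to the effective-dated employee fact and then broken out as a small dimension.

```python
import pandas as pd

# Hypothetical effective-dated employee fact and an MDF-style attribute feed.
employee = pd.DataFrame({
    "employee_id": ["E001", "E001"],
    "effective_date": pd.to_datetime(["2023-01-01", "2023-07-01"]),
    "org_unit": ["Customer Service", "Customer Service"],
})
mdf_work_location = pd.DataFrame({
    "employee_id": ["E001"],
    "effective_date": pd.to_datetime(["2023-07-01"]),
    "work_location_type": ["Works from Home"],
})

# Align the MDF value to each employee record as at its effective date.
employee = pd.merge_asof(
    employee.sort_values("effective_date"),
    mdf_work_location.sort_values("effective_date"),
    on="effective_date", by="employee_id", direction="backward",
)

# Simple dimension table derived from the MDF values, for breakouts in reporting.
dim_work_location = (
    employee[["work_location_type"]].dropna().drop_duplicates().reset_index(drop=True)
)
print(employee)
print(dim_work_location)
```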
With the potential for a company to have thousands of MDF objects, this can massively increase the size, complexity, and maintenance of the build. It's best to be careful here, as the time context of different custom objects needs to be handled appropriately or you risk impacting other metrics as you calculate across domains.

Inheritance of a snowflake schema
Not to be confused with Snowflake the database, a snowflake schema creates linkages between tables that may take several steps to join to an outer fact or dimension table. An example is how we link a dimension like Application Source (i.e., where a person was hired from) to a core employee metric like Headcount or Termination Rate, which is sourced from our core Employee and Employee Event tables. An example of this is below, where to break out Termination Rate by Application Source and Age we would need to connect the tables as shown:
Figure 4: Example of connecting terminations to application source
This style of data architecture allows a massive scale of data to be interconnected in a fashion that enables easier maintenance and the ability to change pieces of the data model without impacting the rest of the data set. This is somewhat the opposite of what is typically created for consumption with solutions like Tableau, which operate easiest with de-normalized tables (i.e., giant tables mashed together) that come at the cost of maintenance and flexibility. Where one of our customers wants to use Tableau or a similar solution, we typically add a few de-normalized tables built from our snowflake architecture, which gives them the best of both worlds. Our calculation engine is built specifically to handle these multi-step or matrix relationships, so you don't have to worry about how the connections are made once it's part of the One Model data model.

Metrics - Calculations - Analytics
When we get to this point, the hardest work is actually done. If you've made it this far, it is now relatively straightforward to build the metrics you need for reporting and analytics. Our data models are built to do this easily and on the fly, so there isn't a need to build pre-calculated tables like you might have to do in Tableau or other BI tools. The dynamic, on-the-fly nature of the One Model calculation engine means we can create new metrics or edit existing ones and use them immediately, without having to generate or process any new calculation tables.
Creating / Editing Metrics
Figure 5: Example of creating and editing metrics in One Model

Delivered Reporting and Analytics Content
With an interconnected data model and a catalogue of pre-defined metrics, it is straightforward to create, share and consume analytics content. We provide our customers with a broad range of pre-configured Storyboard content on top of their SuccessFactors data. A Storyboard library page allows a quick view of all subject areas and click-through to the deeper, subject-specific Storyboards beneath. This content is comprehensive, covering the common subject areas for analytics and reporting such as workforce profile, talent acquisition, turnover, diversity, etc. There is also the ability to create dashboards for monitoring data quality, performing data validations, and viewing usage statistics to help manage the analytics platform.
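To ground the metrics discussion before looking at the standard content, here is a deliberately simplified sketch of one such metric, an annual termination rate, computed straight from effective-dated employee data. It is plain pandas with made-up data and column names, not One Model's calculation engine.

```python
import pandas as pd

# Hypothetical analytics-ready employee table (illustrative values only).
employee = pd.DataFrame({
    "employee_id": ["E001", "E002", "E003", "E004"],
    "hire_date": pd.to_datetime(["2019-03-01", "2021-06-15", "2022-01-10", "2022-11-01"]),
    "termination_date": pd.to_datetime([None, "2023-08-31", None, "2023-03-15"]),
})

period_start, period_end = pd.Timestamp("2023-01-01"), pd.Timestamp("2023-12-31")

def headcount_at(as_at):
    """Count employees active at a point in time."""
    active = (employee["hire_date"] <= as_at) & (
        employee["termination_date"].isna() | (employee["termination_date"] > as_at)
    )
    return active.sum()

terminations = employee["termination_date"].between(period_start, period_end).sum()
average_headcount = (headcount_at(period_start) + headcount_at(period_end)) / 2
print("Termination rate:", round(terminations / average_headcount, 3))
```

A calculation engine generalises this idea so the same definition can be sliced by any connected dimension without rebuilding tables by hand.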
Figure 6: Sample of standard Storyboard content in One Model

Creating and Sharing your own Analytics Content
Every one of our customers adds to the pre-configured content that we provide them, creating their own metrics and storyboards to tell their organization's people story, to support their HR, business leaders, and managers, and to save their people analytics team time by reducing ad-hoc requests for basic data. Our customers make the solution their own, which is the whole point of providing a flexible solution not tied to the limitations of the underlying source system. Content in One Model is typically shared with users by publishing a storyboard and selecting which roles will have access and whether they can edit or just view the storyboard itself. There are a number of other options for distributing data and content, including:
Embedding One Model Storyboards within the SuccessFactors application itself
Embedding One Model Storyboards within Sharepoint, Confluence, or any other website/intranet (e.g. the way we have used frames within this site: https://covidjobimpacts.greenwich.hr/#)
Pushing data out to other data warehouses (what we call a "data destination") on a scheduled basis, something that works well for feeding other tools like Tableau, PowerBI, SAP Analytics Cloud, and data lakes.
Figures 7, 8, 9: Storyboard sharing and embedding, including an example of an embedded storyboard on the COVID Job Impacts site - https://covidjobimpacts.greenwich.hr/#

Using your own Analytical Tools
We want to ensure you never hit a ceiling on what you can achieve or limit the value you can extract from your data. If you wish to use your own tools to analyse or report on your data, we believe you should have the power to do so. We provide two distinct methods for doing this:
Direct Connection to the One Model Data Warehouse. We can authorize specific power users to access the data warehouse directly and read/write all the raw and modeled tables in the warehouse. If you want to use Tableau or PowerBI in this way, you are free to do so. You can write your own queries with SQL or extract directly from the warehouse in your data science programs such as Python or R. The choice is yours. At this point, it is essentially your warehouse as if you created it yourself; we have just helped to orchestrate the data.
Data Destinations. If you need to feed data to an enterprise data warehouse, data lake, or other data store, then our data destinations functionality can send the selected data out on a scheduled basis. This is often used to integrate HR data into an enterprise data strategy, or to power an investment in Tableau Server or similar where teams want the HR data in these systems but don't want to build and run the complex set of APIs and data orchestration steps described above.
In both of these scenarios, you're consuming data from the data model we've painstakingly built, reaping the productivity benefits by saving your technical team from having to do the data modelling. This also addresses a perennial issue for HR, where the IT data engineering teams are often too busy to devote time to understanding the HR systems sufficiently to deliver what is needed for analytics and reporting success.

Feed data to external vendors
Another use for the data destinations described above is to provide external vendors, or internal business teams, with the data they need to deliver their services.
Many of our customers now push data out to these vendors rather than have IT or consultants build custom integrations for the purpose. We, of course, will have the complete data view, so you can provide more data than you did in the past when sourcing from the HRIS system alone. A good example of this is providing employee listening/survey tools with a comprehensive data feed, allowing greater analysis of your survey results. Another use case we've facilitated is supporting the migration between systems, using our integrations and data models as the intermediate step to stage data for the new system while also supporting continuity of historical and new data. (Reference this other blog on the topic: https://www.onemodel.co/blog/using-people-analytics-to-support-system-migration-and-innovation-adoption)
Scheduled Data Destinations
Figure 10: Example of data destinations in One Model

What About People Analytics Embedded?
This solution from SF is great for what we call 'convenience analytics', where you can access simple numbers, low-complexity analytics and operational list reports. These provide basic data aggregation and simple rates at a point in time without any historical trending. In reality, this solution is transactional reporting with a fancier user interface. Critically, the solution falls down in the following areas:
Trending across time (an analytics must-have)
Limited data coverage from SF modules (no access to data from some core areas including learning and payroll)
Challenges joining data together and complexity for users in building queries
No ability to introduce and integrate external data sources
No ability to create anything of true strategic value to your organization.

What About SAP Analytics Cloud?
SAC has shown some great promise in being able to directly access the data held in SF and start to link to some external source systems to create the data integrations you need for a solid people analytics practice. The reality, however, is that the capability of the product is still severely limited and doesn't provide enough capacity to restructure the data and create the right level of linkages and transformations required to be considered analytics-ready. As it is today, the SAC application is little more than a basic visualization tool, and I can't fathom why an organization would take this path rather than something like Tableau or PowerBI, which are far more capable visualization products. SAP Analytics Cloud has not yet become the replacement for the Workforce Analytics (WFA) product, as it was once positioned. The hardest part of delivering robust people analytics software has always been the ongoing maintenance and development of your organizational data. The SF WFA service model provided this, with an expert team on call (if you have the budget) to work with you. With SAC, they have not even come close to the existing WFA offering, let alone something better. The content packages haven't arrived with any depth, and trying to build a comprehensive people analytics suite yourself in SAC is going to be a struggle, perhaps even more than building it on your own in a more generic platform.

What About SuccessFactors Workforce Analytics?
Obviously, our team spent a lot of time with SuccessFactors' WFA product, even predating the SF acquisition.
The WFA product was a market and intellectual pioneer in the people analytics field back in the day, and many members of our team were there, helping hundreds of organizations on their earliest forays into people analytics. The WFA solution has aged, and SF has made little to no product improvement over the last five years. It is, however, still the recommended solution for SF customers that want trending and other analytics features, which are relatively basic at this point. Several years ago, we started One Model because the SF WFA product wasn't able to keep pace with how organizations were maturing in their people analytics needs, and the tool was severely limiting their ability to work the way they needed to. It was a black box where a services team (my team) had to deliver any changes and present that data through the limited lens the product could provide, all for a fee, of course. Organizations quickly outgrew and matured beyond these limitations, to the point that I felt compelled to tackle the problem in a different fashion. One Model has become the solution we always wanted: one that helps our customers become successful and grow and mature their people analytics capability with data from SAP SuccessFactors and other systems. We provide the integrations, the analytical content, the data science, the transparency, scalability, and configurability that our customers always wished we could provide with SF WFA. We built our business model to have no additional services cost, we keep all aspects of our data model open to the customer, and our speed and delivery experience mean there's no limit to which modules or data sets you wish to integrate.

The One Model Solution for SAP SuccessFactors
Direct API integration to SuccessFactors
Unlimited data sources
Daily data refresh frequency
Unlimited users
Purpose-built data models for SAP and SF
No additional services costs
People analytics metrics catalogue
Create your own metrics and analytics
Curated storyboard library for SuccessFactors
Operational reporting
Embed and share storyboards
HR's most advanced predictive modelling suite
Access all areas with a transparent architecture
Use your own tools, e.g. Tableau, PowerBI, SAC
Take a tour in the video below. We are happy to discuss your SuccessFactors needs.

    Read Article

    8 min read
    Chris Butler

With the continued growth of the Coronavirus pandemic, our leaders are going to be asking for regular updates on our employees' health and our business's productivity. This is not going to be a flash-in-the-pan event either. The path back to normal will be long and gradual, which means we need to approach data collection, reporting, and analysis with an emphasis on repeatability. To that end, there are a number of questions that HR teams are going to need to answer in order to provide a status of, and show the progression of, the business's adaptation to these challenges and how our workforce is coping. What questions are we going to need to answer? What % of our workforce can be switched to work remotely if needed? What % has already shifted to working remotely due to COVID-19? What is the trend as we ramp this ability? What % of our workforce is currently not working due to COVID-19? How are infection rates trending in the countries/states/provinces where we have employees? What is the trend in our employee infection rates and how do they compare to the relevant country/state/province? What is the risk level of our workforce in a given area based on the age distribution and other relevant factors? Do we have any locations that are significantly impacted by COVID-related absences? What is the average duration of employees being unavailable due to COVID-19 - illness or other? What % of our infected workforce has recovered and returned to work? What % of our temporarily remote workforce has returned to working on location? What is our current productive capacity %? How long are impacted employees non-productive for? How much productive capacity have we lost? So, what data do we need and how do we organize it to address the questions above? Download our Covid-19 Tracking Worksheet This is where things get tricky, and HR needs to be collecting additional data beyond what it has today. This is likely going to need to come from manager input, with HR acting as the central collation point. Ideally this information can be captured and held within your HRIS, but most likely this is going to start out as a spreadsheet, as your HRIS may not have the required fields for what we need to measure beyond traditional absence and availability information. My view is that shortly we're going to be asking managers to provide information to HR when their employees move into quarantine or infection, or start/stop work (when remote) because of illness. This may or may not also be in association with an HRIS event recording absence or similar. As this data is collated you'll want to make sure you can collect a few key data points, as per the below:
Ability to work from home
Currently working from home
Date employee stopped working
Date employee returned to work
Date employee entered a Quarantine status
Date employee entered an Infected status
Date employee cleared of Quarantine/Infected status
This data can then be merged with the following HRIS information:
Location information: Country, State/Province, City/Location (for site-level metrics and comparison to global/national statistics)
Personal information: age, gender (optional - for risk assessment and forecasting; data would not include name or employee ID number)
Employment: employee type (regular, temp/FTC, contractor), Full/Part time
A combination of this event-related data alongside the HRIS data will create the ability to track the status of our workforce over time so we can report and analyze the trend and impact on the business.
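To make the collation concrete, here is a minimal sketch in Python/pandas of what that tracking file might look like once merged with an HRIS extract. All file names and column names here are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd

# Manager-reported tracking data, collated centrally by HR (one row per employee).
tracking = pd.read_csv(
    "covid_tracking.csv",
    parse_dates=[
        "date_stopped_working", "date_returned_to_work",
        "date_quarantine_start", "date_infected_start", "date_cleared",
    ],
)

# HRIS attributes used for segmentation; no names, just an employee id as the join key.
hris = pd.read_csv("hris_extract.csv")  # employee_id, country, state_province, city, age, gender, employee_type, full_part_time

snapshot = tracking.merge(hris, on="employee_id", how="left")

# Derive a simple point-in-time status for reporting and trending.
def covid_status(row):
    if pd.notna(row["date_cleared"]):
        return "Recovered"
    if pd.notna(row["date_infected_start"]):
        return "Infected"
    if pd.notna(row["date_quarantine_start"]):
        return "Quarantine"
    return "Active"

snapshot["status"] = snapshot.apply(covid_status, axis=1)

# Example cut: status counts by country for the current snapshot.
print(snapshot.groupby(["country", "status"])["employee_id"].count())
```

Re-running a snapshot like this on a regular schedule is what gives you the trend over time rather than a one-off count.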
Some of these data points can be inferred from your existing systems. It's going to be a challenging job to collect and keep collating the above data, so if you already have, or can get, data from some additional systems like facilities and IT access, you can infer some of these data points. Below are some examples of business logic that some of the organizations we have been talking to are using (a short code sketch of this inference logic appears at the end of this article).
Ability to work from home = Has access to a VPN, has a laptop.
Currently working from home = Is accessing the VPN, has not badge-swiped into an office/facility.
Not working due to infection = A leave of absence record with no recent VPN access or badge swipe.
How would we present this information? Note: the dashboard examples use mock data. Workforce Composition & Employee Health Overall metrics on the current workforce showing the total population, working status, remote working rates, and infection rates. We also want to show this trending over time so we have an idea of the growth and ultimately the recession of infection rates. Key Metrics:
Headcount, Headcount % - Quarantine, Infected, Recovered statuses
Currently Working %
Working from Home %
High Risk Populations Comparison against daily statistics by country/state/province produced by health organizations will enable you to compare your infection trend to the prevailing trend in the relevant geographical area. If the area is seeing an acceleration of cases, you should anticipate similar risks for your workforce. If the area has hit an inflection point and is leveling off, the risk to that part of your workforce should be on the wane as well. Beyond geographic risk, age is the biggest factor in the impact on the employee, and we're going to see longer infection periods and higher mortality rates for older employees than we will for other populations. Obviously, any steps the organization can take to protect higher-risk populations should be fought for. Key Metrics:
Regional Infection Rates (where available)
Active, Quarantine, Infected, Recovered by Age, Location, Department
Productive Status The absence of employees will reduce the productivity of your workforce, potentially impacting customers and creating financial risks such as over-ordering of supplies and raw materials, over-estimating orders and revenues, or committing to delivery dates that cannot be achieved because of workforce impacts beyond the view of the manufacturing location's management. Questions we need to answer include: How many of our employees are currently working, whether from home or their normal location? How many hours have we lost due to infection-related absences? While productivity isn't a major concern when lives are at stake, many of the actions we take to protect our employees will show up in either infection rates or absences, and we should be measuring to see what was effective and what wasn't. Key Metrics:
Currently Working % - Active, Quarantine, Infected
Ability to work from home %
Lost productive hours (key here are the dates for when people stop and return to work)
Duration of Quarantine and Infection As we plan for how our employees are impacted, we need to be analyzing how long our quarantined and infected employees are unable to work. Many employees may still be able to work from home while under quarantine and/or infection, but there will be periods when they are unwell and won't be able to work, when their symptoms are significant or while they are recovering. Questions we need to answer include: How long is the infection period for our workforce?
What is our forecast for our currently infected employees being back to work? Key Metrics:
Days in Quarantine
Days from Infection to Recovered
Number of Reinfections, Reinfection Rate
Extension opportunities The above metrics and questions are a bare-bones set of pertinent information that you could provide to leaders even if you don't have fully integrated HR systems. Of course, there are many more attributes available that leaders may want to view. These will be specific to each business's strategy, so my advice is to include a number of common HRIS fields in the data collection process when you build your initial data set so you can segment later as needed. Suggested other data elements to consider adding:
Succession – can we tap into the successors for persons in critical roles? How impacted are these successors?
Critical roles – prioritize remote work arrangements or advise early preventive quarantine measures for specific roles in certain areas
Skills data – capture potential temporary backfills for employees that are unable to work
Set-up expense tracking – spending to facilitate remote working capabilities
Re-infection rates – tracking of persons previously cleared
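Both the inferred data points described earlier and the duration metrics above take only a few lines of code. The sketch below assumes hypothetical extracts of IT assets, VPN sessions, badge swipes, and leave records; every file name, column name, and the seven-day look-back window are assumptions to adapt to whatever your own systems actually provide.

```python
import pandas as pd

assets = pd.read_csv("it_assets.csv")  # employee_id, has_laptop, has_vpn_account (boolean or 0/1 flags)
vpn = pd.read_csv("vpn_sessions.csv", parse_dates=["login_date"])
badges = pd.read_csv("badge_swipes.csv", parse_dates=["swipe_date"])
leave = pd.read_csv("leave_of_absence.csv")  # employee_id for anyone on a current leave record

# Look-back window for "recent" activity; seven days is an arbitrary choice.
window_start = pd.Timestamp.today().normalize() - pd.Timedelta(days=7)
recent_vpn = set(vpn.loc[vpn["login_date"] >= window_start, "employee_id"])
recent_badge = set(badges.loc[badges["swipe_date"] >= window_start, "employee_id"])

# Ability to work from home = has a laptop and a VPN account.
assets["able_to_work_from_home"] = assets["has_laptop"] & assets["has_vpn_account"]

# Currently working from home = recent VPN access and no recent badge swipe.
assets["working_from_home"] = assets["employee_id"].isin(recent_vpn) & ~assets["employee_id"].isin(recent_badge)

# Not working due to infection = leave of absence record with no recent VPN access or badge swipe.
assets["not_working"] = (
    assets["employee_id"].isin(leave["employee_id"])
    & ~assets["employee_id"].isin(recent_vpn)
    & ~assets["employee_id"].isin(recent_badge)
)

# Duration metrics from the tracking file described earlier in this post.
tracking = pd.read_csv("covid_tracking.csv", parse_dates=["date_quarantine_start", "date_infected_start", "date_cleared"])
tracking["days_in_quarantine"] = (tracking["date_infected_start"].fillna(tracking["date_cleared"]) - tracking["date_quarantine_start"]).dt.days
tracking["days_infection_to_recovered"] = (tracking["date_cleared"] - tracking["date_infected_start"]).dt.days
```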

    Read Article

    4 min read
    Chris Butler

The SuccessFactors Workforce Analytics platform is soon to be sunset. If you haven't heard already, the SuccessFactors Workforce Analytics and Planning teams were made redundant yesterday. Product, Support, and Engineering teams for the platform have been given notice, leaving a handful of services staff to maintain existing customer deployments. A lot of talented friends and pioneers in people analytics are now looking for new jobs. If and when formal word comes out of SAP, I am sure it will be along the lines of "Workforce Analytics (WFA) is not dead but moving to SAP Analytics Cloud (SAC)", with no specific timeline or plan for doing so, let alone whether equivalent capability will be available (it won't be). Luckily, if you're up and running on WFA you've done all the hard work to get there. Your data is flowing and your business logic is defined. I'm here to offer all WFA customers a transition to One Model at no cost and a promise you'll be up and running with a more capable solution in a matter of days. Simply switch your existing data feeds to One Model, provide us your WFA data specification, and we'll do the rest. Literally - we'll have you up and running in a matter of days. And we can do more in a single day than SuccessFactors Workforce Analytics used to be able to provide in six weeks. What's awesome about One Model: Experience an all-inclusive platform: access all your data with no limits, no modules, and no implementation fees. Leverage our experience, models, and content catalogues. Don't deal with extra charges: no paid services for building metrics, dimensions, or new modules. Daily data refreshes. Get a real HR Data Strategy built for the future of people analytics that will fully support your evolving technology landscape. Gain full access to the data warehouse and data modeling, with full exposure for user transparency. Plug in your own tools like Tableau, Excel, SAC. Truly system agnostic. Access automated machine learning to build custom predictive models relevant to you. Use the world's most advanced Role-Based Security and overcome the challenges you currently have providing secure data views to the right users. Embed within SuccessFactors using an SF extension built by one of our partners. Embed within portals like SharePoint and Confluence. Feed external systems and vendors with clean, consolidated data, and use us as part of any system migration to maintain history and configure data for the new system. Way too much more to list here... The Offer: Switch to One Model with no implementation fee. Redirect your feeds. Provide your Workforce Analytics data specification. Receive a people analytics infrastructure and toolkit built to support your growth in maturity and capability. Bonus: One Model will match the SF WFA subscription price if our subscription is higher. HR Analytics should flow as a by-product of how you manage your people data. "This is the way data will be managed." "OneModel’s approach is significantly different from the rest of the pack. It understands the dynamic nature of organizations and provides monitoring and maintenance capacity for the inevitable moment in which a data model ceases to be effective." - John Sumser, HR Examiner About One Model One Model provides a data management platform and comprehensive suite of people analytics directly from various HR technology platforms to measure all aspects of the employee lifecycle.
Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team.

    Read Article

    11 min read
    Chris Butler

About ten years ago, as the pace of HR technology migration to the cloud started to heat up, I started to have a lot more conversations with organizations that were struggling with the challenges of planning for system migration and what to do with the data from their old systems post-migration. This became such a common conversation that it formed part of the reason for One Model coming into existence. Indeed, much of the initial thought noodling was around how to support the great cloud migration that was, and still is, underway. In fact, I don't think this migration is ever going to end as new innovation and technology becomes available in the HR space. The pace of adoption is increasing, and more money is being made than ever by the large systems implementation firms (Accenture, Deloitte, Cognizant, Rizing, etc.). Even what may be considered a small migration between two like systems can cost huge amounts of money and time to complete. One of the core challenges of people analytics has always been the breadth and complexity of the data set and how to manage and maintain this data over time. Do this well, though, and what you have is a complete view of the data across systems that is connected and evolving with your system landscape. Why then are we not thinking in a larger context about this data infrastructure being able to support our organization's adoption of innovation? After all, we have a perfect data store, toolset, and view of our data to facilitate migration. The perfect people analytics infrastructure has implemented an HR Data Strategy that disconnects the concept of data ownership from the transactional system of choice. This has been an evolving conversation, but my core view is that as organizations increase their analytical capability, they will have in place a data strategy that supports the ability to choose any transactional system to manage their operations. Being able to quickly move between systems and manage legacy data alongside new data is key to adopting innovation, and the organizations that do this best will reap the benefits. Let's take a look at a real example, but note that I am ignoring the soft-skill components of how to tackle data structure mapping and the conversations required to identify business logic, etc., as this still needs human input in a larger system migration. Using People Analytics for System Migration Recently we were able to deploy our people analytics infrastructure with a customer specifically to support the migration of data from Taleo Business Edition to Workday's Recruiting module. While this isn't our core focus as a people analytics company, we recently completed one of the last functional pieces we needed to accomplish this, so I was excited to see what we could do. Keep in mind that the below steps and process we worked through would be the same from your own infrastructure, but One Model has some additional data management features that grease the wheels. To support system migration we needed to be able to:
1. Extract from the source system (Taleo Business Edition), including irregular data (resume files)
2. Understand the source and model it to an intermediate common data model
3. Validate all source data (metrics, quality, etc.)
4. Model the intermediate model to the destination target model
5. Push to the destination (Workday)
6. Extract from the destination and validate the data as correct or otherwise
7. Infinitely and automatically repeat the above as the project requires.
Business logic to transform and align data from the source to the target can be undertaken at both steps 2 and 4, depending on the requirement for the transformation. Below is the high-level view of the flow for this project. In more detail: The Source There were 132 tables from Taleo Business Edition that formed the source data set extracted from the API, plus a separate collection of resume attachments retrieved via a Python program. Luckily, we already understood this source and had modeled it. Model and Transform We already had models for Taleo, so the majority of effort here is in catering for the business logic to go from one system to another and any customer-specific logic that needs to be built. This was our first time building towards a Workday target schema, so the bulk of time was spent here, but this point-to-point model is now basically a template for re-use. The below shows some of the actual data model transformations taking place and the intermediate and output tables that are being created in the process. Validation and Data Quality Obviously, we need to view the data for completeness and quality. A few dashboards give us the views we need to do so. Analytics provides an ability to measure the data and a window to drill through to validate that the numbers are accurate and as expected. If the system is still in use, filtering by time allows new data to be viewed or exported to provide incremental updates. Data quality is further addressed by looking for each of the data scenarios that need to be handled; these include items like missing values and consistency checks across fields. Evaluate, Adjust, Repeat It should be immediately apparent if there are problems with the data by viewing the dashboards and scenario lists. If data needs to be corrected at the source, you do so and run a new extraction. Logic or data fills can be catered for in the transformation/modelling layers, including bulk updates to fill any gaps or correct erroneous scenarios. As an automated process, you are not re-doing these tasks with every run - the manual effort is made once and infinitely repeated. Load to the Target System It's easy enough to take a table created here and download it as a file for loading into the target system, but ideally you want to automate this step and push to the system's load facilities. In this fashion you can automate the entire process and replace or add to the data set in your new system even while the legacy application is still functioning and building data. On the cutover day you run a final process and you're done. Validate the Target System Data Of course, you need to validate that the new system is correctly loaded and functioning, so round-tripping the data back to the people analytics system will give you that oversight, and the same data quality elements can be run against the new system. From here you can merge your legacy and new data sets and provide a continuous timeline for your reporting and analytics across systems as if they were always one and the same.
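As an illustration of the kinds of data quality scenarios mentioned above, checks like the following can be automated so they run on every extraction. This is a generic sketch in pandas against a hypothetical staged candidates table, not the actual One Model validation suite; the column names and rules are assumptions.

```python
import pandas as pd

staged = pd.read_csv("staged_candidates.csv", parse_dates=["applied_date", "hired_date"])

issues = []

# Missing values in fields the target system requires.
for col in ["candidate_id", "requisition_id", "applied_date"]:
    missing = staged[staged[col].isna()]
    if not missing.empty:
        issues.append((f"missing {col}", len(missing)))

# Consistency checks across fields, e.g. a hire date earlier than the application date.
bad_dates = staged[staged["hired_date"].notna() & (staged["hired_date"] < staged["applied_date"])]
if not bad_dates.empty:
    issues.append(("hired_date earlier than applied_date", len(bad_dates)))

# Duplicate keys that would reject on load into the target system.
dupes = staged[staged.duplicated(subset=["candidate_id", "requisition_id"], keep=False)]
if not dupes.empty:
    issues.append(("duplicate candidate/requisition pairs", len(dupes)))

for name, count in issues:
    print(f"{name}: {count} rows")
```

Because the checks are code rather than a manual review, the effort is made once and then repeated on every run, which is the point of the "Evaluate, Adjust, Repeat" step.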
Level of Effort We spent around 16-20 hours of technical time (excluding some soft-skills time) to run the entire process to completion, which included:
Building the required logic and target-to-destination models for the first time
Multiple changes to the destination requirements as the external implementation consultant changed their requirements
Dozens of end-to-end runs as data changed at the source and the destination load was validated
Building a Python program to extract resume files from TBE; this is now a repeatable program in our augmentations library
That's not a lot of time, and we could now do the above much faster as the repeatable pieces are in place to move from Taleo Business Edition to Workday's Recruiting module. The same process can be followed for any system. The Outcome? "Colliers chose One Model as our data integration partner for the implementation of Workday Recruiting. They built out a tailored solution that would enable us to safely, securely and accurately transfer large files of complex data from our existing ATS to our new tool. They were highly flexible in their approach and very personable to deal with – accommodating a number of twists and turns in our project plan. I wouldn’t hesitate to engage them on future projects or to recommend them to other firms seeking a professional, yet friendly team of experts in data management." - Kerris Hougardy Adopting new Innovation We've used the same methods to power new vendors that customers have on-boarded. In short order, a comprehensive cross-system data set can be built and automatically pushed to the vendor, enabling their service. Meanwhile, the data from your old system is still held in the people analytics framework, enabling you to merge the sets for historical reporting. If you can more easily adopt new technology and move between technologies, you mitigate the risks and costs of 'vendor lock-in'. I like to think of this outcome as creating an insurance policy for bad-fit technology. If you know you can stand up a new technology quickly, then you can use it while you need it and move to something that fits better in the future without losing your data history, and you will be more likely to be able to test and adopt new innovation. Being able to choose the right technology at the right time is crucial for advancing our use of technology and ideally creating greater impact for our organization and employees. Our Advice for Organizations Planning for an HR System Migration Get a handle and view across your data first -- if you are already reporting and delivering analytics on these systems, you have a much better handle on the data and its quality than if you didn't. The data is often not as bad as you expect it to be, and cleaning up with repeatable logic is much better than infrequently extracting and running manual cleansing routines. You could save a huge amount of time in the migration process and use more internal resources to do what you are paying an external implementation consultant to deliver. Focus more time on the differences between the systems and what you need to cater for to align the data to the new system. A properly constructed people analytics infrastructure is a system-agnostic HR Data Strategy and is able to deliver more than just insight into your people.
We need to think about our people data differently and take ownership of it externally to the transactional vendor. When we do so, we realize a level of value, flexibility, and ability to adopt innovation that will drive the next phase of people analytics results while supporting HR and the business in improving the employee experience. About One Model One Model delivers a comprehensive people analytics platform to business and HR leaders that integrates data from any HR technology solution with financial and operational data to deliver metrics, storyboard visuals, and advanced analytics through a proprietary AI and machine learning model builder. People data presents unique and complex challenges which the One Model platform simplifies to enable faster, better, evidence-based workforce decisions. Learn more at www.onemodel.co.

    Read Article

    3 min read
    Chris Butler

Earlier this year I joined one of The Learning Forum's workforce analytics peer groups, and I wanted to share my experience attending and why I came away thinking these groups are a great idea that should be considered by every PA practitioner. There are a number of groups you can look at joining, including Insight 222 and The Conference Board, but Brian Hackett from The Learning Forum had asked me earlier this year to come and present to their group about what we were doing at One Model. We had come up in their conversations and peer group emails where members had been asking about different technologies to help them build their HR analytics capabilities. The Learning Forum is a group of mostly Fortune 2000 companies, with a sizable proportion being Fortune 500 organizations, so of course I accepted. Our presentation went well and we had some great questions from the group around how we would tackle existing challenges today and where the platform is heading for their future projects. A great session for us, but the real value I took away was in staying for the rest of the day to be a fly on the wall for how the group worked and what they shared with each other. Brian had tabled on the agenda some pre-scheduled discussions on what the attendees were interested in learning about and discussing with their peers. The agenda was attendee-curated, so all subjects were relevant to the audience and provided some structure and productivity to the event. Following was time for members to present on any recent projects and work they had been conducting in their teams, and any valuable insights, outcomes, and advice they could share with the group. This was awesome to sit in on: listening to how others in our space are working, what their challenges are, and how they fared, in an environment of open, confidential sharing. It's the spirit of confidentiality and sharing between peers that I felt most made this group able to help and learn from each other in a way you just don't get from a run-of-the-mill conference. Practitioners were here to share, to learn, and to openly seek advice from their more experienced colleagues. Presentations ranged from experience using different vendors to cobbled-together projects using spit, glue, and anything else hands could be laid on. I found the cobbled-together solutions to be the most innovative; even where a company of the practitioner's size has significant resources, the insights came from innovative thinking and making use of tools that every company has access to. It's these projects of working smart, not hard, that make me smile the most, and the best part is that they could be shared with an openness and truthfulness that couldn't have occurred at a conference or in a public LinkedIn post. Peer forums provide an educational opportunity that you won't get elsewhere; I highly recommend them for all people analytics practitioners. Thanks to Brian Hackett at The Learning Forum for letting me present and learn about how your members are learning from each other.

    Read Article

    6 min read
    Chris Butler

A few weeks ago I gave a presentation at the Talent Strategy Institute’s Future of Work conference (now PAFOW) in San Francisco about how I see the long-term relationship between data and HR technology. Essentially, I was talking through my thought process and the realization I could no longer ignore, which led me to go start a company to chase down its long-term vision. So here it is. My conviction is that we need to (and we will) look at the relationship between our data and our technology differently; that, essentially, the two will be split. We will choose technology to manage our data and our workflows as we need it. We will replace that technology as often as our strategy and our business needs change. Those that know my team know that we have a long history of working with HR data. We started at Infohrm many years ago, which was ultimately acquired by SuccessFactors and, shortly after, SAP. Professionally this was fantastic: worlds opened up and we were talking to many more organizations about the challenges they were facing across their technology landscape. How to achieve data portability. Over time I was thinking through the challenges our customers faced, a large one of which was how to help grease the wheels for the huge on-premise-to-cloud transition that was underway, and subsequently the individual system migrations we were witnessing across the HR landscape. The pace of innovation in HR was not slowing down. Over the years hundreds of new companies were appearing (and disappearing) in the HR Tech space. It was clear that innovation was everywhere and many companies would love to be able to adopt, or at least try out, this innovation but couldn’t. They were being hampered by political, budgetary, and other technology landscape changes that made any change a huge undertaking. System migration was on the rise. As companies adopted the larger technology suites, they realized that modules were not performing as they should, and there were still gaps in functionality that they had to fill elsewhere. The promise of the suite was letting them down and continues to let them down to this day. This failure, combined with the pace of innovation, meant the landscape was in continuous flux. Fragmentation was stifling innovation and analytical maturity. The big reason to move to a suite was to eliminate fragmentation, but even within the suites the modules themselves were fragmented, and we as analytics practitioners, without a method for managing this change, only continued to add to it. We could adopt new innovation but we couldn’t make full use of it across our landscape. Ultimately this slows down how fast we can adopt innovation and, downstream, how we improve our analytical maturity. All HR Technology is temporary. The realization I started to come to is that all of the technology we were implementing and spending millions of dollars on was ultimately temporary; that we would continue to be in a cycle of change to facilitate our changing workflows and make use of new innovation to support our businesses. This is important, so let me state it again. All HR technology is temporary. We’re missing a true HR data strategy. The mistake we were making was thinking about our technologies and our workflows as being our strategy for data management. This was the problem. If we as organizations could put in place a strategy and a framework that allowed us to disconnect our data from our managing technology and that planned for obsolescence, then we could achieve data portability.
We need to understand the data at its fundamental concepts. If we know enough to understand the current technology and we know enough about the future technology, then we can create a pathway between the two. We can facilitate and grease the migration of systems. In order to do this effectively and at scale, you have to develop an intermediate context of the data. This becomes the thoroughfare. This was too advanced a concept for organizations to wrap their minds around. It is a powerful concept in essence and seems obvious, but trying to find customers for this was going to be near impossible. We would have to find companies in the short window of evaluating a system change to convince them they needed to look at the problem differently. Analytics is a natural extension. With the intermediate thoroughfare and the context of each of these systems, you have a perfect structure for delivering analytics from the data and powering downstream use cases. We could deliver data to vendors that needed it to supply a service to the organization. We could return data from these services and integrate it into the data strategy. We could write this data back to those core source systems. We could extend the data outside of these systems with sources that an organization typically could not access and make use of on their own. Wrap all this up in the burgeoning advanced analytics and machine learning capabilities and you have a truly powerful platform. We regain choice in the technology we use. In this vision, data is effectively separate from our technology, and we regain the initiative from our vendors in how, and with whom, we choose to manage our data. An insurance policy for technology. With the freedom to move and to adopt new innovation, we effectively buy ourselves an insurance policy in how we purchase and make use of products. We can test; we can prove; we can make the most of the best-of-breed innovation that has been growing in our space. If we don't like it, we can turn it off or migrate, without losing any data history and while minimizing switching costs. This is a long-term view of how our relationship to data and our vendors will change. It is going to take time for this view to become mainstream, but it will. The efficiencies and pace that it provides in changing the direction of our operations will deliver huge gains in how we work with our people and our supporting vendors. There are still challenges to making this happen. Vendors young and old need to provide open access to your data (after all, it's your data). The situation is improving, but there are still some laggards. The innovative customers at One Model bought us for our data and analytical capabilities today, but they know and recognize that we're building them a platform for their future. We've been working with system integrators and HR transformation groups to deliver on the above promise. The pieces are here, they're being deployed; now we need to make the most of them.

    Read Article

    20 min read
    Chris Butler

We received a lot of interest from Part 1 of this blog post, so if you haven't read it yet, head over to Part 1 for a summary view of our observations. In Part 2 I'm going to give you a brief walkthrough of setting up and running a turnover risk prediction in AWS' machine learning service. At the end of this post, I have some further observations about tweaking and improving the performance of the base offering, and additionally why we chose to move away from these toolsets and develop our own approach. AWS Machine Learning https://aws.amazon.com/aml/ Step 1 - Sign up for an account If you don't have an AWS account, you can sign up through the above link. Please check with your IT department for guidance on using AWS and what data you can upload to their cloud. You may need authorization, or to anonymize your data prior to loading. Cost A quick exploration of expected cost so you know what to expect. Current pricing is below:
$0.42 per hour for model processing
$0.10 per thousand predictions
In my experience, for a 5,000-employee company this results in the below:
10 minutes of processing per model = $0.07
5,000 predictions = $0.50
$0.57 per model and set of predictions run
I typically will create historical backtests, generating a model each month for at least the last two years, so I can gauge expected performance and track any wild divergence in model behavior. So let's call it $15 to run a full test (optional). Step 2 - Prepare your Data We'll need a flat .csv file that we can load; it's best to include a header row, otherwise you will need to name your columns later in the interface, which is just painful. The data you include will be all the data features we want to process and a field that shows the target we are trying to predict, in this case 'terminated', which I have highlighted in yellow below. The data I use in my file is generally the active headcount of current employees and the last 1-2 years of terminations. The actives have a 0 for terminated and the terminated records have a 1. For a 5,000-person company with a 12% turnover rate, that means I should have 5,000 active (0) records and around 1,200 terminated (1) records. The data features used are important, and as you create different models or try to improve performance you'll likely spend a good chunk of time adding, removing, or cleaning up the data in these features. A couple of guiding points as you build your file:
You can't have null values; the service will drop the record if there's an empty value in a column. Instead, replace any nulls either with a placeholder (the ? you can see above) or, depending on the data field, the median value for the column. The reason is that any placeholder will be treated as a distinct value and used in pattern detection, whereas the median will treat the record as no different from other median records.
If you can create a range, it's often useful to do so at this step, especially if you are writing SQL to extract, as it will then be repeatable on each data extraction (although there are options to do this in the AWS UI later). I will often use both the actual value and the range itself as individual data features, e.g. for Tenure (years) we would have the number of years, say 3, as one column and the range 3-<5 years as another column. One will be treated as a continuous numeric value while the other is a categorical grouping.
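A quick sketch of that preparation step in pandas: filling nulls with a placeholder or the median, and deriving a tenure band alongside the raw tenure value. The file name, column names, and band boundaries are assumptions based on the example above.

```python
import pandas as pd

# Actives (terminated=0) plus the last 1-2 years of terminations (terminated=1).
df = pd.read_csv("turnover_training_data.csv")

# Categorical fields: replace nulls with a placeholder so the record isn't dropped.
for col in ["department", "location", "performance_rating"]:
    df[col] = df[col].fillna("?")

# Numeric fields: the median keeps the record from standing out as a distinct pattern.
for col in ["age", "tenure_years"]:
    df[col] = df[col].fillna(df[col].median())

# Keep the raw value (continuous) and a derived range (categorical) as separate features.
bins = [0, 1, 3, 5, 10, 100]
labels = ["<1 year", "1-<3 years", "3-<5 years", "5-<10 years", "10+ years"]
df["tenure_range"] = pd.cut(df["tenure_years"], bins=bins, labels=labels, right=False)

df.to_csv("aml_training.csv", index=False)
```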
I like to include hierarchical structures in the data features, like department or supervisor relationships; you don't need the whole tree, as the top parts of the structure are often redundant, but the middle-to-leaf levels are quite important. You can spend days building features and creating calculations; my general approach is to start with a basic set of features and expand as I can lay my hands on more data or have time to merge in a new data set. You can then at least test how a basic set of features performs, which for some organizations can be extremely well. Adding features can reduce performance and cause overfitting, so having a baseline to compare with is always good. Step 3 - Create a Datasource and ML Model The wizards make the process of creating a datasource and a model ridiculously easy. Select "Datasource and ML model" from the "Create new" menu on the Machine Learning dashboard. You'll need to load your data file into S3 (the AWS file storage system), and from there you can provide its location to the wizard and give the source a name. You will likely have a number of datasources created over time, so make the name descriptive so you can tell them apart. You'll notice some information about providing a schema file. I do prefer to provide a schema file (see documentation here) as it means I can skip the next step of creating a schema for the file, but if you have included a header row in your file you can tell the wizard to use the first row as the column names. You will, however, still need to provide a data type for each column so the engine knows how to treat the data. You have a choice of:
Binary - use this where there are only two possible states; our target status of terminated is either a 0 or a 1, so it's binary. Can also be used for other binary types, e.g. true/false, yes/no, etc.
Categorical - perfect for any of the attribute or dimension style of fields, i.e. gender, age range, tenure range, department, country, etc. This is the most common selection I use.
Numeric - any number will automatically be assigned this value, but you will want to check it is applied properly to a numeric range, i.e. age is correct as a numeric and will be treated as a discrete series, but if you leave a department number as a numeric it is going to be worthless (change it to categorical).
Text - you really shouldn't have a set of free-text values for this type of scenario, so ignore it for now and use categorical if in doubt.
If you hit continue from here you'll get an error that you haven't selected a target, so go ahead and select the column that you used for your terminated status, then hit continue. You'll need to do the same for your person identifier (usually an employee id) on the next screen. The next Review screen will give some info on the number of types, etc., but there's nothing else to do here but hit continue and move on to our model selections. Name your model (usually I'll match the datasource name with a -model or similar added to the name). The same with the evaluation. Your biggest decision here is whether to use the default training and evaluation settings or the custom ones. With the custom settings you can change the amount of training and evaluation data, the regularization type, the number of passes the engine should run over your data to detect patterns, and the size of the model itself. For the most part I've had the most success using the default settings; don't get into the custom settings until you are really trying to fine-tune results, as you can spend a lot of time here and have mixed results.
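Before moving on: if you do choose to provide a schema file rather than set the types in the UI, it's just a small JSON document that names each column and its type. The sketch below writes one from Python; the column names come from the hypothetical training file above, and the exact field names should be checked against the Amazon ML schema documentation before use.

```python
import json

# A rough shape of an Amazon ML datasource schema; verify field names against the docs.
schema = {
    "version": "1.0",
    "targetAttributeName": "terminated",
    "rowId": "employee_id",
    "dataFormat": "CSV",
    "dataFileContainsHeader": True,
    "attributes": [
        {"attributeName": "employee_id", "attributeType": "CATEGORICAL"},
        {"attributeName": "department", "attributeType": "CATEGORICAL"},
        {"attributeName": "tenure_years", "attributeType": "NUMERIC"},
        {"attributeName": "tenure_range", "attributeType": "CATEGORICAL"},
        {"attributeName": "terminated", "attributeType": "BINARY"},
    ],
}

# Convention is data file name plus ".schema", stored alongside the data in S3.
with open("aml_training.csv.schema", "w") as f:
    json.dump(schema, f, indent=2)
```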
So select default and move on. You can see the default settings on the review screen: we're going to have a training/evaluation split of 70/30, and it will run 10 passes over the data looking for patterns and apply a regularization method (which helps to reduce the number of patterns and avoid overfitting). Hit create, grab a coffee, and in a few minutes you'll have a data source, a predictive model, and an evaluation demonstrating its performance. Refresh your screen until the model shows as completed. Once complete, you can click on the data source id and go explore some of the data source information. I like to view the correlations of each data feature to our target, which helps to decide if I should remove features or change them in some fashion. The big piece of info, though, is the Evaluation result, which in the above tells us that the Area Under the Curve (AUC) was 0.944, which as the next screenshot tells you is extremely good (suspiciously good). Click on the result and you'll see the performance metrics. The above information set is pretty impressive: if we set our probability score threshold at 0.5, which is the point where a score above will be predicted as a termination and a score below will be predicted as active, then we end up with 90% of our guesses being accurate. You can see the other metrics associated here for false prediction rates, and you can play around with the sliders to adjust the trade-off score to different levels. Now, this looks awesome, but keep in mind this is an evaluation set of historical data and I had spent a fair amount of time selecting and constructing data features to get to this point. In real life the model didn't perform this well; success was more like 70-75% of guesses being correct, which is still great but not as good as what you'll see in the evaluation. My guess here is I still have some overfitting occurring in the model. If your evaluation performs poorly you'll want to go look at the info provided: you may have rows or columns being dropped from the data source (explore the data source id), your features may not be relevant, or some other problem has occurred. If your results are too good (AUC = 1.0) then you have likely included a perfect predictor in the data features without realising it, e.g. an employment status or a placeholder department that appears when somebody terminates or is about to terminate; check for something like this and remove it. Step 4 - Generate Predictions When ready to generate some real-life predictions you can go ahead and click "Generate Batch Predictions". You'll need to load a file to S3 for your predictions; this file will be the same as your input file but with the terminated column (our target column) removed, so it will only be slightly different. The contents will be the people you wish to predict on, usually the current active headcount, or, if you are testing historically, the active headcount at point in time x (in which case your model obviously needs to be generated using data from before point x). Use the "My data source is in S3, and I need to create a datasource" option, go through the same prompts as you did for your training data source, and once it has finished processing you'll have a predictions file to download. This file gives you each person, their prediction value, and the associated probability score. You can load this into your own database or just view it in Excel, however you wish to consume it.
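If you would rather script these steps than click through the console every month, the same flow can roughly be driven through the AWS SDK. The sketch below uses boto3's machinelearning client with made-up IDs and S3 paths; treat it as a starting point under those assumptions and check the current API documentation rather than relying on it as written.

```python
import boto3

ml = boto3.client("machinelearning")

schema = open("aml_training.csv.schema").read()

# 1. Datasource from the training file in S3 (calls are asynchronous; the service
#    processes them in the background, just like the console wizard).
ml.create_data_source_from_s3(
    DataSourceId="ds-train-2018-01",
    DataSpec={
        "DataLocationS3": "s3://my-bucket/aml_training.csv",
        "DataSchema": schema,
    },
    ComputeStatistics=True,
)

# 2. Binary classification model trained on that datasource (default parameters).
ml.create_ml_model(
    MLModelId="ml-turnover-2018-01",
    MLModelType="BINARY",
    TrainingDataSourceId="ds-train-2018-01",
)

# 3. Batch prediction against the current active headcount (same columns, no target).
ml.create_data_source_from_s3(
    DataSourceId="ds-predict-2018-01",
    DataSpec={
        "DataLocationS3": "s3://my-bucket/aml_actives.csv",
        "DataSchema": schema,  # in practice the prediction schema should omit the target column
    },
)

ml.create_batch_prediction(
    BatchPredictionId="bp-turnover-2018-01",
    MLModelId="ml-turnover-2018-01",
    BatchPredictionDataSourceId="ds-predict-2018-01",
    OutputUri="s3://my-bucket/predictions/",
)
```

Scripting it this way makes the monthly re-run and the historical backtests described earlier much less painful.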
Observations and Tweaking Suggestions Data Sources Start with a basic set of features and expand over time so you can evaluate how the new data is affecting your models. Some targets and models for organizations respond better to simple models, and others need a lot more data features to find predictive patterns. Review the correlations of your attributes from the data source information after the source is created and processed. These will help you decide if a feature is useful and, most importantly, whether you have a feature that is suspiciously predictive that you may wish to remove so that you don't pollute the model. If you are going to continue to experiment and iterate, then definitely create a .schema file; it will save a bunch of time in avoiding setting UI options and make generating new sources/models very fast. Try creating some features combining different fields you think may have some relation to each other, e.g. Age-Tenure, 30-35_3-<5 yrs as an example of joining two ranges together. The ML will pick up some patterns like this itself, but I've found creating some of these can help. The amount of data I describe earlier in the post is a little controversial, i.e. using the current active headcount and historical terminations. Many data scientists will take issue here for one reason or another. For those people: know that yes, I have tested a number of different methods of balancing the data set, oversampling data, and generally constructing the set to overcome different problems, and through testing found that in this example case of turnover the changes haven't reliably produced better real-life results. So my advice for people starting out is to just use a simple data set and allow the toolset to do its thing, then evaluate what you are seeing by applying your predictions back to your actual turnover. The amount of termination history can impact how a model performs: if behaviors change and you have a long history of terminations, then the model may not adjust fast enough to cater for these new behaviors. It sometimes helps to shorten the amount of history you use if you have changing workforce behaviours. I was additionally creating new models every month for this reason as well. Models Always use the defaults to start with while you figure out the datasource and features being used. There's no point playing around with advanced settings when you can extract the most gains from adding or altering data features early on. If you suspect overfitting and you've looked at all your features for anything suspicious, then try a higher level of regularization in the advanced settings; you should still be able to leave the other settings at their default, and I've not had to change them. Evaluations Use them as an indicator that the model is doing its job, neither perfectly fitting nor severely underfitting the data. In general, aim for an AUC between 0.75 and 0.95 and you will generally do well. Adjust the score threshold to focus on precision if you want to reduce the number of people predicted as going to terminate (see next section). Using Predictions Generally, I'll take my predictions output, ignore the binary terminated/active column, and just use the probability score column. With this I can create my own risk categories, bucketing people into Low, Medium, and High Risk. The high-risk people may be only the top 100 or so people that I have high confidence are at risk. Particularly if you are going to focus on a group of people, you probably want to focus on a smaller group to start with.
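As a small illustration of bucketing on the probability score rather than the binary answer, the snippet below reads a batch prediction output and cuts it into risk categories. The column names and thresholds are assumptions; inspect your own output file and calibrate the cut-offs by backtesting against actual terminations.

```python
import pandas as pd

# Batch prediction output joined back to an employee id; column names are assumptions
# about the output format, so check your own file first.
preds = pd.read_csv("batch_prediction_output.csv")  # e.g. employee_id, bestAnswer, score

# Ignore the binary answer and bucket on the probability score instead.
# Thresholds here are placeholders, not recommendations.
bins = [0.0, 0.4, 0.7, 1.0]
labels = ["Low Risk", "Medium Risk", "High Risk"]
preds["risk_bucket"] = pd.cut(preds["score"], bins=bins, labels=labels, include_lowest=True)

print(preds["risk_bucket"].value_counts())

# Optionally cap the focus group at a fixed size, e.g. the 100 highest-scoring people.
top_100 = preds.nlargest(100, "score")
```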
If creating your own risk buckets, I will plot out these scores and the actual results and decide which scores fit into each bucket. To do this you need to test historically to see how the model performs and to help guide your decision. Watch the model and its results over time; don't do anything about the results just yet, but try to understand how it is performing and whether you can be confident in what it is predicting. MOST IMPORTANTLY - if you have enough confidence to start putting retention strategies in place for these people at risk, you must record this action. The action or lack of action needs to feed back into the model, as it may affect behaviors, and its absence from the model will pollute the model's accuracy over time. I generally describe this as my Back to the Future theory of turnover risk: if you take an action and the model doesn't know about it, you are effectively changing the past and destroying its prediction of the future. Why we didn't use these tools ourselves The toolsets available from AWS, Google, and Azure are fantastic, easy entry points to start using your data in a predictive fashion. For One Model, though, they did not provide enough levers to pull when data or workforce behaviors don't fit into the out-of-the-box view from these simplified toolsets. We needed a solution that would allow us to roll into any customer, evaluate all data for that customer, test through thousands of models, and build the most effective predictive model for any target. What's more, we wanted to open this capability to our customers, whether they wanted to create their own models in a few clicks or had their own data science team and wished to run their own predictive or statistical models in our infrastructure. We couldn't achieve these objectives with the off-the-shelf toolsets, so we had to build our own approach that gave us this flexibility. One AI, the new name for our augmentations, is the result, and I am obviously biased, but it is truly amazing. One AI is a collection of advanced calculations (feature engineering), data extensions (commute time, stock price, social data, etc.), and the application of our automated machine learning frameworks. It can concurrently test thousands of models and select the most accurate model for the target and the customer's data set. For one problem it may choose a basic decision tree; for the next it will decide a neural network works best, and it's able to do this in minutes. The customer, though, still has the ability to adjust, customize, and put their own stamp on the models in use. One of the biggest drawbacks of the black-box methods is that you have very little explanation as to why a prediction is made, which meant we couldn't provide our customers with the reasons why a person was at risk or what to do about it. In One AI we've built an explanation and prescriptive action facility to be able to show, for each person, the reasons why their prediction was made and what the biggest levers are to change this prediction. We'll be officially announcing One AI shortly and making collateral available on our website. In the meantime, if you would like to talk about our framework sooner, please contact us.

    Read Article

    11 min read
    Chris Butler

“We scoffed when you predicted he would leave, six weeks later he was gone. Never in a million years would I have said he would leave” — One Model AWS ML Test Customer Prediction is becoming a Commodity I've been meaning to write this post for a couple of years now, after first testing AWS machine learning tools for use with our customers' data sets. Prediction is becoming commoditized by highly available and inexpensive tools like AWS Machine Learning, Google's Cloud Machine Learning Engine, and Microsoft's Azure ML platform. It is now easy to take advantage of machine learning at a ridiculously low cost, to the point that anyone can pick it up and start using the toolsets. For HR this means any analyst can cobble together a data set, build a predictive model, and generate predictions without a data science team and with no advanced knowledge required. Further below I give a rundown on how to create your own attrition risk model and predictions using Amazon's machine learning service, but first I'll discuss some of the observations we've had in using the service. When everything works well Right out of the gate I had a good experience using these toolsets: I loaded a fairly simple data set of about 20 employee attributes (known as data features) and ran through the available UI wizard, creating a predictive model. Even before generating a set of predictions, the data source and model created provide some interesting information to look at: correlations, and a test set of predictions to see how well the model was expected to perform. You can see in the images above an example of the data correlations to the target (termination), and the performance of the model itself in a test evaluation. An encouraging first step, and so far I'd spent about $0.05 in processing time. I loaded a data file of employees that I wanted to run a prediction on, and a couple of minutes later we had a probability score and prediction for each person in the organization. The performance wasn't quite as good as the evaluation test, but it was still quite significant. I ran this test on a historical dataset (data as at one year ago) and could check the real-life performance of the model using actual terminations since that time. It wasn't bad: around 65% of people the model predicted as a 1 (terminated) ended up leaving the organization. This was on a data set that had a historical termination rate of ~20%. With some minor tweaking (adding additional data features, removing some others that looked problematic) and running the models and predictions monthly to incorporate new hires, we pushed the performance up to an average of 75% over the following 12 months. That means 75% of the people the machine said would leave did so in the next 12 months. Not bad at all. For one of our customer tests, we found 65 high-performing employees that were at risk of leaving. That's a turnover cost equivalent to at least $6,000,000, and this was on the first run, only two weeks after they started their subscription with us. In fact, if they could save even one of those persons from leaving they would have well and truly paid for our subscription cost, let alone the $15 it cost me to run the model. I mocked up a dashboard on our demo site, below, similar to the one delivered to the customer. Since testing with other real-world data sets I have the below observations about where the AWS tools work well.
It works really well on higher-turnover organizations: you simply have more patterns and more data to work with, and a statistically greater chance of someone leaving. With turnover greater than 15% you can expect good performance. Simple feature sets work well with high-turnover organizations, i.e. employment attributes, performance, etc. I would, however, always add in more calculated features to see if they correlate, e.g. time since last promotion/transfer/position change, supervisor changes, peer terminations, etc. The less turnover you have, the more important these additional data features are. A model generated across the whole company's data worked just as well as a model generated across a subset, i.e. sales or engineering. Great: for the most part we could generate a single model and use it against the whole organization. Ignore the categorical prediction (1 vs 0) and instead use the probability score to create your own predictions or buckets; I've found it easier to look at and bucket populations into risk categories using this method and obtain populations with probability values that we can focus on. This is particularly useful when we want to bucket, say, the top 600 or the top 12% of our population to match our historic turnover. I found the best test of performance before applying to current data was to run one model every month for a historical period, say the last two to three years (24-36 monthly models), load the results into a database, and see how the models perform over time. It allows you to take a wider view of the models' performance. When everything falls apart Well, not quite, but it doesn't always perform as well as you might expect: conversely to the above, I've run tests on organizations where I haven't seen the same stellar outcomes, or where the model works really well for a period of time but then dives off a cliff with no explanation, as you can see in the below image. This is an example where we had a model that was performing really well until the turnover behaviour changed completely and was no longer predictable with the data we had feeding the model. This could happen with any model, but we had a particular issue trying to overcome it with the limited set of levers we could pull in AWS. You can see that the new behaviours were being identified, but it took time for the model to re-learn and regain its performance. A note on the metrics used below: I like to use Termination Rate - Annualized as a measure of performance because typically we run and make these predictions monthly, so the populations in each bucket are changing as new hires are made, terminations leave, and people's attributes change, which may move them between risk categories. This is the reason why you will see rates exceeding 100%, as the denominator population is being refreshed with new people in the risk bucket, i.e. Termination Rate - Annualized: High Risk = Terminations: High Risk / Average Headcount: High Risk (annualized, of course). Generally, I've seen lower performance working with organizations that have low turnover (<8%) or are just relatively small. There just were not enough reliable patterns in the data to be able to obtain the same level of gains that we see in higher-turnover, larger organizations. You can increase performance by adding more features that may show additional patterns, but in the testing we did we could only get so far with the data available.
However, while we had lower performance, we still saw turnover rates (terminations/average headcount) for high-risk populations around the 40-60% mark, which is still significantly better than the average turnover and provides a population to go and work with, so the effort is not wasted. To counter some of this, you can use the probability scores to create risk buckets where you focus on precision of the prediction while sacrificing recall (the number of terminations captured). In this way you can be quite confident about a population, even though it will be a smaller subset of the terminated population. Ultimately we didn't use these tools in a production capacity because we needed to overcome a different set of challenges that individual organizations don't have to deal with, i.e. how to deliver at scale for any customer, with any size (even small companies) and shape of data set, to do so regularly, and to always be at the highest level of accuracy. The automated tools available just couldn't meet our requirements, and I'll discuss some of those reasons below; we had to build our own machine learning for HR, which we will release some content around soon. In people analytics the most common use case for prediction is still turnover, as it represents a huge cost to the business and data is for the most part readily available. Next we will spin up a model in AWS and generate some predictions. Stay tuned for Part 2. If you would like to talk about our framework sooner, please contact us.


    7 min read
    Chris Butler

The biggest change in people analytics that surprised me in 2017 wasn't any new leap in technology or shiny new object. For me, it was the growth in interest and uptake by smaller organizations. Traditionally this space has been reserved for companies that had statistically significant populations and budgets to match. They could hire a team to build and grow HR analytics and had discretionary budget to spend on tool-sets to assist them. A few years ago you would rarely have seen a company with fewer than 5,000 employees spending resources on these initiatives. Over the last couple of years, and this year in particular, we've seen a substantial increase in appetite from companies with fewer than 1,000 employees. In fact, the smallest company I spoke to in 2017 was barely over 100 employees. These companies are not just kicking tires either; they are purchasing technology, hiring people analysts, and making outsized gains in capability compared to their larger peers. Budget is being procured and goals are being set that would shame many large organizations. Did you know that 35% of our new customers in 2017 came from organizations with fewer than 1,000 employees?

What are they doing? Basically the same activities as larger organizations. They are gathering and making sense of all their people data, delivering reporting and analytics to their business users, and moving into advanced analytics across the hire-to-retire spectrum. Goals are lofty, and without a significant organizational burden they are able to move fast. Time from analysis to decision to action is on the order of days, if not hours.

System complexity is a major challenge. Smaller companies still struggle with the same challenges; often the system complexity is the same as, or sometimes greater than, in larger organizations. We have observed that smaller companies have often collected a number of systems to help make life easier, but generally they're sourced from a rainbow of vendors, and they are often the bright new shiny applications that don't have the maturity to provide the level of data detail and access that a more established vendor may provide. At this size it is also much easier to transition to a different product or spin up a new technology, which, while collecting some great data, can make it much more difficult to merge these systems together for analysis. Overall I think many organizations at this scale have much richer data than many of their larger peers, but it is more fragmented.

How can you possibly find statistical significance with a small population? From my conversations with these companies, this is a known factor in how they conduct analysis and interpret their findings. It's not a showstopper, just another data point to be kept in mind. We personally had to adapt some of our machine learning prediction functionality to cater for smaller companies. A predictive attrition model, for example, generally works better the more terminations you have; with a small population of terminations you typically won't do so well. "For smaller organizations we now employ a method of synthetically creating data that is not the same as, but is representative of, the original data set - essentially making a 500-person company look like a 50,000-person company." One Model has had great success in applying this to very large enterprises as well, to enhance the behaviors and patterns seen in the data. There are options to overcome the smaller data set challenges, as the sketch below illustrates.
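One Model's own synthetic-data method quoted above isn't detailed here, so as a stand-in illustration of the general idea, here is a hedged sketch using SMOTE from the imbalanced-learn package to synthetically enlarge the rare termination class in a toy data set; the sample sizes, class weights, and feature count are invented for the example.

import numpy as np
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Toy stand-in for a small company: ~500 employees, ~8% terminations.
X, y = make_classification(
    n_samples=500, n_features=12, weights=[0.92, 0.08], random_state=0
)

# Generate synthetic minority-class examples that are representative of,
# but not identical to, the originals, so the model sees more patterns.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

print("class counts before:", np.bincount(y), "after:", np.bincount(y_res))

Techniques like this only amplify patterns that already exist in the data, so they mitigate, rather than remove, the small-population caveats above.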
Most people we know in this space don't care that they only have 500 people, because our software allows us to deliver value to their organization regardless.

Is it a passing fad for smaller organizations? I don't know yet, but I don't think so. We have to keep in mind that the number of smaller organizations is orders of magnitude greater than the number of larger organizations, and it really is only the most forward-thinking of these companies that are undertaking these activities. Typically (but not always) it's the companies that are growing, doing well, and apt to hire people who are interested in using data to support decisions (think technology and bio-tech). This is not every smaller company, but my belief is that the entry point for HR analytics is coming earlier and earlier in an organization's growth curve.

What does it mean for us in larger organizations and the space in general?
- Increased demand for HR analytics skill sets from smaller companies (more choice in who you work for). Conversely, if you need to hire for your people analytics practice, don't discount people who have worked for smaller companies - you may find some great candidates in this pool.
- Any human capital competitive advantage you have in being a larger company is being assumed by your smaller competitors.
- An increase in the number of vendors supplying technology in the space; small companies are typically the entry point for new startups.
- With the availability of technology targeted at the systems smaller companies use, expect the adoption rate, and therefore the effects of the first two points, to increase.
- We're going to see different examples of the use of people analytics at smaller-scale companies; these will be interesting, and the learnings may even apply to smaller business units within large organizations.

We're giving away free 30-minute consultations to help companies take charge of their HR people analytics and data in 2018. Would you like to learn how we can help you take your people analytics and workforce data to the next level? Take advantage! Click here, or click on the button below, to schedule a complimentary consultation. One of our team members will get in touch and speak with you one-on-one to address any specific challenges your company might have. Cheers to a new year, Chris Butler, One Model CEO


    6 min read
    Chris Butler

On behalf of the One Model team, I am excited to announce, on the third anniversary of our founding, that we have secured an amazing seed round investment of $3.7M to take our people data platform to the next level for our customers. 2017 has been an incredible year of growth for us and has shown that our approach and value proposition resonate strongly with customers. So much so that we are yet to lose a single customer (0% churn), which is just about unheard of for a SaaS company that is now three years old. Our vision hasn't changed: we believe that in order to fully deliver on the value to be found in people data, organizations need One Model to connect and understand the data held in the dozens of systems they use to manage the workforce today. Effectively we become a secondary system of record, free of the constraints that transactional systems (HRIS, ATS, Talent Management, Payroll) suffer from. The last three years have been spent building out our core data platform: to connect and accept data from any source, to understand all the behaviors between data sets, and to deliver our bespoke reporting and analytics platform. With this powerful framework in place we can add in more of the high-value use cases that ordinary organizations would never be able to achieve on their own: extending data with external sources, advanced algorithmic calculations, your own custom R/Python programs, and our incredible new automated machine learning tools, all running within our data pipeline and managed by HR. We're incredibly excited about our immediate future, and this investment gives us the resources to chase it down. Chris Butler (press release below)

One Model Secures $3.7 Million in Funding to Fuel Growth in HR Data and Analytics Software Market

Austin, Texas, November 1, 2017 - One Model, the people data strategy platform, announced the closing of $3.7 million in Series Seed funding from The Geekdom Fund, Otter Consulting, Techstars, and Lontra Ventures. The One Model team will leverage this additional funding to fuel its international growth strategy, accelerate enterprise adoption of its products, and further develop its leading HR data and analytics platform.

"Getting to know the One Model team over the past couple of years made it an easy decision for us to want to lead this round. From the beginning, the team has been able to address major enterprise needs with their powerful HR data analytics platform, driving data insights from machine learning and delivering this to customers flexibly while making implementation easy," said Don Douglas, Managing Director of The Geekdom Fund. "One Model's people analytics infrastructure has changed how a number of organizations plan, execute, and evaluate their HR strategies, and we are excited to support the proliferation of this game-changing platform."

"The HR departments of multinationals see the value proposition that One Model brings to their infrastructure, evidenced by the rapid growth One Model has experienced. The implementation time and dollars saved are enormous," states Mike Wohl, Investment Manager of Otter Consulting. "The future looks very bright for One Model and all of the companies that utilize their offering."

The Austin, Texas-based startup is uniquely positioned to address a key pain point within the HR industry and is primed for growth. The company's platform takes the heavy lifting out of extracting, cleansing, modelling, and delivering analytics from your workforce data.
"One Model sits at the center of all people data held by an organization. As such, we're in a unique position to understand, extend, and deliver transformative value to organizations from this data. Our vision is that every company will need what amounts to a secondary system of record that connects together all of their disparate people systems and provides a level of insight that no transactional system can achieve on its own. We're only beginning to scratch the surface of what is possible with the level of HR system interaction we are now achieving, and this investment allows us to double down on our approach," says Chris Butler, CEO of One Model.

Founded in late 2014, the company has rapidly grown to support the HR data and analytics needs of customers in over 156 cities around the world. This additional round of funding further validates the universal need for improved HR data and analytics management, and One Model's decision to assume a leadership role in addressing these data challenges head-on.

"One Model leads the charge as the HR industry embraces analytics to improve career satisfaction, retention, and equity. The team is comprised of true industry experts who understand the nuances of enterprise software and the power of machine learning. One Model's robust pipeline of enterprise and channel customers will transform the lives of millions of professionals across the globe," according to Andrea Kalmans, Lontra Ventures.

About One Model
One Model provides a data management platform and a comprehensive suite of people analytics drawn directly from various HR technology platforms to measure all aspects of the employee lifecycle. Use our out-of-the-box integrations, metrics, analytics, and dashboards, or create your own as you need to. We provide a full platform for delivering more information, measurement, and accountability from your team. Request a demo today at http://www.onemodel.co.

About The Geekdom Fund
The Geekdom Fund is a venture capital fund that invests in early-stage IT startups in San Antonio, South Texas, and beyond. It is managed by Riverwalk Capital, LLC.

About Lontra Ventures
Lontra Ventures is an Austin, Texas-based entrepreneurial consultancy that specializes in life science consulting and technology for high-growth companies.

About Otter Consulting
Otter Consulting, LLC operates as a venture capital firm. The company, which is headquartered in Florida, provides early-stage venture capital financing services.

About Techstars
Techstars Ventures is the venture capital arm of Techstars. Techstars Ventures has $265M under management and is currently investing out of its third fund ($150M). Alongside the VC and angel communities, they co-invest in companies built by Techstars accelerator companies and alumni.

For questions, please contact Stacia Damron, Senior Marketing Manager, at stacia.damron@onemodel.co.


    13 min read
    Chris Butler

I recently made a simple post on LinkedIn which received a crazy number of views and overwhelmed us with requests to take a look at what we had built. The simple release was that we had managed to take Workday's point-in-time (snapshot) based reporting and rebuild a data schema that is effective-dated and transactional in nature. The vast majority of organizations and people analytics vendors use snapshots for extracting data from Workday, because this is really the only choice they've been given to access the data.

We don't like snapshots, for several reasons:
- They are inaccurate - you will typically miss the changes occurring between snapshots, which makes it impossible to track data/attribute changes in between, to pro-rate, or to create analysis any deeper than the snapshot's time context.
- They are inflexible - an object or time context has already been applied to the data, which you can't change without replacing the entire data set with a new context.
- They don't allow for changes - if data is corrected or changed in history, you need to replace the entire data set. Urggh.
- External data is difficult to connect - without effective dating, joining in any external data means you have to assume a connection point and apply that time context's values to the external data set. This compounds the inaccuracy problem if you end up having to snapshot the external data as well.
- A pain in the #$% - to pull snapshots from Workday now, you need to create a report for each snapshot period you need. Three years of data with a month-end snapshot? That's 36 reports to build and maintain.

With our background in working with raw data directly from HR systems, this approach wasn't going to cut the mustard and couldn't deliver the accuracy that should be the basis of an HR data strategy. The solution is not to buy Workday's big data tools, because you're going to be living with many of the same challenges. You need to take the existing structure, enhance it, and fundamentally reconstruct a data architecture that solves these problems. We do just that: we extract all employee and object data, analyze the data as it flows, and generate additional requests to the Workday API that work through the history of each object. Data is materialized into a schema close to the original, but with additional effective-dated transactional records that you just wouldn't see in a snapshot-based schema. This becomes our raw data input into One Model, delivered to your own warehouses to be used any way you wish. The resulting dataset is perfect for delivering accurate, flexible reporting and analytics. The final structure is actually closer to what you would see in a traditional relational schema used by the HRIS sold by SAP, Oracle, PeopleSoft, etc. Say what you will about the interfaces of these systems but, for the most part, the way they manage data is better suited to reporting and analytics. Now don't get me wrong: this is one area most people know Workday lags in, and in my opinion it should be a low-priority decision point when selecting an HRIS. Don't compromise the value of a good transactional fit of an HRIS for your business in an attempt to solve for reporting and analytics capability, because ultimately you will be disappointed. Choose the HRIS that fits how your business operates, and solve for the reporting and analytics needs in another solution as needed. Time to get a little more technical.
What I'm going to discuss below is the original availability format of the data, in comparison to the approach we take at One Model.

Object-Oriented - The Why of the Snapshot
Okay, so we all know that Workday employs an object-oriented approach to storing data, which is impressively effective for its transactional use case. It's also quite good at storing the historical states of an object. You can see what I mean by taking a look at the API references below: The above means the history itself is there, but the native format for access is a snapshot at a specific point in time. We need to find a way of accessing this history and making the data useful for more advanced reporting and analytics.

Time Context
In providing a point in time, we are applying a time context to the data at the point of extraction. This context is then static and will never change unless you replace the data set with a different time context. Snapshot extractions are simply a collection of records with a time context applied. Often, when extracting for analytics, companies will take a snapshot at the end of each month for each person or object. We get a result set similar to the below: The above is a simple approach, but it will miss the changes that occur between snapshots, because they're effectively hidden and ignored. When connecting external data sets that are properly effective-dated, you will need to decide which snapshot to report against, but you simply don't have enough information available to make this connection correctly. A snapshot is an inaccurate representation of what is really occurring in the data set; it's terrible for pro-rating calculations to departments or cost centers, and even something as basic as an average headcount is severely limited. Close enough is not good enough. If you are not starting out with a basis of accuracy, then everything you do downstream has the potential to be compromised.

Remove the Context of Time
There's a better way to represent data for reporting and analytics:
- Connect transactional events into a timeline.
- Extract the details associated with the events.
- Collapse the record set to provide an effective-dated set of records.
The above distills the number of records down to only what is needed and matches transactional and other object changes, which means you can join to the data set at the correct point in time rather than approximating.

Time Becomes a Flexible Concept
This change requires that you apply a time context at query time, providing infinite flexibility for aligning data with different time constructs, such as:
- Calendar
- Fiscal
- Pay periods
- Weeks
- Any time construct you can think of
It's a simple enough join to create the linkage:
left outer join timeperiods tp on tp.date between employee.effective_date and employee.end_date
We are joining at the day level here, which gives us the most flexibility and accuracy but will absolutely explode the number of records used in calculations into the millions and potentially billions of intersections. For us at One Model, accuracy is a worthwhile trade-off, and the volume of data can be dealt with using clever query construction and, of course, some heavy compute power. We recently moved to a Graphics Processing Unit (GPU)-powered database because, really, why would you have dozens of compute cores when you can have thousands? (And, as a side note, it also allows us to run R and Python directly in the warehouse #realtimedatascience.)
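To illustrate the effective-dating and query-time time-context ideas above, here is a minimal pandas sketch. It derives end dates from transaction rows and then joins to a day-level calendar in the same spirit as the SQL join shown; the table and column names, and the open-ended 2099 end date, are illustrative assumptions rather than One Model's actual schema.

import pandas as pd

# Illustrative transaction-style history for one employee: each row is the
# state that became effective on 'effective_date'.
emp = pd.DataFrame({
    "employee_id": [1, 1, 1],
    "effective_date": pd.to_datetime(["2017-01-01", "2017-03-15", "2017-09-01"]),
    "department": ["Sales", "Sales", "Engineering"],
    "grade": ["S1", "S2", "S2"],
})

# Effective-date the records: each runs until the day before the next begins.
emp = emp.sort_values(["employee_id", "effective_date"])
emp["end_date"] = (
    emp.groupby("employee_id")["effective_date"].shift(-1) - pd.Timedelta(days=1)
).fillna(pd.Timestamp("2099-12-31"))  # open-ended current record

# Day-level calendar join, the pandas analogue of:
#   left outer join timeperiods tp
#     on tp.date between employee.effective_date and employee.end_date
calendar = pd.DataFrame({"date": pd.date_range("2017-01-01", "2017-12-31")})
daily = calendar.merge(emp, how="cross")
daily = daily[daily["date"].between(daily["effective_date"], daily["end_date"])]

# Any time context can now be applied at query time, e.g. month-end state.
month_end = daily[daily["date"].dt.is_month_end]
print(month_end[["date", "department", "grade"]])

Converting back to a month-end snapshot, as in the last two lines, shows how a snapshot becomes just one of many possible outputs of the effective-dated data rather than the storage format itself.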
More on the GPU-powered database in a future post, but for a quick comparison, take a look at the Mythbusters demonstration.

What About Other Objects?
We apply the same approach to the related objects within Workday so that we're building a historical, effective-dated representation over time. Not all objects support this, so there are some alternative methods for building history.

Retroactive Changes?
Data changes and corrections occur all the time; we regularly see change volumes that are most active in the last six months but can reach several years into the past. Snapshots often ignore these changes unless you replace the complete data set on each load. The smarter way is to identify the changes and replace only the data that is affected (i.e., replace all historical data for a person who has had a retroactive change). This approach facilitates a changes-only feed and can get you close to a near-real-time data set. I say "close to near-real-time" because the Workday API is quite slow, so speed will differ depending on the number of changes occurring.

Okay, So How Do You Accomplish This Magic?
We have built our own integration software specifically for Workday that accomplishes all of the above. It follows this sequence:
1. Extracts all object data, and for each object it...
2. Evaluates the data flow and identifies where additional requests are needed to extract historical data at a different time context, then...
3. Merges these records, collapses them, and effective-dates each record.
4. We now have an effective-dated historical extract of each object sourced from the Workday API. This is considered the raw input source into One Model; it is highly normalized and enormous in scope, as most customers have 300+ tables extracted. The pattern in the image below is a representation of each object coming through; you can individually select the object slice itself.
5. The One Model modeling and calculation engines take over to make sense of the highly normalized schema, connect in any other data sources available, and deliver a cohesive data warehouse built specifically for HR data.
6. Data is available in our toolsets, or you have the option to plug in your own software like Tableau, PowerBI, Qlik, SAS, etc.
7. One Model is up and running in a few days. To accomplish all of the above, all we need is a set of authorized API credentials with access to the objects you'd like us to access.
8. With the data model constructed, the storyboards, dashboards, and querying capabilities are immediately available. Examples:

Flexibility - The Biggest Advantage You Now Have
We now have virtually all data extracted from Workday in a historically accurate, transaction-based format that is perfect for integrating additional data sources or generating an output with any desired time context (even converting back to snapshots, if required). Successful reporting and analytics with Workday starts with having a data strategy for overcoming the inherent limitations of a native architecture that is just not built for this purpose. We're HR data and people analytics experts, and we do this all day long. If you would like to take a look, please feel free to contact us or book some time to talk directly below. Learn more about One Model's Workday Integration. Book a Demo
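As a postscript to the retroactive-changes point above, here is a hedged sketch of one way the "replace only what changed" idea can be implemented: fingerprint each employee's full effective-dated history and reload only employees whose fingerprint differs from the previous run. The function and column names are illustrative; One Model's actual integration logic is not shown here.

import hashlib
import pandas as pd

def history_fingerprints(df: pd.DataFrame) -> pd.Series:
    """One hash per employee covering their entire effective-dated history."""
    def fingerprint(group: pd.DataFrame) -> str:
        payload = group.sort_values("effective_date").to_csv(index=False)
        return hashlib.sha256(payload.encode()).hexdigest()
    return df.groupby("employee_id").apply(fingerprint)

# 'previous' is the history loaded last run; 'latest' is today's extract.
# Employees whose fingerprint changed (or who are new) get their history
# replaced in the warehouse; everyone else is left untouched.
def employees_to_reload(previous: pd.DataFrame, latest: pd.DataFrame) -> list:
    old, new = history_fingerprints(previous), history_fingerprints(latest)
    changed = new[new.ne(old.reindex(new.index))].index
    return sorted(changed)

Only those employees' histories would then be re-requested and replaced, which is what makes a changes-only, near-real-time feed feasible even when the source API is slow.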


    2 min read
    Chris Butler

The One Model team has a huge amount of experience in the HR data and analytics field. Our careers started at Infohrm, the world's first SaaS workforce analytics provider. Infohrm was acquired by SuccessFactors in 2010, and we later moved into SAP with its acquisition of SuccessFactors in 2012. As a result, we've worked with more customers across more data sources than just about anyone else in the world. Customers range from 200 employees right through to 600,000-employee behemoths. This experience has earned us a unique perspective on how organizations currently use their people data, how they could be using it in a perfect world, and the amount of supporting technology that is available to them. We've learned that data, and the correct management of it, is the real key to organizations becoming successful with their talent analytics programs. Every company I have ever met struggles with its HR data. Visualization tools are a red herring to true capability without a properly constructed and maintained method for bringing together all of your HR technology data; they will give you some early wins, but you'll soon outgrow their capability with nowhere else to go. Analytics, planning, and even application integration should flow as a natural byproduct of a well-executed data strategy. This is what we bring to our customers with One Model: all of your HR technology data brought together in a single unified source, automatically organized into expert-built data models ready for intelligence and to support any other use case. With all of your data together, regardless of source, the opportunities for using this data, for choosing better business software, and for interaction between data sets become limitless. Our passion is for this data set and the HR challenges we can solve with it. We have always wanted to be able to build, without restriction, the tools to collect data and the calculations, algorithms, and thought-leadership initiatives we know our customers want. One Model is architected exactly for that: highly automated, flexible, intuitive, and open to any other toolset you may have already invested in or want to invest in. Easily use Tableau, QlikView, Excel, and SuccessFactors Workforce Analytics. See how we compare to the competition. We are looking for more great customers to come on board and help us refine our roadmap and prioritize the capabilities important to you. On-premise or cloud sources, we're ready to onboard your data and give you complete control. Please contact me if you would like to join our customer engagement program.
