Featured
3 min read
The One Model Team
One Model was founded with the goal of helping teams tell data-informed stories that lead to brilliant, data-driven talent decisions. By combining data and story, we can help teams communicate a deeper understanding of the tangible benefits of diversity, equity, and inclusion initiatives, and how they contribute to the success of a business. Data-informed stories can be a powerful tool for uncovering how the work environment is impacting our employees. Through data, we can demonstrate the positive impact of treating people well, and how this can drive business success.

Let's walk through one of the "classic stories" we hear in HR and people analytics for a fictional organisation called Innovative Enterprise. We'll start with the story, introduce the data, and then apply One Model's data-informed storytelling framework to show how our platform easily weaves the narratives together. This is a common story that HR teams are asked to tell around employee experience and the impact that a positive work environment can have on the overall business.

Story alone: Within Innovative Enterprise, while we have a diverse workforce, this diversity is yet to permeate our leadership effectively. Our leadership team, although competent and committed, does not fully represent the diverse perspectives present within our broader team. This lack of representation in leadership could potentially influence our culture and engagement levels.

Data alone: Internal data at Innovative Enterprise shows that while 49% of our workforce identifies as ethnically diverse, only 15% of our leadership does. Recent industry studies analysed by the people analytics team indicate that organisations with diverse leadership teams outperform those without by 35% in terms of innovation and creativity. Moreover, organisations that boast diverse leadership report a 25% higher employee satisfaction score compared to companies with less diverse leadership teams.

Data story: At Innovative Enterprise, the lack of diversity in our leadership team becomes evident. Our internal data reveals that while our workforce is 49% ethnically diverse, only 15% of our leadership reflects this diversity. It's clear we're falling short, and this is a challenge that we share with many organisations across our industry. However, industry data provides a clear directive: organisations with diverse leadership teams are 35% more innovative and creative. They also report a 25% higher employee satisfaction score, indicating a more engaged and motivated workforce. This compelling combination of our internal situation and broader industry data paints a powerful argument for enhancing diversity, equity, and inclusion at the leadership level. The data provides clear guidance: it's time for us to take action.

Ready to learn more?

This example from Innovative Enterprise demonstrates the power of data-informed storytelling in HR. For more impactful stories and detailed analysis, download our eBook Why Data-Informed Storytelling Is the Future of HR to explore additional examples and learn how One Model can help your organisation tell compelling, data-driven stories.
Read Article
Featured
5 min read
Dennis Behrman
We asked our friends at Culture Curated why organizations should have a strong focus on human resource compliance. That led to a more foundational question: What is an organization's culture?

Culture: Ping Pong Tables or Compliance in HR?

In the quest to boost workplace culture (and thus performance), our initial instinct might be to think of adding fun elements, like a ping pong table in the break room. However, the journey to improving corporate culture delves much deeper than surface-level entertainment. It begins with the bedrock of strong human resource compliance. In fact, "Any good culture is going to be built on the foundation of strong compliance," says Season Chapman, Partner & Principal Consultant of Culture Curated. "It's about how we must treat people."

But compliance isn't just about adhering to HR compliance laws or procuring a human resources compliance solution. It's about establishing a framework within which people are treated fairly and decisions are made responsibly. This foundation of compliance in HR is essential, not just for its own sake, but as the ground floor upon which the rest of the company culture is built.

Laying Your Culture's Foundation: Accountability and Belonging

Moving beyond the notion that culture is merely about having fun, culture is, at its core, about accountability, achieving results, and fostering trust among team members. But how do we shift the conversation towards these deeper aspects of culture? The answer lies in starting with human resources compliance as the base layer.

Drawing from psychological principles, humans seek a sense of belonging and connection. They want to feel aligned with the company's mission and vision. The secret to that goal starts with a focus on building meaningful relationships with employees and fostering a sense of belonging and support. In today's workplace, the concept of psychological safety is paramount for cultivating a culture where employees feel confident in sharing ideas. This safe space is critical for a vibrant, innovative workplace culture.

Starting the Journey Towards a Balanced Culture

So, how does an organization embark on this journey towards a culture that balances fun, compliance, and psychological safety? According to Yuliana Lopez, Partner & Principal Consultant of Culture Curated, "The starting point is an organizational assessment." She explains that such assessments gauge the current state of compliance and how employees feel about their work environment and relationships with peers. This comprehensive evaluation can identify areas for improvement and set the stage for developing a culture that not only meets legal requirements but also fulfills and inspires its workforce.

Are You Ready for the Coming Wave of AI Regulation for Human Resources?

How One Model Helps With Compliance Foundations

One Model assists with compliance by providing an integrated analytics platform designed to manage and analyze workforce data according to legal standards and best practices. The platform includes:

- Advanced analytics and reporting capabilities that enable compliance with regulatory requirements.
- Robust data security and privacy measures to protect sensitive information and comply with data protection regulations.
- Role-based access controls so that only authorized personnel have access to sensitive data, helping maintain continuous compliance with labor laws, occupational safety requirements, and other standards.
- Customizable dashboards to monitor key compliance indicators, from wage and hour laws to benefits regulations and beyond.

While the allure of quick fixes like a ping pong table may seem like an easy way to boost morale, the real work of improving culture goes much deeper. By establishing a strong foundation of compliance with human resource compliance solutions like One Model, organizations can lay the groundwork for a positive culture. This foundation enables leaders to enhance performance, foster genuine connections, and support the well-being of every employee.

Wondering about compliance in the world of AI and Machine Learning? We've got you covered.

1. Understand how ethics are changing in a world with AI. Read more.
2. Be prepared for the regulations coming to HR. Join the Regulations and Standards Masterclass today. Learning about AI regulations and standards for HR has never been easier with this enlightening video series from experts across the space sharing the key concepts you need to know.
Read Article
Featured
5 min read
Joe Grohovsky
In a recent editorial (here), Emerging Intelligence Columnist John Sumser explains how pending EU Artificial Intelligence (AI) regulations will impact its global use. A summary of those regulations can be found here. You and your organization should take an interest in these developments, because yes, there are HR legal concerns over AI. The moral and ethical concerns associated with the application of AI are something we must all understand in the coming years. Ignorance of AI capabilities and ramifications can no longer be an excuse.

Sumser explains how this new legislation will add obligations and restrictions beyond existing GDPR requirements, and that this legislation applies to machine learning in human resources. The expectation is that legal oversight will arise that may expose People Analytics users and their vendors to liability. These regulations may bode poorly for People Analytics providers. It is worth your while to review what is being drafted related to machine learning and the law, as well as how your current vendor addresses the three primary topics from these regulations:

Fairness – This can address both the training data used in your predictive model and the model itself. Potential bias toward attributes like gender or race may be obvious, but hidden bias often exists. Your vendor should identify biased data and allow you to either remove it or debias it.

Transparency – All activity related to your predictive runs should be identifiable and auditable. This includes the selection and disclosure of data, the strength of the models developed, and the configurations used for data augmentation.

Individual control over their own data – This relationship ultimately exists between the worker and their employer. Sumser's article expertly summarizes a set of minimum expectations your employees deserve.

When it comes to HR law, our opinion is that vendors should have already self-adopted these types of standards, and we are delighted this issue is being raised.

What are the differences between regulations and standards? Become a more informed HR Leader by watching our Masterclass Series.

Why One Model Is Preferred When It Comes to Machine Learning and the Law

At One Model we are consistently examining the ethical issues associated with AI. One Model already meets and exceeds the Fairness and Transparency recommendations; not begrudgingly but happily, because it is the right thing to do. Where most competitors put your data into a proverbial AI black box, One Model opens its platform and allows full transparency and even modification of the AI algorithms your company uses. One Model has long understood HR law and the industry's obligation to develop rigor and understanding around Data Science and Machine Learning. The obvious need for regulation and a legal standard for ethics has risen with the amount of snake oil and obscurity being heavily marketed by some HR People Analytics vendors.

One Model's ongoing plan to empower your HR AI initiatives includes:

- Radical transparency.
- Full traceability and automated version control (data + model).
- Transparent local and model-level justifications for the predictions that our machine learning component, One AI, makes.

By providing justifications and explanations for our decision-making process, One Model builds paths for user education and auditability for both simple and complex statistics.
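To make the idea of model-level and local justifications concrete, here is a minimal sketch, assuming a scikit-learn workflow with hypothetical employee data and feature names. It is illustrative only and not a description of how One AI is implemented.

```python
# Illustrative sketch only: hypothetical data and features, not One AI's implementation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical employee data: three features and an attrition label.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "tenure_years": rng.uniform(0, 15, 500),
    "engagement_score": rng.uniform(1, 5, 500),
    "pay_ratio_to_market": rng.uniform(0.7, 1.3, 500),
})
df["left_company"] = ((df["engagement_score"] < 2.5) & (rng.random(500) < 0.7)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="left_company"), df["left_company"], random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Model-level justification: which features drive predictions overall?
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean in zip(X_test.columns, imp.importances_mean):
    print(f"{name}: {mean:.3f}")

# Local justification for one employee: per-feature contribution to the log-odds.
employee = X_test.iloc[[0]]
contributions = model.coef_[0] * employee.values[0]
for name, c in zip(X_test.columns, contributions):
    print(f"{name} contributes {c:+.3f} to the log-odds of leaving")
```

The point is not the specific model; it is that every prediction can be traced back to the data and features that produced it.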
Our objective has been to advance the HR landscape by up-skilling analysts within their day-to-day work while still providing the latest advances in statistics and machine learning. Providing clear and educational paths into statistics is at the forefront of our product design and roadmaps, and One Model is just getting started.

You should promptly schedule a review of the AI practices being conducted with your employee data. Ignoring what AI can offer risks putting your organization at a competitive disadvantage. Incorrectly deploying AI practices may expose you to legal risk, employee distrust, compromised ethics, and incorrect conclusions.

One Model is glad to share our expertise around People Analytics AI with you and your team. High-level information on our One AI capability can be found in the following brief video and documents:

https://bit.ly/OneModelPredictiveModeling
https://bit.ly/OneModel-AI
https://bit.ly/HR_MachineLearning

For a more detailed conversation, please schedule a convenient time for a personal discussion. http://bit.ly/OneModelMeeting
Read Article
Featured
1 min read
Lauren Canada
This infographic dives into the IT security risks in the people analytics space, how they can impact your business financially, legally, or otherwise, and how One Model works to limit those risks. Click here to view the full infographic!
Read Article
Featured
10 min read
Dennis Behrman
Ever play with a Magic 8 Ball? Back in the day, you could ask it any question and get an answer in just a few seconds. And if you didn't like its response, you could just shake it again for a new prediction. So simple, so satisfying.

Today's HR teams and businesses obviously need more reliable ways of predicting outcomes and forecasting results than a Magic 8 Ball. But while forecasting and predicting sound similar, they're actually two different problem-solving techniques. Below, we'll go over both and explain what they're best suited for.

What is HR forecasting?

Remember the Magic 8 Ball? At first glance, the Magic 8 Ball "predicts" or "forecasts" an answer to your question. This is not how forecasting works (at least, for successful companies or HR departments). Instead, HR forecasting is a process of predicting or estimating future events based on past and present data, most commonly through the analysis of trends. "Guessing" doesn't cut it.

For example, we could use forecasting to discover how many customer calls Phil, our product evangelist, is likely to receive in the next day. Or how many product demos he'll lead over the next week. The data from previous years is already available in our CRM, and it can help us accurately predict and anticipate future sales and marketing events where Phil may be needed.

A forecast, unlike a prediction, must have logic to it. It must be defendable. This logic is what differentiates it from the Magic 8 Ball's lucky guess. After all, even a broken clock is right twice a day.

What is predictive analytics?

Predictive analytics is the practice of extracting information from existing data sets in order to determine patterns and trends that could potentially predict future outcomes. It doesn't tell you what will happen in the future, but rather, what might happen.

For example, predictive analytics could help identify customers who are likely to purchase our new One AI software over the next 90 days. To do so, we could indicate a desired outcome (a purchase of our people analytics software solution) and work backwards to identify traits in customer data that have previously indicated a customer is ready to make a purchase soon. (For example, they might have decision-making authority on their people analytics team, have an established budget for the project, completed a demo, and found Phil likeable and helpful.) Predictive modeling and analytics would run the data and establish which of these factors actually contributed to the sale. Maybe we'd find out Phil's likability didn't matter because the software was so helpful that customers found value in it anyway. Either way, predictive analytics and predictive modeling would review the data and help us figure that out, a far cry from our Magic 8 Ball.

Managing your people analytics data: how do you know if you need to use forecasting vs. predictive analysis?

Interested in how forecasting and/or predictive modeling can help grow your people analytics capabilities? Do you start with forecasting or predictive modeling? The infographic below (credit to Educba.com - thanks!) is a great place to compare your options.

Recap: Should you use forecasting or predictive analysis to solve your question?

Forecasting is a technique that takes data and predicts the future value of that data by looking at its unique trends. For example: predicting average annual company turnover based on data from 10+ years prior.
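As a minimal sketch of that kind of trend-based forecast (hypothetical turnover figures, not a One Model feature), fitting a simple trend and extrapolating it forward might look like this:

```python
# Illustrative sketch only: hypothetical annual turnover figures.
import numpy as np
from sklearn.linear_model import LinearRegression

# Ten years of annual turnover rates (percent), oldest to newest.
years = np.arange(2014, 2024).reshape(-1, 1)
turnover = np.array([12.1, 12.4, 13.0, 12.8, 13.5, 14.1, 13.9, 14.6, 15.0, 15.3])

# Fit a simple linear trend and extrapolate one year ahead.
trend = LinearRegression().fit(years, turnover)
forecast_2024 = trend.predict(np.array([[2024]]))[0]
print(f"Forecast turnover for 2024: {forecast_2024:.1f}%")
```

Notice that the forecast is just a single future value of the same series; there are no separate input variables.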
Predictive analysis factors in a variety of inputs and predicts future behavior, not just a number. For example: out of this same employee group, which employees are most likely to leave (turnover = the output), based on analyzing past employee data and identifying the indicators (inputs) that often precede the output? In the first case, there is no separate input or output variable, but in the second case, you use several input variables to arrive at an output variable. (A minimal sketch of this predictive approach appears after the glossary below.)

While forecasting is insightful and certainly helpful, predictive analytics can provide you with some pretty powerful people analytics insights. People analytics leaders have definitely caught on. We can help you figure it out and get started.

Want to see how predictive modeling can help your team with its people analytics initiatives? We can jump-start your people analytics team with our Trailblazer quick-start package, which really changes the game by making predictive modeling an agile and iterative process. The best part? It allows you to start now and give your stakeholders a taste without breaking the bank, and it allows you to build your case and lay the groundwork for the larger-scale predictive work you could continue in the future. Want to learn more? Connect with Us.

Forecasting vs. Predictive Analysis: Other Relevant Terms

Machine Learning - Machine learning is a branch of artificial intelligence (AI) where computers learn to act and adapt to new data without being programmed to do so. The computer is able to act independently of human interaction. Read Machine Learning Blog.

Data Science - Data science is the study of big data that seeks to extract meaningful knowledge and insights from large amounts of complex data in various forms.

Data Mining - Data mining is the process of discovering patterns in large data sets.

Big Data - Big data is another term for a data set that's too large or complex for traditional data-processing software. Learn about our data warehouse.

Predictive Modeling - Predictive modeling is a form of artificial intelligence that uses data mining and probability to forecast or estimate more granular, specific outcomes. Learn more about predictive analytics.

Descriptive Analytics - Descriptive analytics is a type of post-mortem analysis in that it looks at past performance. It evaluates that performance by mining historical data to look for the reasons behind previous successes and failures.

Prescriptive Analytics - Prescriptive analytics is an area of business analytics dedicated to finding the potential best course of action for a given situation.

Data Analytics - Plain and simple, data analytics is the science of inspecting, cleansing, transforming, and modeling data in order to draw insights from raw information sources.

People Analytics - All these elements are important for people analytics. Need the basics? Learn more about people analytics.
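As promised in the recap above, here is a minimal sketch of the predictive approach, using several input variables to arrive at a per-employee output. The data and feature names are hypothetical, and this is not a description of One AI itself.

```python
# Illustrative sketch only: hypothetical data, not One AI itself.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-employee inputs and a turnover label (the output).
rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "tenure_years": rng.uniform(0, 12, n),
    "engagement_score": rng.uniform(1, 5, n),
    "time_since_promotion_years": rng.uniform(0, 6, n),
})
df["left_company"] = (df["engagement_score"] < 2.5) | (df["time_since_promotion_years"] > 4)
df["left_company"] = (df["left_company"] & (rng.random(n) < 0.6)).astype(int)

X = df.drop(columns="left_company")
y = df["left_company"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Learn which input indicators precede turnover, then score current employees.
model = RandomForestClassifier(random_state=1).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]
print("Highest-risk employees (row positions):", np.argsort(risk)[::-1][:5])
```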
About One Model

One Model's people analytics solutions help thriving companies make consistently great talent decisions at all levels of the organization. Large and rapidly growing companies rely on our People Data Cloud™ people analytics platform because it takes all of the heavy lifting out of data extraction, cleansing, modeling, analytics, and reporting of enterprise workforce data. One Model pioneered people data orchestration, innovative visualizations, and flexible predictive models. HR and business teams trust its accurate reports and analyses. Data scientists, engineers, and people analytics professionals love the reduced technical burden. People Data Cloud is a uniquely transparent platform that drives ethical decisions and ensures the highest levels of security and privacy that human resource management demands.
Read Article
Featured
11 min read
Taylor Clark
The human resources department is a mission-critical function in most businesses. So the promise of better people decisions has generated interest in and adoption of advanced machine-learning capabilities. In response, organizations are adopting a wide variety of data science tools and technology to produce economically optimal business outcomes. This trend is the result of the proliferation of data and the improved decision-making opportunities that come with harnessing the predictive value of that data.

What are the downsides to harnessing machine learning? For one, machines lack ethics. They can be programmed to intelligently and efficiently drive optimal economic outcomes, and using machines in decisions can appear to produce desirable organizational behaviors. But machines lack a sense of fairness or justice, and optimal economic outcomes do not always correspond to optimal ethical outcomes. So the key question facing human resources teams and the technology that supports them is: "How can we ensure that our people decisions are ethical when a machine is suggesting those decisions?"

The answer almost certainly requires radical transparency about how artificial intelligence and machine learning are used in the decision-making process. It is impossible to understand the ethical aspect of a prediction made by a machine unless the input data and the transformations of that data are clear and understood as well. General differences between various machine learning approaches have a profound impact on the ethicality of the outcomes that their predictions lead to. So let's begin by understanding some of those differences. Let's focus on three types of machine learning models: the black box model, the canned model, and the custom built model.

What is a Black Box Model?

A black box model is one that produces predictions that can't be explained. There are tools that help users understand black box models, but these types of models are generally extremely difficult to understand. Many vendors build black box models for customers, but are unable or unwilling to explain their techniques and the results that those techniques tend to produce. Sometimes it is difficult for the model vendor to understand its own model! The result is that the model lacks any transparency.

Black box models are often trained on very large data sets. Larger training sets can greatly improve model performance. However, for this higher level of performance to generalize, many dependencies need to be satisfied. Naturally, without transparency it is difficult to trust a black box model. As you can imagine, it is concerning to depend on a model that uses sensitive data when that model lacks transparency. For example, asking a machine to determine if a photo has a cat in the frame doesn't require much transparency, because the objective lacks an ethical aspect. But decisions involving people often have an ethical aspect to them. This means that model transparency is extremely important.

Black box models can cross ethical lines where people decisions are concerned. Models, like humans, can exhibit biases resulting from sampling or estimation errors. They can also use input data in undesirable ways. Furthermore, model outputs are frequently used in downstream models and decisions. In turn, this ingrains invisible systematic bias into the decision. Naturally, the organization jeopardizes its ethical posture when human or machine bias leads to undesirable diversity or inclusion outcomes.
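To give a feel for one kind of bias check (a minimal sketch with hypothetical groups and predictions, not a complete fairness audit), selection rates across groups can be compared directly:

```python
# Illustrative sketch only: hypothetical groups and predictions, not a full fairness audit.
import pandas as pd

# Hypothetical model outputs: 1 = recommended for promotion, 0 = not recommended.
scores = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"],
    "recommended": [1, 1, 0, 0, 1, 0, 0, 1, 0, 1],
})

# Selection rate per group, and the ratio of the lowest rate to the highest.
rates = scores.groupby("group")["recommended"].mean()
impact_ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {impact_ratio:.2f}")
# The commonly cited four-fifths rule treats ratios below 0.8 as warranting review.
```

A check like this does not prove or disprove bias on its own, but it makes the question visible instead of leaving it buried inside the model.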
One of the worst possible outcomes is a decision that is unethical or prejudicial. These bad decisions can have legal consequences or worse.

What is a Canned Model?

The terms "canned model" or "off-the-shelf model" describe a model that was not developed or tailored to a specific user's dataset. A canned model could also be a black box model, depending on how much intellectual property the model's developer is willing to expose. Plus, the original developer might not understand much about its own model. Canned models are vulnerable to the same biases as black box models. Unrepresentative data sets can lead to unethical decisions. Even a representative data set can have features that lead to unethical decisions. So canned models aren't without their disadvantages either.

But even with a sound ethical posture, canned models can perform poorly in an environment that simply isn't reflective of the environment the model was trained on. Imagine a canned model that segmented workers in the apparel industry by learning and development investments. A model trained on Walmart's data wouldn't perform very well when applied to decisions for a fashion startup. Canned models can be quite effective if your workforce looks very similar to the ones the model was trained on. But that training set is almost certainly a more general audience than yours. Models perform better when the training data represents the real-life population the model will actually be applied to.

What are Custom Built Models?

Which brings us to custom built models. Custom models are the kind that are trained on your data. One AI is an example of the custom built approach. It delivers specialized models that best understand your environment because they have been trained on it. So it can detect patterns within your data to learn and make accurate predictions. Custom models discover the unique aspects of your business and learn from those discoveries.

To be sure, it is common for data science professionals to deploy the best-performing model that they can. However, the business must ensure that these models comply with high ethical and business intelligence standards. That's because it is possible to make an immoral decision with a great prediction. So for users of the custom built model, transparency is only possible through development techniques that are not cloudy or secret. Even with custom built models, it is important to assess the ethical impact that a new model will have before it is too late.

Custom built models may incorporate some benefits of canned models as well. External data can be incorporated into the model development process. External data is valuable because it can capture what is going on outside of your organization. Local area unemployment is a good example of a potentially valuable external data set. (A small sketch of joining external data onto employee records appears below.)

Going through the effort of building a model that is custom to your organization will provide a much higher level of understanding than just slamming a generic model on top of your data. You will gain the additional business intelligence that comes from understanding how your data, rather than other companies' data, relates to your business outcomes. The insights gleaned during the model development process can be valuable even if the model is never deployed. Understanding how any model performs on your data teaches you a lot about your data. This, in turn, will inform which type of model and model-building technique will be advantageous to your business decisions.
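As promised above, here is a minimal sketch of incorporating external data, assuming hypothetical files and column names:

```python
# Illustrative sketch only: hypothetical column names and values.
import pandas as pd

# Hypothetical employee records with a work location and snapshot month.
employees = pd.DataFrame({
    "employee_id": [101, 102, 103],
    "metro_area": ["Austin", "Brisbane", "Austin"],
    "snapshot_month": ["2024-01", "2024-01", "2024-02"],
})

# Hypothetical external data: local unemployment rate by metro area and month.
unemployment = pd.DataFrame({
    "metro_area": ["Austin", "Austin", "Brisbane"],
    "snapshot_month": ["2024-01", "2024-02", "2024-01"],
    "local_unemployment_rate": [3.4, 3.5, 4.1],
})

# Join the external signal onto each employee record before model training.
training_frame = employees.merge(
    unemployment, on=["metro_area", "snapshot_month"], how="left")
print(training_frame)
```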
Don't Be Misled by Generic Model Performance Indicators

A canned model's advertised performance can be deceptive. The shape of the data that the canned model learned from may be drastically different from the data in your specific business environment. For example, if 5% of the people in the model's sample work remotely, but your entire company is remote, then the impact and inferences drawn by the model about remote work are not likely to inform your decisions very well.

When to Be Skeptical of Model Performance Numbers

Most providers of canned models are not eager to determine the specific performance of their model on your data because of the inherent weaknesses described above. So how do you sniff out performant models? How can you tell a good-smelling model from a bad-smelling one?

The first reason to be skeptical lies in whether the model provider offers relative performance numbers. A relative performance value is a comparative one, and therefore failing to disclose relative performance should smell bad. Data scientists understand the importance of measuring performance. They know that it is crucial to understand performance prior to using a model's outputs. So by avoiding relative performance, the vendor is not being 100% transparent.

The second reason to be skeptical concerns vendors who can't (or won't) explain which features are used in their model and the contribution that each feature makes to the prediction. It is very difficult to trust a model's outputs when the features and their effects lack explanation. So that would certainly smell bad. One Model published a whitepaper listing the questions you should ask every machine learning vendor.

Focus on Relative Performance…or Else!

There are risks that arise when using a model without relative performance. The closest risk to the business is that faith in the model itself could diminish. This means that internal stakeholders would not realize the "promised" or "implied" performance. Of course, failing to live up to these promises is a trust-killer for a predictive model. Employees themselves, and not just decision makers, can distrust models and object to decisions made with them. Even worse, employees could adjust their behavior in ways that circumvent the model in order to "prove it wrong". But loss of trust by internal stakeholders is just the beginning. Legal, compliance, financial, and operational risk can increase when businesses fail to comply with laws, regulations, and policies. Therefore, it is appropriate for champions of machine learning to be very familiar with these risks and to ensure that they are mitigated when adopting artificial intelligence. Finally, it is important to identify who is accountable for poor decisions that are made with the assistance of a model. The act of naming an accountable individual can reduce the chances of negative outcomes, such as bias, illegality, or imprudence.

How to Trust a Model

A visually appealing model that delivers "interesting insights" is not necessarily trustworthy. After all, a model that has a hand in false or misleading insights is a total failure. At One Model, we feel that all content generated from predictive model outputs must link back to that model's performance metrics. An organization cannot consider itself engaged in the ethical use of predictive data without this link.
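As an illustration of what a relative performance check can look like, here is a minimal sketch on hypothetical data, using a naive scikit-learn baseline for comparison; the specific model and metric are assumptions for the example.

```python
# Illustrative sketch only: hypothetical data; a naive baseline gives relative performance.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix and attrition labels from your own workforce data.
rng = np.random.default_rng(2)
X = rng.normal(size=(600, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=600) > 0.8).astype(int)

# Compare the candidate model against a majority-class baseline on the same data.
baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5).mean()
candidate = cross_val_score(GradientBoostingClassifier(random_state=2), X, y, cv=5).mean()

print(f"Baseline accuracy:  {baseline:.2f}")
print(f"Candidate accuracy: {candidate:.2f}")
print(f"Relative lift over baseline: {candidate - baseline:+.2f}")
```

An absolute score in isolation tells you very little; the lift over a naive baseline, measured on your own data, is what makes the number meaningful.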
Canned and black box models are extremely difficult to understand, and even more difficult to predict how they will respond to your specific set of data. There are cases where these types of models can be appropriate. But those cases are few and far between in the realm of people data in the human resources function.

Instead, custom models offer a much higher level of transparency. Model developers and users come to understand their own data much better throughout the model building process. (This process is called Exploratory Data Analysis, and it is an extremely under-appreciated aspect of the field of machine learning.)

At One Model, we spent a long time (more than five years) building One AI to make it easier for all types of human resources professionals to build and deploy ethical custom models from their data, while ensuring model performance evaluation and model explainability. One AI includes robust, deep reporting functionality that provides clarity on which data was used to train models. It blends rich discovery with rapid creation and deployment. The result is the most transparent and ethical machine learning capability in any people analytics platform. Nothing about One AI is hidden or unknowable. And that's why you can trust it.

Their Artificial Intelligence Still Needs Your Human Intelligence

Models are created to inform us of patterns in systems. The HR community intends to use models on problem spaces involving people moving through and performing within organizations. So HR pros should be able to learn a lot from predictive models. But it is unwise to relinquish human intelligence to predictive models that are not understood. The ultimate value of models (and all people analytics) is to make better, faster, more data-informed talent decisions at all levels of the organization. Machine learning is a powerful tool, but it is not a solution to that problem.
Read Article
Featured
3 min read
Nicholas Garbis
We wrote this paper because we believe that AI/ML has the potential to be a very valuable and powerful technology to support better talent decisions in organizations, and it also has the potential to be mishandled in ways that are unethical and can do harm to individuals and groups of employees. In this paper, we provide some process-thinking substance to a conversation that has too often been dominated by hyperbolic "AI/ML is great!" and "AI/ML will destroy us!" headlines.

In the paper, you will find a set of Guiding Principles … And, most importantly, a set of Processes for Ethical ML Stewardship that we believe you should be discussing (immediately) within your organizations. Each of these processes (and sub-processes) is defined in the paper in plain, readable language to enable the widest possible readership.

We believe we are at a delicate and critical point in time where AI/ML has been embedded into so many HR technology solutions without sufficient governance among the buying organizations. Vendors (like One Model) need to have their AI/ML solutions challenged to provide sufficient transparency into the AI/ML models: model features, performance measures, bias detection, review/refresh commitments, etc. One Model has built our "One AI" machine learning toolset to enable the processes that our customers can use to ensure ethical model design and outputs.

To be clear, this paper is not a promotional piece about One Model, but it is absolutely intended to challenge the sellers and buyers of HR technology to get this right. Without the appropriate focus on ethics, AI/ML products and projects could become too risky for organizations and be summarily eliminated along with all the potential value for individuals and organizations.

DOWNLOAD PAGE: https://www.onemodel.co/whitepapers/ethics-of-ai-ml-in-hr
Read Article