| 4 min read

What does AI Mean in the Context of Workday?

Written by Workday Republic

One of the most talked-about technologies today is artificial intelligence. Almost every business, in every industry, is discussing how it can take advantage of this emerging technology to transform its capabilities and gain a competitive edge in today’s digitally driven world.

And Workday is no different; indeed, their name has been mentioned alongside artificial intelligence increasingly over the past few months. However, when Workday talk of AI, they usually mean harnessing machine learning and mathematical algorithms to power predictions and insights. Workday aren’t advancing AI-related technology themselves, and they aren’t following in the footsteps of other technology companies by building separate branded products like Einstein, Watson or Coleman.

So, this leads us to the question: what are Workday actually doing with artificial intelligence?

As David Clarke, SVP Tech and Infrastructure at Workday, put it:

“We’re not inventing fundamentally new techniques in machine learning or AI. What we have that’s unique are the data sets that we have access to.”

So, with over 2,300 customers on standardised data models, Workday clearly intend to pair this customer data with established machine learning and pattern-recognition techniques, training algorithms for specific jobs.

But what does this look like for their customers?


AI and Analytics within Workday

Workday Prism Analytics is the platform designed to bring together and present Workday and non-Workday data in a self-service tool, giving users access to insights in real time. The result? A consolidated view of an entire business’ operations; a typical business might link external data from a CRM, non-Workday finance and HR data, with native Workday project data, for instance. But, whilst this does provide a complete snapshot, how rich is the insight, and how much still depends on human deduction? The answer, unfortunately, is a lot. So this is where Workday are turning their familiar Power of One to their advantage.

Prism itself can offer a snapshot of an entire organisation in one place, but layer on machine learning, automation and augmented analytics, and you find yourself moving away from self-service towards AI-driven predictions. This is encapsulated in Workday People Analytics, which combines all of the below:

  • Automated pattern-detection capabilities to look for important changes a human might not see
  • Graph processing to find connections across vast datapoints
  • Machine learning to predict the most important issue for you to see
  • Natural language generation to explain what is happening in a simple story

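The last of those capabilities, natural language generation, can be pictured with a minimal template-based sketch. The metric names and wording below are invented for illustration and are in no way Workday’s actual implementation, which is proprietary.

```python
# A hypothetical sketch of turning a detected change into a "simple story".
# The template and metric names are invented for illustration only.

def explain_change(metric, previous, current, segment):
    """Render a one-sentence narrative for a change in a metric."""
    direction = "risen" if current > previous else "fallen"
    pct = abs(current - previous) / previous * 100
    return (f"{metric} has {direction} by {pct:.0f}% in {segment} "
            f"(from {previous} to {current}); it may be worth "
            f"investigating this team first.")

print(explain_change("New hire turnover", 10, 14,
                     "the London sales organisation"))
```

Real augmented-analytics systems generate far richer narratives, but the principle is the same: pair a detected change with the context needed to explain it.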
To put it frankly: the capabilities of Workday Prism are advanced. Indeed, when it comes to the practical application of what might seem like a long list of fancy can-dos, the possibilities are game-changing for organisations. Take anomaly detection, for instance, whether it’s the cadence of customer payments or regrettable attrition: the system not only alerts individuals, but also draws on multiple data points to offer insight into why the pattern might be changing. In short, it alerts and explains, so that people can act rather than react; something that is crucial in today’s highly competitive and technologically driven markets.

The example from Workday describes a people leader who might receive a message showing that new hire turnover has increased overall, and that they should not only look at the sales organisation in London, but also dig into compensation, as well as a specific hiring manager. Having the capability to track data in such a way does offer the insights that businesses will soon rely on, but it unfortunately also brings with it questions of misuse, ethics and regulation…


The Limits of AI

So, how far should a company be allowed to track employee data? Should they, for example, be able to predict whether an employee is looking to make a move outside of the organisation? This is an entirely plausible scenario, given that data points surrounding their LinkedIn activity, age, compensation, seniority and performance are all fed into People Analytics. With this combination of data and artificial intelligence technology, businesses can potentially gain powerful insight into predicting employee behaviour, but there are serious questions over the ethics of doing so.

Indeed, as more businesses bring machine learning, AI and deep learning into their organisations, it is becoming increasingly important for them to take responsibility for the outputs of any system that they choose to deploy, and for any actions that they take based on that information.

To put it plainly, with the general public increasingly voicing concerns over the rise of artificial intelligence, worrying about biased, fraudulent and malicious applications, it is now the responsibility of the technical providers of these technologies to respond.

In a recent interview with CNBC at Davos for the World Economic Forum, Workday CEO Aneel Bhusri wisely states that technology is neither good nor bad; it is neutral, and what makes it good or bad is how it is applied. But he quickly qualifies this with his strong belief that every company that uses data to make decisions needs to be transparent: employees must be made aware of what data is used, and for what purpose.

Like other leaders of tech giants such as Google, Microsoft and Salesforce, Bhusri suggests that a Chief Ethics Officer will soon be needed in these companies, to lay down rules on privacy policies and to provide assurance that the technology in question is being used responsibly. Alongside a skilled team, a Chief Ethics Officer would examine the inputs and outputs of AI systems and ensure that they are aligned with an organisation’s ethical responsibilities.

So, as businesses’ daily use of, and reliance on, artificial intelligence to power their decisions increases, so too will questions about their accountability. Companies can promise to behave ethically and to regulate themselves, but how will this stand up when financial pressures are applied? Idealism may have to take the hit. Could the only true way to ensure ethical practice be the oversight of AI by independent regulators?

When it comes to Workday, it seems they are only at the beginning of their artificial intelligence journey, with their current offering already having the potential to transform the way in which organisations do business. In a world where a technological advantage can make or break an organisation, businesses will be clamouring for Workday to advance their artificial intelligence capabilities, and we are sure Workday will do so. But it will be interesting to see how the company continues to respond to increasing pressure from the outside world to ensure their technology is safe, can be regulated and, fundamentally, can be trusted with their data.