
How does AI factor into privacy in the workplace?

As AI integration becomes more commonplace, are privacy and data security about to become the next workplace shakeup?

What would you say if your employer asked you to wear a headband that tracked your brain waves to determine fatigue levels on the job? Or a pair of headphones that monitor your stress levels while you’re working? While this sounds like the stuff of dystopian sci-fi movies, these are in fact examples from our present reality, courtesy of companies like SmartCap (maker of the headband, which is used by more than 5,000 companies worldwide) that are at the forefront of a growing trend: using artificial intelligence in the workplace to track employees’ vital statistics, including their mental and physical health.

In theory, Corporate Big Brother is doing this for the good of its employees. Those SmartCap headbands, for example, are designed to prevent fatigue-induced workplace accidents, vital in industries like trucking or mining, where tiredness has far more severe consequences than just nodding off at your desk. (There’s also a profit-driven reason, of course; according to the Harvard Business Review, fatigue costs US$136 billion in lost productivity each year.)

In practice, however, it’s much more complicated—and employees may be disinclined to believe their employers’ motives are as altruistic as they claim. Even the feeling of being watched at work, let alone wearing a device that monitors your bodily functions, affects employees: According to a recent study by the American Psychological Association, almost a third of people who knew their boss was monitoring them reported “fair or poor” mental health, a higher percentage than among those who weren’t being watched at work.

“I see this as an invasion of the worker's private life, in addition to the work space,” says Fabricio Barili, an academic who studies the way employers use surveillance in the workplace. “Algorithms are always looking to quantify each worker's performance and compare it with others, making it easier to reward or penalize those who have results that are outside the average.” 

This isn’t new, of course: Employers have long used key cards to monitor who is (and who isn’t) showing up to the office, and, as Barili points out, during the pandemic we saw companies like Ford experiment with wristbands that buzzed when employees got too close to each other as a means of enforcing social distancing on the line. We’ve also seen American workplaces fit out their employees with Fitbits, with the promise that better health will equal lower health insurance premiums.

“Big Brother is not concerned about health per se, because if that were the case, he would think about reducing his [employees’] working hours,” notes Barili, who is leery of the blurring between personal and professional that happens when a device is tracking you 24 hours a day. “He's concerned about increasing productivity, even if it's something you do outside of work hours.”  

Similarly, Barili raises concerns about, say, a predictive algorithm in a wearable device that might one day help an employer guess that a person is pregnant, opening a myriad of sinister possibilities for what might happen with that data, the most benign of which is that an employer might suspect you’re expecting before you do.  

On the flip side, there are some solutions that have their origins in a much purer motive—like Watercooler AI, a tool that can be added to Slack to boost engagement by automatically setting up informal touch points between employees and giving workers the chance to ask leadership questions anonymously.

“What really urged me to start Watercooler was a single data point that I came across: 120,000 people lose their lives each year due to work-related stress,” says Watercooler founder Eitan Vesely. “I couldn’t live with knowing that people die because they go to work. Where are their managers? Couldn’t they see that they are ‘killing’ their own people?!”

Watercooler uses AI to analyze the digital footprint an employee creates—email, instant messaging, project management tools like Jira—and applies algorithms to study that data across 150 distinct variables.

“The algorithms are trained to look for unique and abnormal behavioral patterns. It’s crucial to note that the algorithm doesn’t examine each variable in isolation,” says Vesely. “For example, the significance of the variable ‘excessive work hours’ will be based on whether it is occurring throughout the organization or limited to specific teams or individuals.”  
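As a rough illustration of that contextual approach—using entirely hypothetical team-level hours figures and a simple z-score comparison, not Watercooler’s actual models, which are not public—a sketch might look like this:

```python
import statistics

def flag_abnormal_teams(hours_by_team, z_threshold=1.0):
    """Compare each team's average weekly hours to the organization-wide
    baseline and flag teams whose deviation exceeds the z-score threshold,
    mirroring the idea that 'excessive work hours' only matters in context."""
    team_means = {team: statistics.mean(h) for team, h in hours_by_team.items()}
    org_mean = statistics.mean(team_means.values())
    org_stdev = statistics.pstdev(team_means.values()) or 1.0  # avoid divide-by-zero
    return {
        team: round((mean - org_mean) / org_stdev, 2)
        for team, mean in team_means.items()
        if (mean - org_mean) / org_stdev > z_threshold
    }

# Hypothetical weekly work hours, grouped by team
hours = {
    "design":   [38, 40, 41],
    "support":  [39, 42, 40],
    "platform": [55, 58, 60],  # well above the rest of the organization
}
print(flag_abnormal_teams(hours))  # -> {'platform': 1.41}
```

The point is that the same raw number can be routine in one part of an organization and a warning sign in another.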

So, Watercooler might determine that employees are least likely to quit when they spend between five and 12 hours a week in meetings, and interact with their manager one or two times a week. But, Vesely says, the company uses an approach called Differential Privacy, which involves aggregating data and sometimes adding ‘random noise,’ especially in smaller teams, so managers “cannot reverse engineer the analytics to figure out the identity of individual employees.” 
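Differential privacy itself is a well-documented technique: report only aggregates, and add calibrated random noise so that individual values can’t be recovered from the result. A minimal sketch of that idea, using the common Laplace mechanism and hypothetical meeting-hours figures (not Watercooler’s actual implementation), might look like this:

```python
import numpy as np

def dp_mean(values, epsilon=1.0, value_range=40.0):
    """Return a differentially private mean of a per-employee metric.

    Laplace noise is scaled so that changing any single employee's value
    (assumed to lie within value_range) shifts the reported mean by at most
    value_range / len(values); epsilon is the privacy budget
    (smaller epsilon = more noise = stronger privacy).
    """
    true_mean = float(np.mean(values))
    scale = value_range / (len(values) * epsilon)
    return true_mean + float(np.random.laplace(0.0, scale))

# Hypothetical weekly meeting hours for a four-person team
weekly_meeting_hours = [6.5, 11.0, 8.0, 9.5]
print(round(dp_mean(weekly_meeting_hours, epsilon=0.5), 1))
```

The smaller the team, the more noise is needed to keep any one person’s numbers from being inferred—which matches Vesely’s description of adding ‘random noise’ especially in smaller teams.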

Handling data properly is imperative when it comes to these emerging technologies, says Martin Fox, managing director, Canada, at global recruitment firm Robert Walters.

“While data can yield significant benefits, it heavily depends on who processes and interprets it. Are there biases at play? Would assumptions be made about someone with a learning disability?” he says. “Before we even begin to collect such sensitive data on our employees, it is imperative to establish a comprehensive education and training process to ensure that the data is handled and perceived in the intended manner.” 

Melissa Robertson, a CPA and principal at Chartered Professional Accountants of Canada, agrees—which is why she argues organizations need a robust data governance policy, one that specifically accounts for the use of AI tools on employee data. She’s particularly concerned about the potential for bias in these tools, especially as it pertains to the employees themselves.

“Just as we care about how our customers’ data is managed and protected, we care about our own workers’ data and their right to privacy and security,” she says. “A red flag for me is where an organization is throwing time and investment to implement AI tools without adequately ensuring that it has a robust data governance program in place to manage how, when and why AI systems are being used. When we think of AI being used to automate internal processes in an organization, especially where those processes involve automating decisions about employees—think hiring, performance assessment, promotion readiness, salary assessments, etc.—ensuring that these systems are fair, and that employees understand how these tools are being used and how they impact decisions within the organization and that affect them, is important.”

Of course, there are some cases where sharing more may work to employees’ benefit. Fox points to recent data from a survey Robert Walters conducted with over 6,000 North American professionals. “People wanted their employer to understand them better, on a personal level,” he says. “In fact, our findings revealed that a lack of understanding or awareness among managers regarding their staff’s personal situations, including aspects like mental health, caregiving responsibilities, and religious beliefs, sometimes acted as a barrier to career progression.”

It’s a fine line to walk, though, and it relies on the employer responding appropriately—and transparently.

“Gathering information about your employees can be immensely valuable for companies looking to enhance themselves. [But] when trialing any new initiatives, it’s crucial to involve them in the process, communicate that it’s a pilot, and highlight that it’s an opportunity to gather insights on whether this technology aligns with future goals,” Fox says.