Managing the Coexistence of AI and Human Resources

Katrina Faessel:
There are legal and practical considerations when navigating and implementing various aspects of AI in your HR processes. Learn best practices and actionable tips for embracing AI as we welcome TriNet's Executive Director of Client HR Consulting Services, Jacqueline Breslin, and Senior Counsel, Jyan Ferng, to the stage.

Please note that we may not be able to answer all questions submitted during this session, but we'll try to get back to you after the session via email or chat.

Jacqueline Breslin:
Thanks so much for joining us today. We look forward to talking about managing the coexistence of AI and human resources.

Jyan Ferng:
We know that all of you have artificial intelligence and its use in the workplace at the top of your list of concerns. We are experiencing rapid development of AI-powered tools for HR that are changing the entire field. But, as with other areas of rapid change that will have a lasting and significant impact, legislatures and agencies are responding.

In general, the legislative and regulatory response so far has been to counter a real concern that technology can perpetuate the same biases possessed by humans, which makes sense because AI tools are created by us and they learn from information generated by us. The law is not going to take a backseat to technology when it comes to any potential discrimination and adverse impacts on protected groups that may result from the use of AI.

As a result, we're seeing responses that try to create safeguards in the use of AI in employment, recruiting, hiring, and evaluating processes to help eliminate any negative impacts on one or more protected groups. So let's start with a poll to get an idea of AI usage by all of you out there.

Jackie:
We can wait for some feedback. Interested to see what everybody has to say about their usage.

Jyan:
Yeah, I would imagine that it's probably quite high at this point, given how popular AI is right now. It's gone so mainstream just in the past year. It's really been incredible how fast it's kind of taken over our collective consciousness.

Jackie:
I agree. We seem to be racing to adopt AI in the workplace as quickly as we can, but then also holding back: is it too soon? Is it the right time? Okay, so far we're seeing 20% yes, 80% no. It's not quite what I would have predicted, but that's okay. The numbers are changing a little bit, so now it's up to 27%.

Jyan:
I'm a little surprised, but I think it's a good sign. Right, Jackie? Because it sounds like this audience is being cautious with their use of AI, which I think is a good thing. I mean, it's such a new technology and there are so many unknowns that I think it is a good idea for most employers to cautiously wade into AI use.

Jackie:
Yeah. Okay. Look, we've sort of flipped now. We're at… we're staying about the same, right? About 16.67% yes and 83.33% no. Okay. So interesting. I feel like we've caught the audience at the right time, though.

Jyan:
Absolutely, and again, I think that this is a good idea, because there are a lot of potential pitfalls and landmines with using AI, and we're going to talk about some of them right now. Let's start at the federal level: the EEOC, the Equal Employment Opportunity Commission, provided guidance in May 2023 warning employers that using AI to assist with hiring or other employment-related actions could violate Title VII, the federal anti-discrimination law, if that tool is improperly used.

And the EEOC gave five specific examples of where the use of AI could trigger Title VII violations. First, resume scanners that prioritize applications using certain keywords. Second, virtual assistants or chatbots that ask job candidates about their qualifications and reject those who do not meet predefined requirements.

Third, video interviewing software that evaluates candidates based on their facial expressions and speech patterns. Fourth, testing software that provides job-fit scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or any perceived cultural fit based on their performance on a game or even a more traditional test.

Finally, employee monitoring software that rates employees on the basis of their keystrokes or other factors. The EEOC noted that the use of such AI could have a disparate impact, where the employer isn't intending to discriminate, but the result ends up having a statistically significant negative impact on a certain protected class of workers.

To help determine whether a disparate impact exists, the EEOC stated that employers can use the four-fifths rule as a general guideline. The four-fifths rule compares the selection rate of a certain group with the selection rate of the most successful group; if the first is less than four-fifths of the second, your AI tool might be creating a disparate impact.

For example, suppose your AI tool administers a test and 40 men and 40 women take it, but 30 men advance compared to 10 women. That means 75% of the men but only 25% of the women advanced, a ratio of one-third (25 divided by 75), which is less than four-fifths, and that is an indication that your AI tool might be disparately impacting female applicants.
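
To make that arithmetic concrete, here is a minimal Python sketch of the four-fifths comparison; the numbers mirror the example above, and the function name is purely illustrative, not part of any EEOC tool.

```python
def four_fifths_check(selected_a: int, total_a: int,
                      selected_b: int, total_b: int) -> bool:
    """Return True if the lower selection rate is at least 4/5
    of the higher selection rate (the EEOC's rule of thumb)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    lower, higher = sorted([rate_a, rate_b])
    return (lower / higher) >= 0.8

# The example above: 30 of 40 men advance (75%) versus 10 of 40
# women (25%). The ratio 0.25 / 0.75 ≈ 0.33 is well below 0.8,
# which suggests a possible disparate impact.
print(four_fifths_check(30, 40, 10, 40))  # False
```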

However, keep in mind that even if you meet the four-fifths threshold, that doesn't automatically mean your AI tool is 100% lawful. There are many other factors that can also create risk of violations. The EEOC also noted in its guidance that it encourages employers to conduct ongoing self-analysis to determine whether they're using technology in a way that could result in discrimination.

We at TriNet generally advise our clients to conduct self-audits with legal counsel, not only for their expertise in testing and interpreting results, but also for attorney-client privilege protection, which is especially helpful if your audit has less-than-sterling results. Outside counsel can also help you mitigate risk and improve your use of AI if your audit does show any potential discriminatory impacts.

And importantly, employers can't hide behind AI vendors and algorithms and blame them for any discriminatory results. The EEOC's guidance stated that employers may be held responsible for the actions of their agents, which may include entities such as software vendors, if the employer has given them authority to act on their behalf.

To emphasize how this area is constantly evolving: hot off the press, and we couldn't even get it into this slide deck in time, the Department of Labor just yesterday published a Field Assistance Bulletin to provide guidance to employers on the use of AI as it relates to compliance with the Fair Labor Standards Act, or FLSA, and the Family and Medical Leave Act, or FMLA, among other laws.

Essentially, the DOL warned employers that using AI might result in violating the FLSA by failing to properly pay for all hours worked, because the AI thinks the employee isn't working when they are, or because the AI thinks the employee took an unpaid break that day at the usual time when they didn't. Which, by the way, is one reason why we generally discourage employers from using block time entries, especially for meal periods. The AI might also inaccurately calculate the regular rate of pay.
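
As a rough illustration of the regular-rate arithmetic at stake, here is a simplified sketch under assumed facts; it is not how any particular payroll or AI system computes pay, and real FLSA calculations depend on which forms of compensation must be included.

```python
def weekly_pay(hours_worked: float, hourly_rate: float,
               nondiscretionary_bonus: float = 0.0) -> float:
    """Simplified FLSA-style weekly pay sketch.

    The regular rate must fold in nondiscretionary bonuses, which is
    one place an automated system can go wrong if it computes
    overtime on the base hourly rate alone.
    """
    straight_time = hours_worked * hourly_rate + nondiscretionary_bonus
    regular_rate = straight_time / hours_worked
    overtime_hours = max(0.0, hours_worked - 40.0)
    # Straight time above already pays for every hour worked, so the
    # overtime premium adds an extra half of the regular rate per
    # overtime hour.
    return straight_time + 0.5 * regular_rate * overtime_hours

# 45 hours at $20/hour plus a $90 production bonus:
# regular rate = (45 * 20 + 90) / 45 = $22.00
# overtime premium = 5 * $11.00 = $55.00, total = $1,045.00
print(weekly_pay(45, 20.0, 90.0))  # 1045.0
```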

The DOL also warned employers that using AI might violate the FMLA by incorrectly determining FMLA leave eligibility, by improperly demanding more certification of FMLA leave than the law permits, or by not providing extra time to provide certification when warranted. The DOL also noted that AI may unfairly penalize employees who lawfully take FMLA leave with respect to other employment actions, for example, by assigning them negative attendance points for their FMLA leave.

Now, it's not just at the federal level. We're also seeing attempts to address potential discrimination caused by AI at local levels. The most wide-ranging AI law currently on the books for employment decision tools is in New York City, where, effective April 15, 2023, employers with New York City employees, or even remote employees outside of New York who are associated with an office in New York City, are only able to use an automated employment decision tool, or AEDT, such as artificial intelligence and/or algorithms, after subjecting that tool to a bias audit performed by an independent auditor prior to use. The bias audit is to make sure that the AEDT does not lead to any potential disparate impact, meaning discrimination against applicants or employees on the basis of race, ethnicity, or sex.
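
Under the hood, that kind of bias audit comes down to comparing selection rates across categories. Here is a much-simplified sketch of the impact-ratio math; the actual NYC AEDT audit requirements are more detailed, the audit must be performed by an independent auditor, and the category names here are purely hypothetical.

```python
def impact_ratios(selections: dict) -> dict:
    """selections maps category -> (selected, total applicants).
    Returns each category's selection rate divided by the highest
    category's selection rate."""
    rates = {cat: sel / tot for cat, (sel, tot) in selections.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical numbers: group_a selects 60%, group_b selects 40%,
# so group_b's impact ratio is 0.40 / 0.60 ≈ 0.67.
print(impact_ratios({"group_a": (48, 80), "group_b": (30, 75)}))
```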

The audit must be done no more than one year before using the AEDT, and a summary of the results must be publicly disclosed, such as by posting it on the employer's website. Employers also have to provide specific notifications and disclosures to job candidates no less than 10 business days before the AEDT is used, informing applicants about the use of the AEDT for assessment or evaluation, the candidate's ability to request an alternative selection process or accommodation, and the job qualifications and characteristics that the AEDT will use in assessing the candidate or employee.

Within 30 days of a written request by the candidate or employee, the employer must also provide information regarding the type of data collected for use by the AEDT, the source of such data, and the employer or employment agency's data retention policies. Alternatively, this information may be disclosed on the employer's website.

The penalties can be pretty steep: employers that violate this law will be subject to civil penalties of up to $1,500 per day, with no cap on the total amount of civil penalties that can be assessed. At the state level, effective since January 1, 2020, Illinois has required employers that use AI to analyze video interviews during the hiring process to inform applicants that AI will be used to analyze their interview videos. Although the law doesn't require that such notice be in writing, we would recommend that as a best practice so you can document that such notice was provided.

Employers must also explain to applicants how their AI program works and what characteristics the AI uses to evaluate an applicant's fitness for the position. Employers must obtain the applicant's consent to be evaluated by AI before the video interview and may not use AI to evaluate a video interview without consent. Again, the law doesn't require written consent, but we would recommend that as a best practice.

Employers are permitted to share the videos only with persons whose expertise or technology is needed to evaluate the applicant. Lastly, employers must destroy both the video and all copies within 30 days after an applicant makes such a request. They also have to instruct any other people who have copies of the video to destroy those copies as well.

In Maryland, under a law in effect since October 1, 2020, employers using facial recognition technology in job interviews must first obtain a written consent waiver from the applicant that includes the applicant's name, the date of the interview, the applicant's consent to the use of facial recognition during the interview, and a statement that the applicant has read the consent waiver.

For those of you with operations north of the border, Ontario, Canada just passed a new law in March 2024 that will require employers that use AI to screen, assess, or select applicants to disclose that fact on all publicly advertised job postings. The effective date will be established at a later time by proclamation.

Although these are just a few of the bills that have passed, the number of proposed bills regarding the use of AI in employment settings has also jumped over the past couple of years. Let's talk a little bit about that. At the federal level, the proposed American Data Privacy and Protection Act would have required employers to evaluate the design, structure, and data inputs of algorithms such as AI to reduce the risk of potential discriminatory impacts, required the use of an external independent researcher or auditor to conduct the evaluation on an annual basis, and required that assessment to be submitted to the FTC.

Again, this bill did not pass, but it does provide insight into how the federal government is thinking about addressing AI's potential discriminatory impact in the workplace. 2023, as I said, saw an uptick in proposed legislation at the state level. In Illinois, the state proposed a bill that would restrict employers from using race, or ZIP code as a proxy for race, when making automated hiring decisions.

In Massachusetts, a bill would have required employers to provide employees with notice about algorithmic decisions and monitoring, and would also have given employees the right to request information about those algorithms. Similarly, New Jersey's bill would have required notice of, and bias audits for, automated decision tools used for hiring.

New York State proposed two bills that would have added criteria for using automated decision-making tools, required disparate impact analysis, and required employers to give notice to applicants, very similar to the law in existence in New York City. Vermont's bill was focused on restricting electronic monitoring of employees for employment-related decisions.

As you can see, the use of AI in employment is on the minds of many legislatures, and this is just the tip of the iceberg in how AI's use in employment is going to be restricted. We do anticipate 2024 having an influx of additional proposed legislation dealing with AI. Now I'll hand it over to Jackie to discuss the concerns of AI discrimination in the workplace.

Jackie:
Thanks, Jyan. I love this topic so much. I feel like, especially from the poll, we're talking to employers that are figuring out what to do next. I like when we can get a bit of a head start. According to the Blueprint for an AI Bill of Rights, published by the White House in October 2022, algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex, religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.

With many organizations adopting AI as part of their business, how do we prevent algorithmic discrimination? A good place to start is vetting your recruiting vendor or service provider regarding how they avoid algorithmic discrimination. What steps have they taken to avoid it?

An example would be an employer that doesn't want to recruit in certain areas because of the employment requirements of that specific location, or a tool that avoids applicants in certain urban areas, potentially disfavoring minorities. You also want to engage in the interactive process so you can offer reasonable accommodations to applicants during the application process if you're using an AI tool.

For example, say a company uses a chatbot as part of the interview process for a sales position that requires attending trade shows or other events. If the chatbot asks the applicant whether they can stand during a trade show and the applicant says no because they use a wheelchair, the chatbot might end the interview, concluding that the applicant can't meet the physical requirements of the job. However, the applicant may very well be able to meet those requirements with a reasonable accommodation, and it's the employer's duty to engage in the interactive process to make that determination. The remedy: don't just filter out applicants who can't meet the physical requirements as stated without some way to engage in the interactive process.
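
To picture the design fix in that chatbot example, here is a purely illustrative sketch (no real chatbot product or vendor API is implied): a "no" answer routes to human review so the interactive process can happen, rather than triggering an automatic rejection.

```python
def screen_physical_requirement(can_perform_as_stated: bool) -> str:
    """Illustrative screening step for a physical job requirement."""
    if can_perform_as_stated:
        return "advance"
    # Don't auto-reject: the applicant may still be able to perform
    # the essential functions with a reasonable accommodation, and
    # the employer must engage in the interactive process to find out.
    return "route_to_human_review"

print(screen_physical_requirement(False))  # route_to_human_review
```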

So it's really important to continue to monitor the EEOC for guidance. They are continuing to develop resources and updating an area of their website titled "Artificial Intelligence and Algorithmic Fairness Initiative."

Let's talk some more about who owns the governance of AI in your organization. It's an opportunity for HR, legal, security, and technology teams to work together when it comes to AI. Because so much of AI is new and so intriguing to employees, we should never assume employees know the appropriate expectations for how to use these types of tools. Given all the buzz regarding AI, the ease of availability, and the many products available for free or at very low cost, it's logical to think that employees want to use them and might be eager to do so.

As employers talk about the importance of efficiency and news stories promote how AI increases efficiency, employees may conclude that they should give AI a try at work. Your managers may have incredibly long to-do lists, so they'll be tempted to see how AI can help them be more productive. For example, a manager who's trying to recruit might download an AI software tool without clearing with you that it's okay to do so.

Before the use of AI takes on a life of its own within your workplace, get clear on your company's position on using AI. Collaborative decision-making among leaders in HR, legal, security, and technology is important. Without direct communication among leaders, you'll likely find assumptions are being made.

A common assumption is that the technology team will lead the AI initiatives, everybody else can stay focused on their areas of specialty, and technology will keep you updated. This approach leaves out important perspectives and may create risk if HR, security, and legal are not weighing in and involved.

Prioritize figuring out your company's approach to and use of AI, get your leadership team on the same page, and develop a philosophy that gets turned into a message with relevant policies and training. Everyone in the company should hear that same message. Take the important steps to make sure all employees understand the company's position. You want to be transparent about how you're using, or planning to use, AI in the workplace.

Looks like we have some poll results coming in. Jyan, do you want to take us through those?

Jyan:
Yeah, it looks like it's pretty evenly split among drafting emails to prospects or customers, creating company materials for internal or external purposes, and writing thought leadership materials. Those three are definitely dominating the categories of what people are using AI for.

Jackie:
Okay. Great. That's good. Good information to have.

Jyan:
Yeah.

Jackie:
HR professionals have an important role in helping their company's AI initiatives succeed. HR must stay informed about the evolving legal and ethical considerations related to AI in the workplace. Jyan did a thorough job covering what's out there and what might be coming, and the real message is that this is ever evolving.

This includes compliance with data privacy, data protection, and IP laws; addressing concerns about AI-driven decision-making; and ensuring fair and transparent use of AI technologies. You want to keep watching the federal, state, and local regulatory landscape for AI topics. It's just as important as making sure all of us in HR understand the latest paid sick and safe leave laws and trends around compensation and employee engagement.

It's all important to us. Let's discuss some HR considerations to keep in mind as you develop your company's policy or approach regarding the use of AI tools in the workplace. As a best practice, determine whether your employees will be permitted to use this type of technology, and if so, in what capacity.

Will you allow employees to use these tools to draft emails or presentations or any other type of internal or external content? It looks like some of you already do. If you decide the use is permitted, you want to require that employees disclose that they used such technology or otherwise cite the source of information.

Citing a source is always a good idea, even if it's a tool like ChatGPT. This will help you later if you ever need to identify any work product that involved an AI tool. You also want to create a process or a policy that requires employees to fact-check and confirm the accuracy of all information that is sourced using AI tools that create content.

For example, such technology may have limited industry-specific knowledge, or its knowledge may not be current on certain quickly evolving topics, producing inaccurate or incomplete results. Include in your policy a statement prohibiting employees from disclosing confidential information.

Remind employees not to disclose or upload confidential, proprietary, or personal information while using this type of tool. Depending on your business, you may consider incorporating this into your security or confidentiality training, as well as into your written policies. Employees may not realize that they should not disclose confidential information when entering it into a generative tool's prompts.

Also, document your expectations and policies, train your managers and employees on the policies, and revisit policies as frequently as you can, because this topic is quickly evolving. This is not a write-a-policy, put-it-on-a-shelf, hope-it-stays-good-for-a-while situation; this type of policy and these types of processes need constant review. If your company uses an AI tool to develop work product, consider working with legal counsel on any potential copyright and patent issues.

Jyan:
Jackie, I saw in the poll that about 18% of the audience members don't even know whether their employees are using AI. I think that's even more reason why it's important to have an AI policy in place and to train on it, because that way you've got something to fall back on as an employer if employees are using AI improperly and doing things that might get your organization into some hot water.

Jackie:
Oh my gosh, such good points. They're not alone, and that is why it's so important to make sure that employees know what the expectations are.

AI is really opening up incredible opportunities for our workplaces, but it's also opening up an equal amount of fear from employees, and I think fear from employers, too, as we talk about our employees even using it. Some employees may feel that their jobs are threatened, that AI is too complex for them to understand, or that they won't be able to keep up with the pace at which things are changing.

Other employees may see AI as the answer to a more rewarding job and interesting work and be ready to embrace it. HR pros need to lead the change management charge within their organizations, helping employees who are fearful get more comfortable and helping those who are ready to use AI understand what the boundaries are within the workplace.

Describe to your employees how your business is using AI or how you hope to use it or simply that you are in the very early stages of figuring it out. Just don't be silent. We see performance concerns popping up amongst clients who are caught off guard when an employee uses a tool like ChatGPT or Jasper or any one of the many other options available.

An employee may be using the tools to create a marketing campaign, to help reply to prospects, to draft performance reviews, or to complete coding tasks. Then our clients are concerned that confidential company information has been inappropriately shared, that proper sources have not been cited, or they feel misled because the work product they thought was original work is not really original work.

We've also seen situations where some members of a sales team have been using AI tools while others have not, and it creates friction amongst coworkers. Many times in these situations, while the details of what happened vary, a common piece of the story is that the company has not established or communicated guidelines.

Managers without a common approach have allowed or not allowed the tools to be used, and it results in employee relations issues that could have been avoided: avoided with clear communication about the company's position and philosophy, clear communication about utilization, and training for managers, for employees, and for anybody new to the organization.

We really want to encourage you to have a clearly defined philosophy reflected in your relevant practices and policies, so employees know exactly what's expected of them. Not only will this set appropriate expectations, but it will help protect your organization as the topic continues to evolve.

I know as time goes on, Jyan, we will have more to share on this topic. I look forward to continuing to watch AI in the workplace closely, seeing how it evolves and what's in store for us next.

Jyan:
Absolutely. Like I said in my little spiel, this is just the beginning in terms of legislative and regulatory impacts. There are so few actual enacted laws at the moment, and I'm sure that will snowball in the coming years as AI becomes even more advanced and even more ingrained within the workplace and as these tools proliferate throughout society. It's definitely going to impact a lot of us, and it's definitely something we all have to keep in mind.

The policies and training that you mentioned, Jackie, will be so instrumental for all employers, and it will also be very key for them to keep updating those. It's such a constantly changing field; like yesterday, something just came out from the DOL, so it's happening all the time. As soon as something comes out, you've got to look at your policies, you've got to look at your training, you've got to make sure they're all updated and still aligned and complying with anything that the legislatures or the regulatory agencies have put out. It's definitely something that's going to keep all of us on our toes for many years to come.

Jackie:
Well said. Thanks so much for joining our session today. We appreciate this opportunity to talk with all of you and hope to be back in front of you again talking about this topic. Thank you.

Jyan:
We'll see you all soon. Thank you.