Harnessing AI's Potential for a Responsible Future of Work

Episode 20 | Published: September 14, 2023

Navrina Singh, Founder and CEO, Credo AI
Ben Sherry, Staff Reporter, Inc. Magazine

Hear why "moving fast and breaking things" should not apply to AI; moving fast with intent should. Leaders must decide what values they're bringing into this new age. Greed is not one of those values, but building trust is. And setting up guardrails is key.

Michael Mendenhall: Well, this past year certainly has been a testament to the lightning speed at which AI is advancing and impacting our lives. We've heard about this over the last two days, and there's no doubt that ethics is going to play a very important role with this new technology, and perhaps no one has thought more about this than Navrina Singh. Navrina is the founder and CEO of Credo AI, an organization on a mission to empower companies to develop AI with the highest ethical standards.

Before founding Credo AI, Navrina held a multitude of leadership roles at Microsoft, Qualcomm, and other companies. She is a member of the U.S. Department of Commerce's National Artificial Intelligence Advisory Committee, which advises the president and the National AI Initiative Office. And here to discuss harnessing AI's potential for a responsible future of work with Navrina is Ben Sherry, a staff reporter for Inc. Magazine, where he covers how the rise of AI is impacting the workplace. He has previously written for the Financial Times and Investopedia. Please welcome Navrina and Ben to PeopleForce.

Ben Sherry: Hey Navrina.

Navrina Singh: Hi, Ben. How are you?

Ben: I'm good. How are you?

Navrina: I'm great now.

Ben: I feel like we're a little fake because we were just chatting backstage.

Navrina: I know, but we did well.

Ben: Yeah. So, as you just heard, Navrina is the founder and CEO of, oh my God, I'm blanking on your company's name. What's happening?

Navrina: It's okay. Credo AI.

Ben: Credo AI.

Navrina: Credo means a set of values that guides your actions. So you know, we're on a mission to guide AI.

Ben: Yeah, and obviously it's such an important topic right now. I'm sure everybody in this room is experimenting with ChatGPT or DALL-E or any of the other kinds of generative AI that have taken over the world in the last eight months, nine months, a year, and is looking for ways to integrate that into their company and into their business practices. So can you talk about why ethical AI and responsible AI is so important to you and why this is a path that you really wanted to blaze?

Navrina: Absolutely. So, how many of you here have phones with you right now? I can't see. Awesome. How many of you are using artificial intelligence at work? How many of you are using ChatGPT? See, right there. So, AI has become the fabric that is powering our society and our infrastructure. And I would say that this has been a monumental year, because all the complexities of machine learning and artificial intelligence have been made available to everybody because of ChatGPT.

And I would say that a big component of that was not the technology challenge, but the design challenge. As you know, Ben, artificial intelligence is an old technology, 50-plus years old. But this is the first moment in time when, because of the accessibility and democratization of artificial intelligence, everybody can use it. My nine-year-old uses ChatGPT and DALL-E on a day-in, day-out basis. So when you have a technology that is that powerful, I think one of the key questions you need to ask yourself is: what are the core values that are going to guide that technology, and how are we going to make sure that humans continue to stay right in the middle of this AI revolution?

And this is where responsible AI comes in. Responsible AI really means a set of practices, governance structures, policies that help guide the development, the use, the procurement of this powerful technology to make sure that it's always serving our purpose.

Ben: Yeah, and sort of going off of that, one of the, you know, main components of what you guys do at Credo is establishing an AI constitution. So can you kind of talk about, you know, why it is so important to set hard and fast rules for how you're going to operate any AI project that you're taking on from the procurement of it to the actual development to having consumers actually use the end product?

Navrina: Yeah, you know, Ben, that's a great question. So one of the things is just simplifying artificial intelligence. It is a set of algorithms that reason over massive amounts of data. So as you can imagine, a big component of what the output of these systems is, is determined by what they're fed. And if a system is fed a certain set of characteristics, which might not include a person like me, let's say, a Brown CEO, a woman in technology, what ends up happening is you get outputs which are not going to depict those characteristics.

So it becomes really important in this new age of AI to find the right constitution. And what we mean by constitution is a set of, it could be regulations. It could be standards coming from NIST, ISO, and others. It could be company values, or it could be industry best practices. So just to give you an example, since we are at PeopleForce and right in the heart of New York: in July of this year, New York City's Bias Audit Law, a very monumental law, went into effect.

And the core reason that law was passed was that within New York City, enterprises are not only buying third-party HR tools, but they're developing a lot of these HR tools, which are determining who should be hired, how they should be trained, how they should be retained, and what the performance of individuals should look like. So as you can imagine, when we start to farm out a lot of this decision making to these algorithms, we'd better make sure they work for you and me.

Ben: Right.

Navrina: So the core reason this New York City law, Local Law 144, was put in place was to provide that constitution. How can we ensure that companies have taken responsibility for doing comprehensive testing, so that when they're putting these tools out in the market, they are not having disparate impact and leaving out certain demographics which generally might be left out of an employment conversation?
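[To make the testing Navrina describes concrete, here is a minimal, hypothetical sketch in Python of the kind of impact-ratio calculation that Local Law 144 bias audits report for automated hiring tools: the selection rate of each demographic group compared against the group with the highest rate. The data and group names are illustrative, and this is not Credo AI's actual tooling.]

```python
# Illustrative sketch (not Credo AI's product): computing the kind of
# "impact ratio" a NYC Local Law 144 bias audit reports for an
# automated employment decision tool. All data below is hypothetical.

from collections import defaultdict

# Each record: (demographic_category, was_selected)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for group, selected in decisions:
    counts[group]["total"] += 1
    counts[group]["selected"] += int(selected)

# Selection rate per group, then impact ratio relative to the
# highest-selected group. A ratio well below 1.0 flags possible
# disparate impact that the audit must surface.
rates = {g: c["selected"] / c["total"] for g, c in counts.items()}
best = max(rates.values())
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```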

Ben: And how can people kind of go about, you know, determining what their own personal constitution for their businesses is? Are there specific things that you should take into consideration, depending on the type of business or industry that you're in?

Navrina: Absolutely, Ben. I think that's a very good question, because what good looks like really depends not only on the organizations, but on the leaders within those organizations. So I think we are living through this very interesting moment in time, where the leadership of an organization, the board of the organization, and the individuals within that organization are really dictating how they're going to show up in this age of AI, and what is going to be a core differentiation for enterprises is essentially the values that these companies uphold.

So as an example, Credo works with one of the largest cloud providers. They build facial recognition technology. And, if you've observed in the past four years, facial recognition has been at the center of a lot of debates, especially around using facial recognition for surveillance and how it performs poorly on certain demographics. So in that case, you can imagine this particular organization was very prescriptive about what their constitution is: what set of values they are going to clearly indicate to their customers and test against. Because when you go outside those values, this organization does not take accountability for that system performing as expected. So I think really defining what good looks like in AI comes from leadership and from people, and human centricity is going to be really critical in AI now.

Ben: Yeah, and kind of going off of that, one of the main topics that you've spoken a lot about is involving many stakeholders from many different areas. Every kind of area that your business touches in those decision-making sessions, in that process of figuring out what your constitution is. So can you talk about the importance of involving everyone and not just a small group of tech execs or what have you?

Navrina: Yeah. So two things on that, Ben. I truly believe that AI is fundamentally different from any software or any other technological revolution we've seen in the past multiple decades. And the reason it is so different is because there's a very important component of how it's making those decisions. And when you start thinking about the technology and looking at what it is trained on, where that training data is coming from, and how the training data is cleaned up to make the decisions, it becomes really critical to make sure that there are diverse perspectives. And as you know, I spent the first 20 years of my career building products in AI and in mobile.

And what was fascinating was, as a technologist, you move into this mode of innovating at the forefront and moving fast and breaking things, and I think that narrative does not work in AI. This is the moment in time where we have to move fast, but with intention. And the only way that intentionality can come in is when you bring in diverse perspectives.

So what we are seeing in AI when organizations are building their constitution, when they're defining what good looks like for them, the companies that are able to do that well or countries that are able to do that well are bringing in the technologists, the AI experts, but they're also bringing in policy, compliance, risk management, HR, procurement, social sciences. Because the thing is right now you have this fundamentally transformational technology that is not just zeros and ones. It is actually determining—are you going to get the next job? Is your claim going to be rejected? Are you going to be pulled to the side at TSA because you don't fit in a particular mold? So it really needs a very diverse set of perspectives to come to the table to build that constitution.

Ben: And that even includes consumers, right?

Navrina: Absolutely. Impacted communities. This is a massive, I would say, discussion that we are having and today is an interesting day because right in D.C., Senator Schumer is holding the first bipartisan gathering of executives from big tech, from media companies, from civil society, from impacted communities to really have this conversation around—how do we put the guardrails for this transformational technology, which is moving at a speed that we cannot control? And I think this is where we need to, again, change the story here, because humans are not powerless. Humans need to be central to this AI revolution and really make sure that the human oversight is what's guiding AI technologies.

Ben: And, you know, you brought up Washington, D.C. For anybody who doesn't know, you're on President Biden's National AI Advisory Committee, helping the White House determine what its approach is going to be to the regulation of AI. What can you tell us about that work and coming together with your fellow professionals to have a voice in this huge moment?

Navrina: You know, one, I can't publicly disclose anything that is confidential, so I am here in the capacity of CEO of Credo AI. But what I can disclose is, I think this is the first time I have really seen our administration move really fast on some of the core decision making around technology. And I've also seen, for the first time, the tech sector coming to government and seeking guidance on how to actually put guardrails around this tech. And as I mentioned, as a technologist and a product person, I had very little understanding of policy and regulation, so I've spent the past six years really in D.C. and Brussels trying to understand how we can inform policymakers about this very core technology. So I would say there are three things that are happening really well. One, there is action, not just conversations. And we're going to see more and more of that coming out of the Biden-Harris administration.

The second thing that's happening is, I think, at a global level, much more collaboration between the U.S. and our allies to really figure out, unlike GDPR, which became a, you know, Brussels-effect moment that everyone had to adhere to, how we can make sure that there is harmonization of standards and regulations, especially among allies.

And I would say that the third debate that we are having, which is a tough one to solve for, is: given the speed and impact of AI, how do we not give too much away to our adversaries? So China is central to that conversation, and really thinking about AI's influence on national security, and how we can ensure that we are speeding up innovation while being intentional about the guardrails, is a big focus.

Ben: And it seems that that is such an issue, as you're saying, not giving any secrets or important leads to our adversaries. But I also imagine that it's difficult to find that balance you're talking about, of encouraging innovation and letting companies go wild and try new things and create amazing products, but also instituting those guardrails to keep us safe. Do you have any personal thoughts on what governing AI would look like, on having some sort of a governing body? There's been talk of starting up an AI agency, and I think Sam Altman has even talked about having an international consortium to govern worldwide and create worldwide rules. Do you have any personal opinions on that?

Navrina: Certainly. I think it's tough not to have personal opinions when you're right in the middle of this hurricane. I think what's important to understand is we need to fundamentally have capacity building in organizations, in government around AI. And I think there is a massive, I would say, AI expertise deficit that we are right now encountering, which is causing a challenge in all these parties coming together and having that conversation.

Having said that, I think in the long term, having a specialized AI agency that can keep pace with the speed of innovation in AI certainly makes sense. I don't think it makes sense right now. I think right now we need to help build capacity in the agencies that exist, and make sure that we have a pathway so that when this independent agency is set up, it's successful.

Having said that, I think there's another very important aspect to consider—the role that companies and individuals are going to play in that narrative. Again, in tech, we've adopted this mantra of experimentation and iteration and learning very quickly from how consumers give feedback. I would say we have not been as forgiving of policymakers and we are expecting perfect rules, perfect regulation, perfect everything, and I think that's where we are missing the point. I think this is the point in time that we need to also allow for iterative policymaking so that we can move fast and test out the guardrails because the technology is changing on a day-to-day basis.

Ben: And we were kind of talking a couple of days ago about how it's not exactly the tortoise and the hare, but it isn't not the tortoise and the hare, right? You want to move fast, like you're saying, but you also want to have that intention and go in with a plan and so much of what Credo does is helping you create that plan, right?

Navrina: Absolutely. I think one of the things that is really top of mind for me, and for a lot of people that we work with, is the upcoming elections. If you just take that as an example: 10 years ago, we didn't do our jobs as technologists, as policymakers, to manage social media. And we are seeing the implications of that. We are seeing implications of mass misinformation. We are seeing implications of deepfake technology showing up in a way that we have not experienced before, and I think we are seeing the implications of it on everybody. You know, as I mentioned, my nine-year-old goes and uses all these AI tools, and she's online, and I think she's trying to figure out who she is in this new world of artificial intelligence and what her rights are, which is fascinating, and I think you and I were discussing this. Just last night, I was in conversations with Duncan Crabtree-Ireland, who is the chief negotiator for SAG-AFTRA, and he was mentioning that the past 60 days have been so difficult, because the narrative is simple.

You have a set of media companies, and then you have the creatives, you know, the writers and the music directors and the actors, and guess what? All the AI technology is built on the IP and the data produced by humans. So as you think about the future, the key question is: how can these humans maintain agency and creativity and be compensated for that in this new AI world? And that should not be a tough conversation, but apparently it is.

Ben: Well, and I think that, and this is something that you've talked about too, your success with AI will be determined by how greedy you are, right? If you're more greedy, it's going to be difficult for you, because you're going to try to move fast and break things, and you're not going to take the time to be responsible and have that additional time to think about these things and really consider all the options. And it's interesting, especially with the actors' and writers' strikes, you know, these are people who are fighting to implement regulations in their industries when it comes to AI. How do you feel about that, the creative people who are worried about the impact of AI and how it could change how they work or how they get compensated? Is there a way for creatives and AI to work together in harmony and for everybody to still come away making money, basically?

Navrina: You know, I think this is where values become really critical. What are we as humans going to value as we move forward? And I know it sounds really easy to say: I have a startup, and I have a fiduciary responsibility to all the shareholders who are involved. But again, the question is, "What do I want this company to accomplish?" And I think this is a moment in time when leaders of enterprises have to really decide what values they're going to bring into this age of AI. And honestly, greed is not going to help them win. What's going to help them win is trust. And so how do you actually inculcate that trust with the right set of stakeholders? And guess what, who are the stakeholders? It's humans. So how do we respect the data that we've created? How do we respect the IP we've created?

How do we make sure that all the AI systems that are getting trained, ChatGPT included, on these massive troves of data are trained with consent, right? So that you're actually saying, "Hey, all the blogs I'm writing, I'm giving you consent to use those blogs to train your algorithms." Without that, the fabric of humanity is going to break.

Ben: Yeah.

Navrina: So greed is not going to be, I would say, the winning strategy in this age of AI; trust is. So I think this is a very interesting time that as employees within an organization or leaders or board members, you need to be thinking about what those values are that are keeping humans central to this AI conversation.

Ben: Yeah. And, you know. Very basically, how can companies go about building that trust when they're creating these systems? Are there metrics that can be tracked to determine how well regulated your program is or your project is?

Navrina: Yeah. So Ben, that's where a lot of the work that we do at Credo AI comes in. We have a software platform that really automates a lot of this oversight and governance, starting from your data sets and your algorithms and going all the way to your applications. But I think central to it, even before going into metrics, is transparency. Right? So the best way to build trust is disclosing: what are the sources of the data sets you've used for training your algorithm?

What kind of testing have you done? What was the provenance of the decisions you made? Who actually reviewed the output? Did you have human oversight for high-risk applications? If you did need third-party audits, how were those audits conducted? Et cetera. So we actually have, across that entire AI life cycle, a set of requirements.

But as you can imagine, a core component of those requirements is transparency, because already we are operating with black box systems, in many cases. And now with large language models, it's black hole systems. We don't know what these systems are doing and the emergent capabilities of these systems are something that we should really pay attention to.

So the best way to start here, and this is something that I advocate pretty extensively for, is disclosure reporting by public companies as well as private companies around how they are building these systems, where these systems can be used, who can use these systems, and providing the transparency around that entire ecosystem.
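[As an illustration of what the lifecycle disclosure Navrina describes might look like in practice, here is a minimal, hypothetical sketch in Python. The record fields simply mirror the questions she lists above (data sources, testing, human oversight, audits); the names and schema are illustrative assumptions, not a standard and not Credo AI's product.]

```python
# Hypothetical sketch of a structured transparency disclosure for an
# AI system, covering the lifecycle questions raised in the conversation.
# Field names and example values are illustrative only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AIDisclosure:
    system_name: str
    intended_use: str
    training_data_sources: List[str]
    tests_performed: List[str]            # e.g., bias, robustness, accuracy
    human_oversight: bool                 # expected for high-risk applications
    third_party_audits: List[str] = field(default_factory=list)

    def report(self) -> str:
        """Render a simple human-readable disclosure report."""
        return "\n".join([
            f"System: {self.system_name}",
            f"Intended use: {self.intended_use}",
            f"Training data: {', '.join(self.training_data_sources)}",
            f"Testing: {', '.join(self.tests_performed)}",
            f"Human oversight: {'yes' if self.human_oversight else 'no'}",
            f"Audits: {', '.join(self.third_party_audits) or 'none'}",
        ])

example = AIDisclosure(
    system_name="resume-screening-model",
    intended_use="rank applicants for recruiter review",
    training_data_sources=["historical hiring data (2018-2022)"],
    tests_performed=["disparate impact", "accuracy by demographic group"],
    human_oversight=True,
    third_party_audits=["independent bias audit, 2023"],
)
print(example.report())
```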

Ben: Yeah, and like you just brought up, so many of these large language models, the really big ones, your ChatGPTs, they're more than black boxes. So how do you square that? Because, from what many people have said, the main way that many companies are going to integrate AI into their own business practices is through these prebuilt models. Do you think that these models should be opened up, with more information about them easily accessible? Or is it just one of these things we have to deal with?

Navrina: Okay. So let me just digress a little bit, and then I promise I will answer this question. I think it's important to understand the pre-ChatGPT and post-ChatGPT worlds, and the difference between those two worlds. The pre-ChatGPT world was essentially traditional machine learning. You had these algorithms reasoning over a corpus of data, which was primarily your data, and it was built mostly by your in-house data science and machine learning teams. That was the traditional model. But post ChatGPT, what has happened is, when you think about the capabilities of these billion- to trillion-parameter models, most companies don't want to take on the capital investment and the compute cost that comes with training those large language models.

So as a result, Ben, what you are saying happens: a dependency on some of the largest players, like Microsoft, OpenAI, Google, and Anthropic, and that's the world we are living in today because of the ease of use. And because, you know, these seven or eight players are dictating that technology, there's been a big power shift. So a key question that we should be asking, not only of them but of ourselves as well, is: what percentage of their AI investments are they actually spending on AI safety and governance?

Because if you think about some of the challenges with large language models, IP leakage is a big challenge, and so is copyright infringement, because they are training on this massive amount of what they think is free internet data. But you and I provided that data. So I think really thinking through copyright issues is important, as are hallucinations, which are essentially these systems going off the rails and not providing factual outputs.

So when you start thinking about those challenges: one, the responsibility is asking these large language model providers how they are thinking about safety and governance, and what proof they can disclose that there is not going to be downstream risk. And then you, as an enterprise or a user, should be asking, "Hey, is this the right place to use this large language model for this particular application? Do I understand the risk enough to start creating customer support chatbots or marketing chatbots, which potentially could carry risk?" And a lot of this negotiation of risk and risk surface area is something that Credo AI also solves for.

Ben: Do you think that there will eventually be, you know, certifications that you'll have to reach in order to deploy an AI model of a significant size?

Navrina: Absolutely. Currently, one of the leading regulations in the world is the EU AI Act, which is going to get passed at the end of this year. Basically, the summary of that regulation is that it's based on the risk profile of a particular application. You know, high risk, critical risk, low or medium risk.

There is not only a requirement around registering that use case with notified bodies in Europe, but potentially following through with certification and sort of a seal of, you know, "this is trusted AI." You can imagine, Ben, getting to that stage is a really hard problem because…

Ben: As it should be.

Navrina: As it should be. But how do you define what good looks like for that application? So much of it is dependent on context. As an example, you might use the same facial recognition system to unlock your phone and to access this auditorium. And as you can imagine, accessing this auditorium is higher risk than unlocking your phone. So you can imagine that context plays a critical role. So yes, there is a lot of work globally on certification schemes, but I think that's going to be a little ways out. We're going to first see a set of context-focused regulations and sector-specific regulations, and then, following the success of those, certification schemes will emerge.

Ben: So, kind of shifting a little bit, something that I wanted to discuss with you is historical parallels to this current moment in AI. I know you said that there aren't really historical parallels in the technology space, but I do think there are instances where amazing technology has come through at a very fast pace, and government, regulation, or just simple rule-setting has not been put in place in time to stop dangerous stuff from happening. I think social media is a big one that a lot of people talk about. So, do you see any similarities there between the rise of social media, and the sort of unregulated nature of it at first, and how that did a bit of damage to, I think, a lot of people's mental health, potentially? And do you think that there are lessons we can learn from those instances that we can use as we move forward with AI?

Navrina: You know, absolutely. You give the examples of social media and privacy; I would say those are two really recent ones that we can learn a lot from. But I think humans are really lazy. We don't learn very fast unless we repeat those mistakes, right? So that is what I'm worried about in the AI world, that somehow we've erased our memory of what social media did to our kids and what social media did to you and me and what it did to our democratic system. We've sort of forgotten that, and we have a goldfish memory here, and we should not. So, that's why I said there are no historical parallels, because, honestly, this is the moment in time where we could literally eradicate humanity and our planet if we don't take steps right now. And I sincerely feel this is, you know, you and I were discussing the Oppenheimer moment, the atomic bomb moment, and I hate those comparisons, but I think there is some truth there: as humans, we had a choice, and now we have a choice.

The choice is how important and what is the role humanity is going to play in oversight of these systems, or are we going to yield power and say, "Oh my god, AI is so powerful technology. We can't do anything about it." And by the way, there are, I would say, groups that do believe that we are powerless in this new technology. I don't subscribe to that.

Ben: You think that there is, you know, and I think that most people would agree with you that there is a happy middle ground to be found. We're all just kind of looking for it now, right?

Navrina: I think there's no middle ground here. I think we need to take action. We need to put guardrails in place. Organizations need to get very clear on what their values are. And one of the things that we are finding in our work, the way we view the world of AI, is that you have these AI-first, ethics-forward companies, and these are companies that have used and deployed AI for many years, but they've fundamentally done that by focusing on their values and ethics. It's central to everything that they're doing. And what we are noticing with those organizations is that they are building more trust with their customers, because of which they are retaining customers longer and they are able to have faster procurement cycles and sales cycles.

They're also able to build trust with their employees, and employees love transparent organizations. So I think, going back to the conversation around whether greed is going to win in this age of AI: absolutely not. And so I think we need to start thinking more and more about who we are as humans and what values we are bringing to this technology.

Ben: Yeah, because as you said, it has such a transformative aspect to it that you have to be very thoughtful in a way that, you know, maybe you didn't have to be in those other technological innovation moments. But yeah, I think that that's fascinating.

Navrina: And, you know Ben, I do want to underscore: my narrative here is not to say, "We should not move fast, and AI does not have benefits." It actually does have a lot of benefits. I use it day in, day out; as an example, I use ChatGPT to write my emails. Just a side story: I grew up in India, and I've been in the United States for about 20 years. Of course, I found a man from Montana to marry. So as you can imagine, my household is very interesting.

And my nine-year-old is trying to figure out who she is in this world, because she has a white dad and a brown mom and the Alexa doesn't understand me. So it's a very interesting dynamic in the household. But I think it is those moments where technology has just become a part of our lives that are causing a little bit of an identity crisis, especially in these hybrid families. So I think it's really critical that we take our responsibility in this new age of AI very seriously.

Ben: Yeah, and you know, it's interesting you bring up Alexa. I feel like these personal assistants are something that a lot of companies are betting big on, right? And they think that if you take the large language model technology that powers ChatGPT and apply it to something like an Alexa or something like a Siri, that could be transformative. Do you see that as being, you know, one of the big areas for AI that is going to sort of pop up in the next few years?

Navrina: Oh, absolutely. I would say that with large language models, the first set of applications that we are seeing are actually not the traditional applications, the ones that are high asset value for companies. So if you think about a financial sector company with risk scoring models and underwriting models, those are still traditional ML-based.

They're not large language model-based, and I think it's going to be a little bit of a pathway to start using large language models for those very, I would say, critical-to-business applications. Where we are seeing LLMs really show up is in productivity gains. So as I mentioned, you know, whether it's writing marketing copy or writing a speech or trying to do customer support, we are seeing large language models start to be used pretty aggressively in all those areas. And the reason, again, why those are the areas is the productivity gains, right? Now, as an individual within an organization, I don't have to think so much about writing an email to my boss that is professional, coming from a non-American background.

I could just feed those prompts to large language models and out comes this beautifully crafted email, which my boss loves, right? So when you start to see these productivity gains, what ends up happening is our laziness gene kicks in. And I'll give you a very significant example. We work a lot with government, and with the introduction of ChatGPT, one of our government customers found that their employees were using ChatGPT to write RFPs for national security government use cases.

Ben: I don't know about that one.

Navrina: Just think about the magnitude. One, you are giving your company's information to these large language models, and this was prior to the upgraded terms and conditions that OpenAI put in place, so they were actually using that data for training. IP leakage: massive concern. Two, you're writing an RFP that you might not even be checking, so with all the hallucinations, which weren't fixed at that time, factually incorrect information is coming out. And given we are humans, we'd rather be watching "Ugly Betty" than checking these RFPs. What ends up happening is you're not fact-checking.

So think about the magnitude for national security of these large language models. So, my point here is humans really need to start stepping up, taking responsibility, understanding what our values are, and providing that oversight to the AI technology.

Ben: And I think it also speaks to how important education for everyone is going to be, right? Not just the people who are creating these tools and using them, but the people who are, you know, being advertised to with AI, the people who are seeing AI-generated art without really realizing it. I'm sure a lot of people in here know about the lawyer who used ChatGPT to cite cases, for a trial he was working on, that did not exist, and he got brought out in front of the judge and basically spanked publicly for a while. And it's embarrassing, right? And I think that for a lot of people, you know, how can they educate themselves on the basics, the stuff that they really need to understand, even just to exist in this world that's being created by all these generative models?

Navrina: So, you know, I've been actually thinking deeply about that, especially when I look at the curriculum of our education, our schools. I'm heading to Wisconsin after this, which is my alma mater, to go and actually talk about this, because they've been, yeah, go Badgers. They've been talking a lot about the use of ChatGPT within school.

But I think there's a fundamental shift, again, that we need to understand in AI. You don't have to be an AI expert, but you have to figure out how to ask the right questions. And the way to ask those right questions is really just getting informed in terms of: what is happening? Why is this technology different? How can we put the right guardrails in place? Who is responsible? How can we provide that oversight? Et cetera. So, I think this goes back to the conversation we were just having around multi-stakeholder perspectives. I want your perspective, as a reporter, as to how this would impact your job.

Similarly, I'm sure you want my perspective, as a product person, on how we are building this technology. Why can't we handle it? Why can't we control it? Why can't I use consented data? So I think that is where we need to start, you know, putting down the barriers in communication and understanding that we don't all have to be experts. We just have to be educated enough to ask the right questions and create those bridges, so that we can actually bring the right values to AI.

Ben: Yeah, and it speaks to what you were talking about with context earlier. And how every industry, every business is going to have a little bit of a different process for integrating AI or how AI can help their business. Like, as a reporter, I use ChatGPT sometimes to write headlines or to, you know, if I'm looking for a synonym that starts with a specific letter. I think that there's a lot of really great ways that you can use it to integrate creativity into your life in an interesting way or sort of aid you. How do you use AI in your day to day to just improve little things?

Navrina: Yeah. So, I want to be careful about how I answer that, because I don't use AI to improve my productivity, I use governed AI to improve my productivity, which is a drastic difference. Because I truly do believe that AI can help us be better and spend time where we should be spending time. I think it's going to elevate our creativity levels. It's going to give us more time to do things that we enjoy. But it has to be governed, and it has to be managed appropriately. So, day to day, I would say I do extensively use AI, whether it is conversing with my personal assistants; I use Alexa and Siri quite extensively, not only to take notes but to set reminders. I use it extensively, I would say, for writing and communication, which, as you can imagine, as a non-native speaker of English, is beautiful. But I think the things that I'm not actually using it for intentionally are what I'm worried about.

So as an example, I started the company a week before shelter in place in California. So, perfect timing, and you can imagine all the hiring was done remotely. The company was built remotely. We've literally never had physical offices, but as you can imagine, what was really important, what brought us together, was technology. We extensively use Slack. And just yesterday, at Dreamforce, they announced all these AI augmentations to Slack, where there are so many capabilities now, you don't even realize you're using AI, but it's actually powering it. It's powering the right insights, it's doing the right reminders, it's doing the right summarization, it's making you take the right actions. So without me intentionally using AI, a lot of it has actually just crept into every part of my life.

Ben: Yeah, it looks like we have a couple minutes left and we do have an audience question, so I'll read it out to you.

Navrina: Okay.

Ben: How much time do you think it will take for AI to fully operate physically on its own, like fully autonomous vehicles or healthcare machinery?

Navrina: So I think that's a very hard question. The reason it is hard is because that requires innovations not only on the software side, but also on the hardware side. On the software side, I would say that we are seeing unprecedented levels of innovation. And we are seeing these emergent capabilities, which I'm sure you've read up on, emerge with some of these large language models, and we are not even able to explain why those happen. The issue, to address this question, is on the hardware side and the physical side. So as an example, one of the early areas that I was focused on was robotics and building collaborative robots for manufacturing plants.

And as you can imagine, one of the biggest problems that we had, and the magnitude was insane, was this manufacturing plant that manufactures cell phones. One of the jobs of the humans in that particular manufacturing plant was to look at the LCD screens and find dead pixels. Now if you think about it, one, it's such a burden on your eyesight that the working lifetime of a particular employee in that manufacturing plant was only two years, because they would lose their eyesight looking for these dead pixels.

So one of the big innovations we did, ten years ago, was actually replacing that with computer vision systems, so that it's not a burden on humans. So now you have these collaborative robots actually working side by side with humans, doing pick and place, but also doing this computer vision and finding the dead pixels.

But in that scenario, you can imagine that the physical harm that can be caused when you have a robot side by side with a human was a massive concern. So we literally had to put these collaborative robots in cages to manage that. So the reason I'm struggling with that question around fully autonomous capabilities is that it's a combination of hardware and software. There's a lot that we've done on software. Hardware is still challenging, but as you can imagine, progress is being made quite actively, and certainly check out Boston Dynamics' robotics capabilities, which have been unparalleled, I would say, in the past couple of years.

Ben: There's some crazy YouTube videos up there of flipping robots and yeah, it's pretty amazing.

Navrina: They're not deepfakes.

Ben: And then there's even the delivery robots that are all around California now, right? They're out there. All right. I think that's basically time for us. So yeah, thank you so much.

Navrina: No, thank you for having me.
