Rancho Mesa's Alyssa Burley and Client Communications Coordinator Megan Lockhart address the need for an AI company policy.
Show Notes: Subscribe to Rancho Mesa's Newsletter.
Director/Host: Alyssa Burley
Guest: Megan Lockhart
Producer/Editor: Lauren Stumpf
Music: "Home" by JHS Pedals, “News Room News” by Spence
© Copyright 2023. Rancho Mesa Insurance Services, Inc. All rights reserved.
Transcript
Alyssa Burley: Hi, this is Alyssa Burley with Rancho Mesa's Media Communications and Client Services Department. Thank you for listening to today's top Rancho Mesa news, brought to you by our Safety and Risk Management Network, StudioOne. Welcome back, everyone. My guest is Megan Lockhart, Client Communications Coordinator with Rancho Mesa. Today, we're addressing the need for an AI company policy. Megan, welcome to the show.
Megan Lockhart: Yeah, thanks for having me, Alyssa.
AB: Now, since the release of ChatGPT in November 2022, it seems like nearly every technology platform is incorporating artificial intelligence or AI into their products. I'm seeing press releases weekly, if not daily, about new AI integrations with software that is widely used in offices around the world. And as a result, Rancho Mesa has been developing an AI policy for our own business to address the secure and ethical use of this technology. So, Megan, what are the benefits of using AI in the workplace?
ML: AI offers many benefits to businesses, like enhancing productivity and assisting with brainstorming and creativity. But it can also pose new risks that our listeners should be aware of. For example, human services organizations like health care facilities and nonprofits may be particularly vulnerable to risks associated with AI. But that doesn't mean our construction and landscape clients are immune. Organizations that have personal or sensitive data need to be cautious.
AB: Okay, so what concerns should organizations have about implementing AI into their workplace?
ML: Well, the concerns are quickly rising regarding the effects of AI technology in every industry. And as more companies begin to incorporate AI into their products and operations, now is the time to evaluate the necessity of an AI policy for your organization. I was reading an article by Nick Layton, a business owner, bestselling author and motivational speaker who also writes for Forbes Magazine, and he wrote, and I quote, "It's important not to blindly jump into AI technology without a proper plan in place. You could be setting yourself up for costly mistakes and risks." End quote. He then goes on to say, "Creating an AI policy doesn't have to be overly complex. It's best to start with simple guidelines that you can expand and adapt as your usage of the technology expands."
AB: That's good advice. And we're writing our AI policy to be flexible because we just don't know what AI will be released in the coming weeks, let alone the next year. And advancements in technology happen so fast. We're hearing that implementing AI can improve productivity. Is this technology going to be the ultimate tool to generate content, whether it's a company report, client proposal or marketing materials?
ML: Well, one cause for concern when using AI platforms such as ChatGPT is that responses will not always contain accurate information. It's possible the sources used are incomplete, biased or just flat-out wrong. To avoid employees distributing false information to clients, content should always be proofread by an actual human being. Your policy should address the fact that humans must be responsible for any content generated by AI.
AB: Absolutely. And employees must check the facts before distributing anything that's generated by AI. Have you talked to any experts about the use of AI in human services organizations?
ML: Yeah. Sam Brown, our Vice President of the Human Services group with Rancho Mesa, said, and I quote, "Nonprofit and human services organizations depend on the public's trust. Whether a development director uses AI to learn from donor data and personalize their experience or a program director uses AI to improve client outcomes, a thoughtful AI policy will ensure human oversight and minimize risk when implementing new technology." Along the same lines, organizations also risk plagiarism when utilizing AI. While it can help inspire ideas and creativity, if a company uses AI-generated ideas or guidance, it's important to ensure content is original to avoid unknowingly distributing the work or insight of others to clients or the public.
AB: So it's vital to know the AI platform's responsibility for claims of copyright violations for its AI-generated works.
ML: Yes, exactly.
AB: So what's the most dangerous risk for organizations when using AI?
ML: So arguably the most dangerous risk organizations face when utilizing AI as a tool for efficiency is the threat to security. Many AI platforms are designed to retain the information they are given and use it to adapt. For example, if you use ChatGPT to organize an Excel spreadsheet of confidential client information, that information could be absorbed into the system's database for learning and pose a security risk. Organizations must fully understand how their data is used once it's input into an AI platform. Is it stored and used for future learning, or is it deleted immediately? These are questions that must be asked before implementing AI into any organization.
AB: Of course. So as innovations in AI technology continue to advance, the importance of having an AI policy that guides employee usage becomes even more crucial. It's a good idea for clients to evaluate the potential risks of integrating AI into their operations and ensure their policy adequately covers these concerns. Megan, I know this is a topic that we're going to hear about a lot more in the future, but thanks for joining me in StudioOne.
ML: Of course. Thank you so much.
AB: This is Alyssa Burley with Rancho Mesa. Thanks for tuning in to our latest episode produced by StudioOne. For more information, visit us at ranchomesa.com and subscribe to our weekly newsletter.