AI Risks and the Need for Corporate Policies

Written by Pearl Kapoor on Saturday, 21 October 2023. Posted in Feature Article

Photo by Christina @ wocintechchat.com on Unsplash  


In an era where artificial intelligence (AI) is rapidly becoming an integral part of our daily lives, the risks that accompany its benefits are a growing concern. From self-driving cars to virtual personal assistants, AI is transforming industries and improving efficiency, but it also presents a set of unique challenges that demand thoughtful corporate policies.

One of the biggest risks of using generative artificial intelligence applications involves confidentiality. Like most other online applications and algorithms, AI chatbots typically take input from the user, store it, process it, and provide an output. The issue arises from the storage step: data leaks, privacy breaches, and other data-related incidents can expose information that was meant to be kept secret. The laws that currently address this issue require trade secret holders to “maintain reasonable steps to protect the secrecy” of their information, yet they do not address AI by name, leaving trade secret law open to interpretation.
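To make the storage concern concrete, the following minimal Python sketch traces the request flow described above. The endpoint, field names, and retention behavior are illustrative assumptions for this article, not any particular vendor's API.

import json
import urllib.request

def ask_chatbot(prompt: str) -> str:
    # The full prompt -- including any confidential details pasted into it --
    # leaves the company's systems and is transmitted to a third-party service.
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    request = urllib.request.Request(
        "https://api.example-chatbot.com/v1/complete",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # Whatever the provider retains server-side (prompts, outputs, logs)
        # now sits outside the company's control -- the confidentiality risk
        # described above.
        return json.loads(response.read())["output"]

# An employee who pastes a trade secret into the prompt has, in effect,
# disclosed it to the provider's storage layer:
# ask_chatbot("Summarize our unreleased pricing strategy: ...")

Once a prompt has been sent, a corporate policy can no longer control how long it is retained or who can access it, which is why many companies now restrict what employees may paste into public chatbots.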

Another legal drawback of using bots like ChatGPT is intellectual property infringement. Many businesses require unique pitches, original ideas, or original work product. Using chatbots to generate those ideas and complete such deliverables could be treated as intellectual property infringement, since the individual delivering the work is not the one who came up with it. This, however, raises a further and still unsettled debate: who actually owns the intellectual property that AI generates? Intellectual property laws do not yet address this question, so it falls to business owners to regulate their employees and maintain the integrity of their corporations.

Finally, and perhaps most importantly, artificial intelligence can violate civil rights and anti-discrimination laws. The data an AI outputs is often a processed version of its input data, and one thing that frequently flies under the radar is that if the input data carries a bias, so will the output. A biased algorithm used in credit decisions, for example, could violate the Fair Credit Reporting Act. The U.S. Equal Employment Opportunity Commission (EEOC) is actively monitoring the use of AI: it has initiated an agency-wide effort to ensure adherence to federal civil rights laws when technologies such as AI and machine learning tools are used in hiring and other employment decisions. This shows that AI is being scrutinized not only in the context of individual business deals but at a broader organizational and regulatory level, underscoring the need for increased oversight.
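As a toy illustration of how bias travels from input to output, consider the short Python sketch below. The hiring records and the simple rate-based "model" are invented purely for illustration and do not describe any real system.

from collections import defaultdict

# Hypothetical historical hiring records: (group, hired). The imbalance is
# invented to illustrate the point, not drawn from real data.
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

# "Train" the simplest possible model: the historical hire rate per group.
totals, hires = defaultdict(int), defaultdict(int)
for group, hired in history:
    totals[group] += 1
    hires[group] += hired

def predicted_hire_rate(group: str) -> float:
    # The output is just a processed version of the input, so any skew in the
    # historical decisions shows up directly in the predictions.
    return hires[group] / totals[group]

print(predicted_hire_rate("A"))  # 0.75 -- the favored group in the biased history
print(predicted_hire_rate("B"))  # 0.25 -- the disadvantaged group

A real AI system is far more complex, but the same principle applies: a model that learns from biased historical decisions will reproduce that bias at scale, which is exactly the pattern regulators such as the EEOC are watching for.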

ChatGPT is being used everywhere today; businesses can no longer ignore it. The expansion of technology is inevitable, and every organization must begin evaluating the issues that artificial intelligence brings. Alongside these transformative capabilities come challenges related to ethics, data privacy, and potential job displacement. Companies need to proactively address these issues, not just as a matter of compliance but as a strategic imperative for remaining competitive and responsible in an AI-driven world.

About the Author

Pearl Kapoor

Pearl is a Business Features Writer at Girls For Business.

