A data visualization card listing what marketing leaders should advocate for in an AI use policy: accountability and governance, planned implementation, clear use cases, intellectual property rights, and disclosure details.
Bonus resource: Use this workbook to identify the unique ethical considerations that must be addressed to promote the responsible use of AI within your organization.
Download the workbook
Your corporate AI use policy must clearly describe the roles and responsibilities of the individuals or teams entrusted with AI governance and accountability in the company. Those responsibilities should include conducting regular audits to ensure AI systems comply with all licenses and deliver on their intended objectives. It’s also important to revisit the policy frequently so it stays current with developments in the industry, including any applicable legislation.
The AI policy should also serve as a guide for educating employees, explaining the risks of entering personal, confidential, or proprietary information into an AI tool. It should likewise address the risks of using AI outputs unwisely, such as publishing AI outputs verbatim, relying on AI for advice on complex topics, or failing to sufficiently review AI outputs for plagiarism.
Planned implementation
A smart way to mitigate data privacy and copyright risks is to introduce AI tools across the organization in phases. As Rispin puts it, “We need to be more intentional, more careful about how we use AI. You want to make sure when you do roll it out, you do it periodically in a limited fashion and observe what you’re trying to do.” Implementing AI gradually in a controlled environment lets you monitor usage and address hiccups proactively, paving the way for a smoother rollout on a wider scale later on.