As artificial intelligence (AI) takes an increasingly prominent role in the workplace, brands can benefit from a host of efficiencies and opportunities. However, as the technology moves forward at pace, it’s essential that businesses keep up and ensure they are aware of the risks as well as the benefits.
AI is an umbrella term. Within it, the key areas are large language models (LLMs) such as ChatGPT; machine learning (ML), which is about finding patterns and making predictions; and deep learning (DL), where multiple layers of ML models are used to refine an application – a familiar example today being speech-to-text.
Such technologies are already widely implemented – whether it’s your bank asking you to put your face in the frame for photo ID verification before approving a payment, or Siri knowing which music you’re asking it to play.
Yet some are being used in ways that were never intended. Notably, ChatGPT is being treated as a search engine, with people simply typing in a query and assuming the response is correct. In fact, ChatGPT is just one of several models available (others include Copilot and Gemini). Although trained on enormous datasets, each model can only draw on the data it has been given, so there is inherent bias in what you receive, and the response may vary from one model to the next.
Businesses need to be aware that if employees are dropping questions into the free version of ChatGPT, that information is no longer confidential – something that has already caused major reputational and legal issues for some brands.
Instead, it’s important to teach your workforce about prompt engineering – the different ways of framing a question – to help them understand how to use these tools effectively and ethically, and to provide appropriate access where needed.
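To make that concrete, here is a minimal sketch of the difference a well-engineered prompt can make, using the OpenAI Python SDK. The model name, prompt wording and product details are purely illustrative assumptions, not recommendations:

```python
# A minimal sketch of structured prompting, using the OpenAI Python SDK.
# Model name, prompt wording and product details are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A bare query leaves the model to guess at audience, tone and scope.
bare = "Write about our new product."

# A structured prompt states role, audience, constraints and output format.
structured = (
    "You are a copywriter for a UK consumer brand. "
    "Write a 50-word product announcement for our reusable water bottle, "
    "aimed at existing customers, in a warm, plain-spoken tone. "
    "Do not invent product features; use only the details provided: "
    "1-litre capacity, recycled steel, available in three colours."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; choose per your own AI policy
    messages=[{"role": "user", "content": structured}],
)
print(response.choices[0].message.content)
```

The structured version tells the model who it is writing as, for whom, in what format and – crucially for brand safety – what it must not invent.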

So let’s look at some of the main things that businesses need to consider when incorporating generative AI into their brand strategy:
Knowledge and training: AI offers possibilities and efficiencies across most areas of business and most sectors. Look at the tools already available and how you can integrate them efficiently, and set out your own policy and code of conduct so the parameters are clear. Wider regulation is also proposed: the Artificial Intelligence (Regulation) Bill, championed by Lord Holmes, would establish a new body, the AI Authority, to oversee AI regulation in the UK, and proposes that businesses appoint a designated AI officer.
Ethics and societal impact: control and transparency must be key considerations. Think about what controls the business has in place over how AI is being used. Are you being transparent with customers? This should form part of your policy, which needs to consider the potential impacts of AI. One recent example that has sparked debate is clothing brand Mango using only AI-generated models for certain campaigns. The company made clear the models weren’t real, but the ensuing consumer debate highlighted the concerns that accompany these kinds of business efficiencies.
Increased fraud risks: the new technologies also give rise to misinformation and deepfakes, which can be exploited by fraudsters. In one large-scale example, an employee was tricked by a deepfake into transferring $243,000 after a conversation with his boss – or rather, his boss’s voice. That was some years ago, when the technology was in its infancy – now such scams are worryingly common, with industry reports suggesting that deepfakes feature in one in 15 fraud cases.
System vulnerabilities: AI brings with it an increased risk of data breaches, model manipulation and adversarial attacks (where an attacker crafts inputs designed to trick the model). Having the right training and policies can help protect your business and your brand reputation. You should have a crisis communications plan in place and a clear understanding of what you will communicate to relevant audiences, and how, if needed. For example, a data breach may need to be reported to the UK’s data protection regulator, the Information Commissioner’s Office (ICO), and you may need to inform customers or respond to media enquiries. Setting up a process – and bringing in expertise to do that if needed – will save you much-needed time in the event of an issue.
Bias and discrimination: since AI is trained on data produced and selected by humans, it’s unsurprising that bias has made its way into the data and algorithms deployed – and the negative effects can be amplified by the sheer scale of the models. The three main sources of bias in AI are training data bias, algorithmic bias and cognitive bias. Eliminating these can prove challenging, but having the right governance and policies in place will enable businesses to ask the right questions to help ensure fairness (a simple sketch of one such question appears after this list).
Ownership and copyright: this is an interesting topic given the ease with which we can now create images, video and music with AI. These tools are built into some design platforms and can reduce licensing costs as well as speeding up the creation process – what once took hours or even days can be done in a matter of clicks. But AI draws on content it has ingested, and that raises various questions around ownership and copyright – so guess who we asked: AI! The response was: “Who owns an AI-generated image depends on the jurisdiction and the specific circumstances, and the law is still evolving.”
Needless to say, this is an important consideration for any images, videos or other assets you’re creating with AI to represent your brand. UK legislation does define computer-generated works, but there are various complexities. According to the University of Portsmouth: "The law suggests content generated by an artificial intelligence (AI) can be protected by copyright. However, the original sources of answers generated by AI chatbots can be difficult to trace – and they might include copyrighted works."
Security and compliance: traditional risk management systems may fall short for AI, so you should review what you have in place. Appropriate security and compliance will reduce risk, increase trust and ultimately improve the performance of your AI systems – from avoiding discrimination to protecting the privacy of those whose data is used to train them.
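Returning to the bias point above: what does “asking the right questions” look like in practice? One hypothetical example – a quick check, in plain Python with made-up data, of whether a model’s approval rates differ across groups:

```python
# A hypothetical sketch of one "right question": do approval rates differ
# across groups in a model's decisions? Plain Python, made-up data.
from collections import defaultdict

# Each record: (group label, model decision) - dummy data for illustration.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

gap = max(rates.values()) - min(rates.values())
print(f"Approval-rate gap: {gap:.0%}")  # Approval-rate gap: 50%
```

A gap like this doesn’t prove discrimination, but it flags exactly the kind of outcome a governance policy should require someone to investigate and explain.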
Ultimately, AI should be effective, fair, interpretable and secure. These four guiding pillars will not only help you implement efficiencies but also help protect your brand – and your bottom line.
Talk to the Amp team for help with brand strategy, communications plans, PR and more. Email us here.