
Caution is the watchword

Ben Rapp, Founder & Principal
February 2024

Securys recently sponsored an AI governance round table, where we were joined by representatives from a number of large organisations as well as leading law firms. While individual approaches to AI governance varied, there were some clear points of agreement around the table:

  1. Your organisation is already using AI, whatever your stated position. Many vendors have rushed to add AI features to their existing products, or to enhance their offerings with background AI processing, so there’s no point trying to be King Cnut – the tide will come in anyway.
  2. Therefore the single most important thing is to keep abreast of what is being used and put appropriate, manageable governance and guidance in place. There is no substitute for education – remember that many of your staff may not realise that they are using AI at all or may not understand the risks involved in sharing personal or confidential information with AI components of existing toolsets.
  3. Actual internal development of dedicated AI systems is proceeding with caution – everyone is experimenting and exploring, but without using live data and with a great deal of attention to approval processes and risk mitigation. No one around the table felt that competitive risks justified any kind of hasty adoption.
  4. The EU AI Act was a source of concern – both because of the very wide scope of its definition of AI and because of the sheer weight of bureaucracy it envisages. There are real questions to answer about the application of conformity assessment and the whole risk-management edifice: clarity is needed on when an organisation will be considered a provider rather than a user, given the extent to which current internal research is focused on training proprietary models. If the previous sentence reads like gobbledegook to you, consider downloading a copy of our guide to AI regulation, in which we go over the proposed EU approach in detail.
  5. AI governance is cross-functional. Just as you need to extend your guidelines and education to your whole organisation, you also need to involve all of your internal functions in contributing to governance activity and decision-making. Even more so than privacy, AI governance needs input from legal, compliance, technical, ethical and operational areas of your organisation if it is to be effective.

It was heartening to see such a wide sample of firms taking AI governance seriously, but there is clearly a lot of work to be done on practical implementation. Above all, this means ensuring that senior decision-makers properly understand the risks and business implications of the technologies being deployed, and enabling the widest possible set of eyes and ears on the ground to keep tabs on how AI is actually being used beyond your known and visible research projects and primary applications of the technology.

AI governance may seem like a natural extension of your existing privacy programme, and that would certainly be a good place to start, but it needs access to deeper and broader technical understanding than you may already have in place. You should also consider how AI affects your risks in other sensitive areas of data, including commercial confidentiality and price-sensitive information. The way forward is collaborative compliance: working with partners who have developed field-tested governance models and can bring the necessary technical, procedural and legal skills to the table.

Act now and speak to us about your privacy requirements

Start a conversation about how Privacy Made Practical® can benefit your business.

