Mon 31 Jul 2023

Artificial Intelligence & Recovery of Evidence: Who has what?

As Julie Hamilton noted at this year's Law and Technology Conference organised by the Law Society of Scotland, the use of Artificial Intelligence ("AI") raises several practical issues for those advising and representing clients in civil disputes. The claim brought by Getty Images against Stability AI over its image generator, Stable Diffusion, is a case in point.

But what about disputes in the Scottish Courts? And more specifically, how do technological advances in AI impact on a crucial stage in most litigation - the recovery of evidence?

The Workings of AI

The starting point is attempting to understand how AI works. The author pleads no technical competence on this matter - the mysteries of AI and its functionality are obviously the domain of the IT priests. But having a working understanding is important, and legal definitions are not particularly helpful in that regard. Here is the definition proposed in the European Union's draft AI Regulation:

 "artificial intelligence system means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments."

The only definition from a UK source of law is to be found in Schedule 3 to the National Security and Investment Act 2021 (Notifiable Acquisition) (Specification of Qualifying Entities) Regulations 2021 and does not advance one's understanding much further.

The definition might tell us what AI is, but not how it works. Yet the mechanics of AI are of significant importance. In the case of Generative AI (i.e. AI that generates output - text, images, sounds, etc.), we understand that large volumes of data must be processed in order to generate that output. For example, if what is needed is for the AI to be able to paint like Claude Monet, then the AI would have to be fed a sufficient number of his paintings to allow it to learn the characteristic traits or features of the artist's style and then apply them on command. But where does this data come from? How does the AI decide which data to use and which to ignore? And ultimately, does it recreate any of Monet's work in the paintings it can be prompted to produce?
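
By way of illustration only, the short Python sketch below shows the kind of provenance record a training pipeline could keep: a manifest listing each source image ingested during training, together with a digital fingerprint (a "hash") and a timestamp. The directory, file and variable names are entirely hypothetical, and the sketch does not describe any particular developer's system - whether such a record is kept at all, and in what form, is exactly the kind of question on which expert input would be needed.

    # Hypothetical sketch only - not any vendor's actual pipeline.
    # It records which source images a training run ingested, producing
    # a manifest that could, in principle, be a target for recovery.
    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    DATASET_DIR = Path("training_data/monet")   # hypothetical corpus location
    MANIFEST = Path("training_manifest.json")   # hypothetical provenance record

    records = []
    for image_path in sorted(DATASET_DIR.glob("*.jpg")):
        digest = hashlib.sha256(image_path.read_bytes()).hexdigest()
        records.append({
            "file": str(image_path),
            "sha256": digest,
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        })
        # ...each image would then be passed to the model's training step...

    MANIFEST.write_text(json.dumps(records, indent=2))
    print(f"Logged {len(records)} training inputs to {MANIFEST}")

If something like this manifest exists, it goes a long way towards answering where the data came from; if it does not, what (if anything) was logged becomes a question for expert evidence.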

Expert Advice

The answers to the above would be helpful, but the bigger issue is actually asking the right questions. The example questions above almost certainly oversimplify the way in which AI learning and decision-making work. In order to ask the right questions, parties in disputes involving the use of AI will undoubtedly have to seek IT expert input. This is especially so where the recovery of evidence is concerned.

Recovery of Evidence

In Scotland, the Court has significant powers under the Administration of Justice (Scotland) Act 1972 and at common law to order the production of "documents and other property". Does information relating to the learning, decision-making and output of AI fall within "documents and other property"? The question demonstrates the datedness of the language used in the applicable law. Arguably, the statute (or at the very least, the common law) should be capable of accommodating this, but appropriate changes in the law to reflect the advances of the digital information age are sorely needed.

Key Questions

Assuming one can call on the Court to order the production of the necessary information (and assuming the order can be served on someone within the jurisdiction of the Court), various questions arise:

  • What information is actually needed? Is it something akin to an event log showing the decision-making path the AI took, from identifying relevant data, through "crawling" that data, to generating the output?
  • Who holds all of this information? Are they based in Scotland in a way which would allow the information to be "seised" via an order of the Scottish Courts? Companies developing and/or using AI often have a presence in multiple jurisdictions, while the information in question sits in the "cloud". Establishing where the information is actually held is therefore a hurdle in itself and might require a separate recovery procedure.
  • Is all of the data the AI considered/used easily identifiable?
  • In the event that it is not possible to understand how the AI and its various algorithms operate, who should be asked to provide evidence on this? It has been suggested that it is sometimes difficult even for an AI's creators/programmers to explain how their creations work. What chance do mere lawyers stand without expert input?
  • If a commissioner has to be appointed, who is to be summoned before them to give evidence on whether and where the relevant information exists/is held?
  • If there is compliance with the order, how should the information be produced and would it be navigable?

Conclusion

As AI is not currently regulated, the above questions are difficult to answer. The answers will, however, be of immense importance in advising clients on whether court orders should be sought and at what cost. The Getty Images case will provide us with some answers, but until a similar matter is heard before the Scottish Courts, it is unlikely we will have the full picture.

In the meantime, it should be apparent that changes in the law are needed, and we hope to see these implemented proactively.
