Harry Potter and the Ghost of Christmas Future

December 14, 2021

The only way of discovering the limits of the possible is to venture a little way past them into the impossible. Any sufficiently advanced technology is indistinguishable from magic.

Arthur C. Clarke

I am presently reading J.K. Rowling’s Harry Potter books to my son.  Disappointingly, they say nothing about how civil litigation works in the world of Harry Potter.  Perhaps Rowling thought it an insufficiently exciting topic to explore in a children’s book. I prefer to speculate, though, that it is never discussed because there are certain features of Harry Potter’s world which would logically have served to make any kind of civil litigation between wizards either exceedingly rare or completely redundant. 

Certain spells and magical effects described in the books, were they transplanted to our world, would certainly render obsolete much of the system which we have constructed for enforcing our rights and obligations and resolving our civil disputes.  This article discusses those spells, and what civil litigation might look like in the world of Harry Potter before turning to the ‘magic’ of new technologies like AI, blockchain and smart contracts to ask whether they might yet have a similarly disruptive effect on our dispute resolution institutions.

Small population

Before looking at what would be the implications of individual spells and magical artifacts described in Harry Potter, it is worth acknowledging that broader, structural factors were always going to make civil disputes far scarcer in the world of Harry Potter than in our society.

Wizards are a tiny, privileged fraction of the UK population.  Rowling has said there are 3,000 wizards in the UK and Ireland, though there is much in the books to imply a larger population (the stadium for the Quidditch world cup had seating for 100,000, Hogwarts has around 1,000 students aged 11-18, St. Mungo’s Hospital for Magical Maladies has at least 49 wards, and so forth).  Statisticians seem to have spent a surprising amount of effort to infer a more accurate figure from such data.  BBC Radio 4’s More or Less estimated the number at around 13,000.  Considering that in non-wizard (‘muggle’) society, most people manage to go their whole lives without ever appearing in court, one would not expect a population of just 13,000 people to generate much litigation.

High trust economy

A small population means most wizard-to-wizard transactions will be with someone you know, with whom you have transacted before and expect to transact again, who knows many of your other potential customers and/or who is in some way related to you (this tallies with the books – the characters bump into people they know everywhere they go).  Add to that the fact that, in the wizarding world, all transactions are face-to-face, because wizards seem to use cash exclusively (see below).  Anyone who defaults on their obligations will quickly run out of people to transact with. 

Low scarcity

Wizards ought to be able to obtain most things they might need or want by magic.  It is evidently possible to bring objects into existence from nothing and change objects into different objects. There are plainly some limits on what can be obtained by these means, but exactly what they are is never explained.  Aside from potions (which use physical ingredients) using magic also doesn’t seem to consume any resources or involve much effort.  Wizards should thus rarely need to enter into contracts to acquire things. 

Magic would eliminate some of the most common sources of tort claims.  Wizards can, for example, teleport, or travel by other magical means, so there are no road traffic accidents to argue over. Magic also seems effective to treat most illnesses and heal most personal injuries. 

As an aside, the first Harry Potter book features some even more impressive medical technology in the form of an artifact called the Philosopher’s Stone, which can preserve life indefinitely.  Rather than use this to save lives and create a post-mortality society, where no one need ever die unwillingly again, the stone’s creator, Nicolas Flamel, just uses it to extend his own and his wife’s lives for a few years.  At the end of the first book, Flamel’s friend, Dumbledore (aided by Harry Potter), persuades Flamel to destroy the Philosopher’s Stone, condemning himself and his wife to death.  This is hailed as a great victory for the questionable reason that, per Dumbledore, “death is the … greatest adventure”.  In denying the benefits of the stone to untold billions of (less fatalistic) people, Potter, Dumbledore and Flamel were arguably responsible for far more deaths than Voldemort ever was.

A further factor ensuring that wizards do not want for much is the existence of house elves.  These act as wizards’ unpaid servants.  Sinisterly, the elves have been bred, evolved or enchanted so that they live only to serve and find great joy and satisfaction in their labour, so wizards need have no fear of slave revolts.  Wizards, more broadly, seem to occupy the top of a social hierarchy.  The books feature several other sentient, intelligent species, such as goblins, giants, veela (a kind of nymph), leprechauns and merpeople.  At least some of these can evidently interbreed with humans and use magic.  Members of these other species, though, seem routinely persecuted, discriminated against or, at the very least, denied access to all the privileges of wizarding society.  Some half-goblins, half-giants and half-veela are evidently allowed to attend Hogwarts and other schools, but are treated with suspicion, and pure-bred members of these species seem to be denied a magical education. 

Egalitarian (if you’re a wizard)

As between the wizarding elite there is little material inequality.  Harry’s friends the Weasley family are always described as ‘poor’, but seem relatively comfortable, and never suffer any real hardship beyond the fact that their younger children wear clothes inherited from older siblings, which is hardly desperate poverty (and presumably represents a lifestyle choice on their part, since they could have used magic to overcome this had they wished – see above). 

Wizards do need to buy wands and other equipment, and attend a luxurious boarding school for seven years, but J.K. Rowling has said (in a tweet) that “There’s no tuition fee! The Ministry of Magic covers the cost of all magical education!”.  As to where the Ministry of Magic gets its resources from, there is never any reference to wizards paying tax.  The Ministry seems to be a secret part of the British government which exists to govern and serve the UK’s wizards, while keeping magic secret from the general population, and denying them its benefits.  The Ministry is not subject to much in the way of constitutional checks and balances, separation of powers or accountability – it regularly imprisons people without trial and tortures, and occasionally kills, its enemies. The books feature several instances of the Ministry using magic to exploit non-wizards and manipulate their thoughts and memories.  It thus seems like a good bet that wizards’ privileged lifestyles might ultimately be funded using resources which their benevolent-ish dictatorship is siphoning off from the non-wizard taxpayer. 

Unsophisticated economy

The wizard economy is startlingly unsophisticated and does not seem to support any large businesses.  The only businesses we hear mentioned by name are small owner-managed retailers like Ollivander’s wand shop and Honeydukes.  Gringotts Bank has at least two branches (one in London and one in Egypt) but it does not really seem to be a ‘bank’ – more a safe-deposit company, where treasure is stored in physical vaults.  There is never any mention of Gringotts paying interest on deposits or making loans.  One branch serves the whole of the United Kingdom, and customers have to visit in person if they want to make a deposit or a withdrawal.  All purchases are made in person using cash in the form of heavy gold coins (not even bank notes) which people have to carry around with them (the only transaction I can recall which is not a face-to-face, cash-on-delivery purchase is where Hermione Granger orders some newspapers and magazines – but how she transmits payment is never stated).  The currency is not even decimal (1 Galleon = 17 Sickles = 493 Knuts), which must be especially tricky given that wizards can only have very basic knowledge of maths, since from age 11 onwards children’s education seems to be solely concerned with magic. 
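As an aside on the non-decimal currency: the conversion rates are fixed (1 Galleon = 17 Sickles, 1 Sickle = 29 Knuts, hence 493 Knuts to the Galleon), so the arithmetic wizards must do in their heads reduces to ordinary mixed-radix base conversion.  A minimal sketch (the helper names are my own invention):

```python
# Wizarding currency: 1 Galleon = 17 Sickles, 1 Sickle = 29 Knuts,
# so 1 Galleon = 17 * 29 = 493 Knuts.

SICKLES_PER_GALLEON = 17
KNUTS_PER_SICKLE = 29
KNUTS_PER_GALLEON = SICKLES_PER_GALLEON * KNUTS_PER_SICKLE  # 493

def to_knuts(galleons=0, sickles=0, knuts=0):
    """Collapse a mixed amount into Knuts, the smallest unit."""
    return galleons * KNUTS_PER_GALLEON + sickles * KNUTS_PER_SICKLE + knuts

def to_mixed(knuts):
    """Express a Knut total as (galleons, sickles, knuts)."""
    galleons, remainder = divmod(knuts, KNUTS_PER_GALLEON)
    sickles, knuts = divmod(remainder, KNUTS_PER_SICKLE)
    return galleons, sickles, knuts

print(to_knuts(galleons=1))          # 493
print(to_mixed(to_knuts(2, 5, 30)))  # (2, 6, 1) – the surplus Knuts carry into a Sickle
```

Trivial for a computer, but fiddly mental arithmetic for anyone whose formal maths education stopped at age 11.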

Political conflict

Given the above, substantial disputes over resources and labour of the kind we resolve by litigation must necessarily be rare.  Conflict, competition and cooperation in the world of Harry Potter are instead mostly political (or military) in nature, rather than economic.  The central conflict is between a ‘virtuous’ conservative faction, who wish to preserve the status quo, and a rebellion led by Voldemort, who wants to rule wizard society and exploit and subjugate non-wizards even more than wizards already do, though possibly only as a means to his ultimate goal of achieving immortality.  Voldemort was born Tom Riddle, the son of a wizard mother and a non-wizard father, who abandoned him after his mother’s death (a death which, no doubt, was eminently preventable – see above).  Presumably it is because Riddle is of mixed parentage that wizard society condemns him to being brought up in care, in what is implied to have been an unpleasant non-wizard orphanage, rather than being adopted by other wizards, or brought up in some sort of alternative wizard care system with all the privileges that would entail.  Against that background, Riddle’s contempt for both wizards and non-wizards seems understandable. 


Veritaserum

Veritaserum is a costly, difficult-to-manufacture potion which forces the imbiber to answer any questions put to them, and to do so truthfully.  While there is no civil litigation in the Harry Potter books, there are several criminal trials.  One might have thought that veritaserum would render much of the trial process portrayed in the books redundant – just give the accused some veritaserum, and ask if they did it.  Veritaserum never seems to feature in these trials, though, and the books include several characters who have been convicted of crimes they did not commit. 


Divination

Students at Hogwarts receive lessons in divination (magically predicting the future), and the idea of destiny features heavily in the books.  But, if divination worked reliably, the implications for the story, for litigation and for much else besides would be dramatic (perhaps catastrophic).  Characters do have the odd accurate premonition, usually delivered in a dramatic way to make clear to the reader that it is genuine.  But generally Rowling portrays divination as either a fraud or as so Delphic, subjective and unreliable as to have no real utility, and uses it more for comic effect than to drive the story (e.g. rather than do his divination homework properly, Harry simply invents a series of predictions, claiming to have arrived at them by the methods taught in his divination class, and some of these predictions then come true).


Legilimency

Some wizards can read others’ thoughts (legilimency) but, as with divination, this is portrayed as unreliable and subjective, so it probably wouldn’t make much difference to litigation.  Veritaserum and unbreakable vows (see below) appear far superior as methods of obtaining reliable evidence. 


Polyjuice potion

Polyjuice causes whoever takes it to assume, temporarily, someone else’s exact physical appearance, presumably including fingerprints, retinas and DNA.  The books also mention a liquid called ‘Thief’s Downfall’ which counters the effects of polyjuice, revealing the true identity of anyone who comes into contact with it.  One would expect that to be used in any context where someone’s identity needed to be verified. 

In theory, the existence of polyjuice might result in issues of alleged mistaken identity arising more frequently in litigation, with parties and witnesses claiming that it was not them, but some indistinguishable impostor, who was witnessed to have acted as alleged.  But veritaserum and unbreakable vows would resolve such disputes without difficulty. 


Pensieves

Pensieves seem to allow the user to record and preserve the full range of their experience (i.e. all their sensations, thoughts and memories), either to experience them again or to replay them to someone else, like very high resolution, full-spectrum VR.  People who routinely or continuously uploaded their memories to a pensieve would have a contemporaneous record of events which they could use as evidence, rather in the way that people fit dashcams to their cars, or police officers wear body cams, and a future decision-maker would not have to rely upon an imperfect recollection.  Alternatively, even where witnesses had failed to upload their experiences at the time, they could upload their present recollection to a pensieve for review by an arbitrator, who would at least be able to see what each witness’s present recollection was, bypassing the need for witness statements and cross-examination.

Unbreakable vows

Wizards living in the world of Harry Potter can make an unbreakable vow – probably the single magical device which would have the most far-reaching implications for civil litigation.  Unbreakable vows can, in fact, be broken, but anyone who does so dies instantly.  Despite their serious consequences, it seems surprisingly easy to make unbreakable vows.  The promisor and promisee hold hands and the promisor recites the promise aloud while a third party holds the promisor’s wand.  Even wizard children can evidently make unbreakable vows: at one point Harry is told by Ron Weasley that, when Ron was five, his brothers (who would have been six or seven) nearly succeeded in tricking him into making one. 

Unbreakable vows are portrayed as one-way promises (like deeds).  But it seems easy to repurpose them to impose mutual obligations.  Each party would just have to vow “I shall do X if [counterparty] makes an unbreakable vow to do Y”.  The fact that unbreakable vows must be recited aloud potentially limits the complexity of the contracts that may be made in this way, but it might be that one can incorporate written terms by reference into one’s oral vow (“I hereby make an unbreakable vow in the terms of the document I have just signed”).  One could also use unbreakable vows in place of veritaserum to guarantee a witness’s truthfulness (“I vow to tell the truth”). 

Zero-cost monitoring and enforcement

As with everything in Harry Potter, there are some unanswered questions as to how all this works.  Presumably some omniscient, disembodied, magical arbiter must (for zero cost) continuously monitor anyone who has made an unbreakable vow, deciding in real time whether they are complying with their obligations, based on that entity’s interpretation of the words used and any terms it finds to be implied, and executing its decisions instantly, without hearing any representations from the vow-maker or the person to whom the vow was made. 

It is unclear how this entity goes about interpreting vows.  Perhaps it has perfect knowledge of what the vow-maker understood their vow to mean and applies that?  It is similarly unclear whether this entity recognises any vitiating factors which might excuse non-performance – for example, whether it will enforce a vow which has been obtained by fraud, or if it can ever treat an unbreakable vow as having been frustrated by subsequent events, whether vows can be waived and so on. 

Unbreakable vows and the wizard economy

Once unbreakable vows become a possibility, refusing to express one’s promise as an unbreakable vow becomes a red flag. It signals to a potential counterparty that you do not intend to perform or are not confident in your ability to perform.  And, if you decline to contract on ‘unbreakable vow-terms’, your counterparty knows they are going to have to invest time and money in monitoring your performance, and that if you do breach, they are going to have to spend more time and money on enforcing their rights.

Anyone who was prepared to enter into unbreakable vows would thus enjoy an absolutely decisive advantage over a competitor who was not.  So long as someone is willing to promise the same performance on unbreakable-vow terms, people who are averse to making unbreakable vows will find it impossible or prohibitively expensive to borrow money or obtain goods or services on credit, or obtain payment in advance of their own future performance.  Anyone looking to buy goods or services for future delivery (e.g. contracting to have a new kitchen supplied and installed) would always prefer a supplier who was prepared to contract on unbreakable-vow terms over one who was not, and would happily pay such a person in full in advance. 

It might be objected that the penalty for breach is so severe that wizards would not routinely use unbreakable vows for commercial transactions.  There would always be the fear that, even if you have every intention of doing what you promise, some unforeseen circumstance might cause you to breach.  But one suspects that sufficient ingenuity in the wording of vows could reduce these risks to an acceptable level and the competitive advantage of transacting on unbreakable-vow terms would be such that people would overcome their inhibitions. 

For example, rather than making promises in absolute terms people could always preface their vows by saying something like “I intend to perform this vow, believe I am capable of doing so and more likely than not to remain capable of doing so, and shall in any case use at least my best endeavours to perform this vow as fully as I can”.  A similar effect could be achieved by incorporating force majeure and extension of time provisions, allowing additional time for performing their vows if they are delayed by certain events or circumstances.  Wizards could incorporate into their vows by reference some kind of code which sets out what is to be the effect of misrepresentation, frustration and so on.  Vows could also provide for (in effect) liquidated damages, giving the vow-maker the option either to perform or else pay a monetary penalty, specifying that they are to pay that penalty as soon as they can lawfully obtain funds with which to do so. 

The end result would be very similar to a contract in the non-wizard world which contained an enforceable liquidated damages clause: the promisee receives either the promised performance, or a pre-defined sum of money, at least so far as the promisor has the means to pay.  The difference is that the wizard to whom such a promise is made does not have the burden of having to monitor performance, and obtaining and enforcing a judgment or award. 
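The lifecycle of such a vow resembles nothing so much as a smart contract: terms fixed in advance, with performance or the pre-agreed penalty following automatically, and no monitoring or enforcement step for the promisee.  A toy sketch of a vow with the liquidated-damages fallback described above (the class, its fields and the figures are all my own invention, purely illustrative, not anything from the books or from any real smart-contract platform):

```python
from dataclasses import dataclass

@dataclass
class Vow:
    """An unbreakable vow with a liquidated-damages escape hatch."""
    promisor: str
    obligation: str      # the promised performance
    penalty_knuts: int   # pre-agreed sum payable instead of performance
    settled: bool = False

    def perform(self) -> str:
        """The promisor performs; the vow is discharged."""
        self.settled = True
        return f"{self.promisor} performed: {self.obligation}"

    def pay_penalty(self, funds_available: int) -> int:
        """Pay the liquidated sum instead of performing.

        Settlement is automatic and instantaneous: there is nothing
        for the promisee to monitor, and no judgment to enforce."""
        if funds_available < self.penalty_knuts:
            raise RuntimeError("breach without payment: the vow falls due")
        self.settled = True
        return self.penalty_knuts

vow = Vow("Fred", "deliver one cauldron by Friday", penalty_knuts=493)
print(vow.pay_penalty(funds_available=500))  # 493: the promisee gets money, not performance
```

The point of the sketch is what is missing: there is no claim form, no hearing and no bailiff – the ‘enforcement’ branch is just the vow’s own terms executing themselves.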

People might be hesitant to casually entrust their lives to the will of an invisible, inscrutable, unchallengeable decision-maker.  But it is only a slight exaggeration to say that people already do the equivalent every day.  The average person is unfamiliar with the terms of the contracts they have signed and has little knowledge of contract law, civil litigation or arbitration.  Even experts in these fields (i.e. lawyers) often fail to predict accurately what decision-makers (judges and arbitrators) will decide those contracts mean or require.  And (in arbitration) parties agree to abide by such arbitrators’ decisions about those contracts, even if the decisions are ‘wrong’ (this being the effect of agreeing to exclude the right to appeal under section 69 of the Arbitration Act 1996).  Consider too that, historically, in several religious traditions, people were prepared to swear oaths despite sincerely believing that supernatural forces would monitor their performance and punish breaches in the hereafter.  People today routinely entrust their lives to (for example) inscrutable aircraft autopilot systems, medical diagnosis software and (increasingly) driver assistance software.


Artificial intelligence

The term ‘artificial intelligence’ was coined by John McCarthy in 1956.  He defined ‘intelligence’ as “the computational part of the ability to achieve goals in the world.  Varying kinds and degrees of intelligence occur in people, many animals and some machines” and ‘artificial intelligence’ as “the science and engineering of making intelligent machines, especially intelligent computer programs” (McCarthy 2007). 

By that definition, our world is already permeated with AI – robot vacuum cleaners, chess apps, predictive text, internet search engines, autopilots, GPS route finders, hearing aid algorithms, facial recognition programs and other software.  It is just that, as McCarthy reportedly observed: “as soon as it works, no one calls it AI any more” (Vardi, 2012).  Our wonder at these miracles swiftly fades, and we think of them, instead, as mere ‘unremarkable’ software. 

Superhuman general AI

Our existing ‘weak’ or ‘narrow’ AI is frequently ‘superhuman’ in its field (i.e. it vastly outperforms any human).  Chess software beats any human at chess.  Search engines find information far faster than human librarians.  Pocket calculators beat us at arithmetic.  What continues to elude us is human or superhuman general AI (or ‘strong’ AI) which equals or outperforms humans in every field. 

The (intended) purpose of any machine, from flint hand axes to wheels to nuclear reactors, is to: (i) perform better, or more cheaply, tasks presently performed by humans; or (ii) perform tasks which humans cannot.  Superhuman general AI, though, is liable to transform society swiftly in a way which no other invention has.  As the mathematician I.J. Good (one of Alan Turing’s codebreaking team) wrote in 1963: “an ultraintelligent machine … can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines … Thus the first ultraintelligent machine is the last invention that man need ever make”. 

Even before an AI began improving itself in this way, any AI which was our equal in general intelligence would be starting from a base where it already surpassed us in every one of the narrower fields where ‘weak’ AIs already surpass us (calculation, chess playing, accurately storing, retaining and recalling data …).  Hence the idea that we might ever achieve ‘merely’ human-level general AI seems unsound. 

Unfriendly AI

For as long as there has been speculative fiction, the possibility of general AI which outperforms humans has been a staple.  In Mary Shelley’s 1818 novel Frankenstein, Victor Frankenstein succeeds in creating a creature which is superhumanly fast, strong and intelligent and learns superhumanly quickly (teaching itself fluent French from snatches of overheard conversation, and how to read from an abandoned copy of Paradise Lost) – very different to the lumbering creature portrayed in later films.  Samuel Butler’s 1872 novel Erewhon depicts a society which is persuaded to forego machines out of fear that, if technology continues to advance, machines will become so intelligent as to subsume humanity (“the servant glides by imperceptible approaches into the master”).  Other negative portrayals of AI abound, notably the Matrix and Terminator films, in which unfriendly superhuman general AIs respectively enslave and seek to exterminate humanity. 

Serious scholars concerned about AI safety suggest such films are grossly over-optimistic, and falsely reassuring insofar as they portray humanity as the ultimate victors, when humanity would in fact stand no chance against an unfriendly superhuman general AI.  For a book-length discussion of the dangers posed by superhuman general AI, and the challenges of trying to ensure that an AI was friendly, see Bostrom, Superintelligence (2014) (for a shorter treatment see Yudkowsky 2008).  Bostrom’s book begins with a striking allegory in which a flock of sparrows complain about how small and weak they are, and imagine with great excitement how much easier life would be if they had a big strong owl to protect and advise them, to help build their nests and look after chicks and elderly sparrows.  They eagerly set off to search for an abandoned owl egg, disregarding one sparrow who warns that they should first give some thought to the practicalities of owl-taming and domestication: “Taming an owl seems like an exceedingly difficult thing to do.  It will be difficult enough to find an owl egg … after we have succeeded in raising an owl, then we can think about taking on this other challenge”.  (Bostrom’s work, incidentally, contains some other memorable allegories – check out his essay The Fable of the Dragon-Tyrant, about extending humans’ lives through technology.)

Lawyers and AI

For any field you care to name, a quick internet search will deliver a slew of articles where authors enthusiastically list new software tools and speculate about what will be the impact of AI on their field.  Overwhelmingly, it seems, such articles focus on the enormous potential benefits of AI, and all the things it will be useful for.  Some authors might light-heartedly consider whether AI will make them personally redundant, but most conclude that AI will never be clever enough to do their job (or at least not in their working lifetime). 

Many articles about AI written by lawyers follow a similar pattern, with authors seemingly competing to see who can produce the longest list of exciting software tools which you’ve never heard of, but which sophisticated law firms (apparently) now consider absolutely indispensable for drafting, contract management, document review and much else (Fagella, Toews).  The sheer volume of new products in this area is extraordinary: one industry directory lists hundreds of legal tech start-ups which were active as at Q1 2021, breaking down their products into 11 broad categories.  A sceptic might be tempted to ask how much some lawyers’ articles listing such products and their capabilities owe to the developers’ marketing, and how much to direct experience of the products in question. 

AI arbitrators

When it comes to AI and law, a recurring topic for legal authors in search of exciting-sounding material is the possibility that people might appoint AIs as arbitrators to make binding decisions in disputes about private law rights and obligations, which could then be enforced by the courts.  Writing in this area typically focuses on two questions: does the law presently provide for the possibility of AIs acting as arbitrators, and when (if at all) will AI technology reach the point at which AI arbitrators are technically feasible? 

Would the law enforce an AI’s award?

The question of whether the present law would enforce an award made by a hypothetical future AI is, with respect, not a very interesting or consequential one.  If the law does not presently allow AI to be used for this or that, the law could be changed.  Before we had aircraft we had no laws about where they could be flown. 

The most one can really say is that the Arbitration Act 1996 does not presently stipulate that arbitrators must be human, by contrast with some other countries which refer to arbitrators as ‘people’ or require that an arbitrator be ‘an individual with full capacity to exercise his civil rights’ (De La Jara).  I have argued at length elsewhere that one could make a binding arbitration agreement which provided for a dispute to be resolved by a coin flip or spinning a roulette wheel, by the parties competing in a race or even by combat, so it will come as no surprise that I think one could presently make a binding agreement to have a computer program resolve one’s dispute. 

Should the law enforce an AI’s award?

Some authors ask whether the law should permit / encourage the use of AI arbitrators in principle.  The typical conclusion (exemplified by Eidenmuller and Varesis) is simply that if parties wish to use AI arbitrators the law should facilitate that, reflecting the principle of party autonomy which underlies modern arbitration statutes.  To anyone steeped in the orthodoxy of arbitration law, that seems obvious enough. 

To precisely reproduce the functions of an arbitrator, though, is (presumably) an ‘AI-complete’ problem.  In other words, if you can build a machine which can understand natural-language evidence and submissions and deliver reasoned awards of a similar quality to human arbitrators then you have already built a superhuman general AI, whose implications extend far beyond arbitration. 

If one accepted that superhuman general AIs might pose a risk to humanity then (obviously) there would be a strong case for regulating their development, just as we regulate nuclear reactors: technology with enormous utility (generating electricity) which can also be used to produce fissile material for nuclear weapons, which pose an existential threat to humanity.  In discussing how to regulate AI, the question of whether we should allow AIs to make arbitration awards seems like a minor concern.

Will AI ever be able to perform the function of arbitrator?  When?

When arbitration lawyers come to opine on the question of whether AI will replace arbitrators, they typically exhibit the same Panglossian enthusiasm as authors in other fields.  They enthuse about the potential of AI to produce time and cost savings and quality improvements in tasks like document review, disclosure, legal research, predictive drafting, transcription and translation but shy away from the idea that AI could ever replace a human arbitrator.  Franklin (2020) reports a Chartered Institute of Arbitrators panel discussion entitled AI Technology and International Arbitration - Are Robots Coming for Your Job?  One speaker confidently reassures us that: “By 2050 there is a 50% chance of general AI being used [in arbitration]. … however … AI is automation, it will not replace humans.” 

The reasoning is contradictory: ‘automation’ is, by definition, the replacement of humans – causing a device which was formerly controlled or operated manually (i.e. by a human) to work instead by itself.  Maybe there is something special and unique about humans’ (or at least lawyers’ and arbitrators’) minds which a machine will never succeed in replicating.  But one could equally argue that arbitrators might be easier to automate than some other jobs.  Many jobs (building and manufacturing things, caring for people) would, after all, require sophisticated machines through which an AI could interact with the physical world.  Arbitration requires no such infrastructure.  The parties could simply email the AI all the documents and the camera feed from the hearing, and it could email back its decision in a fraction of the time a human arbitrator would take and at a fraction of the cost (processing time and storage space, plus the fee charged by the developer for the use of the software).  It has often been observed that physical skills which all able-bodied humans possess, like the fine motor skills used extensively in low-paid, ‘low-skilled’ work, are particularly difficult to replicate in machines, whereas other skills, which not all humans possess (reading, writing, calculation), have proved far easier to reproduce. 

Lawyers confidently opining that they are irreplaceable might prove to be like 18th-century handloom weavers asserting the impossibility of power looms.  The difference is that the average lawyer is probably far less qualified to form a view on AI and make predictions about it than the average handloom weaver would have been to understand and make predictions about the advent of power looms.  The point is that any lawyer’s opinion about what is going to be technically feasible with AI, and when it is going to happen, should be taken with copious amounts of salt, and articles about law and AI (including this one) should always be viewed through that lens. 

Bostrom discusses several surveys of experts working in the AI field.  On average, it seems, they do think there is a 50% chance of human level general AI being achieved by the mid 21st century, and a 90% chance of its being achieved by 2100.  But these experts’ views also require seasoning.  Even among AI experts: “machines matching humans in general intelligence … have been expected since the advent of computers in the 1940s.  At that time, the advent of such machines was often placed 20 years into the future.  Since then, the expected arrival date has receded at a rate of one year per year so that today futurists … still often believe that intelligent machines are a couple of decades away” (Bostrom 2015).  Experts, then, routinely predict that general AI is close enough to make it meaningful or exciting (i.e. it is going to happen within your lifetime) but not close enough that the expert’s prediction could be proved wrong within the space of their remaining career.

The wrong questions

Lawyers who focus on whether and when we will achieve superhuman general AI which will replace arbitrators and judges are not just asking a question they are not qualified to answer; they are asking the wrong question.  A better question is whether there are other, more proximate technologies which stand a good chance of fundamentally disrupting the dispute resolution landscape long before we achieve superhuman general AI (if we ever do). 

Automation and error prevention

Obviously, the more jobs are automated, the fewer human workers are employed, reducing both employment litigation and litigation arising out of accidents caused by human error.  But, even while businesses continue to employ significant numbers of people, there are several nearer-term technologies, short of full-scale automation, which seem likely to greatly reduce human errors, and the volume of litigation they generate. 

A rich source of litigation historically, for example, has been road traffic accidents.  Increased homeworking and home-based or online leisure and entertainment, itself facilitated by technology, seems liable to reduce the frequency with which people drive.  Regulation and taxation of internal combustion engine cars and fossil fuels may have the same effect.  Since their inception, cars have incorporated ever more safety equipment and ever more automation, tending to reduce the per capita frequency and severity of accidents and injuries.  Even if full self-driving technology is still some way off, many cars already feature driver assistance technology, linked to cameras and radar/lidar, which will automatically steer a car to keep it within lane markings, maintain a safe braking distance from the car in front, and apply the brakes and deliver audible warnings when obstacles are detected.  Similar trends are likely to be observed in other machines like ships, aircraft and heavy plant and equipment, in each case reducing the potential for human errors to give rise to accidents and litigation.

There are also other kinds of technology which have the potential to greatly reduce the frequency of human error and oversight.  Marshall Brain’s dystopian 2003 novel Manna describes a society in which jobs are increasingly automated, leading to large scale unemployment, and is essentially a book-length argument for universal basic income.  Whatever one thinks of that argument, he does give a striking account of simpler technologies which might form the stepping-stones to large scale automation.  The book’s narrator describes his minimum wage job at a fast-food chain which introduces a revolutionary software system:

At any given moment Manna had a list of things that it needed to do.  There were orders coming in from the cash registers, so Manna directed employees to prepare those meals. There were also toilets to be scrubbed on a regular basis, floors to mop, tables to wipe, sidewalks to sweep, buns to defrost, inventory to rotate, windows to wash and so on. Manna kept track of the hundreds of tasks that needed to get done, and assigned each task to an employee one at a time.  Manna told employees what to do simply by talking to them. Employees each put on a headset when they punched in. Manna had a voice synthesizer, and with its synthesized voice Manna told everyone exactly what to do through their headsets. Constantly. Manna micro-managed minimum wage employees to create perfect performance.

The software would speak to the employees individually and tell each one exactly what to do. For example … “Jane, when you are through with this customer, please close your register. Then we will clean the women’s restroom.”  And so on. The employees were told exactly what to do, and they did it quite happily. It was a major relief actually, because the software told them precisely what to do step by step.

When Jane entered the restroom, Manna used a simple position tracking system built into her headset to know that she had arrived. Manna then told her the first step.  Manna: “Place the ‘wet floor’ warning cone outside the door please.”  When Jane completed the task, she would speak the word “OK” into her headset and Manna moved to the next step in the restroom cleaning procedure.  Manna: “Please block the door open with the door stop.” Jane: “OK.” Manna: “Please retrieve the bucket and mop from the supply closet.” Jane: “OK.”  And so on.

Once the restroom was clean, Manna would direct Jane to put everything away. Manna would make sure that she carefully washed her hands. Then Manna would immediately start Jane working on a new task. Meanwhile, Manna might send Lisa to the restroom to inspect it and make sure that Jane had done a thorough job. Manna would ask Lisa to check the toilets, the floor, the sink and the mirrors. If Jane missed anything, Lisa would report it.

Such systems – directing, rather than replacing, workers – do not seem to require any radical leap in terms of technology and would certainly require far less technological progress than full-scale automation of such jobs.  From the business’s perspective the benefits of such systems are obvious.  The number of managers required is reduced, a single operational model is applied consistently everywhere, performance and productivity are monitored constantly and the details of the operational model can be tweaked in real time to increase efficiencies.  At the same time, the scope for employee error or misconduct, and litigation generated by that, is vastly reduced. 

Beyond highly standardised franchise businesses like fast food restaurants, one can see that similar systems might next be applied to, say, routine operation and maintenance of production and manufacturing plants and then to more complex manufacturing and construction tasks.  For the latter, a designer would create a computer model of an intended ship or building and a Manna-type system would direct employees where to apply a weld or lay a brick, direct another employee to check the work, and maintain a real time model to reflect the progress of the work.  Use of such systems would likely increase productivity while reducing costs by limiting the relative skill levels and training required to execute work, reducing the need for management oversight and monitoring / measurement of work.  Such systems are likely both to be facilitated by, and further encourage, increased standardisation of designs or design elements, modularisation and off-site fabrication. 

Lethal Autonomous Weapons Systems

While of no direct relevance to international arbitration, it would seem remiss when discussing the implications of automation not to mention the alarming prospect of so-called Lethal Autonomous Weapons Systems (LAWS).  These use (weak, narrow) AI, similar to that used in drones and self-driving cars, to identify and engage (i.e. kill) human targets without human intervention.  The principal concern surrounding such systems is not that they might ‘go rogue’ and turn against humanity (like an unfriendly superhuman general AI).  Rather, the concern is that such technology can be used to make, in effect, weapons of mass destruction which could easily be used to kill very large numbers of people and would be far cheaper and easier to make, conceal and deploy than any nuclear, biological or chemical weapon.  A particular concern is the possibility of cheaply creating vast ‘swarms’ of tens of thousands of very small, very manoeuvrable, relatively fast aircraft, each equipped with shaped explosive charges and capable of working together in a coordinated fashion to find and kill people who meet pre-programmed target criteria (anyone holding a weapon, any male adult, anyone of a certain ethnicity, anyone not wearing a given marker).  Research into and development of such weapons is widespread, and such weapons may already be operational.  Turkish defence contractor Savunma Teknolojileri Mühendislik ve Ticaret A.Ş. (STM), for example, produces small quadcopter drones called Kargu-2 (each about the size of a dinner plate) which can operate in swarms.  Turkey ordered at least 500 Kargu-2s in 2020.  In March 2021 the UN Security Council’s Panel of Experts on Libya published a report stating that a Kargu-2 had been used to hunt down and kill a human target in Libya.  This was widely reported as the first example of a human being having been killed by an autonomous robot, though there is some confusion as to whether the drone was in fact operating in fully-autonomous mode at the time.  The problem of LAWS, and efforts by international lawyers and AI researchers to ban their development, is the subject of one of this year’s BBC Reith Lectures.

Surveillance and record-keeping

A substantial part of the arbitration process consists of interviewing factual witnesses, producing their witness statements and listening to them be cross-examined.  Many disputes, however, turn on the meaning of contracts, the content of which is not itself in dispute.  Insofar as a case turns on a disputed question of fact, the best evidence is contemporaneous documents – what people wrote at the time – with witnesses’ recollections a distant second.  Scepticism about the value of witness evidence is widespread (see, for example, the ICC Commission on Arbitration’s report The Accuracy of Fact Witness Memory in International Arbitration).  Judicial hostility to the use of lengthy witness statements which stray beyond the disputed facts of a case has long been evident in judgments, the Commercial Court Guide and, now, in Practice Direction 57AC. 

There is a huge body of existing or proximate technologies the effect of which will increasingly be to produce and preserve objective, contemporaneous records of events and interactions of the kind which, in the past, might have been disputed, and where the only evidence might have been a witness’s recollection or, at most, their notes or the emails they sent describing the events.  The importance of witness testimony will decline even further.

A few examples will suffice.  Anyone who visited a financial advisor in the last decade will likely have found that a CCTV camera recorded the advice they were given (advice given at such meetings having proved a common source of litigation).  The global pandemic has accelerated the use of video conferencing software for meetings, allowing what was formerly said at face-to-face meetings to be routinely recorded and transcripts automatically generated. Mobile phone cameras, GPS data from mobile phones and wearable tech, bodycams used by police and security personnel, car dashcams and telematics data recorders, CCTV systems, drone cameras … the list goes on. 

If the disputed events take place over a long time period, or the timing of the events is unknown (common in a criminal investigation) the process of sifting through such evidence is time consuming, but this is an area where (weak) AI, particularly facial recognition software, is likely to assist in the relatively near term.  A potential problem lies in the increased sophistication of CGI technology, allowing convincing ‘deep fake’ evidence to be fabricated relatively easily.  Some of the answer to that problem may lie in the increased use of blockchain technology (see below) to create a robust, indelible, objective record of when and where any given piece of data was created/recorded. 

Litigation prediction software

One possible disruptive technology is what might be termed litigation prediction software.  A ‘training’ database is created from a public source (e.g. court judgments, court files) recording facts about cases and their reported outcomes (a human presumably has to read and interpret the judgments and other documents to enter the information, though one might expect increasing automation in this area).  The software analyses this data, looking for correlations between cases’ features and their outcomes.  You then input the corresponding information about a pending case, and the software, assuming the same correlations hold, makes a prediction as to the outcome. 
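The basic mechanism can be sketched in a few lines of Python.  This is purely illustrative – the feature names and training cases below are invented, and real products are far more elaborate – but it captures the essential point that such systems correlate features with outcomes rather than reason about the law:

```python
from collections import defaultdict

def train(cases):
    """cases: list of (features, outcome) pairs, outcome being 'win' or 'loss'.
    Records, for each (feature, value) pair, how often it coincided with a win."""
    stats = defaultdict(lambda: [0, 0])  # (feature, value) -> [wins, total]
    for features, outcome in cases:
        for pair in features.items():
            stats[pair][1] += 1
            if outcome == "win":
                stats[pair][0] += 1
    return stats

def predict(stats, features):
    """Average the historical win rate of every feature the new case shares
    with past cases - no legal reasoning, pure correlation."""
    rates = []
    for pair in features.items():
        wins, total = stats.get(pair, (0, 0))
        if total:
            rates.append(wins / total)
    score = sum(rates) / len(rates) if rates else 0.5
    return ("win" if score > 0.5 else "loss"), score

# Invented training data: past cases reduced to simple categorical features.
past = [
    ({"defendant": "subcontractor", "pricing": "fixed"}, "loss"),
    ({"defendant": "subcontractor", "pricing": "fixed"}, "loss"),
    ({"defendant": "owner", "pricing": "cost-plus"}, "win"),
]
model = train(past)
print(predict(model, {"defendant": "subcontractor", "pricing": "fixed"}))
```

A fixed-price subcontractor defendant here scores 0 (both matching features are pure ‘loss’ signals in the training data), so the sketch predicts a loss – exactly the kind of correlation-driven output described above.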

As long ago as 1999, Arditi and Tokdemir reported having created a database which recorded features of 102 construction cases before the courts of Illinois and their outcomes.  There were just 49 variables recorded.  A few examples were: whether each party was an owner, contractor, subcontractor, architect/engineer or supplier; whether the contract was cost plus or fixed price; the contract value; whether unknown site conditions were in issue; whether liquidated damages were in issue; and so on.  When supplied with equivalent data about a new case, their software predicted the ‘outcome’ correctly in 83% of cases.  A problem with this study is that it is unclear how the outcomes were defined.  There seems to have been a single data-point for ‘outcome’, and the authors simply entered there the party whom they considered to be the ‘winner’.  These must have been complex cases with multiple issues and cross-claims, with a range of possible outcomes.  The parties thus might have had different views from the study’s authors as to who was the winner and who was the loser.  A defendant who always expected to lose on liability but secured a judgment where the claimant recovered a tiny amount of the quantum claimed would probably consider that a ‘win’, but the authors might have classified it as a ‘loss’.

In 2002 Ruger et al used a case database consisting of US Supreme Court decisions from 1986 to 2001.  Each case was described according to just six variables, most of them objective (the circuit of origin, the type of petitioner, the issue area, whether it was argued that a practice was unconstitutional), plus one more subjective criterion: whether the judgment under appeal was considered to be liberal or conservative.  The software was told the outcome of each case (whether the appeal was dismissed or upheld – it is unclear how cases where appeals succeeded in part were treated).  Based on this data, the algorithm correctly predicted the outcome of appeals decided in 2002 with 75% accuracy, where a control group of legal experts achieved 59.1%. 

In 2016 Aletras et al used as their training database the text of previous European Court of Human Rights decisions.  In each case, the software was told the outcome of the case (violation or no violation) and given those parts of the judgment in which the court set out each party’s case.  They then gave the software the equivalent passages from other judgments, without telling the software the outcome.  The software ‘guessed’ the outcome correctly in 79% of these cases. 

Besides these academic studies, several commercial developers offer litigation prediction products, claiming varying degrees of sophistication and accuracy (rarely peer-reviewed or with full data disclosure), but all use broadly the same model.  They are not analysing the natural language data, arriving at a factual model and applying legal rules to those facts to arrive at a reasoned view of a case’s merits; they are only looking for correlation.  So, in simple terms, if defendants who are subcontractors in cases involving fixed cost contracts, or who use this or that word or phrase in their submissions, lose more often than they win, the presence of those features will push the software towards a ‘loss’ prediction, and if defendants in cases where the contract value exceeds $1 million and liquidated damages are in issue win more often than they lose, the presence of those features will push the pendulum back in the direction of a ‘win’ prediction. 

Oddly, this is an area where the less sophisticated tools might turn out to be more useful.  To use an Arditi and Tokdemir-type system to predict the outcome of a new construction case in Illinois, it would only be necessary to answer 49 yes/no or multiple choice questions about the new case, all of which could easily be answered at an early stage.  A prospective claimant (or funder) could use that kind of system to derive an early prediction of a case’s outcome.  Text-based systems like Aletras’, which correlate outcomes with the parties’ submissions quoted in judgments, seem less useful.  One would have to have developed one’s case and submissions up to the same level as the textual submissions which formed the original dataset – i.e. submissions at a final hearing – and then feed those into the software in order to obtain a prediction. 

There might be some businesses which, by their nature, receive large numbers of relatively similar, relatively low value claims, which are ‘win/lose’ cases (i.e. if the claimant is successful, it is clear how much they will recover).  Examples might be claims for unpaid social security, claims under some consumer insurance policies, some categories of medical negligence claim, compensation claims against train operating companies for train delays, disputed parking fines and claims against logistics businesses or online marketplace sellers for missing or damaged items.  Such serial litigants could use litigation prediction software to make early decisions about whether to settle or defend such claims without detailed review by a lawyer or paralegal or claims manager.  And these systems would likely become more reliable over time, as each case which did not settle would add more data to the training database.

Might some trusted systems emerge which routinely outperformed lawyers in predicting the outcome of a given kind of case from simple yes/no or multiple choice data?  If so, claimants and defendants could base their decisions (whether to bring a claim, whether to make / accept a settlement offer) on the software’s predictions, so that the software’s prediction came to be used as a proxy for a court or tribunal’s decision.  A key challenge in the design of such systems is guarding against the possibility that claimants might discover some formulaic way of expressing or describing their claims which triggers settlement offers, and exploit that vulnerability.  You would probably want the software to detect and flag suspicious patterns emerging in the claims submitted, and you would probably want some human oversight, perhaps by having a human who has not been told the software’s prediction review randomly selected cases and decide how to proceed.  The outcomes in those cases could then be compared to the predictions, to ensure that the predictive mechanism remains robust. 

For more complex cases, the potential utility of predictive software is less clear.  Complex high value cases are rarer than simpler lower-value ones.  Many such cases are resolved in confidential arbitrations, so that portion of the data is unavailable.  The more complex the cases, the more potential differences between them.  So, for any given set of facts, you will have few comparable cases in the training data.  Suppose I have only two cases in the data set involving (say) subcontractor defendants and sub-subcontractor claimants, where liquidated damages were in issue, an estoppel argument was deployed, the contract value was over £1 million and the contract was cost-plus (and so on), and in both cases the sub-subcontractor won.  There is a 100% correlation between those features and the sub-sub-contractor winning, but the small sample size means there is a good chance that this correlation is random, rather than significant. 

A further issue is that complex cases are often bundles of claims and counterclaims.  For each claim or counterclaim that succeeds, there might be a range of possible recoveries.  Deciding on how to proceed in such cases necessitates forming a view on the most likely outcome not just with respect to liability, but with respect to quantum and the range of possible recoveries.  To be useful, predictive software would have to be able to express these more complicated outcomes, rather than just giving binary ‘win or lose’ predictions. 

Litigation data collation

For complex cases, a more promising prospect than full litigation prediction software is what might be termed ‘litigation data collation’.   An example is ‘Solomonic’, a statistical database concerning judgments from the Business and Property Courts which (amongst other things) appears to allow users to find answers to questions such as how often an argument based on (say) frustration succeeds, or an argument that a clause is a penalty.  It also allows the answers to such questions to be broken down by judge: how often has Mrs. Justice X accepted an argument based on promissory estoppel? 

Such systems potentially help the legal adviser by giving them objective data about the baseline, ‘all else being equal’ likelihood of a given line of argument succeeding (or succeeding before a given judge).  The lawyer can then use their own knowledge of the particular facts of the case to update those probabilities.  This kind of software is hugely interesting, with great potential to facilitate a more quantitative, empirical, Bayesian approach to legal analysis.  It does not, however, seem to be a disruptive technology in the sense of something which is going to dramatically reduce litigation / arbitration, or change the way disputes are resolved. 
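That Bayesian workflow – start from the database’s baseline rate, then adjust for case-specific factors – can be expressed very simply in odds form.  The numbers below are invented for illustration:

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability from a prior probability and a likelihood ratio,
    computed in odds form: posterior odds = prior odds x likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Suppose the database shows penalty-clause arguments succeed 30% of the time
# before this judge, and counsel judges that the facts here make success twice
# as likely as in the average such case (likelihood ratio = 2):
print(round(bayes_update(0.30, 2.0), 3))  # 0.462
```

The adviser’s judgment about the particular facts enters only through the likelihood ratio; the database supplies the objective baseline which, at present, most lawyers estimate from anecdote and experience.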


Blockchain

The term ‘blockchain’ refers to a technology which allows the creation of immutable electronic records – records which, once created, cannot be altered. 

In the 1980s a physicist called Scott Stornetta read of a scandal in which a prominent biologist was proven to have manipulated experimental data when lab notebooks in which the results of experiments were recorded by hand were found to have been altered retrospectively using a different ink.  Stornetta grew concerned that all such records would soon be stored digitally, making such manipulation trivially easy and impossible to detect.  At Bell Labs, Stornetta and a cryptographer called Stuart Haber worked on the problem of creating immutable digital records. 

In Haber and Stornetta (1991) they posited a client who wanted to be able to prove that a given document existed on a particular date.  They gave the example of an intellectual property matter where it was crucial to prove the date on which an inventor first put a patentable idea into writing.  They suggested a naïve solution would be to create a ‘digital safety deposit box’ service whereby, whenever a client wanted a document (e.g. a set of lab results) time stamped, they would send a copy of the document to a trusted time-stamping service (“TSS”) which would keep a copy of the document and a record of the time it was received.  This had several flaws.  A document could be intercepted during transmission.  The TSS would have access to the documents.  If the TSS’s security was compromised, the documents could be accessed by a third party.  The TSS would require large amounts of storage capacity, making the service costly.  And (fundamentally) the TSS could potentially collude with the client (or a hostile party) to claim a document had been received on a different date to that on which it was actually received. 

Stornetta and Haber’s insight was to use hash functions.  A hash function is an algorithm which will convert any piece of data, of any length (a single figure, the whole text of War and Peace, a whole library), into a ‘hash’ – a short string of numbers and letters of a fixed length.  If any part of the source data changes (if one comma is omitted from War and Peace) then applying the hash function to that source data will generate an entirely different hash, proving that the document has been altered.  A customer who wished to be able to prove exactly when a given document had been created could apply a hash function to it to generate a hash, and then send only the hash to the TSS.  The TSS would retain a record of the hash and when it was received, and the customer would avoid entrusting the document itself to the TSS.
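The one-comma example is easy to demonstrate with an off-the-shelf hash function such as SHA-256 (a modern standard, used here for convenience rather than because it was Haber and Stornetta’s choice):

```python
import hashlib

def sha256_hex(text: str) -> str:
    """Fixed-length (64 hex character) digest of an input of any length."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

with_comma = sha256_hex("Well, Prince, so Genoa and Lucca are now just family estates.")
without_comma = sha256_hex("Well Prince, so Genoa and Lucca are now just family estates.")
print(with_comma)
print(without_comma)  # omitting a single comma yields a completely different hash
```

Both digests are the same short, fixed length, however large the input, and there is no practical way to work backwards from a digest to the document – which is what makes it safe to hand the hash, but not the document, to a third party.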

To prevent collusion between the TSS and the customer, the TSS would issue each customer a certificate recording a further hash which had been generated using: (1) the hash which the customer had submitted; (2) the time; and (3) the hash from the preceding certificate.  In this way, certificates issued to successive customers would form a chain with each certificate being cryptographically connected to its predecessors and successors.  If the TSS subsequently created a false certificate, certifying that a different hash had been received from the customer, or that it had been received at a different time, then that certificate would not correlate with the rest of the chain.  To present a consistent record, the TSS would have to generate a new chain of certificates for all the transactions which took place after the date recorded on the false certificate.  Even then, a third party could expose the collusion by contacting one of the later customers and comparing their certificate to the record.  In effect, the certificates held by all previous and subsequent users of the database serve as a distributed back-up copy of the database which can be compared with the TSS’s version to ascertain whether the TSS has tampered with it.
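A toy version of such a certificate chain can be built from the same hash function.  The ‘genesis’ value and the exact field layout here are this sketch’s own assumptions, not Haber and Stornetta’s precise construction, but the chaining logic is the same:

```python
import hashlib

def h(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def issue(prev_cert: str, customer_hash: str, timestamp: str) -> str:
    """Each certificate hashes together the customer's submitted hash,
    the time of receipt, and the previous certificate in the chain."""
    return h(prev_cert + customer_hash + timestamp)

def verify(records, genesis="0" * 64):
    """records: list of (customer_hash, timestamp, certificate) in issue order.
    Re-derives each certificate; tampering with any entry breaks the chain."""
    prev = genesis
    for customer_hash, timestamp, cert in records:
        if issue(prev, customer_hash, timestamp) != cert:
            return False
        prev = cert
    return True

# Issue three certificates, then falsify the second one's timestamp:
records, prev = [], "0" * 64
for doc_hash, ts in [(h("doc1"), "t1"), (h("doc2"), "t2"), (h("doc3"), "t3")]:
    prev = issue(prev, doc_hash, ts)
    records.append((doc_hash, ts, prev))
print(verify(records))   # True
records[1] = (records[1][0], "t2-forged", records[1][2])
print(verify(records))   # False: the forged entry no longer fits the chain
```

The forged timestamp fails verification because every later certificate was derived from the original one; to sustain the lie, the TSS would have to re-issue the whole chain, which the later customers’ copies would expose.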

Haber and Stornetta elsewhere suggested using a fully ‘distributed’ system, whereby the document would not be certified by a single TSS, but by several different entities.  Subsequently they suggested that, to guarantee its bona fides, a TSS would periodically (say once a week) apply a hash function to its whole database to generate a hash and publish it in some public forum (like the classified pages of a newspaper).  Haber and Stornetta thus described one essential feature of a blockchain database, and came close to describing a second: (i) data is entered in increments (‘blocks’), with each block being cryptographically linked to its successors, so that if any given block is altered it will no longer correlate with the successor blocks; and (ii) there is no single authoritative version of the blockchain held by a single trusted witness – rather, multiple copies exist, held by different people, and a divergent version, which differs from the rest, can be disregarded.


In 2008 and 2009 a (presumed pseudonymous) author called Satoshi Nakamoto published first a white paper describing, and then software implementing, the Bitcoin digital currency system.  This referenced and built upon Haber and Stornetta’s ideas, and introduced the term ‘blockchain’.  The Bitcoin blockchain is a distributed public digital ledger.  Each user first generates a number known as a ‘private key’, which is used to generate a public address which appears on the blockchain and can be shared with others.  The Bitcoin balance associated with that address, and the complete record of transactions to and from it, can be viewed on the blockchain.  Whoever knows a given address’s private key can transfer Bitcoin associated with that address to any other address.  The blockchain, however, contains no record of who controls any given address. 

When Bitcoins (or fractions of Bitcoins) are transferred to become associated with new addresses, new ‘blocks’ of data are added to the chain to record these transactions.  Each new block added is 1MB of data.  Taken together, the whole blockchain (presently around 377GB) serves as an authoritative record of where all Bitcoin resides and of all previous transfers.  Like Stornetta and Haber’s TSS certificates, each block in the Bitcoin blockchain is cryptographically linked to the next.  Each block has a hash code associated with it which is generated from the transactions recorded in the block and the previous block’s hash code.  If any individual block were changed then its hash – the cryptographic signature of that block – would no longer fit with its successors.  Anyone who wanted to make an undetected change would thus need to change a given block, and then generate new hash codes for every successive block.  The process of generating a valid code which can be recorded in the block chain is deliberately made complex and time consuming, so that prohibitively vast computing resources would be needed to alter each successive block.

Institutions like banks have commonly used ledgers, held on computers which the bank controls, to record transfers to and from customers’ accounts and the present balance.  Depending on how they are stored and backed up, such ledgers might be relatively fragile (a power failure, hacking attack or accident could destroy the record).  An issue may also arise as to whether the institution can be trusted with the ledger.  The bank or a rogue employee or hacker could in theory reduce or increase the balance or make transfers on a customer’s account, an insurer might alter its records to show a policy was never renewed, and so on.  The Bitcoin blockchain, by contrast, uses a distributed public ledger.  There is no trusted central authority which controls a single authoritative copy.  Rather, multiple copies of the blockchain are distributed over several computers (‘nodes’) controlled by different people.  Altering any individual local copy has no effect, because the majority version prevails as authoritative.  Similarly, a new block recording a new tranche of transactions can only be added if a majority (confusingly termed a ‘consensus’) of nodes (51% of the processing power) agree that the new block meets certain criteria. 

When a user wishes to transfer Bitcoin, they issue a request to transfer Bitcoin from their address to some other address, and they apply a unique ‘signature’ to that request, generated using their private key, the destination address and the amount to be transferred.  With their instruction, the user can also specify a fee (in Bitcoin) which will be paid to the Bitcoin miner (see below) who succeeds in adding the transaction to the blockchain.  For a request to transfer Bitcoin from a given address to be valid, and eligible for inclusion in a new block, it is necessary that: (i) there be sufficient funds associated with the address (this information is recorded publicly on the blockchain); and (ii) the request be signed using a signature which was generated using the private key associated with that address.  It is possible to test mathematically whether a given public address and signature were generated using the same private key, though it is impossible to derive the private key itself. 

The process of validating transactions and recording them in the blockchain relies on users known as ‘Bitcoin miners’.  Unconfirmed transactions are placed in waiting rooms or pools, where they wait to be added to the blockchain.  Around 4 or 5 new transactions are usually being added to these pools each second.  Miners select transactions from the pools and check they are valid (i.e. that the public address and the signature were generated from the same private key and that the blockchain presently shows the address as having sufficient funds associated with it).  In selecting transactions to process, Bitcoin miners will prioritise those for which users have specified reward amounts.

A miner next assembles a block of data, containing (1) details of several valid transactions (usually around 1,000 to 2,500) including transfers to itself of any offered fees; (2) the hash code from the last block which was added to the blockchain; and (3) a random value called a ‘nonce’.  The miner then seeks to have its block of data added to the blockchain.  To do this the miner applies a hash function.  The result is a code which is a candidate to be used as the new block’s hash code.  The blockchain protocol, however, will only accept a new block and add it to the blockchain if its hash code begins with seven zeroes. 

A miner thus has to keep changing the block’s nonce value at random and re-testing it with the hash function in the hope of eventually stumbling upon some combination which generates a hash code which begins 0000000, before some other miner succeeds in doing so for a block which contains any of the same transactions.  This mining process consumes enormous processing power and thus energy.  According to the Cambridge Centre for Alternative Finance, Bitcoin mining presently consumes around 110 Terawatt Hours per year — 0.55% of global electricity production, or roughly equivalent to the annual energy draw of a country like Malaysia or Sweden.  This energy is used while processing only around four or five transactions per second (by contrast Visa processes around 1,700 transactions per second).
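The nonce search can be sketched in a few lines of Python (a toy illustration using a fixed count of leading zero hex digits; real Bitcoin hashes a binary block header twice with SHA-256 against a continually adjusted difficulty target):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Try successive nonces until the block's hash starts with
    `difficulty` zero hex digits (a stand-in for Bitcoin's target)."""
    nonce = 0
    while True:
        candidate = f"{block_data}|nonce={nonce}".encode()
        digest = hashlib.sha256(candidate).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

# Finding a qualifying nonce takes many attempts; checking one takes a
# single hash, which is why other miners can verify a block instantly.
nonce, digest = mine("prev_hash=00000abc|tx1;tx2;tx3", difficulty=4)
print(nonce, digest)  # digest begins with four zeroes
```

The asymmetry shown here (expensive to find, trivial to verify) is what lets the other miners described in the next paragraph check a proposed block at negligible cost.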

Once a miner finds an eligible hash code, it broadcasts the proposed block, including the hash code, to all other miners.  They test the legitimacy of the block (i.e. that the transactions are valid) and the hash code (i.e. that applying the hash function to the proposed block does generate the claimed hash value).  If a majority accept that the block is valid, then it is added to the blockchain.  A new block is successfully added to the blockchain around every ten minutes.  As well as receiving fees specified by the transferors whose requested transactions are recorded in the new block, the Bitcoin protocol also rewards the successful miner by creating new Bitcoin which is credited to that miner’s address.  The reward for adding a new block is presently 6.25 BTC, which is around £275,000.  This reward level is set to reduce over time.  The possibility of earning these rewards incentivises miners to collectively do the work of checking the validity of each other’s proposed transactions. 

Following the success of Bitcoin, several other cryptocurrencies were created, all based on the blockchain principle, though the details of how they work varied, with some being (or claiming to be) asset-backed.  Several (e.g. Ethereum) use different mechanics to validate transactions, which result in transactions being validated and added to the blockchain more quickly and with less energy being consumed than is the case with Bitcoin.  Later, other cryptoassets were created, in particular so-called ICOs (‘initial coin offerings’), which serve as a means of raising capital for start-up ventures, performing a similar function to an initial public offering of shares, but without the same regulatory oversight. 

What difference will cryptoassets make to dispute resolution?

Cryptocurrencies like Bitcoin are presently used almost entirely for speculation, with relatively few purchases paid for using cryptocurrency, many sellers being unwilling to accept it due to its volatility.  Speculators who acquire these assets frequently have little understanding of how the products they purchase are purported to work. In 2018, British comedian John Oliver memorably described cryptoassets as “… everything you don’t understand about money combined with everything you don’t understand about computers”. 

Some cryptocurrency blockchains allow users to create new assets known as ‘tokens’.  These are used to represent real world objects.  A (sincere?) example given in the literature is of someone using this to create an ad hoc equity release scheme: “an individual requiring $50,000 taken out of a condo valued at $500,000. This individual may have tokenized their condo into 500,000 security tokens, each worth 0.0002%. They might sell 50,000 tokens, instead of selling the entire property and losing its utility as a livable space, thus ensuring a more liquid asset”.  It should go without saying that anyone buying such tokens might struggle to enforce them against a subsequent purchaser who had acquired the property by more conventional means. 

Many cryptoexchanges (where cryptoassets can be bought, stored and sold) are unregulated.  There are several instances of cryptoassets and fiat currency held by exchanges for investors having been stolen from these exchanges, either by third parties or in frauds perpetrated by employees and managers.  Several well-known examples have involved the loss of hundreds of millions of dollars of assets: $450 million stolen from MtGox in 2014, $146 million stolen from BitGrail in 2018, $543 million stolen from CoinCheck in 2018, $135 million stolen from QuadrigaCX in 2019, $281 million stolen from KuCoin in 2020, $610 million stolen from Poly Network in 2021.  In other instances, the prices of small or thinly traded asset classes have been manipulated to the detriment of investors, or assets have been created to attract investment for some supposed venture with the backers then simply eloping with the money raised (a scam known as a ‘rug-pull’).  In a striking example last month, fraudsters promoted a new ‘play to earn’ token called “Squid”.  Each token gave the holder a chance of being selected to play in an online game based on the popular Squid Game TV show.  Details of how the game was to work and precisely what the prize was were unclear, with it being claimed still to be in development.  In the space of a week, the promoters succeeded in launching their Squid token, generating enormous hype, selling huge numbers of tokens, seeing the price of tokens increase 110,000% and then eloping with $3.4 million, with the price of the tokens falling to zero.

Given the fertile environment they provide for fraud, it is a safe bet, then, that litigation about, or involving, cryptoassets, their tracing and recovery is liable to become more common.  But the existence of cryptoassets per se seems unlikely to result in any radical change in the way disputes are resolved, or the number of disputes requiring to be resolved.  The underlying blockchain technology may have more fundamental impacts, but it is always necessary to be alert to the fact that hype, hyperbole and overoptimism are palpably rife in this field, with blockchain being touted as a panacea capable of solving problems where its application is, in fact, unclear.  Consider, for example, that in 2017 a small beverage company called “Long Island Iced Tea” succeeded in tripling its share price just by changing its name to “Long Blockchain Corp”.

Impacts of blockchain on evidence

To date, blockchains have mainly been used as the basis for cryptoassets (by some estimates there are now around 10,000 distinct cryptoasset blockchains).  But blockchain is liable to be used increasingly in other areas including, as noted above, to routinely create time-stamped, tamper-proof records.  Creation of false documentary evidence after the fact is thus liable to become increasingly difficult.  Records of, say, geophysical surveys carried out during oil and mineral exploration, Lidar wind data recorded during wind farm viability assessments, trial data from medical devices or medication, laboratory tests of samples, non-destructive tests carried out during manufacturing, professional advice, an insured’s disclosure and similar are all increasingly likely to be recorded in blockchain databases.  The most material impact of blockchain for dispute resolution, though, may be its role in facilitating self-enforcing contracts. 

Self-enforcing contracts

A great number of court cases and arbitrations prove not to have been the result of any bona fide dispute.  A defendant simply fails to pay money when it is due, for no good reason.  A claimant is then forced to chase payment, go through pre-action correspondence and then pursue litigation or an arbitration, obtain a judgment or award and then pursue further court proceedings to enforce it, incurring some irrecoverable costs and wasting management time.  All this time and money could be saved if it were possible easily to make truly self-enforcing contracts, giving each party absolute confidence that the other side would be compelled to perform their side of the bargain. Merely automating payments is straightforward.  Many businesses have systems in place which automate business decisions to pay money.  For example, gambling websites make payouts automatically when win conditions are satisfied, banks transfer money when customers make the relevant inputs into banking software, or put their card and a pin number into an ATM.  Financial trading businesses have in place systems which make payments based on share prices and other data. 

Such systems, however, fall considerably short of being self-enforcing contracts, because they are always subject to human veto.  A human manager could veto a pay-out, or frustrate a payment by transferring assets out of the relevant account.  A business will usually be under some reputational pressure – it needs to perform its contractual obligation to attract repeat business.  But, ultimately the only guarantee of performance is the threat of litigation or arbitration, with all the cost and uncertainty that entails.  So how close are we to truly self-enforcing contracts?

Smart contracts

A ‘smart contract’ can be thought of as a computer program which a party irrevocably empowers to effect some promised performance if agreed preconditions are satisfied, and which has the means to detect whether they are.  So long as C renders the agreed performance, C need have no concern over whether D will make the required payment.  If the preconditions are met, D will have no choice.  If such contracts could be used routinely, they would have the potential to eliminate much litigation and arbitration. There are, however, several obstacles to the widespread use of such systems to create self-enforcing contracts which would fulfil all the functions we presently use ‘juristically enforced’ contracts for.
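In skeleton form, such a contract is just a program holding an irrevocable payment instruction behind a set of precondition checks (a toy Python sketch with invented names and a toy ledger; making something like this genuinely irrevocable and authoritative is precisely the difficulty the following problems describe):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SmartContract:
    """A program empowered to pay D's money to C once agreed
    preconditions are met. All names here are hypothetical."""
    preconditions: list[Callable[[], bool]]
    pay: Callable[[], None]   # the promised performance, e.g. a payment
    executed: bool = False

    def tick(self) -> None:
        """Called periodically; once deployed, D has no veto."""
        if not self.executed and all(check() for check in self.preconditions):
            self.pay()
            self.executed = True

# Toy usage: pay C 100 once the goods are recorded as delivered.
ledger = {"C": 0}
delivered = {"status": False}
contract = SmartContract(
    preconditions=[lambda: delivered["status"]],
    pay=lambda: ledger.__setitem__("C", ledger["C"] + 100),
)
contract.tick()          # nothing happens: precondition unmet
delivered["status"] = True
contract.tick()          # precondition met: C is paid
print(ledger["C"])       # 100
```

The sketch makes the structure clear, and equally clear why it falls short: anyone controlling the machine it runs on could edit it, and the `pay` callable must somehow command real funds.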

The irrevocability problem:  Anyone entering into a smart contract needs to be confident that the contract is genuinely irrevocable, and yet computer programs can easily be edited.

The authority problem:  It would be easy for D to create a smart contract which, if certain conditions were met, would interface with D’s bank and trigger a payment to C.  D faces a problem, though, in creating a program which C will accept has that effect.  D can sign in to its bank’s website, give instructions to its bank, enter some authentication (password, one-time code) and trigger the requisite payment.  In principle, each of these steps could be entrusted to a computer program, but the program will then contain details of how to make payments from D’s account, which D will not wish to share with C. 

The funding problem:  Any prospective payee entering into a smart contract needs to be confident that the payor will pay if they possibly can – like a wizard promising to use their best endeavours to pay.  A smart contract could provide for money to be paid on the happening of a certain event, but what is to prevent the payor moving its assets out of that account so that, when the time comes, the payment fails?  A simplistic solution is to use an escrow-type system, where the funds are ring-fenced or frozen in the account by the software.  But this then means they cannot be used in the meantime to run an honest payor’s business and perform the contract.  What is needed is some means of replicating, with a computer program, what would be the end result of litigation/arbitration, and an enforcement process – the payor does all they can to pay.  That is the funding problem.

The characteristic performance problem:  Contracts usually involve obligations to pay money and obligations to give what might be termed ‘characteristic performance’ – to do the thing which one is being paid for, which could be anything.  Characteristic performance is hard to compel and hard to measure.

The advantage of the unbreakable vows described in the Harry Potter books is that they compel characteristic performance.  If someone makes an unbreakable vow to build a house, an invisible entity watches over them and kills them if they do not perform.  The result is that wizards can always pay for goods and services ‘up-front’, confident that the work will be performed.  For a smart contract though, the only sanction which the contract can impose for non-performance is: (i) to withhold payment; or (ii) to trigger a payment of liquidated damages from the non-performing party.

For some kinds of contracts, it will be quite easy to determine whether payment / liquidated damages conditions have been met.  For example, where the substantive performance was to transfer shares, or pay money for money (as in a currency trade) or transfer some other asset the ownership of which is registered.  Also contracts where there is no substantive performance, and payments are triggered by objectively verifiable, widely reported events (movements of the share prices, weather events).  This is one reason why smart contracts will most likely see widespread adoption first in the finance sector. 

On-demand guarantees would be a good example of a contract which would be ripe for automation.  These are contracts where, as security for D’s performance, a bank agrees to pay C a sum of money upon a demand being submitted in a particular form, without evidence as to whether D actually performed.  The bank will then seek to recover from D (the bank usually being the bank which holds D’s assets, and having a floating charge or other security).  If D disputes its liability, it has to try to claw the money back from C.  Although these contracts should operate mechanistically, it is common for D to try to restrain the bank from paying, and to pursue proceedings for that purpose.  A smart contract which simply triggered a payment on receipt of the requisite instruction from C would avoid such litigation.

There are other contracts where real-world events are readily measurable and measurement can be automated, although coming up with a tamper-proof system which parties will accept as truly authoritative presents some difficulties.  While blockchain can render an immutable record, there is no guarantee that the data was accurate when it was entered.  Nonetheless, there are some contexts where tamper-proof measurement systems are probably viable in the short term.  Consider drivers working for services like Uber.  Whether they have driven to the requisite location is readily verifiable from the driver’s and the customer’s mobile phone GPS data.  There is no smart contract between driver and customer, but one can see that this is a transaction which could potentially be reduced to a smart contract.  It seems unlikely that smart contracts will be deployed for such low-value transactions, but one can see that contracts for the transport of larger items might well be susceptible to smart contracts.  A charterparty could be rendered as a smart contract, with payment conditional upon a ship’s arrival, the payment of demurrage being triggered by late arrival and so on.  Depending on who bears the weather risk, objective data about weather conditions could be factored into the contract, with additional time being allowed if predetermined wind speeds are exceeded, for example. 

Contracts for the drilling of oil wells might be another area where true smart contracts could readily be used.  There, the data which is used to assess whether performance has been achieved is all recorded in electronic form, since it is being collected remotely by instruments in the well bore and on the rig.  There are some other contracts which might be susceptible to automation in part.  For example, many contracts (oil field Joint Operating Agreements, some other joint venture agreements and many construction contracts) feature what could broadly be termed ‘pay now argue later’ provisions, similar to those found in on-demand guarantees.  A (provisional) payment is due if certain conditions are met – if the operator submits a cash call, if the employer fails to submit a pay-less notice.  These kinds of mechanistic function could readily be made the subject of smart contracts.

Smart contracts are much harder to deploy in other contexts.  To use a truly smart contract for a building / shipbuilding contract, for example, would require the smart contract to have access to an accurate, comprehensive, tamper-proof mechanism for measuring and assessing the quality of work (including design work) – complex tasks which, while they might be improved by technology (see above) are likely to require human involvement for a long time to come.  A smart contract which depends on human input, and human judgment, to determine whether payment conditions have been satisfied and trigger the payment of consideration / liquidated damages is not a true smart contract. 

Smart contracts associated with cryptocurrency blockchains

Several blockchain-based cryptocurrencies allow a user who owns that cryptocurrency to create a contract which is essentially an instruction to pay a specified amount of cryptocurrency from the user’s account to another account if certain conditions are satisfied within a stated time period.  The contract is saved in the public blockchain, so that the other party can review it, and both parties can be confident that the instruction cannot be altered or rescinded.  Thus, cryptocurrency blockchain based contracts resolve the irrevocability problem.

Due to the way signatures authorising transfers of cryptocurrency are encrypted, the other party will be able to verify that the instruction is valid (i.e. that it will have the effect of making a payment from the relevant address if the conditions are met and there are sufficient funds associated with that address) but will not be able to use that information to derive the private key associated with the address. Cryptocurrency blockchain based contracts thus also resolve the authority problem.

With some of these smart contracts, the funds are effectively placed into escrow.  The funds are frozen for the stated time period.  During that period, they can only be used to make the contracted payment.  The paying party cannot transfer them to any other account.  In other cases, there is no such freezing or ringfencing mechanic.  In these other cases the contract is, effectively, only a promise to pay funds if there are funds associated with the stated address.  It might be appropriate to use this kind of mechanism if the stated address is a trading account, with a history of liquidity, and which was identified with a known business or individual who had a reputation to maintain, and there were other contracts recorded in the blockchain which had a prospect of transferring currency to that address in the relevant time.  But if there is a chance of the other party simply removing the funds associated with the relevant address, then this option would be very risky.  Cryptocurrency blockchain based contracts do not resolve the funding problem. 
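The escrow mechanic described above can be caricatured as follows (a toy sketch with invented names; on a real blockchain the lock would be enforced by the protocol itself, not by a class anyone could edit):

```python
from datetime import date

class EscrowedFunds:
    """Toy escrow: funds locked until `expiry`; during the lock they
    can only be released to the contracted payee. Names invented."""
    def __init__(self, amount: float, payee: str, expiry: date):
        self.amount, self.payee, self.expiry = amount, payee, expiry
        self.released = False

    def transfer(self, to: str, today: date) -> bool:
        if self.released:
            return False          # funds already paid out
        if today <= self.expiry and to != self.payee:
            return False          # frozen: payor cannot divert the funds
        self.released = True
        return True

lock = EscrowedFunds(1.0, payee="C", expiry=date(2022, 3, 31))
print(lock.transfer("D-other-account", date(2022, 1, 15)))  # False: frozen
print(lock.transfer("C", date(2022, 2, 1)))                 # True: contracted payment
```

The cost the sketch makes visible is exactly the one noted in the text: while frozen, the funds are dead capital that the payor cannot use to run its business.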


Only events which occur in and are recorded on the blockchain can be used as conditions in blockchain based contracts.  So, for example, if Coin X and Token Y are on the same blockchain, a smart contract might say “IF Address A transfers 0.1 Coin to Address B on or before 1 January 2022 AND Address A transfers Token Y to Address B on or before 31 March 2022 THEN Address B SHALL transfer 1.0 Coin to Address A”.  A can pay B 0.1 Coin before 1 January 2022 to buy a ‘put’ option to sell Token Y to B for 1.0 Coin anytime in the first three months of 2022.  One can create similar contracts to effect straightforward sales. In theory, one could also make loans.  If A transfers 1 Coin to B on or before 1 January, then B will pay A … [list interest payments and dates].  The problem with making loans, though, is that if you use an escrow mechanism, and freeze the funds B will need to repay the principal and interest, then there is no point having the loan, because B can’t spend it – it has to sit in B’s account. 
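A put option of the kind just described reduces to two checks over the transfer history recorded on the chain (a sketch; the asset names, dates and ledger format are invented for illustration):

```python
from datetime import date

def put_option_payout(transfers: list[dict]) -> bool:
    """True if B must pay A 1.0 Coin: A paid the 0.1 Coin premium
    by 1 Jan 2022 AND delivered the token by 31 Mar 2022."""
    premium_paid = any(
        t["from"] == "A" and t["to"] == "B" and t["asset"] == "Coin"
        and t["amount"] >= 0.1 and t["date"] <= date(2022, 1, 1)
        for t in transfers
    )
    token_delivered = any(
        t["from"] == "A" and t["to"] == "B" and t["asset"] == "Token"
        and t["date"] <= date(2022, 3, 31)
        for t in transfers
    )
    return premium_paid and token_delivered

history = [
    {"from": "A", "to": "B", "asset": "Coin", "amount": 0.1, "date": date(2021, 12, 20)},
    {"from": "A", "to": "B", "asset": "Token", "amount": 1, "date": date(2022, 2, 14)},
]
print(put_option_payout(history))  # True: B must transfer 1.0 Coin to A
```

Because both conditions are themselves blockchain events, no outside information is needed; the moment either condition depends on something off-chain, an oracle (below) becomes necessary.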

If parties want to make contracts to do things outside the blockchain, they need to use a third party service called an ‘oracle’ (examples of oracle services include Oracle, Provable, R3 Corda and Osiri).  For a fee, the service provider will monitor events outside the blockchain environment and, if certain preconditions are satisfied on or before a certain date, will execute a transaction within the blockchain (presumably transfer a small amount of currency between two addresses it controls) to signal / record that the event has occurred.  This transfer can be used as the execution condition for the payment in the smart contract.  Using this function it is possible to make self-executing bets on outside events which are widely reported or public knowledge through authoritative sources published online: weather, movements of the stock market, commodity prices, results of sporting events and so on.  Such contracts could be used for speculation or hedging. 

More often, though, parties will want to make contracts based on events which are not public in the same way.  Did the parcel arrive on time?  When did the ship arrive?  To make contracts about these things, it is necessary to give the oracle some authoritative, tamper-proof means of detecting the occurrence of these events.

The fiat money problem

The principal problem with using smart contracts associated with cryptocurrency blockchains is that they can only generate transfers of cryptocurrency.  Most cryptocurrency is not presently particularly useful as currency, but is used for speculation.  If a business receives a payment in cryptocurrency, then it probably will not be able to use it to pay staff or suppliers, because all its contractual obligations will be denominated in fiat currency.  To do anything useful with the value received, it will need to convert it into fiat currency.  Many cryptocurrencies are simply too volatile for this to be a realistic prospect.  A payment denominated in cryptocurrency might fall greatly in value between the contract being entered and the payment falling due, wiping out the profit of the transaction.  Equally, businesses might make material profits from such fluctuation.  Businesses which were created to pursue safe, well-understood real-world activities for their shareholders could thus drift into a kind of back-door cryptocurrency speculation.

One workaround would be to price contracts in (say) US$ and, when the time comes for payment, have software automatically purchase the equivalent value in cryptocurrency at the going rate and use that for payment, so that you always pay / receive the agreed dollar amount.  Of course, some cryptocurrencies routinely swing 10% or more in the space of a day, so even then there is a risk that the value of the cryptocurrency might plummet between its being received and its being converted into fiat currency.  Also, this kind of system necessitates the purchase of cryptocurrency in exchange for fiat currency, and that cannot be done on a cryptocurrency blockchain.
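The conversion step amounts to a spot calculation (a sketch; the slippage buffer is an invented parameter, and sourcing a trustworthy spot rate is itself a version of the oracle problem discussed above):

```python
def crypto_amount_for_usd(usd_due: float, spot_usd_per_coin: float,
                          slippage_buffer: float = 0.0) -> float:
    """How much cryptocurrency to buy and send so the payee receives
    the agreed dollar value at the current spot rate.
    `slippage_buffer` pads against the price moving before settlement."""
    if spot_usd_per_coin <= 0:
        raise ValueError("spot rate must be positive")
    return usd_due / spot_usd_per_coin * (1 + slippage_buffer)

# Owe $50,000; spot rate $50,000 per coin; pad 2% for volatility.
amount = crypto_amount_for_usd(50_000, 50_000, slippage_buffer=0.02)
print(amount)  # 1.02 coins
```

Even with a buffer, the payee still bears the residual risk that the price moves before it completes the reverse conversion back into fiat currency.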

People have created so-called ‘stablecoins’ which have been designed to maintain a stable value tied to US Dollars, Euros or some other currency, either by having the coins ‘asset-backed’, or in other more innovative ways.  A business which transacted in USD could have a fund of these coins, or buy and sell them as they are required.  One issue, though, is how confident you are in the issuer of the stablecoins, and whether they will maintain the assets which back them up.  See, for example, Faux, writing recently in Bloomberg Businessweek about the world’s biggest ‘stablecoin’, Tether:

Exactly how Tether is backed, or if it’s truly backed at all, has always been a mystery. For years a persistent group of critics has argued that, despite the company’s assurances, Tether Holdings doesn’t have enough assets to maintain the 1-to-1 exchange rate, meaning its coin is essentially a fraud. But in the crypto world, where joke coins with pictures of dogs can be worth billions of dollars and scammers periodically make fortunes with preposterous-sounding schemes, Tether seemed like just another curiosity.

Then, this year, Tether Holdings started putting out a huge amount of digital coins. There are now 69 billion Tethers in circulation, 48 billion of them issued this year. That means the company supposedly holds a corresponding $69 billion in real money to back the coins—an amount that would make it one of the 50 largest banks in the U.S., if it were a U.S. bank and not an unregulated offshore company.

Another problem with using stablecoins, even if one is confident in them, is the transactional cost of constantly shifting money back and forth between cryptocurrency and fiat currency. 

Barriers to widespread adoption of smart contracts

The remaining bars to widespread adoption of smart contracts, then – as yet unresolved by the magic of blockchain – are the fiat money problem, the funding problem and (to some degree) the characteristic performance problem. 

The characteristic performance problem is likely to be resolved, at least for many contracts.  More and more tamper-proof or tamper evident systems will be created for recording real-world data in the form of immutable blockchain, which can then be used by the smart contract.  At the same time, contractual obligations will increasingly be expressed in an absolute, mechanistic way, eliminating the need for a human to consider what was a ‘reasonable’ time, whether a party used ‘reasonable’ endeavours and so on.  There will, however, remain a substantial body of contracts where performance is inherently qualitative, subjective, hard to measure and requires judgment or where measurement is prohibitively costly / difficult.  Such contracts cannot readily be made truly self-enforcing – there will be a human ‘in the loop’ somewhere, and the role of the court / arbitrator is preserved, albeit that the evidential record may be of far higher quality, and the dispute narrower, than at present, leaving the tribunal with a reduced role.

The fiat money problem seems resolvable.  Some common system is likely to emerge whereby banks use blockchain to record smart contracts and reliably execute fiat money transfers based on that blockchain.  One is still reliant on the banking system to hold the fiat money funds and execute the payments – just like now.  But you have the added utility of smart contracts. 

The funding problem seems more intractable – how to guarantee that the payor will fund the promised payment if they can, while leaving them free to make use of their funds in the meantime.  This is an area where more work is needed.  One possibility would be to find some kind of effective sanction for non-payment which could be executed automatically upon non-payment.  Payors with other assets, ownership of which was recorded on blockchain, could potentially use those as security with those assets being frozen pending payment, but the payor being free to use their cash in the meantime.  Doing this with cryptoassets would be straightforward, but real-world assets could also be used.  For example, if an authoritative register recording the ownership of real property or high value items (like vessels) were committed to blockchain, one could have in place systems which transferred ownership of property if payments were not forthcoming.  Allied to this, or as a sanction in its own right, real world assets could be impaired as the sanction for non-payment.  A car, a ship, or a tunnel boring machine could be disabled.  A building door could be locked.  If some authoritative, very widely-used system for making smart contracts for payments in fiat currency did emerge, it might be that an effective sanction for default would simply be to bar any defaulter from future use of that system.

Do unbreakable vows stifle commerce?

It is interesting to consider whether unbreakable vows might themselves have been part of the explanation for the wizarding economy’s lack of sophistication.  Since the effect of breaching an unbreakable vow is to automatically kill the vow-breaker, it follows that only natural persons (not corporations) can make unbreakable vows.  Sole traders, then, enjoy a huge competitive advantage over companies.  Yet it is limited liability companies, with their pooling of resources and sharing of risks, which make large, risky, complex ventures possible. 

The low sophistication of wizard society might represent the point of equilibrium between two competing forces: aversion to entering into unbreakable vows oneself, and distrust of others who are unwilling to enter into them.  Perhaps the compromise which emerges is one where buyers and sellers confine themselves to transactions which are face-to-face, with cash-on-delivery, avoiding so far as possible the issue of whether to enter into an unbreakable vow by avoiding contracts with payment and performance separated in time.  Where parties cannot avoid contracts with payment and performance at different times, they will enter into unbreakable vows, but make those contracts as simple as possible, always using well-understood precedents and standard forms and avoiding complex, risky, innovative ventures or transactions.  Yet some level of risk taking – people entering into contracts which they know they might be unable to perform – is desirable.

Any such stifling effect, though, is not principally the result of the fact that the contract is automatically monitored and enforced.  It is the result of the stiff penalty for breach (death).  Any stifling effect would be much less if rather than instant death, breaching an unbreakable vow automatically transferred a predefined sum of money to the innocent party by way of compensation.  Smart contracts might, nonetheless, play a role in suppressing some riskier economic activity.

Truly unbreakable contracts result in inefficiencies

Scenario 1.  We make a deal which will make each of us 100 galleons better off.  Later, something unexpected happens which is going to increase my cost of performing my side of the bargain by 300 galleons, so that I will be 200 galleons worse off if I perform.  As against that, performing the contract will still only make you 100 galleons better off.  The net result, then, is that performing this contract makes us 100 galleons worse off.  The efficient result is not to perform it. 

Scenario 2.  Instead of making you 100 galleons better off, my performing the contract will make you 300 galleons better off.  Performing the contract makes me 200 galleons worse off and you 300 galleons better off.  The net result is that performing the contract will make us 100 galleons better off overall.  The efficient result is to perform it.

In each scenario, performing the contract will make me 200 galleons worse-off so, if there is no law (or magic) to force me to comply with the contract, then I will not perform it.  In scenario 1 that is the efficient outcome.  In scenario 2, if I refuse to perform, and there is no law or magic to compel my performance, there will be a renegotiation.  My performance is worth 300 galleons to you and will cost me 200 galleons.  So you will offer me (say) an extra 250 galleons.  Now performing the contract will make you 50 galleons better off and me 50 galleons better off, so the contract will be performed – which is the efficient result. 

In theory, a system in which contracts were not enforced at all by law (or magic) would result in the efficient outcome, because contracts would be renegotiated, and terms would be agreed which resulted in their performance.  In practice, transaction costs inhibit this efficient outcome.  It will be rare for each party’s costs and benefits to be as clear as in the examples above.  It will take time and effort to negotiate a new price.  Each of us has an incentive to lie about our costs and benefits in an attempt to obtain better terms.  Each of us has reason to distrust the other’s claims.  Negotiations will often break down and efficient contracts will go unperformed.  A system of laws (or magic) which made contracts truly unbreakable, on the other hand, always forcing people to render the substantive performance they have promised, leads to an inefficient outcome in cases like scenario 1, causing wasteful contracts to be performed.  This is (arguably) why the law rarely provides for specific performance of contracts. 

Instead, we have a system whereby I can refuse to perform, but must pay you damages to make you as well-off as if I had performed it.  In deciding whether to breach the contract, I have to consider all the costs of doing so.  This leads to the efficient outcome.  In scenario 1, I will prefer to breach the contract (and pay you 100 galleons in damages) rather than perform (obtain 100 galleons of benefit, incur 300 galleons of additional cost, net cost 200 galleons).  In scenario 2, I will prefer to perform (obtain 100 galleons of benefit, incur 300 galleons of additional cost, net cost 200 galleons) rather than breach (and pay 300 galleons in damages). 
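The damages rule just described can be checked against the same toy numbers: under expectation damages, my private choice coincides with the efficient one in both scenarios.  This is only a sketch, and the function name is a hypothetical label of mine:

```python
# Sketch of the expectation-damages rule using the article's toy numbers.
# my_net_cost: what performing costs me overall (extra cost net of my benefit).
# your_benefit: what performance is worth to you - and hence the damages
# I must pay you if I breach.

def i_choose_to_perform(my_net_cost, your_benefit):
    # I perform only if performing is cheaper for me than breaching
    # and paying your lost benefit as damages.
    return my_net_cost < your_benefit

# Scenario 1: perform at a net cost of 200, or breach and pay 100 in damages.
print(i_choose_to_perform(200, 100))   # False: I breach - the efficient outcome

# Scenario 2: perform at a net cost of 200, or breach and pay 300 in damages.
print(i_choose_to_perform(200, 300))   # True: I perform - the efficient outcome
```

Because the damages equal your lost benefit, my private calculation internalises the full social cost of breach, which is why the two decisions line up with the efficient ones.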

Truly unbreakable contracts, then, which always compel substantive performance come what may, would often lead to inefficient outcomes.  A mature system of smart contracts which was always effective to compel performance would likely have a similar effect.  Smart contracts will only result in economic efficiencies - and so an overall benefit to humanity - if the savings in enforcement costs outweigh the value of the efficient breaches they foreclose. 


Conclusion

Literature focusing on the question of when AIs will be able fully to replicate or surpass the functions of judges and arbitrators is looking too far to the horizon.  Several technologies, most notably smart contracts, have the potential (although difficulties undoubtedly remain) to result in a reduced role for judges and arbitral tribunals well before humans develop superhuman artificial general intelligence - if they ever do. 


References

Aletras N, Tsarapatsanis D, Preoţiuc-Pietro D, Lampos V, Predicting Judicial Decisions of the European Court of Human Rights: a Natural Language Processing Perspective, PeerJ Computer Science 2:e93 (2016)

Arditi and Tokdemir, Using Case-Based Reasoning to Predict the Outcome of Construction Litigation (2002)

Bostrom, Superintelligence: Paths, Dangers, Strategies (2017)

Eidenmueller and Varesis, What is an Arbitration? Artificial Intelligence and the Vanishing Human Arbitrator (17 June 2020), available at SSRN

Faux, Anyone Seen Tether’s Billions? (7 October 2021)

Franklin, AI Technology and International Arbitration - Are Robots Coming for Your Job? (2020)

Katz DM, Bommarito MJ II, Blackman J, A General Approach for Predicting the Behavior of the Supreme Court of the United States, PLoS ONE 12(4): e0174698 (2017)

McCarthy, What is Artificial Intelligence? (2007)

Newell et al, Chess-Playing Programs and the Problem of Complexity, IBM Journal of Research and Development 2(4): 320-35 (1958)

Ruger TW, Kim PT, Martin AD, Quinn KM, The Supreme Court Forecasting Project: Legal and Political Science Approaches to Predicting Supreme Court Decision Making, Columbia Law Review 104, 1150-1210 (2004)

Haber and Stornetta, How to Time-Stamp a Digital Document, Journal of Cryptology 3: 99-111 (1991)

Toews, AI in Law and Legal Practice - A Comprehensive View of 35 Current Applications

Vardi, Artificial Intelligence: Past and Future, Communications of the ACM 55(1): 5 (2012)

Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk, in Bostrom and Ćirković (eds), Global Catastrophic Risks (2008)
