Dog, Pig or Loaf of Bread?

elisabetta delponte
3 min read · Feb 9, 2022


The Dog-Pig dilemma from “The Mitchells vs. the Machines” movie

Dog. Pig. Dog. Pig. Loaf of bread. System error!

It’s hard to believe that an AI system could be so dull as to fail to distinguish a dog from a pig. However, if you have watched the movie “The Mitchells vs. the Machines”, you have seen how Katie, an aspiring teenage filmmaker, uses her family’s ugly dog, Monchi, to make the robots malfunction and stop their plan to eliminate all humans and conquer our planet. In the next paragraphs, we’ll see why this trick isn’t just a cinematic invention, but a clear example of how machine learning algorithms can fail.

From GPUs to today’s successes

I studied computer vision and machine learning when nothing was working, or at least when it was working with far less accuracy than today. It was the early 2000s, and I was a student in the SlipGuru team (now part of the Machine Learning Genoa Centre) at the University of Genova. At that time, Convolutional Neural Networks (CNNs) powered by GPUs were starting to gain popularity and to demonstrate their effectiveness.

Long story short, thanks to the increased computational power provided by GPUs, combined with the availability of huge amounts of data, machine learning algorithms began achieving success after success from that moment on, paving the way for the widespread adoption of Artificial Intelligence we see today.

Even poker, where an informational deficit (you can’t see your opponents’ cards) makes the game enormously more complex than, for instance, chess, has seen the rise of computer programs that can predict the optimal strategy for playing a hand, built from huge amounts of data.

However, since these algorithms are essentially black boxes, the reasoning behind their suggestions remains unknown, leaving poker players only the option of interpreting the output and replicating the strategy in similar situations. This is one of the risks of AI algorithms: when a critical decision is at stake, it may be difficult to rely on processes that are not entirely comprehensible to humans.

Garbage in, Garbage out

This classic saying, historically attributed to the British mathematician Charles Babbage, brings us to another critical element of machine learning algorithms: their dependence on good, valid training data. The wider research community recognises this limitation, and a number of studies aim to establish best practices for labelling training data across different application contexts, thus reducing the risk of biased models.
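To make the saying concrete, here is a minimal sketch, using scikit-learn on synthetic data (not any real system discussed above), of how mislabelled training examples — the “garbage in” — degrade a model’s accuracy on clean test data, the “garbage out”:

```python
# A minimal sketch (scikit-learn, synthetic data) of "garbage in, garbage out":
# the same model trained on increasingly mislabelled data loses accuracy
# even when evaluated on clean test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in [0.0, 0.1, 0.3, 0.5]:
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise      # corrupt a fraction of labels
    y_noisy[flip] = 1 - y_noisy[flip]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"label noise {noise:.0%} -> test accuracy {acc:.2f}")
```

The model, the data and the noise levels here are arbitrary choices for illustration; the point is simply that accuracy drops as the labels get dirtier.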

Monchi’s system error

And so we’re back to the probable cause of Monchi’s system error: even if the robots were trained on a humongous number of pictures of pigs and dogs, their knowledge does not allow them to generalise the concept of “dog” and recognise the weird appearance of the Mitchells’ pet. No matter how many dog or pig examples they have seen, the robots fail because they cannot generalise to something that looks like neither.
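For a flavour of why this happens, here is a toy sketch with hypothetical numbers (not the movie robots’ actual model): a closed-set classifier must spread its confidence over the only classes it knows, so an input unlike anything in its training data produces flat, unstable scores. Dog. Pig. Loaf of bread. System error.

```python
# Toy illustration with made-up logits: softmax forces all probability mass
# onto the known classes, so an out-of-distribution input (Monchi) gets
# low-confidence, easily flipped predictions instead of a clear answer.
import numpy as np

CLASSES = ["dog", "pig", "loaf of bread"]

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# A typical in-distribution dog photo: the "dog" logit clearly dominates.
dog_logits = np.array([4.0, 0.5, -1.0])
# Monchi: features match no training example well, so the logits are nearly flat.
monchi_logits = np.array([0.9, 1.0, 0.8])

for name, logits in [("ordinary dog", dog_logits), ("Monchi", monchi_logits)]:
    probs = softmax(logits)
    scores = ", ".join(f"{c}: {p:.2f}" for c, p in zip(CLASSES, probs))
    best = CLASSES[int(np.argmax(probs))]
    print(f"{name} -> {scores} -> predicts '{best}'")
```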

Without any pretence of mapping out the limitations and future challenges of Artificial Intelligence, I wanted to point out how Monchi’s system error is the key to tricking the systems that are trying to label us. That’s why I think the story of the Mitchell family is a good example of how we should embrace our diversity and recognise that being different can be a value, today more than when AI was less efficient.

The Mitchells vs. the Machines.
