I devote a lot of time to understanding and critiquing the AI Industrial Complex. Although much, perhaps most, of this sector's output is absurd or dangerous (AI that claims to read emotions and automated benefits-fraud determination being two such examples), there are uses that are neither, and we can learn from them.

This post briefly reviews one such case.

During dinner with friends a few weeks ago, the topic of AI came up. No, it wasn't shoehorned into an otherwise tech-free situation; one of the guests works with large-scale engineering systems and had some intriguing things to say about solid, real-world, non-harmful uses for algorithmic 'learning' methods.

Specifically, he mentioned Siemens' use of machine vision to automate the inspection of wind turbine blades via a platform called Hermes. This was a project he was significantly involved in and justifiably proud of. It provides an object lesson in the kinds of applications that can benefit people rather than making life more difficult by algorithm.

You can view a (fluffy, but still informative) video about the system below:

Hermes System Promotional Video

A Productive Use of Machine Learning

The solution Siemens employed has several features that make it an ideal case study:

1.) It applies a ‘learning’ algorithm to a bounded problem

Siemens engineers know what a safely operating blade looks like; this provides a baseline against which variances can be detected (a minimal, hypothetical sketch of this idea follows the list below).

2.) It applies algorithms to a bounded problem area that generates a stream of dynamic, inbound data

The problem sits within the narrow limits of what an algorithmic system can reasonably and safely handle, and it benefits from a robust stream of inbound training data that can improve performance over time.

3.) It’s modest in its goal but nonetheless important

Blade inspection is a critical task, but it is time-consuming and tedious when done manually. Using automation to increase accuracy and offload repetitive work is a perfect scenario.
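To make point 1 concrete, here is a minimal sketch of the 'known-good baseline vs. detected variance' pattern. It is emphatically not the Hermes pipeline, which the promotional video only describes at a high level; every name, array shape, and threshold below is a hypothetical illustration using simple per-pixel statistics rather than a trained vision model.

# Minimal, hypothetical sketch of inspecting images against a known-good baseline.
# This is NOT Siemens' Hermes implementation; names, shapes, and thresholds are
# illustrative only, and a real system would use learned features, not raw pixels.
import numpy as np

def build_baseline(healthy_images: np.ndarray):
    """Summarize known-good blade photos as a per-pixel mean and spread."""
    mean = healthy_images.mean(axis=0)
    std = healthy_images.std(axis=0) + 1e-6  # avoid division by zero
    return mean, std

def deviation_map(image: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Per-pixel z-score: how far each pixel sits from the healthy baseline."""
    return np.abs((image - mean) / std)

def flag_for_review(image, mean, std, z_cutoff=4.0, area_fraction=0.01) -> bool:
    """Flag the image for a human inspector if an unusually large area deviates."""
    anomalous = deviation_map(image, mean, std) > z_cutoff
    return float(anomalous.mean()) > area_fraction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for grayscale inspection photos of blades known to be healthy.
    healthy = rng.normal(0.5, 0.05, size=(200, 64, 64))
    mean, std = build_baseline(healthy)

    ok_image = rng.normal(0.5, 0.05, size=(64, 64))
    damaged = ok_image.copy()
    damaged[20:30, 20:30] += 0.6  # simulated surface defect

    print(flag_for_review(ok_image, mean, std))  # False: within normal variance
    print(flag_for_review(damaged, mean, std))   # True: deviates from the baseline

The structure, not the statistics, is the point: deviations are measured against a well-understood baseline, and anything unusual is routed to a human inspector rather than acted on automatically.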


How Is This Different from AI Hype?

AI hype is used to convince customers, and society as a whole, that algorithmic systems match or exceed the capabilities of humans and other animals. Attempts to proctor students via machine vision to flag cheating, to predict emotions, or to fully automate driving are examples of overreach (and of the use of 'AI' as a behavioral control tool). I use 'overreach' because current systems are, to quote Gary Marcus in his paper 'The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence', "pointillistic": often quite good in narrow or 'bounded' situations (such as playing chess) but brittle and untrustworthy when applied to completely unbounded, real-world circumstances such as driving, which is a series of 'edge cases'.

Visualization of Marcus’ Critique of Current AI Systems

The Siemens example gives us some of the building blocks of a solid doctrine for evaluating 'AI' systems (and claims about those systems), along with a lesson that can be transferred to non-corporate uses.