Imagine being dropped in the middle of a mountain range – the Alps, say, or the Himalayas – and you’re blind. All you have is a walking stick. Your task is to find the lowest point, the bottom of the deepest valley. You use your walking stick to determine which way is down.

But you don’t know if this is the way to the valley basin. The valley might be located on the other side of the range, which means you should have gone up first, not down. But your only command is ‘go down’. Eventually you arrive at a plateau, and your stick can’t determine up or down in any direction. You’re stuck.

“This is the worst for deep learning,” said Eberhard Schoneburg, speaking during two lectures he gave on stage at the Hong Kong Fintech Week in October. “There’s no mathematics to resolve this.”

Welcome to the world of artificial intelligence. A.I. works a lot like a blind hiker – but expressed through mathematics, as explained by Schoneburg, founder of Hong Kong-based company Artificial Life and an advisor to PwC.
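The blind-hiker procedure has a name in mathematics: gradient descent. A minimal sketch (illustrative only; the function names and terrain are made up for this example) shows both behaviors from the analogy, settling into a valley and getting stuck on a plateau:

```python
# Gradient descent as the blind hiker: probe the slope with the
# "walking stick" (a tiny finite-difference step), then move downhill.
# Illustrative sketch only.

def hike_downhill(terrain, x, step=0.1, probe=1e-4, iters=1000):
    """Follow the local downhill direction of a 1-D terrain function."""
    for _ in range(iters):
        # The walking stick: feel the ground just ahead and just behind.
        slope = (terrain(x + probe) - terrain(x - probe)) / (2 * probe)
        if abs(slope) < 1e-8:   # plateau: the stick feels flat, so we're stuck
            break
        x -= step * slope       # the only command: 'go down'
    return x

# A made-up terrain with valleys at x = -1 and x = +1, and a flat spot
# at x = 0. Starting at 0.5, the hiker finds the valley at 1...
terrain = lambda x: (x * x - 1) ** 2
print(hike_downhill(terrain, 0.5))  # close to 1.0

# ...but starting exactly on the flat spot, the stick reports no slope
# in any direction, and the hiker never moves.
print(hike_downhill(terrain, 0.0))  # stuck at 0.0
```

Note the hiker never sees the whole range: each move uses only the local slope, which is why a valley on the far side of a ridge is unreachable.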

**The history bit…**

A.I. has developed as an offshoot of mathematics. Leibniz and Newton independently invented calculus (the mathematical study of continuous change) in the 17th century. Their work formed the basis of **deep learning**, a method of learning from sets of data rather than from a set of rules.

Modern A.I. began in the 1950s, invented by mathematicians, physicists and data engineers. A.I. emerged then because of two other developments: first, computers (invented to help crack spy codes during World War Two); and second, the first models of the human brain as a network of billions of neurons, which provided a template for how a self-learning intelligence might operate.

Given this background, A.I. has evolved in a technical, engineering fashion, and all of its models rely heavily on mathematics – the source of roadblocks as well as inspiration.

**…and the numbers bit**

Because deep learning is based on data, or inputs, the learning comes from constant training, such as to distinguish between photographs of cats versus dogs, or how to tell if an image represents the numeral eight.

In math-speak, a function is the relationship between the inputs and what gets spit out on the other end. A function could be an equation. A.I. training involves measuring how a function’s output changes as its inputs change – its derivatives. These are derivatives in the calculus sense, not the financial one: the rate of change of an equation, rather than a contract tied to a stock price’s movement.

Mathematicians use derivatives to work out the best, or optimal, function – in other words, the most likely path the blind alpinist can take to find the valley. They do so by rewarding the computer when it produces a right answer (cat versus dog, correctly identifying the numeral eight, etc.), and sending error signals when it’s wrong.
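The idea of using a derivative to shrink the error can be seen in a toy training loop (a minimal sketch with made-up data; real deep learning does this across millions of weights at once):

```python
# Illustrative only: a one-weight model y = w * x is trained on data
# generated with a true weight of 3. The derivative of the error tells
# the model which way to adjust w -- the "error message" in action.

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (input, target) pairs

w = 0.0     # the model starts out knowing nothing
lr = 0.02   # learning rate: how big a step to take downhill

for _ in range(500):
    # derivative of the total squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad  # step downhill on the error surface

print(w)  # close to 3.0: the computer found the best-fitting function
```

Each pass nudges the weight in whichever direction the derivative says the error decreases, which is the same "go down" command the alpinist follows.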

**Deeply learned mistakes**

Over time, the computer finds a path with the fewest error messages – that’s deep learning. To humans, it seems like the computer can now tell the difference between a cat and a dog. However, the machine may reach a point where it thinks it knows the difference, but it hasn’t been exposed to enough data, so it starts making mistakes. This is like the blind alpinist who reaches a plateau, so their walking stick doesn’t reveal the best direction to follow.

The more complex the problem, the harder it is for A.I. specialists to know when a machine has been trained enough.

“You think you’ve figured it out, but you haven’t,” Schoneburg said.

Modern **neural networks** emerged to address this problem. Instead of telling the alpinist to find their way down, the computer is trained until it is error free, given a particular data set, so that the alpinist is given a map (in Braille, presumably). Neural networks will work fine, so long as the map is accurate, but new data points – unaccounted for hills and rivers – create new mistakes.

“This creates a huge problem: how long do I have to train this network?” Schoneburg said. “No one can tell you when to stop.”

A machine can be trained until it’s error-free, but it learns by rote. It doesn’t know how to generalize or think in abstracts, or understand a trend.
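Rote learning at its extreme can be caricatured in a few lines (a deliberately crude sketch; the filenames and class are invented for illustration):

```python
# A rote learner: memorizes every training example exactly, so it is
# error-free on the data it has seen -- and helpless on anything else.
# Deliberately crude illustration of learning without generalization.

class RoteLearner:
    def __init__(self):
        self.memory = {}

    def train(self, examples):
        self.memory.update(examples)    # "learning" = memorizing

    def predict(self, item):
        return self.memory.get(item)    # unseen input -> no idea (None)

training_data = {"cat.jpg": "cat", "dog.jpg": "dog"}

m = RoteLearner()
m.train(training_data)
print(m.predict("cat.jpg"))   # "cat" -- perfect on the map it was given
print(m.predict("cat2.jpg"))  # None -- a new hill the map doesn't show
```

A trained neural network is far more capable than a lookup table, but the failure mode is the same in kind: zero error on the training set says nothing about terrain the map never covered.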

**The problems with A.I.**

Once the system has learned enough, it can be put to use in the real world, such as by applying a trading algorithm. But first it has to be trained – and **training** isn’t straightforward. There are three problems.

- Training is separate from application. Deep learning is done through batch processing, so training sessions cannot coincide with a computer acting in a live environment, at least not without risking reactions that human programmers cannot understand.
- No one knows how long it takes to train; low error rates could mask blind alleys, which in the context of a trading platform, could lead to catastrophic losses. “The only way through this is experience,” Schoneburg said. “You have to feel it. It’s an art.”
- Inputs need to be of good quality. A learning process can be sabotaged or manipulated to achieve a desired outcome, either deliberately, or through human cultural blinkers (like racism, conscious or not). A face- or voice-recognition algorithm could be deliberately corrupted to confuse identities, creating a cyber-security nightmare; or blind assumptions could accidentally discriminate against people.

More fundamentally, Schoneburg argues that the math-heavy approach to A.I. has created limitations. Although AlphaGo got plenty of attention for mastering the game of Go, A.I.’s concrete results have otherwise been paltry.

“Compare the development time of A.I. with that of airplanes,” Schoneburg said. The first airplanes emerged in the 1900s, and within sixty years, humans landed on the moon. “There’s nothing like a moon landing in A.I.,” Schoneburg said. “We have to be open to alternative approaches.”