
AI, IQ and the Importance of Open Information

Christoffer Holmgård has spent almost 20 years studying human psychology and Artificial Intelligence in one form or another. Now CEO at Modl.ai, where he develops AI tools to support the gaming industry, he spoke to 6GWorld about the use, abuse and future of AI and Machine Learning.

Context is King

AI expert Kate Crawford recently described AI as “neither artificial nor intelligent”. Holmgård has a slightly different take on this.

In his view, human intelligence is contextual by definition. It’s about understanding and applying the appropriate response to a context. “In fact, much of the cortex is devoted to stifling responses and urges that would not be helpful in context.”

Humans have evolved certain kinds of intelligence, such as reading facial expressions, and on average we’re very good at this. Understanding theoretical physics, by contrast, requires a good deal more effort, because it is not something we evolved to do.

Human “IQ”, in this sense, can be defined as how well a person is able to correctly identify the context of a situation and select the appropriate kinds of response. Meanwhile, current AI can be considered intelligent only in a particular context.

“An AI can understand a user’s behaviour, but only within the bounds of a given context. Let’s say in a particular game they are methodical and careful. Can we say that this is a methodical and careful person? No, because their behaviour in another game might be very different, and different again in real life. Behaviour is contextual too.”

This is a far cry from the general intelligence or consciousness that the public is given to fear. “Some might argue about the possibility of emergent properties [creating an artificial general intelligence], but I don’t think so,” Holmgård mused. “We do not have the right kinds of processes. Not any time soon, for sure.”

The Dangers of Unbounded AI

The conversation turned to the implementation of AI in practice, and especially in the gaming industry. “No Man’s Sky”, a title released in 2016, famously relies on an AI to generate a near-infinite number of unique planets and terrains – and, while the hype around the game in development was huge, the initial response was less spectacular.

Holmgård is, nevertheless, a fan. He describes the game as using a limited AI: “It was bounded by the initial set of programs and inputs, so the game can create millions of variations… but it’s limited by those boundaries.”
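To make “bounded” concrete, here is a minimal sketch of seeded procedural generation – entirely hypothetical, and not No Man’s Sky’s actual code. The generator can emit millions of distinct planets, yet every one of them is determined by its seed and confined to the combinations its rules allow.

```python
import random

# A minimal sketch of bounded procedural generation (illustrative only;
# the rulesets below are hypothetical, not No Man's Sky's algorithm).
BIOMES = ["desert", "ocean", "tundra", "jungle", "volcanic"]
ATMOSPHERES = ["none", "thin", "breathable", "toxic"]

def generate_planet(seed: int) -> dict:
    """Deterministically derive a planet from a seed.

    Millions of distinct planets are possible, but each one is
    confined to the combinations the rules above allow -- the
    generator is bounded by its initial programs and inputs.
    """
    rng = random.Random(seed)
    return {
        "seed": seed,
        "biome": rng.choice(BIOMES),
        "atmosphere": rng.choice(ATMOSPHERES),
        "gravity": round(rng.uniform(0.3, 2.5), 2),  # in Earth g
        "mountain_height_km": round(rng.uniform(0.0, 12.0), 1),
    }

# The same seed always yields the same planet: vast, but reproducible,
# and never straying outside the boundaries set by the rules.
assert generate_planet(42) == generate_planet(42)
```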

At the same time, the difference between human intelligence and algorithms was clear, and led to some disillusionment. “The challenge with planet creation lies in extracting what’s interesting or striking from all the different possible variations. Many are likely to be very boring or similar-seeming to humans.” Meanwhile the AI is able to diligently create different worlds as requested, but it has no concept of “interesting”.
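One way to picture that gap is to bolt a curation step onto the generator. The heuristic below is entirely hypothetical – any such “interestingness” score is a human-designed proxy, which is precisely the judgement the AI itself lacks.

```python
def interest_score(planet: dict) -> float:
    """A hypothetical 'interestingness' heuristic (illustrative only).

    Rewards extreme terrain and exotic atmospheres; everything else
    that humans find striking is left unmodelled -- which is exactly
    the gap the generator cannot close on its own.
    """
    score = planet["mountain_height_km"] / 12.0   # 0..1 for terrain drama
    if planet["atmosphere"] in ("toxic", "none"):
        score += 0.5                              # exotic air stands out
    return score

# Reusing generate_planet from the sketch above: keep only the most
# striking worlds from a large batch and discard the 'boring' bulk.
candidates = [generate_planet(seed) for seed in range(10_000)]
striking = [p for p in candidates if interest_score(p) > 1.0]
```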

Holmgård compares this to AI Dungeon, a text adventure generated by a deep learning algorithm, where users could type any response to a situation and usually receive a plausible reply. Among other things, it took in user-generated content as inputs to build its stories.

This “unbounded” AI ended up creating some very unexpected and unsafe content, thanks to inputs from some of its users. “You might find a dragon in one place and type in that you want to mount the dragon and fly away,” Holmgård explained. “The AI could give you a very unexpected response to that kind of command.

“So there are problems with bounded and unbounded AI, if you want to create an interaction that is engaging and also appropriate. There needs to be some sort of middle ground.”
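A hedged sketch of what that middle ground might look like in code: an open-ended generator whose raw output passes through a safety gate before it reaches the player. Everything here is a placeholder – generate_reply stands in for a language model, and real systems would use trained classifiers rather than keyword lists – so this illustrates the shape of the idea, not AI Dungeon’s actual pipeline.

```python
UNSAFE_TERMS = {"graphic_violence", "slur"}  # placeholder blocklist

def generate_reply(prompt: str) -> str:
    """Stand-in for an unbounded text generator (e.g. a large language
    model); replaced here with a canned response for illustration."""
    return f"You climb onto the dragon and soar away from {prompt}."

def passes_gate(text: str) -> bool:
    """Very naive safety gate: a real system would use a trained
    moderation classifier, not a keyword list."""
    return not any(term in text.lower() for term in UNSAFE_TERMS)

def respond(prompt: str) -> str:
    reply = generate_reply(prompt)
    # The generator stays free to improvise; the boundary is
    # enforced afterwards, between the model and the player.
    return reply if passes_gate(reply) else "The story takes a different turn..."

print(respond("the ruined tower"))
```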

This sounds strangely similar to the challenges faced by governments in enabling liberties but preventing criminal actions. Is the answer some kind of user parliament? Chuckling, Holmgård nodded: “Perhaps something like that. Perhaps. It’s not something where there is a clear answer.”

Data as a Two-Way Relationship

Speaking of the relationships between users and businesses, an increasing variety of players is involved in making AI-driven activities happen: end users as a data source, owners of training data, service owners, and the people who develop the AI algorithms. The ecosystem works well so long as everything runs smoothly. What happens when problems arise, though? Is it clear yet how liability and risk will be apportioned?

As networks become more AI-driven, this is liable to be an increasingly significant question in the telecoms world.

Holmgård is hesitant. “There is no very clear answer here because the way that contracts tend to be structured isn’t uniform by any means. But one thing lacking today, which would make a big difference, is better transparency around what is happening to the data, how it moves and what conclusions are drawn from it.”

It would be particularly helpful for end users to understand what is being inferred about them, as a way to build trust in the companies harvesting their data.

“If they give back the conclusions they draw, then the end user can benefit from those insights; or, if they are not very deep insights, the user may simply not mind that a company knows these things about them.” This two-way exchange would be very different from the extractive data model today.
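As a loose illustration of that two-way model – entirely hypothetical, since no such API is described in the interview – a service could record each inference it draws in a form the user can inspect on demand:

```python
from dataclasses import dataclass, field
from datetime import date

# A hypothetical sketch of 'giving back the conclusions': each inference
# a service draws about a user is stored in a form the user can inspect,
# rather than being consumed internally and never disclosed.

@dataclass
class Inference:
    statement: str      # what the service concluded
    derived_from: str   # which data the conclusion was drawn from
    drawn_on: date

@dataclass
class UserProfile:
    user_id: str
    inferences: list[Inference] = field(default_factory=list)

    def disclose(self) -> list[str]:
        """What the user sees when asking 'what do you know about me?'"""
        return [f"{i.statement} (from {i.derived_from}, {i.drawn_on})"
                for i in self.inferences]

profile = UserProfile("u123")
profile.inferences.append(
    Inference("Prefers methodical playstyles", "in-game behaviour", date(2021, 9, 1))
)
print(profile.disclose())
```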

Holmgård draws a parallel between online shops that learn preferences and real-world stores. “If you have a local shop you visit regularly, the shopkeeper may start to recognise your face and what kinds of things you buy. He might start chatting with you and pointing out new items you might like, and learning more from your responses. Nobody thinks that would be weird or creepy.” The difference between offline and online is that the offline customer also builds up an awareness of what the shopkeeper knows and why, through their interactions.

With that ability to build trust, Holmgård hopes that the next frontier in AI might be to start pulling together an understanding of behaviour across different contexts – across different media or activities, for example – in order to create more complex and accurate models of human behaviour en masse and deliver better individually tailored experiences. Is this, 6GWorld asked, a way to start moving towards IQ in a computer system – finding the appropriate response in context?

“That would be the holy grail,” Holmgård replied, “but we have to earn that trust first.”

