Max also tells us why now is the right time for small businesses and projects to start – especially when you regularly get annoyed with Alexa or Google Home.
The term artificial intelligence (AI) has made the jump from the cinema into everyday life. We no longer automatically think of Skynet from the film “Terminator” or of “I, Robot” and “Interstellar”. Instead, we’re delighted by IBM’s Watson when it wins at “Jeopardy!”. Or we get annoyed with Alexa when she doesn’t do what we want.
We now come into contact with AI every day through Siri, Alexa, Google Home, or Cortana. But anyone who has yelled at one of these “helpers” (“Alexa, no!”) knows how long the path to a real digital assistant still is.
And this is the problem. “True” artificial intelligence, we believe, is only for big companies. It works with mysterious methods far removed from everyday life, in shielded data centres, and goes by the name of AlphaGo or Watson.
But this is not really the case. AI can become “tangible” and highly relevant in everyday life – and thus, applicable for smaller projects and budgets. Today, AI already includes versatile methods and techniques for digital services and products. But before we start discussing how all of us can make our systems smarter, we must first distinguish between apples and oranges. What does artificial intelligence actually mean?
There is no specific technology behind the term artificial intelligence (AI). Rather, it is a generic term describing the project of reproducing human, or natural, intelligence through artificial processes – including in machines.
While we used to find computers brilliant because of their computing power and data storage, they can now solve problems that were once exclusively the domain of human intelligence – speech recognition, image recognition, personality and mood analysis, or even self-driving cars. Perhaps we will soon reach the point where computers are not only better at these tasks than people, but also more “intelligent”. Forecasts and discussions about what such a world might look like are very exciting.
Machine learning technologies can help sort and categorise data based on different characteristics.
Machine learning and deep learning
Welcome to the specific use case. Machine learning is essentially about teaching a program to recognise patterns and learn rules. This method is often used for classification problems: assigning an object, action, or behaviour to a category based on various measured attributes.
- For example, a credit card transaction could be considered legitimate or fraudulent.
- For text and handwriting recognition, a read symbol must be assigned to a letter or a number.
- Visitors to a website can be assigned to different personas based on their history and can then be offered dedicated content.
When training these algorithms – just as when teaching people – we want to prevent the program from simply memorising the answers (known as “overfitting” the model); instead, we want it to learn the underlying rules, which should ideally be relatively easy to formulate. Having learned these rules, the person or algorithm should then be able to solve new cases in the corresponding problem class.
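To make the memorising-versus-learning point concrete, here is a toy sketch in Python. The photos, the blue-pixel feature, and all numbers are invented for illustration: a model that merely memorises its training examples fails on any unseen input, while even a crude learned rule generalises.

```python
# Toy illustration of memorising vs. learning a rule (made-up numbers).
# Each "photo" is reduced to a single feature: its fraction of blue pixels.
train = [(0.8, "beach"), (0.7, "beach"), (0.2, "city"), (0.3, "city")]

# A memoriser: perfect on the training data, helpless on anything unseen.
lookup = dict(train)
def memorise(blue):
    return lookup.get(blue, "unknown")

# A learned rule: a threshold halfway between the two class averages.
beach_avg = sum(b for b, label in train if label == "beach") / 2
city_avg = sum(b for b, label in train if label == "city") / 2
threshold = (beach_avg + city_avg) / 2  # 0.5 for these numbers
def rule(blue):
    return "beach" if blue > threshold else "city"

print(memorise(0.75), rule(0.75))  # unknown beach
```

The memoriser scores 100 % on its own training data yet cannot answer for the new photo at all – exactly the overfitting behaviour described above.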
There are basically two ways of learning: supervised and unsupervised learning.
In supervised learning, the algorithm is given a set of training data that has already been assigned to the correct categories – imagine a stack of holiday photos, pre-sorted into beach pictures and city shots. In addition, a number of relevant attributes are specified for the algorithm to take into account, such as the number of blue pixels in a photo and its GPS position.
The machine now begins to configure itself: it analyses the attributes of the training data and internally tries to create a generic model. With a second set of data, the test data, you can check how reliably the machine has learned the model – it should now categorise the test data more or less correctly. This is also how differently configured models can be compared. You can repeat the model-creation step as often as you like, varying it until you have found the model that performs best against the test data. This is sometimes a lengthy process.
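The train-and-test loop can be sketched in a few lines of Python. This is a minimal, hand-rolled nearest-centroid classifier – not any particular library’s API – and the attributes (blue-pixel share plus a fictional distance-from-the-coast value) and all numbers are invented:

```python
# Supervised learning in miniature: labelled training data, a simple model,
# and a held-out test set to check how well the model generalises.
training = [
    ((0.8, 1.0), "beach"), ((0.7, 2.0), "beach"),   # (blue share, km from coast)
    ((0.3, 40.0), "city"), ((0.2, 55.0), "city"),
]
test = [((0.75, 3.0), "beach"), ((0.25, 50.0), "city")]

def centroid(label):
    # Average attribute vector of all training examples with this label.
    points = [features for features, l in training if l == label]
    return tuple(sum(values) / len(points) for values in zip(*points))

centroids = {label: centroid(label) for label in ("beach", "city")}

def classify(features):
    # Pick the class whose training centroid is closest (squared distance).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

accuracy = sum(classify(f) == label for f, label in test) / len(test)
print(accuracy)  # 1.0 on this tiny test set
```

Comparing the accuracy of differently configured models on the same test data is exactly the benchmarking step described above.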
In unsupervised learning, only the data itself is analysed. The system tries, for example, to divide it into groups, which helps it identify frequencies, structures, and patterns in the data. In our holiday-photo example, unsupervised learning would find one cluster of photos with a high proportion of blue (the beach photos) and a second cluster with a lower proportion (the city photos). If the location information is also considered, the clusters can be described and delimited in more detail. Great progress has been made in this area in recent years through deep learning, a subfield of machine learning in which the algorithm builds up new layers of meaning in several steps, allowing it to find and recognise similar structures in the data.
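A hand-rolled sketch of the idea, again reducing each photo to its blue-pixel share (all numbers invented): a tiny one-dimensional k-means with k = 2 finds the two accumulations without ever seeing a label.

```python
# Unsupervised learning in miniature: cluster photos by blue-pixel share
# alone, with no labels given -- a 1-D k-means with two clusters.
blues = [0.82, 0.75, 0.78, 0.22, 0.30, 0.25]  # made-up blue shares
centres = [min(blues), max(blues)]             # initial guesses

for _ in range(10):  # a few refinement rounds suffice for this data
    groups = [[], []]
    for b in blues:
        nearest = min((0, 1), key=lambda i: abs(b - centres[i]))
        groups[nearest].append(b)
    # Move each centre to the mean of the values assigned to it.
    centres = [sum(g) / len(g) for g in groups]

print(sorted(centres))  # two cluster centres emerge, roughly 0.26 and 0.78
```

The two centres correspond to the “city” and “beach” accumulations – discovered purely from the structure of the data.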
Looking for patterns in piles of data.
Data mining considers data as a raw material that can be extracted and processed. For example, an existing unstructured data collection (such as a data lake) can be searched for patterns to determine which day of the week specific actions are most often performed – this example is very close to data analytics.
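The day-of-week example can be sketched with nothing but the Python standard library (the action timestamps are made up):

```python
# Data-mining-style sketch: scan a raw log of action dates (invented data)
# for the weekday on which the action is performed most often.
from collections import Counter
from datetime import date

actions = [date(2024, 5, 3), date(2024, 5, 10), date(2024, 5, 17),
           date(2024, 5, 6), date(2024, 5, 24)]  # unstructured action log

by_weekday = Counter(d.strftime("%A") for d in actions)
print(by_weekday.most_common(1))  # [('Friday', 4)]
```

Real data lakes differ mainly in scale, not in kind: the pattern search is the same counting-and-grouping idea applied to far messier raw material.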
More elaborate approaches try to derive non-existent or hard-to-measure data from other characteristics, and then check whether those proxies correlate well with the characteristic being sought. This is how Google used search queries to identify trends in the spread of flu.
An even more abstract example is data being generated from other data. Google Maps, for example, derives building data from satellite images and place information from Street View data. Combining building data with place data lets Google generate data about areas of interest.
AI as a service
Things are also getting exciting for non-tech gurus. Fortunately, no one today has to develop, train, test, and operate their own speech-recognition system just to build a simple Alexa skill.
Various providers, including the major and well-known technology companies, offer machine-learning-relevant services as a usable and sometimes fee-based software interface that people can use for their own projects. Amazon has services in its AWS offering, Microsoft provides the Azure Machine Learning Studio, and Google has the Cloud Machine Learning Engine, for example. Apple is also heavily involved in machine learning and publishes the Apple Machine Learning Journal, for example.
The big advantage is that people can now use previously unthinkable and incredibly complex functions in their own projects, just like a software library. As a rule, this results in low costs per function call (though the costs can, of course, multiply with the size of the project). Recorded voice files can be transcribed easily, and user-provided photos can be analysed, tagged, and categorised using one of these services. Without building your own image database and training a model yourself, you can find out, for example, whether a user has uploaded a holiday photo of the beach or whether the content is more relevant to the protection of minors (fun fact: algorithms are not very good at telling the curves of a human body from those of a sand dune).
The advantage – and at the same time a major disadvantage – is that the technology and know-how remain in the hands of the service provider. The more users take advantage of these services, the more the provider can refine their models, thereby increasing their technological lead.
Typically, these services contain little domain knowledge, that is, information about a specific application area. While some services may recognise prominent people in photos, a parquet seller cannot (yet) use such services to analyse images of different wooden floors. For such cases, however, there are other services that let you train your own model.
As discussed above under supervised learning, you supply your own reference data; the service provider then takes care of training the machine.
This finally puts us at a point where we no longer need a multimillion-dollar budget to integrate “smart” add-on features into applications, websites, and services. We have succeeded in distinguishing between the use and the development of intelligent machines. This abstraction layer extends the tool set available for developing exciting new apps – similar to the internet connection and the camera modules that have become standard in smartphones. Artificial intelligence has thus become more tangible and practical, and is now being transformed from a playground of innovative visions and demonstrations to applications that will accompany us in everyday life.
Why is this important to me?
To remain competitive in the future, businesses should think now about how developments in artificial intelligence will affect their own industry.
These easy-to-use tools and services are already being put to work in new products and services. Tasks that were previously carried out by users or employees are now being taken over by increasingly reliable, scalable, and automated systems. This also calls for product-design approaches that take AI-based data processing into account.
Those who have already started to train their models will be able to constantly refine and deepen this application-specific know-how. Users learn the behaviour of intelligent services via digital assistants and adjust their expectations. Tasks that previously had to be done manually, such as photo post-processing or composing a holiday movie, are now fully automated, raising the bar when it comes to creating stunning products and presentations.
This self-reinforcing effect constantly improves the learning models, until non-intelligent services can no longer realistically compete. It will be difficult to catch up with the services that are already securing these advantages.