
What is AI? History, definitions and applications



Everyone is talking about artificial intelligence, or AI for short. But what is it all about? That's precisely what we'll be explaining today.

History

Artificial intelligence is playing an ever greater role in our lives, and the latest trend is AI chips and the smartphone applications that accompany them. But the technology began to be developed as early as the 1950s, with the Dartmouth Summer Research Project on Artificial Intelligence at Dartmouth College in the U.S. Its origins date back even further, to the work of Alan Turing (to whom we owe the famous Turing test), Allen Newell and Herbert A. Simon, but AI did not make it into the spotlight on the world stage until IBM's chess supercomputer Deep Blue, which won a game against then-reigning world chess champion Garry Kasparov in 1996 and went on to defeat him in a full match in 1997. AI algorithms have been used in data centers and on large computers for many years, but they have only recently arrived in the realm of consumer electronics.

Definition of artificial intelligence

The definition of artificial intelligence characterizes it as a branch of computer science that deals with automating intelligent behavior. Here’s the hard part: Since you cannot precisely define intelligence per se, artificial intelligence cannot be exactly defined either. Generally speaking, the term is used to describe systems whose objective is to use machines to emulate and simulate human intelligence and the corresponding behavior. This can be accomplished with simple algorithms and pre-defined patterns, but can become far more complex as well.

The brain is just another machine. / © ANDROIDPIT

Various kinds of AI

Symbolic or symbol-manipulating AI works with abstract symbols that are used to represent knowledge. It is the classic AI, which pursues the idea that human thinking can be reconstructed on a hierarchical, logical level. Information is processed from the top down, working with human-readable symbols, abstract connections and logical conclusions.

Neural AI became popular in computer science in the late 80s. Here, knowledge is not represented through symbols, but rather artificial neurons and their connections—sort of like a reconstructed brain. The gathered knowledge is broken down into small pieces—the neurons—and then connected and built into groups. This approach is known as the bottom-up method that works its way from below. Unlike symbolic AI, a neural system must be trained and stimulated so that the neural networks can gather experience and grow, therefore accumulating greater knowledge.

Neural networks are organized into layers that are connected to each other via simulated lines. The uppermost layer is the input layer, which works like a sensor: it accepts the information to be processed and passes it on below. It is followed by at least two (or, in large systems, more than twenty) hidden layers stacked beneath one another, which pass on and classify information via their connections. At the very bottom is the output layer, which generally has the fewest artificial neurons. It provides the calculated result in a machine-readable form, e.g. "picture of a dog during the day with a red car."
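
To make the layer idea concrete, here is a minimal sketch in Python (using numpy) of a forward pass through such a stack of layers. The layer sizes, random weights and the sigmoid activation are illustrative assumptions, not details of any particular product.

```python
import numpy as np

def sigmoid(x):
    # Squashes each neuron's value into the range 0..1
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, layers):
    """Pass an input vector through a stack of (weights, bias) layers."""
    activation = x
    for weights, bias in layers:
        # Each layer combines the previous layer's outputs and applies the activation
        activation = sigmoid(weights @ activation + bias)
    return activation

rng = np.random.default_rng(0)

# An input layer with 4 "sensor" values, two hidden layers, and a small output layer
layer_sizes = [4, 8, 8, 3]
layers = [
    (rng.normal(size=(n_out, n_in)), np.zeros(n_out))
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])
]

x = np.array([0.2, 0.7, 0.1, 0.9])  # the information fed to the input layer
print(forward(x, layers))           # three output neurons, e.g. class scores
```

An untrained network like this produces arbitrary outputs; the training processes described in the next section are what turn it into something useful.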

Methods and tools

There are various tools and methods for applying artificial intelligence to real-world scenarios, some of which can be used in parallel.

The foundation of all this is machine learning, which describes systems that build up knowledge from experience. This gives a system the ability to detect patterns and regularities with ever-increasing speed and accuracy. Both symbolic and neural AI are used in machine learning.

Deep learning is a subtype of machine learning that is becoming ever more important. Only neural AI, i.e. neural networks, is used in this case. Deep learning is the foundation for most current AI applications. Because the design of the neural networks can be expanded and made more complex and powerful simply by adding new layers, deep learning is easily scalable and adaptable to many applications.

There are three learning processes for training neural networks: supervised, unsupervised and reinforcement learning, which offer different ways of steering how an input becomes the desired output. In supervised learning, target values and parameters are specified from the outside; in unsupervised learning, the system itself tries to identify patterns in the input that have a recognizable structure and can be reproduced. In reinforcement learning, the machine also works independently, but is rewarded or punished depending on its success or failure.
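
As a sketch of the supervised case, the following toy example trains a single artificial neuron with gradient descent to reproduce target values that are specified from the outside. The data, the hidden rule and the learning rate are all made up for illustration.

```python
import numpy as np

# Toy supervised task: the "teacher" provides inputs and the desired outputs
inputs = np.array([[0.0], [1.0], [2.0], [3.0]])
targets = np.array([[1.0], [3.0], [5.0], [7.0]])  # hidden rule: y = 2x + 1

weight, bias = 0.0, 0.0
learning_rate = 0.05

for step in range(2000):
    predictions = inputs * weight + bias
    errors = predictions - targets  # how far each prediction is from its target
    # Gradient descent: nudge weight and bias to reduce the mean squared error
    weight -= learning_rate * float(np.mean(errors * inputs))
    bias -= learning_rate * float(np.mean(errors))

print(round(weight, 2), round(bias, 2))  # approaches 2.0 and 1.0
```

In unsupervised learning the targets would be missing and the system would have to find structure in the inputs on its own; in reinforcement learning the explicit targets are replaced by rewards and penalties.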

Applications

Artificial intelligence is already being used in many areas, though by no means all of them are visible at first glance. The following selection of scenarios that take advantage of this technology is therefore by no means a complete list.

Artificial intelligence's mechanisms are excellent for detecting, identifying and classifying objects and people in pictures and videos. Conceptually simple but computationally intensive pattern recognition is used for this. Once the image content has been decoded into a machine-readable form, photos and videos can easily be sorted into categories, searched and found. The same kind of recognition is also possible for audio data.
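
As a deliberately simplified sketch of how such classification can work, the example below sorts tiny made-up "photos" into categories by comparing their average pixel values against per-category averages. Real photo apps use deep neural networks trained on millions of labeled images; the image sizes and categories here are invented.

```python
import numpy as np

def features(image):
    # Crude, illustrative features: average brightness per color channel
    return image.reshape(-1, 3).mean(axis=0)

def train_centroids(labeled_images):
    """Average the features of all example images in each category."""
    return {
        label: np.mean([features(img) for img in images], axis=0)
        for label, images in labeled_images.items()
    }

def classify(image, centroids):
    # Assign the category whose average features are closest to this image
    return min(centroids, key=lambda label: np.linalg.norm(features(image) - centroids[label]))

rng = np.random.default_rng(1)
# Fake 8x8 RGB "photos": daytime shots are bright, night shots are dark
day = [rng.uniform(0.6, 1.0, (8, 8, 3)) for _ in range(20)]
night = [rng.uniform(0.0, 0.4, (8, 8, 3)) for _ in range(20)]

centroids = train_centroids({"day": day, "night": night})
print(classify(rng.uniform(0.6, 1.0, (8, 8, 3)), centroids))  # "day"
```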

Customer service is increasingly using chatbots. These text-based assistants recognize keywords in what the customer writes and respond accordingly. Depending on the use case, such an assistant can be more or less complex.
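
A minimal sketch of such keyword matching is shown below; the keywords and canned replies are invented examples, and a production chatbot would add synonym handling, context and escalation rules.

```python
# Minimal keyword-based chatbot sketch; keywords and replies are invented examples
REPLIES = {
    "delivery": "Your order is on its way and should arrive within 3 business days.",
    "return": "You can return items within 30 days using the label in your account.",
    "invoice": "Invoices can be downloaded from the 'Orders' section of your account.",
}

def respond(message: str) -> str:
    text = message.lower()
    for keyword, reply in REPLIES.items():
        if keyword in text:  # crude keyword spotting, no real language understanding
            return reply
    return "I didn't understand that. A human colleague will get back to you."

print(respond("Where is my delivery?"))
print(respond("How do I get an invoice?"))
```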

Opinion analysis is used not only for forecasting elections in politics, but also in marketing and many other areas. Opinion mining, also known as sentiment analysis, scours the internet for expressions of opinion and emotion, allowing for the creation of a largely anonymized opinion survey.
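
At its simplest, sentiment analysis can be sketched with a small word list that scores a text as positive or negative. The word lists below are illustrative only; real opinion-mining systems use far larger lexicons or trained models.

```python
# Tiny lexicon-based sentiment scoring sketch; the word lists are illustrative only
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "hate", "terrible", "broken", "disappointed"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

posts = [
    "I love the new camera, the photos look excellent",
    "The update is terrible and my battery life is bad",
]
print([sentiment(p) for p in posts])  # ['positive', 'negative']
```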

Search algorithms like Google's are naturally top secret. The way in which search results are calculated, measured and output is largely determined by mechanisms that work with machine learning.

Word processing, or more precisely checking the grammar and spelling of a text, is a classic application of symbolic AI that has been in use for a long time. Language is modeled as a complex network of rules and instructions against which the building blocks of a sentence are analyzed so that, under some circumstances, errors can be identified and corrected.
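
A few such rules can be sketched directly as patterns. The handful below (repeated words, "a" before a vowel, one common misspelling) are just examples of the symbolic approach; real checkers work with thousands of rules and full grammars.

```python
import re

# A few hand-written symbolic rules; real grammar checkers use thousands of them
RULES = [
    (re.compile(r"\b(\w+) \1\b", re.IGNORECASE), "repeated word"),
    (re.compile(r"\ba (?=[aeiou])", re.IGNORECASE), "'a' before a vowel sound, expected 'an'"),
    (re.compile(r"\bteh\b", re.IGNORECASE), "likely misspelling of 'the'"),
]

def check(sentence):
    # Report every rule violation together with the offending text
    findings = []
    for pattern, description in RULES:
        for match in pattern.finditer(sentence):
            findings.append((match.group(0), description))
    return findings

print(check("This is is a apple on teh table"))
```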

These abilities are also used in recognizing and synthesizing speech, which is currently the talk of the town thanks to assistant systems like Siri, Cortana, Alexa and Google Assistant.

AI is indispensable for systems such as Alexa. / © Amazon

On new smartphone chips like the Kirin 970, artificial intelligence gets its own dedicated component, the NPU or neural processing unit. The processor is making its debut in the Huawei Mate 10. You will learn more about it, and about the role the technology will play on the Huawei smartphone, once we have had a chance to experiment with it in the near future. Qualcomm has already been working on an NPU, the Zeroth processor, for two years, and the new Apple A11 chip contains a similar component.

Furthermore, there are numerous research projects on artificial intelligence, the most prominent of which may be IBM's Watson. The computer program made its first public appearance in 2011 on the quiz show Jeopardy!, where it faced off against two human contestants. Watson won, of course, and further publicity appearances followed. A Japanese insurance company has been using Watson since January to check insured customers, their history and medical data, and to evaluate injuries and illnesses. According to the company, Watson has replaced roughly 30 employees. Loss of jobs through automation is just one of the ethical and social issues surrounding AI that is the subject of corporate and academic research.

Outlook

AI isn’t something that just came out of nowhere recently, but it is coming close to a breakthrough in the world of consumer electronics, which is more than enough reason for everyone to keep up to date with this topic in the future.

Which aspects of artificial intelligence do you find exceptionally interesting? Let us know in the comments below!

Comments

Sorin (Dec 21, 2018):

Perhaps not quite "Terminator", but with unusual uses of this new technology, AI, you can get less desirable results. Still, the field of exploration is very vast, and AI is a real help.


Robert Dunn (Oct 17, 2017):

I believe that one day we will see a robot rebellion like in the Terminator series. It's only a matter of time unless safeguards are put into place. Other than that, AI is fascinating, especially how the neural network can essentially "learn" like a human brain.

