AI for dummies / Introduction to the world of Artificial Intelligence
Artificial Intelligence (AI) has been booming over the last decade. The possibilities seem limitless, the potential is huge, the gains are substantial, and the current investments are therefore enormous. Gartner estimates that the market value of AI will reach 2.8 trillion euros this year. Competition in AI is fierce, since development is still quite expensive and the stakes are high. From a helicopter view, the field is still in its early phase. However, applied AI is working its way up to become a major part of modern-day life. If it isn’t already. Companies like Amazon, Apple, Facebook, Google, IBM, Intel, LinkedIn, Microsoft, NASA, Netflix, Tesla and Uber are all utilizing AI to become market leaders in their fields.
AI is the ability of a machine to perform cognitive functions as humans do, such as perceiving, learning, reasoning and solving problems. In other words, AI is the science of training machines to imitate or reproduce human tasks. AI provides cutting-edge technology to deal with complex data that is nearly impossible for a human being to handle. AI automates repetitive jobs, allowing workers to focus on high-level, strategic, value-adding tasks. When AI is implemented at scale, it leads to cost reduction and revenue increase.
Three different types / phases of AI
- Narrow / weak AI: a machine can perform a specific task, within a pre-determined and pre-defined range, better than a human.
- General / strong AI: a machine can perform any intellectual task (without specific instructions) with the same accuracy level as a human would. The machine should be able to solve problems, draw conclusions under uncertainty, plan, learn and use knowledge in decision-making.
- Artificial superintelligence: a machine can beat humans in all aspects of a task, from creativity, to general wisdom, to problem-solving.
It is safe to say that ‘advanced narrow AI’ describes the current state of AI development. The main goal for the coming years is mastering general AI, where the focus lies on increasing the self-learning capability. There are concerns about artificial superintelligence, since it could mean that super-intelligent machines take over the world by eliminating humankind. The more optimistic theory is that AI and humans will go hand in hand, optimizing current processes.
Artificial Intelligence, Machine Learning and Deep Learning are related terms. Machine Learning is a specific area of Artificial Intelligence, and Deep Learning is in turn a subset of Machine Learning based on multi-layered neural networks. Machine Learning itself is commonly divided into three basic paradigms: supervised learning, unsupervised learning and reinforcement learning.
Supervised learning shows software labeled example data to teach a computer what to do. Unsupervised learning works without annotated examples, learning purely from the structure of the data or the world; this comes naturally to humans, but is not generally practical for machines (yet). Reinforcement learning lets software experiment with different actions to figure out how to maximize a cumulative reward.
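The reinforcement learning idea of experimenting to maximize a cumulative reward can be sketched in a few lines of plain Python. The example below is an epsilon-greedy agent on a "multi-armed bandit": it mostly picks the action it currently believes is best, but sometimes explores a random one. The arm values and parameters are illustrative assumptions, not taken from this article.

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=42):
    """Epsilon-greedy agent: with probability epsilon it experiments
    (explores a random arm), otherwise it exploits the arm with the
    highest estimated reward so far."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)        # how often each arm was pulled
    estimates = [0.0] * len(true_means)   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))    # explore
        else:
            arm = estimates.index(max(estimates))   # exploit
        reward = rng.gauss(true_means[arm], 1.0)    # noisy reward signal
        counts[arm] += 1
        # incremental update of the running mean for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = run_bandit([0.2, 0.5, 0.9])
print("best arm found:", estimates.index(max(estimates)))
```

Nobody tells the agent which arm pays best; it discovers this purely from the rewards it collects, which is the essence of the paradigm.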
Examples of current AI applications:
- Self-driving cars: AI takes full control over operating the car.
- Language and speech: AI translates any language into another in any possible format (text, video subtitles, smart assistants). Helpdesks are increasingly becoming voice-controlled interfaces.
- Recognizing patterns in data sets: AI quickly understands a customer’s priorities and predicts needs, enabling quick decision-making.
- Robotics and Internet of Things (IoT): replacing employees in complex production processes, microsurgery or work in dangerous places. AI also powers smart sensors connected to the IoT, for example in a toothbrush (teaching how to brush better) or a thermostat (optimizing energy costs based on personal behavior).
The current main use of Artificial Intelligence is reducing or avoiding simple repetitive tasks. Taking AI to the next level, an AI machine no longer needs to be explicitly programmed by people. The programmers provide some examples, and the computer learns what to do from those samples: hardcoded programming is no longer needed, as the machine steers itself through its self-learning ability.
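The contrast between hardcoding a rule and learning it from examples can be shown with a toy sketch: instead of a programmer writing "flag any reading above 5", the threshold below is derived from labeled samples as the midpoint between the two class averages. The sensor-reading data and the `learn_threshold` helper are hypothetical, purely for illustration.

```python
def learn_threshold(samples):
    """Learn a 1-D decision boundary from (value, label) pairs as the
    midpoint between the average value of class 0 and class 1."""
    lo = [x for x, label in samples if label == 0]
    hi = [x for x, label in samples if label == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

# Hypothetical labeled examples: (sensor reading, is_faulty)
examples = [(2.0, 0), (3.1, 0), (2.6, 0), (7.8, 1), (8.4, 1), (9.0, 1)]
threshold = learn_threshold(examples)

def predict(x):
    """Classify a new reading using the learned boundary."""
    return 1 if x > threshold else 0

print(round(threshold, 2))  # -> 5.48
```

Give it different examples and the rule changes automatically; no one ever edits the decision logic by hand.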
In 1955, Herbert Simon and Allen Newell developed the Logic Theorist, the first artificial intelligence program. However, broad support for, or even belief in, AI was lacking at the time. AI’s current momentum can be attributed to three factors that have changed since then:
- Hardware: CPU power has increased heavily over the last decade, so a simple deep-learning model can be trained on a laptop. More complex models, such as those for vision and object recognition, need more computing power.
- Data: storing huge amounts of data in a warehouse has become easier in recent years. Data powers AI. Data is a unique competitive advantage that no firm should neglect, and AI is able to extract the best answers from your data.
- Algorithms: Artificial Intelligence uses progressive learning algorithms that let the data do the programming. Thanks to these advances, the computer can teach itself how to perform different tasks.
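"Letting the data do the programming" can be illustrated with plain gradient descent. The sketch below is never told the rule y = 3x + 1; starting from an arbitrary model, it repeatedly nudges its two parameters to shrink the prediction error on the samples until it has recovered the rule. This is a minimal pure-Python sketch of the general technique, not any specific library's implementation.

```python
# Samples generated by the (hidden, from the program's point of view)
# rule y = 3x + 1.
data = [(x, 3 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0          # start with an arbitrary model y = w*x + b
learning_rate = 0.01
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y           # prediction error on one sample
        grad_w += 2 * error * x / len(data)
        grad_b += 2 * error / len(data)
    w -= learning_rate * grad_w           # nudge parameters downhill
    b -= learning_rate * grad_b

print(round(w, 2), round(b, 2))  # -> 3.0 1.0
```

The "program" that maps x to y is entirely the product of the data and the update rule; change the samples and a different model is learned.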
Python is the most widely used programming language for AI purposes, as it is easy to understand and write, has many libraries and has a significant user community. Python supports advanced machine learning and deep learning through popular frameworks such as TensorFlow, PyTorch and Keras.
Advantages of AI:
- Access: AI robots can be located where it is not safe for humans (from laboratories to mines, from deep ocean caves to other planets).
- Accuracy and precision: difficult decisions can be made on data-driven arguments, unhampered by lapses of human attention, fear, distraction or emotional responses.
- Recurring manual tasks: AI can take over low-level repetitive tasks, which enables employees to do more strategic, high-value work.
- Convenience: it becomes easier to get things done; limited effort, high reward.
Drawbacks of AI:
- Lack of transparency: AI makes decisions based on a self-learning algorithm, which is hard to understand and challenge, especially once the experts leave the company.
- Ethics: if biased people train an AI machine, the system is polluted with discriminatory data and the results will therefore be biased (garbage in, garbage out). This gets worse and worse over time (a self-fulfilling prophecy). Prejudices are a blind spot.
- Liability: it has become harder to point at the responsible party when a self-learning AI machine fails. In the end, it sets its own course.
- Privacy issues: there is a trade-off between guaranteeing privacy and collecting more data (which leads to better results).
As stated, the growth of the AI field has been exponential over the last decade. The question is not if more progress will be made, but how much. Is it possible to create a machine with consciousness, incredible creativity and something approaching emotions? Time will tell, but for now, let’s focus on what is already possible with AI that makes our lives easier, and how it can be applied to your business. Reading newspapers, blogs and stories about innovation should give us enough inspiration to get started with our own AI journey.
Anaplan Consultant at Sonum International
Jort is a person who knows how to deal with difficult problems and loves challenges. He truly enjoys designing, managing and implementing complex models in order to increase efficiency and allow employees to spend time on challenging tasks instead of time-consuming repetitive tasks. He expresses all his creativity and enthusiasm in the tool Anaplan and is an expert in FP&A, S&OP and statistical forecasting / AI solutions. Jort is quite stress-resistant, well versed in statistics and has the great analytical ability needed to give customers the service they deserve.