by Miha Grabner, Data Scientist at Milan Vidmar Electric Power Research Institute (Slovenia)
Artificial Intelligence (AI) is becoming more and more popular, and it has also made its way into the electrical energy industry. Nevertheless, most power engineers never learned these methods at university. Therefore, I feel obliged to explain the basic concepts and buzzwords to the power engineering community.
First of all, AI became so popular due to the huge effectiveness of deep learning, especially in the fields of computer vision, natural language processing, and speech recognition.
Interestingly, models that work very well in computer vision or natural language processing can, with slight modifications, be used to model time series data. Since in the energy industry we deal with time series data most of the time, new advances in AI also bring new approaches to the energy industry.
Let’s explain the most common buzzwords…
What is Artificial Intelligence, Machine Learning and Deep Learning?
There are many definitions of AI; in general, AI is the broadest of the three fields. Wikipedia defines AI as the field of study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.
The methods we usually use to solve these problems are called machine learning. In fact, when you study this subject in textbooks, you never study AI as such, but machine learning. Machine learning is a field of study that gives computers the ability to learn without being explicitly programmed (Arthur Samuel, 1959). There is a funny phrase going around: AI is written in PowerPoint, machine learning is written in Python.
Another buzzword is deep learning. Deep learning is a sub-field of machine learning that, due to its huge success, has become a field of its own. The figure below shows these relations very clearly.
Fig. 1: AI framework
Why did AI become so popular in recent years?
The major shift happened in 2012 at the ILSVRC (also called ImageNet) competition, where a deep convolutional neural network (called AlexNet) achieved a great improvement over the previous state-of-the-art approaches. In the following years, these deep learning models became even better.
To summarize the words of Prof. Andrew Ng from Stanford University, the major shift in deep learning happened due to:
i. Big Data - Figure 2 shows the performance of learning algorithms against the amount of data. The performance of traditional learning algorithms such as support vector machines or logistic regression improves with the amount of data until a certain point, where it starts to saturate. Deep learning is different: performance keeps increasing with the amount of data, as we can train deeper and deeper neural networks and capture highly complex, non-linear dependencies.
Fig. 2: Deep Learning performance
ii. Computational power - Modern deep neural networks are trained on GPUs. Another factor is accessibility: high computational power has become accessible to everyone through cloud providers, the two most popular being Amazon and Google.
iii. New algorithms - Better model architectures, activation functions, weight initializations, optimizers, etc. More efficient algorithms also make it possible to train the same models faster, which allows more approaches to be tested and therefore accelerates development.
How is machine learning divided?
Supervised learning – we need labeled data.
regression (we predict a continuous variable). The most common use of regression in the energy industry is load forecasting.
classification (we predict a discrete variable). Classification is used less often in the energy industry; it can be used for outlier detection, predicting element failures, etc.
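To make the regression idea concrete, here is a minimal supervised-learning sketch: fitting a straight line by ordinary least squares to predict load from temperature. The temperatures, loads, and the forecast point are invented purely for illustration; real load forecasting uses many more features and far more sophisticated models.

```python
# Minimal supervised-learning sketch: ordinary least squares fit of
# load (MW) against temperature (°C). All numbers are made up.

def fit_line(x, y):
    """Closed-form least squares for y = a*x + b."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical hourly temperatures and measured loads
temperature = [-5, 0, 5, 10, 15, 20]
load = [120, 110, 101, 90, 81, 70]

a, b = fit_line(temperature, load)
predicted = a * 12 + b  # forecast the load at 12 °C
```

The labeled data here are the (temperature, load) pairs: the model learns the mapping from input to a known target, which is exactly what "supervised" means.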
Unsupervised learning – we do not need labeled data.
Clustering - is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar to each other than to those in other groups (clusters). Clustering is used for consumer profiling based on smart meter data (read more here).
Dimensionality reduction - can be used in the energy industry as a preprocessing step before clustering, for interpreting atypical conditions in the transmission system, etc.
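The consumer-profiling idea can be sketched with a tiny k-means loop: group consumers by their (daytime, night-time) consumption without any labels. The consumption figures and the two initial centers are invented for illustration; production profiling would use full daily load curves and a library implementation.

```python
# Toy unsupervised-learning sketch: 2-D k-means grouping consumers
# by (day kWh, night kWh). Data and initial centers are made up.

def kmeans(points, centers, iterations=10):
    """Very small 2-D k-means; returns final centers and labels."""
    for _ in range(iterations):
        # assignment step: each point goes to its nearest center
        labels = []
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            labels.append(d.index(min(d)))
        # update step: move each center to the mean of its members
        for k in range(len(centers)):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                centers[k] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return centers, labels

profiles = [(5.1, 1.0), (4.8, 1.2), (5.5, 0.9),   # daytime-heavy consumers
            (1.2, 4.9), (0.8, 5.3), (1.1, 4.6)]   # night-heavy consumers
centers, labels = kmeans(profiles, centers=[(5.0, 1.0), (1.0, 5.0)])
```

No labels were given, yet the algorithm recovers two groups on its own; that is the defining property of unsupervised learning.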
Another type is reinforcement learning, which is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward.
Reinforcement learning (RL) is also an interesting research field in the energy industry. There are various RL-based approaches for optimal EV charging, flexibility management, and power system operation. I suggest you check out the challenge led by RTE (the French TSO).
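To give a feel for RL, here is a toy tabular Q-learning sketch loosely inspired by EV charging: at each hour the agent chooses "charge" or "wait", and charging earns (ENERGY_VALUE - price), so it only pays off in cheap hours. The prices, the reward model, and the horizon are all invented; real EV-charging RL works with battery constraints and continuous state spaces.

```python
import random

# Toy tabular Q-learning for a 4-hour charge/wait decision.
# Prices and the reward model are invented for illustration.
random.seed(0)
prices = [3, 8, 2, 9]          # hypothetical hourly electricity prices
ENERGY_VALUE = 5               # assumed value of one charged unit
ACTIONS = [0, 1]               # 0 = wait, 1 = charge
alpha, gamma, eps = 0.5, 0.9, 0.2

Q = {(h, a): 0.0 for h in range(len(prices)) for a in ACTIONS}

for episode in range(500):
    for h in range(len(prices)):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(h, x)])
        reward = (ENERGY_VALUE - prices[h]) if a == 1 else 0.0
        # value of the next hour (0 past the end of the horizon)
        next_v = max(Q[(h + 1, x)] for x in ACTIONS) if h + 1 < len(prices) else 0.0
        Q[(h, a)] += alpha * (reward + gamma * next_v - Q[(h, a)])

# greedy policy: the agent should learn to charge only in cheap hours
policy = [max(ACTIONS, key=lambda x: Q[(h, x)]) for h in range(len(prices))]
```

The agent is never told which hours are cheap; it discovers the policy purely from the reward signal, which is what distinguishes RL from supervised learning.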
What about AI in Smart Grids?
AI in the energy industry has historically been dominated by the load and renewable energy forecasting community (as stated here). Due to smart meter rollouts all over the world, smart meter data analytics has also become a popular field for applying machine learning methods. Furthermore, the AI lab of the French TSO (RTE) has done a lot of work on RL for power system operation in recent years, although this is only just becoming an active research area and the methods have not yet been tested in real environments.
In general, as I see it at conferences, AI in Smart Grids is divided into applications for operation & planning and for asset management.
For a great overview of the field, I encourage readers to check out a paper by well-known authors in this field, which provides a recent survey of big data analytics applications and the associated implementation issues.
If you want to connect with other experts working in this field, join my LinkedIn group AI in Smart Grids where I post about this topic. For blogs, tutorials and resources about this topic visit my website.