Predictive edge cloud autoscaling for energy savings

We are working towards the MVP of an Edge cloud autoscaler that saves energy for industrial customers who have invested in Edge cloud solutions for their businesses.

 

  • Idea

Appropriate AI/ML methods can be used to predict the upcoming client workload and dynamically calculate the parameters of scaling actions. In comparison with the stock autoscaling approach, we have demonstrated that our forecasting models derive scale-down parameters that outperform the Kubernetes and Open Source MANO autoscaling policies, which use fixed steps in their SCALE_OUT actions. This leads to energy savings in both Core and Edge clouds.
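The difference between the two policies can be sketched as follows. This is an illustrative simplification, not the production autoscaler: the function names, the per-replica capacity figure and the step size are assumptions made for the example.

```python
import math

def fixed_step_replicas(current_replicas: int, overloaded: bool, step: int = 1) -> int:
    """Stock-style policy: move by a fixed step whenever a threshold fires."""
    if overloaded:
        return current_replicas + step
    return max(1, current_replicas - step)

def predictive_replicas(forecast_load: float, capacity_per_replica: float) -> int:
    """Predictive policy: size the deployment to the forecast workload directly."""
    return max(1, math.ceil(forecast_load / capacity_per_replica))

# Example: load is forecast to drop to 120 units; each replica handles 100.
# The predictive policy jumps straight to the right size in one action,
# while the fixed-step policy inches down one replica per evaluation cycle.
print(predictive_replicas(120.0, 100.0))          # scales down to 2 replicas
print(fixed_step_replicas(6, overloaded=False))   # fixed step only reaches 5
```

Because the predictive policy computes the target size from the forecast, over-provisioned replicas are released in a single scale-down action instead of several threshold-triggered steps.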


  • Success story

We have extended the Open Source MANO (OSM) implementation with our Qiqbus-based autoscaler, which leverages state-of-the-art forecasting algorithms (including ARIMA, ARIMA+, Prophet, Holt-Winters, RNN, CNN and LSTM, as well as our hybrid implementations such as CNN-LSTM and RNN-LSTM).
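To give a flavour of the lighter end of that algorithm family, below is a minimal sketch of Holt's linear method (double exponential smoothing), the trend-only core of Holt-Winters. The smoothing constants and the toy series are assumptions for illustration; the autoscaler's actual models and tuning are not shown here.

```python
def holt_forecast(series, alpha=0.5, beta=0.5, steps=1):
    """Return `steps`-ahead forecasts from an additive-trend Holt model.

    alpha smooths the level, beta smooths the trend; both in (0, 1).
    """
    # Initialise level and trend from the first two observations.
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (prev_level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    # h-step-ahead forecast extrapolates the last level along the trend.
    return [level + (h + 1) * trend for h in range(steps)]

# A steadily declining CPU load: the model extrapolates the downward trend,
# letting an autoscaler plan a scale-down before utilisation actually falls.
print(holt_forecast([100, 90, 80, 70, 60], steps=2))  # → [50.0, 40.0]
```

The seasonal term of full Holt-Winters adds a third smoothing equation on top of this, which is what makes the method suitable for workloads with daily or weekly periodicity.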

Our predictive autoscaler has been validated through a series of experiments in the 5TONIC 5G laboratory (https://www.5tonic.org/), where we demonstrated our machine learning approach for autoscaling OSM-managed WebRTC server instances. WebRTC sessions were launched to create a CPU- and memory-intensive VNF environment.

The diagram below depicts how our solution outperformed the static autoscaling approach through its ability to predict the load decrease in an over-provisioning scenario, here using the lightweight Holt-Winters prediction algorithm. In the experiment that produced these results, the NFVO static autoscaler occupied 4880 CPU-seconds during the over-provisioning phase, while our predictive autoscaler occupied only 2550 CPU-seconds.

This corresponds to an estimated energy saving (based on the CPU-second metric) of approximately 48% for the Edge cloud provider hosting the WebRTC video streaming 5G service.
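The saving figure follows directly from the two CPU-second measurements reported above:

```python
# CPU-seconds occupied during the over-provisioning phase of the experiment.
static_cpu_seconds = 4880      # NFVO static autoscaler
predictive_cpu_seconds = 2550  # predictive autoscaler

# Relative saving: 1 - 2550/4880 ≈ 0.477, i.e. roughly 48%.
saving = 1 - predictive_cpu_seconds / static_cpu_seconds
print(f"{saving:.0%}")  # → 48%
```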


Our autoscaler can operate with both VM-oriented and container-oriented VIMs; specifically, we have validated our approach by implementing an in-house Kubernetes agent within our private Kubernetes deployment (please email info@lamdanetworks.io for more information). Furthermore, we have employed a layered approach in the implementation of our autoscaler to avoid extensive changes in cases where the VIM abstraction leaks the underlying implementation of VDUs.
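The layered design can be pictured as a common autoscaler core talking to a thin per-VIM backend, so that supporting a new VIM only means adding a backend. The sketch below is purely illustrative: the class and method names are hypothetical, and real backends would call the respective VIM APIs rather than return strings.

```python
from abc import ABC, abstractmethod

class VduBackend(ABC):
    """Abstracts how VDU instances are scaled on a particular VIM."""
    @abstractmethod
    def scale_to(self, instances: int) -> str: ...

class OpenStackBackend(VduBackend):
    """VM-oriented VIM: VDUs map to virtual machines."""
    def scale_to(self, instances: int) -> str:
        return f"openstack: set VDU count to {instances}"

class KubernetesBackend(VduBackend):
    """Container-oriented VIM: VDUs map to deployment replicas."""
    def scale_to(self, instances: int) -> str:
        return f"k8s: patch deployment replicas to {instances}"

class PredictiveAutoscaler:
    """Core logic is backend-agnostic; only the backend knows the VIM."""
    def __init__(self, backend: VduBackend):
        self.backend = backend

    def apply_forecast(self, predicted_instances: int) -> str:
        # A forecast-derived target is applied identically on every VIM.
        return self.backend.scale_to(predicted_instances)

print(PredictiveAutoscaler(KubernetesBackend()).apply_forecast(2))
```

With this separation, a leaky VIM abstraction only affects the backend layer; the forecasting and decision logic above it stays untouched.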

Overall, we have validated our predictive autoscaling approach in both VM-based and container-based environments, with strong energy savings achieved through data-driven, forecast-based scale-down actions in the Edge cloud.

  

Lamda Networks participates in the European Telecommunications Standards Institute (ETSI, https://www.etsi.org/) and focuses on contributing predictive-analytics-based autoscaling features to OSM (https://osm.etsi.org/).