Global climate change is the largest challenge of our generation. Increased carbon emissions over the last century have caused extreme weather events at rates never seen before. In Australia, these have manifested as intense fires and floods that are making areas of the country increasingly less habitable. While businesses have been quick to migrate their applications to cloud providers, optimisation has focused purely on finances (cost of goods sold). Carbon emissions are a new dimension in cloud optimisation. Cloud providers with datacentres distributed across the globe are in a unique position to take advantage of geographical differences in energy production and enable greener solutions to be designed. With tools like the Carbon Aware SDK and the Carbon-Optimised Workload Manager, solution architects can shift their applications’ workloads geographically and temporally to reduce the carbon their applications emit.
All programs need to run on a computer somewhere. This is typically a server in a datacentre, chosen based on speed, cost, and regulatory requirements. My solution allows companies to use global emissions data to minimise their program’s carbon emissions by shifting it to a region where, or a time when, electricity emits less carbon. For example, energy demand is lower during off-peak hours, so more of the supply can be drawn from renewable sources such as wind rather than baseload sources such as coal and gas, resulting in fewer carbon emissions.
The job routing library uses the Carbon Aware SDK to determine the Azure cloud region with the lowest carbon intensity (grams of CO2 emitted per kilowatt-hour). The dispatcher uses this information to send the job to a different region for processing. Locally, the CLI can produce synthetic job data and process it with the routing library to estimate carbon emission savings. This can be run against historical data to quantify potential savings and evaluate the library’s efficacy.
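The routing step can be pictured as a single call to the Carbon Aware SDK’s WebAPI. The sketch below is illustrative only: the host and port, the candidate region list, and the surrounding function are assumptions, not the library’s actual code.

```python
# Minimal sketch: ask a locally running Carbon Aware SDK WebAPI for the Azure
# region with the lowest carbon intensity over a time window. The endpoint
# (/emissions/bylocations/best) is part of the SDK's WebAPI; everything else
# here (host, regions, function name) is illustrative.
import requests

CARBON_AWARE_API = "http://localhost:8080"  # assumed local WebAPI instance
CANDIDATE_REGIONS = ["australiaeast", "westus", "northeurope", "japaneast"]

def best_region(start: str, end: str) -> str:
    """Return the candidate region with the lowest g CO2eq/kWh for the given
    UTC window, e.g. start="2023-05-01T00:00:00Z"."""
    resp = requests.get(
        f"{CARBON_AWARE_API}/emissions/bylocations/best",
        params=[("location", r) for r in CANDIDATE_REGIONS]
        + [("time", start), ("toTime", end)],
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Depending on SDK version the endpoint returns a single object or a list.
    best = data[0] if isinstance(data, list) else data
    return best["location"]
```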
The efficacy of the router was evaluated by applying the algorithm to 10,000 generated jobs of constant duration, each randomly assigned a start time and geographic region. The baseline was determined to be 5,884,285 g CO2/kWh, and the geo-shifted workload was 3,726,081 g CO2/kWh: a 36.68% reduction from geo-shifting alone.
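The evaluation follows the pattern below. This is a simplified reconstruction with made-up intensity numbers, not the actual data behind the 36.68% figure; a real run would draw historical intensities from the Carbon Aware SDK.

```python
# Sketch of the evaluation: generate synthetic jobs with random start times and
# home regions, then compare emissions for running at home versus in the
# lowest-intensity region at that time. The intensity table is illustrative.
import random

REGIONS = ["australiaeast", "westus", "northeurope", "japaneast"]
BASE_INTENSITY = {"australiaeast": 600, "westus": 350, "northeurope": 250, "japaneast": 500}

def intensity(region: str, hour: int) -> float:
    """Placeholder carbon intensity in g CO2eq/kWh with a crude diurnal dip."""
    return BASE_INTENSITY[region] * (0.8 if hour < 6 or hour >= 22 else 1.0)

def simulate(n_jobs: int = 10_000, energy_kwh: float = 1.0) -> float:
    """Return the percentage reduction achieved by geo-shifting every job."""
    random.seed(42)
    baseline = shifted = 0.0
    for _ in range(n_jobs):
        region = random.choice(REGIONS)   # the job's home region
        hour = random.randrange(24)       # the job's start hour
        baseline += energy_kwh * intensity(region, hour)
        shifted += energy_kwh * min(intensity(r, hour) for r in REGIONS)
    return 100 * (baseline - shifted) / baseline

print(f"Geo-shifting reduction: {simulate():.2f}%")
```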
Time-shifting (deferring workloads using a NotLaterThan concept) was not evaluated but is hypothesised to further increase the reduction. Jobs of varied duration were not considered either; the additional temporal element would require time-shifting functionality to produce optimal results.
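One way the NotLaterThan concept could be layered on top of geo-shifting is sketched below. The function and the hourly candidate grid are assumptions for illustration; the forecast lookup stands in for the Carbon Aware SDK’s forecast data.

```python
# Sketch of combined geo- and time-shifting under a NotLaterThan deadline:
# among all (region, start hour) pairs that still finish before the deadline,
# pick the one with the lowest forecast carbon intensity.
from datetime import datetime, timedelta
from typing import Callable

def plan_job(
    earliest: datetime,
    not_later_than: datetime,
    duration: timedelta,
    regions: list[str],
    forecast_intensity: Callable[[str, datetime], float],  # g CO2eq/kWh
) -> tuple[str, datetime]:
    """Return the (region, start time) minimising forecast intensity, subject
    to start + duration <= not_later_than."""
    best = None
    start = earliest
    while start + duration <= not_later_than:
        for region in regions:
            rating = forecast_intensity(region, start)
            if best is None or rating < best[0]:
                best = (rating, region, start)
        start += timedelta(hours=1)  # evaluate hourly candidate start times
    if best is None:
        raise ValueError("No start time satisfies the NotLaterThan deadline")
    return best[1], best[2]
```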
The workload manager model can be applied broadly to any solution with discrete units of computation, with some considerations. The first is additional data transfer time when the volume of data needed for compute is high: a remote server may spend longer reading the data than a local one would and thus use more energy, negating the benefit of geo-shifting. Furthermore, the model assumes that compute can be switched off when not in use; an orchestrator could manage a service’s global compute to ensure servers are not sitting idle.
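A rough break-even check captures the data transfer caveat. The transfer energy constant below is an assumed figure, not a measurement.

```python
# Illustrative break-even check: only geo-shift when the emissions saved by
# running in a cleaner region outweigh the emissions of moving the input data.
TRANSFER_KWH_PER_GB = 0.01  # assumed network energy cost per GB moved

def should_geo_shift(
    compute_kwh: float,
    data_gb: float,
    local_intensity: float,   # g CO2eq/kWh at the home region
    remote_intensity: float,  # g CO2eq/kWh at the candidate region
) -> bool:
    local_emissions = compute_kwh * local_intensity
    remote_emissions = (
        compute_kwh * remote_intensity
        + data_gb * TRANSFER_KWH_PER_GB * local_intensity  # egress from home
    )
    return remote_emissions < local_emissions
```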
Its generic nature allows it to be adopted by solution architects globally and applied to many projects. This would amplify the realised reductions beyond what any single company could achieve.
The solution includes almost all the pieces required for the dispatcher to geo-shift workloads: the job routing library built on the Carbon Aware SDK, the dispatcher that forwards jobs to the selected region, and the CLI for generating synthetic job data and estimating savings. For production readiness, caching of API responses would still be needed (sketched below).
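A simple TTL cache around the region lookup would be enough to avoid calling the Carbon Aware SDK WebAPI on every job. The wrapper below is a sketch; the TTL and the shape of the wrapped function are assumptions.

```python
# Sketch of API response caching: wrap a (start, end) -> region lookup with a
# time-to-live cache so repeated dispatches reuse a recent answer.
import time
from typing import Callable

def cached(fn: Callable[[str, str], str], ttl_seconds: float = 300) -> Callable[[str, str], str]:
    store: dict[tuple[str, str], tuple[float, str]] = {}

    def wrapper(start: str, end: str) -> str:
        now = time.monotonic()
        hit = store.get((start, end))
        if hit and now - hit[0] < ttl_seconds:
            return hit[1]  # fresh cached answer
        result = fn(start, end)
        store[(start, end)] = (now, result)
        return result

    return wrapper

# Usage, assuming the best_region sketch from earlier:
# best_region = cached(best_region, ttl_seconds=300)
```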
I would like to see my solution adopted by companies around the globe to make their applications and services more efficient. In my career, I have seen that optimisation far too often comes after functionality, and that the only dimension considered is dollars. Making information about carbon emissions more readily available raises awareness of the real-world impact that companies’ services have. These solutions can then be integrated to help combat the problem and give companies a positive message on their journey to zero emissions.
https://github.com/rifuller/carbon-efficient-workloadmanager