Cawen & software eco-design
This page is an adaptation of an article posted on Green Code Lab.
We invite everyone who would like to take part in building a more responsible I.T. industry to visit and contribute to sites like Green Code Lab.
Environmental impact of a programming language
The lifecycle of a computer application (expression of needs, specification, development/qualification cycle, production, end of use) is marked by decisions that significantly affect its overall environmental footprint.
Upstream of the development phase, is the choice of a programming language one of those decisions? In particular, does this choice have a quantifiable impact on a software's energy consumption?
We shall answer this last question in the limited context of a case study.
In the following pages, we compare the performance of the C++ and Cawen versions of R. Hundt's benchmark, but this time in terms of energy consumption…
Performance and energy consumption
Our test environment is as follows:
gcc 4.5.3 / Cygwin 6.0 / Intel Pentium Dual-Core T4200 / 64-bit / 2 GHz / 4 GB RAM.
This is the configuration that we previously called cygwin.
The consumption curves obtained are displayed here:
And the winners are:
R. Cox, A. Hay, R. Hundt 2 and then R. Hundt 1.
The energy consumption appears perfectly correlated with the execution time.
Between the most frugal and the greediest implementation, energy consumption varies by a factor of 21.
We can draw a somewhat expected conclusion:
performance can vary widely between two implementations of the same processing developed in the same language.
Here, the performance gains are mainly obtained by replacing some lists and hash tables with arrays.
What about changing the input data?
Surprisingly enough, Robert Hundt's executable performs the same computation 15,000 times on a single arbitrary graph.
Anthony Hay proposed instead to process populations of randomly generated graphs.
We generated a set of 100 random graphs with up to 100 vertices and 10,000 random graphs with up to 50,000 nodes.
The ranking is upended.
For graphs of up to 10,000 nodes, it becomes: R. Hundt 2, A. Hay, R. Cox, R. Hundt 1.
This time, the energy consumption varies by a factor of 4.1.
For graphs of up to 50,000 vertices: R. Hundt 2, R. Hundt 1, A. Hay, R. Cox.
Between the most frugal and the greediest implementation, power consumption varies by a factor of 2.3.
What gave the Cox and Hay versions a competitive advantage in the previous test is now a burden: for a large number of values, searching through an array is much slower than through a hash table. With these new inputs, the cost of inserting into and deleting from hash tables is more than offset by the speed-up obtained on the lookup primitives (R. Hundt 1 & 2).
It may be noted as well that the optimizations made to the R. Hundt version become ineffective on graphs of up to 50,000 vertices: on this sample, R. Hundt 1 and 2 are equivalent to the original version.
To reduce the power consumption of loop recognition in graphs (a niche market if ever there was one), a programmer should favor either the Hundt solution or the Hay and Cox solutions, depending on whether the input graphs resemble our random series.
An application's performance is closely linked to its use, and in particular to the volume and values of its input data. Optimizing an application requires knowing its production conditions.
What about changing the language?
For lack of time, we did not measure the power consumption of the Java, Scala and Go versions. Possibly a mission for Green Code Lab members?
We took all the C++ versions presented earlier and translated them into Cawen, the language that we are currently developing.
We tested the C++ and Cawen versions on 5 machines (5 gcc versions / 4 OSes).
Overall, memory consumption, execution time and source size are all significantly better with Cawen than with C++, and the gains keep growing with more recent versions developed after this early snapshot was taken.
Precompilation time, currently very long, is being reduced to an acceptable level.
We subsequently developed optimized versions of the original Cawen code. Performance gains were obtained by various methods adapted to the specific profile of each program.
Compared with the tests we present on the site, one difference should be noted: for the power-consumption measurements we decided, to a very limited extent, to fine-tune the Cawen compiler parameters.
Once the code is optimized, it is possible to play with compiler options to obtain even shorter execution times and, to some extent, lower energy consumption.
- General optimization parameters
Robert Hundt compiles his version with gcc optimization level -O2, not -O3.
This is actually a good choice on his test machine, and for example on our freebsd server (12.4 s against 13.4 s), but it is detrimental on our cygwin machine… We chose -O3 for all the tests presented here.
- Choice between gcc and g++
A peculiarity of the code generated by the Cawen precompiler is that it is compatible with both C99 and C++. Switching from one compiler to the other can pay off: for the A. Hay version with 10,000 random graphs, execution takes 4.9 s with g++ and only 2.8 s with gcc.
- Unrolling loops
In our first series of tests we noted that the A. Hay and R. Cox algorithms spent most of their time in a loop searching for values in an array. We implemented a software response (templating the search function) that unrolled the loop and significantly improved the performance of both executables…
…before realizing that gcc offers the -funroll-loops option to perform exactly the same work at compile time, transparently to the developer. For instance, for the R. Cox / 10,000 random graphs test, we get 13.2 s without -funroll-loops and 5 s with it.
Conversely, our optimized version of the code (see the function govel_contains2 in the file govel_typed.h) is penalized by this option: the two optimizations (the -funroll-loops flag and the manual unrolling) do not mix…
Up to you to find this anomaly in the following charts!
- Machine-specific settings
More often than not, the gains obtained through compilation parameters are specific to the platform (OS / processor) and not portable. We have not explored gcc's machine-specific options; they alone could provide material for more than one article…
But there is clearly still much room for acceleration: for instance, we have made use neither of SSE primitives nor of multi-core architecture capabilities. Would these accelerations translate into energy gains?
That remains to be seen.
What is the power consumption?
Here are the data measured for each version coded in C++, in Cawen (a literal translation of the C++ version) and in optimized Cawen (an optimized version of the first Cawen version).
These results can be analyzed in two ways. One can consider 4 independent programs coded in 2 different languages; for each of the 3 proposed uses (bench, 10,000 random, 50,000 random) we measured the following energy savings:
The other way around is to consider that the objective was to determine the least consuming executable for each type of use.
In this context, in ‘bench’ mode, the best Cawen program (Cawen RCOX) consumes 15% less than the best C++ program (cpp RCOX), and the best optimized Cawen program 30% less.
In ‘graphs up to 10,000 vertices’ mode, the best Cawen program consumes 11 times less energy than the best C++ program, and the best optimized Cawen program 17.7 times less.
In ‘graphs up to 50,000 vertices’ mode, the best Cawen program consumes 73 times less energy than the best C++ program, and the best optimized Cawen program 243 times less.
The relationship between program performance and power consumption can be complex. In this experimental setting, performance improvement and energy saving go hand in hand. Criteria for a successful optimization are:
- a good knowledge and simulation of the software's working conditions
- mastering the intricacies of the standard libraries (why do some arrays require resizing while others should be left free to grow and shrink on demand?)
- a complete understanding of their underlying algorithms (e.g. the choice between a hash table and an array)
- a thorough analysis of the program's general pattern of memory usage (why do allocators benefit some programs and penalize others?)
- taking advantage of the compiler options.
To answer the question raised at the beginning of this article: where it is not imposed by the functional context, the developers' profiles or the hype, the choice of a language does indeed have a crucial role to play in software eco-design.
This confirms what Facebook experienced on a large scale. The success story of the ‘HipHop for PHP’ project has become a classic argument in favor of IT eco-design: in 2010, the company cut its electricity bill in half by switching from PHP to C++.
What if they had used Cawen instead?