GeneratedVelocity is a platform designed to compare the performance of different libraries and frameworks. It provides a detailed report of each library's speed, precision, and other metrics, making it easier for developers to choose the best library for their project.
GeneratedVelocity was initiated to address the need for a reliable and unbiased way of comparing the performance of different libraries. With so many options available, it can be challenging to choose the right library for a particular task. This tool aims to simplify the selection process by providing a standardized benchmarking methodology for all libraries.
GeneratedVelocity is designed to automate the process of comparing and testing different libraries. To achieve this, tests are written in configuration files and then given to GeneratedVelocity. The tests are executed in a controlled environment, and the results are then analyzed and compiled into easy-to-read reports. These reports are published on dedicated GitHub pages, where users can access them and see how the different libraries perform in a variety of scenarios. By automating the testing process, the website enables developers to save time and effort when evaluating libraries, and helps them make informed decisions based on reliable data.
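As a rough illustration of that pipeline only: the names below (`run_all`, `run_benchmark`, the JSON keys) are hypothetical stand-ins for the sketch, not the actual GeneratedVelocity configuration format or API.

```python
import json
import time


def run_benchmark(task, library, argument):
    """Time one (task, library, argument) combination; smaller is faster."""
    start = time.perf_counter()
    task(library, argument)          # the task callable dispatches to the given library
    return time.perf_counter() - start


def run_all(config_path, tasks):
    """Run every task/library/argument combination described in a config file.

    `tasks` maps a task name to a callable taking (library, argument); the config
    is assumed, for this sketch, to list the libraries and arguments to test.
    """
    with open(config_path) as f:
        config = json.load(f)        # hypothetical format: {"libraries": [...], "arguments": [...]}
    results = {}
    for name, task in tasks.items():
        for library in config["libraries"]:
            for argument in config["arguments"]:
                results[(name, library, argument)] = run_benchmark(task, library, argument)
    return results                   # later scored, ranked, and compiled into a report
```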
For the time being, we have decided to compare results based on the Lexicographic Maximal Ordering Algorithm (LexMax).
Each ranking is based on the number of wins, ties, and losses of each library. The library with the highest number of wins is ranked first, followed by the library with the second-highest number of wins, and so on. In the case of a tie, both libraries are ranked equally. The algorithm does not take into account the magnitude of the wins or losses, only their number.
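To make the counting rule concrete, here is a minimal Python sketch of that ranking step, assuming lower raw measurements (e.g. runtimes) are better. It is an illustration only, not the project's actual code.

```python
def rank_by_wins(scores):
    """Rank libraries by pairwise wins on a single measurement.

    `scores` maps library name -> raw measurement (lower is better, e.g. runtime).
    Returns library name -> rank (1 = best); tied libraries share a rank.
    """
    wins = {}
    for lib, score in scores.items():
        # Count how many other libraries this one strictly beats;
        # the margin of each win is deliberately ignored.
        wins[lib] = sum(1 for other, s in scores.items()
                        if other != lib and score < s)
    # More wins -> better rank; libraries with the same win count share a rank.
    ordered = sorted(set(wins.values()), reverse=True)
    return {lib: ordered.index(w) + 1 for lib, w in wins.items()}


print(rank_by_wins({"lib_a": 1.2, "lib_b": 3.4, "lib_c": 1.2}))
# -> {'lib_a': 1, 'lib_b': 2, 'lib_c': 1}
```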
We use it to compare all the data generated by the benchmarking process. For example, when we run a task on a set of libraries, each result is compared against the other results obtained with the same argument, which yields a score for each argument. Over the entire task, this gives a vector of scores for each library. We then use the LexMax algorithm to compare the score vectors of the libraries and obtain a ranking for that task. We repeat this for each task, and again at the theme level and for the global ranking.
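As a rough sketch of the per-task step, assuming a higher per-argument score is better and that LexMax compares each library's scores sorted in descending order, the ranking could look like this in Python (illustrative only, not the actual implementation):

```python
def lexmax_key(score_vector):
    """Sort a library's per-argument scores in descending order so that
    two vectors can be compared lexicographically (best scores first)."""
    return sorted(score_vector, reverse=True)


def rank_task(score_vectors):
    """Rank libraries for one task from their per-argument score vectors.

    `score_vectors` maps library name -> list of per-argument scores,
    where a higher score means the library did better on that argument.
    """
    ordered = sorted(score_vectors,
                     key=lambda lib: lexmax_key(score_vectors[lib]),
                     reverse=True)
    ranks, previous_key, rank = {}, None, 0
    for lib in ordered:
        key = lexmax_key(score_vectors[lib])
        if key != previous_key:
            rank += 1                 # strictly worse sorted score vector
        ranks[lib] = rank             # identical vectors share a rank
        previous_key = key
    return ranks


print(rank_task({"lib_a": [2, 0, 1], "lib_b": [1, 1, 1], "lib_c": [0, 2, 1]}))
# -> {'lib_a': 1, 'lib_c': 1, 'lib_b': 2}
```

The same ranking function can then be applied to per-task scores to build the theme ranking, and to per-theme scores to build the global ranking.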
The benchmark website is an open-source project, and contributions from the community are welcome. To contribute, users can fork the project on GitHub, make changes to the code, and submit a pull request. Users can also contribute by reporting bugs, suggesting improvements, or sharing their benchmarking results.