
This benchmark suite is intended as a tool for Java benchmarking by the programming language, memory management and computer architecture communities. It consists of a set of open-source, real-world applications with non-trivial memory loads.

The initial release of the suite was the culmination of over five years' work at eight institutions, as part of the DaCapo research project, which was funded by a National Science Foundation ITR Grant, CCR-0085792. A further three years of development went into the 2009 release. That work was funded by the ANU, the Australian Research Council and a generous donation from Intel. Since then, development has continued at ANU with support from Oracle and Google.

Our suite evolves to maintain its relevance. It is therefore essential that you cite the version number associated with the release in any use of the benchmark, and as a courtesy to the developers, we ask that you please cite the paper from OOPSLA 2006 describing the suite:

Blackburn, S. M., Garner, R., Hoffman, C., Khan, A. M., McKinley, K. S., Bentzur, R., Diwan, A., Feinberg, D., Frampton, D., Guyer, S. Z., Hirzel, M., Hosking, A., Jump, M., Lee, H., Moss, J. E. B., Phansalkar, A., Stefanovic, D., VanDrunen, T., von Dincklage, D., and Wiedermann, B. The DaCapo Benchmarks: Java Benchmarking Development and Analysis. In OOPSLA '06: Proceedings of the 21st Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (Portland, OR, USA, October 22-26, 2006).

News

  • June 17, 2019 After two years of work, we have started making evaluation snapshots of our upcoming release available.
    • Seven diverse and completely new benchmarks: biojava, cassandra, graphchi, h2o, jme, kafka, and zxing.
    • A complete overhaul of the trade benchmarks, replacing geronimo with wildfly.
    • Full updates of all existing benchmarks, bringing them up to date with the latest stable versions.
    • A number of benchmarks now have ‘huge’ configurations that run to GB-sized heaps.

    The suite is not yet fully calibrated, we have yet to cull some of the older benchmarks, and we are still refining the harness and build process. However, we look forward to community feedback on the snapshots, which will shape the suite's final composition. We hope to have the suite ready in Q3 2019. Please use GitHub to file bug reports or contribute fixes and improvements, or share your feedback via the mailing list.

  • May 10, 2018 An uncalibrated full refresh of every benchmark in the suite is now available on GitHub. This is not yet a release. Before we release, we need to fully calibrate each workload, add new workloads, and assess the whole suite. We are working on that right now. In the meantime, we encourage you to take a look and give us feedback (on GitHub, or via the mailing list).
  • Jan 12, 2018 We have made a maintenance release of the benchmark suite. This is the first full release in a number of years. It fixes a handful of issues with the suite without changing the existing benchmarks; the major changes are listed here. In short: the source distribution should now build correctly (broken URLs fixed); the suite should run fine on Java 8 JVMs (with the exception of tomcat, which has an underlying problem unrelated to DaCapo); and we have added a new benchmark, lusearch-fix, which is identical to lusearch except that a one-line bug fix to lucene has been applied. We recommend lusearch-fix over lusearch; the issue with lusearch is described in this paper. A sketch of how to run the new benchmark follows below.
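As a minimal sketch of how one might run the new benchmark (assuming the jar file name used by this maintenance release), the harness is invoked as an executable jar, with the benchmark name as an argument:

    # run the lucene-based search workload with the one-line fix applied
    java -jar dacapo-9.12-MR1-bach.jar lusearch-fix

The same invocation pattern applies to any benchmark in the suite; running the jar without arguments should print the list of available workloads and harness options.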

Sponsors

Current Sponsors and Contributors

ANU

Oracle, Google

Past Sponsors and Contributors

ANU, U. Colorado, Purdue U., U. Texas, U. Mass., UNM

IBM, Intel

ARC, NSF