News
2018-01
We have just made a maintenance release of the benchmark suite. This release fixes a handful of issues with the suite, without changing the existing benchmarks. Major changes are listed here. In short, the source distribution should now build correctly (broken URLs fixed), the suite should run fine on Java 8 JVMs (with the exception of tomcat, which has an underlying problem unrelated to DaCapo), and we have added a new benchmark, lusearch-fix, which is identical to lusearch except that a one-line bug fix to lucene has been applied (we recommend lusearch-fix over lusearch). The issue with lusearch is described in this paper.
2017-12
We have re-invigorated the project, with the intention of making a major new release in 2018. The project is now hosted on github. We plan to make a minor maintenance release in the short term, before commencing a community effort to put together the next release of the suite.
2009-12
After three years of development, the new release of the DaCapo benchmark suite is finally available. You can grab it here. Please be sure to read the release notes before using this release. This release adds new workloads, deletes many old ones, and overhauls all of the remaining workloads. The release also includes many improvements to the harness and command-line interface.
2009-07
Our continuous performance regression pages (release and development) have been updated to include time stamps and direct links to the jars used. This means you can always download the current development jar.
2009-06
After a huge development effort, we now have early drafts of two benchmarks based on the Apache DayTrader J2EE workload: tradebeans and tradesoap. At this stage both benchmarks are unstable on some platforms, including our testing environment. We strongly encourage feedback (please use the mailing list). Please try them out by downloading the development jar. Read more about these workloads here.
2009-06
As part of a major push to get a beta version of the suite ready, we have started culling those benchmarks that will not appear in the next release. The following benchmarks have been removed: antlr (jython now uses antlr internally), bloat (we have a number of program analysis tools), chart (batik has some similarity as a vector graphics renderer), hsqldb (replaced by derby).
2009-06
A new benchmark, avrora, has been added to the suite. Avrora was proposed by Ben Titzer from Sun. Avrora is a parallel discrete event simulator that performs cycle-accurate simulation of a sensor network. Avrora exhibits interesting patterns of parallelism, and is unlike any of the existing DaCapo workloads, so it makes a very interesting addition. Please try it out by downloading the development jar here. Thank you very much, Ben!
2009-06
Almost all benchmarks have been updated to reflect the latest publicly available releases. For jython, lusearch, luindex, and pmd this marks significant changes. Only eclipse and xalan remain to be updated.
2008-11
In anticipation of the upcoming release, we have created a TODO list of work that needs to be done before the upcoming release. Please send email to the mailing list or directly to Steve Blackburn if you think you can help.
2008-08
A new paper describing the development of the DaCapo benchmark suite and broader issues of methodology appears in the August 2008 CACM.
2008-07
We perform comprehensive, 12-hourly performance regressions, running various VMs against the DaCapo svn head (for the upcoming release), and against the 2006 release.
2008-06
Slides from a presentation discussing the development of the DaCapo suite and the broader topic of evaluation methodology are now available.
2008-06
Cliff Click pointed us to the fragger widget, which artificially injects fragmentation into a heap. We are considering including this as a runtime option in the DaCapo suite, as it could be very useful for those using DaCapo for memory management and locality work. Thanks, Cliff!
2008-01
Stability and performance comparisons among a number of JVMs running against the DaCapo suite can now be found here, with tests against the current DaCapo development head available here. These numbers are updated daily. Note that they currently just present raw data, which tends to be very noisy given the run-to-run variation due to the non-determinism of adaptive optimization in modern JVMs.
2007-12
We have started overhauling our workloads in anticipation of our next release. This includes the revival of the batik workload and a reworking of hsqldb to use the Apache derby database engine. We have also started overhauling the implementation of multithreading within the DaCapo harness.
2007-10
We have started evaluating Apache Geronimo and its DayTrader benchmark for potential inclusion in the next DaCapo benchmark suite.
2007-01-27
A second minor maintenance release (dacapo-2006-10-MR2) is now available. This includes minor bug fixes and a repackaging of the benchmarks suitable for people wanting to use tools like Soot to perform ahead-of-time analysis of the code. The workloads are unchanged.
2007-01-23
With assistance from Chris Kulla, we have a draft of a new SunFlow benchmark in our subversion repository as a candidate for inclusion in the DaCapo suite. To evaluate the benchmark, grab the svn head ("svn co https://dacapobench.svn.sourceforge.net/svnroot/dacapobench/benchmarks/trunk dacapo"), build the sunflow benchmark ("cd dacapo/benchmarks; ant sunflow.source clean sunflow jar.quick"), then run it ("java -jar dacapo.jar sunflow"). SunFlow, along with other candidates, will be evaluated over the next 6-12 months for possible inclusion in the next release of the suite. Note that SunFlow requires Java 1.5 or later.
2006-12-26
A minor maintenance release (dacapo-2006-10-MR1) is now available. This includes a bug fix and minor enhancements to the interface. The workloads are unchanged.
2006-10-25
The first full release of the dacapo benchmark suite (dacapo-2006-10) is now available. This release includes the following bug fix:
- Fixed a bug in pmd sources. Three of the pmd input sources included a non-UTF-8 character where the resulting behavior is undefined. Some JVMs (correctly) produced different behavior for these files. The offending character has been removed from the three input files.
2006-10-15
We have produced release candidates in anticipation of our initial full release around October 23. These are now available for download and evaluation. Feedback received prior to October 20 may be included in the full release. New features for release candidates include:
- Further improvements to the validation mechanism. We now accommodate the DOS "\" path separator--hopefully this is the last of the OS-specific issues.
- We have added a new build target split-deps. This allows those who build from source to separate out dependencies into a separate jar (this feature was added after feedback on the mailing list, and is used to enable whole program analysis). [Thanks to Eric Bodden of McGill!]
- Improvements to the harness. Fixed a bug that prevented output from subsequent benchmarks from being displayed when executing more than one benchmark at a time. Improved the usage message. [Thanks to Vladimir Strigun of Intel!]
- The eclipse benchmark now uses a self-contained dummy JRE for its build tasks, so it should work on any JVM without special action (previously a third-party JRE needed to be specified on the command line for JVMs which Eclipse did not recognise).
- Minor improvements to the build scripts, including updates to source URLs, refactoring, and comments.
2006-10-07
We have made a new beta release, beta-2006-10, which has a number of improvements including:
- The xalan benchmark has been re-written. This version has a more realistic workload. We also force the use of version 2.4.1 and ensure that xalan is used (rather than a bundled xslt processor). [Thanks to Kev Jones of Intel!]
- The validation mechanism has been completely overhauled. The validation mechanism ensures that benchmarks only "PASS" if they complete correctly. The harness should now work properly on any OS, and deals with the user's pwd, which creeps into some benchmark output. [Thanks to Robin Garner of ANU!]
- The lusearch benchmark has been improved, so that each query thread includes a mix of unique and shared queries (previously all queries were unique/non-shared).
- The build system has been improved considerably and is a little better documented.