Current candidates for exclusion from the upcoming suite include bloat (it has some notable idiosyncrasies and is not extensively deployed), hsqldb (superseded by derby), and possibly antlr and/or chart.
I have some information about the DaCapo eclipse benchmark that you may be interested in; if I should contact someone else instead, feel free to let me know. If you run eclipse for a large number of iterations, the performance forms a sawtooth: it degrades significantly (by a factor of 10 or more), then jumps back to normal. An Excel graph of the performance over time is attached below (DaCapo version 2006-10.jar). This happens on both Sun's and IBM's VMs.
During the slow iterations the program is spending most of its time in jitted code. The problem is the following two methods:
org/eclipse/jdt/internal/compiler/util/WeakHashSetOfCharArray.add([C)[C
org/eclipse/jdt/internal/compiler/util/WeakHashSet.add(Ljava/lang/Object;)Ljava/lang/Object;
Both of these methods perform a linear search through some kind of linked list of weak references. Here's my guess at what is happening: the list grows over time, and the linear searches eventually become a huge bottleneck. Eventually some memory threshold is crossed, the VM clears the weak references, and performance returns to normal. This is clearly poor code (linear searching), but it could also be a benchmark bug. Is it possible that this data structure should have been re-initialized between iterations, and that the iterating nature of the driver is creating a problem unlikely to exist in the real application?
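For illustration, here is a minimal sketch of the suspected pattern: an add() that linearly scans a list of weak references for an equal element before appending. The class and method names here are hypothetical, not the actual Eclipse JDT implementation; this just shows why cost per add grows with list size until the GC clears the weak references.

```java
import java.lang.ref.WeakReference;
import java.util.Iterator;
import java.util.LinkedList;

// Hypothetical sketch of the suspected structure (not the real JDT code):
// a canonicalizing "set" backed by a linked list of weak references.
class NaiveWeakSet<T> {
    private final LinkedList<WeakReference<T>> entries = new LinkedList<>();

    // O(n) per call: every add scans the whole list. Cleared references
    // are only pruned when the scan happens to pass over them, so the
    // list keeps growing until the GC clears the weak references.
    public T add(T value) {
        Iterator<WeakReference<T>> it = entries.iterator();
        while (it.hasNext()) {
            T existing = it.next().get();
            if (existing == null) {
                it.remove();            // referent was cleared by the GC
            } else if (existing.equals(value)) {
                return existing;        // canonical instance already present
            }
        }
        entries.add(new WeakReference<>(value));
        return value;
    }

    public int size() {
        return entries.size();
    }
}
```

With this shape, N adds cost O(N^2) total, which would match the gradual slowdown; once the VM clears the weak references, the stale entries get pruned on the next scans and performance snaps back, matching the sawtooth.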
Copyright 2001-2008 by the DaCapo Project, All Rights Reserved.