Re: [creduce-dev] Making creduce more parallel
We put all potential reductions into a shared work queue and have one
worker per core pulling from that queue. We globally maintain the
current most-reduced test case.
I think this is reasonable.
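To make the proposed orchestration concrete, here is a minimal sketch in Python. All names and the toy interestingness test are hypothetical stand-ins, not C-Reduce's actual code; candidates here are just the test case with one character removed.

```python
# Sketch: a shared work queue of candidate reductions, one worker per
# core, and a globally maintained most-reduced test case.
import queue
import threading

def interesting(candidate):
    # Stand-in for the real interestingness test.
    return "bug" in candidate

best = "aaabugccc"            # current most-reduced test case
best_lock = threading.Lock()
work = queue.Queue()

# Enqueue candidate reductions: the test case with one slice removed.
for i in range(len(best)):
    work.put(best[:i] + best[i + 1:])

def worker():
    global best
    while True:
        try:
            cand = work.get_nowait()
        except queue.Empty:
            return
        if interesting(cand):
            # Only replace the global best if the candidate is smaller.
            with best_lock:
                if len(cand) < len(best):
                    best = cand

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(best)   # one character shorter, still interesting
```

In a real reducer the workers would also re-seed the queue with new candidates derived from the updated best, which is where the merge idea below comes in.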
At this point I should mention that Moritz Pflanzer has a
re-implementation of C-Reduce in Python that we have tentatively planned
to replace the Perl implementation with, at some suitable time. It's
possible that C-Reduce hacking of the type you are proposing should be
done on that version.
However, if this reduction passes the interestingness test, but it is
not smaller than the current most-reduced global, then we add a new
potential reduction to the shared work queue: the merge of this
reduction and the current most-reduced global. The idea is that the
union of two reductions (a merge) is itself a potential reduction. If
there is a conflict in the merge, we discard it immediately and don't
even run the interestingness test.
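One way to picture the merge-or-discard rule is to model a reduction as a patch against the base test case. This representation is an assumption for illustration only, not how C-Reduce actually stores its transformations:

```python
# Illustrative model: a reduction is a dict mapping line index to
# replacement text, where None means "delete this line". Two reductions
# merge cleanly iff they never rewrite the same line in different ways.
def merge(patch_a, patch_b):
    merged = dict(patch_a)
    for line, text in patch_b.items():
        if line in merged and merged[line] != text:
            return None          # conflict: discard without testing
        merged[line] = text
    return merged

a = {0: None, 3: "int x;"}       # delete line 0, rewrite line 3
b = {5: None, 3: "int x;"}       # delete line 5, same rewrite of line 3
c = {3: "long x;"}               # incompatible rewrite of line 3

print(merge(a, b))   # union of both reductions
print(merge(a, c))   # None: conflicting, skip the interestingness test
```

A clean merge becomes a new queue entry; a conflicting one is dropped for free, which is what makes cheap conflict detection attractive.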
As I was saying, some randomness will need to be integrated into the
passes to reduce the likelihood of conflicts. The passes aren't really
geared for that so that'll take a bit of thought.
Because merges are just another kind of potential reduction, and there
is no waiting for merges to complete before trying any given potential
reduction, this scheme should better saturate all those cores.
Yes, definitely, though at least sometimes we'll be running up against
limits other than cores, such as the fact that compilers use a lot of
memory.
It is worth pointing out that this is non-deterministic and racy:
merges depend on what happens to be the current most-reduced test case
and on the order of potential reductions in the queue. Running creduce
on the same input file twice won't guarantee the same order of
reductions or even the same results. For what it is worth, my
understanding of the current implementation is that it has similar
non-determinism.
Often the interestingness test itself is non-deterministic (due to
timeouts) so it's very hard to truly avoid non-determinism.
There's one algorithmic choice in the parallel reducer where we have to
decide whether to take the first process that terminates with an
interesting test case or whether to pull these off the queue in order.
I can't remember for sure but I believe I went with the more
conservative choice even though it sacrifices a bit of performance.
For a paper that we're working on about C-Reduce I have some experiments
planned to evaluate the effect of determinism on reduction. In other
words, if we run the same reduction 100 times but with phase ordering
randomized, what does the resulting distribution of final file sizes
look like? My guess is that in many cases the distribution will be
fairly tight but that every now and then the randomness will find a much
better solution. Anyhow I'm looking forward to seeing the results of this!
I don't want to implement merging or rebasing patch files myself. What
if we leveraged git (or hg) for this? Each potential reduction would
locally clone a repository containing the test case into the temp dir
and commit the reduction's changes, and merging different reductions
would then be outsourced to `git merge` on those commits.
My intuition is that git/hg would eat a lot of performance but I could
be wrong. It would certainly be amusing if C-Reduce ended up being a
small pile of git hooks :).
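For what it's worth, the git-based merge can be demonstrated end to end with plain subprocess calls. This is a throwaway sketch, assuming `git` is on the PATH; branch names and file contents are made up. Two reductions that delete lines at opposite ends of the file merge cleanly into their union:

```python
# Sketch: outsource merging of two reductions to `git merge`.
import os
import subprocess
import tempfile

repo = tempfile.mkdtemp()

def run(*args):
    subprocess.run(args, cwd=repo, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

path = os.path.join(repo, "test_case.c")

def write(text_lines):
    with open(path, "w") as f:
        f.writelines(text_lines)

run("git", "init")
run("git", "config", "user.email", "demo@example.com")
run("git", "config", "user.name", "demo")

lines = [f"line{i}\n" for i in range(1, 7)]   # six-line base test case
write(lines)
run("git", "add", "test_case.c")
run("git", "commit", "-m", "base")
base = subprocess.check_output(
    ["git", "rev-parse", "HEAD"], cwd=repo).decode().strip()

# Reduction A deletes the first line, committed on its own branch.
run("git", "checkout", "-b", "reduction-a")
write(lines[1:])
run("git", "commit", "-am", "drop first line")

# Reduction B deletes the last line, starting from the same base.
run("git", "checkout", "-b", "reduction-b", base)
write(lines[:-1])
run("git", "commit", "-am", "drop last line")

# The merged candidate is the union of both deletions.
run("git", "checkout", "-b", "merged", "reduction-a")
run("git", "merge", "--no-edit", "reduction-b")
with open(path) as f:
    merged = f.read()
print(merged)   # lines 2..5 survive
```

A conflicting pair of reductions would make `git merge` exit non-zero, which maps directly onto "discard without running the interestingness test".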
I am interested in polishing this idea, prototyping it, and if all
goes well contributing these changes to creduce. My hope is that the
implementation mostly involves changes to orchestration and that the
reduction passes can remain unchanged.
The thing that I'm proudest of in the C-Reduce implementation is the
modularity. The core is just not that complicated; most of the good
stuff lives in passes that can be thought of as purely functional. So
the structure does lend itself to the kind of experimentation you're
proposing.
Anyhow take a look at Moritz's implementation too:
What IPC mechanism do you have in mind for the work queue?