Re: [creduce-dev] Making creduce more parallel

Hi Nick!

This sounds like cool work. Are you basically using C-Reduce as a fuzzer?

I'm afraid to say that 6 hours is not really that bad when dealing with C++; a number of us here have seen reductions take quite a bit longer than that.

I'm not sure why C-Reduce isn't using all of your cores. Some phases just don't have much parallelism available, but the workhorse line-removal passes do.

The only thing that C-Reduce runs in parallel is the interestingness test; the transformations and the C-Reduce core remain serial. How long does your interestingness test take? If it's fairly slow, that might explain the loss of parallelism.
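
For reference, here's a minimal sketch of an interestingness test written in Python, assuming the property you're preserving is a compiler crash (the test.cpp name and the clang++ flags are just placeholders for whatever you're actually doing). C-Reduce runs the script with no arguments in a directory containing the current candidate and treats exit status 0 as interesting:

    #!/usr/bin/env python3
    # Minimal interestingness-test sketch. The file name and compiler
    # flags are placeholders; C-Reduce runs this script in a directory
    # containing the current candidate and treats exit status 0 as
    # "interesting".
    import subprocess
    import sys

    try:
        result = subprocess.run(
            ["clang++", "-c", "test.cpp", "-o", "test.o"],
            capture_output=True,
            timeout=60,
        )
    except subprocess.TimeoutExpired:
        sys.exit(1)  # a hang is not the bug we're preserving

    # A negative return code means the compiler was killed by a
    # signal, i.e. it crashed.
    sys.exit(0 if result.returncode < 0 else 1)

You'd invoke it as "creduce ./test.py test.cpp". If the script itself is quick, the parallel test invocations should be able to keep your cores busy during the line-removal passes.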

Anyhow, the only easy solution I can offer is to run multiple C-Reduce instances, each solving a different problem. Is there any chance that fits what you're doing here?
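
If it does, a few lines of Python will drive the whole thing (the job1..job4 directory layout is made up for the example; each job gets its own directory because C-Reduce rewrites the test case in place):

    #!/usr/bin/env python3
    # Drive several independent C-Reduce jobs at once. The layout is
    # hypothetical: each job directory holds its own test.cpp plus an
    # executable interestingness test, test.py.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    JOB_DIRS = ["job1", "job2", "job3", "job4"]  # one problem each

    def reduce_one(job_dir):
        # Threads are fine here; the real work happens in the creduce
        # subprocess, one per job.
        proc = subprocess.run(["creduce", "./test.py", "test.cpp"],
                              cwd=job_dir)
        return proc.returncode

    with ThreadPoolExecutor(max_workers=len(JOB_DIRS)) as pool:
        for job, rc in zip(JOB_DIRS, pool.map(reduce_one, JOB_DIRS)):
            print(job, "finished with exit status", rc)

Each instance will also run its own parallel interestingness tests, so you may want fewer jobs than cores to avoid oversubscribing the machine.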

> Initially, I thought about a map-reduce-y kind of thing: map out
> potential reductions and interestingness tests across all cores and
> then merge them back together in pairs in parallel. If a merge
> conflicts, take the smaller of the two. Retest interestingness after
> each merge, and if it fails, take the smaller of the two test cases
> (both of which we know already passed the interestingness test).
> Repeat this process to a fixed point.

This is interesting, and it's certainly a way to un-bottleneck C-Reduce. It would probably be necessary to randomize the transformations to prevent them from creating merge conflicts too often (by default, the transformations walk through the program in order, so parallel workers would tend to produce overlapping edits).
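
For concreteness, here's a rough Python sketch of that map/merge loop. The random_reduction, merge, and is_interesting functions are stand-ins I made up for the example; in a real implementation they'd be a randomized C-Reduce pass, a textual three-way merge against the common parent, and your interestingness test:

    #!/usr/bin/env python3
    # Rough sketch of the proposed scheme: generate and test candidate
    # reductions in parallel, merge survivors pairwise, fall back to
    # the smaller input when a merge conflicts or fails the test, and
    # repeat to a fixed point. All three stand-ins are hypothetical.
    import random
    from concurrent.futures import ProcessPoolExecutor

    def random_reduction(args):
        # Stand-in transformation: drop one randomly chosen line. An
        # explicit seed keeps parallel workers from all making the
        # identical edit.
        text, seed = args
        lines = text.splitlines()
        if len(lines) > 1:
            del lines[random.Random(seed).randrange(len(lines))]
        return "\n".join(lines)

    def is_interesting(text):
        # Stand-in for the user's interestingness test.
        return "BUG" in text

    def merge(a, b):
        # Stand-in merge; a real version would three-way-diff a and b
        # against their common parent and return None on conflict.
        return None  # pessimistic placeholder: always conflict

    def step(text, workers=8):
        # Map: produce and test candidate reductions on all cores.
        jobs = [(text, random.randrange(2**32)) for _ in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            candidates = list(pool.map(random_reduction, jobs))
        survivors = [c for c in candidates if is_interesting(c)]
        # Reduce: merge survivors in pairs; on a conflict or a failing
        # merged result, keep the smaller of the two (both of which
        # already passed the interestingness test).
        while len(survivors) > 1:
            a, b = survivors.pop(), survivors.pop()
            m = merge(a, b)
            survivors.append(m if m is not None and is_interesting(m)
                             else min(a, b, key=len))
        return survivors[0] if survivors else text

    if __name__ == "__main__":
        current = "keep me\nBUG\nnoise\nmore noise\n"
        while True:  # repeat to a fixed point
            reduced = step(current)
            if len(reduced) >= len(current):
                break
            current = reduced
        print(current)

With the pessimistic merge this degenerates into a parallel greedy search; the interesting question is how often a real merge would actually succeed.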

When I originally parallelized C-Reduce I had something like this in mind, but I eventually reached the same conclusion you did: merges will kill it.

Your next idea I need to think about some more! More later.

John