Parallel Local Search for SAT
Alejandro Arbelaez
Feb 15, 2012
• Computer engineering degree, 2000–2006, Colombia
  – Universidad Javeriana
  – AVISPA Group
• Ph.D. in Computer Science, 2007–2011, France
  – Microsoft–INRIA joint lab
• Post-doc at JFLI (University of Tokyo), 2012–current, Japan
Agenda
• SAT
• Sequential Algorithms
  – Complete Algorithms
  – Incomplete Algorithms
• Parallel Algorithms
  – Divide-and-Conquer
  – Parallel Portfolio
• Parallel Local Search
• Massively Parallel Local Search
• Conclusions & Future Work
SAT
• First known NP-complete problem [Cook 1971]
• Used as the starting point for proving that other problems are NP-complete:
  – Π is verifiable in poly-time
  – Poly-time reduction from SAT: SAT ∝ Π
SAT
• Boolean variables: positive and negative literals
• Clauses

Applications
• Software verification
• Bioinformatics
  – Haplotype inference
  – Gene regulatory networks
• Puzzle solving
  – Sudoku
SAT Solving
• Complete Search
  – Tree-based search
• Incomplete Search
  – Local Search
• Variable/Value selection heuristics
• Restart methods
  – Luby, static, geometric, etc.
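The Luby sequence mentioned above (1, 1, 2, 1, 1, 2, 4, …) can be generated with a short recursive routine; this is a generic sketch of the well-known construction, not code from the talk.

```python
def luby(i):
    """Return the i-th term (1-indexed) of the Luby restart sequence:
    1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ..."""
    k = 1
    while (1 << k) - 1 < i:                 # smallest k with 2^k - 1 >= i
        k += 1
    if (1 << k) - 1 == i:                   # i ends a block: emit 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)     # otherwise recurse into the block

print([luby(i) for i in range(1, 16)])
# → [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]
```

In a solver the i-th restart cutoff is typically `luby(i)` multiplied by a fixed base number of conflicts, which is what distinguishes it from the static and geometric policies listed above.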
Tree-based Search
• Depth-first search
• Variable/Value selection heuristics are used to determine which variable to assign next
Complete Search
• Tree-based search
• Constraint propagation
• Conflict-clause learning
  – Add new clauses
  – Avoid the same conflict in the future
• Intelligent backtracking
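A minimal DPLL-style complete search (unit propagation plus depth-first branching) can be sketched as follows; clause learning and intelligent backtracking are omitted for brevity, and the formula encoding (lists of signed integer literals, DIMACS-style) is my own choice, not from the talk.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign variables forced by unit clauses.
    Returns None on conflict, else the remaining undecided clauses."""
    changed = True
    while changed:
        changed = False
        remaining = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None                   # all literals false: conflict
            if len(unassigned) == 1:
                l = unassigned[0]
                assignment[abs(l)] = l > 0    # forced assignment
                changed = True
            else:
                remaining.append(clause)
        clauses = remaining
    return clauses

def dpll(clauses, assignment=None):
    """Depth-first search with unit propagation; returns a model or None."""
    assignment = dict(assignment or {})
    clauses = unit_propagate(clauses, assignment)
    if clauses is None:
        return None                           # conflict: backtrack
    if not clauses:
        return assignment                     # all clauses satisfied
    # branch on the first unassigned variable of the first open clause
    var = next(abs(l) for l in clauses[0] if abs(l) not in assignment)
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ ¬x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))      # prints a satisfying assignment
```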
Local Search
• Start with a random configuration (values for the variables)
• Iteratively apply local moves in order to find a solution
[Figure: a local move from the initial configuration to a local neighbor]
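The local-search loop described on this slide can be written generically as follows; this is a hedged sketch in which the move strategy is a plain greedy flip (pick the variable whose flip leaves the fewest unsatisfied clauses, GSAT-style), not any particular solver from the talk.

```python
import random

def num_unsat(clauses, assign):
    """Number of clauses not satisfied by the current configuration."""
    return sum(not any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

def local_search(clauses, n_vars, max_flips=10000, seed=0):
    rng = random.Random(seed)
    # start from a random configuration
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        if num_unsat(clauses, assign) == 0:
            return assign                     # solution found
        # local move: flip the variable minimizing unsatisfied clauses
        def cost(v):
            assign[v] = not assign[v]
            c = num_unsat(clauses, assign)
            assign[v] = not assign[v]
            return c
        best = min(assign, key=cost)
        assign[best] = not assign[best]
    return None                               # no solution within the budget

print(local_search([[1, 2], [-1, 3], [-2, -3]], 3) is not None)   # → True
```

On hard instances a purely greedy move rule stalls in local minima, which is exactly why the noise-based heuristics on the following slides exist.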
Local Search for SAT
Local search algorithms for SAT usually perform better without restarts.
Variable Selection in LS
• GSAT [Selman et al. 1992]
  – Select the best variable (score function)
• WalkSAT [Selman et al. 1994]
  – Select an UNSAT clause C
  – Select the best variable in C (score function)
• DLS [Hunter et al. 2002]
  – Add weights to clauses
• Hybrid
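WalkSAT's clause-then-variable selection can be sketched as follows. This is a hedged illustration of the classic scheme: the score is the standard "break count" (clauses a flip would newly falsify), with a zero-break "free move" shortcut and a noise step; it is not the exact scoring of any solver cited above.

```python
import random

def break_count(clauses, assign, v):
    """Number of clauses satisfied now that flipping v would falsify."""
    before = [any(assign[abs(l)] == (l > 0) for l in c) for c in clauses]
    assign[v] = not assign[v]
    after = [any(assign[abs(l)] == (l > 0) for l in c) for c in clauses]
    assign[v] = not assign[v]                 # undo the trial flip
    return sum(b and not a for b, a in zip(before, after))

def walksat_pick(clauses, assign, noise=0.5, rng=None):
    """Return the variable WalkSAT would flip next, or None if satisfied."""
    rng = rng or random.Random(0)
    unsat = [c for c in clauses
             if not any(assign[abs(l)] == (l > 0) for l in c)]
    if not unsat:
        return None
    clause = rng.choice(unsat)                # select a random UNSAT clause C
    scores = {abs(l): break_count(clauses, assign, abs(l)) for l in clause}
    best = min(scores, key=scores.get)
    if scores[best] == 0:
        return best                           # "free" move: nothing breaks
    if rng.random() < noise:
        return rng.choice(list(scores))       # random walk step
    return best                               # best variable in C
```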
Novelty [McAllester et al. 1997]
• Select a random UNSAT clause
• Select a variable using a given heuristic
Novelty+ [Hoos 1999]
• With probability wp, select a random variable from an UNSAT clause
• With probability 1 − wp, use Novelty
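Novelty+ layers the random-walk step (probability wp) on top of Novelty's age-aware rule: take the best-scoring variable of the clause unless it is the most recently flipped one, in which case take the second best with probability p. A hedged sketch, where `score` and `age` are caller-supplied functions and tie-breaking is simplified:

```python
import random

def novelty_plus(clause_vars, score, age, p=0.5, wp=0.01, rng=None):
    """Pick a variable from a randomly chosen UNSAT clause, Novelty+ style.

    clause_vars: variables of the selected UNSAT clause
    score(v):    quality of flipping v (higher is better)
    age(v):      flips since v was last flipped (smaller = more recent)
    """
    rng = rng or random.Random(0)
    if rng.random() < wp:                     # random-walk step (the "+")
        return rng.choice(clause_vars)
    # Novelty: rank the clause's variables by score, best first
    ranked = sorted(clause_vars, key=score, reverse=True)
    best = ranked[0]
    second = ranked[1] if len(ranked) > 1 else ranked[0]
    most_recent = min(clause_vars, key=age)
    if best != most_recent:
        return best
    return second if rng.random() < p else best
```

AdaptNovelty+ (next slide) keeps this rule but adjusts the noise parameter p online instead of fixing it by hand.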
AdaptNovelty+ [Hoos 2002]
• Noise is a key point!
• Adaptive noise tuning
Parallel Algorithms for SAT
• Divide-and-conquer
• Parallel portfolio-based algorithms (multi-start)
Parallel Algorithms for SAT
• Divide-and-conquer
  – Divide the problem space into subspaces
    • Load balancing
Parallel Algorithms for SAT
Parallel Portfolio
  – Algorithms compete and cooperate on the full problem
  – Use different and complementary strategies
  – No need for load balancing
Parallel Algorithms for SAT
Parallel Portfolio
  – Portfolio without cooperation: as good as the best
[Figure: cores 1–4 running independently; parallel time]
Parallel Algorithms for SAT
Parallel Portfolio
  – Portfolio with cooperation: better than the best!
  – Complete algorithms exchange learned clauses
[Figure: cores 1–4 cooperating; parallel time]
Parallel Algorithms for SAT

Solver                          Complete  Parallel Architecture  Knowledge Sharing
PSATO [Zhang et al. 1996]       Yes       Workstations           No sharing
PGSAT [Roli 2002]               No        Multicore              No sharing
GrADSAT [Chrabakh et al. 2006]  Yes       Workstations           Clause sharing
gNovelty+T [Pham et al. 2009]   No        Multicore              No sharing
ManySAT [Hamadi et al. 2009]    Yes       Multicore              Clause sharing
Plingeling [Biere 2010]         Yes       Multicore              Clause sharing
CSLS [Arbelaez et al. 2011]     No        Multicore              Configuration sharing
Parallel Algorithms for SAT
Portfolios of complete algorithms:
  – Usually better when using few cores (up to 8–16 cores)
  – ManySAT reached (on average) a super-linear speedup on instances of the 2008 SAT Race
  – All parallel solvers that qualified in the parallel track of the 2010 SAT Competition were portfolio algorithms
Parallel Portfolio
Exchange useful information:
• Let several algorithms (or different copies of the same one with different parameters) compete and cooperate to solve a given problem instance
• Complete solvers exchange learnt clauses
• Incomplete solvers?
  – Best known configurations
Parallel Local Search for SAT
• Shared memory stores the best configurations found so far, together with their configuration costs
• The best known configurations are exploited to build a new restart point
Aggregation Operators
1. Agree
2. Majority
3. Prob
Weighting the best known configurations:
4. Majority RankW
5. Majority NormalizedW
6. Prob RankW
7. Prob NormalizedW
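The first three operators can be sketched as follows, under my own reading of the names: Agree keeps a value only where all elite configurations coincide (falling back to a random bit), Majority takes a per-variable majority vote, and Prob samples each value with probability equal to its frequency. The weighted variants (4–7) would reweight each configuration's vote by its rank or normalized cost; they are omitted here.

```python
import random

def agree(configs, rng):
    """Keep a value iff every elite configuration agrees on it."""
    return [vals[0] if len(set(vals)) == 1 else rng.choice([0, 1])
            for vals in zip(*configs)]

def majority(configs, rng):
    """Per-variable majority vote, ties broken at random."""
    out = []
    for vals in zip(*configs):
        ones = sum(vals)
        if ones * 2 == len(vals):
            out.append(rng.choice([0, 1]))            # exact tie
        else:
            out.append(1 if ones * 2 > len(vals) else 0)
    return out

def prob(configs, rng):
    """Set each variable to 1 with probability = frequency of 1s."""
    return [1 if rng.random() < sum(vals) / len(vals) else 0
            for vals in zip(*configs)]

rng = random.Random(0)
elite = [[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 1, 0]]    # 3 elite configurations
print(agree(elite, rng), majority(elite, rng), prob(elite, rng))
```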
Restart Policy
Best known configurations tend to be similar; without care, the algorithm would be restarted from similar configurations again and again.
A[i]: restart iff something has changed in M, without considering the i-th row
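The condition above can be sketched as a check over the shared matrix M of best configurations, where core i restarts only if some row other than its own has changed since its last check (so a core's own updates never trigger its restart). The per-core snapshot mechanics here are my own illustration.

```python
def should_restart(M, last_seen, i):
    """Core i restarts iff any row of the shared matrix M, other than
    its own (the i-th), changed since this core's previous check."""
    changed = any(M[j] != last_seen[j] for j in range(len(M)) if j != i)
    last_seen[:] = [row[:] for row in M]    # remember the current snapshot
    return changed

M = [[1, 0, 1], [0, 0, 1], [1, 1, 1]]       # best configuration per core
seen0 = [row[:] for row in M]               # core 0's snapshot
seen1 = [row[:] for row in M]               # core 1's snapshot
M[0][2] = 0                                 # core 0 improves its own entry
print(should_restart(M, seen0, 0))          # → False (own row is ignored)
print(should_restart(M, seen1, 1))          # → True  (core 0's row changed)
```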
Experiments
Performance metrics:
  – Median runtime across 10 runs
  – Penalized Average Runtime (PAR): average runtime, but unsolved instances contribute 10 times the time cutoff
  – Example (300 s timeout):
    • Alg1: 290s, 295s, 280s. AVG: 288.3s, PAR: 288.3s
    • Alg2: 300s, 300s, 260s. AVG: 286.6s, PAR: 2086.6s
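The Penalized Average Runtime used above is easy to compute; the penalty factor 10 and the 300 s cutoff come from the slide.

```python
def par(runtimes, cutoff, penalty=10):
    """Penalized Average Runtime: runs hitting the cutoff (unsolved)
    count as penalty * cutoff instead of their runtime."""
    return sum(penalty * cutoff if t >= cutoff else t
               for t in runtimes) / len(runtimes)

# Example from the slide (300 s timeout):
print(round(par([290, 295, 280], 300), 1))   # → 288.3 (all runs solved)
print(round(par([300, 300, 260], 300), 1))   # → 2086.7 (two timeouts at 3000 s each)
```

Alg2 illustrates why PAR is used: its plain average (286.6 s) looks better than Alg1's, but two of its runs never finished.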
Parallel Local Search for SAT
N copies of the best algorithm vs. different algorithms
Parallel Local Search for SAT
Experiments:
• 359 known SAT instances from the RANDOM category of the 2009 SAT Competition
• Algorithms:
  – PAWS [Thornton et al. 2004]
  – G2WSAT+p [Li et al. 2005]
  – Adaptive G2WSAT [Li et al. 2007]
  – Adaptive G2WSAT+p [Li et al. 2007]
  – G2WSAT [Li et al. 2005]
  – SAPS [Hutter et al. 2002]
  – RSAPS [Hutter et al. 2002]
  – Adaptive Novelty+ [Hoos 2002]
Parallel Local Search for SAT
Cooperation vs. no cooperation

Parallel Local Search for SAT
Solution cost for unsolved instances
Parallel Local Search for SAT
Performance using 8-core portfolios

Portfolio             Solved instances  PAR      Never solved
PAWS                  286               5213.84  56
No Sharing            311               3743.63  8
gNovelty+T            304               4173.14  33
Agree                 305               3952.19  17
Majority              315               3163.02  6
Prob                  335               2247.97  2
Majority RankW        325               2944.92  4
Majority NormalizedW  314               3298.60  9
Prob RankW            333               2313.80  2
Prob NormalizedW      327               2295.99  1
Experiments
Diversification vs. intensification
2011 SAT Competition
• 1st place: Sparrow (sequential solver)
• 2nd place: CSLS (cooperative parallel solver)
• 3rd place: sattime2011 (sequential solver)
• Sparrow is several orders of magnitude faster than all algorithms in CSLS (on random instances)
  – Extensively fine-tuned:
    • 96 CPU days (automatic tuning)
    • 166 CPU days (human tuning)
Massively Parallel Local Search
• What is the performance of parallel portfolio-based SAT solving with a large degree of parallelism?
• Lots of information to exchange:
  – 10,000 variables would be about 40 KB × 512 (cores) = 20 MB (send and receive)
  – About 10 GB of traffic over the network (first message)
• Lots of diversification
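The back-of-the-envelope traffic estimate above can be checked as follows; the 4 bytes per variable is my assumption, chosen to match the slide's ≈40 KB-per-configuration figure.

```python
n_vars = 10_000
bytes_per_var = 4                        # assumed encoding size per variable
cores = 512

config_size = n_vars * bytes_per_var     # one configuration
per_core = config_size * cores           # each core sends/receives to all cores
total = per_core * cores                 # all-to-all, first message

print(config_size / 1e3, "KB")           # → 40.0 KB per configuration
print(per_core / 1e6, "MB")              # → 20.48 MB per core
print(total / 1e9, "GB")                 # ≈ 10.5 GB over the network
```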
Massively Parallel Local Search
Random instances

Massively Parallel Local Search
Crafted instances
Conclusions
• Cooperation helps to improve the performance of a portfolio of state-of-the-art local search algorithms
• Improvements in both the number of solved instances and the Penalized Average Runtime
• Different degrees of parallelism:
  – Few cores (e.g., 16 cores)
  – Large degree of parallelism (e.g., 100 cores)
  – Massively parallel algorithms (e.g., more than 100 cores)
Future Work
• Limiting cooperation to groups of solvers
• Studying the performance of parallel local search on GPUs
• Studying the use of machine learning to improve performance
  – Characterizing “good” and “bad” runs
• Studying other information to exchange
• Adaptive tuning for parallel algorithms
Thanks for your attention!
Parallel Local Search for SAT
Mean number of restarts for each algorithm (4 cores, Prob)

Algorithm  3-SAT  5-SAT  7-SAT
PAWS       8.6    5.0    3.2
G2+p       12.1   5.4    3.1
AG2        12.4   6.3    3.5
AG2+p      11.1   6.0    3.2