Neural network to solve fuzzy constraint satisfaction problems



INTRODUCTION
The constraint satisfaction problem (CSP) is a central part of artificial intelligence [1]. Many problems can be represented and solved as CSPs. In recent years, this concept has been extended to incorporate soft constraints, leading to the emergence of valued constraint satisfaction problems (VCSPs). VCSPs associate costs with sub-assignments of variables within the scope of constraints, and the goal is to find an optimal value for the aggregate function of all the valued constraints. This generalization has resulted in various extensions, including valued CSP [2], weighted CSP [3], and fuzzy CSP [4], [5].
In this paper, our focus lies on fuzzy CSP. In a fuzzy constraint satisfaction problem (FCSP), each constraint represents preferences through fuzzy sets rather than strict categorical satisfaction [6]. As a result, acceptability becomes a gradual notion, allowing for partial solutions. It is well known that solving a CSP is NP-complete, so no polynomial-time algorithm is known for it. Consequently, various approaches have been developed to tackle FCSPs [3], [4], [7], [8]. In this work, we introduce a generalization of our CSP resolution model [9], [10] to effectively solve FCSPs. That model is based on the Hopfield network [11]-[13], which has shown great efficiency in solving combinatorial problems; this fact has encouraged us to adopt and enhance this network for the resolution of fuzzy CSPs. The Hopfield neural network was proposed by Hopfield and Tank [14] in both discrete and continuous modes, and it has been widely used in various applications, including identification [15], pattern recognition, and optimization [16]. The continuous Hopfield neural network (CHN) has also demonstrated its ability to solve hard optimization problems [17], [18]. Our main objective in this paper is to adapt the weights and settings of the continuous Hopfield neural network to solve FCSPs. Additionally, we study the capability of this model with fuzzy constraints to solve binary CSPs as well. The structure of this paper is organized as follows: in the next section, we introduce some approaches for solving the fuzzy constraint satisfaction problem. In section 3, we adapt a general method proposed in [19] to solve quadratic problems (QP). The last section is devoted to the experimental results.

FUZZY CONSTRAINT SATISFACTION PROBLEMS
In practice, a wide range of artificial intelligence problems, and problems from many other areas, can be reformulated as fuzzy constraint satisfaction problems. The goal is to find a variable assignment that satisfies all constraints. In some cases, not all the given constraints need to be satisfied; what matters is the degree of satisfaction reached by a given assignment. In this paper we consider binary FCSPs only. The typical way to solve an FCSP is to extend a partial solution until a complete one is built.

Fuzzy CSP formulation
In the context of this study, a fuzzy constraint is defined by assigning to each tuple in the Cartesian product of the variables involved in the constraint a grade in the interval [0, 1]. The FCSP can be formulated by extending the definition of a CSP to a quadruple (X, D, C, µ_C), where X is the set of variables, D the set of their domains, C the set of constraints, and µ_C the structure bringing together the evaluation functions of the constraints. Each constraint C_i refers to an ordered subset of X, called the scope of C_i and denoted var(C_i); the arity of a constraint is the number of variables in its scope. For a constraint c_i of arity k, let (v_1, v_2, ..., v_k) be the projection onto var(c_i) of the domain values of a given assignment A; the function µ_ci then associates the tuple (v_1, v_2, ..., v_k) with its degree of satisfaction with respect to c_i. This fuzzy value describes how well the tuple satisfies the associated constraint: when µ_ci = 1 we say that c_i is fully satisfied, while at the other extreme, 0, we say that c_i is fully violated. The goal of the resolution is then to maximize the combination of the membership degrees. Several objective functions exist to compute the cost of an FCSP solution; for example, in [7] the cost is defined as the average satisfaction of the individual constraints by a solution v. Ruttkay [20] suggests maximizing each membership function; to achieve a perfect solution, every individual constraint must be fulfilled to the highest possible degree, maximizing the overall satisfaction level across all constraints: Max(µ_C(v)) = {max µ_Ci(v), for all C_i}. Medini and Bouamama [8] propose to maximize the worst membership function: Max(µ_C(v)) = max{min{µ_Ci(v), for i = 1 to M}}.
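The aggregation schemes above can be sketched in a few lines. This is an illustrative sketch, not code from the paper: the function names are ours, and the input is assumed to be the list of per-constraint satisfaction degrees µ_Ci(v) already computed for a candidate solution v.

```python
def aggregate_average(degrees):
    """Average satisfaction over all constraints (the cost used in [7])."""
    return sum(degrees) / len(degrees)

def aggregate_min(degrees):
    """Worst-case (min) satisfaction, whose maximization is proposed
    by Medini and Bouamama [8]."""
    return min(degrees)

# Hypothetical degrees of satisfaction mu_Ci(v) for one assignment v
degrees = [1.0, 0.7, 0.4]
print(round(aggregate_average(degrees), 3))  # -> 0.7
print(aggregate_min(degrees))                # -> 0.4
```

Under the max-min scheme, a solver compares candidate assignments by their worst-satisfied constraint, so improving the single weakest constraint improves the solution.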

FCSPS SOLVED BY CHN
To solve a binary FCSP by CHN, we first reformulate it as a quadratic problem (QP). This is easily done by associating a binary variable x_ik with each variable y_i of the FCSP model, where k ranges over the domain of y_i: x_ik = 1 if y_i is assigned the value k, and 0 otherwise. Let y_i and y_j be two variables with assigned values r and s respectively; for each associated binary fuzzy constraint C_ij, we introduce a state function as defined in (3). To be more general, we can also reformulate unary constraints: for a variable i, a constraint C_i is reformulated as in (4).
We define Q and q so that they equal 1 when the corresponding constraint is fully violated. Considering all the equations defined in (3) and (4), we can state the objective function as a quadratic problem (QP). Valid solutions must conform to stringent constraints that can be expressed as a system of linear equations: introducing a matrix A of dimension N × M and a vector b of dimension M fully initialized to 1, this system can be written as Ax = b. The model is then given as follows, and any optimization approach can be used to solve it. The combinatorial nature of this problem favours meta-heuristic approaches over exact ones. We recently developed an approach based on the Hopfield neural network, which is well suited to this kind of NP-hard problem. To apply this recurrent neural network to minimize the objective function, we construct an energy function such that the local minima of the energy function of the Hopfield recurrent neural network (RNN) align with the feasible solutions of the problem.
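The 0-1 reformulation above can be sketched as follows. This is our reading of the construction, not the paper's exact coefficients: we take the quadratic penalty between two assignments as 1 − µ_Cij (so it equals 1 when the constraint is fully violated, as stated in the text), and the linear system Ax = b encodes "each FCSP variable takes exactly one domain value". All function and parameter names are illustrative.

```python
def build_qp(domains, fuzzy_constraints):
    """domains: {i: [values]}; fuzzy_constraints: {(i, j): {(r, s): mu}}.
    Returns the penalty matrix Q, the linear system (A, b), and the
    mapping from (variable, value) pairs to binary-variable indices."""
    idx = {(i, k): n for n, (i, k) in enumerate(
        (i, k) for i in domains for k in domains[i])}
    N = len(idx)
    Q = [[0.0] * N for _ in range(N)]
    for (i, j), table in fuzzy_constraints.items():
        for r in domains[i]:
            for s in domains[j]:
                mu = table.get((r, s), 0.0)
                # violation degree: 1 when the tuple fully violates C_ij
                Q[idx[i, r]][idx[j, s]] = 1.0 - mu
    # Linear constraints Ax = b: each variable is assigned a single value
    A, b = [], []
    for i in domains:
        row = [0.0] * N
        for k in domains[i]:
            row[idx[i, k]] = 1.0
        A.append(row)
        b.append(1.0)
    return Q, A, b, idx
```

A binary vector x minimizing xᵀQx subject to Ax = b then maximizes the aggregate satisfaction of the fuzzy constraints.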

Hopfield neural network
The Hopfield neural network was first introduced in [14], where Hopfield and Tank originally aimed to tackle combinatorial optimization problems. Since its inception, the Hopfield neural network has undergone extensive research, enhancement, and application in various domains, including optimization, pattern identification, and pattern recognition. Notably, it has shown promising capabilities in providing acceptable solutions to challenging optimization problems. In this paper, we extend the applicability of the continuous Hopfield network (CHN) to solve the quadratic programming (QP) formulation arising from fuzzy constraint satisfaction problems. The Hopfield model comprises a set of interconnected neurons, typically represented by n units. The dynamics of the CHN are governed by (5).
where y represents the vector of neuron inputs, x represents the vector of outputs, which belongs to R^n, and T denotes the weight matrix between each pair of nodes. Each output is given by an activation function x = g(y) that bounds the values between 0 and 1; its specific form is g(y) = (1/2)(1 + tanh(y/u0)). The energy function of the continuous Hopfield network (CHN) is defined in (6).
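The dynamics described above can be sketched as a simple Euler simulation. This is a minimal illustration under our assumptions: we take dy/dt = T·x + i_b with x = g(y) (omitting any decay term), and T, i_b, u0, and the step size dt are illustrative values, not the paper's tuned settings.

```python
import math

def g(y, u0=0.02):
    """Activation g(y) = (1/2)(1 + tanh(y/u0)), bounding outputs to (0, 1)."""
    return 0.5 * (1.0 + math.tanh(y / u0))

def chn_step(y, T, ib, dt=1e-3):
    """One Euler step of the CHN dynamics dy/dt = T x + ib, with x = g(y)."""
    x = [g(yi) for yi in y]
    return [yi + dt * (sum(T[i][j] * x[j] for j in range(len(y))) + ib[i])
            for i, yi in enumerate(y)]
```

Iterating `chn_step` until the outputs stop changing drives the network toward a local minimum of the energy function, which is the behaviour exploited in the next paragraphs.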
We construct the CHN energy function as in [19], [12], and rewrite it so as to solve the quadratic formulation (QP) of binary FCSPs given in the previous section.
The first two terms penalize constraint violations according to their satisfaction degrees; the second ensures that each variable is assigned a single value; the third term aggregates all linear constraints Ax = b; and the last term enforces the integrality property, pushing each neuron close to the 0 or 1 state. To compute the network weights, it suffices to match (6) and (7), where δ is the Kronecker symbol. To obtain an equilibrium point of the CHN, the authors of [19] use an adapted algorithm based on the step-by-step update approach introduced by Talaván and Yáñez in [21]. This approach significantly increases the convergence speed of the neural network: the main idea is to iteratively move in the direction dy/dt until the network reaches a minimum-energy state. To ensure stability, both Haddouch et al. [19] and Talaván and Yáñez [22] use the hyperplane method, which divides the Hamming hypercube representing all possible solutions by a hyperplane that contains only the feasible solutions. The hyperplane method guides the CHN towards stable and optimal solutions while ensuring convergence towards the global energy minimum, which enhances the performance and effectiveness of the network on complex optimization problems. To ensure convergence, we impose two conditions obtained by studying the derivative of the energy function E:
− The first prevents the assignment of two different values to one variable: if ∃ i, j ∈ D(X_k) with i ≠ j and the corresponding neurons satisfy x_ki = x_kj = 1, the energy function E(x) must satisfy E_ir(x) ≥ α q_min + 2ϕ + β − γ ≥ ε.
− The second ensures that all variables are assigned values, avoiding the case x_ir = 0 for all r ∈ {1, ..., d_i}; this condition can be expressed as E_ir(x) ≤ α d + β + γ ≤ −ε.
We found that the optimal value of ε is 10⁻⁵; the parameter settings are obtained by finding a solution that satisfies the given inequalities [19]:

COMPUTATIONAL EXPERIMENTS
To the best of our knowledge, no benchmark has been created for FCSPs. We therefore use, on the one hand, randomly generated FCSPs and, on the other hand, we study the performance of our solver on typical CSP instances [13], [23]. The first choice aims to demonstrate the practical relevance of our approach, highlighting its potential benefits in real-world applications. The second choice evaluates the performance of our approach across diverse problem types, including random, academic, and real-world problems; to this end, we use the benchmark datasets provided by CRIL University [24], which cover a wide range of problem instances and allow us to assess the effectiveness and versatility of our approach across different problem domains. This study is summarized in Table 1, where V is the number of variables of the instance, C its number of constraints, ratio mean is the average of the optimal values found over a series of runs on each instance, and CPU(s) is the average execution time of multiple runs on the instance. In our analysis, we observed no significant difference between the continuous Hopfield neural network for fuzzy constraint satisfaction problems (CHNFCSP) and the continuous Hopfield neural network for constraint satisfaction problems (CHNCSP) when solving ordinary CSPs: both approaches exhibit comparable performance and solution quality. To generate random instances for experimentation, we consider an FCSP problem as a triple (n, d, t), where n represents both the number of variables and the domain size of each variable; n is randomly selected from the interval [10, 20]. The parameters d and t denote the network density and tightness, respectively, and take values in {0.1, 0.3, 0.5, 0.7, 0.9}; we can thus generate a total of 25 classes (d, t) by considering all possible combinations of d and t. For each of these (d, t) classes, we randomly generate 20 instances, resulting in a total of 500 randomly generated CSPs for our evaluation. The truth value of each tuple admitted by a constraint is assigned a random value within the interval [0, 1]. This random instance generation process covers a diverse range of problem characteristics and lets us evaluate the performance of our approach across various scenarios.
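The generation protocol above can be sketched as follows. This is our reading of the text, not the authors' generator: we assume density d gives the probability that a variable pair is constrained, tightness t gives the fraction of tuples that are fully forbidden (degree 0), and the remaining tuples receive a random truth value in [0, 1]. The function and parameter names are illustrative.

```python
import random

def generate_fcsp(n, d, t, seed=None):
    """Random binary FCSP in the (n, d, t) style described in the text:
    n variables, each with domain {0, ..., n-1}."""
    rng = random.Random(seed)
    domain = list(range(n))  # n is both variable count and domain size
    constraints = {}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < d:  # constrain this pair with probability d
                table = {}
                for r in domain:
                    for s in domain:
                        # roughly a fraction t of tuples is fully violated
                        table[r, s] = 0.0 if rng.random() < t else rng.random()
                constraints[i, j] = table
    return domain, constraints
```

Fixing the seed makes an instance reproducible, which is convenient when averaging solver performance over repeated runs on the same (d, t) class.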
The performance evaluation of our approach against a genetic-algorithm-based evolutionary method [25] on random instances is presented in Table 2. The results are displayed as a matrix in which each cell corresponds to a particular combination of density and tightness values; within each cell, the average best truth value (mftv) found and the average CPU execution time of both algorithms are given. Although this approach is quite old, it remains the only one that has explicitly adapted the genetic algorithm to the case of fuzzy constraints. We then carried out further comparisons with recent evolutionary approaches adapted to ordinary CSPs without fuzzy constraints, in order to study the capacity of our approach to solve binary CSPs, which can be considered a particular case of FCSPs. We chose two algorithms designed specifically for CSPs [26], [27]. In their work, Ortiz et al. [26] employed a variable-length genetic algorithm to address the challenge of encoding more intricate rules by using a larger number of features and heuristic actions. The chromosome comprised ten values: nine represented landscape features and the tenth indicated the chosen low-level heuristic for variable ordering. The objective was to find the chromosome most closely matching the current solution state of the constraint satisfaction problem (CSP) and then to apply the corresponding low-level heuristic to guide the search process.
The difference between particle swarm optimization (PSO) and mother tree PSO (MPSO) [27] lies in the implementation of the operators. In the discrete mother tree optimization (DMTO)-CSP MPSO approach, similarly to the recommended feeder pools, the authors use recommendation pools for exploration and exploitation through these operators. They introduce a local recommended pool (LRP) and a global recommended pool (GRP) containing the variable assignments of the local best and global best, respectively, excluding the current particle; the assignments minimizing conflicts are selected. To ensure a fair comparison, we did not rely solely on the original settings provided by the authors; instead, we determined them empirically by varying the values and selecting those yielding the best performance. For the genetic algorithm (GA), we used a population size of 200, a mutation rate of 5%, and a crossover rate of 72%. For MPSO, we set φ1 = φ2 = 1 and a fixed population size of 100. We conducted 500 runs of each method on randomly generated instances using the RB model [23], a variant of the standard Model B capable of generating challenging, hard-to-solve problem instances. Figure 1 shows the comparative results in terms of the number of violated constraints. As can be noted, our approach performs closely to the GA and could be improved by introducing heuristics, as the chosen GA does, when generating the starting point, such as the number of constraints in which a given variable is involved. The detailed per-class mftv and CPU values are reported in Table 2.

CONCLUSION
In this paper, we have addressed the challenges associated with fuzzy constraint satisfaction problems (FCSPs), which find numerous applications in various domains and are extensively studied in operations research. We have proposed a novel approach for solving binary fuzzy constraint satisfaction problems, comprising several key steps. First, we introduced a new model that formulates the FCSP as a 0-1 quadratic program subject to linear constraints. Next, we employed the continuous Hopfield network (CHN) to tackle this problem effectively. To evaluate the performance of our new approach, we conducted experiments comparing it to our previously developed method for solving CSPs using CHN. We ran these approaches on typical CSP instances with conventional constraints, as well as on a collection of randomly generated instances with varying levels of tightness. The results of our experiments demonstrate the effectiveness of our new approach for solving FCSPs. It outperformed the previous method on various problem instances, showcasing its improved performance and suitability for handling fuzzy constraints. This study contributes to the advancement of solving FCSPs and provides valuable insights for researchers and practitioners working on constraint satisfaction problems.

Table 1 .
Performance of CHNFCSP and CHNCSP over typical instances problems

Table 2 .
Performance of GA and CHN over randoms instances problems