
# Heuristic Search Techniques in Artificial Intelligence

Searching and organizing data is an important topic within Artificial Intelligence. There are many problems that require searching for an answer within the solution domain.

There are many possible solutions to a given problem and we do not know which ones are correct. By efficiently organizing the data, we can search for solutions quickly and effectively.

More often than not, there are so many possible options to solve a given problem that no algorithm can be developed to find the right solution directly. Also, going through every single solution is not possible because it is prohibitively expensive. In such cases, we rely on a rule of thumb that helps us narrow down the search by eliminating the options that are obviously wrong. This rule of thumb is called a heuristic. The method of using heuristics to guide our search is called heuristic search.

Heuristic techniques are very handy because they help us speed up the process. Even if the heuristic is not able to completely eliminate some options, it will help us to order those options so that we are more likely to get to the better solutions first.

#### Uninformed versus Informed search

If you are familiar with computer science, you have probably heard of search techniques like Depth First Search (DFS), Breadth First Search (BFS), and Uniform Cost Search (UCS). These are search techniques that are commonly used on graphs to get to the solution. These are examples of uninformed search. They do not use any prior information or rules to eliminate some paths. They check all the possible paths and pick the optimal one.

Heuristic search, on the other hand, is called Informed search because it uses prior information or rules to eliminate unnecessary paths. Uninformed search techniques do not take the goal into account. These techniques don’t really know where they are trying to go unless they just stumble upon the goal in the process.

In the graph problem, we can use heuristics to guide the search. For example, at each node, we can define a heuristic function that returns a score that represents the estimate of the cost of the path from the current node to the goal. By defining this heuristic function, we are informing the search technique about the right direction to reach the goal. This will allow the algorithm to identify which neighbour will lead to the goal.
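As a sketch of how such a heuristic can steer a search, here is a minimal greedy best-first search on a hypothetical toy graph. The `heuristic` values are assumed estimates of the distance from each node to the goal `D`; the frontier always expands the node that looks closest to the goal:

```python
import heapq

def greedy_best_first(graph, heuristic, start, goal):
    # Frontier is a priority queue ordered by the heuristic estimate
    frontier = [(heuristic[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(frontier,
                        (heuristic[neighbour], neighbour, path + [neighbour]))
    return None

# Toy graph and heuristic values (estimated distance to 'D')
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D']}
heuristic = {'A': 3, 'B': 1, 'C': 2, 'D': 0}
print(greedy_best_first(graph, heuristic, 'A', 'D'))  # ['A', 'B', 'D']
```

From `A`, the search prefers `B` over `C` because its heuristic value is lower, and reaches the goal without ever expanding `C`.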

We need to note that heuristic search may not find the optimal solution. This is because we are not exploring every single possibility and we are relying on a heuristic. But it will usually find a good solution in a reasonable time, which is what we expect from a practical solution. In real-world scenarios, we need solutions that are fast and effective. Heuristic searches provide an efficient solution by arriving at a reasonable solution quickly. They are used in cases where the problems cannot be solved in any other way or would take a really long time to solve.

#### Constraint Satisfaction Problems

There are many problems that have to be solved under constraints. These constraints are basically conditions that cannot be violated during the process of solving the problem. These problems are referred to as Constraint Satisfaction Problems (CSPs).

CSPs are basically mathematical problems defined as a set of variables whose states must satisfy a number of constraints. When we arrive at the final solution, the states of the variables must obey all the constraints. This technique represents the entities involved in a given problem as a collection of a fixed number of constraints over variables, which are then solved using constraint satisfaction methods.

These problems require a combination of heuristics and other search techniques to be solved in a reasonable amount of time. In this case, we will use constraint satisfaction techniques to solve problems on finite domains. A finite domain consists of a finite number of elements. Since we are dealing with finite domains, we can use search techniques to arrive at the solution.
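As a toy illustration of why finite domains make search feasible (this is deliberately brute force, not the technique used later in this chapter), the hypothetical variables, domains, and constraint below can be enumerated exhaustively:

```python
from itertools import product

# Hypothetical toy CSP: two variables over finite domains,
# with the constraint that x must be smaller than y
domains = {'x': [1, 2, 3], 'y': [2, 3]}

def satisfies(assignment):
    return assignment['x'] < assignment['y']

# Enumerate every combination in the finite domain and keep the valid ones
solutions = [
    dict(zip(domains, combo))
    for combo in product(*domains.values())
    if satisfies(dict(zip(domains, combo)))
]
print(solutions)  # [{'x': 1, 'y': 2}, {'x': 1, 'y': 3}, {'x': 2, 'y': 3}]
```

Real CSPs have domains far too large for this, which is where backtracking and heuristics come in.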

#### Local search techniques

Local search is a particular way of solving a CSP. It keeps improving the values of the variables until all the constraints are satisfied, iteratively updating them until we arrive at the destination. These algorithms modify a value during each step of the process so that the assignment gets closer to the goal. In the solution space, the updated value is closer to the goal than the previous value, hence the name local search.

A local search algorithm is a type of heuristic search algorithm. These algorithms use a function that calculates the quality of each update. For example, it can count the number of constraints that are being violated by the current update, or it can measure how the update affects the distance to the goal. This is referred to as the cost of the assignment. The overall goal of local search is to find the minimal-cost update at each step.
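For instance, the cost of an assignment can simply count the violated constraints; the three constraints below are hypothetical examples invented for illustration:

```python
# Hypothetical constraints over an assignment dictionary
constraints = [
    lambda a: a['x'] != a['y'],   # x and y must differ
    lambda a: a['x'] < a['z'],    # x must be smaller than z
    lambda a: a['y'] % 2 == 0,    # y must be even
]

def cost(assignment):
    # Lower cost means fewer violated constraints; 0 means all satisfied
    return sum(1 for c in constraints if not c(assignment))

print(cost({'x': 2, 'y': 2, 'z': 3}))  # 1 (only x != y is violated)
print(cost({'x': 1, 'y': 2, 'z': 3}))  # 0 (all constraints satisfied)
```

A local search would repeatedly pick the neighbouring assignment with the lowest such cost.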

Hill climbing is a popular local search technique. It uses a heuristic function that measures the difference between the current state and the goal. When we start, it checks if the state is the final goal. If it is, then it stops. If not, then it selects an update and generates a new state. If it’s closer to the goal than the current state, then it makes that the current state. If not, it ignores it and continues the process until it checks all possible updates. It basically climbs the hill until it reaches the summit.
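The loop just described can be sketched in a few lines; the scoring function and neighbour generator below are toy assumptions, not part of any library:

```python
def hill_climb(start, neighbours, score, max_steps=1000):
    """Minimal hill-climbing sketch: move to the best neighbour
    until no neighbour improves on the current state."""
    current = start
    for _ in range(max_steps):
        candidates = neighbours(current)
        best = max(candidates, key=score, default=current)
        if score(best) <= score(current):
            return current  # summit (local maximum) reached
        current = best
    return current

# Toy problem: climb towards x = 5 on f(x) = -(x - 5)^2
score = lambda x: -(x - 5) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, neighbours, score))  # 5
```

On this smooth one-dimensional function the climb always reaches the summit; on functions with several peaks it can stop at a local maximum, which motivates the next section.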

#### Simulated Annealing

Simulated Annealing is a type of local search as well as a stochastic search technique. Stochastic search techniques are used extensively in various fields such as robotics, chemistry, manufacturing, medicine, and economics. They can be used for tasks like optimizing the design of a robot, determining the timing strategies for automated control in factories, and planning traffic. Stochastic algorithms are used to solve many real-world problems.

Simulated Annealing is a variation of the hill climbing technique. One of the main problems of hill climbing is that it ends up climbing false foothills. This means that it gets stuck in local maxima. So it is better to check out the whole space before we make any climbing decisions. In order to achieve this, the whole space is initially explored to see what it is like. This helps us avoid getting stuck in a plateau or local maxima.

In Simulated Annealing, we reformulate the problem and solve it for minimization, as opposed to maximization. So, we are now descending into valleys as opposed to climbing hills. We are pretty much doing the same thing, but in a different way. We use an objective function to guide the search. This objective function serves as our heuristic.

The reason it is called Simulated Annealing is that it is modeled on the metallurgical process of annealing: we first heat metals up and then let them cool slowly until they reach the optimal energy state.

The rate at which we cool the system is called the annealing schedule. The rate of cooling is important because it directly impacts the final result. In the real-world case of metals, if the cooling is too fast, the metal ends up settling in a sub-optimal low-energy state. For example, if we take the heated metal and plunge it into cold water, it quickly freezes into such a sub-optimal state.

If the rate of cooling is slow and controlled, we give the metal a chance to arrive at the globally optimal state. The chances of rushing into any particular low-energy configuration are lower in this case. Since the rate of cooling is slow, the metal takes its time to settle into the best state. We do something similar with data in our case.

We first evaluate the current state and check whether it is the goal. If it is, we stop. If not, we set the best state variable to the current state. We then define our annealing schedule, which controls how quickly the temperature drops as we descend into the valley. At each step, we generate a new state and compute the difference in cost between it and the current state. If the new state is better, we make it the current state. If it is worse, we still accept it with a certain probability: we draw a random number and accept the worse state only if the number falls below a threshold determined by the temperature and the cost difference. Whenever the current state improves on the best state seen so far, we update the best state. We then lower the temperature according to the annealing schedule and keep doing this until we arrive at the goal.
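A minimal sketch of this procedure, assuming a simple geometric cooling schedule and a toy one-dimensional cost function (both are illustrative choices, not prescribed by the algorithm):

```python
import math
import random

def simulated_annealing(start, neighbour, cost, temp=10.0, cooling=0.95, steps=500):
    """Sketch of simulated annealing for minimization."""
    current = start
    best = start
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with
        # probability e^(-delta / temp), which shrinks as we cool
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        if cost(current) < cost(best):
            best = current
        temp *= cooling  # cool the system per the annealing schedule
    return best

random.seed(0)
cost = lambda x: (x - 3) ** 2                  # global minimum at x = 3
neighbour = lambda x: x + random.uniform(-1, 1)
result = simulated_annealing(0.0, neighbour, cost)
print(round(result, 2))
```

Early on, the high temperature lets the search jump out of poor valleys; as the temperature drops, it behaves more and more like plain hill descent.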

#### Constructing a string using a greedy search

Greedy search is an algorithmic paradigm that makes the locally optimal choice at each stage in the hope of finding the global optimum. In many problems, however, greedy algorithms do not produce globally optimal solutions. An advantage of using greedy algorithms is that they produce an approximate solution in a reasonable time, and the hope is that this approximation is reasonably close to the globally optimal solution.

Greedy algorithms do not refine their solutions based on new information during the search. For example, let’s say you are planning a road trip and you want to take the best route possible. If you use a greedy algorithm to plan the route, it would ask you to take routes that are shorter but might end up taking more time. It can also lead you to paths that may seem faster in the short term but might lead to traffic jams later. This happens because greedy algorithms only think about the next step and not the globally optimal final solution.

Let’s see how to solve a problem using a greedy search. In this problem, we will try to recreate the input string by choosing one character at a time from the alphabet. We will ask the algorithm to search the solution space and construct a path to the solution.

We will be using a package called `simpleai` throughout this chapter. It contains various routines that are useful in building solutions using heuristic search techniques. It’s available at http://bit.ly/1HWgpeJ. We need to make a few changes to the source code in order to make it work in Python 3. A file called `simpleai.zip` has been provided along with the code for the book. Unzip this file into a folder called `simpleai`. This folder contains the original library with all the changes necessary to make it work in Python 3. Place the `simpleai` folder in the same folder as your code and you’ll be able to run your code smoothly.

Create a new Python file and import the following packages:

```python
import argparse
import simpleai.search as ss
```

Define a function to parse the input arguments:

```python
def build_arg_parser():
    parser = argparse.ArgumentParser(description='Creates the input string \
            using the greedy algorithm')
    parser.add_argument("--input-string", dest="input_string", required=True,
            help="Input string")
    parser.add_argument("--initial-state", dest="initial_state", required=False,
            default='', help="Starting point for the search")
    return parser
```

Create a class that contains the methods needed to solve the problem. This class inherits the `SearchProblem` class available in the library. We just need to override a couple of methods to suit our needs. The first method `set_target` is a custom method that we define to set the target string:

```python
class CustomProblem(ss.SearchProblem):
    def set_target(self, target_string):
        self.target_string = target_string
```

`actions` is a method that comes with `SearchProblem`, and we need to override it. It’s responsible for taking the right steps towards the goal. If the length of the current string is less than the length of the target string, it returns the list of possible characters to choose from; otherwise, it returns an empty list:

```python
    # Check the current state and take the right action
    def actions(self, cur_state):
        if len(cur_state) < len(self.target_string):
            alphabets = 'abcdefghijklmnopqrstuvwxyz'
            return list(alphabets + ' ' + alphabets.upper())
        else:
            return []
```

`result` is a method that comes with `SearchProblem`, and we are overriding it. It computes the result by concatenating the current string and the action that needs to be taken:

```python
    # Concatenate state and action to get the result
    def result(self, cur_state, action):
        return cur_state + action
```

The method `is_goal` is a part of the `SearchProblem` and it’s used to check if we have arrived at the goal:

```python
    # Check if goal has been achieved
    def is_goal(self, cur_state):
        return cur_state == self.target_string
```

The method `heuristic` is also a part of the `SearchProblem` and we need to override it. We will define our own heuristic that will be used to solve the problem. We will calculate how far we are from the goal and use that as the heuristic to guide it towards the goal:

```python
    # Define the heuristic that will be used
    def heuristic(self, cur_state):
        # Compare current string with target string
        dist = sum([1 if cur_state[i] != self.target_string[i] else 0
                for i in range(len(cur_state))])

        # Difference between the lengths
        diff = len(self.target_string) - len(cur_state)

        return dist + diff
```

Define the `main` function and parse the input arguments:

```python
if __name__=='__main__':
    args = build_arg_parser().parse_args()
```

Initialize the `CustomProblem` object:

```python
    # Initialize the object
    problem = CustomProblem()
```

Set the starting point as well as the goal we want to achieve:

```python
    # Set target string and initial state
    problem.set_target(args.input_string)
    problem.initial_state = args.initial_state
```

Run the solver:

```python
    # Solve the problem
    output = ss.greedy(problem)
```

Print the path to the solution:

```python
    print('\nTarget string:', args.input_string)
    print('\nPath to the solution:')
    for item in output.path():
        print(item)
```

The full code is given in the file `greedy_search.py`. If you run the code with an empty initial state:

```
$ python3 greedy_search.py --input-string 'Artificial Intelligence' --initial-state ''
```

You will see the path to the solution printed on your Terminal, adding one character at each step until the target string is constructed.

If you run the code with a non-empty starting point:

```
$ python3 greedy_search.py --input-string 'Artificial Intelligence with Python' --initial-state 'Artificial Inte'
```

You will see the remainder of the path printed, this time starting from the given initial state.

#### Solving a problem with constraints

We have already discussed how Constraint Satisfaction Problems are formulated. Let’s apply them to a real-world problem. In this problem, we have a list of names and each name can only take a fixed set of values. We also have a set of constraints between these people that needs to be satisfied. Let’s see how to do it.

Create a new Python file and import the following packages:

```python
from simpleai.search import CspProblem, backtrack, \
        min_conflicts, MOST_CONSTRAINED_VARIABLE, \
        HIGHEST_DEGREE_VARIABLE, LEAST_CONSTRAINING_VALUE
```

Define the constraint that specifies that all the variables in the input list should have unique values:

```python
# Constraint that expects all the different variables
# to have different values
def constraint_unique(variables, values):
    # Check if all the values are unique
    return len(values) == len(set(values))
```

Define the constraint that specifies that the first variable should be bigger than the second variable:

```python
# Constraint that specifies that one variable
# should be bigger than the other
def constraint_bigger(variables, values):
    return values[0] > values[1]
```

Define the constraint that specifies that if the first variable is odd, then the second variable should be even and vice versa:

```python
# Constraint that specifies that one variable
# should be odd and the other even
def constraint_odd_even(variables, values):
    # If first variable is even, then second should
    # be odd and vice versa
    if values[0] % 2 == 0:
        return values[1] % 2 == 1
    else:
        return values[1] % 2 == 0
```

Define the `main` function and define the variables:

```python
if __name__=='__main__':
    variables = ('John', 'Anna', 'Tom', 'Patricia')
```

Define the list of values that each variable can take:

```python
    domains = {
        'John': [1, 2, 3],
        'Anna': [1, 3],
        'Tom': [2, 4],
        'Patricia': [2, 3, 4],
    }
```

Define the constraints for various scenarios. In this case, we specify three constraints as follows:

• John, Anna, and Tom should have different values
• Tom’s value should be bigger than Anna’s value
• If John’s value is odd, then Patricia’s value should be even and vice versa

Use the following code:

```python
    constraints = [
        (('John', 'Anna', 'Tom'), constraint_unique),
        (('Tom', 'Anna'), constraint_bigger),
        (('John', 'Patricia'), constraint_odd_even),
    ]
```

Use the preceding variables and the constraints to initialize the `CspProblem` object:

```python
    problem = CspProblem(variables, domains, constraints)
```

Compute the solution and print it:

```python
    print('\nSolutions:\n\nNormal:', backtrack(problem))
```

Compute the solution using the `MOST_CONSTRAINED_VARIABLE` heuristic:

```python
    print('\nMost constrained variable:', backtrack(problem,
            variable_heuristic=MOST_CONSTRAINED_VARIABLE))
```

Compute the solution using the `HIGHEST_DEGREE_VARIABLE` heuristic:

```python
    print('\nHighest degree variable:', backtrack(problem,
            variable_heuristic=HIGHEST_DEGREE_VARIABLE))
```

Compute the solution using the `LEAST_CONSTRAINING_VALUE` heuristic:

```python
    print('\nLeast constraining value:', backtrack(problem,
            value_heuristic=LEAST_CONSTRAINING_VALUE))
```

Compute the solution using the `MOST_CONSTRAINED_VARIABLE` variable heuristic and `LEAST_CONSTRAINING_VALUE` value heuristic:

```python
    print('\nMost constrained variable and least constraining value:',
            backtrack(problem, variable_heuristic=MOST_CONSTRAINED_VARIABLE,
                value_heuristic=LEAST_CONSTRAINING_VALUE))
```

Compute the solution using the `HIGHEST_DEGREE_VARIABLE` variable heuristic and `LEAST_CONSTRAINING_VALUE` value heuristic:

```python
    print('\nHighest degree and least constraining value:',
            backtrack(problem, variable_heuristic=HIGHEST_DEGREE_VARIABLE,
                value_heuristic=LEAST_CONSTRAINING_VALUE))
```

Compute the solution using the minimum conflicts heuristic:

```python
    print('\nMinimum conflicts:', min_conflicts(problem))
```

If you run the code, you will see the solution printed for each of these heuristics as an assignment of values to the four variables.

You can check the constraints to see if the solutions satisfy all those constraints.

#### Solving the region-colouring problem

Let’s use the Constraint Satisfaction framework to solve the region-colouring problem. Consider a map divided into a number of regions, each labeled with a name, where some regions are adjacent to each other.

Our goal is to color the map with four colors so that no adjacent regions have the same color.

Create a new Python file and import the following packages:

```python
from simpleai.search import CspProblem, backtrack
```

Define the constraint that specifies that the values should be different:

```python
# Define the function that imposes the constraint
# that neighbors should be different
def constraint_func(names, values):
    return values[0] != values[1]
```

Define the `main` function and specify the list of names:

```python
if __name__=='__main__':
    # Specify the variables
    names = ('Mark', 'Julia', 'Steve', 'Amanda', 'Brian',
            'Joanne', 'Derek', 'Allan', 'Michelle', 'Kelly')
```

Define the list of possible colors:

```python
    # Define the possible colors
    colors = dict((name, ['red', 'green', 'blue', 'gray']) for name in names)
```

We need to convert the map information into something that the algorithm can understand. Let’s define the constraints by specifying the list of people who are adjacent to each other:

```python
    # Define the constraints
    constraints = [
        (('Mark', 'Julia'), constraint_func),
        (('Mark', 'Steve'), constraint_func),
        (('Julia', 'Steve'), constraint_func),
        (('Julia', 'Amanda'), constraint_func),
        (('Julia', 'Derek'), constraint_func),
        (('Julia', 'Brian'), constraint_func),
        (('Steve', 'Amanda'), constraint_func),
        (('Steve', 'Allan'), constraint_func),
        (('Steve', 'Michelle'), constraint_func),
        (('Amanda', 'Michelle'), constraint_func),
        (('Amanda', 'Joanne'), constraint_func),
        (('Amanda', 'Derek'), constraint_func),
        (('Brian', 'Derek'), constraint_func),
        (('Brian', 'Kelly'), constraint_func),
        (('Joanne', 'Michelle'), constraint_func),
        (('Joanne', 'Amanda'), constraint_func),
        (('Joanne', 'Derek'), constraint_func),
        (('Joanne', 'Kelly'), constraint_func),
        (('Derek', 'Kelly'), constraint_func),
    ]
```

Use the variables and constraints to initialize the object:

```python
    # Solve the problem
    problem = CspProblem(names, colors, constraints)
```

Solve the problem and print the solution:

```python
    # Print the solution
    output = backtrack(problem)
    print('\nColor mapping:\n')
    for k, v in output.items():
        print(k, '==>', v)
```

The full code is given in the file `coloring.py`. If you run the code, you will see the color assigned to each region printed on your Terminal.

If you color the regions based on this output, you can check that no two adjacent regions end up with the same color.

#### Building an 8-puzzle solver

The 8-puzzle is a variant of the 15-puzzle. You can check it out at https://en.wikipedia.org/wiki/15_puzzle. You will be presented with a randomized grid and your goal is to get it back to the original ordered configuration. You can play the game to get familiar with it at http://mypuzzle.org/sliding.

We will use the A* algorithm to solve this problem. It is an algorithm used to find paths to the solution in a graph. This algorithm is a combination of Dijkstra’s algorithm and a greedy best-first search. Instead of blindly guessing where to go next, the A* algorithm picks the node that looks the most promising. At each node, we generate the list of all possibilities and then pick the one with the minimal cost required to reach the goal.

Let’s see how to define the cost function. At each node, we need to compute the cost. This cost is basically the sum of two costs — the first cost is the cost of getting to the current node and the second cost is the cost of reaching the goal from the current node.

We use this summation to rank the nodes during the search. The second cost is basically an estimate, and it’s not perfect. If the estimate were perfect, the A* algorithm would arrive at the solution very quickly. But that’s not usually the case, and it takes some time to find the best path to the solution. Still, A* is very effective at finding optimal paths and is one of the most popular techniques out there.
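Before turning to the library, the two-part cost can be sketched by hand on a hypothetical toy graph. This is not the `simpleai` implementation, just an illustration: each frontier entry is ranked by `g + h`, where `g` is the cost paid so far and `h` is the heuristic estimate of the remaining cost:

```python
import heapq

def astar_search(graph, heuristic, start, goal):
    # Each frontier entry: (g + h, g, node, path so far)
    frontier = [(heuristic[start], 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node more cheaply
        best_g[node] = g
        for neighbour, step_cost in graph.get(node, []):
            g_new = g + step_cost
            heapq.heappush(frontier, (g_new + heuristic[neighbour],
                    g_new, neighbour, path + [neighbour]))
    return None, float('inf')

# Toy weighted graph with an admissible heuristic (never overestimates)
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)]}
heuristic = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
path, total = astar_search(graph, heuristic, 'A', 'D')
print(path, total)  # ['A', 'B', 'C', 'D'] 3
```

Note how the direct edge `B -> D` (cost 5) is rejected in favour of the cheaper route through `C`, because the ranking accounts for the cost already paid, not just the estimate.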

Let’s use the A* algorithm to build an 8-puzzle solver. This is a variant of the solution given in the `simpleai` library. Create a new Python file and import the following packages:

```python
from simpleai.search import astar, SearchProblem
```

Define a class that contains the methods to solve the 8-puzzle:

```python
# Class containing methods to solve the puzzle
class PuzzleSolver(SearchProblem):
```

Override the `actions` method to align it with our problem:

```python
    # Action method to get the list of the possible
    # numbers that can be moved into the empty space
    def actions(self, cur_state):
        rows = string_to_list(cur_state)
        row_empty, col_empty = get_location(rows, 'e')
```

Check the location of the empty space and create the new action:

```python
        actions = []
        if row_empty > 0:
            actions.append(rows[row_empty - 1][col_empty])
        if row_empty < 2:
            actions.append(rows[row_empty + 1][col_empty])
        if col_empty > 0:
            actions.append(rows[row_empty][col_empty - 1])
        if col_empty < 2:
            actions.append(rows[row_empty][col_empty + 1])

        return actions
```

Override the `result` method. Convert the string to a list and extract the location of the empty space. Generate the result by updating the locations:

```python
    # Return the resulting state after moving a piece to the empty space
    def result(self, state, action):
        rows = string_to_list(state)
        row_empty, col_empty = get_location(rows, 'e')
        row_new, col_new = get_location(rows, action)

        rows[row_empty][col_empty], rows[row_new][col_new] = \
                rows[row_new][col_new], rows[row_empty][col_empty]

        return list_to_string(rows)
```

Check if the goal has been reached:

```python
    # Returns true if a state is the goal state
    def is_goal(self, state):
        return state == GOAL
```

Define the `heuristic` method. We will use the heuristic that computes the distance between the current state and goal state using Manhattan distance:

```python
    # Returns an estimate of the distance from a state to
    # the goal using the Manhattan distance
    def heuristic(self, state):
        rows = string_to_list(state)

        distance = 0
```

Compute the distance:

```python
        for number in '12345678e':
            row_new, col_new = get_location(rows, number)
            row_new_goal, col_new_goal = goal_positions[number]

            distance += abs(row_new - row_new_goal) + abs(col_new - col_new_goal)

        return distance
```

Define a function to convert a list to string:

```python
# Convert list to string
def list_to_string(input_list):
    return '\n'.join(['-'.join(x) for x in input_list])
```

Define a function to convert a string to a list:

```python
# Convert string to list
def string_to_list(input_string):
    return [x.split('-') for x in input_string.split('\n')]
```

Define a function to get the location of a given element in the grid:

```python
# Find the 2D location of the input element
def get_location(rows, input_element):
    for i, row in enumerate(rows):
        for j, item in enumerate(row):
            if item == input_element:
                return i, j
```

Define the initial state and the final goal we want to achieve:

```python
# Final result that we want to achieve
GOAL = '''1-2-3
4-5-6
7-8-e'''

# Starting point
INITIAL = '''1-e-2
6-3-4
7-5-8'''
```

Track the goal positions for each piece by creating a variable:

```python
# Create a cache for the goal position of each piece
goal_positions = {}
rows_goal = string_to_list(GOAL)
for number in '12345678e':
    goal_positions[number] = get_location(rows_goal, number)
```

Create the A* solver object using the initial state we defined earlier and extract the result:

```python
# Create the solver object
result = astar(PuzzleSolver(INITIAL))
```

Print the solution:

```python
# Print the results
for i, (action, state) in enumerate(result.path()):
    print()
    if action == None:
        print('Initial configuration')
    elif i == len(result.path()) - 1:
        print('After moving', action, 'into the empty space. Goal achieved!')
    else:
        print('After moving', action, 'into the empty space')

    print(state)
```

If you run the code, you will get a long output on your Terminal, starting with the initial configuration of the grid.

If you scroll down, you will see the steps taken to arrive at the solution. At the end, you will see the grid in the goal configuration.

#### Building a maze solver

Let’s use the A* algorithm to solve a maze. The maze is defined as a grid of text characters:

The `#` symbols indicate obstacles. The symbol `o` represents the starting point and `x` represents the goal. Our goal is to find the shortest path from the start to the end point. Let’s see how to do it in Python. The following solution is a variant of the solution provided in the `simpleai` library. Create a new Python file and import the following packages:

```python
import math
from simpleai.search import SearchProblem, astar
```

Create a class that contains the methods needed to solve the problem:

```python
# Class containing the methods to solve the maze
class MazeSolver(SearchProblem):
```

Define the initializer method:

```python
    # Initialize the class
    def __init__(self, board):
        self.board = board
        self.goal = (0, 0)
```

Extract the initial and final positions:

```python
        for y in range(len(self.board)):
            for x in range(len(self.board[y])):
                if self.board[y][x].lower() == "o":
                    self.initial = (x, y)
                elif self.board[y][x].lower() == "x":
                    self.goal = (x, y)

        super(MazeSolver, self).__init__(initial_state=self.initial)
```

Override the `actions` method. At each position, we need to check the cost of going to the neighbouring cells and then append all the possible actions. If the neighbouring cell is blocked, then that action is not considered:

```python
    # Define the method that takes actions
    # to arrive at the solution
    def actions(self, state):
        actions = []
        for action in COSTS.keys():
            newx, newy = self.result(state, action)
            if self.board[newy][newx] != "#":
                actions.append(action)

        return actions
```

Override the `result` method. Depending on the current state and the input action, update the `x` and `y` coordinates:

```python
    # Update the state based on the action
    def result(self, state, action):
        x, y = state

        if action.count("up"):
            y -= 1
        if action.count("down"):
            y += 1
        if action.count("left"):
            x -= 1
        if action.count("right"):
            x += 1

        new_state = (x, y)

        return new_state
```

Check if we have arrived at the goal:

```python
    # Check if we have reached the goal
    def is_goal(self, state):
        return state == self.goal
```

We need to define the `cost` function. This is the cost of moving to a neighboring cell, and it’s different for vertical/horizontal and diagonal moves. We will define these later:

```python
    # Compute the cost of taking an action
    def cost(self, state, action, state2):
        return COSTS[action]
```

Define the heuristic that will be used. In this case, we will use the Euclidean distance:

```python
    # Heuristic that we use to arrive at the solution
    def heuristic(self, state):
        x, y = state
        gx, gy = self.goal

        return math.sqrt((x - gx) ** 2 + (y - gy) ** 2)
```

Define the `main` function and also define the map we discussed earlier:

```python
if __name__ == "__main__":
    # Define the map
    MAP = """
    ##############################
    #         #              #   #
    # ####    ########       #   #
    #  o #    #              #   #
    #    ###     #####  ######   #
    #         ###   #  #         #
    #            #  #  #  #   ####
    #     ######    #  #  #  x   #
    #        #      #            #
    ##############################
    """
```

Convert the map information into a list:

```python
    # Convert map to a list
    print(MAP)
    MAP = [list(x) for x in MAP.split("\n") if x]
```

Define the cost of moving around the map. The diagonal move is more expensive than horizontal or vertical moves:

```python
    # Define cost of moving around the map
    cost_regular = 1.0
    cost_diagonal = 1.7
```

Assign the costs to the corresponding moves:

```python
    # Create the cost dictionary
    COSTS = {
        "up": cost_regular,
        "down": cost_regular,
        "left": cost_regular,
        "right": cost_regular,
        "up left": cost_diagonal,
        "up right": cost_diagonal,
        "down left": cost_diagonal,
        "down right": cost_diagonal,
    }
```

Create a solver object using the custom class we defined earlier:

```python
    # Create maze solver object
    problem = MazeSolver(MAP)
```

Run the solver on the map and extract the result:

```python
    # Run the solver
    result = astar(problem, graph_search=True)
```

Extract the path from the result:

```python
    # Extract the path; each element of result.path()
    # is an (action, state) pair, so keep only the states
    path = [x[1] for x in result.path()]
```

Print the output:

```python
    # Print the result
    print()
    for y in range(len(MAP)):
        for x in range(len(MAP[y])):
            if (x, y) == problem.initial:
                print('o', end='')
            elif (x, y) == problem.goal:
                print('x', end='')
            elif (x, y) in path:
                print(' ', end='')
            else:
                print(MAP[y][x], end='')

        print()
```

If you run the code, you will see the map printed with the solved path traced from the starting point `o` to the goal `x`.

Source: Artificial Intelligence on Medium
