Othello is a well-known two-player strategy game. For this project, you will develop portions of an intelligent program that plays Othello. We've provided you with a user interface and the game logic (code that implements the rules of the game). You will write the search and strategy routines that will allow your program to play the game against a human opponent.
Othello, also known as Reversi, is a strategy game played on a square board divided into an 8x8 grid. The rules of the game, along with some notion of strategy, are described in the Wikipedia entry on Reversi; if you haven't played Othello before, or don't remember how it works, you should at least read the first couple of sections of that entry.
Be sure you know how to play the game before attempting to complete this program; it will save you much time and effort. As provided, the program will already play a game of Othello with two human players, but it will not allow you to play against a computer player until you create your AI class and write the necessary code in the OthelloAIFactory class (which is described later in the write-up).
All of the code that you'll need to complete the project is included in the Zip archive. Much of the code is provided in compiled (i.e., .class) form. The provided .java files are heavily commented.
You'll only need to work on two classes. First, you need to create a new class that implements the OthelloAI interface. Your class needs to be named in a certain way; specifically, the name of the class needs to begin with OthelloAI, followed by your eight-digit student ID#. So, if your student ID# is 12345678, your class should be called OthelloAI12345678. This is important!
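For reference, here is a minimal skeleton of what such a class might look like. Treat the method name chooseMove and the OthelloGameState and OthelloMove types shown below as assumptions drawn from the rest of this write-up; the provided OthelloAI interface is the authoritative source for the exact signature.

    // Minimal skeleton only; check the provided OthelloAI interface for the
    // actual method signature before writing your own class.
    public class OthelloAI12345678 implements OthelloAI
    {
        // chooseMove is assumed to receive the current game state and return
        // the move this AI wants to make.
        public OthelloMove chooseMove(OthelloGameState state)
        {
            // ...run the search described later in this write-up and return
            // the best move found...
            return null;  // placeholder
        }
    }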
Once you've created your AI class, you'll also need to write one line of code in the OthelloAIFactory class. The comments in that class will explain what you need to do and why.
Everything else is to be left as-is.
The Othello class contains a main( ) method. To run the program, execute the Othello class. The provided GUI is simple and straightforward to use. When you run Othello, a window will appear with a green area with the label "Click here to start game." Click the green area and you'll be asked to specify whether each player should be controlled by a human or the computer; for now, specify human for both, as you haven't implemented your AI yet. Clicking on OK starts the game.
A human-controlled player makes a move by double-clicking an empty square on the grid. Not all squares constitute valid moves; the mouse cursor will turn into a "hand" when a square is a valid one, much like when you hover over a link in your browser. (Note that the mouse handling is a tad bit buggy at the moment, so you may have to move the mouse within the square a bit in order for the cursor to change.) The computer simply moves when it is its turn. The GUI animates the placing and flipping of tiles, so that you can see the moves "in action." Status messages display the score and remind you whose move it is.
You will be building a rudimentary artificial intelligence (AI) so that the computer can play a game of Othello against you (or against another instance of your artificial intelligence). Your task for this project is fairly narrow, so you can disregard the vast majority of the code that we gave you, most of which implements either the GUI or the game logic. In fact, most of the code has been provided in compiled (i.e., .class) form, rather than as source code, for this very reason.
There are three main abstractions that you need to understand in order to write the code required for this project:
You can think of the possible game states as being arranged, conceptually, in a kind of search tree called a game tree. Each node of the tree contains a particular game state g. Its children are the game states that can result from making each valid move from the state g.
The root of the tree is the initial game state — that is, the Othello game before the first move is made. The children of this initial state are all of the possible states that can arise from the black player (who moves first) making a valid opening move. There are four such states, corresponding to the four possible moves that the black player is permitted to make at the opening. (All other moves are illegal and, as such, are not to be considered.)
Here is a partial look at an Othello game tree: [figure: a partial Othello game tree]
In the picture, from the initial state, there are four possibilities from which the black player can choose its initial move. From the first of those, we see that there are three possible moves that the white player can make in response. Other moves aren't pictured, but the tree continues to grow in this fashion. (Not surprisingly, the game tree can grow large rather quickly.)
We'll call the leaves in such a game tree the final states. These leaves represent the states in which the game is over, so the winner (or a tie) has been decided.
Each time a player wants to pick a move, he or she wants to pick the one that will lead to a winning game state. We can determine the best move in three steps:

1. Generate the complete game tree whose root is the current game state.
2. Evaluate each of the final states, determining whether it represents a win, a loss, or a tie for us.
3. Choose the move that leads toward the best final state we can reach, assuming that our opponent will always choose the moves that are best for him or her.
Assuming that you had a complete game tree at your disposal, this is a simple approach to implement. However, practical limitations make this approach impossible. First of all, the number of game states on each level of the tree grows exponentially as you work your way down the tree, since there are several possible moves that can be made from any particular game state. There simply won't be enough memory to store the entire game tree. (You can imagine that, if you build the game tree 20 levels deep, and there are four possible moves that can be made from any particular state, the number of nodes in the tree would be greater than 4^20, which is more than a trillion!) Besides, even if there were enough memory available to store the tree, the processing time needed to create the entire game tree would be prohibitive.
So we'll need to find a compromise — an approach that perhaps doesn't always find the best possible outcome, but that makes a decision in a reasonable amount of time and using a reasonable amount of memory.
The study of artificial intelligence has much to say about good ways to search toward a goal when it's impractical to check all possible paths toward it.
We can first make use of the following observation: Suppose one player has just made a move, and the other player now wants to figure out the best move to make, using the search tree approach we've been discussing. That player need only concern themselves with the subtree rooted at the current game state. Once a move is made, all the other moves that could have been made can be ignored, since it is no longer possible to take those paths down the tree. Thus, when analyzing the next move to make, we need only generate the part of the search tree that originates from the current game state.
This approach reduces our storage needs significantly — and we don't waste time or memory processing parts of the tree that we can no longer reach.
Even if we generate only the part of the tree that we need, that part will still be much too large to store until we're nearing the end of the game. This is where a heuristic search comes into play. In a heuristic search, we generate as much of the relevant subtree as is practical, using the resulting game states to guide us in selecting a move that we hope will be the best.
There are several strategies that we could use. At the heart of the strategy that we'll use is the notion of an evaluation function that we discussed earlier. We'll need to rate each particular game state in some way, so that we can decide which of a large number of game states is the best outcome for us. A simple approach — though one that ignores some aspects of the game — is the following:
eval(state) = number of tiles belonging to me − number of tiles belonging to my opponent
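As a concrete illustration, this simple evaluation might be written as the sketch below. The getBlackScore( ) and getWhiteScore( ) accessor names are assumptions about the provided OthelloGameState class, so substitute whatever the class actually offers.

    // A minimal sketch of the tile-count evaluation.  It assumes (but does
    // not know) that OthelloGameState provides getBlackScore() and
    // getWhiteScore(); iAmBlack records which color this AI is playing.
    private int eval(OthelloGameState state, boolean iAmBlack)
    {
        int myTiles = iAmBlack ? state.getBlackScore() : state.getWhiteScore();
        int opponentTiles = iAmBlack ? state.getWhiteScore() : state.getBlackScore();
        return myTiles - opponentTiles;
    }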
It's also important to note here that you do not need to actually build a game tree in memory. Our algorithm will perform a sort of depth-first search on the game tree, meaning that we can use parameters in a recursive method (stored on the run-time stack) to perform the search, eliminating the need to actually build and store a game tree. This will dramatically reduce the amount of memory needed to choose a move, since only one path in the tree will ever need to be stored on the run-time stack at a time.
Putting these ideas together, we can develop a search algorithm that will look for the move that leads to the game state that evaluates to the highest value. That algorithm looks something like this:
    int search(OthelloGameState s, int depth)
    {
        if (depth == 0)
            return evaluation of s
        else
        {
            if (it's my turn to move)
            {
                for each valid move that I can make from s
                {
                    make that move on s yielding a state s'
                    search(s', depth - 1)
                }
                return the maximum value returned from the recursive search calls
            }
            else
            {
                for each valid move that my opponent can make from s
                {
                    make that move on s yielding a state s'
                    search(s', depth - 1)
                }
                return the minimum value returned from the recursive search calls
            }
        }
    }
There are a few things we need to discuss about the algorithm above. First, notice that there are two cases of recursion: either it is the turn of the computer player who is currently making the decision, or it is its opponent's turn. In each case, the algorithm is almost the same, except that on the computer player's own turn we take the maximum of the values returned by the recursive calls (we choose the move that is best for us), while on the opponent's turn we take the minimum (we assume the opponent will choose the move that is worst for us).
You may not assume that the computer player will always be the black or the white player. Either the black or the white player (or both!) might be a computer player. When deciding whether it's "my turn" or "my opponent's turn," you'll have to exercise some caution to ensure that you're making the right decision.
Second, notice the depth parameter. This will be used to limit the depth of our search, to make sure that our search is of a manageable length. Each time we recurse one level deeper, the depth is reduced by one, and we stop recursing when it reaches zero.
Third, observe that when one player makes a move, it isn't necessarily the case that the other player will be making the next move; occasionally, in Othello, the same player gets to move twice in a row. So, care must be taken in deciding whose turn it is. The easiest way to deal with this problem is to rely on the current game state to keep track of this for you; it can always tell you reliably whose turn it is.
Lastly, note that this algorithm returns the evaluation of the best state, not the best state itself. In short, calling search(s, 4) for some state s asks the following question: "Looking four moves into the future, and assuming I do the best I can do and so does my opponent, how well will the state s turn out for me?" You'll need to exercise some care in actually implementing this algorithm so that chooseMove( ) will be able to call search( ) and use the result to help it choose the right move.
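To make these points concrete, here is one way search( ) and chooseMove( ) might fit together. This is a sketch under assumptions, not the required implementation: every OthelloGameState method used below (isBlackTurn( ), isValidMove( ), makeMove( ), clone( ), gameIsOver( ), getBlackScore( ), getWhiteScore( )), the OthelloMove constructor, and the fixed depth of 4 are guesses to be replaced with whatever the provided classes and your own tuning dictate.

    public class OthelloAI12345678 implements OthelloAI
    {
        private boolean iAmBlack;   // which color this AI is playing

        public OthelloMove chooseMove(OthelloGameState state)
        {
            // The state says whose turn it is; since chooseMove is called
            // when it's this AI's turn, that tells us which color we play.
            iAmBlack = state.isBlackTurn();

            OthelloMove bestMove = null;
            int bestValue = Integer.MIN_VALUE;

            // Try every valid move from the current state, search a few
            // moves ahead from each, and remember the move whose resulting
            // state evaluates best for us.
            for (int row = 0; row < 8; row++)
            {
                for (int col = 0; col < 8; col++)
                {
                    if (state.isValidMove(row, col))
                    {
                        OthelloGameState copy = state.clone();   // assumed deep copy
                        copy.makeMove(row, col);
                        int value = search(copy, 4);
                        if (bestMove == null || value > bestValue)
                        {
                            bestValue = value;
                            bestMove = new OthelloMove(row, col);
                        }
                    }
                }
            }
            return bestMove;
        }

        // Returns the evaluation of the best state reachable from s within
        // the given number of moves, assuming both players play their best.
        // Assumes the provided game logic skips a player who must pass, so
        // a state that isn't over always offers at least one valid move.
        private int search(OthelloGameState s, int depth)
        {
            if (depth == 0 || s.gameIsOver())
                return eval(s);

            // Ask the state whose turn it is, since the same player can
            // sometimes move twice in a row.
            boolean myTurn = (s.isBlackTurn() == iAmBlack);
            int best = myTurn ? Integer.MIN_VALUE : Integer.MAX_VALUE;

            for (int row = 0; row < 8; row++)
            {
                for (int col = 0; col < 8; col++)
                {
                    if (s.isValidMove(row, col))
                    {
                        OthelloGameState copy = s.clone();
                        copy.makeMove(row, col);
                        int value = search(copy, depth - 1);
                        best = myTurn ? Math.max(best, value) : Math.min(best, value);
                    }
                }
            }
            return best;
        }

        // The simple tile-count evaluation described earlier.
        private int eval(OthelloGameState s)
        {
            int difference = s.getBlackScore() - s.getWhiteScore();
            return iAmBlack ? difference : -difference;
        }
    }

One design note on the sketch: looping over row and column indices is just one plausible way to enumerate moves; if the provided classes expose a list of valid moves directly, iterating over that list is cleaner.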
The core of your AI — what will set it apart from others — is the evaluation function that it uses to decide how "good" each board configuration is. I'm leaving this as an open problem and you're welcome to implement your evaluation function however you'd like. You might want to poke around the web looking for strategy guides or other information, taking into account, for example, that some squares on the Othello board are considered more important than others.
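As one illustrative possibility (not a requirement, and not necessarily a good set of numbers), an evaluation that weights squares by position might look like the sketch below; the weights are made up, and the getCell( ) accessor and OthelloCell enumeration are assumptions about the provided code.

    // Illustrative only: corners are valuable because they can never be
    // flipped, while the squares adjacent to corners are often risky.
    private static final int[][] WEIGHTS = {
        {  50, -10,   8,   6,   6,   8, -10,  50 },
        { -10, -15,   2,   2,   2,   2, -15, -10 },
        {   8,   2,   1,   1,   1,   1,   2,   8 },
        {   6,   2,   1,   1,   1,   1,   2,   6 },
        {   6,   2,   1,   1,   1,   1,   2,   6 },
        {   8,   2,   1,   1,   1,   1,   2,   8 },
        { -10, -15,   2,   2,   2,   2, -15, -10 },
        {  50, -10,   8,   6,   6,   8, -10,  50 }
    };

    // Assumes (but does not know) that OthelloGameState offers a
    // getCell(row, col) method returning OthelloCell.BLACK, WHITE, or NONE.
    private int weightedEval(OthelloGameState s)
    {
        int total = 0;
        for (int row = 0; row < 8; row++)
        {
            for (int col = 0; col < 8; col++)
            {
                if (s.getCell(row, col) == OthelloCell.BLACK)
                    total += WEIGHTS[row][col];
                else if (s.getCell(row, col) == OthelloCell.WHITE)
                    total -= WEIGHTS[row][col];
            }
        }
        return iAmBlack ? total : -total;
    }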
It's intended to be fun to play against your own program to see if you can beat it, and I also hope you enjoy fine-tuning your program until you have trouble beating it.
After this project's due date has passed, I'll be gathering all of your AIs together and running a tournament to determine who has the best AI. In fairness, I'll explain here how the tournament will be organized:
The outcome of the tournament will have no bearing on your grade, but it will hopefully motivate you to think a bit about how you might tune up your evaluation function — or explore alternative ways of helping your AI to see farther into the future. (You are required, fundamentally, to use the algorithm shown in this write-up, though there are optimizations you can make to it, if you're so inclined. If you're not sure whether your idea is permissible, ask me and I'd be glad to let you know what I think about it.)
Good luck!