Thoughts on algorithms

I’ve been thinking about how a Letterpress-playing algorithm might work. If programming to play and/or analyze Letterpress games isn’t of interest to you, you can safely skip the rest of this post!

For the record, I think that it’s fair game to use a spellchecker if you can’t remember how to spell something … bivuacs bivoacs bivuoacs oh yeah, bivouacs … and fair to try an “un-” prefix or “-ly” suffix or whatever just in case … and certainly fair to use all those scrabble words I’ve learned over the years even if I have no idea what many of them mean.  But using an algorithm without the opponent knowing that you’re doing it is going too far, IMO.  Also, I’m no expert on game AI by any means, but it’s a long-time interest.  So here goes …

I’m assuming that the algorithm would use alpha/beta pruning. One thing that makes me think this approach could solve most boards perfectly is that for most games, the depth of the tree never has to get very large. Most games seem to be less than twenty moves, i.e., ten per player, and the kind of situation where a game can go a lot longer is, as far as I can see, only the situation in which a few unclaimed tiles are left and the players are jockeying to get the last move.

Also, in any given position, my intuition is that of all the possible moves some are clearly much stronger than others in ways that could be fairly well recognized heuristically.  The number of defended tiles is an obvious heuristic.  Add a bonus for a defended edge tile (more for a corner), and a bonus for adjacent defended tiles.  A tile that is going to be in high demand, such as the only “E” on a board with few vowels, should carry an additional bonus.
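The heuristics above can be sketched as a small evaluation function. The specific weights, the tile coordinates, and the `demand` bonus table are all illustrative assumptions, not from any real implementation:

```python
# A sketch of the positional heuristics above: defended tiles score,
# with bonuses for edges, corners, adjacent defended tiles, and
# high-demand letters. All weights are illustrative assumptions.

def neighbors(r, c, size=5):
    """Orthogonally adjacent squares of (r, c) on a size x size board."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < size and 0 <= nc < size:
            yield nr, nc

def evaluate(mine, demand=None, size=5):
    """Score a set of owned (row, col) tiles. A tile is defended when
    all of its neighbors are also owned."""
    demand = demand or {}
    defended = {t for t in mine
                if all(n in mine for n in neighbors(*t, size))}
    score = 0.0
    for r, c in defended:
        score += 2.0                                   # base: defended tile
        corner = r in (0, size - 1) and c in (0, size - 1)
        edge = r in (0, size - 1) or c in (0, size - 1)
        if corner:
            score += 1.0                               # corner bonus
        elif edge:
            score += 0.5                               # edge bonus
        # adjacency bonus: defended tiles next to other defended tiles
        score += 0.25 * sum(n in defended for n in neighbors(r, c, size))
        score += demand.get((r, c), 0.0)               # scarce-letter bonus
    return score
```

A board corner owned together with its two neighbors makes the corner itself defended, scoring the base value plus the corner bonus.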

All of those things being equal, it should prefer positions where the score differential is higher.  And positions where the number of unclaimed letters is lower, to give it a bias for trying to finish off a game when it can.

Even if some boards would be hard to solve perfectly, a fairly simple algorithm should be far stronger than almost any human player.  It’s a much easier game to solve than Scrabble, for example.


6 thoughts on “Thoughts on algorithms”

  1. The problem with Alpha/Beta is that it assumes perfect play from its opponent. This would mean that the AI player could run into significant challenges as most humans do not play Letterpress perfectly. Alpha/Beta will choose its move to maximize its lowest guaranteed score, and this strategy would most likely be no better (and in some cases worse) than a simpler algorithm early in the game.

    I’ve designed a weighted greedy player that weights letters based on their position being next to its own, undefended letters of its opponent, letters in corners, and letters on edges. The letter weights are summed for all possible words and the word with the highest weight is made. The results are promising so far, and the weighted player is able to beat a first-moving player making the longest possible word.
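    The weighted-greedy idea reduces to a few lines once the per-letter weights are computed. The weight table below is a hypothetical stand-in for the positional weights described (undefended opponent letters, corners, edges):

```python
# Sketch of the weighted greedy strategy: sum per-letter weights over
# each candidate word and play the heaviest. The weight values are
# hypothetical stand-ins for the position-based weights described.
def weighted_greedy(words, weight):
    """Return the playable word whose letters carry the highest total weight."""
    return max(words, key=lambda w: sum(weight.get(ch, 0.0) for ch in w))
```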

    • Thanks for the reply! I’ve seen a couple of other people with heuristic algorithms, and as I expected the result is a pretty strong player without any search. Certainly much stronger than a player that just takes score into account.

      Is your program playable? I’d love to try it out. I found the project via your home page but haven’t done more than glance at it.

      I don’t see your point about alpha/beta. I’m assuming that what would be maximized would not be the game score, but a heuristic evaluation (as is done with alpha/beta for chess or other games). Chess programs that use alpha/beta this way don’t assume perfect play from the human player. If the human makes weak plays then alpha/beta works even better. Maybe I just missed your point, or perhaps my memory of all this is just too rusty and I’m wrong. I haven’t looked at it in years.

  2. You can play the automatic players today, but it requires you to tweak Letterpress’ main() function, compile the project from source, and use the command line to enter a string of coordinates. Hopefully I’ll make it friendlier in the future. I’d like to have a GUI to make it easy to play against a couple of algorithms. I’ve thrown up some console output onto Pastebin so you can follow along at home in the meantime.

    In Alpha/Beta the player works to optimize the utility (reward) received at the end of the game. Let’s say that the A/B player has reached the end of the game and has a choice between two moves.

    1. The first move can result in a win or a loss after the opponent’s move. (Utility of 1.0 for the win and -1.0 for the loss.)
    2. The second move will result in a tie. (Utility of 0.)

    The A/B Player will choose the tie, as it will assume that the opponent will win the game if it makes the first move. (Choosing like this: MAX(0.0, MIN(1.0, -1.0)) == MAX(0.0, -1.0) == 0.0.)
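    That calculation can be reproduced with a toy minimax over the two-move example above (the tree encoding is just an illustration):

```python
# Toy minimax reproducing the endgame choice above: lists are internal
# nodes, numbers are leaf utilities from the A/B player's perspective.
def minimax(node, maximizing=True):
    if not isinstance(node, list):                    # leaf node
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Move 1: the opponent then chooses between our win (1.0) and loss (-1.0).
# Move 2: a guaranteed tie (0.0).
game = [[1.0, -1.0], 0.0]
# minimax(game) takes the tie: max(min(1.0, -1.0), 0.0) == 0.0
```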

    But it’s possible that in order to win the opponent would have to make an obscure word (say HEXYLENES). It could be quite unlikely that the average human opponent would play this word. If the computer player were using an algorithm other than A/B it might be able to come away with a win in this situation, where A/B would settle for the tie.

    Heuristic functions certainly have a role in A/B, but they are used to approximate the utility of a state of the game. In Chess there is no score per se, or the official material count is misleading (trading queens looks like a gain for the half-move before the recapture). A heuristic evaluation function can account for these variances and produce an estimated value of the game that is correlated with the actual utility for a given state.

    This heuristic function will also be used to truncate the number of calls A/B makes. Looking at the output of the program at Pastebin, there are 1,224 possible words that can be made on that board. That’s the branching factor for the A/B implementation. The players play to ~700 moves. That means A/B pruning with a good move ordering would have to look at 1,224^(700/2) states (or 5.3*10^1080 states). That’s way too many. At a certain point A/B will rely on its heuristic function to predict a utility summarizing the rest of the moves, thereby cutting the exponent down and yielding a more manageable number of states. With a branching factor as large as this A/B will have a very shallow search tree (looking three moves ahead still yields 43 thousand states, and four moves gives 1.5 million states). That means that A/B play will be very dependent on the quality of its heuristic function, and the ability of the heuristic to accurately predict the utility of the game from 600 moves away.
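    The arithmetic above can be checked directly; with perfect move ordering, alpha/beta visits roughly b^(d/2) states for branching factor b and depth d:

```python
# Reproducing the state-count arithmetic above. With perfect move
# ordering, alpha/beta visits roughly b^(d/2) states for branching
# factor b and search depth d.
import math

b = 1224                                 # playable words on the sample board
full_game = (700 / 2) * math.log10(b)    # exponent of 10 for a full game
three_ply = b ** 1.5                     # roughly 43 thousand states
four_ply = b ** 2                        # roughly 1.5 million states
# full_game comes out just under 1081, i.e. about 5*10^1080 states.
```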

    That’s a long way of elaborating on my original point—A/B will run into situations where it could win a game, but instead it will choose a guaranteed tie. This is a function of the algorithm’s design, and it yields optimal play. But a riskier algorithm may yield better results over the long run, especially if it is able to exploit the limited set of words that a given player knows to its advantage.

    A/B remains an interesting challenge, and it’s on the list of players I’d like to write. But the above issues are why A/B may not be an ideal choice for a skillful computer player. That’s also why I made the Weighted Greedy player first. Greedy algorithms are typically simpler, faster, and have the potential to yield similar results (I think it’s within a factor of 2 for knapsack problems).

    I’d be curious to hear any thoughts on what characteristics of a Letterpress game could be used to find a good heuristic. Certainly there’s the score, number of remaining neutral letters, and ratios of defended letters to undefended for both players. Anything else that might prove useful?

    PS: If I’ve made any errors please let me know. I believe that what I’ve written above is accurate.

    PPS: My implementation has one serious flaw that you’ll notice if you look at the output. Both the Greedy and Weighted Greedy players use the first available instance of a letter that they come to. That lengthens the game significantly, so the quoted move numbers may be an order of magnitude off. That would mean that the quoted total state numbers are significantly overstated. But the numbers for shallow depths (3, 4) still apply and are the main limiting factor in an A/B implementation. This could also mean that a heuristic would only need to look tens of moves ahead instead of hundreds.

    • If you want to increase the chance of winning by assuming that the human opponent is not going to use some of the more obscure words that’s easy to do, just by not considering them when you’re looking at the opponent’s options.

      But I’m pretty sure I’ve played HEXYLENES before, and other chemical names — mostly learned from playing Scrabble, not from chem classes. 🙂 So that’s an assumption that I’m not sure you want the program to make. And draws are extremely rare in this game anyway. In most games against most players, my hunch is that the games will tend to be short, in part because of forced endings like this example (that nobody has responded to yet). By the time the board is simple enough for a human to find a forced win, my hunch is that there were probably forced wins on prior moves that I would not be able to find.

      Good chess programs using alpha/beta pruning don’t look all that deep in the tree, and don’t come close to evaluating every possible position to that depth (the whole point of a/b pruning). A letterpress algorithm won’t have to look very deep in the tree either, especially considering that humans rarely do any search when they play letterpress.

      Or I guess I should say, *I* rarely search when I’m playing. Sometimes in the opening, when I can see a second move that would expand corner territory, and fairly often now at the end when there are just a few letters remaining (although very imperfectly). I doubt many human players do much more search than that in their analysis. It’s nearly all heuristic — something I like about the game very much. So a computer player that has strong heuristics would gain a huge advantage with only a little search.

      And there’s a lot of pruning that you don’t seem to be considering. First, you’d want to identify the key letters on the board — the undefended opponent letters, and unclaimed letters. Since those are the only ones that change the game state, you only have to consider words that contain one or more of those letters. As you get closer to the endgame this focuses the search more and more.

      Then eliminate words that are redundant to search. If HEXYLENES is playable then you don’t have to consider HEXES or EELS, etc. (To a first approximation anyway, there are subtleties I’m ignoring.) Furthermore if you already own the only X and T on the board, then you don’t have to search ETHYLENES either, because it produces exactly the same game state as HEXYLENES.
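      Both pruning rules can be expressed as one multiset comparison over the “relevant” letters only. This is a sketch under the same first approximation noted above; the relevant set would come from the board state:

```python
# Sketch of the redundancy pruning above: compare words by the multiset
# of relevant letters they contain (undefended opponent tiles plus
# unclaimed tiles) and drop any word covered by another. Letter-level
# comparison is a first approximation, as noted in the text.
from collections import Counter

def prune_redundant(words, relevant):
    """Keep only words whose relevant-letter multisets are not covered
    by an already-kept word's multiset."""
    keys = {w: Counter(ch for ch in w if ch in relevant) for w in words}
    kept = []
    for w in sorted(words, key=lambda w: -sum(keys[w].values())):
        covered = any(all(keys[w][ch] <= keys[k][ch] for ch in keys[w])
                      for k in kept)
        if not covered:
            kept.append(w)
    return kept
```

With X and T already owned (so excluded from the relevant set), HEXES and EELS are covered by HEXYLENES, and ETHYLENES collapses into it as an exact equivalent.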

      After that pruning, heuristically evaluate each play. Take the best N, for some relatively small N. Do the same for the opponent’s best N replies. If you can search at least two ply deep with N >= 20 or so, you’ve got a killer algorithm. Search four ply and I’ll bet it wins 95% after giving the human the first move.
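      The proposed search is essentially a fixed-width beam over the game tree. Here is a minimal sketch; `moves`, `apply_move`, and `evaluate` are hypothetical stand-ins for a real move generator and evaluation function:

```python
# Sketch of the proposed search: rank moves heuristically, keep only
# the n best-looking at each node, and alternate max/min for a few
# plies. moves, apply_move, and evaluate are hypothetical stand-ins.
def shallow_search(state, depth, n, moves, apply_move, evaluate,
                   maximizing=True):
    """Best heuristic value reachable in `depth` plies, examining only
    the n highest-rated moves at each node."""
    candidates = moves(state)
    if depth == 0 or not candidates:
        return evaluate(state)
    ranked = sorted(candidates,
                    key=lambda m: evaluate(apply_move(state, m)),
                    reverse=maximizing)[:n]
    pick = max if maximizing else min
    return pick(shallow_search(apply_move(state, m), depth - 1, n,
                               moves, apply_move, evaluate, not maximizing)
                for m in ranked)
```

With N around 20 the per-level cost stays tiny compared to the 1,224-word branching factor quoted above.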

      There are a lot of ways to improve on this — pruning, transpositions, broadening the search when looking for forced wins, etc. But it would be such a strong player those enhancements might not be noticeable against human players.

  3. I’ve implemented most of the heuristics and pruning techniques described here. I am giving rough numbers from my last 10 or so moves played.

    These boards have between 1000-10000 playable words each. Pruning words which are prefixes of others, and pruning words that don’t affect the score can get those down to the 800-8000 range. Sorting out equivalent words like the ETHYLENES/HEXYLENES example above gets you down to the 700-3000 range. These 700-3000 words have to be evaluated in every way possible, to determine which way of playing a word scores highest.

    Taking all this pruning into account (pruning itself takes time), plus the time it takes to measure which of the 700-3000 is best, you are looking at between 1.5-30 seconds of processing for just the first ply. I really think alpha/beta would have to be massively parallel to be at all feasible. You are correct that using this pruning makes endgame search faster… but at 1.5 seconds per move, examining all opponent responses to those 700 is likely to take 1.5 * 700 seconds, over 15 minutes. Trimming the moves to examine the second ply (like you say only the top 20) isn’t recommended. I ran into an endgame recently where the highest scoring 112 moves lost because they all took too many unclaimed squares (the opponent could take the remaining unclaimed squares and win on his next turn), but the 113th move did not, so it was the selected choice.

    About the heuristics: one heuristic you suggested confused me, the one about adding a bonus for adjacent defended squares. I did not implement it, as other players have suggested that taking more than one corner (not necessarily connected) is often more useful than expanding one corner.

    I added another heuristic you did not mention, sort of building on the high demand metric. Save the average demand for each of a tile’s neighbors. That is a (somewhat crude) measure of how easily a tile can be defended once it is used. I use this to weigh undefended tiles in the evaluation.
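    That defensibility measure can be sketched as the mean demand over a tile’s neighbors. The demand values are assumed to be precomputed per tile (e.g. from letter scarcity); everything here is illustrative:

```python
# Sketch of the defensibility heuristic above: a tile's defensibility
# is the average demand of its neighbors, a crude measure of how
# easily the tile can be defended once it is used.
def neighbors(r, c, size=5):
    """Orthogonally adjacent squares of (r, c) on a size x size board."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < size and 0 <= nc < size:
            yield nr, nc

def defensibility(tile, demand, size=5):
    """Mean demand of a tile's neighbors (demand is a precomputed
    per-tile table; missing tiles count as zero demand)."""
    ns = list(neighbors(*tile, size))
    return sum(demand.get(n, 0.0) for n in ns) / len(ns)
```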

    • Very cool, thanks for posting. I would agree that defending a new corner can be better than expanding the defended area around an existing corner, sometimes enough better to be worth ceding a bit of currently-defended area for it. It would be interesting to compare heuristics that handle such cases differently.

      I may be wrong about alpha/beta being a good idea! It’s certainly not necessary if the goal is just to have an AI player that is nearly always overwhelmingly stronger than human opponents.

      Re: Trimming the moves to examine the second ply (like you say only the top 20) isn’t recommended. I ran into an endgame recently where the highest scoring 112 moves lost because they all took too many unclaimed squares (the opponent could take the remaining unclaimed squares and win on his next turn), but the 113th move did not, so it was the selected choice.

      The suggestion was that searching the heuristically-best 20 moves a few moves deep would yield a strong player, but you’re right that in a case like you’re describing here, which I suspect is very common near the endgame, you’ll end up without a good play if you only consider the first 20 candidates and give up if those are eliminated. But in this example you eliminate those first 112 words with very little search, so you wouldn’t stop there. Modify the suggestion to say: you want to pick from the 20 heuristically-best words that still look good after you’ve done a small amount of look-ahead.

      The point is that if the heuristic evaluation function is relatively accurate, as I believe to be the case here, then a relatively small amount of search should go a long way toward eliminating the kind of pitfalls that pure heuristic evaluation is prone to, such as endgame traps.
