Deep learning for… chess (addendum)

My previous blog post about deep learning for chess blew up and made it to Hacker News and a couple of other places. One pretty amazing thing was that the GitHub repo got 150 stars overnight. There were also lots of comments on the Hacker News post that I thought were really interesting. (See this skeptical comment, for instance.)

A couple of things came up in several places. I actually fully agree with a lot of the skepticism my blog post got. Here's a bit of clarification, plus some other stuff.

My assumption that amateur players make near-optimal moves

Let me retract that statement a bit. But just a little bit. There are several ideas here. The first is that if 1,000 amateur chess players could vote on the next move, that move would probably be pretty strong. There's some anecdotal evidence suggesting that a large group of amateurs can actually play at a very strong level, e.g. Kasparov vs. the World. The cool thing is that when you train this machine learning model, it actually learns to pick the move that corresponds to what "most" players would choose. (You can see this by noting that the probability distribution over all valid next moves is given by a softmax distribution where the $$ z $$ values are given by the evaluation function.)
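To spell that out: writing $$ f(b, m) $$ for the network's evaluation of the position reached by playing move $$ m $$ from board $$ b $$ (that notation is just for this post), the predicted move distribution is

$$ p(m \mid b) = \frac{e^{f(b, m)}}{\sum_{m'} e^{f(b, m')}} $$

where the sum runs over all legal moves from $$ b $$. The move that the most players in the training data would pick ends up with the most probability mass.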

The second idea is that a lot of moves are pretty obvious, because you are forced to do something. The third is that almost any move looks good compared to a random move.

I think in hindsight it's probably not correct that most moves by amateur players are “near-optimal”, but I don't think it matters for the model.

What does each layer show if you look at it?

I looked at it, but it's pretty much all over the place. Unlike convolutional neural networks for images, where the first layer often learns edge detectors, there is nothing like that in this network. The logic seems to be encoded throughout the whole network. Here are the first few coefficients of the first feature (out of the 2048 features in the first layer), ranked in decreasing order of magnitude:

| Feature | Coefficient |
|---------|-------------|
| q @ e7  | 0.0856      |
| P @ f7  | -0.0686     |
| q @ f6  | 0.0658      |
| P @ d3  | 0.0657      |
| r @ c6  | -0.0655     |
| N @ e4  | 0.0650      |
| P @ d6  | 0.0648      |
| r @ e6  | -0.0625     |
| q @ d6  | 0.0625      |
| p @ d7  | 0.0588      |

White pieces are upper case, black are lower case. I don't see much going on here.
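For reference, here's roughly how a ranking like that can be pulled out of the first-layer weights. This is a minimal sketch, assuming a weight matrix W of shape (768, 2048) whose rows are ordered piece-major over 12 piece types × 64 squares; the names here (W, feature_name, top_coefficients) are illustrative, not from my repo.

```python
import numpy as np

PIECES = "PNBRQKpnbrqk"  # white pieces upper case, black lower case
SQUARES = [f + r for r in "12345678" for f in "abcdefgh"]  # a1 ... h8

def feature_name(i):
    # Assumes input features are ordered piece-major: 64 squares per piece.
    return "%s @ %s" % (PIECES[i // 64], SQUARES[i % 64])

def top_coefficients(W, unit=0, k=10):
    """Return the k first-layer coefficients of one hidden unit,
    ranked in decreasing order of magnitude."""
    w = W[:, unit]
    order = np.argsort(-np.abs(w))[:k]
    return [(feature_name(i), float(w[i])) for i in order]
```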

There is actually at least one paper about using deep neural networks for Go

Ilya Sutskever and Vinod Nair wrote this paper in 2008. It even uses convolutional neural networks. It only has about 10k parameters (compared to 10M in my model), but it does something very similar to what I did: it tries to predict the next move of an expert player. I'm not sure why they didn't evaluate it by playing with it, though. My guess is it would need a lot more parameters to play well.
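To make the setup concrete, here's a rough sketch of that kind of move-prediction network in modern terms. The layer sizes and input encoding are my own illustrative choices, written with PyTorch; nothing here is taken from the 2008 paper itself.

```python
import torch
import torch.nn as nn

class MovePredictor(nn.Module):
    """Tiny convolutional net mapping a Go board encoding to logits over
    the 19*19 points; softmax of the logits gives the predicted
    probability of the expert's next move."""
    def __init__(self, in_planes=2, hidden=16):
        # in_planes: e.g. one plane for your stones, one for the
        # opponent's (an illustrative encoding, not the paper's features).
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_planes, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, boards):
        # boards: (batch, in_planes, 19, 19) -> logits: (batch, 361)
        return self.net(boards).flatten(1)

# Training minimizes cross-entropy against the expert's actual move:
model = MovePredictor()
boards = torch.zeros(8, 2, 19, 19)           # dummy batch of positions
expert_moves = torch.randint(0, 361, (8,))   # index of the played point
loss = nn.CrossEntropyLoss()(model(boards), expert_moves)
```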


Tagged with: math