48665b2a — tslil clingman 1 year, 1 month ago master
--fast-math seems to save some time

As far as i can tell at a glance, the largest relevant change this
introduces is flush-to-zero for floats. Some testing is required to
ensure that the CNN isn't too sensitive to this. Preferable to that
option is to simply train it without relying on subnormal floats in
the first place.
c2ffe028 — tslil clingman 1 year, 1 month ago
Went for more standard unicode symbols
2fc1a9a0 — tslil clingman 1 year, 1 month ago
Corrected win screen in geminict
c7687f51 — tslil clingman 1 year, 1 month ago
Enforce quoted ptn string for geminict
309ee7aa — tslil clingman 1 year, 1 month ago
Welcome geminict!

This is a special interface to negamax_cnn1986 which is designed to
generate output for use in a CGI tak interface to be used over gemini.

Also in this commit is a reformatting of the various source files to
use the traditional tab width of 8 spaces.
b4a42bc0 — tslil clingman 1 year, 4 months ago
More towel wringing: re-implemented check_road_colour

Previously check_win would call check_road_colour once for each road
colour, and check_road_colour would call a depth-first search (DFS)
for each of the two axes. This meant that we were doing (up to) *four*
depth-first searches for each call of check_win.

I have replaced both axial DFSes with the world's worst (TM)
implementation of a connected component generation algorithm, backed
by the least guaranteed disjoint set data structure. Essentially,
doing anything about union-find correctly is slower than just ... not
doing it. Although we lose the asymptotic complexity, in practice
we're doing this millions of times per turn, for a fixed board size,
and that's what matters.

All in all, it appears that i've managed to shave about 69ns off
check_win, per call -- nice! This amounts to 50ms or so saved at depth
5 per engine move, in one of my test games.

Unfortunately nearly 99% of the time is still taken by evaluating the
convolutional neural network. It's slow.
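
For flavour, the shape of the replacement is roughly the following
(an illustrative sketch with made-up names, not the actual ctak code):
one linking pass builds components with a bare-bones parent-pointer
forest -- no rank, no path compression -- and each axis is then
checked by comparing roots on opposite edges.

```c
#define N 5  /* board side, fixed at compile time */

/* The "least guaranteed" disjoint set: plain parent pointers. Global
   state makes this non-reentrant, which is fine for a single search. */
static int parent[N * N];

static int find(int x) {
    while (parent[x] != x) x = parent[x];
    return x;
}

static void join(int a, int b) {
    parent[find(a)] = find(b);
}

/* colour[r][c] holds the road colour of each square (0 = no road
   piece); returns 1 iff `who` has a road on either axis. */
int has_road(int colour[N][N], int who) {
    int r, c;
    for (r = 0; r < N * N; ++r) parent[r] = r;
    /* one pass: link orthogonally adjacent same-colour squares */
    for (r = 0; r < N; ++r)
        for (c = 0; c < N; ++c) {
            if (colour[r][c] != who) continue;
            if (r + 1 < N && colour[r + 1][c] == who)
                join(r * N + c, (r + 1) * N + c);
            if (c + 1 < N && colour[r][c + 1] == who)
                join(r * N + c, r * N + c + 1);
        }
    /* north-south road: a component touching both row 0 and row N-1 */
    for (c = 0; c < N; ++c) {
        if (colour[0][c] != who) continue;
        int top = find(c);
        for (int c2 = 0; c2 < N; ++c2)
            if (colour[N - 1][c2] == who && find((N - 1) * N + c2) == top)
                return 1;
    }
    /* east-west road: a component touching both column 0 and N-1 */
    for (r = 0; r < N; ++r) {
        if (colour[r][0] != who) continue;
        int left = find(r * N);
        for (int r2 = 0; r2 < N; ++r2)
            if (colour[r2][N - 1] == who && find(r2 * N + N - 1) == left)
                return 1;
    }
    return 0;
}
```

The deliberately-unbalanced find() is the "world's worst" part: for a
5x5 board the chains are so short that the bookkeeping of doing it
properly costs more than it saves.
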
993d3d82 — tslil clingman 1 year, 5 months ago
Trying to make things faster

I tried the following, but they all made things worse:
- moving away from the singly-linked (tail tracking) list for actions
  + using an array zipper for a deque
  + using an array to poorly hold a floating deque
- caching the results of generating move lists in the transposition
  table and then
  + copying the resulting list/zip/deque instead of generating it
  + applying the move-to-front without copying, but this made the
    search order worse. Presumably in this case shallower nodes were
    messing up the search tree with garbage moves?

I think some of this is not supposed to happen, but i have just the
right combination of poor evaluation function and naively ordered,
cheap move generation that i'm stuck in a local minimum here.
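
For reference, the structure that survived all of these experiments
-- a singly-linked list with a tracked tail, so appends during move
generation stay O(1) -- looks roughly like this (illustrative names,
not the actual ctak code):

```c
#include <stdlib.h>

struct action { int move; struct action *next; };
struct alist  { struct action *head, *tail; size_t len; };

/* O(1) append via the tail pointer; generation order is preserved,
   which is what the move-to-front heuristic later perturbs. */
void alist_push(struct alist *l, int move) {
    struct action *a = malloc(sizeof *a);
    a->move = move;
    a->next = NULL;
    if (l->tail) l->tail->next = a;
    else         l->head = a;
    l->tail = a;
    l->len++;
}
```
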
d8dae1d7 — tslil clingman 1 year, 8 months ago
Merge branch 'master' of git.sr.ht:~tslil/ctak
2bc34e80 — tslil clingman 1 year, 8 months ago
Fix copyright notice in files, and small preemptive optimisation

Eventually there'll be a more complicated data generation step than
the one we're presently using, so having it in-lined in the loop is
wasteful. Ideally this would also be updated per ply and we could avoid
recalculating it entirely for every query -- though it's probably
``fast enough'' for now. Also, caching is WIP.
84850904 — tslil clingman 1 year, 8 months ago
Corrected generation of training data for 6s
9ea869f8 — tslil clingman 1 year, 9 months ago
Don't generate header for training data + tweaks

For some reason it would seem that moving flats to a lower value and
increasing the proximity between caps and top flats improves
acquisition. Still not great, but every bit counts.
b9abf8f8 — tslil clingman 1 year, 9 months ago
Change the training data generation a little

Although it pains me to say it, ``label smoothing'' appears to
actually work. I'm also currently experimenting with training simply
against _all_ games, instead of only bot matches. Once the training
finishes i'll pit cttei against itself with old and new weights,
hopefully there'll be a noticeable improvement.
f17fb3bd — tslil clingman 1 year, 9 months ago
Rename ct_k -> ct, IANAL but ...
efde8ff4 — tslil clingman 1 year, 9 months ago
Small typo in generated output for weights
4f8848a6 — tslil clingman 1 year, 9 months ago
Correct line clearing behaviour, EXIT_FAILURE <~ -1
f958413e — tslil clingman 1 year, 9 months ago
Renamed binaries
259dc3f0 — tslil clingman 1 year, 9 months ago
TEI interface working!
935eea9b — tslil clingman 1 year, 9 months ago
Tried some naive iterative deepening. Work on TEI interface next

If TEI is implemented, then i could make use of Morten's
racetrack (https://github.com/MortenLohne/racetrack) and develop a
quantitative measure of the bot's performance. This is the current
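
The ``naive'' scheme is just a restart loop; as a sketch (toy
stand-ins here, not ctak's actual search signature):

```c
/* Toy stand-in for the real alpha-beta search: report depth reached. */
static int search_root(int depth) { return depth; }

/* Naive iterative deepening: re-run the whole search at increasing
   depths, keeping the result of the deepest completed iteration. The
   pay-off comes later, when shallow results seed move ordering. */
int iterate(int max_depth, int depth_budget) {
    int best = 0;
    for (int d = 1; d <= max_depth; ++d) {
        if (d > depth_budget) break;  /* stand-in for a time check */
        best = search_root(d);
    }
    return best;
}
```
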
8065e752 — tslil clingman 1 year, 9 months ago
Just some #weightgoals ;)

It turns out that while i was training on a 0/1 classification
problem, i was using 2*eval - 1. Training using this function instead,
and on bot-dominated game choices (chosen_player in extract.sh) seems
to have given a better evaluation function. At the least, Morten's
swindle doesn't work anymore.
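
The mismatch is just the affine map between the two label
conventions; for concreteness (illustrative helpers, not ctak code):

```c
/* Targets for a 0/1 classification live in [0, 1]; an engine score
   in [-1, 1] is the affine image 2*eval - 1. Training against the
   same convention the search consumes removes the mismatch. */
double to_signed(double eval01) { return 2.0 * eval01 - 1.0; }
double to_unit(double score)    { return (score + 1.0) / 2.0; }
```
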
cf35b386 — tslil clingman 1 year, 9 months ago
Fairly important bug fixes to lcdlib, LCD now echoes input!

Input polling without line-buffering is done using ncurses, so the
buildroot configuration had to change accordingly to include that.

The Makefile changed to accommodate stand-alone building of ct1986 and
to include -lcurses where appropriate. There were also some typos
about copying ct1986 and ctaklm to the correct directories.