21 June 2012 / Robin Wellner

A random alphabet

After reading Mechanic #152 – Pictograph Language on Three Hundred Mechanics, I had an idea. I’m assuming you’ve read the post by the time you’re reading these words. If not, this post might not make much sense to you.

I liked the idea, but while reading it, I had a different vision. The words “procedural generation” popped up.

Sean Howard’s idea was cool, but lacked replayability. Once you’ve deciphered the alien language, that’s it. Next time, playing the game provides no challenge.

But what if the language and alphabet were completely new and fresh every time you started a new game?

How would that work? How would you let a computer design an alphabet from scratch?

Here’s what I did:

There are sixteen points that form the scaffolding for thousands and thousands of possible alphabets.

It works like this: between those points there can be lines. On a sheet of paper, I drew the points and the lines between them: eight straight lines and four circles. I simplified it to the following:

When generating a certain letter/character/radical of an alphabet, the algorithm basically goes through each of those lines and tosses a coin: heads, the line is part of the character; tails, it’s not.

Below you can see an example of such a character. Of course, in the actual characters, you don’t see the points, only the lines.

After that, the brand-new character is checked against several constraint rules, which can be swapped in and out easily (a sketch of the whole generate-and-check loop follows the list):

  • Each character needs to have more than 2 but less than 10 lines. This makes sure the character has at least some body and is not a tangled mess of lines either. The chance of as few as two lines is actually incredibly small, especially with some of the other constraints, but I felt I needed to include the check anyway.
  • There are five sets of lines that cross each other. Make sure that, for each such set, no more than one of its lines is included in the character.
  • All lines must be connected to each other (or: the graph must have exactly one component (I’ve actually written an implementation for both formulations of this constraint)). This is a debatable one, and rules out characters that look like “i”, but I decided that this constraint improved how the generated characters looked.
  • Each character must be unique in the alphabet. I’ve never seen this constraint violated, because the number of possible characters is pretty large: without constraints, there are 2³⁸ = 274 877 906 944 possible characters. What the exact number is with these constraints, I don’t know, but it’s probably in the millions still. So as long as the number of characters in the alphabet doesn’t run into the thousands, it probably won’t be a problem — and it was never my intention to make alphabets so large.
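
The actual generator is written for LÖVE, but the generate-and-check loop is simple enough to sketch in Python. The line indices, the crossing sets and the skipped connectedness check below are placeholders, since they depend on the real point-and-line layout:

import random

NUM_LINES = 38                    # the segments and arcs between the sixteen points
CROSSING = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]   # placeholder indices for the five crossing sets

def random_character():
    # toss a coin for every possible line
    return frozenset(i for i in range(NUM_LINES) if random.random() < 0.5)

def is_valid(char, alphabet):
    if not 2 < len(char) < 10:                  # some body, but not a tangled mess
        return False
    for group in CROSSING:                      # at most one line of each crossing set
        if sum(line in char for line in group) > 1:
            return False
    # the "exactly one component" check would go here; it needs the real adjacency data
    return char not in alphabet                 # unique within this alphabet

def generate_alphabet(size):
    alphabet = []
    while len(alphabet) < size:
        char = random_character()
        if is_valid(char, alphabet):
            alphabet.append(char)
    return alphabet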

You might have noticed that none of the example characters above actually satisfies all the constraints, so here are a few that do:


If you have LÖVE (or SELÖVE, if you’re less trusting), you can download the generator and play with it. Press escape to exit, space for a new alphabet, and the arrow keys do things as well.


So now that I can generate a random alphabet, what’s next?

Well, we’ve got an alphabet, but not a language yet. So next would be assigning (arrays of) characters to semantic concepts.

I imagine some concepts would always be elementary and have a single character to represent them. Other concepts would be randomly assigned one or more characters. I imagine some complex concepts would be composed of the characters of semantically related concepts (for example, “forest” might be “tree” followed by “butt-load”).

Grammar production rules could be simple enough not to be a problem.

Since it’s supposed to be a dead alien language, the computer only needs to produce the language, never to consume it, which simplifies things greatly.
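
As a rough sketch of what that assignment could look like (the concept names, word lengths and composition rules here are invented for illustration, not taken from an actual implementation):

import random

def assign_words(alphabet, elementary, composites):
    lexicon = {}
    for concept in elementary:
        length = random.choice([1, 1, 1, 2])          # most elementary concepts get a single character
        lexicon[concept] = random.sample(alphabet, length)
    for concept, parts in composites.items():         # composites reuse the characters of related concepts
        lexicon[concept] = [char for part in parts for char in lexicon[part]]
    return lexicon

# e.g. assign_words(alphabet, ["tree", "many", "water"], {"forest": ["tree", "many"]})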

Then I need to build a game around it.


Imagine waking in a strange world — a dead world. You don’t recognise the language the sentient inhabitants had used, but (as per the original) the dead aliens had the decency to label everything. That means you can find out the meaning of things. You get a sort of sketchbook to scribble guesses or discoveries in, to aid in your learning of the language.

That’s all I’ve got so far. You might have noticed I did away with some parts of the Mechanic, in particular the speech part, because I think those are more appropriate for games featuring non-procedural content.

3 May 2012 / Robin Wellner

Déjà Vu Grew a Pair

Another Déjà Vu post. If you don’t care about the design and implementation of programming languages, this post is not for you.

Anyway, now that I’ve lost the good majority of my readership, I’d like to talk about types.

In Déjà Vu, there used to be 6 types:

  • Strings
  • Idents (symbols would probably have been a better and more familiar name)
  • Numbers
  • Lists
  • Dictionaries
  • Functions

And now I’ve added a 7th: pairs. A pair is similar to a two-element tuple in Python or to a cons cell in Lisp. You can create a pair by calling &, like this:

& "hi" "there"

print &< dup # prints the first element
print &>     # prints the second element

Big deal, right? We already have lists. Anything pairs can do, lists can do better. Lists can do anything better than you. Erm… than pairs.

The thing is, pairs are still useful. Pairs are immutable. Pairs can hold only elements of simple types (strings, idents, numbers and other pairs). These two things mean that a pair can never be part of a reference cycle, which means pairs don’t have to be garbage collected. Objects of a simple type are only reference counted, which is better both performance-wise and memory-wise.

Pairs are small too: apart from the minimum amount of data every Value has to have (16 bytes), a pair only holds two pointers to its element Values. Pairs are small and easy on the GC, which means you can create heaps of them for a low price.

I’ve already used them for a number of purposes, among them lightweight binary trees and Lisp-style lists.

Here’s a function that is basically the equivalent of map for pair-based trees:

treemap f:
	if = :pair type dup:
		treemap @f &> dup
		treemap @f &< swap
		&
	else:
		f

. treemap @++ & & 0 10 & & & 5 7 -2 & 0 0
# prints      & & 1 11 & & & 6 8 -1 & 1 1

For larger literal trees, the syntax can get rather hard to read. The nature of Déjà Vu makes that a bit hard to fix, unfortunately.
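
For readers more at home in Python than in Déjà Vu, here is roughly the same thing with pairs modelled as two-element tuples (an approximation of the semantics, not the actual implementation):

def treemap(f, tree):
    # recurse into pairs, apply f to everything else
    if isinstance(tree, tuple):
        return (treemap(f, tree[0]), treemap(f, tree[1]))
    return f(tree)

print(treemap(lambda n: n + 1, ((0, 10), (((5, 7), -2), (0, 0)))))
# prints ((1, 11), (((6, 8), -1), (1, 1)))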

Anyway, that’s it.

7 October 2011 / Robin Wellner

The Déjà Vu virtual machine

It has been a while since I posted here. Not that I’ve been sitting on my hands; a lot has happened in the past three months.

For example, I took up Déjà Vu again. The implementation had some problems, or rather one big problem: the C stack.

You see, the Déjà interpreter has two functions, step_eval and getnextnode, which call each other recursively while walking the source tree. A simple function in Déjà Vu is often three or four nodes deep, so calling a function in Déjà Vu can easily add eight nested interpreter calls. CPython uses the C stack for that. The C stack is rather limited in size, so the interpreter usually can’t handle a node depth of more than a few hundred, and that includes recursive function calls.

Now, there are other implementations of Python that don’t have that limitation, but I don’t want to require the user to use Stackless or PyPy or something else. It is possible to increase the size of the C stack, but that’s a workaround and only works for applications of less than an arbitrary size. I deemed neither solution acceptable.

So what then?

My reasoning was as follows: the recursion of the interpreter is the problem, so I have to make it not recurse any more. Thus I wrote a function called flatten, which turned the tree into a list of instructions. (Actually, some parts kept a tree shape, but those were limited and unlikely to cause problems.) Then I did nothing with it for several months.

When I got started on it again, I decided the code to flatten the tree could easily be used to generate byte code. I then spent a while thinking of useful opcodes and making other low-level decisions. When that was done, I had a pretty good idea of how I wanted to do the byte code and the virtual machine. I wrote several functions and adapted flatten to produce proper byte code.
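
The shape of that transformation, reduced to a toy example in Python (these are not Déjà Vu’s actual opcodes, just the general idea of trading C-level recursion for a flat instruction list and an explicit value stack):

def flatten(node, code=None):
    # post-order walk: emit children first, then the operation itself
    if code is None:
        code = []
    if isinstance(node, tuple):
        op, *args = node
        for arg in args:
            flatten(arg, code)
        code.append(('CALL', op, len(args)))
    else:
        code.append(('PUSH', node))
    return code

def run(code, funcs):
    # an explicit value stack instead of recursive evaluation
    stack = []
    for instr in code:
        if instr[0] == 'PUSH':
            stack.append(instr[1])
        else:
            _, name, argc = instr
            args = [stack.pop() for _ in range(argc)][::-1]
            stack.append(funcs[name](*args))
    return stack[-1]

tree = ('mul', ('add', 1, 2), 4)                 # (1 + 2) * 4
print(run(flatten(tree), {'add': lambda a, b: a + b, 'mul': lambda a, b: a * b}))  # 12

Note that flatten itself still recurses, but only to the depth of the source tree and only once, at compile time; the run-time depth of Déjà Vu function calls no longer touches the C stack.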

Although this byte code is a binary format, I wrote it for portability, so the compiler would not be needed by users of Déjà Vu programs.

I started writing a virtual machine in C, in code as clean and portable as possible.

As regular readers* will know, I am a high-level thinking guy, so C isn’t my forte, especially for writing virtual machines. I found a sensible way to implement stacks and values that can have multiple types, and I think I know how I can handle return stacks and closure stacks, but there’s still a long road ahead.

In its current state, it has a garbage collector: reference counting plus the synchronous cycle-collection method described in Concurrent Cycle Collection in Reference Counted Systems.

* Ha! As if!

14 June 2011 / Robin Wellner

The Path to Philosophy

Wikipedia trivia: if you take any article, click on the first link in the article text not in parentheses or italics, and then repeat, you will eventually end up at “Philosophy”.
— Randall Munroe, title text of xkcd/903.

I may have written a script inspired by the alt-text of http://xkcd.com/903/; blog post and GitHub Gist will follow whenever.

Read more…

16 May 2011 / Robin Wellner

Mastermind Solver

Last week, our assignment for Object Orientation was to implement a version of Mastermind in Java. The implementation of the AI was okay, but it could have been better: it was a strategy I wrote as a temporary solution. Sometimes it won — sometimes in four or five guesses — but other times it lost. This was the entire reasoning of the AI:

  1. Each position gets a list with all possible values (A, B, C, D, E, F).
  2. I choose a random value from each list to select the next attempt.
  3. If the score I get for the attempt (for example ABCD) has no red points, I know that:
    • A cannot be in the first list, so I remove it.
    • B cannot be in the second list, so I remove it.
    • C cannot be in the third list, so I remove it.
    • D cannot be in the fourth list, so I remove it.
  4. If the score I get for the attempt has no white points either, I know that all the colours in the attempt must be removed from all the lists.
  5. If the right solution hasn’t been found yet, go to 2.

That’s it. That’s the whole of the algorithm. I didn’t think it would work. But sometimes, it did. It was more luck than wisdom, but still.
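
The strategy above fits in a few lines of Python (a sketch of the reasoning, not the Java original):

import random

COLOURS = "ABCDEF"

def score(secret, guess):
    # red: right colour in the right place; white: right colour, wrong place
    red = sum(s == g for s, g in zip(secret, guess))
    white = sum(min(secret.count(c), guess.count(c)) for c in COLOURS) - red
    return red, white

def naive_solver(secret):
    candidates = [list(COLOURS) for _ in range(4)]      # step 1
    attempts = 0
    while True:
        guess = [random.choice(c) for c in candidates]  # step 2
        attempts += 1
        red, white = score(secret, guess)
        if red == 4:
            return "".join(guess), attempts
        if red == 0:                                    # step 3
            for pos, colour in enumerate(guess):
                if colour in candidates[pos]:
                    candidates[pos].remove(colour)
            if white == 0:                              # step 4
                for colour in set(guess):
                    for lst in candidates:
                        if colour in lst:
                            lst.remove(colour)

print(naive_solver("CABE"))   # the secret as guessed, plus the number of attempts it took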

As you might know, our major is Artificial Intelligence, so I wanted to make it smarter, once we had dealt with the annoyance of MVC — I can see how it can be useful in certain situations, but we never have practical assignments that do the pattern justice.

Anyway, the Wikipedia page I linked to earlier had an interesting section on algorithms. I wanted to implement one of them, but we soon abandoned the idea. Firstly, the computer playing a near-perfect game every time seemed boring. Secondly, and more importantly, Java wouldn’t let us do it easily or elegantly.

To me, the problem seemed to be object orientation itself: it simply wasn’t cut out for this. I thought a functional implementation would be much better. Java really sucks at this, so I made up my mind to write an implementation of Knuth’s five-guess algorithm in Python, using every functional trick I could think of.

That’s exactly what I did last weekend. The complete program, with CLI, is only 42 lines long (yes, that was intentional). And it works. You can supply a code as a command line argument, or enter it later. The first attempt is always AABB. The second attempt takes a while. Depending on the score at the first attempt, it may take twenty or forty seconds on my machine, but I have seen it take only three.

At first, it didn’t work. It took me some time to find the problem: the search space was too limited. Because of careless reading, I used the wrong function to generate the list of all possibilities. It actually generated a list of permutations of ABCDEF of length four, meaning codes containing duplicates (AABC or DABD, for example) could not be found. Using the right function solved all that.
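
In concrete terms, the difference was presumably between itertools.permutations, which never repeats an element, and itertools.product, which does and which the full search space needs:

from itertools import permutations, product

wrong = set(permutations("ABCDEF", 4))    # no repeated colours: 6*5*4*3 = 360 codes
right = set(product("ABCDEF", repeat=4))  # repeats allowed: 6**4 = 1296 codes

print(len(wrong), len(right))             # 360 1296
print(("A", "A", "B", "C") in wrong)      # False: AABC can never be found
print(("A", "A", "B", "C") in right)      # True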

I made a Gist, enjoy.

PS: I don’t like using a global to keep track of the number of attempts. If you know of an elegant and functional solution, please let me know!

26 April 2011 / Robin Wellner

Modelling Agent Beliefs in Games

Kicking off with a bit of a warning: this post is rather theoretical compared to this blog’s usual content.

The post proper starts with a generic question: how does one model belief states for NPCs/agents in games?
Read more…

22 April 2011 / Robin Wellner

Lazy-Lith

As I was trying to make a proper programming language out of Lith, I noticed something: it became less and less elegant. And although it now had functions, to truly work as a programming language, I needed to add even more inelegant cruft.

When reading about the history of Haskell, I had an idea: I would return Lith to a former state, without functions, so that it could be used to describe data and nothing more, while guaranteeing any valid source code has an equivalent canonical form (only containing T and [ ]). This could not be guaranteed when using functions, which have no canonical representation. The other part of the idea was that I would copy the Lith project to a new project named Lazy-Lith, which would be a lazily evaluated superset of Lith. Functions (in the form of ( )) are completely gone now: Lith does not have a way to suspend evaluation, while in Lazy-Lith, every list is a lazy list.

This means I have to let go of the idea of a stack, rendering the -th in Lith obsolete, because when all lists are lazily evaluated, there is no way of knowing beforehand when the stack will be used.
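
This isn’t Lazy-Lith, but as a general illustration of what “every list is a lazy list” means, here is a cons-style lazy list in Python, where the tail is a thunk that is only forced on demand:

def cons(head, tail_thunk):
    # the tail is a zero-argument function, evaluated only when asked for
    return (head, tail_thunk)

def naturals(n=0):
    return cons(n, lambda: naturals(n + 1))

def take(n, lst):
    out = []
    while n > 0 and lst is not None:
        head, tail_thunk = lst
        out.append(head)
        lst = tail_thunk()
        n -= 1
    return out

print(take(5, naturals()))   # [0, 1, 2, 3, 4]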

I have added alphabetical names and assignment with = to the parser, but they’re not yet evaluated.

That’s pretty much all there is to say about Lazy-Lith at the moment.

10 April 2011 / Robin Wellner

Lith revisited

A while back, I wrote about Lith, a minimalistic language.

Yesterday, I decided to implement the rest of the language, which was surprisingly easy. (Of course, Lith is still a simple language, and I have more experience with this sort of thing now.) It’s up on GitHub now.

In today’s post, I discuss the theory and implementation of the latest version of Lith.
Read more…

29 March 2011 / Robin Wellner

Writing a Tic-Tac-Toe Minimax

Last Sunday, I decided to write an implementation of tic-tac-toe, complete with computer opponent. I ended up implementing a minimax algorithm, which worked well enough. The relatively small number of states in tic-tac-toe (3⁹ = 19 683, but this includes unreachable states) made this rather simple.
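
For reference, the whole algorithm is small. Here is a generic Python sketch with the board as a nine-element list, not necessarily how the original is structured:

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # score the position for `player`, who is to move: +1 win, 0 draw, -1 loss
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                       # draw
    other = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for m in moves:
        board[m] = player
        score, _ = minimax(board, other)     # opponent's best reply
        board[m] = None
        best = max(best, (-score, m))        # negamax form: their gain is our loss
    return best

score, move = minimax([None] * 9, 'X')
print(score, move)                           # 0 and an opening move: perfect play is a draw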

I pitted two minimax players against each other. Predictably enough, it resulted in a draw. Always the same, since the algorithm is deterministic:

X X O
O O X
X O X

The total running time was consistently just under 3 seconds.

That got me thinking, surely a heuristic could do things faster? So I drew up some heuristic rules the enhanced computer player would use:

  1. If the board is empty, just play the top-left corner.
  2. If you can win, play the winning move.
  3. If your opponent can win, prevent them from winning.
  4. If you can find a nice corner you can use to force a win (a number means it is empty, a period means it doesn’t matter), that is:
    X 2 3
    . . 6
    . . X

    or

    X 2 3
    . 5 .
    X . .

    (possibly rotated), do that move.

  5. If the opponent has the above opportunity, prevent them from taking it.

I tried the heuristics, and sure enough, they helped. Drastically. But only when the heuristic went first (total time: just over 0.2 seconds). Otherwise, the timing didn’t change at all. Not even just a little bit. They did nothing.

After a bit of experimenting — not too much, you can probably tell what’s going on here just by reading the previous paragraph, I just wanted to confirm it — I found out what was going on:

None of the heuristics did anything, except for the one for the empty board. You see, the further you get into the game, the smaller the number of possible future game states gets. Reduction early on is likely to matter more. Especially if you can throw away 8/9 of the search space.

The nature of minimax did the rest: if you can win in one move, the algorithm will select that move immediately. If your opponent can win in one move, the algorithm will find soon enough that choosing anything else loses. And those fancy heuristics for finding ways to trick the opponent into inevitable failure won’t be needed, because the opponent won’t let you get that far (remember, this is minimax vs minimax).

So I removed all but the first heuristic.

I put it up on GitHub as a Gist. If you scroll up, you can see the old implementation of the heuristic (much longer, a lot less elegant, and not a bit faster than the new one).

What have we learned? I’d say the lesson is that, when relying on a minimax as fall-back, you only need to implement heuristics for the opening moves, which will dramatically decrease the search space.

8 March 2011 / Robin Wellner

How I implemented Déjà Vu in two days

I’ve been thinking a lot about Déjà Vu (the programming language I invented) lately.

A while ago, I wrote a post about it. I couldn’t quite get it to work. After much distress and failed attempts to fix things, I let it rest.

Once in a while I revisited it. For example, when I (re)discovered Parrot, it seemed like a good idea to implement it that way. The problem was that you pretty much needed to do each part in Perl, some variant of Perl, or Parrot assembly. Perl is horribly opaque and I never learned it, which was problematic. I’m a high-level thinker, so assembly works against me. But worst of all, writing grammars and parsers and whatnot is just plain hard. No, scratch that: the absolute worst is the combination of those things. Learning a language you’ll probably never like is much harder if your equivalent to a “hello world” is a top-down grammar parser.

So I let that rest as well.

But I kept thinking about Déjà Vu. Not on how to implement it — I’m a high-level thinker, remember? No, I was thinking about concepts, about constructs, about connotations…

All ideas were kept in a file called “idea”. The language grew and matured, even though it had no implementation. I added new kinds of statements, full lexical scoping and new datatypes, and improved the function definition syntax.

Last Sunday, during this spring break, I decided to try it one more time. Following the success of implementing a tree parser for Lith (by the way, the answer to the question posed in that post, “Will I ever implement more?”, is “No, I will not.”), I decided to start simple: read the file line by line, perform a series of simple text transformations to deal with indentation, strings, comments, words and statements (it had to be in that order — I did spend some time thinking about implementation), and after the transformations, produce a tree. I’d see what would happen next if I finished it.

I finished it within a day. I was shocked. Sure, Déjà Vu is not a large language — I kept the amount of syntax and the number of statements purposefully small — but I had so many problems with it before, and now it worked! In a day! It had everything. It had full lexical scoping! It took Lua 8 years to get that, and this implementation of Déjà Vu had it within a day! All I had to do now was find some way to deal with that parsed tree.

Let’s see. I had a tree filled with conveniently semantically typed nodes. Why not just traverse it, instead of struggling to convert it to byte code? (And probably back again at that.) So I started on that. When I went to bed that day, the first bits were in place for interpreting trees and, what’s more, it looked like it was going to work.
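
The appeal of that approach, in miniature: a tree-walking evaluator is just a dispatch on the node kind plus recursion on the children. A generic Python sketch, not Déjà Vu’s actual node classes:

def evaluate(node, env):
    kind = node[0]
    if kind == 'num':                       # literal number
        return node[1]
    if kind == 'name':                      # variable lookup
        return env[node[1]]
    if kind == 'call':                      # evaluate arguments, then call
        _, func, args = node
        return env[func](*[evaluate(arg, env) for arg in args])
    raise ValueError('unknown node kind: %r' % (kind,))

env = {'+': lambda a, b: a + b, 'x': 4}
print(evaluate(('call', '+', [('num', 1), ('name', 'x')]), env))   # 5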

The following day, Monday (it is past midnight as I’m writing this post), I implemented the interpreter. It was unbelievably easy. After figuring out some minor problems, I was even able to implement the For statement without much trouble, and I had expected that to hold up the project for a while.

It was the second day, and the language worked. It just worked. I had hastily added an interactive interpreter, modeled after Python’s. That worked too. I had written a neat standard library, implemented in Python. My idea file is over 400 lines, and I had implemented everything I proposed in it, save for the ideas that were superseded by other ones.

It’s all up on GitHub.

Tomorrow, I will write the much-needed documentation.

One more thing: the implementation as a whole defines 40 classes to parse, interpret, and represent Déjà Vu source. 26 of which are subclasses of the Node class alone. I think that’s the highest number of classes I’ve ever used for a single project.