I made a little interactive puzzle that goes along with this idea. If you’d like, you can install the Python script and try it on your own or have someone read the rules below and act as the computer. You could also skip the puzzle, but where’s the fun in that?
Game rules: There are seven buttons, each of which you can toggle between blue (b) and purple (p). The goal of the puzzle is to find the color combination that returns a score of 0. Once someone gives you a string of colors (e.g., pbpbpbp), you return a score based on the rules below.
- Convert the string to binary, with p as 1 and b as 0, and read it as an integer n.
- If n > 64, return a random number in [70, 100]
- Else, return 64 - n
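
If you'd rather not install anything, here is a minimal Python sketch of those rules so you can act as the computer yourself. The function name, the exact random call, and the example strings are my own choices for illustration and may differ from the original script:

```python
import random

def score(colors: str) -> int:
    # Convert the string to binary: p -> 1, b -> 0.
    bits = "".join("1" if c == "p" else "0" for c in colors)
    n = int(bits, 2)
    # Above the target, the score is pure noise, so there is nothing to climb.
    if n > 64:
        return random.randint(70, 100)
    return 64 - n

print(score("pbbbbbb"))  # 1000000 -> n = 64 -> score 0 (the solution)
print(score("bpppppp"))  # 0111111 -> n = 63 -> score 1, yet every button is "wrong"
```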
⁂
In genetic algorithms, integers are typically represented as bitstrings. Now, this has a few advantages, like making it easy to implement crossover and mutation operators, but it also makes representing numbers a bit odd. Take, for instance, this example:
01111 = 15
10000 = 16
Even though they’re only one apart, these binary representations are exact opposites of one another. The entire premise of the puzzle above is this notion of deceptiveness: although the scoring metric makes you feel like you’re getting closer, your underlying representation is drifting further and further from the objective.
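
To make that gap concrete, here is a small Python check of the claim; hamming is a hypothetical helper written for this post, not part of any GA library:

```python
def hamming(a: int, b: int, width: int = 5) -> int:
    # Count the bit positions where the two binary representations differ.
    return sum(x != y for x, y in zip(format(a, f"0{width}b"), format(b, f"0{width}b")))

print(abs(16 - 15))     # numeric distance: 1
print(hamming(15, 16))  # bitwise (Hamming) distance: 5, every single bit flips
```

A search that measures progress in score space sees a distance of 1, while a mutation operator working in bit space has to flip all five bits at once.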
It seems very common to think of objectives as deceptive, leading the search away from the goal, but what if it is our underlying representations that are misleading us? Perhaps a limitation of genetic algorithms is this inability to change representations: although the search process isn’t directed by any hand-designed components, the representation being used is still whatever we clever humans can devise.
- Is it possible to merge genetic algorithms and neural networks in a way that allows for the evolution of primitives?
- How might we deal with this issue of deception by allowing our algorithms to decide how they want to represent the search space?
- How does this problem of deceptiveness in representation affect humans?