Maxed Out: Genetic Algorithms


Have you ever heard the term "genetic algorithm" and wondered what it might mean? Well, some time ago I became involved with these little rascals, which are really rather clever, although they do tend to be a little mind-boggling when you first encounter them.

Solution Spaces and Dreamscapes

Let's assume that we were to construct a simple device consisting of a power source and two potentiometers in series with a small incandescent light bulb, as illustrated in Figure 1. (A potentiometer is a type of variable resistor.)

Let's further assume that the potentiometers can be rotated through a 180-degree range, from -90 degrees to +90 degrees, and that the point of least resistance for each potentiometer is in its center (upright) position.

Figure 1. A simple device consisting of a power source and two potentiometers in series with a light bulb

Now, suppose that we were to set the potentiometers to random positions. Then we proffer the device to a friend and we say, "Play with this and try to make the light as bright as possible." In the case of my friends, we also have to say "You're only allowed to fiddle with the potentiometers ... you aren't allowed to do anything weird like doubling the supply voltage."

Most people would select one of the potentiometers, turn it a little way to the left or right, and note the result. If the light gets brighter they will continue to turn the potentiometer in that direction. Alternatively, if the light dims, they will reverse their original action and try turning the potentiometer the other way.

After a little experimentation, our friend would soon discover the optimal setting for the first potentiometer, for which the lamp's brightness reaches its peak. He would then turn his attention to the second potentiometer and perform a similar sequence of actions.

The phrase "solution space" refers to all of the possible solutions associated with a particular problem. We can represent the simple solution space corresponding to this problem in the form of a three-dimensional graph as illustrated in Figure 2. In this particular case, we're assuming that the x axis corresponds to the rotation of the first potentiometer, the z axis corresponds to the second potentiometer, and the y axis reflects the brightness of the light.

Figure 2. The solution space corresponding to our simple problem can be represented as a three-dimensional graph

If we now assume the presence of computer-controlled actuators capable of rotating the potentiometers, along with some sensor to measure the brightness of the bulb (say, a photo-detector, or perhaps some method of measuring the current in the circuit), then we could write a computer program to automatically determine the optimal settings for the potentiometers.

But what algorithm should our program employ? One possible approach would be to use a "hill-climbing" algorithm, which is based on the same scenario we discussed with our imaginary friend. Using this technique, the computer would commence at some random point and make a small modification to one of the potentiometers. The program would then check to see if the quality of the solution - as reflected by the sensor measurement - had improved or otherwise.

In the case of the former, the program would continue to effect changes to the potentiometer in the same direction; but if the original modification had caused the solution to worsen, then the computer would reverse the direction in which it had changed the potentiometer and try again.

Also, the program would require some way to recognize when it had reached its peak with this potentiometer, i.e., when a change in either direction causes the solution to worsen. At this point, the program must realize that it's time to move on to the other potentiometer; otherwise, it would commence to oscillate around the first potentiometer's center position without progressing any further.
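
To make this a little more concrete, here's a minimal sketch of such a hill-climbing routine in Python. Note that the brightness() function is a placeholder of my own invention, standing in for whatever sensor reading (photo-detector output, circuit current, etc.) the real system would provide:

    def hill_climb(brightness, start=(0.0, 0.0), step=1.0):
        # Adjust two potentiometer angles (in degrees), one at a time,
        # until neither direction improves the measured brightness.
        pots = list(start)
        for i in range(len(pots)):              # tackle one potentiometer at a time
            while True:
                improved = False
                for delta in (step, -step):
                    trial = pots[:]
                    trial[i] = max(-90.0, min(90.0, trial[i] + delta))
                    if brightness(*trial) > brightness(*pots):
                        pots = trial            # keep moving in the winning direction
                        improved = True
                        break
                if not improved:
                    break                       # peak reached; move to the next pot
        return tuple(pots)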

This hill-climbing technique is certainly of use for some classes of problems, but it can run into trouble as the complexity of the solution space increases. For example, suppose we added more potentiometers (pots) to our device, and further suppose that the relationships between the various potentiometers were non-trivial; that is, turning a pot in one direction may cause the lamp to brighten or dim depending on the positions of one or more of the other potentiometers. In this case, our solution space may consist of a number of "hills" and "valleys" as illustrated in Figure 3.

Figure 3. Some solution spaces may contain a number of hills and valleys

The problem here is that our hill-climbing algorithm is obliged to commence at some random point. This means that it's more than possible for the algorithm to manipulate the potentiometers in such a way that the program locates a "local maximum" (the top of one of the hills), but the algorithm has no way of "seeing" any other hills, some of which may be higher.

A low-level (no pun intended) solution would be to perform a number of searches, commencing each new search from a randomly generated starting point. But this still fails to provide 100% confidence that our algorithm has discovered the Mount Everest of our solution space.
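
In terms of the sketch presented earlier, this "random restart" strategy might look something like the following (again, brightness() and hill_climb() are the placeholder names introduced above):

    import random

    def random_restart(brightness, tries=20):
        # Run several hill climbs from random starting points and keep
        # the best result found; reuses the hill_climb() sketch above.
        best = None
        for _ in range(tries):
            start = (random.uniform(-90, 90), random.uniform(-90, 90))
            candidate = hill_climb(brightness, start)
            if best is None or brightness(*candidate) > brightness(*best):
                best = candidate
        return best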

Furthermore, as the number of variables used to describe the problem increases, the result can be a solution space containing constructs such as tunnels, bridges and even weirder elements like "cities in the sky." The names of these constructs are fairly self-explanatory.

As the topology of a solution space approaches these levels of complexity, finding the highest peak (or even being able to tell the difference between up and down) becomes highly problematic.

Alternative Search Techniques

The hill-climbing algorithm discussed here is only one of a variety of search techniques, which may be grouped into three main classes as illustrated in Figure 4.

Figure 4. Search techniques may be categorized as being calculus-based, enumerative, or guided random searches

Calculus-based techniques can be subdivided into two main groups: direct and indirect methods. Direct methods (such as those described by Fibonacci and Newton) skip merrily around the search space and evaluate the gradient at each new point as they trek around looking for peaks. Our original hill-climbing algorithm falls into this class, which is typically of use only in the case of "well-behaved" solution spaces.

By comparison, enumerative techniques search every point in the solution space, one at a time. This makes them relatively easy to implement and applicable to a wide range of problems, but they can take an inordinate amount of processing time, which renders them effectively unusable for many problems.

For example, consider another simple circuit involving 100 switches, each of which can be up or down. Now, assume that each of these switches provides a contribution to some measurable value (such as voltage, current or resistance), but also that the amount of each switch's contribution (and whether or not said contribution is additive or subtractive) depends on the state of one or more of the other switches. Note that there's an underlying assumption that the switches' contributions and interactions are non-random, but instead reflect some underlying meaningful relationships.

This problem would certainly be amenable to a rudimentary enumerative solution, in that we could simply cycle through every possible combination of switches and measure the result. However, the fact that there are 2¹⁰⁰ different combinations means that even if we evaluated 10 million solutions a second, it would be one heck of a long time before we found the optimal result. Try working out the math for yourself; the answer will scare you!
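
For anyone who can't wait, a quick back-of-the-envelope calculation (assuming the 10 million evaluations a second mentioned above) goes as follows:

    combinations = 2 ** 100                  # about 1.27e30 switch settings
    rate = 10_000_000                        # assumed evaluations per second
    seconds = combinations / rate            # about 1.27e23 seconds
    years = seconds / (60 * 60 * 24 * 365)   # about 4.0e15 years
    print(f"{years:.2e} years")

That works out to roughly four quadrillion years, which is something like 300,000 times the current estimated age of the universe.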

The third major class of search algorithms is the guided random search, which is in some respects similar to enumerative techniques, except that it employs additional information to guide the search. One subclass of this category is "simulated annealing," which is based on the thermodynamic model of cooling metals.

Algorithms based on the simulated annealing concept have appeared in some surprising applications, such as place-and-route software for PCBs.
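
By way of a flavor, here's a minimal sketch of the simulated annealing idea in Python; the cost() and neighbor() functions are problem-specific placeholders of my own, and the cooling parameters are purely illustrative:

    import math
    import random

    def simulated_annealing(state, cost, neighbor,
                            t_start=1.0, t_end=1e-3, alpha=0.95):
        # Minimize cost(state) by a random walk that "cools" over time.
        best, best_cost = state, cost(state)
        temperature = t_start
        while temperature > t_end:
            candidate = neighbor(state)
            delta = cost(candidate) - cost(state)
            # Always accept improvements; occasionally accept a worse move,
            # with a probability that shrinks as the "metal" cools.
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                state = candidate
                if cost(state) < best_cost:
                    best, best_cost = state, cost(state)
            temperature *= alpha             # the cooling schedule
        return best

The occasional acceptance of a worse move is what allows the search to escape the sort of local peaks and valleys that trap a simple hill-climber.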

The other major subclass of guided random searches encompasses evolutionary algorithms, of which one of the most interesting groups, at least to me, is the genetic algorithm.

Genetic Algorithms

The key concept underlying genetic algorithms is that they mimic evolutionary processes in the natural world; specifically, those of natural selection based on the "fitness" of individuals in a population that evolves by exchanging genetic material and also by random mutations. The principles underlying genetic algorithms are actually quite simple, and were first described by J.H. Holland in the early 1970s (see his Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975).

First of all, we manipulate the problem under consideration in such a way that its variables can be represented as a string of 0s and 1s. This would obviously be a simple task in the case of variables that can occupy a limited number of discrete states, such as the switches in our 100-switch example. We'll consider how to handle analog variables a little later.

We now "seed" our environment with an initial population of randomly generated strings; that is, strings containing random sequences of 0s and 1s [Figure 5(a)].

Figure 5. After generating a random initial population (a), we rank the individual strings according to their fitness (b), then the fittest strings are allowed to "breed" (c)

Next, we evaluate each string in our population by testing the measurable quantity (or quantities) in our system to see how close we came to some desired result. We use the results of this evaluation to assign a fitness to each string, and then we rank the strings in order of fitness [Figure 5(b)]. Low-ranking strings may be discarded, while high-ranking strings represent the individuals that will be permitted to breed; that is, the strings that ranked the highest will be permitted to generate the offspring that will form part of the next generation.

Now, here's the clever part, because the strings that will act as the parents for the next generation undergo a process known as crossover, in which we emulate the natural process of exchanging genetic material in order to create offspring strings [Figure 5(c)].

Note that the original parent strings remain part of this new population, because we don't want to discard the best solutions we've obtained thus far. Also note that we've only shown a small initial population and the mating of two pairs of strings; in reality we would have a much larger population pool and there would be many such unions. Furthermore, some algorithms allow strings to mate in proportion to their fitness, in which case a high-ranking string will be allowed to mate with more strings than a lower-ranking companion. Thus we see that a key feature of genetic algorithms is "survival of the fittest," whereby the fitter strings generate more offspring and therefore have a higher chance of passing their "genetic information" on to future generations.
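
One common way to implement this fitness-proportional mating is so-called "roulette-wheel" selection. The following is a minimal sketch of the idea in Python, assuming a fitness() function that returns a non-negative score:

    import random

    def roulette_select(population, fitness):
        # Pick one parent with probability proportional to its fitness.
        total = sum(fitness(s) for s in population)
        spin = random.uniform(0, total)
        running = 0.0
        for string in population:
            running += fitness(string)
            if running >= spin:
                return string
        return population[-1]                # guard against round-off errors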

Also note that the combination of Figures 5(b) and 5(c) reflects only the first "ranking and breeding" cycle. We would then repeat this process for many, many such cycles.

Now, let's consider the process of crossover in a little more detail. First we take two strings that have been selected to "mate" (we'll focus on strings B and D from Figure 5). Next, we randomly select a crossover point, which can occur anywhere throughout the length of the strings, and we merge the left side of string B with the right side of string D, and vice versa. This generates two new strings as illustrated in Figure 6.

Figure 6. After generating a random crossover point, we fragment the parent strings around this point and recombine them to generate their offspring

For the purposes of simplicity, Figure 6 only considers strings containing 10 bits; in reality, the strings used by genetic algorithms can be hundreds of bits in length.
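
In code, this single-point crossover is almost a one-liner. Here's a minimal sketch, representing each string as a Python list of 0s and 1s:

    import random

    def crossover(parent_a, parent_b):
        # Swap the parents' tails around a randomly selected point.
        point = random.randint(1, len(parent_a) - 1)
        child_1 = parent_a[:point] + parent_b[point:]
        child_2 = parent_b[:point] + parent_a[point:]
        return child_1, child_2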

One problem with the crossover technique is that it's possible to lose valuable "genetic" material. For example, if a number of strings evolve to have a group of 0s at the same location, then mating these strings will never manage to generate 1s in those positions. Thus, another important component of genetic algorithms is that of mutation. Following crossover, every bit in each of the new strings has some finite chance of undergoing mutation (that is, our algorithm might decide to flip its state from a 0 to a 1, or vice versa).
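
In code, this per-bit mutation might look something like the following sketch (the specific rate shown is just an example, as discussed next):

    import random

    def mutate(string, rate=1 / 10_000):
        # Independently flip each bit with a small, fixed probability.
        return [1 - bit if random.random() < rate else bit for bit in string]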

Of course, the probability of mutation is maintained at a very low level (say, a chance of 1 in 10,000); also note that each bit is treated independently, such that the mutation of one bit doesn't affect the probability of surrounding bits being mutated. From the above we can derive a high-level view of the genetic algorithm process as illustrated in Figure 7.

Figure 7. High-level pseudo-code representation of the process

Note that all of the actions in the inner loop would equate to one cycle or generation, and that termination of the loop may occur after some specific number of cycles (which could number in the tens of thousands) or when one or more solutions come close enough to the desired result.
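
Pulling the earlier sketches together, one plausible rendering of Figure 7's loop in Python might be as follows (the population size, generation count, and fitness() function are all placeholders for whatever the real problem demands):

    import random

    def genetic_algorithm(fitness, length=100, pop_size=50,
                          generations=10_000, target=None):
        # Evolve a population of bit strings using the roulette_select(),
        # crossover(), and mutate() sketches shown earlier. Assumes that
        # fitness() returns a non-negative score (higher is better).
        population = [[random.randint(0, 1) for _ in range(length)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)   # rank by fitness
            if target is not None and fitness(population[0]) >= target:
                break                                    # close enough; stop early
            parents = population[:pop_size // 2]         # discard low-ranking strings
            offspring = []
            while len(parents) + len(offspring) < pop_size:
                a = roulette_select(parents, fitness)
                b = roulette_select(parents, fitness)
                for child in crossover(a, b):
                    offspring.append(mutate(child))
            population = parents + offspring[:pop_size - len(parents)]
        return max(population, key=fitness)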

This discussion has focused on digital variables such as switches, but genetic algorithms can easily handle analog variables as well. For example, the angular position of the potentiometers in Figure 1 could easily be encoded into a certain number of bits; a 1-byte (8-bit) field would allow us to encode the 180-degree range of one of the potentiometers to an accuracy of 0.7 degrees, and more bits could be used to increase this accuracy as required. Also, multiple potentiometers simply equate to more bits in the main string, and both analog and digital quantities can easily be mixed together, because the genetic algorithm just sees them as strings of 0s and 1s.
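
As a quick sanity check on that claim, here's how the -90 to +90 degree range might be packed into (and recovered from) an 8-bit field; the roughly 0.7-degree resolution falls out of the 180/255 ratio:

    def encode_angle(degrees, bits=8):
        # Map -90..+90 degrees onto an unsigned integer of the given
        # width, then unpack that integer into a list of bits.
        levels = (1 << bits) - 1                 # 255 steps for 8 bits
        code = round((degrees + 90.0) / 180.0 * levels)
        return [int(b) for b in format(code, f"0{bits}b")]

    def decode_angle(bit_list):
        levels = (1 << len(bit_list)) - 1
        code = int("".join(map(str, bit_list)), 2)
        return code / levels * 180.0 - 90.0      # resolution: 180/255 ≈ 0.71 degrees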

What Are They Good For?

Although the whole concept of genetic algorithms might seem a bit nebulous, their use of pseudo-natural selection and mutation manages to direct the search towards regions of high potential in the solution space. These mechanisms also allow genetic algorithms to explore a greater range of potential solutions than is afforded by more conventional search techniques, and to converge on good results in complex solution spaces faster and more efficiently than many other approaches.

One application for genetic algorithms in the electronics arena would be to fine-tune analog circuits, which often have contradictory requirements such as increasing edge rates while reducing power consumption. These problems, which contain large numbers of variables (component values) with complex interrelationships, are ideally suited to a genetic algorithm approach. In this case the algorithm would be used in conjunction with an analog simulator, where the algorithm would be used to modify the variables and the simulator would be employed to measure the results (thereby determining the fitness of the genetic population).

Two other fields of interest are those of "fuzzy logic" and "neural networks." These disciplines have been successfully combined on a number of occasions, resulting in two hybrids: neuro-fuzzy, in which the neural network is used to establish a set of fuzzy rules; and fuzzy-neural, in which fuzzy logic is used to fine-tune a neural network. Similarly, the concept of genetic algorithms opens the doors to four more hybrids: genetic-neuro, neuro-genetic, genetic-fuzzy, and fuzzy-genetic.

However, perhaps one of the more exciting areas for future research involves the possibility of combining the concepts of genetic algorithms with those of adaptive logic. This refers to a possible future generation of FPGA-like components, whereby individual portions of the device can be reconfigured on-the-fly without disturbing the operation of the rest of the device.

This would make it possible to compile new design variations in real time, which may be thought of as dynamically creating subroutines in hardware! Thus, we might consider implementing a genetic algorithm in the form of dynamically evolving hardware. The implications of all of this may make the mind boggle, but it's better than being bored!

Acknowledgements

This column was abstracted from Designus Maximus Unleashed (Banned in Alabama) with the kind permission of the publisher.

About the Author

Clive "Max" Maxfield is president of TechBites Interactive, a marketing consultancy firm specializing in high technology. Max is the author and co-author of a number of books, including Bebop to the Boolean Boogie (An Unconventional Guide to Electronics), The Design Warrior's Guide to FPGAs (Devices, Tools, and Flows), and How Computers Do Math, featuring the pedagogical and phantasmagorical virtual DIY Calculator.

In addition to being a hero, trendsetter, and leader of fashion, Max is widely regarded as being an expert in all aspects of computing and electronics (at least by his mother). Max was once referred to as "an industry notable" and a "semiconductor design expert" by someone famous who wasn't prompted, coerced or remunerated in any way.


