Searching for Chaos

Attention Conservation Notice

An attempt to find pretty bits of art by playing with coefficients on a 2D system of nonlinear difference equations (2D quadratic maps based on the Hénon map). Exploring this in sufficient depth would lead to a lot of fun in chaos theory / nonlinear dynamics-land. This is not sufficient depth.

The Generative Art Internet

What a wonderful place! People all over the world tinker with math and scripts to make beautiful art. I was looking at old blog posts from Fronkonstin and thought it would be fun to recreate the Chaotic Galaxies post and tinker with the code.

Fronkonstin was nice enough to link where he found the initial content:

Strange Attractors: Creating Patterns in Chaos

The author (Julien Sprott) was gracious and provided the book for free on his website. All of the code from the book is in some dialect of BASIC. The content in the book is a fun read; it provides an exciting build-up of strange attractors and nonlinear dynamics, starting from the simplest 1D maps. I haven't read it all, but from what I have read he brings the content to life without assuming that you are a graduate student in analysis.


My first goal is to recreate the plots that Fronkonstin made.

Difference Equations

The system of difference equations (an iterated 2D quadratic map) that this post hinges on is the following:

\(x_{n + 1} = a_0 + a_1x_n + a_2x_n^2 + a_3x_ny_n + a_4y_n + a_5y_n^2\)

\(y_{n + 1} = a_6 + a_7 x_n + a_8 x_n^2 + a_9x_ny_n + a_{10}y_n + a_{11}y_n^2\)

The Code

The code looks super simple on Fronkonstin's blog. It consists of the difference equation function definitions, a set of coefficients, a function that generates the steps iteratively, and a plotter. These pieces map to Python well.

The Equations

def attractor(x, y, z):
    """One step of the 2D quadratic map with coefficient vector z."""
    a = z[0] + z[1] * x + z[2] * x**2 + z[3] * x * y + z[4] * y + z[5] * y**2
    b = z[6] + z[7] * x + z[8] * x**2 + z[9] * x * y + z[10] * y + z[11] * y**2
    return [a, b]

The Coefficients

z1 = [1.0, -0.1, -0.2,  1.0,  0.3,  0.6,  0.0,  0.2, -0.6, -0.4, -0.6,  0.6]
z2 = [-1.1, -1.0,  0.4, -1.2, -0.7,  0.0, -0.7,  0.9,  0.3,  1.1, -0.2,  0.4]
z3 = [-0.3,  0.7,  0.7,  0.6,  0.0, -1.1,  0.2, -0.6, -0.1, -0.1,  0.4, -0.7]
z4 = [-1.2, -0.6, -0.5,  0.1, -0.7,  0.2, -0.9,  0.9,  0.1, -0.3, -0.9,  0.3]
z5 = [1.0, 0, -1.4, 0, 0.3, 0, 0, 1, 0, 0, 0, 0]

The function that generates the steps iteratively

import pandas as pd

def make_galaxy(z_input, iters=1000):
    output = [[0, 0]]
    for i in range(1, iters):
        new_vals = attractor(*output[i - 1], z=z_input)
        output.append(new_vals)
    df = pd.DataFrame(output)
    df.columns = ['x', 'y']
    return df

The Plotter

For the plotter, I'm going to use the color palette that I generated for this blog in my [Color Palette Generation] post:

colors = ['#a252f2', '#ae6bf2', '#f9bf76', '#d1313d', '#e5625c', '#8eb2c5', '#4dc9e3']
colors_extended = colors + ['#4dc9e3', '#1f3f6c', '#a96e14', '#a129df', '#00ff9f']

from random import choice
import matplotlib.pyplot as plt
import numpy as np

def plot_galaxy(df, color_palette=colors_extended, save_location=None):
    # give each point a random color from the palette
    color_vals = [choice(color_palette) for _ in range(len(df))]
    _, ax = plt.subplots()
    ax.scatter(df['x'], df['y'], c=color_vals, alpha=np.random.uniform(0.5, 0.9), s=3)
    plt.tick_params(left=False, right=False, labelleft=False,
                    labelbottom=False, bottom=False)
    if save_location:
        plt.savefig(save_location)

Now we have all the pieces and can generate some plots!

for z in [z1, z2, z3, z4, z5]:
    plot_galaxy(make_galaxy(z))

Initial Curated Plots


Z1 Coefficients: [1.0, -0.1, -0.2, 1.0, 0.3, 0.6, 0.0, 0.2, -0.6, -0.4, -0.6, 0.6]


Z2 Coefficients: [-1.1, -1.0, 0.4, -1.2, -0.7, 0.0, -0.7, 0.9, 0.3, 1.1, -0.2, 0.4]


Z3 Coefficients: [-0.3, 0.7, 0.7, 0.6, 0.0, -1.1, 0.2, -0.6, -0.1, -0.1, 0.4, -0.7]


Z4 Coefficients: [-1.2, -0.6, -0.5, 0.1, -0.7, 0.2, -0.9, 0.9, 0.1, -0.3, -0.9, 0.3]


Z5 Coefficients: [1.0, 0, -1.4, 0, 0.3, 0, 0, 1, 0, 0, 0, 0]

Z5 is the Hénon map, a special case of our more general map.
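As a quick sanity check that z5 really is the Hénon map, we can iterate the general quadratic form and the textbook Hénon recurrence side by side and confirm they agree (the attractor step is restated here so the snippet stands alone):

```python
def attractor(x, y, z):
    """One step of the general 2D quadratic map."""
    a = z[0] + z[1] * x + z[2] * x**2 + z[3] * x * y + z[4] * y + z[5] * y**2
    b = z[6] + z[7] * x + z[8] * x**2 + z[9] * x * y + z[10] * y + z[11] * y**2
    return a, b

z5 = [1.0, 0, -1.4, 0, 0.3, 0, 0, 1, 0, 0, 0, 0]

# iterate both forms from the same starting point
x, y = 0.0, 0.0
hx, hy = 0.0, 0.0
for _ in range(20):
    x, y = attractor(x, y, z5)
    # classic Hénon recurrence: x' = 1 - 1.4 x^2 + 0.3 y, y' = x
    hx, hy = 1 - 1.4 * hx**2 + 0.3 * hy, hx
```

The two trajectories coincide, since the zero coefficients kill every term except the Hénon ones.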

After generating the plots on the blog, I hoped to begin tweaking the coefficients and generating some new pictures.

Fronkonstin gives these words of encouragement:

Changing the vector of parameters you can obtain other galaxies. Do you want to try?

How were the effective coefficients found by others?

It seems that there is an island of bounded solutions in parameter space for these types of maps. Here is a view of the ab parameter plane for the simple Hénon map:

From the book:

As with the logistic map, the Hénon map has unbounded solutions, chaotic solutions, and bounded nonchaotic solutions, depending on the values of a and b. The bounded nonchaotic solutions may be either fixed points or periodic limit cycles. Figure 8-1 shows a region of the ab plane with the four classes of solutions indicated by different shades of gray.

The bounded solutions constitute an island in the ab plane. On the northwest shore of this island is a chaotic beach, which occupies about 6% of the area of the island. The chaotic beach has many small embedded periodic ponds. The boundary between the chaotic and the periodic regions is itself a fractal.
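A rough way to reproduce a picture like Figure 8-1 numerically is to iterate the Hénon map across a grid of (a, b) values and record which orbits stay bounded. This is only a sketch; the grid ranges, iteration count, and divergence cutoff are my own guesses, not the book's settings:

```python
import numpy as np

def henon_bounded(a, b, n=300, cutoff=1e6):
    """True if the Hénon orbit from (0, 0) stays below cutoff for n steps."""
    x, y = 0.0, 0.0
    for _ in range(n):
        x, y = 1 - a * x * x + b * y, x
        if abs(x) > cutoff:
            return False  # the orbit escaped to infinity
    return True

# coarse scan of the ab plane; True cells form the "island"
a_vals = np.linspace(-0.5, 2.0, 60)
b_vals = np.linspace(-0.5, 0.5, 40)
island = [[henon_bounded(a, b) for a in a_vals] for b in b_vals]
```

Plotting `island` with imshow would give a crude silhouette of the island; distinguishing chaotic from periodic cells would additionally need a Lyapunov-exponent test.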

There is also more information on these boundaries [here].

Like all bouts of tinkering, I need to read some source material if I want to get better results! Thankfully, the book is available for free on the author's website!

The Book & Encoding

In the book, the author uses a coding system to represent different coefficient values. It looks like this:

key system

Then he outputs the key at the top of each plot:

book example

In this case, he leads with an E as a code for a quadratic 2-dimensional map and the rest of the letters translate to coefficients in the key.

For example, J = -0.3, T = 0.7, S = 0.6, M = 0.0, B = -1.1, and so on until you have all 12 coefficients for this map. This becomes handy further in the book as the degree of the polynomials increases (and allows for even more complexity of output).

If I do a follow-up post, this method could be used to make more intricate strange attractors (with higher-degree polynomials).

from typing import List

def letter_range(start, stop, step=1):
    """Yield the letters from start (inclusive) to stop (exclusive)."""
    for ord_ in range(ord(start), ord(stop), step):
        yield chr(ord_)

def translate_to_coefs(input_str: str) -> List[float]:
    # map A -> -1.2, B -> -1.1, ..., stepping by 0.1 per letter
    coefs = {l: d / 100 for l, d in zip(letter_range('A', '['), range(-120, 140, 10))}
    # drop the E that leads the string
    return [coefs[a] for a in list(input_str)[1:]]
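To sanity-check the letter mapping, here is a compact, self-contained version of the decoding with a worked example (the code "EAMY" is made up for illustration and is not from the book):

```python
from string import ascii_uppercase

def decode(code):
    # A -> -1.2, B -> -1.1, ..., each letter steps the coefficient by 0.1
    table = {letter: round(-1.2 + 0.1 * i, 1) for i, letter in enumerate(ascii_uppercase)}
    # the leading letter (e.g. 'E') encodes the map type, not a coefficient
    return [table[c] for c in code[1:]]

decode("EAMY")  # → [-1.2, 0.0, 1.2]
```

This matches the book's examples quoted above: J = -0.3, T = 0.7, S = 0.6, M = 0.0, B = -1.1.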

def plot_galaxy_with_key(key, color_palette=colors_extended, save_location=None):
    df = make_galaxy(translate_to_coefs(key))
    color_vals = [choice(color_palette) for _ in range(len(df))]
    fig, ax = plt.subplots()
    plt.title(key, loc='left', color='#f9bf76')
    ax.scatter(df['x'], df['y'], c=color_vals, alpha=np.random.uniform(0.5, 0.9), s=3)
    ax.set_facecolor('#1B1D21')  # this is the background color of my blog :D
    plt.tick_params(left=False, right=False, labelleft=False,
                    labelbottom=False, bottom=False)
    if save_location:
        plt.savefig(save_location)


for code in codenames:  # codenames: the letter codes collected from the book
    plot_galaxy_with_key(code)

Unfortunately, randomly choosing these values doesn't seem to be super helpful (we tried that above), but the encoding does allow us to recreate some of the nice figures from the book with a simple codename:

additional plots

Searching For More

It would be helpful if we had a way to directly determine if a given set of coefficients leads to a chaotic output without viewing the plots.

It seems this is a difficult problem: How to know whether an Ordinary Differential Equation is Chaotic?
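While rigorous answers are hard, a common numerical heuristic for iterated maps like ours is to estimate the largest Lyapunov exponent: follow two nearby orbits, renormalize their separation each step, and average the log stretch rate. A positive exponent is the usual fingerprint of chaos. Here is a minimal sketch (the step function is restated so the snippet runs on its own; the transient length and perturbation size are my own choices). For the Hénon coefficients z5 this should land near the published value of about 0.42:

```python
import math

def attractor(x, y, z):
    a = z[0] + z[1] * x + z[2] * x**2 + z[3] * x * y + z[4] * y + z[5] * y**2
    b = z[6] + z[7] * x + z[8] * x**2 + z[9] * x * y + z[10] * y + z[11] * y**2
    return a, b

def largest_lyapunov(z, n=5000, d0=1e-8, transient=100):
    """Two-orbit (Benettin-style) estimate of the largest Lyapunov exponent."""
    x, y = 0.0, 0.0
    for _ in range(transient):  # let the orbit settle onto the attractor
        x, y = attractor(x, y, z)
    xp, yp = x + d0, y          # a nearby perturbed orbit
    total = 0.0
    for _ in range(n):
        x, y = attractor(x, y, z)
        xp, yp = attractor(xp, yp, z)
        d = math.hypot(xp - x, yp - y)
        total += math.log(d / d0)
        # pull the perturbed orbit back to distance d0 along the same direction
        xp = x + d0 * (xp - x) / d
        yp = y + d0 * (yp - y) / d
    return total / n
```

A search loop could then keep only coefficient vectors whose estimated exponent is positive, instead of eyeballing every plot.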

Makeshift Measures of Chaotic Bulbousness

What a neat title! That could be a band name.

What I want is to find a way to consistently generate bulbous (roundish, relatively self-contained) strange attractors.

How do we measure the bulbousness of a strange attractor?

The idea:

  • generate a convex hull around the strange attractor

convex hull

  • figure out how many clusters there are in the data. This is akin to cross-validation for a clustering algorithm.
  • divide the area of the convex hull by the number of cluster groups

bulbous = convex hull area / k groups

import numpy as np
from scipy.spatial import ConvexHull
from gap_statistic import OptimalK

def bulbous_measure(df):
    # get the convex hull area (in 2D, ConvexHull's volume attribute is the area)
    hull_area = ConvexHull(df.values).volume
    # get the optimal number of clusters via the gap statistic
    optimal_k_clusters = OptimalK(parallel_backend='multiprocessing')(df.values, cluster_array=np.arange(1, 10))
    return hull_area / optimal_k_clusters

!!! Note that in two dimensions, the ConvexHull volume attribute is actually the area (and its area attribute is the perimeter)

For our results, we want something that is mostly contained in a single group (or a couple of groups) and has a relatively large area. We seek to maximize our bulbous area.

valid_galaxies = []
for _ in range(10000):
    # generate a random coefficient array
    z = [round(np.random.uniform(-1.2, 1.2), 1) for _ in range(12)]
    try:
        df = make_galaxy(z)
        # check if out of bounds
        a, b = df.max().to_list()
        # cast a bigger net
        if 0.5 < abs(a - b) < 1.5:
            # get bulbous area
            bulbous = bulbous_measure(df)
            if bulbous > 1:
                valid_galaxies.append((df, bulbous))
    except OverflowError:
        # diverging orbits blow up; skip them
        continue

Of the 10,000 iterations, the algorithm found 12 results.

Bulbous Measure Plots

Did it work? Kind of. Here are the plots in descending order of the bulbous measure:













Enhancement Ideas

  • Find a way to get rid of outliers so that the numerator (area) isn't large for very diffuse images
    • Use the optimal K algorithm above to fit a Gaussian Mixture Model and then eradicate values that are far from the centers
    • Fit a linear model and use Cook's Distance to remove points that give a lot of pull
    • Use an anomaly detection algorithm like Isolation Forest to remove outliers
  • Some kind of variance/concentration measure
  • We could check for cycles; once we find one (if we find one), take the average pairwise distance between each point (assuming the set of points in a cycle is manageable), then remove the points whose average distances exceed some cutoff.
  • Throw a neural network at it
    • Label a dataset of images that are appealing to the eye along with their coefficient sets
    • ([coefficient_1, ..., coefficient_n], appealing_binary_indicator)
    • Train a network to predict (for a given set of coefficients) whether it would lead to an appealing image
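The outlier-removal idea can be sketched quickly: drop the points an Isolation Forest flags before computing the hull area, so one stray arm can't inflate the numerator. The helper name and contamination level here are illustrative choices, not something from the post:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from scipy.spatial import ConvexHull

def trimmed_hull_area(points, contamination=0.05, seed=0):
    """Convex hull area after dropping points flagged as outliers."""
    labels = IsolationForest(contamination=contamination,
                             random_state=seed).fit_predict(points)
    # fit_predict returns 1 for inliers, -1 for outliers
    return ConvexHull(points[labels == 1]).volume

# demo: a tight cloud plus one far-away point
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 2))
spiked = np.vstack([cloud, [[50.0, 50.0]]])
# the trimmed hull ignores the spike, so its area is far smaller than the raw hull's
```

Swapping `ConvexHull(df).volume` for `trimmed_hull_area(df.values)` in bulbous_measure would be the minimal integration.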
