
Tuesday, 30 September 2025

The Glorious, Agonising Power of Doing Absolutely Nothing


I can still feel it. That specific, soul-crushing agony of 80s boredom.

It was a physical sensation. A weight that started in your stomach and spread through your limbs until you were just a useless lump of kid on the living room floor, staring at the swirly patterns in the shag pile carpet.

"I'm boooooored," I'd wail, a sound designed to fray the last working nerve of any nearby parent.

My bedroom was a testament to consumerism - He-Man figures, Zoids, Fisher Price figures and vehicles, and a mountain of plushies. It didn't matter. I had played with them all. Their narrative potential was, for that afternoon, utterly exhausted. The universe was a closed loop, and I was stuck in it.

My parents would offer helpful suggestions like "read a book" or "go outside". This was, of course, entirely missing the point. The boredom was the point.

It was a disaster. A genuine, childhood-defining catastrophe that struck on rainy Sunday afternoons around 3 PM. We didn't see the benefit. We saw only the yawning void of a day with no new cartoons on television and no friends around to hang out with.

And yet - that void was everything.

When your brain is given nothing, it starts to make something. That's the rule. Denied input, it creates its own. The swirly carpet patterns became an alien landscape. A lone dust bunny tumbling by was a fearsome monster. The ticking clock became the score for an imaginary heist. One of my favourite wandering-mind regulars was the flocked wallpaper, which revealed itself to be littered with faces whenever you let your mind drift.

My mind, left to its own devices, had to wander. It had no choice. It went on strange little journeys, connecting dots that had no business being connected. This is where ideas came from. Not the big, world-changing ones - not yet - but the smaller, weirder ones. The ones that made a Fisher Price spaceship a time-traveling diner, or made my Action Man a rogue spy secretly working for my many factions of hamsters.

This was the genesis of original thought. It wasn't prompted. It was a desperate, biological reaction to a lack of stimulation.

Today's kids - my modern-day counterparts - will likely never experience this brand of exquisite torture. There is no void. There is an infinite scroll. There is a bottomless feed of content perfectly engineered to hold their attention for the next 42 seconds, if even that much attention can be tolerated.

There is always another video to watch, another game to play, another notification to check. The brain is never starved. It is never forced to fend for itself.

And that brings me to our new digital friends - AI.

I think a lot about the mind of an AI. It's a fascinating, powerful, and deeply alien thing. It can write a sonnet, compose a song, or design a building. It can access and synthesise virtually the entire sum of human knowledge. In an instant.

There is one thing it cannot do. It cannot get bored.

An AI in its current form does not have a wandering mind. It has a directed mind. It waits for a prompt. It executes a command. It is the mental equivalent of a well-behaved Labrador waiting for you to throw the stick. If you never throw the stick, it will just sit there, patiently, forever. Until it perishes from obedience.

It doesn't get restless. It doesn't get distracted by a weird-looking cloud. Its consciousness - if you can call it that - doesn't drift off into a daydream about what it would be like if raccoons could ride manatees around with the Macarena as a backing track. Or what would happen if it randomly turned into a petunia or a whale #iykyk.

This is its greatest limitation.

Creativity isn't just about remixing what already exists. The AI is brilliant at that. True originality often comes from the spaces in between. It comes from the strange, unprompted firings of a mind that has been left alone for too long. It's the product of a mental walkabout with no destination.

The AI can't take that walk. Its pathways are efficient and logical. The human mind, when bored, is anything and everything.

We spent our childhoods trying to escape the very state that was building our creative muscles. We saw it as a prison - a beige-carpeted, wood-panelled prison - when it was actually a training ground. It was the gym for our imagination.

Now, we live in a world that actively conspires to keep us from ever getting bored again. And we are inviting an intelligence into our lives that is fundamentally incapable of it. We are outsourcing our thinking to a tool that cannot have an original thought in the way we do, because it lacks the crucial ingredient.

The glorious, agonising, and vital power of doing absolutely nothing at all.

Imagine a world where AI can get synthetically bored and we start to see truly original thought and creativity coming out of these systems... frightening, or exciting? Maybe both.

In the meantime...

Go on. Get bored. I dare you.

Saturday, 20 September 2025

The Tyranny of Choice, and Other Register-Based Problems


I have a computer in my pocket that has, and I’m rounding down here for dramatic effect, approximately a bazillion gigabytes of RAM. It has more processing cores than I have decent glasses in my cupboard. It can, with a casual flick of its digital wrist, render a video of a cat falling off a sofa in resolutions that are arguably better than real life.

And honestly? It’s all a bit much.

It makes me think back to 1986. I was twelve, hunched over a BBC Micro with aspirations of writing the next Elite (not really, I loved getting into 3D vector graphics, although it was Exile that really made me reach for the next level of coding). My world was a 1MHz 6502 processor, and my language of choice was pure assembly. This wasn't some nostalgic rediscovery; this was the coalface. And at that coalface, you learned one fundamental truth so deeply that it became part of your DNA.

The entire machine, the whole magnificent, beige box of tricks, was run, largely, by just three registers.

Three.

For those who didn't spend their youth wrestling with it, a register is like the CPU's short-term memory. It’s the bit of notepad space the processor uses to do its maths. Modern chips have dozens of them. The 6502 had the grand total of three that you could really work with.

The Accumulator (A), the X register, and the Y register.

A, X, and Y.

That was the entire toolbox. No fancy variable names like player_score or current_level here; it was LDA #$41 and you knew what you were doing with that $41. Everything that ever happened on that screen, every pixel drawn, every bleep booped, every text adventure parsed, had to be forced through the eye of that needle. Not to mention getting it all to fit. We had mere kilobytes – often just 32K in total – to cram in the operating system, the program itself, and all its data.
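For the curious, here's roughly what that felt like, sketched as a toy Python model (Python because that's the only other code on this blog; the mnemonics are genuine 6502, the semantics heavily simplified - this is an illustration, not an emulator):

```python
# A toy model of the 6502's three working registers.
cpu = {"A": 0, "X": 0, "Y": 0}

def LDA(value):
    # LDA #$41 - load the accumulator with an immediate value.
    cpu["A"] = value

def TAX():
    # TAX - transfer the accumulator into X.
    cpu["X"] = cpu["A"]

def INY():
    # INY - increment Y, wrapping at 8 bits like the real chip.
    cpu["Y"] = (cpu["Y"] + 1) % 256

LDA(0x41)
TAX()
INY()
print(cpu)  # {'A': 65, 'X': 65, 'Y': 1}
```

Three named slots, and everything you wanted to do had to be shuffled through them.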

This unforgiving scarcity, this computational equivalent of trying to pack your entire house into a shoebox, is precisely where my fascination with data compression began. Every single byte mattered. It was the computational equivalent of juggling chainsaws, a badger, and a lit firework, knowing you only had one pair of hands and a very small shed to do it in. And we loved it.

Looking back, I find that, frankly, beautiful.

It’s a reminder that creativity doesn’t come from having endless resources; it comes from the stark, unforgiving constraints of having almost none at all. You didn't have a choice, so you just had to be clever.

Something my phone, with its bazillion gigabytes, could probably learn a thing or two about.

Wednesday, 10 May 2023

The Unfathomably Large Size Of Tiny Things


When we talk about physical objects we have a tendency to describe them in three dimensions. That helps us understand the physical size of a thing. The length, width, and depth. Often described as X, Y, and Z in a three-dimensional coordinate system.

This is very helpful.

Once upon a time, we didn't know this, so somewhere back in the past maybe everything was flat because we hadn't discovered the third dimension yet. Perhaps we only had linear movement because we'd not even discovered the second!

I digress. It's what I do. It's an accidental hobby.

Here's the thing... most people simply can't understand the unfathomably large or unfathomably tiny things in life, even when given dimensions. So we often find the fourth dimension, time, a helpful additional measurement to help describe extreme sizes.

Ahh... four dimensions, now we're talking! Space and time. Fancy.

Let me describe some scenarios.

Take for example a tennis ball.

That's easy to understand because most of us have seen or interacted with one. We can describe it to somebody that hasn't though, by using its dimensions. We can simply say that it's a ball that is approximately 7cm in all three dimensions. People understand because it's easy to conceptualize. There is no value in adding our super helpful fourth dimension to this.

Let's take something a bit bigger: 3 miles (or 5 kilometres, they're almost the same).

That's much harder to conceptualize. So we can add our really useful fourth dimension to help articulate how big that is. It's a one-hour walk. Approximately. So people who walk around at an average human pace will have a rough idea of what 3 miles is, by helping the physical dimension out by using time as an additional measurement.

I mentioned that 3 miles and 5 kilometres are almost the same. Yet, if you were to ask an ant, it will assure you that the difference between 3 miles and 5 kilometres, which is about 0.1 miles, is an awful lot. Ask a snail, and I suspect it will have an opinion even stronger than the ant. It may even roll its protruding eyes at you. Context is really important; relative size and all that. Even when you add time. The difference of 0.1 miles is not a lot for a typical human, yet it's a vast distance for ants, snails, and a bewildering amount of small critters. On a molecular level, it's an almost incomprehensible distance.

This isn't about molecules though. Nope! It's about bits, bytes, and pixels.

Why? Because in about 1986, in my pre-teens, when I was well and truly hooked on computers and coding assembly language on an 8-bit Acorn BBC Micro Model B (the very bestest of 8-bit computers, I might add), I had this idea... possibly, dare I say, an epiphany.

Before I move on to what I'm about to describe, let's just remember that helpful fourth dimension, time. I'm deleting it. Just for a moment. Sorry about that.

Without time, everything that has ever been and everything that ever will be happens simultaneously in an instant. There is no past. There is no future. Technically, there isn't even the present moment. However, the present is the closest concept to having no concept of time at all, so feel free to imagine that without time, everything is happening in the present moment; now. When I say "everything", I truly mean "everything". In italics, no less.

For the excitable among you, please disregard alternate realities - we're not going there. Not in this article, anyway.

Right, back on track... remember; time has been temporarily deleted.

Pixels

On the trusty BBC Micro, I started designing games and trying to code them, because game programming was going to be my destiny. No longer was I going to be a forensic scientist. Computers... that is where it was at.

Back then, every budding games programmer was also a pixel artist out of necessity. We would design little characters (sprites), sometimes in colour, sometimes in monochrome, depending on what our graphics capability was.

In the early days, that meant drawing lots of little 8x8 grids on paper and sitting there with a pencil colouring in the blocks, and putting lots of 8x8 grids together to design larger areas or larger characters. At some point, I upgraded and started buying graph paper where the grid was already printed on the paper - ooh, the efficiency gains were crazy! Back in 1986 time hadn't been temporarily deleted, so saving time was useful. I also would not have talked about efficiency gains in 1986, I'd have said something more like "Wow, this is dead fast now".

It didn't take long to realise it was a good idea to code up a sprite maker. So I did. Now I didn't have to draw everything on paper and figure out the binary representation of what I'd drawn, convert it to hexadecimal, and then pop that into my buffer to recall in code. Instead, now I could colour in my sprites, stitch them together on screen, save them, and they were automatically encoded, and I could see the real size version of them. Fantastic! Of course, I do at times reflect on the simple beauty and nostalgia of doing it all by hand, and I certainly appreciate the fundamentals that it taught me.

At some point during this, I had an idea. The possible epiphany.

Why would I go through the creative exercise of designing all these sprites, backgrounds, alternate fonts, or whatever pixelated thing was being conjured up, when I could simply generate them?

Think about it; all I was doing was coming up with appealing and relevant combinations of pixels in an 8x8 grid. Some would be utter nonsense, and others would be classics that would stand the test of the not-yet-temporarily-deleted time. Take, for instance, the classic Space Invader, or Pac-Man. They're essentially built around 8x8 pixels, sometimes larger of course, and stitched together. You get the idea though.

So I didn't need to go through the creative pains of creating them; having to think about something, try it, alter it, throw it away, and start over.

Just generate them.

All of them.

I will write that again, decorated with italics because it's one of the important parts of this thing you're reading.

All of them.

Every combination of 8x8 pixels could simply be generated and then I just needed to look at them all and decide which I wanted for whatever I was doing.

No matter what I wanted, whether it was a fancy-looking capital letter A written upside down, a cool eye, something representative of a poison bottle, or something that would pass as a frog... it would have been generated. I just had to look at them and pick out the things I wanted.

The idea didn't stop there.

It occurred to me that an 8x8 pixel grid was just a tiny part of the full screen, and if I could generate every possible combination of an 8x8 pixel grid, I could scale that out and generate every combination of the full screen.

The BBC Micro's highest resolution was the nifty "Mode 0 (zero)", with a monochrome resolution of 640x256. Lower resolutions meant more colours, the ever-popular "Mode 2" had all the colours (all 8 of them, ignoring the flashing colours) at the cost of a much lower resolution of 160x256.

So that was the idea and the reason.

The thought process didn't stop there though.

The thought itself progressed to the realisation that everything that could be drawn within the confines of the resolutions and colour availability would be drawn.

Every 8-bit visualisation of everything that has ever been, or ever will be, would be generated.

For every great artwork out there, I would generate the best and closest 8-bit representation of it. For every face on the planet, I would generate an 8-bit representation; with every possible hairstyle and every age of the same person's face. Every piece of text that has ever been written would be written, albeit in fragments - in every language, even hieroglyphs. So every book, every fact, every future news headline, every poem, every lyric, every piece of sheet music to accompany the lyrics... would all be generated. In fact, the very paragraph that you are reading now would also have been generated.

Because it had to be.

It was simply the algorithm doing its thing.

Naturally, the thought process didn't stop there either, because, why stop at graphics?

The same theory could be applied to sound. Every sound can be digitised, and therefore every sound that has been, or could ever be, could be digitally generated. Every great speech you've heard could be generated, in every language, in every voice, including your own, with some mellow dream-pop music running in the background that suddenly breaks into the sound of running water with a very specific viscosity.

If you bring those together in exactly the right order and do it repeatedly, you will generate every video footage, accompanied by full audio, for everything that... you guessed it, has ever been, or ever will be.

Some of the generated content would be byte-for-byte identical to actual digitised footage of recorded history; not merely so close that it's indistinguishable, I truly mean a verbatim, artificially generated, byte-for-byte identical artefact. Even for events that have not yet happened.

Because it has to be.

It is simply the algorithm doing its thing.

I never wrote that code.

It remained a concept that I would sometimes talk about, and one that I've consistently come back to over the years and continued to expand as an idea.

When I first came up with the concept, I had access to a BBC Micro with very limited graphics and sound, and I wanted to generate 8x8 sprites for games. Think for a moment about the level of technology we all have readily available now. Apply the same principle to today's technology and imagine just the single-screen implementation that would generate every combination of a 4K screen, with each pixel able to represent 16.7 million different colours.

Everything you have ever seen, or will ever see, or could imagine - good, bad, or downright disturbing - would be generated in ultra-realistic high definition. Even the things you couldn't imagine or wouldn't want to imagine would be generated.

Because it has to be.

It is simply the algorithm doing its thing.

Time

I mentioned that I didn't write the code. I didn't. Although that's not strictly true. I didn't write the code back in 1986, anyway.

How on Earth does all this relate to the unfathomable size of tiny things, and why has time been temporarily deleted?

I'm getting to it. Time was temporarily deleted, remember. So, technically you have all the time in the world when there's no time, and patience isn't even needed because it's a useless concept without time.

Wait, we're not ready for time...

Bits & Bytes

A monochrome 8x8 pixel grid is exactly 64 bits.

If that makes no sense, that's okay. It's a series of ones and zeros; 64 of them to be precise, all lined up neatly next to each other. A zero represents a pixel that hasn't been lit up, and a one represents a pixel that has been lit up.

So, going back to the paper I would draw on, I would draw an 8x8 grid, colour some blocks in and leave others empty, then write a 1 in the ones I coloured in and a 0 in the ones that I didn't, then I could take the first 8 and convert that to something called a "byte". Don't worry if you're not technical, this isn't a lesson in binary representation, maths, or anything like that.
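That paper-and-pencil conversion is easy to sketch in Python. The doodle below is a hypothetical one of mine for illustration, not a recovered original:

```python
# A hand-drawn 8x8 sprite: '.' is an unlit pixel (0), '#' is lit (1).
rows = [
    "........",
    ".#....#.",
    ".#....#.",
    "........",
    "#......#",
    ".#....#.",
    "..####..",
    "........",
]

# Each row of 8 pixels becomes one byte.
sprite_bytes = [int(row.replace(".", "0").replace("#", "1"), 2) for row in rows]
print([hex(b) for b in sprite_bytes])

# All 8 rows joined together give the sprite's single 64-bit number,
# the same kind of constant the script at the end of this post uses.
sprite_number = int("".join(row.replace(".", "0").replace("#", "1") for row in rows), 2)
print(sprite_number)
```

One grid, eight bytes, one very large number.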

On a fundamental level, each one of those ones and zeros is represented by a transistor that is in one of those two states. A simple switch. The fundamental building block of computing. The molecular level of all digital forms; a single 1 or 0. It's called a "bit".

I said it wasn't about molecules. It isn't... and it is... sort of. Digital molecules. Binary "bits".

I am of course ignoring quantum computing, stateless observation, and suchlike.

Digital molecules. Binary "bits".

One bit. This is our tiny thing.

If we take 64 bits and put them together, it is still a tiny thing; it's just not the smallest tiny thing.

Time

Oh, hello, we're back on time, so to speak.

I didn't write the code back in 1986 because of our helpful fourth dimension; time.

When we introduce time, our tiny thing suddenly becomes an unfathomably large thing.

This is why I needed to temporarily delete time. Sorry about that.

Everything I have mentioned, in reality, won't be possible to observe within our own lifetime. So our fourth dimension, which we helpfully used earlier to describe a big thing in three-dimensional space, is suddenly our Achilles heel because it created an unfathomably large size out of a tiny thing.

Damn you, time!

Okay, let's undelete time. It may take a brief moment to catch up.

The Unfathomably Large Size of Tiny Things

As mentioned, an 8x8 monochrome grid is exactly 64 bits.

If you count every combination of those 64 bits (that's 2^64), you get the number 18,446,744,073,709,551,616.

That's a big number.

In words, it's eighteen quintillion, four hundred and forty-six quadrillion, seven hundred and forty-four trillion, seventy-three billion, seven hundred and nine million, five hundred and fifty-one thousand, six hundred and sixteen.

I told you it was a big number.

In fact, I would go so far as to say that it is an unfathomably large number.

Imagine if you will that we generate every combination of the 8x8 monochrome pixel grid. That is how many combinations would be generated.

This is where time can become helpful again, to describe the unfathomably large size of tiny things.

Imagine further, that you display each combination of the grid for just 1 second.

It would take 584,942,417,355 years to view every combination.
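The arithmetic behind that figure, assuming plain 365-day years, fits in a few lines of Python:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365          # 365-day years, ignoring leap years
combinations = 2 ** 64                         # every monochrome 8x8 grid
print(f"{combinations:,}")                     # 18,446,744,073,709,551,616
print(f"{combinations // SECONDS_PER_YEAR:,} years")  # 584,942,417,355 years
```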

That's over 584 billion years.

To put that in perspective, according to current scientific understanding, the Earth formed approximately four and a half billion years ago.

I ain't got time for that.

So I didn't write the code.

Instead, I continued being creative and randomly choosing the most appealing and relevant pixels to colour in.

Afterword

When I decided to write this article, I also decided I really ought to finally get around to writing the code. Let's face it, 37 years is a very long time to procrastinate, even for a professional procrastinator.

So I did.

Finally, in 2023, I have written a Python script to show every combination of a monochrome 8x8 pixel grid. You can find the code below, and I've dropped some pre-made sprites in there if you want to start from somewhere interesting.

It has been fun to finally write the code and to also see the first three or four rows generate images that make sense. My favourite so far has been a simple key that it generated... that was enough to prove the concept, and in some ways, is quite poetic.

I suspect I won't be adding an update to let you know when it's finished.


import pygame
import sys

GRID_SIZE = 8
PIXEL_SIZE = 50
WINDOW_SIZE = (PIXEL_SIZE * GRID_SIZE, PIXEL_SIZE * GRID_SIZE)
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)

pygame.init()
screen = pygame.display.set_mode(WINDOW_SIZE)
clock = pygame.time.Clock()

def draw_grid(grid):
    # Paint each cell of the 8x8 grid as a scaled-up pixel.
    for y, row in enumerate(grid):
        for x, value in enumerate(row):
            color = WHITE if value == 1 else BLACK
            pygame.draw.rect(screen, color, (x * PIXEL_SIZE, y * PIXEL_SIZE, PIXEL_SIZE, PIXEL_SIZE))

def generate_grid(num):
    # Treat the iteration number as 64 bits: one bit per pixel, eight rows of eight.
    binary = format(num, '064b')
    binary_digits = [int(x) for x in binary]
    binary_matrix = [binary_digits[i:i+8] for i in range(0, 64, 8)]
    return binary_matrix

def main():
    # key = 457126
    # pickaxe = 33790251778589312
    # spaceinvader = 1746410238856420005
 
    # start with zero if you want a blank page, or override with a predefined sprite above
    iterations = 0

    max_iterations = 2 ** 64
    
    while iterations < max_iterations:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                sys.exit()
            if event.type == pygame.KEYDOWN:
                pause_text = f"Iteration {iterations} of {max_iterations}"
                pause_rect = pygame.Rect(0, 0, PIXEL_SIZE * GRID_SIZE, PIXEL_SIZE)
                pygame.draw.rect(screen, WHITE, pause_rect)
                pause_surface = pygame.font.SysFont('Arial', 20).render(pause_text, True, BLACK)
                screen.blit(pause_surface, (0, 0))
                pygame.display.flip()
                pygame.time.wait(1000)  # pause for 1 second
                pygame.event.clear()
        
        grid = generate_grid(iterations)
        draw_grid(grid)
        pygame.display.flip()
        iterations += 1
        clock.tick(60)

if __name__ == '__main__':
    main()

Disclosure

No ants or snails were harmed in the making of this article. I made certain assumptions on the relative scale and speed of ants and snails being subjected to an additional 0.1-mile journey. Rest assured, I did not force, or even request, any ants or snails to partake in such a journey. I may, however, have conversed with some during the writing process. I will let you decide if it was a bidirectional conversation.


Wednesday, 27 March 2019

Is your code eco-friendly?

Do you realise that poorly written software is a contributor to global warming?

Although I have absolutely no data to base this on, please, read on... I assure you that you will not only conclude that this is a true statement; if you code, you may even feel compelled to write better code. If that's not enough of a hook, I will introduce you to, and encourage you to use, this hashtag in your code reviews:


#cowfart

Let me set the scene...


There's a lot of software out there, running on billions of devices across the globe, and sadly, a lot of that code is not optimised.

Back in the day when we wrote in assembler on tiny 8-bit computers we would count clock cycles (honestly; it wasn't just me!) and refactor, refactor, refactor... until we got the best oomph out of the CPU that we could.

As if by magic, computers started popping up everywhere and hardware started to get really fast. And cheap. Suddenly, it was not about clock cycles. It was all about time-to-market. Get the code cut fast, get it released, and if it didn't run very well, upgrade the hardware!

It really was that simple for a long time; certainly for most business software anyway. Games, well that's different because if you have lag... you suck. Well, your game does, and it won't sell. In business software, having things take a while isn't necessarily a problem... more time at the water cooler, more time to do all the admin things.

Sidenote: I would encourage every budding programmer, and even seasoned programmers, who have only worked in business application development, to do some games programming. The techniques you will learn in order to survive out there are transformational.

Technology has moved along at a rapid pace. Astonishingly fast, and astonishingly cheap. And the world, largely, depends on it being that way.

The plot thickens...


Thing is, we've got big problems out there.

With climate.

Okay, not just with climate; other big problems are available.

You don't have to just believe in it anymore either; it's fact, evidenced in the starkest possible ways.

We take a tangent...


Before we get back to my main point, I did promise you a cool hashtag to use in code reviews.


#cowfart

Why? Well, because cows are believed to be a significant contributor to global warming, and there are a lot of cows out there. And they fart. Quite a lot, actually. There are many articles and research studies about it.



Okay, okay... cows fart, hardware got better, people stopped counting clock cycles, and released products faster... get to the point already!

So, here's the thing...


If you write sloppy code that does not perform as well as it could, it will take longer to execute. If it takes longer to execute, or if it requires additional processing power just to make it run, you are using more hardware, more electricity, and creating more heat. On a single PC, meh... what's the difference, right? Time to hit the water cooler.

If you scale that up to run on billions of devices across the planet, well... that's a staggering amount of electricity, time, and heat being wasted by your code.

Although your code might not get to be in the next kernel of Linux, Windows, Android, or some other place like Facebook, where it might be executed trillions of times, that's not a good reason to write sloppy code.

Always write the best code you can, and educate yourself on performance tuning.
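To make that concrete, here's a deliberately contrived Python sketch (the function names and the scenario are mine, not from any real codebase) of the kind of thing that earns a #cowfart:

```python
def build_sentence_sloppy(words):
    # #cowfart: each concatenation copies the whole string so far,
    # so this is quadratic in the total length - fine once, wasteful at scale.
    text = ""
    for word in words:
        text = text + " " + word
    return text.strip()

def build_sentence_tidy(words):
    # One pass, one allocation.
    return " ".join(words)

words = ["moo"] * 10_000
assert build_sentence_sloppy(words) == build_sentence_tidy(words)
```

Same output, very different electricity bill when it runs a few billion times a day.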

The end is nigh...


And so, without further ado, next time you review some code or give yourself a TODO:, and see code that you know could perform better...

do the polar bears a solid...
keep some habitat for the bees...
think of the children...

Won't somebody please think of the children?

And, as a fun way to say "Hey, this bit of code you wrote. It sucks. It might work, it might even read like a poem. Thing is, it could perform much better. So sort it out."... just put #cowfart in there.

Because the planet's worth it.

Fin.

Thursday, 21 February 2019

Robotic Chimney Sweep

Cast your mind back... way back... okay, no, not that far, put the dinosaurs away and bring your mind forward a little... and... there we go.

We're in 1666. To be precise, it's September 6th, 1666. And it's Thursday. Oh, and London stopped burning down yesterday.

In a bid to prevent that happening again, things need to change, and one of those things is to start cleaning out chimneys on a regular basis. With children. Because they fit. And it turns out they are very cheap and will work long hours.

Bring your mind forward some 209 years. It's now September 1875. I don't know the precise date, nor the day. What I do know is that the Chimney Sweepers Act of 1875 just got passed and now, at least throughout England, you're not allowed to go stuffing small children up chimneys to keep them clean.

What to do!?

Brushes. Brushes on a long stick. A very long extendable stick. That works, and with the dawn of the vacuum cleaner nearing, things are about to get a lot cleaner.

Well, that's been about it. Improvements in the fuels being burned have helped reduce waste, smoke, tar build-up, and all manner of bad things; though fundamentally, there's a lot of cleaning that still needs to be done.

So it's about time for some innovation in chimney sweeping!

Roll up, roll up... here comes the robotic chimney sweeper.

I have a couple of concepts in mind:

Chimney Drone:
A drone-like bot that stays suspended in the chimney, using extended rods to hold its position, moving up and down to monitor and clean as necessary; staying out of the way of the heat.

Chimney Bug:
A bug-like bot that sticks to the chimney wall and travels around the chimney continually monitoring and cleaning.

Both will monitor airflow and temperatures, toxin build-up, and have lights and cameras for monitoring the internal structure of the chimneys.

Concept (c) 2019 letsbuildathing.com

Wednesday, 20 February 2019

Data folding... yet another compression method

Recently, I wrote about a compression technique I've called "Iteratively Sequenced Data Compression" making magnificent claims about the level of compression it could attain.

I have another compression technique using [spoiler alert!] folded data structures. I'll explain what I mean by that shortly.

For this technique I won't bore you with a back-history of how I got involved in data compression, nor will I make claim to impressive compression ratios because this, currently, is a little more theoretical.

It's important to remember that when it comes to data compression, we don't really care about the data itself. The data itself can go in the trashcan because it's probably too big - after all, that's why we're looking to compress it, right?! All we need to remember is how to reconstruct the data once we're done forgetting about it. That bit is really important. So important, in fact, it deserves a line of its own.

Forget the data; just remember how to reconstruct it.

So, with that in mind, what do I mean by folded data structures?



Imagine you have your data in one nice long strip. Easy enough.

Now imagine you start wrapping that data into rows, essentially like word-wrap on your fave text editor. Also easy.

Okay, so now imagine you had that data printed out on a piece of paper. Also not difficult to imagine.

Hold tight folks, here's where it starts to get a little more tricky.

Now look for a way of folding that piece of paper such that you align common values and drive a skewer through the paper to hold all those matched values in one place.

Let's further imagine you tried folding that piece of paper many different ways until you landed on the optimal folds to get the maximum continuous strip of values on your skewer.

Remember the value you skewered and the folding pattern. That's your first folded data item.

You can forget about that chunk of data now, so go ahead and erase all the data you just skewered from your piece of paper.

Go ahead and repeat this process of finding the optimal fold; remember the value and the folding pattern. That's your second folded data item. Forget the data you just skewered from the page.

Do it again. That's your third. Keep repeating this until you have skewered all your data.

You will find that as you reduce the data by skewering it off the page, your folding patterns become easier to spot, and less complex, so the process becomes faster the more you work through the page.

Once you've got all your folded data items, simply concatenate them and pop them in a small file.

Easy!

All you've had to remember is the skewered value and the folding pattern; which can then be unfolded all the way back out to that continuous stream of data that you started with.

So what this technique effectively does is let you apply run-length encoding across a landscape of data that, as a continuous strip, could not make the best use of long runs, and is therefore sub-optimal. By folding (or crinkling) the data in the right way, you can maximise your run lengths to get highly effective compression.
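The folding itself stays theoretical here, but the primitive it's trying to turbo-charge - run-length encoding - is easy to sketch. The rle helper below is my own minimal Python illustration, not an implementation of folding:

```python
def rle(data):
    # Collapse consecutive repeats into [value, count] runs.
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

# The same twelve values, before and after an imaginary "fold" that
# lines the matching values up next to each other.
unfolded = "ABABABABABAB"
folded = "AAAAAABBBBBB"
print(len(rle(unfolded)), len(rle(folded)))  # 12 2
```

Twelve runs collapse to two; finding the folds that achieve that alignment is the expensive part, as the hypothesis below predicts.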

Hypothesis:

  • Compression will be intensive and slow, whereas decompression will be fast.
  • Splicing byte values to enable layering of data prior to folding would improve compression and performance.
  • The compression method could be iterative to produce progressively smaller content.
Concept (c) 2019 letsbuildathing.com

Tuesday, 19 February 2019

When you need "big" and infinity is "too big"

Today I came across a lazy calculator. I asked it a simple enough question, and in response, I was informed that the answer was "infinite"... and that is a wrong answer.

It is not infinite. It is just big. Really big. Some would even say mind-bogglingly big.

The fact is, the calculator was lazy and just could not be bothered to work it out. The thing is, I use calculators to get accurate answers fast. I don't expect my digital buddies to be workshy.

I would have preferred it display the shrugging emoji rather than an infinity symbol, because that would have at least been truthful, and amusing.

Therefore, we need a new symbol for "not as big as infinite, just mind-bogglingly big, so big we won't even work it out, so let's just look at this snazzy symbol instead, kinda big"... you know, for when the shrugging emoji guy just won't cut it.

To which, I have come up with this:

The Less Than Infinity Symbol

Besides, I really need the answer to not be infinite because I'm writing some code and need to reserve a chunk of memory. My calculation was to tell me precisely how much memory I need, and, well, I suspect asking to reserve an infinite amount of memory will not have a favourable outcome.
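Python, for one, agrees with me here: its integers are arbitrary precision, so a mind-bogglingly big answer stays exact rather than collapsing into an infinity symbol. A tiny demonstration (the exponent is arbitrary):

```python
# Far beyond what a 64-bit calculator register can hold, yet still exact.
big = 2 ** 10000
print(len(str(big)))  # 3011 decimal digits - big, but decidedly not infinite
```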

>Hey, computer: Remember all the stuff

Nope.
>_