SIGAVDI #92: Breakfast Smoothie Edition

Hello friends,

It’s been a hectic couple of weeks. I’ve been doing a lot of RSpec test suite maintenance. I have lots of Big Opinions about how to do spec suites well, but the truth is that the combination of Ruby, Rails, RSpec, and FactoryBot furnishes so many opportunities to grow thickets of impenetrable test code that I can’t fault anyone who finds themselves with a flaky, slow-running, hard-to-update mess on their hands. Test suites are best when they are boring, and these tools offer many, many opportunities to make them… interesting.

What’s in my head

Recently I saw a reference to an AI paper asserting that algorithms, unlike those sketchy, unreliable humans, don’t make poor decisions due to fatigue. This amused me.

A long time ago I worked on hard realtime ATC radar software for multi-runway airports. A lot of the work that went into that system involved graceful degradation as the system became overwhelmed. What do you do when there are more “targets” (aircraft with transponders) than the system was specced to handle? Do you fail in a surprising way? Do you handle it in a way that just silently loses radar tracks? Or do you fail predictably?
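For flavor, here’s a toy sketch in C of the “fail predictably” option. It is purely illustrative; it’s nothing like the actual radar code, and every name in it is invented. The idea: a fixed-capacity track table that reports saturation explicitly, so the operator display can say “saturated” instead of showing a quiet gap where an aircraft used to be.

```c
#include <stdio.h>
#include <stddef.h>

#define MAX_TRACKS 4 /* the specced capacity; tiny here for demo purposes */

typedef struct {
    int    transponder_id;
    double range_m;
    double bearing_deg;
} Track;

typedef enum { TRACK_OK, TRACK_TABLE_FULL } TrackStatus;

static Track  tracks[MAX_TRACKS]; /* fixed storage, reserved up front */
static size_t track_count = 0;

/* Admit a new target, or refuse loudly when over capacity, so the
 * failure is predictable and observable rather than silent. */
static TrackStatus admit_track(Track t) {
    if (track_count >= MAX_TRACKS)
        return TRACK_TABLE_FULL; /* structured degradation, not a surprise */
    tracks[track_count++] = t;
    return TRACK_OK;
}

int main(void) {
    for (int id = 1; id <= 6; id++) {
        Track t = { .transponder_id = id, .range_m = 1000.0 * id, .bearing_deg = 0.0 };
        if (admit_track(t) == TRACK_TABLE_FULL)
            printf("track table saturated; rejected target %d\n", id);
    }
    return 0;
}
```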

The thing is, computers make poor decisions, full stop. They make poorer decisions at the limits of their capacity. In the best case, someone predicts and plans for the likely types of exhaustion (memory, thread count, I/O, etc. etc.) and structures the degradation. In the worst case, the response to overwhelm is nondeterministic.

And computers absolutely become fatigued too. Anyone who has ever rebooted their PC because it was getting slow and acting weird knows this. Fatigue is an artifact of complex systems, not some uniquely biological thing.

Fatigue can happen on very small timescales (garbage collection) or on larger ones (oops, we leaked a few file descriptors a day until we ran out). The more complex a system, the more unavoidable fatigue is. Evolution has had four billion years to work on this problem, and its latest and most successful model spends eight hours out of every twenty-four rebooting. Apparently the flexibility has more utility than the availability.
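That slower kind of fatigue tends to look like this contrived C sketch (file name and function invented for the example): every call leaks one descriptor, the process hums along fine for days, and then open() starts failing with EMFILE, far away in time from the line that caused it.

```c
#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <stdio.h>

/* Append one sensor reading to a log file. The bug: fd is never
 * closed, so each call leaks one file descriptor. */
void log_reading(double value) {
    int fd = open("/tmp/readings.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0) {
        perror("open"); /* the eventual symptom: EMFILE, long after the cause */
        return;
    }
    dprintf(fd, "%f\n", value);
    /* missing: close(fd) -- the slow leak that fatigues the process */
}

int main(void) {
    for (int i = 0; i < 100000; i++)
        log_reading((double)i); /* works, works, works... then doesn't */
    return 0;
}
```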

The field of hard realtime programming is, in one sense, an attempt to minimize the possibility of that kind of fatigue by locking down sources of complexity. If your program never allocates, it can never become GC-bound. But that also limits the flexibility of such systems.
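In C terms, that discipline looks something like the following minimal sketch (all names invented): reserve every buffer at link time and hand them out from a fixed pool, so there is no allocator — and hence no allocator-shaped fatigue — at runtime, only a bounded, predictable “pool exhausted” condition.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define POOL_SIZE 64

typedef struct {
    double data[16];
    bool   in_use;
} Buffer;

static Buffer pool[POOL_SIZE]; /* all storage fixed at link time: no malloc, ever */

/* Hand out a free buffer, or NULL when the pool is exhausted --
 * a bounded, predictable failure instead of an OOM kill or GC pause. */
Buffer *buffer_acquire(void) {
    for (size_t i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].in_use) {
            pool[i].in_use = true;
            return &pool[i];
        }
    }
    return NULL;
}

void buffer_release(Buffer *b) {
    b->in_use = false;
}

int main(void) {
    Buffer *b = buffer_acquire();
    if (b == NULL) {
        puts("pool exhausted");
        return 1;
    }
    b->data[0] = 42.0;
    buffer_release(b);
    return 0;
}
```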

In fact, that discipline limits such systems to making “decisions” in only the most trivial sense of the word. The more human-like the judgment required, in terms of richness of input and context, the more complexity is required, and the more potential there is for systems operating at their limits to screw up.


That’s all for today. As always, I welcome replies… or come chatter with me on the Tensegrity Discord, a perk of all Patreon subscriptions.

Thanks for reading!

Avdi
