So today, my Internet connection went out. The router got stuck overnight, so I rebooted it; no big deal. Windows, however, still shows this, even though I have a perfectly fine network connection:
If you Google it, you’ll find lots of people have the problem, across multiple versions of Windows, going back years. The solutions vary from “just reboot” to complicated registry hacks to “reinstall Windows.” 🤦‍♂️
I can’t even.
I hear anecdotal stories about “weird problems” like the one above all the time: My friend’s father can’t get the printer to work without reinstalling the drivers every time he uses it. Your cousin’s word processor crashes every time she clicks the “Paste” button and there’s an image on the clipboard. My colleague’s video glitches, but only in a video call with more than three people. And invariably, the solutions are the same: Reinstall something. This one weird registry hack. Try my company’s cleaning software!
So I’d like to let all of the ordinary, average, nontechnical people in the room in on a little secret:
This is bullshit.
All of it is bullshit. Start to finish. Nearly every answer you hear about how to “fix” your bizarre issues is lies and garbage.
Twenty years ago today, I got a Knuth Check. Knuth dated it October 14, 2001, and I received it in the mail in early November that year, I think.
I’d heard, of course, about Knuth Checks, but I didn’t really know as much about them then as I do today. I was a kid a few years out of college, just trying to eke out a living building software; and when I found a mistake in an algorithm in his book, I intended to send Dr. Knuth a short e-mail, but I found out he was (wisely) avoiding the medium, so I sent him a letter. And a few months later, I got back a check.
It’s sat in a frame on my desk for twenty years, singularly my proudest professional accomplishment. I’ve shipped tons of software, I’ve made and discovered amazing things, I’ve won other awards — but nothing holds a candle to having a Knuth Check with your name on it.
(More accurately, the original is in my safe-deposit box at the bank, and a copy has sat in a frame for twenty years, because I’m not leaving something this valuable out on my desk! 😃)
So here’s the story:
The Art of Computer Programming, Volume 1, has an algorithm on page 26 that describes an efficient solution for calculating a fractional logarithm in binary, using pure integer math. The algorithm in question derives from one first published in 1624 by Henry Briggs. At the bottom of page 26 in the 3rd Edition, the book mentions that Richard Feynman (yes, that Feynman) determined an efficient inverse algorithm for calculating a fractional exponent in binary, and it poses that to the reader as problem #28.
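If you want the flavor of it, here’s a quick C# sketch of the Briggs-style shift-and-subtract idea, in 32-bit fixed point. To be clear, this is a paraphrase of the technique, not Knuth’s exact algorithm or notation, and it builds the table with floating-point purely for brevity (the whole point is that the table could be baked in as integer constants):

```csharp
using System;

static class BriggsLog
{
    const int Frac = 32;                  // Q32: 32 fractional bits
    const ulong One = 1UL << Frac;

    // Table[k] = log2(2^k / (2^k - 1)) in Q32.
    static readonly ulong[] Table = BuildTable();

    static ulong[] BuildTable()
    {
        var table = new ulong[Frac + 1];
        for (int k = 1; k <= Frac; k++)
        {
            double p = Math.Pow(2, k);
            table[k] = (ulong)(Math.Log(p / (p - 1), 2) * One);
        }
        return table;
    }

    // Fractional log2 of x, for x in [1,2), using only shifts, adds, and
    // subtracts: whenever x can be reduced by the factor (2^k - 1)/2^k
    // without dropping below 1, reduce it, and add that factor's
    // logarithm to the running total.
    public static ulong Log2(ulong x)
    {
        ulong y = 0;
        ulong z = x >> 1;                 // z tracks x >> k
        int k = 1;
        while (x > One && k <= Frac)
        {
            if (x - z >= One)
            {
                x -= z;                   // x *= (2^k - 1) / 2^k
                z = x >> k;
                y += Table[k];            // log2 of the factor we removed
            }
            else
            {
                z >>= 1;                  // factor overshoots; try k + 1
                k++;
            }
        }
        return y;
    }

    static void Main()
    {
        ulong x = (ulong)(1.5 * One);                 // 1.5 in Q32
        Console.WriteLine((double)Log2(x) / One);     // ≈ 0.5849625
    }
}
```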
As is typical of all of Knuth’s “problems for the reader,” he includes the answer in the rather large appendix that comprises the last 150 pages of the book. The solution to problem #28 can be found at the bottom of page 470, and, unfortunately, it is not the same algorithm that Feynman discovered: It has a critical error in the first step:
E1. [Initialize.] If 1 – ϵ is the largest possible value of x, set y ← (nearest approximation to b^(1–ϵ)), x ← 1 – ϵ, k ← 1. (The quantity y·b^(–x) will remain approximately constant in the following steps.)
TAoCP, V1, 3rd ed., p. 470
The error is in the assignment x ← 1 – ϵ: it’s potentially a meaningful expression in that context, but it’s wrong, and it will calculate not the correct exponent but garbage, with, as I recall, mostly zero bits and a few semi-randomly-distributed one bits.
The correct expression is x ← 1 – x. It’s possible the other form was a typographical error on Knuth’s part, but it’s not in the same category as a simple omission or spelling error, since if you follow the mathematics as written, you will simply get garbage, and not get a valid fractional exponent.
(I struggled for a few days with the original: It was Knuth; surely I’d been the one to make the mistake in my code, not the good Doctor in his! But ultimately when I began to experiment with the broken algorithm, the answer became obvious.)
So I wrote Knuth a letter in the spring of 2001 saying that I thought the answer was wrong as published, since it generated garbage output, and suggesting that x ← 1 – x was the correct form, since that version produced valid results. And he sent me back a rather nice letter and a check for $2.56 for having found an error.
(I had also suggested that the inherent parallel execution implied by the commas in step E2 appeared to be wrong, that sequential execution appeared to be needed there; Knuth chastised me in the letter that no, parallel execution was in fact intended. But I still got an error check for a broken algorithm!)
Interestingly, I didn’t get my check by just searching for errors, like many people have: I was trying to build something; I encountered that error merely by remembering that the books contained an answer to the problem I was facing. But it seems that not many people build things by literally reading the textbook and doing exactly what it says; and apparently, nobody had implemented Feynman’s exponentiation algorithm from the book after it appeared in the 3rd Edition in 1997. [ed. note: I checked, and Knuth added it to the 3rd Ed. in April 1995; my error can be seen in his addenda.]
That said, there’s an entertaining question to ponder: Why the heck was I trying to implement fractional logarithms and exponents in the first place?
The answer was that it was 2001, and Windows 98 and ’486 PCs were still pretty common, and you couldn’t assume a computer supported floating-point math. And I needed to calculate powers: As simple as x^y, just like on your calculator, and it was even constrained: In my case, x was always an integer, and y was always a number between 1 and 2. But I didn’t have guaranteed floating-point hardware available, so I couldn’t just write pow(x, y). So I improvised: I used the Briggs and Feynman methods to calculate the logarithm and the exponent in pure integer math, so that I could implement calculation of a power as exp(y * log x) instead, and effectively use multiplication — which the hardware supported — to produce a power (which it definitely didn’t).
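Here’s a rough, self-contained C# sketch of how those pieces compose, this time in Q16.16 fixed point so all the multiplications fit in 64 bits. Fair warning: the Exp2Frac below is the standard shift-and-add exponential built on log2(1 + 2^-k) factors (a close cousin of the Briggs/Feynman routines, but not the exact algorithms from the book, and not SpaceMonger’s actual code):

```csharp
using System;

static class IntPow
{
    const int F = 16;                     // Q16.16 fixed point
    const ulong One = 1UL << F;

    static readonly ulong[] LogDiv = new ulong[F + 1];  // log2(2^k / (2^k - 1))
    static readonly ulong[] LogMul = new ulong[F + 1];  // log2(1 + 2^-k)

    static IntPow()
    {
        for (int k = 1; k <= F; k++)
        {
            double p = Math.Pow(2, k);
            LogDiv[k] = (ulong)(Math.Log(p / (p - 1), 2) * One);
            LogMul[k] = (ulong)(Math.Log(1 + 1 / p, 2) * One);
        }
    }

    // Fractional log2 of m in [1,2): Briggs-style shift-and-subtract.
    static ulong Log2Frac(ulong m)
    {
        ulong y = 0, z = m >> 1;
        int k = 1;
        while (m > One && k <= F)
        {
            if (m - z >= One) { m -= z; z = m >> k; y += LogDiv[k]; }
            else { z >>= 1; k++; }
        }
        return y;
    }

    // 2^f for f in [0,1): shift-and-add over (1 + 2^-k) factors, so the
    // only "multiplication" needed is y += y >> k.
    static ulong Exp2Frac(ulong f)
    {
        ulong y = One;
        for (int k = 1; k <= F; k++)
            if (f >= LogMul[k]) { f -= LogMul[k]; y += y >> k; }
        return y;
    }

    // x^e for integer x >= 1 and fixed-point e in [1,2], computed as
    // 2^(e * log2 x). No overflow checking; fine for small demo inputs.
    public static ulong Pow(uint x, ulong e)
    {
        int n = 0;
        while ((x >> n) > 1) n++;                     // n = floor(log2 x)
        ulong m = ((ulong)x << F) >> n;               // mantissa in [1,2)
        ulong log2x = ((ulong)n << F) + Log2Frac(m);  // log2 x in Q16.16
        ulong t = (e * log2x) >> F;                   // e * log2 x
        return Exp2Frac(t & (One - 1)) << (int)(t >> F);
    }

    static void Main()
    {
        ulong e = (ulong)(1.5 * One);                 // y = 1.5
        Console.WriteLine((double)Pow(3, e) / One);   // 3^1.5 ≈ 5.196
    }
}
```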
Today, of course, I’d write pow(x, y), just like you would. But when you’re stuck on super-limited hardware, you have to be a bit more clever.
(For what it’s worth, you can see the code live, in action, if you have a copy of SpaceMonger 2 or 3; just go into the Settings, and turn up the “Exaggeration” slider: Every file’s size will be raised to the power you choose, using exactly this math!)
Feynman, of course, derived that algorithm for exactly the same reason I needed it: They were using mechanical adding machines during WWII as part of the Manhattan Project, and those machines didn’t do powers. But the relativistic math for the atomic bomb needed lots of powers; so he derived the exponent algorithm as the inverse of the logarithm algorithm, so that they could use just addition, subtraction, and multiplication, which the adding machines could do, to efficiently calculate powers, which the machines couldn’t.
So that’s my Knuth Check story. It’s unlikely I’ll ever earn another; I certainly haven’t in the last twenty years! I still haven’t even finished reading Volume 4A yet, and I’m far too busy to randomly pore over the other tomes again just in search of a mistake. But I still managed to earn a prize that’s among the most coveted in this field, and twenty years later, I’m still happy to have done it even once.
I’ve had to say this several times over the years, so I thought I’d repeat it in a place where I can just make a link to it: The Dictionary<K,V> class in .NET does not store your data in order.
When you .Add() or .Remove() items from a dictionary, they get stored in whatever order the dictionary finds most efficient. It just so happens that as long as you never .Remove() an item, the underlying data structure ends up with the items in insertion order when you enumerate them, but this is not guaranteed.
Here’s an example program to demonstrate that it doesn’t work the way you probably expect:
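A minimal version of it goes something like this (the class and method names here are arbitrary, but the data matches the output below):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void PrintFruits(Dictionary<string, string> fruits)
    {
        Console.WriteLine("Fruits:");
        foreach (var pair in fruits)
            Console.WriteLine($"{pair.Key} -> {pair.Value}");
    }

    static void Main()
    {
        var fruits = new Dictionary<string, string>
        {
            { "A", "Apple" },
            { "B", "Banana" },
            { "C", "Cherry" },
            { "D", "Durian" },
            { "E", "Elderberry" },
            { "F", "Fig" },
        };

        PrintFruits(fruits);        // six fruits, still in insertion order

        fruits.Remove("D");         // frees up D's slot in the backing array
        fruits.Add("G", "Grape");   // ...and G gets stored in that slot

        PrintFruits(fruits);        // G now shows up between C and E
    }
}
```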
You might think reading this that the first output would be in order of [A, B, C, D, E, F], and the second would go [A, B, C, E, F, G]. But that’s not the output that’s produced:
```
Fruits:
A -> Apple
B -> Banana
C -> Cherry
D -> Durian
E -> Elderberry
F -> Fig

Fruits:
A -> Apple
B -> Banana
C -> Cherry
G -> Grape
E -> Elderberry
F -> Fig
```
The new entry gets slotted in where the last one was removed, which is exactly the behavior you’d expect if you had read the underlying code, but it’s very different from what you’d expect if you just assumed everything always stays in insertion order.
Moral of the story: If you need dictionary data to be ordered, don’t assume it’s ordered — explicitly order it!
If you need things to be in insertion order, keep them in a list.
If you need things to be ordered by key, consider a SortedDictionary<K,V> instead.
If you only need things sometimes to be ordered by key, use LINQ: dictionary.OrderBy(pair => pair.Key). (Both of the last two options are sketched below.)
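For example, with a throwaway fruits dictionary, those two options look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class SortedDemo
{
    static void Main()
    {
        var fruits = new Dictionary<string, string>
        {
            { "G", "Grape" }, { "A", "Apple" }, { "E", "Elderberry" },
        };

        // Sort at the point of use with LINQ (fine if it's occasional)...
        foreach (var pair in fruits.OrderBy(kv => kv.Key))
            Console.WriteLine($"{pair.Key} -> {pair.Value}");

        // ...or keep the data permanently ordered by key instead.
        var sorted = new SortedDictionary<string, string>(fruits);
        foreach (var pair in sorted)
            Console.WriteLine($"{pair.Key} -> {pair.Value}");
    }
}
```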
In the old days, if you encountered a badly-coded website, you could often fix it with tools like TamperMonkey: Hack a little JavaScript, and you could fix the worst of the site’s abuses, or even make it nicer and add features the original developers didn’t (or couldn’t) add. Everything was fairly open, so you could just drop new script on the page, and even substitute out functions if you had a better implementation.
We now live in a reality driven by WebPack, where every website’s JavaScript is packed, minified, bundled, and, importantly, smashed together inside a closure, making all of its innards and mechanics encapsulated and hidden, even from its own code.
That’s great for application stability, if you have good developers. But it also means that the app you get is the app you get: Every bug, every design flaw, and every misfeature is hidden away inside that bundled ball — which means that no matter how bad it is, you can’t fix somebody else’s broken website anymore.
This is one of those strange posts I never thought I’d write.
For the record, I love ReactJS. The principle of being able to build an entire website using forward-only data flow was sheer raw brilliance, and was a fundamental architectural shift in how website UIs work.
I’ve wanted for a long time to describe in detail how SpaceMonger works. It’s been more than twenty years since I wrote SM 1.0, and there were a lot of innovations in SM 2.1 that I still haven’t seen discussed anywhere else: I worked really hard to make SM as fast and as smooth as possible, even on the ridiculously limited computer hardware of the early 2000s.
Part of the work in making SM 2.1 was finding and using unusual algorithms that aren’t given much attention outside computer science literature — and part of that work was discovering and inventing new algorithms that, to the best of my knowledge, haven’t been published anywhere else. In this series of articles, I aim to fix that: It’s bothered me a lot that I’ve been hoarding some knowledge over the years, and it’s well past time that I share it.
I was asked recently (for the 97,000th time), “I loved the treemap layout algorithm in SpaceMonger. I’d like to implement something similar. How did that algorithm work?”
I’ve been meaning to write a few articles on this for a while now, so here we go with the first one: How did SpaceMonger 1.x’s treemapping work?
The latest: We’re expecting! The due date is the middle of February. The first ultrasound looks good. Robin’s pretty sick a lot of the time, but there’s only a month left in the first trimester, and we’re hoping the nausea settles down after that, like it did with Alan. She’s a lot sicker this time, and thinks that maybe that means we’re having a girl. Hard to say, but we’ll find out!
Alan keeps growing. He’s at the just-shy-of-terrible-twos now and exerting his independence by means of tantrum. (Lord help you if you want to change his diaper, or if you leave the room without his permission.) We took him to the zoo for the first time last weekend, and he was far more interested in the carousel than the animals. Ah, well. He recently learned that he can stack his alphabet blocks into towers, which has been highly entertaining to watch. Still no speech yet except for going “nyaah” when he sees a cat (it’s his “meow”), but with any luck, he’ll at least figure out “Mommy” and “Daddy” soon. He certainly knows what we’re saying, and nods (and waves his arms) and shakes his head (and waves his arms) to indicate “yes” and “no.”
What’s new in Smile-land? Back in April, I started rewriting the interpreter in C. It was never going to be fast enough in C#, and I always knew a C or C++ port would be necessary. I rejected C++ after a few attempts (more thoughts on that below). So far — well, I have a garbage collector and a working String object and a bunch of unit tests and that’s about it for the last three months. I only get an hour here or there at best to work on it. But Ben recently challenged me to have a minimally-usable interpreter by Christmas, and that’s put a little more fire under my butt to do something about it.
ASP.NET MVC gets all the notoriety these days in the Microsoft web-programming world. ASP.NET WebForms is kind of the ugly stepchild, sitting in the back corner, mostly neglected. “Did you see WebForms last week? He was awful, just awful,” say the voices at the party. “But MVC! MVC is sooo elegant. And so smart! Did you hear she was given early admission at Dartmouth?”
Now, I’ve been coding for thirty years, and I’ve seen a lot of technologies, languages, and environments come and go. And Microsoft is a major, major offender in this category: They have a tendency to build these big, awful, overarching frameworks that make the U.S. Federal Government look under-managed, and then they drop them as soon as the Next Big Thing comes along. MFC was a horrendous mishmash of bad ideas that all managed to somehow end up in one ball of code. COM was a fever-driven acid nightmare that the computer industry has thankfully mostly awakened from. And VB — do I really even need to explain why VB is bad?
Which brings us to ASP.NET. Now I know that the original goal with it was to bring the VB WinForms programmers to the web and get them the heck off of the junky VB/C++/COM conglomerations that were killing Microsoft’s own abilities to fix issues in Windows itself. To that end, they came up with some clever abstractions that made the web sort-of vaguely smell like Windows programming for programmers who weren’t really clever enough or willing enough to get with the future.
And if you were offended by that last sentence, then you can stop reading right now, because this article is not for you.
Computer science is a branch of mathematics. It’s a powerful, amazing tool based on logic and reasoning and the work of giants. And it seems to get no respect from the programmers who blithely don’t realize they’re using it every day.
In the business world, programming problems are solved either by gluing preexisting stuff together or by “just hacking it until it works.” I can’t count the number of times I’ve heard a colleague say, “Well, what if we just try this,” or “I know this trick,” or “I don’t know why that broke, maybe sunspots.” There seems to be little recognition of the value that computer science brings to the table: Everything is just tips and tricks and technologies, not reasoning and technique.
And if (like me) you’re one of the rare ones who uses computer science to solve a problem, it’s not attributed to all those proofs and techniques and hard science you learned: You’re just a “programming wizard.” It’s as if everybody around you is building houses out of pine, and they see you build a house out of stone and think you found really hard, strong pine somewhere. And worse, they then ask you to point out which trees you got it from.