Re: a couple entries ago, the one where I engaged in EWD worship, I mentioned a nifty proof in his essay but left it out.

It's a cool proof because: 1) it requires no knowledge of math, 2) it requires no knowledge of chess, and 3) it makes his point about looking at a set as a whole rather than bit by bit.

To recap, say you remove the 2 squares at diagonally opposite corners of a chess board. Prove you can't cover the remaining 62 squares w/ 2x1 dominoes.

The proof: diagonally opposite corners are the same color (look at a chessboard). So you'll have 30 white and 32 black squares, or 30 black and 32 white. It's not important which; the important thing is that there's an unequal number of black and white squares.

Every 2x1 domino covers exactly one black and one white square. So if a covering existed, it would cover exactly the same number of black and white squares. But there isn't the same number of black and white squares, so no covering exists. QED.
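If you'd rather see the counting than take it on faith, here's a quick sketch (my own illustration, not from the essay) in Python:

```python
# Illustration of the parity argument: color each square by (row + col) % 2,
# remove two diagonally opposite corners, and count what's left.

def color(row, col):
    return (row + col) % 2  # 0 = one color, 1 = the other

board = {(r, c) for r in range(8) for c in range(8)}
board -= {(0, 0), (7, 7)}  # remove diagonally opposite corners

counts = [0, 0]
for r, c in board:
    counts[color(r, c)] += 1

# The removed corners share a color: (0+0) % 2 == (7+7) % 2 == 0,
# so one color is left with 30 squares and the other with 32.
print(counts)  # [30, 32]

# Any two adjacent squares differ in (row + col) % 2, so each 2x1 domino
# covers one square of each color. 31 dominoes would cover 31 of each
# color -- impossible with a 30/32 split.
```

Note the code never tries to place a single domino; the whole argument is the count, which is exactly Dijkstra's point about looking at the set as a whole.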

## 2 comments:

Dijkstra's checkerboard/chessboard proof example is a cry *against* Unit Testing. Writing large quantities of "tests" to try to verify that the program works was not something Dijkstra was fond of. He considered unit tests a waste of time, since actual verification would require a proof. Tests can only show the presence of bugs, never their absence, so it is better to get it right the first time (i.e. a bug reporting system is really just a test case reporting system, since people didn't get it right the first time).

However, proofs of large programs can be extremely complicated and hard, nearly impossible to write. Consider trying to prove a compiler or operating system "correct". Consider trying to prove tcp/ip or networking "correct". You almost have to test the network to make sure it will work in bizarre situations, like when the power fails and the computer shuts down in the middle of a connection. Maybe that can be proven without testing; I am not sure.

Proving a program entirely correct can discourage programmers from actually programming - since it may take hundreds of years for a single program to be proven correct (try proving the UNIX operating system correct, for example).

Although Dijkstra was a fan of proving programs correct, even his followers such as Niklaus Wirth still don't have a "proof" that their Oberon compilers are "correct", nor have I seen proof that Algol or Pascal compilers are "correct". I've never seen anyone prove a program entirely correct. There may be some small examples of program correctness proofs somewhere. Has anyone found any?

Programs have become so large and complicated, it seems nearly impossible to prove large programs correct - but we can prove portions of them correct, I am sure. Modules will help with that, too (i.e. modula/oberon). And objects are just dynamic modules.

Where are the proofs for compilers, though? Why hasn't Wirth ever proven his compilers correct? Or has he? Am I missing something?

Where are the proofs for GUI programs of today? How do you prove whether or not an OK or CANCEL button is in the right position on the gui window?

I think GUI programs could be a combination of engineering, art, and proof. We can prove some parts of GUI programs (mostly the non-GUI code), but not the artistic or engineering portions. How far should a button depress - 2 pixels, 5 pixels, 1 inch deep? How do you prove such a thing? This is an art/engineering issue. So some parts of programs must be left unproven, since they are art and engineering, I think. However, there is hope that we can prove the non-GUI portions of programs: input, output, text, calculations, etc.

Why isn't Unix proven? How big would the Unix proof be? Would it even be possible to prove that a Unix file system is correct, or that a network algorithm is correct? What is the goal of a network algorithm, especially if it is built around broken connections - can a system built to fail (HTTP) be proven?

A typical batch program that has an input and an output is a good candidate for proving. Toy programs are definitely easier to prove... sadly.
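To make "a toy program is easier to prove" concrete, here's a hypothetical illustration: a summing loop annotated Hoare-style with a loop invariant. The comments carry the proof; the asserts merely spot-check the invariant at runtime.

```python
# Hypothetical illustration: a toy batch program (sum of a list) with a
# loop-invariant proof sketch written as comments.

def total(xs):
    # Precondition: xs is a finite list of numbers.
    s = 0
    i = 0
    # Invariant: s == sum(xs[:i]) and 0 <= i <= len(xs)
    # (holds initially: s == 0 == sum(xs[:0]))
    while i < len(xs):
        assert s == sum(xs[:i])  # invariant on entry to the loop body
        s += xs[i]
        i += 1
        # Invariant restored: s == sum(xs[:i])
    # On exit i == len(xs), so the invariant gives s == sum(xs).
    return s

print(total([1, 2, 3, 4]))  # 10
```

The invariant-plus-termination argument is the whole correctness story for a program this small, which is exactly why toy programs are the easy case.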

Consider an animal chasing its prey. Can we prove that the mathematically most correct way for that animal to kill its prey is some certain way? Can we prove that the animal is correctly chasing its prey? What is correct? Is it correct to kill a baby prey versus an adult prey? I find situations like this in programming all the time, where there is no one correct way. On the other hand, there is code in programs that can be proven. One example of something you can't prove is extracting data from a website. This is like chasing down prey. The data can change, the people who own the website can modify the data so your program doesn't work any more, etc. If the wind changes direction while you are chasing your prey, does your old mathematically correct program, which didn't account for wind, still work?

Nice one, Anonymous. Yes, Dijkstra was very much a fan of the hardcore proofs of correctness. Those of us in the (cough) real world know what a daunting task that would turn out to be in practice.

Still, Dijkstra was obviously a sharp fellow, and definitely good for some quotes.
