This Week's Book Review - The Enigma Story

The Enigma Story
Looking for a good read? Here is a recommendation. I have an unusual approach to reviewing books: I review books I feel merit a review. Each review is an opportunity to recommend a book. If I do not think a book is worth reading, I find another book to review. You do not have to agree with everything every author has written (I do not), but the fiction I review is entertaining (and often thought-provoking), and the non-fiction contains ideas worth reading.

Book Review

Enigma Fully Revealed

Reviewed by Mark Lardas
December 4, 2022

“The Enigma Story: The Truth Behind the ‘Unbreakable’ World War II Cipher,” by John Dermot Turing, Sirius, 2022, 240 pages, $12.99 (paperback), $8.99 (ebook)

One thing “everybody knows” about World War II is that Allied cracking of the German Enigma cipher machine allowed the Allies to win the war. That has been an article of faith since the secret was first revealed in the 1970s. Is it accurate?

“The Enigma Story: The Truth Behind the ‘Unbreakable’ World War II Cipher,” by John Dermot Turing tackles that question along with many others. It provides a fresh look at the history of Enigma, dispelling many myths and placing World War II codebreaking in proper historical context.

Turing opens the book with a history of the Enigma machine. He tells of its development in post-World War I Germany. Originally intended for commercial purposes, improved versions were eventually used by the German government and licensed abroad. (Italy’s military used a simpler version, while Britain used a much-improved version for its Typex coding machines.)

He explores the early efforts to crack Enigma. Pre-war, the Poles took the lead, using mathematics to decrypt messages. France also proved an early leader in decryption. Britain picked up the torch after Poland and France fell. It mechanized the process, although Turing shows the US industrialized decryption.

Turing uses sidebars to explain how decoding worked and to introduce the principal players in the decoding effort. A nephew of Alan Turing, the author places his famous uncle in proper perspective. He shows that while Alan Turing played an important part in the story, others played bigger roles, including Gordon Welchman, whose contributions are almost totally neglected.

Turing also attacks the many myths that have sprouted up around codebreaking, many of which were created by prior authors with incomplete knowledge of the whole story. It turns out the British government did not allow Coventry to be bombed to protect the Ultra secret. Coded German messages never named their target. Similarly, the Poles never stole a machine from the Germans.

Turing also examines the role played by decryption in winning the war. He concludes that, in isolation, it was not a war- or battle-winning weapon. (The British decrypted virtually the entire German battle plan for Crete and still lost.) He does, however, conclude that it shortened the war, leading to an Allied victory a year or two earlier than might otherwise have occurred.

“The Enigma Story” may be the most complete examination of the history of Enigma yet written. For those interested in World War II codebreaking, it is a must-read book.

Mark Lardas, an engineer, freelance writer, historian, and model-maker, lives in League City. His website is


Just offhand, thinking about the westwards progress of the Red Army after Stalingrad, it seems that if the Allied victory over Germany had been delayed until 1946 or 1947, that victory would have found the Red Army standing on the cliffs of Normandy ready to welcome their Western allies.


It’s really difficult to work out all of the different threads alternative history might have taken. Had the four-rotor Enigma not been broken in 1942, the Battle of the Atlantic might have gone much worse for the Allies, delaying reinforcement of Britain and deployment of U.S. forces there. On 1943-03-10, the Kriegsmarine changed their Enigma configuration for U-boat messages, resulting in a nine-day blackout during which Bletchley Park could not read the traffic. During that month, 82 Allied ships of 476,000 tons were lost in the Atlantic (120 ships worldwide), and the supply situation in Britain was considered dire. In April, after the codebreakers succeeded in reading the modified encryption, Allied losses dropped to 39 ships of 235,000 tons. In May, 43 U-boats were destroyed, 34 in the Atlantic, and Dönitz declared, “We had lost the Battle of the Atlantic.”

If increased losses in the Atlantic due to an inability to break naval Enigma had delayed the U.S. build-up and D-Day, the atomic bomb would have become available before the defeat of Germany. Since the major motivation for development of the bomb was use against Germany, it’s highly likely it would have been used. What would have happened then, and how might its use have affected a Soviet march to the west? It’s hard to say.

Gregory Benford’s The Berlin Project is an alternative history in which the U.S. developed the atomic bomb in time to use it for a decapitating attack on Berlin simultaneous with the D-Day landings in June 1944. The consequences illustrate how many differences can flow from one change in the timeline.


That is an interesting point. On the other hand, the Red Army had taken Berlin before the atom bomb was ready to be used. One reasonable guess might have been that – if D-day had been further delayed – the Red Army’s onward march to the coast might have been very rapid after the fall of Berlin, making the use of nuclear weapons on Germany or occupied France unnecessary.

The standard WWII story we were all brought up with tended to understate the role played by the Red Army in the defeat of Germany. But if the UK had not been able to act as a base for the bombing of German infrastructure, then perhaps the German army might have been a tougher nut for the USSR to crack. There is the view that the simple superiority of the Allies in resources, manufacturing capacity, & manpower meant that the Axis was doomed to eventual defeat – but that defeat could certainly have taken very different forms.


Doesn’t that assume the Russian advance was completely independent of the Western Allies? Russia received a lot of supplies from the West, and might have received fewer absent penetration of German codes. Also, without a Western Front the Germans could have stationed more troops on the Eastern Front. Had the invasion of France been delayed until 1945, units destroyed at Falaise and committed to the Ardennes Offensive would have been available for use in the East. Plus, as mentioned elsewhere, the atomic bomb, originally developed for use against Germany, was available in August 1945.


An old guy with deep roots in the history of cryptography, going back to the WWII intelligence agencies and the banking industry, invented the “magnetic ink” at the bottom of your checks when he was the architect of the Burroughs banking computers in Minneapolis. He was a friend of mine and provided me with a couple of insights regarding the connection between compression and cryptography when I brought the Hutter Prize to his attention. As it turns out, his first PhD student was not only Chinese but became the director of the CCP’s industrial planning for computation. His PhD thesis? Compression. This is relevant because Algorithmic Information is measured by the length in bits of the smallest binary program (machine language) that outputs whatever data one has available – and this is the best one can do to infer the causal structure of the world being observed. That is the job of the intelligence agencies.

That this does not form the basis of funding for machine learning in the present is pretty strong evidence of a problem, given that the Hutter Prize has incorporated Algorithmic Information as its prize incentive for over 15 years, and the underlying theory has been in the published literature for over half a century of Moore’s Law. Not only has the AI community been lobotomized – and will therefore continue to bark up the wrong tree so as to avoid genuine Intelligence – but so have the social sciences, since they continue to bicker about “causal inference” when the solution has been known for that entire half century. Suspecting some adversarial intelligence agency involvement in this lobotomization of not only the machine learning world but also the social sciences is not only reasonable, it is practically a requirement for one to be considered reasonable.
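To make the compression-as-inference idea concrete, here is a toy sketch of my own (not from the Hutter Prize materials): a general-purpose compressor such as zlib gives a computable upper bound on algorithmic information, and the normalized compression distance (Cilibrasi and Vitányi) uses that bound to detect shared structure between data sets.

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size in bytes: a crude, computable upper bound
    on the algorithmic information of the data."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: data sets that share structure
    compress better together than apart, so their distance is lower."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b2 = b"the quick brown fox jumps over the lazy cat " * 20
r = bytes([i % 251 for i in range(len(a))])  # unrelated filler data

# Texts sharing structure score lower (closer) than unrelated data.
print(ncd(a, b2) < ncd(a, r))  # True
```

A real compressor only approximates the true (uncomputable) Kolmogorov complexity from above, which is exactly why the Hutter Prize rewards ever-smaller compressed representations: each improvement is a tighter bound on, and a better model of, the data’s causal structure.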

Here’s one of the last emails my colleague gave me permission to share publicly and it is directly relevant to Enigma:

Date: Thu, 31 Aug 2006 08:22:40 EDT
Subject: Syntopticon


Syntopticon is the Britannica encyclopedia of ideas, and the Syntopticon
itself is the (one small volume) codex cross referencing all the great ideas of
mankind contained in all the many other books of this reference system. I
have an old copy and it is the size of the Britannica encyclopedia (ie an 8’
long collection of big books). The importance of Syntopticon in the Hutter
competition is that expression of ideas in different natural languages is a very
different proposition. Since an idea in Chinese is typically represented by
one or a few symbols, and that same idea in English may take several pages,
there is a huge difference to start with in the effectiveness of searches in
Chinese or English. There is also a difference in the size of the database
containing say a Chinese version of Syntopticon or an English version. And
a much greater difference in the effectiveness of doing automated
(programmatic) searches in Chinese or English.

But from the perspective of doing analyses on these databases (especially if
one represents say Wikipedia in Chinese), AND I do my vector fusion
conversion of that Chinese database to translate (and further compress) those symbols
into an analytically tractable form, the analysis of ideas is a vastly
different problem than if one attempts to do this all in English. Searches are
certainly one analytic process of major interest in natural languages; drawing
conclusions, making predictions, and drawing inferences are other less
investigated processes. My PhD graduate student Rok Sosic’s thesis was on software
that understands what it is doing (title: The Many Faces of Introspection,
Utah, 1992). His poem summarizing his thesis was:

    The box is a secret,  knotty, black
    It's so complex  that I've lost track.
    If somehow  it's  made reflective,
    The box will be  much more effective. 

Computerdom does not have a lot of art in inference engines (making
predictions). The most effective inference engine that I know of is the software
done for Colossus, Turing’s code breaking “computer” of WWII. The Brits still
treat that software as classified even though the hardware has been
declassified for years. So far as I know, nobody outside of UK knows the details of
that software. My point here is that drawing understanding from natural
languages is a relatively small art practiced mostly by cryptanalysts. And my
further point is that the natural language of interest (be it English, Chinese,
Mayan or …) has a major influence on how one (person or program) goes
about doing analyses and making inferences.

From a practical perspective, the Hutter challenge would be much more
tractable for at least me if I could do it in Chinese. My first PhD student was
Jun Gu who is currently Chief Information Scientist for PRC. His thesis was on
efficient compression technologies.

If you wish, you can share these thoughts with whomever you please.

Bob Johnson
Prof. Emeritus
Computer Science
Univ. of Utah


One of the things the book revealed is that Colossus was not used for Enigma code-breaking. Given the date of the e-mail, it is unsurprising that Professor Johnson did not know this. I think it was revealed in the 2010s.

The best-known fictional inference engine is probably “Merlin” from H. Beam Piper’s The Cosmic Computer. It starts out as a cargo cult story and then changes into something entirely different. As with all of Piper’s stuff, an excellent read…


Yes, the Colossus computer was used to break the “Tunny” (Lorenz) cipher, which carried high-level strategic messages of the German High Command, as opposed to operational communications, which were encrypted with Enigma and broken with the aid of the electromechanical “Bombe” machines.

An excellent account of the history of Colossus after the veil of secrecy was finally lifted is the 2006 book edited by B. Jack Copeland, Colossus, with chapters contributed by people who were there at the time and who describe precisely how the encryption was broken, including how “innovations” by its German designers, intended to make the cipher “unbreakable,” actually made it more vulnerable to decryption.