
The collaboration behind Colossus

When I first heard about the heroic efforts during WWII to break Nazi communications codes such as Enigma, I had in my mind the image of a lone cryptanalyst working out solutions with pencil and paper, or using a series of electromechanical devices such as the Bombe to run through the various combinations.

But it turns out I couldn't have been more wrong. The efforts of the thousands of men and women stationed at Bletchley Park in England were intensely collaborative, and they depended on the flawless execution of a long and very precise series of steps. And while the Enigma machines get most of the publicity, the real challenge was a far more complex German High Command cipher called Lorenz, after the manufacturer of the machines that produced it.

The wartime period has gotten a lot of recent attention, what with a new movie about Alan Turing, The Imitation Game, just playing in theaters. This got me looking around the Web for other materials, and my weekend was lost to a series of videos filmed at the National Museum of Computing at Bletchley Park. The videos show how the decoding process worked using Colossus, the first programmable electronic digital computer. Through the efforts of several folks who maintained the equipment during wartime, the museum was able to rebuild the machine and get it into working order. This is no small feat when you realize that most of the wiring diagrams were destroyed immediately after the war ended, for fear they would fall into the wrong hands, and that few of the people who attended to Colossus' operations are still alive.

The name was apt in several ways. First, the equipment easily filled a couple of rooms, and used miles of wire and thousands of vacuum tubes; at the time, that was all there was, since the transistor wouldn't be invented for several more years. Tube technology was touchy and failure-prone, and the Brits figured out that if they kept Colossus running continuously, the tubes would last longer. Colossus also wielded enormous processing power, with an effective speed that could be rated at around 5 MHz. That surpasses the 4.77 MHz clock of the original IBM PC, which is pretty astounding given the nearly four decades between the two machines.

But the real story of Colossus isn't the hardware; it's the many people who worked around it in a complex dance to input data and transfer it from one part of the machine to another. Back in the 1940s, that meant punched paper tape. My first computer in high school used it too, and let me tell you, working with paper tape was painful. Other transfers happened manually, with operators copying information from printed teletype output into a series of plugboard switches, similar to the telephone operator consoles you might recall from a Lily Tomlin routine. And since any transfer could introduce an error, the settings had to be rechecked carefully, adding still more time to the decoding process.

Speaking of mistakes, there is an interesting side note here. The sheer focus the Bletchley teams brought to cracking the German codes was enormous. Remember, these messages were intercepted over the air; the Lorenz traffic went out as enciphered teleprinter signals rather than the hand-keyed Morse used for Enigma. It turns out the Germans made a few critical mistakes in sending their transmissions, and those mistakes were what enabled the codebreakers to deduce how the Lorenz machine worked and to build their own functional equivalent without ever having seen one. The most famous slip came in 1941, when an operator retransmitted a long message using the same wheel settings, handing the analysts two different texts enciphered with an identical keystream. When you think about the millions of characters transmitted, and the job of finding such errors among them, it was all pretty amazing.
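To see why a repeated keystream was so catastrophic, here is a minimal sketch in Python. It treats the Lorenz traffic as a simple Vernam (XOR) stream cipher over bytes; the messages, keystream, and guessed word below are invented for illustration, and the real machine operated on 5-bit Baudot characters generated by twelve pinwheels, so this captures only the underlying principle.

```python
def vernam(message: bytes, keystream: bytes) -> bytes:
    """XOR each message byte with the corresponding keystream byte."""
    return bytes(m ^ k for m, k in zip(message, keystream))

# Two different messages sent with the SAME wheel settings -- the operator
# error that gave the Bletchley analysts a "depth". All values here are
# hypothetical stand-ins, not real wartime traffic.
keystream = b"QWERTZUIOPASDFGHJKLYXCVBNMQWERTZUIOPASDF"
p1 = b"ATTACK AT DAWN ON THE WESTERN FLANK"
p2 = b"SUPPLY TRAINS DELAYED UNTIL TUESDAY"

c1 = vernam(p1, keystream)
c2 = vernam(p2, keystream)

# XORing the two ciphertexts cancels the keystream entirely:
#   c1 ^ c2 == (p1 ^ k) ^ (p2 ^ k) == p1 ^ p2
combined = vernam(c1, c2)
assert combined == vernam(p1, p2)

# An analyst who guesses a probable word (a "crib") in one message can
# then read the corresponding fragment of the other message.
crib = p1[:7]                               # suppose "ATTACK " is guessed
fragment_of_p2 = vernam(combined[:7], crib)
print(fragment_of_p2)                       # b'SUPPLY '
```

The point is that the XOR of the two ciphertexts contains no key material at all, and once enough plaintext is recovered this way, the keystream itself falls out, which is the kind of foothold that let Bletchley's analysts reconstruct the machine's logical structure.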

What is even more remarkable about Colossus is that people worked together on it without actually knowing what they were collectively doing. There was an amazing amount of wartime secrecy; indeed, the existence of Colossus itself wasn't widely known until about 15 or 20 years ago, when the Brits finally lifted the bans on talking about the machine. That secrecy held even though several of the Colossus decrypts played critical roles in the success of the D-Day Normandy invasion.

At its peak, Bletchley employed some 9,000 people from all walks of life, and the genius was in organizing all these folks so that the ultimate objective, breaking codes, really happened. Tommy Flowers, the Post Office engineer who designed Colossus, is noteworthy here: he actually paid for part of the early development out of his own pocket. Another interesting historical side note is the prewar contribution of several Polish mathematicians, whose early breaks into Enigma gave the British a head start.

As you can see, this is a story about human/machine collaboration that I think hasn’t been equaled since. If you are looking for an inspirational story, take a closer look at what happened here.



More Stories By David Strom

David Strom is an international authority on network and Internet technologies. He has written extensively on the topic for 20 years for a wide variety of print publications and websites, such as The New York Times, TechTarget.com, PC Week/eWeek, Internet.com, Network World, Infoworld, Computerworld, Small Business Computing, Communications Week, Windows Sources, c|net and news.com, Web Review, Tom's Hardware, EETimes, and many others.
