Why you should read about computer history

I have lately been spending a good amount of time reading about the history of computers in detail (networking, the internet, transistors, microchips, etc.), and it turned out to be more exciting than I thought. It gave me great insight into how things started and why they are the way they are now. I really believe that even if you never use this information directly in your day-to-day job, it will still shape your perception of computers and programming in general, and it may help you solve some problems in unexpected ways.

I am also a strong believer that the way to change things and build better systems is to take a detailed look at how things started, at the pioneers in these domains, and at why they built things the way they did.

Take computer architecture, for example. Most of us (programmers and computer engineers) took computer architecture and logic design at university (CPUs, memory, buses, truth tables, etc.), but what we learn at university is rarely about history; it is mostly about how things are currently engineered. It is like studying the “final DB schema” instead of the “accumulated migration files” that led to it. I had many gaps in my knowledge of computer history (and of course I still do, but at least now I know a bit more), and once the full picture became more vivid, I understood that it all happened in distinct phases, and that those phases were connected in interesting ways.

For example, much of our circuit design is based on the work of George Boole, born in 1815 (the term “Boolean” refers to him). He believed that human reasoning could be expressed as a kind of mathematical formula, and he wrote about this in detail. His work was largely dismissed in his time, but around a century later it became the revolutionary idea behind computer circuit design (Boolean logic + logic gates, …).
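To make the “Boolean logic + logic gates” connection concrete, here is a minimal Python sketch (my own illustration, not from any particular source): Boole-style true/false formulas, modeled as tiny gate functions, can be composed into a classic half adder, the building block of binary addition in real circuits.

```python
# Primitive gates as Boolean formulas over True/False values
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a

# XOR composed purely from the three primitive gates above
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

# Half adder: adds two bits, producing a sum bit and a carry bit
def half_adder(a, b):
    return XOR(a, b), AND(a, b)

# Enumerate the truth table, exactly as Boole's formulas allow
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> sum {int(s)}, carry {int(c)}")
```

The same composition, realized in silicon instead of Python, is how a CPU’s arithmetic unit is ultimately built up from gates.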

Another example: before transistors, engineers used vacuum tubes. Then the transistor was invented and perfected, and after that came the microchip, which was literally a revolution, since it solved a problem known as the “tyranny of numbers”: circuits required an enormous number of wires connecting transistors and other components, which was not feasible from a testing, reliability, efficiency, or practicality point of view.

The above is just an extreme summary of what I learned, but my point is that by understanding all this you gain a better sense of history, so when you look at an electronic device or a piece of hardware, you can understand the stages it went through to reach this point.

That covers the “understanding” part; the other important part is “problem solving”. By reading about computer history, you come to understand the problems earlier generations faced, how they solved them, and how solving those problems literally started revolutions in certain areas. We no longer use vacuum tubes but transistors, so we no longer have vacuum tube heating problems, but it is still interesting to know how engineers and scientists worked to solve them when they existed. Even though those problems are gone, the methods used to solve them remain invaluable. What is more, I personally think that many of today’s problems can be solved by approaches similar to (or inspired by) the ones used to solve problems that are now “extinct”.

So I highly recommend you spare some time reading about this, and I can provide references for those who are interested.