Multimedia Engineering
Digital
 
To the extent that technology pervades daily life on planet earth, the challenge of communicating thoughts and ideas depends on updating the lexicon on which we depend for expressing them. The term "digital" once meant only "relating to a finger or fingers," but it now has a new primary meaning as it joins the list of most used but misunderstood terms in technology and science. Most written definitions of the term fall far short on substance. Generically defined, digital refers to electronic methods whereby signals are encoded in a binary language represented as "0"s and "1"s, as opposed to analog methods, which represent information as variable but continuous waveforms. You really need to know more than that to understand, even at an elementary level, what digital communication represents, how it works, its strengths and weaknesses, and its future. There is no need to get overly technical or mathematical to achieve this level of understanding. What is in order is a more complete definition.

The origin of digital techniques can be traced to the earliest attempts at creating an adding machine, the most famous of which was undertaken by the French scientist and philosopher Blaise Pascal in 1642. It was not until shortly after 1834 that Charles Babbage, dissatisfied with the accuracy of printed mathematical tables, developed the concept of a digital computer. As is often the case, mechanical machines gave way to electrical ones. Here is a really brief history of the digital computer. As allergic to mathematics as you might be, it is important to understand that digital electronics is part of a natural progression in science and engineering; it did not just pop onto the scene 10 years ago.

So you might ask: what is wrong with explaining digital as simply a binary representation of an analog quantity? The answer is not complicated. Without further understanding, you cannot come to grips with issues of resolution, accuracy, latency (delay), and compatibility. The devil is in the details.

When any analog signal is encoded to digital, two main functions are performed. The signal is sampled, and each sample is quantized (assigned a value or set of values ... these values are then converted to a binary number). Do not think that every signal can take only the values "0" or "1" ... high resolution material may have tens of millions of possible values assigned to its samples, but each value can be represented by a unique binary equivalent. Once the signal has been encoded, it can be operated upon, transmitted, and received at another location. In almost every case, this information will have to be converted back to its original form, or decoded. Without knowing how the signal was encoded, it is a challenge of varying difficulty to decode it properly. How often was it sampled? How detailed was the quantization? Did our received version of "0"s and "1"s exactly match the one that was sent?
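To make the sample-then-quantize-then-binarize chain concrete, here is a minimal Python sketch. The specific numbers (a 1 kHz sine tone, an 8 kHz sample rate, 16-bit quantization) are assumptions chosen for illustration, not values from this article.

```python
import math

# A minimal sketch of the encode/decode chain described above.
# Assumed, illustrative parameters (not from the article): a 1 kHz sine
# tone sampled at 8 kHz and quantized to 16 bits.
SAMPLE_RATE = 8000        # samples per second
BIT_DEPTH = 16            # bits per sample
LEVELS = 2 ** BIT_DEPTH   # number of possible quantized values (65,536)

def analog_signal(t):
    """Stand-in for the continuous analog waveform: a 1 kHz sine tone."""
    return math.sin(2 * math.pi * 1000 * t)

def encode(duration_s):
    """Sample the signal, quantize each sample, and emit binary words."""
    words = []
    for n in range(int(duration_s * SAMPLE_RATE)):
        t = n / SAMPLE_RATE                       # sampling: measure at discrete instants
        value = analog_signal(t)                  # continuous value between -1.0 and +1.0
        level = round((value + 1.0) / 2.0 * (LEVELS - 1))    # quantization: nearest of 65,536 steps
        words.append(format(level, f"0{BIT_DEPTH}b"))        # each step gets a unique 16-bit binary word
    return words

def decode(words):
    """Reverse the mapping: binary word -> level -> approximate analog value."""
    return [int(bits, 2) / (LEVELS - 1) * 2.0 - 1.0 for bits in words]

if __name__ == "__main__":
    encoded = encode(0.001)        # one millisecond of signal = 8 samples at 8 kHz
    print(encoded)                 # eight 16-character strings of "0"s and "1"s
    print(decode(encoded))         # close to, but not exactly, the original sample values
```

Note that decode() recovers approximations, not the exact originals; the size of that rounding error is set by the bit depth, which is exactly the 16-bit-versus-24-bit difference raised below.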

Ever wonder why the newly remastered 24 bit version of a favorite CD in your music collection sounds so much better than the original 16 bit CD you bought 15 years ago? What's the difference between digital television (DTV), digital video (DV), and high definition television (HDTV)? You can follow the links to full explanations, but suffice it to say that how you sample and quantize a signal makes all the difference in the world; the short calculation below shows just how much extra resolution those additional bits buy. The next time you hear the term "digital," be sure to think beyond "0" and "1."
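For a rough sense of what eight extra bits buy, here is a back-of-the-envelope calculation, a sketch using the standard rule of thumb of roughly 6 dB of dynamic range per bit; the formula and output are illustrative, not figures from this article.

```python
import math

# Illustrative arithmetic: how bit depth sets the number of quantization
# levels and the dynamic range of an ideal converter (about 6 dB per bit).
for bits in (16, 24):
    levels = 2 ** bits
    dynamic_range_db = 20 * math.log10(levels)
    print(f"{bits}-bit: {levels:>10,} quantization levels, ~{dynamic_range_db:.0f} dB dynamic range")

# 16-bit:     65,536 quantization levels, ~96 dB dynamic range
# 24-bit: 16,777,216 quantization levels, ~144 dB dynamic range
```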