This selection and arrangement of content is licensed under the Creative Commons Attribution License: http://creativecommons.org/licenses/by/1.0


Fundamentals of Electrical Engineering I : a book by Don H. Johnson


Don H. Johnson, J.S. Abercrombie Professor ( Emeritus ) :: Electrical & Computer Engineering

713-348-4956, dhj@rice.edu :: Office: Duncan Hall 2095 :: Teaching: ELEC 241: Fundamentals of Electrical Engineering I, MWF 11-11:55, DH1070

 [ https://courses.rice.edu/courses/courses/!SWKSCAT.cat?p_action=COURSE&p_term=202210&p_crn=10019 ]

Don Johnson received the S.B. and S.M. degrees in 1970, the E.E. degree in 1971, and the Ph.D. degree in 1974, all in electrical engineering from the Massachusetts Institute of Technology. He joined M.I.T. Lincoln Laboratory as a staff member in 1974 to work on digital speech systems. In 1977, he joined the faculty of the Electrical and Computer Engineering Department at Rice University, where he is currently the J.S. Abercrombie Professor Emeritus in that department and Professor Emeritus in the Statistics Department. At MIT and at Rice, he received several institution-wide teaching awards, including Rice’s George R. Brown Award for Excellence in Teaching and the George R. Brown award for Superior Teaching four times. He was a cofounder of Modulus Technologies, Inc. He was President of the IEEE’s Signal Processing Society, received the Signal Processing Society’s Meritorious Service Award for 2000 and was one of the IEEE Signal Processing Society’s Distinguished Lecturers. Professor Johnson is a Life Fellow of the IEEE.

Professor Johnson’s present research activities focus on issues in statistical signal processing. Particular areas of interest are determining the weave characteristics of the canvases of master paintings and non-Gaussian signal processing. A curriculum vita (PDF) is available as well as a list of recent publications, some of which have not appeared in print. ( https://dhj.rice.edu/publications/ )


Online: http://cnx.org/content/col10040/ :: CONNEXIONS

Rice University, Houston, Texas

©2016 Don Johnson


 
Table of Contents

  1. Introduction
     1.1 Themes
     1.2 Signals Represent Information
     1.3 Structure of Communication Systems
     1.4 The Fundamental Signal: The Sinusoid
     1.5 Introduction Problems
     Solutions

  2. Signals and Systems
     2.1 Complex Numbers
     2.2 Elemental Signals
     2.3 Signal Decomposition
     2.4 Discrete-Time Signals
     2.5 Introduction to Systems
     2.6 Simple Systems
     2.7 Signals and Systems Problems
     Solutions

  3. Analog Signal Processing
     3.1 Voltage, Current, and Generic Circuit Elements
     3.2 Ideal Circuit Elements
     3.3 Ideal and Real-World Circuit Elements
     3.4 Electric Circuits and Interconnection Laws
     3.5 Power Dissipation in Resistor Circuits
     3.6 Series and Parallel Circuits
     3.7 Equivalent Circuits: Resistors and Sources
     3.8 Circuits with Capacitors and Inductors
     3.9 The Impedance Concept
     3.10 Time and Frequency Domains
     3.11 Power in the Frequency Domain
     3.12 Equivalent Circuits: Impedances and Sources
     3.13 Transfer Functions
     3.14 Designing Transfer Functions
     3.15 Formal Circuit Methods: Node Method
     3.16 Power Conservation in Circuits
     3.17 Electronics
     3.18 Dependent Sources
     3.19 Operational Amplifiers
     3.20 The Diode
     3.21 Analog Signal Processing Problems
     Solutions

  4. Frequency Domain
     4.1 Introduction to the Frequency Domain
     4.2 Fourier Series
     4.3 Classic Fourier Series
     4.4 A Signal's Power Spectrum
     4.5 Fourier Series Approximation of Signals
     4.6 Encoding Information in the Frequency Domain
     4.7 Filtering Periodic Signals
     4.8 Derivation of the Fourier Transform
     4.9 Linear Time Invariant Systems
     4.10 Modeling the Speech Signal
     4.11 Frequency Domain Problems
     Solutions

  5. Digital Signal Processing
     5.1 Introduction to Digital Signal Processing
     5.2 Introduction to Computer Organization
     5.3 The Sampling Theorem
     5.4 Amplitude Quantization
     5.5 Discrete-Time Signals and Systems
     5.6 Discrete-Time Fourier Transform (DTFT)
     5.7 Discrete Fourier Transforms (DFT)
     5.8 DFT: Computational Complexity
     5.9 Fast Fourier Transform (FFT)
     5.10 Spectrograms
     5.11 Discrete-Time Systems
     5.12 Discrete-Time Systems in the Time-Domain
     5.13 Discrete-Time Systems in the Frequency Domain
     5.14 Filtering in the Frequency Domain
     5.15 Efficiency of Frequency-Domain Filtering
     5.16 Discrete-Time Filtering of Analog Signals
     5.17 Digital Signal Processing Problems
     Solutions

  6. Information Communication
     6.1 Information Communication
     6.2 Types of Communication Channels
     6.3 Wireline Channels
     6.4 Wireless Channels
     6.5 Line-of-Sight Transmission
     6.6 The Ionosphere and Communications
     6.7 Communication with Satellites
     6.8 Noise and Interference
     6.9 Channel Models
     6.10 Baseband Communication
     6.11 Modulated Communication
     6.12 Signal-to-Noise Ratio of an Amplitude-Modulated Signal
     6.13 Digital Communication
     6.14 Binary Phase Shift Keying
     6.15 Frequency Shift Keying
     6.16 Digital Communication Receivers
     6.17 Digital Communication in the Presence of Noise
     6.18 Digital Communication System Properties
     6.19 Digital Channels
     6.20 Entropy
     6.21 Source Coding Theorem
     6.22 Compression and the Huffman Code
     6.23 Subtleties of Coding
     6.24 Channel Coding
     6.25 Repetition Codes
     6.26 Block Channel Coding
     6.27 Error-Correcting Codes: Hamming Distance
     6.28 Error-Correcting Codes: Channel Decoding
     6.29 Error-Correcting Codes: Hamming Codes
     6.30 Noisy Channel Coding Theorem
     6.31 Capacity of a Channel
     6.32 Comparison of Analog and Digital Communication
     6.33 Communication Networks
     6.34 Message Routing
     6.35 Network architectures and interconnection
     6.36 Ethernet
     6.37 Communication Protocols
     6.38 Information Communication Problems
     Solutions

  7. Appendix
     7.1 Decibels
     7.2 Permutations and Combinations
     7.3 Frequency Allocations
     Solutions

Index



Chapter 1 Introduction  (  https://courses.rice.edu/courses/courses/!SWKSCAT.cat?p_action=COURSE&p_term=202210&p_crn=14839  )

1.1 Themes 
From its beginnings in the late nineteenth century, electrical engineering has blossomed from focusing on electrical circuits for power, telegraphy, and telephony to focusing on a much broader range of disciplines [ "circuit design", computers, software, ... ]. However, the underlying themes are relevant today: power creation and transmission and information have been the underlying themes of electrical engineering for a century and a half.

This course concentrates on the latter theme: the representation, manipulation, transmission, and reception of information by electrical means.

This course describes what information is, how engineers quantify information, and how electrical signals represent information.

 

 [ https://www.news-medical.net/health/The-Neocortex-and-Motor-Commands.aspx ]
 

Information can take a variety of forms. For example, when you speak to a friend, your thoughts are translated by your brain into motor commands that cause various vocal tract components (the jaw, the tongue, the lips) to move in a coordinated fashion.
Information arises in your thoughts and is represented by speech, which must have a well-defined, broadly known structure so that someone else can understand what you say.

 
Utterances convey information in sound pressure waves, which propagate to your friend's ear.

There, at your friend's ear, sound energy is converted back to neural activity; and, if what you say makes sense to the receiver, she understands what you say.

Your words could have been recorded on a compact disc (CD), mailed to your friend and listened to (by her) on her stereo.

"Information" can take the form of a text file you type into your word processor. You might send the file via e-mail to a friend, who reads it and understands it.

From an information-theoretic viewpoint, all of these scenarios are equivalent, although the forms of the information representation (sound waves, plastic, and computer files) are very different.

Engineers, who don’t care about information "content", categorize information into two different forms: analog and digital.   [ Marshall McLuhan ]

Analog information is continuous-valued; examples are audio and video.
Digital information is discrete-valued; examples are text (like what you are reading now) and DNA sequences. [ "DNA sequences" as "digital" : https://www.nature.com/articles/nature01410 ]

The conversion of information-bearing signals from one energy form into another is known as energy conversion or transduction (to "transduce").

All conversion systems are "inefficient" since some input energy is lost as heat, but this loss does not necessarily mean that the conveyed information is lost.

 [ "interpolation" :  https://en.wikipedia.org/wiki/Interpolation   
  FOR EXAMPLE: The "temperature probe" measures a range - from 0 degrees Fareheit to 100 degrees Farenheit - AND reports this as an output of 1.0 mV to 2.0 mV. The digital data logger device records the "mV" value - every 10 minutes; and, clocks the start of a day - at 12:00+ pm.  GIVEN THE FOLLOWING VALUES (tabe A ) - FROM THE DIGITAL DATA LOGGER - WHAT WAS THE TEMPERATURE - IN "CENTIGRADE"  - ON MAY 22, 1973 ? - at 7:00 AM  ]
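If the probe's 0-100 °F span maps linearly onto its 1.0-2.0 mV output, recovering the temperature from a logged reading is a linear interpolation followed by a unit conversion. The Python sketch below illustrates that calculation under exactly that assumption; the function names and the 1.62 mV sample reading are hypothetical, since Table A itself is not reproduced here.

```python
def mv_to_fahrenheit(mv, mv_min=1.0, mv_max=2.0, t_min=0.0, t_max=100.0):
    """Linearly interpolate a logger reading (mV) back to degrees Fahrenheit."""
    fraction = (mv - mv_min) / (mv_max - mv_min)   # position within the 1.0-2.0 mV span
    return t_min + fraction * (t_max - t_min)

def fahrenheit_to_celsius(t_f):
    """Convert degrees Fahrenheit to degrees Celsius (centigrade)."""
    return (t_f - 32.0) * 5.0 / 9.0

reading_mv = 1.62                        # hypothetical 7:00 AM entry from the data logger
t_f = mv_to_fahrenheit(reading_mv)       # 62 degrees Fahrenheit
print(fahrenheit_to_celsius(t_f))        # roughly 16.7 degrees Celsius
```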

Conceptually, we could use any form of energy to represent information, but electric signals are uniquely well-suited for information representation, transmission (signals can be broadcast from antennas or sent through wires), and manipulation (circuits can be built to reduce noise and computers can be used to modify information).

[ Energy comes in six basic forms:
chemical energy - https://en.wikipedia.org/wiki/Chemical_energy
electrical energy - https://en.wikipedia.org/wiki/Electrical_energy
radiant energy - https://en.wikipedia.org/wiki/Radiant_energy
mechanical energy - https://en.wikipedia.org/wiki/Mechanical_energy
thermal energy - https://en.wikipedia.org/wiki/Thermal_energy
and nuclear energy (nuclear power can be obtained from nuclear fission, nuclear decay, and nuclear fusion reactions).
Many additional forms of energy are combinations of these six basic categories. ]

Thus, we will be concerned with how to
•    represent all forms of information with electrical signals,
•    encode information as voltages, currents, and electromagnetic waves,
•    manipulate information-bearing electric signals with circuits and computers, and
•    receive electric signals and convert (decode) the information expressed by electric signals back into a useful form.
 
Telegraphy represents the earliest electrical information system, and it dates from 1837.  [ https://en.wikipedia.org/wiki/Telegraphy ]

At that time [1837+] , "electrical science" was largely empirical [ https://www.merriam-webster.com/dictionary/empirical ] , and only those with experience and intuition could develop telegraph systems.

   Electrical science came of age when James Clerk Maxwell  proclaimed in 1864 a set of equations that he claimed governed all electrical phenomena.  [ MAXWELL'S EQUATIONS ]

[ "On Physical Lines of Force" (1861): https://en.wikipedia.org/wiki/On_Physical_Lines_of_Force
  Maxwell's equations: https://en.wikipedia.org/wiki/Maxwell%27s_equations
  "A Dynamical Theory of the Electromagnetic Field" (1864): https://royalsocietypublishing.org/doi/pdf/10.1098/rstl.1865.0008 ]


These "equations" predicted that light was an electromagnetic wave, and that energy could propagate.


"...  Maxwell's addition to Ampère's law is particularly important: it makes the set of equations mathematically consistent for non static fields, without changing the laws of Ampere and Gauss for static fields.[2] However, as a consequence, it predicts that a changing magnetic field induces an electric field and vice versa. Therefore, these equations allow self-sustaining "electromagnetic waves" to travel through empty space (see electromagnetic wave equation). ... The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents,[note 4] matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-raysradio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics. ..."  SOURCE:  https://en.wikipedia.org/wiki/Maxwell%27s_equations  


Because of the complexity of Maxwell's presentation, the development of the telephone in 1876 was due largely to empirical work.
[ https://en.wikipedia.org/wiki/Telephone#Early_history : https://www.loc.gov/everyday-mysteries/technology/item/who-is-credited-with-inventing-the-telephone ]

Once Heinrich Hertz confirmed Maxwell's prediction of what we now call radio waves in the late 1880s, Maxwell's equations were simplified by Oliver Heaviside and others, and were widely read.

This understanding of fundamentals led to a quick succession of inventions that marked the true emergence of the communications age:
the wireless telegraph (1899) - [ https://en.wikipedia.org/wiki/Wireless_telegraphy ]
the vacuum tube (1905) - [ https://en.wikipedia.org/wiki/Vacuum_tube ]
and radio broadcasting.

During the first part of the twentieth century [1900+], circuit theory and electromagnetic theory were all an electrical engineer needed to know to be qualified and produce first-rate designs.

[ "circuit theory" ::  https://en.wikipedia.org/wiki/Network_analysis_(electrical_circuits)
[ "electromagnetic theory"  ::  https://en.wikipedia.org/wiki/History_of_electromagnetic_theory

Consequently, circuit theory served as the foundation and the framework of all of electrical engineering education.

At mid-century, three “inventions” changed the ground rules.
These were 1) the first public demonstration of the first electronic computer (1946),
2) the invention of the transistor (1947),
and 3) the publication of "A Mathematical Theory of Communication" - by Claude Shannon (1948).
     [ PDF : https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf  ]

Although conceived separately, these creations gave birth to the "information age", in which digital and analog communication systems interact and compete for design preferences.

About twenty years later, the laser was invented [ https://en.wikipedia.org/wiki/Laser#History ] , which opened even more design possibilities.

  Thus, the primary focus shifted from how to build communication systems (the circuit theory era) to what communications systems were intended to accomplish.

Only once the intended system is specified can an implementation be selected. Today's electrical engineer must be mindful of the system's ultimate goal, and understand the tradeoffs between digital and analog alternatives, and between hardware and software configurations in designing information systems. [ https://en.wikipedia.org/wiki/Electrical_network ]

1.2 Signals Represent Information 
Whether analog or digital, information is represented by the fundamental quantity in electrical engineering: the signal.

  Stated in mathematical terms, a "signal" is merely a function.

Analog signals are continuous-valued; digital signals are discrete-valued. The independent variable of the signal could be time (speech, for example), space (images), or the integers (denoting the sequencing of letters and numbers in a football score).

1.2.1 Analog Signals

Analog signals are usually signals defined over continuous independent variable(s).

 Speech, as described in Section 4.10, is produced by your vocal cords exciting acoustic resonances in your vocal tract.

The result is pressure waves propagating in the air, and the speech signal thus corresponds to a function having independent variables of space and time and a value corresponding to air pressure: s(x,t) (Here we use vector notation x to denote spatial coordinates). [ https://en.wikipedia.org/wiki/Vector_notation ] 

When you record someone talking, you are evaluating the speech signal at a particular spatial location, x0 say. An example of the resulting waveform s(x0,t) is shown in Figure 1.1.






Photographs are static, and are continuous-valued signals defined over space. Black-and-white images have only one value at each point in space, which amounts to the scene's optical reflection properties at that point.

 [  https://en.wikipedia.org/wiki/Optical_properties  ]

In Figure 1.2, an image is shown, demonstrating that it (like all other images) is a function of two independent spatial variables.
[ "functions of two independent variables" :: https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Mathematical_Methods_in_Chemistry_(Levitus)/08%3A_Calculus_in_More_than_One_Variable/8.01%3A_Functions_of_Two_Independent_Variables ]


 
Figure 1.1: A speech signal’s amplitude relates to tiny air pressure variations. Shown is a recording of the vowel “e” (as in “speech”).
 
Figure 1.2: On the left is the classic Lena image, which is used ubiquitously as a test image. It contains straight and curved lines, complicated texture, and a face. On the right is a perspective display of the Lena image as a signal: a function of two spatial variables. The colors merely help show what signal values are about the same size. In this image, signal values range between 0 and 255; why is that?
 
Color images have values that express how reflectivity depends on the optical spectrum. [ https://www.merriam-webster.com/dictionary/reflectivity ] 

Painters long ago found that mixing together combinations of the so-called primary colors–red, yellow and blue–can produce very realistic color images.

Thus, images today are usually thought of as having three values at every point in space, but a different set of colors is used: how much of red, green, and blue is present.



00    nul    01    soh    02    stx    03    etx    04    eot    05    enq    06    ack    07    bel
08    bs    09    ht    0A    nl    0B    vt    0C    np    0D    cr    0E    so    0F    si
10    dle    11    dc1    12    dc2    13    dc3    14    dc4    15    nak    16    syn    17    etb
18    can    19    em    1A    sub    1B    esc    1C    fs    1D    gs    1E    rs    1F    us
20    sp    21    !    22    "    23    #    24    $    25    %    26    &    27    ’
28    (    29    )    2A    *    2B    +    2C    ,    2D    -    2E    .    2F    /
30    0    31    1    32    2    33    3    34    4    35    5    36    6    37    7
38    8    39    9    3A    :    3B    ;    3C    <    3D    =    3E    >    3F    ?
40    @    41    A    42    B    43    C    44    D    45    E    46    F    47    G
48    H    49    I    4A    J    4B    K    4C    L    4D    M    4E    N    4F    O
50    P    51    Q    52    R    53    S    54    T    55    U    56    V    57    W
58    X    59    Y    5A    Z    5B    [    5C    \    5D    ]    5E    ^    5F    _
60    `    61    a    62    b    63    c    64    d    65    e    66    f    67    g
68    h    69    i    6A    j    6B    k    6C    l    6D    m    6E    n    6F    o
70    p    71    q    72    r    73    s    74    t    75    u    76    v    77    w
78    x    79    y    7A    z    7B    {    7C    |    7D    }    7E    ∼    7F    del

Table 1.1:
The ASCII translation table shows how standard keyboard characters are represented by integers.
In pairs of columns, this table displays first the so-called 7-bit code (how many characters in a seven-bit code?), then the character the number represents.

 The numeric codes are represented in hexadecimal (base-16) notation.

Mnemonic characters correspond to control characters, some of which may be familiar (like cr for carriage return) and some not (bel means a “bell”).
 

Mathematically, color pictures are multivalued (vector-valued) signals: s(x) = (r(x), g(x), b(x))^T.

Interesting cases abound where the analog signal depends not on a continuous variable, such as time, but on a discrete variable.

For example, temperature readings taken every hour have continuous–analog–values, but the signal’s independent variable is (essentially) the integers.


1.2.2 Digital Signals
The word “digital” means discrete-valued and implies the signal depends on the integers rather than a continuous variable. Digital information includes numbers and symbols (characters typed on the keyboard, for example). Computers rely on the digital representation of information to manipulate and transform information. Symbols do not have a numeric value; instead, each is typically represented by a unique number, but performing arithmetic with these representations makes no sense. The ASCII character code shown in Table 1.1 has the upper- and lowercase characters, the numbers, punctuation marks, and various other symbols represented by a seven-bit integer. For example, the ASCII code represents the letter a as the number 97 and the letter A as the number 65.
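As a quick, added illustration of this numeric representation (a minimal Python sketch, not part of the original text), the built-in ord and chr functions expose exactly the codes listed in Table 1.1:

```python
# Characters are stored as small integers; ASCII is a seven-bit code, so it has 2**7 = 128 entries.
print(ord('a'), hex(ord('a')))   # 97 0x61, matching the table entry 61 -> a
print(ord('A'), hex(ord('A')))   # 65 0x41, matching the table entry 41 -> A
print(chr(0x7E))                 # '~', the last printable ASCII character
print(2 ** 7)                    # 128 distinct seven-bit codes
```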

1.3 Structure of Communication Systems 
The fundamental model of communications is portrayed in Figure 1.3 (Fundamental model of communication). In this fundamental model, each message-bearing signal, exemplified by s(t), is analog and is a function of time. A system operates on zero, one, or several signals to produce more signals or to simply absorb them (Figure 1.4). In electrical engineering, we represent a system as a box, receiving input signals (usually coming from the left) and producing from them new output signals. This graphical representation is known as a block diagram. We denote input signals by lines having arrows pointing into the box, output signals by arrows pointing away. As typified by the communications model, how information flows, how it is corrupted and manipulated, and how it is ultimately received is summarized by interconnecting block diagrams: The outputs of one or more systems serve as the inputs to others.
 

Figure 1.3: The Fundamental Model of Communication.
 
Figure 1.4: A system operates on its input signal x(t) to produce an output y (t).
 
In the communications model, the source produces a signal that will be absorbed by the sink.

Examples of "time-domain signals" produced by a source are music, speech, and characters typed on a keyboard.

"Signals" can also be functions of two variables—an image is a signal that depends on two spatial variables—or more— television pictures (video signals) are functions of two spatial variables and time. Thus, information sources produce signals.

 In physical systems, each signal corresponds to an electrical voltage or current. To be able to design systems, we must understand electrical science and technology.
 However, we first need to understand the big picture to appreciate the context in which the electrical engineer works.

In communication systems, messages—signals produced by sources—must be recast for transmission. The block diagram has the message s(t) passing through a block labeled transmitter that produces the signal x(t). In the case of a radio transmitter, it accepts an input audio signal and produces a signal that physically is an electromagnetic wave radiated by an antenna and propagating as Maxwell’s equations predict.

In the case of a computer network, typed characters are encapsulated in packets, attached with a destination address, and launched into the Internet (packet switching).

From the communication systems “big picture” perspective, the same block diagram applies although the systems can be very different.

In any case, the "transmitter" should not operate in such a way that the message s(t) cannot be recovered from x(t).

In the mathematical sense, the inverse system must exist; otherwise, the communication system cannot be considered reliable. (What the transmitter encodes, the receiver must be able to decode.)

(It is ridiculous to transmit a signal in such a way that no one can recover the original.

However, "clever systems" exist that transmit signals so that only the “in crowd” can recover them.
 Such cryptographic systems underlie secret communications.)

 

Transmitted signals next pass through the next stage, the evil channel. Nothing good happens to a signal in a channel: It can become corrupted by noise, distorted, and attenuated among many possibilities.

The channel cannot be escaped (the real world is cruel), and transmitter design and receiver design focus on how best to jointly fend off the channel’s effects on signals.

The channel is another system in our block diagram, and produces r(t), the signal received by the receiver.

If the channel were benign (good luck finding such a channel in the real world), the receiver would serve as the inverse system to the transmitter, and yield the message with no distortion.

However, because of the channel, the receiver must do its best to produce a received message ŝ(t) that resembles s(t) as much as possible.

 Shannon  showed in his 1948 paper that reliable—for the moment, take this word to mean error-free—digital communication was possible over arbitrarily noisy channels.

 It is this result that modern communications systems exploit, and why many communications systems are going “digital.”

The module on Information Communication (Chapter 6) details Shannon's theory of information, and there we learn of Shannon's result and how to use it.

Finally, the received message is passed to the information sink that somehow makes use of the message.

In the "communications model", the "source" is a system having no input but producing an output; a sink has an input and no output.

Understanding signal generation and how systems work amounts to understanding signals, the nature of the information they represent, how information is transformed between analog and digital forms, and how information can be processed by systems operating on information-bearing signals.

This understanding demands two different fields of knowledge.

One is electrical science: How are signals represented and manipulated electrically?

The second is signal science: What is the structure of signals, no matter what their source, what is their information content, and what capabilities does this structure force upon communication systems?


1.4 The Fundamental Signal: The Sinusoid 

The most ubiquitous and important signal in electrical engineering is the sinusoid.

 

    Sine Definition
    s(t) = Acos(2πft + φ)    or    Acos(ωt + φ)    (1.1)


A is known as the sinusoid's amplitude, and determines the sinusoid's size. The amplitude conveys the sinusoid's physical units (volts, lumens, etc). The frequency f has units of Hz (Hertz) or s^−1, and determines how rapidly the sinusoid oscillates per unit time. The temporal variable t always has units of seconds, and thus the frequency determines how many oscillations/second the sinusoid has. AM radio stations have carrier frequencies of about 1 MHz (one mega-hertz or 10^6 Hz), while FM stations have carrier frequencies of about 100 MHz. Frequency can also be expressed by the symbol ω, which has units of radians/second. Clearly, ω = 2πf. In communications, we most often express frequency in Hertz. Finally, φ is the phase, and determines the sine wave's behavior at the origin (t = 0). It has units of radians, but we can express it in degrees, realizing that in computations we must convert from degrees to radians. Note that if φ = −π/2, the sinusoid corresponds to a sine function, having a zero value at the origin.
    sin(2πft) = cos(2πft − π/2)    (1.2)
Thus, the only difference between a sine and cosine signal is the phase; we term either a sinusoid.
We can also define a discrete-time variant of the sinusoid: Acos(2πfn + φ). Here, the independent variable is n and represents the integers. Frequency now has no dimensions, and takes on values between 0 and 1.



    Exercise 1.1    (Solution on p. 9.)
Show that cos(2πfn) = cos(2π(f + 1)n), which means that a sinusoid having a frequency larger than one corresponds to a sinusoid having a frequency less than one. Note: We shall call either sinusoid an analog signal. Only when the discrete-time signal takes on a finite set of values can it be considered a digital signal.
    Exercise 1.2    (Solution on p. 9.)
Can you think of a simple signal that has a finite number of values but is defined in continuous time? Such a signal is also an analog signal.


1.4.2 Communicating Information with Signals
The basic idea of communication engineering is to use a signal’s parameters to represent either real numbers or other signals. The technical term is to modulate the carrier signal’s parameters to transmit information from one place to another. To explore the notion of modulation, we can send a real number (today’s temperature, for example) by changing a sinusoid’s amplitude accordingly.
 
Figure 1.5
 
If we wanted to send the daily temperature, we would keep the frequency constant (so the receiver would know what to expect) and change the amplitude at midnight. We could relate temperature to amplitude by the formula A = A0 (1 + kT), where A0 and k are constants that the transmitter and receiver must both know.
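The following NumPy sketch makes this amplitude-encoding idea concrete; it is illustrative only, and the constants A0 and k, the 1 kHz carrier, and the 25-degree temperature are arbitrary choices rather than values from the text:

```python
import numpy as np

A0, k = 1.0, 0.01          # constants known to both transmitter and receiver (arbitrary here)
f = 1000.0                 # carrier frequency in Hz, kept constant so the receiver knows what to expect
T_deg = 25.0               # today's temperature, the number we want to send

t = np.linspace(0.0, 0.01, 1000)      # 10 ms of signal
A = A0 * (1 + k * T_deg)              # encode the temperature in the sinusoid's amplitude
x = A * np.cos(2 * np.pi * f * t)     # transmitted signal

# Receiver: estimate the amplitude (here simply the peak) and invert the formula.
A_hat = np.max(np.abs(x))
T_hat = (A_hat / A0 - 1) / k
print(T_hat)                          # approximately 25.0
```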

If we had two numbers we wanted to send at the same time, we could modulate the sinusoid’s frequency as well as its amplitude. This modulation scheme assumes we can estimate the sinusoid’s amplitude and frequency; we shall learn that this is indeed possible.

Now suppose we have a sequence of parameters to send. We have exploited all of the sinusoid’s two parameters. What we can do is modulate them for a limited time (say T seconds), and send two parameters every T. This simple notion corresponds to how a modem works. Here, typed characters are encoded into eight bits, and the individual bits are encoded into a sinusoid’s amplitude and frequency. We’ll learn how this is done in subsequent modules, and more importantly, we’ll learn what the limits are on such digital communication schemes. 

 


[ binary, bits, bytes, register operations > SOURCE: https://www.hackerearth.com/practice/basic-programming/bit-manipulation/basics-of-bit-manipulation/tutorial/
  https://en.wikipedia.org/wiki/Binary_number ]

A byte is composed of 8 bits. The largest number an 8-bit byte can hold is 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255, because the bit positions carry the values

2^0 = 1
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 16
2^5 = 32
2^6 = 64
2^7 = 128

Counting the all-zeros state, one 8-bit register can therefore represent 256 distinct states (0 through 255). [ https://web.cortland.edu/flteach/mm-course/characters.html ]

"... 1 byte comprises 8 bits, and any integer or character can be represented using bits in a computer, which we call its binary form (containing only 1s and 0s), or its base-2 form.

Example:
1) 14 = (1110)_2 = 1·2^3 + 1·2^2 + 1·2^1 + 0·2^0 = 14.

2) 20 = (10100)_2 = 1·2^4 + 0·2^3 + 1·2^2 + 0·2^1 + 0·2^0 = 20.

For characters, we use the ASCII representation, in which each character is an integer that again can be represented using bits as explained above. ..."
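A small Python check of these binary facts (an added sketch, not from the quoted source):

```python
# Base-2 expansions of the two worked examples.
assert int('1110', 2) == 14               # 1*8 + 1*4 + 1*2 + 0*1
assert int('10100', 2) == 20              # 1*16 + 0*8 + 1*4 + 0*2 + 0*1
print(format(14, 'b'), format(20, 'b'))   # 1110 10100

# An 8-bit byte: largest value and number of distinct states.
print(sum(2 ** n for n in range(8)))      # 255 = 128+64+32+16+8+4+2+1
print(2 ** 8)                             # 256 states, namely 0 through 255
```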



 
Chapter 2

Signals and Systems

2.1 Complex Numbers  [  https://en.wikipedia.org/wiki/Complex_number  ]

 While the fundamental signal used in electrical engineering is the sinusoid, it can be expressed mathematically in terms of an even more fundamental signal: the complex exponential.

[ https://people.math.wisc.edu/~angenent/Free-Lecture-Notes/freecomplexnumbers.pdf ] 

Representing sinusoids in terms of complex exponentials is not a mathematical oddity.

 Fluency with complex numbers and rational functions of complex variables is a critical skill all engineers master.

Understanding information and power system designs and developing new systems all hinge on using complex numbers. (Recall that x^2 = −1 has no real solution; defining an "imaginary" unit that solves it is what leads to complex numbers.)

In short, they are critical to modern electrical engineering, a realization made over a century ago.

 

2.1.1 Definitions

The notion of the square root of −1 originated with the quadratic formula: the solution of certain quadratic equations mathematically exists only if the so-called imaginary quantity √−1 could be defined. Euler first used i for the imaginary unit, but that notation did not take hold until roughly Ampère's time. Ampère used the symbol i to denote current (intensité de courant). It wasn't until the twentieth century that the importance of complex numbers to circuit theory became evident. By then, using i for current was entrenched and electrical engineers chose j for writing complex numbers.


SOURCE: http://tuttle.merc.iastate.edu/ee201/topics/complex_numbers.pdf
"... (Note: In almost all other fields, it is conventional to use i for √−1. However, in EE/CprE (electrical engineering), we use i for current, and so it has become normal practice in our business to use j.) ..."

An imaginary number has the form jb = √(−b^2). A complex number, z, consists of the ordered pair (a, b), where a is the real component and b is the imaginary component (the j is suppressed because the imaginary component of the pair is always in the second position). The imaginary number jb equals (0, b). Note that a and b are real-valued numbers.
Figure 2.1 shows that we can locate a complex number in what we call the complex plane. Here, a, the real part, is the x-coordinate and b, the imaginary part, is the y-coordinate. From analytic geometry, we know that locations in the plane can be expressed as the sum of vectors, with the vectors corresponding to the x and y directions. Consequently, a complex number z can be expressed as the (vector) sum z = a + jb where j indicates the y-coordinate. This representation is known as the Cartesian form of z. An imaginary number can’t be numerically added to a real number; rather, this notation for a complex number represents vector addition, but it provides a convenient notation when we perform arithmetic manipulations.
Some obvious terminology. The real part of the complex number z = a + jb, written as Re[z], equals a. We consider the real part as a function that works by selecting that component of a complex number not multiplied by j. The imaginary part of z, Im[z], equals b: that part of a complex number that is multiplied by j. Again, both the real and imaginary parts of a complex number are real-valued. The complex conjugate of z, written as z*, has the same real part as z but an imaginary part of the opposite sign.
 
Figure 2.1: A complex number is an ordered pair (a,b) that can be regarded as coordinates in the plane. Complex numbers can also be expressed in polar coordinates as r∠θ.
 
    z = Re[z] + j Im[z],    z* = Re[z] − j Im[z]    (2.1)
Using Cartesian notation, the following properties easily follow.
•    If we add two complex numbers, the real part of the result equals the sum of the real parts and the imaginary part equals the sum of the imaginary parts. This property follows from the laws of vector addition.
a1 + jb1 + a2 + jb2 = a1 + a2 + j (b1 + b2)
In this way, the real and imaginary parts remain separate.
•    The product of j and a real number is an imaginary number: ja. The product of j and an imaginary number is a real number: j(jb) = −b because j^2 = −1. Consequently, multiplying a complex number by j rotates the number's position by 90 degrees.


    Exercise 2.1    (Solution on p. 30.)
Use the definition of addition to show that the real and imaginary parts can be expressed as a sum/difference of a complex number and its conjugate: Re[z] = (z + z*)/2 and Im[z] = (z − z*)/(2j).
Complex numbers can also be expressed in an alternate form, polar form, which we will find quite useful. Polar form arises from the geometric interpretation of complex numbers. The Cartesian form of a complex number can be re-written as

    a + jb = √(a^2 + b^2) ( a/√(a^2 + b^2) + j b/√(a^2 + b^2) )

By forming a right triangle having sides a and b, we see that the real and imaginary parts correspond to the cosine and sine of the triangle's base angle. We thus obtain the polar form for complex numbers:

    z = a + jb = r∠θ,    r = |z| = √(a^2 + b^2),    a = r cos(θ),    b = r sin(θ),    θ = arctan(b/a)

The quantity r is known as the magnitude of the complex number z, and is frequently written as |z|. The quantity θ is the complex number's angle. In using the arc-tangent formula to find the angle, we must take into account the quadrant in which the complex number lies.
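The quadrant issue is exactly what the two-argument arctangent resolves; the short Python check below (added for illustration, with an arbitrary second-quadrant example) shows the difference:

```python
import math

a, b = -1.0, 1.0             # the complex number -1 + j1 lies in the second quadrant
naive = math.atan(b / a)     # -pi/4: the plain arctangent loses the quadrant
angle = math.atan2(b, a)     #  3*pi/4: the correct angle of -1 + j1
print(naive, angle)
print(math.hypot(a, b))      # the magnitude r = sqrt(a^2 + b^2), about 1.414
```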
    Exercise 2.2    (Solution on p. 30.)
Convert 3 − 2j to polar form.
2.1.2 Euler’s Formula
Surprisingly, the polar form of a complex number z can be expressed mathematically as

    z = r e^(jθ)    (2.2)

To show this result, we use Euler's relations that express exponentials with imaginary arguments in terms of trigonometric functions.

    e^(jθ) = cos(θ) + j sin(θ)    (2.3)
    cos(θ) = (e^(jθ) + e^(−jθ)) / 2,    sin(θ) = (e^(jθ) − e^(−jθ)) / (2j)    (2.4)
The first of these is easily derived from the Taylor's series for the exponential.

    e^x = 1 + x + x^2/2! + x^3/3! + ···

Substituting jθ for x, we find that

    e^(jθ) = 1 + jθ − θ^2/2! − j θ^3/3! + ···

because j^2 = −1, j^3 = −j, and j^4 = 1. Grouping separately the real-valued terms and the imaginary-valued ones,

    e^(jθ) = (1 − θ^2/2! + ···) + j (θ − θ^3/3! + ···)

The real-valued terms correspond to the Taylor's series for cos(θ), the imaginary ones to sin(θ), and Euler's first relation results. The remaining relations are easily derived from the first. Because of the relationship r = √(a^2 + b^2), we see that multiplying the exponential in (2.3) by a real constant corresponds to setting the radius of the complex number by the constant.
2.1.3 Calculating with Complex Numbers
Adding and subtracting complex numbers expressed in Cartesian form is quite easy: You add (subtract) the real parts and imaginary parts separately.
    (z1 ± z2) = (a1 ± a2) + j (b1 ± b2)    (2.5)
To multiply two complex numbers in Cartesian form is not quite as easy, but follows directly from following the usual rules of arithmetic.
    z1 z2 = (a1 + jb1)(a2 + jb2) = a1 a2 − b1 b2 + j (a1 b2 + a2 b1)    (2.6)
Note that we are, in a sense, multiplying two vectors to obtain another vector. Complex arithmetic provides a unique way of defining vector multiplication.
    Exercise 2.3    (Solution on p. 30.)
What is the product of a complex number and its conjugate?
Division requires mathematical manipulation. We convert the division problem into a multiplication problem by multiplying both the numerator and denominator by the conjugate of the denominator.

    z1/z2 = (a1 + jb1)/(a2 + jb2) = (a1 + jb1)(a2 − jb2) / (a2^2 + b2^2) = (a1 a2 + b1 b2 + j (a2 b1 − a1 b2)) / (a2^2 + b2^2)    (2.7)
Because the final result is so complicated, it’s best to remember how to perform division—multiplying numerator and denominator by the complex conjugate of the denominator—than trying to remember the final result.
The properties of the exponential make calculating the product and ratio of two complex numbers much simpler when the numbers are expressed in polar form.

    z1 z2 = r1 e^(jθ1) r2 e^(jθ2) = r1 r2 e^(j(θ1 + θ2)),    z1/z2 = (r1/r2) e^(j(θ1 − θ2))    (2.8)
To multiply, the radius equals the product of the radii and the angle the sum of the angles. To divide, the radius equals the ratio of the radii and the angle the difference of the angles. When the original complex numbers are in Cartesian form, it’s usually worth translating into polar form, then performing the multiplication or division (especially in the case of the latter). Addition and subtraction of polar forms amounts to converting to Cartesian form, performing the arithmetic operation, and converting back to polar form.
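To make these rules concrete, here is a small check using Python's standard cmath module (an added sketch, not part of the original text; the two complex numbers are arbitrary):

```python
import cmath

z1 = 3 - 2j
z2 = 1 + 4j

# Cartesian multiplication and division...
prod, quot = z1 * z2, z1 / z2

# ...agree with the polar-form rules: multiply the radii and add the angles,
# divide the radii and subtract the angles.
r1, th1 = cmath.polar(z1)
r2, th2 = cmath.polar(z2)
print(prod, cmath.rect(r1 * r2, th1 + th2))   # same complex number, computed two ways
print(quot, cmath.rect(r1 / r2, th1 - th2))
```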
Example 2.1
When we solve circuit problems, the crucial quantity, known as a transfer function, will always be expressed as the ratio of polynomials in the variable s = j2πf. What we’ll need to understand the circuit’s effect is the transfer function in polar form. For instance, suppose the transfer function equals
         (2.9)
    s = j2πf    (2.10)
Performing the required division is most easily accomplished by first expressing the numerator and denominator each in polar form, then calculating the ratio. Thus,
(2.11)
(2.12)
(2.13)
2.2 Elemental Signals 
Elemental signals are the building blocks with which we build complicated signals. By definition, elemental signals have a simple structure. Exactly what we mean by the “structure of a signal” will unfold in this section of the course. Signals are nothing more than functions defined with respect to some independent variable, which we take to be time for the most part. Very interesting signals are not functions solely of time; one great example of which is an image. For it, the independent variables are x and y (two-dimensional space). Video signals are functions of three variables: two spatial dimensions and time. Fortunately, most of the ideas underlying modern signal theory can be exemplified with one-dimensional signals.
2.2.1 Sinusoids
Perhaps the most common real-valued signal is the sinusoid.
s(t) = Acos(2πf0t + φ)
For this signal, A is its amplitude, f0 its frequency, and φ its phase.
2.2.2 Complex Exponentials
The most important signal is complex-valued, the complex exponential.
    s(t) = A e^(j(2πf0 t + φ))    (2.14)
         = A e^(jφ) e^(j2πf0 t)    (2.15)

Here, j denotes √−1. A e^(jφ) is known as the signal's complex amplitude. Considering the complex amplitude as a complex number in polar form, its magnitude is the amplitude A and its angle the signal phase. The complex amplitude is also known as a phasor. The complex exponential cannot be further decomposed into more elemental signals, and is the most important signal in electrical engineering! Mathematical manipulations at first appear to be more difficult because complex-valued numbers are introduced. In fact, early in the twentieth century, mathematicians thought engineers would not be sufficiently sophisticated to handle complex exponentials even though they greatly simplified solving circuit problems. Steinmetz introduced complex exponentials to electrical engineering, and demonstrated that “mere” engineers could use them to good effect and even obtain right answers! See Section 2.1 for a review of complex numbers and complex arithmetic.
The complex exponential defines the notion of frequency: it is the only signal that contains only one frequency component. The sinusoid consists of two frequency components: one at the frequency +f0 and the other at −f0.
Euler relation: This decomposition of the sinusoid can be traced to Euler's relation.

    cos(2πft) = (e^(j2πft) + e^(−j2πft)) / 2    (2.16)
    sin(2πft) = (e^(j2πft) − e^(−j2πft)) / (2j)    (2.17)
    e^(j2πft) = cos(2πft) + j sin(2πft)    (2.18)

Decomposition: The complex exponential signal can thus be written in terms of its real and imaginary parts using Euler's relation. Thus, sinusoidal signals can be expressed as either the real or the imaginary part of a complex exponential signal, the choice depending on whether cosine or sine phase is needed, or as the sum of two complex exponentials. These two decompositions are mathematically equivalent to each other.

    A cos(2πft + φ) = Re[A e^(jφ) e^(j2πft)]    (2.19)
    A sin(2πft + φ) = Im[A e^(jφ) e^(j2πft)]    (2.20)
Using the complex plane, we can envision the complex exponential's temporal variations as seen in the above figure (Figure 2.2). The magnitude of the complex exponential is A, and the initial value of the complex exponential at t = 0 has an angle of φ. As time increases, the locus of points traced by the complex exponential is a circle (it has constant magnitude of A). The number of times per second we go around the circle equals the frequency f. The time taken for the complex exponential to go around the circle once is known as its period T, and equals 1/f. The projections onto the real and imaginary axes of the rotating vector representing the complex exponential signal are the cosine and sine signal of Euler's relation (2.16).
 
Figure 2.2: Graphically, the complex exponential scribes a circle in the complex plane as time evolves. Its real and imaginary parts are sinusoids. The rate at which the signal goes around the circle is the frequency f and the time taken to go around is the period T. A fundamental relationship is T = 1/f.
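The rotating-phasor picture of Figure 2.2 is easy to reproduce numerically; this NumPy sketch (added for illustration, with arbitrary A, f, and φ) samples A e^(j(2πft + φ)) and confirms that its magnitude is constant and that its real and imaginary parts are the cosine and sine of Euler's relation:

```python
import numpy as np

A, f, phi = 1.5, 2.0, np.pi / 4          # amplitude, frequency (Hz), and phase (arbitrary values)
t = np.linspace(0.0, 1.0 / f, 500)       # one period T = 1/f

z = A * np.exp(1j * (2 * np.pi * f * t + phi))   # the complex exponential

print(np.allclose(np.abs(z), A))                                   # True: a circle of radius A
print(np.allclose(z.real, A * np.cos(2 * np.pi * f * t + phi)))    # True: real part is the cosine
print(np.allclose(z.imag, A * np.sin(2 * np.pi * f * t + phi)))    # True: imaginary part is the sine
```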
 
2.2.3 Real Exponentials
As opposed to complex exponentials which oscillate, real exponentials (Figure 2.3) decay.
    s(t) = e^(−t/τ)    (2.21)

The quantity τ is known as the exponential's time constant, and corresponds to the time required for the exponential to decrease by a factor of 1/e, which approximately equals 0.368. A decaying complex exponential is the product of a real and a complex exponential.

    s(t) = A e^(jφ) e^(−t/τ) e^(j2πft) = A e^(jφ) e^((−1/τ + j2πf)t)    (2.22)
In the complex plane, this signal corresponds to an exponential spiral. For such signals, we can define complex frequency as the quantity multiplying t.
 
Figure 2.3: The real exponential.
 
2.2.4 Unit Step
The unit step function (Figure 2.4) is denoted by u(t), and is defined to be
    u(t) = { 0,  t < 0
             1,  t > 0 }    (2.23)
 
Figure 2.4: The unit step.
Origin warning: This signal is discontinuous at the origin. Its value at the origin need not be defined because the value doesn’t matter in signal theory.
This kind of signal is used to describe signals that “turn on” suddenly. For example, to mathematically represent turning on an oscillator, we can write it as the product of a sinusoid and a step: s(t) = A sin(2πft) u(t).

2.2.5 Pulse
The unit pulse (Figure 2.5) describes turning a unit-amplitude signal on for a duration of ∆ seconds, then turning it off.

    p∆(t) = { 0,  t < 0
              1,  0 < t < ∆
              0,  t > ∆ }    (2.24)
 
Figure 2.5: The pulse.
We will find that this is the second most important signal in communications.
2.2.6 Square Wave
The square wave (Figure 2.6) sq(t) is a periodic signal like the sinusoid. It too has an amplitude and a period, which must be specified to characterize the signal. We find subsequently that the sine wave is a simpler signal than the square wave.
 
Figure 2.6: The square wave.
2.3 Signal Decomposition 
A signal’s complexity is not related to how wiggly it is. Rather, a signal expert looks for ways of decomposing a given signal into a sum of simpler signals, which we term the signal decomposition. Though we will never compute a signal’s complexity, it essentially equals the number of terms in its decomposition. In writing a signal as a sum of component signals, we can change the component signal’s gain by multiplying it by a constant and by delaying it. More complicated decompositions could contain derivatives or integrals of simple signals.
Example 2.2
As an example of signal complexity, we can express the pulse p∆ (t) as a sum of delayed unit steps.
    p∆ (t) = u(t) − u(t − ∆)    (2.25)
Thus, the pulse is a more complex signal than the step. Be that as it may, the pulse is very useful to us.
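A quick numerical confirmation of (2.25), added here as an illustration; the pulse width and time grid are arbitrary:

```python
import numpy as np

def u(t):
    """Unit step: 0 for t < 0 and 1 for t > 0 (its value exactly at the origin doesn't matter)."""
    return (t > 0).astype(float)

delta = 0.5                                  # pulse duration in seconds
t = np.linspace(-1.0, 2.0, 2001)

pulse = u(t) - u(t - delta)                  # equation (2.25): p_delta(t) = u(t) - u(t - delta)
print(np.allclose(pulse, ((t > 0) & (t <= delta)).astype(float)))   # True: a unit-amplitude pulse of width delta
```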
    Exercise 2.4    (Solution on p. 30.)
Express a square wave having period T and amplitude A as a superposition of delayed and amplitude-scaled pulses.
Because the sinusoid is a superposition of two complex exponentials, the sinusoid is more complex. We could not prevent ourselves from the pun in this statement. Clearly, the word “complex” is used in two different ways here. The complex exponential can also be written (using Euler’s relation (2.16)) as a sum of a sine and a cosine. We will discover that virtually every signal can be decomposed into a sum of complex exponentials, and that this decomposition is very useful. Thus, the complex exponential is more fundamental, and Euler’s relation does not adequately reveal its complexity.
2.4 Discrete-Time Signals 
So far, we have treated what are known as analog signals and systems. Mathematically, analog signals are functions having continuous quantities as their independent variables, such as space and time. Discrete-time signals are functions defined on the integers; they are sequences. One of the fundamental results of signal theory details the conditions under which an analog signal can be converted into a discrete-time one and retrieved without error. This result is important because discrete-time signals can be manipulated by systems instantiated as computer programs. Subsequent modules describe how virtually all analog signal processing can be performed with software.
Discrete-time systems can act on discrete-time signals in ways similar to those found in analog signals and systems. Because of the role of software in discrete-time systems, many more different systems can be envisioned and “constructed” with programs than can be with analog signals. Consequently, discrete-time systems can be easily produced in software, with equivalent analog realizations difficult, if not impossible, to design.
As important as linking analog signals to discrete-time ones may be, discrete-time signals are more general, encompassing signals derived from analog ones and signals that aren’t. For example, the characters forming a text file form a sequence, which is also a discrete-time signal. We must deal with such symbolic valued (p. 153) signals and systems as well.
As with analog signals, we seek ways of decomposing real-valued discrete-time signals into simpler components. With this approach leading to a better understanding of signal structure, we can exploit that structure to represent information (create ways of representing information with signals) and to extract information (retrieve the information thus represented). For symbolic-valued signals, the approach is different: We develop a common representation of all symbolic-valued signals so that we can embody the information they contain in a unified way. From an information representation perspective, the most important issue becomes, for both real-valued and symbolic-valued signals, efficiency: What is the most parsimonious and compact way to represent information so that it can be extracted later?
2.4.1 Real- and Complex-valued Signals
A discrete-time signal is represented symbolically as s(n), where n ∈ {..., −1, 0, 1, ...}. We usually draw discrete-time signals as stem plots to emphasize the fact that they are functions defined only on the integers. We can delay a discrete-time signal by an integer just as with analog ones. A delayed unit sample has the expression δ (n − m), and equals one when n = m.
 
Figure 2.7: The discrete-time cosine signal is plotted as a stem plot. Can you find the formula for this signal?
2.4.2 Complex Exponentials
The most important signal is, of course, the complex exponential sequence.
    s(n) = e^{j2πfn}    (2.26)
2.4.3 Sinusoids
Discrete-time sinusoids have the obvious form s(n) = Acos(2πfn + φ). As opposed to analog complex exponentials and sinusoids that can have their frequencies be any real value, frequencies of their discrete-time counterparts yield unique waveforms only when f lies in the interval (−1/2, 1/2]. This property can be easily understood by noting that adding an integer to the frequency of the discrete-time complex exponential has no effect on the signal’s value.
    e^{j2π(f+m)n} = e^{j2πfn} e^{j2πmn} = e^{j2πfn}    (2.27)
This derivation follows because the complex exponential evaluated at an integer multiple of 2π equals one.
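The following brief Python check (ours, not from the text) illustrates Equation (2.27) numerically: shifting the frequency of a discrete-time complex exponential by an integer leaves every sample unchanged. The frequency value 0.2 is an arbitrary choice.

import numpy as np

n = np.arange(0, 8)                          # a few integer sample indices
f = 0.2                                      # any frequency in (-1/2, 1/2]
s1 = np.exp(1j * 2 * np.pi * f * n)
s2 = np.exp(1j * 2 * np.pi * (f + 1) * n)    # same frequency plus an integer
print(np.allclose(s1, s2))                   # True: identical sequences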
2.4.4 Unit Sample
The second-most important discrete-time signal is the unit sample, which is defined to be
    δ(n) = { 1, n = 0
           { 0, otherwise    (2.28)
 
Figure 2.8: The unit sample.
Examination of a discrete-time signal’s plot, like that of the cosine signal shown in Figure 2.7, reveals that all discrete-time signals consist of a sequence of delayed and scaled unit samples. Because the value of a sequence at each integer m is denoted by s(m) and the unit sample delayed to occur at m is written δ (n − m), we can decompose any signal as a sum of unit samples delayed to the appropriate location and scaled by the signal value.

    s(n) = Σ_{m=−∞}^{∞} s(m) δ(n − m)    (2.29)
This kind of decomposition is unique to discrete-time signals, and will prove useful subsequently.
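As a concrete illustration (ours, not part of the original text), the Python fragment below rebuilds a short cosine sequence from delayed, scaled unit samples exactly as Equation (2.29) prescribes; the helper unit_sample is invented here for the demonstration.

import numpy as np

def unit_sample(n):
    # delta(n): one at n = 0, zero elsewhere
    return np.where(n == 0, 1.0, 0.0)

n = np.arange(-5, 6)
s = np.cos(2 * np.pi * 0.1 * n)                        # an example sequence
rebuilt = sum(s[i] * unit_sample(n - m) for i, m in enumerate(n))
print(np.allclose(s, rebuilt))                         # True: the sum reproduces s(n)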
2.4.5 Symbolic-valued Signals
Another interesting aspect of discrete-time signals is that their values do not need to be real numbers. We do have real-valued discrete-time signals like the sinusoid, but we also have signals that denote the sequence of characters typed on the keyboard. Such characters certainly aren’t real numbers, and as a collection of possible signal values, they have little mathematical structure other than that they are members of a set. More formally, each element of the symbolic-valued signal s(n) takes on one of the values {a1,...,aK} which comprise the alphabet A. This technical terminology does not mean we restrict symbols to being members of the English or Greek alphabet. They could represent keyboard characters, bytes (8-bit quantities), or integers that convey daily temperature. Whether controlled by software or not, discrete-time systems are ultimately constructed from digital circuits, which consist entirely of analog circuit elements. Furthermore, the transmission and reception of discrete-time signals, like e-mail, is accomplished with analog signals and systems. Understanding how discrete-time and analog signals and systems intertwine is perhaps the main goal of this course.
2.5 Introduction to Systems 
Signals are manipulated by systems. Mathematically, we represent what a system does by the notation y (t) = S [x(t)], with x representing the input signal and y the output signal.
 
Figure 2.9: The system depicted has input x(t) and output y (t). Mathematically, systems operate on function(s) to produce other function(s). In many ways, systems are like functions, rules that yield a value for the dependent variable (our output signal) for each value of its independent variable (its input signal). The notation y (t) = S [x(t)] corresponds to this block diagram. We term S [·] the input-output relation for the system.
This notation mimics the mathematical symbology of a function: A system’s input is analogous to an independent variable and its output the dependent variable. For the mathematically inclined, a system is a functional: a function of a function (signals are functions).
Simple systems can be connected together–one system’s output becomes another’s input–to accomplish some overall design. Interconnection topologies can be quite complicated, but usually consist of weaves of three basic interconnection forms.
2.5.1 Cascade Interconnection
 
Figure 2.10: Interconnecting systems so that one system’s output serves as the input to another is the cascade configuration.
The simplest form is when one system’s output is connected only to another’s input. Mathematically, w(t) = S1 [x(t)], and y (t) = S2 [w(t)], with the information contained in x(t) processed by the first, then the second system. In some cases, the ordering of the systems matters; in others it does not. For example, in the fundamental model of communication (Figure 1.3) the ordering most certainly matters.
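A cascade is simply function composition, and the tiny Python sketch below (ours, not from the text; the particular systems S1 and S2 are arbitrary examples) makes that explicit.

def S1(x):
    # an example system: an amplifier with gain 3
    return lambda t: 3 * x(t)

def S2(x):
    # another example system: a delay of 0.5
    return lambda t: x(t - 0.5)

def cascade(x):
    # w = S1[x], then y = S2[w]
    return S2(S1(x))

x = lambda t: t ** 2          # an arbitrary input signal
y = cascade(x)
print(y(2.0))                 # 3 * (2.0 - 0.5) ** 2 = 6.75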
2.5.2 Parallel Interconnection
 
Figure 2.11: The parallel configuration.
A signal x(t) is routed to two (or more) systems, with this signal appearing as the input to all systems simultaneously and with equal strength. Block diagrams have the convention that signals going to more than one system are not split into pieces along the way. Two or more systems operate on x(t) and their outputs are added together to create the output y (t). Thus, y (t) = S1 [x(t)] + S2 [x(t)], and the information in x(t) is processed separately by both systems.
2.5.3 Feedback Interconnection
 
Figure 2.12: The feedback configuration.
The subtlest interconnection configuration has a system’s output also contributing to its input. Engineers would say the output is “fed back” to the input through system 2, hence the terminology. The mathematical statement of the feedback interconnection (Figure 2.12) is that the feed-forward system produces the output y (t) = S1 [e(t)]. The input e(t) equals the input signal minus the output of some other system whose input is y (t): e(t) = x(t) − S2 [y (t)]. Feedback systems are omnipresent in control problems, with the error signal used to adjust the output to achieve some condition defined by the input (controlling) signal. For example, in a car’s cruise control system, x(t) is a constant representing what speed you want, and y (t) is the car’s speed as measured by a speedometer. In this application, system 2 is the identity system (output equals input).
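To get a feel for how feedback drives the error toward zero, here is a deliberately crude discrete-time sketch in Python (ours, not from the text). Choosing S1 as a scaled accumulator and S2 as the identity is our own simplification, loosely mimicking the cruise-control description; the gain value is arbitrary.

x = 60.0                      # desired speed (constant input signal)
y = 0.0                       # measured output, starting at rest
gain = 0.2                    # how strongly S1 responds to the error
for step in range(30):
    e = x - y                 # e = x - S2[y], with S2 the identity
    y = y + gain * e          # S1: accumulate a fraction of the error
print(round(y, 2))            # approaches 60.0: the output tracks the input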
2.6 Simple Systems 
Systems manipulate signals, creating output signals derived from their inputs. Why the following are categorized as “simple” will only become evident towards the end of the course.
2.6.1 Sources
Sources produce signals without having input. We like to think of these as having controllable parameters, like amplitude and frequency. Examples would be oscillators that produce periodic signals like sinusoids and square waves and noise generators that yield signals with erratic waveforms (more about noise subsequently). Simply writing an expression for the signals they produce specifies sources. A sine wave generator might be specified by y (t) = Asin(2πf0t)u(t), which says that the source was turned on at t = 0 to produce a sinusoid of amplitude A and frequency f0.
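As a small illustration (ours, not part of the text), the Python fragment below implements such a source specification, y(t) = A sin(2πf0 t) u(t); the function name sine_source and the parameter values are made up for the example.

import numpy as np

def sine_source(t, A=1.0, f0=2.0):
    # a sinusoid of amplitude A and frequency f0, turned on at t = 0
    return A * np.sin(2 * np.pi * f0 * t) * (t >= 0)

print(sine_source(np.array([-0.1, 0.125, 0.5])))   # zero before t = 0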
2.6.2 Amplifiers
An amplifier (Figure 2.13) multiplies its input by a constant known as the amplifier gain.
    y (t) = Gx(t)    (2.30)
 
Figure 2.13: An amplifier.
The gain can be positive or negative (if negative, we would say that the amplifier inverts its input) and can be greater than one or less than one. If less than one, the amplifier actually attenuates. A real-world example of an amplifier is your home stereo. You control the gain by turning the volume control.
2.6.3 Delay
A system serves as a time delay (Figure 2.14) when the output signal equals the input signal at an earlier time.
    y (t) = x(t − τ)    (2.31)
 
Figure 2.14: A delay.
Here, τ is the delay. The way to understand this system is to focus on the time origin: The output at time t = τ equals the input at time t = 0. Thus, if the delay is positive, the output emerges later than the input, and plotting the output amounts to shifting the input plot to the right. The delay can be negative, in which case we say the system advances its input. Such systems are difficult to build (they would have to produce signal values derived from what the input will be), but we will have occasion to advance signals in time.
2.6.4 Time Reversal
With a time-reversal system, the output signal equals the input signal flipped about the vertical axis (the time origin).
    y (t) = x(−t)    (2.32)
 
Figure 2.15: A time reversal system.
Again, such systems are difficult to build, but the notion of time reversal occurs frequently in communications systems.
    Exercise 2.5    (Solution on p. 30.)
Mentioned earlier was the issue of whether the ordering of systems mattered. In other words, if we have two systems in cascade, does the output depend on which comes first? Determine if the ordering matters for the cascade of an amplifier and a delay and for the cascade of a time-reversal system and a delay.
2.6.5 Derivative Systems and Integrators
Systems that perform calculus-like operations on their inputs can produce waveforms significantly different than present in the input. Derivative systems operate in a straightforward way: A first-derivative system would have the input-output relationship y (t) = dx(t)/dt. Integral systems have the complication that the integral’s limits must be defined. It is a signal theory convention that the elementary integral operation have a lower limit of −∞, and that the value of all signals at t = −∞ equals zero. A simple integrator would have input-output relation
    y (t) = ∫_{−∞}^{t} x(α) dα    (2.33)
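On sampled signals these operations become a finite difference and a running sum, as in the Python sketch below (ours, not from the text); the input t² and the step size are arbitrary choices for the demonstration.

import numpy as np

dt = 0.001
t = np.arange(0.0, 1.0, dt)
x = t ** 2                                  # example input signal (zero for t < 0)

deriv = np.diff(x) / dt                     # approximates dx/dt = 2t
integ = np.cumsum(x) * dt                   # approximates the running integral t**3 / 3

print(round(deriv[500], 2), round(integ[-1], 3))   # about 1.0 and about 0.333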
2.6.6 Linear Systems
Linear systems are a class of systems rather than having a specific input-output relation. Linear systems form the foundation of system theory, and are the most important class of systems in communications. They have the property that when the input is expressed as a weighted sum of component signals, the output equals the same weighted sum of the outputs produced by each component. When S [·] is linear,
    S [G1x1 (t) + G2x2 (t)] = G1S [x1 (t)] + G2S [x2 (t)]    (2.34)
for all choices of signals and gains.
This general input-output relation property can be manipulated to indicate specific properties shared by all linear systems.
•    S [Gx(t)] = GS [x(t)] The colloquialism summarizing this property is “Double the input, you double the output.” Note that this property is consistent with alternate ways of expressing gain changes: Since 2x(t) also equals x(t) + x(t), the linear system definition provides the same output no matter which of these is used to express a given signal.
•    S [0] = 0 If the input is identically zero for all time, the output of a linear system must be zero. This property follows from the simple derivation S [0] = S [x(t) − x(t)] = S [x(t)] − S [x(t)] = 0.
Just why linear systems are so important is related not only to their properties, which are divulged throughout this course, but also because they lend themselves to relatively simple mathematical analysis. Said another way, “They’re the only systems we thoroughly understand!”
We can find the output of any linear system to a complicated input by decomposing the input into simple signals. The equation above (2.34) says that when a system is linear, its output to a decomposed input is the sum of outputs to each input. For example, if
x(t) = e^{−t} + sin(2πf0t)
the output S [x(t)] of any linear system equals
    y (t) = S [e^{−t}] + S [sin(2πf0t)]
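The following short Python check (ours, not from the text) verifies this superposition numerically for one particular linear system, a pure delay chosen only as an example.

import numpy as np

def S(x):
    # an example linear system: a delay by 0.3
    return lambda t: x(t - 0.3)

f0 = 1.0
x1 = lambda t: np.exp(-t)
x2 = lambda t: np.sin(2 * np.pi * f0 * t)
x = lambda t: x1(t) + x2(t)

t = np.linspace(0.0, 2.0, 50)
print(np.allclose(S(x)(t), S(x1)(t) + S(x2)(t)))   # True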
2.6.7 Time-Invariant Systems
Systems that don’t change their input-output relation with time are said to be time-invariant. The mathematical way of stating this property is to use the signal delay concept described in Section 2.6.3.
    y (t) = S [x(t)] =⇒ y (t − τ) = S [x(t − τ)]    (2.35)
If you delay (or advance) the input, the output is similarly delayed (advanced). Thus, a time-invariant system responds to an input you may supply tomorrow the same way it responds to the same input applied today; today’s output is merely delayed to occur tomorrow.
The collection of linear, time-invariant systems are the most thoroughly understood systems. Much of the signal processing and system theory discussed here concentrates on such systems. For example, electric circuits are, for the most part, linear and time-invariant. Nonlinear ones abound, but characterizing them so that you can predict their behavior for any input remains an unsolved problem.
Input-Output Relation    Linear    Time-Invariant
 
y (t) = x(t − τ)    yes    yes
    yes    yes
    no    yes
    yes    yes
    yes    yes
    yes    yes
y (t) = cos(2πft)x(t)    yes    no
y (t) = x(−t)    yes    no
y (t) = x2 (t)    no    yes
y (t) = |x(t)|    no    yes
y (t) = mx(t) + b    no    yes
Table 2.1
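Entries like these can be probed numerically. The Python sketch below (ours, not part of the text) tests two of the table’s rows on arbitrary test signals: squaring fails the linearity check, and multiplying by cos(2πft) fails the time-invariance check.

import numpy as np

t = np.linspace(0.0, 1.0, 200)
x = np.sin(2 * np.pi * 3 * t)                        # an arbitrary test signal

# Linearity probe: does doubling the input double the output?
square = lambda v: v ** 2
print(np.allclose(square(2 * x), 2 * square(x)))     # False: y(t) = x^2(t) is not linear

# Time-invariance probe: delay the input, or delay the output; compare.
f = 5.0
modulate = lambda v: np.cos(2 * np.pi * f * t) * v
shift = lambda v, k: np.concatenate((np.zeros(k), v[:-k]))   # delay by k samples
k = 20
print(np.allclose(modulate(shift(x, k)), shift(modulate(x), k)))   # False: not time-invariant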
2.7 Signals and Systems Problems 
Problem 2.1: Complex Number Arithmetic
Find the real part, imaginary part, the magnitude and angle of the complex numbers given by the following expressions.
 
Problem 2.2: Discovering Roots
Complex numbers expose all the roots of real (and complex) numbers. For example, there should be two square-roots, three cube-roots, etc. of any number. Find the following roots.
(a)    What are the cube-roots of 27? In other words, what is 27^{1/3}?
(b)    What are the fifth roots of 3 (3^{1/5})?
(c)    What are the fourth roots of one?
Problem 2.3: Cool Exponentials
Simplify the following (cool) expressions.
(a)    j^j
(b)    j^{2j}
(c)    j^{j^j}
Problem 2.4: Complex-valued Signals
Complex numbers and phasors play a very important role in electrical engineering. Solving systems for complex exponentials is much easier than for sinusoids, and linear systems analysis is particularly easy.
(a)    Express each as a sum of complex exponentials. Also, re-express each as the real and imaginary parts of a complex exponential. What is the frequency (in Hz) of each? In general, are your answers unique? If so, prove it; if not, find an alternative answer for the complex exponential representation. i) 3sin(24t)
ii)  iii)  
(b)    Show that, for linear systems having real-valued outputs for real inputs, when the input is the real part of a complex exponential, the output is the real part of the system’s output to the complex exponential (see Figure 2.16).
 
 
Figure 2.16
Problem 2.5:
Express each of the indicated voltages as the real part of a complex exponential: v (t) = Re[V e^{st}]. Explicitly indicate the value of the complex amplitude V and the complex frequency s. Represent each complex amplitude as a vector in the V -plane, and indicate the location of the frequencies in the complex s-plane.
(a)    v (t) = cos(5t)
Problem 2.6:
Express each of the depicted signals (Figure 2.17) as a linear combination of delayed and weighted step functions and ramps (the integral of a step).
 
Figure 2.17 (panels (a)–(e))
Problem 2.7: Linear, Time-Invariant Systems
When the input to a linear, time-invariant system is the signal x(t), the output is the signal y (t) (Figure 2.18).
(a)    Find and sketch this system’s output when the input is the depicted signal (Figure 2.19).
(b)    Find and sketch this system’s output when the input is a unit step.
 
Figure 2.18
 
Figure 2.19
 
Figure 2.20 (panels (a) and (b))
 
Figure 2.21
 
Problem 2.8: Linear Systems
The depicted input (Figure 2.20a) x(t) to a linear, time-invariant system yields the output y (t).
(a)    What is the system’s output to a unit step input u(t)?
(b)    What will the output be when the input is the depicted square wave (Figure 2.20b)?
Problem 2.9: Communication Channel
A particularly interesting communication channel can be modeled as a linear, time-invariant system. When the transmitted signal x(t) is a pulse, the received signal r(t) is as shown in Figure 2.21.
(a)    What will be the received signal when the transmitter sends the pulse sequence x1 (t) shown at the top of Figure 2.22?
(b)    What will be the received signal when the transmitter sends the pulse signal x2 (t) shown at the bottom of Figure 2.22 that has half the duration of the original?
 
Figure 2.22
 
Problem 2.10: Analog Computers
So-called analog computers use circuits to solve mathematical problems, particularly when they involve differential equations. Suppose we are given the following differential equation to solve.
 
In this equation, a is a constant.
(a)    When the input is a unit step (x(t) = u(t)), the output is given by y (t) = (1 − e−at)u(t). What is the total energy expended by the input?
(b)    Instead of a unit step, suppose the input is a unit pulse (unit-amplitude, unit-duration) delivered to the circuit at time t = 10. What is the output voltage in this case? Sketch the waveform.
Solutions to Exercises in Chapter 2
Solution to Exercise 2.1 (p. 12) z + z∗ = a + jb + a − jb = 2a = 2Re[z]. Similarly, z − z∗ = a + jb − (a − jb) = 2jb = 2jIm[z]
Solution to Exercise 2.2 (p. 12)
To convert 3 − 2j to polar form, we first locate the number in the complex plane in the fourth quadrant. The distance from the origin to the complex number is the magnitude r, which equals √13 = √(3² + (−2)²). The angle equals −arctan(2/3) radians (−33.7 degrees). The final answer is √13∠(−33.7 degrees).
Solution to Exercise 2.3 (p. 13)
zz∗ = (a + jb)(a − jb) = a² + b². Thus, zz∗ = r² = |z|².
Solution to Exercise 2.4 (p. 18)
sq(t) = Σ_{n=−∞}^{∞} (−1)^n A p_{T/2}(t − nT/2)
Solution to Exercise 2.5 (p. 23)
In the first case, order does not matter; in the second it does. “Delay” means t → t − τ. “Time-reverse” means t → −t
Case 1 y (t) = Gx(t − τ), and the way we apply the gain and delay the signal gives the same result.
Case 2 Time-reverse then delay: y (t) = x(−(t − τ)) = x(−t + τ). Delay then time-reverse: y (t) = x(−t − τ).



 

Chapter 3  

3.2 Ideal Circuit Elements

The elementary circuit elements—the resistor, capacitor, and inductor—impose linear relationships between voltage and current.
