NEHRU ARTS AND SCIENCE COLLEGE
PRINCIPLES OF COMMUNICATION SYSTEM

UNIT - I (PART-A)
1) Ultrasonic testing is based on time-varying deformations or vibrations in materials.
2) Many different patterns of vibrational motion exist at the atomic level.
3) Sound waves can propagate in four principal modes that are based on the way the particles oscillate.
4) In longitudinal waves, the oscillations occur in the longitudinal direction, i.e. the direction of wave propagation.
5) Shear waves require an acoustically solid material for effective propagation.

(PART-B)
6) Describe EM waves?
Wave Propagation
Ultrasonic testing is based on time-varying deformations or vibrations in materials, which is generally referred to as acoustics. All material substances are comprised of atoms, which may be forced into vibrational motion about their equilibrium positions. Many different patterns of vibrational motion exist at the atomic level; however, most are irrelevant to acoustics and ultrasonic testing. Acoustics is focused on particles that contain many atoms that move in unison to produce a mechanical wave. When a material is not stressed in tension or compression beyond its elastic limit, its individual particles perform elastic oscillations. When the particles of a medium are displaced from their equilibrium positions, internal (electrostatic) restoration forces arise. It is these elastic restoring forces between particles, combined with the inertia of the particles, that lead to the oscillatory motions of the medium.

7) Write a short note on free space propagation?
Sound waves can propagate in four principal modes that are based on the way the particles oscillate. Sound can propagate as longitudinal waves, shear waves, surface waves, and in thin materials as plate waves. Longitudinal and shear waves are the two modes of propagation most widely used in ultrasonic testing. The particle movement responsible for the propagation of longitudinal and shear waves is illustrated below.

8) Write a short note on surface wave propagation?
In communication systems where antennas are used to transfer information, the environment between (and around) the transmitter and receiver has a major influence on the quality of the transferred signal. Buildings are the main source of attenuation, but vegetation elements such as trees and large bushes can also have some reducing effect on the propagated radio signal. In the case of attenuation by trees and bushes, the incident electromagnetic field interacts mainly with the leaves and the branches. The trunk of course also has some influence on the attenuation, but since the volume occupied by the trunk is much smaller than the total volume of a tree, these effects can be considered negligible. In the case of wave propagation between antennas that are located at height, i.e. on rooftops, in principle only the upper part of the tree crown affects the attenuation. Since one of the fundamental assumptions in this thesis is communication between fixed antennas at height, the attenuation effects from the trunks will thus be neglected in the vegetation models.

9) Describe sky wave propagation?
The attenuation due to vegetation is also very sensitive to the wavelength. Since the interaction between the tree and the electromagnetic field is mainly due to leaves and branches, the size and shape of these are important.
For low frequencies, when the wavelength is much larger than the scattering body, leaves and branches interact only weakly with the electromagnetic field, which means that surface irregularities have no, or only minor, influence on the attenuation. The incident field will have approximately the same magnitude over the whole body, so the body experiences the incident field as uniform. Since the vegetation element is exposed to an electric field, an internal electric field is induced. This gives rise to secondary radiation, and since the wavelength is much larger than the scattering body, the emitted radiation is spread out and forms a radiation pattern close to that of a dipole antenna. When the wavelength is decreased, the losses increase due to a larger interaction between the incident field and the vegetation elements. This proceeds until the wavelength approaches the same size as the scattering body and thus enters the resonance region. Here the absorption and scattering values fluctuate strongly, and the attenuation becomes irregular and very frequency dependent. The size and shape of the body is the main reason why this happens. The incident electric field induces an internal electric field that takes different values at different parts of the scattering body (these values are of course time dependent), since the wavelength is no longer much larger than the size of the body. These different parts work as scatterers and will thus emit secondary radiation. The radiation from the different emitters interferes, so specific directions predominate and radiation lobes are formed. When the frequency is increased further, the effects of the resonance gradually decay, which leads to a more predictable behavior. The attenuation of the leaves and branches increases with increasing frequency. When the wavelength is much less than the scattering body, no resonance effects occur and the attenuation will be purely exponential. The number of scatterers in the scattering body will of course increase, which leads to an increase in the number of radiation lobes. For very high frequencies the main lobes become very narrow and thus form radiation beams. This means that the intensity in the lobes whose directions correspond to the beam directions is much higher, differing by many orders of magnitude from that of the other lobes. The fundamental principles behind the interaction between the incident field and the scattering elements are very complicated and will therefore not be discussed here. It should be mentioned, though, that among the factors contributing to the losses are the reorientation of the permanent dipole moments in the liquid by the incident field and the currents it induces in the medium. The induced currents can arise from the charges in the saline water that the organic components contain.

10) Describe sky wave propagation?
We have so far discussed the interaction in general between the incident electromagnetic field and the vegetation elements at different frequencies. From the discussion we find that three types of interaction exist, for which approximations can be made. In the case of low frequencies we are dealing with Rayleigh scattering (long-wave approximations), and in the case of high frequencies physical optics or geometric optics (short-wave approximations) are considered. In the resonance region there is no simple approximation, which makes the electromagnetic problems difficult to solve.
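The three regimes just described can be made concrete with the dimensionless size parameter x = 2πa/λ (circumference of the scatterer over wavelength). A minimal sketch; the thresholds are rough conventions, assumed here for illustration:

import math

C = 3e8  # speed of light [m/s]

def scattering_regime(freq_hz, scatterer_size_m):
    """Rough regime classification via the size parameter x = 2*pi*a/lambda."""
    lam = C / freq_hz
    x = 2 * math.pi * scatterer_size_m / lam
    if x < 0.1:
        return lam, x, "Rayleigh (long-wave approximation)"
    if x > 10:
        return lam, x, "optical (short-wave approximation)"
    return lam, x, "resonance (no simple approximation)"

# A leaf a few centimetres across at the two study frequencies:
for f in (3.1e9, 5.8e9):
    lam, x, regime = scattering_regime(f, 0.05)
    print(f"{f/1e9:.1f} GHz: lambda = {lam*100:.1f} cm, x = {x:.1f} -> {regime}")

Running this reproduces the wavelengths quoted later in the text (about 9.7 cm and 5.2 cm) and confirms that centimetre-scale leaves fall in the resonance region at both frequencies.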
If the electric properties of the scattering body can be considered weak, Born or Rytov approximations can be used to simplify the calculations. In this case the internal fields inside the scattering body are approximated by the incident field, which makes it possible to treat cases where resonance occurs.

(PART-C)
11) Explain the space wave propagation?
We have so far discussed the interaction in general between the incident electromagnetic field and the vegetation elements at different frequencies. From the discussion we find that three types of interaction exist, for which approximations can be made. In the case of low frequencies we are dealing with Rayleigh scattering (long-wave approximations), and in the case of high frequencies physical optics or geometric optics (short-wave approximations) are considered. In the resonance region there is no simple approximation, which makes the electromagnetic problems difficult to solve. If the electric properties of the scattering body can be considered weak, Born or Rytov approximations can be used to simplify the calculations. In this case the internal fields inside the scattering body are approximated by the incident field, which makes it possible to treat cases where resonance occurs. In the common microwave propagation models used today, the wavelength is often assumed to be either small or large in comparison to the scatterers; thus Rayleigh scattering or physical optics is considered. But when the wavelength of the transmitted field approaches the size of the leaves and branches, resonance effects occur, and these models generate incorrect results. The purpose of this work is to study the vegetation attenuation and scattering at 3.1 GHz and 5.8 GHz. Since the wavelengths of the transmitted fields are about the same size as the leaves and branches (λ = 9.7 cm and λ = 5.2 cm), resonance effects occur. Since the common models cannot be used, the wave propagation through the canopy must be analyzed in detail, which leads to an improved model for the attenuation. The attenuation model is based on the total cross section of a leaf and a branch. A computer program based on the T-matrix theory performs the computations of the total cross section. The results from the simulations of the improved attenuation model will finally be compared with measurements that have been made on a large test beech.

2 Basic relationships
This section gives a brief introduction to the theory of microwave propagation. We will start with the fundamental equations, i.e. Maxwell's equations, and from these derive the vector differential equation called the Helmholtz equation. This equation can be used to explain and predict how the fields propagate. Furthermore, concepts like attenuation and average power density will also be treated.

2.1 Maxwell's field equations
For a medium characterized by a source density ρ, the electromagnetic fields satisfy Maxwell's equations. The vector fields in the equations are:
E electric field strength [V/m]
H magnetic field strength [A/m]
D electric flux density [As/m²]
B magnetic flux density [Vs/m²]
J current density [A/m²]
In a source-free medium the divergence of the electric flux density is zero, ∇·D = 0. This means that Eq. (2.16) can be simplified, and the result is the source-free (Helmholtz) wave equation ∇²E + k²E = 0.
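As a sketch of the derivation referred to above (standard steps, assuming a homogeneous, source-free medium and time-harmonic fields varying as e^{jωt}; this is a reconstruction, not the thesis's own equation numbering):

\nabla \times \mathbf{E} = -j\omega\mu\,\mathbf{H}, \qquad \nabla \times \mathbf{H} = j\omega\varepsilon\,\mathbf{E}

Taking the curl of the first equation and inserting the second:

\nabla \times (\nabla \times \mathbf{E}) = \nabla(\nabla \cdot \mathbf{E}) - \nabla^{2}\mathbf{E} = \omega^{2}\mu\varepsilon\,\mathbf{E}

With \nabla \cdot \mathbf{E} = 0 in the source-free medium, this reduces to the Helmholtz equation:

\nabla^{2}\mathbf{E} + k^{2}\mathbf{E} = \mathbf{0}, \qquad k = \omega\sqrt{\mu\varepsilon}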
Since the different models have some limitations, it is important to investigate under which circumstances the models can be used. The weaknesses and strengths of the different models will be elucidated, showing which parts are useful and which parts have to be improved. Furthermore, data from earlier measurements will also be presented. This data is extremely valuable to us since it increases our understanding of how electromagnetic radiation is affected by vegetation. It also works as complementary information to the results of our own measurements.

Leaf model
Effective dielectric properties are modeled by dielectric mixing theory. In the case of vegetation elements, the components are liquid water with a high permittivity, organic material with moderate to low permittivity, and air with unit permittivity. For such highly contrasting permittivities and large volume fractions, physical mixing theory has so far failed. In an attempt to overcome this problem, Ulaby and El-Rayes [6] assumed linear, i.e. empirical, relationships between the permittivity and the volume fractions of the different components. Dielectric measurements by Ulaby and El-Rayes indicate that the dielectric properties of vegetation can be modelled by representing vegetation as a mixture of saline water, bound water and dry vegetation. They derived a semi-empirical formula [6] from measurements at frequencies between 1 and 20 GHz on corn leaves with relatively high dry-matter contents. The extrapolation of the formula to higher frequencies and lower dry-matter contents leads to incorrect values, as shown by Mätzler and Sume [2]. From the data used in [6], and their own data at frequencies up to 94 GHz, they developed an improved semi-empirical formula to calculate the dielectric constant of leaves. High and low dry-matter contents were included. Mätzler combined the data of Ulaby and El-Rayes [6], El-Rayes and Ulaby [9] and of Mätzler and Sume [2] and derived a new dielectric formula [1]

ε_leaf = 0.522 (1 − 1.32 m_d) ε_sw + 0.51 + 3.84 m_d

which is valid over the frequency range from 1 to 100 GHz. The formula is applicable to fresh leaves with m_d values in the range 0.1 ≤ m_d ≤ 0.5. Here ε_sw is the dielectric permittivity of saline water according to the Debye model, and m_d is the dry-matter fraction of leaves given by

m_d = dry mass / fresh mass

3.2 Canopy opacity model
Wegmüller, Mätzler and Njoku [4] used the radiative transfer model described by Kerr and Njoku [7] as a reference point for studying the vegetation attenuation and emission. The transfer model is a model for spaceborne observations of semi-arid land surfaces, and it is based on the concept of temperature instead of the concept of electric and magnetic fields. This means that instead of analyzing how the magnitude of the electric and magnetic fields is distributed among the different components, one analyzes how the energy is distributed in terms of temperature. Every component of the system (the land surface, air, leaves, branches, etc.) is considered an object that emits, reflects and absorbs thermal radiation. For example, the soil-surface emission attenuated through the canopy and atmosphere is given by

T = (1 − r_sp) T_s e^−(τ_p + τ_a) (3.1)

where r_sp is the reflectivity of the soil surface and T_s is the temperature of the soil. The opacities of the atmosphere and the canopy are denoted by τ_a and τ_p, where the polarization is denoted by p. The transfer model incorporates models of vegetation attenuation and emission that are valid at low frequencies only. In the canopy opacity expression, Eq. (3.2), the vegetation water content enters together with the observation angle relative to nadir.
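Returning to the leaf dielectric formula above, a minimal numerical sketch (the A_p discussion continues below). The Debye parameters for saline water used here (eps_s, eps_inf, tau, sigma) are illustrative assumptions near room temperature, not values from the text:

import math

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def eps_saline_water(f, eps_s=75.0, eps_inf=4.9, tau=9.4e-12, sigma=1.0):
    """Single-relaxation Debye model with an ionic conductivity term (assumed parameters)."""
    w = 2 * math.pi * f
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * w * tau) - 1j * sigma / (w * EPS0)

def eps_leaf(f, md):
    """Maetzler's leaf formula, valid for 0.1 <= md <= 0.5 and 1-100 GHz."""
    esw = eps_saline_water(f)
    return 0.522 * (1 - 1.32 * md) * esw + 0.51 + 3.84 * md

for f in (3.1e9, 5.8e9):  # the two study frequencies
    e = eps_leaf(f, md=0.3)
    print(f"{f/1e9:.1f} GHz: eps_leaf = {e.real:.1f} - j{-e.imag:.1f}")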
The coefficient A_p depends on the canopy geometry. Originally, as introduced by Kirdyashev et al., the A_p appearing in Eq. (3.2) was a theoretically derived geometrical parameter. However, there is no simple way of deriving this parameter for actual vegetation such as grasses, trees or crops, and the assumptions used in deriving Eq. (3.2) become invalid at higher frequencies. Hence, for comparing with satellite data, Kerr and Njoku [7] used Eq. (3.2) as an empirical formula and determined the parameter A_p individually for each frequency and canopy type. Mätzler et al. [4] examined the theoretical origin of Eq. (3.2), which is based on the effective medium theory, and showed that a more accurate frequency dependence can be obtained by considering geometric optics theory.

10) Explain the space wave propagation?
In this section we present a brief overview of some of the established models used today. Since the different models have some limitations, it is important to investigate under which circumstances the models can be used. The weaknesses and strengths of the different models will be elucidated, showing which parts are useful and which parts have to be improved. Furthermore, data from earlier measurements will also be presented. This data is extremely valuable to us since it increases our understanding of how electromagnetic radiation is affected by vegetation. It also works as complementary information to the results of our own measurements.

(PART-C)
11) Briefly explain the tropospheric scatter propagation?
It is important to note that the relative dielectric constants of the leaves and branches are frequency dependent [1]. In the analysis, constant values for the permittivities of the leaves and the branches have been assumed, because the permittivities of the leaves and the branches do not change much between 800 MHz and 2000 MHz.

Microwave transmissivity of a forest canopy
Microwave measurements have been carried out by Mätzler [3] of the microwave transmissivities and opacities of the crown of a beech (Fagus sylvatica L.). The technique used for the measurements corresponds to the one explained in section 3.2. To avoid any prejudice about the type of microwave propagation model, Mätzler limits the physical interpretation to obvious facts and to consistency tests of the multivariate dataset. The main instruments used in the study are the five microwave radiometers of the PAMIR system. The transmitted power was recorded during a whole year. In this way it has been possible to get an impression of how much the attenuation is affected by the leaves alone, since measurements were made both for a canopy containing leaves and branches and for a canopy without leaves. The microwave radiation at 4.9 GHz, 10.4 GHz, 21 GHz, 35 GHz and 94 GHz was measured about once every week between August 1987 and August 1988. During the measurements the radiometer was placed to measure the transmissivity in a vertical direction through the beech. Thus it measures the brightness temperature T_b1 of downwelling radiation from the beech. This temperature can be expressed by

T_b1 = t T_b2 + r T_b0 + (1 − r − t) T_1 (3.23)

where t is the transmissivity and r the reflectivity of the vegetation layer. Here T_1 is the physical tree temperature and T_b2 is the sky brightness temperature. The upwelling brightness temperature from the ground, T_b0, is given by

T_b0 = e_0 T_0 + (1 − e_0) T_b1 (3.24)

where e_0 is the emissivity of the ground surface and T_0 is the ground temperature. Eq. (3.23) and Eq. (3.24) are the basic equations for the experiments, and they can be used to get an expression for the transmissivity of the tree crown. After some algebra we find

t = (T_1 + r δT − T_b1) / (T_1 − T_b2) (3.25)

where δT = T_b0 − T_1. Since the emissivity of the grass-covered ground below the beech is near 0.95 over the entire frequency range, T_b0 approaches T_0. This, and the fact that the reflectivity of the beech is close to 0.1, leads to the following estimate:

r δT ≈ 0.1 (T_0 − T_1)
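A minimal numerical sketch of Eq. (3.25) above, using the r δT ≈ 0.1 (T_0 − T_1) estimate; all temperature values are illustrative assumptions, not data from the study:

def crown_transmissivity(Tb1, Tb2, T1, T0, r=0.1):
    """Crown transmissivity from Eq. (3.25):
    t = (T1 + r*dT - Tb1) / (T1 - Tb2), with r*dT ~ r*(T0 - T1)."""
    return (T1 + r * (T0 - T1) - Tb1) / (T1 - Tb2)

# Illustrative values [K]: cold sky, tree and ground near ambient.
t = crown_transmissivity(Tb1=210.0, Tb2=30.0, T1=290.0, T0=292.0)
print(f"crown transmissivity t = {t:.2f}")  # ~0.31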
14) Explain the ionospheric abnormalities?
Ultrasonic testing is based on time-varying deformations or vibrations in materials, which is generally referred to as acoustics. All material substances are comprised of atoms, which may be forced into vibrational motion about their equilibrium positions. Many different patterns of vibrational motion exist at the atomic level; however, most are irrelevant to acoustics and ultrasonic testing. Acoustics is focused on particles that contain many atoms that move in unison to produce a mechanical wave. When a material is not stressed in tension or compression beyond its elastic limit, its individual particles perform elastic oscillations. When the particles of a medium are displaced from their equilibrium positions, internal (electrostatic) restoration forces arise. It is these elastic restoring forces between particles, combined with the inertia of the particles, that lead to the oscillatory motions of the medium. In solids, sound waves can propagate in four principal modes that are based on the way the particles oscillate. Sound can propagate as longitudinal waves, shear waves, surface waves, and in thin materials as plate waves. Longitudinal and shear waves are the two modes of propagation most widely used in ultrasonic testing. The particle movement responsible for the propagation of longitudinal and shear waves is illustrated below. In longitudinal waves, the oscillations occur in the longitudinal direction, or the direction of wave propagation. Since compressional and dilational forces are active in these waves, they are also called pressure or compressional waves. They are also sometimes called density waves because their particle density fluctuates as they move. Compression waves can be generated in liquids as well as solids because the energy travels through the atomic structure by a series of compression and expansion (rarefaction) movements. In the transverse or shear wave, the particles oscillate at a right angle, or transverse, to the direction of propagation. Shear waves require an acoustically solid material for effective propagation and, therefore, are not effectively propagated in materials such as liquids or gases. Shear waves are relatively weak when compared to longitudinal waves. In fact, shear waves are usually generated in materials using some of the energy from longitudinal waves.

UNIT - II (PART-A) - ANTENNAS
1) An antenna (or aerial) is a transducer designed to transmit or receive electromagnetic waves.
2) An antenna is an arrangement of conductors that generates a radiating electromagnetic field.
3) The origin of the word antenna relative to wireless apparatus is attributed to Guglielmo Marconi.
4) Antennas have practical uses for the transmission and reception of radio frequency signals.
5) The directionality of the array is due to the spatial relationships and the electrical feed relationships between individual antennas.

(PART-B)
6) Describe the antenna?
An antenna (or aerial) is a transducer designed to transmit or receive electromagnetic waves.
In other words, antennas convert electromagnetic waves into electrical currents and vice versa. Antennas are used in systems such as radio and television broadcasting, point-to-point radio communication, wireless LAN, radar, and space exploration. Antennas usually work in air or outer space, but can also be operated under water or even through soil and rock at certain frequencies for short distances. Physically, an antenna is an arrangement of conductors that generates a radiating electromagnetic field in response to an applied alternating voltage and the associated alternating electric current, or can be placed in an electromagnetic field so that the field will induce an alternating current in the antenna and a voltage between its terminals. Some antenna devices (parabolic antenna, horn antenna) just adapt the free space to another type of antenna.

7) Write a short note on electromagnetic radiation?
The origin of the word antenna relative to wireless apparatus is attributed to Guglielmo Marconi. In 1895, while testing early radio apparatus in the Swiss Alps at Salvan, Switzerland, in the Mont Blanc region, Marconi experimented with early wireless equipment. A 2.5 meter long pole, along which was carried a wire, was used as a radiating and receiving aerial element. In Italian a tent pole is known as l'antenna centrale, and the pole with a wire alongside it used as an aerial was simply called l'antenna. Until then, wireless radiating transmitting and receiving elements were known simply as aerials or terminals. Marconi's use of the word antenna (Italian for pole) would become a popular term for what today is uniformly known as the antenna.

8) Describe the elementary doublet?
Antennas have practical uses for the transmission and reception of radio frequency signals (radio, TV, etc.). In air, those signals travel very quickly and with a very low transmission loss. The signals are absorbed when moving through more conducting materials, such as concrete walls, rock, etc. When encountering an interface, the waves are partially reflected and partially transmitted through. A common antenna is a vertical rod a quarter of a wavelength long. Such antennas are simple in construction, usually inexpensive, and both radiate in and receive from all horizontal directions (omnidirectional). One limitation of this antenna is that it does not radiate or receive in the direction in which the rod points. This region is called the antenna blind cone or null.

9) Describe the current and voltage distribution?
An electromagnetic wave refractor is a structure which is shaped or positioned to delay or accelerate transmitted electromagnetic waves passing through such structure, by an amount which varies over the wave front. The refractor alters the direction of propagation of the waves emitted from the structure with respect to the waves impinging on the structure. It can alternatively bring the wave to a focus or alter the wave front in other ways, such as to convert a spherical wave front to a planar wave front (or vice versa). The velocity of the radiated waves has a component which is in the same direction (director) or in the opposite direction (reflector) as that of the velocity of the impinging wave. A director is a parasitic element, usually a metallic conductive structure, which re-radiates into free space impinging electromagnetic radiation coming from or going to the active antenna, the velocity of the re-radiated wave having a component in the direction of the velocity of the impinging wave.
The director modifies the radiation pattern of the active antenna, but there is no direct electrical connection between the active antenna and this parasitic element.

10) What is a resonant antenna?
The "resonant frequency" and "electrical resonance" are related to the electrical length of an antenna. The electrical length is usually the physical length of the wire divided by its velocity factor (the ratio of the speed of wave propagation in the wire to c0, the speed of light in a vacuum). Typically an antenna is tuned for a specific frequency, and is effective for a range of frequencies that are usually centered on that resonant frequency. However, other properties of an antenna change with frequency, in particular the radiation pattern and impedance, so the antenna's resonant frequency may merely be close to the center frequency of these other more important properties. Antennas can be made resonant on harmonic frequencies with lengths that are fractions of the target wavelength. Some antenna designs have multiple resonant frequencies, and some are relatively effective over a very broad range of frequencies. The most commonly known type of wide band aerial is the logarithmic or log-periodic, but its gain is usually much lower than that of a specific or narrower band aerial.

(PART-C)
11) Briefly explain the electromagnetic radiations?
An antenna (or aerial) is a transducer designed to transmit or receive electromagnetic waves. In other words, antennas convert electromagnetic waves into electrical currents and vice versa. Antennas are used in systems such as radio and television broadcasting, point-to-point radio communication, wireless LAN, radar, and space exploration. Antennas usually work in air or outer space, but can also be operated under water or even through soil and rock at certain frequencies for short distances. Physically, an antenna is an arrangement of conductors that generates a radiating electromagnetic field in response to an applied alternating voltage and the associated alternating electric current, or can be placed in an electromagnetic field so that the field will induce an alternating current in the antenna and a voltage between its terminals. Some antenna devices (parabolic antenna, horn antenna) just adapt the free space to another type of antenna. Thomas Edison used antennas by 1885; Edison patented his system in U.S. Patent 465,971. Antennas were also used in 1888 by Heinrich Hertz (1857-1894) to prove the existence of electromagnetic waves predicted by the theory of James Clerk Maxwell. Hertz placed the emitter dipole in the focal point of a parabolic reflector. He published his work and installation drawings in Annalen der Physik und Chemie (vol. 36, 1889).

Terminology
The words antenna (plural: antennas) and aerial are used interchangeably, but usually a rigid metallic structure is termed an antenna and a wire format is called an aerial. In the United Kingdom and other British English speaking areas, the term aerial is more common, even for rigid types. The noun aerial is occasionally written with a diaeresis mark, aërial, in recognition of the original spelling of the adjective aërial from which the noun is derived. The origin of the word antenna relative to wireless apparatus is attributed to Guglielmo Marconi. In 1895, while testing early radio apparatus in the Swiss Alps at Salvan, Switzerland, in the Mont Blanc region, Marconi experimented with early wireless equipment.
A 2.5 meter long pole, along which was carried a wire, was used as a radiating and receiving aerial element. In Italian a tent pole is known as l'antenna centrale, and the pole with a wire alongside it used as an aerial was simply called l'antenna. Until then, wireless radiating transmitting and receiving elements were known simply as aerials or terminals. Marconi's use of the word antenna (Italian for pole) would become a popular term for what today is uniformly known as the antenna.[2] A Hertzian antenna is a set of terminals that does not require the presence of a ground for its operation (versus a Tesla antenna, which is grounded [3]). A loaded antenna is an active antenna having an elongated portion of appreciable electrical length and having additional inductance or capacitance directly in series or shunt with the elongated portion so as to modify the standing wave pattern existing along the portion or to change the effective electrical length of the portion. An antenna grounding structure is a structure for establishing a reference potential level for operating the active antenna. It can be any structure closely associated with (or acting as) the ground which is connected to the terminal of the signal receiver or source opposing the active antenna terminal (i.e., the signal receiver or source is interposed between the active antenna and this structure).

Overview
Antennas have practical uses for the transmission and reception of radio frequency signals (radio, TV, etc.). In air, those signals travel very quickly and with a very low transmission loss. The signals are absorbed when moving through more conducting materials, such as concrete walls, rock, etc. When encountering an interface, the waves are partially reflected and partially transmitted through. A common antenna is a vertical rod a quarter of a wavelength long. Such antennas are simple in construction, usually inexpensive, and both radiate in and receive from all horizontal directions (omnidirectional). One limitation of this antenna is that it does not radiate or receive in the direction in which the rod points. This region is called the antenna blind cone or null. There are two fundamental types of antenna directional patterns, which, with reference to a specific three-dimensional (usually horizontal or vertical) plane, are either:
1. omnidirectional (radiates equally in all directions), such as a vertical rod, or
2. directional (radiates more in one direction than in the other).
In colloquial usage, "omnidirectional" usually refers to all horizontal directions with reception above and below the antenna being reduced in favor of better reception (and thus range) near the horizon. A "directional" antenna usually refers to one focusing a narrow beam in a single specific direction such as a telescope or satellite dish, or, at least, focusing in a sector such as a 120° horizontal fan pattern in the case of a panel antenna at a cell site. All antennas radiate some energy in all directions in free space, but careful construction results in substantial transmission of energy in a preferred direction and negligible energy radiated in other directions. By adding additional elements (such as rods, loops or plates) and carefully arranging their length, spacing, and orientation, an antenna with desired directional properties can be created. An antenna array is two or more simple antennas combined to produce a specific directional radiation pattern.
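A minimal sketch of how element spacing and feed phase determine an array's directionality, illustrating the broadside and end-fire behavior described next (the element count, spacing, and steering values are illustrative assumptions):

import cmath, math

def array_factor(theta, n_elem, d_over_lambda, beta):
    """Normalized array factor of a uniform linear array along the z-axis:
    AF(theta) = sum_n exp(j*n*(k*d*cos(theta) + beta))."""
    psi = 2 * math.pi * d_over_lambda * math.cos(theta) + beta
    af = sum(cmath.exp(1j * n * psi) for n in range(n_elem))
    return abs(af) / n_elem

N, d = 8, 0.5  # 8 elements, half-wavelength spacing (illustrative)
kd = 2 * math.pi * d
for name, beta in (("broadside", 0.0), ("end-fire", -kd)):
    # sample the pattern over 0..180 degrees and report where it peaks
    peak = max(range(0, 181), key=lambda a: array_factor(math.radians(a), N, d, beta))
    print(f"{name}: pattern peaks near theta = {peak} deg")

With zero progressive phase the pattern peaks broadside to the line of elements (90°); with a progressive phase of -kd it peaks along the array axis (0°), which is exactly the spatial-plus-feed relationship the text describes.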
In common usage an array is composed of active elements, such as a linear array of parallel dipoles fed as a "broadside array". A slightly different feed method could cause this same array of dipoles to radiate as an "end-fire array". Antenna arrays may be built up from any basic antenna type, such as dipoles, loops or slots. The directionality of the array is due to the spatial relationships and the electrical feed relationships between individual antennas. Usually all of the elements are active (electrically fed), as in the log-periodic dipole array, which offers modest gain and broad bandwidth and is traditionally used for television reception. Alternatively, a superficially similar dipole array, the Yagi-Uda antenna (often abbreviated to "Yagi"), has only one active dipole element in a chain of parasitic dipole elements, and a very different performance with high gain over a narrow bandwidth. An active element is electrically connected to the antenna terminals leading to the receiver or transmitter, as opposed to a parasitic element that modifies the antenna pattern without being connected directly. The active element(s) couple energy between the electromagnetic wave and the antenna terminals; thus any functioning antenna has at least one active element. An antenna lead-in is the medium, for example a transmission line or feed line, for conveying the signal energy between the signal source or receiver and the antenna. The antenna feed refers to the components between the antenna and an amplifier. An antenna counterpoise is a structure of conductive material most closely associated with ground that may be insulated from or capacitively coupled to the natural ground. It aids in the function of the natural ground, particularly where variations (or limitations) of the characteristics of the natural ground interfere with its proper function. Such structures are usually connected to the terminal of a receiver or source opposite to the antenna terminal. An antenna component is a portion of the antenna performing a distinct function and limited for use in an antenna, as for example a reflector, director, or active antenna. Parasitic elements have no direct electrical connection to the antenna terminals, yet they modify the antenna pattern. The parasitic elements are immersed in the electromagnetic waves and fields around the active elements, and the parasitic currents induced in them interact with the original waves and fields. A careful arrangement of parasitic elements, such as rods or coils, can improve the radiation pattern of the active element(s). Directors and reflectors are common parasitic elements. An electromagnetic wave refractor is a structure which is shaped or positioned to delay or accelerate transmitted electromagnetic waves passing through such structure, by an amount which varies over the wave front. The refractor alters the direction of propagation of the waves emitted from the structure with respect to the waves impinging on the structure. It can alternatively bring the wave to a focus or alter the wave front in other ways, such as to convert a spherical wave front to a planar wave front (or vice versa). The velocity of the radiated waves has a component which is in the same direction (director) or in the opposite direction (reflector) as that of the velocity of the impinging wave.
A director is a parasitic element, usually a metallic conductive structure, which re-radiates into free space impinging electromagnetic radiation coming from or going to the active antenna, the velocity of the re-radiated wave having a component in the direction of the velocity of the impinging wave. The director modifies the radiation pattern of the active antenna, but there is no direct electrical connection between the active antenna and this parasitic element. A reflector is a parasitic element, usually a metallic conductive structure (e.g., screen, rod or plate), which re-radiates back into free space impinging electromagnetic radiation coming from or going to the active antenna, the velocity of the returned wave having a component in a direction opposite to the direction of the velocity of the impinging wave. The reflector modifies the radiation of the active antenna. There is no direct electrical connection between the active antenna and this parasitic element. An antenna coupling network is a passive network (which may be any combination of resistive, inductive or capacitive circuits) for transmitting the signal energy between the active antenna and a source (or receiver) of such signal energy. Typically, antennas are designed to operate in a relatively narrow frequency range. The design criteria for receiving and transmitting antennas differ slightly, but generally an antenna can receive and transmit equally well. This property is called reciprocity.

Parameters
There are several critical parameters affecting an antenna's performance that can be adjusted during the design process. These are resonant frequency, impedance, gain, aperture or radiation pattern, polarization, efficiency and bandwidth. Transmit antennas may also have a maximum power rating, and receive antennas differ in their noise rejection properties. All of these parameters can be measured through various means.

Resonant frequency
The "resonant frequency" and "electrical resonance" are related to the electrical length of an antenna. The electrical length is usually the physical length of the wire divided by its velocity factor (the ratio of the speed of wave propagation in the wire to c0, the speed of light in a vacuum). Typically an antenna is tuned for a specific frequency, and is effective for a range of frequencies that are usually centered on that resonant frequency. However, other properties of an antenna change with frequency, in particular the radiation pattern and impedance, so the antenna's resonant frequency may merely be close to the center frequency of these other more important properties. Antennas can be made resonant on harmonic frequencies with lengths that are fractions of the target wavelength. Some antenna designs have multiple resonant frequencies, and some are relatively effective over a very broad range of frequencies. The most commonly known type of wide band aerial is the logarithmic or log-periodic, but its gain is usually much lower than that of a specific or narrower band aerial.

Gain
Gain as a parameter measures the directionality of a given antenna. An antenna with a low gain emits radiation with about the same power in all directions, whereas a high-gain antenna will preferentially radiate in particular directions.
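A quick numerical illustration before the formal definition below. The 2.15 dB dipole-over-isotropic conversion is the standard figure; the transmit power, gain, and feed-loss values are assumptions for illustration:

def dbd_to_dbi(g_dbd):
    """A half-wave dipole has a gain of about 2.15 dBi, so G_dBi = G_dBd + 2.15."""
    return g_dbd + 2.15

def eirp_dbm(p_tx_dbm, g_dbi, feed_loss_db=0.0):
    """Effective isotropic radiated power: TX power + antenna gain - feed losses (dB terms)."""
    return p_tx_dbm + g_dbi - feed_loss_db

g = dbd_to_dbi(10.0)                               # a 10 dBd Yagi (illustrative)
print(f"10 dBd = {g:.2f} dBi")                     # 12.15 dBi
print(f"EIRP = {eirp_dbm(30.0, g, 1.0):.2f} dBm")  # 30 dBm TX, 1 dB feed loss -> 41.15 dBm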
Specifically, the gain, directive gain or power gain of an antenna is defined as the ratio of the intensity (power per unit surface) radiated by the antenna in a given direction at an arbitrary distance, divided by the intensity radiated at the same distance by a hypothetical isotropic antenna. The gain of an antenna is a passive phenomenon: power is not added by the antenna, but simply redistributed to provide more radiated power in a certain direction than would be transmitted by an isotropic antenna. If an antenna has a gain greater than one in some directions, it must have a gain less than one in other directions, since energy is conserved by the antenna. An antenna designer must take into account the application for the antenna when determining the gain. High-gain antennas have the advantage of longer range and better signal quality, but must be aimed carefully in a particular direction. Low-gain antennas have shorter range, but the orientation of the antenna is inconsequential. For example, a dish antenna on a spacecraft is a high-gain device that must be pointed at the planet to be effective, whereas a typical Wi-Fi antenna in a laptop computer is low-gain, and as long as the base station is within range, the antenna can be in any orientation in space. It makes sense to improve horizontal range at the expense of reception above or below the antenna. Thus most antennas labelled "omnidirectional" really have some gain.[4] Sometimes, the half-wave dipole is taken as a reference instead of the isotropic radiator. The gain is then given in dBd (decibels over dipole).

12) Explain the antenna gain and effective radiated power?
Monopole and earth return
In a common configuration, called monopole, one of the terminals of the rectifier is connected to earth ground. The other terminal, at a potential high above or below ground, is connected to a transmission line. The earthed terminal may or may not be connected to the corresponding connection at the inverting station by means of a second conductor. If no metallic conductor is installed, current flows in the earth between the earth electrodes at the two stations. Therefore it is a type of single wire earth return. The issues surrounding earth-return current include:
- electrochemical corrosion of long buried metal objects such as pipelines;
- underwater earth-return electrodes in seawater may produce chlorine or otherwise affect water chemistry;
- an unbalanced current path may result in a net magnetic field, which can affect magnetic navigational compasses for ships passing over an underwater cable.
These effects can be eliminated with installation of a metallic return conductor between the two ends of the monopolar transmission line. Since one terminal of the converters is connected to earth, the return conductor need not be insulated for the full transmission voltage, which makes it less costly than the high-voltage conductor. Use of a metallic return conductor is decided based on economic, technical and environmental factors.[15] Modern monopolar systems for pure overhead lines carry typically 1,500 MW.[16] If underground or underwater cables are used, the typical value is 600 MW. Most monopolar systems are designed for future bipolar expansion. Transmission line towers may be designed to carry two conductors, even if only one is used initially for the monopole transmission system. The second conductor is either unused, used as electrode line, or connected in parallel with the other (as in the case of the Baltic Cable).
Bipolar
[Figure: bipolar system pylons of the Baltic Cable HVDC in Sweden]
In bipolar transmission a pair of conductors is used, each at a high potential with respect to ground, in opposite polarity. Since these conductors must be insulated for the full voltage, transmission line cost is higher than for a monopole with a return conductor. However, there are a number of advantages to bipolar transmission which can make it an attractive option. Under normal load, negligible earth-current flows, as in the case of monopolar transmission with a metallic earth return. This reduces earth return loss and environmental effects. When a fault develops in a line, with earth return electrodes installed at each end of the line, approximately half the rated power can continue to flow using the earth as a return path, operating in monopolar mode. Since for a given total power rating each conductor of a bipolar line carries only half the current of monopolar lines, the cost of the second conductor is reduced compared to a monopolar line of the same rating. In very adverse terrain, the second conductor may be carried on an independent set of transmission towers, so that some power may continue to be transmitted even if one line is damaged. A bipolar system may also be installed with a metallic earth return conductor. Bipolar systems may carry as much as 3,200 MW at voltages of +/-600 kV. Submarine cable installations initially commissioned as a monopole may be upgraded with additional cables and operated as a bipole. A back-to-back station (or B2B for short) is a plant in which both static inverters and rectifiers are in the same area, usually in the same building. The length of the direct current line is kept as short as possible. HVDC back-to-back stations are used for coupling:
- electricity mains of different frequency (as in Japan);
- two networks of the same nominal frequency but with no fixed phase relationship (as until 1995/96 in Etzenricht, Dürnrohr and Vienna);
- networks of different frequency and phase number (for example, as a replacement for traction current converter plants).
The DC voltage in the intermediate circuit can be selected freely at HVDC back-to-back stations because of the short conductor length. The DC voltage is kept as low as possible, in order to build a small valve hall and to avoid series connections of valves. For this reason, at HVDC back-to-back stations, valves with the highest available current rating are used.

Systems with transmission lines
The most common configuration of an HVDC link is two inverter/rectifier stations connected by an overhead powerline. This is also a configuration commonly used in connecting unsynchronised grids, in long-haul power transmission, and in undersea cables. Multi-terminal HVDC links, connecting more than two points, are rare. The configuration of multiple terminals can be series, parallel, or hybrid (a mixture of series and parallel). Parallel configuration tends to be used for large capacity stations, and series for lower capacity stations. An example is the 2,000 MW Quebec - New England Transmission system opened in 1992, which is currently the largest multi-terminal HVDC system in the world.[17]

Tripole: current-modulating control
A newly patented scheme (as of 2004), current modulation of direct current transmission lines, is intended for conversion of existing AC transmission lines to HVDC. Two of the three circuit conductors are operated as a bipole.
The third conductor is used as a parallel monopole, equipped with reversing valves (or parallel valves connected in reverse polarity). The parallel monopole periodically relieves current from one pole or the other, switching polarity over a span of several minutes. The bipole conductors would be loaded to either 1.37 or 0.37 of their thermal limit, with the parallel monopole always carrying +/-1 times its thermal limit current. The combined RMS heating effect is as if each of the conductors is always carrying 1.0 of its rated current. This allows heavier currents to be carried by the bipole conductors, and full use of the installed third conductor for energy transmission. High currents can be circulated through the line conductors even when load demand is low, for removal of ice. Combined with the higher average power possible with a DC transmission line for the same line-to-ground voltage, a tripole conversion of an existing AC line could allow up to 80% more power to be transferred using the same transmission right-of-way, towers, and conductors. Some AC lines cannot be loaded to their thermal limit due to system stability, reliability, and reactive power concerns, which would not exist with an HVDC link. The system would operate without earth-return current. Since a single failure of a pole converter or a conductor results in only a small loss of capacity and no earth-return current, reliability of this scheme would be high, with no time required for switching. As of 2005, no tripole conversions are in operation, although a transmission line in India has been converted to bipole HVDC.

Corona discharge
Corona discharge is the creation of ions in a fluid (such as air) by the presence of a strong electric field. Electrons are torn from neutral air, and either the positive ions or else the electrons are attracted to the conductor, while the charged particles drift. This effect can cause considerable power loss, create audible and radio-frequency interference, generate toxic compounds such as oxides of nitrogen and ozone, and cause arcing. Both AC and DC transmission lines can generate coronas, in the former case in the form of oscillating particles, in the latter a constant wind. Due to the space charge formed around the conductors, an HVDC system may have about half the loss per unit length of a high voltage AC system carrying the same amount of power. With monopolar transmission the choice of polarity of the energised conductor leads to a degree of control over the corona discharge. In particular, the polarity of the ions emitted can be controlled, which may have an environmental impact on particulate condensation (particles of different polarities have different mean free paths). Negative coronas generate considerably more ozone than positive coronas, and generate it further downwind of the power line, creating the potential for health effects. The use of a positive voltage will reduce the ozone impacts of monopole HVDC power lines.

Applications
Overview
The controllability of current flow through HVDC rectifiers and inverters, their application in connecting unsynchronized networks, and their applications in efficient submarine cables mean that HVDC cables are often used at national boundaries for the exchange of power. Offshore windfarms also require undersea cables, and their turbines are unsynchronized.
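Returning to the tripole loading figures above, a quick check of the claimed RMS heating equivalence: a bipole conductor that alternates with equal dwell time between 1.37 and 0.37 of its rated current has an RMS loading of about 1.0, since resistive heating scales with the square of the current:

import math

loadings = [1.37, 0.37]  # the two alternating bipole loading states
rms = math.sqrt(sum(x * x for x in loadings) / len(loadings))
print(f"RMS loading = {rms:.3f} of rated current")  # ~1.003, i.e. ~1.0 as claimed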
In very long-distance connections between just two points, for example around the remote communities of Siberia, Canada, and the Scandinavian North, the decreased line costs of HVDC also make it the usual choice. Other applications have been noted throughout this article.

AC network interconnections
AC transmission lines can only interconnect synchronized AC networks that oscillate at the same frequency and in phase. Many areas that wish to share power have unsynchronized networks. The power grids of the UK, Northern Europe and continental Europe are not united into a single synchronized network. Japan has 50 Hz and 60 Hz networks. Continental North America, while operating at 60 Hz throughout, is divided into regions which are unsynchronised: East, West, Texas, Quebec, and Alaska. Brazil and Paraguay, which share the enormous Itaipu hydroelectric plant, operate on 60 Hz and 50 Hz respectively. However, HVDC systems make it possible to interconnect unsynchronized AC networks, and also add the possibility of controlling AC voltage and reactive power flow. A generator connected to a long AC transmission line may become unstable and fall out of synchronization with a distant AC power system. An HVDC transmission link may make it economically feasible to use remote generation sites. Wind farms located offshore may use HVDC systems to collect power from multiple unsynchronized generators for transmission to the shore by an underwater cable. In general, however, an HVDC power line will interconnect two AC regions of the power-distribution grid. Machinery to convert between AC and DC power adds a considerable cost in power transmission. The conversion from AC to DC is known as rectification, and from DC to AC as inversion. Above a certain break-even distance (about 50 km for submarine cables, and perhaps 600–800 km for overhead cables), the lower cost of the HVDC electrical conductors outweighs the cost of the electronics. The conversion electronics also present an opportunity to effectively manage the power grid by means of controlling the magnitude and direction of power flow. An additional advantage of the existence of HVDC links, therefore, is potential increased stability in the transmission grid.

Renewable electricity superhighways
A number of studies have highlighted the potential benefits of very wide area super grids based on HVDC, since they can mitigate the effects of intermittency by averaging and smoothing the outputs of large numbers of geographically dispersed wind farms or solar farms.[18] Czisch's study concludes that a grid covering the fringes of Europe could bring 100% renewable power (70% wind, 30% biomass) at close to today's prices. There has been debate over the technical feasibility of this proposal[19] and the political risks involved in energy transmission across a large number of international borders.[20][21] The construction of such green power superhighways is advocated in a white paper that was released by the American Wind Energy Association and the Solar Energy Industries Association.[22] In January, the European Commission proposed €300 million to subsidize the development of HVDC links between Ireland, Britain, the Netherlands, Germany, Denmark, and Sweden, as part of a wider €1.2 billion package supporting links to offshore wind farms and cross-border interconnectors throughout Europe.
Meanwhile, the recently founded Union for the Mediterranean has embraced a Mediterranean Solar Plan to import large amounts of concentrating solar power into Europe from North Africa and the Middle East.[23]

Smaller scale use
The development of insulated gate bipolar transistors (IGBT) and gate turn-off thyristors (GTO) has made smaller HVDC systems economical. These may be installed in existing AC grids for their role in stabilizing power flow without the additional short-circuit current that would be produced by an additional AC transmission line. The manufacturer ABB calls this concept "HVDC Light", and Siemens calls a similar concept "HVDC PLUS" (Power Link Universal System). They have extended the use of HVDC down to blocks as small as a few tens of megawatts and lines as short as a few score kilometres of overhead line. Both concepts are based on voltage-sourced converter (VSC) technology.

12) Describe the antenna bandwidth, beam width and polarization?
Radiation pattern
The radiation pattern of an antenna is the geometric pattern of the relative field strengths of the field emitted by the antenna. For the ideal isotropic antenna, this would be a sphere. For a typical dipole, this would be a toroid. The radiation pattern of an antenna is typically represented by a three-dimensional graph, or polar plots of the horizontal and vertical cross sections. The graph should show sidelobes and backlobes, where the antenna's gain is at a minimum or maximum.

Impedance
As an electromagnetic wave travels through the different parts of the antenna system (radio, feed line, antenna, free space) it may encounter differences in impedance (E/H, V/I, etc.). At each interface, depending on the impedance match, some fraction of the wave's energy will reflect back to the source[5], forming a standing wave in the feed line. The ratio of maximum power to minimum power in the wave can be measured and is called the standing wave ratio (SWR). A SWR of 1:1 is ideal. A SWR of 1.5:1 is considered to be marginally acceptable in low power applications where power loss is more critical, although an SWR as high as 6:1 may still be usable with the right equipment. Minimizing impedance differences at each interface (impedance matching) will reduce SWR and maximize power transfer through each part of the antenna system. Complex impedance of an antenna is related to the electrical length of the antenna at the wavelength in use. The impedance of an antenna can be matched to the feed line and radio by adjusting the impedance of the feed line, using the feed line as an impedance transformer. More commonly, the impedance is adjusted at the load (see below) with an antenna tuner, a balun, a matching transformer, matching networks composed of inductors and capacitors, or matching sections such as the gamma match.

Efficiency
Efficiency is the ratio of power actually radiated to the power put into the antenna terminals. A dummy load may have an SWR of 1:1 but an efficiency of 0, as it absorbs all power and radiates heat but not RF energy, showing that SWR alone is not an effective measure of an antenna's efficiency. Radiation in an antenna is caused by radiation resistance, which can only be measured as part of total resistance including loss resistance. Loss resistance usually results in heat generation rather than radiation, and reduces efficiency. Mathematically, efficiency is calculated as radiation resistance divided by total resistance.
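A minimal sketch of the two quantities just defined: SWR from the reflection coefficient at an impedance mismatch, and efficiency from radiation versus loss resistance. The dipole impedance used is the common textbook value; the loss resistance is an assumption for illustration:

def swr(z_load, z0=50.0):
    """SWR from the reflection coefficient magnitude at a Z0 interface."""
    gamma = abs((z_load - z0) / (z_load + z0))
    return (1 + gamma) / (1 - gamma)

def efficiency(r_radiation, r_loss):
    """Efficiency = radiation resistance / total resistance."""
    return r_radiation / (r_radiation + r_loss)

# A thin half-wave dipole is roughly 73 + j42.5 ohms at its physical
# half-wave length; the 2-ohm loss resistance is assumed.
print(f"SWR  = {swr(complex(73, 42.5)):.2f}:1")   # ~2.18:1 on a 50-ohm line
print(f"eff. = {efficiency(73.0, 2.0):.1%}")      # ~97.3%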
Bandwidth
The bandwidth of an antenna is the range of frequencies over which it is effective, usually centered on the resonant frequency. The bandwidth of an antenna may be increased by several techniques, including using thicker wires, replacing wires with cages to simulate a thicker wire, tapering antenna components (as in a feed horn), and combining multiple antennas into a single assembly and allowing the natural impedance to select the correct antenna. Small antennas are usually preferred for convenience, but there is a fundamental limit relating bandwidth, size and efficiency.

Polarization
The polarization of an antenna is the orientation of the electric field (E-plane) of the radio wave with respect to the Earth's surface and is determined by the physical structure of the antenna and by its orientation. It has nothing in common with antenna directionality terms: "horizontal", "vertical" and "circular". Thus, a simple straight wire antenna will have one polarization when mounted vertically, and a different polarization when mounted horizontally. "Electromagnetic wave polarization filters" are structures which can be employed to act directly on the electromagnetic wave to filter out wave energy of an undesired polarization and to pass wave energy of a desired polarization. Reflections generally affect polarization. For radio waves the most important reflector is the ionosphere: signals which reflect from it will have their polarization changed unpredictably. For signals which are reflected by the ionosphere, polarization cannot be relied upon. For line-of-sight communications for which polarization can be relied upon, it can make a large difference in signal quality to have the transmitter and receiver using the same polarization; many tens of dB difference are commonly seen, and this is more than enough to make the difference between reasonable communication and a broken link. Polarization is largely predictable from antenna construction but, especially in directional antennas, the polarization of side lobes can be quite different from that of the main propagation lobe. For radio antennas, polarization corresponds to the orientation of the radiating element in an antenna. A vertical omnidirectional WiFi antenna will have vertical polarization (the most common type). An exception is a class of elongated waveguide antennas in which vertically placed antennas are horizontally polarized. Many commercial antennas are marked as to the polarization of their emitted signals. Polarization is the sum of the E-plane orientations over time, projected onto an imaginary plane perpendicular to the direction of motion of the radio wave. In the most general case, polarization is elliptical (the projection is oblong), meaning that the polarization of the emitted radio waves varies over time. Two special cases are linear polarization (the ellipse collapses into a line) and circular polarization (the ellipse becomes a circle). In linear polarization the antenna compels the electric field of the emitted radio wave to a particular orientation. Depending on the orientation of the antenna mounting, the usual linear cases are horizontal and vertical polarization. In circular polarization, the antenna continuously varies the electric field of the radio wave through all possible values of its orientation with regard to the Earth's surface.
Circular polarizations, like elliptical ones, are classified as right-hand polarized or left-hand polarized using a "thumb in the direction of the propagation" rule. Optical researchers use the same rule of thumb, but point the thumb in the direction of the emitter rather than in the direction of propagation, so their convention is opposite to the radio engineers' use. In practice, regardless of confusing terminology, it is important that linearly polarized antennas be matched, lest the received signal strength be greatly reduced. So horizontal should be used with horizontal and vertical with vertical. Intermediate matchings will lose some signal strength, but not as much as a complete mismatch. Transmitters mounted on vehicles with large motional freedom commonly use circularly polarized antennas so that there will never be a complete mismatch with signals from other sources. In the case of radar, these other sources are often reflections from raindrops.

13) Explain antenna transmission and reception?

Transmission and reception
All of the antenna parameters are expressed in terms of a transmitting antenna, but are identically applicable to a receiving antenna, due to reciprocity. Impedance, however, is not applied in an obvious way; for impedance, the impedance at the load (where the power is consumed) is most critical. For a transmitting antenna, this is the antenna itself. For a receiving antenna, this is at the (radio) receiver rather than at the antenna.

Tuning is done by adjusting the length of an electrically long linear antenna to alter the electrical resonance of the antenna. Antenna tuning is done by adjusting an inductance or capacitance combined with the active antenna (but distinct and separate from it). The inductance or capacitance provides the reactance which combines with the inherent reactance of the active antenna to establish a resonance in a circuit including the active antenna. The established resonance is at a frequency other than the natural electrical resonant frequency of the active antenna. Adjustment of the inductance or capacitance changes this resonance.

Antennas used for transmission have a maximum power rating, beyond which heating, arcing or sparking may occur in the components, which may cause them to be damaged or destroyed. Raising this maximum power rating usually requires larger and heavier components, which may in turn require larger and heavier supporting structures. This is a concern only for transmitting antennas, as the power received by an antenna rarely exceeds the microwatt range.

Antennas designed specifically for reception might be optimized for noise rejection. An antenna shield is a conductive or low-reluctance structure (such as a wire, plate or grid) which is placed in the vicinity of an antenna to reduce, as by dissipation through a resistance or by conduction to ground, undesired electromagnetic radiation, or electric or magnetic fields, which are directed toward the active antenna from an external source or which emanate from the active antenna. Noise rejection can also be improved by selecting a narrow bandwidth so that noise from other frequencies is rejected, by selecting a specific radiation pattern to reject noise from a specific direction, by selecting a polarization different from the noise polarization, or by selecting an antenna that favors either the electric or the magnetic field.
For instance, an antenna to be used for reception of low frequencies (below about ten megahertz) will be subject both to man-made noise from motors and other machinery and to noise from natural sources such as lightning. Successfully rejecting these forms of noise is an important antenna feature. A small coil of wire with many turns is more able to reject such noise than a vertical antenna. However, the vertical will radiate much more effectively on transmit, where extraneous signals are not a concern.

Basic antenna models
There are many variations of antennas. Below are a few basic models.

The isotropic radiator is a purely theoretical antenna that radiates equally in all directions. It is considered to be a point in space with no dimensions and no mass. This antenna cannot physically exist, but is useful as a theoretical model for comparison with all other antennas. Most antennas' gains are measured with reference to an isotropic radiator, and are rated in dBi (decibels with respect to an isotropic radiator).

The dipole antenna is simply two wires pointed in opposite directions, arranged either horizontally or vertically, with one end of each wire connected to the radio and the other end hanging free in space. Since this is the simplest practical antenna, it is also used as a reference model for other antennas; gain with respect to a dipole is labeled as dBd. Generally, the dipole is considered to be omnidirectional in the plane perpendicular to the axis of the antenna, but it has deep nulls in the directions of the axis. Variations of the dipole include the folded dipole, the half-wave antenna, the ground plane antenna, the whip, and the J-pole.

The Yagi-Uda antenna is a directional variation of the dipole with parasitic elements added, which is functionally similar to adding a reflector and lenses (directors) to focus the light of a filament bulb.

The random wire antenna is simply a very long (at least one quarter wavelength) wire with one end connected to the radio and the other in free space, arranged in any way most convenient for the space available. Folding will reduce effectiveness and make theoretical analysis extremely difficult. (The added length helps more than the folding typically hurts.) Typically, a random wire antenna will also require an antenna tuner, as it might have a random impedance that varies nonlinearly with frequency.

The horn is used where high gain is needed, the wavelength is short (microwave) and space is not an issue. Horns can be narrow band or wide band, depending on their shape. A horn can be built for any frequency, but horns for lower frequencies are typically impractical. Horns are also frequently used as reference antennas.

The patch antenna consists mainly of a square conductor mounted over a ground plane. Another example of a planar antenna is the tapered slot antenna (TSA), such as the Vivaldi antenna.

14) Briefly describe the antenna radiated power?

The basic structure of matter involves charged particles bound together in many different ways. When electromagnetic radiation is incident on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the situation. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may also get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy in the material.
With a few exceptions such as fluorescence, harmonic generation, photochemical reactions and the photovoltaic effect, absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens both for infrared and non-infrared radiation. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can also easily set paper afire. Ionizing electromagnetic radiation can create high-speed electrons in a material and break chemical bonds, but after these electrons collide many times with other atoms in the material, most of the energy is eventually downgraded to thermal energy; this whole process happens in a tiny fraction of a second. The idea that infrared radiation is a form of heat while other electromagnetic radiation is not is a widespread misconception in physics. Any electromagnetic radiation can heat a material when it is absorbed.

The inverse or time-reversed process of absorption is responsible for thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter. The resulting radiation may subsequently be absorbed by another piece of matter, with the deposited energy heating the material. Radiation is an important mechanism of heat transfer. The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a form of thermal energy, having maximum radiation entropy. The thermodynamic potentials of electromagnetic radiation can be well-defined as for matter. Thermal radiation in a cavity has energy density (see Planck's law) u = aT^4, where a = 4σ/c is the radiation constant. Differentiating the above with respect to temperature, we may say that the electromagnetic radiation field has an effective volumetric heat capacity c_v = ∂u/∂T = 4aT^3.

Electromagnetic spectrum
[Figure: the electromagnetic spectrum, with visible light highlighted. Legend: γ = gamma rays; HX = hard X-rays; SX = soft X-rays; EUV = extreme ultraviolet; NUV = near ultraviolet; visible light; NIR = near infrared; MIR = moderate infrared; FIR = far infrared. Radio waves: EHF = extremely high frequency (microwaves); SHF = super high frequency (microwaves); UHF = ultrahigh frequency (microwaves); VHF = very high frequency; HF = high frequency; MF = medium frequency; LF = low frequency; VLF = very low frequency; VF = voice frequency; ELF = extremely low frequency.]

Generally, EM radiation is classified by wavelength into electrical energy, radio, microwave, infrared, the visible region we perceive as light, ultraviolet, X-rays and gamma rays. The behavior of EM radiation depends on its wavelength. Higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. When EM radiation interacts with single atoms and molecules, its behavior depends on the amount of energy per quantum it carries. Spectroscopy can detect a much wider region of the EM spectrum than the visible range of 400 nm to 700 nm. A common laboratory spectroscope can detect wavelengths from 2 nm to 2500 nm. Detailed information about the physical properties of objects, gases, or even stars can be obtained from this type of device. It is widely used in astrophysics. For example, hydrogen atoms emit radio waves of wavelength 21.12 cm.

Light
EM radiation with a wavelength between approximately 400 nm and 700 nm is detected by the human eye and perceived as visible light.
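As a numerical aside, here is a minimal sketch (standard physical constants, assumed here rather than quoted from the text) converting the 400-700 nm band edges just mentioned into frequency and photon energy:

# A minimal sketch: converting the visible-band wavelength limits quoted
# above (400-700 nm) into frequency and photon energy.
c = 299_792_458.0        # speed of light in vacuum, m/s (standard constant)
h = 6.626e-34            # Planck's constant, J*s (standard constant)

for wavelength_nm in (400, 700):
    wavelength = wavelength_nm * 1e-9          # metres
    f = c / wavelength                         # frequency, Hz (f = c / lambda)
    E = h * f                                  # photon energy, J (E = h f)
    print(f"{wavelength_nm} nm -> {f/1e12:.0f} THz, {E/1.602e-19:.2f} eV")

The 400 nm edge works out to roughly 750 THz (about 3.1 eV per photon) and the 700 nm edge to roughly 430 THz (about 1.8 eV per photon).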
Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm), are also sometimes referred to as light, especially when visibility to humans is not relevant. If radiation having a frequency in the visible region of the EM spectrum reflects off an object, say a bowl of fruit, and then strikes our eyes, this results in our visual perception of the scene. Our brain's visual system processes the multitude of reflected frequencies into different shades and hues, and through this not-entirely-understood psychophysical phenomenon, most people perceive a bowl of fruit. At most wavelengths, however, the information carried by electromagnetic radiation is not directly detected by human senses. Natural sources produce EM radiation across the spectrum, and our technology can also manipulate a broad range of wavelengths. Optical fiber transmits light which, although not suitable for direct viewing, can carry data that can be translated into sound or an image. The coding used in such data is similar to that used with radio waves.

Radio waves
Radio waves can be made to carry information by varying a combination of the amplitude, frequency and phase of the wave within a frequency band. When EM radiation impinges upon a conductor, it couples to the conductor, travels along it, and induces an electric current on the surface of that conductor by exciting the electrons of the conducting material. This effect (the skin effect) is used in antennas. EM radiation may also cause certain molecules to absorb energy and thus to heat up; this is exploited in microwave ovens.

15) Explain the voltage and current distribution?

[Image: long-distance HVDC lines carrying hydropower from Canada's Nelson River to a converter station, where it is converted to AC for use in Winnipeg's local grid.]

A high-voltage, direct current (HVDC) electric power transmission system uses direct current for the bulk transmission of electrical power, in contrast with the more common alternating current systems. For long-distance transmission, HVDC systems are less expensive and suffer lower electrical losses. For shorter distances, the higher cost of DC conversion equipment compared to an AC system may be warranted where other benefits of direct current links are useful. The modern form of HVDC transmission uses technology developed extensively in the 1930s in Sweden at ASEA. Early commercial installations included one in the Soviet Union in 1951 between Moscow and Kashira, and a 10-20 MW system in Gotland, Sweden, in 1954.[1] The longest HVDC link in the world is currently the Inga-Shaba 1,700 km (1,100 mi) 600 MW link connecting the Inga Dam to the Shaba copper mine in the Democratic Republic of Congo.

[Map: HVDC interconnections in western Europe - red are existing links, green are under construction, and blue are proposed. Many of these transfer power from renewable sources such as hydro and wind.]

High voltage transmission
High voltage is used for transmission to reduce the energy lost in the resistance of the wires. For a given quantity of power transmitted, higher voltage reduces the transmission power loss. Power in a circuit is proportional to the current, but the power lost as heat in the wires is proportional to the square of the current. However, power is also proportional to voltage, so for a given power level, higher voltage can be traded off for lower current. Thus, the higher the voltage, the lower the power loss.
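To make the square-law loss concrete, here is a minimal sketch comparing the same delivered power at two line voltages (the power, resistance and voltage values are illustrative assumptions, not figures from the text):

# Illustrative sketch: same transmitted power at two line voltages.
# Loss in the conductors is I^2 * R, so doubling the voltage halves the
# current and quarters the resistive loss. All numbers are assumed.
P = 100e6        # power to transmit, W (assumed)
R = 10.0         # total conductor resistance, ohms (assumed)

for V in (200e3, 400e3):          # two candidate line voltages, volts
    I = P / V                     # line current, A
    loss = I**2 * R               # resistive loss, W
    print(f"{V/1e3:.0f} kV: I = {I:.0f} A, loss = {loss/1e6:.3f} MW "
          f"({100*loss/P:.3f}% of transmitted power)")

On these assumptions, moving from 200 kV to 400 kV cuts the loss from 2.5 MW to 0.625 MW, a factor of four, exactly as the square law predicts.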
Power loss can also be reduced by reducing resistance, commonly achieved by increasing the diameter of the conductor; but larger conductors are heavier and more expensive. High voltages cannot easily be used in lighting and motors, and so transmission-level voltages must be reduced to values compatible with end-use equipment. The transformer, which only works with alternating current, is an efficient way to change voltages. The competition between the DC of Thomas Edison and the AC of Nikola Tesla and George Westinghouse was known as the War of Currents, with AC emerging victorious. Practical manipulation of DC voltages only became possible with the development of high-power electronic devices such as mercury arc valves and, later, semiconductor devices such as thyristors, insulated-gate bipolar transistors (IGBTs), high-power-capable MOSFETs (power metal-oxide-semiconductor field-effect transistors) and gate turn-off thyristors (GTOs).

History of HVDC transmission
[Image: HVDC in 1971 - this 150 kV mercury arc valve converted AC hydropower voltage for transmission to distant cities from Manitoba Hydro generators.]

The first long-distance transmission of electric power was demonstrated using direct current in 1882 at the Miesbach-Munich Power Transmission, but only 2.5 kW was transmitted. An early method of high-voltage DC transmission was developed by the Swiss engineer Rene Thury,[2] and his method was put into practice by 1889 in Italy by the Acquedotto de Ferrari-Galliera company. This system used series-connected motor-generator sets to increase voltage. Each set was insulated from ground and driven by insulated shafts from a prime mover. The line was operated in constant-current mode, with up to 5,000 volts on each machine, some machines having double commutators to reduce the voltage on each commutator. This system transmitted 630 kW at 14 kV DC over a distance of 120 km.[3][4] The Moutiers-Lyon system transmitted 8,600 kW of hydroelectric power a distance of 124 miles, including 6 miles of underground cable. The system used eight series-connected generators with dual commutators for a total voltage of 150,000 volts between the poles, and ran from about 1906 until 1936. Fifteen Thury systems were in operation by 1913.[5] Other Thury systems operating at up to 100 kV DC ran into the 1930s, but the rotating machinery required high maintenance and had high energy loss. Various other electromechanical devices were tested during the first half of the 20th century with little commercial success.[6]

One conversion technique attempted for converting direct current from a high transmission voltage to a lower utilization voltage was to charge series-connected batteries, then reconnect the batteries in parallel to serve distribution loads.[7] While at least two commercial installations were tried around the turn of the 20th century, the technique was not generally useful owing to the limited capacity of batteries, difficulties in switching between series and parallel connections, and the inherent energy inefficiency of a battery charge/discharge cycle. The grid-controlled mercury arc valve became available for power transmission during the period 1920 to 1940. Starting in 1932, General Electric tested mercury-vapor valves and a 12 kV DC transmission line, which also served to convert 40 Hz generation to serve 60 Hz loads, at Mechanicville, New York.
In 1941, a 60 MW, ±200 kV, 115 km buried cable link was designed for the city of Berlin using mercury arc valves (Elbe-Project), but owing to the collapse of the German government in 1945 the project was never completed. The nominal justification for the project was that, during wartime, a buried cable would be less conspicuous as a bombing target. The equipment was moved to the Soviet Union and was put into service there. The introduction of the fully static mercury arc valve to commercial service in 1954 marked the beginning of the modern era of HVDC transmission. An HVDC connection was constructed by ASEA between the mainland of Sweden and the island of Gotland. Mercury arc valves were common in systems designed up to 1975, but since then HVDC systems have used only solid-state devices. From 1975 to 2000, line-commutated converters (LCC) using thyristor valves were relied on. According to experts such as Vijay Sood, the next 25 years may well be dominated by force-commutated converters, beginning with capacitor-commutated converters (CCC) and followed by self-commutating converters, which have largely supplanted LCC use. Since the adoption of semiconductor commutators, hundreds of HVDC sea cables have been laid and have worked with high reliability, usually better than 96% of the time.

Advantages of HVDC over AC transmission
The advantage of HVDC is the ability to transmit large amounts of power over long distances with lower capital costs and with lower losses than AC. Depending on voltage level and construction details, losses are quoted as about 3% per 1,000 km. High-voltage direct current transmission allows efficient use of energy sources remote from load centers. In a number of applications HVDC is more effective than AC transmission. Examples include:
- Undersea cables, where high capacitance causes additional AC losses (e.g., the 250 km Baltic Cable between Sweden and Germany)
- Endpoint-to-endpoint long-haul bulk power transmission without intermediate 'taps', for example in remote areas
- Increasing the capacity of an existing power grid in situations where additional wires are difficult or expensive to install
- Power transmission and stabilization between unsynchronised AC distribution systems
- Connecting a remote generating plant to the distribution grid, for example the Nelson River Bipole
- Stabilizing a predominantly AC power grid without increasing the prospective short-circuit current
- Reducing line cost, since HVDC needs fewer conductors (there is no need to support multiple phases) and thinner conductors can be used, as HVDC does not suffer from the skin effect
- Facilitating power transmission between different countries that use AC at differing voltages and/or frequencies
- Synchronizing AC produced by renewable energy sources

Long undersea cables have a high capacitance. While this has minimal effect for DC transmission, the current required to charge and discharge the capacitance of the cable causes additional I²R power losses when the cable is carrying AC. In addition, AC power is lost to dielectric losses. HVDC can carry more power per conductor because, for a given power rating, the constant voltage in a DC line is lower than the peak voltage in an AC line. In AC power, the root mean square (RMS) voltage measurement is considered the standard, but RMS is only about 71% of the peak voltage. The peak voltage of AC determines the actual insulation thickness and conductor spacing.
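As a quick numerical illustration of the peak-versus-RMS point, here is a minimal sketch, assuming a sinusoidal AC waveform and a DC line run at the AC peak voltage with the same conductor current (the voltage value itself is an arbitrary assumption):

import math

# For a sine wave, V_rms = V_peak / sqrt(2), about 0.707 * V_peak.
# Insulation is sized for the peak, so a DC line at the same peak voltage
# delivers V_peak * I rather than V_rms * I for the same current.
V_peak = 500e3                      # insulation-limited peak voltage, V (assumed)
V_rms = V_peak / math.sqrt(2)       # ~71% of peak
print(f"V_rms / V_peak = {V_rms / V_peak:.3f}")                     # ~0.707
print(f"DC power advantage per conductor: {V_peak / V_rms:.2f}x")   # ~1.41x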
Because DC operates at a constant maximum voltage without an RMS/peak distinction, this allows existing transmission line corridors with equally sized conductors and insulation to carry about 41% more power than AC (1/0.71 ≈ 1.41; equivalently, the AC line carries about 29% less power than the DC line) into an area of high power consumption, which can lower costs. Because HVDC allows power transmission between unsynchronised AC distribution systems, it can help increase system stability by preventing cascading failures from propagating from one part of a wider power transmission grid to another. Changes in load that would cause portions of an AC network to become unsynchronized and separate would not similarly affect a DC link, and the power flow through the DC link would tend to stabilize the AC network. The magnitude and direction of power flow through a DC link can be directly commanded, and changed as needed to support the AC networks at either end of the DC link. This has caused many power system operators to contemplate wider use of HVDC technology for its stability benefits alone.

Disadvantages
The disadvantages of HVDC are in conversion, switching and control. Furthermore, operating an HVDC scheme requires keeping many spare parts, which may be usable exclusively in that one system, as HVDC systems are less standardized than AC systems and the technology in use changes fast. The required static inverters are expensive and have limited overload capacity. At smaller transmission distances the losses in the static inverters may be bigger than in an AC transmission line, and the cost of the inverters may not be offset by reductions in line construction cost and lower line loss. With two exceptions, all former mercury rectifiers worldwide have been dismantled or replaced by thyristor units. In contrast to AC systems, realizing multiterminal systems is complex, as is expanding existing schemes to multiterminal systems. Controlling power flow in a multiterminal DC system requires good communication between all the terminals; power flow must be actively regulated by the control system instead of relying on the inherent properties of the transmission line. High-voltage DC circuit breakers are difficult to build because some mechanism must be included in the circuit breaker to force the current to zero; otherwise arcing and contact wear would be too great to allow reliable switching. Multi-terminal lines are rare. One is in operation at the Hydro Québec - New England transmission from Radisson to Sandy Pond. Another example is the Sardinia-mainland Italy link, which was modified in 1989 to also provide power to the island of Corsica.

Costs of high voltage DC transmission
Normally manufacturers such as AREVA, Siemens and ABB do not state specific cost information for a particular project, since this is a commercial matter between the manufacturer and the client. Costs vary widely depending on the specifics of the project, such as power rating, circuit length, overhead versus underwater route, land costs, and AC network improvements required at either terminal. A detailed evaluation of DC versus AC cost may be required where there is no clear technical advantage to DC alone and only economics drives the selection. However, some practitioners have given out some information that can be reasonably well relied upon. For an 8 GW, 40 km link laid under the English Channel, the following are approximate primary equipment costs for a 2000 MW, 500 kV bipolar conventional HVDC link (excluding wayleaves, on-shore reinforcement works, consenting, engineering, insurance, etc.):
Converter stations: ~£110M
Subsea cable + installation: ~£1M/km

So, for an 8 GW capacity between England and France in four links, little is left over from £750M for the installed works. Add another £200-300M for the other works, depending on the additional onshore works required.

Rectifying and inverting
[Image: two of three thyristor valve stacks used for long-distance transmission of power from Manitoba Hydro dams.]

Early static systems used mercury arc rectifiers, which were unreliable. Two HVDC systems using mercury arc rectifiers are still in service (as of 2008). The thyristor valve was first used in HVDC systems in the 1960s. The thyristor is a solid-state semiconductor device similar to the diode, but with an extra control terminal that is used to switch the device on at a particular instant during the AC cycle. The insulated-gate bipolar transistor (IGBT) is now also used and offers simpler control and reduced valve cost. Because the voltages in HVDC systems, up to 800 kV in some cases, exceed the breakdown voltages of the semiconductor devices, HVDC converters are built using large numbers of semiconductors in series. The low-voltage control circuits used to switch the thyristors on and off need to be isolated from the high voltages present on the transmission lines. This is usually done optically. In a hybrid control system, the low-voltage control electronics send light pulses along optical fibres to the high-side control electronics. Another system, called direct light triggering, dispenses with the high-side electronics, instead using light pulses from the control electronics to switch light-triggered thyristors (LTTs). A complete switching element is commonly referred to as a valve, irrespective of its construction.

16) Describe the antenna effect?

Antennas are typically used in an environment where other objects are present that may have an effect on their performance. Height above ground has a very significant effect on the radiation pattern of some antenna types. At the frequencies used in antennas, the ground behaves mainly as a dielectric; the conductivity of ground at these frequencies is negligible. When an electromagnetic wave arrives at the surface of an object, two waves are created: one enters the dielectric and the other is reflected. If the object is a conductor, the transmitted wave is negligible and the reflected wave has almost the same amplitude as the incident one. When the object is a dielectric, the fraction reflected depends (among other things) on the angle of incidence. When the angle of incidence is small (that is, the wave arrives almost perpendicularly), most of the energy traverses the surface and very little is reflected. When the angle of incidence is near 90° (grazing incidence), almost all the wave is reflected. Most of the electromagnetic waves emitted by an antenna toward the ground below the antenna at moderate (say < 60°) angles of incidence enter the earth and are absorbed (lost). But waves emitted toward the ground at grazing angles, far from the antenna, are almost totally reflected. At grazing angles, the ground behaves as a mirror. The quality of the reflection depends on the nature of the surface: when the irregularities of the surface are smaller than the wavelength, reflection is good. The wave reflected by the earth can be considered as emitted by an image antenna. This means that the receiver "sees" the real antenna and, under the ground, the image of the antenna reflected by the ground. If the ground has irregularities, the image will appear fuzzy.
If the receiver is placed at some height above the ground, waves reflected by the ground will travel a slightly longer distance to arrive at the receiver than direct waves. The distance will be the same only if the receiver is close to the ground. In the accompanying drawing, the angle is shown far bigger than in reality. The distance between the antenna and its image is 2h, twice the height of the center of the antenna. The situation is a bit more complex because the reflection of electromagnetic waves depends on the polarization of the incident wave. As the refractive index of the ground is bigger than the refractive index of the air (n = 1), the direction of the component of the electric field parallel to the ground is inverted at the reflection. This is equivalent to a phase shift of π radians, or 180°. The vertical component of the electric field reflects without changing direction. This sign inversion of the parallel component and the non-inversion of the perpendicular component would also happen if the ground were a good electrical conductor. The vertical component of the current reflects without changing sign; the horizontal component reverses sign at reflection. This means that a receiving antenna "sees" the image antenna with the current in the same direction if the antenna is vertical, or with the current inverted if the antenna is horizontal.

For a vertically polarized emitting antenna, the far electric field of the electromagnetic wave produced by the direct ray plus the reflected ray is proportional to cos((2πh/λ) sin θ), where θ is the elevation angle and 2h is the distance between the antenna and its image (twice the height of the center of the antenna). The sign inversion for the parallel-field case just changes the cosine to a sine: for a horizontally polarized antenna the far field is proportional to sin((2πh/λ) sin θ).

[Figure: radiation patterns of antennas and their images reflected by the ground. At left the polarization is vertical and there is always a maximum at θ = 0. If the polarization is horizontal, as at right, there is always a zero at θ = 0.]

For emitting and receiving antennas situated near the ground (in a building or on a mast) and far from each other, the distances traveled by the direct and reflected rays are nearly the same, so there is no induced phase shift. If the emission is polarized vertically, the two fields (direct and reflected) add, and there is a maximum of received signal. If the emission is polarized horizontally, the two signals subtract, and the received signal is minimum. This is depicted in the image at right. In the case of vertical polarization, there is always a maximum at earth level (left pattern); for horizontal polarization, there is always a minimum at earth level. Note that in these drawings the ground is considered as a perfect mirror, even for low angles of incidence. In these drawings the distance between the antenna and its image is just a few wavelengths; for greater distances, the number of lobes increases. Note that the situation is different, and more complex, if reflections occur in the ionosphere. This happens over very long distances (thousands of kilometers): there is not a direct ray but several reflected rays that add with different phase shifts.

This is the reason why almost all broadcast radio emissions for the public have vertical polarization. As public users are near the ground, horizontally polarized emissions would be poorly received. Observe household and automobile radio receivers: they all have vertical antennas or horizontal ferrite antennas for vertically polarized emissions. In cases where the receiving antenna must work in any position, as in mobile phones, the emitters and receivers in base stations use circularly polarized electromagnetic waves.
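Before turning to the broadcast exceptions below, here is a minimal sketch of the two patterns just derived for the antenna-plus-image pair (the height and wavelength are arbitrary assumed values):

import math

wavelength = 1.0          # work in units of the wavelength (assumed)
h = 2.0 * wavelength      # antenna height above ground (assumed)

# Relative field of antenna + ground image versus elevation angle theta:
#   vertical polarization   -> |cos((2*pi*h/lambda) * sin(theta))|  (maximum at theta = 0)
#   horizontal polarization -> |sin((2*pi*h/lambda) * sin(theta))|  (zero at theta = 0)
for deg in range(0, 91, 15):
    theta = math.radians(deg)
    phase = 2 * math.pi * h / wavelength * math.sin(theta)
    print(f"theta = {deg:2d} deg: vertical {abs(math.cos(phase)):.2f}, "
          f"horizontal {abs(math.sin(phase)):.2f}")

At θ = 0 the sketch gives 1.00 for vertical and 0.00 for horizontal polarization, matching the maximum and zero at ground level described above.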
Classical (analog) television emissions are an exception. They are almost always horizontally polarized, because the presence of buildings makes it unlikely that a good emitter antenna image will appear. However, these same buildings reflect the electromagnetic waves and can create ghost images. Using horizontal polarization, reflections are attenuated because of the low reflection of electromagnetic waves whose magnetic field is parallel to the dielectric surface near the Brewster angle. Vertically polarized analog television has been used in some rural areas. In digital terrestrial television, reflections are less annoying because of the type of modulation.

Mutual impedance and interaction between antennas
[Figure: mutual impedance between parallel, unstaggered dipoles. Curves Re and Im are the resistive and reactive parts of the impedance.]

Current circulating in any antenna induces currents in all others. One can postulate a mutual impedance between two antennas that has the same significance as the mutual inductance in ordinary coupled inductors. The mutual impedance z21 between two antennas is defined as z21 = v21/i1, where i1 is the current flowing in antenna 1 and v21 is the voltage that would have to be applied to antenna 2 (with antenna 1 removed) to produce the current in antenna 2 that was produced by antenna 1. From this definition, the currents and voltages in a set of coupled antennas satisfy:

v1 = z11 i1 + z12 i2 + ... + z1n in
v2 = z21 i1 + z22 i2 + ... + z2n in
...
vn = zn1 i1 + zn2 i2 + ... + znn in

where vi is the voltage applied to antenna i, zii is the impedance of antenna i, and zij is the mutual impedance between antennas i and j. Note that, as is the case for mutual inductances, zij = zji. If some of the elements are not fed (there is a short circuit instead of a feeder cable), as is the case in television antennas (Yagi-Uda antennas), the corresponding vi are zero. Those elements are called parasitic elements. Parasitic elements are unpowered elements that either reflect or absorb and reradiate RF energy. In some geometrical settings, the mutual impedance between antennas can be zero. This is the case for crossed dipoles used in circular polarization antennas.

Antenna gallery
[Gallery captions - antennas and antenna arrays: a multi-band rotary directional antenna for amateur radio; a rooftop TV antenna, actually three Yagi antennas (the longest elements are for the low band, while the medium and short elements are for the high and UHF bands); a Yagi-Uda beam antenna; a terrestrial microwave antenna array; examples of US 136-174 MHz base station antennas; a low-cost LF time signal receiver, antenna (left) and receiver (right); a rotatable log-periodic array for VHF and UHF; shortwave antennas in Delano, California.]
[Gallery captions - antennas and supporting structures: a water tower in Palmerston, Northern Territory, with radio broadcasting and communications antennas; a building rooftop supporting numerous dish and sectored mobile telecommunications antennas (Doncaster, Victoria, Australia); a three-sector telephone site in Mexico City; a telephone site concealed as a palm tree.]
[Gallery captions - diagrams as part of a system: antennas connected through a multiplexing arrangement in some applications, as in this trunked two-way radio example; an antenna network for an emergency medical services base station; a smart antenna.]

Notes
1. In the context of engineering and physics, the plural of antenna is antennas, and it has been this way since about 1950 (or earlier), when a cornerstone textbook in this field, Antennas, was published by John D. Kraus of the Ohio State University. Besides the title, Dr. Kraus noted this in a footnote on the first page of his book.
Insects may have "antennae", but this form is not used in technical contexts.
2. "Salvan: Cradle of Wireless, How Marconi Conducted Early Wireless Experiments in the Swiss Alps", Fred Gardiol & Yves Fournier, Microwave Journal, February 2006, pp. 124-136.
3. Nikola Tesla said during the development of radio that "One of the terminals of the source would be connected to Earth [as an electric ground connection ...] the other to an insulated body of large surface." Delivered before the Franklin Institute, Philadelphia, February 1893, and before the National Electric Light Association, St. Louis, Missouri, March 1893.
4. "Guide to Wi-Fi Wireless Network Antenna". NetworkBits.net. http://networkbits.net/wireless-printing/wireless-network-antenna-guide/. Retrieved 2008-04-08.
5. Impedance is caused by the same physics as refractive index in optics, although impedance effects are typically one-dimensional, whereas the effects of refractive index are three-dimensional.

17) Explain electromagnetic radiation?

Electromagnetic radiation
Electromagnetic radiation (sometimes abbreviated EMR and often simply called light) is a ubiquitous phenomenon that takes the form of self-propagating waves in a vacuum or in matter. It consists of electric and magnetic field components which oscillate in phase perpendicular to each other and perpendicular to the direction of energy propagation. Electromagnetic radiation is classified into several types according to the frequency of its wave; these types include (in order of increasing frequency and decreasing wavelength): radio waves, microwaves, terahertz radiation, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays. A small and somewhat variable window of frequencies is sensed by the eyes of various organisms; this is what we call the visible spectrum, or light. EM radiation carries energy and momentum that may be imparted to matter with which it interacts.

Theory
[Figure: three electromagnetic modes (blue, green and red) with a distance scale in micrometres along the x-axis.]

Electromagnetic waves were first postulated by James Clerk Maxwell and subsequently confirmed by Heinrich Hertz. Maxwell derived a wave form of the electric and magnetic equations, revealing the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. According to Maxwell's equations, a time-varying electric field generates a magnetic field and vice versa. Therefore, as an oscillating electric field generates an oscillating magnetic field, the magnetic field in turn generates an oscillating electric field, and so on. These oscillating fields together form an electromagnetic wave. A quantum theory of the interaction between electromagnetic radiation and matter such as electrons is described by the theory of quantum electrodynamics. Electromagnetic waves can be imagined as a self-propagating transverse oscillating wave of electric and magnetic fields. The accompanying diagram shows a plane linearly polarized wave propagating from right to left.
The electric field is in a vertical plane and the magnetic field in a horizontal plane. The physics of electromagnetic radiation is electrodynamics, a subfield of electromagnetism. Electric and magnetic fields obey the properties of superposition, so a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes: as they are vector fields, all magnetic and electric field vectors add together according to vector addition. For instance, a travelling EM wave incident on an atomic structure induces oscillation in the atoms of that structure, thereby causing them to emit their own EM waves, emissions which alter the impinging wave through interference. These properties cause various phenomena, including refraction and diffraction. Since light is an oscillation, it is not affected by travelling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields; these interactions include the Faraday effect and the Kerr effect.

In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light disperses into a visible spectrum when shone through a prism because of the wavelength-dependent refractive index of the prism material (dispersion).

EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in a large number of experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances, while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. Upon absorption, the quantum nature of light leads to clearly non-uniform deposition of energy. There are experiments in which the wave and particle natures of electromagnetic waves appear in the same experiment, such as the diffraction of a single photon. When a single photon is sent through two slits, it passes through both of them, interfering with itself as waves do, yet is detected by a photomultiplier or other sensitive detector only once. Similar self-interference is observed when a single photon is sent into a Michelson interferometer or other interferometers.

Wave model
[Image: white light being separated into its components.]

An important aspect of the nature of light is frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has a spectrum of frequencies which sum together to form the resultant wave. Different frequencies undergo different angles of refraction. A wave consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves the size of buildings to very short gamma rays smaller than atomic nuclei.
Frequency is inversely proportional to wavelength, according to the equation v = f λ, where v is the speed of the wave (c in a vacuum, or less in other media), f is the frequency and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant. Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. The energy in electromagnetic waves is sometimes called radiant energy.

Particle model
Because the energy of an EM wave is quantized, in the particle model of EM radiation a wave consists of discrete packets of energy, or quanta, called photons. The frequency of the wave is proportional to the magnitude of the particle's energy. Moreover, because photons are emitted and absorbed by charged particles, they act as transporters of energy. The energy per photon can be calculated from the Planck-Einstein equation E = hf,[1] where E is the energy, h is Planck's constant, and f is the frequency. This photon-energy expression is a particular case of the energy levels of the more general electromagnetic oscillator, whose average energy, which is used to obtain Planck's radiation law, can be shown to differ sharply from that predicted by the equipartition principle at low temperature, thereby establishing a failure of equipartition due to quantum effects at low temperature.[2]

As a photon is absorbed by an atom, it excites an electron, elevating it to a higher energy level. If the energy is great enough, so that the electron jumps to a high enough energy level, it may escape the positive pull of the nucleus and be liberated from the atom in a process called photoionisation. Conversely, an electron that descends to a lower energy level in an atom emits a photon with energy equal to the energy difference. Since the energy levels of electrons in atoms are discrete, each element emits and absorbs its own characteristic frequencies. Together, these effects explain the absorption spectra of light. The dark bands in the spectrum are due to the atoms in the intervening medium absorbing different frequencies of the light. The composition of the medium through which the light travels determines the nature of the absorption spectrum. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. These bands correspond to the allowed energy levels in the atoms. A similar phenomenon occurs for emission. As the electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons. This is manifested in the emission spectrum of nebulae. Today, scientists use this phenomenon to observe what elements a certain star is composed of. It is also used in determining the distance of a star, using the red shift.

Speed of propagation
Any electric charge which accelerates, or any changing magnetic field, produces electromagnetic radiation. Electromagnetic information about the charge travels at the speed of light. Accurate treatment thus incorporates a concept known as retarded time (as opposed to advanced time, which is unphysical in light of causality), which adds to the expressions for the electrodynamic electric field and magnetic field. These extra terms are responsible for electromagnetic radiation.
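A small sketch tying together the two relations just given, f = v/λ and E = hf, applied to the 21 cm hydrogen line quoted earlier in the text (the constants are standard values, assumed here):

# Minimal sketch: frequency and photon energy of the hydrogen line.
c = 299_792_458.0      # speed of light in vacuum, m/s (standard constant)
h = 6.626e-34          # Planck's constant, J*s (standard constant)

wavelength = 0.2112    # hydrogen line wavelength from the text, m
f = c / wavelength     # f = c / lambda -> about 1.42 GHz
E = h * f              # E = h f -> photon energy in joules
print(f"f = {f/1e9:.3f} GHz, E = {E:.3e} J")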
When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the electric current. At the quantum level, electromagnetic radiation is produced when the wavepacket of a charged particle oscillates or otherwise accelerates. Charged particles in a stationary state do not move, but a superposition of such states may result in oscillation, which is responsible for the phenomenon of radiative transition between quantum states of a charged particle.

Depending on the circumstances, electromagnetic radiation may behave as a wave or as particles. As a wave, it is characterized by a velocity (the speed of light), wavelength, and frequency. When considered as particles, the quanta are known as photons, and each has an energy related to the frequency of the wave given by Planck's relation E = hν, where E is the energy of the photon, h = 6.626 × 10⁻³⁴ J·s is Planck's constant, and ν is the frequency of the wave. One rule is always obeyed regardless of the circumstances: EM radiation in a vacuum always travels at the speed of light, relative to the observer, regardless of the observer's velocity. (This observation led to Albert Einstein's development of the theory of special relativity.) In a medium (other than vacuum), the velocity factor or refractive index is considered, depending on frequency and application. Both of these are ratios of the speed in a medium to the speed in a vacuum.

Thermal radiation and electromagnetic radiation as a form of heat
The basic structure of matter involves charged particles bound together in many different ways. When electromagnetic radiation is incident on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the situation. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may also get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy in the material. With a few exceptions such as fluorescence, harmonic generation, photochemical reactions and the photovoltaic effect, absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens both for infrared and non-infrared radiation. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can also easily set paper afire. Ionizing electromagnetic radiation can create high-speed electrons in a material and break chemical bonds, but after these electrons collide many times with other atoms in the material, most of the energy is eventually downgraded to thermal energy; this whole process happens in a tiny fraction of a second. The idea that infrared radiation is a form of heat while other electromagnetic radiation is not is a widespread misconception in physics. Any electromagnetic radiation can heat a material when it is absorbed.

The inverse or time-reversed process of absorption is responsible for thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter. The resulting radiation may subsequently be absorbed by another piece of matter, with the deposited energy heating the material. Radiation is an important mechanism of heat transfer.
The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a form of thermal energy, having maximum radiation entropy. The thermodynamic potentials of electromagnetic radiation can be well-defined as for matter. Thermal radiation in a cavity has energy density (see Planck's law) u = aT^4, where a = 4σ/c is the radiation constant. Differentiating the above with respect to temperature, we may say that the electromagnetic radiation field has an effective volumetric heat capacity c_v = ∂u/∂T = 4aT^3.

Electromagnetic spectrum
[Figure: the electromagnetic spectrum, with visible light highlighted. Legend: γ = gamma rays; HX = hard X-rays; SX = soft X-rays; EUV = extreme ultraviolet; NUV = near ultraviolet; visible light; NIR = near infrared; MIR = moderate infrared; FIR = far infrared. Radio waves: EHF = extremely high frequency (microwaves); SHF = super high frequency (microwaves); UHF = ultrahigh frequency (microwaves); VHF = very high frequency; HF = high frequency; MF = medium frequency; LF = low frequency; VLF = very low frequency; VF = voice frequency; ELF = extremely low frequency.]

Generally, EM radiation is classified by wavelength into electrical energy, radio, microwave, infrared, the visible region we perceive as light, ultraviolet, X-rays and gamma rays. The behavior of EM radiation depends on its wavelength. Higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. When EM radiation interacts with single atoms and molecules, its behavior depends on the amount of energy per quantum it carries. Spectroscopy can detect a much wider region of the EM spectrum than the visible range of 400 nm to 700 nm. A common laboratory spectroscope can detect wavelengths from 2 nm to 2500 nm. Detailed information about the physical properties of objects, gases, or even stars can be obtained from this type of device. It is widely used in astrophysics. For example, hydrogen atoms emit radio waves of wavelength 21.12 cm.

Light
EM radiation with a wavelength between approximately 400 nm and 700 nm is detected by the human eye and perceived as visible light. Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm), are also sometimes referred to as light, especially when visibility to humans is not relevant. If radiation having a frequency in the visible region of the EM spectrum reflects off an object, say a bowl of fruit, and then strikes our eyes, this results in our visual perception of the scene. Our brain's visual system processes the multitude of reflected frequencies into different shades and hues, and through this not-entirely-understood psychophysical phenomenon, most people perceive a bowl of fruit. At most wavelengths, however, the information carried by electromagnetic radiation is not directly detected by human senses. Natural sources produce EM radiation across the spectrum, and our technology can also manipulate a broad range of wavelengths. Optical fiber transmits light which, although not suitable for direct viewing, can carry data that can be translated into sound or an image. The coding used in such data is similar to that used with radio waves.

Radio waves
Radio waves can be made to carry information by varying a combination of the amplitude, frequency and phase of the wave within a frequency band. When EM radiation impinges upon a conductor, it couples to the conductor, travels along it, and induces an electric current on the surface of that conductor by exciting the electrons of the conducting material.
This effect (the skin effect) is used in antennas. EM radiation may also cause certain molecules to absorb energy and thus to heat up; this is exploited in microwave ovens.

Derivation
Electromagnetic waves as a general phenomenon were predicted by the classical laws of electricity and magnetism, known as Maxwell's equations. If you inspect Maxwell's equations without sources (charges or currents), then you will find that, along with the possibility of nothing happening, the theory will also admit nontrivial solutions of changing electric and magnetic fields. Beginning with Maxwell's equations for free space:

∇ · E = 0    (1)
∇ × E = -∂B/∂t    (2)
∇ · B = 0    (3)
∇ × B = μ0ε0 ∂E/∂t    (4)

where ∇ is a vector differential operator (see Del). One solution, E = B = 0, is trivial. To see the more interesting one, we utilize a vector identity, which works for any vector A:

∇ × (∇ × A) = ∇(∇ · A) - ∇²A    (5)

To see how we can use this, take the curl of equation (2):

∇ × (∇ × E) = ∇ × (-∂B/∂t)

Evaluating the left-hand side:

∇ × (∇ × E) = ∇(∇ · E) - ∇²E = -∇²E    (6)

where we simplified the above by using equation (1). Evaluating the right-hand side by exchanging the curl with the time derivative and substituting equation (4):

∇ × (-∂B/∂t) = -∂(∇ × B)/∂t = -μ0ε0 ∂²E/∂t²    (7)

Equations (6) and (7) are equal, so this results in a vector-valued differential equation for the electric field, namely

∇²E = μ0ε0 ∂²E/∂t²

Applying a similar pattern results in a similar differential equation for the magnetic field:

∇²B = μ0ε0 ∂²B/∂t²

These differential equations are equivalent to the wave equation

∇²f = (1/c0²) ∂²f/∂t²

where c0 is the speed of the wave in free space and f describes a displacement. Or more simply, □f = 0, where □ is the d'Alembertian: □ = ∇² - (1/c0²) ∂²/∂t². Notice that, in the case of the electric and magnetic fields, the speed is

c0 = 1/√(μ0ε0)

which, as it turns out, is the speed of light in free space. Maxwell's equations have unified the permittivity of free space ε0, the permeability of free space μ0, and the speed of light itself, c0. Before this derivation it was not known that there was such a strong relationship between light and electricity and magnetism.

But these are only two equations and we started with four, so there is still more information pertaining to these waves hidden within Maxwell's equations. Let's consider a generic vector wave for the electric field:

E(r, t) = E0 f(k̂ · r - c0 t)

Here E0 is the constant amplitude, f is any second-differentiable function, k̂ is a unit vector in the direction of propagation, and r is a position vector. We observe that this is a generic solution to the wave equation; in other words, it satisfies the wave equation for a generic wave traveling in the k̂ direction. This form will satisfy the wave equation, but will it satisfy all of Maxwell's equations, and with what corresponding magnetic field? The first of Maxwell's equations implies that the electric field is orthogonal to the direction the wave propagates (E0 · k̂ = 0). The second of Maxwell's equations yields the magnetic field, B = (1/c0) k̂ × E. The remaining equations will be satisfied by this choice of E and B. Not only are the electric and magnetic field waves traveling at the speed of light, but they have a special restricted orientation and proportional magnitudes, E0 = c0B0, which can be seen immediately from the Poynting vector. The electric field, magnetic field, and direction of wave propagation are all orthogonal, and the wave propagates in the same direction as E × B. From the viewpoint of an electromagnetic wave traveling forward, the electric field might be oscillating up and down, while the magnetic field oscillates right and left; but this picture can be rotated with the electric field oscillating right and left and the magnetic field oscillating down and up. This is a different solution that is traveling in the same direction. This arbitrariness in the orientation with respect to propagation direction is known as polarization.
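As a quick numeric sanity check of c0 = 1/√(μ0ε0) from the derivation above (standard physical constants, not values quoted in the text):

import math

# c0 = 1 / sqrt(mu0 * eps0) should come out to the speed of light.
mu0 = 4 * math.pi * 1e-7        # permeability of free space, H/m (standard)
eps0 = 8.854187817e-12          # permittivity of free space, F/m (standard)

c0 = 1.0 / math.sqrt(mu0 * eps0)
print(f"c0 = {c0:.0f} m/s")     # ~299,792,458 m/s, the speed of light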
UNIT - III (MODULATION TECHNIQUES) (PART-A)
1) In telecommunication, a communications system is a collection of individual communications networks.
2) A communications subsystem is a functional unit or operational assembly that is smaller than the larger assembly under consideration.
3) A communication satellite's communication system contains transponders and receives signals from the antenna subsystem.
4) A radio communication system is composed of several communications subsystems that give exterior communications capabilities.
5) Power line communications systems operate by impressing a modulated carrier signal on the wiring system.
(PART-B)
7) What is communication?
In telecommunication, a communications system is a collection of individual communications networks, transmission systems, relay stations, tributary stations, and data terminal equipment (DTE) usually capable of interconnection and interoperation to form an integrated whole. The components of a communications system serve a common purpose, are technically compatible, use common procedures, respond to controls, and operate in unison. Telecommunications is a method of communication (e.g., for sports broadcasting, mass media, journalism, etc.).
8) What is a transmitter?
In radio electronics and broadcasting, a transmitter usually has a power supply, an oscillator, a modulator, and amplifiers for audio frequency (AF) and radio frequency (RF). The modulator is the device which piggybacks (or modulates) the signal information onto the carrier frequency, which is then broadcast. Sometimes a device (for example, a cell phone) contains both a transmitter and a radio receiver, with the combined unit referred to as a transceiver. In amateur radio, a transmitter can be a separate piece of electronic gear or a subset of a transceiver, and is often referred to by the abbreviated form "XMTR".[1] In most parts of the world, use of transmitters is strictly controlled by law, since the potential for dangerous interference (for example with emergency communications) is considerable. In consumer electronics, a common device is the personal FM transmitter, a very low power transmitter generally designed to take a simple audio source like an iPod, CD player, etc. and transmit it a few feet to a standard FM radio receiver. Most personal FM transmitters in the USA fall under Part 15 of the FCC regulations to avoid any user licensing requirements.
9) How is a transmitter used?
In industrial process control, a "transmitter" is any device which converts measurements from a sensor into a signal to be received, usually sent via wires, by some display or control device located a distance away. Typically in process control applications the transmitter will output an analog 4-20 mA current loop or a digital protocol to represent a measured variable within a range. For example, a pressure transmitter might use 4 mA as a representation for 50 psig of pressure and 20 mA as 1000 psig, with any value in between proportionately ranged between 50 and 1000 psig. (A 0-4 mA signal indicates a system error.) Older-technology transmitters used pneumatic pressure, typically ranging from 3 to 15 psig (20 to 100 kPa), to represent a process variable.
10) Write a short note on the channel?
In the early days of radio engineering, radio frequency energy was generated using arc converters or mechanical alternators such as the Alexanderson alternator (of which a rare example survives at the SAQ transmitter in Grimeton, Sweden).
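A minimal sketch of the 4-20 mA ranging described in question 9 above (the helper function is hypothetical; it simply assumes the linear scaling described in the text):

def to_loop_current(value, low, high):
    """Map a measured value in [low, high] onto a 4-20 mA current loop.
    Hypothetical helper illustrating the linear ranging described above."""
    span = high - low
    return 4.0 + 16.0 * (value - low) / span   # 4 mA at low, 20 mA at high

# Pressure transmitter from the text: 4 mA = 50 psig, 20 mA = 1000 psig.
for psig in (50, 525, 1000):
    print(f"{psig:4d} psig -> {to_loop_current(psig, 50, 1000):.1f} mA")

A mid-range reading of 525 psig comes out as 12.0 mA, halfway between 4 and 20 mA, as the proportional ranging requires.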
In the 1920s electronic transmitters, based on vacuum tubes, began to be used. In broadcasting and telecommunication, the part which contains the oscillator, modulator, and sometimes audio processor is called the exciter. Confusingly, the high-power amplifier which the exciter then feeds is often called the "transmitter" by broadcast engineers. The final output is given as transmitter power output (TPO), although this is not what most stations are rated by.
(PART-C)
11) Explain about communication?
In telecommunication, a communications system is a collection of individual communications networks, transmission systems, relay stations, tributary stations, and data terminal equipment (DTE) usually capable of interconnection and interoperation to form an integrated whole. The components of a communications system serve a common purpose, are technically compatible, use common procedures, respond to controls, and operate in unison. Telecommunications is a method of communication (e.g., for sports broadcasting, mass media, journalism, etc.).
A communications subsystem is a functional unit or operational assembly that is smaller than the larger assembly under consideration. Examples of communications subsystems in the Defense Communications System (DCS) are (a) a satellite link with one Earth terminal in CONUS and one in Europe, (b) the interconnect facilities at each Earth terminal of the satellite link, and (c) an optical fiber cable with its driver and receiver in either of the interconnect facilities. Communication subsystem (b) basically consists of a receiver, a frequency translator, and a transmitter. In a communication satellite, the communication system contains the transponders and receives signals from the antenna subsystem.
An optical communication system is any form of telecommunication that uses light as the transmission medium. It consists of a transmitter, which encodes a message into an optical signal, a channel, which carries the signal to its destination, and a receiver, which reproduces the message from the received optical signal. Fiber-optic communication systems transmit information from one place to another by sending light through an optical fiber. The light forms an electromagnetic carrier wave that is modulated to carry information. First developed in the 1970s, fiber-optic communication systems have revolutionized the telecommunications industry and played a major role in the advent of the Information Age. Because of its advantages over electrical transmission, the use of optical fiber has largely replaced copper wire communications in core networks in the developed world.
A radio communication system is composed of several communications subsystems that give exterior communications capabilities.[1][2][3] A radio communication system comprises a transmitting conductor[4] in which electrical oscillations[5][6][7] or currents are produced, and which is arranged to cause such currents or oscillations to be propagated through the free-space medium from one point to another remote therefrom, and a receiving conductor[4] at such distant point adapted to be excited by the oscillations or currents propagated from the transmitter.[8][9][10][11]
Power line communications systems operate by impressing a modulated carrier signal on the wiring system. Different types of powerline communications use different frequency bands, depending on the signal transmission characteristics of the power wiring used. Since the power wiring system was originally intended for transmission of AC power, the power wire circuits have only a limited ability to carry higher frequencies. The propagation problem is a limiting factor for each type of power line communications.
A duplex communication system is a system composed of two connected parties or devices which can communicate with one another in both directions. The term duplex is not used when describing communication between more than two parties or devices. Duplex systems are employed in nearly all communications networks, either to allow for a communication "two-way street" between two connected parties or to provide a "reverse path" for the monitoring and remote adjustment of equipment in the field.
A tactical communications system is a communications system that (a) is used within, or in direct support of, tactical forces, (b) is designed to meet the requirements of changing tactical situations and varying environmental conditions, (c) provides securable communications, such as voice, data, and video, among mobile users to facilitate command and control within, and in support of, tactical forces, and (d) usually requires extremely short installation times, usually on the order of hours, in order to meet the requirements of frequent relocation.
13) Discuss the transmitter?
Generally in communication and information processing, a transmitter is any object (source) which sends information to an observer (receiver). When used in this more general sense, vocal cords may also be considered an example of a transmitter.
In radio electronics and broadcasting, a transmitter usually has a power supply, an oscillator, a modulator, and amplifiers for audio frequency (AF) and radio frequency (RF). The modulator is the device which piggybacks (or modulates) the signal information onto the carrier frequency, which is then broadcast. Sometimes a device (for example, a cell phone) contains both a transmitter and a radio receiver, with the combined unit referred to as a transceiver. In amateur radio, a transmitter can be a separate piece of electronic gear or a subset of a transceiver, and is often referred to by the abbreviated form "XMTR".[1] In most parts of the world, use of transmitters is strictly controlled by law, since the potential for dangerous interference (for example, to emergency communications) is considerable. In consumer electronics, a common device is the personal FM transmitter, a very low power transmitter generally designed to take a simple audio source like an iPod, CD player, etc. and transmit it a few feet to a standard FM radio receiver.
Most personal FM transmitters in the USA fall under Part 15 of the FCC regulations to avoid any user licensing requirements. In industrial process control, a "transmitter" is any device which converts measurements from a sensor into a signal to be received, usually sent via wires, by some display or control device located a distance away. Typically in process control applications the "transmitter" will output an analog 4-20 mA current loop or a digital protocol to represent a measured variable within a range. For example, a pressure transmitter might use 4 mA to represent 50 psig of pressure and 20 mA to represent 1000 psig, with any value in between ranged proportionately between 50 and 1000 psig. (A 0-4 mA signal indicates a system error.) Older-technology transmitters used pneumatic pressure signals, typically ranging from 3 to 15 psig (20 to 100 kPa).
13) Explain the need for modulation?
NEED FOR MODULATION: BANDWIDTH REQUIREMENT
Just as important as the planning of the construction and location of the transmitter is how its output fits in with existing transmissions. Two transmitters cannot broadcast on the same frequency in the same area, as this would cause co-channel interference. For a good example of how the channel planners have dovetailed different transmitters' outputs, see the Crystal Palace UHF TV channel allocations. This reference also provides a good example of a grouped transmitter, in this case an A group; that is, all of its output is within the bottom third of the UK UHF television broadcast band. The other two groups (B and C/D) utilise the middle and top third of the band (see graph). By replicating this grouping across the country (using different groups for adjacent transmitters), co-channel interference can be minimised, and in addition, those in marginal reception areas can use more efficient grouped receiving antennas. Unfortunately, in the UK, this carefully planned system has had to be compromised with the advent of digital broadcasting, which (during the changeover period at least) requires yet more channel space, and consequently the additional digital broadcast channels cannot always be fitted within the transmitter's existing group. Thus many UK transmitters have become "wideband", with the consequent need for replacement of receiving antennas (see external links). Once the Digital Switch Over (DSO) occurs, the plan is that most transmitters will revert to their original groups (source: Ofcom, July 2007). Further complication arises when adjacent transmitters have to transmit on the same frequency, and under these circumstances the broadcast radiation patterns are attenuated in the relevant direction(s). A good example of this is in the United Kingdom, where the Waltham transmitting station broadcasts at high power on the same frequencies as the Sandy Heath transmitting station's high power transmissions, with the two being only 50 miles apart. Thus Waltham's antenna array[1] does not broadcast these two channels in the direction of Sandy Heath, and vice versa. Where a particular service needs to have wide coverage, this is usually achieved by using multiple transmitters at different locations. Usually, these transmitters will operate at different frequencies to avoid interference where coverage overlaps. Examples include national broadcasting networks and cellular networks.
In the latter, frequency switching is done automatically by the receiver as necessary; in the former, manual retuning is more common (though the Radio Data System is an example of automatic frequency switching in broadcast networks). Another system for extending coverage using multiple transmitters is quasi-synchronous transmission, but this is rarely used nowadays.
Main and relay (repeater) transmitters
Transmitting stations are usually classified as either main stations or relay stations (also known as repeaters or translators). Main stations are defined as those that generate their own modulated output signal from a baseband (unmodulated) input. Usually main stations operate at high power and cover large areas. Relay stations (translators) take an already modulated input signal, usually by direct reception of a parent station off the air, and simply rebroadcast it on another frequency. Usually relay stations operate at medium or low power, and are used to fill in pockets of poor reception within, or at the fringe of, the service area of a parent main station. Note that a main station may also take its input signal directly off-air from another station; however, this signal would be fully demodulated to baseband first, processed, and then remodulated for transmission.
14) Explain power relations in the AM wave?
Some cities in Europe, like Mühlacker, Ismaning, Langenberg, Kalundborg, Hoerby and Allouis, became famous as sites of powerful transmitters. For example, the Goliath transmitter was a VLF transmitter of the German Navy during World War II, located near Kalbe an der Milde in Saxony-Anhalt, Germany. Some transmitting towers like the radio tower Berlin or the TV tower Stuttgart have become landmarks of cities. Many transmitting plants have very high radio towers that are masterpieces of engineering. Having the tallest building in the world, the nation, the state/province/prefecture, city, etc., has often been considered something to brag about. Often, builders of high-rise buildings have used transmitter antennas to lay claim to having the tallest building. A historic example was the "tallest building" feud between the Chrysler Building and the Empire State Building in New York, New York. Some towers have an observation deck accessible to tourists. An example is the Ostankino Tower in Moscow, which was completed in 1967, on the 50th anniversary of the October Revolution, to demonstrate the technical abilities of the Soviet Union. As very tall radio towers of any construction type are prominent landmarks requiring careful planning and construction, and high-power transmitters, especially in the long- and medium-wave ranges, can be received over long distances, such facilities were often mentioned in propaganda. Other examples were the Deutschlandsender Herzberg/Elster and the Warsaw Radio Mast. KVLY-TV's tower near Blanchard, North Dakota, built in 1963, was long the tallest artificial structure in the world; it was surpassed in 1974 by the Warszawa radio mast, which collapsed in 1991, and has been surpassed by the Burj Dubai skyscraper as of early 2009, but the KVLY-TV mast is still the tallest transmitter mast.
15) Briefly explain amplitude modulation theory?
AMPLITUDE MODULATION
If you connect a long wire to the output terminals of your Hi-Fi amplifier and another long wire to the input of another amplifier, you can transmit music over a short distance. DON'T try this; you could blow up your amplifier.
A radio wave can be transmitted long distances. To get our audio signal to travel long distances we piggyback it onto a radio wave. This process is called MODULATION. The radio wave is called the CARRIER. The audio signal is called the MODULATION. At the receiving end the audio is recovered by a process called DEMODULATION. From the diagram below, it can be seen that when the carrier is modulated, its amplitude goes above and below its unmodulated amplitude. It is about 50% modulated in the diagram. The maximum percentage modulation possible is 100%; going above this causes distortion. Most broadcasters limit modulation to 80%. Modulating the carrier frequency with an audio frequency produces two new frequencies. (At this point it would be a good idea to read the page on MIXERS.) These new frequencies are called the upper and lower SIDEBANDS. The upper sideband is the carrier frequency plus the audio frequency. The lower sideband is the carrier frequency minus the audio frequency. Since the audio signal is not a single frequency but a range of frequencies (usually 20 Hz to 20 kHz), the sidebands are each 20 Hz to 20 kHz wide. If you tune across a station in the Medium Wave band you will find that it takes up space in the band. This is called the signal BANDWIDTH: the space taken by the upper and lower sidebands. In the example given above it would be 40 kHz. Since the Medium Wave band is only 500 kHz wide, there would only be space for about 12 stations. Therefore the bandwidth of stations is limited to 9 kHz, which limits the audio quality. If there are two stations too close together, their sidebands mix and produce HETERODYNE whistles. Since both sidebands carry the same information, one sideband can be removed to save bandwidth. This is SSB, single sideband transmission.
FREQUENCY SPECTRUM OF AM WAVE
The term electromagnetic spectrum refers to all forms of energy transmitted by means of waves traveling at the speed of light. Visible light is a form of electromagnetic radiation, but the term also applies to cosmic rays, X rays, ultraviolet radiation, infrared radiation, radio waves, radar, and microwaves. These forms of electromagnetic radiation make up the electromagnetic spectrum much as the various colors of light make up the visible spectrum (the rainbow).
Wavelength and frequency
Any wave, including an electromagnetic wave, can be described by two properties: its wavelength and frequency. The wavelength of a wave is the distance between two successive identical parts of the wave, as between two wave peaks or crests. The Greek letter lambda (λ) is often used to represent wavelength. Wavelength is measured in various units, depending on the kind of wave being discussed. For visible light, for example, wavelength is often expressed in nanometers (billionths of a meter); for radio waves, wavelengths are usually expressed in centimeters or meters. Frequency is the rate at which waves pass a given point.
The frequency of an X-ray beam, for example, might be expressed as 10^18 hertz. The term hertz (abbreviation: Hz) is a measure of the number of waves that pass a given point per second of time. If you could watch the X-ray beam from some given position, you would see 1,000,000,000,000,000,000 (that is, 10^18) wave crests pass you every second. For every electromagnetic wave, the product of the wavelength and frequency equals a constant, the speed of light (c). In other words, λ · f = c. This equation shows that wavelength and frequency have a reciprocal relationship to each other: as one increases, the other must decrease. Gamma rays, for example, have very small wavelengths and very large frequencies. Radio waves, by contrast, have large wavelengths and very small frequencies.
AM TRANSMITTER BLOCK DIAGRAM
As shown in the accompanying figure, the whole range of the electromagnetic spectrum can be divided up into various regions based on wavelength and frequency. Electromagnetic radiation with very short wavelengths and high frequencies falls into the cosmic ray/gamma ray/ultraviolet radiation region. At the other end of the spectrum are the long wavelength, low frequency forms of radiation: radio, radar, and microwaves. In the middle of the range is visible light. Properties of waves in different regions of the spectrum are commonly described by different notation. Visible radiation is usually described by its wavelength, while X rays are described by their energy. All of these schemes are equivalent, however; they are just different ways of describing the same properties.
Words to Know
Electromagnetic radiation: radiation that travels through a vacuum with the speed of light and that has properties of both an electric and a magnetic wave.
Frequency: the number of waves that pass a given point in a given period of time.
Hertz: the unit of frequency; a measure of the number of waves that pass a given point per second of time.
Wavelength: the distance between two successive peaks or crests in a wave.
The boundaries between types of electromagnetic radiation are rather loose. Thus, a wave with a frequency of 8 × 10^14 hertz could be described as a form of very deep violet visible light or as a form of ultraviolet radiation.
Applications
The various forms of electromagnetic radiation are used everywhere in the world around us. Radio waves are familiar to us because of their use in communications. The standard AM radio band includes radiation in the 540 to 1650 kilohertz (thousands of hertz) range. The FM band includes the 88 to 108 megahertz (millions of hertz) range. This region also includes shortwave radio transmissions and television broadcasts. Microwaves are probably most familiar to people because of microwave ovens. In a microwave oven, food is heated when microwaves excite water molecules contained within foods (and the molecules' motion produces heat). In astronomy, emission of radiation at a wavelength of 8 inches (21 centimeters) has been used to identify neutral hydrogen throughout the galaxy. Radar is also included in this region. The infrared region of the spectrum is best known to us because heat is a form of infrared radiation. But the visible wavelength range is the range of frequencies with which we are most familiar. These are the wavelengths to which the human eye is sensitive and which most easily pass through Earth's atmosphere. This region is further broken down into the familiar colors of the rainbow, also known as the visible spectrum.
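As a numerical illustration of the λ · f = c relation above, the following Python sketch converts the frequencies quoted in this section into wavelengths. The AM and X-ray values are taken from the text; the FM value of 100 MHz is just a representative point in the 88 to 108 MHz band:

```python
# Wavelength from frequency: lambda = c / f
c = 2.998e8   # speed of light in free space, m/s

examples = [
    ("AM broadcast (540 kHz)", 540e3),
    ("FM broadcast (100 MHz)", 100e6),
    ("X-ray beam (10^18 Hz)", 1e18),
]
for name, f in examples:
    print(f"{name}: wavelength = {c / f:.3e} m")
```

As expected, the large radio-wave wavelengths and the tiny X-ray wavelength confirm the reciprocal relationship; the survey of the remaining spectrum regions continues below.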
The ultraviolet range lies at wavelengths just short of the visible range. Most of the ultraviolet radiation reaching Earth in sunlight is absorbed in the upper atmosphere. Ozone, a form of oxygen, has the ability to trap ultraviolet radiation and prevent it from reaching Earth. This fact is important since ultraviolet radiation can cause a number of problems for both plants and animals. The depletion of the ozone layer during the 1970s and 1980s was a matter of some concern to scientists because of the increase in dangerous ultraviolet radiation reaching Earth. We are most familiar with X rays because of their uses in medicine. X-radiation can pass through soft tissue in the body, allowing doctors to examine bones and teeth from the outside. Since X rays do not penetrate Earth's atmosphere, astronomers must place X-ray telescopes in space. Gamma rays are the most energetic of all electromagnetic radiation, and we have little experience with them in everyday life. They are produced by nuclear processes, during radioactive decay (in which an element gives off energy by the disintegration of its nucleus) or in nuclear reactions in stars or in space.
UNIT-IV (SINGLE SIDEBAND MODULATION)
(PART-A)
1) A carrier that has been modulated by voice or music is accompanied by two identical sidebands, each carrying the same intelligence.
2) A single sideband modulator provides a means of translating low frequency baseband signals directly to radio frequency.
3) The level of one of the RF paths is adjusted to achieve amplitude balance.
4) Applying audio and an IF sine wave to a balanced modulator mixes the audio and IF to produce an audio-modulated IF signal.
5) The two sidebands lie one just above and one just below the carrier frequency.
(PART-B)
6) What is sideband modulation?
SINGLE-SIDEBAND TRANSMITTER
You should remember the properties of modulation envelopes from your study of NEETS, Module 12, Modulation Principles. A carrier that has been modulated by voice or music is accompanied by two identical sidebands, each carrying the same intelligence. In amplitude-modulated (AM) transmitters, the carrier and both sidebands are transmitted. In a single-sideband (ssb) transmitter, only one of the sidebands, the upper or the lower, is transmitted, while the remaining sideband and the carrier are suppressed. SUPPRESSION is the elimination of the undesired portions of the signal. Figure 2-7 is the block diagram of a single-sideband transmitter. You can see the audio amplifier increases the amplitude of the incoming signal to a level adequate to operate the ssb generator.
7) How is sideband modulation used?
The ssb generator (modulator) combines its audio input and its carrier input to produce the two sidebands. The two sidebands are then fed to a filter that selects the desired sideband and suppresses the other one. By eliminating the carrier and one of the sidebands, intelligence is transmitted at a savings in power and frequency bandwidth. In most cases ssb generators operate at very low frequencies when compared with the normally transmitted frequencies. For that reason, we must convert (or translate) the filter output to the desired frequency. This is the purpose of the mixer stage. A second output is obtained from the frequency generator and fed to a frequency multiplier to obtain a higher carrier frequency for the mixer stage. The output from the mixer is fed to a linear power amplifier to build up the level of the signal for transmission.
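The filter method described in questions 6 and 7 is easy to illustrate numerically. The Python sketch below uses a single audio tone and a carrier whose frequencies are illustrative assumptions, not values from the text: the balanced modulator produces the two sidebands with the carrier suppressed, and a band-pass filter then keeps only the upper sideband.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 1_000_000                        # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)        # 20 ms of signal

f_audio, f_carrier = 2_000, 100_000   # illustrative tone and carrier
audio = np.cos(2 * np.pi * f_audio * t)
carrier = np.cos(2 * np.pi * f_carrier * t)

# Balanced modulator: the product contains only the two sidebands
# (98 kHz and 102 kHz); the carrier itself is suppressed.
dsb = audio * carrier

# Sideband filter: keep the upper sideband, suppress the lower one.
sos = butter(8, [101_000, 110_000], btype="bandpass", fs=fs, output="sos")
ssb = sosfilt(sos, dsb)

# The strongest remaining component sits at f_carrier + f_audio = 102 kHz.
spectrum = np.abs(np.fft.rfft(ssb))
freqs = np.fft.rfftfreq(len(ssb), 1 / fs)
print(f"strongest component: {freqs[np.argmax(spectrum)]:.0f} Hz")
```

In a real transmitter this filtered output would then go to the mixer and linear power amplifier described above; here the FFT peak simply confirms that the carrier and one sideband have been removed.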
Suppressed Carrier
In ssb the carrier is suppressed (or eliminated) at the transmitter, and the sideband frequencies produced by the carrier are reduced to a minimum. You will probably find this reduction (or elimination) the most difficult aspect of understanding ssb. In single-sideband suppressed carrier, no carrier is present in the transmitted signal.
8) What is SSB generation?
It is true that simultaneous FM and AM modulation can suppress the amplitude of one sideband and increase the amplitude of the other if the modulation phasing is right, but the resulting signal is not the same as a normal SSB signal. If the modulation phasing is such that each time the carrier frequency deviates higher due to FM the carrier amplitude increases due to AM, the upper sideband will become stronger. Likewise, with that phasing, each time the carrier frequency deviates lower the carrier amplitude will decrease due to AM, which decreases the strength of the lower sideband. If the modulation phasing is reversed by reversing the audio input polarity to either the FM or AM modulator, the lower sideband will become stronger and the upper sideband will become weaker. Neither of those FM/AM modulation phase relationships produces the type of signal normally referred to as an SSB signal, but they both suppress the amplitude of one sideband. If the modulation phasing is changed to make the FM and AM modulation phase difference 90 degrees, the amplitudes of the upper and lower sidebands will be equal, and the carrier amplitude will be higher when the carrier frequency passes through center frequency in one direction and lower when it passes through center frequency in the opposite direction. If the FM/AM modulation phasing is changed to -90 degrees, the amplitudes of the upper and lower sidebands will be equal, but the frequency deviation directions for higher and lower amplitudes will be opposite compared to those obtained with a 90 degree modulation phase difference.
9) What is a pilot carrier?
The range of the electromagnetic spectrum located either above (the upper sideband) or below (the lower sideband) the frequency of a sinusoidal carrier signal c(t). The sidebands are produced by modulating the carrier signal in amplitude, frequency, or phase in accordance with a modulating signal m(t) to produce the modulated signal s(t). The resulting distribution of power in the sidebands of the modulated signal depends on the modulating signal and the particular form of modulation employed.
10) What is an independent sideband?
A sideband is any frequency component of a modulated carrier wave other than the frequency of the carrier wave itself, i.e., any frequency added to the carrier as a result of modulation; the sidebands carry the actual information while the carrier contributes none at all. Those frequency components that are higher than the carrier frequency are known as upper sidebands; those lower are called lower sidebands. The upper and lower sidebands contain equivalent information; thus only one needs to be transmitted. Such single-sideband signals are very efficient in their use of the frequency spectrum when compared to standard amplitude modulation (AM) signals. See radio.
(PART-C)
11) Explain the sideband modulation?
SINGLE-SIDEBAND TRANSMITTER
You should remember the properties of modulation envelopes from your study of NEETS, Module 12, Modulation Principles. A carrier that has been modulated by voice or music is accompanied by two identical sidebands, each carrying the same intelligence.
In amplitude-modulated (AM) transmitters, the carrier and both sidebands are transmitted. In a single-sideband (ssb) transmitter, only one of the sidebands, the upper or the lower, is transmitted, while the remaining sideband and the carrier are suppressed. SUPPRESSION is the elimination of the undesired portions of the signal. Figure 2-7 is the block diagram of a single-sideband transmitter. You can see the audio amplifier increases the amplitude of the incoming signal to a level adequate to operate the ssb generator. Usually the audio amplifier is just a voltage amplifier.
Figure 2-7. Ssb transmitter block diagram.
The ssb generator (modulator) combines its audio input and its carrier input to produce the two sidebands. The two sidebands are then fed to a filter that selects the desired sideband and suppresses the other one. By eliminating the carrier and one of the sidebands, intelligence is transmitted at a savings in power and frequency bandwidth. In most cases ssb generators operate at very low frequencies when compared with the normally transmitted frequencies. For that reason, we must convert (or translate) the filter output to the desired frequency. This is the purpose of the mixer stage. A second output is obtained from the frequency generator and fed to a frequency multiplier to obtain a higher carrier frequency for the mixer stage. The output from the mixer is fed to a linear power amplifier to build up the level of the signal for transmission.
Suppressed Carrier
In ssb the carrier is suppressed (or eliminated) at the transmitter, and the sideband frequencies produced by the carrier are reduced to a minimum. You will probably find this reduction (or elimination) the most difficult aspect of understanding ssb. In single-sideband suppressed carrier, no carrier is present in the transmitted signal.
12) Describe the balanced modulator?
A single sideband modulator provides a means of translating low frequency baseband signals directly to radio frequency in a single stage. Such modulators, providing a suppressed carrier and one or two of the sidebands, facilitate the transmission of intelligence with significantly increased gain over AM transmission. Control signals are continuously generated to keep the local oscillator breakthrough and image sidebands down to an insignificantly low level. This is achieved by monitoring the amplitude of the RF output of the single sideband modulator and comparing this with the baseband signals. By adjusting the d.c. offsets at the baseband inputs to the balanced modulators, carrier breakthrough is cancelled. By adjusting the relative phases of the baseband signals, deviations from the 90 degree split are compensated. By changing the amplitude of one of the baseband signals, the level of one of the RF paths is adjusted to achieve amplitude balance.
13) Explain the pilot carrier and independent sideband?
A sideband is any frequency component of a modulated carrier wave other than the frequency of the carrier wave itself, i.e., any frequency added to the carrier as a result of modulation; the sidebands carry the actual information while the carrier contributes none at all. Those frequency components that are higher than the carrier frequency are known as upper sidebands; those lower are called lower sidebands. The upper and lower sidebands contain equivalent information; thus only one needs to be transmitted. Such single-sideband signals are very efficient in their use of the frequency spectrum when compared to standard amplitude modulation (AM) signals.
See radio. The range of the electromagnetic spectrum located either above (the upper sideband) or below (the lower sideband) the frequency of a sinusoidal carrier signal c(t). The sidebands are produced by modulating the carrier signal in amplitude, frequency, or phase in accordance with a modulating signal m(t) to produce the modulated signal s(t). The resulting distribution of power in the sidebands of the modulated signal depends on the modulating signal and the particular form of modulation employed. See also Amplitude modulation; Frequency modulation; Modulation; Phase modulation.
In radio communications, a sideband is a signal that results from amplitude modulating a carrier frequency. The upper sideband is the carrier plus modulation, and the lower sideband is the carrier minus modulation; the two are mirror images of each other. See single sideband.
Figure: the power of an AM signal plotted against frequency, where fc is the carrier frequency and fm is the maximum modulation frequency.
In radio communications, a sideband is a band of frequencies higher than or lower than the carrier frequency, containing power as a result of the modulation process. The sidebands consist of all the Fourier components of the modulated signal except the carrier. All forms of modulation produce sidebands. Amplitude modulation of a carrier wave normally results in two mirror-image sidebands. The signal components above the carrier frequency constitute the upper sideband (USB) and those below the carrier frequency constitute the lower sideband (LSB). In conventional AM transmission, the carrier and both sidebands are present; this is sometimes called double sideband amplitude modulation (DSB-AM). In some forms of AM the carrier may be removed, producing double sideband with suppressed carrier (DSB-SC). An example is the stereophonic difference (L-R) information transmitted in FM stereo broadcasting on a 38 kHz subcarrier. The receiver locally regenerates the subcarrier by doubling a special 19 kHz pilot tone, but in other DSB-SC systems the carrier may be regenerated directly from the sidebands by a Costas loop or squaring loop. This is common in digital transmission systems such as BPSK, where the signal is continually present.
Figure: sidebands are evident in this spectrogram of an AM broadcast (carrier highlighted in red).
If part of one sideband and all of the other remain, it is called vestigial sideband, used mostly with television broadcasting, which would otherwise take up an unacceptable amount of bandwidth. Transmission in which only one sideband is transmitted is called single-sideband transmission, or SSB. SSB is the predominant voice mode on shortwave radio other than shortwave broadcasting. Since the sidebands are mirror images, which sideband is used is a matter of convention. In amateur radio, LSB is traditionally used below 10 MHz and USB is used above 10 MHz.
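For a single modulating tone, the mirror-image arithmetic above reduces to USB = fc + fm and LSB = fc − fm. A minimal Python sketch, where the carrier and tone values are illustrative assumptions:

```python
fc = 7_100_000   # illustrative carrier, 7.1 MHz
fm = 3_000       # illustrative voice-band tone, 3 kHz

# Mirror-image sidebands around the carrier
print(f"USB: {fc + fm} Hz, LSB: {fc - fm} Hz")

# Amateur-radio convention quoted above: LSB below 10 MHz, USB above
print("conventional voice mode:", "LSB" if fc < 10e6 else "USB")
```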
In SSB, the carrier is suppressed, significantly reducing the electrical power (by up to 12 dB) without affecting the information in the sideband. This makes for more efficient use of transmitter power and RF bandwidth, but a beat frequency oscillator must be used at the receiver to reconstitute the carrier. Another way to look at an SSB receiver is as an RF-to-audio frequency transposer: in USB mode, the dial frequency is subtracted from each radio frequency component to produce a corresponding audio component, while in LSB mode each incoming radio frequency component is subtracted from the dial frequency. Sidebands can also interfere with adjacent channels. The part of the sideband that would overlap the neighboring channel must be suppressed by filters, before or after modulation (often both). In broadcast-band frequency modulation (FM), subcarriers above 75 kHz are limited to a small percentage of modulation and are prohibited above 99 kHz altogether to protect the ±75 kHz normal deviation and ±100 kHz channel boundaries. Amateur radio and public service FM transmitters generally utilize ±5 kHz deviation. See Single-sideband modulation for more technical information about sideband modulation.
13) Briefly explain vestigial sideband transmission?
A telephone transmission system providing multiple modulated carrier communication channels between a single central station and plural remote stations on a single transmission medium which exhibits phase nonlinearities at certain frequencies, comprising: plural transmitters at said central and remote stations, each generating on said transmission medium double side band AM modulated communication signals at different carrier frequencies; and plural receivers at said central and remote stations, each tuned to one of said different carrier frequencies on said transmission medium, at least one of said plural receivers, which receiver is tuned to receive the double side band AM modulated communication signal from one of said plural transmitters, and which receiver is tuned to one of said certain frequencies which exhibit phase nonlinearities, attenuating at least a portion of one of said double side bands more than the corresponding portion of the other of said side bands to eliminate side band phase cancellation.
2. A transmission system, as defined in claim 1, wherein said transmission medium exhibits nonlinear phase characteristics at plural separated frequencies, and wherein plural of said receivers, which attenuate one of said double side bands more than the other of said double side bands, are utilized to receive different carrier frequencies at said plural separated frequencies.
3. A transmission system, as defined in claim 1, wherein at least one of said plural receivers receives full double side band AM modulated communication signals.
4. A transmission system, as defined in claim 1, wherein said one of said plural receivers provides substantially double side band reception at modulation frequencies below a first predetermined frequency, and substantially single side band reception at modulation frequencies above a second predetermined frequency.
5. A transmission system, as defined in claim 1, wherein said one of said plural receivers attenuates the received carrier frequency by approximately 3.5 dB.
6. A transmission system, as defined in claim 1, wherein said one of said plural receivers additionally comprises: filter means having an attenuation versus frequency slope characteristic at the received carrier frequency for reducing distortion caused by frequency drift of the carrier.
7. A transmission system, as defined in claim 1, wherein said one of said plural receivers includes a band pass filter, the pass band of which extends on both sides of the received carrier frequency, the poles on one side of the pass band having a relatively lower Q than the poles on the other side of the pass band.
8. A transmission system, as defined in claim 1, wherein said one of said plural receivers includes a band pass filter providing a pass band which extends above and below the received carrier frequency by a predetermined frequency amount, and a notch filter, the notch of which is frequency positioned adjacent one edge of said band pass filter.
9. A method of carrier multiplexing multiple telephone communication channels between a single central station and plural remote stations on a single communication medium exhibiting phase nonlinearities at a certain frequency, comprising: transmitting said multiple channels from said central and remote stations on said communication medium as double side band AM modulated carrier signals having carriers at different frequencies; and avoiding communication medium induced distortion at said certain frequency by receiving at least some modulation frequencies on said communication medium of at least one of said multiple double side band AM modulated channels at one of said central or remote stations as a single side band AM modulation signal.
14) Explain pulse amplitude modulation?
Pulse-amplitude modulation (PAM) is a form of signal modulation where the message information is encoded in the amplitude of a series of signal pulses. Example: a two-bit modulator (PAM-4) will take two bits at a time and will map the signal amplitude to one of four possible levels, for example −3 volts, −1 volt, 1 volt, and 3 volts. Demodulation is performed by detecting the amplitude level of the carrier at every symbol period. Pulse-amplitude modulation is widely used in baseband transmission of digital data, with non-baseband applications having been largely superseded by pulse-code modulation and, more recently, by pulse-position modulation. In particular, all telephone modems faster than 300 bit/s use quadrature amplitude modulation (QAM). (QAM uses a two-dimensional constellation.) It should be noted, however, that some versions of the widely popular Ethernet communication standard are a good example of PAM usage. In particular, the Fast Ethernet 100BASE-T2 medium, running at 100 Mb/s, utilizes 5-level PAM modulation (PAM-5) running at 25 megapulses/sec over two wire pairs. A special technique is used to reduce inter-symbol interference between the unshielded pairs. Later, the gigabit Ethernet 1000BASE-T medium raised the bar to use 4 pairs of wire, each running at 125 megapulses/sec, to achieve 1000 Mb/s data rates, still utilizing PAM-5 for each pair. The IEEE 802.3an standard defines the wire-level modulation for 10GBASE-T as a Tomlinson-Harashima Precoded (THP) version of pulse-amplitude modulation with 16 discrete levels (PAM-16), encoded in a two-dimensional checkerboard pattern known as DSQ128.
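A minimal Python sketch of the PAM-4 mapping described above, using the four levels quoted in the text (−3, −1, 1 and 3 volts). The particular bit-pair-to-level ordering is an illustrative assumption, since real standards each fix their own mapping:

```python
# PAM-4: two bits per symbol, mapped onto four amplitude levels.
# The bit-to-level order below is illustrative, not from any standard.
LEVELS = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}

def pam4_modulate(bits):
    """Group the bits in pairs and emit one amplitude per symbol period."""
    assert len(bits) % 2 == 0
    return [LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_demodulate(amplitudes):
    """Detect the nearest level in each symbol period and recover the bits."""
    inverse = {v: k for k, v in LEVELS.items()}
    bits = []
    for a in amplitudes:
        nearest = min(inverse, key=lambda level: abs(level - a))
        bits.extend(inverse[nearest])
    return bits

symbols = pam4_modulate([0, 1, 1, 0, 1, 1])   # -> [-1.0, 3.0, 1.0]
print(symbols, pam4_demodulate(symbols))
```

The nearest-level detection in the demodulator is what makes the scheme tolerant of small amplitude errors; wire-level variants such as PAM-5 and PAM-16 mentioned in the text differ only in the number of levels and the coding wrapped around them.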
Several proposals were considered for wire-level modulation, including PAM with 12 discrete levels (PAM-12), 10 levels (PAM-10), or 8 levels (PAM-8), both with and without Tomlinson-Harashima Precoding (THP).
15) Explain pulse position modulation?
Pulse-position modulation is a form of signal modulation in which M message bits are encoded by transmitting a single pulse in one of 2^M possible time-shifts. This is repeated every T seconds, such that the transmitted bit rate is M/T bits per second. It is primarily useful for optical communications systems, where there tends to be little or no multipath interference.
Synchronization
One of the key difficulties of implementing this technique is that the receiver must be properly synchronized to align the local clock with the beginning of each symbol. Therefore, it is often implemented differentially as differential pulse-position modulation, whereby each pulse position is encoded relative to the previous one, such that the receiver must only measure the difference in the arrival time of successive pulses. It is possible to limit the propagation of errors to adjacent symbols, so that an error in measuring the differential delay of one pulse will affect only two symbols, instead of affecting all successive measurements.
Sensitivity to Multipath Interference
Aside from the issues regarding receiver synchronization, the key disadvantage of PPM is that it is inherently sensitive to the multipath interference that arises in channels with frequency-selective fading, whereby the receiver's signal contains one or more echoes of each transmitted pulse. Since the information is encoded in the time of arrival (either differentially or relative to a common clock), the presence of one or more echoes can make it extremely difficult, if not impossible, to accurately determine the correct pulse position corresponding to the transmitted pulse.
Non-coherent Detection
One of the principal advantages of pulse position modulation is that it is an M-ary modulation technique that can be implemented non-coherently, such that the receiver does not need to use a phase-locked loop (PLL) to track the phase of the carrier. This makes it a suitable candidate for optical communications systems, where coherent phase modulation and detection are difficult and extremely expensive. The only other common M-ary non-coherent modulation technique is M-ary frequency shift keying (M-FSK), which is the frequency-domain dual to PPM.
PPM vs. M-FSK
PPM and M-FSK systems with the same bandwidth, average power, and transmission rate of M/T bits per second have identical performance in an AWGN (additive white Gaussian noise) channel. However, their performance differs greatly when comparing frequency-selective and frequency-flat fading channels. Whereas frequency-selective fading produces echoes that are highly disruptive for any of the M time-shifts used to encode PPM data, it selectively disrupts only some of the M possible frequency-shifts used to encode data for M-FSK. Conversely, frequency-flat fading is more disruptive for M-FSK than PPM, as all M of the possible frequency-shifts are impaired by fading, while the short duration of the PPM pulse means that only a few of the M time-shifts are heavily impaired by fading.
Applications for RF Communications
Narrowband RF (radio frequency) channels with low power and long wavelengths (i.e., low frequency) are affected primarily by flat fading, and PPM is better suited than M-FSK to be used in these scenarios.
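Before turning to applications, the basic PPM mapping above (M message bits selecting one of 2^M time slots per symbol) can be sketched in a few lines of Python; the slot count here is an illustrative choice:

```python
# PPM: M bits select one of 2**M time slots per symbol period. The
# "waveform" here is simply the index of the occupied slot per symbol.
M = 3              # bits per symbol (illustrative)
SLOTS = 2 ** M     # 8 possible pulse positions

def ppm_encode(bits):
    """Encode the bits M at a time as the index of the pulsed slot."""
    assert len(bits) % M == 0
    return [int("".join(map(str, bits[i:i + M])), 2)
            for i in range(0, len(bits), M)]

def ppm_decode(positions):
    """Recover the bits from the arrival slot of each pulse."""
    bits = []
    for index in positions:
        bits.extend(int(b) for b in format(index, f"0{M}b"))
    return bits

tx = ppm_encode([1, 0, 1, 0, 0, 1])   # -> slots [5, 1]
print(tx, ppm_decode(tx))
```

Note that the decoder relies entirely on knowing where each symbol period starts, which is exactly the synchronization difficulty described above; an echo that lands in the wrong slot would corrupt the decoded bits, which is the multipath sensitivity.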
One common application with these channel characteristics, first used in the early 1960s, is the radio control of model aircraft, boats and cars. PPM is employed in these systems, with the position of each pulse representing the angular position of an analogue control on the transmitter, or possible states of a binary switch. The number of pulses per frame gives the number of controllable channels available. The advantage of using PPM for this type of application is that the electronics required to decode the signal are extremely simple, which leads to small, light-weight receiver/decoder units. (Model aircraft require parts that are as lightweight as possible.) Servos made for model radio control include some of the electronics required to convert the pulse to the motor position; the receiver is merely required to demultiplex the separate channels and feed the pulses to each servo. More sophisticated R/C systems are now often based on pulse-code modulation, which is more complex but offers greater flexibility and reliability. Pulse position modulation is also used for communication to the ISO 15693 contactless smart card, as well as the HF implementation of the EPC Class 1 protocol for RFID tags.
(UNIT-V)
(PART-A)
1) A radio receiver is an electronic circuit that receives its input from an antenna.
2) A receiver uses electronic filters to separate a wanted radio signal from all other signals.
3) In consumer electronics, the terms radio and radio receiver are often used specifically for receivers.
4) Simple crystal radio receivers operate using the power received from radio waves.
5) Specialized-use receivers, such as telemetry receivers, allow the remote measurement and reporting of information.
(PART-B)
6) Write a short note on receiver?
A radio receiver is an electronic circuit that receives its input from an antenna, uses electronic filters to separate a wanted radio signal from all other signals picked up by this antenna, amplifies it to a level suitable for further processing, and finally converts, through demodulation and decoding, the signal into a form usable for the consumer, such as sound, pictures, digital data, measurement values, or navigational positions.
7) What is the use of this receiver?
In consumer electronics, the terms radio and radio receiver are often used specifically for receivers designed for the sound signals transmitted by radio broadcasting services – historically the first mass-market radio application.
8) What is a semiconductor?
Further developments in semiconductor technology led to the introduction of the integrated circuit in the late 1950s.[5] This enabled radio receiver technology to move forward even further. Integrated circuits enabled high performance circuits to be built for less cost, and significant amounts of space could be saved. As a result of these developments new techniques could be introduced. One of these was the frequency synthesizer that was used to generate the local oscillator signal for the receiver. By using a synthesizer it was possible to generate a very accurate and stable local oscillator signal. Also, the ability of synthesizers to be controlled by microprocessors meant that many new facilities could be introduced, apart from the significant performance improvements offered by synthesizers.
9) Describe the receiver design?
The advantage of this method is that most of the radio's signal path has to be sensitive to only a narrow range of frequencies.
Only the front end (the part before the frequency converter stage) needs to be sensitive to a wide frequency range. For example, the front end might need to be sensitive to 1–30 MHz, while the rest of the radio might need to be sensitive only to 455 kHz, a typical IF. Only one or two tuned stages need to be adjusted to track over the tuning range of the receiver; all the intermediate-frequency stages operate at a fixed frequency which need not be adjusted.
10) What is the receiver's advantage?
A radio receiver is an electronic circuit that receives its input from an antenna, uses electronic filters to separate a wanted radio signal from all other signals picked up by this antenna, amplifies it to a level suitable for further processing, and finally converts, through demodulation and decoding, the signal into a form usable for the consumer, such as sound, pictures, digital data, measurement values, or navigational positions.
(PART-C)
11) Explain about receiver?
Receiver (radio)
A radio receiver is an electronic circuit that receives its input from an antenna, uses electronic filters to separate a wanted radio signal from all other signals picked up by this antenna, amplifies it to a level suitable for further processing, and finally converts, through demodulation and decoding, the signal into a form usable for the consumer, such as sound, pictures, digital data, measurement values, navigational positions, etc.[1]
Figure: old-fashioned radio receiver (a wireless Truetone model from about 1940).
In consumer electronics, the terms radio and radio receiver are often used specifically for receivers designed for the sound signals transmitted by radio broadcasting services – historically the first mass-market radio application.
Types of radio receivers
Various types of radio receivers include:
- Consumer audio and high fidelity audio receivers and AV receivers, used by home stereo listeners and audio and home theatre system enthusiasts.
- Communications receivers, used as a component of a radio communication link, characterized by high stability and reliability of performance.
- Simple crystal radio receivers (also known as crystal sets), which operate using the power received from radio waves.
- Satellite television receivers, used to receive television programming from communication satellites in geosynchronous orbit.
- Specialized-use receivers, such as telemetry receivers that allow the remote measurement and reporting of information.
- Measuring receivers (also: measurement receivers), calibrated laboratory-grade devices that are used to measure the signal strength of broadcasting stations and the electromagnetic interference radiation emitted by electrical products, as well as to calibrate RF attenuators and signal generators.
- Scanners, specialized receivers that can automatically scan two or more discrete frequencies, stopping when they find a signal on one of them and then continuing to scan other frequencies when the initial transmission ceases. They are mainly used for monitoring VHF and UHF radio systems.
Consumer audio receivers
In the context of home audio systems, the term "receiver" often refers to a combination of a tuner, a preamplifier, and a power amplifier all on the same chassis. Audiophiles will refer to such a device as an integrated receiver, while a single chassis that implements only one of the three component functions is called a discrete component.
Some audio purists still prefer three discrete units - tuner, preamplifier and power amplifier - but the integrated receiver has, for some years, been the mainstream choice for music listening. The first integrated stereo receiver was made by the Harman Kardon company, and came onto the market in 1958. It had undistinguished performance, but it represented a breakthrough to the "all in one" concept of a receiver, and rapidly improving designs gradually made the receiver the mainstay of the marketplace. Many radio receivers also include a loudspeaker.
Hi-Fi / Home theater
Today AV receivers are a common component in a high-fidelity or home-theatre system. The receiver is generally the nerve centre of a sophisticated home-theatre system, providing selectable inputs for a number of different audio components (like turntables, compact-disc players and recorders, and tape decks) and video components (like video-cassette recorders, DVD players and recorders, video-game systems, and televisions). With the decline of vinyl discs, modern receivers tend to omit inputs for turntables, which have separate requirements of their own. All other common audio/visual components can use any of the identical line-level inputs on the receiver for playback, regardless of how they are marked (the "name" on each input is mostly for the convenience of the user). For instance, a second CD player can be plugged into an "Aux" input, and will work the same as it would in the "CD" input jacks. Some receivers can also provide signal processors to give a more realistic illusion of listening in a concert hall. Digital audio S/PDIF and USB connections are also common today. The home theater receiver, in the vocabulary of consumer electronics, comprises both the 'radio receiver' and other functions, such as control, sound processing, and power amplification. The standalone radio receiver is usually known in consumer electronics as a tuner. Some modern integrated receivers can send audio out to seven loudspeakers and an additional channel for a subwoofer, and often include connections for headphones. Receivers vary greatly in price, and support stereophonic or surround sound. A high-quality receiver for dedicated audio-only listening (two channel stereo) can be relatively inexpensive; excellent ones can be purchased for $300 US or less. Because modern receivers are purely electronic devices with no moving parts, unlike electromechanical devices like turntables and cassette decks, they tend to offer many years of trouble-free service. In recent years, the home theater in a box has become common, which often integrates a surround-capable receiver with a DVD player. The user simply connects it to a television, perhaps other components, and a set of loudspeakers.
Portable radios
Portable radios include simple transistor radios that are typically monaural and receive the AM, FM, and/or short wave broadcast bands. FM, and often AM, radios are sometimes included as a feature of portable DVD/CD, MP3 CD, and USB key players, as well as cassette player/recorders. AM/FM stereo car radios can be a separate dashboard-mounted component or a feature of in-car entertainment systems. A boombox (or boom-box), also sometimes known as a ghettoblaster or a jambox, or (in parts of Europe) as a "radio-cassette", is a name given to larger portable stereo systems capable of playing radio stations and recorded music, often at a high level of volume.
Self-powered portable radios, such as clockwork radios, are used in developing nations or as part of an emergency preparedness kit.[2]
Early development
While James Clerk Maxwell was the first person to predict the existence of electromagnetic waves, it was in 1887 that a German named Heinrich Hertz demonstrated these new waves by using spark gap equipment to transmit and receive radio or "Hertzian waves", as they were first called. The experiments were not followed up by Hertz. The practical applications of wireless communication and remote control technology were implemented by Nikola Tesla. The world's first radio receiver (a thunderstorm register) was designed by Alexander Stepanovich Popov, and it was first seen at the All-Russia exhibition in 1896. He was the first to demonstrate the practical application of electromagnetic (radio) waves,[3] although he did not care to apply for a patent for his invention. A device called a coherer became the basis for receiving radio signals. The first person to use the device to detect radio waves was a Frenchman named Edouard Branly, and Oliver Lodge popularised it when he gave a lecture in 1898 in honour of Hertz. Lodge also made improvements to the coherer. Guglielmo Marconi believed that these new waves could be used to communicate over great distances and made significant improvements to both radio receiving and transmitting apparatus. In 1895 Marconi demonstrated the first viable radio system, leading to transatlantic radio communication in December 1901. John Ambrose Fleming's development of an early thermionic valve to help detect radio waves was based upon a discovery of Thomas Edison's (called "the Edison effect", which essentially modified an early light bulb). Fleming called it his "oscillation valve" because it acted in the same way as a water valve, in only allowing flow in one direction. While Fleming's valve was a great stride forward, it would take some years before thermionic, or vacuum-tube, technology was fully adopted. Around this time work on other types of detectors started to be undertaken, and it resulted in what was later known as the cat's whisker. It consisted of a crystal of a material such as galena with a small springy piece of wire brought up against it. The detector was constructed so that the wire contact could be moved to different points on the crystal, and thereby obtain the best point for rectifying the signal and the best detection. They were never very reliable, as the "whisker" needed to be moved periodically to enable it to detect the signal properly.[4]
Valves (Tubes)
An American named Lee de Forest, a competitor to Marconi, set about developing receiver technology that did not infringe any patents to which Marconi had access. He took out a number of patents in the period between 1905 and 1907 covering a variety of developments that culminated in the form of the triode valve, in which there was a third electrode called a grid. He called this an audion tube. One of the first areas in which valves were used was in the manufacture of telephone repeaters, and although the performance was poor, they gave significant improvement in long distance telephone receiving circuits. With the discovery that triode valves could amplify signals, it was soon noticed that they would also oscillate, a fact that was exploited in generating signals. Once the triode was established as an amplifier, it made a tremendous difference to radio receiver performance as it allowed the incoming signals to be amplified.
One approach that proved very successful was introduced in 1913 and involved the use of positive feedback in the form of a regenerative detector. This gave significant improvements in the levels of gain that could be achieved and greatly increased selectivity, enabling this type of receiver to outperform all other types of the era. With the outbreak of the First World War there was a great impetus to develop radio receiving technology further. An American named Irving Langmuir helped introduce a new generation of fully evacuated "hard" valves. H. J. Round undertook some work on this, and in 1916 he produced a number of valves with the grid connection taken out of the top of the envelope, away from the anode connection.[4]
Autodyne and superheterodyne
Although by the 1920s the tuned radio frequency (TRF) receiver represented a major improvement in performance over what had been available before, it still fell short of the needs of some new applications. To enable receiver technology to meet the demands placed upon it, a number of new ideas started to surface. One of these was a new form of direct-conversion receiver, in which an internal or local oscillator was used to beat with the incoming signal to produce an audible signal that could be amplified by an audio amplifier. H. J. Round developed a receiver he called an autodyne, in which the same valve was used as both mixer and oscillator. Whilst the set used fewer valves, it was difficult to optimise the circuit for both the mixer and oscillator functions. The next leap forward in receiver technology was a new type of receiver known as the superheterodyne, or supersonic heterodyne, receiver. A Frenchman named Lucien Levy was investigating ways in which receiver selectivity could be improved, and in doing so he devised a system whereby the signals were converted down to a lower frequency where the filter bandwidths could be made narrower. A further advantage was that the gain of valves was considerably greater at the lower frequencies used after the frequency conversion, and there were fewer problems with the circuits bursting into oscillation. The idea of a receiver with a fixed intermediate-frequency amplifier and filter is credited to Edwin Armstrong. Working for the American Expeditionary Force in Europe in 1918, Armstrong reasoned that if the incoming signals were mixed with a variable-frequency oscillator, a low-frequency, fixed-tuned amplifier could be used. Armstrong's original receiver consisted of a total of eight valves. Several tuned circuits could be cascaded to improve selectivity, and, being on a fixed frequency, they did not need to be retuned in step with one another: the filters could be preset and left correctly tuned. Armstrong was not the only person working on the idea of a superhet. Alexander Meissner in Germany took out a patent for the idea six months before Armstrong, but as Meissner did not prove the idea in practice and did not build a superhet radio, the idea is credited to Armstrong. The need for the increased performance of the superhet receiver was first felt in America, and by the late 1920s most sets there were superhets. In Europe the number of broadcast stations did not rise as rapidly until later; even so, by the mid-1930s virtually all receiving sets in Europe were also using the superhet principle. In 1926 the tetrode valve was introduced, enabling further improvements in performance.[4]
War and postwar developments
In 1939 the outbreak of war gave a new impetus to receiver development.
During this time a number of classic communications receivers were designed. Some, like the National HRO, are still sought by enthusiasts today, and although they are relatively large by today's standards, they can still give a good account of themselves under current crowded-band conditions. In the late 1940s the transistor was invented. Initially the devices were not widely used because of their expense and because valves were being made smaller and performed better. By the early 1960s, however, portable transistor broadcast receivers (transistor radios) were hitting the marketplace. These radios were ideal for broadcast reception on the long and medium wave bands: they were much smaller than their valve equivalents, they were portable, and they could be powered from batteries. Although some valve portable receivers were available, batteries for these were expensive and did not last long. The power requirements of transistor radios were very much lower, so batteries lasted far longer and were considerably cheaper.[4]
Semiconductors
Further developments in semiconductor technology led to the introduction of the integrated circuit in the late 1950s.[5] This enabled radio receiver technology to move forward even further. Integrated circuits enabled high-performance circuits to be built at lower cost, and significant amounts of space could be saved. As a result of these developments new techniques could be introduced. One of these was the frequency synthesizer, used to generate the local oscillator signal for the receiver. A synthesizer can generate a very accurate and stable local oscillator signal, and the ability of synthesizers to be controlled by microprocessors meant that many new facilities could be introduced, beyond the significant performance improvements the synthesizers themselves offered.[4]
Digital technologies
Receiver technology is still moving forward. Digital signal processing, in which the signal is converted to a digital stream and manipulated mathematically so that many functions of the analog intermediate-frequency stage can be performed digitally, is now widespread. The new digital audio broadcasting standard being introduced can only be used when the receiver can manipulate the signal digitally. While today's radios are miracles of modern technology, filled with low-power, high-performance integrated circuits crammed into the smallest spaces, the basic principle of the radio is usually the superhet, the same idea developed by Edwin Armstrong back in 1918.[4]
12) Briefly explain the superheterodyne receiver?
SUPERHETERODYNE RECEIVER
A five-tube superhet receiver made in Japan, about 1955.
In electronics, the superheterodyne receiver (also known as the supersonic heterodyne receiver, or by the abbreviated form superhet) is a receiver that uses the principle of frequency mixing, or heterodyning, to convert the received signal to a lower (sometimes higher) "intermediate" frequency, which can be more conveniently processed than the original carrier frequency. Virtually all modern radio and TV receivers use the superheterodyne principle.
A two-section variable capacitor, as used in a superhet receiver.
The word heterodyne is derived from the Greek roots hetero-, "different", and -dyne, "power". The original heterodyne technique was pioneered by the Canadian inventor-engineer Reginald Fessenden but was not pursued far because local oscillators were not very stable at the time.[1] Later, the superheterodyne (superhet) principle was conceived in 1918 by Edwin Armstrong during World War I, as a means of overcoming the deficiencies of early vacuum triodes used as high-frequency amplifiers in radio direction finding (RDF) equipment. Unlike simple radio communication, which only needs to make transmitted signals audible, RDF requires actual measurements of received signal strength, which necessitates linear amplification of the actual carrier wave. In a triode RF amplifier, if both the plate and grid are connected to resonant circuits tuned to the same frequency, stray capacitive coupling between the grid and the plate will cause the amplifier to go into oscillation if the stage gain is much more than unity. In early designs, dozens (in some cases over 100) of low-gain triode stages had to be connected in cascade to make workable equipment, which drew enormous amounts of power in operation and required a team of maintenance engineers. The strategic value was so high, however, that the British Admiralty felt the high cost was justified. Armstrong realized that if RDF could be operated at a higher frequency, it would allow detection of enemy shipping much more effectively, but at the time no practical "short wave" amplifier (defined then as anything above 500 kHz) existed, owing to the limitations of the triodes of the day.
A "heterodyne" refers to a beat or "difference" frequency produced when two or more radio-frequency carrier waves are fed to a detector. The term was originally coined by Fessenden to describe his proposed method of making Morse code transmissions from an Alexanderson alternator type transmitter audible. With the spark-gap transmitters then in wide use, the Morse code signal consisted of short bursts of a heavily modulated carrier wave, which could be clearly heard as a series of short chirps or buzzes in the receiver's headphones. The signal from an Alexanderson alternator, on the other hand, did not have any such inherent modulation, and Morse code from one of those would only be heard as a series of clicks or thumps. Fessenden's idea was to run two Alexanderson alternators, one producing a carrier frequency 3 kHz higher than the other. In the receiver's detector the two carriers would beat together to produce a 3 kHz tone, so in the headphones the Morse signals would be heard as a series of 3 kHz beeps (a numerical sketch of this mixing follows below). For this he coined the term "heterodyne", meaning "generated by a difference" (in frequency). Later, when vacuum triodes became available, the same result could be achieved more conveniently by incorporating a "local oscillator" in the receiver, which became known as a "beat frequency oscillator" or BFO. As the BFO frequency was varied, the pitch of the heterodyne could be heard to vary with it; if the frequencies were too far apart, the heterodyne became ultrasonic and hence no longer audible. It had been noticed some time before that if a regenerative receiver was allowed to go into oscillation, other receivers nearby would suddenly start picking up stations on frequencies different from those on which the stations were actually transmitting.
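To make the beat idea concrete, here is a minimal numerical sketch (an illustration added for this answer, not part of the original text; the sample rate and the 60/63 kHz carrier pair are arbitrary choices for the example). Multiplying two carriers spaced 3 kHz apart, as in Fessenden's two-alternator scheme, produces components at their difference and sum:

```python
import numpy as np

fs = 400_000                     # sample rate in Hz (illustrative choice)
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of signal

f1, f2 = 60_000, 63_000          # two carriers spaced 3 kHz apart

# A multiplying detector forms the product of the two carriers:
# cos(2*pi*f1*t) * cos(2*pi*f2*t)
#   = 0.5*cos(2*pi*(f2 - f1)*t) + 0.5*cos(2*pi*(f2 + f1)*t)
product = np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)

# The spectrum shows energy at the 3 kHz difference (the audible beat)
# and at the 123 kHz sum, which the ear cannot hear.
spectrum = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(len(product), 1 / fs)
for f in freqs[spectrum > 0.5 * spectrum.max()]:
    print(f"component near {f / 1000:.1f} kHz")   # 3.0 kHz and 123.0 kHz
```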
Armstrong (and others) eventually deduced that this spurious reception was caused by a "supersonic heterodyne" between the station's carrier frequency and the receiver's oscillator frequency. Thus, for example, if a station was transmitting on 300 kHz and the oscillating receiver was set to 400 kHz, the station would be heard not only at the original 300 kHz but also at 100 kHz and 700 kHz. Armstrong realized that this was a potential solution to the "short wave" amplification problem, since the beat frequency retained the original modulation but on a lower carrier frequency. To monitor a frequency of 1500 kHz, for example, he could set up an oscillator at, say, 1560 kHz, producing a heterodyne at 60 kHz, a frequency that could be much more conveniently amplified by the triodes of the day. He termed this the "intermediate frequency", often abbreviated to "IF". As QST magazine put it (page 11, December 1922): "In December, 1919, Major E. H. Armstrong gave publicity to an indirect method of obtaining short-wave amplification, called the Super-Heterodyne. The idea is to reduce the incoming frequency which may be, say 1,500,000 cycles (200 meters), to some suitable super-audible frequency which can be amplified efficiently, then passing this current through a radio frequency amplifier and finally rectifying and carrying on to one or two stages of audio frequency amplification."
Early superheterodyne receivers actually used IFs as low as 20 kHz, often based around the self-resonance of iron-cored transformers. This made them extremely susceptible to image-frequency interference, but at the time the main objective was sensitivity rather than selectivity. Using this technique, a small number of triodes could be made to do the work that formerly required dozens, or even hundreds. 1920s commercial IF transformers looked very similar to 1920s audio interstage coupling transformers and were wired up in an almost identical manner. By the mid-1930s superhets were using much higher intermediate frequencies (typically around 440-470 kHz), with tuned coils very similar in construction to the aerial and oscillator coils, but the term "intermediate frequency transformer", or IFT, persists to this day. Modern receivers typically use a mixture of ceramic filters and/or SAW resonators as well as traditional tuned-inductor IF transformers.
Armstrong was able to put his ideas into practice quite quickly, and the technique was rapidly adopted by the military. However, it was less popular when commercial radio broadcasting began in the 1920s. There were many factors involved, but the main issues were the need for an extra tube for the oscillator, the generally higher cost of the receiver, and the level of technical skill required to operate it. For early domestic radios, tuned-RF (TRF) receivers such as the Neutrodyne were much more popular because they were cheaper, easier for a non-technical owner to use, and less costly to operate. Armstrong eventually sold his superheterodyne patent to Westinghouse, who then sold it to RCA, the latter monopolizing the market for superheterodyne receivers until 1930.[2] By the 1930s, improvements in vacuum-tube technology rapidly eroded the TRF receiver's cost advantages, and the explosion in the number of broadcasting stations created a demand for cheaper, higher-performance receivers. First, the development of practical indirectly-heated-cathode tubes allowed the mixer and oscillator functions to be combined in a single pentode tube, in the so-called autodyne mixer.
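The arithmetic in these examples can be restated compactly: a mixer produces outputs at the difference and the sum of the signal and oscillator frequencies. A small hypothetical helper, written only to rehearse the numbers quoted above:

```python
def heterodyne_products(f_signal_khz, f_lo_khz):
    """Difference and sum frequencies produced when a signal
    beats against a local oscillator (all values in kHz)."""
    return abs(f_signal_khz - f_lo_khz), f_signal_khz + f_lo_khz

# The 300 kHz station heard by a receiver oscillating at 400 kHz:
print(heterodyne_products(300, 400))     # (100, 700)

# Armstrong's short-wave case: 1500 kHz signal, 1560 kHz oscillator,
# giving the 60 kHz intermediate frequency he could amplify.
print(heterodyne_products(1500, 1560))   # (60, 3060)
```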
The single-tube autodyne mixer was rapidly followed by the introduction of low-cost multi-element tubes specifically designed for superheterodyne operation. These allowed the use of much higher intermediate frequencies (typically around 440-470 kHz), which greatly reduced the problem of image-frequency interference. By the mid-1930s the TRF technique was obsolete for commercial receiver production, and the superheterodyne principle was eventually taken up for virtually all commercial radio and TV designs.
13) Explain the receiver faults?
The basic elements of a single-conversion superhet receiver are a local oscillator and a mixer, followed by a fixed-tuned filter and IF amplifier; these elements are common to all superhet circuits. Cost-optimized designs may use one active device for both local oscillator and mixer - this is sometimes called a "converter" stage. One such example is the pentagrid converter. The advantage of this method is that most of the radio's signal path has to be sensitive to only a narrow range of frequencies. Only the front end (the part before the frequency-converter stage) needs to be sensitive to a wide frequency range. For example, the front end might need to be sensitive to 1-30 MHz, while the rest of the radio might need to be sensitive only to 455 kHz, a typical IF. Only one or two tuned stages need to be adjusted to track over the tuning range of the receiver; all the intermediate-frequency stages operate at a fixed frequency which need not be adjusted. To overcome obstacles such as image response, multiple IF stages are used, in some cases with two IFs of different values. For example, the front end might be sensitive to 1-30 MHz, the first half of the radio to 5 MHz, and the last half to 50 kHz. Two frequency converters would then be used, and the radio would be a "double-conversion superheterodyne" - a common example is a television receiver, where the audio information is obtained from a second stage of intermediate-frequency conversion. Occasionally, special-purpose receivers use an intermediate frequency much higher than the signal in order to obtain very high image rejection.
Superheterodyne receivers have superior frequency stability and selectivity compared with simpler receiver types. They offer much better stability than tuned radio frequency (TRF) receivers because a tuneable oscillator is more easily stabilized than a tuneable amplifier, especially with modern frequency-synthesizer technology, and IF filters can give much narrower passbands at the same Q factor than an equivalent RF filter. A fixed IF also allows the use of a crystal filter when exceptionally high selectivity is necessary. Regenerative and super-regenerative receivers offer better sensitivity than a TRF receiver but suffer from stability and selectivity problems. In the case of modern television receivers, no other technique was able to produce the precise bandpass characteristic needed for vestigial sideband reception, first used with the original NTSC system introduced in 1941. This originally involved a complex collection of tuneable inductors which needed careful adjustment, but since the early 1980s these have been replaced with precision electromechanical surface acoustic wave (SAW) filters. Fabricated by precision laser-milling techniques, SAW filters are much cheaper to produce, can be made to extremely close tolerances, and are extremely stable in operation.
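As a sketch of how the single-conversion chain fits together (an illustrative simulation added for this answer, not a description of any particular receiver; the sample rate, station frequency, modulation, and crude brick-wall filtering are all assumptions made for the example), the following mixes a 1.5 MHz AM signal against a high-side local oscillator and selects the 455 kHz product with a fixed IF filter:

```python
import numpy as np

FS = 10_000_000          # simulation sample rate (illustrative)
F_IF = 455_000           # the common 455 kHz intermediate frequency
F_STATION = 1_500_000    # a station inside the 1-30 MHz front-end range
F_LO = F_STATION + F_IF  # high-side local oscillator at 1955 kHz

t = np.arange(0, 0.002, 1 / FS)
audio = 1 + 0.5 * np.sin(2 * np.pi * 1_000 * t)    # 1 kHz AM modulation
rf = audio * np.cos(2 * np.pi * F_STATION * t)     # modulated carrier

# Mixer stage: multiply the incoming RF by the local oscillator,
# producing components at F_LO - F_STATION (= F_IF) and F_LO + F_STATION.
mixed = rf * np.cos(2 * np.pi * F_LO * t)

# Fixed-tuned IF filter: a crude brick-wall band-pass around 455 kHz.
# Tuning to another station means moving F_LO; this filter never changes.
spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / FS)
spectrum[np.abs(freqs - F_IF) > 20_000] = 0
if_signal = np.fft.irfft(spectrum, n=len(mixed))

# Envelope detection (rectification) recovers the audio from the IF;
# a low-pass filter would complete the demodulation in a real receiver.
recovered = np.abs(if_signal)
```

The selectivity point in the passage is visible in the numbers: a filter of given Q has bandwidth roughly f/Q, so a Q of 100 at the 455 kHz IF passes about 4.55 kHz, whereas the same Q at 15 MHz would pass 150 kHz.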
Microprocessor technology allows the superheterodyne receiver design to be replaced by a software-defined radio architecture, in which the IF processing after the initial IF filter is implemented in software. This technique is already in use in certain designs, such as very low-cost FM radios incorporated into mobile phones, where the necessary microprocessor is already present in the system. Radio transmitters may also use a mixer stage to produce an output frequency, working more or less as the reverse of a superheterodyne receiver.
Drawbacks
Drawbacks of the superheterodyne receiver include interference from signal frequencies close to the intermediate frequency. To prevent this, IF frequencies are generally controlled by regulatory authorities, and this is the reason most receivers use common IFs. Examples are 455 kHz for AM radio, 10.7 MHz for FM, and 38.9 MHz (Europe) or 45 MHz (US) for television. (For AM radio a variety of IFs have been used, but most of the Western world settled on 455 kHz, in large part because of the almost universal transition to Japanese-made ceramic resonators built to the US standard of 455 kHz. In more recent digitally tuned receivers this was changed to 450 kHz, as that figure simplifies the design of the synthesizer circuitry.) Additionally, in urban environments with many strong signals, the signals from multiple transmitters may combine in the mixer stage to interfere with the desired signal.
14) Explain about receiver applications?
High-side and low-side injection
The amount by which a signal is shifted by the local oscillator depends on whether its frequency f is higher or lower than fLO; in either case its new frequency is |f - fLO|. Therefore there are potentially two signals that could both shift to the same fIF: one at f = fLO + fIF and another at f = fLO - fIF. One or the other of those signals, called the image frequency, has to be filtered out before the mixer to avoid aliasing. When the upper one is filtered out, the arrangement is called high-side injection, because fLO is above the frequency of the received signal; the other case is called low-side injection. High-side injection also reverses the order of a signal's frequency components. Whether that actually changes the signal depends on whether it has spectral symmetry. The reversal can be undone later in the receiver if necessary.
Image frequency (fimage)
One major disadvantage of the superheterodyne receiver is the problem of image frequency. In heterodyne receivers, an image frequency is an undesired input frequency equal to the station frequency plus twice the intermediate frequency. The image frequency results in two stations being received at the same time, producing interference. Image frequencies can be eliminated by sufficient attenuation of the incoming signal by the RF amplifier filter of the superheterodyne receiver. Early autodyne receivers typically used IFs of only 150 kHz or so, as it was difficult to maintain reliable oscillation at higher frequencies. As a consequence, most autodyne receivers needed quite elaborate antenna tuning networks, often involving double-tuned coils, to avoid image interference. Later superhets used tubes especially designed for oscillator/mixer use, which were able to work reliably with much higher IFs, reducing the problem of image interference and allowing simpler and cheaper aerial tuning circuitry.
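In formula form (the helper name here is hypothetical, but the relationship is exactly the one stated above): with high-side injection the oscillator sits at fLO = fstation + fIF, and the image lies a further fIF above the oscillator, so fimage = fstation + 2 fIF.

```python
def image_frequency_khz(f_station_khz, f_if_khz):
    """Image frequency for high-side injection:
    f_image = f_station + 2 * f_IF."""
    return f_station_khz + 2 * f_if_khz

# AM broadcast example: tuned to 1000 kHz with a 455 kHz IF, the local
# oscillator runs at 1455 kHz. Both 1000 kHz and 1910 kHz differ from
# the oscillator by exactly 455 kHz, so both land on the IF unless the
# RF stage filters the image out before the mixer.
f_img = image_frequency_khz(1000, 455)
print(f_img)                # 1910
print(abs(f_img - 1455))    # 455 -> same IF as the wanted station
```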
Local oscillator radiation
It is difficult to keep stray radiation from the local oscillator below the level that a nearby receiver can detect, which means there can be mutual interference between two or more superheterodyne receivers operating in close proximity. In espionage, oscillator radiation gives a means of detecting a covert receiver and its operating frequency.
Local oscillator sideband noise
Local oscillators typically generate a single-frequency signal that has negligible amplitude modulation but some random phase modulation. Either of these impurities spreads some of the signal's energy into sideband frequencies. That causes a corresponding widening of the receiver's frequency response, which would defeat the aim of making a very narrow-bandwidth receiver, such as one for receiving low-rate digital signals. Care needs to be taken to minimise oscillator phase noise, usually by ensuring that the oscillator never enters a non-linear mode.
15) Explain about receiver frequency?
FREQUENCY: For a given crystal cut, lower-frequency crystals exhibit superior stability and, for a given frequency, the higher-overtone crystals will usually provide the best stability. A simple rule of thumb is "the more quartz the better", down to about 5 MHz, below which frequency dividers are usually the better choice. High-frequency oscillators may include phase-locked loops or frequency multipliers to take advantage of a low-frequency crystal's stability. Multiplied oscillators are preferred above 120 MHz when stability is a key issue.
AGING: New, high-quality ovenized quartz crystals typically exhibit a small, positive frequency drift with time, unrelated to external influences. A significant drop in this "aging" rate occurs after the first few weeks of operation at the operating temperature. Ultimate aging rates below 0.1 PPB per day are achieved by the highest-quality crystals, and rates of 1 PPB per day are commonplace (a worked example of this arithmetic appears at the end of this answer). Significant negative aging (dropping frequency) indicates a bad crystal, probably a leaking package.
A typical aging curve for a new ovenized oscillator.
TEMPERATURE: The primary effect of temperature variations is to change the oscillator's frequency. Oven oscillators offer the best temperature stability and largely avoid the problems associated with activity dips - drops in crystal Q that can appear in narrow temperature windows, causing sudden frequency shifts and amplitude variations. Temperature stability below 0.1 PPB can be achieved, but the aging rate often dominates the frequency error budget after only a few days. The specification should state whether the stability figure is peak-to-peak over the entire range or relative to room temperature. Variation from room temperature is a popular method of specification, since the oscillator is usually tuned at room temperature. Non-oven XOs and TCXOs may drift slowly to a new frequency after the ambient temperature changes, since the internal thermal time constants can be fairly long.
RETRACE: When power is removed from an oscillator and re-applied several hours later, the frequency will stabilize at a slightly different value.
This "retrace" error is usually specified for a twenty-four-hour off-time followed by a warm-up time sufficient to allow complete thermal equilibrium. Retrace errors often diminish after warming, as though the crystal walks back down its aging curve while cold and then exponentially approaches its previous drift curve once reactivated. Oscillators stored at extremely cold temperatures for extended periods may exhibit a frequency-versus-time curve much like the initial "green" aging curve of a new crystal. In addition to the crystal-related effects described above, mechanical shifts can also occur due to the thermal stresses of heating and cooling the oven structure. A common retrace error source is the mechanical device used to adjust the oscillator's frequency. Precision multi-turn variable capacitors exhibit good retrace, but a good practice is to turn the screw back slightly after setting, to relieve any stress. Most Wenzel oscillators use special precision potentiometers which exhibit an unusually low amount of retrace and hysteresis.
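As the worked example of the aging figures promised above (an illustrative calculation only; real aging curves flatten with time, so a constant rate is a pessimistic simplification for long spans), a linear 0.1 PPB-per-day drift on a 10 MHz oscillator amounts to about 1 mHz per day:

```python
def aging_offset_hz(f0_hz, rate_ppb_per_day, days):
    """Frequency offset accumulated by aging at a constant rate.
    Real crystals age nonlinearly and the rate falls with time,
    so this linear model overstates long-term drift."""
    return f0_hz * rate_ppb_per_day * 1e-9 * days

# A 10 MHz ovenized oscillator at the 0.1 PPB/day 'ultimate' rate:
print(aging_offset_hz(10e6, 0.1, 1))     # 0.001 Hz after one day
print(aging_offset_hz(10e6, 0.1, 365))   # ~0.365 Hz after a year

# The same oscillator at the 'commonplace' 1 PPB/day rate:
print(aging_offset_hz(10e6, 1.0, 30))    # 0.3 Hz after a month
```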