{{Use dmy dates|date=November 2013}}
:''This article is about the history and development of passive linear analogue filters used in electronics. For linear filters in general see [[Linear filter]]. For electronic filters in general see [[Electronic filter]].''

{{Linear analog electronic filter}}

'''Analogue [[Filter (signal processing)|filters]]''' are a basic building block of [[signal processing]] much used in [[electronics]]. Amongst their many applications are the separation of an audio signal before application to [[bass (music)|bass]], [[mid-range speaker|mid-range]] and [[tweeter]] [[loudspeaker]]s; the combining and later separation of multiple telephone conversations onto a single channel; the selection of a chosen [[radio station]] in a [[radio receiver]] and rejection of others.

Passive linear electronic analogue filters are those filters which can be described with [[linear differential equation]]s (linear); they are composed of [[capacitor]]s, [[inductor]]s and, sometimes, [[resistor]]s ([[passive component|passive]]) and are designed to operate on continuously varying ([[analog (signal)|analogue]]) signals. There are many [[linear filter]]s which are not analogue in implementation ([[digital filter]]), and there are many [[electronic filter]]s which may not have a passive topology – both of which may have the same [[transfer function]] as the filters described in this article. Analogue filters are most often used in wave filtering applications, that is, where it is required to pass particular frequency components and to reject others from analogue ([[Continuous signal|continuous-time]]) signals.

Analogue filters have played an important part in the development of electronics. Especially in the field of [[telecommunication]]s, filters have been of crucial importance in a number of technological breakthroughs and have been the source of enormous profits for telecommunications companies. It should come as no surprise, therefore, that the early development of filters was intimately connected with [[transmission line]]s. Transmission line theory gave rise to filter theory, which initially took a very similar form, and the main application of filters was for use on telecommunication transmission lines. However, the arrival of [[network synthesis filters|network synthesis]] techniques greatly enhanced the degree of control of the designer.

Today, it is often preferred to carry out filtering in the digital domain where complex algorithms are much easier to implement, but analogue filters do still find applications, especially for low-order simple filtering tasks, and are often still the norm at higher frequencies where digital technology is still impractical, or at least less cost effective. Wherever possible, and especially at low frequencies, analogue filters are now implemented in a [[electronic filter topology|filter topology]] which is [[active component|active]] in order to avoid the wound components required by [[passive component|passive]] topology.

It is possible to design linear analogue [[mechanical filter]]s using mechanical components which filter mechanical vibrations or [[acoustics|acoustic]] waves. While there are few applications for such devices in mechanics per se, they can be used in electronics with the addition of [[transducer]]s to convert to and from the electrical domain. Indeed, some of the earliest ideas for filters were acoustic resonators because the electronics technology was poorly understood at the time. In principle, the design of such filters can be achieved entirely in terms of the electronic counterparts of mechanical quantities, with [[kinetic energy]], [[potential energy]] and [[heat energy]] corresponding to the energy in inductors, capacitors and resistors respectively.

==Historical overview==
There are three main stages in the history of '''passive analogue filter development''':

#'''Simple filters'''. The frequency dependence of electrical response was known for capacitors and inductors from very early on. The resonance phenomenon was also familiar from an early date and it was possible to produce simple, single-branch filters with these components. Although attempts were made in the 1880s to apply them to [[telegraphy]], these designs proved inadequate for successful [[frequency division multiplexing]]. Network analysis was not yet powerful enough to provide the theory for more complex filters and progress was further hampered by a general failure to understand the [[frequency domain]] nature of signals.
#'''[[Composite image filter|Image filters]]'''. Image filter theory grew out of transmission line theory and the design proceeded in a similar manner to transmission line analysis. For the first time filters could be produced that had precisely controllable [[passband]]s and other parameters. These developments took place in the 1920s and filters produced to these designs were still in widespread use in the 1980s, only declining as the use of analogue telecommunications declined. Their immediate application was the economically important development of frequency division multiplexing for use on intercity and international [[telephony]] lines.
#'''[[Network synthesis filters]]'''. The mathematical bases of network synthesis were laid in the 1930s and 1940s. After the end of [[World War II]] network synthesis became the primary tool of [[filter design]]. Network synthesis put filter design on a firm mathematical foundation, freeing it from the mathematically sloppy techniques of image design and severing the connection with physical lines. The essence of network synthesis is that it produces a design that will (at least if implemented with ideal components) accurately reproduce the response originally specified in [[black box]] terms.

Throughout this article the letters R, L and C are used with their usual meanings to represent [[electrical resistance|resistance]], [[inductance]] and [[capacitance]], respectively. In particular they are used in combinations, such as LC, to mean, for instance, a network consisting only of inductors and capacitors. Z is used for [[electrical impedance]], any 2-terminal<ref name=pole group=note>A terminal of a network is a connection point where current can enter or leave the network from the world outside. This is often called a pole in the literature, especially the more mathematical, but is not to be confused with a [[pole (complex analysis)|pole]] of the [[transfer function]] which is a meaning also used in this article. A 2-terminal network amounts to a single impedance (although it may consist of many elements connected in a complicated set of [[Mesh analysis|meshes]]) and can also be described as a one-port network. For networks of more than two terminals it is not necessarily possible to identify terminal pairs as ports.</ref> combination of RLC elements and in some sections D is used for the rarely seen quantity elastance, which is the inverse of capacitance.

==Resonance==
Early filters utilised the phenomenon of [[resonance]] to filter signals. Although [[electrical resonance]] had been investigated by researchers from a very early stage, it was at first not widely understood by electrical engineers. Consequently, the much more familiar concept of [[acoustic resonance]] (which, in turn, can be explained in terms of the even more familiar [[mechanical resonance]]) found its way into filter design ahead of electrical resonance.<ref name=Lund24>Lundheim, p.24</ref> Resonance can be used to achieve a filtering effect because the resonant device will respond to frequencies at, or near, the resonant frequency but will not respond to frequencies far from resonance. Hence frequencies far from resonance are filtered out from the output of the device.<ref>L. J. Raphael, G. J. Borden, K. S. Harris, ''Speech science primer: physiology, acoustics, and perception of speech'', p.113, Lippincott Williams & Wilkins 2006 ISBN 0-7817-7117-X</ref>

===Electrical resonance===
[[File:Oudin coil .png|thumb|260px|A 1915 example of an early type of resonant circuit known as an [[Oudin coil]] which uses Leyden jars for the capacitance.]]
Resonance was noticed early on in experiments with the [[Leyden jar]], invented in 1746. The Leyden jar stores electricity due to its [[capacitance]], and is, in fact, an early form of capacitor. When a Leyden jar is discharged by allowing a spark to jump between the electrodes, the discharge is oscillatory. This was not suspected until 1826, when [[Felix Savary]] in France, and later (1842) [[Joseph Henry]]<ref>Joseph Henry, "On induction from ordinary electricity; and on the oscillatory discharge", ''Proceedings of the American Philosophical Society'', '''vol 2''', pp.193-196, 17 June 1842</ref> in the US noted that a steel needle placed close to the discharge does not always magnetise in the same direction. They both independently drew the conclusion that there was a transient oscillation dying with time.<ref>Blanchard, pp.415-416</ref>

[[Hermann von Helmholtz]] in 1847 published his important work on conservation of energy<ref>Hermann von Helmholtz, ''Über die Erhaltung der Kraft (On the Conservation of Force)'', G Reimer, Berlin, 1847</ref> in part of which he used those principles to explain why the oscillation dies away: it is the resistance of the circuit which dissipates the energy of the oscillation on each successive cycle. Helmholtz also noted that there was evidence of oscillation from the [[electrolysis]] experiments of [[William Hyde Wollaston]]. Wollaston was attempting to decompose water by electric shock but found that both hydrogen and oxygen were present at both electrodes. In normal electrolysis they would separate, one to each electrode.<ref>Blanchard, pp.416-417</ref>

Helmholtz explained why the oscillation decayed but he had not explained why it occurred in the first place. This was left to [[Sir William Thomson]] (Lord Kelvin) who, in 1853, postulated that there was inductance present in the circuit as well as the capacitance of the jar and the resistance of the load.<ref>William Thomson, "On transient electric currents", ''Philosophical Magazine'', '''vol 5''', pp.393-405, June 1853</ref> This established the physical basis for the phenomenon: the energy supplied by the jar was partly dissipated in the load but also partly stored in the magnetic field of the inductor.<ref>Blanchard, p.417</ref>

So far, the investigation had been on the natural frequency of transient oscillation of a resonant circuit resulting from a sudden stimulus. More important from the point of view of filter theory is the behaviour of a resonant circuit when driven by an external [[Alternating current|AC]] signal: there is a sudden peak in the circuit's response when the driving signal frequency is at the resonant frequency of the circuit.<ref group=note>The resonant frequency is very close to, but usually not exactly equal to, the natural frequency of oscillation of the circuit</ref> [[James Clerk Maxwell]] heard of the phenomenon from [[Sir William Grove]] in 1868 in connection with experiments on [[dynamo]]s,<ref>William Grove, "An experiment in magneto-electric induction", ''Philosophical Magazine'', '''vol 35''', pp.184-185, March 1868</ref> and was also aware of the earlier work of [[Henry Wilde (engineer)|Henry Wilde]] in 1866. Maxwell explained resonance<ref group=note>[[Oliver Lodge]] and some other English scientists tried to keep acoustic and electric terminology separate and promoted the term "syntony". However it was "resonance" that was to win the day. Blanchard, p.422</ref> mathematically, with a set of differential equations, in much the same terms that an [[RLC circuit]] is described today.<ref name=Lund24/><ref>James Clerk Maxwell, "On Mr Grove's experiment in magneto-electric induction", ''Philosophical Magazine'', '''vol 35''', pp 360-363, May 1868</ref><ref>Blanchard, pp.416–421</ref>
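
In modern terms (not the notation used by Maxwell), the behaviour of such a resonant circuit is summarised by the equation of a series RLC mesh, in which the charge ''q'' obeys

:<math>L\frac{d^2q}{dt^2} + R\frac{dq}{dt} + \frac{q}{C} = 0,</math>

giving, when the resistance is small, a transient oscillation close to the resonant angular frequency <math>\scriptstyle \omega_0 = 1/\sqrt{LC}</math>.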

[[Heinrich Hertz]] (1887) experimentally demonstrated the resonance phenomenon<ref>Heinrich Hertz, "Electric waves", p.42, The Macmillan Company, 1893</ref> by building two resonant circuits, one of which was driven by a generator and the other was [[Tuner (radio)|tunable]] and only coupled to the first electromagnetically (i.e., no circuit connection). Hertz showed that the response of the second circuit was at a maximum when it was in tune with the first. The diagrams produced by Hertz in this paper were the first published plots of an electrical resonant response.<ref name=Lund24/><ref>Blanchard, pp.421-423</ref>

===Acoustic resonance===
As mentioned earlier, it was acoustic resonance that inspired filtering applications, the first of these being a telegraph system known as the "[[harmonic telegraph]]". Versions are due to [[Elisha Gray]], [[Alexander Graham Bell]] (1870s),<ref name=Lund24/> [[:fr:Ernest Mercadier|Ernest Mercadier]] and others. Its purpose was to simultaneously transmit a number of telegraph messages over the same line and represents an early form of [[frequency division multiplexing]] (FDM). FDM requires the sending end to be transmitting at different frequencies for each individual communication channel. This demands individual tuned resonators, as well as filters to separate out the signals at the receiving end. The harmonic telegraph achieved this with electromagnetically driven tuned reeds at the transmitting end which would vibrate similar reeds at the receiving end. Only the reed with the same resonant frequency as the transmitter would vibrate to any appreciable extent at the receiving end.<ref name=Blanch425>Blanchard, p.425</ref>

Incidentally, the harmonic telegraph directly suggested to Bell the idea of the telephone. The reeds can be viewed as [[transducer]]s converting sound to and from an electrical signal. It is no great leap from this view of the harmonic telegraph to the idea that speech can be converted to and from an electrical signal.<ref name=Lund24/><ref name=Blanch425/>

===Early multiplexing===
[[File:LeblancFDM.jpg|thumb|Hutin and Leblanc's multiple telegraph filter of 1891 showing the use of resonant circuits in filtering.<ref>M Hutin, M Leblanc, ''Multiple Telegraphy and Telephony'', United States patent US0838545, filed 9 May 1894, issued 18 Dec 1906</ref><ref group=note>This image is from a later, corrected, US patent which patents the same invention as the original French patent</ref>]]
By the 1890s electrical resonance was much more widely understood and had become a normal part of the engineer's toolkit. In 1891 Hutin and Leblanc patented an FDM scheme for telephone circuits using resonant circuit filters.<ref>Maurice Hutin, Maurice Leblanc, "Étude sur les Courants Alternatifs et leur Application au Transport de la Force", ''La Lumière Electrique'', 2 May 1891</ref> Rival patents were filed in 1892 by [[Michael Pupin]] and [[John Stone Stone]] with similar ideas, priority eventually being awarded to Pupin. However, no scheme using just simple resonant circuit filters can successfully [[multiplexing|multiplex]] (i.e. combine) the wider bandwidth of telephone channels (as opposed to telegraph) without either an unacceptable restriction of speech bandwidth or a channel spacing so wide as to make the benefits of multiplexing uneconomic.<ref name=Lund24/><ref>Blanchard, pp.426-427</ref>

The basic technical reason for this difficulty is that the frequency response of a simple filter approaches a fall of 6 [[Octave (electronics)|dB/octave]] far from the point of resonance. This means that if telephone channels are squeezed side by side into the frequency spectrum, there will be [[crosstalk]] from adjacent channels in any given channel. What is required is a much more sophisticated filter that has a flat frequency response in the required [[passband]] like a low-[[Q factor|Q]] resonant circuit, but that rapidly falls in response (much faster than 6 dB/octave) at the transition from passband to [[stopband]] like a high-Q resonant circuit.<ref group=note>[[Q factor]] is a dimensionless quantity enumerating the '''''q'''''uality of a resonating circuit. It is roughly proportional to the number of oscillations which a resonator would support after a single external excitation (for example, how many times a guitar string would vibrate if plucked). One definition of Q factor, the most relevant one in this context, is the ratio of resonant frequency to bandwidth of a circuit. It arose as a measure of [[selectivity (electronic)|selectivity]] in radio receivers</ref> Obviously, these are contradictory requirements to be met with a single resonant circuit. The solution to these needs was founded in the theory of transmission lines and consequently the necessary filters did not become available until this theory was fully developed. At this early stage the idea of signal bandwidth, and hence the need for filters to match to it, was not fully understood; indeed, it was as late as 1920 before the concept of bandwidth was fully established.<ref>Lundheim (2002), p. 23</ref> For early radio, the concepts of Q-factor, [[selectivity (electronic)|selectivity]] and tuning sufficed. This was all to change with the developing theory of [[transmission line]]s on which [[image filter]]s are based, as explained in the next section.<ref name=Lund24/>
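
In modern terms, a single resonator used as a band-pass filter has a response of the form

:<math>|H(\omega)| = \frac{1}{\sqrt{1 + Q^2\left(\frac{\omega}{\omega_0} - \frac{\omega_0}{\omega}\right)^2}},</math>

which has a 3&nbsp;dB bandwidth of <math>\scriptstyle \omega_0/Q</math> but, whatever the value of ''Q'', falls away at only 6&nbsp;dB/octave far from the resonant frequency <math>\scriptstyle \omega_0</math>, illustrating the contradictory requirements described above.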

At the turn of the century, as telephone lines became available, it became popular to add telegraph onto telephone lines with an earth return [[phantom circuit]].<ref group=note>Telegraph lines are typically [[unbalanced line|unbalanced]] with only a single conductor provided, the return path is achieved through an [[ground (electricity)|earth]] connection which is common to all the telegraph lines on a route. Telephone lines are typically [[balanced line|balanced]] with two conductors per circuit. A telegraph signal connected [[Common-mode signal|common-mode]] to both conductors of the telephone line will not be heard at the telephone receiver which can only detect voltage differences between the conductors. The telegraph signal is typically recovered at the far end by connection to the [[center tap]] of a [[repeating coil|line transformer]]. The return path is via an earth connection as usual. This is a form of [[phantom circuit]]</ref> An [[LC circuit|LC filter]] was required to prevent telegraph clicks being heard on the telephone line. From the 1920s onwards, telephone lines, or balanced lines dedicated to the purpose, were used for FDM telegraph at audio frequencies. The first of these systems in the UK was a [[Siemens|Siemens and Halske]] installation between London and Manchester. [[General Electric Company plc|GEC]] and [[AT&T Corp.|AT&T]] also had FDM systems. Separate pairs were used for the send and receive signals. The Siemens and GEC systems had six channels of telegraph in each direction, the AT&T system had twelve. All of these systems used electronic oscillators to generate a different [[Carrier wave|carrier]] for each telegraph signal and required a bank of band-pass filters to separate out the multiplexed signal at the receiving end.<ref>K. G. Beauchamp, ''History of telegraphy'', pp 84-85, Institution of Electrical Engineers, 2001 ISBN 0-85296-792-6</ref>

==Transmission line theory==
[[File:Line model Ohm.svg|thumb|300px|Ohm's model of the transmission line was simply resistance.]]
[[File:Line model Kelvin.svg|thumb|300px|Lord Kelvin's model of the transmission line accounted for capacitance and the dispersion it caused. The diagram represents Kelvin's model translated into modern terms using [[infinitesimal]] elements, but this was not the actual approach used by Kelvin.]]
[[File:Line model Heaviside.svg|thumb|300px|Heaviside's model of the transmission line. L, R, C and G in all three diagrams are the primary line constants. The infinitesimals δL, δR, δC and δG are to be understood as Lδ''x'', Rδ''x'', Cδ''x'' and Gδ''x'' respectively.]]
The earliest model of the [[transmission line]] was probably described by [[Georg Ohm]] (1827) who established that resistance in a wire is proportional to its length.<ref>Georg Ohm, ''Die galvanische Kette, mathematisch bearbeitet'', Riemann Berlin, 1827</ref><ref group=note>At least, Ohm described the first model that was in any way correct. Earlier ideas such as [[Barlow's law]] from [[Peter Barlow (mathematician)|Peter Barlow]] were either incorrect, or inadequately described. See, for example, p.603 of:<br />*John C. Shedd, Mayo D. Hershey, "The history of Ohm's law", ''The Popular Science Monthly'', pp.599-614, December 1913 ISSN 0161-7370.</ref> The Ohm model thus included only resistance. [[Latimer Clark]] noted that signals were delayed and elongated along a cable, an undesirable form of distortion now called [[dispersion relation|dispersion]] but then called retardation, and [[Michael Faraday]] (1853) established that this was due to the [[capacitance]] present in the transmission line.<ref>Hunt, pp 62-63</ref><ref group=note>[[Werner von Siemens]] had also noted the retardation effect a few years earlier in 1849 and came to a similar conclusion as Faraday. However, there was not so much interest in Germany in underwater and underground cables as there was in Britain, the German overhead cables did not noticeably suffer from retardation and Siemens's ideas were not accepted. (Hunt, p.65.)</ref> [[Lord Kelvin]] (1854) found the correct mathematical description needed in his work on early transatlantic cables; he arrived at an equation identical to the [[Heat equation|conduction of a heat pulse]] along a metal bar.<ref>Thomas William Körner, ''Fourier analysis'', p.333, Cambridge University Press, 1989 ISBN 0-521-38991-7</ref> This model incorporates only resistance and capacitance, but that is all that was needed in undersea cables dominated by capacitance effects. Kelvin's model predicts a limit on the telegraph signalling speed of a cable but Kelvin still did not use the concept of bandwidth; the limit was entirely explained in terms of the dispersion of the telegraph [[Symbol rate|symbols]].<ref name=Lund24/> The mathematical model of the transmission line reached its fullest development with [[Oliver Heaviside]]. Heaviside (1881) introduced series inductance and shunt [[electrical conductance|conductance]] into the model, making four [[distributed element model|distributed elements]] in all. This model is now known as the [[telegrapher's equation]] and the distributed elements are called the [[primary line constants]].<ref>Brittain, p.39<br/>Heaviside, O, ''Electrical Papers'', '''vol 1''', pp.139-140, Boston, 1925</ref>
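
In terms of the primary line constants ''R'', ''L'', ''C'' and ''G'' (each taken per unit length of line), the telegrapher's equation relates the line voltage ''v'' and current ''i'' by

:<math>\frac{\partial v}{\partial x} = -L\frac{\partial i}{\partial t} - Ri, \qquad \frac{\partial i}{\partial x} = -C\frac{\partial v}{\partial t} - Gv,</math>

Kelvin's earlier cable model being the special case ''L''&nbsp;=&nbsp;''G''&nbsp;=&nbsp;0, which reduces to the same form as the heat conduction equation.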

From the work of Heaviside (1887) it had become clear that the performance of telegraph lines, and most especially telephone lines, could be improved by the addition of inductance to the line.<ref>Heaviside, O, "Electromagnetic Induction and its propagation", ''The Electrician'', 3 June 1887</ref> [[George Ashley Campbell|George Campbell]] at [[American Telephone & Telegraph|AT&T]] implemented this idea (1899) by inserting [[loading coil]]s at intervals along the line.<ref>James E. Brittain, "The Introduction of the Loading Coil: George A. Campbell and Michael I. Pupin", ''Technology and Culture'', '''Vol. 11''', No. 1 (Jan., 1970), pp. 36–57, The Johns Hopkins University Press {{doi|10.2307/3102809}}</ref> Campbell found that as well as the desired improvements to the line's characteristics in the passband there was also a definite frequency beyond which signals could not be passed without great [[attenuation]]. This was a result of the loading coils and the line capacitance forming a [[low-pass filter]], an effect that is only apparent on lines incorporating [[Lumped element model|lumped components]] such as the loading coils. This naturally led Campbell (1910) to produce a filter with [[ladder topology]]; a glance at the circuit diagram of this filter is enough to see its relationship to a loaded transmission line.<ref>Darlington, pp.4-5</ref> The cut-off phenomenon is an undesirable side-effect as far as loaded lines are concerned, but for telephone FDM filters it is precisely what is required. For this application, Campbell produced [[band-pass filter]]s to the same ladder topology by replacing the inductors and capacitors with [[resonator]]s and anti-resonators respectively.<ref group=note>The exact date Campbell produced each variety of filter is not clear. The work started in 1910, initially patented in 1917 (US1227113) and the full theory published in 1922, but it is known that Campbell's filters were in use by AT&T long before the 1922 date (Bray, p.62, Darlington, p.5)</ref> Both the loaded line and FDM were of great benefit economically to AT&T and this led to fast development of filtering from this point onwards.<ref>Bray, J, ''Innovation and the Communications Revolution'', p 62, Institute of Electrical Engineers, 2002</ref>
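
For illustration, if each repeating section of the loaded line is treated as a lumped series inductance ''L'' (loading coil plus line inductance) and a lumped shunt capacitance ''C'' (illustrative symbols, not Campbell's notation), the resulting ladder passes signals only below a cut-off frequency of approximately

:<math>f_c = \frac{1}{\pi\sqrt{LC}},</math>

above which the attenuation rises rapidly – the very effect that Campbell turned from a side-effect of loading into the basis of a deliberate filter.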

==Image filters==
{{main|composite image filters}}
[[File:Cambell filter.png|thumb|400px|Campbell's sketch of the low-pass version of his filter from his 1915 patent<ref>George A. Campbell, ''Electric wave-filter'', US patent 1 227 113, filed 15 July 1915, issued 22 May 1917.</ref> showing the now ubiquitous ladder topology with capacitors for the ladder rungs and inductors for the stiles. Filters of more modern design also often adopt the same ladder topology as used by Campbell. It should be understood that although superficially similar, they are really quite different. The ladder construction is essential to the Campbell filter and all the sections have identical element values. Modern designs can be realised in any number of topologies; choosing the ladder topology is merely a matter of convenience. Their response is quite different from (and better than) Campbell's and the element values, in general, will all be different.]]
The filters designed by Campbell<ref group=note>Campbell has publishing priority for this invention but it is worth noting that [[Karl Willy Wagner]] independently made a similar discovery which he was not allowed to publish immediately because [[World War I]] was still ongoing. (Thomas H. Lee, ''Planar microwave engineering'', p.725, Cambridge University Press 2004 ISBN 0-521-83526-7.)</ref> were named wave filters because of their property of passing some waves and strongly rejecting others. The method by which they were designed was called the image parameter method<ref group=note>The term "image parameter method" was coined by Darlington (1939) in order to distinguish this earlier technique from his later "insertion-loss method"</ref><ref name=Quad>[http://www.quadrivium.nl/history/history.html "History of Filter Theory"], Quadrivium, retrieved 26 June 2009</ref><ref name=Darl4pole>S. Darlington, "Synthesis of reactance 4-poles which produce prescribed insertion loss characteristics", ''Journal of Mathematics and Physics'', '''vol 18''', pp.257-353, September 1939</ref> and filters designed to this method are called image filters.<ref group=note>The terms wave filter and image filter are not synonymous: it is possible for a wave filter to not be designed by the image method, but in the 1920s the distinction was moot as the image method was the only one available</ref> The image method essentially consists of developing the [[transmission constant]]s of an infinite chain of identical filter sections and then terminating the desired finite number of filter sections in the [[image impedance]]. This exactly corresponds to the way the properties of a finite length of transmission line are derived from the theoretical properties of an infinite line, the image impedance corresponding to the [[characteristic impedance]] of the line.<ref>Matthaei, pp.49-51</ref>

From 1920 [[John Renshaw Carson|John Carson]], also working for AT&T, began to develop a new way of looking at signals using the [[operational calculus]] of Heaviside, which in essence is working in the [[frequency domain]]. This gave the AT&T engineers a new insight into the way their filters were working and led [[Otto Zobel]] to invent many improved forms. Carson and Zobel steadily demolished many of the old ideas. For instance, the old telegraph engineers thought of the signal as being a single frequency and this idea persisted into the age of radio, with some still believing that [[frequency modulation]] (FM) transmission could be achieved with a smaller bandwidth than the [[baseband]] signal right up until the publication of Carson's 1922 paper.<ref>Carson, J. R., "Notes on the Theory of Modulation", ''Proceedings of the IRE'', '''vol 10''', No 1, pp.57-64, 1922 {{doi|10.1109/JRPROC.1922.219793}}</ref> Another advance concerned the nature of noise: Carson and Zobel (1923)<ref>Carson, J R and Zobel, O J, "Transient Oscillation in Electric Wave Filters", ''Bell Systems Technical Journal'', vol 2, July 1923, pp.1-29</ref> treated noise as a random process with a continuous bandwidth, an idea that was well ahead of its time, and thus limited the amount of noise that it was possible to remove by filtering to that part of the noise spectrum which fell outside the passband. This, too, was not generally accepted at first, notably being opposed by [[Edwin Armstrong]] (who, ironically, actually succeeded in reducing noise with [[Frequency modulation#Modulation index|wide-band FM]]), and was only finally settled with the work of [[Harry Nyquist]], whose [[Johnson–Nyquist noise|thermal noise power formula]] is well known today.<ref>Lundheim, pp.24-25</ref>

Several improvements were made to image filters and their theory of operation by [[Otto Zobel]]. Zobel coined the term [[constant k filter]] (or k-type filter) to distinguish Campbell's filter from later types, notably Zobel's [[m-derived filter]] (or m-type filter). The particular problems Zobel was trying to address with these new forms were impedance matching into the end terminations and improved steepness of roll-off. These were achieved at the cost of an increase in filter circuit complexity.<ref name=Zobel/><ref name=Darl5>Darlington, p.5</ref>
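
As an example of the improved roll-off, the m-derived low-pass section places a pole of attenuation (a frequency of nominally infinite loss) just above the cut-off frequency ''f''<sub>c</sub>, at

:<math>f_\infty = \frac{f_c}{\sqrt{1-m^2}}, \qquad 0 < m < 1,</math>

so that a small value of ''m'' puts the pole close to cut-off and gives a very steep initial transition from passband to stopband; the penalty is that the attenuation falls back to a comparatively low value at frequencies beyond <math>\scriptstyle f_\infty</math>.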

A more systematic method of producing image filters was introduced by [[Hendrik Bode]] (1930), and further developed by several other investigators including Piloty (1937-1939) and [[Wilhelm Cauer]] (1934-1937). Rather than enumerating the behaviour (transfer function, attenuation function, delay function and so on) of a specific circuit, a requirement for the image impedance itself was developed instead. The image impedance can be expressed in terms of the open-circuit and short-circuit impedances<ref group=note name=Zoc>The open-circuit impedance of a two-port network is the impedance looking into one port when the other port is open circuit. Similarly, the short-circuit impedance is the impedance looking into one port when the other is terminated in a short circuit. The open-circuit impedance of the first port in general (except for symmetrical networks) is not equal to the open-circuit impedance of the second and likewise for short-circuit impedances</ref> of the filter as <math> \scriptstyle Z_i=\sqrt{Z_oZ_s}</math>. Since the image impedance must be real in the passbands and imaginary in the stopbands according to image theory, there is a requirement that the [[Pole (complex analysis)|poles]] and [[Zero (complex analysis)|zeroes]] of ''Z<sub>o</sub>'' and ''Z<sub>s</sub>'' cancel in the passband and correspond in the stopband. The behaviour of the filter can be entirely defined in terms of the positions in the [[complex plane]] of these pairs of poles and zeroes. Any circuit which has the requisite poles and zeroes will also have the requisite response. Cauer pursued two related questions arising from this technique: which specifications of poles and zeroes are realisable as passive filters, and which realisations are equivalent to each other. The results of this work led Cauer to develop a new approach, now called network synthesis.<ref name=Darl5/><ref name=Belev851>Belevitch, p.851</ref><ref name=ECauer6>Cauer et al., p.6</ref>
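
As a simple example of this behaviour, the mid-series image impedance of the prototype (constant k) low-pass section with series inductance ''L'' and shunt capacitance ''C'' is

:<math>Z_{iT} = \sqrt{\frac{L}{C}}\sqrt{1-\left(\frac{\omega}{\omega_c}\right)^2}, \qquad \omega_c = \frac{2}{\sqrt{LC}},</math>

which is real (resistive) everywhere in the passband below <math>\scriptstyle \omega_c</math> and purely imaginary in the stopband above it – exactly the passband and stopband behaviour demanded of the image impedance by the theory.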

This "poles and zeroes" view of filter design was particularly useful where a bank of filters, each operating at different frequencies, are all connected across the same transmission line. The earlier approach was unable to deal properly with this situation, but the poles and zeroes approach could embrace it by specifying a constant impedance for the combined filter. This problem was originally related to FDM telephony but now frequently arises in loudspeaker [[Audio crossover|crossover filters]].<ref name=Belev851/>

==Network synthesis filters==
{{main|Network synthesis filters}}
The essence of network synthesis is to start with a required filter response and produce a network that delivers that response, or approximates to it within a specified boundary. This is the inverse of [[Network analysis (electrical circuits)|network analysis]] which starts with a given network and by applying the various electric circuit theorems predicts the response of the network.<ref name=ECauer4>Cauer et al., p.4</ref> The term was first used with this meaning in the doctoral thesis of [[Yuk-Wing Lee]] (1930) and apparently arose out of a conversation with [[Vannevar Bush]].<ref>Karl L. Wildes, Nilo A. Lindgren, ''A century of electrical engineering and computer science at MIT, 1882-1982'', p.157, MIT Press, 1985 ISBN 0-262-23119-0</ref> The advantage of network synthesis over previous methods is that it provides a solution which precisely meets the design specification. This is not the case with image filters; a degree of experience is required in their design since the image filter only meets the design specification in the unrealistic case of being terminated in its own image impedance, to produce which would require the exact circuit being sought. Network synthesis, on the other hand, takes care of the termination impedances simply by incorporating them into the network being designed.<ref>Matthaei, pp.83-84</ref>

The development of network analysis needed to take place before network synthesis was possible. The theorems of [[Gustav Kirchhoff]] and others and the ideas of [[Charles Steinmetz]] ([[Phasor (sine waves)|phasors]]) and [[Arthur Kennelly]] ([[complex impedance]])<ref>[http://www.ieee.org/web/aboutus/history_center/biography/kennelly.html Arthur E. Kennelly, 1861 - 1939] IEEE biography, retrieved June 13, 2009</ref> laid the groundwork.<ref>Darlington, p.4</ref> The concept of a [[Two-port network|port]] also played a part in the development of the theory, and proved to be a more useful idea than network terminals.<ref name=pole group=note/><ref name=Darl5/> The first milestone on the way to network synthesis was an important paper by [[Ronald Foster]] (1924),<ref>Foster, R M, "A Reactance Theorem", ''Bell System Technical Journal'', '''vol 3''', pp.259-267, 1924</ref> ''A Reactance Theorem'', in which Foster introduced the idea of a [[driving point impedance]], that is, the impedance presented to the generator. The expression for this impedance determines the response of the filter and vice versa, and a realisation of the filter can be obtained by expansion of this expression. It is not possible to realise any arbitrary impedance expression as a network. [[Foster's reactance theorem]] stipulates necessary and sufficient conditions for realisability: that the reactance must be algebraically increasing with frequency and the poles and zeroes must alternate.<ref name=Cauer1>Cauer et al., p.1</ref><ref>Darlington, pp.4-6</ref>
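
Foster's expansion is, in modern notation, a partial fraction decomposition of the reactance: a realisable LC driving point impedance can always be written in the form

:<math>Z(s) = sL_\infty + \frac{1}{sC_0} + \sum_k \frac{s/C_k}{s^2+\omega_k^2},</math>

each term of the sum being realised directly as a parallel LC resonator connected in series with the rest, the poles <math>\scriptstyle \omega_k</math> and the zeroes of ''Z'' alternating along the frequency axis as the theorem requires.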

[[Wilhelm Cauer]] expanded on the work of Foster (1926)<ref>Cauer, W, "Die Verwirklichung der Wechselstromwiderstände vorgeschriebener Frequenzabhängigkeit" ("The realisation of impedances of specified frequency dependence"), ''Archiv für Elektrotechnik'', '''vol 17''', pp.355-388, 1926 {{doi|10.1007/BF01662000}}</ref> and was the first to talk of realisation of a one-port impedance with a prescribed frequency function. Foster's work considered only reactances (i.e., only LC-kind circuits). Cauer generalised this to any 2-element kind one-port network, finding there was an isomorphism between them. He also found ladder realisations<ref group=note>which is the best known of the filter topologies. It is for this reason that ladder topology is often referred to as Cauer topology (the forms used earlier by Foster are quite different) even though ladder topology had long since been in use in image filter design</ref> of the network using [[Thomas Stieltjes]]' continued fraction expansion. This work was the basis on which network synthesis was built, although Cauer's work was not at first used much by engineers, partly because of the intervention of World War II, partly for reasons explained in the next section and partly because Cauer presented his results using topologies that required mutually coupled inductors and ideal transformers. On this last point, however, it has to be said that transformer-coupled double-tuned amplifiers are a common enough way of widening bandwidth without sacrificing selectivity.<ref>A. P. Godse, U. A. Bakshi, ''Electronic Circuit Analysis'', p.5-20, Technical Publications, 2007 ISBN 81-8431-047-1</ref><ref name=Belev850>Belevitch, p.850</ref><ref>Cauer et al., pp.1,6</ref>
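
The ladder realisation follows from repeatedly extracting a pole at infinity (or at zero) and inverting the remainder, giving a continued fraction of the form

:<math>Z(s) = sL_1 + \cfrac{1}{sC_2 + \cfrac{1}{sL_3 + \cfrac{1}{sC_4 + \cdots}}},</math>

in which the successive coefficients are the series inductances and shunt capacitances of the ladder.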

==Image method versus synthesis==
Image filters continued to be used by designers long after the superior network synthesis techniques were available. Part of the reason for this may have been simply inertia, but it was largely due to the greater computation required for network synthesis filters, often needing a mathematical iterative process. Image filters, in their simplest form, consist of a chain of repeated, identical sections. The design can be improved simply by adding more sections and the computation required to produce the initial section is on the level of "back of an envelope" designing. In the case of network synthesis filters, on the other hand, the filter is designed as a whole, single entity and to add more sections (i.e., increase the order)<ref group=note name=class/> the designer would have no option but to go back to the beginning and start over. The advantages of synthesised designs are real, but they are not overwhelming compared to what a skilled image designer could achieve, and in many cases it was more cost effective to dispense with time-consuming calculations.<ref name=Darl9>Darlington, p.9</ref> This is simply not an issue with the modern availability of computing power, but in the 1950s it was non-existent, in the 1960s and 1970s it was available only at cost, and it did not finally become widely available to all designers until the 1980s with the advent of the desktop personal computer. Image filters continued to be designed up to that point and many remained in service into the 21st century.<ref>Irwin W. Sandberg, Ernest S. Kuh, "Sidney Darlington", ''Biographical Memoirs'', '''vol 84''', page 85, National Academy of Sciences (U.S.), National Academies Press 2004 ISBN 0-309-08957-3</ref>

The computational difficulty of the network synthesis method was addressed by tabulating the component values of a [[prototype filter]] and then scaling the frequency and impedance and transforming the bandform to those actually required. This kind of approach, or similar, was already in use with image filters, for instance by Zobel,<ref name=Zobel>Zobel, O. J., ''Theory and Design of Uniform and Composite Electric Wave Filters'', Bell Systems Technical Journal, Vol. 2 (1923), pp. 1-46.</ref> but the concept of a "reference filter" is due to [[Sidney Darlington]].<ref>J. Zdunek, "The network synthesis on the insertion-loss basis", ''Proceedings of the Institution of Electrical Engineers'', p.283, part 3, '''vol 105''', 1958</ref> Darlington (1939)<ref name=Darl4pole/> was also the first to tabulate values for network synthesis prototype filters;<ref>Matthaei et al., p.83</ref> nevertheless, it had to wait until the 1950s before the Cauer-Darlington [[elliptic filter]] first came into use.<ref>Michael Glynn Ellis, ''Electronic filter analysis and synthesis'', p.2, Artech House 1994 ISBN 0-89006-616-7</ref>
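
The scaling itself is straightforward: if ''g''<sub>''k''</sub> are the tabulated element values of a low-pass prototype normalised to 1&nbsp;Ω terminations and a cut-off of 1&nbsp;rad/s, then for termination resistance ''R''<sub>0</sub> and cut-off frequency <math>\scriptstyle \omega_c</math> the denormalised inductances and capacitances are

:<math>L_k = \frac{R_0\,g_k}{\omega_c}, \qquad C_k = \frac{g_k}{R_0\,\omega_c},</math>

with high-pass and band-pass designs obtained from the same table by the standard frequency transformations.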

Once computational power was readily available, it became possible to easily design filters to minimise any arbitrary parameter, for example time delay or tolerance to component variation. The difficulties of the image method were firmly put in the past, and even the need for prototypes became largely superfluous.<ref>John T. Taylor, Qiuting Huang, ''CRC handbook of electrical filters'', p.20, CRC Press 1997 ISBN 0-8493-8951-8</ref><ref name=Darl12>Darlington, p.12</ref> Furthermore, the advent of [[active filter]]s eased the computation difficulty because sections could be isolated and iterative processes were not then generally necessary.<ref name=Darl9/>

==Realisability and equivalence==
Realisability (that is, which functions are realisable as real impedance networks) and equivalence (which networks equivalently have the same function) are two important questions in network synthesis. Following an analogy with [[Lagrangian mechanics]], Cauer formed the matrix equation,

:<math>\mathbf{[A]}= s^2 \mathbf{[L]} + s \mathbf{[R]} + \mathbf{[D]} = s \mathbf{[Z]}</math>

where ['''Z'''], ['''R'''], ['''L'''] and ['''D'''] are the ''n''×''n'' matrices of, respectively, [[electrical impedance|impedance]], [[Electrical resistance|resistance]], [[inductance]] and elastance of an ''n''-[[mesh analysis|mesh]] network and ''s'' is the [[complex frequency]] operator <math>\scriptstyle s=\sigma+i\omega</math>. Here ['''L'''], ['''D'''] and ['''R'''] have associated energies corresponding to the kinetic, potential and dissipative heat energies, respectively, in a mechanical system and the already known results from mechanics could be applied here. Cauer determined the [[driving point impedance]] by the method of [[Lagrange multipliers]];

:<math>Z_{\mathrm{p}}(s)=\frac{\det \mathbf{[A]}}{s \, a_{11}}</math>

where ''a<sub>11</sub>'' is the complement of the element ''A<sub>11</sub>'' to which the one-port is to be connected. From [[stability theory]] Cauer found that ['''R'''], ['''L'''] and ['''D'''] must all be [[positive-definite matrix|positive-definite matrices]] for ''Z''<sub>p</sub>(''s'') to be realisable if ideal transformers are not excluded. Realisability is only otherwise restricted by practical limitations on topology.<ref name=ECauer4/> This work is also partly due to [[Otto Brune]] (1931), who worked with Cauer in the US prior to Cauer returning to Germany.<ref name=Belev850/> A well known condition for realisability of a one-port rational<ref group=note>A rational impedance is one expressed as a ratio of two finite polynomials in ''s'', that is, a [[rational function]] in ''s''. The implication of finite polynomials is that the impedance, when realised, will consist of a finite number of meshes with a finite number of elements</ref> impedance due to Cauer (1929) is that it must be a function of ''s'' that is analytic in the right halfplane (σ>0), have a positive real part in the right halfplane and take on real values on the real axis. This follows from the [[Poisson integral]] representation of these functions. Brune coined the term [[positive-real]] for this class of function and proved that it was a necessary and sufficient condition (Cauer had only proved it to be necessary) and they extended the work to LC multiports. A theorem due to [[Sidney Darlington]] states that any positive-real function ''Z''(''s'') can be realised as a lossless two-port terminated in a positive resistor R. No resistors within the network are necessary to realise the specified response.<ref name=Belev850/><ref>Cauer et al., pp.6-7</ref><ref name=Darl7>Darlington, p.7</ref>
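
A trivial example of a positive-real function is provided by the series connection of a 1&nbsp;Ω resistor, a 1&nbsp;H inductor and a 1&nbsp;F capacitor,

:<math>Z(s) = s + 1 + \frac{1}{s},</math>

which is analytic for σ&nbsp;>&nbsp;0, real for real ''s'' and has <math>\scriptstyle \operatorname{Re}\,Z(i\omega) = 1 > 0</math>; a function such as <math>\scriptstyle s-1</math>, on the other hand, fails the real-part condition and cannot be realised by any passive one-port.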

As for equivalence, Cauer found that the group of real [[affine transformation]]s,

:<math> \mathbf{[T]}^T \mathbf{[A]} \mathbf{[T]} </math>

:where,

:<math> \mathbf{[T]}=\begin{bmatrix} 1 & 0 \cdots 0 \\ T_{21} & T_{22} \cdots T_{2n} \\ \cdot & \cdots \\ T_{n1} & T_{n2} \cdots T_{nn}\end{bmatrix}</math>

leaves ''Z''<sub>p</sub>(''s'') invariant, that is, all the transformed networks are equivalents of the original.<ref name=ECauer4/>

==Approximation==
The approximation problem in network synthesis is to find functions which will produce realisable networks approximating to a prescribed function of frequency within limits arbitrarily set. The approximation problem is an important issue since the ideal function of frequency required will commonly be unachievable with rational networks. For instance, the ideal prescribed function is often taken to be the unachievable lossless transmission in the passband, infinite attenuation in the stopband and a vertical transition between the two. However, the ideal function can be approximated with a [[rational function]], becoming ever closer to the ideal the higher the order of the polynomial. The first to address this problem was [[Stephen Butterworth]] (1930) using his [[Butterworth polynomials]]. Independently, Cauer (1931) used [[Chebyshev polynomials]], initially applied to image filters, and not to the now well-known ladder realisation of this filter.<ref name=Belev850/><ref>Darlington, pp.7-8</ref>
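
In modern notation the two approximations, for a low-pass response of order ''n'' and cut-off <math>\scriptstyle \omega_c</math>, are

:<math>|H(\omega)|^2 = \frac{1}{1+(\omega/\omega_c)^{2n}} \quad \text{(Butterworth)} \qquad \text{and} \qquad |H(\omega)|^2 = \frac{1}{1+\varepsilon^2 T_n^2(\omega/\omega_c)} \quad \text{(Chebyshev)},</math>

where <math>\scriptstyle T_n</math> is the ''n''th [[Chebyshev polynomial]] and <math>\scriptstyle \varepsilon</math> sets the passband ripple; both approach the ideal rectangular characteristic as the order ''n'' is increased.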

===Butterworth filter===
{{Main|Butterworth filter}}
Butterworth filters are an important class<ref group=note name=class>A class of filters is a collection of filters which are all described by the same [[class (mathematics)|class of mathematical function]], for instance, the class of Chebyshev filters are all described by the class of [[Chebyshev polynomial]]s. For realisable linear passive networks, the [[transfer function]] must be a ratio of [[polynomial function]]s. The order of a filter is the order of the highest order polynomial of the two and will equal the number of elements (or resonators) required to build it. Usually, the higher the order of a filter, the steeper the roll-off of the filter will be. In general, the values of the elements in each section of the filter will not be the same if the order is increased and will need to be recalculated. This is in contrast to the image method of design which simply adds on more identical sections</ref> of filters due to [[Stephen Butterworth]] (1930)<ref>Butterworth, S, "On the Theory of Filter Amplifiers", ''Wireless Engineer'', '''vol. 7''', 1930, pp. 536-541</ref> which are now recognised as being a special case of Cauer's [[elliptic filter]]s. Butterworth discovered this filter independently of Cauer's work and implemented it in his version with each section isolated from the next with a [[valve amplifier]], which made calculation of component values easy since the filter sections could not interact with each other and each section represented one term in the [[Butterworth polynomials]]. This gives Butterworth the credit for being both the first to deviate from image parameter theory and the first to design active filters. It was later shown that Butterworth filters could be implemented in ladder topology without the need for amplifiers; possibly the first to do so was William Bennett (1932)<ref>William R. Bennett, ''Transmission network'', US Patent 1,849,656, filed 29 June 1929, issued 15 March 1932</ref> in a patent which presents formulae for component values identical to the modern ones. Bennett, at this stage though, is still discussing the design as an artificial transmission line and so is adopting an image parameter approach despite having produced what would now be considered a network synthesis design. He also does not appear to be aware of the work of Butterworth or the connection between them.<ref name=Quad/><ref name=Matt85>Matthaei et al., pp.85-108</ref>
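
In modern network synthesis terms, the doubly terminated Butterworth ladder prototype (normalised to 1&nbsp;Ω terminations and a cut-off of 1&nbsp;rad/s) has the closed-form element values

:<math>g_k = 2\sin\left(\frac{(2k-1)\pi}{2n}\right), \qquad k = 1, 2, \ldots, n,</math>

the ''g''<sub>''k''</sub> being alternately the shunt capacitances and series inductances of the ladder – formulae of the kind referred to above.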

===Insertion-loss method===
The insertion-loss method of designing filters is, in essence, to prescribe a desired function of frequency for the filter as an attenuation of the signal when the filter is inserted between the terminations, relative to the level that would have been received were the terminations connected to each other via an ideal transformer perfectly matching them. Versions of this theory are due to [[Sidney Darlington]], Wilhelm Cauer and others, all working more or less independently, and the method is often taken as synonymous with network synthesis. Butterworth's filter implementation is, in those terms, an insertion-loss filter, but it is a relatively trivial one mathematically since the active amplifiers used by Butterworth ensured that each stage individually worked into a resistive load. Butterworth's filter becomes a non-trivial example when it is implemented entirely with passive components. An even earlier filter which influenced the insertion-loss method was Norton's dual-band filter, where the inputs of two filters are connected in parallel and designed so that the combined input presents a constant resistance. Norton's design method, together with Cauer's canonical LC networks and Darlington's theorem that only LC components were required in the body of the filter, resulted in the insertion-loss method. However, ladder topology proved to be more practical than Cauer's canonical forms.<ref name=Darl8>Darlington, p.8</ref>
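
In quantitative terms, if <math>\scriptstyle P_0</math> is the power the source would deliver to the load through the perfectly matching ideal transformer and <math>\scriptstyle P_L</math> is the power actually delivered through the filter, the prescribed quantity is the insertion loss

:<math>A(\omega) = 10\log_{10}\frac{P_0}{P_L}\ \text{dB},</math>

a function of frequency which can never be negative for a passive filter.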

Darlington's insertion-loss method is a generalisation of the procedure used by Norton. In Norton's filter it can be shown that each filter is equivalent to a separate filter unterminated at the common end. Darlington's method applies to the more straightforward and general case of a 2-port LC network terminated at both ends. The procedure consists of the following steps (some of the standard relations involved are sketched after the list):
#determine the poles of the prescribed insertion-loss function,
#from that find the complex transmission function,
#from that find the complex [[reflection coefficient#Telecommunications|reflection coefficients]] at the terminating resistors,
#find the driving point impedance from the short-circuit and open-circuit impedances,<ref group=note name=Zoc/>
#expand the driving point impedance into an LC (usually ladder) network.
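
Steps 2 to 4 rest on the standard relations for a lossless network between resistive terminations: since no power can be dissipated inside the network, the transmission function ''T'' and the reflection coefficient Γ at a termination of resistance ''R''<sub>1</sub> satisfy

:<math>|\Gamma(i\omega)|^2 = 1 - |T(i\omega)|^2, \qquad Z(s) = R_1\,\frac{1+\Gamma(s)}{1-\Gamma(s)},</math>

''Z''(''s'') being the driving point impedance which is expanded into the ladder in the final step.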

Darlington additionally used a transformation found by [[Hendrik Bode]] that predicted the response of a filter using non-ideal components but all with the same ''Q''. Darlington used this transformation in reverse to produce filters with a prescribed insertion-loss with non-ideal components. Such filters have the ideal insertion-loss response plus a flat attenuation across all frequencies.<ref name=Darl9/><ref>Vasudev K Aatre, ''Network theory and filter design'', p.355, New Age International 1986, ISBN 0-85226-014-8</ref>
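
In modern terms, the transformation amounts to the observation that if every inductance ''L'' is given a series resistance ''aL'' and every capacitance ''C'' a shunt conductance ''aC'' – so that every element has the same ''Q'' at any given frequency – then every branch impedance ''sL'' becomes (''s''&nbsp;+&nbsp;''a'')''L'', every admittance ''sC'' becomes (''s''&nbsp;+&nbsp;''a'')''C'', and hence

:<math>H_{\mathrm{lossy}}(s) = H_{\mathrm{lossless}}(s+a),</math>

so a lossless design displaced by the same amount in the opposite direction compensates, in advance, for the losses of the real components.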

===Elliptic filters===
{{Main|Elliptic filter}}
Elliptic filters are filters produced by the insertion-loss method which use [[elliptic rational functions]] in their transfer function as an approximation to the ideal filter response and the result is called a Chebyshev approximation. This is the same Chebyshev approximation technique used by Cauer on image filters but follows the Darlington insertion-loss design method and uses slightly different elliptic functions. Cauer had some contact with Darlington and Bell Labs before WWII (for a time he worked in the US) but during the war they worked independently, in some cases making the same discoveries. Cauer had disclosed the Chebyshev approximation to Bell Labs but had not left them with the proof. [[Sergei Alexander Schelkunoff|Sergei Schelkunoff]] provided this and a generalisation to all equal ripple problems. Elliptic filters are a general class of filter which incorporate several other important classes as special cases: Cauer filter (equal [[Ripple (electrical)#Frequency-domain ripple|ripple]] in passband and [[stopband]]), Chebyshev filter (ripple only in passband), reverse Chebyshev filter (ripple only in stopband) and Butterworth filter (no ripple in either band).<ref name=Darl8/><ref>Matthaei et al., p.95</ref>

Generally, for insertion-loss filters where the transmission zeroes and infinite losses are all on the real axis of the complex frequency plane (which they usually are for minimum component count), the insertion-loss function can be written as:

:<math> \frac{1}{1+JF^2} </math>

where ''F'' is either an even (resulting in an [[antimetric (electrical networks)|antimetric]] filter) or an odd (resulting in a symmetric filter) function of frequency. Zeroes of ''F'' correspond to zero loss and the poles of ''F'' correspond to transmission zeroes. ''J'' sets the passband ripple height and the stopband loss and these two design requirements can be interchanged. The zeroes and poles of ''F'' and ''J'' can be set arbitrarily. The nature of ''F'' determines the class of the filter:
*if ''F'' is a Chebyshev approximation the result is a Chebyshev filter,
*if ''F'' is a maximally flat approximation the result is a passband maximally flat filter,
*if 1/''F'' is a Chebyshev approximation the result is a reverse Chebyshev filter,
*if 1/''F'' is a maximally flat approximation the result is a stopband maximally flat filter.

A Chebyshev response simultaneously in the passband and stopband is possible, such as Cauer's equal ripple elliptic filter.<ref name=Darl8/>
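
Concretely, the maximally flat and Chebyshev passband cases correspond to the choices

:<math>F(\omega) = \left(\frac{\omega}{\omega_c}\right)^n \qquad \text{and} \qquad F(\omega) = T_n\left(\frac{\omega}{\omega_c}\right),</math>

respectively (with <math>\scriptstyle J=1</math> and <math>\scriptstyle J=\varepsilon^2</math> in the notation of the Approximation section above), while in Cauer's equal ripple case ''F'' is an elliptic rational function with both zeroes and poles at finite frequencies.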

Darlington relates that he found in the New York City library [[Carl Gustav Jacob Jacobi|Carl Jacobi]]'s original paper on elliptic functions, published in Latin in 1829. In this paper Darlington was surprised to find foldout tables of the exact elliptic function transformations needed for Chebyshev approximations of both Cauer's image parameter and Darlington's insertion-loss filters.<ref name=Darl9/>

===Other methods===
Darlington considers the topology of coupled tuned circuits to involve a separate approximation technique to the insertion-loss method, but one also producing nominally flat passbands and high attenuation stopbands. The most common topology for these is shunt anti-resonators coupled by series capacitors, less commonly by inductors, or, in the case of a two-section filter, by mutual inductance. These are most useful where the design requirement is not too stringent, that is, moderate bandwidth, roll-off and passband ripple.<ref name=Darl12/>

==Other notable developments and applications==

===Mechanical filters===
{{main|Mechanical filter}}
[[File:Norton mechanical filter.png|thumb|250px|Norton's mechanical filter together with its electrical equivalent circuit. Two equivalents are shown: "Fig.3" directly corresponds to the physical relationship of the mechanical components; "Fig.4" is an equivalent transformed circuit arrived at by repeated application of [[Equivalent impedance transforms#Transform 5.2|a well known transform]], the purpose being to remove the series resonant circuit from the body of the filter leaving a simple ''LC'' ladder network.<ref>E. L. Norton, "Sound reproducer", US Patent US1792655, filed 31 May 1929, issued 17 February 1931</ref>]]
[[Edward Lawry Norton|Edward Norton]], around 1930, designed a mechanical filter for use on [[phonograph]] recorders and players. Norton designed the filter in the electrical domain and then used the correspondence of mechanical quantities to electrical quantities to realise the filter using mechanical components. [[Mass]] corresponds to [[inductance]], [[stiffness]] to elastance and [[damping]] to [[Electrical resistance|resistance]]. The filter was designed to have a [[Butterworth filter#Maximal flatness|maximally flat]] frequency response.<ref name=Darl7/>
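
The correspondence can be seen by comparing the equations of motion of the two systems: for a mass ''m'' with damping (mechanical resistance) ''r'' and stiffness ''k'' driven by a force ''f'', and for an electrical mesh carrying charge ''q'' driven by a voltage ''e'',

:<math>m\frac{d^2x}{dt^2} + r\frac{dx}{dt} + kx = f \qquad \longleftrightarrow \qquad L\frac{d^2q}{dt^2} + R\frac{dq}{dt} + Dq = e,</math>

where ''D''&nbsp;=&nbsp;1/''C'' is the elastance, so that an electrical design carries over term by term into mechanical components.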

In modern designs it is common to use quartz [[crystal filter]]s, especially for narrowband filtering applications. The signal exists as a mechanical acoustic wave while it is in the crystal and is converted by [[transducer]]s between the electrical and mechanical domains at the terminals of the crystal.<ref>Vizmuller, P, ''RF Design Guide: Systems, Circuits, and Equations'', pp.81-84, Artech House, 1995 ISBN 0-89006-754-6</ref>

===Transversal filters===
Transversal filters are not usually associated with passive implementations but the concept can be found in a Wiener and Lee patent from 1935 which describes a filter consisting of a cascade of [[All-pass filter|all-pass section]]s.<ref>N Wiener and Yuk-wing Lee, ''Electric network system'', United States patent US2024900, 1935</ref> The outputs of the various sections are summed in the proportions needed to result in the required frequency function. This works by the principle that certain frequencies will be in, or close to, antiphase at different sections and will tend to cancel when added. These are the frequencies rejected by the filter, and the approach can produce filters with very sharp cut-offs. This approach did not find any immediate applications, and is not common in passive filters. However, the principle finds many applications as an active delay line implementation for wide band [[discrete-time]] filter applications such as television, radar and high-speed data transmission.<ref name=Darl11/><ref>B. S. Sonde, ''Introduction to System Design Using Integrated Circuits'', pp.252-254, New Age International 1992 ISBN 81-224-0386-7</ref>

===Matched filter===
{{main|matched filter}}
The purpose of matched filters is to maximise the [[signal-to-noise ratio]] (S/N) at the expense of pulse shape. Unlike in many other applications, pulse shape is unimportant in radar, while S/N is the primary limitation on performance. The filters were introduced during WWII (described 1943)<ref>D. O. North, [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1444313 "An analysis of the factors which determine signal/noise discrimination in pulsed carrier systems"], ''RCA Labs. Rep. PTR-6C'', 1943</ref> by Dwight North and are often eponymously referred to as "[[Matched filter|North filters]]".<ref name=Darl11>Darlington, p.11</ref><ref>Nadav Levanon, Eli Mozeson, ''Radar Signals'', p.24, Wiley-IEEE 2004 ISBN 0-471-47378-2</ref>

===Filters for control systems===
Control systems have a need for smoothing filters in their feedback loops with criteria to maximise the speed of movement of a mechanical system towards the prescribed mark while at the same time minimising overshoot and noise-induced motions. A key problem here is the extraction of [[Gaussian distribution|Gaussian signals]] from a noisy background. An early paper on this was published during WWII by [[Norbert Wiener]] with the specific application to anti-aircraft fire control analogue computers. Rudolf Kalman ([[Kalman filter]]) later reformulated this in terms of [[State space (controls)|state-space]] smoothing and prediction where it is known as the [[linear-quadratic-Gaussian control]] problem. Kalman started an interest in state-space solutions, but according to Darlington this approach can also be found in the work of Heaviside and earlier.<ref name=Darl11/>

==Modern practice==
LC passive filters gradually became less popular as active amplifying elements, particularly [[operational amplifier]]s, became cheaply available. The reason for the change is that wound components (the usual method of manufacture for inductors) are far from ideal, the wire adding resistance as well as inductance to the component. Inductors are also relatively expensive and are not "off-the-shelf" components. On the other hand, the function of LC ladder sections, LC resonators and RL sections can be replaced by RC components in an amplifier feedback loop (active filters). These components will usually be much more cost effective, and smaller as well. Cheap digital technology, in its turn, has largely supplanted analogue implementations of filters. However, there is still an occasional place for them in simpler applications, such as coupling, where sophisticated functions of frequency are not needed.<ref>Jack L. Bowers, "R-C bandpass filter design", ''Electronics'', '''vol 20''', pages 131-133, April 1947</ref><ref>Darlington, pp.12-13</ref>
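
As a simple illustration, a first-order low-pass response with cut-off frequency

:<math>f_c = \frac{1}{2\pi RC}</math>

can be obtained from a resistor and capacitor buffered by an amplifier, with no inductor required; cascading such RC sections inside amplifier feedback loops allows the higher-order responses of LC designs to be reproduced.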

==See also==
*[[Audio filter]]
*[[Composite image filter]]
*[[Digital filter]]
*[[Electronic filter]]
*[[Linear filter]]
*[[Network synthesis filters]]

==Footnotes==
{{reflist|group=note|2}}

==References==
{{reflist|3}}

==Bibliography==
:*[[Vitold Belevitch|Belevitch, V]], "Summary of the history of circuit theory", ''Proceedings of the IRE'', '''vol 50''', Iss 5, pp.848-855, May 1962 {{doi|10.1109/JRPROC.1962.288301}}.
:*Blanchard, J, "The History of Electrical Resonance", ''Bell System Technical Journal'', '''vol 23''', pp.415–433, 1944.
:*E. Cauer, W. Mathis, and R. Pauli, "Life and Work of Wilhelm Cauer (1900–1945)", ''Proceedings of the Fourteenth International Symposium of Mathematical Theory of Networks and Systems (MTNS2000)'', Perpignan, June 2000. [http://www.cs.princeton.edu/courses/archive/fall03/cs323/links/cauer.pdf Retrieved online] 19 September 2008.
:*Darlington, S, "A history of network synthesis and filter theory for circuits composed of resistors, inductors, and capacitors", ''IEEE Trans. Circuits and Systems'', '''vol 31''', pp.3-13, 1984 {{doi|10.1109/TCS.1984.1085415}}.
:*Bruce J. Hunt, [http://books.google.com/books?id=23rBH11Q9w8C&printsec=frontcover ''The Maxwellians''], Cornell University Press, 2005 ISBN 0-8014-8234-8.
:*Lundheim, L, "On Shannon and Shannon's Formula", ''Telektronikk'', '''vol 98''', no. 1, 2002, pp. 20-29 [http://www.iet.ntnu.no/groups/signal/people/lundheim/Page_020-029.pdf retrieved online] 25 September 2008.
:*Matthaei, Young, Jones, ''Microwave Filters, Impedance-Matching Networks, and Coupling Structures'', McGraw-Hill 1964.

==Further reading==
*Fry, T C, [http://www.ams.org/journals/bull/1929-35-04/S0002-9904-1929-04747-5/ "The use of continued fractions in the design of electrical networks"], ''Bulletin of the American Mathematical Society'', volume 35, pages 463-498, 1929 (full text available).

{{good article}}

{{DEFAULTSORT:Analogue Filter}}
[[Category:Linear filters]]
[[Category:Filter theory]]
[[Category:Analog circuits]]
[[Category:History of electronic engineering]]
[[Category:Electronic design]]