Parallel processing (DSP implementation)


Parallel processing in digital signal processing (DSP) is a technique that duplicates function units so that different tasks (signals) can be operated on simultaneously.[1] Accordingly, the same processing can be performed for different signals on the corresponding duplicated function units. Because a parallel DSP design produces multiple outputs per clock period, it achieves higher throughput than a non-parallel (sequential) design.

Conceptual Example

Consider a function unit (F0) and three tasks (T0, T1 and T2). The times required for F0 to process these tasks are t0, t1 and t2, respectively. If we execute the three tasks in sequential order, the time required to complete them is t0+t1+t2.


However, if we duplicate the function unit into two additional copies, the overall time is reduced to max(t0,t1,t2), which is smaller than the sequential time t0+t1+t2.
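For intuition, the following minimal Python sketch (not part of the original example; the task durations are made up) models the three tasks with sleeps and runs them first sequentially, then on three worker threads standing in for the duplicated function units:

```python
import time
from concurrent.futures import ThreadPoolExecutor

durations = {"T0": 0.3, "T1": 0.5, "T2": 0.2}   # hypothetical t0, t1, t2 in seconds

def run_task(name, t):
    time.sleep(t)       # stands in for the work done by one function unit
    return name

# Sequential: a single function unit F0 handles all tasks -> time ~ t0 + t1 + t2
start = time.perf_counter()
for name, t in durations.items():
    run_task(name, t)
print("sequential:", round(time.perf_counter() - start, 2), "s")

# Parallel: three duplicated function units -> time ~ max(t0, t1, t2)
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    list(pool.map(lambda kv: run_task(*kv), durations.items()))
print("parallel:  ", round(time.perf_counter() - start, 2), "s")
```

The sequential run takes roughly t0+t1+t2 of wall time, while the parallel run takes roughly max(t0,t1,t2).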


Parallel Processing Versus Pipelining

Mechanism:

  • Parallel: duplicated function units working in parallel
    • Each task is processed entirely by a different function unit.
  • Pipelining: different function units working in parallel
    • Each task is split into a sequence of sub-tasks, which are handled by specialized and different function units.

Objective:

  • Pipelining leads to a reduction in the critical path, which can increase the sample speed or reduce power consumption at the same speed.
  • Parallel processing produces multiple outputs, which are computed in parallel within a clock period. Therefore, the effective sampling speed is increased by the level of parallelism.

When both parallel processing and pipelining can be applied, parallel processing is usually the better choice, for the following reasons:

  • Pipelining usually causes I/O bottlenecks
  • Parallel processing can also be used to reduce power consumption while operating with slower clocks
  • A hybrid of pipelining and parallel processing further increases the speed of the architecture

Parallel FIR Filters

Consider a 3-tap FIR filter:[2]

y(n)=ax(n)+bx(n-1)+cx(n-2)

which is shown in the following figure.
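As a software reference for this computation (a minimal sketch; the coefficients and input samples below are made-up values), the filter output can be evaluated directly from the equation, assuming x(n)=0 for n<0:

```python
# 3-tap FIR: y(n) = a*x(n) + b*x(n-1) + c*x(n-2)
def fir3(x, a, b, c):
    """Return y(n) for n = 0..len(x)-1, assuming x(n) = 0 for n < 0."""
    y = []
    for n in range(len(x)):
        xn  = x[n]
        xn1 = x[n - 1] if n >= 1 else 0.0
        xn2 = x[n - 2] if n >= 2 else 0.0
        y.append(a * xn + b * xn1 + c * xn2)
    return y

print(fir3([1.0, 2.0, 3.0, 4.0], a=0.5, b=0.3, c=0.2))
```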

Assume the computation time of a multiplication unit is Tm and that of an addition unit is Ta. The sample period is then bounded by

Tsample≥Tm+2Ta

By parallelizing it into three copies (N=3), the resultant architecture is shown as follows. The sample period now becomes
Tsample≥Tclock/N=(Tm+2Ta)/3

where N represents the number of copies.

Please note that, in a parallel system, Tsample≠Tclock, while Tsample=Tclock holds in a pipelined system.
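A corresponding 3-parallel (block) version can be sketched as follows (again with made-up values). Within each block, the three outputs depend only on inputs that are already available, so in hardware they can be evaluated by three duplicated multiply-add structures in a single clock period:

```python
# 3-parallel (block) FIR: each clock cycle consumes x(3k), x(3k+1), x(3k+2)
# and produces y(3k), y(3k+1), y(3k+2), so the sample period is Tclock/3.
def fir3_block(x, a, b, c):
    x = list(x) + [0.0] * ((-len(x)) % 3)       # pad to a multiple of the block size
    get = lambda n: x[n] if n >= 0 else 0.0     # x(n) = 0 for n < 0
    y = []
    for k in range(len(x) // 3):
        n = 3 * k
        # The three outputs below are independent of each other, so in hardware
        # they map onto three duplicated multiply-add units.
        y += [
            a * get(n)     + b * get(n - 1) + c * get(n - 2),  # y(3k)
            a * get(n + 1) + b * get(n)     + c * get(n - 1),  # y(3k+1)
            a * get(n + 2) + b * get(n + 1) + c * get(n),      # y(3k+2)
        ]
    return y

print(fir3_block([1.0, 2.0, 3.0, 4.0], a=0.5, b=0.3, c=0.2))
```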

Parallel 1st-order IIR Filters

Consider the transfer function of a 1st–order IIR filter formulated as
H(z)=z^(-1)/(1-a*z^(-1))

where |a|<1 for stability; such a filter has a single pole, located at z=a.

The corresponding recursive representation is
y(n+1)=ay(n)+u(n)

Consider the design of a 4-parallel architecture (N=4). In such a parallel system, each delay element represents a block delay, and the clock period is four times the sample period.
Substituting the recursion into itself once gives y(n+2)=a^2y(n)+au(n)+u(n+1); repeating this up to four steps, and then setting n=4k, we have
y(n+4)=a^4y(n)+a^3u(n)+a^2u(n+1)+au(n+2)+u(n+3)

y(4k+4)=a^4y(4k)+a^3u(4k)+a^2u(4k+1)+au(4k+2)+u(4k+3)

The corresponding architecture is shown as follows.
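The look-ahead block recursion can be checked numerically with a short sketch (the coefficient a and the input sequence u are made-up values); the block update reproduces exactly the y(4k) values generated by the original recursion y(n+1)=ay(n)+u(n):

```python
a = 0.9
u = [1.0, -0.5, 0.25, 2.0, 0.0, 1.5, -1.0, 0.5]    # length is a multiple of 4

# Reference: original recursion y(n+1) = a*y(n) + u(n), with y(0) = 0
y = [0.0]
for n in range(len(u)):
    y.append(a * y[n] + u[n])

# 4-parallel loop update: one "block delay" holds y(4k) for a whole clock period
yb = 0.0                                           # y(0)
for k in range(len(u) // 4):
    u0, u1, u2, u3 = u[4*k : 4*k + 4]
    yb = a**4 * yb + a**3 * u0 + a**2 * u1 + a * u2 + u3   # y(4k+4)
    assert abs(yb - y[4*k + 4]) < 1e-9

print("block recursion matches the original recursion")
```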

The resultant parallel design has the following properties.

  • The pole of the original filter is at z=a, while the pole of the parallel system is at z=a^4, which is closer to the origin.
  • The pole movement improves the robustness of the system to the round-off noise.
  • Hardware complexity of this architecture: N^2 multiply-add operations.


Please note that the quadratic (N^2) increase in hardware complexity can be reduced by exploiting concurrency and incremental computation to avoid repeated computations, as sketched below.
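One possible illustration of this remark (a sketch under assumed values, not the reference design itself): only the feedback output y(4k+4) needs the full look-ahead expansion, while the remaining outputs of a block can be chained incrementally from y(4k) with one multiply-add each, avoiding a separate power-of-a expansion for every output:

```python
a = 0.9
u = [1.0, -0.5, 0.25, 2.0, 0.0, 1.5, -1.0, 0.5]

y_block = 0.0                     # y(4k), starting from y(0) = 0
outputs = []
for k in range(len(u) // 4):
    u0, u1, u2, u3 = u[4*k : 4*k + 4]
    # Feedback (loop) output: uses the look-ahead form, so only y(4k) is needed
    y4 = a**4 * y_block + a**3 * u0 + a**2 * u1 + a * u2 + u3   # y(4k+4)
    # Non-feedback outputs: chained incrementally, one multiply-add each,
    # instead of expanding each of them in powers of a
    y1 = a * y_block + u0         # y(4k+1)
    y2 = a * y1 + u1              # y(4k+2)
    y3 = a * y2 + u2              # y(4k+3)
    outputs += [y1, y2, y3, y4]
    y_block = y4                  # block delay feeding the next clock period

print(outputs)
```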

Parallel Processing for Low Power

Another advantage of parallel processing is that it can reduce the power consumption of a system by allowing the supply voltage to be lowered.
Consider the power consumption of an ordinary (sequential) CMOS circuit:
Pseq=Ctotal*V0^2*f

where Ctotal represents the total capacitance of the CMOS circuit, V0 is the supply voltage, and f is the clock frequency.


For an N-parallel version, the charging capacitance per operation remains the same, but the total capacitance increases by a factor of N.
To maintain the same sample rate, the clock period of the N-parallel circuit increases to N times the propagation delay of the original circuit.
This prolongs the available charging time by a factor of N, so the supply voltage can be reduced to βV0, where 0<β<1.

Therefore, the power consumption of the N-parallel system can be formulated as
Ppara=(N*Ctotal)*(βV0)^2*(f/N)=β^2*Pseq

where β can be computed by solving

N(βV0-Vt)^2=β(V0-Vt)^2

in which Vt denotes the threshold voltage of the device.
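For concreteness, β can be found by expanding this condition into a quadratic equation and keeping the root for which Vt<βV0<V0. The sketch below uses example values of N, V0 and Vt that are not taken from the article:

```python
import math

def solve_beta(N, V0, Vt):
    # Expand N*(b*V0 - Vt)^2 = b*(V0 - Vt)^2 into A*b^2 + B*b + C = 0
    A = N * V0**2
    B = -(2 * N * V0 * Vt + (V0 - Vt)**2)
    C = N * Vt**2
    disc = math.sqrt(B**2 - 4 * A * C)
    roots = [(-B - disc) / (2 * A), (-B + disc) / (2 * A)]
    # Keep the physically meaningful root: the reduced supply beta*V0 must stay above Vt
    return next(b for b in roots if Vt / V0 < b < 1.0)

N, V0, Vt = 4, 3.3, 0.45      # example values: 4-parallel, 3.3 V supply, 0.45 V threshold
beta = solve_beta(N, V0, Vt)
print(f"beta = {beta:.3f}, reduced supply = {beta*V0:.2f} V, Ppara/Pseq = {beta**2:.3f}")
```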

References

  1. K.K. Parhi, VLSI Digital Signal Processing Systems: Design and Implementation, John Wiley & Sons, 1999
  2. Slides for VLSI Digital Signal Processing Systems: Design and Implementation, John Wiley & Sons, 1999 (ISBN 0-471-24186-5): http://www.ece.umn.edu/users/parhi/slides.html