'''3D sound localization''' refers to [[acoustical engineering]] technology used to identify the location of a sound source in [[three-dimensional space]]. The location of the source is usually determined by the direction of the incoming sound waves (horizontal and vertical angles) and by the distance between the source and the sensors; it is thus a special case of the general source localization problem. Solving it involves both the structural arrangement of the [[sensors]] and the [[signal processing]] techniques applied to their outputs.
 
==Applications==
Sound source localization has many applications, such as sound source separation, sound source tracking and speech enhancement. Underwater [[sonar]] uses sound source localization techniques to identify the location of a target. It is also used in robots for effective human–robot interaction.
 
==Cues for sound localization==
Localization cues<ref>{{cite book|first=E. Bruce|last=Goldstein|title=Sensation and Perception|edition=8th|publisher=Cengage Learning|pages=293–297|isbn=978-0-495-60149-4}}</ref> are the features that help us localize sound. They include binaural and monaural cues.
*Monaural cues can be obtained by means of [[spectral analysis]] and are generally used in vertical localization.
*Binaural cues are generated by the difference in hearing between the left and right ears. They include the [[interaural time difference]] (ITD) and the interaural level difference (ILD), and are mostly used for horizontal localization (see the sketch below).
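As an illustration, the following is a minimal sketch (in Python with NumPy) of how the two binaural cues might be estimated from a pair of ear signals. The function name and the plain cross-correlation estimator are illustrative assumptions, not taken from any particular system described here.

<syntaxhighlight lang="python">
import numpy as np

def binaural_cues(left, right, fs):
    """Estimate the two classic binaural cues from a pair of ear signals.

    left, right : 1-D arrays of samples from the left/right microphones
    fs          : sampling rate in Hz
    Returns (itd_seconds, ild_db).
    """
    # ITD: lag of the peak of the cross-correlation between the two ears.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # in samples; sign tells which ear leads
    itd = lag / fs

    # ILD: level difference between the ears, in decibels.
    rms_left = np.sqrt(np.mean(left ** 2))
    rms_right = np.sqrt(np.mean(right ** 2))
    ild = 20 * np.log10(rms_left / rms_right)
    return itd, ild
</syntaxhighlight>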
 
==Methods==
There are many 3D sound localization methods, used for various applications:
*Different types of sensor structures can be used, such as a [[microphone array]] or a binaural hearing robot head.<ref name="binaural">{{cite conference|author=Nakasima, H. and Mukai, T.|title=3D Sound Source Localization System Based on Learning of Binaural Hearing|conference=Systems, Man and Cybernetics, IEEE 2005|date=Oct 2005|volume=4|pages=3534–3539|doi=10.1109/ICSMC.2005.1571695}}</ref>
*Different techniques can be used to obtain optimal results, such as [[neural network]]s, [[maximum likelihood]] estimation and [[multiple signal classification]] (MUSIC).
*Depending on timeliness requirements, methods can be real-time or offline.
 
===Learning method for binaural hearing ===
[[File:Binaural robot head2.png|thumb|upright|Structure of the binaural robot dummy head]]
 
Binaural hearing learning<ref name="binaural" /> is a [[bionics|bionic]] method. The sensor is a robot dummy head with two microphones and artificial pinnae (reflectors). The head has two rotation axes and can rotate horizontally and vertically. The reflector imposes a direction-dependent pattern on the spectrum of an incoming white-noise sound wave, and this pattern is used as the cue for vertical localization; the cue used for horizontal localization is the ITD.
 
The system learns by rotating the head around a fixed white-noise sound source and analyzing the resulting spectra with [[neural networks]]. Experiments show that the system identifies the direction of the source well within a certain range of angles of arrival, but it cannot identify sound coming from outside that range, where the spectral pattern produced by the reflector collapses.
 
Binaural hearing uses only two microphones and is capable of concentrating on one source among noise and competing sources.
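A rough sketch of the calibration idea behind this learning method is shown below: record white noise arriving from known directions, use the normalized magnitude spectrum as the feature, and train a small feed-forward network to map spectra to angles. The feature choice, network size and data layout are assumptions made for illustration; this is not the published system.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.neural_network import MLPClassifier

def spectral_feature(signal, n_fft=512):
    """Normalized magnitude spectrum of one frame (the vertical-localization cue)."""
    spectrum = np.abs(np.fft.rfft(signal, n_fft))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)  # discard overall level

def train_localizer(recordings):
    """recordings: list of (signal, angle_deg) pairs gathered while rotating
    the head around a fixed white-noise source."""
    X = np.array([spectral_feature(sig) for sig, _ in recordings])
    y = np.array([angle for _, angle in recordings])
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000)
    net.fit(X, y)
    return net

def localize(net, signal):
    """Predict the arrival angle of a new recording."""
    return net.predict([spectral_feature(signal)])[0]
</syntaxhighlight>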
 
===Head-related Transfer Function (HRTF)===
In real sound localization, the whole head and torso play an important functional role, not only the two pinnae. This role can be described as spatial linear filtering, and the filtering is quantified in terms of the head-related transfer function (HRTF).<ref>{{cite conference|author=Keyrouz, F. and Diepold, K.|title=An Enhanced Binaural 3D Sound Localization Algorithm|conference=Signal Processing and Information Technology, IEEE 2006|pages=662–665|date=Aug 2006|doi=10.1109/ISSPIT.2006.270883}}</ref>
 
HRTF-based localization also uses the robot head sensor of the binaural hearing model, so the model has multiple inputs. The HRTF can be derived from various localization cues. Sound localization with HRTFs then amounts to filtering the input signal with a filter designed from the HRTF: instead of neural networks, the head-related transfer function is used directly and localization is based on a simple correlation approach.
 
See more: [[Head-related transfer function]].
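One way such a correlation-based HRTF matcher could look is sketched below: the observed interaural spectrum (left-to-right log-magnitude difference) is compared against templates computed from a bank of measured head-related impulse responses, which cancels the unknown source spectrum. The <code>hrir_bank</code> structure and all names here are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

def locate_with_hrtf(left, right, hrir_bank, n_fft=1024):
    """Pick the direction whose stored HRTF pair best explains the ear signals.

    hrir_bank : dict mapping a direction label to (hrir_left, hrir_right),
                the measured head-related impulse responses for that direction.
    """
    # Interaural spectrum of the observation: left/right log-magnitude difference.
    L = np.fft.rfft(left, n_fft)
    R = np.fft.rfft(right, n_fft)
    observed = np.log(np.abs(L) + 1e-12) - np.log(np.abs(R) + 1e-12)

    best_direction, best_score = None, -np.inf
    for direction, (h_left, h_right) in hrir_bank.items():
        HL = np.fft.rfft(h_left, n_fft)
        HR = np.fft.rfft(h_right, n_fft)
        template = np.log(np.abs(HL) + 1e-12) - np.log(np.abs(HR) + 1e-12)
        # Simple correlation between observed and template interaural spectra.
        score = np.corrcoef(observed, template)[0, 1]
        if score > best_score:
            best_direction, best_score = direction, score
    return best_direction
</syntaxhighlight>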
 
===Cross-power spectrum phase (CSP) analysis===
The CSP method<ref>{{cite conference|author=Hyun-Don Kim; Komatani, K.; Ogata, T.; Okuno, H.G.|title=Evaluation of Two-Channel-Based Sound Source Localization using 3D Moving Sound Creation Tool|conference=ICKS 2008|date=Jan 2008|doi=10.1109/ICKS.2008.25}}</ref> is also used with the binaural model. The idea is that the angle of arrival can be derived from the time delay of arrival (TDOA) between the two microphones, and the TDOA can be estimated by finding the peak of the CSP coefficients:
:<math>\operatorname{csp}_{ij}(k)=\operatorname{IFFT}\left\{\frac{\operatorname{FFT}[s_{i}(n)]\cdot \operatorname{FFT}[s_{j}(n)]^{*}}{\left|\operatorname{FFT}[s_{i}(n)]\right|\cdot\left|\operatorname{FFT}[s_{j}(n)]\right|}\right\}</math>
where <math>s_{i}(n)</math> and <math>s_{j}(n)</math> are the signals entering microphones <math>i</math> and <math>j</math>, respectively. The time delay of arrival <math>\tau</math> (in samples) can then be estimated as
:<math>\tau=\arg\max_{k}\,\operatorname{csp}_{ij}(k)</math>
The sound source direction is then
:<math>\theta=\cos^{-1}\frac{v\cdot\tau}{d_{\max}\cdot F_{s}}</math>
where <math>v</math> is the sound propagation speed, <math>F_{s}</math> is the sampling frequency and <math>d_{\max}</math> is the distance between the two microphones, corresponding to the maximum possible time delay.
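The formulas above translate almost directly into code. The sketch below (the function name and the default speed of sound in air are assumptions) computes the CSP coefficients with FFTs, takes the peak lag as the TDOA, and converts it to an arrival angle:

<syntaxhighlight lang="python">
import numpy as np

def csp_direction(s_i, s_j, fs, d_max, v=343.0):
    """Estimate the arrival angle from two microphone signals via CSP.

    fs    : sampling frequency in Hz
    d_max : distance between the two microphones in metres
    v     : sound propagation speed in m/s (343 m/s in air, assumed)
    """
    n = len(s_i)
    Si = np.fft.fft(s_i, n)
    Sj = np.fft.fft(s_j, n)
    # CSP coefficients: inverse FFT of the magnitude-normalized cross-power spectrum.
    csp = np.real(np.fft.ifft(Si * np.conj(Sj) / (np.abs(Si) * np.abs(Sj) + 1e-12)))

    # TDOA: the lag of the peak; indices beyond n/2 correspond to negative lags.
    k = int(np.argmax(csp))
    tau = k if k <= n // 2 else k - n  # in samples

    # theta = arccos(v * tau / (d_max * fs)), as in the formula above.
    cos_theta = np.clip(v * tau / (d_max * fs), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
</syntaxhighlight>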
 
The CSP method does not require the system impulse-response data that HRTF methods need. An [[expectation–maximization algorithm]] can also be used to localize several sound sources and to reduce localization error; such a system is capable of tracking several moving sound sources using only two microphones.
 
===2D sensor line array===
[[File:2D array demo3.png|thumb|upright|Demonstration of a 2D line sensor array]]
To estimate the location of a source in 3D space, two line sensor arrays can be used, one placed horizontally and one vertically. An example is a 2D line array used for underwater source localization.<ref>{{cite journal|author=Tabrikian, J. and Messer, H.|title=Three-Dimensional Source Localization in a Waveguide|journal=IEEE Transactions on Signal Processing|volume=44|issue=1|date=Jan 1996|doi=10.1109/78.482007|pages=1–13}}</ref> By processing the data from the two arrays with the [[maximum likelihood]] method, the direction, range and depth of the source can be identified simultaneously.

Unlike the binaural hearing model, this approach is closer to a [[spectral analysis]] method. It can localize a distant source, but the system can be much more expensive than the binaural model because it needs more sensors and power.
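The cited waveguide estimator also recovers range and depth; the sketch below shows only the far-field, single-frequency special case, in which the maximum-likelihood angle estimate for one uniform line array reduces to a grid search over conventional beamformer output power. All names, the underwater sound speed and the narrowband snapshot model are assumptions for illustration.

<syntaxhighlight lang="python">
import numpy as np

def line_array_angle(X, d, f0, v=1500.0, angles=np.linspace(0.0, 180.0, 361)):
    """Grid-search angle estimate for one uniform line array.

    X  : (n_sensors, n_snapshots) complex narrowband snapshots at frequency f0 (Hz)
    d  : sensor spacing in metres; v : propagation speed (about 1500 m/s in water)
    """
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]  # sample covariance matrix
    best_angle, best_power = None, -np.inf
    for theta in angles:
        # Far-field steering vector for arrival angle theta.
        phase = 2j * np.pi * f0 * d * np.cos(np.radians(theta)) / v
        a = np.exp(phase * np.arange(n_sensors))
        power = np.real(a.conj() @ R @ a) / n_sensors  # beamformer output power
        if power > best_power:
            best_angle, best_power = theta, power
    return best_angle

# A horizontal and a vertical array each yield one angle; together the two
# angles fix the source direction in 3D space.
</syntaxhighlight>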
 
==See also==
*[[Acoustic source localization]]
*[[Sound localization]]
*[[Vertical sound localization]]
*[[Head-related transfer function]]
*[[Binaural recording]]
 
==References==
{{reflist}}
 
==External links==
*[http://www.youtube.com/watch?v=LJwUVCXH-gM/  A demo of 3d sound]
*[http://www.hitl.washington.edu/scivw/EVE/I.B.1.3DSoundSynthesis.html/ 3d sound synthesis]
 
[[Category:Acoustics]]
