In [[computer vision]], the '''Kanade–Lucas–Tomasi (KLT) feature tracker''' is an approach to [[feature extraction]]. It was proposed to address the problem that traditional [[image registration]] techniques are generally computationally expensive. KLT makes use of spatial intensity information to direct the search for the position that yields the best match, and it is faster than traditional techniques because it examines far fewer potential matches between the images.
 
==The registration problem==
 
The translational image registration problem can be characterized as follows: given two functions <math>F(x)</math> and <math>G(x)</math> that give the values at each location <math>x</math> (a vector) in two images, find the disparity vector <math>h</math> that minimizes some measure of the difference between <math>F(x+h)</math> and <math>G(x)</math>, for <math>x</math> in some region of interest <math>R</math>.
 
Some measures of the difference between <math>F(x+h)</math> and <math>G(x)</math>:
* L<sub>1</sub> norm = <math>\sum_{x\in R}\left\vert F(x+h)-G(x) \right\vert</math>
* L<sub>2</sub> norm = <math>\sqrt{\sum_{x\in R}\left [F(x+h)-G(x)\right ]^{2}}</math>
* Negative of normalized correlation <br> = <math>\dfrac{-\sum_{x\in R}F(x+h)G(x)}{\sqrt{\sum_{x\in R}F(x+h)^{2}}\sqrt{\sum_{x\in R}G(x)^{2}}}</math>
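These measures can be evaluated directly from the sampled values. A minimal Python/NumPy sketch (the signal, grid, and disparity below are illustrative, not taken from the original papers):

```python
import numpy as np

# Illustrative 1-D "images": G is F shifted by the true disparity h.
x = np.arange(20)
h = 2
F_shifted = np.sin(0.3 * (x + h))   # F(x + h) for the candidate h
G = np.sin(0.3 * (x + h))           # G(x) = F(x + h)

l1 = np.sum(np.abs(F_shifted - G))              # L1 norm
l2 = np.sqrt(np.sum((F_shifted - G) ** 2))      # L2 norm
ncc = (-np.sum(F_shifted * G)
       / (np.sqrt(np.sum(F_shifted ** 2)) * np.sqrt(np.sum(G ** 2))))

# At the true disparity both norms vanish and the negative
# normalized correlation attains its minimum of -1.
```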
 
==Basic description of the registration algorithm==
 
The KLT feature tracker is based on two papers.
In the first, Lucas and Kanade<ref name="LK">Bruce D. Lucas and Takeo Kanade. An Iterative Image Registration Technique with an Application to Stereo Vision. ''International Joint Conference on Artificial Intelligence'', pages 674–679, 1981.</ref>
developed the idea of a local search using gradients
weighted by an approximation to the second derivative of the image.
 
===One dimensional case===
 
If <math>h</math> is the displacement between two images <math>F(x)</math>
and <math>G(x) = F(x+h)</math> then the approximation is made that
 
: <math>F'(x) \approx \dfrac{F(x+h)-F(x)}{h}=\dfrac{G(x)-F(x)}{h}\,</math>
 
so that
 
: <math>h \approx \dfrac{G(x)-F(x)}{F'(x)}\,</math>
 
This approximation to the gradient of the image is accurate only if
the displacement of the local area between the two images to be registered
is not too large. The approximation to <math>h</math> depends on <math>x</math>.
To combine the various estimates of <math>h</math> at different values of <math>x</math>,
it is natural to average them:
 
: <math>h \approx \dfrac{\sum_{x}\dfrac{G(x)-F(x)}{F'(x)}}{\sum_{x}1}.</math>
 
The average can be improved further by weighting each term's contribution
in inverse proportion to an estimate of <math>\left \vert F''(x) \right \vert </math>,
where
 
: <math>F''(x) \approx \dfrac{G'(x)-F'(x)}{h}.</math>
 
To simplify notation, a [[weighting function]] is defined:
 
: <math>w(x) = \dfrac{1}{\left \vert G'(x)-F'(x) \right \vert}.</math>
 
The weighted average is then:
 
: <math>h = \dfrac{\sum_{x}\dfrac{w(x)\left [ G(x)-F(x) \right ]}{F'(x)}}{\sum_{x}w(x)}.</math>
 
Upon obtaining the estimate, <math>F(x)</math> can be shifted by the estimate of <math>h</math>. The procedure is
applied repeatedly, yielding a type of [[Newton-Raphson]] iteration. Ideally, the sequence of estimates converges to the
best <math>h</math>. The iteration can be expressed as<br>
<math>
\begin{cases}
h_{0} = 0 \\
h_{k+1} = h_{k} + \dfrac{\sum_{x}\dfrac{w(x)\left [ G(x)-F(x+h_{k})\right ]}{F'(x+h_{k})}}{\sum_{x}w(x)}
\end{cases}
</math>
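The iteration above can be sketched numerically. In the sketch below the test signal, region of interest, and true displacement are illustrative choices, not values from the paper; an analytic signal is used so that <math>F(x+h_k)</math> and <math>F'(x+h_k)</math> can be evaluated exactly:

```python
import numpy as np

# Weighted 1-D Lucas-Kanade iteration on an analytic test signal.
F, Fp = np.sin, np.cos              # F and its derivative F'
h_true = 0.5
x = np.linspace(0.1, 0.8, 50)       # region of interest R (F' stays away from 0)
G = F(x + h_true)                   # G(x) = F(x + h_true)

# w(x) = 1 / |G'(x) - F'(x)|, with a small guard against division by zero
w = 1.0 / (np.abs(Fp(x + h_true) - Fp(x)) + 1e-12)

h = 0.0
for _ in range(10):
    h += np.sum(w * (G - F(x + h)) / Fp(x + h)) / np.sum(w)

# h converges to h_true = 0.5
```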
 
===An alternative derivation===
 
The derivation above does not generalize well to two dimensions, because the 2-D [[linear approximation]] takes a different form. This
can be corrected by applying the linear approximation in the form:
 
: <math>F(x+h) \approx F(x)+hF'(x),</math>
 
to find the <math>h</math> that minimizes the L<sub>2</sub> norm measure of the difference (or error) between the curves,
where the error can be expressed as:
 
: <math>E=\sum_{x}\left [F(x+h)-G(x)\right ]^{2}.</math>
 
To minimize the error with respect to <math>h</math>, partially differentiate <math>E</math> and set the derivative to zero:
 
: <math>\begin{align}
0 & = \dfrac{\partial E}{\partial h} \\
  & \approx \dfrac{\partial}{\partial h}\sum_{x}\left [F(x)+hF'(x)-G(x)\right ]^{2} \\
  & = \sum_{x}2F'(x)\left [F(x)+hF'(x)-G(x)\right ]
\end{align}</math>,
: <math>\Rightarrow h \approx \dfrac{\sum_{x} F'(x)[G(x)-F(x)]}{\sum_{x} F'(x)^{2}}\,</math>
 
This is essentially the same as the 1-D case, except that the weighting function is <math>w(x)=F'(x)^2.</math>
The iteration with weighting can be expressed as:
 
<math>
\begin{cases}
h_0 = 0 \\
h_{k+1}=h_k + \dfrac{\sum_x w(x)F'(x+h_k) \left [G(x)-F(x+h_k)\right ]}{\sum_x w(x)F'(x+h_k)^2}
\end{cases}
</math>
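A sketch of this least-squares form on the same kind of synthetic 1-D signal (all concrete values are illustrative); with <math>w(x)=1</math> each step reduces to <math>h \leftarrow h + \sum F'(G-F)/\sum F'^2</math>:

```python
import numpy as np

# Least-squares form of the 1-D iteration with uniform weights.
F, Fp = np.sin, np.cos
h_true = 0.5
x = np.linspace(0.1, 0.8, 50)       # region of interest R
G = F(x + h_true)                   # G(x) = F(x + h_true)

h = 0.0
for _ in range(10):
    Fx, Fpx = F(x + h), Fp(x + h)
    h += np.sum(Fpx * (G - Fx)) / np.sum(Fpx ** 2)

# h converges to h_true = 0.5
```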
 
===Performance===
 
To evaluate the [[performance]] of the algorithm, it is natural to ask under what conditions, and how fast, the
sequence of <math>h_k</math>'s converges to the real <math>h</math>.<br>
Consider the case:
 
: <math>F(x)=\sin x,</math>
: <math>G(x)=F(x+h)=\sin (x+h).</math>
 
Both versions of the registration algorithm converge to the correct <math>h</math> for <math>\left\vert h\right\vert < \pi</math>,
i.e. for initial misregistrations as large as one-half wavelength. The range of convergence can be extended by suppressing high spatial
frequencies in the image, which can be achieved by [[smoothing]] it; however, smoothing also undesirably suppresses small details.
If the smoothing window is much larger than the object being matched, the object may be suppressed entirely, so that a match is no longer possible.
 
Since lowpass-filtered images can be sampled at lower [[Image resolution|resolution]] with no loss of information, a coarse-to-fine strategy is adopted.
A low-resolution smoothed version of the image can be used to obtain an approximate match. Applying the algorithm to higher
resolution images will refine the match obtained at lower resolution.
 
While smoothing extends the range of convergence, the weighting function improves the accuracy of the approximation and speeds up convergence.
Without weighting, the displacement <math>h_1</math> calculated in the first iteration with <math>F(x)=\sin x</math> falls off to zero as
the displacement approaches one-half wavelength.
 
===Implementation===
 
The implementation requires the calculation of the weighted sums of the quantities <math>F'G,</math> <math>F'F,</math> and
<math>(F')^2</math> over the region of interest <math>R.</math> Although <math>F'(x)</math> cannot be calculated exactly, it can be
estimated by:
 
: <math>F'(x) \approx \dfrac{F(x+\Delta x)-F(x)}{\Delta x},</math>
 
where <math>\Delta x</math> is chosen appropriately small.<br>
More sophisticated techniques can be used to estimate the first derivatives, but in general such techniques are equivalent
to first smoothing the function and then taking the difference.
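The finite-difference estimate can be sketched as follows; the test function and step size are illustrative:

```python
import numpy as np

# Forward-difference estimate of F'(x); dx plays the role of the
# "appropriately small" Delta x.
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
F = np.sin(x)

Fp_est = (F[1:] - F[:-1]) / dx      # estimates F' at the left sample points
Fp_true = np.cos(x[:-1])

# The truncation error of the forward difference is O(dx).
err = np.max(np.abs(Fp_est - Fp_true))
```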
 
===Generalization to multiple dimensions===
 
The registration algorithm for 1-D and 2-D can be generalized to more dimensions. To do so, we try to minimize the L<sub>2</sub>
norm measure of error:
 
: <math>E=\sum_{\mathbf{x}\in R}\left [F(\mathbf{x}+\mathbf{h})-G(\mathbf{x})\right ]^{2},</math>
 
where <math>\mathbf{x}</math> and <math>\mathbf{h}</math> are n-dimensional row vectors.<br>
The analogous linear approximation is:
 
: <math>F(\mathbf{x}+\mathbf{h}) \approx F(\mathbf{x})+\mathbf{h}\left(\dfrac{\partial}{\partial \mathbf{x}}F(\mathbf{x})\right)^{T}.</math>
 
Partially differentiating <math>E</math> with respect to <math>\mathbf{h}</math> and setting the derivative to zero gives:
 
: <math>\begin{align}
0 & = \dfrac{\partial E}{\partial \mathbf{h}} \\
  & \approx \dfrac{\partial}{\partial \mathbf{h}}\sum_{\mathbf{x}}\left [F(\mathbf{x})+\mathbf{h}\left(\dfrac{\partial F}{\partial \mathbf{x}}\right)^{T}-G(\mathbf{x})\right ]^{2} \\
  & = \sum_{\mathbf{x}}2\left [F(\mathbf{x})+\mathbf{h}\left(\dfrac{\partial F}{\partial \mathbf{x}}\right)^{T}-G(\mathbf{x})\right ] \left(\dfrac{\partial F}{\partial \mathbf{x}}\right)
\end{align}</math>,
 
: <math>\Rightarrow \mathbf{h} \approx \left [\sum_{\mathbf{x}}\left [G(\mathbf{x})-F(\mathbf{x})\right ]\left (\dfrac{\partial F}{\partial\mathbf{x}}\right )\right ] \left [\sum_{\mathbf{x}}\left (\dfrac{\partial F}{\partial\mathbf{x}}\right )^{T}\left (\dfrac{\partial F}{\partial\mathbf{x}}\right )\right ]^{-1},</math>
 
which has much the same form as the 1-D version.
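A single multi-dimensional step can be sketched in NumPy; the analytic test image, its gradient, and the displacement are illustrative choices:

```python
import numpy as np

# One step of h = b @ inv(M) with row-vector h, where M = sum grad^T grad
# and b = sum (G - F) grad, on a smooth analytic 2-D "image".
def F(X, Y):
    return np.sin(0.5 * X) + np.cos(0.4 * Y)

def grad_F(X, Y):                   # exact partial derivatives of F
    return 0.5 * np.cos(0.5 * X), -0.4 * np.sin(0.4 * Y)

X, Y = np.meshgrid(np.linspace(0.0, 2.0, 30), np.linspace(0.0, 2.0, 30))
h_true = np.array([0.1, -0.05])
G = F(X + h_true[0], Y + h_true[1])     # G(x) = F(x + h_true)

gx, gy = grad_F(X, Y)
grads = np.stack([gx.ravel(), gy.ravel()], axis=1)  # one gradient row per pixel
diff = (G - F(X, Y)).ravel()

M = grads.T @ grads                 # sum over R of grad^T grad (2x2, symmetric)
b = diff @ grads                    # sum over R of (G - F) grad (row vector)
h = np.linalg.solve(M, b)           # h = b @ inv(M), using symmetry of M

# For this small displacement a single step already lands near h_true.
```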
 
===Further generalizations===
 
The method can also be extended to registrations based on more complex transformations,
such as rotation, scaling, and shearing,
by considering
 
: <math>G(x) = F(Ax+h),</math>
 
where <math>A</math> is a linear spatial transform. The error to be minimized is then
 
: <math>E=\sum_{x}\left [F(Ax+h)-G(x)\right ]^2.</math>
 
To determine the amount <math>\Delta A</math> by which to adjust <math>A</math> and <math>\Delta h</math> by which to adjust <math>h</math>,
again use the linear approximation:
 
: <math>F((A+\Delta A)x+(h+\Delta h))</math>
: <math>\approx F(Ax+h)+(\Delta Ax+\Delta h)\dfrac{\partial}{\partial x}F(x).</math>
 
The approximation can be used similarly to obtain the error expression, which becomes quadratic in the quantities to be minimized.
Differentiating the error expression with respect to those quantities and setting the results to zero yields a set of linear
equations to be solved.
 
A further generalization accounts for the fact that the brightness may differ between the two views,
due to the difference in viewpoint of the cameras or to differences in the processing of the two images. Model the difference
as a linear transformation:
 
: <math>F(x)=\alpha G(x)+\beta,</math>
 
where <math>\alpha</math> represents a contrast adjustment and <math>\beta</math> represents a brightness adjustment.<br>
Combining this expression with the general linear-transformation registration problem yields:
 
: <math>E=\sum_{x}\left [F(Ax+h)-(\alpha G(x)+\beta)\right ]^2</math>
 
as the quantity to minimize with respect to <math>\alpha,</math> <math>\beta,</math> <math>A,</math> and <math>h.</math>
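For the photometric part alone, with the geometric parameters held fixed, the fit for <math>\alpha</math> and <math>\beta</math> reduces to linear least squares, since both enter the model linearly. A sketch on synthetic data (all values illustrative):

```python
import numpy as np

# Recover alpha (contrast) and beta (brightness) from F(x) = alpha G(x) + beta
# by ordinary least squares, with the geometric part (A, h) assumed known.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, 200)          # sample values of G over R
alpha_true, beta_true = 1.5, 0.2
F = alpha_true * G + beta_true          # F(x) = alpha G(x) + beta

# Solve [G 1] [alpha beta]^T = F in the least-squares sense.
design = np.stack([G, np.ones_like(G)], axis=1)
(alpha, beta), *_ = np.linalg.lstsq(design, F, rcond=None)
```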
 
==Detection and tracking of point features==
 
In the second paper, Tomasi and Kanade<ref name="TK">Carlo Tomasi and Takeo Kanade. Detection and Tracking of Point Features. ''Carnegie Mellon University Technical Report CMU-CS-91-132'', April 1991.</ref>
used the same basic method to compute the registration
due to translation, but improved the technique
by tracking features that are well suited to the tracking
algorithm. A feature is selected
if both eigenvalues of its gradient matrix
are larger than some threshold.
 
By a very similar derivation, the problem is formulated as
 
: <math>\nabla d = e\,</math>
 
where <math>\nabla</math> is the gradient matrix, <math>d</math> is the displacement, and <math>e</math> is the error vector. This is the same as the
last formula of Lucas–Kanade above.
A local patch is considered a good feature to track if both eigenvalues
(<math>\lambda_{1}</math> and <math>\lambda_{2}</math>) of <math>\nabla</math>
are larger than a threshold.
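The eigenvalue criterion can be sketched as follows; the patch construction and threshold value are illustrative, and the gradient matrix here is the standard 2×2 sum of outer products of image gradients over the patch:

```python
import numpy as np

# A patch is accepted as a feature when the smaller eigenvalue of its
# 2x2 gradient matrix exceeds a threshold.
def gradient_matrix(patch):
    gy, gx = np.gradient(patch.astype(float))   # image derivatives
    return np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                     [np.sum(gx * gy), np.sum(gy * gy)]])

def is_good_feature(patch, threshold):
    lam = np.linalg.eigvalsh(gradient_matrix(patch))  # sorted ascending
    return bool(lam[0] > threshold)

flat = np.ones((7, 7))                       # textureless: both eigenvalues zero
y, x = np.mgrid[0:7, 0:7]
corner = ((x > 3) & (y > 3)).astype(float)   # corner-like: both eigenvalues large
```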
 
A tracking method based on these two papers is generally considered
a KLT tracker.
 
== Improvements and variations ==
 
In a third paper, Shi and Tomasi<ref name="ST">Jianbo Shi and Carlo Tomasi. Good Features to Track. ''IEEE Conference on Computer Vision and Pattern Recognition'', pages 593&ndash;600, 1994.</ref> proposed an additional stage of verifying that features were tracked correctly.
 
An affine transformation is fit between the image of the currently tracked feature and its image from a non-consecutive previous frame. If the affine-compensated image is too dissimilar, the feature is dropped.
 
The reasoning is that between consecutive frames translation is a sufficient model for tracking, but when frames are further apart a more complex model is required, due to more complex motion, perspective effects, and so on.
 
Using a similar derivation as for the KLT, Shi and Tomasi showed that the search can be performed using the formula
 
: <math>Tz = a\,</math>
 
where <math>T</math> is a matrix of gradients, <math>z</math> is a vector of affine coefficients and <math>a</math> is an error vector. Compare this to <math>\nabla d = e</math>.
 
== References ==
 
{{Reflist}}
 
== See also ==
 
* [[Corner_detection#The_Shi_and_Tomasi_corner_detection_algorithm|Kanade–Tomasi features]] in the context of feature detection
* [[Lucas–Kanade method]], an optical flow algorithm derived from reference 1
 
{{DEFAULTSORT:Kanade-Lucas-Tomasi feature tracker}}
[[Category:Motion in computer vision]]
