In [[mathematics]], the '''dimension theorem for vector spaces''' states that all [[Basis (linear algebra)|bases]] of a [[vector space]] have equally many elements. This number of elements may be finite, or given by an infinite [[cardinal number]], and defines the [[Dimension (vector space)|dimension]] of the space.
 
Formally, the '''dimension theorem for vector spaces''' states that
 
:Given a [[vector space]] ''V'', any two [[linearly independent]] [[generating set]]s (in other words, any two bases) have the same [[cardinality]].
 
If ''V'' is [[finitely generated module|finitely generated]], then it has a finite basis, and the result says that any two bases have the same number of elements.
 
While the proof of the existence of a basis for any vector space in the general case requires [[Zorn's lemma]] and is in fact equivalent to the [[axiom of choice]], the uniqueness of the cardinality of the basis requires only the [[ultrafilter lemma]],<ref>Howard, P., Rubin, J.: "Consequences of the axiom of choice" - Mathematical Surveys and Monographs, vol 59 (1998) ISSN 0076-5376.</ref> which is strictly weaker (the proof given below, however, assumes [[trichotomy (mathematics)|trichotomy]], i.e., that all [[cardinal number]]s are comparable, a statement which is also equivalent to the axiom of choice). The theorem can be generalized to arbitrary [[module (mathematics)|''R''-modules]] for rings ''R'' having [[invariant basis number]].
 
The theorem for the finitely generated case can be proved with elementary arguments of [[linear algebra]], and requires no form of the axiom of choice.
 
==Proof==
 
Assume that { ''a''<sub>''i''</sub>: ''i'' ∈ ''I'' } and
{ ''b''<sub>''j''</sub>: ''j'' ∈ ''J'' } are both bases, with the cardinality of ''I'' bigger than the cardinality of ''J''.  From this assumption we will derive a contradiction.
 
===Case 1===
Assume that ''I'' is infinite.
 
Every ''b''<sub>''j''</sub> can be written as a finite sum
:<math>b_j = \sum_{i\in E_j} \lambda_{i,j} a_i </math>, where <math>E_j</math> is a finite subset of <math>I</math>.
Since the cardinality of ''I'' is greater than that of ''J'' and the ''E<sub>j</sub>'s'' are finite subsets of ''I'', the cardinality of ''I'' is also bigger than the cardinality of <math>\bigcup_{j\in J} E_j</math>.  (Note that this argument works ''only'' for infinite ''I''.)  So there is some <math>i_0\in I</math> which does not appear
in any <math>E_j</math>. The corresponding <math>a_{i_0}</math> can be expressed as a finite linear combination of <math>b_j</math>'s, which in turn can be expressed as a finite linear combination of <math>a_i</math>'s not involving <math>a_{i_0}</math>. Hence <math>a_{i_0}</math> is linearly dependent on the other <math>a_i</math>'s, contradicting the linear independence of the basis.
 
===Case 2===
Now assume that ''I'' is finite and of cardinality bigger than the cardinality of ''J''.  Write ''m'' and ''n'' for the cardinalities of ''I'' and ''J'', respectively.
Every ''a''<sub>''i''</sub> can be written as a sum
:<math>a_i = \sum_{j\in J} \mu_{i,j} b_j </math>
The matrix  <math> (\mu_{i,j}: i\in I, j\in J)</math> has ''n'' columns (the ''j''-th column is the
''m''-tuple <math> (\mu_{i,j}: i\in I)</math>), so it has rank at most ''n''. [[Vicious circle|This means]] that [[Rank (linear algebra)#Proofs that column rank = row rank|its ''m'' rows cannot be linearly independent]]. Write <math>r_i = (\mu_{i,j}: j\in J)</math> for the ''i''-th row; then there is a nontrivial
linear combination
:<math> \sum_{i\in I}  \nu_i r_i = 0</math>
But then also <math>\sum_{i\in I} \nu_i a_i = \sum_{i\in I} \nu_i \sum_{j\in J} \mu_{i,j} b_j = \sum_{j\in J} \biggl(\sum_{i\in I} \nu_i\mu_{i,j} \biggr) b_j = 0, </math>
so the <math> a_i</math> are linearly dependent, contradicting the assumption that they form a basis.
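The rank argument in Case 2 can be checked concretely: a matrix with more rows than columns always admits a nontrivial vanishing combination of its rows. Below is a minimal Python sketch with illustrative coefficients chosen by hand (they are not taken from the text):

```python
# Illustration of Case 2 with m = 3 and n = 2: mu[i][j] is the coefficient
# of b_j in the expansion a_i = sum_j mu[i][j] * b_j (values chosen by hand).
mu = [[1, 2],
      [3, 4],
      [5, 6]]

# With m > n the rows cannot be linearly independent; here the nontrivial
# combination nu = (1, -2, 1) annihilates them: row_0 - 2*row_1 + row_2 = 0.
nu = [1, -2, 1]
combo = [sum(nu[i] * mu[i][j] for i in range(3)) for j in range(2)]
print(combo)  # -> [0, 0]
```

The same combination applied to the <math>a_i</math>'s then vanishes, which is exactly the dependence relation derived above.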
 
====Alternative Proof====
The proof above uses several non-trivial results. If these results are not carefully established in advance, the proof may give rise to circular reasoning. Here is a proof of the finite case which requires less prior development.
 
'''Theorem 1:''' If <math>A = (a_1,\dots,a_n) \subseteq V</math> is a linearly independent [[tuple]] in a vector space <math>V</math>, and <math>B_0 = (b_1,\dots,b_r)</math> is a tuple that [[spanning set|spans]] <math>V</math>, then <math>n\leq r</math>.<ref>S. Axler, "Linear Algebra Done Right," Springer, 2000.</ref> The argument is as follows:
 
Since <math>B_0</math> spans <math>V</math>, the tuple <math>(a_1,b_1,\dots,b_r)</math> also spans. Since <math>a_1\neq 0</math> (because <math>A</math> is linearly independent), <math>a_1</math> is a linear combination of the <math>b_i</math>'s with at least one nonzero coefficient, say on <math>b_t</math>. Solving for <math>b_t</math> expresses it as a linear combination of <math>B_1 = (a_1,b_1,\dots,b_{t-1}, b_{t+1}, \dots, b_r)</math>. Thus, <math>B_1</math> is a [[spanning set|spanning tuple]], and its length is the same as <math>B_0</math>'s.
 
Repeat this process: at step <math>i+1</math>, prepend <math>a_{i+1}</math> to the spanning tuple <math>B_i</math>. Since <math>a_{i+1}</math> is a linear combination of <math>B_i</math> in which some <math>b</math> must carry a nonzero coefficient (the <math>a_j</math>'s alone are linearly independent), one of the remaining <math>b</math>'s can be removed while keeping a spanning tuple of the same length. Thus, after <math>n</math> iterations, the result is a tuple <math>B_n = (a_1, \ldots, a_n, b_{m_1}, \ldots, b_{m_k})</math> (possibly with <math>k=0</math>) of length <math>r</math>. In particular, <math>A \subseteq B_n</math>, so <math>|A| \leq |B_n|</math>, i.e., <math>n \leq r</math>.
 
To prove the finite case of the dimension theorem from this, suppose that <math>V</math> is a vector space and <math>S = \{v_1, \ldots, v_n\}</math> and <math>T = \{w_1, \ldots, w_m\}</math> are both bases of <math>V</math>.  Since <math>S</math> is linearly independent and <math>T</math> spans, we can apply Theorem 1 to get <math>m \geq n</math>. And since <math>T</math> is linearly independent and <math>S</math> spans, we get <math>n \geq m</math>.  From these, we get <math>m=n</math>.
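Theorem 1's bound shows up computationally as a rank defect: any tuple longer than a spanning tuple is linearly dependent, so a matrix whose rows are those vectors has rank strictly less than its number of rows. A small sketch using exact arithmetic (the example vectors are illustrative, not from the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of row lists) by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        # Find a pivot at or below row r in this column.
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Three vectors in a space spanned by 2 vectors (here, R^2 with its
# standard basis) must be dependent: rank < number of vectors.
vectors = [[1, 0], [0, 1], [2, 3]]
print(rank(vectors))  # -> 2
```

This is only a numeric illustration of the bound <math>n \leq r</math>; the theorem itself is about arbitrary vector spaces.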
 
==Kernel extension theorem for vector spaces==
This application of the dimension theorem is sometimes itself called the ''dimension theorem''. Let
 
:''T'': ''U'' → ''V''
 
be a [[linear transformation]]. Then
 
:''dim''(''range''(''T'')) + ''dim''(''kernel''(''T'')) = ''dim''(''U''),
 
that is, the dimension of ''U'' is equal to the dimension of the transformation's [[Range (mathematics)|range]] plus the dimension of the [[Kernel (algebra)|kernel]]. See [[rank-nullity theorem]] for a fuller discussion.
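For a concrete map <math>T(x) = Mx</math>, the identity can be checked numerically: the rank of <math>M</math> is the dimension of the range, and an orthonormal basis of the kernel can be read off from the singular value decomposition. A sketch with an illustrative matrix (chosen here, not from the text):

```python
import numpy as np

# Check dim(range(T)) + dim(kernel(T)) = dim(U) for T(x) = M @ x,
# where U = R^4 and M is a 2x4 matrix with a repeated direction.
M = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0]])  # second row = 2 * first row

_, s, Vh = np.linalg.svd(M)           # full SVD: Vh is 4x4
rank = int(np.sum(s > 1e-10))         # dim(range(T)) = number of nonzero singular values
kernel_basis = Vh[rank:]              # remaining rows of Vh span kernel(T)

assert np.allclose(M @ kernel_basis.T, 0)  # each basis row really lies in the kernel
print(rank + len(kernel_basis))  # -> 4, which is dim(U)
```

The rows of <code>Vh</code> beyond the rank are orthonormal and annihilated by <code>M</code>, so they form a genuine basis of the kernel rather than a count derived from the theorem itself.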
 
==References==
<references />
 
{{DEFAULTSORT:Dimension Theorem For Vector Spaces}}
[[Category:Theorems in abstract algebra]]
[[Category:Theorems in linear algebra]]
[[Category:Articles containing proofs]]

Revision as of 18:21, 13 July 2013
