# Talk:Prime number theorem

## Miscellaneous

Not sure how to change that myself but the mathematical notation here is wrong: http://en.wikipedia.org/math/a2184dbe1f8958ef9bf635237ddf9573.png It should not be the approx symbol (2 waves) but a symbol with 1 wave. Strictly speaking the approx symbol is wrong, since Pi(x) is _not_ approximated by x/ln(x); only the limit of Pi(x)*ln(x)/x is 1, which is a different statement. see also : http://mathworld.wolfram.com/PrimeNumberTheorem.html
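The distinction is easy to see numerically. A quick sieve check (a sketch, not taken from the article) shows the ratio π(x)·ln(x)/x creeping toward 1 even though the difference π(x) − x/ln(x) keeps growing:

```python
from math import log

def sieve(limit):
    """Sieve of Eratosthenes: flags[n] is 1 iff n is prime."""
    flags = bytearray([1]) * (limit + 1)
    flags[0] = flags[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i*i::i] = bytearray(len(flags[i*i::i]))
    return flags

flags = sieve(10 ** 6)
counts, pi = {}, 0
for n, f in enumerate(flags):
    pi += f
    if n in (10 ** 4, 10 ** 5, 10 ** 6):
        counts[n] = pi  # pi(n)

for x, p in counts.items():
    # ratio tends to 1 (slowly); difference p - x/log(x) keeps growing
    print(x, p, p * log(x) / x)
```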

Could someone put a proof of the Prime Number Theorem here?

Paul Erdos, the legendary genius, was the first to provide an 'elementary' proof of the prime number theorem. Should this be added?

I don't think so. Any non-elementary proof requires considerable background and machinery from complex analysis, and the proof runs several pages. The "elementary" proof is even more difficult to follow, requires lots of preliminary estimates and results, and the proof is several pages long.
^that doesn't answer the question. Over the past few weeks I have attempted to clarify a detail in the section entitled 'Elementary proof' because it seemed to dismiss Erdos's contribution. Yet my edits have been reverted by two others in the meantime, even though the corresponding information on Erdos's wiki page properly cites the collaboration between Erdos and Selberg. I tend to believe that Erdos deserves more credit than he gets, mostly because his account of the dispute seems less petty. Fhue (talk) 04:43, 5 June 2009 (UTC)

"The constant involved in the O-notation is unknown." Which constant?

I assume this talks about the inherent constant factor that comes with O-notation. The line about Erdös (not Erdos!) should probably be added, but without the "the legendary genius" part. -- Schnee 12:22, 5 Sep 2003 (UTC)

I'd like to see added: methods for calculating pi(x), other than brute force. Bubba73 05:25, 18 Jun 2005 (UTC)
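One classical non-brute-force method is Legendre's formula, π(x) = φ(x, a) + a − 1 with a = π(√x), where φ(x, a) counts the integers ≤ x not divisible by any of the first a primes. A minimal sketch (the helper names are my own):

```python
from functools import lru_cache

def small_primes(n):
    """Primes up to n by a simple sieve."""
    if n < 2:
        return []
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return [i for i in range(n + 1) if sieve[i]]

def prime_pi(x):
    """Legendre's formula: pi(x) = phi(x, a) + a - 1 with a = pi(sqrt(x))."""
    if x < 2:
        return 0
    primes = small_primes(int(x ** 0.5))
    a = len(primes)

    @lru_cache(maxsize=None)
    def phi(n, b):
        # integers in [1, n] not divisible by any of the first b primes
        if b == 0:
            return n
        return phi(n, b - 1) - phi(n // primes[b - 1], b - 1)

    return phi(x, a) + a - 1
```

Meissel and Lehmer refined this recursion considerably; the sketch above is only the starting point.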

"The Selberg-Erdős work effectively put paid to the whole concept" - is "paid" a wrong word? Bubba73 05:28, 18 Jun 2005 (UTC)

^ "put paid to" is standard British English for "brought to an end", so the wording is correct, if perhaps unfamiliar.

## Correct TeX

Do not write log\ x. That looks like this:

${\displaystyle log\ x.}$

Instead write \log x. That looks like this:

${\displaystyle \log x.\,}$

Use of backslashes in \log, \exp, \sin, \cos, \max, \min, \det, \lim, \sup, \inf and a number of other math operators has at least three effects:

• log and the like don't get italicized as if they were variables;
• proper spacing is automatic. How much space is needed depends on the context, but all of the standard conventions in this regard are built in to the TeX software;
• in some cases, e.g., \lim, \max, etc., the subscript will appear directly below the operator, rather than below and to the right, thus:
${\displaystyle \max _{x>0}f(x).\,}$
${\displaystyle \lim _{x\to 0}f(x).\,}$
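A related tip: for operators with no predefined macro (such as the li and Li discussed elsewhere on this page), \operatorname gives the same upright face and spacing as \log:

```latex
% ad-hoc upright operator with correct spacing:
\operatorname{li}(x) \sim \frac{x}{\ln x}
% or, reusable, in a LaTeX preamble:
\DeclareMathOperator{\li}{li}
```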

Michael Hardy 23:02, 21 Jun 2005 (UTC)

Thanks, I'm the one that used log\. (I started with TeX less than a week ago.) Bubba73 23:51, 21 Jun 2005 (UTC)

## logarithmic integral

There are some subtle problems for the logarithmic integral in this article: first, the asymptotic expansion is for the regular logarithmic integral function and not for the offset logarithmic integral. To be consistent with the notation in other articles in WP, and with Abramowitz & Stegun, we need to change Li to li throughout this article. Sooo... instead of being bold, I thought I'd ask here ... who put Li in there? is there a well-known number theory book using this notation? linas 21:24, 21 December 2005 (UTC)

p.s. in case there's confusion about the asymptotic expansion, one has li(x)=Ei(ln x), with Ei the exponential integral, and a detailed derivation of the asymptotic expansion for li(x) is given in the article on asymptotic expansions. However, both this article and the Riemann hypothesis are using Li(x). linas 21:31, 21 December 2005 (UTC)
Linas, I don't understand you. Why do you want to change Li to li? I think li = logarithmic integral = int_0^x dt/ln t and Li = offset logarithmic integral = int_2^x dt/ln t; the article uses Li because it is about the offset logarithmic integral. I'm writing from my parents' so I can't really check anything. Then, which asymptotic expansion are you talking about? The expansions for the regular logarithmic integral and the offset logarithmic integral are the same as they differ only by a constant term, which is smaller than the terms in the expansion. Am I making any sense? -- Jitse Niesen (talk) 21:51, 21 December 2005 (UTC)
Yes, well Li(x) ≈ li(x)-1.05... so by using big-O notation, one can drop this constant, and at this level, the asymptotic expansions are "the same". But if you work the details, the quoted asymp expansion is really for li(x) not Li(x), and if taken literally, will give you accurate estimates that are off by 1.05... however, maybe this isn't important. linas 22:59, 21 December 2005 (UTC)
"the quoted asymp expansion is really for li(x) not Li(x)" — you seem to be using some intuition here that I do not have.
No, not at all; I merely went through the actual derivation of the asymp expansion the other day (for an unrelated reason), and in the derivation, it's li not Li that comes out. (It's half-a-page of paper total, the bulk of the work is already done in the example in asymptotic expansion).
"accurate estimates that are off by 1.05..." — did you check this? Where do you truncate the expansion?
I'm merely noting that li(2)=1.05... no I did not check the asymp expansion numerically. The "rule of thumb" for truncating asymp. expansions to get the best accuracy is to cut them off at the smallest term. But you are right, after such a cut, the result could be off by 10 or 100 so arguing about 1.05 is silly, so my bad. Although, I'm now motivated to actually plug in a few numbers ...
If you want, you can move the expansion a bit up and say it is the expansion for li. The whole article could use a check; for instance, it now seems that the table uses li(x) instead of Li(x). -- Jitse Niesen (talk) 14:29, 22 December 2005 (UTC)
Maybe. If I do anything here, it'll probably be minimalist. I'm trying to work on something else. linas 16:21, 22 December 2005 (UTC)
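For anyone checking the numbers: Li(x) = ∫_2^x dt/ln t can be evaluated with routine quadrature after substituting u = ln t, and li(x) = Li(x) + li(2) with li(2) ≈ 1.04516. A sketch (not how the article's table was produced):

```python
from math import exp, log

LI2 = 1.04516378  # li(2), the constant offset between li and Li

def Li(x, n=4000):
    """Offset logarithmic integral by Simpson's rule on u = ln t."""
    a, b = log(2.0), log(x)
    h = (b - a) / n
    f = lambda u: exp(u) / u  # dt/ln t becomes (e^u/u) du
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def li(x):
    return Li(x) + LI2

print(Li(10 ** 6), li(10 ** 6))
```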

that asymptotic expansion is the indefinite integral without the integration constant. It is clear that the calculation at the lower limit 2 has an explosive divergence; the value is 0 if the lower limit is 0, but the error terms of the form int(0,x>1) dt/ln(t)^(2n) are divergent, and so using this formula for li is suspect. About the confusion between li and Li: the sum rule sum(m) mu(m)/m = 0 means that in the definition of the Riemann function R(x) [that appears just below] these two functions work equally well Pietro — Preceding unsigned comment added by 2A00:1620:C0:64:21C:61FF:FE03:A4C (talk) 10:52, 20 December 2012 (UTC)

Please someone make sense of the following brain dump and work it into the article.

I haven't studied this stuff for years now, but from what I recall, Riemann's prime counting function J(x) (see the article on prime counting function) is in some sense a much more natural candidate than ${\displaystyle \pi (x)}$ to be approximated by ${\displaystyle Li(x)}$. I can't remember exactly why; I think it has to do with the fact that J(x) is closely connected with ${\displaystyle \psi (x)}$ which in turn is the summatory function of the von Mangoldt function ${\displaystyle \Lambda (n)}$ which has very nice convolution properties.

J(x) can be defined (with caveats -- see prime counting function) by

${\displaystyle J(x)=\sum _{n\geq 1}{\frac {1}{n}}\pi (x^{1/n}).}$

Then by Möbius inversion you get

${\displaystyle \pi (x)=\sum _{n\geq 1}{\frac {\mu (n)}{n}}J(x^{1/n}).}$

(There are only finitely many terms in this sum.) So if J(x) is approximated well by Li(x) then you would expect ${\displaystyle \pi (x)}$ to be approximated well by

${\displaystyle R(x)=\sum _{n\geq 1}{\frac {\mu (n)}{n}}\operatorname {Li} (x^{1/n}),}$

whose leading term is just Li(x). There are issues of convergence to worry about here; I haven't thought through it very hard. Also I'm not sure whether you should be using Li or li. Also I'm not sure if R(x) is standard notation for this. And I don't have any references.
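The truncation at x^(1/n) < 2 does make both sums finite, and the inversion can then be checked exactly in rational arithmetic (a sketch; the helpers are hand-rolled, not standard library):

```python
from fractions import Fraction

LIMIT = 10 ** 4
flags = bytearray([1]) * (LIMIT + 1)  # sieve of Eratosthenes
flags[0] = flags[1] = 0
for i in range(2, int(LIMIT ** 0.5) + 1):
    if flags[i]:
        flags[i*i::i] = bytearray(len(flags[i*i::i]))
pi = [0] * (LIMIT + 1)  # pi[n] = number of primes <= n
c = 0
for n in range(LIMIT + 1):
    c += flags[n]
    pi[n] = c

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def iroot(x, n):
    """floor(x ** (1/n)), exactly."""
    r = int(round(x ** (1.0 / n)))
    while r ** n > x:
        r -= 1
    while (r + 1) ** n <= x:
        r += 1
    return r

def J(x):  # J(x) = sum over n of pi(x^(1/n))/n, a finite sum
    s, n = Fraction(0), 1
    while iroot(x, n) >= 2:
        s += Fraction(pi[iroot(x, n)], n)
        n += 1
    return s

def pi_from_J(x):  # Mobius inversion, also a finite sum
    s, n = Fraction(0), 1
    while iroot(x, n) >= 2:
        s += Fraction(mobius(n), n) * J(iroot(x, n))
        n += 1
    return s
```

With Fractions throughout, pi_from_J(x) recovers π(x) exactly, not just approximately.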

Anyway, if you shove this through maple or something, you get the following values of R(x) for x being successive powers of ten, truncated using the "floor" function:

4, 25, 168, 1226, 9587, 78527, 664667, 5761551, 50847455, 455050683, 4118052494, 37607910542, 346065531065, 3204941731601, 29844570495886, 279238341360977, 2623557157055977, 24739954284239494, 234057667300228940, 2220819602556027015, 21127269485932299723, 201467286689188773625, 1925320391607837268776...

I'm not an expert at numerical evaluation, so I wouldn't be surprised if these were off by one somewhere. I tried with varying number of terms and lots of digits of accuracy to check that the answers were stable.
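For what it's worth, these values can be reproduced without Maple via the Gram series, R(x) = 1 + Σ_{k≥1} (ln x)^k / (k·k!·ζ(k+1)), which is equivalent to the Möbius sum over Li above. A sketch in plain floating point (ζ here is my own Euler–Maclaurin helper, accurate for s ≥ 2):

```python
from functools import lru_cache
from math import floor, log

@lru_cache(maxsize=None)
def zeta(s, N=10000):
    # Dirichlet series plus Euler-Maclaurin tail correction
    return sum(n ** -s for n in range(1, N + 1)) + N ** (1 - s) / (s - 1) - N ** -s / 2

def riemann_R(x):
    """Gram series for Riemann's R(x)."""
    lnx = log(x)
    total, term, k = 1.0, 1.0, 0
    while True:
        k += 1
        term *= lnx / k          # term = (ln x)^k / k!
        total += term / (k * zeta(k + 1))
        if term / (k * zeta(k + 1)) < 1e-12 and k > lnx:
            return total

# floors match the first entries of the list above
print([floor(riemann_R(10 ** j)) for j in range(1, 5)])
```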

If you compare this to the table in the article, you'll see it's much more impressive than the other estimates, especially for small x. (The difference even changes sign.) However, I believe I read somewhere that it has been proved that in the long run it's asymptotically no better than just using Li(x), I'm guessing because there are indeed plenty of zeroes of ${\displaystyle \zeta }$ with real part 1/2, and the contributions from these are at least as big as the "correction" terms ${\displaystyle \mu (n)Li(x^{1/n})/n}$ for n at least 2. Dmharvey 02:01, 25 December 2005 (UTC)

I read a paper that mentioned the goodness of fit for R vs Li. The error term for the approximation of pi by Li is larger than the main correction term ${\displaystyle \operatorname {Li} ({\sqrt {x}})}$, so there can be no improvement in that asymptotic sense. I think it's been proved that
${\displaystyle \int _{2}^{n}|\operatorname {Li} (x)-\pi (x)|dx>\int _{2}^{n}|\operatorname {R} (x)-\pi (x)|dx}$
for most n, though, so at least in that sense it's better.
You know, we should probably have a page for Riemann's R function as a place for these sorts of notes, rather than cramming them into other articles. There ends up being more than one realizes to say about the function!
And for what it's worth, my calculations agree with yours on the values of Riemann's R.
CRGreathouse (t | c) 12:48, 4 October 2007 (UTC)

about the long run, you should have read something like "sometimes Li is better than R"; this is a natural consequence of that fact that the difference pi-Li (pi being the exact) changes sign infinitely many times; around these changes Li is clearly better than R pietro — Preceding unsigned comment added by 2A00:1620:C0:64:21C:61FF:FE03:A4C (talk) 12:02, 20 December 2012 (UTC)

## Elementary Stuff

According to Gauss, every number can be expressed as the product of primes. For example, 14 is (2x7) both of which are primes.

Therefore prime numbers are the building blocks for other numbers.

An easy way to understand why prime numbers become scarcer as numbers grow follows.

Take a few prime numbers, say 2, 3, 5, and 7. With just these numbers you can create the numbers 2x3=6, 2x5=10, 2x2x3=12, 2x7=14, etc.

It becomes obvious that a new prime number will occur whenever a combination of old primes is not possible leaving a gap in the set of all numbers.

If we want to know how many primes there are, we might think in reverse. How many combinations of old primes are there? As the number of these combinations increases, they fill up more and more of the number set. This explains why the primes become ever sparser as x grows larger.

Going back to the original question: how many prime numbers are there?

Once again look at a few simple primes say, 2,3 and 5.

The number 2 can be used over and over again to create new numbers.

2+2 = 4 .. 2+2+2 = 6 .. 2+2+2+2 = 8

(2x2 = 4) (2x3=6) (2x4=8)

All of these numbers can be expressed using factors and multiplication. They are all non-prime numbers because they have factors other than one and themselves.

By looking at the number 2 we can see that at least 1/2 of all numbers are non prime.

Then if we examine 3, we can see that it's responsible for 1/3 of all numbers being non-prime.

However, we can't simply add 1/2 and 1/3 to increase our total of non-prime numbers, because some of them overlap (for example 2x3=6 which is 2+2+2 or 3+3).

This sort of examination is similar to the sieve method. - BringCocaColaBack
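The overlap counting above is exactly inclusion–exclusion, and over one full period 2·3·5·7 = 210 it is exact; a quick sketch:

```python
from math import prod

P = (2, 3, 5, 7)
N = 210  # = 2*3*5*7, one full period of the divisibility pattern
survivors = [n for n in range(1, N + 1) if all(n % p for p in P)]
# inclusion-exclusion collapses to a product: N * (1-1/2)(1-1/3)(1-1/5)(1-1/7)
expected = N * prod(1 - 1 / p for p in P)
print(len(survivors), expected)
```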

I'm not sure what you are trying to accomplish here. Euclid already knew (something very close to) the fundamental theorem of arithmetic and the infinitude of primes. Gauss certainly wasn't the first one to prove that. Dragonfall 04:25, 3 March 2007 (UTC)

## graph

I noticed the graph on the Prime Number Theorem page is wrong, and I don't know who to contact so I am posting this here: the blue line should be Li(x) and the red one pi(x), and not the other way around, as for small values of x, Li(x)>pi(x). Thanks

I'm not sure, but it might have something to do with some disagreement that's been going on about the correct definition of Li(x), and whether it's really the definition of li(x), and which one should be used in the article. (They differ by some small constant.) Dmharvey 19:50, 15 April 2006 (UTC)

Yes, they differ by 1.045 but that is negligible. I think the graph is just wrongly marked...

Am I missing something? The graph is not "marked" ! There is no indication whatsoever of which color line represents which function. Is there some common knowledge that everyone but me holds which would allow them to know at a glance without a label which curve is which?
Also the "thumbnail" version is impossible to see. When I clicked on it to get the larger version I lost all information about it. Could someone edit the information on the large image to say what it is?
Nwbeeson 21:16, 30 January 2007 (UTC)
When I click on the graph, I get a large image with colors explained below. The small image seems a little useless. Maybe it should say "Click for large image and explanation" or something like that. PrimeHunter 00:18, 31 January 2007 (UTC)

## Dusart

The 1999 Dusart paper gives stronger asymptotic bounds than those given.

The article's result is correct, but the latter result is stronger for k > 13197. I'm choosing to leave this off because the extra complexity isn't needed here, but if someone feels it's worthwhile they can add it.

At the moment I'm trying to read his thesis Autour de la fonction qui compte le nombre de nombres premiers to see if there are any other results along these lines. My French isn't very good, but I'll see what I can find. Pages 30–32 cover it, so I just have a bit to wade through. CRGreathouse 11:50, 25 May 2006 (UTC)

## Table of π(x), x / ln x, and Li(x)

The first sentence in the section is, "Here is a table that shows how the three functions π(x), x / ln x and Li(x) compare:" but the table does not have any column labeled "x/ln(x)" nor "Li(x)".

Am I expected to solve the equations at the top of the columns to determine which column is "x/ln(x)" and which is "Li(x)" ?!?

OR is there a mistake in that sentence?

Nwbeeson 21:20, 30 January 2007 (UTC)

x/ln(x) and Li(x) are not shown. The interesting thing is their difference to pi(x) so only that is shown. I modified the explanation. PrimeHunter 00:18, 31 January 2007 (UTC)
Sorry for resurrecting an old discussion. While x/ln(x) is certainly significant, wouldn't "x/(ln(x)-1)" be more so, since it is significantly closer to π(x) than x/ln(x) is? Glenn L (talk) 01:16, 24 July 2013 (UTC)
x/ln(x) is by far the most famous approximation, it's the "simplest" approximation which is asymptotically correct, and it's used in the prime number theorem, the subject of this article, so replacing it would be odd. x/(ln(x)-1) is closer than x/ln(x) for the small numbers listed here but far from as close as li(x) which is also listed. Note: 1/ln(x) is better than 1/(ln(x)-1) for approximating the chance that a random x is prime. But when counting all primes below x for a relatively small value like 10^24, x/ln(x) suffers from many numbers being so much below x that the primality chance drops significantly. x/(ln(x)-1) is a way to compensate for this drop without needing the more "advanced" function li(x). PrimeHunter (talk) 02:09, 24 July 2013 (UTC)
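The comparison is easy to check numerically; using the standard value π(10^6) = 78498 (a sketch):

```python
from math import log

x, pi_x = 10 ** 6, 78498
simple = x / log(x)            # the prime number theorem approximation
adjusted = x / (log(x) - 1)    # the compensated variant discussed above
print(abs(simple - pi_x), abs(adjusted - pi_x))
```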

## Relative error language

I just restored the relative error formulation of the theorem, since that seems to me to be the most intuitively accessible. Clearly, lim f(x)/g(x) = 1 is equivalent to lim |g(x) - f(x)| / |f(x)| = 0, and the latter is the relative error. AxelBoldt 16:38, 4 April 2007 (UTC)

## First paragraph and first subsection are too different

According to the introductory paragraph:

1. The PNT states that the frequency of primes near x approximates ${\displaystyle {\frac {1}{\ln x}}}$.

According to the "statement of the theorem" subsection:

2. The PNT states that the total number of primes less than x approximates ${\displaystyle {\frac {x}{\ln x}}}$.

A solid high school math student should be able to prove that 2 is equivalent to:

2A. The PNT states that the frequency of primes between 0 and x approximates ${\displaystyle {\frac {1}{\ln x}}}$.

But proving that 2A and 1 are equivalent is certainly beyond a lot of college math students. Moreover, it requires facts about logarithms that are irrelevant to this page. (By way of comparison, the average value of ${\displaystyle \sin x}$ between 0 and x tends to zero, but the average value of ${\displaystyle \sin x}$ near x does not approach any limit. In other words, frequency near x and frequency between 0 and x are not necessarily equal, although they are equal in this case.)

So why the discrepancy? Yes, I know that some of the editors prefer to state the theorem as 1 and others prefer to state it as 2. But defining the theorem one way, and then suddenly redefining it another way, without saying why they are equivalent, would be inappropriate even in a math textbook -- not to mention on Wikipedia, which presumably has an audience beyond math graduate students.

(To be honest, I suspect that very few high school students could even prove that 2 and 2A are equivalent, because they simply are unable to write a proof, period.) — Lawrence King (talk) 07:51, 23 November 2007 (UTC)

Well, since Li(x) is a better approximation in general than x/log(x) to the number of primes up to x, I suppose #1 is the 'best' statement of the three. I would prefer to include at least 1 and (2 OR 2a), with OR in its usual inclusive sense, but with some mention that the two (three) imply each other. CRGreathouse (t | c) 07:57, 23 November 2007 (UTC)
That sounds great to me. I think it is perfectly fine to have multiple equivalent statements of a theorem, as long as the Wikipedia page states the fact that these are equivalent. We don't even need to explain why they are equivalent, as long as we state it. Without that statement, however, it's as if the Mark Twain page had a subsection describing the life of Samuel Clemens, without ever mentioning the connection between the two. — Lawrence King (talk) 22:04, 23 November 2007 (UTC)

## divergent series?

Okay, maybe I'm just being stupid, but the article gives this series:

${\displaystyle {\mbox{Li}}(x)={\frac {x}{\ln x}}\sum _{k=0}^{\infty }{\frac {k!}{(\ln x)^{k}}}={\frac {x}{\ln x}}+{\frac {x}{(\ln x)^{2}}}+{\frac {2x}{(\ln x)^{3}}}+\cdots .}$

Isn't this clearly divergent for all real, positive x? All its terms are positive, and the size of the terms increases for k>ln x.--76.93.42.50 (talk) 23:22, 1 March 2008 (UTC)

Ah, okay, the Logarithmic integral function says it's divergent, but claims that it's still useful!? Clear as mud to me!--76.93.42.50 (talk) 23:25, 1 March 2008 (UTC)

Just like Stirling's series for the gamma function, the series for the logarithmic integral diverges. But any finite truncation gives a good estimate of the value, with more terms giving a better estimate. But for a given value, the series can give only so much precision -- too many terms will give unusual values. CRGreathouse (t | c) 00:10, 2 March 2008 (UTC)
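The behaviour is easy to see numerically: at x = 1000 the terms shrink until k ≈ ln x ≈ 6.9 and then grow without bound, so the best truncation error is roughly the size of the smallest term. A sketch, using li(1000) ≈ 177.61 as the reference value:

```python
from math import log

x = 1000
LI_REF = 177.61          # li(1000), standard reference value
lnx = log(x)
term = x / lnx           # k = 0 term of (x/ln x) * sum of k!/(ln x)^k
partial, terms, errors = 0.0, [], []
for k in range(31):
    if k > 0:
        term *= k / lnx  # next term: multiply by k / ln(x)
    partial += term
    terms.append(term)
    errors.append(abs(partial - LI_REF))
# best error is under 1; by k = 30 the partial sums have blown up
print(min(errors), errors[-1])
```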

## Errata?

Comparing the table in this article vs the one in Prime-counting function, there are differences, and I do not know which is correct. For example, at 10^23, one lists 1,925,320,391,606,818,006,727 versus 1,925,320,391,606,803,968,923 in the other ...--Billymac00 (talk) 03:38, 28 May 2008 (UTC)

Good catch, I fixed it. 1,925,320,391,606,803,968,923 is correct (well, ±1 anyway). CRGreathouse (t | c) 13:17, 28 May 2008 (UTC)
Wow, that went back to the beginning of the table in 2005: [1] CRGreathouse (t | c) 13:32, 28 May 2008 (UTC)
The problem is disagreeing computations. [2] made two computations in or before 2001 and got 1925320391606818006727 (the old number in this article) in the first, and 1925320391606819893167-1886441 (1 less) in the second. They apparently never found the cause of the difference and didn't publish an "official" count. Silva's tables at [3] reported 1925320391606803968923 (14037803 less) around 6 years later. As far as I know there has been no independent verification but Silva's count is now accepted by major sites like MathWorld, Prime Pages and OEIS (some of which listed the old count earlier), so it makes sense for us to also accept it. PrimeHunter (talk) 21:57, 28 May 2008 (UTC)
Yes, I really should have explained that here. I came to the same conclusion. Actually, that might be worth mentioning in the article itself. CRGreathouse (t | c) 14:15, 29 May 2008 (UTC)

Under the "Statement of the theorem", after saying pi(x)\sim x/ln(x), it says that the behaviour of the difference is very complicated and is related to the Riemann hypothesis. But surely if we think of pi(x)-x/ln(x)=(pi(x)-li(x))+(li(x)-x/ln(x)), then the second term there is the main one and has the asymptotic series (given later in the article) starting with x/(ln(x))^2. The Riemann hypothesis is what affects the first term, pi(x)-li(x), but that term is asymptotically negligible compared to the second. Fathead99 (talk) 17:11, 23 January 2009 (UTC)

## Beautifully written article

Kudos to whoever the main writer was on this article. As an interested non-mathematician, I found it refreshingly clear and well-written. —Preceding unsigned comment added by 128.125.52.176 (talk) 22:33, 17 March 2009 (UTC)

## Some Pedantry

The second paragraph is roughly speaking indeed! The PNT, as stated, is a limit (or asymptotic) result. As such, by elementary properties of limits, it carries no information at all about any finite part of the 'sequence' that leads to it. The PNT does not tell you ANYTHING about any specified value of x. Of course, we know from other sources (like the inequalities stated) that 1/ln(x) is a plausible estimate for the prime density, but that does not come directly from the classical form of the PNT.

Incidentally, if you know what it means to pick an integer at random please let me know! If I pick 10 integers at random what is their expected mean? Try infinity. (I'm assuming that you are thinking of a uniform distribution.) It's a form of words that we often use casually (as in: probability of two randomly chosen positive integers being coprime is 6/pi^2) but it doesn't make a lot of sense unless embedded in a limiting process. Jpulham (talk) 15:40, 7 November 2009 (UTC)
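The limiting process can be made concrete: count coprime pairs in the finite box [1, N]² and compare with 6/π² (a sketch):

```python
from math import gcd, pi

N = 1000
coprime = sum(1 for a in range(1, N + 1)
                for b in range(1, N + 1) if gcd(a, b) == 1)
ratio = coprime / N ** 2
print(ratio, 6 / pi ** 2)   # the finite ratio approaches 6/pi^2 as N grows
```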

## Explicit formula.

The explicit formulae for ${\displaystyle \psi (x)}$ via the sums over zeta zeroes, that are given in this article and in Explicit formula, differ by a ${\displaystyle -\log(1-x^{2})/2}$ term , so (at least) one of them is wrong. I'm afraid I don't know yet, which one. Burivykh (talk) 13:28, 15 November 2009 (UTC)

Checked. An error is here: see MathWorld. Correcting. Burivykh (talk) 15:57, 15 November 2009 (UTC)
Perhaps it's the term corresponding to the trivial zeroes, as they are exactly in the negative even numbers; MathWorld says "sum over non-trivial zeroes". So most probably it's OK both here and there... Burivykh (talk) 16:08, 15 November 2009 (UTC)
Right. — Emil J. 11:46, 16 November 2009 (UTC)
OK, thanks! Burivykh (talk) 19:31, 17 November 2009 (UTC)

## Base integer used for prime number investigations

In the base 10 system practically all the prime numbers are of the category 6N+/-1. However, it is possible to combine those two categories of numbers by using a different numerical base system. And which base system is that? Why, the base 2 system, of course! In the base 2 system, no number ending in 0 could be a prime number, so the question becomes: which is the biggest base 2 number (ending in 1) that can't be divided by a smaller base 2 number ending in 1? Is there a simple formula or program for determining that?WFPM (talk) 22:15, 14 August 2010 (UTC)

This investigation procedure would reduce the ending number category of potential prime numbers from 4, Namely 1, 3, 7, and 9 ; (40% of the integers), to just the number 1; (50% of the integers) but the elimination procedure might be easier?WFPM (talk) 01:36, 15 August 2010 (UTC)

Seeing as how the calculators are using binomial (base 2) math in their calculations, maybe that's already the way they do it. What about that?WFPM (talk) 01:48, 15 August 2010 (UTC)

I think you may have misunderstood something. Whether a number is prime or not does not depend on the base that is used to represent it. There is no "biggest base 2 number ending in 1 that cannot be divided by a smaller base 2 number ending in 1"; if there were, it would be the largest odd number that cannot be divided by a smaller odd number and therefore it would be the largest prime number, but we know that there is no largest prime. Gandalf61 (talk) 07:44, 15 August 2010 (UTC)

Hmmmh! Interesting!! But except for 2 there cannot be any even prime numbers. And all the rest of the numbers are odd numbers, so any prime number (other than 2), has to be an odd number, and I was just trying to get them into only 1 category of ending number. And that happens in the base 2 numbering system. And now I'm stuck with the mental problem of how to get the square root of a binomial number! C'est la vie. And my Casio does it so easy...WFPM (talk) 14:08, 15 August 2010 (UTC) PS: And we don't know what the largest number is yet either. But we do know that the largest prime number will be an odd number.

The expression is "c'est la vie"; it's French ("that's life" would be a passable translation). There is no largest integer, and likewise no largest prime. It's easy to take square roots in binary -- at least as easy as taking them in decimal. CRGreathouse (t | c) 16:38, 15 August 2010 (UTC)

Yes I'm sure, but I don't know what it is yet. And say the number line list of binomial numbers was marked out on the side of intervals on an infinite length railroad track. Then the ones ending in 1 would be the odd numbers (50%). Then with 1 iteration of a deletion process of the 1/6th of them that are divisible by 3 (binomial 11) I would have the possible prime numbers down to only 1/3rd of the total, which is that proportion of the 6N+/-1 numbers. Is there a faster way to delete non-prime numbers, or am I going about it wrong?WFPM (talk) 19:12, 15 August 2010 (UTC)
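The 1/3 figure checks out: among 1…N (for N a multiple of 6), exactly N/3 integers are odd and not divisible by 3, i.e. of the form 6N±1, and every prime above 3 is among them. A sketch:

```python
N = 6000
candidates = [n for n in range(1, N + 1) if n % 2 and n % 3]
print(len(candidates), N // 3)   # exactly one third survive

def is_prime(n):
    """Trial division, just for the check."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True
```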

This isn't really the right place for this discussion -- maybe Wikipedia:Reference desk/Mathematics? Aside from that, I'm not sure where your questions are headed. Yes, sieving works just fine, regardless of which base you use. (Also, I assume you mean "binary" when you say "binomial"?) CRGreathouse (t | c) 21:25, 15 August 2010 (UTC)
Okay. But my method has the result of clearing up an increasing percentage of any of the odd numbers being a prime number up to the value of the square of the largest known prime number, which is quite a reach of values, and I thought it might be worthy of note.WFPM (talk) 22:20, 15 August 2010 (UTC)

## Bounds for primes in arithmetic progressions?

For primes in arithmetic progressions, is there a more explicit statement than just the asymptotic one? I.e, can "reasonable" bounds ${\displaystyle l_{n},u_{n},M_{n}}$ be given such that

${\displaystyle l_{n}x<\phi (n)\pi _{n,a}(x)\ln(x)<u_{n}x}$ for all ${\displaystyle x>M_{n}}$?--Hagman (talk) 22:09, 8 May 2011 (UTC)
I don't know of any explicit bounds, but you can check
A. Page. On the number of primes in an arithmetic progression. Proc. London Math. Soc. (2) 39 (1935), 116-141.
and papers citing it for (weak) but technically effective bounds. Most of the improvements (Siegel 1935 etc.) are ineffective in principle, so not useful to you.
CRGreathouse (t | c) 03:10, 9 May 2011 (UTC)

## Prime number equivalent statement

Could someone revise this statement? The prime number theorem is equivalent to the statement that the nth prime number pn is approximately equal to n ln(n), again with the relative error of this approximation approaching 0 as n approaches infinity. I expect it should be n/ln(n).--Almuhammedi (talk) 13:32, 23 February 2013 (UTC)

The statement is correct. Note that it is about the nth prime number and not about the number of primes below n. The nth prime number is clearly larger than n. For example, the 1,000,000th prime number is 15,485,863, while the number of primes below 1,000,000 is 78,498. PrimeHunter (talk) 13:56, 23 February 2013 (UTC)
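Concretely, with p_n the nth prime (p_1000000 = 15485863 as above), the relative error of n·ln(n) shrinks slowly as n grows (a sketch):

```python
from math import log

LIMIT = 16_000_000  # comfortably above the 1,000,000th prime
flags = bytearray([1]) * (LIMIT + 1)  # sieve of Eratosthenes
flags[0] = flags[1] = 0
for i in range(2, int(LIMIT ** 0.5) + 1):
    if flags[i]:
        flags[i*i::i] = bytearray(len(flags[i*i::i]))
primes = [n for n in range(LIMIT + 1) if flags[n]]

for n in (10 ** 3, 10 ** 6):
    p = primes[n - 1]              # the nth prime
    print(n, p, p / (n * log(n)))  # ratio tends to 1 (slowly)
```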

## Ending figure (something different)

Are there any theories of what ending figure is most common among primes - 1, 3, 7 or 9? Or is it proven that there are equal amounts of these ending figures, when moving towards infinity? Boeing720 (talk) 21:35, 11 September 2013 (UTC)

Each of them occurs in asymptotically 1/4 of all primes, but due to Chebyshev bias, 1 and 9 should be slightly less common than 3 and 7.—Emil J. 12:27, 12 September 2013 (UTC)
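A quick count up to 10^6 (a sketch) shows the four classes nearly even, with only the small deviations Emil mentions:

```python
LIMIT = 10 ** 6
flags = bytearray([1]) * (LIMIT + 1)  # sieve of Eratosthenes
flags[0] = flags[1] = 0
for i in range(2, int(LIMIT ** 0.5) + 1):
    if flags[i]:
        flags[i*i::i] = bytearray(len(flags[i*i::i]))

counts = {1: 0, 3: 0, 7: 0, 9: 0}
for n in range(2, LIMIT + 1):
    # every prime except 2 and 5 ends in 1, 3, 7 or 9
    if flags[n] and n % 10 in counts:
        counts[n % 10] += 1
print(counts)
```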

## |Pi(x)-Li(x)|

"The constant involved in the big O notation was estimated in 1976 by Lowell Schoenfeld:[11] assuming the Riemann hypothesis,

${\displaystyle |\pi (x)-{\rm {li}}(x)|<{\frac {{\sqrt {x}}\,\ln x}{8\pi }}}$

for all x ≥ 2657."
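The bound is very slack at small heights; with li computed by quadrature (li(2) ≈ 1.04516) and the standard values π(10^4) = 1229, π(10^6) = 78498, a sketch of the check:

```python
from math import exp, log, pi, sqrt

LI2 = 1.04516378  # li(2), the offset between li and Li

def li(x, n=4000):
    """li(x) = li(2) + integral from 2 to x of dt/ln t (Simpson, u = ln t)."""
    a, b = log(2.0), log(x)
    h = (b - a) / n
    f = lambda u: exp(u) / u
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return LI2 + s * h / 3

results = []
for x, pi_x in ((10 ** 4, 1229), (10 ** 6, 78498)):
    gap = abs(pi_x - li(x))
    bound = sqrt(x) * log(x) / (8 * pi)  # Schoenfeld's conditional bound
    results.append((gap, bound))
    print(x, gap, bound)
```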

For the better estimate we would write:

${\displaystyle |\pi (x)-{\rm {Li}}(x)|<{\sqrt {Li(x)}}}$

!? JLove, 09-20-2013.